2024
Authors
Sulun, S; Viana, P; Davies, MEP;
Publication
EXPERT SYSTEMS WITH APPLICATIONS
Abstract
We introduce a novel method for movie genre classification, capitalizing on a diverse set of readily accessible pretrained models. These models extract high-level features related to visual scenery, objects, characters, text, speech, music, and audio effects. To intelligently fuse these pretrained features, we train small classifier models with low time and memory requirements. Employing the transformer model, our approach utilizes all video and audio frames of movie trailers without performing any temporal pooling, efficiently exploiting the correspondence between all elements, as opposed to the fixed and low number of frames typically used by traditional methods. Unlike current approaches, our method fuses features originating from different tasks and modalities, with different dimensionalities, different temporal lengths, and complex dependencies. Our method outperforms state-of-the-art movie genre classification models in terms of precision, recall, and mean average precision (mAP). To foster future research, we make the pretrained features for the entire MovieNet dataset, along with our genre classification code and the trained models, publicly available.
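The fusion step described above can be illustrated with a minimal sketch. All names, shapes, and dimensionalities below are hypothetical, not taken from the paper: each modality's pretrained features are projected into a shared embedding space and concatenated along the time axis, so every frame becomes one transformer token and no temporal pooling is performed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained features for one trailer: each modality has
# its own dimensionality and its own temporal length.
features = {
    "visual": rng.normal(size=(120, 768)),  # 120 video frames, 768-d
    "audio":  rng.normal(size=(300, 128)),  # 300 audio frames, 128-d
    "speech": rng.normal(size=(40, 512)),   # 40 speech segments, 512-d
}

d_model = 256  # common embedding size assumed for the downstream transformer

# One linear projection per modality maps its features into the shared space.
projections = {name: rng.normal(scale=0.02, size=(feat.shape[1], d_model))
               for name, feat in features.items()}

# Project and concatenate along the time axis: no temporal pooling, so
# every frame of every modality becomes one token in the fused sequence.
tokens = np.concatenate(
    [feat @ projections[name] for name, feat in features.items()], axis=0)

print(tokens.shape)  # (460, 256): 120 + 300 + 40 tokens, each 256-d
```

The fused token sequence would then be fed to a small transformer classifier; the projection weights here are random placeholders standing in for learned parameters.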
2024
Authors
Tavares, P; Paiva, A; Amalfitano, D; Just, R;
Publication
PROCEEDINGS OF THE 33RD ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS, ISSTA 2024
Abstract
Mutation testing has evolved beyond academic research, is deployed in industrial and open-source settings, and is increasingly part of universities' software engineering curricula. While many mutation testing tools exist, each with different strengths and weaknesses, integrating them into educational activities and exercises remains challenging due to the tools' complexity and the need to integrate them into a development environment. Additionally, it may be desirable to use different tools so that students can explore differences, e.g., in the types or numbers of generated mutants. Asking students to install and learn multiple tools would only compound technical complexity and likely result in unwanted differences in how and what students learn. This paper presents FRAFOL, a framework for learning mutation testing. FRAFOL provides a common environment for using different mutation testing tools in an educational setting.
2024
Autores
Correia, FF; Ferreira, R; Queiroz, PGG; Nunes, H; Barra, M; Figueiredo, D;
Publication
CoRR
Abstract
2024
Authors
dos Santos, AF; Leal, JP;
Publication
13th Symposium on Languages, Applications and Technologies, SLATE 2024, July 4-5, 2024, Águeda, Portugal
Abstract
Semantic measure (SM) algorithms allow software to mimic the human ability of assessing the strength of the semantic relations between elements such as concepts, entities, words, or sentences. SM algorithms are typically evaluated by comparison against gold standard datasets built by human annotators. These datasets are composed of pairs of elements and an averaged numeric rating. Building such datasets usually requires asking human annotators to assign a numeric value to their perception of the strength of the semantic relation between two elements. Large language models (LLMs) have recently been successfully used to perform tasks which previously required human intervention, such as text summarization, essay writing, image description, image synthesis, question answering, and so on. In this paper, we present ongoing research on LLMs' capabilities for semantic relations assessment. We queried several LLMs to rate the relationship of pairs of elements from existing semantic measures evaluation datasets, and measured the correlation between the results from the LLMs and gold standard datasets. Furthermore, we performed additional experiments to evaluate which other factors can influence LLMs' performance in this task. We present and discuss the results obtained so far.
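The evaluation step described in the abstract (correlating LLM ratings against a gold standard) can be sketched as follows. The ratings below are invented for illustration and do not come from the paper's datasets; Pearson and Spearman correlation are standard choices for this kind of comparison.

```python
import numpy as np

# Hypothetical gold-standard similarity ratings (averaged human scores)
# and ratings elicited from an LLM for the same pairs of elements.
gold = np.array([9.8, 8.5, 7.4, 5.0, 3.1, 1.2])
llm  = np.array([9.5, 9.0, 6.8, 4.2, 3.5, 0.9])

# Pearson correlation measures linear agreement between the two ratings.
pearson = np.corrcoef(gold, llm)[0, 1]

# Spearman correlation is Pearson applied to the ranks of the ratings,
# measuring monotonic agreement (rating order, not absolute values).
rank = lambda x: np.argsort(np.argsort(x))
spearman = np.corrcoef(rank(gold), rank(llm))[0, 1]

print(f"pearson={pearson:.3f} spearman={spearman:.3f}")
```

Spearman is often preferred in SM evaluation because LLM rating scales need not be linearly calibrated to the human scale, only consistently ordered.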
2024
Autores
Amarelo, A; Mota, M; Amarelo, B; Ferreira, MC; Fernandes, CS;
Publication
CANCERS
Abstract
Background/Objectives: Cancer patients undergoing chemotherapy often face challenges that reduce their physical function and quality of life. Technological resources offer innovative solutions for physical rehabilitation, but the extent of their application in this context remains unclear. This scoping review aims to explore and map the various technological tools used to support physical rehabilitation in cancer patients during chemotherapy, focusing on their potential to improve outcomes and enhance patient care. Methods: A scoping review was conducted following the Joanna Briggs Institute (JBI) guidelines and the PRISMA-ScR framework. Comprehensive searches were performed in the MEDLINE, CINAHL, Scopus, SPORTDiscus, and COCHRANE databases. The included studies focused on the technological resources used in physical rehabilitation for cancer patients undergoing chemotherapy. Data extraction followed the World Health Organization's Classification of Digital Health Interventions v1.0 to categorize the technologies. Results: A total of 32 studies met the inclusion criteria. The most commonly used technologies included wearable devices (16 studies), web-based platforms and telerehabilitation systems (7 studies), mHealth applications (6 studies), virtual reality (2 studies), and exergaming (3 studies). These tools were designed to enhance physical function, manage treatment-related symptoms, and improve overall quality of life. Wearable devices were particularly effective for monitoring physical activity, while web-based platforms and mHealth applications supported remote rehabilitation and patient engagement. Conclusions: Technological resources offer significant opportunities for personalized rehabilitation interventions in cancer patients undergoing chemotherapy. However, further research is needed to evaluate the long-term effectiveness, cost-efficiency, and clinical integration of these tools to ensure broader accessibility and sustainable impact.
2024
Authors
Almeida, PS;
Publication
CoRR
Abstract