2025
Authors
Patrício, C; Teixeira, LF; Neves, JC;
Publication
COMPUTATIONAL AND STRUCTURAL BIOTECHNOLOGY JOURNAL
Abstract
The main challenges hindering the adoption of deep learning-based systems in clinical settings are the scarcity of annotated data and the lack of interpretability and trust in these systems. Concept Bottleneck Models (CBMs) offer inherent interpretability by conditioning the final disease prediction on a set of human-understandable concepts. However, this inherent interpretability comes at the cost of a greater annotation burden. Additionally, adding new concepts requires retraining the entire system. In this work, we introduce a novel two-step methodology that addresses both of these challenges. By simulating the two stages of a CBM, we use a pretrained Vision Language Model (VLM) to automatically predict clinical concepts and an off-the-shelf Large Language Model (LLM) to generate disease diagnoses grounded in the predicted concepts. Furthermore, our approach supports test-time human intervention, enabling corrections to predicted concepts, which improves final diagnoses and enhances transparency in decision-making. We validate our approach on three skin lesion datasets, demonstrating that it outperforms traditional CBMs and state-of-the-art explainable methods, all without requiring any training and using only a few annotated examples. The code is available at https://github.com/CristianoPatricio/2step-concept-based-skin-diagnosis.
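As a rough illustration of the two-step pipeline described in this abstract (not the authors' implementation, which is available in the linked repository), the sketch below uses a generic CLIP model as a stand-in for the pretrained VLM and assumes a hypothetical text-completion callable `llm` for the off-the-shelf LLM; the prompt wording, the top-k concept selection, and the `correct` intervention hook are illustrative assumptions:

```python
# Minimal sketch of a training-free two-step concept-based diagnosis:
# step 1 scores clinical concepts with a VLM, step 2 asks an LLM for a
# diagnosis grounded only on the (possibly human-corrected) concepts.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def predict_concepts(image, concepts, top_k=3):
    # Step 1: rank concept prompts by image-text similarity, keep the top-k.
    prompts = [f"a skin lesion presenting {c}" for c in concepts]
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        sims = model(**inputs).logits_per_image[0]  # one score per concept
    keep = sims.topk(min(top_k, len(concepts))).indices.tolist()
    return [concepts[i] for i in keep]

def diagnose(image, concepts, diseases, llm, correct=None):
    predicted = predict_concepts(image, concepts)
    if correct is not None:
        # Test-time human intervention: an expert edits the concept list.
        predicted = correct(predicted)
    # Step 2: ground the LLM's answer on the predicted concepts.
    prompt = (f"Dermoscopic concepts observed: {', '.join(predicted)}. "
              f"Candidate diagnoses: {', '.join(diseases)}. "
              "Answer with the single most likely diagnosis.")
    return llm(prompt)

# Usage (hypothetical file and label lists):
# dx = diagnose(Image.open("lesion.jpg"), CONCEPTS, DISEASES, llm=my_llm)
```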
2025
Authors
Mamede, S; Santos, A;
Publication
AI and Learning Analytics in Distance Learning
Abstract
In the ever-changing landscape of distance learning, AI and learning analytics are transforming engagement and efficiency in education. AI systems analyze behavior and performance data to provide real-time feedback for improved outcomes. Learning analytics further help educators identify at-risk students while fostering better teaching strategies. By integrating AI with learning analytics, distance education becomes more inclusive, ensuring learners receive the support necessary to thrive in an increasingly digital and knowledge-driven world. AI and Learning Analytics in Distance Learning explores the development of distance learning and examines the challenges of adopting these systems and integrating them into distance education. The book covers topics such as AI, distance learning technology, and management systems, and is an excellent resource for academicians, educators, researchers, computer engineers, and data scientists.
2025
Authors
Cruz, RPM; Cristino, R; Cardoso, JS;
Publication
IEEE ACCESS
Abstract
Semantic segmentation consists of predicting a semantic label for each image pixel. While existing deep learning approaches achieve high accuracy, they often overlook the ordinal relationships between classes, which can provide critical domain knowledge (e.g., the pupil lies within the iris, and lane markings are part of the road). This paper introduces novel methods for spatial ordinal segmentation that explicitly incorporate these inter-class dependencies. By treating each pixel as part of a structured image space rather than as an independent observation, we propose two loss regularization terms and a new metric that enforce ordinal consistency between neighboring pixels by penalizing predictions of non-ordinally-adjacent classes. Experiments on five biomedical datasets and multiple configurations of autonomous driving datasets demonstrate the efficacy of the proposed methods. Our approach achieves improvements in ordinal metrics and enhances generalization, with up to a 15.7% relative increase in the Dice coefficient. Importantly, these benefits come without additional inference-time cost. This work highlights the significance of spatial ordinal relationships in semantic segmentation and provides a foundation for further exploration of structured image representations.
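As a rough illustration of the idea (one plausible form of such a regularizer, not the paper's exact terms), the PyTorch sketch below penalizes neighboring pixels whose expected ordinal ranks differ by more than one step; it assumes logits of shape (B, K, H, W) for K ordered classes:

```python
# A plausible neighborhood ordinal-consistency regularizer: adjacent
# pixels may differ by at most one ordinal rank without penalty.
import torch
import torch.nn.functional as F

def ordinal_neighbor_penalty(logits: torch.Tensor) -> torch.Tensor:
    # logits: (B, K, H, W) for K ordinal classes 0..K-1.
    probs = F.softmax(logits, dim=1)
    ranks = torch.arange(logits.size(1), device=logits.device, dtype=probs.dtype)
    # Expected ordinal rank per pixel: sum_k k * p_k -> (B, H, W).
    expected = (probs * ranks.view(1, -1, 1, 1)).sum(dim=1)
    # Rank jumps between horizontal and vertical neighbors.
    dh = (expected[:, :, 1:] - expected[:, :, :-1]).abs()
    dv = (expected[:, 1:, :] - expected[:, :-1, :]).abs()
    # Only jumps larger than one ordinal step (non-adjacent classes) are penalized.
    return (F.relu(dh - 1.0) ** 2).mean() + (F.relu(dv - 1.0) ** 2).mean()

# Usage: total_loss = cross_entropy + lambda_ord * ordinal_neighbor_penalty(logits)
```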
2025
Authors
Vincenzi, AMR; Kuroishi, PH; Bispo, J; da Veiga, ARC; da Mata, DRC; Azevedo, FB; Paiva, ACR;
Publication
JOURNAL OF SYSTEMS AND SOFTWARE
Abstract
Mutation testing may be used to guide test case generation and to assess the quality of test suites. Despite its frequent use elsewhere, mutation testing is not commonly applied in the mobile world. One critical challenge in mutation testing is its computational cost: generating mutants, running test cases over each mutant, and analyzing the results may require significant time and resources. This research aims to reduce the cost of Android mutation testing. It implements mutation operators (traditional and Android-specific) according to mutant schemata (encoding multiple mutants in a single code file). It also describes an Android mutation testing framework developed to execute test cases and determine mutation scores. Additional mutation operators can be implemented in JavaScript and easily integrated into the framework. The overall approach is validated through case studies showing that mutant schemata have advantages over the traditional mutation strategy (one file per mutant). The results show that mutant schemata outperform traditional mutation in all evaluated aspects at no additional cost: mutant generation takes 8.50% less time, requires 99.78% less disk space, and test runs are, on average, 6.45% faster than with traditional mutation. Moreover, considering sustainability metrics, mutant schemata have an 8.18% smaller carbon footprint than the traditional strategy.
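To illustrate the mutant-schemata idea outside the Android toolchain the paper describes (a hypothetical Python sketch; the paper's operators target Android code), all mutants live in one source file and an environment variable selects the active one at run time, so the program is built once rather than once per mutant:

```python
# Mutant schemata: several mutants encoded in a single file, selected
# at run time, avoiding one build and one copy of the code per mutant.
import os

# The test driver sets ACTIVE_MUTANT before each run; with no value set,
# the original program executes.
ACTIVE_MUTANT = os.environ.get("ACTIVE_MUTANT", "none")

def apply_discount(price: float, discount: float) -> float:
    """Original behavior: item is free when discount >= 1.0."""
    if ACTIVE_MUTANT == "ROR_1":        # relational operator: >= becomes >
        free = discount > 1.0
    else:
        free = discount >= 1.0          # original condition
    if free:
        return 0.0
    if ACTIVE_MUTANT == "AOR_1":        # arithmetic operator: - becomes +
        return price + price * discount
    return price - price * discount     # original expression
```

A runner then iterates over the mutant identifiers (e.g., `ACTIVE_MUTANT=AOR_1 pytest`, `ACTIVE_MUTANT=ROR_1 pytest`) and counts killed mutants; the single shared build is what yields the time and disk-space savings reported above.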
2025
Authors
Baccega, D; Aguilar, J; Baquero, C; Fernández Anta, A; Ramirez, JM;
Publication
Abstract
2025
Authors
Tame, ID; Tolosana, R; Melzi, P; Rodríguez, RV; Kim, M; Rathgeb, C; Liu, X; Gomez, LF; Morales, A; Fierrez, J; Garcia, JO; Zhong, Z; Huang, Y; Mi, Y; Ding, S; Zhou, S; He, S; Fu, L; Cong, H; Zhang, R; Xiao, Z; Smirnov, E; Pimenov, A; Grigorev, A; Timoshenko, D; Asfaw, KM; Low, CY; Liu, H; Wang, C; Zuo, Q; He, Z; Shahreza, HO; George, A; Unnervik, A; Rahimi, P; Marcel, S; Neto, PC; Huber, M; Kolf, JN; Damer, N; Boutros, F; Cardoso, JS; Sequeira, AF; Atzori, A; Fenu, G; Marras, M; Struc, V; Yu, J; Li, Z; Li, J; Zhao, W; Lei, Z; Zhu, X; Zhang, X; Biesseck, B; Vidal, P; Coelho, L; Granada, R; Menotti, D;
Publication
INFORMATION FUSION
Abstract
Synthetic data is gaining increasing popularity for face recognition technologies, mainly due to privacy concerns and the challenges of obtaining real data covering diverse scenarios, quality levels, and demographic groups, among others. It also offers some advantages over real data, such as the large amount of data that can be generated or the ability to customize it to specific problem-solving needs. To effectively use such data, face recognition models should also be specifically designed to exploit synthetic data to its fullest potential. To promote the proposal of novel Generative AI methods and synthetic data, and to investigate the application of synthetic data to better train face recognition systems, we introduce the 2nd FRCSyn-onGoing challenge, based on the 2nd Face Recognition Challenge in the Era of Synthetic Data (FRCSyn), originally launched at CVPR 2024. This ongoing challenge provides researchers with an accessible platform to benchmark (i) novel Generative AI methods and synthetic data, and (ii) novel face recognition systems specifically designed to take advantage of synthetic data. We focus on exploring the use of synthetic data both individually and in combination with real data to solve current challenges in face recognition, such as demographic bias, domain adaptation, and performance constraints in demanding situations, such as age disparities between training and testing, changes in pose, or occlusions. Notable findings are obtained in this second edition, including a direct comparison with the first, in which synthetic databases were restricted to DCFace and GANDiffFace.