2025
Authors
Montenegro, H; Cardoso, MJ; Cardoso, JS;
Publication
CoRR
Abstract
2025
Authors
Vieira, AB; Valente, M; Montezuma, D; Albuquerque, T; Ribeiro, L; Oliveira, D; Monteiro, JC; Gonçalves, S; Pinto, IM; Cardoso, JS; Oliveira, AL;
Publication
CoRR
Abstract
2024
Authors
Rio-Torto, I; Cardoso, JS; Teixeira, LF;
Publication
MEDICAL IMAGING WITH DEEP LEARNING
Abstract
The growing interest in and importance of explaining neural networks' predictions, especially in the medical community, together with the known unreliability of saliency maps, the most common explainability method, have sparked research into other types of explanations. Natural Language Explanations (NLEs) emerge as an alternative, with the advantage of being inherently understandable by humans and the standard way that radiologists explain their diagnoses. We extend previous work on NLE generation for multi-label chest X-ray diagnosis by replacing the traditional decoder-only NLE generator with an encoder-decoder architecture. This constitutes a first step towards Reinforcement Learning-free adversarial generation of NLEs when no (or few) ground-truth NLEs are available for training, since the generation is done in the continuous encoder latent space, instead of in the discrete decoder output space. However, in the current scenario, large numbers of annotated examples are still required, which are especially costly to obtain in the medical domain, given that they need to be provided by clinicians. Thus, we explore how the recent developments in Parameter-Efficient Fine-Tuning (PEFT) can be leveraged for this use case. We compare different PEFT methods and find that integrating the visual information into the NLE generator layers instead of only at the input achieves the best results, even outperforming the fully fine-tuned encoder-decoder-based model, while training only 12% of the model parameters. Additionally, we empirically demonstrate the viability of supervising the NLE generation process on the encoder latent space, thus laying the foundation for RL-free adversarial training in low ground-truth NLE availability regimes. The code is publicly available at https://github.com/icrto/peft-nles.
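A minimal illustrative sketch of the parameter-efficient fine-tuning idea described above, using LoRA adapters on a generic encoder-decoder language model so that only a small fraction of parameters is trained. This is not the authors' implementation (their code is linked above); the base model, adapter rank and target modules below are assumptions.

from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, get_peft_model

# Placeholder base model standing in for the encoder-decoder NLE generator (assumption).
base = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

lora_cfg = LoraConfig(
    task_type="SEQ_2_SEQ_LM",
    r=8,                         # low-rank adapter dimension (assumed)
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q", "v"],   # attention projections to adapt (assumed, T5 naming)
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()   # reports the small fraction of trainable parameters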
2025
Authors
Patrício, C; Torto, IR; Cardoso, JS; Teixeira, LF; Neves, J;
Publication
Comput. Biol. Medicine
Abstract
The main challenges limiting the adoption of deep learning-based solutions in medical workflows are the availability of annotated data and the lack of interpretability of such systems. Concept Bottleneck Models (CBMs) tackle the latter by constraining the model output on a set of predefined and human-interpretable concepts. However, the increased interpretability achieved through these concept-based explanations implies a higher annotation burden. Moreover, if a new concept needs to be added, the whole system needs to be retrained. Inspired by the remarkable performance shown by Large Vision-Language Models (LVLMs) in few-shot settings, we propose a simple, yet effective, methodology, CBVLM, which tackles both of the aforementioned challenges. First, for each concept, we prompt the LVLM to answer if the concept is present in the input image. Then, we ask the LVLM to classify the image based on the previous concept predictions. Moreover, in both stages, we incorporate a retrieval module responsible for selecting the best examples for in-context learning. By grounding the final diagnosis on the predicted concepts, we ensure explainability, and by leveraging the few-shot capabilities of LVLMs, we drastically lower the annotation cost. We validate our approach with extensive experiments across four medical datasets and twelve LVLMs (both generic and medical) and show that CBVLM consistently outperforms CBMs and task-specific supervised methods without requiring any training and using just a few annotated examples. More information on our project page: https://cristianopatricio.github.io/CBVLM/.
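A hedged sketch of the two-stage prompting strategy described above. The callables query_lvlm and retrieve_examples are hypothetical placeholders for an LVLM API call and the in-context example retrieval module; this is not the authors' code (see the project page linked above).

def cbvlm_predict(image, concepts, query_lvlm, retrieve_examples, k=3):
    # Stage 1: ask the LVLM, concept by concept, whether the concept is present,
    # using k retrieved examples for in-context learning.
    concept_preds = {}
    for concept in concepts:
        demos = retrieve_examples(image, concept, k)
        prompt = f"Is the concept '{concept}' present in this image? Answer yes or no."
        concept_preds[concept] = query_lvlm(image, prompt, demos)

    # Stage 2: ask for the final classification, grounded on the predicted concepts.
    findings = ", ".join(f"{c}: {a}" for c, a in concept_preds.items())
    prompt = f"Given these findings ({findings}), classify the image."
    diagnosis = query_lvlm(image, prompt, retrieve_examples(image, "diagnosis", k))
    return concept_preds, diagnosis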
2025
Authors
Zolfagharnasab, MH; Freitas, N; Gonçalves, T; Bonci, E; Mavioso, C; Cardoso, MJ; Oliveira, HP; Cardoso, JS;
Publication
ARTIFICIAL INTELLIGENCE AND IMAGING FOR DIAGNOSTIC AND TREATMENT CHALLENGES IN BREAST CARE, DEEP-BREATH 2024
Abstract
Breast cancer treatments often affect patients' body image, making aesthetic outcome predictions vital. This study introduces a Deep Learning (DL) multimodal retrieval pipeline using a dataset of 2,193 instances combining clinical attributes and RGB images of patients' upper torsos. We evaluate four retrieval techniques: Weighted Euclidean Distance (WED) with various configurations and a shallow Artificial Neural Network (ANN) for tabular data, pre-trained and fine-tuned Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) for image data, and a multimodal approach combining both data types. The dataset, categorised into Excellent/Good and Fair/Poor outcomes, is organised into over 20K triplets for training and testing. Results show that fine-tuned multimodal ViTs notably enhance performance, achieving up to 73.85% accuracy and 80.62% Adjusted Discounted Cumulative Gain (ADCG). This framework not only aids in managing patient expectations by retrieving the most relevant post-surgical images but also promises broad applications in medical image analysis and retrieval. The main contributions of this paper are the development of a multimodal retrieval system for breast cancer patients based on post-surgery aesthetic outcome and the evaluation of different models on a new dataset annotated by clinicians for image retrieval.
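A hedged sketch of how a multimodal (image + tabular) embedding could be trained with a triplet objective for retrieval, in the spirit of the approach described above; the fusion strategy, layer sizes and margin are assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    # image_backbone: any module mapping an image batch to pooled features of size
    # image_feat_dim (e.g. a pre-trained ViT); tabular_dim: number of clinical attributes.
    def __init__(self, image_backbone, image_feat_dim, tabular_dim, embed_dim=128):
        super().__init__()
        self.image_backbone = image_backbone
        self.tabular_mlp = nn.Sequential(nn.Linear(tabular_dim, 64), nn.ReLU())
        self.head = nn.Linear(image_feat_dim + 64, embed_dim)

    def forward(self, image, tabular):
        fused = torch.cat([self.image_backbone(image), self.tabular_mlp(tabular)], dim=-1)
        return nn.functional.normalize(self.head(fused), dim=-1)   # unit-norm retrieval embedding

# Triplet objective: pull cases with the same aesthetic-outcome class together and
# push different-class cases apart (margin value is an assumption).
triplet_loss = nn.TripletMarginLoss(margin=0.2)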
2025
Authors
Freitas, N; Veloso, C; Mavioso, C; Cardoso, MJ; Oliveira, HP; Cardoso, JS;
Publication
ARTIFICIAL INTELLIGENCE AND IMAGING FOR DIAGNOSTIC AND TREATMENT CHALLENGES IN BREAST CARE, DEEP-BREATH 2024
Abstract
Breast cancer is the most common type of cancer in women worldwide. Because of high survival rates, there has been increased interest in patients' Quality of Life after treatment. Aesthetic results play an important role in this aspect, as these treatments can leave a mark on a patient's self-image. Despite that, there is no standard way of assessing aesthetic outcomes. Commonly used software tools such as BCCT.core or BAT require the manual annotation of keypoints, which makes them time-consuming for clinical use and can lead to variability in results depending on the user. Recently, there have been attempts to leverage both traditional and Deep Learning algorithms to detect keypoints automatically. In this paper, we compare several methods for the detection of breast endpoints across two datasets. Furthermore, we present an extended evaluation of using these models as input for full contour prediction and aesthetic evaluation using the BCCT.core software. Overall, the YOLOv9 model, fine-tuned for this task, presents the best results considering both accuracy and usability, making this architecture the best choice for this application. The main contribution of this paper is the development of a pipeline for full breast contour prediction, which reduces clinician workload and user variability in automatic aesthetic assessment.
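A hedged outline of the pipeline described above (keypoint detection, contour prediction, aesthetic assessment). All three callables are hypothetical placeholders rather than functions from the paper or from BCCT.core.

def assess_aesthetic_outcome(image_path, detect_endpoints, predict_contour, score_with_bcct_core):
    # detect_endpoints: endpoint detector (e.g. a fine-tuned keypoint model);
    # predict_contour: full-contour predictor conditioned on the detected endpoints;
    # score_with_bcct_core: exporter/scorer that hands the result to BCCT.core.
    endpoints = detect_endpoints(image_path)
    contour = predict_contour(image_path, endpoints)
    return score_with_bcct_core(image_path, contour)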