Publications by Luís Filipe Teixeira

2023

Towards Concept-based Interpretability of Skin Lesion Diagnosis using Vision-Language Models

Authors
Patrício, C; Teixeira, LF; Neves, JC;

Publication
CoRR

Abstract

2024

Anatomical Concept-based Pseudo-labels for Increased Generalizability in Breast Cancer Multi-center Data

Authors
Miranda, I; Agrotis, G; Tan, RB; Teixeira, LF; Silva, W;

Publication
46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2024, Orlando, FL, USA, July 15-19, 2024

Abstract
Breast cancer, the most prevalent cancer among women, poses a significant healthcare challenge, demanding effective early detection for optimal treatment outcomes. Mammography, the gold standard for breast cancer detection, employs low-dose X-rays to reveal tissue details, particularly cancerous masses and calcium deposits. This work evaluates the impact of incorporating anatomical knowledge to improve the performance and robustness of a breast cancer classification model. To achieve this, a methodology was devised to generate anatomical pseudo-labels, simulating plausible anatomical variations in cancer masses. These variations, encompassing changes in mass size and intensity, closely reflect concepts from the BI-RADS scale. In addition to the anatomy-based augmentation, we propose a novel loss term that promotes the learning of cancer grading by the model. Experiments were conducted on publicly available datasets simulating both in-distribution and out-of-distribution scenarios to thoroughly assess the model's performance under various conditions.
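
The sketch below illustrates, under stated assumptions, the kind of anatomy-inspired augmentation the abstract describes: a mammogram patch and a binary mass mask (assumed to be NumPy arrays, with the image scaled to [0, 1]) are perturbed by growing or shrinking the mass region and shifting its intensity, and the applied change is kept as a pseudo-label. All names and parameters are illustrative, not the paper's implementation.

# A minimal sketch of anatomy-inspired augmentation; function, parameter
# names, and default values are illustrative assumptions.
import numpy as np
from scipy import ndimage

def simulate_mass_variation(image, mass_mask, grow_px=3, intensity_shift=0.1):
    """Return an augmented image and a coarse pseudo-label for the variation.

    image: float array in [0, 1]; mass_mask: boolean array of the same shape.
    """
    # Enlarge (or shrink, with negative grow_px) the mass region to mimic a
    # change in mass size, loosely reflecting BI-RADS-style concepts.
    if grow_px > 0:
        new_mask = ndimage.binary_dilation(mass_mask, iterations=grow_px)
    elif grow_px < 0:
        new_mask = ndimage.binary_erosion(mass_mask, iterations=-grow_px)
    else:
        new_mask = mass_mask.astype(bool)

    # Shift the intensity inside the (possibly resized) mass region to mimic
    # denser or fainter tissue, clipping to the valid range.
    augmented = image.copy()
    augmented[new_mask] = np.clip(augmented[new_mask] + intensity_shift, 0.0, 1.0)

    # The pseudo-label records the simulated anatomical change.
    pseudo_label = {"size_change_px": grow_px, "intensity_shift": intensity_shift}
    return augmented, pseudo_label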

2025

CBVLM: Training-free explainable concept-based Large Vision Language Models for medical image classification

Authors
Patrício, C; Torto, IR; Cardoso, JS; Teixeira, LF; Neves, J;

Publication
Computers in Biology and Medicine

Abstract
The main challenges limiting the adoption of deep learning-based solutions in medical workflows are the availability of annotated data and the lack of interpretability of such systems. Concept Bottleneck Models (CBMs) tackle the latter by constraining the model output on a set of predefined and human-interpretable concepts. However, the increased interpretability achieved through these concept-based explanations implies a higher annotation burden. Moreover, if a new concept needs to be added, the whole system needs to be retrained. Inspired by the remarkable performance shown by Large Vision-Language Models (LVLMs) in few-shot settings, we propose a simple, yet effective, methodology, CBVLM, which tackles both of the aforementioned challenges. First, for each concept, we prompt the LVLM to answer if the concept is present in the input image. Then, we ask the LVLM to classify the image based on the previous concept predictions. Moreover, in both stages, we incorporate a retrieval module responsible for selecting the best examples for in-context learning. By grounding the final diagnosis on the predicted concepts, we ensure explainability, and by leveraging the few-shot capabilities of LVLMs, we drastically lower the annotation cost. We validate our approach with extensive experiments across four medical datasets and twelve LVLMs (both generic and medical) and show that CBVLM consistently outperforms CBMs and task-specific supervised methods without requiring any training and using just a few annotated examples. More information on our project page: https://cristianopatricio.github.io/CBVLM/.
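
Below is a minimal sketch of the two-stage, training-free prompting scheme the abstract describes, assuming lvlm is any callable that takes an image and a text prompt and returns text, and retrieve_examples stands in for the retrieval module that selects few-shot demonstrations; names and prompts are illustrative, not the released CBVLM code.

def predict_concepts(lvlm, image, concepts, retrieve_examples):
    """Stage 1: ask the LVLM whether each clinical concept is present."""
    answers = {}
    for concept in concepts:
        demos = retrieve_examples(image, concept)  # in-context few-shot examples
        prompt = (f"{demos}\nIs the concept '{concept}' present in this image? "
                  f"Answer yes or no.")
        answers[concept] = lvlm(image, prompt).strip().lower().startswith("yes")
    return answers

def classify_from_concepts(lvlm, image, concept_answers, labels, retrieve_examples):
    """Stage 2: ground the final diagnosis on the predicted concepts."""
    demos = retrieve_examples(image, "diagnosis")
    findings = ", ".join(c for c, present in concept_answers.items() if present) or "none"
    prompt = (f"{demos}\nObserved concepts: {findings}.\n"
              f"Classify the image as one of: {', '.join(labels)}.")
    return lvlm(image, prompt).strip()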

2025

A two-step concept-based approach for enhanced interpretability and trust in skin lesion diagnosis

Authors
Patrício, C; Teixeira, LF; Neves, JC;

Publication
Computational and Structural Biotechnology Journal

Abstract
The main challenges hindering the adoption of deep learning-based systems in clinical settings are the scarcity of annotated data and the lack of interpretability and trust in these systems. Concept Bottleneck Models (CBMs) offer inherent interpretability by constraining the final disease prediction on a set of human-understandable concepts. However, this inherent interpretability comes at the cost of greater annotation burden. Additionally, adding new concepts requires retraining the entire system. In this work, we introduce a novel two-step methodology that addresses both of these challenges. By simulating the two stages of a CBM, we utilize a pretrained Vision Language Model (VLM) to automatically predict clinical concepts, and an off-the-shelf Large Language Model (LLM) to generate disease diagnoses grounded on the predicted concepts. Furthermore, our approach supports test-time human intervention, enabling corrections to predicted concepts, which improves final diagnoses and enhances transparency in decision-making. We validate our approach on three skin lesion datasets, demonstrating that it outperforms traditional CBMs and state-of-the-art explainable methods, all without requiring any training and utilizing only a few annotated examples. The code is available at https://github.com/CristianoPatricio/2step-concept-based-skin-diagnosis.
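
The following sketch mirrors the two-step pipeline with optional test-time intervention, assuming vlm (image + text to text) and llm (text to text) are available callables; the prompts and the corrections argument are illustrative assumptions, not the published implementation.

def diagnose(vlm, llm, image, concepts, labels, corrections=None):
    # Step 1: a pretrained VLM predicts which clinical concepts are present.
    predicted = {
        c: vlm(image, f"Does this skin lesion show '{c}'? Answer yes or no.")
              .strip().lower().startswith("yes")
        for c in concepts
    }

    # Test-time human intervention: a clinician may override any predicted concept.
    if corrections:
        predicted.update(corrections)

    # Step 2: an off-the-shelf LLM produces a diagnosis grounded on the concepts.
    findings = ", ".join(c for c, p in predicted.items() if p) or "no salient concepts"
    answer = llm(
        f"A skin lesion shows: {findings}. "
        f"Choose the most likely diagnosis from: {', '.join(labels)}."
    )
    return predicted, answer.strip()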

2024

Latent diffusion models for Privacy-preserving Medical Case-based Explanations

Authors
Campos, F; Petrychenko, L; Teixeira, LF; Silva, W;

Publication
Proceedings of the First Workshop on Explainable Artificial Intelligence for the Medical Domain (EXPLIMED 2024) co-located with 27th European Conference on Artificial Intelligence (ECAI 2024), Santiago de Compostela, Spain, October 20, 2024.

Abstract
Deep-learning techniques can improve the efficiency of medical diagnosis while challenging human experts' accuracy. However, the rationale behind these classifiers' decisions is largely opaque, which is dangerous in sensitive applications such as healthcare. Case-based explanations explain the decision process behind these mechanisms by exemplifying similar cases using previous studies from other patients. Yet, these may contain personally identifiable information, which makes them impossible to share without violating patients' privacy rights. Previous works have used GANs to generate anonymous case-based explanations, which had limited visual quality. We solve this issue by employing a latent diffusion model in a three-step procedure: generating a catalogue of synthetic images, removing the images that closely resemble existing patients, and using this anonymous catalogue during an explanation retrieval process. We evaluate the proposed method on the MIMIC-CXR-JPG dataset and achieve explanations that simultaneously have high visual quality, are anonymous, and retain their explanatory value.
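
A compact sketch of the three-step procedure, under the assumption that generate samples an image from the latent diffusion model and embed maps an image to a feature vector; the similarity threshold and all names are illustrative rather than the authors' code.

import numpy as np

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def build_anonymous_catalogue(generate, embed, real_images, n_synthetic=1000, max_sim=0.95):
    """Steps 1-2: generate synthetic images and reject near-duplicates of real patients."""
    real_embs = [embed(img) for img in real_images]
    catalogue = []
    for _ in range(n_synthetic):
        synth = generate()  # sample from the latent diffusion model
        e = embed(synth)
        # Identity-leak filter: keep only images far enough from every patient.
        if all(cosine(e, r) < max_sim for r in real_embs):
            catalogue.append((synth, e))
    return catalogue

def retrieve_explanations(embed, catalogue, query_image, k=3):
    """Step 3: return the k most similar synthetic cases as case-based explanations."""
    q = embed(query_image)
    ranked = sorted(catalogue, key=lambda item: cosine(q, item[1]), reverse=True)
    return [img for img, _ in ranked[:k]]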

2024

Finding Patterns in Ambiguity: Interpretable Stress Testing in the Decision Boundary

Authors
Gomes, I; Teixeira, LF; van Rijn, JN; Soares, C; Restivo, A; Cunha, L; Santos, M;

Publication
CoRR

Abstract
