
About

Cristiano Patrício received a B.Sc. in Computer Science and Engineering (17/20) from the Polytechnic of Guarda in 2019 and an M.Sc. in Computer Science and Engineering (18/20) from the University of Beira Interior in 2021. He received a Merit Scholarship in the 2018/2019 academic year. Cristiano is pursuing his Ph.D. in Computer Science and Engineering at the University of Beira Interior under a Ph.D. research grant from the Portuguese national funding agency for science (FCT). He is currently a Research Assistant at INESC TEC and was a visiting assistant at the Polytechnic of Guarda in the 2021/2022 academic year. Previously, Cristiano participated in developing solutions for the Altice Portugal Foundation (MagicContact Web) projects and for the “Perception for a Service Robot” project of NOVA-LINCS. His work focuses on developing inherently interpretable deep learning models for pathology diagnosis in medical imaging. His research interests include Explainable AI, Deep Learning, and Medical Image Analysis. He has authored six research papers in international conferences and journals.


Details

  • Name: Cristiano Pires Patrício
  • Role: Research Assistant
  • Since: 7 February 2022
Publications

2026

Unsupervised contrastive analysis for anomaly detection in brain MRIs via conditional diffusion models

Authors
Patrício, C; Barbano, CA; Fiandrotti, A; Renzulli, R; Grangetto, M; Teixeira, LF; Neves, JC;

Publication
PATTERN RECOGNITION LETTERS

Abstract
Contrastive Analysis (CA) detects anomalies by contrasting patterns unique to a target group (e.g., unhealthy subjects) from those in a background group (e.g., healthy subjects). In the context of brain MRIs, existing CA approaches rely on supervised contrastive learning or variational autoencoders (VAEs) using both healthy and unhealthy data, but such reliance on target samples is challenging in clinical settings. Unsupervised Anomaly Detection (UAD) learns a reference representation of healthy anatomy, eliminating the need for target samples. Deviations from this reference distribution can indicate potential anomalies. In this context, diffusion models have been increasingly adopted in UAD due to their superior performance in image generation compared to VAEs. Nonetheless, precisely reconstructing the anatomy of the brain remains a challenge. In this work, we bridge CA and UAD by reformulating contrastive analysis principles for the unsupervised setting. We propose an unsupervised framework to improve the reconstruction quality by training a self-supervised contrastive encoder on healthy images to extract meaningful anatomical features. These features are used to condition a diffusion model to reconstruct the healthy appearance of a given image, enabling interpretable anomaly localization via pixel-wise comparison. We validate our approach through a proof-of-concept on a facial image dataset and further demonstrate its effectiveness on four brain MRI datasets, outperforming baseline methods in anomaly localization on the NOVA benchmark.
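
For intuition only, the sketch below shows how the inference step described in this abstract could look in PyTorch: a self-supervised contrastive encoder trained on healthy images conditions a diffusion model, and anomalies are localized by pixel-wise comparison. This is not the paper's code; `encoder`, `diffusion`, and the `sample(condition=...)` interface are hypothetical stand-ins.

```python
import torch

def anomaly_map(image: torch.Tensor,
                encoder: torch.nn.Module,
                diffusion: torch.nn.Module) -> torch.Tensor:
    """Localize anomalies as pixel-wise deviation from a healthy reconstruction."""
    with torch.no_grad():
        # 1. Extract anatomical features with the self-supervised contrastive
        #    encoder (trained on healthy images only). Hypothetical interface.
        features = encoder(image)
        # 2. Condition the diffusion model on those features to reconstruct
        #    the healthy appearance of the input. `sample` is an assumed API.
        healthy = diffusion.sample(condition=features)
        # 3. Pixel-wise comparison: large residuals flag candidate anomalies.
        return (image - healthy).abs().mean(dim=1, keepdim=True)
```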

2025

A two-step concept-based approach for enhanced interpretability and trust in skin lesion diagnosis

Authors
Patrício, C; Teixeira, LF; Neves, JC;

Publication
COMPUTATIONAL AND STRUCTURAL BIOTECHNOLOGY JOURNAL

Abstract
The main challenges hindering the adoption of deep learning-based systems in clinical settings are the scarcity of annotated data and the lack of interpretability and trust in these systems. Concept Bottleneck Models (CBMs) offer inherent interpretability by constraining the final disease prediction on a set of human-understandable concepts. However, this inherent interpretability comes at the cost of greater annotation burden. Additionally, adding new concepts requires retraining the entire system. In this work, we introduce a novel two-step methodology that addresses both of these challenges. By simulating the two stages of a CBM, we utilize a pretrained Vision Language Model (VLM) to automatically predict clinical concepts, and an off-the-shelf Large Language Model (LLM) to generate disease diagnoses grounded on the predicted concepts. Furthermore, our approach supports test-time human intervention, enabling corrections to predicted concepts, which improves final diagnoses and enhances transparency in decision-making. We validate our approach on three skin lesion datasets, demonstrating that it outperforms traditional CBMs and state-of-the-art explainable methods, all without requiring any training and utilizing only a few annotated examples. The code is available at https://github.com/CristianoPatricio/2step-concept-based-skin-diagnosis.
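
A minimal sketch of the two-step pipeline described above, assuming two black-box callables (`predict_concept` for the VLM and `generate_diagnosis` for the LLM; both names are illustrative, not the linked repository's API):

```python
def two_step_diagnosis(image, concepts, predict_concept, generate_diagnosis,
                       corrections=None):
    """Two-step concept-based diagnosis with optional human intervention."""
    # Step 1: a pretrained VLM predicts which clinical concepts are present.
    predicted = {c: predict_concept(image, c) for c in concepts}
    # Test-time human intervention: a clinician can overwrite any predicted
    # concept, and the correction propagates to the final diagnosis.
    if corrections:
        predicted.update(corrections)
    # Step 2: an off-the-shelf LLM generates a diagnosis grounded on the
    # (possibly corrected) concepts, keeping the decision transparent.
    present = [c for c, is_present in predicted.items() if is_present]
    prompt = ("A skin lesion shows the following dermoscopic concepts: "
              + ", ".join(present) + ". What is the most likely diagnosis?")
    return generate_diagnosis(prompt)
```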

2025

CBVLM: Training-free explainable concept-based Large Vision Language Models for medical image classification

Authors
Patrício, C; Torto, IR; Cardoso, JS; Teixeira, LF; Neves, J;

Publication
COMPUTERS IN BIOLOGY AND MEDICINE

Abstract
The main challenges limiting the adoption of deep learning-based solutions in medical workflows are the availability of annotated data and the lack of interpretability of such systems. Concept Bottleneck Models (CBMs) tackle the latter by constraining the model output on a set of predefined and human-interpretable concepts. However, the increased interpretability achieved through these concept-based explanations implies a higher annotation burden. Moreover, if a new concept needs to be added, the whole system needs to be retrained. Inspired by the remarkable performance shown by Large Vision-Language Models (LVLMs) in few-shot settings, we propose a simple, yet effective, methodology, CBVLM, which tackles both of the aforementioned challenges. First, for each concept, we prompt the LVLM to answer if the concept is present in the input image. Then, we ask the LVLM to classify the image based on the previous concept predictions. Moreover, in both stages, we incorporate a retrieval module responsible for selecting the best examples for in-context learning. By grounding the final diagnosis on the predicted concepts, we ensure explainability, and by leveraging the few-shot capabilities of LVLMs, we drastically lower the annotation cost. We validate our approach with extensive experiments across four medical datasets and twelve LVLMs (both generic and medical) and show that CBVLM consistently outperforms CBMs and task-specific supervised methods without requiring any training and using just a few annotated examples. More information on our project page: https://cristianopatricio.github.io/CBVLM/.
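
As a schematic sketch of the two-stage querying pattern the abstract describes (not the project's actual code; `lvlm.query` and `retrieve_examples` are assumed interfaces — see the project page for the real implementation):

```python
def cbvlm_classify(image, concepts, classes, lvlm, retrieve_examples, k=4):
    """Two-stage, training-free classification grounded on predicted concepts."""
    # Stage 1: ask the LVLM, concept by concept, whether each one is present,
    # providing k retrieved annotated examples for in-context learning.
    answers = {}
    for concept in concepts:
        demos = retrieve_examples(image, concept, k)  # few-shot demonstrations
        answers[concept] = lvlm.query(
            image,
            f"Is the concept '{concept}' present in this image? Answer yes or no.",
            examples=demos,
        )
    # Stage 2: ground the final diagnosis on the predicted concepts, which
    # makes the decision explainable via the intermediate answers.
    summary = "; ".join(f"{c}: {a}" for c, a in answers.items())
    return lvlm.query(
        image,
        f"Observed concepts: {summary}. "
        f"Which diagnosis fits best: {', '.join(classes)}?",
    )
```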

2024

Explainable Deep Learning Methods in Medical Image Classification: A Survey

Authors
Patrício, C; Neves, JC; Teixeira, LF;

Publication
ACM COMPUTING SURVEYS

Abstract
The remarkable success of deep learning has prompted interest in its application to medical imaging diagnosis. Even though state-of-the-art deep learning models have achieved human-level accuracy on the classification of different types of medical data, these models are hardly adopted in clinical workflows, mainly due to their lack of interpretability. The black-box nature of deep learning models has raised the need for devising strategies to explain the decision process of these models, leading to the creation of the topic of eXplainable Artificial Intelligence (XAI). In this context, we provide a thorough survey of XAI applied to medical imaging diagnosis, including visual, textual, example-based and concept-based explanation methods. Moreover, this work reviews the existing medical imaging datasets and the existing metrics for evaluating the quality of the explanations. In addition, we include a performance comparison among a set of report generation-based methods. Finally, the major challenges in applying XAI to medical imaging and the future research directions on the topic are discussed.

2024

Towards concept-based interpretability of skin lesion diagnosis using vision-language models

Authors
Patrício, C; Teixeira, LF; Neves, JC;

Publication
IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING, ISBI 2024

Abstract
Concept-based models naturally lend themselves to the development of inherently interpretable skin lesion diagnosis, as medical experts make decisions based on a set of visual patterns of the lesion. Nevertheless, the development of these models depends on the existence of concept-annotated datasets, whose availability is scarce due to the specialized knowledge and expertise required in the annotation process. In this work, we show that vision-language models can be used to alleviate the dependence on a large number of concept-annotated samples. In particular, we propose an embedding learning strategy to adapt CLIP to the downstream task of skin lesion classification using concept-based descriptions as textual embeddings. Our experiments reveal that vision-language models not only attain better accuracy when using concepts as textual embeddings, but also require a smaller number of concept-annotated samples to attain comparable performance to approaches specifically devised for automatic concept generation.
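
As a simplified sketch of the underlying idea — zero-shot CLIP scoring with concept-based class descriptions — using the Hugging Face CLIP interface. The paper additionally learns an embedding adaptation, which is omitted here, and the descriptions below are illustrative, not the paper's prompts.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Concept-based descriptions act as the textual embedding for each class
# (hypothetical wording for illustration).
class_descriptions = {
    "melanoma": "a skin lesion with an atypical pigment network and blue-whitish veil",
    "nevus": "a skin lesion with a regular pigment network and a symmetric shape",
}

def classify(image):
    """Return the class whose concept-based description best matches the image."""
    inputs = processor(text=list(class_descriptions.values()),
                       images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        # Image-text cosine similarities scaled by CLIP's learned temperature.
        logits = model(**inputs).logits_per_image
    idx = logits.argmax(dim=-1).item()
    return list(class_descriptions.keys())[idx]
```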