About

Cristiano Patrício completed his Bachelor's degree in Computer Science and Engineering (17/20) in 2019 at the Polytechnic Institute of Guarda, and his Master's degree in Computer Science and Engineering (18/20) in 2021 at the University of Beira Interior. He received a Merit Scholarship in the 2018/2019 academic year. He is currently pursuing a PhD in Computer Science and Engineering at the University of Beira Interior, supported by a doctoral research grant from the Fundação para a Ciência e a Tecnologia (FCT). Cristiano is a Research Assistant at INESC TEC and was an Invited Assistant Lecturer at the Polytechnic Institute of Guarda. Previously, he contributed to the development of solutions for Fundação Altice Portugal projects (MagicContact Web) and for the NOVA-LINCS project "Perception for a Service Robot". His work focuses on developing inherently interpretable deep learning models for the diagnosis of pathologies in medical imaging. His research interests include Explainable AI, Deep Learning, and Medical Image Analysis. He has authored 6 scientific papers in international journals and conferences.

Topics of interest

Details

  • Name

    Cristiano Pires Patrício
  • Position

    Research Assistant
  • Since

    07 February 2022
Publications

2024

Explainable Deep Learning Methods in Medical Image Classification: A Survey

Authors
Patrício, C; Neves, C; Teixeira, F;

Publication
ACM COMPUTING SURVEYS

Abstract
The remarkable success of deep learning has prompted interest in its application to medical imaging diagnosis. Even though state-of-the-art deep learning models have achieved human-level accuracy on the classification of different types of medical data, these models are hardly adopted in clinical workflows, mainly due to their lack of interpretability. The black-box nature of deep learning models has raised the need for devising strategies to explain the decision process of these models, leading to the creation of the topic of eXplainable Artificial Intelligence (XAI). In this context, we provide a thorough survey of XAI applied to medical imaging diagnosis, including visual, textual, example-based and concept-based explanation methods. Moreover, this work reviews the existing medical imaging datasets and the existing metrics for evaluating the quality of the explanations. In addition, we include a performance comparison among a set of report generation-based methods. Finally, the major challenges in applying XAI to medical imaging and the future research directions on the topic are discussed.

2023

Zero-shot face recognition: Improving the discriminability of visual face features using a Semantic-Guided Attention Model

Authors
Patrício, C; Neves, JC;

Publication
EXPERT SYSTEMS WITH APPLICATIONS

Abstract
Zero-shot learning enables the recognition of classes not seen during training through the use of semantic information comprising a visual description of the class either in textual or attribute form. Despite the advances in the performance of zero-shot learning methods, most of the works do not explicitly exploit the correlation between the visual attributes of the image and their corresponding semantic attributes for learning discriminative visual features. In this paper, we introduce an attention-based strategy for deriving features from the image regions regarding the most prominent attributes of the image class. In particular, we train a Convolutional Neural Network (CNN) for image attribute prediction and use a gradient-weighted method for deriving the attention activation maps of the most salient image attributes. These maps are then incorporated into the feature extraction process of Zero-Shot Learning (ZSL) approaches for improving the discriminability of the features produced through the implicit inclusion of semantic information. For experimental validation, the performance of state-of-the-art ZSL methods was determined using features with and without the proposed attention model. Surprisingly, we discover that the proposed strategy degrades the performance of ZSL methods in classical ZSL datasets (AWA2), but it can significantly improve performance when using face datasets. Our experiments show that these results are a consequence of the interpretability of the dataset attributes, suggesting that existing ZSL datasets attributes are, in most cases, difficult to be identifiable in the image. Source code is available at https://github.com/CristianoPatricio/SGAM.
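The abstract describes deriving gradient-weighted attention maps from an attribute-prediction CNN, in the style of Grad-CAM. A minimal sketch of that idea is below; it is not the authors' released code (see the linked SGAM repository for that), and the network sizes, the single-attribute head, and the normalization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttributeCNN(nn.Module):
    """Toy CNN that predicts image attributes from a feature map."""
    def __init__(self, num_attributes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(16, num_attributes)

    def forward(self, x):
        fmap = self.features(x)                        # (B, C, H, W)
        logits = self.head(self.pool(fmap).flatten(1)) # (B, num_attributes)
        return fmap, logits

def attribute_attention(model, x, attr_idx):
    """Grad-CAM-style activation map for one predicted attribute."""
    fmap, logits = model(x)
    fmap.retain_grad()                     # keep grads on the non-leaf map
    logits[:, attr_idx].sum().backward()   # backprop the attribute score
    # Channel weights = spatial mean of gradients, as in Grad-CAM.
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)  # (B, C, 1, 1)
    cam = torch.relu((weights * fmap).sum(dim=1))       # (B, H, W)
    # Normalize each map to [0, 1] for use as a soft attention mask.
    return cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)

model = AttributeCNN()
x = torch.randn(2, 3, 32, 32)
cam = attribute_attention(model, x, attr_idx=0)
```

The normalized map could then gate the backbone features consumed by a ZSL classifier, emphasizing regions tied to the class's most salient attributes, which is the role the abstract assigns to these maps.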

2023

Coherent Concept-based Explanations in Medical Image and Its Application to Skin Lesion Diagnosis

Authors
Patrício, C; Neves, JC; Teixeira, LF;

Publication
IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023 - Workshops, Vancouver, BC, Canada, June 17-24, 2023

Abstract
Early detection of melanoma is crucial for preventing severe complications and increasing the chances of successful treatment. Existing deep learning approaches for melanoma skin lesion diagnosis are deemed black-box models, as they omit the rationale behind the model prediction, compromising the trustworthiness and acceptability of these diagnostic methods. Attempts to provide concept-based explanations are based on post-hoc approaches, which depend on an additional model to derive interpretations. In this paper, we propose an inherently interpretable framework to improve the interpretability of concept-based models by incorporating a hard attention mechanism and a coherence loss term to assure the visual coherence of concept activations by the concept encoder, without requiring the supervision of additional annotations. The proposed framework explains its decision in terms of human-interpretable concepts and their respective contribution to the final prediction, as well as a visual interpretation of the locations where the concept is present in the image. Experiments on skin image datasets demonstrate that our method outperforms existing black-box and concept-based models for skin lesion classification. © 2023 IEEE.
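The abstract mentions a coherence loss term that keeps concept activations visually coherent without extra annotations. One common way such a term is realized, sketched below purely as an illustration rather than the paper's exact formulation, is to penalize the distance between the concept scores a model assigns to two views of the same image, so the concept encoder responds to the same visual evidence in both.

```python
import torch
import torch.nn.functional as F

def coherence_loss(concepts_a, concepts_b):
    """MSE between the concept-activation vectors of two views of an image."""
    return F.mse_loss(concepts_a, concepts_b)

# Toy usage: random "concept scores" for a batch of 4 images and
# 8 clinical concepts (both numbers are illustrative assumptions).
scores_view1 = torch.sigmoid(torch.randn(4, 8))
scores_view2 = torch.sigmoid(torch.randn(4, 8))
loss = coherence_loss(scores_view1, scores_view2)
```

In training, this term would be added to the classification loss with a weighting coefficient, nudging the concept encoder toward stable, human-checkable concept activations.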

2023

Towards Concept-based Interpretability of Skin Lesion Diagnosis using Vision-Language Models

Authors
Patrício, C; Teixeira, LF; Neves, JC;

Publication
CoRR

Abstract