Publications

Publications by Helena Montenegro

2024

Explainable AI for medical image analysis

Authors
Brás, C; Montenegro, H; Cai, Y; Corbetta, V; Huo, Y; Silva, W; Cardoso, S; Landman, A; Išgum, I;

Publication
Trustworthy AI in Medical Imaging

Abstract
The rising adoption of AI-driven solutions in medical imaging comes with an emerging need for strategies that introduce explainability as an important aspect of the trustworthiness of AI models. This chapter addresses the most commonly used explainability techniques in medical image analysis, namely methods that generate visual, example-based, textual, and concept-based explanations. To obtain visual explanations, we explore backpropagation- and perturbation-based methods. To yield example-based explanations, we focus on prototype-, distance-, and retrieval-based techniques, as well as counterfactual explanations. Finally, to produce textual and concept-based explanations, we delve into image captioning and testing with concept activation vectors, respectively. The chapter aims to provide an understanding of the conceptual underpinnings, advantages, and limitations of each method, and to interpret the explanations they generate in the context of medical image analysis. © 2025 Elsevier Inc. All rights reserved.
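As a rough illustration of the backpropagation-based visual explanations surveyed in this chapter, the sketch below computes a vanilla gradient saliency map. The untrained torchvision ResNet and the random input tensor are placeholders chosen for the example; in practice they would be a trained medical-imaging classifier and a preprocessed patient scan.

```python
import torch
from torchvision import models

# Placeholder model and input: an untrained ResNet and a random tensor stand
# in for a trained medical-imaging classifier and a preprocessed scan.
model = models.resnet18(weights=None)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder image

# Forward pass, then backpropagate the score of the predicted class.
logits = model(image)
predicted_class = logits.argmax(dim=1).item()
logits[0, predicted_class].backward()

# The saliency map keeps the largest absolute gradient per pixel across
# channels: regions whose change most affects the prediction score.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)
print(saliency.shape)
```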

2025

Conditional Generative Adversarial Network for Predicting the Aesthetic Outcomes of Breast Cancer Treatment

Authors
Montenegro, H; Cardoso, MJ; Cardoso, JS;

Publication
2025 47th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)

Abstract

2025

A Literature Review on Example-Based Explanations in Medical Image Analysis

Authors
Montenegro, H; Cardoso, JS;

Publication
Journal of Healthcare Informatics Research

Abstract
Deep learning has been extensively applied to medical imaging tasks over the past years, achieving outstanding results. However, the opaque reasoning of the models and the lack of supporting evidence cause both clinicians and patients to distrust the models' predictions, hindering their adoption in clinical practice. In recent years, the research community has focused on developing explanations capable of revealing a model's reasoning. Among the various types of explanations, example-based explanations have emerged as particularly intuitive for medical practitioners. Despite their intuitiveness and wide development, no prior work provides a comprehensive review of example-based explainability in the medical imaging domain. In this work, we review works that provide example-based explanations for medical imaging tasks, reflecting on their strengths and limitations. We identify the absence of objective evaluation metrics, the lack of clinical validation, and privacy concerns as the main issues that hinder the deployment of example-based explanations in clinical practice. Finally, we reflect on future directions that could contribute to the deployment of example-based explainability in clinical practice.
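As a rough illustration of the retrieval-based example explanations discussed in this review, the sketch below retrieves the reference cases most similar to a query case in a learned feature space. The random arrays and the choice of cosine distance are assumptions made for the example; in practice the embeddings would be extracted from a trained medical-imaging model, and the retrieved cases, with their known diagnoses, would be presented to the clinician as supporting evidence.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Placeholder embeddings: in practice, feature vectors extracted from the
# penultimate layer of a trained model for a labelled reference set.
rng = np.random.default_rng(0)
reference_features = rng.random((500, 128))  # 500 reference cases, 128-d each
query_features = rng.random((1, 128))        # the case to be explained

# Retrieval-based explanation: find the k reference cases closest to the
# query in feature space; their known outcomes serve as supporting evidence.
retriever = NearestNeighbors(n_neighbors=5, metric="cosine")
retriever.fit(reference_features)
distances, indices = retriever.kneighbors(query_features)

print("Indices of most similar reference cases:", indices[0])
print("Cosine distances:", distances[0])
```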
