Publications by Helena Montenegro

2021

Privacy-Preserving Generative Adversarial Network for Case-Based Explainability in Medical Image Analysis

Authors
Montenegro, H; Silva, W; Cardoso, JS;

Publication
IEEE ACCESS

Abstract
Although Deep Learning models have achieved incredible results in medical image classification tasks, their lack of interpretability hinders their deployment in the clinical context. Case-based interpretability provides intuitive explanations, as it is a much more human-like approach than saliency-map-based interpretability. Nonetheless, since one is dealing with sensitive visual data, there is a high risk of exposing personal identity, threatening the individuals' privacy. In this work, we propose a privacy-preserving generative adversarial network for the privatization of case-based explanations. We address the weaknesses of current privacy-preserving methods for visual data from three perspectives: realism, privacy, and explanatory value. We also introduce a counterfactual module in our Generative Adversarial Network that provides counterfactual case-based explanations in addition to standard factual explanations. Experiments were performed on a biometric and a medical dataset, demonstrating the network's potential to preserve the privacy of all subjects while keeping the explanatory evidence and a reasonable level of intelligibility.
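
As a rough illustration of how the three perspectives named in the abstract (realism, privacy, and explanatory value) can be combined into a single generator objective, the sketch below trains a privatization generator against an adversarial discriminator, an identity recognizer, and the task classifier. All module names, loss terms, and weights are assumptions for illustration only; this is not the paper's implementation, and the counterfactual module is omitted.

```python
# Hypothetical sketch of a privacy-preserving generator objective combining
# realism, privacy, and explanatory value. Module names and weights are
# illustrative assumptions, not the paper's architecture.
import torch
import torch.nn.functional as F


def privatization_loss(x, generator, discriminator, id_recognizer,
                       task_classifier, w_real=1.0, w_priv=1.0, w_expl=1.0):
    """Combined loss for a privatized explanation x_priv = generator(x)."""
    x_priv = generator(x)

    # Realism: the privatized image should fool the adversarial discriminator.
    d_out = discriminator(x_priv)
    realism = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))

    # Privacy: push the identity recognizer towards a uniform, uninformative
    # posterior over subject identities.
    id_log_probs = F.log_softmax(id_recognizer(x_priv), dim=-1)
    uniform = torch.full_like(id_log_probs, 1.0 / id_log_probs.size(-1))
    privacy = F.kl_div(id_log_probs, uniform, reduction="batchmean")

    # Explanatory value: the task classifier should make (almost) the same
    # prediction on the privatized image as on the original one.
    with torch.no_grad():
        target = F.softmax(task_classifier(x), dim=-1)
    expl_log_probs = F.log_softmax(task_classifier(x_priv), dim=-1)
    explanatory = F.kl_div(expl_log_probs, target, reduction="batchmean")

    return w_real * realism + w_priv * privacy + w_expl * explanatory
```

In such a setup the discriminator and identity recognizer would be trained in alternation with the generator, as in a standard adversarial scheme; the task classifier is typically kept frozen so that the explanatory evidence is measured against a fixed reference.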

2022

Privacy-Preserving Case-Based Explanations: Enabling Visual Interpretability by Protecting Privacy

Authors
Montenegro, H; Silva, W; Gaudio, A; Fredrikson, M; Smailagic, A; Cardoso, JS;

Publication
IEEE ACCESS

Abstract
Deep Learning achieves state-of-the-art results in many domains, yet its black-box nature limits its application to real-world contexts. An intuitive way to improve the interpretability of Deep Learning models is by explaining their decisions with similar cases. However, case-based explanations cannot be used in contexts where the data exposes personal identity, as they may compromise the privacy of individuals. In this work, we identify the main limitations and challenges in the anonymization of case-based explanations of image data through a survey on case-based interpretability and image anonymization methods. We empirically analyze the anonymization methods with regard to their capacity to remove personally identifiable information while preserving relevant semantic properties of the data. Through this analysis, we conclude that most privacy-preserving methods are not good enough to be applied to case-based explanations. To promote research on this topic, we formalize the privacy protection of visual case-based explanations as a multi-objective problem to preserve privacy, intelligibility, and relevant explanatory evidence regarding a predictive task. We empirically verify the potential of interpretability saliency maps as qualitative evaluation tools for anonymization. Finally, we identify and propose new lines of research to guide future work in the generation of privacy-preserving case-based explanations.
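
One plausible way to write the multi-objective formulation mentioned in the abstract, with notation that is our assumption rather than the paper's: given an explanation image x and an anonymization function φ, the anonymizer is chosen to jointly minimize three conflicting objectives.

```latex
% Hypothetical formalization (notation assumed, not taken from the paper):
% find an anonymizer \phi that jointly minimizes three conflicting objectives.
\min_{\phi}\;\Big(
  \underbrace{\mathcal{L}_{\mathrm{priv}}\big(\phi(x)\big)}_{\text{re-identification risk}},\;
  \underbrace{\mathcal{L}_{\mathrm{intel}}\big(x,\phi(x)\big)}_{\text{loss of intelligibility}},\;
  \underbrace{\mathcal{L}_{\mathrm{expl}}\big(x,\phi(x)\big)}_{\text{loss of explanatory evidence}}
\Big)
```

The three terms pull in different directions: stronger anonymization tends to erase both intelligibility and task-relevant evidence, which is consistent with the abstract's choice to frame the problem as multi-objective rather than as a single scalar loss.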

2023

Disentangled Representation Learning for Privacy-Preserving Case-Based Explanations

Authors
Montenegro, H; Silva, W; Cardoso, JS;

Publication
MEDICAL APPLICATIONS WITH DISENTANGLEMENTS, MAD 2022

Abstract
The lack of interpretability of Deep Learning models hinders their deployment in clinical contexts. Case-based explanations can be used to justify these models' decisions and improve their trustworthiness. However, providing medical cases as explanations may threaten the privacy of patients. We propose a generative adversarial network to disentangle identity and medical features from images. Using this network, we can alter the identity of an image to anonymize it while preserving relevant explanatory features. As a proof of concept, we apply the proposed model to biometric and medical datasets, demonstrating its capacity to anonymize medical images while preserving explanatory evidence and a reasonable level of intelligibility. Finally, we demonstrate that the model is inherently capable of generating counterfactual explanations.
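
A minimal sketch of the disentanglement idea described above, assuming a simple autoencoder whose latent code is split into an identity part and a medical part; the architecture, dimensions, and identity-swapping step are illustrative assumptions, not the paper's network.

```python
# Illustrative sketch of identity/medical disentanglement for anonymization.
# Architecture and dimensions are assumptions, not the paper's model.
import torch
import torch.nn as nn


class DisentanglingAutoencoder(nn.Module):
    def __init__(self, id_dim=64, med_dim=64):
        super().__init__()
        # Shared convolutional encoder producing a flat feature vector
        # (assumes 64x64 grayscale inputs).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat = 64 * 16 * 16
        self.to_identity = nn.Linear(feat, id_dim)   # identity factors
        self.to_medical = nn.Linear(feat, med_dim)   # explanatory (medical) factors
        self.decoder = nn.Sequential(
            nn.Linear(id_dim + med_dim, feat), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        return self.to_identity(h), self.to_medical(h)

    def anonymize(self, x, donor):
        """Decode x's medical code together with a donor's identity code."""
        _, med = self.forward(x)
        donor_id, _ = self.forward(donor)
        return self.decoder(torch.cat([donor_id, med], dim=1))
```

Under this reading, anonymization amounts to swapping the identity code of the explanation image for that of a donor image while keeping its medical code, so the decoded image no longer reveals who it belongs to but retains the explanatory content.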

2025

107P Surgeon preference for AI-generated aesthetic predictions after breast-conserving surgery: A multicentre pilot study

Authors
Pfob, A; Montenegro, H; Bonci, E; Romariz, M; Zolfgharnasab, M; Gonçalves, T; Mavioso, C; Andrés-Luna, R; Heil, J; Ekman, M; Bobowicz, M; Kabata, P; Di Micco, R; Corona, S; Menes, T; Herman, N; Cardoso, J; Cardoso, M;

Publication
ESMO Real World Data and Digital Oncology

Abstract

2025

Conditional Generative Adversarial Network for Predicting the Aesthetic Outcomes of Breast Cancer Treatment

Authors
Montenegro, H; Cardoso, MJ; Cardoso, JS;

Publication
2025 47th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)

Abstract

2025

A Literature Review on Example-Based Explanations in Medical Image Analysis

Authors
Montenegro, H; Cardoso, JS;

Publication
JOURNAL OF HEALTHCARE INFORMATICS RESEARCH

Abstract
Deep learning has been extensively applied to medical imaging tasks over the past years, achieving outstanding results. However, the obscure reasoning of the models and the lack of supportive evidence cause both clinicians and patients to distrust the models' predictions, hindering their adoption in clinical practice. In recent years, the research community has focused on developing explanations capable of revealing a model's reasoning. Among various types of explanations, example-based explanations emerged as particularly intuitive for medical practitioners. Despite the intuitiveness and wide development of example-based explanations, no work provides a comprehensive review of existing example-based explainability works in the medical image domain. In this work, we review works that provide example-based explanations for medical imaging tasks, reflecting on their strengths and limitations. We identify the absence of objective evaluation metrics, the lack of clinical validation, and privacy concerns as the main issues that hinder the deployment of example-based explanations in clinical practice. Finally, we reflect on future directions contributing towards the deployment of example-based explainability in clinical practice.
