2021
Authors
Montenegro, H; Silva, W; Cardoso, JS;
Publication
IEEE ACCESS
Abstract
Although Deep Learning models have achieved remarkable results in medical image classification tasks, their lack of interpretability hinders their deployment in the clinical context. Case-based interpretability provides intuitive explanations, as it is a much more human-like approach than saliency-map-based interpretability. Nonetheless, since one is dealing with sensitive visual data, there is a high risk of exposing personal identity, threatening the individuals' privacy. In this work, we propose a privacy-preserving generative adversarial network for the privatization of case-based explanations. We address the weaknesses of current privacy-preserving methods for visual data from three perspectives: realism, privacy, and explanatory value. We also introduce a counterfactual module in our Generative Adversarial Network that provides counterfactual case-based explanations in addition to standard factual explanations. Experiments were performed on a biometric and a medical dataset, demonstrating the network's potential to preserve the privacy of all subjects and retain the explanatory evidence while also maintaining a reasonable level of intelligibility.
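The abstract frames the privatization of case-based explanations around three criteria (realism, privacy, and explanatory value). Below is a minimal sketch of how such a combined generator objective could be assembled in PyTorch; the use of a discriminator, an identity recognizer, and a frozen task classifier as the three critics, together with all names and weights, is an illustrative assumption and not the paper's exact architecture.

import torch
import torch.nn.functional as F

def privatization_loss(generator, discriminator, identity_net, task_net,
                       images, identities, labels,
                       w_real=1.0, w_priv=1.0, w_expl=1.0):
    # Generate privatized counterparts of the input explanation images.
    privatized = generator(images)

    # Realism: the privatized images should be judged real by an adversarial discriminator.
    real_logits = discriminator(privatized)
    loss_real = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))

    # Privacy: push down the probability the identity recognizer assigns to the true subject.
    id_log_probs = F.log_softmax(identity_net(privatized), dim=1)
    loss_priv = id_log_probs.gather(1, identities.unsqueeze(1)).mean()

    # Explanatory value: a frozen task classifier should still predict the original label.
    loss_expl = F.cross_entropy(task_net(privatized), labels)

    return w_real * loss_real + w_priv * loss_priv + w_expl * loss_expl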
2022
Authors
Montenegro, H; Silva, W; Gaudio, A; Fredrikson, M; Smailagic, A; Cardoso, JS;
Publication
IEEE ACCESS
Abstract
Deep Learning achieves state-of-the-art results in many domains, yet its black-box nature limits its application to real-world contexts. An intuitive way to improve the interpretability of Deep Learning models is by explaining their decisions with similar cases. However, case-based explanations cannot be used in contexts where the data exposes personal identity, as they may compromise the privacy of individuals. In this work, we identify the main limitations and challenges in the anonymization of case-based explanations of image data through a survey on case-based interpretability and image anonymization methods. We empirically analyze the anonymization methods with regard to their capacity to remove personally identifiable information while preserving relevant semantic properties of the data. Through this analysis, we conclude that most privacy-preserving methods are not adequate for application to case-based explanations. To promote research on this topic, we formalize the privacy protection of visual case-based explanations as a multi-objective problem to preserve privacy, intelligibility, and relevant explanatory evidence regarding a predictive task. We empirically verify the potential of interpretability saliency maps as qualitative evaluation tools for anonymization. Finally, we identify and propose new lines of research to guide future work in the generation of privacy-preserving case-based explanations.
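The abstract formalizes privacy protection of visual case-based explanations as a multi-objective problem over privacy, intelligibility, and explanatory evidence. One way to write such a formulation, with illustrative symbols rather than the paper's exact notation, is:

\min_{G} \; \big( \mathcal{L}_{\mathrm{priv}}(G), \; \mathcal{L}_{\mathrm{intel}}(G), \; \mathcal{L}_{\mathrm{expl}}(G) \big),

where, for an explanation image x and its anonymized counterpart G(x),

\mathcal{L}_{\mathrm{priv}}(G) = \Pr\big[\mathrm{Id}(G(x)) = \mathrm{Id}(x)\big], \quad
\mathcal{L}_{\mathrm{intel}}(G) = d\big(G(x), \mathcal{X}_{\mathrm{real}}\big), \quad
\mathcal{L}_{\mathrm{expl}}(G) = \ell\big(f(G(x)), f(x)\big),

with Id an identity recognizer, d a distance to the distribution of realistic images, and f the predictive model whose decision the explanation supports.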
2023
Authors
Montenegro, H; Silva, W; Cardoso, JS;
Publication
MEDICAL APPLICATIONS WITH DISENTANGLEMENTS, MAD 2022
Abstract
The lack of interpretability of Deep Learning models hinders their deployment in clinical contexts. Case-based explanations can be used to justify these models' decisions and improve their trustworthiness. However, providing medical cases as explanations may threaten the privacy of patients. We propose a generative adversarial network to disentangle identity and medical features from images. Using this network, we can alter the identity of an image to anonymize it while preserving relevant explanatory features. As a proof of concept, we apply the proposed model to biometric and medical datasets, demonstrating its capacity to anonymize medical images while preserving explanatory evidence and a reasonable level of intelligibility. Finally, we demonstrate that the model is inherently capable of generating counterfactual explanations.
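The abstract describes separating identity from medical features so that identity can be replaced while explanatory evidence is preserved. A minimal sketch of that idea follows, assuming an encoder whose latent code is split into an identity part and a medical part and a decoder that reassembles them; all module and variable names are hypothetical, not the paper's implementation.

import torch
import torch.nn as nn

class DisentanglingAutoencoder(nn.Module):
    # Toy encoder/decoder whose latent code is split into identity and medical halves.
    def __init__(self, in_dim=784, id_dim=32, med_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, id_dim + med_dim))
        self.decoder = nn.Sequential(nn.Linear(id_dim + med_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))
        self.id_dim = id_dim

    def split(self, x):
        z = self.encoder(x)
        return z[:, :self.id_dim], z[:, self.id_dim:]  # (identity code, medical code)

    def forward(self, x):
        z_id, z_med = self.split(x)
        return self.decoder(torch.cat([z_id, z_med], dim=1))

def anonymize(model, x, donor):
    # Replace the identity code of x with that of a donor image, keeping x's medical code.
    z_id_donor, _ = model.split(donor)
    _, z_med = model.split(x)
    return model.decoder(torch.cat([z_id_donor, z_med], dim=1))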
2025
Authors
Pfob, A; Montenegro, H; Bonci, E; Romariz, M; Zolfgharnasab, M; Gonçalves, T; Mavioso, C; Andrés-Luna, R; Heil, J; Ekman, M; Bobowicz, M; Kabata, P; Di Micco, R; Corona, S; Menes, T; Herman, N; Cardoso, J; Cardoso, M;
Publication
ESMO Real World Data and Digital Oncology
Abstract
2025
Authors
Montenegro, H; Cardoso, MJ; Cardoso, JS;
Publication
2025 47th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
Abstract
2025
Authors
Montenegro, H; Cardoso, JS;
Publication
JOURNAL OF HEALTHCARE INFORMATICS RESEARCH
Abstract
Deep learning has been extensively applied to medical imaging tasks over the past years, achieving outstanding results. However, the obscure reasoning of the models and the lack of supportive evidence cause both clinicians and patients to distrust the models' predictions, hindering their adoption in clinical practice. In recent years, the research community has focused on developing explanations capable of revealing a model's reasoning. Among various types of explanations, example-based explanations have emerged as particularly intuitive for medical practitioners. Despite their intuitiveness and wide development, no prior work provides a comprehensive review of example-based explainability in the medical image domain. In this work, we review works that provide example-based explanations for medical imaging tasks, reflecting on their strengths and limitations. We identify the absence of objective evaluation metrics, the lack of clinical validation, and privacy concerns as the main issues that hinder the deployment of example-based explanations in clinical practice. Finally, we reflect on future directions that would bring example-based explainability closer to deployment in clinical practice.
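As a point of reference for the surveyed family of methods, the simplest form of example-based explanation retrieves the training cases closest to a query in a model's feature space. The sketch below illustrates that baseline; the L2 distance and flat feature arrays are illustrative choices, not tied to any specific surveyed work.

import numpy as np

def explain_by_examples(query_feat, train_feats, train_cases, k=3):
    # Return the k training cases whose features are nearest to the query (L2 distance).
    dists = np.linalg.norm(train_feats - query_feat, axis=1)
    nearest = np.argsort(dists)[:k]
    return [(train_cases[i], float(dists[i])) for i in nearest]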