Publications

Publications by Helena Montenegro

2021

Privacy-Preserving Generative Adversarial Network for Case-Based Explainability in Medical Image Analysis

Authors
Montenegro, H; Silva, W; Cardoso, JS;

Publication
IEEE ACCESS

Abstract
Although Deep Learning models have achieved remarkable results in medical image classification tasks, their lack of interpretability hinders their deployment in the clinical context. Case-based interpretability provides intuitive explanations, as it is a far more human-like approach than saliency-map-based interpretability. Nonetheless, since case-based explanations expose sensitive visual data, there is a high risk of revealing personal identity and threatening individuals' privacy. In this work, we propose a privacy-preserving generative adversarial network for the privatization of case-based explanations. We address the weaknesses of current privacy-preserving methods for visual data from three perspectives: realism, privacy, and explanatory value. We also introduce a counterfactual module in our Generative Adversarial Network that provides counterfactual case-based explanations in addition to standard factual explanations. Experiments were performed on a biometric dataset and a medical dataset, demonstrating the network's potential to preserve the privacy of all subjects and retain explanatory evidence while maintaining a reasonable level of intelligibility.
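As a rough illustration of the three objectives named in this abstract, the PyTorch sketch below combines a realism (adversarial) loss, a privacy loss against a frozen identity classifier, and an explanatory-value loss against a frozen task classifier. All module names, shapes, and weightings are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch, assuming 64x64 grayscale inputs and pretrained, frozen
# identity and task classifiers passed in as callables.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps an input image to a privatized counterpart (toy architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

def privatization_loss(fake, disc, id_clf, task_clf, id_labels, task_labels):
    """Combines realism, privacy, and explanatory value in one objective."""
    bce = nn.BCEWithLogitsLoss()
    ce = nn.CrossEntropyLoss()
    logits = disc(fake)
    adv = bce(logits, torch.ones_like(logits))     # realism: fool the discriminator
    privacy = -ce(id_clf(fake), id_labels)         # privacy: push away from the true identity
    explanation = ce(task_clf(fake), task_labels)  # explanatory value: keep the medical label
    return adv + privacy + explanation
```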

2022

Privacy-Preserving Case-Based Explanations: Enabling Visual Interpretability by Protecting Privacy

Authors
Montenegro, H; Silva, W; Gaudio, A; Fredrikson, M; Smailagic, A; Cardoso, JS;

Publication
IEEE ACCESS

Abstract
Deep Learning achieves state-of-the-art results in many domains, yet its black-box nature limits its application in real-world contexts. An intuitive way to improve the interpretability of Deep Learning models is to explain their decisions with similar cases. However, case-based explanations cannot be used in contexts where the data exposes personal identity, as they may compromise the privacy of individuals. In this work, we identify the main limitations and challenges in the anonymization of case-based explanations of image data through a survey on case-based interpretability and image anonymization methods. We empirically analyze the anonymization methods with regard to their capacity to remove personally identifiable information while preserving relevant semantic properties of the data. Through this analysis, we conclude that most privacy-preserving methods are inadequate for case-based explanations. To promote research on this topic, we formalize the privacy protection of visual case-based explanations as a multi-objective problem that must preserve privacy, intelligibility, and relevant explanatory evidence regarding a predictive task. We empirically verify the potential of interpretability saliency maps as qualitative evaluation tools for anonymization. Finally, we identify and propose new lines of research to guide future work in the generation of privacy-preserving case-based explanations.
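One way to write down the multi-objective problem mentioned in this abstract is sketched below; the notation and the scalarization into a weighted sum are our illustrative assumptions, not the paper's formulation.

```latex
% Given an explanation image x, choose an anonymized version x' that
% jointly minimizes three weighted losses, with lambda_i >= 0:
\min_{x'} \;
    \lambda_{1}\,\mathcal{L}_{\mathrm{privacy}}(x', x)          % re-identification risk
  + \lambda_{2}\,\mathcal{L}_{\mathrm{intelligibility}}(x')     % realism / legibility
  + \lambda_{3}\,\mathcal{L}_{\mathrm{evidence}}(x', x)         % loss of explanatory evidence
```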

2023

Disentangled Representation Learning for Privacy-Preserving Case-Based Explanations

Authors
Montenegro, H; Silva, W; Cardoso, JS;

Publication
MEDICAL APPLICATIONS WITH DISENTANGLEMENTS, MAD 2022

Abstract
The lack of interpretability of Deep Learning models hinders their deployment in clinical contexts. Case-based explanations can be used to justify these models' decisions and improve their trustworthiness. However, providing medical cases as explanations may threaten the privacy of patients. We propose a generative adversarial network to disentangle identity and medical features from images. Using this network, we can alter the identity of an image to anonymize it while preserving relevant explanatory features. As a proof of concept, we apply the proposed model to biometric and medical datasets, demonstrating its capacity to anonymize medical images while preserving explanatory evidence and a reasonable level of intelligibility. Finally, we demonstrate that the model is inherently capable of generating counterfactual explanations.
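The core operation described here, altering identity while preserving medical features, can be sketched as a feature swap between two latent codes. The PyTorch sketch below is a toy illustration under assumed 64x64 grayscale inputs; the encoder/decoder architecture and function names are ours, not the paper's.

```python
import torch
import torch.nn as nn

class DisentangledAutoencoder(nn.Module):
    """Toy model with separate identity and medical encoders."""
    def __init__(self, dim=128):
        super().__init__()
        self.enc_id = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, dim))
        self.enc_med = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, dim))
        self.dec = nn.Sequential(nn.Linear(2 * dim, 64 * 64), nn.Tanh())

    def forward(self, x):
        return self.enc_id(x), self.enc_med(x)

    def decode(self, z_id, z_med):
        out = self.dec(torch.cat([z_id, z_med], dim=1))
        return out.view(-1, 1, 64, 64)

def anonymize(model, x, donor):
    """Replace the identity code of x with a donor's identity code,
    keeping the medical code of x intact."""
    _, z_med_x = model(x)
    z_id_donor, _ = model(donor)
    return model.decode(z_id_donor, z_med_x)
```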

2023

Evaluating the ability of an artificial-intelligence cloud-based platform designed to provide information prior to locoregional therapy for breast cancer in improving patient's satisfaction with therapy: The CINDERELLA trial

Authors
Kaidar Person, O; Antunes, M; Cardoso, S; Ciani, O; Cruz, H; Di Micco, R; Gentilini, D; Gonçalves, T; Gouveia, P; Heil, J; Kabata, P; Lopes, D; Martinho, M; Martins, H; Mavioso, C; Mika, M; Montenegro, H; Oliveira, P; Pfob, A; Rotmensz, N; Schinköthe, T; Silva, G; Tarricone, R; Cardoso, M;

Publication
PLOS ONE

Abstract
Background: Breast cancer therapy has improved significantly, allowing for different surgical approaches at the same disease stage and therefore offering patients different aesthetic outcomes with similar locoregional control. The purpose of the CINDERELLA trial is to evaluate an artificial-intelligence (AI) cloud-based platform (CINDERELLA platform) versus the standard approach for patient education prior to therapy.

Methods: A prospective randomized international multicentre trial comparing two methods of patient education prior to therapy. After institutional ethics approval and written informed consent, patients planned for locoregional treatment will be randomized to the intervention (CINDERELLA platform) or the control arm. Patients in the intervention arm will use the newly designed web application (CINDERELLA platform, CINDERELLA APProach) to access information related to surgery and/or radiotherapy. Using an AI system, the platform will provide the patient with a picture of her own aesthetic outcome resulting from the surgical procedure she chooses, together with an objective evaluation of this outcome (e.g., good/fair). The control group will have access to the standard approach. The primary objectives of the trial are (i) to examine the differences between the treatment arms with regard to patients' pre-treatment expectations and the final aesthetic outcomes and (ii), in the experimental arm only, to assess the agreement between the pre-treatment AI evaluation and the patient's post-therapy self-evaluation.

Discussion: The project aims to develop an easy-to-use, cost-effective AI-powered tool that improves shared decision-making processes. We expect the CINDERELLA APProach to lead to higher satisfaction, better psychosocial status and wellbeing of breast cancer patients, and a reduced need for additional surgeries to improve the aesthetic outcome.

2024

Anonymizing medical case-based explanations through disentanglement

Authors
Montenegro, H; Cardoso, JS;

Publication
MEDICAL IMAGE ANALYSIS

Abstract
Case-based explanations are an intuitive method to gain insight into the decision-making process of deep learning models in clinical contexts. However, medical images cannot be shared as explanations due to privacy concerns. To address this problem, we propose a novel method for disentangling identity and medical characteristics of images and apply it to anonymize medical images. The disentanglement mechanism replaces some feature vectors in an image while ensuring that the remaining features are preserved, obtaining independent feature vectors that encode the images' identity and medical characteristics. We also propose a model to manufacture synthetic privacy-preserving identities to replace the original image's identity and achieve anonymization. The models are applied to medical and biometric datasets, demonstrating their capacity to generate realistic-looking anonymized images that preserve their original medical content. Additionally, the experiments show the network's inherent capacity to generate counterfactual images through the replacement of medical features.
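The two operations this abstract describes, anonymization via a synthetic identity and counterfactual generation via medical-feature replacement, can be illustrated by reusing the DisentangledAutoencoder from the sketch above. The random identity sampler below is a placeholder assumption; the paper trains a dedicated model to produce privacy-preserving identities.

```python
import torch

def anonymize_with_synthetic_identity(model, x, id_dim=128):
    """Swap the identity code for a synthetic one; keep medical features."""
    _, z_med = model(x)
    z_id_synth = torch.randn(x.size(0), id_dim)  # placeholder for a learned identity generator
    return model.decode(z_id_synth, z_med)

def counterfactual(model, x, donor_with_other_diagnosis):
    """Replace medical features with those of a differently diagnosed donor,
    yielding a counterfactual image that retains the original identity."""
    z_id, _ = model(x)
    _, z_med_cf = model(donor_with_other_diagnosis)
    return model.decode(z_id, z_med_cf)
```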

2023

Evaluating Privacy on Synthetic Images Generated using GANs: Contributions of the VCMI Team to ImageCLEFmedical GANs 2023

Authors
Montenegro, H; Neto, PC; Patrício, C; Torto, IR; Gonçalves, T; Teixeira, LF;

Publication
Working Notes of the Conference and Labs of the Evaluation Forum (CLEF 2023), Thessaloniki, Greece, September 18th to 21st, 2023.

Abstract
This paper presents the main contributions of the VCMI Team to the ImageCLEFmedical GANs 2023 task. This task aims to evaluate whether synthetic medical images generated using Generative Adversarial Networks (GANs) contain identifiable characteristics of the training data. We propose various approaches to classify a set of real images as having been used or not used in the training of the model that generated a set of synthetic images. We use similarity-based approaches to classify the real images based on their similarity to the generated ones. We develop autoencoders to classify the images through outlier detection techniques. Finally, we develop patch-based methods that operate on patches extracted from real and generated images to measure their similarity. On the development dataset, we attained an F1-score of 0.846 and an accuracy of 0.850 using an autoencoder-based method. On the test dataset, a similarity-based approach achieved the best results, with an F1-score of 0.801 and an accuracy of 0.810. The empirical results support the hypothesis that medical data generated using deep generative models trained without privacy constraints threatens the privacy of patients in the training data.
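The autoencoder-based idea in this abstract can be sketched as follows: train an autoencoder on the generated images only, then treat real images with low reconstruction error as close to the generated distribution, and hence likely members of the generator's training set. The architecture, the threshold, and the omitted training loop are assumptions for illustration, not the team's actual models.

```python
import torch
import torch.nn as nn

# Toy autoencoder for 64x64 grayscale images; in practice this would first
# be trained to reconstruct the *generated* images.
autoencoder = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Sigmoid(),
)

def membership_scores(real_images, model):
    """Lower reconstruction error -> closer to the generated distribution ->
    more likely the image was used to train the generator."""
    with torch.no_grad():
        recon = model(real_images).view_as(real_images)
        return ((recon - real_images) ** 2).mean(dim=(1, 2, 3))

def classify(real_images, model, threshold=0.05):
    # threshold is a hypothetical value; it would be tuned on development data
    return membership_scores(real_images, model) < threshold
```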
