Publications by Wilson Santos Silva

2020

A novel approach to keypoint detection for the aesthetic evaluation of breast cancer surgery outcomes

Authors
Gonçalves, T; Silva, W; Cardoso, MJ; Cardoso, JS;

Publication
HEALTH AND TECHNOLOGY

Abstract
The implementation of routine breast cancer screening and better treatment strategies has made it possible to offer the majority of women the option of breast conservation instead of a mastectomy. The most important aim of breast cancer conservative treatment (BCCT) is to optimize the aesthetic outcome and, implicitly, quality of life (QoL), without jeopardizing local cancer control and overall survival. Because of the impact the aesthetic outcome has on QoL, there has been an effort to define an optimal tool capable of performing this type of evaluation. Moving from the classical subjective aesthetic evaluation of BCCT (either by the patient herself or by a group of clinicians through questionnaires) to an objective aesthetic evaluation (where machine learning and computer vision methods are employed) leads to less variability and increased reproducibility of results. Currently, some offline software applications are available, such as BAT© and BCCT.core, which perform the assessment based on asymmetry measurements computed from semi-automatically annotated keypoints. The literature contains algorithms that attempt fully automatic keypoint annotation with reasonable success; however, these algorithms are very time-consuming. As research moves increasingly towards web-based software applications, such time-consuming tasks become undesirable. In this work, we propose a novel approach to the keypoint detection task that treats the problem, in part, as image segmentation. This novel approach can improve both execution time and results.
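The paper's exact model is not reproduced here, but the core idea of treating keypoint detection as segmentation can be illustrated: predict one probability map per keypoint and read coordinates off each map's argmax. A minimal PyTorch sketch, with a toy backbone and hypothetical names:

```python
import torch
import torch.nn as nn

class HeatmapKeypointNet(nn.Module):
    """Predicts one probability map per keypoint; the keypoint is its argmax."""
    def __init__(self, num_keypoints=3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_keypoints, 1),  # one map per keypoint
        )

    def forward(self, x):
        return self.backbone(x)

def heatmaps_to_coords(heatmaps):
    """Convert (B, K, H, W) heatmaps to (B, K, 2) integer (row, col) coordinates."""
    b, k, h, w = heatmaps.shape
    flat = heatmaps.view(b, k, -1).argmax(dim=-1)
    rows = torch.div(flat, w, rounding_mode="floor")
    return torch.stack((rows, flat % w), dim=-1)

# Toy usage on a random grayscale batch.
model = HeatmapKeypointNet(num_keypoints=3)
imgs = torch.randn(2, 1, 64, 64)
coords = heatmaps_to_coords(model(imgs))
print(coords.shape)  # torch.Size([2, 3, 2])
```

Because all maps come out of a single forward pass, coordinates for every keypoint are obtained together, which is where the execution-time gain over iterative annotation algorithms would come from.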

2020

Interpretable Biometrics: Should We Rethink How Presentation Attack Detection is Evaluated?

Authors
Sequeira, AF; Silva, W; Pinto, JR; Gonçalves, T; Cardoso, JS;

Publication
2020 8TH INTERNATIONAL WORKSHOP ON BIOMETRICS AND FORENSICS (IWBF 2020)

Abstract
Presentation attack detection (PAD) methods are commonly evaluated using metrics based on the predicted labels. This is a limitation, especially for the more elusive methods based on deep learning, which can freely learn the most suitable features. Though often more accurate, these models operate as complex black boxes, and the inner processes that sustain their predictions remain baffling. Interpretability tools are now being used to delve deeper into the operation of machine learning methods, especially artificial neural networks, to better understand how they reach their decisions. In this paper, we make a case for the integration of interpretability tools in the evaluation of PAD. A simple model for face PAD, based on convolutional neural networks, was implemented and evaluated using both traditional metrics (APCER, BPCER and EER) and interpretability tools (Grad-CAM), with data from the ROSE-Youtu video collection. The results show that interpretability tools can capture the intricate behavior of the implemented model more completely, and enable the identification of certain properties that should be verified by a PAD method that is robust, coherent, meaningful, and able to generalize adequately to unseen data and attacks. One can conclude that, with further efforts devoted towards higher objectivity in interpretability, this can be the key to deeper and more thorough PAD performance evaluation setups.
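Grad-CAM, the interpretability tool named in the abstract, can be reproduced in a few lines: weight the last convolutional feature maps by the gradient of the class score, then average and rectify. A sketch using a stand-in ResNet-18 rather than the paper's face PAD model (the model, input, and hook names here are placeholders):

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None)  # stand-in for a face PAD CNN
model.eval()

activations, gradients = {}, {}
layer = model.layer4  # last convolutional block

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)   # placeholder for a face image
score = model(x)[0].max()         # score of the predicted class
model.zero_grad()
score.backward()

# Grad-CAM: global-average-pooled gradients weight the feature maps.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1))  # (1, h, w)
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                    mode="bilinear", align_corners=False)[0, 0]
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalise to [0, 1]
```

Overlaying `cam` on the input image gives the kind of visual evidence the paper inspects alongside APCER, BPCER and EER.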

2020

Deep Image Segmentation for Breast Keypoint Detection

Authors
Gonçalves, T; Silva, W; Cardoso, MJ; Cardoso, JS;

Publication
Proceedings

Abstract
The main aim of breast cancer conservative treatment is the optimisation of the aesthetic outcome and, implicitly, women's quality of life, without jeopardising local cancer control and overall survival. Consequently, there has been an effort to define an optimal tool capable of performing the aesthetic evaluation of breast cancer conservative treatment outcomes. Recently, a deep learning algorithm was proposed that integrates the learning of keypoint probability maps in the loss function as a regularisation term for the robust learning of keypoint localisation. However, it achieves its best results only in cooperation with a shortest-path algorithm that models images as graphs. In this work, we analysed a novel algorithm based on the interaction of deep image segmentation and deep keypoint detection models, capable of improving both state-of-the-art performance and execution time on the breast keypoint detection task.
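The referenced prior work integrates keypoint probability maps into the loss as a regularisation term. The paper's actual objective is not reproduced here; the following is a hypothetical sketch of what such a composite loss could look like, with a Gaussian bump as the target map and `lam` as an assumed weighting:

```python
import torch
import torch.nn.functional as F

def keypoint_loss(pred_coords, pred_map_logits, true_coords, true_maps, lam=0.5):
    """Coordinate regression plus a probability-map regularisation term.

    pred_coords: (B, K, 2); pred_map_logits, true_maps: (B, K, H, W).
    lam is a hypothetical weight balancing the two terms.
    """
    coord_term = F.mse_loss(pred_coords, true_coords)
    map_term = F.binary_cross_entropy_with_logits(pred_map_logits, true_maps)
    return coord_term + lam * map_term

def gaussian_map(h, w, center, sigma=2.0):
    """Target probability map: a Gaussian bump at the keypoint location."""
    ys = torch.arange(h).view(-1, 1).float()
    xs = torch.arange(w).view(1, -1).float()
    cy, cx = center
    return torch.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
```

The map term pushes the network to commit to a spatially coherent region around each keypoint, which is the regularisation effect the abstract alludes to.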

2020

Interpretability-Guided Content-Based Medical Image Retrieval

Authors
Silva, W; Pöllinger, A; Cardoso, JS; Reyes, M;

Publication
Medical Image Computing and Computer Assisted Intervention - MICCAI 2020 - 23rd International Conference, Lima, Peru, October 4-8, 2020, Proceedings, Part I

Abstract
When encountering a dubious diagnostic case, radiologists typically search public or internal databases for similar cases to support their decision-making. This search places a massive burden on their workflow, as it considerably reduces the time left to diagnose new cases. It is, therefore, of the utmost importance to replace this intensive manual search with an automatic content-based image retrieval system. However, general content-based image retrieval systems are often not helpful in the context of medical imaging, since they do not consider the fact that relevant information in medical images is typically spatially constricted. In this work, we explore the use of interpretability methods to localize relevant regions of images, leading to more focused feature representations and, therefore, to improved medical image retrieval. As a proof of concept, experiments were conducted on a publicly available chest X-ray dataset, with results showing that the proposed interpretability-guided image retrieval reflects the similarity measure of an experienced radiologist better than state-of-the-art image retrieval methods. Furthermore, it also improves the class consistency of the top retrieved results and enhances the interpretability of the whole system by accompanying the retrieval with visual explanations.
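As a rough sketch of the interpretability-guided idea: suppress regions outside a saliency map before feature extraction, then rank the database by feature similarity. The encoder, the random placeholder saliency maps, and all names below are assumptions, not the paper's implementation:

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

encoder = models.resnet18(weights=None)
encoder.fc = torch.nn.Identity()  # use pooled features, not class logits
encoder.eval()

def focused_features(images, saliency):
    """images: (N, 3, H, W); saliency: (N, 1, H, W) in [0, 1]."""
    with torch.no_grad():
        return encoder(images * saliency)  # suppress irrelevant regions

query = torch.randn(1, 3, 224, 224)
database = torch.randn(8, 3, 224, 224)
q_sal = torch.rand(1, 1, 224, 224)   # placeholder saliency maps; the paper
db_sal = torch.rand(8, 1, 224, 224)  # derives them from interpretability methods

q = focused_features(query, q_sal)
db = focused_features(database, db_sal)
sims = F.cosine_similarity(q, db)    # (8,) similarity to each database image
ranking = sims.argsort(descending=True)
print(ranking[:3])  # indices of the top-3 retrieved cases
```

Masking before encoding is one plausible reading of "more focused feature representations"; the paper's own pipeline may differ.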

2021

Privacy-Preserving Generative Adversarial Network for Case-Based Explainability in Medical Image Analysis

Authors
Montenegro, H; Silva, W; Cardoso, JS;

Publication
IEEE ACCESS

Abstract
Although deep learning models have achieved remarkable results in medical image classification tasks, their lack of interpretability hinders their deployment in the clinical context. Case-based interpretability provides intuitive explanations, as it is a much more human-like approach than saliency-map-based interpretability. Nonetheless, since one is dealing with sensitive visual data, there is a high risk of exposing personal identity, threatening the individuals' privacy. In this work, we propose a privacy-preserving generative adversarial network for the privatization of case-based explanations. We address the weaknesses of current privacy-preserving methods for visual data from three perspectives: realism, privacy, and explanatory value. We also introduce a counterfactual module in our generative adversarial network that provides counterfactual case-based explanations in addition to standard factual explanations. Experiments were performed on a biometric and a medical dataset, demonstrating the network's potential to preserve the privacy of all subjects and retain its explanatory evidence while also maintaining a decent level of intelligibility.
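The three perspectives named in the abstract, realism, privacy, and explanatory value, suggest a multi-term generator objective. The following is an illustrative sketch only, not the paper's actual loss; the weights, embeddings, and term definitions are all hypothetical:

```python
import torch
import torch.nn.functional as F

def generator_loss(disc_fake, id_emb_real, id_emb_fake,
                   task_logits_real, task_logits_fake,
                   w_priv=1.0, w_expl=1.0):
    """Hypothetical three-term objective mirroring the paper's stated goals.

    disc_fake: discriminator logits for generated images, (B, 1).
    id_emb_*: identity embeddings of original/anonymised images, (B, D).
    task_logits_*: task classifier outputs for both versions, (B, C).
    """
    # Realism: fool the discriminator into labelling generated images real.
    realism = F.binary_cross_entropy_with_logits(
        disc_fake, torch.ones_like(disc_fake))
    # Privacy: push the anonymised identity embedding away from the original.
    privacy = F.cosine_similarity(id_emb_real, id_emb_fake).mean()
    # Explanatory value: keep the task classifier's prediction unchanged.
    explanatory = F.kl_div(F.log_softmax(task_logits_fake, dim=-1),
                           F.softmax(task_logits_real, dim=-1),
                           reduction="batchmean")
    return realism + w_priv * privacy + w_expl * explanatory
```

Minimising the cosine term drives identity away while the KL term anchors the clinically relevant evidence, which is one way the privacy/explanatory-value tension could be encoded.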

2021

An exploratory study of interpretability for face presentation attack detection

Authors
Sequeira, AF; Gonçalves, T; Silva, W; Pinto, JR; Cardoso, JS;

Publication
IET BIOMETRICS

Abstract
Biometric recognition and presentation attack detection (PAD) methods rely strongly on deep learning algorithms. Though often more accurate, these models operate as complex black boxes. Interpretability tools are now being used to delve deeper into the operation of these methods, which is why this work advocates their integration in the PAD scenario. Building upon previous work, a face PAD model based on convolutional neural networks was implemented and evaluated both through traditional PAD metrics and with interpretability tools. The stability of the explanations obtained when testing models on attacks seen and unseen during training is evaluated. To overcome the limitations of direct comparison, a suitable representation of the explanations is constructed to quantify how much two explanations differ from each other. From the point of view of interpretability, the results obtained in intra- and inter-class comparisons lead to the conclusion that the presence of more attacks during training has a positive effect on the generalisation and robustness of the models. This exploratory study confirms the pressing need for new approaches in biometrics that incorporate interpretability tools, as well as for methodologies to assess and compare the quality of explanations.
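The abstract leaves the representation used to compare explanations unspecified. As a hypothetical stand-in, one could normalise two saliency maps to unit mass and take their total-variation distance:

```python
import numpy as np

def explanation_distance(expl_a, expl_b):
    """Distance between two non-negative saliency maps of the same shape.

    Each map is normalised to unit mass, so the result is a total-variation
    distance in [0, 1]. This is an illustrative choice, not the paper's
    actual representation.
    """
    a = expl_a.astype(float).ravel()
    b = expl_b.astype(float).ravel()
    a /= a.sum() + 1e-12
    b /= b.sum() + 1e-12
    return 0.5 * np.abs(a - b).sum()

# Intra-class comparison: how much do explanations for two bona fide samples differ?
d = explanation_distance(np.random.rand(7, 7), np.random.rand(7, 7))
print(round(d, 3))
```

Averaging such distances within and across classes would yield the kind of intra- and inter-class comparison the abstract describes.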
