About

Wilson Silva holds an integrated master's (BSc+MSc) degree in Electrical and Computer Engineering, obtained from the Faculty of Engineering of the University of Porto (FEUP) in 2016. During his master's, he was also a visiting student at the Karlsruhe Institute of Technology (KIT) in Karlsruhe, Germany. Since the end of 2017, Wilson has been a PhD student in Electrical and Computer Engineering at FEUP and a Research Assistant at INESC TEC, where he is associated with the Visual Computing and Machine Intelligence (VCMI) and Breast Research groups. In between these academic and research experiences, he worked for one year as an IT Advisor at KPMG Portugal in Lisbon. During the academic year of 2018/2019, Wilson was an Invited Assistant at FEUP, teaching practical classes of introductory courses in programming and digital systems. Currently, he is a visiting PhD student at the Bern University Hospital (Inselspital) and at the University of Bern, in Bern, Switzerland. His main research interests include Machine Learning and Computer Vision, with a particular focus on Explainable Artificial Intelligence and Medical Image Analysis.

Details

  • Name

    Wilson Santos Silva
  • Role

    External Research Collaborator
  • Since

    15th February 2016
Publications

2023

Fill in the blank for fashion complementary outfit product retrieval: VISUM summer school competition

Authors
Castro, E; Ferreira, PM; Rebelo, A; Rio-Torto, I; Capozzi, L; Ferreira, MF; Goncalves, T; Albuquerque, T; Silva, W; Afonso, C; Sousa, RG; Cimarelli, C; Daoudi, N; Moreira, G; Yang, HY; Hrga, I; Ahmad, J; Keswani, M; Beco, S;

Publication
MACHINE VISION AND APPLICATIONS

Abstract
Every year, the VISion Understanding and Machine intelligence (VISUM) summer school runs a competition where participants can learn and share knowledge about Computer Vision and Machine Learning in a vibrant environment. The 2021 edition of VISUM focused on applying those methodologies to fashion. Recently, there has been an increase of interest within the scientific community in applying computer vision methodologies to the fashion domain. This interest is largely motivated by fashion being one of the world's largest industries, one that has seen rapid development in e-commerce, particularly since the COVID-19 pandemic. Computer Vision for Fashion enables a wide range of innovations, from personalized recommendations to outfit matching. The competition enabled students to apply the knowledge acquired in the summer school to a real-world problem. The ambition was to foster research and development in fashion outfit complementary product retrieval by leveraging vast visual and textual data with domain knowledge. For this, a new fashion outfit dataset (acquired and curated by FARFETCH) for research and benchmark purposes is introduced. Additionally, a competitive baseline with an original negative sampling process for triplet mining was implemented and served as a starting point for participants. The top 3 performing methods are described in this paper, since they constitute the reference state of the art for this particular problem. To our knowledge, this is the first challenge in fashion outfit complementary product retrieval. Moreover, this joint project between academia and industry brings several relevant contributions to disseminating science and technology, promoting economic and social development, and helping to connect early-career researchers to real-world industry challenges.

2022

Privacy-Preserving Case-Based Explanations: Enabling Visual Interpretability by Protecting Privacy

Authors
Montenegro, H; Silva, W; Gaudio, A; Fredrikson, M; Smailagic, A; Cardoso, JS;

Publication
IEEE ACCESS

Abstract
Deep Learning achieves state-of-the-art results in many domains, yet its black-box nature limits its application to real-world contexts. An intuitive way to improve the interpretability of Deep Learning models is by explaining their decisions with similar cases. However, case-based explanations cannot be used in contexts where the data exposes personal identity, as they may compromise the privacy of individuals. In this work, we identify the main limitations and challenges in the anonymization of case-based explanations of image data through a survey on case-based interpretability and image anonymization methods. We empirically analyze the anonymization methods with regard to their capacity to remove personally identifiable information while preserving relevant semantic properties of the data. Through this analysis, we conclude that most privacy-preserving methods are not good enough to be applied to case-based explanations. To promote research on this topic, we formalize the privacy protection of visual case-based explanations as a multi-objective problem to preserve privacy, intelligibility, and relevant explanatory evidence regarding a predictive task. We empirically verify the potential of interpretability saliency maps as qualitative evaluation tools for anonymization. Finally, we identify and propose new lines of research to guide future work in the generation of privacy-preserving case-based explanations.

2022

Increased Robustness in Chest X-Ray Classification Through Clinical Report-Driven Regularization

Authors
Mata, D; Silva, W; Cardoso, JS;

Publication
PATTERN RECOGNITION AND IMAGE ANALYSIS (IBPRIA 2022)

Abstract
In highly regulated areas such as healthcare, there is a demand for explainable and trustworthy systems that are capable of providing some sort of foundation or logical reasoning for their functionality. Therefore, deep learning applications in such industries are increasingly required to provide this sense of accountability regarding their production value. Additionally, it is of utmost importance to take advantage of all possible data resources, in order to achieve greater efficiency in such intelligent frameworks, while maintaining a realistic medical scenario. As a way to explore this issue, we propose two models trained with information retained in chest radiographs and regularized by the associated medical reports. We argue that the knowledge extracted from the free-text radiology reports, in a multimodal training context, promotes more coherence, leading to better decisions and interpretability saliency maps. Our proposed approach demonstrated greater robustness than its baseline counterparts, showing better classification performance and also ensuring more concise, consistent and less dispersed saliency maps. Our proof-of-concept experiments were done using the publicly available multimodal radiology dataset MIMIC-CXR, which contains a myriad of chest X-rays and their corresponding free-text reports.

2022

Deep Aesthetic Assessment and Retrieval of Breast Cancer Treatment Outcomes

Authors
Silva, W; Carvalho, M; Mavioso, C; Cardoso, MJ; Cardoso, JS;

Publication
PATTERN RECOGNITION AND IMAGE ANALYSIS (IBPRIA 2022)

Abstract
Treatments for breast cancer have continued to evolve and improve in recent years, resulting in a substantial increase in survival rates, with approximately 80% of patients having a 10-year survival period. Given the serious impact that breast cancer treatments can have on a patient's body image, consequently affecting her self-confidence and sexual and intimate relationships, it is paramount to ensure that women receive the treatment that optimizes both survival and aesthetic outcomes. Currently, there is no gold standard for evaluating the aesthetic outcome of breast cancer treatment. In addition, there is no standard way to show patients the potential outcome of surgery. The presentation of similar cases from the past would be extremely important to manage women's expectations of the possible outcome. In this work, we propose a deep neural network to perform the aesthetic evaluation. As a proof of concept, we focus on a binary aesthetic evaluation. Besides its use for classification, this deep neural network can also be used to find the most similar past cases by searching for nearest neighbours in the high-semantic space before classification. We performed the experiments on a dataset consisting of 143 photos of women after conservative treatment for breast cancer. The results for accuracy and balanced accuracy showed the superior performance of our proposed model compared to the state of the art in aesthetic evaluation of breast cancer treatments. In addition, the model showed a good ability to retrieve similar previous cases, with the retrieved cases having the same or an adjacent class (in the 4-class setting) and similar types of asymmetry. Finally, a qualitative interpretability assessment was also performed to analyse the robustness and trustworthiness of the model.

2022

Computer-aided diagnosis through medical image retrieval in radiology

Authors
Silva, W; Goncalves, T; Harma, K; Schroder, E; Obmann, VC; Barroso, MC; Poellinger, A; Reyes, M; Cardoso, JS;

Publication
SCIENTIFIC REPORTS

Abstract
Currently, radiologists face an excessive workload, which leads to high levels of fatigue and, consequently, to undesired diagnosis mistakes. Decision support systems can be used to prioritize cases and help radiologists make quicker decisions. In this sense, medical content-based image retrieval systems can be of extreme utility by providing well-curated similar examples. Nonetheless, most medical content-based image retrieval systems work by finding the most similar image, which is not equivalent to finding the most similar image in terms of disease and its severity. Here, we propose an interpretability-driven and an attention-driven medical image retrieval system. We conducted experiments on a large and publicly available dataset of chest radiographs with structured labels derived from free-text radiology reports (MIMIC-CXR-JPG). We evaluated the methods on two common conditions: pleural effusion and (potential) pneumonia. As ground truth to perform the evaluation, query/test and catalogue images were classified and ordered by an experienced board-certified radiologist. For a profound and complete evaluation, additional radiologists also provided their rankings, which allowed us to infer inter-rater variability and yield qualitative performance levels. Based on our ground-truth ranking, we also quantitatively evaluated the proposed approaches by computing the normalized Discounted Cumulative Gain (nDCG). We found that the interpretability-guided approach outperforms the other state-of-the-art approaches and shows the best agreement with the most experienced radiologist. Furthermore, its performance lies within the observed inter-rater variability.

Supervised
theses

2022

Towards Biometrically-Morphed Medical Case-based Explanations

Author
Maria Manuel Domingos Carvalho

Institution
UM

2022

Biomedical Multimodal Explanations – Increasing Diversity and Complementarity in Explainable Artificial Intelligence

Author
Diogo Baptista Martins da Mata

Institution
UM

2021

A privacy-preserving framework for case-based interpretability in machine learning

Author
Maria Helena Sampaio de Mendonça Montenegro e Almeida

Institution
UM