About

Tiago Gonçalves received his MSc in Bioengineering (Biomedical Engineering) from Faculdade de Engenharia da Universidade do Porto (FEUP) in 2019. Currently, he is a PhD Candidate in Electrical and Computer Engineering at FEUP and a research assistant at the Centre for Telecommunications and Multimedia of INESC TEC with the Visual Computing & Machine Intelligence (VCMI) Research Group. His research interests include machine learning, explainable artificial intelligence (in-model approaches), computer vision, medical decision support systems, and machine learning deployment.

Details

  • Name

    Tiago Filipe Gonçalves
  • Role

    External Research Collaborator
  • Since

    10th February 2019
Publications

2025

An Integrated and User-Friendly Platform for the Deployment of Explainable Artificial Intelligence Methods Applied to Face Recognition

Authors
Albuquerque, C; Neto, PC; Gonçalves, T; Sequeira, AF;

Publication
HCI for Cybersecurity, Privacy and Trust - 7th International Conference, HCI-CPT 2025, Held as Part of the 27th HCI International Conference, HCII 2025, Gothenburg, Sweden, June 22-27, 2025, Proceedings, Part II

Abstract
Face recognition technology, despite its advancements and increasing accuracy, still presents significant challenges in explainability and ethical concerns, especially when applied in sensitive domains such as surveillance, law enforcement, and access control. The opaque nature of deep learning models undermines transparency and user trust and makes potential bias harder to detect. Concurrently, the proliferation of web applications presents a unique opportunity to develop accessible and interactive tools for demonstrating and analysing these complex systems. These tools can facilitate model decision exploration with various images, aiding in bias mitigation or enhancing users’ trust by allowing them to see the model in action and understand its reasoning. We propose an explainable face recognition web application designed to support enrolment, identification, authentication, and verification while providing visual explanations through pixel-wise importance maps to clarify the model’s decision-making process. The system is built in compliance with the European Union General Data Protection Regulation, ensuring data privacy and user control over personal information. The application is also designed for scalability, capable of efficiently managing large datasets. Load tests conducted on databases containing up to 1,000,000 images confirm its efficiency. This scalability ensures robust performance and a seamless user experience even with database growth. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.

2024

Massively Annotated Datasets for Assessment of Synthetic and Real Data in Face Recognition

Authors
Neto, PC; Mamede, RM; Albuquerque, C; Gonçalves, T; Sequeira, AF;

Publication
2024 IEEE 18th International Conference on Automatic Face and Gesture Recognition, FG 2024

Abstract
Face recognition applications have grown in parallel with the size of datasets, complexity of deep learning models and computational power. However, while deep learning models evolve to become more capable and computational power keeps increasing, the datasets available are being retracted and removed from public access. Privacy and ethical concerns are relevant topics within these domains. Through generative artificial intelligence, researchers have put efforts into the development of completely synthetic datasets that can be used to train face recognition systems. Nonetheless, the recent advances have not been sufficient to achieve performance comparable to the state-of-the-art models trained on real data. To study the drift between the performance of models trained on real and synthetic datasets, we leverage a massive attribute classifier (MAC) to create annotations for four datasets: two real and two synthetic. From these annotations, we conduct studies on the distribution of each attribute within all four datasets. Additionally, we further inspect the differences between real and synthetic datasets on the attribute set. When comparing through the Kullback-Leibler divergence we have found differences between real and synthetic samples. Interestingly enough, we have verified that while real samples suffice to explain the synthetic distribution, the opposite could not be further from being true.
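As a rough illustration of the distribution comparison mentioned in this abstract, the sketch below computes the Kullback-Leibler divergence between two discrete attribute histograms. The function name and the example values are illustrative assumptions, not code or data from the paper; the point is that KL divergence is asymmetric, so comparing real against synthetic and synthetic against real can give different answers.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete attribute distributions (histograms)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()          # normalise to valid probability distributions
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical per-attribute proportions for a real and a synthetic dataset.
real_attr = [0.55, 0.30, 0.15]
synthetic_attr = [0.70, 0.20, 0.10]

print(kl_divergence(real_attr, synthetic_attr))  # KL(real || synthetic)
print(kl_divergence(synthetic_attr, real_attr))  # KL(synthetic || real) differs: KL is asymmetric
```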

2024

On the Suitability of B-cos Networks for the Medical Domain

Authors
Rio-Torto, I; Gonçalves, T; Cardoso, JS; Teixeira, LF;

Publication
IEEE International Symposium on Biomedical Imaging, ISBI 2024

Abstract
In fields that rely on high-stakes decisions, such as medicine, interpretability plays a key role in promoting trust and facilitating the adoption of deep learning models by the clinical communities. In the medical image analysis domain, gradient-based class activation maps are the most widely used explanation methods, and the field lacks a more in-depth investigation into inherently interpretable models that focus on integrating knowledge that ensures the model is learning the correct rules. B-cos networks, a new approach for increasing the interpretability of deep neural networks by inducing weight-input alignment during training, showed promising results on natural image classification. In this work, we study the suitability of these B-cos networks to the medical domain by testing them on different use cases (skin lesions, diabetic retinopathy, cervical cytology, and chest X-rays) and conducting a thorough evaluation of several explanation quality assessment metrics. We find that, just like in natural image classification, B-cos explanations yield more localised maps, but it is not clear that they are better than other methods' explanations when considering more explanation properties.
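For readers unfamiliar with the weight-input alignment idea, the snippet below is a minimal sketch of a single B-cos unit as described in the original B-cos literature; the function name, the choice of B, and the example vectors are illustrative assumptions, not code from this study. The linear response is scaled by |cos(x, w)|^(B-1), so a unit can only produce a large output when its weight vector aligns with the input.

```python
import numpy as np

def bcos_unit(x, w, B=2, eps=1e-9):
    """Single B-cos unit: the linear response w_hat . x is scaled by
    |cos(x, w_hat)|^(B-1), so large outputs require weight-input alignment.
    For B=1 this reduces to an ordinary linear unit with a unit-norm weight."""
    w_hat = w / (np.linalg.norm(w) + eps)                # unit-norm weight
    cos = np.dot(x, w_hat) / (np.linalg.norm(x) + eps)   # cosine similarity
    return np.abs(cos) ** (B - 1) * np.dot(x, w_hat)

# Aligned vs. misaligned inputs: the aligned one yields a much larger response.
w = np.array([1.0, 2.0, 0.5])
print(bcos_unit(np.array([2.0, 4.0, 1.0]), w, B=2))   # well aligned with w
print(bcos_unit(np.array([2.0, -1.0, 0.0]), w, B=2))  # nearly orthogonal to w, output near zero
```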

2024

An End-to-End Framework to Classify and Generate Privacy-Preserving Explanations in Pornography Detection

Authors
Vieira, M; Gonçalves, T; Silva, W; Sequeira, AF;

Publication
BIOSIG 2024 - Proceedings of the 23rd International Conference of the Biometrics Special Interest Group

Abstract
The proliferation of explicit material online, particularly pornography, has emerged as a paramount concern in our society. While state-of-the-art pornography detection models already show some promising results, their decision-making processes are often opaque, raising ethical issues. This study focuses on uncovering the decision-making process of such models, specifically fine-tuned convolutional neural networks and transformer architectures. We compare various explainability techniques to illuminate the limitations, potential improvements, and ethical implications of using these algorithms. Results show that models trained on diverse and dynamic datasets tend to have more robustness and generalisability when compared to models trained on static datasets. Additionally, transformer models demonstrate superior performance and generalisation compared to convolutional ones. Furthermore, we implemented a privacy-preserving framework during explanation retrieval, which contributes to developing secure and ethically sound biometric applications. © 2024 IEEE.

2024

Interpretable AI for medical image analysis: methods, evaluation, and clinical considerations

Authors
Gonçalves, T; Hedström, A; Pahud de Mortanges, A; Li, X; Müller, H; Cardoso, S; Reyes, M;

Publication
Trustworthy AI in Medical Imaging

Abstract
In the healthcare context, artificial intelligence (AI) has the potential to power decision support systems and help health professionals in their clinical decisions. However, given its complexity, AI is usually seen as a black box that receives data and outputs a prediction. This behavior may jeopardize the adoption of this technology by the healthcare community, which values the existence of explanations to justify a clinical decision. Moreover, developers must have a strategy to assess and audit these systems to ensure their reproducibility and quality in production. The field of interpretable artificial intelligence emerged to study how these algorithms work and clarify their behavior. This chapter reviews several interpretability methods for AI algorithms in medical imaging, discussing their functioning, limitations, benefits, applications, and evaluation strategies. The chapter concludes with considerations that might contribute to bringing these methods closer to the daily routine of healthcare professionals. © 2025 Elsevier Inc. All rights reserved.

Supervised Thesis

2022

Human Feedback During Neural Network Training

Author
Pedro João Cruz Serrano e Silva

Institution
UP-FEUP