About

Tiago Gonçalves received his MSc in Bioengineering (Biomedical Engineering) from Faculdade de Engenharia da Universidade do Porto (FEUP) in 2019. Currently, he is a PhD Candidate in Electrical and Computer Engineering at FEUP and a research assistant at the Centre for Telecommunications and Multimedia of INESC TEC with the Visual Computing & Machine Intelligence (VCMI) Research Group. His research interests include machine learning, explainable artificial intelligence (in-model approaches), computer vision, medical decision support systems, and machine learning deployment.


Details

  • Name

    Tiago Filipe Gonçalves
  • Role

    External Research Collaborator
  • Since

    10th February 2019
Publications

2026

Enhancing Medical Image Analysis: A Pipeline Combining Synthetic Image Generation and Super-Resolution

Authors
Sousa, P; Campai, D; Andrade, J; Pereira, P; Goncalves, T; Teixeira, LF; Pereira, T; Oliveira, HP;

Publication
PATTERN RECOGNITION AND IMAGE ANALYSIS, IBPRIA 2025, PT II

Abstract
Cancer is a leading cause of mortality worldwide, with breast and lung cancer being the most prevalent. Early and accurate diagnosis is crucial for successful treatment, and medical imaging techniques play a pivotal role in achieving this. This paper proposes a novel pipeline that leverages generative artificial intelligence to enhance medical images by combining synthetic image generation and super-resolution techniques. The framework is validated in two medical use cases (breast and lung cancers), demonstrating its potential to improve the quality and quantity of medical imaging data, ultimately contributing to more precise and effective cancer diagnosis and treatment. Overall, although some limitations exist, this work achieved satisfactory results at an image size conducive to specialist analysis, further expanding the field's capabilities.
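
As an illustration of the two-stage design described in the abstract, the sketch below chains a generative model with a super-resolution model. It is a minimal sketch, not the paper's pipeline: both networks are toy stand-ins (a linear generator and a bilinear-upsampling refiner), and the image sizes are arbitrary.

# A minimal sketch (not the paper's pipeline) of chaining synthetic image
# generation with super-resolution. Both networks are toy stand-ins for
# whatever generative and SR models would actually be used.
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Maps a latent vector to a low-resolution synthetic image (1x64x64)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 64 * 64), nn.Tanh())
    def forward(self, z):
        return self.net(z).view(-1, 1, 64, 64)

class ToySuperResolver(nn.Module):
    """Upsamples a low-resolution image 4x and refines it with a conv layer."""
    def __init__(self):
        super().__init__()
        self.upsample = nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False)
        self.refine = nn.Conv2d(1, 1, kernel_size=3, padding=1)
    def forward(self, x):
        return self.refine(self.upsample(x))

generator, super_resolver = ToyGenerator(), ToySuperResolver()
z = torch.randn(8, 128)                        # a batch of latent codes
synthetic_lr = generator(z)                    # stage 1: synthetic low-res images
synthetic_hr = super_resolver(synthetic_lr)    # stage 2: super-resolved images
print(synthetic_lr.shape, synthetic_hr.shape)  # (8, 1, 64, 64) -> (8, 1, 256, 256)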

2026

Deciphering the Silent Signals: Unveiling Frequency Importance for Wi-Fi-Based Human Pose Estimation with Explainability

Authors
Capozzi, L; Ferreira, L; Gonçalves, T; Rebelo, A; Cardoso, JS; Sequeira, AF;

Publication
PATTERN RECOGNITION AND IMAGE ANALYSIS, IBPRIA 2025, PT II

Abstract
The rapid advancement of wireless technologies, particularly Wi-Fi, has spurred significant research into indoor human activity detection across various domains (e.g., healthcare, security, and industry). This work explores the non-invasive and cost-effective Wi-Fi paradigm and the application of deep learning for human activity recognition using Wi-Fi signals. Focusing on the challenges in machine interpretability, motivated by the increase in data availability and computational power, this paper uses explainable artificial intelligence to understand the inner workings of transformer-based deep neural networks designed to estimate human pose (i.e., human skeleton key points) from Wi-Fi channel state information. Using different strategies to assess the most relevant sub-carriers (i.e., rollout attention and masking attention) for the model predictions, we evaluate the performance of the model when it uses a given number of sub-carriers as input, selected randomly or by ascending (high-attention) or descending (low-attention) order. We concluded that the models trained with fewer (but relevant) sub-carriers are competitive with the baseline (trained with all sub-carriers) but better in terms of computational efficiency (i.e., processing more data per second).
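
The sub-carrier selection strategy summarised above can be sketched as follows. This is a minimal illustration, not the authors' code: the attention matrices are random stand-ins for those of a trained transformer, and the number of CSI sub-carriers (114) is an assumed value.

# A minimal sketch of ranking Wi-Fi sub-carriers by attention rollout and
# keeping only the top-k as model input (random attention as a stand-in).
import numpy as np

def attention_rollout(attentions):
    """attentions: list of (heads, tokens, tokens) arrays, one per layer."""
    tokens = attentions[0].shape[-1]
    rollout = np.eye(tokens)
    for layer_attn in attentions:
        attn = layer_attn.mean(axis=0)            # average over heads
        attn = 0.5 * attn + 0.5 * np.eye(tokens)  # account for residual connections
        attn /= attn.sum(axis=-1, keepdims=True)  # re-normalise rows
        rollout = attn @ rollout                  # propagate through layers
    return rollout

rng = np.random.default_rng(0)
n_layers, n_heads, n_subcarriers = 4, 8, 114      # 114 is an assumed CSI sub-carrier count
attentions = [rng.random((n_heads, n_subcarriers, n_subcarriers)) for _ in range(n_layers)]

rollout = attention_rollout(attentions)
importance = rollout.mean(axis=0)                 # relevance received by each sub-carrier
k = 32
top_k = np.argsort(importance)[::-1][:k]          # "high-attention" selection
low_k = np.argsort(importance)[:k]                # "low-attention" baseline
random_k = rng.choice(n_subcarriers, size=k, replace=False)  # random baseline
print(sorted(top_k))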

2025

Evaluating the Impact of Pulse Oximetry Bias in Machine Learning Under Counterfactual Thinking

Authors
Martins, I; Matos, J; Goncalves, T; Celi, LA; Wong, AKI; Cardoso, JS;

Publication
APPLICATIONS OF MEDICAL ARTIFICIAL INTELLIGENCE, AMAI 2024

Abstract
Algorithmic bias in healthcare mirrors existing data biases. However, the factors driving unfairness are not always known. Medical devices capture significant amounts of data but are prone to errors; for instance, pulse oximeters overestimate the arterial oxygen saturation of darker-skinned individuals, leading to worse outcomes. The impact of this bias in machine learning (ML) models remains unclear. This study addresses the technical challenges of quantifying the impact of medical device bias in downstream ML. Our experiments compare a perfect world, without pulse oximetry bias, using SaO₂ (blood gas), to the actual world, with biased measurements, using SpO₂ (pulse oximetry). Under this counterfactual design, two models are trained with identical data, features, and settings, except for the method of measuring oxygen saturation: models using SaO₂ are a control and models using SpO₂ a treatment. The blood-gas oximetry linked dataset was a suitable testbed, containing 163,396 nearly simultaneous SpO₂–SaO₂ paired measurements, aligned with a wide array of clinical features and outcomes. We studied three classification tasks: in-hospital mortality, respiratory SOFA score in the next 24 h, and SOFA score increase by two points. Models using SaO₂ instead of SpO₂ generally showed better performance. Patients with overestimation of O₂ by pulse oximetry of ≥ 3% had significant decreases in mortality prediction recall, from 0.63 to 0.59, P < 0.001. This mirrors clinical processes where biased pulse oximetry readings provide clinicians with false reassurance of patients' oxygen levels. A similar degradation happened in ML models, with pulse oximetry biases leading to more false negatives in predicting adverse outcomes.
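
The counterfactual design summarised above can be sketched as two otherwise identical models that differ only in the oxygen-saturation feature. This is a minimal illustration on synthetic data, not the study's code or dataset; the logistic-regression model and the simulated bias are placeholder assumptions.

# A minimal sketch of the counterfactual design: identical models, differing
# only in whether they see unbiased SaO2 (control) or biased SpO2 (treatment).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(42)
n = 20_000
other_features = rng.normal(size=(n, 5))
sao2 = np.clip(rng.normal(94, 4, size=n), 70, 100)    # "true" saturation (synthetic)
bias = rng.normal(2.0, 1.5, size=n).clip(min=0)       # simulated pulse-oximetry overestimation
spo2 = np.clip(sao2 + bias, 70, 100)                  # biased measurement
p = 1 / (1 + np.exp(0.5 * (sao2 - 90) - other_features[:, 0]))  # adverse outcome likelier at low SaO2
y = rng.binomial(1, p)

def fit_and_recall(sat_column):
    X = np.column_stack([sat_column, other_features])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return recall_score(y_te, model.predict(X_te))

print("recall with SaO2 (control):  ", round(fit_and_recall(sao2), 3))
print("recall with SpO2 (treatment):", round(fit_and_recall(spo2), 3))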

2025

Predicting Aesthetic Outcomes in Breast Cancer Surgery: A Multimodal Retrieval Approach

Authors
Zolfagharnasab, MH; Freitas, N; Gonçalves, T; Bonci, E; Mavioso, C; Cardoso, MJ; Oliveira, HP; Cardoso, JS;

Publication
ARTIFICIAL INTELLIGENCE AND IMAGING FOR DIAGNOSTIC AND TREATMENT CHALLENGES IN BREAST CARE, DEEP-BREATH 2024

Abstract
Breast cancer treatments often affect patients' body image, making aesthetic outcome predictions vital. This study introduces a Deep Learning (DL) multimodal retrieval pipeline using a dataset of 2,193 instances combining clinical attributes and RGB images of patients' upper torsos. We evaluate four retrieval techniques: Weighted Euclidean Distance (WED) with various configurations and shallow Artificial Neural Network (ANN) for tabular data, pre-trained and fine-tuned Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs), and a multimodal approach combining both data types. The dataset, categorised into Excellent/Good and Fair/Poor outcomes, is organised into over 20K triplets for training and testing. Results show fine-tuned multimodal ViTs notably enhance performance, achieving up to 73.85% accuracy and 80.62% Adjusted Discounted Cumulative Gain (ADCG). This framework not only aids in managing patient expectations by retrieving the most relevant post-surgical images but also promises broad applications in medical image analysis and retrieval. The main contributions of this paper are the development of a multimodal retrieval system for breast cancer patients based on post-surgery aesthetic outcome and the evaluation of different models on a new dataset annotated by clinicians for image retrieval.
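
The tabular branch of the retrieval pipeline can be illustrated with a short Weighted Euclidean Distance (WED) sketch. This is not the paper's implementation: the attributes, weights, and outcome labels below are illustrative placeholders.

# A minimal sketch of retrieval by Weighted Euclidean Distance over tabular
# clinical attributes; all values here are toy placeholders.
import numpy as np

def weighted_euclidean(query, gallery, weights):
    """Distance from one query vector to every row of the gallery."""
    diff = gallery - query
    return np.sqrt((weights * diff ** 2).sum(axis=1))

rng = np.random.default_rng(1)
gallery = rng.random((500, 4))             # rows: past patients; columns: normalised attributes (placeholders)
weights = np.array([0.5, 1.0, 2.0, 2.0])   # illustrative per-attribute weights
labels = rng.integers(0, 2, size=500)      # 0 = Excellent/Good, 1 = Fair/Poor (toy annotations)

query = rng.random(4)                      # incoming patient's attributes
dist = weighted_euclidean(query, gallery, weights)
top_k = np.argsort(dist)[:5]               # indices of the most similar past cases
print("retrieved cases:", top_k, "outcomes:", labels[top_k])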

2025

An Integrated and User-Friendly Platform for the Deployment of Explainable Artificial Intelligence Methods Applied to Face Recognition

Authors
Albuquerque, C; Neto, PC; Gonçalves, T; Sequeira, AF;

Publication
HCI FOR CYBERSECURITY, PRIVACY AND TRUST, HCI-CPT 2025, PT II

Abstract
Face recognition technology, despite its advancements and increasing accuracy, still raises significant explainability challenges and ethical concerns, especially when applied in sensitive domains such as surveillance, law enforcement, and access control. The opaque nature of deep learning models raises concerns about transparency, bias, and user trust. Concurrently, the proliferation of web applications presents a unique opportunity to develop accessible and interactive tools for demonstrating and analysing these complex systems. These tools can facilitate model decision exploration with various images, aiding in bias mitigation or enhancing users' trust by allowing them to see the model in action and understand its reasoning. We propose an explainable face recognition web application designed to support enrolment, identification, authentication, and verification while providing visual explanations through pixel-wise importance maps that clarify the model's decision-making process. The system is built in compliance with the European Union General Data Protection Regulation, ensuring data privacy and user control over personal information. The application is also designed for scalability, capable of efficiently managing large datasets. Load tests conducted on databases containing up to 1,000,000 images confirm its efficiency. This scalability ensures robust performance and a seamless user experience even as the database grows.
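
The pixel-wise importance maps mentioned above can be illustrated with a simple input-gradient sketch. This is not the platform's method or model: the embedding network is a toy stand-in for a real face-recognition backbone, and plain gradient saliency is used as one possible way to obtain such a map.

# A minimal sketch of a pixel-wise importance map for a verification decision,
# computed as the input gradient of the similarity score (toy network).
import torch
import torch.nn as nn
import torch.nn.functional as F

embedder = nn.Sequential(                      # toy face-embedding network
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 32),
)

probe = torch.rand(1, 3, 112, 112, requires_grad=True)  # image being verified
reference = torch.rand(1, 3, 112, 112)                   # enrolled image

similarity = F.cosine_similarity(embedder(probe), embedder(reference))
similarity.sum().backward()                               # gradients w.r.t. probe pixels

importance = probe.grad.abs().max(dim=1).values           # (1, 112, 112) saliency map
importance = (importance - importance.min()) / (importance.max() - importance.min() + 1e-8)
print(similarity.item(), importance.shape)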

Supervised Theses

2022

Human Feedback During Neural Network Training

Author
Pedro João Cruz Serrano e Silva

Institution
UP-FEUP