Details
Name
Tiago Filipe Gonçalves
Position
External Research Collaborator
Since
10 February 2019
Nationality
Portugal
Centre
Telecommunications and Multimedia
Contacts
+351222094000
tiago.f.goncalves@inesctec.pt
2025
Authors
Zolfagharnasab, MH; Freitas, N; Gonçalves, T; Bonci, E; Mavioso, C; Cardoso, MJ; Oliveira, HP; Cardoso, JS;
Publication
ARTIFICIAL INTELLIGENCE AND IMAGING FOR DIAGNOSTIC AND TREATMENT CHALLENGES IN BREAST CARE, DEEP-BREATH 2024
Abstract
Breast cancer treatments often affect patients' body image, making aesthetic outcome predictions vital. This study introduces a Deep Learning (DL) multimodal retrieval pipeline using a dataset of 2,193 instances combining clinical attributes and RGB images of patients' upper torsos. We evaluate four retrieval techniques: Weighted Euclidean Distance (WED) with various configurations and a shallow Artificial Neural Network (ANN) for tabular data, pre-trained and fine-tuned Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) for images, and a multimodal approach combining both data types. The dataset, categorised into Excellent/Good and Fair/Poor outcomes, is organised into over 20K triplets for training and testing. Results show that fine-tuned multimodal ViTs notably enhance performance, achieving up to 73.85% accuracy and 80.62% Adjusted Discounted Cumulative Gain (ADCG). This framework not only aids in managing patient expectations by retrieving the most relevant post-surgical images but also promises broad applications in medical image analysis and retrieval. The main contributions of this paper are the development of a multimodal retrieval system for breast cancer patients based on post-surgery aesthetic outcomes and the evaluation of different models on a new dataset annotated by clinicians for image retrieval.
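As a rough illustration of the tabular branch of such a pipeline, the sketch below implements Weighted Euclidean Distance (WED) retrieval over attribute vectors. The feature dimensionality, weights, and data are invented for illustration; the paper's actual configurations are not reproduced here.

    # Minimal WED retrieval sketch over tabular attributes (illustrative only).
    import numpy as np

    def wed_retrieve(query, gallery, weights, k=5):
        # Return indices of the k gallery rows closest to the query
        # under a weighted Euclidean distance.
        diff = gallery - query                               # (N, D) broadcast differences
        dist = np.sqrt(((diff ** 2) * weights).sum(axis=1))  # per-row WED
        return np.argsort(dist)[:k]

    rng = np.random.default_rng(0)
    gallery = rng.random((100, 8))  # 100 patients, 8 normalised attributes (hypothetical)
    query = rng.random(8)
    weights = np.ones(8)            # uniform weights; the paper explores several configurations
    print(wed_retrieve(query, gallery, weights, k=3))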
2025
Authors
Albuquerque, C; Neto, PC; Gonçalves, T; Sequeira, AF;
Publication
HCI for Cybersecurity, Privacy and Trust - 7th International Conference, HCI-CPT 2025, Held as Part of the 27th HCI International Conference, HCII 2025, Gothenburg, Sweden, June 22-27, 2025, Proceedings, Part II
Abstract
Face recognition technology, despite its advancements and increasing accuracy, still presents significant challenges in explainability and ethical concerns, especially when applied in sensitive domains such as surveillance, law enforcement, and access control. The opaque nature of deep learning models undermines transparency and user trust and can mask bias. Concurrently, the proliferation of web applications presents a unique opportunity to develop accessible and interactive tools for demonstrating and analysing these complex systems. These tools can facilitate the exploration of model decisions on various images, aiding in bias mitigation or enhancing users' trust by allowing them to see the model in action and understand its reasoning. We propose an explainable face recognition web application designed to support enrolment, identification, authentication, and verification while providing visual explanations through pixel-wise importance maps to clarify the model's decision-making process. The system is built in compliance with the European Union General Data Protection Regulation, ensuring data privacy and user control over personal information. The application is also designed for scalability, capable of efficiently managing large datasets. Load tests conducted on databases containing up to 1,000,000 images confirm its efficiency. This scalability ensures robust performance and a seamless user experience even as the database grows.
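The abstract does not specify how the pixel-wise importance maps are computed; a generic way to obtain such maps for a matching model is occlusion-based attribution, sketched below with a stand-in embedding function (the real system would use a trained face-recognition network).

    # Occlusion-based importance map for a face-matching model (illustrative sketch).
    import numpy as np

    def embed(img):
        # Stand-in embedding; replace with a trained face-recognition CNN.
        return img.reshape(-1)[::97][:128]

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def occlusion_map(img, ref, patch=16):
        base = cosine(embed(img), ref)
        h, w = img.shape
        heat = np.zeros((h // patch, w // patch))
        for i in range(0, h, patch):
            for j in range(0, w, patch):
                occ = img.copy()
                occ[i:i+patch, j:j+patch] = 0.0  # hide one patch
                heat[i // patch, j // patch] = base - cosine(embed(occ), ref)
        return heat  # high values mark regions the match depends on

    img = np.random.rand(112, 112).astype(np.float32)
    ref = embed(np.random.rand(112, 112).astype(np.float32))
    print(occlusion_map(img, ref).shape)  # (7, 7) patch-level importance grid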
2025
Authors
Sousa, P; Campas, D; Andrade, J; Pereira, P; Gonçalves, T; Teixeira, LF; Pereira, T; Oliveira, HP;
Publication
Pattern Recognition and Image Analysis - 12th Iberian Conference, IbPRIA 2025, Coimbra, Portugal, June 30 - July 3, 2025, Proceedings, Part II
Abstract
Cancer is a leading cause of mortality worldwide, with breast and lung cancer being the most prevalent globally. Early and accurate diagnosis is crucial for successful treatment, and medical imaging techniques play a pivotal role in achieving this. This paper proposes a novel pipeline that leverages generative artificial intelligence to enhance medical images by combining synthetic image generation and super-resolution techniques. The framework is validated in two medical use cases (breast and lung cancers), demonstrating its potential to improve the quality and quantity of medical imaging data, ultimately contributing to more precise and effective cancer diagnosis and treatment. Overall, although some limitations exist, this paper achieves satisfactory results at an image size conducive to specialist analysis and further expands this field's capabilities.
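As a schematic of the two-stage idea (generation followed by super-resolution), the stub below composes the stages with placeholder implementations; a real pipeline would substitute a trained generative model and a learned super-resolution network.

    # Two-stage pipeline stub: synthetic generation, then super-resolution.
    import numpy as np

    def generate_synthetic(n, size=64):
        # Placeholder generator: random "images" stand in for samples
        # drawn from a trained generative model.
        return np.random.rand(n, size, size).astype(np.float32)

    def super_resolve(batch, scale=4):
        # Placeholder SR: nearest-neighbour upsampling instead of a
        # learned super-resolution network.
        return batch.repeat(scale, axis=1).repeat(scale, axis=2)

    low_res = generate_synthetic(8)    # stage 1: expand the dataset
    high_res = super_resolve(low_res)  # stage 2: reach a size suited to analysis
    print(low_res.shape, "->", high_res.shape)  # (8, 64, 64) -> (8, 256, 256)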
2025
Authors
Capozzi, L; Ferreira, L; Gonçalves, T; Rebelo, A; Cardoso, JS; Sequeira, AF;
Publication
Pattern Recognition and Image Analysis - 12th Iberian Conference, IbPRIA 2025, Coimbra, Portugal, June 30 - July 3, 2025, Proceedings, Part II
Abstract
The rapid advancement of wireless technologies, particularly Wi-Fi, has spurred significant research into indoor human activity detection across various domains (e.g., healthcare, security, and industry). This work explores the non-invasive and cost-effective Wi-Fi paradigm and the application of deep learning to human activity recognition using Wi-Fi signals. Focusing on the challenges of machine interpretability, and motivated by the increase in data availability and computational power, this paper uses explainable artificial intelligence to understand the inner workings of transformer-based deep neural networks designed to estimate human pose (i.e., human skeleton key points) from Wi-Fi channel state information. Using different strategies to assess the sub-carriers most relevant to the model's predictions (i.e., rollout attention and masking attention), we evaluate the performance of the model when it uses a given number of sub-carriers as input, selected randomly or by ascending (high-attention) or descending (low-attention) order. We conclude that models trained with fewer (but relevant) sub-carriers are competitive with the baseline (trained with all sub-carriers) while being more computationally efficient (i.e., processing more data per second).
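Attention rollout, one of the strategies the abstract names, propagates attention through the layers while accounting for residual connections; the sketch below shows the standard computation (Abnar & Zuidema, 2020) on random stand-in attention maps, with the token count and layer depth chosen arbitrarily.

    # Attention-rollout sketch for ranking input tokens (e.g., sub-carriers).
    import numpy as np

    def attention_rollout(attn_layers):
        # attn_layers: list of (tokens, tokens) head-averaged attention maps.
        n = attn_layers[0].shape[0]
        rollout = np.eye(n)
        for a in attn_layers:
            a = a + np.eye(n)                      # add identity for residual connections
            a = a / a.sum(axis=-1, keepdims=True)  # re-normalise rows
            rollout = a @ rollout                  # compose attention across layers
        return rollout

    rng = np.random.default_rng(0)
    layers = [rng.random((64, 64)) for _ in range(6)]  # 64 tokens ~ sub-carriers (hypothetical)
    scores = attention_rollout(layers).mean(axis=0)    # average relevance per input token
    print(np.argsort(scores)[::-1][:16])               # the 16 highest-attention sub-carriers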
2024
Authors
Neto, PC; Mamede, RM; Albuquerque, C; Gonçalves, T; Sequeira, AF;
Publication
2024 IEEE 18TH INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION, FG 2024
Abstract
Face recognition applications have grown in parallel with the size of datasets, the complexity of deep learning models, and the available computational power. However, while deep learning models evolve to become more capable and computational power keeps increasing, the available datasets are being retracted and removed from public access. Privacy and ethical concerns are relevant topics within these domains. Through generative artificial intelligence, researchers have put effort into developing completely synthetic datasets that can be used to train face recognition systems. Nonetheless, recent advances have not been sufficient to achieve performance comparable to state-of-the-art models trained on real data. To study the drift between the performance of models trained on real and synthetic datasets, we leverage a massive attribute classifier (MAC) to create annotations for four datasets: two real and two synthetic. From these annotations, we study the distribution of each attribute within all four datasets and further inspect the differences between real and synthetic datasets on the attribute set. Comparing the distributions through the Kullback-Leibler divergence, we found differences between real and synthetic samples. Interestingly, we verified that while real samples suffice to explain the synthetic distribution, the converse does not hold.
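The asymmetry noted in the last sentence is a property of the Kullback-Leibler divergence itself; the sketch below computes both directions over hypothetical attribute histograms to make the distinction concrete.

    # KL divergence between discrete attribute histograms (illustrative counts).
    import numpy as np

    def kl_divergence(p, q, eps=1e-12):
        # D_KL(p || q): how poorly q explains samples drawn from p.
        p = p / p.sum()
        q = q / q.sum()
        return float(np.sum(p * np.log((p + eps) / (q + eps))))

    real = np.array([120, 340, 230, 310], dtype=float)  # counts per attribute bin (invented)
    synth = np.array([90, 400, 260, 250], dtype=float)
    print(kl_divergence(synth, real))  # real as reference: does real explain synthetic?
    print(kl_divergence(real, synth))  # the reverse direction generally differs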
Supervised theses
2022
Author
Pedro João Cruz Serrano e Silva
Institution
UP-FEUP