About

Jaime S. Cardoso graduated in Electrical and Computer Engineering in 1999, received an MSc in Mathematical Engineering in 2005 and a PhD in Computer Vision in 2006, all from the University of Porto. He is an Associate Professor with Habilitation at the Faculty of Engineering of the University of Porto (FEUP) and a Senior Researcher in Information Processing and Pattern Recognition at the Centre for Telecommunications and Multimedia of INESC TEC.

His research rests on three main domains: computer vision, machine learning, and decision support systems. His work in image and video processing has addressed biometrics, medical imaging, and video tracking for surveillance and sports applications. His machine learning research focuses on adapting learning systems to the challenging conditions of visual information. His work on decision support systems has been directed at medical applications, always anchored in the automatic analysis of visual information.

He has co-authored more than 150 papers, over 50 of which in international journals, with more than 6500 citations (Google Scholar). He was principal investigator on 6 R&D projects and participated in 14 R&D projects, including 5 European projects and a direct contract with the BBC in the United Kingdom.

Topics of interest
Details

  • Name

    Jaime Cardoso
  • Position

    Coordinating Researcher
  • Since

    15 September 1998
Publications

2026

Deciphering the Silent Signals: Unveiling Frequency Importance for Wi-Fi-Based Human Pose Estimation with Explainability

Authors
Capozzi, L; Ferreira, L; Gonçalves, T; Rebelo, A; Cardoso, JS; Sequeira, AF;

Publication
PATTERN RECOGNITION AND IMAGE ANALYSIS, IBPRIA 2025, PT II

Abstract
The rapid advancement of wireless technologies, particularly Wi-Fi, has spurred significant research into indoor human activity detection across various domains (e.g., healthcare, security, and industry). This work explores the non-invasive and cost-effective Wi-Fi paradigm and the application of deep learning for human activity recognition using Wi-Fi signals. Focusing on the challenges in machine interpretability, motivated by the increase in data availability and computational power, this paper uses explainable artificial intelligence to understand the inner workings of transformer-based deep neural networks designed to estimate human pose (i.e., human skeleton key points) from Wi-Fi channel state information. Using different strategies to assess the most relevant sub-carriers (i.e., rollout attention and masking attention) for the model predictions, we evaluate the performance of the model when it uses a given number of sub-carriers as input, selected randomly or by ascending (high-attention) or descending (low-attention) order. We concluded that the models trained with fewer (but relevant) sub-carriers are competitive with the baseline (trained with all sub-carriers) but better in terms of computational efficiency (i.e., processing more data per second).
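The sub-carrier selection strategy described above can be sketched as follows. This is a minimal illustration of attention rollout and top-k ranking, assuming per-layer attention matrices over sub-carrier tokens are available; all names are hypothetical and this is not the authors' implementation.

```python
import numpy as np

def attention_rollout(layer_attentions):
    """Multiply per-layer attention maps (with a residual term) to trace
    how attention flows from the input tokens to the final layer."""
    n = layer_attentions[0].shape[0]
    rollout = np.eye(n)
    for A in layer_attentions:
        A_res = 0.5 * (A + np.eye(n))               # account for residual connections
        A_res = A_res / A_res.sum(axis=-1, keepdims=True)
        rollout = A_res @ rollout
    return rollout

def top_k_subcarriers(layer_attentions, k):
    """Rank sub-carrier tokens by the attention they receive and keep the top k."""
    rollout = attention_rollout(layer_attentions)
    importance = rollout.mean(axis=0)               # attention received per token
    return np.argsort(importance)[::-1][:k]
```

Masking the channel-state input to the selected sub-carriers, as in the paper, would then retrain or evaluate the model on this reduced input.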

2025

HER2match dataset

Authors
Klöckner, P; Teixeira, J; Montezuma, D; Cardoso, JS; Horlings, HM; de Oliveira, SP;

Publication

Abstract

2025

Predicting Aesthetic Outcomes of Breast Cancer Surgery: A Robust and Explainable Image Retrieval Approach

Authors
Ferreira, P; Zolfagharnasab, MH; Gonçalves, T; Bonci, E; Mavioso, C; Cardoso, MJ; Cardoso, JS;

Publication
Artificial Intelligence and Imaging for Diagnostic and Treatment Challenges in Breast Care - Second Deep Breast Workshop, Deep-Breath 2025, Held in Conjunction with MICCAI 2025, Daejeon, South Korea, September 23, 2025, Proceedings

Abstract
Accurate retrieval of post-surgical images plays a critical role in surgical planning for breast cancer patients. However, current content-based image retrieval methods face challenges related to limited interpretability, poor robustness to image noise, and reduced generalization across clinical settings. To address these limitations, we propose a multistage retrieval pipeline integrating saliency-based explainability, noise-reducing image pre-processing, and ensemble learning. Evaluated on a dataset of post-operative breast cancer patient images, our approach achieves contrastive accuracy of 77.67% for Excellent/Good and 84.98% for Fair/Poor outcomes, surpassing prior studies by 8.37% and 11.80%, respectively. Explainability analysis provided essential insight by showing that feature extractors often attend to irrelevant regions, thereby motivating targeted input refinement. Ablations show that expanded bounding box inputs improve performance over original images, with gains of 0.78% and 0.65% contrastive accuracy for Excellent/Good and Fair/Poor, respectively. In contrast, the use of segmented images leads to a performance drop (1.33% and 1.65%) due to the loss of contextual cues. Furthermore, ensemble learning yielded additional gains of 0.89% and 3.60% over the best-performing single-model baselines. These findings underscore the importance of targeted input refinement and ensemble integration for robust and generalizable image retrieval systems. © 2025 Elsevier B.V., All rights reserved.
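As a hedged sketch of the ensemble idea, one common form of late fusion averages the rankings produced by several feature extractors; function and variable names here are hypothetical, and the paper's actual pipeline additionally includes saliency-based explainability and noise-reducing pre-processing.

```python
import numpy as np

def cosine_similarities(query, gallery):
    """Cosine similarity between one query vector and each gallery vector."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return g @ q

def ensemble_retrieval(query_feats, gallery_feats):
    """Late-fusion ensemble: rank the gallery with each feature extractor,
    then order items by their mean rank across extractors."""
    n = gallery_feats[0].shape[0]
    mean_rank = np.zeros(n)
    for q, g in zip(query_feats, gallery_feats):
        sims = cosine_similarities(q, g)
        order = np.argsort(-sims)                   # best match first
        ranks = np.empty(n)
        ranks[order] = np.arange(n)
        mean_rank += ranks
    return np.argsort(mean_rank / len(query_feats))
```

Rank averaging, rather than score averaging, avoids having to calibrate similarity scales across heterogeneous feature extractors.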

2025

Towards Robust Breast Segmentation: Leveraging Depth Awareness and Convexity Optimization For Tackling Data Scarcity

Authors
Zolfagharnasab, MH; Gonçalves, T; Ferreira, P; Cardoso, MJ; Cardoso, JS;

Publication
Artificial Intelligence and Imaging for Diagnostic and Treatment Challenges in Breast Care - Second Deep Breast Workshop, Deep-Breath 2025, Held in Conjunction with MICCAI 2025, Daejeon, South Korea, September 23, 2025, Proceedings

Abstract
Breast segmentation has a critical role in objective pre- and post-operative aesthetic evaluation but is challenged by limited data (privacy concerns), class imbalance, and anatomical variability. In response to these obstacles, we introduce an encoder-decoder framework with a Segment Anything Model (SAM) backbone, enhanced with synthetic depth maps and a multi-term loss combining weighted cross-entropy, convexity, and depth alignment constraints. Evaluated on a 120-patient dataset split into 70% training, 10% validation, and 20% testing, our approach achieves a balanced test Dice score of 98.75%, a 4.5% improvement over prior methods, with Dice of 95.5% (breast) and 89.2% (nipple). Ablations show depth injection reduces noise and focuses on anatomical regions, yielding Dice gains of 0.47% (body) and 1.04% (breast). Geometric alignment increases convexity by almost 3%, up to 99.86%, enhancing the geometric plausibility of the nipple masks. Lastly, cross-dataset evaluation on CINDERELLA samples demonstrates robust generalization, with a small performance gain primarily attributable to differences in annotation styles. © 2025 Elsevier B.V., All rights reserved.
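A minimal sketch of a multi-term segmentation loss of the kind described, combining weighted cross-entropy with a depth-alignment penalty; the convexity term is omitted here, and all names and weightings are hypothetical rather than the paper's implementation.

```python
import numpy as np

def weighted_cross_entropy(probs, target, class_weights, eps=1e-8):
    """Per-pixel cross-entropy, weighted per class to counter class imbalance.
    probs: (H, W, C) softmax outputs; target: (H, W) integer labels."""
    h, w = target.shape
    p = probs[np.arange(h)[:, None], np.arange(w)[None, :], target]
    wgt = class_weights[target]
    return -(wgt * np.log(p + eps)).mean()

def depth_alignment(pred_depth, ref_depth):
    """L2 alignment between predicted and synthetic reference depth maps."""
    return ((pred_depth - ref_depth) ** 2).mean()

def combined_loss(probs, target, pred_depth, ref_depth, class_weights,
                  lam_depth=0.1):
    """Multi-term loss: weighted cross-entropy plus a depth constraint."""
    return (weighted_cross_entropy(probs, target, class_weights)
            + lam_depth * depth_alignment(pred_depth, ref_depth))
```

The relative weight `lam_depth` trades segmentation accuracy against geometric consistency with the depth map, in the spirit of the constraints described above.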

2025

Anatomically and Clinically Informed Deep Generative Model for Breast Surgery Outcome Prediction

Authors
Santos, J; Montenegro, H; Bonci, E; Cardoso, MJ; Cardoso, JS;

Publication
Artificial Intelligence and Imaging for Diagnostic and Treatment Challenges in Breast Care - Second Deep Breast Workshop, Deep-Breath 2025, Held in Conjunction with MICCAI 2025, Daejeon, South Korea, September 23, 2025, Proceedings

Abstract
Breast cancer patients often face difficulties when choosing among diverse surgeries. To aid patients, this paper proposes ACID-GAN (Anatomically and Clinically Informed Deep Generative Adversarial Network), a conditional generative model for predicting post-operative breast cancer outcomes using deep learning. Built on Pix2Pix, the model incorporates clinical metadata, such as surgery type and cancer laterality, by introducing a dedicated encoder for semantic supervision. Further improvements include colour preservation and anatomically informed losses, as well as clinical supervision via segmentation and classification modules. Experiments on a private dataset demonstrate that the model produces realistic, context-aware predictions. The results demonstrate that the model presents a meaningful trade-off between generating precise, anatomically defined results and maintaining patient-specific appearance, such as skin tone and shape. © 2025 Elsevier B.V., All rights reserved.
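To make the conditioning mechanism concrete, here is a minimal sketch of how clinical metadata (surgery type, laterality) can be encoded and broadcast onto an image feature map before a conditional image-to-image generator consumes it; the category vocabularies, names, and fusion strategy are hypothetical simplifications, not the paper's implementation.

```python
import numpy as np

# Hypothetical category vocabularies for the clinical metadata.
SURGERY_TYPES = ["conservative", "mastectomy", "reconstruction"]
LATERALITIES = ["left", "right", "bilateral"]

def encode_metadata(surgery, laterality):
    """One-hot encode surgery type and laterality into one conditioning vector."""
    vec = np.zeros(len(SURGERY_TYPES) + len(LATERALITIES))
    vec[SURGERY_TYPES.index(surgery)] = 1.0
    vec[len(SURGERY_TYPES) + LATERALITIES.index(laterality)] = 1.0
    return vec

def condition_features(feature_map, meta_vec):
    """Broadcast the metadata vector as constant extra channels and
    concatenate it with a (C, H, W) image feature map."""
    _, h, w = feature_map.shape
    meta_planes = np.tile(meta_vec[:, None, None], (1, h, w))
    return np.concatenate([feature_map, meta_planes], axis=0)
```

A dedicated metadata encoder, as the paper describes, would replace the raw one-hot planes with a learned embedding, but the channel-concatenation pattern is the same.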