About

Jaime S. Cardoso received a Licenciatura (5-year degree) in Electrical and Computer Engineering in 1999, an MSc in Mathematical Engineering in 2005, and a Ph.D. in Computer Vision in 2006, all from the University of Porto.


Cardoso is an Associate Professor with Habilitation at the Faculty of Engineering of the University of Porto (FEUP), where he teaches Machine Learning and Computer Vision in doctoral programs as well as several other graduate courses. He is currently a Senior Researcher in the ‘Information Processing and Pattern Recognition’ area of the Telecommunications and Multimedia Unit at INESC TEC. He is also a Senior Member of the IEEE and a co-founder of ClusterMedia Labs, an IT company developing automatic solutions for semantic audio-visual analysis.


His research can be summed up in three major topics: computer vision, machine learning, and decision support systems. Cardoso has co-authored 150+ papers, 50+ of which are in international journals. He has received numerous awards, including an Honorable Mention in the Exame Informática Award 2011 (software category) for the project “Semantic PACS” and first place in the ICDAR 2013 Music Scores Competition: Staff Removal (task: staff removal with local noise), August 2013. His research results have been recognized both by his peers, with 6500+ citations to his publications, and by mainstream media coverage on several occasions.

Details

  • Name

    Jaime Cardoso
  • Role

    Research Coordinator
  • Since

    15th September 1998
Publications

2025

A survey on cell nuclei instance segmentation and classification: Leveraging context and attention

Authors
Nunes, JD; Montezuma, D; Oliveira, D; Pereira, T; Cardoso, JS;

Publication
MEDICAL IMAGE ANALYSIS

Abstract
Nuclear-derived morphological features and biomarkers provide relevant insights regarding the tumour microenvironment, while also allowing diagnosis and prognosis in specific cancer types. However, manually annotating nuclei from gigapixel Haematoxylin and Eosin (H&E)-stained Whole Slide Images (WSIs) is a laborious and costly task, meaning automated algorithms for cell nuclei instance segmentation and classification could alleviate the workload of pathologists and clinical researchers and, at the same time, facilitate the automatic extraction of clinically interpretable features for artificial intelligence (AI) tools. Yet, due to the high intra- and inter-class variability of nuclei morphological and chromatic features, as well as the susceptibility of H&E stains to artefacts, state-of-the-art algorithms cannot yet correctly detect and classify instances with the necessary performance. In this work, we hypothesize that context and attention inductive biases in artificial neural networks (ANNs) could increase the performance and generalization of algorithms for cell nuclei instance segmentation and classification. To understand the advantages, use cases, and limitations of context- and attention-based mechanisms in instance segmentation and classification, we start by reviewing works in computer vision and medical imaging. We then conduct a thorough survey on context and attention methods for cell nuclei instance segmentation and classification from H&E-stained microscopy imaging, while providing a comprehensive discussion of the challenges being tackled with context and attention. In addition, we illustrate some limitations of current approaches and present ideas for future research. As a case study, we extend both a general (Mask-RCNN) and a customized (HoVer-Net) instance segmentation and classification method with context- and attention-based mechanisms and perform a comparative analysis on a multicentre dataset for colon nuclei identification and counting. Although pathologists rely on context at multiple levels while paying attention to specific Regions of Interest (RoIs) when analysing and annotating WSIs, our findings suggest that translating that domain knowledge into algorithm design is no trivial task, and that to fully exploit these mechanisms in ANNs, the scientific understanding of these methods should first be addressed.
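
As a rough illustration of the attention mechanisms surveyed here, the sketch below shows a squeeze-and-excitation style channel-attention block of the kind that can be inserted into a segmentation backbone such as Mask-RCNN or HoVer-Net. It is a minimal PyTorch example with illustrative names; it is not one of the exact modules evaluated in the paper.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate: re-weights channels using global context."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel attention weights in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights  # gated feature maps

# Toy usage: gate a feature map produced by an H&E patch encoder.
features = torch.randn(2, 64, 128, 128)
print(ChannelAttention(64)(features).shape)  # torch.Size([2, 64, 128, 128])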

2025

MST-KD: Multiple Specialized Teachers Knowledge Distillation for Fair Face Recognition

Authors
Caldeira, E; Cardoso, JS; Sequeira, AF; Neto, PC;

Publication
COMPUTER VISION-ECCV 2024 WORKSHOPS, PT XV

Abstract
As in school, a single teacher covering all subjects is insufficient to distill equally robust information to a student; hence, each subject is taught by a highly specialized teacher. Following a similar philosophy, we propose a multiple specialized teacher framework to distill knowledge to a student network. In our approach, directed at face recognition use cases, we train four teachers, each on one specific ethnicity, leading to four highly specialized and biased teachers. Our strategy learns a projection of these four teachers into a common space and distills that information to a student network. Our results show increased performance and reduced bias for all our experiments. In addition, we show that having biased/specialized teachers is crucial: our approach achieves better results than when knowledge is distilled from four teachers trained on balanced datasets. Our approach represents a step forward in the understanding of the importance of ethnicity-specific features.
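
A minimal sketch of the multi-teacher distillation idea described above, assuming illustrative linear adaptors and a cosine objective (not the paper's exact architecture or loss): embeddings from several specialized teachers are projected into a common space, and the student is pulled towards their aggregated representation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Adaptor(nn.Module):
    """Projects one teacher's embedding into the common space."""
    def __init__(self, teacher_dim: int, common_dim: int):
        super().__init__()
        self.proj = nn.Linear(teacher_dim, common_dim)

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.proj(emb), dim=-1)

def distillation_loss(student_emb, teacher_embs, adaptors):
    """Pull the student embedding towards the mean of the projected teacher embeddings."""
    projected = torch.stack([a(e) for a, e in zip(adaptors, teacher_embs)])
    target = F.normalize(projected.mean(dim=0), dim=-1)
    return 1.0 - F.cosine_similarity(F.normalize(student_emb, dim=-1), target).mean()

# Toy usage: four teachers with 512-d embeddings, student with 256-d embeddings.
adaptors = nn.ModuleList([Adaptor(512, 256) for _ in range(4)])
teacher_embs = [torch.randn(8, 512) for _ in range(4)]
student_emb = torch.randn(8, 256, requires_grad=True)
print(distillation_loss(student_emb, teacher_embs, adaptors))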

2025

Evaluating the Impact of Pulse Oximetry Bias in Machine Learning Under Counterfactual Thinking

Authors
Martins, I; Matos, J; Goncalves, T; Celi, LA; Wong, AKI; Cardoso, JS;

Publication
APPLICATIONS OF MEDICAL ARTIFICIAL INTELLIGENCE, AMAI 2024

Abstract
Algorithmic bias in healthcare mirrors existing data biases. However, the factors driving unfairness are not always known. Medical devices capture significant amounts of data but are prone to errors; for instance, pulse oximeters overestimate the arterial oxygen saturation of darker-skinned individuals, leading to worse outcomes. The impact of this bias in machine learning (ML) models remains unclear. This study addresses the technical challenges of quantifying the impact of medical device bias in downstream ML. Our experiments compare a perfect world, without pulse oximetry bias, using SaO2 (blood gas), to the actual world, with biased measurements, using SpO2 (pulse oximetry). Under this counterfactual design, two models are trained with identical data, features, and settings, except for the method of measuring oxygen saturation: models using SaO2 serve as the control and models using SpO2 as the treatment. The blood-gas oximetry linked dataset was a suitable testbed, containing 163,396 nearly simultaneous SpO2-SaO2 paired measurements aligned with a wide array of clinical features and outcomes. We studied three classification tasks: in-hospital mortality, respiratory SOFA score in the next 24 h, and SOFA score increase by two points. Models using SaO2 instead of SpO2 generally showed better performance. Patients whose O2 saturation was overestimated by pulse oximetry by 3% or more had significant decreases in mortality prediction recall, from 0.63 to 0.59, P < 0.001. This mirrors clinical processes where biased pulse oximetry readings provide clinicians with false reassurance of patients' oxygen levels. A similar degradation happened in the ML models, with pulse oximetry biases leading to more false negatives in predicting adverse outcomes.
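
A minimal sketch of the counterfactual comparison, assuming a hypothetical linked dataset and feature names (the file, columns, and classifier below are illustrative, not the paper's exact pipeline): two models are trained identically and differ only in whether oxygen saturation comes from SaO2 or SpO2.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("paired_oximetry_cohort.csv")       # hypothetical linked dataset
shared = ["age", "heart_rate", "resp_rate", "fio2"]  # hypothetical shared features
y = df["in_hospital_mortality"]

for oxygen_col in ["sao2", "spo2"]:                  # control vs. treatment model
    X = df[shared + [oxygen_col]]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    print(oxygen_col, "recall:", recall_score(y_te, model.predict(X_te)))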

2025

CNN explanation methods for ordinal regression tasks

Authors
Barbero-Gómez, J; Cruz, RPM; Cardoso, JS; Gutiérrez, PA; Hervás-Martínez, C;

Publication
NEUROCOMPUTING

Abstract
The use of Convolutional Neural Network (CNN) models for image classification tasks has gained significant popularity. However, the lack of interpretability in CNN models poses challenges for debugging and validation. To address this issue, various explanation methods have been developed to provide insights into CNN models. This paper focuses on the validity of these explanation methods for ordinal regression tasks, where the classes have a predefined order relationship. Different modifications are proposed for two explanation methods to exploit the ordinal relationships between classes: Grad-CAM based on Ordinal Binary Decomposition (GradOBD-CAM) and Ordinal Information Bottleneck Analysis (OIBA). The performance of these modified methods is compared to existing popular alternatives. Experimental results demonstrate that GradOBD-CAM outperforms other methods in terms of interpretability for three out of four datasets, while OIBA achieves superior performance compared to the original IBA.
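
To illustrate the ordinal binary decomposition idea behind GradOBD-CAM, the sketch below computes one Grad-CAM map per "class greater than k" output of an ordinal head and averages the maps. It is a simplified illustration under assumed shapes and a toy head, not the exact formulation proposed in the paper.

import torch
import torch.nn.functional as F

def ordinal_gradcam(features: torch.Tensor, threshold_logits: torch.Tensor) -> torch.Tensor:
    """features: (C, H, W) conv activations kept in the autograd graph;
    threshold_logits: (K-1,) logits of the P(y > k) heads for one image."""
    maps = []
    for logit in threshold_logits:
        grads = torch.autograd.grad(logit, features, retain_graph=True)[0]
        weights = grads.mean(dim=(1, 2))                        # GAP of gradients per channel
        cam = F.relu((weights[:, None, None] * features).sum(dim=0))
        maps.append(cam / (cam.max() + 1e-8))                   # normalise each map
    return torch.stack(maps).mean(dim=0)                        # aggregate over thresholds

# Toy usage: 8-channel feature map, 4 thresholds (i.e. 5 ordered classes).
feats = torch.randn(8, 16, 16, requires_grad=True)
head = torch.nn.Linear(8, 4)
logits = head(feats.mean(dim=(1, 2)))                           # GAP then linear ordinal head
print(ordinal_gradcam(feats, logits).shape)                     # torch.Size([16, 16])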

2025

Learning Ordinality in Semantic Segmentation

Authors
Cruz, RPM; Cristino, R; Cardoso, JS;

Publication
IEEE ACCESS

Abstract
Semantic segmentation consists of predicting a semantic label for each image pixel. While existing deep learning approaches achieve high accuracy, they often overlook the ordinal relationships between classes, which can provide critical domain knowledge (e.g., the pupil lies within the iris, and lane markings are part of the road). This paper introduces novel methods for spatial ordinal segmentation that explicitly incorporate these inter-class dependencies. By treating each pixel as part of a structured image space rather than as an independent observation, we propose two loss regularization terms and a new metric for structural ordinal segmentation, which penalize predictions of non-ordinal adjacent classes and enforce ordinal consistency between neighboring pixels. Five biomedical datasets and multiple configurations of autonomous driving datasets demonstrate the efficacy of the proposed methods. Our approach achieves improvements in ordinal metrics and enhances generalization, with up to a 15.7% relative increase in the Dice coefficient. Importantly, these benefits come without additional inference-time costs. This work highlights the significance of spatial ordinal relationships in semantic segmentation and provides a foundation for further exploration in structured image representations.
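
A minimal sketch of the spatial-ordinal idea, assuming a soft (expected-class) label map and a hinge on neighbour differences; it illustrates penalizing non-ordinal adjacent predictions but is not the paper's exact regularization terms or metric.

import torch
import torch.nn.functional as F

def ordinal_contact_penalty(logits: torch.Tensor) -> torch.Tensor:
    """logits: (B, K, H, W) for K ordered classes; penalises ordinal jumps > 1 between neighbours."""
    probs = F.softmax(logits, dim=1)
    classes = torch.arange(logits.shape[1], dtype=probs.dtype, device=probs.device)
    expected = (probs * classes[None, :, None, None]).sum(dim=1)  # soft label map (B, H, W)
    dh = (expected[:, 1:, :] - expected[:, :-1, :]).abs()         # vertical neighbour differences
    dw = (expected[:, :, 1:] - expected[:, :, :-1]).abs()         # horizontal neighbour differences
    # Jumps of at most one ordinal level are allowed; larger jumps are penalised.
    return F.relu(dh - 1.0).mean() + F.relu(dw - 1.0).mean()

# Toy usage: add the penalty to the usual segmentation loss.
logits = torch.randn(2, 4, 32, 32, requires_grad=True)
print(ordinal_contact_penalty(logits))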

Supervised Theses

2023

Don't look away! Keeping the human in the loop with an interactive active learning platform

Author
Fábio Manuel Taveira da Cunha

Institution
UP-FEUP

2023

A new vision in pathology: from clinical implementation of digital pathology to algorithm development in computational pathology

Author
Diana Leitão Montezuma Pego Felizardo

Institution
UP-FEUP

2023

Automatic recognition of criminals, victims, and illegal behaviour in videos

Author
Leonardo Gomes Capozzi

Institution
UP-FEUP

2023

AI-based Conditional Generation of Diffusion MR Images

Author
Pedro Fernandes Sousa

Institution
UP-FEUP

2023

Machine learning applied to deep space images

Author
Francisco Campos da Silva Ferreira Ribeiro

Institution
UP-FEUP