Publications by Jaime Cardoso

2025

HER2match dataset

Authors
Klöckner, P; Teixeira, J; Montezuma, D; Cardoso, JS; Horlings, HM; de Oliveira, SP;

2025

BreLoAI - A Scalable Web Application for Breast Cancer Locoregional Treatment Approaches

Authors
Romariz, MM; Gonçalves, TF; Bonci, E; Oliveira, H; Mavioso, C; Cardoso, MJ; Cardoso, J;

Publication
Cureus Journal of Computer Science.

2025

Predicting Aesthetic Outcomes of Breast Cancer Surgery: A Robust and Explainable Image Retrieval Approach

Authors
Ferreira, P; Zolfagharnasab, MH; Gonçalves, T; Bonci, E; Mavioso, C; Cardoso, MJ; Cardoso, JS;

Publication
Deep-Breath@MICCAI

Abstract
Accurate retrieval of post-surgical images plays a critical role in surgical planning for breast cancer patients. However, current content-based image retrieval methods face challenges related to limited interpretability, poor robustness to image noise, and reduced generalization across clinical settings. To address these limitations, we propose a multistage retrieval pipeline integrating saliency-based explainability, noise-reducing image pre-processing, and ensemble learning. Evaluated on a dataset of post-operative breast cancer patient images, our approach achieves a contrastive accuracy of 77.67% for Excellent/Good and 84.98% for Fair/Poor outcomes, surpassing prior studies by 8.37% and 11.80%, respectively. Explainability analysis provides essential insight by showing that feature extractors often attend to irrelevant regions, thereby motivating targeted input refinement. Ablations show that expanded bounding-box inputs improve performance over original images, with gains of 0.78% and 0.65% in contrastive accuracy for Excellent/Good and Fair/Poor, respectively. In contrast, the use of segmented images leads to a performance drop (1.33% and 1.65%) due to the loss of contextual cues. Furthermore, ensemble learning yields additional gains of 0.89% and 3.60% over the best-performing single-model baselines. These findings underscore the importance of targeted input refinement and ensemble integration for robust and generalizable image retrieval systems.
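
As a concrete illustration of the ensemble step described in the abstract, the following minimal Python sketch averages cosine similarities from several feature extractors before ranking the gallery. It is a hypothetical reconstruction, not the authors' code: the embedding dimensions, the number of extractors, and the plain averaging rule are all assumptions.

    import numpy as np

    def l2_normalize(x, axis=-1, eps=1e-12):
        # scale embeddings to unit norm so dot products become cosine similarities
        return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

    def ensemble_retrieve(query_embs, gallery_embs, top_k=5):
        # query_embs: one (d_i,) vector per extractor; gallery_embs: one (N, d_i)
        # matrix per extractor; per-extractor cosine scores are averaged, then ranked
        scores = np.zeros(gallery_embs[0].shape[0])
        for q, g in zip(query_embs, gallery_embs):
            scores += l2_normalize(g) @ l2_normalize(q)
        scores /= len(query_embs)
        return np.argsort(-scores)[:top_k]

    # toy usage with two hypothetical extractors (128-d and 256-d embeddings)
    rng = np.random.default_rng(0)
    gallery = [rng.normal(size=(100, 128)), rng.normal(size=(100, 256))]
    query = [rng.normal(size=128), rng.normal(size=256)]
    print(ensemble_retrieve(query, gallery))  # indices of the 5 nearest images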

2025

Towards Robust Breast Segmentation: Leveraging Depth Awareness and Convexity Optimization For Tackling Data Scarcity

Authors
Zolfagharnasab, MH; Gonçalves, T; Ferreira, P; Cardoso, MJ; Cardoso, JS;

Publication
Deep-Breath@MICCAI

Abstract
Breast segmentation plays a critical role in objective pre- and post-operative aesthetic evaluation, but it is challenged by limited data (privacy concerns), class imbalance, and anatomical variability. In response to these obstacles, we introduce an encoder–decoder framework with a Segment Anything Model (SAM) backbone, enhanced with synthetic depth maps and a multi-term loss combining weighted cross-entropy, convexity, and depth-alignment constraints. Evaluated on a 120-patient dataset split into 70% training, 10% validation, and 20% testing, our approach achieves a balanced test Dice score of 98.75%, a 4.5% improvement over prior methods, with Dice of 95.5% (breast) and 89.2% (nipple). Ablations show that depth injection reduces noise and focuses the model on anatomical regions, yielding Dice gains of 0.47% (body) and 1.04% (breast). Geometric alignment increases convexity by almost 3%, up to 99.86%, enhancing the geometric plausibility of the nipple masks. Lastly, cross-dataset evaluation on CINDERELLA samples demonstrates robust generalization, with a small performance gain primarily attributable to differences in annotation styles.
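
To make the multi-term loss concrete, here is a minimal PyTorch sketch under stated assumptions: the term weights, the class layout (index 2 taken as the breast), and the row-wise convexity proxy (a convex region meets each image row in at most one interval, so horizontal total variation above 2 is penalised) are illustrative choices, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def row_convexity_penalty(prob):
        # a convex mask intersects each image row in at most one interval, so the
        # horizontal total variation of a binary row is at most 2; penalise the excess
        tv = (prob[..., 1:] - prob[..., :-1]).abs().sum(dim=-1)  # (B, H)
        return F.relu(tv - 2.0).mean()

    def multiterm_loss(logits, target, depth_pred, depth_gt, class_weights,
                       w_ce=1.0, w_cvx=0.1, w_depth=0.1):
        # weighted cross-entropy handles class imbalance (e.g. the small nipple class)
        ce = F.cross_entropy(logits, target, weight=class_weights)
        # convexity proxy on the soft mask of the assumed breast class (index 2)
        cvx = row_convexity_penalty(logits.softmax(dim=1)[:, 2])
        # depth alignment: L1 between an auxiliary depth prediction and the
        # synthetic depth map used as supervision
        depth = F.l1_loss(depth_pred, depth_gt)
        return w_ce * ce + w_cvx * cvx + w_depth * depth

    # toy usage: 4 classes (background / body / breast / nipple), 64x64 inputs
    B, C, H, W = 2, 4, 64, 64
    logits = torch.randn(B, C, H, W, requires_grad=True)
    target = torch.randint(0, C, (B, H, W))
    loss = multiterm_loss(logits, target, torch.rand(B, 1, H, W),
                          torch.rand(B, 1, H, W), torch.ones(C))
    loss.backward()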

2025

Anatomically and Clinically Informed Deep Generative Model for Breast Surgery Outcome Prediction

Authors
Santos, J; Montenegro, H; Bonci, E; Cardoso, MJ; Cardoso, JS;

Publication
Deep-Breath@MICCAI

Abstract
Breast cancer patients often face difficulties when choosing among diverse surgical options. To aid patients, this paper proposes ACID-GAN (Anatomically and Clinically Informed Deep Generative Adversarial Network), a conditional generative model for predicting post-operative breast cancer outcomes using deep learning. Built on Pix2Pix, the model incorporates clinical metadata, such as surgery type and cancer laterality, by introducing a dedicated encoder for semantic supervision. Further improvements include colour-preservation and anatomically informed losses, as well as clinical supervision via segmentation and classification modules. Experiments on a private dataset demonstrate that the model produces realistic, context-aware predictions, and that it strikes a meaningful trade-off between generating precise, anatomically defined results and preserving patient-specific appearance, such as skin tone and shape.
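
As a sketch of how clinical metadata can condition a Pix2Pix-style generator, the snippet below embeds surgery type and laterality and broadcasts them to extra input channels. This is one common conditioning pattern, written under assumptions (category counts, embedding size, channel concatenation); the paper's dedicated encoder and its supervision losses are more elaborate.

    import torch
    import torch.nn as nn

    class ClinicalEncoder(nn.Module):
        # embeds categorical clinical metadata (surgery type, cancer laterality)
        # into vectors that can condition an image-to-image generator
        def __init__(self, n_surgery_types=4, n_lateralities=2, dim=8):
            super().__init__()
            self.surgery = nn.Embedding(n_surgery_types, dim)
            self.side = nn.Embedding(n_lateralities, dim)

        def forward(self, surgery_id, side_id, h, w):
            z = torch.cat([self.surgery(surgery_id), self.side(side_id)], dim=-1)
            # broadcast the (B, 2*dim) code to (B, 2*dim, h, w) spatial maps
            return z[:, :, None, None].expand(-1, -1, h, w)

    # toy usage: concatenate metadata channels onto the generator input
    enc = ClinicalEncoder()
    pre_op = torch.rand(1, 3, 256, 256)  # pre-operative image
    meta = enc(torch.tensor([1]), torch.tensor([0]), 256, 256)
    gen_input = torch.cat([pre_op, meta], dim=1)  # (1, 3 + 16, 256, 256)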

2025

SiameseOrdinalCLIP: A Language-Guided Siamese Network for the Aesthetic Evaluation of Breast Cancer Locoregional Treatment

Authors
Teixeira, LF; Montenegro, H; Bonci, E; Cardoso, MJ; Cardoso, JS;

Publication
Deep-Breath@MICCAI

Abstract
Breast cancer locoregional treatment includes a wide variety of procedures with diverse aesthetic outcomes. The aesthetic assessment of such procedures is typically subjective, hindering fair comparison between their outcomes and consequently restricting evidence-based improvements. Most objective evaluation tools were developed for conservative surgery, focusing on asymmetries while ignoring other relevant traits. To overcome these limitations, we propose SiameseOrdinalCLIP, an ordinal classification network based on image-text matching and pairwise ranking optimisation for the aesthetic evaluation of breast cancer treatment. Furthermore, we integrate a concept bottleneck module into the network for increased explainability. Experiments on a private dataset show that the proposed model surpasses state-of-the-art aesthetic evaluation and ordinal classification networks.
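
The snippet below sketches the two ideas named in the abstract, image-text matching and pairwise ranking: a continuous ordinal score is taken as the expected grade under CLIP-style image-prompt similarities, and pairs of images with a known ordering are trained with a margin ranking loss. The expectation-based score, the margin value, and the prompt setup are assumptions for illustration, not the authors' exact design.

    import torch
    import torch.nn.functional as F

    def ordinal_score(img_emb, text_embs):
        # cosine similarity of each image to one text prompt per aesthetic grade
        # (e.g. "excellent" ... "poor"), softmax over grades, then the expected
        # grade index gives a continuous ordinal score
        sims = F.normalize(img_emb, dim=-1) @ F.normalize(text_embs, dim=-1).T
        probs = sims.softmax(dim=-1)                       # (B, G)
        grades = torch.arange(text_embs.size(0), dtype=probs.dtype)
        return probs @ grades                              # (B,)

    # toy usage: 4 grades, 512-d CLIP-style embeddings, two ordered image pairs
    B, G, D = 4, 4, 512
    img = torch.randn(B, D, requires_grad=True)
    txt = torch.randn(G, D)
    s = ordinal_score(img, txt)
    # target is +1 where the first image of the pair should score higher, -1 otherwise
    loss = F.margin_ranking_loss(s[:2], s[2:], torch.tensor([1.0, -1.0]), margin=0.5)
    loss.backward()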
