2025
Authors
Montenegro, H; Cardoso, MJ; Cardoso, JS;
Publication
2025 47th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
Abstract
2025
Authors
Lima, PV; Cardoso, JS; Oliveira, HP;
Publication
BIBE
Abstract
Breast cancer remains one of the most prevalent and deadly cancers worldwide, making accurate evaluation of molecular markers important for effective disease management. Biomarkers such as ER, PR, and HER2 are typically assessed because they help inform prognosis and guide treatment decisions. Predicting these characteristics from imaging can support earlier clinical intervention, reduce reliance on invasive procedures, and contribute to more personalized care. While radiomics and deep learning approaches have demonstrated potential, comprehensive comparisons across these methods are still limited. This study evaluated handcrafted features, deep features, and end-to-end deep learning models for predicting ER, PR, and HER2 status from DCE-MRI. Each feature type was first assessed individually and then combined using early and late fusion. Handcrafted and deep features were processed through a pipeline that included resampling, dimensionality reduction, and model selection, while end-to-end models were trained using different initialization strategies and loss functions. The best models achieved AUCs of 0.659 for ER, 0.679 for PR, and 0.686 for HER2. Although late fusion generally improved performance, bias toward the majority classes persisted. Overall, the results suggest that combining different modeling strategies may enhance robustness in breast cancer characterization. © 2025 IEEE.
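The late fusion mentioned in the abstract can be illustrated with a minimal sketch: combine the positive-class probabilities of two already-trained classifiers (e.g. one on handcrafted features, one on deep features) by a weighted average and threshold the result. The weight `w` and the two probability vectors are placeholders, not values from the paper.

```python
import numpy as np

def late_fusion(prob_handcrafted, prob_deep, w=0.5, threshold=0.5):
    """Weighted average of per-sample positive-class probabilities
    from two models, followed by a decision threshold."""
    p = w * np.asarray(prob_handcrafted) + (1 - w) * np.asarray(prob_deep)
    return p, (p >= threshold).astype(int)

# hypothetical probabilities for two patients
fused, labels = late_fusion([0.8, 0.3], [0.6, 0.1])
```

In practice the fusion weight would itself be tuned on a validation set, and class-imbalance corrections (as the abstract notes, majority-class bias persisted) would move the threshold away from 0.5.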
2025
Authors
Klöckner, P; Teixeira, J; Montezuma, D; Cardoso, JS; Horlings, HM; de Oliveira, SP;
Publication
Abstract
2025
Authors
Miguel M Romariz; Tiago F Gonçalves; Eduard Bonci; Hélder Oliveira; Carlos Mavioso; Maria J Cardoso; Jaime Cardoso;
Publication
Cureus Journal of Computer Science.
Abstract
2025
Authors
Ferreira, P; Zolfagharnasab, MH; Gonçalves, T; Bonci, E; Mavioso, C; Cardoso, MJ; Cardoso, JS;
Publication
Deep-Breath@MICCAI
Abstract
Accurate retrieval of post-surgical images plays a critical role in surgical planning for breast cancer patients. However, current content-based image retrieval methods face challenges related to limited interpretability, poor robustness to image noise, and reduced generalization across clinical settings. To address these limitations, we propose a multistage retrieval pipeline integrating saliency-based explainability, noise-reducing image pre-processing, and ensemble learning. Evaluated on a dataset of post-operative breast cancer patient images, our approach achieves a contrastive accuracy of 77.67% for Excellent/Good and 84.98% for Fair/Poor outcomes, surpassing prior studies by 8.37% and 11.80%, respectively. Explainability analysis provides essential insight by showing that feature extractors often attend to irrelevant regions, thereby motivating targeted input refinement. Ablations show that expanded bounding-box inputs improve performance over original images, with gains of 0.78% and 0.65% in contrastive accuracy for Excellent/Good and Fair/Poor, respectively. In contrast, the use of segmented images leads to a performance drop (1.33% and 1.65%) due to the loss of contextual cues. Furthermore, ensemble learning yields additional gains of 0.89% and 3.60% over the best-performing single-model baselines. These findings underscore the importance of targeted input refinement and ensemble integration for robust and generalizable image retrieval systems.
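A common way to ensemble retrieval models, consistent with (but not necessarily identical to) the pipeline described above, is to average the query-to-gallery similarity scores produced by each feature extractor and rank the gallery by the combined score. The scores below are placeholder values for illustration.

```python
import numpy as np

def ensemble_rank(score_lists):
    """Average per-gallery-image similarity scores from several models,
    then return gallery indices ranked best-first."""
    scores = np.mean(np.asarray(score_lists, dtype=float), axis=0)
    return list(np.argsort(-scores))  # negate for descending order

# two hypothetical models scoring four gallery images against one query
ranking = ensemble_rank([[0.2, 0.9, 0.5, 0.1],
                         [0.4, 0.7, 0.6, 0.2]])
```

Score averaging assumes the models' similarity scales are comparable; otherwise rank-based fusion (averaging ranks rather than raw scores) is the usual alternative.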
2025
Authors
Zolfagharnasab, MH; Gonçalves, T; Ferreira, P; Cardoso, MJ; Cardoso, JS;
Publication
Deep-Breath@MICCAI
Abstract
Breast segmentation plays a critical role in objective pre- and post-operative aesthetic evaluation, but is challenged by limited data (privacy concerns), class imbalance, and anatomical variability. In response to these obstacles, we introduce an encoder-decoder framework with a Segment Anything Model (SAM) backbone, enhanced with synthetic depth maps and a multi-term loss combining weighted cross-entropy, convexity, and depth-alignment constraints. Evaluated on a 120-patient dataset split into 70% training, 10% validation, and 20% testing, our approach achieves a balanced test Dice score of 98.75% (a 4.5% improvement over prior methods), with Dice scores of 95.5% (breast) and 89.2% (nipple). Ablations show that depth injection reduces noise and focuses attention on anatomical regions, yielding Dice gains of 0.47% (body) and 1.04% (breast). Geometric alignment increases convexity by almost 3%, up to 99.86%, enhancing the geometric plausibility of the nipple masks. Lastly, cross-dataset evaluation on CINDERELLA samples demonstrates robust generalization, with the small performance gain primarily attributable to differences in annotation styles.
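The multi-term loss described above can be sketched as a weighted cross-entropy term plus lambda-weighted geometric penalties. The term names (convexity, depth alignment) follow the abstract, but the penalty values, weights, and function signatures below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def weighted_cross_entropy(probs, targets, class_weights):
    """Weighted CE over flattened pixels.
    probs: (N, C) softmax outputs; targets: (N,) int class labels."""
    probs = np.clip(probs, 1e-7, 1.0)
    w = np.asarray(class_weights)[targets]       # per-pixel class weight
    picked = probs[np.arange(len(targets)), targets]
    return float(np.mean(-w * np.log(picked)))

def multiterm_loss(probs, targets, class_weights,
                   convexity_pen, depth_pen,
                   lam_convex=0.1, lam_depth=0.1):
    """Total loss = weighted CE + lambda-weighted geometric penalties.
    convexity_pen / depth_pen stand in for the mask-convexity and
    depth-alignment terms, which require full mask geometry to compute."""
    return (weighted_cross_entropy(probs, targets, class_weights)
            + lam_convex * convexity_pen
            + lam_depth * depth_pen)
```

Upweighting minority classes (e.g. the small nipple region) in `class_weights` is the standard counter to the class imbalance the abstract mentions.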