2025
Authors
Fernandes, L; Gonçalves, T; Matos, J; Nakayama, LF; Cardoso, JS;
Publication
Fairness of AI in Medical Imaging - Third International Workshop, FAIMI 2025, Held in Conjunction with MICCAI 2025, Daejeon, South Korea, September 23, 2025, Proceedings
Abstract
Diabetic retinopathy (DR) is a leading cause of vision loss in working-age adults. While screening reduces the risk of blindness, traditional imaging is often costly and inaccessible. Artificial intelligence (AI) algorithms present a scalable diagnostic solution, but concerns regarding fairness and generalization persist. This work evaluates the fairness and performance of image-trained models in DR prediction, as well as the impact of disentanglement as a bias mitigation technique, using the diverse mBRSET fundus dataset. Three models, ConvNeXt V2, DINOv2, and Swin V2, were trained on macula images to predict DR and sensitive attributes (SAs) such as age and gender/sex. Fairness was assessed between subgroups of SAs, and disentanglement was applied to reduce bias. All models achieved high performance in diagnosing DR (up to 94% AUROC) and could reasonably predict age and gender/sex (91% and 77% AUROC, respectively). Fairness assessment suggests disparities, such as a 10% AUROC gap between age groups in DINOv2. Disentangling SAs from DR prediction yielded mixed results depending on the model: it improved DINOv2 performance (2% AUROC gain) but led to performance drops in ConvNeXt V2 and Swin V2 (7% and 3%, respectively). These findings highlight the complexity of disentangling fine-grained features in fundus imaging and emphasize the importance of fairness in medical imaging AI to ensure equitable and reliable healthcare solutions.
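The disentanglement referred to above is commonly implemented as adversarial training with a gradient reversal layer, where an auxiliary head tries to predict the sensitive attribute while the backbone is penalized for making that possible. The PyTorch sketch below illustrates this generic scheme only; the paper's exact formulation is not given here, and the feature dimension, module names, and loss weighting are illustrative assumptions.

```python
# Minimal sketch of adversarial disentanglement via gradient reversal (DANN-style).
# Hypothetical names; not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) gradients flowing back into the feature extractor,
        # pushing it toward features that carry no sensitive-attribute signal.
        return -ctx.lambd * grad_output, None

FEATURE_DIM = 768  # assumed embedding size of the image backbone

backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(FEATURE_DIM), nn.ReLU())
dr_head = nn.Linear(FEATURE_DIM, 1)  # diabetic retinopathy logit
sa_head = nn.Linear(FEATURE_DIM, 1)  # sensitive-attribute logit (e.g., sex)

def disentangled_loss(images, dr_labels, sa_labels, lambd=1.0):
    feats = backbone(images)
    dr_loss = F.binary_cross_entropy_with_logits(dr_head(feats).squeeze(1), dr_labels)
    # The SA head sees gradient-reversed features: it learns to predict the SA,
    # while the backbone learns to erase the information it relies on.
    sa_logits = sa_head(GradReverse.apply(feats, lambd)).squeeze(1)
    sa_loss = F.binary_cross_entropy_with_logits(sa_logits, sa_labels)
    return dr_loss + sa_loss
```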
2025
Authors
Ferreira, P; Zolfagharnasab, MH; Gonçalves, T; Bonci, E; Mavioso, C; Cardoso, MJ; Cardoso, JS;
Publication
Artificial Intelligence and Imaging for Diagnostic and Treatment Challenges in Breast Care - Second Deep Breast Workshop, Deep-Breath 2025, Held in Conjunction with MICCAI 2025, Daejeon, South Korea, September 23, 2025, Proceedings
Abstract
Accurate retrieval of post-surgical images plays a critical role in surgical planning for breast cancer patients. However, current content-based image retrieval methods face challenges related to limited interpretability, poor robustness to image noise, and reduced generalization across clinical settings. To address these limitations, we propose a multistage retrieval pipeline integrating saliency-based explainability, noise-reducing image pre-processing, and ensemble learning. Evaluated on a dataset of post-operative breast cancer patient images, our approach achieves a contrastive accuracy of 77.67% for Excellent/Good and 84.98% for Fair/Poor outcomes, surpassing prior studies by 8.37% and 11.80%, respectively. Explainability analysis provides essential insight by showing that feature extractors often attend to irrelevant regions, thereby motivating targeted input refinement. Ablations show that expanded bounding-box inputs improve performance over original images, with gains of 0.78% and 0.65% in contrastive accuracy for Excellent/Good and Fair/Poor, respectively. In contrast, the use of segmented images leads to a performance drop (1.33% and 1.65%) due to the loss of contextual cues. Furthermore, ensemble learning yields additional gains of 0.89% and 3.60% over the best-performing single-model baselines. These findings underscore the importance of targeted input refinement and ensemble integration for robust and generalizable image retrieval systems.
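As a rough illustration of the pipeline's final stage, the sketch below performs content-based retrieval by ranking gallery images by cosine similarity and fuses two or more feature extractors by averaging their similarity matrices. This is one plausible reading of "ensemble learning" in a retrieval setting, not the paper's actual fusion rule; all names are hypothetical.

```python
# Late-fusion ensemble retrieval: average cosine similarities across models.
import numpy as np

def cosine_sim(a, b):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def retrieve(query_feats_per_model, gallery_feats_per_model, top_k=5):
    """Each argument is a list of (n, d_m) arrays, one per feature extractor."""
    sims = [cosine_sim(q, g)
            for q, g in zip(query_feats_per_model, gallery_feats_per_model)]
    fused = np.mean(sims, axis=0)  # ensemble: average the similarity matrices
    return np.argsort(-fused, axis=1)[:, :top_k]  # top-k gallery indices per query
```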
2025
Authors
Zolfagharnasab, MH; Gonçalves, T; Ferreira, P; Cardoso, MJ; Cardoso, JS;
Publication
Artificial Intelligence and Imaging for Diagnostic and Treatment Challenges in Breast Care - Second Deep Breast Workshop, Deep-Breath 2025, Held in Conjunction with MICCAI 2025, Daejeon, South Korea, September 23, 2025, Proceedings
Abstract
Breast segmentation plays a critical role in objective pre- and post-operative aesthetic evaluation but is challenged by limited data (due to privacy concerns), class imbalance, and anatomical variability. In response to these obstacles, we introduce an encoder–decoder framework with a Segment Anything Model (SAM) backbone, enhanced with synthetic depth maps and a multi-term loss combining weighted cross-entropy, convexity, and depth-alignment constraints. Evaluated on a 120-patient dataset split into 70% training, 10% validation, and 20% testing, our approach achieves a balanced test Dice score of 98.75%, a 4.5% improvement over prior methods, with Dice scores of 95.5% (breast) and 89.2% (nipple). Ablations show that depth injection reduces noise and focuses attention on anatomical regions, yielding Dice gains of 0.47% (body) and 1.04% (breast). Geometric alignment increases convexity by almost 3%, up to 99.86%, enhancing the geometric plausibility of the nipple masks. Lastly, cross-dataset evaluation on CINDERELLA samples demonstrates robust generalization, with a small performance gain primarily attributable to differences in annotation styles.
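To make the multi-term objective concrete, here is a hedged PyTorch sketch combining class-weighted cross-entropy with an L1 depth-alignment term; the convexity constraint is omitted, and the class weights and loss weighting below are illustrative assumptions rather than the paper's values.

```python
# Sketch of a multi-term segmentation loss: weighted CE + depth alignment.
import torch
import torch.nn.functional as F

def multi_term_loss(logits, target, pred_depth, synth_depth,
                    class_weights=(0.2, 1.0, 3.0),  # background, breast, nipple (assumed)
                    depth_weight=0.1):              # assumed trade-off factor
    # Class-weighted cross-entropy counters class imbalance (nipple pixels are rare).
    ce = F.cross_entropy(logits, target, weight=torch.tensor(class_weights))
    # Depth alignment: keep predictions consistent with the synthetic depth map.
    depth = F.l1_loss(pred_depth, synth_depth)
    return ce + depth_weight * depth
```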
2025
Authors
Santos, J; Montenegro, H; Bonci, E; Cardoso, MJ; Cardoso, JS;
Publication
Artificial Intelligence and Imaging for Diagnostic and Treatment Challenges in Breast Care - Second Deep Breast Workshop, Deep-Breath 2025, Held in Conjunction with MICCAI 2025, Daejeon, South Korea, September 23, 2025, Proceedings
Abstract
Breast cancer patients often face difficulties when choosing among diverse surgeries. To aid patients, this paper proposes ACID-GAN (Anatomically and Clinically Informed Deep Generative Adversarial Network), a conditional generative model for predicting post-operative breast cancer outcomes using deep learning. Built on Pix2Pix, the model incorporates clinical metadata, such as surgery type and cancer laterality, by introducing a dedicated encoder for semantic supervision. Further improvements include colour preservation and anatomically informed losses, as well as clinical supervision via segmentation and classification modules. Experiments on a private dataset demonstrate that the model produces realistic, context-aware predictions. The results demonstrate that the model presents a meaningful trade-off between generating precise, anatomically defined results and maintaining patient-specific appearance, such as skin tone and shape.
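A standard way to inject categorical clinical metadata into a Pix2Pix-style generator is to embed the codes and broadcast them as extra input channels, as sketched below. ACID-GAN's dedicated metadata encoder and its colour, anatomical, and supervision losses are not reproduced here; the category counts and embedding size are assumptions.

```python
# Sketch: condition an image-to-image generator on surgery type and laterality.
import torch
import torch.nn as nn

class MetadataConditioner(nn.Module):
    def __init__(self, n_surgery_types=4, n_lateralities=2, emb_dim=8):
        super().__init__()
        self.surgery_emb = nn.Embedding(n_surgery_types, emb_dim)
        self.side_emb = nn.Embedding(n_lateralities, emb_dim)

    def forward(self, image, surgery_id, side_id):
        b, _, h, w = image.shape
        meta = torch.cat([self.surgery_emb(surgery_id),
                          self.side_emb(side_id)], dim=1)          # (b, 2*emb_dim)
        meta_maps = meta[:, :, None, None].expand(b, meta.shape[1], h, w)
        # The generator then consumes the image plus the metadata channels.
        return torch.cat([image, meta_maps], dim=1)
```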
2025
Authors
Teixeira, LF; Montenegro, H; Bonci, E; Cardoso, MJ; Cardoso, JS;
Publication
Artificial Intelligence and Imaging for Diagnostic and Treatment Challenges in Breast Care - Second Deep Breast Workshop, Deep-Breath 2025, Held in Conjunction with MICCAI 2025, Daejeon, South Korea, September 23, 2025, Proceedings
Abstract
Breast cancer locoregional treatment includes a wide variety of procedures with diverse aesthetic outcomes. The aesthetic assessment of such procedures is typically subjective, hindering the fair comparison of their outcomes and consequently restricting evidence-based improvements. Most objective evaluation tools were developed for conservative surgery, focusing on asymmetries while ignoring other relevant traits. To overcome these limitations, we propose SiameseOrdinalCLIP, an ordinal classification network based on image-text matching and pairwise ranking optimisation for the aesthetic evaluation of breast cancer treatment. Furthermore, we integrate a concept bottleneck module into the network for increased explainability. Experiments on a private dataset show that the proposed model surpasses state-of-the-art aesthetic evaluation and ordinal classification networks.
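Pairwise ranking optimisation over ordinal grades can be expressed with a margin ranking loss on pairs of images whose aesthetic grades differ, as in the generic sketch below; SiameseOrdinalCLIP's image-text matching and concept bottleneck are not reproduced, and the margin value and tie handling are assumptions.

```python
# Generic pairwise ranking loss for ordinal aesthetic grades.
import torch
import torch.nn as nn

ranking_loss = nn.MarginRankingLoss(margin=0.5)  # assumed margin

def pairwise_ordinal_loss(score_a, score_b, grade_a, grade_b):
    # target = +1 where image A has the higher grade, -1 otherwise;
    # tied pairs would typically be skipped or handled by a separate term.
    target = torch.where(grade_a > grade_b,
                         torch.ones_like(score_a), -torch.ones_like(score_a))
    return ranking_loss(score_a, score_b, target)
```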
2025
Authors
Teixeira, J; Klöckner, P; Montezuma, D; Cesur, ME; Fraga, J; Horlings, HM; Cardoso, JS; de Oliveira, SP;
Publication
Deep Generative Models - 5th MICCAI Workshop, DGM4MICCAI 2025, Held in Conjunction with MICCAI 2025, Daejeon, South Korea, September 23, 2025, Proceedings
Abstract
In addition to evaluating tumor morphology using H&E staining, immunohistochemistry (IHC) is used to assess the presence of specific proteins within the tissue. However, IHC is a costly and labor-intensive technique, for which virtual staining, framed as an image-to-image translation task, offers a promising alternative. This is an emerging field of research, with 64% of published studies appearing in 2024 alone. Most studies use publicly available datasets of H&E-IHC pairs from consecutive tissue sections. Recognizing the training challenges, many authors develop complex virtual staining models based on conditional Generative Adversarial Networks but ignore the impact of the adversarial loss on the quality of virtual staining. Furthermore, overlooking issues of model evaluation, they claim improved performance based on metrics such as SSIM and PSNR, which are not sufficiently robust to evaluate the quality of virtually stained images. In this paper, we develop CSSP2P GAN, which we demonstrate achieves heightened pathological fidelity through a blind evaluation by a pathology expert. Furthermore, while iteratively developing our model, we study the impact of the adversarial loss and demonstrate its crucial role in the quality of virtually stained images. Finally, comparing our model with reference works in the field, we underscore the limitations of currently used evaluation metrics and demonstrate the superior performance of CSSP2P GAN.
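For context on the evaluation critique, the SSIM and PSNR scores in question are typically computed as below (scikit-image); high values can coexist with poor pathological fidelity, which is what motivates the blind expert evaluation. The arrays here are synthetic stand-ins, not data from the paper.

```python
# Computing the image-similarity metrics criticised above.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
real = rng.random((256, 256, 3)).astype(np.float32)  # stand-in for a real IHC tile
fake = np.clip(real + 0.05 * rng.standard_normal(real.shape), 0, 1).astype(np.float32)

psnr = peak_signal_noise_ratio(real, fake, data_range=1.0)
ssim = structural_similarity(real, fake, channel_axis=-1, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```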