2025
Authors
Zolfagharnasab, MH; Gonçalves, T; Ferreira, P; Cardoso, MJ; Cardoso, JS;
Publication
Artificial Intelligence and Imaging for Diagnostic and Treatment Challenges in Breast Care - Second Deep Breast Workshop, Deep-Breath 2025, Held in Conjunction with MICCAI 2025, Daejeon, South Korea, September 23, 2025, Proceedings
Abstract
Breast segmentation plays a critical role in objective pre- and post-operative aesthetic evaluation, but is challenged by limited data (privacy concerns), class imbalance, and anatomical variability. In response to these obstacles, we introduce an encoder–decoder framework with a Segment Anything Model (SAM) backbone, enhanced with synthetic depth maps and a multi-term loss combining weighted cross-entropy, convexity, and depth-alignment constraints. Evaluated on a 120-patient dataset split into 70% training, 10% validation, and 20% testing, our approach achieves a balanced test Dice score of 98.75% (a 4.5% improvement over prior methods), with Dice scores of 95.5% (breast) and 89.2% (nipple). Ablations show that depth injection reduces noise and focuses attention on anatomical regions, yielding Dice gains of 0.47% (body) and 1.04% (breast). Geometric alignment increases convexity by almost 3%, up to 99.86%, enhancing the geometric plausibility of the nipple masks. Lastly, cross-dataset evaluation on CINDERELLA samples demonstrates robust generalization, with a small performance gain primarily attributable to differences in annotation styles. © 2025 Elsevier B.V., All rights reserved.
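A multi-term segmentation loss of the kind described in this abstract can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the function names, the L1 form of the depth-alignment term, the loss weights, and the externally supplied convexity penalty are all assumptions.

```python
import numpy as np

def weighted_cross_entropy(probs, targets, class_weights):
    """Per-pixel weighted cross-entropy, averaged over the image.

    probs:         (H, W, C) predicted class probabilities
    targets:       (H, W) integer class labels
    class_weights: (C,) per-class weights (higher for rare classes, e.g. nipple)
    """
    h, w = targets.shape
    eps = 1e-12
    # probability assigned to the true class at each pixel
    p_true = probs[np.arange(h)[:, None], np.arange(w)[None, :], targets]
    wmap = class_weights[targets]
    return float(np.mean(-wmap * np.log(p_true + eps)))

def multi_term_loss(probs, targets, depth_pred, depth_ref, class_weights,
                    lam_depth=0.1, lam_convex=0.05, convexity_penalty=0.0):
    """Weighted CE plus a depth-alignment L1 term (against a synthetic
    depth map) and a precomputed convexity penalty for the masks."""
    ce = weighted_cross_entropy(probs, targets, class_weights)
    depth = float(np.mean(np.abs(depth_pred - depth_ref)))
    return ce + lam_depth * depth + lam_convex * convexity_penalty
```

With a perfect prediction the cross-entropy term vanishes and the remaining loss comes entirely from the weighted depth and convexity terms.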
2025
Authors
Santos, J; Montenegro, H; Bonci, E; Cardoso, MJ; Cardoso, JS;
Publication
Artificial Intelligence and Imaging for Diagnostic and Treatment Challenges in Breast Care - Second Deep Breast Workshop, Deep-Breath 2025, Held in Conjunction with MICCAI 2025, Daejeon, South Korea, September 23, 2025, Proceedings
Abstract
Breast cancer patients often face difficulties when choosing among diverse surgical options. To aid patients, this paper proposes ACID-GAN (Anatomically and Clinically Informed Deep Generative Adversarial Network), a conditional generative model for predicting post-operative breast cancer outcomes using deep learning. Built on Pix2Pix, the model incorporates clinical metadata, such as surgery type and cancer laterality, by introducing a dedicated encoder for semantic supervision. Further improvements include colour-preservation and anatomically informed losses, as well as clinical supervision via segmentation and classification modules. Experiments on a private dataset demonstrate that the model produces realistic, context-aware predictions. The results show a meaningful trade-off between generating precise, anatomically defined results and maintaining patient-specific appearance, such as skin tone and shape. © 2025 Elsevier B.V., All rights reserved.
2025
Authors
Teixeira, LF; Montenegro, H; Bonci, E; Cardoso, MJ; Cardoso, JS;
Publication
Artificial Intelligence and Imaging for Diagnostic and Treatment Challenges in Breast Care - Second Deep Breast Workshop, Deep-Breath 2025, Held in Conjunction with MICCAI 2025, Daejeon, South Korea, September 23, 2025, Proceedings
Abstract
Breast cancer locoregional treatment includes a wide variety of procedures with diverse aesthetic outcomes. The aesthetic assessment of such procedures is typically subjective, hindering fair comparison between their outcomes and consequently restricting evidence-based improvements. Most objective evaluation tools were developed for conservative surgery, focusing on asymmetries while ignoring other relevant traits. To overcome these limitations, we propose SiameseOrdinalCLIP, an ordinal classification network based on image-text matching and pairwise ranking optimisation for the aesthetic evaluation of breast cancer treatment. Furthermore, we integrate a concept bottleneck module into the network for increased explainability. Experiments on a private dataset show that the proposed model surpasses state-of-the-art aesthetic evaluation and ordinal classification networks. © 2025 Elsevier B.V., All rights reserved.
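As a rough illustration of how CLIP-style image-text matching can be combined with pairwise ranking over ordinal aesthetic grades, consider the sketch below. The prompt-per-grade scheme, the softmax expectation, and the margin value are assumptions for illustration, not the SiameseOrdinalCLIP implementation.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def ordinal_score(img_emb, prompt_embs):
    """CLIP-style matching: similarity of an image embedding to one text
    prompt per ordinal grade (e.g. "poor" ... "excellent"); the softmax
    expectation over grade indices gives a continuous aesthetic score."""
    sims = np.array([cosine(img_emb, p) for p in prompt_embs])
    probs = np.exp(sims) / np.exp(sims).sum()
    return float(np.dot(probs, np.arange(len(prompt_embs))))

def pairwise_ranking_loss(score_better, score_worse, margin=0.5):
    """Hinge ranking loss: penalise when the better-rated image does not
    out-score the worse-rated one by at least the margin."""
    return max(0.0, margin - (score_better - score_worse))
```

Training on pairs with such a hinge term only needs relative orderings of images, which is what makes pairwise optimisation a natural fit for subjective ordinal labels.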
2025
Authors
Teixeira, J; Klöckner, P; Montezuma, D; Cesur, ME; Fraga, J; Horlings, HM; Cardoso, JS; de Oliveira, SP;
Publication
Deep Generative Models - 5th MICCAI Workshop, DGM4MICCAI 2025, Held in Conjunction with MICCAI 2025, Daejeon, South Korea, September 23, 2025, Proceedings
Abstract
In addition to evaluating tumor morphology using H&E staining, immunohistochemistry (IHC) is used to assess the presence of specific proteins within the tissue. However, IHC is a costly and labor-intensive technique, for which virtual staining, framed as an image-to-image translation task, offers a promising alternative. This is an emerging field of research, with 64% of published studies appearing in 2024 alone. Most studies use publicly available datasets of H&E-IHC pairs from consecutive tissue sections. Recognizing the training challenges, many authors develop complex virtual staining models based on conditional Generative Adversarial Networks but ignore the impact of the adversarial loss on the quality of virtual staining. Furthermore, overlooking issues of model evaluation, they claim improved performance based on metrics such as SSIM and PSNR, which are not sufficiently robust to assess the quality of virtually stained images. In this paper, we develop CSSP2P GAN, which we demonstrate achieves heightened pathological fidelity through a blind evaluation by a pathology expert. Furthermore, while iteratively developing our model, we study the impact of the adversarial loss and demonstrate its crucial role in the quality of virtually stained images. Finally, comparing our model with reference works in the field, we underscore the limitations of the currently used evaluation metrics and demonstrate the superior performance of CSSP2P GAN. © 2025 Elsevier B.V., All rights reserved.
2025
Authors
Pinto, G; Zolfagharnasab, MH; Teixeira, LF; Cruz, H; Cardoso, MJ; Cardoso, JS;
Publication
Artificial Intelligence and Imaging for Diagnostic and Treatment Challenges in Breast Care - Second Deep Breast Workshop, Deep-Breath 2025, Held in Conjunction with MICCAI 2025, Daejeon, South Korea, September 23, 2025, Proceedings
Abstract
3D models are crucial for predicting aesthetic outcomes in breast reconstruction, supporting personalized surgical planning, and improving patient communication. In response to this need, this work presents the first application of Radiance Fields to 3D breast reconstruction. Building on this, it compares six state-of-the-art 3D reconstruction models and introduces a novel variant tailored to medical contexts, Depth-Splatfacto, designed to improve denoising and geometric consistency through pseudo-depth supervision. Additionally, we extend model training to grayscale, which enhances robustness under grayscale-only input constraints. Experiments on a breast cancer patient dataset demonstrate that Splatfacto consistently outperforms the others, delivering the highest reconstruction quality (PSNR 27.11, SSIM 0.942) and the fastest training times (×1.3 faster at 200k iterations), while the depth-enhanced variant offers an efficient and stable alternative with minimal fidelity loss. Grayscale training improves speed by ×1.6 with a PSNR drop of 0.70. Depth-Splatfacto further improves robustness, reducing PSNR variance by 10% and yielding less blurry images across test cases. These results establish a foundation for future clinical applications, supporting personalized surgical planning and improved patient-doctor communication. © 2025 Elsevier B.V., All rights reserved.
2025
Authors
Capozzi, L; Cardoso, JS; Rebelo, A;
Publication
IEEE ACCESS
Abstract
In recent years, person re-identification (Re-ID) has improved considerably with advances in deep learning methodologies. However, occluded person Re-ID remains a challenging task, as parts of the individual's body are frequently hidden by objects, obstacles, or other people, making identification more difficult. To address these issues, we introduce a novel data augmentation strategy using artificial occlusions, consisting of random shapes and objects drawn from a small, purpose-built image dataset. We also propose an end-to-end methodology for occluded person Re-ID consisting of three branches: a global branch, a feature-dropping branch, and an occlusion detection branch. Experimental results show that random-shape occlusions are superior to random erasing with our architecture. Results on six datasets covering three tasks (holistic, partial, and occluded person Re-ID) demonstrate that our method performs favourably against state-of-the-art methodologies.
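A minimal version of an artificial-occlusion augmentation of this kind might look like the sketch below. It uses only axis-aligned rectangles of random colour, whereas the paper also pastes arbitrary shapes and object crops from its own small dataset, so the size fractions, colour choice, and function name are all assumptions.

```python
import random

import numpy as np

def occlude_random_shape(img, min_frac=0.1, max_frac=0.3, rng=None):
    """Return a copy of `img` (H, W, C uint8) with a randomly sized and
    placed rectangle painted over it, simulating an occluding object
    for Re-ID training (simplified sketch)."""
    rng = rng or random.Random()
    h, w, c = img.shape
    oh = int(h * rng.uniform(min_frac, max_frac))  # occluder height
    ow = int(w * rng.uniform(min_frac, max_frac))  # occluder width
    y = rng.randrange(0, h - oh + 1)               # top-left corner
    x = rng.randrange(0, w - ow + 1)
    out = img.copy()
    # fill with a random non-background colour per channel
    out[y:y + oh, x:x + ow] = [rng.randrange(1, 256) for _ in range(c)]
    return out
```

Unlike random erasing, which typically fills with noise or a constant, pasting solid coloured shapes (or object crops) more closely mimics the real occluders the abstract describes.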