2026
Authors
Santos, J; Montenegro, H; Bonci, E; Cardoso, MJ; Cardoso, JS;
Publication
ARTIFICIAL INTELLIGENCE AND IMAGING FOR DIAGNOSTIC AND TREATMENT CHALLENGES IN BREAST CARE, DEEP-BREATH 2025
Abstract
Breast cancer patients often face difficulties when choosing among diverse surgeries. To aid patients, this paper proposes ACID-GAN (Anatomically and Clinically Informed Deep Generative Adversarial Network), a conditional generative model for predicting post-operative breast cancer outcomes using deep learning. Built on Pix2Pix, the model incorporates clinical metadata, such as surgery type and cancer laterality, through a dedicated encoder for semantic supervision. Further improvements include colour-preservation and anatomically informed losses, as well as clinical supervision via segmentation and classification modules. Experiments on a private dataset show that the model produces realistic, context-aware predictions and achieves a meaningful trade-off between generating precise, anatomically defined results and preserving patient-specific appearance, such as skin tone and shape.
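The abstract does not include implementation details, but the conditioning idea can be sketched. A common way to feed clinical metadata into a Pix2Pix-style generator is to one-hot encode the attributes and broadcast them as extra input channels alongside the image. The attribute vocabularies below (`SURGERY_TYPES`, `LATERALITY`) are illustrative assumptions, not the paper's actual categories.

```python
import numpy as np

# Hypothetical attribute vocabularies (assumed for illustration only).
SURGERY_TYPES = ["conservative", "mastectomy", "reconstruction"]
LATERALITY = ["left", "right"]

def encode_metadata(surgery: str, laterality: str) -> np.ndarray:
    """One-hot encode the clinical attributes into a single vector."""
    vec = np.zeros(len(SURGERY_TYPES) + len(LATERALITY), dtype=np.float32)
    vec[SURGERY_TYPES.index(surgery)] = 1.0
    vec[len(SURGERY_TYPES) + LATERALITY.index(laterality)] = 1.0
    return vec

def condition_image(image: np.ndarray, meta: np.ndarray) -> np.ndarray:
    """Tile the metadata vector into constant feature maps and stack them
    onto the image channels, giving a (C + M, H, W) generator input."""
    c, h, w = image.shape
    maps = np.broadcast_to(meta[:, None, None], (meta.size, h, w))
    return np.concatenate([image, maps], axis=0)

img = np.random.rand(3, 8, 8).astype(np.float32)
meta = encode_metadata("mastectomy", "left")
x = condition_image(img, meta)
print(x.shape)  # (8, 8, 8): 3 image channels + 5 metadata channels
```

ACID-GAN's dedicated metadata encoder is described as providing semantic supervision rather than plain channel concatenation; this sketch only shows the simplest conditioning baseline it builds beyond.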
2026
Authors
Teixeira, F; Montenegro, H; Bonci, E; Cardoso, MJ; Cardoso, JS;
Publication
ARTIFICIAL INTELLIGENCE AND IMAGING FOR DIAGNOSTIC AND TREATMENT CHALLENGES IN BREAST CARE, DEEP-BREATH 2025
Abstract
Breast cancer locoregional treatment includes a wide variety of procedures with diverse aesthetic outcomes. The aesthetic assessment of such procedures is typically subjective, hindering fair comparison of their outcomes and consequently restricting evidence-based improvements. Most objective evaluation tools were developed for conservative surgery, focusing on asymmetries while ignoring other relevant traits. To overcome these limitations, we propose SiameseOrdinalCLIP, an ordinal classification network based on image-text matching and pairwise ranking optimisation for the aesthetic evaluation of breast cancer treatment. Furthermore, we integrate a concept bottleneck module into the network for increased explainability. Experiments on a private dataset show that the proposed model surpasses state-of-the-art aesthetic evaluation and ordinal classification networks.
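The pairwise ranking optimisation mentioned above can be illustrated with a minimal sketch: a Siamese network scores two images, and a hinge-style loss penalises pairs whose scores disagree with their ordinal aesthetic grades. The margin value is an assumption for illustration; the paper's exact loss formulation may differ.

```python
# Hinge-based pairwise ranking loss: for a pair of images whose true grades
# satisfy y_better > y_worse, the better-rated image's score should exceed
# the worse one's by at least `margin`.

def pairwise_ranking_loss(s_better: float, s_worse: float, margin: float = 1.0) -> float:
    """Zero when the pair is ranked correctly with sufficient margin,
    linearly increasing otherwise."""
    return max(0.0, margin - (s_better - s_worse))

print(pairwise_ranking_loss(2.5, 0.5))            # 0.0: correctly ranked
print(round(pairwise_ranking_loss(0.2, 0.8), 2))  # 1.6: mis-ranked pair is penalised
```

Optimising over many such pairs pushes the network toward scores that respect the ordinal structure of the aesthetic grades, which is what distinguishes this setup from plain multi-class classification.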
2026
Authors
Teixeira, J; Klöckner, P; Montezuma, D; Cesur, ME; Fraga, J; Horlings, HM; Cardoso, JS; Oliveira, SP;
Publication
DEEP GENERATIVE MODELS, DGM4MICCAI 2025
Abstract
In addition to evaluating tumor morphology using H&E staining, immunohistochemistry (IHC) is used to assess the presence of specific proteins within the tissue. However, IHC is a costly and labor-intensive technique, for which virtual staining, framed as an image-to-image translation task, offers a promising alternative. This is an emerging field of research, with 64% of published studies appearing in 2024 alone. Most studies use publicly available datasets of H&E-IHC pairs from consecutive tissue sections. Recognizing the training challenges, many authors develop complex virtual staining models based on conditional Generative Adversarial Networks but ignore the impact of the adversarial loss on the quality of virtual staining. Furthermore, overlooking the issues of model evaluation, they claim improved performance based on metrics such as SSIM and PSNR, which are not sufficiently robust to evaluate the quality of virtually stained images. In this paper, we develop CSSP2P GAN, which we demonstrate achieves heightened pathological fidelity through a blind evaluation by a pathology expert. While iteratively developing our model, we study the impact of the adversarial loss and demonstrate its crucial role in the quality of virtually stained images. Finally, comparing our model with reference works in the field, we underscore the limitations of the currently used evaluation metrics and demonstrate the superior performance of CSSP2P GAN.
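To make the metric critique concrete, here is PSNR as it is commonly defined. It is a purely pixel-wise error score: two images can agree pixel by pixel on average while disagreeing on the localised staining patterns a pathologist cares about, which is the kind of limitation the paper highlights. This is a standard definition, not the paper's evaluation code.

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and test image,
    both assumed to lie in [0, max_val]."""
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.zeros((4, 4))
test = np.full((4, 4), 0.1)     # uniform 0.1 offset everywhere
print(round(psnr(ref, test), 2))  # 20.0
```

Because the score depends only on the mean squared error, any error distribution with the same MSE (uniform haze or a few grossly mis-stained regions) yields the same PSNR, hence the paper's case for blind expert evaluation.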
2026
Authors
Pinto, G; Zolfagharnasab, MH; Teixeira, LF; Cruz, H; Cardoso, MJ; Cardoso, JS;
Publication
ARTIFICIAL INTELLIGENCE AND IMAGING FOR DIAGNOSTIC AND TREATMENT CHALLENGES IN BREAST CARE, DEEP-BREATH 2025
Abstract
3D models are crucial for predicting aesthetic outcomes in breast reconstruction, supporting personalized surgical planning, and improving patient communication. In response to this necessity, this work presents the first application of Radiance Fields to 3D breast reconstruction. Building on this, it compares six state-of-the-art 3D reconstruction models and introduces a novel variant tailored to medical contexts: Depth-Splatfacto, designed to improve denoising and geometric consistency through pseudo-depth supervision. Additionally, we extended model training to grayscale, which enhances robustness under grayscale-only input constraints. Experiments on a breast cancer patient dataset demonstrate that Splatfacto consistently outperforms the alternatives, delivering the highest reconstruction quality (PSNR 27.11, SSIM 0.942) and the fastest training times (1.3x faster at 200k iterations), while the depth-enhanced variant offers an efficient and stable alternative with minimal fidelity loss. Grayscale training improves speed by 1.6x with a PSNR drop of 0.70. Depth-Splatfacto further improves robustness, reducing PSNR variance by 10% and producing sharper images across test cases. These results establish a foundation for future clinical applications, supporting personalized surgical planning and improved patient-doctor communication.
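The pseudo-depth supervision idea can be sketched as an extra loss term: alongside the usual photometric rendering loss, the rendered depth is regularised toward a pseudo-depth map (e.g. from a monocular depth estimator). The L2/L1 split and the weight `lam` are assumptions for illustration; the abstract does not specify Depth-Splatfacto's exact objective.

```python
import numpy as np

def depth_supervised_loss(rgb_pred: np.ndarray, rgb_gt: np.ndarray,
                          depth_pred: np.ndarray, pseudo_depth: np.ndarray,
                          lam: float = 0.1) -> float:
    """Photometric rendering loss plus a weighted depth-consistency term."""
    photometric = np.mean((rgb_pred - rgb_gt) ** 2)          # standard L2 image term
    depth_term = np.mean(np.abs(depth_pred - pseudo_depth))  # L1 depth regulariser
    return float(photometric + lam * depth_term)

rgb = np.full((2, 2, 3), 0.5)
loss = depth_supervised_loss(rgb, rgb, np.ones((2, 2)), np.full((2, 2), 0.5))
print(round(loss, 3))  # 0.05: zero photometric error, 0.1 * 0.5 depth error
```

Pulling rendered depth toward a smooth pseudo-depth prior is one plausible mechanism for the reported denoising and geometric-consistency gains, since it penalises floaters whose depth disagrees with the prior.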
2026
Authors
Fernandes, L; Goncalves, T; Matos, J; Nakayama, L; Cardoso, JS;
Publication
FAIRNESS OF AI IN MEDICAL IMAGING, FAIMI 2025
Abstract
Diabetic retinopathy (DR) is a leading cause of vision loss in working-age adults. While screening reduces the risk of blindness, traditional imaging is often costly and inaccessible. Artificial intelligence (AI) algorithms present a scalable diagnostic solution, but concerns regarding fairness and generalization persist. This work evaluates the fairness and performance of image-trained models in DR prediction, as well as the impact of disentanglement as a bias mitigation technique, using the diverse mBRSET fundus dataset. Three models, ConvNeXt V2, DINOv2, and Swin V2, were trained on macula images to predict DR and sensitive attributes (SAs) (e.g., age and gender/sex). Fairness was assessed across subgroups of the SAs, and disentanglement was applied to reduce bias. All models achieved high DR prediction performance (up to 94% AUROC) and could reasonably predict age and gender/sex (91% and 77% AUROC, respectively). The fairness assessment suggests disparities, such as a 10% AUROC gap between age groups in DINOv2. Disentangling SAs from DR prediction had varying results depending on the model: it improved DINOv2 performance (2% AUROC gain) but led to performance drops in ConvNeXt V2 and Swin V2 (7% and 3%, respectively). These findings highlight the complexity of disentangling fine-grained features in fundus imaging and emphasize the importance of fairness in medical imaging AI to ensure equitable and reliable healthcare solutions.
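The subgroup fairness check described above (per-group AUROC and the gap between groups) can be sketched in a few lines. The data here is a toy example, not the mBRSET dataset or the paper's exact protocol.

```python
def auroc(scores, labels):
    """Probability that a random positive outscores a random negative
    (ties count as 0.5) - the rank-statistic definition of AUROC."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Two hypothetical age subgroups with identical labels but different score quality:
group_a = auroc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])  # well separated
group_b = auroc([0.6, 0.4, 0.6, 0.4], [1, 1, 0, 0])  # barely informative
print(group_a, group_b, round(group_a - group_b, 2))  # 1.0 0.5 0.5
```

A large per-subgroup AUROC gap like this, computed on real predictions, is exactly the kind of disparity the paper reports between age groups for DINOv2.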
2026
Authors
Capozzi, L; Ferreira, L; Gonçalves, T; Rebelo, A; Cardoso, JS; Sequeira, AF;
Publication
PATTERN RECOGNITION AND IMAGE ANALYSIS, IBPRIA 2025, PT II
Abstract
The rapid advancement of wireless technologies, particularly Wi-Fi, has spurred significant research into indoor human activity detection across various domains (e.g., healthcare, security, and industry). This work explores the non-invasive and cost-effective Wi-Fi paradigm and the application of deep learning for human activity recognition using Wi-Fi signals. Focusing on the challenges of machine interpretability, motivated by the increase in data availability and computational power, this paper uses explainable artificial intelligence to understand the inner workings of transformer-based deep neural networks designed to estimate human pose (i.e., human skeleton key points) from Wi-Fi channel state information. Using different strategies to assess the sub-carriers most relevant to the model predictions (i.e., rollout attention and masking attention), we evaluate the performance of the model when it uses a given number of sub-carriers as input, selected randomly or in descending (high-attention) or ascending (low-attention) order of relevance. We conclude that models trained with fewer (but relevant) sub-carriers remain competitive with the baseline (trained with all sub-carriers) while achieving better computational efficiency (i.e., processing more data per second).
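Attention rollout, one of the relevance strategies mentioned, combines per-layer attention maps by matrix multiplication, adding the identity to account for residual connections (Abnar & Zuidema, 2020). The sketch below applies it to toy row-stochastic attention maps and ranks "sub-carrier" tokens by the attention they receive; the selection step is an illustrative assumption, not the paper's code.

```python
import numpy as np

def attention_rollout(attentions):
    """Combine per-layer attention maps into one input-attribution map.
    Each layer is mixed with the identity (residual branch) and
    row-normalised before being multiplied into the running product."""
    n = attentions[0].shape[0]
    rollout = np.eye(n)
    for a in attentions:
        a = 0.5 * a + 0.5 * np.eye(n)          # account for the residual connection
        a = a / a.sum(axis=-1, keepdims=True)  # re-normalise rows
        rollout = a @ rollout
    return rollout

# Toy example: 3 transformer layers over 4 "sub-carrier" tokens.
rng = np.random.default_rng(0)
layers = [rng.random((4, 4)) for _ in range(3)]
layers = [a / a.sum(axis=-1, keepdims=True) for a in layers]  # row-stochastic maps
r = attention_rollout(layers)
importance = r.mean(axis=0)            # attention each sub-carrier token receives
ranked = np.argsort(importance)[::-1]  # high-attention sub-carriers first
print(r.sum(axis=-1))                  # each row still sums to 1
```

Keeping only the top-ranked sub-carriers as model input mirrors the high-attention selection experiment in the paper; since the product of row-stochastic matrices stays row-stochastic, the rollout map remains a valid attention distribution.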