Details


  • Name

    Mohammad Hossein Zolfagharnasab
  • Position

    Research Assistant
  • Since

    25 September 2023
Publications

2026

Automatic prediction and evaluation of aesthetic outcomes in plastic and oncological surgery: a systematic review

Authors
Montenegro, H; Zolfagharnasab, MH; Teixeira, F; Pinto, G; Santos, J; Ferreira, P; Bonci, EA; Mavioso, C; Cardoso, MJ; Cardoso, JS;

Publication
ARCHIVES OF COMPUTATIONAL METHODS IN ENGINEERING

Abstract
Aesthetic outcomes in plastic and oncological surgery play a fundamental role in restoring patients' self-esteem, social engagement, and overall quality of life. Yet, managing pre-operative expectations and objectively assessing post-operative results remain difficult challenges, compounded by the subjective nature of beauty and the scarcity of standardized evaluation tools. To address these challenges, we conduct a systematic review assessing computational methods for the prediction and evaluation of the aesthetic outcomes of plastic and oncological surgery, adhering to PRISMA guidelines. We propose a goal-oriented taxonomy that partitions computational approaches into two main categories: (1) prediction methods that pre-operatively predict the results of surgery through retrieval-based systems, generative artificial intelligence, and advanced 3D modeling techniques, and (2) evaluation strategies that assess the post-operative outcomes through objective measurements, traditional machine learning, and deep learning models. Our synthesis indicates a potential paradigm shift from early work that relied on manual image annotation and manipulation to recent research that predominantly employs artificial intelligence. Nevertheless, over 90% of datasets remain private, and validation processes diverge among techniques with similar goals, limiting reproducibility and fair methodological comparisons. We conclude by advocating for the creation of larger publicly accessible datasets, integration of vision-language models to capture patient-reported outcomes, and rigorous clinical validation to ensure equitable, patient-centered care. By bridging computational innovation with clinical practice, this study contributes towards more transparent, reliable, and personalized aesthetic outcome prediction and assessment.

2025

Predicting Aesthetic Outcomes of Breast Cancer Surgery: A Robust and Explainable Image Retrieval Approach

Authors
Ferreira, P; Zolfagharnasab, MH; Gonçalves, T; Bonci, E; Mavioso, C; Cardoso, MJ; Cardoso, JS;

Publication
Deep-Breath@MICCAI

Abstract
Accurate retrieval of post-surgical images plays a critical role in surgical planning for breast cancer patients. However, current content-based image retrieval methods face challenges related to limited interpretability, poor robustness to image noise, and reduced generalization across clinical settings. To address these limitations, we propose a multistage retrieval pipeline integrating saliency-based explainability, noise-reducing image pre-processing, and ensemble learning. Evaluated on a dataset of post-operative breast cancer patient images, our approach achieves a contrastive accuracy of 77.67% for Excellent/Good and 84.98% for Fair/Poor outcomes, surpassing prior studies by 8.37% and 11.80%, respectively. Explainability analysis provides essential insight by showing that feature extractors often attend to irrelevant regions, thereby motivating targeted input refinement. Ablations show that expanded bounding-box inputs improve performance over original images, with gains of 0.78% and 0.65% contrastive accuracy for Excellent/Good and Fair/Poor, respectively. In contrast, the use of segmented images leads to a performance drop (1.33% and 1.65%) due to the loss of contextual cues. Furthermore, ensemble learning yields additional gains of 0.89% and 3.60% over the best-performing single-model baselines. These findings underscore the importance of targeted input refinement and ensemble integration for robust and generalizable image retrieval systems.
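The contrastive accuracy reported in this abstract is a standard triplet-based retrieval metric: the fraction of (anchor, positive, negative) triplets in which the anchor embedding lies closer to the positive than to the negative. A minimal NumPy sketch (the function name and Euclidean setup are illustrative, not the paper's implementation):

```python
import numpy as np

def contrastive_accuracy(anchors, positives, negatives):
    """Fraction of triplets where the anchor embedding is closer to the
    positive than to the negative. All inputs: (n, d) embedding arrays."""
    d_pos = np.linalg.norm(anchors - positives, axis=1)
    d_neg = np.linalg.norm(anchors - negatives, axis=1)
    return float(np.mean(d_pos < d_neg))
```

Any embedding distance (e.g. cosine) could be substituted for the Euclidean norm without changing the structure of the metric.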

2025

Towards Robust Breast Segmentation: Leveraging Depth Awareness and Convexity Optimization For Tackling Data Scarcity

Authors
Zolfagharnasab, MH; Gonçalves, T; Ferreira, P; Cardoso, MJ; Cardoso, JS;

Publication
Deep-Breath@MICCAI

Abstract
Breast segmentation plays a critical role in objective pre- and post-operative aesthetic evaluation but is challenged by limited data (privacy concerns), class imbalance, and anatomical variability. In response to these obstacles, we introduce an encoder–decoder framework with a Segment Anything Model (SAM) backbone, enhanced with synthetic depth maps and a multi-term loss combining weighted cross-entropy, convexity, and depth-alignment constraints. Evaluated on a 120-patient dataset split into 70% training, 10% validation, and 20% testing, our approach achieves a balanced test Dice score of 98.75%, a 4.5% improvement over prior methods, with Dice of 95.5% (breast) and 89.2% (nipple). Ablations show that depth injection reduces noise and focuses attention on anatomical regions, yielding Dice gains of 0.47% (body) and 1.04% (breast). Geometric alignment increases convexity by almost 3%, up to 99.86%, enhancing the geometric plausibility of the nipple masks. Lastly, cross-dataset evaluation on CINDERELLA samples demonstrates robust generalization, with a small performance gain primarily attributable to differences in annotation styles.
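The Dice score used to evaluate the segmentation results above measures overlap between a predicted and a reference binary mask. A minimal NumPy sketch (illustrative only, not the paper's code):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient 2|A∩B| / (|A|+|B|) between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

The small `eps` keeps the score defined when both masks are empty; class-wise scores (breast, nipple, body) come from applying the same formula per label.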

2025

Towards Utilizing Robust Radiance Fields for 3D Reconstruction of Breast Aesthetics

Authors
Pinto, G; Zolfagharnasab, MH; Teixeira, LF; Cruz, H; Cardoso, MJ; Cardoso, JS;

Publication
Deep-Breath@MICCAI

Abstract
3D models are crucial in predicting aesthetic outcomes in breast reconstruction, supporting personalized surgical planning, and improving patient communication. In response to this necessity, this work presents the first application of Radiance Fields to 3D breast reconstruction. Building on this, it compares six state-of-the-art (SoTA) 3D reconstruction models and introduces a novel variant tailored to medical contexts: Depth-Splatfacto, designed to improve denoising and geometric consistency through pseudo-depth supervision. Additionally, we extend model training to grayscale, which enhances robustness under grayscale-only input constraints. Experiments on a breast cancer patient dataset demonstrate that Splatfacto consistently outperforms the others, delivering the highest reconstruction quality (PSNR 27.11, SSIM 0.942) and the fastest training times (×1.3 faster at 200k iterations). At the same time, the depth-enhanced variant offers an efficient and stable alternative with minimal fidelity loss. Grayscale training improves speed by ×1.6 with a PSNR drop of 0.70. Depth-Splatfacto further improves robustness, reducing PSNR variance by 10% and making images less blurry across test cases. These results establish a foundation for future clinical applications, supporting personalized surgical planning and improved patient-doctor communication.
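The PSNR figures quoted above measure reconstruction fidelity against a reference image on a logarithmic scale. A minimal sketch, assuming images normalised to [0, 1] (not the paper's implementation):

```python
import numpy as np

def psnr(img, ref, max_val=1.0):
    """Peak signal-to-noise ratio in dB between an image and a reference."""
    mse = np.mean((img - ref) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For 8-bit images, `max_val` would be 255 instead; SSIM, the other metric reported, additionally compares local structure rather than per-pixel error.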

2025

Predicting Aesthetic Outcomes in Breast Cancer Surgery: A Multimodal Retrieval Approach

Authors
Zolfagharnasab, MH; Freitas, N; Gonçalves, T; Bonci, E; Mavioso, C; Cardoso, MJ; Oliveira, HP; Cardoso, JS;

Publication
ARTIFICIAL INTELLIGENCE AND IMAGING FOR DIAGNOSTIC AND TREATMENT CHALLENGES IN BREAST CARE, DEEP-BREATH 2024

Abstract
Breast cancer treatments often affect patients' body image, making aesthetic outcome predictions vital. This study introduces a Deep Learning (DL) multimodal retrieval pipeline using a dataset of 2,193 instances combining clinical attributes and RGB images of patients' upper torsos. We evaluate four retrieval techniques: Weighted Euclidean Distance (WED) with various configurations and a shallow Artificial Neural Network (ANN) for tabular data, pre-trained and fine-tuned Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs), and a multimodal approach combining both data types. The dataset, categorised into Excellent/Good and Fair/Poor outcomes, is organised into over 20K triplets for training and testing. Results show that fine-tuned multimodal ViTs notably enhance performance, achieving up to 73.85% accuracy and 80.62% Adjusted Discounted Cumulative Gain (ADCG). This framework not only aids in managing patient expectations by retrieving the most relevant post-surgical images but also promises broad applications in medical image analysis and retrieval. The main contributions of this paper are the development of a multimodal retrieval system for breast cancer patients based on post-surgery aesthetic outcome and the evaluation of different models on a new dataset annotated by clinicians for image retrieval.
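Weighted Euclidean Distance (WED) retrieval over tabular clinical attributes, one of the baselines named in this abstract, ranks database entries by a per-feature-weighted distance to the query. A minimal sketch (function name and weights are illustrative, not the paper's configuration):

```python
import numpy as np

def wed_retrieve(query, database, weights, k=5):
    """Return indices of the k database rows closest to the query under a
    weighted Euclidean distance. database: (n, d), query/weights: (d,)."""
    diff = database - query                     # broadcast over rows
    dists = np.sqrt(((diff ** 2) * weights).sum(axis=1))
    return np.argsort(dists)[:k]                # nearest first
```

Raising a feature's weight makes mismatches on that attribute (e.g. a key clinical variable) dominate the ranking, which is how the "various configurations" mentioned above would differ.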