
Publications by Margarida Gonçalves Gouveia

2025

From Pixels to Pathways: AI-Based Approaches for Multimodal Lung Cancer Classification

Authors
Goncalves, S; Sousa, JV; Gouveia, M; Amaro, M; Oliveira, P; Pereira, T;

Publication
Proceedings - 2025 IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2025

Abstract
Lung cancer remains the leading cause of cancer-related deaths globally, responsible for approximately 1.8 million deaths each year. A key contributor to this high mortality rate is the late-stage diagnosis of the disease, underscoring the urgent need for effective early detection strategies. Low-dose computed tomography (CT) has shown great value in early screening, particularly when paired with clinical information. Clinical data, while valuable, lacks the spatial and morphological insights essential for comprehensive evaluation. Combining both modalities offers a more holistic approach to lung cancer classification. This study presents AI-based methods for lung cancer classification using unimodal approaches - structured clinical data and chest CT imaging - alongside a novel multimodal deep learning framework that integrates both data types to classify lung nodules as malignant or benign. For the clinical modality, machine learning models including logistic regression, random forests, LightGBM, XGBoost, and multilayer perceptrons were evaluated with extensive hyperparameter tuning. For the imaging modality, ResNet18 and ResNet34 convolutional neural networks were used, with and without data augmentation. The study explored both intermediate and late fusion strategies to combine modality-specific representations. Results show that multimodal models consistently outperformed their unimodal counterparts, achieving a best-case area under the ROC curve (AUC) of 0.9138, with an accuracy of 0.8424 and an F1-score of 0.8422. These findings highlight the complementary strengths of imaging and clinical data and support the growing potential of multimodal deep learning in improving diagnostic accuracy in lung cancer classification. © 2025 IEEE.
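The abstract contrasts intermediate fusion (concatenating modality-specific feature embeddings before a joint classifier) with late fusion (combining modality-specific predictions). The paper's actual architectures are not given here, so the following is a minimal numpy sketch of the two strategies; the branch functions, weight shapes, and weighting parameter are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def clinical_branch(x_clin, W):
    # Hypothetical stand-in for the clinical MLP: one ReLU layer
    # producing a fixed-size embedding of the structured clinical data.
    return np.maximum(0.0, x_clin @ W)

def imaging_branch(x_img, W):
    # Hypothetical stand-in for CNN features (e.g. a ResNet18
    # penultimate layer), reduced here to a single ReLU projection.
    return np.maximum(0.0, x_img @ W)

def intermediate_fusion(f_clin, f_img, W_head, b_head=0.0):
    # Intermediate fusion: concatenate the two embeddings, then
    # classify jointly with a single linear head + sigmoid.
    fused = np.concatenate([f_clin, f_img], axis=-1)
    logit = fused @ W_head + b_head
    return 1.0 / (1.0 + np.exp(-logit))  # P(malignant)

def late_fusion(p_clin, p_img, w=0.5):
    # Late fusion: combine modality-specific probabilities instead
    # of features; w is an assumed mixing weight.
    return w * p_clin + (1.0 - w) * p_img
```

The design difference is where the modalities meet: intermediate fusion lets the classifier learn cross-modal interactions between features, while late fusion only blends already-formed opinions of each branch.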

2025

From CT Scans to 3D Printed Models: A Pipeline for Mandible Surgical Planning

Authors
Saraiva, A; Gouveia, M; Lopes, C; Marinho, J; Pereira, T; Mendes, J;

Publication
BIBM

Abstract
Accurate surgical planning is critical in mandibular reconstruction to restore oncology patients' function and aesthetics. However, the use of physical three-dimensional (3D) models is often limited by time-consuming manual segmentation procedures or the high cost of commercial solutions. This work addresses the need for an accessible, quick, and low-cost pipeline to obtain a 3D printed model of the segmented mandible from a Computed Tomography (CT) scan. The automatic segmentation stage relied on the two-dimensional U-Net architecture, which was trained and validated with slices from two public datasets (PDDCA, HaN-Seg) and tested on two further public datasets (TCIA RT, Austrian). The best model achieved an average dice similarity coefficient (DSC) of 0.912 ± 0.077 across all test sets. The segmentation output was reconstructed into a 3D volume, improved through a post-processing method (morphological closing, upsampling, smoothing, and mesh reduction), and 3D printed through fused deposition modelling. The assessment of a stomatologist confirmed overall high anatomical fidelity to the CT and clinical utility, although further improvements in important fine anatomical elements were suggested. This solution offers a promising alternative for producing 3D personalised mandibles for surgical planning, reducing time and manual effort while improving quality and accessibility. Future work may explore the use of 3D deep learning architectures and a broader evaluation of the 3D mandible models. © 2025 IEEE.
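The abstract lists a post-processing chain on the segmentation volume: morphological closing, upsampling, and smoothing (mesh reduction happens later, on the surface mesh). A minimal numpy-only sketch of the volumetric steps follows; the structuring element, upsampling factor, and box-filter smoothing are illustrative assumptions, since the paper's exact parameters are not stated here.

```python
import numpy as np

def _neighborhood(vol, reduce_fn):
    # Apply reduce_fn over each voxel's 3x3x3 neighborhood
    # (zero-padded at the borders).
    p = np.pad(vol, 1)
    n0, n1, n2 = vol.shape
    shifted = [p[1 + dx:1 + dx + n0, 1 + dy:1 + dy + n1, 1 + dz:1 + dz + n2]
               for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)]
    return reduce_fn(np.stack(shifted), axis=0)

def postprocess_mask(mask, upsample=2):
    """Illustrative post-processing of a binary mandible mask."""
    # 1. Morphological closing (dilation then erosion) fills small
    #    holes and gaps in the segmentation.
    closed = _neighborhood(_neighborhood(mask, np.max), np.min)
    # 2. Nearest-neighbour upsampling increases voxel resolution
    #    before surface extraction.
    up = closed.repeat(upsample, 0).repeat(upsample, 1).repeat(upsample, 2)
    # 3. Smoothing + re-thresholding (a box filter here, as a stand-in
    #    for the paper's unspecified smoothing) softens staircase edges.
    return _neighborhood(up.astype(float), np.mean) > 0.5
```

In practice a library such as scipy.ndimage would provide these morphology and filtering operations directly; the hand-rolled versions above only make the pipeline's logic explicit.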
