2025
Authors
Nunes, JD; Montezuma, D; Oliveira, D; Pereira, T; Zlobec, I; Pinto, IM; Cardoso, JS;
Publication
SENSORS
Abstract
Due to the high variability in Hematoxylin and Eosin (H&E)-stained Whole Slide Images (WSIs), hidden stratification, and batch effects, generalizing beyond the training distribution is one of the main challenges in Deep Learning (DL) for Computational Pathology (CPath). Although DL depends on large volumes of diverse, annotated data, a common scenario is to have a significant number of annotated samples from one or multiple source distributions and another partially annotated or unlabeled dataset representing the target distribution to which we want to generalize, a setting known as Domain Adaptation (DA). In this work, we focus on the task of generalizing from a single source distribution to a target domain. As it is still not clear which domain adaptation strategy is best suited for CPath, we evaluate three different DA strategies, namely FixMatch, CycleGAN, and a self-supervised feature extractor, and show that DA is still a challenge in CPath.
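Of the three strategies evaluated, FixMatch is the most self-contained to illustrate. A minimal NumPy sketch of its unlabeled-data loss follows; the function names (`softmax`, `fixmatch_unlabeled_loss`) and the confidence threshold `tau=0.95` are illustrative assumptions, not taken from the abstract. The idea: pseudo-labels come from predictions on weakly augmented views, are kept only when the model is confident, and are then enforced on strongly augmented views.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def fixmatch_unlabeled_loss(weak_logits, strong_logits, tau=0.95):
    """FixMatch-style consistency loss on an unlabeled batch.

    Pseudo-labels are the argmax predictions on weakly augmented views;
    only samples whose confidence exceeds `tau` contribute, and the model
    is trained to reproduce those labels on strongly augmented views.
    """
    probs = softmax(weak_logits)
    conf = probs.max(axis=1)                 # confidence of each pseudo-label
    pseudo = probs.argmax(axis=1)            # hard pseudo-labels
    mask = conf >= tau                       # keep only confident samples
    strong_probs = softmax(strong_logits)
    ce = -np.log(strong_probs[np.arange(len(pseudo)), pseudo] + 1e-12)
    return (ce * mask).mean()                # mean over the full batch
```

In a training loop this term would be added to the supervised loss on the labeled source data, weighted by a consistency coefficient.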
2025
Authors
Rodrigues, EM; Gouveia, M; Oliveira, HP; Pereira, T;
Publication
IEEE ACCESS
Abstract
Deep learning techniques have demonstrated significant potential in computer-assisted diagnosis based on medical imaging. However, their integration into clinical workflows remains limited, largely due to concerns about interpretability. To address this challenge, we propose Efficient-Proto-Caps, a lightweight and inherently interpretable model that combines capsule networks with prototype learning for lung nodule characterization. Additionally, an innovative Davies-Bouldin Index with multiple centroids per cluster is employed as a loss function to promote clustering of lung nodule visual attribute representations. When evaluated on the LIDC-IDRI dataset, the most widely recognized benchmark for lung cancer prediction, our model achieved an overall accuracy of 89.7% in predicting lung nodule malignancy and associated visual attributes. This performance is statistically comparable to that of the baseline model, while utilizing a backbone with only approximately 2% of the parameters of the baseline model's backbone. State-of-the-art models achieved better performance in lung nodule malignancy prediction; however, our approach relies on multiclass malignancy predictions and provides a decision rationale aligned with globally accepted clinical guidelines. These results underscore the potential of our approach, as the integration of lightweight and less complex designs into accurate and inherently interpretable models represents a significant advancement toward more transparent and clinically viable computer-assisted diagnostic systems. Furthermore, these findings highlight the model's potential for broader applicability, extending beyond medicine to other domains where final classifications are grounded in concept-based or example-based attributes.
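The clustering objective can be sketched as a generic Davies-Bouldin-style score extended to several centroids per cluster. This is an illustrative reconstruction of the general idea, not the paper's exact formulation: here scatter is each point's distance to the nearest same-cluster centroid, and separation is the minimum distance between the centroid sets of two clusters. Lower is better, so the score can be minimised directly.

```python
import numpy as np

def db_index_multi_centroid(X, labels, centroids):
    """Davies-Bouldin-style score with multiple centroids per cluster.

    X:         (n, d) array of feature vectors.
    labels:    (n,) array of cluster assignments.
    centroids: dict mapping cluster label -> (k, d) array of centroids.
    """
    clusters = sorted(centroids)
    S = {}
    for c in clusters:
        pts = X[labels == c]
        # Distance of every point to every centroid of its own cluster.
        d = np.linalg.norm(pts[:, None, :] - centroids[c][None, :, :], axis=2)
        S[c] = d.min(axis=1).mean()          # scatter: mean nearest-centroid distance
    ratios = []
    for i in clusters:
        worst = 0.0
        for j in clusters:
            if i == j:
                continue
            d = np.linalg.norm(centroids[i][:, None, :] - centroids[j][None, :, :], axis=2)
            M = d.min()                      # separation: closest cross-cluster centroids
            worst = max(worst, (S[i] + S[j]) / M)
        ratios.append(worst)
    return float(np.mean(ratios))
```

Well-separated clusters yield a small score, overlapping ones a large score, which is what makes it usable as a clustering loss on attribute representations.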
2025
Authors
Gouveia, M; Mendes, T; Rodrigues, EM; Oliveira, HP; Pereira, T;
Publication
APPLIED SCIENCES-BASEL
Abstract
Lung cancer stands as the most prevalent and deadliest type of cancer, with adenocarcinoma being the most common subtype. Computed Tomography (CT) is widely used for detecting tumours and their phenotype characteristics, for an early and accurate diagnosis that impacts patient outcomes. Machine learning algorithms have already shown the potential to recognize patterns in CT scans to classify the cancer subtype. In this work, two distinct pipelines were employed to perform binary classification between adenocarcinoma and non-adenocarcinoma. Firstly, radiomic features were classified by Random Forest and eXtreme Gradient Boosting classifiers. Next, a deep learning approach, based on a Residual Neural Network and a Transformer-based architecture, was utilised. Both 2D and 3D CT data were initially explored, with the Lung-PET-CT-Dx dataset being employed for training and the NSCLC-Radiomics and NSCLC-Radiogenomics datasets used for external evaluation. Overall, the 3D models outperformed the 2D ones, with the best result being achieved by the Hybrid Vision Transformer, with an AUC of 0.869 and a balanced accuracy of 0.816 on the internal test set. However, a lack of generalization capability was observed across all models, with the performances decreasing on the external test sets, a limitation that should be studied and addressed in future work.
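The first pipeline (tabular radiomic features fed to tree ensembles) can be sketched with scikit-learn. The features below are synthetic stand-ins for real radiomic descriptors, and Random Forest stands in for both ensemble classifiers; the dataset sizes and metrics mirror the evaluation style (AUC, balanced accuracy), not the paper's actual data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 400, 30                                   # 30 synthetic "radiomic" features
X = rng.normal(size=(n, d))
w = rng.normal(size=d)
# Binary target: adenocarcinoma vs non-adenocarcinoma (synthetic).
y = (X @ w + 0.5 * rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
p = clf.predict_proba(X_te)[:, 1]                # probability of adenocarcinoma
auc = roc_auc_score(y_te, p)
bacc = balanced_accuracy_score(y_te, clf.predict(X_te))
```

In the real pipeline, `X` would be radiomic features extracted from the segmented tumour region of each CT scan, and external test sets (NSCLC-Radiomics, NSCLC-Radiogenomics) would replace the internal split for the generalization check.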
2025
Authors
Ribeiro, R; Neves, I; Oliveira, HP; Pereira, T;
Publication
COMPUTERS IN BIOLOGY AND MEDICINE
Abstract
Traumatic Brain Injury (TBI) is a form of brain injury caused by external forces, resulting in temporary or permanent impairment of brain function. Despite advancements in healthcare, TBI mortality rates can reach 30%–40% in severe cases. This study aims to assist clinical decision-making and enhance patient care for TBI-related complications by employing Artificial Intelligence (AI) methods and data-driven approaches to predict decompensation. This study uses learning models based on sequential data from Electronic Health Records (EHR). Decompensation prediction was framed as 24-h mortality prediction at each hour of the patient's stay in the Intensive Care Unit (ICU). A cohort of 2261 TBI patients was selected from the MIMIC-III dataset based on age and ICD-9 disease codes. Logistic Regression (LR), Long Short-Term Memory (LSTM), and Transformer architectures were used. Two feature sets were also explored, combined with missing-data strategies (imputing the normal value) and class-imbalance techniques (class weights and oversampling). The best performance results were obtained using LSTMs, with the original features and no imbalance-handling technique and with the added features and the class-weight technique, achieving AUROC scores of 0.918 and 0.929, respectively. For this study, using EHR time-series data with LSTMs proved viable for predicting patient decompensation, providing a helpful indicator of the need for clinical interventions.
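The class-weight technique named above can be sketched independently of the model. A common (assumed, not paper-specific) choice is inverse-frequency weights plugged into a weighted binary cross-entropy, so that errors on the rare positive class (decompensation) cost more than errors on the majority class.

```python
import numpy as np

def class_weights(y):
    """Inverse-frequency class weights for a binary label vector."""
    n = len(y)
    pos = y.sum()
    neg = n - pos
    # Normalised so a perfectly balanced dataset gives weight 1.0 to both classes.
    return {0: n / (2 * neg), 1: n / (2 * pos)}

def weighted_bce(y_true, y_prob, weights):
    """Binary cross-entropy with per-class weights."""
    w = np.where(y_true == 1, weights[1], weights[0])
    eps = 1e-12
    return float(np.mean(-w * (y_true * np.log(y_prob + eps)
                               + (1 - y_true) * np.log(1 - y_prob + eps))))
```

With hourly decompensation labels, positives are rare, so `weights[1]` is large and a confident miss on a decompensating patient dominates the loss; most deep-learning frameworks accept such weights directly in their loss functions.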
2024
Authors
Pinheiro, C; Figueiredo, J; Pereira, T; Santos, CP;
Publication
ROBOT 2023: SIXTH IBERIAN ROBOTICS CONFERENCE, VOL 2
Abstract
Biofeedback is a promising tool to complement conventional physical therapy by fostering active participation of neurologically impaired patients during treatment. This work presents a user-centered design, and a usability assessment across different age groups, of a novel wearable augmented reality application composed of a multimodal sensor network and corresponding control strategies for personalized biofeedback during gait training. The proposed solution includes wearable AR glasses that deliver visual cues controlled in real time according to mediolateral center-of-mass position, sagittal ankle angle, or tibialis anterior muscle activity from inertial and EMG sensors. Control strategies include positive and negative reinforcement conditions and are based on the user's performance, comparing real-time sensor data with an automatically personalized threshold. The proposed solution allows ambulatory practice in daily scenarios, involves physiotherapists through a laptop screen, and contributes to further benchmarking of biofeedback by sensor type. Although older healthy adults with lower academic degrees preferred guidance from an expert person, excellent usability scores (SUS: 81.25–96.87) were achieved with young and middle-aged healthy adults and one neurologically impaired patient.
2024
Authors
Fernandes, L; Pereira, T; Oliveira, HP;
Publication
2024 IEEE 37TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS, CBMS 2024
Abstract
Currently, lung cancer is one of the deadliest diseases, affecting millions of people globally. Artificial Intelligence is increasingly being integrated into healthcare practice with the goal of aiding the early diagnosis of lung cancer. Although such methods have shown very promising results, they still lack transparency to the user, which could make their widespread adoption a challenging task. Therefore, in this work we explore the use of post-hoc explainability methods to better understand the inner workings of an established multitasking framework that performs lung nodule segmentation and classification simultaneously. The goal of this study is to understand how a multitasking approach impacts the model's performance in the lung nodule classification task when compared to single-task models. Our results show that the multitasking approach works as an attention mechanism, helping the model learn more meaningful features. Furthermore, the multitasking framework achieved a better performance on the explainability metric, with an increase of 7% over our baseline, and on the classification and segmentation tasks, with increases of 4.84% and 15.03%, respectively, over the studied baselines.