2025
Authors
Rodrigues, EM; Gouveia, M; Oliveira, HP; Pereira, T;
Publication
IEEE Access
Abstract
Deep learning techniques have demonstrated significant potential in computer-assisted diagnosis based on medical imaging. However, their integration into clinical workflows remains limited, largely due to concerns about interpretability. To address this challenge, we propose Efficient-Proto-Caps, a lightweight and inherently interpretable model that combines capsule networks with prototype learning for lung nodule characterization. Additionally, an innovative Davies-Bouldin Index with multiple centroids per cluster is employed as a loss function to promote clustering of lung nodule visual attribute representations. When evaluated on the LIDC-IDRI dataset, the most widely recognized benchmark for lung cancer prediction, our model achieved an overall accuracy of 89.7% in predicting lung nodule malignancy and associated visual attributes. This performance is statistically comparable to that of the baseline model, while utilizing a backbone with only approximately 2% of the parameters of the baseline model's backbone. State-of-the-art models achieved better performance in lung nodule malignancy prediction; however, our approach relies on multiclass malignancy predictions and provides a decision rationale aligned with globally accepted clinical guidelines. These results underscore the potential of our approach, as the integration of lightweight and less complex designs into accurate and inherently interpretable models represents a significant advancement toward more transparent and clinically viable computer-assisted diagnostic systems. Furthermore, these findings highlight the model's potential for broader applicability, extending beyond medicine to other domains where final classifications are grounded in concept-based or example-based attributes.
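The multi-centroid Davies-Bouldin loss lends itself to a compact implementation. Below is a minimal PyTorch sketch, not the authors' code: the nearest-centroid assignment within each class, the mean/max reductions, and all names and sizes are illustrative assumptions.

```python
# Minimal PyTorch sketch of a Davies-Bouldin-style clustering loss with
# multiple learnable centroids per cluster. The nearest-centroid assignment
# and the reductions below are illustrative assumptions, not the paper's
# exact formulation.
import torch
import torch.nn as nn


class MultiCentroidDBLoss(nn.Module):
    def __init__(self, n_classes: int, n_centroids: int, dim: int):
        super().__init__()
        # centroids[c, k] is the k-th prototype of class c
        self.centroids = nn.Parameter(torch.randn(n_classes, n_centroids, dim))

    def forward(self, z: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # z: (batch, dim) embeddings; y: (batch,) integer class labels
        n_classes = self.centroids.shape[0]
        scatter = z.new_zeros(n_classes)  # within-cluster spread S_c
        for c in range(n_classes):
            zc = z[y == c]
            if zc.numel() == 0:
                continue
            # distance of each sample to every centroid of its class,
            # keeping only the nearest one (multi-centroid clusters)
            d = torch.cdist(zc, self.centroids[c])       # (n_c, K)
            scatter[c] = d.min(dim=1).values.mean()
        db_terms = []
        for i in range(n_classes):
            ratios = []
            for j in range(n_classes):
                if i == j:
                    continue
                # separation M_ij: closest pair of centroids across classes
                m_ij = torch.cdist(self.centroids[i], self.centroids[j]).min()
                ratios.append((scatter[i] + scatter[j]) / (m_ij + 1e-8))
            db_terms.append(torch.stack(ratios).max())
        # lower is better: compact clusters, well-separated centroids
        return torch.stack(db_terms).mean()
```

In a full pipeline, a term of this kind would presumably be weighted and combined with the classification losses of the capsule and prototype heads.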
2025
Authors
Nunes, JD; Montezuma, D; Oliveira, D; Pereira, T; Zlobec, I; Pinto, IM; Cardoso, JS;
Publication
SENSORS
Abstract
Due to the high variability in Hematoxylin and Eosin (H&E)-stained Whole Slide Images (WSIs), hidden stratification, and batch effects, generalizing beyond the training distribution is one of the main challenges in Deep Learning (DL) for Computational Pathology (CPath). Although DL depends on large volumes of diverse, annotated data, it is common to have a significant number of annotated samples from one or more source distributions and another partially annotated or unlabeled dataset representing a target distribution to which we want to generalize, a setting known as Domain Adaptation (DA). In this work, we focus on the task of generalizing from a single source distribution to a target domain. As it is still not clear which domain adaptation strategy is best suited for CPath, we evaluate three different DA strategies, namely FixMatch, CycleGAN, and a self-supervised feature extractor, and show that DA is still a challenge in CPath.
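Of the three evaluated strategies, FixMatch is the most self-contained to sketch. The following is an illustrative PyTorch fragment of its consistency objective, under the assumption of a 0.95 confidence threshold and externally defined weak/strong augmentations; it is not the paper's implementation.

```python
# Illustrative PyTorch sketch of the FixMatch consistency objective:
# confident predictions on weakly augmented target patches become
# pseudo-labels for their strongly augmented views. The threshold value
# is an assumption.
import torch
import torch.nn.functional as F


def fixmatch_loss(model, x_weak, x_strong, threshold: float = 0.95):
    # pseudo-labels from the weak views; no gradient through this pass
    with torch.no_grad():
        probs = F.softmax(model(x_weak), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= threshold).float()  # keep only confident samples
    # cross-entropy on the strong views against the pseudo-labels
    loss = F.cross_entropy(model(x_strong), pseudo, reduction="none")
    return (loss * mask).mean()
```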
2025
Authors
Castro, IAA; Oliveira, HP; Correia, R; Hayes-Gill, B; Morgan, SP; Korposh, S; Gomez, D; Pereira, T;
Publication
PHYSIOLOGICAL MEASUREMENT
Abstract
Objective. The detection of arterial pulsating signals at the skin periphery with Photoplethysmography (PPG) is easily distorted by motion artifacts. This work explores alternatives to PPG reconstruction aided by movement sensors (accelerometer and/or gyroscope), which to date has demonstrated the best pulsating-signal reconstruction. Approach. A generative adversarial network with fully connected layers is proposed for the reconstruction of distorted PPG signals. Artificial corruption was applied to clean signals selected from the BIDMC Heart Rate dataset, processed from the larger MIMIC II waveform database, to create the training, validation, and testing sets. Main results. The heart rate (HR) of this dataset was extracted to evaluate the performance of the model, obtaining a mean absolute error of 1.31 bpm between the HR of the target and reconstructed PPG signals for HRs between 70 and 115 bpm. Significance. The model architecture is effective at reconstructing noisy PPG signals regardless of the length and amplitude of the corruption introduced. The performance over a range of HRs (70-115 bpm) indicates a promising approach for real-time PPG signal reconstruction without the aid of acceleration or angular velocity inputs.
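As a rough illustration of the described architecture, the fragment below sketches a fully connected generator/discriminator pair in PyTorch; the 256-sample window, layer widths, and Tanh output range are assumptions, since the paper's exact configuration is not given here.

```python
# Sketch of a fully connected generator/discriminator pair for
# reconstructing corrupted PPG windows. Window length and layer sizes
# are illustrative assumptions.
import torch.nn as nn

WINDOW = 256  # samples per PPG segment (assumed)

generator = nn.Sequential(        # corrupted window -> clean window
    nn.Linear(WINDOW, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, WINDOW), nn.Tanh(),  # assumes signals scaled to [-1, 1]
)

discriminator = nn.Sequential(    # window -> real/fake score
    nn.Linear(WINDOW, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1),             # raw logits; pair with BCEWithLogitsLoss
)
```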
2025
Authors
Fernandes, L; Pereira, T; Oliveira, HP;
Publication
IEEE ACCESS
Abstract
Segmentation of lung nodules in CT images is an important step during the clinical evaluation of patients with lung cancer. Early assessment of the cancer is crucial to increase patients' overall survival chances, and the segmentation of lung nodules can help detect the cancer in its early stages. Consequently, many works in the literature explore the use of neural networks for the segmentation of lung nodules. However, these frameworks tend to rely on accurate labelling of the nodule centre, which is then used to crop the input image. Although such works achieve remarkable results, they do not take into account that the healthcare professional may fail to correctly label the centre of the nodule. Therefore, in this work, we propose a new framework based on the U-Net model that allows such inaccuracies to be corrected in an interactive fashion. It is composed of two U-Net models in cascade, where the first model predicts a rough estimate of the lung nodule location and the second model refines the generated segmentation mask. Our results show that the proposed framework is more robust than the studied baselines. Furthermore, it achieves state-of-the-art performance, reaching a Dice score of 91.12% when trained and tested on the LIDC-IDRI public dataset.
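The cascade can be sketched in a few lines. The following PyTorch fragment is an illustrative reading of the described pipeline, not the released code: the 64-pixel crop, the 0.5 threshold, and the centroid-based re-centring rule are assumptions, and `unet_coarse`/`unet_fine` stand in for any trained U-Nets.

```python
# Schematic sketch of the two-stage cascade: a first U-Net roughly
# localizes the nodule from a possibly mis-centred crop, the crop is
# re-centred on the predicted mask, and a second U-Net refines the
# segmentation. Crop size and threshold are assumptions.
import torch


def crop_around(img: torch.Tensor, centre, size: int = 64):
    # img: (H, W) slice, assumed larger than the crop; returns the crop
    # clamped to the image bounds plus its top-left origin
    h, w = img.shape
    r0 = min(max(int(centre[0]) - size // 2, 0), h - size)
    c0 = min(max(int(centre[1]) - size // 2, 0), w - size)
    return img[r0:r0 + size, c0:c0 + size], (r0, c0)


def cascade_segment(unet_coarse, unet_fine, image, click, size=64):
    crop1, (r0, c0) = crop_around(image, click, size)
    coarse = unet_coarse(crop1[None, None]).sigmoid()[0, 0] > 0.5
    ys, xs = torch.nonzero(coarse, as_tuple=True)
    if len(ys):  # map the mask centroid back to image coordinates
        click = (r0 + ys.float().mean().item(), c0 + xs.float().mean().item())
    crop2, _ = crop_around(image, click, size)     # re-centred input
    return unet_fine(crop2[None, None]).sigmoid()[0, 0]  # refined mask
```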
2014
Authors
Pereira, T; Santos, H; Pereira, H; Correia, C; Cardoso, J;
Publication
Artery Research
Abstract
2017
Authors
Pereira, T; Vilaprinyo, E; Belli, G; Herrero, E; Salvado, B; Sorribas, A; Altés, G; Alves, R;
Publication
Abstract