2022
Authors
Pedrosa, J; Aresta, G; Ferreira, CA; Rodrigues, M; Leitão, P; Carvalho, AS; Rebelo, J; Negrão, E; Ramos, I; Cunha, A; Campilho, A;
Publication
Abstract
2025
Authors
Santos, R; Castro, R; Baeza, R; Nunes, F; Filipe, VM; Renna, F; Paredes, H; Carvalho, RF; Pedrosa, J;
Publication
Computers in Biology and Medicine
Abstract
Cardiovascular diseases are the leading cause of death in the world, with coronary artery disease being the most prevalent. Coronary artery calcifications are critical biomarkers for cardiovascular disease, and their quantification via non-contrast computed tomography is a widely accepted and heavily employed technique for risk assessment. Manual segmentation of these calcifications is a time-consuming task, subject to variability. State-of-the-art methods often employ convolutional neural networks for an automated approach. However, there is a lack of studies that perform these segmentations with 3D architectures that can gather important and necessary anatomical context to distinguish the different coronary arteries. This paper proposes a novel and automated approach that uses a lightweight three-dimensional convolutional neural network to perform efficient and accurate segmentations and calcium scoring. Results show that this method achieves Dice score coefficients of 0.93 ± 0.02, 0.93 ± 0.03, 0.84 ± 0.02, 0.63 ± 0.06 and 0.89 ± 0.03 for the foreground, left anterior descending artery (LAD), left circumflex artery (LCX), left main artery (LM) and right coronary artery (RCA) calcifications, respectively, outperforming other state-of-the-art architectures. An external cohort validation also showed the generalization of this method's performance and how it can be applied in different clinical scenarios. In conclusion, the proposed lightweight 3D convolutional neural network demonstrates high efficiency and accuracy, outperforming state-of-the-art methods and showcasing robust generalization potential.
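The Dice score coefficients reported above compare a predicted mask against a reference segmentation. A minimal sketch of how such a coefficient can be computed per label, using toy 3D NumPy masks (the arrays here are hypothetical stand-ins, not the study's data):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# Toy 3D volumes standing in for a predicted and a reference calcification mask
pred = np.zeros((4, 4, 4), dtype=np.uint8)
gt = np.zeros((4, 4, 4), dtype=np.uint8)
pred[1:3, 1:3, 1:3] = 1   # 8 voxels
gt[1:3, 1:3, 2:4] = 1     # 8 voxels, 4 of them overlapping
print(dice_score(pred, gt))  # 2*4 / (8+8) = 0.5
```

In a multi-artery setting such as this one, the same computation would be repeated per label (LAD, LCX, LM, RCA) on the corresponding binary masks.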
2025
Authors
Santos, R; Pedrosa, J; Mendonça, AM; Campilho, A;
Publication
COMPUTER VISION AND IMAGE UNDERSTANDING
Abstract
The increase in complexity of deep learning models demands explanations that can be obtained with methods like Grad-CAM. This method computes an importance map for the last convolutional layer relative to a specific class, which is then upsampled to match the size of the input. However, this final step assumes that there is a spatial correspondence between the last feature map and the input, which may not be the case. We hypothesize that, for models with large receptive fields, the feature spatial organization is not kept during the forward pass, which may render the explanations devoid of meaning. To test this hypothesis, common architectures were applied to a medical scenario on the public VinDr-CXR dataset, to a subset of ImageNet and to datasets derived from MNIST. The results show a significant dispersion of the spatial information, which goes against the assumption of Grad-CAM, and that explainability maps are affected by this dispersion. Furthermore, we discuss several other caveats regarding Grad-CAM, such as feature map rectification, empty maps and the impact of global average pooling or flatten layers. Altogether, this work addresses some key limitations of Grad-CAM which may go unnoticed for common users, taking one step further in the pursuit of more reliable explainability methods.
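The two Grad-CAM steps discussed above (weighted combination of the last convolutional feature maps, then upsampling to input size) can be sketched in NumPy. The arrays below are hypothetical activations and gradients, not taken from any of the paper's models:

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Grad-CAM map from last-conv activations and class-score gradients.

    activations, gradients: (C, H, W) arrays taken at the same layer.
    """
    # Channel weights: global average of the gradients over spatial positions
    weights = gradients.mean(axis=(1, 2))
    # Weighted combination of feature maps, then ReLU rectification
    return np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)

def upsample_nearest(cam: np.ndarray, size: int) -> np.ndarray:
    """Nearest-neighbour upsampling to the input resolution -- the step whose
    spatial-correspondence assumption the paper questions."""
    h, w = cam.shape
    rows = np.repeat(np.arange(h), size // h)
    cols = np.repeat(np.arange(w), size // w)
    return cam[rows][:, cols]

# Hypothetical 2-channel 2x2 feature map, upsampled to an 8x8 "input"
acts = np.array([[[1.0, 0.0], [0.0, 0.0]],
                 [[0.0, 2.0], [0.0, 0.0]]])
grads = np.array([[[0.5, 0.5], [0.5, 0.5]],
                  [[-1.0, -1.0], [-1.0, -1.0]]])
cam = grad_cam(acts, grads)          # weights (0.5, -1.0) -> ReLU'd map
heatmap = upsample_nearest(cam, 8)   # overlaid on the input in practice
```

Note how the final overlay is only meaningful if activation (0, 0) of the last feature map really corresponds to the top-left region of the input, which is precisely the assumption the paper tests.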
2025
Authors
Martinez-Rodrigo, A; Pedrosa, J; Carneiro, D; Cavero-Redondo, I; Saz-Lara, A;
Publication
APPLIED SCIENCES-BASEL
Abstract
Arterial stiffness (AS) is a well-established predictor of cardiovascular events, including myocardial infarction and stroke. One of the most recognized methods for assessing AS is through arterial pulse wave velocity (aPWV), which provides valuable clinical insights into vascular health. However, its measurement typically requires specialized equipment, making it inaccessible in primary healthcare centers and low-resource settings. In this study, we developed and validated different machine learning models to estimate aPWV using common clinical markers routinely collected in standard medical examinations. Thus, we trained five regression models: Linear Regression, Polynomial Regression (PR), Gradient Boosting Regression, Support Vector Regression, and Neural Networks (NNs) on the EVasCu dataset, a cohort of apparently healthy individuals. A 10-fold cross-validation demonstrated that PR and NN achieved the highest predictive performance, effectively capturing nonlinear relationships in the data. External validation on two independent datasets, VascuNET (a healthy population) and ExIC-FEp (a cohort of cardiopathic patients), confirmed the robustness of PR and NN (R² > 0.90) across different vascular conditions. These results indicate that by using easily accessible clinical variables and AI-driven insights, it is possible to develop a cost-effective tool for aPWV estimation, enabling early cardiovascular risk stratification in underserved and rural areas where specialized AS measurement devices are unavailable.
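Polynomial regression, one of the two best-performing model families in the abstract, reduces to an ordinary least-squares fit on polynomial features. A minimal sketch with a single synthetic predictor standing in for a clinical marker (the variable names and data here are invented for illustration; the study uses several routinely collected markers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predictor (e.g., age) and a nonlinear target standing in for aPWV
age = rng.uniform(30, 80, size=200)
apwv = 4.0 + 0.002 * (age - 30) ** 2 + rng.normal(0, 0.1, size=200)

# Degree-2 polynomial regression as an ordinary least-squares fit
X = np.column_stack([np.ones_like(age), age, age ** 2])
coef, *_ = np.linalg.lstsq(X, apwv, rcond=None)
pred = X @ coef

# Coefficient of determination R^2, the metric reported in the abstract
ss_res = ((apwv - pred) ** 2).sum()
ss_tot = ((apwv - apwv.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
```

With several predictors, the design matrix simply gains the extra (and possibly interaction) terms; cross-validation, as used in the study, would then guard the degree choice against overfitting.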
2024
Authors
Pereira, SC; Pedrosa, J; Rocha, J; Sousa, P; Campilho, A; Mendonça, AM;
Publication
2024 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE, BIBM
Abstract
Large-scale datasets are essential for training deep learning models in medical imaging. However, many of these datasets contain poor-quality images that can compromise model performance and clinical reliability. In this study, we propose a framework to detect non-compliant images, such as corrupted scans, incomplete thorax X-rays, and images of non-thoracic body parts, by leveraging contrastive learning for feature extraction and parametric or non-parametric scoring methods for out-of-distribution ranking. Our approach was developed and tested on the CheXpert dataset, achieving an AUC of 0.75 in a manually labeled subset of 1,000 images, and further qualitatively and visually validated on the external PadChest dataset, where it also performed effectively. Our results demonstrate the potential of contrastive learning to detect non-compliant images in large-scale medical datasets, laying the foundation for future work on reducing dataset pollution and improving the robustness of deep learning models in clinical practice.
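A common non-parametric scoring method for out-of-distribution ranking of the kind mentioned above is the distance to the k-th nearest in-distribution feature vector. A minimal sketch on synthetic embeddings (the clusters below are invented stand-ins for contrastively learned features, not the study's actual representations):

```python
import numpy as np

def knn_ood_scores(train_feats: np.ndarray, test_feats: np.ndarray,
                   k: int = 3) -> np.ndarray:
    """Non-parametric OOD score: Euclidean distance to the k-th nearest
    in-distribution feature vector (higher = more likely non-compliant)."""
    # Pairwise distances between every test and every training feature
    d = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k - 1]

rng = np.random.default_rng(0)
# Hypothetical embeddings: compliant images cluster near the origin
train = rng.normal(0, 1, size=(100, 16))
inlier = rng.normal(0, 1, size=(5, 16))      # compliant thorax X-rays
outlier = rng.normal(6, 1, size=(5, 16))     # e.g., non-thoracic images
scores = knn_ood_scores(train, np.vstack([inlier, outlier]))
# Ranking the scores would place the 5 outliers above the 5 inliers
```

Thresholding or simply ranking these scores then flags candidates for removal; a parametric alternative would fit, for instance, a Gaussian to the training features and score by likelihood.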
2024
Authors
Magalhaes, B; Pedrosa, J; Renna, F; Paredes, H; Filipe, V;
Publication
2024 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE, BIBM
Abstract
Coronary artery disease (CAD) remains a leading cause of morbidity and mortality worldwide, underscoring the need for accurate and reliable diagnostic tools. While AI-driven models have shown significant promise in identifying CAD through imaging techniques, their 'black box' nature often hinders clinical adoption due to a lack of interpretability. In response, this paper proposes a novel approach to image captioning specifically tailored for CAD diagnosis, aimed at enhancing the transparency and usability of AI systems. Utilizing the COCA dataset, which comprises gated coronary CT images along with Ground Truth (GT) segmentation annotations, we introduce a hybrid model architecture that combines a Vision Transformer (ViT) for feature extraction with a Generative Pretrained Transformer (GPT) for generating clinically relevant textual descriptions. This work builds on a previously developed 3D Convolutional Neural Network (CNN) for coronary artery segmentation, leveraging its accurate delineations of calcified regions as critical inputs to the captioning process. By incorporating these segmentation outputs, our approach not only focuses on accurately identifying and describing calcified regions within the coronary arteries but also ensures that the generated captions are clinically meaningful and reflective of key diagnostic features such as location, severity, and artery involvement. This methodology provides medical practitioners with clear, context-rich explanations of AI-generated findings, thereby bridging the gap between advanced AI technologies and practical clinical applications. Furthermore, our work underscores the critical role of Explainable AI (XAI) in fostering trust, improving decision-making, and enhancing the efficacy of AI-driven diagnostics, paving the way for future advancements in the field.