
Publications by João Manuel Pedrosa

2025

Development of a Non-Invasive Clinical Machine Learning System for Arterial Pulse Wave Velocity Estimation

Authors
Martinez-Rodrigo, A; Pedrosa, J; Carneiro, D; Cavero-Redondo, I; Saz-Lara, A;

Publication
APPLIED SCIENCES-BASEL

Abstract
Arterial stiffness (AS) is a well-established predictor of cardiovascular events, including myocardial infarction and stroke. One of the most recognized methods for assessing AS is through arterial pulse wave velocity (aPWV), which provides valuable clinical insights into vascular health. However, its measurement typically requires specialized equipment, making it inaccessible in primary healthcare centers and low-resource settings. In this study, we developed and validated different machine learning models to estimate aPWV using common clinical markers routinely collected in standard medical examinations. Thus, we trained five regression models: Linear Regression, Polynomial Regression (PR), Gradient Boosting Regression, Support Vector Regression, and Neural Networks (NNs) on the EVasCu dataset, a cohort of apparently healthy individuals. A 10-fold cross-validation demonstrated that PR and NN achieved the highest predictive performance, effectively capturing nonlinear relationships in the data. External validation on two independent datasets, VascuNET (a healthy population) and ExIC-FEp (a cohort of cardiopathic patients), confirmed the robustness of PR and NN (R² > 0.90) across different vascular conditions. These results indicate that by using easily accessible clinical variables and AI-driven insights, it is possible to develop a cost-effective tool for aPWV estimation, enabling early cardiovascular risk stratification in underserved and rural areas where specialized AS measurement devices are unavailable.
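The model-comparison protocol described in this abstract can be sketched as follows. This is a minimal illustration only: the clinical markers and data are synthetic placeholders, not the EVasCu dataset, and hyperparameters are scikit-learn defaults rather than those used in the paper.

```python
# Five regressors compared by 10-fold cross-validation, mirroring the
# protocol described in the abstract. Features are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))  # stand-ins for routine clinical markers
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=200)

models = {
    "LR": LinearRegression(),
    "PR": make_pipeline(PolynomialFeatures(degree=2), LinearRegression()),
    "GBR": GradientBoostingRegressor(random_state=0),
    "SVR": SVR(),
    "NN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}
# Mean R^2 over 10 folds for each model.
scores = {name: cross_val_score(m, X, y, cv=10, scoring="r2").mean()
          for name, m in models.items()}
```

On this toy target (linear plus quadratic terms), the polynomial model unsurprisingly dominates; the point is the evaluation structure, not the ranking.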

2024

DeepClean - Contrastive Learning Towards Quality Assessment in Large-Scale CXR Data Sets

Authors
Pereira, SC; Pedrosa, J; Rocha, J; Sousa, P; Campilho, A; Mendonça, AM;

Publication
2024 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE, BIBM

Abstract
Large-scale datasets are essential for training deep learning models in medical imaging. However, many of these datasets contain poor-quality images that can compromise model performance and clinical reliability. In this study, we propose a framework to detect non-compliant images, such as corrupted scans, incomplete thorax X-rays, and images of non-thoracic body parts, by leveraging contrastive learning for feature extraction and parametric or non-parametric scoring methods for out-of-distribution ranking. Our approach was developed and tested on the CheXpert dataset, achieving an AUC of 0.75 in a manually labeled subset of 1,000 images, and further qualitatively and visually validated on the external PadChest dataset, where it also performed effectively. Our results demonstrate the potential of contrastive learning to detect non-compliant images in large-scale medical datasets, laying the foundation for future work on reducing dataset pollution and improving the robustness of deep learning models in clinical practice.
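The out-of-distribution ranking stage described here can be illustrated in isolation. The sketch below assumes embeddings have already been produced by a contrastively trained encoder (here they are simulated Gaussians) and uses Mahalanobis distance as one example of a parametric scoring method; it is not the paper's implementation.

```python
# Parametric OOD ranking over image embeddings: score each test embedding
# by its Mahalanobis distance to the in-distribution training Gaussian.
import numpy as np

def mahalanobis_scores(train_emb, test_emb):
    """Higher score = further from the training distribution."""
    mu = train_emb.mean(axis=0)
    cov = np.cov(train_emb, rowvar=False)
    inv_cov = np.linalg.pinv(cov)
    diff = test_emb - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, inv_cov, diff))

rng = np.random.default_rng(0)
in_dist = rng.normal(0.0, 1.0, size=(500, 16))   # "compliant" embeddings
outliers = rng.normal(4.0, 1.0, size=(10, 16))   # "non-compliant" embeddings
scores_in = mahalanobis_scores(in_dist, in_dist[:50])
scores_out = mahalanobis_scores(in_dist, outliers)
```

Ranking test images by this score puts the simulated non-compliant embeddings at the top of the list, which is the behavior the framework relies on.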

2024

Image Captioning for Coronary Artery Disease Diagnosis

Authors
Magalhaes, B; Pedrosa, J; Renna, F; Paredes, H; Filipe, V;

Publication
2024 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE, BIBM

Abstract
Coronary artery disease (CAD) remains a leading cause of morbidity and mortality worldwide, underscoring the need for accurate and reliable diagnostic tools. While AI-driven models have shown significant promise in identifying CAD through imaging techniques, their 'black box' nature often hinders clinical adoption due to a lack of interpretability. In response, this paper proposes a novel approach to image captioning specifically tailored for CAD diagnosis, aimed at enhancing the transparency and usability of AI systems. Utilizing the COCA dataset, which comprises gated coronary CT images along with Ground Truth (GT) segmentation annotations, we introduce a hybrid model architecture that combines a Vision Transformer (ViT) for feature extraction with a Generative Pretrained Transformer (GPT) for generating clinically relevant textual descriptions. This work builds on a previously developed 3D Convolutional Neural Network (CNN) for coronary artery segmentation, leveraging its accurate delineations of calcified regions as critical inputs to the captioning process. By incorporating these segmentation outputs, our approach not only focuses on accurately identifying and describing calcified regions within the coronary arteries but also ensures that the generated captions are clinically meaningful and reflective of key diagnostic features such as location, severity, and artery involvement. This methodology provides medical practitioners with clear, context-rich explanations of AI-generated findings, thereby bridging the gap between advanced AI technologies and practical clinical applications. Furthermore, our work underscores the critical role of Explainable AI (XAI) in fostering trust, improving decision-making, and enhancing the efficacy of AI-driven diagnostics, paving the way for future advancements in the field.

2024

Deep Left Ventricular Motion Estimation Methods in Echocardiography: A Comparative Study

Authors
Ferraz, S; Coimbra, MT; Pedrosa, J;

Publication
46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2024, Orlando, FL, USA, July 15-19, 2024

Abstract
Motion estimation in echocardiography is critical when assessing heart function and calculating myocardial deformation indices. Nevertheless, there are limitations in clinical practice, particularly with regard to the accuracy and reliability of measurements retrieved from images. In this study, deep learning-based motion estimation architectures were used to determine the left ventricular longitudinal strain in echocardiography. Three motion estimation approaches, pretrained on popular optical flow datasets, were applied to a simulated echocardiographic dataset. Results show that PWC-Net, RAFT and FlowFormer achieved an average end point error of 0.20, 0.11 and 0.09 mm per frame, respectively. Additionally, global longitudinal strain was calculated from the FlowFormer outputs to assess strain correlation. Notably, there is variability in strain accuracy among different vendors. Thus, optical flow-based motion estimation has the potential to facilitate the use of strain imaging in clinical practice.
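The average end-point error used to compare PWC-Net, RAFT and FlowFormer above is a simple metric and can be stated precisely. The sketch below defines it over dense flow fields; the toy arrays are illustrative, not echocardiographic data.

```python
# Average end-point error (EPE): mean Euclidean distance between the
# estimated and ground-truth per-pixel displacement vectors in a frame.
import numpy as np

def average_epe(flow_pred, flow_gt):
    """Flow arrays have shape (H, W, 2): per-pixel (dx, dy) displacement."""
    return float(np.mean(np.linalg.norm(flow_pred - flow_gt, axis=-1)))

# Toy check: a uniform 0.1-unit horizontal error yields an EPE of 0.1.
gt = np.zeros((4, 4, 2))
pred = gt.copy()
pred[..., 0] += 0.1
```

In the comparison above, this per-frame quantity (in millimetres, after converting pixel displacements to physical units) is what separates the three optical flow methods.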

2024

BEAS-Net: A Shape-Prior-Based Deep Convolutional Neural Network for Robust Left Ventricular Segmentation in 2-D Echocardiography

Authors
Akbari, S; Tabassian, M; Pedrosa, J; Queirós, S; Papangelopoulou, K; D'hooge, J;

Publication
IEEE TRANSACTIONS ON ULTRASONICS FERROELECTRICS AND FREQUENCY CONTROL

Abstract
Left ventricle (LV) segmentation of 2-D echocardiography images is an essential step in the analysis of cardiac morphology and function and, more generally, the diagnosis of cardiovascular diseases (CVD). Several deep learning (DL) algorithms have recently been proposed for the automatic segmentation of the LV, showing significant performance improvement over the traditional segmentation algorithms. However, unlike the traditional methods, prior information about the segmentation problem, e.g., anatomical shape information, is not usually incorporated for training the DL algorithms. This can degrade the generalization performance of the DL models on unseen images if their characteristics are somewhat different from those of the training images, e.g., low-quality testing images. In this study, a new shape-constrained deep convolutional neural network (CNN), called B-spline explicit active surface (BEAS)-Net, is introduced for automatic LV segmentation. The BEAS-Net learns how to associate the image features, encoded by its convolutional layers, with anatomical shape-prior information derived by the BEAS algorithm to generate physiologically meaningful segmentation contours when dealing with artifactual or low-quality images. The performance of the proposed network was evaluated using three different in vivo datasets and was compared with a deep segmentation algorithm based on the U-Net model. Both networks yielded comparable results when tested on images of acceptable quality, but the BEAS-Net outperformed the benchmark DL model on artifactual and low-quality images.

2024

Machine Learning Computed Tomography Radiomics of Abdominal Adipose Tissue to Optimize Cardiovascular Risk Assessment

Authors
Mancio, J; Lopes, A; Sousa, I; Nunes, F; Xara, S; Carvalho, M; Ferreira, W; Ferreira, N; Barros, A; Fontes-Carvalho, R; Ribeiro, VG; Bettencourt, N; Pedrosa, J;

Publication

Abstract

Background: Subcutaneous (SAF) and visceral (VAF) abdominal fat have specific properties which the global body fat and total abdominal fat (TAF) size metrics do not capture. Beyond size, radiomics allows deep tissue phenotyping and may capture fat dysfunction. We aimed to characterize the computed tomography (CT) radiomics of SAF and VAF and assess their incremental value above fat size to detect coronary calcification.
Methods: SAF, VAF and TAF area, signal distribution and texture were extracted from non-contrast CT of 1001 subjects (57% male, 57 ± 10 years) with no established cardiovascular disease who underwent CT for coronary calcium score (CCS) with an additional abdominal slice (L4/5-S1). XGBoost machine learning models (ML) were used to identify the best features that discriminate SAF from VAF and to train/test ML to detect any coronary calcification (CCS > 0).
Results: SAF and VAF appearance in non-contrast CT differs: SAF displays a brighter and finer texture than VAF. Compared with CCS = 0, SAF of CCS > 0 has higher signal and more homogeneous texture, while VAF of CCS > 0 has lower signal and more heterogeneous texture. SAF signal/texture improved the performance of SAF area to detect CCS > 0. An ML model including SAF and VAF area performed better than TAF area in discriminating CCS > 0 from CCS = 0; however, a combined ML model of the best SAF and VAF features detected CCS > 0 as well as the best TAF features.
Conclusion: In non-contrast CT, SAF and VAF appearance differs, and SAF radiomics improves the detection of CCS > 0 when added to fat area; TAF radiomics (but not TAF area) spares the need for separate SAF and VAF segmentations.
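The central classification task in this abstract, discriminating CCS > 0 from CCS = 0 with boosted trees over fat radiomics features, can be sketched generically. The example below uses scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost, and the radiomics features and labels are synthetic placeholders, not the study data.

```python
# Boosted-tree classification of CCS > 0 vs CCS = 0 from radiomics-style
# features, evaluated by cross-validated AUC. GradientBoostingClassifier
# stands in for XGBoost; data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
radiomics = rng.normal(size=(n, 8))  # stand-ins for area/signal/texture features
labels = (radiomics[:, 0] + radiomics[:, 1]
          + rng.normal(scale=0.5, size=n)) > 0  # synthetic "CCS > 0" label

clf = GradientBoostingClassifier(random_state=0)
auc = cross_val_score(clf, radiomics, labels, cv=5, scoring="roc_auc").mean()
```

Comparing such AUCs across feature sets (area only vs. area plus signal/texture) is the form of the incremental-value analysis the study reports.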
