
Publications by João Manuel Pedrosa

2024

Deep Left Ventricular Motion Estimation Methods in Echocardiography: A Comparative Study

Authors
Ferraz, S; Coimbra, MT; Pedrosa, J;

Publication
46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2024, Orlando, FL, USA, July 15-19, 2024

Abstract
Motion estimation in echocardiography is critical when assessing heart function and calculating myocardial deformation indices. Nevertheless, there are limitations in clinical practice, particularly with regard to the accuracy and reliability of measurements retrieved from images. In this study, deep learning-based motion estimation architectures were used to determine the left ventricular longitudinal strain in echocardiography. Three motion estimation approaches, pretrained on popular optical flow datasets, were applied to a simulated echocardiographic dataset. Results show that PWC-Net, RAFT and FlowFormer achieved an average end point error of 0.20, 0.11 and 0.09 mm per frame, respectively. Additionally, global longitudinal strain was calculated from the FlowFormer outputs to assess strain correlation. Notably, there is variability in strain accuracy among different vendors. Thus, optical flow-based motion estimation has the potential to facilitate the use of strain imaging in clinical practice.
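The end point error reported above is the mean Euclidean distance between predicted and ground-truth displacement vectors. A minimal NumPy sketch of the metric (not code from the paper; names are illustrative):

```python
import numpy as np

def end_point_error(flow_pred, flow_gt):
    """Mean Euclidean distance between predicted and ground-truth
    displacement vectors, averaged over all pixels of one frame pair."""
    diff = flow_pred - flow_gt                       # (H, W, 2) vector error
    return float(np.mean(np.sqrt(np.sum(diff ** 2, axis=-1))))

# Toy example: a uniform one-pixel horizontal offset everywhere
pred = np.zeros((4, 4, 2))
gt = np.zeros((4, 4, 2))
pred[..., 0] = 1.0
print(end_point_error(pred, gt))  # 1.0
```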

2024

BEAS-Net: A Shape-Prior-Based Deep Convolutional Neural Network for Robust Left Ventricular Segmentation in 2-D Echocardiography

Authors
Akbari, S; Tabassian, M; Pedrosa, J; Queirós, S; Papangelopoulou, K; D'hooge, J;

Publication
IEEE TRANSACTIONS ON ULTRASONICS FERROELECTRICS AND FREQUENCY CONTROL

Abstract
Left ventricle (LV) segmentation of 2-D echocardiography images is an essential step in the analysis of cardiac morphology and function and, more generally, in the diagnosis of cardiovascular diseases (CVD). Several deep learning (DL) algorithms have recently been proposed for the automatic segmentation of the LV, showing significant performance improvement over traditional segmentation algorithms. However, unlike the traditional methods, prior information about the segmentation problem, e.g., anatomical shape information, is not usually incorporated when training the DL algorithms. This can degrade the generalization performance of DL models on unseen images whose characteristics differ from those of the training images, e.g., low-quality test images. In this study, a new shape-constrained deep convolutional neural network (CNN), called B-spline explicit active surface (BEAS)-Net, is introduced for automatic LV segmentation. BEAS-Net learns to associate the image features encoded by its convolutional layers with anatomical shape-prior information derived by the BEAS algorithm, generating physiologically meaningful segmentation contours when dealing with artifactual or low-quality images. The performance of the proposed network was evaluated on three different in vivo datasets and compared with a deep segmentation algorithm based on the U-Net model. Both networks yielded comparable results on images of acceptable quality, but BEAS-Net outperformed the benchmark DL model on artifactual and low-quality images.
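Segmentation networks like the ones compared above are typically evaluated with overlap metrics such as the Dice coefficient. A minimal sketch of that metric (not code from the paper):

```python
import numpy as np

def dice(mask_a, mask_b, eps=1e-8):
    """Dice overlap between two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum() + eps)

# Toy example: two 4x4 masks sharing half of their foreground
pred = np.zeros((4, 4), dtype=bool)
pred[:, :2] = True   # predicted contour fills the left half
ref = np.zeros((4, 4), dtype=bool)
ref[:, 1:3] = True   # reference contour fills the middle half
```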

2024

Machine Learning Computed Tomography Radiomics of Abdominal Adipose Tissue to Optimize Cardiovascular Risk Assessment

Authors
Mancio, J; Lopes, A; Sousa, I; Nunes, F; Xara, S; Carvalho, M; Ferreira, W; Ferreira, N; Barros, A; Fontes-Carvalho, R; Ribeiro, VG; Bettencourt, N; Pedrosa, J;

Publication

Abstract

Background: Subcutaneous (SAF) and visceral (VAF) abdominal fat have specific properties that global body fat and total abdominal fat (TAF) size metrics do not capture. Beyond size, radiomics allows deep tissue phenotyping and may capture fat dysfunction. We aimed to characterize the computed tomography (CT) radiomics of SAF and VAF and assess their incremental value over fat size for detecting coronary calcification.

Methods: SAF, VAF and TAF area, signal distribution and texture were extracted from non-contrast CT of 1001 subjects (57% male, 57 ± 10 years) with no established cardiovascular disease who underwent CT for coronary calcium score (CCS) with an additional abdominal slice (L4/5-S1). XGBoost machine learning models (ML) were used to identify the features that best discriminate SAF from VAF and to train/test ML to detect any coronary calcification (CCS > 0).

Results: SAF and VAF appearance in non-contrast CT differs: SAF displays a brighter and finer texture than VAF. Compared with CCS = 0, SAF of CCS > 0 has a higher signal and more homogeneous texture, while VAF of CCS > 0 has a lower signal and more heterogeneous texture. SAF signal/texture improved the performance of SAF area in detecting CCS > 0. A ML combining SAF and VAF area performed better than TAF area at discriminating CCS > 0 from CCS = 0; moreover, a combined ML of the best SAF and VAF features detected CCS > 0 as well as the best TAF features did.

Conclusion: In non-contrast CT, SAF and VAF appearance differs, and SAF radiomics improves the detection of CCS > 0 when added to fat area; TAF radiomics (but not TAF area) spares the need for separate SAF and VAF segmentations.
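The size features discussed above can be illustrated by thresholding a non-contrast CT slice within the commonly used adipose attenuation window of roughly -190 to -30 HU. A simplified sketch, not the paper's pipeline; the function name, region mask and pixel-area parameter are illustrative:

```python
import numpy as np

ADIPOSE_HU = (-190, -30)  # typical adipose-tissue attenuation window

def fat_features(ct_slice, region_mask, pixel_area_mm2=1.0):
    """Area and first-order signal statistics of fat pixels inside a
    compartment mask (e.g. a subcutaneous or visceral region)."""
    fat = (ct_slice >= ADIPOSE_HU[0]) & (ct_slice <= ADIPOSE_HU[1]) & region_mask
    hu = ct_slice[fat]
    return {
        "area_cm2": fat.sum() * pixel_area_mm2 / 100.0,  # mm^2 -> cm^2
        "mean_hu": float(hu.mean()) if hu.size else float("nan"),
        "std_hu": float(hu.std()) if hu.size else float("nan"),
    }

# Toy example: a 10x10 region filled with fat-range attenuation
ct = np.full((10, 10), -100.0)
region = np.ones((10, 10), dtype=bool)
feats = fat_features(ct, region)
```

Texture features (e.g. the heterogeneity described in the results) would be extracted on top of the same masked region with a dedicated radiomics toolkit.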

2024

A Cascade Approach for Automatic Segmentation of Coronary Arteries Calcification in Computed Tomography Images Using Deep Learning

Authors
Araújo, ADC; Silva, AC; Pedrosa, JM; Silva, IFS; Diniz, JOB;

Publication
WIRELESS MOBILE COMMUNICATION AND HEALTHCARE, MOBIHEALTH 2023

Abstract
One of the indicators of possible occurrences of cardiovascular diseases is the amount of coronary artery calcium. Recently, approaches using new technologies such as deep learning have been used to help identify these indicators. This work proposes a segmentation method for calcification of the coronary arteries that has three steps: (1) extraction of the ROI using U-Net with batch normalization after convolution layers, (2) segmentation of the calcifications and (3) removal of false positives using Modified U-Net with EfficientNet. The method uses histogram matching as preprocessing in order to increase the contrast between tissue and calcification and normalize the different types of exams. Multiple architectures were tested and the best achieved 96.9% F1-Score, 97.1% recall and 98.3% in the OrcaScore Dataset.
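The histogram matching preprocessing described above can be sketched as a quantile mapping from a source scan onto a reference scan (a minimal NumPy version, not the paper's implementation; `skimage.exposure.match_histograms` provides an equivalent off-the-shelf routine):

```python
import numpy as np

def match_histograms(source, reference):
    """Remap source intensities so their empirical CDF matches the
    reference's, normalizing contrast across different exam types."""
    s_values, s_counts = np.unique(source.ravel(), return_counts=True)
    r_values, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_quantiles = np.cumsum(s_counts) / source.size   # CDF of the source
    r_quantiles = np.cumsum(r_counts) / reference.size  # CDF of the reference
    # For each source quantile, find the reference intensity at that quantile
    mapped = np.interp(s_quantiles, r_quantiles, r_values)
    idx = np.searchsorted(s_values, source.ravel())
    return mapped[idx].reshape(source.shape)

# Toy example: reference spans ten times the source's intensity range
source = np.arange(16, dtype=float).reshape(4, 4)
reference = source * 10.0
matched = match_histograms(source, reference)
```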

2024

Evaluating Visual Explainability in Chest X-Ray Pathology Detection

Authors
Pereira, P; Rocha, J; Pedrosa, J; Mendonça, AM;

Publication
2024 IEEE 22ND MEDITERRANEAN ELECTROTECHNICAL CONFERENCE, MELECON 2024

Abstract
Chest X-ray (CXR) plays a vital role in diagnosing lung and heart conditions, but the high demand for CXR examinations poses challenges for radiologists. Automatic support systems can ease this burden by assisting radiologists in the image analysis process. While Deep Learning models have shown promise in this task, concerns persist regarding their complexity and decision-making opacity. To address this, various visual explanation techniques have been developed to elucidate the model reasoning, some of which have received significant attention in literature and are widely used, such as GradCAM. However, it is unclear how different explanation methods perform, how to quantitatively measure their performance, and how that performance may depend on the model architecture used and the dataset characteristics. In this work, two widely used deep classification networks, DenseNet121 and ResNet50, are trained for multi-pathology classification on CXR, and visual explanations are then generated using GradCAM, GradCAM++, EigenGrad-CAM, Saliency maps, LRP and DeepLift. These explanation methods are then compared with radiologist annotations using previously proposed explainability evaluation metrics: intersection over union and hit rate. Furthermore, a novel method to convey visual explanations in the form of radiological written reports is proposed, allowing for a clinically-oriented explainability evaluation metric: zones score. It is shown that GradCAM++ and Saliency methods offer the most accurate explanations and that the effectiveness of visual explanations varies with the model and corresponding input size. Additionally, the explainability performance across different CXR datasets is evaluated, highlighting that explanation quality depends on the dataset's characteristics and annotations.
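The two previously proposed metrics named above compare an explanation map against the radiologist annotation. A minimal sketch of both (illustrative names, not code from the paper):

```python
import numpy as np

def iou(explanation_mask, annotation_mask):
    """Intersection over union of a binarized explanation and an annotation."""
    inter = np.logical_and(explanation_mask, annotation_mask).sum()
    union = np.logical_or(explanation_mask, annotation_mask).sum()
    return inter / union if union else 0.0

def hit_rate(saliency_map, annotation_mask):
    """1 if the most salient pixel falls inside the annotation, else 0."""
    y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
    return int(bool(annotation_mask[y, x]))

# Toy example: two band-shaped masks overlapping in one row
expl = np.zeros((4, 4), dtype=bool); expl[:2] = True
annot = np.zeros((4, 4), dtype=bool); annot[1:3] = True
sal = np.zeros((4, 4)); sal[1, 1] = 1.0  # peak inside the annotation
```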

2025

Anatomically-Guided Inpainting for Local Synthesis of Normal Chest Radiographs

Authors
Pedrosa, J; Pereira, SC; Silva, J; Mendonça, AM; Campilho, A;

Publication
DEEP GENERATIVE MODELS, DGM4MICCAI 2024

Abstract
Chest radiography (CXR) is one of the most used medical imaging modalities. Nevertheless, the interpretation of CXR images is time-consuming and subject to variability. As such, automated systems for pathology detection have been proposed and promising results have been obtained, particularly using deep learning. However, these tools suffer from poor explainability, which represents a major hurdle for their adoption in clinical practice. One proposed explainability method in CXR is through contrastive examples, i.e. by showing an alternative version of the CXR except without the lesion being investigated. While image-level normal/healthy image synthesis has been explored in literature, normal patch synthesis via inpainting has received little attention. In this work, a method to synthesize contrastive examples in CXR based on local synthesis of normal CXR patches is proposed. Based on a contextual attention inpainting network (CAttNet), an anatomically-guided inpainting network (AnaCAttNet) is proposed that leverages anatomical information of the original CXR through segmentation to guide the inpainting for a more realistic reconstruction. A quantitative evaluation of the inpainting is performed, showing that AnaCAttNet outperforms CAttNet (FID of 0.0125 and 0.0132 respectively). Qualitative evaluation by three readers also showed that AnaCAttNet delivers superior reconstruction quality and anatomical realism. In conclusion, the proposed anatomical segmentation module for inpainting is shown to improve inpainting performance.
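In inpainting setups of this kind, the network input is typically the radiograph with the target region masked out plus the mask itself; the anatomical guidance described above can be sketched as an extra segmentation channel (an illustrative sketch under those assumptions, not the AnaCAttNet architecture):

```python
import numpy as np

def build_inpainting_input(cxr, inpaint_mask, anatomy_seg):
    """Stack the masked radiograph, the binary inpainting mask and an
    anatomical segmentation map into one multi-channel network input."""
    masked = cxr * (1.0 - inpaint_mask)  # zero out the region to be synthesized
    return np.stack([masked, inpaint_mask, anatomy_seg], axis=0)

# Toy example: 64x64 radiograph with a 16x16 patch to inpaint
cxr = np.random.rand(64, 64)
mask = np.zeros((64, 64)); mask[24:40, 24:40] = 1.0
seg = np.zeros((64, 64))  # e.g. a lung-field segmentation, here left empty
x = build_inpainting_input(cxr, mask, seg)
```

The network would then be trained to reconstruct the masked patch so that it looks like normal anatomy, yielding the contrastive "lesion-free" example.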
