Publications

Publications by João Manuel Pedrosa

2024

Automated Visceral and Subcutaneous Fat Segmentation in Computed Tomography

Authors
Castro, R; Sousa, I; Nunes, F; Mancio, J; Fontes-Carvalho, R; Ferreira, C; Pedrosa, J;

Publication
IEEE International Symposium on Biomedical Imaging, ISBI 2024

Abstract
Cardiovascular diseases are the leading cause of death worldwide. While there are a number of cardiovascular risk indicators, recent studies have found a connection between cardiovascular risk and the accumulation and characteristics of visceral adipose tissue in the ventral cavity. Visceral adipose tissue can be readily quantified in computed tomography scans, but the manual delineation of these structures is a time-consuming process subject to variability, which has motivated the development of automatic tools offering faster and more precise solutions. This paper explores the use of a U-Net architecture to perform ventral cavity segmentation, followed by threshold-based approaches for visceral and subcutaneous adipose tissue segmentation. Experiments with different learning rates, input image sizes, and loss functions were conducted to find the hyperparameters best suited to this problem. On an external test set, the best-performing ventral cavity segmentation model achieved a Dice coefficient of 0.967, while visceral and subcutaneous adipose tissue segmentation achieved Dice coefficients of 0.986 and 0.995, respectively. Not only are these results competitive with the state of the art, but the interobserver variability measured on this external dataset was similar, confirming the robustness and reliability of the proposed segmentation.
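The threshold-based step described in this abstract lends itself to a compact illustration. The sketch below is not the authors' code: it assumes a typical fat attenuation window of -190 to -30 HU and that body and ventral-cavity masks are already available (e.g., from the U-Net), and simply splits fat voxels into visceral and subcutaneous compartments.

```python
import numpy as np

# Illustrative sketch, not the paper's implementation. The HU fat window
# (-190 to -30) is a commonly used range and an assumption here.
FAT_HU_MIN, FAT_HU_MAX = -190, -30

def segment_adipose(ct_hu: np.ndarray,
                    cavity_mask: np.ndarray,
                    body_mask: np.ndarray):
    """Split fat voxels into visceral (inside the ventral cavity) and
    subcutaneous (inside the body but outside the cavity) compartments."""
    fat = (ct_hu >= FAT_HU_MIN) & (ct_hu <= FAT_HU_MAX)
    visceral = fat & cavity_mask.astype(bool)
    subcutaneous = fat & body_mask.astype(bool) & ~cavity_mask.astype(bool)
    return visceral, subcutaneous
```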

2024

Lightweight 3D CNN for the Segmentation of Coronary Calcifications and Calcium Scoring

Authors
Santos, R; Baeza, R; Filipe, VM; Renna, F; Paredes, H; Pedrosa, J;

Publication
2024 IEEE 22nd Mediterranean Electrotechnical Conference, MELECON 2024

Abstract
Coronary artery calcium is a good indicator of coronary artery disease and can be used for cardiovascular risk stratification. Over the years, different deep learning approaches have been proposed to automatically segment coronary calcifications in computed tomography scans and measure their extent through calcium scores. However, most methodologies have focused on 2D architectures, which neglect most of the information present in those scans. In this work, we use a 3D convolutional neural network capable of leveraging the 3D nature of computed tomography scans and including more context in the segmentation process. In addition, the selected network is lightweight, allowing 3D convolutions with low memory requirements. Our results show that the predictions of the model, trained on the COCA dataset, are close to the ground truth for the majority of the patients in the test set, obtaining a Dice score of 0.90 +/- 0.16 and a Cohen's linearly weighted kappa of 0.88 in Agatston score risk categorization. In conclusion, our approach shows promise in segmenting coronary artery calcifications and predicting calcium scores, with the objectives of optimizing clinical workflow and performing cardiovascular risk stratification.
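For context, the Agatston score mentioned above has a standard definition that is easy to sketch: voxels at or above 130 HU form lesions, and each 2D lesion contributes its area times a weight set by its peak attenuation. The code below follows that textbook definition (assuming conventional axial slices and omitting minimum-lesion-area rules); it is not the paper's pipeline.

```python
import numpy as np
from scipy import ndimage

# Simplified Agatston scoring following the standard definition; assumes
# conventional axial slices and omits minimum-lesion-area rules.
def agatston_weight(peak_hu: float) -> int:
    """Density weight from a lesion's peak attenuation (HU)."""
    if peak_hu >= 400:
        return 4
    if peak_hu >= 300:
        return 3
    if peak_hu >= 200:
        return 2
    return 1  # 130-199 HU

def agatston_score(ct_hu: np.ndarray, calc_mask: np.ndarray,
                   pixel_area_mm2: float) -> float:
    """Sum area * density weight over connected lesions, slice by slice."""
    score = 0.0
    for z in range(ct_hu.shape[0]):                  # axial slices
        labels, n_lesions = ndimage.label(calc_mask[z])
        for lesion in range(1, n_lesions + 1):
            m = labels == lesion
            score += m.sum() * pixel_area_mm2 * agatston_weight(ct_hu[z][m].max())
    return score
```

Total scores are then binned into conventional risk categories (e.g., 0, 1-10, 11-100, 101-400, >400), which is the categorization behind the reported weighted kappa.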

2024

Quality Assessment of Low-Cost Retinal Videos for Glaucoma Screening

Authors
Abay, SG; Lima, F; Geurts, L; Camara, J; Pedrosa, J; Cunha, A;

Publication
Procedia Computer Science

Abstract
Low-cost, smartphone-compatible portable ophthalmoscopes can capture images of the patient's retina to screen for several ophthalmological diseases, such as glaucoma. The captured images have lower quality and resolution than those of standard retinography devices, but are sufficient for glaucoma screening. Short videos are captured to improve the chance of inspecting the eye properly; however, these videos may not always have enough quality for glaucoma screening, and the patient then needs to repeat the examination later. In this paper, a method for automatically assessing the quality of videos captured with the D-Eye lens is proposed and evaluated on a private dataset of 539 videos. Two methods were developed for retina localization in the video frames: the Circle Hough Transform, with a precision of 78.12%, and YOLOv7, with a precision of 99.78%. Building on these, the quality assessment method automatically decides on the quality of a video by measuring the number of good-quality frames it contains, according to a chosen threshold.
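As a rough illustration of the first of the two detectors, the snippet below locates a circular retina region in a frame with OpenCV's Circle Hough Transform. All parameter values are illustrative assumptions, not the authors' settings.

```python
import cv2
import numpy as np

# Hedged sketch of retina localization via the Circle Hough Transform;
# every parameter value below is an assumption, not the paper's setting.
def locate_retina(frame_bgr: np.ndarray):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)                 # suppress sensor noise
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT,
        dp=1.2,                                    # accumulator resolution
        minDist=gray.shape[0] // 2,                # expect a single retina circle
        param1=100, param2=30,                     # Canny / accumulator thresholds
        minRadius=40, maxRadius=200)
    if circles is None:
        return None                                # no retina-like circle found
    return np.round(circles[0, 0]).astype(int)     # (x, y, radius) of best circle
```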

2023

Automatic Contrast Generation from Contrastless Computed Tomography

Authors
Domingues, R; Nunes, F; Mancio, J; Fontes Carvalho, R; Coimbra, M; Pedrosa, J; Renna, F;

Publication
2023 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society, EMBC

Abstract
The use of contrast-enhanced computed tomography coronary angiography (CTCA) for the detection of coronary artery disease (CAD) exposes patients to the risks of iodinated contrast agents and excessive radiation, and increases scanning time and healthcare costs. Deep learning generative models have the potential to artificially create a pseudo-enhanced image from non-contrast computed tomography (CT) scans. In this work, two specific generative adversarial network (GAN) models, the Pix2Pix-GAN and the Cycle-GAN, were tested with paired non-contrast CT and CTCA scans from a private and a public dataset. Furthermore, an exploratory analysis of the trade-offs between 2D and 3D inputs and architectures was performed. Considering only the Structural Similarity Index Measure (SSIM) and the Peak Signal-to-Noise Ratio (PSNR), the Pix2Pix-GAN using 2D data reached the best results, with 0.492 SSIM and 16.375 dB PSNR. However, visual analysis of the output shows significant blur in the generated images, which is not the case for the Cycle-GAN models. This behavior can be captured by the Fréchet Inception Distance (FID), a fundamental performance metric that is usually not considered in related work.
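The two full-reference metrics named above are standard and available in scikit-image; a minimal evaluation of a generated slice against the real contrast-enhanced one might look like the sketch below. The data_range argument must match how the CT intensities were scaled, which is an assumption here.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

# Minimal sketch of SSIM/PSNR evaluation for a generated slice against the
# real contrast-enhanced one; assumes images are scaled to [0, 1].
def evaluate_pair(real: np.ndarray, generated: np.ndarray,
                  data_range: float = 1.0):
    ssim = structural_similarity(real, generated, data_range=data_range)
    psnr = peak_signal_noise_ratio(real, generated, data_range=data_range)
    return ssim, psnr
```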

2023

DEEPBEAS3D: Deep Learning and B-Spline Explicit Active Surfaces

Authors
Williams, H; Pedrosa, J; Asad, M; Cattani, L; Vercauteren, T; Deprest, J; D'Hooge, J;

Publication
IEEE International Ultrasonics Symposium, IUS

Abstract
Deep learning-based automatic segmentation methods have become the state of the art. However, they are often not robust enough for direct clinical application, as domain shifts between training and testing data affect their performance. Failure in automatic segmentation can cause sub-optimal results that require correction. To address these problems, we propose a novel 3D extension of an interactive segmentation framework that represents a segmentation from a convolutional neural network (CNN) as a B-spline explicit active surface (BEAS). BEAS ensures segmentations are smooth in 3D space, increasing anatomical plausibility, while allowing the user to precisely edit the 3D surface. We apply this framework to the task of 3D segmentation of the anal sphincter complex (AS) from transperineal ultrasound (TPUS) images, and compare it to the clinical tool used in the pelvic floor disorder clinic (4D View VOCAL, GE Healthcare; Zipf, Austria). Experimental results show that: 1) the proposed framework gives the user explicit control of the surface contour; 2) the perceived workload calculated via the NASA-TLX index was reduced by 30% compared to VOCAL; and 3) it required 70% (170 seconds) less user time than VOCAL (p < 0.00001).
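The core idea of BEAS, representing the boundary as a smooth spline that a user can edit, can be loosely illustrated in 2D with SciPy's smoothing splines. This is only an analogy to the paper's 3D surface representation, not its implementation.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Loose 2D analogy to BEAS (not the paper's implementation): fit a periodic
# smoothing B-spline to a noisy closed contour, yielding a smooth, compact,
# editable boundary representation.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
radius = 1.0 + 0.05 * np.random.randn(theta.size)    # noisy circular contour
x, y = radius * np.cos(theta), radius * np.sin(theta)

tck, _ = splprep([x, y], s=0.5, per=True)            # s trades fit vs. smoothness
xs, ys = splev(np.linspace(0, 1, 400), tck)          # smooth resampled boundary
```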

2023

MITEA: A dataset for machine learning segmentation of the left ventricle in 3D echocardiography using subject-specific labels from cardiac magnetic resonance imaging

Authors
Zhao, DB; Ferdian, E; Talou, GDM; Quill, GM; Gilbert, K; Wang, VY; Gamage, TPB; Pedrosa, J; D'hooge, J; Sutton, TM; Lowe, BS; Legget, ME; Ruygrok, PN; Doughty, RN; Camara, O; Young, AA; Nash, MP;

Publication
Frontiers in Cardiovascular Medicine

Abstract
Segmentation of the left ventricle (LV) in echocardiography is an important task for the quantification of volume and mass in heart disease. Continuing advances in echocardiography have extended imaging capabilities into the 3D domain, subsequently overcoming the geometric assumptions associated with conventional 2D acquisitions. Nevertheless, the analysis of 3D echocardiography (3DE) poses several challenges associated with limited spatial resolution, poor contrast-to-noise ratio, complex noise characteristics, and image anisotropy. To develop automated methods for 3DE analysis, a sufficiently large, labeled dataset is typically required. However, ground truth segmentations have historically been difficult to obtain due to the high inter-observer variability associated with manual analysis. We address this lack of expert consensus by registering labels derived from higher-resolution subject-specific cardiac magnetic resonance (CMR) images, producing 536 annotated 3DE images from 143 human subjects (10 of which were excluded). This heterogeneous population consists of healthy controls and patients with cardiac disease, across a range of demographics. To demonstrate the utility of such a dataset, a state-of-the-art, self-configuring deep learning network for semantic segmentation was employed for automated 3DE analysis. Using the proposed dataset for training, the network produced measurement biases of -9 +/- 16 ml, -1 +/- 10 ml, -2 +/- 5%, and 5 +/- 23 g for end-diastolic volume, end-systolic volume, ejection fraction, and mass, respectively, outperforming an expert human observer in terms of accuracy as well as scan-rescan reproducibility. As part of the Cardiac Atlas Project, we present here a large, publicly available 3DE dataset with ground truth labels that leverage the higher resolution and contrast of CMR, to provide a new benchmark for automated 3DE analysis. Such an approach not only reduces the effect of observer-specific bias present in manual 3DE annotations, but also enables the development of analysis techniques which exhibit better agreement with CMR compared to conventional methods. This represents an important step for enabling more efficient and accurate diagnostic and prognostic information to be obtained from echocardiography.
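The measurements reported above follow standard definitions: ejection fraction is derived from the end-diastolic and end-systolic volumes, and each bias is the mean ± SD of the automatic-minus-reference differences. The small worked example below uses hypothetical helper names.

```python
import numpy as np

# Worked example of the reported metrics; function names are hypothetical.
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """EF (%) from end-diastolic and end-systolic LV volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

def measurement_bias(automatic: np.ndarray, reference: np.ndarray):
    """Mean +/- SD of automatic-minus-reference differences, the form of
    the '-9 +/- 16 ml' style figures quoted in the abstract."""
    d = np.asarray(automatic, float) - np.asarray(reference, float)
    return d.mean(), d.std()
```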
