About

João Pedrosa was born in Figueira da Foz, Portugal, in 1990. He received the M.Sc. degree in biomedical engineering from the University of Porto, Porto, Portugal, in 2013 and the Ph.D. degree in biomedical sciences from KU Leuven, Leuven, Belgium, in 2018, where he focused on the development of a framework for segmentation of the left ventricle in 3D echocardiography. He joined INESC TEC (Porto, Portugal) in 2018 as a postdoctoral researcher and has been an invited assistant professor at the Faculty of Engineering of the University of Porto since 2020. His research interests include medical image acquisition and processing, machine/deep learning, and applied research for improved patient care.

Details

  • Name

    João Manuel Pedrosa
  • Role

    Assistant Researcher
  • Since

    5th December 2018
  • Nationality

    Portugal
  • Contacts

    +351222094106
    joao.m.pedrosa@inesctec.pt
Publications

2022

Computer-aided lung cancer screening in computed tomography: state-of-the-art and future perspectives

Authors
Pedrosa, J; Aresta, G; Ferreira, C;

Publication
Detection Systems in Lung Cancer and Imaging, Volume 1

Abstract

2022

Lesion-Based Chest Radiography Image Retrieval for Explainability in Pathology Detection

Authors
Pedrosa, J; Sousa, P; Silva, J; Mendonca, AM; Campilho, A;

Publication
PATTERN RECOGNITION AND IMAGE ANALYSIS (IBPRIA 2022)

Abstract

2022

Assessing clinical applicability of COVID-19 detection in chest radiography with deep learning

Authors
Pedrosa, J; Aresta, G; Ferreira, C; Carvalho, C; Silva, J; Sousa, P; Ribeiro, L; Mendonca, AM; Campilho, A;

Publication
SCIENTIFIC REPORTS

Abstract
The coronavirus disease 2019 (COVID-19) pandemic has impacted healthcare systems across the world. Chest radiography (CXR) can be used as a complementary method for diagnosing/following COVID-19 patients. However, the experience level and workload of technicians and radiologists may affect the decision process. Recent studies suggest that deep learning can be used to assess CXRs, providing an important second opinion for radiologists and technicians in the decision process, and super-human performance in detection of COVID-19 has been reported in multiple studies. In this study, the clinical applicability of deep learning systems for COVID-19 screening was assessed by testing the performance of deep learning systems for the detection of COVID-19. Specifically, four datasets were used: (1) a collection of multiple public datasets (284,793 CXRs); (2) the BIMCV dataset (16,631 CXRs); (3) COVIDGR (852 CXRs); and (4) a private dataset (6,361 CXRs). All datasets were collected retrospectively and consist of only frontal CXR views. A ResNet-18 was trained on each of the datasets for the detection of COVID-19. It is shown that a high dataset bias was present, leading to high performance in intradataset train-test scenarios (area under the curve > 0.98 on the collection of public datasets). Significantly lower performances were obtained in interdataset train-test scenarios, however (area under the curve 0.55–0.84). A subset of the data was then assessed by radiologists for comparison to the automatic systems. Finetuning with radiologist annotations significantly increased performance across datasets (area under the curve 0.61–0.88) and improved the attention on clinical findings in positive COVID-19 CXRs. Nevertheless, tests on CXRs from different hospital services indicate that the screening performance of CXR and automatic systems is limited (area under the curve < 0.6 on emergency service CXRs). However, COVID-19 manifestations can be accurately detected when present, motivating the use of these tools for evaluating disease progression on mild to severe COVID-19 patients. © 2022, The Author(s).
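The evaluation metric throughout this abstract is the area under the ROC curve, which can be computed directly from classifier scores via the rank-based (Mann–Whitney U) formulation. A minimal standard-library sketch; the function name `roc_auc` is illustrative and not code from the paper:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive scores higher than a
    random negative, counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("both classes must be present")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count half
    return wins / (len(pos) * len(neg))
```

A perfectly separating model scores 1.0, a random one about 0.5; the inter-dataset drop reported above (from > 0.98 to 0.55–0.84) would correspond to scores approaching chance level on unseen datasets.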

2022

Attention-driven Spatial Transformer Network for Abnormality Detection in Chest X-Ray Images

Authors
Rocha, J; Pereira, SC; Pedrosa, J; Campilho, A; Mendonca, AM;

Publication
2022 IEEE 35TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS (CBMS)

Abstract

2022

Leveraging CMR for 3D echocardiography: an annotated multimodality dataset for AI

Authors
Zhao, D; Ferdian, E; Maso Talou, GD; Gilbert, K; Quill, GM; Wang, VY; Pedrosa, J; D'hooge, J; Sutton, T; Lowe, BS; Legget, ME; Ruygrok, PN; Doughty, RN; Young, AA; Nash, MP;

Publication
European Heart Journal - Cardiovascular Imaging

Abstract
Funding: public grant(s), national budget only. Main funding sources: Health Research Council of New Zealand (HRC); National Heart Foundation of New Zealand (NHF).

Segmentation of the left ventricular myocardium and cavity in 3D echocardiography (3DE) is a critical task for the quantification of systolic function in heart disease. Continuing advances in 3DE have considerably improved image quality, prompting increased clinical uptake in recent years, particularly for volumetric measurements. Nevertheless, analysis of 3DE remains a difficult problem due to inherently complex noise characteristics, anisotropic image resolution, and regions of acoustic dropout. One of the primary challenges associated with the development of automated methods for 3DE analysis is the requirement of a sufficiently large training dataset. Historically, ground truth annotations have been difficult to obtain due to the high degree of inter- and intra-observer variability associated with manual 3DE segmentation, thus limiting the scope of AI-based solutions. To address the lack of expert consensus, we instead used labels derived from cardiac magnetic resonance (CMR) images of the same subjects. By spatiotemporally registering CMR labels to corresponding 3DE image data on a per-subject basis (Figure 1), we collated 520 annotated 3DE images from a mixed cohort of 130 human subjects (2 independent single-beat acquisitions per subject at end-diastole and end-systole) consisting of healthy controls and patients with acquired cardiac disease. Comprising images acquired across a range of patient demographics, this curated dataset exhibits variation in image quality, 3DE acquisition parameters, as well as left ventricular shape and pose within the 3D image volume. To demonstrate the utility of such a dataset, nn-UNet, a self-configuring deep learning method for semantic segmentation, was employed. An 80/20 split of the dataset was used for training and testing, respectively, and data augmentations were applied in the form of scaling, rotation, and reflection. The trained network was capable of reproducing measurements derived from CMR for end-diastolic volume, end-systolic volume, ejection fraction, and mass, while outperforming an expert human observer in terms of accuracy as well as scan-rescan reproducibility (Table I). As part of ongoing efforts to improve the accuracy and efficiency of 3DE analysis, we have leveraged the high resolution and signal-to-noise ratio of CMR (relative to 3DE) to create a novel, publicly available benchmark dataset for developing and evaluating 3DE labelling methods. This approach not only significantly reduces the effects of observer-specific bias and variability in training data arising from conventional manual 3DE analysis methods, but also improves the agreement between cardiac indices derived from 3DE and CMR. Figure 1: data annotation workflow. Table I: results.
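With two acquisitions per subject at two cardiac phases, an 80/20 train/test split has to be made at the subject level so that no subject's images leak across the split. A minimal standard-library sketch of such a split; the function name, seed, and fraction parameter are illustrative, not code from the paper:

```python
import random

def subject_level_split(subject_ids, train_frac=0.8, seed=0):
    """Shuffle subjects (not individual images) and cut once, so all
    acquisitions from one subject land on the same side of the split."""
    ids = list(subject_ids)
    rng = random.Random(seed)  # fixed seed for a reproducible split
    rng.shuffle(ids)
    cut = int(len(ids) * train_frac)
    return ids[:cut], ids[cut:]
```

For the 130-subject cohort described above, this yields 104 training subjects and 26 test subjects; the four images per subject are then gathered per split.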

Supervised Theses

2022

Automatic Eyetracking-Assisted Chest Radiography Pathology Screening

Author
Rui Manuel Azevedo dos Santos

Institution
UP-FEUP

2022

Automatic contrast generation from contrastless CTs

Author
Rúben André Dias Domingues

Institution
UP-FCUP

2022

Anatomical Segmentation in Automated Chest Radiography Screening

Author
Emanuel Ricardo Coimbra Quintas Brioso

Institution
UP-FEUP

2021

Generative Adversarial Networks in Automated Chest Radiography Screening

Author
Martim de Aguiar Quintas Penha e Sousa

Institution
UP-FEUP

2021

Multi-Modal Tasking for Skin Lesion Classification using DNN

Author
Rafaela Garrido Ribeiro de Carvalho

Institution
UP-FEUP