
About

João Pedrosa was born in Figueira da Foz, Portugal, in 1990. He received the M.Sc. degree in biomedical engineering from the University of Porto, Porto, Portugal, in 2013, and the Ph.D. degree in biomedical sciences from KU Leuven, Leuven, Belgium, in 2018, where he focused on the development of a framework for segmentation of the left ventricle in 3D echocardiography. He joined INESC TEC (Porto, Portugal) in 2018 as a postdoctoral researcher and has been an invited assistant professor at the Faculty of Engineering of the University of Porto since 2020. His research interests include medical image acquisition and processing, machine/deep learning, and applied research for improved patient care.


Details

  • Name

    João Manuel Pedrosa
  • Role

    Assistant Researcher
  • Since

    5th December 2018
  • Nationality

    Portugal
  • Contacts

    +351222094106
    joao.m.pedrosa@inesctec.pt
Publications

2023

Automatic Eye-Tracking-Assisted Chest Radiography Pathology Screening

Authors
Santos, R; Pedrosa, J; Mendonça, AM; Campilho, A;

Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract

2022

Computer-aided lung cancer screening in computed tomography: state-of-the-art and future perspectives

Authors
Pedrosa, J; Aresta, G; Ferreira, C;

Publication
Detection Systems in Lung Cancer and Imaging, Volume 1

Abstract

2022

Lesion-Based Chest Radiography Image Retrieval for Explainability in Pathology Detection

Authors
Pedrosa, J; Sousa, P; Silva, J; Mendonca, AM; Campilho, A;

Publication
PATTERN RECOGNITION AND IMAGE ANALYSIS (IBPRIA 2022)

Abstract
Chest radiography is one of the most common medical imaging modalities. However, chest radiography interpretation is a complex task that requires significant expertise. As such, the development of automatic systems for pathology detection has been proposed in the literature, particularly using deep learning. However, these techniques suffer from a lack of explainability, which hinders their adoption in clinical scenarios. One technique commonly used by radiologists to support and explain decisions is to search for cases with similar findings for direct comparison. However, this process is extremely time-consuming and can be prone to confirmation bias. Automatic image retrieval methods have been proposed in the literature but typically extract features from the whole image, failing to focus on the lesion in which the radiologist is interested. In order to overcome these issues, a novel framework for lesion-based image retrieval, LXIR, is proposed in this study, based on a state-of-the-art object detection framework (YOLOv5) for the detection of relevant lesions as well as feature representation of those lesions. It is shown that the proposed method can successfully identify lesions and extract features which accurately describe high-order characteristics of each lesion, allowing retrieval of lesions of the same pathological class. Furthermore, it is shown that, in comparison to SSIM-based retrieval, a classical perceptual metric, and random retrieval of lesions, the proposed method retrieves the most relevant lesions 81% of the time, according to the evaluation of two independent radiologists, compared to 42% for SSIM.
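The retrieval step the abstract describes — comparing feature representations of detected lesions to find the most similar cases — amounts to a nearest-neighbour search over feature vectors. The sketch below is illustrative only: random vectors stand in for YOLOv5-derived lesion features, and the function name is hypothetical, not part of LXIR.

```python
import numpy as np

def retrieve_most_similar(query_feat, gallery_feats, k=3):
    """Return indices of the k gallery lesions most similar to the query,
    ranked by cosine similarity of their feature vectors."""
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ q                       # cosine similarity to each gallery lesion
    return np.argsort(sims)[::-1][:k]  # highest similarity first

# Toy usage: 100 gallery lesions with 128-dim features (random stand-ins).
rng = np.random.default_rng(0)
gallery = rng.normal(size=(100, 128))
query = gallery[42] + 0.01 * rng.normal(size=128)  # near-duplicate of lesion 42
print(retrieve_most_similar(query, gallery)[0])    # → 42
```

Normalising the vectors first makes the dot product a cosine similarity, so retrieval depends on feature direction rather than magnitude.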

2022

Assessing clinical applicability of COVID-19 detection in chest radiography with deep learning

Authors
Pedrosa, J; Aresta, G; Ferreira, C; Carvalho, C; Silva, J; Sousa, P; Ribeiro, L; Mendonca, AM; Campilho, A;

Publication
SCIENTIFIC REPORTS

Abstract
The coronavirus disease 2019 (COVID-19) pandemic has impacted healthcare systems across the world. Chest radiography (CXR) can be used as a complementary method for diagnosing/following COVID-19 patients. However, the experience level and workload of technicians and radiologists may affect the decision process. Recent studies suggest that deep learning can be used to assess CXRs, providing an important second opinion for radiologists and technicians in the decision process, and super-human performance in detection of COVID-19 has been reported in multiple studies. In this study, the clinical applicability of deep learning systems for COVID-19 screening was assessed by testing the performance of deep learning systems for the detection of COVID-19. Specifically, four datasets were used: (1) a collection of multiple public datasets (284,793 CXRs); (2) the BIMCV dataset (16,631 CXRs); (3) COVIDGR (852 CXRs); and (4) a private dataset (6,361 CXRs). All datasets were collected retrospectively and consist of only frontal CXR views. A ResNet-18 was trained on each of the datasets for the detection of COVID-19. It is shown that a high dataset bias was present, leading to high performance in intradataset train-test scenarios (area under the curve > 0.98 on the collection of public datasets). Significantly lower performances were obtained in interdataset train-test scenarios, however (area under the curve 0.55-0.84). A subset of the data was then assessed by radiologists for comparison to the automatic systems. Finetuning with radiologist annotations significantly increased performance across datasets (area under the curve 0.61-0.88) and improved the attention on clinical findings in positive COVID-19 CXRs. Nevertheless, tests on CXRs from different hospital services indicate that the screening performance of CXR and automatic systems is limited (area under the curve < 0.6 on emergency service CXRs). However, COVID-19 manifestations can be accurately detected when present, motivating the use of these tools for evaluating disease progression in mild to severe COVID-19 patients.
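The area under the ROC curve (AUC) quoted throughout this abstract can be computed directly from classifier scores and labels. A minimal sketch using the rank (Mann–Whitney U) formulation — equivalent to the usual metric, though not the authors' code:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC as the probability that a randomly chosen positive
    scores higher than a randomly chosen negative (ties count half)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# One positive (0.35) loses to one negative (0.4): 3 of 4 pairs won.
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

A perfectly separating classifier yields 1.0 and random scoring about 0.5, which is why the interdataset values near 0.55 indicate screening performance close to chance.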

2022

Attention-driven Spatial Transformer Network for Abnormality Detection in Chest X-Ray Images

Authors
Rocha, J; Pereira, SC; Pedrosa, J; Campilho, A; Mendonca, AM;

Publication
2022 IEEE 35TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS (CBMS)

Abstract
Backed by more powerful computational resources and optimized training routines, deep learning models have attained unprecedented performance in extracting information from chest X-ray data. Preceding other tasks, an automated abnormality detection stage can be useful to prioritize certain exams and enable a more efficient clinical workflow. However, the presence of image artifacts such as lettering often generates a harmful bias in the classifier, leading to an increase of false positive results. Consequently, healthcare would benefit from a system that selects the thoracic region of interest prior to deciding whether an image is possibly pathologic. The current work tackles this binary classification exercise using an attention-driven and spatially unsupervised Spatial Transformer Network (STN). The results indicate that the STN achieves similar results to using YOLO-cropped images, with fewer computational expenses and without the need for localization labels. More specifically, the system is able to distinguish between normal and abnormal CheXpert images with a mean AUC of 84.22%.
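The core operation of a Spatial Transformer Network, as described above, is resampling the input image through a predicted affine transform so that only the region of interest reaches the classifier. A toy sketch of that resampling step, assuming normalised coordinates and nearest-neighbour sampling for brevity (in an STN, `theta` would come from a localisation network; here it is supplied by hand, and all names are hypothetical):

```python
import numpy as np

def affine_crop(img, theta, out_h, out_w):
    """Sample an out_h x out_w output grid through a 2x3 affine matrix
    `theta` expressed in normalised [-1, 1] coordinates."""
    h, w = img.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, out_h),
                         np.linspace(-1, 1, out_w), indexing="ij")
    coords = np.stack([xs, ys, np.ones_like(xs)], axis=-1)  # (H, W, 3)
    src = coords @ theta.T                                  # source x, y per pixel
    px = np.clip(((src[..., 0] + 1) * 0.5 * (w - 1)).round().astype(int), 0, w - 1)
    py = np.clip(((src[..., 1] + 1) * 0.5 * (h - 1)).round().astype(int), 0, h - 1)
    return img[py, px]

# Toy usage: the identity transform returns the image unchanged.
img = np.arange(16, dtype=float).reshape(4, 4)
identity = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
print(np.array_equal(affine_crop(img, identity, 4, 4), img))  # → True
```

Scaling the diagonal of `theta` below 1 zooms into a sub-region, which is how an STN can crop the thoracic area without explicit localization labels.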

Supervised
thesis

2022

Automatic Eyetracking-Assisted Chest Radiography Pathology Screening

Author
Rui Manuel Azevedo dos Santos

Institution
UP-FEUP

2022

Automatic contrast generation from contrastless CTs

Author
Rúben André Dias Domingues

Institution
UP-FCUP

2022

Anatomical Segmentation in Automated Chest Radiography Screening

Author
Emanuel Ricardo Coimbra Quintas Brioso

Institution
UP-FEUP

2021

Generative Adversarial Networks in Automated Chest Radiography Screening

Author
Martim de Aguiar Quintas Penha e Sousa

Institution
UP-FEUP

2021

Multi-Modal Tasking for Skin Lesion Classification using DNN

Author
Rafaela Garrido Ribeiro de Carvalho

Institution
UP-FEUP