
About

João Pedrosa was born in Figueira da Foz, Portugal, in 1990. He received the M.Sc. degree in biomedical engineering from the University of Porto, Porto, Portugal, in 2013 and the Ph.D. degree in biomedical sciences from KU Leuven, Leuven, Belgium, in 2018. He is currently a postdoctoral researcher at INESC TEC, Porto, Portugal, working on image processing and computer-aided diagnosis in lung cancer CT screening and diabetic retinopathy. His research interests include medical image acquisition and processing, machine learning, and applied research for improved patient care.


Details

  • Name

    João Manuel Pedrosa
  • Role

    Assistant Researcher
  • Since

    5th December 2018
  • Nationality

    Portuguese
  • Contacts

    +351222094106
    joao.m.pedrosa@inesctec.pt
Publications

2022

Computer-aided lung cancer screening in computed tomography: state-of-the-art and future perspectives

Authors
Pedrosa, J; Aresta, G; Ferreira, C;

Publication
Detection Systems in Lung Cancer and Imaging, Volume 1

2021

LNDb Challenge on automatic lung cancer patient management

Authors
Pedrosa, J; Aresta, G; Ferreira, C; Atwal, G; Phoulady, HA; Chen, XY; Chen, RZ; Li, JL; Wang, LS; Galdran, A; Bouchachia, H; Kaluva, KC; Vaidhya, K; Chunduru, A; Tarai, S; Nadimpalli, SPP; Vaidya, S; Kim, I; Rassadin, A; Tian, ZH; Sun, ZW; Jia, YZ; Men, XJ; Ramos, I; Cunha, A; Campilho, A;

Publication
Medical Image Analysis

2021

Extracting neuronal activity signals from microscopy recordings of contractile tissue using B-spline Explicit Active Surfaces (BEAS) cell tracking

Authors
Kazwiny, Y; Pedrosa, J; Zhang, ZQ; Boesmans, W; D'hooge, J; Vanden Berghe, P;

Publication
SCIENTIFIC REPORTS

Abstract
Ca2+ imaging is a widely used microscopy technique to simultaneously study cellular activity in multiple cells. The desired information consists of cell-specific time series of pixel intensity values, in which the fluorescence intensity represents cellular activity. For static scenes, cellular signal extraction is straightforward; however, multiple analysis challenges arise in recordings of contractile tissues, like those of the enteric nervous system (ENS). This layer of critical neurons, embedded within the muscle layers of the gut wall, shows optical overlap between neighboring neurons, intensity changes due to cell activity, and constant movement. These challenges reduce the applicability of classical segmentation techniques and of traditional stack alignment and region-of-interest (ROI) selection workflows. Therefore, a signal extraction method is needed that can deal with moving cells and is insensitive to large intensity changes in consecutive frames. Here we propose a b-spline active contour method to delineate and track neuronal cell bodies based on local and global energy terms. We develop both a single- and a double-contour approach. The latter takes advantage of the appearance of GCaMP-expressing cells, tracking the nucleus' boundaries together with the cytoplasmic contour and providing a stable delineation of neighboring, overlapping cells despite movement and intensity changes. The tracked contours can also serve as landmarks to relocate additional, manually selected ROIs. This improves the total yield of efficacious cell tracking and allows signal extraction from other cell compartments like neuronal processes. Compared to manual delineation and other segmentation methods, the proposed method can track cells during large tissue deformations and high-intensity changes such as during neuronal firing events, while preserving the shape of the extracted Ca2+ signal.
The analysis package represents a significant improvement to available Ca2+ imaging analysis workflows for ENS recordings and other systems where movement challenges traditional Ca2+ signal extraction workflows.

2021

A multi-task CNN approach for lung nodule malignancy classification and characterization

Authors
Marques, S; Schiavo, F; Ferreira, CA; Pedrosa, J; Cunha, A; Campilho, A;

Publication
Expert Systems with Applications

2021

Automated analysis of 3D-echocardiography using spatially registered patient-specific CMR meshes

Authors
Zhao, D; Ferdian, E; Maso Talou, G; Quill, G; Gilbert, K; Babarenda Gamage, T; Wang, V; Pedrosa, J; D'hooge, J; Legget, M; Ruygrok, P; Doughty, R; Camara, O; Young, A; Nash, M;

Publication
European Heart Journal - Cardiovascular Imaging

Abstract
Funding: Public grant(s) – National budget only. Main funding sources: National Heart Foundation (NHF) of New Zealand; Health Research Council (HRC) of New Zealand.

Artificial intelligence shows considerable promise for automated analysis and interpretation of medical images, particularly in the domain of cardiovascular imaging. While application to cardiac magnetic resonance (CMR) has demonstrated excellent results, automated analysis of 3D echocardiography (3D-echo) remains challenging, due to the lower signal-to-noise ratio (SNR), signal dropout, and greater interobserver variability in manual annotations. As 3D-echo is becoming increasingly widespread, robust analysis methods will substantially benefit patient evaluation. We sought to leverage the high SNR of CMR to provide training data for a convolutional neural network (CNN) capable of analysing 3D-echo. We imaged 73 participants (53 healthy volunteers, 20 patients with non-ischaemic cardiac disease) under both CMR and 3D-echo (<1 hour between scans). 3D models of the left ventricle (LV) were independently constructed from CMR and 3D-echo, and used to spatially align the image volumes using least squares fitting to a cardiac template. The resultant transformation was used to map the CMR mesh to the 3D-echo image. Alignment of mesh and image was verified through volume slicing and visual inspection (Fig. 1) for 120 paired datasets (including 47 rescans), each at end-diastole and end-systole. 100 datasets (80 for training, 20 for validation) were used to train a shallow CNN for mesh extraction from 3D-echo, optimised with a composite loss function consisting of normalised Euclidean distance (for 290 mesh points) and volume. Data augmentation was applied in the form of rotations and tilts (<15 degrees) about the long axis. The network was tested on the remaining 20 datasets (different participants) of varying image quality (Tab. 1).
For comparison, corresponding LV measurements from conventional manual analysis of 3D-echo and associated interobserver variability (for two observers) were also estimated. Initial results indicate that the use of embedded CMR meshes as training data for 3D-echo analysis is a promising alternative to manual analysis, with improved accuracy and precision compared with conventional methods. Further optimisations and a larger dataset are expected to improve network performance.

Tab. 1. LV mass and volume differences (means ± standard deviations) for 20 test cases (n = 20). Algorithm error: CNN – CMR (as ground truth).

                     LV EDV (ml)    LV ESV (ml)    LV EF (%)    LV mass (g)
Ground truth CMR     150.5 ± 29.5   57.9 ± 12.7    61.5 ± 3.4   128.1 ± 29.8
Algorithm error      -13.3 ± 15.7   -1.4 ± 7.6     -2.8 ± 5.5   0.1 ± 20.9
Manual error         -30.1 ± 21.0   -15.1 ± 12.4   3.0 ± 5.0    Not available
Interobserver error  19.1 ± 14.3    14.4 ± 7.6     -6.4 ± 4.8   Not available

Fig. 1. CMR mesh registered to 3D-echo.

Supervised Theses

2021

Generative Adversarial Networks in Automated Chest Radiography Screening

Author
Martim de Aguiar Quintas Penha e Sousa

Institution
UP-FEUP

2021

Detection of Pulmonary Lesions for COVID-19 Screening

Author
Joana Soares Maximino

Institution
UP-FCUP

2021

Multi-Modal Tasking for Skin Lesion Classification using Deep Neural Networks

Author
Rafaela Garrido Ribeiro de Carvalho

Institution
UP-FEUP
