
Details

  • Name

    João Manuel Pedrosa
  • Position

    Assistant Researcher
  • Since

    05 December 2018
  • Nationality

    Portugal
  • Contacts

    +351222094106
    joao.m.pedrosa@inesctec.pt
Publications

2023

Automatic Eye-Tracking-Assisted Chest Radiography Pathology Screening

Authors
Santos, R; Pedrosa, J; Mendonça, AM; Campilho, A;

Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract

2022

Computer-aided lung cancer screening in computed tomography: state-of-the-art and future perspectives

Authors
Pedrosa, J; Aresta, G; Ferreira, C;

Publication
Detection Systems in Lung Cancer and Imaging, Volume 1

Abstract

2022

Lesion-Based Chest Radiography Image Retrieval for Explainability in Pathology Detection

Authors
Pedrosa, J; Sousa, P; Silva, J; Mendonca, AM; Campilho, A;

Publication
PATTERN RECOGNITION AND IMAGE ANALYSIS (IBPRIA 2022)

Abstract
Chest radiography is one of the most common medical imaging modalities. However, chest radiography interpretation is a complex task that requires significant expertise. As such, the development of automatic systems for pathology detection has been proposed in the literature, particularly using deep learning. However, these techniques suffer from a lack of explainability, which hinders their adoption in clinical scenarios. One technique commonly used by radiologists to support and explain decisions is to search for cases with similar findings for direct comparison. However, this process is extremely time-consuming and can be prone to confirmation bias. Automatic image retrieval methods have been proposed in the literature but typically extract features from the whole image, failing to focus on the lesion in which the radiologist is interested. In order to overcome these issues, a novel framework for lesion-based image retrieval, LXIR, is proposed in this study, based on a state-of-the-art object detection framework (YOLOv5) for the detection of relevant lesions as well as feature representation of those lesions. It is shown that the proposed method can successfully identify lesions and extract features which accurately describe high-order characteristics of each lesion, allowing the retrieval of lesions of the same pathological class. Furthermore, it is shown that, in comparison to SSIM-based retrieval, a classical perceptual metric, and random retrieval of lesions, the proposed method retrieves the most relevant lesions 81% of the time according to the evaluation of two independent radiologists, compared to 42% of the time for SSIM.
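The retrieval step described in the abstract — ranking stored lesions by similarity of their feature vectors — can be sketched as a simple cosine-similarity search. The descriptors and helper below are illustrative stand-ins under assumed shapes, not the LXIR implementation:

```python
import numpy as np

def retrieve_similar_lesions(query_feat, gallery_feats, k=3):
    """Rank gallery lesions by cosine similarity to a query feature vector."""
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ q               # cosine similarity to every gallery lesion
    order = np.argsort(-sims)  # most similar first
    return order[:k], sims[order[:k]]

# Toy gallery of 4 lesion descriptors; the first is a near-duplicate of the query
query = np.array([1.0, 0.0, 0.5])
gallery = np.array([
    [0.9, 0.1, 0.5],
    [0.0, 1.0, 0.0],
    [0.5, 0.5, 0.5],
    [-1.0, 0.0, -0.5],
])
idx, scores = retrieve_similar_lesions(query, gallery, k=2)
```

In a real system the descriptors would come from the detector's feature maps rather than hand-set vectors, but the ranking step is the same.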

2022

Assessing clinical applicability of COVID-19 detection in chest radiography with deep learning

Authors
Pedrosa, J; Aresta, G; Ferreira, C; Carvalho, C; Silva, J; Sousa, P; Ribeiro, L; Mendonca, AM; Campilho, A;

Publication
SCIENTIFIC REPORTS

Abstract
The coronavirus disease 2019 (COVID-19) pandemic has impacted healthcare systems across the world. Chest radiography (CXR) can be used as a complementary method for diagnosing/following COVID-19 patients. However, experience level and workload of technicians and radiologists may affect the decision process. Recent studies suggest that deep learning can be used to assess CXRs, providing an important second opinion for radiologists and technicians in the decision process, and super-human performance in detection of COVID-19 has been reported in multiple studies. In this study, the clinical applicability of deep learning systems for COVID-19 screening was assessed by testing the performance of deep learning systems for the detection of COVID-19. Specifically, four datasets were used: (1) a collection of multiple public datasets (284,793 CXRs); (2) BIMCV dataset (16,631 CXRs); (3) COVIDGR (852 CXRs); and (4) a private dataset (6,361 CXRs). All datasets were collected retrospectively and consist of only frontal CXR views. A ResNet-18 was trained on each of the datasets for the detection of COVID-19. It is shown that a high dataset bias was present, leading to high performance in intradataset train-test scenarios (area under the curve > 0.98 on the collection of public datasets). However, significantly lower performances were obtained in interdataset train-test scenarios (area under the curve 0.55-0.84). A subset of the data was then assessed by radiologists for comparison to the automatic systems. Fine-tuning with radiologist annotations significantly increased performance across datasets (area under the curve 0.61-0.88) and improved the attention on clinical findings in positive COVID-19 CXRs. Nevertheless, tests on CXRs from different hospital services indicate that the screening performance of CXR and automatic systems is limited (area under the curve < 0.6 on emergency service CXRs). However, COVID-19 manifestations can be accurately detected when present, motivating the use of these tools for evaluating disease progression in mild to severe COVID-19 patients.
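The area-under-the-curve figures quoted throughout the abstract can be understood through the rank-based (Mann-Whitney) definition of the AUC: the probability that a randomly chosen positive case scores above a randomly chosen negative one. The sketch below is a generic illustration of that computation, not the study's evaluation code:

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of positive/negative pairs ranked correctly (ties count 0.5)."""
    labels = np.asarray(labels, dtype=bool)
    pos = np.asarray(scores)[labels]
    neg = np.asarray(scores)[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: a classifier that ranks 8 of the 9 positive/negative pairs correctly
y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
result = auc(y, s)
```

An AUC of 0.5 corresponds to chance-level ranking, which is why values of 0.55-0.6 in the interdataset and emergency-service settings indicate limited screening performance.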

2022

Attention-driven Spatial Transformer Network for Abnormality Detection in Chest X-Ray Images

Authors
Rocha, J; Pereira, SC; Pedrosa, J; Campilho, A; Mendonca, AM;

Publication
2022 IEEE 35TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS (CBMS)

Abstract
Backed by more powerful computational resources and optimized training routines, deep learning models have attained unprecedented performance in extracting information from chest X-ray data. Preceding other tasks, an automated abnormality detection stage can be useful to prioritize certain exams and enable a more efficient clinical workflow. However, the presence of image artifacts such as lettering often generates a harmful bias in the classifier, leading to an increase of false positive results. Consequently, healthcare would benefit from a system that selects the thoracic region of interest prior to deciding whether an image is possibly pathologic. The current work tackles this binary classification exercise using an attention-driven and spatially unsupervised Spatial Transformer Network (STN). The results indicate that the STN achieves similar results to using YOLO-cropped images, with fewer computational expenses and without the need for localization labels. More specifically, the system is able to distinguish between normal and abnormal CheXpert images with a mean AUC of 84.22%.
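The core operation of a Spatial Transformer Network — warping a sampling grid by an affine transform so the classifier sees only the selected region — can be sketched in NumPy. Here `theta` is a hand-set identity matrix, not the learned attention-driven transform the paper describes:

```python
import numpy as np

def affine_sample(image, theta, out_h, out_w):
    """Sample `image` on a grid warped by the 2x3 affine matrix `theta`
    (nearest-neighbour sampling; coordinates normalised to [-1, 1])."""
    ys = np.linspace(-1, 1, out_h)
    xs = np.linspace(-1, 1, out_w)
    gy, gx = np.meshgrid(ys, xs, indexing="ij")
    # Homogeneous output coordinates, one column per output pixel
    grid = np.stack([gx.ravel(), gy.ravel(), np.ones(out_h * out_w)])
    sx, sy = theta @ grid  # source coordinates in the input image
    h, w = image.shape
    rows = np.clip(np.round((sy + 1) * (h - 1) / 2), 0, h - 1).astype(int)
    cols = np.clip(np.round((sx + 1) * (w - 1) / 2), 0, w - 1).astype(int)
    return image[rows, cols].reshape(out_h, out_w)

# The identity transform reproduces the image; a scale below 1 would zoom
# into the centre, mimicking a crop of the thoracic region of interest.
img = np.arange(16, dtype=float).reshape(4, 4)
identity = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
out = affine_sample(img, identity, 4, 4)
```

In an STN the six entries of `theta` are regressed by a small localisation network and the sampling is differentiable (bilinear), so the crop is learned without localization labels.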

Supervised Theses

2022

Anatomical Segmentation in Automated Chest Radiography Screening

Author
Emanuel Ricardo Coimbra Quintas Brioso

Institution
UP-FEUP

2022

Automatic Eyetracking-Assisted Chest Radiography Pathology Screening

Author
Rui Manuel Azevedo dos Santos

Institution
UP-FEUP

2022

Automatic contrast generation from contrastless CTs

Author
Rúben André Dias Domingues

Institution
UP-FCUP

2021

Multi-Modal Tasking for Skin Lesion Classification using Deep Neural Networks

Author
Rafaela Garrido Ribeiro de Carvalho

Institution
UP-FEUP

2021

Generative Adversarial Networks in Automated Chest Radiography Screening

Author
Martim de Aguiar Quintas Penha e Sousa

Institution
UP-FEUP