Publications

Publications by João Manuel Pedrosa

2024

A Cascade Approach for Automatic Segmentation of Coronary Arteries Calcification in Computed Tomography Images Using Deep Learning

Authors
Araújo, ADC; Silva, AC; Pedrosa, JM; Silva, IFS; Diniz, JOB;

Publication
WIRELESS MOBILE COMMUNICATION AND HEALTHCARE, MOBIHEALTH 2023

Abstract
One of the indicators of possible occurrence of cardiovascular disease is the amount of coronary artery calcium. Recently, approaches using new technologies such as deep learning have been used to help identify these indicators. This work proposes a segmentation method for calcification of the coronary arteries that has three steps: (1) extraction of the ROI using a U-Net with batch normalization after the convolution layers, (2) segmentation of the calcifications, and (3) removal of false positives using a modified U-Net with EfficientNet. The method uses histogram matching as preprocessing in order to increase the contrast between tissue and calcification and to normalize the different types of exams. Multiple architectures were tested; the best achieved a 96.9% F1-score, 97.1% recall and 98.3% on the OrcaScore dataset.
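The histogram-matching preprocessing step mentioned in the abstract can be sketched as a quantile mapping between intensity distributions. This is a minimal NumPy sketch, not the authors' code; the function and variable names are illustrative:

```python
import numpy as np

def match_histograms(source, reference):
    """Map source intensities so their distribution matches the reference.

    Each unique source value is assigned the reference value at the
    same cumulative quantile (the classic histogram-matching recipe).
    """
    s_vals, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    # Cumulative quantiles of both distributions
    s_quant = np.cumsum(s_counts).astype(np.float64) / source.size
    r_quant = np.cumsum(r_counts).astype(np.float64) / reference.size
    # Interpolate source quantiles onto the reference value axis
    matched = np.interp(s_quant, r_quant, r_vals)
    return matched[s_idx].reshape(source.shape)
```

In a CT preprocessing pipeline, `reference` would typically be a fixed exam chosen as the intensity template so that scans from different scanners become comparable.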

2024

Evaluating Visual Explainability in Chest X-Ray Pathology Detection

Authors
Pereira, P; Rocha, J; Pedrosa, J; Mendonça, AM;

Publication
2024 IEEE 22ND MEDITERRANEAN ELECTROTECHNICAL CONFERENCE, MELECON 2024

Abstract
Chest X-Ray (CXR) plays a vital role in diagnosing lung and heart conditions, but the high demand for CXR examinations poses challenges for radiologists. Automatic support systems can ease this burden by assisting radiologists in the image analysis process. While deep learning models have shown promise in this task, concerns persist regarding their complexity and decision-making opacity. To address this, various visual explanation techniques have been developed to elucidate the model reasoning, some of which, such as GradCAM, have received significant attention in the literature and are widely used. However, it is unclear how different explanation methods perform, how to quantitatively measure their performance, and how that performance may depend on the model architecture used and the dataset characteristics. In this work, two widely used deep classification networks - DenseNet121 and ResNet50 - are trained for multi-pathology classification on CXR, and visual explanations are then generated using GradCAM, GradCAM++, EigenGrad-CAM, Saliency maps, LRP and DeepLift. These explanation methods are then compared with radiologist annotations using previously proposed explainability evaluation metrics - intersection over union and hit rate. Furthermore, a novel method to convey visual explanations in the form of radiological written reports is proposed, allowing for a clinically-oriented explainability evaluation metric - zones score. It is shown that GradCAM++ and Saliency methods offer the most accurate explanations and that the effectiveness of visual explanations varies based on the model and corresponding input size. Additionally, the explainability performance across different CXR datasets is evaluated, highlighting that the explanation quality depends on the dataset's characteristics and annotations.
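The two explainability evaluation metrics named in the abstract, intersection over union against radiologist annotations and hit rate, can be sketched as follows. This is a minimal NumPy sketch assuming a 2D saliency map and a binary annotation mask; names and the threshold value are illustrative:

```python
import numpy as np

def explanation_iou(saliency, annotation, threshold=0.5):
    """Intersection over union between the thresholded saliency map
    and the binary radiologist annotation mask."""
    pred = saliency >= threshold
    gt = annotation.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return 1.0 if union == 0 else float(np.logical_and(pred, gt).sum() / union)

def hit_rate(saliency, annotation):
    """1.0 if the saliency peak falls inside the annotated region, else 0.0.
    Averaging this over a dataset gives the hit rate."""
    peak = np.unravel_index(np.argmax(saliency), saliency.shape)
    return float(annotation[peak] > 0)
```

IoU rewards full spatial overlap, whereas hit rate only asks whether the single most salient pixel lands inside the annotation, so the two metrics can rank explanation methods differently.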

2025

Anatomically-Guided Inpainting for Local Synthesis of Normal Chest Radiographs

Authors
Pedrosa, J; Pereira, SC; Silva, J; Mendonça, AM; Campilho, A;

Publication
DEEP GENERATIVE MODELS, DGM4MICCAI 2024

Abstract
Chest radiography (CXR) is one of the most used medical imaging modalities. Nevertheless, the interpretation of CXR images is time-consuming and subject to variability. As such, automated systems for pathology detection have been proposed and promising results have been obtained, particularly using deep learning. However, these tools suffer from poor explainability, which represents a major hurdle for their adoption in clinical practice. One proposed explainability method in CXR is through contrastive examples, i.e. by showing an alternative version of the CXR but without the lesion being investigated. While image-level normal/healthy image synthesis has been explored in the literature, normal patch synthesis via inpainting has received little attention. In this work, a method to synthesize contrastive examples in CXR based on local synthesis of normal CXR patches is proposed. Based on a contextual attention inpainting network (CAttNet), an anatomically-guided inpainting network (AnaCAttNet) is proposed that leverages anatomical information of the original CXR through segmentation to guide the inpainting for a more realistic reconstruction. A quantitative evaluation of the inpainting is performed, showing that AnaCAttNet outperforms CAttNet (FID of 0.0125 and 0.0132 respectively). Qualitative evaluation by three readers also showed that AnaCAttNet delivers superior reconstruction quality and anatomical realism. In conclusion, the proposed anatomical segmentation module for inpainting is shown to improve inpainting performance.
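The FID figures quoted above compare the distributions of features extracted from real and inpainted patches. The underlying quantity is the Fréchet distance between two Gaussians fitted to feature vectors; a minimal NumPy sketch (not the authors' evaluation code, and omitting the Inception feature extractor that FID conventionally uses) is:

```python
import numpy as np

def frechet_distance(feats_a, feats_b):
    """Frechet distance between Gaussians fitted to two feature sets
    (rows = samples, columns = feature dimensions)."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Matrix square root of cov_a via eigendecomposition (symmetric PSD)
    w, v = np.linalg.eigh(cov_a)
    sqrt_a = (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T
    # Tr((cov_a cov_b)^(1/2)), computed on the symmetric product
    # sqrt_a @ cov_b @ sqrt_a so plain symmetric eigenvalues suffice
    w_prod = np.linalg.eigvalsh(sqrt_a @ cov_b @ sqrt_a)
    tr_sqrt = np.sqrt(np.clip(w_prod, 0.0, None)).sum()
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a) + np.trace(cov_b) - 2.0 * tr_sqrt)
```

Identical feature sets give a distance of zero, and a pure mean shift contributes only the squared-norm term, which makes the metric easy to sanity-check before running it on real features.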

2023

LNDb Dataset

Authors
Pedrosa, J; Aresta, G; Ferreira, CA; Rodrigues, M; Leitão, P; Carvalho, AS; Rebelo, J; Negrão, E; Ramos, I; Cunha, A; Campilho, A;

Publication

Abstract

2022

LNDb Dataset

Authors
Pedrosa, J; Aresta, G; Ferreira, CA; Rodrigues, M; Leitão, P; Carvalho, AS; Rebelo, J; Negrão, E; Ramos, I; Cunha, A; Campilho, A;

Publication

Abstract

2025

MedShapeNet - a large-scale dataset of 3D medical shapes for computer vision

Authors
Li, JN; Zhou, ZW; Yang, JC; Pepe, A; Gsaxner, C; Luijten, G; Qu, CY; Zhang, TZ; Chen, XX; Li, WX; Wodzinski, M; Friedrich, P; Xie, KX; Jin, Y; Ambigapathy, N; Nasca, E; Solak, N; Melito, GM; Vu, VD; Memon, AR; Schlachta, C; De Ribaupierre, S; Patel, R; Eagleson, R; Chen, XJ; Mächler, H; Kirschke, JS; de la Rosa, E; Christ, PF; Li, HB; Ellis, DG; Aizenberg, MR; Gatidis, S; Küstner, T; Shusharina, N; Heller, N; Andrearczyk, V; Depeursinge, A; Hatt, M; Sekuboyina, A; Löffler, MT; Liebl, H; Dorent, R; Vercauteren, T; Shapey, J; Kujawa, A; Cornelissen, S; Langenhuizen, P; Ben Hamadou, A; Rekik, A; Pujades, S; Boyer, E; Bolelli, F; Grana, C; Lumetti, L; Salehi, H; Ma, J; Zhang, Y; Gharleghi, R; Beier, S; Sowmya, A; Garza Villarreal, EA; Balducci, T; Angeles Valdez, D; Souza, R; Rittner, L; Frayne, R; Ji, Y; Ferrari, V; Chatterjee, S; Dubost, F; Schreiber, S; Mattern, H; Speck, O; Haehn, D; John, C; Nürnberger, A; Pedrosa, J; Ferreira, C; Aresta, G; Cunha, A; Campilho, A; Suter, Y; Garcia, J; Lalande, A; Vandenbossche, V; Van Oevelen, A; Duquesne, K; Mekhzoum, H; Vandemeulebroucke, J; Audenaert, E; Krebs, C; van Leeuwen, T; Vereecke, E; Heidemeyer, H; Röhrig, R; Hölzle, F; Badeli, V; Krieger, K; Gunzer, M; Chen, JX; van Meegdenburg, T; Dada, A; Balzer, M; Fragemann, J; Jonske, F; Rempe, M; Malorodov, S; Bahnsen, FH; Seibold, C; Jaus, A; Marinov, Z; Jaeger, PF; Stiefelhagen, R; Santos, AS; Lindo, M; Ferreira, A; Alves, V; Kamp, M; Abourayya, A; Nensa, F; Hörst, F; Brehmer, A; Heine, L; Hanusrichter, Y; Wessling, M; Dudda, M; Podleska, LE; Fink, MA; Keyl, J; Tserpes, K; Kim, MS; Elhabian, S; Lamecker, H; Zukic, D; Paniagua, B; Wachinger, C; Urschler, M; Duong, L; Wasserthal, J; Hoyer, PF; Basu, O; Maal, T; Witjes, MJH; Schiele, G; Chang, TC; Ahmadi, SA; Luo, P; Menze, B; Reyes, M; Deserno, TM; Davatzikos, C; Puladi, B; Fua, P; Yuille, AL; Kleesiek, J; Egger, J;

Publication
BIOMEDICAL ENGINEERING-BIOMEDIZINISCHE TECHNIK

Abstract
Objectives: Shape is commonly used to describe objects. State-of-the-art algorithms in medical imaging are predominantly diverging from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This is evident from the growing popularity of ShapeNet (51,300 models) and Princeton ModelNet (127,915 models). However, a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments is missing. Methods: We present MedShapeNet to translate data-driven vision algorithms to medical applications and to adapt state-of-the-art vision algorithms to medical problems. As a unique feature, we directly model the majority of shapes on the imaging data of real patients. We present use cases in classifying brain tumors, skull reconstructions, multi-class anatomy completion, education, and 3D printing. Results: By now, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality, and 3D printing. Conclusions: MedShapeNet contains medical shapes from anatomy and surgical instruments and will continue to collect data for benchmarks and applications. The project page is: https://medshapenet.ikim.nrw/.
