2021
Authors
Pedrosa, J; Aresta, G; Ferreira, C; Mendonca, A; Campilho, A;
Publication
PROCEEDINGS OF THE 15TH INTERNATIONAL JOINT CONFERENCE ON BIOMEDICAL ENGINEERING SYSTEMS AND TECHNOLOGIES (BIOIMAGING), VOL 2
Abstract
Chest radiography is one of the most ubiquitous medical imaging exams, used for the diagnosis and follow-up of a wide array of pathologies. However, chest radiography analysis is time-consuming and often challenging, even for experts. This has led to the development of numerous automatic solutions for multi-pathology detection in chest radiography, particularly after the advent of deep learning. However, the black-box nature of deep learning solutions, together with the inherent class imbalance of medical imaging problems, often leads to weak generalization capabilities, with models learning features based on spurious correlations such as the appearance and position of laterality, patient-position, equipment and hospital markers. In this study, an automatic method based on the YOLOv3 framework was thus developed for the detection of markers and written labels in chest radiography images. It is shown that this model successfully detects a large proportion of markers in chest radiography, even in datasets different from the training source, with a low rate of false positives per image. As such, this method could be used to perform automatic obscuration of markers in large datasets, so that more generic and meaningful features can be learned, thus improving classification performance and robustness.
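The obscuration step described above is straightforward once the detector has produced bounding boxes. The paper does not publish code, so the following is a minimal sketch under the assumption that detections are available as pixel-coordinate (x1, y1, x2, y2) boxes; the function name and box format are illustrative, not the authors' API.

```python
import numpy as np

def obscure_markers(image: np.ndarray, boxes) -> np.ndarray:
    """Black out detected marker/label regions.

    boxes: iterable of (x1, y1, x2, y2) pixel coordinates, as produced
    by a YOLO-style detector after rescaling to image coordinates.
    """
    out = image.copy()
    for x1, y1, x2, y2 in boxes:
        out[y1:y2, x1:x2] = 0  # replace region with background intensity
    return out

# Hypothetical example: an 8x8 "radiograph" with one detected marker box.
img = np.full((8, 8), 128, dtype=np.uint8)
masked = obscure_markers(img, [(1, 1, 4, 4)])
```

Filling with a constant background value is the simplest choice; inpainting from surrounding tissue would be a less conspicuous alternative if the obscured images are later shown to readers.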
2022
Authors
Pedrosa, J; Aresta, G; Ferreira, CA; Rodrigues, M; Leitão, P; Carvalho, AS; Rebelo, J; Negrão, E; Ramos, I; Cunha, A; Campilho, A;
Publication
Abstract
2023
Authors
Pedrosa, J; Aresta, G; Ferreira, CA; Rodrigues, M; Leitão, P; Carvalho, AS; Rebelo, J; Negrão, E; Ramos, I; Cunha, A; Campilho, A;
Publication
Abstract
2025
Authors
Li, JN; Zhou, ZW; Yang, JC; Pepe, A; Gsaxner, C; Luijten, G; Qu, CY; Zhang, TZ; Chen, XX; Li, WX; Wodzinski, M; Friedrich, P; Xie, KX; Jin, Y; Ambigapathy, N; Nasca, E; Solak, N; Melito, GM; Vu, VD; Memon, AR; Schlachta, C; De Ribaupierre, S; Patel, R; Eagleson, R; Chen, XJ; Mächler, H; Kirschke, JS; de la Rosa, E; Christ, PF; Li, HB; Ellis, DG; Aizenberg, MR; Gatidis, S; Küstner, T; Shusharina, N; Heller, N; Andrearczyk, V; Depeursinge, A; Hatt, M; Sekuboyina, A; Löffler, MT; Liebl, H; Dorent, R; Vercauteren, T; Shapey, J; Kujawa, A; Cornelissen, S; Langenhuizen, P; Ben Hamadou, A; Rekik, A; Pujades, S; Boyer, E; Bolelli, F; Grana, C; Lumetti, L; Salehi, H; Ma, J; Zhang, Y; Gharleghi, R; Beier, S; Sowmya, A; Garza Villarreal, EA; Balducci, T; Angeles Valdez, D; Souza, R; Rittner, L; Frayne, R; Ji, Y; Ferrari, V; Chatterjee, S; Dubost, F; Schreiber, S; Mattern, H; Speck, O; Haehn, D; John, C; Nürnberger, A; Pedrosa, J; Ferreira, C; Aresta, G; Cunha, A; Campilho, A; Suter, Y; Garcia, J; Lalande, A; Vandenbossche, V; Van Oevelen, A; Duquesne, K; Mekhzoum, H; Vandemeulebroucke, J; Audenaert, E; Krebs, C; van Leeuwen, T; Vereecke, E; Heidemeyer, H; Röhrig, R; Hölzle, F; Badeli, V; Krieger, K; Gunzer, M; Chen, JX; van Meegdenburg, T; Dada, A; Balzer, M; Fragemann, J; Jonske, F; Rempe, M; Malorodov, S; Bahnsen, FH; Seibold, C; Jaus, A; Marinov, Z; Jaeger, PF; Stiefelhagen, R; Santos, AS; Lindo, M; Ferreira, A; Alves, V; Kamp, M; Abourayya, A; Nensa, F; Hörst, F; Brehmer, A; Heine, L; Hanusrichter, Y; Wessling, M; Dudda, M; Podleska, LE; Fink, MA; Keyl, J; Tserpes, K; Kim, MS; Elhabian, S; Lamecker, H; Zukic, D; Paniagua, B; Wachinger, C; Urschler, M; Duong, L; Wasserthal, J; Hoyer, PF; Basu, O; Maal, T; Witjes, MJH; Schiele, G; Chang, TC; Ahmadi, SA; Luo, P; Menze, B; Reyes, M; Deserno, TM; Davatzikos, C; Puladi, B; Fua, P; Yuille, AL; Kleesiek, J; Egger, J;
Publication
BIOMEDICAL ENGINEERING-BIOMEDIZINISCHE TECHNIK
Abstract
Objectives: Shape is commonly used to describe objects. State-of-the-art algorithms in medical imaging predominantly diverge from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This is evident from the growing popularity of ShapeNet (51,300 models) and Princeton ModelNet (127,915 models). However, a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments has been missing. Methods: We present MedShapeNet to translate data-driven vision algorithms to medical applications and to adapt state-of-the-art vision algorithms to medical problems. As a unique feature, we directly model the majority of shapes on the imaging data of real patients. We present use cases in classifying brain tumors, skull reconstructions, multi-class anatomy completion, education, and 3D printing. Results: To date, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality, and 3D printing. Conclusions: MedShapeNet contains medical shapes from anatomy and surgical instruments and will continue to collect data for benchmarks and applications. The project page is: https://medshapenet.ikim.nrw/.
2024
Authors
Castro, R; Sousa, I; Nunes, F; Mancio, J; Fontes-Carvalho, R; Ferreira, C; Pedrosa, J;
Publication
IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING, ISBI 2024
Abstract
Cardiovascular diseases are the leading causes of death worldwide. While there are a number of cardiovascular risk indicators, recent studies have found a connection between cardiovascular risk and the accumulation and characteristics of visceral adipose tissue in the ventral cavity. The quantification of visceral adipose tissue can be easily performed in computed tomography scans, but the manual delineation of these structures is a time-consuming process subject to variability. This has motivated the development of automatic tools to achieve a faster and more precise solution. This paper explores the use of a U-Net architecture to perform ventral cavity segmentation, followed by threshold-based approaches for visceral and subcutaneous adipose tissue segmentation. Experiments with different learning rates, input image sizes and types of loss functions were performed to assess the hyperparameters most suited to this problem. In an external test set, the best-performing ventral cavity segmentation model achieved a 0.967 Dice Score Coefficient, while the visceral and subcutaneous adipose tissue segmentations achieved Dice Score Coefficients of 0.986 and 0.995. Not only are these results competitive with the state of the art, but the interobserver variability measured in this external dataset was similar to these results, confirming the robustness and reliability of the proposed segmentation.
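The threshold-based split described above can be sketched compactly: fat voxels are selected by a Hounsfield-unit window, then partitioned by the U-Net's ventral-cavity mask into visceral (inside the cavity) and subcutaneous (inside the body but outside the cavity) compartments. The exact HU window used in the paper is not stated here, so the commonly cited adipose range of [-190, -30] HU is an assumption, as are the function and variable names.

```python
import numpy as np

# Assumed adipose-tissue Hounsfield-unit window (not taken from the paper).
FAT_HU_MIN, FAT_HU_MAX = -190, -30

def split_adipose(ct: np.ndarray, cavity_mask: np.ndarray, body_mask: np.ndarray):
    """Threshold fat voxels, then split them by the ventral-cavity mask:
    visceral fat lies inside the cavity, subcutaneous fat outside it
    (but still inside the body)."""
    fat = (ct >= FAT_HU_MIN) & (ct <= FAT_HU_MAX)
    vat = fat & cavity_mask
    sat = fat & body_mask & ~cavity_mask
    return vat, sat

# Toy 2x2 "CT slice": three fat-range voxels, one soft-tissue voxel (40 HU).
ct = np.array([[-100.0, -100.0],
               [  40.0, -100.0]])
cavity = np.array([[True, False],
                   [False, False]])
body = np.ones_like(cavity, dtype=bool)
vat, sat = split_adipose(ct, cavity, body)
```

Because the split is a pure masking operation, the accuracy of the visceral/subcutaneous separation rests almost entirely on the cavity segmentation, which is consistent with the paper's focus on tuning the U-Net.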