
Publications by Carlos Alexandre Ferreira

2025

MedShapeNet - a large-scale dataset of 3D medical shapes for computer vision

Authors
Li, JN; Zhou, ZW; Yang, JC; Pepe, A; Gsaxner, C; Luijten, G; Qu, CY; Zhang, TZ; Chen, XX; Li, WX; Wodzinski, M; Friedrich, P; Xie, KX; Jin, Y; Ambigapathy, N; Nasca, E; Solak, N; Melito, GM; Vu, VD; Memon, AR; Schlachta, C; De Ribaupierre, S; Patel, R; Eagleson, R; Chen, XJ; Mächler, H; Kirschke, JS; de la Rosa, E; Christ, PF; Li, HB; Ellis, DG; Aizenberg, MR; Gatidis, S; Küstner, T; Shusharina, N; Heller, N; Andrearczyk, V; Depeursinge, A; Hatt, M; Sekuboyina, A; Löffler, MT; Liebl, H; Dorent, R; Vercauteren, T; Shapey, J; Kujawa, A; Cornelissen, S; Langenhuizen, P; Ben Hamadou, A; Rekik, A; Pujades, S; Boyer, E; Bolelli, F; Grana, C; Lumetti, L; Salehi, H; Ma, J; Zhang, Y; Gharleghi, R; Beier, S; Sowmya, A; Garza Villarreal, EA; Balducci, T; Angeles Valdez, D; Souza, R; Rittner, L; Frayne, R; Ji, Y; Ferrari, V; Chatterjee, S; Dubost, F; Schreiber, S; Mattern, H; Speck, O; Haehn, D; John, C; Nürnberger, A; Pedrosa, J; Ferreira, C; Aresta, G; Cunha, A; Campilho, A; Suter, Y; Garcia, J; Lalande, A; Vandenbossche, V; Van Oevelen, A; Duquesne, K; Mekhzoum, H; Vandemeulebroucke, J; Audenaert, E; Krebs, C; van Leeuwen, T; Vereecke, E; Heidemeyer, H; Röhrig, R; Hölzle, F; Badeli, V; Krieger, K; Gunzer, M; Chen, JX; van Meegdenburg, T; Dada, A; Balzer, M; Fragemann, J; Jonske, F; Rempe, M; Malorodov, S; Bahnsen, FH; Seibold, C; Jaus, A; Marinov, Z; Jaeger, PF; Stiefelhagen, R; Santos, AS; Lindo, M; Ferreira, A; Alves, V; Kamp, M; Abourayya, A; Nensa, F; Hörst, F; Brehmer, A; Heine, L; Hanusrichter, Y; Wessling, M; Dudda, M; Podleska, LE; Fink, MA; Keyl, J; Tserpes, K; Kim, MS; Elhabian, S; Lamecker, H; Zukic, D; Paniagua, B; Wachinger, C; Urschler, M; Duong, L; Wasserthal, J; Hoyer, PF; Basu, O; Maal, T; Witjes, MJH; Schiele, G; Chang, TC; Ahmadi, SA; Luo, P; Menze, B; Reyes, M; Deserno, TM; Davatzikos, C; Puladi, B; Fua, P; Yuille, AL; Kleesiek, J; Egger, J;

Publication
BIOMEDICAL ENGINEERING-BIOMEDIZINISCHE TECHNIK

Abstract
Objectives: Shape is commonly used to describe objects. State-of-the-art algorithms in medical imaging predominantly diverge from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used, as reflected in the growing popularity of ShapeNet (51,300 models) and Princeton ModelNet (127,915 models). However, a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments is missing. Methods: We present MedShapeNet to translate data-driven vision algorithms to medical applications and to adapt state-of-the-art vision algorithms to medical problems. As a unique feature, we directly model the majority of shapes on the imaging data of real patients. We present use cases in classifying brain tumors, skull reconstructions, multi-class anatomy completion, education, and 3D printing. Results: To date, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality, and 3D printing. Conclusions: MedShapeNet contains medical shapes from anatomy and surgical instruments and will continue to collect data for benchmarks and applications. The project page is: https://medshapenet.ikim.nrw/.
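The abstract above notes that the shapes are meant to feed vision algorithms that operate on meshes, point clouds, or voxel grids. As a minimal sketch of how a downloaded shape could be converted between those representations, the snippet below loads a mesh and derives a surface point cloud and an occupancy grid. The file name is hypothetical and the trimesh library is used purely for illustration; it is not the project's official Python API.

```python
# Minimal sketch: loading a MedShapeNet-style 3D shape and deriving the
# representations commonly used by vision algorithms (mesh, point cloud, voxels).
# "liver_0001.stl" is a hypothetical file name standing in for any shape
# downloaded via the project's web interface or Python API.
import trimesh

mesh = trimesh.load("liver_0001.stl")          # triangle mesh (vertices + faces)

# Sample a fixed-size point cloud from the mesh surface for point-based networks.
points, _ = trimesh.sample.sample_surface(mesh, count=2048)

# Voxelize the mesh for grid-based (3D CNN) models; pitch is the voxel edge length.
voxels = mesh.voxelized(pitch=1.0).matrix      # boolean occupancy grid

print(points.shape)   # (2048, 3)
print(voxels.shape)   # occupancy grid dimensions, e.g. (X, Y, Z)
```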

2024

Automated visceral and subcutaneous fat segmentation in computed tomography

Authors
Castro, R; Sousa, I; Nunes, F; Mancio, J; Fontes-Carvalho, R; Ferreira, C; Pedrosa, J;

Publication
IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING, ISBI 2024

Abstract
Cardiovascular diseases are the leading causes of death worldwide. While there are a number of cardiovascular risk indicators, recent studies have found a connection between cardiovascular risk and the accumulation and characteristics of visceral adipose tissue in the ventral cavity. The quantification of visceral adipose tissue can be easily performed in computed tomography scans, but the manual delineation of these structures is a time-consuming process subject to variability. This has motivated the development of automatic tools to achieve a faster and more precise solution. This paper explores the use of a U-Net architecture to perform ventral cavity segmentation, followed by threshold-based approaches for visceral and subcutaneous adipose tissue segmentation. Experiments with different learning rates, input image sizes and types of loss functions were employed to assess the hyperparameters most suited to this problem. In an external test set, the best ventral cavity segmentation model achieved a 0.967 Dice Score Coefficient, while the visceral and subcutaneous adipose tissue segmentations achieved Dice Score Coefficients of 0.986 and 0.995. Not only are these results competitive with the state of the art, but the interobserver variability measured in this external dataset was similar, confirming the robustness and reliability of the proposed segmentation.
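As a minimal sketch of the threshold-based step the abstract describes, the snippet below splits fat voxels into visceral and subcutaneous compartments using a ventral-cavity mask (such as a U-Net prediction) and computes a Dice Score Coefficient. The Hounsfield-unit window of -190 to -30 is a commonly cited range for adipose tissue, not necessarily the paper's exact setting, and the function names are hypothetical.

```python
# Sketch of threshold-based fat segmentation within/outside a ventral-cavity mask,
# plus a Dice Score Coefficient for evaluation. Assumes the CT volume is already
# calibrated in Hounsfield units (HU) and the cavity mask comes from a prior
# segmentation step (e.g. a U-Net). Treating all extra-cavity fat as subcutaneous
# is a simplification for illustration.
import numpy as np

def segment_fat(ct_hu: np.ndarray, cavity_mask: np.ndarray,
                hu_low: float = -190.0, hu_high: float = -30.0):
    """Return (visceral, subcutaneous) boolean masks from a HU image."""
    fat = (ct_hu >= hu_low) & (ct_hu <= hu_high)   # adipose-tissue HU window
    cavity = cavity_mask.astype(bool)
    visceral = fat & cavity                         # fat inside the ventral cavity
    subcutaneous = fat & ~cavity                    # fat outside the cavity
    return visceral, subcutaneous

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice Score Coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```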
