2023
Authors
Reyna, A; Kiarashi, Y; Elola, A; Oliveira, J; Renna, F; Gu, A; Perez Alday, A; Sadr, N; Sharma, A; Kpodonu, J; Mattos, S; Coimbra, T; Sameni, R; Rad, AB; Clifford, D;
Publication
PLOS Digital Health
Abstract
Cardiac auscultation is an accessible diagnostic screening tool that can help to identify patients with heart murmurs, who may need follow-up diagnostic screening and treatment for abnormal cardiac function. However, experts are needed to interpret the heart sounds, limiting the accessibility of cardiac auscultation in resource-constrained environments. Therefore, the George B. Moody PhysioNet Challenge 2022 invited teams to develop algorithmic approaches for detecting heart murmurs and abnormal cardiac function from phonocardiogram (PCG) recordings of heart sounds. For the Challenge, we sourced 5272 PCG recordings from 1452 primarily pediatric patients in rural Brazil, and we asked teams to implement diagnostic screening algorithms for detecting heart murmurs and abnormal cardiac function from the recordings. We required the participants to submit the complete training and inference code for their algorithms, improving the transparency, reproducibility, and utility of their work. We also devised an evaluation metric that considered the costs of screening, diagnosis, misdiagnosis, and treatment, allowing us to investigate the benefits of algorithmic diagnostic screening and facilitate the development of more clinically relevant algorithms. We received 779 algorithms from 87 teams during the Challenge, resulting in 53 working codebases for detecting heart murmurs and abnormal cardiac function from PCG recordings. These algorithms represent a diversity of approaches from both academia and industry, including methods that use more traditional machine learning techniques with engineered clinical and statistical features as well as methods that rely primarily on deep learning models to discover informative features. The use of heart sound recordings for identifying heart murmurs and abnormal cardiac function allowed us to explore the potential of algorithmic approaches for providing more accessible diagnostic screening in resource-constrained environments. The submission of working, open-source algorithms and the use of novel evaluation metrics supported the reproducibility, generalizability, and clinical relevance of the research from the Challenge.
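To illustrate the cost-aware evaluation described in this abstract, the sketch below computes a total screening cost from confusion-matrix counts. The cost weights, the function name, and the assumption that every algorithm-positive patient receives expert follow-up are illustrative only; the official Challenge metric and its coefficients are defined in the paper itself.

```python
# Hypothetical sketch of a cost-aware screening metric (illustrative weights,
# not the official 2022 Challenge formula).

def screening_cost(tp, fp, fn, tn,
                   c_algorithm=1.0,     # assumed cost of algorithmic screening per patient
                   c_expert=10.0,       # assumed cost of expert follow-up per referral
                   c_treatment=100.0,   # assumed cost of treating a confirmed case
                   c_missed=1000.0):    # assumed cost of a missed abnormal case
    """Total cost of screening a cohort, given confusion-matrix counts.

    Positives (tp + fp) are referred to an expert; true positives are
    treated; false negatives incur the cost of a missed diagnosis.
    """
    n = tp + fp + fn + tn
    referred = tp + fp
    return (c_algorithm * n          # every recording is screened algorithmically
            + c_expert * referred    # referred patients see a human expert
            + c_treatment * tp       # confirmed cases are treated
            + c_missed * fn)         # missed cases are the most expensive outcome

# Example: compare two hypothetical screeners on the same cohort.
print(screening_cost(tp=40, fp=60, fn=10, tn=890))   # permissive screener
print(screening_cost(tp=30, fp=20, fn=20, tn=930))   # conservative screener
```

Under such a metric, a lower total cost is better, so an algorithm can trade extra expert referrals against the much larger penalty of missed abnormal cases.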
2024
Authors
Martins, ML; Coimbra, MT; Renna, F;
Publication
32nd European Signal Processing Conference, EUSIPCO 2024
Abstract
This paper is concerned with semantic segmentation in domain-specific contexts, such as those pertaining to biology, physics, or materials science. Under these circumstances, the objects of interest are often irregular and have fine structure, i.e., detail at arbitrarily small scales. Empirically, they are often understood as self-similar processes, a concept grounded in multifractal analysis. We find that this multifractal behaviour is carried through a convolutional neural network (CNN) if we view its channel-wise responses as self-similar measures. We set forth a function of the local singularities of each measure, which we call Singularity Strength Recalibration (SSR), to modulate the response at each layer of the CNN. SSR is a lightweight, plug-in module for CNNs. We observe that it improves a baseline U-Net in two biomedical tasks, skin lesion and colonic polyp segmentation, by an average of 1.36% and 1.12% Dice score, respectively. To the best of our knowledge, this is the first time multifractal analysis is conducted end-to-end for semantic segmentation.
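As a rough illustration of the idea above (modulating a CNN's channel responses by a function of their local singularity strengths), the following PyTorch sketch estimates a crude per-location singularity exponent from box averages at two scales and uses it to gate the features. The two-scale log-ratio estimator, the sigmoid gate, and the module name are assumptions for illustration, not the paper's exact SSR formulation.

```python
# Minimal sketch of a channel-recalibration module inspired by SSR.
# The two-scale log-ratio estimate of local singularity strength and the
# sigmoid gating are illustrative assumptions, not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingularityRecalibration(nn.Module):
    def __init__(self, channels, eps=1e-6):
        super().__init__()
        self.eps = eps
        # Learnable per-channel affine on the estimated singularity strength.
        self.scale = nn.Parameter(torch.ones(channels))
        self.shift = nn.Parameter(torch.zeros(channels))

    def forward(self, x):                      # x: (B, C, H, W) feature maps
        m = x.abs() + self.eps                 # treat responses as a positive measure
        # Local box averages of the measure at two dyadic-ish scales.
        mu1 = F.avg_pool2d(m, 3, stride=1, padding=1)
        mu2 = F.avg_pool2d(m, 7, stride=1, padding=3)
        # Crude local singularity strength: slope of log-measure across scales.
        alpha = (mu2.log() - mu1.log()) / torch.log(torch.tensor(7.0 / 3.0))
        gate = torch.sigmoid(self.scale[None, :, None, None] * alpha
                             + self.shift[None, :, None, None])
        return x * gate                        # modulate the channel responses

# Plug-in usage after any convolutional block:
feat = torch.relu(torch.randn(2, 16, 64, 64))
print(SingularityRecalibration(16)(feat).shape)  # torch.Size([2, 16, 64, 64])
```

Because the module only rescales existing responses, it can be dropped between the blocks of an existing U-Net without changing feature shapes, which matches the "lightweight, plug-in" character described in the abstract.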
2021
Authors
Cardoso, AS; Renna, F; Moreno-Llorca, R; Alcaraz-Segura, D; Tabik, S; Ladle, RJ; Vaz, AS;
Publication
Abstract
2025
Authors
Antonelli, G; Libanio, D; De Groof, AJ; van der Sommen, F; Mascagni, P; Sinonquel, P; Abdelrahim, M; Ahmad, O; Berzin, T; Bhandari, P; Bretthauer, M; Coimbra, M; Dekker, E; Ebigbo, A; Eelbode, T; Frazzoni, L; Gross, SA; Ishihara, R; Kaminski, MF; Messmann, H; Mori, Y; Padoy, N; Parasa, S; Pilonis, ND; Renna, F; Repici, A; Simsek, C; Spadaccini, M; Bisschops, R; Bergman, JJGHM; Hassan, C; Ribeiro, MD;
Publication
Gut
Abstract
Artificial intelligence (AI) holds significant potential for enhancing the quality of gastrointestinal (GI) endoscopy, but the adoption of AI in clinical practice is hampered by the lack of rigorous standardisation and development methodology ensuring generalisability. The aim of the Quality Assessment of pre-clinical AI studies in Diagnostic Endoscopy (QUAIDE) Explanation and Checklist was to develop recommendations for the standardised design and reporting of preclinical AI studies in GI endoscopy. The recommendations were developed based on a formal consensus approach with an international multidisciplinary panel of 32 experts among endoscopists and computer scientists. The Delphi methodology was employed to achieve consensus on statements, with a predetermined threshold of 80% agreement. A maximum of three rounds of voting were permitted. Consensus was reached on 18 key recommendations, covering four key domains: data acquisition and annotation (6 statements), outcome reporting (3 statements), experimental setup and algorithm architecture (4 statements), and result presentation and interpretation (5 statements). QUAIDE provides recommendations on how to properly design (1. Methods, statements 1-14), present results (2. Results, statements 15-16), and integrate and interpret the obtained results (3. Discussion, statements 17-18). The QUAIDE framework offers practical guidance for authors, readers, editors and reviewers involved in AI preclinical studies in GI endoscopy, aiming to improve design and reporting, thereby promoting research standardisation and accelerating the translation of AI innovations into clinical practice.
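The Delphi procedure summarised above (a predetermined 80% agreement threshold and at most three voting rounds) reduces to a short loop. Below is an illustrative Python sketch; the boolean vote encoding and the carry-over of unresolved statements between rounds are assumptions for illustration, not details taken from the QUAIDE process.

```python
# Illustrative sketch of a Delphi-style consensus loop: 80% agreement
# threshold, at most three rounds; the vote encoding is assumed.
import random

THRESHOLD = 0.80
MAX_ROUNDS = 3

def consensus_reached(votes):
    """votes: list of booleans (agree / disagree) from the expert panel."""
    return sum(votes) / len(votes) >= THRESHOLD

def run_delphi(statements, collect_votes):
    """collect_votes(statement, round_no) -> list of booleans."""
    accepted, pending = [], list(statements)
    for round_no in range(1, MAX_ROUNDS + 1):
        still_pending = []
        for s in pending:
            if consensus_reached(collect_votes(s, round_no)):
                accepted.append(s)
            else:
                still_pending.append(s)   # revised and revoted next round
        pending = still_pending
        if not pending:
            break
    return accepted, pending              # pending = no consensus after 3 rounds

# Example with a dummy panel of 32 experts voting at random.
random.seed(0)
panel = lambda s, r: [random.random() < 0.85 for _ in range(32)]
accepted, unresolved = run_delphi(range(10), panel)
print(len(accepted), len(unresolved))
```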
2024
Authors
Martins, ML; Coimbra, MT; Renna, F;
Publication
Abstract
2024
Authors
Magalhães, B; Pedrosa, J; Renna, F; Paredes, H; Filipe, V;
Publication
IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2024, Lisbon, Portugal, December 3-6, 2024
Abstract
Coronary artery disease (CAD) remains a leading cause of morbidity and mortality worldwide, underscoring the need for accurate and reliable diagnostic tools. While AI-driven models have shown significant promise in identifying CAD through imaging techniques, their 'black box' nature often hinders clinical adoption due to a lack of interpretability. In response, this paper proposes a novel approach to image captioning specifically tailored for CAD diagnosis, aimed at enhancing the transparency and usability of AI systems. Utilizing the COCA dataset, which comprises gated coronary CT images along with ground-truth (GT) segmentation annotations, we introduce a hybrid model architecture that combines a Vision Transformer (ViT) for feature extraction with a Generative Pretrained Transformer (GPT) for generating clinically relevant textual descriptions. This work builds on a previously developed 3D Convolutional Neural Network (CNN) for coronary artery segmentation, leveraging its accurate delineations of calcified regions as critical inputs to the captioning process. By incorporating these segmentation outputs, our approach not only focuses on accurately identifying and describing calcified regions within the coronary arteries but also ensures that the generated captions are clinically meaningful and reflective of key diagnostic features such as location, severity, and artery involvement. This methodology provides medical practitioners with clear, context-rich explanations of AI-generated findings, thereby bridging the gap between advanced AI technologies and practical clinical applications. Furthermore, our work underscores the critical role of Explainable AI (XAI) in fostering trust, improving decision-making, and enhancing the efficacy of AI-driven diagnostics, paving the way for future advancements in the field.
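As an architectural illustration of the ViT-encoder/GPT-style-decoder pairing described above, the following PyTorch sketch attaches an autoregressive caption decoder to a patch-embedding encoder, with the segmentation mask fused as an extra input channel. All dimensions, the 2D slice-level formulation, the patch size, and the mask-fusion scheme are assumptions for illustration, not the paper's configuration.

```python
# Hypothetical ViT-encoder / GPT-style-decoder captioner; dimensions, patch
# size, and mask-as-extra-channel fusion are illustrative assumptions.
import torch
import torch.nn as nn

class CTCaptioner(nn.Module):
    def __init__(self, vocab_size=4096, d_model=256, patch=16):
        super().__init__()
        # CT slice + segmentation mask stacked as two input channels.
        self.patch_embed = nn.Conv2d(2, d_model, kernel_size=patch, stride=patch)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=4)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, image, mask, tokens):
        # image, mask: (B, 1, H, W); tokens: (B, T) caption token ids.
        x = self.patch_embed(torch.cat([image, mask], dim=1))   # (B, D, h, w)
        memory = self.encoder(x.flatten(2).transpose(1, 2))     # (B, h*w, D)
        tgt = self.tok_embed(tokens)                            # (B, T, D)
        # Causal mask so each position attends only to earlier tokens.
        T = tokens.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        out = self.decoder(tgt, memory, tgt_mask=causal)
        return self.lm_head(out)                                # next-token logits

# Smoke test with random data.
model = CTCaptioner()
logits = model(torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128),
               torch.randint(0, 4096, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 4096])
```

Feeding the segmentation output in as an input channel is one simple way to condition the captions on the delineated calcified regions; the paper's actual fusion of the 3D CNN outputs may differ.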