
Publications by Francesco Renna

2023

Heart murmur detection from phonocardiogram recordings: The George B. Moody PhysioNet Challenge 2022

Authors
Reyna, A; Kiarashi, Y; Elola, A; Oliveira, J; Renna, F; Gu, A; Perez Alday, A; Sadr, N; Sharma, A; Kpodonu, J; Mattos, S; Coimbra, T; Sameni, R; Rad, AB; Clifford, D;

Publication
PLOS Digital Health

Abstract
Cardiac auscultation is an accessible diagnostic screening tool that can help to identify patients with heart murmurs, who may need follow-up diagnostic screening and treatment for abnormal cardiac function. However, experts are needed to interpret the heart sounds, limiting the accessibility of cardiac auscultation in resource-constrained environments. Therefore, the George B. Moody PhysioNet Challenge 2022 invited teams to develop algorithmic approaches for detecting heart murmurs and abnormal cardiac function from phonocardiogram (PCG) recordings of heart sounds. For the Challenge, we sourced 5272 PCG recordings from 1452 primarily pediatric patients in rural Brazil, and we invited teams to implement diagnostic screening algorithms for detecting heart murmurs and abnormal cardiac function from the recordings. We required the participants to submit the complete training and inference code for their algorithms, improving the transparency, reproducibility, and utility of their work. We also devised an evaluation metric that considered the costs of screening, diagnosis, misdiagnosis, and treatment, allowing us to investigate the benefits of algorithmic diagnostic screening and facilitate the development of more clinically relevant algorithms. We received 779 algorithms from 87 teams during the Challenge, resulting in 53 working codebases for detecting heart murmurs and abnormal cardiac function from PCG recordings. These algorithms represent a diversity of approaches from both academia and industry, including methods that use more traditional machine learning techniques with engineered clinical and statistical features as well as methods that rely primarily on deep learning models to discover informative features. The use of heart sound recordings for identifying heart murmurs and abnormal cardiac function allowed us to explore the potential of algorithmic approaches for providing more accessible diagnostic screening in resource-constrained environments. The submission of working, open-source algorithms and the use of novel evaluation metrics supported the reproducibility, generalizability, and clinical relevance of the research from the Challenge. © 2023 Reyna et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
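For readers curious how a cost-aware evaluation of this kind might look in code, the following is a minimal sketch of a cost-weighted screening score, assuming a simple linear cost model with placeholder cost values; it is not the official Challenge metric.

```python
# Illustrative sketch only: a simplified cost-based screening score in the spirit
# described in the abstract, NOT the official Challenge metric. The linear cost
# model and all cost values below are assumptions for illustration.

def total_screening_cost(tp, fp, fn, tn,
                         c_algorithm=10.0,      # assumed cost of algorithmic screening per patient
                         c_expert=500.0,        # assumed cost of expert follow-up / diagnosis
                         c_treatment=10_000.0,  # assumed cost of treating a confirmed patient
                         c_missed=50_000.0):    # assumed downstream cost of a missed abnormal patient
    """Return an assumed total cost for a screening algorithm's confusion counts.

    tp, fp: patients flagged by the algorithm (true/false positives), referred to an expert;
            confirmed cases (tp) also incur treatment cost.
    fn:     abnormal patients the algorithm misses (misdiagnosis cost).
    tn:     healthy patients correctly cleared (screening cost only).
    """
    n = tp + fp + fn + tn
    cost = n * c_algorithm            # everyone is screened algorithmically
    cost += (tp + fp) * c_expert      # flagged patients receive expert confirmation
    cost += tp * c_treatment          # confirmed abnormal patients are treated
    cost += fn * c_missed             # missed patients incur late-diagnosis costs
    return cost

# Example: compare two hypothetical algorithms on the same 1000-patient cohort.
print(total_screening_cost(tp=80, fp=150, fn=20, tn=750))
print(total_screening_cost(tp=90, fp=400, fn=10, tn=500))
```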

2024

Singularity Strength Re-calibration of Fully Convolutional Neural Networks for Biomedical Image Segmentation

Authors
Martins, ML; Coimbra, MT; Renna, F;

Publication
32ND EUROPEAN SIGNAL PROCESSING CONFERENCE, EUSIPCO 2024

Abstract
This paper is concerned with semantic segmentation within domain-specific contexts, such as those pertaining to biology, physics, or material science. Under these circumstances, the objects of interest are often irregular and have fine structure, i.e., detail at arbitrarily small scales. Empirically, they are often understood as self-similar processes, a concept grounded in multifractal analysis. We find that this multifractal behaviour is carried through a convolutional neural network (CNN) if we view its channel-wise responses as self-similar measures. We set forth a function of the local singularities of each measure, which we call Singularity Strength Recalibration (SSR), to modulate the response at each layer of the CNN. SSR is a lightweight, plug-in module for CNNs. We observe that it improves a baseline U-Net in two biomedical tasks, skin lesion and colonic polyp segmentation, by an average of 1.36% and 1.12% Dice score, respectively. To the best of our knowledge, this is the first time multifractal analysis is conducted end-to-end for semantic segmentation.
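As a rough illustration of the recalibration idea, the sketch below implements a simplified channel-gating module in PyTorch, assuming a crude singularity-strength proxy based on the slope of log-pooled responses across scales; it is not the authors' exact SSR formulation.

```python
# Illustrative sketch only: a simplified channel-recalibration module inspired by
# the singularity-strength idea in the abstract. The slope-based "singularity
# strength" proxy and the gating MLP are assumptions, not the paper's SSR module.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingularityRecalibration(nn.Module):
    def __init__(self, channels, scales=(1, 2, 4, 8), hidden=16):
        super().__init__()
        self.scales = scales
        # Small gating network mapping each channel's slope to a multiplicative gate.
        self.gate = nn.Sequential(
            nn.Linear(channels, hidden), nn.ReLU(),
            nn.Linear(hidden, channels), nn.Sigmoid(),
        )

    def forward(self, x):                                   # x: (B, C, H, W)
        eps = 1e-6
        log_s, log_mu = [], []
        for s in self.scales:
            # Coarse-grain |x| at scale s and take the spatial mean per channel.
            pooled = F.avg_pool2d(x.abs(), kernel_size=s)
            log_mu.append(torch.log(pooled.mean(dim=(2, 3)) + eps))   # (B, C)
            log_s.append(torch.log(torch.tensor(float(s), device=x.device)))
        log_s = torch.stack(log_s)                                    # (S,)
        log_mu = torch.stack(log_mu, dim=-1)                          # (B, C, S)
        # Least-squares slope of log(mean) vs log(scale): a crude per-channel
        # proxy for local singularity strength.
        s_centered = log_s - log_s.mean()
        slope = (log_mu * s_centered).sum(dim=-1) / ((s_centered ** 2).sum() + eps)
        g = self.gate(slope)                                          # (B, C) gates in (0, 1)
        return x * g.unsqueeze(-1).unsqueeze(-1)

# Usage: drop the module after a convolutional block, e.g. in a U-Net encoder stage.
feats = torch.randn(2, 64, 128, 128)
print(SingularityRecalibration(64)(feats).shape)   # torch.Size([2, 64, 128, 128])
```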

2021

Deep learning assessment of cultural ecosystem services from social media images

Authors
Cardoso, AS; Renna, F; Moreno-Llorca, R; Alcaraz-Segura, D; Tabik, S; Ladle, RJ; Vaz, AS;

Publication

Abstract
Crowdsourced social media data has become popular in the assessment of cultural ecosystem services (CES). Advances in deep learning show great potential for the timely assessment of CES at large scales. Here, we describe a procedure for automating the assessment of image elements pertaining to CES from social media. We focus on a binary (natural, human) and a multiclass (posing, species, nature, landscape, human activities, human structures) classification of those elements using two Convolutional Neural Networks (CNNs; VGG16 and ResNet152) with weights from two large datasets (Places365 and ImageNet) and from our own dataset. We train those CNNs on Flickr and Wikiloc images from the Peneda-Gerês region (Portugal) and evaluate their transferability to wider areas, using Sierra Nevada (Spain) as a test area. CNNs trained for Peneda-Gerês performed well, with results for the binary classification (F1-score > 80%) exceeding those for the multiclass classification (> 60%). CNNs pre-trained with Places365 and ImageNet data performed significantly better than those pre-trained with our data. Model performance decreased when transferred to Sierra Nevada, but remained satisfactory (> 60%). The combination of manual annotations, freely available CNNs, and pre-trained local datasets thereby shows great relevance for supporting automated CES assessments from social media.
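As an illustration of the transfer-learning setup described above, the sketch below fine-tunes an ImageNet-pretrained ResNet152 for the binary natural/human task, assuming a hypothetical folder layout and standard torchvision transforms; the VGG16 and Places365 variants used in the paper are not shown.

```python
# Illustrative sketch only: fine-tuning an ImageNet-pretrained ResNet152 for the
# binary natural/human classification described in the abstract. The dataset path,
# transforms and training settings are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets

# Pretrained backbone; freeze the feature extractor and replace the classifier head.
model = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)     # binary: natural vs. human

transform = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
# Hypothetical folder layout: flickr_images/{natural,human}/*.jpg
train_set = datasets.ImageFolder("flickr_images", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:                     # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```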

2025

QUAIDE - Quality assessment of AI preclinical studies in diagnostic endoscopy

Authors
Antonelli, G; Libanio, D; De Groof, AJ; van der Sommen, F; Mascagni, P; Sinonquel, P; Abdelrahim, M; Ahmad, O; Berzin, T; Bhandari, P; Bretthauer, M; Coimbra, M; Dekker, E; Ebigbo, A; Eelbode, T; Frazzoni, L; Gross, SA; Ishihara, R; Kaminski, MF; Messmann, H; Mori, Y; Padoy, N; Parasa, S; Pilonis, ND; Renna, F; Repici, A; Simsek, C; Spadaccini, M; Bisschops, R; Bergman, JJGHM; Hassan, C; Ribeiro, MD;

Publication
GUT

Abstract
Artificial intelligence (AI) holds significant potential for enhancing quality of gastrointestinal (GI) endoscopy, but the adoption of AI in clinical practice is hampered by the lack of rigorous standardisation and development methodology ensuring generalisability. The aim of the Quality Assessment of pre-clinical AI studies in Diagnostic Endoscopy (QUAIDE) Explanation and Checklist was to develop recommendations for standardised design and reporting of preclinical AI studies in GI endoscopy. The recommendations were developed based on a formal consensus approach with an international multidisciplinary panel of 32 experts among endoscopists and computer scientists. The Delphi methodology was employed to achieve consensus on statements, with a predetermined threshold of 80% agreement. A maximum three rounds of voting were permitted. Consensus was reached on 18 key recommendations, covering 6 key domains: data acquisition and annotation (6 statements), outcome reporting (3 statements), experimental setup and algorithm architecture (4 statements) and result presentation and interpretation (5 statements). QUAIDE provides recommendations on how to properly design (1. Methods, statements 1-14), present results (2. Results, statements 15-16) and integrate and interpret the obtained results (3. Discussion, statements 17-18). The QUAIDE framework offers practical guidance for authors, readers, editors and reviewers involved in AI preclinical studies in GI endoscopy, aiming at improving design and reporting, thereby promoting research standardisation and accelerating the translation of AI innovations into clinical practice.

2024

Monofractal and Multifractal Recalibration of Fully Convolutional Networks for Medical Image Segmentation

Authors
Martins, ML; Coimbra, MT; Renna, F;

Publication

Abstract

2024

Image Captioning for Coronary Artery Disease Diagnosis

Authors
Magalhães, B; Pedrosa, J; Renna, F; Paredes, H; Filipe, V;

Publication
IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2024, Lisbon, Portugal, December 3-6, 2024

Abstract
Coronary artery disease (CAD) remains a leading cause of morbidity and mortality worldwide, underscoring the need for accurate and reliable diagnostic tools. While AI-driven models have shown significant promise in identifying CAD through imaging techniques, their 'black box' nature often hinders clinical adoption due to a lack of interpretability. In response, this paper proposes a novel approach to image captioning specifically tailored for CAD diagnosis, aimed at enhancing the transparency and usability of AI systems. Utilizing the COCA dataset, which comprises gated coronary CT images along with Ground Truth (GT) segmentation annotations, we introduce a hybrid model architecture that combines a Vision Transformer (ViT) for feature extraction with a Generative Pretrained Transformer (GPT) for generating clinically relevant textual descriptions. This work builds on a previously developed 3D Convolutional Neural Network (CNN) for coronary artery segmentation, leveraging its accurate delineations of calcified regions as critical inputs to the captioning process. By incorporating these segmentation outputs, our approach not only focuses on accurately identifying and describing calcified regions within the coronary arteries but also ensures that the generated captions are clinically meaningful and reflective of key diagnostic features such as location, severity, and artery involvement. This methodology provides medical practitioners with clear, context-rich explanations of AI-generated findings, thereby bridging the gap between advanced AI technologies and practical clinical applications. Furthermore, our work underscores the critical role of Explainable AI (XAI) in fostering trust, improving decision-making, and enhancing the efficacy of AI-driven diagnostics, paving the way for future advancements in the field. © 2024 IEEE.
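As a rough illustration of the hybrid encoder-decoder architecture described above, the sketch below wires a ViT encoder to a GPT-2 decoder with Hugging Face transformers, assuming public checkpoints and a stand-in image; the COCA preprocessing, the segmentation conditioning, and the fine-tuning on paired CT images and reports are not reproduced here.

```python
# Illustrative sketch only: a ViT-encoder / GPT-2-decoder captioning pipeline of
# the general kind described in the abstract. Before fine-tuning on paired CT
# images and reports, the generated text is not clinically meaningful.
import numpy as np
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, GPT2TokenizerFast

encoder_ckpt, decoder_ckpt = "google/vit-base-patch16-224-in21k", "gpt2"
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_ckpt, decoder_ckpt)
processor = ViTImageProcessor.from_pretrained(encoder_ckpt)
tokenizer = GPT2TokenizerFast.from_pretrained(decoder_ckpt)

# GPT-2 has no dedicated padding / start tokens, so reuse its BOS/EOS tokens.
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.pad_token_id = tokenizer.eos_token_id

# Stand-in for a preprocessed CT slice (optionally masked by the segmentation
# output of the upstream 3D CNN, which is assumed and not shown here).
fake_slice = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
pixel_values = processor(images=fake_slice, return_tensors="pt").pixel_values

caption_ids = model.generate(pixel_values, max_new_tokens=30)
print(tokenizer.decode(caption_ids[0], skip_special_tokens=True))
```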
