
Publications by Francesco Renna

2022

Artificial Intelligence for Upper Gastrointestinal Endoscopy: A Roadmap from Technology Development to Clinical Practice

Authors
Renna, F; Martins, M; Neto, A; Cunha, A; Libanio, D; Dinis-Ribeiro, M; Coimbra, M;

Publication
DIAGNOSTICS

Abstract
Stomach cancer is the third deadliest type of cancer in the world (0.86 million deaths in 2017). By 2035, a 20% increase in both incidence and mortality is expected due to demographic effects if no interventions are made. Upper GI endoscopy (UGIE) plays a paramount role in early diagnosis and, therefore, improved survival rates. On the other hand, human and technical factors can contribute to misdiagnosis while performing UGIE. In this scenario, artificial intelligence (AI) has recently shown its potential in compensating for the pitfalls of UGIE, by leveraging deep learning architectures able to efficiently recognize endoscopic patterns from UGIE video data. This work presents a review of the current state-of-the-art algorithms in the application of AI to gastroscopy. It focuses specifically on the threefold task of assuring exam completeness (i.e., detecting the presence of blind spots) and assisting in the detection and characterization of clinical findings, both gastric precancerous conditions and neoplastic lesion changes. Early and promising results have already been obtained using well-known deep learning architectures for computer vision, but many algorithmic challenges remain in achieving the vision of AI-assisted UGIE. Future challenges in the roadmap for the effective integration of AI tools within the UGIE clinical practice are discussed, namely the adoption of more robust deep learning architectures and methods able to embed domain knowledge into image/video classifiers as well as the availability of large, annotated datasets.

2022

Classifying the content of social media images to support cultural ecosystem service assessments using deep learning models

Authors
Cardoso, AS; Renna, F; Moreno-Llorca, R; Alcaraz-Segura, D; Tabik, S; Ladle, RJ; Vaz, AS;

Publication
ECOSYSTEM SERVICES

Abstract
Crowdsourced social media data has become popular for assessing cultural ecosystem services (CES). Nevertheless, social media data analyses in the context of CES can be time consuming and costly, particularly when based on the manual classification of images or texts shared by people. The potential of deep learning for automating the analysis of crowdsourced social media content is still being explored in CES research. Here, we use freely available deep learning models, i.e., Convolutional Neural Networks, for automating the classification of natural and human (e.g., species and human structures) elements relevant to CES from Flickr and Wikiloc images. Our approach is developed for Peneda-Gerês (Portugal) and then applied to Sierra Nevada (Spain). For Peneda-Gerês, image classification showed promising results (F1-score ca. 80%), highlighting a preference for aesthetics appreciation by social media users. In Sierra Nevada, even though model performance decreased, it was still satisfactory (F1-score ca. 60%), indicating a predominance of people's pursuit for cultural heritage and spiritual enrichment. Our study shows great potential of deep learning to assist in the automated classification of human-nature interactions and elements from social media content and, by extension, for supporting researchers and stakeholders to decode CES distributions, benefits, and values.
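As an illustration of the headline metric in this abstract (F1-score ca. 80% for Peneda-Gerês, ca. 60% for Sierra Nevada), the following sketch computes a macro-averaged F1-score from predicted vs. true class labels. The class names and labels are hypothetical toy data, not drawn from the paper.

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: the unweighted mean of per-class F1 scores
    over all classes observed in either the true or predicted labels."""
    classes = set(y_true) | set(y_pred)
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Toy labels for three hypothetical CES-relevant image classes
true = ["species", "structure", "landscape", "species", "landscape"]
pred = ["species", "landscape", "landscape", "species", "landscape"]
print(round(macro_f1(true, pred), 3))
```

In practice a library routine such as scikit-learn's `f1_score(..., average="macro")` would be used; the hand-rolled version above only serves to make the metric's definition explicit.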

2023

Beyond Heart Murmur Detection: Automatic Murmur Grading From Phonocardiogram

Authors
Elola, A; Aramendi, E; Oliveira, J; Renna, F; Coimbra, MT; Reyna, MA; Sameni, R; Clifford, GD; Rad, AB;

Publication
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS

Abstract
Objective: Murmurs are abnormal heart sounds, identified by experts through cardiac auscultation. The murmur grade, a quantitative measure of the murmur intensity, is strongly correlated with the patient's clinical condition. This work aims to estimate each patient's murmur grade (i.e., absent, soft, loud) from multiple auscultation location phonocardiograms (PCGs) of a large population of pediatric patients from a low-resource rural area. Methods: The Mel spectrogram representation of each PCG recording is given to an ensemble of 15 convolutional residual neural networks with channel-wise attention mechanisms to classify each PCG recording. The final murmur grade for each patient is derived based on the proposed decision rule and considering all estimated labels for available recordings. The proposed method is cross-validated on a dataset consisting of 3456 PCG recordings from 1007 patients using a stratified ten-fold cross-validation. Additionally, the method was tested on a hidden test set comprised of 1538 PCG recordings from 442 patients. Results: The overall cross-validation performances for patient-level murmur gradings are 86.3% and 81.6% in terms of the unweighted average of sensitivities and F1-scores, respectively. The sensitivities (and F1-scores) for absent, soft, and loud murmurs are 90.7% (93.6%), 75.8% (66.8%), and 92.3% (84.2%), respectively. On the test set, the algorithm achieves an unweighted average of sensitivities of 80.4% and an F1-score of 75.8%. Conclusions: This study provides a potential approach for algorithmic pre-screening in low-resource settings with relatively high expert screening costs. Significance: The proposed method represents a significant step beyond detection of murmurs, providing characterization of intensity, which may provide an enhanced classification of clinical outcomes.
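The abstract describes classifying each PCG recording separately and then deriving a single patient-level grade from all per-recording labels via a decision rule. The paper's actual rule is not reproduced here; the sketch below illustrates one plausible aggregation (take the most severe label among a patient's recordings) purely as an assumption.

```python
# Illustrative patient-level aggregation of per-recording murmur grades.
# This is NOT the paper's proposed decision rule, only a plausible stand-in:
# the patient receives the most severe grade found at any auscultation location.
SEVERITY = {"absent": 0, "soft": 1, "loud": 2}

def patient_grade(recording_labels):
    """Combine per-recording labels (one per auscultation location)
    into a single patient-level murmur grade."""
    return max(recording_labels, key=lambda label: SEVERITY[label])

print(patient_grade(["absent", "soft", "absent", "loud"]))  # -> loud
```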

2022

Heart Murmur Detection from Phonocardiogram Recordings: The George B. Moody PhysioNet Challenge 2022

Authors
Reyna, MA; Kiarashi, Y; Elola, A; Oliveira, J; Renna, F; Gu, A; Perez Alday, EA; Sadr, N; Sharma, A; Silva Mattos, Sd; Coimbra, MT; Sameni, R; Rad, AB; Clifford, GD;

Publication
Computing in Cardiology, CinC 2022, Tampere, Finland, September 4-7, 2022

Abstract
The George B. Moody PhysioNet Challenge 2022 explored the detection of abnormal heart function from phonocardiogram (PCG) recordings. Although ultrasound imaging is becoming more common for investigating heart defects, the PCG still has the potential to assist with rapid and low-cost screening, and the automated annotation of PCG recordings has the potential to further improve access. Therefore, for this Challenge, we asked participants to design working, open-source algorithms that use PCG recordings to identify heart murmurs and clinical outcomes. This Challenge makes several innovations. First, we sourced 5272 PCG recordings from 1568 patients in Brazil, providing high-quality data for an underrepresented population. Second, we required the Challenge teams to submit working code for training and running their models, improving the reproducibility and reusability of the algorithms. Third, we devised a cost-based evaluation metric that reflects the costs of screening, treatment, and diagnostic errors, facilitating the development of more clinically relevant algorithms. A total of 87 teams submitted 779 algorithms during the Challenge. These algorithms represent a diversity of approaches from both academia and industry for detecting abnormal cardiac function from PCG recordings. © 2022 Creative Commons.

2022

A Generalization Study of Automatic Pericardial Segmentation in Computed Tomography Images

Authors
Baeza, R; Santos, C; Nunes, F; Mancio, J; Carvalho, RF; Coimbra, MT; Renna, F; Pedrosa, J;

Publication
Wireless Mobile Communication and Healthcare - 11th EAI International Conference, MobiHealth 2022, Virtual Event, November 30 - December 2, 2022, Proceedings

Abstract
The pericardium is a thin membrane sac that covers the heart. As such, the segmentation of the pericardium in computed tomography (CT) can have several clinical applications, namely as a preprocessing step for extraction of different clinical parameters. However, manual segmentation of the pericardium can be challenging, time-consuming and subject to observer variability, which has motivated the development of automatic pericardial segmentation methods. In this study, a method to automatically segment the pericardium in CT using a U-Net framework is proposed. Two datasets were used in this study: the publicly available Cardiac Fat dataset and a private dataset acquired at the hospital centre of Vila Nova de Gaia e Espinho (CHVNGE). The Cardiac Fat database was used for training with two different input sizes: 512 × 512 and 256 × 256. A superior performance was obtained with the 256 × 256 image size, with a mean Dice similarity coefficient (DSC) of 0.871 ± 0.01 and 0.807 ± 0.06 on the Cardiac Fat test set and the CHVNGE dataset, respectively. Results show that reasonable performance can be achieved with a small number of patients for training and an off-the-shelf framework, with only a small decrease in performance in an external dataset. Nevertheless, additional data will increase the robustness of this approach for difficult cases and future approaches must focus on the integration of 3D information for a more accurate segmentation of the lower pericardium. © 2023, ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering.
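The Dice similarity coefficient reported above (0.871 on the Cardiac Fat test set) measures overlap between a predicted segmentation and the ground truth. A minimal sketch of the metric on flattened binary masks, with toy pixel data rather than CT images:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks,
    given as flattened lists of 0/1 pixel labels: 2|A ∩ B| / (|A| + |B|).
    Two empty masks are treated as a perfect match."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0

# Toy 1x6 "images": prediction overlaps ground truth on 2 of its 3 pixels
pred = [0, 1, 1, 1, 0, 0]
gt   = [0, 1, 1, 0, 0, 0]
print(dice(pred, gt))  # -> 0.8
```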

2022

Explainable Deep Learning for Non-Invasive Detection of Pulmonary Artery Hypertension from Heart Sounds

Authors
Gaudio, A; Coimbra, MT; Campilho, A; Smailagic, A; Schmidt, SE; Renna, F;

Publication
Computing in Cardiology, CinC 2022, Tampere, Finland, September 4-7, 2022

Abstract
Late diagnoses of patients affected by pulmonary artery hypertension (PH) have a poor outcome. This observation has led to a call for earlier, non-invasive PH detection. Cardiac auscultation offers a non-invasive and cost-effective alternative to both right heart catheterization and Doppler analysis in the assessment of PH. We propose to detect PH via analysis of digital heart sound recordings with over-parameterized deep neural networks. In contrast with previous approaches in the literature, we assess the impact of a pre-processing step aiming to separate the S2 sound into its aortic (A2) and pulmonary (P2) components. We obtain an area under the ROC curve of 0.95, improving over our adaptation of a state-of-the-art Gaussian mixture model PH detector by +0.17. Post-hoc explanations and analysis show that the availability of separated A2 and P2 components contributes significantly to prediction. Analysis of stethoscope heart sound recordings with deep networks is an effective, low-cost and non-invasive solution for the detection of pulmonary hypertension. © 2022 Creative Commons.
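The area under the ROC curve reported above (0.95) can be computed, equivalently to integrating the ROC curve, as the probability that a randomly chosen positive case receives a higher detector score than a randomly chosen negative case (the Mann-Whitney U formulation). A sketch with hypothetical scores, not data from the paper:

```python
def auc_roc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive outranks the
    negative; ties count as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical detector scores for PH-positive and PH-negative recordings
pos = [0.9, 0.8, 0.75, 0.6]
neg = [0.7, 0.4, 0.3, 0.2]
print(auc_roc(pos, neg))  # -> 0.9375
```

This quadratic pairwise loop is fine for illustration; production code would use a rank-based O(n log n) routine such as scikit-learn's `roc_auc_score`.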
