
Details

  • Name

    Francesco Renna
  • Since

    1st June 2020
  • Nationality

    Italy
  • Contacts

    +351 222 094 000
    francesco.renna@inesctec.pt
Publications

2022

The CirCor DigiScope Dataset: From Murmur Detection to Murmur Classification

Authors
Oliveira, J; Renna, F; Costa, PD; Nogueira, M; Oliveira, C; Ferreira, C; Jorge, A; Mattos, S; Hatem, T; Tavares, T; Elola, A; Rad, AB; Sameni, R; Clifford, GD; Coimbra, MT;

Publication
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS

2022

Artificial Intelligence for Upper Gastrointestinal Endoscopy: A Roadmap from Technology Development to Clinical Practice

Authors
Renna, F; Martins, M; Neto, A; Cunha, A; Libanio, D; Dinis-Ribeiro, M; Coimbra, M;

Publication
DIAGNOSTICS

Abstract
Stomach cancer is the third deadliest type of cancer in the world (0.86 million deaths in 2017). In 2035, a 20% increase will be observed both in incidence and mortality due to demographic effects if no interventions are foreseen. Upper GI endoscopy (UGIE) plays a paramount role in early diagnosis and, therefore, improved survival rates. On the other hand, human and technical factors can contribute to misdiagnosis while performing UGIE. In this scenario, artificial intelligence (AI) has recently shown its potential in compensating for the pitfalls of UGIE, by leveraging deep learning architectures able to efficiently recognize endoscopic patterns from UGIE video data. This work presents a review of the current state-of-the-art algorithms in the application of AI to gastroscopy. It focuses specifically on the threefold tasks of assuring exam completeness (i.e., detecting the presence of blind spots) and assisting in the detection and characterization of clinical findings, both gastric precancerous conditions and neoplastic lesion changes. Early and promising results have already been obtained using well-known deep learning architectures for computer vision, but many algorithmic challenges remain in achieving the vision of AI-assisted UGIE. Future challenges in the roadmap for the effective integration of AI tools within the UGIE clinical practice are discussed, namely the adoption of more robust deep learning architectures and methods able to embed domain knowledge into image/video classifiers as well as the availability of large, annotated datasets.

2022

Classifying the content of social media images to support cultural ecosystem service assessments using deep learning models

Authors
Cardoso, AS; Renna, F; Moreno-Llorca, R; Alcaraz-Segura, D; Tabik, S; Ladle, RJ; Vaz, AS;

Publication
ECOSYSTEM SERVICES

Abstract
Crowdsourced social media data has become popular for assessing cultural ecosystem services (CES). Nevertheless, social media data analyses in the context of CES can be time consuming and costly, particularly when based on the manual classification of images or texts shared by people. The potential of deep learning for automating the analysis of crowdsourced social media content is still being explored in CES research. Here, we use freely available deep learning models, i.e., Convolutional Neural Networks, for automating the classification of natural and human (e.g., species and human structures) elements relevant to CES from Flickr and Wikiloc images. Our approach is developed for Peneda-Gerês (Portugal) and then applied to Sierra Nevada (Spain). For Peneda-Gerês, image classification showed promising results (F1-score ca. 80%), highlighting a preference for aesthetics appreciation by social media users. In Sierra Nevada, even though model performance decreased, it was still satisfactory (F1-score ca. 60%), indicating a predominance of people's pursuit for cultural heritage and spiritual enrichment. Our study shows great potential from deep learning to assist in the automated classification of human-nature interactions and elements from social media content and, by extension, for supporting researchers and stakeholders to decode CES distributions, benefits, and values.

2022

Beyond Heart Murmur Detection: Automatic Murmur Grading from Phonocardiogram

Authors
Elola, A; Aramendi, E; Oliveira, J; Renna, F; Coimbra, MT; Reyna, MA; Sameni, R; Clifford, GD; Rad, AB;

Publication
CoRR

2021

Standalone performance of artificial intelligence for upper GI neoplasia: a meta-analysis

Authors
Arribas, J; Antonelli, G; Frazzoni, L; Fuccio, L; Ebigbo, A; van der Sommen, F; Ghatwary, N; Palm, C; Coimbra, M; Renna, F; Bergman, JJGHM; Sharma, P; Messmann, H; Hassan, C; Dinis Ribeiro, MJ;

Publication
GUT

Abstract
Objective Artificial intelligence (AI) may reduce underdiagnosed or overlooked upper GI (UGI) neoplastic and preneoplastic conditions, due to subtle appearance and low disease prevalence. Only disease-specific AI performances have been reported, generating uncertainty on its clinical value. Design We searched PubMed, Embase and Scopus until July 2020, for studies on the diagnostic performance of AI in detection and characterisation of UGI lesions. Primary outcomes were pooled diagnostic accuracy, sensitivity and specificity of AI. Secondary outcomes were pooled positive (PPV) and negative (NPV) predictive values. We calculated pooled proportion rates (%), designed summary receiver operating characteristic curves with respective area under the curves (AUCs) and performed metaregression and sensitivity analysis. Results Overall, 19 studies on detection of oesophageal squamous cell neoplasia (ESCN) or Barrett's esophagus-related neoplasia (BERN) or gastric adenocarcinoma (GCA) were included with 218, 445, 453 patients and 7976, 2340, 13 562 images, respectively. AI-sensitivity/specificity/PPV/NPV/positive likelihood ratio/negative likelihood ratio for UGI neoplasia detection were 90% (CI 85% to 94%)/89% (CI 85% to 92%)/87% (CI 83% to 91%)/91% (CI 87% to 94%)/8.2 (CI 5.7 to 11.7)/0.111 (CI 0.071 to 0.175), respectively, with an overall AUC of 0.95 (CI 0.93 to 0.97). No difference in AI performance across ESCN, BERN and GCA was found, AUC being 0.94 (CI 0.52 to 0.99), 0.96 (CI 0.95 to 0.98), 0.93 (CI 0.83 to 0.99), respectively. Overall, study quality was low, with high risk of selection bias. No significant publication bias was found. Conclusion We found a high overall AI accuracy for the diagnosis of any neoplastic lesion of the UGI tract that was independent of the underlying condition. This may be expected to substantially reduce the miss rate of precancerous lesions and early cancer when implemented in clinical practice.

Supervised Theses

2022

Automatic contrast generation from contrastless CTs

Author
Rúben André Dias Domingues

Institution
UP-FCUP

2022

Listening for wolf conservation: Deep learning for automated howl recognition and classification

Author
Rafael de Faria Campos

Institution
UP-FCUP

2021

Deep convolutional neural networks for gastric landmark detection

Author
Inês Filipa Fernandes Videira Lopes

Institution
UA-UA

2021

Deep learning algorithms for anatomical gastric landmark detection

Author
Miguel Lopes Martins

Institution
UP-FCUP

2021

Deep learning for gastric cancer detection

Author
Gabriel Trovão Pereira Lima

Institution
UP-FCUP