
Publications by Francesco Renna

2025

Impact of the Input Representation on Pulmonary Hypertension Detection from Heart Sounds through CNNs

Authors
Giordano, N; Gaudio, A; Emil Schmidt, S; Renna, F;

Publication
2025 Computing in Cardiology Conference (CinC)

Abstract

2025

Bidirectional Fiducial Matching of Electrocardiography and Phonocardiography for Multimodal Signal Quality Assessment

Authors
Daniel David Proaño-Guevara; André Lobo; Cristina Oliveira; Cátia Isabel Costa; Ricardo Fontes-Carvalho; Hugo Plácido da Silva; Francesco Renna;

Publication
Computing in Cardiology

Abstract

2025

HOWLish: a CNN for automated wolf howl detection

Authors
Campos, R; Krofel, M; Rio-Maior, H; Renna, F;

Publication
Remote Sensing in Ecology and Conservation

Abstract
Automated sound-event detection is crucial for large-scale passive acoustic monitoring of wildlife, but ready-to-use tools are available for only a narrow range of taxa. Machine learning is currently the state-of-the-art framework for developing sound-event detection tools tailored to specific wildlife calls. Gray wolves (Canis lupus), a species with complex management needs, howl spontaneously for long-distance intra- and inter-pack communication, which makes them a prime target for passive acoustic monitoring. Yet, there is currently no pre-trained, open-access tool that allows reliable automated detection of wolf howls in recorded soundscapes. We collected 50 137 h of soundscape data, in which we manually labeled 841 unique howling events. We used this dataset to fine-tune VGGish, a convolutional neural network trained for audio classification, effectively retraining it for wolf howl detection. HOWLish correctly classified 77% of the wolf howling examples in our test set, with a false positive rate of 1.74%; still, precision was low (0.006) given the extreme class imbalance (7124:1). During field tests, HOWLish retrieved 81.3% of the observed howling events while offering a 15-fold reduction in operator time compared to fully manual detection. This work establishes the baseline for open-access automated wolf howl detection. HOWLish facilitates remote sensing of wild wolf populations, offering new opportunities in non-invasive large-scale monitoring and communication research of wolves. The knowledge gap addressed here spans many soniferous taxa, to which our approach also applies.
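
As an illustration of the pipeline described above, the following sketch builds a binary howl/no-howl classifier on top of VGGish embeddings. It is a minimal sketch, not the authors' implementation: it uses the public TF Hub release of VGGish and, for brevity, keeps the backbone frozen and trains only a small dense head, whereas the paper retrains VGGish itself; the head architecture, class weights and audio preprocessing are illustrative assumptions.

# Minimal sketch (Python/TensorFlow): howl detection on VGGish embeddings.
# Assumptions (not from the paper): TF Hub VGGish, 16 kHz mono audio in [-1, 1],
# and a small dense head trained on frozen frame-level embeddings.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

vggish = hub.load("https://tfhub.dev/google/vggish/1")   # 0.96 s frames -> 128-d embeddings

def clip_embedding(waveform_16khz: np.ndarray) -> np.ndarray:
    """Average VGGish frame embeddings into a single 128-d clip descriptor."""
    frames = vggish(waveform_16khz.astype(np.float32))    # shape: [num_frames, 128]
    return tf.reduce_mean(frames, axis=0).numpy()

# Hypothetical classification head on top of the frozen embeddings.
head = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),       # P(howl)
])
head.compile(optimizer="adam", loss="binary_crossentropy",
             metrics=[tf.keras.metrics.Recall(), tf.keras.metrics.Precision()])

# X: [n_clips, 128] clip embeddings, y: {0, 1} labels (howl present / absent).
# A class_weight dictionary would mitigate the extreme imbalance reported above.
# head.fit(X, y, epochs=20, class_weight={0: 1.0, 1: 50.0})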

2025

Impact of Preprocessing on the Performance of Heart Sound Segmentation

Authors
Daniel Proaño-Guevara; Hugo Plácido da Silva; Francesco Renna;

Publication
2025 IEEE 8th Portuguese Meeting on Bioengineering (ENBENG)

Abstract

2025

On the impact of input resolution on CNN-based gastrointestinal endoscopic image classification

Authors
Lopes I.; Almeida E.; Libanio D.; Dinis-Ribeiro M.; Coimbra M.; Renna F.;

Publication
Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)

Abstract
Gastric cancer (GC) remains a significant global health issue, and convolutional neural networks (CNNs) have shown high potential for detecting precancerous gastrointestinal (GI) conditions in endoscopic images [1], [2]. Despite the need for high resolution to capture the complexity of GI tissue patterns, the impact of endoscopic image resolution on the performance of these models remains underexplored. This study investigates how different image resolutions affect CNN-based classification of intestinal metaplasia (IM) using two datasets with different resolutions and imaging modalities. Our results reveal that the often adopted input resolution of 224×224 pixels does not provide optimal performance for detecting IM, even when using transfer learning from networks pre-trained on images with this resolution. Higher resolutions, such as 512×512, consistently outperform 224×224, with notable improvements in F1-scores (e.g., InceptionV3: 94.46% at 512×512 vs. 91.49% at 224×224). Additionally, our findings indicate that model performance is constrained by the original image quality, underscoring the critical importance of maintaining the higher original image resolutions and quality provided by endoscopes during clinical exams for the purposes of training and testing CNNs for gastric cancer management. Clinical Relevance: This research highlights the importance of image quality, particularly when endoscopes capture lower-resolution images. Understanding how image resolution impacts diagnostic accuracy can guide clinicians in improving imaging techniques and employing Artificial Intelligence-driven tools effectively for more accurate GC detection and better patient outcomes.
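
The resolution comparison summarized above can be outlined in code: the same ImageNet-pretrained InceptionV3 backbone is instantiated at 224×224 and 512×512 inputs and trained as a binary intestinal-metaplasia classifier. This is a sketch under assumed settings (classification head, optimizer, data pipeline), not the paper's training protocol.

# Minimal sketch (Python/TensorFlow): the same InceptionV3 backbone at two
# input resolutions for a binary IM classifier. Data pipelines, augmentation
# and the training schedule are placeholders, not the paper's protocol.
import tensorflow as tf

def build_classifier(resolution: int) -> tf.keras.Model:
    backbone = tf.keras.applications.InceptionV3(
        include_top=False,                        # drop the 1000-class ImageNet head
        weights="imagenet",
        input_shape=(resolution, resolution, 3),
        pooling="avg",
    )
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(backbone.output)
    model = tf.keras.Model(backbone.input, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

for res in (224, 512):
    model = build_classifier(res)
    # ds_train / ds_val would be tf.data pipelines with images resized to (res, res).
    # model.fit(ds_train, validation_data=ds_val, epochs=10)
    print(res, model.count_params())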

2025

A Comparative Analysis of Centralized and Federated Learning for Multimodal ECG and PCG Classification

Authors
Silva M.G.; Oliveira B.; Coimbra M.; Renna F.; de Carvalho A.V.;

Publication
Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)

Abstract
In this study, we analyzed federated learning (FL) for ECG and PCG data from the PhysioNet 2016 challenge dataset. We tested multiple FL approaches and evaluated how they affect the performance metrics of cardiac abnormality detection while preserving data privacy. We compared the performance of the centralized and federated models with two and four clients. The results demonstrated that multimodal federated models using both ECG and PCG data consistently outperformed centralized single-modality ECG or PCG models; in fact, the gains provided by multimodal approaches can compensate for the loss in performance induced by distributed learning. These findings highlight the potential of multimodal federated learning not only to provide decentralization advantages but also to achieve performance comparable to centralized single-modality approaches. Clinical relevance: The clinical relevance of this research lies in its potential to improve cardiovascular disease detection by exploring multimodal models and federated learning. It can also help to optimize machine learning models for real-world clinical deployment while preserving patient privacy and achieving comparable performance metrics.
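
The federated setup can be sketched with a standard federated-averaging (FedAvg) loop over two clients, each holding paired ECG and PCG features and training a shared multimodal fusion model locally before the server averages the weights. The abstract does not specify the FL algorithm or network architecture, so the fusion model, feature dimensions and synthetic data below are assumptions for illustration.

# Minimal sketch (Python/TensorFlow): FedAvg over two clients with paired
# ECG+PCG features. Only the client/server weight-averaging loop mirrors the
# federated setup described above; the model itself is a placeholder.
import numpy as np
import tensorflow as tf

def build_multimodal_model() -> tf.keras.Model:
    # Hypothetical fusion model: fixed-length ECG and PCG feature vectors
    # processed by separate branches, concatenated, then classified.
    ecg_in = tf.keras.layers.Input(shape=(64,), name="ecg")
    pcg_in = tf.keras.layers.Input(shape=(64,), name="pcg")
    ecg_h = tf.keras.layers.Dense(32, activation="relu")(ecg_in)
    pcg_h = tf.keras.layers.Dense(32, activation="relu")(pcg_in)
    fused = tf.keras.layers.Concatenate()([ecg_h, pcg_h])
    out = tf.keras.layers.Dense(1, activation="sigmoid")(fused)
    model = tf.keras.Model([ecg_in, pcg_in], out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

def fedavg_round(global_model, client_datasets, local_epochs=1):
    """One communication round: local training per client, then weight averaging."""
    client_weights, client_sizes = [], []
    for (x_ecg, x_pcg, y) in client_datasets:
        local = build_multimodal_model()
        local.set_weights(global_model.get_weights())
        local.fit([x_ecg, x_pcg], y, epochs=local_epochs, verbose=0)
        client_weights.append(local.get_weights())
        client_sizes.append(len(y))
    total = float(sum(client_sizes))
    averaged = [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]
    global_model.set_weights(averaged)
    return global_model

# Usage with synthetic data standing in for two client shards:
rng = np.random.default_rng(0)
clients = [
    (rng.normal(size=(100, 64)), rng.normal(size=(100, 64)),
     rng.integers(0, 2, 100).astype("float32"))
    for _ in range(2)
]
global_model = build_multimodal_model()
for _ in range(5):
    global_model = fedavg_round(global_model, clients)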
