Details
Name
Francesco Renna
Since
01 June 2020
Nationality
Italy
Contacts
+351222094000
francesco.renna@inesctec.pt
2026
Authors
Campos, R; Krofel, M; Rio Maior, H; Renna, F;
Publication
REMOTE SENSING IN ECOLOGY AND CONSERVATION
Abstract
Automated sound-event detection is crucial for large-scale passive acoustic monitoring of wildlife, but the availability of ready-to-use tools is limited across taxa. Machine learning is currently the state-of-the-art framework for developing sound-event detection tools tailored to specific wildlife calls. Gray wolves (Canis lupus), a species with complex management needs, howl spontaneously for long-distance intra- and inter-pack communication, which makes them a prime target for passive acoustic monitoring. Yet, there is currently no pre-trained, open-access tool that allows reliable automated detection of wolf howls in recorded soundscapes. We collected 50,137 h of soundscape data, in which we manually labeled 841 unique howling events. We used this dataset to fine-tune VGGish, a convolutional neural network trained for audio classification, effectively retraining it for wolf howl detection. HOWLish correctly classified 77% of the wolf howling examples in our test set, with a false positive rate of 1.74%; still, precision was low (0.006) given the extreme class imbalance (7124:1). During field tests, HOWLish retrieved 81.3% of the observed howling events while offering a 15-fold reduction in operator time compared to fully manual detection. This work establishes a baseline for open-access automated wolf howl detection. HOWLish facilitates remote sensing of wild wolf populations, offering new opportunities in non-invasive large-scale monitoring and communication research of wolves. The knowledge gap addressed here spans many soniferous taxa, to which our approach also applies.
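As a rough illustration of the fine-tuning recipe described above, the sketch below adapts a pre-trained VGGish backbone to binary howl detection in PyTorch. It assumes the community torch.hub port of VGGish (harritaylor/torchvggish) and labeled audio clips; the actual HOWLish training pipeline, hyperparameters, and pooling strategy are not published here and may differ.

```python
# Minimal sketch: fine-tuning VGGish for binary wolf-howl detection.
# Assumes the community PyTorch port of VGGish on torch.hub
# ("harritaylor/torchvggish"); HOWLish's actual pipeline may differ.
import torch
import torch.nn as nn

vggish = torch.hub.load("harritaylor/torchvggish", "vggish")
vggish.postprocess = False  # keep raw 128-d float embeddings (skip PCA step)
vggish.train()

head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
params = list(vggish.parameters()) + list(head.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)
criterion = nn.BCEWithLogitsLoss()

def train_step(wav_path: str, label: float) -> float:
    """One gradient step on a single labeled clip (howl = 1.0, background = 0.0)."""
    emb = vggish(wav_path)                      # (n_frames, 128) frame embeddings
    logit = head(emb).mean()                    # pool frame logits over the clip
    loss = criterion(logit, torch.tensor(label))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```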
2026
Authors
Ferreira, VRS; de Paiva, AC; de Almeida, JDS; Braz, G Jr; Silva, AC; Renna, F;
Publication
ENTERPRISE INFORMATION SYSTEMS, ICEIS 2024, PT I
Abstract
This paper explores a Cycle-GAN architecture based on diffusion models for translating cardiac CT images with and without contrast, aiming to enhance the quality and accuracy of medical imaging. The combination of GANs and diffusion models has demonstrated promising results, particularly in generating high-quality, visually similar contrast-enhanced cardiac images. This effectiveness is evidenced by metrics such as a PSNR of 32.85, an SSIM of 0.766, and an FID of 42.348, highlighting the model's capability for accurate and detailed image generation. Although these results indicate substantial potential for improving diagnostic accuracy, challenges remain, particularly concerning the generation of image artefacts and brightness inconsistencies, which could affect the clinical validation of these images. These issues have important implications for the reliability of the images in real medical diagnoses. The results of this study suggest that future research should focus on optimizing these aspects, improving the handling of artefacts, and further investigating alternative architectures to enhance the quality and reliability of the generated images, ensuring their applicability in clinical settings.
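For context on the reported numbers, PSNR and SSIM are standard paired image-quality metrics and can be computed as sketched below with scikit-image; FID additionally requires an Inception feature extractor (e.g., the pytorch-fid package) and is only noted here. The function name and intensity-range handling are illustrative assumptions, not the paper's evaluation code.

```python
# Sketch: computing the paired image-quality metrics reported above
# (PSNR and SSIM) with scikit-image; FID needs a deep feature extractor
# (e.g., the pytorch-fid package) and is not reproduced here.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality_metrics(reference: np.ndarray, generated: np.ndarray) -> dict:
    """PSNR/SSIM between a real contrast-enhanced CT slice and its
    Cycle-GAN translation; both arrays share shape and intensity range."""
    rng = float(reference.max() - reference.min())
    return {
        "psnr": peak_signal_noise_ratio(reference, generated, data_range=rng),
        "ssim": structural_similarity(reference, generated, data_range=rng),
    }
```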
2025
Authors
Baeza, R; Nunes, F; Santos, C; Mancio, J; Fontes Carvalho, R; Renna, F; Pedrosa, J;
Publication
INTERNATIONAL JOURNAL OF CARDIOVASCULAR IMAGING
Abstract
The link between epicardial adipose tissue (EAT) and cardiovascular risk is well established, with EAT volume being strongly associated with inflammation, coronary artery disease (CAD) risk, and mortality. However, EAT quantification is hindered by the time-consuming nature of manual EAT segmentation in cardiac computed tomography (CT). 300 non-contrast cardiac CT scans were collected and the pericardium was manually delineated. In a subset of this data (N = 30), manual delineation was repeated by the same operator and by a second operator. Two automatic methods were then used for pericardial segmentation: a commercially available tool, the Siemens Cardiac Risk Assessment (CRA) software; and a deep learning solution based on a U-Net architecture trained exclusively with external public datasets (CardiacFat and OSIC). EAT segmentations were obtained through thresholding to [-150, -50] Hounsfield units. Pericardial and EAT segmentation performance was evaluated considering the segmentations by the first operator as reference. Statistical significance of differences for all metrics and segmentation methods was tested through Student's t-tests. Pericardial segmentation intra-/interobserver variability was excellent, with the U-Net outperforming Siemens CRA (p < 0.0001). The intra- and interobserver agreement for EAT segmentation was lower, with Dice scores (DSC) of 0.862 and 0.775 respectively, while the U-Net and Siemens CRA obtained DSCs of 0.723 and 0.679 respectively. EAT volume quantification showed that the agreement between a human observer and the U-Net was better than that between two human observers (p = 0.0141), with a Pearson correlation coefficient (PCC) of 0.896 and a bias of -2.83 cm³ (below the interobserver bias of 9.05 cm³). The lower performance of EAT segmentation highlights the difficulty in segmenting this structure. For both pericardial and EAT segmentation, the deep learning method outperformed the commercial solution. While the segmentation performance of the U-Net solution was below interobserver variability, EAT volume quantification performance was competitive with human readers, motivating future use of these tools. Clinical trial number: NCT03280433, registered retrospectively on 2017-09-08.
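The thresholding and evaluation steps described above are simple to express in code. The NumPy sketch below is an illustrative reconstruction, not the study's implementation: it derives an EAT mask from a pericardial mask via the [-150, -50] HU fat window, and computes the Dice score and volume used to compare segmentations.

```python
# Sketch: EAT segmentation by HU thresholding inside a pericardial mask,
# plus the Dice score and volume quantities used for evaluation.
import numpy as np

def eat_mask(hu: np.ndarray, pericardium: np.ndarray) -> np.ndarray:
    """Voxels inside the pericardium with fat-range attenuation [-150, -50] HU."""
    return pericardium.astype(bool) & (hu >= -150) & (hu <= -50)

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def volume_cm3(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Mask volume in cm^3 given voxel spacing in millimetres."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0
```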
2025
Authors
Petersen, FT; Lobo, A; Oliveira, C; Costa, CI; Fontes Carvalho, R; Schmidt, E; Renna, F;
Publication
Computing in Cardiology
Abstract
Aims: Heart failure (HF) is a global health challenge that is often associated with reduced left ventricular ejection fraction (EF). Current EF assessments rely on echocardiography exams performed by specialists. This study explores the feasibility of predicting EF using cardiac intervals derived from synchronous phonocardiography (PCG) and single-lead electrocardiography (ECG) recorded with a bimodal stethoscope. Methods: 84 pairs of synchronous PCG and ECG signals were collected from 42 patients. Signal pairs were categorized into three different EF groups: EF <40%, EF 40-49% and EF ≥50%. Results: Logistic regression revealed that the QS2 interval was a significant predictor of reduced ejection fraction, with p = 0.0186 for EF >40% and p = 0.0090 for EF >50%. The QT interval showed no predictive value. The Kruskal-Wallis test showed significant group differences for QS2 (p = 0.008) and S1S2 (p = 0.009), but not for QT (p = 0.299) or QS1 (p = 0.673). The Mann-Whitney U-test confirmed that the QS2 and S1S2 intervals differed significantly between EF groups.
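The group comparisons reported above follow a standard non-parametric recipe that can be sketched with SciPy. The arrays below are placeholder data, not the study's measurements; they only illustrate how the Kruskal-Wallis omnibus test and the Mann-Whitney U follow-up would be applied to QS2 intervals across the three EF groups.

```python
# Sketch: group comparison of systolic time intervals across EF strata,
# mirroring the Kruskal-Wallis and Mann-Whitney tests reported above.
# qs2_low / qs2_mid / qs2_high are hypothetical QS2 intervals (ms) for the
# EF <40%, EF 40-49% and EF >=50% groups; values are synthetic placeholders.
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(0)  # placeholder data for illustration only
qs2_low, qs2_mid, qs2_high = (rng.normal(m, 15, 30) for m in (420, 400, 380))

h_stat, p_kw = kruskal(qs2_low, qs2_mid, qs2_high)  # omnibus group difference
u_stat, p_mw = mannwhitneyu(qs2_low, qs2_high)      # pairwise follow-up
print(f"Kruskal-Wallis p={p_kw:.4f}, Mann-Whitney U p={p_mw:.4f}")
```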
2025
Authors
Giordano, N; Gaudio, A; Schmidt, E; Renna, F;
Publication
Computing in Cardiology
Abstract
Pulmonary hypertension (PH) is a hemodynamic condition characterized by elevated pulmonary artery pressure. To date, right heart catheterization is the gold-standard diagnostic test for PH, but it is an invasive and expensive procedure. Deep learning (DL) techniques applied to heart sounds have previously shown promising performance for PH screening. In this work, we analyze the impact of different input representations for PH detection with convolutional neural networks (CNNs). We found that considering each heartbeat as an independent input yielded systematically lower performance than considering the recordings as a whole: preserving the information about the variability over the heartbeats is key. Time-domain feature maps outperformed handcrafted features, and combining time- and frequency-domain representations proved consistently most effective. Reducing the number of heartbeats to 30 did not affect the performance, and even reducing to 10 beats preserved the diagnostic value. The proposed analysis moves the applicability of DL for PH detection from heart sounds one step closer to clinical practice.
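As an illustration of the kind of time-frequency input representation compared above, the sketch below builds a log-mel spectrogram of a whole heart-sound recording with librosa. The sampling rate and spectrogram parameters are assumptions for illustration; the paper's exact feature maps and CNN architecture are not reproduced here.

```python
# Sketch: building a time-frequency input for a CNN from a heart-sound
# recording, as one plausible version of the representations compared above.
import librosa
import numpy as np

def recording_input(wav_path: str, sr: int = 2000) -> np.ndarray:
    """Log-mel spectrogram of the whole recording (not per-beat),
    which the study found preserves beat-to-beat variability."""
    y, _ = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=256, hop_length=64, n_mels=32
    )
    return librosa.power_to_db(mel, ref=np.max)  # shape (n_mels, n_frames)
```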