
Publications by Francesco Renna

2017

On modifying the temporal modeling of HSMMs for pediatric heart sound segmentation

Authors
Oliveira, J; Mantadelis, T; Renna, F; Gomes, P; Coimbra, M;

Publication
2017 IEEE INTERNATIONAL WORKSHOP ON SIGNAL PROCESSING SYSTEMS (SIPS)

Abstract
Heart sounds are difficult to interpret because a) they are composed of several different sounds, all contained in very tight time windows; b) they vary from physiognomy to physiognomy even if they show similar characteristics; c) human ears are not naturally trained to recognize heart sounds. Computer-assisted decision systems may help, but they require robust signal processing algorithms. In this paper, we use a real-life dataset in order to compare the performance of a hidden Markov model and several hidden semi-Markov models that use the Poisson, Gaussian, and Gamma distributions, as well as a non-parametric probability mass function, to model the sojourn time. Using a subject-dependent approach, a model that uses the Poisson distribution as an approximation for the sojourn time is shown to outperform all other models. This model was able to recreate the "true" state sequence with a positive predictability per state of 96%. Finally, we used a conditional distribution in order to compute the confidence of our classifications. By using the proposed confidence metric, we were able to identify wrong classifications and boost our system (on average) from approximately 83% up to approximately 90% of positive predictability per sample.
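The Poisson sojourn model compared in this abstract can be sketched as follows. This is an illustrative sketch, not the authors' code; the function name, the truncation length, and the example rate are assumptions. Each HSMM state gets a duration distribution obtained by truncating a Poisson probability mass function to a maximum duration and renormalizing:

```python
# Hedged sketch: a truncated, renormalized Poisson sojourn-time distribution
# for one HSMM state, as one of the duration models compared in the paper.
import math

def poisson_sojourn_pmf(lam, max_duration):
    """Return p(d) for durations d = 1..max_duration, renormalized after truncation."""
    pmf = [math.exp(-lam) * lam ** d / math.factorial(d)
           for d in range(1, max_duration + 1)]
    total = sum(pmf)
    return [p / total for p in pmf]

# Example (hypothetical numbers): a state expected to last about 12 frames.
pmf = poisson_sojourn_pmf(lam=12.0, max_duration=50)
```

The single rate parameter `lam` both sets the expected duration and concentrates the mass around it, which is what makes the Poisson a compact sojourn approximation.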

2019

Using Soft Attention Mechanisms to Classify Heart Sounds

Authors
Oliveira, J; Nogueira, M; Ramos, C; Renna, F; Ferreira, C; Coimbra, M;

Publication
2019 41ST ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY (EMBC)

Abstract
Recently, soft attention mechanisms have been successfully used in a wide variety of applications, such as the generation of image captions, text translation, etc. This mechanism attempts to mimic the visual cortex of the human brain, which does not analyze all the objects in a scene equally but looks for clues (or salient features) that might give a more compact representation of the environment. In doing so, the human brain can process information more quickly and without overloading. Having learned this lesson, in this paper we try to build a bridge from the visual to the audio scene classification problem, namely the classification of heart sound signals. To do so, a novel approach merging soft attention mechanisms and recurrent neural networks is proposed. Using the proposed methodology, the algorithm can automatically learn significant audio segments when detecting and classifying abnormal heart sound signals, both improving the classification results and providing a simple justification for them.
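The attention pooling described here can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the scoring is reduced to a single dot product, and all names are assumptions. Each recurrent hidden state is scored, the scores are softmaxed into attention weights, and the weighted sum gives a compact summary that also indicates which audio segments mattered:

```python
# Hypothetical sketch of soft-attention pooling over recurrent hidden states:
# score each state, softmax the scores, and form a weighted summary vector.
import math

def soft_attention(hidden_states, score_weights):
    """hidden_states: list of T equal-length vectors; score_weights: one vector."""
    scores = [sum(w * h for w, h in zip(score_weights, hs)) for hs in hidden_states]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    alphas = [e / z for e in exps]       # attention weights, sum to 1
    dim = len(hidden_states[0])
    context = [sum(a * hs[i] for a, hs in zip(alphas, hidden_states))
               for i in range(dim)]
    return alphas, context
```

The weights `alphas` are what provide the "simple justification": frames with large weight are the segments the classifier attended to.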

2019

A Subject-Driven Unsupervised Hidden Semi-Markov Model and Gaussian Mixture Model for Heart Sound Segmentation

Authors
Oliveira, J; Renna, F; Coimbra, M;

Publication
IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING

Abstract
The analysis of heart sounds is a challenging task, due to the quick temporal onset between successive events and the fact that an important fraction of the information carried by phonocardiogram (PCG) signals lies in the inaudible part of the human spectrum. For these reasons, computer-aided analysis of the PCG can dramatically improve the quantity of information recovered from such signals. In this paper, a hidden semi-Markov model (HSMM) is used to automatically segment PCG signals. In the proposed models, the emission probability distributions are approximated via Gaussian mixture model (GMM) priors. The choice of GMM emission probability distributions allows re-estimation routines to be applied to automatically adjust the HSMM emission probability distributions to each subject. Building on the proposed method for fine-tuning emission distributions, a novel subject-driven unsupervised heart sound segmentation algorithm is proposed and validated over the publicly available PhysioNet dataset. Perhaps surprisingly, the proposed unsupervised method achieved results in line with state-of-the-art supervised approaches, when applied to long heart sounds.
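The GMM emissions and the subject-level re-estimation can be sketched as below. This is a minimal one-dimensional illustration under assumed names, not the paper's algorithm: the emission likelihood of a feature value is a Gaussian mixture, and one EM-style mean update shifts the mixture toward a new subject's data:

```python
# Hedged sketch: 1-D GMM emission likelihood for an HSMM state, plus one
# EM mean update that adapts the mixture to a new subject's samples.
import math

def gaussian(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def gmm_density(x, weights, means, stds):
    """Emission likelihood of one feature value under the GMM prior."""
    return sum(w * gaussian(x, m, s) for w, m, s in zip(weights, means, stds))

def reestimate_means(samples, weights, means, stds):
    """One EM mean update: responsibilities weight each sample per component."""
    K = len(means)
    resp_sums, weighted = [0.0] * K, [0.0] * K
    for x in samples:
        resp = [w * gaussian(x, m, s) for w, m, s in zip(weights, means, stds)]
        z = sum(resp)
        for k in range(K):
            resp_sums[k] += resp[k] / z
            weighted[k] += (resp[k] / z) * x
    return [weighted[k] / resp_sums[k] for k in range(K)]
```

Iterating such updates on a subject's own recording is the mechanism that makes the segmentation subject-driven without requiring labels.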

2019

Adaptive Sojourn Time HSMM for Heart Sound Segmentation

Authors
Oliveira, J; Renna, F; Mantadelis, T; Coimbra, M;

Publication
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS

Abstract
Heart sounds are difficult to interpret due to events with very short temporal onset between them (tens of milliseconds) and dominant frequencies that are out of the human audible spectrum. Computer-assisted decision systems may help, but they require robust signal processing algorithms. In this paper, we propose a new algorithm for heart sound segmentation using a hidden semi-Markov model. The proposed algorithm infers more suitable sojourn time parameters than those currently suggested by the state of the art, through a maximum likelihood approach. We test our approach over three different datasets, including the publicly available PhysioNet and Pascal datasets. We also release a pediatric dataset composed of 29 heart sounds. In contrast with any other dataset available online, the annotations of the heart sounds in the released dataset contain information about the beginning and the ending of each heart sound event. Annotations were made by two cardiopulmonologists. The proposed algorithm is compared with the current state of the art. The results show a significant increase in segmentation performance, regardless of the dataset or the methodology presented. For example, when using the PhysioNet dataset to train and to evaluate the HSMMs, our algorithm achieved an average F-score of 92% compared to 89% achieved by the algorithm described in [D. B. Springer, L. Tarassenko, and G. D. Clifford, "Logistic regression-HSMM-based heart sound segmentation," IEEE Transactions on Biomedical Engineering, vol. 63, no. 4, pp. 822-832, 2016]. In this sense, the proposed approach to adapt sojourn time parameters represents an effective solution for heart sound segmentation problems, even when the training data does not perfectly express the variability of the testing data.
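The maximum-likelihood step for sojourn parameters has a simple closed form for a Poisson duration model: the ML estimate of the rate is the sample mean of the observed state durations. The sketch below (illustrative only; names and the run-length extraction are assumptions) collects run lengths from an annotated state sequence and estimates one rate per state:

```python
# Hedged sketch: extract per-state sojourn durations from a labeled state
# sequence, then take the ML estimate of a Poisson rate (the sample mean).
from itertools import groupby

def state_durations(state_seq):
    """Run lengths per state, e.g. 'AAABBAA' -> {'A': [3, 2], 'B': [2]}."""
    runs = {}
    for state, run in groupby(state_seq):
        runs.setdefault(state, []).append(sum(1 for _ in run))
    return runs

def poisson_rate_mle(durations):
    """Maximum-likelihood Poisson rate: the sample mean of the durations."""
    return sum(durations) / len(durations)
```

Re-fitting the rates to each dataset (or recording) in this way is what lets the sojourn model track duration statistics the generic literature values miss.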

2017

A Data-Driven Feature Extraction Method for Enhanced Phonocardiogram Segmentation

Authors
Renna, F; Oliveira, J; Coimbra, MT;

Publication
2017 COMPUTING IN CARDIOLOGY (CINC)

Abstract
In this work, we present a method to extract features from heart sound signals in order to enhance segmentation performance. The approach is data-driven, since the way features are extracted from the recorded signals is adapted to the data itself. The proposed method is based on the extraction of delay vectors, which are modeled with Gaussian mixture model priors, and an information-theoretic dimensionality reduction step which aims to maximize discrimination between delay vectors in different segments of the heart sound signal. We test our approach with heart sounds from the publicly available PhysioNet dataset showing an average F1 score of 92.6% in detecting S1 and S2 sounds.
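The delay-vector extraction is a standard delay-embedding step and can be sketched as follows (a minimal sketch under assumed names; the embedding dimension and lag are free parameters, not values from the paper). Each time index yields a vector of `dim` samples spaced `tau` apart:

```python
# Hedged sketch: delay (time-delay embedding) vectors from a sampled signal.
# Each vector stacks dim samples spaced tau apart, one per valid time index.
def delay_vectors(signal, dim, tau=1):
    """Return all length-dim vectors [x[i], x[i+tau], ..., x[i+(dim-1)*tau]]."""
    last = len(signal) - (dim - 1) * tau
    return [signal[i:i + (dim - 1) * tau + 1:tau] for i in range(last)]
```

These vectors are what the GMM priors then model per segment, before the dimensionality reduction step picks the most discriminative directions.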

2018

Convolutional Neural Networks for Heart Sound Segmentation

Authors
Renna, F; Oliveira, J; Coimbra, MT;

Publication
2018 26TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO)

Abstract
In this paper, deep convolutional neural networks are used to segment heart sounds into their main components. The proposed method is based on the adoption of a novel deep convolutional neural network architecture, which is inspired by similar approaches used for image segmentation. A further post-processing step is applied to the output of the proposed neural network, which induces the output state sequence to be consistent with the natural sequence of states within a heart sound signal (S1, systole, S2, diastole). The proposed approach is tested on heart sound signals longer than 5 seconds from the publicly available PhysioNet dataset, and it is shown to outperform current state-of-the-art segmentation methods by achieving an average sensitivity of 93.4% and an average positive predictive value of 94.5% in detecting S1 and S2 sounds.
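The post-processing step that enforces the natural state order can be sketched as a tiny Viterbi pass over the four-state cycle. This is an illustrative reconstruction, not the paper's exact procedure: the self-transition probability and all names are assumptions, and the network outputs are abstracted as per-frame probability dictionaries:

```python
# Hedged sketch: constrain per-frame network outputs to the cyclic order
# S1 -> systole -> S2 -> diastole with a small Viterbi decoding pass.
import math

STATES = ["S1", "systole", "S2", "diastole"]

def cyclic_viterbi(frame_probs, stay=0.9):
    """frame_probs: list of dicts state -> probability from the network."""
    n = len(STATES)
    log = lambda p: math.log(max(p, 1e-12))
    best = [log(frame_probs[0][s]) for s in STATES]
    back = []
    for probs in frame_probs[1:]:
        new, ptr = [], []
        for j in range(n):
            prev = (j - 1) % n
            # Either stay in state j or advance from the previous cycle state.
            score, arg = max((best[j] + log(stay), j),
                             (best[prev] + log(1 - stay), prev))
            new.append(score + log(probs[STATES[j]]))
            ptr.append(arg)
        best = new
        back.append(ptr)
    j = max(range(n), key=lambda k: best[k])
    path = [j]
    for ptr in reversed(back):
        j = ptr[j]
        path.append(j)
    return [STATES[k] for k in reversed(path)]
```

By construction every decoded transition is either a self-loop or a step to the next state in the cycle, which is the consistency property the post-processing is meant to guarantee.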
