
Publications by Diogo Marcelo Nogueira

2017

Classifying Heart Sounds Using Images of MFCC and Temporal Features

Authors
Nogueira, DM; Ferreira, CA; Jorge, AM;

Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE (EPIA 2017)

Abstract
Phonocardiogram signals contain very useful information about the condition of the heart. The phonocardiogram is a recording of heart sounds that can be visually represented on a chart. By analyzing these signals, early detection and diagnosis of heart diseases can be performed. Intelligent and automated analysis of the phonocardiogram is therefore very important to determine whether the patient's heart works properly or should be referred to an expert for further evaluation. In this work, we use electrocardiograms and phonocardiograms collected simultaneously, from the Physionet challenge database, and we aim to determine whether a phonocardiogram corresponds to a "normal" or "abnormal" physiological state. The main idea is to translate a 1D phonocardiogram signal into a 2D image that represents temporal and Mel-frequency cepstral coefficients features. To do that, we develop a novel approach that uses both features. First, we segment the phonocardiogram signals with an algorithm based on a logistic regression hidden semi-Markov model, which uses the electrocardiogram signals as reference. After that, we extract a group of features from the time and frequency domain (Mel-frequency cepstral coefficients) of the phonocardiogram. Then, we combine these features into a two-dimensional time-frequency heat map representation. Lastly, we run a binary classifier to learn a model that discriminates between normal and abnormal phonocardiogram signals. In the experiments, we study the contribution of temporal and Mel-frequency cepstral coefficients features and evaluate three classification algorithms: Support Vector Machines, Convolutional Neural Network, and Random Forest. The best results are achieved when we map both temporal and Mel-frequency cepstral coefficients features into a 2D image and use the Support Vector Machines with a radial basis function kernel.
Indeed, by including both temporal and Mel-frequency cepstral coefficients features, we obtain slightly better results than those reported by the challenge participants, who used large amounts of data and high computational power.
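The core of the pipeline described in this abstract, combining per-segment temporal features with MFCCs into a single 2D map and comparing such maps with a radial basis function kernel, can be sketched as follows. This is a minimal illustration, not the authors' code: the feature matrices, the min-max normalisation, and the `gamma` value are all assumptions.

```python
import numpy as np

def heatmap_features(temporal, mfcc):
    """Stack per-segment temporal features and MFCCs into one 2D map.

    temporal: (n_segments, n_time_features); mfcc: (n_segments, n_coefficients).
    Each row is one heartbeat segment; the columns mix time- and
    frequency-domain features, giving the time-frequency "image"
    fed to the classifier.
    """
    img = np.hstack([temporal, mfcc])
    # min-max normalise so the map behaves like pixel intensities in [0, 1]
    return (img - img.min()) / (np.ptp(img) + 1e-12)

def rbf_kernel(a, b, gamma=0.1):
    """Gaussian (radial basis function) kernel between two flattened maps,
    the similarity an RBF-kernel SVM evaluates between training examples."""
    return np.exp(-gamma * np.sum((a - b) ** 2))
```

In practice the flattened maps would be passed to an off-the-shelf SVM implementation; the kernel function above only shows the similarity measure that a radial basis function induces.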

2019

Classifying Heart Sounds Using Images of Motifs, MFCC and Temporal Features

Authors
Nogueira, DM; Ferreira, CA; Gomes, EF; Jorge, AM;

Publication
JOURNAL OF MEDICAL SYSTEMS

Abstract
Cardiovascular disease is the leading cause of death in the world, and its early detection is a key to improving long-term health outcomes. The auscultation of the heart is still an important method in the medical process because it is very simple and cheap. To detect possible heart anomalies at an early stage, an automatic method enabling low-cost cardiac health screening for the general population would be highly valuable. By analyzing the phonocardiogram signals, it is possible to perform cardiac diagnosis and find possible anomalies at an early stage. Therefore, the development of intelligent and automated analysis tools for the phonocardiogram is very relevant. In this work, we use simultaneously collected electrocardiograms and phonocardiograms from the Physionet Challenge database with the main objective of determining whether a phonocardiogram corresponds to a normal or abnormal physiological state. Our main contribution is the methodological combination of time domain features and frequency domain features of phonocardiogram signals to improve automatic cardiac disease classification. This novel approach is developed using both features. First, the phonocardiogram signals are segmented with an algorithm based on a logistic regression hidden semi-Markov model, which uses electrocardiogram signals as a reference. Then, two groups of features from the time and frequency domain are extracted from the phonocardiogram segments. One group is based on motifs and the other on Mel-frequency cepstral coefficients. After that, we combine these features into a two-dimensional time-frequency heat map representation. Lastly, a binary classifier is applied to both groups of features to learn a model that discriminates between normal and abnormal phonocardiogram signals. In the experiments, three classification algorithms are used: Support Vector Machines, Convolutional Neural Network, and Random Forest.
The best results are achieved when both time and Mel-frequency cepstral coefficients features are considered, using a Support Vector Machine with a radial kernel.
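The motif features mentioned in this abstract rest on finding recurring subsequences in the signal. A brute-force sketch of the classic motif notion (the pair of non-overlapping, z-normalised subsequences at minimum Euclidean distance) is shown below; the paper's exact motif extraction may differ, and the window length `m` is an assumption.

```python
import numpy as np

def top_motif(signal, m):
    """Return the index pair (i, j) of the two non-overlapping length-m
    subsequences that are closest in z-normalised Euclidean distance,
    i.e. the signal's top motif under the classic definition."""
    s = np.asarray(signal, dtype=float)
    n = len(s) - m + 1
    subs = np.array([s[i:i + m] for i in range(n)])
    # z-normalise each window so motifs match on shape, not amplitude
    subs = (subs - subs.mean(axis=1, keepdims=True)) / (
        subs.std(axis=1, keepdims=True) + 1e-12)
    best_dist, best_pair = np.inf, None
    for i in range(n):
        for j in range(i + m, n):  # skip trivially overlapping matches
            d = np.linalg.norm(subs[i] - subs[j])
            if d < best_dist:
                best_dist, best_pair = d, (i, j)
    return best_pair
```

The quadratic search is fine for short heartbeat segments; counting or summarising such motif occurrences per segment yields the time-domain feature group that the abstract pairs with the MFCCs.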

2019

Heart Sounds Classification Using Images from Wavelet Transformation

Authors
Nogueira, DM; Zarmehri, MN; Ferreira, CA; Jorge, AM; Antunes, L;

Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE, EPIA 2019, PT I

Abstract
Cardiovascular disease is the leading cause of death around the world and its early detection is a key to improving long-term health outcomes. To detect possible heart anomalies at an early stage, an automatic method enabling low-cost cardiac health screening for the general population would be highly valuable. By analyzing the phonocardiogram (PCG) signals, it is possible to perform cardiac diagnosis and find possible anomalies at an early stage. Accordingly, the development of intelligent and automated analysis tools for the PCG is very relevant. In this work, the PCG signals are studied with the main objective of determining whether a PCG signal corresponds to a "normal" or "abnormal" physiological state. The main contribution of this work is the evidence provided that time domain features can be combined with features extracted from a wavelet transformation of PCG signals to improve automatic cardiac disease classification. We empirically demonstrate that, from a pool of alternatives, the best classification results are achieved when both time and wavelet features are used by a Support Vector Machine with a linear kernel. Our approach has obtained better results than those reported by the challenge participants, who used large amounts of data and high computational power.
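A wavelet transformation decomposes the PCG into approximation (low-frequency) and detail (high-frequency) coefficients that can serve as classification features. The single-level Haar transform below is the simplest instance of this idea and is shown only as an illustration; the abstract does not specify which wavelet family the authors used.

```python
import numpy as np

def haar_dwt(signal):
    """One level of a Haar discrete wavelet transform.

    Averaging adjacent samples gives the approximation (coarse trend);
    differencing them gives the detail (local fluctuation). The 1/sqrt(2)
    factor keeps the transform orthonormal, so signal energy is preserved
    across the two coefficient sets.
    """
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:                    # pad to even length
        s = np.append(s, s[-1])
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail
```

Summary statistics of the approximation and detail coefficients (energy, variance, entropy, and the like) are typical wavelet features; applying the transform recursively to the approximation yields a multi-level decomposition.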

2019

Using Soft Attention Mechanisms to Classify Heart Sounds

Authors
Oliveira, J; Nogueira, M; Ramos, C; Renna, F; Ferreira, C; Coimbra, M;

Publication
2019 41ST ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY (EMBC)

Abstract
Recently, soft attention mechanisms have been successfully used in a wide variety of applications such as the generation of image captions, text translation, etc. This mechanism attempts to mimic the visual cortex of the human brain by not analyzing all the objects in a scene equally, but by looking for clues (or salient features) which might give a more compact representation of the environment. In doing so, the human brain can process information more quickly and without overloading. Having learned this lesson, in this paper we try to build a bridge from the visual to the audio scene classification problem, namely the classification of heart sound signals. To do so, a novel approach merging soft attention mechanisms and recurrent neural networks is proposed. Using the proposed methodology, the algorithm can automatically learn the significant audio segments when detecting and classifying abnormal heart sound signals, both improving the classification results and providing a simple justification for them.
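The soft attention idea, weighting the recurrent network's hidden states by learned relevance scores and pooling them into a compact context vector, can be sketched with dot-product scoring. This is a generic illustration, not the paper's exact scoring function; `hidden_states` and `query` are assumed placeholders for the RNN outputs and a learned query vector.

```python
import numpy as np

def soft_attention(hidden_states, query):
    """Soft attention pooling over a (T, d) matrix of RNN hidden states.

    score_t  = h_t . query           (relevance of time step t)
    weight_t = softmax(score)_t      (weights sum to 1 over the sequence)
    context  = sum_t weight_t * h_t  (compact summary of the sequence)
    """
    scores = hidden_states @ query
    scores = scores - scores.max()          # subtract max for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    context = weights @ hidden_states
    return context, weights
```

The weights also act as a built-in explanation: the time steps receiving the largest weights are the audio segments the classifier relied on, which is the "simple justification" the abstract refers to.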

2021

Do we really need a segmentation step in heart sound classification algorithms?

Authors
Oliveira, J; Nogueira, D; Renna, F; Ferreira, C; Jorge, AM; Coimbra, M;

Publication
2021 43RD ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE & BIOLOGY SOCIETY (EMBC)

Abstract
Cardiac auscultation is the key screening procedure to detect and identify cardiovascular diseases (CVDs). One of the many steps to automatically detect CVDs using auscultation concerns the detection and delimitation of the heart sound boundaries, a process known as segmentation. Whether or not to include a segmentation step in the signal classification pipeline is nowadays a topic of discussion. To the best of our knowledge, the outcome of a segmentation algorithm has been used almost exclusively to align the different signal segments according to the heartbeat. In this paper, the need for a heartbeat alignment step is tested and evaluated over different machine learning algorithms, including deep learning solutions. Of the different classifiers tested, the Gated Recurrent Unit (GRU) network and Convolutional Neural Network (CNN) algorithms are shown to be the most robust; namely, these algorithms can detect the presence of heart murmurs even without a heartbeat alignment step. In contrast, the Support Vector Machine (SVM) and Random Forest (RF) algorithms require an explicit segmentation step to effectively detect heart sounds and murmurs; without it, their overall performance is expected to drop by approximately 5% in both cases.

2022

The CirCor DigiScope Dataset: From Murmur Detection to Murmur Classification

Authors
Oliveira, J; Renna, F; Costa, PD; Nogueira, M; Oliveira, C; Ferreira, C; Jorge, A; Mattos, S; Hatem, T; Tavares, T; Elola, A; Rad, AB; Sameni, R; Clifford, GD; Coimbra, MT;

Publication
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS

Abstract
Cardiac auscultation is one of the most cost-effective techniques used to detect and identify many heart conditions. Computer-assisted decision systems based on auscultation can support physicians in their decisions. Unfortunately, the application of such systems in clinical trials is still minimal, since most of them only aim to detect the presence of extra or abnormal waves in the phonocardiogram signal, i.e., only a binary ground truth variable (normal vs abnormal) is provided. This is mainly due to the lack of large publicly available datasets where a more detailed description of such abnormal waves (e.g., cardiac murmurs) exists. To pave the way to more effective research on healthcare recommendation systems based on auscultation, our team has prepared the currently largest pediatric heart sound dataset. A total of 5282 recordings have been collected from the four main auscultation locations of 1568 patients; in the process, 215780 heart sounds have been manually annotated. Furthermore, and for the first time, each cardiac murmur has been manually annotated by an expert annotator according to its timing, shape, pitch, grading, and quality. In addition, the auscultation locations where the murmur is present were identified, as well as the auscultation location where the murmur is detected most intensively. Such a detailed description for a relatively large number of heart sounds may pave the way for new machine learning algorithms with real-world application to the detection and analysis of murmur waves for diagnostic purposes.
