
About

I am Diogo Marcelo Esterlita Nogueira, born on March 27, 1990. I am from São João da Pesqueira, Viseu, and I currently live in Porto.

I graduated in Biomedical Engineering from the University of Trás-os-Montes and Alto Douro in 2011 and completed my Master's degree in Medical Physics at the Faculty of Sciences of the University of Porto in 2014.

I started my professional career at INESC TEC in 2012, in the former Optoelectronics and Electronic Systems Unit, today called the Centre for Applied Photonics. During this period, I collaborated on the EYEFRY research project; my participation in it ended in 2016.

In 2016 I joined another INESC TEC centre, LIAAD, and I am currently working in the area of data mining and machine learning.

Interest Topics

Details

  • Name: Diogo Marcelo Nogueira
  • Role: External Student
  • Since: 15th November 2012
Publications

2025

Histopathological Imaging Dataset for Oral Cancer Analysis: A Study with a Data Leakage Warning

Authors
Nogueira, DM; Gomes, EF;

Publication
Proceedings of the 18th International Joint Conference on Biomedical Engineering Systems and Technologies, BIOSTEC 2025 - Volume 1, Porto, Portugal, February 20-22, 2025.

Abstract

2025

Leveraging Synthetic Data to Develop a Machine Learning Model for Voiding Flow Rate Prediction From Audio Signals

Authors
Alvarez, ML; Bahillo, A; Arjona, L; Nogueira, DM; Gomes, EF; Jorge, AM;

Publication
IEEE ACCESS

Abstract
Sound-based uroflowmetry (SU) is a non-invasive technique emerging as an alternative to traditional uroflowmetry (UF) to calculate the voiding flow rate based on the sound generated by the urine impacting the water in a toilet, enabling remote monitoring and reducing the patient burden and clinical costs. This study trains four different machine learning (ML) models (random forest, gradient boosting, support vector machine and convolutional neural network) using both regression and classification approaches to predict and categorize the voiding flow rate from sound events. The models were trained with a dataset that contains sounds from synthetic void events generated with a high precision peristaltic pump and a traditional toilet. Sound was simultaneously recorded with three devices: Ultramic384k, Mi A1 smartphone and Oppo Smartwatch. To extract the audio features, our analysis showed that segmenting the audio signals into 1000 ms segments with frequencies up to 16 kHz provided the best results. Results show that random forest achieved the best performance in both regression and classification tasks, with a mean absolute error (MAE) of 0.9, 0.7 and 0.9 ml/s and quadratic weighted kappa (QWK) of 0.99, 1.0 and 1.0 for the three devices. To evaluate the models in a real environment and assess the effectiveness of training with synthetic data, the best-performing models were retrained and validated using a real voiding sounds dataset. The results reported an MAE below 2.5 ml/s and a QWK above 0.86 for regression and classification tasks, respectively.
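
A minimal, hypothetical sketch of the kind of pipeline described above: segment an audio signal into 1000 ms windows, extract simple spectral features up to 16 kHz, and train a random forest regressor scored with MAE. The 32 kHz sampling rate, the log-spectrum features, and the placeholder data are assumptions for illustration, not the authors' implementation.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

SAMPLE_RATE = 32_000       # assumed sampling rate (Nyquist = 16 kHz)
SEGMENT_LEN = SAMPLE_RATE  # 1000 ms segments, as reported in the abstract

def segment_features(signal):
    """Split a 1-D audio signal into 1 s segments and compute a coarse
    64-bin log-magnitude spectrum per segment as a toy feature vector."""
    freqs = np.fft.rfftfreq(SEGMENT_LEN, d=1.0 / SAMPLE_RATE)
    feats = []
    for i in range(len(signal) // SEGMENT_LEN):
        seg = signal[i * SEGMENT_LEN:(i + 1) * SEGMENT_LEN]
        spectrum = np.abs(np.fft.rfft(seg))[freqs <= 16_000]
        spectrum = spectrum[:(len(spectrum) // 64) * 64]
        feats.append(np.log1p(spectrum.reshape(64, -1).mean(axis=1)))
    return np.array(feats)

# Placeholder void-event audio and flow rates (ml/s); a real dataset would
# come from the recording devices described in the abstract.
rng = np.random.default_rng(0)
X = segment_features(rng.standard_normal(10 * SEGMENT_LEN))
y = rng.uniform(5, 25, size=len(X))

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("MAE (ml/s):", mean_absolute_error(y, model.predict(X)))

The classification variant reported in the abstract could be sketched the same way by binning the flow rates and scoring with sklearn's cohen_kappa_score(..., weights="quadratic").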

2025

Survey on Detection of Fraudulent Documents

Authors
Nogueira, DM; Simões, M; Ferreira, C; Ribeiro, RP; Martínez-Rego, D; Cai, A; Gama, J;

Publication

Abstract

2023

The selection of an optimal segmentation region in physiological signals

Authors
Oliveira, J; Carvalho, M; Nogueira, D; Coimbra, M;

Publication
INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH

Abstract
Physiological signals are often corrupted by noisy sources. Usually, artificial intelligence algorithms analyze the whole signal, regardless of its varying quality. Instead, experienced cardiologists search for a high-quality signal segment, where more accurate conclusions can be drawn. We propose a methodology that simultaneously selects the optimal processing region of a physiological signal and determines its decoding into a state sequence of physiologically meaningful events. Our approach comprises two phases. First, the training of a neural network that then enables the estimation of the state probability distribution of a signal sample. Second, the use of the neural network output within an integer program. The latter models the problem of finding a time window by maximizing a likelihood function defined by the user. Our method was tested and validated in two types of signals, the phonocardiogram and the electrocardiogram. In phonocardiogram and electrocardiogram segmentation tasks, the system's sensitivity increased on average from 95.1% to 97.5% and from 78.9% to 83.8%, respectively, when compared to standard approaches found in the literature.
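
As a rough illustration of the second phase described above, the snippet below scans per-sample state probabilities (random placeholders standing in for a trained network's output) for the fixed-length window with the highest attainable log-likelihood. The exhaustive scan is only a toy substitute for the integer program the paper formulates, and the state count, window length, and likelihood are assumptions.

import numpy as np

def best_window(state_probs, window):
    """state_probs: (n_samples, n_states) probabilities per signal sample.
    Returns the start index of the best window and its decoded state sequence."""
    log_p = np.log(state_probs + 1e-12)
    per_sample = log_p.max(axis=1)            # best achievable log-prob per sample
    cum = np.concatenate(([0.0], np.cumsum(per_sample)))
    scores = cum[window:] - cum[:-window]     # score every candidate window in O(n)
    start = int(np.argmax(scores))
    states = log_p[start:start + window].argmax(axis=1)
    return start, states

# Toy example: 5000 samples, 4 states (e.g. S1 / systole / S2 / diastole).
rng = np.random.default_rng(1)
probs = rng.dirichlet(np.ones(4), size=5_000)
start, states = best_window(probs, window=1_000)
print("best window starts at sample", start)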

2022

The CirCor DigiScope Dataset: From Murmur Detection to Murmur Classification

Authors
Oliveira, J; Renna, F; Costa, PD; Nogueira, M; Oliveira, C; Ferreira, C; Jorge, A; Mattos, S; Hatem, T; Tavares, T; Elola, A; Rad, AB; Sameni, R; Clifford, GD; Coimbra, MT;

Publication
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS

Abstract
Cardiac auscultation is one of the most cost-effective techniques used to detect and identify many heart conditions. Computer-assisted decision systems based on auscultation can support physicians in their decisions. Unfortunately, the application of such systems in clinical trials is still minimal since most of them only aim to detect the presence of extra or abnormal waves in the phonocardiogram signal, i.e., only a binary ground truth variable (normal vs abnormal) is provided. This is mainly due to the lack of large publicly available datasets, where a more detailed description of such abnormal waves (e.g., cardiac murmurs) exists. To pave the way to more effective research on healthcare recommendation systems based on auscultation, our team has prepared the currently largest pediatric heart sound dataset. A total of 5282 recordings have been collected from the four main auscultation locations of 1568 patients; in the process, 215780 heart sounds have been manually annotated. Furthermore, and for the first time, each cardiac murmur has been manually annotated by an expert annotator according to its timing, shape, pitch, grading, and quality. In addition, the auscultation locations where the murmur is present were identified, as well as the auscultation location where the murmur is detected more intensively. Such a detailed description of a relatively large number of heart sounds may pave the way for new machine learning algorithms with a real-world application for the detection and analysis of murmur waves for diagnostic purposes.
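
To make the annotation scheme described above concrete, here is a minimal sketch of how one recording and its murmur attributes could be represented in code. The field names are illustrative only and do not reproduce the dataset's actual files or schema.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class HeartSoundRecording:
    patient_id: str
    auscultation_location: str        # e.g. one of the four main locations
    murmur_present: bool
    # Murmur attributes are only defined when a murmur was annotated.
    timing: Optional[str] = None      # e.g. "holosystolic"
    shape: Optional[str] = None       # e.g. "plateau"
    pitch: Optional[str] = None       # e.g. "low", "medium", "high"
    grading: Optional[str] = None
    quality: Optional[str] = None

def murmur_recordings(recordings: List[HeartSoundRecording]) -> List[HeartSoundRecording]:
    """Filter the subset of recordings annotated with a murmur."""
    return [r for r in recordings if r.murmur_present]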