Details
Name
Bruno Miguel Veloso
Position
Senior Researcher
Since
01 March 2013
Nationality
Portugal
Centre
Laboratório de Inteligência Artificial e Apoio à Decisão
Contacts
+351220402963
bruno.m.veloso@inesctec.pt
2026
Authors
Toribio, L; Veloso, B; Gama, J; Zafra, A;
Publication
NEUROCOMPUTING
Abstract
Early fault detection remains a critical challenge in predictive maintenance (PdM), particularly within critical infrastructure, where undetected failures or delayed interventions can compromise safety and disrupt operations. Traditional anomaly detection methods are typically reactive, relying on real-time sensor data to identify deviations as they occur. This reactive nature often provides insufficient lead time for effective maintenance planning. To address this limitation, we propose a novel two-stage early detection framework that integrates time series forecasting with anomaly detection to anticipate equipment failures several hours in advance. In the first stage, future sensor signal values are predicted using forecasting models; in the second, conventional anomaly detection algorithms are applied directly to the forecasted data. By shifting from real-time to anticipatory detection, the framework aims to deliver actionable early warnings, enabling timely and preventive maintenance. We validate this approach through a case study focused on metro train systems, an environment where early fault detection is crucial for minimizing service disruptions, optimizing maintenance schedules, and ensuring passenger safety. The framework is evaluated across three forecast horizons (1, 3, and 6 hours ahead) using twelve state-of-the-art anomaly detection algorithms from diverse methodological families. Detection performance is assessed using five metrics. Results show that anomaly detection remains highly effective at short to medium horizons, with performance at 1-hour and 3-hour forecasts comparable to that of real-time data. Ensemble and deep learning models exhibit strong robustness to forecast uncertainty, maintaining results consistent with real-time data even at 6-hour forecasts. In contrast, distance- and density-based models suffer substantial degradation at the longer horizon (6 hours), reflecting their sensitivity to distributional shifts in predicted signals. Overall, the proposed framework offers a practical and extensible solution for enhancing traditional PdM systems with proactive capabilities. By enabling early anomaly detection on forecasted data, it supports improved decision-making, operational resilience, and maintenance planning in industrial environments.
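The two-stage idea (forecast first, then detect anomalies on the forecast) can be illustrated with a minimal sketch. This is not the authors' code: the lag-based linear forecaster, the IsolationForest detector, the synthetic signal, and the 6-step horizon are all illustrative assumptions standing in for the forecasting models and the twelve detectors evaluated in the paper.

```python
# Minimal sketch of the two-stage framework described above, NOT the authors' code:
# stage 1 forecasts future sensor values with a simple lag-based regressor,
# stage 2 runs a conventional anomaly detector on the forecasted window.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import IsolationForest

def lag_matrix(series, n_lags):
    """Build (X, y) pairs where each row of X holds the previous n_lags values."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 40, 2000)) + 0.1 * rng.standard_normal(2000)

# Stage 1: fit a one-step-ahead forecaster and roll it forward h steps.
n_lags, horizon = 24, 6                      # e.g. hourly samples, 6-hour horizon
X, y = lag_matrix(signal, n_lags)
forecaster = LinearRegression().fit(X, y)

window = list(signal[-n_lags:])
forecast = []
for _ in range(horizon):
    nxt = forecaster.predict(np.array(window[-n_lags:]).reshape(1, -1))[0]
    forecast.append(nxt)
    window.append(nxt)

# Stage 2: apply an off-the-shelf detector directly to the forecasted values.
detector = IsolationForest(random_state=0).fit(signal[:-horizon].reshape(-1, 1))
flags = detector.predict(np.array(forecast).reshape(-1, 1))   # -1 = anomalous
print(list(zip(forecast, flags)))
```

In a real PdM pipeline the forecaster would be trained per sensor channel and the detector's alerts would feed the maintenance-planning step the abstract describes.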
2026
Authors
Dintén, R; Zorrilla, M; Veloso, B; Gama, J;
Publication
INFORMATION FUSION
Abstract
One of the key aspects of Industry 4.0 is using intelligent systems to optimize manufacturing processes by improving productivity and reducing costs. These systems have had a great impact on different areas, such as demand prediction and quality assessment; however, the prognostics and health management of industrial equipment is one of the areas with the greatest potential. This paper presents a comparative analysis of deep learning architectures applied to the prediction of remaining useful life (RUL) on public, real-world industrial datasets. The analysis includes some of the most commonly employed recurrent neural network variations and a novel approach based on a hybrid architecture using transformers. Moreover, we apply explainability techniques to provide comprehensive insights into the models' decision-making process. The contributions of the work are: (1) a novel transformer-based architecture for RUL prediction that outperforms traditional recurrent neural networks; (2) a detailed description of the design strategies used to construct the models on two under-explored datasets; (3) the use of explainability techniques to understand feature importance and to explain the models' predictions; and (4) making the models available to other researchers for reproducibility.
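As a rough illustration of a transformer-encoder regressor for RUL over a window of sensor readings, the following PyTorch sketch may help. It is not the hybrid architecture from the paper; the layer counts, model dimension, window length, and number of sensor channels are placeholders.

```python
# Illustrative PyTorch sketch of a transformer-encoder regressor for RUL,
# not the hybrid architecture from the paper; all sizes are placeholders.
import torch
import torch.nn as nn

class RULTransformer(nn.Module):
    def __init__(self, n_features, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)        # per-timestep projection
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)                  # scalar RUL estimate

    def forward(self, x):                                   # x: (batch, time, features)
        h = self.encoder(self.embed(x))
        return self.head(h[:, -1]).squeeze(-1)              # summarize via last timestep

# Toy usage: 8 windows of 30 timesteps with 14 sensor channels.
model = RULTransformer(n_features=14)
x = torch.randn(8, 30, 14)
rul_target = torch.rand(8) * 100
loss = nn.functional.mse_loss(model(x), rul_target)
loss.backward()
print(float(loss))
```

Attribution methods such as SHAP or attention inspection could then be applied on top of a trained model of this kind to obtain the feature-importance insights the abstract mentions.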
2025
Authors
Veloso, B; Neto, HA; Buarque, F; Gama, J;
Publication
DATA MINING AND KNOWLEDGE DISCOVERY
Abstract
Hyper-parameter optimization in machine learning models is critical for achieving peak performance. Over the past few years, numerous researchers have worked on this optimization challenge, primarily focusing on batch learning tasks where data distributions remain relatively unchanged. Addressing the properties of data streams, however, poses a substantial challenge, and with the rapid evolution of technology the demand for sophisticated techniques to handle dynamic data streams is becoming increasingly urgent. This paper introduces FSS-SPT, a novel adaptation of the Fish School Search (FSS) algorithm for online hyper-parameter optimization, designed explicitly for the dynamic context of data streams. One fundamental property of FSS-SPT is that it can switch between exploration and exploitation modes to cope with concept drift and converge to reasonable solutions. Our experiments on different datasets provide compelling evidence of the superior performance of the proposed methodology: FSS-SPT outperformed existing algorithms in two machine learning tasks, demonstrating its potential for practical application.
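The exploration/exploitation switching idea can be conveyed with a simplified population-based search loop. This is not the authors' FSS-SPT: the perturb-and-keep-if-better move, the step-size schedule, and the `prequential_error` objective (a synthetic stand-in for evaluating a streaming model with the candidate hyper-parameters) are all assumptions made for illustration.

```python
# Simplified population-based search sketch illustrating exploration vs.
# exploitation under concept drift; this is NOT the authors' FSS-SPT.
import numpy as np

rng = np.random.default_rng(42)

def prequential_error(hp, drift):
    # Hypothetical objective: the best hyper-parameter value moves when drift occurs.
    optimum = 0.3 if not drift else 0.7
    return (hp - optimum) ** 2 + 0.01 * rng.standard_normal()

n_fish, step = 8, 0.2
school = rng.uniform(0.0, 1.0, n_fish)          # candidate hyper-parameter values

for t in range(60):
    drift = t >= 30                              # pretend a concept drift hits at t = 30
    errors = np.array([prequential_error(f, drift) for f in school])

    # Individual move: try a random perturbation, keep it only if the error improves.
    trial = np.clip(school + rng.uniform(-step, step, n_fish), 0.0, 1.0)
    trial_errors = np.array([prequential_error(f, drift) for f in trial])
    improved = trial_errors < errors
    school = np.where(improved, trial, school)

    # Exploitation: shrink the step while candidates keep improving;
    # exploration: widen it again when improvement stalls (e.g. after drift).
    step = step * 0.95 if improved.any() else min(step * 1.5, 0.4)

best = school[np.argmin([prequential_error(f, drift) for f in school])]
print(f"best hyper-parameter after the stream: {best:.3f}, final step {step:.3f}")
```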
2025
Authors
Barbosa, I; Gama, J; Veloso, B;
Publication
Progress in Artificial Intelligence - 24th EPIA Conference on Artificial Intelligence, EPIA 2025, Faro, Portugal, October 1-3, 2025, Proceedings, Part II
Abstract
Predictive Maintenance (PdM) aims to prevent failures through early detection, yet it often lacks the explainability needed to support decision-making. Current PdM models can identify failures but fail to explain their root causes, especially in real-world scenarios with complex and limited labeled data. This study proposes an interpretable framework that combines LSTM-based anomaly detection with a dual-layered Root Cause Analysis (RCA) based on SHAP attributions. Applied to a real-world dataset, the method detects degradation transitions, tracks failure patterns over time, and provides interpretable information without explicit root cause labels.
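A minimal sketch of LSTM-based anomaly scoring on sensor windows is shown below. It follows the spirit of the framework above but is not the authors' implementation: the network sizes, toy data, and the simple per-channel error attribution are placeholders, and the SHAP-based root-cause layer is only indicated in the comments.

```python
# Minimal PyTorch sketch of LSTM-based anomaly scoring on sensor windows,
# in the spirit of the framework above but not the authors' implementation.
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    """Predict the next timestep from a window; large errors flag anomalies."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                        # x: (batch, time, features)
        h, _ = self.lstm(x)
        return self.out(h[:, -1])                # next-step estimate per feature

model = LSTMForecaster(n_features=6)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Train briefly on (assumed healthy) toy windows.
windows, targets = torch.randn(64, 20, 6), torch.randn(64, 6)
for _ in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(windows), targets)
    loss.backward()
    opt.step()

# Score new windows: the per-channel error gives a crude root-cause hint
# (which sensor deviates most); SHAP attributions would refine this step.
new_windows, new_targets = torch.randn(4, 20, 6), torch.randn(4, 6)
errors = (model(new_windows) - new_targets) ** 2           # (batch, features)
anomaly_score = errors.mean(dim=1)
likely_cause = errors.argmax(dim=1)                         # channel index, not SHAP
print(anomaly_score.tolist(), likely_cause.tolist())
```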
2025
Authors
Alcoforado, A; Ferraz, TP; Okamura, LHT; Veloso, BM; Costa, AHR; Fama, IC; Bueno, BD;
Publication
LINGUAMATICA
Abstract
Acquiring high-quality annotated data remains one of the most significant challenges in Natural Language Processing (NLP), especially for supervised learning approaches. In scenarios where pre-existing labeled data is unavailable, common solutions like crowdsourcing and zero-shot approaches often fall short, suffering from limitations such as the need for large datasets and a lack of guarantees regarding annotation quality. Traditionally, data for human annotation has been selected randomly, a practice that is not only costly and inefficient but also prone to bias, particularly in imbalanced datasets where minority classes are underrepresented. To address these challenges, this work introduces an automatic and informed data selection architecture designed to minimize the volume of required annotations while maximizing the diversity and representativeness of the selected data. Among the evaluated methods, Reverse Semantic Search (RSS) demonstrated superior performance, consistently outperforming random sampling in imbalanced scenarios and enhancing the effectiveness of trained classifiers. Furthermore, we compared RSS with other clustering-based approaches, providing insights into their respective strengths and weaknesses.
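The following sketch illustrates a clustering-based informed selection baseline, one of the families the paper compares against; it is not Reverse Semantic Search itself. The TF-IDF features, KMeans clustering, toy texts, and annotation budget are illustrative assumptions.

```python
# Sketch of a clustering-based informed selection baseline (one of the families
# compared in the paper); this is NOT Reverse Semantic Search itself.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min

texts = [
    "refund not processed after cancellation",
    "app crashes when opening settings",
    "love the new dark mode",
    "charged twice for one order",
    "cannot log in with my account",
    "great update, much faster now",
]

budget = 3                                       # how many items we can afford to label
X = TfidfVectorizer().fit_transform(texts).toarray()

# Cluster the unlabeled pool, then pick the item closest to each centroid,
# so the annotation budget covers diverse regions of the data.
km = KMeans(n_clusters=budget, n_init=10, random_state=0).fit(X)
closest, _ = pairwise_distances_argmin_min(km.cluster_centers_, X)

selected = [texts[i] for i in closest]
print(selected)   # send these to human annotators first
```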