2024
Authors
Leites, J; Cerqueira, V; Soares, C;
Publication
Progress in Artificial Intelligence - 23rd EPIA Conference on Artificial Intelligence, EPIA 2024, Viana do Castelo, Portugal, September 3-6, 2024, Proceedings, Part III
Abstract
Most forecasting methods use recent past observations (lags) to model the future values of univariate time series. Selecting an adequate number of lags is important for training accurate forecasting models. Several approaches and heuristics have been devised to solve this task, but there is no consensus about which approach is best. Moreover, existing lag selection procedures were developed for local models and classical forecasting techniques such as ARIMA. We bridge this gap in the literature by carrying out an extensive empirical analysis of different lag selection methods. We focus on deep learning methods trained in a global approach, i.e., on datasets comprising multiple univariate time series. Specifically, we use NHITS, a recently proposed architecture that has shown competitive forecasting performance. The experiments were carried out using three benchmark databases that contain a total of 2411 univariate time series. The results indicate that the lag size is a relevant parameter for accurate forecasts. In particular, excessively small or excessively large lag sizes have a considerable negative impact on forecasting performance. Cross-validation approaches show the best performance for lag selection, but this performance is comparable with simple heuristics. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
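The cross-validation approach to lag selection described in the abstract can be sketched as follows. The helper names and the naive mean-of-lags forecaster are illustrative stand-ins, not the paper's actual setup (the study trains NHITS on global datasets):

```python
import numpy as np

def embed(series, n_lags):
    """Build a supervised matrix: each row holds n_lags past values (X)
    and the next observation (y)."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

def cv_lag_selection(series, candidate_lags, n_folds=3):
    """Pick the lag size with the lowest mean absolute error across
    validation folds, using a naive mean-of-lags forecast as a stand-in
    for a trained model."""
    scores = {}
    for n_lags in candidate_lags:
        X, y = embed(series, n_lags)
        fold_size = len(y) // (n_folds + 1)
        errors = []
        for k in range(1, n_folds + 1):
            test_idx = slice(k * fold_size, (k + 1) * fold_size)
            preds = X[test_idx].mean(axis=1)  # toy forecaster
            errors.append(np.mean(np.abs(y[test_idx] - preds)))
        scores[n_lags] = np.mean(errors)
    return min(scores, key=scores.get)
```

In a real study the toy forecaster would be replaced by retraining the model (e.g., NHITS) for each candidate lag size, which is what makes cross-validation costly relative to simple heuristics.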
2025
Authors
Inácio, R; Cerqueira, V; Barandas, M; Soares, C;
Publication
Advances in Intelligent Data Analysis XXIII - 23rd International Symposium on Intelligent Data Analysis, IDA 2025, Konstanz, Germany, May 7-9, 2025, Proceedings
Abstract
The effectiveness of time series forecasting models can be hampered by conditions in the input space that lead them to underperform. When those conditions are met, models exhibit negative behaviours such as higher-than-usual errors or increased uncertainty. Traditionally, stress testing is applied to assess how models respond to adverse but plausible scenarios, providing insights into how to improve their robustness and reliability. This paper builds upon this technique by contributing a novel framework called MAST (Meta-learning and data Augmentation for Stress Testing). In particular, MAST is a meta-learning approach that predicts the probability that a given model will perform poorly on a given time series based on a set of statistical features. Thus, instead of designing new stress scenarios, the method uses the information provided by instances that led to decreases in forecasting performance. An additional contribution is a novel time series data augmentation technique based on oversampling, which enriches the information about stress factors in the input space and thereby improves the classification capabilities of the method. We conducted experiments using 6 benchmark datasets containing a total of 97,829 time series. The results suggest that MAST effectively identifies conditions that lead to large errors.
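The meta-learning idea behind MAST, predicting from statistical features whether a model will struggle on a series, can be sketched roughly as below. The feature set, the stand-in labels, and the nearest-centroid classifier are all simplified placeholders for illustration, not the framework's actual components:

```python
import numpy as np

def series_features(s):
    """Illustrative statistical meta-features of a univariate series."""
    diffs = np.diff(s)
    return np.array([
        s.mean(),                               # level
        s.std(),                                # dispersion
        diffs.std(),                            # roughness
        ((diffs[:-1] * diffs[1:]) < 0).mean(),  # turning-point rate
    ])

def predict_stress(x, X_meta, y_meta):
    """Nearest-centroid stand-in for a meta-classifier:
    1 means 'this series is likely to cause large errors'."""
    c0 = X_meta[y_meta == 0].mean(axis=0)
    c1 = X_meta[y_meta == 1].mean(axis=0)
    return int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))

# Hypothetical meta-dataset: one feature row per series, plus a binary
# label marking series on which a forecasting model performed poorly.
rng = np.random.default_rng(0)
pool = [rng.normal(0, 1 + (i % 3), 100) for i in range(30)]
X_meta = np.vstack([series_features(s) for s in pool])
y_meta = (X_meta[:, 1] > 1.5).astype(int)  # stand-in "large error" label
```

The key design point carried over from the abstract is that the labels come from observed model failures rather than hand-designed stress scenarios.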
2025
Authors
Cerqueira, V; Roque, L; Soares, C;
Publication
DISCOVERY SCIENCE, DS 2024, PT I
Abstract
Accurate evaluation of forecasting models is essential for ensuring reliable predictions. Current practices for evaluating and comparing forecasting models focus on summarising performance into a single score, using metrics such as SMAPE. We hypothesize that averaging performance over all samples dilutes relevant information about the relative performance of models, particularly about conditions in which this relative performance differs from the overall accuracy. We address this limitation by proposing a novel framework for evaluating univariate time series forecasting models from multiple perspectives, such as one-step ahead forecasting versus multi-step ahead forecasting. We show the advantages of this framework by comparing a state-of-the-art deep learning approach with classical forecasting techniques. While classical methods (e.g. ARIMA) are long-standing approaches to forecasting, deep neural networks (e.g. NHITS) have recently shown state-of-the-art forecasting performance in benchmark datasets. We conducted extensive experiments that show NHITS generally performs best, but its superiority varies with forecasting conditions. For instance, concerning the forecasting horizon, NHITS only outperforms classical approaches for multi-step ahead forecasting. Another relevant insight is that, when dealing with anomalies, NHITS is outperformed by methods such as Theta. These findings highlight the importance of evaluating forecasts from multiple dimensions.
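SMAPE, the summary metric mentioned above, can be computed as follows. This definition, scaled to percent with the mean of the absolute values in the denominator, is one common variant; the multi-perspective framework then reports such scores separately per condition (e.g., per forecasting horizon) rather than as a single average:

```python
import numpy as np

def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
    return 100.0 * np.mean(np.abs(y_true - y_pred) / denom)
```

Computing this score over subsets of forecasts (one-step vs. multi-step, anomalous vs. normal periods) is what exposes the condition-dependent rankings the abstract describes.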
2024
Authors
Silva, IOe; Soares, C; Cerqueira, V; Rodrigues, A; Bastardo, P;
Publication
Progress in Artificial Intelligence - 23rd EPIA Conference on Artificial Intelligence, EPIA 2024, Viana do Castelo, Portugal, September 3-6, 2024, Proceedings, Part III
Abstract
TadGAN is a recent algorithm with competitive performance on time series anomaly detection. The detection process of TadGAN works by comparing observed data with generated data. A challenge in anomaly detection is that some anomalies are not easy to detect by analyzing the original time series but have a clear effect on its higher-order characteristics. We propose Meta-TadGAN, an adaptation of TadGAN that analyzes meta-level representations of time series. That is, it analyzes a time series that represents the characteristics of the original time series, rather than the original time series itself. Results on benchmark datasets as well as real-world data from fire detectors show that the new method is competitive with TadGAN.
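The meta-level representation idea can be illustrated with a minimal sketch: mapping a series to a rolling higher-order characteristic (here, a rolling standard deviation, which is an illustrative choice rather than the representation the paper necessarily uses):

```python
import numpy as np

def meta_series(s, window=10):
    """Map a series to a meta-level series of rolling standard deviations,
    exposing changes in a higher-order characteristic of the signal."""
    s = np.asarray(s, dtype=float)
    return np.array([s[i:i + window].std() for i in range(len(s) - window + 1)])
```

An oscillation burst with zero mean leaves the series' level unchanged but appears as a clear step in the meta-series, which a detector such as TadGAN can then analyze.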
2025
Authors
Cerqueira, V; Roque, L; Soares, C;
Publication
CoRR
Abstract
2025
Authors
Carvalhido, F; Cardoso, HL; Cerqueira, V;
Publication
THIRTY-NINTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, AAAI-25, VOL 39 NO 28
Abstract
Multimodal models, namely vision-language models, present unique possibilities through the seamless integration of different information mediums for data generation. These models mostly act as black boxes, lacking transparency and explainability. Reliable results require accountable and trustworthy Artificial Intelligence (AI), particularly when used for critical tasks such as the automatic generation of medical imaging reports for healthcare diagnosis. By exploring stress-testing techniques, multimodal generative models can become more transparent by disclosing their shortcomings, further supporting their responsible use in the medical field.