About

Vitor Cerqueira received his Licentiate degree in Applied Mathematics from the Faculty of Sciences, University of Porto, in 2012, and his MSc in Data Analytics from the Faculty of Economics, also at the University of Porto, in 2014. He is currently pursuing a PhD in the Doctoral Programme in Informatics Engineering at the University of Porto.

He is a research fellow at LIAAD, the Laboratory of Artificial Intelligence and Decision Support. His main research topics are ensemble learning for time series forecasting and actionable forecasting methods.

Details

  • Name

    Vítor Manuel Cerqueira
  • Role

    External Research Collaborator
  • Since

    23rd June 2014
Publications

2025

Time Series Data Augmentation as an Imbalanced Learning Problem

Authors
Cerqueira, V; Moniz, N; Inácio, R; Soares, C;

Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE, EPIA 2024, PT II

Abstract
Recent state-of-the-art forecasting methods are trained on collections of time series. These methods, often referred to as global models, can capture common patterns in different time series to improve their generalization performance. However, they require large amounts of data that might not be available. Moreover, global models may fail to capture relevant patterns unique to a particular time series. In these cases, data augmentation can be useful to increase the sample size of time series datasets. The main contribution of this work is a novel method for generating univariate time series synthetic samples. Our approach stems from the insight that the observations concerning a particular time series of interest represent only a small fraction of all observations. In this context, we frame the problem of training a forecasting model as an imbalanced learning task. Oversampling strategies are popular approaches used to handle the imbalance problem in machine learning. We use these techniques to create synthetic time series observations and improve the accuracy of forecasting models. We carried out experiments using 7 different databases that contain a total of 5502 univariate time series. We found that the proposed solution outperforms both a global and a local model, thus providing a better trade-off between these two approaches.
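
As an illustration of the idea in this abstract, the sketch below frames forecasting as an auto-regression task and creates synthetic samples for a target series by interpolating between nearest neighbours, in the spirit of SMOTE-style oversampling. It is a minimal sketch, not the paper's exact method; all names are illustrative.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def lag_embed(series, n_lags):
    """Turn a univariate series into an auto-regression dataset:
    each row of X holds n_lags past values, y holds the next value."""
    series = np.asarray(series, dtype=float)
    X = np.stack([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

def oversample_interpolate(X, y, n_new, k=5, seed=0):
    """SMOTE-style oversampling: synthesise new (X, y) pairs by
    interpolating between a sample and one of its k nearest neighbours."""
    rng = np.random.default_rng(seed)
    Z = np.column_stack([X, y])                 # join inputs and target
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(Z).kneighbors(Z)
    rows = rng.integers(len(Z), size=n_new)     # random base samples
    picks = rng.integers(1, k + 1, size=n_new)  # skip column 0 (the point itself)
    alphas = rng.random((n_new, 1))
    S = Z[rows] + alphas * (Z[idx[rows, picks]] - Z[rows])
    return S[:, :-1], S[:, -1]
```

A forecasting model would then be trained on the target series' samples, the synthetic samples, and the samples from the remaining series in the collection.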

2025

Meta-learning and Data Augmentation for Stress Testing Forecasting Models

Authors
Inácio, R; Cerqueira, V; Barandas, M; Soares, C;

Publication
ADVANCES IN INTELLIGENT DATA ANALYSIS XXIII, IDA 2025

Abstract
The effectiveness of time series forecasting models can be hampered by conditions in the input space that lead them to underperform. When those conditions are met, negative behaviours such as higher-than-usual errors or increased uncertainty arise. Traditionally, stress testing is applied to assess how models respond to adverse but plausible scenarios, providing insights into how to improve their robustness and reliability. This paper builds upon this technique by contributing a novel framework called MAST (Meta-learning and data Augmentation for Stress Testing). In particular, MAST is a meta-learning approach that predicts the probability that a given model will perform poorly on a given time series based on a set of statistical features. This way, instead of designing new stress scenarios, this method uses the information provided by instances that led to decreases in forecasting performance. An additional contribution is a novel time series data augmentation technique based on oversampling, which enriches the information about stress factors in the input space and improves the classification capabilities of the method. We conducted experiments using 6 benchmark datasets containing a total of 97,829 time series. The results suggest that MAST is able to identify conditions that lead to large errors effectively.
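
The meta-learning step described above could look roughly like the sketch below: summarise each series with statistical features and train a classifier to predict whether a model's error on that series exceeds a threshold. The feature set, threshold rule, and choice of classifier are placeholder assumptions, and the paper's oversampling-based augmentation step is omitted.

```python
import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestClassifier

def series_features(y):
    """A few statistical descriptors of a univariate series (assumed set)."""
    y = np.asarray(y, dtype=float)
    return np.array([
        y.mean(),
        y.std(),
        stats.skew(y),
        stats.kurtosis(y),
        np.corrcoef(y[:-1], y[1:])[0, 1],  # lag-1 autocorrelation
    ])

def fit_stress_detector(series_list, errors, threshold):
    """Train a meta-model that predicts whether a forecasting model
    will perform poorly (error above threshold) on a given series."""
    X = np.stack([series_features(y) for y in series_list])
    labels = (np.asarray(errors) > threshold).astype(int)
    meta = RandomForestClassifier(n_estimators=200, random_state=1)
    return meta.fit(X, labels)
```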

2025

Forecasting with Deep Learning: Beyond Average of Average of Average Performance

Authors
Cerqueira, V; Roque, L; Soares, C;

Publication
DISCOVERY SCIENCE, DS 2024, PT I

Abstract
Accurate evaluation of forecasting models is essential for ensuring reliable predictions. Current practices for evaluating and comparing forecasting models focus on summarising performance into a single score, using metrics such as SMAPE. We hypothesize that averaging performance over all samples dilutes relevant information about the relative performance of models, particularly under conditions in which this relative performance differs from the overall accuracy. We address this limitation by proposing a novel framework for evaluating univariate time series forecasting models from multiple perspectives, such as one-step ahead versus multi-step ahead forecasting. We show the advantages of this framework by comparing a state-of-the-art deep learning approach with classical forecasting techniques. While classical methods (e.g. ARIMA) are long-standing approaches to forecasting, deep neural networks (e.g. NHITS) have recently shown state-of-the-art forecasting performance on benchmark datasets. We conducted extensive experiments showing that NHITS generally performs best, but its superiority varies with forecasting conditions. For instance, concerning the forecasting horizon, NHITS only outperforms classical approaches for multi-step ahead forecasting. Another relevant insight is that, when dealing with anomalies, NHITS is outperformed by methods such as Theta. These findings highlight the importance of evaluating forecasts along multiple dimensions.
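
To make the "multiple perspectives" point concrete, the sketch below scores forecasts per horizon step instead of pooling everything into one number. It assumes forecasts are collected as (n_windows, horizon) arrays and is only an illustration of the evaluation idea, not the paper's framework.

```python
import numpy as np

def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error, in percent.
    Assumes the denominator never vanishes (no all-zero pairs)."""
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2
    return 100 * np.mean(np.abs(y_true - y_pred) / denom)

def smape_by_horizon(y_true, y_pred):
    """SMAPE computed separately at each forecasting step.
    Both inputs have shape (n_windows, horizon)."""
    return np.array([smape(y_true[:, h], y_pred[:, h])
                     for h in range(y_true.shape[1])])
```

Comparing models step by step in this way can reverse the ranking given by the pooled score; for example, a method may win overall while losing at one-step-ahead forecasts.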

2025

ModelRadar: aspect-based forecast evaluation

Authors
Cerqueira, V; Roque, L; Soares, C;

Publication
MACHINE LEARNING

Abstract
Accurate evaluation of forecasting models is essential for ensuring reliable predictions. Current practices for evaluating and comparing forecasting models focus on summarising performance into a single score, using metrics such as SMAPE. While convenient, averaging performance over all samples dilutes relevant information about model behaviour under varying conditions. This limitation is especially problematic for time series forecasting, where multiple layers of averaging (across time steps, horizons, and multiple time series in a dataset) can mask relevant performance variations. We address this limitation by proposing ModelRadar, a framework for evaluating univariate time series forecasting models across multiple aspects, such as stationarity, presence of anomalies, or forecasting horizons. We demonstrate the advantages of this framework by comparing 24 forecasting methods, including classical approaches and different machine learning algorithms. PatchTST, a state-of-the-art transformer-based neural network architecture, performs best overall, but its superiority varies with forecasting conditions. For instance, concerning the forecasting horizon, we found that PatchTST (and also other neural networks) only outperforms classical approaches for multi-step ahead forecasting. Another relevant insight is that classical approaches such as ETS or Theta are notably more robust in the presence of anomalies. These and other findings highlight the importance of aspect-based model evaluation for both practitioners and researchers. ModelRadar is available as a Python package.
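
The aspect-based idea can be illustrated by slicing the evaluation by a condition, such as the presence of anomalies, instead of averaging over all series. The sketch below does not reproduce the ModelRadar package API; the inputs and names are assumptions for illustration.

```python
import numpy as np

def aspect_report(errors, mask, aspect="anomalies"):
    """Average each model's error inside and outside a data slice.

    errors: dict mapping model name -> array of per-series errors
    mask:   boolean array, True where the aspect holds
            (e.g. the series contains anomalies)."""
    return {
        name: {aspect: float(e[mask].mean()),
               f"no_{aspect}": float(e[~mask].mean())}
        for name, e in errors.items()
    }
```

Reporting both slices side by side surfaces cases, like anomaly-laden series, where the overall winner is no longer the best choice.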

2025

Stress-Testing of Multimodal Models in Medical Image-Based Report Generation

Authors
Carvalhido, F; Cardoso, HL; Cerqueira, V;

Publication
THIRTY-NINTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, AAAI-25, VOL 39 NO 28

Abstract
Multimodal models, namely vision-language models, offer unique possibilities through the seamless integration of different information mediums for data generation. These models mostly act as black boxes, lacking transparency and explainability. Reliable results require accountable and trustworthy Artificial Intelligence (AI), especially when used for critical tasks such as the automatic generation of medical imaging reports for healthcare diagnosis. By exploring stress-testing techniques, multimodal generative models can become more transparent by disclosing their shortcomings, further supporting their responsible usage in the medical field.