Publications

Publications by LIAAD

2024

Machine Learning Data Market Based on Multiagent Systems

Authors
Baghcheband, H; Soares, C; Reis, LP;

Publication
IEEE INTERNET COMPUTING

Abstract
Today, autonomous agents, the Internet of Things, and smart devices produce more and more distributed data and use them to learn models for different purposes. One challenge is that learning from local data only may lead to suboptimal models. Thus, better models are expected if agents can exchange data, leading to approaches such as federated learning. However, these approaches assume that data have no value and, thus, are exchanged for free. A machine learning data market (MLDM), a framework based on multiagent systems with a market-based perspective on data exchange, was recently proposed. In an MLDM, each agent trains its model based on both local data and data bought from other agents. Although the empirical results are interesting, several challenges are still open, including data acquisition and data valuation. The MLDM is an illustrative example of how the value of data can and should be integrated into the design of distributed ML systems.
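
The abstract describes agents that train on local data plus data bought from peers. The sketch below illustrates that loop under simplifying assumptions: the Agent class, the fixed price_per_row, and the random sampling of sold rows are illustrative choices, not the MLDM framework's actual design.

```python
# Minimal sketch of a market-based data exchange between learning agents,
# in the spirit of the MLDM framework described above. Names and pricing
# are illustrative assumptions, not the authors' API.
import numpy as np
from sklearn.linear_model import LogisticRegression

class Agent:
    def __init__(self, X, y, budget=10.0, price_per_row=0.1):
        self.X, self.y = X, y                  # local data
        self.budget = budget                   # currency available to buy data
        self.price_per_row = price_per_row     # asking price for own data
        self.model = LogisticRegression(max_iter=1000)

    def fit(self):
        self.model.fit(self.X, self.y)

    def sell(self, n_rows):
        # Sell a random sample of local data at the asking price.
        idx = np.random.choice(len(self.X), size=n_rows, replace=False)
        return self.X[idx], self.y[idx], n_rows * self.price_per_row

    def buy(self, seller, n_rows):
        Xs, ys, cost = seller.sell(n_rows)
        if cost <= self.budget:                # only buy if affordable
            self.budget -= cost
            seller.budget += cost
            self.X = np.vstack([self.X, Xs])   # augment local data
            self.y = np.concatenate([self.y, ys])
            self.fit()                         # retrain on local + bought data

# Two agents holding disjoint halves of a toy dataset
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
a, b = Agent(X[:100], y[:100]), Agent(X[100:], y[100:])
a.fit(); b.fit()
a.buy(b, n_rows=20)  # agent a buys 20 rows from agent b and retrains
```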

2024

RIFF: Inducing Rules for Fraud Detection from Decision Trees

Authors
Martins, L; Bravo, J; Gomes, AS; Soares, C; Bizarro, P;

Publication
RULES AND REASONING, RULEML+RR 2024

Abstract
Financial fraud is the cause of multi-billion dollar losses annually. Traditionally, fraud detection systems rely on rules due to their transparency and interpretability, key features in domains where decisions need to be explained. However, rule systems require significant input from domain experts to create and tune, an issue that rule induction algorithms attempt to mitigate by inferring rules directly from data. We explore the application of these algorithms to fraud detection, where rule systems are constrained to have a low false positive rate (FPR) or alert rate, by proposing RIFF, a rule induction algorithm that distills a low-FPR rule set directly from decision trees. Our experiments show that the induced rules are often able to maintain or improve the performance of the original models for low-FPR tasks, while substantially reducing their complexity and outperforming rules hand-tuned by experts.
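
As a rough illustration of distilling rules from decision trees under an FPR constraint, the sketch below extracts each root-to-leaf path that predicts the positive (fraud) class as a candidate rule and keeps only rules whose validation FPR stays below a budget. The traversal, the 1% budget, and the toy data are assumptions for illustration; this is not the RIFF algorithm itself.

```python
# Hedged sketch: turn a decision tree's positive leaves into rules,
# then filter the rules by their validation false positive rate.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def extract_rules(tree, feature_names):
    """Return each positive-leaf path as a list of (feature, op, threshold)."""
    t = tree.tree_
    rules = []
    def walk(node, conds):
        if t.children_left[node] == -1:            # leaf node
            if np.argmax(t.value[node][0]) == 1:   # predicts the fraud class
                rules.append(list(conds))
            return
        f, thr = feature_names[t.feature[node]], t.threshold[node]
        walk(t.children_left[node],  conds + [(f, "<=", thr)])
        walk(t.children_right[node], conds + [(f, ">",  thr)])
    walk(0, [])
    return rules

def rule_mask(rule, X, feature_idx):
    """Boolean mask of the rows covered by a rule."""
    mask = np.ones(len(X), dtype=bool)
    for f, op, thr in rule:
        col = X[:, feature_idx[f]]
        mask &= (col <= thr) if op == "<=" else (col > thr)
    return mask

# Fit a tree, extract rules, keep those with FPR <= 1% on validation data
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))
y = (X[:, 0] > 1.5).astype(int)                    # rare positive class
names = [f"x{i}" for i in range(4)]
idx = {n: i for i, n in enumerate(names)}
clf = DecisionTreeClassifier(max_depth=3).fit(X[:1000], y[:1000])
Xv, yv = X[1000:], y[1000:]
kept = [r for r in extract_rules(clf, names)
        if rule_mask(r, Xv, idx)[yv == 0].mean() <= 0.01]  # FPR filter
```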

2024

An Empirical Evaluation of DeepAR for Univariate Time Series Forecasting

Authors
Gomes, RU; Soares, C; Reis, LP;

Publication
Progress in Artificial Intelligence - 23rd EPIA Conference on Artificial Intelligence, EPIA 2024, Viana do Castelo, Portugal, September 3-6, 2024, Proceedings, Part III

Abstract
DeepAR is a popular probabilistic time series forecasting algorithm. According to the authors, DeepAR is particularly suitable for building global models using hundreds of related time series. For this reason, it is a common expectation that DeepAR obtains poor results in univariate forecasting [10]. However, there are no empirical studies that clearly support this. Here, we compare the performance of DeepAR with standard forecasting models to assess its performance on one-step-ahead forecasts. We use 100 time series from the M4 competition to compare univariate DeepAR with univariate LSTM and SARIMAX models, both for point and quantile forecasts. Results show that DeepAR obtains good results, which contradicts the common perception.
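
For context, a rolling one-step-ahead evaluation of the kind used for the baselines here can be sketched with statsmodels' SARIMAX on a single series. The (1, 1, 1) order, the MAE metric, and the toy random-walk series are illustrative assumptions, not the paper's experimental setup.

```python
# Hedged sketch of a rolling one-step-ahead SARIMAX baseline.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200))           # toy univariate series
train, test = y[:150], y[150:]

preds = []
history = list(train)
for obs in test:                              # rolling one-step-ahead forecast
    model = SARIMAX(history, order=(1, 1, 1)).fit(disp=False)
    preds.append(model.forecast(steps=1)[0])
    history.append(obs)                       # expand window with the true value

mae = np.mean(np.abs(np.array(preds) - test))
print(f"SARIMAX one-step-ahead MAE: {mae:.3f}")
```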

2024

Lag Selection for Univariate Time Series Forecasting Using Deep Learning: An Empirical Study

Authors
Leites, J; Cerqueira, V; Soares, C;

Publication
Progress in Artificial Intelligence - 23rd EPIA Conference on Artificial Intelligence, EPIA 2024, Viana do Castelo, Portugal, September 3-6, 2024, Proceedings, Part III

Abstract
Most forecasting methods use recent past observations (lags) to model the future values of univariate time series. Selecting an adequate number of lags is important for training accurate forecasting models. Several approaches and heuristics have been devised to solve this task. However, there is no consensus about which approach is best. Besides, lag selection procedures have been developed based on local models and classical forecasting techniques such as ARIMA. We bridge this gap in the literature by carrying out an extensive empirical analysis of different lag selection methods. We focus on deep learning methods trained in a global approach, i.e., on datasets comprising multiple univariate time series. Specifically, we use NHITS, a recently proposed architecture that has shown competitive forecasting performance. The experiments were carried out using three benchmark databases that contain a total of 2411 univariate time series. The results indicate that the lag size is a relevant parameter for accurate forecasts. In particular, excessively small or excessively large lag sizes have a considerable negative impact on forecasting performance. Cross-validation approaches show the best performance for lag selection, but this performance is comparable to that of simple heuristics.
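
A hedged sketch of lag selection by cross-validation, the approach the study found strongest, appears below. A ridge autoregression stands in for the NHITS model used in the paper, and the candidate lag sizes, split count, and synthetic series are illustrative assumptions.

```python
# Hedged sketch: select the lag size by time-series cross-validation.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import TimeSeriesSplit

def embed(y, n_lags):
    """Turn a univariate series into a (lags -> next value) supervised task."""
    X = np.array([y[i:i + n_lags] for i in range(len(y) - n_lags)])
    return X, y[n_lags:]

def cv_score(y, n_lags, n_splits=5):
    """Mean absolute error across time-ordered CV folds for a given lag size."""
    X, t = embed(y, n_lags)
    errs = []
    for tr, te in TimeSeriesSplit(n_splits=n_splits).split(X):
        model = Ridge().fit(X[tr], t[tr])
        errs.append(np.mean(np.abs(model.predict(X[te]) - t[te])))
    return np.mean(errs)

rng = np.random.default_rng(0)
y = np.sin(np.arange(500) * 0.3) + rng.normal(scale=0.2, size=500)
scores = {k: cv_score(y, k) for k in (2, 4, 8, 16, 32, 64)}
best = min(scores, key=scores.get)   # neither too small nor too large
print(f"selected lag size: {best}")
```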

2024

Enhancing Algorithm Performance Understanding through tsMorph: Generating Semi-Synthetic Time Series for Robust Forecasting Evaluation

Authors
Santos, M; de Carvalho, ACPLF; Soares, C;

Publication
Proceedings of the 2nd Workshop on Fairness and Bias in AI co-located with 27th European Conference on Artificial Intelligence (ECAI 2024), Santiago de Compostela, Spain, October 20th, 2024.

Abstract
We have never produced as much data as today, and tomorrow will probably bring even more. The increase is due not only to the larger number of data sources, but also to each source continuously producing more recent data. The discovery of temporal patterns in continuously generated data is the main goal in many forecasting tasks, such as predicting the average value of a currency or the average temperature in a city on the next day. In these tasks, it is assumed that the time difference between two consecutive values produced by the same source is constant, and the sequence of values forms a time series. The importance, and the very large number, of time series forecasting tasks make them one of the most popular data analysis applications, and a large number of different methods have been proposed for them. Despite this popularity, there is a dearth of research aimed at understanding the conditions under which these methods achieve high or poor forecasting performance. Empirical studies, although common, are challenged by the limited availability of time series datasets, restricting the extraction of reliable insights. To address this limitation, we present tsMorph, a tool for generating semi-synthetic time series through dataset morphing. tsMorph works by creating a sequence of datasets from two original datasets. The characteristics of the generated datasets progressively depart from those of one of the datasets and converge toward the attributes of the other. This method provides a valuable alternative for obtaining substantial datasets. In this paper, we show the benefits of tsMorph by assessing the predictive performance of the Long Short-Term Memory network and DeepAR forecasting algorithms. The time series used in the experiments come from the NN5 Competition. The experimental results provide important insights. Notably, the performance of the two algorithms improves proportionally with the frequency of the time series. These experiments confirm that tsMorph can be an effective tool for better understanding the behaviour of forecasting algorithms, offering a pathway to overcoming the limitations of empirical studies and enabling more extensive and reliable experiments. Furthermore, tsMorph can promote Responsible Artificial Intelligence by exposing characteristics of time series on which forecasting algorithms may not perform well, thereby highlighting potential limitations.
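
The morphing idea can be sketched with a simple convex combination between two series: each intermediate series is a weighted average whose weight moves gradually from the source to the target. The interpolation scheme and step count are assumptions for illustration; the actual tsMorph transformation may differ.

```python
# Hedged sketch of dataset morphing between two time series.
import numpy as np

def morph(source, target, n_steps=5):
    """Yield n_steps semi-synthetic series between source and target."""
    assert len(source) == len(target), "series must share a length"
    for i in range(1, n_steps + 1):
        alpha = i / (n_steps + 1)              # morphing weight in (0, 1)
        yield (1 - alpha) * source + alpha * target

rng = np.random.default_rng(0)
a = np.sin(np.arange(100) * 0.1)               # smooth, low-frequency series
b = rng.normal(size=100)                       # noisy series
semi_synthetic = list(morph(a, b, n_steps=10)) # 10 datasets between a and b
# Each semi-synthetic series can now be fed to a forecasting algorithm to
# trace how its error changes as the data departs from a and approaches b.
```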

2024

Fair-OBNC: Correcting Label Noise for Fairer Datasets

Authors
Silva, IOE; Jesus, S; Ferreira, H; Saleiro, P; Sousa, I; Bizarro, P; Soares, C;

Publication
ECAI 2024

Abstract
Data used by automated decision-making systems, such as Machine Learning models, often reflects discriminatory behavior that occurred in the past. These biases in the training data are sometimes related to label noise, such as in COMPAS, where more African-American offenders are wrongly labeled as having a higher risk of recidivism than their White counterparts. Models trained on such biased data may perpetuate or even aggravate the biases with respect to sensitive information, such as gender, race, or age. However, while multiple label noise correction approaches are available in the literature, they focus exclusively on model performance. In this work, we propose Fair-OBNC, a label noise correction method with fairness considerations, to produce training datasets with measurable demographic parity. The method adapts Ordering-Based Noise Correction with an adjusted ordering criterion based both on the margin of error of an ensemble and on the potential increase in the observed demographic parity of the dataset. We evaluate Fair-OBNC against other pre-processing techniques under different scenarios of controlled label noise. Our results show that the proposed method is the best overall alternative within the pool of label correction methods, attaining better reconstructions of the original labels. Models trained on the corrected data show an average increase of 150% in demographic parity compared to models trained on data with noisy labels, across the considered levels of label noise.
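
A rough sketch of the underlying idea follows: rank instances by how strongly an ensemble disagrees with their observed label, then flip the most suspicious labels only when the flip does not worsen the demographic parity gap between groups. The random forest, the flip budget, and the acceptance rule are illustrative assumptions, not the exact Fair-OBNC procedure.

```python
# Hedged sketch of ordering-based noise correction with a fairness check.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def parity_gap(labels, group):
    """Absolute difference in positive rates between the two groups."""
    return abs(labels[group == 0].mean() - labels[group == 1].mean())

def fair_obnc_sketch(X, y, group, max_flips=50):
    ens = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    margin = np.abs(ens.predict_proba(X)[:, 1] - y)  # disagreement with label
    y_fixed = y.copy()
    for i in np.argsort(-margin)[:max_flips]:        # most suspicious first
        trial = y_fixed.copy()
        trial[i] = 1 - trial[i]                      # candidate label flip
        if parity_gap(trial, group) <= parity_gap(y_fixed, group):
            y_fixed = trial                          # keep flips that help parity
    return y_fixed

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
group = rng.integers(0, 2, size=500)                 # binary sensitive attribute
y = ((X[:, 0] > 0) | (group == 1)).astype(int)       # biased, noisy labels
y_corrected = fair_obnc_sketch(X, y, group)
```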
