Details
Name
Carlos Manuel Soares
Role
External Research Collaborator
Since
1st January 2008
Nationality
Portugal
Centre
Artificial Intelligence and Decision Support
Contacts
+351 222 094 398
carlos.m.soares@inesctec.pt
2026
Authors
Pereira, RR; Bono, J; Ferreira, H; Ribeiro, P; Soares, C; Bizarro, P;
Publication
MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES. APPLIED DATA SCIENCE TRACK, ECML PKDD 2025, PT IX
Abstract
When the available data for a target domain is limited, transfer learning (TL) methods leverage related data-rich source domains to train and evaluate models before deploying them on the target domain. However, most TL methods assume fixed levels of labeled and unlabeled target data, which contrasts with real-world scenarios where both data and labels arrive progressively over time. As a result, evaluations based on these static assumptions may not reflect how methods perform in practice. To support a more realistic assessment of TL methods in dynamic settings, we propose an evaluation framework that (1) simulates varying data availability over time, (2) creates multiple domains via resampling of a given dataset, and (3) introduces inter-domain variability through controlled transformations, including, e.g., time-dependent covariate and concept shifts. These capabilities enable the systematic simulation of a large number of experimental variants, providing deeper insights into how algorithms may behave when deployed. We demonstrate the usefulness of the proposed framework by performing a case study on a proprietary real-world suite of card payment datasets. To support reproducibility, we also apply the framework to the publicly available Bank Account Fraud (BAF) dataset. By providing a methodology for evaluating TL methods over time and under different data availability conditions, our framework supports a better understanding of model behavior in real-world environments, which enables more informed decisions when deploying models in new domains.
2026
Authors
Santos, M; Cerqueira, V; Soares, C;
Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE, EPIA 2025, PT I
Abstract
Effective selection of forecasting algorithms for time series data is a challenge in machine learning, impacting both predictive accuracy and efficiency. Metalearning, using features extracted from time series, offers a strategic approach to optimizing algorithm selection. The utility of this approach depends on how much information the features contain about the behavior of the algorithms. Although there are several methods for systematic time series feature extraction, they have never been compared. This paper empirically analyzes the performance of each feature extraction method for algorithm selection and its impact on forecasting accuracy. Our study reveals that TSFRESH, TSFEATURES, and TSFEL exhibit comparable performance in algorithm selection accuracy, adeptly capturing time series characteristics essential for accurate algorithm selection. In contrast, Catch22 is found to be less effective for this purpose. In particular, TSFEL is identified as the most efficient method, balancing dimensionality and predictive performance. These findings provide insights for enhancing forecasting accuracy and efficiency through judicious selection of meta-feature extractors.
2026
Authors
Dutra, I; Pechenizkiy, M; Cortez, P; Pashami, S; Jorge, AM; Soares, C; Abreu, PH; Gama, J;
Publication
ECML/PKDD (9)
Abstract
2026
Authors
Pfahringer, B; Japkowicz, N; Larrañaga, P; Ribeiro, RP; Dutra, I; Pechenizkiy, M; Cortez, P; Pashami, S; Jorge, AM; Soares, C; Abreu, PH; Gama, J;
Publication
ECML/PKDD (8)
Abstract
2026
Authors
Dutra, I; Pechenizkiy, M; Cortez, P; Pashami, S; Pasquali, A; Moniz, N; Jorge, AM; Soares, C; Abreu, PH; Gama, J;
Publication
ECML/PKDD (10)
Abstract