2015
Authors
Moniz, Nuno; Torgo, Luis;
Publication
CoRR
Abstract
2016
Authors
Catarina, M; Ana Paula, M; Maria, C; Hugo, R; Cristina, A; Isabel, A; Sandra, R; Teresa, B; Sérgio, L; Antonina, DS; Alexandra, S; Cátia, B; Sónia, C; Raquel, M; Catarina, C; André, D; Hugo, F; Ireneu, D; Luís, T; Mariana, O; Nuno, D; Pedro, J; Alfredo, M; Eduardo, S;
Publication
Frontiers in Marine Science
Abstract
2017
Authors
Cerqueira, V; Torgo, L; Pinto, F; Soares, C;
Publication
Machine Learning and Knowledge Discovery in Databases, ECML PKDD 2017, Part II
Abstract
This paper proposes an ensemble method for time series forecasting tasks. Combining different forecasting models is a common approach to tackle these problems. State-of-the-art methods track the loss of the available models and adapt their weights accordingly. Metalearning strategies such as stacking are also used in these tasks. We propose a metalearning approach for adaptively combining forecasting models that specializes them across the time series. Our assumption is that different forecasting models have different areas of expertise and varying relative performance. Moreover, many time series show recurring structures due to factors such as seasonality. Therefore, the ability of a method to deal with changes in the relative performance of models, as well as recurrent changes in the data distribution, can be very useful in dynamic environments. Our approach is based on an ensemble of heterogeneous forecasters, arbitrated by a metalearning model. This strategy is designed to cope with the different dynamics of time series and quickly adapt the ensemble to regime changes. We validate our proposal using time series from several real-world domains. Empirical results show the competitiveness of the method in comparison to state-of-the-art approaches for combining forecasters.
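The arbitration idea in the abstract above can be sketched in a few lines: train a pool of heterogeneous forecasters, train one meta-model per forecaster to predict its error, and weight each forecaster's prediction by its predicted competence. This is a minimal illustrative sketch, not the authors' implementation; the model choices, the synthetic sine series, and the softmax weighting are all assumptions.

```python
# Minimal sketch of an arbitrated ensemble for one-step-ahead forecasting.
# Assumptions (not from the paper): base model choices, synthetic series,
# and softmax(-predicted error) as the combination rule.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor

def embed(series, lags):
    """Time-delay embedding: rows of `lags` past values -> next value."""
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    return X, series[lags:]

rng = np.random.default_rng(0)
t = np.arange(400)
series = np.sin(2 * np.pi * t / 25) + 0.1 * rng.standard_normal(400)

X, y = embed(series, lags=5)
split = 300
X_tr, y_tr, X_te, y_te = X[:split], y[:split], X[split:], y[split:]

# Heterogeneous pool of base forecasters.
pool = [Ridge(), DecisionTreeRegressor(max_depth=4, random_state=0)]
for m in pool:
    m.fit(X_tr, y_tr)

# One meta-model ("arbiter") per base model, trained to predict that
# model's absolute error from the same lagged inputs.
arbiters = []
for m in pool:
    err = np.abs(y_tr - m.predict(X_tr))
    arbiters.append(
        RandomForestRegressor(n_estimators=50, random_state=0).fit(X_tr, err)
    )

# At prediction time, weight each forecaster by softmax of its negated
# predicted error: low expected error -> high weight.
base = np.column_stack([m.predict(X_te) for m in pool])
pred_err = np.column_stack([a.predict(X_te) for a in arbiters])
w = np.exp(-pred_err)
w /= w.sum(axis=1, keepdims=True)
ensemble = (w * base).sum(axis=1)

mae = np.mean(np.abs(ensemble - y_te))
print(f"ensemble MAE: {mae:.3f}")
```

Because the weights are recomputed per time step from the arbiters' error forecasts, the combination can shift toward whichever base model the meta-level expects to perform best in the current regime.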
2015
Authors
Nezvalova, L; Popelinsky, L; Torgo, L; Vaculik, K;
Publication
Advances in Intelligent Data Analysis XIV
Abstract
This paper addresses the task of finding outliers within each class in the context of supervised classification problems. Class-based outliers are cases that deviate markedly from the other cases of the same class. We introduce a novel method for outlier detection in labelled data based on Random Forests and compare it with existing methods on both artificial and real-world data. We show that it is competitive with the existing methods and sometimes gives more intuitive results. We also provide an overview of outlier detection in labelled data. The main contributions are two methods for class-based outlier description and interpretation.
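The class-based outlier notion in the abstract above can be illustrated with a simple Random Forest heuristic: score each case by how rarely the forest's out-of-bag votes support its own label. This is a simplified confidence-based variant sketched under my own assumptions, not necessarily the paper's exact method; the two-Gaussian data and the mislabelled points are illustrative.

```python
# Sketch of class-based outlier scoring with a Random Forest.
# Assumption: "outlyingness" is approximated by 1 minus the out-of-bag
# probability the forest assigns to a case's own label. The paper's
# exact scoring may differ.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Two well-separated Gaussian classes, plus three deliberately
# mislabelled points acting as class-based outliers.
X = np.vstack([rng.normal(0.0, 1.0, size=(100, 2)),
               rng.normal(4.0, 1.0, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)
y[:3] = 1  # class-0 points mislabelled as class 1

rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X, y)

# Out-of-bag probability of each case's own label: low values mean the
# case looks unlike the rest of its labelled class.
own = rf.oob_decision_function_[np.arange(len(y)), y]
outlier_score = 1.0 - own
top = np.argsort(outlier_score)[::-1][:3]
print("most outlying indices:", sorted(int(i) for i in top))
```

Using out-of-bag votes avoids scoring each case with trees that saw it during training, which would otherwise understate how anomalous it is within its class.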
2017
Authors
Torgo, L;
Publication
Encyclopedia of Machine Learning and Data Mining
Abstract
2017
Authors
Branco, P; Torgo, L; Ribeiro, RP; Frank, E; Pfahringer, B; Rau, MM;
Publication
2017 IEEE International Conference on Data Science and Advanced Analytics (DSAA)
Abstract
Accounting for misclassification costs is important in many practical applications of machine learning, and cost sensitive techniques for classification have been studied extensively. Utility-based learning provides a generalization of purely cost-based approaches that considers both costs and benefits, enabling application to domains with complex cost-benefit settings. However, there is little work on utility- or cost-based learning for regression. In this paper, we formally define the problem of utility-based regression and propose a strategy for maximizing the utility of regression models. We verify our findings in a large set of experiments that show the advantage of our proposal in a diverse set of domains, learning algorithms and cost/benefit settings.
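The core idea in the abstract above, choosing a regression model by its utility rather than by squared error alone, can be sketched as follows. The utility surface here is my own illustrative assumption (a benefit for accurate predictions of extreme values, a cost growing with the error), not the paper's formulation.

```python
# Sketch of utility-based model selection for regression.
# Assumption: a toy utility surface rewarding accurate predictions of
# extreme targets (|y| > thr) and charging a cost proportional to the
# absolute error. The paper defines utility differently and in general.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

def utility(y_true, y_pred, thr=2.0):
    """Benefit of 1 for near-exact hits on extreme values, minus a cost
    equal to the absolute error everywhere."""
    err = np.abs(y_true - y_pred)
    benefit = np.where((np.abs(y_true) > thr) & (err < 0.5), 1.0, 0.0)
    return benefit - err

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(500, 1))
y = X[:, 0] ** 3 / 5 + 0.2 * rng.standard_normal(500)  # extremes matter

X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]

models = {
    "linear": LinearRegression().fit(X_tr, y_tr),
    "tree": DecisionTreeRegressor(max_depth=6, random_state=0).fit(X_tr, y_tr),
}
# Select the model maximizing mean utility instead of minimizing MSE.
scores = {name: utility(y_te, m.predict(X_te)).mean()
          for name, m in models.items()}
best = max(scores, key=scores.get)
print("mean utility:", {k: round(v, 3) for k, v in scores.items()}, "->", best)
```

Under a cost/benefit surface like this, two models with similar squared error can rank very differently, which is exactly why a utility-aware criterion is needed in domains where extreme or rare values carry most of the value.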