
Publications by Luís Torgo

2015

Socially Driven News Recommendation

Authors
Moniz, N; Torgo, L;

Publication
CoRR

Abstract

2016

Development of an autonomous system for integrated marine monitoring

Authors
Catarina, M; Ana Paula, M; Maria, C; Hugo, R; Cristina, A; Isabel, A; Sandra, R; Teresa, B; Sérgio, L; Antonina, DS; Alexandra, S; Cátia, B; Sónia, C; Raquel, M; Catarina, C; André, D; Hugo, F; Ireneu, D; Luís, T; Mariana, O; Nuno, D; Pedro, J; Alfredo, M; Eduardo, S;

Publication
Frontiers in Marine Science

Abstract

2017

Arbitrated Ensemble for Time Series Forecasting

Authors
Cerqueira, V; Torgo, L; Pinto, F; Soares, C;

Publication
Machine Learning and Knowledge Discovery in Databases, ECML PKDD 2017, Part II

Abstract
This paper proposes an ensemble method for time series forecasting tasks. Combining different forecasting models is a common approach to tackle these problems. State-of-the-art methods track the loss of the available models and adapt their weights accordingly. Metalearning strategies such as stacking are also used in these tasks. We propose a metalearning approach for adaptively combining forecasting models that specializes them across the time series. Our assumption is that different forecasting models have different areas of expertise and a varying relative performance. Moreover, many time series show recurring structures due to factors such as seasonality. Therefore, the ability of a method to deal with changes in the relative performance of models, as well as recurrent changes in the data distribution, can be very useful in dynamic environments. Our approach is based on an ensemble of heterogeneous forecasters, arbitrated by a metalearning model. This strategy is designed to cope with the different dynamics of time series and quickly adapt the ensemble to regime changes. We validate our proposal using time series from several real-world domains. Empirical results show the competitiveness of the method in comparison to state-of-the-art approaches for combining forecasters.
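The combination step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes each base forecaster already has an "arbiter" meta-model that predicts the error the forecaster will make next, and it weights the base forecasts by the inverse of those predicted errors.

```python
import numpy as np

def arbitrated_forecast(base_preds, predicted_errors):
    """Combine base forecasts, weighting each model inversely to the
    error its arbiter (meta-model) predicts it will make: models
    expected to err less get more say in the final forecast."""
    errs = np.asarray(predicted_errors, dtype=float)
    w = 1.0 / (errs + 1e-9)   # inverse-error weights
    w /= w.sum()              # normalise to sum to 1
    return float(np.dot(w, np.asarray(base_preds, dtype=float)))

# Two base models forecast 10 and 20; the arbiters expect the second
# model to be far more accurate, so the combination leans towards 20.
combined = arbitrated_forecast([10.0, 20.0], [100.0, 0.01])
```

When the arbiters predict equal errors, this reduces to a plain average of the base forecasts; the adaptivity comes from re-estimating the predicted errors as the series evolves.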

2015

Class-Based Outlier Detection: Staying Zombies or Awaiting for Resurrection?

Authors
Nezvalova, L; Popelinsky, L; Torgo, L; Vaculik, K;

Publication
Advances in Intelligent Data Analysis XIV

Abstract
This paper addresses the task of finding outliers within each class in the context of supervised classification problems. Class-based outliers are cases that deviate markedly from the other cases of the same class. We introduce a novel method for outlier detection in labelled data based on Random Forests and compare it with existing methods on both artificial and real-world data. We show that it is competitive with the existing methods and sometimes gives more intuitive results. We also provide an overview of outlier detection in labelled data. The main contributions are two methods for class-based outlier description and interpretation.
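The notion of a class-based outlier can be operationalised in a very simple way. The sketch below is a deliberately simplified proxy, not the paper's Random-Forest-based method: it scores each case by its distance to the centroid of its own class, relative to that class's typical spread, so cases that deviate strongly from their own class score high even if they look ordinary globally.

```python
import numpy as np

def class_outlier_scores(X, y):
    """Score each case by how far it lies from the centroid of its OWN
    class, in units of that class's mean spread. A high score marks a
    class-based outlier: a case unlike other members of its label."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    scores = np.empty(len(X))
    for label in np.unique(y):
        mask = (y == label)
        members = X[mask]
        centroid = members.mean(axis=0)
        dists = np.linalg.norm(members - centroid, axis=1)
        spread = dists.mean() + 1e-9   # avoid division by zero
        scores[mask] = dists / spread
    return scores

# Class "a" has three tight points and one stray at (10, 10);
# the stray gets the highest class-based outlier score.
X = [[0, 0], [1, 0], [0, 1], [10, 10], [5, 5], [5, 6]]
y = ["a", "a", "a", "a", "b", "b"]
scores = class_outlier_scores(X, y)
```

The paper's contribution goes further by also *describing* and interpreting the detected outliers; this snippet only covers the detection-style scoring.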

2017

Model Trees

Authors
Torgo, L;

Publication
Encyclopedia of Machine Learning and Data Mining

Abstract

2017

Learning Through Utility Optimization in Regression Tasks

Authors
Branco, P; Torgo, L; Ribeiro, RP; Frank, E; Pfahringer, B; Rau, MM;

Publication
2017 IEEE International Conference on Data Science and Advanced Analytics (DSAA)

Abstract
Accounting for misclassification costs is important in many practical applications of machine learning, and cost sensitive techniques for classification have been studied extensively. Utility-based learning provides a generalization of purely cost-based approaches that considers both costs and benefits, enabling application to domains with complex cost-benefit settings. However, there is little work on utility- or cost-based learning for regression. In this paper, we formally define the problem of utility-based regression and propose a strategy for maximizing the utility of regression models. We verify our findings in a large set of experiments that show the advantage of our proposal in a diverse set of domains, learning algorithms and cost/benefit settings.
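The shift from error minimisation to utility maximisation can be illustrated concretely. The sketch below uses a toy utility surface of my own choosing (a flat benefit for near-correct predictions, a cost growing with the error otherwise); the paper works with domain-defined cost/benefit settings, so treat the surface and names here as illustrative assumptions only.

```python
import numpy as np

def utility(y_hat, y, benefit=1.0, cost=1.0, tol=0.5):
    """Toy utility surface: reward predictions within `tol` of the
    truth, penalise proportionally to the excess error otherwise."""
    err = abs(y_hat - y)
    return benefit if err <= tol else -cost * (err - tol)

def max_utility_prediction(y_samples, candidates):
    """Instead of the error-minimising conditional mean, return the
    candidate prediction that maximises expected utility over samples
    from the model's predictive distribution."""
    return max(candidates,
               key=lambda c: np.mean([utility(c, y) for y in y_samples]))

# Predictive samples cluster around 2.0, so 2.0 beats 0.0 and 5.0
# under this utility surface.
best = max_utility_prediction([1.9, 2.0, 2.1], [0.0, 2.0, 5.0])
```

Under an asymmetric utility surface (e.g. under-prediction costlier than over-prediction), the same procedure would shift the chosen prediction away from the mean, which is exactly why a utility-maximising strategy can differ from a loss-minimising one.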
