Publications

Publications by Rita Paula Ribeiro

2017

Learning Through Utility Optimization in Regression Tasks

Authors
Branco, P; Torgo, L; Ribeiro, RP; Frank, E; Pfahringer, B; Rau, MM;

Publication
2017 IEEE INTERNATIONAL CONFERENCE ON DATA SCIENCE AND ADVANCED ANALYTICS (DSAA)

Abstract
Accounting for misclassification costs is important in many practical applications of machine learning, and cost-sensitive techniques for classification have been studied extensively. Utility-based learning provides a generalization of purely cost-based approaches that considers both costs and benefits, enabling application to domains with complex cost-benefit settings. However, there is little work on utility- or cost-based learning for regression. In this paper, we formally define the problem of utility-based regression and propose a strategy for maximizing the utility of regression models. We verify our findings in a large set of experiments that show the advantage of our proposal in a diverse set of domains, learning algorithms and cost/benefit settings.
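A minimal sketch of the kind of utility computation this line of work deals with: a toy utility surface that rewards accurate predictions on extreme true values and penalizes large errors on them, so that a model with lower overall squared error can still have lower utility. The threshold, tolerance, and data below are illustrative assumptions, not the formulation or optimization strategy proposed in the paper.

```python
import numpy as np

def toy_utility(y_true, y_pred, threshold=100.0, tolerance=10.0):
    """Toy surface: accurate predictions on extreme true values earn a benefit,
    large errors on extreme values incur a cost, everything else scores zero."""
    u = np.zeros(len(y_true))
    extreme = y_true >= threshold                  # the cases the user cares about
    close = np.abs(y_true - y_pred) <= tolerance   # "accurate enough" predictions
    u[extreme & close] = 1.0                       # benefit of an accurate prediction
    u[extreme & ~close] = -1.0                     # cost of missing an extreme value
    return u.sum()

y_true = np.array([5.0, 110.0, 120.0, 40.0])
y_pred_a = np.array([6.0, 105.0, 80.0, 40.0])      # small overall error, misses one extreme
y_pred_b = np.array([50.0, 108.0, 118.0, 85.0])    # larger overall error, accurate on extremes
print(toy_utility(y_true, y_pred_a), toy_utility(y_true, y_pred_b))   # 0.0 2.0
```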

2018

Proceedings of the Workshop on Large-scale Learning from Data Streams in Evolving Environments (STREAMEVOLV 2016) co-located with the 2016 European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML/PKDD 2016), Riva del Garda, Italy, September 23, 2016

Authors
Mouchaweh, MS; Bouchachia, H; Gama, J; Ribeiro, RP;

Publication
STREAMEVOLV@ECML-PKDD

Abstract

2018

Resampling with neighbourhood bias on imbalanced domains

Authors
Branco, P; Torgo, L; Ribeiro, RP;

Publication
EXPERT SYSTEMS

Abstract
Imbalanced domains are an important problem that arises in predictive tasks, causing a loss of performance on the cases that are most relevant to the user. This problem has been extensively studied for classification problems, where the target variable is nominal. Recently, it was recognized that imbalanced domains occur in several other contexts and for multiple tasks, such as regression tasks, where the target variable is continuous. This paper focuses on imbalanced domains in both classification and regression tasks. Resampling strategies are among the most successful approaches to address imbalanced domains. In this work, we propose variants of existing resampling strategies that are able to take into account the information regarding the neighbourhood of the examples. Instead of performing sampling uniformly, our proposals bias the strategies to reinforce some regions of the data sets. With an extensive set of experiments, we provide evidence of the advantage of introducing a neighbourhood bias in the resampling strategies for both classification and regression tasks with imbalanced data sets.
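As one concrete illustration of the general idea (not the specific variants evaluated in the paper), the sketch below biases plain random oversampling of a rare class by the class composition of each example's neighbourhood, so that rare examples sitting in regions dominated by the other class are reinforced more often. The choice of bias, the value of k, and the binary-classification setting are assumptions made for illustration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighbourhood_biased_oversample(X, y, rare_label, k=5, n_new=50, seed=0):
    """Oversample the rare class, biasing the choice of seed examples by the
    class composition of their k nearest neighbours."""
    rng = np.random.default_rng(seed)
    rare_idx = np.flatnonzero(y == rare_label)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, neigh = nn.kneighbors(X[rare_idx])               # first column is the point itself
    # One possible bias: weight each rare example by the fraction of
    # majority-class neighbours, reinforcing examples in "harder" regions.
    frontier = (y[neigh[:, 1:]] != rare_label).mean(axis=1)
    weights = (frontier + 1e-6) / (frontier + 1e-6).sum()
    chosen = rng.choice(rare_idx, size=n_new, replace=True, p=weights)
    return np.vstack([X, X[chosen]]), np.concatenate([y, y[chosen]])
```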

2018

MetaUtil: Meta Learning for Utility Maximization in Regression

Authors
Branco, P; Torgo, L; Ribeiro, RP;

Publication
Discovery Science - 21st International Conference, DS 2018, Limassol, Cyprus, October 29-31, 2018, Proceedings

Abstract
Several important real world problems of predictive analytics involve handling different costs of the predictions of the learned models. The research community has developed multiple techniques to deal with these tasks. The utility-based learning framework is a generalization of cost-sensitive tasks that takes into account both costs of errors and benefits of accurate predictions. This framework has important advantages, such as allowing more complex settings to be represented, reflecting the domain knowledge in a more complete and precise way. Most existing work addresses classification tasks, with only a few proposals tackling regression problems. In this paper we propose a new method, MetaUtil, for solving utility-based regression problems. The MetaUtil algorithm is versatile, allowing the conversion of any out-of-the-box regression algorithm into a utility-based method. We show the advantage of our proposal in a large set of experiments on a diverse set of domains. © 2018, Springer Nature Switzerland AG.
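The abstract does not spell out the algorithm, so the sketch below only illustrates one way a wrapper could turn an arbitrary regressor into a utility-aware one, loosely in the spirit of cost-sensitive meta-learning: approximate the conditional target distribution with bagged models, relabel each training case with the candidate value of highest expected utility, and refit the base learner. This is a hypothetical illustration under those assumptions, not the MetaUtil algorithm itself.

```python
import numpy as np
from sklearn.base import clone

class UtilityWrapper:
    """Hypothetical utility-aware wrapper around any regressor (illustrative only)."""

    def __init__(self, base_regressor, utility, candidates, n_bags=10, seed=0):
        self.base = base_regressor        # any out-of-the-box regressor
        self.utility = utility            # utility(y_true, y_pred) -> float
        self.candidates = candidates      # grid of candidate target values
        self.n_bags = n_bags
        self.seed = seed

    def fit(self, X, y):
        rng = np.random.default_rng(self.seed)
        # Approximate the conditional target distribution with bagged models.
        preds = []
        for _ in range(self.n_bags):
            idx = rng.integers(0, len(X), len(X))
            preds.append(clone(self.base).fit(X[idx], y[idx]).predict(X))
        preds = np.array(preds)           # shape (n_bags, n_samples)
        # Relabel each training case with the candidate of highest average
        # utility over the bagged predictions, then refit on the new targets.
        new_y = np.array([
            max(self.candidates,
                key=lambda c: np.mean([self.utility(p, c) for p in preds[:, i]]))
            for i in range(len(y))
        ])
        self.model_ = clone(self.base).fit(X, new_y)
        return self

    def predict(self, X):
        return self.model_.predict(X)
```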

2017

Outliers and the Simpson's Paradox

Authors
Portela, E; Ribeiro, RP; Gama, J;

Publication
Advances in Soft Computing - 16th Mexican International Conference on Artificial Intelligence, MICAI 2017, Ensenada, Mexico, October 23-28, 2017, Proceedings, Part I

Abstract
There is no standard definition of outliers, but most authors agree that outliers are points far from other data points. Several outlier detection techniques have been developed, mainly with two different purposes. On one hand, outliers are the interesting observations, as in fraud detection; on the other hand, outliers are considered erroneous observations (e.g., measurement errors) that should be removed from the analysis, as in robust statistics. In this work, we start from the observation that outliers are affected by the so-called Simpson's paradox: a trend that appears in different groups of data but disappears or reverses when these groups are combined. Given a dataset, we learn a regression tree. The tree grows by partitioning the data into groups that are increasingly homogeneous with respect to the target variable. At each partition defined by the tree, we apply a box plot on the target variable to detect outliers. We would expect deeper nodes of the tree to contain fewer and fewer outliers. We observe that some points previously signaled as outliers are no longer signaled as such, but new outliers appear. The identification of outliers thus depends on the context considered. Based on this observation, we propose a new method to quantify the level of outlierness of data points. © Springer Nature Switzerland AG 2018.
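A minimal sketch of the node-level procedure described above, under assumed details: grow a regression tree and, at every node (partition), flag the classical boxplot (1.5 x IQR) outliers of the target among the examples that reach that node. Tracking which examples gain or lose the outlier flag as the depth increases is the kind of context-dependent signal the paper builds on; how those flags are aggregated into an outlierness score is not shown here.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def per_node_boxplot_outliers(X, y, max_depth=3):
    """For every node of a fitted regression tree, flag boxplot outliers of the
    target among the training examples that reach that node."""
    tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, y)
    path = tree.decision_path(X).toarray().astype(bool)     # (n_samples, n_nodes)
    flagged = {}
    for node in range(path.shape[1]):
        members = np.flatnonzero(path[:, node])              # examples reaching this node
        y_node = y[members]
        q1, q3 = np.percentile(y_node, [25, 75])
        low, high = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
        flagged[node] = members[(y_node < low) | (y_node > high)]
    return flagged   # node id -> indices of examples flagged as target outliers there
```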

2018

SMOTEBoost for Regression: Improving the Prediction of Extreme Values

Authors
Moniz, N; Ribeiro, RP; Cerqueira, V; Chawla, N;

Publication
2018 IEEE 5TH INTERNATIONAL CONFERENCE ON DATA SCIENCE AND ADVANCED ANALYTICS (DSAA)

Abstract
Supervised learning with imbalanced domains is one of the biggest challenges in machine learning. Such tasks differ from standard learning tasks by assuming a skewed distribution of target variables, and user domain preference towards under-represented cases. Most research has focused on imbalanced classification tasks, where a wide range of solutions has been tested. Still, little work has been done concerning imbalanced regression tasks. In this paper, we propose an adaptation of the SMOTEBoost approach for the problem of imbalanced regression. Originally designed for classification tasks, it combines boosting methods and the SMOTE resampling strategy. We present four variants of SMOTEBoost and provide an experimental evaluation using 30 datasets, with an extensive analysis of results in order to assess the ability of SMOTEBoost methods to predict extreme target values, and their predictive trade-off relative to baseline boosting methods. SMOTEBoost is publicly available in a software package.
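The recipe combines boosting with SMOTE-style interpolation of rare, extreme-target examples. The sketch below shows only the SMOTE-for-regression piece (interpolating both features and target between a rare case and a rare neighbour) plus a simplified resample-then-boost usage on synthetic data; in SMOTEBoost proper the resampling is applied inside each boosting round, and the rarity threshold, k, and AdaBoost.R2 base learner used here are assumptions, not the paper's four variants.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import AdaBoostRegressor

def smote_for_regression(X, y, threshold, k=5, n_new=100, seed=0):
    """Interpolate synthetic examples between rare (extreme-target) cases and
    their rare nearest neighbours, interpolating the target value as well."""
    rng = np.random.default_rng(seed)
    rare = np.flatnonzero(y >= threshold)             # assumed notion of "extreme"
    nn = NearestNeighbors(n_neighbors=min(k + 1, len(rare))).fit(X[rare])
    _, neigh = nn.kneighbors(X[rare])
    base = rng.integers(0, len(rare), n_new)          # seed example for each synthetic case
    mate = neigh[base, rng.integers(1, neigh.shape[1], size=n_new)]
    lam = rng.random((n_new, 1))                      # interpolation factor
    X_new = X[rare[base]] + lam * (X[rare[mate]] - X[rare[base]])
    y_new = y[rare[base]] + lam[:, 0] * (y[rare[mate]] - y[rare[base]])
    return np.vstack([X, X_new]), np.concatenate([y, y_new])

# Simplified usage: resample once on synthetic data, then boost regression trees.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = X[:, 0] + np.where(rng.random(500) < 0.05, 10.0, 0.0)    # a few extreme targets
X_res, y_res = smote_for_regression(X, y, threshold=np.percentile(y, 90))
model = AdaBoostRegressor(DecisionTreeRegressor(max_depth=3)).fit(X_res, y_res)
```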
