Publications by LIAAD

2012

Predicting the accuracy of regression models in the retail industry

Authors
Pinto, F; Soares, C;

Publication
CEUR Workshop Proceedings

Abstract
Companies are moving from developing a single model for a problem (e.g., a regression model to predict general sales) to developing several models for sub-problems of the original problem (e.g., regression models to predict sales of each of its product categories). Given the similarity between the sub-problems, the process of model development should not be independent. Information should be shared between processes. Different approaches can be used for that purpose, including metalearning (MtL) and transfer learning. In this work, we use MtL to predict the performance of a model based on the performance of models that were previously developed. Given that the sub-problems are related (e.g., the schemas of the data are the same), domain knowledge is used to develop the metafeatures that characterize them. The approach is applied to the development of models to predict sales of different product categories in a retail company from Portugal.
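
As a rough illustration of the metalearning setup described above, the sketch below builds a meta-dataset in which each row describes one sub-problem (e.g., a product category) through a few domain metafeatures and records the error a regression model obtained on it; a meta-regressor then predicts the expected error for a new category before any model is built. The metafeature names, the data, and the use of a random forest as meta-model are illustrative assumptions, not the paper's actual setup.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Meta-dataset: one row per previously modelled sub-problem.
# Hypothetical domain metafeatures: number of stores selling the category,
# average weekly sales, coefficient of variation of sales.
meta_X = rng.uniform(low=[10, 100, 0.1], high=[500, 5000, 1.5], size=(40, 3))
# Target: RMSE observed when a regression model was fit to each sub-problem
# (synthetic values standing in for measured results).
meta_y = rng.uniform(low=5.0, high=50.0, size=40)

# The meta-model maps metafeatures to expected performance.
meta_model = RandomForestRegressor(n_estimators=200, random_state=0)
meta_model.fit(meta_X, meta_y)

# Predict the accuracy of a model for a new product category before building it.
new_category = np.array([[120, 850.0, 0.7]])
print("expected RMSE:", meta_model.predict(new_category)[0])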

2012

An Experimental Study of the Combination of Meta-Learning with Particle Swarm Algorithms for SVM Parameter Selection

Authors
de Miranda, PBC; Prudencio, RBC; de Carvalho, ACPLF; Soares, C;

Publication
COMPUTATIONAL SCIENCE AND ITS APPLICATIONS - ICCSA 2012, PT III

Abstract
Support Vector Machines (SVMs) have become a successful algorithm due to the good performance they achieve on different learning problems. However, to perform well, the SVM requires adjustments to its model. Automatic SVM parameter selection is a way to deal with this while avoiding a trial-and-error procedure. Automatic parameter selection is commonly treated as an optimization problem whose goal is to find a suitable configuration of parameters for a given learning problem. In the current work, we propose a study of the combination of Meta-learning (ML) with Particle Swarm Optimization (PSO) algorithms to optimize the SVM model, seeking combinations of parameters that maximize the success rate of the SVM. ML is used to recommend SVM parameters for a given input problem, based on well-performing parameters adopted in previous similar problems. In this combination, the initial solutions provided by ML are likely to lie in good regions of the search space. Hence, the search process can find an adequate solution using a reduced number of candidate search points, making it less expensive. In our work, we implemented five benchmark PSO approaches applied to the selection of two SVM parameters for classification. The experiments compare the performance of the search algorithms using a traditional random initialization and using ML suggestions as the initial population. This research analysed the influence of meta-learning on the convergence of the optimization algorithms, verifying that the combination of PSO techniques with ML obtained solutions of higher quality on a set of 40 classification problems.
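
The sketch below illustrates the hybrid idea in its simplest form, assuming a toy dataset: a basic PSO searches over (log2 C, log2 gamma) for an SVM, and the swarm is initialized with a mix of random particles and hard-coded parameter settings that stand in for the meta-learning recommendations. The seed values, PSO coefficients, and dataset are illustrative, not the configurations used in the paper.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)

def fitness(p):
    # p = (log2 C, log2 gamma); fitness = cross-validated success rate.
    clf = SVC(C=2.0 ** p[0], gamma=2.0 ** p[1])
    return cross_val_score(clf, X, y, cv=5).mean()

bounds = np.array([[-5.0, 15.0], [-15.0, 3.0]])   # common log2 search ranges
ml_seeds = np.array([[0.0, -3.0], [5.0, -7.0]])   # stand-ins for ML suggestions

# Initial swarm: ML suggestions plus a few random particles.
pos = np.vstack([ml_seeds, rng.uniform(bounds[:, 0], bounds[:, 1], size=(6, 2))])
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(15):                               # small budget for the sketch
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("best (log2 C, log2 gamma):", gbest, "accuracy:", pbest_fit.max())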

2012

Multi-objective Optimization and Meta-learning for SVM Parameter Selection

Authors
Miranda, PBC; Prudencio, RBC; de Carvalho, ACPLF; Soares, C;

Publication
2012 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN)

Abstract
Support Vector Machines (SVMs) have become a successful technique due to the good performance they achieve on different learning problems. However, this performance depends on adjustments to the model, and automatic SVM parameter selection is a way to deal with this. The approach is treated as an optimization problem whose goal is to find a suitable configuration of parameters for a given learning problem. This work proposes the use of Particle Swarm Optimization (PSO) to treat the SVM parameter selection problem. As the design of learning systems is inherently a multi-objective optimization problem, a multi-objective PSO (MOPSO) was used to maximize the success rate and minimize the number of support vectors of the model. Moreover, we propose the combination of Meta-Learning (ML) with MOPSO for this problem. ML is used to recommend SVM parameters for a given input problem, based on well-performing parameters adopted in previous similar problems. In this combination, the initial solutions provided by ML are likely to lie in good regions of the search space. Hence, the search process can find an adequate solution using a reduced number of candidate search points, making it less expensive. We highlight that the combination of search algorithms with ML had only been studied in the single-objective setting, and the use of MOPSO in this context had not been investigated. In our work, we implemented a prototype in which MOPSO was used to select the values of two SVM parameters for classification problems. In the performed experiments, the proposed solution (MOPSO using ML, or Hybrid MOPSO) was compared to a MOPSO with random initialization, obtaining Pareto fronts of higher quality on a set of 40 classification problems.
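
The sketch below focuses on the two objectives optimised by the MOPSO: for a candidate (C, gamma) it computes the cross-validated success rate (to maximise) and the number of support vectors (to minimise), and a simple non-dominance filter extracts the Pareto front of a set of candidates. The candidate values and dataset are illustrative, and the MOPSO search loop itself is omitted.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

def evaluate(C, gamma):
    # Two objectives: success rate (maximise) and model complexity (minimise).
    clf = SVC(C=C, gamma=gamma)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    n_sv = clf.fit(X, y).support_vectors_.shape[0]
    return acc, n_sv

def pareto_front(points):
    # points: list of (accuracy, n_support_vectors) pairs.
    front = []
    for i, (a_i, s_i) in enumerate(points):
        dominated = any(
            (a_j >= a_i and s_j <= s_i) and (a_j > a_i or s_j < s_i)
            for j, (a_j, s_j) in enumerate(points) if j != i
        )
        if not dominated:
            front.append(points[i])
    return front

candidates = [(1.0, 0.1), (10.0, 0.01), (100.0, 0.001), (0.1, 1.0)]
scores = [evaluate(C, g) for C, g in candidates]
print("Pareto front:", pareto_front(scores))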

2012

Combining a Multi-Objective Optimization Approach with Meta-Learning for SVM Parameter Selection

Authors
de Miranda, PBC; Prudencio, RBC; de Carvalho, ACPLF; Soares, C;

Publication
PROCEEDINGS 2012 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC)

Abstract
Support Vector Machine (SVM) is a supervised technique that achieves good performance on different learning problems. However, adjustments to its model are essential for the SVM to work well. Optimization techniques have been used to automate this process, finding suitable configurations of parameters for a given learning problem. This work applies Particle Swarm Optimization (PSO) to the SVM parameter selection problem. As the design of learning systems is essentially a multi-objective problem, a multi-objective PSO (MOPSO) was used to maximize the success rate and minimize the number of support vectors of the model. Moreover, we propose the combination of Meta-Learning (ML) with a modified MOPSO which uses the crowding distance mechanism (MOPSO-CDR). In this combination, the solutions provided by ML are likely to lie in good regions of the search space. Hence, using a reduced number of successful candidates, the search process converges faster and is less expensive. In our work, we implemented a prototype in which MOPSO-CDR was used to select the values of two SVM parameters for classification problems. In the performed experiments, the proposed solution (MOPSO-CDR using ML) was compared to the MOPSO-CDR with random initialization, obtaining Pareto fronts of higher quality on a set of 40 classification problems.
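
The distinguishing ingredient of MOPSO-CDR is the crowding distance mechanism, which keeps the archive of non-dominated solutions well spread. A minimal, generic implementation is sketched below, assuming a front described by (success rate, number of support vectors) pairs; the values are illustrative and the rest of the MOPSO-CDR algorithm is omitted.

import numpy as np

def crowding_distance(objectives):
    # objectives: (n_solutions, n_objectives) array for a non-dominated set.
    obj = np.asarray(objectives, dtype=float)
    n, m = obj.shape
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(obj[:, k])
        span = obj[order[-1], k] - obj[order[0], k]
        dist[order[0]] = dist[order[-1]] = np.inf   # boundary solutions kept
        if span == 0:
            continue
        for idx in range(1, n - 1):
            i = order[idx]
            # Normalised gap between the two neighbours along objective k.
            dist[i] += (obj[order[idx + 1], k] - obj[order[idx - 1], k]) / span
    return dist

# Example front: (success rate, number of support vectors).
front = [(0.98, 60), (0.96, 45), (0.93, 30), (0.90, 25)]
print(crowding_distance(front))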

2012

A Meta-Learning Approach to Select Meta-Heuristics for the Traveling Salesman Problem Using MLP-Based Label Ranking

Authors
Kanda, J; Soares, C; Hruschka, E; de Carvalho, A;

Publication
NEURAL INFORMATION PROCESSING, ICONIP 2012, PT III

Abstract
Different meta-heuristics (MHs) may find the best solutions for different traveling salesman problem (TSP) instances. The a priori selection of the best MH for a given instance is a difficult task. We address this task by using a meta-learning-based approach, which ranks different MHs according to their expected performance. Our approach uses Multilayer Perceptrons (MLPs) for label ranking. It is tested on two different TSP scenarios, namely re-visiting customers and visiting prospects. The experimental results show that: 1) MLPs can accurately predict MH rankings for TSP, 2) better TSP solutions can be obtained from a label ranking approach than from a multilabel classification approach, and 3) it is important to consider different TSP application scenarios when using meta-learning for MH selection.
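
A hedged sketch of the selection step follows: a Multilayer Perceptron maps metafeatures of a TSP instance to one score per meta-heuristic, and sorting the scores yields a predicted ranking. Training a multi-output MLPRegressor on rank positions is a simplification of the label-ranking formulation used in the paper, and the metafeatures, meta-heuristics, and data are all illustrative.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
mhs = ["tabu_search", "simulated_annealing", "genetic_algorithm", "aco"]

# Meta-dataset: metafeatures of past TSP instances (e.g., number of cities,
# mean and std of inter-city distances) and the observed rank of each MH
# (1 = best) on those instances; ranks here are synthetic stand-ins.
meta_X = rng.uniform(low=[50, 10, 1], high=[2000, 500, 100], size=(60, 3))
meta_ranks = np.array([rng.permutation(len(mhs)) + 1 for _ in range(60)])

ranker = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
ranker.fit(meta_X, meta_ranks)

# Rank the meta-heuristics for a new TSP instance.
new_instance = np.array([[300, 120.0, 35.0]])
predicted = ranker.predict(new_instance)[0]
order = np.argsort(predicted)            # lower predicted rank = better
print([mhs[i] for i in order])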

2012

Meta-learning for periodic algorithm selection in time-changing data

Authors
Rossi, ALD; Carvalho, ACPLF; Soares, C;

Publication
Proceedings - Brazilian Symposium on Neural Networks, SBRN

Abstract
When users have to choose a learning algorithm to induce a model for a given dataset, a common practice is to select an algorithm whose bias suits the data distribution. In real-world applications that produce data continuously, this distribution may change over time. Thus, a learning algorithm with an adequate bias for a dataset may become unsuitable for new data following a different distribution. In this paper we present a meta-learning approach for periodic algorithm selection when the data distribution may change over time. This approach exploits the knowledge obtained from the induction of models for different data chunks to improve the overall predictive performance. It periodically applies a meta-classifier to predict the most appropriate learning algorithm for new unlabeled data. Characteristics extracted from past and incoming data, together with the predictive performance of different models, constitute the meta-data used to induce this meta-classifier. Experimental results using data from a travel time prediction problem show its ability to improve the overall performance of the learning system. The proposed approach can be applied to other time-changing tasks, since it is domain-independent.
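
The periodic selection loop can be illustrated with a short sketch, assuming synthetic data in place of the travel-time problem: each past data chunk is summarised by a few metafeatures and paired with the algorithm that performed best on it, a meta-classifier is trained on these pairs, and it then picks the learner for the next, still unlabeled, chunk. All names and values are illustrative.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
algorithms = ["svr", "random_forest", "knn"]

def metafeatures(chunk):
    # Simple characteristics of a chunk of the target variable.
    return [chunk.mean(), chunk.std(), np.abs(np.diff(chunk)).mean()]

# Past chunks and the learner that achieved the lowest error on each of them
# (the labels below are synthetic stand-ins for measured results).
past_chunks = [rng.normal(loc=rng.uniform(20, 60), scale=rng.uniform(1, 10), size=200)
               for _ in range(30)]
best_algorithm = rng.choice(len(algorithms), size=30)

meta_X = np.array([metafeatures(c) for c in past_chunks])
meta_clf = RandomForestClassifier(n_estimators=100, random_state=0)
meta_clf.fit(meta_X, best_algorithm)

# When a new chunk arrives, predict which algorithm to train on it.
new_chunk = rng.normal(loc=45, scale=4, size=200)
choice = meta_clf.predict([metafeatures(new_chunk)])[0]
print("selected:", algorithms[choice])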
