2001
Authors
Brazdil, P; Soares, C; Pereira, R;
Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Abstract
Several methods have been proposed to generate rankings of supervised classification algorithms based on their previous performance on other datasets [8,4]. Like any other prediction method, ranking methods will sometimes err; for instance, they may not rank the best algorithm in the first position. Often the user is willing to try more than one algorithm to increase the chance of identifying the best one. The information provided by the ranking methods mentioned above is not adequate for this purpose: they do not identify the algorithms in the ranking that have a reasonable possibility of performing best. In this paper, we describe a method for that purpose. We compare our method to the strategy of executing all algorithms and to a very simple reduction method that consists of running the top three algorithms. Throughout this work we take both execution time and accuracy into account. As expected, our method performs better than the simple reduction method and shows more stable behavior than running all algorithms. © Springer-Verlag Berlin Heidelberg 2001.
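As a rough illustration of the kind of reduction strategy this abstract describes, the Python sketch below keeps every algorithm whose estimated accuracy/time score on previously seen datasets is close to the best estimate, so only those candidates need to be executed on the new dataset. The function name, scores, and margin-based rule are assumptions for illustration and are not taken from the paper.

import math

def select_candidates(estimates, margin=0.05, time_weight=0.1):
    # estimates: algorithm name -> (estimated accuracy, estimated training time in seconds),
    # e.g. derived from performance on similar past datasets (assumed input format).
    scores = {
        name: acc - time_weight * math.log10(1.0 + secs)  # combine accuracy with log-scaled time
        for name, (acc, secs) in estimates.items()
    }
    best = max(scores.values())
    ranked = sorted(scores, key=scores.get, reverse=True)
    # Keep every algorithm whose score is within `margin` of the best; only these are executed.
    return [name for name in ranked if best - scores[name] <= margin]

print(select_candidates({"c5.0": (0.86, 12.0), "knn": (0.84, 45.0), "mlp": (0.88, 310.0)}))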
2009
Authors
Giraud-Carrier, C; Brazdil, P; Soares, C; Vilalta, R;
Publication
Encyclopedia of Data Warehousing and Mining, Second Edition (4 Volumes)
Abstract
2004
Authors
Vilalta, R; Giraud-Carrier, C; Brazdil, P; Soares, C;
Publication
IJCSA
Abstract
2009
Authors
Brazdil, P; Giraud-Carrier, C; Soares, C; Vilalta, R;
Publication
Cognitive Technologies
Abstract
2000
Authors
Soares, C; Brazdil, P; Costa, J;
Publication
Data Analysis, Classification, and Related Methods
Abstract
Due to the wide variety of algorithms for supervised classification originating from several research areas, selecting one of them to apply to a given problem is not a trivial task. Recently, several methods have been developed to create rankings of classification algorithms based on their previous performance. Therefore, it is necessary to develop techniques to evaluate and compare those methods. We present three measures to evaluate rankings of classification algorithms, give examples of their use, and discuss their characteristics.
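For context, one commonly used way to compare a recommended ranking of algorithms with the ranking actually observed on a dataset is Spearman's rank correlation; the short Python sketch below illustrates it. The function name and example algorithms are assumptions, and the three measures presented in the paper are not necessarily this one.

def spearman(recommended, ideal):
    # Both arguments list the same algorithm names, ordered from best to worst.
    n = len(recommended)
    ideal_rank = {alg: i + 1 for i, alg in enumerate(ideal)}
    d_squared = sum((i + 1 - ideal_rank[alg]) ** 2 for i, alg in enumerate(recommended))
    return 1.0 - 6.0 * d_squared / (n * (n ** 2 - 1))

print(spearman(["c5.0", "knn", "mlp"], ["c5.0", "mlp", "knn"]))  # 0.5: partially correct ranking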
2006
Authors
Campos, P; Brazdil, P; Brito, P;
Publication
Network-Centric Collaboration and Supporting Frameworks
Abstract
We propose a multi-agent framework to analyze the dynamics of organizational survival in cooperation networks. Firms can decide to cooperate horizontally (within the same market) or vertically, with other firms that belong to the supply chain. Cooperation decisions are based on economic variables. We have defined a variant of the density dependence model to set up the dynamics of survival in the simulation. To validate our model, we used empirical outputs obtained in previous studies of the automobile manufacturing sector. We observed that firms and networks proliferate in regions with lower marginal costs, while new networks keep appearing and disappearing in regions with higher marginal costs.
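To give a rough idea of how a density dependence model can drive survival dynamics in such a simulation, the Python sketch below uses assumed functional forms and invented coefficients (not the calibrated model from the paper): founding and mortality rates depend on the current number of firms and on a marginal-cost term.

import math
import random

def step(n_firms, marginal_cost, rng, max_entrants=10):
    # Founding rate rises with density at first (legitimation) and falls at high density
    # (competition); higher marginal costs depress foundings. Coefficients are illustrative.
    founding = math.exp(1.0 + 0.02 * n_firms - 0.0002 * n_firms ** 2 - marginal_cost)
    # Mortality rate has the mirror-image dependence on density, plus the cost term.
    mortality = math.exp(-2.0 - 0.01 * n_firms + 0.0003 * n_firms ** 2 + marginal_cost)
    # Approximate expected foundings/failures with simple per-firm Bernoulli draws.
    births = sum(rng.random() < min(founding / max_entrants, 1.0) for _ in range(max_entrants))
    deaths = sum(rng.random() < min(mortality, 1.0) for _ in range(n_firms))
    return max(n_firms + births - deaths, 0)

rng = random.Random(0)
low_cost, high_cost = 20, 20
for year in range(50):
    low_cost = step(low_cost, marginal_cost=0.2, rng=rng)
    high_cost = step(high_cost, marginal_cost=0.8, rng=rng)
print(low_cost, high_cost)  # more firms typically survive in the low-cost region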