
Publications by LIAAD

1998

Symbolic Clustering Of Probabilistic Data

Authors
Brito, P;

Publication
Studies in Classification, Data Analysis, and Knowledge Organization - Advances in Data Science and Classification

Abstract

1998

Dynamic discretization of continuous attributes

Authors
Gama, J; Torgo, L; Soares, C;

Publication
Progress in Artificial Intelligence - IBERAMIA 98

Abstract
Discretization of continuous attributes is an important task for certain types of machine learning algorithms. Bayesian approaches, for instance, require assumptions about data distributions. Decision Trees, on the other hand, require sorting operations to deal with continuous attributes, which largely increase learning times. This paper presents a new method of discretization whose main characteristic is that it takes into account interdependencies between attributes. Detecting interdependencies can be seen as discovering redundant attributes, which means that our method performs attribute selection as a side effect of the discretization. Empirical evaluation on five benchmark datasets from the UCI repository, using C4.5 and a naive Bayes classifier, shows a consistent reduction of the features without loss of generalization accuracy.
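The abstract's interdependency-aware method is not detailed enough here to reproduce. As background for what "discretization of a continuous attribute" means, the following is a generic equal-frequency discretizer (a standard baseline technique, not the paper's algorithm); the function names and bin counts are illustrative.

```python
# Background sketch only: plain equal-frequency discretization of one
# continuous attribute. The paper's interdependency-aware method is more
# involved and is NOT reproduced here.
import numpy as np

def equal_frequency_bins(values, n_bins=4):
    """Return cut points splitting `values` into bins of roughly equal size."""
    quantiles = np.linspace(0, 1, n_bins + 1)[1:-1]  # interior quantiles only
    return np.quantile(values, quantiles)

def discretize(values, cuts):
    """Map each continuous value to the index of the bin it falls into."""
    return np.digitize(values, cuts)

values = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.2, 0.6, 0.7])
cuts = equal_frequency_bins(values, n_bins=4)
print(discretize(values, cuts))  # each of the 4 bins receives 2 values
```

After discretization, a learner such as naive Bayes can treat the bin index as an ordinary discrete attribute, avoiding assumptions about the underlying continuous distribution.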

1998

Combining Classifiers by Constructive Induction

Authors
Gama, J;

Publication
Machine Learning: ECML-98, 10th European Conference on Machine Learning, Chemnitz, Germany, April 21-23, 1998, Proceedings

Abstract
Using multiple classifiers for increasing learning accuracy is an active research area. In this paper we present a new general method for merging classifiers. The basic idea of Cascade Generalization is to sequentially run the set of classifiers, at each step performing an extension of the original data set by adding new attributes. The new attributes are derived from the probability class distribution given by a base classifier. This constructive step extends the representational language for the high-level classifiers, relaxing their bias. Cascade Generalization produces a single but structured model for the data that combines the model class representation of the base classifiers. We have performed an empirical evaluation of Cascade composition of three well-known classifiers: Naive Bayes, Linear Discriminant, and C4.5. Composite models show an increase in performance, sometimes impressive, when compared with the corresponding single models, with significant statistical confidence levels. © Springer-Verlag Berlin Heidelberg 1998.
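The abstract describes the cascade step concretely: fit a base classifier, append its predicted class-probability distribution to the data as new attributes, then fit the next classifier on the extended representation. The following is an illustrative reconstruction of that idea, not the authors' code; the use of scikit-learn and the particular estimators (Gaussian naive Bayes as base, a decision tree standing in for C4.5) are assumptions.

```python
# Hedged sketch of the cascade step described in the abstract: the base
# classifier's class-probability outputs become extra attributes for the
# high-level classifier. Assumes scikit-learn; estimators are illustrative.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

def cascade(base, top, X_train, y_train, X_test):
    # Step 1: fit the base classifier on the original attributes.
    base.fit(X_train, y_train)
    # Step 2: extend both sets with the base classifier's
    # class-probability distribution as new attributes.
    X_train_ext = np.hstack([X_train, base.predict_proba(X_train)])
    X_test_ext = np.hstack([X_test, base.predict_proba(X_test)])
    # Step 3: fit the high-level classifier on the extended representation.
    top.fit(X_train_ext, y_train)
    return top.predict(X_test_ext)

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
preds = cascade(GaussianNB(), DecisionTreeClassifier(random_state=0),
                X_tr, y_tr, X_te)
print("test accuracy:", (preds == y_te).mean())
```

Chaining further classifiers repeats steps 1-2, so the final model is a single structured cascade rather than a vote among independent models.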

1998

Local Cascade Generalization

Authors
Gama, J;

Publication
Proceedings of the Fifteenth International Conference on Machine Learning (ICML 1998), Madison, Wisconsin, USA, July 24-27, 1998

Abstract

1998

VisAll: A universal tool to visualise the parallel execution of logic programs

Authors
Fonseca, N; Costa, VS; Dutra, ID;

Publication
Logic Programming - Proceedings of the 1998 Joint International Conference and Symposium on Logic Programming

Abstract
One of the most important advantages of logic programming systems is that they allow the transparent exploitation of parallelism. The different forms of parallelism available and the complex nature of logic programming applications present interesting problems to both the users and the developers of these systems. Graphical visualisation tools can make a particularly important contribution, as they are easier to understand than text-based tools, and allow both for a general overview of an execution and for focusing on its important details. Towards these goals, we propose VisAll, a new tool to visualise the parallel execution of logic programs. VisAll benefits from a modular design centered on a graph that represents a parallel execution. A main graphical shell commands the different modules and presents VisAll as a unified system. Several input components, or translators, support the well-known VisAndor and VACE trace formats, plus a new format designed for independent and-parallel plus or-parallel execution in the SEA. Several output components, or visualisers, allow for different visualisations of the same execution.

1998

Redundant Covering with Global Evaluation in the RC1 Inductive Learner

Authors
Lopes, Alneu de Andrade; Brazdil, Pavel;

Publication
Advances in Artificial Intelligence, 14th Brazilian Symposium on Artificial Intelligence, SBIA '98, Porto Alegre, Brazil, November 4-6, 1998, Proceedings

Abstract
This paper presents an inductive method that learns a logic program represented as an ordered list of clauses. The input consists of a training set of positive examples and background knowledge represented intensionally as a logic program. Our method starts by constructing the explanations of all the positive examples in terms of background knowledge, linking the input to the output arguments. These are used as candidate hypotheses and organized, by relation of generality, into a set of hierarchies (forest). In the second step the candidate hypotheses are analysed with the aim of establishing their effective coverage. In the third step all the inconsistencies are evaluated. This analysis makes it possible to add, at each step, the best hypothesis to the theory. The method was applied to learning the past tense of English verbs, and achieves more accurate results than the previous work by Mooney and Califf [7]. © Springer-Verlag Berlin Heidelberg 1998.
