2000
Authors
Gama, J;
Publication
ADVANCES IN ARTIFICIAL INTELLIGENCE
Abstract
Naive Bayes is a well-known and well-studied algorithm in both statistics and machine learning. Despite its limited expressive power, it performs surprisingly well in a wide variety of domains, including many with clear dependencies between attributes. In this paper we address its main perceived limitation: its inability to deal with attribute dependencies. We present Linear Bayes, which uses a multivariate normal distribution over the continuous attributes to compute the required probabilities. In this way, the interdependencies between the continuous attributes are taken into account. In the empirical evaluation, we compare Linear Bayes against a naive Bayes that discretizes continuous attributes, a naive Bayes that assumes a univariate Gaussian for each continuous attribute, and a standard linear discriminant function. We show that Linear Bayes is a plausible algorithm that competes quite well with other well-established techniques.
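The core idea of the abstract, modelling all continuous attributes of each class jointly with a single multivariate normal so that attribute interdependencies enter through the covariance matrix, can be sketched as follows. This is an illustrative reconstruction, not the authors' exact formulation; the class name `LinearBayes` and the small ridge term added to the covariance are assumptions made here for a self-contained example.

```python
import numpy as np

class LinearBayes:
    """Illustrative sketch: per-class multivariate Gaussian plus class priors."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_, self.means_, self.covs_ = {}, {}, {}
        for c in self.classes_:
            Xc = X[y == c]
            self.priors_[c] = len(Xc) / len(X)
            self.means_[c] = Xc.mean(axis=0)
            # Full covariance captures interdependencies between attributes;
            # the small ridge keeps the matrix invertible (an assumption here).
            self.covs_[c] = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        return self

    def _log_gauss(self, x, mean, cov):
        # Log-density of a multivariate normal, up to no omitted terms.
        d = x - mean
        _, logdet = np.linalg.slogdet(cov)
        return -0.5 * (d @ np.linalg.solve(cov, d)
                       + logdet + len(x) * np.log(2 * np.pi))

    def predict(self, X):
        # Pick the class maximizing log prior + log likelihood.
        scores = np.array([[np.log(self.priors_[c])
                            + self._log_gauss(x, self.means_[c], self.covs_[c])
                            for c in self.classes_] for x in X])
        return self.classes_[np.argmax(scores, axis=1)]
```

With equal covariance matrices across classes this reduces to a linear discriminant, which is the sense in which such a classifier sits between naive Bayes and standard linear discriminant analysis.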
2000
Authors
Gama, J;
Publication
AI Commun.
Abstract
2000
Authors
Gama, J;
Publication
AI COMMUNICATIONS
Abstract
2000
Authors
Costa, VS; Srinivasan, A; Camacho, R;
Publication
Inductive Logic Programming, 10th International Conference, ILP 2000, London, UK, July 24-27, 2000, Proceedings
Abstract
2000
Authors
Teles, P; Wei, WWS;
Publication
COMPUTATIONAL STATISTICS & DATA ANALYSIS
Abstract
Time-series aggregates are often used in performing tests for departure from linearity. In this paper, we study the effects of temporal aggregation on testing for linearity, basing our analysis on both time- and frequency-domain tests. The results show that temporal aggregation weakens nonlinearity and reduces the power of the tests. The impact is severe: the use of aggregate data greatly hampers the detection of the nonlinear nature of the process.
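Temporal aggregation of the kind studied here replaces each block of m consecutive observations with a single value (their sum, for a flow variable). A minimal sketch of the operation itself; the function name and the choice to drop a trailing incomplete block are assumptions for illustration, not details from the paper:

```python
import numpy as np

def aggregate(x, m):
    """Non-overlapping temporal aggregation: sum each block of m observations.
    Trailing observations that do not fill a complete block are dropped."""
    x = np.asarray(x)
    n = (len(x) // m) * m          # length of the usable prefix
    return x[:n].reshape(-1, m).sum(axis=1)
```

The paper's finding is that applying linearity tests to such aggregated series, rather than the original ones, substantially reduces the tests' power.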
1999
Authors
Jorge, A; Andrade Lopes, Ad;
Publication
Learning Language in Logic
Abstract
Assigning a category to a given word (tagging) depends on the particular word and on the categories (tags) of neighboring words. A theory that is able to assign tags to a given text can naturally be viewed as a recursive logic program. This article describes how iterative induction, a technique that has proven powerful in the synthesis of recursive logic programs, has been applied to the task of part-of-speech tagging. The main strategy consists of inducing a succession T1, T2, …, Tn of theories, using in the induction of theory Ti all the previously induced theories. Each theory in the sequence may have lexical rules, context rules, and hybrid ones. This iterative strategy is, to a large extent, independent of the underlying inductive algorithm. Here we consider one particular relational learning algorithm, CSC(RC), and we induce first-order theories from positive examples and background knowledge that are able to successfully tag a relatively large corpus in Portuguese. © Springer-Verlag Berlin Heidelberg 2000.
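The iterative strategy described in the abstract, learning a sequence of theories where each new theory is induced with all earlier theories available as background knowledge, can be sketched structurally. The `induce` function below is a hypothetical placeholder (it just memorizes the most frequent tag per word, nothing like the paper's CSC(RC) learner); the point of the sketch is the iteration scheme, not the learner:

```python
from collections import Counter, defaultdict

def induce(examples, background):
    """Hypothetical stand-in learner: most frequent tag per word.
    examples: list of (word, tag) pairs; background: previously induced
    theories, which a real relational learner would consult for context rules."""
    counts = defaultdict(Counter)
    for word, tag in examples:
        counts[word][tag] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def iterative_induction(examples, n_iterations):
    """Induce theories T1..Tn; theory Ti sees T1..T(i-1) as background."""
    theories = []
    for _ in range(n_iterations):
        theories.append(induce(examples, background=theories))
    return theories
```

In the paper's setting each Ti is a first-order theory whose context rules can call the tags predicted by earlier theories, which is what makes the background parameter essential in the real algorithm.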