2003
Authors
Castillo, G; Gama, J; Breda, AM;
Publication
USER MODELING 2003, PROCEEDINGS
Abstract
We present Adaptive Bayes, an adaptive incremental version of Naive Bayes, to model a prediction task based on learning styles in the context of an Adaptive Hypermedia Educational System. Since the student's preferences can change over time, this task is related to a problem known as concept drift in the machine learning community. For this class of problems, an adaptive predictive model, able to adapt quickly to the user's changes, is desirable. The results of the conducted experiments show that Adaptive Bayes seems to be a simple and suitable choice for this kind of prediction task in user modeling.
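As a rough illustration of the underlying idea (this is not the authors' Adaptive Bayes implementation; the class name, decay factor and smoothing are assumptions made for the sketch), an incremental Naive Bayes can keep per-class counts that are updated after every observation and gradually faded, so that recent behaviour dominates when the user's preferences drift:

from collections import defaultdict

class IncrementalNaiveBayes:
    # Illustrative sketch: incremental Naive Bayes with exponential forgetting.
    # The decay value and Laplace smoothing are assumptions, not taken from the paper.
    def __init__(self, decay=0.99):
        self.decay = decay
        self.class_counts = defaultdict(float)    # counts N(c)
        self.feature_counts = defaultdict(float)  # counts N(c, attribute, value)

    def update(self, features, label):
        # Fade old evidence so the model can follow concept drift.
        for key in self.class_counts:
            self.class_counts[key] *= self.decay
        for key in self.feature_counts:
            self.feature_counts[key] *= self.decay
        # Incorporate the new observation incrementally (no retraining from scratch).
        self.class_counts[label] += 1.0
        for attr, value in features.items():
            self.feature_counts[(label, attr, value)] += 1.0

    def predict(self, features):
        # Return the class with the highest Laplace-smoothed posterior score.
        total = sum(self.class_counts.values()) or 1.0
        best_label, best_score = None, float("-inf")
        for label, count in self.class_counts.items():
            score = count / total
            for attr, value in features.items():
                score *= (self.feature_counts.get((label, attr, value), 0.0) + 1.0) / (count + 2.0)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

Because update() is called after every observed choice, predictions always reflect the most recent evidence about the student.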
2003
Authors
Fonseca, N; Costa, VS; Silva, F; Camacho, R;
Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE
Abstract
2003
Authors
Fonseca, N; Rocha, R; Camacho, R; Silva, F;
Publication
INDUCTIVE LOGIC PROGRAMMING, PROCEEDINGS
Abstract
This work aims at improving the scalability of memory usage in Inductive Logic Programming systems. In this context, we propose two efficient data structures: the Trie, used to represent lists and clauses, and the RL-Tree, a novel data structure used to represent clause coverage. We evaluate their performance in the April system using well-known datasets. Initial results show a substantial reduction in memory usage without incurring extra execution-time overhead. Our proposal is applicable to any ILP system.
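As a rough illustration of why a trie saves memory in this setting (this is not April's actual implementation; the string-based literal representation is an assumption for the sketch), clauses that share a prefix of literals can share the corresponding prefix of trie nodes, so common structure is stored only once:

class TrieNode:
    def __init__(self):
        self.children = {}     # literal (as a string) -> TrieNode
        self.is_clause = False

class ClauseTrie:
    # Illustrative sketch only: clauses are stored as sequences of literal strings.
    def __init__(self):
        self.root = TrieNode()

    def insert(self, literals):
        # Clauses sharing a prefix of literals reuse the same nodes,
        # which is where the memory savings come from.
        node = self.root
        for lit in literals:
            node = node.children.setdefault(lit, TrieNode())
        node.is_clause = True

    def contains(self, literals):
        node = self.root
        for lit in literals:
            node = node.children.get(lit)
            if node is None:
                return False
        return node.is_clause

# The two clauses below share their first two literals and therefore two trie nodes.
trie = ClauseTrie()
trie.insert(["active(X)", "atom(X,A)", "charge(A,C)"])
trie.insert(["active(X)", "atom(X,A)", "bond(A,B)"])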
2003
Authors
Michalski, RS; Brazdil, P;
Publication
Machine Learning
Abstract
2003
Authors
Leite, R; Brazdil, P;
Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE
Abstract
We present a method that can be seen as an improvement of the standard progressive sampling method. The method exploits information concerning the performance of a given algorithm on past datasets, which is used to generate predictions of the stopping point. Experimental evaluation shows that the method can lead to significant time savings without significant losses in accuracy.
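For context, standard progressive sampling trains on geometrically growing samples and stops once the learning curve flattens; the method above instead predicts the stopping point from the algorithm's performance on past datasets. A minimal sketch of the baseline procedure (the sample sizes, improvement threshold and scikit-learn classifier are illustrative assumptions, not taken from the paper):

from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

def progressive_sampling(X_train, y_train, X_val, y_val, start=100, factor=2, eps=0.005):
    # Train on geometrically growing samples; stop when the learning curve flattens.
    n, prev_acc, model = min(start, len(X_train)), 0.0, None
    while True:
        model = DecisionTreeClassifier().fit(X_train[:n], y_train[:n])
        acc = accuracy_score(y_val, model.predict(X_val))
        if acc - prev_acc < eps or n == len(X_train):
            break                                  # curve flattened or all data used
        prev_acc, n = acc, min(n * factor, len(X_train))
    return model

Predicting the stopping point in advance avoids training on the intermediate sample sizes, which is where the reported time savings come from.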
2003
Authors
Camacho, R;
Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE
Abstract
Inductive Logic Programming (ILP) is a promising technology for knowledge extraction applications. ILP has produced intelligible solutions for a wide variety of domains where it has been applied. ILP's lack of efficiency is, however, a major impediment to its scalability to applications requiring large amounts of data. In this paper we propose a set of techniques that improve the efficiency of ILP systems and make them more likely to scale up to applications of knowledge extraction from large datasets. We propose and evaluate the lazy evaluation of examples to improve the efficiency of ILP systems. Lazy evaluation is essentially a way to avoid or postpone the evaluation of the generated hypotheses (coverage tests). The techniques were evaluated using the IndLog system on ILP datasets referenced in the literature. The proposals lead to substantial efficiency improvements and are generally applicable to any ILP system.
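One simple way to picture lazy evaluation of examples (an illustrative sketch only, not IndLog's actual implementation, whose techniques are broader) is a coverage test over the negative examples that stops as soon as the count exceeds the allowed noise level, since the exact count is no longer needed to reject the clause:

def covers(clause, example):
    # Placeholder: in an ILP system this is a (possibly costly) proof attempt.
    return clause(example)

def lazily_count_negatives(clause, negative_examples, max_allowed):
    # Stop as soon as the clause is known to cover too many negative examples.
    covered = 0
    for ex in negative_examples:
        if covers(clause, ex):
            covered += 1
            if covered > max_allowed:
                return covered, False   # rejected early; remaining coverage tests avoided
    return covered, True                # clause acceptable w.r.t. the noise level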