Publications

Publications by LIAAD

1998

Inducing Models of Human Control Skills

Authors
Camacho, R;

Publication
Machine Learning: ECML-98, 10th European Conference on Machine Learning, Chemnitz, Germany, April 21-23, 1998, Proceedings

Abstract

1998

Numerical algorithm for recursive subspace identification

Authors
Delgado, CJM; dos Santos, PL; de Carvalho, JLM;

Publication
PROCEEDINGS OF THE 37TH IEEE CONFERENCE ON DECISION AND CONTROL, VOLS 1-4

Abstract
A subspace-based on-line identification algorithm is presented. It is based on one specific technique, derived from Van Overschee and De Moor's results, but can be adapted to other similar methods, since they all recover the state sequence and the observability matrix. These results relate an estimated Kalman filter state sequence with an oblique projection. With further improvements, the algorithm can be adapted to the identification of time-variant systems.
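For readers unfamiliar with the oblique projection mentioned in the abstract, the minimal NumPy sketch below shows that single step as used in Van Overschee and De Moor style subspace identification. The block-Hankel matrix construction, the SVD-based recovery of the observability matrix and state sequence, and the recursive update are all omitted, and the matrix names Yf, Uf and Wp are illustrative.

```python
import numpy as np

def project_onto_orth_complement(A, B):
    # Remove from the rows of A their component lying in the row space of B.
    return A - A @ B.T @ np.linalg.pinv(B @ B.T) @ B

def oblique_projection(Yf, Uf, Wp):
    # Oblique projection of the future outputs Yf along the future inputs Uf
    # onto the past data Wp; its column space is what subspace methods use to
    # recover the observability matrix and the Kalman filter state sequence.
    Yf_perp = project_onto_orth_complement(Yf, Uf)
    Wp_perp = project_onto_orth_complement(Wp, Uf)
    return Yf_perp @ np.linalg.pinv(Wp_perp) @ Wp
```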

1997

Integrity constraints in ILP using a Monte Carlo approach

Authors
Jorge, A; Brazdil, PB;

Publication
INDUCTIVE LOGIC PROGRAMMING

Abstract
Many state-of-the-art ILP systems require large numbers of negative examples to avoid overgeneralization. This is a considerable disadvantage for many ILP applications, namely inductive program synthesis, where relatively small and sparse example sets are a more realistic scenario. Integrity constraints are first-order clauses that can play the role of negative examples in an inductive process. One integrity constraint can replace a long list of ground negative examples. However, checking the consistency of a program with a set of integrity constraints usually involves heavy theorem-proving. We propose an efficient constraint satisfaction algorithm that applies to a wide variety of useful integrity constraints and uses a Monte Carlo strategy. It looks for inconsistencies by random generation of queries to the program. This method allows the use of integrity constraints instead of (or together with) negative examples. As a consequence, programs to be induced can be specified more rapidly by the user, and the ILP system tends to obtain more accurate definitions. Average running times are not greatly affected by the use of integrity constraints compared to ground negative examples.
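As a rough illustration of the Monte Carlo strategy described in the abstract, the sketch below checks a candidate hypothesis against integrity constraints by sampling random ground instances. The encoding of hypotheses and constraints as plain Python predicates is hypothetical and merely stands in for the first-order clauses handled by the actual ILP system.

```python
def violates(constraint, hypothesis, example):
    # Hypothetical encoding: a constraint is a predicate that must never hold
    # for an instance covered by the hypothesis.
    return hypothesis(example) and constraint(example)

def monte_carlo_consistent(hypothesis, constraints, sample_instance, n_trials=1000):
    # Stochastic consistency check: draw random ground instances and look for
    # a violation of any integrity constraint.  Returning True only means no
    # counter-example was found within n_trials samples.
    for _ in range(n_trials):
        example = sample_instance()  # random ground query to the program
        if any(violates(c, hypothesis, example) for c in constraints):
            return False             # inconsistency witnessed
    return True
```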

1997

From Graphical Objects to Terms and Back: an Extended Application Framework for Prolog

Authors
Soares, C; Calejo, M;

Publication
Proceedings of the 8th Workshop on Logic Programming Environments, LPE '97, post-conference workshop at ICLP 1997, Leuven, Belgium, July 11, 1997

Abstract

1997

Regression Using Classification Algorithms

Authors
Torgo, L; Gama, J;

Publication
Intell. Data Anal.

Abstract
This article presents an alternative approach to the problem of regression. The methodology we describe allows the use of classification algorithms in regression tasks. From a practical point of view this enables the use of a wide range of existing machine learning (ML) systems in regression problems. In effect, most of the widely available systems deal with classification. Our method works as a pre-processing step in which the continuous goal variable values are discretised into a set of intervals. We use misclassification costs as a means to reflect the implicit ordering among these intervals. We describe a set of alternative discretisation methods and, based on our experimental results, justify the need for a search-based approach to choose the best method. The discretisation process is isolated from the classification algorithm, thus being applicable to virtually any existing system. The implemented system (RECLA) can thus be seen as a generic pre-processing tool. We have tested RECLA with three different classification systems and evaluated it in several regression data sets. Our experimental results confirm the validity of our search-based approach to class discretisation, and reveal the accuracy benefits of adding misclassification costs. © 1997 Elsevier Science B.V.
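The pre-processing idea can be sketched roughly as follows, assuming scikit-learn, equal-frequency discretisation, and enough distinct target values to populate every interval; the search over discretisation methods and the misclassification-cost mechanism described in the abstract are not shown.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_regression_as_classification(X, y, n_bins=5):
    # Discretise the continuous target into equal-frequency intervals, train a
    # classifier on the interval labels, and remember each interval's median
    # so class predictions can be mapped back to numeric values.
    edges = np.quantile(y, np.linspace(0, 1, n_bins + 1))
    labels = np.digitize(y, edges[1:-1])            # interval index 0 .. n_bins-1
    medians = np.array([np.median(y[labels == k]) for k in range(n_bins)])
    clf = DecisionTreeClassifier().fit(X, labels)
    return clf, medians

def predict_regression(clf, medians, X):
    # Predict an interval, then return that interval's representative value.
    return medians[clf.predict(X)]
```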

1997

Oblique linear tree

Authors
Gama, J;

Publication
ADVANCES IN INTELLIGENT DATA ANALYSIS: REASONING ABOUT DATA

Abstract
In this paper we present the system Ltree for propositional supervised learning. Ltree is able to define decision surfaces both orthogonal and oblique to the axes defined by the attributes of the input space. This is done by combining a decision tree with a linear discriminant by means of constructive induction. At each decision node, Ltree defines a new instance space by inserting new attributes that are projections of the examples that fall at this node onto the hyper-planes given by a linear discriminant function. This new instance space is propagated down through the tree. Tests based on those new attributes are oblique with respect to the original input space. Ltree is a probabilistic tree in the sense that it outputs a class probability distribution for each query example. The class probability distribution is computed at learning time, taking into account the different class distributions on the path from the root to the current node. We have carried out experiments on sixteen benchmark datasets and compared our system with other well-known decision-tree systems (orthogonal and oblique) such as C4.5, OC1 and LMDT. On these datasets we have observed that our system has advantages in terms of accuracy and tree size at statistically significant confidence levels.
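A much-simplified sketch of the constructive-induction idea follows: linear-discriminant projections are added as extra attributes and an ordinary axis-parallel tree is grown on the extended space, so axis tests on the new attributes are oblique in the original space. Unlike Ltree, this sketch fits the discriminant once at the root rather than at every decision node, and scikit-learn components stand in for the original implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier

def fit_oblique_by_constructive_induction(X, y):
    # Fit a linear discriminant, append its projections as new attributes,
    # then grow an axis-parallel decision tree on the extended instance space.
    lda = LinearDiscriminantAnalysis().fit(X, y)
    X_ext = np.hstack([X, lda.decision_function(X).reshape(len(X), -1)])
    tree = DecisionTreeClassifier().fit(X_ext, y)
    return lda, tree

def predict_oblique(lda, tree, X):
    # Apply the same feature extension at prediction time.
    X_ext = np.hstack([X, lda.decision_function(X).reshape(len(X), -1)])
    return tree.predict(X_ext)
```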
