2005
Authors
Davis, J; Burnside, E; De Castro Dutra, I; Page, D; Santos Costa, V;
Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Abstract
Inductive Logic Programming (ILP) is a popular approach for learning rules for classification tasks. An important question is how to combine the individual rules to obtain a useful classifier. In some instances, converting each learned rule into a binary feature for a Bayes net learner improves the accuracy compared to the standard decision list approach [3,4,14]. This results in a two-step process, where rules are generated in the first phase, and the classifier is learned in the second phase. We propose an algorithm that interleaves the two steps, by incrementally building a Bayes net during rule learning. Each candidate rule is introduced into the network, and scored by whether it improves the performance of the classifier. We call the algorithm SAYU for Score As You Use. We evaluate two structure learning algorithms: Naïve Bayes and Tree Augmented Naïve Bayes. We test SAYU on four different datasets and see a significant improvement in two out of the four applications. Furthermore, the theories that SAYU learns tend to consist of far fewer rules than the theories in the two-step approach. © Springer-Verlag Berlin Heidelberg 2005.
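Below is a minimal Python-style sketch of the score-as-you-use loop described in this abstract. The helper callables (propose_rule, rule_to_feature, train_bayes_net, score_classifier) are hypothetical stand-ins for the ILP rule generator, feature construction, Bayes net structure learning (Naïve Bayes or TAN), and classifier evaluation; this is an illustration of the idea, not the authors' implementation.

def sayu(examples, propose_rule, rule_to_feature, train_bayes_net,
         score_classifier, max_candidates):
    """Sketch of SAYU: keep a candidate rule only if it improves the classifier."""
    features = []                                        # one binary feature per accepted rule
    best = score_classifier(train_bayes_net(features, examples), examples)
    for _ in range(max_candidates):
        rule = propose_rule(examples)                    # candidate clause from the ILP search
        candidate = features + [rule_to_feature(rule)]   # rule becomes a binary feature
        score = score_classifier(train_bayes_net(candidate, examples), examples)
        if score > best:                                 # scored as it is used
            features, best = candidate, score
    return features, best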
2005
Authors
Davis, J; Burnside, E; Dutra, I; Page, D; Ramakrishnan, R; Costa, VS; Shavlik, J;
Publication
IJCAI International Joint Conference on Artificial Intelligence
Abstract
Statistical relational learning (SRL) constructs probabilistic models from relational databases. A key capability of SRL is the learning of arcs (in the Bayes net sense) connecting entries in different rows of a relational table, or in different tables. Nevertheless, SRL approaches currently are constrained to use the existing database schema. For many database applications, users find it profitable to define alternative "views" of the database, in effect defining new fields or tables. Such new fields or tables can also be highly useful in learning. We provide SRL with the capability of learning new views.
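As a small, purely illustrative example (not the paper's algorithm), a learned definition can be exposed to the learner as a database view, i.e. a new derived field computed from existing columns; the table and column names below are hypothetical.

import sqlite3

# Illustrative only: a view materializes a new boolean field that a
# subsequent SRL learning step could treat like any other attribute.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE finding (patient_id INTEGER, mass_size REAL, density INTEGER);
    CREATE VIEW derived_feature AS
        SELECT patient_id, (mass_size > 2.0 AND density >= 3) AS is_suspicious
        FROM finding;
""")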
2005
Authors
Faustino Da Silva, A; Costa, VS;
Publication
Journal of Universal Computer Science
Abstract
Interpreted languages are widely used due to their ease of use, portability, and safety. On the other hand, interpretation imposes a significant overhead. Just-in-Time (JIT) compilation is a popular approach to improving the runtime performance of languages such as Java. We compare the performance of a JIT compiler with a traditional compiler and with an emulator. We show that the compilation overhead from using JIT is negligible, and that the JIT compiler achieves better overall performance, making the case for aggressive compilation in JIT compilers. © J. UCS.
2005
Authors
Vargas, PK; De Castro Dutra, I; Dalto Do Nascimento, V; Santos, LAS; Da Silva, LC; Geyer, CFR; Schulze, B;
Publication
ACM International Conference Proceeding Series
Abstract
One of the challenges in grid computing research is to provide means to automatically submit, manage, and monitor applications which spawn a large number of tasks. The usual way of managing these tasks is to represent each one as an explicit node in a graph, and this is the approach taken by many grid systems to date. This approach can quickly saturate the machine where the application is launched as we increase the number of tasks. In this work we present and validate a novel architectural model, GRAND (Grid Robust ApplicatioN Deployment), whose main objective is to deal with the problem of memory and load saturation of the submission machine. GRAND is implemented at the middleware level, aiming at providing distributed task submission through a hierarchical organization. This paper provides an overview of the GRAND submission model as well as our implementation. Initial results show that our approach can be much more effective than other approaches in the literature. Copyright 2005 ACM.
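A hedged sketch, in Python, of the general idea of hierarchical submission: the launch machine keeps only one handle per group of tasks and delegates per-task bookkeeping to intermediate submitters. submit_group is a hypothetical callable standing in for whatever mechanism ships a group to an intermediate node; this is not the GRAND implementation itself.

def hierarchical_submit(tasks, group_size, submit_group):
    """Submit tasks in groups so the launch machine tracks groups, not tasks."""
    handles = []                                   # one handle per group, not per task
    for start in range(0, len(tasks), group_size):
        group = tasks[start:start + group_size]
        handles.append(submit_group(group))        # intermediate submitter expands the group
    return handles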
2005
Authors
Alves, S; Florido, M;
Publication
THEORETICAL COMPUTER SCIENCE
Abstract
We identify a restricted class of terms of the lambda calculus, here called weak linear, that includes the linear lambda-terms while keeping their good properties of strong normalization, non-duplicating reductions, and typability in polynomial time. The advantage of this class over the linear lambda-calculus is the possibility of transforming general terms into weak linear terms with the same normal form. We present such a transformation and prove its correctness by showing that it preserves normal forms.
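For reference, here is a small Python sketch of the strict linearity condition (every bound variable occurs exactly once in the body) on a toy lambda-term encoding. The weak linear class introduced in the paper relaxes this condition; its exact definition is given there, so this is only the strict baseline.

def occurrences(term, name):
    """Count free occurrences of `name` in a term encoded as nested tuples."""
    kind = term[0]
    if kind == "var":                               # ("var", x)
        return 1 if term[1] == name else 0
    if kind == "lam":                               # ("lam", x, body)
        return 0 if term[1] == name else occurrences(term[2], name)
    return occurrences(term[1], name) + occurrences(term[2], name)   # ("app", f, a)

def is_linear(term):
    """A term is linear if every lambda binds a variable used exactly once."""
    kind = term[0]
    if kind == "var":
        return True
    if kind == "lam":
        return occurrences(term[2], term[1]) == 1 and is_linear(term[2])
    return is_linear(term[1]) and is_linear(term[2])

# \x. x is linear; \x. x x duplicates its argument and is not.
assert is_linear(("lam", "x", ("var", "x")))
assert not is_linear(("lam", "x", ("app", ("var", "x"), ("var", "x"))))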
2004
Authors
Lopes, R; Costa, VS; Silva, F;
Publication
PRACTICAL ASPECTS OF DECLARATIVE LANGUAGES
Abstract
One of the major problems that current logic programming systems have to address is whether and how to prune undesirable parts of the search space. A region of the search space is definitely undesirable if it can only repeat previously found solutions, or if it is well known that the whole computation will fail. It may also be the case that we are interested in only a subset of the solutions. In this work we discuss how the BEAM addresses pruning issues. The BEAM is an implementation of David Warren's Extended Andorra Model. Because the BEAM relies on a very flexible execution mechanism, all the cases of pruning discussed above must be considered. We show that all these different forms of pruning can be supported, and we study their impact on applications.
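The three situations mentioned above can be illustrated with a generic backtracking sketch in Python (this shows the pruning situations only, not the BEAM's Extended Andorra Model machinery): branches known to fail are cut, solutions that merely repeat earlier ones are discarded, and the search stops once the requested number of solutions has been found.

def search(node, expand, is_solution, known_to_fail, wanted, solutions=None):
    """Collect up to `wanted` distinct solutions, pruning hopeless branches."""
    if solutions is None:
        solutions = []
    if len(solutions) >= wanted or known_to_fail(node):
        return solutions                       # enough solutions, or subtree cannot succeed
    if is_solution(node):
        if node not in solutions:              # do not repeat previously found solutions
            solutions.append(node)
        return solutions
    for child in expand(node):
        search(child, expand, is_solution, known_to_fail, wanted, solutions)
        if len(solutions) >= wanted:           # early exit once the subset is collected
            break
    return solutions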