
Publications by Ricardo Rocha

2014

A Linear Logic Programming Language for Concurrent Programming over Graph Structures

Authors
Cruz, F; Rocha, R; Goldstein, SC; Pfenning, F;

Publication
THEORY AND PRACTICE OF LOGIC PROGRAMMING

Abstract
We have designed a new logic programming language called LM (Linear Meld) for programming graph-based algorithms in a declarative fashion. Our language is based on linear logic, an expressive logical system where logical facts can be consumed. Because LM integrates both classical and linear logic, it tends to be more expressive than other logic programming languages. LM programs are naturally concurrent because facts are partitioned by nodes of a graph data structure. Computation is performed at the node level while communication happens between connected nodes. In this paper, we present the syntax and operational semantics of our language and illustrate its use through a number of examples.
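For readers unfamiliar with the style of program the abstract refers to, the sketch below is a hypothetical illustration written in ordinary Prolog, not in LM: facts describe the edges of a graph and a rule propagates a visit along them. It cannot show what makes LM distinctive, namely forward chaining over linear facts that are consumed as the computation spreads from node to node.

    % Hypothetical sketch in plain Prolog (not LM syntax): a declarative
    % graph program where facts are attached to the nodes of a graph.
    edge(a, b).  edge(a, c).  edge(b, d).  edge(c, d).

    % visited(Node) holds when a visit starting at node a reaches Node.
    visited(a).
    visited(Y) :- edge(X, Y), visited(X).

    % ?- visited(d).   succeeds: the visit propagates a -> b -> d.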

2013

On Comparing Alternative Splitting Strategies for Or-Parallel Prolog Execution on Multicores

Authors
Vieira, Rui; Rocha, Ricardo; Silva, Fernando M. A.;

Publication
CoRR

Abstract

2017

On scaling dynamic programming problems with a multithreaded tabling Prolog system

Authors
Areias, M; Rocha, R;

Publication
JOURNAL OF SYSTEMS AND SOFTWARE

Abstract
Tabling is a powerful implementation technique that improves the declarativeness and expressiveness of traditional Prolog systems in dealing with recursion and redundant computations. It can be viewed as a natural tool to implement dynamic programming problems, where a general recursive strategy divides a problem into simpler sub-problems that are often the same. When tabling is combined with multithreading, we have the best of both worlds, since we can exploit the combination of higher declarative semantics with higher procedural control. However, at the engine level, such a combination is very difficult to exploit for dynamic programming problems in order to achieve execution scalability as we increase the number of running threads. In this work, we focus on two well-known dynamic programming problems, the Knapsack and the Longest Common Subsequence problems, and we discuss how we were able to scale their execution using the multithreaded tabling engine of the Yap Prolog system. To the best of our knowledge, this is the first work showing a Prolog system able to scale the execution of multithreaded dynamic programming problems. Our experiments also show that our system can achieve comparable or even better speedup results than other parallel implementations of the same problems.
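As a concrete illustration of tabling as a dynamic programming tool, the fragment below is a minimal, hypothetical sketch of the Longest Common Subsequence written as a tabled Prolog predicate; it assumes a system with the standard table/1 directive (e.g., YAP or SWI-Prolog) and is not the benchmark code used in the paper. Tabling memoises the overlapping sub-problems, so each sub-problem is solved only once.

    % Hypothetical sketch, not the paper's benchmark: Longest Common
    % Subsequence as a tabled predicate. Tabling caches each sub-call,
    % turning the naive recursion into a dynamic programming evaluation.
    :- table lcs/3.

    % lcs(+Xs, +Ys, -Len): Len is the length of the longest common
    % subsequence of the lists Xs and Ys.
    lcs([], _, 0).
    lcs(_, [], 0).
    lcs([X|Xs], [X|Ys], Len) :-        % heads match: extend the subsequence
        lcs(Xs, Ys, Len0),
        Len is Len0 + 1.
    lcs([X|Xs], [Y|Ys], Len) :-        % heads differ: best of skipping either
        X \== Y,
        lcs(Xs, [Y|Ys], Len1),
        lcs([X|Xs], Ys, Len2),
        Len is max(Len1, Len2).

    % ?- lcs([a,b,c,b,d,a,b], [b,d,c,a,b,a], Len).
    % Len = 4.

In a multithreaded tabling engine, several threads may evaluate tabled calls like these concurrently; how well such evaluations scale with the number of threads is what the paper studies.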

2014

On the correctness and efficiency of lock-free expandable tries for tabled logic programs

Authors
Areias, M; Rocha, R;

Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract
Tabling is an implementation technique that improves the declarativeness and expressiveness of Prolog in dealing with recursion and redundant sub-computations. A critical component in the implementation of an efficient tabling framework is the design of the data structures and algorithms to access and manipulate tabled data. One of the most successful data structures for tabling is tries. In previous work, our initial approach to deal with concurrent table accesses, implemented on top of the Yap Prolog system, was to use lock-based trie data structures. In this work, we propose a new design based on lock-free data structures and, in particular, we focus our discussion on the correctness and efficiency of extending Yap's tabling framework to support lock-free expandable tries. Experimental results show that our new lock-free design can effectively reduce the execution time and scale better, when increasing the number of threads, than the original lock-based design.
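The lock-free tries live inside the tabling engine, so they cannot be shown at the Prolog level; what can be sketched is the workload that exercises them: several threads issuing tabled calls at the same time. The snippet below is a hypothetical illustration assuming a Prolog system with multithreaded tabling and a SWI-style thread API (thread_create/3, thread_join/2); the predicate names are invented for the example.

    % Hypothetical workload sketch (not code from the paper): several
    % threads evaluating the same tabled predicate concurrently, which is
    % the access pattern that concurrent table data structures must handle.
    :- table path/2.

    edge(1, 2).  edge(2, 3).  edge(3, 4).

    path(X, Y) :- edge(X, Y).
    path(X, Z) :- edge(X, Y), path(Y, Z).

    % run_threads(+N): launch N threads, each enumerating all solutions of
    % the tabled goal path(1, _), and wait for all of them to finish.
    run_threads(N) :-
        findall(Id,
                ( between(1, N, _),
                  thread_create(forall(path(1, _), true), Id, [])
                ),
                Ids),
        forall(member(Id, Ids), thread_join(Id, _)).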

2013

Or-parallel Prolog execution on clusters of multicores

Authors
Santos, J; Rocha, R;

Publication
OpenAccess Series in Informatics

Abstract
Logic Programming languages, such as Prolog, provide an excellent framework for the parallel execution of logic programs. In particular, the inherent non-determinism in the way logic programs are structured makes Prolog very attractive for the exploitation of implicit parallelism. One of the most noticeable sources of implicit parallelism in Prolog programs is or-parallelism. Or-parallelism arises from the simultaneous evaluation of a subgoal call against the clauses that match that call. Arguably, the most successful model for or-parallelism is environment copying, which has been used efficiently in the implementation of or-parallel Prolog systems on both shared memory and distributed memory architectures. Nowadays, multicores and clusters of multicores are becoming the norm and, although many parallel Prolog systems have been developed in the past, to the best of our knowledge none of them was specifically designed to exploit the combination of shared and distributed memory architectures. Motivated by our past experience in designing and developing parallel Prolog systems based on environment copying, we propose a novel computational model to efficiently exploit implicit parallelism from large-scale real-world applications, specialized for the novel architectures based on clusters of multicores.
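To make the notion of or-parallelism concrete, the hypothetical fragment below (not taken from the paper) shows where it comes from: the call flight(lisbon, Via) matches several clauses, and each matching clause opens an alternative, an or-branch, that an or-parallel engine can explore on a different worker without any change or annotation to the program.

    % Hypothetical illustration of a source of or-parallelism: the call
    % flight(lisbon, Via) matches three clauses, and each alternative can
    % be searched by a different worker.
    flight(lisbon, porto).
    flight(lisbon, madrid).
    flight(lisbon, paris).
    flight(porto,  london).
    flight(madrid, rome).

    % route(From, To, Path): Path is a sequence of cities from From to To.
    route(From, To, [From, To]) :-
        flight(From, To).
    route(From, To, [From | Rest]) :-
        flight(From, Via),
        route(Via, To, Rest).

    % ?- route(lisbon, rome, Path).
    % Path = [lisbon, madrid, rome].
    % The three alternatives for flight(lisbon, Via) form independent
    % or-branches that can be explored simultaneously.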

2016

Declarative Coordination of Graph-based Parallel Programs

Authors
Cruz, F; Rocha, R; Goldstein, SC;

Publication
ACM SIGPLAN NOTICES

Abstract
Declarative programming has been hailed as a promising approach to parallel programming since it makes it easier to reason about programs while hiding the implementation details of parallelism from the programmer. However, its advantage is also its disadvantage, as it leaves the programmer with no straightforward way to optimize programs for performance. In this paper, we introduce Coordinated Linear Meld (CLM), a concurrent forward-chaining linear logic programming language with a declarative way to coordinate the execution of parallel programs, allowing the programmer to specify arbitrary scheduling and data partitioning policies. Our approach allows the programmer to write graph-based declarative programs and then optionally use coordination to fine-tune parallel performance. In this paper we specify the set of coordination facts, discuss their implementation in a parallel virtual machine, and show, through examples, how they can be used to optimize parallel execution. We compare the performance of CLM programs against the original uncoordinated Linear Meld and several other frameworks.
