2016
Authors
Jongmans, SSTQ; Clarke, D; Proença, J;
Publication
Sci. Comput. Program.
Abstract
We present a procedure for splitting processes in a process algebra with multiactions and data (the untimed subset of the specification language mCRL2). This splitting procedure cuts a process into two processes along a set of actions A: roughly, one of these processes contains no actions from A, while the other contains only actions from A. We state and prove a theorem asserting that the parallel composition of these two processes, under an appropriate notion of synchronization, is provably equal to the original process from a set of axioms that is sound and complete with respect to strong bisimilarity. We apply our splitting procedure to the process-algebraic semantics of the coordination language Reo: using this procedure and its related theorem, we formally establish the soundness of splitting Reo connectors along the boundaries of their (a)synchronous regions in implementations of Reo. Such splitting can significantly improve the performance of connectors, as shown elsewhere.
2016
Authors
Lafuente, AL; Proença, J;
Publication
COORDINATION
Abstract
2016
Authors
Madeira, A; Neves, R; Martins, MA;
Publication
JOURNAL OF LOGICAL AND ALGEBRAIC METHODS IN PROGRAMMING
Abstract
In recent decades, dynamic logics have been used in different domains as a suitable formalism to reason about and specify a wide range of systems. Logics with many-valued semantics, on the other hand, are emerging as an interesting tool for handling devices and scenarios where uncertainty is a prime concern. This paper contributes towards the combination of these two aspects through the development of a method for the systematic construction of many-valued dynamic logics. Technically, the method is parameterised by an action lattice that defines both the computational paradigm and the truth space (corresponding to the underlying Kleene algebra and residuated lattice, respectively).
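To make the parameter concrete: the following is a minimal, illustrative sketch (not taken from the paper) of one possible action lattice, a finite Gödel lattice over the truth values {0, 0.5, 1}, showing the operations such a parameter supplies: join for choice, composition for sequencing, residuum for implication, and Kleene star for iteration. All names here are hypothetical.

```python
# Illustrative sketch (not from the paper): a finite Godel action lattice
# over truth values {0.0, 0.5, 1.0}, one possible parameter for the
# construction of a many-valued dynamic logic.

class GodelActionLattice:
    """Join (+), composition (;), residuum (->), and Kleene star (*)."""
    values = (0.0, 0.5, 1.0)

    def join(self, x, y):        # choice / logical "or": least upper bound
        return max(x, y)

    def compose(self, x, y):     # sequencing / "and": Godel t-norm (minimum)
        return min(x, y)

    def residuum(self, x, y):    # implication: largest z with compose(x, z) <= y
        return 1.0 if x <= y else y

    def star(self, x):           # Kleene star: iteration is at least "skip"
        return 1.0


L = GodelActionLattice()
# Truth degree of "a then b" when a holds to degree 0.5 and b to degree 1.0:
print(L.compose(0.5, 1.0))   # -> 0.5
# Implication from degree 0.5 to degree 1.0 is fully true:
print(L.residuum(0.5, 1.0))  # -> 1.0
```

Swapping in a different lattice (e.g. a Łukasiewicz one) would change both how program composition and how truth degrees are computed, which is exactly the parameterisation the abstract describes.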
2016
Authors
Machado, N; Lucia, B; Rodrigues, L;
Publication
ACM SIGPLAN NOTICES
Abstract
Concurrency bugs that stem from schedule-dependent branches are hard to understand and debug, because their root causes imply not only different event orderings, but also changes in the control flow between failing and non-failing executions. We present Cortex: a system that helps expose and understand concurrency bugs that result from schedule-dependent branches, without relying on information from failing executions. Cortex preemptively exposes failing executions by perturbing the order of events and control-flow behavior in non-failing schedules from production runs of a program. Leveraging information from those production runs, Cortex synthesizes executions to guide the search for failing schedules. Production-guided search helps cope with the large execution search space by targeting failing executions that are similar to observed non-failing executions. Evaluation on popular benchmarks shows that Cortex is able to expose failing schedules with only a few perturbations to non-failing executions, and takes a practical amount of time.
2016
Authors
Machado, N; Quinta, D; Lucia, B; Rodrigues, L;
Publication
ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY
Abstract
We present Symbiosis: a concurrency debugging technique based on novel differential schedule projections (DSPs). A DSP shows the small set of memory operations and dataflows responsible for a failure, as well as a reordering of those elements that avoids the failure. To build a DSP, Symbiosis first generates a full, failing, multithreaded schedule via thread path profiling and symbolic constraint solving. Symbiosis then selectively reorders events in the failing schedule to produce a non-failing, alternate schedule. A DSP reports the ordering and dataflow differences between the failing and non-failing schedules. Our evaluation on buggy real-world software and benchmarks shows that, in practical time, Symbiosis generates DSPs that both isolate the small fraction of event orders and dataflows responsible for the failure and report which event reorderings prevent the failure. In our experiments, DSPs contain 90% fewer events and 96% fewer dataflows than the full failure-inducing schedules. We also conducted a user study showing that, by allowing developers to focus on only a few events, DSPs reduce the time required to understand the bug's root cause and find a valid fix.
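The core of a schedule diff can be illustrated with a few lines of code. The sketch below is not Symbiosis itself, only a minimal toy in its spirit: given a failing and a non-failing schedule of (thread, operation) events, it reports the pairs of events whose relative order differs between the two. All event names are hypothetical.

```python
# Toy sketch of ordering differences between two schedules, in the spirit
# of a differential schedule projection (NOT the Symbiosis implementation).
# Events are (thread, operation) pairs; schedules are total orders on them.

from itertools import combinations

def order_differences(failing, nonfailing):
    """Return event pairs whose relative order differs between the schedules."""
    pos_f = {e: i for i, e in enumerate(failing)}
    pos_n = {e: i for i, e in enumerate(nonfailing)}
    diffs = []
    for a, b in combinations(failing, 2):
        if a in pos_n and b in pos_n:
            # A pair is reported iff its order flips between the schedules.
            if (pos_f[a] < pos_f[b]) != (pos_n[a] < pos_n[b]):
                diffs.append((a, b))
    return diffs

failing    = [("T1", "write x"), ("T2", "read x"), ("T2", "branch")]
nonfailing = [("T2", "read x"), ("T1", "write x"), ("T2", "branch")]
print(order_differences(failing, nonfailing))
# -> [(('T1', 'write x'), ('T2', 'read x'))]
```

In this toy example the single flipped pair (the write/read race on x) is exactly the reordering that separates failing from non-failing runs; the paper's DSPs additionally project away events irrelevant to the failure and track dataflow differences.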
2016
Authors
Silva, JMC; Carvalho, P; Bispo, KA; Lima, SR;
Publication
UBIQUITOUS COMPUTING AND AMBIENT INTELLIGENCE, UCAMI 2016, PT II
Abstract
This paper proposes a self-adaptive sampling scheme for WSNs, which aims to accurately capture the behavior of the physical parameters of interest in each specific WSN context while reducing the overhead in terms of sensing events. The sampling scheme relies on a set of low-complexity rules capable of auto-regulating the sensing frequency according to each parameter's behavior. As proof of concept, based on real environmental datasets, we provide statistical indicators illustrating the added value of the proposed sampling scheme in reducing sensing events without compromising the estimation accuracy of physical phenomena.
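A rule of this kind can be sketched in a few lines. The following is an illustrative example only, not the paper's rule set: a sensor doubles its sampling interval when consecutive readings are stable and halves it when they vary quickly. The thresholds, bounds, and function name are all hypothetical.

```python
# Illustrative sketch (not the paper's rules): a low-complexity rule that
# auto-regulates a sensor's sampling interval from the change between
# consecutive readings. Thresholds and interval bounds are hypothetical.

def next_interval(interval, prev, curr,
                  low=0.1, high=1.0, min_i=1, max_i=60):
    """Halve the interval on fast variation, double it when the signal is stable."""
    delta = abs(curr - prev)
    if delta > high:                       # fast variation: sample more often
        return max(min_i, interval // 2)
    if delta < low:                        # stable signal: back off, save energy
        return min(max_i, interval * 2)
    return interval                        # moderate variation: keep current rate

# Simulate three consecutive temperature readings (seconds between samples):
interval = 8
for prev, curr in [(20.0, 20.01), (20.01, 23.5), (23.5, 23.9)]:
    interval = next_interval(interval, prev, curr)
print(interval)  # -> 8
```

In the simulated run the interval first doubles on the stable pair, then halves back when the reading jumps, which is the auto-regulating behavior the abstract describes; a real deployment would tune the thresholds per physical parameter.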