2016
Authors
Machado, N; Lucia, B; Rodrigues, L;
Publication
ACM SIGPLAN NOTICES
Abstract
Concurrency bugs that stem from schedule-dependent branches are hard to understand and debug, because their root causes imply not only different event orderings, but also changes in the control flow between failing and non-failing executions. We present Cortex: a system that helps expose and understand concurrency bugs that result from schedule-dependent branches, without relying on information from failing executions. Cortex preemptively exposes failing executions by perturbing the order of events and the control-flow behavior in non-failing schedules observed in production runs of a program. By leveraging this production-run information, Cortex synthesizes executions to guide the search for failing schedules. Production-guided search helps cope with the large execution search space by targeting failing executions that are similar to observed non-failing executions. Evaluation on popular benchmarks shows that Cortex is able to expose failing schedules with only a few perturbations to non-failing executions, and within a practical amount of time.
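A hypothetical toy illustration (not code from the paper) of a schedule-dependent branch: a two-event program in which one write and one branching read interleave, and the branch outcome depends entirely on the event order. Reordering a non-failing production schedule, as Cortex does, exposes the failing one:

```python
def run(schedule):
    """Simulate a two-thread toy program under a fixed event order.
    'W' writes a shared flag; 'R' takes a schedule-dependent branch."""
    state = {"x": 0, "failed": False}

    def writer():                   # thread A: a single write event
        state["x"] = 1

    def reader():                   # thread B: branch depends on when W ran
        if state["x"] == 1:         # schedule-dependent branch
            state["failed"] = True  # buggy path, reached only after W

    events = {"W": writer, "R": reader}
    for e in schedule:
        events[e]()
    return state["failed"]

print(run("RW"))  # False: non-failing production order
print(run("WR"))  # True: one perturbation away, the failing order
```

Here the search space has only two schedules; the point is that the failing schedule lies a single event reordering away from the observed non-failing one.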
2016
Authors
Machado, N; Quinta, D; Lucia, B; Rodrigues, L;
Publication
ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY
Abstract
We present Symbiosis: a concurrency debugging technique based on novel differential schedule projections (DSPs). A DSP shows the small set of memory operations and dataflows responsible for a failure, as well as a reordering of those elements that avoids the failure. To build a DSP, Symbiosis first generates a full, failing, multithreaded schedule via thread path profiling and symbolic constraint solving. It then selectively reorders events in the failing schedule to produce a nonfailing, alternate schedule. A DSP reports the ordering and dataflow differences between the failing and nonfailing schedules. Our evaluation on buggy real-world software and benchmarks shows that, in practical time, Symbiosis generates DSPs that both isolate the small fraction of event orders and dataflows responsible for the failure and report which event reorderings prevent it. In our experiments, DSPs contain 90% fewer events and 96% fewer dataflows than the full failure-inducing schedules. We also conducted a user study showing that, by allowing developers to focus on only a few events, DSPs reduce the time required to understand the bug's root cause and find a valid fix.
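A minimal sketch of the ordering half of a differential report (hypothetical helper, not the paper's algorithm): given a failing and a nonfailing schedule over the same events, list only the event pairs whose relative order differs, which is the small set a developer needs to inspect:

```python
from itertools import combinations

def order_differences(failing, nonfailing):
    """Event pairs whose relative order differs between the two
    schedules -- a toy analogue of a DSP's ordering report.
    Assumes both schedules contain the same events, each once."""
    pos_f = {e: i for i, e in enumerate(failing)}
    pos_n = {e: i for i, e in enumerate(nonfailing)}
    return [(a, b) for a, b in combinations(failing, 2)
            if (pos_f[a] < pos_f[b]) != (pos_n[a] < pos_n[b])]

print(order_differences(["W1", "R2", "W3"], ["R2", "W1", "W3"]))
# [('W1', 'R2')] -- only this reordering distinguishes the schedules
```

Even in this three-event toy, the projection shrinks three candidate pairs to one, mirroring the 90%/96% reductions reported above.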
2016
Authors
Silva, JMC; Carvalho, P; Bispo, KA; Lima, SR;
Publication
UBIQUITOUS COMPUTING AND AMBIENT INTELLIGENCE, UCAMI 2016, PT II
Abstract
This paper proposes a self-adaptive sampling scheme for WSNs, which aims to accurately capture the behavior of the physical parameters of interest in each specific WSN context while reducing the overhead in terms of sensing events. The sampling scheme relies on a set of low-complexity rules capable of auto-regulating the sensing frequency in accordance with each parameter's behavior. As a proof of concept, based on real environmental datasets, we provide statistical indicators illustrating the added value of the proposed sampling scheme in reducing sensing events without compromising the estimation accuracy of physical phenomena.
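One way such a low-complexity rule can look (a sketch under assumed names and thresholds, not the paper's actual rule set): lengthen the sensing interval while consecutive readings stay stable, and shorten it as soon as the parameter changes faster than a threshold:

```python
def next_interval(prev, curr, interval, delta=0.5,
                  min_i=1.0, max_i=60.0, factor=2.0):
    """Low-complexity adaptation rule: back off when the signal is
    stable, speed up when it changes by more than `delta` between
    readings. All names, units, and defaults are illustrative."""
    if abs(curr - prev) > delta:
        return max(min_i, interval / factor)   # parameter active: sample faster
    return min(max_i, interval * factor)       # parameter stable: sample slower

print(next_interval(20.0, 20.1, 8.0))  # stable temperature -> 16.0 s
print(next_interval(20.0, 21.0, 8.0))  # fast change -> 4.0 s
```

The clamps `min_i`/`max_i` bound the sensing frequency so the node neither misses fast dynamics nor drains its battery on a flat signal.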
2016
Authors
Silva, JMC;
Publication
Abstract
2016
Authors
Oliveira, A; Perdigao, C; Santos, LP; Proenca, A;
Publication
2016 23RD PORTUGUESE MEETING ON COMPUTER GRAPHICS AND INTERACTION (EPCGI)
Abstract
The CG research community has a renewed interest in rendering algorithms based on path space integration, mainly due to new approaches to discover, generate, and exploit relevant light paths while keeping the numerical integrator unbiased or, at the very least, consistent. Simultaneously, the current trend towards massive parallelism and heterogeneous environments, based on a mix of conventional computing units with accelerators, is playing a major role both in HPC and embedded platforms. To efficiently use the available resources in these and future systems, algorithms and software packages are being revisited and reevaluated to assess their suitability for these environments. This paper assesses the performance and scalability of three different path-based algorithms running on homogeneous servers (dual multicore Xeons) and heterogeneous systems (those multicores plus manycore Xeon and NVIDIA Kepler GPU devices). These algorithms include path tracing (PT), its bidirectional counterpart (BPT), and the more recent Vertex Connection and Merging (VCM). Experimental results with two conventional scenes (one mainly diffuse, the other exhibiting specular-diffuse-specular paths) show that all algorithms scale well across the different platforms, with the actual scalability depending on whether shared data structures are accessed (PT vs. BPT vs. VCM).
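The unbiasedness all three algorithms preserve is that of the underlying Monte Carlo estimator over path space. A one-dimensional sketch of that principle (illustrative only, far simpler than an actual path tracer):

```python
import random

def mc_estimate(f, n, rng=random.Random(42)):
    """Unbiased Monte Carlo estimate of the integral of f over [0, 1]:
    the sample mean of f at uniformly random points. Path tracing
    applies the same principle over the space of light paths."""
    return sum(f(rng.random()) for _ in range(n)) / n

# Integral of 3x^2 over [0, 1] is exactly 1; the estimate converges to it.
print(mc_estimate(lambda x: 3 * x * x, 100_000))
```

Each independent sample is itself an unbiased estimate, which is why the estimator parallelizes so naturally across the multicore and manycore devices the paper evaluates.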
2016
Authors
Ferreira, JF; Mendes, A;
Publication
JOURNAL OF LOGICAL AND ALGEBRAIC METHODS IN PROGRAMMING
Abstract
This paper proposes a calculational approach to prove properties of two well-known binary trees used to enumerate the rational numbers: the Stern-Brocot tree and the Eisenstein-Stern tree (also known as the Calkin-Wilf tree). The calculational style of reasoning is enabled by a matrix formulation that is well suited to formulating path-based properties, since it provides a natural way to refer to paths in the trees. Three new properties are presented. First, we show that nodes with palindromic paths contain the same rational in both the Stern-Brocot and Eisenstein-Stern trees. Second, we show how certain numerators and denominators in these trees can be written as the sum of two squares x² and y², with the rational x/y appearing in specific paths. Finally, we show how to construct Sierpiński's triangle from these trees of rationals. (C) 2015 Published by Elsevier Inc.
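The standard matrix formulation of these trees can be sketched as follows (conventions for the child matrices vary across presentations; this is one common choice, not necessarily the paper's exact notation). Each node is a 2x2 matrix, left/right steps multiply by fixed matrices, and the Eisenstein-Stern value at a path equals the Stern-Brocot value at the reversed path, which makes the palindromic-path property immediate:

```python
from fractions import Fraction

L = ((1, 0), (1, 1))  # left-child matrix
R = ((1, 1), (0, 1))  # right-child matrix

def matmul(A, B):
    """2x2 integer matrix product."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def value(M):
    (a, b), (c, d) = M            # node [[a, b], [c, d]] holds (a+b)/(c+d)
    return Fraction(a + b, c + d)

def stern_brocot(path):
    """Rational at the given 'L'/'R' path in the Stern-Brocot tree."""
    M = ((1, 0), (0, 1))          # root: identity matrix, value 1/1
    for step in path:
        M = matmul(M, L if step == "L" else R)
    return value(M)

def eisenstein_stern(path):
    """Same path in the Eisenstein-Stern (Calkin-Wilf) tree: the
    step matrices are multiplied in the reverse order."""
    return stern_brocot(path[::-1])

print(stern_brocot("LR"), eisenstein_stern("LR"))      # 2/3 3/2
print(stern_brocot("LRL") == eisenstein_stern("LRL"))  # palindrome: True
```

For a palindromic path the matrix product reads the same in both directions, so the two trees hold the same rational at that node, which is the paper's first property.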