
Publications by HASLab

2010

On adding structure to unstructured overlay networks

Authors
Leitao, J; Carvalho, NA; Pereira, J; Oliveira, R; Rodrigues, L;

Publication
Handbook of Peer-to-Peer Networking

Abstract
Unstructured peer-to-peer overlay networks are very resilient to churn and topology changes, while requiring little maintenance cost. They are therefore a suitable infrastructure on which to build highly scalable services in dynamic networks. Typically, the overlay topology is defined by a peer sampling service that aims at maintaining, in each process, a random partial view of the peers in the system. The resulting random unstructured topology is suboptimal when a specific performance metric is considered. On the other hand, structured approaches (for instance, a spanning tree) may optimize a given target performance metric but are highly fragile. In fact, the cost of maintaining structures with strong constraints may easily become prohibitive in highly dynamic networks. This chapter discusses different techniques that aim at combining the advantages of unstructured and structured networks. Namely, we focus on two distinct approaches, one based on optimizing the overlay and another based on optimizing the gossip mechanism itself. © 2010 Springer Science+Business Media, LLC.
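
The peer sampling service mentioned in the abstract can be pictured as each node periodically shuffling a bounded, random partial view with a neighbour. The following is a minimal sketch of that idea, assuming a fixed view size and shuffle length chosen purely for illustration; it is not the protocol discussed in the chapter.

```python
import random

VIEW_SIZE = 7    # assumed bound on the partial view (illustrative)
SHUFFLE_LEN = 3  # assumed number of entries exchanged per round

class Node:
    def __init__(self, node_id, seeds):
        self.id = node_id
        self.view = set(seeds)  # random partial view of other peers

    def shuffle_with(self, other):
        """One gossip round: exchange random view subsets with a peer."""
        sent = set(random.sample(sorted(self.view), min(SHUFFLE_LEN, len(self.view))))
        recv = set(random.sample(sorted(other.view), min(SHUFFLE_LEN, len(other.view))))
        self._merge(recv | {other.id})
        other._merge(sent | {self.id})

    def _merge(self, incoming):
        self.view |= incoming - {self.id}
        while len(self.view) > VIEW_SIZE:  # trim back to the bound
            self.view.discard(random.choice(sorted(self.view)))
```

Repeating such shuffles keeps every view small and roughly random, which is what gives the unstructured topology its resilience to churn.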

2010

On the Expressiveness and Trade-Offs of Large Scale Tuple Stores

Authors
Vilaca, R; Cruz, F; Oliveira, R;

Publication
ON THE MOVE TO MEANINGFUL INTERNET SYSTEMS: OTM 2010, PT II

Abstract
Massive-scale distributed computing is a challenge at our doorstep. The current exponential growth of data calls for massive-scale storage and processing capabilities. This is being acknowledged by several major Internet players embracing the cloud computing model and offering first-generation distributed tuple stores. Having all started from similar requirements, these systems ended up providing a similar service: a simple tuple store interface that allows applications to insert, query, and remove individual elements. Furthermore, while availability is commonly assumed to be sustained by the massive scale itself, data consistency and freshness are usually severely hindered. By doing so, these services focus on a specific, narrow trade-off between consistency, availability, performance, scale, and migration cost that is much less attractive to common business needs. In this paper we introduce Data Droplets, a novel tuple store that shifts the current trade-off towards the needs of common business users, providing additional consistency guarantees and higher-level data processing primitives that smooth the migration path for existing applications. We present a detailed comparison between Data Droplets and existing systems regarding their data model, architecture, and trade-offs. Preliminary results of the system's performance under a realistic workload are also presented.
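
The "simple tuple store interface" the abstract refers to, with insert, query, and remove of individual elements, can be sketched as below. This is a hypothetical in-memory stand-in for illustration, not the Data Droplets API.

```python
class TupleStore:
    """Minimal in-memory sketch of a tuple store interface."""

    def __init__(self):
        self._data = {}

    def insert(self, key, value):
        self._data[key] = value

    def query(self, key):
        return self._data.get(key)  # None if the key is absent

    def remove(self, key):
        self._data.pop(key, None)
```

Part of what such systems exploit is that this narrow, per-element interface lets each tuple be handled independently, which is also what makes the consistency trade-offs discussed in the paper possible.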

2010

Scalable Transactions in the Cloud: Partitioning Revisited

Authors
Maia, F; Armendariz-Inigo, JE; Ruiz-Fuertes, MI; Oliveira, R;

Publication
ON THE MOVE TO MEANINGFUL INTERNET SYSTEMS: OTM 2010, PT II

Abstract
Cloud computing is becoming one of the most widely used paradigms for deploying highly available and scalable systems. These systems usually demand the management of huge amounts of data, which cannot be handled by traditional or replicated database systems as we know them. Recent solutions store data in special key-value structures, in an approach that commonly lacks the consistency provided by transactional guarantees, as it is traded for high scalability and availability. In order to ensure consistent access to the information, the use of transactions is required. However, it is well known that traditional replication protocols do not scale well in a cloud environment. Here we take a look at current proposals to deploy transactional systems in the cloud and we propose a new system that aims to be a step forward in achieving this goal. We then focus on data partitioning and describe the key role it plays in achieving high scalability.
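
To make the partitioning idea concrete, one common scheme (an assumption for this sketch, not necessarily the paper's design) hashes each data key to a fixed partition; a transaction whose keys all map to one partition can run locally, while one spanning several partitions needs costlier coordination.

```python
import hashlib

NUM_PARTITIONS = 4  # assumed partition count (illustrative)

def partition_of(key: str) -> int:
    """Deterministically map a key to a partition."""
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

def partitions_touched(keys):
    """Partitions a transaction would involve; a singleton set means
    the transaction can execute entirely within one partition."""
    return {partition_of(k) for k in keys}
```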

2010

Practical Database Replication

Authors
Correia, A; Pereira, J; Rodrigues, L; Carvalho, N; Oliveira, R;

Publication
REPLICATION: THEORY AND PRACTICE

Abstract
This chapter illustrates how the concepts and algorithms described earlier in this book can be used to build practical database replication systems. This is achieved first by addressing the architectural challenges of how the required functionality is provided by generally available software components, and then of how the different components can be efficiently integrated. A second set of practical challenges arises from experience of how performance assumptions map to actual environments and real workloads. The result is a generic architecture for replicated database management systems, focusing on the interfaces between key components, and on how different algorithmic and practical optimization options map to real-world gains. This shows how consistent database replication is achievable in the current state of the art.
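
One well-known approach from this literature is certification-based replication: transactions execute locally at one replica, their read and write sets are delivered to all replicas in a total order, and each replica deterministically certifies them against previously committed transactions. The sketch below illustrates only that certification step, under assumed per-key versioning; it is not the chapter's architecture.

```python
class Replica:
    def __init__(self):
        self.store = {}    # committed key -> value
        self.version = {}  # committed key -> version number

    def certify_and_apply(self, read_set, write_set):
        """read_set: key -> version observed when the transaction read it.
        Abort on any stale read; otherwise apply the write set."""
        for key, seen in read_set.items():
            if self.version.get(key, 0) != seen:
                return False  # conflict: another transaction committed in between
        for key, value in write_set.items():
            self.store[key] = value
            self.version[key] = self.version.get(key, 0) + 1
        return True
```

Because every replica receives the same transactions in the same order, all replicas reach the same commit/abort decisions without further coordination.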

2010

Safe Integration of Annotated Components in Open Source Projects

Authors
Areias, S; da Cruz, DC; Henriques, PR; Pinto, JS;

Publication
ECEASST

Abstract
The decision of whether to use existing software components or to build custom software from scratch is one of the most complex and important choices of the entire development/integration process. However, the reuse of software components raises a spectrum of issues, from requirements negotiation to product selection and integration. The right trade-off is reached only after analyzing the advantages and issues associated with reuse. Despite reuse failures in real cases, many efforts have been made to make this idea successful. In this context of software reuse in open source projects, we address the problem of reusing annotated components and propose a rigorous approach to assure the quality of the application under construction. We introduce the concept of caller-based slicing as a way of certifying that the integration of a component annotated with a contract into a system will preserve the correct behavior of the former, avoiding malfunctioning after integration. To complement these efforts and the benefits of slicing techniques, there is also a need for an efficient way to visualize the main program together with the annotated components and the slices. To take full advantage of visualization, it is crucial to combine the visualization of control/data flow with the textual representation of the source code. To attain this objective, we extend the notions of System Dependence Graph and Slicing Criterion to cope with annotations. © 2010, Universitatsbibliothek TU Berlin.
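
As a toy illustration of what "a component annotated with a contract" means (a hypothetical example, not taken from the paper), the component below carries its contract as executable pre/postcondition checks; a caller-based slice would keep only those caller statements that determine the arguments at the call site, which is what must be inspected to certify the precondition.

```python
def sqrt_component(x: float) -> float:
    """Annotated component: contract expressed as runtime checks."""
    assert x >= 0.0, "precondition: x must be non-negative"
    r = x ** 0.5
    assert abs(r * r - x) < 1e-9, "postcondition: r squares back to x"
    return r

def caller():
    a = 2.0        # kept by a caller-based slice: flows into x
    note = "log"   # removable: irrelevant to the call's precondition
    print(note)
    return sqrt_component(a)
```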

2010

Assertion-based slicing and slice graphs

Authors
Barros, JB; Da Cruz, D; Henriques, PR; Pinto, JS;

Publication
Proceedings - Software Engineering and Formal Methods, SEFM 2010

Abstract
This paper revisits the idea of slicing programs based on their axiomatic semantics, rather than using criteria based on control/data dependencies. We show how the forward propagation of preconditions and the backward propagation of postconditions can be combined in a new slicing algorithm that is more precise than the existing specification-based algorithms. The algorithm is based on (i) a precise test for removable statements, and (ii) the construction of a slice graph, a program control flow graph extended with semantic labels. It improves on previous approaches in two respects: it does not fail to identify removable commands, and it produces the smallest possible slice that can be obtained (in a sense that will be made precise). The paper also reviews in detail, through examples, the ideas behind the use of preconditions and postconditions for slicing programs. © 2010 IEEE.
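
A toy example of the underlying idea (hypothetical, not the paper's algorithm): given the contract {x > 0} ... {y > 0}, propagating the precondition forward and the postcondition backward exposes statements whose removal leaves the contract valid.

```python
# Contract for f (illustrative): precondition x > 0, postcondition y > 0.
def f(x):
    # forward propagation: {x > 0} holds here
    y = x + 1   # establishes {y > 0}; must be kept
    # backward propagation: the postcondition {y > 0} already holds,
    # so later statements that cannot affect y are removable:
    z = x - 5   # removable: dead with respect to {y > 0}
    return y
```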
