2019
Authors
Pontes, R; Maia, F; Vilaça, R; Machado, N;
Publication
SRDS
Abstract
Privacy-sensitive applications that store confidential information such as personally identifiable data or medical records have strict security concerns. These concerns hinder the adoption of the cloud. With cloud providers under the constant threat of malicious attacks, a single successful breach is sufficient to exploit any valuable information and disclose sensitive data. Existing privacy-aware databases mitigate some of these concerns, but still leak critical information that can potentially compromise the entire system's security. This paper proposes d'Artagnan, the first privacy-aware multi-cloud NoSQL database framework that renders database leaks worthless. The framework stores data as encrypted secrets in multiple clouds such that i) a single data breach cannot break the database's confidentiality and ii) queries are processed on the server-side without leaking any sensitive information. d'Artagnan is evaluated with an industry-standard benchmark on market-leading cloud providers.
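The abstract does not specify the secret-sharing scheme d'Artagnan uses; as an illustrative sketch only, additive (XOR) splitting shows the core property that a breach of any single cloud reveals nothing, since every share but the last is uniformly random (function names here are hypothetical, not from the paper):

```python
import os
from functools import reduce

def split_secret(data: bytes, n: int = 3) -> list[bytes]:
    """Split data into n XOR shares; any n-1 shares alone reveal nothing."""
    shares = [os.urandom(len(data)) for _ in range(n - 1)]
    # The final share is the XOR of the data with all random shares.
    last = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shares, data)
    return shares + [last]

def reconstruct(shares: list[bytes]) -> bytes:
    """XOR all shares back together to recover the original data."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shares)

record = b"patient:42"
shares = split_secret(record)  # one share stored per cloud provider
assert all(s != record for s in shares)
assert reconstruct(shares) == record
```

With one share per provider, an attacker must compromise all clouds simultaneously to recover a record; the paper additionally processes queries server-side over such secrets, which this sketch does not attempt to show.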
2019
Authors
Machado, N; Maia, F; Neves, F; Coelho, F; Pereira, J;
Publication
OPODIS
Abstract
Testing large-scale distributed system software is still far from practical, as the sheer scale needed and the inherent non-determinism make it very expensive to deploy and use realistically large environments, even with cloud computing and state-of-the-art automation. Moreover, observing global states without disturbing the system under test is itself difficult. This is particularly troubling as the gap between distributed algorithms and their implementations can easily introduce subtle bugs that are disclosed only with suitably large-scale tests. We address this challenge with Minha, a framework that virtualizes multiple JVM instances in a single JVM, thus simulating a distributed environment in which each host appears to run on a separate machine, accessing dedicated network and CPU resources. The key contributions are the ability to run off-the-shelf concurrent and distributed JVM bytecode programs while at the same time scaling up to thousands of virtual nodes, and enabling global observation within standard software testing frameworks. Our experiments with two distributed systems show the usefulness of Minha in disclosing errors, evaluating global properties, and scaling tests by orders of magnitude with the same hardware resources.
2015
Authors
Maia, F;
Publication
Abstract
2026
Authors
Maia, F; Figueira, G; Neves-Moreira, F;
Publication
COMPUTERS & OPERATIONS RESEARCH
Abstract
The stochastic dynamic inventory-routing problem (SDIRP) is a fundamental problem within supply chain operations that integrates inventory management and vehicle routing while handling the stochastic and dynamic nature of exogenous factors unveiled over time, such as customer demands, inventory supply and travel times. While practical applications require dynamic and stochastic decision-making, research in this field has only recently experienced significant growth, with most inventory-routing literature focusing on static variants. This paper reviews the current state of research on SDIRPs, identifying critical gaps and highlighting emerging trends in problem settings and decision policies. We extend the existing inventory-routing taxonomies by incorporating additional problem characteristics to better align models with real-world contexts. As a result, we highlight the need to account for further sources of uncertainty, multiple-supplier networks, perishability, multiple objectives, and pickup and delivery operations. We further categorize each study based on its policy design, investigating how different problem aspects shape decision policies. To conclude, we emphasize that large-scale and real-time problems require more attention and can benefit from decomposition approaches and learning-based methods.
2012
Authors
Maia, F; Matos, M; Riviere, E; Oliveira, R;
Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Abstract
Slicing a large-scale distributed system is the process of autonomously partitioning its nodes into k groups, named slices. Slicing is associated with an order on node-specific criteria, such as available storage, uptime, or bandwidth. Each slice corresponds to the nodes between two quantiles in a virtual ranking according to the criteria. For instance, a system can be split into three groups, one with the nodes with the lowest uptimes, one with the nodes with the highest uptimes, and one in the middle. Such a partitioning can be used by applications to assign different tasks to different groups of nodes, e.g., assigning critical tasks to the more powerful or stable nodes and less critical tasks to other slices. Assigning a slice to each node in a large-scale distributed system, where no global knowledge of nodes' criteria exists, is not trivial. Recently, much research effort was dedicated to guaranteeing fast and correct convergence in comparison to a global sort of the nodes. Unfortunately, state-of-the-art slicing protocols exhibit flaws that preclude their application in real scenarios, in particular with respect to cost and stability. In this paper, we identify steadiness issues, where nodes at a slice border constantly exchange slices, as well as large memory requirements for adequate convergence, and provide practical solutions for both. Our solutions are generic and can be applied to two different state-of-the-art slicing protocols with little effort and while preserving the desirable properties of each. The effectiveness of the proposed solutions is extensively studied in several simulated experiments. © 2012 IFIP International Federation for Information Processing.
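The paper's protocols compute slices in a decentralized way, with no global knowledge of the criteria; as a point of reference only, the target partitioning that such protocols approximate can be sketched with a global sort (names hypothetical, not from the paper):

```python
def assign_slices(criteria: dict[str, float], k: int) -> dict[str, int]:
    """Rank nodes by their criterion value and cut the ranking into
    k quantile groups: slice 0 holds the lowest-ranked nodes."""
    ranked = sorted(criteria, key=criteria.get)
    n = len(ranked)
    return {node: min(i * k // n, k - 1) for i, node in enumerate(ranked)}

# Example: slice six nodes into k=3 groups by uptime (hours).
uptimes = {"a": 3.0, "b": 120.0, "c": 45.0, "d": 999.0, "e": 10.0, "f": 60.0}
slices = assign_slices(uptimes, k=3)
# → {"a": 0, "e": 0, "c": 1, "f": 1, "b": 2, "d": 2}
```

The hard part addressed by the paper is reaching this assignment without any node ever seeing the full `criteria` map, and keeping border nodes (e.g., "c" and "e" above) from oscillating between adjacent slices.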
2011
Authors
Maia, F; Matos, M; Pereira, J; Oliveira, R;
Publication
DISTRIBUTED APPLICATIONS AND INTEROPERABLE SYSTEMS
Abstract
Consensus is an abstraction of a variety of important challenges in dependable distributed systems. Thus, a large body of theoretical knowledge is focused on modeling and solving consensus within different system assumptions. However, moving from theory to practice imposes compromises and design decisions that may impact the elegance, trade-offs, and correctness of theoretically appealing consensus protocols. In this paper we present the implementation and detailed analysis, in a real environment with a large number of nodes, of mutable consensus, a theoretically appealing protocol able to offer a wide range of trade-offs (called mutations) between decision latency and message complexity. The analysis sheds light on the fundamental behavior of the mutations and leads to the identification of problems related to the real environment. Such problems are addressed without ever affecting the correctness of the theoretical proposal.