
About

I am an associate professor at the Informatics Department of the University of Minho, where I teach Distributed Systems in undergraduate, master's and doctoral courses. I was the director of the Computer Science and Technology Center (CCTC) of the University of Minho from 2005 to 2010 and the director of the High-Assurance Software Laboratory (HASLab), a research unit of the University of Minho and INESC TEC, from 2010 to 2015. Currently, I am a member of the Board of Directors of INESC TEC.

I received my PhD degree in 2000 from the École Polytechnique Fédérale de Lausanne under the supervision of André Schiper and Rachid Guerraoui. In this work I studied the distributed consensus problem in an environment where participants could fail by crashing and then recover. My research interests are in dependable distributed systems, in particular dependable distributed database systems and large-scale systems. My work has focused on epidemic communication protocols, large-scale data management and high-performance transactional middleware for cloud computing and data science. I've been involved in several research projects funded by the EU, FCT and national and international companies, having coordinated GORDA, ESCADA, StrongRep and Stratus. I currently coordinate the H2020 SafeCloud project.

I currently serve on the Steering Committees of the IEEE SRDS, ACM/IFIP/USENIX Middleware and IFIP DAIS conferences, on the Scientific and Technological Committee of the Instituto do Petróleo e Gás (ISPG), and as vice-chair of IFIP Working Group 6.1.

Details

  • Name

    Rui Carlos Oliveira
  • Cluster

    Computer Science
  • Role

    Member of the Board of Directors
  • Since

    1st November 2011
Publications

2017

HTAPBench: Hybrid Transactional and Analytical Processing Benchmark

Authors
Coelho, Fabio; Paulo, Joao; Vilaça, Ricardo; Pereira, Jose Orlando; Oliveira, Rui;

Publication
Proceedings of the 8th ACM/SPEC on International Conference on Performance Engineering, ICPE 2017, L'Aquila, Italy, April 22-26, 2017

Abstract
The increasing demand for real-time analytics requires the fusion of Transactional (OLTP) and Analytical (OLAP) systems, eschewing ETL processes and introducing a plethora of proposals for the so-called Hybrid Transactional and Analytical Processing (HTAP) systems. Unfortunately, current benchmarking approaches are not able to comprehensively produce a unified metric from the assessment of an HTAP system. The evaluation of both engine types is done separately, leading to the use of disjoint sets of benchmarks such as TPC-C or TPC-H. In this paper we propose a new benchmark, HTAPBench, providing a unified metric for HTAP systems geared toward the execution of constantly increasing OLAP requests limited by an admissible impact on OLTP performance. To achieve this, a load balancer within HTAPBench regulates the coexistence of OLTP and OLAP workloads, proposing a method for the generation of both new data and requests, so that OLAP requests over freshly modified data are comparable across runs. We demonstrate the merit of our approach by validating it with different types of systems: OLTP, OLAP and HTAP; showing that the benchmark is able to highlight the differences between them, while producing queries with comparable complexity across experiments with negligible variability. © 2017 ACM.
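
The admission principle described in the abstract can be sketched briefly: analytical workers are added only while the measured transactional throughput stays within an admissible distance of its baseline. The following Python sketch is illustrative only and is not the HTAPBench implementation; the functions measure_oltp_tps and start_olap_worker are hypothetical placeholders for the benchmark's measurement and worker-launch machinery.

# Illustrative sketch (hypothetical API, not HTAPBench code): admit OLAP
# workers only while OLTP throughput stays above an admissible fraction
# of its baseline.
def scale_olap_workers(measure_oltp_tps, start_olap_worker,
                       baseline_tps, admissible_impact=0.10, max_workers=64):
    floor_tps = baseline_tps * (1.0 - admissible_impact)  # lowest acceptable OLTP rate
    workers = 0
    while workers < max_workers:
        if measure_oltp_tps() < floor_tps:  # impact bound exceeded: stop scaling
            break
        start_olap_worker()                 # admit one more analytical client
        workers += 1
    return workers                          # OLAP load sustained under a bounded OLTP penalty

In this reading, the number of analytical clients that can be sustained while the transactional engine still delivers its admissible throughput is what a unified HTAP metric has to capture in a single figure.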

2017

SafeFS: a modular architecture for secure user-space file systems: one FUSE to rule them all

Authors
Pontes, Rogerio; Burihabwa, Dorian; Maia, Francisco; Paulo, Joao; Schiavoni, Valerio; Felber, Pascal; Mercier, Hugues; Oliveira, Rui;

Publication
Proceedings of the 10th ACM International Systems and Storage Conference, SYSTOR 2017, Haifa, Israel, May 22-24, 2017

Abstract
The exponential growth of data produced, the ever faster and ubiquitous connectivity, and the collaborative processing tools lead to a clear shift of data stores from local servers to the cloud. This migration occurring across different application domains and types of users, individual or corporate, raises two immediate challenges. First, outsourcing data introduces security risks, hence protection mechanisms must be put in place to provide guarantees such as privacy, confidentiality and integrity. Second, there is no "one-size-fits-all" solution that would provide the right level of safety or performance for all applications and users, and it is therefore necessary to provide mechanisms that can be tailored to the various deployment scenarios. In this paper, we address both challenges by introducing SafeFS, a modular architecture based on software-defined storage principles featuring stackable building blocks that can be combined to construct a secure distributed file system. SafeFS allows users to specialize their data store to their specific needs by choosing the combination of blocks that provide the best safety and performance tradeoffs. The file system is implemented in user space using FUSE and can access remote data stores. The provided building blocks notably include mechanisms based on encryption, replication, and coding. We implemented SafeFS and performed an in-depth evaluation across a range of workloads. Results reveal that while each layer has a cost, one can build safe yet efficient storage architectures. Furthermore, the different combinations of blocks sometimes yield surprising tradeoffs. © 2017 ACM.
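
The stackable design described in the abstract can be pictured as layers that each transform data on write, reverse the transformation on read, and delegate to the layer below. The following Python sketch is only an analogy for that composition model, not the actual SafeFS FUSE code; the class names and the toy XOR "encryption" are hypothetical and purely illustrative.

import zlib

class Layer:
    # Base layer: delegates reads and writes to the layer below it.
    def __init__(self, below=None):
        self.below = below
    def write(self, path, data):
        return self.below.write(path, data)
    def read(self, path):
        return self.below.read(path)

class XorLayer(Layer):
    # Toy stand-in for an encryption block (XOR with a fixed byte).
    KEY = 0x5A
    def write(self, path, data):
        return self.below.write(path, bytes(b ^ self.KEY for b in data))
    def read(self, path):
        return bytes(b ^ self.KEY for b in self.below.read(path))

class CompressionLayer(Layer):
    # Toy stand-in for a coding block.
    def write(self, path, data):
        return self.below.write(path, zlib.compress(data))
    def read(self, path):
        return zlib.decompress(self.below.read(path))

class MemoryBackend(Layer):
    # Bottom of the stack: an in-memory store instead of a remote data store.
    def __init__(self):
        super().__init__()
        self.files = {}
    def write(self, path, data):
        self.files[path] = data
    def read(self, path):
        return self.files[path]

# Users pick the combination of blocks; here, "encryption" over compression.
fs = XorLayer(CompressionLayer(MemoryBackend()))
fs.write("/notes.txt", b"outsourced data")
assert fs.read("/notes.txt") == b"outsourced data"

Swapping, adding or removing a layer changes the safety and performance trade-off without touching the layers around it, which is the property the abstract attributes to SafeFS.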

2017

DDFlasks: Deduplicated Very Large Scale Data Store

Authors
Maia, F; Paulo, J; Coelho, F; Neves, F; Pereira, J; Oliveira, R;

Publication
Distributed Applications and Interoperable Systems - 17th IFIP WG 6.1 International Conference, DAIS 2017, Held as Part of the 12th International Federated Conference on Distributed Computing Techniques, DisCoTec 2017, Neuchâtel, Switzerland, June 19-22, 2017, Proceedings

Abstract
With the increasing number of connected devices, it becomes essential to find novel data management solutions that can leverage their computational and storage capabilities. However, developing very large scale data management systems requires tackling a number of interesting distributed systems challenges, namely continuous failures and high levels of node churn. In this context, epidemic-based protocols proved suitable and effective and have been successfully used to build DataFlasks, an epidemic data store for massive scale systems. Ensuring resiliency in this data store comes with a significant cost in storage resources and network bandwidth consumption. Deduplication has proven to be an efficient technique to reduce both costs but, applying it to a large-scale distributed storage system is not a trivial task. In fact, achieving significant space-savings without compromising the resiliency and decentralized design of these storage systems is a relevant research challenge. In this paper, we extend DataFlasks with deduplication to design DDFlasks. This system is evaluated in a real world scenario using Wikipedia snapshots, and the results are twofold. We show that deduplication is able to decrease storage consumption up to 63% and decrease network bandwidth consumption by up to 20%, while maintaining a fully decentralized and resilient design. © IFIP International Federation for Information Processing 2017.
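
The space savings reported above come from storing each unique block of content only once. A compact way to illustrate content-based deduplication, outside the specifics of the DataFlasks/DDFlasks protocols, is the following Python sketch; the DedupStore class and its chunking parameters are hypothetical and only convey the indexing idea.

import hashlib

class DedupStore:
    # Illustrative deduplicated store: chunks are indexed by content hash and
    # stored once; objects keep only ordered lists of chunk references.
    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}    # content hash -> chunk bytes (stored once)
        self.objects = {}   # object key   -> list of chunk hashes

    def put(self, key, value):
        refs = []
        for i in range(0, len(value), self.chunk_size):
            chunk = value[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # duplicate chunks add no storage
            refs.append(digest)
        self.objects[key] = refs

    def get(self, key):
        return b"".join(self.chunks[d] for d in self.objects[key])

store = DedupStore()
store.put("rev1", b"A" * 10000)
store.put("rev2", b"A" * 10000)   # identical revision: only references are added
assert store.get("rev2") == b"A" * 10000
assert len(store.chunks) == 2     # one full 4096-byte chunk plus one distinct tail chunk

In a decentralized setting such as the one targeted by DDFlasks, the hard part is maintaining such an index without a central coordinator, which is the research challenge the abstract highlights.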

2017

Similarity Aware Shuffling for the Distributed Execution of SQL Window Functions

Authors
Coelho, Fabio; Matos, Miguel; Pereira, Jose; Oliveira, Rui;

Publication
Distributed Applications and Interoperable Systems - 17th IFIP WG 6.1 International Conference, DAIS 2017, Held as Part of the 12th International Federated Conference on Distributed Computing Techniques, DisCoTec 2017, Neuchâtel, Switzerland, June 19-22, 2017, Proceedings

Abstract

2017

Performance trade-offs on a secure multi-party relational database

Authors
Pontes, Rogerio; Pinto, Mario; Barbosa, Manuel; Vilaça, Ricardo; Matos, Miguel; Oliveira, Rui;

Publication
Proceedings of the Symposium on Applied Computing, SAC 2017, Marrakech, Morocco, April 3-7, 2017

Abstract

Supervised Theses

2016

Towards autonomic workload aware NoSQL databases

Author
Francisco Miguel Carvalho Barros da Cruz

Institution
UM

2015

Towards a transactional and analytical data management system for Big Data

Author
Fábio André Castanheira Luís Coelho

Institution
UM

2015

A Highly Scalable Transactional Query Engine for Cloud Computing

Author
Francisco Miguel da Cruz

Institution
UM

2015

Management of Relational Databases in Cloud Computing (Gestão de Bases de Dados Relacionais em Cloud Computing)

Author
André Dias Costa

Institution
UM

2015

Epidemic Store for Massive Scale Systems

Author
Francisco António de Almeida Maia

Institution
UM