About

I am a Post-Doc researcher at HASLab (High-Assurance Software Laboratory) at the University of Minho and INESC TEC. My research interests are distributed systems, cloud computing, large-scale data management, and gossip-based protocols.
I obtained a Ph.D. in Computer Science from the Universities of Minho, Aveiro and Porto (MAP-i Doctoral Program in Computer Science) in 2015, advised by Professor Rui Oliveira. My Ph.D. work focused on DataFlasks, an inherently scalable and resilient data store specifically designed for very large-scale systems. Built entirely on unstructured gossip-based protocols, it is able to cope with very high levels of churn and faults. I am now interested in how DataFlasks can be enriched to provide stronger guarantees while maintaining its scalability properties.
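
For readers less familiar with the underlying idea, the sketch below illustrates push-based gossip dissemination in general terms: each node periodically forwards its versioned entries to a few randomly chosen peers, so updates spread epidemically even as nodes join and leave. This is a minimal, hypothetical example; the class and method names are invented for illustration and do not correspond to the actual DataFlasks code.

```python
import random


class GossipNode:
    """Toy peer that spreads key-value updates to a few random neighbours each round."""

    def __init__(self, node_id, fanout=3):
        self.node_id = node_id
        self.fanout = fanout
        self.store = {}   # key -> (version, value)
        self.peers = []   # partial view of other GossipNode objects

    def put(self, key, version, value):
        # Keep only the newest version seen for each key.
        current = self.store.get(key)
        if current is None or version > current[0]:
            self.store[key] = (version, value)
            return True
        return False

    def gossip_round(self):
        # Push the local state to a small random sample of peers;
        # repeated rounds spread updates epidemically despite churn.
        if not self.peers:
            return
        for peer in random.sample(self.peers, min(self.fanout, len(self.peers))):
            for key, (version, value) in self.store.items():
                peer.put(key, version, value)


# Example: three fully connected nodes converge after a few rounds.
nodes = [GossipNode(i) for i in range(3)]
for n in nodes:
    n.peers = [p for p in nodes if p is not n]
nodes[0].put("x", 1, "hello")
for _ in range(3):
    for n in nodes:
        n.gossip_round()
print([n.store for n in nodes])
```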

Interest Topics

Details

  • Name

    Francisco Almeida Maia
  • Cluster

    Computer Science
  • Role

    Researcher
  • Since

    1st November 2011
Publications

2018

Proceedings of the 1st Workshop on Privacy by Design in Distributed Systems, P2DS@EuroSys 2018, Porto, Portugal, April 23, 2018

Authors
Maia, F; Mercier, H; Brito, A;

Publication
P2DS@EuroSys

Abstract

2018

Totally Ordered Replication for Massive Scale Key-Value Stores

Authors
Ribeiro, J; Machado, N; Maia, F; Matos, M;

Publication
Distributed Applications and Interoperable Systems - 18th IFIP WG 6.1 International Conference, DAIS 2018, Held as Part of the 13th International Federated Conference on Distributed Computing Techniques, DisCoTec 2018, Madrid, Spain, June 18-21, 2018, Proceedings

Abstract

2017

SafeFS: a modular architecture for secure user-space file systems: one FUSE to rule them all

Authors
Pontes, Rogerio; Burihabwa, Dorian; Maia, Francisco; Paulo, Joao; Schiavoni, Valerio; Felber, Pascal; Mercier, Hugues; Oliveira, Rui;

Publication
Proceedings of the 10th ACM International Systems and Storage Conference, SYSTOR 2017, Haifa, Israel, May 22-24, 2017

Abstract
The exponential growth of data produced, the ever faster and ubiquitous connectivity, and the collaborative processing tools lead to a clear shift of data stores from local servers to the cloud. This migration, occurring across different application domains and types of users (individual or corporate), raises two immediate challenges. First, outsourcing data introduces security risks, hence protection mechanisms must be put in place to provide guarantees such as privacy, confidentiality and integrity. Second, there is no "one-size-fits-all" solution that would provide the right level of safety or performance for all applications and users, and it is therefore necessary to provide mechanisms that can be tailored to the various deployment scenarios. In this paper, we address both challenges by introducing SafeFS, a modular architecture based on software-defined storage principles featuring stackable building blocks that can be combined to construct a secure distributed file system. SafeFS allows users to specialize their data store to their specific needs by choosing the combination of blocks that provide the best safety and performance tradeoffs. The file system is implemented in user space using FUSE and can access remote data stores. The provided building blocks notably include mechanisms based on encryption, replication, and coding. We implemented SafeFS and performed in-depth evaluation across a range of workloads. Results reveal that while each layer has a cost, one can build safe yet efficient storage architectures. Furthermore, the different combinations of blocks sometimes yield surprising tradeoffs. © 2017 ACM.
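
As a rough illustration of the stackable-blocks idea described in the abstract, here is a small, hypothetical sketch of storage layers that can be composed into a stack. The class names and the toy obfuscation layer are invented for illustration; this is not the SafeFS API, which is implemented as FUSE layers in user space with real encryption, replication, and coding blocks.

```python
class StorageLayer:
    """Base class for a stackable block: each layer wraps the one below it."""

    def __init__(self, lower=None):
        self.lower = lower

    def write(self, path, data):
        return self.lower.write(path, data)

    def read(self, path):
        return self.lower.read(path)


class InMemoryStore(StorageLayer):
    """Bottom layer: a trivial in-memory backend standing in for a remote store."""

    def __init__(self):
        super().__init__(None)
        self.blobs = {}

    def write(self, path, data):
        self.blobs[path] = data

    def read(self, path):
        return self.blobs[path]


class XorObfuscationLayer(StorageLayer):
    """Toy stand-in for an encryption block (NOT real cryptography)."""

    def __init__(self, lower, key=0x5A):
        super().__init__(lower)
        self.key = key

    def write(self, path, data):
        self.lower.write(path, bytes(b ^ self.key for b in data))

    def read(self, path):
        return bytes(b ^ self.key for b in self.lower.read(path))


class ReplicationLayer(StorageLayer):
    """Writes every file to several lower stacks and reads from the first."""

    def __init__(self, replicas):
        super().__init__(replicas[0])
        self.replicas = replicas

    def write(self, path, data):
        for r in self.replicas:
            r.write(path, data)

    def read(self, path):
        return self.replicas[0].read(path)


# Compose a stack: replication over obfuscation over two in-memory backends.
stack = ReplicationLayer([XorObfuscationLayer(InMemoryStore()),
                          XorObfuscationLayer(InMemoryStore())])
stack.write("/notes.txt", b"stackable layers")
assert stack.read("/notes.txt") == b"stackable layers"
```

The point of such a composition is that each deployment can pick the combination of layers that matches its own safety and performance tradeoff, which is the design goal the abstract describes.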

2017

DDFlasks: Deduplicated Very Large Scale Data Store

Authors
Maia, F; Paulo, J; Coelho, F; Neves, F; Pereira, J; Oliveira, R;

Publication
Distributed Applications and Interoperable Systems - 17th IFIP WG 6.1 International Conference, DAIS 2017, Held as Part of the 12th International Federated Conference on Distributed Computing Techniques, DisCoTec 2017, Neuchâtel, Switzerland, June 19-22, 2017, Proceedings

Abstract
With the increasing number of connected devices, it becomes essential to find novel data management solutions that can leverage their computational and storage capabilities. However, developing very large scale data management systems requires tackling a number of interesting distributed systems challenges, namely continuous failures and high levels of node churn. In this context, epidemic-based protocols proved suitable and effective and have been successfully used to build DataFlasks, an epidemic data store for massive scale systems. Ensuring resiliency in this data store comes with a significant cost in storage resources and network bandwidth consumption. Deduplication has proven to be an efficient technique to reduce both costs, but applying it to a large-scale distributed storage system is not a trivial task. In fact, achieving significant space-savings without compromising the resiliency and decentralized design of these storage systems is a relevant research challenge. In this paper, we extend DataFlasks with deduplication to design DDFlasks. This system is evaluated in a real world scenario using Wikipedia snapshots, and the results are twofold. We show that deduplication is able to decrease storage consumption up to 63% and decrease network bandwidth consumption by up to 20%, while maintaining a fully-decentralized and resilient design. © IFIP International Federation for Information Processing 2017.
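
To give a concrete feel for where the reported space savings come from, the sketch below shows the basic content-addressed deduplication mechanism: data is split into chunks, each chunk is indexed by its hash, and identical chunks are stored only once. This is a deliberately simplified, hypothetical single-node example; it ignores the decentralized, epidemic aspects that make deduplication in DDFlasks non-trivial, and the names do not correspond to the actual system.

```python
import hashlib


class DedupStore:
    """Toy content-addressed store: identical chunks are kept only once."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}   # SHA-256 digest -> chunk bytes
        self.files = {}    # file name -> list of digests (the file "recipe")

    def put(self, name, data):
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            # Store the chunk only if its digest has not been seen (deduplication).
            self.chunks.setdefault(digest, chunk)
            recipe.append(digest)
        self.files[name] = recipe

    def get(self, name):
        return b"".join(self.chunks[d] for d in self.files[name])


# Two versions of a document that share most of their content.
store = DedupStore(chunk_size=8)
store.put("v1", b"wiki page revision one..")
store.put("v2", b"wiki page revision two..")
print(len(store.chunks), "unique chunks stored")   # fewer than the 6 chunks written
assert store.get("v1") == b"wiki page revision one.."
```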

2017

SAFETHINGS: Data Security by Design in the IoT

Authors
Barbosa, M; Ben Mokhtar, S; Felber, P; Maia, F; Matos, M; Oliveira, R; Riviere, E; Schiavoni, V; Voulgaris, S;

Publication
13th European Dependable Computing Conference, EDCC 2017, Geneva, Switzerland, September 4-8, 2017

Abstract