
About

Fábio Coelho (PhD) is a senior researcher at HASLab, one of INESC TEC's research units. He holds a PhD in Computer Science, obtained under the MAP-i Doctoral Programme of the Universities of Minho, Aveiro and Porto (Portugal). His research focuses on cloud HTAP databases, cloud computing, distributed systems, P2P/ledger-based systems and benchmarking. He has published in top-tier international conferences such as SRDS, DAIS and ICPE, and has participated in several national and EU projects, including CoherentPaaS, LeanBigData, CloudDBAppliance and InteGrid. He currently works closely with INESC TEC's Power and Energy Centre on the provisioning of ICT solutions for coordination and distributed communication.


Details

  • Name

    Fábio André Coelho
  • Role

    Assistant Researcher
  • Since

    1st January 2014
  • Nationality

    Portugal
  • Contacts

    +351253604440
    fabio.a.coelho@inesctec.pt
Publications

2020

Self-tunable DBMS Replication with Reinforcement Learning

Authors
Ferreira, L; Coelho, F; Pereira, J;

Publication
Distributed Applications and Interoperable Systems - Lecture Notes in Computer Science

2019

Towards Intra-Datacentre High-Availability in CloudDBAppliance

Authors
Ferreira, L; Coelho, F; Alonso, AN; Pereira, J;

Publication
Proceedings of the 9th International Conference on Cloud Computing and Services Science

2019

Recovery in CloudDBAppliance’s High-availability Middleware

Authors
Abreu, H; Ferreira, L; Coelho, F; Alonso, AN; Pereira, J;

Publication
Proceedings of the 8th International Conference on Data Science, Technology and Applications

2019

Minha: Large-scale distributed systems testing made practical

Authors
Machado, N; Maia, F; Neves, F; Coelho, F; Pereira, J;

Publication
Leibniz International Proceedings in Informatics, LIPIcs

Abstract
Testing large-scale distributed system software is still far from practical as the sheer scale needed and the inherent non-determinism make it very expensive to deploy and use realistically large environments, even with cloud computing and state-of-the-art automation. Moreover, observing global states without disturbing the system under test is itself difficult. This is particularly troubling as the gap between distributed algorithms and their implementations can easily introduce subtle bugs that are disclosed only with suitably large scale tests. We address this challenge with Minha, a framework that virtualizes multiple JVM instances in a single JVM, thus simulating a distributed environment where each host runs on a separate machine, accessing dedicated network and CPU resources. The key contributions are the ability to run off-the-shelf concurrent and distributed JVM bytecode programs while at the same time scaling up to thousands of virtual nodes; and enabling global observation within standard software testing frameworks. Our experiments with two distributed systems show the usefulness of Minha in disclosing errors, evaluating global properties, and in scaling tests orders of magnitude with the same hardware resources. © Nuno Machado, Francisco Maia, Francisco Neves, Fábio Coelho, and José Pereira; licensed under Creative Commons License CC-BY 23rd International Conference on Principles of Distributed Systems (OPODIS 2019).
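The core idea behind Minha, running many virtual nodes inside a single process so a test can observe their global state at once, can be illustrated with a toy sketch. This is not Minha's API: the VirtualNetwork class, the node function and the thread-per-node layout are illustrative assumptions, standing in for Minha's JVM-level virtualization.

```python
import threading
import queue

class VirtualNetwork:
    """In-memory message router standing in for a real network."""
    def __init__(self):
        self.inboxes = {}

    def register(self, node_id):
        self.inboxes[node_id] = queue.Queue()

    def send(self, dst, msg):
        self.inboxes[dst].put(msg)

    def recv(self, node_id):
        return self.inboxes[node_id].get()

def node(net, node_id, peers, results):
    # Each "virtual node" broadcasts its id and collects one message per peer.
    for p in peers:
        net.send(p, node_id)
    results[node_id] = sorted(net.recv(node_id) for _ in peers)

net = VirtualNetwork()
ids = list(range(4))
for i in ids:
    net.register(i)

results = {}
threads = [
    threading.Thread(target=node, args=(net, i, [p for p in ids if p != i], results))
    for i in ids
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Global observation: the test can assert on the state of every node at once,
# something that is hard to do without disturbing a real deployment.
assert all(results[i] == sorted(p for p in ids if p != i) for i in ids)
```

Because everything runs in one process, a standard testing framework can inspect every node's state directly, which is the property the abstract highlights.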

2017

DDFlasks: Deduplicated Very Large Scale Data Store

Authors
Maia, F; Paulo, J; Coelho, F; Neves, F; Pereira, J; Oliveira, R;

Publication
Distributed Applications and Interoperable Systems - 17th IFIP WG 6.1 International Conference, DAIS 2017, Held as Part of the 12th International Federated Conference on Distributed Computing Techniques, DisCoTec 2017, Neuchâtel, Switzerland, June 19-22, 2017, Proceedings

Abstract
With the increasing number of connected devices, it becomes essential to find novel data management solutions that can leverage their computational and storage capabilities. However, developing very large scale data management systems requires tackling a number of interesting distributed systems challenges, namely continuous failures and high levels of node churn. In this context, epidemic-based protocols proved suitable and effective and have been successfully used to build DataFlasks, an epidemic data store for massive scale systems. Ensuring resiliency in this data store comes with a significant cost in storage resources and network bandwidth consumption. Deduplication has proven to be an efficient technique to reduce both costs, but applying it to a large-scale distributed storage system is not a trivial task. In fact, achieving significant space savings without compromising the resiliency and decentralized design of these storage systems is a relevant research challenge. In this paper, we extend DataFlasks with deduplication to design DDFlasks. This system is evaluated in a real-world scenario using Wikipedia snapshots, and the results are twofold. We show that deduplication is able to decrease storage consumption by up to 63% and decrease network bandwidth consumption by up to 20%, while maintaining a fully-decentralized and resilient design. © IFIP International Federation for Information Processing 2017.
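The space savings that deduplication brings can be illustrated with a minimal content-addressed storage sketch. This is illustrative only: DDFlasks relies on epidemic protocols and a decentralized index, neither of which is modelled here, and the fixed-size chunking and function names below are assumptions.

```python
import hashlib

def dedup_store(blob, chunk_size=4096):
    """Split data into fixed-size chunks and keep only unique ones,
    indexed by their SHA-256 digest (content-addressed storage)."""
    store = {}    # digest -> chunk bytes (unique chunks only)
    recipe = []   # ordered digests needed to rebuild the blob
    for i in range(0, len(blob), chunk_size):
        chunk = blob[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        recipe.append(digest)
    return store, recipe

def restore(store, recipe):
    # Rebuild the original blob by concatenating chunks in recipe order.
    return b"".join(store[d] for d in recipe)

# Highly redundant input: 100 copies of the same 4 KiB block.
data = b"x" * 4096 * 100
store, recipe = dedup_store(data)
stored = sum(len(c) for c in store.values())
savings = 1 - stored / len(data)

assert restore(store, recipe) == data  # deduplication is lossless
```

On this artificial input only one unique chunk is stored, so the savings approach 99%; real workloads such as the Wikipedia snapshots in the paper are far less redundant, which is why the reported figure is up to 63%.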

Supervised Theses

2019

High Availability Architecture for Cloud Based Databases

Author
Hugo Miguel Ferreira Abreu

Institution
UM

2018

Armazenamento de Dados Colunar para Processamento Analítico (Columnar Data Storage for Analytical Processing)

Author
Daniel Filipe Vilar Tavares

Institution
UM

2018

Mecanismos RDMA para dados colunares em ambientes analíticos (RDMA Mechanisms for Columnar Data in Analytical Environments)

Author
José Miguel Ribeiro da Silva

Institution
UM