
About

Fábio was born in Lisbon, Portugal, in 1988. He graduated in Computer Networks and Multimedia Engineering in 2011 from the Instituto Superior de Engenharia de Lisboa. He then decided to continue his studies, enrolling in the Master's programme in Informatics Engineering at the Universidade do Minho, where he obtained his Master's degree in 2013. Since then, Fábio has been a researcher at HASLab, an Associate Laboratory of INESC TEC. He received his PhD in 2018 from the MAP-i doctoral programme in Informatics, run jointly by the Universities of Minho, Aveiro and Porto. His research work and doctoral thesis focus on data analytics tools for large-scale ("Big Data") systems. Among other topics, Fábio is also interested in benchmarking and in distributed transactional processing systems. In his free time, he enjoys travelling and photography.


Details

  • Name

    Fábio André Coelho
  • Position

    Assistant Researcher
  • Since

    01 January 2014
  • Nationality

    Portugal
  • Contacts

    +351253604440
    fabio.a.coelho@inesctec.pt
Publications

2020

Self-tunable DBMS Replication with Reinforcement Learning

Authors
Ferreira, L; Coelho, F; Pereira, J;

Publication
Distributed Applications and Interoperable Systems - Lecture Notes in Computer Science


2019

Towards Intra-Datacentre High-Availability in CloudDBAppliance

Authors
Ferreira, L; Coelho, F; Alonso, AN; Pereira, J;

Publication
Proceedings of the 9th International Conference on Cloud Computing and Services Science


2019

Recovery in CloudDBAppliance’s High-availability Middleware

Authors
Abreu, H; Ferreira, L; Coelho, F; Alonso, AN; Pereira, J;

Publication
Proceedings of the 8th International Conference on Data Science, Technology and Applications


2019

Minha: Large-scale distributed systems testing made practical

Authors
Machado, N; Maia, F; Neves, F; Coelho, F; Pereira, J;

Publication
Leibniz International Proceedings in Informatics, LIPIcs

Abstract
Testing large-scale distributed system software is still far from practical as the sheer scale needed and the inherent non-determinism make it very expensive to deploy and use realistically large environments, even with cloud computing and state-of-the-art automation. Moreover, observing global states without disturbing the system under test is itself difficult. This is particularly troubling as the gap between distributed algorithms and their implementations can easily introduce subtle bugs that are disclosed only with suitably large scale tests. We address this challenge with Minha, a framework that virtualizes multiple JVM instances in a single JVM, thus simulating a distributed environment where each host runs on a separate machine, accessing dedicated network and CPU resources. The key contributions are the ability to run off-the-shelf concurrent and distributed JVM bytecode programs while at the same time scaling up to thousands of virtual nodes; and enabling global observation within standard software testing frameworks. Our experiments with two distributed systems show the usefulness of Minha in disclosing errors, evaluating global properties, and in scaling tests orders of magnitude with the same hardware resources. © Nuno Machado, Francisco Maia, Francisco Neves, Fábio Coelho, and José Pereira; licensed under Creative Commons License CC-BY 23rd International Conference on Principles of Distributed Systems (OPODIS 2019).
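
As a rough illustration of the idea described in the abstract above, here is a minimal, hypothetical Java sketch (this is not the Minha API; all class and method names are invented for this example): several "hosts" run inside a single JVM as threads, exchange messages over an in-memory network, and the surrounding test can observe the global state of every host directly.

import java.util.List;
import java.util.Map;
import java.util.concurrent.*;

public class SingleJvmClusterSketch {

    // A simulated host: runs its node logic on a dedicated thread and
    // exchanges messages through an in-memory "network".
    static class VirtualHost implements Runnable {
        final String id;
        final BlockingQueue<String> inbox = new LinkedBlockingQueue<>();
        final Map<String, VirtualHost> network; // shared map plays the role of the wire
        volatile int messagesSeen = 0;          // state a test can observe globally

        VirtualHost(String id, Map<String, VirtualHost> network) {
            this.id = id;
            this.network = network;
        }

        void send(String to, String msg) {
            network.get(to).inbox.add(msg);
        }

        @Override
        public void run() {
            try {
                // Drain the inbox; stop after a short idle period.
                while (inbox.poll(200, TimeUnit.MILLISECONDS) != null) {
                    messagesSeen++;
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Map<String, VirtualHost> network = new ConcurrentHashMap<>();
        for (String id : List.of("node-1", "node-2", "node-3")) {
            network.put(id, new VirtualHost(id, network));
        }

        // "node-1" sends a message to the other virtual hosts over the in-memory network.
        network.get("node-1").send("node-2", "ping");
        network.get("node-1").send("node-3", "ping");

        ExecutorService pool = Executors.newFixedThreadPool(network.size());
        for (VirtualHost host : network.values()) {
            pool.submit(host);
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);

        // Global observation: the test inspects every host's state directly,
        // with no remote instrumentation needed.
        for (VirtualHost host : network.values()) {
            System.out.println(host.id + " saw " + host.messagesSeen + " message(s)");
        }
    }
}

In the actual framework, running unmodified distributed bytecode and scaling to thousands of virtual nodes require far more machinery than this sketch suggests; the snippet only conveys the single-JVM, globally observable setup.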

2017

DDFlasks: Deduplicated Very Large Scale Data Store

Authors
Maia, F; Paulo, J; Coelho, F; Neves, F; Pereira, J; Oliveira, R;

Publication
Distributed Applications and Interoperable Systems - 17th IFIP WG 6.1 International Conference, DAIS 2017, Held as Part of the 12th International Federated Conference on Distributed Computing Techniques, DisCoTec 2017, Neuchâtel, Switzerland, June 19-22, 2017, Proceedings

Abstract
With the increasing number of connected devices, it becomes essential to find novel data management solutions that can leverage their computational and storage capabilities. However, developing very large scale data management systems requires tackling a number of interesting distributed systems challenges, namely continuous failures and high levels of node churn. In this context, epidemic-based protocols proved suitable and effective and have been successfully used to build DataFlasks, an epidemic data store for massive scale systems. Ensuring resiliency in this data store comes with a significant cost in storage resources and network bandwidth consumption. Deduplication has proven to be an efficient technique to reduce both costs, but applying it to a large-scale distributed storage system is not a trivial task. In fact, achieving significant space-savings without compromising the resiliency and decentralized design of these storage systems is a relevant research challenge. In this paper, we extend DataFlasks with deduplication to design DDFlasks. This system is evaluated in a real world scenario using Wikipedia snapshots, and the results are twofold. We show that deduplication is able to decrease storage consumption up to 63% and decrease network bandwidth consumption by up to 20%, while maintaining a fully decentralized and resilient design. © IFIP International Federation for Information Processing 2017.
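
As a rough illustration of the deduplication technique the abstract refers to (a hypothetical single-process sketch, not DDFlasks itself), the following Java snippet splits each object into fixed-size chunks, stores every distinct chunk only once under the hash of its content, and keeps a per-object list of chunk references.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Base64;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DedupStoreSketch {
    private static final int CHUNK_SIZE = 8; // unrealistically small, just for the example

    private final Map<String, byte[]> chunks = new HashMap<>();        // content hash -> unique chunk
    private final Map<String, List<String>> objects = new HashMap<>(); // object id -> chunk hashes

    // Split the object into chunks; duplicate chunks are stored only once.
    public void put(String id, byte[] data) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        List<String> refs = new ArrayList<>();
        for (int off = 0; off < data.length; off += CHUNK_SIZE) {
            byte[] chunk = Arrays.copyOfRange(data, off, Math.min(off + CHUNK_SIZE, data.length));
            String hash = Base64.getEncoder().encodeToString(sha.digest(chunk));
            chunks.putIfAbsent(hash, chunk);
            refs.add(hash);
        }
        objects.put(id, refs);
    }

    // Rebuild an object by concatenating its referenced chunks.
    public byte[] get(String id) {
        List<String> refs = objects.get(id);
        int size = refs.stream().mapToInt(h -> chunks.get(h).length).sum();
        byte[] out = new byte[size];
        int pos = 0;
        for (String h : refs) {
            byte[] c = chunks.get(h);
            System.arraycopy(c, 0, out, pos, c.length);
            pos += c.length;
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        DedupStoreSketch store = new DedupStoreSketch();
        byte[] v1 = "aaaaaaaabbbbbbbbcccccccc".getBytes(StandardCharsets.UTF_8);
        byte[] v2 = "aaaaaaaabbbbbbbbdddddddd".getBytes(StandardCharsets.UTF_8);
        store.put("page-v1", v1);
        store.put("page-v2", v2); // shares two of its three chunks with page-v1
        System.out.println("unique chunks stored: " + store.chunks.size()); // 4 instead of 6
        System.out.println("page-v2 intact: " + Arrays.equals(v2, store.get("page-v2")));
    }
}

Unlike this sketch, DDFlasks has to keep deduplication fully decentralized and resilient across an epidemic data store, which is where the research challenge described in the abstract lies.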

Supervised theses

2019

High Availability Architecture for Cloud Based Databases

Author
Hugo Miguel Ferreira Abreu

Institution
UM

2018

Armazenamento de Dados Colunar para Processamento Analítico

Author
Daniel Filipe Vilar Tavares

Institution
UM

2018

Mecanismos RDMA para dados colunares em ambientes analíticos

Author
José Miguel Ribeiro da Silva

Institution
UM