
About

Fábio Coelho (Male, PhD) is currently a senior researcher at HASLab, one of INESC TEC's research units. He holds a PhD in Computer Science, obtained under the MAP-i Doctoral Programme of the Universities of Minho, Aveiro and Porto (Portugal). His research focuses on cloud HTAP databases, cloud computing, distributed systems, P2P/ledger-based systems and benchmarking. He has several international publications in top-tier conferences such as SRDS, DAIS and ICPE, and has participated in several national and EU projects, including CoherentPaaS, LeanBigData, CloudDBAppliance and InteGrid. He currently works closely with INESC TEC's Power and Energy Centre on the provisioning of ICT solutions for coordination and distributed communication.


Details

  • Name

    Fábio André Coelho
  • Cluster

    Computer Science
  • Role

    Assistant Researcher
  • Since

    1st January 2014
Publications

2021

Functional Scalability and Replicability Analysis for Smart Grid Functions: The InteGrid Project Approach

Authors
Menci, SP; Bessa, RJ; Herndler, B; Korner, C; Rao, B; Leimgruber, F; Madureira, AA; Rua, D; Coelho, F; Silva, JV; Andrade, JR; Sampaio, G; Teixeira, H; Simões, M; Viana, J; Oliveira, L; Castro, D; Krisper, U; André, R;

Publication
Energies

Abstract
The evolution of the electrical power sector driven by advances in digitalization, decarbonization and decentralization has increased the challenges within the current distribution network. There is therefore a growing need to analyze the impact of the smart grid and its implemented solutions at the earliest stage, i.e., during the pilot phase and before large-scale deployment and mass adoption. To that end, this paper presents the scalability and replicability analysis conducted within the European project InteGrid. Within the project, innovative solutions are proposed and tested in real demonstration sites (Portugal, Slovenia, and Sweden) to enable the DSO as a market facilitator and to assess the scalability and replicability of these solutions when integrated into the network. The analysis covers three clusters in which the impact of several integrated smart tools is analyzed alongside future large-scale scenarios. These scenarios envision significant penetration of distributed energy resources, increased network dimensions, large pools of flexibility, and prosumers. Replicability is analyzed across different types of networks, locations (country-wise), and time (daily). In addition, a simple step-by-step replication path is proposed as a guideline for replicating the smart functions associated with each cluster.

2020

Self-tunable DBMS Replication with Reinforcement Learning

Authors
Ferreira, L; Coelho, F; Pereira, J;

Publication
Distributed Applications and Interoperable Systems - Lecture Notes in Computer Science

Abstract

2019

Towards Intra-Datacentre High-Availability in CloudDBAppliance

Authors
Ferreira, L; Coelho, F; Alonso, AN; Pereira, J;

Publication
Proceedings of the 9th International Conference on Cloud Computing and Services Science

Abstract

2019

Recovery in CloudDBAppliance’s High-availability Middleware

Authors
Abreu, H; Ferreira, L; Coelho, F; Alonso, AN; Pereira, J;

Publication
Proceedings of the 8th International Conference on Data Science, Technology and Applications

Abstract

2019

Minha: Large-scale distributed systems testing made practical

Authors
Machado, N; Maia, F; Neves, F; Coelho, F; Pereira, J;

Publication
Leibniz International Proceedings in Informatics, LIPIcs

Abstract
Testing large-scale distributed system software is still far from practical, as the sheer scale needed and the inherent non-determinism make it very expensive to deploy and use realistically large environments, even with cloud computing and state-of-the-art automation. Moreover, observing global states without disturbing the system under test is itself difficult. This is particularly troubling, as the gap between distributed algorithms and their implementations can easily introduce subtle bugs that are disclosed only with suitably large-scale tests. We address this challenge with Minha, a framework that virtualizes multiple JVM instances in a single JVM, thus simulating a distributed environment where each host runs on a separate machine, accessing dedicated network and CPU resources. The key contributions are the ability to run off-the-shelf concurrent and distributed JVM bytecode programs while scaling up to thousands of virtual nodes, and enabling global observation within standard software testing frameworks. Our experiments with two distributed systems show the usefulness of Minha in disclosing errors, evaluating global properties, and in scaling tests by orders of magnitude with the same hardware resources.
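
The abstract above hinges on one idea: if every node of a distributed system runs inside the same JVM, a test can inspect global state directly instead of probing remote machines. The sketch below is only a minimal illustration of that idea, assuming plain Java threads and an in-memory queue per virtual node; the class name VirtualClusterSketch and the NODES constant are invented for this example, and none of it reflects Minha's actual API.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ConcurrentLinkedQueue;

    // Toy sketch only (not Minha's API). It mimics the core idea of the paper:
    // many "virtual nodes" co-located in one JVM (here, plain threads with an
    // in-memory inbox each), so the test can observe global state directly
    // after the run, which is hard to do in a real deployment.
    public class VirtualClusterSketch {

        static final int NODES = 1000; // number of virtual nodes in this single JVM

        // One inbox per virtual node; together they stand in for the network.
        static final List<ConcurrentLinkedQueue<Integer>> inboxes = new ArrayList<>();

        public static void main(String[] args) throws InterruptedException {
            for (int i = 0; i < NODES; i++) {
                inboxes.add(new ConcurrentLinkedQueue<>());
            }

            // Each virtual node runs ordinary application logic: broadcast its id.
            List<Thread> nodes = new ArrayList<>();
            for (int i = 0; i < NODES; i++) {
                final int id = i;
                Thread node = new Thread(() -> {
                    for (int peer = 0; peer < NODES; peer++) {
                        if (peer != id) {
                            inboxes.get(peer).add(id);
                        }
                    }
                });
                nodes.add(node);
                node.start();
            }
            for (Thread node : nodes) {
                node.join();
            }

            // Global observation: every node must have heard from every other one.
            for (int i = 0; i < NODES; i++) {
                if (inboxes.get(i).size() != NODES - 1) {
                    throw new AssertionError("node " + i + " missed messages");
                }
            }
            System.out.println("Consistent global state across " + NODES + " virtual nodes.");
        }
    }

What the paper adds beyond such a toy, per the abstract, is the ability to run off-the-shelf concurrent and distributed JVM bytecode unchanged, dedicated simulated network and CPU resources per virtual host, and global observation within standard software testing frameworks.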

Supervised Theses

2020

Automatic Parameter Tuning Using Reinforcement Learning

Author
Luís Manuel Meruje Ferreira

Institution
UM

2019

High Availability Architecture for Cloud Based Databases

Author
Hugo Miguel Ferreira Abreu

Institution
UM

2018

RDMA Mechanisms for Columnar Data in Analytical Environments (original title: Mecanismos RDMA para dados colunares em ambientes analíticos)

Author
José Miguel Ribeiro da Silva

Institution
UM

2018

Columnar Data Storage for Analytical Processing (original title: Armazenamento de Dados Colunar para Processamento Analítico)

Author
Daniel Filipe Vilar Tavares

Institution
UM