
About

I am a postdoctoral researcher at HASLab - High-Assurance Software Laboratory, University of Minho and INESC TEC. My research interests are distributed systems, cloud computing, large-scale data management and gossip-based protocols.
I obtained a Ph.D. in Computer Science from the Universities of Minho, Aveiro and Porto (MAP-i Doctoral Program in Computer Science) in 2015, advised by Professor Rui Oliveira. My Ph.D. work focused on DataFlasks, an inherently scalable and resilient data store designed specifically for very large-scale systems. Built entirely on unstructured gossip-based protocols, it is able to cope with very high levels of churn and faults. I am now interested in how DataFlasks can be enriched to provide stronger guarantees while maintaining its scalability properties.


Details

  • Name

    Francisco Almeida Maia
  • Role

    Senior Researcher
  • Since

    1st November 2011
Publications

2026

Stochastic dynamic inventory-routing: A comprehensive review

Authors
Maia, F; Figueira, G; Neves-Moreira, F;

Publication
COMPUTERS & OPERATIONS RESEARCH

Abstract
The stochastic dynamic inventory-routing problem (SDIRP) is a fundamental problem within supply chain operations that integrates inventory management and vehicle routing while handling the stochastic and dynamic nature of exogenous factors unveiled over time, such as customer demands, inventory supply and travel times. While practical applications require dynamic and stochastic decision-making, research in this field has only recently experienced significant growth, with most inventory-routing literature focusing on static variants. This paper reviews the current state of research on SDIRPs, identifying critical gaps and highlighting emerging trends in problem settings and decision policies. We extend the existing inventory-routing taxonomies by incorporating additional problem characteristics to better align models with real-world contexts. As a result, we highlight the need to account for further sources of uncertainty, multiple-supplier networks, perishability, multiple objectives, and pickup and delivery operations. We further categorize each study based on its policy design, investigating how different problem aspects shape decision policies. To conclude, we emphasize that large-scale and real-time problems require more attention and can benefit from decomposition approaches and learning-based methods.

2019

d'Artagnan: A Trusted NoSQL Database on Untrusted Clouds

Authors
Pontes, R; Maia, F; Vilaça, R; Machado, N;

Publication
SRDS

Abstract
Privacy-sensitive applications that store confidential information such as personally identifiable data or medical records have strict security concerns. These concerns hinder the adoption of the cloud. With cloud providers under the constant threat of malicious attacks, a single successful breach is sufficient to exploit any valuable information and disclose sensitive data. Existing privacy-aware databases mitigate some of these concerns, but still leak critical information that can potentially compromise the entire system's security. This paper proposes d'Artagnan, the first privacy-aware multi-cloud NoSQL database framework that renders database leaks worthless. The framework stores data as encrypted secrets in multiple clouds such that i) a single data breach cannot break the database's confidentiality and ii) queries are processed on the server-side without leaking any sensitive information. d'Artagnan is evaluated with industry-standard benchmarks on market-leading cloud providers.
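The core multi-cloud idea in the abstract — that any single data breach reveals nothing — can be illustrated with a minimal XOR secret-sharing sketch. This is a simplification for intuition only: the function names are hypothetical, and d'Artagnan's actual scheme and server-side query processing are considerably more involved.

```python
import os

def split_secret(secret: bytes, n: int) -> list:
    """Split a secret into n XOR shares, one per cloud.
    Any n-1 shares are indistinguishable from random bytes."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = secret
    for share in shares:
        # XOR the running value with each random share
        last = bytes(a ^ b for a, b in zip(last, share))
    shares.append(last)
    return shares

def combine_shares(shares: list) -> bytes:
    """Recover the secret by XOR-ing all shares together."""
    secret = shares[0]
    for share in shares[1:]:
        secret = bytes(a ^ b for a, b in zip(secret, share))
    return secret

# Each share would be stored in a different cloud provider;
# compromising one provider yields only uniformly random bytes.
record = b"patient: Jane Doe, diagnosis: ..."
shares = split_secret(record, 3)
assert combine_shares(shares) == record
```

The XOR scheme requires all n shares to reconstruct; threshold schemes (e.g. Shamir's) relax this so any k of n shares suffice, at the cost of more arithmetic.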

2019

Minha: Large-Scale Distributed Systems Testing Made Practical

Authors
Machado, N; Maia, F; Neves, F; Coelho, F; Pereira, J;

Publication
OPODIS

Abstract
Testing large-scale distributed system software is still far from practical, as the sheer scale needed and the inherent non-determinism make it very expensive to deploy and use realistically large environments, even with cloud computing and state-of-the-art automation. Moreover, observing global states without disturbing the system under test is itself difficult. This is particularly troubling as the gap between distributed algorithms and their implementations can easily introduce subtle bugs that are disclosed only with suitably large-scale tests. We address this challenge with Minha, a framework that virtualizes multiple JVM instances in a single JVM, thus simulating a distributed environment where each host runs on a separate machine, accessing dedicated network and CPU resources. The key contributions are the ability to run off-the-shelf concurrent and distributed JVM bytecode programs while at the same time scaling up to thousands of virtual nodes; and enabling global observation within standard software testing frameworks. Our experiments with two distributed systems show the usefulness of Minha in disclosing errors, evaluating global properties, and in scaling tests by orders of magnitude with the same hardware resources.

2018

Proceedings of the 1st Workshop on Privacy by Design in Distributed Systems, P2DS@EuroSys 2018, Porto, Portugal, April 23, 2018

Authors
Maia, F; Mercier, H; Brito, A;

Publication
P2DS@EuroSys

Abstract

2018

Totally Ordered Replication for Massive Scale Key-Value Stores

Authors
Ribeiro, J; Machado, N; Maia, F; Matos, M;

Publication
DAIS

Abstract
Scalability is one of the most relevant features of today’s data management systems. In order to achieve high scalability and availability, recent distributed key-value stores refrain from costly replica coordination when processing requests. However, these systems typically do not perform well under churn. In this paper, we propose DataFlagons, a large-scale key-value store that integrates epidemic dissemination with a probabilistic total order broadcast algorithm. By ensuring that all replicas process requests in the same order, DataFlagons provides probabilistic strong data consistency while achieving high scalability and robustness under churn.
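The epidemic dissemination that DataFlagons (and DataFlasks before it) builds on can be sketched with a toy push-gossip simulation. This is an illustrative sketch only — the node model and function names are assumptions, and it omits the probabilistic total order broadcast layer that the paper actually contributes.

```python
import random

def gossip_round(peers, infected, fanout=3):
    """One push-gossip round: every node holding the message
    forwards it to `fanout` peers chosen uniformly at random."""
    newly_infected = set()
    for _node in infected:
        for target in random.sample(peers, min(fanout, len(peers))):
            if target not in infected:
                newly_infected.add(target)
    return infected | newly_infected

def disseminate(n=100, fanout=3):
    """Gossip a message from node 0 until all n nodes hold it;
    return the number of rounds taken (O(log n) in expectation)."""
    peers = list(range(n))
    infected = {0}
    rounds = 0
    while len(infected) < n:
        infected = gossip_round(peers, infected, fanout)
        rounds += 1
    return rounds
```

Because each round roughly multiplies the infected set by the fanout, full coverage is reached in a logarithmic number of rounds with high probability, which is what makes gossip attractive at very large scale and under churn.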