
Publications by CRACS

2021

Towards a Modular On-Premise Approach for Data Sharing

Authors
Resende, JS; Magalhaes, L; Brandao, A; Martins, R; Antunes, L;

Publication
SENSORS

Abstract
The growing demand for everyday data insights drives the pursuit of more sophisticated infrastructures and artificial intelligence algorithms. Combined with the growing number of interconnected devices, this raises concerns about scalability and privacy. The main problem is that devices can sense the environment and generate large volumes of possibly identifiable data. Public cloud-based technologies have been proposed as a solution due to their high availability and low entry costs. However, there are growing concerns regarding data privacy, especially since the introduction of the new General Data Protection Regulation, because of the inherent lack of control over the off-premise computational resources on which public clouds run. Users have no control over the data uploaded to such services, which increases the uncontrolled distribution of information to third parties. This work provides a modular approach that uses a cloud-of-clouds to store persistent data and reduce upfront costs while allowing information to remain private and under users' control. In addition to storage, this work also covers usability modules that enable data sharing. Any user can securely share and analyze or compute over the uploaded data using private computation, such as training machine learning (ML) models, without revealing the private data itself. To achieve this, we combine state-of-the-art technologies, such as multiparty computation (MPC) and k-anonymization, to produce a complete system with intrinsic privacy properties.
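As a concrete aside, k-anonymity, one of the privacy properties named above, is easy to state: a dataset is k-anonymous when every combination of quasi-identifier values is shared by at least k records. The sketch below (plain Python; column names and the value of k are illustrative assumptions, not the paper's implementation) checks that property.

# Minimal sketch of a k-anonymity check: group records on their
# quasi-identifier columns and require every group to have >= k members.
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every quasi-identifier combination appears in at least k records."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

records = [
    {"age_range": "30-39", "zip": "4000", "diagnosis": "A"},
    {"age_range": "30-39", "zip": "4000", "diagnosis": "B"},
    {"age_range": "40-49", "zip": "4100", "diagnosis": "A"},
]

# False: the ("40-49", "4100") group has only one record
print(is_k_anonymous(records, ["age_range", "zip"], k=2))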

2021

Hardening cryptographic operations through the use of secure enclaves

Authors
Brandao, A; Resende, JS; Martins, R;

Publication
COMPUTERS & SECURITY

Abstract
With the rising popularity of the cloud, companies lose control of both the hardware and the operating system responsible for hosting their software and data. This means that companies are at risk of losing confidential data when these are processed in components controlled by a third-party cloud vendor. Secure enclaves can help solve this problem by creating a secure environment where code can be executed securely, guaranteeing that no unwanted parties read or modify the data inside. While the use of secure enclaves has focused on small-footprint software, such as the implementation of a trusted computing base for distributed protocols, we analyze the strengths and shortcomings of current tools in an effort to further expand their applicability. Given the importance of web servers and their inherently greater exposure to attacks, we explore the hardening of the Apache web server through the use of secure enclaves. This was accomplished by making the necessary modifications to further protect its private key from both the operating system and the hypervisor. We also provide a performance assessment to quantify the overhead associated with the use of secure enclaves, namely Intel SGX.
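Enclave isolation is enforced by hardware such as Intel SGX and cannot be reproduced in a short example, but the interface idea the abstract relies on can be. The toy sketch below (Python, purely conceptual; it is not the paper's code and provides no real protection) illustrates the boundary: key material stays inside, and callers may only request operations on it, never read it.

# Conceptual analogy only: the "enclave" holds a key with no getter,
# and exposes a single operation on it, similar in spirit to an ECALL.
import hmac, hashlib, os

class ToyEnclave:
    def __init__(self):
        self._key = os.urandom(32)  # sealed inside; never returned to callers

    def sign(self, message: bytes) -> bytes:
        # data in, result out; the key never crosses the boundary
        return hmac.new(self._key, message, hashlib.sha256).digest()

enclave = ToyEnclave()
tag = enclave.sign(b"TLS handshake transcript")
print(tag.hex())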

2021

ZERMIA - A Fault Injector Framework for Testing Byzantine Fault Tolerant Protocols

Authors
Soares, J; Fernandez, R; Silva, M; Freitas, T; Martins, R;

Publication
NETWORK AND SYSTEM SECURITY, NSS 2021

Abstract
Byzantine fault tolerant (BFT) protocols are designed to increase system dependability and security. They guarantee liveness and correctness even in the presence of arbitrary faults. However, testing and validating BFT systems is not an easy task. As is the case for most concurrent and distributed applications, the correctness of these systems does not depend solely on algorithm and protocol correctness. Ensuring the correct behaviour of BFT systems requires exhaustive testing under real-world scenarios. One approach is to use fault injection tools that deliberately introduce faults into a target system to observe its behaviour. However, existing tools tend to be designed for specific applications and systems and thus cannot be used generically. We argue that more advanced and powerful tools and frameworks are needed for testing the security and safety of distributed applications in general, and BFT systems in particular: specifically, a fault injection framework that can be integrated into both client- and server-side applications to test them exhaustively. We present ZERMIA, a modular and extensible fault injection framework designed for testing and validating concurrent and distributed applications. We validate ZERMIA’s principles by conducting a series of experiments on distributed applications and a state-of-the-art BFT library, showing the benefits of ZERMIA for testing and validating applications. © 2021, Springer Nature Switzerland AG.
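To make the idea of deliberate fault injection concrete, the sketch below (Python; the class name, fault types, and probabilities are illustrative assumptions, not ZERMIA's API) wraps a message handler so that calls are probabilistically dropped or delayed according to a seeded, reproducible schedule.

# Illustrative fault injector: interpose on a handler and inject
# message drops and latency, deterministically per seed so that a
# failing run can be replayed.
import random, time

class FaultInjector:
    def __init__(self, delay_prob=0.2, drop_prob=0.1, max_delay=0.5, seed=42):
        self.delay_prob, self.drop_prob, self.max_delay = delay_prob, drop_prob, max_delay
        self.rng = random.Random(seed)  # fixed seed -> reproducible fault schedule

    def wrap(self, handler):
        def faulty(message):
            r = self.rng.random()
            if r < self.drop_prob:
                return None                                       # drop silently
            if r < self.drop_prob + self.delay_prob:
                time.sleep(self.rng.uniform(0, self.max_delay))   # inject latency
            return handler(message)
        return faulty

deliver = FaultInjector().wrap(lambda m: f"delivered: {m}")
for i in range(5):
    print(deliver(f"msg-{i}"))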

2021

The Entropy Universe

Authors
Ribeiro, M; Henriques, T; Castro, L; Souto, A; Antunes, L; Costa Santos, C; Teixeira, A;

Publication
ENTROPY

Abstract
About 160 years ago, the concept of entropy was introduced in thermodynamics by Rudolf Clausius. Since then, it has been continually extended, interpreted, and applied by researchers in many scientific fields, such as general physics, information theory, chaos theory, data mining, and mathematical linguistics. This paper presents The Entropy Universe, which aims to review the many variants of entropy applied to time series. The purpose is to answer research questions such as: How did each entropy emerge? What is the mathematical definition of each variant? How are the entropies related to each other? In which scientific fields is each entropy most applied? We describe in depth the relationships between the entropies most applied to time series across different scientific fields, establishing a basis for researchers to properly choose the variant of entropy most suitable for their data. The number of citations over the past sixteen years of each paper proposing a new entropy was also assessed. The Shannon/differential, the Tsallis, the sample, the permutation, and the approximate entropies were the most cited. Based on the ten research areas with the most significant number of records obtained from the Web of Science and Scopus, the areas in which entropies are most applied are computer science, physics, mathematics, and engineering. The universe of entropies grows each day, whether through the introduction of new variants or through novel applications. Knowing each entropy's strengths and limitations is essential to ensure the proper development of this research field.
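Two of the surveyed variants are simple enough to show concretely. The sketch below (Python; the toy data and the embedding dimension are illustrative choices, not taken from the paper) computes the Shannon entropy of a symbol sequence and the normalized permutation entropy (Bandt-Pompe ordinal patterns) of a short time series.

# Shannon entropy of a discrete symbol sequence, and permutation
# entropy: the Shannon entropy of the distribution of ordinal patterns
# over sliding windows of length m, normalized by log2(m!).
import math
from collections import Counter

def shannon_entropy(symbols):
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def permutation_entropy(series, m=3):
    patterns = Counter(
        tuple(sorted(range(m), key=lambda k: series[i + k]))  # argsort of each window
        for i in range(len(series) - m + 1)
    )
    n = sum(patterns.values())
    h = -sum((c / n) * math.log2(c / n) for c in patterns.values())
    return h / math.log2(math.factorial(m))  # normalized to [0, 1]

print(shannon_entropy("AABBBC"))
print(permutation_entropy([4, 7, 9, 10, 6, 11, 3, 5, 8, 2], m=3))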

2021

Complexity as cardiorespiratory coupling measure in neonates with different gestational ages

Authors
Ribeiro, M; Castro, L; Antunes, L; Costa Santos, C; Henriques, T;

Publication
Proceedings of Entropy 2021: The Scientific Tool of the 21st Century

Abstract

2021

A Standard-Based Internet of Things Platform and Data Flow Modeling for Smart Environmental Monitoring

Authors
Filho, T; Fernando, L; Rabelo, M; Silva, S; Santos, C; Ribeiro, M; Grout, IA; Moreira, W; Oliveira, A;

Publication
SENSORS

Abstract
The environment consists of the interaction between physical, biotic, and anthropic components. As this interaction is dynamic, environmental characteristics tend to change naturally over time, requiring continuous monitoring. In this scenario, the internet of things (IoT), together with traditional sensor networks, allows for the monitoring of various environmental aspects, such as air, water, atmospheric, and soil conditions, sending data to different users and remote applications. This paper proposes a standard-based IoT platform and data flow modeling for smart environmental monitoring. The platform consists of an IoT network based on the IEEE 1451 standard, comprising a network capable application processor (NCAP) node (the coordinator) and multiple wireless transducer interface module (WTIM) nodes. A WTIM node consists of one or more transducers, a data transfer interface, and a processing unit. With the developed network, it is possible to collect environmental data at different points within a city landscape, to analyze the communication distance between WTIM nodes, and to monitor the number of bytes transferred by each network node. In addition, a dynamic model of the data flow is proposed in which the performance of the NCAP and WTIM nodes is described through state variables, relating directly to the information exchange dynamics between the communicating nodes in the mesh network. The modeling results showed stability in the network, meaning that the network can preserve its flow of information over long periods of time without losing frames or packets due to congestion.
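The stability claim at the end of the abstract can be illustrated with a minimal discrete-time state model of the kind the paper describes. In the sketch below (Python; node counts, frame rates, and service capacity are made-up values, not the paper's model), the NCAP queue length is the state variable, updated each step by WTIM arrivals minus a fixed service capacity; the flow is stable when arrivals stay below service capacity.

# Toy data-flow model: q[k+1] = max(0, q[k] + arrivals - service_rate),
# where arrivals come from a set of WTIM nodes and the NCAP drains the queue.
def simulate_queue(wtim_nodes=5, frames_per_node=2, service_rate=12, steps=50):
    q = 0  # state variable: frames waiting at the NCAP
    history = []
    for _ in range(steps):
        q = max(0, q + wtim_nodes * frames_per_node - service_rate)
        history.append(q)
    return history

print(simulate_queue()[-5:])                 # arrivals (10) < service (12): queue stays bounded
print(simulate_queue(service_rate=9)[-5:])   # arrivals (10) > service (9): queue grows, unstable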
