
Publications by João Tiago Paulo

2021

BDUS: implementing block devices in user space

Authors
Faria, A; Macedo, R; Pereira, J; Paulo, J;

Publication
SYSTOR '21: The 14th ACM International Systems and Storage Conference, Haifa, Israel, June 14-16, 2021.

Abstract

2021

MONARCH: Hierarchical Storage Management for Deep Learning Frameworks

Authors
Dantas, M; Leitao, D; Correia, C; Macedo, R; Xu, WJ; Paulo, J;

Publication
2021 IEEE INTERNATIONAL CONFERENCE ON CLUSTER COMPUTING (CLUSTER 2021)

Abstract
Due to convenience and usability, many deep learning (DL) jobs resort to the available shared parallel file system (PFS) for storing and accessing training data when running in HPC environments. Under such a scenario, however, where multiple I/O-intensive applications operate concurrently, the PFS can quickly get saturated with simultaneous storage requests and become a critical performance bottleneck, leading to throughput variability and performance loss. We present MONARCH, a framework-agnostic middleware for hierarchical storage management. This solution leverages the existing storage tiers present in modern supercomputers (e.g., compute nodes' local storage, PFS) to improve DL training performance and alleviate the current I/O pressure on the shared PFS. We validate the applicability of our approach by developing and integrating an early prototype with the TensorFlow DL framework. Results show that MONARCH can reduce I/O operations submitted to the shared PFS by up to 45%, decreasing training time by 24% and 12% for I/O-intensive models, namely LeNet and AlexNet.
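The tiering idea in the abstract can be pictured with a small, hypothetical sketch: a manager that stages training files onto node-local storage and only falls back to the shared PFS when the faster tier is full. The TierManager class, its fetch method, and the capacity handling below are illustrative assumptions, not MONARCH's actual interface.

    import os
    import shutil

    class TierManager:
        """Hypothetical sketch of hierarchical placement: serve training files
        from node-local storage when possible, falling back to the shared PFS."""

        def __init__(self, pfs_dir, local_dir, local_capacity_bytes):
            self.pfs_dir = pfs_dir        # shared parallel file system (slow, contended)
            self.local_dir = local_dir    # compute-node local storage (fast, small)
            self.capacity = local_capacity_bytes
            self.used = 0
            os.makedirs(local_dir, exist_ok=True)

        def fetch(self, name):
            """Return a path to read `name` from, staging it locally if it fits."""
            local_path = os.path.join(self.local_dir, name)
            if os.path.exists(local_path):
                return local_path                      # hit: avoids a PFS request
            pfs_path = os.path.join(self.pfs_dir, name)
            size = os.path.getsize(pfs_path)
            if self.used + size <= self.capacity:
                shutil.copyfile(pfs_path, local_path)  # stage to the faster tier
                self.used += size
                return local_path
            return pfs_path                            # no room: read directly from the PFS

A real implementation would sit in the framework's data-loading path so that repeated epochs over the same dataset hit the local tier instead of the PFS.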

2021

The Case for Storage Optimization Decoupling in Deep Learning Frameworks

Authors
Macedo, R; Correia, C; Dantas, M; Brito, C; Xu, WJ; Tanimura, Y; Haga, J; Paulo, J;

Publication
2021 IEEE INTERNATIONAL CONFERENCE ON CLUSTER COMPUTING (CLUSTER 2021)

Abstract
Deep Learning (DL) training requires efficient access to large collections of data, leading DL frameworks to implement individual I/O optimizations to take full advantage of storage performance. However, these optimizations are intrinsic to each framework, limiting their applicability and portability across DL solutions, while making them inefficient for scenarios where multiple applications compete for shared storage resources. We argue that storage optimizations should be decoupled from DL frameworks and moved to a dedicated storage layer. To achieve this, we propose a new Software-Defined Storage architecture for accelerating DL training performance. The data plane implements self-contained, generally applicable I/O optimizations, while the control plane dynamically adapts them to cope with workload variations and multi-tenant environments. We validate the applicability and portability of our approach by developing and integrating an early prototype with the TensorFlow and PyTorch frameworks. Results show that our I/O optimizations significantly reduce DL training time by up to 54% and 63% for TensorFlow and PyTorch baseline configurations, while providing similar performance benefits to framework-intrinsic I/O mechanisms provided by TensorFlow.
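The decoupling argued for above can be sketched, under assumptions, as a self-contained data-plane stage with a single tunable knob, plus a separate control loop that retunes it as conditions change. The CacheStage and control_tick names, the eviction policy, and the thresholds are illustrative only and do not come from the paper's prototype.

    from collections import OrderedDict

    class CacheStage:
        """Data plane: a framework-agnostic read cache placed between the
        DL framework and the storage backend."""

        def __init__(self, read_fn, capacity=256):
            self.read_fn = read_fn        # e.g., lambda path: open(path, "rb").read()
            self.capacity = capacity      # knob owned by the control plane
            self.cache = OrderedDict()
            self.hits = self.misses = 0

        def read(self, key):
            if key in self.cache:
                self.cache.move_to_end(key)
                self.hits += 1
                return self.cache[key]
            data = self.read_fn(key)
            self.misses += 1
            self.cache[key] = data
            while len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict least recently used
            return data

    def control_tick(stage, tenants_active):
        """Control plane: periodically retune the data-plane knob, e.g. shrink
        a job's cache when more tenants share the storage backend."""
        total = stage.hits + stage.misses
        hit_rate = stage.hits / total if total else 0.0
        if tenants_active > 1:
            stage.capacity = max(64, stage.capacity // 2)    # yield memory under contention
        elif hit_rate < 0.5:
            stage.capacity = min(4096, stage.capacity * 2)   # grow when misses dominate

The point of the split is that the caching logic stays generic and reusable across frameworks, while only the small control function needs to know about workload variation or multi-tenancy.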

2021

S2Dedup: SGX-enabled secure deduplication

Authors
Miranda, M; Esteves, T; Portela, B; Paulo, J;

Publication
SYSTOR '21: The 14th ACM International Systems and Storage Conference, Haifa, Israel, June 14-16, 2021.

Abstract
Secure deduplication allows removing duplicate content at third-party storage services while preserving the privacy of users' data. However, current solutions are built with strict designs that cannot be adapted to storage services and applications with different security and performance requirements. We present S2Dedup, a trusted hardware-based privacy-preserving deduplication system designed to support multiple security schemes that enable different levels of performance, security guarantees and space savings. An in-depth evaluation shows these trade-offs for the distinct Intel SGX-based secure schemes supported by our prototype. Moreover, we propose a novel Epoch and Exact Frequency scheme that prevents the frequency analysis leakage attacks present in current deterministic approaches for secure deduplication, while maintaining similar performance and space savings to state-of-the-art approaches.
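One way to picture the epoch idea mentioned above is a keyed, deterministic dedup index whose key rotates every epoch, so identical chunks are only detectable (and their frequencies only countable by an observer) within one epoch. The sketch below is a stdlib-only illustration under that assumption; it is not S2Dedup's SGX-based implementation, and the exact-frequency counting and the actual chunk encryption are omitted.

    import hmac
    import hashlib

    class EpochDedupIndex:
        """Illustrative sketch: deduplicate by a keyed, deterministic chunk tag
        whose key changes every epoch, bounding frequency-analysis leakage."""

        def __init__(self, master_secret: bytes, epoch_length: int = 10_000):
            self.master_secret = master_secret  # would be kept inside the trusted enclave
            self.epoch_length = epoch_length    # writes per epoch before re-keying
            self.writes = 0
            self.epoch = 0
            self.index = {}                     # tag -> stored-chunk reference

        def _tag(self, chunk: bytes) -> bytes:
            # Same chunk + same epoch => same tag (dedup works); new epoch => new tag.
            epoch_key = hmac.new(self.master_secret, str(self.epoch).encode(),
                                 hashlib.sha256).digest()
            return hmac.new(epoch_key, chunk, hashlib.sha256).digest()

        def write(self, chunk: bytes) -> bytes:
            self.writes += 1
            if self.writes % self.epoch_length == 0:
                self.epoch += 1                 # rotate: caps how long frequencies accumulate
            tag = self._tag(chunk)
            if tag not in self.index:
                self.index[tag] = chunk         # placeholder: a real system stores ciphertext
            return tag

The trade-off is visible even in this toy version: duplicates written in different epochs get different tags and are stored twice, which is the space cost paid for limiting what a frequency-counting adversary can learn.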

2021

CAT: content-aware tracing and analysis for distributed systems

Authors
Esteves, T; Neves, F; Oliveira, R; Paulo, J;

Publication
Middleware '21: 22nd International Middleware Conference, Québec City, Canada, December 6 - 10, 2021

Abstract

2021

ATOCS: Automatic Configuration of Encryption Schemes for Secure NoSQL Databases

Authors
Ferreira, D; Paulo, J; Matos, M;

Publication
2021 17TH EUROPEAN DEPENDABLE COMPUTING CONFERENCE (EDCC 2021)

Abstract
Secure databases have emerged to securely store and process sensitive data at untrusted infrastructures (e.g., Cloud Computing). To be secure and efficient, the encryption schemes used by these systems must be carefully chosen. Indeed, this task requires expertise both in databases and security, and is currently done manually, which is time-consuming and error-prone and can lead to security violations, poor performance, or both. This paper presents ATOCS, a novel framework that analyses the applications' code and, from the inferred requirements, determines the best combination of encryption schemes and related configurations for the underlying secure NoSQL database. Its design is modular and extensible, thus facilitating support for different applications and database solutions. Our evaluation with real-world applications shows that ATOCS is fast (it takes 44 seconds to analyse more than 12K LoC), accurate, and simplifies the configuration of secure databases.
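The core decision that ATOCS automates, choosing an encryption scheme from the operations an application performs on each database field, can be reduced to a small illustrative rule: pick the strongest scheme that still supports every required operation. The scheme names below follow common usage in the secure-database literature; the function is a hypothetical sketch, not the framework's actual rule set or analysis.

    def choose_scheme(operations):
        """Pick the strongest encryption scheme that still supports every
        operation the application performs on a given field (illustrative)."""
        if "arithmetic" in operations:
            return "homomorphic"         # computations over encrypted values
        if "range" in operations or "order" in operations:
            return "order-preserving"    # range scans/comparisons need order information
        if "equality" in operations:
            return "deterministic"       # equality lookups need repeatable ciphertexts
        return "probabilistic"           # nothing computed server-side: strongest option

    # Example: a field only used in equality filters stays deterministic,
    # while a field that is never queried gets randomized encryption.
    print(choose_scheme({"equality"}))   # -> deterministic
    print(choose_scheme(set()))          # -> probabilistic

The value of doing this automatically, as the abstract argues, is that a manual choice made without inspecting the application code tends to err in one of two directions: too weak a scheme (unnecessary leakage) or too strong a scheme (queries that no longer run server-side).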
