
Publications by HASLab

2021

EcoAndroid: An Android Studio Plugin for Developing Energy-Efficient Java Mobile Applications

Authors
Ribeiro, A; Ferreira, JF; Mendes, A;

Publication
2021 IEEE 21ST INTERNATIONAL CONFERENCE ON SOFTWARE QUALITY, RELIABILITY AND SECURITY (QRS 2021)

Abstract
Mobile devices have become indispensable in our daily life and reducing the energy consumed by them has become essential. However, developing energy-efficient mobile applications is not a trivial task. To address this problem, we present EcoAndroid, an Android Studio plugin that automatically applies energy patterns to Java source code. It currently supports ten different cases of energy-related refactorings, divided over five energy patterns taken from the literature. We used EcoAndroid to analyze 100 Java mobile applications (≈1.5M LOC) and we found that 35 of the projects had a total of 95 energy code smells. EcoAndroid was able to automatically refactor all the code smells identified.
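
One well-known energy pattern from that literature is Dynamic Retry Delay, which replaces a fixed retry interval with one that backs off after repeated failures. The Java sketch below is only an illustration of that pattern, assuming a polling-style network call; the class and method names are hypothetical and are not taken from the plugin.

// Illustrative sketch of the "Dynamic Retry Delay" energy pattern.
// All names here are hypothetical, not code from EcoAndroid itself.
public class ServerPoller {

    // Energy code smell: a fixed retry delay keeps waking the radio
    // at the same rate even while the server stays unreachable.
    void pollWithFixedDelay() throws InterruptedException {
        while (!tryFetchUpdates()) {
            Thread.sleep(5_000);                      // constant 5 s retry
        }
    }

    // Refactored form: the delay doubles after every failed attempt
    // (up to a cap), so repeated failures cost progressively less energy.
    void pollWithDynamicDelay() throws InterruptedException {
        long delayMs = 5_000;
        final long maxDelayMs = 5 * 60_000;
        while (!tryFetchUpdates()) {
            Thread.sleep(delayMs);
            delayMs = Math.min(delayMs * 2, maxDelayMs);  // back off
        }
    }

    private boolean tryFetchUpdates() {
        return false;  // stand-in for the actual network call
    }
}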

2021

Formal Methods Teaching

Authors
Ferreira, JF; Mendes, A; Menghi, C;

Publication
Lecture Notes in Computer Science

Abstract

2021

Quantum simulation of the ground-state Stark effect in small molecules: a case study using IBM Q

Authors
Tavares, C; Oliveira, S; Fernandes, V; Postnikov, A; Vasilevskiy, MI;

Publication
SOFT COMPUTING

Abstract
As quantum computing approaches its first commercial implementations, quantum simulation emerges as a potentially ground-breaking technology for several domains, including biology and chemistry. However, taking advantage of quantum algorithms in quantum chemistry raises a number of theoretical and practical challenges at different levels, from conception to actual execution. We go through such challenges in a case study of a quantum simulation of the hydrogen (H₂) and lithium hydride (LiH) molecules on an actual commercially available quantum computer, the IBM Q. The former molecule has always been a playground for testing approximate calculation methods in quantum chemistry, while the latter is slightly more complex, lacking the mirror symmetry of the former. Using the variational quantum eigensolver method, we study the molecules' ground-state energy versus interatomic distance, under the action of stationary electric fields (Stark effect). Additionally, we review the necessary calculations of the matrix elements of the second-quantization Hamiltonian, encompassing the extra terms concerning the action of electric fields, using STO-LG-type atomic orbitals to build the minimal basis sets.
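
For context, the extra field terms enter through the one-electron integrals: under a uniform static field F, each integral acquires a dipole matrix element. A sketch in generic second-quantization notation (the symbols and sign convention here are ours, not necessarily the paper's):

H = \sum_{pq} h_{pq}\, a_p^\dagger a_q
  + \frac{1}{2} \sum_{pqrs} \langle pq | rs \rangle\, a_p^\dagger a_q^\dagger a_s a_r,
\qquad
h_{pq} = h^{(0)}_{pq} \pm \mathbf{F} \cdot \langle \phi_p | \hat{\mathbf{r}} | \phi_q \rangle,

where the sign of the field term depends on the charge and field conventions adopted. The variational quantum eigensolver then estimates the ground state by minimizing E(\theta) = \langle \psi(\theta) | H | \psi(\theta) \rangle over a parametrized circuit, repeated for each interatomic distance and field strength to trace the Stark-shifted energy curves.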

2021

PRNG-Broker: A High-Performance Broker to Supply Parallel Streams of Pseudorandom Numbers for Large-Scale Simulations

Authors
Pereira, A; Proenca, A;

Publication
Advances in Parallel & Distributed Processing, and Applications - Transactions on Computational Science and Computational Intelligence

Abstract

2021

HEP-Frame: Improving the efficiency of pipelined data transformation & filtering for scientific analyses

Authors
Pereira, A; Proenca, A;

Publication
COMPUTER PHYSICS COMMUNICATIONS

Abstract
Software to analyse very large sets of experimental data often relies on a pipeline of irregular computational tasks with decisions to remove irrelevant data from further processing. A user-centred framework, HEP-Frame, was designed and deployed to aid domain experts in developing applications for scientific data analyses and in monitoring and controlling their efficient execution. The key feature of HEP-Frame is the performance portability of the code across different heterogeneous platforms, due to a novel adaptive multi-layer scheduler seamlessly integrated into the tool, an approach not available in competing frameworks. The multi-layer scheduler transparently allocates parallel data/tasks across the available heterogeneous resources, dynamically balances threads between data input and computational tasks, adaptively reorders at run time the parallel execution of the pipeline stages for each data stream while respecting data dependencies, and efficiently manages the execution of library functions in accelerators. Each layer implements a specific scheduling strategy: one balances the execution of the computational stages of the pipeline, distributing the stages of the same or different dataset elements among the available computing threads; another controls the order of pipeline stage execution, so that most data is filtered out earlier and later stages execute the computationally heavy tasks; yet another adaptively balances the automatically created threads between data input and the computational tasks, taking into account the requirements of each application. Simulated data analyses from sensors in the ATLAS Experiment at CERN evaluated the scheduler efficiency on dual multicore Xeon servers with and without accelerators, and on servers with the many-core Intel KNL. Experimental results show significantly improved performance of these data analyses due to HEP-Frame features, and the code scaled well on multiple servers. Results also show that the HEP-Frame scheduler outperforms its key competitor, the HEFT list scheduler. The best overall performance improvement over a real, fine-tuned sequential data analysis was impressive in homogeneous, heterogeneous, and many-core servers alike: 81x faster on the homogeneous 24+24-core Skylake server, 86x faster on the heterogeneous 12+12-core Ivy Bridge server with the Kepler GPU, and 252x faster on the 64-core KNL server.

Program summary
Program Title: HEP-Frame
CPC Library link to program files: https://doi.org/10.17632/m2jwxshtfz.1
Licensing provisions: GPLv3
Programming language: C++
Supplementary material: The current HEP-Frame public release is available at https://bitbucket.org/ampereira/hep-frame/wiki/Home
Nature of problem: Scientific data analysis applications are often developed to process large amounts of data obtained through experimental measurements or Monte Carlo simulations, aiming to identify patterns in the data or to test and/or validate theories. These large inputs are usually processed by a pipeline of computational tasks that may filter out irrelevant data (a task and its filter are addressed as a proposition in this communication), preventing it from being processed by subsequent tasks in the pipeline. This data filtering, coupled with the fact that propositions may have different computational intensities, contributes to the irregularity of the pipeline execution. This can lead to I/O-, memory-, or compute-bound performance limitations in scientific data analyses, depending on the implemented algorithms and input data. To allow scientists to process more data with more accurate results, their code and data structures should be optimized for the computing resources they can access. Since the main goal of most scientists is to obtain results relevant to their scientific fields, often within strict deadlines, optimizing the performance of their applications is very time consuming and is usually overlooked. Scientists require a software framework to aid the design and development of efficient applications and to control their parallel execution on distinct computing platforms.
Solution method: This work proposes HEP-Frame, a framework to aid the development and efficient execution of pipelined scientific analysis applications on homogeneous and heterogeneous servers. HEP-Frame is a user-centred framework that helps scientists develop applications to analyse data from a large number of dataset elements, with a flexible pipeline of propositions. It not only stresses the interface to domain experts, so that code is more robust and developed faster, but also aims at high-performance portability across different types of parallel computing platforms and at desirable sustainability features. The framework aims to provide efficient parallel code execution without requiring user expertise in parallel computing. Frameworks to aid the design and deployment of scientific code usually fall into two categories: (i) resource-centred frameworks, closer to the computing platforms, where execution efficiency and performance portability are the main goals but developers are forced to adapt their code to strict framework constraints; and (ii) user-centred frameworks, which stress the interface to domain experts to improve code development speed and robustness, aiming to provide desirable sustainability features but disregarding execution performance. There is also a set of frameworks that merge these two categories (Liu et al., 2015 [1]; Deelman et al., 2015 [2]) for scientific computing; while they do not have steep learning curves, concessions have to be made to their ease of use to allow for their broader scope of targeted applications. HEP-Frame attempts to bridge this gap, placing itself between a fully user- or resource-centred framework, so that users develop code quickly and do not have to worry about its computational efficiency. It handles (i) by ensuring efficient execution of applications according to their computational requirements and the available resources on the server, through a multi-layer scheduler, while (ii) is addressed by automatically generating code skeletons, transparently managing the data structure, and automating repetitive tasks.
Additional comments: An early-stage proof of concept was published in conference proceedings (Pereira et al., 2015). However, the HEP-Frame version presented in this communication only shares a very small portion of that code, related to skeleton generation (less than 5% of the overall code); the rest of the user interface, multi-layer scheduler, and parallelization strategies were completely redesigned and re-implemented.
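
HEP-Frame itself is written in C++ (see the program summary above); the following is only a minimal Java sketch of one scheduling idea the abstract describes, run-time reordering of pipeline stages so that cheap, highly selective propositions execute first. It ignores data dependencies and threading, and every name in it is hypothetical.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.function.Predicate;

public class PipelineReorderSketch {

    /** A pipeline stage ("proposition"): a filter plus observed statistics. */
    static final class Stage<T> {
        final String name;
        final Predicate<T> filter;
        long processed = 0, passed = 0, totalNanos = 0;

        Stage(String name, Predicate<T> filter) {
            this.name = name;
            this.filter = filter;
        }

        boolean run(T item) {
            long t0 = System.nanoTime();
            boolean keep = filter.test(item);
            totalNanos += System.nanoTime() - t0;
            processed++;
            if (keep) passed++;
            return keep;
        }

        /** Observed rejection rate per nanosecond spent: higher = run earlier. */
        double selectivityPerCost() {
            if (processed == 0) return 0;
            double rejectRate = 1.0 - (double) passed / processed;
            double avgCost = (double) totalNanos / processed;
            return rejectRate / Math.max(avgCost, 1);
        }
    }

    public static void main(String[] args) {
        List<Stage<Integer>> pipeline = new ArrayList<>(List.of(
            new Stage<>("cheap-selective", x -> x % 10 == 0),
            new Stage<>("expensive", x -> slowCheck(x))));

        int survivors = 0;
        for (int item = 0; item < 100_000; item++) {
            boolean keep = true;
            for (Stage<Integer> s : pipeline) {
                if (!s.run(item)) { keep = false; break; }
            }
            if (keep) survivors++;
            // Periodically reorder: most selective-per-cost stage first.
            if (item % 1_000 == 999) {
                pipeline.sort(Comparator.comparingDouble(
                    (Stage<Integer> s) -> s.selectivityPerCost()).reversed());
            }
        }
        System.out.println("survivors: " + survivors);
        for (Stage<Integer> s : pipeline) {
            System.out.printf("%s: pass rate %.3f%n", s.name,
                (double) s.passed / s.processed);
        }
    }

    private static boolean slowCheck(int x) {
        // Stand-in for a computationally heavy proposition.
        double acc = 0;
        for (int i = 0; i < 200; i++) acc += Math.sqrt(x + i);
        return acc > 0;
    }
}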

2020

A Comparison of Message Exchange Patterns in BFT Protocols - (Experience Report)

Authors
Silva, F; Alonso, AN; Pereira, J; Oliveira, R;

Publication
Distributed Applications and Interoperable Systems - 20th IFIP WG 6.1 International Conference, DAIS 2020, Held as Part of the 15th International Federated Conference on Distributed Computing Techniques, DisCoTec 2020, Valletta, Malta, June 15-19, 2020, Proceedings

Abstract
The performance and scalability of Byzantine fault-tolerant (BFT) protocols for state machine replication (SMR) have recently come under scrutiny due to their application in the consensus mechanism of blockchain implementations. This has led to a proliferation of proposals offering different trade-offs that are not easily compared: even though all are based on message passing, they differ in multiple design and implementation factors besides the message exchange pattern. In this paper we focus on the impact of different combinations of cryptographic primitives and of the message exchange pattern used to collect and disseminate votes, a key aspect for performance and scalability. By measuring this aspect in isolation and in a common framework, we characterise the design space and point out research directions for adaptive protocols that provide the best trade-off for each environment and workload combination.
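
As a back-of-the-envelope illustration of why the vote exchange pattern matters for scalability, the hypothetical Java sketch below contrasts per-round message counts for two classic patterns, centralized (star) versus all-to-all. It is not code from the paper's framework.

// Hypothetical sketch: message counts per voting round for n replicas.
public class VotePatternSketch {
    public static void main(String[] args) {
        int[] sizes = {4, 16, 64, 256};
        for (int n : sizes) {
            // Star: each replica sends its vote to one collector, which
            // then disseminates an aggregate certificate back to all.
            long star = (long) (n - 1) + (n - 1);      // gather + scatter
            // All-to-all: every replica broadcasts its vote to all others,
            // so each one assembles a quorum certificate locally.
            long allToAll = (long) n * (n - 1);
            System.out.printf("n=%d  star=%d  all-to-all=%d%n",
                n, star, allToAll);
        }
    }
}

The star pattern trades O(n) messages for an extra hop and a central bottleneck, while all-to-all pays O(n^2) messages but removes the relay; which wins also depends on the cryptographic primitives used to verify the collected votes, which is exactly the combination the paper measures.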
