Details
Name
André Martins Pereira
Role
Senior Researcher
Since
12th September 2022
Nationality
Portugal
Centre
High-Assurance Software
Contacts
+351 253 604 440
andre.martins.pereira@inesctec.pt
2025
Authors
Coelho, M; Ocana, K; Pereira, A; Porto, A; Cardoso, DO; Lorenzon, A; Oliveira, R; Navaux, POA; Osthoff, C;
Publication
HIGH PERFORMANCE COMPUTING, CARLA 2024
Abstract
High-performance computing is pivotal for processing large datasets and executing complex simulations, ensuring faster and more accurate results. Improving the performance of software and scientific workflows in such environments requires careful analysis of their computational behavior and energy consumption, so maximizing computational throughput through adequate software configuration and resource allocation is essential. The work presented in this paper leverages regression-based machine learning and decision trees to analyze and optimize resource allocation in high-performance computing environments based on an application's performance and energy metrics. Applied to a bioinformatics case study, these models enable informed decision-making by selecting the appropriate computing resources to enhance the performance of a phylogenomics application. Our contribution is a better exploration and understanding of efficient resource management on supercomputers, namely Santos Dumont. We show that the predictions of the application's execution time using the proposed method are accurate for various numbers of computing nodes, while energy consumption predictions are less precise. The application parameters most relevant to this work are identified, and the relative importance of each parameter to the accuracy of the prediction is analyzed.
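As an illustration of the regression-based approach summarized above, the sketch below fits a decision-tree regressor that predicts execution time from the resource allocation and a few application parameters, and reports the relative importance of each parameter. It is a minimal sketch assuming scikit-learn; the feature names and data are hypothetical placeholders, not the paper's dataset, model configuration, or results.

# Minimal sketch: decision-tree regression of execution time on
# resource-allocation and application parameters (hypothetical data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Hypothetical features: [compute_nodes, threads_per_node, input_size_gb]
X = np.array([
    [1, 24, 10], [2, 24, 10], [4, 24, 10], [8, 24, 10],
    [1, 48, 20], [2, 48, 20], [4, 48, 20], [8, 48, 20],
])
# Hypothetical measured execution times, in seconds.
y = np.array([4800.0, 2500.0, 1350.0, 780.0, 9100.0, 4700.0, 2600.0, 1500.0])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = DecisionTreeRegressor(max_depth=4, random_state=0)
model.fit(X_train, y_train)

# Relative importance of each parameter to the prediction, analogous to the
# parameter-importance analysis mentioned in the abstract.
for name, importance in zip(["nodes", "threads", "input_gb"],
                            model.feature_importances_):
    print(f"{name}: {importance:.2f}")

# Predicted execution time for an untried allocation (hypothetical).
print(model.predict([[16, 24, 10]]))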
2024
Authors
Silva, CA; Vilaça, R; Pereira, A; Bessa, RJ;
Publication
RENEWABLE & SUSTAINABLE ENERGY REVIEWS
Abstract
High-performance computing relies on performance-oriented infrastructures with access to powerful computing resources to complete tasks that contribute to solving complex problems in society. The intensive use of resources and the increase in service demand due to emerging fields of science, combined with the exascale paradigm, climate change concerns, and rising energy costs, ultimately mean that the decarbonization of these centers is key to improving their environmental and financial performance. Therefore, a review of the main opportunities and challenges for the decarbonization of high-performance computing centers is essential to help decision-makers, operators, and users contribute to a more sustainable computing ecosystem. It was found that state-of-the-art supercomputers are growing in computing power while combining different measures to meet sustainability concerns, namely going beyond energy efficiency measures and evolving simultaneously in terms of energy and information technology infrastructure. It was also shown that policy and multiple entities are now specifically targeting HPC, and that identifying synergies with the energy sector can reveal new revenue streams and enable a smoother integration of these centers into energy systems. Computing-intensive users can continue to pursue their scientific research while participating more actively in the decarbonization process, in cooperation with computing service providers. Overall, many opportunities, but also challenges, were identified to decrease carbon emissions in a sector mostly concerned with improving hardware performance.
2024
Authors
Reascos, L; Carneiro, F; Pereira, A; Castro, NF; Ribeiro, RM;
Publication
COMPUTER PHYSICS COMMUNICATIONS
Abstract
Density functional calculation of electronic structures of materials is one of the most used techniques in theoretical solid state physics. These calculations retrieve single electron wavefunctions and their eigenenergies. The berry suite of programs amplifies the usefulness of DFT by ordering the eigenstates into analytic bands, allowing the differentiation of the wavefunctions in reciprocal space. It can then calculate Berry connections and curvatures and the second harmonic generation conductivity. The berry software is implemented for two-dimensional materials and was tested on hBN and InSe. In the near future, more properties and functionalities are expected to be added.
Program summary
Program Title: berry
CPC Library link to program files: https://doi.org/10.17632/mpbbksz2t7.1
Developer's repository link: https://github.com/ricardoribeiro-2020/berry
Licensing provisions: MIT
Programming language: Python3
Nature of problem: Differentiation of Bloch wavefunctions in reciprocal space, numerically obtained from a DFT software, applied to two-dimensional materials. This enables the numeric calculation of material properties such as Berry geometries and second harmonic conductivity.
Solution method: Extracts Kohn-Sham functions from a DFT calculation, orders them by analytic bands using graph and AI methods, and calculates the gradient of the wavefunctions along an electronic band.
Additional comments including restrictions and unusual features: Applies only to two-dimensional materials, and only imports Kohn-Sham functions from the Quantum Espresso package.
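For background on the quantities the berry suite computes, the Berry connection and Berry curvature of a band n follow the standard definitions below (general textbook expressions, not reproduced from the paper):

    \mathbf{A}_n(\mathbf{k}) = i\,\langle u_{n\mathbf{k}} \mid \nabla_{\mathbf{k}}\, u_{n\mathbf{k}} \rangle,
    \qquad
    \boldsymbol{\Omega}_n(\mathbf{k}) = \nabla_{\mathbf{k}} \times \mathbf{A}_n(\mathbf{k})

Here u_{nk} is the cell-periodic part of the Kohn-Sham Bloch state; evaluating the reciprocal-space gradient numerically is precisely the step that requires the eigenstates to be ordered into smooth, analytic bands.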
2023
Authors
Pereira, A; Onofre, A; Proenca, A;
Publication
EUROPEAN PHYSICAL JOURNAL PLUS
Abstract
HEP-Frame is a new C++ package designed to efficiently perform analyses of datasets from a very large number of events, like those available at the Large Hadron Collider (LHC) at CERN, Geneva. It mainly targets high-performance servers and mini-clusters, and it was designed to give natural science researchers a user-friendly interface to access structured databases. HEP-Frame automatically evaluates the underlying computing resources and builds an adequate code skeleton when creating a data analysis application. At run-time, HEP-Frame analyses a sequence of datasets, exploring the available parallelism in the code and hardware resources: it concurrently reads inputs from a user-defined data structure and processes them, following the user-specified sequence of requirements to select relevant data; it manages the efficient execution of that sequence; and it outputs results in user-defined objects (e.g., ROOT structures), stored together with the used input dataset. This paper shows how domain-expert software development can benefit from HEP-Frame, and how it significantly improved the performance of analyses of large datasets produced in proton-proton collisions at the LHC. Two case studies are discussed: the associated production of top quarks together with a Higgs boson (tt̄H) at the LHC, and double- and single-top-quark production at the high-luminosity phase of the LHC (HL-LHC). Results show that HEP-Frame's awareness of the analysis code behaviour and structure, and of the underlying hardware system, provides powerful and transparent parallelization mechanisms that largely improve the execution time of data analysis applications.
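To illustrate the kind of cut-sequence pipeline the abstract describes, where each event passes through an ordered list of user-defined selection requirements and events are processed in parallel, the sketch below gives a minimal conceptual version in Python. It is not HEP-Frame's C++ interface; the event fields and cuts are hypothetical.

# Conceptual sketch of an event-selection pipeline: an ordered sequence of
# user-defined cuts applied to each event, with events processed in parallel.
# Illustrative only; not the HEP-Frame API.
from concurrent.futures import ProcessPoolExecutor

# Hypothetical events: dictionaries of reconstructed quantities.
events = [
    {"n_jets": 6, "n_btags": 3, "missing_et": 45.0},
    {"n_jets": 3, "n_btags": 1, "missing_et": 12.0},
    {"n_jets": 7, "n_btags": 4, "missing_et": 80.0},
]

# Ordered sequence of selection requirements (hypothetical cuts).
cuts = [
    lambda e: e["n_jets"] >= 4,
    lambda e: e["n_btags"] >= 2,
    lambda e: e["missing_et"] > 20.0,
]

def passes_all(event):
    # Apply the cut sequence in order; stop at the first failing cut.
    return all(cut(event) for cut in cuts)

if __name__ == "__main__":
    # Process events concurrently, mirroring the parallel execution idea.
    with ProcessPoolExecutor() as pool:
        flags = list(pool.map(passes_all, events))
    selected = [e for e, keep in zip(events, flags) if keep]
    print(f"{len(selected)} of {len(events)} events pass the selection")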
2021
Authors
Pereira, A; Proenca, A;
Publication
Advances in Parallel & Distributed Processing, and Applications - Transactions on Computational Science and Computational Intelligence
Abstract
Supervised Theses
2024
Author
Joaquim Tiago Martins Roque
Institution
2024
Author
Luís Manuel Gonçalves da Silva
Institution
2024
Author
Diogo Manuel Brito Pires
Institution
2024
Author
Carolina Fernandes da Silva Gomes
Institution