
Publications by HumanISE

2017

Message from general and program co-chairs

Authors
Cardoso, JMP; Huebner, M; Agosta, G; Silvano, C;

Publication
ACM International Conference Proceeding Series

Abstract

2017

Impact of Compiler Phase Ordering When Targeting GPUs

Authors
Nobre, R; Reis, L; Cardoso, JMP;

Publication
Euro-Par 2017: Parallel Processing Workshops - Euro-Par 2017 International Workshops, Santiago de Compostela, Spain, August 28-29, 2017, Revised Selected Papers

Abstract
Research in compiler pass phase ordering (i.e., selection of compiler analysis/transformation passes and their order of execution) has been mostly performed in the context of CPUs and, in a small number of cases, FPGAs. In this paper we present experiments regarding compiler pass phase ordering specialization of OpenCL kernels targeting NVIDIA GPUs using Clang/LLVM 3.9 and the libclc OpenCL library. More specifically, we analyze the impact of using specialized compiler phase orders on the performance of 15 PolyBench/GPU OpenCL benchmarks. In addition, we analyze the final NVIDIA PTX assembly code generated by the different compilation flows in order to identify the main reasons for the cases with significant performance improvements. Using specialized compiler phase orders, we were able to achieve performance improvements over the CUDA version and OpenCL compiled with the NVIDIA driver. Compared to CUDA, we were able to achieve geometric mean improvements of 1.54× (up to 5.48×). Compared to the OpenCL driver version, we were able to achieve geometric mean improvements of 1.65× (up to 5.70×). © Springer International Publishing AG, part of Springer Nature 2018.
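To make the notion of a specialized phase order more concrete, the sketch below applies a hand-picked sequence of LLVM passes to an IR module through the LLVM-C API, assuming an LLVM 3.9-era installation such as the one mentioned in the abstract. It is an illustrative example only: the pass selection and order shown are hypothetical and not one of the phase orders evaluated in the paper.

/* Illustrative sketch (not from the paper): applying a hand-picked LLVM pass
 * order to an IR module via the LLVM-C API, assuming an LLVM 3.9-era build.
 * An OpenCL kernel would first be compiled to LLVM IR (e.g. with Clang and
 * libclc) and, after the passes run, lowered to NVIDIA PTX by the NVPTX
 * backend. */
#include <stdio.h>
#include <llvm-c/Core.h>
#include <llvm-c/IRReader.h>
#include <llvm-c/Transforms/Scalar.h>

int main(int argc, char **argv) {
    LLVMContextRef ctx = LLVMContextCreate();
    LLVMMemoryBufferRef buf;
    LLVMModuleRef mod;
    char *err = NULL;

    if (argc < 2 ||
        LLVMCreateMemoryBufferWithContentsOfFile(argv[1], &buf, &err) ||
        LLVMParseIRInContext(ctx, buf, &mod, &err)) {
        fprintf(stderr, "usage: phase-order <kernel.ll> (%s)\n",
                err ? err : "no input");
        return 1;
    }

    /* A hypothetical specialized phase order: the choice of passes and the
     * order in which they run is exactly the search space explored in
     * phase-ordering research. */
    LLVMPassManagerRef pm = LLVMCreatePassManager();
    LLVMAddPromoteMemoryToRegisterPass(pm);  /* mem2reg */
    LLVMAddEarlyCSEPass(pm);
    LLVMAddLoopRotatePass(pm);
    LLVMAddLICMPass(pm);
    LLVMAddLoopUnrollPass(pm);
    LLVMAddCFGSimplificationPass(pm);
    LLVMRunPassManager(pm, mod);             /* transforms the module in place */

    LLVMDumpModule(mod);                     /* print the transformed IR for inspection */
    LLVMDisposePassManager(pm);
    LLVMDisposeModule(mod);
    LLVMContextDispose(ctx);
    return 0;
}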

2017

On Coding Techniques for Targeting FPGAs via OpenCL

Authors
Paulino, N; Reis, L; Cardoso, JMP;

Publication
Parallel Computing is Everywhere, Proceedings of the International Conference on Parallel Computing, ParCo 2017, 12-15 September 2017, Bologna, Italy

Abstract
Software developers have always found it difficult to adopt Field-Programmable Gate Arrays (FPGAs) as computing platforms. Recent advances in High-Level Synthesis (HLS) tools aim to ease the mapping of computations to FPGAs by abstracting the hardware design effort via a standard OpenCL interface and execution model. However, OpenCL is a low-level programming language and requires that developers master the target architecture in order to achieve efficient results. Thus, efforts addressing the generation of OpenCL from high-level languages are of paramount importance to increase design productivity and to help software developers. Existing approaches bridge this gap by translating MATLAB/Octave code into C, or similar languages, in order to improve performance by efficiently compiling for the target hardware. One example is the MATISSE source-to-source compiler, which translates MATLAB code into standard-compliant C and/or OpenCL code. In this paper, we analyse the viability of combining both flows so that sections of MATLAB code can be translated to specialized hardware with a small amount of effort, and test a few code optimizations and their effect on performance. We present preliminary results relative to execution times, and resource and power consumption, for two OpenCL kernels generated by MATISSE, and manual optimizations of each kernel based on different coding techniques. © 2018 The authors and IOS Press.
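As a concrete, though hypothetical, illustration of the kind of source-level coding techniques the paper evaluates, the two OpenCL kernel variants below compute the same element-wise operation: the first is a plain NDRange kernel, while the second is written as a single-work-item loop with restrict-qualified pointers and an unrolling pragma, changes of the sort that can influence the hardware an OpenCL-to-FPGA HLS flow generates. The kernel names, the operation, and the unroll factor are invented for illustration, and support for the unroll pragma is vendor-specific.

/* Hypothetical OpenCL C kernels (not taken from the paper) illustrating
 * coding techniques that can affect OpenCL-to-FPGA compilation. */

/* Baseline: NDRange kernel, one work-item per output element. */
__kernel void vec_scale_baseline(__global const float *in,
                                 __global float *out,
                                 const float factor,
                                 const int n) {
    int i = get_global_id(0);
    if (i < n)
        out[i] = in[i] * factor;
}

/* Variant: single-work-item kernel with restrict-qualified pointers and a
 * partially unrolled loop. On FPGA-oriented OpenCL compilers this style
 * typically exposes pipeline parallelism to the HLS back end. */
__kernel void vec_scale_unrolled(__global const float * restrict in,
                                 __global float * restrict out,
                                 const float factor,
                                 const int n) {
    #pragma unroll 4
    for (int i = 0; i < n; i++)
        out[i] = in[i] * factor;
}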

2017

The ANTAREX tool flow for monitoring and autotuning energy efficient HPC systems

Authors
Silvano, C; Agosta, G; Barbosa, JG; Bartolini, A; Beccari, AR; Benini, L; Bispo, J; Cardoso, JMP; Cavazzoni, C; Cherubin, S; Cmar, R; Gadioli, D; Manelfi, C; Martinovic, J; Nobre, R; Palermo, G; Palkovic, M; Pinto, P; Rohou, E; Sanna, N; Slaninová, K;

Publication
2017 International Conference on Embedded Computer Systems: Architectures, Modeling, and Simulation, SAMOS 2017, Pythagorion, Greece, July 17-20, 2017

Abstract
Designing and optimizing HPC applications are difficult and complex tasks that require mastering specialized languages and tools for performance tuning. As this is incompatible with the current trend to open HPC infrastructures to a wider range of users, the availability of more sophisticated programming languages and tools to assist and automate the design stages is crucial to providing smooth migration paths towards novel heterogeneous HPC platforms. The ANTAREX project intends to address these issues by providing a tool flow, a Domain Specific Language, and APIs to provide application adaptivity and to manage and autotune applications at runtime for heterogeneous HPC systems. Our DSL provides a separation of concerns, where analysis, runtime adaptivity, performance tuning, and energy strategies are specified separately from the application functionality, with the goal of increasing productivity and significantly reducing time to solution, while making possible the deployment of substantially improved implementations. This paper presents the ANTAREX tool flow and shows the impact of optimization strategies in the context of one of the ANTAREX use cases related to personalized drug design. We show how simple strategies, not devised by typical compilers, can substantially speed up execution and reduce energy consumption. © 2017 IEEE.

2017

Embedded Computing for High Performance: Efficient Mapping of Computations Using Customization, Code Transformations and Compilation

Authors
Cardoso, JMP; Coutinho, JGF; Diniz, PC;

Publication
Embedded Computing for High Performance: Efficient Mapping of Computations Using Customization, Code Transformations and Compilation

Abstract
Embedded Computing for High Performance: Design Exploration and Customization Using High-level Compilation and Synthesis Tools provides a set of real-life example implementations that migrate traditional desktop systems to embedded systems. Working with popular hardware, including Xilinx and ARM, the book offers a comprehensive description of techniques for mapping computations expressed in programming languages such as C or MATLAB to high-performance embedded architectures consisting of multiple CPUs, GPUs, and reconfigurable hardware (FPGAs). The authors demonstrate a domain-specific language (LARA) that facilitates retargeting to multiple computing systems using the same source code. In this way, users can decouple original application code from transformed code and enhance productivity and program portability. After reading this book, engineers will understand the processes, methodologies, and best practices needed for the development of applications for high-performance embedded computing systems. The book focuses on maximizing performance while managing energy consumption in embedded systems; explains how to retarget code for heterogeneous systems with GPUs and FPGAs; demonstrates a domain-specific language that facilitates migrating and retargeting existing applications to modern systems; and includes downloadable slides, tools, and tutorials.

2017

Message from ANDARE'17 general and program chairs

Authors
Bartolini, A; Cardoso, JMP; Silvano, C; Palermo, G; Barbosa, J; Marongiu, A; Mustafa, D; Rohou, E; Mantovani, F; Agosta, G; Martinovic, J; Pingali, K; Slaninová, K; Benini, L; Cytowski, M; Palkovic, M; Gerndt, M; Sanna, N; Diniz, P; Rusitoru, R; Eigenmann, R; Patki, T; Fahringer, T; Rosendard, T;

Publication
ACM International Conference Proceeding Series

Abstract
