Opening the Black Box: Performance Estimation during Code Generation for GPUs

07/02/2021
by Dominik Ernst, et al.

Automatic code generation is frequently used to create implementations of algorithms specifically tuned to particular hardware and application parameters. The code generation process involves the selection of adequate code transformations, tuning parameters, and parallelization strategies. To cover the huge search space, code generation frameworks may apply time-intensive autotuning, exploit scenario-specific performance models, or treat performance as an intangible black box that must be described via machine learning. This paper addresses the selection problem by identifying the relevant performance-defining mechanisms through a performance model coupled with an analytic hardware metric estimator. This enables a quick exploration of large configuration spaces to identify highly efficient candidates with high accuracy. Our current approach targets memory-intensive GPGPU applications and focuses on the correct modeling of data transfer volumes to all levels of the memory hierarchy. We show how our method can be coupled to the pystencils stencil code generator, which is used to generate kernels for a range-four 3D 25-point stencil and a complex two-phase fluid solver based on the Lattice Boltzmann Method. For both, it delivers a ranking that can be used to select the best-performing candidate. The method is not limited to stencil kernels, but can be integrated into any code generator that can generate the required address expressions.
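The core idea of the abstract, analytically estimating data-transfer volumes and ranking kernel configurations without running them, can be illustrated with a small sketch. The following Python snippet is purely hypothetical: the simplified volume model, the GPU parameters, and all function names are assumptions made for this example, not the paper's actual estimator or the pystencils API.

```python
"""Hypothetical sketch: estimate per-block memory traffic for a 3D
stencil and rank launch configurations by a bandwidth-limited time
prediction. All names and the volume model are illustrative."""

from itertools import product

# Assumed GPU parameters (illustrative, roughly data-center-GPU scale).
DRAM_BW = 800e9   # DRAM bandwidth in bytes/s
BYTES = 8         # double precision

def volumes_per_block(bx, by, bz, r):
    """Loads/stores for one (bx, by, bz) thread block of a range-r 3D
    stencil, assuming the block reads its full halo-extended input
    cuboid exactly once and writes its output region exactly once."""
    reads = (bx + 2 * r) * (by + 2 * r) * (bz + 2 * r) * BYTES
    writes = bx * by * bz * BYTES
    return reads, writes

def predict_time(bx, by, bz, r, grid=(512, 512, 512)):
    """Bandwidth-bound time estimate: total traffic / DRAM bandwidth.
    A real estimator would also model L1/L2 reuse and cache capacity."""
    nblocks = (grid[0] // bx) * (grid[1] // by) * (grid[2] // bz)
    reads, writes = volumes_per_block(bx, by, bz, r)
    return nblocks * (reads + writes) / DRAM_BW

# Rank candidate block shapes without executing any of them.
candidates = list(product([16, 32, 64], [2, 4, 8], [2, 4, 8]))
ranked = sorted(candidates, key=lambda c: predict_time(*c, r=4))
for c in ranked[:3]:
    print(c, f"{predict_time(*c, r=4) * 1e3:.2f} ms (estimated)")
```

Ranking by such a bandwidth-limited estimate mirrors the workflow described in the abstract: explore a large configuration space cheaply, then generate and compile code only for the top-ranked candidates.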

