Towards High Performance, Portability, and Productivity: Lightweight Augmented Neural Networks for Performance Prediction

03/17/2020
by   Ajitesh Srivastava, et al.

Writing high-performance code requires significant expertise in the programming language, compiler optimizations, and hardware. This often results in poor productivity and portability, and is inconvenient for a non-programmer domain specialist such as a physicist. More desirable is a high-level language in which the domain specialist simply specifies the workload in terms of high-level operations (e.g., matrix-multiply(A, B)), and the compiler identifies the best implementation, fully utilizing the heterogeneous platform. To create a compiler that supports productivity, portability, and performance simultaneously, it is crucial to predict the performance of the various available implementations (variants) of the dominant operations (kernels) in the workload on various hardware, in order to decide (a) which variant should be chosen for each kernel in the workload, and (b) on which hardware resource the variant should run. To enable this performance prediction, we propose lightweight augmented neural networks for arbitrary combinations of kernel, variant, and hardware. A key innovation is using the mathematical complexity of the kernels as a feature to achieve higher accuracy. The models are compact, which reduces training time and allows fast inference at compile time and run time. Using models with fewer than 75 parameters and only 250 training data instances, we obtain a low MAPE of 3%, outperforming traditional feed-forward neural networks on 48 kernel-variant-hardware combinations. We further demonstrate that our variant-selection approach can be used with Halide implementations to obtain up to 1.7x speedup over Halide's auto-scheduler.


