Learning to Make Compiler Optimizations More Effective

02/24/2021
by Rahim Mammadli et al.

Because loops execute their body many times, compiler developers place much emphasis on their optimization. Nevertheless, in view of highly diverse source code and hardware, compilers still struggle to produce optimal target code. The sheer number of possible loop optimizations, including their combinations, exacerbates the problem further. Today's compilers use hard-coded heuristics to decide when, whether, and which of a limited set of optimizations to apply. Often, this leads to highly unstable behavior, making the success of compiler optimizations dependent on the precise way a loop has been written. This paper presents LoopLearner, which addresses the problem of compiler instability by predicting which way of writing a loop will lead to efficient compiled code. To this end, we train a neural network to find semantically invariant source-level transformations for loops that help the compiler generate more efficient code. Our model learns to extract useful features from the raw source code and predicts the speedup that a given transformation is likely to yield. We evaluate LoopLearner with 1,895 loops from various performance-relevant benchmarks. Applying the transformations that our model deems most favorable prior to compilation yields an average speedup of 1.14x. When trying the top-3 suggested transformations, the average speedup even increases to 1.29x. Comparing the approach with an exhaustive search through all available code transformations shows that LoopLearner helps to identify the most beneficial transformations in several orders of magnitude less time.
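To make the idea of a semantically invariant source-level transformation concrete, the sketch below shows loop interchange on a small C loop nest. It is a classic semantics-preserving rewrite of the kind the abstract refers to; the function names, array size, and the choice of loop interchange are illustrative assumptions, not details taken from the paper.

    #define N 1024

    /* Original loop: the inner loop strides across rows of a row-major
     * C array, so consecutive iterations touch non-contiguous memory. */
    void scale_original(double a[N][N], double s) {
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                a[i][j] *= s;
    }

    /* Interchanged loop: computes exactly the same result, but the inner
     * loop now walks contiguous memory, which typically makes it easier
     * for the compiler to vectorize and improves cache behavior. */
    void scale_interchanged(double a[N][N], double s) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                a[i][j] *= s;
    }

Both functions compute the same result, yet a compiler may emit noticeably different machine code for them. LoopLearner's premise is that a learned model can predict which such rewrite is worth applying before compilation, rather than exhaustively trying all of them.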
