Progress Report: A Deep Learning Guided Exploration of Affine Unimodular Loop Transformations

06/08/2022
by Massinissa Merouani, et al.

In this paper, we present work in progress on a deep-learning-based approach for automatic code optimization in polyhedral compilers. The proposed technique explores combinations of affine and non-affine loop transformations to find the sequence of transformations that minimizes the execution time of a given program. This exploration is guided by a deep-learning-based cost model that evaluates the speedup each sequence of transformations would yield. Preliminary results show that the proposed technique achieves a 2.35x geometric mean speedup over a state-of-the-art polyhedral compiler (Pluto).

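To make the idea concrete, the cost-model-guided exploration described in the abstract can be sketched as a beam search over candidate transformation sequences, where a learned model predicts the speedup of each candidate and the highest-scoring sequences are expanded further. This is only an illustrative sketch, not the authors' implementation; all names here (`candidates`, `predict_speedup`, `beam_width`, etc.) are hypothetical.

```python
# Hypothetical sketch of cost-model-guided exploration of loop
# transformation sequences; names and structure are illustrative only.
from typing import Callable, List, Sequence, Tuple

Transformation = str  # e.g. "interchange(i,j)", "skew(i,j,1,1)", "tile(i,32)"


def explore(program,
            candidates: Callable[[object, Sequence[Transformation]], List[Transformation]],
            predict_speedup: Callable[[object, Sequence[Transformation]], float],
            beam_width: int = 4,
            max_depth: int = 5) -> Tuple[List[Transformation], float]:
    """Beam search over transformation sequences, ranked by a learned
    cost model that predicts the speedup of each candidate sequence."""
    beam = [([], 1.0)]  # (transformation sequence, predicted speedup)
    best = beam[0]
    for _ in range(max_depth):
        expanded = []
        for seq, _ in beam:
            # Enumerate legal transformations applicable after `seq`.
            for t in candidates(program, seq):
                new_seq = seq + [t]
                expanded.append((new_seq, predict_speedup(program, new_seq)))
        if not expanded:
            break
        # Keep only the most promising sequences according to the cost model.
        expanded.sort(key=lambda x: x[1], reverse=True)
        beam = expanded[:beam_width]
        if beam[0][1] > best[1]:
            best = beam[0]
    return best
```

In such a scheme, only the best sequence returned by the search would be compiled and measured, so the quality of the learned speedup predictor directly determines how good the chosen transformation sequence is.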