TIRAMISU: A Polyhedral Compiler for Dense and Sparse Deep Learning

05/07/2020
by   Riyadh Baghdadi, et al.

In this paper, we demonstrate a compiler that can optimize sparse and recurrent neural networks, both of which are currently outside the scope of existing neural network compilers (by sparse neural networks we mean networks that can be accelerated using sparse tensor algebra techniques). Our demonstration includes a mapping of sparse and recurrent neural networks to the polyhedral model, along with an implementation of our approach in TIRAMISU, our state-of-the-art polyhedral compiler. We evaluate our approach on a set of deep learning benchmarks and compare our results against hand-optimized industrial libraries. Our results show that our approach at least matches Intel MKL-DNN and in some cases outperforms it by 5x (on multicore CPUs).
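To make the "sparse tensor algebra" framing concrete, the sketch below shows a CSR sparse matrix-vector product, the canonical kernel that such techniques accelerate. This is only an illustration under our own naming (spmv_csr, row_ptr, col_idx), not TIRAMISU's API or the paper's actual mapping; it highlights the data-dependent loop bounds that put these kernels outside the reach of classical polyhedral compilers.

```cpp
#include <vector>
#include <cstddef>

// y = A * x, where A is stored in CSR (Compressed Sparse Row) format:
//   row_ptr[i] .. row_ptr[i+1] index the nonzeros of row i,
//   col_idx[k] is the column of the k-th nonzero, values[k] its value.
// Illustrative kernel only; names and layout are ours, not TIRAMISU's.
void spmv_csr(const std::vector<std::size_t>& row_ptr,
              const std::vector<std::size_t>& col_idx,
              const std::vector<float>& values,
              const std::vector<float>& x,
              std::vector<float>& y) {
    const std::size_t n_rows = row_ptr.size() - 1;
    for (std::size_t i = 0; i < n_rows; ++i) {
        float sum = 0.0f;
        // The inner loop bounds depend on the data (row_ptr), which is what
        // makes such kernels non-affine and hard for classical polyhedral tools.
        for (std::size_t k = row_ptr[i]; k < row_ptr[i + 1]; ++k) {
            sum += values[k] * x[col_idx[k]];
        }
        y[i] = sum;
    }
}
```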


