PolyDL: Polyhedral Optimizations for Creation of High Performance DL primitives

06/02/2020
by Sanket Tavarageri, et al.

Deep Neural Networks (DNNs) have revolutionized many aspects of our lives. The use of DNNs is becoming ubiquitous, including in software for image recognition, speech recognition, speech synthesis, and language translation, to name a few. The training of DNN architectures, however, is computationally expensive. Once the model is created, its use in the intended application, the inference task, is computationally heavy as well, and inference needs to be fast for real-time use. For obtaining high performance today, the norm is code for Deep Learning (DL) primitives optimized for specific architectures by expert programmers and exposed via libraries. However, given the constant emergence of new DNN architectures, creating hand-optimized code is expensive, slow, and not scalable. To address this performance-productivity challenge, in this paper we present compiler algorithms to automatically generate high-performance implementations of DL primitives that closely match the performance of hand-optimized libraries. We develop novel data reuse analysis algorithms using the polyhedral model to derive efficient execution schedules automatically. In addition, because most DL primitives use some variant of matrix multiplication at their core, we develop a flexible framework in which library implementations of matrix multiplication can be substituted for a subset of the loops. We show that such a hybrid compiler-plus-minimal-library approach results in state-of-the-art performance. We also develop compiler algorithms to perform operator fusion, which reduces data movement through the memory hierarchy of the computer system.
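As a concrete illustration of the hybrid scheme the abstract describes, consider a direct convolution. The following plain-C sketch (all function and variable names are hypothetical, not taken from the paper's code) keeps the outer loops under the compiler's control, where polyhedral scheduling can reorder and tile them for data reuse, while the innermost loops over output channels, output width, and input channels are delegated to a small GEMM microkernel of the kind a vendor library would supply:

```c
#include <stddef.h>

/* Stand-in for a library-provided microkernel: a small GEMM
 * C[M][N] += A[M][K] * B[K][N], taking explicit row/column strides
 * so that tensor slices can be passed without packing. */
static void gemm_microkernel(int M, int N, int K,
                             const float *A, int a_rs, int a_cs,
                             const float *B, int b_rs, int b_cs,
                             float *Cm, int c_rs, int c_cs)
{
    for (int m = 0; m < M; m++)
        for (int n = 0; n < N; n++)
            for (int k = 0; k < K; k++)
                Cm[m * c_rs + n * c_cs] +=
                    A[m * a_rs + k * a_cs] * B[k * b_rs + n * b_cs];
}

/* "Valid" convolution O[k][p][q] += W[k][c][r][s] * I[c][p+r][q+s],
 * with H = P + R - 1 and W_ = Q + S - 1. For each fixed (p, r, s),
 * the remaining (k, q, c) computation is exactly a GEMM with
 * M = Kofm, N = Q, K = C, so it becomes one microkernel call. */
void conv_hybrid(int Kofm, int C, int P, int Q, int R, int S,
                 const float *I,  /* C x H x W_ input         */
                 const float *Wt, /* Kofm x C x R x S weights */
                 float *O)        /* Kofm x P x Q output      */
{
    const int H = P + R - 1, W_ = Q + S - 1;
    for (int p = 0; p < P; p++)         /* loops the polyhedral   */
        for (int r = 0; r < R; r++)     /* scheduler is free to   */
            for (int s = 0; s < S; s++) /* tile and interchange   */
                gemm_microkernel(
                    Kofm, Q, C,
                    Wt + (size_t)r * S + s,       C * R * S, R * S,
                    I + (size_t)(p + r) * W_ + s, H * W_,    1,
                    O + (size_t)p * Q,            P * Q,     1);
}
```

This is the division of labor the abstract points at: the data reuse analysis chooses how to schedule the outer loops, while the performance of the innermost computation comes from the plugged-in microkernel.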
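The operator fusion mentioned at the end of the abstract can be pictured with a minimal example (again a sketch of the general technique, not the paper's actual algorithm): fusing a bias-add with a ReLU avoids materializing the intermediate tensor in memory.

```c
/* Unfused: two passes over memory; the intermediate t is written
 * out in full and then read back. */
void bias_relu_unfused(int n, const float *x, const float *b,
                       float *t, float *y)
{
    for (int i = 0; i < n; i++) t[i] = x[i] + b[i];
    for (int i = 0; i < n; i++) y[i] = t[i] > 0.0f ? t[i] : 0.0f;
}

/* Fused: one pass; the intermediate value lives only in a register,
 * roughly halving traffic through the memory hierarchy for large n. */
void bias_relu_fused(int n, const float *x, const float *b, float *y)
{
    for (int i = 0; i < n; i++) {
        float v = x[i] + b[i];
        y[i] = v > 0.0f ? v : 0.0f;
    }
}
```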


