Ansor: Generating High-Performance Tensor Programs for Deep Learning

06/11/2020
by Lianmin Zheng, et al.

High-performance tensor programs are crucial to guarantee efficient execution of deep neural networks. However, obtaining performant tensor programs for different operators on various hardware platforms is notoriously challenging. Currently, deep learning systems rely on vendor-provided kernel libraries or various search strategies to get performant tensor programs. These approaches either require significant engineering effort to develop platform-specific optimization code or fall short of finding high-performance programs due to a restricted search space and ineffective exploration strategies. We present Ansor, a tensor program generation framework for deep learning applications. Compared with existing search strategies, Ansor explores many more optimization combinations by sampling programs from a hierarchical representation of the search space. Ansor then fine-tunes the sampled programs with evolutionary search and a learned cost model to identify the best programs. Ansor can find high-performance programs that are outside the search space of existing state-of-the-art approaches. In addition, Ansor utilizes a task scheduler to simultaneously optimize multiple subgraphs in deep neural networks. We show that Ansor improves the execution performance of deep neural networks relative to the state-of-the-art on the Intel CPU, ARM CPU, and NVIDIA GPU by up to 3.8×, 2.6×, and 1.7×, respectively.
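Ansor's open-source implementation ships as the auto-scheduler in Apache TVM. The following is a minimal sketch of driving it on a single operator, assuming a TVM installation with the auto_scheduler module; the workload shape, log-file name, and trial budget are illustrative, not prescriptive. Note that only the compute is declared: Ansor derives the search space itself, with no manual schedule template.

    import tvm
    from tvm import auto_scheduler, te

    # Declare a matmul workload in TVM's tensor expression language.
    @auto_scheduler.register_workload
    def matmul(N, M, K):
        A = te.placeholder((N, K), name="A")
        B = te.placeholder((K, M), name="B")
        k = te.reduce_axis((0, K), name="k")
        C = te.compute(
            (N, M),
            lambda i, j: te.sum(A[i, k] * B[k, j], axis=k),
            name="C",
        )
        return [A, B, C]

    target = tvm.target.Target("llvm")  # e.g. an Intel CPU
    task = auto_scheduler.SearchTask(
        func=matmul, args=(1024, 1024, 1024), target=target
    )

    # Sample programs from the hierarchical search space, refine them with
    # evolutionary search guided by the learned cost model, and measure
    # candidates on real hardware. A trial budget of 200 is illustrative.
    log_file = "matmul.json"
    tune_option = auto_scheduler.TuningOptions(
        num_measure_trials=200,
        measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
    )
    task.tune(tune_option)

    # Retrieve the best program found and build it into a runnable module.
    sch, args = task.apply_best(log_file)
    func = tvm.build(sch, args, target)

For a whole network, auto_scheduler.extract_tasks can split the model into per-subgraph tasks, and auto_scheduler.TaskScheduler tunes them jointly, allocating the measurement budget across subgraphs in the manner the abstract describes.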
