OpEvo: An Evolutionary Method for Tensor Operator Optimization

06/10/2020
by Xiaotian Gao, et al.

The training and inference efficiency of deep neural networks relies heavily on the performance of tensor operators on hardware platforms. Manually optimized tensor operators are difficult to extend to new operators or new hardware platforms. Therefore, automatically optimizing the device code configurations of tensor operators is becoming increasingly attractive. However, current methods for tensor operator optimization usually suffer from poor sample efficiency due to the combinatorial nature of the search space. In this work, we propose a novel evolutionary method, OpEvo, which efficiently explores the search spaces of tensor operators by introducing a topology-aware mutation operation based on a q-random walk distribution, thereby leveraging the topological structure of the search space. Our comprehensive experimental results show that OpEvo finds the best configuration with the fewest trials and the lowest variance compared with state-of-the-art methods. All code for this work is available online.
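To make the core idea concrete, below is a minimal Python sketch of what a topology-aware mutation driven by a q-random walk might look like. It assumes the search space is given as a graph via a user-supplied `neighbors` function, and it draws the walk length from a truncated geometric distribution parameterized by q; the function names, the neighbor encoding, and the exact walk-length distribution are illustrative assumptions based on the abstract's description, not the authors' released implementation.

```python
import random


def topology_aware_mutate(config, neighbors, q=0.5, rng=random):
    """Mutate a configuration via a q-random walk over the search-space graph.

    config:    the current configuration (any hashable search point).
    neighbors: callable returning the configurations adjacent to a point,
               e.g. tiling factors reachable by one legal step.
    q:         controls the walk-length distribution P(k) ~ q**k, so small
               local moves are most likely but longer jumps stay possible.

    All names here are hypothetical; this is a sketch of the general
    technique, not the paper's exact algorithm.
    """
    # Sample walk length k >= 1 with P(k) proportional to q**k.
    k = 1
    while rng.random() < q:
        k += 1

    # Take k steps along randomly chosen edges of the search-space graph.
    current = config
    for _ in range(k):
        nbrs = neighbors(current)
        if not nbrs:  # dead end: stop walking early
            break
        current = rng.choice(nbrs)
    return current


# Example usage: mutate a power-of-two tiling factor along one axis,
# where neighboring configurations double or halve the factor.
def tiling_neighbors(c):
    return [x for x in (c // 2, c * 2) if 1 <= x <= 64]


print(topology_aware_mutate(8, tiling_neighbors, q=0.5))
```

The point of sampling the walk length rather than always taking one step is that mutations remain biased toward nearby configurations while retaining a nonzero chance of escaping local optima, which is consistent with the sample-efficiency claim in the abstract.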
