Transfer-Tuning: Reusing Auto-Schedules for Efficient Tensor Program Code Generation

01/14/2022
by   Perry Gibson, et al.

Auto-scheduling for tensor programs is a process in which a search algorithm automatically explores candidate schedules (program transformations) for a given program on a target hardware platform to improve its performance. However, this can be a very time-consuming process, depending on the complexity of the tensor program and the capacity of the target device, with many thousands of program variants often being explored. To address this, in this paper we introduce transfer-tuning, a novel approach to identifying and reusing auto-schedules between tensor programs. We demonstrate the concept using Deep Neural Networks (DNNs), taking sets of auto-schedules from pre-tuned DNNs and using them to reduce the inference time of a new DNN. We compare transfer-tuning against the state-of-the-art Ansor auto-scheduler, defining the maximum possible speedup for a given DNN model as what Ansor achieves using its recommended full tuning time. On a server-class CPU and across 11 widely used DNN models, we observe that transfer-tuning achieves up to 88.41% (49.13% on average) of this maximum speedup, while Ansor requires 6.5× more search time on average to match it. We also evaluate transfer-tuning on a constrained edge CPU and observe that the differences in search time are exacerbated, with Ansor requiring 10.8× more time on average to match transfer-tuning's speedup, further demonstrating its value. Our code is available at https://www.github.com/gicLAB/transfer-tuning.
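
To make the reuse idea concrete, the following is a minimal sketch using TVM's auto_scheduler API (on which Ansor is built), not the paper's implementation. It assumes a hypothetical log file, pretuned_model.json, holding Ansor records from a previously tuned DNN, and applies the best recorded schedule for each matching workload when compiling a new model; the paper's actual contribution, deciding which auto-schedules transfer profitably between non-identical tensor programs, goes beyond this exact-match lookup.

```python
import tvm
import tvm.relay.testing  # registers the relay.testing workloads
from tvm import auto_scheduler, relay

# Stand-in for the "new" DNN to compile (any Relay model would do here).
mod, params = relay.testing.resnet.get_workload(num_layers=18, batch_size=1)
target = tvm.target.Target("llvm -mcpu=core-avx2")  # server-class CPU

# Hypothetical Ansor tuning log produced for a different, pre-tuned DNN.
log_file = "pretuned_model.json"

# Instead of launching a fresh Ansor search, look up the best recorded
# schedule for each workload while building the new model.
with auto_scheduler.ApplyHistoryBest(log_file):
    with tvm.transform.PassContext(
        opt_level=3, config={"relay.backend.use_auto_scheduler": True}
    ):
        lib = relay.build(mod, target=target, params=params)
```

In this sketch, only workloads whose keys match a log entry exactly pick up a tuned schedule, with the rest falling back to TVM's default schedules; transfer-tuning's matching of auto-schedules across distinct tensor programs is what makes the reuse broadly applicable.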
