FamilySeer: Towards Optimized Tensor Codes by Exploiting Computation Subgraph Similarity

01/01/2022
by   Shanjun Zhang, et al.

Deploying various deep learning (DL) models efficiently has boosted research on DL compilers. The difficulty of generating optimized tensor codes drives DL compilers toward auto-tuning approaches, and growing demands call for better auto-tuning efficiency and quality. Current DL compilers partition the input DL model into several subgraphs and leverage auto-tuning to find optimal tensor codes for these subgraphs. However, existing auto-tuning approaches usually treat subgraphs as individual units and overlook the similarities across them, and thus fail to find better tensor codes under limited time budgets. We propose FamilySeer, an auto-tuning framework for DL compilers that can generate better tensor codes even with limited time budgets. FamilySeer exploits the similarities and differences among subgraphs to organize them into subgraph families, where tuning one subgraph also improves the other subgraphs within the same family. The cost model of each family is trained on more purified samples generated within the family and thus becomes more accurate, so costly measurements on real hardware can be replaced with lightweight estimation through the cost model. Our experiments show that FamilySeer can generate tensor codes of the same performance far more efficiently than state-of-the-art auto-tuning frameworks.
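As a rough illustration of the idea described in the abstract, the following minimal sketch groups subgraphs into families by a similarity key and lets a shared, per-family cost model rank tuning candidates so that only the most promising ones are measured on real hardware, with every measurement feeding back into the family's model. All names here (Subgraph, family_key, FamilyCostModel, measure_on_hardware, tune_families) are hypothetical stand-ins for illustration only, not the paper's actual implementation or any DL compiler's API.

# Minimal, self-contained sketch of family-based auto-tuning (hypothetical names).
import random
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Subgraph:
    name: str
    core_op: str  # dominant operator, e.g. "conv2d" or "dense"
    # candidate schedules, abstracted here as scalar features
    candidates: list = field(default_factory=lambda: [random.random() for _ in range(64)])

def family_key(sg: Subgraph) -> str:
    """Group subgraphs sharing the same dominant operator into one family."""
    return sg.core_op

class FamilyCostModel:
    """Toy per-family cost model: nearest-neighbour estimate over the family's samples."""
    def __init__(self):
        self.samples = []  # (candidate_feature, measured_latency) pairs

    def update(self, feature, latency):
        self.samples.append((feature, latency))

    def predict(self, feature):
        if not self.samples:
            return float("inf")  # untrained: forces a real measurement first
        nearest = min(self.samples, key=lambda s: abs(s[0] - feature))
        return nearest[1]

def measure_on_hardware(feature):
    """Stand-in for a costly real-hardware measurement."""
    return feature + random.gauss(0, 0.05)

def tune_families(subgraphs, rounds=4, measure_budget=4):
    families = defaultdict(list)
    for sg in subgraphs:
        families[family_key(sg)].append(sg)

    best = {}
    for key, members in families.items():
        model = FamilyCostModel()  # one cost model shared by the whole family
        for _ in range(rounds):
            for sg in members:
                # rank candidates cheaply with the shared family cost model ...
                ranked = sorted(sg.candidates, key=model.predict)
                # ... and spend the hardware budget only on the most promising ones
                for cand in ranked[:measure_budget]:
                    latency = measure_on_hardware(cand)
                    model.update(cand, latency)  # every member improves the family model
                    if latency < best.get(sg.name, float("inf")):
                        best[sg.name] = latency
    return best

if __name__ == "__main__":
    graph = [Subgraph("conv_a", "conv2d"), Subgraph("conv_b", "conv2d"), Subgraph("fc", "dense")]
    print(tune_families(graph))

Because all members of a family contribute measurements to the same cost model, each subgraph's tuning benefits from samples produced by its siblings, which is the intuition behind replacing many hardware measurements with cost-model estimation under a limited time budget.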

