Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation

01/23/2020
by   Byung Hoon Ahn, et al.

Achieving faster execution with shorter compilation time can foster further diversity and innovation in neural networks. However, the current paradigm of executing neural networks relies on hand-optimized libraries, traditional compilation heuristics, or, very recently, genetic algorithms and other stochastic methods. These methods suffer from frequent costly hardware measurements, rendering them not only too time-consuming but also suboptimal. As such, we devise a solution that can learn to quickly adapt to a previously unseen design space for code optimization, both accelerating the search and improving the output performance. This solution, dubbed Chameleon, leverages reinforcement learning, whose solution takes fewer steps to converge, and develops an adaptive sampling algorithm that not only focuses the costly samples (real hardware measurements) on representative points but also uses domain-knowledge-inspired logic to improve the samples themselves. Experimentation with real hardware shows that Chameleon provides a 4.45x speedup in optimization time over AutoTVM, while also improving inference time of modern deep networks by 5.6%.
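The adaptive sampling idea described above can be sketched as clustering the candidate configurations and spending hardware measurements only on one representative per cluster. The sketch below uses a simple k-means loop in NumPy; the function name `adaptive_sample`, the use of k-means, and all parameters are illustrative assumptions, not the paper's exact algorithm (which additionally adjusts the samples with domain-knowledge heuristics).

```python
import numpy as np

def adaptive_sample(configs, k=4, iters=20, seed=0):
    """Pick up to k representative configurations via a basic k-means
    clustering, so that costly real-hardware measurements are focused
    on diverse, representative points of the design space.
    Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    X = np.asarray(configs, dtype=float)
    # Initialize centroids from k distinct random configurations.
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each configuration to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update each centroid to the mean of its cluster
        # (keep the old centroid if the cluster is empty).
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    # Measure on hardware only the configuration closest to each centroid.
    reps = [int(np.linalg.norm(X - c, axis=1).argmin()) for c in centroids]
    return sorted(set(reps))
```

With, say, thousands of candidate knob settings flattened into feature vectors, this would reduce the number of on-device timings per search round from the full candidate set to at most `k`.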

