HW-Aware Initialization of DNN Auto-Tuning to Improve Exploration Time and Robustness

05/31/2022
by Dennis Rieber, et al.

The process of optimizing the latency of DNN operators with ML models and hardware-in-the-loop, called auto-tuning, has established itself as a pervasive method for the deployment of neural networks. From a search space of loop optimizations, the candidate providing the best performance has to be selected. The performance of individual configurations is evaluated through hardware measurements. The combinatorial explosion of possible configurations, together with the cost of hardware evaluation, makes an exhaustive exploration of the search space infeasible in practice. Machine learning methods, such as random forests or reinforcement learning, are used to aid in the selection of candidates for hardware evaluation. For general-purpose hardware like x86 and GPGPU architectures, impressive performance gains can be achieved compared to hand-optimized libraries like cuDNN. The method is also useful in the space of hardware accelerators with less widespread adoption, where a high-performance library is not always available. However, hardware accelerators are often less flexible with respect to their programming, which leads to operator configurations that are not executable on the hardware target. This work evaluates how these invalid configurations affect the auto-tuning process and its underlying performance prediction model for the VTA hardware. From these results, a validity-driven initialization method for AutoTVM is developed, requiring only 41.6% of the hardware measurements while improving search robustness.
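For context, the snippet below is a minimal sketch of what such an auto-tuning loop looks like in AutoTVM, modeled on the tutorial-style matmul template. The template name, tile knobs, trial count, and log file are illustrative assumptions, and it targets a local CPU rather than the VTA accelerator; it does not implement the paper's validity-driven initialization.

import tvm
from tvm import te, autotvm

# Illustrative matmul template (assumed name "example/matmul"); the schedule
# knobs below span the configuration space the tuner explores.
@autotvm.template("example/matmul")
def matmul(N, L, M, dtype):
    A = te.placeholder((N, L), name="A", dtype=dtype)
    B = te.placeholder((L, M), name="B", dtype=dtype)
    k = te.reduce_axis((0, L), name="k")
    C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
    s = te.create_schedule(C.op)

    y, x = s[C].op.axis
    (k,) = s[C].op.reduce_axis

    # Tunable loop-optimization knobs: each split factor is one configuration
    # dimension; their cross product is the combinatorial search space.
    cfg = autotvm.get_config()
    cfg.define_split("tile_y", y, num_outputs=2)
    cfg.define_split("tile_x", x, num_outputs=2)
    yo, yi = cfg["tile_y"].apply(s, C, y)
    xo, xi = cfg["tile_x"].apply(s, C, x)
    s[C].reorder(yo, xo, k, yi, xi)
    return s, [A, B, C]

# Build the tuning task for a local CPU target (VTA would need a different
# target and runner setup).
task = autotvm.task.create("example/matmul", args=(512, 512, 512, "float32"), target="llvm")

# Hardware-in-the-loop measurement: every selected candidate is compiled and
# timed on the device; configurations that fail to build or run are reported
# with a non-zero error code in their MeasureResult.
measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(),
    runner=autotvm.LocalRunner(number=5, repeat=1),
)

# Cost-model-guided candidate selection (XGBoost-based tuner).
tuner = autotvm.tuner.XGBTuner(task)
tuner.tune(
    n_trial=100,
    measure_option=measure_option,
    callbacks=[autotvm.callback.log_to_file("matmul_tuning.log")],
)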

Related research

10/25/2021 - Bolt: Bridging the Gap between Auto-tuners and Hardware-native Performance
  Today's auto-tuners (e.g., AutoTVM, Ansor) generate efficient tensor pro...

02/17/2021 - Rethinking Co-design of Neural Architectures and Hardware Accelerators
  Neural architectures and hardware accelerators have been two driving for...

08/30/2020 - Performance portability through machine learning guided kernel selection in SYCL libraries
  Automatically tuning parallel compute kernels allows libraries and frame...

04/10/2021 - Joint Program and Layout Transformations to enable Convolutional Operators on Specialized Hardware based on Constraint Programming
  The success of Deep Artificial Neural Networks (DNNs) in many domains cr...

11/21/2022 - HARL: Hierarchical Adaptive Reinforcement Learning Based Auto Scheduler for Neural Networks
  To efficiently perform inference with neural networks, the underlying te...

03/15/2021 - Autotuning Benchmarking Techniques: A Roofline Model Case Study
  Peak performance metrics published by vendors often do not correspond to...

08/16/2020 - In-situ Workflow Auto-tuning via Combining Performance Models of Component Applications
  In-situ parallel workflows couple multiple component applications, such ...
