HAQ: Hardware-Aware Automated Quantization

11/21/2018
by Kuan Wang, et al.

Model quantization is a widely used technique to compress and accelerate deep neural network (DNN) inference. Emerging DNN hardware accelerators have begun to support flexible bitwidths (1-8 bits) to further improve computational efficiency, which raises a significant challenge: finding the optimal bitwidth for each layer requires domain experts to explore a vast design space, trading off accuracy, latency, power, and model size, a process that is both time-consuming and sub-optimal. Conventional quantization algorithms ignore differences among hardware architectures and quantize all layers in a uniform way. In this paper, we introduce the Hardware-Aware Automated Quantization (HAQ) framework, which leverages reinforcement learning to automatically determine the quantization policy, and we take the hardware accelerator's feedback into the design loop. Rather than relying on proxy signals such as FLOPs and model size, we employ a hardware simulator to generate direct feedback signals for the RL agent. Compared with conventional methods, our framework is fully automated and can specialize the quantization policy for different neural network architectures and hardware architectures. Our framework reduces latency by 1.4-1.95x and energy consumption by 1.9x with negligible loss of accuracy compared with fixed-bitwidth (8-bit) quantization. Our framework reveals that the optimal policies on different hardware architectures (i.e., edge and cloud architectures) under different resource constraints (i.e., latency, power, and model size) are drastically different. We interpret the implications of the different quantization policies, which offer insights for both neural network architecture design and hardware architecture design.
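To make the search loop concrete, here is a minimal sketch in the spirit of HAQ, not the authors' implementation: a random-proposal policy stands in for the RL agent (DDPG in the paper), a toy linear cost model (`simulated_latency`) stands in for the hardware simulator, and `eval_accuracy` and `layer_macs` are hypothetical placeholders for the quantized network's validation accuracy and per-layer compute.

```python
# Sketch of a HAQ-style mixed-precision search loop (illustrative only).
import random

NUM_LAYERS = 8
BITS = range(1, 9)            # flexible 1-8 bit hardware
LATENCY_BUDGET_MS = 10.0

# Hypothetical per-layer compute cost (MACs, arbitrary units).
layer_macs = [4.0, 8.0, 8.0, 16.0, 16.0, 8.0, 4.0, 2.0]

def simulated_latency(policy):
    """Toy stand-in for the hardware simulator: latency scales with
    MACs times bitwidth. A real simulator models the actual datapath,
    which is why direct feedback beats FLOPs/model-size proxies."""
    return sum(m * b / 16.0 for m, b in zip(layer_macs, policy))

def eval_accuracy(policy):
    """Hypothetical proxy: accuracy degrades as bitwidths shrink.
    In HAQ this is the quantized network's measured accuracy."""
    return 1.0 - sum((8 - b) ** 2 for b in policy) / (49.0 * len(policy))

def reward(policy):
    """Direct hardware feedback enters the reward: policies over the
    latency budget are rejected outright."""
    if simulated_latency(policy) > LATENCY_BUDGET_MS:
        return float("-inf")
    return eval_accuracy(policy)

best_policy, best_reward = None, float("-inf")
for episode in range(2000):
    # Random proposals stand in for the RL agent's per-layer actions.
    policy = [random.choice(list(BITS)) for _ in range(NUM_LAYERS)]
    r = reward(policy)
    if r > best_reward:
        best_policy, best_reward = policy, r

print("best bitwidths:", best_policy)
print("latency (ms):", round(simulated_latency(best_policy), 2))
print("reward:", round(best_reward, 4))
```

Even this toy version exhibits the paper's key point: the best policy is shaped by the cost model, so swapping in a different hardware simulator (edge vs. cloud) yields a different bitwidth assignment.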


Related research

Hardware-Centric AutoML for Mixed-Precision Quantization (08/11/2020)
Model quantization is a widely used technique to compress and accelerate...

Generative Design of Hardware-aware DNNs (06/06/2020)
To efficiently run DNNs on the edge/cloud, many new DNN inference accele...

ADC: Automated Deep Compression and Acceleration with Reinforcement Learning (02/10/2018)
Model compression is an effective technique facilitating the deployment ...

HCM: Hardware-Aware Complexity Metric for Neural Network Architectures (04/19/2020)
Convolutional Neural Networks (CNNs) have become common in many fields i...

Design Automation for Efficient Deep Learning Computing (04/24/2019)
Efficient deep learning computing requires algorithm and hardware co-des...

ATHEENA: A Toolflow for Hardware Early-Exit Network Automation (04/17/2023)
The continued need for improvements in accuracy, throughput, and efficie...

Algorithm and Hardware Co-design for Reconfigurable CNN Accelerator (11/24/2021)
Recent advances in algorithm-hardware co-design for deep neural networks...
