Automated Backend-Aware Post-Training Quantization

03/27/2021
by Ziheng Jiang, et al.

Quantization is a key technique for reducing the resource requirements and improving the performance of neural network deployment. However, different hardware backends such as x86 CPUs, NVIDIA GPUs, ARM CPUs, and accelerators may demand different implementations of quantized networks. This diversity calls for specialized post-training quantization pipelines to be built for each hardware target, an engineering effort that is often too large for developers to keep up with. We tackle this problem with an automated post-training quantization framework called HAGO. HAGO provides a set of general quantization graph transformations based on a user-defined hardware specification and implements a search mechanism to find the optimal quantization strategy for any model while satisfying hardware constraints. We observe that HAGO achieves speedups of 2.09x, 1.97x, and 2.48x over full precision on an Intel Xeon Cascade Lake CPU, an NVIDIA Tesla T4 GPU, and an ARM Cortex-A CPU on a Raspberry Pi 4, respectively, while maintaining the highest reported post-training quantization accuracy in each case.
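To make the idea concrete, here is a minimal Python sketch of a backend-aware quantization-strategy search: per-layer quantization schemes are enumerated subject to a user-defined hardware specification, and the assignment scoring best on calibration data is kept. All names here (`HardwareSpec`, `LayerScheme`, `search_strategy`, the toy scorer) are hypothetical illustrations of the abstract's description, not HAGO's actual API; a real system would prune the exponential search space rather than enumerate it.

```python
# Hypothetical sketch of backend-aware post-training quantization search.
# None of these classes or functions come from HAGO itself.
import itertools
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass(frozen=True)
class HardwareSpec:
    """Constraints a hardware backend imposes on quantized operators."""
    name: str
    supported_dtypes: Tuple[str, ...]      # e.g. ("int8", "fp32")
    requires_symmetric_int8: bool = False  # some backends reject asymmetric int8

@dataclass(frozen=True)
class LayerScheme:
    """Quantization choice for one layer."""
    dtype: str
    symmetric: bool

def candidate_schemes(spec: HardwareSpec) -> List[LayerScheme]:
    """Enumerate per-layer choices the backend can actually execute."""
    schemes = []
    for dtype in spec.supported_dtypes:
        for symmetric in (True, False):
            # Prune choices that violate the hardware constraint.
            if spec.requires_symmetric_int8 and dtype == "int8" and not symmetric:
                continue
            schemes.append(LayerScheme(dtype, symmetric))
    return schemes

def search_strategy(
    layers: List[str],
    spec: HardwareSpec,
    evaluate: Callable[[Dict[str, LayerScheme]], float],
) -> Tuple[Dict[str, LayerScheme], float]:
    """Exhaustively search per-layer schemes, keeping the best-scoring plan.

    `evaluate` scores a full layer->scheme assignment, e.g. accuracy on a
    calibration set after simulating quantization.
    """
    choices = candidate_schemes(spec)
    best_score, best_plan = float("-inf"), None
    for assignment in itertools.product(choices, repeat=len(layers)):
        plan = dict(zip(layers, assignment))
        score = evaluate(plan)
        if score > best_score:
            best_score, best_plan = score, plan
    return best_plan, best_score

# Toy usage: prefer int8 layers on a backend requiring symmetric int8.
# A real scorer would measure calibration accuracy instead of counting layers.
spec = HardwareSpec("x86-cascade-lake", ("int8", "fp32"), requires_symmetric_int8=True)
plan, score = search_strategy(
    ["conv1", "conv2", "dense"],
    spec,
    lambda p: sum(s.dtype == "int8" for s in p.values()),
)
print(plan, score)
```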

