Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation

04/20/2020
by Hao Wu, et al.

Quantization techniques can reduce the size of Deep Neural Networks and improve inference latency and throughput by taking advantage of high-throughput integer instructions. In this paper we review the mathematical aspects of quantization parameters and evaluate their choices on a wide range of neural network models for different application domains, including vision, speech, and language. We focus on quantization techniques that are amenable to acceleration by processors with high-throughput integer math pipelines. We also present a workflow for 8-bit quantization that is able to maintain accuracy within 1% of the floating-point baseline on all networks studied, including models that are more difficult to quantize, such as MobileNets and BERT-large.
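To make the core operation concrete, below is a minimal sketch of symmetric per-tensor int8 quantization with max calibration, one of the quantization-parameter choices the paper evaluates. The function names and the NumPy-based implementation are illustrative assumptions, not the authors' code; it shows how a scale factor maps floating-point values into the signed 8-bit range and back.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor int8 quantization (illustrative sketch).

    The scale is chosen so the largest-magnitude value in x maps to 127
    (max calibration); values are rounded to the nearest integer and
    clamped to the representable int8 range.
    """
    # Guard against an all-zero tensor to avoid division by zero.
    scale = max(np.max(np.abs(x)) / 127.0, 1e-12)
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float tensor from the int8 representation."""
    return q.astype(np.float32) * scale

# Usage: quantize a random tensor and inspect the round-trip error.
x = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
print("max abs quantization error:", np.max(np.abs(x - x_hat)))
```

A symmetric scheme like this omits a zero-point offset, which keeps integer matrix-multiply pipelines simple; the trade-offs among such parameter choices are what the paper studies empirically.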
