AUSN: Approximately Uniform Quantization by Adaptively Superimposing Non-uniform Distribution for Deep Neural Networks

07/08/2020
by   Liu Fangxin, et al.

Quantization is essential for efficient DNN inference in edge applications. Existing uniform and non-uniform quantization methods, however, exhibit an inherent conflict between the representable range and the representable resolution, resulting in either an underutilized bit-width or a significant accuracy drop. Moreover, these methods suffer from three drawbacks: i) the absence of quantitative metrics for in-depth analysis of the sources of quantization error; ii) a limited focus on CNN-based image classification tasks; iii) unawareness of the real hardware and energy savings obtained by lowering the bit-width. In this paper, we first define two quantitative metrics, the clipping error and the rounding error, to analyze the quantization error distribution. We observe that the clipping and rounding errors vary significantly across layers, models, and tasks. Consequently, we propose a novel quantization method for both weights and activations. The key idea is to Approximate the Uniform quantization by Adaptively Superposing multiple Non-uniform quantized values, hence the name AUSN. AUSN consists of a decoder-free coding scheme that exploits the bit-width to its fullest, a superposition quantization algorithm that adapts the coding scheme to different DNN layers, models, and tasks without extra hardware design effort, and a rounding scheme that eliminates the well-known bit-width overflow and re-quantization issues. Theoretical analysis (see Appendix A) and accuracy evaluations on various DNN models and tasks show the effectiveness and generality of AUSN. Synthesis results on FPGA (see Appendix B) show a 2× reduction in energy consumption and a 2× to 4× reduction in hardware resources.
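To make the two ideas in the abstract concrete, below is a minimal, hedged sketch (not the authors' released code): it greedily superposes a few power-of-two terms to approximate each clipped value, in the spirit of superposition quantization, and then splits the total quantization error into a clipping part (values outside the clipping range) and a rounding part (values inside it). The function names, the greedy term selection, the parameters n_terms and clip_val, and the mean-squared-error metric are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hedged sketch, assuming a greedy power-of-two superposition; the real AUSN
# coding and rounding schemes differ in detail.

def superpose_pot_quantize(x, clip_val, n_terms=3, bit_range=8):
    """Approximate each clipped value as a signed sum of up to n_terms powers of two."""
    x_clipped = np.clip(x, -clip_val, clip_val)
    residual = x_clipped.copy()
    q = np.zeros_like(x, dtype=np.float64)
    for _ in range(n_terms):
        # Greedily pick the power of two nearest to the remaining residual.
        mag = np.abs(residual)
        exp = np.clip(np.round(np.log2(np.maximum(mag, 1e-12))), -bit_range, 0)
        term = np.sign(residual) * np.where(mag > 0, 2.0 ** exp, 0.0)
        q += term
        residual -= term
    return q

def quantization_errors(x, q, clip_val):
    """Split the total error into a clipping part (outside the range)
    and a rounding part (inside the range), both as mean squared error."""
    outside = np.abs(x) > clip_val
    clip_err = float(np.mean((x[outside] - q[outside]) ** 2)) if outside.any() else 0.0
    round_err = float(np.mean((x[~outside] - q[~outside]) ** 2)) if (~outside).any() else 0.0
    return clip_err, round_err

# Usage example on synthetic "weights".
w = np.random.randn(10000).astype(np.float32)
clip_val = 1.0
w_q = superpose_pot_quantize(w, clip_val, n_terms=2)
print(quantization_errors(w, w_q, clip_val))
```

In this sketch, increasing n_terms buys resolution inside the clipping range at the cost of more bits per value, while enlarging clip_val trades rounding error for clipping error, which mirrors the range-versus-resolution tension the abstract describes.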
