Additive Powers-of-Two Quantization: A Non-uniform Discretization for Neural Networks

09/28/2019
by Yuhang Li, et al.

We propose Additive Powers-of-Two (APoT) quantization, an efficient non-uniform quantization scheme that attends to the bell-shaped and long-tailed distribution of weights in neural networks. By constraining all quantization levels to be sums of several powers-of-two terms, APoT quantization enjoys high computational efficiency and a good match with the distribution of weights. A simple reparameterization of the clipping function is applied to generate a better-defined gradient for learning the optimal clipping threshold. Moreover, weight normalization is presented to refine the distribution of weights so that it is more stable and consistent. Experimental results show that our proposed method outperforms state-of-the-art methods and is even competitive with full-precision models, demonstrating the effectiveness of the proposed APoT quantization. For example, our 3-bit quantized ResNet-34 on ImageNet drops only 0.3% in accuracy, while the computation of our model is approximately 2x less than that of uniformly quantized neural networks.
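To make the construction concrete, below is a minimal NumPy sketch of the idea behind APoT quantization: every positive quantization level is a sum of power-of-two terms with interleaved exponents, the grid is scaled by the clipping threshold, and each weight is snapped to the nearest level. The helper names (build_apot_levels, quantize_apot) and the exact exponent bookkeeping are illustrative assumptions, not the authors' released implementation.

```python
import itertools
import numpy as np

def build_apot_levels(bits=4, k=2, alpha=1.0):
    """Enumerate a symmetric APoT level grid (illustrative sketch).

    Each of the n = bits // k additive terms is either zero or a power of two
    drawn from its own set of interleaved exponents; the resulting sums are
    rescaled so that the largest level equals the clipping threshold alpha.
    """
    n = bits // k
    term_sets = []
    for i in range(n):
        term_sets.append([0.0] + [2.0 ** -(i + j * n) for j in range(2 ** k - 1)])
    positive = sorted({sum(c) for c in itertools.product(*term_sets)})
    levels = alpha * np.array(positive) / max(positive)
    # Mirror the positive levels to handle signed weights (single zero level).
    return np.concatenate([-levels[:0:-1], levels])

def quantize_apot(weights, levels):
    """Clip weights to the representable range and snap to the nearest level."""
    w = np.clip(weights, levels.min(), levels.max())
    idx = np.abs(w[..., None] - levels).argmin(axis=-1)
    return levels[idx]

# Example: 4-bit APoT grid with 2-bit additive terms on a toy weight tensor.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.05, size=(8, 8))
levels = build_apot_levels(bits=4, k=2, alpha=float(np.abs(w).max()))
w_q = quantize_apot(w, levels)
print(len(levels), float(np.abs(w - w_q).mean()))
```

Because every term in a level is a power of two, multiplying an activation by a quantized weight can be carried out with shifts and adds rather than full multiplications, which is the source of the computational saving the abstract refers to.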
