Nonuniform-to-Uniform Quantization: Towards Accurate Quantization via Generalized Straight-Through Estimation

11/29/2021
by Zechun Liu, et al.

The nonuniform quantization strategy for compressing neural networks usually achieves better performance than its uniform counterpart due to its superior representational capacity. However, many nonuniform quantization methods overlook the complicated projection process required to implement nonuniformly quantized weights/activations, which incurs non-negligible time and space overhead in hardware deployment. In this study, we propose Nonuniform-to-Uniform Quantization (N2UQ), a method that maintains the strong representation ability of nonuniform methods while being as hardware-friendly and efficient as uniform quantization at inference time. We achieve this by learning flexible, non-equidistant input thresholds that better fit the underlying distribution, while quantizing these real-valued inputs into equidistant output levels. To train the quantized network with learnable input thresholds, we introduce a generalized straight-through estimator (G-STE) for the otherwise intractable backward derivative calculation w.r.t. the threshold parameters. Additionally, we apply entropy-preserving regularization to further reduce information loss in weight quantization. Even under the adverse constraint of imposing uniformly quantized weights and activations, N2UQ outperforms state-of-the-art nonuniform quantization methods by 0.7~1.8% on ImageNet, demonstrating the contribution of the N2UQ design. Code will be made publicly available.
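To make the mechanism concrete, below is a minimal PyTorch sketch of the core idea as stated in the abstract: learnable, non-equidistant input thresholds map real-valued activations onto equidistant integer output levels, and a differentiable surrogate supplies gradients to both the input and the thresholds in the backward pass. All names here (`NonuniformToUniformQuant`, `tau`) are illustrative, and the sigmoid surrogate is a generic stand-in for the paper's G-STE rather than the authors' actual formulation.

```python
import torch
import torch.nn as nn

class NonuniformToUniformQuant(nn.Module):
    """Illustrative sketch: non-equidistant learnable input thresholds,
    equidistant integer output levels, straight-through-style backward."""

    def __init__(self, n_bits: int = 2, tau: float = 0.1):
        super().__init__()
        n_thresholds = 2 ** n_bits - 1
        # Learnable input thresholds, initialized equidistant on [0, 1].
        init = torch.linspace(0.0, 1.0, n_thresholds + 2)[1:-1]
        self.thresholds = nn.Parameter(init)
        self.tau = tau  # surrogate temperature (an assumption of this sketch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        t = self.thresholds.view(*([1] * x.dim()), -1)  # broadcastable shape
        x_ = x.unsqueeze(-1)
        # Hard forward pass: the output level is the number of thresholds
        # the input exceeds, i.e. the equidistant levels 0, 1, ..., 2^b - 1.
        hard = (x_ > t).sum(dim=-1).float()
        # Soft surrogate, differentiable in both x and the thresholds.
        soft = torch.sigmoid((x_ - t) / self.tau).sum(dim=-1)
        # Straight-through composition: the forward value equals `hard`,
        # while gradients flow through `soft` to x and the thresholds.
        return hard + soft - soft.detach()

quant = NonuniformToUniformQuant(n_bits=2)
x = torch.rand(4, 8, requires_grad=True)
quant(x).sum().backward()
print(quant.thresholds.grad)  # non-None: the thresholds are trainable
```

The abstract also mentions entropy-preserving regularization for weight quantization. A hedged reading is that it encourages the quantized weights to occupy all output levels evenly (maximum entropy), which reduces information loss; the soft-histogram formulation below is an assumption of this sketch, not the paper's exact loss.

```python
import torch

def entropy_regularizer(w: torch.Tensor, levels: torch.Tensor,
                        tau: float = 0.1) -> torch.Tensor:
    """Negative entropy of the (soft) level-usage distribution; minimizing
    this term pushes weights to use all quantization levels evenly."""
    # Soft assignment of each weight to the quantization levels.
    dists = (w.reshape(-1, 1) - levels.view(1, -1)).abs()
    assign = torch.softmax(-dists / tau, dim=-1)   # (num_weights, n_levels)
    probs = assign.mean(dim=0).clamp_min(1e-8)     # level-usage frequencies
    return (probs * probs.log()).sum()             # = -entropy
```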


Related research

09/28/2019 · Additive Powers-of-Two Quantization: A Non-uniform Discretization for Neural Networks
We proposed Additive Powers-of-Two (APoT) quantization, an efficient non...

06/22/2017 · Balanced Quantization: An Effective and Efficient Approach to Quantized Neural Networks
Quantized Neural Networks (QNNs), which use low bitwidth numbers for rep...

08/05/2019 · GDRQ: Group-based Distribution Reshaping for Quantization
Low-bit quantization is challenging to maintain high performance with li...

03/19/2019 · Trained Uniform Quantization for Accurate and Efficient Neural Network Inference on Fixed-Point Hardware
We propose a method of training quantization clipping thresholds for uni...

05/04/2021 · Training Quantized Neural Networks to Global Optimality via Semidefinite Programming
Neural networks (NNs) have been extremely successful across many tasks i...

10/08/2021 · Dynamic Binary Neural Network by learning channel-wise thresholds
Binary neural networks (BNNs) constrain weights and activations to +1 or...

11/25/2022 · Homology-constrained vector quantization entropy regularizer
This paper describes an entropy regularization term for vector quantizat...
