From Quantized DNNs to Quantizable DNNs

04/11/2020
by Kunyuan Du, et al.

This paper proposes Quantizable DNNs, a special type of DNN that can flexibly adjust its bit-width (referred to as "bit modes" hereafter) during execution without further re-training. To optimize all bit modes simultaneously, a combinational loss over all bit modes is proposed, which enforces consistent predictions from the lowest-bit mode up to the 32-bit mode. This consistency-based loss can also be viewed as a form of regularization during training. Because the outputs of matrix multiplication in different bit modes follow different distributions, we introduce Bit-Specific Batch Normalization to reduce conflicts among bit modes. Experiments on CIFAR100 and ImageNet show that, compared to quantized DNNs, Quantizable DNNs are not only far more flexible but also achieve higher classification accuracy. Ablation studies further verify that the regularization introduced by the consistency-based loss indeed improves the model's generalization performance.
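Since the abstract compresses two mechanisms into a few sentences, the following minimal PyTorch sketch may help make them concrete: per-bit-mode batch normalization and a combinational loss that ties low-bit predictions to the 32-bit prediction. All names here (`fake_quant`, `BitSpecificBN`, `TinyQuantizableNet`, `combinational_loss`, the bit-mode set `[2, 4, 8, 32]`) are illustrative assumptions, not the authors' code, and the KL-to-32-bit form of the consistency term is only one plausible reading of "consistent predictions".

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

BIT_MODES = [2, 4, 8, 32]  # assumed set of bit modes; the paper's exact set may differ

def fake_quant(x, bits):
    """Symmetric uniform fake-quantization with a straight-through gradient (sketch)."""
    if bits >= 32:
        return x
    qmax = 2 ** (bits - 1) - 1
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale
    return x + (q - x).detach()  # identity gradient through the rounding

class BitSpecificBN(nn.Module):
    """One BatchNorm per bit mode: quantized matmul outputs have bit-dependent
    distributions, so each mode keeps its own BN statistics and parameters."""
    def __init__(self, num_features, bit_modes=tuple(BIT_MODES)):
        super().__init__()
        self.bns = nn.ModuleDict({str(b): nn.BatchNorm2d(num_features) for b in bit_modes})

    def forward(self, x, bits):
        return self.bns[str(bits)](x)

class TinyQuantizableNet(nn.Module):
    """A toy quantizable network; `bits` selects the bit mode at run time."""
    def __init__(self, num_classes=100):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
        self.bn = BitSpecificBN(16)
        self.fc = nn.Linear(16, num_classes)

    def forward(self, x, bits=32):
        w = fake_quant(self.conv.weight, bits)   # quantize weights
        h = F.conv2d(x, w, self.conv.bias, padding=1)
        h = F.relu(self.bn(h, bits))             # bit-specific BN
        h = fake_quant(h, bits)                  # quantize activations
        return self.fc(F.adaptive_avg_pool2d(h, 1).flatten(1))

def combinational_loss(model, x, target, bit_modes=BIT_MODES):
    """Cross-entropy on the 32-bit mode plus KL terms pulling each low-bit
    prediction toward the 32-bit prediction."""
    logits32 = model(x, bits=32)
    loss = F.cross_entropy(logits32, target)
    p32 = F.softmax(logits32.detach(), dim=1)
    for b in bit_modes:
        if b == 32:
            continue
        log_pb = F.log_softmax(model(x, bits=b), dim=1)
        loss = loss + F.kl_div(log_pb, p32, reduction="batchmean")
    return loss

# Usage: a single backward pass trains all bit modes jointly.
model = TinyQuantizableNet()
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 100, (8,))
combinational_loss(model, x, y).backward()
```

One design point worth noting: detaching the 32-bit probabilities in the KL terms treats the full-precision mode as a teacher for the low-bit modes, which is what lets a single set of shared weights serve every bit mode at inference without re-training.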


