Learning Low Precision Deep Neural Networks through Regularization
We consider the quantization of deep neural networks (DNNs) to produce low-precision models for efficient inference with fixed-point operations. In contrast to previous approaches that train quantized DNNs directly under the constraints of low-precision weights and activations, we learn the quantization of DNNs with minimal quantization loss through regularization. In particular, we introduce a learnable regularization coefficient to find accurate low-precision models efficiently during training. In our experiments, the proposed scheme yields state-of-the-art low-precision models of AlexNet and ResNet-18, which achieve higher accuracy than previously available low-precision models. We also apply our quantization method to produce low-precision DNNs for image super resolution, and observe only a 0.5 dB peak signal-to-noise ratio (PSNR) loss when using binary weights and 8-bit activations. The proposed scheme can be used to train low-precision models from scratch or to fine-tune a well-trained high-precision model into a low-precision model. Finally, we discuss how a similar regularization method can be adopted for DNN weight pruning and compression, and show that 401× compression is achieved for LeNet-5.
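To make the idea of regularization-based quantization concrete, below is a minimal PyTorch sketch of one possible formulation: a penalty on the mean-squared distance between full-precision weights and their nearest uniformly quantized values, weighted by a learnable coefficient. The class name, the log-space parameterization of the coefficient, and the `-log(alpha)` barrier term (used here to keep the learnable coefficient from collapsing to zero) are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class QuantRegularizer(nn.Module):
    """Sketch of a quantization regularizer with a learnable coefficient.

    Penalizes the distance between full-precision weights and their nearest
    low-precision (uniformly quantized) values, so training gradually pulls
    the weights onto the quantization grid.
    """

    def __init__(self, num_bits=2, init_log_alpha=-4.0, barrier=1e-3):
        super().__init__()
        self.num_bits = num_bits
        # Learnable coefficient, parameterized in log space to stay positive.
        self.log_alpha = nn.Parameter(torch.tensor(init_log_alpha))
        # Hypothetical barrier weight discouraging alpha from shrinking to 0.
        self.barrier = barrier

    def quantize(self, w):
        # Uniform symmetric quantization to 2^b - 1 levels around zero.
        levels = 2 ** self.num_bits - 1
        scale = w.detach().abs().max() / (levels / 2) + 1e-12
        q = torch.round(w / scale).clamp(-(levels // 2), levels // 2)
        return q * scale

    def forward(self, weights):
        # Mean-squared distance to the quantization grid, scaled by alpha.
        alpha = self.log_alpha.exp()
        penalty = sum(((w - self.quantize(w)) ** 2).mean() for w in weights)
        # Barrier term (-log alpha) keeps the learnable coefficient growing
        # instead of minimizing the loss by driving alpha to zero.
        return alpha * penalty - self.barrier * self.log_alpha


# Usage sketch: add the regularizer to the task loss during training.
# model and task_loss are assumed to be defined elsewhere.
# reg = QuantRegularizer(num_bits=2)
# weights = [p for n, p in model.named_parameters() if "weight" in n]
# loss = task_loss + reg(weights)
```

In this sketch the same penalty could also be used for fine-tuning a pretrained high-precision model: as the learned coefficient grows, weights are pushed ever closer to the quantization grid, so the final hard quantization step incurs little loss.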