Iterative Training: Finding Binary Weight Deep Neural Networks with Layer Binarization

11/13/2021
by Cheng-Chou Lan, et al.

In low-latency and mobile applications, lower computational complexity, a smaller memory footprint, and better energy efficiency are desired. Many prior works address this need by removing redundant parameters. Parameter quantization replaces floating-point arithmetic with lower-precision fixed-point arithmetic, further reducing complexity. Typical training of quantized-weight neural networks starts from fully quantized weights, but quantization introduces noise into the weights. To compensate for this noise during training, we propose to quantize only some of the weights while keeping the rest in floating-point precision. Since a deep neural network has many layers, we reach a fully quantized network by starting from one quantized layer and progressively quantizing more layers. We show that the order in which layers are quantized affects the final accuracy, and the number of possible orders grows quickly with network depth, so we propose a sensitivity pre-training to guide the choice of quantization order. Recent work on weight binarization replaces the weight-input matrix multiplication with additions. We apply the proposed iterative training to weight binarization. Our experiments cover fully connected and convolutional networks on the MNIST, CIFAR-10, and ImageNet datasets. We show empirically that, starting from partially binary weights instead of fully binary ones, training reaches fully binary-weight networks with better accuracy for larger and deeper networks. Binarizing layers in the forward order yields better accuracy, and sensitivity-guided binarization improves it further. The improvements come at the cost of longer training time.
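The abstract describes the procedure only at a high level: binarize one layer, train, binarize the next, and so on, with the order chosen either heuristically (forward) or by a sensitivity pre-training. The sketch below is a minimal PyTorch illustration of that loop under assumed details; the `MaybeBinaryLinear` layer, the sign/straight-through-estimator binarization, and the per-stage schedule are illustrative choices, not the authors' exact implementation.

```python
# Illustrative sketch of iterative layer binarization (assumed details, not the
# paper's exact code): layers are binarized one at a time, in a chosen order,
# while the remaining layers keep full-precision weights.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator in the backward pass."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return w.sign()

    @staticmethod
    def backward(ctx, grad_output):
        (w,) = ctx.saved_tensors
        # Pass gradients through only where |w| <= 1 (standard STE clipping).
        return grad_output * (w.abs() <= 1).float()


class MaybeBinaryLinear(nn.Linear):
    """Linear layer whose weights can be switched to binary during training."""

    def __init__(self, in_features, out_features):
        super().__init__(in_features, out_features)
        self.binarized = False  # starts in full precision

    def forward(self, x):
        w = BinarizeSTE.apply(self.weight) if self.binarized else self.weight
        return F.linear(x, w, self.bias)


def iterative_binarization_training(model, loader, order, epochs_per_stage=1):
    """Binarize layers one by one, in `order`, training between stages."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    binary_layers = [m for m in model.modules() if isinstance(m, MaybeBinaryLinear)]
    for layer_idx in order:            # e.g. forward order: [0, 1, 2, ...]
        binary_layers[layer_idx].binarized = True
        for _ in range(epochs_per_stage):
            for x, y in loader:
                opt.zero_grad()
                loss = F.cross_entropy(model(x), y)
                loss.backward()
                opt.step()
    return model
```

For example, with `model = nn.Sequential(MaybeBinaryLinear(784, 256), nn.ReLU(), MaybeBinaryLinear(256, 10))`, passing `order=[0, 1]` binarizes layers front to back (the forward order the abstract reports as working well), whereas a sensitivity pre-training would instead rank layers by how much binarizing each one alone degrades accuracy and use that ranking as the order.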
