ReCU: Reviving the Dead Weights in Binary Neural Networks

03/23/2021
by Zihan Xu et al.

Binary neural networks (BNNs) have received increasing attention due to their substantial reductions in computation and memory. Most existing works focus either on lessening the quantization error by minimizing the gap between the full-precision weights and their binarized counterparts, or on designing a gradient approximation to mitigate the gradient mismatch, while leaving the "dead weights" untouched. This leads to slow convergence when training BNNs. In this paper, for the first time, we explore the influence of the "dead weights", a group of weights that are barely updated during the training of BNNs, and introduce the rectified clamp unit (ReCU) to revive them for updating. We prove that reviving the "dead weights" with ReCU results in a smaller quantization error. In addition, we take into account the information entropy of the weights and mathematically analyze why weight standardization benefits BNNs. We demonstrate the inherent contradiction between minimizing the quantization error and maximizing the information entropy, and propose an adaptive exponential scheduler to identify the range of the "dead weights". By considering the "dead weights", our method offers not only faster BNN training but also state-of-the-art performance on CIFAR-10 and ImageNet compared with recent methods. Code is available at [this https URL](https://github.com/z-hXu/ReCU).
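
The abstract does not spell out the exact formulation of ReCU, but the idea of clamping the latent full-precision weights into a quantile-defined range before binarization can be sketched in a few lines of PyTorch. The function name `recu_binarize`, the parameter `tau`, the quantile-based thresholds, and the mean-magnitude scaling below are illustrative assumptions rather than the paper's exact method; the weight standardization and the adaptive exponential scheduler mentioned in the abstract are omitted for brevity.

```python
import torch

def recu_binarize(w: torch.Tensor, tau: float = 0.85) -> torch.Tensor:
    """Simplified, illustrative ReCU-style binarization step.

    `tau`, the quantile-defined clamp range, and the mean-magnitude scaling
    are assumptions made for this sketch; the paper additionally applies
    weight standardization and schedules the clamp range adaptively.
    """
    # Clamp latent weights into a quantile-defined range so that tail
    # ("dead") weights are pulled back toward the binarization scale.
    lo = torch.quantile(w, 1.0 - tau).item()
    hi = torch.quantile(w, tau).item()
    w_clamped = torch.clamp(w, min=lo, max=hi)

    # Binarize: sign of the clamped weights, scaled by their mean magnitude.
    scale = w_clamped.abs().mean()
    w_bin = scale * torch.sign(w_clamped)

    # Identity straight-through estimator: the forward pass uses the binary
    # weights, while the backward pass sends gradients to every latent
    # weight, including the formerly "dead" ones in the tails.
    return w + (w_bin - w).detach()
```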

Related research

09/28/2020 · Rotated Binary Neural Network
Binary Neural Network (BNN) shows its predominance in reducing the compl...

04/05/2022 · Bimodal Distributed Binarized Neural Networks
Binary Neural Networks (BNNs) are an extremely promising method to reduc...

07/01/2018 · SYQ: Learning Symmetric Quantization For Efficient Deep Neural Networks
Inference for state-of-the-art deep neural networks is computationally e...

10/19/2020 · Bi-Real Net V2: Rethinking Non-linearity for 1-bit CNNs and Going Beyond
Binary neural networks (BNNs), where both weights and activations are bi...

02/15/2021 · FAT: Learning Low-Bitwidth Parametric Representation via Frequency-Aware Transformation
Learning convolutional neural networks (CNNs) with low bitwidth is chall...

06/21/2021 · How Do Adam and Training Strategies Help BNNs Optimization?
The best performing Binary Neural Networks (BNNs) are usually attained u...

12/02/2021 · Equal Bits: Enforcing Equally Distributed Binary Network Weights
Binary networks are extremely efficient as they use only two symbols to ...
