IR-Net: Forward and Backward Information Retention for Highly Accurate Binary Neural Networks

09/24/2019
by Haotong Qin, et al.

Weight and activation binarization is an effective approach to deep neural network compression and can accelerate inference by leveraging bitwise operations. Although many binarization methods have improved model accuracy by minimizing the quantization error in forward propagation, a noticeable performance gap remains between the binarized model and its full-precision counterpart. Our empirical study indicates that quantization causes information loss in both forward and backward propagation, which is the bottleneck in training highly accurate binary neural networks. To address this, we propose an Information Retention Network (IR-Net) that retains the information contained in the forward activations and backward gradients. IR-Net relies on two technical contributions: (1) Libra Parameter Binarization (Libra-PB), which minimizes both the quantization error and the information loss of parameters through balanced and standardized weights in forward propagation; and (2) Error Decay Estimator (EDE), which minimizes the information loss of gradients by gradually approximating the sign function in backward propagation, jointly considering the updating ability and the accuracy of gradients. Comprehensive experiments with various network structures on the CIFAR-10 and ImageNet datasets demonstrate that the proposed IR-Net consistently outperforms state-of-the-art quantization methods.
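The abstract does not spell out the formulas, so the following PyTorch sketch is only an illustrative reading of the two ideas, not the paper's actual implementation: Libra-PB is rendered here as zero-mean standardization of the weights before the sign function, and EDE as a tanh-shaped surrogate gradient whose steepness t is scheduled upward during training. The names LibraSignEDE, BinaryLinear, and the parameter t are assumptions introduced for this example.

```python
# Illustrative sketch of balanced/standardized binarization (forward) and a
# gradually sharpened sign approximation (backward). Hypothetical reading of
# the abstract, not the authors' released code.
import torch
import torch.nn as nn


class LibraSignEDE(torch.autograd.Function):
    """sign() in forward; tanh-like surrogate gradient in backward."""

    @staticmethod
    def forward(ctx, x, t):
        ctx.save_for_backward(x)
        ctx.t = t
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        t = ctx.t
        # d/dx tanh(t*x) = t * (1 - tanh(t*x)^2). As t grows over training,
        # the surrogate approaches the true (almost-everywhere-zero) derivative
        # of sign(), trading updating ability against gradient accuracy.
        grad_input = grad_output * t * (1.0 - torch.tanh(t * x) ** 2)
        return grad_input, None


class BinaryLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)

    def forward(self, x, t=1.0):
        w = self.weight
        # "Balanced and standardized" weights (one reading of Libra-PB):
        # subtract the per-row mean so +1/-1 occur in balance, then standardize.
        w = (w - w.mean(dim=1, keepdim=True)) / (w.std(dim=1, keepdim=True) + 1e-8)
        bw = LibraSignEDE.apply(w, t)
        bx = LibraSignEDE.apply(x, t)
        return nn.functional.linear(bx, bw)


if __name__ == "__main__":
    layer = BinaryLinear(16, 4)
    x = torch.randn(8, 16, requires_grad=True)
    # t would typically be annealed from small to large across epochs.
    y = layer(x, t=2.0)
    y.sum().backward()
    print(y.shape, layer.weight.grad.abs().mean())
```

A training loop would increase t over epochs so that early updates receive broad, informative gradients while late updates closely match the true sign function, which is the trade-off the abstract attributes to EDE.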

