IR-Net: Forward and Backward Information Retention for Highly Accurate Binary Neural Networks

09/24/2019
by Haotong Qin, et al.

Weight and activation binarization is an effective approach to compressing deep neural networks and can accelerate inference by leveraging bitwise operations. Although many binarization methods have improved model accuracy by minimizing the quantization error in forward propagation, a noticeable performance gap remains between binarized models and their full-precision counterparts. Our empirical study indicates that quantization causes information loss in both forward and backward propagation, which is the bottleneck in training highly accurate binary neural networks. To address this issue, we propose an Information Retention Network (IR-Net) that retains the information carried by the forward activations and backward gradients. IR-Net relies on two technical contributions: (1) Libra Parameter Binarization (Libra-PB), which minimizes both the quantization error and the information loss of parameters by balancing and standardizing weights in forward propagation; and (2) Error Decay Estimator (EDE), which minimizes the information loss of gradients by gradually approximating the sign function in backward propagation, jointly balancing the parameters' update ability and the accuracy of gradients. Comprehensive experiments with various network structures on the CIFAR-10 and ImageNet datasets show that the proposed IR-Net consistently outperforms state-of-the-art quantization methods.

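Since the abstract describes Libra-PB and EDE only at a high level, a minimal PyTorch-style sketch may help fix the ideas. Everything below is an assumption-laden illustration rather than the authors' released implementation: the function names (libra_pb, ede_grad, BinarizeWithEDE), the power-of-two rescaling, and the tanh-based surrogate with a single temperature parameter t are illustrative choices.

```python
import torch


def libra_pb(w: torch.Tensor) -> torch.Tensor:
    """Libra Parameter Binarization (sketch): balance (zero-mean) and
    standardize the real-valued weights before taking the sign, then
    rescale by an integer power of two so the scaling can be realized
    as a bit shift at inference time."""
    w_std = (w - w.mean()) / (w.std() + 1e-8)        # balanced, standardized weights
    s = torch.round(torch.log2(w_std.abs().mean()))  # power-of-two scaling exponent
    return torch.sign(w_std) * (2.0 ** s)


def ede_grad(w: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Error Decay Estimator (sketch): gradient of a soft surrogate
    k * tanh(t * w). With a small t early in training the surrogate is
    wide and keeps all weights updatable; as t grows it approaches
    sign(), making the gradient estimate more accurate."""
    k = torch.clamp(1.0 / t, min=1.0)                # keep the surrogate's range near [-1, 1]
    return k * t * (1.0 - torch.tanh(t * w) ** 2)


class BinarizeWithEDE(torch.autograd.Function):
    """Forward: Libra-PB binarization. Backward: EDE gradient estimate
    in place of the usual straight-through estimator."""

    @staticmethod
    def forward(ctx, w, t):
        ctx.save_for_backward(w, t)
        return libra_pb(w)

    @staticmethod
    def backward(ctx, grad_output):
        w, t = ctx.saved_tensors
        # No gradient is returned for the schedule parameter t.
        return grad_output * ede_grad(w, t), None
```

In a training loop one would replace a layer's weight with BinarizeWithEDE.apply(conv.weight, torch.tensor(t)) in the forward pass and gradually increase the (hypothetical) temperature t from a small value toward a large one over the epochs, so that the surrogate hardens into the true sign function while the weights remain updatable early on.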

Related research

09/25/2021  Distribution-sensitive Information Retention for Accurate Binary Neural Network
Model binarization is an effective method of compressing neural networks...

02/03/2017  Deep Learning with Low Precision by Half-wave Gaussian Quantization
The problem of quantizing the activations of a deep neural network is co...

09/26/2019  Balanced Binary Neural Networks with Gated Residual
Binary neural networks have attracted numerous attention in recent years...

12/24/2022  Hyperspherical Loss-Aware Ternary Quantization
Most of the existing works use projection functions for ternary quantiza...

10/06/2022  IR2Net: Information Restriction and Information Recovery for Accurate Binary Neural Networks
Weight and activation binarization can efficiently compress deep neural ...

05/16/2019  Formal derivation of Mesh Neural Networks with their Forward-Only gradient Propagation
This paper proposes the Mesh Neural Network (MNN), a novel architecture ...

12/29/2019  Towards Unified INT8 Training for Convolutional Neural Network
Recently low-bit (e.g., 8-bit) network quantization has been extensively...
