Deep Learning with Low Precision by Half-wave Gaussian Quantization

02/03/2017
by Zhaowei Cai, et al.

The problem of quantizing the activations of a deep neural network is considered. An examination of the popular binary quantization approach shows that it consists of approximating a classical non-linearity, the hyperbolic tangent, by two functions: a piecewise constant sign function, used in the feedforward network computations, and a piecewise linear hard tanh function, used in the backpropagation step during network learning. The problem of approximating the ReLU non-linearity, widely used in the recent deep learning literature, is then considered. A half-wave Gaussian quantizer (HWGQ) is proposed for the forward approximation and shown to have an efficient implementation, by exploiting the statistics of network activations and the batch normalization operations commonly used in the literature. To overcome the problem of gradient mismatch, due to the use of different forward and backward approximations, several piecewise backward approximators are then investigated. The resulting quantized network, denoted HWGQ-Net, is shown to come much closer to the performance of full-precision networks, such as AlexNet, ResNet, GoogLeNet and VGG-Net, than previously available low-precision networks, with 1-bit binary weights and 2-bit quantized activations.
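To make the forward/backward split concrete, the sketch below (a hypothetical PyTorch implementation, not the authors' released code) shows a 2-bit HWGQ-style activation: the forward pass snaps non-negative activations to a small set of quantization levels, while the backward pass uses a clipped-ReLU surrogate gradient to limit the mismatch between the two approximations. The step size and clipping point are illustrative placeholders; in the paper the levels are derived from the (approximately Gaussian) statistics of batch-normalized pre-activations rather than hard-coded.

```python
# Hypothetical sketch of a 2-bit HWGQ-style activation in PyTorch.
# The uniform step / clipping values below are illustrative placeholders,
# not the optimal levels reported in the paper (those are fit to the
# half-normal statistics of batch-normalized activations).
import torch


class HWGQ(torch.autograd.Function):
    STEP = 0.5        # spacing between quantization levels (placeholder)
    MAX_LEVEL = 1.5   # largest level, i.e. levels are {0, 0.5, 1.0, 1.5}

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # Half-wave quantizer: negative inputs map to 0, positive inputs
        # are rounded to the nearest level and clipped at MAX_LEVEL.
        q = torch.clamp(torch.round(x / HWGQ.STEP), 0, HWGQ.MAX_LEVEL / HWGQ.STEP)
        return q * HWGQ.STEP

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Clipped-ReLU backward approximation: pass the gradient only on
        # the interval where the quantizer is "active", zero elsewhere.
        mask = (x > 0) & (x < HWGQ.MAX_LEVEL)
        return grad_output * mask.to(grad_output.dtype)


if __name__ == "__main__":
    x = torch.randn(8, requires_grad=True)
    y = HWGQ.apply(x)       # forward: piecewise-constant quantization
    y.sum().backward()      # backward: clipped-ReLU surrogate gradient
    print(y, x.grad)
```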

research · 09/24/2019
IR-Net: Forward and Backward Information Retention for Highly Accurate Binary Neural Networks
Weight and activation binarization is an effective approach to deep neur...

research · 02/18/2019
Low-bit Quantization of Neural Networks for Efficient Inference
Recent breakthrough methods in machine learning make use of increasingly...

research · 07/01/2018
SYQ: Learning Symmetric Quantization For Efficient Deep Neural Networks
Inference for state-of-the-art deep neural networks is computationally e...

research · 12/19/2019
FQ-Conv: Fully Quantized Convolution for Efficient and Accurate Inference
Deep neural networks (DNNs) can be made hardware-efficient by reducing t...

research · 09/25/2021
Distribution-sensitive Information Retention for Accurate Binary Neural Network
Model binarization is an effective method of compressing neural networks...

research · 08/29/2023
On-Device Learning with Binary Neural Networks
Existing Continual Learning (CL) solutions only partially address the co...

research · 11/23/2020
Learning Quantized Neural Nets by Coarse Gradient Method for Non-linear Classification
Quantized or low-bit neural networks are attractive due to their inferen...
