Batch Normalization in Quantized Networks

04/29/2020
by Eyyüb Sari, et al.

Implementing quantized neural networks on computing hardware yields considerable speed-ups and memory savings. However, quantized deep networks are difficult to train, and the batch normalization (BatchNorm) layer plays an important role in training both full-precision and quantized networks. Most studies of BatchNorm focus on full-precision networks, and there is little research on understanding BatchNorm's effect in quantized training, which we address here. We show that BatchNorm avoids gradient explosion, which is counter-intuitive and was recently observed in numerical experiments by other researchers.
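To make the setting concrete, the following is a minimal sketch of quantization-aware training with BatchNorm: a linear layer whose weights pass through a uniform fake quantizer with a straight-through estimator (STE), followed by a BatchNorm layer. The class names, bit-width, and scaling scheme are illustrative assumptions and are not taken from the paper.

# Minimal sketch (illustrative, not the authors' method): fake-quantized
# weights with a straight-through gradient, followed by BatchNorm.
import torch
import torch.nn as nn


class FakeQuantize(torch.autograd.Function):
    """Uniform symmetric quantizer with a straight-through gradient."""

    @staticmethod
    def forward(ctx, x, num_bits=4):
        qmax = 2 ** (num_bits - 1) - 1
        scale = x.abs().max().clamp(min=1e-8) / qmax
        return torch.round(x / scale).clamp(-qmax, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # STE: pass the gradient through the non-differentiable rounding.
        return grad_output, None


class QuantLinearBN(nn.Module):
    """Linear layer with fake-quantized weights, followed by BatchNorm."""

    def __init__(self, in_features, out_features, num_bits=4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.num_bits = num_bits
        self.bn = nn.BatchNorm1d(out_features)

    def forward(self, x):
        w_q = FakeQuantize.apply(self.weight, self.num_bits)
        return self.bn(x @ w_q.t())


if __name__ == "__main__":
    layer = QuantLinearBN(16, 8)
    x = torch.randn(32, 16)
    out = layer(x)
    out.sum().backward()  # gradients reach the weights via the STE
    print(out.shape, layer.weight.grad.abs().mean())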


Related research:

06/07/2017 · Training Quantized Nets: A Deeper Understanding
Currently, deep neural networks are deployed on low-power portable devic...

08/30/2020 · Optimal Quantization for Batch Normalization in Neural Network Deployments and Beyond
Quantized Neural Networks (QNNs) use low bit-width fixed-point numbers f...

06/01/2022 · A Log-Linear Time Sequential Optimal Calibration Algorithm for Quantized Isotonic L2 Regression
We study the sequential calibration of estimations in a quantized isoton...

12/29/2019 · MTJ-Based Hardware Synapse Design for Quantized Deep Neural Networks
Quantized neural networks (QNNs) are being actively researched as a solu...

07/14/2020 · AQD: Towards Accurate Quantized Object Detection
Network quantization aims to lower the bitwidth of weights and activatio...

12/10/2022 · Vertical Layering of Quantized Neural Networks for Heterogeneous Inference
Although considerable progress has been obtained in neural network quant...

11/07/2022 · AskewSGD: An Annealed interval-constrained Optimisation method to train Quantized Neural Networks
In this paper, we develop a new algorithm, Annealed Skewed SGD - AskewSG...
