Optimal Quantization for Batch Normalization in Neural Network Deployments and Beyond

08/30/2020
by Dachao Lin, et al.

Quantized Neural Networks (QNNs) represent weight parameters and activations with low bit-width fixed-point numbers, and are widely used in real-world applications because they save computational resources and produce reproducible results. Batch Normalization (BN) poses a challenge for QNNs because its reciprocal operation requires floating-point arithmetic; previous QNNs either compute BN at high precision or replace it with heuristic variants. In this work, we propose a novel method to quantize BN by converting its affine transformation of two floating-point parameters into a fixed-point operation with a shared quantized scale, which is friendly to hardware acceleration and model deployment. We confirm through rigorous theoretical and numerical analysis that our method preserves the original outputs. The accuracy and efficiency of our quantization method are verified by layer-level experiments on the CIFAR and ImageNet datasets. We also believe that our method is potentially useful in other problems involving quantization.
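The core idea described in the abstract, folding inference-time BN into a single affine map and replacing its two floating-point parameters with integers under one shared scale, can be illustrated in a few lines. The sketch below is our own minimal example, not the authors' exact scheme: it assumes 16-bit signed integers, a shared power-of-two scale, and round-to-nearest, and all function names (fold_bn, quantize_shared_scale, bn_fixed_point) are hypothetical.

```python
import numpy as np

def fold_bn(gamma, beta, mean, var, eps=1e-5):
    """Fold inference-time BN into a single affine map y = a*x + b."""
    a = gamma / np.sqrt(var + eps)
    b = beta - a * mean
    return a, b

def quantize_shared_scale(a, b, bits=16):
    """Approximate floats (a, b) by integers (A, B) with one shared
    power-of-two scale 2^-s, so that a*x + b ~= (A*x + B) >> s."""
    max_abs = max(abs(a), abs(b))
    # largest shift s such that both scaled values still fit in `bits` signed bits
    s = int(np.floor(np.log2((2 ** (bits - 1) - 1) / max_abs)))
    A = int(round(a * 2 ** s))
    B = int(round(b * 2 ** s))
    return A, B, s

def bn_fixed_point(x_int, A, B, s):
    """Integer-only BN: multiply, add, then a rounding arithmetic shift.
    Returns an integer approximating round(a*x + b)."""
    acc = A * x_int + B
    return (acc + (1 << (s - 1))) >> s  # add half before shifting to round

# Example: one channel with gamma=0.7, beta=-1.3, mean=2.0, var=0.25
a, b = fold_bn(0.7, -1.3, 2.0, 0.25)
A, B, s = quantize_shared_scale(a, b)
print(bn_fixed_point(5, A, B, s), a * 5 + b)  # fixed-point vs. float result
```

Because both parameters share one scale 2^-s, the hardware needs only an integer multiply, an integer add, and a single shifter per channel, which is what makes the shared-scale formulation deployment-friendly.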


