Data-Free Quantization through Weight Equalization and Bias Correction

06/11/2019
by Markus Nagel et al.

We introduce a data-free quantization method for deep neural networks that requires neither fine-tuning nor hyperparameter selection. It achieves near-original model performance on common computer vision architectures and tasks. 8-bit fixed-point quantization is essential for efficient inference in modern deep learning hardware. However, quantizing models to run in 8-bit is a non-trivial task, frequently leading either to significant performance degradation or to engineering time spent training a network to be amenable to quantization. Our approach relies on equalizing the weight ranges in the network by exploiting a scale-equivariance property of activation functions. In addition, the method corrects biases in the error that quantization introduces. This improves quantization accuracy and can be applied to almost any model with a straightforward API call. For common architectures, such as the MobileNet family, we achieve state-of-the-art quantized model performance. We further show that the method extends to other computer vision architectures and tasks such as semantic segmentation and object detection.
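To illustrate the equalization idea, the sketch below rescales a pair of consecutive layers so their per-channel weight ranges match, using the fact that ReLU(s * x) = s * ReLU(x) for any positive scale s. This is a minimal NumPy sketch of cross-layer equalization for a Linear -> ReLU -> Linear pair, with a hypothetical helper name; it is not the authors' reference implementation.

```python
import numpy as np

def equalize_pair(W1, b1, W2):
    """Cross-layer equalization sketch (hypothetical helper, not the
    paper's reference code) for a Linear -> ReLU -> Linear pair.

    For each hidden channel i, choose a positive scale s_i that makes
    the range of row i of W1 equal to the range of column i of W2,
    then rescale: W1 /= s, b1 /= s, W2 *= s. Because ReLU is
    positive-scale equivariant, the network's function is unchanged.
    """
    r1 = np.abs(W1).max(axis=1)   # per-output-channel range of W1
    r2 = np.abs(W2).max(axis=0)   # per-input-channel range of W2
    s = np.sqrt(r1 / r2)          # equalizing scale per hidden channel
    W1_eq = W1 / s[:, None]
    b1_eq = b1 / s
    W2_eq = W2 * s[None, :]
    return W1_eq, b1_eq, W2_eq, s

# The rescaled network computes the same function as the original:
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2 = rng.normal(size=(2, 4))
x = rng.normal(size=3)
relu = lambda v: np.maximum(v, 0.0)

W1e, b1e, W2e, _ = equalize_pair(W1, b1, W2)
y_orig = W2 @ relu(W1 @ x + b1)
y_eq = W2e @ relu(W1e @ x + b1e)
assert np.allclose(y_orig, y_eq)
```

After equalization, each channel's weights fill a comparable fraction of the shared per-tensor quantization range, which is what reduces the accuracy loss from 8-bit rounding.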


Related research

01/31/2020
Near-Lossless Post-Training Quantization of Deep Neural Networks via a Piecewise Linear Approximation
Quantization plays an important role for energy-efficient deployment of ...

08/21/2023
QD-BEV: Quantization-aware View-guided Distillation for Multi-view 3D Object Detection
Multi-view 3D detection based on BEV (bird-eye-view) has recently achiev...

02/19/2020
SYMOG: learning symmetric mixture of Gaussian modes for improved fixed-point quantization
Deep neural networks (DNNs) have been proven to outperform classical met...

12/30/2021
Finding the Task-Optimal Low-Bit Sub-Distribution in Deep Neural Networks
Quantized neural networks typically require smaller memory footprints an...

08/21/2023
Dataset Quantization
State-of-the-art deep neural networks are trained with large amounts (mi...

02/08/2022
Binary Neural Networks as a general-purpose compute paradigm for on-device computer vision
For binary neural networks (BNNs) to become the mainstream on-device com...

06/11/2019
Table-Based Neural Units: Fully Quantizing Networks for Multiply-Free Inference
In this work, we propose to quantize all parts of standard classificatio...
