PD-Quant: Post-Training Quantization based on Prediction Difference Metric

12/14/2022
by Jiawei Liu, et al.

As a neural network compression technique, post-training quantization (PTQ) transforms a pre-trained model into a quantized model using a lower-precision data type. However, prediction accuracy decreases because of quantization noise, especially in extremely low-bit settings. Determining the appropriate quantization parameters (e.g., scaling factors and rounding of weights) is the main challenge. Many existing methods determine the quantization parameters by minimizing the distance between features before and after quantization, but using this distance as the optimization metric only considers local information. We analyze the problem of minimizing local metrics and show that it does not yield optimal quantization parameters. Furthermore, the quantized model suffers from overfitting due to the small number of calibration samples used in PTQ. In this paper, we propose PD-Quant to address these problems. PD-Quant uses the difference between network predictions before and after quantization to determine the quantization parameters. To mitigate the overfitting problem, PD-Quant adjusts the distribution of activations in PTQ. Experiments show that PD-Quant leads to better quantization parameters and improves the prediction accuracy of quantized models, especially in low-bit settings. For example, PD-Quant pushes the accuracy of ResNet-18 up to 53.08% and RegNetX-600MF up to 40.92% under 2-bit weight and 2-bit activation quantization. The code is available at https://github.com/hustvl/PD-Quant.
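To make the abstract's distinction concrete, below is a minimal sketch (not the authors' implementation) contrasting the usual local feature-distance metric with a prediction-difference metric when tuning an activation quantization scale. The toy two-layer model, the `fake_quantize`/`round_ste` helpers, the 4-bit unsigned quantizer, and the KL-divergence form of the prediction difference are all illustrative assumptions, not details taken from the paper.

```python
# Sketch: tuning a quantization scale against a prediction-difference (PD)
# metric instead of a local feature-distance metric. Illustrative only.
import torch
import torch.nn.functional as F

def round_ste(x):
    """Round with a straight-through estimator so gradients reach the scale."""
    return x + (x.round() - x).detach()

def fake_quantize(x, scale, n_bits=4):
    """Uniform unsigned fake quantization of non-negative activations."""
    qmax = 2 ** n_bits - 1
    q = torch.clamp(round_ste(x / scale), 0, qmax)
    return q * scale

torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 10)
)
for p in model.parameters():          # pre-trained weights stay frozen in PTQ
    p.requires_grad_(False)

x = torch.randn(64, 16)               # a small calibration batch
fp_feat = model[1](model[0](x))       # full-precision intermediate activations
fp_pred = F.softmax(model(x), dim=1)  # full-precision network prediction

scale = torch.tensor(0.1, requires_grad=True)
opt = torch.optim.Adam([scale], lr=1e-2)

for step in range(200):
    q_feat = fake_quantize(fp_feat, scale)
    # Local metric: distance between features before and after quantization.
    local_loss = F.mse_loss(q_feat, fp_feat)
    # PD metric: difference between predictions before and after quantization,
    # measured here as a KL divergence through the remaining layers.
    q_pred = F.log_softmax(model[2](q_feat), dim=1)
    pd_loss = F.kl_div(q_pred, fp_pred, reduction="batchmean")
    opt.zero_grad()
    pd_loss.backward()                # optimize the scale against the PD metric
    opt.step()

print(f"scale={scale.item():.4f}  local={local_loss.item():.5f}  pd={pd_loss.item():.5f}")
```

The sketch shows why the choice of metric matters: the local MSE only measures reconstruction of one layer's features, while the PD loss propagates the quantized activations through the remaining layers and scores the scale by its effect on the final prediction.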


Related research

- Joint Training of Low-Precision Neural Network with Quantization Interval Parameters (08/17/2018): Optimization for low-precision neural network is an important technique ...
- PTQ4ViT: Post-Training Quantization Framework for Vision Transformers (11/24/2021): Quantization is one of the most effective methods to compress neural net...
- Post-Training Quantization for Object Detection (04/19/2023): Efficient inference for object detection networks is a major challenge o...
- PTQ-SL: Exploring the Sub-layerwise Post-training Quantization (10/15/2021): Network quantization is a powerful technique to compress convolutional n...
- Attention Round for Post-Training Quantization (07/07/2022): At present, the quantification methods of neural network models are main...
- Pareto-Optimal Quantized ResNet Is Mostly 4-bit (05/07/2021): Quantization has become a popular technique to compress neural networks ...
- Is In-Domain Data Really Needed? A Pilot Study on Cross-Domain Calibration for Network Quantization (05/16/2021): Post-training quantization methods use a set of calibration data to comp...
