Long-Range Zero-Shot Generative Deep Network Quantization

11/13/2022
by Yan Luo, et al.

Quantization approximates a deep network model that uses floating-point numbers with one that uses low-bit-width numbers, in order to accelerate inference and reduce computation. Zero-shot quantization, which quantizes a model without access to the original data, can be accomplished by synthesizing data that fits the real data distribution. However, zero-shot quantization achieves inferior performance compared to post-training quantization with real data. We find this is because: 1) an ordinary generator struggles to produce diverse synthetic data, since it lacks the long-range information needed to allocate attention to global features; 2) the synthetic images merely aim to match the statistics of real data, which leads to weak intra-class heterogeneity and limited feature richness. To overcome these problems, we propose a novel deep network quantizer, dubbed Long-Range Zero-Shot Generative Deep Network Quantization (LRQ). Technically, we propose a long-range generator that learns long-range information instead of only simple local features. So that the synthetic data contain more global features, long-range attention using large-kernel convolution is incorporated into the generator. In addition, we present an Adversarial Margin Add (AMA) module to enlarge the intra-class angle between feature vectors and class centers. Because AMA makes the loss harder to converge, working against the objective of the original loss function, it forms an adversarial process. Furthermore, to transfer knowledge from the full-precision network, we also employ decoupled knowledge distillation. Extensive experiments demonstrate that LRQ outperforms other competitors.
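The abstract only names the mechanisms, so the snippet below is a minimal, hypothetical PyTorch sketch of the two ideas it describes: long-range attention built from a large depthwise-kernel convolution for the generator, and an AMA-style angular margin that pushes synthetic features away from their class centers. The module and function names (`LargeKernelAttention`, `ama_penalty`) and the margin value are assumptions for illustration, not the authors' released implementation.

```python
# Hypothetical sketch (assumed names), not the paper's official code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LargeKernelAttention(nn.Module):
    """Spatial attention from a depthwise large-kernel convolution,
    giving the generator a wide receptive field (long-range information)."""

    def __init__(self, channels: int, kernel_size: int = 13):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, kernel_size,
                            padding=kernel_size // 2, groups=channels)
        self.pw = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = torch.sigmoid(self.pw(self.dw(x)))  # global-context attention map
        return x * attn


def ama_penalty(features: torch.Tensor, centers: torch.Tensor,
                labels: torch.Tensor, margin: float = 0.3) -> torch.Tensor:
    """AMA-style term: push each synthetic feature at least `margin` radians
    away from its class center, enlarging intra-class angles (the opposite of
    the usual pull-to-center objective, hence an adversarial process)."""
    f = F.normalize(features, dim=1)
    c = F.normalize(centers, dim=1)
    cos = (f * c[labels]).sum(dim=1).clamp(-1 + 1e-7, 1 - 1e-7)
    theta = torch.acos(cos)               # angle to own class center
    return F.relu(margin - theta).mean()  # zero once the angle exceeds margin
```

In a data-free pipeline of this kind, such a penalty would typically be added with a small weight to the generator's synthesis loss, trading fidelity to the full-precision model's statistics against intra-class diversity.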

Related research

11/17/2021
IntraQ: Learning Synthetic Images with Intra-Class Heterogeneity for Zero-Shot Network Quantization
Learning to synthesize data has emerged as a promising direction in zero...

10/26/2022
Zero-Shot Learning of a Conditional Generative Adversarial Network for Data-Free Network Quantization
We propose a novel method for training a conditional generative adversar...

12/06/2021
A Generalized Zero-Shot Quantization of Deep Convolutional Neural Networks via Learned Weights Statistics
Quantizing the floating-point weights and activations of deep convolutio...

09/09/2021
Fine-grained Data Distribution Alignment for Post-Training Quantization
While post-training quantization receives popularity mostly due to its e...

03/31/2022
It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher
Model quantization is considered as a promising method to greatly reduce...

01/18/2023
ACQ: Improving Generative Data-free Quantization Via Attention Correction
Data-free quantization aims to achieve model quantization without access...
