Learning in School: Multi-teacher Knowledge Inversion for Data-Free Quantization

11/19/2020
by   Yuhang Li, et al.

Protecting the confidentiality of user data is a growing challenge in deep learning research. Data-free quantization has emerged as a promising way to compress models without access to user data. Without data, however, quantization naturally becomes less resilient and faces a higher risk of performance degradation. Prior works distill fake images by matching the activation distributions of a specific pre-trained model. Such fake data, however, cannot easily be transferred to other models and is optimized under an invariant objective, so it lacks the generalizability and diversity found in natural image datasets. To address these problems, we propose the Learning in School (LIS) algorithm, which generates images suitable for all models by inverting the knowledge held in multiple teachers. We further introduce a decentralized training strategy that samples teachers from hierarchical courses to maintain the diversity of the generated images. LIS data is highly diverse, not model-specific, and requires only one-time synthesis to generalize across multiple models and applications. Extensive experiments show that LIS images resemble natural images with high quality and high fidelity. On data-free quantization, LIS significantly surpasses existing model-specific methods. In particular, LIS data is effective for both post-training quantization and quantization-aware training on the ImageNet dataset, achieving up to a 33% top-1 accuracy uplift over existing methods.
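The multi-teacher inversion idea described above can be illustrated with a short sketch: synthetic images are optimized so that their batch statistics match the BatchNorm running statistics of several pre-trained teachers at once, together with a class-conditional term. The code below is a minimal PyTorch sketch under these assumptions; the teacher choices, loss weights, and optimization schedule are illustrative placeholders and not the authors' released implementation (which additionally uses the hierarchical-course sampling strategy).

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Sketch of multi-teacher image synthesis via BatchNorm statistic matching.
# Teachers, weights, and hyperparameters below are assumptions for illustration.

def register_bn_hooks(model, stats):
    """Record per-layer batch mean/variance of the synthetic batch."""
    handles = []
    for name, m in model.named_modules():
        if isinstance(m, torch.nn.BatchNorm2d):
            def hook(module, inp, out, key=name):
                x = inp[0]
                stats[key] = (x.mean(dim=(0, 2, 3)),
                              x.var(dim=(0, 2, 3), unbiased=False))
            handles.append(m.register_forward_hook(hook))
    return handles

def bn_loss(model, stats):
    """Distance between observed batch stats and the teacher's running stats."""
    loss = 0.0
    for name, m in model.named_modules():
        if isinstance(m, torch.nn.BatchNorm2d) and name in stats:
            mean, var = stats[name]
            loss = loss + F.mse_loss(mean, m.running_mean) + F.mse_loss(var, m.running_var)
    return loss

# Two example teachers; any set of pre-trained classifiers could be used.
teachers = [models.resnet18(pretrained=True).eval(),
            models.mobilenet_v2(pretrained=True).eval()]
for t in teachers:
    for p in t.parameters():
        p.requires_grad_(False)

images = torch.randn(8, 3, 224, 224, requires_grad=True)
labels = torch.randint(0, 1000, (8,))
opt = torch.optim.Adam([images], lr=0.05)

for step in range(500):
    opt.zero_grad()
    total = 0.0
    for t in teachers:  # invert knowledge from every teacher, not just one
        stats = {}
        handles = register_bn_hooks(t, stats)
        logits = t(images)
        for h in handles:
            h.remove()
        total = total + F.cross_entropy(logits, labels) + 0.1 * bn_loss(t, stats)
    total.backward()
    opt.step()
```

Because the loss is summed over all teachers, the resulting images are not tied to any single pre-trained model, which is what allows a one-time synthesis to serve multiple downstream quantization targets.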

