Towards Accurate Quantization and Pruning via Data-free Knowledge Transfer

10/14/2020
by Chen Zhu, et al.

When large-scale training data are available, quantization and pruning can effectively produce compact, accurate networks for deployment in resource-constrained environments. However, training data are often protected for privacy reasons, and obtaining compact networks without data is challenging. We study data-free quantization and pruning by transferring knowledge from trained large networks to compact ones. Auxiliary generators are trained simultaneously and adversarially with the target compact networks to generate synthetic inputs that maximize the discrepancy between the given large network and its quantized or pruned version. We show theoretically that the alternating optimization of the underlying minimax problem converges under mild conditions for both pruning and quantization. Our data-free compact networks achieve accuracy competitive with networks trained and fine-tuned on the original training data, while being substantially more compact and lightweight. Further, we demonstrate that the compact structure and corresponding initialization from the Lottery Ticket Hypothesis can also help in data-free training.
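To make the adversarial training scheme concrete, below is a minimal sketch of one alternating step of the minimax game, assuming a PyTorch setup with a frozen pretrained teacher, a compact student (the quantized or pruned network), and an auxiliary generator. The function name, optimizer handles (opt_g, opt_s), and the L1 discrepancy are illustrative assumptions, not the paper's exact objective.

```python
# Hypothetical sketch of the data-free minimax game: the generator maximizes
# the teacher/student discrepancy; the compact student minimizes it.
import torch
import torch.nn.functional as F

def alternating_step(teacher, student, generator, opt_g, opt_s,
                     batch_size=64, latent_dim=128):
    """One generator (max) step followed by one compact-network (min) step."""
    teacher.eval()
    teacher.requires_grad_(False)  # the large pretrained network stays frozen

    # Generator step: synthesize inputs on which teacher and student disagree.
    z = torch.randn(batch_size, latent_dim)
    x = generator(z)
    loss_g = -F.l1_loss(student(x), teacher(x))  # ascend the discrepancy
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

    # Student step: shrink the discrepancy on a fresh synthetic batch.
    # (For quantization, the student forward would route through a
    # straight-through quantizer; omitted here for brevity.)
    z = torch.randn(batch_size, latent_dim)
    with torch.no_grad():
        x = generator(z)                  # generator held fixed this step
        t_out = teacher(x)
    loss_s = F.l1_loss(student(x), t_out)        # descend the discrepancy
    opt_s.zero_grad()
    loss_s.backward()
    opt_s.step()
    return loss_g.item(), loss_s.item()
```

Iterating this step is the alternating optimization whose convergence the paper analyzes; in practice no real training examples are touched, only the generator's synthetic inputs.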

