It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher

03/31/2022
by Kanghyun Choi, et al.

Model quantization is considered a promising method to greatly reduce the resource requirements of deep neural networks. To deal with the performance drop induced by quantization errors, a popular approach is to fine-tune the quantized network on the original training data. In real-world environments, however, this is frequently infeasible because the training data is unavailable due to security, privacy, or confidentiality concerns. Zero-shot quantization addresses such problems, usually by drawing information from the weights of a full-precision teacher network to compensate for the performance drop of the quantized network. In this paper, we first analyze the loss surface of state-of-the-art zero-shot quantization techniques and report several findings. In contrast to usual knowledge distillation problems, zero-shot quantization often suffers from 1) the difficulty of optimizing multiple loss terms together, and 2) poor generalization due to the use of synthetic samples. Furthermore, we observe that many weights fail to cross the rounding threshold while the quantized network is trained, even when crossing is necessary for better performance. Based on these observations, we propose AIT, a simple yet powerful technique for zero-shot quantization, which addresses the two problems as follows: AIT i) uses only a KL distance loss, without a cross-entropy loss, and ii) manipulates gradients to guarantee that a certain portion of weights are properly updated after crossing the rounding thresholds. Experiments show that AIT outperforms many existing methods by a large margin, taking over the overall state-of-the-art position in the field.
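
The abstract names two concrete ingredients of AIT: training the quantized student with a KL distance to the full-precision teacher only (no cross-entropy term), and manipulating gradients so that a sufficient portion of latent weights actually crosses their rounding thresholds. The PyTorch sketch below illustrates how such ingredients could look in code; the function names, the integer rounding grid, the plain-SGD step size, and the 1% crossing target are assumptions made for illustration, not the paper's released implementation.

```python
# Minimal sketch of the two ideas described in the abstract (PyTorch).
# All names and heuristics here are illustrative assumptions, not the
# authors' published implementation.

import torch
import torch.nn.functional as F


def kl_only_distillation_loss(student_logits, teacher_logits, temperature=1.0):
    """Idea (i): optimize only a KL distance between the quantized student
    and the full-precision teacher, with no cross-entropy term on labels."""
    t = temperature
    teacher_prob = F.softmax(teacher_logits / t, dim=-1)
    student_log_prob = F.log_softmax(student_logits / t, dim=-1)
    # 'batchmean' matches the mathematical definition of KL divergence;
    # the t^2 factor is the usual knowledge-distillation scaling.
    return F.kl_div(student_log_prob, teacher_prob, reduction="batchmean") * (t * t)


def boost_gradients_for_crossing(latent_weight, grad, lr, target_ratio=0.01):
    """Idea (ii), sketched: uniformly scale up the gradient of a latent
    (full-precision) weight tensor so that at least `target_ratio` of its
    elements would cross their nearest rounding threshold in one plain-SGD
    step of size `lr`. The paper's actual gradient-manipulation rule may
    differ from this heuristic."""
    with torch.no_grad():
        # Distance of each latent weight to its nearest rounding threshold,
        # assuming an integer rounding grid for illustration.
        dist = (latent_weight - torch.floor(latent_weight) - 0.5).abs()
        move = (lr * grad).abs()
        crossing_ratio = (move > dist).float().mean()
        if crossing_ratio < target_ratio:
            # Scale factor at which about `target_ratio` of the weights
            # would cross; never shrink the gradient (clamp at 1.0).
            scale = torch.quantile((dist / (move + 1e-12)).flatten(), target_ratio)
            grad = grad * torch.clamp(scale, min=1.0)
    return grad
```

In a hypothetical training loop, one would call loss.backward() on the KL-only loss and then, for each quantized layer, replace param.grad with boost_gradients_for_crossing(param, param.grad, lr) before the optimizer step.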

Related research

12/09/2022
Genie: Show Me the Data for Quantization
Zero-shot quantization is a promising approach for developing lightweigh...

03/24/2023
Hard Sample Matters a Lot in Zero-Shot Quantization
Zero-shot quantization (ZSQ) is promising for compressing and accelerati...

03/29/2021
Zero-shot Adversarial Quantization
Model quantization is a promising approach to compress deep neural netwo...

12/06/2021
A Generalized Zero-Shot Quantization of Deep Convolutional Neural Networks via Learned Weights Statistics
Quantizing the floating-point weights and activations of deep convolutio...

11/17/2021
IntraQ: Learning Synthetic Images with Intra-Class Heterogeneity for Zero-Shot Network Quantization
Learning to synthesize data has emerged as a promising direction in zero...

11/17/2022
Zero-Shot Dynamic Quantization for Transformer Inference
We introduce a novel run-time method for significantly reducing the accu...

11/13/2022
Long-Range Zero-Shot Generative Deep Network Quantization
Quantization approximates a deep network model with floating-point numbe...
