Exploiting the Partly Scratch-off Lottery Ticket for Quantization-Aware Training

11/12/2022
by Yunshan Zhong, et al.

Quantization-aware training (QAT) has gained extensive popularity because it well retains the performance of quantized networks. In QAT, the prevailing practice is to update all quantized weights throughout the entire training process. In this paper, we challenge this practice based on an interesting phenomenon we observed: a large portion of the quantized weights reaches the optimal quantization level after only a few training epochs, which we refer to as the partly scratch-off lottery ticket. This straightforward yet valuable observation naturally inspires us to zero out the gradient calculations of these weights for the remainder of training to avoid meaningless updates. To find the ticket effectively, we develop a heuristic method, dubbed the lottery ticket scratcher (LTS), which freezes a weight once the distance between its full-precision value and its quantization level falls below a controllable threshold. Surprisingly, the proposed LTS typically eliminates 30%-60% of weight updates and 15%-30% of the FLOPs of the backward pass, while still achieving performance on par with or even better than the compared baseline. For example, LTS improves 2-bit ResNet-18 over the baseline by 1.41%, eliminating 56% of weight updates and 28% of the backward-pass FLOPs.
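The freezing rule is simple enough to sketch in code. The following is a minimal PyTorch sketch of the idea as described in the abstract, not the authors' released implementation; the toy quantizer, the threshold value, and the helper names (quantize_ste, update_freeze_mask, zero_frozen_gradients) are illustrative assumptions.

```python
import torch


def quantize_ste(w, bits=2):
    """Toy symmetric uniform quantizer with a straight-through estimator,
    included only to make the sketch runnable end to end."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.detach().abs().max().clamp(min=1e-8) / max(qmax, 1)
    q = (w / scale).round().clamp(-qmax - 1, qmax) * scale
    return w + (q - w).detach()  # forward: quantized value, backward: identity


def update_freeze_mask(weight, freeze_mask, threshold, quantize=quantize_ste):
    """Mark weights whose full-precision value lies within `threshold` of its
    quantization level; once marked, a weight stays frozen."""
    with torch.no_grad():
        dist = (weight - quantize(weight)).abs()
        freeze_mask |= dist < threshold
    return freeze_mask


def zero_frozen_gradients(weight, freeze_mask):
    """Zero the gradients of frozen weights so the optimizer no longer updates them."""
    if weight.grad is not None:
        weight.grad[freeze_mask] = 0.0


# Example usage inside a training step (after loss.backward(), before optimizer.step()):
w = torch.nn.Parameter(torch.randn(64, 64))
mask = torch.zeros_like(w, dtype=torch.bool)
loss = quantize_ste(w).pow(2).sum()
loss.backward()
mask = update_freeze_mask(w, mask, threshold=0.05)
zero_frozen_gradients(w, mask)
```

Zeroing the gradients of frozen latent weights is what allows the corresponding weight updates, and part of the backward-pass computation, to be skipped.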

