Once Quantized for All: Progressively Searching for Quantized Efficient Models

10/09/2020
by Mingzhu Shen, et al.

Automatic search of Quantized Neural Networks has attracted a lot of attention. However, existing quantization-aware Neural Architecture Search (NAS) approaches inherit a two-stage search-retrain scheme, which is not only time-consuming but also adversely affected by the unreliable ranking of architectures during the search. To avoid the undesirable effects of the search-retrain scheme, we present Once Quantized for All (OQA), a novel framework that searches for quantized efficient models and deploys their quantized weights at the same time without additional post-processing. While supporting a huge architecture search space, our OQA can produce a series of ultra-low bit-width (e.g., 4/3/2-bit) quantized efficient models. A progressive bit inheritance procedure is introduced to support ultra-low bit-widths. Our discovered model family, OQANets, achieves new state-of-the-art (SOTA) results among quantized efficient models across various quantization methods and bit-widths. In particular, OQA2bit-L achieves 64.0% ImageNet top-1 accuracy, outperforming its 2-bit counterpart EfficientNet-B0@QKD by a large margin of 14%. Code is available at https://github.com/LaVieEnRoseSMZ/OQA.
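To illustrate the progressive bit inheritance idea described above (training a quantized weight-sharing supernet at a higher bit-width, then using its weights to initialize the next lower bit-width), here is a minimal sketch. It assumes a symmetric uniform weight quantizer with a straight-through estimator; the names SuperNet, uniform_quantize, and train_one_bit_width are hypothetical and are not taken from the official OQA code at https://github.com/LaVieEnRoseSMZ/OQA.

```python
# Minimal sketch of progressive bit inheritance (4-bit -> 3-bit -> 2-bit).
# All class/function names here are illustrative, not the paper's actual API.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


def uniform_quantize(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Symmetric uniform quantization of a weight tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    w_q = torch.round(w / scale).clamp(-qmax - 1, qmax) * scale
    # Straight-through estimator: forward uses w_q, backward flows to w.
    return w + (w_q - w).detach()


class SuperNet(nn.Module):
    """Toy weight-sharing network standing in for the searchable supernet."""

    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(32, 64), nn.Linear(64, 10)])

    def forward(self, x: torch.Tensor, bits: int) -> torch.Tensor:
        out = x
        for i, layer in enumerate(self.layers):
            # Quantize weights on the fly at the requested bit-width.
            w_q = uniform_quantize(layer.weight, bits)
            out = F.linear(out, w_q, layer.bias)
            if i < len(self.layers) - 1:
                out = F.relu(out)
        return out


def train_one_bit_width(net: SuperNet, bits: int, steps: int = 100) -> SuperNet:
    """Placeholder quantization-aware training loop at a single bit-width."""
    opt = torch.optim.SGD(net.parameters(), lr=0.01)
    for _ in range(steps):
        x = torch.randn(16, 32)                 # dummy inputs
        y = torch.randint(0, 10, (16,))         # dummy labels
        loss = F.cross_entropy(net(x, bits), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net


# Progressive bit inheritance: train the 4-bit supernet first, then reuse its
# weights to initialize the 3-bit supernet, and again for the 2-bit supernet,
# keeping one deployable quantized supernet per bit-width.
nets = {}
net = SuperNet()
for bits in (4, 3, 2):
    net = train_one_bit_width(net, bits)
    nets[bits] = copy.deepcopy(net)
```

In this sketch the lower bit-width stage starts from the already-trained higher bit-width weights rather than from scratch, which is the inheritance step; the quantizer, optimizer, and training data are stand-ins chosen only to make the example self-contained.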
