SinReQ: Generalized Sinusoidal Regularization for Automatic Low-Bitwidth Deep Quantized Training

05/04/2019
by Ahmed T. Elthakeb, et al.

Quantization of neural networks offers significant promise in reducing their compute and storage cost. Albeit alluring, without domain experts devising special handcrafted optimization techniques or making ad-hoc modifications to the original network architecture, deep quantization (below 8 bits) results in an unrecoverable accuracy gap between the quantized model and its full-precision counterpart. We propose a novel sinusoidal regularization, dubbed SinReQ, for low-precision deep quantized training. The proposed method automatically yields semi-quantized weights at predefined target bitwidths during conventional training. The regularization is realized by adding a periodic (sinusoidal) term to the original objective function. We exploit the inherent periodicity and desirable convexity profile of sinusoidal functions to automatically propel weights toward the target quantization levels during conventional training. Our method combines generality, by providing the flexibility of arbitrary-bit quantization, with customization, by optimizing different layer-wise regularizers simultaneously. Preliminary experiments on CIFAR-10 and SVHN show that integrating SinReQ into the training algorithm achieves 2.82% accuracy improvements over the DoReFa (Zhou et al., 2016) and WRPN (Mishra et al., 2018) methods.
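To make the idea concrete, below is a minimal PyTorch sketch of a sinusoidal weight regularizer in the spirit of the abstract. The function name sinreq_penalty, the uniform quantization step delta = 1 / (2^b - 1), and the regularization strength are illustrative assumptions rather than the paper's exact formulation; the point is that the sin^2 term vanishes exactly at the quantization levels, so its gradient nudges weights toward the grid while the task loss is optimized as usual.

```python
import math
import torch


def sinreq_penalty(weights: torch.Tensor, num_bits: int, strength: float = 1e-2) -> torch.Tensor:
    """Periodic sin^2 penalty that is zero exactly on a uniform quantization
    grid, pulling weights toward quantization levels during ordinary
    gradient-based training (illustrative sketch, not the paper's exact form)."""
    # Assumed uniform grid with step 1 / (2**b - 1); the paper's scaling may differ.
    delta = 1.0 / (2 ** num_bits - 1)
    return strength * torch.sin(math.pi * weights / delta).pow(2).sum()


if __name__ == "__main__":
    # Toy example: the penalty's gradient pushes each weight toward the
    # nearest grid point; in practice this term would be added per layer
    # (possibly with layer-specific bitwidths) to the task loss.
    w = torch.randn(8, requires_grad=True)
    penalty = sinreq_penalty(w, num_bits=3)
    penalty.backward()
    print(penalty.item(), w.grad)
```

Because the penalty is just another differentiable term in the objective, a layer-wise variant simply sums one such term per layer with its own target bitwidth and strength, which is how the claimed per-layer customization would compose with a standard training loop.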

Related research

- Gradient-Based Deep Quantization of Neural Networks through Sinusoidal Adaptive Regularization (02/29/2020)
- On Periodic Functions as Regularizers for Quantization of Neural Networks (11/24/2018)
- Training Quantized Deep Neural Networks via Cooperative Coevolution (12/23/2021)
- Semi-Relaxed Quantization with DropBits: Training Low-Bit Neural Networks via Bit-wise Regularization (11/29/2019)
- BinaryRelax: A Relaxation Approach For Training Deep Neural Networks With Quantized Weights (01/19/2018)
- Word2Bits - Quantized Word Vectors (03/15/2018)
- Divide and Conquer: Leveraging Intermediate Feature Representations for Quantized Training of Neural Networks (06/14/2019)
