GradFreeBits: Gradient Free Bit Allocation for Dynamic Low Precision Neural Networks

02/18/2021 · by Benjamin J. Bodner, et al.

Quantized neural networks (QNNs) are among the main approaches for deploying deep neural networks on low-resource edge devices. Training QNNs with different levels of precision across the network (dynamic quantization) typically achieves superior trade-offs between performance and computational load. However, optimizing the precision levels of a QNN is complicated: bit allocations take discrete values, so they are difficult to differentiate through, and adequately accounting for the dependencies between the bit allocations of different layers is not straightforward. To meet these challenges, in this work we propose GradFreeBits: a novel joint optimization scheme for training dynamic QNNs, which alternates between gradient-based optimization for the weights and gradient-free optimization for the bit allocation. Our method achieves performance on par with or better than current state-of-the-art low-precision neural networks on CIFAR10/100 and ImageNet classification. Furthermore, our approach can be extended to a variety of other applications in which neural networks are used in conjunction with parameters that are difficult to optimize with gradients.
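To make the alternating structure concrete, here is a minimal sketch on a toy NumPy regression problem. The abstract describes the loop but not its components, so everything below the loop structure is an assumption: the uniform quantizer, the straight-through-style gradient approximation, the bit-budget penalty, and the random perturbation search (standing in for whichever gradient-free optimizer the paper actually uses) are illustrative stand-ins, not the paper's design.

```python
# Sketch of the alternating scheme: gradient-based updates for the weights,
# gradient-free search over per-layer bit allocations. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, bits):
    # Uniform symmetric quantization (assumed quantizer, not the paper's).
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / levels
    return np.round(w / scale) * scale if scale > 0 else w

def forward(x, weights, bits):
    # Two-layer network evaluated with quantized weights.
    h = np.tanh(x @ quantize(weights[0], bits[0]))
    return h @ quantize(weights[1], bits[1]), h

def loss(y_hat, y):
    return np.mean((y_hat - y) ** 2)

# Synthetic regression task.
X = rng.normal(size=(256, 16))
Y = np.sin(X.sum(axis=1, keepdims=True))
weights = [rng.normal(scale=0.3, size=(16, 32)),
           rng.normal(scale=0.3, size=(32, 1))]
bits = [8, 8]            # initial per-layer bit allocation
lr, budget = 0.05, 10    # bit budget and penalty are assumed, for illustration

for outer in range(20):
    # Gradient-based phase: SGD on the float weights. Gradients flow through
    # the quantized forward pass as if quantization were the identity
    # (a straight-through-style approximation, assumed here for simplicity).
    for _ in range(50):
        y_hat, h = forward(X, weights, bits)
        g_out = 2 * (y_hat - Y) / len(X)
        g_w1 = h.T @ g_out
        g_h = g_out @ quantize(weights[1], bits[1]).T
        g_w0 = X.T @ (g_h * (1 - h ** 2))
        weights[0] -= lr * g_w0
        weights[1] -= lr * g_w1

    # Gradient-free phase: random perturbation search over bit allocations,
    # scoring task loss plus a penalty for exceeding the total bit budget.
    def score(b):
        y_hat, _ = forward(X, weights, b)
        return loss(y_hat, Y) + 0.01 * max(0, sum(b) - budget)

    best = bits
    for _ in range(10):
        cand = [int(np.clip(b + rng.integers(-1, 2), 2, 8)) for b in bits]
        if score(cand) < score(best):
            best = cand
    bits = best

print("final bit allocation:", bits)
```

Keeping the bit search gradient-free sidesteps the discreteness of the allocations entirely: each candidate allocation is scored by a forward pass alone, with no need for a differentiable relaxation of the bit widths.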


Related research

08/17/2018 · Joint Training of Low-Precision Neural Network with Quantization Interval Parameters
Optimization for low-precision neural network is an important technique ...

08/15/2018 · Blended Coarse Gradient Descent for Full Quantization of Deep Neural Networks
Quantized deep neural networks (QDNNs) are attractive due to their much ...

06/04/2022 · Combinatorial optimization for low bit-width neural networks
Low-bit width neural networks have been extensively explored for deployment ...

05/30/2022 · FBM: Fast-Bit Allocation for Mixed-Precision Quantization
Quantized neural networks are well known for reducing latency, power consumption ...

08/20/2022 · DenseShift: Towards Accurate and Transferable Low-Bit Shift Network
Deploying deep neural networks on low-resource edge devices is challenging ...

12/20/2018 · SQuantizer: Simultaneous Learning for Both Sparse and Low-precision Neural Networks
Deep neural networks have achieved state-of-the-art accuracies in a wide ...

03/02/2021 · All at Once Network Quantization via Collaborative Knowledge Transfer
Network quantization has rapidly become one of the most widely used methods ...
