Mirror Descent View for Neural Network Quantization

10/18/2019
by Thalaiyasingam Ajanthan, et al.

Quantizing large Neural Networks (NNs) while maintaining their performance is highly desirable for resource-limited devices, as it reduces memory and time complexity. NN quantization is usually formulated as a constrained optimization problem and optimized via a modified version of gradient descent. In this work, by interpreting the continuous (unconstrained) parameters as the dual of the quantized ones, we introduce a Mirror Descent (MD) framework (Bubeck (2015)) for NN quantization. Specifically, we provide conditions on the projections (i.e., mappings from continuous to quantized parameters) that enable us to derive valid mirror maps and, in turn, the respective MD updates. Furthermore, we discuss a numerically stable implementation of MD that stores an additional set of auxiliary (continuous) dual variables. This update is strikingly analogous to the popular Straight-Through Estimator (STE) based method, which is typically viewed as a "trick" to avoid the vanishing-gradients issue; here we show that it is in fact an implementation of MD for certain projections. Our experiments on standard classification datasets (CIFAR-10/100, TinyImageNet) with convolutional and residual architectures show that our MD variants obtain fully quantized networks with accuracies very close to those of the floating-point networks.
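
As a rough sketch of the update described above (not the paper's exact method), the Python/NumPy snippet below stores an auxiliary continuous dual variable, takes the gradient step on that variable, and recovers the constrained primal weights through a hypothetical tanh mirror map; the STE analogy is that the gradient is computed through the projected weights but applied to the continuous copy.

    import numpy as np

    def md_binary_step(aux, grad, lr=0.1, beta=5.0):
        # One MD-style step for binary quantization (illustrative sketch;
        # the tanh projection and all names here are assumptions, not the
        # paper's exact mirror map).
        #   aux  : stored auxiliary continuous (dual) variable
        #   grad : gradient of the loss w.r.t. the projected (primal) weights
        aux = aux - lr * grad        # gradient step in the dual space
        w = np.tanh(beta * aux)      # mirror map: dual -> primal in (-1, 1)
        return aux, w

    # At inference time the hard quantization, e.g. np.sign(w), would be used.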




Code Repositories

md-bnn

Code implementation of our AISTATS'21 paper "Mirror Descent View for Neural Network Quantization"
