Training Quantised Neural Networks with STE Variants: the Additive Noise Annealing Algorithm

03/21/2022
by Matteo Spallanzani, et al.

Training quantised neural networks (QNNs) is a non-differentiable optimisation problem, since weights and features are output by piecewise constant functions. The standard solution is to apply the straight-through estimator (STE), using different functions during the inference and gradient computation steps. Several STE variants have been proposed in the literature, each aiming to maximise the task accuracy of the trained network. In this paper, we analyse STE variants and study their impact on QNN training. We first observe that most such variants can be modelled as stochastic regularisations of stair functions; although this intuitive interpretation is not new, our rigorous discussion generalises to further variants. Then, we analyse QNNs mixing different regularisations, finding that a suitably synchronised smoothing of each layer map is required to guarantee pointwise compositional convergence to the target discontinuous function. Based on these theoretical insights, we propose additive noise annealing (ANA), a new algorithm to train QNNs encompassing standard STE and its variants as special cases. When testing ANA on the CIFAR-10 image classification benchmark, we find that the major impact on task accuracy is not due to the qualitative shape of the regularisations but to the proper synchronisation of the different STE variants used in a network, in accordance with the theoretical results.
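To make the STE mechanism described in the abstract concrete, the sketch below implements a rounding quantiser whose forward pass is the stair function and whose backward pass uses the identity as a surrogate gradient. It also illustrates, purely as an assumption-laden example of the stochastic-regularisation view, how adding uniform noise before rounding smooths the expected forward map, with the noise scale acting as the quantity one would anneal. This is a minimal PyTorch-style sketch under our own assumptions, not the authors' reference implementation of ANA.

```python
import torch


class RoundSTE(torch.autograd.Function):
    """Stair function in the forward pass, identity surrogate in the backward pass (standard STE)."""

    @staticmethod
    def forward(ctx, x):
        # Piecewise constant map: its true gradient is zero almost everywhere.
        return torch.round(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: propagate the incoming gradient as if the forward map were the identity.
        return grad_output


def noisy_round(x, noise_scale):
    """Illustrative stochastic regularisation of the stair function (not the paper's exact scheme):
    adding uniform noise before rounding makes the *expected* output a smoothed staircase.
    Annealing noise_scale towards 0 recovers the hard quantiser."""
    if noise_scale > 0:
        x = x + (torch.rand_like(x) - 0.5) * noise_scale
    return RoundSTE.apply(x)


if __name__ == "__main__":
    x = torch.linspace(-2.0, 2.0, 9, requires_grad=True)
    y = noisy_round(x, noise_scale=0.5)
    y.sum().backward()
    print(y)       # quantised (noisy) forward values
    print(x.grad)  # all ones: the gradient was passed straight through the quantiser
```

In this sketch, hypothetical helpers such as `noisy_round` and the choice of uniform noise are illustrative; the paper's ANA algorithm additionally prescribes how the smoothing of the different layers must be synchronised during annealing.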


