On the Convergence of the Gradient Descent Method with Stochastic Fixed-point Rounding Errors under the Polyak-Łojasiewicz Inequality

01/23/2023
by Lu Xia, et al.

When training neural networks with low-precision computation, rounding errors often cause stagnation or are otherwise detrimental to the convergence of the optimizer. In this paper we study the influence of rounding errors on the convergence of the gradient descent method for problems satisfying the Polyak-Łojasiewicz inequality. Within this context, we show that, in contrast, biased stochastic rounding errors may be beneficial: choosing a proper rounding strategy eliminates the vanishing gradient problem and forces the rounding bias into a descent direction. Furthermore, we obtain a bound on the convergence rate that is stricter than the one achieved by unbiased stochastic rounding. The theoretical analysis is validated by comparing the performance of various rounding strategies when optimizing several example problems using low-precision fixed-point number formats.
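As a rough illustration of the stagnation issue and of why the rounding strategy matters, the sketch below compares round-to-nearest, unbiased stochastic rounding, and a hand-made descent-biased variant of stochastic rounding on a toy quadratic objective (which satisfies the Polyak-Łojasiewicz inequality). The specific biasing rule, the fixed-point spacing, and all parameter values are illustrative assumptions for this sketch, not the exact scheme analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sr_unbiased(x, eps):
    """Unbiased stochastic rounding onto a fixed-point grid with spacing eps:
    round up with probability equal to the relative distance from the lower
    grid point, so that E[round(x)] = x."""
    low = eps * np.floor(x / eps)
    p_up = (x - low) / eps
    return low + eps * (rng.random(np.shape(x)) < p_up)

def sr_descent_biased(x, step, eps, beta=0.25):
    """Illustrative *biased* stochastic rounding: the round-up probability is
    tilted toward the sign of `step` (here the descent direction -lr*grad), so
    that small updates are not systematically rounded back to the old iterate.
    The biasing rule and beta are assumptions made for this sketch."""
    low = eps * np.floor(x / eps)
    p_up = (x - low) / eps
    p_up = np.clip(p_up + beta * np.sign(step), 0.0, 1.0)
    return low + eps * (rng.random(np.shape(x)) < p_up)

# Toy objective f(w) = 0.5 * ||w||^2, whose gradient is w; it is strongly
# convex and therefore satisfies the Polyak-Łojasiewicz inequality.
grad = lambda w: w

eps = 2.0 ** -8                      # fixed-point spacing (8 fractional bits)
lr = 1e-3                            # small enough that |lr*grad| < eps/2
w_nearest = np.full(3, 0.25)         # iterate with round-to-nearest
w_sr = np.full(3, 0.25)              # iterate with unbiased stochastic rounding
w_bias = np.full(3, 0.25)            # iterate with descent-biased rounding

for _ in range(2000):
    # Round-to-nearest: the update is smaller than half the grid spacing,
    # so the iterate never moves (stagnation / "vanishing gradient").
    step = -lr * grad(w_nearest)
    w_nearest = eps * np.round((w_nearest + step) / eps)

    # Unbiased stochastic rounding: progress in expectation.
    step = -lr * grad(w_sr)
    w_sr = sr_unbiased(w_sr + step, eps)

    # Biased stochastic rounding toward the descent direction.
    step = -lr * grad(w_bias)
    w_bias = sr_descent_biased(w_bias + step, step, eps)

print("round-to-nearest :", np.linalg.norm(w_nearest))   # unchanged
print("unbiased SR      :", np.linalg.norm(w_sr))        # decreased
print("descent-biased SR:", np.linalg.norm(w_bias))      # decreased faster
```

Running the sketch shows the qualitative behaviour discussed in the abstract: round-to-nearest stagnates once the per-step update drops below half the fixed-point spacing, while both stochastic rounding variants keep making progress, with the biased variant descending more quickly in this toy setting.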


