Improving SGD convergence by tracing multiple promising directions and estimating distance to minimum

01/31/2019
by   Jarek Duda, et al.

Deep neural networks are usually trained with stochastic gradient descent (SGD), which optimizes parameters θ ∈ R^D to minimize an objective function using very rough gradient approximations that average to the true gradient. To improve convergence, such methods maintain a state representing the current situation, for example momentum as a local average of gradients: the gradient calculated in each successive step updates this state, which in turn governs the update of the θ parameters. However, this high-dimensional optimization may have more than one direction worth tracing, and the standard methods do not try to estimate the distance to the minimum in order to adapt the step size. Modelling the Hessian would allow tracing all directions and estimating the distance to the minimum, but it is usually considered too costly. We can reduce this representation to make it more practical: model only some d ≪ D of the most promising (eigen)directions, choosing d to give the best trade-off. We discuss using as state a parametrisation with an approximated Hessian, H ≈ ∑_{i=1}^{d} λ_i v_i v_i^T, preferably keeping only positive eigenvalues, so as to model the lowest valley nearby. In every step the calculated gradient is used to update this local model, and the θ parameters are then shifted toward the closest minimum of this approximation of the local behavior of the objective function. Compared with a single momentum vector, such a method remembers and updates multiple (d) promising directions, and adapts the step size using the estimated distance to the minimum.
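To make the idea concrete, below is a minimal numerical sketch of one such step. It is not the paper's exact update rule: the helper names low_rank_second_order_step and update_curvature, the fixed set of d tracked directions V, and the hyperparameters lr, beta, eps are all illustrative assumptions. The sketch takes a Newton-like step along modeled directions with positive estimated curvature (so the step length is the estimated distance to the minimum along that direction), a plain SGD step in the unmodeled complement, and refreshes the curvature estimates from finite differences of successive gradients.

import numpy as np

def low_rank_second_order_step(theta, grad, V, lam, lr=0.01, eps=1e-8):
    """One step using the rank-d model H ~ sum_i lam[i] * V[:, i] V[:, i]^T.

    theta : (D,)   current parameters
    grad  : (D,)   stochastic gradient at theta
    V     : (D, d) orthonormal estimates of promising Hessian eigendirections
    lam   : (d,)   corresponding curvature (eigenvalue) estimates
    """
    g_sub = V.T @ grad                              # gradient in the modeled subspace
    # Newton-like step g_i / lam_i where curvature is positive (estimated
    # distance to the minimum along v_i); otherwise fall back to plain SGD.
    step_sub = np.where(lam > eps, g_sub / np.maximum(lam, eps), lr * g_sub)
    grad_rest = grad - V @ g_sub                    # component outside the subspace
    return theta - V @ step_sub - lr * grad_rest

def update_curvature(lam, V, grad, grad_prev, theta, theta_prev, beta=0.9, eps=1e-8):
    """Crude online curvature update along the (here fixed) directions V,
    from a secant condition on successive gradients: lam_i ~ dg_i / dtheta_i."""
    dg = V.T @ (grad - grad_prev)
    dth = V.T @ (theta - theta_prev)
    lam_new = dg / np.where(np.abs(dth) > eps, dth, eps)
    return beta * lam + (1.0 - beta) * lam_new      # exponential moving average

The guard in np.where falls back to a plain gradient step along directions whose curvature estimate is not positive, reflecting the preference for positive eigenvalues mentioned above; a full method would also update the directions V themselves from the incoming gradients.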
