Convergence diagnostics for stochastic gradient descent with constant step size

10/17/2017
by Jerry Chee, et al.

Iterative procedures in stochastic optimization typically consist of a transient phase and a stationary phase. During the transient phase the procedure converges towards a region of interest, and during the stationary phase it oscillates within a convergence region, commonly around a single point. In this paper, we develop a statistical diagnostic test to detect this phase transition in the context of stochastic gradient descent with constant step size. We present theoretical and experimental results suggesting that the diagnostic behaves as intended and that the region where it activates coincides with the convergence region. For a class of loss functions, we derive a closed-form description of this region and support the theoretical result with simulated experiments. Finally, we suggest an application that speeds up convergence of stochastic gradient descent by halving the learning rate each time convergence is detected. This yields remarkable speed gains that are empirically comparable to state-of-the-art procedures.

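The abstract does not spell out the test statistic, so the sketch below is only a minimal illustration under one assumption: the diagnostic tracks the running sum of inner products of successive stochastic gradients (positive while the iterates still drift toward the optimum, drifting negative once they oscillate around it, in the spirit of Pflug's diagnostic), and the learning rate is halved each time that sum crosses zero. The names `sgd_with_halving`, `grad_fn`, and `burn_in` are hypothetical, not from the paper.

```python
import numpy as np

def sgd_with_halving(grad_fn, x0, lr=0.1, n_iters=10_000, burn_in=100, rng=None):
    """Constant-step-size SGD with a convergence diagnostic that halves
    the learning rate each time convergence is detected.

    Assumed diagnostic (for illustration only): the running sum of inner
    products of successive stochastic gradients. In the transient phase
    successive gradients tend to align (positive inner products); in the
    stationary phase they decorrelate and the sum drifts negative.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    g_prev = grad_fn(x, rng)
    s, count = 0.0, 0
    for _ in range(n_iters):
        g = grad_fn(x, rng)
        s += float(g @ g_prev)          # running inner-product statistic
        g_prev = g
        x = x - lr * g                  # constant-step-size SGD update
        count += 1
        if count >= burn_in and s < 0:  # convergence detected
            lr *= 0.5                   # halve the learning rate ...
            s, count = 0.0, 0           # ... and restart the diagnostic
    return x

# Toy usage: noisy gradient of f(x) = 0.5 * ||x||^2.
if __name__ == "__main__":
    grad = lambda x, rng: x + rng.normal(scale=0.5, size=x.shape)
    x_hat = sgd_with_halving(grad, x0=np.ones(5), lr=0.5)
    print(x_hat)
```

The burn-in period after each halving is one simple way to keep noise in the statistic from triggering a false detection immediately after a restart; how the paper guards against this is not stated in the abstract.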