Trajectory of Mini-Batch Momentum: Batch Size Saturation and Convergence in High Dimensions

06/02/2022 ∙ by Kiwon Lee, et al.
We analyze the dynamics of large-batch stochastic gradient descent with momentum (SGD+M) on the least squares problem when both the number of samples and the dimension are large. In this setting, we show that the dynamics of SGD+M converge to a deterministic discrete Volterra equation as the dimension increases, which we then analyze. We identify a stability measure, the implicit conditioning ratio (ICR), which regulates the ability of SGD+M to accelerate. When the batch size exceeds the ICR, SGD+M converges linearly at a rate of 𝒊(1/√κ), matching optimal full-batch momentum; in particular, it performs as well as full-batch momentum while using only a fraction of the data per iteration. For batch sizes smaller than the ICR, in contrast, the convergence rate of SGD+M scales like a multiple of the single-batch SGD rate. We give explicit choices for the learning rate and momentum parameter, in terms of the Hessian spectrum, that achieve this performance.
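
The following is a minimal sketch of the setting the abstract describes: mini-batch SGD with (heavy-ball) momentum on a noiseless least squares problem. The batch sizes, learning rate gamma, and momentum beta below are illustrative placeholders, not the paper's tuned choices; the paper derives explicit values from the Hessian spectrum and compares trajectories as the batch size crosses the ICR.

```python
# Sketch: mini-batch SGD with momentum (SGD+M) on least squares
#   min_x (1/2n) ||A x - b||^2,  with b = A x_star (realizable).
# Hyperparameters here are placeholders, not the paper's optimal choices.
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 500                        # many samples and many dimensions
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star

def sgdm_least_squares(A, b, batch_size, gamma, beta, iters=2000):
    n, d = A.shape
    x = np.zeros(d)
    v = np.zeros(d)                     # momentum buffer
    losses = []
    for _ in range(iters):
        idx = rng.choice(n, size=batch_size, replace=False)
        grad = A[idx].T @ (A[idx] @ x - b[idx]) / batch_size
        v = beta * v - gamma * grad     # heavy-ball momentum update
        x = x + v
        losses.append(0.5 * np.mean((A @ x - b) ** 2))
    return x, losses

# Trajectories for a small and a large batch; the paper's result is that
# once the batch size exceeds the ICR, SGD+M matches full-batch momentum.
_, loss_small = sgdm_least_squares(A, b, batch_size=16,  gamma=0.02, beta=0.9)
_, loss_large = sgdm_least_squares(A, b, batch_size=512, gamma=0.02, beta=0.9)
print(loss_small[-1], loss_large[-1])
```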

