The Impact of Local Geometry and Batch Size on the Convergence and Divergence of Stochastic Gradient Descent

09/14/2017
by Vivak Patel, et al.

Stochastic small-batch (SB) methods, such as mini-batch Stochastic Gradient Descent (SGD), have been extremely successful in training neural networks with strong generalization properties. In the work of Keskar et al. (2017), an SB method's success in training neural networks was attributed to the fact that it converges to flat minima (minima whose Hessian has only small eigenvalues), while a large-batch (LB) method converges to sharp minima (minima whose Hessian has a few large eigenvalues). Commonly, this difference is attributed to the noisier gradients of SB methods, which allow SB iterates to escape from sharp minima. While this explanation is intuitive, in this work we offer an alternative mechanism: we argue that SGD escapes from or converges to minima based on a deterministic relationship between the learning rate, the batch size, and the local geometry of the minimizer. We derive the exact relationships through a rigorous mathematical analysis of the canonical quadratic sums problem, and then numerically study how these relationships extend to nonconvex, stochastic optimization problems. As a consequence, we offer a more complete explanation of why SB methods prefer flat minima while LB methods seem agnostic to them, which can be leveraged to design SB and LB training methods with tailored optimization properties.
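
As a rough illustration of the kind of learning-rate/batch-size/curvature relationship the abstract describes, the sketch below applies a standard mean-square linear-stability heuristic to a toy 1-D quadratic sum; it is not the paper's actual derivation, and the curvature distribution, batch sizes, and learning rate are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's derivation): a mean-square
# linear-stability heuristic for mini-batch SGD near the minimum of a 1-D
# quadratic sum F(x) = (1/N) * sum_i (a_i / 2) * x^2.  With learning rate lr
# and batch size b, the second moment of the iterates is multiplied per step by
#     rho = (1 - lr * mu)**2 + lr**2 * var / b,
# where mu = mean(a_i) measures sharpness and var = var(a_i) measures the
# spread of per-sample curvatures, so the minimum is stable iff rho < 1,
# i.e. lr < 2 * mu / (mu**2 + var / b).
rng = np.random.default_rng(0)
a = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # assumed per-sample curvatures


def max_stable_lr(curvatures, batch_size):
    """Largest learning rate keeping the minimum mean-square stable (toy model)."""
    mu, var = curvatures.mean(), curvatures.var()
    return 2.0 * mu / (mu**2 + var / batch_size)


# The largest stable learning rate shrinks as the batch size shrinks:
for b in (1, 8, 64, 1024):
    print(f"batch size {b:5d}: max stable lr ~ {max_stable_lr(a, b):.3f}")

# Equivalently, at a fixed (lr, batch size) pair, sharper minima (scaled-up
# curvatures) lose stability first, and they lose it for the small batch
# before they lose it for the large batch.
lr, b_small, b_large = 0.55, 8, 1024
for sharpness in (1.0, 2.0, 4.0):
    print(f"sharpness x{sharpness}: "
          f"stable at b={b_small}? {lr < max_stable_lr(sharpness * a, b_small)}, "
          f"stable at b={b_large}? {lr < max_stable_lr(sharpness * a, b_large)}")
```

In this toy, the largest stable learning rate is 2μ/(μ² + σ²/b), so shrinking the batch size b or sharpening the curvature μ breaks stability first; this is the flavor of deterministic relationship between learning rate, batch size, and local geometry that the paper analyzes rigorously.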

Related research

12/02/2021
On Large Batch Training and Sharp Minima: A Fokker-Planck Perspective
We study the statistical properties of the dynamic trajectory of stochas...

01/19/2023
An SDE for Modeling SAM: Theory and Insights
We study the SAM (Sharpness-Aware Minimization) optimizer which has rece...

09/24/2020
How Many Factors Influence Minima in SGD?
Stochastic gradient descent (SGD) is often applied to train Deep Neural ...

07/06/2022
When does SGD favor flat minima? A quantitative characterization via linear stability
The observation that stochastic gradient descent (SGD) favors flat minim...

05/10/2019
The sharp, the flat and the shallow: Can weakly interacting agents learn to escape bad minima?
An open problem in machine learning is whether flat minima generalize be...

02/17/2023
SAM operates far from home: eigenvalue regularization as a dynamical phenomenon
The Sharpness Aware Minimization (SAM) optimization algorithm has been s...

10/04/2022
The Dynamics of Sharpness-Aware Minimization: Bouncing Across Ravines and Drifting Towards Wide Minima
We consider Sharpness-Aware Minimization (SAM), a gradient-based optimiz...
