Understanding Gradient Descent on Edge of Stability in Deep Learning

05/19/2022
by Sanjeev Arora, et al.

Deep learning experiments in Cohen et al. (2021) using deterministic Gradient Descent (GD) revealed an Edge of Stability (EoS) phase, in which the learning rate (LR) and the sharpness (i.e., the largest eigenvalue of the Hessian) no longer behave as in traditional optimization: the sharpness stabilizes around 2/LR, and the loss oscillates from iteration to iteration while still trending downward overall. The current paper mathematically analyzes a new mechanism of implicit regularization in the EoS phase, whereby GD updates, driven by the non-smooth loss landscape, turn out to evolve along a deterministic flow on the manifold of minimum loss. This contrasts with many previous results on implicit bias, which rely on either infinitesimal updates or noise in the gradient. Formally, for any smooth function L satisfying certain regularity conditions, this effect is demonstrated for (1) Normalized GD, i.e., GD with a varying LR η_t = η/||∇L(x(t))|| and loss L; and (2) GD with constant LR and loss √L. Both provably enter the Edge of Stability, with the associated flow on the manifold minimizing λ_max(∇^2 L). The above theoretical results are corroborated by an experimental study.
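To make the two update rules above concrete, here is a minimal, self-contained sketch (not the authors' code; the toy loss, learning rate, starting point, and step counts are illustrative assumptions). It runs Normalized GD and constant-LR GD on √L for the toy loss L(x, y) = 0.5(xy - 1)^2, whose minimizers form the manifold {xy = 1}; on that manifold the largest Hessian eigenvalue is x^2 + y^2, smallest at (1, 1), so per the paper's result one would expect the iterates to drift along the manifold toward that flattest minimum (the hyperparameters below may need tuning to observe the full drift).

import numpy as np

def L(p):
    # Toy loss with a manifold of minimizers {x*y = 1}.
    x, y = p
    return 0.5 * (x * y - 1.0) ** 2

def grad_L(p):
    x, y = p
    r = x * y - 1.0
    return np.array([y * r, x * r])

def lambda_max(p):
    # Largest eigenvalue of the Hessian of L at p (equals x^2 + y^2 on the manifold).
    x, y = p
    H = np.array([[y * y, 2.0 * x * y - 1.0],
                  [2.0 * x * y - 1.0, x * x]])
    return np.linalg.eigvalsh(H)[-1]

def normalized_gd(p0, eta=0.05, steps=20000):
    # (1) Normalized GD: varying LR eta_t = eta / ||grad L(x_t)||, loss L.
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        g = grad_L(p)
        n = np.linalg.norm(g)
        if n < 1e-12:
            break
        p = p - (eta / n) * g
    return p

def gd_on_sqrt_loss(p0, eta=0.05, steps=20000):
    # (2) GD with constant LR eta on the loss sqrt(L).
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        l = L(p)
        if l < 1e-24:
            break
        # Chain rule: grad sqrt(L) = grad L / (2 sqrt(L)).
        p = p - eta * grad_L(p) / (2.0 * np.sqrt(l))
    return p

if __name__ == "__main__":
    start = [3.0, 0.4]  # near a sharp region of the minimizer manifold x*y = 1
    for name, run in [("normalized GD", normalized_gd), ("GD on sqrt(L)", gd_on_sqrt_loss)]:
        p = run(start)
        print(f"{name}: point = {p}, loss = {L(p):.2e}, "
              f"lambda_max = {lambda_max(p):.3f} (flattest minimum: 2.0)")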

10/13/2021
What Happens after SGD Reaches Zero Loss? – A Mathematical Framework
Understanding the implicit bias of Stochastic Gradient Descent (SGD) is ...

10/24/2020
Inductive Bias of Gradient Descent for Exponentially Weight Normalized Smooth Homogeneous Neural Nets
We analyze the inductive bias of gradient descent for weight normalized ...

10/07/2021
Large Learning Rate Tames Homogeneity: Convergence and Balancing Effect
Recent empirical advances show that training deep models with large lear...

02/26/2021
Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability
We empirically demonstrate that full-batch gradient descent on neural ne...

11/04/2020
Direction Matters: On the Implicit Regularization Effect of Stochastic Gradient Descent with Moderate Learning Rate
Understanding the algorithmic regularization effect of stochastic gradie...

06/08/2022
On Gradient Descent Convergence beyond the Edge of Stability
Gradient Descent (GD) is a powerful workhorse of modern machine learning...

09/11/2019
An Implicit Form of Krasulina's k-PCA Update without the Orthonormality Constraint
We shed new insights on the two commonly used updates for the online k-P...