Understanding Gradient Descent on Edge of Stability in Deep Learning

05/19/2022
by Sanjeev Arora, et al.

Deep learning experiments by Cohen et al. (2021) using deterministic Gradient Descent (GD) revealed an Edge of Stability (EoS) phase, in which the learning rate (LR) and the sharpness (i.e., the largest eigenvalue of the Hessian) no longer behave as in traditional optimization: the sharpness stabilizes around 2/LR, and the loss goes up and down across iterations, yet still with an overall downward trend. The current paper mathematically analyzes a new mechanism of implicit regularization in the EoS phase, whereby the GD updates arising from the non-smooth loss landscape turn out to evolve along a deterministic flow on the manifold of minimum loss. This is in contrast to many previous results on implicit bias, which rely either on infinitesimal updates or on noise in the gradient. Formally, for any smooth function L satisfying certain regularity conditions, this effect is demonstrated for (1) Normalized GD, i.e., GD with the varying LR η_t = η / ‖∇L(x(t))‖ and loss L; and (2) GD with a constant LR and loss √(L). Both provably enter the Edge of Stability, with the associated flow on the manifold minimizing λ_max(∇^2 L). These theoretical results are corroborated by an experimental study.
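The Normalized GD setting above can be illustrated with a small numerical sketch. The Python snippet below is not from the paper; the toy two-dimensional loss, the step size, and the iteration count are assumptions chosen purely for illustration. It runs Normalized GD (step of fixed length η in the direction of -∇L) on a loss whose minimizers form a manifold, and tracks the sharpness λ_max(∇^2 L) along the trajectory.

```python
# Minimal illustrative sketch (not the paper's code): Normalized GD on a toy loss
# whose minimizers form a manifold, tracking sharpness along the trajectory.
import numpy as np

def loss(x):
    # Toy loss L(x0, x1) = 0.5 * (1 + x1^2) * x0^2.
    # Minimizers form the manifold {x0 = 0}; the sharpness there is 1 + x1^2,
    # which is smallest at x1 = 0.
    return 0.5 * (1.0 + x[1] ** 2) * x[0] ** 2

def grad(x):
    return np.array([(1.0 + x[1] ** 2) * x[0], x[0] ** 2 * x[1]])

def sharpness(x, eps=1e-4):
    # Largest Hessian eigenvalue, estimated by finite differences of the gradient.
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = eps
        H[:, i] = (grad(x + e) - grad(x - e)) / (2.0 * eps)
    H = 0.5 * (H + H.T)  # symmetrize against numerical error
    return np.linalg.eigvalsh(H).max()

eta = 0.05                      # assumed step size for the normalized update
x = np.array([0.5, 1.5])        # assumed starting point off the manifold
for t in range(5000):
    g = grad(x)
    x = x - eta * g / (np.linalg.norm(g) + 1e-12)   # Normalized GD step
    if t % 1000 == 0:
        print(f"t={t:4d}  loss={loss(x):.2e}  sharpness={sharpness(x):.3f}  x1={x[1]:+.3f}")
```

In this toy run the iterates quickly reach and then oscillate around the manifold {x0 = 0} while the loss goes up and down, and the printed sharpness should decrease as x1 drifts toward 0, mirroring the claim that the associated flow minimizes λ_max(∇^2 L) on the manifold of minimizers. The second setting of the paper, GD with a constant LR on √(L), can be sketched analogously by replacing the update with a plain GD step on the square-root loss.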


Related research

09/30/2022
Self-Stabilization: The Implicit Bias of Gradient Descent at the Edge of Stability
Traditional analyses of gradient descent show that when the largest eige...

05/27/2023
The Implicit Regularization of Dynamical Stability in Stochastic Gradient Descent
In this paper, we study the implicit regularization of stochastic gradie...

10/13/2021
What Happens after SGD Reaches Zero Loss? – A Mathematical Framework
Understanding the implicit bias of Stochastic Gradient Descent (SGD) is ...

10/24/2020
Inductive Bias of Gradient Descent for Exponentially Weight Normalized Smooth Homogeneous Neural Nets
We analyze the inductive bias of gradient descent for weight normalized ...

12/14/2022
Learning threshold neurons via the "edge of stability"
Existing analyses of neural network training often operate under the unr...

06/08/2022
On Gradient Descent Convergence beyond the Edge of Stability
Gradient Descent (GD) is a powerful workhorse of modern machine learning...

07/26/2022
Analyzing Sharpness along GD Trajectory: Progressive Sharpening and Edge of Stability
Recent findings (e.g., arXiv:2103.00065) demonstrate that modern neural ...
