Catapult Dynamics and Phase Transitions in Quadratic Nets

01/18/2023
by David Meltzer et al.

Neural networks trained with gradient descent can undergo non-trivial phase transitions as a function of the learning rate. Lewkowycz et al. (2020) discovered that wide neural nets can exhibit a catapult phase at super-critical learning rates, in which the training loss grows exponentially at early times before rapidly decreasing to a small value. During this phase the top eigenvalue of the neural tangent kernel (NTK) also evolves significantly. In this work, we prove that the catapult phase exists in a large class of models, including quadratic models and two-layer, homogeneous neural nets. To do this, we show that for a certain range of learning rates the weight norm decreases whenever the loss becomes large. We also empirically study learning rates beyond this theoretically derived range and show that the activation maps of ReLU nets trained with super-critical learning rates become increasingly sparse as the learning rate increases.
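The catapult mechanism is easy to reproduce in the simplest quadratic model studied in this line of work: a width-m two-layer linear net f = u·v/√m fit to a single training point, for which the NTK reduces to the scalar λ = (|u|² + |v|²)/m and the critical learning rate is 2/λ. Below is a minimal sketch, assuming NumPy; the width, random seed, and the choice η = 3/λ₀ are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Minimal catapult-phase sketch: a width-m two-layer linear model
# f = u . v / sqrt(m) fit to the single point (x, y) = (1, 0) with
# squared loss L = f^2 / 2. For this model the NTK is the scalar
# lambda = (|u|^2 + |v|^2) / m; learning rates in (2/lambda, 4/lambda)
# trigger the catapult. All hyperparameters here are illustrative.

rng = np.random.default_rng(0)
m = 512                               # width (illustrative choice)
u = rng.normal(size=m)
v = rng.normal(size=m)

lam0 = (u @ u + v @ v) / m            # NTK at initialization
eta = 3.0 / lam0                      # super-critical learning rate

for step in range(30):
    f = u @ v / np.sqrt(m)            # model output on x = 1
    loss = 0.5 * f ** 2
    lam = (u @ u + v @ v) / m
    print(f"step {step:2d}  loss {loss:12.4f}  ntk {lam:.3f}")
    # Gradient descent: dL/du = f v / sqrt(m), dL/dv = f u / sqrt(m).
    u, v = u - eta * f * v / np.sqrt(m), v - eta * f * u / np.sqrt(m)
```

Running this, the loss grows roughly geometrically over the first few steps; once it is large, the NTK eigenvalue λ (equivalently, the weight norm mλ, as in the abstract's argument) drops each step until ηλ < 2 and the loss catapults down to a small value.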


Related research

03/04/2020 · The large learning rate phase of deep learning: the catapult mechanism
The choice of initial learning rate can have a profound effect on the pe...

05/24/2022 · Quadratic models for understanding neural network dynamics
In this work, we propose using a quadratic model as a tool for understan...

05/13/2023 · Depth Dependence of μP Learning Rates in ReLU MLPs
In this short note we consider random fully connected ReLU networks of w...

11/25/2020 · Implicit bias of deep linear networks in the large learning rate phase
Correctly choosing a learning rate (scheme) for gradient-based optimizat...

10/03/2019 · Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks
Recent research shows that the following two models are equivalent: (a) ...

08/23/2017 · Super-Convergence: Very Fast Training of Residual Networks Using Large Learning Rates
In this paper, we show a phenomenon, which we named "super-convergence",...

07/26/2022 · Analyzing Sharpness along GD Trajectory: Progressive Sharpening and Edge of Stability
Recent findings (e.g., arXiv:2103.00065) demonstrate that modern neural ...
