The large learning rate phase of deep learning: the catapult mechanism

03/04/2020
by Aitor Lewkowycz, et al.

The choice of initial learning rate can have a profound effect on the performance of deep networks. We present a class of neural networks with solvable training dynamics and confirm the model's predictions empirically in practical deep learning settings. The networks exhibit sharply distinct behaviors at small and large learning rates, and the two regimes are separated by a phase transition. In the small learning rate phase, training can be understood using the existing theory of infinitely wide neural networks. At large learning rates, the model captures qualitatively distinct phenomena, including the convergence of gradient descent dynamics to flatter minima. One key prediction of our model is a narrow range of large, stable learning rates. We find good agreement between our model's predictions and training dynamics in realistic deep learning settings, and optimal performance in such settings is often attained in the large learning rate phase. We believe our results shed light on the characteristics of models trained at different learning rates. In particular, they fill a gap between the existing theory of wide neural networks and the nonlinear, large learning rate training dynamics relevant to practice.
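
The "narrow range of large, stable learning rates" mentioned above can be reproduced in a small numerical experiment. The sketch below is an illustrative assumption, not the paper's exact construction: a wide two-layer linear network f = v·(ux)/√m trained by full-batch gradient descent on a single example with squared loss. In this toy model the relevant curvature is the kernel value λ = (|u|² + |v|²)x²/m, and the familiar quadratic-loss stability threshold is η = 2/λ; stepping somewhat beyond it makes the loss grow at first, but the curvature shrinks until the dynamics re-stabilize at a flatter point. The width, the example, the learning rates tried, and the curvature() helper are all choices made for this sketch.

```python
import numpy as np

# Illustrative toy model (an assumption for this sketch, not the paper's
# exact construction): width-m two-layer linear net f = v.(u*x)/sqrt(m),
# one training example (x, y), squared loss L = (f - y)^2 / 2.
rng = np.random.default_rng(0)
m = 1000
x, y = 1.0, 0.0

def curvature(u, v):
    # Kernel value / squared gradient norm of f: (|u|^2 + |v|^2) x^2 / m.
    return (u @ u + v @ v) * x**2 / m

u0, v0 = rng.normal(size=m), rng.normal(size=m)
lam0 = curvature(u0, v0)                  # ~2 at this width

for eta in (1.0 / lam0, 2.5 / lam0):      # below vs. above 2/lambda
    u, v = u0.copy(), v0.copy()
    peak = 0.0
    for _ in range(300):
        f = (v @ u) * x / np.sqrt(m)
        peak = max(peak, 0.5 * (f - y) ** 2)
        g = eta * (f - y) * x / np.sqrt(m)
        u, v = u - g * v, v - g * u       # simultaneous gradient step
    f = (v @ u) * x / np.sqrt(m)
    print(f"eta*lambda0 = {eta * lam0:.1f}: peak loss {peak:.2e}, "
          f"final loss {0.5 * (f - y) ** 2:.2e}, "
          f"curvature {lam0:.2f} -> {curvature(u, v):.2f}")
```

In this toy model one can check directly that a gradient step updates the curvature as λ ← λ + (ηf²x²/m)(ηλ − 4), so the curvature can only decrease while ηλ < 4, whereas for ηλ pushed well past 4 both f and λ grow and training diverges; the stable large learning rates live in the window between roughly 2/λ and 4/λ, matching the narrow range the abstract predicts.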
