Maximal Initial Learning Rates in Deep ReLU Networks

12/14/2022
by Gaurav Iyer, et al.

Training a neural network requires choosing a suitable learning rate, which involves a trade-off between the speed and effectiveness of convergence. While there has been considerable theoretical and empirical analysis of how large the learning rate can be, most prior work focuses only on late-stage training. In this work, we introduce the maximal initial learning rate η^*: the largest learning rate at which a randomly initialized neural network can successfully begin training and achieve (at least) a given threshold accuracy. Using a simple approach to estimate η^*, we observe that in constant-width fully-connected ReLU networks, η^* behaves differently from the maximum learning rate later in training. Specifically, we find that η^* is well predicted as a power of (depth × width), provided that (i) the width of the network is sufficiently large compared to its depth, and (ii) the input layer of the network is trained at a relatively small learning rate. We further analyze the relationship between η^* and the sharpness λ_1 of the network at initialization, finding that the two are closely related, though not simply inversely proportional. We formally prove bounds for λ_1 in terms of (depth × width) that align with our empirical results.
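The "simple approach to estimate η^*" is not spelled out on this page. As an illustration only, here is a minimal sketch of one natural estimator: a bisection search in log-space over candidate learning rates, where each candidate is judged by whether a freshly initialized network reaches a threshold accuracy within a fixed training budget. All names here (`make_mlp`, `trains_successfully`, the SGD budget, the default threshold) are assumptions of this sketch, not the authors' procedure.

```python
import math
import torch
import torch.nn as nn

def make_mlp(depth, width, in_dim=784, out_dim=10):
    # Constant-width fully-connected ReLU network, the architecture
    # family studied in the paper.
    layers = [nn.Linear(in_dim, width), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, out_dim))
    return nn.Sequential(*layers)

def trains_successfully(lr, batches, depth, width, threshold=0.5, seed=0):
    # True if a freshly initialized network reaches `threshold` accuracy
    # (measured on its final training batch) within the given budget.
    torch.manual_seed(seed)
    model = make_mlp(depth, width)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for x, y in batches:
        opt.zero_grad()
        logits = model(x.flatten(1))
        loss = loss_fn(logits, y)
        if not torch.isfinite(loss):
            return False  # training diverged at this learning rate
        loss.backward()
        opt.step()
    acc = (logits.argmax(dim=1) == y).float().mean().item()
    return acc >= threshold

def estimate_max_initial_lr(batches, depth, width, lo=1e-4, hi=1e2, iters=20):
    # Bisection in log-space, assuming success is roughly monotone in lr:
    # training succeeds below eta* and fails above it.
    assert trains_successfully(lo, batches, depth, width), "lo already fails"
    for _ in range(iters):
        mid = math.sqrt(lo * hi)  # geometric midpoint
        if trains_successfully(mid, batches, depth, width):
            lo = mid
        else:
            hi = mid
    return lo  # largest lr observed to still train successfully
```

In practice one would average over several random seeds and evaluate on held-out data; the paper's definition fixes both a threshold accuracy and a training budget, which enter this sketch as the hypothetical `threshold` argument and the length of `batches`.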
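The sharpness λ_1 referred to in the abstract is, as is standard in this literature, the largest eigenvalue of the Hessian of the loss with respect to the parameters. It can be estimated without ever forming the Hessian, via power iteration on Hessian-vector products; the sketch below (again an assumption of this rewrite, not code from the paper) uses PyTorch's double-backward to compute the products.

```python
import torch

def sharpness(model, loss_fn, x, y, iters=50):
    # Power iteration estimate of lambda_1, the top Hessian eigenvalue of
    # the loss at the current parameters. Power iteration converges to the
    # eigenvalue of largest magnitude; at initialization this is typically
    # the largest positive eigenvalue, i.e. the sharpness.
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, params, create_graph=True)

    def dot(a, b):
        return sum((u * w).sum() for u, w in zip(a, b))

    v = [torch.randn_like(p) for p in params]
    n = torch.sqrt(dot(v, v))
    v = [u / n for u in v]
    lam = 0.0
    for _ in range(iters):
        # Hessian-vector product: differentiate <grad, v> w.r.t. params.
        hv = torch.autograd.grad(dot(grads, v), params, retain_graph=True)
        lam = dot(hv, v).item()  # Rayleigh quotient estimate of lambda_1
        n = torch.sqrt(dot(hv, hv))
        v = [h / n for h in hv]
    return lam
```

Called on a freshly initialized constant-width ReLU network with a representative batch, this returns the λ_1 at initialization that the paper bounds in terms of (depth × width).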

