An Exponential Learning Rate Schedule for Deep Learning

10/16/2019
by Zhiyuan Li, et al.

Intriguing empirical evidence exists that deep learning can work well with exotic schedules for varying the learning rate. This paper suggests that the phenomenon may be due to Batch Normalization, or BN (Ioffe & Szegedy, 2015), which is ubiquitous and provides benefits in optimization and generalization across all standard architectures. The following new results are shown about BN with weight decay and momentum (in other words, the typical use case, which was not considered in earlier theoretical analyses of stand-alone BN (Ioffe & Szegedy, 2015; Santurkar et al., 2018; Arora et al., 2018)).

1. Training can be done using SGD with momentum and an exponentially increasing learning rate schedule, i.e., the learning rate increases by a factor of (1 + α) in every epoch, for some α > 0. (Precise statement in the paper.) To the best of our knowledge, this is the first time such a rate schedule has been successfully used, let alone for highly successful architectures. As expected, such training rapidly blows up the network weights, but the net stays well-behaved due to normalization.

2. Mathematical explanation of the success of the above rate schedule: a rigorous proof that it is equivalent to the standard setting of BN + SGD + Standard Rate Tuning + Weight Decay + Momentum. This equivalence holds for other normalization layers as well: Group Normalization (Wu & He, 2018), Layer Normalization (Ba et al., 2016), Instance Norm (Ulyanov et al., 2016), etc.

3. A worked-out toy example illustrating the above linkage of hyper-parameters. Using either weight decay or BN alone reaches a global minimum, but convergence fails when both are used.
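Point 1 is straightforward to try in a standard framework. Below is a minimal PyTorch sketch of training a BatchNorm network with SGD + momentum and a learning rate that grows by a factor of (1 + α) every epoch. The toy model, the random stand-in data, and the value α = 0.05 are illustrative assumptions, not settings taken from the paper; weight decay is deliberately omitted, since per point 2 the exponential growth is meant to play its role.

# Minimal sketch (assumptions noted above): BN network + SGD with momentum
# + exponentially increasing learning rate, no weight decay.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy conv + BN + ReLU network with a linear head (illustrative only).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)

# SGD with momentum but no weight decay.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# Learning rate is multiplied by (1 + alpha) once per epoch.
alpha = 0.05  # illustrative value, not from the paper
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=1 + alpha)

# Random stand-in data; replace with a real dataset (e.g. CIFAR-10).
data = TensorDataset(torch.randn(256, 3, 32, 32), torch.randint(0, 10, (256,)))
loader = DataLoader(data, batch_size=64, shuffle=True)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for x, y in loader:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    scheduler.step()  # grow the learning rate at the end of each epoch
    print(f"epoch {epoch}: lr = {scheduler.get_last_lr()[0]:.4f}")

With gamma = 1 + α, ExponentialLR multiplies the learning rate by that factor on every scheduler.step() call, which here is invoked once per epoch, matching the schedule described in point 1.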

Related research

03/23/2021 · How to decay your learning rate
Complex learning rate schedules have become an integral part of deep lea...

10/06/2020 · Reconciling Modern Deep Learning with Traditional Optimization Analyses: The Intrinsic Learning Rate
Recent works (e.g., (Li and Arora, 2020)) suggest that the use of popula...

03/26/2018 · A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay
Although deep learning has produced dazzling successes for applications ...

10/09/2019 · On the adequacy of untuned warmup for adaptive optimization
Adaptive optimization algorithms such as Adam (Kingma & Ba, 2014) are ...

11/30/2021 · AutoDrop: Training Deep Learning Models with Automatic Learning Rate Drop
Modern deep learning (DL) architectures are trained using variants of th...

05/26/2023 · Rotational Optimizers: Simple Robust DNN Training
The training dynamics of modern deep neural networks depend on complex i...

06/15/2020 · Spherical Motion Dynamics of Deep Neural Networks with Batch Normalization and Weight Decay
We comprehensively reveal the learning dynamics of deep neural networks ...
