Curvature is Key: Sub-Sampled Loss Surfaces and the Implications for Large Batch Training

06/16/2020
by Diego Granziol, et al.

We study the effect of mini-batching on the loss landscape of deep neural networks using spiked, field-dependent random matrix theory. We show that the magnitudes of the extremal values of the batch Hessian are larger than those of the empirical Hessian. Our framework yields an analytical expression for the maximal SGD learning rate as a function of batch size, informing practical optimisation schemes. We use this framework to demonstrate that accepted, empirically proven schemes for adapting the learning rate emerge as special cases of our more general framework. For stochastic second-order and adaptive methods, we derive that the minimal damping coefficient is proportional to the ratio of the learning rate to the batch size. For adaptive methods, we show that in the typical setup of small learning rate and small damping, the learning rate should be scaled as the square root of the batch size. We validate our claims on the VGG and WideResNet architectures, using the CIFAR-100 and ImageNet datasets.
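
As a reading aid, the sketch below restates the batch-size scaling rules claimed in the abstract as plain Python. Only the functional forms come from the abstract: linear scaling for SGD is the widely used rule that the authors say emerges as a special case of their framework, square-root scaling applies to adaptive methods in the small learning-rate, small-damping regime, and the minimal damping is proportional to learning rate over batch size. The base learning rates, reference batch size, and proportionality constant `c` are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch (not the authors' code) of the batch-size scaling rules
# described in the abstract. Base values and the constant `c` are assumed.

import math


def sgd_lr(base_lr: float, base_batch: int, batch: int) -> float:
    """Linear scaling of the SGD learning rate with batch size
    (the standard rule that the abstract says arises as a special case)."""
    return base_lr * batch / base_batch


def adaptive_lr(base_lr: float, base_batch: int, batch: int) -> float:
    """Square-root scaling for adaptive optimisers in the small
    learning-rate, small-damping regime described in the abstract."""
    return base_lr * math.sqrt(batch / base_batch)


def min_damping(lr: float, batch: int, c: float = 1.0) -> float:
    """Minimal damping coefficient, proportional to learning rate / batch
    size; the constant `c` is problem-dependent and assumed here."""
    return c * lr / batch


if __name__ == "__main__":
    for b in (128, 256, 512, 1024):
        lr_sgd = sgd_lr(0.1, 128, b)
        lr_ada = adaptive_lr(1e-3, 128, b)
        print(f"batch {b:5d}: SGD lr {lr_sgd:.3f}, "
              f"adaptive lr {lr_ada:.5f}, "
              f"min damping {min_damping(lr_ada, b):.2e}")
```

Note that under these rules the adaptive learning rate grows more slowly with batch size than the SGD learning rate, while the minimal damping shrinks as the batch grows for a fixed learning rate.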

Related research

- How do SGD hyperparameters in natural training affect adversarial robustness? (06/20/2020): Learning rate, batch size and momentum are three important hyperparamete...
- DNN's Sharpest Directions Along the SGD Trajectory (07/13/2018): Recent work has identified that using a high learning rate or a small ba...
- Second-order Information in First-order Optimization Methods (12/20/2019): In this paper, we try to uncover the second-order essence of several fir...
- Explaining the Adaptive Generalisation Gap (11/15/2020): We conjecture that the reason for the difference in generalisation betwe...
- Empirically explaining SGD from a line search perspective (03/31/2021): Optimization in Deep Learning is mainly guided by vague intuitions and s...
- Mean-field Analysis of Batch Normalization (03/06/2019): Batch Normalization (BatchNorm) is an extremely useful component of mode...
- A Loss Curvature Perspective on Training Instability in Deep Learning (10/08/2021): In this work, we study the evolution of the loss Hessian across many cla...
