A Walk with SGD

02/24/2018
by Chen Xing et al.

Exploring why stochastic gradient descent (SGD) based optimization methods train deep neural networks (DNNs) that generalize well has recently become an active area of research. Towards this end, we empirically study the dynamics of SGD when training over-parametrized deep networks. Specifically, we study the DNN loss surface along the trajectory of SGD by interpolating the loss between parameters from consecutive iterations and by tracking various metrics during training. We find that the covariance structure of the noise induced by mini-batches is quite special: it allows SGD to descend and explore the loss surface while avoiding barriers along its path. In particular, our experiments show evidence that for most of training, SGD explores regions along a valley by bouncing off valley walls at a height above the valley floor. This 'bouncing off walls at a height' mechanism helps SGD traverse larger distances for small batch sizes and large learning rates, which we find play qualitatively different roles in the dynamics: while a large learning rate maintains a large height above the valley floor, a small batch size injects noise that facilitates exploration. We find this mechanism is crucial for generalization because the floor of the valley has barriers, and exploring above the valley floor allows SGD to quickly travel far from the initialization point (without being affected by those barriers) and to find flatter regions, which correspond to better generalization.
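The loss-surface interpolation described above can be illustrated with a short sketch. Below is a minimal PyTorch-style example, assuming a generic `model`, `loss_fn`, and mini-batch; the helper name `interpolate_loss`, the 21-point alpha grid, and the snapshot workflow are illustrative assumptions, not the authors' exact setup.

```python
import copy
import torch

def interpolate_loss(model, loss_fn, batch, theta_prev, theta_next, num_alphas=21):
    """Evaluate the training loss along the straight line between two
    consecutive SGD iterates theta_prev and theta_next (lists of tensors).

    Hypothetical helper for illustration only.
    """
    inputs, targets = batch
    probe = copy.deepcopy(model)  # scratch copy so the live model is untouched
    losses = []
    with torch.no_grad():
        for alpha in torch.linspace(0.0, 1.0, num_alphas):
            # theta(alpha) = (1 - alpha) * theta_t + alpha * theta_{t+1}
            for p, a, b in zip(probe.parameters(), theta_prev, theta_next):
                p.copy_((1.0 - alpha) * a + alpha * b)
            losses.append(loss_fn(probe(inputs), targets).item())
    return losses

# Sketch of how it could be wrapped around one optimizer step:
# theta_prev = [p.detach().clone() for p in model.parameters()]
# optimizer.step()
# theta_next = [p.detach().clone() for p in model.parameters()]
# profile = interpolate_loss(model, loss_fn, (inputs, targets), theta_prev, theta_next)
```

Plotting the returned losses against alpha shows whether the loss decreases monotonically along the step or rises between the two iterates, i.e., whether the update crossed a barrier on the surface.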


Related research

Improved generalization by noise enhancement (09/28/2020)
Recent studies have demonstrated that noise in stochastic gradient desce...

DNN's Sharpest Directions Along the SGD Trajectory (07/13/2018)
Recent work has identified that using a high learning rate or a small ba...

The Break-Even Point on Optimization Trajectories of Deep Neural Networks (02/21/2020)
The early phase of training of deep neural networks is critical for thei...

Rethinking the limiting dynamics of SGD: modified loss, phase space oscillations, and anomalous diffusion (07/19/2021)
In this work we explore the limiting dynamics of deep neural networks tr...

How Many Factors Influence Minima in SGD? (09/24/2020)
Stochastic gradient descent (SGD) is often applied to train Deep Neural ...

On the Maximum Hessian Eigenvalue and Generalization (06/21/2022)
The mechanisms by which certain training interventions, such as increasi...

Meta-LR-Schedule-Net: Learned LR Schedules that Scale and Generalize (07/29/2020)
The learning rate (LR) is one of the most important hyper-parameters in ...
