
The Break-Even Point on Optimization Trajectories of Deep Neural Networks

02/21/2020
by Stanisław Jastrzębski, et al.

The early phase of training of deep neural networks is critical for their final performance. In this work, we study how the hyperparameters of stochastic gradient descent (SGD) used in the early phase of training affect the rest of the optimization trajectory. We argue for the existence of the "break-even" point on this trajectory, beyond which the curvature of the loss surface and noise in the gradient are implicitly regularized by SGD. In particular, we demonstrate on multiple classification tasks that using a large learning rate in the initial phase of training reduces the variance of the gradient, and improves the conditioning of the covariance of gradients. These effects are beneficial from the optimization perspective and become visible after the break-even point. Complementing prior work, we also show that using a low learning rate results in bad conditioning of the loss surface even for a neural network with batch normalization layers. In short, our work shows that key properties of the loss surface are strongly influenced by SGD in the early phase of training. We argue that studying the impact of the identified effects on generalization is a promising future direction.
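The two quantities the abstract tracks — the variance of the minibatch gradient and the conditioning of the gradient covariance — can be probed with a small experiment. The sketch below is illustrative, not the paper's method: it trains a toy logistic-regression model (synthetic data, hypothetical hyperparameters) with SGD at a small and a large learning rate, then estimates the gradient covariance at the end of each trajectory from resampled minibatches and reports its trace (total gradient variance) and condition number.

```python
import numpy as np

def sigmoid(z):
    # clip to avoid overflow in exp for extreme logits
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def batch_grad(w, Xb, yb):
    # gradient of the mean logistic loss on one minibatch
    p = sigmoid(Xb @ w)
    return Xb.T @ (p - yb) / len(yb)

def grad_noise_stats(w, X, y, batch_size, n_batches, rng):
    # estimate the covariance of minibatch gradients at parameters w
    grads = [batch_grad(w, X[idx], y[idx])
             for idx in (rng.choice(len(y), size=batch_size, replace=False)
                         for _ in range(n_batches))]
    C = np.cov(np.stack(grads).T)          # (d, d) gradient covariance
    eigs = np.linalg.eigvalsh(C)           # ascending eigenvalues
    trace = float(eigs.sum())              # total gradient variance
    cond = float(eigs[-1] / max(eigs[0], 1e-12))  # conditioning of covariance
    return trace, cond

def sgd_trajectory(lr, X, y, steps=200, batch_size=32, seed=0):
    # run plain SGD, then measure gradient-noise statistics at the endpoint
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        idx = rng.choice(len(y), size=batch_size, replace=False)
        w -= lr * batch_grad(w, X[idx], y[idx])
    return grad_noise_stats(w, X, y, batch_size, n_batches=64, rng=rng)

# synthetic classification data (purely illustrative)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
w_true = rng.normal(size=10)
y = (sigmoid(X @ w_true) > rng.random(1000)).astype(float)

for lr in (0.01, 1.0):
    trace, cond = sgd_trajectory(lr, X, y)
    print(f"lr={lr}: gradient variance (trace) {trace:.4g}, "
          f"covariance condition number {cond:.4g}")
```

On a deep network the paper studies these statistics along the whole trajectory rather than only at the endpoint, but the measurement itself is the same: resample minibatch gradients at fixed parameters and examine their empirical covariance.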


Related research:

07/13/2018 · DNN's Sharpest Directions Along the SGD Trajectory
Recent work has identified that using a high learning rate or a small ba...

10/08/2021 · A Loss Curvature Perspective on Training Instability in Deep Learning
In this work, we study the evolution of the loss Hessian across many cla...

12/28/2020 · Catastrophic Fisher Explosion: Early Phase Fisher Matrix Impacts Generalization
The early phase of training has been shown to be important in two ways f...

11/24/2017 · Critical Learning Periods in Deep Neural Networks
Critical periods are phases in the early development of humans and anima...

02/24/2018 · A Walk with SGD
Exploring why stochastic gradient descent (SGD) based optimization metho...

11/25/2020 · Implicit bias of deep linear networks in the large learning rate phase
Correctly choosing a learning rate (scheme) for gradient-based optimizat...

06/23/2020 · Spherical Perspective on Learning with Batch Norm
Batch Normalization (BN) is a prominent deep learning technique. In spit...