
Three Factors Influencing Minima in SGD
We study the properties of the endpoint of stochastic gradient descent (...

Implicit Gradient Regularization
Gradient descent can be surprisingly good at optimizing deep neural netw...

Learning Stages: Phenomenon, Root Cause, Mechanism Hypothesis, and Implications
Under StepDecay learning rate strategy (decaying the learning rate after...

Direction Matters: On the Implicit Regularization Effect of Stochastic Gradient Descent with Moderate Learning Rate
Understanding the algorithmic regularization effect of stochastic gradie...

Adaptive Learning Rate Clipping Stabilizes Learning
Artificial neural network training with stochastic gradient descent can ...

The Effect of Network Width on Stochastic Gradient Descent and Generalization: an Empirical Study
We investigate how the final parameters found by stochastic gradient des...

Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks
Stochastic gradient descent (SGD) is widely believed to perform implicit...
On the Origin of Implicit Regularization in Stochastic Gradient Descent
For infinitesimal learning rates, stochastic gradient descent (SGD) follows the path of gradient flow on the full-batch loss function. However, moderately large learning rates can achieve higher test accuracies, and this generalization benefit is not explained by convergence bounds, since the learning rate which maximizes test accuracy is often larger than the learning rate which minimizes training loss. To interpret this phenomenon, we prove that for SGD with random shuffling, the mean SGD iterate also stays close to the path of gradient flow if the learning rate is small and finite, but on a modified loss. This modified loss is composed of the original loss function and an implicit regularizer, which penalizes the norms of the minibatch gradients. Under mild assumptions, when the batch size is small the scale of the implicit regularization term is proportional to the ratio of the learning rate to the batch size. We verify empirically that explicitly including the implicit regularizer in the loss can enhance the test accuracy when the learning rate is small.
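A minimal sketch of the modified loss described above, under one plausible reading of the abstract: the original loss plus a penalty on the squared norms of the minibatch gradients, scaled by the learning rate. The toy least-squares problem, function names, and the exact form of the scaling factor are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def loss_and_grad(w, X, y):
    """Mean-squared-error loss and its gradient for linear regression."""
    r = X @ w - y
    return 0.5 * np.mean(r ** 2), X.T @ r / len(y)

def modified_loss(w, batches, lr):
    """Original loss plus the implicit regularizer sketched in the abstract:
    average batch loss + (lr / 4) * average of ||minibatch gradient||^2.
    The lr/4 prefactor is an assumption for illustration."""
    total, penalty = 0.0, 0.0
    for Xb, yb in batches:
        l, g = loss_and_grad(w, Xb, yb)
        total += l
        penalty += np.sum(g ** 2)
    n = len(batches)
    return total / n + (lr / 4) * penalty / n
```

With `lr = 0` this reduces to the plain average minibatch loss; for any positive learning rate the penalty is non-negative, so the modified loss can only be larger, and it grows with the learning-rate-to-batch-size ratio as the abstract describes.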