A Bayesian Perspective on Generalization and Stochastic Gradient Descent

10/17/2017
by Samuel L. Smith, et al.

This paper tackles two related questions at the heart of machine learning: how can we predict whether a minimum will generalize to the test set, and why does stochastic gradient descent find minima that generalize well? Our work is inspired by Zhang et al. (2017), who showed deep networks can easily memorize randomly labeled training data, despite generalizing well when shown the real labels of the same inputs. We show here that the same phenomenon occurs in small linear models. These observations are explained by evaluating the Bayesian evidence, which penalizes sharp minima but is invariant to the model parameterization. We also explore the "generalization gap" between small and large batch training, identifying an optimum batch size which maximizes the test set accuracy. Interpreting stochastic gradient descent as a stochastic differential equation, we identify a "noise scale" g = ϵ(N/B - 1) ≈ ϵN/B, where ϵ is the learning rate, N the training set size, and B the batch size. Consequently, the optimum batch size is proportional to both the learning rate and the training set size, B_opt ∝ ϵN. We verify these predictions empirically.
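
To make the scaling concrete, the sketch below (not the authors' code) computes the noise scale g = ϵ(N/B - 1) ≈ ϵN/B from the abstract and the batch size that holds the approximate noise scale at a target value, which gives the stated proportionality B_opt ∝ ϵN. The training set size and target noise scale used here are hypothetical, chosen only to illustrate the relationship.

```python
# Minimal sketch (assumed, not from the paper) of the SGD "noise scale"
#   g = eps * (N/B - 1) ≈ eps * N / B
# and the implied optimum batch size B_opt ≈ eps * N / g_opt.

def noise_scale(eps: float, N: int, B: int) -> float:
    """Exact noise scale g = eps * (N/B - 1) of SGD viewed as an SDE."""
    return eps * (N / B - 1)

def optimal_batch_size(eps: float, N: int, g_opt: float) -> float:
    """Batch size that keeps the approximate noise scale eps*N/B at g_opt,
    i.e. B_opt = eps * N / g_opt, proportional to both eps and N."""
    return eps * N / g_opt

if __name__ == "__main__":
    N = 50_000      # hypothetical training set size (CIFAR-10 scale)
    g_opt = 3.0     # hypothetical optimum noise scale, found empirically
    for eps in (0.05, 0.1, 0.2):
        B = optimal_batch_size(eps, N, g_opt)
        print(f"eps={eps:.2f}  B_opt ≈ {B:.0f}  g(B_opt) = {noise_scale(eps, N, round(B)):.2f}")
```

Holding the noise scale fixed while doubling the learning rate doubles the predicted optimum batch size, which is the relationship the paper verifies empirically.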

research 06/26/2020
On the Generalization Benefit of Noise in Stochastic Gradient Descent
It has long been argued that minibatch stochastic gradient descent can g...

research 06/25/2018
Stochastic natural gradient descent draws posterior samples in function space
Natural gradient descent (NGD) minimises the cost function on a Riemanni...

research 05/30/2019
Deterministic PAC-Bayesian generalization bounds for deep networks via generalizing noise-resilience
The ability of overparameterized deep networks to generalize well has be...

research 05/09/2019
The Effect of Network Width on Stochastic Gradient Descent and Generalization: an Empirical Study
We investigate how the final parameters found by stochastic gradient des...

research 09/19/2023
On the different regimes of Stochastic Gradient Descent
Modern deep networks are trained with stochastic gradient descent (SGD) ...

research 09/05/2018
Deep Bilevel Learning
We present a novel regularization approach to train neural networks that...

research 08/21/2021
How Can Increased Randomness in Stochastic Gradient Descent Improve Generalization?
Recent works report that increasing the learning rate or decreasing the ...
