Stochastic natural gradient descent draws posterior samples in function space

06/25/2018
by Samuel L. Smith, et al.

Natural gradient descent (NGD) minimises the cost function on a Riemannian manifold whose metric is defined by the Fisher information. In this work, we prove that if the model predictions on the training set approach the true conditional distribution of labels given inputs, then the noise inherent in minibatch gradients causes the stationary distribution of NGD to approach a Bayesian posterior, whose temperature T ≈ εN/(2B) is controlled by the learning rate ε, training set size N and batch size B. The parameter-dependence of the Fisher metric introduces an implicit prior over the parameters, which we identify as the well-known Jeffreys prior. To support our claims, we show that the distribution of samples from NGD is close to the Laplace approximation to the posterior when T = 1. Furthermore, the test loss of ensembles drawn using NGD falls rapidly as we increase the batch size until B ≈ εN/2, while above this point the test loss is constant or rises slowly.
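The abstract describes stochastic NGD as a posterior sampler whose temperature T = εN/(2B) is set by the learning rate, dataset size, and batch size. The following is a minimal illustrative sketch (not the paper's code; all names and the toy logistic-regression setup are assumptions) of how such a sampler might look: minibatch gradients are preconditioned by the inverse Fisher, and iterates after burn-in are collected as approximate posterior samples.

```python
# Hypothetical sketch of stochastic natural gradient descent (NGD) as a sampler,
# illustrating the temperature T = eps * N / (2 * B) from the abstract.
# Toy problem: binary logistic regression on synthetic data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dataset: N examples, d features.
N, d = 1000, 5
X = rng.normal(size=(N, d))
true_theta = rng.normal(size=d)
y = (rng.random(N) < 1.0 / (1.0 + np.exp(-X @ true_theta))).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def minibatch_grad(theta, idx):
    """Mean gradient of the negative log-likelihood over a minibatch."""
    p = sigmoid(X[idx] @ theta)
    return X[idx].T @ (p - y[idx]) / len(idx)

def fisher(theta, damping=1e-3):
    """Fisher information of the model's predictive distribution (full data)."""
    p = sigmoid(X @ theta)
    w = p * (1.0 - p)
    return (X.T * w) @ X / N + damping * np.eye(d)

eps, B = 0.01, 50          # learning rate and batch size
T = eps * N / (2 * B)      # sampling temperature, as stated in the abstract
print(f"sampling temperature T = {T:.3f}")

theta = np.zeros(d)
samples = []
for step in range(5000):
    idx = rng.choice(N, size=B, replace=False)
    g = minibatch_grad(theta, idx)
    # Natural gradient step: precondition the minibatch gradient by F^{-1}.
    theta = theta - eps * np.linalg.solve(fisher(theta), g)
    if step > 1000:        # discard burn-in, then treat iterates as samples
        samples.append(theta.copy())

samples = np.array(samples)
print("posterior mean estimate:", samples.mean(axis=0))
```

With these settings, T = 0.01 × 1000 / (2 × 50) = 0.1; adjusting B toward εN/2 = 5 would raise the temperature toward T = 1, the regime the abstract compares against the Laplace approximation.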
