Stochastic natural gradient descent draws posterior samples in function space

06/25/2018
by   Samuel L. Smith, et al.

Natural gradient descent (NGD) minimises the cost function on a Riemannian manifold whose metric is defined by the Fisher information. In this work, we prove that if the model predictions on the training set approach the true conditional distribution of labels given inputs, then the noise inherent in minibatch gradients causes the stationary distribution of NGD to approach a Bayesian posterior, whose temperature T ≈ εN/(2B) is controlled by the learning rate ε, training set size N and batch size B. The parameter-dependence of the Fisher metric introduces an implicit prior over the parameters, which we identify as the well-known Jeffreys prior. To support our claims, we show that the distribution of samples from NGD is close to the Laplace approximation to the posterior when T = 1. Furthermore, the test loss of ensembles drawn using NGD falls rapidly as we increase the batch size until B ≈ εN/2, while above this point the test loss is constant or rises slowly.
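
To make the temperature relation concrete, here is a minimal sketch of one stochastic natural-gradient step, assuming a toy logistic-regression model and a diagonal empirical-Fisher approximation; the function name sngd_step, the damping term, and all hyperparameters are illustrative assumptions, not the authors' implementation.

import numpy as np

def sngd_step(w, X_batch, y_batch, eps, damping=1e-4):
    """One stochastic natural gradient descent step for logistic regression,
    using a diagonal empirical-Fisher estimate (illustrative sketch only)."""
    # Per-example gradients of the negative log-likelihood.
    p = 1.0 / (1.0 + np.exp(-(X_batch @ w)))
    per_example_grads = (p - y_batch)[:, None] * X_batch  # shape (B, d)

    # Minibatch gradient: its sampling noise is the noise source analysed
    # in the paper. Fisher diagonal estimated from the same batch.
    g = per_example_grads.mean(axis=0)
    fisher_diag = (per_example_grads ** 2).mean(axis=0) + damping

    # Natural gradient update: precondition by the inverse Fisher metric.
    return w - eps * g / fisher_diag

# Toy usage: with eps = 0.1, N = 1000 and B = 50, the stationary
# distribution of these iterates would, per the abstract, approach a
# posterior at temperature T ≈ eps * N / (2 * B) = 1.0.
rng = np.random.default_rng(0)
N, B, d, eps = 1000, 50, 5, 0.1
X, y = rng.normal(size=(N, d)), rng.integers(0, 2, size=N)
w = np.zeros(d)
idx = rng.choice(N, size=B, replace=False)
w = sngd_step(w, X[idx], y[idx], eps)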


