Stochastic Gradient Descent as Approximate Bayesian Inference

04/13/2017
by Stephan Mandt et al.

Stochastic Gradient Descent with a constant learning rate (constant SGD) simulates a Markov chain with a stationary distribution. With this perspective, we derive several new results. (1) We show that constant SGD can be used as an approximate Bayesian posterior inference algorithm. Specifically, we show how to adjust the tuning parameters of constant SGD to best match the stationary distribution to a posterior, minimizing the Kullback-Leibler divergence between these two distributions. (2) We demonstrate that constant SGD gives rise to a new variational EM algorithm that optimizes hyperparameters in complex probabilistic models. (3) We also propose SGD with momentum for sampling and show how to adjust the damping coefficient accordingly. (4) We analyze MCMC algorithms. For Langevin Dynamics and Stochastic Gradient Fisher Scoring, we quantify the approximation errors due to finite learning rates. Finally (5), we use the stochastic process perspective to give a short proof of why Polyak averaging is optimal. Based on this idea, we propose a scalable approximate MCMC algorithm, the Averaged Stochastic Gradient Sampler.
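The core idea in (1) — running SGD with a constant step size and treating the post-burn-in iterates as approximate posterior samples — can be sketched on a toy one-parameter Bayesian linear regression. Everything below (data, step size, batch size, burn-in length) is an illustrative assumption, not taken from the paper; in particular, the paper's contribution is how to *tune* the constant learning rate so the stationary distribution best matches the posterior, which this sketch does not do:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2*x + Gaussian noise (std 0.5); with a flat prior the
# posterior over the slope theta is Gaussian.
N = 1000
x = rng.normal(size=N)
y = 2.0 * x + rng.normal(scale=0.5, size=N)
noise_var = 0.25  # 0.5 ** 2

def stochastic_grad(theta, batch):
    """Minibatch estimate of the full-data negative log-posterior gradient."""
    xb, yb = x[batch], y[batch]
    # Rescale the minibatch sum to the full dataset size.
    return (N / len(batch)) * np.sum((theta * xb - yb) * xb) / noise_var

eta = 1e-5            # constant learning rate (the tuning parameter of interest)
S = 32                # minibatch size
burn_in, draws = 2000, 10000

theta = 0.0
samples = []
for t in range(burn_in + draws):
    batch = rng.integers(0, N, size=S)
    theta -= eta * stochastic_grad(theta, batch)  # constant-step SGD update
    if t >= burn_in:
        samples.append(theta)  # iterate treated as an approximate posterior draw

samples = np.asarray(samples)
# Sample mean should land near the true slope (2.0); the sample spread
# depends on eta and the gradient noise, which is exactly what the paper's
# KL-minimizing tuning rules are designed to control.
print("mean:", samples.mean(), "std:", samples.std())
```

Because the stationary distribution of constant SGD is an approximation whose width scales with the learning rate, the spread printed here only matches the true posterior standard deviation when eta is tuned as the paper prescribes.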


