Covariance-Controlled Adaptive Langevin Thermostat for Large-Scale Bayesian Sampling

10/29/2015
by Xiaocheng Shang, et al.

Monte Carlo sampling for Bayesian posterior inference is a common approach in machine learning. The Markov chain Monte Carlo (MCMC) procedures used in practice are often discrete-time analogues of associated stochastic differential equations (SDEs), which are guaranteed to leave the required posterior distribution invariant. A current line of research addresses the computational benefits of stochastic gradient methods in this setting; existing techniques rely on estimating the variance or covariance of the subsampling error and typically assume that this variance is constant. In this article, we propose a covariance-controlled adaptive Langevin thermostat that can effectively dissipate parameter-dependent noise while maintaining the desired target distribution. The proposed method achieves a substantial speedup over popular alternative schemes for large-scale machine learning applications.
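The abstract does not reproduce the update equations, so the following is only a minimal Python sketch of the general idea: a stochastic-gradient adaptive Langevin (Nosé-Hoover style) step in which the injected noise is reduced by an empirical estimate of the minibatch gradient covariance, while a scalar thermostat variable adaptively adjusts the friction. Function names, the covariance estimator, and the step scalings are illustrative assumptions, not the authors' exact discretization.

```python
import numpy as np

def adaptive_langevin_step(theta, p, xi, per_example_grads, h=1e-3, A=1.0, rng=None):
    """One illustrative step of a stochastic-gradient adaptive Langevin sampler
    with a covariance-based correction to the injected noise.

    theta : parameter vector being sampled
    p     : auxiliary momentum vector, same shape as theta
    xi    : scalar thermostat variable (adaptive friction)
    per_example_grads : callable returning an (m, d) array of per-example
        gradients of the negative log posterior at theta (m = minibatch size)
    h     : step size; A : baseline diffusion strength
    """
    rng = np.random.default_rng() if rng is None else rng
    d = theta.size

    # Noisy gradient from the minibatch, plus an empirical covariance
    # estimate of the subsampling error (scaling here is illustrative).
    grads = np.asarray(per_example_grads(theta))          # shape (m, d)
    g = grads.mean(axis=0)
    cov = np.cov(grads, rowvar=False) / grads.shape[0]

    # Inject noise whose covariance compensates for the estimated gradient
    # noise so the total diffusion stays near 2*A*h.  (Assumes
    # 2*A*h*I - h^2*cov remains positive definite for this step size.)
    noise_cov = 2.0 * A * h * np.eye(d) - h * h * cov
    noise = rng.multivariate_normal(np.zeros(d), noise_cov)

    # Momentum, position, and thermostat updates (simple Euler-style splitting).
    p = p - h * g - h * xi * p + noise
    theta = theta + h * p
    xi = xi + h * (p @ p / d - 1.0)   # drives kinetic temperature toward 1
    return theta, p, xi
```

The thermostat variable xi increases the friction whenever the momenta run "too hot" and decreases it otherwise, so residual noise that the covariance estimate misses is still dissipated; this adaptive dissipation is the role the abstract ascribes to the thermostat.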


Related research

05/22/2018 · Langevin Markov Chain Monte Carlo with stochastic gradients
Monte Carlo sampling techniques have broad applications in machine learn...

06/09/2020 · Isotropic SGD: a Practical Approach to Bayesian Posterior Sampling
In this work we define a unified mathematical framework to deepen our un...

06/19/2018 · Large-Scale Stochastic Sampling from the Probability Simplex
Stochastic gradient Markov chain Monte Carlo (SGMCMC) has become a popul...

10/23/2019 · An Adaptive Empirical Bayesian Method for Sparse Deep Learning
We propose a novel adaptive empirical Bayesian (AEB) method for sparse d...

08/05/2021 · Covariance Estimation and its Application in Large-Scale Online Controlled Experiments
During the last few decades, online controlled experiments (also known a...

06/29/2022 · Cyclical Kernel Adaptive Metropolis
We propose cKAM, cyclical Kernel Adaptive Metropolis, which incorporates...

05/21/2021 · Removing the mini-batching error in Bayesian inference using Adaptive Langevin dynamics
The computational cost of usual Monte Carlo methods for sampling a poste...
