Is There an Analog of Nesterov Acceleration for MCMC?

02/04/2019
by Yi-an Ma, et al.

We formulate gradient-based Markov chain Monte Carlo (MCMC) sampling as optimization on the space of probability measures, with Kullback-Leibler (KL) divergence as the objective functional. We show that an underdamped form of the Langevin algorithm performs accelerated gradient descent in this metric. To characterize the convergence of the algorithm, we construct a Lyapunov functional and exploit hypocoercivity of the underdamped Langevin algorithm. As an application, we show that accelerated rates can be obtained for a class of nonconvex functions with the Langevin algorithm.
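For intuition about the underdamped Langevin algorithm the abstract refers to, below is a minimal Python sketch of a simple Euler-type discretization of underdamped (kinetic) Langevin dynamics targeting pi(x) proportional to exp(-f(x)). The function name, step size, friction parameter gamma, and the particular discretization are illustrative assumptions; this is not the exact scheme or the tuned setting analyzed in the paper.

import numpy as np

def underdamped_langevin(grad_f, x0, n_steps=10_000, step=1e-2, gamma=2.0, u=1.0, rng=None):
    # Illustrative Euler-type discretization of the underdamped Langevin SDE
    #   dx = v dt
    #   dv = -gamma * v dt - u * grad_f(x) dt + sqrt(2 * gamma * u) dB_t,
    # whose stationary distribution in x is pi(x) proportional to exp(-f(x)).
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    samples = np.empty((n_steps, x.size))
    for t in range(n_steps):
        # Velocity update: friction, gradient force, and Gaussian noise.
        v = v - step * (gamma * v + u * grad_f(x)) \
            + np.sqrt(2.0 * gamma * u * step) * rng.standard_normal(x.size)
        # Position update with the refreshed velocity.
        x = x + step * v
        samples[t] = x
    return samples

# Example: standard Gaussian target, f(x) = ||x||^2 / 2, so grad_f(x) = x.
draws = underdamped_langevin(lambda x: x, x0=np.zeros(2))

Compared with overdamped Langevin, which perturbs the position directly with a gradient step plus noise, the auxiliary momentum variable v is the ingredient that plays the role of Nesterov-style acceleration in the measure-space view described in the abstract.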


Related research

02/04/2020 · tfp.mcmc: Modern Markov Chain Monte Carlo Tools Built for Modern Hardware
Markov chain Monte Carlo (MCMC) is widely regarded as one of the most im...

11/17/2017 · Techniques for proving Asynchronous Convergence results for Markov Chain Monte Carlo methods
Markov Chain Monte Carlo (MCMC) methods such as Gibbs sampling are findi...

07/27/2019 · The Wang-Landau Algorithm as Stochastic Optimization and its Acceleration
We show that the Wang-Landau algorithm can be formulated as a stochastic...

07/03/2023 · Monte Carlo Policy Gradient Method for Binary Optimization
Binary optimization has a wide range of applications in combinatorial op...

10/21/2019 · Aggregated Gradient Langevin Dynamics
In this paper, we explore a general Aggregated Gradient Langevin Dynamic...

08/30/2019 · On the robustness of gradient-based MCMC algorithms
We analyse the tension between robustness and efficiency for Markov chai...

06/16/2020 · Hessian-Free High-Resolution Nesterov Acceleration for Sampling
We propose an accelerated-gradient-based MCMC method. It relies on a mod...
