The Randomized Midpoint Method for Log-Concave Sampling

09/12/2019 ∙ by Ruoqi Shen, et al.

Sampling from log-concave distributions is a well-researched problem with many applications in statistics and machine learning. We study distributions of the form p^* ∝ exp(-f(x)), where f: R^d → R is m-strongly convex with an L-Lipschitz gradient. We propose a Markov chain Monte Carlo (MCMC) algorithm based on the underdamped Langevin diffusion (ULD). It achieves ϵ·D error (in 2-Wasserstein distance) in Õ(κ^(7/6)/ϵ^(1/3) + κ/ϵ^(2/3)) steps, where D := √(d/m) is the effective diameter of the problem and κ := L/m is the condition number. Our algorithm is significantly faster than the previously best known algorithm for this problem, which requires Õ(κ^(3/2)/ϵ) steps. Moreover, it is easily parallelized, requiring only O(κ log(1/ϵ)) parallel steps. To solve the sampling problem, we propose a new framework for discretizing stochastic differential equations. We apply this framework to discretize and simulate ULD, which converges to the target distribution p^*. The framework applies not only to the log-concave sampling problem, but to any problem that involves simulating (stochastic) differential equations.
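To make the core idea concrete: the randomized midpoint method evaluates the drift of the diffusion at a uniformly random time inside each discretization step, rather than at the step's left endpoint as in Euler–Maruyama. The sketch below is not the paper's algorithm — the paper discretizes the *underdamped* Langevin diffusion, which requires correlated Gaussian increments — but a simplified, hedged illustration of the same trick applied to overdamped Langevin dX = -∇f(X) dt + √2 dW, with a standard Gaussian target chosen purely for testing.

```python
import numpy as np

def grad_f(x):
    # Gradient of f(x) = ||x||^2 / 2, i.e. a standard Gaussian target.
    # Illustrative choice only; any strongly convex f with Lipschitz gradient works.
    return x

def randomized_midpoint_step(x, h, grad, rng):
    """One randomized-midpoint step for overdamped Langevin (simplified sketch).

    The drift is evaluated at a uniformly random time alpha*h inside the step,
    using a crude Euler estimate of the state there; the Brownian increment over
    [0, h] is split consistently into the part up to alpha*h and the remainder.
    """
    d = x.shape[0]
    alpha = rng.uniform()                                # random point in [0, 1]
    w1 = rng.normal(size=d) * np.sqrt(alpha * h)         # Brownian increment on [0, alpha*h]
    w2 = rng.normal(size=d) * np.sqrt((1 - alpha) * h)   # independent remainder on [alpha*h, h]
    # Euler estimate of the state at the random intermediate time.
    x_mid = x - alpha * h * grad(x) + np.sqrt(2.0) * w1
    # Full step: drift taken at the random midpoint, full Brownian increment reused.
    return x - h * grad(x_mid) + np.sqrt(2.0) * (w1 + w2)

rng = np.random.default_rng(0)
x = np.zeros(1)
samples = []
for i in range(60000):
    x = randomized_midpoint_step(x, 0.1, grad_f, rng)
    if i >= 10000:                                       # discard burn-in
        samples.append(x[0])
samples = np.array(samples)
print(round(float(samples.mean()), 2), round(float(samples.var()), 2))
```

For the standard Gaussian target the chain's empirical mean and variance should be close to 0 and 1; the randomized evaluation point is what removes the leading-order discretization bias of the plain Euler scheme.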
