
Spectral Gap of Replica Exchange Langevin Diffusion on Mixture Distributions

by Jing Dong, et al.

Langevin diffusion (LD) is one of the main workhorses for sampling problems. However, its convergence rate can be significantly reduced if the target distribution is a mixture of multiple densities, especially when each component concentrates around a different mode. Replica exchange Langevin diffusion (ReLD) is a sampling method that can circumvent this issue. In particular, ReLD adds another LD that samples a high-temperature version of the target density, and exchanges the locations of the two LDs according to a Metropolis–Hastings-type law. This approach can be further extended to multiple replica exchange Langevin diffusion (mReLD), where K additional LDs are added to sample distributions at different temperatures and exchanges take place between processes at neighboring temperatures. While ReLD and mReLD have been used extensively in statistical physics, molecular dynamics, and other applications, there is little existing analysis of their convergence rates and choices of temperatures. This paper closes these gaps, assuming the target distribution is a mixture of log-concave densities. We show that ReLD can obtain constant or even better convergence rates even when the density components of the mixture concentrate around isolated modes. We also show that using mReLD with K additional LDs achieves the same result while the exchange frequency only needs to be the (1/K)-th power of the one in ReLD.
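The ReLD scheme described above can be sketched in a few lines of code: a discretized low-temperature LD and a discretized high-temperature LD evolve in parallel, and a Metropolis–Hastings-type swap of their locations is proposed periodically. The following is a minimal illustrative sketch (not the paper's implementation); the two-Gaussian target, the step size, the hot temperature of 10, and the swap interval are all assumptions chosen for demonstration.

```python
import numpy as np

def potential(x):
    # Negative log-density of a two-component Gaussian mixture with
    # well-separated modes at -3 and +3 (illustrative target only).
    return -np.log(0.5 * np.exp(-0.5 * (x - 3.0) ** 2)
                   + 0.5 * np.exp(-0.5 * (x + 3.0) ** 2))

def grad_potential(x, h=1e-5):
    # Central finite-difference gradient, adequate for this 1-D sketch.
    return (potential(x + h) - potential(x - h)) / (2.0 * h)

def reld_sample(n_steps=50_000, step=0.01, temp_hot=10.0,
                swap_every=10, rng=None):
    """Euler-Maruyama discretization of two Langevin diffusions, one at
    temperature 1 and one at temp_hot, with Metropolis-type swap moves
    between them (replica exchange)."""
    rng = np.random.default_rng(rng)
    beta_cold, beta_hot = 1.0, 1.0 / temp_hot
    x_cold, x_hot = -3.0, -3.0       # both chains start in the left mode
    samples = np.empty(n_steps)
    for t in range(n_steps):
        # Langevin step at each temperature: dX = -grad U dt + sqrt(2/beta) dW.
        x_cold += (-step * grad_potential(x_cold)
                   + np.sqrt(2.0 * step / beta_cold) * rng.standard_normal())
        x_hot += (-step * grad_potential(x_hot)
                  + np.sqrt(2.0 * step / beta_hot) * rng.standard_normal())
        # Periodically propose swapping the two chains' locations.
        if t % swap_every == 0:
            # Acceptance preserves the product of the two stationary laws:
            # min(1, exp((beta_cold - beta_hot) * (U(x_cold) - U(x_hot)))).
            log_acc = (beta_cold - beta_hot) * (potential(x_cold) - potential(x_hot))
            if np.log(rng.random()) < min(0.0, log_acc):
                x_cold, x_hot = x_hot, x_cold
        samples[t] = x_cold
    return samples
```

Without the swap moves, the cold chain would need to diffuse over the energy barrier near 0 and would rarely leave its starting mode; the hot chain crosses the flattened barrier easily, and the exchange step transfers those crossings to the cold chain, which is the mechanism behind the improved spectral gap analyzed in the paper.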

