
A sharp uniform-in-time error estimate for Stochastic Gradient Langevin Dynamics

by   Lei Li, et al.

We establish a sharp uniform-in-time error estimate for the Stochastic Gradient Langevin Dynamics (SGLD), which is a popular sampling algorithm. Under mild assumptions, we obtain a uniform-in-time O(η^2) bound for the KL-divergence between the SGLD iteration and the Langevin diffusion, where η is the step size (or learning rate). Our analysis is also valid for varying step sizes. Based on this, we are able to obtain an O(η) bound for the distance between the SGLD iteration and the invariant distribution of the Langevin diffusion, in terms of Wasserstein or total variation distances.
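The SGLD iteration analyzed in the abstract replaces the exact gradient in the Euler discretization of the Langevin diffusion with a noisy (e.g. minibatch) estimate. A minimal sketch of that iteration, under hypothetical choices (1-D standard Gaussian target with potential U(x) = x²/2, and additive noise standing in for the minibatch gradient error):

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_grad(x):
    # Exact gradient of U(x) = x^2/2 is x; the added zero-mean noise
    # stands in for the minibatch gradient error in SGLD.
    return x + 0.1 * rng.standard_normal()

def sgld(x0, eta, n_steps):
    """SGLD chain: x <- x - eta * g(x) + sqrt(2*eta) * N(0, 1),
    where eta is the step size (learning rate) and g is a stochastic gradient."""
    x = x0
    samples = np.empty(n_steps)
    for k in range(n_steps):
        x = x - eta * stochastic_grad(x) + np.sqrt(2.0 * eta) * rng.standard_normal()
        samples[k] = x
    return samples

samples = sgld(x0=0.0, eta=0.05, n_steps=200_000)
burn = samples[10_000:]  # discard burn-in
# Empirical mean and std should be near 0 and 1, up to the O(eta)
# discretization bias the abstract quantifies.
print(burn.mean(), burn.std())
```

The abstract's result says the law of this chain stays within O(η²) of the Langevin diffusion in KL divergence uniformly in time, so the empirical moments above differ from the target's by an amount that shrinks with the step size η rather than growing with the number of iterations.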




Unadjusted Langevin algorithm with multiplicative noise: Total variation and Wasserstein bounds

In this paper, we focus on non-asymptotic bounds related to the Euler sc...

Nonasymptotic estimates for Stochastic Gradient Langevin Dynamics under local conditions in nonconvex optimization

Within the context of empirical risk minimization, see Raginsky, Rakhlin...

On stochastic gradient Langevin dynamics with dependent data streams in the logconcave case

Stochastic Gradient Langevin Dynamics (SGLD) is a combination of a Robbi...

Time-independent Generalization Bounds for SGLD in Non-convex Settings

We establish generalization error bounds for stochastic gradient Langevi...

Geometric ergodicity of SGLD via reflection coupling

We consider the geometric ergodicity of the Stochastic Gradient Langevin...

On the Theory of Variance Reduction for Stochastic Gradient Monte Carlo

We provide convergence guarantees in Wasserstein distance for a variety ...

Phase transition in random contingency tables with non-uniform margins

For parameters n,δ,B, and C, let X=(X_kℓ) be the random uniform continge...