
A sharp uniform-in-time error estimate for Stochastic Gradient Langevin Dynamics

07/19/2022
by Lei Li, et al.

We establish a sharp uniform-in-time error estimate for Stochastic Gradient Langevin Dynamics (SGLD), a popular sampling algorithm. Under mild assumptions, we obtain a uniform-in-time O(η^2) bound on the KL divergence between the SGLD iterates and the Langevin diffusion, where η is the step size (or learning rate). The analysis also holds for varying step sizes. Building on this, we obtain an O(η) bound on the distance between the SGLD iterates and the invariant distribution of the Langevin diffusion, in both Wasserstein and total variation distances.
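To make the objects in the abstract concrete, here is a minimal illustrative sketch of the SGLD iteration (not the paper's construction): it assumes a toy quadratic potential U(x) = |x|^2/2 split into n hypothetical "data" terms, with step size and batch size chosen arbitrarily. The update x_{k+1} = x_k − η g(x_k) + √(2η) ξ_k, with g an unbiased stochastic gradient estimate and ξ_k standard Gaussian, is the standard SGLD scheme the abstract compares against the continuous Langevin diffusion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy potential U(x) = |x|^2 / 2, split into n identical "data" terms,
# so each term contributes x / n and the full gradient is x.
# (Hypothetical setup for illustration; not the paper's assumptions.)
n, batch, dim = 100, 10, 2

def stochastic_grad(x):
    # Mini-batch gradient estimate: subsample `batch` terms and rescale.
    # This keeps the estimate unbiased for grad U(x) = x.
    idx = rng.choice(n, size=batch, replace=False)
    return (n / batch) * np.sum([x / n for _ in idx], axis=0)

eta = 1e-2           # step size (learning rate), denoted η in the abstract
x = np.zeros(dim)    # SGLD iterate

for k in range(10_000):
    xi = rng.standard_normal(dim)
    # SGLD update: gradient step on the stochastic gradient plus
    # injected Gaussian noise scaled by sqrt(2 * eta).
    x = x - eta * stochastic_grad(x) + np.sqrt(2.0 * eta) * xi

# For this quadratic potential, the invariant law of the Langevin
# diffusion is N(0, I); after many steps the iterate should resemble
# a sample from it up to an O(eta) bias, in line with the abstract's
# Wasserstein / total variation bound.
print(x)
```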

Related Research

12/28/2020

Unadjusted Langevin algorithm with multiplicative noise: Total variation and Wasserstein bounds

In this paper, we focus on non-asymptotic bounds related to the Euler sc...
10/04/2019

Nonasymptotic estimates for Stochastic Gradient Langevin Dynamics under local conditions in nonconvex optimization

Within the context of empirical risk minimization, see Raginsky, Rakhlin...
12/06/2018

On stochastic gradient Langevin dynamics with dependent data streams in the logconcave case

Stochastic Gradient Langevin Dynamics (SGLD) is a combination of a Robbi...
11/25/2021

Time-independent Generalization Bounds for SGLD in Non-convex Settings

We establish generalization error bounds for stochastic gradient Langevi...
01/17/2023

Geometric ergodicity of SGLD via reflection coupling

We consider the geometric ergodicity of the Stochastic Gradient Langevin...
02/15/2018

On the Theory of Variance Reduction for Stochastic Gradient Monte Carlo

We provide convergence guarantees in Wasserstein distance for a variety ...
03/20/2019

Phase transition in random contingency tables with non-uniform margins

For parameters n,δ,B, and C, let X=(X_kℓ) be the random uniform continge...