Improved Bounds for Discretization of Langevin Diffusions: Near-Optimal Rates without Convexity

07/25/2019
by Wenlong Mou, et al.

We present an improved analysis of the Euler-Maruyama discretization of the Langevin diffusion. Our analysis does not require global contractivity, and yields polynomial dependence on the time horizon. Compared to existing approaches, we make an additional smoothness assumption and improve the existing rate from O(η) to O(η^2) in terms of the KL divergence. This result matches the correct order for numerical SDEs, without suffering from exponential time dependence. When applied to algorithms for sampling and learning, this result simultaneously improves all methods based on Dalalyan's approach.
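For reference, below is a minimal sketch of the Euler-Maruyama discretization of the Langevin diffusion (the unadjusted Langevin algorithm) that the abstract refers to. The potential U, its gradient, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def euler_maruyama_langevin(grad_U, x0, step_size, n_steps, rng=None):
    """Euler-Maruyama discretization of dX_t = -grad U(X_t) dt + sqrt(2) dB_t:
        x_{k+1} = x_k - eta * grad U(x_k) + sqrt(2 * eta) * xi_k,  xi_k ~ N(0, I).
    Returns the full trajectory as an array of shape (n_steps + 1, dim)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - step_size * grad_U(x) + np.sqrt(2.0 * step_size) * noise
        traj.append(x.copy())
    return np.array(traj)

# Illustrative (non-convex) potential U(x) = (x_1^2 - 1)^2 + x_2^2 / 2.
def grad_U(x):
    return np.array([4.0 * x[0] * (x[0] ** 2 - 1.0), x[1]])

samples = euler_maruyama_langevin(grad_U, x0=np.zeros(2), step_size=1e-2, n_steps=10_000)
```

The step size eta controls the discretization error studied in the paper; the O(η^2) bound concerns the KL divergence between the law of such iterates and the continuous-time diffusion.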

