
Higher Order Generalization Error for First Order Discretization of Langevin Diffusion

02/11/2021
by Mufan Bill Li, et al.

We propose a novel approach to analyzing the generalization error of discretizations of Langevin diffusion, such as stochastic gradient Langevin dynamics (SGLD). For an ϵ tolerance on the expected generalization error, it is known that a first order discretization can reach this target by running Ω(ϵ^-1 log(ϵ^-1)) iterations with Ω(ϵ^-1) samples. In this article, we show that, under additional smoothness assumptions, even first order methods can achieve an arbitrarily improved runtime complexity. More precisely, for each N > 0, we provide a sufficient smoothness condition on the loss function such that a first order discretization reaches ϵ expected generalization error with Ω(ϵ^(-1/N) log(ϵ^-1)) iterations and Ω(ϵ^-1) samples.
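For context, SGLD is the standard first order (Euler–Maruyama) discretization of Langevin diffusion: at each step the parameters move along a stochastic gradient of the empirical loss and receive injected Gaussian noise scaled by the step size and inverse temperature. The sketch below is purely illustrative and is not the paper's analysis or algorithm; the function name `sgld`, the gradient oracle `grad_loss`, and the parameter choices are assumptions made for the example.

```python
import numpy as np

def sgld(grad_loss, theta0, step_size, n_iters, inv_temp=1.0, rng=None):
    """Minimal SGLD sketch: a first order (Euler-Maruyama) discretization
    of Langevin diffusion,
        theta <- theta - eta * grad + sqrt(2 * eta / beta) * xi,  xi ~ N(0, I).
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(n_iters):
        g = grad_loss(theta)                      # (stochastic) gradient of the empirical loss
        noise = rng.standard_normal(theta.shape)  # injected Gaussian noise
        theta = theta - step_size * g + np.sqrt(2.0 * step_size / inv_temp) * noise
    return theta

# Illustrative usage on a toy quadratic loss L(theta) = ||theta||^2 / 2 (hypothetical example):
if __name__ == "__main__":
    grad = lambda theta: theta
    print(sgld(grad, theta0=np.ones(5), step_size=1e-2, n_iters=1000))
```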

