Functional Central Limit Theorem and Strong Law of Large Numbers for Stochastic Gradient Langevin Dynamics

10/05/2022
by Attila Lovas, et al.

We study the mixing properties of an important optimization algorithm of machine learning: stochastic gradient Langevin dynamics (SGLD) with a fixed step size. The data stream is not assumed to be independent, so SGLD is not a Markov chain, merely a Markov chain in a random environment, which complicates the mathematical treatment considerably. We derive a strong law of large numbers and a functional central limit theorem for SGLD.
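To fix notation, fixed-step SGLD iterates theta_{k+1} = theta_k - lambda * grad f(theta_k, X_k) + sqrt(2*lambda/beta) * xi_k, where (X_k) is the data stream and (xi_k) are i.i.d. standard Gaussians. The sketch below is illustrative only: the quadratic toy loss, the AR(1) data stream, and the inverse temperature beta are assumptions made for demonstration, not details taken from the paper.

```python
import numpy as np

def sgld(grad, data_stream, theta0, step_size=1e-3, beta=1.0, n_iter=10_000, seed=0):
    """Fixed-step SGLD recursion:
        theta_{k+1} = theta_k - lam * grad(theta_k, X_k) + sqrt(2*lam/beta) * xi_k,
    where X_k comes from a (possibly dependent) data stream and xi_k ~ N(0, I).
    beta is an assumed inverse-temperature parameter, set to 1 by default."""
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    iterates = np.empty((n_iter, theta.size))
    for k in range(n_iter):
        x_k = next(data_stream)                    # data need not be i.i.d.
        xi = rng.standard_normal(theta.shape)      # injected Gaussian noise
        theta = theta - step_size * grad(theta, x_k) + np.sqrt(2.0 * step_size / beta) * xi
        iterates[k] = theta
    return iterates

def ar1_stream(rho=0.5, seed=1):
    """Illustrative dependent data: an AR(1) sequence, under which the SGLD
    iterates form a Markov chain in a random environment, not a Markov chain."""
    rng = np.random.default_rng(seed)
    x = 0.0
    while True:
        x = rho * x + rng.standard_normal()
        yield x

# Toy loss f(theta, x) = 0.5 * (theta - x)^2, so grad_theta f = theta - x.
traj = sgld(lambda th, x: th - x, ar1_stream(), theta0=0.0, n_iter=50_000)
print(traj.mean(axis=0))  # ergodic average: the quantity governed by the SLLN/FCLT
```

The time average printed at the end is exactly the kind of functional whose almost-sure convergence (strong law of large numbers) and Gaussian fluctuations (functional central limit theorem) the paper establishes.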


Related research

10/10/2018 · On some Limit Theorem for Markov Chain
The goal of this paper is to describe conditions which guarantee a centr...
01/02/2015 · (Non-)asymptotic properties of Stochastic Gradient Langevin Dynamics
Applying standard Markov chain Monte Carlo (MCMC) algorithms to large da...
06/09/2023 · A Central Limit Theorem for Stochastic Saddle Point Optimization
In this work, we study the Uncertainty Quantification (UQ) of an algorit...
08/25/2019 · Hypocoercivity properties of adaptive Langevin dynamics
Adaptive Langevin dynamics is a method for sampling the Boltzmann-Gibbs ...
03/02/2022 · Understanding the Sources of Error in MBAR through Asymptotic Analysis
Multiple sampling strategies commonly used in molecular dynamics, such a...
