Stochastic Gradient Descent in Continuous Time: A Central Limit Theorem

10/11/2017
by Justin Sirignano, et al.

Stochastic gradient descent in continuous time (SGDCT) provides a computationally efficient method for the statistical learning of continuous-time models, which are widely used in science, engineering, and finance. The SGDCT algorithm follows a (noisy) descent direction along a continuous stream of data. The parameter updates occur in continuous time and satisfy a stochastic differential equation. This paper analyzes the asymptotic convergence rate of the SGDCT algorithm by proving a central limit theorem for strongly convex objective functions and, under slightly stronger conditions, for non-convex objective functions as well. An L^p convergence rate is also proven for the algorithm in the strongly convex case.
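The abstract describes the algorithm only at a high level: the parameter is updated continuously from a streaming observation process, and the update itself satisfies a stochastic differential equation. As a rough illustration (not the paper's exact scheme), the Python sketch below discretizes such a continuous-time noisy descent with an Euler-Maruyama scheme for a toy Ornstein-Uhlenbeck data stream; the model form, the learning-rate schedule, and all variable names are assumptions made for this example.

```python
# Illustrative sketch only: an assumed, simplified continuous-time SGD update.
# Data stream: dX_t = -theta_true * X_t dt + sigma dW_t (Ornstein-Uhlenbeck).
# Parameter update: a noisy descent on the squared model residual, discretized
# with Euler-Maruyama. Not the paper's exact algorithm or conditions.
import numpy as np

rng = np.random.default_rng(0)

theta_true = 2.0          # drift parameter of the data-generating process
sigma = 1.0               # diffusion coefficient of the data stream
dt = 1e-3                 # Euler-Maruyama time step
T = 200.0                 # time horizon
n_steps = int(T / dt)
dW = rng.normal(0.0, np.sqrt(dt), n_steps)   # Brownian increments

x = 1.0                   # observed state of the continuous-time process
theta = 0.0               # parameter estimate, updated along the data stream

for k in range(n_steps):
    t = k * dt
    alpha = 1.0 / (1.0 + 0.1 * t)              # decaying learning-rate schedule
    dx = -theta_true * x * dt + sigma * dW[k]  # increment of the data stream
    # Noisy descent direction: gradient (w.r.t. theta) of the model drift -theta*x,
    # weighted by the residual between observed and modeled increments.
    theta += alpha * (-x) * (dx - (-theta * x * dt))
    x += dx

print(f"estimated theta = {theta:.2f} (true value {theta_true})")
```

With these assumed settings the estimate drifts toward the true parameter as the stream is processed, which mirrors the convergence behavior the paper analyzes rigorously.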


Related research

11/17/2016  Stochastic Gradient Descent in Continuous Time
Stochastic gradient descent in continuous time (SGDCT) provides a comput...

09/02/2017  A convergence analysis of the perturbed compositional gradient flow: averaging principle and normal deviations
We consider in this work a system of two stochastic differential equatio...

06/19/2020  Meta Learning in the Continuous Time Limit
In this paper, we establish the ordinary differential equation (ODE) tha...

10/05/2018  Continuous-time Models for Stochastic Optimization Algorithms
We propose a new continuous-time formulation for first-order stochastic ...

01/02/2020  Stochastic Gradient Langevin Dynamics on a Distributed Network
Langevin MCMC gradient optimization is a class of increasingly popular m...

12/07/2020  Stochastic optimization with momentum: convergence, fluctuations, and traps avoidance
In this paper, a general stochastic optimization procedure is studied, u...
