An Analysis of Constant Step Size SGD in the Non-convex Regime: Asymptotic Normality and Bias

06/14/2020
by Lu Yu, et al.

Structured non-convex learning problems, for which critical points have favorable statistical properties, arise frequently in statistical machine learning. Algorithmic convergence and statistical estimation rates are well understood for such problems. However, quantifying the uncertainty associated with the underlying training algorithm is not well studied in the non-convex setting. To address this shortcoming, we establish an asymptotic normality result for the constant step size stochastic gradient descent (SGD) algorithm, a method widely used in practice. Specifically, based on the relationship between SGD and Markov chains [DDB19], we show that the average of SGD iterates is asymptotically normally distributed around the expected value of their unique invariant distribution, as long as the non-convex and non-smooth objective function satisfies a dissipativity property. We also characterize the bias between this expected value and the critical points of the objective function under various local regularity conditions. Together, these two results can be leveraged to construct confidence intervals for non-convex problems trained using SGD.
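The central limit behavior described in the abstract can be probed empirically. Informally, the result says that for the averaged iterates x̄_n of the constant step size SGD Markov chain, √n (x̄_n − E_π_γ[x]) is asymptotically Gaussian, where π_γ is the chain's unique invariant distribution. Below is a minimal Python sketch, not code from the paper: the toy objective f(x) = x² + sin(3x) (non-convex but dissipative, since x·f'(x) grows like 2x² for large |x|), the step size gamma, and the additive Gaussian gradient noise are all illustrative assumptions.

```python
# Minimal sketch (not from the paper): constant step size SGD on a toy
# dissipative, non-convex objective f(x) = x**2 + sin(3*x) in 1D. Across
# independent runs, the averaged iterates should cluster approximately
# normally around the mean of the chain's invariant distribution.
import numpy as np

rng = np.random.default_rng(0)

def grad_f(x):
    # Gradient of f(x) = x**2 + sin(3*x); dissipative because
    # x * grad_f(x) >= 2*x**2 - 3*|x|.
    return 2.0 * x + 3.0 * np.cos(3.0 * x)

def sgd_average(gamma=0.05, sigma=1.0, n_iters=100_000, burn_in=10_000):
    """Run constant step size SGD with additive Gaussian gradient noise
    and return the average of the post-burn-in iterates."""
    x = rng.normal()
    iterates = np.empty(n_iters)
    for t in range(n_iters):
        noisy_grad = grad_f(x) + sigma * rng.normal()
        x = x - gamma * noisy_grad
        iterates[t] = x
    return iterates[burn_in:].mean()

# Independent runs: inspect the run-to-run spread of the averages.
averages = np.array([sgd_average() for _ in range(20)])
print("mean of averaged iterates:", averages.mean())
print("std  of averaged iterates:", averages.std())
```

Note that the common center of these averages is the invariant mean E_π_γ[x], not a critical point of f; the gap between the two is precisely the bias the paper characterizes, which depends on the step size γ.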


Related research

11/06/2018 · On exponential convergence of SGD in non-convex over-parametrized learning
Large over-parametrized models learned via stochastic gradient descent (...

06/09/2017 · Global Convergence of the (1+1) Evolution Strategy
We establish global convergence of the (1+1)-ES algorithm, i.e., converg...

10/02/2014 · Mapping Energy Landscapes of Non-Convex Learning Problems
In many statistical learning problems, the target functions to be optimi...

06/20/2023 · Convergence and concentration properties of constant step-size SGD through Markov chains
We consider the optimization of a smooth and strongly convex objective u...

01/26/2023 · First Order Methods for Geometric Optimization of Crystal Structures
The geometric optimization of crystal structures is a procedure widely u...

09/18/2018 · Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent
We propose graph-dependent implicit regularisation strategies for distri...

10/28/2019 · Online Stochastic Gradient Descent with Arbitrary Initialization Solves Non-smooth, Non-convex Phase Retrieval
In recent literature, a general two step procedure has been formulated f...
