
An Analysis of Constant Step Size SGD in the Nonconvex Regime: Asymptotic Normality and Bias
Structured nonconvex learning problems, for which critical points have favorable statistical properties, arise frequently in statistical machine learning. Algorithmic convergence and statistical estimation rates are well-understood for such problems. However, quantifying the uncertainty associated with the underlying training algorithm is not well-studied in the nonconvex setting. In order to address this shortcoming, in this work, we establish an asymptotic normality result for the constant step size stochastic gradient descent (SGD) algorithm, a widely used algorithm in practice. Specifically, based on the relationship between SGD and Markov chains [DDB19], we show that the average of SGD iterates is asymptotically normally distributed around the expected value of their unique invariant distribution, as long as the nonconvex and nonsmooth objective function satisfies a dissipativity property. We also characterize the bias between this expected value and the critical points of the objective function under various local regularity conditions. Together, the above two results can be leveraged to construct confidence intervals for nonconvex problems that are trained using the SGD algorithm.
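The phenomenon described above can be illustrated numerically. The sketch below (not the paper's code; all function names, the toy objective, and parameter values are illustrative assumptions) runs constant step size SGD on the dissipative nonconvex objective f(x) = x⁴/4 − x²/2 across many independent runs. The per-run iterate averages concentrate around the mean of the chain's invariant distribution, consistent with the asymptotic normality result, and that mean is generally biased away from the exact critical point.

```python
import numpy as np


def grad_f(x):
    # Gradient of the toy objective f(x) = x**4 / 4 - x**2 / 2,
    # a nonconvex but dissipative function with critical points at -1, 0, 1.
    # Chosen purely for illustration; not from the paper.
    return x**3 - x


def sgd_iterate_averages(n_runs=200, gamma=0.05, n_iter=5000,
                         noise_std=0.5, x0=2.0, seed=0):
    """Run n_runs independent constant step size SGD chains in parallel
    and return the time-average of the iterates for each chain."""
    rng = np.random.default_rng(seed)
    x = np.full(n_runs, x0, dtype=float)   # all chains share the same start
    total = np.zeros(n_runs)
    for _ in range(n_iter):
        # Noisy gradient step: true gradient plus additive Gaussian noise,
        # mimicking a stochastic gradient oracle.
        x = x - gamma * (grad_f(x) + noise_std * rng.standard_normal(n_runs))
        total += x
    return total / n_iter


# With a small step size and moderate noise, chains started at x0 = 2.0
# settle near the critical point x = 1, and the per-run averages cluster
# tightly (approximately normally) around the invariant distribution's mean.
averages = sgd_iterate_averages()
print("mean of run-averages:", averages.mean())
print("spread of run-averages:", averages.std())
```

The spread of the run-averages shrinks as the number of iterations grows, which is what makes SGD-based confidence intervals of the kind discussed in the abstract possible; the gap between the mean of the run-averages and the critical point x = 1 is an instance of the bias the paper characterizes.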