A classification for the performance of online SGD for high-dimensional inference

03/23/2020
by Gérard Ben Arous, et al.

Stochastic gradient descent (SGD) is a popular algorithm for optimization problems arising in high-dimensional inference tasks. Here one produces an estimator of an unknown parameter from a large number of independent samples of data by iteratively optimizing a loss function. This loss function is high-dimensional, random, and often complex. We study here the performance of the simplest version of SGD, namely online SGD, in the initial "search" phase, where the algorithm is far from a trust region and the loss landscape is highly non-convex. To this end, we investigate the performance of online SGD at attaining a "better than random" correlation with the unknown parameter, i.e., achieving weak recovery. Our contribution is a classification of the difficulty of typical instances of this task for online SGD in terms of the number of samples required as the dimension diverges. This classification depends only on an intrinsic property of the population loss, which we call the information exponent. Using the information exponent, we find that there are three distinct regimes—the easy, critical, and difficult regimes—where one requires linear, quasilinear, and polynomially many samples (in the dimension) respectively to achieve weak recovery. We illustrate our approach by applying it to a wide variety of estimation tasks such as parameter estimation for generalized linear models, two-component Gaussian mixture models, phase retrieval, and spiked matrix and tensor models, as well as supervised learning for single-layer networks with general activation functions. In this latter case, our results translate into a classification of the difficulty of this task in terms of the Hermite decomposition of the activation function.
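As a rough illustration of the setting, the following is a minimal, hypothetical sketch (not the authors' code, and not the paper's proved scalings) of online SGD for a phase-retrieval-type model y = (v·x)^2 + noise. Weak recovery here means that the correlation between the iterate and the hidden direction v climbs well above the ~d^{-1/2} level of a random initialization; the dimension, step size, and sample budget below are illustrative choices.

import numpy as np

# Hypothetical sketch of online SGD for weak recovery in a phase-retrieval-type
# model; parameters are illustrative, not the scalings proved in the paper.
rng = np.random.default_rng(0)

d = 500                                  # dimension
n = 10 * d * int(np.ceil(np.log(d)))     # sample budget (illustrative choice)
eta = 0.1 / d                            # step size (illustrative choice)

v = rng.standard_normal(d)
v /= np.linalg.norm(v)                   # unknown parameter on the unit sphere

theta = rng.standard_normal(d)
theta /= np.linalg.norm(theta)           # random start, <theta, v> is of order d^{-1/2}

for t in range(n):
    x = rng.standard_normal(d)                   # one fresh sample per step (online SGD)
    y = (v @ x) ** 2 + 0.1 * rng.standard_normal()
    residual = (theta @ x) ** 2 - y
    grad = 2.0 * residual * (theta @ x) * x      # gradient of 0.5 * ((theta.x)^2 - y)^2
    theta -= eta * grad                          # one SGD step on the fresh sample
    theta /= np.linalg.norm(theta)               # retract to the unit sphere

print(f"|<theta, v>| after {n} samples: {abs(theta @ v):.3f}")

In the paper's classification, phase retrieval has information exponent two and thus sits in the critical regime, so a quasilinear-in-d sample budget of the kind used above is the natural scale; changing the link (or activation) function changes its Hermite decomposition, and hence which of the three regimes applies.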


Related research

06/10/2020
Dynamical mean-field theory for stochastic gradient descent in Gaussian mixture classification
We analyze in a closed form the learning dynamics of stochastic gradient...

05/14/2022
Homogenization of SGD in high-dimensions: Exact dynamics and generalization properties
We develop a stochastic differential equation, called homogenized SGD, f...

02/16/2021
Convergence of stochastic gradient descent schemes for Lojasiewicz-landscapes
In this article, we consider convergence of stochastic gradient descent ...

02/20/2023
High-dimensional Central Limit Theorems for Linear Functionals of Online Least-Squares SGD
Stochastic gradient descent (SGD) has emerged as the quintessential meth...

06/08/2022
High-dimensional limit theorems for SGD: Effective dynamics and critical scaling
We study the scaling limits of stochastic gradient descent (SGD) with co...

12/18/2019
Gradient-based training of Gaussian Mixture Models in High-Dimensional Spaces
We present an approach for efficiently training Gaussian Mixture Models ...

06/28/2022
Studying Generalization Through Data Averaging
The generalization of machine learning models has a complex dependence o...
