Benign Overfitting of Constant-Stepsize SGD for Linear Regression

03/23/2021
by Difan Zou, et al.

There is an increasing realization that algorithmic inductive biases are central in preventing overfitting; empirically, we often see a benign overfitting phenomenon in overparameterized settings for natural learning algorithms, such as stochastic gradient descent (SGD), where little to no explicit regularization has been employed. This work considers this issue in arguably the most basic setting: constant-stepsize SGD (with iterate averaging) for linear regression in the overparameterized regime. Our main result provides a sharp excess risk bound, stated in terms of the full eigenspectrum of the data covariance matrix, that reveals a bias-variance decomposition characterizing when generalization is possible: (i) the variance bound is characterized in terms of an effective dimension (specific for SGD) and (ii) the bias bound provides a sharp geometric characterization in terms of the location of the initial iterate (and how it aligns with the data covariance matrix). We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD in comparison to ordinary least squares (minimum-norm interpolation) and ridge regression.
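To make the setting concrete, here is a minimal NumPy sketch of the algorithm the abstract studies: one-pass, constant-stepsize SGD with iterate averaging on an overparameterized linear regression problem, compared against the minimum-norm (ridgeless OLS) interpolant. This is not the paper's code; the dimensions, stepsize rule, noise level, and power-law eigenspectrum below are illustrative assumptions, chosen only to exhibit the setup.

```python
# Minimal sketch (not the paper's code): constant-stepsize SGD with iterate
# averaging vs. the minimum-norm interpolant for overparameterized regression.
# All constants (n, d, stepsize, noise, spectrum) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

n, d = 200, 1000                               # n samples, d >> n features (overparameterized)
eigs = 1.0 / np.arange(1, d + 1) ** 2          # assumed power-law data covariance spectrum
w_star = rng.normal(size=d) * np.sqrt(eigs)    # ground-truth weights
X = rng.normal(size=(n, d)) * np.sqrt(eigs)    # rows ~ N(0, diag(eigs))
y = X @ w_star + 0.1 * rng.normal(size=n)      # noisy labels

# Constant-stepsize SGD: one pass, one sample per step, with iterate averaging.
gamma = 0.25 / np.max(np.sum(X ** 2, axis=1))  # heuristic constant stepsize for stability
w = np.zeros(d)                                # initial iterate w_0
w_avg = np.zeros(d)
for t in range(n):
    x_t, y_t = X[t], y[t]
    w -= gamma * (x_t @ w - y_t) * x_t         # stochastic gradient step on (x_t, y_t)
    w_avg += (w - w_avg) / (t + 1)             # running average of the iterates

# Minimum-norm interpolant (ridgeless OLS) for comparison.
w_ols = X.T @ np.linalg.solve(X @ X.T, y)

def excess_risk(w_hat):
    """Population excess risk E[(x^T (w_hat - w_star))^2] under the diagonal covariance."""
    return float(np.sum(eigs * (w_hat - w_star) ** 2))

print("averaged SGD excess risk:", excess_risk(w_avg))
print("min-norm OLS excess risk:", excess_risk(w_ols))
```

The excess-risk helper evaluates the population risk exactly because the covariance is diagonal by construction here; the initial iterate w_0 = 0 and its alignment with the covariance spectrum are exactly the quantities the paper's bias bound is stated in terms of.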


Related research

08/10/2021 · The Benefits of Implicit Regularization from SGD in Least Squares Problems
Stochastic gradient descent (SGD) exhibits strong algorithmic regulariza...

02/12/2022 · Relaxing the Feature Covariance Assumption: Time-Variant Bounds for Benign Overfitting in Linear Regression
Benign overfitting demonstrates that overparameterized models can perfor...

03/11/2022 · A geometrical viewpoint on the benign overfitting property of the minimum l_2-norm interpolant estimator
Practitioners have observed that some deep learning models generalize we...

08/05/2021 · Interpolation can hurt robust generalization even when there is no noise
Numerous recent works show that overparameterization implicitly reduces ...

10/25/2017 · A Markov Chain Theory Approach to Characterizing the Minimax Optimality of Stochastic Gradient Descent (for Least Squares)
This work provides a simplified proof of the statistical minimax optimal...

10/20/2022 · Local SGD in Overparameterized Linear Regression
We consider distributed learning using constant stepsize SGD (DSGD) over...

08/15/2020 · Obtaining Adjustable Regularization for Free via Iterate Averaging
Regularization for optimization is a crucial technique to avoid overfitt...
