Implicit Regularization Leads to Benign Overfitting for Sparse Linear Regression

02/01/2023
by Mo Zhou, et al.

In deep learning, the training process often finds an interpolator (a solution with zero training loss), yet the test loss remains low. This phenomenon, known as benign overfitting, is a major mystery that has received much recent attention. One common mechanism behind benign overfitting is implicit regularization, where the training process endows the interpolator with additional properties, often characterized by the minimization of certain norms. However, even for the simple sparse linear regression problem y = (β^*)^⊤ x + ξ with sparse β^*, neither the minimum-ℓ_1 nor the minimum-ℓ_2 norm interpolator achieves the optimal test loss. In this work, we give a different parametrization of the model that leads to a new implicit regularization effect combining the benefits of the ℓ_1 and ℓ_2 interpolators. We show that training our new model via gradient descent yields an interpolator with near-optimal test loss. Our result is based on a careful analysis of the training dynamics and provides another example of an implicit regularization effect that goes beyond norm minimization.
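The abstract does not spell out the paper's new parametrization. As a rough illustration of how reparametrizing a linear model can change gradient descent's implicit bias, here is a minimal sketch using the classic Hadamard-product over-parametrization β = u⊙u − v⊙v (in the spirit of the related Hadamard-product paper listed below); this is an assumed stand-in, not the paper's actual model, and all dimensions, step sizes, and initialization scales are illustrative choices.

```python
import numpy as np

# Illustrative (not the paper's) reparametrization: beta = u*u - v*v.
# With a small initialization, gradient descent on (u, v) is known to
# bias the recovered beta toward sparse (l1-like) interpolators.
rng = np.random.default_rng(0)

n, d, k = 100, 400, 5           # samples, ambient dimension, sparsity
sigma = 0.1                     # noise level

beta_star = np.zeros(d)
beta_star[:k] = 1.0             # sparse ground-truth coefficients
X = rng.standard_normal((n, d)) / np.sqrt(n)
y = X @ beta_star + sigma * rng.standard_normal(n)

alpha = 1e-3                    # small initialization scale
u = alpha * np.ones(d)
v = alpha * np.ones(d)
lr = 0.1

for step in range(20000):
    beta = u * u - v * v
    r = X @ beta - y            # residual
    g = X.T @ r                 # gradient of 0.5*||X beta - y||^2 w.r.t. beta
    u -= lr * 2 * u * g         # chain rule through u*u
    v += lr * 2 * v * g         # chain rule through -v*v

beta = u * u - v * v
print("train loss:", 0.5 * np.mean((X @ beta - y) ** 2))
print("parameter error:", np.linalg.norm(beta - beta_star) ** 2)
```

Despite having 2d parameters and many interpolating solutions, the small-initialization dynamics drive most coordinates of β toward zero, so the recovered solution is close to the sparse β^* rather than to the dense minimum-ℓ_2 interpolator.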


Related research:

01/29/2023 · Implicit Regularization for Group Sparsity
We study the implicit regularization of gradient descent towards structu...

01/27/2022 · The Implicit Bias of Benign Overfitting
The phenomenon of benign overfitting, where a predictor perfectly fits n...

03/16/2021 · Deep learning: a statistical viewpoint
The remarkable practical success of deep learning has revealed some majo...

03/22/2019 · Implicit Regularization via Hadamard Product Over-Parametrization in High-Dimensional Linear Regression
We consider Hadamard product parametrization as a change-of-variable (ov...

04/29/2022 · Implicit Regularization Properties of Variance Reduced Stochastic Mirror Descent
In machine learning and statistical data analysis, we often run into obj...

12/30/2017 · Theory of Deep Learning III: explaining the non-overfitting puzzle
A main puzzle of deep networks revolves around the absence of overfittin...

02/12/2022 · Relaxing the Feature Covariance Assumption: Time-Variant Bounds for Benign Overfitting in Linear Regression
Benign overfitting demonstrates that overparameterized models can perfor...
