Implicit Regularization via Hadamard Product Over-Parametrization in High-Dimensional Linear Regression

03/22/2019
by Peng Zhao, et al.

We consider the Hadamard product parametrization as a change-of-variable (over-parametrization) technique for solving least-squares problems in the context of linear regression. Despite the non-convexity and the exponentially many saddle points induced by the change of variable, we show that under certain conditions this over-parametrization leads to implicit regularization: if we directly apply gradient descent to the residual sum of squares with sufficiently small initial values, then under a proper early stopping rule the iterates converge to a nearly sparse, rate-optimal solution that is more accurate than explicitly regularized approaches. In particular, the resulting estimator does not suffer from the extra bias introduced by explicit penalties, and it can achieve the parametric root-n rate (independent of the dimension) under suitable conditions on the signal-to-noise ratio. We perform simulations comparing our method with explicitly regularized approaches to high-dimensional linear regression. The results illustrate the advantages of implicit regularization via gradient descent after over-parametrization in sparse vector estimation.
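To make the procedure concrete, below is a minimal sketch (not the authors' code) of gradient descent on the residual sum of squares under a signed Hadamard over-parametrization beta = u*u - v*v with a small common initialization alpha and early stopping chosen on a held-out split. The specific parametrization, step size, initialization scale, and stopping rule here are illustrative assumptions; the paper's exact choices may differ.

```python
import numpy as np

def hadamard_gd_path(X, y, alpha=1e-3, lr=None, max_iter=5000):
    """Gradient descent on RSS/(2n) under the over-parametrization
    beta = u*u - v*v (elementwise), with small initialization
    u = v = alpha (assumed form; illustrative only). Returns the
    iterate path so an early-stopping rule can pick one iterate."""
    n, p = X.shape
    u = np.full(p, alpha)
    v = np.full(p, alpha)
    if lr is None:
        # conservative step size from the largest singular value of X
        lr = 0.25 * n / np.linalg.norm(X, 2) ** 2
    path = []
    for _ in range(max_iter):
        beta = u * u - v * v
        g = X.T @ (X @ beta - y) / n  # gradient of RSS/(2n) w.r.t. beta
        # chain rule through beta: d(beta)/du = 2u, d(beta)/dv = -2v
        u, v = u - lr * 2 * g * u, v + lr * 2 * g * v
        path.append(u * u - v * v)
    return path

# Toy usage: sparse ground truth, early stopping via a held-out split.
rng = np.random.default_rng(0)
n, p, s = 100, 500, 5
X = rng.standard_normal((n, p))
beta_star = np.zeros(p)
beta_star[:s] = 1.0
y = X @ beta_star + 0.1 * rng.standard_normal(n)

X_tr, y_tr, X_val, y_val = X[:80], y[:80], X[80:], y[80:]
path = hadamard_gd_path(X_tr, y_tr)
errs = [np.mean((y_val - X_val @ b) ** 2) for b in path]
beta_hat = path[int(np.argmin(errs))]  # early-stopped estimate
print("estimation error:", np.linalg.norm(beta_hat - beta_star))
```

In this sketch the stopping time plays the role that a penalty's tuning parameter plays in explicit regularization: the small initialization keeps early iterates nearly sparse, and the iteration index trades bias for variance.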

