
When Does More Regularization Imply Fewer Degrees of Freedom? Sufficient Conditions and Counter Examples from Lasso and Ridge Regression

11/12/2013
by Shachar Kaufman, et al.
Tel Aviv University

Regularization aims to improve the prediction performance of a given statistical modeling approach by moving to a second approach that achieves worse training error but is expected to have fewer degrees of freedom, i.e., better agreement between training and prediction error. We show here, however, that this expected behavior does not hold in general. In fact, counterexamples are given showing that regularization can increase the degrees of freedom in simple situations, including lasso and ridge regression, the most common regularization approaches in use. In such situations, regularization increases both training error and degrees of freedom, and is thus inherently without merit. On the other hand, two important regularization scenarios are described where the expected reduction in degrees of freedom is indeed guaranteed: (a) all symmetric linear smoothers, and (b) linear regression versus convex-constrained linear regression (as in the constrained variants of ridge regression and the lasso).
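To make the quantity at stake concrete, here is a minimal sketch, not taken from the paper (the toy design matrix and the ridge_df helper are my own illustration), of the effective degrees of freedom of ridge regression, an instance of the symmetric linear smoothers covered by guaranteed case (a). For a linear smoother y_hat = S(lambda) y, the degrees of freedom are df(lambda) = tr S(lambda); for ridge, S(lambda) = X (X'X + lambda I)^{-1} X', which via the singular values d_i of X gives df(lambda) = sum_i d_i^2 / (d_i^2 + lambda), a quantity that decreases monotonically in lambda:

import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 10
X = rng.standard_normal((n, p))  # toy design matrix, chosen for illustration

def ridge_df(X, lam):
    """Effective degrees of freedom tr(S) of the ridge smoother
    S = X (X'X + lam*I)^{-1} X'.

    Via the SVD of X this equals sum_i d_i^2 / (d_i^2 + lam),
    which is monotonically decreasing in lam >= 0.
    """
    d = np.linalg.svd(X, compute_uv=False)
    return np.sum(d**2 / (d**2 + lam))

for lam in [0.0, 0.1, 1.0, 10.0, 100.0]:
    print(f"lambda = {lam:6.1f}  ->  df = {ridge_df(X, lam):6.3f}")

At lambda = 0 this recovers df = p for a full-rank design, and df shrinks toward 0 as lambda grows. For the lasso no such closed form exists; degrees of freedom are instead defined through the covariance form df = sum_i Cov(y_hat_i, y_i) / sigma^2, and it is this quantity that the paper's counterexamples show need not decrease as the penalty grows.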
