
Risk estimation for high-dimensional lasso regression

02/04/2016
by Darren Homrighausen, et al.
Indiana University Bloomington, Colorado State University

In high-dimensional estimation, analysts face more parameters p than available observations n, and asymptotic analyses of performance allow the ratio p/n → ∞. This setting makes regularization both necessary and desirable for estimators to possess theoretical guarantees. However, the amount of regularization, typically governed by one or more tuning parameters, is integral to achieving good performance. In practice, the tuning parameter is chosen through resampling methods (e.g., cross-validation), generalized information criteria, or reformulations of the optimization problem (e.g., the square-root lasso or scaled sparse regression). Each of these techniques comes with varying levels of theoretical guarantee in the low- or high-dimensional regimes. However, there are notable deficiencies in the literature: the theory, and sometimes the practice, of many methods relies on knowledge or estimation of the noise variance, which is difficult to estimate in high dimensions. In this paper, we provide theoretical intuition suggesting that some previously proposed approaches based on information criteria work poorly in high dimensions. We introduce a suite of new risk estimators that leverage the burgeoning literature on high-dimensional variance estimation. Finally, we compare our proposal with many existing methods for choosing the tuning parameter for lasso regression in an extensive simulation study of their finite-sample performance. We find that our new estimators perform quite well, often better than existing approaches, across a wide range of simulation conditions and evaluation criteria.
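To make the tuning-parameter problem concrete, here is a minimal sketch in Python with scikit-learn contrasting two of the strategies named above: cross-validation and a Cp-style plug-in risk estimate that requires a variance estimate. The particular variance estimator (residual variance from the CV-tuned lasso, in the spirit of Reid et al.) and the degrees-of-freedom formula (the number of nonzero lasso coefficients, following Zou et al.) are illustrative assumptions, not the risk estimators proposed in this paper.

```python
import numpy as np
from sklearn.linear_model import Lasso, LassoCV

rng = np.random.default_rng(0)
n, p, s = 100, 500, 5                        # high-dimensional: p >> n
X = rng.standard_normal((n, p))
beta = np.concatenate([np.ones(s), np.zeros(p - s)])  # sparse truth
y = X @ beta + rng.standard_normal(n)

# Strategy 1: choose lambda by 10-fold cross-validation.
cv_fit = LassoCV(cv=10, max_iter=10_000).fit(X, y)

# Plug-in variance estimate (assumption, in the spirit of Reid et al.):
# residual variance of the CV-tuned fit, corrected for selected model size.
s_hat = np.count_nonzero(cv_fit.coef_)
sigma2_hat = np.sum((y - cv_fit.predict(X)) ** 2) / max(n - s_hat, 1)

# Strategy 2: Cp-style risk estimate over the same lambda path,
#   R_hat(lambda) = RSS/n + 2 * sigma2_hat * df(lambda) / n,
# with df(lambda) = number of nonzero lasso coefficients.
risks = []
for lam in cv_fit.alphas_:
    fit = Lasso(alpha=lam, max_iter=10_000).fit(X, y)
    rss = np.sum((y - fit.predict(X)) ** 2)
    risks.append(rss / n + 2 * sigma2_hat * np.count_nonzero(fit.coef_) / n)

lam_risk = cv_fit.alphas_[int(np.argmin(risks))]
print(f"CV lambda: {cv_fit.alpha_:.4f}  Cp-style lambda: {lam_risk:.4f}")
```

The weak point in the Cp-style criterion is precisely sigma2_hat: the risk estimate is only as good as the variance estimate plugged into it, which is what motivates coupling risk estimation with dedicated high-dimensional variance estimators.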

