De-Biasing The Lasso With Degrees-of-Freedom Adjustment

02/24/2019
by Pierre C. Bellec, et al.

This paper studies schemes to de-bias the Lasso in sparse linear regression, where the goal is to estimate and construct confidence intervals for a low-dimensional projection of the unknown coefficient vector in a preconceived direction a_0. We assume that the design matrix has iid Gaussian rows with known covariance matrix Σ. Our analysis reveals that previously proposed de-biasing schemes for the Lasso require a modification in order to enjoy asymptotic efficiency over the full range of sparsity levels. This modification takes the form of a degrees-of-freedom adjustment that accounts for the dimension of the model selected by the Lasso. Let s_0 denote the number of nonzero coefficients of the true coefficient vector. The unadjusted de-biasing schemes proposed in previous studies enjoy efficiency if s_0 ≪ n^{2/3}, up to logarithmic factors. However, if s_0 ≫ n^{2/3}, the unadjusted scheme cannot be efficient in certain directions a_0. In the latter regime, it is necessary to modify existing procedures by an adjustment that accounts for the degrees of freedom of the Lasso. The proposed degrees-of-freedom adjustment grants asymptotic efficiency for any direction a_0. This holds under a Sparse Riesz Condition on the covariance matrix Σ and the sample size requirements s_0/p → 0 and s_0 log(p/s_0)/n → 0. Our analysis also highlights that the degrees-of-freedom adjustment is not necessary when the initial bias of the Lasso in the direction a_0 is small, which is granted under more stringent conditions on Σ^{-1}. This explains why the necessity of a degrees-of-freedom adjustment did not appear in some previous studies. The main proof argument involves a Gaussian interpolation path similar to that used to derive Slepian's lemma. It yields a sharp ℓ_∞ error bound for the Lasso under Gaussian design, which is of independent interest.
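The correction described above can be sketched in code. The snippet below is a minimal illustration, not the paper's verbatim estimator: it assumes the common de-biased form a_0^T β̂ + a_0^T Σ^{-1} X^T (y − Xβ̂) / n, with the degrees-of-freedom adjustment replacing the denominator n by n − df, where df is the number of nonzero Lasso coefficients. The synthetic data, penalty level, and direction a_0 = e_1 are illustrative choices.

```python
# Schematic sketch of a degrees-of-freedom-adjusted de-biased Lasso.
# Assumptions (not from the abstract): Sigma = I, alpha = 0.1, a_0 = e_1,
# and the standard de-biasing form with denominator n - df.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s0 = 200, 50, 3
Sigma_inv = np.eye(p)                 # known covariance Sigma = I, so Sigma^{-1} = I
beta = np.zeros(p)
beta[:s0] = 2.0                       # s_0 nonzero true coefficients
X = rng.standard_normal((n, p))       # iid Gaussian rows
y = X @ beta + 0.5 * rng.standard_normal(n)

lasso = Lasso(alpha=0.1).fit(X, y)
beta_hat = lasso.coef_
df = int(np.count_nonzero(beta_hat))  # dimension of the model selected by the Lasso
residual = y - X @ beta_hat

a0 = np.zeros(p)
a0[0] = 1.0                           # preconceived direction a_0
theta_plugin = a0 @ beta_hat          # biased plug-in estimate of a_0^T beta
# Degrees-of-freedom adjustment: divide by n - df rather than n.
theta_debiased = theta_plugin + a0 @ Sigma_inv @ X.T @ residual / (n - df)
print(theta_debiased)
```

With s_0 small relative to n, the adjustment barely changes the estimate; the abstract's point is that once s_0 grows past n^{2/3}, the unadjusted denominator n leaves a bias that is no longer asymptotically negligible in some directions a_0.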

