Hypothesis Testing in High-Dimensional Regression under the Gaussian Random Design Model: Asymptotic Theory

01/17/2013
by Adel Javanmard, et al.

We consider linear regression in the high-dimensional regime where the number of observations n is smaller than the number of parameters p. A very successful approach in this setting uses ℓ_1-penalized least squares (a.k.a. the Lasso) to search for a subset of s_0 < n parameters that best explain the data, while setting the other parameters to zero. A considerable amount of work has been devoted to characterizing the estimation and model selection problems within this approach. In this paper we consider instead the fundamental, but far less understood, question of statistical significance. More precisely, we address the problem of computing p-values for single regression coefficients. On one hand, we develop a general upper bound on the minimax power of tests with a given significance level. On the other, we prove that this upper bound is (nearly) achievable through a practical procedure in the case of random design matrices with independent entries. Our approach is based on a debiasing of the Lasso estimator. The analysis builds on a rigorous characterization of the asymptotic distribution of the Lasso estimator and its debiased version. Our result holds for optimal sample size, i.e., when n is at least on the order of s_0 log(p/s_0). We generalize our approach to random design matrices with i.i.d. Gaussian rows x_i ∼ N(0, Σ). In this case we prove that a similar distributional characterization (termed the 'standard distributional limit') holds for n much larger than s_0 (log p)^2. Finally, we show that for optimal sample size, n being at least of order s_0 log(p/s_0), the standard distributional limit for general Gaussian designs can be derived from the replica heuristics in statistical physics.
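The debiasing step described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's exact procedure: it assumes a design with i.i.d. standard Gaussian entries (so the inverse-covariance factor in the correction is simply the identity), a known noise level sigma, and a hypothetical regularization choice lam; the debiased estimate adds back a one-step correction (1/n) Xᵀ(y − Xθ̂) to the Lasso solution, after which coordinate-wise z-scores yield approximate two-sided p-values.

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s0, sigma = 200, 400, 5, 1.0

# Random design with i.i.d. N(0,1) entries; s0 strong nonzero coefficients.
X = rng.standard_normal((n, p))
theta = np.zeros(p)
theta[:s0] = 4.0
y = X @ theta + sigma * rng.standard_normal(n)

# Lasso fit (sklearn minimizes (1/2n)||y - X b||^2 + alpha ||b||_1);
# lam below is a hypothetical textbook-style choice, not the paper's tuning.
lam = 2 * sigma * np.sqrt(np.log(p) / n)
theta_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_

# Debiasing: add a one-step correction to undo the Lasso shrinkage bias.
# With independent standard entries, the precision-matrix factor is ~ identity.
resid = y - X @ theta_hat
theta_d = theta_hat + X.T @ resid / n

# Approximate standard errors and two-sided p-values from the
# asymptotic normal limit of the debiased coordinates.
Sigma_hat = X.T @ X / n
se = sigma * np.sqrt(np.diag(Sigma_hat) / n)
pvals = 2 * norm.sf(np.abs(theta_d / se))
```

On simulated data like this, the p-values for the s0 active coordinates come out essentially zero, while the null coordinates are approximately uniform on [0, 1].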


