A Residual Bootstrap for High-Dimensional Regression with Near Low-Rank Designs

07/04/2016
by Miles E. Lopes, et al.

We study the residual bootstrap (RB) method in the context of high-dimensional linear regression. Specifically, we analyze the distributional approximation of linear contrasts c^⊤(β̂_ρ - β), where β̂_ρ is a ridge-regression estimator. When regression coefficients are estimated via least squares, classical results show that RB consistently approximates the laws of contrasts, provided that p ≪ n, where the design matrix is of size n × p. Up to now, relatively little work has considered how additional structure in the linear model may extend the validity of RB to the setting where p/n ≍ 1. In this setting, we propose a version of RB that resamples residuals obtained from ridge regression. Our main structural assumption on the design matrix is that it is nearly low rank, in the sense that its singular values decay according to a power-law profile. Under a few extra technical assumptions, we derive a simple criterion for ensuring that RB consistently approximates the law of a given contrast. We then specialize this result to study confidence intervals for mean response values X_i^⊤ β, where X_i^⊤ is the ith row of the design. More precisely, we show that, conditionally on a Gaussian design with near low-rank structure, RB simultaneously approximates all of the laws X_i^⊤(β̂_ρ - β), i = 1, ..., n. This result is also notable as it imposes no sparsity assumptions on β. Furthermore, since our consistency results are formulated in terms of the Mallows (Kantorovich) metric, the existence of a limiting distribution is not required.
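To make the resampling scheme concrete, here is a minimal Python/NumPy sketch of a ridge-based residual bootstrap for a single contrast, following the generic RB recipe (fit ridge, center the residuals, resample them to build synthetic responses, refit). The function names, the centering step, and the defaults for ρ and the number of replicates B are illustrative assumptions, not the paper's exact algorithm or tuning.

```python
import numpy as np

def ridge(X, y, rho):
    """Ridge estimator: beta_hat_rho = (X^T X + rho I)^{-1} X^T y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + rho * np.eye(p), X.T @ y)

def rb_contrast(X, y, c, rho, B=2000, seed=0):
    """Hypothetical helper: approximate the law of c^T (beta_hat_rho - beta)
    by resampling centered ridge residuals (generic RB recipe, not the
    paper's exact variant)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    beta_hat = ridge(X, y, rho)
    fitted = X @ beta_hat
    resid = y - fitted
    resid = resid - resid.mean()  # center so resampled errors have mean zero
    draws = np.empty(B)
    for b in range(B):
        # Synthetic response: fitted values plus resampled residuals
        y_star = fitted + rng.choice(resid, size=n, replace=True)
        draws[b] = c @ (ridge(X, y_star, rho) - beta_hat)
    return draws  # empirical bootstrap approximation to the contrast's law
```

Taking c = X_i (the ith design row) yields bootstrap draws approximating the law of X_i^⊤(β̂_ρ - β); the α/2 and 1 - α/2 quantiles of these draws then give a basic bootstrap confidence interval for the mean response X_i^⊤ β.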


