
Model Agnostic High-Dimensional Error-in-Variable Regression

by Anish Agarwal, et al.

We consider the problem of high-dimensional error-in-variable regression, where we only observe a sparse, noisy version of the covariate data. We propose an algorithm that utilizes matrix estimation (ME) as a key subroutine to de-noise the corrupted data, and then performs ordinary least squares regression. When the ME subroutine is instantiated with hard singular value thresholding (HSVT), our results indicate that if the number of samples scales as ω(ρ^-4 r log^5(p)), then our in- and out-of-sample prediction error decays to 0 as p → ∞; here ρ represents the fraction of observed data, r is the (approximate) rank of the true covariate matrix, and p is the number of covariates. As an important byproduct of our approach, we demonstrate that HSVT with regression acts as implicit ℓ_0-regularization, since HSVT aims to find a low-rank structure within the covariance matrix. Thus, we can view the sparsity of the estimated parameter as a consequence of the covariate structure rather than a model assumption, as is often considered in the literature. Moreover, our non-asymptotic bounds match (up to log^4(p) factors) the best guaranteed sample complexity results in the literature for algorithms that require precise knowledge of the underlying model; we highlight that our approach is model agnostic. In our analysis, we obtain two technical results of independent interest: first, we provide a simple bound on the spectral norm of random matrices with independent sub-exponential rows and randomly missing entries; second, we bound the max column sum error -- a nonstandard error metric -- for HSVT. Our setting enables us to apply our results to applications such as synthetic control for causal inference, time series analysis, and regression with privacy. It is important to note that the existing inventory of methods is unable to analyze these applications.
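To make the two-step pipeline concrete, here is a minimal sketch of HSVT de-noising followed by ordinary least squares, under stated assumptions: the function names (hsvt_denoise, me_then_ols), the NaN encoding of missing entries, the rescaling by the estimated observation fraction, and the specific threshold used in the example are illustrative choices, not the authors' implementation.

```python
import numpy as np

def hsvt_denoise(X_obs, threshold):
    """De-noise a sparse, noisy covariate matrix via hard singular value
    thresholding (HSVT). Missing entries are encoded as np.nan (an assumption
    made for this sketch)."""
    mask = ~np.isnan(X_obs)
    rho_hat = mask.mean()                      # estimated fraction of observed entries
    # Fill missing entries with 0 and rescale by 1/rho_hat so the filled matrix
    # is (in expectation) proportional to the true covariate matrix.
    X_filled = np.where(mask, X_obs, 0.0) / max(rho_hat, 1e-12)
    # Hard threshold the spectrum: keep only singular values >= threshold.
    U, s, Vt = np.linalg.svd(X_filled, full_matrices=False)
    s_kept = np.where(s >= threshold, s, 0.0)
    return (U * s_kept) @ Vt

def me_then_ols(X_obs, y, threshold):
    """Matrix estimation (HSVT) to de-noise covariates, then OLS regression."""
    X_hat = hsvt_denoise(X_obs, threshold)
    beta_hat, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
    return beta_hat, X_hat

# Illustrative usage: low-rank covariates, additive noise, sparse observation.
rng = np.random.default_rng(0)
n, p, r, rho = 500, 100, 5, 0.7
A = rng.normal(size=(n, r)) @ rng.normal(size=(r, p))    # (approximately) rank-r covariates
beta = np.zeros(p)
beta[:r] = 1.0
y = A @ beta + 0.1 * rng.normal(size=n)
X_obs = A + 0.5 * rng.normal(size=(n, p))                # noisy covariates
X_obs[rng.random((n, p)) > rho] = np.nan                 # only a fraction rho observed
# The threshold sqrt(n) + sqrt(p) is a heuristic noise-level choice for this sketch.
beta_hat, _ = me_then_ols(X_obs, y, threshold=np.sqrt(n) + np.sqrt(p))
```

Because the hard threshold discards small singular values, the de-noised covariate matrix is low rank, and the OLS solution is confined to its row space; this is the sense in which HSVT with regression behaves like implicit ℓ_0-regularization in the abstract above.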


Sparse covariance matrix estimation in high-dimensional deconvolution

We study the estimation of the covariance matrix Σ of a p-dimensional no...

Network Autoregression for Incomplete Matrix-Valued Time Series

We study the dynamics of matrix-valued time series with observed network...

On Principal Component Regression in a High-Dimensional Error-in-Variables Setting

We analyze the classical method of Principal Component Regression (PCR) ...

Robust High Dimensional Sparse Regression and Matching Pursuit

We consider high dimensional sparse regression, and develop strategies a...

Learning Mixture Model with Missing Values and its Application to Rankings

We consider the question of learning mixtures of generic sub-gaussian di...

Convergence bounds for nonlinear least squares and applications to tensor recovery

We consider the problem of approximating a function in general nonlinear...

Sample Efficient Toeplitz Covariance Estimation

We study the query complexity of estimating the covariance matrix T of a...