Errors-in-variables models with dependent measurements

11/15/2016
by Mark Rudelson, et al.

Suppose that we observe y ∈ R^n and X ∈ R^{n × m} in the following errors-in-variables model:

    y = X_0 β^* + ϵ,
    X = X_0 + W,

where X_0 is an n × m design matrix with independent subgaussian row vectors, ϵ ∈ R^n is a noise vector, and W is a mean-zero n × m random noise matrix with independent subgaussian column vectors, independent of X_0 and ϵ. This model differs significantly from those analyzed in the literature in that we allow the measurement error for each covariate to be a vector that is dependent across its n observations. Such error structures appear in the science literature when modeling trial-to-trial fluctuations in response strength shared across a set of neurons. Under sparsity and restricted eigenvalue type conditions, we show that a sparse vector β^* ∈ R^m can be recovered from the model given a single observation matrix X and the response vector y. We establish consistency in estimating β^* and obtain rates of convergence in the ℓ_q norm for q = 1, 2. The error bounds approach those of the regular Lasso and the Dantzig selector as the errors in W tend to 0. We analyze the convergence rates of gradient descent methods for solving the nonconvex programs and show that the composite gradient descent algorithm is guaranteed to converge at a geometric rate to a neighborhood of the global minimizers; the size of this neighborhood is bounded by the statistical error in the ℓ_2 norm. Our analysis reveals interesting connections between computational and statistical efficiency and the concentration of measure phenomenon in random matrix theory. We provide simulation evidence illuminating the theoretical predictions.
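To make the model and the iteration concrete, here is a minimal simulation sketch in Python/NumPy, not the paper's implementation. It simplifies the setting by drawing W with i.i.d. entries of known variance τ² (whereas the paper allows column-dependent subgaussian noise), and the surrogate Γ = XᵀX/n − τ²I, the regularization level lam, the step size eta, and the ℓ_1 radius R are all illustrative choices rather than the paper's estimators.

```python
# Sketch: errors-in-variables simulation + composite gradient descent
# on a corrected-Lasso surrogate (Loh-Wainwright style). Simplifying
# assumption: isotropic measurement error with known variance tau2.
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 200, 500, 5            # samples, covariates, sparsity
tau2 = 0.25                      # measurement-error variance (assumed known)

beta_star = np.zeros(m)
beta_star[:s] = 1.0

X0 = rng.standard_normal((n, m))                  # clean subgaussian design
W = np.sqrt(tau2) * rng.standard_normal((n, m))   # additive measurement error
eps = 0.5 * rng.standard_normal(n)
y = X0 @ beta_star + eps
X = X0 + W                                        # only X and y are observed

# Corrected (possibly indefinite, hence nonconvex) quadratic loss:
#   L(b) = b' Gamma b / 2 - gamma' b
Gamma = X.T @ X / n - tau2 * np.eye(m)            # bias-corrected Gram matrix
gamma = X.T @ y / n

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def project_l1(v, radius):
    # Euclidean projection onto the l1 ball (Duchi et al., 2008).
    if np.abs(v).sum() <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    j = np.arange(1, len(u) + 1)
    k = np.nonzero(u - (css - radius) / j > 0)[0][-1]
    theta = (css[k] - radius) / (k + 1)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

lam = 2.0 * np.sqrt(np.log(m) / n)     # Lasso-scale regularization
eta = 1.0 / np.linalg.norm(Gamma, 2)   # step size ~ 1 / spectral norm
R = 2.0 * np.abs(beta_star).sum()      # l1 side constraint radius

beta = np.zeros(m)
for _ in range(500):
    # Composite gradient step: gradient of the quadratic part, then
    # soft-thresholding for the l1 penalty, then the l1-ball constraint.
    beta = soft_threshold(beta - eta * (Gamma @ beta - gamma), eta * lam)
    beta = project_l1(beta, R)

print("l2 estimation error:", np.linalg.norm(beta - beta_star))
```

Setting τ² = 0 turns Γ back into the ordinary Gram matrix XᵀX/n, and the loop becomes standard ISTA for the Lasso, mirroring the abstract's remark that the error bounds approach those of the regular Lasso as the errors in W tend to 0.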


Related research:

02/09/2015 · High dimensional errors-in-variables models with dependent measurements
Suppose that we observe y ∈ R^n and X ∈ R^{n × m} in the following errors-in...

04/25/2011 · Fast global convergence of gradient methods for high-dimensional statistical recovery
Many statistical M-estimators are based on convex optimization problems ...

09/23/2012 · Gemini: Graph estimation with matrix variate normal instances
Undirected graphs can be used to describe matrix variate distributions. ...

12/10/2020 · Low-rank matrix estimation in multi-response regression with measurement errors: Statistical and computational guarantees
In this paper, we investigate the matrix estimation problem in the multi...

12/13/2020 · Pseudo-likelihood-based M-estimation of random graphs with dependent edges and parameter vectors of increasing dimension
An important question in statistical network analysis is how to estimate...

09/16/2011 · High-dimensional regression with noisy and missing data: Provable guarantees with nonconvexity
Although the standard formulations of prediction problems involve fully-...

04/01/2016 · Analysis of gradient descent methods with non-diminishing, bounded errors
The main aim of this paper is to provide an analysis of gradient descent...
