High Dimensional Linear Regression using Lattice Basis Reduction

03/18/2018
by David Gamarnik, et al.

We consider a high-dimensional linear regression problem where the goal is to efficiently recover an unknown vector β^* from n noisy linear observations Y = Xβ^* + W ∈ R^n, for known X ∈ R^{n × p} and unknown W ∈ R^n. Unlike most of the literature on this model, we make no sparsity assumption on β^*. Instead, we adopt a regularization based on assuming that the underlying vector β^* has rational entries with the same denominator Q ∈ Z_{>0}. We call this the Q-rationality assumption. We propose a new polynomial-time algorithm for this task which is based on the seminal Lenstra-Lenstra-Lovász (LLL) lattice basis reduction algorithm. We establish that under the Q-rationality assumption, our algorithm recovers the vector β^* exactly for a large class of distributions of the i.i.d. entries of X and non-zero noise W. We prove that it succeeds under small noise, even when the learner has access to only one observation (n = 1). Furthermore, we prove that in the case of Gaussian white noise for W, n = o(p/log p), and Q sufficiently large, our algorithm tolerates a nearly optimal information-theoretic level of noise.
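To make the Q-rationality idea concrete, here is a minimal sketch of the standard integer-relation use of LLL that this style of algorithm builds on, not the paper's exact construction. It uses the fpylll bindings for LLL; the scaling S, the penalty weight M, the tiny noise level, and the snapping of X to a 1/S grid are all illustrative assumptions chosen so the toy instance recovers β^* exactly from a single observation (n = 1).

```python
# Sketch: recover beta* = z/Q from one observation y = <x, beta*> + w via LLL.
# Not the paper's exact algorithm; a toy integer-relation construction.
import numpy as np
from fpylll import IntegerMatrix, LLL

rng = np.random.default_rng(0)

p, Q = 8, 10          # ambient dimension and the common denominator of beta*
S = 10**12            # scaling used to embed real data into an integer lattice
M = 10**8             # weight forcing LLL to zero out the relation coordinate

z_true = rng.integers(-5, 6, size=p)        # hidden integer numerators
beta_true = z_true / Q                      # beta* has Q-rational entries

x = rng.uniform(-1.0, 1.0, size=p)
x_int = np.rint(S * x).astype(np.int64)     # snap X to a 1/S grid (assumption) so
x = x_int / S                               # the integer embedding below is exact
w = 1e-14 * rng.standard_normal()           # noise small enough that S*Q*|w| < 1/2
y = float(x @ beta_true) + w                # the single noisy observation (n = 1)

# Lattice basis rows: (e_i, 0, M*S*x_i) for i < p, plus (0,...,0, 1, -M*round(S*Q*y)).
# The combination sum_i z_i b_i + b_p equals (z_1,...,z_p, 1, 0): a very short
# lattice vector that LLL can find, revealing the numerators of beta*.
B = [[0] * (p + 2) for _ in range(p + 1)]
for i in range(p):
    B[i][i] = 1
    B[i][p + 1] = M * int(x_int[i])
B[p][p] = 1
B[p][p + 1] = -M * round(S * Q * y)

A = IntegerMatrix.from_matrix(B)
LLL.reduction(A)                            # in-place LLL basis reduction

# A reduced row with (p+1)-st coordinate +-1 and final coordinate 0 encodes
# the numerators z, up to a global sign.
for i in range(A.nrows):
    row = [A[i, j] for j in range(p + 2)]
    if abs(row[p]) == 1 and row[p + 1] == 0:
        z_hat = [c * row[p] for c in row[:p]]
        print("recovered numerators:", z_hat)
        print("true numerators:     ", list(z_true))
        break
```

The large weight M makes any lattice vector with a non-zero final coordinate long, so LLL concentrates on integer combinations that satisfy the (scaled, rounded) linear equation; Q-rationality is what guarantees such an all-integer short vector exists in the first place.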
