
High-Dimensional Regression with Binary Coefficients. Estimating Squared Error and a Phase Transition

01/16/2017
by David Gamarnik, et al.
MIT

We consider a sparse linear regression model $Y = X\beta^* + W$ where $X$ has i.i.d. Gaussian entries, $W$ is a noise vector with mean-zero, variance-$\sigma^2$ Gaussian entries, and $\beta^*$ is a binary vector with support size (sparsity) $k$. Using a novel conditional second moment method we obtain a tight, up to a multiplicative constant, approximation of the optimal squared error $\min_\beta \|Y - X\beta\|_2$, where the minimization is over all $k$-sparse binary vectors $\beta$. The approximation reveals interesting structural properties of the underlying regression problem. In particular:

(a) We establish that $n^* = 2k\log p / \log(2k/\sigma^2 + 1)$ is a phase transition point with the following "all-or-nothing" property. When $n$ exceeds $n^*$, $(2k)^{-1}\|\beta_2 - \beta^*\|_0 \approx 0$, and when $n$ is below $n^*$, $(2k)^{-1}\|\beta_2 - \beta^*\|_0 \approx 1$, where $\beta_2$ is the optimal solution achieving the smallest squared error. With this we prove that $n^*$ is the asymptotic threshold for recovering $\beta^*$ information-theoretically.

(b) We compute the squared error for an intermediate problem $\min_\beta \|Y - X\beta\|_2$ where the minimization is restricted to vectors $\beta$ with $\|\beta - \beta^*\|_0 = 2k\zeta$, for $\zeta \in [0,1]$. We show that a lower bound part $\Gamma(\zeta)$ of the estimate, which corresponds to the estimate based on the first moment method, undergoes a phase transition at three different thresholds: first at $n_{\mathrm{inf},1} = \sigma^2 \log p$, which is the information-theoretic bound for recovering $\beta^*$ when $k = 1$ and $\sigma$ is large, then at $n^*$, and finally at $n_{\mathrm{LASSO/CS}}$.

(c) We establish a certain Overlap Gap Property (OGP) on the space of all binary vectors $\beta$ when $n < ck\log p$ for a sufficiently small constant $c$. We conjecture that the OGP is the source of the algorithmic hardness of solving the minimization problem $\min_\beta \|Y - X\beta\|_2$ in the regime $n < n_{\mathrm{LASSO/CS}}$.
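To make the quantities above concrete, here is a minimal Python sketch (not the authors' code) that computes the threshold $n^*$ and brute-forces the $k$-sparse binary least-squares problem on a toy instance. The parameter values ($p = 30$, $k = 2$, $\sigma^2 = 1$) and the exhaustive solver are illustrative assumptions, chosen small enough that enumerating all $\binom{p}{k}$ supports is feasible.

```python
import itertools
import numpy as np

def n_star(k, p, sigma2):
    """Phase-transition point n* = 2k log(p) / log(2k/sigma^2 + 1)."""
    return 2 * k * np.log(p) / np.log(2 * k / sigma2 + 1)

def best_k_sparse_binary(Y, X, k):
    """Exhaustively minimize ||Y - X beta||_2 over k-sparse binary beta.
    Feasible only for tiny p and k; the paper conjectures the general
    problem is algorithmically hard when n < n_LASSO/CS."""
    p = X.shape[1]
    best_support, best_err = None, np.inf
    for support in itertools.combinations(range(p), k):
        # beta has ones exactly on `support`, so X @ beta is a column sum
        err = np.linalg.norm(Y - X[:, list(support)].sum(axis=1))
        if err < best_err:
            best_support, best_err = support, err
    return set(best_support), best_err

rng = np.random.default_rng(0)
p, k, sigma2 = 30, 2, 1.0  # illustrative toy values, not from the paper
true_support = set(int(i) for i in rng.choice(p, size=k, replace=False))
beta_star = np.zeros(p)
beta_star[list(true_support)] = 1.0

threshold = n_star(k, p, sigma2)
for n in (max(k + 1, int(0.3 * threshold)), int(3 * threshold)):
    X = rng.standard_normal((n, p))
    Y = X @ beta_star + np.sqrt(sigma2) * rng.standard_normal(n)
    est_support, err = best_k_sparse_binary(Y, X, k)
    overlap = len(est_support & true_support)
    # for two k-sparse binary vectors, (2k)^{-1}||beta_hat - beta*||_0
    # equals 1 - overlap/k
    print(f"n={n:3d} (n* ~ {threshold:.1f}): recovered {overlap}/{k} "
          f"support coordinates, residual {err:.2f}")
```

On runs well above $n^*$ the recovered support should typically coincide with the true one, while well below it the overlap tends to drop, loosely mirroring the all-or-nothing behavior; the sharp transition itself is an asymptotic statement far beyond this toy scale.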

