
Robustness in sparse linear models: relative efficiency based on robust approximate message passing

07/31/2015
by Jelena Bradic, et al.

Understanding efficiency in high-dimensional linear models is a longstanding problem. Classical work on lower-dimensional problems, dating back to Huber and Bickel, illustrated the benefits of efficient loss functions. When the number of parameters p is of the same order as the sample size n, p ≈ n, an efficiency pattern different from Huber's was recently established. In this work, we consider the effects of model selection on the estimation efficiency of penalized methods. In particular, we explore whether sparsity results in new efficiency patterns when p > n. To derive the asymptotic mean squared error of regularized M-estimators, we use the powerful framework of approximate message passing. We propose a novel robust and sparse approximate message passing algorithm (RAMP) that is adaptive to the error distribution. Our algorithm accommodates many non-quadratic and non-differentiable loss functions. We derive its asymptotic mean squared error and show its convergence, while allowing p, n, s → ∞ with n/p ∈ (0,1) and n/s ∈ (1,∞). We identify new patterns of relative efficiency among a number of penalized M-estimators when p is much larger than n. We show that the classical information bound is no longer reachable, even for light-tailed error distributions. We show that the penalized least absolute deviation estimator dominates the penalized least squares estimator for heavy-tailed error distributions. We observe this pattern for all choices of the number of non-zero parameters s, both s ≤ n and s ≈ n. In non-penalized problems with s = p ≈ n, the opposite regime holds. We thus find that the presence of model selection significantly changes the efficiency patterns.
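The abstract describes the RAMP iteration only at a high level. For orientation, below is a minimal sketch of the standard AMP iteration for the quadratic-loss (LASSO) special case, which RAMP generalizes by replacing the residual step with the proximal operator of a robust, possibly non-differentiable loss. All variable names, the threshold schedule, and the iteration count are illustrative assumptions, not the paper's calibrated algorithm.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 penalty: elementwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp_lasso(X, y, alpha=2.0, n_iter=30):
    # Minimal AMP iteration for penalized least squares (LASSO).
    # Assumes X has i.i.d. entries of variance 1/n, the usual AMP setting.
    # A robust variant in the spirit of RAMP would pass the residuals
    # through the proximal map of a robust loss (e.g. absolute deviation)
    # instead of using them directly.
    n, p = X.shape
    beta = np.zeros(p)
    z = y.copy()  # Onsager-corrected residuals
    for _ in range(n_iter):
        tau = np.sqrt(np.mean(z ** 2))                       # effective noise level
        beta = soft_threshold(beta + X.T @ z, alpha * tau)   # denoising step
        b = np.count_nonzero(beta) / n                       # average derivative of the denoiser
        z = y - X @ beta + b * z                             # residuals with Onsager correction
    return beta

# Toy usage: n < p with a sparse signal and heavy-tailed errors.
rng = np.random.default_rng(0)
n, p, s = 200, 400, 10
X = rng.normal(scale=1 / np.sqrt(n), size=(n, p))
beta0 = np.zeros(p)
beta0[:s] = 3.0
y = X @ beta0 + rng.standard_t(df=3, size=n)
beta_hat = amp_lasso(X, y)
```

For the penalized least absolute deviation case studied in the paper, the denoising step stays the same while the residual update involves the proximal operator of the absolute loss; the paper calibrates the thresholds exactly via state evolution, which this sketch does not attempt.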


Asymptotic errors for convex penalized linear regression beyond Gaussian matrices (02/11/2020)
We consider the problem of learning a coefficient vector x_0 in R^N from...

Prediction Errors for Penalized Regressions based on Generalized Approximate Message Passing (06/26/2022)
We discuss the prediction accuracy of assumed statistical models in term...

Asymptotic Statistical Analysis of Sparse Group LASSO via Approximate Message Passing Algorithm (07/02/2021)
Sparse Group LASSO (SGL) is a regularized model for high-dimensional lin...

Cramér-Rao Bound for Estimation After Model Selection and its Application to Sparse Vector Estimation (04/15/2019)
In many practical parameter estimation problems, such as coefficient est...

All-or-Nothing Phenomena: From Single-Letter to High Dimensions (12/30/2019)
We consider the linear regression problem of estimating a p-dimensional ...

Estimator of Prediction Error Based on Approximate Message Passing for Penalized Linear Regression (02/20/2018)
We propose an estimator of prediction error using an approximate message...

Performance Limits with Additive Error Metrics in Noisy Multi-Measurement Vector Problem (01/02/2018)
Real-world applications such as magnetic resonance imaging with multiple...