The noise barrier and the large signal bias of the Lasso and other convex estimators

04/04/2018
by Pierre C. Bellec, et al.

Convex estimators such as the Lasso, the matrix Lasso and the group Lasso have been studied extensively in the last two decades, demonstrating great success in both theory and practice. We introduce two quantities, the noise barrier and the large signal bias, that provide insight into the performance of these convex regularized estimators. It is now well understood that the Lasso achieves fast prediction rates, provided that the correlations of the design satisfy some Restricted Eigenvalue or Compatibility condition, and provided that the tuning parameter is large enough. Using the two quantities introduced in the paper, we show that the compatibility condition on the design matrix is actually unavoidable to achieve fast prediction rates with the Lasso: the Lasso must incur a loss due to the correlations of the design matrix, measured in terms of the compatibility constant. This result holds for any design matrix, any active subset of covariates, and any tuning parameter. It is also well known that the Lasso enjoys a dimension reduction property: the prediction error is of order λ√(k), where k is the sparsity, even if the ambient dimension p is much larger than k. Such results require that the tuning parameter is greater than some universal threshold. We characterize sharp phase transitions for the tuning parameter of the Lasso around a critical threshold that depends on k. If λ is equal to or larger than this critical threshold, the Lasso is minimax over k-sparse target vectors. If λ is equal to or smaller than the critical threshold, the Lasso incurs a loss of order σ√(k), which corresponds to a model of size k, even if the target vector has fewer than k nonzero coefficients. Remarkably, the lower bounds obtained in the paper also apply to random, data-driven tuning parameters. The results extend to convex penalties beyond the Lasso.
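To make the threshold behavior concrete, here is a minimal simulation sketch, not taken from the paper: it contrasts the Lasso's in-sample prediction error with the tuning parameter at the classical universal level σ√(2 log(p)/n) versus far below it. The i.i.d. Gaussian design, signal strength, scale factors, and the use of this universal level as a stand-in for the paper's k-dependent critical threshold are all illustrative assumptions.

```python
# Illustrative sketch (assumptions noted above), using scikit-learn's Lasso.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, k, sigma = 200, 1000, 5, 1.0

X = rng.standard_normal((n, p))          # columns have squared norm ~ n
beta = np.zeros(p)
beta[:k] = 1.0                           # k-sparse target vector
y = X @ beta + sigma * rng.standard_normal(n)

# sklearn's Lasso minimizes ||y - Xb||^2 / (2n) + alpha * ||b||_1, so with
# this column scaling the classical universal level corresponds roughly to
# alpha = sigma * sqrt(2 log(p) / n).
alpha_star = sigma * np.sqrt(2 * np.log(p) / n)

for scale, label in [(1.0, "at the universal level"),
                     (0.01, "far below the threshold")]:
    fit = Lasso(alpha=scale * alpha_star, max_iter=100_000).fit(X, y)
    pred_err = np.mean((X @ (fit.coef_ - beta)) ** 2)  # (1/n)||X(b - beta)||^2
    print(f"{label}: prediction error {pred_err:.4f}, "
          f"support size {np.count_nonzero(fit.coef_)}")
```

On a well-conditioned design like this one, the run typically selects a small support and achieves a small prediction error at the universal level, while the undersized tuning parameter yields a much larger support and an error driven by fitted noise; on strongly correlated designs the compatibility constant degrades the first rate, as the paper quantifies.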

Related research

02/07/2014 · On the Prediction Performance of the Lasso
Although the Lasso has been extensively studied, the relationship betwee...

04/03/2018 · On tight bounds for the Lasso
We present upper and lower bounds for the prediction error of the Lasso....

05/16/2022 · On Lasso and Slope drift estimators for Lévy-driven Ornstein–Uhlenbeck processes
We investigate the problem of estimating the drift parameter of a high-d...

08/18/2022 · Small Tuning Parameter Selection for the Debiased Lasso
In this study, we investigate the bias and variance properties of the de...

11/13/2020 · Adaptive Estimation In High-Dimensional Additive Models With Multi-Resolution Group Lasso
In additive models with many nonparametric components, a number of regul...

10/12/2010 · Optimal designs for Lasso and Dantzig selector using Expander Codes
We investigate the high-dimensional regression problem using adjacency m...

11/03/2018 · The distribution of the Lasso: Uniform control over sparse balls and adaptive parameter tuning
The Lasso is a popular regression method for high-dimensional problems i...
