Optimal prediction for sparse linear models? Lower bounds for coordinate-separable M-estimators

03/11/2015
by   Yuchen Zhang, et al.

For the problem of high-dimensional sparse linear regression, it is known that an ℓ_0-based estimator can achieve the 1/n "fast" rate for prediction error without any conditions on the design matrix, whereas popular polynomial-time methods only guarantee the 1/√(n) "slow" rate in the absence of restrictive conditions on the design matrix. In this paper, we show that the slow rate is intrinsic to a broad class of M-estimators. In particular, for estimators based on minimizing a least-squares cost function together with a (possibly nonconvex) coordinate-wise separable regularizer, there is always a "bad" local optimum whose prediction error is lower bounded by a constant multiple of 1/√(n). For convex regularizers, this lower bound applies to all global optima. The theory covers many popular estimators, including convex ℓ_1-based methods as well as M-estimators built on nonconvex regularizers such as the SCAD penalty or the MCP. Finally, for a wide family of nonconvex regularizers, we show that such bad local optima are common: a broad class of local minimization algorithms with random initialization will typically converge to a bad solution.
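As a concrete instance of the estimator class studied here, the following minimal Python sketch runs cyclic coordinate descent on a least-squares cost plus the coordinate-wise separable MCP regularizer. It is an illustration, not code from the paper: the helper names (mcp_threshold, cd_mcp), the parameter defaults, and the toy data are all assumptions, and the columns of X are assumed standardized so that ||x_j||^2 / n = 1.

    import numpy as np

    def mcp_threshold(z, lam, gamma):
        # Minimizer of 0.5*(t - z)^2 + MCP_{lam,gamma}(t); requires gamma > 1.
        if abs(z) <= lam:
            return 0.0
        if abs(z) <= gamma * lam:
            return np.sign(z) * (abs(z) - lam) / (1.0 - 1.0 / gamma)
        return z

    def cd_mcp(X, y, lam=0.1, gamma=3.0, n_iters=500, seed=0):
        # Cyclic coordinate descent on (1/2n)||y - X theta||^2 + sum_j MCP(theta_j),
        # started from a random point; different seeds may reach different local optima.
        n, d = X.shape
        theta = np.random.default_rng(seed).normal(size=d)  # random initialization
        r = y - X @ theta                                   # residual, kept in sync
        for _ in range(n_iters):
            for j in range(d):
                z = theta[j] + X[:, j] @ r / n              # center of 1-d subproblem
                t = mcp_threshold(z, lam, gamma)
                r += X[:, j] * (theta[j] - t)               # update residual
                theta[j] = t
        return theta

    # Toy run: compare the local optima reached from different starts.
    n, d, k = 100, 50, 5
    rng = np.random.default_rng(1)
    X = rng.normal(size=(n, d))
    X /= np.sqrt((X ** 2).sum(axis=0) / n)                  # standardize columns
    theta_star = np.zeros(d); theta_star[:k] = 1.0
    y = X @ theta_star + 0.5 * rng.normal(size=n)
    for s in range(5):
        err = np.mean((X @ (cd_mcp(X, y, seed=s) - theta_star)) ** 2)
        print(s, err)   # in-sample prediction error ||X(theta - theta*)||^2 / n

Because the MCP objective is nonconvex, the printed errors typically vary across seeds: different initializations reach different stationary points, and the paper's lower bound says that for such coordinate-separable regularizers some of these local optima have prediction error no better than a constant multiple of 1/√(n).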


Related research

01/01/2015
Statistical consistency and asymptotic normality for high-dimensional robust M-estimators
We study theoretical properties of regularized robust M-estimators, appl...

05/19/2018
M-estimation with the Trimmed ℓ_1 Penalty
We study high-dimensional M-estimators with the trimmed ℓ_1 penalty. Whi...

08/11/2014
Optimum Statistical Estimation with Strategic Data Sources
We propose an optimum mechanism for providing monetary incentives to the...

04/12/2022
High-dimensional nonconvex lasso-type M-estimators
This paper proposes a theory for ℓ_1-norm penalized high-dimensional M-e...

03/04/2015
Statistical Limits of Convex Relaxations
Many high dimensional sparse learning problems are formulated as nonconv...

08/15/2017
The Trimmed Lasso: Sparsity and Robustness
Nonconvex penalty methods for sparse modeling in linear regression have ...

12/16/2021
Analysis of Generalized Bregman Surrogate Algorithms for Nonsmooth Nonconvex Statistical Learning
Modern statistical applications often involve minimizing an objective fu...
