Support recovery without incoherence: A case for nonconvex regularization

12/17/2014
by   Po-Ling Loh, et al.

We demonstrate that the primal-dual witness proof method may be used to establish variable selection consistency and ℓ_∞-bounds for sparse regression problems, even when the loss function and/or regularizer are nonconvex. Using this method, we derive two theorems concerning support recovery and ℓ_∞-guarantees for the regression estimator in a general setting. Our results provide rigorous theoretical justification for the use of nonconvex regularization: For certain nonconvex regularizers with vanishing derivative away from the origin, support recovery consistency may be guaranteed without requiring the typical incoherence conditions present in ℓ_1-based methods. We then derive several corollaries that illustrate the wide applicability of our method to analyzing composite objective functions involving losses such as least squares, nonconvex modified least squares for errors-in-variables linear regression, the negative log likelihood for generalized linear models, and the graphical Lasso. We conclude with empirical studies to corroborate our theoretical predictions.
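The key structural property in the abstract concerns regularizers whose derivative vanishes away from the origin; the minimax concave penalty (MCP) is a canonical example of such a regularizer. The sketch below is a minimal illustration of that property under the standard MCP parameterization, with λ the regularization level and b the concavity parameter; the function names and parameter names are ours, not drawn from the paper's code.

```python
import numpy as np

def mcp_penalty(t, lam, b):
    """Minimax concave penalty (MCP).

    rho(t) = lam*|t| - t^2/(2b)  for |t| <= b*lam,
             b*lam^2/2           for |t| >  b*lam.
    The penalty flattens out, so its derivative is exactly zero
    away from the origin (|t| > b*lam).
    """
    t_abs = np.abs(t)
    return np.where(t_abs <= b * lam,
                    lam * t_abs - t_abs**2 / (2 * b),
                    0.5 * b * lam**2)

def mcp_derivative(t, lam, b):
    """Derivative of the MCP: lam*sign(t) - t/b on |t| <= b*lam, 0 beyond."""
    t_abs = np.abs(t)
    return np.sign(t) * np.where(t_abs <= b * lam, lam - t_abs / b, 0.0)

if __name__ == "__main__":
    lam, b = 0.5, 3.0
    for t in [0.1, 1.0, 1.5, 2.0, 5.0]:
        print(t, mcp_penalty(t, lam, b), mcp_derivative(t, lam, b))
    # For t > b*lam = 1.5, the derivative prints 0.0: large coefficients
    # incur no additional shrinkage, in contrast to the ell_1 penalty.
```

This vanishing derivative is what lets large nonzero coefficients escape the bias of ℓ_1 shrinkage, which is the mechanism behind the paper's claim that support recovery can hold without incoherence conditions.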


Related research

Sparse recovery via nonconvex regularized M-estimators over ℓ_q-balls (11/19/2019)
In this paper, we analyse the recovery properties of nonconvex regulariz...

A Greedy Homotopy Method for Regression with Nonconvex Constraints (10/27/2014)
Constrained least squares regression is an essential tool for high-dimen...

Optimal computational and statistical rates of convergence for sparse nonconvex learning problems (06/20/2013)
We provide theoretical analysis of the statistical and computational pro...

A Unified Framework for Constructing Nonconvex Regularizations (06/11/2021)
Over the past decades, many individual nonconvex methods have been propo...

Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima (05/10/2013)
We provide novel theoretical results regarding local optima of regulariz...

Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization (05/04/2012)
Least squares fitting is in general not useful for high-dimensional line...

A novel nonconvex, smooth-at-origin penalty for statistical learning (04/06/2022)
Nonconvex penalties are utilized for regularization in high-dimensional ...
