High-dimensional nonconvex lasso-type M-estimators

04/12/2022
by Jad Beyhum et al.

This paper proposes a theory for ℓ_1-norm penalized high-dimensional M-estimators with nonconvex risk and unrestricted domain. Under high-level conditions, the estimators are shown to attain the rate of convergence s_0√(log(nd)/n), where s_0 is the number of nonzero coefficients of the parameter of interest. Sufficient conditions for the main assumptions are then developed and applied to several examples, including robust linear regression, binary classification and nonlinear least squares.
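
To make the setting concrete, the sketch below shows one member of this class: ℓ_1-penalized robust linear regression with Tukey's biweight loss (a nonconvex risk), fitted by proximal gradient descent with soft-thresholding. This is not the paper's code; the function names, the biweight tuning constant, and the penalty level at the √(log(d)/n) scale are illustrative assumptions.

```python
# Minimal sketch of an l1-penalized M-estimator with a nonconvex risk:
# robust linear regression with Tukey's biweight loss, fitted by proximal
# gradient descent (soft-thresholding). Names and tuning values are illustrative.
import numpy as np

def tukey_psi(r, c=4.685):
    """Derivative of Tukey's biweight loss with respect to the residual r."""
    return np.where(np.abs(r) <= c, r * (1.0 - (r / c) ** 2) ** 2, 0.0)

def prox_l1(theta, tau):
    """Soft-thresholding, the proximal operator of tau * ||theta||_1."""
    return np.sign(theta) * np.maximum(np.abs(theta) - tau, 0.0)

def lasso_m_estimator(X, y, lam, n_iter=1000):
    """Approximate argmin_theta (1/n) sum_i rho(y_i - x_i' theta) + lam * ||theta||_1."""
    n, d = X.shape
    step = n / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz bound of the smooth part
    theta = np.zeros(d)
    for _ in range(n_iter):
        grad = -X.T @ tukey_psi(y - X @ theta) / n
        theta = prox_l1(theta - step * grad, step * lam)
    return theta

# Simulated sparse design with heavy-tailed noise; the penalty level is taken
# at the sqrt(log(d)/n) scale that matches rates of the form s_0*sqrt(log(nd)/n).
rng = np.random.default_rng(0)
n, d, s_0 = 200, 500, 5
X = rng.standard_normal((n, d))
theta_star = np.zeros(d)
theta_star[:s_0] = 1.0
y = X @ theta_star + rng.standard_t(df=2, size=n)
theta_hat = lasso_m_estimator(X, y, lam=2.0 * np.sqrt(np.log(d) / n))
print("l1 estimation error:", np.linalg.norm(theta_hat - theta_star, 1))
```

Because the loss is nonconvex, proximal gradient descent only guarantees convergence to a stationary point in general, which is why statistical guarantees for nonconvex lasso-type M-estimators require care beyond the convex case.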

Related research

First order expansion of convex regularized estimators (10/12/2019)
Fast global convergence of gradient methods for high-dimensional statistical recovery (04/25/2011)
Canonical thresholding for non-sparse high-dimensional linear regression (07/24/2020)
A Greedy Homotopy Method for Regression with Nonconvex Constraints (10/27/2014)
Oracle Inequalities for High-dimensional Prediction (08/01/2016)
Nonconvex Regularized Robust Regression with Oracle Properties in Polynomial Time (07/09/2019)
Optimal prediction for sparse linear models? Lower bounds for coordinate-separable M-estimators (03/11/2015)