High-dimensional nonconvex lasso-type M-estimators

04/12/2022
by Jad Beyhum, et al.

This paper proposes a theory for ℓ_1-norm penalized high-dimensional M-estimators with nonconvex risk and unrestricted domain. Under high-level conditions, the estimators are shown to attain the rate of convergence s_0√(log(nd)/n), where s_0 is the number of nonzero coefficients of the parameter of interest. Sufficient conditions for the main assumptions are then developed and applied to several examples, including robust linear regression, binary classification, and nonlinear least squares.
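To make the estimator class concrete, here is a minimal sketch of an ℓ_1-penalized M-estimator computed by proximal gradient descent. The nonconvex Cauchy loss ρ(u) = log(1 + u²) stands in as an illustrative robust loss; the loss choice, step size, and penalty level are assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

def cauchy_score(r):
    # Derivative of the nonconvex Cauchy loss rho(u) = log(1 + u^2).
    return 2.0 * r / (1.0 + r ** 2)

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_penalized_m_estimator(X, y, lam, step=0.1, n_iter=500):
    """Proximal gradient descent for
        min_beta (1/n) * sum_i rho(y_i - x_i' beta) + lam * ||beta||_1,
    with rho the Cauchy loss (an illustrative nonconvex choice)."""
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(n_iter):
        residual = y - X @ beta
        grad = -(X.T @ cauchy_score(residual)) / n
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

# Toy usage: sparse linear model with heavy-tailed (Student-t) noise.
rng = np.random.default_rng(0)
n, d, s0 = 200, 50, 3
X = rng.standard_normal((n, d))
beta_star = np.zeros(d)
beta_star[:s0] = 2.0
y = X @ beta_star + rng.standard_t(df=2, size=n)
beta_hat = l1_penalized_m_estimator(X, y, lam=0.2)
```

Because the risk is nonconvex, proximal gradient only finds a stationary point in general; the paper's theory is about the statistical rate of such penalized estimators, not about this particular solver.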


