M-estimation with the Trimmed ℓ_1 Penalty

05/19/2018
by Jihun Yun, et al.

We study high-dimensional M-estimators with the trimmed ℓ_1 penalty. While the standard ℓ_1 penalty incurs bias (shrinkage), the trimmed ℓ_1 penalty leaves the h largest-magnitude entries penalty-free. This family of estimators includes the Trimmed Lasso for sparse linear regression and its counterpart for sparse graphical model estimation. The trimmed ℓ_1 penalty is non-convex, but unlike other non-convex regularizers such as SCAD and MCP, it is not amenable, so prior analyses do not apply. We characterize the support recovery of the estimates as a function of the trimming parameter h. Under certain conditions, we show that for any local optimum: (i) if the trimming parameter h is smaller than the true support size, all zero entries of the true parameter vector are successfully estimated as zero; and (ii) if h is larger than the true support size, the non-relevant parameters of the local optimum have smaller absolute values than the relevant parameters, and hence the relevant parameters are not penalized. We then bound the ℓ_2 error of any local optimum. These bounds are asymptotically comparable to those for non-convex amenable penalties such as SCAD or MCP, but enjoy better constants. We specialize our main results to linear regression and graphical model estimation. Finally, we develop a fast, provably convergent optimization algorithm for the trimmed-regularizer problem. The algorithm has the same rate of convergence as difference-of-convex (DC) approaches, but is faster in practice and finds better objective values than recently proposed algorithms for DC optimization. Empirical results further demonstrate the value of ℓ_1 trimming.
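
The trimmed ℓ_1 penalty itself is simple to compute: sort the entries by magnitude and sum the absolute values of all but the h largest. The sketch below is hypothetical code, not the authors' implementation; it illustrates the penalty and a proximal-gradient variant of the trimmed Lasso, using the fact that the exact proximal step of the trimmed ℓ_1 soft-thresholds only the p - h smallest-magnitude coordinates and leaves the top h untouched. All function names and parameter choices here are illustrative assumptions.

import numpy as np

def trimmed_l1(theta, h):
    # Trimmed l1 penalty: sum of |theta_i| over all but the h
    # largest-magnitude entries, which remain penalty-free.
    mags = np.sort(np.abs(theta))                 # ascending magnitudes
    return mags[:max(theta.size - h, 0)].sum()

def soft_threshold(x, t):
    # Entrywise soft-thresholding: the prox of t * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def trimmed_lasso(X, y, lam, h, step, n_iter=500):
    # Hypothetical proximal-gradient sketch for
    #   min_theta 0.5 * ||y - X theta||^2 + lam * trimmed_l1(theta, h).
    # Each iteration takes a gradient step on the smooth loss, then
    # applies the prox of the trimmed penalty: soft-threshold the
    # p - h smallest-magnitude coordinates, leave the h largest free.
    n, p = X.shape
    theta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ theta - y)              # gradient of the loss
        theta = theta - step * grad
        free = np.argsort(np.abs(theta))[p - h:]  # penalty-free top-h set
        shrunk = soft_threshold(theta, step * lam)
        shrunk[free] = theta[free]                # top-h entries unshrunk
        theta = shrunk
    return theta

# Toy usage: 3 relevant coefficients, trimming parameter h = 3.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta = np.zeros(20)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + 0.1 * rng.standard_normal(100)
step = 1.0 / np.linalg.norm(X, 2) ** 2            # 1/L for the smooth part
theta_hat = trimmed_lasso(X, y, lam=0.5, h=3, step=step)

Note that trimmed_l1(theta, h) equals the full ℓ_1 norm minus the sum of the h largest |θ_i| (the largest-h norm), i.e. a difference of two convex functions, which is why DC-programming methods apply to this problem and serve as the natural baseline for the authors' algorithm.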
