Strong oracle optimality of folded concave penalized estimation

10/22/2012
by Jianqing Fan, et al.

Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions, and the oracle property is established only for one of these unknown local solutions. A fundamental and challenging issue therefore remains: it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this theoretical gap, which has remained open for over a decade, we provide a unified theory showing explicitly how to obtain the oracle solution via the local linear approximation (LLA) algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, the oracle estimator can be obtained by one-step local linear approximation. Moreover, once the oracle estimator is obtained, the LLA algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated with four classical sparse estimation problems: sparse linear regression, sparse logistic regression, sparse precision matrix estimation, and sparse quantile regression.
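To make the one-step LLA idea concrete, below is a minimal sketch for the special case of SCAD-penalized sparse linear regression, assuming a design matrix with (approximately) standardized, nonzero columns. The helper names (`scad_derivative`, `weighted_lasso_cd`, `lla_one_step`), the choice of a plain lasso initializer, and the simple coordinate-descent solver are illustrative assumptions, not code or notation taken from the paper.

```python
# Minimal sketch of one-step LLA for SCAD-penalized linear regression.
# Assumes columns of X are standardized and nonzero; names are illustrative.
import numpy as np

def scad_derivative(t, lam, a=3.7):
    """SCAD penalty derivative p'_lambda(|t|), used as LLA weights."""
    t = np.abs(t)
    return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1.0))

def weighted_lasso_cd(X, y, weights, n_iter=200):
    """Coordinate descent for (1/2n)||y - X b||^2 + sum_j weights_j |b_j|."""
    n, p = X.shape
    beta = np.zeros(p)
    resid = y.astype(float).copy()
    col_sq = (X ** 2).sum(axis=0) / n          # assumed nonzero columns
    for _ in range(n_iter):
        for j in range(p):
            resid += X[:, j] * beta[j]          # remove j-th contribution
            rho = X[:, j] @ resid / n           # partial correlation
            beta[j] = np.sign(rho) * max(abs(rho) - weights[j], 0.0) / col_sq[j]
            resid -= X[:, j] * beta[j]          # restore with updated beta_j
    return beta

def lla_one_step(X, y, lam):
    """Lasso initializer, then one LLA step with SCAD-derivative weights."""
    p = X.shape[1]
    beta_init = weighted_lasso_cd(X, y, np.full(p, lam))   # plain lasso start
    weights = scad_derivative(beta_init, lam)               # adaptive weights
    return weighted_lasso_cd(X, y, weights)                 # one-step LLA fit
```

Consistent with the abstract's convergence claim, re-running the weighted step with weights recomputed from the one-step estimator should return the same solution once the oracle estimator has been reached, which can serve as a simple convergence check in this sketch.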


