Sorted Concave Penalized Regression

12/28/2017
by Long Feng, et al.

The Lasso is biased. Concave penalized least squares estimation (PLSE) takes advantage of signal strength to reduce this bias, leading to sharper error bounds in prediction, coefficient estimation and variable selection. For prediction and estimation, the bias of the Lasso can also be reduced by taking a smaller penalty level than selection consistency requires, but such a smaller penalty level depends on the sparsity of the true coefficient vector. Sorted L1 penalized estimation (Slope) was proposed to adapt to such smaller penalty levels. However, the advantages of concave PLSE and Slope do not subsume each other. We propose sorted concave penalized estimation to combine the advantages of concave and sorted penalizations. We prove that sorted concave penalties adaptively choose the smaller penalty level and at the same time benefit from signal strength, especially when a significant proportion of signals are stronger than the corresponding adaptively selected penalty levels. A local convex approximation, which extends the local linear and quadratic approximations to sorted concave penalties, is developed to facilitate the computation of sorted concave PLSE and is proven to possess the desired prediction and estimation error bounds. We carry out a unified treatment of penalty functions in a general optimization setting, covering the penalty levels and concavity of the above-mentioned sorted penalties as well as mixed penalties motivated by Bayesian considerations. Our analysis of prediction and estimation errors requires only the restricted eigenvalue condition on the design, and additionally provides selection consistency under a minimum signal strength condition. Thus, our results also sharpen existing results on concave PLSE by removing the upper sparse eigenvalue component of the sparse Riesz condition.
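To make the two ingredients concrete, here is a minimal sketch (not the paper's estimator or algorithm) of the penalty values involved: the sorted L1 (Slope) penalty, which pairs a non-increasing sequence of penalty levels with the coefficients ordered by magnitude, and the minimax concave penalty (MCP), a standard example of the concave penalties the paper builds on. The function names and the particular numbers are illustrative assumptions.

```python
import numpy as np

def slope_penalty(beta, lam):
    """Sorted L1 (Slope) penalty: sum_j lam[j] * |beta|_(j),
    where |beta|_(1) >= |beta|_(2) >= ... are the order statistics of |beta|
    and lam is a non-increasing sequence of penalty levels."""
    abs_sorted = np.sort(np.abs(beta))[::-1]  # magnitudes in decreasing order
    return float(np.dot(lam, abs_sorted))

def mcp_penalty(beta, lam, gamma=3.0):
    """Minimax concave penalty (MCP), a standard concave penalty:
    rho(t) = lam*|t| - t^2/(2*gamma) for |t| <= gamma*lam,
    and rho(t) = gamma*lam^2/2 (flat, hence unbiased) beyond that."""
    t = np.abs(beta)
    return float(np.sum(np.where(t <= gamma * lam,
                                 lam * t - t ** 2 / (2.0 * gamma),
                                 gamma * lam ** 2 / 2.0)))

# Illustrative coefficients and a non-increasing penalty-level sequence.
beta = np.array([3.0, -0.5, 0.0, 1.2])
lam_seq = np.array([1.0, 0.8, 0.6, 0.4])
print(slope_penalty(beta, lam_seq))   # 1.0*3.0 + 0.8*1.2 + 0.6*0.5 + 0.4*0.0 = 4.26
print(mcp_penalty(beta, lam=0.8))
```

Note how Slope penalizes the largest coefficient at the largest level while MCP stops growing once a coefficient exceeds `gamma*lam`; the sorted concave penalty of the paper combines these two mechanisms.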


