A Global Two-stage Algorithm for Non-convex Penalized High-dimensional Linear Regression Problems

11/23/2021
by   Peili Li, et al.

Owing to their asymptotic oracle property, non-convex penalties, represented by the minimax concave penalty (MCP) and the smoothly clipped absolute deviation (SCAD), have attracted much attention in high-dimensional data analysis and have been widely used in signal processing, image restoration, matrix estimation, etc. However, their non-convex and non-smooth characteristics make them computationally challenging. Almost all existing algorithms converge only locally, so the proper selection of initial values is crucial; in practice, they are often combined with a warm-starting technique to meet the rigid requirement that the initial value be sufficiently close to the optimal solution of the corresponding problem. In this paper, based on the DC (difference of convex functions) structure of the MCP and SCAD penalties, we design a global two-stage algorithm for high-dimensional penalized least squares linear regression problems. A key idea that makes the proposed algorithm efficient is to use the primal dual active set with continuation (PDASC) method, which is equivalent to the semi-smooth Newton (SSN) method, to solve the corresponding subproblems. Theoretically, we not only prove the global convergence of the proposed algorithm but also verify that the generated iterative sequence converges to a d-stationary point. In terms of computational performance, extensive experiments on simulated and real data show that the proposed algorithm outperforms the latest SSN method and the classic coordinate descent (CD) algorithm for solving non-convex penalized high-dimensional linear regression problems.
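To make the DC structure mentioned in the abstract concrete, here is a minimal sketch (not from the paper; the function names `mcp` and `h_convex` are ours) of the standard MCP penalty and one common DC split, MCP(t) = λ|t| − h(t), where h is convex. The algorithm described above exploits exactly this kind of decomposition.

```python
import numpy as np

def mcp(t, lam, gamma):
    """Standard MCP penalty:
    p(t) = lam*|t| - t^2/(2*gamma)  if |t| <= gamma*lam,
           gamma*lam^2/2            otherwise."""
    a = np.abs(t)
    return np.where(a <= gamma * lam,
                    lam * a - a**2 / (2 * gamma),
                    gamma * lam**2 / 2)

def h_convex(t, lam, gamma):
    """Convex function h in the DC split MCP(t) = lam*|t| - h(t).
    It is quadratic near the origin and affine in |t| beyond gamma*lam,
    so it is convex and continuously differentiable."""
    a = np.abs(t)
    return np.where(a <= gamma * lam,
                    a**2 / (2 * gamma),
                    lam * a - gamma * lam**2 / 2)

def mcp_via_dc(t, lam, gamma):
    """Recover MCP from its DC decomposition."""
    return lam * np.abs(t) - h_convex(t, lam, gamma)
```

A DC (two-stage) scheme would, at each outer iteration, linearize the concave part −h and solve the resulting convex (weighted-lasso-type) subproblem; the paper solves those subproblems with PDASC/SSN. An analogous split exists for SCAD.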


