Variable Selection and Regularization via Arbitrary Rectangle-range Generalized Elastic Net

12/14/2021, by Yujia Ding, et al.
We introduce the arbitrary rectangle-range generalized elastic net penalty method, abbreviated ARGEN, for performing constrained variable selection and regularization in high-dimensional sparse linear models. As a natural extension of the nonnegative elastic net penalty method, ARGEN is proved to have variable selection consistency and estimation consistency under some conditions, and the asymptotic distributional behavior of the ARGEN estimators is studied. We also propose an algorithm called MU-QP-RR-W-l_1 to efficiently solve ARGEN. Through a simulation study, we show that ARGEN outperforms the elastic net in a number of settings. Finally, an application to S&P 500 index tracking with constraints on the stock allocations is presented to provide general guidance for adapting ARGEN to solve real-world problems.
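The paper's MU-QP-RR-W-l_1 solver is not reproduced here, but the ARGEN objective itself, a weighted elastic net minimized over an arbitrary rectangle (box) of coefficient values, can be prototyped with a plain projected proximal gradient loop. The sketch below is only an illustration under simplifying assumptions (a diagonal quadratic penalty lam2*||beta||^2, fixed step size); the function and variable names (argen_prox_grad, w, s, t) are illustrative and not taken from the paper.

```python
# A minimal sketch (not the paper's MU-QP-RR-W-l_1 algorithm): projected proximal
# gradient for a box-constrained, weighted elastic net of the form
#   min_beta  (1/2n)||y - X beta||^2 + lam1 * sum_i w_i |beta_i| + lam2 * ||beta||^2
#   s.t.      s_i <= beta_i <= t_i   (arbitrary rectangle range)
import numpy as np

def argen_prox_grad(X, y, lam1, lam2, w, s, t, n_iter=2000):
    n, p = X.shape
    beta = np.clip(np.zeros(p), s, t)             # feasible start inside the rectangle
    L = np.linalg.norm(X, 2) ** 2 / n + 2 * lam2  # Lipschitz constant of the smooth part
    step = 1.0 / L
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n + 2 * lam2 * beta
        z = beta - step * grad
        # soft-threshold for the weighted l1 term, then project onto [s, t];
        # for separable one-dimensional convex terms this composition is exact
        beta = np.clip(np.sign(z) * np.maximum(np.abs(z) - step * lam1 * w, 0.0), s, t)
    return beta

# Toy usage: recover a sparse nonnegative coefficient vector with an upper bound of 0.5,
# loosely mimicking allocation-style constraints in index tracking.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
beta_true = np.zeros(50)
beta_true[:5] = [0.3, 0.2, 0.15, 0.1, 0.05]
y = X @ beta_true + 0.01 * rng.standard_normal(200)
beta_hat = argen_prox_grad(X, y, lam1=0.01, lam2=0.001,
                           w=np.ones(50), s=np.zeros(50), t=np.full(50, 0.5))
print(np.round(beta_hat[:8], 3))
```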

