On the use of cross-validation for the calibration of the tuning parameter in the adaptive lasso

05/20/2020
by Nadim Ballout, et al.

The adaptive lasso is a popular extension of the lasso, which was shown to generally enjoy better theoretical performance, at no additional computational cost compared to the lasso. The adaptive lasso relies on a weighted version of the L_1-norm penalty used in the lasso, where weights are typically derived from an initial estimate of the parameter vector. Irrespective of the method chosen to obtain this initial estimate, the performance of the corresponding version of the adaptive lasso critically depends on the value of the tuning parameter, which controls the magnitude of the weighted L_1-norm in the penalized criterion. In this article, we show that standard cross-validation, although very popular in this context, has a severe defect when used to calibrate the tuning parameter in the adaptive lasso. We further propose a simple cross-validation scheme which corrects this defect. Empirical results from a simulation study confirm the superiority of our approach, in terms of both support recovery and prediction error. Although we focus on the adaptive lasso under linear regression models, our work likely extends to other regression models, as well as to the adaptive versions of other penalized approaches, including the group lasso, fused lasso, and data shared lasso.
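To make the setting concrete, the following is a minimal sketch (not the authors' implementation) of the adaptive lasso fitted via the usual column-rescaling trick, with the tuning parameter calibrated by ordinary K-fold cross-validation, i.e. the standard scheme whose defect the paper studies. The choice of a ridge initial estimate, the value of gamma, and the small stabilizing constant are illustrative assumptions, not prescribed by the abstract.

```python
import numpy as np
from sklearn.linear_model import Ridge, LassoCV

def adaptive_lasso_cv(X, y, gamma=1.0, cv=5, seed=0):
    """Adaptive lasso via column rescaling, with the tuning parameter
    chosen by ordinary K-fold cross-validation (the standard scheme)."""
    # Step 1: initial estimate of the parameter vector
    # (ridge is one common choice; illustrative assumption).
    beta_init = Ridge(alpha=1.0).fit(X, y).coef_
    # Step 2: adaptive weights w_j = 1 / |beta_init_j|^gamma;
    # the 1e-8 term only guards against division by zero.
    w = 1.0 / (np.abs(beta_init) ** gamma + 1e-8)
    # Step 3: the weighted-L1 problem reduces to a plain lasso
    # on rescaled columns X_j / w_j.
    X_scaled = X / w
    model = LassoCV(cv=cv, random_state=seed).fit(X_scaled, y)
    # Step 4: map the coefficients back to the original scale.
    return model.coef_ / w

# Small synthetic example: 3 active variables out of 20.
rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + 0.5 * rng.standard_normal(n)
beta_hat = adaptive_lasso_cv(X, y)
```

Because the weighted penalty is equivalent to a plain lasso on rescaled predictors, any off-the-shelf cross-validated lasso solver can be reused; the paper's point is that this standard calibration of the tuning parameter is flawed and can be corrected with a modified cross-validation scheme.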
