Lasso tuning through the flexible-weighted bootstrap

03/10/2019
by Ellis Patrick, et al.

Regularized regression approaches such as the Lasso have been widely adopted for constructing sparse linear models in high-dimensional datasets. A complexity in fitting these models is tuning the parameter that controls the level of sparsity introduced through penalization. The most common approach to selecting the penalty parameter is k-fold cross-validation. While cross-validation minimizes the empirical prediction error, approaches such as the m-out-of-n paired bootstrap, which use smaller training datasets, are consistent in selecting the non-zero coefficients of the oracle model; they perform well in an asymptotic setting but have limitations when n is small. In fact, for models such as the Lasso there is a monotonic relationship between the size of the training sets and the penalty parameter. We propose a generalization of these methods for selecting the regularization parameter based on a flexible-weighted bootstrap procedure that mimics the m-out-of-n bootstrap and overcomes its challenges for all sample sizes. Through simulation studies we demonstrate that, when selecting a penalty parameter, the choice of weights in the bootstrap procedure can be used to dictate the size of the penalty parameter and hence the sparsity of the fitted model. We empirically illustrate our weighted bootstrap procedure by applying the Lasso to integrate clinical and microRNA data in the modeling of Alzheimer's disease. In both the real and simulated data we find that a narrow part of the parameter space performs well, emulating an m-out-of-n bootstrap, and that our procedure can be used to improve the interpretation of other optimization heuristics.
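The core idea can be sketched in code: instead of resampling m of n observations, each bootstrap replicate draws continuous observation weights, and the concentration of those weights plays the role of the training-set size m. The sketch below is illustrative only and is not the authors' implementation; the Dirichlet weighting scheme, the grid of penalties, and the in-sample scoring proxy are all assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Simulated sparse linear model (illustrative data, not from the paper)
n, p = 100, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 2.0
y = X @ beta + rng.standard_normal(n)


def weighted_bootstrap_lambda(X, y, alphas, n_boot=20, concentration=1.0):
    """Select a Lasso penalty via a flexible-weighted bootstrap (sketch).

    Each replicate draws Dirichlet(concentration) observation weights.
    Smaller `concentration` concentrates the weight on fewer observations,
    mimicking an m-out-of-n bootstrap with a smaller effective m and thus
    pushing the selected penalty toward sparser fits.
    """
    n = len(y)
    errors = np.zeros(len(alphas))
    for _ in range(n_boot):
        # Continuous weights summing to n, in place of discrete resampling
        w = rng.dirichlet(np.full(n, concentration)) * n
        for j, a in enumerate(alphas):
            model = Lasso(alpha=a, max_iter=10_000)
            model.fit(X, y, sample_weight=w)
            # Score on the unweighted full sample as a simple proxy for
            # out-of-bootstrap prediction error.
            errors[j] += np.mean((y - model.predict(X)) ** 2)
    return alphas[np.argmin(errors / n_boot)]


alphas = np.logspace(-2, 0, 10)
best_alpha = weighted_bootstrap_lambda(X, y, alphas)
print(best_alpha)
```

Varying `concentration` is the "flexible" part: it shifts the effective training-set size of each replicate, and hence the selected penalty, without ever discarding observations outright.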


