Strong rules for discarding predictors in lasso-type problems

11/09/2010
by Robert Tibshirani et al.

We consider rules for discarding predictors in lasso regression and related problems, for computational efficiency. El Ghaoui et al. (2010) propose "SAFE" rules that guarantee a coefficient will be zero in the solution, based on the inner products of each predictor with the outcome. In this paper we propose strong rules that are not foolproof but rarely fail in practice. These can be complemented with simple checks of the Karush-Kuhn-Tucker (KKT) conditions to provide safe rules that offer substantial speed and space savings in a variety of statistical convex optimization problems.
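To make the idea concrete, the sketch below illustrates the basic strong rule for the lasso together with a KKT check on the discarded set. It assumes standardized predictor columns; the function names and the toy data are illustrative, not the paper's implementation. The basic strong rule discards predictor j when |x_j^T y| < 2*lambda - lambda_max, and a predictor wrongly discarded is caught afterwards because it violates the KKT condition |x_j^T (y - X beta)| <= lambda at the candidate solution.

```python
import numpy as np

def strong_rule_discard(X, y, lam, lam_max):
    """Basic strong rule (hypothetical sketch): flag predictor j for
    discarding when |x_j^T y| < 2*lam - lam_max. Assumes the columns
    of X are standardized. Returns a boolean mask over predictors."""
    scores = np.abs(X.T @ y)
    return scores < 2 * lam - lam_max

def kkt_violations(X, y, beta, lam):
    """Check the lasso KKT conditions at a candidate solution beta:
    any predictor with |x_j^T (y - X beta)| > lam violates optimality
    and must be returned to the fit. This check makes the (fallible)
    strong rule safe."""
    resid = y - X @ beta
    return np.abs(X.T @ resid) > lam + 1e-8

# Toy example: one strong predictor, nine noise predictors.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
X /= np.linalg.norm(X, axis=0)          # unit-norm columns
y = 3.0 * X[:, 0] + 0.1 * rng.standard_normal(50)

lam_max = np.max(np.abs(X.T @ y))       # smallest lambda with all-zero fit
lam = 0.9 * lam_max
discard = strong_rule_discard(X, y, lam, lam_max)
viol = kkt_violations(X, y, np.zeros(10), lam)
```

Since the strong-rule threshold 2*lam - lam_max is strictly below lam_max whenever lam < lam_max, the predictor achieving the maximal inner product is never discarded; in this toy fit that is the signal predictor, and the KKT check at beta = 0 flags it as the one that must enter the model.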


