Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression

06/08/2016
by Eugene Ndiaye, et al.

In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation, and statistical performance. It is customary to use an ℓ_1 penalty to enforce sparsity in such scenarios. Sparsity enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimensional problems. For efficiency, they rely on a tuning parameter that trades off data fitting against sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to optimize jointly over the regression parameter and the noise level. This has been considered under several names in the literature, for instance Scaled-Lasso, Square-root Lasso, or Concomitant Lasso estimation, and can be of interest for confidence sets or uncertainty quantification. In this work, after illustrating the numerical difficulties of the Concomitant Lasso formulation, we propose a modification, coined the Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver whose computational cost is no more expensive than that of the Lasso. We leverage the standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm combined with safe screening rules, which achieve speed by eliminating irrelevant features early.
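
For readers skimming the abstract, the objective at stake can be stated compactly. The display below is a sketch of the Smoothed Concomitant Lasso formulation as it is commonly written (notation is ours, not quoted from the paper; σ_0 is a user-chosen lower bound on the noise level):

    \min_{\beta \in \mathbb{R}^p,\ \sigma \ge \sigma_0} \; \frac{\|y - X\beta\|_2^2}{2 n \sigma} + \frac{\sigma}{2} + \lambda \|\beta\|_1

The problem is jointly convex in (β, σ), and the constraint σ ≥ σ_0 is what provides the advertised numerical stability: with σ_0 = 0 one recovers the plain Concomitant Lasso, whose objective degenerates when the residual y - Xβ approaches zero.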

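The closing sentence of the abstract names the solver's two ingredients: coordinate descent and safe screening rules. Below is a minimal NumPy sketch of the coordinate descent part only, alternating soft-thresholding updates of β with the closed-form noise update σ = max(σ_0, ‖y - Xβ‖_2 / √n). Screening rules are omitted, and all names are illustrative rather than taken from the authors' released code.

    import numpy as np

    def soft_threshold(x, t):
        # Proximal operator of t * |.|
        return np.sign(x) * max(abs(x) - t, 0.0)

    def smoothed_concomitant_cd(X, y, lam, sigma_0, n_iter=100):
        """Coordinate descent sketch for
        min_{beta, sigma >= sigma_0} ||y - X beta||^2 / (2 n sigma)
                                     + sigma / 2 + lam * ||beta||_1
        """
        n, p = X.shape
        beta = np.zeros(p)
        resid = y.astype(float).copy()      # r = y - X beta (beta = 0)
        col_sq = (X ** 2).sum(axis=0)       # ||x_j||_2^2 for each column
        sigma = max(sigma_0, np.linalg.norm(y) / np.sqrt(n))
        for _ in range(n_iter):
            for j in range(p):
                if col_sq[j] == 0.0:
                    continue
                old = beta[j]
                rho = X[:, j] @ resid + col_sq[j] * old   # x_j^T r_{-j}
                beta[j] = soft_threshold(rho, n * lam * sigma) / col_sq[j]
                if beta[j] != old:
                    resid -= (beta[j] - old) * X[:, j]
            # Closed-form concomitant noise update, clipped below by sigma_0
            sigma = max(sigma_0, np.linalg.norm(resid) / np.sqrt(n))
        return beta, sigma

In the paper, Gap Safe screening rules discard irrelevant features early, which is what brings the overall cost down to that of a fast Lasso solver; the sketch above converges but makes no such attempt.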

Related research

- 02/19/2016: GAP Safe Screening Rules for Sparse-Group-Lasso
- 05/27/2017: Generalized Concomitant Multi-Task Lasso for sparse multimodal regression
- 02/07/2019: Concomitant Lasso with Repetitions (CLaR): beyond averaging multiple realizations of heteroscedastic noise
- 11/17/2016: Gap Safe screening rules for sparsity enforcing penalties
- 06/02/2021: Smooth Bilevel Programming for Sparse Regularization
- 05/16/2022: On Lasso and Slope drift estimators for Lévy-driven Ornstein–Uhlenbeck processes
- 03/27/2023: Square Root LASSO: Well-posedness, Lipschitz stability and the tuning trade off
