Sparsity by Worst-Case Penalties

10/07/2012
by Yves Grandvalet, et al.

This paper proposes a new interpretation of sparse penalties such as the elastic-net and the group-lasso. Beyond providing a new viewpoint on these penalization schemes, our approach results in a unified optimization strategy. Our experiments demonstrate that this strategy, implemented for the elastic-net, is computationally extremely efficient on small- to medium-size problems. Our accompanying software solves problems very accurately, at machine precision, in the time required for competing state-of-the-art algorithms to produce a rough estimate. We illustrate on real and artificial datasets that this accuracy is required to recover the correct support of the solution, which is an important element for the interpretability of sparsity-inducing penalties.
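The abstract's point that solver accuracy matters for support recovery can be illustrated with a minimal elastic-net solver. This is not the authors' unified optimization strategy; it is a standard coordinate-descent sketch (the parameterization used by scikit-learn and glmnet), and the names `elastic_net_cd` and the toy data are illustrative assumptions. The support is read off by thresholding the fitted coefficients, which is exactly where insufficient numerical accuracy can flip a coefficient in or out of the support.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the l1 norm: shrinks z toward zero by t.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def elastic_net_cd(X, y, alpha=0.1, l1_ratio=0.9, n_iter=500, tol=1e-10):
    # Coordinate descent for the elastic-net objective
    #   (1/2n)||y - Xb||^2 + alpha*(l1_ratio*||b||_1 + (1-l1_ratio)/2*||b||^2)
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n      # per-coordinate curvature
    r = y - X @ b                          # current residual
    for _ in range(n_iter):
        b_old = b.copy()
        for j in range(p):
            r += X[:, j] * b[j]            # put coordinate j back into the residual
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, alpha * l1_ratio) / (
                col_sq[j] + alpha * (1.0 - l1_ratio))
            r -= X[:, j] * b[j]
        if np.max(np.abs(b - b_old)) < tol:
            break
    return b

# Toy problem: 3 true nonzero coefficients out of 10, noiseless response.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
beta_true = np.zeros(10)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true

b = elastic_net_cd(X, y, alpha=0.1, l1_ratio=0.9)
support = np.flatnonzero(np.abs(b) > 1e-6)
```

Note that the recovered coefficients are shrunk by roughly `alpha * l1_ratio` relative to the truth; a solver stopped too early leaves small spurious coordinates nonzero, which is the interpretability hazard the paper highlights.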


Related research

- Sparse Prediction with the k-Support Norm (04/23/2012): We derive a novel norm that corresponds to the tightest convex relaxatio...
- A simple yet efficient algorithm for multiple kernel learning under elastic-net constraints (06/29/2015): This report presents an algorithm for the solution of multiple kernel le...
- Elastic Gradient Descent and Elastic Gradient Flow: LARS Like Algorithms Approximating the Solution Paths of the Elastic Net (02/04/2022): The elastic net combines lasso and ridge regression to fuse the sparsity...
- A Learning Theory Approach to a Computationally Efficient Parameter Selection for the Elastic Net (09/23/2018): Despite recent advances in regularisation theory, the issue of parameter...
- Graphical Elastic Net and Target Matrices: Fast Algorithms and Software for Sparse Precision Matrix Estimation (01/06/2021): We consider estimation of undirected Gaussian graphical models and inver...
- An Efficient Semi-smooth Newton Augmented Lagrangian Method for Elastic Net (06/06/2020): Feature selection is an important and active research area in statistics...
- Selecting time-series hyperparameters with the artificial jackknife (02/11/2020): This article proposes a generalisation of the delete-d jackknife to solv...
