
Sparse Prediction with the k-Support Norm

by Andreas Argyriou et al.
The University of Chicago
Toyota Technological Institute at Chicago

We derive a novel norm that corresponds to the tightest convex relaxation of sparsity combined with an ℓ_2 penalty. We show that this new k-support norm provides a tighter relaxation than the elastic net and is thus a good replacement for the Lasso or the elastic net in sparse prediction problems. Through the study of the k-support norm, we also bound the looseness of the elastic net, thus shedding new light on it and providing justification for its use.
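The k-support norm described above admits a closed-form evaluation: sort the absolute entries of w in decreasing order, find the unique r in {0, ..., k-1} splitting the vector into a "head" and a "tail", and combine an ℓ_2 term on the head with an averaged ℓ_1 term on the tail. As a rough illustration (the function name `k_support_norm` is ours, and this sketch follows the paper's closed-form characterization rather than any official implementation):

```python
import numpy as np

def k_support_norm(w, k):
    """Evaluate the k-support norm of w via its closed form.

    Sort |w| in decreasing order as |w|_(1) >= ... >= |w|_(d), find the
    unique r in {0, ..., k-1} with
        |w|_(k-r-1) > (1/(r+1)) * sum_{i=k-r}^{d} |w|_(i) >= |w|_(k-r)
    (with the convention |w|_(0) = +inf), then return
        sqrt( sum_{i=1}^{k-r-1} |w|_(i)^2
              + (1/(r+1)) * (sum_{i=k-r}^{d} |w|_(i))^2 ).
    """
    z = np.sort(np.abs(np.asarray(w, dtype=float)))[::-1]
    if not 1 <= k <= len(z):
        raise ValueError("k must satisfy 1 <= k <= len(w)")
    for r in range(k):
        tail = z[k - r - 1:].sum()                 # sum of |w|_(i) for i >= k-r
        avg = tail / (r + 1)
        # |w|_(k-r-1); treated as +inf when k-r-1 == 0
        head = np.inf if k - r - 1 == 0 else z[k - r - 2]
        if head > avg >= z[k - r - 1]:
            return np.sqrt((z[: k - r - 1] ** 2).sum() + tail ** 2 / (r + 1))
    raise RuntimeError("no valid split found; this should not happen")
```

Two sanity checks follow directly from the definition: with k = 1 the norm reduces to the ℓ_1 norm, and with k = d it reduces to the ℓ_2 norm, so the k-support ball interpolates between the Lasso and ridge geometries much as the elastic net does.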




Related papers:
- Sparsity by Worst-Case Penalties
- Lasso and elastic nets by orthants
- Robust Elastic Net Regression
- Sparse Regularization in Marketing and Economics
- Feature-weighted elastic net: using "features of features" for better prediction
- Trace Lasso: a trace norm regularization for correlated designs