Structured sparsity-inducing norms through submodular functions

08/25/2010
by Francis Bach

Sparse methods for supervised learning aim at finding good linear predictors from as few variables as possible, i.e., with small cardinality of their supports. This combinatorial selection problem is often turned into a convex optimization problem by replacing the cardinality function by its convex envelope (tightest convex lower bound), in this case the L1-norm. In this paper, we investigate more general set-functions than the cardinality, that may incorporate prior knowledge or structural constraints which are common in many applications: namely, we show that for nondecreasing submodular set-functions, the corresponding convex envelope can be obtained from its Lovász extension, a common tool in submodular analysis. This defines a family of polyhedral norms, for which we provide generic algorithmic tools (subgradients and proximal operators) and theoretical results (conditions for support recovery or high-dimensional inference). By selecting specific submodular functions, we can give a new interpretation to known norms, such as those based on rank-statistics or grouped norms with potentially overlapping groups; we also define new norms, in particular ones that can be used as non-factorial priors for supervised learning.
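Norms in this family can be evaluated with the classical greedy algorithm for the Lovász extension: sort the magnitudes |w_j| in decreasing order and weight each coordinate by the marginal gain of F along that ordering. Below is a minimal NumPy sketch of this evaluation; the helper name and the example set-functions are our own illustrations under these assumptions, not code from the paper.

```python
import numpy as np

def lovasz_extension_norm(w, F):
    """Evaluate Omega(w) = f(|w|), where f is the Lovasz extension of a
    nondecreasing submodular set-function F with F(empty set) = 0.
    Greedy algorithm: sort |w| in decreasing order and weight each
    coordinate by the marginal gain of F along that ordering."""
    w_abs = np.abs(w)
    order = np.argsort(-w_abs)       # coordinates by decreasing magnitude
    omega, F_prev = 0.0, 0.0
    selected = set()
    for j in order:
        selected.add(int(j))
        gain = F(selected) - F_prev  # marginal gain F(A_k) - F(A_{k-1})
        omega += w_abs[j] * gain
        F_prev += gain
    return omega

# Sanity check: the cardinality function F(A) = |A| recovers the L1-norm.
cardinality = lambda A: len(A)
w = np.array([0.5, -2.0, 0.0, 1.5])
assert np.isclose(lovasz_extension_norm(w, cardinality), np.abs(w).sum())

# A structured choice: F(A) = number of groups intersecting A yields a
# grouped norm with potentially overlapping groups, as in the abstract.
groups = [{0, 1}, {1, 2}, {2, 3}]
group_cover = lambda A: sum(1 for g in groups if g & A)
print(lovasz_extension_norm(w, group_cover))  # 5.5
```

The greedy gains, with the signs of the corresponding entries of w restored, also yield a subgradient of the norm at points with nonzero coordinates, one of the generic algorithmic tools referred to above.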

Related research

12/07/2010 · Shaping Level Sets with Submodular Functions
We consider a class of sparsity-inducing regularization terms based on s...

05/06/2012 · Convex Relaxation for Combinatorial Penalties
In this paper, we propose a unifying view of several recently proposed ...

09/12/2011 · Structured sparsity through convex optimization
Sparse estimation methods are aimed at using or obtaining parsimonious r...

07/10/2014 · A Convex Formulation for Learning Scale-Free Networks via Submodular Relaxation
A key problem in statistics and machine learning is the determination of...

06/21/2021 · Unsupervised Deep Learning by Injecting Low-Rank and Sparse Priors
What if deep neural networks can learn from sparsity-inducing priors? Wh...

02/05/2021 · Exploring the Subgraph Density-Size Trade-off via the Lovász Extension
Given an undirected graph, the Densest-k-Subgraph problem (DkS) seeks to...

02/14/2012 · Active Semi-Supervised Learning using Submodular Functions
We consider active, semi-supervised learning in an offline transductive ...
