Network Flow Algorithms for Structured Sparsity

08/31/2010
by Julien Mairal, et al.

We consider a class of learning problems that involve a structured sparsity-inducing norm defined as the sum of ℓ_∞-norms over groups of variables. Whereas much effort has been devoted to fast optimization methods when the groups are disjoint or embedded in a hierarchical structure, we address here the case of general overlapping groups. To this end, we show that the corresponding optimization problem is related to network flow optimization. More precisely, the proximal problem associated with the norm we consider is dual to a quadratic min-cost flow problem. We propose an efficient procedure that computes its solution exactly in polynomial time. Our algorithm scales up to millions of variables and opens up a whole new range of applications for structured sparse models. We present several experiments on image and video data, demonstrating the applicability and scalability of our approach for various problems.
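To make the objects in the abstract concrete, the sketch below computes the norm Ω(w) = Σ_g ‖w_g‖_∞ for (possibly overlapping) groups, and the proximal operator of λ‖·‖_∞ for a single group. The single-group prox uses the standard Moreau-decomposition identity prox_{λ‖·‖_∞}(x) = x − Π_{‖u‖_1 ≤ λ}(x), i.e. the residual of a Euclidean projection onto the ℓ_1-ball. This is only an illustrative baseline: the paper's actual contribution, solving the proximal problem for overlapping groups via a quadratic min-cost flow formulation, is not implemented here, and all function names are our own.

```python
import numpy as np

def group_linf_norm(w, groups):
    """Omega(w) = sum over groups g of max_{j in g} |w_j|.
    Groups are index lists and may overlap."""
    return sum(np.max(np.abs(w[list(g)])) for g in groups)

def project_l1_ball(v, radius):
    """Euclidean projection of v onto the l1-ball of the given radius
    (sort-and-threshold algorithm, O(n log n))."""
    u = np.abs(v)
    if u.sum() <= radius:
        return v.copy()
    s = np.sort(u)[::-1]                      # sorted magnitudes, descending
    css = np.cumsum(s)
    idx = np.arange(1, len(s) + 1)
    rho = np.nonzero(s - (css - radius) / idx > 0)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)  # soft-threshold level
    return np.sign(v) * np.maximum(u - theta, 0.0)

def prox_linf(x, lam):
    """Proximal operator of lam * ||x||_inf for a single group, via
    Moreau decomposition: prox(x) = x - proj of x onto the l1-ball
    of radius lam."""
    return x - project_l1_ball(x, lam)
```

For example, `prox_linf(np.array([3.0, -1.0, 0.5]), 0.5)` shrinks only the largest entry, returning `[2.5, -1.0, 0.5]`, and any x with ‖x‖_1 ≤ λ is mapped to zero. When groups overlap, the overall proximal problem no longer separates across groups, which is precisely where the paper's min-cost flow duality comes in.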

Related research

- Convex and Network Flow Optimization for Structured Sparsity (04/11/2011): We consider a class of learning problems regularized by a structured spa...
- A General Framework for Structured Sparsity via Proximal Optimization (06/26/2011): We study a generalized framework for structured sparsity. It extends the...
- Proximal Methods for Hierarchical Sparse Coding (09/11/2010): Sparse coding consists in representing signals as sparse linear combinat...
- Structured sparsity through convex optimization (09/12/2011): Sparse estimation methods are aimed at using or obtaining parsimonious r...
- Structured Sparsity via Alternating Direction Methods (05/04/2011): We consider a class of sparse learning problems in high dimensional feat...
- Numerical evaluation of dual norms via the MM algorithm (04/14/2022): We deal with the problem of numerically computing the dual norm, which i...
- Structured Sparsity Inducing Adaptive Optimizers for Deep Learning (02/07/2021): The parameters of a neural network are naturally organized in groups, so...
