1 Introduction
Sparse linear models have become a popular framework for dealing with various unsupervised and supervised tasks in machine learning and signal processing. In such models, linear combinations of small sets of variables are selected to describe the data. Regularization by the ℓ1-norm has emerged as a powerful tool for addressing this combinatorial variable selection problem, relying on both a well-developed theory (see [1] and references therein) and efficient algorithms [2, 3, 4]. The ℓ1-norm primarily encourages sparse solutions, regardless of the potential structural relationships (e.g., spatial, temporal or hierarchical) existing between the variables. Much effort has recently been devoted to designing sparsity-inducing regularizations capable of encoding higher-order information about allowed patterns of nonzero coefficients [5, 6, 7, 8, 9], with successful applications in bioinformatics [6, 10], topic modeling [11] and computer vision [8]. By considering sums of norms of appropriate subsets, or groups, of variables, these regularizations control the sparsity patterns of the solutions. The underlying optimization problem is usually difficult, in part because it involves nonsmooth components. Proximal methods have proven to be effective in this context, essentially because of their fast convergence rates and their ability to deal with large problems [3, 4]. While the settings where the penalized groups of variables do not overlap [12] or are embedded in a tree-shaped hierarchy [11] have already been studied, sparsity-inducing regularizations of general overlapping groups have, to the best of our knowledge, never been considered within the proximal method framework.
This paper makes the following contributions:

It shows that the proximal operator associated with the structured norm we consider can be computed by solving a quadratic min-cost flow problem, thereby establishing a connection with the network flow optimization literature.

It presents a fast and scalable procedure for solving a large class of structured sparse regularized problems, which, to the best of our knowledge, have not been addressed efficiently before.

It shows that the dual norm of the sparsity-inducing norm we consider can also be evaluated efficiently, which enables us to compute duality gaps for the corresponding optimization problems.

It demonstrates that our method is relevant for various applications, from video background subtraction to estimation of hierarchical structures for dictionary learning of natural image patches.
2 Structured Sparse Models
We consider in this paper convex optimization problems of the form
min_{w ∈ ℝ^p} f(w) + λΩ(w),    (1)
where f is a convex differentiable function and Ω is a convex, nonsmooth, sparsity-inducing regularization function. When one knows a priori that the solutions of this learning problem only have a few nonzero coefficients, Ω is often chosen to be the ℓ1-norm, leading for instance to the Lasso [13]. When these coefficients are organized in groups, a penalty encoding explicitly this prior knowledge can improve the prediction performance and/or interpretability of the learned models [12, 14, 15, 16]. Such a penalty might for example take the form
Ω(w) = Σ_{g∈𝒢} η_g ‖w_g‖_∞,    (2)
where 𝒢 is a set of groups of indices, w_j denotes the j-th coordinate of w for j in {1,…,p}, the vector w_g in ℝ^{|g|} represents the coefficients of w indexed by g in 𝒢, and the scalars η_g are positive weights. A sum of ℓ2-norms is also used in the literature [7], but the ℓ∞-norm is piecewise linear, a property that we take advantage of in this paper. Note that when 𝒢 is the set of singletons of {1,…,p}, we get back the ℓ1-norm. If 𝒢 is a more general partition of {1,…,p}, variables are selected in groups rather than individually. When the groups overlap, Ω is still a norm and sets groups of variables to zero together [5]. The latter setting has first been considered for hierarchies [7, 10, 17], and then extended to general group structures [5].¹ Solving Eq. (1) in this context becomes challenging and is the topic of this paper. Following [11], who tackled the case of hierarchical groups, we propose to approach this problem with proximal methods, which we now introduce.

¹Note that other types of structured sparse models have also been introduced, either through a different norm [6], or through nonconvex criteria [8, 9].
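For concreteness, the penalty of Eq. (2) can be evaluated in a few lines of NumPy; groups are plain index lists and may overlap (the function name and group encoding below are ours, not from the paper):

```python
import numpy as np

def omega(w, groups, weights):
    """Structured norm of Eq. (2): Omega(w) = sum_g eta_g * ||w_g||_inf,
    where w_g is the subvector of w indexed by the (possibly overlapping)
    group g, and eta_g > 0 is the group weight."""
    return sum(eta * np.abs(w[list(g)]).max()
               for g, eta in zip(groups, weights))
```

With singleton groups and unit weights, the function reduces to the ℓ1-norm, as stated in the text.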
2.1 Proximal Methods
In a nutshell, proximal methods can be seen as a natural extension of gradient-based techniques, and they are well suited to minimizing the sum of two convex terms, a smooth function f (continuously differentiable with Lipschitz-continuous gradient) and a potentially nonsmooth function λΩ (see [18] and references therein). At each iteration, the function f is linearized at the current estimate w₀ and the so-called proximal problem has to be solved:

min_{w ∈ ℝ^p} f(w₀) + (w − w₀)ᵀ∇f(w₀) + λΩ(w) + (L/2)‖w − w₀‖₂².

The quadratic term keeps the solution in a neighborhood where the current linear approximation holds, and L > 0 is an upper bound on the Lipschitz constant of ∇f. This problem can be rewritten as
min_{v ∈ ℝ^p} ½‖u − v‖₂² + λΩ(v),    (3)
with λ ← λ/L and u = w₀ − (1/L)∇f(w₀). We call proximal operator associated with the regularization λΩ the function that maps a vector u in ℝ^p onto the (unique, by strong convexity) solution of Eq. (3). Simple proximal methods use the solution of Eq. (3) as the next iterate, but accelerated variants [3, 4] are also based on the proximal operator and require to solve problem (3) exactly and efficiently to enjoy their fast convergence rates. Note that when Ω is the ℓ1-norm, the solution of Eq. (3) is obtained by soft-thresholding [18].
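The soft-thresholding operator mentioned above has a one-line closed form, component-wise sign(u_j) max(|u_j| − λ, 0):

```python
import numpy as np

def soft_threshold(u, lam):
    """Proximal operator of lam * ||.||_1: the solution of Eq. (3) when
    Omega is the l1-norm, computed by elementwise soft-thresholding."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
```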
The approach we develop in the rest of this paper extends [11] to the case of general overlapping groups when Ω is a weighted sum of ℓ∞-norms, broadening the application of these regularizations to a wider spectrum of problems.²

²For hierarchies, the approach of [11] also applies to the case where Ω is a weighted sum of ℓ2-norms.
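The basic (non-accelerated) proximal iteration described in this section can be sketched as follows; grad_f, prox and the constant L are supplied by the caller, and the function names are ours:

```python
import numpy as np

def ista(grad_f, prox, w0, L, lam, n_iter=100):
    """Basic proximal-gradient iteration: linearize f at the current
    estimate and solve the proximal problem (3), i.e.
    w <- prox(w - grad_f(w)/L, lam/L)."""
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(n_iter):
        w = prox(w - grad_f(w) / L, lam / L)
    return w
```

The accelerated variants [3, 4] differ only in where the linearization point is taken; they call the same proximal operator.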
3 A Quadratic Min-Cost Flow Formulation
In this section, we show that a convex dual of problem (3) for general overlapping groups can be reformulated as a quadratic min-cost flow problem. We propose an efficient algorithm to solve it exactly, as well as a related algorithm to compute the dual norm of Ω. We start by considering the dual formulation to problem (3) introduced in [11], for the case where Ω is a weighted sum of ℓ∞-norms:
Lemma 1 (Dual of the proximal problem [11])
Given u in ℝ^p, the solution v* of Eq. (3) satisfies v* = u − Σ_{g∈𝒢} ξ*^g, where ξ* = (ξ*^g)_{g∈𝒢} is a solution of

min_ξ ½‖u − Σ_{g∈𝒢} ξ^g‖₂²  s.t.  ∀g ∈ 𝒢, ‖ξ^g‖₁ ≤ λη_g and ξ_j^g = 0 if j ∉ g.    (4)

Without loss of generality,³ we assume from now on that the scalars u_j are all nonnegative, and we constrain the entries of ξ to be nonnegative. We now introduce a graph modeling of problem (4).

³Let ξ* denote a solution of Eq. (4). Optimality conditions of Eq. (4) derived in [11] show that for all j in {1,…,p}, the signs of the nonzero coefficients ξ_j^{*g} for g in 𝒢 are the same as the signs of the entries u_j. To solve Eq. (4), one can therefore flip the signs of the negative variables u_j, then solve the modified dual formulation (with nonnegative variables), which gives the magnitude of the entries ξ_j^g (the signs of these being known).
3.1 Graph Model
Let G = (V, E, s, t) be a directed graph, where V is a set of vertices, E ⊆ V × V a set of arcs, s a source, and t a sink. Let c and cap be two functions on the arcs, c: E → ℝ and cap: E → ℝ₊, where c is a cost function and cap is a nonnegative capacity function. A flow is a nonnegative function on arcs that satisfies capacity constraints on all arcs (the value of the flow on an arc is less than or equal to the arc capacity) and conservation constraints on all vertices (the sum of incoming flows at a vertex is equal to the sum of outgoing flows) except for the source and the sink.
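The two defining constraints of a flow translate directly into code. In the sketch below (our own, for illustration), arcs are dictionary keys (tail, head), and both the flow and the capacities are dictionaries over arcs:

```python
def is_feasible_flow(flow, capacity, source, sink):
    """Check the two defining properties of a flow: capacity constraints on
    every arc, and conservation at every vertex except source and sink."""
    # capacity constraints: flow is nonnegative and within arc capacity
    for arc, f in flow.items():
        if f < 0 or f > capacity.get(arc, 0.0):
            return False
    # conservation constraints: net excess must vanish at internal vertices
    excess = {}
    for (u, v), f in flow.items():
        excess[u] = excess.get(u, 0.0) - f
        excess[v] = excess.get(v, 0.0) + f
    return all(abs(e) < 1e-12 for n, e in excess.items()
               if n not in (source, sink))
```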
We introduce a canonical graph associated with our optimization problem, and uniquely characterized by the following construction:
(i) V is the union of two sets of vertices V_u and V_gr, where V_u contains exactly one vertex for each index j in {1,…,p}, and V_gr contains exactly one vertex for each group g in 𝒢. We thus have |V| = p + |𝒢| + 2. For simplicity, we identify groups and indices with the vertices of the graph.
(ii) For every group g in 𝒢, E contains an arc (s, g). These arcs have capacity λη_g and zero cost.
(iii) For every group g in 𝒢, and every index j in g, E contains an arc (g, j) with zero cost and infinite capacity. We denote by ξ_j^g the flow on this arc.
(iv) For every index j in {1,…,p}, E contains an arc (j, t) with infinite capacity and a cost ½(u_j − ξ̄_j)², where ξ̄_j is the flow on (j, t). Note that by flow conservation, we necessarily have ξ̄_j = Σ_{g∈𝒢} ξ_j^g.
Examples of canonical graphs are given in Figure 1. The flows ξ_j^g associated with G can now be identified with the variables of problem (4): indeed, the sum of the costs on the arcs leading to the sink is equal to the objective function of (4), while the capacities of the arcs (s, g) match the constraints on each group. This shows that finding a flow minimizing the sum of the costs on such a graph is equivalent to solving problem (4).
When some groups are included in others, the canonical graph can be simplified to yield a graph with a smaller number of edges. Specifically, if g and h are groups with h ⊂ g, the edges (g, j) for j in h carrying a flow ξ_j^g can be removed and replaced by a single edge (g, h) of infinite capacity and zero cost, carrying the flow Σ_{j∈h} ξ_j^g. This simplification is illustrated in Figure 1(d), with a graph equivalent to the one of Figure 1(c). This does not change the optimal value of ξ̄, which is the quantity of interest for computing the optimal primal variable. We present in Appendix A a formal definition of equivalent graphs. These simplifications are useful in practice, since they reduce the number of edges in the graph and improve the speed of the algorithms we are now going to present.
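The construction (i)-(iv) of Section 3.1 is mechanical; the sketch below builds the arc-capacity map of the canonical graph (vertex labels, the function name, and its signature are ours, for illustration):

```python
INF = float('inf')

def canonical_graph(p, groups, weights, lam):
    """Build the canonical min-cost flow graph of Section 3.1.
    Vertices: 's', 't', group ids ('g', i) and variable ids ('v', j).
    Returns a dict arc -> capacity; the quadratic costs of the problem
    sit only on the arcs (j, t) leading to the sink."""
    capacity = {}
    for i, (g, eta) in enumerate(zip(groups, weights)):
        capacity[('s', ('g', i))] = lam * eta      # capacity lam*eta_g, zero cost
        for j in g:
            capacity[(('g', i), ('v', j))] = INF   # zero cost, infinite capacity
    for j in range(p):
        capacity[(('v', j), 't')] = INF            # cost (u_j - xi_bar_j)^2 / 2
    return capacity
```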
3.2 Computation of the Proximal Operator
Quadratic min-cost flow problems have been well studied in the operations research literature [19]. One of the simplest cases, where 𝒢 contains a single group as in Figure 1(a), can be solved by an orthogonal projection on the ℓ1-ball of radius λη_g. It has been shown, both in machine learning [20] and operations research [19, 21], that such a projection can be done in O(p) operations. When the group structure is a tree as in Figure 1(d), strategies developed in the two communities are also similar [11, 19], and solve the problem in O(pd) operations, where d is the depth of the tree.
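The single-group case thus reduces to Euclidean projection onto an ℓ1-ball. The references above give an expected linear-time pivot algorithm; the simpler O(p log p) sort-based variant below computes the same projection and is enough to convey the idea:

```python
import numpy as np

def project_l1_ball(u, radius):
    """Euclidean projection of u onto {v : ||v||_1 <= radius}.
    Sort-based O(p log p) variant; the papers cited in the text use an
    expected-linear-time pivot version of the same computation."""
    a = np.abs(u)
    if a.sum() <= radius:
        return u.copy()
    s = np.sort(a)[::-1]                      # magnitudes in decreasing order
    cumsum = np.cumsum(s)
    ks = np.arange(1, len(u) + 1)
    # largest k such that s_k > (sum of top k - radius) / k
    k = np.nonzero(s > (cumsum - radius) / ks)[0][-1]
    tau = (cumsum[k] - radius) / (k + 1.0)    # soft-threshold level
    return np.sign(u) * np.maximum(a - tau, 0.0)
```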
The general case of overlapping groups is more difficult. Hochbaum and Hong have shown in [19] that quadratic min-cost flow problems can be reduced to a specific parametric max-flow problem, for which an efficient algorithm exists [22].⁴ While this approach could be used to solve Eq. (4), it ignores the fact that our graphs have nonzero costs only on edges leading to the sink. To take advantage of this specificity, we propose the dedicated Algorithm 1. Our method clearly shares some similarities with a simplified version of [22] presented in [23], namely a divide-and-conquer strategy. Nonetheless, we performed an empirical comparison described in Appendix D, which shows that our dedicated algorithm has significantly better performance in practice.

⁴By definition, a parametric max-flow problem consists in solving, for every value of a parameter, a max-flow problem on a graph whose arc capacities depend on this parameter.
Informally, computeFlow returns the optimal flow vector ξ, proceeding as follows. This function first solves a relaxed version of problem (4), obtained by replacing the individual constraints on the vectors ξ^g by a single constraint on ξ̄, whose ℓ1-norm should be less than, or equal to, the sum of the constraints on the vectors ξ^g. The optimal vector γ of this relaxation therefore gives a lower bound on the optimal cost. Then, the maximum-flow step [24] tries to find a feasible flow such that the vector ξ̄ matches γ. If ξ̄ = γ, then the cost of the flow reaches the lower bound, and the flow is optimal. If ξ̄ ≠ γ, the lower bound cannot be reached, and we construct a minimum cut of the graph [25] that defines two disjoint sets of nodes V⁺ and V⁻; V⁺ is the part of the graph that can potentially receive more flow from the source, whereas all arcs linking s to V⁻ are saturated. The properties of a min cut [26] imply that there are no arcs from V⁺ to V⁻ (arcs inside V have infinite capacity by construction), and that there is no flow on arcs from V⁻ to V⁺. At this point, it is possible to show that the value of the optimal min-cost flow on these arcs is also zero. Thus, removing them yields an equivalent optimization problem, which can be decomposed into two independent problems of smaller size and solved recursively by calling computeFlow on V⁺ and on V⁻. Note that when Ω is the ℓ1-norm, our algorithm solves problem (4) during the first projection step and stops. A formal proof of correctness of Algorithm 1 and further details are relegated to Appendix B.
The approach of [19, 22] is guaranteed to have the same worst-case complexity as a single max-flow algorithm. However, we have experimentally observed a significant discrepancy between the worst-case and empirical complexities for these flow problems, essentially because the empirical cost of each max-flow is significantly smaller than its theoretical cost. Despite the fact that the worst-case guarantee of our algorithm is weaker than theirs (up to a factor |V|), it is more adapted to the structure of our graphs and has proven to be much faster in our experiments (see supplementary material).
Some implementation details are crucial to the efficiency of the algorithm:

Exploiting maximal connected components: When there exists no arc between two subsets of V, it is possible to process them independently to solve the global min-cost flow problem. To that effect, before calling the function computeFlow, we look for the maximal connected components of the graph and call the procedure computeFlow sequentially on each of them.
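This preprocessing can be sketched as a breadth-first search on the bipartite variable/group graph (function and variable names are ours; variables belonging to no group come out as singleton components):

```python
from collections import deque

def connected_components(p, groups):
    """Split variables {0,...,p-1} and groups into the maximal connected
    components of the bipartite graph linking each group to its variables."""
    var_to_groups = [[] for _ in range(p)]
    for gi, g in enumerate(groups):
        for j in g:
            var_to_groups[j].append(gi)
    seen_v, seen_g, components = set(), set(), []
    for start in range(p):
        if start in seen_v:
            continue
        comp_v, comp_g, queue = [], [], deque([start])
        seen_v.add(start)
        while queue:
            j = queue.popleft()
            comp_v.append(j)
            for gi in var_to_groups[j]:
                if gi in seen_g:
                    continue
                seen_g.add(gi)
                comp_g.append(gi)
                for j2 in groups[gi]:
                    if j2 not in seen_v:
                        seen_v.add(j2)
                        queue.append(j2)
        components.append((sorted(comp_v), sorted(comp_g)))
    return components
```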

Efficient max-flow algorithm: We have implemented the "push-relabel" algorithm of [24] to solve our max-flow problems, using classical heuristics that significantly speed it up in practice (see [24, 27]). Our implementation uses the so-called "highest-active vertex selection rule, global and gap heuristics" (see [24, 27]), and has a worst-case complexity of O(|V|²|E|^{1/2}) for a graph G = (V, E). This algorithm leverages the concept of preflow, which relaxes the definition of flow and allows vertices to have a positive excess. 
Using flow warm-restarts: Our algorithm can be initialized with any valid preflow, enabling warm-restarts when the max-flow is called several times as in our algorithm.

Improved projection step: The first line of the procedure computeFlow can be replaced by a projection with additional upper-bound constraints on the entries of γ. The idea is that the structure of the graph will not allow ξ̄_j to be greater than u_j after the max-flow step. Adding these additional constraints leads to better performance when the graph is not well balanced. This modified projection step can still be computed in linear time [21].
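The push-relabel method of [24] used above is substantially more involved than can be shown here; purely for illustration, the short augmenting-path (Edmonds-Karp) max-flow below returns the same max-flow value on the same graphs, only more slowly:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp max-flow on a dict arc -> capacity. Illustration only:
    the implementation discussed in the text uses the faster push-relabel
    method with heuristics, but the max-flow value is identical."""
    residual = dict(capacity)
    adj = {}
    for (u, v) in capacity:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
        residual.setdefault((v, u), 0.0)   # reverse arcs start empty
    total = 0.0
    while True:
        # breadth-first search for a shortest augmenting path
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return total
        # bottleneck capacity along the path, then augment
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[a] for a in path)
        for (u, v) in path:
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
        total += bottleneck
```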
3.3 Computation of the Dual Norm
The dual norm Ω* of Ω, defined for any vector κ in ℝ^p by Ω*(κ) = max_{Ω(z)≤1} zᵀκ, is a key quantity to study sparsity-inducing regularizations [5, 17, 28]. We use it here to monitor the convergence of the proximal method through a duality gap, and define a proper optimality criterion for problem (1). We denote by f* the Fenchel conjugate of f [29], defined by f*(κ) = sup_z [zᵀκ − f(z)]. The duality gap for problem (1) can be derived from standard Fenchel duality arguments [29], and it is equal to f(w) + λΩ(w) + f*(−κ) for any dual variable κ satisfying Ω*(κ) ≤ λ. Therefore, evaluating the duality gap requires computing Ω* efficiently in order to find a feasible dual variable κ. This is equivalent to solving another network flow problem, based on the following variational formulation:
Ω*(κ) = min_{ξ,τ} τ  s.t.  Σ_{g∈𝒢} ξ^g = κ and ∀g ∈ 𝒢, ‖ξ^g‖₁ ≤ τη_g, with ξ_j^g = 0 if j ∉ g.    (5)

In the network problem associated with (5), the capacities on the arcs (s, g), g in 𝒢, are set to τη_g, and the capacities on the arcs (j, t), j in {1,…,p}, are fixed to κ_j. Solving problem (5) amounts to finding the smallest value of τ such that there exists a flow saturating the capacities on the arcs leading to the sink (i.e., ξ̄ = κ). Equation (5) and the algorithm below are proven to be correct in Appendix B.
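When the groups form a partition (no overlap), the minimization in Eq. (5) has a closed form, because the decomposition of κ into the ξ^g is forced: Ω*(κ) = max_g ‖κ_g‖₁ / η_g. This gives a cheap sanity check for the flow-based computation (function name is ours; the formula does not apply to overlapping groups):

```python
import numpy as np

def dual_norm_disjoint(kappa, groups, weights):
    """Dual norm Omega*(kappa) when the groups are DISJOINT:
    max_g ||kappa_g||_1 / eta_g. Overlapping groups require the
    network-flow computation of Eq. (5)."""
    return max(np.abs(kappa[list(g)]).sum() / eta
               for g, eta in zip(groups, weights))
```

With singleton groups and unit weights, this recovers the ℓ∞-norm, the dual of the ℓ1-norm, as expected.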
4 Applications and Experiments
Our experiments use the algorithm of [4] based on our proximal operator. We present this algorithm in more detail in Appendix C.
4.1 Speed Comparison
We compare our method (ProxFlow) and two generic optimization techniques, namely a subgradient descent (SG) and an interior point method,⁵ on a regularized linear regression problem. Both SG and ProxFlow are implemented in C++. Experiments are run on a single-core CPU. We consider a design matrix X built from overcomplete dictionaries of discrete cosine transforms (DCT), which are naturally organized on one- or two-dimensional grids and display local correlations. The following families of groups 𝒢 using this spatial information are thus considered: (1) every contiguous sequence of a fixed length for the one-dimensional case, and (2) every square of a fixed size in the two-dimensional setting. We generate vectors y according to the linear model y = Xw₀ + ε, where ε is additive noise. The vector w₀ has a small percentage of nonzero components, randomly selected while respecting the structure of 𝒢, with values generated uniformly at random. In our experiments, the regularization parameter λ is chosen to achieve this level of sparsity. For SG, we take a decreasing step size of the form a/(k+b), where k is the iteration number and (a, b) are the best parameters selected over a grid. For the interior point methods, since problem (1) can be cast either as a quadratic (QP) or as a conic program (CP), we show in Figure 2 the results for both formulations. Our approach compares favorably with the other methods on three problems of different sizes (see Figure 2). In addition, note that QP, CP and SG do not obtain sparse solutions, whereas ProxFlow does. We have also run ProxFlow and SG on a larger dataset: after the same running time, ProxFlow has reached a smaller relative duality gap than SG.⁶

⁵In our simulations, we use the commercial software Mosek, http://www.mosek.com/
⁶Due to the computational burden, QP and CP could not be run on every problem.
4.2 Background Subtraction
Following [8], we consider a background subtraction task. Given a sequence of frames from a fixed camera, we try to segment out foreground objects in a new image y. If we denote by y this image composed of n pixels, we model y as a sparse linear combination of m other images, gathered in a matrix X, plus an error term e in ℝⁿ, i.e., y ≈ Xw + e for some sparse vector w in ℝᵐ. This approach is reminiscent of [30] in the context of face recognition, where e is further made sparse to deal with small occlusions. The term Xw accounts for background parts present in both y and X, while e contains specific, or foreground, objects in y. The resulting optimization problem is a least-squares problem penalized by the ℓ1-norms of both w and e. In this formulation, the ℓ1-norm penalty on e does not take into account the fact that neighboring pixels in y are likely to share the same label (background or foreground), which may lead to scattered pieces of foreground and background regions (Figure 3). We therefore put an additional structured regularization term Ω(e) on e, where the groups in 𝒢 are all the overlapping squares of a fixed size on the image. A dataset with hand-segmented evaluation images is used to illustrate the effect of Ω.⁷ For simplicity, we use a single regularization parameter for the penalties on e, chosen to maximize the number of pixels matching the ground truth. We consider color images (three RGB channels per pixel). As shown in Figure 3, adding Ω improves the background subtraction results for the two tested images, by removing the scattered artifacts due to the lack of structural constraints of the ℓ1-norm, which encodes neither spatial nor color consistency.

⁷http://research.microsoft.com/enus/um/people/jckrumm/wallflower/testimages.htm

4.3 Multi-Task Learning of Hierarchical Structures
In [11], Jenatton et al. have recently proposed to use a hierarchical structured norm to learn dictionaries of natural image patches. Following their work, we seek to represent signals of dimension m as sparse linear combinations of elements from a dictionary X in ℝ^{m×p}. This can be expressed for every signal y as y ≈ Xa, for some sparse vector a in ℝ^p. In [11], the dictionary elements are embedded in a predefined tree T, via a particular instance of the structured norm Ω, which we refer to as Ω_tree, and call 𝒢_tree the underlying set of groups. In this case, each signal admits a sparse decomposition in the form of a subtree of dictionary elements.
Inspired by ideas from multi-task learning [16], we propose to learn the tree structure T by pruning irrelevant parts of a larger initial tree T₀. We achieve this by using an additional regularization term Ψ across the different decompositions, so that subtrees of T₀ will simultaneously be removed for all signals. In other words, the approach of [11] is extended by the following formulation:
min_{X,A} Σᵢ [ ½‖yⁱ − Xaⁱ‖₂² + λ₁Ω_tree(aⁱ) ] + λ₂Ψ(A),    (6)
where A = [a¹,…,aⁿ] is the matrix of decomposition coefficients in ℝ^{p×n}. The new regularization term operates on the rows of A and is defined as Ψ(A) = Σ_{j=1}^p ‖A_j‖_∞, where A_j denotes the j-th row of A.⁸ The overall penalty on A, which results from the combination of Ω_tree and Ψ, is itself an instance of Ω with general overlapping groups, as defined in Eq. (2).

⁸The simplified case where Ψ and Ω_tree are the ℓ1- and mixed ℓ1/ℓ2-norms [14] corresponds to [31].
To address problem (6), we use the same optimization scheme as [11], i.e., alternating between X and A, fixing one variable while optimizing with respect to the other. The task we consider is the denoising of natural image patches, with the same dataset and protocol as [11]. We study whether learning the hierarchy of the dictionary elements improves the denoising performance, compared to standard sparse coding (i.e., when Ω is the ℓ1-norm and λ₂ = 0) and the hierarchical dictionary learning of [11] based on predefined trees (i.e., λ₂ = 0). The dimensions of the training set impose to handle extremely large graphs. Since problem (6) is too large to be solved exactly sufficiently many times to select the regularization parameters rigorously, we use the following heuristics: we optimize mostly with the currently pruned tree held fixed (i.e., λ₂ = 0), and only prune the tree (i.e., λ₂ > 0) every few steps on a random subset of patches. We consider the same hierarchies as in [11]. The regularization parameter λ₁ is selected on the validation set of patches, for both sparse coding (Flat) and hierarchical dictionary learning (Tree). Starting from the tree giving the best performance (in this case the largest one, see Figure 4), we solve problem (6) following our heuristics, for increasing values of λ₂. As shown in Figure 4, there is a regime where our approach performs significantly better than the two other compared methods; no significant improvements were observed for lower levels of noise.

5 Conclusion
We have presented a new optimization framework for solving structured sparse problems involving sums of ℓ∞-norms of any (overlapping) groups of variables. Interestingly, this sheds new light on connections between sparse methods and the literature of network flow optimization. In particular, the proximal operator for the formulation we consider can be cast as a quadratic min-cost flow problem, for which we propose an efficient and simple algorithm. This makes it possible to use accelerated gradient methods. Several experiments demonstrate that our algorithm can be applied to a wide class of learning problems, which have not been addressed efficiently before within sparse methods.
Appendix A Equivalence to Canonical Graphs
Formally, the notion of equivalence between graphs can be summarized by the following lemma:
Lemma 2 (Equivalence to canonical graphs.)
Let G = (V, E, s, t) be the canonical graph corresponding to a group structure 𝒢 with weights (η_g)_{g∈𝒢}. Let G' = (V, E', s, t) be a graph sharing the same set of vertices, source and sink as G, but with a different arc set E'. We say that G' is equivalent to G if and only if the following conditions hold:

Arcs of E' outgoing from the source are the same as in E, with the same costs and capacities.

Arcs of E' going to the sink are the same as in E, with the same costs and capacities.

For every arc (g, j) in E, with j in g, there exists a unique path in E' from g to j with zero costs and infinite capacities on every arc of the path.

Conversely, if there exists a path in E' between a vertex g in V_gr and a vertex j in V_u, then there exists an arc (g, j) in E.
Then, the costs of the optimal min-cost flows on G and G' are the same. Moreover, the values of the optimal flow on the arcs (j, t), j in V_u, are the same on G and G'.
Proof. We first notice that on both G and G', the cost of a flow on the graph only depends on the flow on the arcs (j, t), j in V_u, which we have denoted by ξ̄.
We will prove that finding a feasible flow on G with a cost c is equivalent to finding a feasible flow on G' with the same cost c. We now use the concept of path flow, which is a flow vector carrying the same positive value on every arc of a directed path between two nodes of the graph. It intuitively corresponds to sending a positive amount of flow along a path of the graph.
According to the definition of graph equivalence introduced in the Lemma, it is easy to show that there is a bijection between the arcs in E and the paths in E' with positive capacities on every arc. Given now a feasible flow in G, we build a feasible flow on G' which is a sum of path flows. More precisely, for every arc a in E, we consider its equivalent path in E', with a path flow carrying the same amount of flow as a. Therefore, each arc in E' has a total amount of flow that is equal to the sum of the flows carried by the path flows going over it. It is also easy to show that this construction builds a flow on G' (capacity and conservation constraints are satisfied) and that this flow has the same cost as the original one.
Conversely, given a flow on G', we use a classical path flow decomposition (see Proposition 1.1 in [26]), saying that there exists a decomposition of this flow as a sum of path flows in E'. Using the bijection described above, we know that each path in the previous sum corresponds to a unique arc in E. We now build a flow in G, by associating to each path flow in the decomposition an arc in E carrying the same amount of flow. The flow of every other arc in G is set to zero. It is also easy to show that this builds a valid flow in G that has the same cost as the flow in G'.
Appendix B Convergence Analysis
We show in this section the correctness of Algorithm 1 for computing the proximal operator, and of Algorithm 2 for computing the dual norm Ω*.
B.1 Computation of the Proximal Operator
We now prove that our algorithm converges and that it finds the optimal solution of the proximal problem. This requires that we introduce the optimality conditions for problem (4) derived in [11], since our convergence proof essentially checks that these conditions are satisfied upon termination of the algorithm.
Note that these optimality conditions provide an intuitive view of our min-cost flow problem. Solving the min-cost flow problem is equivalent to sending the maximum amount of flow in the graph under the capacity constraints, while respecting the rule that the flow outgoing from a group g should always be directed to the variables j in g with maximum residual u_j − ξ̄_j.
Before proving the convergence and correctness of our algorithm, we also recall classical properties of minimum capacity cuts, which we use intensively in the proofs of this paper. The procedure computeFlow of our algorithm finds a minimum cut of a graph G, dividing the set V into two disjoint parts V⁺ and V⁻. V⁺ is by construction the set of nodes in V such that there exists a non-saturating path from s to V⁺, while all the paths from s to V⁻ are saturated. Conversely, arcs from V⁺ to V⁻ are all saturated, whereas there can be non-saturated arcs from V⁻ to V⁺. Moreover, the following properties hold:

There is no arc going from V⁺ to V⁻. Otherwise the value of the cut would be infinite (arcs inside V have infinite capacity by construction of our graph).

There is no flow going from V⁻ to V⁺ (see properties of the minimum cut [26]).

The cut goes through all arcs going from V⁺ to the sink t, and all arcs going from the source s to V⁻.
All these properties are illustrated on Figure 5.
Recall that we assume (cf. Section 3.1) that the scalars u_j are all nonnegative, and that we add nonnegativity constraints on ξ. With the optimality conditions of Lemma 3 in hand, we can show our first convergence result.
Proposition 1 (Convergence of Algorithm 1)
Algorithm 1 converges in a finite and polynomial number of operations.
Proof. Our algorithm recursively splits the graph into disjoint parts and processes each part recursively. The processing of one part requires an orthogonal projection onto an ℓ1-ball and a max-flow algorithm, which can both be computed in polynomial time. To prove that the procedure converges, it is sufficient to show that when the procedure computeFlow is called for a graph and computes a cut