Network Flow Algorithms for Structured Sparsity

08/31/2010, by Julien Mairal et al.

We consider a class of learning problems that involve a structured sparsity-inducing norm defined as the sum of ℓ_∞-norms over groups of variables. Whereas a lot of effort has been put in developing fast optimization methods when the groups are disjoint or embedded in a specific hierarchical structure, we address here the case of general overlapping groups. To this end, we show that the corresponding optimization problem is related to network flow optimization. More precisely, the proximal problem associated with the norm we consider is dual to a quadratic min-cost flow problem. We propose an efficient procedure which computes its solution exactly in polynomial time. Our algorithm scales up to millions of variables, and opens up a whole new range of applications for structured sparse models. We present several experiments on image and video data, demonstrating the applicability and scalability of our approach for various problems.

1 Introduction

Sparse linear models have become a popular framework for dealing with various unsupervised and supervised tasks in machine learning and signal processing. In such models, linear combinations of small sets of variables are selected to describe the data. Regularization by the ℓ_1-norm has emerged as a powerful tool for addressing this combinatorial variable selection problem, relying on both a well-developed theory (see [1] and references therein) and efficient algorithms [2, 3, 4].

The ℓ_1-norm primarily encourages sparse solutions, regardless of the potential structural relationships (e.g., spatial, temporal or hierarchical) existing between the variables. Much effort has recently been devoted to designing sparsity-inducing regularizations capable of encoding higher-order information about allowed patterns of non-zero coefficients [5, 6, 7, 8, 9], with successful applications in bioinformatics [6, 10], topic modeling [11] and computer vision [8].

By considering sums of norms of appropriate subsets, or groups, of variables, these regularizations control the sparsity patterns of the solutions. The underlying optimization problem is usually difficult, in part because it involves nonsmooth components. Proximal methods have proven to be effective in this context, essentially because of their fast convergence rates and their ability to deal with large problems [3, 4]. While the settings where the penalized groups of variables do not overlap [12] or are embedded in a tree-shaped hierarchy [11] have already been studied, sparsity-inducing regularizations of general overlapping groups have, to the best of our knowledge, never been considered within the proximal method framework.

This paper makes the following contributions:

  • It shows that the proximal operator associated with the structured norm we consider can be computed by solving a quadratic min-cost flow problem, thereby establishing a connection with the network flow optimization literature.

  • It presents a fast and scalable procedure for solving a large class of structured sparse regularized problems, which, to the best of our knowledge, have not been addressed efficiently before.

  • It shows that the dual norm of the sparsity-inducing norm we consider can also be evaluated efficiently, which enables us to compute duality gaps for the corresponding optimization problems.

  • It demonstrates that our method is relevant for various applications, from video background subtraction to estimation of hierarchical structures for dictionary learning of natural image patches.

2 Structured Sparse Models

We consider in this paper convex optimization problems of the form

(1)    min_{w ∈ ℝ^p} f(w) + λΩ(w),

where f is a convex differentiable function and Ω is a convex, nonsmooth, sparsity-inducing regularization function weighted by a parameter λ > 0. When one knows a priori that the solutions of this learning problem only have a few non-zero coefficients, Ω is often chosen to be the ℓ_1-norm, leading for instance to the Lasso [13]. When these coefficients are organized in groups, a penalty encoding explicitly this prior knowledge can improve the prediction performance and/or interpretability of the learned models [12, 14, 15, 16]. Such a penalty might for example take the form

(2)    Ω(w) ≜ Σ_{g ∈ 𝒢} η_g ‖w_g‖_∞,

where 𝒢 is a set of groups of indices, w_j denotes the j-th coordinate of w for j in {1, …, p}, the vector w_g in ℝ^{|g|} represents the coefficients of w indexed by g in 𝒢, and the scalars η_g are positive weights. A sum of ℓ_2-norms is also used in the literature [7], but the ℓ_∞-norm is piecewise linear, a property that we take advantage of in this paper. Note that when 𝒢 is the set of singletons of {1, …, p}, we recover the ℓ_1-norm.
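To make this definition concrete, here is a minimal Python/NumPy sketch (the function and variable names are ours, not the paper's) that evaluates Ω(w) for an arbitrary, possibly overlapping, collection of groups:

    import numpy as np

    def omega(w, groups, eta):
        # Structured norm: sum over groups g of eta[g] * ||w_g||_inf.
        # w: 1-D array of coefficients; groups: list of index lists (may overlap);
        # eta: list of positive weights, one per group.
        return sum(eta_g * np.max(np.abs(w[np.asarray(g)]))
                   for g, eta_g in zip(groups, eta))

    # Example: three variables and two overlapping groups.
    w = np.array([0.5, -2.0, 0.0])
    print(omega(w, groups=[[0, 1], [1, 2]], eta=[1.0, 1.0]))  # 2.0 + 2.0 = 4.0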

If 𝒢 is a more general partition of {1, …, p}, variables are selected in groups rather than individually. When the groups overlap, Ω is still a norm and sets groups of variables to zero together [5]. The latter setting was first considered for hierarchies [7, 10, 17], and then extended to general group structures [5] (other types of structured sparse models have also been introduced, either through a different norm [6] or through non-convex criteria [8, 9]). Solving Eq. (1) in this context becomes challenging and is the topic of this paper. Following [11], which tackled the case of hierarchical groups, we propose to approach this problem with proximal methods, which we now introduce.

2.1 Proximal Methods

In a nutshell, proximal methods can be seen as a natural extension of gradient-based techniques, and they are well suited to minimizing the sum of two convex terms, a smooth function f —continuously differentiable with Lipschitz-continuous gradient— and a potentially non-smooth function Ω (see [18] and references therein). At each iteration, the function f is linearized at the current estimate w_t and the so-called proximal problem has to be solved:

min_{w ∈ ℝ^p} f(w_t) + ∇f(w_t)^⊤(w − w_t) + λΩ(w) + (L/2)‖w − w_t‖_2².

The quadratic term keeps the solution in a neighborhood where the current linear approximation holds, and L > 0 is an upper bound on the Lipschitz constant of ∇f. This problem can be rewritten as

(3)    min_{w ∈ ℝ^p} ½‖u − w‖_2² + λΩ(w),

with u ≜ w_t − (1/L)∇f(w_t), where the factor 1/L has been absorbed into λ to simplify notation. We call proximal operator associated with the regularization λΩ the function that maps a vector u in ℝ^p onto the (unique, by strong convexity) solution w⋆ of Eq. (3). Simple proximal methods use w⋆ as the next iterate, but accelerated variants [3, 4] are also based on the proximal operator and require problem (3) to be solved exactly and efficiently to enjoy their fast convergence rates. Note that when Ω is the ℓ_1-norm, the solution of Eq. (3) is obtained by soft-thresholding [18].
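To illustrate this last point, here is a minimal NumPy sketch of one proximal-gradient (ISTA) step for the ℓ_1 case, where the proximal operator reduces to elementwise soft-thresholding; the quadratic loss and the fixed step size 1/L are illustrative choices of ours, not prescribed by the paper:

    import numpy as np

    def soft_threshold(u, thresh):
        # Proximal operator of thresh * ||.||_1: elementwise shrinkage towards zero.
        return np.sign(u) * np.maximum(np.abs(u) - thresh, 0.0)

    def ista_step(w, X, y, lam, L):
        # One proximal-gradient step for 0.5*||y - Xw||^2 + lam*||w||_1,
        # where L upper-bounds the Lipschitz constant of the gradient (e.g., ||X||^2).
        grad = X.T @ (X @ w - y)
        u = w - grad / L                      # gradient step on the smooth part
        return soft_threshold(u, lam / L)     # proximal step on the nonsmooth part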

The approach we develop in the rest of this paper extends [11] to the case of general overlapping groups when Ω is a weighted sum of ℓ_∞-norms, broadening the application of these regularizations to a wider spectrum of problems (for hierarchies, the approach of [11] also applies to the case where Ω is a weighted sum of ℓ_2-norms).

3 A Quadratic Min-Cost Flow Formulation

In this section, we show that a convex dual of problem (3) for general overlapping groups 𝒢 can be reformulated as a quadratic min-cost flow problem. We propose an efficient algorithm to solve it exactly, as well as a related algorithm to compute the dual norm of Ω. We start by considering the dual formulation of problem (3) introduced in [11], for the case where Ω is a weighted sum of ℓ_∞-norms:

Lemma 1 (Dual of the proximal problem [11])

Given u in ℝ^p, consider the problem

(4)    min_{ξ ∈ ℝ^{p×|𝒢|}} ½‖u − Σ_{g ∈ 𝒢} ξ^g‖_2²   s.t.   ∀g ∈ 𝒢, ‖ξ^g‖_1 ≤ λη_g  and  ξ_j^g = 0 if j ∉ g,

where ξ is in ℝ^{p×|𝒢|}, and ξ_j^g denotes the j-th coordinate of the vector ξ^g. Then, every solution ξ⋆ of Eq. (4) satisfies w⋆ = u − Σ_{g ∈ 𝒢} ξ⋆^g, where w⋆ is the solution of Eq. (3).

Without loss of generality, we assume from now on that the scalars u_j are all non-negative, and we constrain the entries of ξ to be non-negative. (Indeed, let ξ⋆ denote a solution of Eq. (4). The optimality conditions of Eq. (4) derived in [11] show that for all j in {1, …, p}, the signs of the non-zero coefficients ξ_j^{g⋆} for g in 𝒢 are the same as the signs of the entries u_j. To solve Eq. (4), one can therefore flip the signs of the negative variables u_j, then solve the modified dual formulation with non-negative variables, which gives the magnitude of the entries ξ_j^{g⋆}, their signs being known.) We now introduce a graph modeling of problem (4).

3.1 Graph Model

Let G = (V, E, s, t) be a directed graph, where V is a set of vertices, E ⊆ V × V a set of arcs, s a source, and t a sink. Each arc in E is endowed with a cost and a non-negative capacity. A flow is a non-negative function on arcs that satisfies capacity constraints on all arcs (the value of the flow on an arc is less than or equal to the arc capacity) and conservation constraints on all vertices (the sum of incoming flows at a vertex is equal to the sum of outgoing flows), except for the source and the sink.
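As a concrete restatement of these two constraints, the following small Python helper (our own encoding of arcs as dictionaries, not part of the paper) checks whether a candidate flow is feasible:

    def is_feasible_flow(flow, capacity, nodes, source, sink, tol=1e-10):
        # flow and capacity are dicts mapping arcs (tail, head) to non-negative numbers.
        # Capacity constraints: 0 <= flow <= capacity on every arc.
        for arc, f in flow.items():
            if f < -tol or f > capacity.get(arc, 0.0) + tol:
                return False
        # Conservation constraints at every vertex except the source and the sink.
        for v in nodes:
            if v in (source, sink):
                continue
            inflow = sum(f for (a, b), f in flow.items() if b == v)
            outflow = sum(f for (a, b), f in flow.items() if a == v)
            if abs(inflow - outflow) > tol:
                return False
        return True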

We introduce a canonical graph G associated with our optimization problem, uniquely characterized by the following construction (a small code sketch of this construction is given after the list):
 (i) V is the union of two sets of vertices V_u and V_gr, where V_u contains exactly one vertex for each index j in {1, …, p}, and V_gr contains exactly one vertex for each group g in 𝒢. We thus have |V| = p + |𝒢|. For simplicity, we identify groups and indices with the vertices of the graph.
 (ii) For every group g in 𝒢, E contains an arc (s, g). These arcs have capacity λη_g and zero cost.
 (iii) For every group g in 𝒢, and every index j in g, E contains an arc (g, j) with zero cost and infinite capacity. We denote by ξ_j^g the flow on this arc.
 (iv) For every index j in {1, …, p}, E contains an arc (j, t) with infinite capacity and a cost ½(u_j − ξ̄_j)², where ξ̄_j is the flow on (j, t). Note that by flow conservation, we necessarily have ξ̄_j = Σ_{g ∈ 𝒢} ξ_j^g.
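The following Python sketch spells out steps (i)-(iv) as a plain arc list; the vertex labels and the INF placeholder are our own encoding, not the paper's:

    INF = float('inf')

    def canonical_graph(u, groups, eta, lam):
        # Vertices: 's', 't', ('grp', g) for each group, ('var', j) for each index.
        # Each arc is (tail, head, capacity); costs are zero everywhere except on the
        # arcs (j, t), which carry the quadratic cost 0.5 * (u[j] - flow_j)^2.
        arcs = []
        for g, (idx, eta_g) in enumerate(zip(groups, eta)):
            arcs.append(('s', ('grp', g), lam * eta_g))     # (ii) capacity lam*eta_g
            for j in idx:
                arcs.append((('grp', g), ('var', j), INF))  # (iii) infinite capacity
        for j in range(len(u)):
            arcs.append((('var', j), 't', INF))             # (iv) quadratic-cost arc
        return arcs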

Examples of canonical graphs are given in Figures 1(a)-(d). The flows ξ_j^g associated with G can now be identified with the variables of problem (4): indeed, the sum of the costs on the arcs leading to the sink is equal to the objective function of (4), while the capacities of the arcs (s, g) match the constraints on each group. This shows that finding a flow minimizing the sum of the costs on such a graph is equivalent to solving problem (4).

When some groups are included in others, the canonical graph can be simplified to yield a graph with a smaller number of edges. Specifically, if g and h are groups with h ⊂ g, the arcs (g, j) for j in h, carrying a flow ξ_j^g, can be removed and replaced by a single arc (g, h) of infinite capacity and zero cost, carrying the flow Σ_{j ∈ h} ξ_j^g. This simplification is illustrated in Figure 1(d), with a graph equivalent to the one of Figure 1(c). It does not change the optimal value of ξ̄, which is the quantity of interest for computing the optimal primal variable w⋆. We present in Appendix A a formal definition of equivalent graphs. These simplifications are useful in practice, since they reduce the number of edges in the graph and improve the speed of the algorithms we are now going to present.

Figure 1: Graph representation of simple proximal problems with different group structures 𝒢 (panels (a)-(d)). The three indices are represented as grey squares, and the groups g in 𝒢 as red discs. The source s is linked to every group g with respective maximum capacity λη_g and zero cost. Each variable u_j is linked to the sink t, with an infinite capacity, and with a cost ½(u_j − ξ̄_j)². All other arcs in the graph have zero cost and infinite capacity; they represent inclusion relations in-between groups, and between groups and variables. The graphs (c) and (d) correspond to a special case of tree-structured hierarchy in the sense of [11]; their min-cost flow problems are equivalent.

3.2 Computation of the Proximal Operator

Quadratic min-cost flow problems have been well studied in the operations research literature [19]. One of the simplest cases, where 𝒢 contains a single group as in Figure 1(a), can be solved by an orthogonal projection onto an ℓ_1-ball of radius λη_g. It has been shown, both in machine learning [20] and operations research [19, 21], that such a projection can be computed in O(p) operations. When the group structure is a tree as in Figure 1(d), strategies developed in the two communities are also similar [11, 19], and solve the problem in O(pd) operations, where d is the depth of the tree.
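For reference, here is a standard O(p log p) sort-based sketch of the Euclidean projection onto an ℓ_1-ball of radius r; the linear-time algorithms of [20, 21] avoid the sort but follow the same thresholding principle:

    import numpy as np

    def project_l1_ball(u, radius):
        # Euclidean projection of u onto {w : ||w||_1 <= radius}.
        if np.abs(u).sum() <= radius:
            return u.copy()
        a = np.sort(np.abs(u))[::-1]                 # magnitudes, sorted decreasingly
        cssv = np.cumsum(a)
        k = np.arange(1, len(u) + 1)
        rho = np.nonzero(a > (cssv - radius) / k)[0][-1]
        theta = (cssv[rho] - radius) / (rho + 1.0)   # optimal shrinkage level
        return np.sign(u) * np.maximum(np.abs(u) - theta, 0.0)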

The general case of overlapping groups is more difficult. Hochbaum and Hong have shown in [19] that quadratic min-cost flow problems can be reduced to a specific parametric max-flow problem, for which an efficient algorithm exists [22]. (By definition, a parametric max-flow problem consists in solving, for every value of a parameter, a max-flow problem on a graph whose arc capacities depend on this parameter.) While this approach could be used to solve Eq. (4), it ignores the fact that our graphs have non-zero costs only on arcs leading to the sink. To take advantage of this specificity, we propose the dedicated Algorithm 1. Our method shares some similarities with a simplified version of [22] presented in [23], namely a divide-and-conquer strategy. Nonetheless, we performed an empirical comparison, described in Appendix D, which shows that our dedicated algorithm has significantly better performance in practice.

1:  Inputs: u ∈ ℝ^p, a set of groups 𝒢, positive weights (η_g)_{g ∈ 𝒢}, and λ (regularization parameter).
2:  Build the initial graph G_0 = (V_0, E_0, s, t) as explained in Section 3.2.
3:  Compute the optimal flow: ξ̄ ← computeFlow(V_0, u).
4:  Return: w = u − ξ̄ (optimal solution of the proximal problem).

Function computeFlow(V = V_u ∪ V_gr, u)

1:  Projection step: γ ← arg min_γ Σ_{j ∈ V_u} ½(u_j − γ_j)²  s.t.  Σ_{j ∈ V_u} γ_j ≤ λ Σ_{g ∈ V_gr} η_g.
2:  For all nodes j in V_u, set γ_j to be the capacity of the arc (j, t).
3:  Max-flow step: Update (ξ̄_j)_{j ∈ V_u} by computing a max-flow on the graph (V, E, s, t).
4:  if ∃ j ∈ V_u such that ξ̄_j ≠ γ_j then
5:     Denote by (s, V⁺) and (V⁻, t) the two disjoint subsets of (V, s, t) separated by the minimum (s, t)-cut of the graph, and remove the arcs between V⁺ and V⁻. Call E⁺ and E⁻ the two remaining disjoint subsets of E corresponding to V⁺ and V⁻.
6:     (ξ̄_j)_{j ∈ V⁺_u} ← computeFlow(V⁺, (u_j)_{j ∈ V⁺_u}).
7:     (ξ̄_j)_{j ∈ V⁻_u} ← computeFlow(V⁻, (u_j)_{j ∈ V⁻_u}).
8:  end if
9:  Return: (ξ̄_j)_{j ∈ V_u}.
Algorithm 1 Computation of the proximal operator for overlapping groups.

Informally, computeFlow(V_0, u) returns the optimal flow vector ξ̄, proceeding as follows. The function first solves a relaxed version of problem (4), obtained by replacing the sum of the vectors ξ^g by a single vector ξ̄ whose ℓ_1-norm should be less than, or equal to, the sum of the constraints λη_g on the vectors ξ^g. The optimal vector γ of this relaxation therefore gives a lower bound on the optimal cost. Then, the maximum-flow step [24] tries to find a feasible flow such that the vector ξ̄ matches γ. If ξ̄ = γ, then the cost of the flow reaches the lower bound, and the flow is optimal. If ξ̄ ≠ γ, the lower bound cannot be reached, and we construct a minimum (s, t)-cut of the graph [25] that defines two disjoint sets of nodes V⁺ and V⁻; V⁺ is the part of the graph that can potentially receive more flow from the source, whereas all arcs linking s to V⁻ are saturated. The properties of a min (s, t)-cut [26] imply that there are no arcs from V⁺ to V⁻ (arcs inside V have infinite capacity by construction), and that there is no flow on arcs from V⁻ to V⁺. At this point, it is possible to show that the value of the optimal min-cost flow on these arcs is also zero. Thus, removing them yields an equivalent optimization problem, which can be decomposed into two independent problems of smaller sizes and solved recursively by the calls to computeFlow(V⁺, (u_j)_{j ∈ V⁺_u}) and computeFlow(V⁻, (u_j)_{j ∈ V⁻_u}). Note that when Ω is the ℓ_1-norm, our algorithm solves problem (4) during the first projection step in line 1 and stops. A formal proof of correctness of Algorithm 1 and further details are relegated to Appendix B.
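The recursion can be summarized by the following structural sketch; it is not the paper's implementation, and max_flow_min_cut is a placeholder for any exact max-flow routine that returns the flows on the arcs (j, t) together with the two sides of a minimum (s, t)-cut:

    import numpy as np

    def compute_flow(u_sub, grp_ids, eta, lam, project_l1, max_flow_min_cut):
        # u_sub   : values u_j of the variables handled by this call
        # grp_ids : identifiers of the group nodes present in this call
        # project_l1(v, r): Euclidean projection of v onto the l1-ball of radius r
        # max_flow_min_cut(u_sub, grp_ids, gamma): placeholder returning the flows
        #   xi_bar on the arcs (j, t), a boolean mask over the variables and the
        #   subset of grp_ids lying on the source side of a minimum (s, t)-cut.
        # 1) Projection step: a single vector gamma with ||gamma||_1 <= lam * sum eta_g.
        gamma = project_l1(u_sub, lam * sum(eta[g] for g in grp_ids))
        # 2) Max-flow step, with capacities gamma on the arcs (j, t).
        xi_bar, var_plus, grp_plus = max_flow_min_cut(u_sub, grp_ids, gamma)
        if np.allclose(xi_bar, gamma):
            return xi_bar                     # the lower bound is attained: optimal
        # 3) Otherwise split along the cut and solve the two smaller problems.
        grp_minus = [g for g in grp_ids if g not in grp_plus]
        out = np.empty_like(u_sub)
        out[var_plus] = compute_flow(u_sub[var_plus], grp_plus, eta, lam,
                                     project_l1, max_flow_min_cut)
        out[~var_plus] = compute_flow(u_sub[~var_plus], grp_minus, eta, lam,
                                      project_l1, max_flow_min_cut)
        return out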

The approach of [19, 22] is guaranteed to have the same worst-case complexity as a single max-flow computation. However, we have experimentally observed a significant discrepancy between the worst-case and empirical complexities for these flow problems, essentially because the empirical cost of each max-flow is significantly smaller than its theoretical cost. Despite the fact that the worst-case guarantee of our algorithm is weaker than theirs (up to a multiplicative factor), it is more adapted to the structure of our graphs and has proven to be much faster in our experiments (see Appendix D).

Some implementation details are crucial to the efficiency of the algorithm:

  • Exploiting maximal connected components: When there exists no arc between two subsets of V, it is possible to process them independently to solve the global min-cost flow problem. To that effect, before calling the function computeFlow(V_0, u), we look for the maximal connected components V_1, …, V_N of the graph and call sequentially the procedure computeFlow(V_i, (u_j)_{j ∈ V_i ∩ V_u}) for i in {1, …, N} (see the union-find sketch after this list).

  • Efficient max-flow algorithm: We have implemented the “push-relabel” algorithm of [24] to solve our max-flow problems, using classical heuristics that significantly speed it up in practice (see [24, 27]). Our implementation uses the so-called “highest-active” vertex selection rule together with global and gap heuristics (see [24, 27]), and has a worst-case complexity of O(|V|²√|E|) for a graph G = (V, E, s, t). This algorithm leverages the concept of pre-flow, which relaxes the definition of a flow and allows vertices to have a positive excess.

  • Using flow warm-restarts: The max-flow steps can be initialized with any valid pre-flow, enabling warm restarts when the max-flow routine is called several times, as happens in our recursive algorithm.

  • Improved projection step: The first line of the procedure computeFlow can be replaced by γ ← arg min_γ Σ_{j ∈ V_u} ½(u_j − γ_j)²  s.t.  Σ_{j ∈ V_u} γ_j ≤ λ Σ_{g ∈ V_gr} η_g and γ_j ≤ λ Σ_{g ∋ j} η_g. The idea is that the structure of the graph will not allow ξ̄_j to be greater than λ Σ_{g ∋ j} η_g after the max-flow step. Adding these additional constraints leads to better performance when the graph is not well balanced. This modified projection step can still be computed in linear time [21].
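Regarding the first point above (maximal connected components), here is a minimal union-find sketch in Python (names are ours) that partitions the variables according to the groups:

    def connected_components(p, groups):
        # Partition the indices 0..p-1 into maximal connected components, where two
        # variables are connected whenever they appear together in some group.
        parent = list(range(p))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]   # path halving
                i = parent[i]
            return i

        def union(i, j):
            parent[find(i)] = find(j)

        for g in groups:
            for j in g[1:]:
                union(g[0], j)
        comps = {}
        for j in range(p):
            comps.setdefault(find(j), []).append(j)
        return list(comps.values())

    # Example: groups {0,1} and {3,4} yield the components [0,1], [2], [3,4].
    print(connected_components(5, [[0, 1], [3, 4]]))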

3.3 Computation of the Dual Norm

The dual norm Ω* of Ω, defined for any vector κ in ℝ^p by Ω*(κ) ≜ max_{Ω(z) ≤ 1} z^⊤κ, is a key quantity to study sparsity-inducing regularizations [5, 17, 28]. We use it here to monitor the convergence of the proximal method through a duality gap, and hence to define a proper optimality criterion for problem (1). We denote by f* the Fenchel conjugate of f [29], defined by f*(κ) ≜ sup_z [z^⊤κ − f(z)]. The duality gap for problem (1) can be derived from standard Fenchel duality arguments [29]: it is equal to f(w) + f*(−κ) + λΩ(w) for any primal variable w and any dual variable κ satisfying Ω*(κ) ≤ λ. Therefore, evaluating the duality gap requires computing Ω* efficiently in order to find a feasible dual variable κ. This is equivalent to solving another network flow problem, based on the following variational formulation:

(5)    Ω*(κ) = min_{ξ ∈ ℝ^{p×|𝒢|}, τ ∈ ℝ₊} τ   s.t.   Σ_{g ∈ 𝒢} ξ^g = κ  and  ∀g ∈ 𝒢, ‖ξ^g‖_1 ≤ τη_g with ξ_j^g = 0 if j ∉ g.

In the network problem associated with (5), the capacities on the arcs (s, g), g ∈ 𝒢, are set to τη_g, and the capacities on the arcs (j, t), j in {1, …, p}, are fixed to κ_j. Solving problem (5) amounts to finding the smallest value of τ such that there exists a flow saturating all the capacities κ_j on the arcs leading to the sink t (i.e., ξ̄ = κ). Equation (5) and the algorithm below are proven to be correct in Appendix B.

1:  Inputs: κ ∈ ℝ^p, a set of groups 𝒢, positive weights (η_g)_{g ∈ 𝒢}.
2:  Build the initial graph G_0 = (V_0, E_0, s, t) as explained in Section 3.3.
3:  τ ← dualNorm(V_0, κ).
4:  Return: τ (value of the dual norm).

Function dualNorm(V = V_u ∪ V_gr, κ)

1:  τ ← (Σ_{j ∈ V_u} κ_j)/(Σ_{g ∈ V_gr} η_g) and set the capacities of the arcs (s, g) to τη_g for all g in V_gr.
2:  Max-flow step: Update (ξ̄_j)_{j ∈ V_u} by computing a max-flow on the graph (V, E, s, t).
3:  if ∃ j ∈ V_u such that ξ̄_j ≠ κ_j then
4:     Define (V⁺, V⁻) as in Algorithm 1, and set τ ← dualNorm(V⁻, (κ_j)_{j ∈ V⁻_u}).
5:  end if
6:  Return: τ.
Algorithm 2 Computation of the dual norm.
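As a sanity check for Algorithm 2, note that in the special case where the groups do not overlap, the dual norm has the closed form Ω*(κ) = max_{g ∈ 𝒢} ‖κ_g‖_1 / η_g (standard duality between block ℓ_1/ℓ_∞ and ℓ_∞/ℓ_1 norms; this shortcut is not the general algorithm above), which a few lines of NumPy evaluate directly:

    import numpy as np

    def dual_norm_disjoint(kappa, groups, eta):
        # Dual norm of Omega(w) = sum_g eta_g * ||w_g||_inf when the groups are
        # disjoint: Omega*(kappa) = max_g ||kappa_g||_1 / eta_g.
        return max(np.abs(kappa[np.asarray(g)]).sum() / eta_g
                   for g, eta_g in zip(groups, eta))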

4 Applications and Experiments

Our experiments use the algorithm of [4] based on our proximal operator, with the weights η_g all set to one. We present this algorithm in more detail in Appendix C.

4.1 Speed Comparison

We compare our method (ProxFlow) with two generic optimization techniques, namely a subgradient descent (SG) and an interior point method (for the latter, we use the commercial software Mosek, http://www.mosek.com/), on a regularized linear regression problem. Both SG and ProxFlow are implemented in C++, and experiments are run on a single-core CPU. We consider a design matrix X in ℝ^{n×p} built from overcomplete dictionaries of discrete cosine transforms (DCT), which are naturally organized on one- or two-dimensional grids and display local correlations. The following families of groups 𝒢 using this spatial information are thus considered: (1) every contiguous sequence of a fixed length for the one-dimensional case, and (2) every square of a fixed side length in the two-dimensional setting. We generate vectors y in ℝ^n according to the linear model y = Xw_0 + ε, where ε is additive Gaussian noise. The vector w_0 has a small percentage of nonzero components, randomly selected while respecting the structure of 𝒢, with values generated uniformly at random.

In our experiments, the regularization parameter λ is chosen to achieve this level of sparsity. For SG, we take the step size at iteration k to be equal to a/(k + b), where (a, b) are the best parameters selected over a grid. For the interior point methods, since problem (1) can be cast either as a quadratic program (QP) or as a conic program (CP), we show in Figure 2 the results for both formulations. Our approach compares favorably with the other methods on three problems of different sizes, see Figure 2. In addition, note that QP, CP and SG do not obtain sparse solutions, whereas ProxFlow does. We have also run ProxFlow and SG on a larger dataset: after several hours of computation, ProxFlow had reached a much smaller relative duality gap than SG. (Due to the computational burden, QP and CP could not be run on every problem.)

Figure 2: Speed comparisons: distance to the optimal primal value versus CPU time (log-log scale). Due to the computational burden, QP and CP could not be run on every problem.

4.2 Background Subtraction

Following [8], we consider a background subtraction task. Given a sequence of frames from a fixed camera, we try to segment out foreground objects in a new image. If we denote by y ∈ ℝ^n this image composed of n pixels, we model y as a sparse linear combination of p other images X ∈ ℝ^{n×p}, plus an error term e in ℝ^n, i.e., y ≈ Xw + e for some sparse vector w in ℝ^p. This approach is reminiscent of [30] in the context of face recognition, where e is further made sparse to deal with small occlusions. The term Xw accounts for background parts present in both y and X, while e contains specific, or foreground, objects in y. The resulting optimization problem is

min_{w, e} ½‖y − Xw − e‖_2² + λ_1‖w‖_1 + λ_2‖e‖_1.

In this formulation, the ℓ_1-norm penalty on e does not take into account the fact that neighboring pixels in y are likely to share the same label (background or foreground), which may lead to scattered pieces of foreground and background regions (Figure 3). We therefore add to the objective an additional structured regularization term Ω(e), of the form of Eq. (2), where the groups in 𝒢 are all the small overlapping square patches of pixels in the image. A dataset with hand-segmented evaluation images is used to illustrate the effect of Ω (http://research.microsoft.com/en-us/um/people/jckrumm/wallflower/testimages.htm). For simplicity, we use a single regularization parameter, i.e., λ_1 = λ_2, chosen to maximize the number of pixels matching the ground truth. We consider color images with n pixels (times 3 for the RGB channels). As shown in Figure 3, adding Ω improves the background subtraction results for the two tested images, by removing the scattered artifacts due to the lack of structural constraints of the ℓ_1-norm, which encodes neither spatial nor color consistency.
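A hedged sketch of how this formulation can be optimized with the proximal machinery of Section 2.1: the loop below is a plain (non-accelerated) proximal-gradient scheme written by us for illustration, and prox_structured is a placeholder for the proximal operator of the combined ℓ_1 + Ω penalty on e (weighted here by the single parameter λ_2 for simplicity), which is precisely where Algorithm 1 would be plugged in:

    import numpy as np

    def soft_threshold(u, t):
        return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

    def background_subtraction(y, X, lam1, lam2, L, prox_structured, n_iter=100):
        # Minimizes 0.5*||y - X w - e||^2 + lam1*||w||_1 + lam2*(||e||_1 + Omega(e)).
        # prox_structured(v, t): proximal operator of t*(||.||_1 + Omega(.)).
        # L: upper bound on the Lipschitz constant of the smooth part.
        w = np.zeros(X.shape[1])
        e = np.zeros_like(y, dtype=float)
        for _ in range(n_iter):
            r = X @ w + e - y                                # residual of the smooth term
            w = soft_threshold(w - (X.T @ r) / L, lam1 / L)  # prox step on w (l1)
            e = prox_structured(e - r / L, lam2 / L)         # prox step on e (Algorithm 1)
        return w, e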

Figure 3: The original image (column 1), the background (i.e., Xw) reconstructed by our method (column 2), and the foreground (i.e., the sparsity pattern of e as a mask on the original image) detected without (column 3) and with (column 4) the structured term Ω. The rightmost column is another foreground found with Ω, on a different image, with the same values of the regularization parameters as for the previous image. For the top left image, the percentage of pixels matching the ground truth is 98.8% with Ω, 87.0% without. As for the bottom left image, the result is 93.8% with Ω, 90.4% without (best seen in color).

4.3 Multi-Task Learning of Hierarchical Structures

In [11], Jenatton et al. have recently proposed to use a hierarchical structured norm to learn dictionaries of natural image patches. Following their work, we seek to represent signals of dimension m as sparse linear combinations of elements from a dictionary D in ℝ^{m×p}. This can be expressed for all signals x_i as x_i ≈ Dα_i, for some sparse vector α_i in ℝ^p. In [11], the dictionary elements are embedded in a predefined tree T, via a particular instance of the structured norm Ω, which we refer to as Ω_tree, with 𝒢_tree its underlying set of groups. In this case, each signal x_i admits a sparse decomposition in the form of a subtree of dictionary elements.

Inspired by ideas from multi-task learning [16], we propose to learn the tree structure T by pruning irrelevant parts of a larger initial tree T_0. We achieve this by using an additional regularization term Ψ across the different decompositions, so that subtrees of T_0 will simultaneously be removed for all signals x_i. In other words, the approach of [11] is extended by the following formulation:

(6)    min_{D, A} Σ_i [ ½‖x_i − Dα_i‖_2² + λ_1 Ω_tree(α_i) ] + λ_2 Ψ(A),

where A = [α_1, …, α_n] is the matrix of decomposition coefficients in ℝ^{p×n}. The new regularization term operates on the rows of A and is defined as Ψ(A) ≜ Σ_{j=1}^{p} ‖A^j‖_∞, where A^j denotes the j-th row of A. (The simplified case where Ψ and Ω are respectively the ℓ_1- and mixed ℓ_1/ℓ_2-norms [14] corresponds to [31].) The overall penalty on A, which results from the combination of Ω_tree and Ψ, is itself an instance of Ω with general overlapping groups, as defined in Eq. (2).

To address problem (6), we use the same optimization scheme as [11], i.e., alternating between D and A, fixing one variable while optimizing with respect to the other. The task we consider is the denoising of natural image patches, with the same dataset and protocol as [11]. We study whether learning the hierarchy of the dictionary elements improves the denoising performance, compared to standard sparse coding (i.e., when Ω is the ℓ_1-norm and λ_2 = 0) and the hierarchical dictionary learning of [11] based on predefined trees (i.e., λ_2 = 0). The dimensions of the training set and of the dictionaries impose handling extremely large graphs, with a very large number of arcs. Since problem (6) is too large to be solved exactly sufficiently many times to select the regularization parameters rigorously, we use the following heuristics: we optimize mostly with the currently pruned tree held fixed (i.e., λ_2 = 0), and only prune the tree (i.e., λ_2 > 0) every few steps on a random subset of patches. We consider the same hierarchies as in [11], involving various numbers of dictionary elements. The regularization parameter λ_1 is selected on a validation set of patches, for both standard sparse coding (Flat) and hierarchical dictionary learning (Tree). Starting from the tree giving the best performance (in this case the largest one, see Figure 4), we solve problem (6) following our heuristics, for increasing values of λ_2. As shown in Figure 4, there is a regime where our approach performs significantly better than the two other compared methods. The standard deviation of the noise is relatively high; no significant improvements were observed for lower levels of noise.

Figure 4: Left: Hierarchy obtained by pruning a larger initial tree of dictionary elements. Right: Mean square error versus dictionary size. The error bars represent two standard deviations, based on three runs.

5 Conclusion

We have presented a new optimization framework for solving structured sparse problems involving sums of ℓ_∞-norms of any (overlapping) groups of variables. Interestingly, this sheds new light on connections between sparse methods and the literature of network flow optimization. In particular, the proximal operator for the formulation we consider can be cast as a quadratic min-cost flow problem, for which we propose an efficient and simple algorithm. This, in turn, allows the use of accelerated gradient methods. Several experiments demonstrate that our algorithm can be applied to a wide class of learning problems, which have not been addressed before within sparse methods.

Appendix A Equivalence to Canonical Graphs

Formally, the notion of equivalence between graphs can be summarized by the following lemma:

Lemma 2 (Equivalence to canonical graphs.)

Let G = (V, E, s, t) be the canonical graph corresponding to a group structure 𝒢 with weights (η_g)_{g ∈ 𝒢}. Let G′ = (V, E′, s, t) be a graph sharing the same set of vertices, source and sink as G, but with a different arc set E′. We say that G′ is equivalent to G if and only if the following conditions hold:

  • Arcs of E′ outgoing from the source are the same as in E, with the same costs and capacities.

  • Arcs of E′ going to the sink are the same as in E, with the same costs and capacities.

  • For every arc (g, j) in E, with g in V_gr and j in V_u, there exists a unique path in E′ from g to j with zero costs and infinite capacities on every arc of the path.

  • Conversely, if there exists a path in E′ between a vertex g in V_gr and a vertex j in V_u, then there exists an arc (g, j) in E.

Then, the costs of the optimal min-cost flows on G and G′ are the same. Moreover, the values of the optimal flow on the arcs (j, t), j in V_u, are the same on G and G′.

Proof.  We first notice that on both G and G′, the cost of a flow on the graph only depends on the flows on the arcs (j, t), j in V_u, which we have denoted by ξ̄.

We will prove that finding a feasible flow on G with a cost c is equivalent to finding a feasible flow on G′ with the same cost c. We now use the concept of path flow, which is a flow vector carrying the same positive value on every arc of a directed path between two nodes of the graph. It intuitively corresponds to sending a positive amount of flow along a path of the graph.

According to the definition of graph equivalence introduced in the Lemma, it is easy to show that there is a bijection between the arcs in E and the paths in E′ with positive capacities on every arc. Given now a feasible flow on G, we build a feasible flow on G′ as a sum of path flows. More precisely, for every arc a in E, we consider its equivalent path in E′, with a path flow carrying the same amount of flow as a. Each arc in E′ then carries a total amount of flow equal to the sum of the flows carried by the path flows going over it. It is easy to show that this construction builds a valid flow on G′ (capacity and conservation constraints are satisfied) and that this flow has the same cost as the original flow on G.

Conversely, given a flow on G′, we use a classical path-flow decomposition (see Proposition 1.1 in [26]), which states that this flow can be decomposed as a sum of path flows in E′. Using the bijection described above, we know that each path in the previous sum corresponds to a unique arc in E. We now build a flow on G by associating to each path flow in the decomposition an arc in E carrying the same amount of flow. The flow of every other arc in E is set to zero. It is again easy to show that this builds a valid flow on G that has the same cost as the flow on G′.

Appendix B Convergence Analysis

We show in this section the correctness of Algorithm 1 for computing the proximal operator, and of Algorithm 2 for computing the dual norm Ω*.

B.1 Computation of the Proximal Operator

We now prove that our algorithm converges and that it finds the optimal solution of the proximal problem. This requires that we introduce the optimality conditions for problem (4) derived in [11], since our convergence proof essentially checks that these conditions are satisfied upon termination of the algorithm.

Lemma 3 (Optimality conditions of problem (4), [11])

The primal-dual variables (w, ξ) are respectively solutions of the primal (3) and dual (4) problems if and only if the dual variable ξ is feasible for problem (4) and

w = u − Σ_{g ∈ 𝒢} ξ^g,   and   ∀g ∈ 𝒢: either w_g = 0, or ‖ξ^g‖_1 = λη_g with ξ_j^g w_j ≥ 0 for all j in g and ξ_j^g = 0 whenever |w_j| < ‖w_g‖_∞.

Note that these optimality conditions provide an intuitive view of our min-cost flow problem. Solving the min-cost flow problem is equivalent to sending the maximum amount of flow in the graph under the capacity constraints, while respecting the rule that the flow outgoing from a group g should always be directed to the variables j of g with maximum residual u_j − ξ̄_j.

Before proving the convergence and correctness of our algorithm, we also recall classical properties of minimum capacity cuts, which we use intensively in the proofs of this paper. The procedure computeFlow of our algorithm finds a minimum (s, t)-cut of a graph G = (V, E, s, t), dividing the set V into two disjoint parts V⁺ and V⁻. V⁺ is by construction the set of nodes in V such that there exists a non-saturated path from s to them, while all the paths from s to V⁻ are saturated. Conversely, arcs from V⁺ to t are all saturated, whereas there can be non-saturated arcs from V⁻ to t. Moreover, the following properties hold:

  • There is no arc going from V⁺ to V⁻. Otherwise the value of the cut would be infinite (arcs inside V have infinite capacity by construction of our graph).

  • There is no flow going from V⁻ to V⁺ (see properties of the minimum (s, t)-cut [26]).

  • The cut goes through all arcs going from s to V⁻, and through all arcs going from V⁺ to t.

All these properties are illustrated in Figure 5.

Figure 5: Cut computed by our algorithm, separating V into V⁺ = V_u⁺ ∪ V_gr⁺ (source side) and V⁻ = V_u⁻ ∪ V_gr⁻ (sink side). Arcs going from s to V⁻ are saturated, as well as arcs going from V⁺ to t. Saturated arcs are in bold. Arcs with zero flow are dotted.

Recall that we assume (cf. Section 3.1) that the scalars u_j are all non-negative, and that we add non-negativity constraints on ξ. With the optimality conditions of Lemma 3 in hand, we can show our first convergence result.

Proposition 1 (Convergence of Algorithm 1)

Algorithm 1 converges in a finite and polynomial number of operations.

Proof.  Our algorithm recursively splits the graph into disjoint parts and processes each part recursively. The processing of one part requires an orthogonal projection onto an ℓ_1-ball and a max-flow computation, both of which can be done in polynomial time. To prove that the procedure converges, it is sufficient to show that when the procedure computeFlow is called for a graph and computes a cut