1 Introduction
The global minimum cut problem in graphs (MinCut) is well-known and extensively studied. Given an undirected graph with nonnegative edge capacities, the goal is to remove a minimum capacity set of edges such that the residual graph has at least two connected components. When all capacities are one, the mincut of a graph is its global edge-connectivity. The $k$-Cut problem is a natural generalization. Given a graph and an integer $k$, the goal is to remove a minimum capacity set of edges such that the residual graph has at least $k$ connected components. MinCut and $k$-Cut have been extensively studied in the literature. Initial algorithms for MinCut were based on a reduction to the $s$-$t$ mincut problem. However, it was realized later on that it can be solved more efficiently and directly. Currently the best deterministic algorithm for MinCut runs in $\tilde{O}(mn)$ time [26] and is based on the maximum adjacency ordering approach of Nagamochi and Ibaraki [19]. On the other hand, there is a near-linear time Monte Carlo randomized algorithm due to Karger [14]. Bridging the gap between the running times of the deterministic and randomized algorithms is a major open problem. Recent work [16, 12] obtained near-linear time deterministic algorithms for simple unweighted graphs.
The $k$-Cut problem is NP-Hard if $k$ is part of the input [10]; however, there is a polynomial-time algorithm for any fixed $k$. Such an algorithm was first devised by Goldschmidt and Hochbaum [10], and subsequently there have been several different algorithms improving the run-time. The randomized algorithm of Karger and Stein [15] runs in $\tilde{O}(n^{2(k-1)})$
time and outputs the optimum $k$-cut with high probability. The fastest deterministic algorithm, due to Thorup
[28], runs in $\tilde{O}(n^{2k})$ time. Upcoming work of Gupta, Lee and Li [11] obtains a faster run-time if the graph has small integer weights, with an exponent depending on $\omega$, the exponent in the run-time of matrix multiplication. It is also known that $k$-Cut is hard when parameterized by $k$ [6]; that is, we do not expect an algorithm with a run-time of $f(k)\,\mathrm{poly}(n)$. Several algorithms that yield a $2$-approximation are known for $k$-Cut; Saran and Vazirani's algorithm based on repeated minimum-cut computations gives a $(2-2/k)$-approximation [25]; the same bound can be achieved by removing the $k-1$ smallest weight edges in a Gomory–Hu tree of the graph [25]. Nagamochi and Kamidoi showed that, using the concept of extreme sets, a $(2-2/k)$-approximation can be found even faster [20]. Naor and Rabani developed an LP relaxation for $k$-Cut [21] and this yields a $2$-approximation [3]. Ravi and Sinha [24] obtained another $2$-approximation via a Lagrangean relaxation approach which was also considered independently by Barahona [2]. A factor of $2-\epsilon$, for large $k$, is the best possible approximation under the Small Set Expansion hypothesis [18]. Recent work has obtained a $1.81$-approximation in FPT time [11]; whether a PTAS can be obtained in FPT time is an interesting open problem.
Motivation and contributions:
The main motivation for this work was to simplify and understand Thorup's tree packing based algorithm for $k$-Cut. Karger's near-linear time algorithm and analysis for the MinCut problem [14] is based on the well-known theorem of Tutte and Nash-Williams (on the min-max relation for edge-disjoint trees in a graph). It is simple and elegant; the main complexity is in the improved running time which is achieved via a complex dynamic program. Karger also tightened the bound on the number of approximate minimum cuts in a graph (originally shown via his random contraction algorithm) via tree packings. In contrast to the case of mincut, the main structural result in Thorup's work on $k$-Cut is much less easy to understand and motivate. His proof consists of two parts. He shows that an ideal tree packing obtained via a recursive decomposition of the graph, first outlined in [27], has the property that any optimum $k$-cut crosses some tree in the packing at most $2k-2$ times. The second part argues that a greedy tree packing with sufficiently many trees approximates the ideal tree packing arbitrarily well. The greedy tree packing is closely related to a multiplicative weight update method for solving a basic tree packing linear program; however, no explicit LP is used in Thorup's analysis. Thus, although Thorup's algorithm is very simple to describe (and implement), the analysis is somewhat opaque.
In this paper we make several contributions which connect Thorup's tree packing to the LP relaxation for $k$-Cut [21]. We outline the specific contributions below.

We show that the dual of the LP for $k$-Cut gives a tree packing, and one can use a simple analysis, very similar to that of Karger, to show that any optimum $k$-cut crosses some tree in the packing at most $2k-3$ times. Thorup proved a bound of $2k-2$ for his tree packing. This leads to a slightly faster algorithm than that of Thorup and also to an improved bound on the number of approximate $k$-cuts.

We show that the optimum solution of the $k$-cut LP, for all values of $k$, can be completely characterized by the principal sequence of partitions of the cut function of the given graph. This establishes the connection between the dual of the LP relaxation and the ideal recursive tree packing considered by Thorup. It also shows that the lower bound provided by the LP relaxation is equivalent to the Lagrangean relaxation lower bound considered by Barahona [2] and Ravi and Sinha [24].
Our results help unify and simplify the different approaches to $k$-cut via the LP relaxation and its dual. A key motivation for this paper is to simplify and improve the understanding of the tree packing approach. For this reason we take a leisurely path and re-prove some of Karger's results for the sake of completeness, and to point out the similarity of our argument for $k$-Cut to the case of MinCut. Readers familiar with [14] may wish to skip Section 3.
Organization:
Section 2 sets up some basic notation and definitions. Section 3 discusses Karger's approach for MinCut via tree packings, with some connections to recent developments on approximately solving tree packings. Section 4 describes the tree packing obtained from the dual of the LP relaxation for $k$-Cut and how it can be used to extend Karger's approach to $k$-Cut. Section 5 gives a new proof that the LP integrality gap for $k$-Cut is $2(1-1/n)$. In Section 6 we show that the optimum LP solution for all values of $k$ can be characterized by a recursive decomposition of the input graph.
2 Preliminaries
We use $n$ and $m$ to denote the number of nodes and edges in a given graph. For a graph $G$, let $\mathcal{T}(G)$ denote the set of spanning trees of $G$. For a graph with edge capacities $c : E \to \mathbb{R}_{\ge 0}$, the fractional spanning tree packing number, denoted by $\tau(G)$, is the optimum value of a simple linear program shown in Fig 1 whose variables are $y_T$, $T \in \mathcal{T}(G)$. The LP has an exponential number of variables but is still polynomial-time solvable. There are several ways to see this, and efficient strongly polynomial combinatorial algorithms are also known [8]. We also observe that there is an optimum solution to the LP whose support has at most $m$ trees, since the number of nontrivial constraints in the LP is at most $m$ (one per edge).
[Figure 1: the LP defining the spanning tree packing number $\tau(G)$.]
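Based on the description above, the packing LP of Fig 1 can be written as follows (a standard rendering; the presentation in the original figure may differ):

```latex
\[
\tau(G) \;=\; \max \sum_{T \in \mathcal{T}(G)} y_T
\quad \text{s.t.} \quad
\sum_{T \ni e} y_T \;\le\; c_e \quad \forall e \in E, \qquad y \ge 0.
\]
```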
There is a min-max formula for $\tau(G)$ which is a special case of the min-max formula for matroid base packing due to Tutte and Nash-Williams. To state this theorem we introduce some notation. For a partition $\mathcal{P}$ of the vertex set, let $E(\mathcal{P})$ denote the set of edges that cross the partition (that is, have end points in two different parts) and let $h(\mathcal{P})$ denote the number of parts of $\mathcal{P}$. A cut is a set $E(\mathcal{P})$ for some partition $\mathcal{P}$ such that $h(\mathcal{P}) \ge 2$. A $k$-cut is a cut $E(\mathcal{P})$ with $h(\mathcal{P}) \ge k$. It is not hard to see that $\tau(G) \le \frac{c(E(\mathcal{P}))}{h(\mathcal{P})-1}$ for any partition $\mathcal{P}$ of the vertex set, since every spanning tree of $G$ contains at least $h(\mathcal{P})-1$ edges from $E(\mathcal{P})$. The minimum over all partitions $\mathcal{P}$ of the quantity $\frac{c(E(\mathcal{P}))}{h(\mathcal{P})-1}$ is also referred to as the strength of $G$ (denoted by $\sigma(G)$), and turns out to be equal to $\tau(G)$.
Theorem 1 (Tutte and Nash Williams)
For any undirected edge-capacitated graph $G$,
\[ \tau(G) \;=\; \min_{\mathcal{P}} \frac{c(E(\mathcal{P}))}{h(\mathcal{P})-1}, \]
where the minimum is over all partitions $\mathcal{P}$ of $V$ with at least two parts.
A useful and wellknown corollary of the preceding theorem is given below.
Corollary 1
For any graph $G$, $\tau(G) \ge \frac{n}{n-1}\cdot\frac{\lambda}{2}$, where $\lambda$ is the value of the global minimum cut of $G$ and $n$ is the number of nodes of $G$. If $G$ is an unweighted graph then $G$ contains at least $\lfloor \lambda/2 \rfloor$ edge-disjoint spanning trees.
Proof
Consider the partition $\mathcal{P}$ that achieves the minimum in the min-max formula and let $h = h(\mathcal{P})$. We have $c(E(\mathcal{P})) \ge h\lambda/2$, since the capacity of the edges leaving each part of $\mathcal{P}$ is at least $\lambda$ and an edge in $E(\mathcal{P})$ crosses exactly two parts. Thus,
\[ \tau(G) \;=\; \frac{c(E(\mathcal{P}))}{h-1} \;\ge\; \frac{\lambda}{2}\cdot\frac{h}{h-1} \;\ge\; \frac{\lambda}{2}\cdot\frac{n}{n-1} \]
since $h \le n$. If $G$ is an unweighted graph then, by the integral version of the theorem, $G$ contains at least $\lfloor \lambda/2 \rfloor$ edge-disjoint spanning trees, as desired.
We say that a tree packing $y$ is $(1-\epsilon)$-approximate if $\sum_T y_T \ge (1-\epsilon)\,\tau(G)$. Note that we typically want a compact tree packing that can either be explicitly specified via a small number of trees or even implicitly via a data structure representing a collection of trees. Approximate spanning tree packings have been obtained via greedy spanning tree packings, which can be viewed as applying the multiplicative weight update method. Recently [4] obtained the following result.
Theorem 2 ([4])
There is a deterministic algorithm that, given an edge-capacitated undirected graph with $m$ edges and an $\epsilon \in (0,1)$, runs in $\tilde{O}(m/\epsilon^2)$ time and outputs an implicit representation of a $(1-\epsilon)$-approximate tree packing.
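To make the greedy/multiplicative-weight connection concrete, here is a minimal sketch in plain Python (our own illustration, not the algorithm of [4], and with no efficiency claims): repeatedly compute a minimum spanning tree with respect to the current relative edge loads, and at the end scale the packing down so that capacities are respected.

```python
def kruskal_mst(n, edges, weight):
    # minimum spanning tree over `edges` (list of (u, v) pairs) with
    # respect to weight[i] for edge index i; returns edge indices
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for i in sorted(range(len(edges)), key=lambda i: weight[i]):
        u, v = edges[i]
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append(i)
    return tree

def greedy_tree_packing(n, edges, cap, iters):
    # Repeatedly add the spanning tree that is lightest with respect
    # to current relative loads; then scale all tree weights so the
    # packing fits within the capacities.
    load = [0.0] * len(edges)
    trees = []
    for _ in range(iters):
        t = kruskal_mst(n, edges,
                        {i: load[i] / cap[i] for i in range(len(edges))})
        trees.append(t)
        for i in t:
            load[i] += 1.0
    congestion = max(load[i] / cap[i] for i in range(len(edges)))
    y = 1.0 / congestion        # uniform weight per tree after scaling
    return trees, y, iters * y  # packing value = sum of tree weights
```

On a 4-cycle with unit capacities the packing value converges to $\tau = 4/3$, matching the min-max formula.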
3 Tree packing and MinCut
We review some of Karger's observations and results connecting tree packings and minimum cuts [14], which follow relatively easily via Corollary 1. We rephrase his results and arguments with a slightly different notation. Given a spanning tree $T$ and a cut $F$, following Karger, we say that $T$ $h$-respects $F$ for some integer $h$ if $|E(T) \cap F| \le h$.
Karger proved that a constant fraction of the trees (in the weighted sense) of an optimum packing $2$-respect any fixed mincut. In fact this holds for a $(1-\epsilon)$-approximate tree packing for sufficiently small $\epsilon$. The proof, as follows, is an easy consequence of Corollary 1
and an averaging argument. It is convenient to view a tree packing as a probability distribution. Let
$Y = \sum_T y_T$. We then have $Y = \tau(G)$ for an exact tree packing, and for a $(1-\epsilon)$-approximate packing we have $Y \ge (1-\epsilon)\tau(G)$. Let $F$ be a fixed minimum cut whose capacity is $\lambda$. Let $x_T = |E(T) \cap F|$ be the number of edges of $T$ that cross $F$. Let $\rho_2$ be the fraction of trees (by weight) that $2$-respect $F$. Since each tree crosses $F$ at least once we have
\[ \sum_T y_T x_T \;\ge\; \rho_2 Y + 3(1-\rho_2)Y. \]
Because $y$ is a valid packing,
\[ \sum_T y_T x_T \;\le\; c(F) \;=\; \lambda. \]
Putting the two inequalities together and using Corollary 1,
\[ \rho_2 Y + 3(1-\rho_2)Y \;\le\; \lambda \;\le\; \frac{2(n-1)}{n}\,\tau(G) \;\le\; \frac{2(n-1)}{n}\cdot\frac{Y}{1-\epsilon}, \]
which implies that
\[ \rho_2 \;\ge\; \frac{1}{2}\left(3 - \frac{2(1-1/n)}{1-\epsilon}\right). \]
If $\epsilon = 0$ this implies that at least half the trees (by weight) $2$-respect any minimum cut. Let $\rho_1$ be the fraction of trees that $1$-respect a minimum cut. One can do similar calculations as above to conclude that
\[ \rho_1 \;\ge\; 2 - \frac{2(1-1/n)}{1-\epsilon}. \]
Thus, $\rho_1 > 0$ as long as $\epsilon < 1/n$. In an optimum packing there is always a tree in the support that $1$-respects a mincut. The preceding argument can be generalized in a direct fashion to yield the following useful lemma on approximate cuts.
Lemma 1
Let $y$ be a $(1-\epsilon)$-approximate tree packing. Let $F$ be a cut such that $c(F) \le \alpha\lambda$ for some fixed $\alpha \ge 1$. For integer $h \ge 1$ let $\rho_h$ denote the fraction of the trees in the packing that $h$-respect $F$. Then,
\[ \rho_h \;\ge\; 1 - \frac{1}{h}\left(\frac{2\alpha(1-1/n)}{1-\epsilon} - 1\right). \]
Number of approximate minimum cuts:
Karger showed that the number of $\alpha$-approximate minimum cuts is at most $O(n^{2\alpha})$ via his random contraction algorithm [13]. He improved the bound to $O(n^{\lfloor 2\alpha \rfloor})$ (for any fixed $\alpha$) via tree packings in [14]. We review the latter argument.
Given any spanning tree $T$ we can root it canonically at a fixed vertex, say $r$. For any cut $F$ we can associate with $F$ the set of edges of $T$ that cross $F$, that is, $E(T) \cap F$. In the other direction, a set of edges $S \subseteq E(T)$ induces several components in $T - S$, which induces a unique cut in $G$ where any two components of $T - S$ adjacent in $T$ lie on opposite sides of the cut. This gives a bijection between cuts induced by edge removals in $T$ and cuts in the graph.
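As an illustration (plain Python, our own names and representation, not from the paper): the unique cut induced by a subset of tree edges can be computed by two-coloring the tree, flipping sides exactly when crossing a removed edge.

```python
from collections import defaultdict

def induced_cut(n, tree_edges, removed, graph_edges, cap, root=0):
    """Given a spanning tree and a subset `removed` of its edges,
    return the capacity and one side of the unique cut of the graph
    in which tree-adjacent components of T - removed are separated."""
    adj = defaultdict(list)
    for (u, v) in tree_edges:
        cut_edge = (u, v) in removed or (v, u) in removed
        adj[u].append((v, cut_edge))
        adj[v].append((u, cut_edge))
    side = [None] * n
    side[root] = 0
    stack = [root]
    while stack:  # DFS over the tree, flipping sides across removed edges
        u = stack.pop()
        for (v, flip) in adj[u]:
            if side[v] is None:
                side[v] = side[u] ^ (1 if flip else 0)
                stack.append(v)
    value = sum(cap[e] for e in graph_edges if side[e[0]] != side[e[1]])
    return value, {v for v in range(n) if side[v] == 1}
```

For example, on a 4-cycle with the path tree $(0,1),(1,2),(2,3)$, removing the tree edge $(1,2)$ induces the cut $\{\{0,1\},\{2,3\}\}$ of capacity 2.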
Let $\alpha \ge 1$. Fix an optimum tree packing and let $F$ be an $\alpha$-approximate mincut, that is, $c(F) \le \alpha\lambda$. From Lemma 1 and with some simplification we see that
\[ \rho_{\lfloor 2\alpha \rfloor} \;\ge\; 1 - \frac{2\alpha(1-1/n) - 1}{\lfloor 2\alpha \rfloor} \;>\; 0. \]
Note that $\lfloor 2\alpha \rfloor \le 2\alpha$ and hence an easy counting argument for $\alpha$-approximate mincuts is the following. There is an optimum packing whose support has at most $m$ trees. For each $\alpha$-approximate mincut there is at least one of the trees in the packing which crosses it at most $\lfloor 2\alpha \rfloor$ times. Hence each approximate cut can be mapped to a tree and a choice of at most $\lfloor 2\alpha \rfloor$ edges from that tree. The total number of these choices is $O(m\, n^{\lfloor 2\alpha \rfloor})$. We can avoid the factor of $m$ by noting that $\rho_{\lfloor 2\alpha \rfloor}$ is a constant for every fixed $\alpha$. We give an informal argument here. Each approximate cut crosses a constant fraction of the trees at most $\lfloor 2\alpha \rfloor$ times. If there are more than $c_\alpha\, n^{\lfloor 2\alpha \rfloor}$ distinct cuts, and each cut induces a subset of at most $\lfloor 2\alpha \rfloor$ edges in a constant fraction of the trees, then we have a contradiction. Thus, the number of $\alpha$-approximate mincuts is $O(n^{\lfloor 2\alpha \rfloor})$ where the constant hidden in the big-O depends on $\alpha$.
Minimum cut algorithm via tree packings:
Karger used tree packings to obtain a randomized near-linear time algorithm for the global minimum cut. The algorithm is based on combining the following two steps.

Given a graph $G$ there is a randomized algorithm that outputs $O(\log n)$ trees in near-linear time such that, with high probability, there is a global minimum cut that $2$-respects one of the trees in the packing.

There is a deterministic algorithm that, given a graph $G$ and a spanning tree $T$, finds in $\tilde{O}(m)$ time the cut of minimum capacity in $G$ that $2$-respects $T$. This is based on a clever dynamic programming algorithm that utilizes the dynamic tree data structure.
Only the first step of the algorithm is randomized. Karger solves the first step as follows. Given a capacitated graph $G$ and an $\epsilon > 0$, he sparsifies the graph to obtain an unweighted skeleton graph $H$ via random sampling such that (i) $H$ has $\tilde{O}(n/\epsilon^2)$ edges, (ii) the minimum cut of $H$ is $O(\log n/\epsilon^2)$, and (iii) a minimum cut of $H$ corresponds to a $(1+\epsilon)$-approximate minimum cut of $G$ in that the cuts induce the same vertex partition. Karger then uses greedy tree packing in $H$ to obtain a tree packing with $O(\log n/\epsilon^2)$ trees, and via Corollary 1 argues that one of the trees in the packing $2$-respects a mincut of $G$; here $\epsilon$ and the packing approximation parameter are chosen to be sufficiently small but fixed constants.
We observe that Theorem 2 can be used in place of the sparsification step of Karger. The deterministic algorithm implied by the theorem can be used to find an implicit $(1-\epsilon)$-approximate tree packing in near-linear time for any fixed $\epsilon$. For sufficiently small but fixed $\epsilon$, a constant fraction of the trees in the tree packing $2$-respect any fixed minimum cut. Thus, if we sample a tree $T$ from the tree packing and then apply Karger's deterministic algorithm for finding the smallest cut that $2$-respects $T$, then we find the minimum cut with constant probability. We can repeat the sampling $O(\log n)$ times to obtain a high probability bound.
Karger raised the following question in his paper. Can the dynamic programming algorithm for finding the minimum cut that $2$-respects a tree be made dynamic? That is, suppose $T$ is altered via an edge swap to yield a tree $T'$, where an edge $e$ is removed and replaced by a new edge $e'$. Can the solution for $T$ be updated quickly to obtain a solution for $T'$? Note that the graph $G$ is static; only the tree is changing. The tree packing from Theorem 2 finds an implicit packing via edge swap operations from a starting tree. Suppose there is a dynamic version of Karger's dynamic program that handles updates to the tree in small amortized time per update. This would yield a deterministic algorithm for the global mincut with a near-linear total running time. We note that the best deterministic algorithm for capacitated graphs runs in $\tilde{O}(mn)$ time [26].
4 Tree packings for $k$-Cut via the LP relaxation
In this section we consider the $k$-Cut problem. Thorup [27] constructed a probability distribution over spanning trees, obtained via a recursive greedy tree packing, and showed that there is a tree $T$ in the support of the distribution such that a minimum weight $k$-cut contains at most $2k-2$ edges of $T$. He then showed that greedy tree packing with sufficiently many trees closely approximates the ideal distribution. Via this approach he derived the currently fastest known deterministic algorithm to find the minimum $k$-cut, in $\tilde{O}(n^{2k})$ time. This is only slightly slower than the randomized Monte Carlo algorithm of Karger and Stein [15], whose algorithm runs in $\tilde{O}(n^{2(k-1)})$ time. Thorup's algorithm is fairly simple. However, the proofs are somewhat complex since they rely on the recursive tree packing and its subtle properties. Arguing that greedy tree packing approximates the recursive tree packing is also technical.
Here we consider a different tree packing for $k$-Cut that arises from the LP relaxation considered by Naor and Rabani [21]. This LP relaxation is shown in Fig 2. The variables are $x_e$, $e \in E$, which indicate whether an edge is cut or not. There is a constraint for each spanning tree $T$; at least $k-1$ edges from $T$ need to be chosen in a valid $k$-cut. We note that for $k > 2$ the upper bound constraint $x_e \le 1$ is necessary.
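Following the description above, the relaxation of Fig 2 can be written as follows (a standard rendering of the Naor–Rabani LP; the original figure may present it differently):

```latex
\[
\mathrm{LP}_k(G) \;=\; \min \sum_{e \in E} c_e x_e
\quad \text{s.t.} \quad
\sum_{e \in T} x_e \;\ge\; k-1 \quad \forall T \in \mathcal{T}(G),
\qquad 0 \le x_e \le 1 \quad \forall e \in E.
\]
```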
The dual of the LP is given in Fig 3. Naor and Rabani claimed an integrality gap of $2(1-1/n)$ for the $k$-Cut LP. Their proof was incomplete and a correct proof was given in [3] in the context of a more general problem called the Steiner $k$-Cut problem. Let $\mathrm{OPT}_k(G)$ denote the minimum $k$-cut capacity in $G$; the integrality gap result (Theorem 3) states that $\mathrm{OPT}_k(G) \le 2(1-1/n)\,\mathrm{LP}_k(G)$.
Corollary 2
For any graph $G$ and any $k$, the optimum value of the dual LP (and hence of the $k$-Cut LP) is at least $\frac{n}{2(n-1)}\,\mathrm{OPT}_k(G)$.
Note that Corollary 1 is a special case of the preceding corollary.
Remark 1
We note that the LP relaxation in Fig 2 assumes that $G$ is connected. This is easy to ensure by adding dummy edges of zero cost to make $G$ connected. However, it is useful to consider the general case when the number of connected components in $G$ is $q$, where we assume for simplicity that $q < k$ (if $q \ge k$ the problem is trivial). In this case we need to consider the maximal forests in $G$, each of which has exactly $n - q$ edges; to avoid notational overload we use $\mathcal{T}(G)$ to denote the set of maximal forests of $G$. The LP constraint now changes to $\sum_{e \in F} x_e \ge k - q$ for every maximal forest $F$.
Tree packing interpretation of the dual LP:
The dual LP has two types of variables. For each edge $e$ there is a variable $z_e$ and for each spanning tree $T$ there is a variable $y_T$. The dual seeks to add capacity $z_e$ to the original capacity $c_e$, and then find a maximum tree packing within the augmented capacities $c + z$. The objective is $(k-1)\sum_T y_T - \sum_e z_e$. Note that for $k = 2$ there is an optimum solution with $z = 0$; this can be seen from the fact that for $k = 2$ the primal LP can omit the constraints $x_e \le 1$. For $k > 2$ it may be advantageous to add capacity to some bottleneck edges (say from a minimum cut) to increase the tree packing value, which is multiplied by $k-1$.
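In symbols, the dual described above (Fig 3) reads as follows, with a variable $y_T$ per spanning tree and $z_e$ per edge:

```latex
\[
\max\; (k-1)\sum_{T \in \mathcal{T}(G)} y_T \;-\; \sum_{e \in E} z_e
\quad \text{s.t.} \quad
\sum_{T \ni e} y_T \;\le\; c_e + z_e \quad \forall e \in E,
\qquad y, z \ge 0.
\]
```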
Our goal is to show that one can transparently carry over the arguments for the global minimum cut via tree packings to the $k$-Cut setting via (optimum) solutions to the dual LP. Theorem 3 plays the role of Corollary 1. The key lemma below is analogous to Lemma 1.
Lemma 2
Let $(y, z)$ be an optimum solution to the dual LP for $k$-Cut shown in Fig 3. Let $F$ be any $k$-cut such that $c(F) \le \alpha\,\mathrm{OPT}_k$ for some $\alpha \ge 1$. For integer $h \ge k-1$ let $r_h$ denote the fraction (by weight) of the trees in the packing induced by $y$ that $h$-respect $F$. Then,
\[ r_h \;\ge\; 1 - \frac{2\alpha(1-1/n)(k-1) - (k-1)}{h + 1 - (k-1)}. \]
Proof
Let $Y$ denote $\sum_T y_T$ and let $x_T = |E(T) \cap F|$. Let $\mathbb{E}[x]$ denote $\frac{1}{Y}\sum_T y_T x_T$. Thus, since $x_T \ge k-1$ for every spanning tree, Markov's inequality applied to $x_T - (k-1)$ gives
\[ 1 - r_h \;\le\; \frac{\mathbb{E}[x] - (k-1)}{h + 1 - (k-1)}. \]
Because $y$ is a valid tree packing in capacities $c + z$,
\[ \sum_T y_T x_T \;\le\; \sum_{e \in F}(c_e + z_e) \;\le\; \alpha\,\mathrm{OPT}_k + \sum_e z_e \;\le\; 2\alpha\Big(1-\frac{1}{n}\Big)\Big((k-1)Y - \sum_e z_e\Big) + \sum_e z_e \;\le\; 2\alpha\Big(1-\frac{1}{n}\Big)(k-1)Y. \]
In the penultimate inequality of the preceding line we are using the fact that $\mathrm{OPT}_k \le 2(1-1/n)\big((k-1)Y - \sum_e z_e\big)$, by Theorem 3 and strong duality; the last inequality uses the fact that $2\alpha(1-1/n) \ge 1$. Putting the preceding inequalities together,
\[ r_h \;\ge\; 1 - \frac{2\alpha(1-1/n)(k-1) - (k-1)}{h + 1 - (k-1)}. \qquad (1) \]
Corollary 3
Let $(y, z)$ be an optimum solution to the dual LP. For every optimum $k$-cut $F$ there is a tree $T$ in the support of $y$ such that $|E(T) \cap F| \le 2k-3$.
Proof
We apply Lemma 2 with $\alpha = 1$ and $h = 2k-3$ and observe that $r_h \ge 2/n > 0$, which implies the desired statement.
Corollary 4
Let $(y, z)$ be a $(1-\epsilon)$-approximate solution to the dual LP where $\epsilon \le \frac{1}{2k}$. For every optimum $k$-cut $F$ there is a tree $T$ in the support of $y$ such that $|E(T) \cap F| \le 2k-2$.
Proof
4.1 Number of approximate $k$-cuts
We now prove the following theorem.
Theorem 4
Let $G$ be an undirected edge-weighted graph and let $k$ be a fixed integer. For fixed $\alpha \ge 1$ the number of $k$-cuts $F$ such that $c(F) \le \alpha\,\mathrm{OPT}_k$ is $O(n^{\lfloor 2\alpha(k-1) \rfloor})$.
Let $h = \lfloor 2\alpha(k-1) \rfloor$. By Lemma 2, for fixed $\alpha$ and $k$, $r_h$ is a fixed and positive constant independent of $n$. Thus, for any fixed $k$-cut $F$ satisfying the condition in the theorem, a constant fraction of the trees in the packing of an optimum dual solution cross $F$ at most $h$ times. For a given tree $T$ the number of distinct cuts induced by removing at most $h$ edges is $O(n^h)$ for fixed $h$; there are at most $O(n^h)$ subsets of at most $h$ of the tree's edges, and each subset induces at most $f(h)$ partitions of the vertex set into at least 2 parts for some fixed function $f$. Thus if there are more than $c_{\alpha,k}\, n^h$ distinct $k$-cuts satisfying the condition, we obtain a contradiction.
In particular, since each minimum $k$-cut is an $\alpha$-approximate $k$-cut, we obtain the following corollary as a special case of the above theorem.
Corollary 5
Let $G$ be an undirected edge-weighted graph and let $k$ be a fixed integer. For $1 \le \alpha < 1 + \frac{1}{2(k-1)}$ the number of $\alpha$-approximate $k$-cuts is $O(n^{2(k-1)})$.
4.2 Enumerating all minimum $k$-cuts
We briefly describe how to enumerate all minimum $k$-cuts via Lemma 2. The argument is basically the same as that of Karger and Thorup. First, we compute an optimum solution $(y, z)$ to the dual LP. We can do this via the Ellipsoid method or other ways. Let $T_{\mathrm{LP}}$ be the running time to find $(y, z)$. Moreover, if we find a basic feasible solution to the dual LP we are guaranteed that the support of $y$ has at most $m$ distinct trees. Now Lemma 2 guarantees that for every minimum $k$-cut $F$ there is a tree $T$ in the support of $y$ such that $T$ $(2k-3)$-respects $F$. Thus, to enumerate all minimum $k$-cuts the following procedure suffices. For each of the trees in the optimum packing we enumerate all $k$-cuts induced by removing at most $2k-3$ edges from the tree. With appropriate care and data structures (see [14] and [27]) this can be done for a single tree in $\tilde{O}(n^{2k-3})$ time. Thus the total time for all trees in the support of $y$ is $\tilde{O}(m\, n^{2k-3})$ for fixed $k$. We thus obtain the following theorem.
Theorem 5
For fixed $k$, all the minimum $k$-cuts of a graph with $n$ nodes and $m$ edges can be computed in $\tilde{O}(T_{\mathrm{LP}} + m\, n^{2k-3})$ time, where $T_{\mathrm{LP}}$ is the time to find an optimum solution to the LP for $k$-cut.
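As a toy illustration of the enumeration behind the theorem above (our own brute-force sketch, feasible only for tiny graphs; the actual algorithms also consider groupings of the components and use faster data structures), one can scan subsets of tree edges and evaluate the partition each subset induces:

```python
from itertools import combinations

def components_after_removal(n, tree_edges, removed):
    # union-find over the tree edges that remain
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (u, v) in tree_edges:
        if (u, v) not in removed:
            parent[find(u)] = find(v)
    return [find(v) for v in range(n)]

def min_kcut_via_tree(n, graph_edges, cap, tree_edges, k, h):
    """Enumerate subsets of at most h tree edges; the components of the
    tree minus the subset form a candidate partition. Returns the
    cheapest candidate with at least k parts."""
    best = float('inf')
    for j in range(k - 1, h + 1):
        for removed in combinations(tree_edges, j):
            label = components_after_removal(n, tree_edges, set(removed))
            if len(set(label)) >= k:
                value = sum(cap[e] for e in graph_edges
                            if label[e[0]] != label[e[1]])
                best = min(best, value)
    return best
```

On a 4-cycle with unit capacities and the path tree $(0,1),(1,2),(2,3)$, this recovers the minimum $2$-cut value $2$ and the minimum $3$-cut value $3$.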
We observe that Thorup's algorithm [27] runs in $\tilde{O}(n^{2k})$ time. Thorup uses greedy tree packing in place of solving the LP. The optimality of the LP solution was crucial in using the bound of $2k-3$ instead of $2k-2$. Thus, even though we obtain a slightly faster algorithm than Thorup, we need to find an optimum solution to the LP, which can be done via the Ellipsoid method. The Ellipsoid method is not quite practical. Below we discuss a different approach.
In recent work, Quanrud showed that a $(1-\epsilon)$-approximate solution to the dual LP can be computed in near-linear time. We state his theorem below.
Theorem 6 ([23])
There is an algorithm that computes a $(1-\epsilon)$-approximate solution to the dual LP in $\tilde{O}(m\,\mathrm{poly}(1/\epsilon))$ time.
We observe that the preceding theorem guarantees a bounded number of trees in the support of $y$ and also implicitly stores them in near-linear space. If we choose $\epsilon$ sufficiently small (on the order of $1/k$) then, via Corollary 4, for every minimum $k$-cut $F$ there is a tree in the support of $y$ that $(2k-2)$-respects $F$. This leads to an algorithm that in $\tilde{O}(n^{2k})$ time enumerates all minimum $k$-cuts and recovers Thorup's running time. However, we note that the trees generated by the algorithm in the preceding theorem are implicit and can be stored in small space. It may be possible to use this additional structure to match or improve the runtime achieved by Theorem 5.
Remark 2
For unweighted graphs, [23] guarantees a $(1-\epsilon)$-approximation with a smaller number of trees. This improves the running time for unweighted graphs.
We briefly discuss a potential approach to speed up the computation further. Recall that Karger describes an algorithm that, given a spanning tree $T$ of a graph $G$, finds the minimum cut that $2$-respects $T$ in $\tilde{O}(m)$ time, speeding up the easier quadratic-time algorithm. We can leverage this as follows. In the case of $k$-cut we are given $G$ and $T$ and wish to find the minimum $k$-cut induced by the removal of at most $h$ edges, where $h$ is either $2k-3$ or $2k-2$ depending on the tree packing we use. Suppose $S$ is a set of $j$ edges of $T$. Removing them from $T$ yields a forest with $j+1$ components. We can then apply Karger's algorithm in each of these components with an appropriate graph. This results in a savings of roughly a quadratic factor in the running time per tree. We can try to build on Karger's ideas to improve the running time to find the best cut induced by removing a small number of edges from $T$. We can then leverage this for larger values of $k$.
5 A new proof of the LP integrality gap for $k$-Cut
The proof of Theorem 3 in [3] is based on the primal-dual algorithm and analysis of Agrawal, Klein and Ravi [1], and Goemans and Williamson [9] for the Steiner tree problem. For this reason the proof is technical and indirect. Further, the proof from [3] is described for the Steiner cut problem which has additional complexity. Here we give a different and intuitive proof for $k$-Cut. Unlike the proof in [3], the proof here relies on optimality properties of the LP solution and hence is less useful algorithmically. We note that [23] uses Theorem 6 and a fast implementation of the algorithmic proof in [3] to obtain a near-linear time $(2+\epsilon)$-approximation for $k$-Cut.
Let $G = (V, E)$ be a graph with nonnegative edge capacities $c$. We let $d(v)$ denote the capacitated degree of node $v$. We will assume without loss of generality that $V = \{v_1, \ldots, v_n\}$ and that the nodes are sorted in increasing order of degrees, that is, $d(v_1) \le d(v_2) \le \cdots \le d(v_n)$. We observe that $\sum_{i=1}^{k-1} d(v_i)$ is an upper bound on the value of an optimum $k$-cut; removing all the edges incident to $v_1, \ldots, v_{k-1}$ gives a feasible solution in which the components are the isolated vertices $v_1, \ldots, v_{k-1}$, and a component consisting of the remaining nodes of the graph.
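In symbols, with the degrees sorted as above:

```latex
\[
\mathrm{OPT}_k(G) \;\le\; c\Big(\bigcup_{i=1}^{k-1}\delta(v_i)\Big) \;\le\; \sum_{i=1}^{k-1} d(v_i),
\]
```

where $\delta(v)$ is the set of edges incident to $v$; an edge between two of the isolated vertices is counted twice on the right, which only helps.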
The key lemma is the following which proves the integrality gap in a special case.
Lemma 3
Let $G$ be a connected graph and let $x$ be an optimum solution to the $k$-Cut LP such that $0 < x_e < 1$ for each $e$ (in other words $x$ is fully fractional). Then
\[ \sum_{i=1}^{k-1} d(v_i) \;\le\; 2\Big(1-\frac{1}{n}\Big)\sum_e c_e x_e. \]
Proof
Let $(y, z)$ be any fixed optimum solution to the dual LP. Complementary slackness gives the following two properties:

for each $T$ in the support of $y$, $\sum_{e \in T} x_e = k-1$, for if $\sum_{e \in T} x_e > k-1$ we would have $y_T = 0$.

for each $e$, $\sum_{T \ni e} y_T = c_e$ and $z_e = 0$, since $0 < x_e < 1$.
From the second property above, and the fact that each spanning tree has exactly $n-1$ edges, we conclude that
\[ \sum_e c_e \;=\; \sum_e \sum_{T \ni e} y_T \;=\; (n-1)\sum_T y_T. \qquad (3) \]
Since the degrees are sorted,
\[ \sum_{i=1}^{k-1} d(v_i) \;\le\; \frac{k-1}{n}\sum_{i=1}^{n} d(v_i) \;=\; \frac{k-1}{n}\cdot 2\sum_e c_e. \qquad (4) \]
Putting the two preceding inequalities together,
\[ \sum_{i=1}^{k-1} d(v_i) \;\le\; \frac{2(k-1)(n-1)}{n}\sum_T y_T \;=\; 2\Big(1-\frac{1}{n}\Big)(k-1)\sum_T y_T \;=\; 2\Big(1-\frac{1}{n}\Big)\sum_e c_e x_e, \]
where the last equality is based on strong duality and the fact that $z = 0$ (so the dual objective equals $(k-1)\sum_T y_T$).
The preceding lemma can be easily generalized to the case when $G$ has $q$ connected components, following the remark in the preceding section on the $k$-Cut LP. This gives us the following.
Corollary 6
Let $G$ be a graph with $q < k$ connected components and let $x$ be an optimum solution to the $k$-Cut LP such that $0 < x_e < 1$ for each $e$. Then
\[ \sum_{i=1}^{k-q} d(v_i) \;\le\; 2\Big(1-\frac{q}{n}\Big)\sum_e c_e x_e. \]
Now we consider the general case when the optimum solution $x$ to the $k$-Cut LP is not necessarily fully fractional as needed in Lemma 3. The following claim is easy.
Let $e = uv$ where $x_e = 0$. Let $G'$ be the graph obtained from $G$ by contracting $u$ and $v$ into a single node. Then there is a feasible solution to the $k$-Cut LP in $G'$ of the same cost as that of $x$. Moreover a feasible $k$-cut in $G'$ is a feasible $k$-cut in $G$ of the same cost.
Using the preceding claim we can assume without loss of generality that $x_e > 0$ for each $e$. Let $E_1 = \{e : x_e = 1\}$. Since the LP solution paid for the full cost of the edges in $E_1$, we can recurse on $G - E_1$ and the fractional solution obtained by restricting $x$ to $E \setminus E_1$. If $G - E_1$ is connected then the restriction is an optimum solution to the $k$-Cut LP on $G - E_1$, and is fully fractional, and we can apply Lemma 3. However, $G - E_1$ can be disconnected. Let $q$ be the number of connected components in $G - E_1$. If $q \ge k$ we are done since $E_1$ is a feasible $k$-cut and $c(E_1) \le \sum_e c_e x_e$. The interesting case is when $q < k$. In this case we apply Corollary 6 based on the following claim, which is intuitive and whose formal proof we omit.
Let $x'$ be the restriction of $x$ to $E \setminus E_1$. Then for any maximal forest $F$ in $G - E_1$ we have $\sum_{e \in F} x'_e \ge k - q$. Moreover, $x'$ is an optimum solution to the $k$-Cut LP in $G - E_1$.
From Corollary 6 we can find the smallest-degree vertices $v_1, \ldots, v_{k-q}$ of $G - E_1$ whose isolation induces a $k$-cut in $G - E_1$ of capacity at most
\[ 2\Big(1-\frac{1}{n}\Big)\sum_{e \notin E_1} c_e x_e. \]
Therefore, together with $E_1$, this induces a $k$-cut in $G$ and we have that its capacity is at most
\[ c(E_1) + 2\Big(1-\frac{1}{n}\Big)\sum_{e \notin E_1} c_e x_e \;\le\; 2\Big(1-\frac{1}{n}\Big)\sum_e c_e x_e. \]
This finishes the proof. Note that the proof also gives a very simple rounding algorithm assuming we have an optimum solution $x$ for the LP: contract all edges with $x_e = 0$, remove all edges with $x_e = 1$, and use Corollary 6 in the residual graph to isolate the smallest-degree vertices.
6 Characterizing the optimum LP solution
We have seen that the dual of the LP relaxation for $k$-Cut yields a tree packing that can be used in place of Thorup's recursive tree packing. In this section we show that the two are the same by characterizing the optimum LP solution for a given graph through a recursive partitioning procedure. This yields a nested sequence of partitions of the vertex set of the graph. This sequence is called the principal sequence of partitions of a graph and is better understood in the more general context of submodular functions [22]. We refer the reader to Fujishige's article for more on this topic [7], and to [5, 17] for algorithmic aspects in the setting of graphs. We also connect the LP relaxation with the Lagrangean relaxation approach for $k$-Cut considered by Barahona [2] and Ravi and Sinha [24]. Their approach is also built upon the principal sequence of partitions. In order to keep the discussion simple we mainly follow the notation and approach of [24].
Given $G = (V, E)$ and an edge set $F \subseteq E$, let $\kappa(F)$ denote the number of connected components in $G - F$. Recall that the strength of a capacitated graph $G$, denoted by $\sigma(G)$, is defined as $\min_{\mathcal{P}} \frac{c(E(\mathcal{P}))}{h(\mathcal{P})-1}$. The $k$-Cut problem can be phrased as $\min\{c(F) : \kappa(F) \ge k\}$. However, the constraint that $\kappa(F) \ge k$ is not straightforward to work with. It is, however, not hard to show that $\kappa$ is a supermodular set function over the ground set $E$. A Lagrangean relaxation approach was considered in [2, 24]. To set this up we define, for any fixed edge set $F$, a function $L_F(\delta) = c(F) - \delta(\kappa(F) - k)$. We then obtain the Lagrangean function $L(\delta) = \min_{F \subseteq E} L_F(\delta)$. The quantity $\min_F \big(c(F) - \delta\,\kappa(F)\big)$ is closely related to the attack value of the graph for parameter $\delta$ and was considered by Cunningham [5] in his algorithm to compute the strength of the graph.
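Written out (a reconstruction from the description above; the exact normalization in [2, 24] may differ), with $\kappa(F)$ the number of components of $G - F$: for every $\delta \ge 0$,

```latex
\[
L(\delta) \;=\; \min_{F \subseteq E}\Big(c(F) - \delta\big(\kappa(F) - k\big)\Big)
\;\le\; c(F^*) - \delta\big(\kappa(F^*) - k\big)
\;\le\; c(F^*) \;=\; \mathrm{OPT}_k,
\]
```

where $F^*$ is an optimum $k$-cut, so that $\kappa(F^*) \ge k$.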
Thus $L(\delta)$ provides a lower bound on the optimum solution value for every $\delta \ge 0$. [24] describes structural properties of the function $L$, several of which are explicit or implicit in [5]. We state them below.

The function $L$ and the attack function $\delta \mapsto \min_F\big(c(F) - \delta\,\kappa(F)\big)$ are continuous, concave and piecewise linear, and have no more than $n$ breakpoints. The attack function is nonincreasing in $\delta$.

Under a nondegeneracy assumption on the graph, which is easy to ensure, the following holds. If $\delta$ is not a breakpoint then there is a unique edge set $F_\delta$ such that $L(\delta) = L_{F_\delta}(\delta)$. If $\delta$ is a breakpoint then there are exactly two such edge sets.

If $\delta$ is a breakpoint of $L$ induced by edge sets $F_1$ and $F_2$ then $F_1 \subset F_2$. In particular, $F_2 \setminus F_1$ is contained in some connected component of $G - F_1$.

Let $\delta$ be a breakpoint of $L$ induced by edge set $F$. Then the next breakpoint is induced by the edge set obtained by adding to $F$ the solution to the strength problem on the smallest-strength component of $G - F$.
The above properties show that the breakpoints induce a sequence of partitions of $V$ which are successive refinements. Alternatively, we consider the sequence of edge sets obtained by the following algorithm. We will assume that $G$ is connected. Let $F_0 = \emptyset$. Given $F_i$ we obtain $F_{i+1}$ as follows. Let $G_i = G - F_i$. If $G_i$ has no edges we stop. Otherwise let $C_i$ be the minimum strength connected component of $G_i$ and let $A_i$ be a minimum strength edge set of $C_i$. We define $F_{i+1} = F_i \cup A_i$. The process stops when the residual graph has no edges. Let $\mathcal{P}_i$ denote the partition of $V$ induced by the components of $G - F_i$. Note that $\mathcal{P}_{i+1}$ is obtained from $\mathcal{P}_i$ by replacing $C_i$ by a minimum strength partition of $C_i$; thus $\mathcal{P}_{i+1}$ is a refinement of $\mathcal{P}_i$, and the final partition consists of singleton nodes. Note that Thorup's ideal tree packing is also based on the same recursive decomposition.
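The basic subroutine of the decomposition is the minimum strength partition. Here is a brute-force sketch in plain Python (our own illustration; exponential in the number of vertices, usable only on tiny graphs) of Cunningham's strength $\sigma(G) = \min_{\mathcal{P}} c(E(\mathcal{P}))/(h(\mathcal{P})-1)$:

```python
def partitions(elems):
    # all set partitions of a list (Bell-number many; tiny inputs only)
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield [[first]] + part

def strength(nodes, edges, cap):
    """Brute-force strength: minimum over partitions with at least
    two parts of (capacity crossing the partition) / (#parts - 1)."""
    best = float('inf')
    for part in partitions(list(nodes)):
        if len(part) < 2:
            continue
        side = {v: i for i, p in enumerate(part) for v in p}
        crossing = sum(cap[e] for e in edges if side[e[0]] != side[e[1]])
        best = min(best, crossing / (len(part) - 1))
    return best
```

On a unit-capacity 4-cycle this gives $\sigma = 4/3$ (achieved by the partition into singletons), matching $\tau$ from the Tutte–Nash-Williams formula, and on $K_4$ it gives $\sigma = 2$.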
Ravi and Sinha obtained a $2$-approximation for $k$-Cut as follows. Given the preceding decomposition of $G$ they consider the smallest $i$ such that $h(\mathcal{P}_i) \ge k$. If $h(\mathcal{P}_i) = k$ they output $E(\mathcal{P}_i)$ and can argue that it is an optimum solution. Otherwise they do the following. Recall $\mathcal{P}_i$ is obtained from $\mathcal{P}_{i-1}$ by replacing the component $C_{i-1}$ in $\mathcal{P}_{i-1}$ by a minimum strength decomposition of $C_{i-1}$. Let $j = k - h(\mathcal{P}_{i-1})$. Consider the minimum strength partition of $C_{i-1}$ and let $S_1, \ldots, S_j$ be the $j$ parts of the partition with the smallest cut capacities. Output the cut $E(\mathcal{P}_{i-1}) \cup \delta(S_1) \cup \cdots \cup \delta(S_j)$.
An optimum LP solution from the decomposition:
Given $k$, as before let $i$ be the smallest index such that $h(\mathcal{P}_i) \ge k$. Let $j = k - h(\mathcal{P}_{i-1})$. We consider the following solution $x$ to the LP:

$x_e = 1$ for each $e \in E(\mathcal{P}_{i-1})$.

$x_e = \theta$ for each $e \in A_{i-1}$, where $\theta = j/(p-1)$ and $p$ is the number of parts of the minimum strength partition of $C_{i-1}$.

$x_e = 0$ for each remaining edge $e$.
Lemma 4
The solution $x$ is feasible and has objective value
\[ \sum_e c_e x_e \;=\; c(E(\mathcal{P}_{i-1})) + \theta\, c(A_{i-1}), \]
where we denote $\theta = j/(p-1)$.
Proof
Let $T$ be any spanning tree. We want to show that $\sum_{e \in T} x_e \ge k-1$. $T$ has at least $h(\mathcal{P}_{i-1}) - 1$ edges of value $1$ (edges crossing $\mathcal{P}_{i-1}$) and at least $p-1$ edges of value $\theta$ (edges of $A_{i-1}$ crossing the minimum strength partition of $C_{i-1}$). We have
\[ \sum_{e \in T} x_e \;\ge\; \big(h(\mathcal{P}_{i-1}) - 1\big) + \theta (p-1) \;=\; h(\mathcal{P}_{i-1}) - 1 + j \;=\; k - 1. \]
To calculate the objective value, we have
\[ \sum_e c_e x_e \;=\; c(E(\mathcal{P}_{i-1})) + \theta\, c(A_{i-1}). \]
The harder part is:
Lemma 5
The solution $x$ attains the optimum value of the LP relaxation.
Proof
We prove the claim by constructing a dual solution of equal value. See Figure 3 for the dual LP.
Recall the definitions of $F_i$, $A_i$, $C_i$, and $\mathcal{P}_i$ from above. For each $i$, let $h_i$ be the number of components in the $i$th partition $\mathcal{P}_i$. Let $\sigma_i$ be the strength of the component $C_i$. Let $\mathcal{P}_i$ be the partition of $V$ corresponding to $F_i$. An ideal tree packing, following [28], is a convex combination of trees