1 Introduction
Graph sparsification/compression generally describes a transformation of a large input graph into a smaller or sparser graph that preserves certain features (e.g., distance, cut, congestion, flow) either exactly or approximately. The algorithmic value is clear: the smaller graph can be used as a preprocessed input to an algorithm, so as to reduce subsequent running time and memory requirements. In this paper, we study a natural problem in graph sparsification, the Spanning Tree Congestion (STC) problem. Informally, the STC problem seeks a spanning tree with no tree-edge routing too many of the original edges. The problem is well-motivated by network design applications, where designers aim to build sparse networks that meet traffic demands while ensuring that no connection (edge) is too congested. Indeed, the roots of this problem date back at least 30 years, to the notion of “load factor” [8, 36], with natural motivations from parallel computing and circuit design applications. The STC problem was formally defined by Ostrovskii [30] in 2004, and since then a number of results have been presented. The probabilistic version of the STC problem, coined probabilistic capacity mapping, also finds applications in several important graph algorithmic problems, e.g., the Min-Bisection problem.
Two canonical goals for graph sparsification problems are to understand the trade-off between the sparsity of the output graph(s) and how well the feature is preserved, and to devise (efficient) algorithms for computing the sparser graph(s). These are also our goals for the STC problem. We focus on two scenarios: (A) general connected graphs with $n$ vertices and $m$ edges, and (B) graphs which exhibit certain expanding properties:

For (A), we show that the spanning tree congestion (STC) is at most $O(\sqrt{mn})$, which is a factor of $\sqrt{m/n}$ better than the trivial bound of $m$. We present a polynomial-time algorithm which computes a spanning tree with congestion $O(\sqrt{mn}\cdot \log n)$. We also present another algorithm for computing a spanning tree with congestion $O(\sqrt{mn})$; this algorithm runs in sub-exponential time when $m = \omega(n \log^2 n)$. For almost all ranges of average degree, we also demonstrate graphs with STC at least $\Omega(\sqrt{mn})$.

For (B), we show that the expanding properties permit us to devise a polynomial-time algorithm which computes a spanning tree with congestion $O(n)$. Using this result, together with a separate lower-bound argument, we show that a random graph has STC $\Theta(n)$ with high probability.
For achieving the results for (A), an important intermediate theorem is a generalized Győri-Lovász theorem, which was first proved by Chen et al. [14]. Their proof uses advanced techniques in topology and homology theory, and is non-constructive.
Definition 1.
In a graph $G=(V,E)$, a $k$-connected-partition is a partition of $V$ into $(V_1,\ldots,V_k)$, such that for each $j\in[k]$, $G[V_j]$ is connected.
Theorem 1.1 ([14, Theorems 25, 26]).
Let $G=(V,E)$ be a $k$-connected graph (for brevity, we say “connected” for “vertex-connected” henceforth). Let $w$ be a weight function $w:V\to\mathbb{Z}^+$, and let $w_{\max} := \max_{v\in V} w(v)$. For any $S\subseteq V$, let $w(S) := \sum_{v\in S} w(v)$. Given any $k$ distinct terminal vertices $t_1,\ldots,t_k$, and $k$ positive integers $T_1,\ldots,T_k$ such that for each $j\in[k]$, $T_j\ge w(t_j)$, and $\sum_{j\in[k]} T_j = w(V)$, there exists a connected-partition of $V$ into $(V_1,\ldots,V_k)$, such that for each $j\in[k]$, $t_j\in V_j$ and $w(V_j)\le T_j + w_{\max} - 1$.
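As a concrete illustration of the statement (the function names, the toy graph, and the checking code are ours, not the paper's), the following sketch verifies that a candidate partition of a small 2-connected graph satisfies the terminal, connectivity, and weight conditions:

```python
from collections import deque

def is_connected(adj, part):
    """BFS check that the induced subgraph on `part` is connected."""
    part = set(part)
    start = next(iter(part))
    seen, q = {start}, deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v in part and v not in seen:
                seen.add(v)
                q.append(v)
    return seen == part

def check_gl_partition(adj, weights, terminals, targets, parts):
    """Check the conclusion of the generalized Gyori-Lovasz theorem:
    the parts partition V, each part is connected, contains its
    terminal, and weighs at most its target plus (w_max - 1)."""
    w_max = max(weights)
    union = sorted(v for p in parts for v in p)
    if union != sorted(adj):
        return False
    for t, target, part in zip(terminals, targets, parts):
        if t not in part or not is_connected(adj, part):
            return False
        if sum(weights[v] for v in part) > target + w_max - 1:
            return False
    return True

# A 2-connected example: the 4-cycle 0-1-2-3 with unit weights.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
weights = [1, 1, 1, 1]
print(check_gl_partition(adj, weights, [0, 2], [2, 2], [[0, 1], [2, 3]]))  # True
```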
One of our main contributions is to give the first elementary and constructive proof, by providing a local search algorithm with running time $\widetilde{O}(4^n)$ (the $\widetilde{O}$ notation hides all polynomial factors in the input size).
Theorem 1.2.
(a) There is an algorithm which, given a $k$-connected graph, computes a connected-partition satisfying the conditions stated in Theorem 1.1 in time $\widetilde{O}(4^n)$.
(b) If we need a $k$-partition instead of a $2k$-partition (the input graph remains assumed to be $2k$-connected), the algorithm's running time improves to $\widetilde{O}(2^n)$.
We make three remarks. First, the algorithm of Theorem 1.2(b) is a key ingredient of our algorithm for computing a spanning tree with congestion $O(\sqrt{mn})$. Second, since Theorem 1.1 guarantees the existence of such a partition, the problem of computing such a partition is not a decision problem but a search problem. Our local search algorithm shows that this problem is in the complexity class PLS [20]; we raise its completeness in PLS as an open problem. Third, the running times do not depend on the weights.
The STC Problem, Related Problems and Our Results. Given a connected graph $G=(V,E)$, let $T$ be a spanning tree of $G$. For an edge $(u,v)\in E$, its detour with respect to $T$ is the unique path from $u$ to $v$ in $T$; let $D_T(u,v)$ denote the set of edges in this detour. The stretch of $(u,v)$ with respect to $T$ is $|D_T(u,v)|$, the length of its detour. The dilation of $T$ is $\max_{(u,v)\in E} |D_T(u,v)|$. The edge-congestion of an edge $e\in T$ is $\mathrm{ec}_T(e) := |\{(u,v)\in E : e\in D_T(u,v)\}|$, i.e., the number of edges in $E$ whose detours contain $e$. The congestion of $T$ is $\max_{e\in T} \mathrm{ec}_T(e)$. The spanning tree congestion (STC) of the graph is $\mathrm{STC}(G) := \min_T \max_{e\in T} \mathrm{ec}_T(e)$, where $T$ runs over all spanning trees of $G$.
We note that there is an equivalent cut-based definition for edge-congestion, which we will use in our proofs. For each tree-edge $e$, removing $e$ from $T$ results in two connected components; let $V_e$ denote one of the components. Then the edge-congestion of $e$ equals $|E(V_e, V\setminus V_e)|$, the number of graph edges crossing the cut.
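The detour-based and cut-based counts can be checked against each other on a toy example; the helper names below are ours:

```python
from collections import deque

def tree_path_edges(tree_adj, u, v):
    """Edges on the unique u-v path in a tree (via BFS parent pointers)."""
    parent, q = {u: None}, deque([u])
    while q:
        x = q.popleft()
        for y in tree_adj[x]:
            if y not in parent:
                parent[y] = x
                q.append(y)
    path = set()
    while v != u:
        path.add(frozenset((v, parent[v])))
        v = parent[v]
    return path

def detour_congestion(graph_edges, tree_adj, tree_edge):
    """Detour-based count: graph edges whose tree path uses tree_edge."""
    e = frozenset(tree_edge)
    return sum(e in tree_path_edges(tree_adj, u, v) for (u, v) in graph_edges)

def cut_congestion(graph_edges, tree_adj, tree_edge):
    """Cut-based count: graph edges crossing the cut left by removing tree_edge."""
    a, b = tree_edge
    comp, q = {a}, deque([a])
    while q:
        x = q.popleft()
        for y in tree_adj[x]:
            if {x, y} != {a, b} and y not in comp:
                comp.add(y)
                q.append(y)
    return sum((u in comp) != (v in comp) for (u, v) in graph_edges)

# K4 with a star spanning tree rooted at 0.
graph_edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
tree_adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
tree_edges = [(0, 1), (0, 2), (0, 3)]
for e in tree_edges:
    assert detour_congestion(graph_edges, tree_adj, e) == cut_congestion(graph_edges, tree_adj, e)
print(max(cut_congestion(graph_edges, tree_adj, e) for e in tree_edges))  # 3
```

Here every star edge of K4 has congestion 3, so this spanning tree has congestion 3.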
Various types of congestion, stretch and dilation problems are studied in computer science and discrete mathematics. In these problems, one typically seeks a spanning tree (or some other structure) with minimum congestion or dilation. We mention some of the well-known problems, where minimization is done over all the spanning trees of the given graph:

The Low Stretch Spanning Tree (LSST) problem is to find a spanning tree which minimizes the total stretch of all the edges of the graph [3]. It is easy to see that minimizing the total stretch is equivalent to minimizing the total edge-congestion of the selected spanning tree.

The STC problem is to find a spanning tree of minimum congestion [30].
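The equivalence between total stretch and total edge-congestion noted above is a double-counting identity: each graph edge contributes 1 to the edge-congestion of every tree edge on its detour. A toy verification (names and the example graph are ours):

```python
from collections import deque

def tree_dist(tree_adj, u, v):
    """Distance between u and v in the tree, via BFS."""
    dist, q = {u: 0}, deque([u])
    while q:
        x = q.popleft()
        for y in tree_adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return dist[v]

def crossing(graph_edges, tree_adj, cut_edge):
    """Number of graph edges crossing the cut made by removing cut_edge."""
    a, b = cut_edge
    comp, q = {a}, deque([a])
    while q:
        x = q.popleft()
        for y in tree_adj[x]:
            if {x, y} != {a, b} and y not in comp:
                comp.add(y)
                q.append(y)
    return sum((u in comp) != (v in comp) for (u, v) in graph_edges)

# 5-cycle with one chord; the spanning tree is the path 0-1-2-3-4.
graph_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
tree_adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
tree_edges = [(0, 1), (1, 2), (2, 3), (3, 4)]

total_stretch = sum(tree_dist(tree_adj, u, v) for (u, v) in graph_edges)
total_congestion = sum(crossing(graph_edges, tree_adj, e) for e in tree_edges)
print(total_stretch, total_congestion)  # prints: 10 10
```

The two totals always agree, which is why LSST bounds transfer directly to average edge-congestion.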
There are other congestion and dilation problems which do not seek a spanning tree but some other structure. The most famous among them are the Bandwidth problem and the Cutwidth problem; see the survey [34] for more details.
Among the problems mentioned above, several strong results were published in connection with the LSST problem. Alon et al. [3] showed a lower bound of $\Omega(m\log n)$ on the total stretch. Upper bounds have been derived and many efficient algorithms have been devised; the current best upper bound is $O(m\log n\cdot\log\log n)$ [3, 15, 1, 22, 2]. Since total stretch is identical to total edge-congestion, the best upper bound for the LSST problem automatically implies an upper bound on the average edge-congestion. But in the STC problem, we are concerned with the maximum edge-congestion; as we shall see, for some graphs, the maximum edge-congestion has to be polynomially larger than the average edge-congestion.
In comparison, there were not many strong and general results for the STC problem, though it has been studied extensively over the past 13 years. The problem was formally proposed by Ostrovskii [30] in 2004. Prior to this, Simonson [36] had studied the same parameter under a different name, to approximate the cutwidth of outerplanar graphs. A number of graph-theoretic results were presented on this topic [31, 25, 24, 23, 10]. Some complexity results were also presented recently [29, 9], but most of these results concern special classes of graphs. The most general results regarding STC of general graphs are an upper bound of $O(n^{3/2})$ by Löwenstein, Rautenbach and Regen in 2009 [27], and a matching lower bound by Ostrovskii in 2004 [30]. Note that the above upper bound is not interesting when the graph is sparse, since there is also a trivial upper bound of $m$. In this paper we come up with a strong improvement to these bounds, eight years later:
Theorem (informal): For a connected graph with $n$ vertices and $m$ edges, its spanning tree congestion is at most $O(\sqrt{mn})$. In terms of the average degree $d_{\mathrm{avg}} = 2m/n$, we can state this upper bound as $O(n\sqrt{d_{\mathrm{avg}}})$. There is a matching lower bound.
Our proof for achieving the upper bound is constructive. It runs in exponential time in general; for graphs with $m = \omega(n\log^2 n)$ edges, it runs in sub-exponential time. By using an algorithm of Chen et al. [14] for computing a single-commodity confluent flow from a single-commodity splittable flow, we improve the running time to polynomial, but with a slightly worse upper bound guarantee of $O(\sqrt{mn}\cdot\log n)$.
Motivated by an open problem raised by Ostrovskii [32] concerning the STC of random graphs, we formulate a set of expanding properties, and prove that for any graph satisfying these properties, its STC is at most $O(n)$. We devise a polynomial-time algorithm for computing a spanning tree with congestion $O(n)$ for such graphs. This result, together with a separate lower-bound argument, permits us to show that for a random graph $G(n,p)$ with $p \ge c\log n/n$ for some constant $c$ (note that the STC problem is relevant only for connected graphs; since the threshold function for graph connectivity is $\log n/n$, this result applies for almost all of the relevant range of values of $p$), its STC is $\Theta(n)$ with high probability, thus resolving the open problem raised by Ostrovskii completely.
Min-Max Graph Partitioning and the Generalized Győri-Lovász Theorem. It seems clear that the powerful Theorem 1.1 can make an impact on graph partitioning. We discuss a number of its consequences which might be of wider interest.
Graph partitioning/clustering is a prominent topic in graph theory and algorithms, and has a wide range of applications. A popular goal is to partition the vertices into sets such that the number of edges across different sets is small. While the min-sum objective, i.e., minimizing the total number of edges across different sets, is more widely studied, in various applications the more natural objective is the min-max objective, i.e., minimizing the maximum number of edges leaving each set. The min-max objective is our focus here.
Depending on the application, there are additional constraints on the sets in the partition. Two natural constraints are (i) balancedness: the sets are (approximately) balanced in size, and (ii) induced-connectivity: each set induces a connected subgraph. The balancedness constraint appears in the application of domain decomposition in parallel computing, while the induced-connectivity constraint is motivated by divide-and-conquer algorithms for spanning tree construction. Imposing both constraints simultaneously is not feasible for every graph; for instance, consider the star graph with more than $2k$ vertices when one wants a balanced $k$-partition. Thus, it is natural to ask for which graphs partitions satisfying both constraints exist. Theorem 1.1 implies a simple sufficient condition for the existence of such partitions.
By setting the weight of each vertex in Theorem 1.1 to be its degree, and using an elementary bound on the maximum degree of a connected graph on $n$ vertices and $m$ edges, we have
Proposition 1.
If $G$ is a $k$-connected graph with $m$ edges, then there exists a $k$-connected-partition such that the total degree of vertices in each part is $O(m/k)$. Consequently, the min-max objective is also $O(m/k)$.
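The “consequently” step is simply that the number of edges leaving a part can never exceed the total degree of its vertices; a toy check (names and example ours):

```python
def boundary_and_total_degree(adj, part):
    """Edges leaving `part` vs. total degree of `part`: the boundary
    is bounded above by the total degree, since every leaving edge
    consumes one degree unit of some vertex in the part."""
    part = set(part)
    boundary = sum(1 for u in part for v in adj[u] if v not in part)
    total_degree = sum(len(adj[u]) for u in part)
    return boundary, total_degree

# 6-cycle split into two connected arcs.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
for part in ([0, 1, 2], [3, 4, 5]):
    b, d = boundary_and_total_degree(adj, part)
    print(b, d)  # 2 6 for each arc
    assert b <= d
```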
Due to expander graphs, this bound is optimal up to a small constant factor. This proposition (together with Lemma 4) implies the following lemma, which is crucial for achieving some of our results.
Lemma 1.
Let $G$ be a $k$-connected graph with $m$ edges. Then $\mathrm{STC}(G) = O(m/k)$.
Proposition 1 can be generalized to include approximate balancedness in terms of the number of vertices. By setting the weight of each vertex to be the average degree $2m/n$ plus its degree in $G$, we have
Proposition 2.
Given any fixed $\varepsilon>0$, if $G$ is a $k$-connected graph with $m$ edges and $n$ vertices, then there exists a $k$-connected-partition such that the total degree of vertices in each part is $O_\varepsilon(m/k)$, and the number of vertices in each part is $O_\varepsilon(n/k)$.
Further Related Work. Concerning the STC problem, Okamoto et al. [29] gave an exponential-time algorithm for computing the exact STC of a graph. The probabilistic version of the STC problem, coined probabilistic capacity mapping, is an important tool for several graph algorithmic problems, e.g., the Min-Bisection problem. Räcke [33] showed that in the probabilistic setting, distance and capacity are interchangeable, which briefly says that a general upper bound for one objective implies the same general upper bound for the other. Thus, due to the above-mentioned results on LSST, there is an upper bound of $O(\log n\cdot\log\log n)$ on the maximum average congestion. Räcke's result also implies an $O(\log n)$-approximation algorithm for the Min-Bisection problem, improving upon the $O(\log^{1.5} n)$-approximation algorithm of Feige and Krauthgamer [16]. However, in the deterministic setting, such an interchanging phenomenon does not hold: there is a simple tight bound for dilation, but congestion can be polynomially larger. For the precise definitions, more background and key results about the concepts we have just discussed, we recommend the exposition of Andersen and Feige [5].
Graph partitioning/clustering is a prominent research topic with wide applications, so it comes as no surprise that a lot of work has been done on various aspects of the topic; we refer readers to the two extensive surveys by Schaeffer [35] and by Teng [41]. Kiwi, Spielman and Teng [21] formulated the min-max partitioning problem and gave bounds for classes of graphs with small separators, which were later improved by Steurer [38]. On the algorithmic side, many of the related problems are NP-hard, so the focus is on devising approximation algorithms. Sparked by the seminal work of Arora, Rao and Vazirani [6] on sparsest cut and of Spielman and Teng [37] on local clustering, graph partitioning/clustering algorithms with various constraints have attracted attention across theory and practice; we refer readers to [7] for a fairly recent account of the development. The min-sum objective has been extensively studied; the min-max objective, while striking as the more natural objective in some applications, has received much less attention. The only algorithmic works on this objective (and its variants) are those of Svitkina and Tardos [40] and Bansal et al. [7]. None of the above works addresses the induced-connectivity constraint.
The classical version of the Győri-Lovász Theorem (i.e., with uniform vertex weights) was proved independently by Győri [17] and Lovász [26]. Lovász's proof uses homology theory and is non-constructive. Győri's proof is elementary and implicitly constructive, but he did not analyze the running time. Polynomial-time algorithms for constructing the partition were devised for small values of $k$ [39, 42], but no non-trivial finite-time algorithm was known for general graphs and general $k$. (In 1994, a paper by Ma and Ma in the Journal of Computer Science and Technology claimed a polynomial-time algorithm for all $k$. However, according to a recent study [18], Ma and Ma's algorithm can fall into an endless loop; Győri also said the algorithm should be wrong (see [28]).) Recently, Hoyer and Thomas [19] provided a clean presentation of Győri's proof by introducing their own terminology, which we use for our constructive proof of Theorem 1.1.
Notation. Given a graph $G=(V,E)$, an edge set $F\subseteq E$ and disjoint vertex subsets $V_1, V_2$, we let $F(V_1,V_2)$ denote the set of edges in $F$ with one endpoint in $V_1$ and the other in $V_2$.
2 Technical Overview
To prove the generalized Győri-Lovász theorem constructively, we follow the same framework as Győri's proof [17], and we borrow terminology from the recent presentation by Hoyer and Thomas [19]. It should be emphasized, however, that proving our generalized theorem is not straightforward: in Győri's proof, at each stage a single vertex is moved from one set to another to make progress, while making sure that the former set remains connected. In our setting, in addition to this, we also have to ensure that the weights of the parts do not exceed the specified limits; hence not every vertex that can be moved from one set to another is a candidate for being transferred. The proof is presented in Section 3.
As discussed, a crucial ingredient for our upper bound results is Lemma 1, which is a direct corollary of the generalized Győri-Lovász theorem. The lemma takes care of the highly-connected cases; for the other cases we provide a recursive way to construct a low-congestion spanning tree; see Section 4 for details. For showing our lower bound for general graphs, the challenge is to maintain high congestion while keeping the density small. To achieve this, we combine three expander graphs with little overlap between them, and we further give the overlapping vertices very high degree. This forces a tree-edge adjacent to the centroid of any spanning tree to have high congestion; see Section 5 for details.
We formulate a set of expanding properties which permit constructing a spanning tree with a better congestion guarantee in polynomial time. The basic idea is simple: start with a vertex $r$ of high degree as the root. Now try to grow the tree by repeatedly attaching new vertices to it, while keeping the invariant that the subtrees rooted at each of the neighbours of $r$ are roughly balanced in size; each such subtree is called a branch. But when trying to grow the tree in a balanced way, we will soon realize that as the tree grows, all the remaining vertices may be adjacent only to a small number of “heavy” branches. To help the balanced growth, the algorithm identifies a transferable vertex in a heavy branch, so that it and its descendants in the tree can be transferred to a “lighter” branch. Another technique is to use multiple rounds of matchings between vertices in the tree and the remaining vertices to attach new vertices to the tree; this ensures that no subtree grows uncontrolled. By showing that a random graph satisfies the expanding properties with appropriate parameters, we show that a random graph has STC $O(n)$ with high probability.
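The round-of-matchings idea can be sketched as follows. This is a greatly simplified illustration (all names are ours): it keeps only the rule that, per round, each tree vertex adopts at most one new neighbour, and omits the branch-balancing invariant and the transfer step that the actual algorithm needs.

```python
from collections import deque

def grow_tree_by_matchings(adj, root):
    """Round-based growth: in each round, every tree vertex adopts at
    most one non-tree neighbour (a greedy matching between tree and
    non-tree vertices), so no part of the tree grows too fast."""
    parent = {root: None}
    while len(parent) < len(adj):
        used, adopted = set(), {}
        for u in list(parent):
            if u in used:
                continue
            for v in adj[u]:
                if v not in parent and v not in adopted:
                    adopted[v] = u   # u adopts v in this round
                    used.add(u)
                    break
        if not adopted:  # guard: graph is disconnected
            break
        parent.update(adopted)
    return parent

# Usage: grow a spanning tree of the 6-cycle from root 0.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
parent = grow_tree_by_matchings(adj, 0)
print(len(parent))  # 6, i.e., the tree spans all vertices
```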
3 Generalized GyőriLovász Theorem
We prove Theorem 1.2 in this section. Observe that the classical Győri-Lovász Theorem follows from Theorem 1.1 by taking $w(v)=1$ for all $v\in V$. We note that a perfect generalization, in which one requires that $w(V_j) = T_j$, is not possible: think of the case when all vertex weights are even integers, while some $T_j$ is odd.
Let $G=(V,E)$ be a $k$-connected graph on $n$ vertices and $m$ edges, and let $w:V\to\mathbb{Z}^+$ be a weight function. For any subset $S\subseteq V$, $w(S) := \sum_{v\in S} w(v)$. Let $w_{\max} := \max_{v\in V} w(v)$.
3.1 Key Combinatorial Notions
We first highlight the key combinatorial notions used for proving Theorem 1.2; see Figures 1 and 2 for illustrations of some of these notions.
Fitted Partial Partition. First, we introduce the notion of a fitted partial partition (FPP). An FPP is a tuple $(U_1,\ldots,U_k)$ of subsets of $V$ such that the subsets are pairwise disjoint, and for each $j\in[k]$: (i) $t_j\in U_j$; (ii) $G[U_j]$ is connected; and (iii) $w(U_j)\le T_j + w_{\max} - 1$ (we say the set $U_j$ is fitted for satisfying this inequality).
We say an FPP is a Strict Fitted Partial Partition (SFPP) if $\bigcup_{j\in[k]} U_j$ is a proper subset of $V$. We say the set $U_j$ is light if $w(U_j) < T_j$, and we say it is heavy otherwise. Note that there exists at least one light set in any SFPP, for otherwise $w(\bigcup_j U_j) \ge \sum_j T_j = w(V)$, which means $\bigcup_j U_j = V$. Also note that by taking $U_j = \{t_j\}$ for each $j$, we have an FPP, and hence at least one FPP exists.
Configuration. For a set in an FPP and a vertex , we define the reservoir of with respect to , denoted by , as the vertices in the same connected component as in . Note that .
For a heavy set , a sequence of vertices for some is called a cascade of if and for all . The cascade is called a null cascade if , i.e., if the cascade is empty. Note that for a light set, we do not need to define its cascade, since we do not use it in the proof. (See Figure 1.)
A configuration is defined as a pair , where is an FPP, and is a set of cascades, which consists of exactly one cascade (possibly, a null cascade) for each heavy set in . A vertex that is in some cascade of the configuration is called a cascade vertex.
Given a configuration, we define rank and level inductively as follows. Any vertex in a light set is said to have level $0$. For , a cascade vertex is said to have rank if it has an edge to a level vertex but does not have an edge to any level vertex for . A vertex is said to have level , for , if for some rank cascade vertex , but for any cascade vertex such that the rank of is less than . A vertex that is not in for any cascade vertex is said to have level .
A configuration is called a valid configuration if for each heavy set , rank is defined for each of its cascade vertices and the ranks are strictly increasing along the cascade, i.e., if is the cascade, then . Note that by taking and taking the null cascade for each heavy set (in this case is heavy if ), we get a valid configuration. (See Figure 2.)
Configuration Vectors and Their Total Ordering.
For any vertex, we define its neighborhood level as the smallest level of any vertex adjacent to it. A vertex of level is said to satisfy the maximality property if each vertex adjacent to it is either a rank cascade vertex, has a level of at most , or is one of the terminals for some . For any , a valid configuration is called a maximal configuration if all vertices having level at most satisfy the maximality property. Note that by definition, any valid configuration is a maximal configuration. For a configuration , we define . An edge is said to be a bridge in if , for some , and .
A valid configuration is said to be good if the highest rank of a cascade vertex in is exactly (if there are no cascade vertices, then we take the highest rank as ), is maximal, and all bridges in (if any) are such that and . Note that taking and taking the null cascade for each heavy set gives a good configuration.
For each configuration , we define a configuration vector as below:
where is the number of light sets in , and is the total number of all level vertices in .
Next, we define an ordering on configuration vectors. Let and be configurations. We say if

, or

, and .
We say if and . We say if or . We say if , and for all .
For , we say if

, or

, and .
We say if or . We say ( is strictly better than ) if .
3.2 Proof of Theorem 1.2
We use two technical lemmas about configuration vectors and their orderings to prove Theorem 1.2(a). The proof of Theorem 1.2(b) follows closely the proof of Theorem 1.2(a), but makes use of an observation about the rank of a vertex in the local search algorithm to give an improved bound on the number of configuration vectors navigated by the algorithm.
Lemma 2.
Given any good configuration that does not have a bridge, we can find a good configuration in polynomial time such that .
Proof.
Since is maximal, any vertex that is at level satisfies the maximality property. So, for satisfying maximality, we only need to worry about the vertices that are at level . Let be the set of all vertices such that is adjacent to a level vertex, (i.e., as the highest rank of any cascade vertex is ), , and is not a cascade vertex of rank .
We claim that there exists at least one for which is not empty. If that is not the case, then we exhibit a cut set of size at most . For each such that is a heavy set with a nonnull cascade, let be the highest ranked cascade vertex in . For each such that is a heavy set with a null cascade, let be . Let be the set of all such that is a heavy set. Note that as is an SFPP and hence has at least one light set. Let be the set of all vertices in that have level and be the remaining vertices in . Since is an SFPP, , and since all vertices in have level , we have that . is not empty because there exists at least one light set in and the vertices in a light set have level . We show that there is no edge between and in . Suppose there exists an edge such that and . If , then is a bridge which is a contradiction by our assumption that does not have a bridge. Hence for some . Note that has to be a heavy set, otherwise has level . We have that is not a cascade vertex (as all cascade vertices with level are in ) and (as all such that are in ). Also, is not of level as otherwise, but we assumed is empty. But then, has level at most , has level , and there is an edge . This means that was not maximal, which is a contradiction. Thus, there exists at least one for which is not empty.
For any such that , there is at least one vertex such that . Now we give the configuration as follows. We set for all . For each heavy set such that , we take the cascade of as the cascade of appended with . For each heavy set such that , we take the cascade of as the cascade of . It is easy to see that is maximal as each vertex that had an edge to level vertices in is now either a rank cascade vertex or a level vertex or is for some . Also, notice that all the new cascade vertices that we introduce (i.e., the ’s) have their rank as and there is at least one rank cascade vertex as is not empty for some . Since there were no bridges in , all bridges in have to be from to a vertex having level . Hence, is good. All vertices that had level at most in retained their levels in . And, at least one level vertex of became a level vertex in because the cascade vertex that was at rank becomes a level vertex now in at least one set. Since had no level vertices, this means that . ∎
Lemma 3.
Given a good configuration having a bridge, we can find in polynomial time a valid configuration such that one of the following holds:

, and is a good configuration, or

, there is a bridge in such that and , and is a good configuration.
Proof.
Let be a bridge where . Let be the set containing . Note that because is good. We keep for all . But we modify to get as described below. We maintain that if is a heavy set then is also a heavy set for all , and hence maintain that .
Case 1: is a light set (i.e., when ). We take . For all such that is a heavy set, cascade of is taken as the null cascade. We have because is a light set. So, , and hence is fitted. Also, is connected and hence is an FPP. We have because either became a heavy set in which case , or it is a light set in which case and . It is easy to see that is good.
Case 2: is a heavy set i.e., when .
Case 2.1: . We take . For each such that is a heavy set ( is also a heavy set for such ), the cascade of is taken as the cascade of . is clearly connected and is fitted by the assumption of the case that we are in. Hence is indeed an FPP. Observe that all vertices that had level in still have level in . Since was in by goodness of , also has level in ; and had level in . Hence, . It is also easy to see that remains good.
Case 2.2: . Let be the cascade vertex of rank in . Note that should have such a cascade vertex as has level . Let be , i.e., is the set of all vertices in with level . We initialize . Now, we delete vertices one by one from in a specific order until becomes fitted. We choose the order of deleting vertices such that remains connected. Consider a spanning tree of . has at least one leaf, which is not . We delete this leaf from and . We repeat this process until is just the single vertex or becomes fitted. If is not fitted even when is the single vertex , then delete from . If is still not fitted then delete from . Note that at this point and hence is fitted. Also, note that remains connected. Hence is an FPP. does not become a light set because became fitted when the last vertex was deleted from it. Before this vertex was deleted, it was not fitted and hence had weight at least before this deletion. Since the last vertex deleted has weight at most , has weight at least and hence is a heavy set. Now we branch into two subcases for defining the cascades.
Case 2.2.1: (i.e., was not deleted from in the process above). For each such that is a heavy set, the cascade of is taken as the cascade of . Since a new level vertex is added and all vertices that had level at most retain their level, we have that . It is also easy to see that remains good.
Case 2.2.2: (i.e., was deleted from ). For each such that is a heavy set, the cascade of is taken as the cascade of but with the rank cascade vertex (if it has any) deleted from it. because all vertices that were at a level of or smaller retain their levels. Observe that there are no bridges in to vertices that are at a level at most , all vertices at a level at most still maintain the maximality property, and we did not introduce any cascade vertices. Hence, is good. It only remains to prove that there is a bridge in such that . We know . Since was a rank cascade vertex in , had an edge to such that had level in . Observe that the level of is at most in as well. Hence, taking completes the proof. ∎
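The leaf-deletion order used in Case 2.2 can be illustrated in isolation. The sketch below (all names are ours) peels leaves of a spanning tree of a part until its weight fits under a cap, never deleting the protected vertex, so the remaining set stays connected throughout:

```python
from collections import deque

def peel_until_fitted(adj, part, weights, cap, keep):
    """Repeatedly delete a leaf (other than `keep`) of a spanning tree
    of the part until the remaining weight is at most `cap`.
    Connectivity is preserved because only leaves are removed."""
    part = set(part)
    # Build a BFS spanning tree of G[part] rooted at `keep`.
    parent, q = {keep: None}, deque([keep])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v in part and v not in parent:
                parent[v] = u
                q.append(v)
    children = {u: set() for u in parent}
    for v, p in parent.items():
        if p is not None:
            children[p].add(v)
    kept = set(parent)
    while sum(weights[v] for v in kept) > cap and len(kept) > 1:
        leaf = next(v for v in kept if not children[v] and v != keep)
        kept.remove(leaf)
        children[parent[leaf]].remove(leaf)
    return kept

# Usage: a path 0-1-2-3 with unit weights, cap 2, protecting vertex 0.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(peel_until_fitted(adj, [0, 1, 2, 3], [1, 1, 1, 1], 2, 0))  # {0, 1}
```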
Proof of Theorem 1.2(a).
We always maintain a configuration that is good for some . If the FPP is not an SFPP at any point, then we are done. So assume is an SFPP.
We start with the good configuration where and the cascades of all heavy sets are null cascades. If our current configuration is a good configuration that has no bridge, then we use Lemma 2 to get a configuration such that and is good. We take as the new current configuration . If our current configuration is a good configuration with a bridge, then we get a good configuration for some such that by repeatedly applying Lemma 3 at most times. So in either case, we get a strictly better configuration that is good for some in polynomial time. We call this an iteration of our algorithm.
Notice that the number of iterations possible is at most the number of distinct configuration vectors possible. It is easy to see that the number of distinct configuration vectors with highest rank at most is at most . Since the rank of any vertex is at most , the number of iterations of our algorithm is at most , which is at most . Since each iteration runs in polynomial time as guaranteed by the two lemmas, the required running time is .
When the algorithm terminates, the FPP given by the current configuration is not an SFPP and this gives the required partition. ∎
Proof of Theorem 1.2(b).
Since any connected graph is also vertex-connected, the algorithm will give the required partition due to Theorem 1.2(a). We only need to prove the better running time claimed in Theorem 1.2(b). For this, we show that the highest rank attained by any vertex during the algorithm is at most . Since the number of distinct configuration vectors with highest rank is at most , we then have that the running time is , which is , as claimed. Hence, it only remains to prove that the highest rank is at most .
For this, observe that in a good configuration, for each , the union of all vertices having level and the set of terminals together form a cut-set. Since the graph is connected, this means that for each , the number of vertices having level is at least . The required bound on the rank easily follows. ∎
4 Upper Bounds for Spanning Tree Congestion
Lemma 4.
In a graph $G=(V,E)$, let $r$ be a vertex, and let $u_1,\ldots,u_k$ be any $k$ neighbours of $r$. Suppose that there exists a connected-partition $(V_1,\ldots,V_k)$ of $V\setminus\{r\}$ such that for all $i\in[k]$, $u_i\in V_i$, and the sum of the degrees of the vertices in each $V_i$ is at most $W$. Let $T_i$ be an arbitrary spanning tree of $G[V_i]$. Let $e_i$ denote the edge $(r,u_i)$. Let $T$ be the spanning tree of $G$ defined as $\bigcup_{i\in[k]} T_i \cup \{e_1,\ldots,e_k\}$. Then $T$ has congestion at most $W$.
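The construction in Lemma 4 is easy to carry out explicitly. The sketch below (names and example ours) joins an arbitrary spanning tree of each part to the root via the edges from the root to its representatives:

```python
from collections import deque

def spanning_tree_edges(adj, verts):
    """Arbitrary (BFS) spanning tree of the induced subgraph on `verts`."""
    verts = set(verts)
    start = next(iter(verts))
    parent, q = {start: None}, deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v in verts and v not in parent:
                parent[v] = u
                q.append(v)
    return [(v, p) for v, p in parent.items() if p is not None]

def lemma4_tree(adj, r, reps, parts):
    """Join an arbitrary spanning tree of each part to the root r via
    the edges (r, u_i), where u_i is r's neighbour inside part i."""
    edges = [(r, u) for u in reps]
    for part in parts:
        edges.extend(spanning_tree_edges(adj, part))
    return edges

# Wheel on 5 vertices: hub 0, rim cycle 1-2-3-4, split into two parts.
adj = {0: [1, 2, 3, 4], 1: [0, 2, 4], 2: [0, 1, 3], 3: [0, 2, 4], 4: [0, 1, 3]}
edges = lemma4_tree(adj, 0, [1, 3], [[1, 2], [3, 4]])
print(len(edges))  # 4 edges on 5 vertices: a spanning tree
```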
Theorem 4.1.
For any connected graph, there is an algorithm which computes a spanning tree with congestion at most $O(\sqrt{mn})$; it runs in exponential time in general, and in sub-exponential time when $m=\omega(n\log^2 n)$.
Theorem 4.2.
For any connected graph, there is a polynomial-time algorithm which computes a spanning tree with congestion at most $O(\sqrt{mn}\cdot\log n)$.
The two algorithms follow the same framework, depicted in Algorithm 1. It is a recursive algorithm; the parameter $m$ is a global parameter: the number of edges of the input graph at the first level of the recursion. Let $n$ denote the number of vertices of this graph.
The only difference between the two algorithms is in how Line 1 is executed, with a trade-off between the running time of the step and the guarantee . For proving Theorem 4.1, we use Theorem 1.2(b), Proposition 1 and Lemma 4, yielding and . For proving Theorem 4.2, we make use of an algorithm of Chen et al. [14], which yields and .
In the rest of this section, we first discuss the algorithm of Chen et al., and then we prove Theorem 4.2. The proof of Theorem 4.1 is almost identical and is deferred to Appendix 0.A.2.
Single-Commodity Confluent Flow and the Algorithm of Chen et al. In a single-commodity confluent flow problem, the input includes a graph, a demand function and sinks. For each vertex, a flow of amount equal to its demand is routed from it to one of the sinks. But there is a restriction: at every vertex, the outgoing flow must leave on at most one edge, i.e., the outgoing flow is unsplittable. The problem is to find a flow satisfying the demands which minimizes the node congestion, i.e., the maximum incoming flow among all vertices. Since the incoming flow is maximized at one of the sinks, it is equivalent to minimize the maximum flow received among all sinks. (Here, we assume that no flow entering a sink will leave.)
The single-commodity splittable flow problem is almost identical to the single-commodity confluent flow problem, except that the above restriction is dropped, i.e., the outgoing flow at a vertex can split along multiple edges. Note that here the maximum incoming flow might not be attained at a sink. It is known that single-commodity splittable flow can be solved in polynomial time. For brevity, we drop the phrase “single-commodity” from now on.
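The confluence restriction is what makes node congestion easy to evaluate: once every non-sink vertex fixes its single outgoing edge, the whole flow is determined. A minimal sketch (names and example ours):

```python
def node_congestion(demand, next_hop, sinks):
    """Node congestion of a confluent flow: every non-sink vertex
    forwards all of its accumulated flow along its single out-edge
    (`next_hop`), and flow stops at a sink.  Returns the maximum
    total flow entering any vertex."""
    inflow = {v: 0.0 for v in demand}
    for v, d in demand.items():
        u = v
        while u not in sinks:
            u = next_hop[u]   # the unique outgoing edge of u
            inflow[u] += d
    return max(inflow.values())

# Usage: path 0 -> 1 -> 2 with sink 2 and unit demands at 0 and 1.
print(node_congestion({0: 1.0, 1: 1.0, 2: 0.0}, {0: 1, 1: 2}, {2}))  # 2.0
```

As the definition predicts, the maximum incoming flow (2.0) is attained at the sink.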
Theorem 4.3 ([14, Section 4]).
Suppose that for the given graph, demands and $k$ sinks, there is a splittable flow with node congestion $C$. Then there exists a polynomial-time algorithm which computes a confluent flow with node congestion at most $(1+\ln k)\cdot C$ for the same input.
Corollary 1.
Let $G$ be a connected graph with $m$ edges. Then for any $k$ and any $k$ vertices $u_1,\ldots,u_k$, there exists a polynomial-time algorithm which computes a connected-partition $(V_1,\ldots,V_k)$ such that for all $i\in[k]$, $u_i\in V_i$, and the total degree of vertices in each $V_i$ is $O((m/k)\cdot\log n)$.
Congestion Analysis. We view the whole recursion process as a recursion tree. There is no endless loop, since down every path in the recursion tree, the numbers of vertices in the input graphs are strictly decreasing. On the other hand, note that a leaf of the recursion tree results either (i) when the input graph to that call satisfies , or (ii) when Lines 1–1 are executed. An internal node appears only when the vertex-connectivity of the input graph is low, and it makes two recursive calls.
We prove the following statement by induction, bottom-up: for each graph which is the input to some call in the recursion tree, the returned spanning tree of that call has congestion at most .
We first handle the two base cases (i) and (ii). In case (i), FindLCST returns an arbitrary spanning tree, and the congestion is bounded by . In case (ii), by Corollary 1 and Lemma 4, FindLCST returns a tree with congestion at most .
Next, let be the input graph to a call which is represented by an internal node of the recursion tree. Recall the definitions of in the algorithm.
Let . Note that . Then by the induction hypothesis, the congestion of the returned spanning tree is at most
(1) 
Viewing as a real variable and taking the derivative, it is easy to see that the above expression is maximized at . Thus the congestion is at most