1 Introduction
In the edge connectivity augmentation problem, we are given an undirected graph $G = (V, E)$ with (integer) edge weights $w$, and a target connectivity $\tau$. The goal is to find a minimum weight set of edges on $V$ such that adding these edges to $G$ makes the graph $\tau$-connected. (In other words, the value of the minimum cut of the graph after the augmentation should be at least $\tau$.) The edge connectivity augmentation problem is known to be tractable in polynomial time; throughout, $m$ and $n$ denote the number of edges and vertices respectively in $G$. This was first shown by Watanabe and Nakamura [WatanabeN87] for unweighted graphs, and the first strongly polynomial algorithm was obtained by Frank [Frank92]. Since then, several algorithms [CaiS89, NaorGM97, Gabow16, Gabow94, NagamochiI97] have progressively improved the running time to the current best $\tilde{O}(n^2)$ obtained by Benczúr and Karger [BenczurK00].^1 (^1 $\tilde{O}(\cdot)$ ignores (poly)logarithmic factors in the running time.) In this paper, we give an algorithm to solve the edge connectivity augmentation problem using polylogarithmically many calls to any maxflow algorithm: There is a randomized, Monte Carlo algorithm for the edge connectivity augmentation problem that runs in $\tilde{O}(F(m, n))$ time, where $F(m, n)$ is the running time of any maximum flow algorithm on an undirected graph containing $m$ edges and $n$ vertices. Using the current best maxflow algorithm on undirected graphs [BrandLLSSSW21],^2 (^2 We note that for sparse graphs, there is a slightly faster maxflow algorithm that runs in $m^{3/2 - \epsilon}$ time [GaoLP21], where $\epsilon > 0$ is a small constant. If we use this maxflow algorithm instead, we also get a running time of $\tilde{O}(m^{3/2 - \epsilon})$ for the augmentation problem.) this yields a running time of $\tilde{O}(m + n^{1.5})$, thereby improving on the previous best bound of $\tilde{O}(n^2)$.
The edge connectivity augmentation problem is closely related to edge splitting off, a widely used tool in the graph connectivity literature (e.g., [Gabow94, NagamochiI97]). A pair of (weighted) edges $(u, s)$ and $(s, v)$ both incident on a common vertex $s$ is said to be split off by weight $\Delta$ if we reduce the weight of both these edges by $\Delta$ and increase the weight of their shortcut edge $(u, v)$ by $\Delta$. Such a splitting off is valid if it does not change the (Steiner) connectivity of the vertices $V \setminus \{s\}$. If all edges incident on $s$ are eliminated by a sequence of splitting off operations, we say that the vertex $s$ is split off. We call the problem of finding a set of edges to split off a given vertex $s$ the edge splitting off problem.
Lovász [Lovasz79] initiated the study of edge splitting off by showing that any vertex with even degree in an undirected graph can be split off while maintaining the (Steiner) connectivity of the remaining vertices. (Later, more powerful splitting off theorems [Mader78] were obtained that preserve stronger properties and/or apply to directed graphs, but these come at the cost of slower algorithms. We do not consider these extensions in this paper.) The splitting off operation has emerged as an important inductive tool in the graph connectivity literature, and many algorithms with progressively faster running times have been proposed for the edge splitting off problem [CaiS89, Frank92, Gabow94, NagamochiI97]. Currently, the best running time is $\tilde{O}(n^2)$, obtained in the same paper of Benczúr and Karger as the edge connectivity augmentation result [BenczurK00]. We improve this bound as well: There is a randomized, Monte Carlo algorithm for the edge splitting off problem that runs in $\tilde{O}(F(m, n))$ time, where $F(m, n)$ is the running time of any maximum flow algorithm on an undirected graph containing $m$ edges and $n$ vertices.
As in previous work (e.g., [BenczurK00]), instead of giving separate algorithms for the edge connectivity augmentation and edge splitting off problems, we give an algorithm for the degree-constrained edge connectivity augmentation (DECA) problem, which generalizes both. In this problem, given an edge connectivity augmentation instance, we add degree constraints requiring the total weight of added edges incident on each vertex to be at most that vertex's degree constraint. The goal is to either return an optimal set of edges for the augmentation problem that satisfies the degree constraints, or to report that the instance is infeasible.
Clearly, DECA generalizes the edge connectivity augmentation problem. To see why DECA also generalizes splitting off, create the following DECA instance from a splitting off instance: Remove the edges incident on $s$, and set the degree constraint $b(v)$ of each vertex $v$ to the weighted degree of $v$ in these removed edges. Then, set $\tau$ to the (Steiner) connectivity of $V \setminus \{s\}$ in the input graph. Once the DECA solution is obtained, for vertices whose degree in the solution is smaller than $b(v)$, use an arbitrary weighted matching to increase the degrees to exactly $b(v)$.
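The first step of this reduction can be sketched in code. This is an illustrative sketch under our own naming, not the paper's implementation: edges are stored in a dict keyed by frozenset vertex pairs, and the Steiner connectivity $\tau$ is passed in rather than computed.

```python
def deca_instance_from_splitting_off(edges, s, tau):
    """Build a DECA instance from a splitting off instance.

    edges: dict mapping frozenset({u, v}) -> weight; s: vertex to split off.
    Returns (remaining_edges, degree_bound, tau): the edges incident on s are
    removed, and each neighbor v of s gets a degree bound b(v) equal to the
    weight w(s, v) it lost.
    """
    remaining = {}
    degree_bound = {}
    for e, w in edges.items():
        if s in e:
            (v,) = e - {s}  # the other endpoint of the removed edge
            degree_bound[v] = degree_bound.get(v, 0) + w
        else:
            remaining[e] = w
    return remaining, degree_bound, tau
```

After solving the DECA instance, the post-processing (topping up degrees with an arbitrary weighted matching) would follow as described above.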
For the DECA problem, we show that: There is a randomized, Monte Carlo algorithm for the degree-constrained edge connectivity augmentation problem that runs in $\tilde{O}(F(m, n))$ time, where $F(m, n)$ is the running time of any maximum flow algorithm on an undirected graph containing $m$ edges and $n$ vertices. This theorem immediately implies the two results stated above. The rest of this paper focuses on proving it.
1.1 Our Techniques
A key tool in many augmentation/splitting off algorithms (e.g., in [WatanabeN87, NaorGM97, Gabow16, Benczur94, BenczurK00]) is that of extreme sets. A nonempty set of vertices $U \subseteq V$ is called an extreme set in graph $G$ if for every nonempty proper subset $U' \subsetneq U$, we have $\delta_G(U') > \delta_G(U)$, where $\delta_G(U')$ (resp., $\delta_G(U)$) is the total weight of edges with exactly one endpoint in $U'$ (resp., $U$) in $G$. (If the graph is unambiguous, we drop the subscript and write $\delta(U)$.) The extreme sets form a laminar family, therefore allowing an $O(n)$-sized representation in the form of an extreme sets tree. The main bottleneck of the Benczúr–Karger algorithm is in the construction of the extreme sets tree. They use the recursive contraction framework of Karger and Stein [KargerS96] for this construction, which takes $\tilde{O}(n^2)$ time. In this paper, we obtain a faster algorithm for finding the extreme sets of a graph: There is a randomized, Monte Carlo algorithm for finding the extreme sets tree of an undirected graph that runs in $\tilde{O}(F(m, n))$ time, where $F(m, n)$ is the running time of any maximum flow algorithm on an undirected graph containing $m$ edges and $n$ vertices.
Our extreme sets algorithm is based on the isolating cuts framework that we introduced in a recent paper [LiP20deterministic]. (This was independently discovered by Abboud et al. [AbboudKT21].) Given a set of terminal vertices, this framework uses polylogarithmically many maxflows to find the minimum cuts that separate each individual terminal from the remaining terminals (called isolating cuts). In the current paper, instead of using the framework directly, we use a gadget called a Cut Threshold that is defined as follows: for a given vertex $s$ and threshold $\lambda$, the Cut Threshold $\mathrm{CT}(s, \lambda)$ is the set of vertices $v$ such that the value of the minimum $(s, v)$-cut is at least $\lambda$. We showed recently [LiP21approximate] that the isolating cuts framework can be used to find the Cut Threshold for any vertex $s$ and threshold $\lambda$ using polylogarithmically many maxflows. We use this result here, and focus on obtaining extreme sets using a Cut Threshold subroutine.
Our main observation is that if an extreme set partially overlaps the complement $\bar{C}$ of a Cut Threshold, then it must actually be wholly contained in $\bar{C}$. (Intuitively, one may interpret this property as saying that an extreme set and a Cut Threshold are noncrossing, although our property is actually stronger, and the noncrossing property alone does not suffice for our algorithm.) This allows us to design a divide and conquer algorithm that recurses on two subproblems generated by contracting each side of a carefully chosen Cut Threshold. The above property ensures that every extreme set in the original problem continues to be an extreme set in one of the two subproblems. In order to bound the depth of recursion, it is important to use a Cut Threshold that produces a balanced partition of vertices. We ensure this by adapting a recent observation of Abboud et al. [AbboudKT20] which asserts that a Cut Threshold based on the connectivity between two randomly chosen vertices is balanced with constant probability. One additional complication is that while the contraction of the Cut Threshold (or its complement) does not eliminate any extreme set, it might actually add new extreme sets. We run a postprocessing phase where we use a dynamic tree data structure to eliminate these spurious extreme sets added by the recursive algorithm. After obtaining the extreme sets tree, the next step (in our algorithm and in previous work such as [BenczurK00]) is to add an external vertex $s$ and use a postorder traversal on the extreme sets tree to find an optimal set of edges incident on $s$ for edge connectivity augmentation. This step takes $\tilde{O}(n)$ time.
Next, we split off vertex $s$ using an iterative algorithm that again uses the extreme sets tree. At a high level, this splitting off algorithm follows a similar structure to the Benczúr–Karger algorithm, but with a couple of crucial differences that improve the running time from $\tilde{O}(n^2)$ to $\tilde{O}(m)$. The first difference is in the construction of a mincut cactus data structure. At the time of the Benczúr–Karger result, the fastest cactus algorithm was based on recursive contraction [KargerS96] and had a running time of $\tilde{O}(n^2)$. But, this has since been improved to $\tilde{O}(m)$ by Karger and Panigrahi [KargerP09]. Using this faster algorithm removes the first bottleneck in the augmentation algorithm.
The second and more significant improvement is in the use of data structures in the splitting off algorithm. This is an iterative algorithm that has $O(n)$ iterations and adds $O(n)$ edges in each iteration. The Benczúr–Karger algorithm updates its data structures for each edge in all these iterations, thereby incurring $O(n^2)$ updates. Instead, we use the following observation (which was known earlier): there are only $O(n)$ distinct edges used across the iterations, and the total number of changes in the set of edges from one iteration to the next is $O(n)$ overall. To exploit this property, we use a lazy procedure based on the top tree data structure due to Goldberg et al. [goldberg1991use] (and additional priority queues to maintain various ordered lists). Our data structure only performs updates on edges that are added/removed in an iteration, thereby reducing the total number of updates to $O(n)$, and each update can be implemented in $O(\log n)$ time using standard properties of top trees and priority queues. We obtain the following:
Given an input graph and its extreme sets tree, there is an $\tilde{O}(m)$-time algorithm that solves the degree-constrained edge connectivity augmentation problem.
The DECA theorem of Section 1 now follows by combining the two results of this subsection.
Roadmap.
We give the algorithm for finding extreme sets, establishing the first result of Section 1.1, in Section 2. The algorithm for the DECA problem that uses the extreme sets tree, establishing the second result of Section 1.1, is given in Section 3.
2 Algorithm for Extreme Sets
In this section, we present our extreme sets algorithm, thereby proving the extreme sets result stated in Section 1.1.
Recall that the input graph $G = (V, E)$ is an undirected graph with integer edge weights $w$. An extreme set is a set of vertices $U$ such that for every nonempty proper subset $U' \subsetneq U$, we have $\delta(U') > \delta(U)$. Note that all singleton vertices are extreme sets by default since they do not have nonempty proper subsets.
The following is a well-known property of extreme sets (see, e.g., [BenczurK00]): The extreme sets of an undirected graph form a laminar family, i.e., for any two extreme sets, either one is contained in the other, or they are entirely disjoint. This lemma allows us to represent the extreme sets of $G$ as a rooted tree $T$ with the following properties:
- The set of vertices in $G$ exactly corresponds to the set of leaves in $T$.
- The extreme sets in $G$ exactly correspond to the (proper) subtrees of $T$ in the following sense: for any extreme set $U$, there is a unique subtree of $T$, denoted $T_U$, such that the vertices in $U$ are exactly the leaves in $T_U$. Overloading notation, we also use $T_U$ to denote the root of the subtree corresponding to $U$ in $T$.
We call $T$ the extreme sets tree of $G$, and give an algorithm to construct it in this section.
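To make the definition and the laminar-family property concrete, the following minimal sketch (our own naming, exponential-time brute force, nothing like the algorithm of this section) enumerates extreme sets of a tiny weighted graph and checks laminarity.

```python
from itertools import combinations

def cut_weight(adj, U):
    """delta(U): total weight of edges with exactly one endpoint in U."""
    return sum(w for (u, v), w in adj.items() if (u in U) != (v in U))

def extreme_sets(adj, vertices):
    """All extreme sets, by brute force over subsets (for intuition only)."""
    out = []
    for k in range(1, len(vertices) + 1):
        for U in map(set, combinations(vertices, k)):
            d = cut_weight(adj, U)
            # U is extreme iff every nonempty proper subset has a strictly
            # larger cut value (singletons are extreme vacuously).
            if all(cut_weight(adj, set(S)) > d
                   for r in range(1, len(U))
                   for S in combinations(U, r)):
                out.append(frozenset(U))
    return out

def is_laminar(sets):
    """Laminar family: any two sets either nest or are disjoint."""
    return all(a <= b or b <= a or not (a & b) for a, b in combinations(sets, 2))
```

On the weighted path a-b-c with $w(a,b) = 1$ and $w(b,c) = 2$, the extreme sets are the three singletons, $\{b, c\}$, and the whole vertex set, which indeed nest into a tree.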
We will use a Cut Threshold procedure from our recent work [LiP21approximate]. Recall that a Cut Threshold is defined as follows: Let $\lambda(s, v)$ denote the value of the maxflow between two vertices $s$ and $v$; we call $\lambda(s, v)$ the connectivity between $s$ and $v$. Then, the Cut Threshold for vertex $s$ and threshold $\lambda$, denoted $\mathrm{CT}(s, \lambda)$, is the set of all vertices $v$ such that $\lambda(s, v) \geq \lambda$. In recent work, we gave an algorithm for finding a Cut Threshold [LiP21approximate] based on our isolating cuts framework [LiP20deterministic]: [Li and Panigrahi [LiP21approximate]] Let $G$ be an undirected graph containing $m$ edges and $n$ vertices. For any given vertex $s$ and threshold $\lambda$, there is a randomized Monte Carlo algorithm for finding the Cut Threshold $\mathrm{CT}(s, \lambda)$ in $\tilde{O}(F(m, n))$ time, where $F(m, n)$ is the running time of any maxflow algorithm on undirected graphs containing $m$ edges and $n$ vertices.
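As a concrete illustration of the definition only, the sketch below computes $\mathrm{CT}(s, \lambda)$ naively with one maxflow per vertex ($n - 1$ maxflows, not the polylogarithmically many of the theorem above). The Edmonds–Karp routine and all names are our own.

```python
from collections import defaultdict, deque

def max_flow(edges, s, t):
    """Edmonds-Karp on an undirected weighted graph; edges: (u, v) -> capacity.
    For small illustrative instances only."""
    cap = defaultdict(int)
    for (u, v), c in edges.items():
        cap[(u, v)] += c  # an undirected edge gives capacity in both directions
        cap[(v, u)] += c
    nbrs = defaultdict(set)
    for (u, v) in list(cap):
        nbrs[u].add(v)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in nbrs[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # augment along the path by its bottleneck capacity
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[e] for e in path)
        for (u, v) in path:
            cap[(u, v)] -= aug
            cap[(v, u)] += aug
        flow += aug

def cut_threshold(edges, vertices, s, lam):
    """CT(s, lam): all v with connectivity lambda(s, v) >= lam (s included)."""
    return {v for v in vertices if v == s or max_flow(edges, s, v) >= lam}
```

On the weighted path a-b-c with weights 1 and 2, we have $\lambda(b, c) = 2$ but $\lambda(b, a) = 1$, so $\mathrm{CT}(b, 2) = \{b, c\}$.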
In order to use this result, we first relate extreme sets to Cut Thresholds. We need the following definition: We say that a set $S$ of vertices respects the extreme sets of $G$ if for any extreme set $U$ of $G$, one of the following holds: (a) $U \subseteq S$, or (b) $S \subseteq U$, or (c) $U \subseteq V \setminus S$. In other words, if there exist two vertices $u, v \in U$ such that $u \in S$ and $v \notin S$, then it must be that $S \subseteq U$. Our main observation that relates extreme sets to Cut Thresholds is the following: Let $G$ be an undirected graph. For any vertex $s$ and threshold $\lambda$, the complement of the Cut Threshold $\mathrm{CT}(s, \lambda)$, denoted $\bar{C} = V \setminus \mathrm{CT}(s, \lambda)$, respects the extreme sets of $G$. Note that $s \in \mathrm{CT}(s, \lambda)$ by definition of $\mathrm{CT}(s, \lambda)$. The crucial ingredient in the proof of this lemma is that minimum $(s, v)$-cuts for any $v \in \bar{C}$ are noncrossing with respect to the cut $(\mathrm{CT}(s, \lambda), \bar{C})$: For any vertex $v \in \bar{C}$, the side containing $v$ of a minimum $(s, v)$-cut must be entirely contained in $\bar{C}$. Suppose not; then, there is at least one vertex $x \in \mathrm{CT}(s, \lambda)$ on the $v$-side, i.e., the minimum $(s, v)$-cut also separates $s$ and $x$. But, then $\lambda(s, x) \leq \lambda(s, v) < \lambda$. This contradicts $x \in \mathrm{CT}(s, \lambda)$. Now, we use this noncrossing claim to prove the lemma. [Proof] An extreme set $U$ that violates the lemma has the following properties: (a) $U$ separates $s$ and $v$ for some vertex $v \in \bar{C}$, and (b) $U$ contains some vertex $x \in \mathrm{CT}(s, \lambda)$. Let $S$ denote the side containing $v$ of a minimum $(s, v)$-cut.
Now, since the cut function $\delta$ is submodular, we have:

(1) $\delta(U) + \delta(S) \geq \delta(U \cap S) + \delta(U \cup S)$.

But, by the noncrossing claim, we have $S \subseteq \bar{C}$. Now, since $U$ separates $s$ and $v$, it follows that $U \cup S$ also separates $s$ and $x$. As a consequence,

(2) $\delta(U \cup S) \geq \lambda(s, x) \geq \lambda$.

Finally, since $v \in \bar{C}$, we have $\lambda(s, v) < \lambda$. Since $S$ is the side of a minimum $(s, v)$-cut, it follows that:

(3) $\delta(S) = \lambda(s, v) < \lambda$.

Combining Equation 2 and Equation 3, we get:

(4) $\delta(U \cup S) > \delta(S)$.

Finally, we note that $U \cap S$ is a proper subset of $U$. This is because $U \cap S$ contains one vertex among $s, v$ (namely $v$) by virtue of both sets separating them, but $S$ is entirely contained in $\bar{C}$ by the noncrossing claim while $x \in U \cap \mathrm{CT}(s, \lambda)$; hence $x \in U \setminus S$. Now, since $U$ is an extreme set, we have

(5) $\delta(U \cap S) > \delta(U)$.

The lemma follows by noting that Equation 4 and Equation 5 together contradict Equation 1.
2.1 Description of the Algorithm
We now use the respecting lemma of Section 2 to design a divide and conquer algorithm for extreme sets. The algorithm has two phases. In the first phase, we construct a tree $T_0$ that includes all extreme sets of $G$ as subtrees, but might contain other subtrees that do not correspond to extreme sets. In the second phase, we remove all subtrees of $T_0$ that are not extreme sets and obtain the final extreme sets tree $T$.
Phase 1:
The first phase of the algorithm uses a recursive divide and conquer strategy. A general recursive subproblem is defined on a graph $G'$ that is obtained by contracting some sets of vertices in $G$ that will be defined below. The contracted vertices are denoted $V_c$ and the uncontracted vertices $V_u$. Thus, the vertex set of $G'$ is $V' = V_c \cup V_u$. Note that the sets of vertices of $G$ represented by the contracted vertices form a partition of $V \setminus V_u$. The graph $G'$ is obtained from $G$ by contracting each set of vertices that is represented by a single contracted vertex in $V_c$, deleting self-loops and unifying parallel edges into a single edge whose weight is the cumulative weight of the parallel edges. The goal of the recursive subproblem on $G'$ is to build a tree $T'$ that contains all extreme sets in $G'$ as subtrees. Initially, $V_c = \emptyset$ and $V_u = V$, i.e., $G' = G$. Therefore, the overall goal of the algorithm is to find all extreme sets of $G$.
First, we perturb the edge weights of the input graph as follows: We independently generate a random value $r(e)$ for each edge $e$ that is drawn from the uniform distribution defined on $\{1, 2, \ldots, N\}$. (We will set the precise value of $N$ later, but it will be polynomial in the size of the graph $G$.) We define new edge weights $w'(e) = mN \cdot w(e) + r(e)$ for all edges $e$. We first show that all extreme sets under the original edge weights $w$ continue to be extreme sets under the new edge weights $w'$: All extreme sets in $G$ under edge weights $w$ are also extreme sets under edge weights $w'$. To show this lemma, we will prove that the (strict) relative order of cut values is preserved by the transformation from $w$ to $w'$. Let $\delta_w(X)$ and $\delta_{w'}(X)$ respectively denote the value of $\delta(X)$ under edge weights $w$ and $w'$. Then, we have the following: If $\delta_w(X) < \delta_w(Y)$ for two sets of vertices $X, Y$, then $\delta_{w'}(X) < \delta_{w'}(Y)$. Since all edge weights are integers, $\delta_w(X) < \delta_w(Y)$ implies

(6) $\delta_w(X) \leq \delta_w(Y) - 1$.
Let $r(X)$ (resp., $r(Y)$) denote the sum of the random values $r(e)$ over all edges $e$ that have exactly one endpoint in $X$ (resp., $Y$). Then,

$\delta_{w'}(X) = mN \cdot \delta_w(X) + r(X)$
$\leq mN \cdot (\delta_w(Y) - 1) + r(X)$  (by Equation 6)
$\leq mN \cdot \delta_w(Y)$  (since $r(u, v) \leq N$, $r(X) \leq mN$)
$< mN \cdot \delta_w(Y) + r(Y) = \delta_{w'}(Y)$.  (since $r(u, v) \geq 1$, $r(Y) \geq 1$)
We now prove the perturbation lemma using this claim:

[Proof] Suppose $X$ is an extreme set under edge weights $w$. Then, $\delta_w(X') > \delta_w(X)$ for all nonempty proper subsets $X' \subsetneq X$. By the claim above, this implies that $\delta_{w'}(X') > \delta_{w'}(X)$. Thus, $X$ is an extreme set under edge weights $w'$ as well.

The perturbation lemma implies that we can use edge weights $w'$ instead of $w$, since our goal in this phase is to obtain a tree that includes as subtrees all the extreme sets in $G$ under edge weights $w$.
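The perturbation and the preserved ordering of cut values are easy to simulate directly. The sketch below uses our own naming; note that the order preservation it checks is guaranteed deterministically by the claim above, for any draw of the random values.

```python
import random

def perturb(edges, N):
    """New weights w'(e) = m*N*w(e) + r(e), with r(e) uniform in {1, ..., N}."""
    m = len(edges)
    return {e: m * N * w + random.randint(1, N) for e, w in edges.items()}

def delta(weights, U):
    """Cut value of U under the given edge weights."""
    return sum(w for (u, v), w in weights.items() if (u in U) != (v in U))
```

For example, on a weighted triangle, every strict inequality $\delta_w(X) < \delta_w(Y)$ between cut values survives the transformation to $w'$.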
We are now ready to describe the recursive algorithm. There are two base cases: if $|V'| = O(1)$ or if $V_u = \emptyset$, we use the Benczúr–Karger algorithm [BenczurK00] to find the extreme sets tree of $G'$ and return it as $T'$.
For the recursive case, we have $|V'| = \omega(1)$ and $V_u \neq \emptyset$. Let $p, q$ be two distinct vertices sampled uniformly at random from $V'$ (these vertices may either be contracted or uncontracted vertices), and let $\lambda = \lambda(p, q)$ be the connectivity between $p$ and $q$ in $G'$ under edge weights $w'$. We invoke the Cut Threshold algorithm of Section 2 on $G'$ to find the Cut Threshold $C = \mathrm{CT}(p, \lambda)$ and define $\bar{C} = V' \setminus C$. We repeat this process until we get a pair $(C, \bar{C})$ that satisfies:

(7) $\min(|C|, |\bar{C}|) \geq |V'| / 8$.
Once Equation 7 is satisfied, we create the following two subproblems:

- In the first subproblem, we contract the vertices in $\bar{C}$ into a single (contracted) vertex $v_{\bar{C}}$ to form a new graph $G_1$. We find the tree $T_1$ on $G_1$ by recursion.
- In the second subproblem, we contract the vertices in $C$ into a single (contracted) vertex $v_C$ to form a new graph $G_2$. We find the tree $T_2$ on $G_2$ by recursion.
We combine the trees $T_1$ and $T_2$ to obtain the overall tree $T'$ as follows: in tree $T_2$, we discard the leaf representing the contracted vertex $v_C$; let $T_2'$ denote this new tree, whose leaves correspond to the vertices in $\bar{C}$. Next, note that $v_{\bar{C}}$ is a contracted vertex in $G_1$ that appears as a leaf in tree $T_1$. We replace the leaf $v_{\bar{C}}$ in this tree with the tree $T_2'$ to obtain our eventual tree $T'$. (This is illustrated in Figure 1.)
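The tree surgery in this combination step can be sketched with trees as nested pairs (our own minimal representation; the paper's trees carry more bookkeeping):

```python
def discard_leaf(tree, leaf):
    """Remove the named leaf; a tree is a pair (name, children),
    and a leaf is (name, [])."""
    name, children = tree
    return (name, [discard_leaf(c, leaf) for c in children if c != (leaf, [])])

def replace_leaf(tree, leaf, subtree):
    """Substitute `subtree` for the named leaf wherever it occurs."""
    name, children = tree
    return (name, [subtree if c == (leaf, []) else replace_leaf(c, leaf, subtree)
                   for c in children])

def combine(T1, T2, v_c, v_cbar):
    """Combination step sketched above: drop the leaf v_C from T_2,
    then splice the result into T_1 at the leaf v_Cbar."""
    return replace_leaf(T1, v_cbar, discard_leaf(T2, v_c))

def leaves(tree):
    """Leaf names of a tree."""
    name, children = tree
    return {name} if not children else set().union(*(leaves(c) for c in children))
```

Discarding a leaf may leave an internal node with a single child; such degenerate nodes (like other spurious subtrees) are harmless here, since the second phase of the algorithm prunes all non-extreme subtrees.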
The following is the main claim after the first phase of the algorithm, where $T_0$ denotes the tree returned at the top level of the recursion (i.e., for $G' = G$): Every extreme set of the input graph $G$ under edge weights $w'$ is a subtree of the tree $T_0$ returned by the first phase of the extreme sets algorithm.
Phase 2:
The second phase retains only the subtrees of $T_0$ that are extreme sets in $G$, and eventually returns $T$. In this phase, we do a postorder traversal of $T_0$. For any node $a$ of $T_0$, let $L(a)$ denote the set of leaves in the subtree under $a$. During the postorder traversal, we label each node $a$ with the value of $\delta(L(a))$ in $G$ under the original edge weights $w$. (We will describe the data structures necessary for this labeling when we analyze the running time of the algorithm.) If the label for $a$ is strictly smaller than the labels of all its children nodes, then $L(a)$ is an extreme set and we keep $a$ in the tree. Otherwise, we remove node $a$ and make its parent node the new parent of all of its children nodes.
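A minimal sketch of this pruning pass (our own naming; the real algorithm computes the labels with dynamic-tree data structures rather than recomputing each cut from scratch):

```python
def cut_weight(adj, U):
    """delta(U) under the original weights w."""
    return sum(w for (u, v), w in adj.items() if (u in U) != (v in U))

def prune(node, adj):
    """Post-order pass over a candidate tree (name, children).  Returns a list
    of surviving subtrees, each as (leaf_set, children): an internal node is
    kept only if its cut value is strictly smaller than that of every
    surviving child; otherwise its children are promoted to its parent."""
    name, children = node
    if not children:
        return [(frozenset([name]), [])]  # leaves (singletons) always survive
    kids = [t for c in children for t in prune(c, adj)]
    leaf_set = frozenset().union(*(ls for ls, _ in kids))
    if all(cut_weight(adj, leaf_set) < cut_weight(adj, ls) for ls, _ in kids):
        return [(leaf_set, kids)]
    return kids  # node removed; its children attach to its parent
```

On the weighted path a-b-c (weights 1 and 2), a candidate tree containing the spurious node $\{a, b\}$ has that node removed, since $\delta(\{a, b\}) = 2 \geq \delta(\{a\}) = 1$, while the root survives.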
At the end of the second phase of the algorithm, we claim the following: Every extreme set of the input graph $G$ is a (proper) subtree of the tree $T$ returned by the second phase of the extreme sets algorithm, and vice versa.
2.2 Correctness of the Algorithm
We now establish the correctness of the algorithm by proving the two claims above, which respectively establish correctness of the first and second phases of the algorithm.
In order to prove the first claim, we show that the following more general property holds for any recursive step of the algorithm: Let $G'$ be the input graph in a recursive step of the algorithm. Then, every extreme set of $G'$ under edge weights $w'$ is a subtree of the tree $T'$ returned by the recursive algorithm. Note that the phase-1 claim follows from this property when the latter is applied to the first step of the algorithm, i.e., $G' = G$.
Recall that $C = \mathrm{CT}(p, \lambda)$, where $p$ is a randomly chosen vertex and $\lambda = \lambda(p, q)$ for a randomly chosen vertex $q$. The two recursive subproblems are on graphs $G_1$ and $G_2$. To prove the property above, we first relate the extreme sets in $G_1$ and $G_2$ to the extreme sets in $G'$. We show the following general property that holds for any graph $H$, vertex $s$, and threshold $\lambda$: Let $H$ be an undirected graph, and for any vertex $s$ and threshold $\lambda$, let $\bar{C} = V(H) \setminus C$ for Cut Threshold $C = \mathrm{CT}(s, \lambda)$ in $H$ under edge weights $w'$. Let $H_1$ and $H_2$ be graphs obtained from $H$ by contracting $\bar{C}$ and $C$ respectively. Then, every extreme set in $H$ under edge weights $w'$ is an extreme set in either $H_1$ or $H_2$ under edge weights $w'$. First, note that by the perturbation lemma, every extreme set in $H$ under edge weights $w$ is also an extreme set under edge weights $w'$. Therefore, by applying the respecting lemma on $H$ with edge weights $w'$, we can claim that the extreme sets $U$ under edge weights $w'$ are of one of the following types: (a) $U \subseteq \bar{C}$, or (b) $\bar{C} \subsetneq U$, or (c) $U \subseteq C$. Extreme sets of type (a) are also extreme sets in $H_2$ since the value of $\delta(U)$ and that of $\delta(U')$ for any $U' \subseteq U$ are identical between $H$ and $H_2$. Similarly, extreme sets of type (c) are also extreme sets in $H_1$ since the value of $\delta(U)$ and that of $\delta(U')$ for any $U' \subseteq U$ are identical between $H$ and $H_1$. For extreme sets of type (b), note that every proper subset of the contracted image of $U$ in $H_1$ corresponds to a proper subset of $U$ in $H$ with the same cut value. Then, if $\delta(U') > \delta(U)$ for all proper subsets $U'$ of $U$ in $H$, it must be that every proper subset of the image of $U$ in $H_1$ has cut value greater than $\delta(U)$. Therefore, an extreme set of type (b) in $H$ is also an extreme set in $H_1$. (Note that because of this last case, it is possible that there are extreme sets in $H_1$ that are not extreme sets in $H$.) This now allows us to prove the desired property: [Proof] First, note that the correctness of the base case follows from the correctness of the Benczúr–Karger algorithm [BenczurK00]. Thus, we consider the inductive case. Inductively, we assume that $T_1$ and $T_2$ contain as subtrees all extreme sets of $G_1$ and $G_2$ under edge weights $w'$. Therefore, by the property above, every extreme set in $G'$ under edge weights $w'$ is a subtree of either $T_1$ or $T_2$.
Now, note that any subtree eliminated by the step that combines $T_1$ and $T_2$ into $T'$ has the property that its leaf set contains the entire set $C$ and a nonempty proper subset of $\bar{C}$. But, by the respecting lemma, such a set cannot be an extreme set in $G'$. Therefore, all the extreme sets in $G'$ under edge weights $w'$ are subtrees in $T'$.
Next, we establish correctness of the second phase of the algorithm. We will need the following property of extreme sets: Let $G$ be an undirected graph, and let $U$ be a set of vertices that is not an extreme set. Then, there exists a set $W \subsetneq U$ such that $W$ is an extreme set and $\delta(W) \leq \delta(U)$. [Proof] Let $\lambda^*$ be the minimum cut value among all nonempty proper subsets of $U$, i.e., $\lambda^* = \min_{\emptyset \neq U' \subsetneq U} \delta(U')$. Since $U$ is not an extreme set, it must be that $\lambda^* \leq \delta(U)$. Now, consider the smallest set $W \subsetneq U$ such that $\delta(W) = \lambda^*$. Now, for any nonempty proper subset $W' \subsetneq W$, we have: (a) $\delta(W') \geq \lambda^*$ by definition of $\lambda^*$, and (b) $\delta(W') \neq \lambda^*$ by definition of $W$ as the smallest set achieving $\lambda^*$. Therefore, $\delta(W') > \delta(W)$ for all nonempty proper subsets $W' \subsetneq W$. Hence, $W$ is an extreme set. We are now ready to prove the phase-2 claim: [Proof] Recall that for any node $a$ in $T_0$, $L(a)$ denotes the set of leaves in the subtree under $a$. Now, if $a$ is removed by the algorithm in the second phase, it must be that there is a child $b$ of $a$ such that $\delta(L(b)) \leq \delta(L(a))$. Since each node in $T_0$ has at least two children, it must be that $L(b)$ is a proper subset of $L(a)$, and hence $L(a)$ is not an extreme set. This implies that the second phase of the algorithm does not remove any extreme set from being a subtree of $T$.
It remains to show that this phase does remove all subtrees that are not extreme sets. Suppose $a$ is a node in $T_0$ after the first phase of the algorithm such that $L(a)$ is not an extreme set in $G$. Consider the stage when the postorder traversal in the second phase reaches $a$. We need to argue that there is a child $b$ of $a$ such that $\delta(L(b)) \leq \delta(L(a))$. Inductively, we assume that at this stage, the subtree under $a$ exactly represents the extreme sets that are proper subsets of $L(a)$. Then, by the property above, there is a descendant $d$ of $a$ such that $\delta(L(d)) \leq \delta(L(a))$. But, note that in any extreme sets tree, the cut value of a parent subtree is strictly smaller than that of a child subtree, since the child subtree represents a proper subset of the parent subtree. Thus, if $b$ is the child of $a$ that is also an ancestor of $d$ (possibly $d$ itself), then $\delta(L(b)) \leq \delta(L(d)) \leq \delta(L(a))$. Since $\delta(L(b)) \leq \delta(L(a))$ and $b$ is a child of $a$, the node $a$ will be discarded when the postorder traversal reaches it.
This concludes the proof of correctness of the extreme sets algorithm.
2.3 Running Time Analysis of the Algorithm
We analyze the running times of the first and second phases of the algorithm separately. Using the Cut Threshold algorithm, the running time of the first phase can be written as:

(8) $T(m', n') = T(m_1, n_1) + T(m_2, n_2) + \tilde{O}(F(m', n'))$,

where $n_1 = |C| + 1$ and $n_2 = |\bar{C}| + 1$, and $m_1 + m_2 \leq m' + d(C)$, where $d(C)$ is the number of edges that have exactly one endpoint in $C$. Note that all other steps, i.e., generating edge weights $w'$, creating the graphs $G_1$ and $G_2$, and recombining the trees $T_1$ and $T_2$ to obtain the overall tree $T'$, can be done in $O(m' + n')$ time. Thus, the running time is dominated by the time taken in the Cut Threshold algorithm.
First, we bound the depth of the recursion tree: The depth of the recursion tree in the first phase of the extreme sets algorithm is $O(\log n)$. Note that Equation 7 ensures that in every recursive step, we have $\max(|C|, |\bar{C}|) \leq 7|V'|/8$. Therefore, in each recursive subproblem, the number of vertices is at most $7|V'|/8 + 1 \leq 15|V'|/16$, since $|V'| = \omega(1)$. The lemma follows.
This depth bound is sufficient to bound the total cost of the base cases of the algorithm: The total running time of the invocations of the Benczúr–Karger algorithm for the base cases is $\tilde{O}(n)$. First, consider the base cases of constant size: $|V'| = O(1)$. Since the other base case truncates the recursion whenever $V_u = \emptyset$, it must be that $V'$ contains at least one uncontracted vertex in each invocation of this base case. Now, since each uncontracted vertex is assigned to exactly one of the two subproblems by the recursive algorithm, it follows that each uncontracted vertex can be in only one base case. Therefore, the total number of these base cases is $O(n)$. Since each base case is on a graph of constant size, the total running time of the Benczúr–Karger algorithm over these base cases is $O(n)$.
Next, we consider the other base case: $V_u = \emptyset$. Since the depth of the recursion tree is $O(\log n)$ and each branch of the recursion adds a single contracted vertex in each step, the total number of contracted vertices in any instance is $O(\log n)$. Thus, the Benczúr–Karger algorithm has a $\tilde{O}(1)$ running time for each instance of this base case. To count the total number of these instances, we note that the parent subproblem of any base case must contain at least one uncontracted vertex. Since the depth of the recursion tree is $O(\log n)$ and an uncontracted vertex can be in only one subproblem at any layer of recursion, it follows that the total number of instances of this base case is $O(n \log n)$. Therefore, the cumulative running time of all the base cases of this type is $\tilde{O}(n)$.
The rest of the proof will focus on bounding the cumulative running time of the recursive instances of the algorithm. Our first step is to show that the expected number of iterations in any subproblem before we obtain a pair $(C, \bar{C})$ that satisfies Equation 7 is a constant: Suppose $p, q$ are vertices chosen uniformly at random from $V'$, and let $\lambda = \lambda(p, q)$ be their connectivity in $G'$. Then, $C = \mathrm{CT}(p, \lambda)$ satisfies Equation 7 with probability $\Omega(1)$. To show this, we first need to establish some properties of the random transformation that changes edge weights from $w$ to $w'$. First, we establish uniqueness of the minimum cut for any vertex pair under $w'$. We need the Isolation Lemma for this purpose: [Isolation Lemma [MulmuleyVV87]] Let $k$ and $N$ be positive integers and let $\mathcal{F}$ be a collection of subsets of $\{1, 2, \ldots, k\}$. Suppose each element $x \in \{1, 2, \ldots, k\}$ receives a random number $r(x)$ uniformly and independently from $\{1, 2, \ldots, N\}$. Then, with probability at least $1 - k/N$, there is a unique set $F \in \mathcal{F}$ that minimizes $\sum_{x \in F} r(x)$. We choose $N = m^{c+1}$ for some constant $c$. (Note that this increases the edge weights from $w$ to $w'$ by a $\mathrm{poly}(m)$ factor only, thereby ensuring that the efficiency of elementary operations is not affected.) Then, we can apply the Isolation Lemma to prove the following property: Fix any vertex $p$. For every vertex $v$, the minimum $(p, v)$-cut under edge weights $w'$ is unique with probability $1 - 1/\mathrm{poly}(m)$. Moreover, fix two vertices $u, v$. With probability at least $1 - 1/\mathrm{poly}(m)$, one of the following must hold: (a) $\lambda_{w'}(p, u) \neq \lambda_{w'}(p, v)$, or (b) the unique minimum $(p, u)$-cut is identical to the unique minimum $(p, v)$-cut. [Proof] We first establish the uniqueness of the minimum cut. Note that by the order-preservation claim, the only candidates for the minimum $(p, v)$-cut under $w'$ are the minimum $(p, v)$-cuts under $w$. For any two such cuts $X, Y$, we have $\delta_w(X) = \delta_w(Y)$, i.e., $\delta_{w'}(X) - \delta_{w'}(Y) = r(X) - r(Y)$. Therefore, the minimum $(p, v)$-cuts under $w'$ are those minimum $(p, v)$-cuts under $w$ that have the minimum value of $r(X)$, which is defined as the sum of $r(e)$ over all edges $e$ with exactly one endpoint in $X$. The uniqueness of the minimum cut under edge weights $w'$ now follows from the Isolation Lemma by setting $\mathcal{F}$ to the collection of subsets of edges that form the minimum $(p, v)$-cuts under edge weights $w$.
Next, consider two vertices $u, v$. If $\lambda_w(p, u) \neq \lambda_w(p, v)$ under edge weights $w$, assume wlog that $\lambda_w(p, u) < \lambda_w(p, v)$. This implies that for every $(p, v)$-cut $Y$, we have $\delta_w(X) < \delta_w(Y)$, where $X$ is a minimum $(p, u)$-cut under edge weights $w$. But then, by the order-preservation claim, we have $\delta_{w'}(X) < \delta_{w'}(Y)$. This implies that $\lambda_{w'}(p, u) < \lambda_{w'}(p, v)$ under edge weights $w'$. In this case, we are in case (a). Next, suppose $\lambda_w(p, u) = \lambda_w(p, v)$ under edge weights $w$. Apply the Isolation Lemma by setting $\mathcal{F}$ to be the collection of subsets of edges where each subset forms a minimum $(p, u)$-cut or a minimum $(p, v)$-cut under edge weights $w$. With probability $1 - 1/\mathrm{poly}(m)$, we get a unique minimizer among these cuts under edge weights $w'$. If this unique minimizer is a cut that separates both $u$ and $v$ from $p$, then we are in case (b), while if it only separates one of $u$ or $v$ from $p$, then we are in case (a).
Using $N = m^{c+1}$ for a large enough constant $c$, and applying a union bound over all choices of $u, v$, we can assume that the above properties hold for all choices of vertices $u, v$. (This holds with high probability, which is sufficient for our purpose because our algorithm is Monte Carlo.)
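The role of the Isolation Lemma here can be illustrated empirically. In the sketch below (our own naming), an arbitrary set family stands in for the family of minimum cuts; with $N$ polynomially large, random weights isolate a unique minimizer in almost every trial.

```python
import random

def unique_minimizer_rate(family, N, trials=200):
    """Fraction of trials in which random weights r(x) in {1..N} give the
    family a unique minimum-weight set (Isolation Lemma: >= 1 - k/N)."""
    universe = sorted(set().union(*family))
    unique = 0
    for _ in range(trials):
        r = {x: random.randint(1, N) for x in universe}
        costs = [sum(r[x] for x in F) for F in family]
        unique += costs.count(min(costs)) == 1
    return unique / trials
```

With three ground elements and $N = 1000$, the lemma guarantees a tie probability of at most $3/1000$ per trial.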
We also need the following lemma due to Abboud et al. [AbboudKT20]: [Abboud et al. [AbboudKT20]] Let $G$ be an undirected graph. If $p$ is a vertex chosen uniformly at random from $V$, then with probability $\Omega(1)$, there are at most $n/8$ vertices $v$ such that the minimal minimum $(p, v)$-cut has at least $n/8$ vertices on the side of $v$. Here, minimal refers to the minimum $(p, v)$-cut where the side containing $v$ is minimized. But, for our purposes, we do not need this qualification, since by the uniqueness property above, the minimum $(p, v)$-cut in $G'$ is unique under edge weights $w'$.
Now, for any vertex $p$, let $\sigma_p$ denote the sequence of vertices $v \in V' \setminus \{p\}$ in nonincreasing order of the value of $\lambda(p, v)$. (If $\lambda(p, u) = \lambda(p, v)$, then the relative order of $u, v$ in $\sigma_p$ is arbitrary.) We define a run in this sequence as a maximal subsequence of consecutive vertices that have an identical value of $\lambda(p, \cdot)$. Combining the uniqueness property and the lemma of Abboud et al., we make the following claim: Let $p$ be a vertex chosen uniformly at random from $V'$. Then, with probability $\Omega(1)$, the longest run in $\sigma_p$ is of length at most $|V'|/8$. First, note that all vertices in a run share the same unique minimum cut (and not just the value of $\lambda(p, \cdot)$) by the uniqueness property. Thus, if there is a run in $\sigma_p$ that has more than $|V'|/8$ vertices, then for all these vertices $v$, the unique minimum $(p, v)$-cut has more than $|V'|/8$ vertices on the side of $v$. It follows that there are more than $|V'|/8$ vertices that have more than $|V'|/8$ vertices on the side of $v$ in the (unique) minimum $(p, v)$-cut. The claim now follows by observing that, by the lemma of Abboud et al., this situation is avoided with probability $\Omega(1)$, since $p$ is a vertex chosen uniformly at random from $V'$. This claim now allows us to derive the probability of choosing vertices $p$ and $q$ such that Equation 7 is satisfied: [Proof] By the claim above, the longest run in $\sigma_p$ is of length at most $|V'|/8$ with probability $\Omega(1)$. Next, the index of $q$ in $\sigma_p$ is between $|V'|/4$ and $3|V'|/4$ with probability about $1/2$, since $q$ is chosen uniformly at random. If both events happen, then we immediately get $|\bar{C}| \geq |V'|/8$, where $\bar{C} = V' \setminus \mathrm{CT}(p, \lambda)$. This is because the suffix of $\sigma_p$ starting right after the run containing $q$ is in $\bar{C}$, and this suffix has at least $|V'| - 3|V'|/4 - |V'|/8 = |V'|/8$ vertices. But, we also have $|C| \geq |V'|/8$, since the longest run in $\sigma_p$ has at most $|V'|/8$ vertices, and all vertices before the start of the run containing $q$ are in $C$; there are at least $|V'|/4 - |V'|/8 = |V'|/8$ of them. The lemma follows.
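The counting in this argument is easy to mirror on a concrete sequence. In the sketch below (illustrative, our own naming), `conn` stands for the values $\lambda(p, v)$ listed along $\sigma_p$.

```python
from itertools import groupby

def longest_run(conn):
    """Length of the longest maximal run of equal values in the
    nonincreasing sequence sigma_p of connectivities lambda(p, v)."""
    return max(len(list(g)) for _, g in groupby(conn))

def ct_split(conn, q_index):
    """Sizes (|CT|, |complement|) among the listed vertices when the
    threshold is lambda = lambda(p, q) for the vertex at q_index."""
    lam = conn[q_index]
    ct = sum(1 for x in conn if x >= lam)  # prefix through the run containing q
    return ct, len(conn) - ct
```

For instance, on the sequence $9, 9, 7, 7, 7, 5, 4, 4$, choosing $q$ at a middle index (threshold $7$) yields the split $(5, 3)$, while choosing $q$ at the first index (threshold $9$) yields the unbalanced split $(2, 6)$.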
Next, we bound the total number of vertices and edges at any level of the recursion tree: The total number of vertices and edges in all the recursive subproblems at any level of the recursion tree in the first phase of the extreme sets algorithm is $\tilde{O}(n)$ and $\tilde{O}(m)$ respectively. Since each step of the recursion adds one contracted vertex to each of the two subproblems, it follows from the depth bound that any subproblem in the recursion tree has at most $O(\log n)$ contracted vertices, i.e., $|V_c| = O(\log n)$. Next, note that every uncontracted vertex belongs to exactly one subproblem at any level of the recursion tree. Conversely, because of the base case for $V_u = \emptyset$, every recursive subproblem contains at least one uncontracted vertex. Therefore, the recursive subproblems at any level of the recursion tree contain $O(n)$ uncontracted vertices and $O(n \log n)$ contracted vertices in total.
The edges in a subproblem are in three categories: (a) edges between two uncontracted vertices, i.e., (b) edges between contracted and uncontracted vertices, i.e., and (c) edges between two contracted vertices, i.e., . Edges in (a) are distinct between subproblems at any level of the recursion tree since the sets of uncontracted vertices in these subproblems are disjoint. An edge can appear in at most two subproblems as a category (b) edge, namely the subproblems containing the uncontracted vertices and respectively. As a result, there are edges of category (a) and (b) in total across all the subproblems at a single level of the recursion tree. Finally, since the number of contracted vertices is in any single subproblem, there are at most edges of category (c) in any subproblem. Since each recursive subproblem contains at least one uncontracted vertex, the total number of subproblems in a single layer of the recursion tree is . Consequently, the total number of edges in category (c) across all subproblems at a single level of the recursion tree is .
This lemma allows us to bound the running time of the first phase of the algorithm: The expected running time of the first phase of the algorithm is , where is the running time of a maxflow algorithm on an undirected graph of vertices and edges. We have already shown a bound of on the base cases in Section 2.3. So, we focus on the recursive subproblems. Cumulatively, over the recursive subproblems at a single level, Section 2.3 asserts that the total number of vertices and edges is and respectively. (Note that we can assume w.l.o.g. that is a connected graph and therefore . If is not connected, we run the algorithm on each connected component separately.) Now, since , the total time at a single level of the recursion tree is maximized when there are subproblems containing vertices and edges each. This gives a total running time bound of on the subproblems at a single level. (Note that by Section 2.3, the expected number of choices of before Equation 7 is satisfied is a constant.) The lemma now follows by Section 2.3 which says that the number of levels of the recursion tree is .
Next, we analyze the running time of the second phase of the algorithm. To implement the second phase, we need to find the value of for all subtrees of . We use a dynamic tree data structure for this purpose. Initialize for all subtrees . For every edge , we make the following changes to :

Increase by for all ancestors of and in .

Decrease by for all ancestors of in .
Clearly, the value of at the end of these updates is equal to for every subtree . Recall that during the postorder traversal for subtree , we declare it to be an extreme set if and only if the value is strictly smaller than that of each of its children subtrees.
This implementation of the second phase of the algorithm gives the following: The second phase of the extreme sets algorithm takes time. First, note that the size of the tree output by the first phase is since the leaves exactly correspond to the vertices of . Thus, the number of subtrees of is also . The initialization of the dynamic tree data structure takes time. Then, each dynamic tree update takes time, and there are such updates. So, the overall time for dynamic tree operations is . Finally, the time spent at a node of during the postorder traversal is proportional to the number of its children, which adds up to a total time of for the postorder traversal of .
3 Augmentation on Extreme Sets
In this section, we present our algorithm for degree-constrained edge connectivity augmentation (DECA) that uses extreme sets as a subroutine. Our goal is to prove Section 1.1, restated below. See 1.1
Throughout, we specify a DECA instance by a tuple , indicating the graph , the connectivity requirement , and the (weighted) degree constraints for each vertex .
3.1 The Benczúr–Karger Algorithm for DECA
As mentioned before, our algorithm is essentially a speedup of the Benczúr–Karger algorithm for DECA [BenczurK00] from time to given the extreme sets tree. We first describe the Benczúr–Karger algorithm and then describe our improvements.
The algorithm consists of 3 phases.

Using external augmentation, transform the degree constraints to tight degree constraints for all .

Repeatedly add an augmentation chain to increase connectivity to at least .

Add a matching defined on the min-cut cactus if the connectivity does not reach .
We first describe the external augmentation problem and an algorithm (from [BenczurK00]) to optimally solve it.
External augmentation.
The problem is defined as follows: Given a DECA instance , insert a new node , and find an edge set with minimum total weight such that , , and , where is the (weighted) degree of in edges .
The external augmentation problem can be solved using the following algorithm (from [BenczurK00]): Let denote the degree of in new edges. For any set , let . Initially, for all . We do a postorder traversal on the extreme sets tree. When visiting an extreme set that is still deficient, i.e., , we add edges from vertices with to until . When we fail to find a vertex such that , the DECA instance is infeasible since we have . This algorithm can be implemented in time using a linked list to keep track of vertices with in a subtree, merging these lists as we move up the tree in the postorder traversal and removing vertices once .
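The postorder greedy described above can be sketched as follows. The names `demand`, `members`, and `b` are hypothetical stand-ins for the deficiency, the vertex set of an extreme set, and the degree constraints; this naive version rescans member lists at every node and omits the linked-list merging that yields the stated near-linear running time.

```python
def external_augmentation(tree_postorder, demand, members, b):
    """Greedy sketch: tree_postorder lists the extreme sets in postorder;
    demand[X] is how much external weight must leave X; members[X] are the
    vertices inside X; b[v] is the degree bound of v. Returns d[v], the
    weight of new edges from v to the external node, or raises if the
    instance is infeasible."""
    d = {v: 0 for v in b}
    for X in tree_postorder:
        # Remaining deficiency of X, given edges already added below it.
        deficit = demand[X] - sum(d[v] for v in members[X])
        for v in members[X]:
            if deficit <= 0:
                break
            add = min(b[v] - d[v], deficit)  # respect the degree bound
            d[v] += add
            deficit -= add
        if deficit > 0:
            raise ValueError("instance is infeasible")
    return d
```

For example, with extreme sets `X1 = {1}`, `X2 = {2}` nested inside `A = {1, 2}`, demands `1, 0, 3` and bounds `b = {1: 2, 2: 2}`, the greedy returns `d = {1: 2, 2: 1}`.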
[Lemma 3.4 and 3.6 of [BenczurK00]] The algorithm described above outputs an optimal solution for the external augmentation problem.
The next lemma (from [BenczurK00]) relates optimal solutions of the external augmentation and DECA problem instances: [Lemma 2.6 of [BenczurK00]] If the optimal solution of the external augmentation instance has total weight , then the optimal solution of DECA instance has value .
After external augmentation, we have . If is odd, we claim there is at least one vertex with , else the instance is infeasible. Section 3.1 claims that the optimal solution of the DECA instance has weight , i.e., the sum of degrees is . Now, if for all vertices , then . This shows that the instance is infeasible. If the instance is feasible, we add 1 to for an arbitrary vertex such that . By Section 3.1, the optimal solution of the DECA problem has edges. Now, note that if we had used instead of as our degree constraints, we would still get the same external augmentation solution and consequently the same value of . Therefore, we call the tight degree constraints. The DECA problem is now equivalent to splitting off the vertex on the external augmentation solution , where is the set of weighted edges incident on with . We denote this splitting off instance .
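The parity fix can be sketched in a few lines; `d` and `b` are hypothetical names for the external degrees and the degree constraints, matching the sketch of the external augmentation step.

```python
def fix_parity(d, b):
    """If the total external degree is odd, add 1 to the external degree of
    an arbitrary vertex with slack, as described above; if no vertex has
    slack, the instance is infeasible."""
    if sum(d.values()) % 2 == 1:
        for v in d:
            if d[v] < b[v]:   # vertex with remaining degree slack
                d[v] += 1
                return d
        raise ValueError("instance is infeasible")
    return d
```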
The Benczúr–Karger algorithm [BenczurK00] provides an iterative greedy solution for splitting off by using partial solutions. Given a splitting off instance where