Determining 4-edge-connected components in linear time

05/04/2021 · by Wojciech Nadara et al., University of Warsaw

In this work, we present the first linear time deterministic algorithm computing the 4-edge-connected components of an undirected graph. First, we show an algorithm listing all 3-edge-cuts in a given 3-edge-connected graph, and then we use the output of this algorithm in order to determine the 4-edge-connected components of the graph.


1 Introduction

The connectivity of graphs has always been one of the fundamental concepts of graph theory. The foremost connectivity notions in the world of undirected graphs are $k$-edge-connectedness and $k$-vertex-connectedness. Namely, a graph $G$ is $k$-edge-connected for $k \geq 1$ if it is connected, and it remains connected after removing any set of at most $k - 1$ edges. Similarly, $G$ is $k$-vertex-connected if it contains at least $k + 1$ vertices, and it remains connected after the removal of any set of at most $k - 1$ vertices.

These notions can be generalized to graphs that are not well-connected. Namely, if $H$ is a maximal $k$-vertex-connected subgraph of $G$, we say that $H$ is a $k$-vertex-connected component of $G$. The edge-connected variant is, however, defined differently: we say that a pair of vertices $u, v$ of $G$ is $k$-edge-connected if it is not possible to remove at most $k - 1$ edges from $G$ so that $u$ and $v$ end up in different connected components. This relation of $k$-edge-connectedness happens to be an equivalence relation; this yields a definition of a $k$-edge-connected component of $G$ as an equivalence class of the relation. We remark that the notions of $k$-vertex-connected components and $k$-edge-connected components coincide for $k = 1$, as both simply describe the connected components of $G$. However, for $k \geq 2$ these definitions diverge; in particular, for $k \geq 3$ the $k$-edge-connected components of a graph do not even need to be connected.

There has been a plethora of research into algorithms deciding the $k$-vertex- and $k$-edge-connectedness of graphs, and decomposing graphs into $k$-vertex- or $k$-edge-connected components. However, while classical, elementary, and efficient algorithms exist for $k = 2$ and $k = 3$, these problems become increasingly more difficult for larger values of $k$. In fact, even for $k = 4$, there were no known linear time algorithms for any of the considered problems. The following description presents the previous work in this area for $k \leq 4$, and exhibits the related work for larger values of $k$:

$k = 1$.

Here, the notions of $1$-vertex-connectedness and $1$-edge-connectedness reduce to that of connectivity and the connected components of a graph. In the static setting, determining the connected components in linear time is trivial. As a consequence, more focus is being laid on dynamic algorithms maintaining the connected components of graphs. In the incremental setting, where edges can only be added to the dynamic graph, the optimal solution is provided by disjoint-set data structures [FU], which solve the problem in $\mathcal{O}(\alpha(n))$ amortized time per query, where $\alpha$ denotes the inverse of Ackermann's fast-growing function. Fully dynamic data structures are also considered [Wulff, Sparsification, Cos2, Cos3, Cos4, ThorupDuzo, Cos6, DynConnWorst, Bridge, DynConnSODA17].

$k = 2$.

One step further are the notions of 2-vertex-connectivity (biconnectivity) and 2-edge-connectivity. In the static setting, partitioning a graph into $2$-vertex-connected or $2$-edge-connected components is a classical problem, solved in linear time by exploiting the properties of the low function [2ConnStatic]. The incremental versions of both problems are again solved optimally in $\mathcal{O}(\alpha(n))$ amortized time per query [Westbrook1992]. Significant research has been done in the dynamic setting as well [DBLP:conf/wads/PengSS19, ThorupDuzo, Sparsification, Bridge, Dyn3EConn, Henzinger1995, Dyn2ConnBack].

$k = 3$.

As a next step, we consider 3-vertex-connectivity (triconnectivity) and 3-edge-connectivity. An optimal, linear time algorithm detecting the $3$-vertex-connected components was first given by Hopcroft and Tarjan [TriConn]. The first linear time algorithm for $3$-edge-connectivity was discovered much later by Galil and Italiano [EdgeToVertex], who presented a linear time reduction from the $k$-edge-connectivity problem to $k$-vertex-connectivity, showing that in the static setting, the former problem is the easier of the two. This was later followed by a series of works simplifying the solution for $3$-edge-connectivity [Tsin1, Tsin2, 3E1, Tsin3]. The incremental setting [3Incr, IncrSPQR] and the dynamic setting [Dyn3EConn, Sparsification] were also considered.

We also mention that in the case of 3-vertex-connectivity, there exists a structure called SPQR-tree which succinctly captures the structure of 2-vertex-cuts in graphs [SPQR, IncrSPQR]. Its edge-connectivity analogue also exists, but we defer its introduction to the general setting.

$k = 4$.

We move on to the problems of 4-vertex-connectivity and 4-edge-connectivity. A notable result by Kanevsky et al. [QuadConn] supports maintaining 4-vertex-connected components in incremental graphs, with an optimal $\mathcal{O}(\alpha(n))$ amortized time per query. Their result also yields a solution for static graphs in $\mathcal{O}(m + n\,\alpha(n))$ time complexity. By applying the result of Galil and Italiano [EdgeToVertex], we derive a static algorithm determining the $4$-edge-connected components in the same time complexity. This algorithm is optimal for $m = \Omega(n\,\alpha(n))$.

Another result by Dinitz and Westbrook [Dinitz1998] supports maintaining the $4$-edge-connected components in the incremental setting. Their algorithm processes any sequence of queries in $\mathcal{O}(q + m + n \log n)$ time, where $q$ is the number of queries, and $m$ is the total number of edges inserted into the graph.

However, it is striking that the fastest solutions for $4$-edge-connectivity and $4$-vertex-connectivity in static graphs were derived from the on-line algorithms working in the incremental setting. In particular, no linear time algorithms for $k = 4$ were known before.

Larger values of $k$.

As a side note, we also present the current knowledge on the general problems of $k$-vertex-connectivity and $k$-edge-connectivity. A series of results [Karger, DetMinCut, HenzingerCut, AnotherMinCut] shows that it is possible to compute the minimum edge cut of a graph (i.e., determine the edge-connectivity of a graph) in near-linear time. The previously mentioned work by Dinitz and Westbrook [Dinitz1998] maintains the $k$-edge-connected components of an incremental graph which is assumed to already have been initialized with a $(k-1)$-edge-connected graph. The data structure answers any sequence of on-line queries in $\mathcal{O}(q + m\,\alpha(m, n))$ time, where $q$ is the number of queries, and $m$ is the number of edges in the initial graph.

Gomory and Hu [GomoryHu] proved that for any weighted, undirected graph $G$ there exists a weighted, undirected tree $T$ on the same vertex set such that for any two vertices $u, v$, the value of the minimum $u$-$v$ edge cut in $G$ is equal to the value of the minimum $u$-$v$ edge cut in $T$. Moreover, such a tree can be constructed using $n - 1$ invocations of a maximum flow algorithm. In an interesting result by Hariharan et al. [PartialGomory-Hu], the decomposition of any graph into $k$-edge-connected components is constructed in $\widetilde{\mathcal{O}}(m + nk^3)$ time, producing a partial Gomory-Hu tree as its result.

Dinitz et al. [Cactus] showed that the set of all minimum edge cuts of a graph can be succinctly represented with a cactus graph. When the size of the minimum edge cut is odd, this cactus simplifies to a tree (see [SimplerCactus, Corollary 8]). These results imply that if the size of the minimum cut is odd, then the number of minimum cuts is $\mathcal{O}(n)$, and if it is even, then the number of minimum cuts is $\mathcal{O}(n^2)$. The structure of $k$-vertex-cuts was also investigated [Longhui].

1.1 Our results

In this work, we present a linear time, deterministic algorithm partitioning static, undirected graphs into $4$-edge-connected components. Even though the area of the dynamic versions of the algorithms for $4$-edge-connectivity is still thriving, the progress in static variants appears to have plateaued. In particular, both subquadratic algorithms determining the $4$-edge-connected components [QuadConn, Dinitz1998] originate from their dynamic incremental equivalents and are almost thirty years old, yet they did not achieve the optimal linear running time. Hence, our work constitutes the first progress in the static setting of $4$-edge-connectivity in a long time. As a side result, our algorithm also produces the tree representation of $3$-edge-cuts described in [SimplerCactus].

1.2 Organization of the work

The paper is organized as follows. In Section 3, we show how to reduce the problem of determining $4$-edge-connected components to the problem of determining $4$-edge-connected components in $3$-edge-connected graphs. In Section 4, we show a linear time, randomized Monte Carlo algorithm for listing all 3-edge-cuts in 3-edge-connected graphs. In Section 5, we show how to remove the dependency on randomness in the algorithm from the previous section, producing a linear time, deterministic algorithm listing all 3-edge-cuts in 3-edge-connected graphs. Then, in Section 6, we construct a tree of 3-edge-cuts in a 3-edge-connected graph, given the list of all its 3-edge-cuts. This tree is then used to determine the 4-edge-connected components of the graph. Finally, in Section 7, we present open problems related to this work.

2 Preliminaries

Graphs.

In this work, we consider undirected, connected graphs which may contain self-loops and multiple edges connecting pairs of vertices (i.e., multigraphs). The number of vertices of a graph and the number of its edges are usually denoted $n$ and $m$, respectively.

We use the notions of $k$-edge-connectedness and $k$-edge-connected components defined in Section 1. Moreover, we say that a set of $k$ edges of a graph forms a $k$-edge cut (or a $k$-cut for simplicity) if the removal of these edges from the graph disconnects it.

DFS trees.

Consider a run of the depth-first search algorithm [DBLP:journals/siamcomp/Tarjan72] on a connected graph $G$, started from a vertex $r$. A depth-first search tree (or a DFS tree) $T$ is a spanning tree of $G$, rooted at the source $r$ of the search, containing all the edges traversed by the algorithm. After the search is performed, each vertex is assigned two values: its preorder (also called discovery time or arrival time) and postorder (also finishing time or departure time). Their definitions are standard [DBLP:books/daglib/0023376]; it can be assumed that the values range from $1$ to $n$ and are pairwise different.

The edges of $T$ are called tree edges, and the remaining edges are called back edges or non-tree edges. In this setup, every back edge connects two vertices remaining in an ancestor-descendant relationship in $T$; moreover, for every back edge $e$, the graph $T + e$ contains exactly one cycle, named the fundamental cycle of $e$. For a vertex $v$ of $G$, we define $T_v$ to be the subtree of $T$ rooted at $v$; similarly, for a tree edge $e$ whose deeper endpoint is $v$, we set $T_e = T_v$.

When a DFS tree $T$ of $G$ is fixed, it is common to introduce directions to the edges of the graph: all tree edges of $G$ are directed away from the root of $T$, and all back edges are pointed towards the root of $T$. Then, $uv$ is a directed edge (either a tree or a back edge) whose origin (or tail) is $u$, and whose destination (or head) is $v$.

For our convenience, we introduce the following definition: a back edge $uv$ leaps over a vertex $w$ of the graph if $u \in T_w$, but $v \notin T_w$; we analogously define leaping over a tree edge $e$.

Moreover, we define a partial order $\preceq_T$ on the vertices of $T$ and the tree edges of $T$ as follows: $a \preceq_T b$ if the simple path in $T$ connecting the root of $T$ with $b$ also contains $a$. Then, $\preceq_T$ has one minimal element—the root of $T$—and each maximal element is a leaf of $T$. When the tree is clear from the context, we may write $\preceq$ instead of $\preceq_T$. Using the precomputed preorder and postorder values in $T$, we can verify whether $a \preceq b$ holds for given $a, b$ in constant time.

We use the classical low function defined by Hopcroft and Tarjan [2ConnStatic]. However, for our purposes it is more convenient to define it as a function mapping tree edges to back edges: for a tree edge $e$, $\mathrm{low}(e)$ is the back edge leaping over $e$ that minimizes the preorder of its head, breaking ties arbitrarily; or $\perp$, if no such edge exists. This function can be computed for all tree edges in time linear with respect to the size of the graph.
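To make the definition concrete, the sketch below (our own illustration, not taken from the paper) computes this edge variant of low in a single DFS pass over an adjacency-list representation; all function and variable names are ours.

def dfs_tree_and_low(n, edges, root=0):
    """Sketch: one DFS pass computing, for every tree edge (parent(v), v), the back
    edge low leaping over it whose head has the smallest preorder (or None).
    Vertices are 0..n-1; edges is a list of (u, v) pairs; multigraphs are fine."""
    adj = [[] for _ in range(n)]
    for idx, (u, v) in enumerate(edges):
        adj[u].append((v, idx))
        adj[v].append((u, idx))

    pre = [-1] * n               # preorder number of each vertex
    par_edge = [-1] * n          # id of the tree edge entering the vertex
    low = [None] * n             # low[v] = (head, edge id) for the tree edge entering v
    timer = 0

    # emulate the recursive DFS with an explicit stack of (vertex, adjacency index)
    pre[root] = timer; timer += 1
    stack = [(root, 0)]
    while stack:
        v, i = stack[-1]
        if i < len(adj[v]):
            stack[-1] = (v, i + 1)
            u, idx = adj[v][i]
            if pre[u] == -1:                    # tree edge v -> u
                pre[u] = timer; timer += 1
                par_edge[u] = idx
                stack.append((u, 0))
            elif idx != par_edge[v] and pre[u] < pre[v]:
                # back edge from v to a proper ancestor u: candidate for low[v]
                if low[v] is None or pre[u] < pre[low[v][0]]:
                    low[v] = (u, idx)
        else:
            stack.pop()
            if stack:                           # propagate low to the parent's edge
                p = stack[-1][0]
                if low[v] is not None and pre[low[v][0]] < pre[p]:
                    if low[p] is None or pre[low[v][0]] < pre[low[p][0]]:
                        low[p] = low[v]
    return pre, par_edge, low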

Xors.

For sets $A$ and $B$, by $A \oplus B$ we denote their symmetric difference, which is $(A \setminus B) \cup (B \setminus A)$, and we call it a xor of $A$ and $B$. Moreover, for non-negative integers $a$ and $b$, by $a \oplus b$ we denote their xor, that is, an integer whose binary representation is a bitwise symmetric difference of the binary representations of $a$ and $b$. The definitions can be easily generalized to the symmetric differences of multiple sets or integers.
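A two-line Python illustration of both notions (our own example):

A, B = {1, 2, 3}, {2, 3, 4}
assert A ^ B == {1, 4}          # symmetric difference of sets
assert 6 ^ 3 == 5               # bitwise xor of integers: 110 xor 011 = 101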

3 Reduction to the -edge-connected case

In this section, we present a way to reduce the problem of building the structure of $4$-edge-connected components to a set of independent, simpler instances of the problem. Each produced instance will be a $3$-edge-connected graph corresponding to a single $3$-edge-connected component of the original graph. This transformation of the input will be vital to the correctness of our work since the algorithm described in the following sections assumes the $3$-edge-connectedness of the input graph. We remark that this is not a new contribution [DinitzRatujeDupe]; we present it here for completeness only.

Firstly, each connected component of $G$ can be considered independently. Similarly, bridges (i.e., $1$-edge cuts) split the given graph into independent $2$-edge-connected components. Moreover, it can be shown that for $k \geq 2$, the family of $k$-edge-connected components of $G$ will not be altered by the removal of the bridges. Thus, without loss of generality, we assume that $G$ is $2$-edge-connected.

For a $2$-edge-connected graph $G$, we first build a structure of its $3$-edge-connected components. The shape of this structure is a cactus graph, as asserted by the following theorem:

Theorem 3.1 ([DinitzRatujeDupe]).

For a given $2$-edge-connected graph $G$, there exists an auxiliary graph $H$ such that

  • each edge of $H$ lies on exactly one simple cycle of $H$,

  • $V(H)$ is the family of all $3$-edge-connected components of $G$,

  • there exists a bijection $\phi : E_2 \to E(H)$, where $E_2$ is the set of all edges belonging to some $2$-edge cut of $G$, such that for every edge $uv \in E_2$, its image $\phi(uv)$ is an edge of $H$ connecting the $3$-edge-connected component containing $u$ with the $3$-edge-connected component containing $v$,

  • a pair of edges $e_1, e_2 \in E_2$ forms a $2$-edge-cut of $G$ if and only if $\phi(e_1)$ and $\phi(e_2)$ belong to the same cycle of $H$.

In order to build the structure from Theorem 3.1, we use the result of Galil and Italiano [EdgeToVertex] to find all 3-edge-connected components of $G$. Then, $H$ is defined as a quotient graph created by identifying the vertices within each 3-edge-connected component. This can easily be performed in linear time with respect to the size of $G$.
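For concreteness, a minimal sketch (our own illustration) of the quotient step, assuming the index of the $3$-edge-connected component of every vertex has already been computed; by Theorem 3.1 the resulting multigraph is a cactus.

def build_quotient_graph(n, edges, comp):
    """Sketch: given a graph on vertices 0..n-1 and comp[v] = index of the
    3-edge-connected component of v, build the quotient multigraph H obtained by
    identifying the vertices of each component.  Each H-edge remembers the
    original edge it came from, realizing the bijection of Theorem 3.1."""
    h_edges = []
    for idx, (u, v) in enumerate(edges):
        cu, cv = comp[u], comp[v]
        if cu != cv:                      # edges inside a component disappear in H
            h_edges.append((cu, cv, idx)) # keep the original edge id
    num_components = max(comp) + 1 if n > 0 else 0
    return num_components, h_edges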

By definition, the partition of the vertices of $G$ into 4-edge-connected components is a refinement of the partition into 3-edge-connected components. However, we can no longer restrict our attention to the independent subgraphs induced on 3-edge-connected components (which is what we did in the case of connected components and 2-edge-connected components). In fact, the $4$-edge-connected components of $G$ may even be disconnected. In order to handle this problem, we will need to transform $G$ so as to turn each of its 3-edge-connected components into a separate connected component which itself is 3-edge-connected.

Let us now fix $C$—a $3$-edge-connected component of $G$. We will now construct a $3$-edge-connected graph $G_C$ whose partition into 4-edge-connected components will be the same as the partition of $C$ into 4-edge-connected components in the original graph. Initially, we assign to $G_C$ the set of all edges of $G$ with both endpoints in $C$. Then, for each cycle $c$ of $H$ incident to $C$ (i.e., passing through the vertex of $H$ representing $C$), we add an additional special edge to $G_C$. Formally, let $f_1$ and $f_2$ be the two edges of the cycle $c$ in $H$ that are incident to $C$. Then, the corresponding edges $\phi^{-1}(f_1)$ and $\phi^{-1}(f_2)$ of $G$ each have exactly one endpoint belonging to $C$; denote these endpoints $x$ and $y$, respectively. Then, we add the edge $xy$ to $G_C$ as the additional special edge for the cycle $c$. Intuitively, the new edge simulates a path in $G$ connecting $x$ and $y$ which is internally disjoint with $C$ and which goes through the 3-edge-connected components around the cycle $c$.

It now turns out that $G_C$ is $3$-edge-connected and captures the connectivity properties of $C$ in $G$:

Lemma 3.2 ([DinitzRatujeDupe]).

The graph $G_C$ defined above has the following properties:

  • it is $3$-edge-connected,

  • for each $3$-edge-cut of $G$ dividing $C$ into nonempty parts, there is a $3$-edge-cut of $G_C$ dividing $C$ in the same way,

  • for each $3$-edge-cut of $G_C$, there is a nonempty set of $3$-edge-cuts of $G$, each dividing $C$ in the same way as the cut of $G_C$.

Figure 1: A $2$-edge-connected graph $G$ with marked $3$-edge-connected components (left) and the corresponding graphs $G_C$ for each component $C$, with blue special edges (right). Green dashed edges show an example of all $3$-edge-cuts of $G$ (left) corresponding to one $3$-edge-cut of $G_C$ (right).

By Lemma 3.2, each $4$-edge-connected component of $G_C$ is also a $4$-edge-connected component of $G$. Therefore, the reduction is sound.

Given the structure of $3$-edge-connected components of $G$, it is easy to construct the graphs $G_C$ for each particular $C$ in $\mathcal{O}(n + m)$ total time, hence the only remaining part is to construct the structure of $4$-edge-connected components for each $G_C$ independently.

Observe that the total number of special edges added to all $3$-edge-connected components is equal to the total length of the cycles in $H$. Thus, the reduction can easily be performed in time linear with respect to the size of $G$. As a result, without loss of generality, we can assume that the given graph is $3$-edge-connected.

4 Simple randomized algorithm

In this section, we are going to describe a randomized linear time algorithm listing 3-edge-cuts in 3-edge-connected graphs. In particular, the existence of this algorithm will imply that the number of 3-edge-cuts in any 3-edge-connected graph is at most linear. It has no major advantages over the algorithm presented in the succeeding section, but it is significantly simpler and it already contains most of the core ideas. Thus, it serves as a good intermediate step in the explanation.

We begin with the description of some auxiliary data structures.

Theorem 4.1 ([DBLP:journals/jcss/GabowT85]).

There exists a data structure for the disjoint set union problem which, when initialized with an undirected tree $T$ (the "union tree") on $n$ vertices, creates $n$ singleton sets. After the initialization, the data structure accepts the following queries in any order:

  • $\mathrm{Find}(v)$: returns the index of the set containing $v$,

  • $\mathrm{Union}(u, v)$: if $u$ and $v$ are in different sets, then an arbitrary one of them is replaced with their sum and the other one with the empty set. This query can only be issued if $uv$ is an edge of $T$.

The data structure executes any sequence of $q$ queries in $\mathcal{O}(n + q)$ total time.

Lemma 4.2.

It is possible to enrich the data structure from Theorem 4.1, so that after rooting the union tree at an arbitrary vertex, we are able to answer the following query in constant time:

  • $\mathrm{MinVertex}(v)$: returns the smallest vertex of $S$ with respect to $\preceq$, where $S$ is the set containing $v$.

Note that each set induces a connected subgraph of $T$, hence the smallest vertex of $S$ is well-defined.

Proof.

We enrich our structure with information about the lowest common ancestor of all elements of such a set. Call this $\mathrm{lca}(S)$ for a set $S$. Whenever we merge two sets $S_1$ and $S_2$, it holds that one of $\mathrm{lca}(S_1)$ and $\mathrm{lca}(S_2)$ is the ancestor of the other. Moreover, this ancestor can be determined in constant time. Hence, $\mathrm{lca}(S_1 \cup S_2)$ can be determined in constant time. ∎
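A possible implementation of such a structure is sketched below (our own illustration); it uses a standard union-find with path halving, which is near-linear rather than the strictly linear structure of Theorem 4.1, and stores for every set its shallowest vertex.

class TreeDSU:
    """Sketch of the structure from Lemma 4.2: a union-find over the vertices of a
    rooted union tree, where each set also remembers its shallowest vertex (the
    lowest common ancestor of its elements)."""

    def __init__(self, n, depth):
        self.parent = list(range(n))   # union-find parent (not the tree parent)
        self.size = [1] * n
        self.top = list(range(n))      # shallowest vertex of the set, w.r.t. the tree
        self.depth = depth             # depth[v] in the rooted union tree

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]   # path halving
            v = self.parent[v]
        return v

    def min_vertex(self, v):
        """MinVertex(v): the shallowest vertex of the set containing v."""
        return self.top[self.find(v)]

    def union(self, u, v):
        """Union(u, v); meant to be called only for edges u-v of the union tree,
        so one of the two set tops is an ancestor of the other."""
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return
        if self.size[ru] < self.size[rv]:
            ru, rv = rv, ru
        new_top = min(self.top[ru], self.top[rv], key=lambda x: self.depth[x])
        self.parent[rv] = ru
        self.size[ru] += self.size[rv]
        self.top[ru] = new_top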

Theorem 4.3.

There exists a deterministic algorithm that takes as input:

  • an undirected, unrooted tree $T$ with $n$ vertices,

  • $p$ weighted paths in the tree, where the path $P_i$ ($1 \leq i \leq p$) is the unique path between vertices $a_i$ and $b_i$, and has weight $w_i \in \{1, \ldots, W\}$,

  • and a positive integer $k$,

and for each edge $e$ of the tree returns the indices of $k$ paths with the lowest weight containing $e$, breaking ties arbitrarily; if $e$ is a part of fewer than $k$ paths, all such paths are returned. The time complexity of the algorithm is $\mathcal{O}(n \cdot k + p + W)$.

Proof.

Since the weights belong to the set $\{1, \ldots, W\}$, we can sort the paths with respect to their weights in $\mathcal{O}(p + W)$ time. Hence, from this point on, we assume that it has already been done and that $w_1 \leq w_2 \leq \ldots \leq w_p$.

We now process these paths in that order. We create the data structure from Lemma 4.2 and initialize it with $T$. For each edge $e$, we are going to maintain a set $S_e$, which at the end will be the set of the indices of the desired paths for edge $e$.

We root $T$ in an arbitrary vertex, which allows us to define the function $\mathrm{parent}(v)$ mapping $v$ to its parent in $T$.

We define an auxiliary function Go with the following pseudocode:

function Go($x$, $y$, $i$)
     $v \leftarrow \mathrm{MinVertex}(x)$
     while $v$ is not an ancestor of $y$ do
          $S_{(v, \mathrm{parent}(v))} \leftarrow S_{(v, \mathrm{parent}(v))} \cup \{i\}$
          if $|S_{(v, \mathrm{parent}(v))}| = k$ then
               $\mathrm{Union}(v, \mathrm{parent}(v))$
          $v \leftarrow \mathrm{MinVertex}(\mathrm{parent}(v))$
Algorithm 1 Updating the sets $S_e$ with the path $P_i$ on its part from $x$ to the lowest common ancestor of $x$ and $y$

Whenever we process path $P_i$, we execute $\mathrm{Go}(a_i, b_i, i)$ and $\mathrm{Go}(b_i, a_i, i)$.

We maintain an invariant that after processing the paths $P_1, \ldots, P_i$, the sets $S_e$ are the sets of indices of the $k$ lowest-weight paths among $P_1, \ldots, P_i$ containing $e$; or all of them, if there are fewer than $k$ paths among $P_1, \ldots, P_i$ containing $e$ as an edge. Moreover, $\mathrm{Union}(v, \mathrm{parent}(v))$ is executed as soon as the size of $S_{(v, \mathrm{parent}(v))}$ becomes equal to $k$. The reader is encouraged to think of the $\mathrm{Union}$ as the contraction of the corresponding edge in $T$. A contracted subgraph is identified with the lowest common ancestor of all its vertices (i.e., the result of $\mathrm{MinVertex}(v)$ for any $v$ from such a subgraph). The function Go traverses all non-contracted edges on the path from $x$ to $\mathrm{lca}(x, y)$, where $\mathrm{lca}(x, y)$ is the lowest common ancestor of $x$ and $y$, and adds the index $i$ to the sets $S_e$ for all traversed edges (and contracts them if necessary). The total size of all sets $S_e$ will never exceed $nk$, hence the total number of iterations of the while loop throughout all executions of Go will not exceed $nk$ as well. We infer that the time complexity of this algorithm is $\mathcal{O}(n \cdot k + p + W)$. ∎

We proceed to the description of our randomized algorithm. Let us choose an arbitrary vertex $r$ of the graph, and perform a depth-first search from $r$. Let $T$ be the resulting DFS tree. We are now going to define a hashing function $H : E \to \mathcal{P}(E)$, where $\mathcal{P}(E)$ denotes the powerset of $E$; the value $H(e)$ will be called a hash of $e$. If $e$ is a non-tree edge, then we define $H(e) = \{e\}$. Otherwise, we take $H(e)$ as the set of non-tree edges leaping over $e$. Let us note that for a non-tree edge $e$, the set of edges $f$ such that $e \in H(f)$ forms a cycle—the fundamental cycle of $e$.

Lemma 4.4.

For a connected graph $G$ and a subset $C$ of its edges, the graph $G \setminus C$ is disconnected if and only if there is a nonempty subset $D \subseteq C$ such that the xor of the hashes of the edges in $D$ is the empty set.

Proof.

Let us start with proving that if $G \setminus C$ is disconnected, then there is a nonempty subset $D \subseteq C$ such that the xor of the hashes of the edges from $D$ is the empty set.

If $G \setminus C$ is disconnected, then we can partition $V$ into two nonempty sets $A$ and $B$ such that all edges between $A$ and $B$ belong to $C$. Let $D$ be the set of the edges between $A$ and $B$. We claim that the xor of the hashes of the edges from $D$ is the empty set. Denote that xor by $X$. Let us take any non-tree edge $e$. Since the fundamental cycle of $e$ is a cycle, it contains an even number of edges from $D$, so $e$ appears in the hashes of an even number of edges of $D$. Hence $e \notin X$, which proves that $X = \emptyset$.

Now, let us prove that if there exists a nonempty subset $D \subseteq C$ such that the xor of the hashes of its edges is the empty set, then $G \setminus C$ is disconnected. Let us color the vertices of $G$ red and blue such that two vertices connected by a tree edge of $T$ have different colors if and only if this tree edge belongs to $D$. Since $T$ is a tree, this coloring always exists, and is unique up to swapping the colors. $D$ has to contain at least one tree edge; otherwise the xor of the hashes of its edges would clearly be nonempty. Hence, in such a coloring, there are vertices of both colors. Let $A$ and $B$ denote the nonempty sets of vertices colored blue and red, respectively. We claim that there is no edge in $G \setminus D$ connecting $A$ and $B$, which in turn will conclude the proof. For the sake of contradiction, assume that such an edge $e$ exists. It clearly cannot be a tree edge based on how we defined our coloring. If $e$ is a non-tree edge connecting vertices of different colors, it has to leap over an odd number of tree edges in $D$. These are exactly the tree edges of $D$ that contain $e$ in their hashes. Since the xor of the hashes of the edges from $D$ is assumed to be the empty set, $e$ has to be contained in $D$ as well, because $e$ is the only non-tree edge that contains $e$ in its hash—a contradiction. ∎

Since our graph has no 1-edge-cuts, there are no edges whose hashes are empty sets; and since our graph has no 2-edge-cuts, no two edges have equal hashes. Hence, removing a set of three edges disconnects a 3-edge-connected graph if and only if the xor of the hashes of all three of them is the empty set. Moreover, after removing such a 3-edge-cut, the graph splits into exactly two connected components, and no removed edge connects two vertices within one component.

As storing hashes as sets of edges would lead to inefficient computations, we will define compressed hashes. We express them as $b$-bit numbers for some $b = \Theta(\log m)$, i.e., as a function $h : E \to \{0, 1\}^b$. For each non-tree edge $e$, we draw $h(e)$ randomly and uniformly from the set of $b$-bit numbers. For a tree edge $e$, we define its compressed hash as the xor of the compressed hashes of the edges in its hash, i.e., $h(e) = \bigoplus_{f \in H(e)} h(f)$. Note that since $b = \Theta(\log m)$, compressed hashes fit into a constant number of machine words, so we can perform arithmetic operations on them in constant time. However, this comes at the cost of allowing collisions of compressed hashes: it might happen (although with a very low probability) that two sets of edges have equal xors of their compressed hashes, but unequal xors of their original hashes.
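The following sketch (our own illustration) computes the compressed hashes of all edges in linear time, assuming the DFS tree is given by a parent array and the vertices are numbered in preorder; it uses the standard trick of xor-marking both endpoints of every back edge and aggregating the marks over subtrees, since a back edge leaps over the tree edge entering $v$ exactly when one of its endpoints lies in $T_v$.

import random

def compressed_hashes(n, parent, tree_edge_of, back_edges, bits=64):
    """Sketch: vertices are 0..n-1 and numbered in preorder (parent[v] < v for v > 0),
    parent[v] is the DFS-tree parent of v, tree_edge_of[v] is the id of the tree edge
    entering v, and back_edges is a list of (tail, head, edge id) triples.
    Returns a dict: edge id -> compressed hash."""
    h = {}
    mark = [0] * n                       # xor of hashes of back edges touching each vertex
    for tail, head, idx in back_edges:
        r = random.getrandbits(bits)     # the random compressed hash of the back edge
        h[idx] = r
        mark[tail] ^= r
        mark[head] ^= r
    subtree = mark[:]
    for v in range(n - 1, 0, -1):        # children are processed before parents
        subtree[parent[v]] ^= subtree[v]
    for v in range(1, n):                # subtree xor = xor over back edges leaping over the edge into v
        h[tree_edge_of[v]] = subtree[v]
    return h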

For the ease of exposition, in the description of this algorithm we will use hash tables, which only guarantee expected constant time complexity per lookup. Because of that, the described algorithm will have expected linear instead of worst-case linear running time. However, there is a way to remove the dependence on hash tables and keep the running time worst-case linear. This will be explained at the end of this section (Subsection 4.7). For the time being, we create a hash table $M$, which for any $b$-bit number $x$ returns an edge whose compressed hash is equal to $x$; or $\perp$ if no such edge exists. In the unlikely event that there are multiple edges whose compressed hashes are equal to $x$, $M$ returns any of them.

We will now categorize 3-edge-cuts based on the number of tree edges they contain, and explain how to handle each case. Throughout the case analysis, we will use the following fact multiple times:

Lemma 4.5.

Given two edges $e_1$ and $e_2$ of some 3-edge-cut in a 3-edge-connected graph, the remaining edge $e_3$ is uniquely identified by its hash: $H(e_3) = H(e_1) \oplus H(e_2)$.

Proof.

If $\{e_1, e_2, e_3\}$ is a 3-edge-cut, then $H(e_1) \oplus H(e_2) \oplus H(e_3) = \emptyset$, so $H(e_3) = H(e_1) \oplus H(e_2)$. ∎

4.1 Zero tree edges

As $T$ is a spanning connected subgraph, 3-edge-cuts not containing any tree edges simply do not exist.

4.2 One tree edge

Let $e$ be an edge of $T$ that is the only tree edge of some 3-edge-cut. The two resulting connected components after the removal of such a cut are $T_e$ and $T \setminus T_e$ (we are slightly abusing the notation here; we actually mean that the connected components are the subgraphs induced by the vertex sets of $T_e$ and $T \setminus T_e$, respectively). Since $e$ is not a bridge, $\mathrm{low}(e)$ is well-defined and it connects $T_e$ with $T \setminus T_e$, so it belongs to this cut as well. This uniquely determines the third edge of this cut, as it has to be the unique edge whose hash is $H(e) \oplus H(\mathrm{low}(e))$.

In order to detect all such cuts, we iterate over all tree edges $e$ and for each of them we look up in $M$ whether there exists an edge whose compressed hash is $h(e) \oplus h(\mathrm{low}(e))$. If it exists and if it turns out to be a non-tree edge, then we call it $f$ and output the triple $\{e, \mathrm{low}(e), f\}$ as a 3-edge-cut.
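A minimal sketch (our own illustration) of this lookup, with a Python dictionary standing in for the hash table $M$:

def cuts_with_one_tree_edge(tree_edges, low_edge, h, is_tree_edge):
    """Sketch of the 'one tree edge' case: for each tree edge e, the cut, if it
    exists, must be {e, low(e), g} where g is the unique edge whose compressed hash
    equals h(e) xor h(low(e)).  h maps edge ids to compressed hashes and low_edge
    maps a tree edge id to the edge id of low(e)."""
    table = {}
    for e, value in h.items():
        table[value] = e              # on a collision any representative may be kept
    cuts = []
    for e in tree_edges:
        f = low_edge[e]               # low(e) exists because e is not a bridge
        g = table.get(h[e] ^ h[f])
        if g is not None and not is_tree_edge[g] and g != f:
            cuts.append({e, f, g})    # with tiny probability a collision gives a false positive
    return cuts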

4.3 Two tree edges

Let $e_1$ and $e_2$ be the edges of $T$ that form a 3-edge-cut together with some back edge $f$.

Lemma 4.6.

The edges $e_1$ and $e_2$ are comparable with respect to $\preceq$.

Proof.

Assume otherwise. Then $T_{e_1}$ and $T_{e_2}$ are disjoint, and $T_{e_1} \cup T_{e_2}$ and $T \setminus (T_{e_1} \cup T_{e_2})$ are the two connected components resulting from the removal of the 3-edge-cut containing $e_1$ and $e_2$. However, $\mathrm{low}(e_1)$ and $\mathrm{low}(e_2)$ are well-defined, different back edges connecting $T_{e_1} \cup T_{e_2}$ with $T \setminus (T_{e_1} \cup T_{e_2})$, so any cut separating these two sets would need to contain both $\mathrm{low}(e_1)$ and $\mathrm{low}(e_2)$ in addition to $e_1$ and $e_2$—a contradiction. This proves that $e_1$ and $e_2$ are comparable. ∎

Without loss of generality, assume that $e_1 \preceq e_2$. Then, the two resulting connected components are $T_{e_1} \setminus T_{e_2}$ and $T_{e_2} \cup (T \setminus T_{e_1})$. The remaining back edge $f$ has to connect $T_{e_1} \setminus T_{e_2}$ and $T_{e_2} \cup (T \setminus T_{e_1})$. We will distinguish two cases, differing in the location of the remaining back edge: two tree edges, lower case, where $f$ connects $T_{e_2}$ with $T_{e_1} \setminus T_{e_2}$, and two tree edges, upper case, where $f$ connects $T_{e_1} \setminus T_{e_2}$ with $T \setminus T_{e_1}$.

Figure 2: The settings in Sections 4.3.1 and 4.3.2. The cuts that we are looking for are formed by the edges $e_1$, $e_2$, and $f$ in both cases.

4.3.1 Two tree edges, lower case

Let $S$ be the set of back edges between $T_{e_2}$ and $T \setminus T_{e_2}$. It consists of the edge $f$, which connects $T_{e_2}$ with $T_{e_1} \setminus T_{e_2}$, and of several edges connecting $T_{e_2}$ with $T \setminus T_{e_1}$. Let $Z$ be the set of heads of edges from $S$ (we remind that back edges are directed towards the root $r$). All elements of $Z$ lie on the path from the tail of $e_2$ to the root $r$, and the head of $f$ is the deepest element of $Z$.

Based on this observation, we are going to use the data structure from Theorem 4.3. We initialize an instance of it with the tree $T$, $k = 1$, and the paths $\{P_e : e \text{ is a back edge}\}$, where for each non-tree edge $e$ with tail $u$ and head $v$, we create one input path from $u$ to $v$, and we set its weight to $n - \mathrm{pre}(v)$ (so that the lowest weight corresponds to the deepest head). This data structure lets us, for each tree edge, determine the back edge leaping over it with the biggest value of the preorder of its head. We call this edge $\mathrm{high}(e)$. This back edge is the only candidate for the edge $f$ for a fixed deeper tree edge $e_2$.

Hence, we can find all such cuts by firstly initializing the data structure, and then iterating over all tree edges $e_2$. For each $e_2$, we take $f = \mathrm{high}(e_2)$. Knowing $e_2$ and $f$, we can look up in $M$ whether there exists an edge whose compressed hash is $h(e_2) \oplus h(f)$ (Lemma 4.5). If it exists and if it is a tree edge, then we call it $e_1$ and output the triple $\{e_1, e_2, f\}$ as a 3-edge-cut.

4.3.2 Two tree edges, upper case

Let $S$ be the set of non-tree edges between $T_{e_1}$ and $T \setminus T_{e_1}$. It consists of the back edge $f$, which connects $T_{e_1} \setminus T_{e_2}$ with $T \setminus T_{e_1}$, and of several edges connecting $T_{e_2}$ with $T \setminus T_{e_1}$. Let $Z$ be the set of the tails of the edges from $S$. Let $z$ be the tail of $f$. As $z \notin T_{e_2}$ and $T_{e_2}$ corresponds to a contiguous range of preorder values, either for all vertices $x \in Z \setminus \{z\}$ it holds that $\mathrm{pre}(z) < \mathrm{pre}(x)$, or for all vertices $x \in Z \setminus \{z\}$ it holds that $\mathrm{pre}(x) < \mathrm{pre}(z)$. In the first case, $z$ is the vertex with the smallest preorder which is a tail of some edge leaping over $e_1$; while in the second case, $z$ is the analogous vertex with the largest preorder.

Based on this observation, we are going to use the data structure from Theorem 4.3 again. We initialize one instance of it with the tree $T$, $k = 1$, and the paths $\{P_e : e \text{ is a back edge}\}$, where for each back edge $e$ with tail $u$ and head $v$, we create one input path from $u$ to $v$, and set its weight to $\mathrm{pre}(u)$. We also initialize another instance of this data structure in the same way, with the only difference that we set the weight to $n - \mathrm{pre}(u)$ instead. The first instance lets us, for each tree edge $e$, determine the edge leaping over $e$ with the smallest preorder of its tail—we call it $\mathrm{minpre}(e)$; the second instance determines the analogous edge $\mathrm{maxpre}(e)$ with the largest preorder of the tail. By our considerations above, if any desired cut exists for $e_1$, then $f \in \{\mathrm{minpre}(e_1), \mathrm{maxpre}(e_1)\}$.

Hence, we can find all such cuts by firstly initializing both instances of the data structure. Then, we iterate over all tree edges $e_1$, and for each of them we check both candidates for $f$. If our hash table contains an edge whose compressed hash is $h(e_1) \oplus h(f)$ and it is a tree edge, then we call it $e_2$ and output the triple $\{e_1, e_2, f\}$ as a 3-edge-cut.

4.4 Three tree edges

Solving this case in a way similar to the previous cases seems intractable. We consider this subsection, together with the time analysis following it, as one of the key ideas of this work.

In this case, we assume that no non-tree edges belong to the cut. Therefore, we can contract all of them simultaneously and recursively list all 3-edge-cuts in the resulting graph. Since 3-edge-cuts in the contracted graph exactly correspond to the 3-edge-cuts consisting solely of tree edges in the original graph, this reduction is sound. The contraction is performed in $\mathcal{O}(n + m)$ time by identifying the vertices within the connected components of the graph formed by the non-tree edges. Let $G'$ be the graph after these contractions. Since $G$ is 3-edge-connected, $G'$ has to be 3-edge-connected as well, so the assumption about the input graph being 3-edge-connected is preserved. We do not modify the value of $b$ (the length of the compressed hashes) in the subsequent recursive calls, even though the value of $m$ will decrease.
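A sketch (our own illustration) of the contraction step:

def contract_non_tree_edges(n, edges, is_tree_edge):
    """Sketch of the reduction in Section 4.4: identify the vertices of each
    connected component of the graph formed by the non-tree edges, and keep only
    the tree edges running between different components (tree edges that become
    self-loops cannot take part in any cut and are dropped)."""
    adj = [[] for _ in range(n)]
    for idx, (u, v) in enumerate(edges):
        if not is_tree_edge[idx]:
            adj[u].append(v)
            adj[v].append(u)
    comp = [-1] * n
    c = 0
    for s in range(n):                     # label components with an iterative DFS
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            v = stack.pop()
            for u in adj[v]:
                if comp[u] == -1:
                    comp[u] = c
                    stack.append(u)
        c += 1
    new_edges = []
    for idx, (u, v) in enumerate(edges):
        if is_tree_edge[idx] and comp[u] != comp[v]:
            new_edges.append((comp[u], comp[v], idx))   # remember the original edge id
    return c, new_edges     # recurse on this smaller 3-edge-connected graph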

4.5 Time analysis

We will now compute the expected time complexity of our algorithm. The subroutines detecting the cuts with at most two tree edges clearly take expected linear time. The only nontrivial part of the time analysis is the recursion in Subsection 4.4. Let $T(m)$ denote the maximum expected time our algorithm needs to solve any graph with at most $m$ edges. Since our graph is 3-edge-connected, the degree of each vertex in $G$ is at least three, so $n \leq \frac{2}{3}m$. The contracted graph $G'$ consists of tree edges only, so it has at most $n - 1 \leq \frac{2}{3}m$ edges. The bound on $T(m)$ follows:

$T(m) \leq T\left(\tfrac{2}{3}m\right) + \mathcal{O}(m).$

The solution to this recurrence is $T(m) = \mathcal{O}(m)$, hence the whole algorithm runs in expected linear time.

4.6 Correctness analysis

In this subsection, we are going to prove that our algorithm works with sufficiently high probability. The only reason it can output a wrong result is the compression of hashes—if we used their uncompressed version instead, the algorithm would clearly be correct.

The maximum number of queries to our hash table is linear in terms of $m$, which follows from an argument similar to the one presented in Subsection 4.5. Hence, there exists some absolute constant $c$ such that the number of queries is bounded by $cm$. For each query with value $x$, and for each edge whose hash is not equal to the hash whose compressed version we ask about, there is a $2^{-b}$ probability that the compressed version of this hash is equal to $x$. Hence, the probability that we ever get a false positive is bounded from above by $cm \cdot m \cdot 2^{-b}$. If we never get a false positive, then our algorithm returns the correct output. Since $b = \Theta(\log m)$, we can choose $b$ so that $2^b \geq m^3$, and then $cm^2 \cdot 2^{-b} \leq \frac{c}{m}$. Therefore, our algorithm works correctly with probability at least $1 - \frac{c}{m}$.

4.7 Removing hash tables

As mentioned earlier, there is a way to avoid using hash tables in our algorithm. Since the values of compressed hashes of edges are polynomial in $m$ (at most $2^b$), we are able to sort them in $\mathcal{O}(m)$ time using radix sort. Now, instead of requiring a data structure that allows us to query the existence of an edge with a particular value of its compressed hash, we create an object representing such a query, which we are going to answer later in an offline manner. After gathering all such objects, we sort them together with the edges, using as keys the values of compressed hashes for edges and the values of the required compressed hashes for queries. All the keys are polynomial in $m$, so we are able to sort them together in $\mathcal{O}(m)$ time using radix sort. After the sorting, we can easily answer all queries offline.
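A short sketch (our own illustration) of this offline replacement of the hash table; Python's built-in sort stands in for the radix sort used in the linear-time argument.

def answer_hash_queries_offline(edge_hashes, queries):
    """Sketch of Section 4.7: gather all queries, sort them together with the edges
    by hash value, and answer everything in one scan.  edge_hashes is a list of
    (hash value, edge id) pairs and queries a list of required hash values;
    None plays the role of 'no such edge'."""
    items = [(value, 0, e) for value, e in edge_hashes] + \
            [(value, 1, q) for q, value in enumerate(queries)]
    items.sort()                                   # radix sort in the linear-time version
    answers = [None] * len(queries)
    current_value, current_edge = None, None
    for value, kind, payload in items:
        if kind == 0:                              # an edge: remember it for this hash value
            current_value, current_edge = value, payload
        elif value == current_value:               # a query matching the last seen edge
            answers[payload] = current_edge
    return answers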

This concludes the description of the randomized variant of the algorithm.

5 Deterministic algorithm

Having established a linear time randomized algorithm producing 3-edge-cuts in $3$-edge-connected graphs, we will now derandomize it by designing deterministic implementations of the subroutines for each of the cases considered in the randomized variant of the algorithm. Three cases need to be derandomized: "one tree edge" (Subsection 4.2), "two tree edges, lower case" (Subsection 4.3.1), and "two tree edges, upper case" (Subsection 4.3.2). In the following description, we cannot use compressed hashes anymore: the compression is an inherently random process and may produce false positives. Instead, we will exploit additional properties of $3$-edge-cuts in order to produce an efficient deterministic implementation of the algorithm.

Recall that in the description of the randomized implementation of the algorithm, we defined the values $\mathrm{low}(e)$, $\mathrm{high}(e)$, $\mathrm{minpre}(e)$, and $\mathrm{maxpre}(e)$ for any tree edge $e$. The values $\mathrm{low}(e)$ and $\mathrm{high}(e)$ were defined as the back edges leaping over $e$ whose head has the smallest possible preorder in $T$, or the largest possible preorder in $T$, respectively, breaking ties arbitrarily. The values $\mathrm{minpre}(e)$ and $\mathrm{maxpre}(e)$ were defined analogously, only that we chose the edges with the smallest possible preorder of the tail, and the largest possible preorder of the tail, respectively.

For the deterministic variant of the algorithm, we generalize these notions: we define $\mathrm{low}_1(e)$, $\mathrm{low}_2(e)$, and $\mathrm{low}_3(e)$ as the three back edges leaping over $e$ with the minimal preorders of their heads; in particular, we set $\mathrm{low}_1(e) = \mathrm{low}(e)$. We analogously define $\mathrm{high}_1(e)$, $\mathrm{high}_2(e)$, $\mathrm{minpre}_1(e)$, $\mathrm{minpre}_2(e)$, $\mathrm{maxpre}_1(e)$, and $\mathrm{maxpre}_2(e)$. We remark that $\mathrm{low}_3(e)$ might not exist if there are fewer than three edges leaping over $e$; in this case, we put $\mathrm{low}_3(e) = \perp$. However, all the other values must exist—otherwise, at most one back edge would leap over $e$, which would mean that this edge, together with $e$, would form an edge cut of $G$ of cardinality at most $2$.

All the values defined above can be computed for every tree edge in linear time with respect to the size of $G$: the values $\mathrm{low}_1(e)$, $\mathrm{low}_2(e)$, and $\mathrm{low}_3(e)$ are computed in a single depth-first search pass along $T$ in the same way as the original low function is computed, only that the three lowest back edges are computed instead of just one. The generalizations of $\mathrm{high}$, $\mathrm{minpre}$, and $\mathrm{maxpre}$ are determined in the same way as in Section 4, but $k = 2$ is passed to the algorithm from Theorem 4.3 instead of $k = 1$, so that the two best back edges of each kind are computed instead of just one. Hence, in the following description, we will assume that all the values above have already been computed.

5.1 One tree edge

Recall that in this case, we are to find all $3$-edge-cuts intersecting a fixed depth-first search tree $T$ in a single edge. This is fairly straightforward: if some $3$-edge-cut contains exactly one tree edge $e$ of $T$, then the two sides of the cut are exactly $T_e$ and $T \setminus T_e$; hence, this cut must include all back edges leaping over $e$. Therefore, $e$ belongs to a $3$-edge-cut of this kind if and only if $\mathrm{low}_3(e) = \perp$, i.e., if there exist only two back edges leaping over $e$; in this case, $e$ and these two back edges form a $3$-edge-cut. Since this check can be easily performed in constant time for each tree edge of $T$, the whole subroutine runs in $\mathcal{O}(n)$ time.
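This check is a one-liner once the values from the previous paragraphs are available (our own illustrative sketch, with None playing the role of $\perp$):

def deterministic_one_tree_edge_cuts(tree_edges, low1, low2, low3):
    """Sketch of Section 5.1: a tree edge e lies in a 3-edge-cut with two back edges
    iff exactly two back edges leap over e, i.e., low3(e) does not exist; the cut is
    then {e, low1(e), low2(e)}.  The lowX maps follow the notation introduced above."""
    return [{e, low1[e], low2[e]} for e in tree_edges if low3[e] is None]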

5.2 Two tree edges, lower case

Recall that this case requires us to find all $3$-edge-cuts intersecting a fixed depth-first search tree $T$ in two edges, say $e_1$ and $e_2$, such that $e_1 \preceq e_2$ and the remaining back edge $f$ connects $T_{e_2}$ with $T_{e_1} \setminus T_{e_2}$ (Figure 2). Hence, $f$ is the only back edge connecting $T_{e_2}$ with $T_{e_1} \setminus T_{e_2}$, no back edges connect $T_{e_1} \setminus T_{e_2}$ with $T \setminus T_{e_1}$, but there may be multiple back edges connecting $T_{e_2}$ with $T \setminus T_{e_1}$.

We shall exploit the fact that in a valid $3$-edge-cut of this kind, there are no back edges originating from $T_{e_1} \setminus T_{e_2}$ and leaping over $e_1$. For a fixed edge $e_1$, this severely limits the set of possible tree edges $e_2$ below $e_1$ in $T$, since all back edges leaping over $e_1$ must originate from $T_{e_2}$. This is formalized by the notion of the deepest down cut of $e_1$:

Definition 1 (deepest down cut).

For a tree edge $e$ in $T$, we define the deepest down cut of $e$, denoted $\mathrm{ddc}(e)$, as the deepest tree edge $e'$ for which all back edges leaping over $e$ originate from $T_{e'}$.

Lemma 5.1.

For every tree edge $e$, $\mathrm{ddc}(e)$ is defined correctly and uniquely. Moreover, every $3$-edge-cut $\{e_1, e_2, f\}$ of the considered kind satisfies $e_2 \preceq \mathrm{ddc}(e_1)$.

Proof.

Let $U$ be the set of the tails of all the edges leaping over $e$. Then, $\mathrm{ddc}(e)$ is the deepest tree edge $e'$ such that $U \subseteq T_{e'}$. Equivalently, the head of $e'$ must be an ancestor of all the vertices in $U$; hence, the deepest such vertex—the lowest common ancestor of $U$—is defined uniquely. The correctness of the definition of $\mathrm{ddc}(e)$ follows. For the latter part of the lemma, observe that $e_2 \preceq \mathrm{ddc}(e_1)$ if and only if every edge leaping over $e_1$ originates from $T_{e_2}$. ∎

Lemma 5.2.

For every tree edge $e$, $\mathrm{ddc}(e)$ is the tree edge whose head is the lowest common ancestor of two vertices: the tails of $\mathrm{minpre}(e)$ and $\mathrm{maxpre}(e)$.

Proof.

Let $U$ be the set defined in the proof of Lemma 5.1. Then, $\mathrm{ddc}(e)$ is the tree edge whose head is the lowest common ancestor of $U$. Equivalently, it is the lowest common ancestor of two vertices: one of minimum preorder in $U$, and the other of maximum preorder in $U$. By definition, those two vertices are exactly the tails of $\mathrm{minpre}(e)$ and $\mathrm{maxpre}(e)$, respectively. ∎

Thanks to Lemma 5.2, we can compute $\mathrm{ddc}(e)$ in constant time for each tree edge $e$, as the lowest common ancestor of two vertices can be computed in constant time after linear preprocessing [DBLP:journals/siamcomp/HarelT84].
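A sketch (our own illustration) of this computation, assuming a constant-time lca oracle and the tails of the minpre/maxpre edges computed earlier; all helper names are ours.

def deepest_down_cut(tree_edges, minpre_tail, maxpre_tail, lca, edge_with_head):
    """Sketch of Lemma 5.2: ddc(e) is the tree edge whose head is the lowest common
    ancestor of the tails of minpre(e) and maxpre(e).  We assume lca(u, v) answers
    in O(1) after linear preprocessing, and edge_with_head maps a non-root vertex
    to the tree edge entering it."""
    ddc = {}
    for e in tree_edges:
        u = minpre_tail[e]            # tail of the back edge with the smallest preorder
        v = maxpre_tail[e]            # tail of the back edge with the largest preorder
        ddc[e] = edge_with_head[lca(u, v)]
    return ddc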

We now shift our focus to the lower tree edge $e_2$. As in Section 4, the only possible candidate for the back edge of the cut leaping over $e_2$ is given by $\mathrm{high}_1(e_2)$. The only problematic part is locating the remaining tree edge $e_1$. Previously, we utilized randomness in order to calculate the compressed hash of $e_1$ given the compressed hashes of $e_2$ and $f$. Here, we instead use the following fact:

Lemma 5.3.

If $\{e_1, e_2, f\}$ is a $3$-edge-cut of the considered kind for some tree edges $e_1$ and $e_2$, then $e_1$ is the deepest tree edge satisfying $e_1 \prec e_2 \preceq \mathrm{ddc}(e_1)$.

Proof.

Naturally, the condition $e_1 \prec e_2 \preceq \mathrm{ddc}(e_1)$ must be satisfied if $\{e_1, e_2, f\}$ is a $3$-edge-cut (Lemma 5.1). It only remains to prove that $e_1$ is the deepest edge with this property. Pick two tree edges $g_1$, $g_2$ such that $g_1 \prec g_2 \prec e_2$, both satisfying the condition regarding their deepest down cuts. We then partition the tree into four parts: $T_{e_2}$, $T_{g_2} \setminus T_{e_2}$, $T_{g_1} \setminus T_{g_2}$, and $T \setminus T_{g_1}$. By the assumptions, all back edges leaping over $g_1$ or over $g_2$ must originate from $T_{e_2}$ (Figure 3).

Since $G$ is $3$-edge-connected, $\{g_2, e_2\}$ cannot be its edge cut; there must exist a back edge $b_1$ connecting $T_{g_2} \setminus T_{e_2}$ with the rest of the graph. It cannot leap over $g_2$, as it would then also need to originate from $T_{e_2}$; hence, $b_1$ connects $T_{g_2} \setminus T_{e_2}$ with $T_{e_2}$. Analogously, $\{g_1, g_2\}$ is not an edge cut of $G$, from which we infer that there exists another back edge $b_2$ connecting $T_{g_1} \setminus T_{g_2}$ with $T_{e_2}$. Since we produced two separate back edges $b_1$, $b_2$ connecting $T_{e_2}$ with $T_{g_1} \setminus T_{e_2}$, there cannot exist any $3$-edge-cut of the considered kind containing both $g_1$ and $e_2$. The statement of the lemma follows. ∎

Figure 3: The setting in Lemma 5.3. Given that $e_2 \preceq \mathrm{ddc}(g_1)$ and $e_2 \preceq \mathrm{ddc}(g_2)$, the red back edges are forbidden from occurring in $G$, while the blue back edges can appear in $G$. We prove that both $b_1$ and $b_2$ appear in the graph.

Lemmas 5.1 and 5.3 naturally lead to the following idea: given a tree edge $e_1$, the set of tree edges $e$ satisfying $e_1 \prec e \preceq \mathrm{ddc}(e_1)$ is a path connecting the head of $e_1$ with the head of $\mathrm{ddc}(e_1)$; we assign this path a weight equal to the depth of $e_1$ in $T$. We then invoke Theorem 4.3 with the tree $T$, the weighted paths defined above, and $k = 1$ (formally, to match the minimization in Theorem 4.3, we may use $n$ minus the depth as the weight). This allows us to find, in $\mathcal{O}(n + m)$ time, for each tree edge $e_2$, the path of the maximum weight containing $e_2$ as an edge. This path naturally corresponds to the tree edge $e_1$ from Lemma 5.3. This way, for every tree edge $e_2$, we have uniquely identified a back edge $f = \mathrm{high}_1(e_2)$ and a tree edge $e_1$ such that the only possible $3$-edge-cut containing $e_2$ as the deeper tree edge, if it exists, is $\{e_1, e_2, f\}$.

It only remains to verify that $\{e_1, e_2, f\}$ is a $3$-edge-cut. We need to make sure that:

  • All back edges leaping over $e_1$ originate from $T_{e_2}$. The equivalent condition $e_2 \preceq \mathrm{ddc}(e_1)$ is guaranteed by the algorithm, so we do not need to check it again.

  • Exactly one edge ($f$) connects $T_{e_2}$ with $T_{e_1} \setminus T_{e_2}$. Since $f = \mathrm{high}_1(e_2)$, it suffices to verify that the edge $\mathrm{high}_2(e_2)$, which is the back edge leaping over $e_2$ whose head is the deepest apart from $\mathrm{high}_1(e_2)$, also leaps over $e_1$ (i.e., its head does not lie in $T_{e_1} \setminus T_{e_2}$).

It can be easily verified that the implementation of the subroutine is deterministic and runs in linear time with respect to the size of $G$.

5.3 Two tree edges, upper case

Recall that in this case, we are required to find all $3$-edge-cuts intersecting $T$ in two edges $e_1$ and $e_2$, such that $e_1 \preceq e_2$ and the remaining back edge $f$ connects $T_{e_1} \setminus T_{e_2}$ with $T \setminus T_{e_1}$ (Figure 2). Similarly to the randomized case, we use the fact that, for a given tree edge $e_1$, the tail of $f$ has either the smallest preorder (i.e., $f = \mathrm{minpre}_1(e_1)$) or the largest preorder (i.e., $f = \mathrm{maxpre}_1(e_1)$) among all the back edges leaping over $e_1$. Without loss of generality, assume that $f = \mathrm{minpre}_1(e_1)$; the latter case is analogous.

In a similar vein to the previous case, observe that if $\{e_1, e_2, f\}$ is a $3$-edge-cut for some $e_2$, then all back edges leaping over $e_1$ other than $f$ must originate from $T_{e_2}$.

This leads to the following slight generalization of the deepest down cut (Definition 1):

Definition 2.

For a tree edge $e$ in $T$, we define the value $\mathrm{ddc}'(e)$ as the deepest tree edge $e'$ for which all back edges leaping over $e$ other than $\mathrm{minpre}_1(e)$ originate from $T_{e'}$.

The process of computation of $\mathrm{ddc}(e)$ asserted by Lemma 5.2 can be easily modified to match our needs: recall that $\mathrm{ddc}(e)$ is the tree edge whose head is the lowest common ancestor of the tails of $\mathrm{minpre}_1(e)$ and $\mathrm{maxpre}_1(e)$. Since we exclude $\mathrm{minpre}_1(e)$ from the condition in Definition 2, we use the edge $\mathrm{minpre}_2(e)$ instead: that is, the back edge leaping over $e$ with the next lowest preorder of the tail. Therefore, we compute $\mathrm{ddc}'(e)$ as the tree edge whose head is the lowest common ancestor of the tails of $\mathrm{minpre}_2(e)$ and $\mathrm{maxpre}_1(e)$ (Figure 4).

Figure 4: The computation of $\mathrm{ddc}'(e)$ excludes $\mathrm{minpre}_1(e)$, and computes the lowest common ancestor of the tails of the remaining back edges leaping over $e$.

Now, fix the shallower tree edge $e_1$ of the cut. Let $g = \mathrm{ddc}'(e_1)$. Then, if some tree edge $e_2$ belongs to the $3$-edge-cut $\{e_1, e_2, f\}$, then $e_2 \preceq g$ (otherwise, there would be multiple back edges connecting $T_{e_1} \setminus T_{e_2}$ with $T \setminus T_{e_1}$). The natural question is then: is $\{e_1, g, f\}$ a $3$-edge-cut? The only possible problem is that there may exist back edges connecting $T_g$ with $T_{e_1} \setminus T_g$. Fortunately, if this is the case, then the back edge $\mathrm{high}_1(g)$, dependent only on $g$, is one of these back edges.

Hence, let $f' = \mathrm{high}_1(g)$. If $f'$ leaps over $e_1$, we are done, and $\{e_1, g, f\}$ is an edge cut. Otherwise, the subtree $T_{e_2}$ for the sought $3$-edge-cut must contain the head of $f'$ (or else $f'$ would connect $T_{e_2}$ with $T_{e_1} \setminus T_{e_2}$). Let then $g'$ be the deepest tree edge whose subtree contains the head of $f'$, that is, the edge whose head coincides with the head of $f'$. Then, the edge $e_2$ of the $3$-edge-cut must satisfy $e_2 \preceq g'$. We can then repeat this procedure: given $g'$, we compute $f'' = \mathrm{high}_1(g')$, and either $f''$ leaps over $e_1$ and we are done, or we calculate another edge $g''$ supplying a better bound on the depth of $e_2$ (Figure 5). Since the graph is finite, this process terminates in $\mathcal{O}(n)$ steps, producing the lowest tree edge $e_2$ such that no back edge connects $T_{e_2}$ with $T_{e_1} \setminus T_{e_2}$, and no back edge other than $f$ connects $T_{e_1} \setminus T_{e_2}$ with $T \setminus T_{e_1}$. If $e_2 \preceq e_1$, then no $3$-edge-cut of this kind exists for $e_1$ and $f$. Otherwise, $\{e_1, e_2, f\}$ is naturally a correct $3$-edge-cut. Since two edges of a $3$-edge-cut uniquely identify the third edge (Lemma 4.5), we conclude that this is the only $3$-edge-cut containing $e_1$ and $f$.

Figure 5: The main iteration of the algorithm. We set $f' = \mathrm{high}_1(g)$ and $f'' = \mathrm{high}_1(g')$, where $g'$ is the tree edge with the same head as $f'$.

We can now sum up the inefficient deterministic implementation of the algorithm for the current case (Algorithm 2).

function SolveTwoTreeEdgesUpper()
     for each tree edge $e_1$ of $T$ do
          $f \leftarrow \mathrm{minpre}_1(e_1)$
          $g \leftarrow \mathrm{ddc}'(e_1)$
          $f' \leftarrow \mathrm{high}_1(g)$
          while $e_1 \prec g$ and $f'$ does not leap over $e_1$ do
               $g \leftarrow$ the tree edge with the same head as $f'$
               $f' \leftarrow \mathrm{high}_1(g)$
          if $e_1 \prec g$ then
               add $\{e_1, g, f\}$ to the list of $3$-edge-cuts
     Run the analogous algorithm for $\mathrm{maxpre}_1$ instead of $\mathrm{minpre}_1$.
Algorithm 2 Inefficient deterministic solution for “two tree edges, upper case”

In order to optimize the algorithm, we seek to eliminate the inner loop. Indeed, for each $e_1$, the process is very similar: start with the edge $\mathrm{ddc}'(e_1)$, and then repeatedly replace it with a higher edge, until a replacement would result in an edge closer to the root than $e_1$. We can model this process with a rooted tree $D$ defined as follows:

  • the vertices of $D$ are the tree edges of $T$, and an additional special vertex $\rho$,

  • $\rho$ is the root of $D$,

  • for a non-root vertex $g$ of $D$, its parent is the tree edge with the same head as $\mathrm{high}_1(g)$. If the head of $\mathrm{high}_1(g)$ coincides with the root of $T$, then the parent of $g$ in $D$ is $\rho$.

Naturally, $D$ can be constructed in linear time with respect to the size of $G$; moreover, if an edge $g_1$ is the parent of another edge $g_2$ in $D$, then $g_1 \prec g_2$.

Now, the loop is equivalent to the repeated replacement of $g$ with its parent in $D$ as long as the parent is greater or equal than $e_1$ with respect to $\preceq$. In other words, the final edge is taken as the shallowest ancestor $g$ of $\mathrm{ddc}'(e_1)$ in $D$ for which $e_1 \preceq g$ still holds.

This leads to the final idea: we simulate a growing forest of rooted subtrees of $D$ using a disjoint set union data structure $F$ (Theorem 4.1). Each subtree additionally keeps its root, which can be retrieved with $\mathrm{MinVertex}$ (Lemma 4.2). Initially, each subtree of the forest contains a single vertex.

After the initialization of $F$, we iterate over the tree edges $e_1$ of $T$ in the decreasing order of depth in $T$. Throughout the process, we maintain the following invariant on $F$: the $D$-edge between a vertex $g$ and its parent in $D$ has been added to the forest if and only if the parent of $g$ in $D$ has been considered at a previous iteration of the main loop. Hence, at the beginning of the iteration for a given edge $e_1$, we add to $F$ all tree edges of $D$ originating from $e_1$. At this point of time, for every tree edge $g$ such that $e_1 \preceq g$, the final edge produced by the loop of Algorithm 2 started at $g$ is given by $\mathrm{MinVertex}(g)$. This reduces the entire loop to a single query on $F$ (Algorithm 3).

function SolveTwoTreeEdgesUpper()
     $D \leftarrow$ the rooted tree on the tree edges of $T$ and $\rho$ defined above
     $F \leftarrow$ the disjoint set union data structure built on $D$ (Lemma 4.2)
     for each tree edge $e_1$ of $T$, in order from the deepest to the shallowest do
          for each $g$ – child of $e_1$ in $D$ do
               $\mathrm{Union}(e_1, g)$
          $f \leftarrow \mathrm{minpre}_1(e_1)$
          $g \leftarrow \mathrm{ddc}'(e_1)$
          $e_2 \leftarrow \mathrm{MinVertex}(g)$
          if $e_1 \prec e_2$ then
               add $\{e_1, e_2, f\}$ to the list of $3$-edge-cuts
     Run the analogous algorithm for $\mathrm{maxpre}_1$ instead of $\mathrm{minpre}_1$.
Algorithm 3 Efficient deterministic solution for “two tree edges, upper case”

It can be easily seen that we initialize $F$ on a tree with $n$ vertices, and we issue $\mathcal{O}(n)$ queries to it in total. Therefore, the whole subroutine runs in time linear with respect to the size of $G$.

Summing up, we replaced each randomized subroutine with its deterministic counterpart, preserving the linear guarantee on the runtime of the algorithm. We conclude that there exists a deterministic linear time algorithm listing $3$-edge-cuts in $3$-edge-connected graphs.

6 Reconstructing the structure of $4$-edge-connected components

In this section, we show how to build a structure of $4$-edge-connected components of a $3$-edge-connected graph $G$, given the set of all $3$-edge-cuts in $G$.

First of all, recall what such a structure looks like.

Theorem 6.1 ([SimplerCactus, Corollary 8]).

For a $3$-edge-connected graph