1 Introduction
Determining or testing the edge connectivity of a graph G, as well as computing notions of connected components, is a classical subject in graph theory, motivated by several application areas (see, e.g., [12]), that has been extensively studied since the 1970s. An (edge) cut of G is a set of edges C such that G ∖ C is not connected. We say that C is a k-cut if its cardinality is |C| = k. The edge connectivity of G, denoted by λ(G), is the minimum cardinality of an edge cut of G. A graph G is k-edge-connected if λ(G) ≥ k. A cut C separates two vertices u and v if u and v lie in different connected components of G ∖ C. Vertices u and v are k-edge-connected if there is no (k−1)-cut that separates them. By Menger’s theorem [9], u and v are k-edge-connected if and only if there are k edge-disjoint paths between u and v. A k-edge-connected component of G is a maximal set S of vertices such that there is no (k−1)-edge cut in G that disconnects any two vertices u, v ∈ S (i.e., u and v are in the same connected component of G ∖ C for any (k−1)-edge cut C). We can define, analogously, the vertex cuts and the k-vertex-connected components of G. It is known how to compute the k-edge cuts, k-vertex cuts, k-edge-connected components and k-vertex-connected components of a graph in linear time for k ≤ 3 [4, 6, 11, 14, 17]. The case k = 4 has also received significant attention [1, 2, 7, 8], but until very recently, none of the previous algorithms achieved linear running time. In particular, Kanevsky and Ramachandran [7] showed how to test whether a graph is 4-vertex-connected in O(n^2) time. Furthermore, Kanevsky et al. [8] gave an O(m + n·α(m, n))-time algorithm to compute the 4-vertex-connected components of a 3-vertex-connected graph, where α is a functional inverse of Ackermann’s function [16]. Using the reduction of Galil and Italiano [4] from edge connectivity to vertex connectivity, the same bounds can be obtained for 4-edge connectivity. Specifically, one can test whether a graph is 4-edge-connected in O(n^2) time, and one can compute the 4-edge-connected components of a 3-edge-connected graph in O(m + n·α(m, n)) time.
Dinitz and Westbrook [2] presented an O(m + n log n)-time algorithm to compute the 4-edge-connected components of a general graph G (i.e., when G is not necessarily 3-edge-connected). Nagamochi and Watanabe [13] gave an O(m + k^2 n^2)-time algorithm to compute the k-edge-connected components of a graph G, for any integer k.
Very recently, two linear-time algorithms for computing the 4-edge-connected components of an undirected graph were presented in [5, 10]. The main part of both algorithms is the computation of the 3-edge cuts of a 3-edge-connected graph G. The algorithms operate on a depth-first search (DFS) tree T of G [14], with start vertex r, and compute three types of cuts, depending on the number of tree edges contained in the cut. We refer to a 3-cut that consists of i tree edges of T as a type-i cut of G. The challenging cases are when the cut is a type-2 or a type-3 cut. Nadara et al. [10] provided an elegant way to handle type-3 cuts. Specifically, they showed that computing all type-3 cuts can be reduced, in linear time, to computing type-1 and type-2 cuts, by contracting edges of G. To handle type-2 cuts in linear time, both [5] and [10] require the use of the static tree disjoint-set-union (DSU) data structure of Gabow and Tarjan [3], which is quite sophisticated and not amenable to simple implementations. Here, we present an improved version of the algorithm of [5] for identifying type-2 cuts, so that it only uses simple data structures. The resulting algorithm relies only on basic properties of depth-first search (DFS) [14], and on parameters carefully defined on the structure of a DFS spanning tree (see Section 2). As a consequence, it is simple to describe and to implement, and it does not require the power of the RAM model of computation, thus implying the following new results:
Theorem 1.1.
The 3-edge cuts of a 3-edge-connected undirected graph can be computed in linear time on a pointer machine.
Corollary 1.2.
The 4-edge-connected components of an undirected graph can be computed in linear time on a pointer machine.
2 Depth-first search and related notions
In this section we introduce the parameters that are used in our algorithm, which are defined with respect to a depth-first search spanning tree. Let G = (V, E) be a connected undirected graph with n vertices and m edges, which may have multiple edges. Let T be the spanning tree of G provided by a depth-first search (DFS) of G [14], with start vertex r. A vertex u is an ancestor of a vertex v (and v is a descendant of u) in T if the tree path from r to v contains u. Thus, we consider a vertex to be both an ancestor and a descendant of itself. The edges in T are called tree-edges; the edges in E ∖ T are called back-edges, as their endpoints have an ancestor-descendant relation in T. We let p(v) denote the parent of a vertex v in T. If u is a descendant of v in T, we denote the set of vertices of the simple tree path from u to v as T[u, v]. The expressions T[u, v) and T(u, v] have the obvious meaning (i.e., the vertex on the side of the parenthesis is excluded). We identify vertices with their preorder number assigned during the DFS. Thus, if u is an ancestor of v in T, then u ≤ v. Let T(v) denote the set of descendants of v, and let ND(v) denote the number of descendants of v. Then, vertex u is a descendant of v (i.e., u ∈ T(v)) if and only if v ≤ u < v + ND(v) [15].
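As an illustration of the preorder mechanics above, the following sketch computes preorder numbers and descendant counts over a DFS tree and checks the containment criterion. It is our own illustrative code (all identifiers are ours), not part of the algorithm of [5].

```python
def dfs_preorder(adj, root):
    """Assign preorder numbers pre[v] and descendant counts nd[v]
    over the DFS tree of a connected undirected graph, given as
    adjacency lists."""
    n = len(adj)
    pre = [-1] * n   # preorder number of each vertex (-1 = unvisited)
    nd = [0] * n     # number of descendants, the vertex itself included
    counter = 0

    def dfs(v):
        nonlocal counter
        pre[v] = counter
        counter += 1
        nd[v] = 1
        for w in adj[v]:
            if pre[w] == -1:
                dfs(w)
                nd[v] += nd[w]

    dfs(root)
    return pre, nd

def is_descendant(u, v, pre, nd):
    # u is a descendant of v iff pre[v] <= pre[u] < pre[v] + nd[v]
    return pre[v] <= pre[u] < pre[v] + nd[v]
```

On the tree with edges (0, 1), (0, 2), (1, 3), rooted at 0, vertex 3 is recognized as a descendant of 1, while vertex 2 is not.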
Whenever (x, y) denotes a back-edge, we shall assume that x is a descendant of y. We let B(v) denote the set of back-edges (x, y), where x is a descendant of v and y is a proper ancestor of v. Thus, if we remove the tree-edge (v, p(v)), T(v) remains connected to the rest of the graph through the back-edges in B(v). Furthermore, we have the following property:
Property 2.1.
([5]) A connected graph G is 2-edge-connected if and only if |B(v)| ≥ 1, for every vertex v ≠ r. Furthermore, G is 3-edge-connected only if |B(v)| ≥ 2, for every vertex v ≠ r.
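The counts involved in Property 2.1 — for each vertex v, the number of back-edges (x, y) with x a descendant of v and y a proper ancestor of v — can be obtained in linear time by a bottom-up aggregation over the DFS tree. The following sketch is our own (identifiers and structure are assumptions, not taken from [5]) and, for simplicity, handles simple connected graphs only:

```python
def bcounts(adj, root):
    """For each vertex v, count the back-edges (x, y) with x a
    descendant of v and y a proper ancestor of v in a DFS tree
    rooted at `root`.  Simple connected graph assumed."""
    n = len(adj)
    pre = [-1] * n
    parent = [-1] * n
    order = []          # vertices in DFS preorder
    up = [0] * n        # back-edges whose lower end is v
    down = [0] * n      # back-edges whose higher end is v
    cnt = 0

    def dfs(v):
        nonlocal cnt
        pre[v] = cnt
        cnt += 1
        order.append(v)
        for w in adj[v]:
            if pre[w] == -1:
                parent[w] = v
                dfs(w)
            elif pre[w] < pre[v] and w != parent[v]:
                up[v] += 1      # (v, w) is a back-edge; count it once,
                down[w] += 1    # from the side of its lower end v

    dfs(root)
    b = [0] * n
    for v in reversed(order):   # children are processed before parents
        b[v] += up[v] - down[v]  # count(v) = up(v) - down(v) + children
        if parent[v] != -1:
            b[parent[v]] += b[v]
    return b
```

By the property, the graph is 2-edge-connected exactly when every count for v ≠ root is at least 1, and it can be 3-edge-connected only if every such count is at least 2.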
We let bcount(v) denote the number of elements of B(v). Assume that G is 3-edge-connected, and let v ≠ r be a vertex of G. Then we have bcount(v) ≥ 2, and therefore there are at least two back-edges in B(v). We define the first low point of v, denoted by low(v), as the minimum vertex y such that there exists a back-edge (x, y) ∈ B(v). Also, we let lowD(v) denote such an x, i.e., a descendant of v that is connected with low(v) via a back-edge. (Notice that lowD(v) is not uniquely determined.) Furthermore, we define the second low point of v, denoted by low2(v), as the minimum vertex y such that there exists a back-edge (x, y) ∈ B(v) ∖ {(lowD(v), low(v))}, and let low2D(v) denote such an x. Similarly, we define the high point of v, denoted by high(v), as the maximum vertex y such that there exists a back-edge (x, y) ∈ B(v). We also let highD(v) denote a descendant of v that is connected with high(v) via a back-edge. We let l(v) denote the smallest y for which there exists a back-edge (v, y), or l(v) = v if no such back-edge exists. (Thus, low(v) ≤ l(v).) Furthermore, we let l2(v) denote the smallest y ≠ l(v) for which there exists a back-edge (v, y), or l2(v) = v if no such back-edge exists. It is easy to compute all low(v), lowD(v), low2(v), low2D(v), l(v) and l2(v) during the DFS. For the computation of all high(v) (and highD(v)), [5] gave a linear-time algorithm that uses the static tree DSU data structure of Gabow and Tarjan [3].
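The remark that the low parameters are computable during the DFS can be made concrete as follows. In this sketch the names low and l mirror the first low point and the l-parameter described above, but the code itself (and every other identifier) is our own; the second low point can be obtained the same way by keeping the two smallest values per subtree, and a simple connected graph is assumed:

```python
def low_points(adj, root):
    """Compute l[v] (the minimum vertex reached by a back-edge leaving
    v itself, or v if there is none) and low[v] (the minimum higher end
    of a back-edge (x, y) with x a descendant of v and y a proper
    ancestor of v, or v if there is none).  Vertices are compared by
    their DFS preorder numbers."""
    n = len(adj)
    pre = [-1] * n
    parent = [-1] * n
    l = list(range(n))
    low = list(range(n))
    counter = 0

    def dfs(v):
        nonlocal counter
        pre[v] = counter
        counter += 1
        for w in adj[v]:
            if pre[w] == -1:
                parent[w] = v
                dfs(w)
                # back-edges over w whose higher end lies above v are
                # also back-edges over v; merging low[w] keeps their min
                if pre[low[w]] < pre[low[v]]:
                    low[v] = low[w]
            elif pre[w] < pre[v] and w != parent[v]:
                if pre[w] < pre[l[v]]:
                    l[v] = w    # (v, w) is a back-edge ending at w
        if pre[l[v]] < pre[low[v]]:
            low[v] = l[v]

    dfs(root)
    return pre, l, low
```

On a 4-cycle rooted at vertex 0, every vertex has low point 0, while only vertex 3 (the deepest one) carries a back-edge of its own.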
In order to gather the connectivity information that is contained in the sets B(v), we also have to consider the lower ends of the back-edges in B(v). Thus we define the maximum point of v, denoted by M(v), as the maximum vertex z such that T(z) contains the lower ends of all back-edges in B(v). In other words, M(v) is the nearest common ancestor of all x for which there exists a back-edge (x, y) ∈ B(v). (Clearly, M(v) is a descendant of v.) Let z be a vertex and v_1 > v_2 > … > v_k be all the vertices with maximum point z, sorted in decreasing order. Observe that v_{i+1} is an ancestor of v_i, for every i ∈ {1, …, k−1}, since z is a common descendant of all v_i. Then we have M(v_1) = … = M(v_k) = z, and we define nextM(v_i) = v_{i+1}, for every i ∈ {1, …, k−1}, and prevM(v_i) = v_{i−1}, for every i ∈ {2, …, k}. Thus, for every vertex v, nextM(v) is the successor of v in the decreasingly sorted list of the vertices with maximum point M(v), and prevM(v) is the predecessor of v in that list.
Now let v be a vertex and let c_1, …, c_k be the children of v sorted in non-decreasing order w.r.t. their low point. We let low1C(v) be c_1, if k ≥ 1, and low2C(v) be c_2, if k ≥ 2. (Note that this ordering is not uniquely determined, since some children of v may have the same low point.) Then we call c_1 the low1 child of v, and c_2 the low2 child of v. We let M~(v) denote the nearest common ancestor of all x for which there exists a back-edge (x, y) ∈ B(v) with x a proper descendant of v. We leave M~(v) undefined if no such proper descendant of v exists. We also define Mlow1(v) as the nearest common ancestor of all x for which there exists a back-edge (x, y) ∈ B(v) with x being a descendant of the low1 child of v, and also define Mlow2(v) as the nearest common ancestor of all x for which there exists a back-edge (x, y) ∈ B(v) with x a descendant of the low2 child of v. We leave Mlow1(v) (resp. Mlow2(v)) undefined if no such descendant of the low1 (resp. low2) child of v exists.
The following list summarizes the concepts used by [5] that are defined on a DFS tree, and can be computed in linear time (except high(v) and highD(v), which we do not compute). Refer to Figure 1 for an illustration.

bcount(v) = |B(v)|.
low(v) = min { y : (x, y) ∈ B(v) }.
lowD(v): a vertex x such that (x, low(v)) ∈ B(v).
low2(v) = min { y : (x, y) ∈ B(v) ∖ {(lowD(v), low(v))} }.
low2D(v): a vertex x such that (x, low2(v)) ∈ B(v) ∖ {(lowD(v), low(v))}.
high(v) = max { y : (x, y) ∈ B(v) }.
highD(v): a vertex x such that (x, high(v)) ∈ B(v).
l(v) = min ({ y : (v, y) is a back-edge } ∪ { v }).
l2(v) = min ({ y : (v, y) is a back-edge and y ≠ l(v) } ∪ { v }).
M(v): the nearest common ancestor of all x for which there exists a back-edge (x, y) ∈ B(v).
M~(v): the nearest common ancestor of all x for which there exists a back-edge (x, y) ∈ B(v) with x a proper descendant of v.
Mlow1(v): the nearest common ancestor of all x for which there exists a back-edge (x, y) ∈ B(v) with x a descendant of the low1 child of v.
Mlow2(v): the nearest common ancestor of all x for which there exists a back-edge (x, y) ∈ B(v) with x a descendant of the low2 child of v.
nextM(v): the maximum vertex u such that u < v and M(u) = M(v).
prevM(v): the minimum vertex u such that u > v and M(u) = M(v).
We note that the notion of low points plays a central role in classic algorithms for computing the biconnected components [14], the triconnected components [6] and the 3-edge-connected components [4, 6, 11, 17] of a graph. Hopcroft and Tarjan [6] also use a concept of high points, which, however, is different from ours. Our goal is to provide a method to compute type-2 cuts that avoids the use of high(v) and highD(v). We achieve this by introducing two new parameters.
2.1 Two new key parameters
Here we assume that the input graph G is 3-edge-connected. Let be a vertex such that . Then we have . Thus we can define as the lowest lower end of all back-edges in , and we let be a vertex such that is a back-edge in . Formally, we have

.

a vertex such that .
Now we describe how to compute for every vertex such that . To do this efficiently, we process the vertices in a bottom-up fashion. For every vertex that we process, we check whether . If that is the case, then is defined and it lies on the simple tree-path . Thus we descend the path , starting from , following the children of the vertices on the path; for every vertex that we encounter we check whether there exists a back-edge with . The first with this property is , and we set . To achieve linear running time, we let denote the list of all vertices for which there exists an incoming back-edge to with higher end . In other words, contains all vertices for which there exists a back-edge . Furthermore, we keep the elements of sorted in increasing order (this can be done easily in linear time with bucket sort). When we process a vertex as we descend , we traverse starting from the element we accessed the last time we traversed (or, if this is the first time we traverse , from the first element of ). Thus, we need a variable to store the element of that we accessed the last time we traversed . Now, for every that we meet, we check whether . If that is the case, then we set ; otherwise, we move to the next element of . If we reach the end of , then we descend the path by moving to the child of . In fact, if , then we may descend immediately to . This ensures that will not be accessed again. Algorithm 1 shows how to compute all pairs , for all vertices with , in total linear time.
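The bucket-sort step mentioned above can be sketched as follows (the list name L and every other identifier are ours, under the assumption that vertices are already identified with their preorder numbers): one pass over the lower ends in increasing order leaves every list of incoming back-edges sorted, in O(n + m) total time.

```python
def incoming_backedge_lists(backedges, n):
    """Given the back-edges as pairs (x, y), with x a descendant of y,
    and vertices identified with their preorder numbers 0..n-1, build
    for every vertex y the list of lower ends x, sorted increasingly."""
    by_lower_end = [[] for _ in range(n)]
    for (x, y) in backedges:
        by_lower_end[x].append(y)
    L = [[] for _ in range(n)]
    # scanning the lower ends in increasing order appends the x values
    # to each L[y] in increasing order: a bucket sort in O(n + m) time
    for x in range(n):
        for y in by_lower_end[x]:
            L[y].append(x)
    return L
```

Each list can then be traversed with a per-vertex cursor, as in the processing described above, so no element is scanned twice.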
Proposition 2.2.
Algorithm 1 correctly computes all pairs , for all vertices with , in total linear time.
Proof.
Let be a vertex with . We will prove inductively that will be computed correctly, and that will be the lowest vertex which is a descendant of such that is a back-edge. So let us assume that we have run Algorithm 1 and we have correctly computed all pairs , for all vertices with , and is the lowest vertex in such that is a back-edge. Suppose also that we have currently descended the path , we have reached , and .
Let us assume, first, that , and let be the back-edge such that and is minimal with this property. The while loop in line 1 will search the list of incoming back-edges to , starting from . If is the first element of , then it is certainly true that will be found. Otherwise, let . Due to the inductive hypothesis, we have that , for a vertex with . Then, is in , but also in , and thus it is a common descendant of and . This means that and are related as ancestor and descendant. In particular, since , we have that is an ancestor of . Furthermore, since is an ancestor of , it is also an ancestor of ; therefore, since is an ancestor of , it is also an ancestor of . Since is an ancestor of , this implies that is an ancestor of . Since and , we thus have that is an ancestor of , and therefore . Thus, since is the lowest descendant of such that is a back-edge, and is the lowest descendant of such that is a back-edge, we have . This shows that will be accessed during the while loop in line 1.
Now let us assume that . This means that is greater than , and we have to descend the path to find it. First, let be the child of in the direction of . Then we have (since is a descendant of , and therefore a descendant of , and we have ). If there was another child of with , this would imply that , which is absurd, since is a proper ancestor of , and therefore a proper ancestor of . This means that is the child of , and thus we may descend to . Now we have . If , then we simply traverse the list of incoming back-edges to , in line 1, and repeat the same process. Otherwise, let . Due to the inductive hypothesis, we know that has been computed correctly. Since is an ancestor of , it is also an ancestor of . Furthermore, is a descendant of . Thus, is an ancestor of , and therefore is an ancestor of (since and ). This means that is an ancestor of . Now we see that lies on . (For otherwise, would be a back-edge in with and , contradicting the minimality of .) Thus we may descend immediately to . Then we traverse the list of incoming back-edges to , in line 1, and repeat the same process. Eventually we will reach and have it computed correctly. It should be clear that no vertex on the path will be traversed again, and this ensures the linear complexity of Algorithm 1. ∎
3 Simple algorithm for computing all cuts of type-2
In this section we will show how to compute all cuts of type-2 (consisting of two tree-edges and one back-edge) of a 3-edge-connected graph in linear time, without using the high points of [5]. We use the following characterization of such cuts.
Lemma 3.1.
([5]) Let u and v be two vertices with u < v. Suppose that {(u, p(u)), (v, p(v)), e} is a cut, where e is a back-edge. Then u is an ancestor of v, and either B(u) = B(v) ∪ {e} or B(v) = B(u) ∪ {e}. Conversely, if there exists a back-edge e such that B(u) = B(v) ∪ {e} or B(v) = B(u) ∪ {e} is true, then {(u, p(u)), (v, p(v)), e} is a cut.
In the following, for any vertex v, U(v) denotes the set of all vertices u that are ancestors of v and such that B(u) = B(v) ∪ {e}, for a back-edge e. Similarly, for any vertex v, D(v) denotes the set of all vertices w that are descendants of v and such that B(w) = B(v) ∪ {e}, for a back-edge e. In [5] it is shown that u_1 = u_2 (resp. w_1 = w_2) for every two vertices u_1, u_2 ∈ U(v) (resp. w_1, w_2 ∈ D(v)). Thus, in order to find all type-2 cuts, it is sufficient to find, for every vertex v, the unique vertex u ∈ U(v) (resp. w ∈ D(v)), if it exists, and then identify the back-edge e such that {(u, p(u)), (v, p(v)), e} (resp. {(v, p(v)), (w, p(w)), e}) is a cut. The following two lemmas show how to identify e.
Lemma 3.2.
Let be two vertices such that is a descendant of and , for a back-edge . Then we have . In particular, we have that either and , or and , or and .
Proof.
First we will show that is a proper ancestor of . Obviously, is an ancestor of , since . Furthermore, since , is a descendant of , and is a proper ancestor of , and therefore a proper ancestor of . Thus, it cannot be the case that , for otherwise we would have . This shows that is a proper ancestor of . Now we will show that , for every . Suppose, for the sake of contradiction, that there exists a such that . Then we have , for every , and so , for every . Then, since , we have that , for all , except possibly a . Thus, is a common ancestor of all , except possibly , and so, since , we conclude that is an ancestor of , which is absurd. Thus we have demonstrated that , for every .
Now, there are two cases to consider: either , or is a descendant of a child of . First take the case . Then is obviously a back-edge in . Furthermore, since is not an ancestor of , we also have . Thus . Since every other back-edge of the form with must have , we conclude that is the unique back-edge of the form with . Since , this means that . Now consider the case that is a descendant of a child of . Then we have that is either a descendant of , or a descendant of (since , for every ). We will consider only the case that is a descendant of , since the other case can be treated in a similar manner. So let be a descendant of . Then we must have , for otherwise there would exist a back-edge of the form with , and so we would have two distinct back-edges , which is absurd. Thus, , and therefore, since , we have . Thus, we must necessarily have , for otherwise would be an ancestor of both and , and therefore an ancestor of , which is absurd. Since and , this means that , and therefore . ∎
Lemma 3.3.
Let be two vertices such that is a descendant of and , for a back-edge . Then we have . In particular, we have that either and , or and , or and .
Proof.
implies that is an ancestor of . If , then . (For otherwise, there exists a vertex with and , and so we have , which is impossible, since .) Since and and is a back-edge in , we conclude that .
Now let us assume that is a proper ancestor of . There are two cases to consider: either , or is a descendant of a child of . If , then is a back-edge in . Furthermore, (for otherwise we would have that is an ancestor of ). This shows that . We also see that , for any back-edge with . Thus, since , we have . Finally, let us assume that is a descendant of a child of , i.e., is a descendant of , for some . We will show that , for every . Suppose, for the sake of contradiction, that , for some . Since , we have ; therefore, since , we have . This shows that there exists a back-edge which is also in . But since , it cannot be the case that . Thus we have exhibited two distinct back-edges , which is absurd. This shows that , for every . If , this implies that is a common ancestor of at least two children of , which is absurd. Thus we have that . Now suppose, for the sake of contradiction, that . It cannot be the case that , for otherwise also, which would imply that is an ancestor of , which is absurd. Now, it cannot be the case that , for otherwise there would exist two distinct back-edges , which is also absurd. Thus, is the only child of with , which means that there must exist a back-edge with . Now if