Improved Linear-Time Algorithm for Computing the 4-Edge-Connected Components of a Graph

08/19/2021, by Loukas Georgiadis et al.

We present an improved algorithm for computing the 4-edge-connected components of an undirected graph in linear time. The new algorithm uses only elementary data structures, and it is simple to describe and to implement in the pointer machine model of computation.


1 Introduction

Determining or testing the edge connectivity of a graph, as well as computing notions of connected components, is a classical subject in graph theory, motivated by several application areas (see, e.g., [12]), that has been studied extensively since the 1970s. An (edge) cut of a graph G is a set of edges C such that G ∖ C is not connected. We say that C is a k-cut if its cardinality is k. The edge connectivity of G, denoted by λ(G), is the minimum cardinality of an edge cut of G. A graph G is k-edge-connected if λ(G) ≥ k. A cut C separates two vertices u and v if u and v lie in different connected components of G ∖ C. Vertices u and v are k-edge-connected if there is no cut with fewer than k edges that separates them. By Menger's theorem [9], u and v are k-edge-connected if and only if there are k edge-disjoint paths between u and v. A k-edge-connected component of G is a maximal set of vertices S such that no cut with fewer than k edges disconnects any two vertices u, v ∈ S (i.e., u and v are in the same connected component of G ∖ C for any such cut C). We can define, analogously, the vertex cuts and the k-vertex-connected components of G. It is known how to compute the k-edge cuts, k-vertex cuts, k-edge-connected components and k-vertex-connected components of a graph in linear time for k ≤ 3 [4, 6, 11, 14, 17]. The case k = 4 has also received significant attention [1, 2, 7, 8], but until very recently, none of the previous algorithms achieved linear running time. In particular, Kanevsky and Ramachandran [7] showed how to test whether a graph is 4-vertex-connected in O(n²) time. Furthermore, Kanevsky et al. [8] gave an O(m + nα(m, n))-time algorithm to compute the 4-vertex-connected components of a 3-vertex-connected graph, where α is a functional inverse of Ackermann's function [16]. Using the reduction of Galil and Italiano [4] from edge connectivity to vertex connectivity, the same bounds can be obtained for 4-edge connectivity. Specifically, one can test whether a graph is 4-edge-connected in O(n²) time, and one can compute the 4-edge-connected components of a 3-edge-connected graph in O(m + nα(m, n)) time. Dinitz and Westbrook [2] presented an O(m + n log n)-time algorithm to compute the 4-edge-connected components of a general graph (i.e., when G is not necessarily 3-edge-connected). Nagamochi and Watanabe [13] gave an O(m + k²n²)-time algorithm to compute the k-edge-connected components of a graph G, for any integer k.

Very recently, two linear-time algorithms for computing the 4-edge-connected components of an undirected graph were presented in [5, 10]. The main part of both algorithms is the computation of the 3-edge cuts of a 3-edge-connected graph G. The algorithms operate on a depth-first search (DFS) tree T of G [14], with start vertex r, and compute three types of cuts, depending on the number of tree edges that they contain. We refer to a 3-cut that consists of i tree edges of T as a type-i cut of G. The challenging cases are the type-2 and type-3 cuts. Nadara et al. [10] provided an elegant way to handle type-3 cuts. Specifically, they showed that computing all type-3 cuts can be reduced, in linear time, to computing type-1 and type-2 cuts, by contracting suitable edges of the graph. To handle type-2 cuts in linear time, both [5] and [10] require the use of the static tree disjoint-set-union (DSU) data structure of Gabow and Tarjan [3], which is quite sophisticated and not amenable to simple implementations. Here, we present an improved version of the algorithm of [5] for identifying type-2 cuts, so that it only uses simple data structures. The resulting algorithm relies only on basic properties of depth-first search (DFS) [14], and on parameters carefully defined on the structure of a DFS spanning tree (see Section 2). As a consequence, it is simple to describe and to implement, and it does not require the power of the RAM model of computation, thus implying the following new results:

Theorem 1.1.

The 3-edge cuts of an undirected graph can be computed in linear time on a pointer machine.

Corollary 1.2.

The 4-edge-connected components of an undirected graph can be computed in linear time on a pointer machine.

2 Depth-first search and related notions

In this section we introduce the parameters that are used in our algorithm, which are defined with respect to a depth-first search spanning tree. Let G be a connected undirected graph with n vertices, which may have multiple edges. Let T be the spanning tree of G provided by a depth-first search (DFS) of G [14], with start vertex r. A vertex u is an ancestor of a vertex v (and v is a descendant of u) in T if the tree path from r to v contains u. Thus, we consider a vertex to be both an ancestor and a descendant of itself. The edges of T are called tree-edges; the edges of G that are not in T are called back-edges, as their endpoints are related as ancestor and descendant in T. We let p(v) denote the parent of a vertex v in T. If u is a descendant of v in T, we denote the set of vertices of the simple tree path from u to v as T[u, v]. The expressions T(u, v] and T[u, v) have the obvious meaning (i.e., the vertex on the side of the parenthesis is excluded). We identify vertices with their preorder number assigned during the DFS. Thus, if u is an ancestor of v in T, then u ≤ v. Let T(v) denote the set of descendants of v, and let ND(v) denote the number of descendants of v. Then, a vertex u is a descendant of v (i.e., u ∈ T(v)) if and only if v ≤ u < v + ND(v) [15].
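To make the preorder conventions above concrete, the following is a minimal sketch, in Python, of a DFS that assigns preorder numbers and subtree sizes and then performs the constant-time descendant test. The function and variable names (dfs_preorder_and_nd, pre, nd) are ours, and the sketch only illustrates the standard technique from [14, 15]; it is not code from the paper.

def dfs_preorder_and_nd(adj, root):
    """adj is an adjacency list; returns preorder numbers, subtree sizes, parents."""
    n = len(adj)
    pre = [0] * n          # pre[v]: preorder number of v (1..n); 0 means unvisited
    nd = [1] * n           # nd[v]: number of descendants of v (v itself included)
    parent = [-1] * n      # DFS-tree parent; -1 for the start vertex
    counter = 1
    pre[root] = counter
    stack = [(root, iter(adj[root]))]   # iterative DFS to avoid deep recursion
    while stack:
        v, it = stack[-1]
        child = next((w for w in it if pre[w] == 0), None)
        if child is not None:           # tree-edge (v, child)
            counter += 1
            pre[child] = counter
            parent[child] = v
            stack.append((child, iter(adj[child])))
        else:                           # all edges of v scanned; v is finished
            stack.pop()
            if parent[v] != -1:
                nd[parent[v]] += nd[v]  # accumulate subtree sizes bottom-up
    return pre, nd, parent

def is_descendant(u, v, pre, nd):
    """True iff u is a descendant of v (every vertex descends from itself)."""
    return pre[v] <= pre[u] < pre[v] + nd[v]

Relabeling every vertex by pre[·] realizes the convention of identifying vertices with their preorder numbers, so that the test above becomes simply v ≤ u < v + ND(v).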

Whenever (x, y) denotes a back-edge, we shall assume that x is a descendant of y. We let B(v) denote the set of back-edges (x, y) such that x is a descendant of v and y is a proper ancestor of v. Thus, if we remove the tree-edge (p(v), v), the subtree T(v) remains connected to the rest of the graph through the back-edges in B(v). Furthermore, we have the following property:

Property 2.1.

([5]) A connected graph G is 2-edge-connected if and only if B(v) ≠ ∅ for every vertex v ≠ r. Furthermore, G is 3-edge-connected only if |B(v)| ≥ 2 for every vertex v ≠ r.

We let bcount(v) denote the number of elements of B(v). Assume that G is 3-edge-connected, and let v ≠ r be a vertex of G. Then we have bcount(v) ≥ 2, and therefore there are at least two back-edges in B(v). We define the first low point of v, denoted by low1(v), as the minimum vertex y such that there exists a back-edge (x, y) ∈ B(v). Also, we let low1D(v) denote such a vertex x, i.e., a descendant of v that is connected with low1(v) via a back-edge. (Notice that low1D(v) is not uniquely determined.) Furthermore, we define the second low point of v, denoted by low2(v), as the minimum vertex y such that there exists a back-edge (x, y) ∈ B(v) ∖ {(low1D(v), low1(v))}, and we let low2D(v) denote such an x. Similarly, we define the high point of v, denoted by high(v), as the maximum y such that there exists a back-edge (x, y) ∈ B(v). We also let highD(v) denote a descendant of v that is connected with high(v) via a back-edge. We let denote the smallest for which there exists a back-edge , or if no such back-edge exists. (Thus, .) Furthermore, we let denote the smallest for which there exists a back-edge , or if no such back-edge exists. All of the above parameters, except for the high points, are easy to compute during the DFS. For the computation of all high(v) (and highD(v)), [5] gave a linear-time algorithm that uses the static tree DSU data structure of Gabow and Tarjan [3].
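As an illustration of how such quantities can be gathered, the following sketch computes bcount(v) together with the first and second low points (and witnesses) in a single bottom-up pass over a DFS-numbered tree. It is our own hedged sketch, not the paper's code: the input format (a parent array and an explicit list of back-edges, with vertices identified with their preorder numbers 0..n-1) and the names bcount, low1, low1D, low2, low2D are assumptions made for the example.

from heapq import nsmallest

def low_points(n, parent, back_edges):
    """parent[v] is the DFS-tree parent of v (parent[root] = -1); back_edges is a
    list of pairs (x, y) with x a descendant of y; ancestors have smaller numbers."""
    out_edges = [[] for _ in range(n)]   # back-edges whose higher (descendant) end is v
    in_count = [0] * n                   # back-edges whose lower (ancestor) end is v
    for x, y in back_edges:
        out_edges[x].append(y)
        in_count[y] += 1

    bcount = [0] * n                     # bcount[v] = |B(v)|
    cand = [[] for _ in range(n)]        # up to two pairs (y, x): smallest lower ends of B(v)

    for v in range(n - 1, -1, -1):       # reverse preorder: children are processed before parents
        pairs = [(y, v) for y in out_edges[v]] + cand[v]
        pairs = [p for p in pairs if p[0] < v]       # keep only lower ends that are proper ancestors of v
        cand[v] = nsmallest(2, pairs)    # the two smallest lower ends suffice for low1/low2
        bcount[v] += len(out_edges[v]) - in_count[v]
        if parent[v] != -1:
            bcount[parent[v]] += bcount[v]           # |B(v)| contributes to the parent's count
            cand[parent[v]].extend(cand[v])          # and so do its two best candidates

    low1  = [c[0][0] if len(c) > 0 else None for c in cand]
    low1D = [c[0][1] if len(c) > 0 else None for c in cand]
    low2  = [c[1][0] if len(c) > 1 else None for c in cand]
    low2D = [c[1][1] if len(c) > 1 else None for c in cand]
    return bcount, low1, low1D, low2, low2D

With bcount in hand, the necessary condition of Property 2.1 (bcount(v) ≥ 2 for every vertex v ≠ r of a 3-edge-connected graph) can be checked directly.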

In order to gather the connectivity information that is contained in the sets B(v), we also have to consider the higher ends of the back-edges in B(v). Thus we define the maximum point of v, denoted by M(v), as the maximum vertex z such that T(z) contains the higher ends of all back-edges in B(v). In other words, M(v) is the nearest common ancestor of all x for which there exists a back-edge (x, y) ∈ B(v). (Clearly, M(v) is a descendant of v.) Let m be a vertex and let v_1, …, v_k be all the vertices with maximum point m, sorted in decreasing order. Observe that v_{i+1} is an ancestor of v_i, for every i ∈ {1, …, k-1}, since m is a common descendant of all v_1, …, v_k. We define nextM(v_i) = v_{i+1}, for every i ∈ {1, …, k-1}, and prevM(v_i) = v_{i-1}, for every i ∈ {2, …, k}. Thus, for every vertex v, nextM(v) is the successor of v in the decreasingly sorted list of vertices with the same maximum point, and prevM(v) is the predecessor of v in that list.
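Given the maximum points M(v), however they are computed, the successor and predecessor links just described can be built with a single bucketing pass, since the vertices sharing a maximum point lie on one tree path. The sketch below is our own illustration under the same preorder-numbering assumption as before; the names link_by_maximum_point, nextM and prevM are ours and need not match the paper's notation.

def link_by_maximum_point(n, M):
    """M[v] is the maximum point of v (None if undefined); vertices are 0..n-1 in preorder."""
    buckets = [[] for _ in range(n)]
    for v in range(n):                 # scanning in increasing preorder ...
        if M[v] is not None:
            buckets[M[v]].append(v)    # ... leaves every bucket sorted increasingly
    nextM = [None] * n
    prevM = [None] * n
    for group in buckets:
        group.reverse()                # now sorted in decreasing order, deepest vertex first
        for i in range(len(group) - 1):
            nextM[group[i]] = group[i + 1]
            prevM[group[i + 1]] = group[i]
    return nextM, prevM

Collecting the vertices in increasing preorder and reversing each bucket yields exactly the decreasingly sorted lists used above, in overall linear time.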

Now let be a vertex and let be the children of sorted in non-decreasing order w.r.t. their point. We let be , if , and if . (Note that is not uniquely determined, since some children of may have the same point.) Then we call the child of , and the child of . We let denote the nearest common ancestor of all for which there exists a back-edge with a proper descendant of . We leave undefined if no such proper descendant of exists. We also define as the nearest common ancestor of all for which there exists a back-edge with being a descendant of the child of , and also define as the nearest common ancestor of all for which there exists a back-edge with a descendant of the child of . We leave (resp. ) undefined if no such proper descendant of the (resp., ) child of exists.

The following list summarizes the concepts used by [5] that are defined on a DFS-tree and can be computed in linear time (except for the high points, which we do not compute). Refer to Figure 1 for an illustration.

  • .

  • .

  • .

  • .

  • a vertex such that .

  • .

  • a vertex such that .

  • .

  • a vertex such that .

  • .

  • .

  • .

  • .

  • the maximum vertex such that .

  • the minimum vertex such that .

We note that the notion of low points plays a central role in classic algorithms for computing the biconnected components [14], the triconnected components [6] and the 3-edge-connected components [4, 6, 11, 17] of a graph. Hopcroft and Tarjan [6] also use a concept of high points, which, however, is different from ours. Our goal is to provide a method to compute type-2 cuts that avoids the use of high(v) and highD(v). We achieve this by introducing two new parameters.


Figure 1: An illustration of the concepts defined on a depth-first search (DFS) spanning tree of an undirected graph. Vertices are numbered in DFS order, and back-edges are shown directed from descendant to ancestor in the DFS tree.

2.1 Two new key parameters

Here we assume that the input graph G is 3-edge-connected. Let v be a vertex such that . Then we have . Thus we can define as the lowest lower end of all back-edges in , and we let be a vertex such that is a back-edge in . Formally, we have

  • .

  • a vertex such that .

Now we describe how to compute for every vertex such that . To do this efficiently, we process the vertices in a bottom-up fashion. For every vertex that we process, we check whether . If that is the case, then is defined and it lies on the simple tree-path . Thus we descend the path , starting from , following the children of the vertices on the path; for every vertex that we encounter we check whether there exists a back-edge with . The first with this property is , and we set . To achieve linear running time, we let denote the list of all vertices for which there exists an incoming back-edge to with higher end . In other words, contains all vertices for which there exists a back-edge . Furthermore, we have the elements of sorted in increasing order (this can be done easily in linear time with bucket-sort). When we process a vertex as we descend , during the processing of , we traverse starting from the element we accessed the last time we traversed (or, if this is the first time we traverse , from the first element of ). Thus, we need a variable to store the element of we accessed the last time we traversed . Now, for every that we meet, we check whether . If that is the case, then we set ; otherwise, we move to the next element of . If we reach the end of , then we descend the path by moving to the child of . In fact, if , then we may descend immediately to . This ensures that will not be accessed again. Algorithm 1 shows how to compute all pairs , for all vertices with , in total linear time.

1 calculate , for every vertex , and have its elements sorted in increasing order foreach vertex  do  first element of foreach vertex  do  for  to  do
2       if  then continue while  do
3             while  do
4                   if  then
5                         next element of
6                   end if
7                  else
8                         if  then
9                              
10                         end if
11                        break
12                   end if
13                  
14             end while
15            if  then
16                   if  then  else 
17             end if
18            
19       end while
20      
21 end for
Algorithm 1 Calculate , for every vertex with
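To complement the pseudocode above, the following sketch illustrates, in Python, the bookkeeping that the preceding paragraph describes and that the proof of Proposition 2.2 relies on: vertices are processed bottom-up, a tree path is descended for each vertex that needs processing, and each list L(u) is scanned by a cursor that only moves forward. The predicates passed as arguments (needs_processing, matches, and the two path callbacks) are placeholders for the paper's actual tests, which involve the parameters of Section 2; they are not reconstructed here, so this is only an illustration of the amortization argument, not an implementation of Algorithm 1.

def bottom_up_path_scan(n, L, start_of_path, needs_processing, matches, next_on_path):
    """L[u] is assumed to be sorted in increasing order, as in the text."""
    cursor = [0] * n                    # saved position in L[u]; it never moves backwards
    answer = [None] * n
    for v in range(n - 1, -1, -1):      # bottom-up processing = reverse preorder
        if not needs_processing(v):
            continue
        u = start_of_path(v)            # first vertex of the tree path descended for v
        while u is not None and answer[v] is None:
            while cursor[u] < len(L[u]):
                w = L[u][cursor[u]]
                if matches(v, u, w):    # found the witness we are looking for
                    answer[v] = (u, w)
                    break
                cursor[u] += 1          # this element of L[u] is never inspected again
            if answer[v] is None:
                u = next_on_path(v, u)  # descend one step along the path; None ends the search
    return answer

Because each cursor[u] only advances, the total work spent in the inner loop is proportional to the total size of the lists L(u); the paper additionally arranges the descents so that no vertex of a path is traversed twice, which is what yields the overall linear bound claimed in Proposition 2.2.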
Proposition 2.2.

Algorithm 1 correctly computes all pairs , for all vertices with , in total linear time.

Proof.

Let be a vertex with . We will prove inductively that will be computed correctly, and that will be the lowest vertex which is a descendant of such that is a back-edge. So let us assume that we have run Algorithm 1 and we have correctly computed all pairs , for all vertices with , and is the lowest vertex in such that is a back-edge. Suppose also that we have currently descended the path , we have reached , and .

Let us assume, first, that , and let be the back-edge such that and is minimal with this property. The while loop in line 1 will search the list of incoming back-edges to , starting from . If is the first element of , then it is certainly true that will be found. Otherwise, let . Due to the inductive hypothesis, we have that , for a vertex with . Then, is in , but also in , and thus it is a common descendant of and . This means that and are related as ancestor and descendant. In particular, since , we have that is an ancestor of . Furthermore, since is an ancestor of , it is also an ancestor of ; therefore, since is an ancestor of , it is also an ancestor of . Since is an ancestor of , this implies that is an ancestor of . Since and , we thus have that is an ancestor of , and therefore . Thus, since is the lowest descendant of such that is a back-edge, and is the lowest descendant of such that is a back-edge, we have . This shows that will be accessed during the while loop in line 1.

Now let us assume that . This means that is greater than , and we have to descend the path to find it. First, let be the child of in the direction of . Then we have (since is a descendant of , and therefore a descendant of , and we have ). If there was another child of with , this would imply that , which is absurd, since is a proper ancestor of , and therefore a proper ancestor of . This means that is the child of , and thus we may descend to . Now we have . If , then we simply traverse the list of incoming back-edges to , in line 1, and repeat the same process. Otherwise, let . Due to the inductive hypothesis, we know that has been computed correctly. Since is an ancestor of , it is also an ancestor of . Furthermore, is a descendant of . Thus, is an ancestor of , and therefore is an ancestor of (since and ). This means that is an ancestor of . Now we see that lies on . (For otherwise, would be a back-edge in with and , contradicting the minimality of ). Thus we may descend immediately to . Then we traverse the list of incoming back-edges to , in line 1, and repeat the same process. Eventually we will reach and have it computed correctly. It should be clear that no vertex on the path will be traversed again, and this ensures the linear complexity of Algorithm 1. ∎

3 Simple algorithm for computing all 3-cuts of type-2

In this section we will show how to compute all 3-cuts of type-2 (consisting of two tree-edges and one back-edge) of a 3-edge-connected graph G in linear time, without using the high points of [5]. We use the following characterization of such cuts.

Lemma 3.1.

([5]) Let be two vertices with . Suppose that is a 3-cut, where is a back-edge. Then is an ancestor of , and either or . Conversely, if there exists a back-edge such that or is true, then is a 3-cut.

In the following, for any vertex , denotes the set of all vertices that are ancestors of and such that , for a back-edge . Similarly, for any vertex , denotes the set of all vertices that are descendants of and such that , for a back-edge . In [5] it is shown that (resp. ) for every two vertices (resp. ). Thus, in order to find all type-2 cuts, it is sufficient to find, for every vertex (resp. ), the unique vertex (resp. ), if it exists, such that (resp. ), and then identify the back-edge such that is a 3-cut. The following two lemmas show how to identify .

Lemma 3.2.

Let be two vertices such that is a descendant of and , for a back-edge . Then we have . In particular, we have that either and , or and , or and .

Proof.

First we will show that is a proper ancestor of . Obviously, is an ancestor of , since . Furthermore, since , is a descendant of , and is a proper ancestor of , and therefore a proper ancestor of . Thus, it cannot be the case that , for otherwise we would have . This shows that is a proper ancestor of . Now we will show that , for every . Suppose for the sake of contradiction, that there exists a such that . Then we have , for every , and so , for every . Then, since , we have that , for all , except possibly a . Thus, is a common ancestor of all , except possibly , and so, since , we conclude that is an ancestor of , which is absurd. Thus we have demonstrated that , for every .

Now, there are two cases to consider: either , or is a descendant of a child of . First take the case . Then is obviously a back-edge in . Furthermore, since is not an ancestor of , we also have . Thus . Since every other back-edge of the form with must have , we conclude that is the unique back-edge of the form with . Since , this means that . Now consider the case that is a descendant of a child of . Then we have that is either a descendant of , or a descendant of (since , for every ). We will consider only the case that is a descendant of , since the other case can be treated in a similar manner. So let be a descendant of . Then we must have , for otherwise there would exist a back-edge of the form with , and so we would have two distinct back-edges , which is absurd. Thus, , and therefore, since , we have . Thus, we must necessarily have , for otherwise would be an ancestor of both and , and therefore an ancestor of , which is absurd. Since and , this means that , and therefore . ∎

Lemma 3.3.

Let be two vertices such that is a descendant of and , for a back-edge . Then we have . In particular, we have that either and , or and , or and .

Proof.

implies that is an ancestor of . If , then . (For otherwise, there exists a vertex with and , and so we have - which is impossible, since .) Since and and is a back-edge in , we conclude that .

Now let’s assume that is a proper ancestor of . There are two cases to consider: either , or is a descendant of a child of . If , then is a back-edge in . Furthermore, (for otherwise we would have that is an ancestor of ). This shows that . We also see that , for any back-edge with . Thus, since , we have . Finally, let’s assume that is a descendant of a child of , i.e. is a descendant of , for some . We will show that , for every . So suppose for the sake of contradiction, that , for some . Since , we have ; therefore, since , we have . This shows that there exists a back-edge which is also in . But since, , it cannot be the case that . Thus we have provided two distinct back-edges , which is absurd. This shows that , for every . If , this implies that is a common ancestor of at least two children of , which is absurd. Thus we have that . Now suppose for the sake of contradiction, that . It cannot be the case that , for otherwise also, which would imply that is an ancestor of , which is absurd. Now, it cannot be the case that , for otherwise there would exist two distinct back-edges , which is also absurd. Thus, is the only child of with , which means that there must exist a back-edge with . Now if