Nearly Optimal Space Efficient Algorithm for Depth First Search

10/16/2018
by Jayesh Choudhari, et al.

We design a space-efficient algorithm for performing depth-first search traversal (DFS) of a graph in O(m + n log* n) time using O(n) bits of space. While a normal DFS algorithm results in a DFS-tree (in case the graph is connected), our space bounds do not permit us even to store such a tree. However, our algorithm correctly outputs all edges of the DFS-tree. The previous best algorithm (which used O(n) working space) took O(m log n) time (Asano, Izumi, Kiyomi, Konagaya, Ono, Otachi, Schweitzer, Tarui, Uehara (ISAAC 2014) and Elmasry, Hagerup, Kammer (STACS 2015)). The main open question left behind in this area was to design a faster algorithm for DFS using O(n) bits of space. Our algorithm answers this open question, as it has a nearly optimal running time (the DFS takes O(m+n) time even if there is no space restriction).



1 Introduction

In analyzing algorithms, we mostly concentrate on minimizing the running time, or on the quality of the solution (if the problem is hard). Once these parameters have been optimized, we then look to reduce the space taken by the algorithm, if possible. An excellent theoretical question is: given a problem, design an algorithm that solves it using as little space as possible. Such algorithms are called space-efficient algorithms: we want to optimize the space taken by the algorithm while not increasing the running time by much (compared to the best algorithm for the problem with no space restriction).

Recently, designing space-efficient algorithms has gained importance because of the rapid growth in the use of mobile and other hand-held devices which come with limited memory (e.g., devices like the Raspberry Pi, which are widely used in IoT applications). Another crucial reason for the increasing importance of space-efficient algorithms is the rate and the volume at which huge datasets are generated ("big data"). Areas like machine learning, scientific computing, network traffic monitoring, Internet search, signal processing, etc., need to process big data using as little memory as possible.

Algorithmic fields like dynamic graph algorithms [10, 21, 25, 26, 28] and streaming algorithms [1, 2, 3, 12, 27] mandate low space usage by the algorithm. In a streaming algorithm, this mandate is stated upfront. In a dynamic graph algorithm, the mandate is implied, as we want the update time of the algorithm to be as low as possible: a low update time implies that we do not have enough time to look at much of our data-structure, so we want the data-structure to be as compact as possible. Motivated by the growing body of work in the field of space-efficient algorithms, this paper focuses on optimizing the space taken by the DFS algorithm, which is one of the fundamental graph algorithms.

However, one needs to be slightly cautious about the definition of space. For a graph problem, it would take Ω(m + n) words of space just to represent the graph itself. So, it seems that any graph problem requires at least that much space. To avoid such trivial answers, we first define our model of computation.

1.1 Model of Computation: Register Input Model [19]

Frederickson [19] introduced the register input model, in which the input (a graph, in this case) is given in a read-only memory and thus cannot be modified. The output of the algorithm is written on a write-only memory. Along with the input and the output memory, a random-access working memory of limited size is also available. Similar to the standard RAM model, the data on the input memory and the workspace is divided into words of Θ(log n) bits, and any arithmetic, logical, or bitwise operation on a constant number of words takes O(1) time.

When we state the number of bits used by our algorithm, we mean the space used on the random-access working memory. The above model takes care of the case when the input itself takes a lot of space, by designating a special read-only memory for the input.
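To make the model concrete, the following minimal Python sketch mimics the three memories: a read-only input graph, a write-only output, and whatever small mutable workspace the algorithm allocates. The class and function names (ReadOnlyGraph, emit) are illustrative only and are not taken from the paper.

class ReadOnlyGraph:
    """Read-only adjacency-array representation of the input graph."""
    def __init__(self, adj):
        # adj[v] is the array of neighbors of vertex v; stored as immutable tuples.
        self._adj = tuple(tuple(nbrs) for nbrs in adj)

    def degree(self, v):
        return len(self._adj[v])

    def neighbor(self, v, i):
        """Random access to the i-th neighbor of v (0-indexed)."""
        return self._adj[v][i]

def emit(edge):
    """Write-only output: an emitted edge is never read back by the algorithm."""
    print(edge)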

We highlight some results that make use of the register input model. Pagter and Rauhe [23] described a comparison-based algorithm for sorting n numbers: for every given s with log n ≤ s ≤ n / log n, an algorithm that takes O(n^2 / s) time using O(s) bits. A matching lower bound of Ω(n^2) for the time-space product was given by Beame [11] for the strong branching-program model. Please see the references for other problems studied in this model [13, 14, 18, 22, 24, 4, 6, 9, 8, 15]. In this paper, our main focus is on the depth-first search problem.

1.2 DFS Problem

The problem of space-efficient DFS has received a lot of attention recently. Asano et al. [5] designed an algorithm that can perform DFS in (unspecified) polynomial time using n + o(n) bits; with more working space their running time improves, and in particular they showed how to perform DFS in O(m log n) time using O(n) bits. Elmasry et al. [17] improved this result by designing an algorithm that can perform DFS in O(m + n) time using O(n log log n) bits. Banerjee et al. [7] proposed an efficient DFS algorithm that takes O(m + n) time using O(m + n) bits. Note that this is a strict improvement (over the Elmasry et al. [17] result) only if the graph is sparse. The following open question was raised by Asano et al. [5] in their paper:

Using O(n) bits of space, can DFS be done in O(m + n) time?

Recently, Hagerup [20] claimed an algorithm with an improved running time for space-efficient DFS. We improve upon this algorithm, giving a near-optimal running time for DFS: it is almost linear in m + n. Our result can be succinctly stated as follows:

Theorem 1.

There exists a randomized algorithm that can perform DFS of a given graph in O(m + n log* n) time with high probability (that is, with probability at least 1 - 1/n^c for some constant c) using O(n) bits of space. (Note that our algorithm is randomized because we use succinct dictionaries that use random bits.)

The succinct dictionary (used by our algorithm) performs each insertion/deletion in O(1) time with probability at least 1 - 1/n^c (where c is a constant; see Lemma 3). Our algorithm performs at most poly(n) insertions/deletions across all dictionaries. Hence, by a union bound, the probability that any of these insertions/deletions takes more than O(1) time, and thus that the algorithm exceeds the stated running time, is at most 1/n^(c') for a suitable constant c'.
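Spelled out, the union-bound calculation is the following one-liner; the exponent a bounding the number of dictionary operations and the dictionary's constant c are the assumed parameters from the discussion above, not values fixed by this text.

    Pr[some insertion/deletion is slow] ≤ (number of operations) · (failure probability of one operation)
                                        ≤ n^a · (1/n^c) = 1/n^(c - a),

and choosing the dictionary's constant c sufficiently larger than a makes this probability polynomially small.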

2 Overview

We will assume that the vertices of the input graph are numbered from 1 to n. Let N(v) denote the neighborhood of the vertex v and N(v, i) denote the i-th neighbor of v, where 1 ≤ i ≤ deg(v). As in [17], we will assume that N(v) is an array, so we have random access to any element of this array. Also, we implicitly know the degree deg(v) of every vertex v.

Normally, the DFS algorithm outputs the DFS tree. Given the space bounds, we cannot store the DFS tree; instead, we output the edges of the DFS tree as soon as we encounter them. We consider the problem solved if the output edges form a valid DFS tree.

We first give a quick overview of the non-recursive implementation of the DFS algorithm. Let G be the input graph having n vertices and m edges. For this implementation, we will use a stack S. Initially, all vertices are colored white, and assume that we start the DFS from a vertex u. So, u is added to the stack S. The algorithm then processes the elements of the stack till it becomes empty. The top vertex, say v, is popped from the stack and is processed as follows: the neighbors of v are explored one by one. If a white vertex w is found, then w is pushed onto the stack and the processing of w starts. If none of the neighbors of v are white, then v is colored black. Whenever v discovers a white vertex w = N(v, i), we push the tuple (v, i + 1) onto S; the second entry of the tuple tells us which neighbor of v to explore once the processing of v resumes.

Now, let us formally define the second entry of such a tuple.

Definition 2.

For any vertex v, if (v, i) is an entry on the stack S, then N_S(v) = N(v, i) denotes the first neighbor of the vertex v which is still not explored while processing v.

for v ← 1 to n do
      color[v] ← white;
end for
foreach vertex v do
      if v is white then
            Process(v);
      end if
end foreach
Algorithm 1 Initialize()
color u gray;  Push(S, (u, 1));
while S is not empty do
      (v, i) ← top element of S;
      replace the top entry of S by (v, i + 1);
       /* scan the neighbors of v, starting from the i-th one */
      if i ≤ deg(v) then
            if N(v, i) is white then
                  output edge (v, N(v, i));
                  color N(v, i) gray;  Push(S, (N(v, i), 1));
            end if
      else
            Pop(S);
            color v black;
      end if
end while
Algorithm 2 Process(u)

The space required to represent the first and second term of each tuple on the stack is O(log n) bits. As there are n vertices in the graph, the size of the stack can reach n in the worst case. So, the total space taken by the trivial algorithm is Θ(n log n) bits.
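For concreteness, here is a short Python rendering (0-indexed) of the trivial algorithm above (Algorithms 1 and 2). It keeps the full stack of (vertex, next-neighbor-index) tuples, which is exactly the Θ(n log n)-bit structure the rest of the paper works to avoid; the function name dfs_tree_edges is illustrative.

WHITE, GRAY, BLACK = 0, 1, 2

def dfs_tree_edges(adj):
    """Iterative DFS over adjacency lists `adj`; yields the DFS-tree edges."""
    n = len(adj)
    color = [WHITE] * n
    for u in range(n):
        if color[u] != WHITE:
            continue
        color[u] = GRAY
        stack = [(u, 0)]                  # (vertex, index of the next neighbor to scan)
        while stack:
            v, i = stack.pop()
            if i < len(adj[v]):
                stack.append((v, i + 1))  # v will resume from its next neighbor later
                w = adj[v][i]
                if color[w] == WHITE:     # tree edge found
                    yield (v, w)
                    color[w] = GRAY
                    stack.append((w, 0))
            else:
                color[v] = BLACK          # all neighbors of v have been processed

# Example: a triangle on vertices 0, 1, 2.
print(list(dfs_tree_edges([[1, 2], [0, 2], [0, 1]])))   # [(0, 1), (1, 2)]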

Our algorithm closely follows [17]. So, we first give a brief overview of their approach and later, we will explain our improvement over their approach.

2.1 Previous Approach (Elmasry et al. [17])

The trivial DFS algorithm does not work for Elmasry et al. [17] because the stack itself takes Θ(n log n) bits of space. Hence, the full stack is not implemented explicitly; it is only referred to as an imaginary stack S. Let the stack S be divided into segments of size O(n / log n): the first segment is the bottommost O(n / log n) vertices of S, the second segment is the next O(n / log n) vertices of S, and so on. A new stack S′ is implemented, which contains the vertices of at most the top two segments of the imaginary stack S. Each entry of the stack S′ is a tuple of the form (v, i). The space required to represent these two terms is at most O(log n) bits, so the total space required for S′ is O(n) bits. Since the size of S′ is very small compared to the imaginary stack S, the main problem arises when an element is to be pushed onto S′ but it is full, or when S′ becomes empty (while S still contains vertices). Thus, there is a need to make space in S′, or a way to restore vertices into S′.

To handle the case when S′ is full, Elmasry et al. [17] remove the bottom half of the elements of S′. So, a new entry can now be pushed onto S′, and the DFS algorithm can proceed as usual.
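The "drop the bottom half when full" rule can be sketched as a tiny capacity-bounded stack. The class name CappedStack is illustrative; unlike the real algorithm, this sketch simply forgets the discarded entries instead of restoring them later.

class CappedStack:
    """Stack of bounded capacity that discards its bottom half when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def push(self, x):
        if len(self.items) == self.capacity:
            # Keep only the top half; the discarded half would have to be
            # recovered later by a separate restoration procedure.
            self.items = self.items[self.capacity // 2:]
        self.items.append(x)

    def pop(self):
        return self.items.pop()   # in the real algorithm, an empty stack triggers restoration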

Handling the second case (when S′ is empty) requires restoring the top segment of S into S′. It turns out that this restoration process is the main bottleneck of the DFS algorithm. To aid the restoration process, Elmasry et al. [17] propose an elegant solution: they maintain an additional stack T, called a trailer stack. The topmost element of each segment in S is called a trailer element. The stack T stores the trailer element of each segment in S, except the trailers of those segments which are already present in S′.

The stack T is crucially used in the restoration process. Let z be the second topmost entry in the stack T. This implies that the first vertex of the top segment of S is the vertex that was pushed immediately after z. Now, a DFS-like algorithm is run starting from the vertex z to restore the top segment of S into S′, as follows:

Temporarily, the meaning of gray and white vertices is changed. Then, z is processed so as to find the next vertex of the segment, as follows: find the first gray neighbor of z, mark it white, push it onto S′ (together with the appropriate neighbor index), and then start processing it in the same manner. Elmasry et al. [17] show that this restoration process correctly restores the top segment of S.

Some explanation is in order about the above procedure. Once we have found a vertex v of the segment, we want to find the vertex that lies just above v on S. This vertex, say w, was a white vertex encountered while processing v; due to w, we stopped the processing of v, put v back on S, and started the processing of w.

Even though the above algorithm is correct, it is still slow: finding the first gray neighbor of a vertex v takes O(deg(v)) time. To overcome this difficulty, Elmasry et al. [17] suggest the use of two more data-structures. The first data-structure is an array A of size n that contains the following information for each vertex v: if v is an element of S, then A[v] contains

  • The segment number in which v lies.

  • The approximate position of N_S(v) in N(v).

Since there are O(log n) segments of S (as each segment is of size O(n / log n)), it requires O(log log n) bits to represent the first quantity. Similarly, storing the approximate position also takes O(log log n) bits. Thus, the space required for A is O(n log log n) bits.

The second term in A[v] helps to speed up the search for N_S(v), but only if the degree of v is sufficiently small. To take care of high-degree vertices, the trailer stack T is extended to include not only trailers but also all pairs (v, N_S(v)) where v is a high-degree vertex. Elmasry et al. [17] show that the extended trailer stack still fits in their space budget, and that using A and the extended T, the stack S′ can be restored correctly and efficiently.

Figure 1: A pictorial description of the approach in [17]: the array A is used to restore the stack S′ from the imaginary stack S. The size of A is O(n log log n) bits, and our aim is to reduce this size. Note that not all the data-structures of [17] are shown in the figure.

2.2 Our Approach

We give a brief overview of our approach. In [17], the array A plays a critical role in the restoration process: while restoring the top segment, A provides the required information for each vertex which is a part of the topmost segment. However, A takes O(n log log n) bits, a space we cannot afford. Our main observation is that we do not require information related to all vertices while restoring S′; indeed, storing information about the vertices near the top of S suffices. Unfortunately, it is not easy to keep even this information efficiently in O(n) bits. To overcome this difficulty, along with the stack S′ (in our algorithm, the size of S′ is a bit different from that in [17]; see Remark 1), we implement D_1 (a dynamic dictionary, as described in Lemma 3) which contains information about the top vertices of the imaginary stack S. For each vertex in D_1, we store only a few bits of information that will help us when we restore S′ (remember that the size of S′ is much less than the size of S). We can show that the size of D_1 is O(n) bits. Thus, we have successfully reduced the size of the structure that was named A in [17].

Since D_1 does not store information about all the vertices on the stack S, it faces the restoration problem as well: if the top vertices of S are popped out, they are also deleted from D_1, so D_1 itself eventually needs to be restored. To aid in the restoration of D_1, we implement another data-structure D_2, which contains information about the top vertices of S, storing even fewer bits per vertex. It is not hard to see that this process goes on recursively, and we obtain a hierarchy of data-structures D_1, D_2, ..., D_t with t = O(log* n), where the last data-structure D_t stores information about a constant fraction of the vertices of S. But the restoration problem does not disappear yet: how do we restore D_t? Beyond this, we do not create any more data-structures; we restore D_t using the most trivial strategy, that is, by running the DFS all over again. Our main claim is that, throughout our algorithm, D_t is restored at most a constant number of times. We will show that the time taken to restore D_t once is O(m + n), so the total time taken to restore D_t is O(m + n). For the other D_i's (i < t), our analysis is slightly different, and it is the main technical contribution of this paper. We will show that the total time taken to restore all the D_i's over the entire course of the algorithm is O(m + n log* n).

Let us now briefly describe the space taken by our algorithm. Each D_i stores information about a bounded number of top vertices of S, and for each such vertex we store only a few bits. Using a succinct dictionary [16], we will show that we can implement every D_i in small space, so that the cumulative space taken by all the D_i's is O(n) bits. Note that our algorithm also uses some other data-structures which we have not described yet; however, the major challenge in our work was to bound the size of the D_i's, and all the other data-structures take O(n) bits cumulatively. Thus, the total space taken by our algorithm is O(n) bits. This completes the overview of our algorithm.

Figure 2: A pictorial description of our approach. We implement a hierarchy of "stacks" D_1, ..., D_t; the restoration of D_i uses D_{i+1}, as D_{i+1} contains the vertices to be restored in D_i. Note that not all our data-structures are shown in the figure.
Remark 1.

In the above description, each D_i contains information about fewer top elements of S than the corresponding structure of Elmasry et al. [17] did, and its size is correspondingly smaller. The main reason for this change is to decrease the space taken by our algorithm; indeed, the cumulative space taken by all the D_i's (in our algorithm) can be shown to be O(n) bits. In spite of this change, the running time of our algorithm does not suffer. To summarize, this is an important technical change from the previous work, with the sole aim of decreasing the space taken by the algorithm.

3 Preliminaries

In our algorithm, the following data-structure plays a crucial role.

Lemma 3.

(Succinct Dynamic Dictionary [16]) Given a universe of size n, there exists a dynamic dictionary that stores a subset of size at most s, where each element of the subset has satellite data of d bits. The time taken for membership, retrieval, insertion, and deletion of any element (and its satellite data) is O(1) with probability at least 1 - 1/n^c for some chosen constant c. The space taken by the data-structure is O(s (log(n/s) + d)) bits.

Note that a similar dictionary was also described in Lemma 2.1 of [17].
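Whenever the discussion below says "insert into D_i" or "delete from D_i", the operations meant are those of Lemma 3. As a purely functional stand-in (with none of the space guarantees), a plain Python dict already exposes the same interface; the wrapper below only fixes the vocabulary (insert, delete, member, retrieve) used later.

class DictionaryStandIn:
    """Functional stand-in for the succinct dynamic dictionary of Lemma 3.

    A built-in dict gives constant expected-time operations but not the
    succinct space bound; it is used here only to make the sketches runnable.
    """
    def __init__(self):
        self._data = {}

    def insert(self, key, satellite):
        self._data[key] = satellite

    def delete(self, key):
        self._data.pop(key, None)

    def member(self, key):
        return key in self._data

    def retrieve(self, key):
        return self._data.get(key)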

We define a few basic notations and data-structures that will be used in the ensuing discussion.

  • log^(i) n denotes the logarithm applied i times to n, i.e., log^(0) n = n and log^(i) n = log(log^(i-1) n).

  • log* n (the iterated logarithm) is the number of times the logarithm function has to be iteratively applied to n till the result is at most 1. Note that log* n grows extremely slowly with n.

  • We divide the imaginary stack S into segments of a fixed size, and a superblock consists of a fixed number of consecutive segments of S. We divide the imaginary stack S into superblocks from bottom to top (only the topmost superblock may contain a smaller number of consecutive segments); this bounds both the number of vertices in a superblock and the total number of superblocks. For brevity, we will drop the ceiling notation in the rest of the paper. (A small arithmetic sketch of segment and superblock indices follows this list.)

  • Stack S′

    A stack S′ will store the vertices present in at most the top two segments of the imaginary stack S. Each cell of S′ contains a tuple of the form (v, i).

  • Dynamic Dictionary D_1 for S′

    We will store information about the vertices of at most the top two superblocks of S in a dynamic dictionary D_1. This information will be crucial in restoring S′.

  • Trailers

    In [17], the restoration algorithm uses the trailer stack to find a vertex from which the restoration of the stack should start. In our algorithm, as we have a whole hierarchy of structures to restore, we require many trailer stacks.

    To this end, we implement a trailer stack T_i for each level i. In the trailer stack T_i, we keep the bottommost element of the imaginary stack S and the top vertex of every superblock of S that is not present in the corresponding structure of level i.
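The segment and superblock of a position on the imaginary stack are determined by simple arithmetic, of the kind referred to again in the Insert procedure of Section 6. The sketch below uses placeholder sizes (the paper's actual choices are functions of n that are not reproduced here).

SEGMENT_SIZE = 64              # placeholder; in the paper this is a function of n
SEGMENTS_PER_SUPERBLOCK = 8    # placeholder; in the paper this is a function of n

def segment_of(position):
    """Segment index (0-based, from the bottom) of a position on the imaginary stack."""
    return position // SEGMENT_SIZE

def superblock_of(position):
    """Superblock index (0-based, from the bottom) of a position on the imaginary stack."""
    return segment_of(position) // SEGMENTS_PER_SUPERBLOCK

def is_top_of_superblock(position):
    """True if this position is the last slot of its superblock."""
    slots = SEGMENT_SIZE * SEGMENTS_PER_SUPERBLOCK
    return (position + 1) % slots == 0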

4 Our Algorithm

Our algorithm is broadly similar to the Elmasry et al. [17] algorithm. We initially color all the vertices white (the space taken by the color array is O(n) bits, as we color a vertex white, gray, or black only). Then we take an arbitrary vertex, say u, and do a DFS from u. Like Elmasry et al. [17], u is initially pushed onto the stack; additionally, we also insert u into all the other D_i's.

Insert(u, 1);                         /* push the start vertex u onto S′ */
for j ← t down to 1 do
      insert u into D_j;              /* u is also recorded at every level */
end for
while the trailer T_1 is not empty do
      (v, i) ← top element of S′;
      Delete();                       /* removes v from S′ and from the dictionaries */
      color v gray;
      while i ≤ deg(v) do
            if N(v, i) is white then
                  output edge (v, N(v, i));
                  Insert(v, i + 1);   /* push v back with its updated neighbor index */
                  Insert(N(v, i), 1); /* push the newly discovered white vertex */
                  break;
            else
                  i ← i + 1;
            end if
      end while
      if i > deg(v) then
            color v black;            /* all neighbors of v have been processed */
      end if
end while
Algorithm 3 Our DFS, started from an arbitrary vertex u

We then go over the stack till it becomes empty; equivalently, we process the stack till the trailer T_1 becomes empty, as T_1 always contains the bottommost element of the imaginary stack S. Our while loop is similar to the standard DFS algorithm, with the addition that we push and pop not only on S′ but also insert into and delete from all the D_i's. Let (v, i) be the top element of S′. We pop v from S′ and also delete it from all the other data-structures. Then we color v gray. We then check whether the i-th neighbor N(v, i) of v is white or not. If it is white, then we first push v back onto the stack (and into all the D_i's); after that, N(v, i) is pushed onto S′ and into all the other relevant data-structures. When we have processed all the neighbors of v, it is colored black.

We now calculate the running time of our DFS algorithm in Algorithm 3. In the classical DFS algorithm, a gray vertex is pushed onto the stack again after it finds a new white vertex. This implies that vertices are pushed onto the stack at most O(n) times in total. Our DFS algorithm is nearly identical to the classical DFS algorithm, with the only difference that we insert into and delete from O(log* n) "stacks" instead of one. Thus we claim the following running time:

Lemma 4.

Not accounting for the time taken by the Insert and Delete procedures, the time taken by our DFS algorithm in Algorithm 3 is O(m + n).

In the Insert procedure, we add the information about the vertex v to D_1. Remember that D_1 is used to restore S′. We will now describe this information in detail.

5 Information in D_1

In [17], where only the stack S′ has to be restored, the following two pieces of information about each vertex v are stored in A: (1) the segment number in which v lies, and (2) the approximate position in N(v) where N_S(v) lies.

We try to generalize this idea. Unlike A, the dictionary D_1 in our algorithm contains information only about the vertices present in at most the top two superblocks. For each such vertex v, let D_1(v) denote the cell in which the information related to v is stored. We will store the following information related to v.

  1. The superblock number in which v lies.

    Remember that D_1's main function is to restore S′. Thus, for each vertex v, we will store the superblock to which v belongs; this will help the restore algorithm of S′ to check whether v indeed lies in the top superblock. Since the total number of superblocks is small, only a few bits are required to represent this superblock number.

  2. The approximate position in N(v) where N_S(v) lies.

    The above information is used to find N_S(v) efficiently. It would have been nice if we could store the position of N_S(v) explicitly; however, this would require up to O(log n) bits for each vertex in D_1, a space which we cannot afford. To overcome the space limitation, we divide N(v) into groups of appropriate size and store the group number of the group in which N_S(v) lies.

The exact definition of the second term requires some more work. Note that the first term takes only a few bits, and we want the second term also to take only a few bits. Thus, the number of groups into which we divide N(v) should not be huge. However, if the number of groups is small, then the group size, i.e., the number of positions in each group, may be large; and given the group number, finding N_S(v) within the group will then take more time. Thus, we are faced with a dilemma where reducing the space increases the running time of our algorithm. To overcome this dilemma, we extend a strategy used in [17]. Elmasry et al. [17] divided the vertices into two sets, heavy and light. A light vertex has low degree, so its group size is small; for heavy vertices, they show that the total number of heavy vertices is small, and hence N_S(v) can be stored explicitly for each heavy vertex v without using too much space. We plan to extend this strategy, but unlike [17], we have a hierarchy of heavy and light vertices (since we have a hierarchy of D_i's).
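The group trick can be made concrete as follows: instead of the exact index of N_S(v) in N(v), only that index's group number is stored, and the restore step re-scans the one group it names. The group size below is a placeholder parameter; the paper chooses it as a function of n and of the level in the hierarchy.

def group_number(exact_index, group_size):
    """What gets stored: the group of N(v) that contains N_S(v)."""
    return exact_index // group_size

def recover_within_group(adj_v, group, group_size, is_the_one):
    """Re-scan one group of N(v) to find N_S(v); at most group_size probes."""
    start = group * group_size
    for idx in range(start, min(start + group_size, len(adj_v))):
        if is_the_one(adj_v[idx]):   # e.g., "is the first still-unexplored neighbor"
            return idx
    return None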

5.1 Light Vertices

Definition 5.

A vertex v is 1-light if deg(v) is at most a degree threshold d_1, where d_1 is a parameter of the algorithm (a fixed function of n). All other vertices are handled through the heavy classes defined in Section 5.2.

We are now ready to define the second piece of information related to a 1-light vertex v stored in D_1. If v is 1-light, then we divide N(v) into groups of size g_1, another parameter of the algorithm.

Definition 6.

If v is 1-light, then the second piece of information about v (the approximate position of N_S(v) in N(v)) stored in D_1 is defined as the group number ⌈i / g_1⌉, where N_S(v) = N(v, i).

The total number of groups of N(v) is ⌈deg(v) / g_1⌉. Thus, only a few bits are required to represent the group number of a 1-light vertex.

Remember that we partitioned the set of vertices into light and heavy only to make the group size small. We now bound the number of vertices in a group of a 1-light vertex.

Observation 7.

If v is 1-light, then the total number of positions in each group of N(v) is at most g_1.

We are now ready to formally define the information about a vertex v stored in D_1.

  • If a 1-light vertex v becomes a part of the top superblocks of the imaginary stack S, then we store both pieces of information about v: its superblock number and its group number.

  • If the vertex v is not 1-light, then we store its superblock number together with the dummy group number 0; that is, we just store the superblock in which v resides.

Some explanation is in order. If v is a 1-light vertex, then we can store both pieces of information corresponding to v; we have already shown that both these terms take only a few bits. Moreover, given the group number, we can find N_S(v) quickly, as the number of positions in each group of a 1-light vertex is small (using Observation 7).

However, if v is not 1-light, then its group size may be large, which is not desirable (as this might increase the search time for N_S(v)). So, for such a vertex, we store only the superblock number, as there is no point in storing the second term (the second term 0 is just a dummy term). But for efficiency, we need to store some information regarding N_S(v) even for a vertex which is not 1-light. In the next section, we describe a data-structure which efficiently stores this information for all vertices that are not 1-light.

5.2 Heavy Vertices

Definition 8.

A vertex v is j-heavy if its degree lies between the (j-1)-st and the j-th degree threshold of our hierarchy, where 1 ≤ j < t. We define one class separately: a vertex v is said to be t-heavy if its degree exceeds the largest threshold.

Note that our definition partitions the vertex set nicely. We prove this nice property in the following lemma:

Lemma 9.

If v is not 1-light, then it is j-heavy for some j with 1 ≤ j ≤ t.

Proof.

Since v is not 1-light, deg(v) exceeds the smallest threshold d_1. Thus, either there exists a j (1 ≤ j < t) such that deg(v) lies between the (j-1)-st and the j-th threshold, or deg(v) exceeds the largest threshold (the case when v is t-heavy). ∎

We store the information related to a j-heavy vertex v in a dynamic dictionary H_j, where 1 ≤ j ≤ t. Since the degree of a j-heavy vertex is large, the total number of j-heavy vertices is correspondingly small. Similar to the 1-light vertices, we divide N(v) into groups of an appropriate (level-dependent) size. The only problem is that this group size is not defined for t-heavy vertices; if v is t-heavy, then we divide N(v) into groups of size 1.

We store the group number of N_S(v) in the dynamic dictionary H_j; it is defined exactly as before, namely as the index of the group of N(v) that contains N_S(v). Since the number of groups of N(v) is small for a j-heavy vertex, the total space required to represent the group number per vertex in H_j is small.

Using Observation 7, if a vertex v is 1-light, then the associated group size (stored in D_1) is small. The next lemma presents a very crucial feature of our algorithm:

Lemma 10.

Let v be a vertex on the stack S. Then the group size associated with v is bounded by the same small quantity, irrespective of the degree of v.

Proof.

If v is 1-light, then we have already seen that the group size associated with v (and stored in D_1) is within the required bound. Using Lemma 9, if v is not 1-light, then it is j-heavy for some j, so the information about the group of N_S(v) is stored in H_j, and N(v) is divided into groups of the corresponding size. There are two cases:

  1. If j < t: since v is j-heavy, its degree is large, which implies that the number of positions in each group is within the required bound.

  2. If j = t: by definition, the group size is exactly 1.

Thus, the group size associated with v is within the required bound.

The above lemma shows a crucial property of all the vertices on S: the associated group size of all these vertices is bounded by the same quantity, irrespective of their degree. Thus, whenever we are searching for N_S(v) for a vertex v, we have to scan at most that many positions of N(v). We will crucially exploit this property in the restoration algorithm. However, before that, let us take a look at the Insert and Delete procedures.

6 Insert and Delete Procedures

if S′ is full then
      Restore-Full();               /* discards the bottom half of S′ */
end if
Push(S′, (v, i));
if v is 1-light then
      compute the group number of v as in Definition 6;
else
      set the group number of v to the dummy value 0;
end if
if the trailer T_1 is empty, or the recently pushed element becomes the top element of a superblock, then
      Push(T_1, v);
end if
if v is 1-light then
      insert v (with its superblock number and group number) into D_1;
end if
Algorithm 4 Insert(v, i)

In the Insert procedure, v is to be inserted into S′. But S′ may be full, that is, it already holds its maximum number of vertices; so we call Restore-Full(), which basically aims at removing half of the elements of S′. After the restoration, S′ holds the top vertices of the imaginary stack S. We then insert v into S′. If this newly added element becomes the top element of a superblock, or the trailer itself is empty, then we also add v to the trailer T_1. Lastly, if v is 1-light, then it is added to D_1. Three details are missing from the pseudocode of Insert. We list them now:

  1. Calculating the position of v in the imaginary stack S.

    Let a be the number of entries in the trailer T_1 and let b be the total number of vertices in S′. We first calculate the total number of vertices below v in the imaginary stack S, which is a simple function of a and b. Once we have calculated this position, finding the segment and superblock of v is just a mathematical calculation.

  2. Calculating the group number of v.

    This is just a mathematical calculation once we know the position of N_S(v) in N(v) and the group size.

  3. Finding whether v is a top element of a superblock.

    This can be done by maintaining a counter for the number of elements currently present in the imaginary stack S. Before inserting v, if the counter shows that v will become the topmost element of a superblock, or if the trailer T_1 is empty, then we insert v onto the trailer T_1 (see the sketch after this list).
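Detail 3 above needs only a single counter for the current height of the imaginary stack. The sketch below shows this bookkeeping with a placeholder superblock size; all names are illustrative.

SUPERBLOCK_SLOTS = 512   # placeholder: vertices per superblock (a function of n in the paper)
stack_height = 0         # number of elements currently on the imaginary stack S

def on_insert(v, trailer):
    """Bookkeeping when v is pushed onto the imaginary stack (detail 3 above)."""
    global stack_height
    if not trailer or (stack_height + 1) % SUPERBLOCK_SLOTS == 0:
        trailer.append(v)            # v is, or becomes, the top element of a superblock
    stack_height += 1

def on_delete(v, trailer):
    """Bookkeeping when v is popped from the imaginary stack."""
    global stack_height
    stack_height -= 1
    if trailer and trailer[-1] == v:
        trailer.pop()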

if S′ is empty and the trailer T_1 has at least two elements then
      Restore-Empty();              /* refills S′ with the topmost vertices of S */
end if
(v, i) ← top element of S′;
if v is 1-light then
      delete v from D_1;
end if
if v is on the top of the trailer T_1 then
      Pop(T_1);
end if
Pop(S′);
return (v, i)
Algorithm 5 Delete()

The Delete procedure is nearly similar to the Insert procedure. We first check whether S′ has run out of elements; if yes, then we also check whether the trailer itself has enough elements, and if so we call Restore-Empty(). After its execution, S′ contains the topmost vertices of the imaginary stack S. If v is 1-light, then it is removed from D_1. After this, the top element of S′ (and of the trailer T_1, if necessary) is removed.

The following lemma about the running time of Insert and Delete is immediate (due to our data-structure in Lemma 3).

Lemma 11.

Apart from the time taken by Restore-Empty and Restore-Full, the running time taken by the Insert and Delete procedures is O(1) with high probability. (Since we use the data-structure described in Lemma 3 at most poly(n) times, all inserts and deletes take O(1) time with probability at least 1 - 1/n^c, where c is some constant.)

7 Restore Procedure

color all the gray vertices white;
v ← the vertex from which we started our DFS (the bottommost element of the trailer);
do
      let i be the index of the first white neighbor of v;
      color N(v, i) gray;
      if N(v, i) lies in the top superblocks then
            if N(v, i) is 1-light then
                  insert N(v, i) (with its superblock and group number) into D_1;
            else
                  insert N(v, i) into the appropriate dictionary H_j;
            end if
      end if
      v ← N(v, i);
while v is not equal to the top of the trailer