Deterministic Combinatorial Replacement Paths and Distance Sensitivity Oracles

05/17/2019 · by Noga Alon, et al. · Tel Aviv University

In this work we derandomize two central results in graph algorithms, replacement paths and distance sensitivity oracles (DSOs), matching in both cases the running time of the randomized algorithms. For the replacement paths problem, let G = (V,E) be a directed unweighted graph with n vertices and m edges and let P be a shortest path from s to t in G. The replacement paths problem is to find for every edge e ∈ P the shortest path from s to t avoiding e. Roditty and Zwick [ICALP 2005] obtained a randomized algorithm with running time of O(m√n). Here we provide the first deterministic algorithm for this problem, with the same O(m√n) time. For the problem of distance sensitivity oracles, let G = (V,E) be a directed graph with real edge weights. An f-Sensitivity Distance Oracle (f-DSO) gets as input the graph G=(V,E) and a parameter f, preprocesses it into a data-structure, such that given a query (s,t,F) with s,t ∈ V and F ⊆ E ∪ V, |F| ≤ f being a set of at most f edges or vertices (failures), the query algorithm efficiently computes the distance from s to t in the graph G ∖ F (i.e., the distance from s to t in the graph G after removing from it the failing edges and vertices F). For weighted graphs with real edge weights, Weimann and Yuster [FOCS 2010] presented a combinatorial randomized f-DSO with O(mn^{4−α}) preprocessing time and subquadratic O(n^{2−2(1−α)/f}) query time for every value of 0 < α < 1. We derandomize this result and present a combinatorial deterministic f-DSO with the same asymptotic preprocessing and query time.


1 Introduction

In many algorithms used in computing environments such as massive storage devices, large scale parallel computation, and communication networks, recovering from failures must be an integral part of the design. Therefore, designing algorithms and data structures whose running time is efficient even in the presence of failures is an important task. In this paper we study variants of shortest path queries in settings with failures.

The computation of shortest paths and distances in the presence of failures has been extensively studied. Two central problems researched in this field are the Replacement Paths problem and Distance Sensitivity Oracles; we define these problems below.

The Replacement Paths problem (See, e.g., [37, 40, 20, 18, 30, 39, 6, 43, 31, 33, 34, 35, 42, 19]). Let G = (V,E) be a graph (directed or undirected, weighted or unweighted) with n vertices and m edges and let P be a shortest path from s to t. For every edge e ∈ P a replacement path P_e is a shortest path from s to t in the graph G ∖ {e} (which is the graph G after removing the edge e). Let d(s,t,e) be the length of the path P_e. The replacement paths problem is as follows: given a shortest path P from s to t in G, compute d(s,t,e) (or an approximation of it) for every e ∈ P.

Distance Sensitivity Oracles (See, e.g., [11, 21, 8, 9, 13, 15, 16, 17, 28]). An f-Sensitivity Distance Oracle (f-DSO) gets as input a graph G = (V,E) and a parameter f, preprocesses it into a data-structure, such that given a query (s,t,F) with s,t ∈ V and F ⊆ E ∪ V being a set of at most f edges or vertices (failures), the query algorithm efficiently computes (exactly or approximately) d(s,t,F), which is the distance from s to t in the graph G ∖ F (i.e., in the graph G after removing from it the failing edges and vertices of F). Here we would like to optimize several parameters of the data-structure: minimize the size of the oracle, support many failures f, have efficient preprocessing and query algorithms, and, if the output is an approximation of the distance, optimize the approximation ratio.

An important line of research in theoretical computer science is derandomization. In many algorithms and data-structures there exists a gap between the best known randomized algorithms and the best known deterministic algorithms. There has been extensive research on closing the gaps between the best known randomized and deterministic algorithms for many problems, or on proving that no deterministic algorithm can perform as well as its randomized counterpart. There has also been a long line of work on developing derandomization techniques, in order to obtain deterministic versions of randomized algorithms (e.g., Chapter 16 in [2]).

In this paper we derandomize algorithms and data-structures for computing distances and shortest paths in the presence of failures. Many randomized algorithms for computing shortest paths and distances use variants of the following sampling lemma (see Lemma 1 in Roditty and Zwick [37]).

[Lemma 1 in [37]] Let D_1, …, D_q ⊆ V satisfy |D_i| ≥ L for every 1 ≤ i ≤ q. If R ⊆ V is a random subset obtained by selecting each vertex, independently, with probability (c ln n)/L, for some constant c > 0, then with probability of at least 1 − q·n^{−c} we have D_i ∩ R ≠ ∅ for every 1 ≤ i ≤ q.
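For completeness, the standard union-bound calculation behind the lemma is (a sketch; the constants are only indicative):

\[
\Pr\bigl[\exists\, i:\ D_i \cap R = \emptyset\bigr]
\;\le\; \sum_{i=1}^{q} \Bigl(1 - \tfrac{c\ln n}{L}\Bigr)^{|D_i|}
\;\le\; q\Bigl(1 - \tfrac{c\ln n}{L}\Bigr)^{L}
\;\le\; q\,e^{-c\ln n}
\;=\; q\,n^{-c}.
\]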

Our derandomization of Lemma 1 is very simple: as described in Section 1.3, we use the folklore greedy approach to prove the following lemma, which is a deterministic version of Lemma 1.

[See also Section 1.3] Let D_1, …, D_q ⊆ V satisfy |D_i| ≥ L for every 1 ≤ i ≤ q. One can deterministically find in Õ(qL + n) time a set R ⊆ V such that |R| = O((n/L)·ln q) and D_i ∩ R ≠ ∅ for every 1 ≤ i ≤ q.

We emphasize that the use of Lemma 1 is very standard and is not our main contribution. The main technical challenge is how to efficiently and deterministically compute a small number of sets D_1, …, D_q so that the invocation of Lemma 1 is fast.

1.1 Derandomizing the Replacement Paths Algorithm of Roditty and Zwick [37]

We derandomize the algorithm of Roditty and Zwick [37] and obtain a near optimal deterministic algorithm for the replacement paths problem in directed unweighted graphs (a problem which was open for more than a decade since the randomized algorithm was published), as stated in the following theorem.

There exists a deterministic algorithm for the replacement paths problem in unweighted directed graphs whose runtime is Õ(m√n). This algorithm is near optimal assuming the conditional lower bound of combinatorial Boolean matrix multiplication of [42].

The term “combinatorial algorithms” is not well-defined, and it is often interpreted as non-Strassen-like algorithms [4], or more intuitively, algorithms that do not use any matrix multiplication tricks. Arguably, in practice, combinatorial algorithms are to some extent considered more efficient since the constants hidden in the matrix multiplication bounds are high. On the other hand, there has been research done to make fast matrix multiplication practical, e.g., [27, 5].

Vassilevska Williams and Williams [42] proved a subcubic equivalence between the combinatorial replacement paths problem in unweighted directed graphs and the combinatorial Boolean matrix multiplication (BMM) problem. More precisely, they proved that the combinatorial replacement paths problem admits a polynomially faster algorithm if and only if combinatorial BMM can be solved in truly subcubic time. Giving a subcubic combinatorial algorithm for the BMM problem, or proving that no such algorithm exists, is a long standing open problem. This implies that either both problems can be polynomially improved, or neither of them can. Hence, assuming the conditional lower bound for combinatorial BMM, our combinatorial algorithm for the replacement paths problem in unweighted directed graphs is essentially optimal (up to lower-order factors).

The replacement paths problem is related to the k simple shortest paths problem, where the goal is to find the k shortest simple paths between two vertices. Using known reductions from the k simple shortest paths problem to the replacement paths problem, we close the corresponding gap, as the following corollary states.

There exists a deterministic algorithm for computing the k simple shortest paths in unweighted directed graphs whose runtime is Õ(km√n).

More related work can be found in Section 1.5. As discussed in Section 1.5, the trivial O(mn + n² log n)-time algorithm for solving the replacement paths problem in directed weighted graphs (simply, for every edge e ∈ P run Dijkstra in the graph G ∖ {e}) is deterministic and near optimal (according to a conditional lower bound of [42]). To the best of our knowledge, the only deterministic combinatorial algorithms known for directed unweighted graphs are the algorithms for general directed weighted graphs, whose runtime is Õ(mn), leaving a significant gap between the randomized and deterministic algorithms. As mentioned above, in this paper we derandomize the algorithm of Roditty and Zwick [37] and close this gap.
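For concreteness, here is a minimal Python sketch of this trivial baseline (our own illustration, not code from the paper; `adj` is a directed adjacency list of `(neighbor, weight)` pairs and all function names are ours):

```python
import heapq

def dijkstra(n, adj, s, skip_edge=None):
    """Standard Dijkstra from s; optionally ignores a single directed edge (u, v)."""
    dist = [float("inf")] * n
    dist[s] = 0
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if (u, v) == skip_edge:
                continue
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def trivial_replacement_paths(n, adj, path):
    """For every edge e on the s-t shortest path `path`, compute d(s,t,e) by
    rerunning Dijkstra with e removed; with fewer than n edges on the path
    this is O(mn + n^2 log n) overall."""
    s, t = path[0], path[-1]
    return {(u, v): dijkstra(n, adj, s, skip_edge=(u, v))[t]
            for u, v in zip(path, path[1:])}
```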

1.2 Derandomizing the Combinatorial Distance Sensitivity Oracle of Weimann and Yuster [39]

Our second result is derandomizing the combinatorial distance sensitivity oracle of Weimann and Yuster [39] and obtaining the following theorem.

Let G = (V,E) be a directed graph with real edge weights, let f ≥ 1 and 0 < α < 1. There exists a deterministic algorithm that, given G and the parameters f and α, constructs an f-sensitivity distance oracle in O(mn^{4−α}) time. Given a query (s,t,F) with s,t ∈ V and F ⊆ E ∪ V being a set of at most f edges or vertices (failures), the deterministic query algorithm computes in O(n^{2−2(1−α)/f}) time the distance from s to t in the graph G ∖ F.

We remark that while our focus in this paper is on computing distances, one may obtain the actual shortest path in time proportional to the number of edges of the shortest path, using the same methods used for reporting the paths in the replacement paths problem [37] and in the distance sensitivity oracles case [39].

1.3 Technical Contribution and Our Derandomization Framework

Let A be a randomized algorithm that uses Lemma 1 for sampling a subset of vertices R ⊆ V. We say that a set of paths 𝒫 is a set of critical paths for the randomized algorithm A if A uses the sampling Lemma 1 and it is sufficient for the correctness of algorithm A that R is a hitting set for 𝒫 (i.e., every path in 𝒫 contains at least one vertex of R). According to Lemma 1, one can derandomize the random selection of the hitting set R in time that depends on the number of paths in 𝒫. Therefore, in order to obtain an efficient derandomization procedure, we want to find a small set of critical paths for the randomized algorithms.

Our main technical contribution is to show how to compute a small set of critical paths that is sufficient to be used as input for the greedy algorithm stated in Lemma 1.

Our framework for derandomizing algorithms and data-structures that use the sampling Lemma 1 is given in Figure 1.

Step 1: Prove the existence of a small set of critical paths 𝒫 and show that it is sufficient for the correctness of the randomized algorithm that the set R obtained by Lemma 1 hits all the paths of 𝒫.
Step 2: Find an efficient algorithm to compute the paths of 𝒫.
Step 3: Use a deterministic algorithm to compute a small subset R of vertices such that D ∩ R ≠ ∅ for every D ∈ 𝒫. For example, one can use the greedy algorithm of Lemma 1 or the blocker set algorithm of [29] to find such a subset of vertices.
Figure 1: Our derandomization framework to derandomize algorithms that use the sampling Lemma 1.

Our first main technical contribution, denoted as Step 1 in Figure 1, is proving the existence of small sets of critical paths for the randomized replacement path algorithm of Roditty and Zwick [37] and for the distance sensitivity oracles of Weimann and Yuster [39]. Our second main technical contribution, denoted as Step 2 in Figure 1, is developing algorithms to efficiently compute these small sets of critical paths.

For the replacement paths problem, Roditty and Zwick [37] proved the existence of a critical set of paths, each path containing at least √n edges. Simply applying Lemma 1 to this set of paths requires too much time, and it is also not clear from their algorithm how to efficiently compute this set of critical paths. As for Step 1, we prove the existence of a small set of critical paths, each of which contains √n edges, and for Step 2, we develop an efficient algorithm that computes this set of critical paths in Õ(m√n) time.

For the problem of distance sensitivity oracles, Weimann and Yuster [39] proved the existence of a critical set of paths, each path containing L edges (where L is a length parameter that depends on α). Simply applying Lemma 1 to this set of paths requires too much time, and here too, it is not clear from their algorithm how to efficiently and deterministically compute this set of critical paths. As for Step 1, we prove the existence of a small set of critical paths, each of which contains L edges, and for Step 2, we develop an efficient deterministic algorithm that computes this set of critical paths without increasing the asymptotic preprocessing time.

For Step 3, we use the folklore greedy deterministic algorithm, denoted here by GreedyPivotsSelection. Given as input the paths D_1, …, D_q, each containing at least L vertices, the algorithm chooses a set R of pivots such that for every 1 ≤ i ≤ q it holds that D_i ∩ R ≠ ∅. In addition, it holds that |R| = O((n/L)·ln q) and the runtime of the algorithm is Õ(qL + n).

The GreedyPivotsSelection algorithm works as follows. Let Q = {D_1, …, D_q}. Starting with R = ∅, find a vertex v which is contained in the maximum number of sets of Q, add it to R, and remove from Q all the sets that contain v. Repeat this process until Q = ∅.

Let L ≤ n and q be two integers. Let D_1, …, D_q ⊆ V be paths satisfying |D_i| ≥ L for every 1 ≤ i ≤ q. The algorithm GreedyPivotsSelection finds in Õ(qL + n) time a set R ⊆ V such that for every 1 ≤ i ≤ q it holds that D_i ∩ R ≠ ∅ and |R| = O((n/L)·ln q).

Proof.

We first prove that for every 1 ≤ i ≤ q it holds that D_i ∩ R ≠ ∅ and that |R| = O((n/L)·ln q).

When the algorithm terminates, every set D_i contains at least one of the vertices of R, as otherwise Q would still contain the sets which are disjoint from R, and the algorithm would not have finished, since Q ≠ ∅.

For every vertex v, let C[v] be a variable which denotes, at every moment of the algorithm, the number of sets in Q which contain v.

Denote by Q_j the set Q after j iterations, and let Q_0 = Q be the initial set given as input to the algorithm, so |Q_0| = q. We claim that the process terminates after at most O((n/L)·ln q) iterations, and since at every iteration we add one vertex to R, it follows that |R| = O((n/L)·ln q). Recall that Q_j contains sets of size at least L. Hence, Σ_{v ∈ V} C[v] ≥ |Q_j|·L. It follows that the average number of sets of Q_j that a vertex belongs to is at least |Q_j|·L/n. By the pigeonhole principle, the vertex v chosen in the next iteration belongs to at least |Q_j|·L/n sets of Q_j. At that iteration we remove from Q_j all the sets that contain v, so in each iteration we decrease the size of Q by at least a factor of 1 − L/n. After the j-th iteration, the size of Q is at most q(1 − L/n)^j ≤ q·e^{−jL/n}, where the last inequality holds since 1 − x ≤ e^{−x}. It follows that after ⌈(n/L)·ln q⌉ + 1 iterations we have Q = ∅.

At each iteration we add one vertex to the set R; thus the size of the set R is O((n/L)·ln q).
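In display form, the argument above (writing Q_j for Q after j iterations) is:

\[
|Q_j| \;\le\; |Q_{j-1}|\Bigl(1-\tfrac{L}{n}\Bigr)
\quad\Longrightarrow\quad
|Q_j| \;\le\; q\Bigl(1-\tfrac{L}{n}\Bigr)^{j} \;\le\; q\,e^{-jL/n},
\]

so taking j = ⌈(n/L)·ln q⌉ + 1 gives |Q_j| < 1, i.e., Q_j = ∅, and hence |R| ≤ j = O((n/L)·ln q).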

Next we describe an implementation of the GreedyPivotsSelection algorithm (see Figure 2 for pseudo-code). The first thing we do is keep only an arbitrary subset of L vertices from every set D_i, so that every set contains exactly L vertices.

We implement the algorithm GreedyPivotsSelection as follows. During the run of the algorithm we maintain a counter C[v] for every vertex v, which equals the number of sets in Q that contain v. During the initialization of the algorithm, we construct the subset U of vertices which contains all the vertices in all the paths D_1, …, D_q, and we compute the counters C[v] directly: first we set C[v] = 0 for every v ∈ U, and then we scan all the sets D_i and every vertex v ∈ D_i and increase the counter C[v]. After this initialization, C[v] is the number of sets of Q that contain v. We further initialize a binary search tree T and insert every vertex v ∈ U into T with the key C[v], and initialize R = ∅. We also create a list ℒ_v for every vertex v ∈ U which contains pointers to the sets that contain v. Hence, |U| ≤ qL and Σ_{v ∈ U} |ℒ_v| = qL.

To obtain the set R we run the following loop. While Q ≠ ∅, we find the vertex v which is contained in the maximum number of sets of Q and add v to R. The vertex v is computed in O(log n) time by extracting the element of T whose key is maximal. Then we remove from Q all the sets which contain v (these are exactly the sets in ℒ_v) and we update the counters by scanning every removed set D and every vertex u ∈ D and decreasing the counter C[u] by one (we also update the key of u in T to the new value of C[u]).

We analyse the runtime of this greedy algorithm. Computing the subset U of vertices and setting C[v] = 0 for every v ∈ U takes O(qL) time. Computing the values C[v] takes O(qL) time, as we loop over all the sets D_i, and for every D_i we loop over its exactly L vertices and increase the corresponding counters by one. Initializing the binary search tree T and inserting into it every vertex v ∈ U with key C[v] takes O(qL log n) time, and all the extract-max operations on T take an additional O(n log n) time. The total number of operations of the form C[u] ← C[u] − 1 is at most qL, as this is the sum of all the counters at the beginning, and each such operation is handled in O(log n) time by updating the key of the vertex u in T. The total time for scanning the lists ℒ_v of all the vertices chosen into R is at most O(qL), as this is the sum of the sizes of all the sets D_i. Therefore, the total running time is O((qL + n) log n) = Õ(qL + n). ∎

Algorithm: GreedyPivotsSelection(D_1, …, D_q)
 /* Initialization */
for i ← 1 to q do
      for every vertex v ∈ D_i do
            C[v] ← 0
      end for
end for
for i ← 1 to q do
      for every vertex v ∈ D_i do
            C[v] ← C[v] + 1
            ℒ_v.append(D_i)   /* Append to ℒ_v a pointer to the set D_i */
      end for
end for
T ← Empty-Binary-Search-Tree()
for every vertex v with C[v] > 0 do
      Insert v into the binary search tree T with the key C[v].
end for
R ← ∅, Q ← {D_1, …, D_q}.   /* Loop Invariant: C[v] = |{D ∈ Q : v ∈ D}| */
while Q ≠ ∅ do
      v ← the vertex in T whose key is maximal
      R ← R ∪ {v}
      for every set D ∈ ℒ_v do
            if D ∈ Q then
                  for every vertex u ∈ D do
                        C[u] ← C[u] − 1
                        T.remove(u); insert u into T with the key C[u].
                  end for
                  Q.delete(D)
            end if
      end for
end while
return R
Figure 2: Algorithm GreedyPivotsSelection
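The following Python sketch (our own illustration, not code from the paper) implements the greedy selection described above; a max-heap with lazy deletions stands in for the binary search tree, which changes constant factors but not the overall behaviour.

```python
import heapq

def greedy_pivots_selection(sets, L):
    """Greedy hitting set: given `sets` (iterables of hashable, comparable
    vertices), each of size >= L, return a set R intersecting every input set."""
    sets = [list(D)[:L] for D in sets]            # keep exactly L vertices per set
    count = {}                                     # C[v]: number of alive sets containing v
    containing = {}                                # L_v: indices of sets containing v
    for i, D in enumerate(sets):
        for v in D:
            count[v] = count.get(v, 0) + 1
            containing.setdefault(v, []).append(i)
    heap = [(-c, v) for v, c in count.items()]     # max-heap via negated counters
    heapq.heapify(heap)
    alive = set(range(len(sets)))
    R = set()
    while alive:
        c, v = heapq.heappop(heap)
        if v in R or -c != count[v]:
            continue                               # stale entry (lazy deletion)
        R.add(v)
        for i in containing[v]:
            if i in alive:
                alive.remove(i)
                for u in sets[i]:                  # update counters of the removed set
                    count[u] -= 1
                    heapq.heappush(heap, (-count[u], u))
    return R
```

In the replacement paths application of Section 3, one would call this procedure with the critical paths of Lemma 3.1 and L ≈ √n.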

1.4 Related Work - the Blocker Set Algorithm of King

We remark that the GreedyPivotsSelection algorithm is similar to the blocker set algorithm described in [29] for finding a hitting set for a set of paths. The blocker set algorithm was used in [29] to develop sequential dynamic algorithms for the APSP problem. Additional related work is that of Agarwal et al. [1]. They presented a deterministic distributed algorithm to compute APSP in an edge-weighted directed or undirected graph in Õ(n^{3/2}) rounds in the Congest model by incorporating a deterministic distributed version of the blocker set algorithm.

While our derandomization framework uses the greedy algorithm (or the blocker set algorithm) to find a hitting set of vertices for a critical set of paths 𝒫, we stress that our main contributions are the techniques to reduce the number of sets the greedy algorithm must hit (Step 1), and the algorithms to efficiently compute these sets (Step 2). These techniques enable us to use the greedy algorithm (or the blocker set algorithm) for a wider range of problems. Specifically, they allow us to derandomize the best known randomized algorithms for the replacement paths problem and for distance sensitivity oracles. We believe that our techniques can also be leveraged for additional related problems which use a sampling lemma similar to Lemma 1.

1.5 More Related Work

We survey related work for the replacement paths problem and distance sensitivity oracles.

The replacement paths problem. The replacement paths problem is motivated by several different applications and has been extensively studied in the last few decades (see e.g. [34, 26, 25, 35, 42, 20, 37, 18, 30, 6]). It is also well motivated in its own right from the fault-tolerance perspective. In many applications it is desired to find algorithms and data-structures that are resilient to failures. Since links in a network can fail, it is important to find backup shortest paths between important vertices of the graph.

Furthermore, the replacement paths problem is also motivated by several applications. First, the fastest algorithms for computing the k simple shortest paths between s and t in directed graphs execute k iterations of a replacement paths computation between s and t (see [43, 31]). Second, considering path auctions, suppose we would like to find the shortest path from s to t in a directed graph G, where links are owned by selfish agents. Nisan and Ronen [36] showed that Vickrey pricing is an incentive compatible mechanism, and in order to compute the Vickrey pricing of the edges one has to solve the replacement paths problem. It was raised as an open problem by Nisan and Ronen [36] whether there exists an efficient algorithm for solving the replacement paths problem. In biological sequence alignment [10], replacement paths can be used to compute which pieces of an alignment are most important.

The replacement paths problem has been studied extensively, and by now near optimal algorithms are known for many cases of the problem. For instance, the case of undirected graphs admits deterministic near linear time solutions (see [34, 26, 25, 35]). In fact, Lee and Lu present linear O(n + m)-time algorithms for the replacement-paths problem on the following classes of n-node m-edge graphs: (1) undirected graphs in the word-RAM model of computation, (2) undirected planar graphs, (3) undirected minor-closed graphs, and (4) directed acyclic graphs.

A natural question is whether a near linear time algorithm is also possible for the directed case. Vassilevska Williams and Williams [42] showed that such an algorithm is essentially not possible by presenting conditional lower bounds. More precisely, Vassilevska Williams and Williams [42] showed a subcubic equivalence between the combinatorial all pairs shortest paths (APSP) problem and the combinatorial replacement paths problem. They proved that there exists a fixed ε > 0 and an O(n^{3−ε})-time combinatorial algorithm for the replacement paths problem if and only if there exists a fixed ε' > 0 and an O(n^{3−ε'})-time combinatorial algorithm for the APSP problem. This implies that either both problems admit truly subcubic algorithms, or neither of them does. Assuming the conditional lower bound that no subcubic APSP algorithm exists, the trivial algorithm of computing Dijkstra from s in the graph G ∖ {e} for every edge e ∈ P, which takes O(mn + n² log n) time, is essentially near optimal.

The near optimal algorithms for the undirected case and the conditional lower bounds for the directed case seem to close the problem. However, it turned out that if we consider the directed case with bounded edge weights then the picture is not yet complete.

For instance, if we assume that the graph is directed with integer weights in the range [−M, M] and allow algebraic solutions (rather than combinatorial ones), then Vassilevska Williams [40] presented an Õ(Mn^ω)-time algebraic randomized algorithm for the replacement paths problem, where ω is the matrix multiplication exponent, whose current best known upper bound is roughly 2.373 ([32, 41, 14]).

Bernstein [6] presented a (1+ε)-approximate deterministic replacement paths algorithm which is near optimal (its runtime is Õ((m/ε)·log(nC/c)), where C is the largest edge weight in the graph and c is the smallest edge weight).

For unweighted directed graphs the gap between randomized and deterministic solutions is even larger for sparse graphs. Roditty and Zwick [37] presented a randomized algorithm whose runtime is Õ(m√n) for the replacement paths problem in unweighted directed graphs. Vassilevska Williams and Williams [42] proved a subcubic equivalence between the combinatorial replacement paths problem in unweighted directed graphs and the combinatorial Boolean matrix multiplication (BMM) problem: the combinatorial replacement paths problem admits a polynomially faster algorithm if and only if combinatorial BMM can be solved in truly subcubic time. Giving a subcubic combinatorial algorithm for the BMM problem, or proving that no such algorithm exists, is a long standing open problem. This implies that either both problems can be polynomially improved, or neither of them can. Hence, assuming the conditional lower bound for combinatorial BMM, the randomized algorithm of Roditty and Zwick [37] is near optimal.

In the deterministic regime no algorithm for the directed case is known that is asymptotically better (up to polylog factors) than invoking an APSP algorithm. Interestingly, in the fault-tolerant and the dynamic settings many of the existing algorithms are randomized, and for many of the problems there is a polynomial gap between the best randomized and deterministic algorithms (see e.g. sensitive distance oracles [21], dynamic shortest paths [22, 7], dynamic strongly connected components [23, 24, 12], dynamic matching [38, 3], and many more). Randomization is a powerful tool in the classic setting of graph algorithms with full knowledge and is often used to simplify algorithms and to speed up their running time. However, physical computers are deterministic machines, and obtaining true randomness can be a hard task to achieve. A central line of research is focused on the derandomization of algorithms that rely on randomness.

Our main contribution is a derandomization of the replacement paths algorithm of [37] for the case of unweighted directed graphs. After more than a decade we give the first deterministic algorithm for the replacement paths problem, whose runtime is Õ(m√n). Our deterministic algorithm matches the runtime of the randomized algorithm, which is near optimal assuming the conditional lower bound of combinatorial Boolean matrix multiplication [42]. In addition, to the best of our knowledge this is the first deterministic solution for the directed case that is asymptotically better than the APSP bound.

The replacement paths problem is related to the k shortest paths problem, where the goal is to find the k shortest paths between two vertices. Eppstein [19] solved the k shortest paths problem for directed graphs with nonnegative edge weights in O(m + n log n + k) time. However, the resulting paths may not be simple, i.e., they may contain cycles. The problem of k simple shortest paths (loopless) is more difficult. The deterministic algorithm by Yen [43] (which was generalized by Lawler [31]) for finding k simple shortest paths in weighted directed graphs can be implemented in O(kn(m + n log n)) time. This algorithm essentially uses a replacement paths algorithm in each iteration. Roditty and Zwick [37] described how to reduce the problem of k simple shortest paths to k executions of the second shortest path problem. For directed unweighted graphs, the randomized replacement paths algorithm of Roditty and Zwick [37] implies that the k simple shortest paths problem has a randomized Õ(km√n)-time algorithm. To the best of our knowledge no better deterministic algorithm is known than the algorithms for general directed weighted graphs, yielding a significant gap between the randomized and the deterministic k simple shortest paths algorithms for directed unweighted graphs. Our deterministic replacement paths algorithm closes this gap and gives the first deterministic k simple shortest paths algorithm for directed unweighted graphs whose runtime is Õ(km√n).

The best known randomized algorithm for the k simple shortest paths problem in directed unweighted graphs takes Õ(km√n) time ([37]), leaving a significant gap compared to the best known deterministic algorithm, which takes O(kn(m + n log n)) time (e.g., [43], [31]). We close this gap by proving the existence of a deterministic algorithm for computing the k simple shortest paths in unweighted directed graphs whose runtime is Õ(km√n).

1.6 Outline

The structure of the paper is as follows. In Section 2 we describe some preliminaries and notations. In Section 3 we apply our framework to the replacement paths algorithm of Roditty and Zwick [37]. In Section 4 we apply our framework to the DSO of Weimann and Yuster for graphs with real-edge weights [39].

In order for this paper to be self-contained, a full description of the combinatorial deterministic replacement paths algorithm is given in Section 5 and a full description of the deterministic distance sensitivity oracles is given in Section 6.

2 Preliminaries

Let G = (V,E) be a directed weighted graph with n vertices and m edges and with real edge weights w(e) for e ∈ E. Given a path P in G we define its weight w(P) = Σ_{e ∈ P} w(e).

Given u, v ∈ V, let P_G(u,v) be a shortest path from u to v in G and let d_G(u,v) be its length, which is the sum of its edge weights. Let |P_G(u,v)| denote the number of edges along P_G(u,v). Note that for unweighted graphs we have d_G(u,v) = |P_G(u,v)|. When G is known from the context we sometimes abbreviate P_G(u,v) and d_G(u,v) with P(u,v) and d(u,v), respectively.

We define the path concatenation operator ∘ as follows. Let P_1 = ⟨u_1, …, u_r⟩ and P_2 = ⟨v_1, …, v_ℓ⟩ be two paths. Then P_1 ∘ P_2 is defined as the path ⟨u_1, …, u_r, v_1, …, v_ℓ⟩, and it is well defined if either u_r = v_1 or (u_r, v_1) ∈ E.

For a graph G we denote by V(G) the set of its vertices, and by E(G) the set of its edges. When it is clear from the context, we abbreviate V(G) by V and E(G) by E.

Let P be a path which contains the vertices u and v such that u appears before v along P. We denote by P[u..v] the subpath of P from u to v.

For every edge e ∈ P(s,t), a replacement path P_{s,t,e} for the triple (s,t,e) is a shortest path from s to t avoiding e. Let d(s,t,e) be the length of the replacement path P_{s,t,e}.

We will assume, without loss of generality, that every replacement path P_{s,t,e} can be decomposed into a common prefix with the shortest path P(s,t), a detour which is disjoint from the shortest path P(s,t) (except for its first vertex and last vertex), and finally a suffix which is common with the shortest path P(s,t). Therefore, for every edge e ∈ P(s,t), the replacement path P_{s,t,e} is a concatenation of a prefix of P(s,t), a detour, and a suffix of P(s,t) (the prefix and/or suffix may be empty).

Let F ⊆ V ∪ E be a set of vertices and edges. We define the graph G ∖ F as the graph obtained from G by removing the vertices and edges of F. We define a replacement path P_{s,t,F} as a shortest path from s to t in the graph G ∖ F, and let d(s,t,F) be its length.

3 Deterministic Replacement Paths in Õ(m√n) Time - an Overview

In this section we apply our framework from Section 1.3 to the replacement paths algorithm of Roditty and Zwick [37]. A full description of the deterministic replacement paths algorithm is given in Section 5.

The randomized algorithm of Roditty and Zwick as described in [37] takes Õ(m√n) expected time. They handle separately the case that a replacement path has a short detour containing at most √n edges, and the case that a replacement path has a long detour containing more than √n edges. The first case is solved deterministically. The second case is solved by first sampling a subset R of vertices according to Lemma 1, where each vertex is sampled uniformly and independently at random with probability (c ln n)/√n for a large enough constant c. Using this uniform sampling, it holds with high probability (of at least 1 − 1/poly(n)) that for every long triple (as defined hereinafter), the detour of the replacement path contains at least one vertex of R.
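For illustration, the random step that we derandomize looks as follows (a sketch; the constant c and the vertex labels 0..n−1 are placeholders):

```python
import math
import random

def sample_pivots(n, c=2.0):
    """The only random step of the randomized algorithm (sketch): keep each
    vertex independently with probability ~ c*ln(n)/sqrt(n), so that w.h.p.
    every detour with more than sqrt(n) edges contains a sampled vertex."""
    p = min(1.0, c * math.log(n) / math.sqrt(n))
    return {v for v in range(n) if random.random() < p}
```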

Let e ∈ P(s,t). The triple (s,t,e) is a long triple if every replacement path from s to t avoiding e has its detour part containing more than √n edges.

Note that in Definition 3 we defined (s,t,e) to be a long triple if every replacement path from s to t avoiding e has a long detour (containing more than √n edges). We could have defined (s,t,e) to be a long triple even if at least one replacement path from s to t avoiding e has a long detour (perhaps more similar to the definitions in [37]); however, we find Definition 3 more convenient for the following reason. If (s,t,e) has a replacement path whose detour part contains at most √n edges, then the algorithm of [37] for handling short detours finds deterministically a replacement path for (s,t,e). Hence, we only need to find the replacement paths for triples for which every replacement path from s to t avoiding e has a long detour, and this is the case for which we define (s,t,e) as a long triple.

It is sufficient for the correctness of the replacement paths algorithm that the following condition holds: for every long triple (s,t,e), the detour of some replacement path for (s,t,e) contains at least one vertex of R. As the authors of [37] write, the choice of the random set R is the only randomization used in their algorithm. To obtain a deterministic algorithm for the replacement paths problem and to prove Theorem 1.1, we prove the following deterministic alternative to Lemma 1.

[Our derandomized version of Lemma 1 for the replacement paths algorithm] There exists an Õ(m√n)-time deterministic algorithm that computes a set R of O(√n·log n) vertices, such that for every long triple (s,t,e) there exists a replacement path whose detour part contains at least one of the vertices of R.

Following the above description, in order to prove Theorem 1.1, i.e., that there exists an Õ(m√n)-time deterministic replacement paths algorithm, it is sufficient to prove the derandomization Lemma 3; we do so in the following sections.

3.1 Step 1: the Method of Reusing Common Subpaths - Defining the Set 𝒫

In this section we prove the following lemma.

There exists a set 𝒫 of at most n paths, each path of length exactly √n, with the following property: for every long triple (s,t,e) there exists a path Q ∈ 𝒫 and a replacement path P'_{s,t,e} such that Q is contained in the detour part of P'_{s,t,e}.

In order to define the set of paths 𝒫 and prove Lemma 3.1 we need the following definitions. Let G' be the graph obtained by removing the edges of the path P(s,t) from G. For two vertices u and v, let d'(u,v) be the distance from u to v in G'.

We use the following definitions of the index i_v, the set of vertices B and the set of paths 𝒫.

[The index i_v] Let P(s,t) = ⟨p_0 = s, p_1, …, p_k = t⟩ and let V' be the subset of all the vertices v ∈ V such that there exists at least one index 0 ≤ i ≤ k with d'(p_i, v) = √n.

For every vertex v ∈ V' we define the index i_v to be the minimum index such that d'(p_{i_v}, v) = √n.

[The set of vertices B] We define the set of vertices B = {v ∈ V' : d'(p_j, v) > √n for every j < i_v}. In other words, B is the set of all vertices v ∈ V' such that for all the vertices p_j before p_{i_v} along P(s,t) it holds that d'(p_j, v) > √n.

[A set of paths 𝒫] For every vertex v ∈ B, let Q_v be an arbitrary shortest path from p_{i_v} to v in G' (whose length is √n, as v ∈ B). We define 𝒫 = {Q_v : v ∈ B}.

Note that while B is uniquely defined (as it is defined according to distances between vertices), the set of paths 𝒫 is not unique, as there may be many shortest paths from p_{i_v} to v in G', and we take Q_v to be an arbitrary such shortest path.

The basic intuition for the method of reusing common subpaths is as follows. Let P_{s,t,e_1}, …, P_{s,t,e_r} be arbitrary replacement paths such that v is the vertex at distance exactly √n from the beginning of the detour along the detours of all of these replacement paths. Then one can construct replacement paths P'_{s,t,e_1}, …, P'_{s,t,e_r} such that the subpath Q_v is contained in all these replacement paths. Therefore, the subpath Q_v is reused as a common subpath in many replacement paths. We utilize this observation in the following proof of Lemma 3.1.

Proof of Lemma 3.1.

Obviously, the set 𝒫 described in Definition 3.1 contains at most n paths, and each path is of length exactly √n.

We prove that for every long triple (s,t,e) there exists a path Q_v ∈ 𝒫 and a replacement path P'_{s,t,e} s.t. Q_v is contained in the detour part of P'_{s,t,e}.

Let P_{s,t,e} be a replacement path for (s,t,e). Since (s,t,e) is a long triple, the detour part D of P_{s,t,e} contains more than √n edges. Let v be the vertex at distance exactly √n from the beginning of D along D, and let x be the first vertex of D. Let D_1 be the subpath of D from x to v and let D_2 be the subpath of P_{s,t,e} from v to t. In other words, P_{s,t,e} = P(s,t)[s..x] ∘ D_1 ∘ D_2. Since D contains more than √n edges and D is disjoint from P(s,t) except for the first and last vertices of D, it follows that D_1 is disjoint from P(s,t) (except for the vertex x). In particular, since D_1 is a shortest path in G ∖ {e} that is edge-disjoint from P(s,t), it is also a shortest path in G'. We get that d'(x, v) = √n.

We prove that v ∈ B and that p_{i_v} = x. As we have already proved that d'(x, v) = √n, we need to prove that for every vertex p_j that appears before x along P(s,t) it holds that d'(p_j, v) > √n. Assume by contradiction that there exists such an index j with d'(p_j, v) ≤ √n. Then the path P(s,t)[s..p_j] ∘ P_{G'}(p_j, v) ∘ D_2 is a path from s to t that avoids e and its length is:

|P(s,t)[s..p_j]| + d'(p_j, v) + |D_2| ≤ j + √n + |D_2| < |P(s,t)[s..x]| + |D_1| + |D_2| = |P_{s,t,e}|.

This means that there is a path from s to t in G ∖ {e} whose length is shorter than the length of the shortest path from s to t in G ∖ {e}, which is a contradiction. We get that d'(x, v) = √n and for every vertex p_j before x along P(s,t) it holds that d'(p_j, v) > √n. Therefore, according to Definitions 3.1 and 3.1 it holds that v ∈ B and p_{i_v} = x.

Let Q_v ∈ 𝒫; then according to Definition 3.1, Q_v is a shortest path from p_{i_v} = x to v in G'. We define the path P'_{s,t,e} = P(s,t)[s..x] ∘ Q_v ∘ D_2. It follows that P'_{s,t,e} is a path from s to t that avoids e and |P'_{s,t,e}| = |P_{s,t,e}|. Hence, P'_{s,t,e} is a replacement path for (s,t,e) such that Q_v is contained in its detour part, so the lemma follows. ∎

3.2 Step 2: the Method of Decremental Distances from a Path - Computing the Set 𝒫

In this section we describe a decremental algorithm that enables us to compute the set of paths 𝒫 in Õ(m√n) time, proving the following lemma.

There exists a deterministic algorithm for computing the set of paths 𝒫 in Õ(m√n) time.

Our algorithm for computing the set of paths 𝒫 is a variant of the decremental SSSP (single source shortest paths) algorithm of King [29]. Our variant of the algorithm is used to find distances of vertices from a path rather than from a single source vertex, as we define below.

Overview of the Deterministic Algorithm for Computing 𝒫 in Õ(m√n) Time.

In the following description let P = P(s,t) = ⟨p_0 = s, p_1, …, p_k = t⟩. Consider the following assignment w' of weights to the edges of G. We assign weight w'(e) = ε for every edge e on the path P, and weight w'(e) = 1 for all the other edges, where ε is a small number such that 0 < ε < 1/n. We define the graph H as the weighted graph G with the edge weights w'. We define for every 0 ≤ i ≤ k the graph G_i as the graph obtained from G by removing the vertices p_{i+1}, …, p_k, and the path P_i = P[p_0..p_i]. We define the graph H_i as the weighted graph G_i with the edge weights w'.

The algorithm computes the graph H by simply taking G and setting the weight of every edge of P to be ε (for some small ε such that 0 < ε < 1/n) and all other edge weights to be 1. The algorithm then removes the vertices of P from H one after the other (starting from the vertex that is closest to t). Loosely speaking, after each vertex is removed, the algorithm computes the distances from s in the current weighted graph (which, because of the ε-weights, essentially measure how many non-path edges are needed to reach a vertex from the remaining path). In each such iteration, the algorithm adds to B all vertices whose distance from s in the current graph rounds down to √n. We will later show that at the end of the algorithm the set we have computed is exactly B. Unfortunately, we cannot afford to run Dijkstra after the removal of every vertex of P, as there might be Θ(n) vertices on P. To overcome this issue, the algorithm only maintains nodes at distance at most roughly √n from s. In addition, we observe that to compute the SSSP from s in the graph after the removal of a vertex we only need to spend time on nodes whose shortest path from s uses the removed vertex. Roughly speaking, for these nodes we show that their distance from s, rounded down to the closest integer, must increase by at least 1 as a result of the removal of the vertex. Hence, for every node we spend time on it in at most O(√n) iterations until its distance from s exceeds the maintained range. As we will show later, this yields our desired Õ(m√n) running time.
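To illustrate only the weighting idea (a static sketch of our own, not the decremental algorithm described above): path edges get weight ε < 1/n, all other edges weight 1, and a single Dijkstra from s then reports, in the integer part of each distance, the minimum number of non-path edges needed to reach a vertex.

```python
import heapq

def distances_from_path(n, edges, path, eps=None):
    """Static illustration of the epsilon-weighting: `edges` are directed
    (u, v) pairs of an unweighted graph, `path` is the list of vertices of
    P(s, t).  Edges on the path get weight eps < 1/n, other edges weight 1;
    floor(dist[v]) is then the minimum number of non-path edges on any
    s-to-v path."""
    if eps is None:
        eps = 1.0 / (n + 1)
    on_path = set(zip(path, path[1:]))
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append((v, eps if (u, v) in on_path else 1.0))
    dist = [float("inf")] * n
    s = path[0]
    dist[s] = 0.0
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist
```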

In Section 5.4 we give a formal description and analysis of the algorithm and prove Lemma 3.2.

Proof of Theorem 1.1. We summarize the deterministic replacement paths algorithm and outline the proof of Theorem 1.1. First, compute in Õ(m√n) time the set of paths 𝒫 as in Lemma 3.2. Given 𝒫, the deterministic greedy selection algorithm GreedyPivotsSelection (as described in Lemma 1) computes in Õ(n√n) time a set R of O(√n·log n) vertices with the following property: every path of 𝒫 contains at least one of the vertices of R. Theorem 1.1 follows from Lemmas 3, 3.1 and 3.2.

4 Deterministic Distance Sensitivity Oracles - an Overview

In this section we apply our framework from Section 1.3 to the combinatorial distance sensitivity oracles of Weimann and Yuster [39]. A full description of the deterministic combinatorial distance sensitivity oracles is given in Section 6.

Let 0 < α < 1 and f ≥ 1 be two parameters. In [39], Weimann and Yuster considered the following notion of intervals (note that in [39] they use a slightly different parameter; we use a parameter α related to it). They define an interval of a long simple path P as a subpath of P consisting of L consecutive vertices, for a length parameter L that depends on α, so every simple path induces less than n (overlapping) intervals. For every subset F of at most f edges, and for every pair of vertices u, v ∈ V, let P_{u,v,F} be a shortest path from u to v in G ∖ F. The path P_{u,v,F} induces less than n (overlapping) intervals. The total number of possible intervals is therefore less than n times the number of possible queries, as each one of the possible queries corresponds to a shortest path that induces less than n intervals.
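As a small sanity check on the interval count (writing L for the number of consecutive vertices in an interval and h ≤ n for the number of vertices of the path, both as above):

\[
\#\{\text{intervals of a path on } h \text{ vertices}\} \;=\; h - L + 1 \;<\; n .
\]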

Let 𝒥 be defined as the set of all the intervals (subpaths consisting of L consecutive vertices) of all the replacement paths P_{u,v,F} for every u, v ∈ V and every F ⊆ E ∪ V with |F| ≤ f.

Weimann and Yuster apply Lemma 1 to find a set R of vertices that hits, w.h.p., all the intervals of 𝒥. According to the bounds above (𝒥 may contain up to n paths per possible query, each consisting of exactly L consecutive vertices), applying the greedy algorithm of Lemma 1 to obtain the set R deterministically takes time proportional to the total size of 𝒥, which is very inefficient.

In this section we assume that all weights are non-negative (so we can run Dijkstra's algorithm) and that shortest paths are unique; we justify these assumptions in Section 6.4.

4.1 Step 1: the Method of Using Fault-Tolerant Trees to Significantly Reduce the Number of Intervals

In Lemma 4.1 we prove that the set of intervals 𝒥 actually contains far fewer unique intervals than the naive upper bound mentioned above. From Lemmas 4.1 and 1 it follows that GreedyPivotsSelection finds, in time proportional to the total size of these unique intervals, a subset R of vertices that hits all the intervals of 𝒥. In Section 6.3.4 we further reduce the time it takes for the greedy algorithm to compute the set of pivots.

.

In order to prove Lemma 4.1 we describe the fault-tolerant trees data-structure, which is a variant of the trees which appear in Appendix A of [11].

Let be the shortest among the -to- paths in that contain at most edges and let . In other words,