Fully-Dynamic All-Pairs Shortest Paths: Improved Worst-Case Time and Space Bounds

01/29/2020 · by Maximilian Probst Gutenberg, et al.

Given a directed weighted graph G=(V,E) undergoing vertex insertions and deletions, the All-Pairs Shortest Paths (APSP) problem asks to maintain a data structure that processes updates efficiently and returns after each update the distance matrix of the current version of G. In two breakthrough results, Italiano and Demetrescu [STOC '03] presented an algorithm that requires Õ(n^2) amortized update time, and Thorup showed in [STOC '05] that worst-case update time Õ(n^{2+3/4}) can be achieved. In this article, we make substantial progress on the problem. We present the following new results: (1) We present the first deterministic data structure that breaks the Õ(n^{2+3/4}) worst-case update time bound by Thorup which has been standing for almost 15 years. We improve the worst-case update time to Õ(n^{2+5/7}) = Õ(n^{2.71...}) and to Õ(n^{2+3/5}) = Õ(n^{2.6}) for unweighted graphs. (2) We present a simple deterministic algorithm with Õ(n^{2+3/4}) worst-case update time (Õ(n^{2+2/3}) for unweighted graphs), and a simple Las-Vegas algorithm with worst-case update time Õ(n^{2+2/3}) (Õ(n^{2+1/2}) for unweighted graphs) that works against a non-oblivious adversary. Both data structures require space Õ(n^2). These are the first exact dynamic algorithms with truly subcubic update time and space usage. This makes significant progress on an open question posed in multiple articles [COCOON '01, STOC '03, ICALP '04, Encyclopedia of Algorithms '08] and is critical to algorithms in practice [TALG '06] where large space usage is prohibitive. Moreover, they match the worst-case update time of the best previous algorithms, and the second algorithm improves upon a Monte-Carlo algorithm in a weaker adversary model with the same running time [SODA '17].


1 Introduction

The All-Pairs Shortest Paths problem is one of the most fundamental algorithmic problems and is commonly taught in undergraduate courses to every Computer Science student. While static algorithms for the problem have been well understood for several decades, the dynamic versions of the problem have recently received intense attention from the research community. In the dynamic setting, the underlying graph undergoes updates, most commonly edge insertions and/or deletions. Most dynamic All-Pairs Shortest Paths algorithms can further handle vertex insertions (with up to O(n) incident edges) and/or vertex deletions.

The problem.

In this article, we are only concerned with the fully-dynamic All-Pairs Shortest Paths (APSP) problem with worst-case update time, i.e. given a fully-dynamic graph G = (V, E) undergoing vertex insertions and deletions, we want to guarantee a small worst-case update time after each vertex update to recompute the distance matrix of the new graph. This is opposed to the version of the problem that allows for amortized update time. Moreover, we focus on space-efficient data structures and show that our data structures even improve over the most space-efficient APSP algorithms with amortized update time. We further point out that, in the fully-dynamic setting, vertex updates are more general than edge updates, since any edge update can be simulated by a constant number of vertex updates.
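To make the last remark concrete, the following Python sketch simulates a single edge insertion, weight change, or deletion by two vertex updates: the endpoint u is deleted and immediately re-inserted with its updated set of incident edges. The data-structure interface (incident_edges, delete_vertex, insert_vertex) is our own hypothetical illustration, not an interface from the paper.

def simulate_edge_update(apsp, u, v, weight=None):
    """Simulate an update of edge (u, v) on an APSP structure `apsp` that only
    supports vertex updates.  `weight=None` means the edge is deleted.

    Hypothetical interface: `apsp.incident_edges(x)` returns a dict mapping the
    directed edges touching x to their weights; `delete_vertex`/`insert_vertex`
    are the two vertex updates."""
    edges = dict(apsp.incident_edges(u))   # current edges touching u
    if weight is None:
        edges.pop((u, v), None)            # drop the edge to be deleted
    else:
        edges[(u, v)] = weight             # insert the edge or update its weight
    apsp.delete_vertex(u)                  # first vertex update
    apsp.insert_vertex(u, edges)           # second vertex update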

Related Work.

The earliest partially-dynamic algorithm for the All-Pairs Shortest Paths problem is most likely the algorithm by Johnson [johnson1977efficient], which can easily be extended to handle vertex insertions in Õ(n^2) worst-case update time per insertion, given the distance matrix of the current graph. The first fully-dynamic algorithm was presented by King [king1999fully] with amortized update time Õ(n^{2.5}√C) per edge insertion/deletion, where C is the largest edge weight, building upon a classic data structure for decremental Single-Source Shortest Paths by Even and Shiloach [even1979line]. Later, King and Thorup [king2001space] improved the space bound of this approach. In follow-up work by Demetrescu and Italiano [demetrescu2002improved, demetrescu2006fully], the result was generalized to real edge weights with the same bounds. In 2004, Demetrescu and Italiano [demetrescu2004new] presented a new approach to the All-Pairs Shortest Paths problem that requires only Õ(n^2) amortized update time for vertex insertions/deletions. Thorup improved and simplified the approach in [thorup2004fully] and even extended it to handle negative edge weights. Based on this data structure, he further developed the first data structure with Õ(n^{2+3/4}) worst-case update time [thorup2005worst] for vertex insertions/deletions, improving over the trivial Õ(n^3) worst-case update time that can be obtained by recomputation. However, his data structure requires supercubic space.

Abraham, Chechik and Krinninger [AbrahamCK17] showed that using randomization a Monte Carlo data structure can be devised with worst-case update time Õ(n^{2+2/3}). For unweighted graphs, they further obtain worst-case update time Õ(n^{2+1/2}). Both algorithms require large space since they store a list for each pair of vertices. Both algorithms work against an adaptive adversary, that is, the adversary can base the update sequence on the output produced by the algorithm but has no access to the random choices that the algorithm makes. A drawback of the algorithm is that if it outputs an incorrect shortest path (which it does with probability at most 1/n^c for some constant c), the adversary can exploit the revealed information and compromise the data structure until it is recomputed.

We also point out that the problem of Approximate All-Pairs Shortest Paths has been studied in various dynamic graph settings [baswana2002improved, demetrescu2004new, roditty2004dynamic, bernstein2009fully, roditty2012dynamic, abraham2013dynamic, bernstein2016maintaining, henzinger2016dynamic, shiriFocs, brand2019dynamic, probstWulffNilsenDetSSSP]. In the setting of (1+ε)-approximate shortest paths, the best algorithms achieve considerably faster amortized update times. However, the only one of these algorithms that gives a better guarantee than the trivial bound on the worst-case update time is the algorithm in [brand2019dynamic], which achieves a subcubic worst-case update time for directed graphs with positive edge weights and relies on fast matrix multiplication.

Our results.

We present the first deterministic data structure that breaks Thorup's longstanding bound of Õ(n^{2+3/4}) worst-case update time.

Theorem 1.1.

Let G = (V, E, w) be an n-vertex directed edge-weighted graph undergoing vertex insertions and deletions. Then there exists a deterministic data structure which can maintain the distances in G between all pairs of vertices with worst-case update time Õ(n^{2+5/7}). If the graph is unweighted, the running time can be improved to Õ(n^{2+3/5}).

Further, we present the first algorithm for the fully-dynamic All-Pairs Shortest Paths problem (even among algorithms with amortized update time) in weighted graphs that obtains truly subcubic update time and space usage at the same time (for small edge weights, the algorithm of [demetrescu2006fully] gives subcubic update time and space; however, as pointed out in [demetrescu2006experimental], real-world graphs often have large edge weights, e.g. the internet graph had a very large maximum edge weight in 2006). Moreover, this is also the first algorithm that breaks the space/update-time product achieved by previous algorithms, which stood up to this article even for unweighted, undirected graphs. We hope that this gives new motivation to study amortized fully-dynamic algorithms that achieve Õ(n^2) update time and space, which is a central open question in the area, posed in [demetrescu2004new, thorup2004fully, demetrescu2006experimental, Italiano2008], and has practical importance.

Theorem 1.2.

Let G = (V, E, w) be an n-vertex directed edge-weighted graph undergoing vertex insertions and deletions. Then there exists a deterministic data structure which can maintain the distances in G between all pairs of vertices with worst-case update time Õ(n^{2+3/4}) using space Õ(n^2). If the graph is unweighted, the running time can be improved to Õ(n^{2+2/3}).

Finally, we present a data structure that is randomized and matches the update times achieved in [AbrahamCK17] up to polylogarithmic factors. However, their data structure is Monte-Carlo whilst ours is Las-Vegas, our data structure uses only Õ(n^2) space compared to their considerably larger space requirement, and ours is slightly more robust: the data structure in [AbrahamCK17] works against an adaptive adversary, and therefore the adversary can base its updates on the output of the algorithm, whilst our algorithm works against a non-oblivious adversary, that is, the adversary also has access to the random bits used throughout the execution of the algorithm (the former model assumes, for example, that the adversary cannot use information about the running time of the algorithm during each update, whilst we do not require this assumption).

Theorem 1.3.

Let G = (V, E, w) be an n-vertex directed edge-weighted graph undergoing vertex insertions and deletions. Then, there exists a Las-Vegas data structure which can maintain the distances in G between all pairs of vertices with update time Õ(n^{2+2/3}) w.h.p. using space Õ(n^2), working against a non-oblivious adversary. If the graph is unweighted, the running time can be improved to Õ(n^{2+1/2}).

Our Techniques.

We focus on the decremental problem, which we then generalize to the fully-dynamic setting using Johnson's algorithm. The most crucial ingredient of our new decremental algorithms is a new way to use congestion: for each shortest path π_{u,v} from u to v, each vertex on the shortest path is assigned a congestion value that relates to the cost induced by a deletion of such a vertex. If a vertex participates in many shortest paths, its deletion is expensive since we need to restore all shortest paths in which it participated. Thus, if the congestion of a vertex v accumulated during some shortest path computations is too large, we simply remove the vertex from the graph and continue our shortest path computations on the graph G ∖ {v}. We defer handling the vertices of high congestion to a later stage and prepare for their deletion more carefully. This differs significantly from previous approaches that compute all paths in a specific order to avoid high congestion. Our new approach is simpler, more flexible and can be used to avoid vertices even at lower congestion thresholds.

The second technique we introduce is to use separators to recompute shortest paths after a vertex deletion. This allows us to speed up the computation since we can check fewer potential shortest paths. Since long paths have better separators, we can reduce the congestion induced by these paths and therefore reduce the overall amount of congestion on all vertices.

Once we complete our shortest path computations, we obtain the set of highly congested vertices and handle them using a different approach, presented by Abraham, Chechik and Krinninger [AbrahamCK17], that deterministically maintains the shortest paths through these vertices. These are exactly the shortest paths that we might have missed in the former step when we removed congested vertices. Thus, taking for each pair the path of smaller weight, we obtain the real shortest paths in G.

Finally, we present a randomized technique to layer our approach where we use a low congestion threshold initially to identify uncritical vertices and then obtain with each level in the hierarchy a smaller set of increasingly critical vertices that require more shortest path computations on deletion. Since the sets of critical vertices are decreasing in size, we can afford to invest more update time in the maintenance of shortest paths through these vertices.

2 Preliminaries

We denote by G = (V, E, w) the input digraph, where w is the weight function mapping each edge to a number in the reals, and we define n = |V| and m = |E|. In this article, we write H ⊆ G to refer to H being a vertex-induced subgraph of G, i.e. H = G[V'] for some V' ⊆ V. We also slightly abuse notation and write G ∖ V' for V' ⊆ V to denote G[V ∖ V']. We refer to the graph G with all edge directions reversed as the reversed graph of G.

The weight of a path in an edge-weighted graph is the sum of the weights of its edges. We also let w denote the weight function that maps each path in G to its weight. We use ∅ to denote the empty path, of weight ∞. Given two paths P_{u,v} from u to v and P_{v,z} from v to z in G, we denote by P_{u,v} ⊕ P_{v,z} the concatenated path from u to z.

Let P be a path starting in vertex u and ending in vertex v. Then P is a shortest path in G if its sum of edge weights is minimized over all paths from u to v in G. We denote the weight of a shortest path from u to v in G by dist_G(u, v).

We say P is the shortest path from u to v through S ⊆ V if P is the path of minimum weight from u to v that contains a vertex in S. We further say a path has hop h, or is an h-hop-restricted path in G, if it consists of at most h edges. We denote by dist^h_G(u, v) the weight of the h-hop-restricted shortest path from u to v. Finally, we define the notion of an improving shortest path in a subgraph G' ⊆ G with regard to G to be a path in G' of weight at most dist_G(u, v). We often combine these notions, saying, for example, that P is an h-hop-improving shortest path through S in G' with respect to G to refer to a path that is contained in G' and has weight at most the weight of the shortest path between u and v in G that has hop h and contains a vertex in S.
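As a concrete reference for the hop-restricted notions above, the following sketch (our own illustration) computes the h-hop-restricted distances from a single source s with h rounds of Bellman-Ford relaxations; freezing the previous round guarantees that every estimate is witnessed by a path with at most h edges.

import math

def hop_restricted_distances(n, edges, s, h):
    """dist[v] = weight of a shortest path from s to v using at most h edges.
    `edges` is a list of (u, v, w) triples over vertices 0..n-1."""
    dist = [math.inf] * n
    dist[s] = 0.0
    for _ in range(h):                      # one round per allowed hop
        new_dist = dist[:]                  # freeze the previous round
        for u, v, w in edges:
            if dist[u] + w < new_dist[v]:
                new_dist[v] = dist[u] + w
        dist = new_dist
    return dist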

In this paper, we often use as a black box a result by Zwick [Zwick02] that extends h-hop-improving shortest paths in G to improving shortest paths. Since the lemma is only implicit in [Zwick02], we provide an implementation of the algorithm and a proof of correctness in Appendix A.

Lemma 2.1 (see [Zwick02, AbrahamCK17]).

Given a collection of h-hop-improving shortest paths for all pairs of vertices in V × V, there exists a procedure that returns improving shortest paths for all pairs in V × V in time Õ(n^3/h).
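The following randomized sketch conveys the idea behind Lemma 2.1 on the level of distances (the actual procedure in Appendix A is deterministic and also maintains the paths; the sampling probability and growth factor below are our own illustrative choices). Starting from estimates that are correct for all pairs whose shortest path has at most h hops, each round relaxes every pair through a random bridge set of Õ(n/k) vertices, which with high probability hits the middle part of every shortest path with slightly more than k hops; hence O(log n) rounds and Õ(n^3/h) total work suffice.

import math
import random

def extend_hop_restricted(dist_h, h, seed=0):
    """dist_h[u][v]: weight of an h-hop-improving shortest path (correct for
    all pairs whose shortest path has at most h hops).  Returns estimates that
    are correct for all pairs, w.h.p.  Randomized sketch of Lemma 2.1."""
    rng = random.Random(seed)
    n = len(dist_h)
    d = [row[:] for row in dist_h]
    k = h
    while k < n:
        # A random set of ~ (n/k) log n vertices hits the middle k/2 vertices
        # of every shortest path with between k and 1.5k hops, w.h.p.
        p = min(1.0, 12.0 * math.log(n + 1) / k)
        bridges = [b for b in range(n) if rng.random() < p]
        for u in range(n):
            du = d[u]
            for b in bridges:
                dub = du[b]
                if dub == math.inf:
                    continue
                db = d[b]
                for v in range(n):
                    if dub + db[v] < du[v]:
                        du[v] = dub + db[v]   # relax u -> b -> v
        k += max(k // 2, 1)                   # now correct up to ~1.5k hops
    return d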

3 The Framework

In this section, we describe the fundamental approach that we use for our data structures. We then refine this approach in the next section to obtain our new data structures. We start by stating a classic reduction that was used in all existing approaches.

Lemma 3.1 (see [henzinger2001maintaining, thorup2005worst, AbrahamCK17]).

Given a data structure on G that supports a batch deletion of up to Δ vertices D ⊆ V, such that the data structure computes for each pair u, v ∈ V a shortest path π_{u,v} in G ∖ D and can return the first k edges of π_{u,v} in Õ(k) time. Then, if the preprocessing time is T_pre and the batch deletion worst-case time is T_del, there exists a fully dynamic APSP algorithm with Õ(T_pre/Δ + T_del + Δ·n^2) worst-case update time.

This lemma reduces the problem to finding a data structure with good preprocessing time that can handle batch deletions. To get some intuition on how the reduction stated above works, note that vertex insertions can be solved very efficiently.

Lemma 3.2 (implied by Johnson’s algorithm, see for example [cormen2009introduction]).

Given a graph G = (V ∪ B, E), where B is of size at most Δ, and given all-pairs shortest paths of G[V], we can compute all-pairs shortest paths in G in time Õ(Δ·n^2).
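The following sketch (our own code, assuming non-negative edge weights) illustrates the Johnson-style insertion of Lemma 3.2: the known distances of G[V] are added as a clique of shortcut edges, one Dijkstra from and one to each inserted vertex recovers its distances in the full graph, and every pair is then relaxed through the inserted vertices, giving Õ(|B|·n^2) time on dense graphs.

import heapq
import math

def dijkstra(n, adj, s):
    """Textbook Dijkstra on an adjacency list with non-negative weights."""
    dist = [math.inf] * n
    dist[s] = 0.0
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def insert_batch(dist_old, edges, n_total, new_vertices):
    """dist_old: n_total x n_total matrix, correct on the old vertices
    (rows/columns of new vertices are +inf).  edges: all weighted edges of the
    new graph, including those incident to the inserted vertices.
    Returns the full distance matrix (sketch of Lemma 3.2)."""
    new_set = set(new_vertices)
    old = [v for v in range(n_total) if v not in new_set]
    # The old distances become a clique of shortcut edges.
    shortcuts = [(u, v, dist_old[u][v]) for u in old for v in old
                 if u != v and dist_old[u][v] < math.inf]
    adj = [[] for _ in range(n_total)]
    radj = [[] for _ in range(n_total)]
    for u, v, w in list(edges) + shortcuts:
        adj[u].append((v, w))
        radj[v].append((u, w))
    dist = [row[:] for row in dist_old]
    for b in new_vertices:
        from_b = dijkstra(n_total, adj, b)        # distances b -> *
        to_b = dijkstra(n_total, radj, b)         # distances * -> b
        for u in range(n_total):
            for v in range(n_total):
                cand = to_b[u] + from_b[v]        # best u -> b -> v path
                if cand < dist[u][v]:
                    dist[u][v] = cand
    return dist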

Therefore, the algorithm can reduce the problem to a decremental problem and, using the shortest paths maintained decrementally, insert the batch B of vertices inserted so far after each update. When B becomes larger than Δ (i.e. after at least Δ updates), we recompute the decremental data structure. Using standard deamortization techniques for rebuilding the data structure, the preprocessing time can be split into small chunks that are processed at each update, and therefore we obtain a worst-case guarantee for each update.
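Schematically, the reduction can be organized as below (our own sketch; the callables build_core, insert_batch and apply_updates stand in for the batch-deletion structure of Section 3.1, the routine of Lemma 3.2, and a plain graph update, and the real reduction additionally spreads the rebuild cost over the phase to turn this into a worst-case bound).

class FullyDynamicAPSP:
    """Sketch of the reduction behind Lemma 3.1: deletions are forwarded to a
    decremental core, inserted vertices are buffered and re-added via a
    Lemma 3.2-style routine, and the core is rebuilt after delta updates."""

    def __init__(self, graph, delta, build_core, insert_batch, apply_updates):
        self.delta = delta
        self.build_core = build_core        # graph -> batch-deletion structure
        self.insert_batch = insert_batch    # (distances, inserted) -> distances
        self.apply_updates = apply_updates  # (graph, deleted, inserted) -> graph
        self._start_phase(graph)

    def _start_phase(self, graph):
        self.graph = graph
        self.core = self.build_core(graph)  # preprocessing of Section 3.1
        self.deleted = []                   # deletions buffered in this phase
        self.inserted = {}                  # insertions buffered in this phase

    def delete_vertex(self, v):
        self.deleted.append(v)
        return self._answer()

    def insert_vertex(self, v, incident_edges):
        self.inserted[v] = incident_edges
        return self._answer()

    def _answer(self):
        dist = self.core.batch_delete(self.deleted)      # distances avoiding D
        dist = self.insert_batch(dist, self.inserted)    # re-add buffered vertices
        if len(self.deleted) + len(self.inserted) >= self.delta:
            self._start_phase(self.apply_updates(self.graph, self.deleted, self.inserted))
        return dist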

3.1 A Batch Deletion Data Structure with Efficient Preprocessing Time

In the following, we present a procedure, given in Algorithm 1, that is invoked with a graph G and integers h and τ and computes paths π^{h_i}_{u,v} for each pair (u, v) ∈ V × V and every level i with 0 ≤ i ≤ ℓ. Our goal is to use these paths in the batch deletion procedure to recompute all-pairs shortest paths.

Input: A graph G, a positive integer h determining the maximum hop and an integer τ regulating the congestion.
Output: A tuple with the properties of Lemma 3.4.
S ← ∅;
foreach v ∈ V do c(v) ← 0;
foreach (u, v) ∈ V × V and every level i do π^{h_i}_{u,v} ← ∅;
Q ← V;
while Q ≠ ∅ do
      Remove an arbitrary root r from Q;
      foreach level i ∈ {0, 1, …, ℓ} do
            Compute h_i-hop-restricted shortest paths π^{h_i}_{r,v} from r to every v ∈ V ∖ S in G ∖ S;
            foreach v ∈ V ∖ S and every vertex x on π^{h_i}_{r,v} do
                  c(x) ← c(x) + 1/h_i;
            S ← S ∪ { x ∈ V : c(x) > τ };
Algorithm 1

This procedure maintains congestion values c(v) for each v ∈ V. These counters are initially 0. Let 1 = h_0 < h_1 < ⋯ < h_ℓ = h be a geometrically increasing sequence of hop bounds with a constant growth factor smaller than 2 (so ℓ = O(log h)), fixed throughout the rest of the article. For each level i, h_i-hop-restricted shortest paths in G ∖ S are computed from each root r to all v ∈ V, where the roots are considered in an arbitrary order.

For each v ∈ V, whenever an h_i-hop-restricted path is found that passes through v, its congestion c(v) is increased by 1/h_i. Hence, paths of long hop congest the vertices on them less than small-hop paths; this is key to getting our update time improvement, as it helps us keep the amount of congestion caused by a path essentially the same for any hop (as opposed to existing techniques, whose congestion cost grows with the hop of the path). Once a congestion value increases beyond the threshold value τ, the vertex is removed from the vertex set. More precisely, a growing set S keeps track of the vertices whose congestion value is above τ, and all hop-restricted paths are computed in the shrinking graph G ∖ S.
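For concreteness, the following Python sketch implements the preprocessing loop just described (our own code; the dense edge-list representation is an illustrative choice, the hop levels grow by a factor of roughly 3/2 capped at h, and the 1/h_i charging follows the description above).

import math

def hop_restricted_tree(n, edges, r, h, banned):
    """Parent pointers of h-hop-restricted shortest paths from r in G \\ banned."""
    dist, parent = [math.inf] * n, [None] * n
    dist[r] = 0.0
    for _ in range(h):
        new_dist, new_parent = dist[:], parent[:]
        for u, v, w in edges:
            if u in banned or v in banned:
                continue
            if dist[u] + w < new_dist[v]:
                new_dist[v], new_parent[v] = dist[u] + w, u
        dist, parent = new_dist, new_parent
    return {v: parent[v] for v in range(n) if v != r and dist[v] < math.inf}

def compute_paths(n, edges, h, tau):
    """Sketch of the preprocessing of Section 3.1: for every root r and hop
    level h_i, compute h_i-hop-restricted shortest paths in G \\ S, charge
    1/h_i congestion to every vertex on each returned path, and move a vertex
    into S as soon as its congestion exceeds tau."""
    levels, b = [], 1
    while b < h:                                      # hop levels 1 = h_0 < ... <= h
        levels.append(b)
        b = min(h, max(b + 1, (3 * b) // 2))          # grow by a factor of ~3/2
    levels.append(h)
    congestion, S, paths = [0.0] * n, set(), {}
    for r in range(n):                                # roots in arbitrary order
        for i, hi in enumerate(levels):
            parent = hop_restricted_tree(n, edges, r, hi, S)
            for v in parent:
                path, x = [v], v
                while x in parent:                    # unwind the r -> v path
                    x = parent[x]
                    path.append(x)
                paths[(r, v, i)] = path[::-1]         # vertices r, ..., v
                for y in path:
                    congestion[y] += 1.0 / hi         # long-hop paths charge less
            S |= {y for y in range(n) if congestion[y] > tau}
    return paths, congestion, S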

Lemma 3.3.

The procedure given in Algorithm 1 can be implemented to run in Õ(m·n·h) time.

Proof.

In each iteration of the foreach-loop over the levels i, computing h_i-hop-restricted shortest paths from the source r to all v ∈ V ∖ S can be done with h_i iterations of Bellman-Ford from r in G ∖ S in O(m·h_i) time. It is straightforward to see that this dominates the cost incurred by the congestion accounting. From this and from a geometric sums argument, it follows that the total running time over all sources is Õ(m·n·h). The lemma now follows. ∎

To bound the time for updates in the next subsection, we need the following lemma.

Lemma 3.4.

At termination of Algorithm 1 invoked with parameters (G, h, τ), the algorithm ensures that

  1. for every v ∈ V ∖ S: c(v) ≤ τ + Õ(n),

  2. Σ_{v ∈ V} c(v) = Õ(n^2), and

  3. |S| = Õ(n^2/τ).

  4. Each computed path π^{h_i}_{r,v} is an h_i-hop-improving shortest path in G ∖ S with regard to G ∖ S.

Proof.

We first observe that we maintain the loop invariant for the while-loop of Algorithm 1 that the congestion of any vertex not in S is at most τ. This is true since initially the congestion of all vertices is 0 and at the end of each iteration, we explicitly remove vertices with congestion larger than τ. To prove property 1, it therefore suffices to show that during an iteration of the while-loop, the congestion of any vertex is increased by at most Õ(n). To see this, observe that there are at most (ℓ+1)·n = Õ(n) paths under consideration in each iteration of the while-loop. Every vertex has its congestion increased by at most 1 for each such path it belongs to. Therefore, we add at most Õ(n) congestion to any vertex during an iteration of the while-loop.

To see property 2, define Φ = Σ_{v ∈ V} c(v). Initially, Φ = 0. Observe that during an iteration of the while-loop, we have at most (ℓ+1)·n paths of hop up to h. At most h_i + 1 vertices increase their congestion due to a path of hop h_i, each by 1/h_i, and so each such path increases Φ by at most 2. Thus each while-loop iteration adds at most Õ(n) to Φ, and since we execute the while-loop exactly n times, the final value of Φ is Õ(n^2).

Property 3 follows since each vertex in S has congestion at least τ, implying that there can be at most Φ/τ = Õ(n^2/τ) vertices in S. Property 4 follows from the analysis of Bellman-Ford. ∎

The space-efficiency is straightforward to analyze: since each pair requires one path to be stored for each level, storing the shortest paths explicitly requires space Õ(n^2·h). We defer the description of a more space-efficient data structure with the same guarantees until Section 4.2.

3.2 Handling Deletions

In this section, we use the data structure computed by Algorithm 1, with S being again the set of congested vertices, and show how to use this data structure to handle a batch D of at most Δ deletions, i.e. we show how to efficiently compute all-pairs shortest paths in G ∖ D. Our update procedure proceeds in multiple phases i = 0, 1, …, ℓ. Throughout the procedure, we enforce the following invariant.

Invariant 3.5.

After the execution of the i-th phase, each path π^{h_i}_{u,v} is an h_i-hop-improving shortest path in G ∖ (D ∪ S) with regard to G ∖ (D ∪ S), for every pair u, v ∈ V ∖ D.

Before we describe how to enforce the invariant, observe that the invariant implies that after we have finished phase ℓ, we have for each pair an h-hop-improving shortest path in G ∖ (D ∪ S), which can then be extended, using the procedure described in Lemma 2.1 together with the handling of the vertices in S described below, to give all-pairs shortest paths in G ∖ D, as required.

foreach level i and every path π^{h_i}_{u,v} with u ∈ D or v ∈ D do
      π^{h_i}_{u,v} ← ∅;
for i ← 1 to ℓ do
      foreach u ∈ V ∖ D do
            if h_{i-1} is sufficiently large then
                  Compute an index j with ⌈h_{i-1}/2⌉ ≤ j ≤ h_{i-1} that minimizes the size of the corresponding layer of the stored paths π^{h_{i-1}}_{u,·}, and let Sep_u be that layer;
            else
                  Sep_u ← V ∖ D;
      foreach path π^{h_i}_{u,v} that contains a vertex in D do
            π^{h_i}_{u,v} ← a path of minimum weight among π^{h_{i-1}}_{u,v} and π^{h_{i-1}}_{u,x} ⊕ π^{h_{i-1}}_{x,v} for x ∈ Sep_u;
return (π^{h_ℓ}_{u,v})_{u,v ∈ V}
Algorithm 2

Let us now describe how to implement the execution of a phase, which is also depicted in Algorithm 2. Initially, we change all precomputed paths π^{h_i}_{u,v} with u or v in D to the empty path ∅. Clearly, this enforces Invariant 3.5 for phase 0 and can be implemented in time linear in the number of affected paths.

In the i-th phase (for i ≥ 1), we start by computing for each vertex u ∈ V ∖ D a hitting set Sep_u of all h_{i-1}-hop-improving shortest paths starting in u. We take the separator set such that, in particular, each (real) shortest path from u of hop at least h_{i-1} contains at least one vertex of Sep_u that is at hop distance at most h_{i-1} from u. Here the layer is chosen to lie between the ⌈h_{i-1}/2⌉-th and the h_{i-1}-th vertex on each path (with an exception for very small h_{i-1}, where we choose the separator to be the entire vertex set). Since there are Ω(h_{i-1}) layers to choose from, and the layers partition the vertex set V, we obtain that Sep_u is of size O(n/h_{i-1}) by the pigeonhole principle. Finally, to fix any precomputed h_i-hop-improving shortest path π^{h_i}_{u,v} that is no longer contained in G ∖ D, we check the paths π^{h_{i-1}}_{u,x} ⊕ π^{h_{i-1}}_{x,v} for each x ∈ Sep_u and take a path of minimal weight. We point out that this path is either the concatenation of two h_{i-1}-hop-improving shortest paths, or the path π^{h_{i-1}}_{u,v}. This completes the description of our update algorithm.
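The sketch below (our own code, compatible with the path dictionary produced by the preprocessing sketch in Section 3.1) shows the core of one phase i ≥ 1: the separator computation is abstracted into a callback separator_of(u, i) returning the small hitting set described above, and every stored level-i path that meets the deleted set D is replaced by the cheapest of the level-(i−1) path and the concatenations through the separator.

import math

def path_weight(path, weight):
    """Weight of a path given as a vertex list; weight[(a, b)] is an edge weight."""
    if path is None:
        return math.inf
    return sum(weight[(a, b)] for a, b in zip(path, path[1:]))

def repair_phase(pairs, paths, weight, D, i, separator_of):
    """One phase of the batch-deletion routine (sketch).  paths[(u, v, j)] is
    the stored h_j-hop path from u to v (or absent), already repaired for all
    levels below i; separator_of(u, i) is the hitting set Sep_u of this phase."""
    D = set(D)
    for (u, v) in pairs:
        old = paths.get((u, v, i))
        if old is None or not D.intersection(old):
            continue                                  # the stored path survived
        best = paths.get((u, v, i - 1))
        best_w = path_weight(best, weight)
        for x in separator_of(u, i):
            left = paths.get((u, x, i - 1))
            right = paths.get((x, v, i - 1))
            w = path_weight(left, weight) + path_weight(right, weight)
            if w < best_w:                            # cheaper detour through x
                best, best_w = left[:-1] + right, w
        paths[(u, v, i)] = best
    return paths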

Lemma 3.6.

Invariant 3.5 is enforced throughout the entire execution of the procedure depicted in Algorithm 2.

Proof.

The base case i = 0 is proved by our observation above. Let us therefore take the inductive step for some i ≥ 1 and focus on some path π^{h_i}_{u,v}. Clearly, if π^{h_i}_{u,v} contains no vertex in D, it is still h_i-hop-improving in G ∖ (D ∪ S) and therefore no action is required. Otherwise, since each path π^{h_{i-1}}_{u,x} is h_{i-1}-hop-improving by the induction hypothesis, we have that the separator set Sep_u contains a vertex between the ⌈h_{i-1}/2⌉-th and the h_{i-1}-th vertex on any shortest path from u of hop at least h_{i-1}. It follows that the h_i-hop-restricted shortest path from u to v in G ∖ (D ∪ S) is either of hop at most h_{i-1}, in which case the path π^{h_{i-1}}_{u,v} is also an h_i-hop-improving shortest path, or the concatenation of minimal weight of an h_{i-1}-hop-improving shortest path from u to a hitting set vertex x ∈ Sep_u and of one from x to v is h_i-hop-improving. The lemma follows. ∎

Lemma 3.7.

Given a data structure that satisfies the properties listed in Lemma 3.4 with congestion threshold τ and a set of congested vertices S, there exists an algorithm that computes all-pairs shortest paths in G ∖ D and returns the corresponding distance matrix in time Õ(n^3/h + n^2·h + Δ·n·(τ + n) + |S|·n^2).

Proof.

By Invariant 3.5, after the ℓ-th phase of Algorithm 2 all paths π^{h_ℓ}_{u,v} are h-hop-improving shortest paths in G ∖ (D ∪ S) with regard to G ∖ (D ∪ S). Thus, the collection of paths contains improving paths for all h-hop-restricted shortest paths that contain no vertex in D ∪ S. It is straightforward to adapt the procedure described in Lemma 3.2 to return in time Õ(|S|·n^2) a collection of h-hop-improving shortest paths in G ∖ D. Then Lemma 2.1 can be applied to recover in time Õ(n^3/h) the shortest paths in G ∖ D, as required.

It remains to analyze the running time of Algorithm 2. We note that each phase requires us to compute a separator for each vertex in V. Since returning the first k edges of each path requires O(k) time, as we represent paths explicitly, the time required to compute a single separator in phase i is at most O(n·h_{i-1}). Thus, the overall time to compute all separators can be bounded by O(n^2·h) (using a geometric sum argument over the different phases).

To bound the time spent in the repair loop of Algorithm 2, observe that we iterate only over paths that contain a vertex in D, which can be detected in linear time. Observe that if a vertex in D is on a path π^{h_i}_{u,v}, then this path contributed 1/h_i credits to the congestion of that vertex in the preprocessing procedure. Since the separator of u at phase i has size O(n/h_{i-1}) by the arguments mentioned above, the iteration to recover the path π^{h_i}_{u,v} requires O(n/h_{i-1}) time (checking the weight of each candidate path and the concatenation can both be implemented in constant time). Since each vertex has total congestion at most τ + Õ(n) by Lemma 3.4, we can bound the total running time of the repair loop by Õ(Δ·n·(τ + n)). ∎

Choosing Δ, h and the congestion threshold τ in Lemma 3.1 so as to balance the terms above, we obtain the following corollary.

Corollary 3.8.

Let G be an n-vertex directed edge-weighted graph undergoing vertex insertions and deletions. Then there exists a deterministic data structure which can maintain the distances in G between all pairs of vertices with worst-case update time Õ(n^{2+3/4}).

3.3 Batch Deletion Data Structure for Unweighted Graphs

We point out that for unweighted graphs, we can replace the Bellman-Ford procedure by a simple Breadth-First-Search procedure (see for example [cormen2009introduction]), which improves the running time per source and level from O(m·h_i) to O(m + n). This was also exploited before in [AbrahamCK17].
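A minimal sketch of the unweighted replacement (our own code): in an unweighted graph the h-hop-restricted distance from a source is just its BFS distance capped at depth h, so a single O(m + n) traversal replaces the h rounds of Bellman-Ford.

from collections import deque

def bfs_hop_restricted(adj, s, h):
    """h-hop-restricted distances from s in an unweighted digraph.
    `adj` maps each vertex to its list of out-neighbours."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if dist[u] == h:            # do not explore beyond h hops
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist                     # missing vertices are at hop-distance > h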

Corollary 3.9.

Let G be an n-vertex directed unweighted graph undergoing vertex insertions and deletions. Then there exists a deterministic data structure which can maintain the distances in G between all pairs of vertices with worst-case update time Õ(n^{2+2/3}).

In the following sections, we will not explicitly point out that the Bellman-Ford procedure can be replaced by BFS but simply state the improved bound.

4 Efficient Data Structures

We now describe how to build on the general strategy of the previous section and present the changes necessary to obtain our most efficient data structures.

4.1 A Faster Deterministic Algorithm

To obtain a faster algorithm, we combine our framework with the following result by Abraham, Chechik and Krinninger [AbrahamCK17]. It is not explicitly stated in their paper but follows immediately by replacing their randomly sampled vertex set by an arbitrary vertex set. Informally, the data structure takes a decremental graph G and a set of vertices C and maintains, for all pairs of vertices u, v ∈ V, the shortest path from u to v through some vertex in C.

Lemma 4.1.

Given an edge-weighted directed graph G = (V, E, w), a set C ⊆ V and a hop bound h, there exists a deterministic data structure that supports the following operations:

  • Initialization: initializes the data structure with the given parameters and returns a pointer to the data structure.

  • Batch deletion of a vertex set D: assuming |D| ≤ Δ, returns for each pair u, v ∈ V an h-hop-improving shortest path from u to v through some vertex in C in G ∖ D (with respect to G ∖ D).

The initialization and each batch deletion run within the time bounds established in [AbrahamCK17].

It is now straightforward to obtain a new batch deletion data structure that combines these two data structures. Intuitively, we exploit the strengths of both algorithms by setting the τ-threshold of the algorithm introduced in the previous section slightly lower, which increases the size of the set S of congested vertices but improves the running time of the data structure that maintains the shortest paths avoiding S. Since S is precomputed, we then use the data structure described above to obtain the shortest paths through some vertex in S. Let us now give a more formal description.

To initialize the new data structure, we invoke Algorithm 1 with parameters h and τ to be fixed later. The algorithm gives a data structure and a set S whose size is bounded as in Lemma 3.4. We then handle updates as follows: at initialization, and again after every batch of updates of a certain size, we compute a second data structure by invoking the preprocessing algorithm of Lemma 4.1 with the set S and an appropriate hop bound. We later choose the congestion threshold differently than in the last section, which implies that we can afford a larger set S, and we take care of the shortest paths through S by recomputing the data structure of Lemma 4.1 more often. Since its preprocessing time is smaller, this can be balanced efficiently such that both data structures have small batch update time at all times.

For each update, we let D be the batch of deletions since Algorithm 1 was last executed and D' the batch of deletions since the data structure of Lemma 4.1 was last initialized. We then query both data structures with their respective batches and combine the results in a straightforward manner, taking for each pair the path of smaller weight. This concludes the algorithm.
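The combination can be summarized as follows (our own sketch with hypothetical interfaces: core is the congestion-based structure of Section 3.1, through_S is the structure of Lemma 4.1 maintaining paths through S, and extend is the procedure of Lemma 2.1).

def combined_batch_delete(core, through_S, extend, D_core, D_S):
    """Sketch of the combined update of Section 4.1: distances avoiding S and
    distances through S are taken pairwise-minimum and then extended to exact
    all-pairs distances."""
    d_avoid = core.batch_delete(D_core)        # paths that avoid S
    d_through = through_S.batch_delete(D_S)    # paths through a vertex of S
    n = len(d_avoid)
    d = [[min(d_avoid[u][v], d_through[u][v]) for v in range(n)] for u in range(n)]
    return extend(d)                           # Lemma 2.1 / Zwick extension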

Using the reduction of Lemma 3.1, and balancing the preprocessing and batch-deletion times of the two data structures by an appropriate choice of the parameters, we obtain the following corollary.

Corollary 4.2.

Let G be an n-vertex directed edge-weighted graph undergoing vertex insertions and deletions. Then there exists a deterministic data structure which can maintain the distances in G between all pairs of vertices with worst-case update time Õ(n^{2+5/7}). If the graph is unweighted, the running time can be improved to Õ(n^{2+3/5}).

4.2 A Simple and Space-Efficient Deterministic Data Structure

In order to reduce space, we replace the Bellman-Ford procedure used in the preprocessing of Algorithm 1 by the procedure depicted in Algorithm 3. Unlike the Bellman-Ford algorithm, our algorithm does not return the hop-restricted shortest paths themselves but instead returns improving shortest paths of hop O(h). Using this bound on the hop of each returned improving shortest path, it can be verified that the proof of Lemma 3.4 still holds under these conditions. Moreover, the information computed by the new procedure can be stored efficiently in Õ(n) space per source.

Input: A graph G, a source r ∈ V and an integer h.
Output: A set of h-hop-improving shortest paths from r, each of hop O(h), that can be represented in Õ(n) space.
G' ← G;
for i ← ℓ down to 0 do
      Compute h_i-hop-restricted shortest paths π_{r,v} from r in G';
      if h_i is sufficiently large then
            Compute a hop index j strictly between ⌈h_{i-1}/2⌉ and h_{i-1} that minimizes the size of the layer Sep_i ← { v : π_{r,v} has hop exactly j };
      else
            Sep_i ← V;
      foreach v ∈ Sep_i do
            Store the path π_{r,v};
            Add to G' a shortcut edge (r, v) of weight w(π_{r,v});
Algorithm 3

The algorithm runs in ℓ + 1 iterations, executed by the for-loop, where the index i is initially set to ℓ and decreased after every iteration until it reaches 0. In each iteration, we compute the h_i-hop-restricted shortest paths on the current graph G'. For the sake of the analysis, we let G_i be the version of G' at the start of iteration i. After computing the paths on G_i, we compute a separator set Sep_i that contains all vertices whose computed shortest path from r has a hop value that is taken to be strictly between ⌈h_{i-1}/2⌉ and h_{i-1} (except for very small i, where we choose the separator set to be the entire vertex set).

In the foreach-loop of Algorithm 3, we store, for every vertex v in the hitting set Sep_i, the h_i-hop-restricted shortest path π_{r,v}. If the first edge on the path represents a subpath from a higher level, we add a pointer to that subpath. Then, we update the graph by setting the weight of the shortcut edge from r to v to the weight of π_{r,v}. Observe that after the foreach-loop has finished, all paths π_{r,v}, for any v, can be mapped to a path in the updated graph of the same weight and of hop at most h_{i-1}, and observe that this updated graph is the graph G_{i-1}.

Finally, we store the paths computed in the last iteration by a pointer into this representation. Observe that each path can then be unpacked to an h-hop-improving shortest path in G by repeatedly replacing the first edge of a path by the corresponding subpath on a higher level.

Lemma 4.3.

The procedure of Algorithm 3 computes a collection of h-hop-improving shortest paths from the source r, where each path has hop O(h), and provides an Õ(n)-sized data structure such that:

  1. Each path can be extracted from the data structure in time proportional to its hop, and

  2. for every v ∈ V, we can identify all stored paths that contain v in time linear in their combined size.

The procedure takes time O(m·h).

Proof.

We argued above that every h-hop-restricted shortest path in G can be mapped to a path of the same weight and of hop at most h_i in G_i. Thus, computing the h_i-hop-restricted shortest paths using Bellman-Ford on G_i returns h-hop-improving shortest paths. By a simple inductive argument, it follows that every stored shortest path π_{r,v}, for any v, is h-hop-improving with regard to G.

To see that every path is of hop O(h), observe that on level i we add at most h_i new edges to the path, since the only subpaths that we replace by shortcuts are prefixes starting in r. Thus the final path corresponds to a path of hop O(Σ_i h_i) = O(h).

To see that the data structure requires only Õ(n) space, observe that at iteration i, each path computed on G_i consists of at most h_i edges that need to be stored explicitly and a pointer to a higher-level subpath corresponding to the first edge of the path. Since |Sep_i| = O(n/h_i), and we only store paths to the vertices in Sep_i, we therefore only require O(n) space per level and Õ(n) space in total.

We can further implement the pointer for the subpath corresponding to the first edge of a path so that it points to the next higher level where the subpath is non-trivial (i.e. not itself an edge). Thus, following a pointer, we are guaranteed to add at least one additional edge to the path, and therefore we can extract a path in time proportional to its hop. Making the pointers of the structure bidirectional, we can also find all paths containing a given vertex in linear time. The overall running time is dominated by running Bellman-Ford, which takes O(m·h) time. ∎

The lemma gives a straightforward way to verify that Lemmas 3.3 and 3.4 still hold when the relaxed Bellman-Ford procedure is used. The corollary below follows.

Corollary 4.4.

Let G be an n-vertex directed edge-weighted graph undergoing vertex insertions and deletions. Then there exists a deterministic data structure which can maintain the distances in G between all pairs of vertices with worst-case update time Õ(n^{2+3/4}) using space Õ(n^2). If the graph is unweighted, the running time can be improved to Õ(n^{2+2/3}).

4.3 A Space-Efficient and More Robust Las-Vegas Algorithm

In this section, we present a simple randomized procedure that allows us to refine the approach of our framework. On a high level, we set the congestion threshold for each vertex quite small. Whilst this implies that our set of congested vertices is quite large, we have ensured that the paths avoiding it remain covered for many deletions. We then recursively find all paths through the congested vertices using a slightly larger congestion threshold. By shrinking the set of congested vertices in each iteration, we speed up the preprocessing procedure and can therefore recompute the data structure more often. We point out that even though our layering process again gives an efficient data structure to maintain paths that go through congested vertices, it does not rely on the techniques by Abraham, Chechik and Krinninger [AbrahamCK17].

Input: A graph G, a set of centers C ⊆ V, a positive integer h determining the maximum hop and an integer τ regulating the congestion.
Output: A tuple with the properties of Lemma 4.5.
S ← ∅;
foreach v ∈ V do c(v) ← 0;
foreach (u, v) ∈ V × V do π_{u,v} ← ∅;
Q ← C;
while Q ≠ ∅ do
      Remove a center x uniformly at random from Q;
      foreach level i ∈ {0, 1, …, ℓ} do
            Compute h_i-hop-improving shortest paths π_{u,x} to x and π_{x,v} from x in G ∖ S;
            foreach (u, v) ∈ V × V with w(π_{u,x} ⊕ π_{x,v}) < w(π_{u,v}) do
                  π_{u,v} ← π_{u,x} ⊕ π_{x,v};
                  foreach vertex y on π_{u,v} do
                        Increase the congestion c(y);
                  if c(y) > τ for some vertex y on π_{u,v} then
                        S ← S ∪ {y};
                        Remove y from the graph;
                        Recompute the paths through x in G ∖ S;
Algorithm 4

We start by presenting an adapted version of the preprocessing procedure of Algorithm 1, which is depicted in Algorithm 4. The new algorithm additionally takes a set of vertices C, and the goal of the procedure is to produce h-hop-improving shortest paths through the vertices in C in the graph G ∖ S, where S is the set of vertices that become congested over the course of the algorithm. Instead of taking vertices from Q in an arbitrary order, we now sample a center uniformly at random in each iteration. We then compute h_i-hop-improving shortest paths from and to the sampled center x by invoking the adapted Bellman-Ford procedure on the original and the reversed graph.

We then test whether the concatenation π_{u,x} ⊕ π_{x,v} has lower weight than the current path from u to v. If so, we add congestion to each vertex on the path π_{u,x} ⊕ π_{x,v}. In contrast to the previous algorithms, if the congestion of one of the vertices exceeds τ, we immediately remove that vertex from the graph and recompute the paths through the current center in the new graph. Let us now analyze the algorithm.
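The following sketch captures the main loop just described (our own code; a single hop bound h replaces the geometric hop levels, the congestion charge per vertex is an illustrative choice, and the bookkeeping needed to keep previously recorded paths consistent once a vertex has been removed is omitted).

import math
import random

def bf_hop(n, edges, s, h, banned, reverse=False):
    """h rounds of Bellman-Ford from s in G \\ banned (non-negative weights);
    returns (dist, parent)."""
    dist, parent = [math.inf] * n, [None] * n
    dist[s] = 0.0
    for _ in range(h):
        nd, np = dist[:], parent[:]
        for a, b, w in edges:
            u, v = (b, a) if reverse else (a, b)
            if u in banned or v in banned:
                continue
            if dist[u] + w < nd[v]:
                nd[v], np[v] = dist[u] + w, u
        dist, parent = nd, np
    return dist, parent

def tree_path(v, parent):
    """Vertices on the tree path from v back to the root of the parent forest."""
    path = []
    while v is not None:
        path.append(v)
        v = parent[v]
    return path

def process_centers(n, edges, centers, h, tau, seed=0):
    """Sketch of the randomized preprocessing of Section 4.3: centers are
    processed in uniformly random order, every pair is relaxed through the
    current center x, vertices on improved u -> x -> v paths are charged, and
    a vertex is moved to S as soon as its congestion exceeds tau, after which
    the current center is reprocessed on the smaller graph G \\ S."""
    rng = random.Random(seed)
    dist = [[0.0 if u == v else math.inf for v in range(n)] for u in range(n)]
    congestion, S = [0.0] * n, set()
    queue = list(centers)
    while queue:
        x = queue.pop(rng.randrange(len(queue)))            # random center
        redo = True
        while redo:
            redo = False
            from_x, pf = bf_hop(n, edges, x, h, S)              # x -> v paths
            to_x, pt = bf_hop(n, edges, x, h, S, reverse=True)  # u -> x paths
            for u in range(n):
                for v in range(n):
                    if to_x[u] + from_x[v] >= dist[u][v]:
                        continue                                # no improvement via x
                    dist[u][v] = to_x[u] + from_x[v]
                    for y in set(tree_path(u, pt) + tree_path(v, pf)):
                        congestion[y] += 1.0 / h                # illustrative charge
                        if congestion[y] > tau and y not in S:
                            S.add(y)                            # y is congested: remove it
                            redo = True                         # and reprocess x on G \ S
                    if redo:
                        break
                if redo:
                    break
    return dist, congestion, S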

Lemma 4.5.

At termination of Algorithm 4, the algorithm ensures that

  1. for every v ∈ V ∖ S, the congestion c(v) exceeds τ by at most the congestion added while processing a single center,

  2. the total congestion Σ_{v ∈ V} c(v) is bounded, and

  3. |S| is at most the total congestion divided by τ.