1 Introduction
The All-Pairs Shortest Paths problem is one of the most fundamental algorithmic problems and is commonly taught in undergraduate courses to every Computer Science student. Whilst static algorithms for the problem have been well-known for several decades, the dynamic versions of the problem have recently received intense attention from the research community. In the dynamic setting, the underlying graph undergoes updates, most commonly edge insertions and/or deletions. Most dynamic All-Pairs Shortest Paths algorithms can further handle vertex insertions (with their incident edges) and/or deletions.
The problem.
In this article, we are only concerned with the fully-dynamic All-Pairs Shortest Paths (APSP) problem with worst-case update time, i.e. given a fully-dynamic graph undergoing vertex insertions and deletions, we want to guarantee a small update time after each vertex update to recompute the distance matrix of the new graph. This is opposed to the version of the problem that allows for amortized update time. Moreover, we focus on space-efficient data structures and show that our data structures even improve over the most space-efficient APSP algorithms with amortized update time. We further point out that, for the fully-dynamic setting, vertex updates are more general than edge updates, since any edge update can be simulated by a constant number of vertex updates.
Related Work.
The earliest partially-dynamic algorithm for the All-Pairs Shortest Paths problem is most likely the algorithm by Johnson [johnson1977efficient], which can easily be extended to handle vertex insertions in worst-case update time per insertion, given the distance matrix of the current graph. The first fully-dynamic algorithm was presented by King [king1999fully], with amortized update time per edge insertion/deletion depending on the largest edge weight, building upon a classic data structure for decremental Single-Source Shortest Paths by Even and Shiloach [even1979line]. Later, King and Thorup [king2001space] improved the space bound. In follow-up work by Demetrescu and Italiano [demetrescu2002improved, demetrescu2006fully], the result was generalized to real edge weights with the same bounds. In 2004, Demetrescu and Italiano [demetrescu2004new] presented a new approach to the All-Pairs Shortest Paths problem that only requires amortized update time for vertex insertions/deletions. Thorup improved and simplified the approach in [thorup2004fully] and even extended it to handle negative edge weights. Based on this data structure, he further developed the first data structure with subcubic worst-case update time [thorup2005worst] for vertex insertions/deletions, improving over the trivial worst-case update time that can be obtained by recomputation. However, his data structure requires supercubic space.
Abraham, Chechik and Krinninger [AbrahamCK17] showed that, using randomization, a Monte Carlo data structure can be devised with improved worst-case update time. For unweighted graphs, they obtain an even better worst-case update time. Both algorithms require considerable space since they maintain a list for each pair of vertices. Both algorithms work against an adaptive adversary, that is, the adversary can base the update sequence on the output produced by the algorithm but has no access to the random choices that the algorithm makes. A drawback of the algorithm is that if it outputs an incorrect shortest path (which it does with small but nonzero probability), the adversary can exploit the revealed information and compromise the data structure for many subsequent updates before the data structure is recomputed. We also point out that the problem of Approximate All-Pairs Shortest Paths has been solved in various dynamic graph settings [baswana2002improved, demetrescu2004new, roditty2004dynamic, bernstein2009fully, roditty2012dynamic, abraham2013dynamic, bernstein2016maintaining, henzinger2016dynamic, shiriFocs, brand2019dynamic, probstWulffNilsenDetSSSP]. However, the only one of these algorithms that gives a better guarantee than the trivial bound on the worst-case update time is the algorithm in [brand2019dynamic], which applies to directed graphs with positive edge weights and relies on fast matrix multiplication.
Our results.
We present the first deterministic data structure that breaks Thorup's long-standing bound on the worst-case update time.
Theorem 1.1.
Let G be an n-vertex directed edge-weighted graph undergoing vertex insertions and deletions. Then there exists a deterministic data structure which can maintain the distances in G between all pairs of vertices in worst-case update time . If the graph is unweighted, the running time can be improved further.
Further, we present the first algorithm for the fully-dynamic All-Pairs Shortest Paths problem (even amortized) in weighted graphs that obtains truly subcubic time and space usage at the same time. (For small weights, the algorithm of [demetrescu2006fully] gives subcubic update time and space. However, as pointed out in [demetrescu2006experimental], real-world graphs often have large edge weights; for example, the internet graph had a large maximum edge weight in 2006.) Further, this is also the first algorithm that breaks the space/update-time product that stood up to this article even for unweighted, undirected graphs. We hope that this gives new motivation to study amortized fully-dynamic algorithms with improved update-time and space trade-offs, which is a central open question in the area, posed in [demetrescu2004new, thorup2004fully, demetrescu2006experimental, Italiano2008], and has practical importance.
Theorem 1.2.
Let G be an n-vertex directed edge-weighted graph undergoing vertex insertions and deletions. Then there exists a deterministic data structure which can maintain the distances in G between all pairs of vertices in worst-case update time using space . If the graph is unweighted, the running time can be improved further.
Finally, we present a data structure that is randomized and matches the update times achieved in [AbrahamCK17] up to polylogarithmic factors. However, their data structure is Monte Carlo, whilst ours is Las Vegas, uses less space, and is slightly more robust: the data structure in [AbrahamCK17] works against an adaptive adversary, so the adversary can base its updates on the output of the algorithm, whilst our algorithm works against a non-oblivious adversary, that is, the adversary also has access to the random bits used throughout the execution of the algorithm. (The former model assumes, for example, that the adversary cannot use information about the running time of the algorithm during each update, whilst we do not require this assumption.)
Theorem 1.3.
Let G be an n-vertex directed edge-weighted graph undergoing vertex insertions and deletions. Then, there exists a Las Vegas data structure which can maintain the distances in G between all pairs of vertices with update time w.h.p. using space , against a non-oblivious adversary. If the graph is unweighted, the running time can be improved further.
Our Techniques.
We focus on the decremental problem, which we then generalize to the fully-dynamic setting using Johnson's algorithm. The most crucial ingredient of our new decremental algorithms is a new way to use congestion: for each shortest path, each vertex on the path is assigned a congestion value that relates to the cost induced by a deletion of that vertex. If a vertex participates in many shortest paths, its deletion is expensive since we need to restore all shortest paths in which it participated. Thus, if the congestion of a vertex accumulated during some shortest path computations grows too large, we simply remove the vertex from the graph and continue our shortest path computations on the remaining graph. We defer handling the vertices of high congestion to a later stage and prepare for their deletion more carefully. This differs significantly from previous approaches that compute all paths in a specific order to avoid high congestion. Our new approach is simpler, more flexible, and can be used to avoid vertices even at lower thresholds.
The second technique we introduce is to use separators to recompute shortest paths after a vertex deletion. This allows us to speed up the computation since we can check fewer potential shortest paths. Since long paths have better separators, we can reduce the congestion induced by these paths and therefore reduce the overall amount of congestion on all vertices.
Once we complete our shortest path computations, we obtain the set of highly congested vertices and handle them using a different approach presented by Abraham, Chechik and Krinninger [AbrahamCK17] that maintains deterministically the shortest paths through these vertices. These are exactly the shortest paths that we might have missed in the former step when we removed congested vertices. Thus, taking the paths of smaller weight, we obtain the real shortest paths in .
Finally, we present a randomized technique to layer our approach where we use a low congestion threshold initially to identify uncritical vertices and then obtain with each level in the hierarchy a smaller set of increasingly critical vertices that require more shortest path computations on deletion. Since the sets of critical vertices are decreasing in size, we can afford to invest more update time in the maintenance of shortest paths through these vertices.
2 Preliminaries
We denote by G = (V, E, w) the input digraph, where w is the weight function mapping each edge to a real number, and we define n = |V| and m = |E|. In this article, we write H ⊆ G to refer to H being a vertex-induced subgraph of G, i.e. H = G[V'] for some V' ⊆ V. We also slightly abuse notation and write G ∖ X, for X ⊆ V, to denote G[V ∖ X]. We let the graph with edge directions reversed be denoted as the reverse graph of G.
The weight of a path in an edge-weighted graph is the sum of the weights of its edges. We let the weight function also map each path in the graph to its weight. We use the empty path of weight zero. Given two paths, the first ending at the vertex where the second starts, we denote their concatenation as the combined path. For any path , we define .
Let be a path starting in vertex and ending in vertex . Then is a shortest path in if its sum of edge weights is minimized over all paths from to in . We denote the weight of a shortest path from to by .
We say a path is the shortest path from x to y through a vertex set S if it is the path of minimum weight from x to y that contains a vertex in S. We further say a path has hop h, or is an h-hop-restricted path, if it consists of at most h edges. We denote the weight of the hop-restricted shortest path accordingly. Finally, we define the notion of an improving shortest path in a graph H with regard to G to be a path in H of weight at most the corresponding shortest-path weight in G. We often combine these notions, saying, for example, that a path is an h-hop-improving shortest path through S in H with respect to G to refer to a path that is in H and has weight at most equal to that of the shortest path between its endpoints of hop h that contains a vertex in S in G.
In this paper, we often use as a black box a result by Zwick [Zwick02] that extends hop-improving shortest paths to improving shortest paths. Since the lemma is implicit in [Zwick02], we provide an implementation of the algorithm and a proof of correctness in Appendix A.
Lemma 2.1 (see [Zwick02, AbrahamCK17]).
Given a collection of hop-improving shortest paths for all pairs of vertices, there exists a procedure that returns improving shortest paths for all pairs in time .
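To make the extension step concrete, the following Python sketch illustrates the idea behind Lemma 2.1: repeated min-plus ("distance") products extend hop-restricted distances to unrestricted ones, since each squaring doubles the hop radius. This is only the naive cubic-time illustration of the principle, not Zwick's algorithm, which accelerates the product with fast matrix multiplication; all names here are ours.

```python
from math import inf

def min_plus_square(D):
    """One min-plus product of D with itself: doubles the hop radius."""
    n = len(D)
    return [[min(min(D[i][k] + D[k][j] for k in range(n)), D[i][j])
             for j in range(n)] for i in range(n)]

def extend_to_full(D):
    """Repeatedly square until the distances stabilize; for an n-vertex
    graph this takes at most about log2(n) rounds."""
    while True:
        E = min_plus_square(D)
        if E == D:
            return E
        D = E
```

For instance, starting from the 1-hop distance matrix of a directed path on four vertices, two squarings already yield the full distance matrix.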
3 The Framework
In this section, we describe the fundamental approach that we use for our data structures. We then refine this approach in the next section to obtain our new data structures. We start by stating a classic reduction that was used in all existing approaches.
Lemma 3.1 (see [henzinger2001maintaining, thorup2005worst, AbrahamCK17]).
Given a data structure on G that supports a batch deletion of up to vertices from G, such that the data structure computes for each pair a shortest path in the new graph, and can return the first edges on this path in time . Then, if the preprocessing time is and the batch-deletion worst-case time is , there exists a fully-dynamic APSP algorithm with worst-case update time .
This lemma reduces the problem to finding a data structure with good preprocessing time that can handle batch deletions. To get some intuition on how the reduction stated above works, note that vertex insertions can be handled very efficiently.
Lemma 3.2 (implied by Johnson’s algorithm, see for example [cormen2009introduction]).
Given a graph G, a vertex set of bounded size, and all-pairs shortest paths in the graph without that set, we can compute all-pairs shortest paths in G in time .
Therefore, the algorithm can reduce the problem to a decremental problem and, using the shortest paths in the decremental graph, insert a batch of vertices after each update. When the batch becomes too large (i.e. after sufficiently many updates), we recompute the decremental data structure. Using standard deamortization techniques for rebuilding the data structure, the preprocessing time can be split into small chunks that are processed at each update, and therefore we obtain a worst-case guarantee for each update.
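To illustrate the insertion step behind Lemma 3.2, the following Python sketch inserts a single vertex into a graph whose all-pairs distances are already known. It assumes nonnegative edge weights (so a shortest path visits the new vertex at most once); the dictionary layout and all names are hypothetical.

```python
from math import inf

def insert_vertex(dist, v, out_edges, in_edges):
    """Given the all-pairs distance table `dist` of G - {v} (a dict of dicts),
    insert vertex v with weighted edges out_edges = {u: w(v, u)} and
    in_edges = {u: w(u, v)}; returns the distance table of G.
    Assumes nonnegative weights, so a shortest path visits v at most once."""
    vertices = list(dist)
    # Distances from v: leave v via one out-edge, then follow old shortest paths.
    from_v = {y: min((w + dist[u][y] for u, w in out_edges.items()), default=inf)
              for y in vertices}
    # Distances to v: follow old shortest paths, then enter v via one in-edge.
    to_v = {x: min((dist[x][u] + w for u, w in in_edges.items()), default=inf)
            for x in vertices}
    # A new shortest path between old vertices either avoids v or passes it once.
    new = {x: {y: min(dist[x][y], to_v[x] + from_v[y]) for y in vertices}
           for x in vertices}
    for x in vertices:          # add v's own row and column
        new[x][v] = to_v[x]
    new[v] = dict(from_v)
    new[v][v] = 0
    return new
```

A batch of insertions is handled by applying this step once per inserted vertex.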
3.1 A Batch Deletion Data Structure with Efficient Preprocessing Time
In the following, we present a procedure, given in Algorithm 1, that is invoked with a graph and integer parameters to compute paths for each relevant tuple. Our goal is to use these paths in the batch deletion to recompute all-pairs shortest paths.
This procedure maintains congestion values for each vertex. These counters are initially zero. Let and let , for , throughout the rest of the article. For each such hop level, hop-restricted shortest paths are computed from each root to all other vertices, where the roots are considered in an arbitrary order.
For each vertex, whenever a hop-restricted path is found that passes through it, its congestion counter is increased accordingly. Hence, paths of long hop congest the vertices on them less than low-hop paths; this is key to getting our update-time improvement, as it helps us to keep the amount of congestion contributed by a path of any hop bounded (as opposed to existing techniques, which can only bound this cost more crudely). Once a congestion value increases beyond the threshold value, the vertex is removed from the vertex set. More precisely, a growing set keeps track of the vertices whose congestion value is above the threshold, and all hop-restricted paths are computed in the shrinking graph.
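A minimal sketch of this congestion accounting, with hypothetical names and the simplifying assumption that each vertex on a path found by h-hop-restricted Bellman-Ford is charged n/h units (so any single path contributes about n congestion in total, mirroring the accounting described above):

```python
from math import inf

def hop_restricted_paths(n, edges, root, h, congestion, threshold, removed):
    """One round of the congestion-aware preprocessing (illustrative sketch).
    Runs h rounds of Bellman-Ford from `root`, skipping vertices in `removed`,
    then charges every vertex on each recovered path n/h units of congestion;
    vertices whose counter exceeds `threshold` join `removed`.
    edges: list of (u, v, w) triples; vertices are 0..n-1; weights >= 0."""
    dist, parent = [inf] * n, [None] * n
    dist[root] = 0
    for _ in range(h):                      # h-hop-restricted Bellman-Ford
        updated = False
        for u, v, w in edges:
            if u in removed or v in removed:
                continue
            if dist[u] + w < dist[v]:
                dist[v], parent[v] = dist[u] + w, u
                updated = True
        if not updated:
            break
    paths = {}
    for t in range(n):
        if t == root or dist[t] == inf or t in removed:
            continue
        path, x = [t], t
        while x != root:                    # walk parent pointers to the root
            x = parent[x]
            path.append(x)
        path.reverse()
        paths[t] = path
        for v in path:                      # charge n/h per vertex on the path
            congestion[v] += n / h
            if congestion[v] > threshold:
                removed.add(v)
    return dist, paths
```

Note that, as in the text, vertices evicted during the loop are skipped by all subsequent shortest-path computations on the shrinking graph.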
Lemma 3.3.
The procedure can be implemented to run in time.
Proof.
In each iteration of the for-loop, computing hop-restricted shortest paths from a source to all other vertices can be done with iterations of Bellman-Ford in time . It is straightforward to see that this dominates the cost incurred by the accounting steps. From this and from a geometric sums argument, it follows that the total running time over all sources is as claimed. The lemma now follows. ∎
To bound the time for updates in the next subsection, we need the following lemma.
Lemma 3.4.
At termination of the procedure, the algorithm ensures that:
1. the congestion of every vertex outside the congested set is bounded,
2. the total congestion over all vertices is bounded, and
3. the number of congested vertices is bounded.
4. Each computed path is a hop-improving shortest path in with regard to .
Proof.
We first observe that we maintain the loop invariant for the while-loop that the congestion of any vertex not in the congested set is at most the threshold. This is true since initially the congestion of all vertices is zero and, at the end of each iteration, we explicitly remove vertices with congestion larger than the threshold. To prove property 1, it therefore suffices to show that during an iteration of the while-loop, the congestion of any vertex is increased by at most . To see this, observe that there are at most paths under consideration in each iteration of the while-loop. Every vertex has its congestion increased by for each such path it belongs to. Therefore, we add at most congestion to any vertex during an iteration of the while-loop.
To see property 2, consider the total congestion over all vertices. Initially, it is zero. Observe that during an iteration of the while-loop, we have at most paths of hop up to . Thus at most vertices increase their congestion due to a path by , and so each such path increases the total by at most . Thus each while-loop iteration adds at most to the total and, since we execute the while-loop exactly times, the final value is as claimed.
Property 3 follows since each congested vertex has congestion at least the threshold, implying that there can be at most boundedly many vertices in the congested set. Property 4 follows from the analysis of Bellman-Ford. ∎
The space efficiency is straightforward to analyze: since each pair requires one path to be stored for each hop level, storing the shortest paths explicitly requires space . We defer the description of a more space-efficient data structure with the same guarantees to Section 4.2.
3.2 Handling Deletions
In this section, we use the data structure computed by the preprocessing procedure, with the set of congested vertices as before, and show how to use this data structure to handle a batch of deletions, i.e. we show how to efficiently compute all-pairs shortest paths in the graph after the deletions. Our update procedure proceeds in multiple phases. Throughout the procedure, we enforce the following invariant.
Invariant 3.5.
After the execution of the phase, each path is a hop-improving shortest path in with regard to , for every pair.
Before we describe how to enforce the invariant, observe that the invariant implies that after we have finished the final phase, we have for each pair a hop-improving shortest path, which can then be extended using the procedure described in Lemma 2.1 to give all-pairs shortest paths in the new graph, as required.
Let us now describe how to implement the execution of a phase, which is also depicted in Algorithm 2. Initially, we change all precomputed paths with an endpoint among the deleted vertices to the empty path. Clearly, this enforces Invariant 3.5 and can be implemented in time .
In each subsequent phase, we start by computing, for each vertex, a hitting set of all hop-improving shortest paths starting in it. We take the separator set such that, in particular, each (real) shortest path of sufficient length from the vertex contains at least one separator vertex at an appropriate distance from it. Here the separator layer is chosen between the and vertex on each path (with an exception for very small hop, where we choose the separator to be the entire vertex set). Since there are layers to choose from, and the layers partition the vertex set, we obtain that the separator is small by the pigeonhole principle. Finally, to fix any precomputed hop-improving shortest path that is no longer intact, we check the concatenations through each separator vertex and take a path of minimal weight. We point out that this path is either the concatenation of two hop-improving shortest paths, or the original path. This completes the description of our update algorithm.
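The repair step of a phase can be sketched as follows. The code assumes, as the invariant guarantees, that the previous-phase tables already hold hop-improving distances in the surviving graph; the dictionary layout and all names are hypothetical.

```python
from math import inf

def phase_repair(dist, paths, deleted, separators):
    """One phase of the update (illustrative sketch): for every pair whose
    stored path meets a deleted vertex, replace its distance estimate by the
    cheapest concatenation of two previous-phase values through a separator
    vertex of the source. `separators[x]` is the hitting set computed for
    source x; `paths[(x, y)]` is the stored vertex sequence for the pair."""
    new_dist = {x: dict(row) for x, row in dist.items()}
    for x in dist:
        for y in dist[x]:
            if not (set(paths.get((x, y), ())) & deleted):
                continue          # path survives the deletions; nothing to do
            best = inf
            for s in separators[x]:
                if s in deleted:
                    continue
                best = min(best, dist[x][s] + dist[s][y])
            new_dist[x][y] = best
    return new_dist
```

Only pairs whose stored path actually contains a deleted vertex are touched, which is what the congestion accounting pays for.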
Lemma 3.6.
Invariant 3.5 is enforced throughout the entire execution of the procedure.
Proof.
The base case is proved by our observation above. Let us therefore take the inductive step and focus on some path. Clearly, if the path contains no deleted vertex, it is still hop-improving and therefore no action is required. Otherwise, since each path is hop-improving by the induction hypothesis, we have that the separator set contains a vertex between the and vertex on any path of sufficient length. It follows that the hop-restricted shortest path from the source to the target is either short, in which case the stored path is already a hop-restricted shortest path, or the minimum-weight concatenation of a hop-restricted shortest path from the source to a hitting-set vertex and from there to the target is hop-improving. The lemma follows. ∎
Lemma 3.7.
Given a data structure that satisfies the properties listed in Lemma 3.4 with congestion threshold and a set of congested vertices, there exists an algorithm that computes all-pairs shortest paths in the new graph and returns the corresponding distance matrix in time .
Proof.
By Invariant 3.5, after the final phase of Algorithm 2, all paths are hop-improving shortest paths with regard to the new graph. Thus, the collection of paths contains all restricted shortest paths that contain no congested vertex. It is straightforward to adapt the procedure described in Lemma 3.2 to return in time a collection of hop-improving shortest paths in the new graph. Then, Lemma 2.1 can be applied to recover in time the shortest paths, as required.
It remains to analyze the running time of Algorithm 2. We note that each phase requires us to compute a separator for each vertex. Since we represent paths explicitly, returning the first edges of each path takes time , so the time required to compute a single separator in a phase is at most . Thus, the overall time to compute all separators can be bounded using a geometric sum argument over the different phases.
To bound the time spent in the foreach-loop, observe that we iterate only over paths that contain a deleted vertex, which can be detected in linear time. Observe that if a deleted vertex lies on a path, then the path contributed credits to the congestion of that vertex in the preprocessing procedure. Since the separator at each phase is small by the arguments mentioned above, the iteration to recover a path is fast (checking the weight of each path and the concatenation can both be implemented in constant time). Since each vertex has bounded total congestion by Lemma 3.4, we can bound the total running time of the algorithm as claimed. ∎
Choosing and in Lemma 3.1, we obtain the following corollary.
Corollary 3.8.
Let G be an n-vertex directed edge-weighted graph undergoing vertex insertions and deletions. Then there exists a deterministic data structure which can maintain the distances in G between all pairs of vertices in worst-case update time .
3.3 Batch Deletion Data Structure for Unweighted Graphs
We point out that for unweighted graphs, we can replace the Bellman-Ford procedure by a simple Breadth-First Search (see for example [cormen2009introduction]), which improves the running time. This was also exploited before in [AbrahamCK17].
Corollary 3.9.
Let G be an n-vertex directed unweighted graph undergoing vertex insertions and deletions. Then there exists a deterministic data structure which can maintain the distances in G between all pairs of vertices in worst-case update time .
In the following sections, we will not explicitly point out that the Bellman-Ford procedure can be replaced by BFS, but simply state the improved bound.
4 Efficient Data Structures
We now explain how to apply the general strategy from the previous section and describe the necessary changes to obtain efficient data structures.
4.1 A Faster Deterministic Algorithm
To obtain a faster algorithm, we combine our framework with the following result by Abraham, Chechik and Krinninger [AbrahamCK17]. It is not explicitly stated in their paper but follows immediately by replacing their randomly sampled vertex set with an arbitrary vertex set. Informally, the data structure takes a decremental graph and a set of vertices and maintains, for all pairs of vertices, the shortest path through some vertex in the set.
Lemma 4.1.
Given an edge-weighted directed graph, a vertex set, and a hop bound, there exists a deterministic data structure that supports the following operations:
- An initialization operation that initializes the data structure with the given parameters and returns a pointer to the data structure.
- A batch-deletion operation that, assuming the batch is not too large, returns for each pair a hop-improving shortest path through some vertex in the set (with respect to the original graph).
The initialization operation runs in time and each batch-deletion operation runs in time .
It is now straightforward to obtain a new batch-deletion data structure that combines these two data structures. Intuitively, we exploit the strengths of both algorithms by setting the threshold of the algorithm introduced in the previous section slightly lower, which increases the size of the set of congested vertices but improves the running time of the data structure that maintains shortest paths avoiding congested vertices. Since the congested set is precomputed, we then use the data structure described above to obtain the shortest paths through its vertices. Let us now give a more formal description.
To initialize the new data structure, we invoke Algorithm 1 with parameters to be fixed later. The algorithm gives a data structure and a congested set of bounded size. We then handle updates as follows: at initialization and periodically thereafter, we compute a second data structure by invoking the preprocessing algorithm of Lemma 4.1. We later choose the threshold larger than in the last section, which implies that we can allow a larger congested set, and take care of the shortest paths through it by recomputing more often. Since the preprocessing time of the second data structure is smaller, this can be balanced efficiently such that both have small batch-update time at all times.
For each update, we let one batch consist of the deletions since the first data structure was initialized and the other of the deletions since the second was initialized. We then invoke the batch-deletion operations of both data structures and combine the results in a straightforward manner. This concludes the algorithm.
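The combination step is simply a pointwise minimum over the two answer tables, one covering paths that avoid the congested set and one covering paths through it; a sketch with hypothetical names:

```python
from math import inf

def combine(table_avoiding, table_through):
    """Combine the two batch-deletion answers: for every pair, take the
    lighter of the path avoiding the congested set and the path through it
    (missing entries count as infinite weight)."""
    out = {}
    for pair in set(table_avoiding) | set(table_through):
        out[pair] = min(table_avoiding.get(pair, inf),
                        table_through.get(pair, inf))
    return out
```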
Using the reduction of Lemma 3.1, together with the stated bounds, we obtain an algorithm with a worst-case running time that is optimized by an appropriate setting of the parameters.
Corollary 4.2.
Let G be an n-vertex directed edge-weighted graph undergoing vertex insertions and deletions. Then there exists a deterministic data structure which can maintain the distances in G between all pairs of vertices in worst-case update time . If the graph is unweighted, the running time can be improved further.
4.2 A Simple and SpaceEfficient Deterministic Data Structure
In order to reduce space, we replace the Bellman-Ford procedure in the preprocessing by the procedure depicted in Algorithm 3. Unlike the Bellman-Ford algorithm, our algorithm does not return the restricted shortest paths but instead returns improving shortest paths of bounded length. Using that the length of each improving shortest path is bounded, it can be verified that the proof of Lemma 3.4 still holds under these conditions. Moreover, the information computed by the procedure can be stored space-efficiently.
The algorithm runs in iterations of the for-loop, where the index is initially set to its maximum value and decreased after every iteration until it reaches its minimum. In each iteration, we compute the hop-restricted shortest paths on the current graph. For the sake of analysis, we let denote the graph at the start of an iteration. After computing the paths, we compute a separator set that contains all vertices whose shortest path from the source has length in a chosen band (except for very small hop, where we choose the separator set to be the entire vertex set).
In the foreach-loop, we store, for every vertex in the hitting set, the hop-restricted shortest path to it. If the first edge on the path represents a subpath from a higher level, we add a pointer to that subpath. Then, we update the graph by setting the weight of the edge from the source to the hitting-set vertex to the weight of the stored path. Observe that after the foreach-loop has finished, all paths can be mapped to a path of the same weight and of bounded length in the updated graph, and observe that this is the graph of the next iteration.
Finally, we store the paths by a pointer. Observe that each path can then be unpacked to an improving shortest path by replacing the first edge on a path with the corresponding subpath on a higher level.
Lemma 4.3.
The procedure computes a collection of hop-improving shortest paths from the source, where each path is of bounded length, and provides a compact data structure such that:
1. each path can be extracted from the data structure in time , and
2. for every vertex, we can identify all paths that contain it in time .
The procedure takes time .
Proof.
We argued above that every hop-restricted shortest path can be mapped to a restricted shortest path in the updated graph. Thus, computing the restricted shortest paths using Bellman-Ford on the updated graph returns hop-improving shortest paths. By a simple inductive argument, it follows that every shortest path computed in this way is hop-improving with regard to the original graph.
To see that every path is of bounded length, observe that on each level, we add only boundedly many new edges to the path, since the only subpaths that we replace by shortcuts are higher-level paths. Thus the final path corresponds to a path of bounded length.
To see that the data structure requires little space, observe that at each iteration, each computed path consists of a bounded number of edges that need to be stored explicitly and a pointer to a higher-level subpath corresponding to the first edge. Since we only store paths to hitting-set vertices, we therefore only require the stated space.
We can further implement the pointer for the subpath corresponding to the first edge on a path to point to the next higher level where the subpath is nontrivial (i.e. not itself an edge). Thus, following a pointer, we can ensure that at least one additional edge is added to the path, and therefore we can extract the path in time proportional to its length. Making the pointers of the structure bidirectional, we can also find all paths containing a given vertex in linear time. The overall running time is dominated by running Bellman-Ford, which takes the stated time. ∎
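The pointer-based representation can be sketched as follows; we model a stored path as a Python list whose first element may itself be a stored (higher-level) subpath standing in for the first edge. The layout and names are ours.

```python
def unpack(path):
    """Unpack a hierarchically stored path into an explicit vertex list.
    A stored path is a list whose first element is either a vertex or a
    reference to a higher-level subpath standing in for the first edge;
    since following a pointer always contributes at least one additional
    edge, unpacking a k-edge path takes O(k) time (illustrative sketch)."""
    first = path[0]
    if isinstance(first, list):        # first edge abbreviates a subpath
        prefix = unpack(first)
        return prefix + path[1:]       # subpath ends where the edge ended
    return list(path)
```

For example, a stored path whose first edge expands to a three-vertex subpath unpacks to the full explicit vertex sequence.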
The lemma gives a straightforward way to verify that Lemmas 3.3 and 3.4 hold even when using the relaxed Bellman-Ford procedure. The corollary below follows.
Corollary 4.4.
Let G be an n-vertex directed edge-weighted graph undergoing vertex insertions and deletions. Then there exists a deterministic data structure which can maintain the distances in G between all pairs of vertices in worst-case update time using space . If the graph is unweighted, the running time can be improved further.
4.3 A SpaceEfficient and More Robust LasVegas Algorithm
In this section, we present a simple randomized procedure that allows us to refine the approach of our framework. On a high level, we set the congestion threshold for each vertex quite small. Whilst this implies that our set of congested vertices is quite large, we ensure that the remaining paths are covered for many deletions. We then recursively find all paths through the congested vertices with a slightly larger congestion threshold. By shrinking the congested set in each iteration, we speed up the preprocessing procedure, and therefore we can recompute the data structure more often. We point out that even though our layering process again gives an efficient data structure to maintain paths that go through congested vertices, it does not rely on the techniques by Abraham, Chechik and Krinninger [AbrahamCK17].
We start by presenting an adapted version of the preprocessing procedure of Algorithm 1, depicted in Algorithm 4. The new algorithm takes a set of vertices, and the goal of the procedure is to produce hop-improving shortest paths through these vertices in the graph without the vertices that become congested over the course of the algorithm. Instead of taking the vertices in arbitrary order, we now sample a vertex uniformly at random in each iteration. We then compute hop-improving shortest paths from and to the sampled vertex by invoking the adapted Bellman-Ford procedure on the original and the reversed graph.
We then test whether the concatenation through the sampled vertex has lower weight than the current path between each pair. If so, we add units of congestion to each vertex on the new path. In contrast to the previous algorithms, if the congestion of one of the vertices exceeds the threshold, we immediately remove it from the graph and recompute the paths through the sampled vertex in the new graph. Let us now analyze the algorithm.
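One sampling step can be sketched as follows. The helper assumes the hop-improving distances and paths to and from the sampled pivot b have already been computed (with the stored paths from b excluding b itself, to avoid double-charging the pivot); the charge per vertex and all names are hypothetical.

```python
from math import inf

def improve_through_pivot(cur, to_b, from_b, path_to_b, path_from_b,
                          congestion, threshold, removed, charge):
    """One sampling step of the randomized preprocessing (sketch): given
    hop-improving distances to and from a sampled pivot b, replace every
    pair estimate that the concatenation through b improves, charge each
    vertex on the new path `charge` units of congestion, and mark vertices
    whose counter exceeds `threshold` for removal.
    Convention: path_to_b[x] ends at b; path_from_b[y] excludes b."""
    for x in to_b:
        for y in from_b:
            cand = to_b[x] + from_b[y]
            if cand < cur.get((x, y), inf):
                cur[(x, y)] = cand
                for v in path_to_b[x] + path_from_b[y]:
                    congestion[v] = congestion.get(v, 0) + charge
                    if congestion[v] > threshold:
                        removed.add(v)
    return cur
```

In the full algorithm, a vertex entering `removed` triggers a recomputation of the paths through the pivot on the shrunken graph.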
Lemma 4.5.
At termination of the procedure, the algorithm ensures that:
1. the congestion of every vertex outside the congested set is bounded,
2. the total congestion over all vertices is bounded, and
3. the number of congested vertices is bounded.