Fast Deterministic Fully Dynamic Distance Approximation

by Jan van den Brand, et al.

In this paper, we develop deterministic fully dynamic algorithms for computing approximate distances in a graph with worst-case update time guarantees. In particular, we obtain improved dynamic algorithms that, given an unweighted and undirected graph G=(V,E) undergoing edge insertions and deletions, and a parameter 0 < ϵ ≤ 1, maintain (1+ϵ)-approximations of the st distance of a single pair of nodes, the distances from a single source to all nodes ("SSSP"), the distances from multiple sources to all nodes ("MSSP"), or the distances between all nodes ("APSP"). Our main result is a deterministic algorithm for maintaining (1+ϵ)-approximate single-source distances with worst-case update time O(n^1.529) (for the current best known bound on the matrix multiplication exponent ω). This matches a conditional lower bound by [BNS, FOCS 2019]. We further show that we can go beyond this SSSP bound for the problem of maintaining approximate st distances by providing a deterministic algorithm with worst-case update time O(n^1.447). This even improves upon the fastest known randomized algorithm for this problem. At the core, our approach combines algebraic distance maintenance data structures with near-additive emulator constructions. This also leads to novel dynamic algorithms for maintaining (1+ϵ, β)-emulators that improve upon the state of the art, which might be of independent interest. Our techniques also lead to improvements for randomized approximate diameter maintenance.







1 Introduction

From the procedural point of view, an algorithm is a set of instructions that outputs the result of a computational task for a given input. This static viewpoint neglects that computation is often not a one-time task and that the input data in successive runs of the algorithm may be very similar. The idea of dynamic graph algorithms is to explicitly model the situation in which the input constantly undergoes changes and the algorithm needs to adapt its output after each change to the input. This paradigm has been applied with great success in the domain of graph algorithms. The major goal in designing dynamic graph algorithms is to spend as little computation time as possible on processing each update to the input graph.

Despite the progress on dynamic graph algorithms in recent years, many state-of-the-art solutions suffer from at least one of the following restrictions: (1) Many dynamic algorithms only support one type of update, i.e., they are incremental (supporting only insertions) or decremental (supporting only deletions); fully dynamic algorithms support both types of updates. (2) Many dynamic algorithms only achieve amortized update time guarantees, i.e., the stated bound only holds "on average" over a sequence of updates, with individual updates possibly taking significantly more time than the stated amortized bound; worst-case bounds also hold for individual updates, which is relevant, for example, in real-time systems. (3) Many dynamic algorithms are randomized. (i) On the one hand, this means these algorithms only give probabilistic guarantees on correctness or running time that do not hold in all cases. (ii) On the other hand, randomized algorithms often do not allow the "adversary" creating the sequence of updates to be adaptive in the sense that it may react to the outputs of the algorithm. (This type of adversary is called an "adaptive online adversary" in the context of online algorithms [Ben-DavidBKTW94]. Note that despite being allowed to choose the next update in its sequence based on the outputs of the algorithm so far, this adversary may not explicitly observe the internal random choices of the algorithm.) This is because the power of randomization can in many cases only be unleashed if the adversary is oblivious to the outputs of the algorithm, which guarantees probabilistic independence of the random choices made by the algorithm.

Deterministic algorithms avoid both of these issues.

While these restrictions are not prohibitive in certain settings, they obstruct the general-purpose usage of dynamic algorithms as "black boxes". Thus, the "gold standard" in the design of dynamic algorithms should be deterministic fully dynamic algorithms with worst-case update time bounds. To date, only a limited number of problems admit such algorithms with almost optimal time bounds. To the best of our knowledge, this is the case only for approximate maximum fractional matching and minimum vertex cover [BhattacharyaHN17], edge coloring [BhattacharyaCHN18], approximate densest subgraph [SawlaniW20], connectivity [ChuzhoyGLNPS20], minimum spanning tree [ChuzhoyGLNPS20], and edge connectivity [JinS21].

In this paper, we add an important problem to this list: (1+ϵ)-approximate single-source distances in unweighted, undirected graphs. More specifically, our main result is a deterministic fully dynamic algorithm that after each edge insertion or deletion outputs a (1+ϵ)-approximation of the distances from a given source node to all other vertices in worst-case update time O(n^1.529), or, more generally, in a worst-case update time parameterized by the matrix-multiplication exponent ω. (To simplify the presentation of running time bounds in the introductory part of this paper, we assume that ϵ is a constant; we make the dependence explicit in our theorem statements. Throughout this paper, we use Õ-notation to suppress terms that are polylogarithmic in n, the number of nodes of the graph. Two n×n matrices can be multiplied in O(n^ω) operations, with currently ω < 2.373 [Williams12, Gall14, AlmanW21]; we write ω(a,b,c) for the exponent of multiplying an n^a × n^b matrix by an n^b × n^c matrix [GallU18].) Note that our update time matches a conditional lower bound for this problem (up to subpolynomial factors) [BrandNS19]. We also match the randomized worst-case time bound for approximate single-source shortest paths in unweighted undirected graphs implicitly obtained by [BHGWW2021] against an oblivious adversary. Prior to our work, the fastest algorithm for maintaining approximate single-source distances against an adaptive adversary had a larger update time, which applied to more general graphs [BrandN19].

We also give deterministic algorithms for maintaining (1+ϵ)-approximate distances from multiple sources, and thus all-pairs distances. Note that this almost matches the trivial lower bounds for explicit distance maintenance (up to subpolynomial factors): maintaining the distances from k sources explicitly may require writing up to kn values per update, and all-pairs distances up to n^2 values. This further matches (up to subpolynomial factors) the randomized worst-case time bound for maintaining approximate all-pairs distances in unweighted undirected graphs obtained by [BrandN19] against an adaptive adversary. (For general graphs, [BrandN19] obtain a slower update time.) Our techniques also lead to improved fully-dynamic bounds for several other distance problems, such as st distance, near-additive emulators, and diameter approximation. See Section 1.1 for details of our results.

We believe that another virtue of our algorithms is that they are conceptually simpler and more direct than prior works. In particular, our results follow from a combination of algebraic distance maintenance data structures and near-additive emulators. While the state-of-the-art algebraic data structures based on maintaining matrix inverses are rather involved, we use them only to optimize the ϵ-dependence in our running times. If we were fine with a slightly worse ϵ-dependence, then – since we only use the algebraic data structures to maintain relatively small distances – we could resort to the simpler and well-known path-counting approach for directed acyclic graphs (DAGs) that was introduced by King and Sagert [KingS02] and subsequently refined by Demetrescu and Italiano [DemetrescuI00] (see Appendix A for details on this approach). We believe that overall this results in a fairly accessible combination of algebraic and combinatorial approaches.

1.1 Our Results

In this section, we summarize our main results for deterministic fully dynamic distance computation (st, SSSP, APSP, and MSSP supporting distance queries) and emulators. A summary of our algorithms for maintaining (1+ϵ)-approximate distances with their worst-case update time guarantees can be found in Table 1. In addition to these deterministic results, our techniques also give improved randomized solutions for diameter approximation, and subquadratic update-time (1+ϵ)-APSP distance oracles with sublinear query time. (By a "distance oracle", we mean a data structure that supports fast queries; our goal – unlike many static algorithms – is not optimizing the space of this data structure.) We next discuss each of these results and compare them with related work. Throughout this paper we assume that we are given an unweighted graph with n nodes and m edges.

Table 1: Summary of our deterministic results for dynamic approximate undirected shortest paths (columns: problem, approximation type, worst-case update time). The stated complexities hide subpolynomial terms. Here by MSSP we mean multi-source distances from a given set of sources, and we show one important special case of our general result.

Optimal Deterministic (1+ϵ)-SSSP.

Our first result is a conditionally optimal deterministic algorithm for maintaining (1+ϵ)-approximate single-source distances. Formally, we show the following.

Theorem 1.1.

Given an unweighted undirected graph G=(V,E), a single source s ∈ V, and a parameter 0 < ϵ ≤ 1, there is a deterministic fully-dynamic data structure for maintaining (1+ϵ)-approximate distances from s with


  • Preprocessing time of , where .

  • Worst-case update time parameterized by ω(·,·,·); for current bounds on rectangular matrix multiplication and the best choice of parameters, this is O(n^1.529).

This running time matches a conditional lower bound stated in [BrandNS19], meaning it is unlikely that an algorithm can improve our running time by any polynomial factor. More specifically, the lower bound states that no dynamic (1+ϵ)-approximate SSSP algorithm on unweighted undirected graphs can be polynomially faster than this bound in worst-case time per edge update. Prior to this work, the fully-dynamic algorithms for single-source distances were randomized and either much slower per update [BrandN19] (albeit for more general classes of graphs), or only worked against an oblivious adversary, as implied by techniques of [BHGWW2021]. (In general, when we state bounds from previous work we sometimes hide lower-order terms unless our dependence is substantially different from those results. It is worth noting that the techniques in [BHGWW2021] are path reporting, while our deterministic SSSP algorithm is suited to distance computation; it seems challenging to obtain path reporting deterministically. Similar to our work, the techniques of [BHGWW2021] are also restricted to unweighted, undirected graphs.) Additionally, we remove the superpolynomial dependence on 1/ϵ in the update time inherent to the approach of [BHGWW2021].

Deterministic (1+ϵ)-approximate st distances.

Next, we show that our techniques lead to a faster algorithm for maintaining the distance between a fixed pair of nodes s and t. Specifically, we give an algorithm for this problem with a worst-case update time smaller than the time required for single-source distances (based on the conditional lower bound).

Theorem 1.2.

Given an unweighted undirected graph G=(V,E) and a fixed pair of nodes s and t, there is a fully-dynamic data structure for maintaining a (1+ϵ)-approximate distance between s and t deterministically with


  • Preprocessing time of , where .

  • Worst-case update time parameterized by ω, which is O(n^1.447) for the current value of ω.

When relaxing determinism to randomization against adaptive adversaries, the previously fastest fully-dynamic algorithms for st distances [BrandNS19] maintained the distance exactly for unweighted directed graphs.

Our algorithm improves upon this time bound and, unlike all previous work ([Sankowski05, BrandNS19, BrandN19, BHGWW2021]), is deterministic. Moreover, this bound has only a small gap to the conditional lower bound for (1+ϵ)-approximate dynamic st distances on unweighted undirected graphs [BrandNS19]. (The conditional lower bound states that no algorithm can be polynomially faster than a certain ω-dependent bound; thus no algorithm using current fast matrix multiplication algorithms can beat it by a polynomial factor.)

Sparse Emulators.

Our deterministic dynamic algorithms are based on several emulator constructions with various tradeoffs. We show here that we can also maintain more general emulators, which may be of independent interest. We start by defining (α, β)-emulators and spanners.

Definition 1.3.

Given an input graph G=(V,E), an (α, β)-emulator of G is a graph H (that is not necessarily a subgraph of G and might be weighted) in which

dist_G(u,v) ≤ dist_H(u,v) ≤ α · dist_G(u,v) + β

for all pairs of nodes u, v ∈ V.

If the graph H above is a subgraph of G, then we call H an (α, β)-spanner of G. Note that a spanner of an unweighted graph remains unweighted.

In this paper, we obtain the following result for maintaining near-additive emulators.

Lemma 1.4.

Given an unweighted, undirected graph G and a parameter 0 < ϵ ≤ 1, we can deterministically maintain a (1+ϵ, β)-emulator of G with worst-case update time and preprocessing time parameterized by β, ϵ, and ω.

The static running time for constructing a (1+ϵ, β)-emulator, also called a near-additive emulator, is well understood. For our applications in APSP and MSSP, we are interested in a special case in which the emulator is sparse. (In static settings there are somewhat more involved algorithms for near-additive emulators that lead to slightly better tradeoffs in specific parameter settings, e.g. see [EN2018]; for our applications in distance approximation the simpler algorithms suffice.) In this special case, our results can be compared to the spanner construction of [BHGWW2021]. They provide a randomized algorithm against an oblivious adversary, whereas we give a deterministic algorithm with a better worst-case update time. Hence we improve over [BHGWW2021] both in running time and by having a deterministic algorithm. We note however that [BHGWW2021] maintain a spanner, whereas we maintain an emulator.


Another implication of our techniques is an algorithm for (1+ϵ)-approximate multi-source distances. In Section 3.6, we give an algorithm that combines our sparse emulator construction with the algebraic techniques to prove the following theorem.

Theorem 1.5.

Given an unweighted, undirected graph G, a parameter 0 < ϵ ≤ 1, and a fixed set of sources S ⊆ V, we can maintain (1+ϵ)-approximate distances from S deterministically with:

  • Preprocessing time of .

  • Worst-case update time of .

Hence we can maintain distances from multiple sources in almost the same time (up to a subpolynomial factor) as maintaining distances from a single source.

Deterministic near-optimal (1+ϵ)-APSP.

One implication of Theorem 1.5 (obtained by simply taking the set of sources to be all of V) is a deterministic fully-dynamic algorithm for maintaining all-pairs shortest paths that nearly (up to a subpolynomial factor) matches the trivial lower bound of n^2 time per update for this problem: explicitly maintaining all-pairs distances may require writing up to n^2 values per update. More formally,

Corollary 1.6.

Given an unweighted, undirected graph G and a parameter 0 < ϵ ≤ 1, we can maintain (1+ϵ)-approximate all-pairs distances deterministically with:

  • Preprocessing time of .

  • Worst-case update time of .

It is worth mentioning that there is another (simpler) approach to obtain this bound, which we discuss in Section 3.7. The previous comparable bounds for this problem either used randomization [BrandN19] or only achieved amortized guarantees [DemetrescuI04, Thorup04].

Next we discuss two other implications of our multi-source data structure, which, unlike our previous bounds, are randomized.

Diameter Approximation.

We can maintain an approximation of the diameter in unweighted graphs by using our emulator to compute (1+ϵ)-approximate MSSP for certain source sets, based on an algorithm by [RV13]. Formally, we have:

Corollary 1.7.

Given an unweighted graph G with diameter D and a parameter 0 < ϵ ≤ 1, we can maintain an estimate D̂ of D with multiplicative and additive error guarantees (the additive term is only relevant for graphs with very small diameter D), with the following guarantees:

  • Pre-processing time of .

  • Worst-case update time that holds with high probability against an adaptive adversary.

Previously, the fastest fully-dynamic algorithm for this problem was by [BrandN19], with a worst-case update time bound against an adaptive adversary. We get better bounds by combining our sparse emulator algorithms with the algorithm of [BrandN19]. The algorithm and the parameter setting for obtaining this bound are included in Appendix B.

APSP distance oracle with subquadratic update time.

Another implication of our dynamic (1+ϵ)-approximate MSSP algorithm is improved bounds for maintaining a data structure with subquadratic update time that supports all-pairs queries with a small polynomial query time. This type of data structure for undirected and unweighted graphs was proposed by [RZ12] and also studied in [BrandN19]. We show that our improved MSSP algorithm directly improves their update time.

Corollary 1.8.

Given an unweighted, undirected graph G and a parameter 0 < ϵ ≤ 1, we can maintain a data structure that supports all-pairs (1+ϵ)-approximate distance queries with the following guarantees:

  • Preprocessing time: .

  • Worst-case update time: .

  • Query time: .

Moreover, these bounds hold with high probability against an adaptive adversary.

This result demonstrates that our techniques lead to improved bounds over [BrandN19] for a data structure explicitly designed for unweighted graphs; their algorithm has a larger update time with the same query time. While the analysis is similar to [BrandN19], we give a sketch of this algorithm in Appendix B.2 for completeness.

2 Technical Overview

In this section we give a high-level overview of our contributions based on the following outline: First, we give an algorithm for maintaining a deterministic near-additive emulator. We then describe how the update time can be improved by maintaining a low-recourse hitting set. Then, in Section 2.2, we give an overview of the algebraic techniques that, combined with our deterministic emulators, lead to a conditionally optimal time bound for (1+ϵ)-approximate SSSP. Finally, we show that by maintaining sparser emulators with different guarantees, we can get improved bounds for st distances, multi-source distances, and APSP.

2.1 Dynamic Emulators via Low-Recourse Hitting Sets

Deterministic near-additive emulator and (1+ϵ)-SSSP.

We start with a deterministic algorithm for maintaining a near-additive emulator. This algorithm is inspired by a randomized algorithm (working against an oblivious adversary) used by [HKN2013] in the decremental setting, which in turn is based on the purely additive static construction of [DorHZ00]. Given an unweighted graph G, we maintain an emulator H as follows:

  1. Let t be a degree threshold. For any node v with degree less than t, add all the edges incident to v to H. These edges have weight 1.

  2. Construct a hitting set A such that every node with degree at least t, called a heavy node, has a neighbor in A.

  3. For any node a ∈ A, add an edge to all nodes within bounded distance of a. Set the weight of such an edge to the corresponding distance.
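The steps above can be sketched statically as follows. This is an illustrative Python sketch only: the names `build_emulator`, `t` (degree threshold), and `d` (distance radius of step 3) are hypothetical, the greedy hitting set stands in for the paper's low-recourse construction, and a bounded BFS stands in for the algebraic bounded-distance data structure.

```python
from collections import deque

def build_emulator(n, edges, t, d):
    """Static sketch of the three-step emulator construction."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    H = set()  # emulator edges as (u, v, weight) with u < v
    # Step 1: keep all edges incident to low-degree nodes (weight 1).
    for u in range(n):
        if len(adj[u]) < t:
            for v in adj[u]:
                H.add((min(u, v), max(u, v), 1))
    # Step 2: greedy hitting set A: every heavy node gets a neighbor in A.
    uncovered = {u for u in range(n) if len(adj[u]) >= t}
    A = set()
    while uncovered:
        # pick the node adjacent to the most uncovered heavy nodes
        best = max(range(n), key=lambda x: len(uncovered & adj[x]))
        A.add(best)
        uncovered -= adj[best]
    # Step 3: connect each hitting-set node to all nodes within distance d,
    # with edge weight equal to that distance (bounded BFS).
    for a in A:
        dist = {a: 0}
        q = deque([a])
        while q:
            u = q.popleft()
            if dist[u] == d:
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for v, dv in dist.items():
            if v != a:
                H.add((min(a, v), max(a, v), dv))
    return H, A
```

On a star graph, for instance, the center is heavy, every leaf edge survives step 1, and one leaf ends up in the hitting set with weighted edges to all nodes within radius d.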

It is easy to see that if we were interested in a randomized algorithm that only works against an oblivious adversary, we could simply construct a hitting set by uniformly sampling a fixed set of nodes [UllmanY91]. We could then maintain the corresponding bounded distances from this fixed set with a worst-case update time matching the lower bound of [BrandNS19]. As we will discuss in Section 2.2, this could be done by using an algebraic data structure by [Sankowski05] for maintaining and then querying bounded distances for all relevant pairs after each update. Note that the bounded distances in our algorithms leverage the power of algebraic distance maintenance data structures, as their running times scale with the given distance bound. However, these ideas alone are not enough for obtaining an efficient deterministic algorithm.

Before explaining how to maintain both the hitting set and the corresponding distances deterministically, let us sketch the properties of this emulator and how it can be used for maintaining (1+ϵ)-approximate SSSP. The size bound on H follows directly from the construction: we add the edges incident to low-degree nodes, plus the weighted edges involving the hitting set A. For the stretch analysis, consider any pair of nodes u, v, and let π be a shortest path between them. We can divide π into segments of equal length, and possibly one additional smaller segment. Consider one such segment. If all the nodes on this segment are low-degree, then we have included all the corresponding edges in the emulator. Otherwise, there is a node a ∈ A that is adjacent to the first heavy node on this segment, so a is within bounded distance of the segment, and in the third step of the algorithm we have added a (weighted) edge through a in the emulator. It is easy to see that the path going through a either preserves the length of the segment up to a multiplicative factor, or (for the one smaller segment) introduces an additive term.

Given such an emulator, we can now maintain (1+ϵ)-approximate SSSP by (i) using algebraic techniques to maintain bounded distances from the source to all nodes, (ii) statically running Dijkstra's algorithm on the emulator, and finally (iii) taking the minimum of the two distance estimates for each node. We observe that if the true distance is below the distance bound, then we are maintaining a correct estimate in step (i). Otherwise, in step (ii), the combination of the multiplicative factor and the additive term leads to an overall (1+ϵ)-approximate estimate.
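Steps (i)-(iii) can be made concrete with a small sketch. The interface is hypothetical: the bounded distances from the source are simply passed in as a list (with None meaning "farther than the bound"), standing in for the output of the algebraic data structure.

```python
import heapq

def sssp_estimate(n, emulator_edges, bounded_from_s, s):
    """Combine (i) bounded distances from the source and
    (ii) Dijkstra on the weighted emulator, via (iii) a pointwise min."""
    adj = {}
    for u, v, w in emulator_edges:
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    INF = float("inf")
    dist = [INF] * n
    dist[s] = 0
    pq = [(0, s)]
    while pq:  # standard Dijkstra on the emulator
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj.get(u, []):
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    # step (iii): take the minimum of the two estimates per node
    return [min(dist[v], INF if bounded_from_s[v] is None else bounded_from_s[v])
            for v in range(n)]
```

For nodes close to the source the bounded (algebraic) estimate is exact; for far nodes the emulator estimate takes over.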

Deterministic low-recourse hitting set.

As discussed, we can easily obtain a fixed hitting set using randomization, but we are interested in a deterministic algorithm. One natural approach for constructing the hitting set deterministically is as follows: For each node v with degree at least the degree threshold t, consider a set of exactly t neighbors of v. After each update we can statically and deterministically compute an O(log n)-approximation to this instance of the hitting set problem. We use a simple greedy algorithm that proceeds by sequentially adding nodes to the hitting set that hit the maximum number of uncovered heavy nodes.

This can be done efficiently and also gives us a small hitting set. This time is within our desired update-time bound, but we also need to maintain bounded distances from the elements of this hitting set. As we will see in Section 2.2, by using the naive approach of recomputing a hitting set in each update and employing off-the-shelf algebraic data structures (e.g. by [BrandN19]) for maintaining bounded distances, we would get an update time that is too slow: there is a conditional lower bound of n^1.529 for this problem [BrandNS19], and our goal is to design an algorithm that matches this bound.

To get a better running time, we change our construction and the algebraic data structure to use a low-recourse hitting set instead, ensuring that in each update only a constant number of nodes are added to the set. More formally, in Section 3.3 we will prove the following lemma:

Lemma 2.1.

Given a graph undergoing edge insertions and edge deletions and a degree threshold t, call a node heavy if it has degree larger than t. We can deterministically maintain a hitting set with constant worst-case recourse and worst-case update time such that all heavy nodes have a neighbor in the hitting set.

At a high level, our dynamic low-recourse hitting set proceeds as follows: we start by using the static greedy hitting set algorithm. We then note that each update (insertion or deletion) can make at most a constant number of heavy nodes uncovered. We keep adding such nodes to our hitting set until the size of the hitting set doubles, and then reset the construction. This leads to an amortized constant recourse bound, and we can then use a standard technique to turn this into a worst-case constant recourse bound.
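The greedy-rebuild-then-patch scheme can be sketched as follows. This is an illustrative, non-deamortized version: the class name and interface are hypothetical, and the paper's actual construction turns the amortized recourse into a worst-case bound.

```python
class LowRecourseHittingSet:
    """Sketch: greedily rebuild, then patch uncovered heavy nodes
    one by one until the set doubles, then rebuild again."""

    def __init__(self, n, t):
        self.n, self.t = n, t
        self.adj = [set() for _ in range(n)]
        self.A = set()
        self.base_size = 1
        self._rebuild()

    def _heavy(self):
        return [u for u in range(self.n) if len(self.adj[u]) >= self.t]

    def _rebuild(self):
        # static greedy hitting set: repeatedly take the node adjacent
        # to the most uncovered heavy nodes
        uncovered = set(self._heavy())
        self.A = set()
        while uncovered:
            best = max(range(self.n),
                       key=lambda x: len(uncovered & self.adj[x]))
            self.A.add(best)
            uncovered -= self.adj[best]
        self.base_size = max(1, len(self.A))

    def _patch(self):
        # cover any uncovered heavy node via an arbitrary neighbor
        # (constant recourse per update in the intended setting)
        for h in self._heavy():
            if not (self.adj[h] & self.A):
                self.A.add(next(iter(self.adj[h])))
        if len(self.A) >= 2 * self.base_size:
            self._rebuild()  # reset once the set has doubled

    def update(self, u, v, insert=True):
        if insert:
            self.adj[u].add(v); self.adj[v].add(u)
        else:
            self.adj[u].discard(v); self.adj[v].discard(u)
        self._patch()
```

The invariant after every update is that each heavy node has a neighbor in the maintained set; the scan in `_patch` is written for clarity, not for the stated update-time bound.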

Note that this hitting set problem can be seen as a set cover instance in which each set consists of exactly t neighbors of a heavy node, where t is the degree threshold. Dynamic set cover approximation has received significant attention in recent years (e.g. [abboud2019, BHN2019, gupta2017, BHNW2021]). The most relevant result to our setting is a fully-dynamic set cover algorithm by [gupta2017]. However, we cannot use their result directly, as their polynomial-time algorithm only leads to constant amortized recourse, and their update-time guarantees are also only amortized. (Of course, the goal in [gupta2017] is a generic set cover approximation algorithm, which is why it is not comparable to our specialized algorithm. Also, the other set cover algorithms cited have approximation ratios depending on parameters that can be large in our case.) Here we use a simple approach that utilizes the properties of our hitting set instance, which is enough to get worst-case recourse bounds.

We next move on to explaining how to design algebraic data structures that maintain the distances required for this algorithm more efficiently.

2.2 Dynamic Pairwise Bounded Distances via Matrix Inverse

As outlined before, to obtain our desired bounds we need to efficiently maintain bounded pairwise distances between some sets S and T of nodes, where the sets change over the course of the updates. We additionally use the fact that, even though these sets change, they do not change substantially after each update, because of our low-recourse hitting sets.

Previous algebraic distance algorithms only considered maintaining pairwise distances for (a) fixed sets S and T that do not change [Sankowski05, BrandNS19, BHGWW2021], or (b) completely new sets S and T in each iteration [BrandN19]. Case (a) can be used if we randomly sample a hitting set and keep this set fixed over the sequence of edge updates; this is only useful for randomized algorithms against oblivious adversaries. Case (b) can be used when constructing a new hitting set from scratch after each update, which works against adaptive adversaries. However, querying the distances for a new set in each update turns out to be slower and does not allow us to match the conditional lower bound for approximate SSSP. More specifically, the complexities in these cases are as follows:

Lemma 2.2 ([Sankowski05, BrandNS19]).

There exist randomized dynamic algorithms that, after preprocessing a given unweighted directed graph G and a distance bound h, support edge updates to G. (The algebraic algorithms for bounded distances all work on directed graphs; the restrictions to undirected graphs in our main results stem from the emulator arguments that extend the results to unbounded distances.)

The algorithm supports queries that, for any given sets S, T ⊆ V, return the pairwise h-bounded distances between S and T. [BrandNS19]

The algorithm can also support maintaining the pairwise distances for some fixed sets S and T at the cost of an increased update time. [Sankowski05]

In our SSSP algorithm, as discussed in Section 2.1, S is the hitting set and T is the whole vertex set. For the current ω, querying a fresh set after each update leads to a query time that exceeds the conditional lower bound for SSSP of n^1.529.

We improve the query time for the case where the sets S and T change slowly with each update. This allows us to match the conditional lower bound. We do this by reducing dynamic h-bounded pairwise distances for slowly changing sets to existing dynamic matrix inverse algorithms.

We start by explaining the existing reduction from dynamic distances to matrix inverse. Afterward, we explain how to use this reduction for maintaining pairwise distances.

Reducing distances to matrix inverse.

Previous dynamic algebraic algorithms that maintain distances work by reducing the task to the so-called "dynamic matrix inverse" problem [Sankowski05, BrandNS19, BrandN19, BrandS19, GuR21]. Here the dynamic algorithms are given some matrix M and must support updates that change any entry of M and queries that return any entry of M^{-1}. Such dynamic matrix inverse algorithms can be used to maintain distances in dynamic graphs via the following reduction: Given an adjacency matrix A, note that (A^k)_{u,v} is the number of (not necessarily simple) paths from u to v of length k. Specifically, the smallest k with (A^k)_{u,v} ≠ 0 is the distance from u to v.

We can maintain these powers of A dynamically via the following reduction. For any h, we write F[X]/⟨X^{h+1}⟩ for the ring of polynomials over a field F (with variable X) modulo X^{h+1}, i.e. polynomials over F where we truncate all monomials of degree larger than h. We now consider matrices whose entries are from this ring.

Let M be the matrix with 1 on the diagonal and entry −X for every edge (u,v), i.e. M = I − X·A. Note that M^{-1} = Σ_{k=0}^{h} X^k · A^k. To see this, consider

(I − X·A) · Σ_{k=0}^{h} X^k · A^k = I − X^{h+1} · A^{h+1} = I,

because of the entrywise mod X^{h+1}. Thus a dynamic algorithm that maintains the inverse of the matrix M is able to maintain distances up to length h in dynamic graphs: the distance from u to v is the smallest k such that the coefficient of X^k in (M^{-1})_{u,v} is nonzero. The task of maintaining pairwise distances for S × T thus reduces to the task of maintaining the submatrix (M^{-1})_{S,T} for the dynamic matrix M. (Throughout, we use (M)_{S,T} for sets S, T and a matrix M to denote the submatrix consisting of rows with index in S and columns with index in T.)
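The path-counting idea behind this reduction can be sketched directly, using plain matrix powers rather than the dynamic matrix-inverse machinery (this is the slow, illustrative version; the function name and interface are hypothetical):

```python
def bounded_distances(n, edges, h):
    """h-bounded distances via path counting: (A^k)[u][v] counts
    length-k walks, and dist(u, v) is the smallest k with a
    nonzero count."""
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] = 1  # directed edge u -> v
    dist = [[0 if u == v else None for v in range(n)] for u in range(n)]
    P = [[1 if u == v else 0 for v in range(n)] for u in range(n)]  # A^0
    for k in range(1, h + 1):
        # P = A^k; entry (u, v) counts (not necessarily simple) walks
        P = [[sum(P[u][w] * A[w][v] for w in range(n)) for v in range(n)]
             for u in range(n)]
        for u in range(n):
            for v in range(n):
                if dist[u][v] is None and P[u][v] > 0:
                    dist[u][v] = k
    return dist  # None means the distance exceeds h
```

The algebraic data structures achieve the same effect by maintaining (I − X·A)^{-1} over the truncated polynomial ring, so that all h path counts are read off one inverse.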

Submatrix maintenance.

In this work, we show how to extend the previous algorithms to efficiently maintain a submatrix (M^{-1})_{S,T} of an inverse for some dynamic matrix M. Obviously, any dynamic matrix inverse algorithm can maintain such a submatrix by just querying all its entries after each change to M, but this would not be fast enough for our algorithm to match the conditional lower bounds. We now show a blackbox reduction that is able to maintain the submatrix more efficiently. For simplicity, assume M is a matrix over some field F. (The reduction does extend to matrices over the truncated polynomial ring required to maintain distances, but for simplicity we consider a field instead.) Consider an entry update where we add some c ∈ F to entry (i,j) of M, i.e. we add c·e_i·e_j^T to M. The Sherman-Morrison identity [ShermanM50] states

(M + c·e_i·e_j^T)^{-1} = M^{-1} − (c · M^{-1}·e_i·e_j^T·M^{-1}) / (1 + c·(M^{-1})_{j,i}).

Note that M^{-1}·e_i and e_j^T·M^{-1} are just the i-th column and the j-th row of M^{-1}. So to obtain the S × T submatrix of the new inverse we just need to compute

(M^{-1})_{S,T} − (c / (1 + c·(M^{-1})_{j,i})) · (M^{-1})_{S,i} · (M^{-1})_{j,T}.

This allows us to maintain (M^{-1})_{S,T} throughout all updates by just querying the i-th column (restricted to S) and the j-th row (restricted to T) from some dynamic matrix inverse algorithm, instead of the entire submatrix. Updates to the set S (or T) are handled by querying the missing row (or column), i.e. adding some node v to S just requires us to query the row (M^{-1})_{v,T} in order to know the new submatrix. In Section 4, we use these ideas to formally prove the following results by reducing to existing dynamic matrix inverse algorithms from [Sankowski04, Sankowski05, BrandNS19]:
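The rank-1 submatrix update above can be verified with a small self-contained sketch over the rationals (a field stand-in for the truncated polynomial ring; `invert` is a plain Gauss-Jordan helper, not the dynamic data structure):

```python
from fractions import Fraction

def invert(M):
    """Gauss-Jordan inverse over the rationals (helper for the sketch)."""
    n = len(M)
    aug = [[Fraction(M[i][j]) for j in range(n)] +
           [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def updated_submatrix(Minv, S, T, i, j, c):
    """Sherman-Morrison step: after adding c to entry (i, j) of M,
    recompute only the S x T submatrix of the inverse from the
    i-th column and j-th row of the old inverse."""
    denom = 1 + c * Minv[j][i]
    return [[Minv[s][t] - c * Minv[s][i] * Minv[j][t] / denom
             for t in T] for s in S]
```

Note that `updated_submatrix` touches only one column and one row of the old inverse, which is exactly the saving the reduction exploits.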

Lemma 2.3.

There exist two deterministic dynamic algorithms that, after preprocessing a given unweighted directed graph G and sets S, T ⊆ V, support edge updates to G and set updates to S and T (i.e. adding or removing a node of S or T). After each edge update, the algorithms return the h-bounded pairwise distances of S × T in G.

The two algorithms differ in their tradeoffs between preprocessing time, edge-update time, and the set-update times for S and for T. For both algorithms, all update times are worst-case.

Part of the complexity of Lemma 2.3 stems from determinism. With a standard Monte-Carlo randomization technique (see e.g. [Sankowski05]), one could improve the complexity of Lemma 2.3 to be comparable to the previous randomized bounds stated in Lemma 2.2.

2.3 Sparsification for Approximate Distances, APSP, and MSSP.

Deterministic -emulator for distances.

Next, we outline how we can improve the update time in the case of -distances. For this purpose, we construct a sparser -emulator, which again is inspired by the purely additive construction of [DorHZ00] in the static setting, by making the following modifications to the algorithm described in Section 2.1 above: we change the degree threshold, and, more importantly, rather than adding edges corresponding to bounded distances from hitting-set nodes to all nodes, we only add pairwise edges between hitting-set nodes (with bounded distance). This has two advantages: first, we need to run Dijkstra on a sparser graph; second, the algebraic steps can be performed much faster when we only need to maintain pairwise distances between two sets of sublinear size, rather than from a set of sublinear size to all nodes in the graph.

It is easy to see that this emulator has size . There are edges corresponding to low-degree nodes, and pairwise edges corresponding to the hitting set. The stretch argument follows a similar structure to the one for the -emulator. Again, for each pair of nodes , we divide the shortest path into segments of equal length . The main difference is that here we consider the first and last heavy nodes on each segment, which we denote by and . Then there must be hitting-set nodes that are adjacent to and respectively. These two nodes have bounded distance, and thus we have added an edge between them in the emulator. The path using this edge will lead to either a multiplicative stretch for this segment, or an additive term of for the (at most) one smaller segment.

Note that this algorithm does not lead to better bounds for single-source distances, since querying -bounded single-source distances still takes too much time using the algebraic techniques. However, if we are interested in the -bounded distance between a fixed pair of nodes and , the algebraic approach leads to better bounds. In this case, we get an improved update time, as outlined in Section 2.2, which lies below the conditional lower bound for approximate SSSP.

Sparser emulators with applications in APSP and MSSP.

Finally, we give another algorithm that leads to improvements when we need to maintain distances from many sources, such as all-pairs distances.

We start by describing how we maintain multi-source distances from many sources with an update time almost the same as the time required for single-source distances. For this purpose, given a set of sources , we maintain two emulators simultaneously: we first maintain a -emulator of , and then statically construct a much sparser -emulator on top of it. The key idea here is to use the first emulator in order to construct the second, sparser one more efficiently. We use a static deterministic emulator algorithm (based on [RTZ2005, TZ2006emulators]) that can construct such an emulator in time . This leads to a fully-dynamic algorithm for maintaining -emulators deterministically with worst-case update time .

Then, similar to before, in each update we find distance estimates for pairs in by computing the minimum of the following estimates: i) -bounded distances from all sources maintained by an algebraic data structure, ii) distances from all sources on emulator , computed in time .

This lets us maintain -MSSP from up to sources in almost (up to a lower-order factor) the same running time as maintaining distances from a single source, by computing multi-source distances statically on this very sparse emulator and querying small distances from the algebraic data structure. This approach naturally extends to maintaining all-pairs distances deterministically and yields a worst-case update time of by choosing the number of sources appropriately.

Further Applications.

Our sparse emulators can also be used to maintain a (nearly) -approximation of the diameter. Our algorithm is an adaptation of the dynamic algorithm of [BrandN19], which is in turn based on an algorithm by [RV13]. At a high level, we need to query (approximate) multi-source distances from three sets of bounded size. We show that our emulators can be used to maintain such approximate distances much more efficiently than the data structures of [BrandN19]. Note that the diameter approximation of [RV13] is inherently randomized.

Finally, we maintain a data structure with worst-case subquadratic update time that supports sublinear all-pairs -approximate distance queries. Our algorithm is based on ideas of [RZ12, BrandN19] that utilize well-known path hitting techniques (e.g. [UllmanY91]). In order to get improved bounds we again use our sparse -emulators. We need to handle some technicalities both in the algorithm and its analysis introduced by the additive factor , combined with -bounded distances maintained in the algorithm of [BrandN19] for an appropriately chosen parameter .

3 Approximate Distances via Emulators

In this section we focus on constructing emulators with various tradeoffs and describing how they can be combined with the algebraic data structure of Lemma 2.3 for obtaining (conditionally) optimal dynamic single-source distance approximations.

First, assuming that we have a low-recourse dynamic hitting set, we maintain a -emulator and explain how it can be used for maintaining -SSSP. We then show how to maintain a sparser -emulator that can be used for maintaining - distances more efficiently.

We will then move on to give a deterministic algorithm that maintains low-recourse hitting sets, and the required distances from the nodes in this hitting set to other nodes.

3.1 Deterministic -Emulators and -SSSP

In this section we describe how to maintain -SSSP with conditionally optimal worst-case update time. We start by describing how to maintain a -emulator, assuming that we have a low-recourse hitting set, and can compute bounded-hop distances from elements in this set. The algorithm is summarized in Algorithm LABEL:alg:2_emulator. Assume that we are given two functions:

  • , which returns a dynamically maintained hitting set satisfying Lemma 2.1. We provide an efficient algorithm for this function in Section 3.3.

  • , which can query -bounded distances between pairs in as specified in Lemma 2.3. In Section 4, we formally explain how these distances can be maintained and then queried for our low-recourse hitting sets.
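Given stand-ins for these two primitives, the emulator construction itself can be sketched as follows (a hedged pure-Python sketch: all function names and parameters are illustrative, and a hop-bounded BFS replaces the algebraic bounded-distance data structure):

```python
from collections import deque

def bfs_bounded(adj, src, h):
    """Distances from src up to hop bound h (stand-in for the algebraic
    bounded-distance data structure of Lemma 2.3)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if dist[u] == h:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def build_emulator(adj, hitting_set, deg_threshold, h):
    """Weighted emulator: keep all edges of low-degree nodes, and add
    weighted shortcut edges from every hitting-set node to all nodes
    within h hops (edge weight = hop distance)."""
    H = {u: {} for u in adj}
    for u in adj:
        if len(adj[u]) < deg_threshold:          # low-degree: keep edges
            for v in adj[u]:
                H[u][v] = H[v][u] = 1
    for p in hitting_set:                        # shortcuts from pivots
        for v, d in bfs_bounded(adj, p, h).items():
            if v != p:
                H[p][v] = H[v][p] = min(H[p].get(v, d), d)
    return H
```

On a path 0-1-2-3-4 with hitting set {2}, threshold 3, and h = 2, the emulator keeps all path edges and adds shortcuts of weight 2 from node 2 to nodes 0 and 4.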


Observe that even though we start with an unweighted graph, we need to add weighted edges to the emulator (with weight corresponding to the distance between the endpoints). We note that a similar but randomized version of this emulator construction (working only against an oblivious adversary) was used in [HKN2013] for maintaining approximate shortest paths decrementally. For completeness, we provide a full analysis of the properties of this emulator here. Assuming that we can maintain a hitting set satisfying Lemma 2.1 and -bounded distances in , Algorithm LABEL:alg:2_emulator can be used to show the following theorem:

Theorem 3.1.

Given an unweighted graph , , we can deterministically maintain a -emulator with size . The worst-case update time is , and preprocessing time is .


The size analysis is straightforward. We set the degree threshold, add edges for sparse nodes, and, by Lemma 2.1, bound the number of nodes in the hitting set and thus the overall number of edges added for all of them.

We next move on to the stretch analysis. Consider any pair of nodes . Let be the shortest path between and in . If all the nodes on this path have degree less than , then this whole path exists in .

We divide into segments of equal length , with possibly one segment of smaller length. Note that if , there will be only one segment. We show that for all but one segment there is a path with stretch in the emulator. Consider the -th segment that we denote by , and let be the shortest path between and . As before, if all the nodes on are low-degree, then all the edges in the segment are in . Otherwise, let be the first heavy node on . Since is a heavy node, by Lemma 2.1 there is a node adjacent to . First assume that . Note that since and is a neighbor of and on the shortest path between and , we have . Thus we have added an emulator edge between and . Consider the path in going through . For the length of this path we have,


Now assume that and hence . Then we have added an edge from to in the emulator. Using the same analysis as above, and by inequality 4 we have .

Since is the shortest path in , the overall multiplicative stretch is and there will be one additive term of .

The running time (update time and preprocessing) follows from Lemma 2.1 for maintaining a low-recourse hitting set , and Lemma 2.3 for maintaining -bounded distances in , by setting and . Note that we also need to maintain edges incident to heavy nodes that overlap with any node added to , but this only takes time per update. ∎

Using an emulator for maintaining -SSSP.

Given an unweighted graph , we first maintain a -emulator for . Given a single source and , we can now maintain the distances by:

  1. Using the algebraic data structure of Lemma 2.3: -hop bounded distances from on ,

  2. After each update, statically computing SSSP on in time.
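The combination of the two estimates can be sketched as follows (hypothetical helper names; a hop-bounded BFS and a textbook Dijkstra stand in for the algebraic data structure and the static emulator computation, respectively):

```python
import heapq
from collections import deque

def bounded_bfs(adj, s, h):
    """Exact distances from s up to hop bound h (stand-in for estimate (i))."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if dist[u] == h:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def dijkstra(H, s):
    """Textbook Dijkstra on the weighted emulator H (estimate (ii))."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in H.get(u, {}).items():
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def approx_sssp(adj, H, s, h):
    """Final estimate per node: min of the exact h-bounded distance in G
    and the distance in the emulator H."""
    exact = bounded_bfs(adj, s, h)
    emul = dijkstra(H, s)
    inf = float('inf')
    return {v: min(exact.get(v, inf), emul.get(v, inf))
            for v in set(adj) | set(H)}
```

Close nodes are answered exactly by the bounded search; far nodes fall back to the emulator estimate.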

See 1.1


The distance estimate stored at each node is the minimum of the two estimates (i) and (ii) described above. To see the correctness (stretch), we simply observe that for any node where , we have:

Hence a -approximate estimate is returned in step (ii) above. On the other hand, if , then we directly maintained the exact distance in step (i).
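In symbols, the first case amounts to the following standard calculation (a hedged reconstruction: we write β for the emulator's additive term and use the illustrative threshold d_G(s,v) ≥ β/ϵ, since the paper's exact parameters are elided in this rendering):

```latex
d_H(s,v) \le (1+\epsilon)\, d_G(s,v) + \beta
        \le (1+\epsilon)\, d_G(s,v) + \epsilon\, d_G(s,v)
        = (1+2\epsilon)\, d_G(s,v)
\qquad \text{whenever } d_G(s,v) \ge \beta/\epsilon .
```

Rescaling ϵ by a constant factor then yields a (1+ϵ)-approximation.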

Next, we analyze the running time. The update time for maintaining the emulator is as described in Theorem 3.1. Then we use Lemma 2.3 again for maintaining -hop bounded distances from the source. Additionally, we statically compute single-source distances on an emulator of size in time. ∎

3.2 Deterministic -Emulator and - Distances

In this section, we give another emulator-based algorithm that lets us maintain the approximate distance from a given source to a given destination with a better update time than the bound we showed for SSSP. We maintain a sparser emulator with a slightly larger additive stretch that supports faster computation of the approximate distance estimate. In particular, we maintain an emulator of size .

Compared to the emulator of Section 3.1, for this emulator construction we need to maintain bounded distances with our algebraic data structure for a smaller number of pairs of nodes, which increases efficiency. This, combined with the fact that our emulators are sparser, leads to a faster algorithm for maintaining -approximate distances.


We start by maintaining a sparse emulator with slightly larger additive stretch term. The algorithm is summarized in Algorithm LABEL:alg:4_emulator.
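The modified construction can be sketched as follows (a hedged pure-Python sketch: function names, the hop bound, and the degree threshold are illustrative, and a plain hop-bounded BFS again stands in for the algebraic data structure, which now only needs distances between hitting-set pairs):

```python
from collections import deque

def bfs_bounded(adj, src, h):
    """Hop-bounded BFS (stand-in for the algebraic data structure)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if dist[u] == h:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def build_pairwise_emulator(adj, hitting_set, deg_threshold, h):
    """Sparser emulator: keep edges of low-degree nodes, and add weighted
    edges only BETWEEN hitting-set nodes at hop distance at most h
    (instead of from hitting-set nodes to all nodes)."""
    H = {u: {} for u in adj}
    pivots = set(hitting_set)
    for u in adj:
        if len(adj[u]) < deg_threshold:           # low-degree: keep edges
            for v in adj[u]:
                H[u][v] = H[v][u] = 1
    for p in pivots:                              # pairwise shortcuts only
        for v, d in bfs_bounded(adj, p, h).items():
            if v in pivots and v != p:
                H[p][v] = H[v][p] = min(H[p].get(v, d), d)
    return H
```

The only change from the earlier construction is the `v in pivots` filter, which is what makes both the emulator and the algebraic bookkeeping sparser.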

Assuming that we can maintain a low-recourse hitting set and -bounded distances between pairs , Algorithm LABEL:alg:4_emulator leads to an emulator with the following guarantees:

Theorem 3.2.

Given an unweighted graph , , we can deterministically maintain a -emulator of size with worst-case update time of , and preprocessing time of .


It is easy to see that Algorithm LABEL:alg:4_emulator leads to an emulator of size . This follows from the fact that we add at most pairwise edges between nodes in , and we add edges for each non-heavy node.

Next we move on to stretch analysis. Consider any pair of nodes . We let be the shortest path between and .

We divide into segments of equal length at most , and possibly one smaller segment that we handle separately (which could be the only segment if ). We show that for each segment there is a path with stretch in the emulator. Consider the -th segment that we denote by , and let the corresponding shortest path between and be . As before, if all the nodes on have degree less than , then all the edges in the segment are in .

Otherwise, let be the first heavy node, and let be the last (furthest from ) heavy node on . By Lemma 2.1 we know that there are nodes where is adjacent to and is adjacent to . The case where is an easy special case, so let us assume . First assume . Since are on the shortest path between and , we have and thus we have added an emulator edge between and . Consider the path in going through . For the length of this path we have,


Now assume that there is a segment . We have again added an edge in . Using the same analysis as before, by inequality 10, we have . Hence the overall multiplicative stretch is and there will be at most one additive factor of .

The running time analysis now follows by applying Lemma 2.3 to maintain -bounded distances from the low-recourse hitting set obtained by Lemma 2.1. For this we use the second algorithm of Lemma 2.3, and set , where . ∎

We can now use the emulator of Theorem 3.2 to maintain -approximate -distances using the same approach as in Section 3.1. We set the distance estimate to be the minimum of the estimate obtained by maintaining -bounded distances on (using the second algorithm in Lemma 2.3) and the one obtained by statically running Dijkstra's algorithm on the emulator. Formally, we have:

See 1.2


We use Theorem 3.2 to maintain a -emulator with worst-case update time of . Then we use Lemma 2.3 again for maintaining -bounded distances. Additionally, we statically compute single-source distances on an emulator of size . The stretch analysis is the same as in Theorem 1.1. ∎

3.3 Deterministic Dynamic Low-Recourse Hitting Sets

In this section, we focus on the deterministic maintenance of the hitting sets in order to efficiently maintain the emulators described in the previous two sections.

Let us first review a very simple static deterministic construction and later discuss how to obtain a low-recourse dynamic algorithm. Given a graph , we create a sparse subgraph of with size , which we denote by as follows: For any heavy node (a node with degree at least ), we choose an arbitrary set of neighbors of , denoted by .

We then statically compute an -approximate hitting set for deterministically (i.e., one whose size exceeds the size of a minimum hitting set by a factor of ). Consider the following simple greedy algorithm: in each step we add to the node that hits (i.e. is incident to) the maximum number of uncovered heavy nodes. It is well-known that this algorithm leads to the following guarantees.

Lemma 3.3 (Greedy Hitting Set, [johnson1974]).

Given a graph , and a degree threshold , we can deterministically construct a hitting set with size in time, such that every heavy node has a neighbor in . (This lemma can also be seen as an -approximation greedy algorithm for set cover, where the sets that we need to cover are formed by the set of nodes incident to each heavy node.)

It is easy to see that the hitting set obtained has size : by a simple sampling procedure we know that there exists a hitting set of size , and we are using an -approximate hitting set algorithm. A tighter analysis (e.g. see [BHK2017]) will lead to a total size of .
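A minimal sketch of this greedy procedure, under the assumption that the input maps each heavy node to its (sampled) neighbor set; names are illustrative:

```python
def greedy_hitting_set(neighbors_of_heavy):
    """Greedy hitting set: repeatedly pick the node that hits (is incident
    to) the most not-yet-covered heavy nodes, until all are covered."""
    uncovered = set(neighbors_of_heavy)
    hitters = set()
    while uncovered:
        # Count, for each candidate node, how many uncovered heavy nodes
        # it would hit.
        count = {}
        for hv in uncovered:
            for u in neighbors_of_heavy[hv]:
                count[u] = count.get(u, 0) + 1
        best = max(count, key=count.get)
        hitters.add(best)
        uncovered = {hv for hv in uncovered
                     if best not in neighbors_of_heavy[hv]}
    return hitters
```

For instance, if heavy nodes a, b, c have neighbor sets {1,2}, {2,3}, {2}, the greedy picks node 2 and covers all three in one step.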

Next we move on to a dynamic maintenance of hitting sets. Our goal is to prove the following lemma.

See 2.1

First, we start with an algorithm with amortized constant recourse, and then turn it into an algorithm with worst-case constant recourse.

The amortized low-recourse hitting set algorithm is as follows:

  • Preprocess a hitting set in time, using Lemma 3.3

  • After every updates, we reset, i.e. we construct a new hitting set from scratch.

  • When an edge is deleted: if either of the endpoints, w.l.o.g , is a high-degree node, and is the only node covering , add to .

  • When an edge is inserted: if one endpoint, w.l.o.g , becomes high-degree and it is not covered by any other node, add to .

It is easy to see that this construction has low amortized recourse, since over a sequence of updates, only a few nodes are added to the hitting set. The amortized update time is , since we reset the algorithm only every updates, and the input of the hitting set algorithm has size . Hence the amortized time per update is . We next turn this into a worst-case recourse bound using a standard reduction.
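The amortized scheme above can be sketched as follows (a hedged pure-Python sketch: class and method names are illustrative, the rebuild is a simplified stand-in for the greedy of Lemma 3.3, and the graph is a dict of neighbor sets):

```python
class AmortizedHittingSet:
    """Amortized low-recourse maintenance: patch locally after each
    update, and rebuild from scratch every `period` updates."""

    def __init__(self, adj, deg_threshold, period):
        self.adj = adj
        self.tau = deg_threshold
        self.period = period
        self.steps = 0
        self.rebuild()

    def rebuild(self):
        # Simplified stand-in for the static construction of Lemma 3.3:
        # cover every heavy node by one of its neighbors.
        self.hitters = set()
        for u in self.adj:
            self._patch(u)

    def _patch(self, u):
        # If u is heavy and no current hitter is adjacent to it,
        # add an arbitrary neighbor of u to the hitting set.
        if len(self.adj[u]) >= self.tau and not (self.hitters & self.adj[u]):
            self.hitters.add(next(iter(self.adj[u])))

    def update_edge(self, u, v, insert):
        if insert:
            self.adj[u].add(v); self.adj[v].add(u)
        else:
            self.adj[u].discard(v); self.adj[v].discard(u)
        self.steps += 1
        if self.steps % self.period == 0:
            self.rebuild()        # full reset, amortized over `period` updates
        else:
            self._patch(u); self._patch(v)
```

Between resets, each update adds at most two nodes to the hitting set, which is what gives the amortized constant recourse.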

Worst-case constant recourse bound.

In order to ensure a worst-case recourse bound, we use a standard technique in which we run two copies of the algorithm at the same time. One copy maintains the hitting set while the other copy gradually performs the reset operation, spread out over a sequence of updates.

Proof of Lemma 2.1.

We maintain two copies of the previously described hitting set algorithm in parallel. We denote the hitting sets maintained by these algorithms by and