1 Introduction
From the procedural point of view, an algorithm is a set of instructions that outputs the result of a computational task for a given input. This static viewpoint neglects that computation is often not a one-time task, with input data in successive runs of the algorithm being very similar. The idea of dynamic graph algorithms is to explicitly model the situation that the input is constantly undergoing changes and the algorithm needs to adapt its output after each change to the input. This paradigm has been applied with great success to the domain of graph algorithms. The major goal in designing dynamic graph algorithms is to spend as little computation time as possible on processing each update to the input graph.
Despite the progress on dynamic graph algorithms in recent years, many state-of-the-art solutions suffer from at least one of the following restrictions: (1) Many dynamic algorithms only support one type of update, i.e., they are incremental (supporting only insertions) or decremental (supporting only deletions); fully dynamic algorithms support both types of updates. (2) Many dynamic algorithms only achieve amortized update time guarantees, i.e., the stated bound only holds “on average” over a sequence of updates, with individual updates possibly taking significantly more time than the stated amortized bound; worst-case bounds also hold for individual updates, which is relevant, for example, in real-time systems. (3) Many dynamic algorithms are randomized. (i) On the one hand, this means these algorithms only give probabilistic guarantees on correctness or running time that do not hold in all cases. (ii) On the other hand, randomized algorithms often do not allow the “adversary” creating the sequence of updates to be adaptive in the sense that it may react to the outputs of the algorithm.^1 (Footnote 1: This type of adversary is called an “adaptive online adversary” in the context of online algorithms [BenDavidBKTW94]. Note that despite being allowed to choose the next update in its sequence based on the outputs of the algorithm so far, this adversary may not explicitly observe the internal random choices of the algorithm.) This is because the power of randomization can in many cases only be unleashed if the adversary is oblivious to the outputs of the algorithm, which guarantees probabilistic independence of the random choices made by the algorithm. Deterministic algorithms avoid both of these issues.
While these restrictions are not prohibitive in certain settings, they obstruct the general-purpose usage of dynamic algorithms as “black boxes”. Thus, the “gold standard” in the design of dynamic algorithms should be deterministic fully dynamic algorithms with worst-case update time bounds. To date, only a limited number of problems admit such algorithms with almost optimal time bounds. To the best of our knowledge, this is the case only for approximate maximum fractional matching and minimum vertex cover [BhattacharyaHN17], edge coloring [BhattacharyaCHN18], approximate densest subgraph [SawlaniW20], connectivity [ChuzhoyGLNPS20], minimum spanning tree [ChuzhoyGLNPS20], and edge connectivity [JinS21].
In this paper, we add an important problem to this list: approximate single-source distances in unweighted, undirected graphs. More specifically, our main result is a deterministic fully dynamic algorithm that after each edge insertion or deletion outputs an approximation of the distances from a given source node to all other vertices in worst-case update time^2 (Footnote 2: To simplify the presentation of running time bounds in the introductory part of this paper, we assume that the approximation parameter is a constant; we make it explicit in our theorem statements. Throughout this paper, we use soft-O notation to suppress terms that are polylogarithmic in the number of nodes of the graph.), or, more generally, in a time bound parameterized by the matrix-multiplication exponent.^3 (Footnote 3: Two matrices can be multiplied in matrix-multiplication time [Williams12, Gall14, AlmanW21]; we follow the notation of [GallU18] for the complexity of rectangular matrix multiplication.) Note that our update time matches a conditional lower bound for this problem (up to subpolynomial factors) [BrandNS19]. We also match the randomized worst-case time bound for approximate single-source shortest paths in unweighted undirected graphs implicitly obtained by [BHGWW2021] against an oblivious adversary. Prior to our work, the fastest algorithm for maintaining approximate single-source distances against an adaptive adversary had a higher update time, albeit applying to more general graphs [BrandN19].
We also give deterministic algorithms for maintaining approximate distances from multiple sources, and thus all-pairs distances, with correspondingly higher update times. Note that these almost match the trivial lower bounds for explicit distance maintenance (up to subpolynomial factors). This further matches (up to subpolynomial factors) the randomized worst-case time bound for maintaining approximate all-pairs distances in unweighted undirected graphs obtained by [BrandN19] against an adaptive adversary.^4 (Footnote 4: For general graphs, [BrandN19] obtain a somewhat higher update time.) Our techniques also lead to improved fully dynamic bounds for several other distance problems, such as st distances, near-additive emulators, and diameter approximation. See Section 1.1 for details of our results.
We believe that another virtue of our algorithms is that they are conceptually simpler, with a more direct approach, than prior works. In particular, our results follow from a combination of algebraic distance maintenance data structures and near-additive emulators. While the state-of-the-art algebraic data structures based on maintaining matrix inverses are rather involved, we use them only to optimize the exponents in our running times. If we were fine with a slightly worse dependence, then – since we only use the algebraic data structures to maintain relatively small distances – we could resort to the simpler and well-known path-counting approach for directed acyclic graphs (DAGs) that was introduced by King and Sagert [KingS02] and subsequently refined by Demetrescu and Italiano [DemetrescuI00] (see Appendix A for details on this approach). We believe that overall this results in a fairly accessible combination of algebraic and combinatorial approaches.
1.1 Our Results
In this section, we summarize our main results for deterministic fully dynamic distance computation (st, SSSP, APSP, and MSSP supporting distance queries) and emulators. A summary of our algorithms for maintaining approximate distances with their worst-case update time guarantees can be found in Table 1. In addition to these deterministic results, our techniques also give improved randomized solutions for diameter approximation and subquadratic update-time APSP distance oracles^5 (Footnote 5: By a “distance oracle”, we mean a data structure that supports fast queries. Our goal – unlike many static algorithms – is not optimizing the space of this data structure.) with sublinear query time. We next discuss each of these results and compare them with related work. Throughout this paper we assume that we are given an unweighted graph with n nodes and m edges.
Approx.    Type    Worst-case update time
           SSSP
           st
           APSP
           MSSP
Optimal Deterministic SSSP.
Our first result is a conditionally optimal deterministic algorithm for maintaining single-source distances. Formally, we show the following.
Theorem 1.1.
Given an unweighted undirected graph and a single source , and , there is a deterministic fully dynamic data structure for maintaining distances from with:

- Preprocessing time of , where .
- Worst-case update time of for any . For current bounds on and the best choice of , this is .
This running time matches a conditional lower bound stated in [BrandNS19], meaning it is unlikely that an algorithm can improve our running time by any polynomial factor. More specifically, the lower bound states that no dynamic approximate SSSP algorithm on unweighted undirected graphs can run in worst-case time per edge update for any constant . Prior to this work,^6 (Footnote 6: In general, when we state bounds from previous work, we sometimes hide terms unless our dependence is substantially different from those results.) the fully dynamic algorithms for single-source distances were randomized and either much slower, with higher time per update [BrandN19] (albeit for more general classes of graphs), or only worked against an oblivious adversary, as implied by the techniques of [BHGWW2021].^7 (Footnote 7: It is worth noting that the techniques in [BHGWW2021] are path reporting, while our deterministic SSSP algorithm is suitable for distance computation. It seems challenging to obtain path reporting deterministically. Note that, similar to our work, the techniques of [BHGWW2021] are also restricted to unweighted, undirected graphs.) Additionally, we remove the superpolynomial dependence in the update time inherent to the approach of [BHGWW2021].
Deterministic st distances.
Next, we show that our techniques lead to a faster algorithm for maintaining the distance between a fixed pair of nodes s and t. Specifically, we give an algorithm for this problem with a worst-case update time smaller than the time required for single-source distances (based on the conditional lower bound).
Theorem 1.2.
Given an unweighted undirected graph and a pair of nodes and , there is a fully dynamic data structure for maintaining distances between and deterministically with:

- Preprocessing time of , where .
- Worst-case update time of for any parameter , which is for current .
When relaxing determinism to randomization against adaptive adversaries, the previously fastest fully dynamic algorithms for distances had higher update time [BrandNS19] and maintained the distance exactly for unweighted directed graphs. Our algorithm improves upon this time bound and, unlike all previous work ([Sankowski05, BrandNS19, BrandN19, BHGWW2021]), is deterministic. Moreover, this bound has only a small gap to the conditional lower bound for approximate dynamic distances on unweighted undirected graphs [BrandNS19].^8 (Footnote 8: The conditional lower bound states that no algorithm can run in a polynomially smaller time for any constant . Thus no algorithm using current fast matrix multiplication algorithms can beat this time by a polynomial factor.)
Sparse Emulators.
Our deterministic dynamic algorithms are based on several emulator constructions with various trade-offs. We show here that we can also maintain more general emulators, which may be of independent interest. We start by defining emulators and spanners.
Definition 1.3.
Given an input graph G, an (α, β)-emulator of G is a graph H (that is not necessarily a subgraph of G and might be weighted) in which

d_G(u, v) ≤ d_H(u, v) ≤ α · d_G(u, v) + β for all pairs of nodes u, v.

If the graph H above is a subgraph of G, then we call H an (α, β)-spanner of G. Note that a spanner of an unweighted graph remains unweighted.
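Definition 1.3 can be read as a pairwise check on the two distance functions. The sketch below (an illustration, with hypothetical precomputed distance dictionaries) verifies that the 4-cycle minus one edge is a (1, 2)-spanner of the 4-cycle:

```python
def is_emulator(dG, dH, alpha, beta):
    # Definition 1.3: d_G(u,v) <= d_H(u,v) <= alpha*d_G(u,v) + beta
    # for all pairs; dG and dH map pairs (u, v) to distances.
    return all(dG[p] <= dH[p] <= alpha * dG[p] + beta for p in dG)

# 4-cycle 0-1-2-3-0; H drops the edge (2, 3), leaving the path 2-1-0-3.
dG = {(0, 1): 1, (1, 2): 1, (2, 3): 1, (0, 3): 1, (0, 2): 2, (1, 3): 2}
dH = {(0, 1): 1, (1, 2): 1, (2, 3): 3, (0, 3): 1, (0, 2): 2, (1, 3): 2}
assert is_emulator(dG, dH, 1, 2)       # a (1, 2)-spanner of the cycle
assert not is_emulator(dG, dH, 1, 1)   # but not a (1, 1)-spanner
```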
In this paper, we obtain the following result for maintaining near-additive emulators.
Lemma 1.4.
Given an unweighted, undirected graph , and , we can maintain an emulator of with size , where , deterministically with worst-case update time of . The preprocessing time of this algorithm is .
The static running time for constructing such a near-additive emulator is . For our applications to APSP and MSSP, we are interested in the special case where the emulator is very sparse.^9 (Footnote 9: In static settings there are somewhat more involved algorithms for near-additive emulators that lead to slightly better trade-offs in specific parameter settings (e.g., see [EN2018]). For our applications in distance approximation the current simpler algorithms suffice.) In this special case, our results can be compared to the spanner construction of [BHGWW2021]. They provide a randomized algorithm against an oblivious adversary, whereas we give a deterministic algorithm with a lower worst-case update time. Hence we improve over [BHGWW2021] both in running time and by having a deterministic algorithm. We note, however, that [BHGWW2021] maintain a spanner, whereas we maintain an emulator.
MSSP.
Another implication of our techniques is an algorithm for multi-source distances. In Section 3.6, we give an algorithm that combines our sparse emulator construction with the algebraic techniques to prove the following theorem.
Theorem 1.5.
Given an unweighted, undirected graph , and , and a fixed set of sources , we can maintain approximate distances from deterministically with:

- Preprocessing time of .
- Worst-case update time of .
Hence we can maintain distances from up to sources in almost (up to an factor) the same time as maintaining distances from a single source.
Deterministic near-optimal APSP.
One implication of Theorem 1.5 (obtained by simply setting ) is a deterministic fully dynamic algorithm for maintaining all-pairs shortest paths that nearly (up to an factor) matches the trivial lower bound of time per update for this problem. More formally:
Corollary 1.6.
Given an unweighted, undirected graph , and , we can maintain all-pairs distances deterministically with:

- Preprocessing time of .
- Worst-case update time of .
It is worth mentioning that there is another (simpler) approach to obtaining this bound, which we will discuss in Section 3.7. The previous comparable bounds for this problem either used randomization [BrandN19] or had amortized update times [DemetrescuI04, Thorup04].
Next we discuss two other implications of our multi-source data structure, which unlike our previous bounds are randomized.
Diameter Approximation.
We can maintain an approximation of the diameter in unweighted graphs by using our emulator to run MSSP computations for certain sets of bounded size, based on an algorithm by [RV13]. Formally, we have:
Corollary 1.7.
Given an unweighted graph with diameter , and , we can maintain an estimate of the diameter^10 (Footnote 10: Note that the additive term in the guarantee is only relevant for graphs with very small diameter.) with the following guarantees:

- Preprocessing time of .
- Worst-case update time of , holding with high probability against an adaptive adversary.
Previously, the fastest fully dynamic algorithm for this problem was by [BrandN19], with a higher worst-case update time against an adaptive adversary. We get better bounds by combining our sparse emulator algorithms with the algorithm of [BrandN19]. The algorithm and the parameter setting for obtaining this bound are included in Appendix B.
APSP distance oracle with subquadratic update time.
Another implication of our dynamic MSSP algorithm is improved bounds for maintaining a data structure with subquadratic update time that supports all-pairs queries with a small polynomial query time. This type of data structure for undirected and unweighted graphs was proposed by [RZ12] and also studied in [BrandN19]. We show that our improved MSSP algorithm directly improves their update time.
Corollary 1.8.
Given an unweighted, undirected graph , and , we can maintain a data structure that supports all-pairs approximate distance queries with the following guarantees:

- Preprocessing time: .
- Worst-case update time: .
- Query time: .

Moreover, these bounds hold with high probability against an adaptive adversary.
This result demonstrates that our techniques lead to improved bounds over [BrandN19] for a data structure explicitly designed for unweighted graphs. Their algorithm has an update time of with the same query time. While the analysis is similar to [BrandN19], we give a sketch of this algorithm in Appendix B.2 for completeness.
2 Technical Overview
In this section we give a high-level overview of our contributions based on the following outline: First, we give an algorithm for maintaining a deterministic emulator. We then describe how the update time can be improved by maintaining a low-recourse hitting set. Then, in Section 2.2, we give an overview of the algebraic techniques that, combined with our deterministic emulators, lead to a conditionally optimal time bound for SSSP. Finally, we show that by maintaining sparser emulators with different guarantees, we can get improved bounds for st distances, multi-source distances, and APSP.
2.1 Dynamic Emulators via Low-Recourse Hitting Sets
Deterministic emulator and SSSP.
We start with a deterministic algorithm for maintaining an emulator. This algorithm is inspired by a randomized algorithm (working against an oblivious adversary) used by [HKN2013] in the decremental setting, which in turn is based on the purely additive static construction of [DorHZ00]. Given an unweighted graph , we maintain an emulator as follows:

- Let be a degree threshold. For any node where , add all the edges incident to to . These edges have weight 1.
- Construct a hitting set of size , such that every node with degree at least , called a heavy node, has a neighbor in .
- For any node , add an edge to all nodes within distance to . Set the weight of such an edge to the corresponding distance.
It is easy to see that if we were interested in a randomized algorithm that only works against an oblivious adversary, we could simply construct a hitting set by uniformly sampling a fixed set of appropriate size [UllmanY91]. We could then maintain the corresponding bounded distances from this fixed set with a worst-case update time matching the lower bound of [BrandNS19]. As we will discuss in Section 2.2, this could be done by using an algebraic data structure by [Sankowski05] for maintaining and then querying bounded distances for all pairs after each update. Note that the distance bound in our algorithms leverages the power of algebraic distance maintenance data structures, as their running times scale with the given distance bound. However, these ideas alone are not enough for obtaining an efficient deterministic algorithm.
Before explaining how to maintain both the hitting set and the corresponding distances deterministically, let us sketch the properties of this emulator and how it can be used for maintaining SSSP. It is easy to see that the emulator is sparse: we add edges incident to low-degree nodes, and edges incident to the hitting set. For the stretch analysis, consider any pair of nodes , and let be the shortest path between them. We can divide into segments of equal length , and possibly one additional smaller segment. Consider one such segment. If all the nodes on this segment are low-degree, then we have included all the corresponding edges in the emulator. Otherwise, there is a node in the hitting set that is adjacent to the first heavy node on this segment. We have , and thus in the third step of the algorithm we have added a (weighted) edge in the emulator. It is easy to see that the path going through either provides a multiplicative factor, or (for the one smaller segment) an additive term of .
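Since the concrete degree threshold and distance bound are parameters of the analysis, the static sketch below leaves them as assumed inputs `t` and `h`; it is illustrative only (the dynamic maintenance of the hitting set and of the bounded distances is described later), but it follows the three steps above:

```python
from collections import deque

def build_emulator(adj, t, h):
    """adj: node -> set of neighbors; t: degree threshold; h: distance
    bound for hitting-set edges. Returns weighted edges {(u, v): w}."""
    H = {}
    heavy = {v for v in adj if len(adj[v]) >= t}
    # Step 1: keep every edge incident to a low-degree node (weight 1).
    for u in adj:
        if u not in heavy:
            for v in adj[u]:
                H[tuple(sorted((u, v)))] = 1
    # Step 2: greedy hitting set S; every heavy node has a neighbor in S
    # (we let a node hit itself as well, which only helps coverage).
    S, uncovered = set(), set(heavy)
    while uncovered:
        best = max(adj, key=lambda x: len(uncovered & (adj[x] | {x})))
        S.add(best)
        uncovered -= adj[best] | {best}
    # Step 3: connect each s in S to all nodes within distance h,
    # weighting the new edge by the actual distance (BFS to depth h).
    for s in S:
        dist, q = {s: 0}, deque([s])
        while q:
            x = q.popleft()
            if dist[x] == h:
                continue
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    q.append(y)
        for v, d in dist.items():
            if v != s:
                key = tuple(sorted((s, v)))
                H[key] = min(H.get(key, d), d)
    return H
```

On a star graph whose center is heavy, for instance, step 2 picks the center itself and step 3 only rediscovers the original edges.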
Given such an emulator, we can now maintain SSSP by (i) using algebraic techniques to maintain bounded distances from the source, (ii) statically running Dijkstra’s algorithm on the emulator, and finally (iii) taking the minimum of the two distance values for each node. We observe that if the true distance is within the bound, then we are already maintaining a correct estimate in step (i). Otherwise, in step (ii) the combination of the multiplicative factor and the additive term leads to an overall approximate estimate.
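Steps (ii) and (iii) can be sketched as follows, assuming the bounded estimates from step (i) are handed to us as a list `small_dist` (infinity where the algebraic distance bound is exceeded; the interface is an assumption for illustration):

```python
import heapq

def combine_sssp(n, emulator, small_dist, src):
    """Nodes are 0..n-1; emulator maps undirected edges (u, v) to weights;
    small_dist[v] is the bounded-distance estimate from the algebraic data
    structure. Returns per-node min(Dijkstra on emulator, small_dist)."""
    g = {v: [] for v in range(n)}
    for (u, v), w in emulator.items():
        g[u].append((v, w))
        g[v].append((u, w))
    dist = [float('inf')] * n
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                      # stale queue entry
        for v, w in g[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return [min(dist[v], small_dist[v]) for v in range(n)]

# On an emulator that is a path 0-1-2 with no usable algebraic estimates:
print(combine_sssp(3, {(0, 1): 1, (1, 2): 1},
                   [0, float('inf'), float('inf')], 0))  # [0, 1, 2]
```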
Deterministic low-recourse hitting set.
As discussed, we can easily obtain a fixed hitting set using randomization, but we are interested in a deterministic algorithm. One natural approach for constructing the hitting set deterministically is as follows: For each node with degree at least , consider a set of exactly neighbors of . After each update we can statically and deterministically compute an approximation to this instance of the hitting set problem. We use a simple greedy algorithm that proceeds by sequentially adding to the hitting set the nodes that hit the maximum number of uncovered heavy nodes. This can be done in time and gives us a hitting set of size as well. This time is within our desired update-time bound, but we also need to maintain bounded distances from the elements of this hitting set. As we will see in Section 2.2, by using the naive approach of recomputing a hitting set in each update and employing off-the-shelf algebraic data structures (e.g., by [BrandN19]) for maintaining bounded distances, we would get an update time that is too high for current . However, there is a conditional lower bound for this problem [BrandNS19], and our goal is to design an algorithm that matches this bound.
To get a better running time, we change our construction and the algebraic data structure to use a low-recourse hitting set instead, ensuring that in each update only a constant number of nodes are added to the set. More formally, in Section 3.3 we will prove the following lemma:
Lemma 2.1.
Given a graph undergoing edge insertions and edge deletions and a degree threshold , call a node heavy if it has degree larger than . We can deterministically maintain a hitting set of size with worst-case recourse and worst-case time per update such that all heavy nodes have a neighbor in .
At a high level, our dynamic low-recourse hitting set proceeds as follows: we start by running the static greedy hitting set algorithm. We then note that each update (insertion or deletion) can make at most two heavy nodes – the endpoints of the updated edge – uncovered. We keep adding such nodes to our hitting set until the size of the hitting set doubles, and then reset the construction. This leads to an amortized constant recourse bound, and we can then use a standard technique to turn this into a worst-case constant recourse bound.
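The amortized variant above can be sketched as a small class (the worst-case deamortization is omitted; the doubling threshold and the patch rule of adding the uncovered node itself are the assumptions here):

```python
class LowRecourseHittingSet:
    """Heavy nodes (degree >= t) must be in S or have a neighbor in S.
    On each update we patch newly uncovered heavy nodes; once |S| doubles
    since the last rebuild, we recompute S greedily from scratch."""

    def __init__(self, adj, t):
        self.adj, self.t = adj, t
        self._rebuild()

    def _greedy(self):
        heavy = {v for v in self.adj if len(self.adj[v]) >= self.t}
        S, uncovered = set(), set(heavy)
        while uncovered:
            best = max(self.adj,
                       key=lambda x: len(uncovered & (self.adj[x] | {x})))
            S.add(best)
            uncovered -= self.adj[best] | {best}
        return S

    def _rebuild(self):
        self.S = self._greedy()
        self.base = max(1, len(self.S))   # size at last rebuild

    def _covered(self, v):
        return v in self.S or self.adj[v] & self.S

    def update(self, u, v, insert):
        if insert:
            self.adj[u].add(v); self.adj[v].add(u)
        else:
            self.adj[u].discard(v); self.adj[v].discard(u)
        # Only the neighborhoods of u and v change, so at most these two
        # heavy nodes can become uncovered; patch each with one addition.
        for x in (u, v):
            if len(self.adj[x]) >= self.t and not self._covered(x):
                self.S.add(x)
        if len(self.S) >= 2 * self.base:
            self._rebuild()
        return self.S
```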
Note that this hitting set problem can be seen as a set cover instance, where each set consists of exactly the chosen neighbors of a heavy node. Dynamic set cover approximation has received significant attention in recent years (e.g., [abboud2019, BHN2019, gupta2017, BHNW2021]). The most relevant result to our setting is a fully dynamic approximate set cover algorithm by [gupta2017]. However, we cannot use their result directly, as their polynomial-time algorithm only leads to constant amortized recourse, and their update-time guarantees are also only amortized.^11 (Footnote 11: Of course, the goal in [gupta2017] is a generic set cover approximation algorithm, which is why their bounds are not comparable to our specialized algorithm. Also, the other set cover algorithms cited lead to an approximation ratio dependent on the instance size, which can be large in our case.) Here we use a simple approach that utilizes the properties of our hitting set instance, which is enough to get worst-case recourse bounds.
We next move on to explaining how to design algebraic data structures that maintain the distances required for this algorithm more efficiently.
2.2 Dynamic Pairwise Bounded Distances via Matrix Inverse
As outlined before, to obtain our desired bounds we need to efficiently maintain bounded pairwise distances between some sets S and T of nodes, where the sets change over the updates. We additionally use the fact that even though these sets change, they do not change substantially after each update, because of our low-recourse hitting sets.
Previous algebraic distance algorithms only considered maintaining pairwise distances for (a) fixed sets that do not change [Sankowski05, BrandNS19, BHGWW2021], or (b) completely new sets in each iteration [BrandN19]. Case (a) can be used if we randomly sample a hitting set and keep this set fixed over the sequence of edge updates; this is only useful for randomized algorithms against oblivious adversaries. Case (b) can be used when constructing a new hitting set from scratch after each update, which works against adaptive adversaries. However, querying the distances for a new set in each update turns out to be slower and does not allow us to match the conditional lower bound for approximate SSSP. More specifically, the complexities in these cases are as follows:
Lemma 2.2 ([Sankowski05, BrandNS19]).
There exist randomized dynamic algorithms that, after preprocessing a given unweighted directed^12 (Footnote 12: The algebraic algorithms for bounded distances all work on directed graphs. The restrictions to undirected graphs in our main results stem from the emulator arguments that extend the results to unbounded distances.) graph and a distance-bound parameter, support edge updates to the graph.

One algorithm supports queries that, for any given sets , , return the pairwise bounded distances between them [BrandNS19].

Another algorithm can also maintain the pairwise distances for some fixed sets at the cost of an increased update time [Sankowski05].
In our SSSP algorithm, as discussed in Section 2.1, we have , . For the current , this leads to a query time of (by choosing ), while the conditional lower bound for SSSP is .
We improve the query time for the case where sets are slowly changing with each update. This allows us to reach the conditional lower bound. We do this by reducing dynamic bounded pairwise distances for slowly changing sets to existing dynamic matrix inverse algorithms.
We start by explaining the existing reduction from dynamic distances to matrix inverse. Afterward, we explain how to use this reduction for maintaining pairwise distances.
Reducing distances to matrix inverse.
Previous dynamic algebraic algorithms that maintain distances work by reducing the task to the so-called “dynamic matrix inverse” problem [Sankowski05, BrandNS19, BrandN19, BrandS19, GuR21]. Here the dynamic algorithms are given some matrix and must support updates that change any entry of the matrix and queries that return any entry of its inverse. Such dynamic matrix inverse algorithms can be used to maintain distances in dynamic graphs via the following reduction: Given the adjacency matrix A of the graph, note that (A^k)_{u,v} is the number of (not necessarily simple) paths from u to v of length k. Specifically, the smallest k with (A^k)_{u,v} ≠ 0 is the distance from u to v.
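As a toy illustration of this path-counting fact (plain integer matrix powers, not the truncated-polynomial machinery described next):

```python
import numpy as np

# Adjacency matrix of the path graph 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])

def dist(A, u, v, hmax=10):
    """Smallest k with (A^k)[u, v] != 0, i.e. the graph distance
    (returns None if it exceeds hmax)."""
    P = np.eye(len(A), dtype=int)
    for k in range(1, hmax + 1):
        P = P @ A                 # now P = A^k, counting length-k walks
        if P[u, v] != 0:
            return k
    return None

assert [dist(A, 0, v) for v in (1, 2, 3)] == [1, 2, 3]
```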
We can maintain these powers of A dynamically via the following reduction. For any h, we write F[X]/X^h for the ring of polynomials over a field F (with variable X) modulo X^h, i.e., polynomials over F where we truncate all monomials of degree at least h. We now consider matrices whose entries are from this ring.

Let M be the matrix with 1 on the diagonal and entry −X for every edge, i.e., M = I − XA. Note that M^{-1} = Σ_{k=0}^{h−1} X^k A^k. To see this, consider

(I − XA) · Σ_{k=0}^{h−1} X^k A^k = I − X^h A^h = I

because of the entrywise mod X^h. Thus a dynamic algorithm that maintains the inverse of the matrix M is able to maintain distances of length less than h in dynamic graphs. The task of maintaining pairwise bounded distances for sets S and T thus reduces to the task of maintaining the submatrix (M^{-1})_{S,T} for some dynamic matrix M.^13 (Footnote 13: Throughout, we write (M)_{S,T} for sets S, T and a matrix M to denote the submatrix consisting of rows with index in S and columns with index in T.)
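This inverse identity can be checked mechanically by representing each matrix over the truncated polynomial ring as an array of coefficient matrices. The sketch below (an illustration of the algebra under the standard choice M = I − XA, not the dynamic data structure) verifies that the product collapses to the identity:

```python
import numpy as np

def polmat_mul(P, Q, h):
    """Multiply matrices with entries in F[X]/X^h, stored as coefficient
    arrays of shape (h, n, n); monomials of degree >= h are truncated."""
    n = P.shape[1]
    R = np.zeros((h, n, n), dtype=int)
    for i in range(h):
        for j in range(h - i):
            R[i + j] += P[i] @ Q[j]
    return R

h = 4
A = np.array([[0, 1], [1, 0]])            # adjacency matrix of one edge
n = len(A)
M = np.zeros((h, n, n), dtype=int)        # M = I - X*A
M[0] = np.eye(n, dtype=int)
M[1] = -A
# Candidate inverse: sum over k of X^k A^k, truncated at degree h.
S = np.array([np.linalg.matrix_power(A, k) for k in range(h)])
prod = polmat_mul(M, S, h)
# All higher-degree terms cancel entrywise mod X^h, leaving the identity.
assert (prod[0] == np.eye(n, dtype=int)).all() and (prod[1:] == 0).all()
```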
Submatrix maintenance.
In this work, we show how to extend the previous algorithms to efficiently maintain a submatrix of the inverse for some dynamic matrix M. Obviously, any dynamic matrix algorithm can maintain such a submatrix by just querying all its entries after each change to M, but this would not be fast enough for our algorithm to match the conditional lower bounds. We now show a black-box reduction that is able to maintain the submatrix more efficiently. For simplicity, assume M is a matrix over some field F.^14 (Footnote 14: The reduction does extend to matrices over the truncated polynomial ring (as required to maintain distances), but for simplicity we just consider a field here instead.) Consider an entry update where we add some c ∈ F to entry (i, j) of M. The Sherman–Morrison identity [ShermanM50] states

(M + c · e_i e_j^T)^{-1} = M^{-1} − (c · M^{-1} e_i e_j^T M^{-1}) / (1 + c · (M^{-1})_{j,i}).

Note that M^{-1} e_i and e_j^T M^{-1} are just the i-th column and j-th row of M^{-1}. So to obtain the submatrix (M^{-1})_{S,T} of the new inverse, we just need to compute

(M^{-1})_{S,T} − (c · (M^{-1})_{S,i} (M^{-1})_{j,T}) / (1 + c · (M^{-1})_{j,i}).

This allows us to maintain (M^{-1})_{S,T} throughout all updates by just querying the relevant row and column from some dynamic matrix algorithm instead of the entire submatrix. Updates to the set S (or T) are handled by querying the missing row (or column), i.e., adding some index i to S just requires us to query (M^{-1})_{i,T} in order to know the new submatrix. In Section 4, we use these ideas to formally prove the following results by reducing to existing dynamic matrix inverse algorithms from [Sankowski04, Sankowski05, BrandNS19]:
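A numerical sketch of this submatrix update (the matrix and the values of S, T, i, j, c are illustrative; the diagonal shift just keeps M well-conditioned):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
M = rng.integers(-3, 4, (n, n)).astype(float) + n * np.eye(n)
Minv = np.linalg.inv(M)
S, T = [0, 2], [1, 3, 4]          # maintained submatrix (M^-1)[S, T]
i, j, c = 1, 4, 2.5               # update: add c to entry (i, j) of M

# Sherman-Morrison: only the i-th column and j-th row of the old
# inverse are needed to refresh the maintained submatrix.
col = Minv[np.ix_(S, [i])]        # (M^-1)[S, i]
row = Minv[np.ix_([j], T)]        # (M^-1)[j, T]
denom = 1 + c * Minv[j, i]
new_sub = Minv[np.ix_(S, T)] - c * (col @ row) / denom

M[i, j] += c                      # apply the update, then cross-check
assert np.allclose(new_sub, np.linalg.inv(M)[np.ix_(S, T)])
```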
Lemma 2.3.
There exist two deterministic dynamic algorithms that, after preprocessing a given unweighted directed graph and sets , support edge updates to the graph and set updates to and (i.e., adding or removing a node of or ). After each edge update, the algorithms return the bounded pairwise distances between and .

The first algorithm has preprocessing time , edge-update time , and set-update time for updating and for updating .

The second algorithm has preprocessing time , edge-update time , and set-update time for updating and for updating .

For both algorithms, all update times are worst-case.
2.3 Sparsification for Approximate st Distances, APSP, and MSSP
Deterministic emulator for st distances.
Next, we outline how we can improve the update time in the case of st distances. For this purpose, we construct a sparser emulator, which again is inspired by the purely additive construction of [DorHZ00] in the static setting, by making the following modifications to the algorithm described in Section 2.1 above: We use a larger degree threshold. More importantly, rather than adding edges corresponding to bounded distances from the hitting set to all nodes, we only add pairwise edges between nodes (with bounded distance) in the hitting set. This has two advantages: First, we need to run Dijkstra on a sparser graph. Second, the algebraic steps can be performed much faster when we only need to maintain pairwise distances between two sets of sublinear size, rather than from a set of sublinear size to all nodes of the graph.
It is easy to see that this emulator is sparse: there are edges corresponding to low-degree nodes, and edges corresponding to pairs in the hitting set. The stretch argument follows a similar structure to the one for the previous emulator. Again, for each pair of nodes , we divide the shortest path into segments of equal length . The main difference is that here we consider the first and last heavy nodes on each segment, which we denote by and . Then there must be nodes in the hitting set that are adjacent to and , respectively. We have , and thus we have added an edge in the emulator. The path using this edge leads to either a multiplicative stretch for this segment, or an additive term for the (at most one) smaller segment.
Note that this algorithm does not lead to better bounds for single-source distances, since querying bounded single-source distances still takes the same time using the algebraic techniques. However, if we are interested in the bounded distance between a fixed pair of nodes and , the algebraic approach leads to better bounds. In this case, we get an improved bound, as outlined in Section 2.2, which is less than the conditional lower bound for approximate SSSP.
Sparser emulators with applications in APSP and MSSP.
Finally, we give another algorithm that leads to improvements when we need to maintain distances from many sources, such as allpairs distances.
We start by describing how we maintain distances from many sources with an update time almost the same as the time required for single-source distances. For this purpose, given a set of sources , we maintain two emulators simultaneously: we first maintain an emulator of the input graph. Then we statically construct a much sparser emulator on top of it. The key idea here is to use the first emulator in order to construct the second one more efficiently. We use a static deterministic emulator algorithm (based on [RTZ2005, TZ2006emulators]) that can construct such an emulator quickly. This leads to a fully dynamic algorithm for maintaining very sparse emulators deterministically with worst-case update time guarantees.
Then, similar to before, in each update we find distance estimates by computing the minimum of the following estimates: (i) bounded distances from all sources maintained by an algebraic data structure, and (ii) distances from all sources on the sparser emulator, computed statically.
This lets us maintain MSSP from many sources in almost (up to an factor) the same running time as maintaining distances from a single source, by computing multi-source distances statically on this very sparse emulator and querying small distances from the algebraic data structure. This approach naturally extends to maintaining all-pairs distances deterministically by letting the source set contain all nodes.
Further Applications.
Our sparse emulators can also be used to maintain a (nearly) approximation of the diameter. Our algorithm is an adaptation of the dynamic algorithm by [BrandN19], which is in turn based on an algorithm by [RV13]. At a high level, we need to query (approximate) multi-source distances from three sets of size at most . We show that our emulators can be used to maintain such approximate distances much more efficiently than the data structures of [BrandN19]. Note that the diameter approximation of [RV13] is inherently randomized.
Finally, we maintain a data structure with worst-case subquadratic update time that supports sublinear all-pairs approximate distance queries. Our algorithm is based on ideas of [RZ12, BrandN19] that utilize well-known path hitting techniques (e.g. [UllmanY91]). In order to get improved bounds, we again use our sparse emulators. We need to handle some technicalities, both in the algorithm and in its analysis, introduced by the additive term , combined with the bounded distances maintained in the algorithm of [BrandN19] for an appropriately chosen parameter .
3 Approximate Distances via Emulators
In this section we focus on constructing emulators with various tradeoffs and describe how they can be combined with the algebraic data structure of Lemma 2.3 to obtain (conditionally) optimal dynamic single-source distance approximations.
First, assuming that we have a low-recourse dynamic hitting set, we maintain a emulator and explain how it can be used for maintaining SSSP. We then show how to maintain a emulator that can be used for maintaining distances more efficiently.
We will then move on to give a deterministic algorithm that maintains lowrecourse hitting sets, and the required distances from the nodes in this hitting set to other nodes.
3.1 Deterministic Emulators and SSSP
In this section we describe how to maintain SSSP with conditionally optimal worst-case update time. We start by describing how to maintain a emulator, assuming that we have a low-recourse hitting set and can compute bounded-hop distances from elements in this set. The algorithm is summarized in Algorithm LABEL:alg:2_emulator. Assume that we are given two functions:
algocf[h]
Observe that even though we start with an unweighted graph, we need to add weighted edges to the emulator (with weights corresponding to the distances between the endpoints). We note that a similar but randomized version of this emulator construction (working only against an oblivious adversary) was used in [HKN2013] for maintaining approximate shortest paths decrementally. For completeness, we provide a full analysis of the properties of this emulator here. Assuming that we can maintain a hitting set for satisfying Lemma 2.1 and bounded distances in , Algorithm LABEL:alg:2_emulator can be used to show the following theorem:
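As a concrete sketch, the construction step can be outlined as follows. All names are illustrative (not the paper's notation), and the hop-bounded distances from the algebraic data structure are modelled by a precomputed dictionary:

```python
from collections import defaultdict

def build_emulator(adj, hitting_set, bounded_dist, deg_threshold):
    """Sketch of the emulator construction described above.

    adj          : dict node -> set of neighbours (unweighted graph G)
    hitting_set  : set of nodes hitting every heavy node's neighbourhood
    bounded_dist : dict (x, v) -> hop-bounded distance from hitting-set
                   node x to node v (stand-in for the algebraic data
                   structure); absent pairs are treated as "too far"
    deg_threshold: degree threshold separating light and heavy nodes
    """
    emulator = defaultdict(dict)  # node -> {neighbour: edge weight}
    # 1. Keep every edge incident to a low-degree (light) node, weight 1.
    for u, nbrs in adj.items():
        if len(nbrs) < deg_threshold:
            for v in nbrs:
                emulator[u][v] = 1
                emulator[v][u] = 1
    # 2. For each hitting-set node, add weighted shortcut edges whose
    #    weight equals the (bounded) distance to the other endpoint.
    for (x, v), d in bounded_dist.items():
        if x in hitting_set:
            w = min(emulator[x].get(v, float('inf')), d)
            emulator[x][v] = w
            emulator[v][x] = w
    return emulator
```

The sketch omits the dynamic bookkeeping; after each update only the affected light edges and shortcut edges would be recomputed.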
Theorem 3.1.
Given an unweighted graph , , we can deterministically maintain a emulator with size . The worstcase update time is , and preprocessing time is .
Proof.
The size analysis is straightforward. We set , and add edges for sparse nodes; by Lemma 2.1, we have nodes in the hitting set, and thus we add overall edges for all of them.
We next move on to the stretch analysis. Consider any pair of nodes . Let be the shortest path between and in . If all the nodes on this path have degree less than , then this whole path exists in .
We divide into segments of equal length , with possibly one segment of smaller length. Note that if , there will be only one segment. We show that for all but one segment there is a path with stretch in the emulator. Consider the th segment, which we denote by , and let be the shortest path between and . As before, if all the nodes on are low-degree, then all the edges in the segment are in . Otherwise, let be the first heavy node on . Since is a heavy node, by Lemma 2.1 there is a node adjacent to . First assume that . Note that since and is a neighbor of and lies on the shortest path between and , we have . Thus we have added an emulator edge between and . Consider the path in going through . For the length of this path we have,
(1)  
(2)  
(3)  
(4)  
(5) 
Now assume that , and hence . Then we have added an edge from to in the emulator. Using the same analysis as above, and by inequality (4), we have .
Since is the shortest path in , the overall multiplicative stretch is , and there will be one additive term of .
The running time (update time and preprocessing time) follows from Lemma 2.1 for maintaining a low-recourse hitting set , and from Lemma 2.3 for maintaining bounded distances in , by setting and . Note that we also need to maintain the edges incident to heavy nodes that overlap with any node added to , but this only takes time per update. ∎
Using an emulator for maintaining SSSP.
Given an unweighted graph , we first maintain a emulator for . Given a single source and , we can now maintain the distances by:

Using the algebraic data structure of Lemma 2.3: hop-bounded distances from on ,

After each update, statically computing SSSP on in time.
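The combination of the two estimates can be sketched as follows. This is a simplified model: the algebraic data structure of Lemma 2.3 is replaced by a precomputed dictionary of hop-bounded distances, and all names are illustrative:

```python
import heapq

def sssp_estimates(emulator, hop_bounded, source, nodes):
    """Return, for every node, the minimum of the two estimates:
    (i) the hop-bounded distance in G (here a precomputed dict with
        float('inf') for nodes beyond the hop bound), and
    (ii) the distance on the weighted emulator, computed statically.
    """
    # (ii) static Dijkstra on the weighted emulator
    dist = {v: float('inf') for v in nodes}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in emulator.get(u, {}).items():
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    # final estimate: minimum of (i) and (ii)
    return {v: min(dist[v], hop_bounded.get(v, float('inf')))
            for v in nodes}
```

Nearby nodes are served exactly by estimate (i), while far nodes fall back on the emulator distance (ii), matching the case analysis in the proof below.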
See 1.1
Proof.
The distance estimate stored at each node is the minimum of the two estimates (i) and (ii) described above. To see the correctness (stretch), we simply observe that for any node where , we have:
Hence a approximate estimate is returned in step (ii) above. On the other hand, if , then we have directly maintained the exact distance in step (i). ∎
3.2 Deterministic Emulator and  Distances
In this section, we give another emulator-based algorithm that lets us maintain the approximate distance from a given source to a given destination with better update time than the bound we showed for SSSP. We maintain a sparser emulator with a slightly larger additive stretch that supports faster computation of the approximate distance estimate. In particular, we maintain a emulator of size .
Compared to the emulator of Section 3.1, for this emulator construction we need to maintain bounded distances with our algebraic data structure for a smaller number of pairs of nodes, which increases efficiency. This, combined with the fact that our emulators are sparser, leads to a faster algorithm for maintaining approximate distances.
emulator.
We start by maintaining a sparse emulator with slightly larger additive stretch term. The algorithm is summarized in Algorithm LABEL:alg:4_emulator.
algocf[h] Assuming that we can maintain a low-recourse hitting set and bounded distances between pairs , Algorithm LABEL:alg:4_emulator leads to an emulator with the following guarantees:
Theorem 3.2.
Given an unweighted graph , , we can deterministically maintain a emulator of size with worstcase update time of , and preprocessing time of .
Proof.
It is easy to see that Algorithm LABEL:alg:4_emulator leads to an emulator of size . This follows from the fact that we add at most pairwise edges between nodes in , and we add edges for each non-heavy node , where .
Next we move on to the stretch analysis. Consider any pair of nodes . We let be the shortest path between and .
We divide into segments of equal length at most , and possibly one smaller segment that we handle separately (which could be the only segment if ). We show that for each segment there is a path with stretch in the emulator. Consider the th segment, which we denote by , and let the corresponding shortest path between and be . As before, if all the nodes on have degree less than , then all the edges in the segment are in .
Otherwise, let be the first heavy node, and let be the last (furthest from ) heavy node on . By Lemma 2.1 we know that there are nodes such that is adjacent to and is adjacent to . The case where is a special, easy case, so let us assume . First assume . Since lie on the shortest path between and , we have , and thus we have added an emulator edge between and . Consider the path in going through . For the length of this path we have,
(6)  
(7)  
(8)  
(9)  
(10)  
(11)  
(12) 
Now assume that there is a segment . We have again added an edge in . Using the same analysis as before, by inequality (10), we have . Hence the overall multiplicative stretch is , and there will be at most one additive term of . ∎
We can now use the emulator of Theorem 3.2 to maintain approximate distances using the same approach as in Section 3.1. We set the distance estimate to be the minimum of the estimates obtained by maintaining bounded distances on (using the second algorithm in Lemma 2.3) and by statically running Dijkstra's algorithm on the emulator. Formally we have,
See 1.2
3.3 Deterministic Dynamic LowRecourse Hitting Sets
In this section, we focus on the deterministic maintenance of the hitting sets in order to efficiently maintain the emulators described in the previous two sections.
Let us first review a very simple static deterministic construction, and later discuss how to obtain a low-recourse dynamic algorithm. Given a graph , we create a sparse subgraph of with size , which we denote by , as follows: for any heavy node (a node with degree at least ), we choose an arbitrary set of neighbors of , denoted by .
We then statically compute an approximate hitting set for deterministically (i.e., its size exceeds the size of a minimum hitting set on by a factor of ). Consider the following simple greedy algorithm: in each step we add to the node that hits (i.e., is incident to) the maximum number of uncovered heavy nodes. It is well-known that this algorithm leads to the following guarantees.
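A minimal sketch of this greedy procedure, with the neighbourhood sets of the heavy nodes passed in explicitly (names are illustrative):

```python
def greedy_hitting_set(neighbourhoods):
    """Greedy approximate hitting set (sketch of the procedure above).

    neighbourhoods: dict heavy_node -> set of candidate hitter nodes
    Returns a set S such that every heavy node has a neighbour in S.
    """
    uncovered = set(neighbourhoods)
    # invert the map: candidate node -> heavy nodes it would hit
    hits = {}
    for h, cands in neighbourhoods.items():
        for c in cands:
            hits.setdefault(c, set()).add(h)
    S = set()
    while uncovered:
        # pick the candidate hitting the most still-uncovered heavy nodes
        best = max(hits, key=lambda c: len(hits[c] & uncovered))
        S.add(best)
        uncovered -= hits[best]
    return S
```

Each iteration covers the largest remaining batch of heavy nodes, which is exactly the greedy rule that gives the logarithmic approximation guarantee of Lemma 3.3.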
Lemma 3.3 (Greedy Hitting Set, [johnson1974]).
Given a graph , and a degree threshold , we can deterministically construct a hitting set with size in time, such that every heavy node has a neighbor in .^{15}This lemma can also be seen as a greedy approximation algorithm for set cover, where the sets that we need to cover are formed by the set of nodes incident to each heavy node.
It is easy to see that the hitting set obtained has size : by a simple sampling argument we know that there exists a hitting set of size , and we are using an approximate hitting set algorithm. A tighter analysis (e.g., see [BHK2017]) leads to a total size of .
Next we move on to the dynamic maintenance of hitting sets. Our goal is to prove the following lemma.
See 2.1
First, we start with an algorithm with amortized constant recourse, and then turn it into an algorithm with worst-case constant recourse.
The amortized lowrecourse hitting set algorithm is as follows:

Preprocess a hitting set in time, using Lemma 3.3

After every updates, reset, i.e., construct a new hitting set from scratch.

When an edge is deleted: if either of the endpoints, w.l.o.g. , is a high-degree node, and is the only node covering , add to .

When an edge is inserted: if one endpoint, w.l.o.g. , becomes high-degree and is not covered by any other node, add to .
It is easy to see that this construction has low amortized recourse, since over a sequence of updates, only nodes are added to the hitting set. The amortized update time is , since we reset the algorithm every updates, and the input to the hitting set algorithm has size . Hence the amortized time per update is . We next turn this into a worst-case recourse bound using a standard reduction.
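The bookkeeping above can be sketched as a thin wrapper. This is illustrative only: under one reading of the update rules the uncovered endpoint itself joins the set, and `is_heavy`/`covers` are hypothetical helpers querying the current graph:

```python
class AmortizedHittingSet:
    """Sketch of the amortized low-recourse scheme: periodic resets
    plus local fixes on edge insertions and deletions."""

    def __init__(self, rebuild, period):
        self.rebuild = rebuild    # from-scratch construction (e.g. greedy)
        self.period = period      # number of updates between resets
        self.updates = 0
        self.S = rebuild()

    def _tick(self):
        self.updates += 1
        if self.updates % self.period == 0:
            self.S = self.rebuild()  # periodic reset from scratch

    def delete_edge(self, u, v, is_heavy, covers):
        # if an endpoint stays heavy and the other was its only cover,
        # patch the hitting set locally
        for a, b in ((u, v), (v, u)):
            if is_heavy(a) and covers(a) == {b}:
                self.S.add(a)
        self._tick()

    def insert_edge(self, u, v, is_heavy, covers):
        # a newly heavy, still-uncovered endpoint joins the hitting set
        for a in (u, v):
            if is_heavy(a) and not covers(a):
                self.S.add(a)
        self._tick()
```

Between resets at most one node per update enters the set, which is what bounds the amortized recourse.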
Worstcase constant recourse bound.
In order to ensure a worst-case recourse bound, we use a standard technique in which we run two copies of the algorithm at the same time. One copy maintains the hitting set while the other copy gradually performs the reset operation, spread out over a sequence of updates.
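The scheduling idea can be sketched as follows. The chunked background work is abstracted into a counter; a real implementation would spread the actual rebuild computation over the updates:

```python
class TwoCopySwapper:
    """Serve queries from one copy while the other is rebuilt from
    scratch, the rebuild spread over `period` updates (illustrative)."""

    def __init__(self, rebuild, period):
        self.rebuild = rebuild    # from-scratch hitting set construction
        self.period = period
        self.active = rebuild()   # copy currently answering queries
        self.progress = 0         # chunks of background rebuild done

    def update(self):
        # perform one chunk of the background rebuild per update ...
        self.progress += 1
        if self.progress == self.period:
            # ... so after `period` updates the fresh copy takes over
            self.active = self.rebuild()
            self.progress = 0
        return self.active
```

Since only a bounded chunk of rebuild work is charged to each update, the per-update cost and recourse become worst-case rather than amortized.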
Proof of Lemma 2.1.
We maintain two copies of the previously described hitting set algorithm in parallel. We denote the hitting sets maintained by these algorithms by and