Dynamic Low-Stretch Trees via Dynamic Low-Diameter Decompositions

04/13/2018 ∙ by Gramoz Goranci, et al. ∙ Universität Salzburg ∙ Universität Wien

Spanning trees of low average stretch on the non-tree edges, as introduced by Alon et al. [SICOMP 1995], are a natural graph-theoretic object. In recent years, they have found significant applications in solvers for symmetric diagonally dominant (SDD) linear systems. In this work, we provide the first dynamic algorithm for maintaining such trees under edge insertions and deletions to the input graph. Our algorithm has update time n^{1/2 + o(1)}, and the average stretch of the maintained tree is n^{o(1)}, which matches the stretch in the seminal result of Alon et al. Similar to Alon et al., our dynamic low-stretch tree algorithm employs a dynamic hierarchy of low-diameter decompositions (LDDs). As a major building block we use a dynamic LDD that we obtain by adapting the random-shift clustering of Miller et al. [SPAA 2013] to the dynamic setting. The major technical challenge in our approach is to control the propagation of updates within our hierarchy of LDDs. We believe that the dynamic random-shift clustering might be useful for independent applications. One of these potential applications follows from combining the dynamic clustering with the recent spanner construction of Elkin and Neiman [SODA 2017]. We obtain a fully dynamic algorithm for maintaining a spanner of stretch 2k − 1 and size O(n^{1+1/k} log n) with amortized update time O(k^2 log^2 n) for any integer 2 ≤ k ≤ log n. Compared to the state-of-the-art in this regime [Baswana et al. TALG '12], we improve upon the size of the spanner and the update time by a factor of k.


1 Introduction

Graph compression is an important paradigm in modern algorithm design. Given a graph G with n nodes, can we find a substantially smaller (read: sparser) subgraph H such that H preserves central properties of G? Very often, this compression is “lossy” in the sense that the properties of interest are only preserved approximately. A ubiquitous example of graph compression schemes are spanners: every graph admits a spanner with O(n^{1+1/k}) edges that has stretch 2k − 1 (for any integer k ≥ 1), meaning that for every edge (u, v) of G not present in H there is a path from u to v in H of length at most 2k − 1. Thus, when k = log n, very succinct compression with O(n) edges can be achieved at the price of stretch O(log n).
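To make the stretch guarantee concrete, here is a small self-contained sketch (ours, not part of the paper) that computes the stretch of a candidate spanner of an unweighted graph by running a BFS inside the spanner for every original edge; the helper names `bfs_dist` and `max_stretch` are hypothetical.

```python
from collections import deque

def bfs_dist(adj, src):
    """Single-source shortest-path distances in an unweighted graph via BFS."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def max_stretch(nodes, graph_edges, spanner_edges):
    """Stretch of a spanner: the maximum, over edges (u, v) of the original
    graph, of the length of a shortest u-v path inside the spanner."""
    adj = {v: [] for v in nodes}
    for u, v in spanner_edges:
        adj[u].append(v)
        adj[v].append(u)
    worst = 1
    for u, v in graph_edges:
        worst = max(worst, bfs_dist(adj, u).get(v, float("inf")))
    return worst

# A 4-cycle whose spanner drops the edge (0, 3): that edge is stretched
# to a path of length 3, so this spanner is a 3-spanner of the cycle.
print(max_stretch(range(4), [(0, 1), (1, 2), (2, 3), (0, 3)],
                  [(0, 1), (1, 2), (2, 3)]))  # → 3
```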

The most succinct form of subgraph compression is achieved when H is a tree. Spanning trees, for example, are a well-known tool for preserving the connectivity of a graph. It is thus natural to ask whether, similar to spanners, one could also have spanning trees with low stretch for each edge. This unfortunately is known to be false: in a ring of n nodes, every spanning tree will result in a stretch of n − 1 for the single edge not contained in the tree. However, it turns out that a quite similar goal can be achieved by relaxing the concept of stretch: every graph admits a spanning tree of average stretch O(log n log log n) [AbrahamN12], where the average stretch is the sum of the stretches of all edges divided by the total number of edges. Such subgraphs are called low (average) stretch (spanning) trees and have found numerous applications in recent years, most notably in the design of fast solvers for symmetric diagonally dominant (SDD) linear systems [SpielmanT14, KoutisMP14, BlellochGKMPT14, KoutisMP11, KelnerOSZ13, CohenKMPPRX14]. We believe that their fundamental graph-theoretic motivation and their powerful applications make low-stretch spanning trees a very natural object to study in a dynamic setting as well, similar to spanners [AusielloFI06, Elkin11, BaswanaKS12, BodwinK16] and minimum spanning trees [Frederickson85, EppsteinGIN97, HenzingerK01, HolmLT01, Wulff-Nilsen17, NanongkaiSW17]. Indeed, the design of a dynamic algorithm for maintaining a low-stretch spanning tree was posed as an open problem by Baswana et al. [BaswanaKS12], but despite extensive research on dynamic algorithms in recent years, no such algorithm has yet been formulated.

In this paper, we give the first non-trivial algorithm for this problem in the dynamic setting. Specifically, we maintain a low-stretch tree T of a dynamic graph G undergoing updates in the form of edge insertions and deletions, in the sense that after each update to G we compute the set of necessary changes to T. The goal in this problem is to keep the time spent after each update small while still keeping the average stretch of T tolerable. Our main result is a fully dynamic algorithm for maintaining a spanning tree of expected average stretch n^{o(1)} with expected amortized update time n^{1/2 + o(1)}. At a high level, we obtain this result by combining the classic low-stretch tree construction of Alon et al. [AlonKPW95] with a dynamic algorithm for maintaining low-diameter decompositions (LDDs) based on random-shift clustering [MillerPX13]. Our LDD algorithm might be of independent interest, and we provide another application ourselves by using it to obtain a dynamic version of the recent spanner construction of Elkin and Neiman [ElkinN17]. The resulting dynamic spanner algorithm improves upon one of the state-of-the-art algorithms by Baswana et al. [BaswanaKS12].

Our overall approach towards the low-stretch tree algorithm – to use low-diameter decompositions based on random-shift clustering in the construction of Alon et al. [AlonKPW95] – is fairly well-known in the static setting, in particular for parallel and distributed models [HaeuplerLi18]. However, to make this approach work in the dynamic setting we need to circumvent some non-trivial challenges that at first might not be obvious. In particular, we cannot employ the following paradigm that is often very helpful in designing dynamic algorithms: obtain an algorithm that can only handle edge deletions and then extend it to a fully dynamic one using a general reduction. While we do follow this paradigm for our dynamic LDD algorithm, there are two obstacles that prevent us from doing so for the dynamic low-stretch tree: First, there is no obvious candidate for such a decremental-to-fully-dynamic reduction. Second, in our dynamic low-diameter decomposition edges might start and stop being inter-cluster edges, even if the input graph is only undergoing deletions. In the hierarchy of Alon et al. this leads to both insertions and deletions at the next level of the hierarchy. As opposed to other dynamic problems [HenzingerKN16, AbrahamDKKP16], our algorithm cannot simply enforce some type of “monotonicity” by not passing on insertions to the next level of the hierarchy (to keep a deletions-only setting). In particular, if an edge appears as an inter-cluster edge and we do not make it available to the next level, it cannot be part of the low-stretch tree maintained by the algorithm. We would then, in a separate step, continuously have to fix the tree to incorporate edges that have been left out. It is however not clear how to make such a fix while still keeping the average stretch of the tree low, in particular because there might be many such edges inserted over the course of the deletions-only algorithm.
Thus, it seems that we really have to deal with the fully dynamic setting in the first place. We show that this can be done by a sophisticated amortization approach that explicitly analyzes the number of updates passed on to the next level.

Related Work.

Low average stretch spanning trees were introduced by Alon et al. [AlonKPW95], who obtained an average stretch of 2^{O(√(log n log log n))} and also gave a lower bound of Ω(log n) on the average stretch. The first construction with polylogarithmic average stretch was given by Elkin et al. [ElkinEST08]. Further improvements [AbrahamBN08, KoutisMP11] culminated in the state-of-the-art construction of Abraham and Neiman [AbrahamN12] with average stretch O(log n log log n). All these trees with polylogarithmic average stretch can be computed in time Õ(m). (Throughout this paper, we use Õ-notation to suppress factors that are polylogarithmic in n.)

The main application of low stretch trees has been in solving symmetric, diagonally dominant (SDD) systems of linear equations. It has been observed that iterative methods for solving these systems can be made faster by preconditioning with a low stretch tree [Vaidya91, BomanH03, SpielmanW09]. Consequently, low stretch trees have been an important ingredient in the breakthrough result of Spielman and Teng [SpielmanT14] for solving SDD systems in nearly linear time. In this solver, low stretch trees are utilized for constructing ultra-sparsifiers, which in turn are used as preconditioners. Beyond this initial breakthrough, low stretch spanning trees have also been used in subsequent, faster solvers [KoutisMP14, BlellochGKMPT14, KoutisMP11, KelnerOSZ13, CohenKMPPRX14]. Another prominent application of low-stretch spanning trees (concretely, the variant of random spanning trees with low expected stretch) is the remarkable cut-based graph decomposition of Räcke [Racke08, AndersenF09], which embeds any general undirected graph into a convex combination of spanning trees while paying only a polylogarithmic congestion for the embedding. This decomposition tool, initially aimed at giving the best competitive ratio for oblivious routing, has found several applications ranging from approximation algorithms for cut-based problems (e.g., minimum bisection [Racke08]) to graph compression (e.g., vertex sparsifiers [Moitra09]). Other classic problems that utilize the properties of low stretch trees include the k-server problem [AlonKPW95] and the minimum communication cost spanning tree problem [Hu74, PelegR98].

In terms of dynamic algorithms, we are not aware of any prior work for maintaining low stretch spanning trees. Regarding the motivation of solving SDD systems, the closest related result is possibly the algorithm of Abraham et al. [AbrahamDKKP16] for maintaining a spectral sparsifier with polylogarithmic update time. Apart from that, the problems of dynamically maintaining a spanner or a (minimum) spanning tree seem to be the most related ones.

For dynamic spanner algorithms, the main goal is to maintain, for any given integer k ≥ 2, a spanner of stretch 2k − 1 with O(n^{1+1/k}) edges. Spanners of this stretch and size exist for every graph [Awerbuch85], and this trade-off is presumably tight under Erdős’s girth conjecture. The dynamic spanner problem was introduced by Ausiello et al. [AusielloFI06]. They showed how to maintain a 3- or 5-spanner with amortized update time proportional to the maximum degree of the graph. Using techniques from the streaming literature, Elkin [Elkin11] provided an algorithm for maintaining a (2k − 1)-spanner with improved expected update time. Faster update times were achieved by Baswana et al. [BaswanaKS12], whose algorithms maintain (2k − 1)-spanners with two different trade-offs for the expected amortized update time. Recently, Bodwin and Krinninger [BodwinK16] complemented these results by giving algorithms with worst-case update times that are correct with high probability, for maintaining a 3-spanner and a 5-spanner, respectively. All of these algorithms exhibit the stretch/size trade-off mentioned above, up to polylogarithmic factors in the size of the spanner. All randomized dynamic spanner algorithms mentioned above need to assume an oblivious adversary whose chosen sequence of updates is independent of the random choices of the algorithm.

The first non-trivial algorithm for dynamically maintaining a spanning tree (and also an MST, a minimum spanning tree) was developed by Frederickson [Frederickson85] and had a worst-case update time of O(√m). Using a general sparsification technique, this bound was improved to O(√n) by Eppstein et al. [EppsteinGIN97]. For the dynamic spanning tree problem, this barrier was first broken by Henzinger and King [HenzingerK99], who gave a randomized algorithm with polylogarithmic amortized update time. Holm et al. [HolmLT01] later showed that polylogarithmic amortized update time can be obtained deterministically, also for the dynamic MST problem. In terms of worst-case guarantees, Kapron et al. [KapronKM13] were the first to present an approach with polylogarithmic worst-case update time. The main goal of Kapron et al. was to solve the dynamic connectivity problem, but their algorithm can also maintain a spanning tree against an oblivious adversary. Although there has been a significant amount of follow-up work in this area [HenzingerK01, HenzingerT97, Thorup00, Wulff-Nilsen13, GibbKKT15, HuangHKP17, Kejlberg-Rasmussen16], the worst-case update time for the MST problem stayed at O(√n) for a long time. This changed with a recent breakthrough of Nanongkai et al. [Wulff-Nilsen17, NanongkaiS17, NanongkaiSW17], who finally achieved a worst-case update time of n^{o(1)} for maintaining an MST. Note that for MSTs a simple reweighting can enforce a unique solution, and thus dynamic MST algorithms do not require the oblivious adversary assumption; they immediately work against an adaptive online adversary who may choose the next update after observing the output of the algorithm.

Our Results.

Our main result is a dynamic algorithm for maintaining a low average stretch spanning tree of an unweighted, undirected graph.

Theorem 1.1.

Given any unweighted, undirected graph undergoing edge insertions and deletions, there is a fully dynamic algorithm for maintaining a spanning forest of expected average stretch n^{o(1)} that has expected amortized update time n^{1/2 + o(1)}. These guarantees hold against an oblivious adversary.

This is the first non-trivial algorithm for this problem. Our stretch matches the seminal construction of Alon et al. [AlonKPW95], which we essentially modify for the dynamic setting. The update time bound of n^{1/2 + o(1)} is certainly competitive, but it does not seem to be an inherent barrier. If sufficiently high polynomial average stretch is tolerated, we can also be faster: a modification of our algorithm trades a polynomially larger average stretch for a polynomially smaller update time. In terms of the adversarial model, we must assume that the adversary is oblivious, which means that its choices are independent of the random choices of our algorithm. There are some situations where this assumption is not justified [Madry10], but the oblivious adversary is one of the standard models studied in this field. For example, all randomized dynamic spanner algorithms known so far [Elkin11, BaswanaKS12, BodwinK16] need to assume an oblivious adversary.

One of the main building blocks of our dynamic low stretch tree algorithm is the following dynamic algorithm for maintaining a low-diameter decomposition (LDD).

Theorem 1.2.

Given any unweighted, undirected multigraph undergoing edge insertions and deletions, there is a fully dynamic algorithm for maintaining a -decomposition (with clusters of strong radius and at most inter-cluster edges in expectation) that has expected amortized update time . The expected amortized number of edges to become inter-cluster edges after each update is . These guarantees hold against an oblivious adversary.

Our algorithm is based on the random-shift clustering of Miller et al. [MillerPX13], with many tweaks to make it work in a dynamic setting. In our analysis of the algorithm, we bound the amortized number of changes to the clustering per update, which is significantly smaller than the naive bound suggested by the update time. This is particularly important for hierarchical approaches, such as in our dynamic low-stretch tree algorithm, because a small bound on the number of amortized changes helps in controlling the number of induced updates to be processed within the hierarchy. We remark that, in the decremental setting, the expander decomposition and pruning techniques of Nanongkai et al. [NanongkaiSW17] yield a result similar to Theorem 1.2, but with certain overheads [Saranurak18]. (Note that the decremental algorithm of Nanongkai et al. has a stronger worst-case update time guarantee. However, this guarantee does not carry over to the fully dynamic setting, at least not by using our reduction in Section 5.3, which to the best of our understanding cannot be de-amortized with standard techniques.) We believe that our solution is arguably simpler than the dynamic expander decomposition [NanongkaiSW17]. Furthermore, our dynamic random-shift clustering algorithm may be of independent interest.

A direct consequence of our dynamic random-shift clustering algorithm is the following new result for the dynamic spanner problem.

Theorem 1.3.

Given any unweighted, undirected graph undergoing edge insertions and deletions, there is a fully dynamic algorithm for maintaining a spanner of stretch 2k − 1 and expected size O(n^{1+1/k} log n) that has expected amortized update time O(k^2 log^2 n). These guarantees hold against an oblivious adversary.

Remember that the decremental algorithm of Baswana et al. [BaswanaKS12] maintains a spanner of stretch 2k − 1 whose expected size and total update time are each a factor of k larger than ours. Our new algorithm thus improves both the size and the update time by a factor of k. This is particularly relevant because the stretch/size trade-off of 2k − 1 vs. O(n^{1+1/k}) is tight under the girth conjecture: we exceed the conjectured optimal size by a smaller factor than prior work, where k might be as large as log n. When we restrict ourselves to the decremental setting, we achieve an even smaller spanner size, again saving a factor over Baswana et al. [BaswanaKS12]. To obtain Theorem 1.3, we employ our dynamic LDD algorithm in the sparse spanner construction of Elkin and Neiman [ElkinN17] and combine it with the dynamic spanner framework of Baswana et al. [BaswanaKS12]. We believe that this application is another demonstration of the usefulness of our dynamic LDD algorithm.

Structure of this Paper.

The remainder of this paper is structured as follows. We first settle the notation and terminology in Section 2. We then give a high-level overview of our results and techniques in Section 3. Finally, we provide all necessary details for our dynamic low-stretch tree (Section 4), our dynamic low-diameter decomposition (Section 5), and our dynamic spanner algorithm (Section 6).

2 Preliminaries

Graphs.

Let G = (V, E) be an undirected weighted graph with edge weights w, where |V| = n and |E| = m. If w(e) = 1 for all e ∈ E, then we say that G is an undirected unweighted graph. If E is a multiset, in which every element has some integer multiplicity of at least 1, then we call G a multigraph. For a subset C ⊆ V, let G[C] denote the subgraph of G induced by C. Throughout the paper we call C a cluster. For any positive integer k, a clustering of G is a partition of V into disjoint subsets V_1, …, V_k. We say that an edge is an intra-cluster edge if both its endpoints belong to the same cluster V_i for some i; otherwise, we say that it is an inter-cluster edge.

For any u, v ∈ V, let d_G(u, v) denote the length of a shortest path between u and v induced by the edge weights of the graph G. When G is clear from the context, we will omit the subscript. The strong diameter of a cluster C is the maximum length of a shortest path between two vertices in the graph induced by C, i.e., max_{u,v ∈ C} d_{G[C]}(u, v). In the following we define a low-diameter clustering of G.
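As an illustration of the definition (our sketch, with hypothetical names; unweighted graphs only), the strong diameter of a cluster can be computed by BFS restricted to the induced subgraph:

```python
from collections import deque

def strong_diameter(edges, cluster):
    """Strong diameter of a cluster: the maximum shortest-path distance
    between two cluster vertices, using only edges inside the cluster."""
    cluster = set(cluster)
    adj = {v: [] for v in cluster}
    for u, v in edges:
        if u in cluster and v in cluster:  # restrict to the induced subgraph
            adj[u].append(v)
            adj[v].append(u)
    best = 0
    for s in cluster:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        best = max(best, max(dist.values()))
    return best

# On the path 0-1-2-3, the cluster {0, 1, 2} has strong diameter 2:
# the edge (2, 3) leaves the cluster and is ignored.
print(strong_diameter([(0, 1), (1, 2), (2, 3)], [0, 1, 2]))  # → 2
```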

Definition 2.1.

Let d be any positive integer and β ∈ (0, 1]. Given an undirected, unweighted graph G = (V, E), a (β, d)-decomposition of G is a partition of V into disjoint subsets V_1, …, V_k such that:

  1. The strong diameter of each G[V_i] is at most d.

  2. The number of edges with endpoints belonging to different subsets is at most βm.

In the (β, d)-decompositions maintained by the randomized dynamic algorithms in this paper, the bound in Condition 2 holds in expectation.

Let H be a subgraph of G. For any pair of vertices u, v ∈ V, we let d_H(u, v) denote the length of a shortest path between u and v in H. We define the stretch of an edge (u, v) ∈ E with respect to H to be stretch_H(u, v) = d_H(u, v) / d_G(u, v). The stretch of H is defined as the maximum stretch of any edge (u, v) ∈ E. If H = T is a forest, then the average stretch over all edges of G with respect to T is given by avg-stretch(T) = (1/m) · Σ_{(u,v) ∈ E} stretch_T(u, v).
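The definition can be made concrete with a short sketch (ours, assuming an unweighted graph, so the stretch of an edge is simply the distance between its endpoints in the forest; the helper names are hypothetical):

```python
from collections import deque

def forest_dist(adj, src, dst):
    """Distance between src and dst in an unweighted forest via BFS."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return float("inf")  # dst lies in a different component of the forest

def average_stretch(nodes, edges, tree_edges):
    """Average, over all graph edges, of their stretch in the spanning forest."""
    adj = {v: [] for v in nodes}
    for u, v in tree_edges:
        adj[u].append(v)
        adj[v].append(u)
    return sum(forest_dist(adj, u, v) for u, v in edges) / len(edges)

# Ring on 4 nodes with a path as spanning tree: edge stretches 1, 1, 1, 3,
# so the average stretch is 6/4 = 1.5.
print(average_stretch(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)],
                      [(0, 1), (1, 2), (2, 3)]))  # → 1.5
```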

Exponential Distribution.

For a parameter λ > 0, the probability density function of the exponential distribution is given by f(x; λ) = λe^{−λx} for x ≥ 0 and f(x; λ) = 0 for x < 0. The mean of the exponential distribution is 1/λ.

Dynamic Algorithms.

Consider a graph with n nodes undergoing updates in the form of edge insertions and edge deletions. An incremental algorithm is a dynamic algorithm that can only handle insertions, a decremental algorithm can only handle deletions, and a fully dynamic algorithm can handle both. We follow the convention that a fully dynamic algorithm starts from an empty graph with n nodes. The (maximum) running time spent by a dynamic algorithm for processing each update (before the next update arrives) is called the update time. We say that a dynamic algorithm has (expected) amortized update time t if its total running time spent for processing any sequence of q updates is bounded by q · t (in expectation). In this paper, we assume that the updates to the graph are performed by an oblivious adversary who fixes the sequence of updates in advance, i.e., the adversary is not allowed to adapt its sequence of updates as the algorithm proceeds. This in particular implies that for randomized dynamic algorithms the sequence of updates is independent of the random choices of the algorithm.

3 Technical Overview

In the following, we provide some intuition for our approach and highlight the main ideas of this paper.

Low Average Stretch Spanning Tree.

Before we give an overview of our new approach, we review the straightforward algorithm, which achieves an update time that is roughly linear in n. The idea is to maintain a cut sparsifier [BenczurK15] of the underlying graph and to do a recomputation from scratch on the cut sparsifier after each update. The state-of-the-art static algorithm of Abraham and Neiman [AbrahamN12] computes a spanning tree of average stretch O(log n log log n) in time Õ(m) on an input graph with m edges. The fully dynamic algorithm of Abraham et al. [AbrahamDKKP16] maintains a cut sparsifier with Õ(n) edges in polylogarithmic update time with high probability. An argument of Koutis, Levin, and Peng [KoutisLP16] shows that a spanning tree of low total stretch with respect to a cut sparsifier of G also has low total stretch with respect to G. Thus, by combining these two algorithms (and setting the sparsifier's approximation parameter to a constant), we can maintain a spanning tree of polylogarithmic average stretch with update time Õ(n) with high probability.

A first idea is to decrease the update time with the help of the dynamic low-diameter decomposition of Theorem 1.2. This algorithm can maintain a -decomposition, i.e., a partitioning of the graph into clusters such that there are at most inter-cluster edges and the (strong) radius of each cluster is at most . In particular, each cluster has a designated center and the algorithm maintains a spanning tree of each cluster in which every node is at distance at most from the center. Now consider the following simple dynamic algorithm:

  1. Maintain a -decomposition of the input graph .

  2. Contract the clusters in the decomposition to single nodes and maintain a graph containing one node for each cluster and all inter-cluster edges.

  3. Compute a low-stretch tree of after each update to  using a static algorithm.

  4. Maintain as the “expansion” of in which every node in is replaced by the spanning tree of the cluster representing the node.
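Step 2 of the scheme above, contracting clusters and keeping inter-cluster edges with multiplicities, can be sketched as follows (our illustrative code, not the paper's implementation; `contract` and `cluster_of` are hypothetical names):

```python
from collections import Counter

def contract(edges, cluster_of):
    """Contract each cluster to a single representative node; the contracted
    multigraph keeps one edge per inter-cluster edge of the original graph,
    so parallel edges are recorded via multiplicities."""
    multi = Counter()
    for u, v in edges:
        cu, cv = cluster_of[u], cluster_of[v]
        if cu != cv:  # intra-cluster edges disappear under contraction
            multi[(min(cu, cv), max(cu, cv))] += 1
    return multi

# Ring on 4 nodes, clustered into {0, 1} (center 0) and {2, 3} (center 2):
# the two inter-cluster edges (1, 2) and (3, 0) become one contracted
# edge (0, 2) of multiplicity 2.
print(contract([(0, 1), (1, 2), (2, 3), (3, 0)], {0: 0, 1: 0, 2: 2, 3: 2}))
```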

As the clusters are non-overlapping it is immediate that is indeed a tree. To analyze the average stretch of , we distinguish between inter-cluster edges (with endpoints in different clusters) and intra-cluster edges (with endpoints in the same cluster). Each intra-cluster edge has stretch at most as the spanning tree of the cluster containing both endpoints of such an edge is a subtree of . Each inter-cluster edge has polylogarithmic average stretch in with respect to . By expanding the clusters, the length of each path in increases by a factor of at most . Thus, inter-cluster edges have an average stretch of in . As there are at most intra-cluster edges and at most inter-cluster edges, the total stretch over all edges is at most , which gives an average stretch of .

To bound the update time, first observe that the number of inter-cluster edges is . Thus, has at most edges and therefore the static algorithm for computing takes time per update. Together with the update time of the dynamic LDD, we obtain an update time of . By setting , we would already obtain an algorithm for maintaining a tree of average stretch with update time .

We can improve the stretch and still keep the update time sublinear by a hierarchical approach in which the scheme of clustering and contracting is repeated times. Observe that the -th contracted graph will contain many edges and, in the final tree , the stretch of each edge disappearing with the -th contraction is , which can be obtained by expanding the contracted low-diameter clusters. After  contractions, there are at most edges remaining and they have polylogarithmic average stretch in  with respect to , which, again by expanding clusters, implies an average stretch of at most in  with respect to . This leads to a total stretch of , which gives an average stretch of . To bound the update time, observe that updates propagate within the hierarchy as each change to inter-cluster edges of one layer will appear as an update in the next layer. Each operation in the dynamic LDD algorithm will perform at most one change to the clustering, i.e., the number of changes propagated to the next layer of the hierarchy is at most per update to the current layer. This will result in an update time of in the -th contracted graph per update to the input graph. The update time for maintaining the tree  will then be , which is at best, i.e., no better than the simpler approach above. A tighter analysis can improve this update time significantly: The second part of Theorem 1.2 bounds the amortized number of edges to become inter-cluster edges by . This results in an update time of . By setting and we can roughly balance these two terms in the update time and thus arrive at an update time of while the average stretch is . The update time reduces to by running the algorithm on a sparse cut sparsifier.

Low Diameter Decomposition.

To obtain a suitable algorithm for dynamically maintaining a low-diameter decomposition, we follow the widespread paradigm of first designing a decremental – i.e., deletions-only – algorithm and then extending it to a fully dynamic one. We can show that, for any sequence of at most m edge deletions (where m is the initial number of edges in the graph), a -decomposition can be maintained with expected total update time . Here, we build upon the work of Miller et al. [MillerPX13], who showed that exponential random-shift clustering produces clusters of low radius such that each edge has only a small probability of going between clusters. This clustering is obtained by first having each node sample a random shift value from the exponential distribution and then determining the cluster center of each node u as the node v minimizing the difference between the distance from u to v and v's shift value.

In the parallel algorithm of [MillerPX13], the clustering is obtained by essentially computing one single-source shortest path tree of maximum depth . To make this computation efficient333For their parallel algorithm, efficiency in particular means low depth of the computation tree., the shift values are rounded to integer values and the fractional values are only considered for tie-breaking. We observe that one can maintain this bounded-depth shortest path tree with a simple modification of the well-known Even-Shiloach algorithm that spends time every time a node increases its level (distance from the source) in the tree. By rounding to integer edge weights, similar to [MillerPX13], we can make sure that the number of level increases to consider is at most for each node. Note however that this standard argument charging each node only when it increases its level is not enough for our purpose: the assignment of nodes to clusters follows the fractional values for tie-breaking, which might result in some node changing its cluster – and in this way also spend time – without increasing its level. As has been observed in [MillerPX13], the fractional values of the shift values effectively induce a random permutation on the nodes. Using a similar argument as in the analysis of the dynamic spanner algorithm of Baswana et al. [BaswanaKS12], we can thus show that in expectation each node changes its cluster at most times while staying at a particular level. This results in a total update time of and we can similarly argue that the total number of edges to ever become inter-cluster edges during the whole decremental algorithm is .
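A static version of the random-shift clustering described above can be sketched as follows (our simplified rendering of the idea, not the paper's dynamic algorithm: shift values are kept fractional, and ties are broken by the heap order rather than by the careful integer rounding and fractional tie-breaking of [MillerPX13]):

```python
import heapq
import random

def random_shift_clustering(adj, beta, seed=0):
    """Assign each node u to the center v minimizing d(u, v) - delta_v,
    where delta_v ~ Exp(beta). Implemented as one Dijkstra run from a
    virtual source connected to every v with weight (max_shift - delta_v)."""
    rng = random.Random(seed)
    delta = {v: rng.expovariate(beta) for v in adj}
    max_shift = max(delta.values())
    center = {}
    heap = [(max_shift - delta[v], v, v) for v in adj]  # (key, node, center)
    heapq.heapify(heap)
    while heap:
        d, u, c = heapq.heappop(heap)
        if u in center:
            continue  # u was already claimed by a center with a smaller key
        center[u] = c
        for w in adj[u]:
            if w not in center:
                heapq.heappush(heap, (d + 1, w, c))  # unit edge weights
    return center

path = {0: [1], 1: [0, 2], 2: [1]}
print(random_shift_clustering(path, beta=1.0))
```

With a larger beta, the shift values shrink and the clustering fragments into more clusters of smaller radius, matching the radius/cut-probability trade-off sketched in the text.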

To obtain a fully dynamic algorithm, we observe that any LDD can tolerate a certain number of insertions to the graph. A -decomposition has at most inter-cluster edges and thus, if we insert edges to the graph without changing the decomposition, we still have an -decomposition. We can exploit this observation by simply running a decremental algorithm that is restarted from scratch after each phase of updates to the graph. We then deal with edge deletions by delegating them to the decremental algorithm, and we deal with edge insertions in a lazy way by doing nothing. This results in a total time of that is amortized over the updates to the graph, i.e., amortized update time . Similarly, the amortized number of edges to become inter-cluster edges after an update is .
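The phase-based reduction just described can be sketched as a small wrapper (our illustrative skeleton; the class and parameter names are ours, and the decremental structure is abstracted behind a `build` callback):

```python
class LazyFullyDynamicLDD:
    """Decremental-to-fully-dynamic reduction: deletions are delegated to a
    decremental structure, insertions are buffered lazily, and the structure
    is rebuilt from scratch once per phase of updates."""

    def __init__(self, edges, phase_length, build):
        self.build = build              # constructs a fresh decremental structure
        self.phase_length = phase_length
        self._restart(edges)

    def _restart(self, edges):
        self.edges = set(edges)
        self.decomp = self.build(self.edges)
        self.updates = 0

    def _tick(self):
        self.updates += 1
        if self.updates >= self.phase_length:
            self._restart(self.edges)   # phase over: rebuild from scratch

    def insert(self, e):
        self.edges.add(e)               # lazy: decomposition left unchanged
        self._tick()

    def delete(self, e):
        self.edges.discard(e)
        self.decomp.delete(e)           # delegated to the decremental algorithm
        self._tick()
```

Choosing the phase length proportional to the number of tolerable extra inter-cluster edges lets the rebuild cost amortize over the updates of the phase.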

4 Dynamic Low Average Stretch Spanning Tree

Our dynamic algorithms for maintaining a low-average stretch spanning tree will use a hierarchy of low-diameter decompositions. We first analyze very generally the update time for maintaining such a decomposition and explain how to obtain a spanning tree from this hierarchy in a natural way, similar to the construction of Alon et al. [AlonKPW95]. We then analyze two different approaches for maintaining the tree, which will give us two complementary points in the design space of dynamic low-stretch tree algorithms.

Consider some integer parameter and parameters . For each , let be the fully dynamic algorithm for maintaining a -decomposition as given by Theorem 1.2. Our LDD-hierarchy consists of multigraphs where is the input graph and, for each , the graph is obtained by contracting according to a -decomposition of as follows: For every node , let denote the center of the cluster to which is assigned in the -decomposition of . Now define the edge multiset to contain, for every edge whose endpoints lie in different clusters, one edge between the corresponding cluster centers, where the multiplicity of each edge is equal to the number of edges between the corresponding clusters in . Remember that all graphs have the same set of nodes, but nodes that do not serve as cluster centers in will be isolated in . It might seem counter-intuitive at first that these isolated nodes are not removed from the graph, but observe that in our dynamic algorithm nodes might start or stop being cluster centers over time. By keeping all nodes in all subgraphs, we avoid having to deal with insertions or deletions of nodes. (Note that it is easy to explicitly maintain the sets of isolated and non-isolated nodes by observing the degrees.)

Note that the -decomposition of guarantees that in expectation, which implies the following bound.

Observation 4.1.

For every , in expectation. (Note that for the first level the product is empty and thus equal to 1.)

We now analyze the update time for maintaining this LDD-hierarchy under insertions and deletions to the input graph . Note that for each level  of the hierarchy, changes made to the graph might result in the dynamic algorithm  making changes to the -decomposition of . In particular, edges of could start or stop being inter-cluster edges in the decomposition, which in turn leads to edges being added to or removed from . Thus, a single update to the input graph  might result in a blow-up of induced updates to be processed by the algorithms . To limit this blow-up, we use an additional property of our LDD-decomposition given in Theorem 1.2, namely the non-trivial bound on the number of edges to become inter-cluster edges after each update.

Lemma 4.2.

The LDD-hierarchy can be maintained with an expected amortized update time of $O(t \cdot (s + 1)^k)$, where $t$ denotes the expected amortized update time and $s$ the expected amortized number of edges that become inter-cluster edges per update, as guaranteed by Theorem 1.2.

Proof.

For every $1 \leq i \leq k$ and every $q \geq 1$, define the following random variables:

  • $T_i(q)$: The total time spent by algorithm $\mathcal{A}_i$ for processing any sequence of $q$ updates to $G_{i-1}$.

  • $U_i(q)$: The total number of changes performed to $G_i$ by $\mathcal{A}_i$ while processing any sequence of $q$ updates to $G_{i-1}$.

  • $T_{\geq i}(q)$: The total time spent by algorithms $\mathcal{A}_i, \ldots, \mathcal{A}_k$ for processing any sequence of $q$ updates to $G_{i-1}$.

Note that by Theorem 1.2 there are bounds $t$ and $s$ such that $\mathbb{E}[T_i(q)] \leq q \cdot t$ and $\mathbb{E}[U_i(q)] \leq q \cdot s$ for every $i$ and every $q$, where $t$ is the expected amortized update time and $s$ is the expected amortized number of edges that become inter-cluster edges per update. We will show by induction on $i$, going from $i = k$ down to $i = 1$, that $\mathbb{E}[T_{\geq i}(q)] \leq q \cdot t \cdot (s + 1)^{k - i}$, which with $i = 1$ implies the claim we want to prove.

In the base case $i = k$, we have $T_{\geq k}(q) = T_k(q)$ and know by Theorem 1.2 that algorithm $\mathcal{A}_k$ maintaining the decomposition of $G_{k-1}$ spends expected amortized time $t$ per update to $G_{k-1}$, i.e., $\mathbb{E}[T_{\geq k}(q)] \leq q \cdot t$ for any $q$. For the inductive step, consider some $i < k$ and any $q$. Any sequence of $q$ updates to $G_{i-1}$ induces at most $U_i(q)$ updates to $G_i$. Each of those updates has to be processed by the algorithms $\mathcal{A}_{i+1}, \ldots, \mathcal{A}_k$. We thus have $T_{\geq i}(q) \leq T_i(q) + T_{\geq i+1}(U_i(q))$.
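The shape of this induction can be illustrated by unrolling the propagation numerically. In the sketch below, t and s are hypothetical per-level bounds standing in for the guarantees of Theorem 1.2: processing q updates at one level costs about q·t time and induces at most q·s updates at the next level:

```python
def hierarchy_cost(q, k, t, s):
    """Total cost of propagating q updates through k levels, where each
    level charges t per update and forwards at most s induced updates."""
    total = 0
    for _ in range(k):
        total += q * t   # time spent by the algorithm at this level
        q *= s           # updates induced for the next level
    return total

# With t = 1 and s = 2, three levels cost 1 + 2 + 4 = 7 <= t * (s + 1)**3.
print(hierarchy_cost(1, 3, 1, 2))  # 7
```

The geometric growth of the forwarded updates is what produces the $(s + 1)^k$ factor in the bound.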

To bound $\mathbb{E}[T_{\geq i}(q)]$, recall first the expectations of the involved random variables. As by Theorem 1.2 the algorithm $\mathcal{A}_i$ maintaining the decomposition of $G_{i-1}$ has expected amortized update time $t$, it spends an expected total time of $\mathbb{E}[T_i(q)] \leq q \cdot t$ for any sequence of $q$ updates to $G_{i-1}$. Furthermore, over the whole sequence of $q$ updates, the expected number of edges to ever become inter-cluster edges in the decomposition of $G_{i-1}$ is at most $q \cdot s$. This induces at most $q \cdot s$ updates (insertions or deletions) to the graph $G_i$, i.e., $\mathbb{E}[U_i(q)] \leq q \cdot s$. By the induction hypothesis, the expected amortized update time spent by $\mathcal{A}_{i+1}, \ldots, \mathcal{A}_k$ for any sequence of $q'$ updates to $G_i$ is $t \cdot (s + 1)^{k - i - 1}$, i.e., $\mathbb{E}[T_{\geq i+1}(q')] \leq q' \cdot t \cdot (s + 1)^{k - i - 1}$.

Now by linearity of expectation we get

$\mathbb{E}[T_{\geq i}(q)] \leq \mathbb{E}[T_i(q)] + \mathbb{E}[T_{\geq i+1}(U_i(q))],$

and by the law of total expectation we can bound $\mathbb{E}[T_{\geq i+1}(U_i(q))]$ as follows:

$\mathbb{E}[T_{\geq i+1}(U_i(q))] = \mathbb{E}\big[\mathbb{E}[T_{\geq i+1}(U_i(q)) \mid U_i(q)]\big] \leq \mathbb{E}[U_i(q)] \cdot t \cdot (s + 1)^{k - i - 1} \leq q \cdot s \cdot t \cdot (s + 1)^{k - i - 1}.$

We thus get

$\mathbb{E}[T_{\geq i}(q)] \leq q \cdot t + q \cdot s \cdot t \cdot (s + 1)^{k - i - 1} \leq q \cdot t \cdot (s + 1)^{k - i},$

as desired. ∎

Given any spanning forest $T$ of $G_k$, there is a natural way of defining a spanning forest $F$ of $G$ from the LDD-hierarchy. To this end, we first formally define the contraction of a node of $G$ to a cluster center of $G_i$ (for $0 \leq i \leq k$) as follows: Every node of $G$ is contracted to itself in $G_0$, and, for every $1 \leq i \leq k$, a node $v$ of $G$ is contracted to $y$ in $G_i$ if $v$ is contracted to some node $x$ in $G_{i-1}$ and $y = c_{i-1}(x)$ is the center of the cluster of $x$ in the decomposition of $G_{i-1}$. Similarly, for every $0 \leq i \leq k$, an edge $(u, v)$ of $G$ is contracted to an edge $(x, y)$ of $G_i$ if $u$ is contracted to $x$ and $v$ is contracted to $y$. Now define $F$ inductively as follows: We let $F_0$ be the forest consisting of the shortest path trees of the clusters in the decomposition of $G_0$. For every $1 \leq i \leq k - 1$, we obtain $F_i$ from $F_{i-1}$ and the decomposition of $G_i$ by additionally including, for every edge $e$ in a shortest path tree of one of the clusters, an edge of $G$ contracted to $e$ in $G_i$. Finally, $F$ is obtained from $F_{k-1}$ by additionally including, for every edge $e$ in the spanning forest $T$ of $G_k$, an edge of $G$ contracted to $e$ in $G_k$. As the clusters in each decomposition are non-overlapping, we are guaranteed that $F$ is indeed a forest. Note that, apart from the time needed to maintain $T$, we can maintain $F$ in the same asymptotic update time as the LDD-hierarchy (up to logarithmic factors).
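Undoing the contractions level by level can be sketched as follows; here preimage[i] is assumed to map each edge of the level-(i+1) graph to one chosen edge of the level-i graph that contracts to it:

```python
def expand_edge(e, preimage):
    """Map an edge of the top-level graph back to an edge of the input
    graph by undoing one contraction per level, top-down."""
    for level_map in reversed(preimage):
        e = level_map[e]  # replace e by a chosen edge one level below
    return e

# Two-level toy example: G_1 edge (0, 3) was contracted from G_0 edge (1, 3).
preimage = [{(0, 3): (1, 3)}]
print(expand_edge((0, 3), preimage))  # (1, 3)
```

In this picture, the forest $F$ consists of the expansions of all shortest-path-tree edges of the clusters at every level, together with the expansions of the edges of $T$.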

We now partially analyze the stretch of $F$ with respect to $G$.

Lemma 4.3.

For every $0 \leq i \leq k - 1$ and for every pair of nodes $u$ and $v$ that are contracted to the same cluster center in $G_{i+1}$, there is a path from $u$ to $v$ in $F$ of length at most $O(\log n / \beta)^{i+1}$ in expectation.

Proof.

The proof is by induction on $i$. The induction base $i = 0$ is straightforward: For $u$ and $v$ to be contracted to the same cluster center in $G_1$, they must be contained in the same cluster $C$ of the decomposition of $G_0$ maintained by $\mathcal{A}_1$. Remember that $C$ has strong diameter at most $O(\log n / \beta)$. Thus, in the shortest path tree of $C$ there is a path of length at most $O(\log n / \beta)$ from $u$ to $v$ using edges of $G$. By the definition of $F$, this path is also present in $F$.

For the inductive step, let $i \geq 1$ and let $x$ and $y$ denote the cluster centers to which $u$ and $v$ are contracted in $G_i$, respectively. For $u$ and $v$ to be contracted to the same cluster center in $G_{i+1}$, $x$ and $y$ must be contained in the same cluster $C$ of the decomposition of $G_i$ maintained by $\mathcal{A}_{i+1}$. As $C$ has strong diameter at most $O(\log n / \beta)$, there is a path $P$ from $x$ to $y$ of length at most $O(\log n / \beta)$ in the shortest path tree of $C$. Let $x = a_0, a_1, \ldots, a_\ell = y$ denote the nodes on $P$, where $\ell \leq O(\log n / \beta)$. By the definition of our forest $F$, there must exist edges $e_1 = (u_1, v_1), \ldots, e_\ell = (u_\ell, v_\ell)$ of $G$ such that

  • $e_j$ is contained in $F$ for all $1 \leq j \leq \ell$,

  • $u$ and $u_1$ are contracted to the same cluster center in $G_i$,

  • $v_\ell$ and $v$ are contracted to the same cluster center in $G_i$, and

  • $v_j$ and $u_{j+1}$ are contracted to the same cluster center in $G_i$ for all $1 \leq j \leq \ell - 1$.

By the induction hypothesis we know that for every $1 \leq j \leq \ell - 1$ there is a path of length at most $O(\log n / \beta)^i$ from $v_j$ to $u_{j+1}$ in $F$ in expectation. Paths of the same maximum length also exist from $u$ to $u_1$ and from $v_\ell$ to $v$. It follows that there is a path from $u$ to $v$ in $F$ of length at most

$\ell + (\ell + 1) \cdot O(\log n / \beta)^i \leq O(\log n / \beta)^{i+1},$

as desired. ∎

To analyze the stretch of $F$, we will use the following terminology: we let the level of an edge $(u, v)$ of $G$ be the largest $i$ such that the edge is contracted to some edge $e'$ in $G_i$. Remember that $E(G_i)$ is a multiset of edges containing as many edges $(x, y)$ as there are edges $(u, v)$ of $G$ with $u$ and $v$ being contracted to different cluster centers $x$ and $y$ in $G_i$, respectively. Thus, the number of edges at level $i$ is at most $|E(G_i)|$. Note that for an edge $(u, v)$ at level $i \leq k - 1$, $u$ and $v$ must be contracted to the same cluster center in $G_{i+1}$. Therefore, by Lemma 4.3, the stretch of edges at level $i$ in $G$ with respect to $F$ is at most $O(\log n / \beta)^{i+1}$ in expectation. The contribution to the total stretch of $F$ by edges at level $i$ is thus at most

$\beta^i m \cdot O(\log n / \beta)^{i+1}. \qquad (1)$
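The level of an edge can be computed by repeatedly applying the per-level center assignments until both endpoints merge; a small sketch with hypothetical cluster assignments:

```python
def edge_level(u, v, center_maps):
    """Largest i such that the edge (u, v) still connects two different
    cluster centers after i contraction steps."""
    level = 0
    for i, center in enumerate(center_maps, start=1):
        u, v = center[u], center[v]
        if u == v:        # endpoints merge: the edge disappears here
            break
        level = i
    return level

# Level 1 merges {0, 1, 2} -> 0 and {3, 4} -> 3; level 2 merges 0, 3 -> 0.
center_maps = [{0: 0, 1: 0, 2: 0, 3: 3, 4: 3}, {0: 0, 3: 0}]
print(edge_level(1, 3, center_maps), edge_level(0, 1, center_maps))  # 1 0
```

The edge $(1, 3)$ survives the first contraction as an inter-cluster edge and only disappears at the second, so it has level 1, while $(0, 1)$ is intra-cluster from the start and has level 0.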

To now obtain a fully dynamic algorithm for maintaining a low-stretch spanning forest, it remains to plug in a concrete algorithm for maintaining the spanning forest $T$ of $G_k$, together with suitable choices of the parameters. We analyze two choices for dynamically maintaining $T$. The first is the “lazy” approach of recomputing a low-stretch spanning forest of $G_k$ from scratch after each update to the input graph. The second is a fully dynamic spanning forest algorithm with only trivial stretch guarantees.

Theorem 4.4 (Restatement of Theorem 1.1).

Given any unweighted, undirected graph $G$ undergoing edge insertions and deletions, there is a fully dynamic algorithm for maintaining a spanning forest of $G$ with expected average stretch $n^{o(1)}$ that has expected amortized update time $n^{1/2 + o(1)}$. These guarantees hold against an oblivious adversary.

Proof.

It is sufficient to provide an algorithm with update time $m^{1/2 + o(1)}$, as we can run this algorithm on a cut sparsifier with $\tilde{O}(n)$ edges, at the price of a polylogarithmic factor in the stretch; such a sparsifier can be maintained by a dynamic algorithm with polylogarithmic update time. We provide the details of this approach in Appendix A.

We set $k = \lceil \sqrt{\log m} \rceil$ and $\beta = m^{-1/(2k)}$ and maintain an LDD-hierarchy with these parameters. The spanning forest $F$ is obtained by recomputing a low-average-stretch spanning forest $T$ of $G_k$ from scratch after each update to $G$. Note that this recomputation is performed after having updated all graphs in the hierarchy. For this recomputation we use the state-of-the-art static algorithm for computing a spanning forest of the multigraph $G_k$ with total stretch $O(|E(G_k)| \log n \log \log n)$ in time $O(|E(G_k)| \log n \log \log n)$.
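The arithmetic behind one plausible parameter choice can be checked directly. The sketch below is an assumption for illustration, not necessarily the authors' exact setting: with beta = m**(-1/(2k)), the top-level graph has about beta**k * m = sqrt(m) edges in expectation, so a from-scratch rebuild of the forest on G_k costs roughly sqrt(m) (up to lower-order factors) per update:

```python
import math

m = 10**6
k = math.ceil(math.sqrt(math.log2(m)))   # k = 5 for m = 10^6
beta = m ** (-1 / (2 * k))               # so beta**k = m**(-1/2)

# Expected number of edges surviving to the top level of the hierarchy:
print(round(beta ** k * m), round(math.sqrt(m)))  # 1000 1000
```

With this choice, $\beta^{-1} = m^{1/(2k)} = n^{o(1)}$, which is why the per-level $O(\log n / \beta)$ factors only contribute lower-order terms to the stretch.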

By Equation (1), the contribution to the total stretch of $F$ by edges at level $i \leq k - 1$ is at most $\beta^i m \cdot O(\log n / \beta)^{i+1} = \beta^{-1} m \cdot O(\log n)^{i+1}$, which is $m \cdot n^{o(1)}$ for our choice of parameters. To bound the contribution of edges at level $k$, consider some edge $(u, v)$ at level $k$ and let $x$ and $y$ denote the cluster centers to which $u$ and $v$ are contracted in $G_k$, respectively. Let $P$ denote the path from $x$ to $y$ in $T$. Using similar arguments as in the proof of Lemma 4.3, the contracted nodes and edges of $P$ can be expanded to a path from $u$ to $v$ in $F$ of length at most $(|P| + 1) \cdot O(\log n / \beta)^k$. Thus, the contribution of edges at level $k$ is at most the total stretch of $T$ in $G_k$ times $O(\log n / \beta)^k$, and the total stretch of $F$ with respect to $G$ is

$k \cdot m \cdot n^{o(1)} + O(\beta^k m \log n \log \log n) \cdot O(\log n / \beta)^k = m \cdot n^{o(1)},$

which gives an average stretch of $n^{o(1)}$.

Using the bound of Lemma 4.2 for the update time of the LDD-hierarchy and the trivial bound of $\tilde{O}(|E(G_k)|) = \tilde{O}(\beta^k m) = \tilde{O}(m^{1/2})$ for recomputing $T$ from scratch after each update, the expected amortized update time for maintaining $F$ is $m^{1/2 + o(1)}$, as desired. ∎

Theorem 4.5.

Given any unweighted, undirected graph undergoing edge insertions and deletions, there is a fully dynamic algorithm for maintaining a spanning forest of expected average stretch that has expected amortized update time for every . These guarantees hold against an oblivious adversary.

Proof.

We maintain an LDD-hierarchy with suitable parameters $k$ and $\beta$. The spanning forest $F$ is obtained by fully dynamically maintaining a spanning forest $T$ of $G_k$ using any algorithm with polylogarithmic update time.

By Equation (1), the contribution to the total stretch of $F$ by edges at level $i \leq k - 1$ is at most $\beta^i m \cdot O(\log n / \beta)^{i+1}$. For every edge $(u, v)$ at level $k$ with $u$ contracted to $x$ and $v$ contracted to $y$ in $G_k$, there is a path from $x$ to $y$ in $T$ that by undoing the contractions can be expanded to a path from $u$ to $v$ in $F$, which trivially has length at most $n$. Thus, the contribution by each edge at level $k$ is at most $n$. As for every $0 \leq i \leq k$ there are at most $\beta^i m$ edges at level $i$ in expectation, we can bound the expected total stretch of $F$ with respect to $G$ as follows:

$\sum_{i=0}^{k-1} \beta^i m \cdot O(\log n / \beta)^{i+1} + \beta^k m \cdot n.$

This gives an average stretch of $\sum_{i=0}^{k-1} \beta^i \cdot O(\log n / \beta)^{i+1} + \beta^k n$. We now simplify these two terms. Exploiting that $\beta^i \cdot \beta^{-(i+1)} = \beta^{-1}$ for all $i$, we get