
Simple Dynamic Spanners with Near-optimal Recourse against an Adaptive Adversary

Designing dynamic algorithms against an adaptive adversary whose performance matches that of algorithms assuming an oblivious adversary is a major research program in the field of dynamic graph algorithms. One of the prominent examples whose oblivious-vs-adaptive gap remains maximally large is the fully dynamic spanner problem; there exist algorithms assuming an oblivious adversary with near-optimal size-stretch trade-off using only polylog(n) update time [Baswana, Khurana, and Sarkar TALG'12; Forster and Goranci STOC'19; Bernstein, Forster, and Henzinger SODA'20], while against an adaptive adversary, even when we allow infinite time and only count recourse (i.e., the number of edge changes per update in the maintained spanner), all previous algorithms with stretch at most log^5(n) require at least Ω(n) amortized recourse [Ausiello, Franciosa, and Italiano ESA'05]. In this paper, we completely close this gap with respect to recourse by showing algorithms against an adaptive adversary with near-optimal size-stretch trade-off and recourse. More precisely, for any k ≥ 1, our algorithm maintains a (2k-1)-spanner of size O(n^{1+1/k} log n) with O(log n) amortized recourse, which is optimal in all parameters up to an O(log n) factor. As a step toward algorithms with small update time (not just recourse), we show another algorithm that maintains a 3-spanner of size Õ(n^{1.5}) with polylog(n) amortized recourse and, simultaneously, Õ(√n) worst-case update time.



1 Introduction

Increasingly, algorithms are used interactively for data analysis, decision making, and, classically, as data structures. Often it is not realistic to assume that a user or an adversary is oblivious to the outputs of the algorithm; they can be adaptive in the sense that their updates and queries to the algorithm may depend on the previous outputs they saw. Unfortunately, many classical algorithms give strong guarantees only when assuming an oblivious adversary. This calls for the design of algorithms that work against an adaptive adversary and whose performance matches that of algorithms assuming an oblivious adversary. Driven by this question, there have been exciting lines of work across different communities in theoretical computer science, including streaming algorithms against an adaptive adversary [ben2020framework, hasidim2020adversarially, woodruff2020tight, alon2021adversarial, kaplan2021separating, Braverman2021adversarial], statistical algorithms against an adaptive data analyst [hardt2014preventing, dwork2015preserving, bassily2021algorithmic, steinke2017tight], and very recent algorithms for machine unlearning [gupta2021adaptive].

In the area of this paper, namely dynamic graph algorithms, a continuous effort has also been put into designing algorithms against an adaptive adversary. This is witnessed by dynamic algorithms for maintaining spanning forests [holm2001poly, NanongkaiS17, Wulff-Nilsen17, NanongkaiSW17, ChuzhoyGLNPS19], shortest paths [BernsteinC16, Bernstein17, BernsteinChechikSparse, ChuzhoyK19, ChuzhoyS20, gutenberg2020decremental, gutenberg2020deterministic, GutenbergWW20, Chuzhoy21], matching [BhattacharyaHI15, BhattacharyaHN16, BhattacharyaHN17, BhattacharyaK19, Wajc19, BhattacharyaK21deterministic], and more. This development has led to powerful new tools, such as the expander decomposition and hierarchy [SaranurakW19, GoranciRST20, liS2021], applicable beyond dynamic algorithms [Li21, li2021nearly, abboud2021apmf, zhang2021faster], and to other exciting applications such as the first almost-linear time algorithms for many flow and cut problems [BrandLNPSSSW20, BrandLLSS0W21, Chuzhoy21, BernsteinGS21]. Nevertheless, for many fundamental dynamic graph problems, including graph sparsifiers [AbrahamDKKP16], reachability [BernsteinPW19], and directed shortest paths [gutenberg2020decremental], the performance gap between algorithms against an oblivious and an adaptive adversary remains large, waiting to be explored and, hopefully, closed.

One of the most prominent dynamic problems whose oblivious-vs-adaptive gap is maximally large is the fully dynamic spanner problem [AusielloFI06, Elkin11, BaswanaKS12, BodwinK16, ForsterG19, BernsteinFH19, Bernstein2020fully]. Given an unweighted undirected graph G with n vertices, an α-spanner is a subgraph H of G whose pairwise distances between vertices are preserved up to the stretch factor of α, i.e., for all u, v ∈ V(G), we have dist_H(u,v) ≤ α · dist_G(u,v), where dist_G(u,v) denotes the distance between u and v in the graph G. In this problem, we want to maintain an α-spanner H of a graph G while G undergoes both edge insertions and deletions, and for each edge update, spend as little update time as possible.

Assuming an oblivious adversary, near-optimal algorithms have been shown: for every k ≥ 1, there are algorithms maintaining a (2k-1)-spanner containing Õ(n^{1+1/k}) edges (where Õ(·) hides a polylog(n) factor), which is nearly tight with the Ω(n^{1+1/k}) bound from Erdős' girth conjecture (proven for the cases where k ∈ {1, 2, 3, 5} [wenger1991extremal]). Their update times are either amortized [BaswanaKS12, ForsterG19] or worst-case [BernsteinFH19], both of which are polylogarithmic when k is a constant.

In contrast, the only known dynamic spanner algorithm against an adaptive adversary, by [AusielloFI06], requires O(n) amortized update time, and it can maintain a (2k-1)-spanner of size O(n^{1+1/k}) only for k ≤ 3. Whether the O(n) bound can be improved remained open until very recently. Bernstein et al. [Bernstein2020fully] showed that a polylog(n)-spanner can be maintained against an adaptive adversary using polylog(n) amortized update time. The drawback, however, is that their expander-based technique is too crude to give any stretch smaller than polylog(n). Hence, for smaller stretch, it is still unclear if the O(n) bound is inherent. Surprisingly, this holds even if we allow infinite time and only count recourse, i.e., the number of edge changes per update in the maintained spanner. The stark difference in performance between the two adversarial settings motivates the main question of this paper:

Is the Ω(n) recourse bound inherent for fully dynamic spanners against an adaptive adversary?

Recourse is an important performance measure of dynamic algorithms. There are dynamic settings where changes in solutions are costly while computation itself is considered cheap, and so the main goal is to directly minimize recourse [gupta2014maintaining, gupta2014online, avin2020dynamic, gupta2020fully]. Even when the final goal is to minimize update time, there are many dynamic algorithms that crucially require the design of subroutines with recourse bounds stronger than update time bounds to obtain small final update time [chechik2020dynamic, GoranciRST20, chen2020fast]. Historically, there are dynamic problems, such as planar embedding [HolmR20soda, HolmR20stoc] and maximal independent set [Censor-HillelHK16, BehnezhadDHSS19, ChechikZ19], where low recourse algorithms were first discovered and later led to fast update-time algorithms. Similar to dynamic spanners, there are other fundamental problems, including topological sorting [BernsteinC18cycle] and edge coloring [bhattacharya2021online], for which low recourse algorithms remain the crucial bottleneck to faster update time.

In this paper, we successfully break the recourse barrier and completely close the oblivious-vs-adaptive gap with respect to recourse for fully dynamic spanners against an adaptive adversary: there exists a deterministic algorithm that, given an unweighted graph G with n vertices undergoing edge insertions and deletions and a parameter k ≥ 1, maintains a (2k-1)-spanner of G containing O(n^{1+1/k} log n) edges using O(log n) amortized recourse.

As the above algorithm is deterministic, it automatically works against an adaptive adversary. Each update can be processed in polynomial time. Both the recourse and the stretch-size trade-off of this result are optimal up to an O(log n) factor. When ignoring the update time, it even dominates the current best algorithms assuming an oblivious adversary [BaswanaKS12, ForsterG19], which maintain a (2k-1)-spanner of size Õ(n^{1+1/k}) using polylog(n) amortized recourse. Therefore, the oblivious-vs-adaptive gap for amortized recourse is closed.

The algorithm behind this result is as simple as possible. As it turns out, a variant of the classical greedy spanner algorithm [AlthoferDDJS93] simply does the job! Although the argument is short and "obvious" in hindsight, for us, it was very surprising. This is because the greedy algorithm sequentially inspects edges in some fixed order, and its output solely depends on this order. Generally, long chains of dependencies in algorithms are notoriously hard to analyze in the dynamic setting. Recently, a similar greedy approach was dynamized in the context of dynamic maximal independent set [BehnezhadDHSS19] by choosing a random order for the greedy algorithm. Unfortunately, the random order must be kept secret from the adversary, and so this approach fails completely in our adaptive setting. Despite these intuitive difficulties, our key insight is that we can adaptively choose the order for the greedy algorithm after each update. This simple idea is enough; see Section 2 for details.

This result leaves open the oblivious-vs-adaptive gap for the update time. Below, we show partial progress in this direction: an algorithm with near-optimal recourse and, simultaneously, a non-trivial update time. There exists a randomized algorithm that, given an unweighted graph G with n vertices undergoing edge insertions and deletions, with high probability maintains against an adaptive adversary a 3-spanner of G containing Õ(n^{1.5}) edges using polylog(n) amortized recourse and Õ(√n) worst-case update time.

We note again that, prior to the above result, there was no algorithm against an adaptive adversary with o(n) amortized update time that could maintain a spanner of stretch less than polylog(n). The result above shows that, for 3-spanners, the update time can be Õ(√n) worst-case, while guaranteeing near-optimal recourse.

We prove this result by employing a technique called proactive resampling, which was recently introduced in [Bernstein2020fully] for handling an adaptive adversary. We apply this technique to a modification of a spanner construction by Grossman and Parter [GrossmanP17] from the distributed computing community. The modification is small, but it seems inherently necessary for bounding the recourse.

To successfully apply proactive resampling, we refine the technique in two ways. First, we present a simple abstraction in terms of a certain load-balancing problem that captures the power of proactive resampling. Previously, the technique was presented and applied specifically to the dynamic cut sparsifier problem [Bernstein2020fully]. But this technique is actually conceptually simple and quite generic, so our new abstraction will likely facilitate future applications. Our second technical contribution is to generalize the proactive resampling technique and make it more flexible. At a very high level, in [Bernstein2020fully] there is a single sampling-probability parameter that is fixed throughout the whole process, and their analysis requires this fact. In our load-balancing abstraction, we need to work with multiple sampling probabilities and, moreover, they change through time. We manage to analyze this generalized process using probabilistic tools based on stochastic domination, which in turn simplifies the whole analysis.

If a strong recourse bound is not needed, then proactive resampling can be bypassed, and the algorithm becomes very simple, deterministic, and achieves slightly improved bounds: there exists a deterministic algorithm that, given an unweighted graph G with n vertices undergoing edge insertions and deletions, maintains a 3-spanner of G containing O(n^{1.5}) edges using O(Δ + √n) worst-case update time, where Δ is the maximum degree of G.

Despite its simplicity, the above result improves the update time of the fastest deterministic dynamic 3-spanner algorithm [AusielloFI06] from amortized to worst-case. In fact, all previous dynamic spanner algorithms with worst-case update time either assume an oblivious adversary [Elkin11, BodwinK16, BernsteinFH19] or have a very large stretch of polylog(n) [Bernstein2020fully]. See Table 1 for a detailed comparison.

Ref. | Stretch | Size | Recourse | Update Time | Deterministic?
Against an oblivious adversary:
[BaswanaKS12] | 2k-1 | Õ(n^{1+1/k}) | polylog(n) | polylog(n), amortized | rand., oblivious
[BaswanaKS12, ForsterG19] | 2k-1 | Õ(n^{1+1/k}) | polylog(n) | polylog(n), amortized | rand., oblivious
[BernsteinFH19] | 2k-1 | Õ(n^{1+1/k}) | polylog(n) | polylog(n), worst-case | rand., oblivious
Against an adaptive adversary:
[AusielloFI06] | 3 | O(n^{1.5}) | O(n), amortized | O(n), amortized | deterministic
[AusielloFI06] | 5 | O(n^{4/3}) | O(n), amortized | O(n), amortized | deterministic
[Bernstein2020fully] | polylog(n) | Õ(n) | polylog(n), amortized | polylog(n), amortized | rand., adaptive
[Bernstein2020fully] | polylog(n) | Õ(n) | worst-case | worst-case | deterministic
Ours | 2k-1 | O(n^{1+1/k} log n) | O(log n), amortized | poly(n), worst-case | deterministic
Ours | 3 | Õ(n^{1.5}) | polylog(n), amortized | Õ(√n), worst-case | rand., adaptive
Ours | 3 | O(n^{1.5}) | O(Δ+√n), worst-case | O(Δ+√n), worst-case | deterministic

Table 1: The state of the art of fully dynamic spanner algorithms.

Organization. In Section 2, we give a very short proof of our main result on (2k-1)-spanners with near-optimal recourse. In Section 3, we prove our 3-spanner result with fast update time, assuming a crucial lemma (stated in Section 3.2) needed for bounding the recourse. To prove this lemma, we present a new abstraction of the proactive resampling technique in Section 4 and complete the analysis in Section 5. Our side result, the deterministic 3-spanner algorithm with worst-case update time, is based on the static construction presented in Section 3.1, and its simple proof is given in Section 3.2.

2 Deterministic Spanner with Near-optimal Recourse

Below, we show a decremental algorithm that handles edge deletions only, with near-optimal recourse. This will imply our main result via a known reduction, formally stated at the end of this section. To describe our decremental algorithm, let us first recall the classic greedy algorithm.

The Greedy Algorithm. Althöfer et al. [dcg/Althofer93] showed the following algorithm for computing (2k-1)-spanners. Given a graph G with n vertices, fix some order of the edges in E(G). Then, we inspect each edge one by one according to this order. Initially, H = ∅. When we inspect (u,v), if dist_H(u,v) > 2k-1, then we add (u,v) into H. Otherwise, we ignore it. We have the following:

[[dcg/Althofer93]] The greedy algorithm above outputs a (2k-1)-spanner of G containing O(n^{1+1/k}) edges.
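For concreteness, here is a minimal Python sketch of the greedy algorithm (an illustration we add for this presentation, not the paper's implementation; the BFS helper and all names are ours):

```python
from collections import deque

def dist_at_most(adj, u, v, limit):
    """BFS from u, cut off after `limit` hops; True iff dist(u, v) <= limit."""
    if u == v:
        return True
    seen, frontier = {u}, deque([(u, 0)])
    while frontier:
        x, d = frontier.popleft()
        if d == limit:
            continue                      # do not expand beyond the cutoff
        for y in adj[x]:
            if y == v:
                return True               # reached v within d + 1 <= limit hops
            if y not in seen:
                seen.add(y)
                frontier.append((y, d + 1))
    return False

def greedy_spanner(n, edges, k):
    """Inspect edges in the given (fixed) order; keep (u, v) only if its
    endpoints are currently more than 2k-1 hops apart in the spanner H."""
    adj = {x: set() for x in range(n)}
    H = set()
    for (u, v) in edges:                  # the inspection order
        if not dist_at_most(adj, u, v, 2 * k - 1):
            adj[u].add(v); adj[v].add(u)
            H.add((min(u, v), max(u, v)))
    return H, adj
```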

It is widely believed that the greedy algorithm behaves extremely badly in the dynamic setting: a single edge update can drastically change the greedy spanner. On the contrary, we show that when the order in which the greedy algorithm scans the edges is allowed to change adaptively, we can avoid removing any spanner edge until it is deleted by the adversary. This key insight leads to optimal recourse. When recourse is the only concern, prior to our work such a guarantee was known only for spanners with polylog(n) stretch, which is a much easier problem.

The Decremental Greedy Algorithm. Now we describe our deletion-only algorithm. Let G be an initial graph with m edges and let H be a (2k-1)-spanner of G with O(n^{1+1/k}) edges. Suppose an edge e is deleted from the graph G. If e ∉ H, then we do nothing. Otherwise, we do the following. We first remove e from H. Then we look at the remaining non-spanner edges E(G) \ H, one by one in an arbitrary order. (Note that the order is adaptively defined and not fixed through time, because it is defined only on E(G) \ H.) When we inspect an edge (u,v), as in the greedy algorithm, we ask whether dist_H(u,v) > 2k-1 and add (u,v) to H if and only if this is the case. This completes the description of the algorithm.
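A hedged Python sketch of this deletion handler, reusing dist_at_most from the sketch above (edge tuples are assumed to be stored in canonical (min, max) order; all names are ours):

```python
def handle_deletion(G, H, adj, e, k):
    """G and H are sets of canonical edge tuples; adj is H's adjacency.
    Deleting a non-spanner edge costs nothing; deleting a spanner edge
    re-runs the greedy scan over the remaining non-spanner edges only."""
    u, v = e
    G.discard(e)
    if e not in H:
        return                            # recourse 0 for this update
    H.discard(e)
    adj[u].discard(v); adj[v].discard(u)
    for (x, y) in list(G - H):            # arbitrary, adaptively chosen order
        if not dist_at_most(adj, x, y, 2 * k - 1):
            adj[x].add(y); adj[y].add(x)  # (x, y) joins H and stays there
            H.add((x, y))                 # until the adversary deletes it
```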

Analysis. We start with the most crucial point. We claim that the new output after removing e is as if we had run the greedy algorithm that first inspects the edges in H \ {e} (with the order within H \ {e} preserved) and later inspects the edges in E(G) \ H.

To see the claim, we argue that if the greedy algorithm inspects H \ {e} first, then the whole set H \ {e} must be included, just as H \ {e} remains in the new output. To see this, note that, for each edge (u,v) ∈ H \ {e}, we had dist_H(u,v) > 2k-1 at the moment (u,v) was inspected according to the original order. If we move the whole set H \ {e} to the prefix of the order (while preserving the order within H \ {e}), the partial spanner present when (u,v) is inspected is only a subgraph of the one in the original run, so the distance between u and v can only be larger. Hence it must still be the case that dist(u,v) > 2k-1 when (u,v) is inspected, and so (u,v) must be added into the spanner by the greedy algorithm.

So our algorithm indeed "simulates" inspecting H \ {e} first, and then it explicitly implements the greedy algorithm on the remaining part E(G) \ H. We conclude that it simulates the greedy algorithm on the updated graph. Therefore, by the guarantee of the greedy algorithm stated above, the new output is a (2k-1)-spanner with O(n^{1+1/k}) edges.

The next important point is that, whenever an edge e' is added into the spanner H, the algorithm never tries to remove e' from H. So e' remains in H until it is deleted by the adversary. Therefore, as each edge enters the spanner at most once, the total recourse is O(m). With this, we conclude the following key lemma:

Given a graph G with n vertices and m initial edges undergoing only edge deletions, the algorithm above maintains a (2k-1)-spanner of G of size O(n^{1+1/k}) with O(m) total recourse.

By plugging the lemma above into the fully-dynamic-to-decremental reduction by [BaswanaKS12] below, we conclude our main result. We also include the proof of the reduction in Appendix C for completeness.

[[BaswanaKS12]] Suppose that, for a graph with n vertices and m initial edges undergoing only edge deletions, there is an algorithm that maintains a (2k-1)-spanner of size S(n) with O(m) total recourse. Then there exists an algorithm that maintains a (2k-1)-spanner of size O(S(n) log n) in a fully dynamic graph with n vertices using O(T log n) total recourse. Here, T is the total number of updates, starting from an empty graph.

3 3-Spanner with Near-optimal Recourse and Fast Update Time

In this section, we prove our second result by showing an algorithm for maintaining a 3-spanner with small update time in addition to small recourse. We start by explaining a basic static construction and the needed data structures in Section 3.1, and we show the dynamic algorithm in Section 3.2. Assuming our key lemma about proactive resampling (stated in Section 3.2), most details here are quite straightforward. Hence, due to space constraints, most proofs are either omitted or deferred to Appendix D.

Throughout this section, we let N_G(v) denote the set of neighbors of a node v in a graph G, and we let deg_G(v) denote the degree of the node v in the graph G.

3.1 A Static Construction and Basic Data Structures

A Static Construction. We now describe our static algorithm. Though our presentation is different, our algorithm is almost identical to [GrossmanP17]. The only difference is that we do not distinguish small-degree vertices from large-degree vertices.

We first arbitrarily partition V into √n equal-sized buckets B_1, …, B_{√n}. We then construct three sets of edges E_1, E_2, E_3. For every bucket B_i, we do the following. First, for all v ∈ V, if N(v) ∩ B_i is not empty, we choose a neighbor u ∈ N(v) ∩ B_i and add (v,u) to E_1. We call u an i-partner of v. Next, for every edge (u,v) where both u, v ∈ B_i, we add (u,v) to E_2. Lastly, for every pair x, y ∈ B_i with overlapping neighborhoods, we pick an arbitrary common neighbor z and add (x,z) and (y,z) to E_3. We refer to the node z as the witness for the pair (x,y).

The subgraph is a -spanner of consisting of at most edges.

Dynamizing the Construction. Notice that it suffices to maintain E_1, E_2, and E_3 separately in order to maintain the above 3-spanner dynamically. Maintaining E_1 and E_2 is straightforward and can be done in a fully dynamic setting in O(1) worst-case update time. Indeed, if (v,u) ∈ E_1 is deleted, then we pick a new i-partner for v. Maintaining N(v) ∩ B_i for all v and i allows us to update E_1 efficiently. If an edge (u,v) ∈ E_2, where u, v ∈ B_i, is deleted, then we do nothing.

The remaining task, maintaining E_3, is the most challenging part of our dynamic algorithm. Before we proceed, let us define a subroutine and a data structure needed to implement our algorithm.

Resampling Subroutine. We define Resample(x,y) as a subroutine that uniformly samples a witness w(x,y) (i.e., a common neighbor of x and y), if one exists. Notice that we can obtain E_3 by calling Resample(x,y) for all buckets B_i and all pairs x, y ∈ B_i.

Partnership Data Structures. The subroutine above hints that we need a data structure maintaining the common neighborhoods of all pairs of vertices that are in the same bucket. For vertices x and y within the same bucket, we let P(x,y) = N(x) ∩ N(y) be the partnership between x and y. To maintain these structures dynamically, when an edge (u,v) is inserted, we add u to P(v,y) for every neighbor y of u that lies in the same bucket as v, and symmetrically add v to P(u,y) for every neighbor y of v that lies in the same bucket as u. This clearly takes O(Δ) worst-case time per edge insertion, and edge deletions are handled symmetrically.

As we want to prove that our final update time is Õ(√n), we can assume from now on that Δ = Õ(√n); all partnerships are then maintained in the background within this update-time budget.
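To make this concrete, here is a hedged Python sketch of the partnership structure together with the Resample subroutine (class and function names are ours; the paper's actual implementation may differ):

```python
import random

class Partnership:
    """P[(x, y)] = set of common neighbors of x and y, maintained for every
    pair x, y lying in the same bucket; O(Delta) worst-case time per update."""
    def __init__(self, n, bucket):
        self.adj = {v: set() for v in range(n)}
        self.bucket = bucket                  # bucket(v) -> bucket index of v
        self.P = {}

    def _pairs_touched(self, u, v):
        # u is a gained/lost common neighbor of (v, y) for every other
        # neighbor y of u lying in v's bucket; symmetrically for v.
        for y in self.adj[u]:
            if y != v and self.bucket(y) == self.bucket(v):
                yield (min(v, y), max(v, y)), u
        for y in self.adj[v]:
            if y != u and self.bucket(y) == self.bucket(u):
                yield (min(u, y), max(u, y)), v

    def insert(self, u, v):
        self.adj[u].add(v); self.adj[v].add(u)
        for pair, z in self._pairs_touched(u, v):
            self.P.setdefault(pair, set()).add(z)

    def delete(self, u, v):
        self.adj[u].discard(v); self.adj[v].discard(u)
        for pair, z in self._pairs_touched(u, v):
            self.P.get(pair, set()).discard(z)

def resample(ps, x, y):
    """Resample(x, y): a uniformly random witness among the current common
    neighbors of x and y, or None if none exists."""
    common = ps.P.get((min(x, y), max(x, y)), set())
    return random.choice(list(common)) if common else None
```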

3.2 Maintaining Witnesses via Proactive Resampling

Remark. For clarity of exposition, we present an amortized update-time analysis first. Using a standard approach, we can then make the update time worst-case; we discuss this issue at the end of this section.

Our dynamic algorithm runs in phases, where each phase lasts for n^{1.5} consecutive updates (edge insertions/deletions). As a spanner is decomposable (if H_1 and H_2 are t-spanners of G_1 and G_2 respectively, then H_1 ∪ H_2 is a t-spanner of G_1 ∪ G_2), it suffices to maintain a 3-spanner of the graph undergoing only the edge deletions within this phase and then include all edges inserted within this phase into H, which increases the size of H by at most n^{1.5} edges. Henceforth, we only need to present how to initialize a phase and how to handle edge deletions within each phase. The reason behind this reduction is that our proactive resampling technique naturally works for decremental graphs.
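As an illustration of this phase structure (a sketch under our reading of the reduction; build_decremental and the graph methods are hypothetical stand-ins for the components described in the rest of this section):

```python
def run_phase(G, build_decremental, updates):
    """One phase: rebuild the decremental structure for the current graph,
    feed it only the deletions, and route insertions straight into the
    spanner (decomposability); at most one extra spanner edge per update."""
    H_dec = build_decremental(G)        # phase initialization; de-amortized by
    inserted = set()                    # running it in the background
    for op, e in updates:               # the phase's sequence of updates
        if op == "insert":
            G.insert(*e)
            inserted.add(e)             # inserted edges join the spanner directly
        else:
            G.delete(*e)
            inserted.discard(e)
            H_dec.delete_edge(*e)       # handled by the decremental algorithm
    return H_dec, inserted              # spanner = H_dec's edges plus `inserted`
```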

Initialization. At the start of a phase, since our partnership structures only processed the edge deletions from the previous phase, we first update the partnerships with all the edges inserted during the previous phase. Then, we call Resample(x,y) for all buckets B_i and all pairs x, y ∈ B_i to replace all witnesses and initialize E_3 for this phase.

Difficulty of Bounding Recourse. Maintaining E_3 (equivalently, the witnesses) in Õ(√n) worst-case time is straightforward because the partnership data structure has Õ(√n) update time. However, our goal is to show polylog(n) amortized recourse, which is the most challenging part of our analysis. To see the difficulty, suppose (u,v) with v ∈ B_i is deleted: the vertex u may serve as the witness for every pair (v,y) with y ∈ B_i. In this case, deleting (u,v) causes the algorithm to find a new witness for all of these pairs. This implies a recourse of Ω(√n) for a single update. To circumvent this issue, we apply the technique of proactive resampling, as described below.

Proactive Resampling. We keep track of a time variable t, the number of deletions that have occurred in this phase so far. Initially, t = 0, and we increment t each time an edge gets deleted from G.

In addition, for every bucket B_i and every pair x, y ∈ B_i with P(x,y) ≠ ∅, we maintain: (1) w(x,y), the witness for the pair x and y, and (2) Time(x,y), a set of positive integers, which is the set of timesteps at which our algorithm intends to proactively resample a new witness for (x,y). This set grows adaptively each time the adversary deletes (x, w(x,y)) or (y, w(x,y)).

Finally, to ensure that the update time of our algorithm remains small, for each timestep t we maintain a set Schedule(t), which consists of all those pairs of nodes (x,y) such that t ∈ Time(x,y).

When an edge (u,v) is deleted, we do the following operations. First, for every y that had u as a common neighbor with v before deleting (u,v), we add the timesteps t + 2^0, t + 2^1, …, t + 2^{⌈log T⌉} to Time(v,y), where T is the length of the phase. Second, analogously, for every x that had v as a common neighbor with u before deleting (u,v), we add the same timesteps to Time(u,x). Third, we set t ← t + 1. Lastly, for each pair (x,y) ∈ Schedule(t), we call the subroutine Resample(x,y).
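Putting the pieces together, here is a hedged Python sketch of this deletion handling (it builds on the Partnership sketch above; the doubling schedule follows the description in the text, and all names are ours):

```python
import math
from collections import defaultdict

class WitnessMaintainer:
    """Maintains the witnesses w(x, y) within one phase of at most T deletions,
    with proactive resampling; `ps` is a Partnership instance from above."""
    def __init__(self, ps, T):
        self.ps, self.T = ps, T
        self.t = 0
        self.w = {}                            # pair -> current witness
        self.schedule = defaultdict(set)       # timestep -> pairs to resample

    def _touch(self, pair):
        # schedule resamples at t + 1, t + 2, t + 4, ..., t + 2^ceil(log T)
        for j in range(math.ceil(math.log2(max(self.T, 2))) + 1):
            self.schedule[self.t + (1 << j)].add(pair)

    def delete_edge(self, u, v):
        # pairs that lose u (resp. v) as a common neighbor with v (resp. u)
        touched = [pair for pair, _ in self.ps._pairs_touched(u, v)]
        self.ps.delete(u, v)
        for pair in touched:
            self._touch(pair)
        self.t += 1
        for pair in self.schedule.pop(self.t, ()):
            self.w[pair] = resample(self.ps, *pair)   # uniform over P(x, y)
```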

The key lemma below summarizes a crucial property of this dynamic algorithm; its proof appears in Section 4. During a phase consisting of T edge deletions, our dynamic algorithm makes at most Õ(√n) calls to the Resample subroutine after each edge deletion. Moreover, the total number of calls to the Resample subroutine during an entire phase is at most Õ(n^{1.5}) w.h.p. Both of these guarantees hold against an adaptive adversary.

Analysis of Recourse and Update Time. Our analysis is given in the lemmas below.

[Recourse] The amortized recourse of our algorithm is polylog(n) w.h.p., against an adaptive adversary.

Proof.

To maintain the edge-sets E_1 and E_2, we pay a worst-case recourse of O(1) per update. For maintaining the edge-set E_3, our total recourse during the entire phase is at most a constant times the number of calls made to the Resample subroutine, which in turn is at most Õ(n^{1.5}) w.h.p. (see the key lemma above). Finally, while computing E_3 at the beginning of a phase, we pay O(n^{1.5}) recourse. Therefore, the overall total recourse during an entire phase is Õ(n^{1.5}) w.h.p. Since a phase lasts for n^{1.5} time steps, we conclude the lemma. ∎

[Worst-case Update Time within a Phase] Within a phase, our algorithm handles a given update in Õ(√n) worst-case time w.h.p.

Proof.

Recall that the sets E_1 and E_2 can be maintained in O(1) worst-case update time. Henceforth, we focus on the time required to maintain the edge-set E_3 after a given update in G.

Excluding the time spent on maintaining the partnership data structure (which is Õ(√n) in the worst case anyway), this is proportional to the number of calls made to the Resample subroutine, plus the number of pairs (x,y) for which we need to adjust Time(x,y). The former is w.h.p. at most Õ(√n) according to the key lemma, while the latter is also at most Õ(√n), since an edge deletion touches at most 2√n pairs and each touched pair receives O(log n) new timesteps. Thus, within a phase, we can also maintain the set E_3 w.h.p. in Õ(√n) worst-case update time. ∎

Although the above lemma says that we can handle each edge deletion in Õ(√n) worst-case update time, our current algorithm does not guarantee a worst-case bound yet, because the initialization time exceeds this bound. In more detail, observe that the total initialization time of a phase is Õ(n²), because we need to insert up to n^{1.5} edges into the partnership data structures, which have Õ(√n) update time. Over a phase of n^{1.5} steps, this implies only an Õ(√n) amortized update time.

However, since the algorithm takes a long time only at the initialization of a phase, and takes Õ(√n) worst-case time for each update during the phase, we can apply the standard building-in-the-background technique (see Section D.1) to de-amortize the update time. We conclude the following:

[Worst-case Update Time for the Whole Update Sequence] W.h.p., the worst-case update time of our dynamic algorithm is Õ(√n).

4 Proactive Resampling: Abstraction

The goal of this section is to prove the key lemma of Section 3.2, which bounds the recourse of our 3-spanner algorithm. This is the most technical part of this paper. To ease the analysis, we abstract the situation of Section 3.2 as a dynamic problem of assigning jobs to machines, in which an adversary keeps deleting machines and the goal is to minimize the total number of reassignments. Below, we formalize this problem and show how to use it to bound the recourse of our 3-spanner algorithm.

Our abstraction has two technical contributions: (1) it allows us to easily work with multiple sampling probabilities, whereas [Bernstein2020fully] fixes a single sampling-probability parameter; (2) the simplicity of this abstraction exposes the generality of the proactive resampling technique itself; it is not specific to the cut sparsifier problem considered in [Bernstein2020fully].

Jobs, Machines, Routines, Assignments, and Loads. Let J denote a set of jobs and M denote a set of machines. We think of them as the two sides of a (hyper)graph Γ = (J ∪ M, E). (This graph is different from the graph G in which we maintain a spanner in the previous sections.) A routine is a hyperedge e ∈ E that contains exactly one job-vertex from J, denoted by job(e), and may contain several machine-vertices from M, denoted by machines(e). Each routine e ∈ E means that there is a way of handling job(e) by simultaneously calling the machines in machines(e). Note that e = {job(e)} ∪ machines(e). We say that e is a routine for job(e). For each machine m ∈ machines(e), we say that the routine e involves machine m, or that e contains m. The set E_m is then defined as the set of routines involving machine m. Observe that there are deg_Γ(q) different routines for handling job q. An assignment A is simply a subgraph of Γ. We say that an assignment A is feasible iff every job that has at least one routine in Γ is handled by some routine in A; that is, every job is handled by some routine, if one exists. When e ∈ A, we say that job(e) is handled by routine e. Finally, given an assignment A, the load of a machine m, denoted load_A(m), is the number of routines in A involving m; in other words, load_A(m) is the degree of m in A. We note explicitly that our end goal is not to optimize the loads of the machines. Rather, we want to minimize the number of reassignments needed to maintain a feasible assignment throughout the process.

In this section, we usually use q to denote jobs, m to denote machines, and e or r to denote routines, i.e., (hyper)edges.

The Dynamic Problem. Our problem is to maintain a feasible assignment A while the graph Γ undergoes a sequence of machine deletions (which might stop before all machines are deleted). More specifically, when a machine m is deleted, all routines containing m are deleted as well. When routines in A are deleted, A might not be feasible anymore, and we need to add new edges to A to make it feasible again. Our goal is to minimize the total number of routines ever added to A.

To be more precise, write the graph and the assignment after t machine deletions as Γ^(t) and A^(t), respectively. We define the recourse at timestep t to be |A^(t) \ A^(t-1)|, which is the number of routines added into A at timestep t. When the adversary deletes T machines, the goal is then to minimize the total recourse Σ_{t ≤ T} |A^(t) \ A^(t-1)|.

The Algorithm: Proactive Resampling. To describe our algorithm, first let Resample(q) denote the process of reassigning job q to a uniformly random routine for q. In the graph language, Resample(q) removes the edge e with job(e) = q from A, samples an edge e' uniformly from the routines for q in the current graph, and then adds e' into A. At timestep t = 0, we initialize a feasible assignment A by calling Resample(q) for every job q, i.e., we assign each job to a random routine for it. Below, we describe how to handle deletions.

Let T be the total number of machine deletions. For each job q, we maintain a set Time(q) containing all timesteps at which we will invoke Resample(q). That is, at any timestep t, before the adversary takes any action, we call Resample(q) if t ∈ Time(q).

We say that the adversary touches job q at timestep t if the routine handling q at time t is deleted. When q is touched, we call Resample(q) and, very importantly, we put the timesteps t + 2^i for all 0 ≤ i ≤ ⌈log T⌉ into Time(q). This is the action that we call proactive resampling: we do not just resample a routine for q when q is touched, but do so proactively in the future as well. This completes the description of the algorithm.
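The whole abstraction fits in a short sketch (a hedged Python illustration; class and variable names are ours, and the linear scan over jobs in delete_machine favors clarity over efficiency):

```python
import math
import random
from collections import defaultdict

class ProactiveAssignment:
    """routines_for[q] is the set of routines that can handle job q; each
    routine is a frozenset of machines.  T is the total number of deletions."""
    def __init__(self, routines_for, T):
        self.routines_for = {q: set(rs) for q, rs in routines_for.items()}
        self.T, self.t = T, 0
        self.assigned = {}                     # job -> routine handling it
        self.times = defaultdict(set)          # timestep -> jobs to resample
        self.recourse = 0
        for q in self.routines_for:            # initial feasible assignment
            self._resample(q)

    def _resample(self, q):
        rs = self.routines_for[q]
        if rs:
            self.assigned[q] = random.choice(list(rs))
            self.recourse += 1                 # one routine added to A
        else:
            self.assigned[q] = None            # no routine for q exists anymore

    def delete_machine(self, m):
        self.t += 1
        for q in self.times.pop(self.t, ()):   # scheduled proactive resamples
            self._resample(q)                  # fire before the adversary acts
        touched = []
        for q, rs in self.routines_for.items():
            dead = {r for r in rs if m in r}   # routines involving m die
            rs -= dead
            if self.assigned.get(q) in dead:
                touched.append(q)              # the adversary touched job q
        for q in touched:
            self._resample(q)                  # resample now ...
            for i in range(math.ceil(math.log2(max(self.T, 2))) + 1):
                self.times[self.t + (1 << i)].add(q)   # ... and proactively later
```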

Clearly, A remains a feasible assignment throughout, because whenever a job q is touched, we immediately call Resample(q). The key lemma below states that the algorithm has low recourse, even if the adversary is adaptive in the sense that each deletion at time t may depend on the assignments maintained before time t.

Let T be the total number of machine deletions and let d be the maximum degree of any job. The total recourse of the algorithm running against an adaptive adversary is Õ(T + |J|) with high probability, where Õ(·) hides factors polylogarithmic in T and d. Moreover, if the load of a machine never exceeds L, then our algorithm has O(L log T) worst-case recourse.

We will prove this lemma in Section 5. Before proceeding any further, however, we argue why it directly bounds the recourse of our 3-spanner algorithm.

Back to 3-spanners: Proof of the Key Lemma of Section 3.2. It is easy to see that maintaining E_3 in our 3-spanner algorithm can be framed exactly as the job-machine load-balancing problem above. Suppose the given graph is G = (V, E) with |V| = n and |E| = m. We create a job q_{x,y} for each pair of vertices x, y in the same bucket with P(x,y) ≠ ∅. For each edge of G, we create a machine. Hence, |J| = O(n^{1.5}) and |M| = m. For each job q_{x,y}, as we want the pair to have a witness z, this witness corresponds to the two edges (x,z) and (y,z). Hence, we create a routine containing these two machines for every pair x, y and every common neighbor z of x and y. Since there are at most Δ common neighbors of x and y, the degree of every job is at most Δ = Õ(√n). A feasible assignment then corresponds exactly to a choice of witness for each pair. Our algorithm that maintains the spanner is also exactly this load-balancing algorithm. Hence, the recourse of the 3-spanner construction follows from the key lemma of Section 4, where we delete exactly one machine per edge deletion, i.e., T ≤ n^{1.5} machines per phase. As |J| = O(n^{1.5}), the total recourse bound becomes Õ(n^{1.5}). As a phase lasts n^{1.5} timesteps, averaging this bound over all timesteps yields polylog(n) amortized recourse.

5 Proactive Resampling: Analysis (Proof of the Key Lemma of Section 4)

The first step in proving the key lemma is to bound the loads of the machines. This is because whenever a machine m is deleted, its load at that time contributes to the total recourse.

What would be the expected load of each machine? For intuition, suppose that the adversary were oblivious. Recall that E_m denotes the set of all routines involving machine m. Then, the expected load of machine m would be ℓ(m) = Σ_{e ∈ E_m} 1/deg(job(e)), because each job samples its routine uniformly at random, and this quantity is concentrated with high probability by a Chernoff bound. Although in reality our adversary is adaptive, our plan is to still prove that the loads of the machines do not exceed their expectations in the oblivious setting by too much. This motivates the following definitions.

The target load of machine m is ℓ(m) = Σ_{e ∈ E_m} 1/deg(job(e)). The target load of m at time t, denoted ℓ_t(m), is the same quantity with respect to the current graph Γ^(t). An assignment A has overhead c iff load_A(m) ≤ c · (ℓ_t(m) + 1) for every machine m.

Our key technical lemma shows that, via proactive resampling, the loads of the machines are indeed close to their expectations in the oblivious setting; that is, the maintained assignment has small overhead. Recall that T is the total number of machine deletions.

With high probability, the assignment maintained by our algorithm always has polylog(T) overhead, even when the adversary is adaptive.

Due to the space limit, the proof of this overhead lemma is given in Appendix A. Assuming it, we now formally show how the bound on the machine loads implies the bound on the total recourse, which proves the key lemma of Section 4.

Proof of the key lemma of Section 4.

Let T be the total number of deletions. Observe that the total recourse up to time T is precisely the total number of Resample calls up to time T, which in turn is at most the total number of timesteps ever put into the sets Time(·) from time 0 until time T. Therefore, our strategy is to bound, for each time t, the number of future Resample calls newly generated at time t. Let m_t be the machine deleted at time t. Observe that this number is at most O(load_t(m_t) · log T), where load_t(m_t) is m_t's load at time t and the O(log T) factor is due to proactive resampling.

By the overhead lemma (Section 5), we have load_t(m_t) ≤ polylog(T) · (ℓ_t(m_t) + 1). Also, we claim that Σ_{t ≤ T} ℓ_t(m_t) = O(|J| log d), where d is the maximum degree of the jobs (to be proven below). Therefore, the total recourse up to time T is at most

Σ_{t ≤ T} O(load_t(m_t) · log T) ≤ polylog(T) · Σ_{t ≤ T} (ℓ_t(m_t) + 1) = Õ(T + |J|),

as claimed.

It remains to show that Σ_{t ≤ T} ℓ_t(m_t) = O(|J| log d). Recall that ℓ_t(m) = Σ_{e ∈ E_m} 1/deg_t(job(e)). Imagine that machine m is deleted at time t. We will show how to charge ℓ_t(m) to the jobs with edges connecting to m. For each job q with d_q (hyper)edges connecting to m, q's contribution to ℓ_t(m) is d_q / deg_t(q), so we distribute a charge of d_q / deg_t(q) to q. Since each edge is charged from a machine to a job only once, and the degree of q only decreases over time, the total charge of each job is at most Σ_{i=1}^{deg(q)} 1/i = O(log d). Since there are |J| jobs, the bound follows.

To see the worst-case recourse bound, consider any timestep t. There are O(log T) earlier timesteps that can cause Resample calls to be invoked at timestep t, namely t − 2^0, t − 2^1, …, t − 2^{⌈log T⌉}. At each of these timesteps t', one machine was deleted, so the number of Resample calls scheduled for timestep t from timestep t' is bounded by the load of the machine deleted at t', which does not exceed L. Summing up, the number of calls we make at timestep t is at most O(L log T). This concludes our proof. ∎

6 Conclusion

In this paper, we study fully dynamic spanner algorithms against an adaptive adversary. Our main algorithm maintains a spanner with a near-optimal stretch-size trade-off using only O(log n) amortized recourse. This closes the current oblivious-vs-adaptive gap with respect to amortized recourse. Whether the gap can be closed for worst-case recourse is an interesting open problem.

The ultimate goal is to show algorithms against an adaptive adversary with polylogarithmic amortized, or even worst-case, update time. Via the multiplicative weight update framework [Fleischer00, GargK07], such algorithms would imply a (1+ε)-approximate multi-commodity flow algorithm whose running time would improve the state of the art. We made partial progress toward this goal by showing the first dynamic 3-spanner algorithms against an adaptive adversary with Õ(√n) worst-case update time and, simultaneously, polylog(n) amortized recourse, improving upon the O(n) amortized update time of the 15-year-old work of [AusielloFI06].

Generalizing our 3-spanner result to dynamic (2k-1)-spanners of size Õ(n^{1+1/k}) with fast update time, for any k ≥ 3, is also a very interesting and challenging open question.

References

Appendix A Proof of the Overhead Lemma of Section 5

Here, we show that the load of every machine at each time is small. Some basic notions are needed in the analysis.

Experiments and Relevant Experiments. An experiment X_{e,t} is a binary random variable associated with an edge/routine e and a timestep t, where X_{e,t} = 1 iff Resample(job(e)) is called at time t and e is chosen to handle job(e) among all routines for job(e). Observe that Pr[X_{e,t} = 1] = 1/deg_t(job(e)). Note that each call to Resample(q) at time t creates deg_t(q) new experiments. We order all experiments by the time of their creation. For convenience, for each experiment X, we let e(X), t(X), and q(X) denote its edge, time of creation, and job, respectively.

Next, we define the most important notion in the whole analysis. For any time t and edge e existing at time t, an experiment X_{e,t'} is (e,t)-relevant if

  • t' ≤ t, and

  • there is no t'' ∈ Time(job(e)) such that t' < t'' ≤ t.

Moreover, an experiment is (m,t)-relevant if it is (e,t)-relevant and the edge e is incident to the machine m. Intuitively, X_{e,t'} is an (e,t)-relevant experiment if X_{e,t'} could cause e to appear in the assignment at time t. To see why, clearly if t' > t, then X_{e,t'} cannot cause e to appear at time t. Otherwise, if t' ≤ t but there is t'' ∈ Time(job(e)) with t' < t'' ≤ t, then X_{e,t'} cannot cause e to appear at time t either. This is because even if X_{e,t'} is successful and e appears at time t', later at time t'' the job job(e) will be resampled again, and so X_{e,t'} has nothing to do with whether e appears at time t. With the same intuition, an experiment is (m,t)-relevant if it could contribute to the load of machine m at time t.

It is important to note that we decide whether an experiment is (e,t)-relevant based on Time(job(e)) as it is at time t. If it were based on Time(job(e)) at the creation time of the experiment, then there would be only a single experiment that is (e,t)-relevant (namely, the one with t' ≤ t and maximum t').

According to the definition above, there can be more than one experiment that is (e,t)-relevant. For example, suppose X_{e,t'} is (e,t)-relevant. At a later timestep, the adversary could touch job(e), and the resulting immediate call to Resample(job(e)) creates another experiment on e that is also (e,t)-relevant. This motivates the following definition.

Let R_{e,t} be the random variable denoting the number of (e,t)-relevant experiments, and let R_{m,t} denote the total number of (m,t)-relevant experiments.

To simplify the notation in the proof below, we assume the following. [The Machine-disjoint Assumption] For any distinct routines e, e' with job(e) = job(e'), we have machines(e) ∩ machines(e') = ∅. That is, the routines of the same job are machine-disjoint. Note that this assumption indeed holds for our 3-spanner application: any two paths of length 2 between the same pair of vertices must be edge-disjoint in a simple graph. We show how to remove this assumption in Appendix B, at the cost of more complicated notation.

Roadmap for Bounding Loads. We are now ready to describe the key steps for bounding the load load_t(m), for any time t and machine m.

First, we write X_1, X_2, …, X_s for the sequence of all (m,t)-relevant experiments (ordered by the timesteps at which the experiments were created). The order in this sequence will be important only later. For now, we write S = Σ_i X_i for the total number of successful (m,t)-relevant experiments. As any edge adjacent to m may appear in the assignment at time t only because of some successful (m,t)-relevant experiment, we conclude the following: [Key Step 1] load_t(m) ≤ S.

Key Step 1 reduces the problem to bounding S. If all (m,t)-relevant experiments were independent, then we could easily apply standard concentration bounds to S. Unfortunately, they are not independent, as the outcomes of earlier experiments can affect the adversary's actions, which in turn affect later experiments.

Our strategy is to relate the sequence of (m,t)-relevant experiments to another sequence of independent random variables, defined as follows. For each (m,t)-relevant experiment X_i, created at time t_i on edge e_i, we carefully define X̂_i as an independent binary random variable with Pr[X̂_i = 1] = 1/deg_{t_i}(job(e_i)), which is the probability that Resample(job(e_i)) chooses e_i at time t_i. We similarly define Ŝ = Σ_i X̂_i, which sums independent random variables, where each term in the sum is closely related to the corresponding (m,t)-relevant experiment. By our careful choice of the X̂_i, we can relate S to Ŝ via the notion of stochastic dominance defined below.

Let Y and Z be two random variables, not necessarily defined on the same probability space. We say that Z stochastically dominates Y, written Y ⪯ Z, if for all λ we have Pr[Y ≥ λ] ≤ Pr[Z ≥ λ].
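For intuition, this is how dominance lets us reuse standard concentration: a Chernoff tail proved for the independent sum transfers directly to the dependent one (a standard calculation we add as an illustration; the exact tail bound used in Appendix A.2 may differ):

```latex
% Dominance transfers tail bounds: if S \preceq \hat{S}, where
% \hat{S} = \sum_i \hat{X}_i sums independent indicators with mean
% \mu = \mathbb{E}[\hat{S}], then for any \delta > 0,
\[
  \Pr\bigl[S \ge (1+\delta)\mu\bigr]
  \;\le\; \Pr\bigl[\hat{S} \ge (1+\delta)\mu\bigr]
  \;\le\; \exp\!\Bigl(-\frac{\delta^2\,\mu}{2+\delta}\Bigr).
\]
```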

Our second important step is to prove the following: [Key Step 2] S ⪯ Ŝ.

Key Step 2, which will be proven in Section A.1, reduces the problem to bounding Ŝ, which is indeed relatively easy to bound because it is a sum of independent random variables. The last key step of our proof does exactly this:

[Key Step 3] Ŝ = Õ(ℓ_t(m) + 1) with high probability.

We prove Key Step 3 in Section A.2. Here, we only mention one important point about the proof. The polylogarithmic factor hidden in Key Step 3 follows from the fact that the number of (e,t)-relevant experiments is always at most O(log T), for any time t and edge e. This property is crucial and, actually, is exactly what the proactive resampling technique is designed for.

Given the three key steps above, we can conclude the proof of the overhead lemma of Section 5.

Proof of the Overhead Lemma (Section 5). Recall that we ultimately want to show that, for every timestep t, the maintained assignment has polylog(T) overhead. In other words, for every time t and every machine m, we want to show that, with high probability,

load_t(m) ≤ polylog(T) · (ℓ_t(m) + 1).

By Key Step 1, it suffices to show that, with high probability,

S = Õ(ℓ_t(m) + 1).

By Key Steps 2 and 3, S is stochastically dominated by Ŝ, which is Õ(ℓ_t(m) + 1) with high probability, and the claim follows. ∎