1 Introduction
Dynamic graph algorithms constitute an active area of research in theoretical computer science. Their objective is to maintain a solution to a combinatorial problem in an input graph—for example, a minimum spanning tree or maximal matching—under insertion and deletion of edges. The research on dynamic graph algorithms addresses the natural question of whether one essentially needs to recompute the solution from scratch after every update.
This question has been asked over the years for a wide range of problems such as connectivity [30, 32], minimum spanning tree [22, 20, 29, 32, 45], maximal matching [31, 7, 37, 43], approximate matching and vertex cover [31, 38, 37, 25, 10, 14, 11, 15, 16, 12], shortest paths [21, 34, 19, 44, 42, 9, 27, 26, 28, 8, 1], and graph coloring [6, 3, 13] (this is by no means a comprehensive summary of previous results).
Surprisingly, however, almost no work has been done on the prominent problem of maintaining a maximal independent set (MIS) in dynamic graphs. Indeed, the only previous result for this problem that we are aware of is due to Censor-Hillel et al. [17], who developed a randomized algorithm for this problem in distributed dynamic networks and left the sequential case (the main focus of this paper) as a major open question. We note that implementing their distributed algorithm in the sequential setting requires $\Omega(\Delta)$ update time in expectation (it is not clear whether $O(\Delta)$ time is also sufficient for this algorithm; see Section 6 of their paper), where $\Delta$ is a fixed upper bound on the degree of vertices in the graph and can be as large as $\Omega(n)$ even in sparse graphs.
The maximal independent set problem is of fundamental importance in graph theory with natural connections to a plethora of other basic problems, such as vertex cover, matching, vertex coloring, and edge coloring (in fact, all these problems can be solved approximately by finding an MIS, see, e.g., the paper of Linial [35]). As a result, this problem has been studied extensively in different settings, in particular in parallel and distributed algorithms [18, 33, 2, 36, 35, 39, 4, 5, 23]. (We refer the interested reader to the papers of Barenboim et al. [5] and Ghaffari [23] for the story of this problem in these settings and a comprehensive summary of previous work.)
In this paper, we concentrate on sequential algorithms for maintaining a maximal independent set in a dynamic graph. Our results are also applicable to the dynamic distributed setting and improve upon the previous work of Censor-Hillel et al. [17].
1.1 Problem Statement and Our Results
Recall that a maximal independent set (MIS) of an undirected graph is a maximal collection of vertices subject to the restriction that no pair of vertices in the collection are adjacent. In the maximal independent set problem, the goal is to compute an MIS of the input graph.
We study the fully dynamic variant of the maximal independent set problem, in which the goal is to maintain an MIS of a dynamic graph $G$, denoted by $\mathcal{M}$, subject to a sequence of edge insertions and deletions. When an edge change occurs, the goal is to maintain $\mathcal{M}$ in time significantly faster than simply recomputing it from scratch. Our main result is the following:
Theorem 1.
Starting from an empty graph on $n$ fixed vertices, a maximal independent set can be maintained deterministically over any sequence of edge insertions and deletions in $O(m^{3/4})$ amortized update time, where $m$ denotes the dynamic number of edges in the graph.
As a warm-up to our main result in Theorem 1, we also present an extremely simple deterministic algorithm for maintaining an MIS with $O(\Delta)$ amortized update time, where $\Delta$ is a fixed upper bound on the maximum degree of the graph. Our algorithms can be combined to achieve a deterministic $O(\min\{\Delta, m^{3/4}\})$ amortized update time algorithm for maintaining an MIS in dynamic graphs. This constitutes the first improvement over the naïve $O(m)$ update time required for this problem in fully dynamic graphs, for all possible values of $m$ and $\Delta$. We now elaborate on the details of our algorithm in Theorem 1.
Deterministic Algorithm.
An important feature of our algorithm in Theorem 1 is that it is deterministic. The distinction between deterministic and randomized algorithms is particularly important in the dynamic setting, as almost all existing randomized algorithms require the assumption of a non-adaptive, oblivious adversary who is not allowed to learn anything about the algorithm's random bits. Alternatively, this setting can be viewed as the requirement that the entire sequence of updates be fixed in advance, in which case the adversary cannot use the solution maintained by the algorithm in order to break its guarantees. While these assumptions can be naturally justified in many settings, they can render randomized algorithms entirely unusable in certain scenarios (see, e.g., [8, 10, 15] for more details).
As a result of this assumption, obtaining a deterministic algorithm for most dynamic problems is considered a distinctly harder task compared to finding a randomized one. This is evident from the polynomial gap between the update time of the best known deterministic algorithms compared to randomized ones for many dynamic problems. For example, a maximal matching can be maintained in a fully dynamic graph with $O(1)$ amortized update time in expectation via a randomized algorithm [43], assuming a non-adaptive oblivious adversary, while the best known deterministic algorithm for this problem requires $O(\sqrt{m})$ update time [37] (see [13] for a similar situation for coloring the vertices of a graph).
Amortized Adjustment Complexity.
An important performance measure of a dynamic algorithm is its adjustment complexity (sometimes called recourse), which counts the number of vertices (or edges) that need to be inserted into or deleted from the maintained solution after each update (see, e.g., [17, 3, 24, 13]). For many natural graph problems, such as maintaining a maximal matching, constant worst-case adjustment complexity can be trivially achieved, since one edge update can never necessitate more than a constant number of changes in the maintained solution. This is, however, not the case for the MIS problem: by inserting an edge between two vertices already in $\mathcal{M}$, the adversary can force the algorithm to delete at least one endpoint of this edge from $\mathcal{M}$, which in turn forces the algorithm to pick every neighbor of the deleted vertex that has no other neighbor in $\mathcal{M}$, to ensure maximality (this phenomenon also highlights a major challenge in the treatment of this problem compared to the maximal matching problem, which we discuss further below).
Nevertheless, we prove that the adjustment complexity of our algorithm in Theorem 1 is $O(1)$ on average, which is clearly optimal. Can we further improve our results to achieve an $O(1)$ worst-case adjustment complexity? We claim that this is indeed not possible, by showing that the worst-case adjustment complexity of any algorithm for maintaining an MIS is $\Omega(n)$, using a simple adaptation of an example proposed originally by [17] for proving a similar result in distributed settings when vertex deletions are also allowed by the adversary (this follows seamlessly from the result in [17] and is provided in Appendix A only for completeness).
Distributed Implementation.
Finding a maximal independent set is one of the most studied problems in distributed computing. In the distributed computing model, there is a processor on each vertex of the graph. Computation proceeds in synchronous rounds during which every processor can communicate messages of size $O(\log n)$ with its neighbors (this corresponds to the CONGEST model of distributed computation; see Section 5 for further details). In the dynamic setting, both edges and vertices can be inserted to or deleted from the graph, and the goal is to update the solution in a small number of rounds of communication, with small communication cost and adjustment complexity.
Our results in the sequential setting also imply a deterministic distributed algorithm for maintaining an MIS in a dynamic network with $O(1)$ amortized round complexity, $O(1)$ amortized adjustment complexity, and $O(\min\{\Delta, m^{3/4}\})$ amortized message complexity per update. This result achieves an improved message complexity compared to the distributed algorithm of [17] with asymptotically the same round and adjustment complexity (albeit in an amortized sense as opposed to in expectation; see Section 5). More importantly, our result is achieved via a deterministic algorithm and does not require the assumption of a non-adaptive oblivious adversary. Similar to [17], our algorithm can also be implemented in the asynchronous model, where there is no global synchronization of communication between nodes. We elaborate more on this result in Section 5.
Maximal Independent Set vs. Maximal Matching.
We conclude this section by comparing the maximal independent set problem to the closely related problem of maintaining a maximal matching in dynamic graphs (a maximal matching in a graph $G$ can be obtained by computing an MIS of the line graph of $G$). We discuss additional challenges that one encounters for the maximal independent set problem.
In sharp contrast to the maximal independent set problem, maintaining maximal matchings in dynamic graphs has been studied extensively, culminating in an $O(\sqrt{m})$ worst-case update time deterministic algorithm [37] and an $O(1)$ expected amortized update time randomized algorithm [43] (assuming a non-adaptive oblivious adversary).
Maintaining an MIS in a dynamic graph seems inherently more complicated than maintaining a maximal matching. One simple reason is that, as argued before, a single update can only change the status of $O(1)$ edges/vertices in the maximal matching, while any algorithm can be forced to make $\Omega(n)$ changes to the MIS for a single edge update in the worst case. As a result, a maximal matching can be maintained with an $O(\Delta)$ worst-case update time via a straightforward algorithm (see, e.g., [31, 38]), while the analogous approach for MIS only results in an $O(\Delta)$ amortized update time.
Another, perhaps more fundamental, difference between the two problems lies in their different levels of "locality." To adjust a maximal matching after an update, we only need to consider the neighbors of the recently unmatched vertices (to find another unmatched vertex to match with), while to fix an MIS, we need to consider the two-hop neighborhood of a vertex recently removed from the MIS (to add to the MIS the neighbors of this vertex which themselves do not have another neighbor in the MIS). We note that this difficulty is similar in spirit to the barrier for maintaining a better-than-$2$-approximate matching via excluding length-$3$ augmenting paths in dynamic graphs. Currently, the best known algorithm for achieving a better-than-$2$ approximation to matching in dynamic graphs requires $O(\sqrt{m})$ update time [10, 11]. Achieving an update time subpolynomial in $n$, even using randomness and assuming a non-adaptive oblivious adversary, remains a major open problem in this area (we refer the interested reader to [15] for more details).
We emphasize that even for the seemingly easier problem of maximal matching, the best upper bound on the update time of a deterministic algorithm (the focus of our paper) is only $O(\sqrt{m})$ [37].
1.2 Overview of Our Techniques
$O(\Delta)$ Amortized Update Time.
Consider the following simple algorithm for maintaining an MIS $\mathcal{M}$ of a dynamic graph: for each vertex, maintain the number of its neighbors in $\mathcal{M}$ in a counter, and for each update in the graph or $\mathcal{M}$, spend $O(\Delta)$ time to update this counter for the neighbors of the updated vertex. What is the complexity of this algorithm? Unfortunately, as argued before, an update to the graph may inevitably result in an update of size $\Omega(\Delta)$ to $\mathcal{M}$. Processing it may take $O(\Delta^2)$ time, as we have to update all neighbors of every updated vertex. However, all we need to handle this case is the following basic observation: while a single update can force the algorithm to insert up to $\Delta$ vertices to $\mathcal{M}$, it can never force the algorithm to remove more than one vertex from $\mathcal{M}$. We therefore charge the time needed to insert a vertex into $\mathcal{M}$ (and there can be many such vertices per one update) to the time spent in a previous update in which the same vertex was (the only one) removed from $\mathcal{M}$. This allows us to argue that on average, we only spend $O(\Delta)$ time per update.
$O(m^{3/4})$ Amortized Update Time.
Achieving an $O(m^{3/4})$ amortized update time, however, is distinctly more challenging. On the one hand, we cannot afford to update all neighbors of a vertex after every change in the graph. On the other hand, we do not have enough time to iterate over all neighbors of an updated vertex even to check whether or not they should be added to $\mathcal{M}$, and hence need to maintain this information, which is a function of the vertices in the two-hop neighborhood of a vertex, explicitly for every vertex.
To bypass these challenges, we relax our requirement for knowing the status of all vertices in the neighborhood of a vertex, and instead maintain the status of some vertices that are in the two-hop neighborhood of a vertex. More concretely, we allow "high" degree vertices to not update their "low" degree neighbors about their status (as the number of low degree neighbors can be very large), while making every "low" degree vertex update not only all its neighbors but even some of its neighbors' neighbors, using the extra time available to this vertex (as its degree is small). This approach allows us to maintain a "noisy" version of the information described above. Note that this information is not completely accurate, as the status of some vertices in $\mathcal{M}$ would be unknown to their neighbors and their neighbors' neighbors (in the actual algorithm, we use a more fine-grained partition of vertices based on their degree into more than two classes, not only "high" and "low").
We now need to address a new challenge introduced by working with this "noisy" information: we may decide that a vertex is ready to join $\mathcal{M}$ based on the information stored in the algorithm and insert this vertex to $\mathcal{M}$, only to find out that there are already some vertices in $\mathcal{M}$ adjacent to this vertex. To handle this, we also relax the property of the basic algorithm above that only allowed deleting one vertex from $\mathcal{M}$ per update. This allows us to insert multiple vertices to $\mathcal{M}$ as long as a large portion (but not necessarily all) of their neighbors are known to be not in $\mathcal{M}$. Then we go back and delete a small number of "violating" vertices from $\mathcal{M}$ to make sure it is indeed an independent set. Note that deleting those vertices may now require inserting a new set of vertices in their neighborhood to $\mathcal{M}$ to ensure maximality.
In order to be able to perform all those operations and recursively treat the newly deleted vertices in a timely manner, we maintain the invariant that whenever we need to remove more than one vertex from $\mathcal{M}$, the number of inserted vertices leading to this case is much larger than the number of removed vertices. This allows us to extend the simple charging scheme used in the analysis of the basic algorithm above to this new algorithm and prove our upper bound on the amortized update time of the algorithm.
We point out that despite the multiple challenges described above, our algorithm turned out to be quite simple in hindsight. The main delicate matters are the choice of parameters and the analysis. This in turn makes the implementation of our results in sequential and distributed settings quite practical.
Organization.
We introduce our notation and preliminaries in Section 2. We then present a simple proof of the $O(\Delta)$ amortized update time algorithm in Section 3 as a warm-up to our main result. Section 4 contains the proof of our main result in Theorem 1. The distributed implementation of our result and a detailed comparison of our results with those of Censor-Hillel et al. [17] appear in Section 5.
2 Preliminaries
Notation.
We denote the static vertex set of the input graph by $V$ and let $n := |V|$. Let $G_0, G_1, \ldots, G_K$ be the sequence of graphs that are given to the algorithm: the initial graph $G_0$ is empty, and each graph $G_i$ is obtained from the previous graph $G_{i-1}$ by either inserting or deleting a single edge. We use $E_i$ to denote the edge set of the graph at step $i$ and define $m_i := |E_i|$. Finally, throughout the paper, $\mathcal{M}$ denotes the maximal independent set maintained by the algorithm at every time step.
Greedy MIS Algorithm.
Consider the following algorithm for computing an MIS of a given graph: Fix an arbitrary ordering of the vertices in the graph, add the first vertex to the MIS, remove all its neighbors from the list, and continue. This algorithm clearly computes an MIS of the input graph. In the rest of the paper, we refer to this algorithm as the greedy MIS algorithm.
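The greedy procedure just described fits in a few lines. The following sketch is our own illustration, not the paper's code; the adjacency representation `adj` and all names are our choices:

```python
# Greedy MIS: scan vertices in a fixed arbitrary order and add each vertex
# whose neighbors were all skipped so far. `adj` maps a vertex to the set
# of its neighbors; the representation is illustrative, not from the paper.
def greedy_mis(vertices, adj):
    in_mis = set()
    for v in vertices:
        # v joins unless one of its neighbors was already selected
        if not any(u in in_mis for u in adj.get(v, ())):
            in_mis.add(v)
    return in_mis
```

For the path $a$-$b$-$c$, scanning in the order $a, b, c$ yields $\{a, c\}$, while the order $b, a, c$ yields $\{b\}$; both are maximal independent sets.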
Fact 2.1.
For an $n$-vertex graph with maximum degree $\Delta$, the greedy MIS algorithm computes an MIS of size at least $n/(\Delta+1)$.
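The fact follows from a one-line counting argument; in the notation below, $S$ is the MIS computed by the greedy scan (the symbol is ours):

```latex
% When the scan selects a vertex v, it retires v itself together with at
% most \Delta unselected neighbors, so each selection accounts for at
% most \Delta + 1 of the n vertices:
\[
  n \;\le\; \sum_{v \in S} \bigl(1 + \deg(v)\bigr)
    \;\le\; |S| \cdot (\Delta + 1)
  \qquad\Longrightarrow\qquad
  |S| \;\ge\; \frac{n}{\Delta + 1}.
\]
```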
3 Warm-Up: A Simple $O(\Delta)$ Update-Time Dynamic Algorithm
As a warm-up to our main result, we describe a straightforward algorithm for maintaining an MIS $\mathcal{M}$ in a dynamic graph with $O(\Delta)$ amortized update time, where $\Delta$ is a fixed upper bound on the maximum degree in the graph. For every vertex $v$ in the graph, we simply maintain a counter MISCounter[$v$], counting the number of its neighbors in $\mathcal{M}$. In the following, we consider updating $\mathcal{M}$ and this counter after each edge update.
Let $(u,v)$ be the updated edge. Suppose first that we delete this edge. In this case, $u$ and $v$ cannot both be in $\mathcal{M}$ by the definition of an independent set. Also, if neither of them belongs to $\mathcal{M}$, there is nothing to do. The interesting case is thus when exactly one of $u$ or $v$ belongs to $\mathcal{M}$; without loss of generality, we assume this vertex is $u$. We first subtract one from MISCounter[$v$] (as $v$ is no longer adjacent to $u$). If still MISCounter[$v$] $> 0$, then $v$ is adjacent to some vertex in $\mathcal{M}$ and hence we are done. Otherwise, we add $v$ to $\mathcal{M}$ and update the counters of all its neighbors in $O(\Delta)$ time. Clearly, this step takes $O(\Delta)$ time in the worst case, after which $\mathcal{M}$ is again an MIS.
Now suppose the edge $(u,v)$ was inserted into the graph. The only interesting case here is when both $u$ and $v$ belong to $\mathcal{M}$ (we do not need to do anything in the remaining cases, other than perhaps updating the neighbor lists and counters of $u$ and $v$ in $O(1)$ time). To ensure that $\mathcal{M}$ remains an independent set, we need to remove one of these vertices, say $v$, from $\mathcal{M}$. After this, to ensure maximality, we have to insert into $\mathcal{M}$ any neighbor of $v$ that can now join it. To do this, we first update MISCounter[$w$] for all neighbors $w$ of $v$ in $O(\Delta)$ time. Next (using the updated counters), we iterate over all neighbors of $v$ and for each one check whether it can now be inserted into $\mathcal{M}$. If so, we add this new vertex to $\mathcal{M}$ and inform all its neighbors in $O(\Delta)$ time to update their counters. It is easy to see that in this case, we spend $O((k+1) \cdot \Delta)$ time in the worst case, where $k$ is the number of vertices added to $\mathcal{M}$.
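Both update cases can be sketched compactly. The class below is our own illustrative implementation, not the paper's code: all names are ours, `count[v]` stands in for MISCounter[$v$], and the sketch favors clarity over the exact bookkeeping of the analysis.

```python
# Warm-up dynamic MIS sketch: `count[v]` mirrors MISCounter[v], the number
# of v's neighbors currently in the MIS. Illustrative, not the paper's code.
class DynamicMIS:
    def __init__(self, vertices):
        self.adj = {v: set() for v in vertices}
        self.mis = set(vertices)               # empty graph: everyone is in the MIS
        self.count = {v: 0 for v in vertices}  # MIS-neighbors of each vertex

    def _enter(self, v):
        self.mis.add(v)
        for u in self.adj[v]:
            self.count[u] += 1

    def _leave(self, v):
        self.mis.remove(v)
        for u in self.adj[v]:
            self.count[u] -= 1

    def insert(self, u, v):
        self.adj[u].add(v); self.adj[v].add(u)
        if u in self.mis: self.count[v] += 1
        if v in self.mis: self.count[u] += 1
        if u in self.mis and v in self.mis:    # independence violated
            self._leave(v)                     # arbitrary choice of endpoint
            for w in self.adj[v]:              # restore maximality around v
                if w not in self.mis and self.count[w] == 0:
                    self._enter(w)

    def delete(self, u, v):
        self.adj[u].discard(v); self.adj[v].discard(u)
        for x, y in ((u, v), (v, u)):
            if y in self.mis and x not in self.mis:
                self.count[x] -= 1             # x lost MIS-neighbor y
                if self.count[x] == 0:
                    self._enter(x)             # x can now join
```

Each call to `_enter` or `_leave` touches only the neighbors of one vertex, i.e., $O(\Delta)$ work per status change, matching the accounting above.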
The correctness of this algorithm is straightforward to verify. We now prove that the amortized running time of the algorithm is $O(\Delta)$. The crucial observation is that whenever we change $\mathcal{M}$, we may increase its size without any restriction, but we never decrease its size by more than one. We use the following straightforward charging scheme.
Initially, we start with all vertices being in $\mathcal{M}$, as the original graph is empty. Whenever we delete one vertex from $\mathcal{M}$, we spend $O(\Delta)$ time to handle this vertex (including updating its neighbors and checking which ones can join $\mathcal{M}$), and place $O(\Delta)$ "extra budget" on this vertex to be spent later. We use this budget when this vertex is being inserted into $\mathcal{M}$ again: whenever we want to bring this vertex back to $\mathcal{M}$, we only need to spend this extra budget, and hence the time spent for inserting the vertex back into $\mathcal{M}$ can be charged to the time spent for this vertex when we removed it from $\mathcal{M}$. This implies that the update time is $O(\Delta)$ on average. We can therefore conclude the following lemma.
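The same charging argument can be phrased as a potential-function computation; the constant $c$ and the potential $\Phi$ below are our own bookkeeping notation, not the paper's:

```latex
% Budget of c\Delta per vertex currently outside the MIS:
\[
  \Phi \;=\; c\,\Delta \cdot \bigl|V \setminus \mathcal{M}\bigr|,
  \qquad \Phi_{\text{initial}} = 0 .
\]
% Removing a vertex costs O(\Delta) real work and raises \Phi by c\Delta,
% for an amortized cost of O(\Delta). Re-inserting a vertex costs O(\Delta)
% real work but releases c\Delta from \Phi, so for a large enough constant
% c its amortized cost is zero. Every update therefore pays O(\Delta)
% amortized, as claimed.
```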
Lemma 3.1.
Starting from an empty graph on $n$ vertices, a maximal independent set can be maintained deterministically over any sequence of edge insertions and deletions in $O(\Delta)$ amortized update time, where $\Delta$ is a fixed upper bound on the maximum degree in the graph.
4 An $O(m^{3/4})$ Update-Time Dynamic Algorithm
We present our fully dynamic algorithm for maintaining a maximal independent set in this section and prove Theorem 1. The following lemma is a somewhat weaker looking version of Theorem 1. However, we prove next that this lemma is all we need to prove Theorem 1.
Lemma 4.1.
Starting with any arbitrary graph on $n$ vertices and $m$ edges, a maximal independent set can be maintained deterministically over any sequence of $K$ edge insertions and deletions in $O((K + m) \cdot m^{3/4})$ total time, as long as the number of edges in the graph remains within a factor of $2$ of $m$.
We first show that this lemma implies Theorem 1.
Proof of Theorem 1.
For simplicity, we define $m := 1$ in the case of empty graphs. We start from the empty graph and run the algorithm in Lemma 4.1 until the number of edges in the graph differs from $m$ by a factor of more than $2$. This crucially implies that the total number of updates before terminating the algorithm (the parameter $K$ in Lemma 4.1) is $\Omega(m)$. As such, we can invoke Lemma 4.1 to obtain an upper bound of $O(m^{3/4})$ on the amortized update time of the algorithm throughout these updates. We then update $m$ and start running the algorithm in Lemma 4.1 on the current graph using the new choice of $m$. Clearly, this results in an amortized update time of $O(m^{3/4})$, where $m$ now denotes the number of dynamic edges in the graph. As the algorithm in Lemma 4.1 always maintains an MIS of the underlying graph, we obtain the final result.
The rest of this section is devoted to the proof of Lemma 4.1. In the following, we first describe the data structure maintained in the algorithm for storing the required information and its main properties and then present our update algorithm.
4.1 The Data Structure
For every vertex $v \in V$, we maintain the following information:
neighbors[$v$]: a list of the current neighbors of $v$ in the graph.

neighborsdegree[$v$]: a list containing degree[$u$] for every vertex $u$ in neighbors[$v$].

MISflag[$v$]: a boolean entry indicating whether or not $v$ belongs to $\mathcal{M}$.

MISneighbors[$v$]: a counter denoting the size of a suitable subset of the current neighbors of $v$ in $\mathcal{M}$. Any vertex counted in MISneighbors[$v$] belongs to $\mathcal{M}$, but not all neighbors of $v$ in $\mathcal{M}$ are (necessarily) counted in MISneighbors[$v$] (see Invariant 1 for more detail).

MIS2hopneighbors[$v$]: a list containing, for every vertex $u$ in neighbors[$v$], a counter MIS2hopneighbors[$v$][$u$] that counts the size of a suitable subset of the current neighbors of $u$ in $\mathcal{M}$. Any vertex counted in MIS2hopneighbors[$v$][$u$] is also counted in MISneighbors[$u$], but not vice versa (see Invariant 2 below for more detail).
Additionally, we maintain a partition of the vertices into four sets $V_1, V_2, V_3, V_4$ based on their current approximate degree, namely degree[$v$]. In particular, $v$ belongs to $V_1$ iff degree[$v$] $\in [0, m^{1/4})$, to $V_2$ iff degree[$v$] $\in [m^{1/4}, m^{1/2})$, to $V_3$ iff degree[$v$] $\in [m^{1/2}, m^{3/4})$, and to $V_4$ iff degree[$v$] $\geq m^{3/4}$. We refer to the vertices of $V_1$ as the low-degree vertices. Throughout, we assume that in any of the lists maintained for a vertex by the algorithm, we can directly iterate over the vertices of any particular subset $V_i$. (This can be done, for example, by storing these lists as four separate linked lists, one per subset.)
The following invariant is concerned with the information we need from MISneighbors[].
Invariant 1.
For any vertex $v \in V \setminus V_1$, MISneighbors[$v$] counts the number of all neighbors of $v$ in $\mathcal{M}$. For any vertex $v \in V_1$, MISneighbors[$v$] counts the number of neighbors of $v$ that are in $\mathcal{M}$ but not in $V_4$, i.e., are in $\mathcal{M} \setminus V_4$.
By Invariant 1, any vertex either knows the number of all its neighbors in $\mathcal{M}$ or is a low-degree vertex and can iterate over all its neighbors in $O(m^{1/4})$ time to count this number. Moreover, even a low-degree vertex knows the number of its neighbors in $\mathcal{M} \setminus V_4$. This is crucial for our algorithm, as in some cases we need to iterate over many vertices that belong to $V_1$ and decide if they can join $\mathcal{M}$, and hence cannot afford to spend even $O(m^{1/4})$ time per vertex to determine this information. Note that the information we obtain in this way is "noisy", as we ignore some neighbors of vertices in $V_1$ that are potentially in $\mathcal{M}$. We shall address this problem using a post-processing step that exploits the fact that the total number of ignored vertices, i.e., vertices in $\mathcal{M} \cap V_4$, is small.
The following invariant is concerned with the information we need from MIS2hopneighbors[].
Invariant 2.
For any vertex $v$ and every neighbor $u \in$ neighbors[$v$] with $u \in V_1$, MIS2hopneighbors[$v$][$u$] counts the number of vertices in $V_1 \cup V_2$ that belong to $\mathcal{M}$ and are neighbors of $u$ (the entry in MIS2hopneighbors[$v$] for any vertex $u \notin V_1$ is not maintained).
Invariant 2 allows us to infer some nontrivial information about the two-hop neighborhood of any vertex. We use Invariant 2 to quickly determine which neighbors of a vertex $v$ can be added to $\mathcal{M}$ in case $v$ is deleted from it. Similar to the one-hop information we obtain through maintaining Invariant 1, the information we obtain in this way is also "noisy".
We now show how to update the information of each vertex after a change in the topology of the graph or in $\mathcal{M}$. Maintaining neighbors[$v$] under edge updates is straightforward. To maintain degree[$v$], each vertex $v$ simply keeps a $2$-approximation of its degree in degree[$v$]. Whenever the current actual degree of $v$ differs from degree[$v$] by more than a factor of two, $v$ updates degree[$v$] to its actual degree and informs all its neighbors to update neighborsdegree[]. This requires only $O(1)$ amortized time.
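The lazy degree bookkeeping just described can be sketched as follows; the class, field names, and the exact refresh rule are our own illustrative choices, not the paper's:

```python
# Each vertex caches a degree estimate and broadcasts it to its neighbors
# only when the true degree drifts outside a factor of two, so the O(deg)
# broadcast cost is amortized over the >= deg/2 updates that caused it.
class ApproxDegree:
    def __init__(self, vertices):
        self.adj = {v: set() for v in vertices}
        self.deg_est = {v: 0 for v in vertices}
        # neighbor_est[v][u]: u's degree estimate as last reported to v
        self.neighbor_est = {v: {} for v in vertices}

    def _maybe_refresh(self, v):
        d, est = len(self.adj[v]), self.deg_est[v]
        if d > 2 * est or 2 * d < est:     # drifted beyond a factor of two
            self.deg_est[v] = d
            for u in self.adj[v]:          # inform all current neighbors
                self.neighbor_est[u][v] = d

    def update_edge(self, u, v, inserted):
        if inserted:
            self.adj[u].add(v); self.adj[v].add(u)
            self.neighbor_est[u][v] = self.deg_est[v]
            self.neighbor_est[v][u] = self.deg_est[u]
        else:
            self.adj[u].discard(v); self.adj[v].discard(u)
            self.neighbor_est[u].pop(v, None)
            self.neighbor_est[v].pop(u, None)
        self._maybe_refresh(u)
        self._maybe_refresh(v)
```

Between refreshes, `deg_est[v]` stays within a factor of two of the true degree, which is all the degree-class partition above needs.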
The above information is a function of the underlying graph and not of $\mathcal{M}$. We also need to update the information of each vertex that is a function of $\mathcal{M}$ whenever $\mathcal{M}$ changes. Maintaining MISflag[$v$] is trivial for any vertex $v$, hence in the following we focus on the remaining two parts.
Once a vertex $v$ changes its status in $\mathcal{M}$, we apply the following algorithm to update the value of MISneighbors[$u$] for every vertex $u$ (we only need to update this for $u \in$ neighbors[$v$]).
Algorithm UpdateMISNeighbors($v$). An algorithm called whenever a vertex $v$ enters or exits $\mathcal{M}$, to update MISneighbors[] for the neighbors of $v$.

If $v \in V_4$, update MISneighbors[$u$] for any vertex $u \in$ neighbors[$v$] not in $V_1$ accordingly (i.e., add or subtract one depending on whether $v$ joined or left $\mathcal{M}$).

If $v \notin V_4$, update MISneighbors[$u$] for every vertex $u \in$ neighbors[$v$] accordingly.
It is immediate to see that by running UpdateMISNeighbors in our main algorithm whenever a vertex is updated in $\mathcal{M}$, we can maintain Invariant 1. Also, each call to UpdateMISNeighbors takes $O(m^{3/4})$ time in the worst case, since in both cases of the algorithm we only need to update $O(m^{3/4})$ vertices: if $v \in V_4$, the algorithm only updates the vertices in neighbors[$v$] $\setminus V_1$, whose number is $O(m^{3/4})$ (any vertex outside $V_1$ has degree $\Omega(m^{1/4})$, so there are $O(m^{3/4})$ such vertices overall), and if $v \notin V_4$, $v$ only has $O(m^{3/4})$ neighbors to update. We also point out that MISneighbors[$v$] can be updated easily whenever an edge incident on $v$ is inserted or deleted, in $O(1)$ time, by simply inspecting the MISflag[] entry of the other endpoint and updating MISneighbors[$v$] accordingly.
Now consider updating MIS2hopneighbors[]. We use the following algorithm on a vertex $v$ that has changed its status in $\mathcal{M}$ to update MIS2hopneighbors[] for the affected vertices in the graph (we only need to update this information for the two-hop neighborhood of $v$).
Algorithm Update2HopNeighbors($v$). An algorithm called whenever a vertex $v$ enters or exits $\mathcal{M}$, to update MIS2hopneighbors[] for the two-hop neighborhood of $v$.

If $v \in V_1 \cup V_2$, then for any vertex $u \in$ neighbors[$v$]:

If $u$ belongs to $V_1$, iterate over all vertices $w \in$ neighbors[$u$].

For any such $w$, update MIS2hopneighbors[$w$][$u$] accordingly (i.e., add or subtract one depending on whether $v$ joined or left $\mathcal{M}$).

Each call to Update2HopNeighbors takes $O(m^{3/4})$ time in the worst case. This is because $v$ only updates its neighbors if it has $O(m^{1/2})$ of them (as $v$ should be in $V_1 \cup V_2$), and when it updates a neighbor $u$, it changes the counters of $O(m^{1/4})$ vertices (as $u$ should be in $V_1$). This ensures that the running time of the algorithm is $O(m^{1/2} \cdot m^{1/4}) = O(m^{3/4})$. Whenever an edge $(u,v)$ is updated in the graph, we can run a similar algorithm to update the two-hop neighborhoods of $u$ and $v$ in the same way in $O(m^{3/4})$ time; we omit the details. It is also straightforward to verify that by running Update2HopNeighbors in our main algorithm whenever a vertex is updated in $\mathcal{M}$, we preserve Invariant 2.
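Stripped of the degree-class restrictions, the two counter layers can be sketched as follows. This is our own illustration with our own names; unlike the paper, it updates the counters for every vertex, which is exactly the quadratic-per-change behavior the degree classes are designed to avoid, but it makes the two invariants concrete.

```python
# One-hop counters (Invariant 1) and their two-hop mirrors (Invariant 2),
# maintained exactly rather than restricted by degree class: `twohop[x][u]`
# lets x read off u's MIS-neighbor count without touching u's neighborhood.
class TwoHopMIS:
    def __init__(self, vertices, edges):
        self.adj = {v: set() for v in vertices}
        for u, v in edges:
            self.adj[u].add(v); self.adj[v].add(u)
        self.mis = set()
        self.count = {v: 0 for v in vertices}   # MIS-neighbors of v
        self.twohop = {v: {u: 0 for u in self.adj[v]} for v in vertices}

    def set_status(self, w, joins):
        """Called whenever w enters (joins=True) or exits the MIS."""
        delta = 1 if joins else -1
        (self.mis.add if joins else self.mis.discard)(w)
        for u in self.adj[w]:          # one-hop: u gained/lost MIS-neighbor w
            self.count[u] += delta
            for x in self.adj[u]:      # two-hop: mirror count[u] at each x
                self.twohop[x][u] += delta
```

In the paper, the inner loop runs only when the updated vertex and the intermediate neighbor fall into suitable degree classes, which caps the work at $O(m^{3/4})$ per status change at the price of the "noisy" counters discussed above.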
Finally, recall that we also need a preprocessing step that, given a graph, initializes this data structure. We can implement this step by first initializing all non-MIS-related information in this data structure in $O(n + m)$ time (we do not need to handle isolated vertices at this point). Next, we run the greedy MIS algorithm to compute an MIS of this graph in $O(n + m)$ time (again only on non-isolated vertices). Finally, we update the information for every vertex in $\mathcal{M}$ using the two procedures above, which takes $O(m \cdot m^{3/4})$ time in total. (We note that a more efficient implementation for this initial stage is possible.) As in Lemma 4.1, this (one-time-only) initialization cost is within the bounds stated in the lemma statement.
We summarize the results in this section in the following two lemmas.
Lemma 4.2.
The data structure described in this section can be maintained after each edge update in $O(m^{3/4})$ amortized time, excluding the cost of updating $\mathcal{M}$ itself.
Lemma 4.3.
The data structure described in this section can be updated in $O(m^{3/4})$ worst-case time whenever a single vertex joins or leaves $\mathcal{M}$.
4.2 The Update Algorithm
The update algorithm is applied following edge insertions and deletions to and from the graph. After any edge update, the algorithm updates the data structure and $\mathcal{M}$. In order to do the latter task, the algorithm may need to remove and/or insert multiple vertices from and to $\mathcal{M}$. Since we already argued that maintaining the data structure requires $O(m^{3/4})$ amortized time (by Lemma 4.2), from now on, without loss of generality, we only measure the time needed to fix $\mathcal{M}$ after any edge update and ignore the additive $O(m^{3/4})$ term needed to update the data structure. The following is the core invariant that we aim to maintain in our algorithm.
Invariant 3 (Core Invariant).
Following every edge update, the set $\mathcal{M}$ maintained by the algorithm is an MIS of the input graph. Moreover,


if only a single vertex leaves $\mathcal{M}$, then there is no restriction on the number of vertices joining $\mathcal{M}$ (which could be zero).

if at least two vertices leave $\mathcal{M}$, then at least twice as many vertices join $\mathcal{M}$.
In either case, the total time spent by the algorithm to fix $\mathcal{M}$ after an edge update is at most an $O(m^{3/4})$ factor larger than the total number of vertices leaving and joining $\mathcal{M}$.
Proof of Lemma 4.1 (assuming Invariant 3).
The main idea behind the proof is as follows. By Invariant 3, after each update, the size of $\mathcal{M}$ either decreases by at most one, or it increases. At the same time, $\mathcal{M}$ cannot grow beyond $n$, the number of vertices in the graph. It then follows that the average number of changes to $\mathcal{M}$ per update is $O(1)$. As we only spend $O(m^{3/4})$ time per change (by Invariant 3), we obtain the final result. We now present the formal proof using the following charging scheme.
Recall that we compute an MIS of the initial graph in the preprocessing step and that the initialization phase takes $O(m \cdot m^{3/4})$ time in total. We place an $O(m^{3/4})$ "extra budget" on each vertex of the initial graph that does not belong to $\mathcal{M}$, to be spent later when the vertex is inserted into $\mathcal{M}$. As the number of such vertices is $O(m)$ (any vertex not in $\mathcal{M}$ is non-isolated), this extra budget can be charged to the time spent in the initialization phase. Note that at this point, an extra budget of $O(m^{3/4})$ is allocated to every vertex not in $\mathcal{M}$, and we maintain this invariant throughout the algorithm.
Whenever an update results in only a single vertex leaving $\mathcal{M}$ (corresponding to Part 1 of Invariant 3), we spend $O(m^{3/4})$ time to handle this vertex and additionally place an $O(m^{3/4})$ budget on it; then, for the vertices inserted into $\mathcal{M}$, we simply use the extra budget allocated to these vertices before to charge the time needed to handle each of them. If an update results in removing $k \geq 2$ vertices from $\mathcal{M}$, we know that at least $2k$ vertices are added to $\mathcal{M}$ after this update (corresponding to Part 2 of Invariant 3). In this case, we use the extra budget on these (at least) $2k$ vertices that are joining $\mathcal{M}$ to charge the time needed to insert them into $\mathcal{M}$, remove the initial $k$ vertices from $\mathcal{M}$, and place an $O(m^{3/4})$ extra budget on every removed vertex. As a result, this type of update can be handled free of charge. Finally, if an update only involves inserting some vertices into $\mathcal{M}$, we simply use the budgets on these vertices to handle them free of charge. This finalizes the proof of Lemma 4.1.
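Part 2 of Invariant 3 is exactly what makes this scheme balance. With our own symbols ($k$ removals, $j$ insertions, and a budget of $c\,m^{3/4}$ per vertex outside $\mathcal{M}$), the computation is:

```latex
% Potential: \Phi = c\,m^{3/4}\cdot|V \setminus \mathcal{M}|.
% An update removing k >= 2 vertices inserts j >= 2k vertices, costs
% O((j+k)\,m^{3/4}) real time, and decreases \Phi by c\,m^{3/4}(j - k).
% Since j >= 2k gives j + k <= 3(j - k), taking c at least three times
% the hidden constant yields a non-positive amortized cost:
\[
  \underbrace{O\!\bigl((j+k)\,m^{3/4}\bigr)}_{\text{real work}}
  \;-\; c\,m^{3/4}\,(j - k) \;\le\; 0 .
\]
% The single-removal case (k <= 1) pays only its own O(m^{3/4}).
```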
We point out that using the above charging scheme, we can also argue that the average number of changes to $\mathcal{M}$ per update is $O(1)$.
Fix a time step and suppose the invariant holds up until this time step. Let $(u,v)$ be the edge updated at this time step. In the remainder of this section, we describe one round of the update algorithm that handles this single edge update and preserves Invariant 3.
4.2.1 Edge Deletions
We start with the easier case of deleting an edge $(u, v)$.
Case 1: Neither $u$ nor $v$ belongs to $\mathcal{M}$. In this case, there is nothing to do.
Case 2: $u$ belongs to $\mathcal{M}$ but $v$ does not (or vice versa). After deleting the edge $(u, v)$, it is possible that $v$ may need to join $\mathcal{M}$ as well. We first check whether MISneighbors[$v$] $= 0$. If not, there is nothing else to do, as $v$ is still adjacent to some vertex in $\mathcal{M}$. Otherwise, we need to ensure that $v$ does not have any neighbor in $\mathcal{M}$ outside the vertices counted in MISneighbors[$v$]. If MISneighbors[$v$] is guaranteed by Invariant 1 to count all neighbors of $v$ in $\mathcal{M}$, there is nothing more to check. Otherwise, we can go over all vertices in the neighborhood of $v$ and check directly whether $v$ has a neighbor in $\mathcal{M}$ or not; this takes time linear in the degree of $v$ in the worst case. Again, if we find a neighbor in $\mathcal{M}$, there is nothing else to do. Otherwise, we add $v$ to $\mathcal{M}$ and update the data structure using Lemma 4.3. After this step, $\mathcal{M}$ is again a valid MIS, and Invariant 3 is preserved: we inserted at most one vertex into $\mathcal{M}$ without deleting any vertex from it, spending time within the bound of Lemma 4.3.
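The deletion cases above can be sketched as follows. This is an illustrative toy: instead of the paper's MISneighbors[] counters, membership of $\mathcal{M}$-neighbors is recomputed by scanning adjacency sets directly.

```python
def delete_edge(adj, in_mis, u, v):
    """Handle deletion of edge (u, v) in the spirit of Section 4.2.1.
    `adj` maps vertices to adjacency sets; `in_mis` maps vertices to booleans."""
    adj[u].discard(v)
    adj[v].discard(u)
    # Case 1: neither endpoint in M -- nothing to do (loop body never fires).
    # Case 3: both endpoints in M is impossible while M is independent.
    # Case 2: one endpoint in M; the other may have to join M for maximality.
    for x, y in ((u, v), (v, u)):
        if in_mis[x] and not in_mis[y]:
            if not any(in_mis[w] for w in adj[y]):
                in_mis[y] = True        # y has no MIS-neighbor left: it joins
```

For example, deleting the only edge of a two-vertex graph whose MIS is one endpoint forces the other endpoint into the MIS.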
Case 3: Both $u$ and $v$ belong to $\mathcal{M}$. This case is impossible in the first place: by Invariant 3, the set $\mathcal{M}$ maintained by the algorithm before this edge update was an MIS, and an MIS contains no two adjacent vertices.
4.2.2 Edge Insertions
We now consider the by far more challenging case of edge insertions, on which we concentrate the bulk of our efforts. It is immediate to see that the only time we need to handle an edge insertion is when the inserted edge connects two vertices already in $\mathcal{M}$ (there is nothing to do in the remaining cases). Hence, in the following, we assume both $u$ and $v$ belong to $\mathcal{M}$.
To ensure that $\mathcal{M}$ remains an independent set, we first need to remove $u$ or $v$ from it and then potentially insert some of the neighbors of the deleted vertex into $\mathcal{M}$ to ensure its maximality. Let $v$ be the deleted vertex (the choice of which endpoint to delete is arbitrary). After deleting $v$, we update the algorithm’s data structure using Lemma 4.3.
Consider the set of low-degree neighbors of $v$. We first show that one can easily handle all neighbors of $v$ that are not in this set: we can iterate over them directly, and for any such vertex $w$, by Invariant 1, we know whether $w$ can be added to $\mathcal{M}$ or not by simply checking MISneighbors[$w$]. Hence, we can add the necessary vertices to $\mathcal{M}$, spending the time given by Lemma 4.3 for each inserted one; the time for iterating over the vertices that did not join $\mathcal{M}$ is within the budget as well. Hence, Invariant 3 is preserved after this step.
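The greedy promotion step above can be sketched as follows; this is a minimal toy in which MIS-neighbor counts are recomputed by scanning adjacency sets rather than read from MISneighbors[], and the predicate `low_degree` stands in for the paper's degree threshold (both are our simplifications).

```python
def promote_neighbors(adj, in_mis, v, low_degree):
    """After v leaves the MIS, greedily promote those neighbors of v that
    satisfy the `low_degree` predicate and currently have no MIS-neighbor.
    Returns the list of promoted vertices (in sorted order, for determinism)."""
    promoted = []
    for w in sorted(adj[v]):
        if low_degree(w) and not in_mis[w]:
            if not any(in_mis[x] for x in adj[w]):
                in_mis[w] = True      # w joins M; later candidates see this
                promoted.append(w)
    return promoted
```

Note that because promotions take effect immediately, a later candidate adjacent to an earlier promoted vertex is skipped, so the result stays independent.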
We now consider the challenging case of updating the remaining (low-degree) neighbors of $v$. As the number of such vertices is potentially very large, we cannot iterate over all of them anymore. Define the following subsets of this set:

- the vertices in this set that do not have any neighbor in $\mathcal{M}$ (the candidates for joining $\mathcal{M}$);
- the vertices $w$ in this set with MISneighbors[$w$] $= 0$, i.e., the vertices for which our algorithm did not count any neighbor in $\mathcal{M}$ (recall that MISneighbors[$w$] does not count all neighbors of $w$ in $\mathcal{M}$; by Invariant 1, it may miss some of them);
- the vertices $w$ in this set that appear in the list MIS2hopneighbors[$v$] (again, recall that the stored information does not account for all neighbors of $w$ in $\mathcal{M}$; by Invariant 2, it may miss some of them).
Our algorithm does not know the first two sets, or even their sizes. However, the update algorithm knows the size of the third set, has access to its vertices through the list MIS2hopneighbors[$v$], and can iterate over them (notice that even this can potentially be too time consuming, as the size of this list can be very large). We consider different cases based on the values of these parameters.
Case 1: when the list MIS2hopneighbors[$v$] is small. In this case, we iterate over its vertices and check the value of MISneighbors[$w$] for each vertex $w$; this allows us to compute the set of vertices with no counted neighbor in $\mathcal{M}$ as well. We further distinguish between two subcases.
Case 1a: when the set of vertices with no counted neighbor in $\mathcal{M}$ is very small. We iterate over its vertices and, for each vertex, go over all its neighbors to decide whether it has any neighbor in $\mathcal{M}$ or not (this is affordable, as each such vertex has low degree). Hence, in this case, we can obtain the set of vertices with no neighbor in $\mathcal{M}$ exactly.
We then iterate over the vertices in this set, insert each one greedily into $\mathcal{M}$, and update the data structure using Lemma 4.3. It is possible that some of these vertices are adjacent to each other, and hence before inserting any vertex $w$, we first check MISneighbors[$w$] to make sure it is still zero (by Invariant 1, and since all these vertices have low degree, any vertex added to $\mathcal{M}$ here updates MISneighbors[$w'$] for every neighbor $w'$). Hence, in this case, we spend bounded time per vertex inserted into $\mathcal{M}$ and did not delete any vertex from $\mathcal{M}$. Therefore, Invariant 3 is preserved after the edge update in this case.
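The counter-guarded greedy insertion can be sketched as follows; `counters` plays the role of MISneighbors[], and the function assumes, as in Case 1a, that `candidates` is the precomputed set of vertices with no MIS-neighbor (both names are ours).

```python
def greedy_insert_with_counters(adj, mis, counters, candidates):
    """Insert candidates greedily: a candidate joins the MIS only if its
    counter is still zero, and joining increments the counters of its
    neighbors, so later candidates adjacent to it are skipped automatically."""
    inserted = []
    for w in sorted(candidates):
        if counters[w] == 0:          # still no neighbor in the MIS
            mis.add(w)
            inserted.append(w)
            for x in adj[w]:
                counters[x] += 1      # mirror the MISneighbors[] update
        # otherwise w gained an MIS-neighbor, possibly one inserted just above
    return inserted
```

On a path 1–2–3 with all three vertices as candidates, the endpoints join and the middle vertex is blocked by its updated counter.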
Case 1b: when the set of vertices with no counted neighbor in $\mathcal{M}$ is not very small. In this case, we cannot afford to compute the set of true candidates explicitly. Rather, we simply add the vertices with no counted neighbor to $\mathcal{M}$ directly, without considering whether they are adjacent to vertices already in $\mathcal{M}$ (although we do check that they are not adjacent to the previously inserted vertices from this set). As a result, it is possible that after this process, $\mathcal{M}$ is no longer an independent set of the graph. To fix this, we perform a post-processing step in which we delete some vertices from $\mathcal{M}$ to ensure that the remaining vertices indeed form an MIS of the original graph.
Concretely, we go over these vertices and insert each into $\mathcal{M}$ if none of its neighbors has been added to $\mathcal{M}$ in this step, and then invoke Lemma 4.3 to update the algorithm’s data structure. Since we are only adding low-degree vertices in this step, by Invariant 1 we can check in $O(1)$ time whether a vertex has a neighbor in $\mathcal{M}$ that was added in this step. This step clearly takes bounded time per vertex inserted into the MIS.
At this point, it is possible that some vertices in $\mathcal{M}$ are adjacent to the newly inserted vertices. By Invariant 1, these vertices can only belong to a bounded-size set, and hence there are few of them. We iterate over all of them, check whether they have a neighbor among the newly inserted vertices (by Invariant 1, we stored this information for these vertices), and mark all such vertices. Next, we remove all marked vertices from $\mathcal{M}$ simultaneously and update the algorithm’s state using Lemma 4.3. We are not done yet, though: after removing these vertices, it is possible that we need to bring some of their neighbors back into $\mathcal{M}$. We solve this problem recursively, using the same update algorithm and treating each marked vertex the same as $v$.
We argue that Invariant 3 is preserved. As the degree of the vertices being inserted is bounded, the number of vertices added to $\mathcal{M}$ in this part is large (by Fact 2.1 and the assumption on the size of the set in this case). On the other hand, the number of vertices removed from $\mathcal{M}$ is at most the size of the marked set. As a result, in this specific step, the number of vertices inserted into $\mathcal{M}$ is at least twice the number of vertices removed from it. For every vertex inserted into or deleted from $\mathcal{M}$, we spent bounded time. As the recursive step uses the same algorithm, we can argue inductively that for the vertices deleted in those recursive calls, at least twice as many vertices are added to $\mathcal{M}$, and that the total running time is proportional to the number of vertices added to or removed from $\mathcal{M}$ times the per-vertex cost. We point out that any recursive call that leads to another one necessarily increases the number of vertices in $\mathcal{M}$, and hence the algorithm does indeed terminate (see also Case 2).
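The add-then-repair strategy of Case 1b can be sketched as follows. This toy version is our simplification: it drops the degree thresholds and replaces the recursive calls by a single full-check completion pass around the evicted vertices, which suffices for correctness on a static snapshot.

```python
def blind_add_and_repair(adj, mis, candidates):
    """Add `candidates` while checking only against candidates added in this
    step, evict old MIS vertices that now conflict, then greedily re-complete
    around the evicted ones so the result is again an MIS."""
    newly = set()
    for w in sorted(candidates):
        if w not in mis and not any(x in newly for x in adj[w]):
            mis.add(w)
            newly.add(w)
    # Post-processing: old MIS vertices adjacent to a new vertex must leave.
    evicted = {v for v in mis - newly if any(x in newly for x in adj[v])}
    mis -= evicted
    for v in sorted(evicted):               # re-saturate around evictions
        for y in sorted(adj[v]):
            if y not in mis and not any(x in mis for x in adj[y]):
                mis.add(y)

def is_mis(adj, mis):
    """A set is an MIS iff each vertex is in it xor has a neighbor in it."""
    return all((v in mis) != any(w in mis for w in adj[v]) for v in adj)
```

On a 6-cycle with MIS {2, 4} after vertex 0 leaves, blindly adding 0's neighbors {1, 5} evicts 2 and 4, and the completion pass restores the MIS {1, 3, 5}.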
Case 2: when the list MIS2hopneighbors[$v$] is not small. We use a strategy similar to Case 1b here as well. We iterate over all vertices in the list, greedily add each vertex to $\mathcal{M}$ as long as it is not adjacent to any of the newly added vertices (which can be checked via Invariant 1), and update the data structure using Lemma 4.3. As the maximum degree of these vertices is bounded, by Fact 2.1 we add a large number of vertices to $\mathcal{M}$. By Invariant 2, the vertices of $\mathcal{M}$ that may now conflict with the newly added vertices can be identified, and their number is limited. We go over these vertices next and mark them. Then, we remove all of them from $\mathcal{M}$ simultaneously and update the algorithm using Lemma 4.3. As in Case 1b, we now also have to consider bringing some of the neighbors of these vertices into $\mathcal{M}$, which is handled recursively in exactly the same way as in Case 1b.
We first analyze the time complexity of this step. Iterating over the list MIS2hopneighbors[$v$] takes time proportional to its length, and since we insert many vertices from it into $\mathcal{M}$, we can charge this time to the time allowed for inserting these vertices into $\mathcal{M}$. Moreover, the number of vertices inserted into $\mathcal{M}$ is at least twice the number of vertices removed after handling the violating vertices. We can also argue inductively that this property holds for each recursive call, as in Case 1b. This finalizes the proof of this case.
5 Maximal Independent Set in Dynamic Distributed Networks
We consider the CONGEST model of distributed computation (cf. [41]), which captures the essence of both spatial locality and congestion. The network is modeled by an undirected graph whose vertex set corresponds to the processors, and whose edge set corresponds to both the edges of the current graph and the pairs of vertices that can directly communicate with each other. We assume a synchronous communication model: time is divided into rounds, and in each round, each vertex can send a message of $O(\log n)$ bits to each of its neighbors. The goal is to maintain an MIS $\mathcal{M}$ in a way that each vertex is able to output whether or not it belongs to $\mathcal{M}$.
We focus on dynamically changing networks where both edges and vertices can be inserted into or deleted from the network. For deletions, we consider graceful deletions, where the deleted vertex/edge may still be used for passing messages between its neighbors (endpoints) and is only deleted completely once the network is stable again. After each change, the vertices communicate with each other to adjust their outputs, i.e., to make the network stable again. We make the standard assumption that the changes occur in large enough time gaps, so the network is always stable before the next change occurs (see, e.g., [40, 17]). We further assume that each change in the network is indexed and that the vertices affected by a change know how many updates have happened before it. (The latter assumption is only needed by our algorithm in Theorem 2 to maintain an approximation of the number of edges in the graph, which is a global quantity and cannot be maintained by each vertex locally.)
There are three complexity measures for algorithms in this model. The first is the so-called adjustment complexity, which measures the number of vertices that change their output as a result of a recent topology change. The second is the round complexity, the number of rounds required for the network to become stable again after each update. The third is the message complexity, measuring the total number of $O(\log n)$-bit messages communicated by the algorithm.
Our main result in this section is a distributed implementation of our algorithm from Theorem 1 for maintaining an MIS in a dynamically changing network.
Theorem 2.
Starting from an empty distributed network on $n$ vertices, a maximal independent set can be maintained deterministically in a distributed fashion (under the CONGEST communication model) over any sequence of vertex/edge insertions and (graceful) deletions with $O(1)$ amortized adjustment complexity, $O(1)$ amortized round complexity, and $O(m^{3/4})$ amortized message complexity. Here, $m$ denotes the number of dynamic edges.
The algorithm in Lemma 3.1 can also be trivially implemented in this distributed setting, resulting in an extremely simple deterministic distributed algorithm for maintaining an MIS of a dynamically changing graph with $O(1)$ amortized adjustment and round complexity and $O(\Delta)$ amortized message complexity. As argued before, this simple algorithm already strengthens the previous randomized algorithm of Censor-Hillel et al. [17] by virtue of being deterministic and not requiring the assumption of a non-adaptive oblivious adversary. In the following, we compare our results in the distributed setting with those of [17].
Amortized vs In-Expectation Guarantees.
The guarantees on the complexity measures provided by our deterministic algorithms in this setting are amortized, while the randomized algorithm in [17] achieves its bounds in expectation, which may be considered somewhat stronger than our guarantee. To achieve this guarantee, however, the algorithm in [17], besides using randomization, also assumes a non-adaptive oblivious adversary. An adaptive adversary (the assumption supported by all our algorithms in this paper) can force the algorithm in [17] to adjust many vertices of the MIS in every round, which in turn blows up all the complexity measures in [17] by a corresponding factor. It is also worth mentioning that the guarantee achieved by [17] only holds in expectation, and not with high probability, for a fundamental reason: it was shown in [17] that there exist instances on which any algorithm needs a large number of adjustments with non-negligible probability (see Section 1.1 of their paper).

Broadcast vs Unicast.
The communication in the algorithm of [17] in each round consists of broadcast messages (in expectation) that require only $O(1)$ bits on every edge, i.e., each vertex communicates the same bits to every one of its neighbors. Our $O(\Delta)$ amortized message complexity algorithm (the distributed implementation of Lemma 3.1) also works with the same guarantee: indeed, every vertex simply needs to send $O(1)$ bits to all its neighbors in a broadcast manner, so that its neighbors know whether to add or subtract the contribution of this vertex to or from their counters. This is, however, not the case for our main algorithm in Theorem 2, which requires a processor to communicate differently with its neighbor over each edge (in general, one cannot hope to achieve low total communication with only broadcast messages). Additionally, this algorithm now needs to communicate $O(\log n)$ bits (as opposed to $O(1)$ bits in the previous two algorithms) over an edge. This is mainly because in this new algorithm we need to communicate with vertices at distance two from the current vertex, and hence messages must also carry the IDs of their original senders.
Graceful vs Abrupt Deletions.
A stronger notion of deletion in the dynamic setting is abrupt deletion, in which the neighbors of the deleted vertex/edge simply discover that it is being deleted, and the deleted vertex/edge cannot be used for communication right after the deletion happens. Censor-Hillel et al. [17] also extend their result to this more general setting and achieve the same guarantees, except for the message complexity of an abrupt deletion of a node, which now requires more broadcasts. We do not consider this model explicitly. However, it is straightforward to verify that our $O(\Delta)$ amortized message complexity algorithm (the distributed implementation of Lemma 3.1) works in this more general setting with virtually no change, and still achieves the same amortized number of broadcasts per abrupt deletion of a vertex. We believe that our main algorithm in Theorem 2 should also work in this more general setting with proper modifications, but we did not prove this formally.
Synchronous vs Asynchronous Communication.
We focused only on synchronous communication in this paper. Censor-Hillel et al. [17] also considered the asynchronous model of communication and showed that their algorithm works in this model as well, albeit with a weaker guarantee on its message complexity. Our algorithms can be modified to work in an asynchronous model as well: at each stage of the algorithm, we can identify a (different) local “coordinator” that can be used to synchronize the operations, with an added overhead that is within a constant multiplicative factor of the synchronous complexity (since per update, only vertices within the two-hop neighborhood of a vertex need to communicate with each other in our algorithm); we omit the details but refer the reader to Section 5.2.2 for more information on the use of a local coordinator in our algorithms.
We now turn to proving Theorem 2, using the following lemma in the same way we used Lemma 4.1 in the proof of Theorem 1.
Lemma 5.1.
Starting with an arbitrary graph on $n$ vertices and $m$ edges, a maximal independent set can be maintained deterministically in a distributed fashion (under the CONGEST communication model) over any sequence of vertex/edge insertions and (graceful) deletions, as long as the number of edges in the graph remains within a constant factor of $m$. The algorithm:
(i) has $O(1)$ amortized adjustment complexity,
(ii) has $O(1)$ amortized round complexity, and
(iii) has $O(m^{3/4})$ amortized message complexity.
The algorithm in Lemma 5.1 is a simple implementation of our sequential dynamic algorithm in Lemma 4.1. In the following, we first adapt the data structures introduced in Section 4.1 to the distributed setting. We then show that, with proper adjustments, the (sequential) update algorithm in Section 4.2 can also be used in the CONGEST model, and prove Theorem 2.
5.1 The Data Structure
We store exactly the same information as in Section 4.1 for each vertex, and maintain Invariants 1 and 2. We first prove that the two procedures UpdateNeighbors and UpdateTwoHopNeighbors can both be implemented in a constant number of rounds with a bounded number of messages in total. In particular:
Lemma 5.2.
For any vertex $v$:
(a) the operation UpdateNeighbors($v$) requires one round and a number of messages bounded by the number of neighbors of $v$, and
(b) the operation UpdateTwoHopNeighbors($v$) requires two rounds and a bounded total number of messages.
Proof.
Part (a). In either case, $v$ only needs to send a message to its neighbors informing them of the status of $v$ (whether it is inserted into or deleted from $\mathcal{M}$), which requires only one round (as they are all neighbors of $v$), and the number of messages is bounded by the number of neighbors of $v$.
Part (b). If no relaying is needed, there is nothing to do. Otherwise, $v$ needs to send a message to its neighbors that belong to $\mathcal{M}$ and ask them to relay this information to their neighbors. These vertices can then spend another round to inform all their neighbors about the status of $v$. This takes two rounds in total, and the number of messages is bounded by the total size of the neighborhoods involved.
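The two-round relay in Part (b) can be simulated directly; the function below is an illustrative sketch (the name and the message accounting are ours) that reports the informed vertices together with the round and message counts.

```python
def relay_two_hop(adj, v, mis):
    """Round-based sketch of UpdateTwoHopNeighbors(v): v tells its MIS
    neighbors about its status change (round 1); they relay it to all their
    own neighbors (round 2).  Returns (informed vertices, rounds, messages)."""
    rounds, messages = 0, 0
    relays = [w for w in adj[v] if w in mis]
    if not relays:
        return set(), rounds, messages
    rounds += 1
    messages += len(relays)          # round 1: v -> each MIS neighbor
    informed = set()
    rounds += 1
    for w in relays:                 # round 2: each relay -> all its neighbors
        messages += len(adj[w])
        informed |= set(adj[w])
    return informed, rounds, messages
```

Note that the relayed set includes $v$ itself (its relays are its neighbors), which is harmless for the accounting.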
Lemma 5.2 ensures that Invariants 1 and 2 (the only MIS-related information stored for a vertex besides MISflag[], which can be trivially updated) are preserved after any change to $\mathcal{M}$ within a constant number of rounds and a bounded number of messages.
In the following, we briefly describe how to update the information stored at the vertices after each topology change in the graph.
Vertex Updates.
Let $v$ be the updated vertex. In case of a vertex insertion, we simply initialize the data structures at $v$, and we are almost done, as the neighbors of $v$ are already informed that $v$ has been inserted into the graph and hence can update their information locally. We only need to send degree[$v$] to all the neighbors (the time needed for this can be charged to the initialization cost of the algorithm). Now suppose $v$ is being deleted. We can update neighbors[$w$] and neighborsdegree[$w$] for any neighbor $w$ of $v$ without any communication, as the neighbors are informed that $v$ is deleted. We can also run UpdateNeighbors($v$) (virtually) with no communication, as this procedure only informs the neighbors of $v$ that this vertex is being deleted from $\mathcal{M}$; knowing that $v$ has left the graph, any vertex in the neighborhood of $v$ can update MISneighbors accordingly. Finally, we can also run UpdateTwoHopNeighbors($v$) with only one round of communication and a bounded number of messages (see Lemma 5.2), by relaying the information from the neighbors of $v$ (which are informed that $v$ is leaving the graph) to their neighbors.
Edge Updates.
These updates are handled exactly as in our sequential algorithm. Let $(u, v)$ be the updated edge. Vertices $u$ and $v$ can update all information except MIS2hopneighbors (in particular, neighborsdegree can be updated by the procedure described in Section 4.1 with constant worst-case round complexity and bounded amortized message complexity). For the latter task, vertex $u$ (resp. $v$) can simulate UpdateTwoHopNeighbors($u$) (resp. UpdateTwoHopNeighbors($v$)) as described above, which takes a bounded number of messages and one round.
We hence showed that after each change in the topology, all the information stored at the vertices can be updated in a constant number of rounds and a bounded amortized number of messages.
5.2 The Distributed Algorithm
We design a distributed algorithm for updating $\mathcal{M}$ in the network in the spirit of our update algorithm in Section 4.2. The algorithm is a simple adaptation of our sequential algorithm to this dynamic model. For every update, we first perform the steps in the previous section to update the information at every vertex of the graph, and then make the network stable again by adjusting $\mathcal{M}$.
Throughout, we aim to maintain the following invariant, which is the direct analogue of Invariant 3 in the distributed setting.
Invariant 4.
Following every vertex/edge update, the set $\mathcal{M}$ maintained by the algorithm is an MIS of the input graph. Moreover,
(i) if only a single vertex leaves $\mathcal{M}$, then there is no restriction on the number of vertices joining $\mathcal{M}$ (which could be zero);
(ii) if at least two vertices leave $\mathcal{M}$, then at least twice as many vertices are added to $\mathcal{M}$.
In either case, the worst-case numbers of rounds and messages spent by the algorithm for any update are within, respectively, an $O(1)$ and an $O(m^{3/4})$ factor of the total number of vertices leaving and joining $\mathcal{M}$.
Using exactly the same argument as in the proof of Lemma 4.1, maintaining Invariant 4 ensures that the amortized adjustment complexity and amortized round complexity of the algorithm are $O(1)$ and that its amortized message complexity is $O(m^{3/4})$. Hence, to prove Lemma 5.1, it suffices to prove that Invariant 4 is preserved after every update. We consider different cases based on insertions and deletions of edges and vertices.
5.2.1 Edge Deletions
Suppose we delete the edge $(u, v)$. We only consider the case where $u$ belongs to $\mathcal{M}$ and $v$ does not; the remaining cases are either symmetric to this one or require no update to $\mathcal{M}$ (see Section 4.2.1). If, by Invariant 1, $v$ knows all its neighbors in $\mathcal{M}$, it can decide locally whether to join $\mathcal{M}$ or not; if it enters $\mathcal{M}$, it can update the network in $O(1)$ rounds and a bounded number of messages by Lemma 5.2. Otherwise, $v$ first sends a message to all its neighbors asking for their status, to which the neighbors reply whether or not they belong to $\mathcal{M}$. This takes only two rounds and communication proportional to the degree of $v$, after which $v$ can again decide whether to join $\mathcal{M}$ or not. Note that this part of the result holds even with abrupt deletions.
5.2.2 Edge Insertions
Suppose we insert the edge $(u, v)$. We only consider the case when both $u$ and $v$ belong to $\mathcal{M}$; the remaining cases need no update to $\mathcal{M}$ (see Section 4.2.2). Remember that in Section 4.2.2, we handled these updates in three separate cases. While the algorithm and analysis differ between the cases, the procedures needed to carry the information around the network are essentially the same among them, and hence, for simplicity, we only consider one of the main cases, namely Case 1b (see Section 4.2.2 for the definition of this case). The algorithm in the remaining cases can be adapted to this setting in exactly the same way.
Recall that in this case, the vertex $v$ is deleted from $\mathcal{M}$, and moreover $v$ knows the relevant candidate set entirely. The general approach is to make $v$ a “coordinator” for running the update algorithm of Section 4.2.2: $v$ communicates with its two-hop neighborhood and gathers the information necessary to run the sequential update algorithm.
Vertex $v$ first sends a message to all its neighbors asking for their status, to which they respond whether or not they belong to $\mathcal{M}$. This takes two rounds and a number of messages proportional to the degree of $v$. Next, $v$ informs one of its neighbors that it can join $\mathcal{M}$, and this new vertex updates its status and the information in the graph, which takes a constant number of rounds and a bounded number of messages by Lemma 5.2. After this, $v$ again sends a message to all its neighbors asking for their status, to which they respond whether they belong to $\mathcal{M}$ or whether one of their neighbors has been added to $\mathcal{M}$ in this step. Then, again, $v$ informs one of its neighbors (if such a neighbor exists) that it can join $\mathcal{M}$, and so on. This way, we only spend a constant number of rounds and bounded communication per vertex entering $\mathcal{M}$, in addition to the rounds and communication needed for contacting the neighbors of $v$ that do not eventually join $\mathcal{M}$.
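The coordinator loop above can be sketched as follows; the accounting is illustrative (one "phase" stands for the constant number of rounds of a single poll-and-admit exchange), and the function names are ours.

```python
def coordinator_round_sim(adj, mis, v):
    """Sketch of the coordinator protocol: v repeatedly polls its neighbors'
    status and admits one joinable neighbor into the MIS per phase, until no
    neighbor can join.  Returns the joined vertices and the phase count."""
    joined, phases = [], 0
    while True:
        phases += 1                   # one poll of all neighbors of v
        candidate = None
        for w in sorted(adj[v]):
            if w not in mis and not any(x in mis for x in adj[w]):
                candidate = w
                break
        if candidate is None:
            return joined, phases     # last poll found nobody joinable
        mis.add(candidate)            # v tells this neighbor to join
        joined.append(candidate)
```

The phase count is one more than the number of admitted vertices, mirroring the "constant rounds per vertex entering $\mathcal{M}$, plus one extra exchange" accounting above.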
After processing the list, we also need to delete from $\mathcal{M}$ the vertices that are now adjacent to the vertices that just joined $\mathcal{M}$. Note that such vertices are necessarily in the two-hop neighborhood of $v$, and hence $v$ can communicate with them in $O(1)$ rounds and use the above idea to implement the same update algorithm of Section 4.2.2 in this model. This allows us to preserve Invariant 4 by exactly the same analysis as in Section 4.2.2.
5.2.3 Vertex Deletions
This case is essentially equivalent to the edge-insertion case discussed above. Since deletions are graceful, we can treat the deleted vertex the same way as in Section 5.2.2, deleting it from $\mathcal{M}$ (if it belonged to it) and using it as the “coordinator” to implement the process described in Section 5.2.2.
5.2.4 Vertex Insertions
The only thing we need to do in this case is to check whether the newly inserted vertex should join $\mathcal{M}$. If this vertex can already infer the status of its neighbors from the information it received upon insertion, it can decide locally whether or not to join $\mathcal{M}$, and we are done. Otherwise, it sends a message to all its neighbors asking for their status in $\mathcal{M}$ and uses the replies to decide about joining $\mathcal{M}$. In either case, we only need $O(1)$ rounds and total communication proportional to the degree of the new vertex. After this, we update the neighbors using the first part of Lemma 5.2 in one round and a bounded number of messages.
To conclude, we showed that Invariant 4 is preserved after any edge or vertex insertion or deletion by the distributed algorithm, hence proving Lemma 5.1. We are now ready to prove Theorem 2.
Proof of Theorem 2.
The proof is identical to the proof of Theorem 1. The only difference is that in this distributed setting, we are not able to maintain the exact number of edges in the graph across all vertices. However, recall that we assumed that the vertices affected by an update know the index of this update, i.e., how many updates have happened before it. Hence, whenever the number of updates reaches the restart threshold used in the proof of Theorem 1, any vertex that knows this information sends a message to all its neighbors to terminate the process, which is then broadcast across the whole graph. This takes a number of rounds and messages proportional to the size of the network, which can be charged to the total number of updates in this phase. The vertices can then initialize their data structures using the new choice of $m$ and continue the distributed algorithm in Lemma 5.1.
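The restart logic can be sketched as follows; the threshold of half the current edge estimate is a hypothetical choice for illustration (the actual restart point is fixed by the sequential analysis), and the class name is ours.

```python
class EdgeEstimate:
    """Toy sketch of the epoch-based restart in the proof of Theorem 2: each
    vertex tracks the update index, and once the count since the last restart
    reaches a threshold (here m/2, an assumed value), a restart is signaled
    and the edge estimate is refreshed to the true edge count."""

    def __init__(self, m0):
        self.m = max(1, m0)          # edge estimate fixed at the last restart
        self.updates = 0             # updates since the last restart
        self.true_m = m0

    def apply_update(self, delta):
        """delta = +1 for an edge insertion, -1 for a deletion."""
        self.true_m += delta
        self.updates += 1
        if self.updates >= self.m / 2:
            self.m = max(1, self.true_m)   # restart: refresh the estimate
            self.updates = 0
            return True                    # signal the restart broadcast
        return False
```

Between restarts the true edge count stays within a constant factor of the estimate, which is exactly the precondition of Lemma 5.1.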
References
 [1] I. Abraham, S. Chechik, and S. Krinninger. Fully dynamic all-pairs shortest paths with worst-case update-time revisited. In Proceedings of the 28th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, Barcelona, Spain, January 16–19, 2017, pages 440–452, 2017.
 [2] N. Alon, L. Babai, and A. Itai. A fast and simple randomized parallel algorithm for the maximal independent set problem. J. Algorithms, 7(4):567–583, 1986.
 [3] L. Barba, J. Cardinal, M. Korman, S. Langerman, A. van Renssen, M. Roeloffzen, and S. Verdonschot. Dynamic graph coloring. In Proceedings of the 15th International Symposium on Algorithms and Data Structures, WADS 2017, St. John’s, NL, Canada, July 31 – August 2, 2017, pages 97–108, 2017.
 [4] L. Barenboim, M. Elkin, and F. Kuhn. Distributed $(\Delta+1)$-coloring in linear (in $\Delta$) time. SIAM J. Comput., 43(1):72–95, 2014.
 [5] L. Barenboim, M. Elkin, S. Pettie, and J. Schneider. The locality of distributed symmetry breaking. J. ACM, 63(3):20:1–20:45, 2016.
 [6] L. Barenboim and T. Maimon. Fully-dynamic graph algorithms with sublinear time inspired by distributed computing. In Proceedings of the International Conference on Computational Science, ICCS 2017, Zurich, Switzerland, June 12–14, 2017, pages 89–98, 2017.
 [7] S. Baswana, M. Gupta, and S. Sen. Fully dynamic maximal matching in $O(\log n)$ update time. In Proceedings of the 52nd IEEE Annual Symposium on Foundations of Computer Science, FOCS 2011, Palm Springs, CA, USA, October 23–25, 2011, pages 383–392, 2011 (see also the SICOMP’15 version and subsequent erratum).

 [8] A. Bernstein and S. Chechik. Deterministic decremental single source shortest paths: beyond the $O(mn)$ bound. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18–21, 2016, pages 389–397, 2016.
 [9] A. Bernstein and L. Roditty. Improved dynamic algorithms for maintaining approximate shortest paths under deletions. In Proceedings of the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011, San Francisco, CA, USA, January 23–25, 2011, pages 1355–1365, 2011.
 [10] A. Bernstein and C. Stein. Fully dynamic matching in bipartite graphs. In Proc. 42nd ICALP, pages 167–179, 2015.
 [11] A. Bernstein and C. Stein. Faster fully dynamic matchings with small approximation ratios. In Proceedings of the 27th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2016, Arlington, VA, USA, January 10–12, 2016, pages 692–711, 2016.

 [12] S. Bhattacharya, D. Chakrabarty, and M. Henzinger. Deterministic fully dynamic approximate vertex cover and fractional matching in O(1) amortized update time. In Proceedings of the 19th International Conference on Integer Programming and Combinatorial Optimization, IPCO 2017, Waterloo, ON, Canada, June 26–28, 2017, pages 86–98, 2017.
 [13] S. Bhattacharya, D. Chakrabarty, M. Henzinger, and D. Nanongkai. Dynamic algorithms for graph coloring. In Proceedings of the 29th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018, New Orleans, LA, USA, January 7–10, 2018, pages 1–20, 2018.
 [14] S. Bhattacharya, M. Henzinger, and G. F. Italiano. Deterministic fully dynamic data structures for vertex cover and matching. In Proceedings of the 26th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2015, San Diego, CA, USA, January 4–6, 2015, pages 785–804, 2015.
 [15] S. Bhattacharya, M. Henzinger, and D. Nanongkai. New deterministic approximation algorithms for fully dynamic matching. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18–21, 2016, pages 398–411, 2016.
 [16] S. Bhattacharya, M. Henzinger, and D. Nanongkai. Fully dynamic approximate maximum matching and minimum vertex cover in $O(\log^3 n)$ worst case update time. In Proceedings of the 28th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, Barcelona, Spain, January 16–19, 2017, pages 470–489, 2017.
 [17] K. Censor-Hillel, E. Haramaty, and Z. S. Karnin. Optimal dynamic distributed MIS. In Proceedings of the 2016 ACM Symposium on Principles of Distributed Computing, PODC 2016, Chicago, IL, USA, July 25–28, 2016, pages 217–226, 2016.
 [18] S. A. Cook. An overview of computational complexity. Commun. ACM, 26(6):400–408, 1983.
 [19] C. Demetrescu and G. F. Italiano. A new approach to dynamic all pairs shortest paths. J. ACM, 51(6):968–992, 2004.
 [20] D. Eppstein, Z. Galil, G. F. Italiano, and A. Nissenzweig. Sparsification – a technique for speeding up dynamic graph algorithms. J. ACM, 44(5):669–696, 1997.
 [21] S. Even and Y. Shiloach. An on-line edge-deletion problem. J. ACM, 28(1):1–4, 1981.
 [22] G. N. Frederickson. Data structures for online updating of minimum spanning trees, with applications. SIAM J. Comput., 14(4):781–798, 1985.
 [23] M. Ghaffari. An improved distributed algorithm for maximal independent set. In Proceedings of the 27th Annual ACMSIAM Symposium on Discrete Algorithms, SODA 2016, Arlington, VA, USA, January 1012, 2016, pages 270–277, 2016.
 [24] A. Gupta, R. Krishnaswamy, A. Kumar, and D. Panigrahi. Online and dynamic algorithms for set cover. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, Montreal, QC, Canada, June 1923, 2017, pages 537–550, 2017.
 [25] M. Gupta and R. Peng. Fully dynamic approximate matchings. In Proceedings of the 54th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2013, Berkeley, CA, USA, October 2629, 2013, pages 548–557, 2013.
 [26] M. Henzinger, S. Krinninger, and D. Nanongkai. Decremental single-source shortest paths on undirected graphs in near-linear total update time. In Proceedings of the 55th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2014, Philadelphia, PA, USA, October 18–21, 2014, pages 146–155, 2014.
 [27] M. Henzinger, S. Krinninger, and D. Nanongkai. Sublinear-time decremental algorithms for single-source reachability and shortest paths on directed graphs. In Proceedings of the 46th Annual ACM on Symposium on Theory of Computing, STOC 2014, New York, NY, USA, May 31 – June 3, 2014, pages 674–683, 2014.

 [28] M. Henzinger, S. Krinninger, D. Nanongkai, and T. Saranurak. Unifying and strengthening hardness for dynamic problems via the online matrix-vector multiplication conjecture. In Proceedings of the 47th Annual ACM on Symposium on Theory of Computing, STOC 2015, Portland, OR, USA, June 14–17, 2015, pages 21–30, 2015.
 [29] M. R. Henzinger and V. King. Maintaining minimum spanning trees in dynamic graphs. In Proceedings of the 24th International Colloquium on Automata, Languages, and Programming, ICALP 1997, Bologna, Italy, July 7–11, 1997, pages 594–604, 1997.
 [30] M. R. Henzinger and V. King. Randomized fully dynamic graph algorithms with polylogarithmic time per operation. J. ACM, 46(4):502–516, 1999.
 [31] Z. Ivković and E. L. Lloyd. Fully dynamic maintenance of vertex cover. In Proceedings of the 19th International Workshop on Graph-Theoretic Concepts in Computer Science, WG 1993, Utrecht, The Netherlands, June 16–18, 1993, pages 99–111, 1993.
 [32] J. Holm, K. de Lichtenberg, and M. Thorup. Poly-logarithmic deterministic fully-dynamic algorithms for connectivity, minimum spanning tree, 2-edge, and biconnectivity. J. ACM, 48(4):723–760, 2001.
 [33] R. M. Karp and A. Wigderson. A fast parallel algorithm for the maximal independent set problem. J. ACM, 32(4):762–773, 1985.
 [34] V. King. Fully dynamic algorithms for maintaining all-pairs shortest paths and transitive closure in digraphs. In Proceedings of the 40th Annual Symposium on Foundations of Computer Science, FOCS 1999, New York, NY, USA, October 17–18, 1999, pages 81–91, 1999.
 [35] N. Linial. Distributive graph algorithms: global solutions from local data. In Proceedings of the 28th IEEE Annual Symposium on Foundations of Computer Science, FOCS 1987, Los Angeles, CA, USA, October 27–29, 1987, pages 331–335, 1987.
 [36] M. Luby. A simple parallel algorithm for the maximal independent set problem. SIAM J. Comput., 15(4):1036–1053, 1986.
 [37] O. Neiman and S. Solomon. Simple deterministic algorithms for fully dynamic maximal matching. In Proceedings of the 45th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2013, Palo Alto, CA, USA, June 1–4, 2013, pages 745–754, 2013.
 [38] K. Onak and R. Rubinfeld. Maintaining a large matching and a small vertex cover. In Proceedings of the 42nd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2010, Cambridge, MA, USA, June 6–8, 2010, pages 457–464, 2010.
 [39] A. Panconesi and A. Srinivasan. On the complexity of distributed network decomposition. J. Algorithms, 20(2):356–374, 1996.
 [40] M. Parter, D. Peleg, and S. Solomon. Local-on-average distributed tasks. In Proceedings of the 27th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2016, Arlington, VA, USA, January 10–12, 2016, pages 220–239, 2016.
 [41] D. Peleg. Distributed Computing: A Locality-Sensitive Approach. SIAM, 2000.
 [42] L. Roditty and U. Zwick. On dynamic shortest paths problems. Algorithmica, 61(2):389–401, 2011.
 [43] S. Solomon. Fully dynamic maximal matching in constant update time. In Proceedings of the 57th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2016, New Brunswick, NJ, USA, October 9–11, 2016, pages 325–334, 2016.
 [44] M. Thorup. Worst-case update times for fully-dynamic all-pairs shortest paths. In Proceedings of the 37th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2005, Baltimore, MD, USA, May 21–24, 2005, pages 112–119, 2005.
 [45] C. Wulff-Nilsen. Fully-dynamic minimum spanning forest with improved worst-case update time. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, Montreal, QC, Canada, June 19–23, 2017, pages 1130–1143, 2017.
Appendix A An Ω(n) Lower Bound on Worst-Case Adjustment Complexity
By a straightforward modification of the lower bound example in [17], we can show that the adjustment complexity of any deterministic algorithm is Ω(n) in the worst case.
Consider the following example: Let G₁ be a complete bipartite graph between two sets of vertices V₁ and U₁, each of size n/4. We create an identical copy of G₁ named G₂ with bipartition (V₂, U₂) on the remaining vertices and let G be the union of these two graphs. Consider any deterministic algorithm A for maintaining an MIS on G. Without loss of generality, assume V₁ ∪ V₂ is the MIS chosen by A (in any MIS of G, either V₁ or U₁ is entirely in the MIS, and similarly for V₂ and U₂). Let v₁ and v₂ be two arbitrary vertices in V₁ and V₂, respectively. The adversary starts deleting all edges incident to all vertices in V₁ \ {v₁} and V₂ \ {v₂}. Finally, it adds an edge between v₁ and v₂. We argue that at some point during these updates, A adjusts Ω(n) vertices in the maintained MIS. There are two cases to consider. For simplicity of exposition, we assume that at each time step, all edges incident to a vertex are deleted at once.
Suppose at some point before inserting the last edge, A decides to add a vertex u ∈ U₁ to the MIS for the first time (the argument is symmetric for U₂ as well). Since u is incident to all vertices of V₁ whose edges have not yet been deleted, all these vertices need to leave the MIS. Also, since a vertex in U₁ has joined the MIS, we know that there cannot be any edge from vertices in the MIS to any vertex in U₁ (as we start with a complete bipartite graph and assumed that all edges incident to a vertex are deleted simultaneously). This means that after this step, all vertices in U₁ should join the MIS to ensure maximality. Therefore, at this step, Ω(n) vertices are inserted into the MIS at once, proving the claim in this case.
Now suppose that before inserting the last edge, no vertex in U₁ and U₂ belongs to the MIS, and hence both v₁ and v₂ should be inside it. By adding an edge between v₁ and v₂, A is forced to remove at least one of them, say v₁, from the MIS, which in turn forces all vertices in U₁ to join the MIS to keep the maximality. Hence, again, Ω(n) vertices are inserted into the MIS at once, finalizing the proof.
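The adversarial sequence above can be simulated concretely. The following sketch is purely illustrative and not from [17]: the set sizes follow the construction above, but the greedy repair rule and all function names (greedy_repair, run) are our own choices standing in for one concrete deterministic algorithm A. It builds the two complete bipartite graphs, performs the vertex-at-a-time edge deletions, and then inserts the edge (v₁, v₂); the last update forces n/4 + 1 vertices of the MIS to change at once.

```python
def greedy_repair(adj, mis):
    """Repair `mis` into a valid MIS of the graph `adj` (dict: vertex -> neighbor set)."""
    # Drop MIS vertices that now conflict with another MIS vertex...
    for v in sorted(mis.copy()):
        if v in mis and any(u in mis for u in adj[v]):
            mis.discard(v)
    # ...then add every vertex with no neighbor in the MIS (maximality).
    for v in sorted(adj):
        if v not in mis and not any(u in mis for u in adj[v]):
            mis.add(v)
    return mis

def run(n):
    """Play the adversarial update sequence on n vertices; return the
    largest number of MIS adjustments caused by a single update."""
    k = n // 4
    V1 = list(range(0, k)); U1 = list(range(k, 2 * k))
    V2 = list(range(2 * k, 3 * k)); U2 = list(range(3 * k, 4 * k))
    adj = {v: set() for v in range(4 * k)}
    for a, bs in ((a, U1) for a in V1):
        for b in bs:
            adj[a].add(b); adj[b].add(a)
    for a in V2:
        for b in U2:
            adj[a].add(b); adj[b].add(a)
    mis = greedy_repair(adj, set(V1 + V2))  # WLOG the initial MIS is V1 ∪ V2
    v1, v2 = V1[0], V2[0]
    max_adjust = 0
    # Delete all edges incident to each vertex of V1 \ {v1} and V2 \ {v2} at once.
    for v in V1[1:] + V2[1:]:
        for u in list(adj[v]):
            adj[v].discard(u); adj[u].discard(v)
        old = set(mis)
        mis = greedy_repair(adj, mis)
        max_adjust = max(max_adjust, len(old ^ mis))
    # Finally, insert the edge (v1, v2): v1 leaves the MIS and all of U1 joins.
    adj[v1].add(v2); adj[v2].add(v1)
    old = set(mis)
    mis = greedy_repair(adj, mis)
    return max(max_adjust, len(old ^ mis))
```

For this particular repair rule, every deletion is absorbed with zero adjustments (isolated vertices of V₁ and V₂ simply stay in the MIS), and the final insertion removes v₁ and inserts all n/4 vertices of U₁, matching case two of the proof.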
Remark A.1.
We remark that this simple example explains why we obtain our results in amortized bounds rather than worst-case bounds.