Dynamic Algorithms for Graph Coloring

11/12/2017 · by Sayan Bhattacharya, et al.

We design fast dynamic algorithms for proper vertex and edge colorings in a graph undergoing edge insertions and deletions. In the static setting, there are simple linear time algorithms for (Δ+1)-vertex coloring and (2Δ-1)-edge coloring in a graph with maximum degree Δ. It is natural to ask whether we can efficiently maintain such colorings in the dynamic setting as well. We get the following three results. (1) We present a randomized algorithm which maintains a (Δ+1)-vertex coloring with O(log Δ) expected amortized update time. (2) We present a deterministic algorithm which maintains a (1+o(1))Δ-vertex coloring with O(polylog Δ) amortized update time. (3) We present a simple, deterministic algorithm which maintains a (2Δ-1)-edge coloring with O(log Δ) worst-case update time. This improves the recent O(Δ)-edge coloring algorithm with Õ(√Δ) worst-case update time by Barenboim and Maimon.


1 Introduction

Graph coloring is a fundamental problem with many applications in computer science. A proper c-vertex coloring of a graph G = (V, E) assigns a color in {1, …, c} to every node, in such a way that the endpoints of every edge get different colors. The chromatic number of the graph is the smallest c for which a proper c-vertex coloring exists. Unfortunately, from a computational perspective, approximating the chromatic number is rather futile: for any constant ε > 0, there is no polynomial time algorithm that approximates the chromatic number within a factor of n^(1-ε) in an n-vertex graph, assuming P ≠ NP [FeigeK98, Zuckerman07] (see [KhotP06] for a stronger bound). On the positive side, we know that the chromatic number is at most Δ + 1, where Δ is the maximum degree of the graph. There is a simple linear time algorithm to find a (Δ+1)-coloring: pick any uncolored vertex v, scan the colors used by its neighbors, and assign to v a color not assigned to any of its neighbors. Since the number of neighbors is at most Δ, by the pigeonhole principle such a color must exist.
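
For concreteness, here is a minimal Python sketch of this static greedy procedure; the adjacency-list representation and the function name are illustrative choices, not from the paper.

    def greedy_coloring(adj):
        """Static greedy (Delta+1)-coloring.

        adj: dict mapping each vertex to an iterable of its neighbors.
        Returns a dict mapping each vertex to a color in {0, ..., Delta},
        where Delta is the maximum degree of the graph.
        """
        color = {}
        for v in adj:
            used = {color[u] for u in adj[v] if u in color}
            # v has at most Delta neighbors, so some color in
            # {0, ..., Delta} is free by the pigeonhole principle.
            c = 0
            while c in used:
                c += 1
            color[v] = c
        return color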

In this paper, we consider the graph coloring problem in the dynamic setting, where the edges of a graph are being inserted or deleted over time and we want to maintain a proper coloring after every update. The objective is to use as few colors as possible while keeping the update time small. (There are two notions of update time: an algorithm has an amortized update time of f(n) if, for any t, the total time spent on the first t insertions or deletions is at most t · f(n); it has a worst-case update time of f(n) if every single update takes at most f(n) time. As is typical for amortized update time guarantees, we assume that the input graph is initially empty.) Specifically, our main goal is to investigate whether a (Δ+1)-vertex coloring can be maintained with small update time. Note that the greedy algorithm described in the previous paragraph can easily be modified to give a worst-case update time of O(Δ): if an edge is inserted between two nodes u and v of the same color, then scan the at most Δ neighbors of v to find a free color. A natural question is whether we can get an algorithm with significantly lower update time. We answer this question in the affirmative.
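
The O(Δ) worst-case repair step can be sketched as follows; again, the data layout and names are illustrative, not the paper's.

    def insert_edge(adj, color, u, v, palette_size):
        """Insert edge (u, v) and repair the coloring greedily.

        If u and v end up with the same color, rescan v's neighborhood
        (at most Delta vertices) and move v to a free color. With a
        palette of Delta + 1 colors such a color always exists, so this
        is an O(Delta) worst-case repair.
        """
        adj[u].add(v)
        adj[v].add(u)
        if color[u] == color[v]:
            used = {color[w] for w in adj[v]}
            color[v] = next(c for c in range(palette_size) if c not in used)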

  • We design and analyse a randomized algorithm which maintains a (Δ+1)-vertex coloring with O(log Δ) expected amortized update time. (As typically done for randomized dynamic algorithms, we assume that the adversary who fixes the sequence of edge insertions and deletions is oblivious to the randomness in our algorithm.)

It is not difficult to see that if we had 2Δ colors, then there would be a simple randomized algorithm with O(1) expected amortized update time (see Section 2.1 for details). What is challenging in our result above is to maintain a (Δ+1)-coloring with small update time. In contrast, if randomization is not allowed, then even maintaining an O(Δ)-coloring with O(polylog Δ) update time seems non-trivial. Our second result is on deterministic vertex coloring algorithms: although we do not achieve a (Δ+1)-coloring, we come close.

  • We design and analyse a deterministic algorithm which maintains a (1+o(1))Δ-vertex coloring with O(polylog Δ) amortized update time.

Note that in a dynamic graph the maximum degree Δ can change over time. Our results hold with the changing Δ as well. However, for ease of explaining our main ideas, we restrict most of the paper to the setting where Δ is a static upper bound known to the algorithm. In Section 6 we point out the changes needed to make our algorithms work for the changing-Δ case.

Our final result is on maintaining an edge coloring in a dynamic graph with maximum degree Δ. A proper edge coloring is a coloring of the edges such that no two adjacent edges have the same color.

  • We design and analyze a simple, deterministic (2Δ−1)-edge coloring algorithm with O(log Δ) worst-case update time.

This significantly improves upon the recent O(Δ)-edge coloring algorithm of Barenboim and Maimon [BarenboimM17], which needs Õ(√Δ) worst-case update time.

Perspective:

An important aspect of (Δ+1)-vertex coloring is the following local-fixability property. Consider a graph problem P where we need to assign a state (e.g., a color) to each node. We say that a constraint is local to a node v if it is defined on the states of v and its neighbors. We say that a problem P is locally-fixable iff it has the following three properties. (i) There is a local constraint on every node. (ii) A solution to P is feasible iff it satisfies the local constraint at every node. (iii) If the local constraint at a node v is unsatisfied, then we can change only the state of v to satisfy it, without creating any new unsatisfied constraints at other nodes. For example, (Δ+1)-vertex coloring is locally-fixable: we can define the constraint local to v to be satisfied if and only if v's color is different from the colors of all its neighbors, and if it is violated, then we can always recolor v alone to satisfy its local constraint without introducing any other constraint violations. On the other hand, the following problems do not seem to be locally-fixable: globally optimum coloring, the best approximation algorithm for coloring [Halldorsson93], Δ-coloring (which always exists by Brooks' theorem, unless the graph is a clique or an odd cycle), and (Δ+1)-edge coloring (which always exists by Vizing's theorem).

Observe that if we start with a feasible solution for a locally-fixable problem P, then after inserting or deleting an edge (u, v) we need to change only the states of u and v to obtain a new feasible solution. For instance, in the case of (Δ+1)-vertex coloring, we need to recolor only the nodes incident to the inserted edge. Thus, the number of changes is guaranteed to be small, and the main challenge is to search for these changes efficiently without having to scan the whole neighborhood. In contrast, for problems that are not locally-fixable, the main challenge seems to be analyzing how many nodes or edges we need to recolor (even with an inefficient algorithm) to keep the coloring proper. A question in this spirit has been recently studied in [BarbaCKLRRV-WADS17].

It can be shown that the (2Δ−1)-edge coloring problem is also locally-fixable (see Appendix A). Given our results on (Δ+1)-vertex coloring and (2Δ−1)-edge coloring, it is natural to ask whether there are deeper connections in designing dynamic algorithms for these problems. In particular, are there reductions possible among these problems? Or can we find a complete locally-fixable problem? It is also very interesting to understand the power of randomization for these problems.

Indeed, in the distributed computing literature, there is deep and extensive work on and beyond the locally-fixable problems above. (In fact, it can be shown that any locally-fixable problem is in the SLOCAL complexity class studied in distributed computing [GhaffariKM17]; see Appendix A.) Coincidentally, just like our findings in this paper, there is still a big gap between deterministic and randomized distributed algorithms for (Δ+1)-vertex coloring. For further details we refer the reader to the excellent monograph by Barenboim and Elkin [BarenboimE13] (see [GhaffariKM17, FischerGK17], and references therein, for more recent results).

Finally, we also note that the dynamic problems we have focused on are search problems; i.e., their solutions always exist, and the hard part is to find and maintain them. This poses a new challenge when it comes to proving conditional lower bounds for dynamic algorithms for these locally-fixable problems: while a large body of work has been devoted to decision problems [Patrascu10, AbboudW14, HenzingerKNS15, KopelowitzPP16, Dahlgaard16], it seems non-trivial to adapt existing techniques to search problems.

Other Related Work.

Dynamic graph coloring is a natural problem and there have been many works on it [DGOP07, OuerfelliB2011, SallinenIPGRP16, HardyLT17]. Most of these papers, however, have proposed heuristics and described experimental results. The only two theoretical papers that we are aware of are [BarenboimM17] and [BarbaCKLRRV-WADS17], and they are already mentioned above.

Organisation of the rest of the paper.

In Section 2 we give the high-level ideas behind our vertex coloring results. In particular, Section 2.1 contains the main ideas of the randomized algorithm, whereas Section 3 contains the full details. Similarly, Section 2.2 contains the main ideas of the deterministic algorithm, whereas Section 4 contains the full details. Section 5 contains the edge-coloring result. We emphasize that Sections 3, 4 and 5 are completely self-contained, and they can be read independently of each other. As mentioned earlier, in Sections 3, 4 and 5 we assume that the parameter Δ is known and that the maximum degree never exceeds Δ. We do so solely for a better exposition of the main ideas. Our algorithms are easily modified to give results where Δ is the current maximum degree. See Section 6 for the details.

2 Our Techniques for Dynamic Vertex Coloring

2.1 An overview of our randomized algorithm.

We present a high-level overview of our randomized dynamic algorithm for (Δ+1)-vertex coloring that has O(log Δ) expected amortized update time. The full details can be found in Section 3. We start with a couple of warm-ups before sketching the main idea.

Warmup I: Maintaining a 2Δ-coloring in O(1) expected amortized update time. We first observe that maintaining a 2Δ-coloring is easy using randomization against an oblivious adversary – we need only O(1) expected amortized time per update. The algorithm is this. Let C be the palette of 2Δ colors. Each vertex v stores the last time τ_v at which it was recolored. If an edge gets deleted, or if an edge gets inserted between two vertices of different colors, then we do nothing. Next, consider the scenario where an edge (u, v) gets inserted at time t between two vertices u and v of the same color. Without any loss of generality, suppose that τ_u < τ_v, i.e., the vertex v was recolored last. In this event, we scan all the neighbors of v, store the colors used by them in a set C', and select a random color from C ∖ C'. Since |C'| ≤ Δ, we have |C ∖ C'| ≥ Δ. Clearly this leads to a proper coloring, since the new color of v, by design, differs from the current colors of all of v's neighbors.

The time taken to compute the set C ∖ C' can be as high as O(Δ), since v can have up to Δ neighbors. Now, let us analyze the probability that the insertion of the edge (u, v) at time t leads to a conflict. Suppose that at time τ_v, just before v recolored itself, the color of u was c. The insertion at time t creates a conflict only if v chose the color c at time τ_v. The probability of this event is at most 1/Δ, since v had at least Δ colors to choose from at time τ_v. Therefore, the expected time spent on the insertion of the edge (u, v) is at most O(Δ) · (1/Δ) = O(1).

In the analysis described above, we have crucially used the fact that the insertion of the edge (u, v) at time t is oblivious to the random choice made while recoloring the vertex v at time τ_v. It should also be clear that the factor 2 in the palette size is not sacrosanct: a (1+ε)Δ-coloring can be maintained in O(1/ε) expected amortized time. However, this approach fails to give a (Δ+1)-coloring, or even a (Δ+c)-coloring for any constant c, in o(Δ) update time.
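
A compact Python sketch of this warmup, under the stated oblivious-adversary assumption; the data layout and names are illustrative.

    import random

    def insert_edge_2delta(adj, color, last_recolored, u, v, palette, time):
        """Warmup I: maintain a proper coloring from a palette of 2*Delta colors.

        On a conflicting insertion, recolor the endpoint that was recolored
        most recently, choosing uniformly among colors unused by its neighbors.
        """
        adj[u].add(v)
        adj[v].add(u)
        if color[u] != color[v]:
            return
        # Recolor the endpoint whose current color is "fresher".
        w = v if last_recolored[v] > last_recolored[u] else u
        used = {color[x] for x in adj[w]}             # at most Delta colors
        free = [c for c in palette if c not in used]  # at least Delta colors
        color[w] = random.choice(free)
        last_recolored[w] = time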

Warmup II: A simple algorithm for (Δ+1)-coloring that is difficult to analyze. In the previous algorithm, while recoloring a vertex we made sure that it never assumed the color of any of its neighbors. We say that a color c is blank for a vertex v iff no neighbor of v has the color c. Since we have Δ+1 colors, every vertex has at least one blank color. However, if there is only one blank color to choose from, then an adversarial sequence of updates may force the algorithm to spend Ω(Δ) time after every edge insertion. A polar-opposite idea would be to recolor a vertex with a random color without considering the colors of its neighbors. This has the problem that the recoloring may lead to one or more neighbors of the vertex v becoming unhappy (i.e., having the same color as v), and then there is a cascading effect which is hard to control.

We take the middle ground. Define a color to be unique for a vertex v if it is assigned to exactly one neighbor of v. Thus, if v is recolored using a unique color, then the cascading effect of unhappy vertices does not explode: after recoloring v, we only need to consider recoloring v's single unhappy neighbor, and so on and so forth. Why is this idea useful? Because although the number of blank colors available to a vertex (i.e., the colors which none of its neighbors are using) can be as small as 1, the number of blank plus unique colors is always more than Δ/2. This holds since any color which is neither blank nor unique accounts for at least two neighbors of v, whereas v has at most Δ neighbors; hence at most Δ/2 of the Δ+1 colors can be neither blank nor unique, leaving at least Δ/2 + 1 colors that are blank or unique for v.

The above observation suggests the following natural algorithm. When we need to recolor a vertex v, we first scan all its neighbors to identify the set S of all blank and unique colors for v, and then we pick a new color for v uniformly at random from this set S. By the definition of S, at most one neighbor of v can share the new color of v. If such a neighbor exists, then we recolor it recursively using the same algorithm. We now state three important properties of this scheme. (1) While recoloring a vertex, we have to make at most one recursive call. (2) It takes O(Δ) time to recolor a vertex, ignoring the potential recursive call to its neighbor. (3) When we recolor a vertex, we pick its new color uniformly at random from a set of size at least Δ/2. Note that properties (2) and (3) served as the main tools in establishing the bound on the expected amortized update time of the previous algorithm. As for property (1), if we manage to upper bound the length of the chain of recursive calls that might result from the insertion of an edge between two vertices of the same color, then we will get an upper bound on the overall update time of our algorithm. This, however, is not trivial. In fact, the reader will observe that it is not necessary to have Δ+1 colors in order to ensure the above three properties: they hold even with Δ colors. Indeed, in that case the algorithm described above might never terminate. We conclude that another idea is required to achieve o(Δ) update time. This turns out to be the concept of a hierarchical partition of the set of vertices of a graph. We describe this and present an overview of our final algorithm below.
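
The recoloring step of this warmup can be sketched as follows; the representation is again an illustrative choice, and termination of the recursion is exactly the issue discussed above.

    import random
    from collections import Counter

    def recolor_blank_or_unique(adj, color, v, palette):
        """Warmup II: recolor v with a random blank or unique color.

        A color is blank for v if no neighbor of v uses it, and unique if
        exactly one neighbor uses it.  At least half the palette is blank
        or unique, and choosing such a color leaves at most one unhappy
        neighbor, which is then recolored recursively.
        """
        counts = Counter(color[u] for u in adj[v])
        candidates = [c for c in palette if counts[c] <= 1]
        new_color = random.choice(candidates)
        color[v] = new_color
        # At most one neighbor can now share v's color.
        conflicting = [u for u in adj[v] if color[u] == new_color]
        if conflicting:
            recolor_blank_or_unique(adj, color, conflicting[0], palette)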

An overview of the final algorithm. Fix a large constant β > 1, and suppose that we can partition the vertex-set of the input graph into O(log_β Δ) levels with the following property.

Property 2.1.

Consider any vertex at a level . Then the vertex has at most neighbors in levels , and at least neighbors in levels .

It is not clear at first glance that there even exists such a partition of the vertex-set: given a static graph, there seems to be no obvious way to assign a level to each vertex satisfying Property 2.1. One of our main technical contributions is an algorithm that maintains a hierarchical partition satisfying Property 2.1 in a dynamic graph. Initially, when the input graph has an empty edge-set, we place every vertex at the lowest level. This trivially satisfies Property 2.1. Subsequently, after every insertion or deletion of an edge, our algorithm updates the hierarchical partition in a way which ensures that Property 2.1 continues to remain satisfied. This algorithm is deterministic, and using an intricate charging argument we show that it has an amortized update time of O(log Δ). This also gives a constructive proof of the existence of a hierarchical partition satisfying Property 2.1 in any given graph.

We now explain how this hierarchical partition, in conjunction with the ideas from Warmup II, leads to an efficient randomized vertex coloring algorithm. In this algorithm, we require that a vertex v keeps all its neighbors at levels lower than or equal to its own informed about its own color. This requirement allows a vertex v to maintain: (1) the set of colors assigned to its neighbors at levels greater than or equal to its own, and (2) the set of remaining colors. We say that a color is blank for v iff no neighbor of v at a strictly lower level has that color. On the other hand, we say that a color is unique for v iff exactly one neighbor of v at a strictly lower level has that color. Note the crucial change in the definition of a unique color from Warmup II: now, for a color to be unique for v, it is not enough that v has exactly one neighbor with that color; in addition, this neighbor has to lie at a level strictly below the level of v. Using the property of the hierarchical partition, which bounds the number of neighbors that v has at levels up to its own, and an argument similar to the one used in Warmup II, we can show that there is a large number of colors that are either blank or unique for v.

Claim 2.1.

For every vertex , there are at least colors that are either blank or unique.

We now implement the same template as in Warmup II. When a vertex v needs to be recolored, it picks its new color uniformly at random from the set of its blank and unique colors. This can cause some other vertex to become unhappy, but such a vertex lies at a level strictly lower than that of v. As there are only logarithmically many levels, this bounds the depth of any chain of recursive recolorings; at the lowest level, we just use a blank color. Further, whenever we recolor v, the time it needs to inform all its neighbors at lower levels is bounded by the number of such neighbors, which the hierarchical partition keeps under control. Since each recursive call is made on a vertex at a strictly lower level, the total time spent on all the recursive calls can also be bounded, due to a geometric sum (see the display below). Finally, by Claim 2.1, each time v picks a random color it does so from a large palette. If the order of edge insertions and deletions is oblivious to this randomness, then the probability that an edge insertion turns out to be problematic is small, which yields the expected amortized update time bound stated in Theorem 3.1.
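
For intuition, if the work at a level-j vertex is proportional to roughly β^j (its bounded lower neighborhood), then a chain of recursive calls that descends strictly in level from level i costs at most a geometric sum. This is a hedged illustration of the "geometric sum" remark above, not the paper's exact accounting:

\[
\sum_{j=0}^{i} O(\beta^{j}) \;=\; O\!\left(\frac{\beta^{i+1}-1}{\beta-1}\right) \;=\; O(\beta^{i}),
\qquad \text{since } \beta > 1 \text{ is a constant.}
\]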

2.2 An overview of our deterministic algorithm.

We present a high-level overview of our deterministic dynamic algorithm for (1+o(1))Δ-vertex coloring that has an amortized update time of O(polylog Δ). The full details are in Section 4. As in Section 2.1, we start with a warmup before sketching the main idea.

Warmup: Maintaining a 2Δ-coloring in O(√Δ) amortized update time. Let C be the palette of 2Δ colors. We partition the set C into √Δ equally sized subsets C_1, …, C_{√Δ}, each having 2√Δ colors. Colors in C_i are said to be of type i, and we let t(v) denote the type of the color assigned to a node v. Furthermore, we let D_i(v) denote the number of neighbors of v that are assigned a type-i color; we refer to these neighbors as the type-i neighbors of v. For every node v, we let N_v denote the set of neighbors u of v with t(u) = t(v). Every node maintains the set N_v in a doubly linked list. Note that if the node v gets a color from C_i, then we have |N_v| = D_i(v). We maintain a proper coloring with the following extra property: if a node is of type i, then it has fewer than 2√Δ type-i neighbors.

Property 2.2.

If any node v is assigned a color from C_i, then we have D_i(v) < 2√Δ.

Initially, the input graph is empty, every vertex is colored arbitrarily, and the above property holds. Note that the deletion of an edge does not lead to a violation of the above property, nor does it make the existing coloring invalid. We now discuss what we do when an edge (u, v) gets inserted, by considering three possible cases.

Case 1: t(u) ≠ t(v). There is nothing to be done, since u and v have colors of different types.

Case 2: t(u) = t(v), but both D_{t(u)}(u) < 2√Δ and D_{t(v)}(v) < 2√Δ after the insertion of the edge (u, v). The colors assigned to the vertices u and v are of the same type. In this event, we first insert v into N_u and u into N_v. There is nothing further to do if u and v do not have the same color, since the property continues to hold. If they have the same color, then we pick an arbitrary endpoint, say v, and find a color of type t(v) that is not assigned to any of the neighbors of v in the set N_v. This is possible since |N_v| < 2√Δ and there are 2√Δ colors of each type. We then change the color of v to this new color. These operations take O(√Δ) time.

Case 3: t(u) = t(v), and D_{t(u)}(u) ≥ 2√Δ or D_{t(v)}(v) ≥ 2√Δ after the insertion of the edge (u, v). Here, after the addition of the edge, one endpoint, say v, violates Property 2.2. We run the following subroutine RECOLOR(v); a sketch in code appears after the description of its steps below:

  • Since v has at most Δ neighbors and there are √Δ types, there must exist a type j with D_j(v) ≤ √Δ. Such a type can be found by doing a linear scan of all the neighbors of v, and this takes O(Δ) time since v has at most Δ neighbors.

    From the set C_j we choose a color that is not assigned to any of the neighbors of v: such a color must exist since D_j(v) ≤ √Δ < 2√Δ = |C_j|. Next, we update the set N_v as follows: we delete from it every neighbor of v whose type differs from j, and insert into it every neighbor of v whose type equals j. We similarly update the set N_w for every neighbor w of v whose type is either v's old type or j. It takes O(Δ) time to implement this step.

    Accordingly, the total time spent on this call to the RECOLOR subroutine is O(Δ). However, Property 2.2 may now be violated for one or more neighbors of v. If this is the case, then we recursively call RECOLOR on each such neighbor, and keep doing so until all the vertices satisfy Property 2.2. In the end, we have a proper coloring in which all the vertices satisfy Property 2.2.
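
A minimal Python sketch of this type-based recoloring; the palette split and the degree threshold are treated as adjustable parameters rather than the paper's exact constants.

    from collections import Counter

    def recolor_by_type(adj, color, v, types, threshold):
        """Move v to a color whose type is sparse in v's neighborhood.

        types: list of disjoint color lists (the palette split into types).
        threshold: maximum number of same-type neighbors allowed by the
        invariant (illustrative; roughly 2*sqrt(Delta) in the warmup).
        Recurses on neighbors that start violating the invariant.
        """
        type_of = {c: i for i, cs in enumerate(types) for c in cs}
        deg = Counter(type_of[color[u]] for u in adj[v])
        # By pigeonhole, some type appears on few neighbors of v.
        j = min(range(len(types)), key=lambda i: deg[i])
        used = {color[u] for u in adj[v] if type_of[color[u]] == j}
        # A free type-j color exists as long as each type has more
        # colors than the threshold allows same-type neighbors.
        color[v] = next(c for c in types[j] if c not in used)
        # The move may push some type-j neighbors over the threshold.
        for u in adj[v]:
            if type_of[color[u]] == j:
                same = sum(1 for w in adj[u]
                           if type_of[color[w]] == type_of[color[u]])
                if same >= threshold:
                    recolor_by_type(adj, color, u, types, threshold)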

A priori it may not be clear that the above procedure even terminates. However, we now argue that the amortized time spent in all the calls to the RECOLOR subroutine is O(√Δ) (and in particular the chain of recursive calls to the subroutine terminates). To do so we introduce a potential Φ, which sums over all vertices the number of its neighbors which are of the same type as itself. Note that when an edge is inserted or deleted, the potential can increase by at most 2. However, during a call to RECOLOR(v) the potential drops by at least 2√Δ. This is because v moves from a color of type i with D_i(v) ≥ 2√Δ to a color of type j with D_j(v) ≤ √Δ; this leads to a drop of at least √Δ in the term for v, and we get the same amount of drop when considering v's neighbors. Therefore, during t edge insertions or deletions starting from an empty graph, we can have at most O(t/√Δ) calls to the RECOLOR subroutine. Since each such call takes O(Δ) time, we get the claimed O(√Δ) amortized update time.
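
The amortization can be summarized in one display; the thresholds below are the illustrative ones used above, not necessarily the paper's exact constants.

\[
\Phi \;=\; \sum_{v \in V} \bigl|\{\, u \in N(v) : t(u) = t(v) \,\}\bigr|,
\qquad
\Phi \text{ rises by at most } 2 \text{ per update and falls by at least } 2\sqrt{\Delta} \text{ per call to RECOLOR.}
\]
\[
\text{amortized cost over } t \text{ updates}
\;\le\; \frac{\bigl(2t / 2\sqrt{\Delta}\bigr) \cdot O(\Delta)}{t}
\;=\; O(\sqrt{\Delta}).
\]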

Getting O(polylog Δ) amortized update time.

One way to interpret the previous algorithm is as follows. Think of each color as an ordered pair (c_1, c_2), where c_1 indexes the type and c_2 indexes a color within that type. The first coordinate is analogous to the notion of a type, as defined in the previous algorithm. For any vertex v and index k ∈ {1, 2}, let χ_k(v) denote the k-tuple consisting of the first k coordinates of the color assigned to v. For ease of exposition, we define χ_0(v) to be the empty tuple. Furthermore, for every vertex v and every k, let N_k(v) denote the set of neighbors u of v with χ_k(u) = χ_k(v). With these notations, Property 2.2 can be rewritten as: |N_1(v)| < 2√Δ for all v.

To improve the amortized update time to O(polylog Δ), we think of every color as a b-tuple (c_1, …, c_b), each of whose coordinates can take q possible values. The total number of colors is therefore q^b. The values of q and b are chosen so that q^b = (1+o(1))Δ, while keeping both q and b suitably small. We maintain the invariant that |N_k(v)| ≤ f(k) for every vertex v and every k ∈ {1, …, b}, for some carefully chosen function f. We then implement a generalization of the previous algorithm on these colors represented as tuples. Using carefully chosen parameters, we show how to deterministically maintain a (1+o(1))Δ-vertex coloring in a dynamic graph in O(polylog Δ) amortized update time. See Section 4 for the details.
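
Viewing a color index as a tuple of base-q digits is just positional notation; here is a short illustration with purely illustrative parameters q and b.

    def color_to_tuple(c, q, b):
        """Write color index c in base q as a b-tuple (most significant first)."""
        digits = []
        for _ in range(b):
            digits.append(c % q)
            c //= q
        return tuple(reversed(digits))

    # Example: with q = 4 and b = 3 there are 4**3 = 64 colors;
    # color 27 corresponds to the tuple (1, 2, 3).
    assert color_to_tuple(27, 4, 3) == (1, 2, 3)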

3 A Randomized Dynamic Algorithm for Vertex Coloring

As discussed in Section 2.1, our randomized dynamic algorithm for vertex coloring has two main components. The first one is a hierarchical partition of the vertices of the input graph into logarithmically many levels. In Section 3.2, we show how to maintain such a hierarchical partition dynamically. The second component is the use of randomization while recoloring a conflicted vertex v, so as to ensure that (a) at most one new conflict is caused by this recoloring, and (b) if so, the newly conflicted vertex lies at a level strictly lower than that of v. We describe this second component in Section 3.3. The complete algorithm, which combines the two components, appears in Section 3.4. The theorem below captures our main result.

Theorem 3.1.

There is a randomized, fully dynamic algorithm to maintain a (Δ+1)-vertex coloring of a graph with maximum degree at most Δ with O(log Δ) expected amortized update time.

3.1 Preliminaries.

We start with the definition of a hierarchical partition. Let G = (V, E) denote the input graph that is changing dynamically, and let Δ be an upper bound on the maximum degree of any vertex in G. For now we assume that the value of Δ does not change with time. In Section 6, we explain how to relax this assumption. Fix a constant β > 1. For simplicity of exposition, assume that log_β Δ (say) is an integer and that Δ is sufficiently large. The vertex set V is partitioned into subsets V_0, …, V_L, one per level. The level ℓ(v) of a vertex v is the index of the subset it belongs to. For any vertex v and any two indices i ≤ j, we let N_v(i, j) be the set of neighbors of v whose levels are between i and j. For notational convenience, we define N_v(i, j) = ∅ whenever i > j. A hierarchical partition satisfies the following two properties/invariants. Note that since no vertex has more than Δ neighbors, Invariant 3.3 is trivially satisfied by every vertex at the highest level L. Invariant 3.2, on the other hand, is trivially satisfied by the vertices at the lowest level.

Invariant 3.2.

For every vertex at level , we have .

Invariant 3.3.

For every vertex , we have .

Let C be the set of all possible colors. A coloring χ is proper for the graph iff for every edge (u, v), we have χ(u) ≠ χ(v). Given the hierarchical partition, a coloring χ, and a vertex v at level i = ℓ(v), we define a few key subsets of C. Let C_v^+ be the colors used by the neighbors of v lying in levels i and above. Let C_v = C ∖ C_v^+ denote the remaining set of colors. We say a color c ∈ C_v is blank for v if no vertex in N_v(0, i−1) is assigned the color c. We say a color c ∈ C_v is unique for v if exactly one vertex in N_v(0, i−1) is assigned the color c. We let B_v (respectively U_v) denote the set of blank (respectively unique) colors for v. Let T_v denote the remaining colors in C_v. Thus, for every color in T_v, there are at least two vertices in N_v(0, i−1) that are assigned that color. We end this section with a crucial observation.

Claim 3.1.

For any vertex at level , we have .

Proof.

Since and , we get . The following two observations, which in turn follow from definitions, prove the claim; (a) and (b) . ∎

Data Structures. We now describe the data structures used by our dynamic algorithm. The first set is used to maintain the hierarchical partition and the second set is used to maintain the sets of colors.

(1) For every vertex v and every level i, we maintain the neighbors of v lying in level i in a doubly linked list. If i is below the level of v, then we also maintain the set of all such lower-level neighbors in a single doubly linked list. We use the phrase neighborhood list of v to refer to any one of these lists. For every neighborhood list we maintain a counter which stores the number of vertices in it. Every edge (u, v) keeps two pointers – one to the position of u in the relevant neighborhood list of v, and the other vice versa. Therefore, when an edge is inserted into or deleted from the graph, the linked lists can be updated in O(1) time. Finally, we keep two queues of dirty vertices which store the vertices not satisfying either of the two invariants.

(2) We maintain the coloring χ as an array, where the entry for v contains the current color of v. Every vertex v maintains the color sets defined in Section 3.1, including its blank and unique colors, in doubly linked lists. For each color c and vertex v, we keep a pointer from the color to its position in whichever of these lists it currently belongs to. This allows us to add and delete colors from these lists in O(1) time. We also maintain a counter associated with each color c and each vertex v: if c is not used by any neighbor of v at level ℓ(v) or above, then the counter equals the number of lower-level neighbors of v with color c; otherwise, the counter is kept at a special null value. For each vertex v, we keep a time counter which stores the last "time" (edge insertion/deletion) at which v was recolored, i.e., at which its color was changed. (As long as the number of edge insertions and deletions is polynomial, this counter requires only logarithmically many bits to store; if the number becomes superpolynomial, then every polynomially many rounds or so we recompute the full coloring in the current graph.)
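
The bookkeeping described above might be organized as follows; field names are illustrative, Python sets stand in for the doubly linked lists, and the color-set maintenance is simplified to a single counter per color.

    class VertexState:
        """Illustrative per-vertex bookkeeping for the hierarchical partition.

        Keeps, for each level, the neighbors at that level, plus a counter of
        how many lower-level neighbors use each color.  All updates below are
        O(1) per affected (edge, level) pair.
        """
        def __init__(self, num_levels, palette_size):
            self.level = 0
            self.color = 0
            self.nbrs_at_level = [set() for _ in range(num_levels)]
            self.lower_color_count = [0] * palette_size  # colors of lower-level neighbors
            self.last_recolored = 0

    def add_edge(state, u, v):
        """Register edge (u, v) in both endpoints' structures."""
        state[u].nbrs_at_level[state[v].level].add(v)
        state[v].nbrs_at_level[state[u].level].add(u)
        if state[v].level < state[u].level:
            state[u].lower_color_count[state[v].color] += 1
        elif state[u].level < state[v].level:
            state[v].lower_color_count[state[u].color] += 1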

3.2 Maintaining the hierarchical partition.

Initially, when the graph is empty, all the vertices are at the lowest level. This satisfies both the invariants vacuously. Subsequently, we ensure that the hierarchical partition satisfies Invariants 3.2 and 3.3 by using a simple greedy procedure. To describe this procedure, we define a vertex to be dirty if it violates any one of the invariants, and clean otherwise. Our goal then is to ensure that every vertex in the hierarchical partition remains clean. By the inductive hypothesis, every vertex is clean before the insertion/deletion of an edge. Due to the insertion/deletion of an edge (u, v), the vertices u and v might become dirty. We fix the dirty vertices as per the procedure described in Figure 1. In this procedure, we always fix the dirty vertices violating Invariant 3.3 before fixing any dirty vertex that violates Invariant 3.2. This will be crucial in bounding the amortized update time. Furthermore, note that as we change the level of a vertex during one iteration of the While loop in Figure 1, this might lead to a change in the below-degrees or side-degrees (i.e., the number of neighbors at strictly lower levels and at the same level, respectively) of the neighbors of that vertex. Hence, one iteration of the While loop might create multiple new dirty vertices, which are dealt with in the subsequent iterations of the same While loop. It is not hard to see that any iteration of the While loop acting on a vertex ends with making it clean. We encapsulate this in the following lemma and the subsequent corollary.

01. While Invariant 3.2 or Invariant 3.3 is violated:
02.   If there is some vertex that violates Invariant 3.3, Then
03.     Find the minimum level, above its current one, at which the vertex would satisfy the invariants.
04.     Move the vertex up to that level, and update the relevant data structures as described in the proof of Lemma 3.5.
05.   Else
06.     Find a vertex that violates Invariant 3.2.
07.     If there is a level, below its current one, at which the vertex would satisfy the invariants, Then
08.       Move the vertex down to the maximum such level, and update the relevant data structures as described in the proof of Lemma 3.6.
09.     Else
10.       Move the vertex down to the lowest level, and update the relevant data structures.

Figure 1: The subroutine MAINTAIN-HP is called after an edge is inserted into or deleted from the input graph.
Lemma 3.4.

Consider any iteration of the While loop in Figure 1 which changes the level of a vertex from to . The vertex becomes clean (i.e., satisfies both the invariants) at the end of the iteration. Furthermore, at the end of this iteration we have: (a) , (b) if .

Proof.

There are three cases to consider, depending on how the vertex moves from level to level .

Case 1. The vertex moves up from a level . In this case, the vertex moves up to the minimum level where . This implies that . Thus, the vertex satisfies both the invariants after it moves to level , and both the conditions (a) and (b) hold.

Case 2. The vertex moves down from level to a level . In this case, steps 07, 08 in Figure 1 imply that is the maximum level where . Hence, we have . So the vertex satisfies both the invariants after moving to level , and both the conditions (a) and (b) hold.

Case 3. The vertex moves down from level to level . Here, steps 07, 09 in Figure 1 imply that for every level . In particular, setting , we get: . Thus, the vertex satisfies both the invariants after it moves down to level , and both the conditions (a) and (b) hold. ∎

Lemma 3.4 states that during any given iteration of the While loop in Figure 1, we pick a dirty vertex and make it clean. In the process, some neighbors of this vertex may become dirty, and they are handled in the subsequent iterations of the same While loop. When the While loop terminates, every vertex is clean by definition. It now remains to analyze the time spent on implementing this While loop after an edge insertion/deletion in the input graph. Lemma 3.4 will be crucial in this analysis. The intuition is as follows. The lemma guarantees that whenever a vertex moves to a level, its below-degree at that moment is above a certain threshold. In contrast, Invariant 3.2 and Figure 1 ensure that when the same vertex later moves down from that level, its below-degree has dropped below a smaller threshold. Thus, the vertex must lose at least the difference between these two thresholds in below-degree before it moves down from that level. This slack helps us bound the amortized update time. We next bound the time spent on a single iteration of the While loop in Figure 1.

Lemma 3.5.

Consider any iteration of the While loop in Figure 1 where a vertex moves up to a level from a level (steps 2 – 4). It takes time to implement such an iteration.

Proof.

First, we claim that the value of (the level where the vertex will move up to) can be identified in time. This is because we explicitly store the sizes of the lists and for all . Next, we update the lists and the counters for and its neighbors as follows. For every level and every vertex :

  • , and .

  • If , Then

    • .

    • If , then and .

The time spent on the above operations is bounded by the number of vertices in .

Since the vertex is moving up from its old level to its new level, we have to update its position in the neighborhood lists of the affected neighbors. We also need to merge its own neighborhood lists for the intermediate levels into a single list. If some vertices become dirty in the process, then we need to put them in the correct dirty queue. This takes time proportional to the number of list entries touched.

By Lemma 3.4, we have , and . Since is a constant, we conclude that it takes time to implement this iteration of the While loop in Figure 1. ∎

Lemma 3.6.

Consider any iteration of the While loop in Figure 1 where a vertex moves down to a level from a level (steps 5 – 10). It takes time to implement such an iteration.

Proof.

We first bound the time spent on identifying the level the vertex will move down to. Since the vertex violates Invariant 3.2, we know that . Therefore, the algorithm can scan through the list and find the required level in time. Next, we update the lists and the counters for and its neighbors as follows.

For every vertex :

  • If , Then

    • .

    • If , then and .

  • If , Then

    • , and .

The time spent on the above operations is bounded by the number of vertices in .

Since the vertex is moving down from level to level , we have to update the position of in the neighborhood lists of the vertices . We also need to split the list into the lists and for . In the process if some vertices become dirty, then we need to put them in the correct dirty queue. This takes time.

Figure 1 ensures that satisfies Invariant 3.3 at level before it moves down to a lower level. Thus, we have , and we spend time on this iteration of the While loop. ∎

Corollary 3.7.

It takes time for a vertex to move from a level to a different level .

Proof.

If , then the corollary follows immediately from Lemma 3.5. For the rest of the proof, suppose that . In this case, as per the proof of Lemma 3.6, the time spent is at least the size of the list , and Lemma 3.4 implies that . Hence, the total time spent is , which is also since is a constant. Note that we ignored the scenarios where since in that event is a constant anyway. ∎

In Theorem 3.8, we bound the amortized update time for maintaining a hierarchical partition.

Theorem 3.8.

We can maintain a hierarchical partition of the vertex set V that satisfies Invariants 3.2 and 3.3 with O(log Δ) amortized update time.

We devote the rest of Section 3.2 to the proof of the above theorem using a token-based scheme. The basic framework is as follows. For every edge insertion/deletion in the input graph we create at most O(log_β Δ) tokens, and we use each token to pay for an amount of computation that depends only on β. This implies an amortized update time of O(log_β Δ) times a β-dependent constant, which is O(log Δ) since β is a constant.

Specifically, we associate many tokens with every vertex and many tokens with every edge in the input graph. The values of these tokens are determined by the following equalities.

(3.1)

Initially, the input graph is empty, every vertex is at level , and for all . Due to the insertion of an edge in , the total number of tokens increases by at most , where and are the levels of the endpoints of the edge just before the insertion. On the other hand, due to the deletion of an edge in the input graph, the value of for increases by at most , and the tokens associated with the edge disappears. Overall, the total number of tokens increases by at most due to the deletion of an edge. We now show that the work done during one iteration of the While loop in Figure 1 is proportional to times the net decrease in the total number of tokens during the same iteration. Accordingly, we focus on any single iteration of the While loop in Figure 1 where a vertex (say) moves from level to level . We consider two cases, depending on whether moves to a higher or a lower level.

Case 1: The vertex moves up from level to level .

Immediately after the vertex moves up to level , we have and hence . This follows from (3.2) and Lemma 3.4. Since is always nonnegative, the value of does not increase as moves up to level . We now focus on bounding the change in the total number of tokens associated with the neighbors of . Note that the event of moving up from level to level affects only the tokens associated with the vertices . Specifically, from (3.2) we infer that for every vertex , the value of increases by at most . On the other hand, for every vertex , the value of remains unchanged. Thus, the total number of tokens associated with the neighbors of increases by at most . The inequality follows from Lemma 3.4. To summarize, the total number of tokens associated with all the vertices increases by at most .

We now focus on bounding the change in the total number of tokens associated with the edges incident on . From (3.1) we infer that for every edge with , the value of drops by at least one as the vertex moves up from level to level . For every other edge with , the value of remains unchanged. Overall, this means that the total number of tokens associated with the edges drops by at least . The inequality follows from Lemma 3.4. To summarize, the total number of tokens associated with the edges decreases by at least .

From the discussion in the preceding two paragraphs, we reach the following conclusion: as the vertex moves up to its new level, the total number of tokens associated with all the vertices and edges decreases, and Lemma 3.5 bounds the time taken to implement this iteration of the While loop in Figure 1. Hence, the time spent on updating the relevant data structures is at most a β-dependent constant times the net decrease in the total number of tokens. This concludes the proof of Theorem 3.8 for Case 1.

Case 2: The vertex moves down from level to level .

As in Case 1, we begin by observing that immediately after the vertex moves down to level , we have if , and hence . This follows from Lemma 3.4. The vertex violates Invariant 3.2 just before moving from level to level (see step 6 in Figure 1). In particular, just before the vertex moves down from level to level , we have and . The last inequality holds since is a sufficiently large constant. So the number of tokens associated with drops by at least as it moves down from level to level . Also, from (3.2) we infer that the value of does not increase for any as moves down to a lower level. Hence, we conclude that:

(3.3)

We now focus on bounding the change in the number of tokens associated with the edges incident on . From (3.1) we infer that the number of tokens associated with an edge increases by as moves down from level to level . In contrast, the number of tokens associated with any other edge does not change as the vertex moves down from level to a lower level. Let be the increase in the total number of tokens associated with all the edges. Thus, we have:

(3.4)

where the last equality follows by rearrangement. Next, recall that the vertex moves down from level to level during the concerned iteration of the While loop in Figure 1. Accordingly, steps 7 – 10 in Figure 1 implies that for all levels . This is equivalent to the following statement:

(3.5)

Next, step 6 in Figure 1 implies that the vertex violates Invariant 3.2 at level . Thus, we get: . Note that for all levels , we have and . Hence, we get: for all levels . Combining this observation with (3.5), we get:

(3.6)

Plugging (3.6) into (3.4), we get:

(3.7)

In the above derivation, the last inequality holds since is a sufficiently large constant.

From (3.3) and (3.7), we reach the following conclusion: As the vertex moves down from level to a level , the total number of tokens associated with all the vertices and edges decreases by at least . In contrast, by Lemma 3.6 it takes time to implement this iteration of the While loop in Figure 1. Hence, we derive that the time spent on updating the relevant data structures is at most times the net decrease in the total number of tokens. This concludes the proof of Theorem 3.8 for Case 2.

3.3 The recoloring subroutine.

Whenever we want to change the color of a vertex v, we call the subroutine RECOLOR(v) as described in Figure 2. We ensure that the hierarchical partition does not change during a call to this subroutine. Specifically, throughout the duration of any call to the RECOLOR subroutine, the level of every vertex remains the same. We also ensure that the hierarchical partition satisfies Invariants 3.2 and 3.3 before making any call to the RECOLOR subroutine.

During a call to the subroutine RECOLOR, we randomly choose a color for the vertex v from the set of its blank and unique colors. In case the random color is a unique color, we find the unique lower-level neighbor of v which is assigned this color, and then we recursively recolor that neighbor. Since its level is strictly less than that of v, the maximum depth of this recursion is at most L, the number of levels. We now bound the time spent on a call to RECOLOR.
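
Under the simplified data layout sketched after the data-structure description (our illustration, not the paper's exact structures), the recoloring step might look like this:

    import random
    from collections import Counter

    def recolor(state, adj, v, palette):
        """Recolor v with a random blank-or-unique color (w.r.t. lower levels).

        Colors of neighbors at v's level or above are excluded outright; among
        the rest, any color used by at most one lower-level neighbor is blank
        or unique.  If the chosen color is unique, the single conflicting
        neighbor lies at a strictly lower level and is recolored recursively,
        so the recursion depth is bounded by the number of levels.  (For
        clarity this sketch rescans the neighborhood instead of using the
        counters described in Section 3.1.)
        """
        sv = state[v]
        excluded = {state[u].color for u in adj[v] if state[u].level >= sv.level}
        below = Counter(state[u].color for u in adj[v] if state[u].level < sv.level)
        candidates = [c for c in palette if c not in excluded and below[c] <= 1]
        sv.color = random.choice(candidates)
        conflict = [u for u in adj[v]
                    if state[u].level < sv.level and state[u].color == sv.color]
        if conflict:
            recolor(state, adj, conflict[0], palette)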

1. Choose uniformly at random.    // These notations are defined in Section 3.1. 2. Set . 3. Update the relevant data structures as described in the proof of Lemma 3.9. 4. If : 5. Find the unique vertex with