1 Introduction
Graph coloring is one of the most fundamental and well-studied problems in computer science. Given a graph $G$ with $n$ vertices, a $k$-coloring assigns a color from $\{1,\dots,k\}$ to each vertex. The coloring is proper if all adjacent vertices have different colors. The smallest $k$ for which there exists a proper $k$-coloring is called the chromatic number of $G$. Unfortunately, it is NP-hard to approximate the chromatic number within a factor of $n^{1-\varepsilon}$ for all $\varepsilon > 0$ [21, 32]. Hence, graph coloring is usually studied w.r.t. certain graph parameters such as the maximum degree $\Delta$ of any vertex or the arboricity $\alpha$, which is the minimum number of forests into which the edges of $G$ can be partitioned. It is well-known that proper $(\Delta+1)$-colorings and proper $O(\alpha)$-colorings can be computed in polynomial time.
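The arboricity-based bound can be obtained greedily: every subgraph of a graph with arboricity $\alpha$ contains a vertex of degree at most $2\alpha - 1$, so peeling off minimum-degree vertices and coloring in reverse order uses at most $2\alpha$ colors. A minimal sketch (the function name and interface are ours for illustration, not part of the paper):

```python
from collections import defaultdict

def greedy_degeneracy_coloring(n, edges):
    """Greedy coloring along a degeneracy ordering.

    A graph with arboricity alpha has a vertex of degree at most
    2*alpha - 1 in every subgraph, so this uses at most 2*alpha colors.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(adj[v]) for v in range(n)}
    removed = set()
    order = []
    for _ in range(n):
        # peel off a vertex of currently minimum degree
        v = min((x for x in range(n) if x not in removed), key=deg.get)
        order.append(v)
        removed.add(v)
        for w in adj[v]:
            if w not in removed:
                deg[w] -= 1
    color = {}
    for v in reversed(order):  # color in reverse peeling order
        used = {color[w] for w in adj[v] if w in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color
```

For a forest (arboricity $1$) this uses at most $2$ colors; for a cycle (arboricity $2$) at most $3$.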
In the dynamic version of the problem, the graph undergoes edge insertions and deletions and a data structure needs to maintain a proper coloring with small update time. More concretely, suppose there are $t$ update operations, each inserting or deleting a single edge. This implies a sequence of graphs $G_0, G_1, \dots, G_t$ such that $G_{i-1}$ and $G_i$ differ by exactly one edge. Then for each $G_i$ the dynamic algorithm must maintain a proper coloring.
When studying dynamic algorithms w.r.t. graph parameters such as the maximum degree or the arboricity, it is important that the dynamic algorithms are adaptive to the parameter. That is, during a sequence of edge insertions and deletions, the values of parameters such as $\Delta$ and $\alpha$ might change over time. For example, suppose $\alpha_i$ is the arboricity of $G_i$ and let $\alpha_{\max} = \max_i \alpha_i$ denote the maximum arboricity of all graphs $G_0,\dots,G_t$. Then ideally we would like that after the $i$'th update the number of colors used by a dynamic algorithm depends on $\alpha_i$ and $n$ and not on $\alpha_{\max}$, because it might be that $\alpha_i \ll \alpha_{\max}$.
Bhattacharya et al. [8] studied the dynamic coloring problem and showed how to maintain a $(\Delta+1)$-coloring with polylogarithmic update time; their algorithm is adaptive to the current maximum degree $\Delta$ of the graph. In follow-up work [9, 19] the update time was improved to $O(1)$.
Later, Solomon and Wein [30] provided a dynamic coloring algorithm with polylogarithmic update time. Note that the number of colors used by [30] depends on the maximum arboricity $\alpha_{\max}$ over all graphs $G_0,\dots,G_t$. Hence, we ask the following question.
Question 1.
Are there dynamic coloring algorithms with polylogarithmic update time which maintain a coloring that is adaptive to the current arboricity of the graph?
Another interesting question concerns limitations of dynamic coloring algorithms. A lower bound of Barba et al. [3] shows that there exist dynamic graphs which are $2$-colorable but for which any dynamic algorithm maintaining a $c$-coloring must recolor $\Omega(n^{2/(c(c-1))})$ vertices after each update. The lower bound holds even for forests, i.e., for graphs with arboricity $1$. This implies that any dynamic algorithm maintaining an $O(1)$-coloring must recolor $n^{\Omega(1)}$ vertices after each update. Note that this rules out dynamic $O(1)$-colorings for forests and, more generally, planar graphs with polylogarithmic update times.
However, the lower bound only applies to dynamic algorithms that maintain explicit colorings. That is, a coloring is explicit if after each update the data structure stores an array $A$ of length $n$ such that $A[v]$ stores the color of vertex $v$. Thus, the color of each vertex can be determined with a single memory access. All of the previously mentioned dynamic coloring algorithms maintain explicit colorings but are allowed to use more than a constant number of colors.
In the light of the above lower bound, it is natural to ask whether it can be bypassed by implicit colorings. That is, a coloring is implicit if the data structure offers a query routine $\mathrm{query}(v)$ which after some computation returns the color of a vertex $v$. In particular, we impose the following consistency requirement on the query operation:

Consider any sequence of consecutive query operations which are not interrupted by an update. If two vertices $u$ and $v$, $u \neq v$, queried in this sequence are adjacent, then $\mathrm{query}(u) \neq \mathrm{query}(v)$.
Note that in the above definition we only consider consecutive query operations which are not interrupted by an update. This is because after an update, potentially many vertex colors may change (due to the lower bound). Furthermore, observe that the definition implies that if we query all vertices of the graph consecutively, then we obtain a proper coloring.
Observe that an explicit coloring always implies an implicit coloring: when queried for a vertex $v$, the data structure simply returns $A[v]$. However, implicit colorings are much more versatile than explicit colorings: when the colors of many vertices change, this does not affect the implicit coloring because it does not have to update the array $A$. Hence, we ask the following natural question.
Question 2.
Can we break the lower bound of Barba et al. [3] with algorithms maintaining implicit colorings?
1.1 Our Contributions
We answer both questions affirmatively.
Adaptive explicit colorings. First, we show that there exists a randomized algorithm which maintains an explicit and adaptive $O(\alpha \log n)$-coloring with polylogarithmic update time. (As usual in the study of randomized dynamic algorithms, we assume that the adversary is oblivious, i.e., that the sequence of edge insertions and deletions is fixed before the algorithm runs.) This answers Question 1 affirmatively.
Theorem 1.
There is a randomized data structure that maintains an explicit and adaptive $O(\alpha \log n)$-coloring on a graph with $n$ vertices and current arboricity $\alpha$, with polylogarithmic expected amortized update time.
Note that this improves upon the results in [30] in two ways: it makes the coloring adaptive and it shaves a logarithmic factor in the number of colors used by the algorithm. To obtain our result, we use a similar approach as the one used in [30]. In [30], the vertices were assigned to levels and the vertices on each level were colored using $O(\alpha_{\max})$ colors. In our result, we assign the vertices to levels and partition the levels into groups of consecutive levels. We then make sure that for coloring the $i$'th group we use only $O(2^i \log n)$ colors and that the levels of groups $G_i$ with $2^i \gg \alpha$ are empty. Then a geometric sum argument implies that we use $O(\alpha \log n)$ colors in total.
Adaptive implicit colorings. Furthermore, we provide two algorithms maintaining implicit colorings. Both of these algorithms are also adaptive. We first provide an algorithm which maintains an adaptive implicit $2^{O(\alpha)}$-coloring with polylogarithmic update time and query time $O(\alpha \log n)$. This improves upon the coloring of Theorem 1 for $\alpha = O(\log \log n)$.
Theorem 2.
There is a deterministic data structure that maintains an adaptive implicit $2^{O(\alpha)}$-coloring, with polylogarithmic update time and query time $O(\alpha \log n)$, where $\alpha$ is the current arboricity of the graph.
Note that Theorem 2 implies that for graphs with constant arboricity we can maintain $O(1)$-colorings with polylogarithmic update and query times. This class of graphs contains trees, planar graphs, graphs with bounded treewidth, and all minor-free graphs. In particular, this breaks the lower bound of Barba et al. [3] and answers Question 2 affirmatively.
Corollary 3.
There is a deterministic data structure that, for dynamic graphs with constant arboricity, maintains an implicit $O(1)$-coloring with polylogarithmic update time and query time $O(\log n)$.
Next, we improve upon the results of Theorem 1 and Theorem 2 in the parameter regime $\alpha = \Omega(\log \log n)$. More concretely, we obtain the following result.
Theorem 4.
There is a deterministic data structure that maintains an adaptive implicit $O\!\left(\frac{\alpha}{\log \alpha} \log n\right)$-coloring with polylogarithmic amortized update time and a query time that depends only on $\alpha$ and $\log n$.
Dynamic arboricity decomposition. To derive the results of Theorem 2, we introduce a data structure which maintains an adaptive arboricity decomposition of a dynamic graph. That is, it explicitly maintains a partition of the edges of the dynamic graph into $O(\alpha)$ undirected forests. This data structure might be of independent interest and might be useful in future applications.
To obtain the result we assume that we have black-box access to an algorithm maintaining a low-outdegree orientation of the graph. More concretely, a $d$-outdegree edge orientation for an undirected graph assigns a direction to each edge and ensures that each vertex has outdegree at most $d$. We then provide a reduction showing that any data structure maintaining a $d$-outdegree orientation of a graph can be turned into a data structure maintaining an arboricity decomposition into $O(d)$ forests.
Theorem 5.
Let $G$ be a dynamic graph. Suppose there exists a data structure with (amortized or worst-case) update time $T(n)$ maintaining a $d$-outdegree orientation of $G$. Then there exists a data structure that maintains an arboricity decomposition of $G$ with $2d$ forests and with (amortized or worst-case, resp.) update time $O(T(n))$.
Theorem 5 yields the following corollary, which we obtain by showing that a data structure of Bhattacharya et al. [10] can be extended to maintain an adaptive $O(\alpha)$-outdegree orientation of an undirected graph (see Section 2).
Corollary 6.
There exists a deterministic adaptive data structure that maintains a partition of the edges into $O(\alpha)$ forests with polylogarithmic amortized update time, where $\alpha$ is the current arboricity of the graph.
The corollary complements a result by Banerjee et al. [2], who presented an algorithm for dynamically maintaining an arboricity decomposition consisting of exactly $\alpha$ forests with an update time polynomial in the number of edges currently in the graph. Thus the result in the corollary obtains an exponentially faster update time by increasing the number of forests by a constant factor.
See Appendix A for further discussion of known data structures for maintaining low-outdegree edge orientations as well as further related work.
2 Level Data Structure
In this section we introduce a version of the data structure presented in [10], which we will refer to as the level data structure. The data structure dynamically maintains an adaptive $O(\alpha)$-outdegree orientation of a dynamic graph, where $\alpha$ is the current arboricity of the graph. More precisely, the level data structure maintains an undirected graph with $n$ vertices and provides an update operation for inserting and deleting edges. It maintains an orientation of the edges of the graph such that each vertex has outdegree at most $O(\alpha)$. We emphasize that, unlike the data structure presented in [10], our level data structure does not require knowledge of an upper bound on the maximum arboricity of the graph over the whole sequence of edge insertions and deletions.
For the rest of the paper, we will write $\{u,v\}$ to denote undirected edges and $(u,v)$ to denote directed edges.
Levels, Groups and Invariants. Internally, the data structure maintains a partition of the vertices into levels, which we call the hierarchy. For each level $\ell$, we let $V_\ell$ denote the set of vertices that are currently assigned to level $\ell$. Furthermore, we partition the levels into groups such that each group contains $\lceil \log n \rceil$ consecutive levels. More precisely, for each $i$, we set $G_i$ to be the $i$'th block of $\lceil \log n \rceil$ consecutive levels. Note that there are $O(\log^2 n)$ levels in total and that neither the total number of levels nor the number of levels per group depends on the arboricity.
The data structure maintains the following invariants for each vertex :

If $v \in V_\ell$ and $\ell \in G_i$, then $v$ has at most $\beta \cdot 2^i$ neighbors in $V_\ell \cup V_{\ell+1} \cup \cdots$, for a fixed constant $\beta$. That is, each vertex has a bounded number of neighbors at its own or higher levels.

If $v \in V_\ell$ with $\ell > 1$, then $v$ has at least $2^{i'}$ neighbors in levels $\ell-1, \ell, \ell+1, \dots$, where $i'$ is such that $\ell - 1 \in G_{i'}$. That is, each vertex not on the lowest level has sufficiently many neighbors at levels $\ell-1$ and above.
Due to edge insertions and deletions the above invariants might get violated. If a vertex does not satisfy the invariants, we call it dirty. Otherwise, we say that it satisfies the degree property.
Note that the above partitioning of the vertices implies an edge orientation: for an (undirected) edge $\{u,v\}$ with $u \in V_\ell$ and $v \in V_{\ell'}$, we assign the orientation as follows: $(u,v)$ if $\ell < \ell'$, $(v,u)$ if $\ell' < \ell$, and an arbitrary orientation if $\ell = \ell'$. This corresponds to directing an edge from the vertex of lower level towards the vertex of higher level in the hierarchy. Note that due to Invariant 1, each vertex at a level of group $G_i$ has outdegree at most $\beta \cdot 2^i$.
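The orientation rule induced by the hierarchy can be sketched as follows; `level` is assumed to map each vertex to its current level, and the tie-breaking rule for equal levels is an arbitrary but fixed choice (by vertex id here):

```python
def orient_edge(u, v, level):
    """Orient an undirected edge {u, v} according to the level hierarchy:
    from the lower-level endpoint towards the higher-level endpoint.
    Equal levels are broken arbitrarily but consistently, here by id.
    """
    if level[u] < level[v]:
        return (u, v)   # u -> v
    if level[v] < level[u]:
        return (v, u)   # v -> u
    return (u, v) if u < v else (v, u)  # same level: arbitrary
```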
Initialization and Data Structures. The initialization of the data structure is implemented as follows. We assume that at the beginning the data structure is given a graph with $n$ vertices and no edges. We initialize the sets by setting $V_1 = V$ and $V_\ell = \emptyset$ for all $\ell > 1$. The groups are defined as above and do not depend on the edges of the graph.
Furthermore, for each vertex $v$ we maintain the following data structures. For each level $\ell$ at or above the level of $v$, we maintain a doubly-linked list containing all neighbors of $v$ in $V_\ell$. Furthermore, there is a doubly-linked list containing all neighbors of $v$ at levels below the level of $v$. Additionally, for each edge $\{u,v\}$ we store a pointer from the position of $u$ in $v$'s lists to the position of $v$ in $u$'s lists and vice versa. Note that by additionally maintaining for each list the number of vertices stored in it, we can check in constant time whether one of the invariants is violated for $v$.
Updates. Now suppose that an edge $\{u,v\}$ is inserted or deleted. Then one of the vertices might become dirty and we have to recover the degree property. While there exists a dirty vertex $w$, we proceed as follows. If $w$ violates Invariant 1, we move $w$ to a higher level. If $w$ violates Invariant 2, we move $w$ to a lower level. Note that during the above process, the algorithm might also change the levels of vertices other than $u$ and $v$.
Observe that when a vertex changes its level due to one of these operations, it is easy to update the lists: when we increase the level of $v$ from $\ell$ to $\ell'$, we iterate over the lists for the levels between $\ell$ and $\ell'$ and split or merge them as required by the new level. Furthermore, for each neighbor $w$ stored in these lists we have to move $v$ between the corresponding lists of $w$; this can be done in $O(1)$ time for each such $w$. (When iterating over the lists, we use the pointer stored for the edge $\{v,w\}$, which provides the position of $v$ in $w$'s lists. We remove $v$ from its old list, add $v$ to the correct new list of $w$ and update the pointer for the edge accordingly.) Similarly, when we decrease the level of $v$ from $\ell$ to $\ell'$, we proceed symmetrically and update the neighbor lists of all affected vertices similarly to the procedure described above.
Observe that when a vertex changes its level via the above routine, we can update the edge orientations while iterating over the lists.
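The list maintenance when a vertex changes its level can be sketched with a simplified structure that buckets each vertex's neighbors by the neighbors' levels. Sets are used here only for brevity; the paper's version keeps doubly-linked lists plus a cross pointer per edge so that each neighbor update costs $O(1)$:

```python
def move_vertex(v, new_level, level, nbrs_by_level):
    """Move v to new_level in the hierarchy and update the adjacency
    structure. nbrs_by_level[u][l] is the set of u's neighbors that
    currently sit on level l. Simplified sketch with sets.
    """
    old = level[v]
    level[v] = new_level
    # v's own buckets are keyed by the *neighbors'* levels and hence do
    # not change; but every neighbor w must move v from bucket `old`
    # to bucket `new_level`.
    for bucket in list(nbrs_by_level[v].values()):
        for w in bucket:
            nbrs_by_level[w].get(old, set()).discard(v)
            nbrs_by_level[w].setdefault(new_level, set()).add(v)
```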
Properties. We summarize the properties of the data structure in the following lemma and present its proof in Appendix B.1.
Lemma 7.
The level data structure is deterministic and has the following properties:

It maintains an orientation of the edges such that the outdegree of each vertex is $O(\alpha)$, where $\alpha$ is the current arboricity of the graph.

Inserting and deleting an edge takes polylogarithmic amortized time, and each update flips the orientations of at most polylogarithmically many edges (amortized).

Suppose the graph has arboricity $\alpha$ and set $i_\alpha = \lceil \log \alpha \rceil + c$ for a suitable constant $c$. Then for all groups $G_i$ with $i > i_\alpha$ and all levels $\ell \in G_i$, we have that $V_\ell = \emptyset$.

For each (oriented) edge $(u,v)$ with $u \in V_\ell$ and $v \in V_{\ell'}$ it holds that $\ell \le \ell'$.

Returning a value $\tilde{\alpha}$ with $\alpha \le \tilde{\alpha} = O(\alpha)$ takes polylogarithmic time, where $\alpha$ is the current arboricity of the graph.
3 Explicit Coloring with $O(\alpha \log n)$ Colors
We present an algorithm that maintains an $O(\alpha \log n)$-coloring using the level data structure from Section 2. To obtain our coloring, we will assign disjoint color palettes to all levels of the data structure. Our main observation is that since the level data structure guarantees that each vertex at a level of group $G_i$ has at most $\beta \cdot 2^i$ neighbors at its own level, it suffices to use $O(2^i)$ colors for each level of $G_i$. Then a geometric sum argument yields that we only use $O(\alpha \log n)$ colors in total. As before, we do not require an upper bound on $\alpha$ in advance; the number of colors only depends on the current arboricity of the graph.
Initialization. Again, assume that when the data structure is initialized, we are given a graph with $n$ vertices and no edges. For this graph, we build the level data structure from Section 2. Furthermore, to each level in a group $G_i$ we assign a new palette of $2\beta \cdot 2^i$ colors, where $\beta$ is as in Lemma 7. At the very beginning, we assign a random color to each vertex.
Note that the above choice of the color palettes implies that the color palettes of any two distinct levels are disjoint.
Updates. Now suppose that an edge $\{u,v\}$ is inserted or deleted. We process this update using the update procedure of the level data structure. Whenever a vertex changes its level in the level data structure, we say that it is affected. We now provide a recoloring routine for affected vertices and for the endpoints $u$ and $v$.
For $u$ and $v$ we proceed as follows. If $u$ and $v$ are on different levels, then we do not have to recolor either of them (because the color palettes of different levels are disjoint). If $u$ and $v$ are on the same level and have different colors, we do nothing. If $u$ and $v$ are on the same level $\ell \in G_i$ and have the same color, then suppose w.l.o.g. that $u$ received its current color before $v$ was last recolored. Now we scan the corresponding list for the colors of all neighbors of $v$ on level $\ell$. By Lemma 7 there are at most $\beta \cdot 2^i$ such neighbors and, hence, they use at most $\beta \cdot 2^i$ different colors. Thus, there must be at least $\beta \cdot 2^i$ available colors for $v$ in the palette of level $\ell$, i.e., colors that are not used by any of the neighbors of $v$ on level $\ell$. From these available colors, we pick one uniformly at random and assign it to $v$. Note that $u$ is not recolored.
Whenever an affected vertex $w$ changes its level, we recolor $w$ as follows. Suppose that $w$ is moved to level $\ell \in G_i$. We consider the colors of the neighbors of $w$ on level $\ell$ by simply scanning the corresponding list. As before, this yields at least $\beta \cdot 2^i$ available colors. We assign to $w$ a color chosen uniformly at random among these available colors.
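The recoloring step can be sketched as follows; colors are modeled as (level, index) pairs so that palettes of distinct levels are automatically disjoint, and the interface is a simplification of ours, not taken from the paper:

```python
import random

def recolor(v, level_of, color, nbrs_same_level, palette_size):
    """Pick a fresh random color for v from the palette of v's level.

    Palettes of distinct levels are disjoint, so only same-level
    neighbors can conflict; the palette is more than twice as large as
    the number of same-level neighbors, so a free color always exists.
    """
    l = level_of[v]
    used = {color[w][1] for w in nbrs_same_level[v]
            if w in color and color[w][0] == l}
    available = [i for i in range(palette_size) if i not in used]
    color[v] = (l, random.choice(available))
    return color[v]
```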
Analysis. We start by analyzing the update time of the algorithm.
Lemma 8.
The expected amortized update time of the algorithm is polylogarithmic.
Proof.
By Lemma 7, the amortized update time for the level data structure is polylogarithmic. Now observe that the work for recoloring affected vertices can be charged to the work done by the level data structure: when the level data structure moves an affected vertex $w$ from level $\ell$ to a new level $\ell'$, it has to scan all neighbors of $w$ in the lists for the levels between $\ell$ and $\ell'$. When the data structure performs these operations, we can keep track of the colors of the neighbors of $w$ at the new level as described above. Thus, the cost for recoloring affected vertices can be charged to the running time analysis of the level data structure.
We are left to analyze the recoloring routine for vertices $u$ and $v$ which are on the same level $\ell \in G_i$. Note that for recoloring $v$, the algorithm spends time $O(2^i)$ because the list of same-level neighbors has size at most $\beta \cdot 2^i$ by Invariant 1. Now suppose that $v$ is recolored and stays on its level $\ell$. We show that in expectation it takes $\Omega(2^i)$ edge insertions to vertices on the same level until $v$ needs to be recolored again. Indeed, suppose that a new edge $\{v,w\}$ is inserted with $w$ on level $\ell$. When $v$ received its color, it randomly picked one of at least $\beta \cdot 2^i$ colors and thus it picked the same color as $w$ with probability at most $1/(\beta \cdot 2^i)$. Now let $X$ be the random variable which counts how many such edges from $v$ to vertices on the same level as $v$ are inserted until $v$ needs to be recolored. Observe that $X$ is geometrically distributed with success probability at most $1/(\beta \cdot 2^i)$. Thus, we have that $\mathbb{E}[X] \ge \beta \cdot 2^i$. This proves the claim. By charging $O(1)$ to each update operation, this gives that this recoloring step has an amortized update time of $O(1)$. This running time is subsumed by the update time for maintaining the level data structure. ∎

Lemma 9.

The data structure maintains an $O(\alpha \log n)$-coloring.
Proof.
First, recall that for each level in group $G_i$ we use $2\beta \cdot 2^i$ different colors and that for different levels the color palettes are disjoint. This implies that for each group $G_i$ we use $O(2^i \log n)$ colors, since $G_i$ contains $O(\log n)$ levels. Furthermore, by Lemma 7 each level $\ell \in G_i$ with $i > i_\alpha$ satisfies $V_\ell = \emptyset$, where $2^{i_\alpha} = O(\alpha)$. Thus, the geometric sum implies that the total number of colors used is at most $\sum_{i \le i_\alpha} O(2^i \log n) = O(2^{i_\alpha} \log n) = O(\alpha \log n)$. ∎
The above lemmas imply Theorem 1.
4 Dynamic Arboricity Decomposition
In this section, we present a data structure for maintaining an arboricity decomposition, i.e., we maintain a partition of the edges of a dynamic graph into edge-disjoint (undirected) forests. In particular, we show that any data structure for maintaining a low-outdegree edge orientation can be used to maintain such an arboricity decomposition, where the number of forests depends on the maximum outdegree, denoted by $d$ in the sequel. We stress that when $d$ depends on some parameter (e.g., the arboricity, which might increase or decrease after a sequence of edge insertions and deletions) then so does the number of forests maintained by our data structure. Using the level data structure from Section 2, this yields that we can maintain an arboricity decomposition with $O(\alpha)$ forests if the current graph has arboricity $\alpha$; the update time is polylogarithmic in $n$. We will use this data structure in the next section to give a deterministic implicit coloring algorithm.
For the rest of the section, we assume that we have access to some black-box data structure that maintains an orientation of the edges with update time $T(n)$ for some function $T$. We will show how to maintain a set of forests such that, if the maximum outdegree of a vertex is bounded by $d$ (which might change over time), the first $2d$ forests provide an arboricity decomposition of the graph and the remaining forests are empty.
Initialization and Invariants. We assume that at the beginning we are given a graph with $n$ vertices and no edges. For this graph, we build the black-box outdegree data structure. We initialize the forests $F_1, F_2, \dots$ such that each of them contains all vertices and no edges. Furthermore, for each vertex $v$ we store an array $b_v$ of bits and initially we set $b_v[j] = 0$ for all $j$.
For a vertex $v$, we let $d(v)$ denote the outdegree of $v$ in the black-box data structure. When running the data structure, we make sure that the following invariants hold for each vertex $v$:

For each $j \le d(v)$, either forest $F_{2j-1}$ or forest $F_{2j}$, but not both, contains an outedge of $v$.

No outedge of $v$ is assigned to a forest $F_{j}$ with $j > 2 d(v)$.

For all $v$ and $j$, it holds that $b_v[j] = 1$ iff one of the outedges of $v$ is assigned to forest $F_{2j-1}$ or forest $F_{2j}$.
Observe that when all of the invariants hold, then for each vertex $v$ we have that $b_v[j] = 1$ for $j \le d(v)$ and $b_v[j] = 0$ for $j > d(v)$. Thus, we have the desired arboricity decomposition. Further note that after the initialization of the data structure, all invariants hold.
Updates. Suppose that an edge is inserted into or deleted from the graph. We start by inserting or deleting, resp., the edge in the black-box data structure. Now the black-box data structure might either (1) flip the orientation of an existing edge, (2) add a new outedge to a vertex (due to an edge insertion), or (3) delete an outedge of a vertex (due to an edge deletion).
Let us start by considering Case (1), i.e., suppose the black-box data structure flips the orientation of an edge. Then we assume w.l.o.g. that the new orientation is $(u,v)$ and proceed as follows.
First, we add $(u,v)$ as an outedge to $u$. Let $F$ denote a forest of the pair $F_{2j-1}, F_{2j}$, where $j$ is the smallest index such that neither forest of the pair contains an outedge of $u$ (such a pair must exist by Invariant 1). Now we insert the edge into $F$ and set $b_u[j] = 1$. Note that after this procedure, all invariants for $u$ are satisfied.
Second, we remove the edge $(v,u)$ (with the old orientation). Let $j$ be such that the edge was stored in $F_{2j-1}$ or $F_{2j}$. We remove the edge from the corresponding forest and set $b_v[j] = 0$. Note that this might violate the invariants because now $v$ has no outedge in $F_{2j-1}$ and $F_{2j}$, but it might have one in $F_{2j'-1}$ or $F_{2j'}$ for some $j'$ with $j < j' \le d$, where $d$ is the outdegree of $v$ before the edge was deleted. We fix this in the next step.
Third, let $j$ be as before and set $j'$ to the largest integer such that $b_v[j'] = 1$. If $j' < j$ we do nothing (all invariants already hold). Otherwise ($j' > j$), we will essentially move the edge stored in forest $F_{2j'-1}$ or $F_{2j'}$ to forest $F_{2j-1}$ or $F_{2j}$. More concretely, let $e$ be the unique outedge of $v$ stored in $F_{2j'-1}$ or $F_{2j'}$ and remove $e$ from this forest. Now let $F$ denote one of the forests $F_{2j-1}, F_{2j}$ (in which $v$ has no outedge) and insert $e$ into $F$. Additionally, set $b_v[j] = 1$ and $b_v[j'] = 0$. This restores all invariants for $v$.
In Case (2) above, i.e., when the black-box data structure inserted an outedge for a vertex, we run only the first step described above. In Case (3), i.e., when the black-box data structure deleted an outedge of a vertex, we run the second and the third step of the above procedure.
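The bookkeeping behind these steps can be sketched as follows. Here `out_pairs[u][j]` models the pair of forests number `j` (0-indexed in this sketch; the paper's indexing may differ) and mirrors the bit array $b_u$; this is a simplified stand-in for the pointer-based implementation:

```python
def add_out_edge(u, w, out_pairs, b):
    """Step 1: u gained the out-edge (u, w). Put it into the first pair
    of forests that holds no out-edge of u, keeping the occupied pairs
    a prefix, and set the corresponding bit."""
    for j, pair in enumerate(out_pairs[u]):
        if pair[0] is None and pair[1] is None:   # pair j is free
            pair[0] = (u, w)
            b[u][j] = 1
            return j
    out_pairs[u].append([(u, w), None])           # open a new pair
    b[u].append(1)
    return len(out_pairs[u]) - 1

def remove_out_edge(u, w, out_pairs, b):
    """Steps 2 and 3: the out-edge (u, w) disappeared. Remove it from
    its pair; if that leaves a hole in the prefix of occupied pairs,
    move the out-edge of the last occupied pair into the freed pair."""
    j = next(k for k, p in enumerate(out_pairs[u]) if (u, w) in p)
    pair = out_pairs[u][j]
    pair[pair.index((u, w))] = None
    b[u][j] = 0
    last = max((k for k, bit in enumerate(b[u]) if bit), default=-1)
    if last > j:                                  # compact the prefix
        moved = out_pairs[u][last]
        s = 0 if moved[0] is not None else 1
        out_pairs[u][j][s] = moved[s]
        moved[s] = None
        b[u][j], b[u][last] = 1, 0
```

An orientation flip of an edge is then handled as `remove_out_edge` at the old tail followed by `add_out_edge` at the new tail.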
Analysis. First, we show that the forests indeed provide an arboricity decomposition of the dynamic graph.
Lemma 10.
Let $d$ be the maximum outdegree of any vertex in the orientation maintained by the black-box data structure. Then the forests $F_1, \dots, F_{2d}$ provide an arboricity decomposition of the graph.
Proof.
Due to Invariant 1, each edge of the graph is stored in some forest. Thus, the union of all forests contains all edges of the graph. Hence, to prove the lemma, it suffices to prove the following two claims: (1) For each $j$, $F_j$ does not contain a cycle. (2) If $j > 2d$ then $F_j$ does not contain any edges.
We prove Claim (1) by contradiction. Suppose that $F_j$ contains a cycle $C$ over $k$ vertices. Since $C$ is a cycle, it contains exactly $k$ edges. By Invariant 1, each vertex has at most one outedge in $F_j$. Hence, $C$ must correspond to a directed cycle in $F_j$. Now consider the edge $(u,w)$ which closed the cycle when it was added to $F_j$. In the first step of the algorithm, we only added the edge $(u,w)$ to $F_j$ if $u$ had no outedge in $F_j$. This contradicts the fact that $(u,w)$ closes a directed cycle.
Claim (2) follows directly from Invariant 2 and the assumption that $d$ is the maximum outdegree of any vertex. ∎
Note that in the above proof we did not assume that $d$ is an upper bound on the maximum outdegree over the entire sequence of edge insertions and deletions. Instead, we only need that $d$ is the maximum outdegree in the current graph. Hence, the number of forests providing the arboricity decomposition will never be more than $2d$ at any point in time, even when $d$ changes over time.
Lemma 11.
If the (amortized or worst-case) update time of the black-box data structure is $T(n)$, then the (amortized or worst-case, resp.) update time of the above algorithm is $O(T(n))$.
Proof.
In each update, the algorithm spends time $T(n)$ for inserting or deleting, resp., an edge in the black-box data structure.
All the other steps can be implemented in $O(1)$ time per orientation change by maintaining the following values: (1) For each vertex $v$, we maintain the maximum index $j$ such that $b_v[j] = 1$ (if no such $j$ exists, we set the corresponding index to $0$). (2) For each vertex $v$ and each index $j$, we maintain pointers to the outedges of $v$ in $F_{2j-1}$ and $F_{2j}$ (if they exist). (3) For each edge, we store a pointer to the forest in which it is currently stored.
Now observe that the first step of the algorithm can be implemented in $O(1)$ time as follows: to find $F$, we use the index from (1). When we need to check whether $v$ has an outedge in $F_{2j-1}$ or $F_{2j}$, we can use the pointers from (2).
The second and third steps of the algorithm can be implemented similarly. When in the second step we have to remove an edge, we use the pointer from (3) to find its copy in $O(1)$ time. ∎
5 Implicit Coloring with $2^{O(\alpha)}$ Colors
We present a data structure for implicitly maintaining a $2^{O(\alpha)}$-coloring. The data structure has polylogarithmic update time and it provides a query operation which in time $O(\alpha \log n)$ returns the color of a vertex $v$. For planar graphs (which have arboricity at most $3$) this implies that we can maintain an $O(1)$-coloring with polylogarithmic update time and query time $O(\log n)$.
Our algorithm maintains the arboricity data structure of Corollary 6 together with a data structure maintaining the forests of the arboricity decomposition. The latter assigns a unique root to each tree in the forests. Our main observation is that for any two adjacent vertices $u$ and $v$, there is a tree $T$ containing the edge $\{u,v\}$ such that the distances of $u$ and $v$ to the root of $T$ have different parity. Now the query operation for a vertex $v$ picks the color of $v$ based on the parities of $v$'s distances to the roots of the trees containing $v$.
Initialization. We assume that initially the graph has vertices and no edges. For this graph, we build the arboricity decomposition presented in Corollary 6. Furthermore, each of the forests maintained by the arboricity data structure is equipped with the data structure from the following lemma.
Lemma 12.
There exists a data structure for maintaining a dynamic forest with the following properties:

Inserting an edge $\{u,v\}$ into the forest can be done in $O(\log n)$ time, where $u$ and $v$ are in different trees before the edge insertion.

Deleting an edge from the forest takes $O(\log n)$ time.

The data structure assigns a unique root to each tree in the forest.

For a given vertex $v$, the distance of $v$ to the root of the tree containing $v$ can be reported in $O(\log n)$ time.
The lemma is a simple application of dynamic trees or top trees [1] and we prove it in Appendix B.2.
Updates. Suppose an edge is inserted into or deleted from the graph. Then we proceed as follows. First, we insert or delete, resp., the edge in the data structure from Corollary 6. Second, whenever the arboricity decomposition inserts or deletes an edge in one of its forests, we insert or delete that edge in the corresponding forest of the data structure from Lemma 12.
Queries. When we receive a query for the color of a vertex $v$, we proceed as follows. For each of the forests we identify the tree containing $v$. Let $T_1, \dots, T_K$ denote these trees. For each $T_k$, we determine the distance of $v$ to the root of $T_k$ using the data structure from Lemma 12. Now for each $k$, we set $x_k = 0$ if the distance has even parity and $x_k = 1$ otherwise. We define the color of $v$ to be the vector $x = (x_1, \dots, x_K)$.
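The query can be sketched as follows. `StaticTree` is a toy stand-in for the dynamic-tree structure of Lemma 12: it answers root-distance queries by walking parent pointers rather than in $O(\log n)$ time, but the color computation on top of it is the same:

```python
class StaticTree:
    """Toy stand-in for the dynamic-tree structure of Lemma 12:
    stores parent pointers and answers root-distance queries."""
    def __init__(self, parent):
        self.parent = parent          # parent[v] is None for a root

    def root_distance(self, v):
        d = 0
        while self.parent.get(v) is not None:
            v = self.parent[v]
            d += 1
        return d

def query_color(v, forests):
    """v's implicit color: the vector of parities of v's root distances,
    one coordinate per forest. Adjacent vertices share an edge in some
    forest, so their root distances there differ by exactly one and the
    vectors differ in that coordinate; hence the coloring is proper."""
    return tuple(f.root_distance(v) % 2 for f in forests)
```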
Analysis. We start by showing that we indeed obtain a proper coloring.
Lemma 13.
The data structure maintains an implicit $2^{O(\alpha)}$-coloring.
Proof.
Consider any edge $\{u,v\}$. We show that the query procedure returns vectors $x^u$ and $x^v$ such that $x^u \neq x^v$. Indeed, the edge $\{u,v\}$ must be contained in one of the forests maintained by the arboricity decomposition. Therefore, $u$ and $v$ are adjacent in some tree $T_k$ of that forest. Since $T_k$ has a unique root (by Lemma 12), the distances of $u$ and $v$ to the root of $T_k$ must have different parity. Thus, we obtain that $x^u_k \neq x^v_k$ and, hence, $x^u \neq x^v$.

Furthermore, the total number of used colors is $2^K = 2^{O(\alpha)}$ since there are only two possibilities for each entry of the vector $x$. ∎
Lemma 14.
The amortized update time of the data structure is polylogarithmic. The query time of the algorithm is $O(\alpha \log n)$.
Proof.
First, note that the amortized update time of the data structure from Corollary 6 is polylogarithmic. This implies that, amortized per update, the arboricity decomposition inserts or deletes at most polylogarithmically many edges from the forests. For each such inserted or deleted edge it takes $O(\log n)$ time to update the edge in the data structure from Lemma 12. This gives that the total amortized update time is polylogarithmic.
When answering a query for a vertex $v$, for each of the $K = O(\alpha)$ trees containing $v$ we need to query the distance of $v$ to the root node of the tree. Each of these queries takes $O(\log n)$ time by Lemma 12. Hence, the total query time is $O(\alpha \log n)$. ∎
The two lemmas above imply Theorem 2.
6 Implicit Coloring with $O\!\left(\frac{\alpha}{\log \alpha} \log n\right)$ Colors
We present a data structure maintaining an implicit $O\!\left(\frac{\alpha}{\log \alpha} \log n\right)$-coloring. The data structure has polylogarithmic amortized update time and a query time that depends only on $\alpha$ and $\log n$.
We will now focus on the case of moderate arboricity and provide an algorithm maintaining the claimed coloring; we will only come back to the general case at the very end of the section when we prove Theorem 4. To obtain the result, we use the level data structure described in Section 2 and an idea similar to that of Section 3. Recall that in Section 3 we used a disjoint color palette of $O(2^i)$ colors for each level in group $G_i$. Thus, for all levels in $G_i$ we used $O(2^i \log n)$ colors in total. Now we improve upon this result by providing a query procedure which only uses $O(2^i \cdot \log n / \log \tilde{\alpha})$ colors per group $G_i$. More concretely, we will partition the group $G_i$ into subgroups such that for each subgroup the query procedure only uses $O(2^i)$ colors.
Subgroups. Recall from Section 2 that the level data structure contains $O(\log^2 n)$ levels and that each group $G_i$ contains $\lceil \log n \rceil$ consecutive levels. Now let $\tilde{\alpha}$ be an approximation of the arboricity of the graph with $\alpha \le \tilde{\alpha} = O(\alpha)$. We partition each group into subgroups of $\lceil \log \tilde{\alpha} \rceil$ consecutive levels each. Formally, for each group $G_i$ and each $j$, subgroup $S_{i,j}$ contains the $j$'th block of $\lceil \log \tilde{\alpha} \rceil$ consecutive levels of $G_i$. Thus, there are $O(\log n / \log \tilde{\alpha})$ subgroups per group.
Note that the subgroup partition depends on $\tilde{\alpha}$, which is an approximation of the current arboricity of the graph. This implies that as the arboricity of the graph changes (due to edge insertions and deletions), the subgroups will also change. However, the groups are not affected by this. Also, the algorithm does not need to maintain the subgroups explicitly. Instead, it is enough if the algorithm can compute, for a given level, in which subgroup the level is contained. Whenever we need to compute the subgroup of a level, we can assume that we know a suitable value for $\tilde{\alpha}$ with the desired properties via Property 5 of Lemma 7.
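Computing the (group, subgroup) pair of a level is simple arithmetic. A sketch with 1-indexed levels, where `group_size` is the fixed number of levels per group and `sub_size` is derived from the current arboricity estimate (both parameter names are ours):

```python
def subgroup_of(level, group_size, sub_size):
    """Return (group, subgroup) indices of a level: groups are fixed
    blocks of `group_size` consecutive levels, and each group is split
    into subgroups of `sub_size` consecutive levels. 1-indexed."""
    g = (level - 1) // group_size + 1
    offset = (level - 1) % group_size       # position inside the group
    return g, offset // sub_size + 1
```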
Furthermore, to each subgroup of a group $G_i$ we assign a new color palette with $2\beta \cdot 2^i$ colors, where $\beta$ is the constant from Lemma 7. In particular, the palettes of any two different subgroups are disjoint.
Initialization. As before, we assume that initially we are given a graph with $n$ vertices and no edges. For this graph we build the level data structure from Section 2. We also maintain a counter $t$ which counts the number of edge insertions and deletions processed by the data structure, but not the number of queries processed. Initially, we set $t = 0$. Furthermore, for each vertex $v$ we store a pair $(c_v, t_v)$ consisting of its color $c_v$ as well as the time $t_v$ when its color was last updated. We only store the most recent such pair, i.e., when $v$ is assigned a color at time $t'$ and there already exists a pair $(c_v, t_v)$ for $v$ with $t_v < t'$, we delete the old pair. At the beginning, we initialize the pairs of all vertices to $(0, 0)$, indicating that we assigned color $0$ after having seen $0$ updates. If at some time $t$ we store a pair $(c_v, t_v)$ with $t_v < t$ for a vertex $v$, then we say that $v$ is outdated; otherwise we say that $v$ is fresh.
Updates. Suppose that an edge is inserted or deleted. Then we insert or delete, resp., the edge in the level data structure and update it suitably (but do not change the colors stored for the vertices). Also, we increase the counter .
Queries. Suppose that the color of a vertex in a level is queried at time . Then we set and such that level is in group and subgroup . If is fresh, we output the color stored for . If is outdated, we recompute the color of as follows. First, we iterate over and find subsets and which are as follows: contains all neighbors of in some level with and , and contains all neighbors of in level . Second, for each that is outdated, we recursively recompute the color for . Note that after this step all vertices in are fresh. Third, let be the set of all vertices in that are fresh. Observe that (by Invariant 1 of the level data structure) while the color palette of subgroup has colors. Hence, there are at least colors which are not used by any vertex in ; we pick one of those and assign it to . Furthermore, we update the pair for vertex .
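The lazy query routine can be sketched as below. This is a simplified, hypothetical rendering: the real algorithm restricts attention to neighbors inside the queried vertex's subgroup and draws from that subgroup's palette, both of which are collapsed here into plain `level`/`palette` arguments.

```python
# Toy sketch of the lazy recoloring query: recursively refresh outdated
# lower-level neighbors first, then greedily pick an unused palette color.
# `level`, `adj`, `pair`, and `palette` are simplified stand-ins for the
# level data structure and the subgroup palettes of the actual algorithm.

def query_color(v, t, level, adj, pair, palette):
    color, stamp = pair[v]
    if stamp == t:                      # fresh: reuse the stored color
        return color
    blocked = set()
    for u in adj[v]:
        if level[u] < level[v]:         # lower-level neighbor: recurse first
            blocked.add(query_color(u, t, level, adj, pair, palette))
        elif pair[u][1] == t:           # same/higher level: only fresh ones count
            blocked.add(pair[u][0])
    c = next(c for c in palette if c not in blocked)  # a free color exists
    pair[v] = (c, t)                    # v is now fresh
    return c
```

The recursion only descends to strictly lower levels, so it terminates; properness for the fresh vertices then follows by the case analysis of Lemma 15.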
Analysis. In the next lemma we show that the coloring assigned to fresh vertices is proper. Note that it is enough to prove the claim for fresh vertices: the query routine only needs to provide a proper coloring as long as queries are not interrupted by an update, and only fresh vertices are assigned pairs with the most recent timestamp.
Lemma 15.
Let be an edge. Suppose that and are fresh and that has color and has color . Then .
Proof.
Let denote the level of and let denote the level of . Let and be such that and . If and are from different subgroups (i.e., ), then we must have that since we used disjoint color palettes for different subgroups. Now suppose that and are from the same subgroup (i.e., ). We distinguish two cases. First, suppose that . Then assume w.l.o.g. that received its color after . Thus, was in the set when was colored and, hence, we must have that . Second, suppose that . W.l.o.g. assume that . Then was contained in the set when received its color. Hence, we must have that . ∎
Lemma 16.
The algorithm maintains a coloring.
Proof.
We already showed in Lemma 15 that the obtained coloring is proper. It only remains to bound the number of colors used by the algorithm. First, observe that since each group consists of levels and each subgroup contains levels, there are subgroups per group. Additionally, for each subgroup we assigned a color palette of colors. Thus, the total number of colors used per group is . Now the same geometric sum argument as used in the proof of Lemma 9 yields that the data structure uses colors in total. ∎
Next, we analyze the update and query time of the algorithm. We show that it takes time to query the color of a vertex . This includes all necessary recursive computations for outdated vertices.
Lemma 17.
The update time of the algorithm is . Furthermore, the query time of the algorithm is .
Proof.
Since the update procedure of our algorithm only updates the level data structure and increases the counter , the update time is by Lemma 7.
Now let us analyze the query time of the algorithm. Consider any subgroup and let be the largest level in , i.e., . We prove by induction on the level that for any vertex at level the query time is . As the algorithm only recolors vertices in , we do not have to consider any other levels.
As base case suppose that . Then we have that . Furthermore, the computation of can be performed in time since (see Lemma 7).
Next, consider a vertex at level with . Then by Invariant 1 of the level data structure, we have that the set contains at most vertices at levels . By induction hypothesis, coloring each of these vertices takes time . Thus, coloring all of these vertices takes time . Computing the colors of vertices at level by computing the sets and takes time by the same arguments as in the base case. Thus, the total query time for this vertex is .
Now let us bound the total query time. Note that the difference for is maximized when , and in this case . Thus, the total query time is at most
Proof of Theorem 4.
To obtain the data structure claimed in the theorem, we run the above algorithm and the explicit algorithm from Theorem 1 in parallel. After each update, we use Property 5 of the level data structure to obtain an approximation of the arboricity with . If , then we will use the data structure from this section for subsequent queries. The previous lemmas imply that this provides a coloring with amortized update time and query time . If , we use the data structure from Theorem 1 for queries. This provides an coloring with amortized update time and query time because the coloring maintained by the data structure is explicit. ∎
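The combination used in this proof can be pictured as a small dispatcher. In the sketch below, `threshold` stands in for the (elided) arboricity cutoff, and both component data structures are stubs; this is an illustration of the dispatch logic, not the paper's implementation.

```python
# Hypothetical dispatcher combining the two data structures as in the proof
# of Theorem 4: both structures process every update, and after each update
# an arboricity estimate beta decides which one answers subsequent queries.

class CombinedColoring:
    def __init__(self, implicit, explicit, threshold):
        self.implicit, self.explicit = implicit, explicit
        self.threshold = threshold
        self.active = implicit          # arbitrary choice before any update

    def update(self, edge, insert, beta):
        self.implicit.update(edge, insert)   # both run on every update
        self.explicit.update(edge, insert)
        # low arboricity estimate -> implicit structure, otherwise explicit
        self.active = self.implicit if beta <= self.threshold else self.explicit

    def query(self, v):
        return self.active.query(v)
```

Running both structures in parallel only adds their update times, while each query is answered by whichever structure currently gives the better color/time trade-off.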
References
 [1] S. Alstrup, J. Holm, K. de Lichtenberg, and M. Thorup. Maintaining information in fully dynamic trees with top trees. ACM Trans. Algorithms, 1(2):243–264, 2005.
 [2] N. Banerjee, V. Raman, and S. Saurabh. Fully dynamic arboricity maintenance. In COCOON, pages 1–12, 2019.
 [3] L. Barba, J. Cardinal, M. Korman, S. Langerman, A. van Renssen, M. Roeloffzen, and S. Verdonschot. Dynamic graph coloring. Algorithmica, 81(4):1319–1341, 2019.
 [4] L. Barenboim and T. Maimon. Fully-dynamic graph algorithms with sublinear time inspired by distributed computing. In ICCS, pages 89–98, 2017.
 [5] E. Berglin and G. S. Brodal. A simple greedy algorithm for dynamic graph orientation. Algorithmica, 82(2):245–259, 2020.
 [6] A. Bernstein and C. Stein. Fully dynamic matching in bipartite graphs. In ICALP, pages 167–179, 2015.
 [7] A. Bernstein and C. Stein. Faster fully dynamic matchings with small approximation ratios. In SODA, pages 692–711, 2016.
 [8] S. Bhattacharya, D. Chakrabarty, M. Henzinger, and D. Nanongkai. Dynamic algorithms for graph coloring. In SODA, pages 1–20, 2018.
 [9] S. Bhattacharya, F. Grandoni, J. Kulkarni, Q. C. Liu, and S. Solomon. Fully Dynamic Coloring in Constant Update Time. CoRR, abs/1910.02063, 2019.
 [10] S. Bhattacharya, M. Henzinger, D. Nanongkai, and C. E. Tsourakakis. Space- and time-efficient algorithm for maintaining dense subgraphs on one-pass dynamic streams. In STOC, pages 173–182, 2015.
 [11] G. S. Brodal and R. Fagerberg. Dynamic representation of sparse graphs. In WADS, pages 342–351, 1999.
 [12] R. Duan, H. He, and T. Zhang. Dynamic edge coloring with improved approximation. In SODA, pages 1937–1945, 2019.
 [13] D. Frigioni, A. Marchetti-Spaccamela, and U. Nanni. Fully dynamic shortest paths in digraphs with arbitrary arc weights. J. Algorithms, 49(1):86–113, 2003.
 [14] M. Ghaffari, J. Hirvonen, F. Kuhn, and Y. Maus. Improved distributed delta-coloring. In PODC, pages 427–436, 2018.
 [15] M. Ghaffari and H. Su. Distributed degree splitting, edge coloring, and orientations. In SODA, pages 2505–2523, 2017.
 [16] A. V. Goldberg, S. A. Plotkin, and G. E. Shannon. Parallel symmetrybreaking in sparse graphs. SIAM J. Discrete Math., 1(4):434–446, 1988.
 [17] B. Hardy, R. Lewis, and J. M. Thompson. Tackling the edge dynamic graph colouring problem with and without future adjacency information. J. Heuristics, 24(3):321–343, 2018.
 [18] M. He, G. Tang, and N. Zeh. Orienting dynamic graphs, with applications to maximal matchings and adjacency queries. In ISAAC, pages 128–140, 2014.
 [19] M. Henzinger and P. Peng. ConstantTime Dynamic Coloring and Weight Approximation for Minimum Spanning Forest: Dynamic Algorithms Meet Property Testing. CoRR, abs/1907.04745, 2019.
 [20] H. Kaplan and S. Solomon. Dynamic representations of sparse distributed networks: A localitysensitive approach. In SPAA, pages 33–42, 2018.
 [21] S. Khot and A. K. Ponnuswami. Better inapproximability results for max-clique, chromatic number and min-3-lin-deletion. In ICALP, pages 226–237, 2006.
 [22] T. Kopelowitz, R. Krauthgamer, E. Porat, and S. Solomon. Orienting fully dynamic graphs with worst-case time bounds. In ICALP, pages 532–543, 2014.
 [23] L. Kowalik and M. Kurowski. Oracles for bounded-length shortest paths in planar graphs. ACM Trans. Algorithms, 2(3):335–363, 2006.
 [24] N. Linial. Distributive graph algorithms: Global solutions from local data. In FOCS, pages 331–335, 1987.
 [25] N. Linial. Locality in distributed graph algorithms. SIAM J. Comput., 21(1):193–201, 1992.
 [26] C. Monical and F. Stonedahl. Static vs. dynamic populations in genetic algorithms for coloring a dynamic graph. In GECCO, pages 469–476, 2014.
 [27] O. Neiman and S. Solomon. Simple deterministic algorithms for fully dynamic maximal matching. ACM Trans. Algorithms, 12(1):7:1–7:15, 2016.
 [28] K. Onak, B. Schieber, S. Solomon, and N. Wein. Fully dynamic MIS in uniformly sparse graphs. In ICALP, pages 92:1–92:14, 2018.
 [29] M. Parter, D. Peleg, and S. Solomon. Local-on-average distributed tasks. In SODA, pages 220–239, 2016.
 [30] S. Solomon and N. Wein. Improved dynamic graph coloring. In ESA, pages 72:1–72:16, 2018.
 [31] L. Yuan, L. Qin, X. Lin, L. Chang, and W. Zhang. Effective and efficient dynamic graph coloring. PVLDB, 11(3):338–351, 2017.
 [32] D. Zuckerman. Linear degree extractors and the inapproximability of max clique and chromatic number. Theory of Computing, 3(1):103–128, 2007.
Appendix A Further Related Work
The first result for dynamic coloring was obtained by Barenboim and Maimon [4], who showed how to maintain a coloring with worst-case update time . This result was later improved by the algorithms of [10, 8, 19], which obtain colorings with amortized constant update time. Duan et al. [12] provided an algorithm for edge coloring with polylogarithmic update time if . Furthermore, algorithms for dynamic coloring were also studied in practice, e.g., [26, 31, 17].
Computing graph colorings of static graphs has been an active research area in the distributed community over several decades, e.g., [24, 25, 16, 15, 14]. More recently, Parter et al. [29] also studied dynamic coloring algorithms in the distributed setting.
Providing dynamic algorithms for graphs with bounded arboricity has been a fruitful area of research. Such algorithms have been derived for fundamental dynamic problems including shortest paths [13, 23], maximal independent set [28], matching [6, 7, 27] or coloring [30].
Several papers studied the problem of dynamically maintaining low-outdegree edge orientations. The first such result was obtained by Brodal and Fagerberg [11] who obtained an orientation with amortized update time . He et al. [18] obtained a tradeoff between the outdegree and the update time of the algorithm. Kopelowitz et al. [22] obtained algorithms with worst-case update time and this result was improved by Berglin and Brodal [5]. Kaplan and Solomon [20] showed how to maintain edge orientations in the distributed setting when the local memory per node is restricted.
Appendix B Omitted Proofs
b.1 Proof of Lemma 7
Before we prove the lemma, let us first review the data structure by Bhattacharya et al. [10]. Since the data structure of [10] was developed for the densest subgraph problem, let us first introduce this problem and discuss its relationship with arboricity.
Arboricity and densest subgraph. The density of the densest subgraph is defined as . By the Nash-Williams Theorem, we have that for the arboricity of a graph it holds that . Thus, we get
(1) 
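For reference, the standard facts behind this step can be sketched as follows; this is the well-known two-sided relation between densest-subgraph density and arboricity, stated here since the exact form of Equation (1) is elided above.

```latex
% density of the densest subgraph and Nash-Williams' arboricity formula
d(G) = \max_{H \subseteq G} \frac{|E(H)|}{|V(H)|}, \qquad
a(G) = \max_{\substack{H \subseteq G \\ |V(H)| \ge 2}}
       \left\lceil \frac{|E(H)|}{|V(H)| - 1} \right\rceil .
% Since |V(H)| \le 2(|V(H)| - 1) whenever |V(H)| \ge 2, each ratio
% |E(H)|/(|V(H)|-1) is at most 2|E(H)|/|V(H)|, and hence
d(G) \;\le\; a(G) \;\le\; 2\,\lceil d(G) \rceil .
```

In particular, density and arboricity agree up to a factor of 2, which is the constant-factor relationship invoked in the proof of Property 5 below.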
The data structure of [10]. Recall from Section 2 that our data structure maintains levels. Furthermore, there are groups consisting of consecutive levels each. Each vertex at level satisfies the following two invariants: (1) has at most neighbors in and (2) has at least neighbors in levels , where is such that .
Now the data structure of [10] essentially works by running data structures in parallel, one data structure for each group . More precisely, each data structure stores all vertices and edges of the graph. Furthermore, assigns each vertex to exactly one of levels and the data structure ensures that the following invariants hold: (1) For each at level , has at most neighbors in and (2) for each at level , has at least neighbors in levels . The update procedure of the algorithm is the same as described in Section 2; it only takes into account that now there are only levels per data structure .
Note that in the data structure of [10] there exist copies of each vertex , while in our data structure each vertex is only stored once.
The data structure by [10] satisfies the following properties.
Lemma 18 (Bhattacharya et al. [10]).
The data structure satisfies the following properties:

If , then the highest level of does not contain any vertices, i.e., .

The amortized update time for maintaining is .
The lemma follows from Theorem 2.6 and Theorem 4.2 in [10].
Note that since the data structure of [10] maintains data structures in parallel, the total update time of the data structure becomes .
Proof of Lemma 7.
Let us now prove Lemma 7.
The claim about the update time in Property 2 follows from the analysis of the data structures in [10] (the analysis goes through if we assign potential to each edge insertion and deletion); the claim for the number of edge flips follows from the fact that with amortized update time the data structure cannot flip more than edges per update (amortized).
Property 5 follows from the fact that [10] show that a approximation of the densest subgraph can be maintained with amortized update time . Since the value of the arboricity and the densest subgraph only differ by a factor , we can simply run the data structure of [10] in the background to always have access to a value with the desired property.
Property 1 follows from Property 3 (which we prove below): By Property 3 we have that for all levels with and . Thus, all vertices with outedges must be in a level with with . By the invariants maintained by the data structure, we obtain that the outdegree of each such vertex is at most
We are left to prove Property 3 and proceed in two steps.
First, recall that the levels in the level data structure from Section 2 were denoted and those in the data structures from [10] were denoted . We prove that for all it holds that
(2) 
where and are such that .
Indeed, suppose that . Then we have that since only has levels and stores all vertices in . Thus, the desired subset relationship trivially holds.
Next, consider . Observe that by induction hypothesis we have that
This implies that for all it holds that , where and denote the degree of induced by the vertices in and , respectively. Since both the level data structure and the data structure only promote vertices with at least neighbors to the next level, any vertex which is promoted from level to in the level data structure must also be promoted from level to level in the data structure . Thus, the claim from Equation (2) holds.