1 Introduction and Related Work
The distributed vertex coloring problem is one of the defining and probably the most intensively studied problems in the area of distributed graph algorithms. In the standard version of the problem, we are given a graph $G=(V,E)$
that at the same time defines the communication network and the graph to be colored. In a distributed coloring algorithm, the nodes of $G$ communicate with each other in synchronous rounds by exchanging messages over the edges of $G$. Initially, the nodes of $G$ do not know anything about $G$ (except possibly for some global parameters such as, for example, the number of nodes $n$, the maximum degree $\Delta$, or approximations thereof), and at the end, each node must output a color such that adjacent nodes are colored with different colors and such that the overall number of colors is from a given restricted domain. If adjacent nodes can exchange arbitrarily large messages in every communication round, this distributed model is known as the $\mathsf{LOCAL}$ model, and if messages are restricted to $O(\log n)$ bits per edge in each round, the model is known as the $\mathsf{CONGEST}$ model [Pel00]. The time or round complexity of a distributed algorithm in the $\mathsf{LOCAL}$ or $\mathsf{CONGEST}$ model is the total number of rounds until all nodes terminate.
Early work on distributed coloring.
The distributed coloring problem was first studied in a seminal paper by Linial [Lin92], which essentially also started the whole area of local distributed graph algorithms. Linial showed in particular that any deterministic distributed algorithm for computing an $o(n)$-coloring of a ring network requires $\Omega(\log^* n)$ rounds. He also showed that in $O(\log^* n)$ rounds, it is possible to (deterministically) color arbitrary graphs of maximum degree $\Delta$ with $O(\Delta^2)$ colors. The lower bound was later extended to randomized algorithms by Naor [Nao91]. With a simple sequential greedy algorithm, one can color the vertices of a graph with at most $\Delta+1$ colors, and most of the work on distributed coloring was therefore also on solving the $(\Delta+1)$-coloring problem. Already when Linial's paper came out in 1987, it was clear that the randomized parallel maximal independent set algorithms developed shortly before by Luby [Lub86] and Alon, Babai, and Itai [ABI86] can be used to obtain a randomized distributed $O(\log n)$-round algorithm to compute a $(\Delta+1)$-coloring. In fact, even the naïve parallel coloring algorithm, where each node repeatedly chooses a uniformly random color among the still available colors and keeps the color if no neighbor concurrently tries the same color, leads to an $O(\log n)$-round distributed $(\Delta+1)$-coloring algorithm [Joh99].
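The naïve parallel algorithm is easy to simulate. The following is a minimal sketch (our illustration, not code from any of the cited papers): in every round, each still-uncolored node draws a uniformly random color from the colors not yet taken by its neighbors and keeps it if no neighbor drew the same color in the same round.

```python
import random

def trial_round(adj, palettes, color, rng):
    """One round of the naive algorithm: every uncolored node tries a
    uniformly random still-available color and keeps it if no neighbor
    tried the same color in this round."""
    tries = {}
    for v in adj:
        if color[v] is None:
            # available = own palette minus the colors of colored neighbors
            avail = palettes[v] - {color[u] for u in adj[v] if color[u] is not None}
            tries[v] = rng.choice(sorted(avail))
    for v, c in tries.items():
        if all(tries.get(u) != c for u in adj[v]):
            color[v] = c

# Example: a 5-cycle with Delta = 2, so every node gets palette {0, 1, 2}.
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
palettes = {i: {0, 1, 2} for i in range(5)}
color = {i: None for i in range(5)}
rng = random.Random(1)
for _ in range(100):  # terminates much earlier w.h.p.
    if all(c is not None for c in color.values()):
        break
    trial_round(adj, palettes, color, rng)
assert all(color[v] != color[u] for v in adj for u in adj[v] if color[v] is not None)
```

With palettes of size $\Delta+1$, an uncolored node always has an available color, so the process can only terminate in a proper coloring; the content of the $O(\log n)$-round bound is that it does terminate that fast w.h.p.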
A brief history of distributed coloring algorithms.
Given that there are very simple $O(\log n)$-time randomized distributed coloring algorithms, until relatively recently, most of the work was on deterministic distributed coloring algorithms. Given a coloring with more than $\Delta+1$ colors, it is straightforward to reduce the number of colors by one in a single round. The $O(\log^* n)$-time $O(\Delta^2)$-coloring algorithm of Linial [Lin92] therefore directly leads to an $O(\Delta^2+\log^* n)$-time distributed algorithm for $(\Delta+1)$-coloring, and thus in bounded-degree graphs, a $(\Delta+1)$-coloring can be computed in optimal $O(\log^* n)$ rounds. Over the years, the dependency on $\Delta$ has been improved in a long sequence of papers to the current best algorithm, which has a round complexity of $O(\sqrt{\Delta\log\Delta}+\log^* n)$ [GPS88, SV93, KW06, Kuh09, BEK14, Bar16, FHK16, BEG18, MT20]. For the $(2\Delta-1)$-edge coloring problem (i.e., for the same problem on line graphs), the time complexity has even been improved recently to quasi-polylogarithmic in $\Delta$ (plus $O(\log^* n)$) [BE11b, Kuh20, BKO20].
As a function of the number of nodes $n$, the fastest known deterministic algorithms have long been based on computing a so-called network decomposition (a decomposition of the graph into clusters of small diameter together with a coloring of the clusters with a small number of colors). Until a recent breakthrough by Rozhoň and Ghaffari [RG20], the best deterministic algorithm for computing such a network decomposition and the best resulting coloring algorithm had a round complexity of $2^{O(\sqrt{\log n})}$ [AGLP89, PS92]. Rozhoň and Ghaffari [RG20] improved this to $\mathrm{poly}\log n$ rounds. When focusing on the dependency on $n$, there is also work on computing vertex and edge colorings directly, without going through network decomposition [BE11a, FGK17, GHK18, Har19, Kuh20, GK21]. This has culminated in the recent work of Ghaffari and Kuhn [GK21], who showed that a $(\Delta+1)$-coloring can be computed in $O(\log^2\Delta\cdot\log n)$ rounds deterministically. The algorithm of [GK21] also works directly in the $\mathsf{CONGEST}$ model.
In light of the simple $O(\log n)$-time randomized distributed coloring algorithms from the late 1980s, work on faster randomized distributed coloring algorithms only started a bit more than 10 years ago. In [KSOS06], it was shown that computing an $O(\Delta)$-coloring can be done in $O(\sqrt{\log n})$ rounds, and in [SW10], this was even improved to $O(\log^* n)$ as long as $\Delta$ is sufficiently large as a function of $n$. As one of the results of the current paper, we show that for $\Delta\geq\log^{2+\varepsilon} n$ (for any constant $\varepsilon>0$), also a $(\Delta+1)$-coloring can be computed in only $O(\log^* n)$ rounds. The first improvements on the complexity of the $(\Delta+1)$-coloring problem were obtained in [SW10, BEPS16], and the first sublogarithmic-time algorithms for $(2\Delta-1)$-edge coloring and for $(\Delta+1)$-vertex coloring were subsequently developed in [EPS15, HSS18]. This development led to the algorithm of Chang, Li, and Pettie [CLP20], which in only $O(\log^* n)$ rounds manages to compute a partial vertex coloring such that all remaining uncolored components are of polylogarithmic size. In combination with the deterministic algorithm of [GK21], this leads to a randomized $(\Delta+1)$-coloring algorithm with a round complexity of $O(\log^3\log n)$. An adaptation of the algorithm of [CLP20] to the $\mathsf{CONGEST}$ model appeared in [HKMT21]. We will provide a more detailed discussion of the papers [SW10, BEPS16, EPS15, HSS18, CLP20, HKMT21] that are most relevant for the present work in Sec. 2.
From $(\Delta+1)$-coloring to degree+1 list-coloring.
In a list-coloring problem, each node $v$ is given as input a list or palette $\Psi(v)$ consisting of colors from some color space $\mathcal{C}$, and the objective is to compute a proper coloring of the graph, where each node is colored with a color from its list. In the degree+1 list coloring (D1LC) problem, the list of each node $v$ is of size (at least) $d(v)+1$, where $d(v)$ is the (initial) degree of $v$. The D1LC problem is a natural generalization of the $(\Delta+1)$-coloring problem that can still be solved by the naïve sequential greedy algorithm. Further, after computing a partial solution to a given coloring problem, the remaining coloring problem on the uncolored nodes in general is a D1LC problem, where the palette of each node consists of the colors not used by any of the neighbors. In some sense, the D1LC problem is the more fundamental and also the more natural problem than the $(\Delta+1)$-coloring problem. The D1LC problem is self-reducible: after computing a partial solution to a given D1LC problem, the remaining problem is still a D1LC problem. The D1LC problem also naturally appears as a subproblem when solving more constrained coloring problems. It is, for example, used as a subroutine in the distributed coloring algorithms of [BE18] for computing optimal colorings in graphs with chromatic number close to $\Delta$ and in the distributed $\Delta$-coloring algorithms of [PS95, GHKM18].
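Both the greedy solvability and the self-reducibility of D1LC can be checked on a toy instance. The sketch below (our illustration) colors a small graph greedily from degree+1-sized lists and then verifies that the residual instance after a partial coloring is again a D1LC instance.

```python
def greedy_d1lc(adj, palette):
    """Sequential greedy coloring; succeeds whenever |palette[v]| >= deg(v)+1."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        color[v] = min(palette[v] - used)  # nonempty since |palette[v]| > deg(v)
    return color

# Path a-b-c: degrees 1, 2, 1, so lists of sizes 2, 3, 2 suffice.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
palette = {"a": {1, 2}, "b": {2, 3, 4}, "c": {3, 4}}
col = greedy_d1lc(adj, palette)
assert all(col[v] != col[u] for v in adj for u in adj[v])

# Self-reducibility: color only "b"; the residual instance on the uncolored
# nodes (palettes minus the neighbors' used colors) is again a D1LC instance.
partial = {"b": 2}
res_palette = {v: palette[v] - {partial[u] for u in adj[v] if u in partial}
               for v in adj if v not in partial}
res_deg = {v: sum(1 for u in adj[v] if u not in partial) for v in res_palette}
assert all(len(res_palette[v]) >= res_deg[v] + 1 for v in res_palette)
```

The invariant is that coloring a neighbor removes at most one color from a node's palette while reducing its residual degree by one, so the list-size guarantee is preserved.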
Distributed D1LC algorithms.
First note that all the fastest deterministic and randomized $(\Delta+1)$-coloring algorithms discussed above also work for the more general $(\Delta+1)$-list coloring problem. In fact, many of those algorithms critically rely on the fact that they solve some version of the list coloring problem, e.g., [Bar16, FHK16, MT20, HSS18, CLP20, Kuh20, BKO20]. Further, by using techniques developed in [FHK16, Kuh20], one can deterministically reduce D1LC to $(\Delta+1)$-list coloring with only an $O(\log\Delta)$ multiplicative and a small additive overhead.^1 For deterministic algorithms, at least currently, there is therefore no significant gap between the complexities of $(\Delta+1)$-(list-)coloring and D1LC. This is however very different for randomized algorithms. While the best known $(\Delta+1)$-list coloring algorithm requires only $O(\log^3\log n)$ rounds [CLP20], the best known randomized algorithm that works for the D1LC problem is from [BEPS16] and it has a round complexity of $O(\log\Delta)+\mathrm{poly}\log\log n$. For general graphs, this can be as large as $\Theta(\log n)$ and it is therefore not faster than the simple randomized $O(\log n)$-time distributed coloring algorithms [ABI86, Lub86, Lin92, Joh99] from the 1980s and 1990s (those algorithms also work directly for the D1LC problem).

^1 If the round complexity is polynomial in $\Delta$, the multiplicative overhead even reduces to $O(1)$.
1.1 Our Contributions
The main technical contribution of our paper is an $O(\log^* n)$-time randomized distributed algorithm that, for a given D1LC problem, colors almost all nodes of an exponentially large degree range. More concretely, we prove the following technical theorem.
Theorem 1.
Let $G$ be an $n$-node graph with maximum degree at most $\Delta$ and let $V'\subseteq V$ be the nodes of $G$ of degree at least $\mathrm{poly}\log\Delta$ (a sufficiently large polylogarithmic function of $\Delta$). Then, for every positive constant $c$, there is an $O(\log^* n)$-round randomized distributed algorithm that for a given D1LC instance on $G$ computes a partial proper coloring of the nodes in $V'$ such that for every node $v\in V'$, the probability that $v$ is not colored at the end of the algorithm is at most $\Delta^{-c}$, even if the random bits of nodes outside the $O(1)$-hop neighborhood of $v$ are chosen adversarially.
Our main contribution follows from Theorem 1 using standard techniques. By applying methods originally used by Beck in the context of algorithmic versions of the Lovász Local Lemma [Bec91] and first adapted to the distributed context in [BEPS16], the probabilistic guarantees of the above theorem imply that after running the $O(\log^* n)$-round randomized distributed algorithm, w.h.p., the uncolored nodes form components of size $\mathrm{poly}(\Delta)\cdot\log n$. This phenomenon is nowadays known as graph shattering. One can now go through degree classes. Applied to the highest degree class, Theorem 1 implies that all nodes of sufficiently large polylogarithmic degree can be colored in $O(\log^* n)$ rounds w.h.p. For the lower degree classes, the $O(\log^* n)$-round algorithm colors all nodes, except for components of $\mathrm{poly}\log n$ size. To color those components, one can then apply the best deterministic algorithm of [GK21], which has a round complexity of $O(\log^2\Delta\cdot\log n)$ on $n$-node graphs of maximum degree $\Delta$ and thus a round complexity of $O(\log^3\log n)$ on graphs of size $\mathrm{poly}\log n$. Overall, we obtain the following main theorem.
Theorem 2.
There is a randomized distributed algorithm to solve the D1LC problem on $n$-node graphs in $O(\log^3\log n)$ rounds, w.h.p.
The fact that our randomized algorithm directly colors all nodes of polylogarithmic or larger degree has another interesting consequence. The following is a direct corollary of Theorem 1.
Corollary 1.
When all nodes have degree at least some sufficiently large $\mathrm{poly}\log n$, the D1LC problem can be solved w.h.p. in $O(\log^* n)$ rounds in the $\mathsf{LOCAL}$ model.
Note that Corollary 1 is a significant improvement over prior work. Prior to this paper, also for large $\Delta$, the best known D1LC algorithm had a round complexity of $O(\log\Delta)+\mathrm{poly}\log\log n$ (even for the standard non-list version of the problem). Also note that the statement of Corollary 1 can be obtained by a somewhat simpler algorithm and by a much simpler analysis than the full statement of Theorem 1.
We show in the appendix that the lower bound on the degrees can be reduced in the case of the $(\Delta+1)$-coloring problem.
Corollary 2.
When $\Delta\geq\log^{2+\varepsilon} n$ for a constant $\varepsilon>0$, the $(\Delta+1)$-(list-)coloring problem can be solved w.h.p. in $O(\log^* n)$ rounds in the $\mathsf{LOCAL}$ model.
Palette sparsification.
One of our key technical lemmas is a method to generate slack. One corollary of that lemma is the following palette sparsification result.
Theorem 3 (Informal).
For any graph $G$, sampling $O(\log^2 n)$ colors for each vertex $v$ of degree $d(v)$ from a set of $d(v)+1$ arbitrary colors allows for a proper coloring of $G$ from the sampled colors, w.h.p.
This was previously shown for $(\Delta+1)$-coloring [ACK19], $O(\Delta)$-coloring [AA20], and $(1+\varepsilon)\mathrm{deg}$-list-coloring [AA20], but in all those cases with $O(\log n)$-sized samples (which are necessary). Our result follows almost immediately from the frameworks of [ACK19, AA20] when given the slack generation result for sparse nodes (Proposition 1).
This has the following implications for the D1LC problem in several other computational models.
Corollary 3.
For finding a degree+1 list coloring in a general graph, w.h.p., there exists

- a single-pass dynamic streaming algorithm using $\tilde{O}(n)$ space;

- a non-adaptive $\tilde{O}(n^{1.5})$-time query algorithm; and

- an MPC algorithm running in $O(1)$ rounds on machines with memory $\tilde{O}(n)$.
We discuss these implications in Sec. 8.
2 Technical Overview
In the following, we first discuss the most important technical insights that lead to the current fast randomized distributed coloring algorithms. We next highlight why the existing techniques are not sufficient to also solve the degree+1 list coloring (D1LC) problem similarly efficiently, and where the main challenges are. We then give a high-level overview of how we overcome those challenges and at the same time also simplify the existing randomized distributed coloring algorithms.
Graph shattering.
The graph shattering technique was pioneered by Beck [Bec91] in the context of constructive solutions for the Lovász Local Lemma, and it was brought to the distributed setting by Barenboim, Elkin, Pettie, and Schneider [BEPS16]. The high-level idea is the following: One first runs a fast randomized algorithm that computes a partial solution for a given graph problem such that the unsolved parts only form small components (i.e., the randomized algorithm shatters the graph into small unsolved components). The remaining small components are then typically solved by a deterministic algorithm. More formally, let $G$ be an $n$-node graph of maximum degree $\Delta$ and assume that a randomized distributed algorithm computes an output for a (random) subset of the nodes. If every node $v$ remains without output with probability at most $\Delta^{-c}$, independently of the private randomness of nodes outside some constant neighborhood of $v$, for a sufficiently large constant $c$, then, w.h.p., the induced subgraph of the nodes with no output consists of connected components of size $\mathrm{poly}(\Delta)\cdot\log_\Delta n$. A formal statement of this appears, e.g., in [GHK18, CLP20]. With some additional tricks (or in the case of graph coloring, often even directly), the size of the remaining components can be reduced to $\mathrm{poly}\log n$, so that the randomized complexity of a problem becomes the time to shatter the graph plus the time to solve the remaining problem deterministically on graphs of size $\mathrm{poly}\log n$. Interestingly, it was shown by Chang, Kopelowitz, and Pettie [CKP19] that the randomized distributed complexity of all locally checkable labeling problems (to which all the typical coloring problems belong) on graphs of size $n$ is at least the deterministic complexity of the same problem on instances of size $\sqrt{\log n}$. The graph shattering method is therefore essentially necessary for solving such problems.
The role of slack.
At the core of all sublogarithmic-time randomized distributed (list-)coloring algorithms is the notion of slack. A node $v$ of degree $d(v)$ is said to have slack $s$ if it has an available color palette (or list) of size $d(v)+s$. If we are given a coloring problem in which all nodes have slack proportional to their degree, one can use an idea of Schneider and Wattenhofer [SW10] to color (most of) the graph in only $O(\log^* n)$ rounds as follows. Assume that for each node $v$, the palette is of size at least $(1+\rho)\cdot d(v)$ for some constant $\rho>0$. Each node chooses $x$ random colors from its list of colors, and gets permanently colored with one of those colors if no neighbor tries the same color. For each node $v$, each of the $x$ colors has a constant probability of being successful, and therefore each node gets permanently colored with probability $1-2^{-\Omega(x)}$. In the remaining coloring problem on the uncolored nodes, the degree of most nodes drops by a factor $2^{\Omega(x)}$, while the slack of a node cannot decrease. The slack-to-degree ratio of most nodes therefore increases from $\rho$ to $2^{\Omega(x)}\cdot\rho$. If we start with slack $\Omega(d(v))$ and appropriately increase the number $x$ of tried colors from round to round, after only $O(\log^* n)$ rounds, most nodes are permanently colored with a color from their list. In the $(\Delta+1)$-coloring problem, high-degree nodes however do not initially start with sufficient slack. In the D1LC problem, all nodes start with a color palette of size $d(v)+1$ and thus with slack $1$. If we want to apply the above fast coloring algorithm in those cases, we first have to create slack for the nodes.
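One round of this multi-trial idea can be sketched as follows (our illustration, not code from [SW10]): each uncolored node tries up to $x$ random available colors and keeps the first one that no neighbor tried in the same round; repeating with a growing $x$ gives the behavior described above.

```python
import random

def multi_trial_round(adj, palette, color, x, rng):
    """Each uncolored node tries up to x random available colors and keeps
    the first of them that no neighbor tried in the same round."""
    tries = {}
    for v in adj:
        if color[v] is None:
            avail = sorted(palette[v] - {color[u] for u in adj[v] if color[u] is not None})
            tries[v] = rng.sample(avail, min(x, len(avail)))
    for v, cand in tries.items():
        taken = set().union(*(tries.get(u, []) for u in adj[v]))
        for c in cand:
            if c not in taken:
                color[v] = c
                break

# Triangle in which every node has slack 3 (palette size 6, degree 2).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
palette = {v: set(range(6)) for v in adj}
color = {v: None for v in adj}
rng = random.Random(7)
x = 1
for _ in range(50):  # in fact done after very few rounds w.h.p.
    if all(c is not None for c in color.values()):
        break
    multi_trial_round(adj, palette, color, x, rng)
    x *= 2  # surviving nodes can afford to try more colors
assert all(color[v] != color[u] for v in adj for u in adj[v])
```

Since two neighbors never both keep a color that either of them tried, every accepted color is conflict-free, so the partial coloring stays proper throughout.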
Basic slack generation for $(\Delta+1)$-list coloring.
In principle, there are three ways of generating slack for a node $v$, and we use all three in our algorithm. The slack of $v$ increases if some neighbor permanently chooses a color that is not in $v$'s palette (we will refer to this as chromatic slack), and it also increases if two (non-adjacent) neighbors $u$ and $w$ of $v$ both permanently choose the same color. In addition, the slack of $v$ can be temporarily increased if the nodes are colored in different phases of an algorithm and some neighbors of $v$ are colored in a later phase than $v$. In the $(\Delta+1)$-list-coloring problem, slack can be generated for many nodes by applying the following simple one-round distributed algorithm. Each node $v$ tries a uniformly random color of its palette and is permanently colored with this color if no neighbor of $v$ tries the same color. Because all nodes choose from at least $\Delta+1$ different colors, it is not hard to see that every node has a constant probability of keeping the color it tries. In expectation (and with sufficiently high probability), node $v$ therefore gets slack $\Omega(s)$ if either the average probability of its neighbors picking a color outside $\Psi(v)$ is $\Omega(s/d(v))$, or if there are $\Omega(s)$ non-adjacent pairs of neighbors that try the same color (note that each color is only tried a constant number of times by nodes in $N(v)$ in expectation).
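The first two effects can be made concrete with a small slack-accounting helper (our illustration): under a partial coloring, the slack of $v$ is the number of colors remaining in its palette minus its number of still-uncolored neighbors.

```python
def slack(v, adj, palette, color):
    """Slack of v = |palette(v) minus colors used by v's colored neighbors|
    minus the number of v's still-uncolored neighbors."""
    used = {color[u] for u in adj[v] if color[u] is not None}
    uncolored = sum(1 for u in adj[v] if color[u] is None)
    return len(palette[v] - used) - uncolored

# v with 4 neighbors and a (deg+1)-sized palette: initial slack is 1.
adj = {"v": ["a", "b", "c", "d"], "a": ["v"], "b": ["v"], "c": ["v"], "d": ["v"]}
palette = {"v": {1, 2, 3, 4, 5}}
color = {"v": None, "a": None, "b": None, "c": None, "d": None}
assert slack("v", adj, palette, color) == 1

# Chromatic slack: neighbor a takes a color outside v's palette
# (one fewer competitor, but no color of v's palette is lost).
color["a"] = 99
assert slack("v", adj, palette, color) == 2

# Same-color pair: the non-adjacent neighbors b and c both take color 1.
color["b"] = color["c"] = 1
assert slack("v", adj, palette, color) == 3
```

The third effect, temporary slack, corresponds to pretending that neighbors scheduled for a later phase are already removed from the count of uncolored neighbors.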
Almost-clique decomposition.
All known sublogarithmic-time distributed (list-)coloring algorithms are based on the following high-level idea. As a first step, the nodes are partitioned into a set of nodes that are locally sparse and into so-called almost-cliques. A node $v$ is said to have sparsity $\zeta_v$ if the subgraph induced by its neighborhood contains at most $\binom{\Delta}{2}-\zeta_v\cdot\Delta$ edges. In an almost-clique decomposition, for some parameter $\varepsilon>0$, the nodes in the locally sparse set have sparsity $\Omega(\varepsilon^2\Delta)$, and each almost-clique $C$ is a set of at most $(1+\varepsilon)\Delta$ nodes for which each node in $C$ has at least $(1-\varepsilon)\Delta$ neighbors in $C$. A similar decomposition was first used by Reed [Ree98] and it was first used in the context of distributed coloring by Harris, Schneider, and Su [HSS18]. Since then, most fast randomized coloring algorithms in the distributed setting or related computational models are based on almost-clique decompositions [CLP20, HKMN20, HKMT21, PS18, CDP20, AA20, ACK19, CFG19].
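For illustration, local sparsity can be computed directly from this definition; the helper below (ours, using the normalization by the maximum degree $\Delta$ common in this line of work) counts the edges missing from a node's neighborhood.

```python
from itertools import combinations

def sparsity(v, adj, delta):
    """Local sparsity of v w.r.t. maximum degree delta:
    (C(delta, 2) - #edges inside N(v)) / delta."""
    nbrs = set(adj[v])
    edges_inside = sum(1 for a, b in combinations(sorted(nbrs), 2) if b in adj[a])
    return (delta * (delta - 1) / 2 - edges_inside) / delta

# On a (delta+1)-clique every node has sparsity 0 ...
K4 = {v: [u for u in range(4) if u != v] for v in range(4)}
assert sparsity(0, K4, 3) == 0.0
# ... while the center of a star sees no edges at all: sparsity (delta-1)/2.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
assert sparsity(0, star, 3) == 1.0
```

The two extremes match the intuition behind the decomposition: clique-like nodes end up in almost-cliques, while nodes with many missing neighborhood edges are classified as locally sparse.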
For locally sparse nodes, the required condition for slack generation described in the paragraph above is satisfied. One can therefore first let every node try a random color and let nodes keep their colors if no neighbor chooses the same color. The uncolored locally sparse nodes in this way get slack $\Omega(\varepsilon^2\Delta)$, and we can then delay coloring them to the end of the algorithm. The almost-cliques can in principle be handled efficiently because any two nodes within a single almost-clique are within distance $2$ in the graph. At least in the $\mathsf{LOCAL}$ model, computations within a single almost-clique can therefore be done in a centralized fashion. Note however that implementing this high-level idea is not trivial. If $\varepsilon$ is chosen large (e.g., as a small constant), the locally sparse nodes get a lot of slack and can be colored very fast, but this also creates a lot of dependencies between the different almost-cliques. If $\varepsilon$ is small, the dependencies between almost-cliques become easier to handle, while now the locally sparse nodes also obtain less slack. In [HSS18], the authors choose $\varepsilon$ to balance the time for coloring the almost-cliques and for afterwards coloring the locally sparse nodes. The algorithm was then improved by Chang, Li, and Pettie in a technical tour de force [CLP20]. The authors of [CLP20] define (and construct) a hierarchy of almost-cliques with different $\varepsilon$, and they show that this hierarchy can be used to shatter the graph in only $O(\log^* n)$ rounds, which in combination with the deterministic algorithm of [GK21] leads to the current fastest $O(\log^3\log n)$-round distributed $(\Delta+1)$-coloring algorithm. The approach of [CLP20] was simplified and adapted to the $\mathsf{CONGEST}$ model in [HKMT21].
In [HKMN20], it was in particular shown (in the context of the more constrained distance-2 coloring problem in $\mathsf{CONGEST}$) that one can compute a single almost-clique decomposition for a constant $\varepsilon$, and that after running one round in which every node tries to get colored with a random color of its list, each node $v$ in an almost-clique $C$ obtains, with large probability, slack proportional to the number of neighbors $v$ has outside $C$. This was used in [HKMT21] to color the almost-cliques in $O(\log^* n)$ rounds of the $\mathsf{CONGEST}$ model.
Extending the setup to degree+1 list coloring (D1LC).
When extending existing randomized coloring algorithms to the more restrictive D1LC problem, one faces a number of challenges. First, the notion of local sparsity and the almost-clique decomposition have mostly been defined for the $(\Delta+1)$-coloring problem [Ree98, EPS15, CLP20, ACK19, HKMT21, HNT21]: a node is locally sparse if the number of edges among its neighbors is small compared to a complete neighborhood of size $\Delta$, and almost-cliques have to be of size close to $\Delta$. Luckily, Alon and Assadi [AA20] gave a generalization of the almost-clique decomposition that can be used for the D1LC problem. The decomposition is mostly defined in a natural way. The definitions of local sparsity and almost-cliques are now w.r.t. the actual node degrees instead of w.r.t. $\Delta$, and the authors in addition define a node $v$ to be uneven if a constant fraction of the neighbors of $v$ have a sufficiently higher degree. They then show that the nodes of the graph can be partitioned into a set of locally sparse nodes, a set of uneven nodes, and several almost-cliques. Like the more standard almost-clique decompositions, this decomposition can be computed in constant time in the $\mathsf{LOCAL}$ model.
Based on the generalized almost-clique decomposition for the D1LC problem, we would like to proceed in a similar way as for the $(\Delta+1)$-coloring problem. As a first step, we would like to create slack for all nodes that are not in almost-cliques, i.e., for all nodes that are locally sparse and for all nodes that are uneven. The major obstacle that we have to overcome to achieve this is the problem of generating slack. This was already pointed out by Chang, Li, and Pettie [CLP20] as a major obstacle to the generalization of their result to the D1LC problem. In fact, [CLP20] suggests to first look at the simpler degree+1 coloring problem, where a node of degree $d$ is to be assigned a color from $\{1,\dots,d+1\}$.
Slack generation for degree+1 list coloring.
The D1LC problem brings a number of challenges for slack generation that are not present in the $(\Delta+1)$-list coloring problem. In the $(\Delta+1)$-(list-)coloring problem, nodes of degree $d(v)<\Delta$ have slack more than $\Delta-d(v)$ from the start because their palettes are of size $\Delta+1$. It is further well-established that a node of high degree and sufficiently large local sparsity obtains slack by a single round of trying a random color. Intuitively, this follows because the palettes of non-adjacent neighbors of $v$ either have a large overlap, leading to slack via sparsity, or they contain many colors that are not in $v$'s palette, leading to chromatic slack. In the D1LC problem, neither low-degree nodes nor locally sparse high-degree nodes get automatic slack. To illustrate the problems that can arise, we examine a few motivating examples.
The first and second examples in Fig. 1 illustrate that sparsity no longer guarantees slack in the D1LC setting. In the first example (Fig. 1(a), which is from [CLP20]), a sparse node $v$ is connected to two cliques with essentially non-overlapping palettes. Therefore, no slack can arise from the endpoints of a non-edge in the sparse node's neighborhood picking the same color. In fact, no matter how the neighbors of $v$ get colored, it is impossible to increase the slack of $v$ beyond a constant. The example hence shows that it can be impossible to derive any significant slack even for sparse nodes. Thus, at least some sparse nodes need to be treated differently. We will do this by giving them temporary slack. In the example in Fig. 1(a), the temporary slack is provided by coloring sparse nodes before coloring dense nodes. All the neighbors of $v$ are therefore colored after $v$, giving $v$ a large amount of temporary slack.
The second example (Fig. 1(b)) shows that the same can also hold for dense nodes. In the $(\Delta+1)$-list coloring setting, dense nodes receive slack proportional to their external degree due to the local sparsity implied by external neighbors. This is not the case in the degree+1 list coloring setting. The example consists of a node $v$ of high degree in a large almost-clique (making $v$ dense) and such that $v$ is adjacent to another small almost-clique. The two cliques have non-overlapping palettes as in the first example. However, here $v$ is colored as part of the dense nodes and it therefore does not automatically get temporary slack from all its dense neighbors. We handle this case by selecting a set of outliers in each almost-clique, which are handled earlier, before the remaining nodes of the clique (which we call the inliers). The inliers of a clique are nodes for which similar arguments as in the $(\Delta+1)$-list coloring case hold, and we will show that a constant fraction of each almost-clique are inliers. Hence, the outliers of an almost-clique get sufficient temporary slack from the inliers, which are colored later.

The next examples in Fig. 2 illustrate that even when slack exists in expectation, the usual concentration arguments might still break. The third example (Fig. 2(a)) is a case where slack exists in expectation but is impossible to achieve with concentration. Here $v$ is adjacent to nodes of much lower degree that also have another common (high-degree) neighbor. This only happens when a node is adjacent to nodes of significantly smaller degree, so this case disappears if we focus on coloring the nodes whose degrees fall into a bounded range, which we do in our main subroutine.
Finally, the fourth example (Fig. 2(b)) is a case where slack exists in expectation and with the probability we need, but cannot be achieved solely via same-colored pairs (as is standard for $(\Delta+1)$-(list-)coloring and is necessary for the use of martingale or Talagrand inequalities). In the example shown, neighbors of $v$ have small degrees and palette sizes, and heavy colors appear in many of their palettes, causing each such color to be tried by many neighbors of $v$ in expectation. Other colors are unlikely to provide slack, so slack generation must rely on those heavy colors. This case is captured by our analysis for heavy colors (in Sec. 7.2).
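The notion of a heavy color can be made concrete as follows (an illustrative helper of ours, not the paper's exact definition): assuming each neighbor tries one uniformly random color from its own palette, a color is heavy for $v$ if its expected number of tries inside $N(v)$ is at least some threshold.

```python
from collections import defaultdict

def heavy_colors(v, adj, palette, threshold):
    """Colors c whose expected number of tries among v's neighbors (each
    neighbor tries one uniform color from its own palette) is large."""
    expected = defaultdict(float)
    for u in adj[v]:
        for c in palette[u]:
            expected[c] += 1.0 / len(palette[u])
    return {c for c, e in expected.items() if e >= threshold}

# v has 6 low-degree neighbors; color 0 lies in every 2-color palette, so it
# is tried 3 times in expectation, while each private color only 0.5 times.
adj = {"v": list(range(6))}
palette = {u: {0, 10 + u} for u in range(6)}
assert heavy_colors("v", adj, palette, threshold=2.0) == {0}
```

In the figure's scenario, essentially all of $v$'s expected slack is concentrated on a few such colors, which is why concentration must be argued per heavy color rather than over same-colored pairs.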
Additional challenges.
The disparity in degrees and palettes in D1LC brings numerous additional challenges that go beyond slack generation. It affects the almost-clique decomposition properties, since outside high-degree nodes can now be adjacent to even all nodes of a given almost-clique. Colors are selected with widely different probabilities, and success/failure probabilities similarly vary. This impacts shattering, which is a property that depends on the maximum degree. Even just the lack of knowledge of the global maximum degree makes synchronization harder.
The previous state-of-the-art algorithm of [CLP20] depends heavily on the global bound $\Delta$. The intricacy of that algorithm and its analysis is such that it is unlikely to be an effective building block for a D1LC algorithm. The algorithm features a hierarchy of decompositions that are partitioned into "blocks", split by size, and combined into six different sets. These are whittled down in distinct ways, resulting in three final subgraphs that are finished off by two different deterministic algorithms. The analysis of just one of these sets runs a full 10 pages in the journal version [CLP20].
2.1 Algorithm Outline
At the beginning of the algorithm, we compute an almost-clique decomposition (ACD). The ACD computation returns a partition of the nodes of the graph into a set of sparse nodes, a set of uneven nodes, and into almost-cliques. Each sparse node has a neighborhood missing a constant fraction of its possible edges, each uneven node has a constant fraction of its neighbors of sufficiently larger degree, and in each almost-clique $C$, every $v\in C$ has at least $(1-\varepsilon)|C|$ neighbors in $C$ and at most $\varepsilon|C|$ neighbors outside $C$, for some constant $\varepsilon>0$. Note that the precise definitions of the ACD, sparsity, unevenness, and other related notions appear in Sec. 3. After computing the ACD, the algorithm has two main phases. We first color all the sparse and uneven nodes, and we afterwards color all the dense nodes (i.e., all the nodes that are in almost-cliques). In each phase, we further iterate through degree classes. We do this in order to be able to apply the standard shattering technique. For shattering to work, each node should succeed (in getting colored) with probability $1-\Delta^{-\Omega(1)}$. Our concentration arguments typically show that each node succeeds with a probability that is exponentially close to $1$ in (a polynomial of) its degree, and we therefore need to make sure that when dealing with nodes of degree up to $\Delta$, the minimum node degree is at least $\log^{c}\Delta$ for some sufficiently large constant $c$.
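The two almost-clique conditions are easy to check for a candidate set. The following sketch (ours; $\varepsilon$ and the thresholds are illustrative, the precise ACD definition appears in Sec. 3) verifies them:

```python
def is_almost_clique(C, adj, eps):
    """Check the two almost-clique conditions for a candidate set C: every
    v in C has at least (1-eps)|C| neighbors inside C and at most eps|C|
    neighbors outside C."""
    C = set(C)
    for v in C:
        inside = sum(1 for u in adj[v] if u in C)
        outside = len(adj[v]) - inside
        if inside < (1 - eps) * len(C) or outside > eps * len(C):
            return False
    return True

# A 4-clique with one pendant edge attached still qualifies for eps = 1/3 ...
G = {v: [u for u in range(4) if u != v] for v in range(4)}
G[0] = G[0] + [4]
G[4] = [0]
assert is_almost_clique([0, 1, 2, 3], G, eps=1/3)
# ... while a 4-node path does not.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
assert not is_almost_clique([0, 1, 2, 3], path, eps=1/3)
```

The point of the two conditions together is that an almost-clique has diameter at most 2 and every member sees almost all of it, which is what makes the clique-internal coordination in the second phase possible.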
Coloring the sparse and uneven nodes.
As observed above, unlike in the $(\Delta+1)$-list coloring problem, it is no longer true that a single round of random color trial creates sufficient slack for all sparse nodes. The high-level idea of our algorithm for coloring the sparse and uneven nodes is therefore as follows. We first select a certain subset $S$ of the sparse nodes for which a single random coloring round might not create sufficient slack. Each node in $S$ has a constant fraction of its neighbors still uncolored and outside $S$. We then run one round of random color trial to give slack to each sparse or uneven node outside $S$. With those things in place, we can then color as follows. In a first step, we color the nodes in $S$. Because nodes in $S$ have many uncolored neighbors outside $S$, they have temporary slack and can therefore be colored in $O(\log^* n)$ rounds by using the algorithm of [SW10]. Next, we can color the remaining sparse and uneven nodes. For those, we have generated enough slack in the initial random color trial step, and we can therefore also color those nodes in $O(\log^* n)$ rounds by using the algorithm of [SW10].
To understand the above algorithm in more detail, we first describe the classes of nodes for which it is relatively easy to show that one round of random color trial creates sufficient slack. First note that this is definitely the case for all uneven nodes. Each uneven node $v$ has $\Omega(d(v))$ neighbors of sufficiently larger degree, and each such neighbor has a constant probability of choosing a color that is not in $v$'s palette. A similar argument also works more generally if $v$ has large discrepancy, i.e., if the average probability of $v$'s neighbors trying a color outside $v$'s palette is constant. In this case, it is straightforward to see that the created chromatic slack is $\Omega(d(v))$ in expectation; it is however more tricky to guarantee it with sufficiently high probability (details appear in Sec. 7.3). All such nodes therefore also belong to the easy classes. Further, we call a node balanced if a large fraction of its neighbors have degree comparable to its own. For sparse balanced nodes, essentially the same arguments as for sparse nodes in the $(\Delta+1)$-list coloring case work, and the sparse balanced nodes therefore also belong to the easy classes. Finally, nodes with a constant fraction of dense neighbors get automatic temporary slack from the fact that the dense nodes are colored after all the sparse nodes, so these are also included.
An additional class of nodes that we can prove obtain slack are nodes for which a constant fraction of the neighbors are expected to try a color that is heavy. Here, a color is called heavy (for a node $v$) if the expected number of neighbors of $v$ trying this color is at least a sufficiently large constant. For those nodes, it is straightforward to see that the expected slack from neighbors picking the same color is $\Omega(d(v))$. However, in this case we have to invest some additional work to prove that this slack is also created with a sufficiently large probability (see Sec. 7.2). We can now define the set $S$ as follows: $S$ contains all nodes $v$ such that a constant fraction of the neighbors of $v$ belong to one of the classes described above. The final set consists of the nodes that are in none of the above classes and not in $S$. We call those nodes tough, and in Sec. 7.4 we show that tough nodes also obtain sufficient slack in the initial round of random color trial.

Coloring the dense nodes.
As a first step, each almost-clique $C$ elects a leader node and a set of outlier nodes. The leader of $C$ is the node of minimum slackability, where the slackability of a node is defined as the sum of its discrepancy and its sparsity. The slackability of the leader will also be referred to as the slackability of the almost-clique $C$. The set of outliers consists of the (approximately) third of the nodes in $C$ with the fewest common neighbors with the leader, of the sixth of the nodes in $C$ of maximum degree, and of the anti-neighbors of the leader in $C$ (i.e., the nodes that are not adjacent to the leader). The remaining nodes of $C$ (which still make up close to at least half of $C$) are called the inliers of $C$. We show that all the inliers of a clique have similar properties (and in particular neighborhoods and palettes that are near-identical, with differences on the order of the slackability of $C$).
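Given slackability values, the leader/outlier/inlier split described above can be sketched as follows (our illustration; the fractions 1/3 and 1/6 and the tie-breaking follow the informal description, not the formal definition of Sec. 3):

```python
def split_clique(C, adj, slackability):
    """Elect the leader (minimum slackability) of almost-clique C and mark as
    outliers: the ~third of C with fewest common neighbors with the leader,
    the ~sixth of C of largest degree, and the leader's anti-neighbors in C."""
    C = list(C)
    leader = min(C, key=lambda v: slackability[v])
    common = {v: len(set(adj[v]) & set(adj[leader])) for v in C}
    by_common = sorted(C, key=lambda v: common[v])
    by_degree = sorted(C, key=lambda v: -len(adj[v]))
    outliers = set(by_common[: len(C) // 3]) | set(by_degree[: len(C) // 6])
    outliers |= {v for v in C if v != leader and v not in adj[leader]}
    outliers.discard(leader)
    inliers = [v for v in C if v != leader and v not in outliers]
    return leader, outliers, inliers

# A 6-clique with slackability values 0..5: node 0 becomes the leader.
K6 = {v: [u for u in range(6) if u != v] for v in range(6)}
leader, outliers, inliers = split_clique(range(6), K6, {v: v for v in range(6)})
assert leader == 0
assert outliers == {1, 2}
assert inliers == [3, 4, 5]
```

In a perfect clique every node ties on the selection criteria, so the split is essentially arbitrary; the selection only becomes meaningful when the almost-clique has missing edges and degree disparities.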
After defining the leader and outliers of each almost-clique, we run one round of random color trial to create slack. For each almost-clique with slackability at least for a sufficiently large constant , we show that all the inliers obtain slack that is proportional to their slackability (and thus in particular at least proportional to the slackability of the almost-clique). The arguments for slack generation are similar to the corresponding arguments for sparse nodes (details in Sec. 7).
After slack generation, we select one more set in each almost-clique . For each almost-clique , we compute a “put-aside” set as follows. We first choose a random subset of of size , inducing a global set . To obtain , we then remove any node from with a neighbor in . Note that the sets of different almost-cliques are independent, and they can therefore be colored trivially even if all other nodes are already colored. We can therefore delay coloring those sets to the very end of the algorithm. With sufficiently high probability, the set of each almost-clique is of sufficiently large polylogarithmic size. We need the sets to create temporary slack for the other nodes in ultra-dense almost-cliques in which slack generation is a low-probability event.
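The put-aside construction can be sketched as follows. This is our illustrative reading of the description above (the function name and the pruning rule are our assumptions): candidate sets are sampled per almost-clique, and any sampled node with a sampled neighbor outside its own almost-clique is discarded, so the surviving sets are mutually non-adjacent.

```python
import random

def put_aside_sets(cliques, adj, sample_size, rng):
    """Sketch of the put-aside construction (our illustrative reading).

    Sample a candidate subset from each almost-clique, then discard any
    sampled node with a sampled neighbor outside its own almost-clique,
    so the surviving sets of different almost-cliques are non-adjacent
    and can be colored independently at the very end.
    """
    member = {v: i for i, K in enumerate(cliques) for v in K}
    candidates = {i: set(rng.sample(sorted(K), min(sample_size, len(K))))
                  for i, K in enumerate(cliques)}
    sampled = set().union(*candidates.values())
    return {i: {v for v in cand
                if all(u not in sampled or member.get(u) == i for u in adj[v])}
            for i, cand in candidates.items()}
```

On a toy instance with two almost-cliques joined by one cross edge, exactly the endpoints of that cross edge are pruned, leaving non-adjacent surviving sets.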
We can now proceed to color most of the nodes of the almost-cliques. In a first step, we color all the outliers. Because the outliers make up at most roughly half of each almost-clique, they have sufficient slack from the inliers, so they can be colored in rounds by using the algorithm of [SW10]. After coloring the outliers, we color most of the inliers of each clique. Here, we use the fact that the leader of each clique is connected to all the inliers of the clique and that the leader’s color palette is not too different from the color palettes of the other inliers. The leader therefore simply proposes a random one of its own available colors to each of the nodes in , such that no color is proposed more than once. It is remarkable that this simple primitive suffices to color nearly all the inliers, leaving only a portion proportional to the slackability of . The remaining inliers then have slack proportional to their remaining degree (where the slack in ultra-dense almost-cliques comes from the put-aside set ). We can therefore fully color them with the algorithm of [SW10]. At the very end, we finally color the nodes in the put-aside sets .
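The leader's synchronized proposal step can be sketched as follows. This is a minimal sketch, not the paper's pseudocode: the acceptance rule (an inlier accepts only a proposed color present in its own palette) and all names are our assumptions, and external conflicts are not modeled.

```python
import random

def leader_proposals(leader_palette, inliers, palettes, rng):
    """Sketch of the leader's synchronized color trial (our reading).

    The leader proposes a distinct random color from its own palette to
    each inlier; since all proposals differ, no two inliers of the same
    almost-clique can conflict, and an inlier fails only if the proposed
    color is missing from its own (near-identical) palette.
    """
    pool = rng.sample(sorted(leader_palette),
                      min(len(inliers), len(leader_palette)))
    proposals = dict(zip(inliers, pool))
    # Keep only proposals the receiving inlier can actually use.
    return {v: c for v, c in proposals.items() if c in palettes[v]}
```

Because the proposed colors are pairwise distinct, the accepted colors form a proper partial coloring inside the almost-clique in a single round.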
Putting everything together.
The combination of our algorithm for sparse and uneven nodes and our algorithm for dense nodes gives us an algorithm to color all nodes of degree in rounds, w.h.p. Applied to nodes in a lower degree range, the combined algorithm shatters the subgraph associated with the degree range in rounds. We apply the combined algorithm to the subgraphs induced by degree classes, starting from the higher degrees. Each time, we color the shattered graph with a deterministic algorithm whose running time decreases as the maximum degree of the graph goes down. Due to this decreasing cost, the overall running time is dominated by the deterministic algorithm applied to the second degree range, consisting of nodes of degree . In combination with the round deterministic list coloring algorithm of [GK21], this leads to an overall round complexity of .
3 Preliminaries and Definitions
Constants and evolving quantities.
Throughout the paper, we use subscripts for constant numerical quantities and parentheses for evolving ones, e.g., and are the original degree and palette of node , while and are the current degree and palette, i.e., taking into account that parts of the graph have been colored or turned off.
We consider as an upper bound on the maximum degree rather than the maximum degree itself. Let .
3.1 Slack, Sparsity, & Almost-Cliques
Definition 1 (Slack).
The slack of a node in a given round is the difference between the number of colors it has then available and its degree in that round.
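As a toy numeric illustration of Definition 1 (the numbers are our own example): two neighbors adopting the same color cost a node at most one palette color but two units of degree, so its slack grows.

```python
def slack(palette, uncolored_degree):
    # Definition 1: available colors minus current (uncolored) degree.
    return len(palette) - uncolored_degree

# Node v with 5 available colors and 4 uncolored neighbors has slack 1.
palette = {1, 2, 3, 4, 5}
assert slack(palette, 4) == 1

# Two of v's neighbors both get colored 3: one color lost, two neighbors gone.
palette.discard(3)
assert slack(palette, 2) == 2  # color reuse increased v's slack
```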
For any subset of the vertices , we denote by the set of edges between nodes of , and by the number of edges between nodes of . The next quantity (sparsity) measures the number of missing edges in a node’s neighborhood. Note that the definition used here is different from the one used when dealing with or , to address the variability of the palette sizes.
Definition 2 (Sparsity).
The (local) sparsity of node is defined as . Node is sparse if , and dense if .
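The quantity can be computed from the non-edges among a node's neighbors. The exact normalization used in this paper is the one in Definition 2; the per-degree form below is our illustrative stand-in.

```python
from itertools import combinations

def local_sparsity(v, adj):
    """Sketch: count the non-edges among v's neighbors, normalized by v's
    degree.  (The paper's exact normalization is given in Definition 2;
    this per-degree form is our illustrative stand-in.)"""
    nbrs = sorted(adj[v])
    if len(nbrs) < 2:
        return 0.0
    missing = sum(1 for a, b in combinations(nbrs, 2) if b not in adj[a])
    return missing / len(nbrs)
```

A node whose neighborhood is a clique has sparsity 0, while a node whose neighbors form an independent set has sparsity linear in its degree.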
To address the variety in size and content of the palettes that are inherent to , we use several quantities that measure how much a node’s palette differs from its neighbors’.
Definition 3 (Disparity, Discrepancy & Unevenness).
The disparity of towards is defined as . The discrepancy of node is defined as , and its unevenness is defined as . Node is discrepant if , uneven if .
It always holds that , and the two are equivalent in the non-list setting. In addition to the fixed quantities defined here, we also make use of the evolving variant later in the paper. Intuitively, the discrepancy is how many neighbors of a node are expected to try a color outside its palette, and the disparity is the contribution of an individual node to that quantity. Unevenness focuses on how much the palettes differ in size, ignoring their content.
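This intuition can be made concrete in a small sketch (function names are ours; the formulas simply follow the verbal description above: a neighbor's disparity towards a node is the probability that its uniform trial lands outside that node's palette, and the discrepancy sums these contributions).

```python
def disparity(pal_u, pal_v):
    # Probability that u's uniformly random trial falls outside v's palette.
    return len(pal_u - pal_v) / len(pal_u)

def discrepancy(v, adj, palettes):
    # Expected number of v's neighbors trying a color outside v's palette.
    return sum(disparity(palettes[u], palettes[v]) for u in adj[v])
```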
Sparsity and (more recently) unevenness have been key in the definition of graph decompositions known as almost-clique decompositions. Intuitively, such decompositions partition the graph into small-diameter connected components of dense and even nodes on the one hand, and possibly large sets of comparatively sparse or uneven nodes on the other hand. We use an almost-clique decomposition of [AA20], tailored to the setting. See also the earlier oriented ACD definitions of [HSS18, ACK19].
Definition 4 (()-ACD [AA20]).
Let be a graph and be parameters. A partition of , with further partitioned into , is an almost-clique decomposition (ACD) for if:

Every is sparse ,

Every is uneven ,

For every and , ,

For every and , .
As shown in [AA20], an ACD can be found in a constant number of rounds, for any and . We refer to the ’s as almost-cliques. For each , let , and for each , let be the almost-clique containing . Properties 3 and 4 of Definition 4 directly imply that for every , , and that for every , . It also follows that the diameter of each is at most 2.
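Properties 3 and 4 say, roughly, that every node of an almost-clique is adjacent to most of it and that the almost-clique is not much larger than anyone's degree. A checker sketch, with illustrative inequalities of our own choosing (the exact constants appear in Definition 4):

```python
def check_acd_cliques(cliques, adj, eps):
    """Sketch of verifying properties 3 and 4 of Definition 4; the exact
    inequalities are in the paper, the ones below are illustrative."""
    for K in cliques:
        for v in K:
            inside = sum(1 for u in adj[v] if u in K)
            if inside < (1 - eps) * (len(K) - 1):       # v sees most of K
                return False
            if len(K) > (1 + eps) * (len(adj[v]) + 1):  # K not too large
                return False
    return True
```

When these properties hold, any two nodes of an almost-clique share a common neighbor inside it, which is why the diameter of each almost-clique is at most 2.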
Almost-clique decompositions predating [AA20] were tailored to solving coloring problems. As such, they used a definition of sparsity involving the maximum degree of , had no notion of unevenness, and did not consider almost-cliques of size . Such ACDs could be found for any graph in a constant number of rounds of [HSS18] or [HKMT21]. The type of decomposition presented here, tailored to coloring problems and due to Alon and Assadi [AA20], can similarly be computed in a constant number of rounds of .
In the setting, there is a simple link between sparsity and slack: a simple randomized procedure gives slack to nodes that have sparsity. In this setting, sparsity is also useful in analyzing the structural properties of almost-cliques. The situation is very different in the setting, as will be evident from our analysis of slack generation in this paper. Notably, sparsity alone is no longer a sufficient quantity for slack generation and for the structural analysis of almost-cliques, which leads us to introduce slackability.
Definition 5 (Slackability).
The slackability of node is defined as . We also define the strong slackability as .
Schneider and Wattenhofer [SW10] showed that coloring can be achieved ultrafast if all nodes have slack at least proportional to their degree (and the degree is large enough). This is achieved by each node trying up to colors in a round, using the high bandwidth of the model. We use the following variant, which is similar to but slightly different from some previous results. For instance, the case where is a direct consequence of Lemma 2.1 in [CLP20].
Lemma 1.
Consider the list coloring problem where each node has slack . Let be globally known. For every , there is a randomized algorithm SlackColor that in rounds properly colors each node w.p. , even conditioned on arbitrary random choices of nodes at distance from .
We give a proof of Lemma 1 and a description of SlackColor in Appendix B for completeness.
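The multi-trial idea behind SlackColor can be sketched in a single round. This is a simplification of ours, not the algorithm of Appendix B: a node uses the model's bandwidth to try several colors at once, and with slack most of its palette is unblocked, so parallel trials succeed quickly.

```python
import random

def multi_trial_round(palette, num_trials, blocked, rng):
    """One round of the multi-trial idea (sketch): try several random
    palette colors at once and keep the first one that no conflicting
    neighbor blocks.  Returns None if every trial was blocked."""
    trials = rng.sample(sorted(palette), min(num_trials, len(palette)))
    for c in trials:
        if c not in blocked:
            return c
    return None
```

With trials per round, the failure probability drops exponentially in the number of trials, which is the engine behind the round complexity of SlackColor.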
3.2 Basic Primitive
The basic primitive in randomized coloring algorithms, which we call TryRandomColor, is for a node to try a random eligible color: it proposes the color to its neighbors and keeps it if it does not conflict with them. More formally, we run TryColor (Alg. 1) with an independently and uniformly sampled color . A more refined version gives priority to some nodes over others: for each node , we partition its neighborhood into – the nodes whose colors conflict with ’s – and . For the correctness of TryColor, should hold for each edge . The standard algorithm, where all nodes conflict with each other, corresponds to setting , for all . Repeating it leads to a simple round algorithm [Joh99].
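One synchronous round of this primitive with conflict sets can be sketched as follows (our simplified rendering, not Alg. 1 itself). The correctness condition is that every edge is covered by at least one endpoint's conflict set, so at most one endpoint of any monochromatic trial keeps its color.

```python
import random

def try_random_color_round(nodes, palettes, conflict, rng):
    """One round of TryRandomColor with priorities (sketch).

    Each node samples a uniform palette color; node v keeps its trial
    unless some node in its conflict set conflict[v] tried the same color.
    If every edge uv has u in conflict[v] or v in conflict[u], the
    resulting partial coloring is proper.
    """
    trial = {v: rng.choice(sorted(palettes[v])) for v in nodes}
    return {v: c for v, c in trial.items()
            if all(trial.get(u) != c for u in conflict[v])}
```

The asymmetric conflict sets implement priority: a node whose conflict set is empty always keeps its trial, while lower-priority neighbors defer to it.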
4 Coloring Sparse and Uneven Nodes
It is well established [SW10, EPS15] that if nodes have slack proportional to their degree, then they can be colored ultrafast ( time for high-degree nodes) by SlackColor. Sparse nodes have sparsity linear in their degree. This leads to linear slack in the coloring problem, using the following simple algorithm GenerateSlack.
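A minimal simulation of one such round of random color trial, in the spirit of GenerateSlack, is the following. This is our own sketch, not the paper's pseudocode (Alg. 2 may differ in details; all names are ours): non-adjacent neighbors that keep the same color, or a color outside a node's palette, increase that node's slack.

```python
import random

def generate_slack_round(nodes, adj, palettes, rng):
    """Each node tries a uniform palette color and keeps it iff no neighbor
    tried the same color (sketch of the random color trial)."""
    trial = {v: rng.choice(sorted(palettes[v])) for v in nodes}
    return {v: c for v, c in trial.items()
            if all(trial.get(u) != c for u in adj[v])}

def slack_after(v, adj, palette, kept):
    """Slack of an uncolored node v after the round: colors still available
    minus uncolored neighbors; neighbors keeping equal colors raise it."""
    used = {kept[u] for u in adj[v] if u in kept}
    remaining_deg = sum(1 for u in adj[v] if u not in kept)
    return len(palette - used) - remaining_deg
```

In the toy test below, two non-adjacent neighbors of a node keep the same color, raising its slack from 2 to 3.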
We also use GenerateSlack for , but as we have seen, this is not sufficient to generate slack for all nodes. Our solution is to identify a particular subset of sparse nodes (to be detailed shortly) that do not get slack in the classical way. We then show that these nodes can still be colored fast if they are colored before the other sparse nodes, . This is formalized in the following proposition.
Proposition 1.
Assume all nodes have degree at least for some universal constant . There is a round procedure that identifies a subset such that after running GenerateSlack in the subgraph induced by :

Each node in has uncolored neighbors in w.p. , and

Each node in has slack , w.p. .
For each node, the probability bounds hold even when conditioned on arbitrary random choices outside its 2-hop neighborhood.
The proof of Proposition 1 appears in Sec. 7.4. Assuming Proposition 1, we have the following simple procedure for coloring sparse nodes.
We now describe the set , along with informal versions of all the relevant definitions. We then sketch the arguments used in proving the slack generation result, including the distinct cases treated. We defer proof details to Sec. 7. We define and use a number of small epsilon constants in the formal definitions. For reference, here are their orders of magnitude in relation to : ; ; .
A sparse node is said to be balanced if most of its neighbors are of degree at least : . A node is discrepant if its discrepancy is at least a constant fraction of its degree: . This case subsumes the uneven case, in which a node has a constant fraction of its neighbors with a non-trivially larger degree. The easy nodes are the uneven nodes and the sparse nodes that are either balanced, discrepant, or with dense nodes making up a constant fraction of their neighborhood. These obtain slack with standard arguments.
Another class of nodes that receives permanent slack from GenerateSlack are the heavy nodes, defined informally as follows. The weight of a color equals the expected number of neighbors of that pick that color in GenerateSlack: . Let be the set of heavy colors for . A node is heavy if the total weight of its heavy colors is a constant fraction of its degree:
We can now define , the nodes that should be colored first. These are the sparse nodes that are neither heavy nor easy but have a constant fraction of easy neighbors. These easy neighbors provide temporary slack for the node if it is colored before them.
Formally, we define the following sets of nodes:
Proof intuition.
As mentioned, standard arguments suffice to show that easy nodes () get slack. Also, it is immediate that the nodes of get temporary reprieve from their waiting neighbors. The remaining sparse nodes fall into two types.
There are the heavy nodes (specifically those that are not easy), which have many “heavy colors” in their neighborhood. Each heavy color can contribute a large amount of slack in expectation, and a change in the color of a single node can decrease the expected total contribution of other nodes significantly. Thus, the usual concentration bounds do not apply.
We tackle this with a two-stage analysis. We show that there exists a partition of the color space into buckets with some nice properties and fix one such partition (only for the sake of the analysis). We view the random color choice as consisting of two steps: picking a bucket, and picking a color within that bucket. We can derive tight bounds on the number of nodes and the number of their neighbors that select a given bucket. We can then analyze each bucket in isolation, for which it suffices to obtain bounds on the expected number of nodes colored with each heavy color. We can then use a Hoeffding bound to get a concentration lower bound on the total number of nodes colored with heavy colors. This bound is significantly larger than the number of heavy colors, which implies that, w.h.p., many colors are reused, i.e., linear slack is generated.
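The conclusion of this analysis — many neighbors concentrated on few heavy colors force color reuse — is easy to see in a toy example of our own making. We count repeated trials rather than successfully kept colors, but the same counting bound applies: the number of distinct heavy colors taken is at most the number of heavy colors, and every repetition beyond that is slack.

```python
import random

def reuse_slack(num_neighbors, num_heavy_colors, rng):
    """Toy illustration: neighbors sampling from few heavy colors produce
    at most num_heavy_colors distinct values, so at least
    num_neighbors - num_heavy_colors repetitions (slack) occur."""
    picks = [rng.randrange(num_heavy_colors) for _ in range(num_neighbors)]
    return len(picks) - len(set(picks))
```

For instance, 20 neighbors drawing from 2 heavy colors always produce at least 18 repetitions, regardless of the random outcomes.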
The remaining sparse nodes that fall into none of the types above (i.e., they are light and neither in nor ) are said to be tough. One of our main results is that the tough nodes do get permanent slack from GenerateSlack (Alg. 3). At a high level, we orient the edges from high to low degree and sum the in- and out-degrees of the neighbors of a tough node. A gap exists between the sums due to the large number of unbalanced neighbors, which implies the presence of slack-providing non-edges. The finer details do not lend themselves to easy intuition, and we defer the discussion to the detailed presentation in Sec. 7.
5 Coloring Dense Nodes
We now give an algorithm for a graph containing only dense nodes. Once the sparse (and uneven) nodes have been colored, we are indeed left with a graph consisting only of dense nodes, so we can view as the subgraph induced by . In the original graph , at most an fraction of each dense node’s neighborhood is non-dense, so their degrees in are all at least their original degree times , and they fall into essentially the same degree range. Observe that an almost-clique decomposition of is still a valid decomposition of , as conditions 3 and 4 of Definition 4 remain satisfied. (The opposite is not true: after coloring the dense nodes, the sparse nodes may no longer be sparse.) We are, in a sense, using the self-reducibility property of the .
The algorithm (Alg. 5) builds on previous frameworks for randomized coloring ([HSS18, CLP20]), but with several notable changes. Differences from some or most previous approaches include:

Management of palette discrepancy (both in size and color composition), by separately treating those with the largest variance;

A procedure that gives each dense node slack proportional to its sparsity;

A procedure to give temporary slack to nodes within very isolated almost-cliques, for which the previous argument provides little slack, or slack with insufficient probability; and

A singleround procedure to color most nodes in an almostclique by synchronizing the colors they try.
Recall that . We say that is a low-slack almost-clique if . Let be the minimum over the nodes in . Note that the definitions for dense nodes, such as slackability, are in terms of , i.e., the subgraph induced by .
We first derive structural bounds on dense nodes in Sec. 5.1. We then treat steps 1, 2, 3, and 5 of the algorithm in individual subsections.
5.1 Slackability Bounds External and Anti-Degree
Definition 6 (External/antidegree).
For a node , let denote its almost-clique, its set of external neighbors, and its external degree. Similarly, let denote its set of anti-neighbors and its anti-degree.
In the setting, it was recently observed [HKMT21] that the sparsity of a node bounds its external and anti-degrees. As sparsity implies that a proportional amount of slack can be (probabilistically) obtained in this setting, this meant that nodes could be guaranteed to have external and anti-degrees bounded by their slack. We show an analogous result here, where strong slackability replaces sparsity.
Lemma 2.
There is a constant such that holds for every node in an almost-clique .
Proof.
Let be an external neighbor of , i.e., is a neighbor of in an almost-clique . Nodes and are mostly adjacent to other nodes of their almost-cliques: and , and therefore, .
This immediately implies that each such contributes to ’s strong slackability: if , then is part of at least non-edges in ’s neighborhood and thus contributes to ; otherwise, contributes to . ∎
Lemma 3.
There is a constant such that holds for any dense node .
Proof.
Let . We bound the unevenness via the degree sum of the nodes in :
(1) 
where, for a set , we let . There are only edges missing within ; thus, the first degree sum on the right-hand side above “misses” only the corresponding at most “half-edges”, that is,
To bound the second sum, let us rearrange it as a sum over , and recall that each node in has at least neighbors in , and (by the ACD property):