# Locally-Iterative Distributed (Δ+1)-Coloring below Szegedy-Vishwanathan Barrier, and Applications to Self-Stabilization and to Restricted-Bandwidth Models

We consider graph coloring and related problems in the distributed message-passing model. Locally-iterative algorithms are especially important in this setting. These are algorithms in which each vertex decides about its next color only as a function of the current colors in its 1-hop neighborhood. In STOC'93 Szegedy and Vishwanathan showed that any locally-iterative (Δ+1)-coloring algorithm requires Ω(Δ log Δ + log* n) rounds, unless there is "a very special type of coloring that can be very efficiently reduced" [SV93]. In this paper we obtain this special type of coloring. Specifically, we devise a locally-iterative (Δ+1)-coloring algorithm with running time O(Δ + log* n), i.e., below the Szegedy-Vishwanathan barrier. This demonstrates that this barrier is not an inherent limitation for locally-iterative algorithms. As a result, we also achieve significant improvements for dynamic, self-stabilizing and bandwidth-restricted settings:

- We obtain self-stabilizing distributed algorithms for (Δ+1)-vertex-coloring, (2Δ-1)-edge-coloring, maximal independent set and maximal matching with O(Δ + log* n) time. This significantly improves previously-known results that have O(n) or larger running times [GK10].
- We devise a (2Δ-1)-edge-coloring algorithm in the CONGEST model with O(Δ + log* n) time and in the Bit-Round model with O(Δ + log n) time. Previously-known algorithms had superlinear dependency on Δ for (2Δ-1)-edge-coloring in these models.
- We obtain an arbdefective coloring algorithm with running time O(√Δ + log* n). We employ it in order to compute proper colorings that improve the recent state-of-the-art bounds of Barenboim from PODC'15 [B15] and Fraigniaud et al. from FOCS'16 [FHK16] by polylogarithmic factors.
- Our algorithms are applicable to the SET-LOCAL model of [HKMS15].


## 1 Introduction

### 1.1 The Classical Model
Consider an n-vertex graph G = (V, E) of maximum degree Δ whose vertices host processors. The vertices communicate with one another over the edges of G in synchronous rounds. The problem that we are studying is how many rounds (also known as running time in the message-passing model of distributed computing) are required for computing a proper (Δ+1)-coloring of G. (A coloring φ is called proper if φ(u) ≠ φ(v) for every edge (u, v) ∈ E.) This is one of the most fundamental and well-studied distributed symmetry-breaking problems [15, 27, 49, 62, 47, 4, 6, 7, 8, 9, 12, 3, 22], and it has numerous applications to resource and channel allocation, scheduling, workload balancing, and to mutual exclusion [44, 29, 23, 35].

The study of distributed coloring algorithms on paths and cycles was initiated by Cole and Vishkin in 1986 [15], who devised a 3-coloring algorithm with O(log* n) time. (Unless said otherwise, algorithms that we discuss are deterministic.) The first distributed algorithm for the (Δ+1)-coloring problem on general graphs was devised by Goldberg and Plotkin in 1987 [27]. The running time of their algorithm is 2^{O(Δ)} + O(log* n). (log* n is a very slow-growing function, defined formally in Section 2.) Goldberg, Plotkin and Shannon [28] improved this bound to O(Δ²) + O(log* n). Linial [49] showed a lower bound of Ω(log* n). His lower bound applies to a more relaxed f(Δ)-coloring problem, for any, possibly quickly-growing, function f. Linial also strengthened the upper bound of [28], and showed that an O(Δ²)-coloring can be computed in O(log* n) time. (Via a standard color reduction, described, e.g., in Chapter 3 of [8], given an O(Δ²)-coloring one can compute a (Δ+1)-coloring in additional O(Δ²) rounds. Thus, Linial's algorithm also gives rise to a (Δ+1)-coloring in O(Δ² + log* n) time.)

In STOC'93, Szegedy and Vishwanathan [62] studied locally-iterative coloring algorithms. An algorithm is locally-iterative if it maintains a sequence φ_0, φ_1, …, φ_T of proper colorings, where φ_i is the coloring on round i, for every 0 ≤ i ≤ T, and T is the running time of the algorithm. On each round i, every vertex v computes its new color φ_{i+1}(v) based only on the colors φ_i(u), for u ∈ Γ(v) ∪ {v}, where Γ(v) is the 1-hop neighborhood of v. Szegedy and Vishwanathan showed upper and lower bounds on the number of colors into which an n-vertex graph of maximum degree Δ can be properly recolored within one single round, assuming that it was properly m-colored in the beginning of the round. Note, however, that for the lower bound of [62] to hold, the proper m-coloring of the graph is assumed to be arbitrary.

As a corollary of their upper bound, Szegedy and Vishwanathan [62] derived an improved upper bound of O(Δ log Δ + log* n) for (Δ+1)-coloring. (This upper bound was later re-derived in a somewhat more explicit way by Kuhn and Wattenhofer [47].) As a corollary of their lower bound, Szegedy and Vishwanathan [62] showed a heuristic lower bound on the number of rounds that a locally-iterative algorithm needs in order to compute a (Δ+1)-coloring. Their lower bound (Theorem 12 in [62], marked as "heuristic") is Ω(Δ log Δ + log* n). (Strictly speaking, it says that Ω(Δ log Δ) rounds are required to decrease the number of colors from O(Δ²) to Δ+1. By Linial's lower bound [49], Ω(log* n) rounds are required to compute an O(Δ²)-coloring in the first place.)

All (Δ+1)-coloring algorithms developed before 2009 were locally-iterative. (See Table 1 below for a summary of known locally-iterative algorithms.) In [5, 44] the first- and the second-named authors of the current paper, and independently Kuhn, devised an O(Δ + log* n)-time (Δ+1)-coloring algorithm, using defective colorings. (See Section 2 for the definition of this notion.) The algorithms of [5, 44, 9] are, however, not locally-iterative. This direction was further pursued by the first-named author in [3], who devised an algorithm with running time O(Δ^{3/4} log Δ + log* n), using arbdefective colorings. (See Section 2; the notion originates from [6].) This result was further improved by Fraigniaud et al. [22], who devised the current state-of-the-art (Δ+1)-coloring algorithm with running time O(√Δ log^{2.5} Δ + log* n). The algorithms of [5, 44, 3, 22] are all not locally-iterative, as they all decompose the graph into many subgraphs, compute colorings for them, and carefully combine them into a single coloring for the original graph. In view of Szegedy-Vishwanathan's heuristic lower bound (henceforth, the SV barrier), this seemed to be inevitable.

In the current paper we show that this is not the case, and devise the first locally-iterative (Δ+1)-coloring algorithm with running time O(Δ + log* n), i.e., below the SV barrier of Ω(Δ log Δ + log* n). Unlike previous locally-iterative algorithms, our algorithm does not necessarily reduce the number of employed colors in every round. Instead, if the initial number of colors is O(Δ²), it can remain O(Δ²) for almost the entire execution of the algorithm, and then "suddenly" drop to Δ+1 in the last few rounds. The colorings φ_0, φ_1, …, φ_T that it computes on rounds 0, 1, …, T, respectively, are all proper, but they are not at all arbitrary. Rather, they have special properties that guarantee that within O(Δ + log* n) rounds the number of colors reduces to Δ+1.

Interestingly, in their seminal paper [62], Szegedy and Vishwanathan mention the possibility of such a phenomenon. In the preamble to their aforementioned "heuristic" theorem (Theorem 12) they wrote:

"There is a possibility, however, that after a few steps of iteration we arrive at a very special type of coloring that can be very efficiently reduced in steps thereafter. Assuming that this does not happen, the results of the previous section give the following theorem:
Theorem 12 (heuristic): To decrease the number of colors from Q to Δ+1 it takes Ω(Δ log Δ) steps, whenever Q = poly(Δ). In particular, to decrease the number of colors from O(Δ²) to Δ+1 requires Ω(Δ log Δ) steps." (The argument of [62] applies, in fact, to reducing the number of colors to O(Δ), as opposed to Δ+1.)

We also use our new locally-iterative technique to devise improved coloring algorithms that are not locally-iterative. Specifically, we obtain a ((1+ε)Δ)-coloring, for an arbitrarily small constant ε > 0, and a (Δ+1)-coloring, whose running times improve the best previously-known bound of O(√Δ log^{2.5} Δ + log* n) of Fraigniaud et al. [22] by polylogarithmic factors.

### 1.2 Applications
In the Conclusions section of the paper [47] by Kuhn and Wattenhofer, the authors explain why locally-iterative algorithms are particularly important from a practical perspective. They mention "emerging dynamic and mobile distributed systems such as peer-to-peer, ad-hoc, or sensor networks" as examples of networks for which such algorithms can be especially suitable. They also point out that locally-iterative algorithms are typically communication-efficient.

In this paper we demonstrate that our novel locally-iterative algorithms indeed provide dramatically improved bounds, both for dynamic self-stabilizing scenarios and for scenarios in which communication-efficiency is crucial. In the next three subsections we discuss these applications of our locally-iterative technique one after another.

#### 1.2.1 Self-Stabilizing Symmetry Breaking
The self-stabilizing setting was introduced by Dijkstra [16], and has been intensively studied since then. See, e.g., Dolev's monograph [17] and the surveys by Herman [34] and by Guellati and Kheddouci [29]. In the context of (Δ+1)-coloring, the setting we consider is the following one. Every vertex of a graph G with maximum degree at most Δ and at most n vertices has a unique ID number. The memory of each vertex consists of two areas. The Read-Only Memory (henceforth, ROM) contains hard-wired data, such as the vertex ID, the degree bound Δ, a bound n on the number of vertices, and program code. The ROM is failure-free, and its contents cannot be changed during execution. The other area of the memory is Random Access Memory (henceforth, RAM). This memory may change during execution, and it is appropriate for storing variables, such as vertex colors. However, this memory area may change not only as a result of an algorithm instruction, but also as a result of faults. Such faults may make arbitrary and completely unpredictable changes in any round in the entire RAM. Moreover, in the fully-dynamic self-stabilizing setting, in each round vertices may crash, new vertices may appear, and communication links between vertices may change arbitrarily, as long as the bounds on n and Δ hold. (In fact, since the dependence of our algorithms' running time on n is just O(log* n), the bound n may be double- or triple-exponential in the actual number of vertices, and still the running time will be affected by just an additive constant term.) For example, colors are stored in RAM, and as long as faults occur, vertices may hold arbitrary colors, possibly the same as those of their neighbors, no matter what operations are performed by an algorithm. The objective is to devise algorithms that, once faults stop occurring, quickly self-stabilize to a proper solution.

The relevant notion of running time in this context is called stabilization time (also known as "quiescence" time): the maximum number of rounds t such that t rounds after the last fault or dynamic change of the graph, the algorithm is guaranteed to arrive at a proper solution, e.g., a coloring of the graph that is a proper (Δ+1)-coloring. One can define analogously self-stabilizing variants of (2Δ-1)-edge-coloring (see Section 1.2.2), of Maximal Independent Set (henceforth, MIS) and of Maximal Matching (henceforth, MM). (A subset U ⊆ V of vertices is an MIS if there are no edges between pairs of vertices in U, and every vertex v ∉ U has a neighbor in U. A subset M ⊆ E of edges is an MM if no two edges of M are incident to one another, and every edge of E \ M has an incident edge in M.) Also, ideally, vertices that are remote from those that experience faults or dynamic updates should not be affected. This is formally captured by the notion of adjustment radius, which is the maximum distance of a vertex that needs to re-compute its color (or status in MIS, etc.) from the closest fault or dynamic update.

Self-stabilizing symmetry-breaking problems were extensively studied [36, 37, 39, 40, 60, 61, 63, 64]. See also [29] for an excellent survey of self-stabilizing symmetry-breaking algorithms. However, all previous algorithms have a prohibitively large stabilization time of O(n) or more. In this paper we devise the first self-stabilizing algorithms with stabilization time O(Δ + log* n) for all these four fundamental problems. Moreover, the adjustment radius of our (Δ+1)-coloring algorithm is just 1, since only vertices whose immediate neighborhoods undergo faults or dynamic updates may need to recompute their colors. For the MIS and (2Δ-1)-edge-coloring problems the adjustment radius of our algorithms is 2, and for the MM problem it is 3. We note that the fact that our algorithms are deterministic is particularly useful in this setting. Indeed, this prevents the possibility that adversarial faults will manipulate the random bits of the algorithm. Finally, for each of these problems, there is a variant of our algorithm in which vertices use just O(1) words of local memory. (We assume that for every vertex v and every neighbor u of v, the vertex v has a read-only buffer in which the message that u has sent to v on the current round is stored.)

Dynamic distributed algorithms for symmetry-breaking problems that are not self-stabilizing were studied in [48, 41, 13, 57]. In addition to not being self-stabilizing, all these algorithms assume that faults are well-spaced, so that the algorithm has time to stabilize after each fault before the next fault occurs. This is in stark contrast to our algorithms, where an arbitrarily large number of faults and dynamic updates may occur in parallel, or as close to one another as one wishes. Also, the algorithms of [48, 41, 57] use large messages, while our algorithms use only short messages (see below). Finally, [13] provides bounds in expectation, and [57] provides amortized bounds, while our bounds are worst-case.
#### 1.2.2 Edge-Coloring
Another classical and extremely well-studied symmetry-breaking problem is that of (2Δ-1)-edge-coloring [55, 7, 11, 12, 20, 18, 24, 21, 56]. An edge-coloring of a graph G = (V, E) is a function φ that assigns a color to every edge. It is said to be proper if for every pair of incident edges e, e' we have φ(e) ≠ φ(e'). The classical theorem of Vizing [65] states that every graph is (Δ+1)-edge-colorable. However, existing distributed deterministic solutions [55, 7, 11, 12, 20, 21] employ 2Δ-1 colors or more in general graphs. (There are efficient randomized distributed algorithms [12, 20] that compute edge-colorings with fewer colors in time polylogarithmic in n. This running time is incomparable to running times of the form f(Δ) + O(log* n), for some function f, achieved by the deterministic algorithms that we discuss here.) The first efficient deterministic algorithm for (2Δ-1)-edge-coloring was devised by Panconesi and Rizzi [55]. Its running time is O(Δ + log* n).

In the LOCAL model of distributed computing, messages of arbitrary size are allowed. The (2Δ-1)-edge-coloring problem for a graph G reduces to the vertex-coloring problem on the line graph of G (whose maximum degree is at most 2Δ-2), and in the LOCAL model this reduction can be implemented without any overhead in running time. Therefore, the novel sublinear-in-Δ time algorithms for (Δ+1)-vertex-coloring [3, 22] immediately give rise to sublinear-in-Δ time algorithms for (2Δ-1)-edge-coloring. However, all these edge-coloring algorithms [55, 3, 22] are not locally-iterative. Moreover, they do not apply (or require significantly more time) in the CONGEST model of distributed computing. In the latter model, every vertex is allowed to send O(log n) bits of information to each of its neighbors in every round. Implementing the Panconesi-Rizzi algorithm in the CONGEST model requires significantly more than O(Δ + log* n) time. Simulating vertex-coloring on the line graph also incurs a multiplicative overhead in the running time. Therefore, to the best of our understanding, the state-of-the-art solutions for (2Δ-1)-edge-coloring in the CONGEST model require time superlinear in Δ, and they are not locally-iterative. The best currently-known locally-iterative solution is even slower. (It is achieved by simulating the locally-iterative O(Δ log Δ + log* n)-time algorithm of [47, 62] on the line graph in the CONGEST model.) The problem of devising communication-efficient algorithms for symmetry-breaking problems was raised in a recent work by Pai et al. [54]. Related questions were also studied by Censor-Hillel et al. [14].

We adapt our locally-iterative algorithm for (Δ+1)-vertex-coloring to work for (2Δ-1)-edge-coloring directly, i.e., without simulating the line graph. As a result we obtain a locally-iterative (2Δ-1)-edge-coloring algorithm with running time O(Δ + log* n) in the CONGEST model. Moreover, we show that unlike previous solutions (that require stabilization time of O(n) or more), our algorithm works in the self-stabilizing setting, still with small messages, with stabilization time O(Δ + log* n). Moreover, our algorithm is also applicable to the more restricted Bit-Round model [43], in which each vertex is only allowed to send 1 bit in each communication round over each edge. Even in this setting our algorithm terminates within O(Δ + log n) rounds, where the O(log n) rounds performed in the beginning constitute an initial stage that is unavoidable in this setting. (This is similar to the unavoidable log* n term in the CONGEST and LOCAL models. In the Bit-Round model the exchange of IDs requires Ω(log n) bit-rounds.)

Another feature of our (2Δ-1)-edge-coloring algorithm is that the constant factors in its O(Δ + log* n) running time are small. This is in contrast to previous (2Δ-1)-edge-coloring algorithms [55, 3, 22]. (The Panconesi-Rizzi algorithm invokes a 3-coloring algorithm for trees [15, 28], which requires Θ(log* n) time; it is not known whether this ingredient can be made faster. The algorithms of [3, 22] solve vertex-coloring, and simulating them on line graphs involves a multiplicative overhead factor of 2.) Hence, for small values of Δ, our (2Δ-1)-edge-coloring algorithm improves the state-of-the-art bound in the LOCAL model as well.
#### 1.2.3 SET-LOCAL Model
An additional application of our algorithms is in the SET-LOCAL model [33], which represents restricted networks in which vertices do not have IDs (but start from a proper coloring), and are not capable of distinguishing between identical messages received from different neighbors. Since our algorithms are locally-iterative and compute the next colors based only on the sets of current colors in 1-hop neighborhoods, our algorithms are directly applicable to the SET-LOCAL model. Thus our algorithms compute a proper (Δ+1)-coloring (and solve related problems) in O(Δ + log* n) time in the SET-LOCAL model, starting from a proper coloring. The best previous algorithms in this model required O(Δ log Δ + log* n) time [62, 47, 33]. A lower bound of Ω(Δ^{1/3}) for (Δ+1)-coloring in this setting was obtained by Hefetz et al. [33].
### 1.3 Technical Overview
We start with describing our most basic subroutine, which we call the Additive-Group algorithm, or shortly, the AG algorithm. The subroutine starts with a proper O(Δ²)-vertex-coloring of the input graph G, and produces a proper O(Δ)-coloring of it within O(Δ) rounds, in a locally-iterative way. Let p = O(Δ) be a prime, and assume (for simplicity of presentation) that the initial number of colors is at most p². We represent every initial color as a pair ⟨a, b⟩, where a, b are from the field Z_p of integers modulo p. Then every vertex v (in parallel) checks if there exists a neighbor u with the same second coordinate, i.e., with b_u = b_v. If there is no such neighbor, then the vertex finalizes its color, i.e., sets it to ⟨0, b_v⟩. Otherwise, the vertex sets its color to ⟨a_v, b_v + a_v⟩, where the addition is performed in Z_p. We show (see Section 3) that when all vertices run this simple iterative step for O(Δ) rounds, the ultimate coloring is a proper O(Δ)-coloring. Moreover, at all times the graph is properly colored.

The simplicity and the uniformity of this iterative step make it very powerful. In dynamic self-stabilizing environments vertices run this step forever, in conjunction with an appropriate "check-and-fix" procedure, no matter what changes or faults occur in the network. It turns out that, once faults stop occurring, within O(Δ + log* n) additional rounds the coloring converges to a proper (Δ+1)-coloring. In the edge-coloring scenario, every edge e = (u, v) has a color ⟨a_e, b_e⟩, known to both endpoints. The endpoint u checks locally if there is an edge e' ≠ e incident on u with b_{e'} = b_e, and v makes an analogous test among the edges incident on it. Then u and v communicate to one another one single bit each, which enables both of them to update the color of e. Therefore, this algorithm gives rise to the first communication- and time-efficient (2Δ-1)-edge-coloring algorithm. Moreover, this algorithm is extremely well suited to dynamic and self-stabilizing scenarios.

Some subtleties arise when the modulus is not prime, and we overcome them by showing that in some cases the proof goes through even if the arithmetic is performed in an additive group Z_m, rather than in a field of prime order. Another difficulty stems from the need to combine the AG algorithm with Linial's algorithm. The latter algorithm reduces the number of colors to O(Δ²), and from there the AG algorithm takes over. However, in the self-stabilizing setting some vertices may still be running Linial's algorithm, while others have already proceeded to the AG algorithm. Careful adaptations of both algorithms are required to handle such situations.

Finally, we also extend the AG algorithm to computing arbdefective colorings. For a pair of parameters b and k, a coloring is said to be a b-arbdefective k-coloring if it employs k colors, and its color classes induce subgraphs of arboricity at most b each. Arbdefective colorings were introduced by the first- and the second-named authors in [6], and they were shown to be extremely useful for efficient computation of proper colorings in [6, 3, 22]. Our extension of the AG algorithm from proper to arbdefective colorings (we call the extended algorithm ArbAG) works very similarly to the AG algorithm. The only difference is that on each round, each vertex tests if it has at most a certain threshold number of neighbors with the same second coordinate. (Recall that in the AG algorithm, this threshold number is 0.) Other than that, ArbAG has the same simple locally-iterative structure as algorithm AG, but the number of iterations of ArbAG is significantly smaller. (Note, however, that strictly speaking, a locally-iterative algorithm is required to maintain a proper coloring on each round, while algorithm ArbAG maintains an arbdefective coloring.) This is in sharp contrast to previous methods [6, 3] of computing arbdefective colorings. The latter are far more involved, far less communication-efficient, and less time-efficient by polylogarithmic factors. As a result we also obtain improved (again, by polylogarithmic factors) algorithms for general (not necessarily locally-iterative) (Δ+1)-coloring and ((1+ε)Δ)-coloring.

## 2 Preliminaries

The function log* n is the number of times the log₂ function has to be applied iteratively, starting from n, until we arrive at a number smaller than 2. The unique identity number (ID) of a vertex v in a graph G = (V, E) is denoted ID(v). The diameter of a graph is the maximum (unweighted) distance between a pair of its vertices. The arboricity of a graph is the minimum number of forests into which its edge set can be partitioned. For a vertex v and a positive integer t, we denote by Γ_t(v) the set of vertices at distance exactly t from v, and by Γ̂_t(v) the set of vertices at distance at most t from v. An m-defective k-coloring is a vertex coloring with k colors such that each vertex has at most m neighbors colored by its own color. A b-arbdefective k-coloring is a vertex coloring with k colors, such that each subgraph induced by vertices of the same color has arboricity at most b. Our algorithms employ the following important fact: for any integer n > 1, there exists a prime p with n < p ≤ 2n. Such a prime exists due to the Bertrand-Chebyshev theorem. See, e.g., Theorem 418 in [31].
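As a quick illustration of these two definitions, the following sketch (our own, not taken from the paper) computes log* and finds a prime guaranteed by the Bertrand-Chebyshev theorem:

```python
import math

def log_star(n):
    """Iterated logarithm: the number of times log2 must be applied to n
    before the value drops below 2."""
    count = 0
    while n >= 2:
        n = math.log2(n)
        count += 1
    return count

def bertrand_prime(n):
    """Smallest prime p with n < p <= 2n; the Bertrand-Chebyshev theorem
    guarantees that such a prime exists for every integer n >= 1."""
    def is_prime(k):
        return k > 1 and all(k % d for d in range(2, math.isqrt(k) + 1))
    p = n + 1
    while not is_prime(p):
        p += 1
    assert p <= 2 * n  # Bertrand-Chebyshev
    return p
```

For example, log_star(16) is 3 (16 → 4 → 2 → 1), which illustrates how slowly the function grows.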

## 3 Additive-Group Coloring

In this section we present our main algorithm, which computes a proper O(Δ)-coloring from a proper O(Δ²)-coloring. Consider a graph G = (V, E) with a proper m-coloring φ, m = O(Δ²). We represent each color by a pair ⟨a, b⟩, as follows. We find a prime number p = O(Δ) with p ≥ 2Δ + 1 and p² ≥ m. (Such a prime exists by the Bertrand-Chebyshev theorem.) A color c ∈ {0, 1, …, m-1} is represented by the pair ⟨⌊c/p⌋, c mod p⟩. Our final goal is to eliminate the first coordinate, i.e., to change the colors of the nodes so that for every vertex v with color φ(v) = ⟨a_v, b_v⟩ it will hold that a_v = 0, and φ is a proper p-coloring. Our algorithm proceeds in iterations, starting from the initial coloring φ_0 = φ. In each iteration colors may change, but the coloring remains proper. We employ the following definition.

###### Definition 3.1.

A vertex v conflicts with a neighbor u in G if and only if a_v ≠ 0 and b_v = b_u, where φ(v) = ⟨a_v, b_v⟩ and φ(u) = ⟨a_u, b_u⟩.

Let φ_i(v) = ⟨a_v, b_v⟩ denote the color of v in round i. We will refer to a_v as the first coordinate and to b_v as the second coordinate. Our algorithm starts from a proper coloring φ_0 of the input graph G. In each round i the algorithm performs the following step. For all v ∈ V in parallel: if the node v conflicts with a neighboring node u, then the new color of v in the end of this round is φ_{i+1}(v) = ⟨a_v, b_v + a_v⟩, where the addition is performed in Z_p. Otherwise (this means that v does not conflict with any neighbor), we set φ_{i+1}(v) = ⟨0, b_v⟩, and the color of v becomes final and will not change anymore. (Note, however, that a finalized vertex v, i.e., a vertex with a_v = 0, can keep running the same iterative step, and still its color will stay unchanged.) This completes the description of the algorithm. Note that a node does not have to send its new color to all of its neighbors. Rather, it is enough to send only one bit, indicating whether its color became final or whether it changed according to the rule specified above. We will use this property later. Next, we prove correctness.
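To make the iterative step concrete, here is a small round-by-round simulation in Python (a sketch of our own; the graph, the initial coloring, and the generous round cap are illustrative assumptions, not part of the paper):

```python
from math import isqrt

def is_prime(k):
    return k > 1 and all(k % d for d in range(2, isqrt(k) + 1))

def ag_coloring(adj, coloring, delta):
    """Additive-Group coloring sketch: reduce a proper m-coloring (m <= p^2)
    to a proper p-coloring, for a prime p >= 2*delta + 1."""
    p = 2 * delta + 1
    while not is_prime(p):
        p += 1
    # represent color c as the pair <a, b> = <c div p, c mod p>
    pair = {v: divmod(c, p) for v, c in coloring.items()}
    for _ in range(2 * p):  # generous cap; O(Delta) rounds suffice
        nxt = {}
        for v, (a, b) in pair.items():
            if a != 0 and any(pair[u][1] == b for u in adj[v]):
                nxt[v] = (a, (b + a) % p)  # conflict: shift second coordinate
            else:
                nxt[v] = (0, b)            # no conflict: finalize <0, b>
        pair = nxt
        if all(a == 0 for a, _ in pair.values()):
            break
    return {v: b for v, (_, b) in pair.items()}, p
```

Running it on a small properly-colored graph, the intermediate colorings stay proper, and the final coloring uses only second coordinates, i.e., at most p colors.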

###### Lemma 3.2.

For each iteration i ≥ 0, the coloring φ_i is proper.

###### Proof.

The proof is by induction on i. Base (i = 0): holds trivially, since the initial coloring φ_0 is proper.
Step: Assuming that in iteration i the coloring is proper, we prove that in iteration i+1 it is proper as well. If the color of a node v is ⟨a_v, b_v⟩, then in the next iteration its color is either ⟨0, b_v⟩ or ⟨a_v, b_v + a_v⟩. Consider an adjacent node u, i.e., (u, v) ∈ E. If b_u = b_v, then a_u ≠ a_v, by the induction hypothesis. In this case, the new colors of the nodes will be ⟨a_u, b_u + a_u⟩ and ⟨a_v, b_v + a_v⟩ (a finalized vertex has first coordinate 0 and keeps its color), and since a_u ≠ a_v this means that the new colors of u and v are distinct. Otherwise, b_u ≠ b_v. If in iteration i neither u nor v had a conflict, we are done, since their new colors are ⟨0, b_u⟩ and ⟨0, b_v⟩, and b_u ≠ b_v. Otherwise, u or v had conflicts in iteration i. If exactly one of them had a conflict, then their colors in iteration i+1 are distinct. (One of them has 0 in the first coordinate, while the other has not, in iteration i+1.) Thus, it is left to consider the case that both had conflicts. Then the new colors are ⟨a_u, b_u + a_u⟩ and ⟨a_v, b_v + a_v⟩. If a_u ≠ a_v, we are done. Otherwise, a_u = a_v, and b_u ≠ b_v, because the coloring φ_i is proper. Thus, b_u + a_u ≠ b_v + a_v, and the new colors are distinct. ∎

We say that a vertex v is in the working stage as long as its color ⟨a_v, b_v⟩ satisfies a_v ≠ 0. Once a_v becomes 0, the vertex is in the final stage. In order to analyze the running time of the algorithm, we observe in Lemmas 3.3, 3.4 and Corollary 3.5 that, since p ≥ 2Δ + 1, a pair of neighbors can conflict at most twice within p consecutive rounds. (Once while both are in the working stage, and once more when one of them is in the final stage.) Therefore, a vertex with fewer than p/2 neighbors will have a round out of any p consecutive rounds in which it conflicts with no neighbor. In this round it will select a final color. Since Δ < p/2, all vertices in the graph will select a final color within O(Δ) rounds. This gives rise to Corollary 3.5.

###### Lemma 3.3.

For i ≥ 0, suppose that our algorithm is executed for the p consecutive rounds i, i+1, …, i+p-1, and consider two neighboring nodes u, v in G that are in their respective working stages during these entire p rounds. Then u and v have the same second coordinate in their colors in the same round (that is, b_u = b_v in some round j, i ≤ j ≤ i+p-1) at most once during these p consecutive rounds.

###### Proof.

Assume that in some iteration j it holds that b_u = b_v. Since u and v remain in their working stages throughout these rounds, each of them conflicts with some neighbor, and hence adds its first coordinate to its second coordinate, in every one of these rounds. Thus, after each of the following t iterations, the difference between their second coordinates is t · (a_v - a_u) mod p. Note that since p is a prime and a_v - a_u ≢ 0 (mod p) (indeed, by Lemma 3.2, the coloring is proper in all iterations; in particular, in iteration j we have ⟨a_u, b_u⟩ ≠ ⟨a_v, b_v⟩, and since b_u = b_v, it follows that a_u ≠ a_v), the equality b_u = b_v can hold again only when t ≡ 0 (mod p), i.e., only after p additional iterations. ∎
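The number-theoretic fact used here — for a prime p and a nonzero difference d = a_v - a_u, the multiples t · d mod p do not return to 0 for t = 1, …, p-1 — can be checked directly (a tiny sanity check of our own; the choice of primes is arbitrary):

```python
# For every prime p and every nonzero d in Z_p, the value t*d mod p
# is nonzero for all t = 1, ..., p-1, so two working neighbors whose
# second coordinates coincided can coincide again only after p rounds.
for p in (5, 7, 11):
    for d in range(1, p):
        assert all((t * d) % p != 0 for t in range(1, p))
```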

In the following lemma we complement Lemma 3.3.

###### Lemma 3.4.

For i ≥ 0, suppose that our algorithm is executed for the p consecutive rounds i, i+1, …, i+p-1, and consider two neighboring nodes u, v in G, such that v is in its working stage and u is in its final stage during these entire p rounds. Then u and v have the same second coordinate in their colors in the same round (that is, b_u = b_v in some round j, i ≤ j ≤ i+p-1) at most once during these p consecutive rounds.

###### Proof.

Since u is in its final stage, its color ⟨0, b_u⟩ does not change during these rounds. On the other hand, v is in its working stage the entire time, and hence conflicts with some neighbor, and updates its second coordinate, in every one of these rounds. If in round i the color of v is ⟨a_v, b⟩, for some b and a_v ≠ 0, then in the following rounds it changes as follows: ⟨a_v, b + a_v⟩, ⟨a_v, b + 2a_v⟩, …. Since p is prime and a_v ≠ 0, the values b, b + a_v, …, b + (p-1)a_v of the second coordinate are all distinct in the field of integers modulo p. In other words, the equality b + t · a_v = b_u holds for exactly one t ∈ {0, 1, …, p-1}. Thus v conflicts with u at most once, in the round where the second coordinates coincide. ∎

###### Corollary 3.5.

Given a graph G with a proper m-coloring, where m ≤ p² and p ≥ 2Δ + 1 is a prime, our Additive-Group Coloring algorithm produces a proper p-coloring within O(Δ) rounds.

###### Proof.

By Lemma 3.3, two adjacent nodes in the working stage (whose colors are not final) cannot conflict with one another more than once during p consecutive rounds of the algorithm. However, two adjacent nodes can also conflict if exactly one of them has selected a final color. Once this happens, the node that is still in the working stage conflicts with its finalized neighbor at most once during these p rounds. (See Lemma 3.4.) Since any node starts in the working stage, and once it transits to the final stage its color does not change anymore, a node cannot conflict with each of its neighbors more than twice during p consecutive rounds. Therefore, for each node, within p ≥ 2Δ + 1 rounds, there must be a round in which it does not conflict with any of its neighbors. Hence, all nodes reach the final stage within p rounds. Since p = O(Δ), the statement about the running time of the corollary follows. A final color is of the form ⟨0, b⟩, b ∈ Z_p. Thus the number of employed colors is at most p. ∎

###### Corollary 3.6.

Any graph G can be colored with Δ + 1 colors within O(Δ + log* n) rounds, by a locally-iterative algorithm.

###### Proof.

Running Linial's algorithm [49] on the input graph G will produce a coloring φ' using O(Δ²) colors within O(log* n) rounds. (Recall that Linial's algorithm is locally-iterative.)
At the second stage we run our Additive-Group algorithm on φ'. This results in a new proper coloring φ'' that employs O(Δ) colors. Computing the coloring φ'' from φ' requires O(Δ) rounds, by Corollary 3.5. At the last stage we reduce the number of colors to Δ + 1 using the standard color reduction. This also requires O(Δ) time. Note that the standard color reduction is a locally-iterative algorithm as well. Therefore, the overall running time is O(Δ + log* n). ∎
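The standard color reduction used in the last stage can be sketched as follows (our own illustration; in each round, the entire highest color class recolors simultaneously, which is safe because a color class is an independent set):

```python
def standard_color_reduction(adj, colors, delta):
    """Reduce a proper m-coloring to a proper (delta+1)-coloring in
    m - delta - 1 rounds: in each round, the highest color class is
    eliminated by recoloring each of its vertices with a color from
    {0, ..., delta} unused by its neighbors (one always exists, since
    a vertex has at most delta neighbors)."""
    m = max(colors.values()) + 1
    for c in range(m - 1, delta, -1):
        for v in [v for v in adj if colors[v] == c]:
            used = {colors[u] for u in adj[v]}  # at most delta colors
            colors[v] = min(set(range(delta + 1)) - used)
    return colors
```

Vertices of the same (eliminated) color class are pairwise non-adjacent, so their simultaneous recoloring keeps the coloring proper.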

Finally, we argue that the algorithm can be implemented using O(1) words in the local memory of each vertex. Recall that we assume that every vertex v has read-only buffers. For each neighbor u of v, the corresponding buffer contains the message that v received from u in the current round. This buffer will contain the color of u.

First, consider the situation when vertices run the AG algorithm, i.e., colors are pairs over Z_p. Then v only needs to test, for every neighbor u, whether v is in conflict with u. For this, v only needs to remember its own color, i.e., O(1) words. Next, we argue that Linial's algorithm can be implemented using O(1) words in local memory. (This might be known, and we sketch the argument for the sake of completeness.) For every color c, there is a polynomial g_c of degree at most d over a field of prime size q, for parameters d and q. (Both q and d depend on the number of colors in the proper coloring that the algorithm starts with.) The number of bits required to represent the coefficients of each polynomial is proportional (up to a constant factor) to the number of bits required to represent the color c, i.e., it can be encoded in O(1) words. For a fixed vertex v, let u_1, u_2, …, u_k be its neighbors, k ≤ Δ, and let g_{u_1}, g_{u_2}, …, g_{u_k} be their respective polynomials. Now v computes g_v(0). It then reads the color of u_1, computes its polynomial g_{u_1}, and evaluates it at 0. If g_{u_1}(0) = g_v(0), then ⟨0, g_v(0)⟩ is not going to be the new color of v. Otherwise, v evaluates g_{u_2}(0) (erasing g_{u_1}(0)), tests whether g_{u_2}(0) = g_v(0), and so on. If g_v(0) is different from all the values g_{u_j}(0), then ⟨0, g_v(0)⟩ is the new color of v. Otherwise, v erases its intermediate values, computes g_v(1), and tests in the same way whether g_{u_j}(1) = g_v(1) for some j, etc.
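The polynomial-based reduction step described above can be sketched as follows (our own illustration; encoding a color by its base-q digits and the concrete parameter choices are assumptions made for the example):

```python
def poly_eval(coeffs, x, q):
    """Evaluate a polynomial (coefficients listed lowest degree first)
    at x, over the integers modulo the prime q, via Horner's rule."""
    acc = 0
    for a in reversed(coeffs):
        acc = (acc * x + a) % q
    return acc

def linial_round(adj, colors, q, d):
    """One Linial-style reduction round: color c is encoded by the degree-d
    polynomial whose coefficients are the base-q digits of c; each vertex
    picks a point x where its polynomial differs from those of all its
    neighbors, and adopts <x, g(x)>, encoded here as the number x*q + g(x).
    Two distinct degree-<=d polynomials agree on at most d points, so
    q > delta*d guarantees that a good x exists."""
    def digits(c):
        return [(c // q**i) % q for i in range(d + 1)]
    new = {}
    for v in adj:
        g = digits(colors[v])
        for x in range(q):
            gx = poly_eval(g, x, q)
            if all(gx != poly_eval(digits(colors[u]), x, q) for u in adj[v]):
                new[v] = x * q + gx
                break
    return new
```

One round maps distinct colors to distinct polynomials, so the resulting coloring with at most q² colors is again proper.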

To summarize, the algorithm of Corollary 3.6 can be implemented using O(1) words of local memory per vertex. To the best of our knowledge, this is not the case for the previous (non-locally-iterative) (Δ+1)-coloring algorithms that run in O(Δ + log* n) time or less [5, 44, 3, 22].

## 4 Fully-Dynamic Self-Stabilizing algorithms with O(Δ+log∗n) rounds

### 4.1 Fully-Dynamic Self-Stabilizing O(Δ)-Coloring

In this section we employ a variant of Linial’s algorithm for O(Δ²)-coloring that allows a vertex to avoid being colored by colors from a given set F of size at most Δ [3]. (This is useful when selecting a new color, to avoid collisions with some neighbors that have already obtained final colors.) We refer to this algorithm as Algorithm Excl-Linial. For the sake of completeness, it is described below. The algorithm is identical to Linial’s original algorithm, except for the final stage, which transforms the current proper coloring into a proper O(Δ²)-coloring. In this stage each vertex v computes a polynomial P_v of degree at most 2 over a field of size O(Δ), and selects a color ⟨x, P_v(x)⟩, such that ⟨x, P_v(x)⟩ ≠ ⟨y, P_u(y)⟩, for any neighbor u of v and any y in that field. Since the degree of the polynomials in this stage is at most 2, each polynomial intersects a neighboring one in at most two points. Hence, there are at most 2Δ points on P_v that may intersect with some neighbor. If the field is of size greater than 2Δ, there must be a point x such that P_v(x) ≠ P_u(x) for all neighbors u of v and all elements in the field. Such a pair is selected by the original algorithm of Linial. In the modified variant, on the other hand, the field is of size greater than 3Δ. Consequently, if a set F of at most Δ forbidden colors is provided, there still exists an element x in that field, such that ⟨x, P_v(x)⟩ is not equal to any of the colors in the set F, and neither to any ⟨y, P_u(y)⟩, for a neighbor u and an element y. Such a color is selected as a final color. Thus, we obtain an O(Δ²)-coloring in which the color of each vertex avoids its set of at most Δ forbidden colors, within O(log* n) time.
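The final selection step of Excl-Linial can be illustrated as follows (a toy harness; the field size, encoding, and function names are our own assumptions). Besides avoiding the at most two intersection points per neighbor, the chosen pair must also avoid an explicit forbidden set:

```python
# Toy illustration of Excl-Linial's final selection: a pair <x, P_v(x)>
# that avoids both the neighbors' polynomial curves and a forbidden set.
# Each forbidden color <y, z> rules out at most the single point x = y,
# so a field of size > 3*Delta always leaves a valid choice.

P = 11  # toy field size (an assumption of this sketch)

def evaluate(coeffs, x):
    acc = 0
    for c in coeffs:
        acc = (acc * x + c) % P
    return acc

def pick_excl(my_poly, neighbor_polys, forbidden):
    """forbidden: set of (x, y) pairs the final color must avoid."""
    for x in range(P):
        y = evaluate(my_poly, x)
        if (x, y) in forbidden:
            continue
        if all(evaluate(q, x) != y for q in neighbor_polys):
            return (x, y)
    raise ValueError("field too small for this many constraints")
```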

Before describing our self-stabilizing algorithm, we define some notation, and describe yet another useful variant of Linial’s algorithm, which we call Algorithm Mod-Linial. Let t = O(log* n) denote the number of iterations in Linial’s algorithm. Let k₀ > k₁ > … > k_t denote upper bounds on the number of colors in the successive iterations of Linial’s algorithm, where k₀ is the number of colors of the initial ID-based coloring. Define the intervals I₀, I₁, …, I_t as follows: I_t = [0, k_t), and I_j = [k_{j+1} + k_{j+2} + … + k_t, k_j + k_{j+1} + … + k_t), for j < t. Since each such interval contains a sufficient number of colors, we can map the color palette of each iteration of Linial’s algorithm to one of the intervals defined above. Specifically, the palette of the first iteration is mapped to I₁ (which is of size k₁), the palette of the second iteration is mapped to I₂ (which is of size k₂), and so on, up to the last palette, which is mapped to I_t. This way Linial’s algorithm is modified, so that in each iteration j a coloring using the palette I_{j−1} is transformed into a coloring using the palette I_j. (The actual number of colors used from this palette is at most k_j.) The modified algorithm will be referred to as Mod-Linial. It accepts as input a color of a vertex v, a (sub)set of its neighbors’ colors, and a set of forbidden colors, and returns a new color for v. The range I₀ will be used for an initial coloring obtained from IDs.
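The interval layout can be sketched directly (the palette sizes below are arbitrary stand-ins, not the actual bounds of Linial’s iterations): interval I_j receives the palette of iteration j, later iterations occupy strictly smaller ranges, and all ranges are pairwise disjoint, so the numerical value of a color strictly decreases as the algorithm proceeds.

```python
# Minimal sketch of the disjoint descending palettes used by Mod-Linial.

def build_intervals(sizes):
    """sizes[j] = palette size of iteration j; returns [(lo, hi), ...]
    with I_j = [lo, hi) and I_{j+1} lying entirely below I_j."""
    intervals, hi = [], sum(sizes)
    for s in sizes:
        intervals.append((hi - s, hi))
        hi -= s
    return intervals
```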

Our fully-dynamic self-stabilizing algorithm works as follows. The RAM of each vertex consists of a variable that holds a color in the range [0, k₀ + k₁ + … + k_t). The ROM of each vertex holds the algorithm, the number of vertices n and the maximum degree Δ. In each round each vertex v checks whether it is in a proper state, i.e., its color is distinct from the colors of all its neighbors. (See the pseudocode of Procedure Check-Error below.) If v is not in a proper state, the vertex returns to its initial state. (See lines 1 - 3 of Procedure Self-Stabilizing-Coloring.) We define the initial state of a vertex v to be the color of I₀ determined by its ID. Otherwise, the vertex is in a proper state. Then, the vertex computes its next color or finalizes the current one. (See lines 4 - 20 of Procedure Self-Stabilizing-Coloring.) Specifically, as long as the vertex color belongs to an interval I_j for j < t − 1, i.e., the color is significantly larger than Δ, the vertex computes the next color, from a smaller range, using Algorithm Mod-Linial (lines 6-7 of Procedure Self-Stabilizing-Coloring). Once a color is in the interval I_{t−1}, the vertex must select a new color in the interval I_t that is distinct from any neighboring color that is also in I_t. This is done in lines 9 - 11 of the procedure. The set F, computed in line 10 and provided as the third parameter of Procedure Mod-Linial in line 11, contains all possible colors that neighbors of v that already run lines 12 - 18 (i.e., their colors are small enough) may obtain in the current iteration. Note that for each such neighbor there are at most two such colors. Finally, a color that is in the range I_t either becomes final or changes to another color in I_t according to Algorithm AG. See lines 12 - 18. This completes the description of the algorithm. Its pseudocode is provided below. In the next lemmas we analyze the algorithm.
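The per-round error check and reset can be sketched as follows (a toy simulation of our own; the function names mirror, but are not, the paper’s pseudocode): a vertex whose color collides with a neighbor’s returns to its ID-based initial color, and otherwise proceeds to compute its next color.

```python
# Sketch of the local check performed in every round (Procedure
# Check-Error in the text; this toy version and its names are ours).

def check_error(v, color, adj):
    """Return True iff v's current color conflicts with a neighbor."""
    return any(color[u] == color[v] for u in adj[v])

def stabilize_step(v, color, adj, initial_color):
    """One round of the self-stabilizing loop for a single vertex."""
    if check_error(v, color, adj):
        color[v] = initial_color[v]   # return to the initial state
        return True                   # an error was detected and repaired
    return False                      # proper state: proceed normally
```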

###### Lemma 4.1.

Given an arbitrary graph G, our self-stabilizing algorithm produces a proper coloring in each round, once faults no longer occur.

###### Proof.

Consider a round i. If a node v has a color that is equal to that of a neighbor u, i.e., φ(v) = φ(u), then v returns to its initial state. Otherwise, lines 3 - 20 are executed. Since it is assumed that no more faults will occur, we prove that lines 3-20 provide a proper coloring. If φ(v) ∈ I_j, for j < t − 1 (line 7), then the new color of v will be in the range I_{j+1}. (Any element in I_j is greater than any element in I_{j+1}, and thus numerical values of colors decrease as the algorithm proceeds. Also, all intervals are disjoint.) Therefore, all neighbors u with φ(u) ∉ I_j will not select a new color from I_{j+1}. For a neighbor u with φ(u) ∈ I_j, its new color belongs to I_{j+1}, and Algorithm Mod-Linial will produce a proper coloring.
If φ(v) ∈ I_{t−1}, then Algorithm Mod-Linial works in the following way. It computes a new color from I_t, such that it is distinct from the new colors of all neighbors that transit from I_{t−1} to I_t in round i, and from all colors of the set F. The latter set contains all possible colors that can be used in round i by neighbors of v with colors in the range I_t in round i − 1. Consequently, the new color of v is distinct from the new colors of such neighbors. Moreover, the new color is also distinct from the new colors of the rest of the neighbors, since they were either in I_{t−1} in round i − 1, and do not collide with v in round i due to the correctness of Mod-Linial, or in a higher range, and thus are not in I_t in round i.
If φ(v) ∈ I_t, then lines 12 - 19 execute our Additive-Group algorithm (see Corollary 3.5), and produce a proper coloring with respect to neighbors u with φ(u) ∈ I_t. For neighbors with colors in higher ranges, the coloring is proper as well, by the analysis of the previous cases in this proof. ∎

###### Lemma 4.2.

Given an arbitrary graph G, our fully-dynamic self-stabilizing algorithm produces a proper O(Δ)-coloring with O(Δ + log* n) stabilization time.

###### Proof.

At the end of each round i ≥ 1, counting from the moment that faults stop occurring, all colors are in the range I_i ∪ I_{i+1} ∪ … ∪ I_t. Therefore, within t = O(log* n) rounds, all colors are in the range I_t. From this moment on, the procedure executes our Additive-Group algorithm in all vertices. Therefore, by Corollary 3.5, within O(Δ) additional rounds the number of colors becomes O(Δ). ∎

We also obtain a self-stabilizing algorithm that employs exactly Δ + 1 colors. This algorithm and its analysis are provided in Section 7. Its properties are summarized in Theorem 4.3 below. Similarly to our static algorithm (see the end of Section 3), the self-stabilizing algorithm can be implemented using O(1) words of local memory in each vertex. Note also that if topological updates or faults occur in some set S of vertices, then only vertices of S may change their colors as a result of the recomputation conducted by our algorithm. This is because only vertices of S may detect an error and re-initialize their colors. Other vertices keep their finalized colors. (Even if their neighbors encountered faults and have to re-compute their colors, these colors will never conflict with finalized ones, once faults stop occurring.) It follows that the adjustment radius of our algorithm is just 0.

###### Theorem 4.3.

Given an arbitrary graph G, our fully-dynamic self-stabilizing algorithm produces a proper (Δ + 1)-coloring with O(Δ + log* n) stabilization time. Its adjustment radius is 0.

### 4.2 Fully-Dynamic Self-Stabilizing MIS, MM, and (2Δ−1)-Edge-Coloring

We employ our self-stabilizing coloring algorithm from the previous section in order to compute an MIS, as follows. We add a bit b(v) to the RAM of each vertex v. This bit represents whether v is in the MIS (if b(v) = 1) or not in the MIS (if b(v) = 0). We add the following instruction at the end of Procedure Self-Stabilizing-Coloring. If all neighbors of v with smaller colors than that of v have b = 0, then we set b(v) = 1. Otherwise, we set b(v) = 0. This completes the description of the changes required to compute an MIS. The next lemma shows that within i + 1 rounds, for i ≥ 0, after the stabilization of the coloring, all vertices with colors 0, 1, …, i induce a subgraph with a properly computed MIS. Consequently, within O(Δ) additional rounds an MIS of the entire input graph is achieved. The proof is provided below.
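The MIS rule added to the coloring procedure can be simulated as follows (a toy sequential simulation of our own; in the distributed algorithm all vertices apply the rule simultaneously in every round, and color class i settles by round i + 1):

```python
# Sketch (our simulation) of the MIS bit rule on top of a stable proper
# coloring: v sets b(v) = 1 iff every smaller-colored neighbor has b = 0.
# Iterating the rule once per color class reaches the fixpoint.

def mis_from_coloring(adj, color):
    b = {v: 0 for v in adj}
    for _ in range(max(color.values()) + 1):   # one round per color class
        for v in adj:
            smaller = [u for u in adj[v] if color[u] < color[v]]
            b[v] = 1 if all(b[u] == 0 for u in smaller) else 0
    return {v for v in adj if b[v] == 1}
```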

###### Lemma 4.4.

Suppose that an invocation of Procedure Self-Stabilizing-Coloring has produced a proper O(Δ)-coloring φ(v) for each vertex v in a certain round. Then, within i + 1 rounds from that moment, for i ≥ 0, the subgraph induced by vertices of colors 0, 1, …, i has a properly computed MIS.

###### Proof.

The proof is by induction on i. Base (i = 0): All vertices of color 0 do not have neighbors with smaller colors, and thus their bits become equal to 1. Since at this stage the coloring is proper, the set of such vertices is independent. Since it does not contain vertices of other colors at the current stage, the set is also maximal. (In other words, this set is an MIS of itself.)
Step: We consider the subset of vertices with colors 0, 1, …, i. By the induction hypothesis, within i rounds from the stabilization of the coloring, the subgraph induced by vertices of colors 0, 1, …, i − 1 has a properly computed MIS. Since colors do not change after stabilization, the bits of the vertices of this subgraph will not change in round i + 1. In this round each vertex of color i sets its bit to 1 if it has no neighbor with a smaller color in the MIS, and sets it to 0 otherwise. Consequently, there is no pair of neighbors with colors from {0, 1, …, i} for which both bits are set to 1. Moreover, each vertex of color smaller than i for which b = 0 must have a neighbor with b = 1, by the induction hypothesis. Each vertex of color i for which b = 0 must have a neighbor with a smaller color and b = 1, according to the instruction that is executed in round i + 1 after the stabilization of the coloring. Hence, after that round, the subgraph induced by vertices of colors 0, 1, …, i has a properly computed MIS. ∎

###### Theorem 4.5.

Given an arbitrary graph G, our self-stabilizing algorithm produces a proper MIS within O(Δ + log* n) rounds after the last fault.

###### Proof.

Let T = O(Δ + log* n) be the stabilization time of the coloring algorithm. (See Theorem 4.3.) Denote by M_i, i ≥ 1, the set of vertices that belong to the MIS (i.e., have b = 1) at round T + i after faults stop occurring. Let φ be the (Δ + 1)-coloring maintained by the algorithm. (We know that T rounds after the last fault occurred, φ is indeed a proper (Δ + 1)-coloring.)

We prove by induction on i that at time T + i after faults stop occurring, for i ≥ 1, M_i is an MIS of the set of vertices of colors 0, 1, …, i − 1, where φ is the coloring maintained by the algorithm at that time.
Base (i = 1): All vertices of color 0 form an independent set (because φ is a proper coloring, as it is the coloring maintained more than T rounds after the last fault occurred), and each of them joins the MIS, because they have no neighbors of smaller color.
Step: For some i ≥ 1 we assume that M_i is an MIS of the set of vertices of colors 0, 1, …, i − 1. Consider a vertex v with φ(v) = i. This vertex had the same color for all the rounds T, T + 1, …, T + i, counting from the moment when faults stopped occurring. By the end of round T + i or earlier, all its neighbors of smaller color (they also did not change their colors during the time interval [T, T + i]) have set their values b. So in round T + i + 1, iff v has no smaller-colored neighbor in the MIS, it joins the MIS. (It might have joined earlier, but it will anyway check again whether it has to join in round T + i + 1.) Since vertices of the same color form an independent set, the resulting set M_{i+1} is an MIS of the set of vertices of colors 0, 1, …, i. ∎

In order to bound the adjustment radius of our MIS algorithm, we perform the following modifications. We run a self-stabilizing (Δ + 1)-vertex-coloring algorithm, and also a self-stabilizing MIS algorithm. In the latter, every vertex v has a status: MIS, NOTMIS, or Undecided. In every round, in addition to making the local check of the coloring, v also conducts a local check of the MIS. Specifically, if its status is MIS, it tests whether there is a neighbor of v in the MIS. If this is the case, the status of v becomes Undecided. Also, if the status of v is NOTMIS, and no neighbor of v belongs to the MIS, then v changes its status to Undecided. Also, every undecided vertex v that has no MIS vertices in its neighborhood, and whose color is smaller than that of all its undecided neighbors, joins the MIS. (This can happen in the same round in which v changes its state, e.g., from NOTMIS to Undecided.) If an undecided vertex v has a neighbor in the MIS, the status of v becomes NOTMIS.

Next, we argue that the stabilization time is O(Δ + log* n). Indeed, the coloring stabilizes within this time, and then in additional O(Δ) rounds (where the number of colors is at most cΔ, for a constant c) the MIS stabilizes. Indeed, one round after faults stop occurring, no pair of neighbors remain together in the MIS. In the next round, all vertices that are not Undecided induce a subgraph with a proper MIS. Next, it can be argued inductively (but one should consider the set M of vertices already in the MIS at that time, together with the vertices of colors at most i) that for every i, within i rounds after the stabilization of the coloring, we have an MIS for the vertices of colors smaller than or equal to i.

Next, we analyze the adjustment radius of the algorithm. Consider a vertex v in the MIS, and suppose that its 1-hop neighborhood Γ(v) was stable. We argue that v stays in the MIS, regardless of changes of colors and MIS-statuses outside Γ(v). Vertices in Γ(v) ∖ {v} may change their colors (but not v), because the adjustment radius of the coloring algorithm is 0. However, a color change by itself does not trigger a change in the MIS status. Each u ∈ Γ(v) ∖ {v} had status NOTMIS. Even if one or more of the neighbors of u (other than v) change their MIS-status to NOTMIS, still, since u has a neighbor in the MIS (which is v), u will keep its MIS-status = NOTMIS, and so v will keep its MIS-status = MIS. Finally, if the MIS-status of v is NOTMIS, and the 2-hop neighborhood of v is stable, then there is a neighbor w of v with MIS-status = MIS, and Γ(w) is stable. But then, by the previous argument, w does not change its MIS-status (it stays in the MIS), and so v retains its MIS-status = NOTMIS.

Finally, observe that the adjustment radius of the MIS algorithm is not 1. Indeed, suppose that the MIS-status of v is NOTMIS, its neighbor u is in the MIS, and a neighbor w of u joins the MIS (as a result of a fault). Then both u and w become undecided, and also v (if it has no other MIS neighbor) becomes undecided. If v also has a locally minimal color among undecided vertices, then v joins the MIS. So a change in the 2-hop neighborhood of v may cause a change in v, and thus the adjustment radius is greater than 1. We showed above that it is at most 2, and thus it equals 2. We summarize this in the next theorem.

###### Theorem 4.6.

The adjustment radius of our self-stabilizing MIS algorithm is 2.

In the ordinary (non-stabilizing) setting it is possible to compute a maximal matching and an edge coloring by simulating the line graph of the input graph, and computing an MIS and a vertex coloring of it. These solutions on the line graph directly provide solutions for maximal matching and edge coloring of the input graph within the same running time. This technique is applicable also to the self-stabilizing setting. Specifically, each vertex v simulates deg(v) virtual vertices, one virtual vertex per edge incident on v. At the beginning of each round each vertex verifies whether the state of each of its virtual vertices that corresponds to some edge e equals the state in the other endpoint of e. If this is not the case, the endpoint with the greater ID copies the state of the other endpoint for that virtual vertex. Consequently, all edges have consistent representations, i.e., the same state in both their endpoints, in the entire graph. Now, a self-stabilizing MIS or vertex-coloring algorithm can be simulated correctly on the line graph in order to produce a self-stabilizing maximal matching or edge coloring of the input graph. In conjunction with Theorems 4.3 and 4.5, this implies the following result.
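The line-graph reduction can be sketched as follows (a toy centralized simulation; the helper names are ours): build the line graph, run any MIS routine on it, and read the result back as a maximal matching of the original graph.

```python
from itertools import combinations

# Sketch of the line-graph simulation: vertices of the line graph are
# the edges of G, and two are adjacent iff they share an endpoint.

def line_graph(adj):
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    ladj = {e: set() for e in edges}
    for e, f in combinations(edges, 2):
        if e & f:                      # adjacent edges share an endpoint
            ladj[e].add(f)
            ladj[f].add(e)
    return ladj

def maximal_matching(adj, mis_solver):
    """mis_solver: any MIS routine taking an adjacency dict.
    An MIS of the line graph is exactly a maximal matching of G."""
    return mis_solver(line_graph(adj))
```

The same construction turns any vertex-coloring routine into an edge-coloring routine.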

###### Theorem 4.7.

Given an arbitrary graph G, our self-stabilizing algorithms produce a maximal matching and a proper (2Δ − 1)-edge-coloring within O(Δ + log* n) stabilization time.

## 5 Edge Coloring within O(Δ+log∗n) Rounds in the CONGEST Model and O(Δ+logn) Rounds in the Bit-Round Model

We employ our techniques in order to compute edge colorings using small messages. The algorithm consists of two stages. The first stage constructs an O(Δ²)-edge-coloring from scratch, and the second stage computes a (2Δ − 1)-edge-coloring from this O(Δ²)-edge-coloring. We remark that we cannot use the algorithm of Linial for the first stage, since its message complexity in the case of edge coloring is quite large. Instead, we do the following. We invoke Kuhn’s algorithm [44] for 2-defective O(Δ²)-edge-coloring. This algorithm orients all edges towards the endpoints with greater IDs. Then, each vertex assigns its outgoing edges distinct colors from the set {0, 1, …, Δ − 1}. It also assigns its incoming edges distinct colors from the same range. Consequently, each edge obtains a pair of colors, one color from each of its endpoints. This is done within a single round by sending a message of size O(log Δ) per edge (in both directions).
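The first stage can be simulated as follows (a toy centralized sketch; vertex IDs double as the orientation order, and the counter-based color assignment is our own simplification of locally-distinct choices):

```python
# Sketch (our simulation) of the first stage: orient each edge toward
# the endpoint with the larger ID, let every vertex assign its outgoing
# and its incoming edges locally-distinct colors from {0, ..., Delta-1},
# and record the resulting pair <alpha, beta> per edge.

def pair_edge_coloring(adj):
    out_next = {v: 0 for v in adj}     # next free out-color at v
    in_next = {v: 0 for v in adj}      # next free in-color at v
    colors = {}
    for u in adj:
        for v in adj[u]:
            if u < v:                  # edge oriented from u toward v
                alpha, beta = out_next[u], in_next[v]
                out_next[u] += 1
                in_next[v] += 1
                colors[(u, v)] = (alpha, beta)
    return colors
```

At each vertex, out-edges get distinct first coordinates and in-edges distinct second coordinates, which is exactly why each pair-color class has defect at most 2.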

Each color of an edge can be represented as an ordered pair ⟨α, β⟩, where α, β ∈ {0, 1, …, Δ − 1}. Observe that the set of edges with the same ⟨α, β⟩-color consists of paths and cycles, since each vertex on such an edge has at most one other edge of this set incident on it. This is because the defect of the coloring is 2. To remove the defect we run the Cole–Vishkin coloring algorithm [15] on the edges of each color class in parallel, and assign a new color to each edge in the form ⟨α, β, γ⟩, γ ∈ {1, 2, 3}. The first two indices are the result of the first stage, and the rightmost index is the result of the Cole–Vishkin invocation.
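One iteration of the Cole–Vishkin technique on an oriented path can be sketched as follows (a toy harness of our own; the real algorithm repeats this step O(log* n) times and then reduces to 3 colors): each vertex compares its color with its successor’s, finds the lowest bit position where they differ, and takes twice that position plus its own bit there as its new, exponentially shorter color.

```python
# Sketch of one Cole-Vishkin step on an oriented path. The endpoint
# with no successor pretends its successor differs from it at bit 0.

def cv_step(colors, succ):
    """colors: {v: int}, a proper coloring; succ[v]: next vertex or None."""
    new = {}
    for v, c in colors.items():
        other = colors[succ[v]] if succ[v] is not None else c ^ 1
        diff = c ^ other
        i = (diff & -diff).bit_length() - 1   # lowest differing bit position
        new[v] = 2 * i + ((c >> i) & 1)
    return new
```

Properness is preserved: consecutive vertices either pick different positions i (so the new colors lie in disjoint pairs {2i, 2i+1}) or the same position, where their bits differ by the choice of i.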

Next, we compute a (2Δ − 1)-edge-coloring from the O(Δ²)-edge-coloring as follows. In each round both endpoints of an edge hold its color, which will from now on be represented as an ordered pair ⟨a, b⟩, as in Algorithm AG, rather than as a triple. Consequently, each endpoint can check for conflicts among the edges incident on it. For each edge with a conflict at an endpoint, the endpoint sends a message over this edge (consisting of a single bit) to notify the other endpoint about the conflict. Then, for each edge, both of its endpoints know whether it has a conflict with some adjacent edge or not. If the current edge color is ⟨a, b⟩, and there is a conflict, the new color is the one prescribed by the update rule of Algorithm AG. Otherwise, it becomes ⟨a − 1, b⟩. Both endpoints update the new color of their edge. This is done within a single round and by exchanging just a single bit over each edge. Then all vertices of the graph are ready to proceed to the next round and perform it in a similar way. The algorithm stops once all edges have colors of the form ⟨0, b⟩.