1 Introduction
Graph coloring is a central problem in graph theory and has numerous applications in diverse areas of computer science. A proper coloring of a graph assigns a color to every vertex from a given palette of colors such that no edge is monochromatic, i.e., has the same color on both endpoints. An important and well-studied case of graph coloring problems is the $(\Delta+1)$ coloring problem, where $\Delta$ is the maximum degree of the graph. Not only does every graph admit a $(\Delta+1)$ coloring, but remarkably, any partial coloring of vertices of a graph can be extended to a proper $(\Delta+1)$ coloring of all vertices: simply pick uncolored vertices in any order and assign each vertex a color not yet assigned to any of its neighbors; since the maximum degree is $\Delta$, such a color always exists.
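The greedy extension argument above can be sketched in code; this is a hypothetical helper of our own (not from the paper), using `adj` as an adjacency-set dictionary:

```python
def greedy_extend(adj, partial=None):
    """Extend a partial coloring to a proper (Delta+1) coloring greedily.

    adj: {vertex: set of neighbors}; partial: {vertex: color}.
    An uncolored vertex has at most Delta neighbors, so one of the
    Delta+1 palette colors is always free for it.
    """
    delta = max((len(nbrs) for nbrs in adj.values()), default=0)
    palette = range(1, delta + 2)  # Delta + 1 colors
    coloring = dict(partial or {})
    for v in adj:
        if v in coloring:
            continue
        used = {coloring[u] for u in adj[v] if u in coloring}
        coloring[v] = next(c for c in palette if c not in used)
    return coloring

# Example: a 4-cycle (Delta = 2, so 3 colors suffice), with vertex 1 pre-colored.
adj = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
col = greedy_extend(adj, partial={1: 1})
assert all(col[u] != col[v] for u in adj for v in adj[u])
```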
In this paper, we study the $(\Delta+1)$ coloring problem in the context of processing massive graphs. The aforementioned property of the $(\Delta+1)$ coloring problem immediately implies a simple (sequential) greedy algorithm for this problem in linear time and space. However, when processing massive graphs, even this algorithm can be computationally prohibitive. This is due to various limitations arising in processing massive graphs, such as having to process the graph in a streaming fashion on a single machine, or in parallel across multiple machines due to storage limitations, or simply not having enough time to read the whole input. A natural question is then:
Can we design sublinear algorithms for the $(\Delta+1)$ coloring problem in modern models of computation for processing massive graphs?
We answer this fundamental question in the affirmative for several canonical classes of sublinear algorithms including (dynamic) graph streaming algorithms, sublinear time/query algorithms, and massively parallel computation (MPC) algorithms. We also prove new lower bounds to contrast the complexity of the $(\Delta+1)$ coloring problem in these models with two other closely related Locally Checkable Labeling (LCL) problems (see [39]), namely, the maximal independent set and the maximal matching. (Another closely related LCL problem is the edge coloring problem; however, as the output of the edge coloring problem is linear in the input size, one cannot hope to achieve nontrivial algorithms for it in models such as streaming or sublinear time algorithms, and hence we ignore this problem in this paper.)
1.1 Our Contributions
We present new sublinear algorithms for the $(\Delta+1)$ coloring problem which are either the first nontrivial ones or significantly improve the state-of-the-art. At the core of these algorithms is a simple meta-algorithm for this problem that we design in this paper; the sublinear algorithms are then obtained by efficiently implementing this meta-algorithm in each model separately.
A Meta-Algorithm for $(\Delta+1)$ Coloring. The main approach behind our meta-algorithm is to "sparsify" the $(\Delta+1)$ coloring problem to a list-coloring problem with lists/palettes of size $O(\log n)$ for every vertex. This may sound counterintuitive: while every graph admits a $(\Delta+1)$ coloring that can be found via a simple algorithm, there is no guarantee that it also admits a list-coloring with lists of size $O(\log n)$, let alone one that can be found via a sublinear algorithm. The following key structural result that we prove in this paper, however, paves the way for this sparsification.
Result 1 (Palette-Sparsification Theorem).
Let $G(V,E)$ be an $n$-vertex graph with maximum degree $\Delta$. Suppose for any vertex $v \in V$, we sample $O(\log n)$ colors $L(v)$ from $\{1,\ldots,\Delta+1\}$ independently and uniformly at random. Then with high probability there exists a proper $(\Delta+1)$ coloring of $G$ in which the color of every vertex $v$ is chosen from $L(v)$.
In Result 1, as well as throughout the paper, "with high probability" means with probability at least $1 - 1/\mathrm{poly}(n)$ for a polynomial of arbitrarily large (constant) degree in $n$.
Result 1 can be seen as a "sparsification" result for $(\Delta+1)$ coloring: after sampling $O(\log n)$ colors for each vertex randomly, the total number of potentially monochromatic edges, i.e., edges whose endpoints share a sampled color, is only $\widetilde{O}(n)$ with high probability (see Section 4 for a simple proof); at the same time, by computing a proper list-coloring of $G$ using only these edges—which is promised to exist by Result 1—we obtain a $(\Delta+1)$ coloring of $G$. As such, Result 1 provides a way of sparsifying the graph into only $\widetilde{O}(n)$ edges, while still allowing for recovery of a $(\Delta+1)$ coloring of the original graph. This sparsification serves as the central tool in our sublinear algorithms for the $(\Delta+1)$ coloring problem.
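As an illustration of the sparsification step, the following sketch (our own illustration; the constant `c` and the function names are assumptions, not the paper's) samples the lists and keeps only the edges that can possibly become monochromatic:

```python
import math
import random

def sample_palettes(n, delta, c=2):
    """Sample roughly c*log2(n) colors per vertex from [Delta+1]."""
    k = min(delta + 1, max(1, int(c * math.log2(max(n, 2)))))
    palette = list(range(1, delta + 2))
    return {v: set(random.sample(palette, k)) for v in range(n)}

def conflict_edges(edges, L):
    """Keep only the edges whose endpoints share a sampled color; a proper
    list-coloring of this subgraph (promised by Result 1) is automatically
    a proper coloring of the whole graph."""
    return [(u, v) for (u, v) in edges if L[u] & L[v]]
```

With lists of size $\Theta(\log n)$, each edge survives with probability $O(\log^2 n/\Delta)$, and since the graph has at most $n\Delta/2$ edges, this is how the $\widetilde{O}(n)$ bound on the number of surviving edges arises.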
We remark that, as stated, Result 1 only promises the existence of such a coloring (which can be found in exponential time), but in fact we show that there is a fast and simple procedure for finding the corresponding coloring, and this procedure is also used by our algorithms in each model. We also note that the bound of $O(\log n)$ colors in Result 1 is asymptotically optimal (see Proposition C.1).
Streaming Algorithms. Our Result 1 can be used to design a single-pass semi-streaming algorithm for the $(\Delta+1)$ coloring problem in the most general setting of graph streams, namely, dynamic streams that allow both insertions and deletions of edges (see Section 4.1 for details).
Result 2.
There exists a randomized single-pass dynamic streaming algorithm for the $(\Delta+1)$ coloring problem using $\widetilde{O}(n)$ space.
To our knowledge, the only previous semi-streaming algorithm for $(\Delta+1)$ coloring was the folklore $O(\log n)$-pass streaming simulation of the classical $O(\log n)$-round distributed/parallel (PRAM) algorithms for this problem (see, e.g., [37]). No $\widetilde{O}(n)$-space single-pass streaming algorithm was known for this problem even in insertion-only streams. This state of affairs was in fact similar to the case of the closely related maximal matching problem: the best known semi-streaming algorithm for this problem on dynamic streams uses $O(\log n)$ passes [35, 3], and it is provably impossible to solve this problem using $\widetilde{O}(n)$ space in a single pass over a dynamic stream [9] (although this problem is trivial in insertion-only streams). Considering this, one might have guessed that a similar lower bound also holds for the $(\Delta+1)$ coloring problem. We further prove a lower bound of $\Omega(n^2)$ on the space complexity of single-pass streaming algorithms for computing a maximal independent set even in insertion-only streams (see Theorem 5). Result 2 is in sharp contrast to these results, and shows that $(\Delta+1)$ coloring is provably simpler than both problems in the streaming setting.
Sublinear Time Algorithms. There exists a straightforward greedy algorithm that computes a $(\Delta+1)$ coloring of any given graph in linear time, i.e., in $O(n+m)$ time, where $m$ is the number of edges. Perhaps surprisingly, we show that one can improve upon the running time of this textbook algorithm by using Result 1.
Result 3.
There exists a randomized $\widetilde{O}(n\sqrt{n})$ time algorithm for the $(\Delta+1)$ coloring problem. Furthermore, any algorithm for this problem requires $\Omega(n\sqrt{n})$ time.
When designing sublinear (in the input size) time algorithms, specifying the exact data model is important, as the algorithm cannot even read the entire input once. In Result 3, we assume the standard query model for sublinear time algorithms on general graphs (see, e.g., Chapter 10 of Goldreich's book [23]), which allows for two types of queries: what is the $i$-th neighbor of a given vertex $v$, and whether a given pair of vertices are neighbors of each other or not (see Section 4.2 for details). To our knowledge, this is the first sublinear time algorithm for the $(\Delta+1)$ coloring problem. We also note that an important feature of our algorithm in Result 3 is that it is nonadaptive, i.e., it chooses all the queries to the graph beforehand, and thus the queries can be done in parallel.
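For concreteness, the two query types can be modeled with a small oracle; this is a minimal sketch of the model (class and method names are our own, not standard):

```python
class GraphOracle:
    """Query access to a graph: neighbor queries and pair queries,
    backed by an adjacency-set dictionary."""

    def __init__(self, adj):
        self._sorted = {v: sorted(nbrs) for v, nbrs in adj.items()}
        self._sets = {v: set(nbrs) for v, nbrs in adj.items()}

    def neighbor(self, v, i):
        """Return the i-th neighbor of v (0-indexed), or None if deg(v) <= i."""
        nbrs = self._sorted[v]
        return nbrs[i] if i < len(nbrs) else None

    def pair(self, u, v):
        """Return whether u and v are adjacent."""
        return v in self._sets[u]
```

A sublinear time algorithm in this model is charged one unit per `neighbor` or `pair` call, and its query complexity is the total number of such calls.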
In yet another contrast to the $(\Delta+1)$ coloring problem, we show that the problem of computing a maximal matching requires $\Omega(n^2)$ queries to the graph and hence $\Omega(n^2)$ time (see Theorem 6).
Massively Parallel Computation Algorithms. Another application of our Result 1 is a constant-round algorithm for the $(\Delta+1)$ coloring problem in the MPC model, which is a common abstraction of MapReduce-style computation frameworks (see Section 4.3 for a formal definition).
Result 4.
There exists a randomized MPC algorithm for the $(\Delta+1)$ coloring problem in at most two MPC rounds on machines with memory $\widetilde{O}(n)$.
Two recent papers considered graph coloring problems in the MPC model. Harvey et al. [26] designed algorithms that use $\widetilde{O}(n)$ memory per machine and find a $(1+o(1))\Delta$ coloring of a given graph—an algorithmically easier problem than $(\Delta+1)$ coloring—in $O(1)$ MPC rounds. Furthermore, Parter [42] designed an MPC algorithm that uses $O(n)$ memory per machine and finds a $(\Delta+1)$ coloring in $O(\log\log\Delta \cdot \log^*\Delta)$ rounds. (The algorithm of Parter [42] is stated in the Congested Clique model, but using the well-known connections between this model and the MPC model, see, e.g., [12, 22], this algorithm immediately extends to the MPC model.) Our Result 4 improves these results significantly: both the number of used colors as well as per-machine memory compared to [26], and round complexity (with at most a polylogarithmic factor more per-machine memory) compared to [42].
Maximal matching and maximal independent set problems have also been studied previously in the MPC model [35, 22, 32]. Currently, the best known algorithms for these problems with $\widetilde{O}(n)$ memory per machine require a super-constant number of rounds, both in the case of maximal matching [35] and in the case of maximal independent set [22, 32]. For the related problems of approximating the maximum matching and the minimum vertex cover, a recent set of results achieves $O(\log\log n)$-round MPC algorithms with $\widetilde{O}(n)$ memory per machine [7, 8, 22, 17] (these results however do not extend to the maximal matching problem). Our Result 4 hence is the first example that gives a constant-round MPC algorithm for one of the "classic four local distributed graph problems" (i.e., maximal independent set, maximal matching, $(\Delta+1)$ vertex coloring, and $(2\Delta-1)$ edge coloring; see, e.g., [41, 10, 20]), even when the memory per machine is as small as $\widetilde{O}(n)$.
Optimality of Our Sublinear Algorithms:
The space complexity of our streaming algorithm in Result 2 and the round complexity of our MPC algorithm in Result 4 are clearly optimal (to within polylog factors and constant factors, respectively). We further prove that the query and time complexity of our sublinear time algorithm in Result 3 are also optimal up to polylog factors (see Theorem 7).
Perspective: Beyond Greedy Algorithms.
Many graph problems admit simple greedy algorithms. Starting with Luby's celebrated distributed/parallel algorithm for the maximal independent set problem [36], there have been numerous attempts at adapting these greedy algorithms to different models of computation, including the models considered in this paper (see, e.g., [35, 34, 28, 26, 40, 5]). Typically these adaptations require multiple passes/rounds of computation, and this is for the fundamental reason that most greedy algorithms are inherently sequential: they require accessing the input graph in an adaptive manner based on decisions made thus far, which, although limited, still results in requiring multiple passes/rounds over the input.
Our work on $(\Delta+1)$ coloring bypasses this limitation of greedy algorithms by utilizing a completely different approach, namely, a nonadaptive sparsification of the input graph (Result 1) that in turn lends itself to space-, time-, and communication-efficient algorithms in a variety of different models. As such, our results can be seen as evidence that directly adapting greedy algorithms for graph problems may not necessarily be the best choice in these models. We believe that this viewpoint is an important (nontechnical) contribution of our paper, as it may pave the way for obtaining more efficient algorithms for other fundamental graph problems in these models.
1.2 Our Techniques
The main technical ingredient of our paper is Result 1. For intuition, consider two extreme cases: when the underlying graph is very dense, say $G$ is a clique on $\Delta+1$ vertices, and when the underlying graph is relatively sparse, say every vertex (except for one) has degree at most $\Delta/2$. Result 1 is easy to prove for either case, albeit by using entirely different arguments. For the former case, consider the bipartite graph consisting of the vertices of $G$ on one side and the set of colors $\{1,\ldots,\Delta+1\}$ on the other side, where each vertex $v$ on the graph side is connected to the $O(\log n)$ sampled colors of $L(v)$ on the color side. Using standard results from random graph theory, one can argue that this graph with high probability has a perfect matching, thus implying the list-coloring of $G$. For the latter case, consider the following simple greedy algorithm: iteratively sample a color for every vertex from the set $L(v)$ and assign the color to the vertex if it is not chosen by any of its neighbors so far. It is well known that this algorithm only requires $O(\log n)$ rounds when the number of colors is a constant factor larger than the degree (see, e.g., [44]). As such, the set of $O(\log n)$ colors sampled in the list $L(v)$ of each vertex is enough to "simulate" this algorithm in this case.
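The greedy procedure for the sparse case can be sketched as follows (a simplified illustration of our own, with a shared palette whose size is a constant factor larger than the maximum degree; the function name and parameters are assumptions):

```python
import random

def sampling_greedy(adj, palette_size, rounds):
    """Each round, every uncolored vertex proposes a uniformly random color;
    a proposal is kept only if no neighbor already holds or simultaneously
    proposes the same color.  With constant-factor color slack, this needs
    only O(log n) rounds with high probability."""
    color = {v: None for v in adj}
    for _ in range(rounds):
        proposal = {v: random.randint(1, palette_size)
                    for v in adj if color[v] is None}
        for v, c in proposal.items():
            if all(color[u] != c and proposal.get(u) != c for u in adj[v]):
                color[v] = c
        if all(c is not None for c in color.values()):
            break
    return color
```

Note that properness is invariant: a vertex keeps a color only if no neighbor holds it and no neighbor proposed it in the same round.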
To prove Result 1 in general, we need to interpolate between these two extreme cases. To do so, we exploit a graph decomposition result of [25] (see also [14]) for the $(\Delta+1)$ coloring problem, which allows for decomposing a graph into "sparse" and "dense" components. The proof for coloring the sparse components then more or less follows by simulating standard distributed algorithms in [44, 18] as discussed above. The main part, however, is to prove the result for dense components, which requires a global and non-greedy argument. Note that in general, we can always reduce the problem of finding a $(\Delta+1)$ coloring to an instance of the assignment problem on the bipartite graph discussed above. The difference is that we need to allow some color-vertices to be assigned to more than one graph-vertex when the size of the component exceeds the number of colors (as opposed to the case of cliques above, which only required finding a perfect matching). We show that if the original graph is "sufficiently close" to being a clique, then with high probability such an assignment exists in this bipartite graph, and we use this to prove the existence of the desired list-coloring of $G$.

Result 1 implies the sublinear algorithms we design in each model with a simple caveat: the list-coloring problem that needs to be solved in the sparsified graph is in general NP-hard, and hence using this result directly does not allow for a polynomial time implementation of our algorithms. We thus combine Result 1 with additional ideas specific to each model to turn these algorithms into polynomial time (and in fact even sublinear time) algorithms.
1.3 Recent Related Work
Independently and concurrently to our work, two other papers also considered the vertex coloring problem in settings related to this paper. Firstly, Parter and Su [43], improving upon the previous algorithm of Parter [42], gave an $O(\log^*\Delta)$-round Congested Clique algorithm for $(\Delta+1)$ coloring; this result also immediately implies an MPC algorithm for $(\Delta+1)$ coloring in $O(\log^*\Delta)$ rounds with $\widetilde{O}(n)$ memory per machine. Moreover, Bera and Ghosh [13] also studied the graph coloring problem in the streaming model and gave a single-pass streaming algorithm that, for any parameter $\epsilon > 0$, outputs a $(1+\epsilon)\Delta$ coloring of the input graph using $\widetilde{O}(n/\epsilon)$ space. Note that for the $(\Delta+1)$ coloring problem, this algorithm requires $\widetilde{O}(n\Delta)$ space, which is proportional to the input size.
Subsequent to our work, Chang et al. [15] further studied the $(\Delta+1)$ coloring problem and, among other results, gave an $O(\sqrt{\log\log n})$-round MPC algorithm for this problem on machines with memory as small as $n^{\alpha}$ for any constant $\alpha \in (0,1)$.
2 Preliminaries
Notation.
For any integer $k \geq 1$, we define $[k] := \{1,\ldots,k\}$. For a graph $G(V,E)$, we use $V(G)$ to denote the vertices of $G$, $E(G)$ to denote its edges, and $\deg(v)$ to denote the degree of a vertex $v$.
2.1 The Harris-Schneider-Su (HSS) Network Decomposition
In the proof of our Result 1, we use a network decomposition result of Harris, Schneider, and Su, designed for distributed algorithms for graph coloring in the LOCAL model [25]. We emphasize that we use this decomposition in a quite different way than the ones in distributed settings [25, 14].
The Harris-Schneider-Su network decomposition, henceforth HSS-decomposition, partitions a graph into sparse and dense regimes, measured with respect to a parameter $\varepsilon \in (0,1)$.
Definition 1 ($\varepsilon$-friend edges).
For any $\varepsilon \in (0,1)$, we say that an edge $(u,v)$ in a graph $G$ is an $\varepsilon$-friend edge iff $|N(u) \cap N(v)| \geq (1-\varepsilon)\Delta$. Let $E^{friend}_{\varepsilon}$ denote the set of all $\varepsilon$-friend edges.
Definition 2 ($\varepsilon$-dense vertices).
For any $\varepsilon \in (0,1)$, we say that a vertex $v$ in a graph $G$ is $\varepsilon$-dense iff the degree of $v$ in the graph $(V, E^{friend}_{\varepsilon})$ is at least $(1-\varepsilon)\Delta$. We use $V^{dense}_{\varepsilon}$ to denote the set of all $\varepsilon$-dense vertices.
Consider the graph $G^{friend}_{\varepsilon}$ defined as the subgraph of $G$ on the set of $\varepsilon$-dense vertices containing only the $\varepsilon$-friend edges. Let $C_1,\ldots,C_k$ be the connected components of $G^{friend}_{\varepsilon}$. The HSS-decomposition partitions the vertices of the graph into $V^{sparse}_{\varepsilon}$ and $V^{dense}_{\varepsilon}$, where $V^{dense}_{\varepsilon}$ is partitioned into $C_1,\ldots,C_k$ with the following properties, given by Lemma 2.1 and Proposition 2.2.
Lemma 2.1 ([25]).
Any connected component $C_i$ of $G^{friend}_{\varepsilon}$ has size at most $(1+O(\varepsilon))\Delta$. Moreover, any vertex $v \in C_i$ has at most:

$O(\varepsilon\Delta)$ neighbors (in $G$) in $V^{dense}_{\varepsilon} \setminus C_i$, i.e., $|N(v) \cap (V^{dense}_{\varepsilon} \setminus C_i)| = O(\varepsilon\Delta)$.

$O(\varepsilon\Delta)$ non-neighbors (in $G$) in $C_i$, i.e., $|C_i \setminus N(v)| = O(\varepsilon\Delta)$.
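The objects in Definitions 1 and 2 and the components $C_i$ can be computed directly from the definitions; the following sketch (a naive $O(m\Delta)$ computation of our own, not the decomposition algorithm of [25]) illustrates them:

```python
def hss_decomposition(adj, eps):
    """Return the eps-dense vertices and the connected components of the
    friend graph restricted to dense vertices (the sets C_1, ..., C_k)."""
    delta = max(len(nbrs) for nbrs in adj.values())
    thresh = (1 - eps) * delta
    # eps-friend edges: endpoints share at least (1-eps)*Delta neighbors
    friend = {(u, v) for u in adj for v in adj[u]
              if u < v and len(adj[u] & adj[v]) >= thresh}
    fdeg = {v: 0 for v in adj}
    for u, v in friend:
        fdeg[u] += 1
        fdeg[v] += 1
    dense = {v for v in adj if fdeg[v] >= thresh}
    # connected components over dense vertices and friend edges only
    comps, seen = [], set()
    for s in dense:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            seen.add(v)
            stack.extend(u for u in adj[v] if u in dense
                         and (min(u, v), max(u, v)) in friend)
        comps.append(comp)
    return dense, comps
```

On a clique every vertex is dense and there is a single component, while on a path no edge is a friend edge for small $\varepsilon$, matching the two extremes discussed in Section 1.2.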
Define the $\varepsilon$-sparse vertices as $V^{sparse}_{\varepsilon} := V \setminus V^{dense}_{\varepsilon}$, i.e., the vertices which are not $\varepsilon$-dense. The main property of sparse vertices we are interested in is as follows.
Proposition 2.2.
Let $v$ be any $\varepsilon$-sparse vertex in $G$. Then, the total number of edges spanning the neighborhood of $v$ is at most $\binom{\Delta}{2} - \Omega(\varepsilon^2\Delta^2)$, that is, $|E(N(v))| \leq \binom{\Delta}{2} - \Omega(\varepsilon^2\Delta^2)$.
Proof.
If $\deg(v)$ is less than $\Delta$, to prove the proposition, we add some dummy vertices which are only connected to $v$ so that $\deg(v)$ becomes exactly $\Delta$. By doing so, the number of edges spanning the neighborhood of $v$ does not change. As $v$ is an $\varepsilon$-sparse vertex, at least $\varepsilon\Delta$ of its neighbors have at most $(1-\varepsilon)\Delta$ neighbors in common with $v$. This means that any of those vertices is not connected to at least $\varepsilon\Delta - 1$ other vertices in $N(v)$. As such, the total number of edges spanning the neighborhood of $v$ is at most $\binom{\Delta}{2} - \frac{\varepsilon\Delta(\varepsilon\Delta - 1)}{2} = \binom{\Delta}{2} - \Omega(\varepsilon^2\Delta^2)$.
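Plugging in concrete numbers makes the bound tangible; this small check (our own, following the padding argument above) counts the guaranteed missing edges:

```python
from math import comb

def sparse_neighborhood_edge_bound(delta, eps):
    """Upper bound on |E(N(v))| for an eps-sparse vertex v: at least
    eps*Delta neighbors each miss at least eps*Delta - 1 edges inside
    N(v), divided by 2 for double counting."""
    missing = eps * delta * (eps * delta - 1) / 2
    return comb(delta, 2) - missing

# Example: Delta = 100, eps = 0.1 -> at least 45 of the 4950 possible
# neighborhood edges are missing.
assert sparse_neighborhood_edge_bound(100, 0.1) == 4905.0
```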
2.2 A Simple Extension of the HSS-Decomposition
It would be more convenient for us to work with a slightly different variant of the HSS-decomposition, which we introduce here.
Lemma 2.3 (Extended HSS-Decomposition).
For any parameter $\varepsilon \in (0,1)$, any graph $G(V,E)$ can be decomposed into a collection of vertex sets $V = V^{sparse} \sqcup K_1 \sqcup \cdots \sqcup K_k$ such that:

$V^{sparse} \subseteq V^{sparse}_{\varepsilon}$, i.e., any vertex in $V^{sparse}$ is $\varepsilon$-sparse.

For any $i \in [k]$, $K_i$ has the following properties (we refer to $K_i$ as an almost-clique):

$(1-\varepsilon)\Delta \leq |K_i| \leq (1+O(\varepsilon))\Delta$.

Any $v \in K_i$ has at most $O(\varepsilon\Delta)$ neighbors outside of $K_i$.

Any $v \in K_i$ has at most $O(\varepsilon\Delta)$ non-neighbors inside of $K_i$.
Two main differences between Lemma 2.3 and the original HSS-decomposition are: the size of each $K_i$ is now lower bounded (the HSS-decomposition does not lower bound the size of the components $C_i$), and the number of all neighbors of any vertex $v \in K_i$ outside $K_i$ is now bounded (not only the neighbors to other dense vertices, as in the original HSS-decomposition).
Proof of Lemma 2.3.
Consider the HSS-decomposition with parameter $2\varepsilon$. By Lemma 2.1, $V$ can be decomposed into the $2\varepsilon$-sparse vertices $V^{sparse}_{2\varepsilon}$ and components $C_1,\ldots,C_{k'}$ of $2\varepsilon$-dense vertices. Let $K_1,\ldots,K_k$ be the components among these that contain at least one $\varepsilon$-dense vertex.
We define $V^{sparse}$ as the set of vertices in $V \setminus (K_1 \cup \cdots \cup K_k)$, i.e., all vertices that are not in the connected components defined above. Clearly, $V = V^{sparse} \sqcup K_1 \sqcup \cdots \sqcup K_k$. On the other hand, $V^{sparse}$ does not contain any $\varepsilon$-dense vertices (as we removed $K_1,\ldots,K_k$), and hence $V^{sparse} \subseteq V^{sparse}_{\varepsilon}$. This proves Property (1). We now prove Property (2).
Fix any $i \in [k]$ and let $K_i$ be any connected component that contains an $\varepsilon$-dense vertex. Firstly, since $K_i$ is a connected component of the HSS-decomposition with parameter $2\varepsilon$, by Lemma 2.1, any vertex in $K_i$ has at most $O(\varepsilon\Delta)$ non-neighbors inside $K_i$. This proves Property (2c).
Now let $v$ be any $\varepsilon$-dense vertex in $K_i$. As $v$ is $\varepsilon$-dense, by Definition 2, $v$ has at least $(1-\varepsilon)\Delta$ $\varepsilon$-friend neighbors. Let $F(v)$ be the set of these vertices. By Definition 1, any of these vertices has at least $(1-\varepsilon)\Delta$ shared neighbors with $v$. As the maximum degree of any vertex is $\Delta$, this implies that any two vertices in $F(v)$ have at least $(1-2\varepsilon)\Delta$ common neighbors with each other. Furthermore, since $F(v) \subseteq N(v)$ and each $u \in F(v)$ shares at least $(1-\varepsilon)\Delta$ neighbors with $v$, each vertex in $F(v)$ has at least $|F(v)| - \varepsilon\Delta - 1 \geq (1-2\varepsilon)\Delta - 1$ neighbors in $F(v)$, and each such edge is a $2\varepsilon$-friend edge. Thus all vertices in $F(v)$ are $2\varepsilon$-dense. Moreover, as all vertices in $F(v)$ are connected to $v$ by an $\varepsilon$-friend edge (and hence also a $2\varepsilon$-friend edge), the vertices in $F(v)$ all appear in the same connected component as the vertex $v$. This implies that $|K_i| \geq |F(v)| + 1 > (1-\varepsilon)\Delta$. Moreover, by Property (2c) we already proved, any vertex $v \in K_i$ has at most $O(\varepsilon\Delta)$ non-neighbors in $K_i$, and hence $|K_i| \leq \deg(v) + O(\varepsilon\Delta) + 1 \leq (1+O(\varepsilon))\Delta$. This proves Property (2a).
Finally, the above argument, together with the lower bound on the size of $K_i$, also implies that each vertex $v \in K_i$ is connected to at least $|K_i| - O(\varepsilon\Delta) \geq (1-O(\varepsilon))\Delta$ vertices inside $K_i$. As such, $v$ can only have $O(\varepsilon\Delta)$ neighbors outside $K_i$, proving Property (2b).
3 The Palette-Sparsification Theorem
We prove our Result 1 in this section; see Appendix C for further remarks on the optimality of the bounds in this result, as well as the (im)possibility of extending this result to $c$-coloring for values of $c$ strictly smaller than $\Delta+1$.
Theorem 1 (Palette-Sparsification Theorem).
Let $G(V,E)$ be any $n$-vertex graph and $\Delta$ be the maximum degree in $G$. Suppose for each vertex $v \in V$, we independently pick a set $L(v)$ of colors of size $\Theta(\log n)$ uniformly at random from $[\Delta+1]$. Then with high probability there exists a proper $(\Delta+1)$ coloring $c$ of $G$ such that for all vertices $v \in V$, $c(v) \in L(v)$.
Let us start by fixing the parameters used in the proof of Theorem 1. Let $\varepsilon$ be a sufficiently small constant and $\beta$ be a sufficiently large integer. (In the interest of simplifying the exposition of the proof, we made no attempt to optimize the constants; the proof of the theorem can be made to work with much smaller constants than the ones used here.) In Theorem 1, we make each vertex sample $\beta\log n$ colors in $L(v)$. We assume that $\Delta + 1 > \beta\log n$; otherwise Theorem 1 is trivial, as we sampled all colors for each vertex and every graph admits a $(\Delta+1)$ coloring. For the purpose of the analysis, we assume that the set $L(v)$ of each vertex is the union of three sets $L_1(v), L_2(v), L_3(v)$, named batch one, two, and three, respectively, where each $L_i(v)$ for $i \in \{1,2,3\}$ is created by picking each color in $[\Delta+1]$ independently and with probability $\frac{\beta\log n}{3(\Delta+1)}$. While this distribution is not identical to the one in Theorem 1, it is easy to see that proving the theorem for this distribution also implies the original result, as in this new distribution, with high probability, no vertex samples more than $\beta\log n$ colors.
We use the extended HSS-decomposition with parameter $\varepsilon$ (Lemma 2.3): the graph $G$ is decomposed into $V^{sparse} \sqcup K_1 \sqcup \cdots \sqcup K_k$, where each $K_i$ for $i \in [k]$ is an almost-clique.
We prove Theorem 1 in three parts. In the first part, we argue that by only using the colors in the first batch $L_1(\cdot)$, we can color all the vertices in $V^{sparse}$. This part is mostly standard and more or less follows from the results in [18, 25, 14] by simulating a distributed local algorithm using only the colors in the first batch. We hence concentrate the bulk of our effort on proving the next two parts, which are the key components of the proof. We first show that using only the colors in the second batch, we can color a relatively large fraction of vertices in each almost-clique at a rate of two vertices per color (assuming the number of non-edges in the almost-clique is not too small). This allows us to "save" extra colors for coloring the remainder of the almost-cliques, which we do in the last part. We note that unlike the coloring of the first part, which is based on a "local" coloring scheme (in which we determine the color of each vertex based on the colors assigned to each of its neighbors, similar to the greedy algorithm), the coloring of the second and third parts is done in a "global" manner, in which the color of a vertex is determined based on some global properties of the graph, not only the local neighborhood of the vertex.
Partial Coloring Function. Define a function $C: V \to [\Delta+1] \cup \{\perp\}$ that assigns one of the colors in $[\Delta+1]$, plus the null color $\perp$, to the vertices, such that no two neighboring vertices have the same color from $[\Delta+1]$ (but they may both have the null color $\perp$). We refer to $C$ as a partial coloring function and refer to vertices that are colored by $C$ in $[\Delta+1]$ as having a valid color. Furthermore, we say that a valid color $c$ is available to a vertex $v$ in the partial coloring $C$ iff $C$ does not assign $c$ to any neighbor of $v$. The set of available colors for $v$ is denoted by $A(v)$.
It is immediate that if $C$ does not assign a null color to any vertex, then the resulting coloring is a proper $(\Delta+1)$ coloring of the graph. We start with a partial coloring function which assigns the null color to all vertices initially, and modify this coloring in each part to remove all null colors.
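In code, a partial coloring and the available-color set $A(v)$ are straightforward; these are small helpers of our own for illustration only (`None` plays the role of the null color $\perp$):

```python
def available_colors(adj, coloring, v, num_colors):
    """The set A(v): colors in [1..num_colors] not assigned by the partial
    coloring to any neighbor of v (None entries denote the null color)."""
    used = {coloring.get(u) for u in adj[v]}
    return {c for c in range(1, num_colors + 1) if c not in used}

def is_valid_partial(adj, coloring):
    """Check that no edge has the same non-null color on both endpoints."""
    return all(coloring.get(u) is None or coloring.get(u) != coloring.get(v)
               for u in adj for v in adj[u])
```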
3.1 Part One: Coloring Sparse Vertices.
Recall the definition of sparse vertices in the extended HSS-decomposition from Section 2. In the first part of the proof, we show that we can color all sparse vertices using only the colors in the first batch $L_1(\cdot)$.
Lemma 3.1.
With high probability, there exists a partial coloring function $C_1$ such that for all vertices $v \in V^{sparse}$, $C_1(v) \neq \perp$ and $C_1(v) \in L_1(v)$.
We prove this lemma by showing that one can simulate a natural greedy algorithm (similar but not identical to the algorithm of [18]) for coloring sparse vertices using only the colors in the first batch. The first step is to use the first color in $L_1(v)$, chosen uniformly at random from $[\Delta+1]$, for all vertices, to color a large fraction of the vertices in $V^{sparse}$; the main property of this coloring is that after this step, any uncolored sparse vertex has an "excess" of colors compared to the number of edges it has to other uncolored sparse vertices. This step follows from the proof of the OneShotColoring procedure in [18, 25, 14], and we simply present a proof sketch for intuition. We then use the remaining colors in $L_1(v)$ for each uncolored vertex and color them greedily, using the fact that the number of available colors is sufficiently larger than the number of neighbors of each uncolored vertex in every step. This part is also similar to the algorithm in [18] (see also [25, 14]) but uses a different argument, as here we cannot sample the colors for each vertex adaptively (the colors in $L_1(v)$ are sampled "at the beginning" of the greedy algorithm).
3.2 Part Two: Initial Coloring of Almost-Cliques.
Recall that by Lemma 3.1, after the first part, we have a partial coloring $C_1$ that assigns a valid color to all sparse vertices. We now design a partial coloring $C_2$, where $C_2(v) = C_1(v)$ for all $v \in V^{sparse}$, and $C_2(v) = \perp$ for the remaining vertices initially, but some additional vertices will also be assigned a valid color by the end of this part using the second batch $L_2(\cdot)$.
Fix the almost-cliques $K_1,\ldots,K_k$. Define $\bar{K}_i$ as the complement-graph of $K_i$: the graph on the same set of vertices as $K_i$ obtained by switching the edges and non-edges in $K_i$. Note that any two neighboring vertices in $\bar{K}_i$ can be colored the same (in $G$). We exploit this fact in the following definition.
Definition 3 (Colorful Matching).
We say that a matching $M$ in the complement-graph $\bar{K}_i$ of an almost-clique $K_i$ is a colorful matching with respect to the partial coloring $C$ iff:

For any $(u,v) \in M$, there is a color $c(u,v)$ s.t. $c(u,v) \in L_2(u) \cap L_2(v)$ and $c(u,v) \in A(u) \cap A(v)$.

For any pair of edges $e \neq e' \in M$, $c(e) \neq c(e')$.
By finding a colorful matching in a complement-graph $\bar{K}_i$, we can "save" on the colors needed for coloring $K_i$, as we can color the vertices of the matching at a rate of two vertices per color.
We now iterate over the complement-graphs $\bar{K}_1,\ldots,\bar{K}_k$ one by one, and show that there exists a sufficiently large colorful matching in each complement-graph, even after we update the coloring for vertices matched by the colorful matchings in previous complement-graphs.
Lemma 3.2.
Fix any complement-graph $\bar{K}_i$ and let $C$ be any partial coloring in which $C(v) = \perp$ for all $v \in K_i$. Suppose the average degree of $\bar{K}_i$ is $\bar{d}$. Then, there exists a colorful matching of size at least $\Omega(\bar{d})$ in $\bar{K}_i$ with high probability (over the randomness of $L_2(\cdot)$).
We start by some definitions. For an edge $(u,v)$ of $\bar{K}_i$, a color $c$ is available to this edge if $c$ is available to both $u$ and $v$ under $C$. For a set of colors $S$, let $a_S(e)$ be the number of available colors for an edge $e$ in $S$. For a set of edges $F$, we define $a_S(F) := \sum_{e \in F} a_S(e)$. We say that an edge $e = (u,v)$ sampled an available color in $S$ iff there exists an available color $c \in S$ for $e$ such that $c \in L_2(u) \cap L_2(v)$. Lemma 3.2 relies on the following lemma.
Lemma 3.3.
Let $H$ be a subgraph of $\bar{K}_i$ and $E_H$ be its edge-set. Let $S$ be any set of colors such that $a_S(E_H)$ is sufficiently large. If for each vertex in $H$, we sample each color in $S$ independently with probability $\Theta(\log n/\Delta)$, then with high probability, there is an edge in $E_H$ that samples an available color.
Proof.
Let $a := a_S(E_H)$ and $q := 1/\sqrt{a}$. We argue that if each vertex samples each color in $S$ with probability $q$, then with a constant probability, there is an edge in $E_H$ that samples an available color. We then argue that sampling with the rate $\Theta(\log n/\Delta)$ of the lemma statement can be seen as performing this experiment independently $\Omega(\log n)$ times, and obtain the final high probability bound.
For an edge $e \in E_H$, let $X_e$ be an indicator random variable which is $1$ iff $e$ sampled an available color (in the experiment with sampling probability $q$). Any fixed available color for $e$ is sampled by both endpoints of $e$ with probability $q^2 = 1/a$, and since $a_S(e) \leq a$, we have $\Pr(X_e = 1) = 1 - (1 - 1/a)^{a_S(e)} \geq \frac{a_S(e)}{2a}$ (as $(1-x)^k \leq 1 - kx + (kx)^2$ for $x \in [0,1]$).
Define $X := \sum_{e \in E_H} X_e$. We prove $\Pr(X = 0) \leq 1 - \Omega(1)$, which implies that with constant probability, an edge in $E_H$ samples an available color.
Firstly, notice that $\mathbb{E}[X] = \sum_{e \in E_H} \Pr(X_e = 1) \geq 1/2$. We prove that the variance of $X$ is not much larger than its expectation, and use Chebyshev's inequality to prove the bound on $\Pr(X = 0)$. By definition, $\mathrm{Var}[X] = \sum_{e} \mathrm{Var}[X_e] + \sum_{e \neq e'} \mathrm{Cov}[X_e, X_{e'}]$. Since each $X_e$ is an indicator random variable, $\sum_e \mathrm{Var}[X_e] \leq \mathbb{E}[X]$, hence it only remains to bound the covariance terms. For any pair of edges in $E_H$, if they do not share a common endpoint, then the variables $X_e$ and $X_{e'}$ are independent (hence their covariance is $0$), but if they share a common endpoint, their covariance may be nonzero. However, in this case, $\mathrm{Cov}[X_e, X_{e'}] \leq \mathbb{E}[X_e \cdot X_{e'}] \leq \Pr(X_e = 1)$. By Property (2c) of Lemma 2.3, each vertex in $\bar{K}_i$ has at most $O(\varepsilon\Delta)$ neighbors (as edges in $\bar{K}_i$ are non-edges in the almost-clique $K_i$). As such, there are at most $O(\varepsilon\Delta)$ edges that share a common endpoint with any edge $e$. Let $N(e)$ denote the set of edges in $E_H$ that share an endpoint with $e$. We have
$\mathrm{Var}[X] \leq \mathbb{E}[X] + \sum_{e \in E_H} \sum_{e' \in N(e)} \mathrm{Cov}[X_e, X_{e'}] = O(\mathbb{E}[X]^2)$,
where the last equation uses the lower bound on $a_S(E_H)$ in the lemma statement. By Chebyshev's inequality, $\Pr(X = 0) \leq \frac{\mathrm{Var}[X]}{\mathbb{E}[X]^2} \leq 1 - \Omega(1)$.
So if we sample each color with probability $q$, there is an edge that samples an available color with constant probability. By sampling the colors at the rate $\Theta(\log n/\Delta)$, we can repeat this trial at least $\Omega(\log n)$ times independently, and obtain that with high probability, there is an edge that sampled an available color.
We are now ready to prove Lemma 3.2.
Proof of Lemma 3.2.
We construct the colorful matching in the following manner. We iterate over the colors in $[\Delta+1]$ (in an arbitrary order) and for each color $c$, we find the vertices which sampled $c$ in $L_2(\cdot)$ (this choice is independent across colors by the sampling process that defines $L_2$). If $c$ is available for some edge $(u,v)$ of $\bar{K}_i$ between these vertices, we add $(u,v)$ with color $c$ to the matching, delete this edge together with its endpoints from the graph, and move on to the next color. Clearly the resulting matching will be colorful (as in Definition 3). It thus remains to lower bound the size of this matching.
Let $H$ be $\bar{K}_i$ initially and $E_H$ be its edge-set, i.e., $E_H = E(\bar{K}_i)$. $S$ is also initially the set of all colors in $[\Delta+1]$. Consider the value of $a_S(E_H)$ throughout the process. When we are dealing with a color $c$, if we cannot find an edge where $c$ is available, we delete the color $c$ from $S$. In this case, $a_S(E_H)$ will decrease by at most $|E_H|$. Otherwise, we add an edge $(u,v)$ with color $c$ to our colorful matching, delete $c$ from $S$, and delete $u$ and $v$ from $H$. In this case, $a_S(E_H)$ will decrease by at most $|E_H| + O(\varepsilon\Delta) \cdot (\Delta+1)$, since each vertex in $\bar{K}_i$ has at most $O(\varepsilon\Delta)$ neighbors (by Property (2c) of the extended HSS-decomposition in Lemma 2.3) and $a_S(e)$ is at most $|S| \leq \Delta+1$ for any edge $e$. By Lemma 3.3 (as the process of sampling colors in $L_2(\cdot)$ is identical to the lemma statement but samples colors with higher probability), with high probability, only few colors are deleted from $S$ before we add a new edge to the colorful matching. So each time we add a new edge to the colorful matching, $a_S(E_H)$ decreases by at most $O(\varepsilon\Delta^2)$ with high probability. We now lower bound the initial value of $a_S(E_H)$, which allows us to bound the number of times an edge is added to the colorful matching.
Let $(u,v)$ be an edge in $\bar{K}_i$. Both $u$ and $v$ belong to the almost-clique $K_i$ in the extended HSS-decomposition, and hence by Lemma 2.3, each has at most $O(\varepsilon\Delta)$ neighbors outside $K_i$. This suggests that even if $C$ has assigned a color to all vertices except for $u$ and $v$, there are at least $(\Delta+1) - O(\varepsilon\Delta) = (1-O(\varepsilon)) \cdot \Delta$ available colors for the edge $(u,v)$ (recall that, in the setting of Lemma 3.2, no vertex of $K_i$ itself is colored). Moreover, by Lemma 2.3, the number of vertices in the almost-clique $K_i$, and hence also in $\bar{K}_i$, is $\Theta(\Delta)$. As such, initially
$a_S(E_H) \geq |E_H| \cdot (1-O(\varepsilon)) \cdot \Delta = \Omega(\bar{d} \cdot \Delta^2)$
by the choice of $\varepsilon$ (recall that $|E_H| = \Omega(\bar{d}\Delta)$, as the average degree of $\bar{K}_i$ is $\bar{d}$). Consequently, before $a_S(E_H)$ becomes too small to apply Lemma 3.3, we would have added at least $\Omega(\bar{d})$ edges to the colorful matching with high probability, finalizing the proof.
The coloring $C_2$ is then computed as follows. We iterate over the almost-cliques $K_1,\ldots,K_k$ and for each one, we pick a colorful matching $M_i$ of size $\Omega(\bar{d}_i)$, where $\bar{d}_i$ is the average degree of $\bar{K}_i$ (by our choice of $\varepsilon$); by Lemma 3.2, we find this matching with high probability. For each edge $(u,v)$ of this colorful matching, we set $C_2(u) = C_2(v) = c(u,v)$. By Definition 3, this is a valid partial coloring. We then move to the next almost-clique (and use Lemma 3.2 with the updated $C_2$).
3.3 Part Three: Final Coloring of Almost-Cliques.
We now finalize the coloring of the almost-cliques and obtain a coloring $C_3$ that assigns a valid color to all vertices. Initially, $C_3(v) = C_2(v)$ for all $v \in V$. We then iterate over the almost-cliques $K_1,\ldots,K_k$ and update $C_3$ to assign a valid color to all vertices of the current almost-clique. For any $i \in [k]$, define $U_i \subseteq K_i$ as the set of vertices of $K_i$ that are not yet assigned a valid color in $C_3$.
Definition 4 (Palette-Graph).
For any almost-clique $K_i$ in $G$ and a partial coloring $C$, we define the palette-graph $B_i$ of the almost-clique with respect to $C$ as follows:

$B_i$ is a bipartite graph between the uncolored vertices in $K_i$ (i.e., $U_i$) and the colors $[\Delta+1]$.

There exists an edge between a vertex $v \in U_i$ and a color $c$ iff the color $c$ is available to the vertex $v$ according to $C$ (i.e., $c \in A(v)$) and moreover $c \in L_3(v)$.
Suppose we are able to find a matching in the palette-graph of an almost-clique that matches all vertices in . Let be the matched pair of . We set and correctly color all vertices in this almost-clique, and then continue to the next almost-clique. The following lemma proves that with high probability, we can find such a matching in every almost-clique.
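A minimal sketch of this step in Python, assuming the palette-graph of Definition 4 is represented as an adjacency dict from uncolored vertices to the colors they can take. All names are hypothetical, and the saturating matching is found here with Kuhn's augmenting-path algorithm purely for illustration, not as the paper's own procedure:

```python
def palette_graph(uncolored, palette, sampled, available):
    """Sketch of Definition 4: a bipartite graph between uncolored
    vertices and colors, with an edge (v, c) iff color c is available
    to v AND v sampled c. Represented as a dict v -> list of colors.

    sampled   : function (v, c) -> True iff v sampled color c
    available : function (v, c) -> True iff no colored neighbor of v uses c
    """
    return {v: [c for c in palette if available(v, c) and sampled(v, c)]
            for v in uncolored}


def saturating_matching(palette_adj):
    """Find a matching covering every uncolored vertex via augmenting
    paths (Kuhn's algorithm). Returns a dict vertex -> color, or None
    if no matching saturates the vertex side."""
    match = {}                      # color -> vertex currently using it

    def try_augment(v, seen):
        for c in palette_adj[v]:
            if c not in seen:
                seen.add(c)
                # Color c is free, or its current owner can be re-matched.
                if c not in match or try_augment(match[c], seen):
                    match[c] = v
                    return True
        return False

    for v in palette_adj:
        if not try_augment(v, set()):
            return None             # some uncolored vertex cannot be matched
    return {v: c for c, v in match.items()}
```

If `saturating_matching` succeeds, assigning each uncolored vertex its matched color yields a proper coloring of the almost-clique, exactly as described above.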
Lemma 3.4.
Let be any almost-clique in and be the partial coloring obtained after processing almost-cliques . With high probability (over the randomness of the third batch), there exists a matching in the palette-graph of that matches all vertices in .
The proof of this lemma is based on the following result on the existence of large matchings in certain random graphs, which we prove in this paper.
Lemma 3.5.
Suppose and . Let be a bipartite graph such that:

and ;

each vertex in has degree at least and at most ;

the average degree of vertices in is at least .
A subgraph of obtained by sampling each edge with probability has a matching of size with high probability.
The proof of Lemma 3.5 is quite technical and hence we postpone it to Section 3.5 to keep the flow of the current argument. We now use this result to prove Lemma 3.4.
Proof of Lemma 3.4.
Define as the bipartite graph with the same vertex set as the palette-graph of such that there is an edge between a vertex and a color iff is available to (the edges in are a superset of those in the palette-graph, as an edge can appear in even if ). By this definition, the palette-graph of is a subgraph of chosen by picking each edge independently with probability (by the choice of ).
We apply Lemma 3.5 to a properly chosen subgraph of with the same vertex set to prove this lemma. Let be the number of vertices in . By the definition of coloring (after the proof of Lemma 3.2), has vertices. For any vertex , if has more than available colors (i.e., neighbors in ), then we pick available colors for arbitrarily and only connect to those color-vertices in ; otherwise, has the same neighbors in and .
We prove that satisfies all three properties in Lemma 3.5. Let and , and , and thus . This proves the first part of Property (1) of Lemma 3.5. Moreover, as is an almost-clique, by Property (2a) and by Property (2c) of Lemma 2.3, and hence , proving the second part of Property (1) as well.
Furthermore, each vertex in has degree at most . Also, any vertex in has degree at most outside the almost-clique in by Property (2b) of Lemma 2.3, so any vertex in has at least available colors (even if has assigned colors to all vertices outside and all colors used by the colorful matching are also unavailable). As in we connect every vertex to the available color-vertices, satisfies Property (2) in Lemma 3.5.
Consider a vertex . Let be the number of non-neighbors of inside , so that has at most neighbors outside . As such, has at least neighbors inside of ( is the number of colors used by the colorful matching), and hence also at least neighbors inside of . So has at least edges (by the fact that and the choice of ). Hence, the average degree in is at least , which implies that satisfies Property (3) in Lemma 3.5.
To conclude, satisfies the properties of Lemma 3.5. Since the palette-graph of contains a subgraph of obtained by sampling each edge of with probability (as argued above), the palette-graph contains a matching of size with high probability.
3.4 Wrap-Up: Proof of Theorem 1.
The existence of a list-coloring under the sampling process of and follows from Lemmas 3.1, 3.2, and 3.4. Note that this sampling process is not exactly the same as sampling colors uniformly at random as in Theorem 1. However, in this process, with high probability, we do not sample more than colors for each vertex, and hence conditioning on sampling colors (as in Theorem 1) only changes the probability of success by a negligible factor, implying Theorem 1.
3.5 Proof of Lemma 3.5: Large Matchings in (Almost) Random Graphs
We first need the following auxiliary claim. The proof is standard and appears in Appendix B.
Claim 3.6.
Suppose is a constant. Consider two random variables and where for all , and are independent indicator random variables and and . Suppose and are nonnegative integers that are indexed in decreasing order and have the following two properties:

for any ,

then for any integer , .
We now use this claim to prove Lemma 3.5 restated below.
Lemma (Restatement of Lemma 3.5).
Suppose and . Let be a bipartite graph such that:

and ;

each vertex in has degree at least and at most ;

the average degree of vertices in is at least .
If is a subgraph of obtained by sampling each edge with probability , then has a matching of size with high probability.
Proof of Lemma 3.5.
By Hall’s marriage theorem, we only need to prove that with high probability, for any , the size of the neighbor set of is at least in , i.e., . Fix a set and let ; we prove this for the set .
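For intuition, Hall's condition invoked here can be verified directly on small instances. The following brute-force sketch (exponential in the left side, so for illustration only; all names are hypothetical) checks the condition for a bipartite graph given as an adjacency dict:

```python
from itertools import combinations

def hall_condition_holds(adj):
    """Check Hall's condition for a bipartite graph.

    adj : dict mapping each left vertex to its set of right neighbors.
    By Hall's marriage theorem, a matching saturating the left side
    exists iff every subset A of left vertices satisfies |N(A)| >= |A|.
    """
    left = list(adj)
    for k in range(1, len(left) + 1):
        for subset in combinations(left, k):
            # Union of neighborhoods of the chosen subset.
            nbrs = set().union(*(adj[v] for v in subset))
            if len(nbrs) < k:
                return False    # Hall's condition violated by this subset
    return True
```

The proof below establishes exactly this condition for , with high probability, via the edge-counting argument that follows.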
If , since each vertex has degree at least in , the total number of edges from to is at least . On the other hand, if we fix another set with , there are at most edges between and , due to the fact that both and have at most vertices. As such, the number of edges between and