Sublinear Algorithms for (Δ+1) Vertex Coloring

07/24/2018 · by Sepehr Assadi, et al.

Any graph with maximum degree Δ admits a proper vertex coloring with Δ+1 colors that can be found via a simple sequential greedy algorithm in linear time and space. But can one find such a coloring via a sublinear algorithm? We answer this fundamental question in the affirmative for several canonical classes of sublinear algorithms, including graph streaming, sublinear time, and massively parallel computation (MPC) algorithms. In particular, we design:

* A single-pass semi-streaming algorithm in dynamic streams using Õ(n) space. The only known semi-streaming algorithm prior to our work was a folklore O(log n)-pass algorithm obtained by simulating classical distributed algorithms in the streaming model.

* A sublinear-time algorithm in the standard query model that allows neighbor queries and pair queries, using Õ(n√n) time. We further show that any algorithm that outputs a valid coloring with sufficiently large constant probability requires Ω(n√n) time. No non-trivial sublinear-time algorithms were known prior to our work.

* A parallel algorithm in the massively parallel computation (MPC) model using Õ(n) memory per machine and O(1) MPC rounds. Our number of rounds significantly improves upon the recent O(log log Δ · log^*(n))-round algorithm of Parter [ICALP 2018].

At the core of our results is a remarkably simple meta-algorithm for the (Δ+1) coloring problem: sample O(log n) colors for each vertex from the Δ+1 colors, then find a proper coloring of the graph using only the sampled colors. We prove that the sampled set of colors with high probability contains a proper coloring of the input graph. The sublinear algorithms are then obtained by designing efficient algorithms for finding a proper coloring of the graph from the sampled colors in the corresponding models.


1 Introduction

Graph coloring is a central problem in graph theory and has numerous applications in diverse areas of computer science. A proper c-coloring of a graph G = (V, E) assigns to every vertex a color from the palette {1, …, c} such that no edge is monochromatic, i.e., has the same color on both endpoints. An important and well-studied case of graph coloring problems is the (Δ+1) coloring problem, where Δ is the maximum degree of the graph. Not only does every graph admit a (Δ+1) coloring; remarkably, any partial coloring of the vertices of a graph can be extended to a proper (Δ+1) coloring of all vertices: simply pick the uncolored vertices in any order and assign to each a color not yet assigned to any of its neighbors; since the maximum degree is Δ, such a color always exists.
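For concreteness, the following Python sketch (ours, not from the paper) implements this greedy extension; here adj maps each vertex to its neighbor set and partial is any partial coloring to be extended.

    def greedy_extend(adj, delta, partial=None):
        """Extend a partial coloring to a proper (Delta+1) coloring.

        adj: dict mapping each vertex to the set of its neighbors.
        delta: maximum degree of the graph.
        partial: optional dict of already-colored vertices (colors in 1..delta+1).
        """
        coloring = dict(partial or {})
        palette = set(range(1, delta + 2))  # colors 1, ..., Delta+1
        for v in adj:
            if v in coloring:
                continue
            used = {coloring[u] for u in adj[v] if u in coloring}
            # v has at most Delta neighbors, so at least one color is always free.
            coloring[v] = min(palette - used)
        return coloring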

In this paper, we study the (Δ+1) coloring problem in the context of processing massive graphs. The aforementioned property of the (Δ+1) coloring problem immediately implies a simple (sequential) greedy algorithm for this problem in linear time and space. However, when processing massive graphs, even this algorithm can be computationally prohibitive. This is due to various limitations arising in processing massive graphs, such as the requirement to process the graph in a streaming fashion on a single machine or in parallel across multiple machines due to storage limitations, or simply not having enough time for reading the whole input. A natural question is then:

Can we design sublinear algorithms for the (Δ+1) coloring problem in modern models of computation for processing massive graphs?

We answer this fundamental question in the affirmative for several canonical classes of sublinear algorithms, including (dynamic) graph streaming algorithms, sublinear time/query algorithms, and massively parallel computation (MPC) algorithms. We also prove new lower bounds to contrast the complexity of the (Δ+1) coloring problem in these models with two other closely related Locally Checkable Labeling (LCL) problems (see [39]), namely the maximal independent set and the maximal matching problems. (Another closely related LCL problem is the edge coloring problem; however, as the output in the edge coloring problem is linear in the input size, one cannot hope to achieve non-trivial algorithms for it in models such as streaming or sublinear-time computation, and hence we do not consider it in this paper.)

1.1 Our Contributions

We present new sublinear algorithms for the (Δ+1) coloring problem which are either the first non-trivial algorithms for the problem or significantly improve the state of the art. At the core of these algorithms is a simple meta-algorithm for this problem that we design in this paper; the sublinear algorithms are then obtained by efficiently implementing this meta-algorithm in each model separately.

A Meta-Algorithm for (Δ+1) Coloring. The main approach behind our meta-algorithm is to “sparsify” the (Δ+1) coloring problem to a list-coloring problem with lists/palettes of size O(log n) for every vertex. This may sound counterintuitive: while every graph admits a (Δ+1) coloring that can be found via a simple algorithm, there is no guarantee that it also admits a list-coloring with O(log n)-size lists, let alone one that can be found via a sublinear algorithm. The following key structural result that we prove in this paper, however, paves the path for this sparsification.


Result 1 (Palette-Sparsification Theorem).

Let G(V, E) be an n-vertex graph with maximum degree Δ. Suppose for any vertex v ∈ V, we sample O(log n) colors L(v) from {1, …, Δ+1} independently and uniformly at random. Then, with high probability, there exists a proper (Δ+1) coloring of G in which the color of every vertex v is chosen from L(v).

In Result 1, as well as throughout the paper, “with high probability” means with probability at least 1 − 1/p(n) for some large polynomial p(n) in n.

Result 1 can be seen as a “sparsification” result for (Δ+1) coloring: after sampling O(log n) colors for each vertex randomly, the total number of edges whose endpoints share a sampled color is only Õ(n) with high probability (see Section 4 for a simple proof); at the same time, by computing a proper list-coloring of G using only these edges—which is promised to exist by Result 1—we obtain a (Δ+1) coloring of G. As such, Result 1 provides a way of sparsifying the graph into only Õ(n) edges, while still allowing for recovery of a (Δ+1) coloring of the original graph. This sparsification serves as the central tool in our sublinear algorithms for the (Δ+1) coloring problem.
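As an illustration of this sparsification step (a sketch of ours, with an arbitrary constant in place of the exact list size), the following Python code samples the color lists and keeps only the edges whose endpoints share a sampled color; by Result 1, a proper list-coloring of this sparsified instance exists with high probability and is automatically a (Δ+1) coloring of the whole graph.

    import math, random

    def palette_sparsify(vertices, edges, delta, c=2):
        """Sample ~c*log(n) colors per vertex and keep only 'conflict' edges."""
        n = len(vertices)
        k = max(1, int(c * math.log(n + 1)))       # list size, Theta(log n)
        palette = range(1, delta + 2)               # colors 1, ..., Delta+1
        lists = {v: set(random.sample(palette, min(k, delta + 1)))
                 for v in vertices}
        # An edge can constrain the list-coloring only if its endpoints
        # could receive a common color from their lists.
        conflict_edges = [(u, v) for (u, v) in edges if lists[u] & lists[v]]
        return lists, conflict_edges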

We remark that, as stated, Result 1 only promises the existence of such a coloring (which can be found in exponential time), but in fact we show that there is a fast and simple procedure for finding the corresponding coloring, and this procedure is also used by our algorithms in each model. We also note that the bound of O(log n) on the number of sampled colors in Result 1 is asymptotically optimal (see Proposition C.1).

Streaming Algorithms. Result 1 can be used to design a single-pass semi-streaming algorithm for the (Δ+1) coloring problem in the most general setting of graph streams, namely dynamic streams that allow both insertions and deletions of edges (see Section 4.1 for details).

Result 2.

There exists a randomized single-pass dynamic streaming algorithm for the (Δ+1) coloring problem using Õ(n) space.

To our knowledge, the only previous semi-streaming algorithm for (Δ+1) coloring was the folklore O(log n)-pass streaming simulation of the classical O(log n)-round distributed/parallel (PRAM) algorithms for this problem (see, e.g., [37]). No Õ(n)-space single-pass streaming algorithm was known for this problem even in insertion-only streams. This state of affairs was in fact similar to the case of the closely related maximal matching problem: the best known semi-streaming algorithm for maximal matching in dynamic streams requires multiple passes over the stream [35, 3], and it is provably impossible to solve this problem using Õ(n) space in a single pass over a dynamic stream [9] (although the problem is trivial in insertion-only streams). Considering this, one might have guessed that a similar lower bound also holds for the (Δ+1) coloring problem. We further prove an Ω(n²) lower bound on the space complexity of single-pass streaming algorithms for computing a maximal independent set, even in insertion-only streams (see Theorem 5). Result 2 stands in sharp contrast to these results, and shows that (Δ+1) coloring is provably simpler than both problems in the streaming setting.
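To illustrate how Result 1 yields a single pass, here is a simplified sketch of ours for insertion-only streams (the dynamic-stream algorithm of Result 2 additionally relies on sketching/sparse-recovery techniques to handle deletions): fix the color lists before the stream and store only the conflict edges, which number Õ(n) with high probability.

    class PaletteSparsifiedStream:
        """One-pass filter: keep an edge only if its endpoints share a sampled color."""

        def __init__(self, lists):
            self.lists = lists          # lists[v] = colors sampled for v, fixed up front
            self.kept_edges = []        # O~(n) edges with high probability

        def process_edge(self, u, v):
            if self.lists[u] & self.lists[v]:
                self.kept_edges.append((u, v))

        def finish(self, list_coloring_solver):
            # Any proper list-coloring of the kept edges is a proper (Delta+1)
            # coloring of the whole graph, since the ignored edges can never be
            # monochromatic under the sampled lists.
            return list_coloring_solver(self.lists, self.kept_edges)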

Sublinear Time Algorithms. There exists a straightforward greedy algorithm that computes a (Δ+1) coloring of any given graph in linear time, i.e., in O(n + m) time on a graph with n vertices and m edges. Perhaps surprisingly, we show that one can improve upon the running time of this textbook algorithm by using Result 1.

Result 3.

There exists a randomized Õ(n√n)-time algorithm for the (Δ+1) coloring problem. Furthermore, any algorithm for this problem requires Ω(n√n) time.

When designing sublinear-time algorithms, specifying the exact data-access model is important, as the algorithm cannot even read the entire input once. In Result 3, we assume the standard query model for sublinear-time algorithms on general graphs (see, e.g., Chapter 10 of Goldreich’s book [23]), which allows two types of queries: neighbor queries, asking for the i-th neighbor of a given vertex v, and pair queries, asking whether a given pair of vertices (u, v) are adjacent or not (see Section 4.2 for details). To our knowledge, this is the first sublinear-time algorithm for the (Δ+1) coloring problem. We also note that an important feature of our algorithm in Result 3 is that it is non-adaptive, i.e., it chooses all of its queries to the graph in advance, and thus the queries can be performed in parallel.
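For concreteness, the assumed access to the graph can be modeled by the following minimal interface (a sketch of ours; the names are our own shorthand, not the paper's notation):

    from typing import Protocol, Optional

    class GraphOracle(Protocol):
        def neighbor(self, v: int, i: int) -> Optional[int]:
            """Neighbor query: return the i-th neighbor of v, or None if deg(v) < i."""
            ...

        def pair(self, u: int, v: int) -> bool:
            """Pair query: return True iff (u, v) is an edge of the graph."""
            ...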

In yet another contrast to the (Δ+1) coloring problem, we show that the problem of computing a maximal matching requires Ω(n²) queries to the graph and hence Ω(n²) time (see Theorem 6).

Massively Parallel Computation Algorithms. Another application of Result 1 is a constant-round algorithm for the (Δ+1) coloring problem in the MPC model, which is a common abstraction of MapReduce-style computation frameworks (see Section 4.3 for a formal definition).

Result 4.

There exists a randomized MPC algorithm for the (Δ+1) coloring problem that runs in at most two MPC rounds on machines with Õ(n) memory.

Two recent papers considered graph coloring problems in the MPC model. Harvey et al. [26] designed algorithms that use more memory per machine and find a coloring of a given graph with more than Δ+1 colors—an algorithmically easier problem than (Δ+1) coloring—in O(1) MPC rounds. Furthermore, Parter [42] designed an MPC algorithm that uses Õ(n) memory per machine and finds a (Δ+1) coloring in O(log log Δ · log^*(n)) rounds. (The algorithm of Parter [42] is stated in the congested-clique model, but using the well-known connections between this model and the MPC model, see, e.g., [12, 22], this algorithm immediately extends to the MPC model.) Our Result 4 improves these results significantly: both the number of used colors and the per-machine memory compared to [26], and the round complexity (with at most a polylogarithmic factor more per-machine memory) compared to [42].

The maximal matching and maximal independent set problems have also been studied previously in the MPC model [35, 22, 32]. Currently, the best known algorithms for these problems with Õ(n) memory per machine require a super-constant number of rounds, both in the case of maximal matching [35] and in the case of maximal independent set [22, 32]. For the related problems of approximating the maximum matching and the minimum vertex cover to within constant factors, a recent set of results achieves MPC algorithms with round complexity poly(log log n) and Õ(n) memory per machine [7, 8, 22, 17] (these results however do not extend to the maximal matching problem). Our Result 4 hence gives the first example of a constant-round MPC algorithm for one of the “classic four local distributed graph problems” (i.e., maximal independent set, maximal matching, vertex coloring, and edge coloring; see, e.g., [41, 10, 20]), even when the memory per machine is as small as Õ(n).

Optimality of Our Sublinear Algorithms:

The space complexity of our streaming algorithm in Result 2 and the round complexity of our MPC algorithm in Result 4 are clearly optimal (to within polylogarithmic factors and constant factors, respectively). We further prove that the query and time complexity of our sublinear-time algorithm in Result 3 are also optimal up to polylogarithmic factors (see Theorem 7).

Perspective: Beyond Greedy Algorithms.

Many graph problems admit simple greedy algorithms. Starting with Luby’s celebrated distributed/parallel algorithm for the maximal independent set problem [36], there have been numerous attempts at adapting these greedy algorithms to different models of computation, including the models considered in this paper (see, e.g., [35, 34, 28, 26, 40, 5]). Typically, these adaptations require multiple passes/rounds of computation, for the fundamental reason that most greedy algorithms are inherently sequential: they require accessing the input graph in an adaptive manner based on the decisions made thus far, which, although limited, still results in requiring multiple passes/rounds over the input.

Our work on (Δ+1) coloring bypasses this limitation of greedy algorithms by utilizing a completely different approach, namely a non-adaptive sparsification of the input graph (Result 1) that in turn lends itself to space-, time-, and communication-efficient algorithms in a variety of different models. As such, our results can be seen as evidence that directly adapting greedy algorithms for graph problems may not necessarily be the best choice in these models. We believe that this viewpoint is an important (non-technical) contribution of our paper, as it may pave the way for obtaining more efficient algorithms for other fundamental graph problems in these models.

1.2 Our Techniques

The main technical ingredient of our paper is Result 1. For intuition, consider two extreme cases: when the underlying graph is very dense, say a clique on Δ+1 vertices, and when the underlying graph is relatively sparse, say every vertex (except for one) has degree at most Δ/2. Result 1 is easy to prove in either case, albeit using entirely different arguments. For the former case, consider the bipartite graph consisting of the vertices of G on one side and the set of Δ+1 colors on the other side, where each vertex on the graph side is connected to the colors it sampled on the color side. Using standard results from random graph theory, one can argue that this bipartite graph with high probability has a perfect matching, thus implying the desired list-coloring of G. For the latter case, consider the following simple greedy algorithm: iteratively, sample a color for every uncolored vertex from its list and assign the color to the vertex if it is not chosen by any of its neighbors so far. It is well known that this algorithm only requires O(log n) rounds when the number of colors is a constant factor larger than the degree (see, e.g., [44]). As such, the set of O(log n) colors sampled in the list of each vertex is enough to “simulate” this algorithm in this case.
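A minimal sketch of that iterative procedure for the sparse case (our illustration; the paper simulates it using only the pre-sampled lists): in each round, every uncolored vertex tries one random color from its remaining palette and keeps it if no neighbor already holds or simultaneously tries the same color.

    import random

    def iterative_random_coloring(adj, palettes, rounds):
        """Each round, every uncolored vertex tries a random available color and
        keeps it if no neighbor holds or simultaneously tries the same color."""
        color = {}
        avail = {v: set(p) for v, p in palettes.items()}
        for _ in range(rounds):
            trial = {v: random.choice(sorted(avail[v]))
                     for v in adj if v not in color and avail[v]}
            for v, c in trial.items():
                conflict = any(color.get(u) == c or trial.get(u) == c for u in adj[v])
                if not conflict:
                    color[v] = c
            for v in adj:  # drop colors already taken by colored neighbors
                if v not in color:
                    avail[v] -= {color[u] for u in adj[v] if u in color}
        return color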

To prove Result 1 in general, we need to interpolate between these two extreme cases. To do so, we exploit a graph decomposition result of Harris, Schneider, and Su [25] (see also [14]) for the (Δ+1) coloring problem, which allows for decomposing a graph into “sparse” and “dense” components. The proof for coloring the sparse components then more or less follows by simulating standard distributed algorithms in [44, 18], as discussed above. The main part, however, is to prove the result for dense components, which requires a global and non-greedy argument. Note that in general, we can always reduce the problem of finding the coloring to an instance of the assignment problem on the bipartite graph discussed above. The difference is that we now need to allow some colors to be assigned to more than one vertex when the number of vertices to be colored exceeds the number of colors available to them (as opposed to the case of cliques above, which only required finding a perfect matching). We show that if the original graph is “sufficiently close” to being a clique, then with high probability such an assignment exists in this bipartite graph, and we use this to prove the existence of the desired list-coloring of G.

Result 1 implies the sublinear algorithms we design in each model, with one caveat: the list-coloring problem that needs to be solved on the sparsified graph is NP-hard in general, and hence using this result directly does not allow for a polynomial-time implementation of our algorithms. We thus combine Result 1 with additional ideas specific to each model to turn these algorithms into polynomial-time (and in fact even sublinear-time) algorithms.

1.3 Recent Related Work

Independently and concurrently with our work, two other papers also considered the vertex coloring problem in settings related to this paper. Firstly, Parter and Su [43], improving upon the previous algorithm of Parter [42], gave an O(log^* Δ)-round congested-clique algorithm for (Δ+1) coloring; this result also immediately implies an MPC algorithm for (Δ+1) coloring in O(log^* Δ) rounds with Õ(n) memory per machine. Moreover, Bera and Ghosh [13] also studied the graph coloring problem in the streaming model and gave a single-pass streaming algorithm that, for a parameter ε > 0, outputs a (1+ε)Δ coloring of the input graph using Õ(n/ε) space. Note that for the (Δ+1) coloring problem, this algorithm requires Ω(nΔ) space, which can be as large as the input size.

Subsequent to our work, Chang et al. [15] further studied the (Δ+1) coloring problem and, among other results, gave a low-round MPC algorithm for this problem on machines whose memory is strongly sublinear in the number of vertices.

2 Preliminaries

Notation.

For any integer k ≥ 1, we define [k] := {1, …, k}. For a graph G = (V, E), we use V(G) to denote its vertex set, E(G) to denote its edge set, and deg(v) to denote the degree of a vertex v.

2.1 The Harris-Schneider-Su (HSS) Network Decomposition

In the proof of Result 1, we use a network decomposition result of Harris, Schneider, and Su, developed for designing distributed algorithms for graph coloring in the LOCAL model [25]. We emphasize that we use this decomposition in a quite different way than in the distributed settings of [25, 14].

The Harris-Schneider-Su network decomposition, henceforth the HSS-decomposition, partitions a graph into sparse and dense regions, measured with respect to a parameter ε ∈ (0, 1).

Definition 1 (-friend edges).

For any ε ∈ (0, 1), we say that an edge (u, v) in a graph G is an ε-friend edge iff |N(u) ∩ N(v)| ≥ (1 − ε)·Δ. We let E_friend ⊆ E denote the set of all ε-friend edges.

Definition 2 (-dense vertices).

For any ε ∈ (0, 1), we say that a vertex v in a graph G is ε-dense iff the degree of v in the graph G_friend := (V, E_friend) is at least (1 − ε)·Δ. We use V_dense to denote the set of all ε-dense vertices.

Consider the graph G_dense defined as the subgraph of G on the set V_dense of ε-dense vertices containing only the ε-friend edges, and let C_1, …, C_k be the connected components of G_dense. The HSS-decomposition partitions the vertices of the graph into V_sparse := V ∖ V_dense and V_dense, where V_dense is partitioned into C_1, …, C_k, with the properties given by Lemma 2.1 and Proposition 2.2 below.

Lemma 2.1 ([25]).

Any connected component C_i of G_dense has size at most (1 + O(ε))·Δ. Moreover, any vertex v ∈ C_i has at most:

  1. O(ε)·Δ neighbors (in G) in V_dense ∖ C_i, i.e., dense neighbors outside its own component.

  2. O(ε)·Δ non-neighbors (in G) in C_i, i.e., non-neighbors inside its own component.

Define the ε-sparse vertices as V_sparse := V ∖ V_dense, i.e., the vertices that are not ε-dense. The main property of ε-sparse vertices that we are interested in is the following.

Proposition 2.2.

Let v be any ε-sparse vertex in G. Then the total number of edges spanning the neighborhood of v is at most binom(Δ, 2) − Ω(ε²Δ²), that is, |E(G[N(v)])| ≤ binom(Δ, 2) − Ω(ε²Δ²).

Proof.

If deg(v) is less than Δ, then to prove the proposition we add dummy vertices that are only connected to v, so that the degree of v becomes exactly Δ. Doing so does not change the number of edges spanning the neighborhood of v. As v is an ε-sparse vertex, at least εΔ of its neighbors have fewer than (1 − ε)Δ neighbors in common with v. This means that each of those vertices is not connected to at least εΔ − 1 other vertices in N(v). As such, the number of non-edges inside N(v) is at least εΔ·(εΔ − 1)/2 = Ω(ε²Δ²), and hence the total number of edges spanning the neighborhood of v is at most binom(Δ, 2) − Ω(ε²Δ²).       

To conclude, the HSS-decomposition partitions the vertices of the graph into V_sparse ∪ V_dense, where V_dense is additionally partitioned into the collection C_1, …, C_k, with the properties given by Lemma 2.1 and Proposition 2.2.

2.2 A Simple Extension of the HSS-Decomposition

It would be more convenient for us to work with a slightly different variant of the HSS-decomposition that we introduce here.

Lemma 2.3 (Extended HSS-Decomposition).

For any parameter ε ∈ (0, 1), the vertex set of any graph G can be decomposed into a collection V_sparse, C_1, …, C_k such that:

  1. Every vertex in V_sparse is ε-sparse.

  2. For any i ∈ [k], C_i has the following properties (we refer to C_i as an almost-clique):

    (a) (1 − O(ε))·Δ ≤ |C_i| ≤ (1 + O(ε))·Δ.

    (b) Any v ∈ C_i has at most O(ε)·Δ neighbors outside of C_i.

    (c) Any v ∈ C_i has at most O(ε)·Δ non-neighbors inside of C_i.

The two main differences between Lemma 2.3 and the original HSS-decomposition are: the size of each almost-clique C_i is now lower bounded (the HSS-decomposition does not lower bound the size of its components), and the number of all neighbors that a vertex of C_i has outside C_i is now bounded (not only its neighbors to other dense vertices, as in the original HSS-decomposition).

Proof of Lemma 2.3.

Consider the HSS-decomposition with parameter 2ε. By Lemma 2.1, V can be decomposed into 2ε-sparse vertices and connected components of 2ε-dense vertices. Let C_1, …, C_k be the components among these that contain at least one ε-dense vertex.

We define V_sparse as the set of vertices in V ∖ (C_1 ∪ … ∪ C_k), i.e., all vertices that are not in the connected components defined above. Clearly, V = V_sparse ∪ C_1 ∪ … ∪ C_k. On the other hand, V_sparse does not contain any ε-dense vertices (as we removed C_1, …, C_k, and every ε-dense vertex is also 2ε-dense and hence belongs to one of the components), and hence every vertex of V_sparse is ε-sparse. This proves Property (1). We now prove Property (2).

Fix any i ∈ [k] and let C_i be any connected component that contains an ε-dense vertex. Firstly, since C_i is a connected component of an HSS-decomposition with parameter 2ε, by Lemma 2.1, any vertex in C_i has at most O(ε)·Δ non-neighbors inside C_i. This proves Property (2c).

Now let v be any ε-dense vertex in C_i. As v is ε-dense, by Definition 2, v has at least (1 − ε)Δ ε-friend neighbors. Let S be the set of these vertices. By Definition 1, each of these vertices has at least (1 − ε)Δ shared neighbors with v. As the maximum degree of any vertex is Δ, this implies that any two vertices in S have at least (1 − 2ε)Δ common neighbors with each other. Furthermore, since S has at least (1 − ε)Δ vertices, each vertex in S has at least (1 − 2ε)Δ neighbors in S. Thus all vertices in S are 2ε-dense. Moreover, as all vertices in S are connected to v by an ε-friend edge (and hence also a 2ε-friend edge), the vertices in S all appear in the same connected component as v. This implies that |C_i| ≥ |S| + 1 ≥ (1 − ε)Δ + 1. Moreover, by Property (2c) we already proved, any vertex v ∈ C_i has at most O(ε)·Δ non-neighbors in C_i and hence |C_i| ≤ Δ + 1 + O(ε)·Δ. This proves Property (2a).

Finally, the above argument, together with the lower bound on the size of C_i, also implies that each vertex of C_i is connected to at least (1 − O(ε))·Δ vertices inside C_i. As such, it can only have O(ε)·Δ neighbors outside C_i, proving Property (2b).       

3 The Palette-Sparsification Theorem

We prove Result 1 in this section; see Appendix C for further remarks on the optimality of the bounds in this result, as well as the (im)possibility of extending this result to c-coloring for values of c strictly smaller than Δ + 1.

Theorem 1 (Palette-Sparsification Theorem).

Let G(V, E) be any n-vertex graph and let Δ be the maximum degree of G. Suppose for each vertex v ∈ V, we independently pick a set L(v) of O(log n) colors uniformly at random from {1, …, Δ + 1}. Then, with high probability, there exists a proper (Δ+1) coloring of G such that for all vertices v ∈ V, the color of v belongs to L(v).

Let us start by fixing the parameters used in the proof of Theorem 1. Let ε be a sufficiently small constant and α be a sufficiently large integer. (In the interest of simplifying the exposition of the proof, we made no attempt at optimizing the constants; the proof of the theorem can be made to work with much smaller constants than the ones used here.) In Theorem 1, we make each vertex sample α·log n colors in L(v). We assume that Δ + 1 > α·log n; otherwise, Theorem 1 is trivial, as each vertex samples all Δ + 1 colors and every graph admits a (Δ+1) coloring. For the purpose of the analysis, we assume that the set L(v) of each vertex v is the union of three sets L₁(v), L₂(v), L₃(v), named batch one, two, and three, respectively, where each L_i(v) for i ∈ {1, 2, 3} is created by picking each color in {1, …, Δ + 1} independently with probability Θ(log n / Δ). While this distribution is not identical to the one in Theorem 1, it is easy to see that proving the theorem for this distribution also implies the original result, as in this new distribution, with high probability, no vertex samples more than O(log n) colors in total.

We use the extended HSS-decomposition with parameter ε (Lemma 2.3): the graph G is decomposed into V_sparse ∪ C_1 ∪ … ∪ C_k, where each C_i for i ∈ [k] is an almost-clique.

We prove Theorem 1 in three parts. In the first part, we argue that by only using the colors in the first batch L₁(·), we can color all the vertices in V_sparse. This part is mostly standard and more or less follows from the results in [18, 25, 14] by simulating a distributed local algorithm using only the colors in the first batch. We hence concentrate the bulk of our effort on proving the next two parts, which are the key components of the proof. We first show that, using only the colors in the second batch, we can color a relatively large fraction of the vertices in each almost-clique at a rate of two vertices per color (assuming the number of non-edges in the almost-clique is not too small). This allows us to “save” extra colors for coloring the remainder of the almost-cliques, which we do in the last part. We note that unlike the coloring of the first part, which is based on a “local” coloring scheme (in which we determine the color of each vertex based on the colors assigned to its neighbors, similar to the greedy algorithm), the coloring of the second and third parts is done in a “global” manner, in which the color of a vertex is determined based on some global properties of the graph, not only the local neighborhood of the vertex.

Partial Coloring Function. Define a function that assigns to each vertex one of the Δ + 1 colors or the null color ⊥, such that no two neighboring vertices receive the same color from {1, …, Δ + 1} (but they may both receive the null color ⊥). We refer to such a function as a partial coloring function and say that the vertices assigned a color from {1, …, Δ + 1} have a valid color. Furthermore, we say that a valid color c is available to a vertex v in the partial coloring iff the partial coloring does not assign c to any neighbor of v; we refer to the set of such colors as the set of available colors of v.

It is immediate that if a partial coloring does not assign the null color to any vertex, then the resulting coloring is a proper (Δ+1) coloring of the graph. We start with a partial coloring function that initially assigns the null color to all vertices, and modify this coloring in each part so as to eventually remove all null colors.

3.1 Part One: Coloring Sparse Vertices.

Recall the definition of sparse vertices in the extended HSS-decomposition from Section 2. In the first part of the proof, we show that we can color all sparse vertices using only the colors in the first batch.

Lemma 3.1.

With high probability, there exists a partial coloring function that assigns to every vertex v ∈ V_sparse a valid color from its first batch L₁(v).

We prove this lemma by showing that one can simulate a natural greedy algorithm (similar, but not identical, to the algorithm of [18]) for coloring sparse vertices using only the colors in the first batch. The first step is to use the first color in L₁(v), chosen uniformly at random from {1, …, Δ + 1}, for all vertices, in order to color a large fraction of the vertices in V_sparse; the main property of this coloring is that after this step, any uncolored ε-sparse vertex has “excess” colors compared to the number of edges it has to other uncolored ε-sparse vertices. This step follows from the proof of the OneShotColoring procedure in [18, 25, 14], and we simply present a proof sketch for intuition. We then use the remaining colors in L₁(v) for each uncolored vertex and color these vertices greedily, using the fact that the number of available colors is sufficiently larger than the number of uncolored neighbors of each uncolored vertex at every step. This part is also similar to the algorithm in [18] (see also [25, 14]) but uses a different argument, as here we cannot sample the colors for each vertex adaptively (the colors in L₁(v) are sampled “at the beginning” of the greedy algorithm).

As the proof of this lemma closely follows the previous work in [18, 25, 14] with only some minor modifications, we postpone its proof to Appendix A.

3.2 Part Two: Initial Coloring of Almost Cliques.

Recall that by Lemma 3.1, after the first part we have a partial coloring that assigns a valid color to all sparse vertices. We now design a new partial coloring that agrees with this coloring on the sparse vertices; the remaining vertices are initially assigned the null color, but some of them will also be assigned a valid color by the end of this part, using the colors of the second batch.

Fix the almost-cliques C_1, …, C_k. For each almost-clique, define its complement-graph as the graph on the same set of vertices obtained by switching edges and non-edges. Note that any two vertices that are neighbors in the complement-graph (i.e., non-adjacent in G) can be assigned the same color. We exploit this fact in the following definition.

Definition 3 (Colorful Matching).

We say that a matching M in the complement-graph of an almost-clique is a colorful matching with respect to a partial coloring iff:

  1. For any edge (u, v) ∈ M, there is a color c such that c is available to both u and v under the partial coloring (we associate the color c with this edge).

  2. Distinct edges of M are associated with distinct colors.

By finding a colorful matching in a complement-graph, we can “save” on the colors needed for coloring the corresponding almost-clique, as we can color the vertices matched by the matching at a rate of two vertices per color.

We now iterate over complement-graphs one by one, and show that there exists a sufficiently large colorful matching in each complement-graph, even after we update the coloring for vertices matched by the colorful matchings in previous complement-graphs.

Lemma 3.2.

Fix any complement-graph and let the partial coloring be any partial coloring in which all vertices of the corresponding almost-clique have the null color. Suppose the average degree of the complement-graph is d. Then, with high probability (over the randomness of the second batch), there exists a colorful matching of size Ω(d) in the complement-graph.

We start with some definitions. For an edge (u, v) of the complement-graph, a color c is available to this edge if c is available to both u and v under the current partial coloring. For a set of colors S, let a_S(e) denote the number of colors in S that are available to an edge e, and for a set of edges F, define a_S(F) := Σ_{e ∈ F} a_S(e). We say that an edge e sampled an available color iff some color available to e was sampled (in the relevant batch) by both endpoints of e. Lemma 3.2 relies on the following lemma.

Lemma 3.3.

Let H be a subgraph of the complement-graph and let F be its edge set. Let S be any set of colors such that a_S(F) is sufficiently large. If each vertex of H samples each color in S with the probability used for the second batch, then with high probability there is an edge in F that samples an available color.

Proof.

We first argue that if each vertex samples each color in S with a suitably smaller probability, then with constant probability there is an edge in F that samples an available color. We then argue that sampling at the original (higher) rate can be viewed as performing this experiment independently Ω(log n) times, which yields the final high-probability bound.

For an edge e ∈ F, let X_e be an indicator random variable that is 1 iff e sampled an available color (in the experiment with the smaller sampling probability), and define X := Σ_{e ∈ F} X_e. The expectation of each X_e is proportional to the number of colors available to e that both of its endpoints sample, and hence E[X] is lower bounded in terms of a_S(F). We prove that X ≥ 1 with constant probability, which implies that with constant probability some edge in F samples an available color.

Firstly, notice that E[X] is bounded from below as described above. We prove that the variance of X is not much larger than its expectation, and use Chebyshev’s inequality to prove the bound on X. By definition, Var[X] = Σ_e Var[X_e] + Σ_{e ≠ e'} Cov(X_e, X_{e'}). Since each X_e is an indicator random variable, Var[X_e] ≤ E[X_e]; hence it only remains to bound the covariance terms.

For any pair of edges in , if they do not share a common endpoint, then the variables and are independent (hence their covariance is ), but if they share a common endpoint, their covariance would be non-zero. However, in this case, . By Property (2c) of Lemma 2.3, each vertex in has at most neighbors (as edges in are non-edges in the almost-clique ). As such, there are at most edges that share a common endpoint with an edge . Let denote the set of edges in that share an endpoint with . We have,

The last equation is because . By Chebyshev’s inequality: .

So if we sample each color with the smaller probability, there is an edge that samples an available color with at least constant probability. By sampling the colors at the actual (higher) rate, we can repeat this trial independently Ω(log n) times and obtain that, with high probability, there is an edge that sampled an available color.       

We are now ready to prove Lemma 3.2.

Proof of Lemma 3.2.

We construct the colorful matching in the following manner. We iterate over the colors {1, …, Δ + 1} in an arbitrary order, and for each color c, we find the vertices that sampled c in the second batch (this choice is independent across colors by the sampling process defining the batches). If c is available for some edge of the complement-graph whose endpoints both sampled c, we add this edge with color c to the matching, delete its endpoints from the graph, and move on to the next color. Clearly the resulting matching is colorful (as in Definition 3). It thus remains to lower bound the size of this matching.

Let be initially and be its edge-set, i.e., . is also initially the set of all colors in . Let be the number of vertices in . Consider the value of throughout the process. When we are dealing with a color , if we cannot find an edge where is available for , we delete the color from . In this case, will decrease by . Otherwise, we add with color to our colorful matching, delete from , and delete and from . In this case, will decrease by at most since each vertex in has at most neighbors (by Property (2c) of extended HSS-decomposition in Lemma 2.3) and is at most as in the extended HSS-decomposition, (by Property (2a) of Lemma 2.3). By Lemma 3.3 (as the process of sampling colors in is identical to the lemma statement but sampling colors with higher probability), with high probability, will decrease by at most before we add a new edge to the colorful matching. So each time when we add a new edge into the colorful matching, decreases by at most with high probability. We now lower bound the value of which allows us to bound the number of times an edge is added to the colorful matching.

Let be an edge in . Both and belong to the almost-clique in the extended HSS-decomposition and hence by Lemma 2.3, each have at most neighbors outside . This suggests that even if has assigned a color to all vertices except for , there are at least available colors for the edge , i.e., . Moreover, by Lemma 2.3, we also have that the number of vertices in the almost-clique and hence also in is . As such,

by the choice of . Consequently, before becomes smaller than (and we could no longer apply Lemma 3.3), we would have added at least edges to the colorful matching with high probability, finalizing the proof.       

The new partial coloring is then computed as follows. We iterate over the almost-cliques and, for each one, pick a sufficiently large colorful matching; by Lemma 3.2, such a matching exists with high probability. We pick the required number of edges from this colorful matching and, for each such edge (u, v), assign both u and v the color associated with the edge. By Definition 3, this yields a valid partial coloring. We then move on to the next almost-clique (and apply Lemma 3.2 with the updated partial coloring).
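The greedy construction of the colorful matching used in this proof can be summarized by the following sketch (ours; the availability test and the sampled second-batch lists are passed in as black boxes):

    def greedy_colorful_matching(non_edges, sampled, available, colors):
        """non_edges: edges of the complement-graph of an almost-clique.
        sampled[v]: colors sampled by v in the second batch.
        available(v, c): True iff color c is currently available to v.
        """
        matched, matching = set(), []
        for c in colors:
            for (u, v) in non_edges:
                if (u not in matched and v not in matched
                        and c in sampled[u] and c in sampled[v]
                        and available(u, c) and available(v, c)):
                    matching.append((u, v, c))   # color both u and v with c
                    matched.update({u, v})
                    break                        # each color is used at most once
        return matching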

3.3 Part Three: Final Coloring of Almost-Cliques.

We now finalize the coloring of the almost-cliques and obtain a coloring that assigns a valid color to all vertices. Initially, this coloring agrees with the partial coloring obtained at the end of the previous part. We then iterate over the almost-cliques and update the coloring so as to assign a valid color to all vertices of the current almost-clique. For any almost-clique C_i, we consider the set of its vertices that are not yet assigned a valid color.

Definition 4 (Palette-Graph).

For any almost-clique in and a partial coloring , we define a palette-graph of the almost-clique with respect to as follows:

  • The palette-graph is a bipartite graph between the uncolored vertices of C_i and the colors {1, …, Δ + 1}.

  • There exists an edge between an uncolored vertex v and a color c iff the color c is available to v according to the current partial coloring and, moreover, c belongs to the third batch L₃(v) sampled by v.

Suppose we are able to find a matching in the palette-graph of an almost-clique that matches all of its uncolored vertices. Then, by assigning to each uncolored vertex the color it is matched to, we correctly color all vertices in this almost-clique, and we can continue to the next almost-clique. The following lemma proves that, with high probability, we can find such a matching in every almost-clique.
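A sketch of this final step (our illustration): build the palette-graph restricted to the uncolored vertices and search for a matching saturating the vertex side, e.g., by simple augmenting paths.

    def saturating_matching(uncolored, palette_edges):
        """palette_edges[v]: colors adjacent to v in the palette-graph
        (sampled in the third batch, available, and unused so far).
        Returns a dict v -> color if every vertex can be matched, else None."""
        match_of_color = {}

        def augment(v, seen):
            for c in palette_edges[v]:
                if c in seen:
                    continue
                seen.add(c)
                if c not in match_of_color or augment(match_of_color[c], seen):
                    match_of_color[c] = v
                    return True
            return False

        for v in uncolored:
            if not augment(v, set()):
                return None              # no saturating matching found
        return {v: c for c, v in match_of_color.items()}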

Lemma 3.4.

Let C_i be any almost-clique and consider the partial coloring obtained after processing the almost-cliques C_1, …, C_{i−1}. With high probability (over the randomness of the third batch), there exists a matching in the palette-graph of C_i that matches all of its uncolored vertices.

Proof of this lemma is based on the following result on existence of large matchings in certain random graphs that we prove in this paper.

Lemma 3.5.

Suppose and . Let be a bipartite graph such that:

  1. and ;

  2. each vertex in has degree at least and at most ;

  3. the average degree of vertices in is at least .

A subgraph of obtained by sampling each edge with probability has a matching of size with high probability.

The proof of Lemma 3.5 is quite technical and hence we postpone it to Section 3.5 to keep the flow of the current argument. We now use this result to prove Lemma 3.4.

Proof of Lemma 3.4.

Define as the bipartite graph with the same vertex set as the palette-graph of such that there is an edge between a vertex and a color iff is available to (edges in are superset of the ones in palette-graph as an edge can appear in even if ). By this definition, the palette-graph of is a subgraph of chosen by picking each edge independently with probability (by the choice of ).

We apply Lemma 3.5 to a properly chosen subgraph of with the same vertex-set to prove this lemma. Let be the number of vertices in . By definition of coloring (after the proof of Lemma 3.2), has vertices. For any vertex , if has more than available colors (i.e., neighbors in ), then we pick available colors for arbitrarily and only connect to those color-vertices in ; otherwise, has the same neighbors in and .

We prove that satisfies all three properties in Lemma 3.5. Let and , and , and thus . This proves the first part of Property (1) of Lemma 3.5. Moreover, as is an almost-clique, by Property (2a) and by Property (2c) of Lemma 2.3, and hence , proving the second part of Property (2) as well.

Furthermore, each vertex in has degree at most . Also any vertex in has degree at most outside the almost-clique in by Property (2b) of Lemma 2.3, so any vertex in has at least available colors (even if has assigned colors to all vertices outside and all colors used by the colorful matching are also unavailable). As in we connect every vertex to the available color-vertices, satisfies Property (2) in Lemma 3.5.

Consider a vertex . Let be the number of non-neighbors of inside and hence has at most neighbors outside . As such, has at least neighbors inside of ( is the number of colors used by the colorful matching), hence also at least neighbors inside of . So has at least edges (by the fact that and the choice of ). Hence, the average degree in is at least , which implies satisfies Property (3) in Lemma 3.5.

To conclude, satisfies the properties of Lemma 3.5. Since the palette-graph of contains a subgraph of obtained by sampling each edge of with probability (as argued above), the palette-graph contains a matching of size with high probability.       

3.4 Wrap-Up: Proof of Theorem 1.

The existence of the desired list-coloring under the sampling process of the three batches follows from Lemmas 3.1, 3.2, and 3.4. Note that this sampling process is not exactly the same as sampling O(log n) colors uniformly at random as in Theorem 1. However, in this process, with high probability, we do not sample more than O(log n) colors for each vertex, and hence conditioning on the number of sampled colors (as in Theorem 1) only changes the probability of success by a negligible factor, thus implying Theorem 1.

3.5 Proof of Lemma 3.5: Large Matchings in (Almost) Random Graphs

We first need the following auxiliary claim. The proof is standard and appears in Appendix B.

Claim 3.6.

Suppose is a constant. Consider two random variables and where for all , and are independent indicator random variables and and . Suppose and are non-negative integers that are indexed in decreasing order and have the following two properties:

  • for any ,

then for any integer , .

We now use this claim to prove Lemma 3.5 restated below.

Lemma (Restatement of Lemma 3.5).

Suppose and . Let be a bipartite graph such that:

  1. and ;

  2. each vertex in has degree at least and at most ;

  3. the average degree of vertices in is at least .

If is a subgraph of obtained by sampling each edge with probability , then has a matching of size with high probability.

Proof of Lemma 3.5.

By Hall’s marriage theorem, we only need to prove that with high probability, for any , the size of neighbor set of is at least in , i.e., . Fix a sets and let ; we prove this for the set .

If , since each vertex has degree at least in , total number of edges from to is at least . On the other hand, if we fix another set with , there are at most edges between and due to the fact that both and have at most vertices. As such, the number of edges between and