1 Introduction
A linear forest is a forest in which every connected component is a path. Given a graph , we define its linear arboricity, denoted by , to be the minimum number of edgedisjoint linear forests in whose union is . This notion was introduced by Harary [15] in 1970 as one of the covering invariants of graphs, and has been studied quite extensively since then.
It is immediate that as every edge (along with the isolated vertices ) forms a linear forest. A less trivial upper bound can be obtained as follows: by a classical theorem due to Vizing, can be partitioned into at most matchings, where denotes the maximum degree of ; observe that each matching is a linear forest, and therefore we get that . For a lower bound, note that every linear forest has at most edges (and equality holds if and only if the linear forest is a Hamiltonian path). Therefore, if is a regular graph, then
which implies (recall that is an integer) that . The following conjecture, known as the linear arboricity conjecture, of Akiyama, Exoo and Harary [1] asserts that this bound is the best possible:
Conjecture 1.1 (The linear arboricity conjecture).
Let be a graph of maximum degree . Then,
Remark 1.2.
It is easy to see that every graph , with maximum degree , can be embedded into a regular graph (perhaps on a greater number of vertices). Therefore, the above conjecture is equivalent to the statement that for a regular graph we have .
The linear arboricity conjecture was shown to be asymptotically correct as by Alon in 1988 [3]. He showed that for every regular graph ,
in the same paper, he also proved that the linear arboricity conjecture holds for graphs with girth . The bound for general graphs was subsequently improved by Alon and Spencer in 1992 (see [4]) to:
(1) 
Even though this conjecture has received a considerable amount of attention over the years, and has been proven (i) in special cases (see, e.g., [1, 2, 3, 8, 14, 25, 26]) (ii) for almost all regular graphs of constant degree by McDiarmid and Reed [18], and (iii) for a typical ErdősRenyi graph with edgedensity either or a fixed constant by Glock, Kühn and Osthus [12], there have been no asymptotic improvements in the error term (that is, the second summand in the bound (1) of Alon and Spencer) for general graphs. Our first main result improves this term by a polynomial factor:
Theorem 1.3.
There exist absolute constants and for which the following holds. For any regular graph ,
Remark 1.4.
In the proof of Theorem 1.3, we make use of Lemma 2.13, the proof of which relies on a ‘nibbling’ argument. As this argument is wellknown but quite lengthy, we have used the results from [7] as a black box, and we get a bound of (say) . While a more careful analysis of the nibbling process tailored to our argument may very well give a better bound on , we have made no attempt to do so, since we believe that a ‘natural barrier’ for our argument should be i.e. (which is anyway far from creftype 1.1), and any further progress towards the conjecture should require new ideas.
It was shown by Peroche [22] that computing the linear arboricity of a graph is complete; this is to be contrasted with variants like the arboricity of a graph (i.e. the minimum number of edgedisjoint forests in whose union is ) for which polynomial time algorithms are available [11]. Our proof of Theorem 1.3 gives an algorithm for computing a decomposition of into at most edgedisjoint linear forests, which runs in time polynomial in
with high probability. Since the linear arboricity of a
regular graph is at least , we thereby get an approximation algorithm providing the bestknown approximation guarantee (to our knowledge) for efficiently approximating the linear arboricity of a regular graph.Corollary 1.5.
There exist absolute constants and for which the following holds. Let be a regular graph. Then, there is a probabilistic polynomial time algorithm for approximating to within multiplicative error.
Our second main result deals with graphs, which we now define. A regular graph is said to be an graph if
and the second largest (in absolute value) eigenvalue of the adjacency matrix of
is at most . For all such graphs with not too large compared to , we are able to obtain better bounds on the error than the one coming from Theorem 1.3.Theorem 1.6.
There exist absolute constants and for which the following holds. For every graph with ,
Just like for Theorem 1.3, our proof of Theorem 1.6 also leads to an algorithm for computing such a decomposition of in time which is polynomial in with high probability.
1.1 The general proof scheme
Our proof outlines follow and extend ideas from [5]. Let be a regular graph on vertices. Consider the following procedure to upper bound : First, find a vertex partitioning , where is an even positive integer to be specified later, with the following properties:

for all , and

for all and all .
The existence of such a partitioning is guaranteed by Lemma 2.8, which is proved by a standard application of Chernoff’s bounds (Lemma 2.1) followed by the Lovász Local Lemma (Lemma 2.4).
Second, for all , let be the induced bipartite graph . By Property and Vizing’s theorem (Theorem 2.7), one can decompose into at most
matchings. Let be any such decomposition into matchings (it might be the case that a few of them are empty), and let be the collection of all such decompositions (that is, one decomposition for every ).
Third, let be a Hamiltonian path decomposition of ; the existence of such a decomposition is ensured by the fact that is even and a classical result of Walecki from the 1890s which can be found in [17] and provides an explicit such decomposition. It is easy to see that using our collection of decompositions , one can find a collection of forests, one for every such Hamiltonian path in , satisfying the following two properties:

consists of at most edgedisjoint linear forests;

contains all the edges .
Indeed, let be such a Hamiltonian path; after possibly relabeling the vertices, we may assume that . Observe that by taking one matching from each decomposition we obtain a linear forest. Therefore, by repeating this procedure times, since each consists of at most matchings, one can build a collection of at most linear forests for every . Clearly, such a collection contains edgedisjoint linear forests whose union consists of all the edges of all the bipartite graphs .
As there are Hamiltonian paths in , the above construction gives us at most
linear forests which cover all the edges in all the bipartite graphs . Let be the set of all the edges which have not been covered by these linear forests (we will also identify with the graph on whose edges are , in which case we will refer to as the leave graph). Since by Property of the partitioning, Vizing’s theorem shows that can be decomposed into at most matchings. Since any matching is manifestly a linear forest, we have thus obtained a decomposition of the edges of into at most
(2) 
linear forests. In order to optimize the error term , we would like to pick so that the two summands in the error term are the same. This is achieved by choosing , in which case
This is the strategy used in [5] to recover the bound of Alon and Spencer.
Let us now discuss the weak points in the construction and the analysis that we have presented, along with ideas for improving them. The formal details will be given in subsequent sections.

In the above construction, we decompose the leave graph into matchings and treat each matching as a linear forest by itself. This gives us the error term in the above analysis. Note, however, that adding a matching contained in some to any of the linear forests obtained from a path which has as an endpoint still results in a linear forest. Therefore, it makes sense to try to ‘swallow’ all the edges of in our current linear forests. We discuss this in more detail in Section 2.6, where we also present the key technical lemma (Lemma 2.13) needed to make this idea work. The upshot of Lemma 2.13 is that it allows us to replace the term in the error by for some . Optimizing the error term now results in the choice , which gives an error of for some , as desired in Theorem 1.3.

In the above construction, we take matchings in each , whereas ideally, we would like to take only ‘average degree’ many matchings. This error, summed up for all , gives us the term in (2). In the proof of Theorem 1.6, we will show (Lemma 2.10) that if satisfies some expansion properties, then we can approximately decompose each into edgedisjoint perfect (up to divisibility) matchings. If we remove the linear forests generated by these matchings using the above procedure, then we remove the “correct” number of linear forests, and the “leave graph” has much smaller maximum degree. Now, we apply Theorem 1.3 to .
2 Auxiliary lemmas
In this section, we gather various preliminaries, as well as state and prove the key lemmas needed for our proofs.
2.1 Probabilistic estimates
Throughout this paper, we will make extensive use of the following wellknown bound on the upper and lower tails of a sum of independent indicators, due to Chernoff (see, e.g., Appendix A in [4]).
Lemma 2.1 (Chernoff’s inequality).
Remark 2.2.
If all the ’s are the same, the obtained bounds are also valid if, instead of taking
as the sum of i.i.d random variables, we take it to be hypergeometrically distributed with mean
[16].Before introducing the next tool to be used, we need the following definition.
Definition 2.3.
Let be a collection of events in some probability space. A graph on the vertex set is called a dependency graph for if is mutually independent of all the events .
The following is the socalled Lovász local lemma in its symmetric version (see, e.g., [4]).
Lemma 2.4 (Lovász local lemma).
Let be a sequence of events in some probability space, and let be a dependency graph for . Suppose that for every and . Then, .
2.2 Algorithmic Lovász local lemma
The original proof of the Lovász local lemma in [9] is nonconstructive in that it does not provide any way of finding a point in the probability space avoiding the ‘bad’ events. However, in the case when the ‘bad’ events are determined by a finite collection of mutually independent random variables , the breakthrough work of Moser and Tardos [21] shows that the following simple randomized algorithm efficiently computes an assignment to the random variables which avoids all the ‘bad’ events – start with a random assignment to the variables , and check whether some event in is violated. If so, arbitrarily pick such a violated event, and sample another random assignment for the values of the variables on which this event depends (this step is called a resampling of the event). Continue this process until there are no violated events.
Theorem 2.5 ([21]).
Let be a finite set of mutually independent random variables in a probability space. Let be a finite set of events determined by these variables. Consider the dependency graph on these events given by adding an edge connecting two events if and only if and depend on some common random variable in . Suppose that for every and . Then, there exists an assignment of values to the variables not violating any of the events in . Moreover the randomized algorithm described above resamples an event at most an expected times before it finds such an evaluation. Thus, the expected total number of resampling steps is at most
Remark 2.6.
All the applications of the local lemma in this paper fit the general framework of the above theorem and seek to avoid at most events, where is some polynomial in the number of vertices . Moreover, every event in each of our applications can be sampled in time , for some polynomial . It follows that all of our applications of the local lemma can be performed algorithmically in expected time . Thus, by Markov’s inequality, it follows that the probability of the algorithm taking more than time is at most .
2.3 Vizing’s theorem
The chromatic index of a graph , denoted by , is the minimum number of colors needed to color in such a way that each color class is a matching. It follows immediately from this definition that ; perhaps surprisingly, Vizing [24] proved that this trivial lower bound is nearly optimal:
Theorem 2.7 (Vizing’s Theorem).
Every graph satisfies
Moreover, the strategy in Vizing’s original proof can be used to obtain a polynomial time algorithm to edge color any graph with colors ([20]). Note that, as mentioned in the introduction, Vizing’s theorem immediately gives the bound .
2.4 Random vertex partitioning
Given a regular graph with sufficiently large, the following lemma gives a partition for which ‘all the degrees are correct’.
Lemma 2.8.
There exists an absolute constant for which the following holds. For all , all regular graphs , and all integers , there exists a partition satisfying the following two properties:

For all , .

For all and for all , the number of edges from into , denoted by , satisfies
Proof.
Note that for (say) , the lemma follows easily by Chernoff’s inequality for the hypergeometric distribution and the union bound. Since we are also interested in graphs with smaller degree, we need a slightly more complicated proof where the union bound is replaced by a standard application of the local lemma (Lemma 2.4).
Let and let be an arbitrary partition of such that are of size each. Let be a random function chosen as follows: for each , the restriction is a permutation of chosen uniformly at random. Given such an , define . Observe that for each , is either or , so that the desired property of the lemma holds. We wish to show that, with positive probability, there exists an such that the corresponding partition satisfies property of the lemma.
To this end, fix a vertex and for each , let . Since each is chosen uniformly at random from among all permutations of , it follows that for all ,
Therefore, by Chernoff’s bounds (Lemma 2.1),
Let denote the event ‘’, and note that for all and , may depend on an event only if at least one of the following two conditions hold: ; or and have neighbors to the same for some . In particular, each event depends on at most events. Finally, since
the local lemma guarantees the existence of an as desired. ∎
2.5 Finding dense, regular spanning subgraphs in ‘nice’ bipartite graphs
The next lemma shows that almostregular balanced bipartite graphs induced by large disjoint subsets of a good expander contain a spanning regular graph covering almost all the edges. The proof is similar to the proof of Lemma 2.12 in [10], and is based on the following generalization of the GaleRyser theorem due to Mirsky [19].
Theorem 2.9 ([19]).
Let be a balanced bipartite graph with , and let be an integer. Then, contains an factor if and only if for all and
Lemma 2.10.
Let be an graph. Let be some integer such that . Let and be disjoint subsets of of sizes and consider the bipartite subgraph of induced by these sets, denoted by . Assume further that . Then, contains an factor (i.e. an regular spanning subgraph) for , provided that .
Proof.
Since , the statement is vacuously true whenever . Hence, we may assume that . By Mirsky’s criterion, it suffices to verify that for all and , we have
We divide the analysis into five cases:
Case 1: . Since , there is nothing to prove in this case.
Case 2: , and , where . Suppose for contradiction that Then, it must be the case that
On the other hand, we know by the expander mixing lemma that
where the second inequality holds since . Hence, we must have
Since both terms on the right hand side are nonnegative, should also be greater than either of them, for which we must have
In particular, we must have , which implies , which violates our assumption about .
Case 3: , , and . If , then by the same argument as above, we must have
In particular, we must have , which violates our assumption about .
Case 4: , and . By assumption, we have , so that . Moreover, since , we have that . Therefore, . On the other hand, we also have . Combining these two inequalities, we see that . Therefore, by the expander mixing lemma, it suffices to verify that
Dividing both sides by , we see that this is implied by the inequality
where , , , , , and . Observe that the objective function on the left hand side of the desired inequality is bilinear in and , and therefore the minimum will be obtained on the triangular boundary of the region. On this boundary, the inequality reduces to one of the following: ; ; , and is readily verified since .
Case 5: and . This is exactly the same as cases (2)(4) with the roles of and interchanged. ∎
Remark 2.11.
Under the conditions of the above lemma, an factor in can be found efficiently using algorithmic versions of Mirsky’s criterion based on standard network flow algorithms (see, e.g., [6]).
Remark 2.12.
In the application of this lemma to Theorem 1.6, we will have to deal with bipartite graphs as above, except that we are allowed to have . In this scenario, it is impossible to find an factor. However, by adding a “fake” vertex to with suitable edge connections to , finding an factor in this new graph using the above lemma, decomposing this factor into edgedisjoint perfect matchings using repeated applications of Hall’s theorem, and finally removing all edges incident to the “fake” vertex, we see that contains edge disjoint matchings such that every vertex in is matched in at least such matchings.
2.6 Avoiding short cycles
In this section, we introduce our key technical lemma for proving Theorem 1.3. Since the usefulness of this lemma may not be apparent at first glance, we encourage the reader to refer to this section only after encountering its application in the proof of Theorem 1.3.
Lemma 2.13.
There exist universal constants for which the following holds. Let be a graph with maximum degree and minimum degree such that and . Let be a fixed collection of matchings in the complete graph on . Then, there exists a collection of matchings in , where some of them may possibly be empty, such that:

the graph , which is obtained from by deleting all the edges , has maximum degree at most ;

for all , there are at most indices for which lies on a cycle in of length at most , where .
Moreover, such a collection of matchings may be obtained in poly(V(G)) time with high probability.
The proof of this lemma builds on the proof of the main result in the work of Dubhashi, Grable, and Panconesi [7]. Since the details are somewhat involved, we defer them to Appendix A.
3 Proofs of main results
In this section, we conclude the proofs of our main results. Since these proofs build on the general strategy discussed earlier, we encourage the reader to review the construction in Section 1.1 before proceeding. We start by proving Theorem 1.6 as a warm up since its proof is simpler.
3.1 Proof of Theorem 1.6
Let be an graph and set . As in the general proof scheme presented in Section 1.1, we start with a vertex decomposition satisfying the conclusions of Lemma 2.8, where is a positive even integer which will be specified below. For all , let be a collection of edgedisjoint matchings of the bipartite graph as in Remark 2.12 – such a decomposition exists for all sufficiently large since holds by our choice of below, and our assumption that .
Let be a Hamiltonian path decomposition of , and for each , let be the collection of edgedisjoint linear forests obtained as in Section 1.1. This gives us a set of edgedisjoint linear forests of . The key observation here is that the graph induced by all edges of which are not in any such linear forest has maximum degree since each vertex in is in at least of the edgedisjoint matchings selected in . Our goal now is to find a decomposition of the edges of into as few linear forests as possible. The bound (1) ensures that we can find a decomposition into at most linear forests. Together with the collection of edgedisjoint linear forests that we built earlier, this shows that
Setting to optimize the error term (in which case ) shows that , where the tilde hides logarithmic dependence on . If instead of (1), we use Theorem 1.3 to handle the linear arboricity of , then we get that
for some , as desired.
3.2 Proof of Theorem 1.3
Let be a regular graph on vertices with sufficiently large. Let be a vertexpartition satisfying the conclusions of Lemma 2.8, where is a positive even integer which will be specified below. As before, let denote a decomposition of the bipartite graph into at most matchings, and let denote the collection of such decompositions.
Let be a Hamiltonian path decomposition of , and for each , let be the collection of at most edgedisjoint linear forests obtained as in Section 1.1. Fix an arbitrary labeling of these forests. Moreover, for each , let and denote its endpoints, and observe that all the pairs are disjoint.
Next, for each , let be a decomposition of the edges of into matchings; the existence of such a decomposition is guaranteed by Vizing’s theorem. For each , let , and observe that is a collection of edgedisjoint linear forests which covers all the edges . For each , let be the set of all pairs for which there exists a path in of length exactly with and as its endpoints. Note that such paths correspond precisely to two ‘full paths’ of length in whose endpoints in are an edge of . Since each is a matching, it follows immediately that each is a matching of the complete graph on the vertex set .
For each , consider the graph . By Lemma 2.8, we have
where . Below, we will choose be to less than . Therefore, for sufficiently large, and , so that by applying Lemma 2.13 to we obtain a collection of matchings in , where some of them are possibly empty, such that:

the graph , which is obtained from by deleting all the edges , has maximum degree at most , and

for all , there are at most indices for which lies on a cycle in of length at most .
With this in hand, let for all , and let . Since each is a graph of maximum degree at most , it is a disjoint union of cycles, paths and isolated vertices. We wish to remove one edge from each cycle in each . For the analysis, it will be convenient to do it in the following manner: for any cycle in any of length at most , remove an edge arbitrarily from ; on the other hand, for a cycle in some of length at least , delete an edge chosen uniformly at random from among the first (with respect to a fixed, but otherwise arbitrary ordering of the edges) edges of appearing in this cycle. Let denote the (random) linear forest resulting from after this deletion, and let be the collection of edgedisjoint linear forests obtained from the Hamiltonian path in this manner.
For each , let denote the (random) number of edges in which are incident to . We claim that there is a choice of for which for all . For this, fix and observe that since is part of at most cycles of length in , it follows that can be a part of at most cycles in of length . Hence, the contribution to from such cycles is at most . Moreover, the probability that any cycle of length at least contributes to is bounded above by , since such a cycle contributes to only when the edge deleted from it is incident to , where the edge to be deleted is chosen uniformly at random from among edges, of which at most one is incident to . Since there are at most cycles containing to start with, and since deletions from long cycles are made independently, it follows from Chernoff’s bounds that with probability at least , . Let denote the event that this does not happen. Note that can depend on only if and are both incident to the first edges of in a long cycle. Again, since there are at most cycles to start with, it follows that any can depend on at most other ’s. Therefore, since , it follows from the local lemma that , which proves the desired claim.
Finally, repeat the above construction for each to obtain a collection of edge disjoint linear forests , and let denote the leave graph obtained by deleting from any edge which appears in . Observe that consists of edges of the following two types:

edges within that are not contained in i.e. edges in the graph ;

edges in that are not contained in i.e. edges removed during the deletion process described above.
Recall from Lemma 2.8 that for all . Since the are disjoint, it follows from the above discussion that , where Therefore, by Vizing’s theorem, one can decompose into at most edgedisjoint matchings. These matchings, together with , give a decomposition of into a number of linear forests which is at most
Optimizing the error term by setting the two summands in the parentheses to be equal gives , in which case, we get that
for some , as desired.
References
 [1] J. Akiyama, G. Exoo, and F. Harary. Covering and packing in graphs. III. Cyclic and acyclic invariants. Math. Slovaca, 30(4):405–417, 1980.
 [2] J. Akiyama, G. Exoo, and F. Harary. Covering and packing in graphs. IV. Linear arboricity. Networks, 11(1):69–72, 1981.
 [3] N. Alon. The linear arboricity of graphs. Israel J. Math., 62(3):311–325, 1988.
 [4] N. Alon and J. H. Spencer. The probabilistic method. WileyInterscience Series in Discrete Mathematics and Optimization. John Wiley & Sons, Inc., New York, 1992. With an appendix by Paul Erdős, A WileyInterscience Publication.
 [5] N. Alon, V. J. Teague, and N. C. Wormald. Linear arboricity and linear arboricity of regular graphs. Graphs Combin., 17(1):11–16, 2001.
 [6] R. P. Anstee. The network flows approach for matrices with given row and column sums. Discrete Math., 44(2):125–138, 1983.
 [7] D. Dubhashi, D. A. Grable, and A. Panconesi. Nearoptimal, distributed edge colouring via the nibble method. Theoret. Comput. Sci., 203(2):225–251, 1998.
 [8] H. Enomoto and B. Péroche. The linear arboricity of some regular graphs. J. Graph Theory, 8(2):309–324, 1984.
 [9] P. Erdős and L. Lovász. Problems and results on chromatic hypergraphs and some related questions. Colloq. Math. Soc. János Bolyai, 10:609–627, 1975.
 [10] A. Ferber and V. Jain. 1factorizations of pseudorandom graphs. Preprint, arXiv:1803.10361, 2018.
 [11] H. N. Gabow and H. H. Westermann. Forests, frames, and games: algorithms for matroid sums and applications. Algorithmica, 7(56):465–497, 1992.
 [12] S. Glock, D. Kühn, and D. Osthus. Optimal path and cycle decompositions of dense quasirandom graphs. J. Combin. Theory Ser. B, 118:88–108, 2016.
 [13] D. A. Grable. A large deviation inequality for functions of independent, multiway choices. Combin. Probab. Comput., 7(1):57–63, 1998.
 [14] F. Guldan. The linear arboricity of regular graphs. Math. Slovaca, 36(3):225–228, 1986.
 [15] F. Harary. Covering and packing in graphs. I. Ann. New York Acad. Sci., 175:198–205, 1970.
 [16] W. Hoeffding. Probability inequalities for sums of bounded random variables. J. Amer. Statist. Assoc., 58:13–30, 1963.
 [17] E. Lucas. Récréations mathématiques. 2ième éd., nouveau tirage. Librairie Scientifique et Technique Albert Blanchard, Paris, 1960.
 [18] C. McDiarmid and B. Reed. Linear arboricity of random regular graphs. Random Structures Algorithms, 1(4):443–445, 1990.
 [19] L. Mirsky. Combinatorial theorems and integral matrices. J. Combinatorial Theory, 5:30–44, 1968.
 [20] J. Misra and D. Gries. A constructive proof of Vizing’s theorem. Inform. Process. Lett., 41(3):131–133, 1992.
 [21] R. A. Moser and G. Tardos. A constructive proof of the general Lovász local lemma. J. ACM, 57(2):Art. 11, 15, 2010.
 [22] B. Péroche. Complexité de l’arboricité linéaire d’un graphe. II. RAIRO Rech. Opér., 19(3):293–300, 1985.
 [23] V. Rödl. On a packing and covering problem. European J. Combin., 6(1):69–78, 1985.

[24]
V. G. Vizing.
On an estimate of the chromatic class of a
graph. Diskret. Analiz No., 3:25–30, 1964.  [25] J.L. Wu. On the linear arboricity of planar graphs. J. Graph Theory, 31(2):129–134, 1999.
 [26] J.L. Wu and Y.W. Wu. The linear arboricity of planar graphs of maximum degree seven is four. J. Graph Theory, 58(3):210–220, 2008.
Appendix A Proof of Lemma 2.13
In this appendix, we show how the proof of the main result in [7], which is based on the celebrated Rödl nibble [23], implies Lemma 2.13. The organization of this appendix is as follows: Algorithm 1 records the nibbling algorithm used in [7]; Theorem A.1 and Theorem A.4 record the conclusion of the analysis in [7]; Corollary A.5 adapts the analysis in [7] for our choice of parameters; Lemma A.6 and Lemma A.7 show that Algorithm 1 produces only a small number of short cycles with respect to any fixed collection of matchings, and finally, Proposition A.8 proves Lemma 2.13.
Before proceeding to formal details, let us provide a high level overview of what follows. The goal in [7] is to produce a proper edgecoloring of a regular graph using colors (here, is allowed to depend on ). Their algorithm runs in two phases – the first phase, which is based on the semirandom ‘nibble’ method of Rödl, is the one relevant to our paper; the second phase actually uses a trivial algorithm. In the first phase, the algorithm seeks to color ‘most’ of the edges using a palette of colors. Starting with the input graph , the algorithm generates a sequence of graphs, where is the graph induced by the edges which are still uncolored at the end of stage . In each stage , each edge has a palette of all ‘available’ colors, where initially, the palette of each edges is the set . Each vertex selects an fraction of uncolored edges incident to it, and each selected edge picks a tentative color from its palette independently and uniformly at random. If a selected edge has no ‘colorconflicts’ with any neighboring edge, then the corresponding color becomes the final color of the edge. All the palettes of the remaining edges are updated by deleting all the final colors of neighboring colored edges. This process is then repeated in the next stage. The algorithm continues for a number of rounds by the end of which (with high probability) each vertex has no more than uncolored edges incident to it.
As in all nibblingbased arguments, the key idea is that in each stage, the number of edges which experience color conflicts is only a small fraction of the number of edges selected to be colored at this stage. The main effort in [7] is spent in showing that this holds true with high probability throughout the process. They do this by showing inductively – and this is what we will use in our analysis – that the graphs and the color palettes of each edge behave almost like ‘random’ subgraphs and subsets of the original ones. We now give a formal description of the algorithm and analysis in [7]. Following this, we will show how to tailor it to our application.
Algorithm 1 is the first phase of the algorithm used in [7] as described above.