Efficiently list-edge coloring multigraphs asymptotically optimally

12/26/2018 · Fotis Iliopoulos, et al. · UC Berkeley

We give polynomial time algorithms for the seminal results of Kahn, who showed that the Goldberg-Seymour and List-Coloring conjectures for (list-)edge coloring multigraphs hold asymptotically. Kahn's arguments are based on the probabilistic method and are non-constructive. Our key insight is to show that the main result of Achlioptas, Iliopoulos and Kolmogorov for analyzing local search algorithms can be used to make constructive applications of a powerful version of the so-called Lopsided Lovasz Local Lemma. In particular, we use it to design algorithms that exploit the fact that correlations in the probability spaces on matchings used by Kahn decay with distance.


1 Introduction

In graph edge coloring one is given a (multi)graph $G$ and the goal is to find an assignment of one of $k$ colors to each edge so that no pair of adjacent edges share the same color. The chromatic index, $\chi'(G)$, of $G$ is the smallest integer $k$ for which this is possible. In the more general list-edge coloring problem, a list of allowed colors is specified for each edge. A graph is $k$-list-edge colorable if it has a proper list-edge coloring no matter how lists of $k$ allowed colors are assigned to each edge. The list chromatic index, $\chi'_\ell(G)$, is the smallest $k$ for which $G$ is $k$-list-edge colorable.

Edge coloring is one of the most fundamental and well-studied coloring problems, with various applications in computer science (e.g., [6, 11, 17, 18, 19, 29, 31, 33, 34, 35, 37]). To give just one representative example, if edges represent data packets, then an edge coloring with $k$ colors specifies a schedule of $k$ rounds for exchanging the packets directly and without node contention. In this paper we are interested in designing algorithms for efficiently edge coloring and list-edge coloring multigraphs. To formally describe our results, we need some notation.
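To make the scheduling interpretation concrete, here is a minimal sketch (ours, not from the paper): a greedy assignment of rounds to packets. Greedy uses up to $2\Delta(G) - 1$ rounds, far from the $(1+o(1))\chi^*(G)$ guarantees discussed below, but it illustrates the semantics of an edge coloring as a contention-free schedule.

from collections import defaultdict

# Assign each edge (packet) the smallest round in which neither endpoint
# is busy; every round is a matching, hence a contention-free exchange.
def greedy_edge_schedule(edges):
    used_at = defaultdict(set)          # node -> rounds already occupied
    schedule = []                       # schedule[i] = round of edges[i]
    for (u, v) in edges:
        r = 0
        while r in used_at[u] or r in used_at[v]:
            r += 1
        schedule.append(r)
        used_at[u].add(r)
        used_at[v].add(r)
    return schedule

# Example: five packets on four nodes; parallel edges are allowed.
print(greedy_edge_schedule([(0, 1), (0, 1), (1, 2), (0, 2), (2, 3)]))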

For a multigraph $G$, let $\mathcal{M}(G)$ denote the set of matchings of $G$. A fractional edge coloring of $G$ is a set of matchings $M_1, \ldots, M_\ell$ and corresponding positive real weights $w_1, \ldots, w_\ell$, such that the sum of the weights of the matchings containing each edge is one, i.e., $\sum_{i : e \in M_i} w_i = 1$ for every $e \in E(G)$. A fractional edge coloring is a fractional edge $k$-coloring if $\sum_{i=1}^{\ell} w_i = k$. The fractional chromatic index of $G$, denoted by $\chi^*(G)$, is the minimum $k$ such that $G$ has a fractional edge $k$-coloring.

Let $\Delta(G)$ denote the maximum degree of $G$ and define $\Gamma(G) = \max_{H} \frac{|E(H)|}{\lfloor |V(H)|/2 \rfloor}$, where the maximum ranges over subgraphs $H$ of $G$ with at least three vertices. Both of these quantities are obvious lower bounds for the chromatic index, and it is known [8] that $\chi^*(G) = \max\{\Delta(G), \Gamma(G)\}$. Furthermore, Padberg and Rao [30] show that the fractional chromatic index of a multigraph, and indeed an optimal fractional edge coloring, can be computed in polynomial time.

A famous and long-standing conjecture by Goldberg and Seymour states that every multigraph $G$ satisfies $\chi'(G) \le \max\{\Delta(G) + 1, \lceil \chi^*(G) \rceil\}$. In a seminal paper [18], Kahn showed that the Goldberg-Seymour conjecture holds asymptotically:

Theorem 1.1 ([18]).

For multigraphs $G$, $\chi'(G) \le (1 + o(1))\chi^*(G)$.

(Here $o(1)$ denotes a term that tends to zero as $\chi^*(G) \to \infty$.) He later [19] proved the analogous result for list-edge coloring, establishing that the List Coloring Conjecture, which asserts that $\chi'_\ell(G) = \chi'(G)$ for any multigraph $G$, also holds asymptotically:

Theorem 1.2 ([19]).

For multigraphs $G$, $\chi'_\ell(G) \le (1 + o(1))\chi^*(G)$.

The proofs of Kahn use the probabilistic method and are not constructive. The main contribution of this paper is to provide polynomial time algorithms for the above results, as follows:

Theorem 1.3.

For every $\epsilon > 0$, there exists an algorithm that, given a multigraph $G$ on $n$ vertices, constructs a $(1+\epsilon)\chi^*(G)$-edge coloring of $G$ with probability at least $1 - 1/\mathrm{poly}(n)$ in expected polynomial time.

Theorem 1.4.

For every $\epsilon > 0$, there exists an algorithm that, given a multigraph $G$ on $n$ vertices and an arbitrary list of $(1+\epsilon)\chi^*(G)$ colors for each edge, constructs a proper edge coloring of $G$ in which every edge receives a color from its list, with probability at least $1 - 1/\mathrm{poly}(n)$, in expected polynomial time.

Clearly, Theorem 1.4 subsumes Theorem 1.3. Furthermore, the results of Sanders and Steurer [33] and Scheide [35] already give polynomial time algorithms for edge coloring multigraphs asymptotically optimally, without exploiting the arguments of Kahn. Nonetheless, we choose to present the proof of Theorem 1.3 for three reasons. First and most importantly, its proof is significantly easier than that of Theorem 1.4, while it contains many of the key ideas required for making Theorem 1.2 constructive. Second, our algorithms and techniques are very different from those of [33, 35]. Finally, as we will see, we will need to show that the algorithm of Theorem 1.3 is commutative, a notion introduced by Kolmogorov [22]. This fact may be of independent interest since, as shown in [22, 14], commutative algorithms have several nice properties: they are typically parallelizable, their output distribution has high entropy, etc.

As a final remark, we note that, to the best of our knowledge, Theorem 1.4 is the first result to give an asymptotically optimal polynomial time algorithm for list-edge coloring multigraphs.

1.1 Technical Overview

The proofs of Theorems 1.1 and 1.2 are based on a very sophisticated variation of what is known as the semi-random method (also known as the “naive coloring procedure”), which is the main technical tool behind some of the strongest graph coloring results, e.g., [16, 17, 21, 25]. The idea is to gradually color the graph in iterations, until we reach a point where we can finish the coloring using a greedy algorithm. In its most basic form, each iteration consists of the following simple procedure (using vertex coloring as a canonical example): Assign to each vertex a color chosen uniformly at random; then uncolor any vertex which receives the same color as one of its neighbors. Using the Lovász Local Lemma (LLL) [9] and concentration inequalities, one typically shows that, with positive probability, the resulting partial proper coloring has useful properties that allow for the continuation of the argument in the next iteration. For a nice exposition of both the method and the proofs of Theorems 1.1 and 1.2, the reader is referred to [26].
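For concreteness, the following sketch implements one round of this basic procedure for vertex coloring (the interface and names are ours; the actual analyses add the LLL and concentration arguments on top of this skeleton).

import random

# One semi-random round: every uncolored vertex proposes a uniformly random
# color; a proposal is kept only if it conflicts with no neighbor.
def naive_coloring_round(adj, colors, palette_size):
    proposal = dict(colors)
    for v in adj:
        if v not in colors:
            proposal[v] = random.randrange(palette_size)
    new_colors = dict(colors)
    for v in adj:
        if v not in colors and all(proposal.get(u) != proposal[v] for u in adj[v]):
            new_colors[v] = proposal[v]   # no conflict: the color becomes final
    return new_colors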

The key new ingredient in Kahn's arguments is the method of assigning colors. For each color $c$, we choose a matching $M_c$ from some hard-core distribution on $\mathcal{M}(G)$ and assign the color $c$ to the edges in $M_c$. The idea is that by assigning each color exclusively to the edges of one matching, we avoid conflicting color assignments and the resulting uncolorings.

The existence of such hard-core distributions is guaranteed by the characterization of the matching polytope due to Edmonds [8] and a result by Lee [23] (also shown independently by Rabinovich et al. [32]). The crucial fact about them is that they are endowed with very useful approximate stochastic independence properties, as was shown by Kahn and Kayll in [20]. In particular, for every edge $e$, conditioning on events that are determined by edges far enough from $e$ in the graph does not effectively alter the probability of $e$ being in the matching.

The reason why this property is important is that it enables the application of a sophisticated version of what is known as the Lopsided Lovász Local Lemma (LLL). Recall that the original statement of the LLL asserts, roughly, that, given a family of "bad" events in a probability space, if each bad event individually is not very likely and, in addition, is independent of all but a small number of other bad events, then the probability of avoiding all bad events is strictly positive. The Lopsided LLL used by Kahn generalizes this criterion as follows. For each bad event $A$, we fix a positive real number $b_A$ and require that conditioning on all but a small number of other bad events does not make the probability of $A$ larger than $b_A$. Then, provided the $b_A$ are small enough, the conclusion of the LLL still holds. In other words, one replaces the "probability of a bad event" in the original LLL statement with the "boosted" probability of the event, and the notion of "independence" by the notion of "sufficiently mild negative correlation".

Notably, the breakthrough result of Moser and Tardos [27, 28] that made the LLL constructive for the vast majority of its applications does not apply in this case, mainly for two reasons. First, the algorithm of Moser and Tardos applies only when the underlying probability measure of the LLL application is a product over explicitly presented variables. Second, it relies on a particular type of dependency (defined by shared variables). The lack of an efficient algorithm for Lopsided LLL applications is the primary obstacle to making the arguments of Kahn constructive.

Our main technical contribution is the design and analysis of such algorithms. Towards this goal, we use the flaws-actions framework introduced in [1] and further developed in [2, 4, 14, 3]. In particular, we use the algorithmic LLL criterion for the analysis of stochastic local search algorithms developed by Achlioptas, Iliopoulos and Kolmogorov in [2]. We start by showing that there is a connection between this criterion and the version of the Lopsided LLL used by Kahn, in the sense that the former can be seen as the constructive counterpart of the latter. However, this observation by itself is not sufficient, since the result of [2] is a tool for analyzing a given stochastic local search algorithm. Thus, we are still left with the task of designing the algorithm before using it. Nonetheless, this connection provides valuable intuition on how to realize this task. Moreover, we believe it is of independent interest as it provides an explanation for the success of various algorithms (such as [24]) inspired by the techniques of Moser and Tardos, which were not tied to a known form of the LLL.

To get a feeling for the nature of our algorithms it is helpful to have some intuition for the criterion of [2]. There, the input is the algorithm to be analyzed and a probability measure over the state space of the algorithm. The goal of the algorithm is to reach a state that avoids a family of bad subsets of the space which we call flaws. It does this by focusing on a flaw that is currently present at each step, and taking a (possibly randomized) action to address it. At a high level, the role of the measure is to gauge how efficiently the algorithm rids the state of flaws, by quantifying the trade-off between the probability that a flaw is present at some inner state of the execution of the algorithm and the number of other flaws each flaw can possibly introduce when the algorithm addresses it. In particular, the quality of the convergence criterion is affected by the compatibility between the measure and the algorithm.

Roughly, the states of our algorithm will be tuples of matchings of a multigraph $G$ (corresponding to color classes) and the goal will be to construct matchings that avoid certain flaws. To that end, our algorithm will locally modify each flawed matching by (re)sampling matchings in subgraphs of $G$ according to distributions induced by the hard-core distributions used in Kahn's proof. The fact that correlations decay with distance in these distributions allows us to prove that, while the changes are local, and hence not many new flaws are introduced at each step, the compatibility of our algorithms with these hard-core distributions is high enough to allow us to successfully apply the criterion of [2].

1.2 Organization of the Paper

In Section 2 we present the necessary background. In Section 3 we show a useful connection between the version of the Lopsided LLL used by Kahn and the algorithmic LLL criterion of [2]. In Section 4 we present the proof of Theorem 1.3. In Section 5, we sketch the proof of Theorem 1.2 and then prove Theorem 1.4.

2 Background and Preliminaries

2.1 The Lopsided Lovász Local Lemma

Erdős and Spencer [10] noted that independence in the LLL can be replaced by positive correlation, yielding the original version of what is known as the Lopsided LLL. More sophisticated versions of the Lopsided LLL have also been established in [5, 7]. Below we state the Lopsided LLL in one of its most powerful forms.

Theorem 2.1 (General Lopsided LLL).

Let $\Omega$ be a probability space and $\mathcal{A} = \{A_1, A_2, \ldots, A_m\}$ be a set of (bad) events. For each $i \in [m]$, let $\Gamma(i) \subseteq [m] \setminus \{i\}$ and let $b_i > 0$ be such that $\Pr\big[A_i \mid \bigcap_{j \in S} \overline{A_j}\big] \le b_i$ for every $S \subseteq [m] \setminus (\Gamma(i) \cup \{i\})$. If there exist positive real numbers $\{x_i\}_{i \in [m]}$ with $x_i < 1$ such that

$b_i \le x_i \prod_{j \in \Gamma(i)} (1 - x_j)$ for every $i \in [m]$, (1)

then the probability that none of the events in $\mathcal{A}$ occurs is at least $\prod_{i=1}^{m} (1 - x_i) > 0$.

The digraph over $\mathcal{A}$ induced by the sets $\Gamma(i)$, $i \in [m]$, is often called a lopsidependency digraph.
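For orientation, the familiar symmetric LLL is recovered as a special case by a standard calculation (not part of this excerpt): if $b_i \le p$ and $|\Gamma(i)| \le d$ for all $i$, then the choice $x_i = \frac{1}{d+1}$ satisfies condition (1) whenever $e\,p\,(d+1) \le 1$, since

$x_i \prod_{j \in \Gamma(i)} (1 - x_j) \ge \frac{1}{d+1}\Big(1 - \frac{1}{d+1}\Big)^{d} \ge \frac{1}{e(d+1)} \ge p \ge b_i.$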

2.2 An Algorithmic LLL Criterion

Let $\Omega$ be a discrete state space, and let $F = \{f_1, f_2, \ldots\}$ be a collection of subsets of $\Omega$ (which we call flaws) such that $\bigcup_{f \in F} f \subsetneq \Omega$. Our goal is to find a state $\sigma \in \Omega \setminus \bigcup_{f \in F} f$; we refer to such states as flawless.

For a state $\sigma$, we denote by $U(\sigma)$ the set of flaws present in $\sigma$. We consider local search algorithms working on $\Omega$ which, in each flawed state $\sigma$, choose a flaw $f$ in $U(\sigma)$ and randomly move to a nearby state in an effort to fix $f$. We will assume that, for every flaw $f$ and every state $\sigma \in f$, there is a non-empty set of actions $A(f, \sigma) \subseteq \Omega$ such that addressing flaw $f$ in state $\sigma$ amounts to selecting the next state $\sigma'$ from $A(f, \sigma)$ according to some probability distribution $\rho_f(\sigma, \sigma')$. Note that potentially $\sigma \in A(f, \sigma)$, i.e., addressing a flaw does not necessarily imply removing it. We sometimes write $\sigma \xrightarrow{f} \sigma'$ to denote that the algorithm addresses flaw $f$ at $\sigma$ and moves to $\sigma'$.

Throughout the paper we consider Markovian algorithms that start from a state $\sigma_0$ picked from an initial distribution $\theta$, and then repeatedly pick a flaw that is present in the current state and address it. The algorithm always terminates when it encounters a flawless state.
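Schematically, such an algorithm is just the following loop (an illustrative skeleton of ours; the initial distribution, the flaw predicates and the action distributions are supplied by the caller, and flaws are scanned in a fixed order, anticipating Theorem 2.4 below).

# Generic Markovian local search in the flaws-actions framework.
#   initial_dist: () -> state, sampling from theta
#   flaws: list of (name, predicate) pairs, in a fixed priority order
#   address: (name, state) -> state, sampling from rho_f(state, .)
def local_search(initial_dist, flaws, address, max_steps=10**6):
    sigma = initial_dist()
    for _ in range(max_steps):
        present = next((name for name, holds in flaws if holds(sigma)), None)
        if present is None:
            return sigma                  # flawless state reached
        sigma = address(present, sigma)   # address the chosen flaw
    raise RuntimeError("step budget exhausted")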

Definition 2.2 (Causality).

We say that flaw $f$ causes flaw $g$ if there exists a transition $\sigma \xrightarrow{f} \sigma'$ such that (i) $\sigma' \in g$; (ii) either $\sigma \notin g$ or $f = g$.

Definition 2.3 (Causality Digraph).

Any digraph on $F$ that includes every edge $(f, g)$ such that $f$ causes $g$ is called a causality digraph. We write $\Gamma(f)$ for the set of out-neighbors of $f$ in this graph.

Throughout this paper we consider only algorithms with the property that $f$ causes $g$ if and only if $g$ causes $f$. We will thus view the causality graph as an undirected graph. We also write $f \sim g$ to denote that $g \in \Gamma(f)$ (or equivalently, $f \in \Gamma(g)$).

For a given probability measure $\mu$ supported on the state space $\Omega$, and for each flaw $f$, we define the charge

$\gamma_f = \max_{\sigma' \in \Omega} \frac{1}{\mu(\sigma')} \sum_{\sigma \in f} \mu(\sigma)\, \rho_f(\sigma, \sigma').$ (2)

In Section 3 we give the intuition behind the definition of charges and also draw a connection with the parameters $b_i$ in Theorem 2.1. We are now ready to state the main result of [2].

Theorem 2.4.

Assume that, at each step, the algorithm chooses to address the lowest indexed flaw present in the current state according to an arbitrary, but fixed, permutation of $F$. If there exist positive real numbers $\{\psi_f\}_{f \in F}$ such that

$\max_{f \in F} \frac{\gamma_f}{\psi_f} \sum_{S \subseteq \Gamma(f)} \prod_{g \in S} \psi_g \le 1 - \eta$ (3)

for some $\eta \in (0,1)$, then the algorithm reaches a flawless object within $(T_0 + s)/\eta$ steps with probability at least $1 - 2^{-s}$, where $T_0 = \log_2 \max_{\sigma \in \Omega} \frac{\theta(\sigma)}{\mu(\sigma)}$.

We also describe another theorem that can be used to show convergence in a polynomial number of steps, even when the number of flaws is super-polynomial, assuming the algorithm has a nice property which we describe below.

Definition 2.5.

For $f \in F$, let $A_f$ denote the $|\Omega| \times |\Omega|$ matrix defined by $A_f[\sigma, \sigma'] = \rho_f(\sigma, \sigma')$ if $\sigma \in f$, and $A_f[\sigma, \sigma'] = 0$ otherwise. A Markovian algorithm defined by matrices $A_f$, $f \in F$, is commutative with respect to a causality relation $\sim$ if for every $f, g \in F$ such that $f \not\sim g$ we have $A_f A_g = A_g A_f$.

We note that Definition 2.5 was introduced in [3], as a generalization of the combinatorial definition of commutativity introduced in [22]. While the latter would suffice for our purposes, we choose to work with Definition 2.5 due to its compactness.

Theorem 2.6.

Let $\mathcal{A}$ be a commutative algorithm with respect to a causality relation $\sim$. Assume there exist positive real numbers $\{\psi_f\}_{f \in F}$ in $(0,1)$ such that condition (3) holds. Assume further that the causality graph induced by $\sim$ can be partitioned into $n$ cliques, with potentially further edges between them. Setting $\psi^{\max} = \max_{f \in F} \psi_f$, the expected number of steps performed by $\mathcal{A}$ is at most $T_0 + O\big(n \cdot \frac{\psi^{\max}}{1-\psi^{\max}}\big)$, and for any parameter $\lambda \ge 1$, $\mathcal{A}$ terminates within $\lambda\big(T_0 + O\big(n \cdot \frac{\psi^{\max}}{1-\psi^{\max}}\big)\big)$ resamplings with probability $1 - 2^{-\Omega(\lambda)}$.

Following Theorem 3.2 in [14], the proof of Theorem 2.6 is identical to that of the analogous result of Haeupler, Saha and Srinivasan [12] for the Moser-Tardos algorithm, and hence we omit it.

2.3 Hard-Core Distributions on Matchings

A probability distribution $\pi$ on the matchings of a multigraph $G$ is hard-core if it is obtained by associating to each edge $e$ a positive real $\lambda_e$ (called the activity of $e$) so that the probability of any matching $M$ is proportional to $\prod_{e \in M} \lambda_e$. Thus, recalling that $\mathcal{M}(G)$ denotes the set of matchings of $G$, and setting $\lambda_M = \prod_{e \in M} \lambda_e$ for each $M \in \mathcal{M}(G)$, we have

$\pi(M) = \frac{\lambda_M}{\sum_{M' \in \mathcal{M}(G)} \lambda_{M'}}.$
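On small instances this distribution can be computed explicitly from the definition by brute force (our illustration only; it enumerates all matchings and thus takes exponential time, whereas the paper relies on the sampler of [15] discussed below):

from itertools import combinations

# Hard-core distribution on the matchings of a small multigraph, computed
# directly from the definition: pi(M) is proportional to the product of the
# activities of the edges of M. Edges are (u, v) pairs, identified by index,
# so parallel edges are allowed.
def hardcore_on_matchings(edges, activity):
    def is_matching(idxs):
        used = [w for i in idxs for w in edges[i]]
        return len(used) == len(set(used))

    weights = {}
    for k in range(len(edges) + 1):
        for idxs in combinations(range(len(edges)), k):
            if is_matching(idxs):
                w = 1.0
                for i in idxs:
                    w *= activity[i]
                weights[idxs] = w          # the empty matching has weight 1
    z = sum(weights.values())
    return {m: w / z for m, w in weights.items()}

dist = hardcore_on_matchings([(0, 1), (0, 1), (1, 2)], [1.0, 1.0, 2.0])
print(dist, sum(p for m, p in dist.items() if 2 in m))   # marginal of edge 2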

The characterization of the matching polytope due to Edmonds [8] and a result of Lee [23] (which was also shown independently by Rabinovich et al. [32]) imply the following connection between fractional edge colorings and hard-core probability distributions on matchings. Before describing it, we need a definition.

For any probability distribution $\pi$ on the matchings of a multigraph $G$, we refer to the probability that a particular edge $e$ is in the random matching as the marginal of $\pi$ at $e$. We write $\vec{p} = (p_e)_{e \in E(G)}$ for the collection of marginals of $\pi$ at all the edges of $G$.

Theorem 2.7.

There is a hard-core probability distribution $\pi$ with marginals $\vec{p}$ if and only if $\vec{p}$ lies in the interior of the matching polytope of $G$; in particular, for every $c > \chi^*(G)$ there is a hard-core distribution with marginals $p_e = 1/c$ for every edge $e$.

Kahn and Kayll [20] proved that the probability distribution promised by Theorem 2.7 is endowed with very useful approximate stochastic independence properties.

Definition 2.8.

Suppose we choose a random matching $M$ from some probability distribution. We say that an event $Q$ is $t$-distant from a vertex $v$ if $Q$ is completely determined by the choice of all matching edges at distance at least $t$ from $v$. We say that $Q$ is $t$-distant from an edge $e$ if it is $t$-distant from both endpoints of $e$.

Theorem 2.9 ([20]).

For any $\delta > 0$, there exists a $K = K(\delta) > 0$ such that for any multigraph $G$ with fractional chromatic index $\chi^*(G) = c$ there is a hard-core distribution $\pi$ with marginals $p_e = \frac{1}{(1+\delta)c}$ for every $e \in E(G)$, such that

  1. $\lambda_e \le K/c$ for every $e \in E(G)$, and hence $\lambda_M \le (K/c)^{|M|}$, for every $M \in \mathcal{M}(G)$.

  2. for every $t \ge 1$, if we choose a matching $M$ according to $\pi$ then, for any edge $e$ and event $Q$ which is $t$-distant from $e$,

    $\Pr[e \in M \mid Q] = (1 \pm \delta_t) \Pr[e \in M],$

    where $\delta_t = K \beta^t$ for a constant $\beta = \beta(\delta) \in (0,1)$.
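The decay in part 2 is easy to observe numerically on toy instances. The following brute-force check (ours, purely illustrative) compares the marginal of an edge of a path with its marginal conditioned on a far-away edge being in the matching:

from itertools import combinations

def matchings(edges):
    for k in range(len(edges) + 1):
        for idxs in combinations(range(len(edges)), k):
            used = [w for i in idxs for w in edges[i]]
            if len(used) == len(set(used)):
                yield idxs

edges = [(i, i + 1) for i in range(12)]    # a path with 12 edges
lam = 1.2                                  # uniform activities

def pr(pred):
    total = sum(lam ** len(m) for m in matchings(edges))
    return sum(lam ** len(m) for m in matchings(edges) if pred(m)) / total

p = pr(lambda m: 0 in m)                                   # marginal of edge 0
p_cond = pr(lambda m: 0 in m and 11 in m) / pr(lambda m: 11 in m)
print(p, p_cond)   # nearly equal: conditioning on the distant edge barely matters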

We conclude this subsection with the result of Jerrum and Sinclair [15] for sampling from hard-core distributions on matchings. The algorithm works by simulating a rapidly mixing Markov chain on matchings, whose stationary distribution is the desired hard-core distribution $\pi$, and outputting the final state.

Theorem 2.10 ([15], Corollary 4.3).

Let $G$ be a multigraph, $\vec{\lambda}$ a vector of activities associated with the edges of $G$, and $\pi$ the corresponding hard-core distribution. Let $n = |V(G)|$ and define $\bar{\lambda} = \max\{1, \max_e \lambda_e\}$. There exists an algorithm that, for any $\xi > 0$, runs in time $\mathrm{poly}(n, \bar{\lambda}, \log \xi^{-1})$ and outputs a matching in $G$ drawn from a distribution $\hat{\pi}$ such that $\|\hat{\pi} - \pi\|_{\mathrm{TV}} \le \xi$.

Remark 2.1.

[15] establishes this result for matchings in (simple) graphs. However, the extension to multigraphs is immediate: make the graph simple by replacing each set of multiple edges between a pair of vertices by a single edge $e$ whose activity $\lambda_e$ is the sum of their activities; then use the algorithm to sample a matching from the hard-core distribution in the resulting simple graph; finally, for each edge $e$ in this matching, select one of the corresponding multiple edges $e'$ with probability $\lambda_{e'}/\lambda_e$. Note that the running time will depend polynomially on the maximum activity $\bar{\lambda}$ in the simple graph, as claimed.
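In code, the reduction reads as follows (our sketch; `sample_simple` stands in for the sampler of Theorem 2.10 on simple graphs and is assumed rather than implemented):

import random
from collections import defaultdict

# Sample a matching of a multigraph given a matching sampler for simple graphs.
def sample_multigraph_matching(edges, activity, sample_simple):
    groups = defaultdict(list)             # {u, v} -> indices of parallel edges
    for i, (u, v) in enumerate(edges):
        groups[frozenset((u, v))].append(i)
    merged_activity = {e: sum(activity[i] for i in idxs)
                       for e, idxs in groups.items()}

    matching = sample_simple(list(groups), merged_activity)  # simple-graph step

    result = []
    for e in matching:                     # re-expand each merged edge
        idxs = groups[e]
        weights = [activity[i] for i in idxs]
        result.append(random.choices(idxs, weights=weights)[0])
    return result                          # indices of multigraph edges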

3 Causality, Lopsidependency and Approximate Resampling Oracles

In this section we show a connection between Theorem 2.1 and Theorem 2.4. While this section is not essential to the proof of our main results, it does provide useful intuition, since it suggests the following natural approach to making applications of the Lopsided LLL algorithmic: we design a local search algorithm for addressing the flaws that correspond to bad events by considering the family of probability distributions whose supports induce a causality graph that coincides with the lopsidependency graph of the Lopsided LLL application of interest. This is typically an automated task. The key to a successful implementation is our ability to make the way in which the algorithm addresses flaws sufficiently compatible with the underlying probability measure $\mu$. To make this precise, we first recall an algorithmic interpretation of the notion of charges defined in (2).

As shown in [2], the charge $\gamma_f$ captures the compatibility between the actions of the algorithm for addressing flaw $f$ and the measure $\mu$. To see this, consider the probability, $\nu_f(\sigma')$, of ending up in state $\sigma'$ after (i) sampling a state $\sigma \in f$ according to $\mu$, and then (ii) addressing $f$ at $\sigma$. Define the distortion associated with $f$ as

$d_f := \max_{\sigma' \in \Omega} \frac{\nu_f(\sigma')}{\mu(\sigma')},$ (4)

i.e., the maximum possible inflation of a state probability incurred by addressing $f$ (relative to its probability under $\mu$, and averaged over the initiating state $\sigma \in f$ according to $\mu$). Now observe from (2) that

$\gamma_f = \mu(f) \cdot d_f.$ (5)

An algorithm for which $d_f = 1$ is called a resampling oracle [13] for $f$, and notice that it perfectly removes the conditioning on the addressed flaw. However, designing resampling oracles for sophisticated measures can be impossible by local search. This is because small, but non-vanishing, correlations under $\mu$ can travel arbitrarily far in the graph. Thus, allowing for some distortion can be very helpful, especially in cases where correlations decay with distance.
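The identity (5) can be checked numerically on toy examples. In the sketch below (all numbers are arbitrary choices of ours), a single flaw on a three-state space has distortion $d_f = 2$, so its action is not a resampling oracle, and the charge indeed equals $\mu(f) \cdot d_f$:

# gamma_f per (2): max over targets t of (1/mu(t)) * sum_{s in f} mu(s)*rho[s][t]
def charge(mu, rho, flaw, states):
    return max(sum(mu[s] * rho[s][t] for s in flaw) / mu[t] for t in states)

# d_f per (4): max over t of nu_f(t)/mu(t), with nu_f the post-action distribution
def distortion(mu, rho, flaw, states):
    mu_f = sum(mu[s] for s in flaw)
    nu = {t: sum(mu[s] / mu_f * rho[s][t] for s in flaw) for t in states}
    return max(nu[t] / mu[t] for t in states)

states = [0, 1, 2]
mu = {0: 0.5, 1: 0.3, 2: 0.2}
flaw = [0]                                  # the flaw holds only in state 0
rho = {0: {0: 0.0, 1: 0.6, 2: 0.4}}         # action distribution when addressing it

g, d = charge(mu, rho, flaw, states), distortion(mu, rho, flaw, states)
assert abs(g - sum(mu[s] for s in flaw) * d) < 1e-12      # identity (5)
print(g, d)                                 # 1.0 and 2.0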

Remark 3.1.

Recalling the definition of the matrix $A_f$ in Definition 2.5 and letting $M_f$ denote the matrix with entries $M_f[\sigma, \sigma'] = \frac{\mu(\sigma)}{\mu(\sigma')} A_f[\sigma, \sigma']$, we see that $\gamma_f = \|M_f\|_1$. As shown in [3], this observation can be used to provide an alternative proof of Theorem 2.4 using the fact that any operator norm (and in particular the $1$-norm) bounds the spectral radius of a matrix. Moreover, this linear algebraic point of view leads to significant generalizations of Theorem 2.4. We refer the reader to [3] for details.

Theorem 3.1 below shows that Theorem 2.4 is the algorithmic counterpart of Theorem 2.1.

Theorem 3.1.

Given a family of flaws $F$ over a state space $\Omega$, an algorithm with causality graph $C$ and neighborhoods $\Gamma(\cdot)$, and a measure $\mu$ over $\Omega$, for each $f \in F$ we have

$\Pr_\mu\Big[f \mid \bigcap_{g \in S} \overline{g}\Big] \le \gamma_f$ for every $S \subseteq F \setminus (\Gamma(f) \cup \{f\})$, (6)

where the $\gamma_f$ are the charges of the algorithm as defined in (2).

Proof.

Let $S \subseteq F \setminus (\Gamma(f) \cup \{f\})$ and let $B = \bigcap_{g \in S} \overline{g}$. Observe that

$\mu(f \cap B) = \sum_{\sigma \in f \cap B} \mu(\sigma) = \sum_{\sigma \in f \cap B} \mu(\sigma) \sum_{\sigma' \in \Omega} \rho_f(\sigma, \sigma') = \sum_{\sigma \in f \cap B} \sum_{\sigma' \in B} \mu(\sigma)\, \rho_f(\sigma, \sigma'),$ (7)

where the second equality holds because each $\rho_f(\sigma, \cdot)$ is a probability distribution, and the third by the definition of causality and the fact that $S \cap \Gamma(f) = \emptyset$, so addressing $f$ at a state in $B$ cannot leave $B$. Now notice that changing the order of summation in (7) gives

$\mu(f \cap B) \le \sum_{\sigma' \in B} \sum_{\sigma \in f} \mu(\sigma)\, \rho_f(\sigma, \sigma') \le \sum_{\sigma' \in B} \gamma_f\, \mu(\sigma') = \gamma_f\, \mu(B),$

and dividing both sides by $\mu(B)$ yields (6). ∎

In words, Theorem 3.1 shows that the causality graph is a lopsidependency graph with respect to the measure $\mu$ with $b_f = \gamma_f$ for all $f \in F$. Thus, when designing an algorithm for an application of Theorem 2.1 using Theorem 3.1, we have to make sure that the induced causality graph coincides with the lopsidependency graph, and that the measure distortion induced when addressing flaw $f$ is sufficiently small so that the resulting charge $\gamma_f$ is bounded above by $b_f$.

4 Edge Coloring Multigraphs: Proof of Theorem 1.3

We follow the exposition of the proof of Kahn in [26]. The key to the proof of Theorem 1.3 is the following lemma.

Lemma 4.1.

For all $\epsilon > 0$, there exists $\Delta_\epsilon$ such that if $c := \chi^*(G) \ge \Delta_\epsilon$ then we can find at most $(1+\epsilon)\epsilon c$ matchings in $G$ whose deletion leaves a multigraph $G'$ with $\chi^*(G') \le (1-\epsilon)c$, in expected $\mathrm{poly}(n)$ time, with probability at least $1 - n^{-a}$, for any constant $a > 0$.

Using the algorithm of Lemma 4.1 recursively, for every $\epsilon > 0$ we can efficiently find an edge coloring of $G$ using at most $(1+\epsilon)\chi^*(G) + O_\epsilon(1)$ colors as follows. First, we compute $\chi^*(G)$ using the algorithm of Padberg and Rao. If $\chi^*(G) > \Delta_\epsilon$, then we apply Lemma 4.1 to get a multigraph $G'$ with $\chi^*(G') \le (1-\epsilon)\chi^*(G)$. We can now color $G'$ recursively using at most $(1+\epsilon)\chi^*(G') + O_\epsilon(1)$ colors. Using one extra color for each one of the matchings promised by Lemma 4.1, we can then complete the coloring of $G$, proving the claim. In the base case where $\chi^*(G) \le \Delta_\epsilon$, we color $G$ greedily using at most $2\Delta_\epsilon - 1$ colors. The fact that $\Delta_\epsilon$ depends only on $\epsilon$ concludes the proof of Theorem 1.3, since the number of recursive calls is at most $O(\log \chi^*(G))$.
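The recursion just described can be summarized by the following skeleton (ours; `fractional_chromatic_index` stands in for the Padberg-Rao algorithm [30], `peel_matchings` for the algorithm of Lemma 4.1 and `greedy_edge_coloring` for the base case, all assumed rather than implemented):

# Recursive (1 + eps)-approximate edge coloring, following the scheme above.
# fractional_chromatic_index, peel_matchings, greedy_edge_coloring are
# hypothetical helpers standing in for the cited subroutines.
def color_multigraph(G, eps, Delta_eps):
    chi_star = fractional_chromatic_index(G)        # Padberg-Rao [30]
    if chi_star <= Delta_eps:
        return greedy_edge_coloring(G)              # base case: O(1) colors
    matchings, G_prime = peel_matchings(G, eps)     # Lemma 4.1
    coloring = color_multigraph(G_prime, eps, Delta_eps)
    next_color = 1 + max(coloring.values(), default=-1)
    for M in matchings:                             # one fresh color per matching
        for e in M:
            coloring[e] = next_color
        next_color += 1
    return coloring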

4.1 The Algorithm

Observe that we only need to prove Lemma 4.1 for $\epsilon$ sufficiently small since, clearly, if it holds for $\epsilon$ then it holds for all $\epsilon' \ge \epsilon$. So we fix a sufficiently small $\epsilon > 0$ and let $c = \chi^*(G)$. Our goal will be to delete at most $N := (1+\epsilon)\epsilon c$ matchings from $G$ to get a multigraph $G'$ which has fractional chromatic index at most $(1-\epsilon)c$.

The flaws.

Let $\Omega$ be the set of possible $N$-tuples of matchings of $G$. For a state $\sigma = (M_1, \ldots, M_N) \in \Omega$, let $G_\sigma$ denote the multigraph induced by deleting the matchings $M_1, \ldots, M_N$ from $G$. For a vertex $v$, we define $d_\sigma(v)$ to be the degree of $v$ in $G_\sigma$. We now define the following flaws. For every vertex $v$, let

$f_v = \big\{\sigma \in \Omega : d_\sigma(v) > \big(1 - \epsilon - \tfrac{\epsilon^2}{2}\big)c\big\}.$

For every connected subgraph $H$ of $G$

with an odd number of vertices and $|V(H)| < 2/\epsilon^2$, let

$f_H = \Big\{\sigma \in \Omega : |E(G_\sigma[V(H)])| > (1-\epsilon)c \cdot \frac{|V(H)| - 1}{2}\Big\}.$
The following lemma states that it suffices to find a flawless state.

Lemma 4.2 ([18]).

Any flawless state $\sigma$ satisfies $\chi^*(G_\sigma) \le (1-\epsilon)c$.

Proof.

Edmonds' characterization [8] of the matching polytope implies that the fractional chromatic index of $G_\sigma$ is at most $(1-\epsilon)c$ if

  1. $\Delta(G_\sigma) \le (1-\epsilon)c$; and

  2. for every subgraph $H$ of $G_\sigma$ with an odd number of vertices: $|E(H)| \le (1-\epsilon)c \cdot \frac{|V(H)|-1}{2}$.

Now clearly, addressing every flaw of the form $f_v$ establishes condition 1. By summing degrees, this also implies that for every subgraph $H$ with an even number of vertices, $|E(H)| \le (1-\epsilon)c \cdot \frac{|V(H)|}{2}$.

Moreover, any odd subgraph can be split into a connected component with an odd number of vertices, and a subgraph with an even number of vertices. Thus, in the absence of flaws, it suffices to prove condition 2 for connected $H$. Again by summing degrees, we see that if no $f_v$ flaw is present, then condition 2 can fail only for $H$ with fewer than $2/\epsilon^2$ vertices, concluding the proof. ∎

To describe an efficient algorithm for finding flawless states we need to (i) determine the initial distribution $\theta$ of the algorithm and show that it is efficiently samplable; (ii) show how to address each flaw efficiently; (iii) show that the expected number of steps of the algorithm is polynomial; and finally (iv) show that we can search for flaws in polynomial time, so that each step is efficiently implementable.

The initial distribution.

Let $\delta = \delta(\epsilon) > 0$ be a sufficiently small constant and apply Theorem 2.9. Let $\pi$ be the promised hard-core probability distribution, $\vec{\lambda}$ the vector of activities associated with it, and $K = K(\delta)$ the corresponding constant. Note that the activities $\lambda_e$ defining $\pi$ are not readily available. However, the next lemma says that we can efficiently compute a set of activities that gives an arbitrarily good approximation to the desired distribution $\pi$.

Lemma 4.3.

For every $\xi > 0$, there exists a $\mathrm{poly}(n, \log \xi^{-1})$-time algorithm that computes a set of edge activities $\vec{\lambda}'$ such that the corresponding hard-core distribution $\pi'$ satisfies $\|\pi' - \pi\|_{\mathrm{TV}} \le \xi$.

Proof.

Lemma 4.3 is a straightforward corollary of the main results of Singh and Vishnoi [36] and Jerrum and Sinclair [15]. Briefly, the main result of [36] states that finding a distribution that approximates $\pi$ can be seen as the solution of a max-entropy distribution estimation problem, which can be efficiently solved given a "generalized counting oracle" for $\pi$. The latter is provided by [15]. ∎

For a parameter $\xi > 0$ and a distribution $\pi$, we say that we $\xi$-approximately sample from $\pi$ to express that we sample from a distribution $\pi'$ such that $\|\pi' - \pi\|_{\mathrm{TV}} \le \xi$. Set $\xi = n^{-\alpha}$, where $\alpha$ is a sufficiently large constant to be specified later, and let $\pi'$ be the distribution promised by Lemma 4.3. The initial distribution of our algorithm, $\theta$, is obtained by $\xi$-approximately sampling $N$ random matchings (independently) from $\pi'$. Observe that $\|\theta - \mu\|_{\mathrm{TV}} \le 2N\xi$, where $\mu$ denotes the probability distribution over $\Omega$ induced by taking $N$ independent samples from $\pi$.

Addressing flaws.

For an integer $r$ and a connected subgraph $H$ of $G$, let $B_r(H)$ be the set of vertices within distance strictly less than $r$ of a vertex of $H$.

We consider the procedure Resample below, which takes as input a connected subgraph $H$, a state $\sigma$ and a positive integer $r$, and which will be used to address flaws.

1:procedure Resample($H$, $\sigma = (M_1, \ldots, M_N)$, $r$)
2:     Let $B = B_r(H)$
3:     for $i = 1$ to $N$ do
4:         Let $M_i^{\mathrm{out}}$ be the set of edges of $M_i$ that do not belong to the multigraph induced by $B$
5:         Let $M_i^{\partial}$ be the set of edges of $M_i$ whose both endpoints are at distance exactly $r-1$ from $H$
6:         Let $V_i$ be the set of vertices of $B$ that belong to edges in $M_i^{\mathrm{out}} \cup M_i^{\partial}$
7:         Let $G_i$ be the multigraph induced by $B \setminus V_i$
8:         Let $\pi_i$ be the hard-core distribution induced by $\vec{\lambda}$ on the matchings of $G_i$.
9:         $\xi$-approximately sample a matching $M_i'$ from $\pi_i$
10:         Let $M_i \leftarrow M_i' \cup M_i^{\mathrm{out}} \cup M_i^{\partial}$ ▷ By definition, $M_i' \cup M_i^{\mathrm{out}} \cup M_i^{\partial}$ is a matching
11:     Output $\sigma' = (M_1, \ldots, M_N)$

Notice that Theorem 2.10 implies that procedure Resample terminates in $\mathrm{poly}(n)$ time.

Set $r_0$ to be a sufficiently large constant depending on $\epsilon$. To address $f_v, f_H$ at state $\sigma$, we invoke procedures Resample$(\{v\}, \sigma, r_0)$ and Resample$(H, \sigma, r_0)$, respectively.

Searching for flaws.

Notice that we can compute $\chi^*(G_\sigma)$ in polynomial time using the algorithm of Padberg and Rao [30]. Therefore, given a state $\sigma$ and a vertex $v$, we can search for flaws of the form $f_v$ in polynomial time. However, the flaws of the form $f_H$ are potentially exponentially many, so a brute-force search does not suffice for our purposes.

Fortunately, the result of Padberg and Rao essentially provides a polynomial time oracle for this problem as well. Recall Edmonds' characterization used in the proof of Lemma 4.2. The constraints over odd subgraphs $H$ are called matching constraints. Recall further that in the proof of Lemma 4.2 we showed that, in the absence of $f_v$-flaws, the only matching constraints that could possibly be violated correspond to $f_H$ flaws. On the other hand, the oracle of Padberg and Rao, given as input $k$ and a multigraph $G'$, can decide in polynomial time whether $G'$ has a fractional edge $k$-coloring or return a violated matching constraint. Hence, if our algorithm prioritizes $f_v$ flaws over $f_H$ flaws, this oracle can be used to detect the latter in polynomial time.
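Schematically, the flaw-search step looks as follows (our sketch; `padberg_rao_oracle` is assumed to implement the separation oracle of [30], returning a violated matching constraint or None, and the degree threshold matches our statement of the $f_v$ flaws above):

# Return a flaw present in the current state, prioritizing f_v flaws.
def find_flaw(G_sigma, c, eps, vertices, padberg_rao_oracle):
    for v in vertices:                              # f_v flaws: plain degree checks
        if G_sigma.degree(v) > (1 - eps - eps**2 / 2) * c:
            return ("vertex", v)
    # No f_v flaw present: by Lemma 4.2, any violated matching constraint
    # returned by the oracle corresponds to an f_H flaw.
    H = padberg_rao_oracle(G_sigma, (1 - eps) * c)
    return ("subgraph", H) if H is not None else None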

4.2 Proof of Lemma 4.1

We are left to show that the expected number of steps of the algorithm is polynomial and that each step can be executed in polynomial time. To that end, we will show that both of these statements are true assuming that the initial distribution is $\mu$ instead of approximately $\mu$, and that in Lines 8-9 of the procedure Resample we perfectly sample from the hard-core probability distribution induced by the activities $\vec{\lambda}$, instead of $\xi$-approximately sampling from it. Observe that, since we will prove that in this case the expected running time of the ideal algorithm is polynomial, we can maximally couple the approximate and ideal distributions, and then take the constant $\alpha$ in the definition of the approximation parameter $\xi$ to be sufficiently large. The latter implies that the probability that the coupling will fail during the execution of the algorithm is negligible (i.e., at most $1/\mathrm{poly}(n)$), establishing that the algorithm converges even if we use approximate distributions.

For an integer $r$ and a vertex $v$, let $\mathcal{N}_r(v)$ be the set of flaws indexed by a vertex of $B_r(\{v\})$ or a subgraph intersecting $B_r(\{v\})$. For each subgraph $H$ for which we have defined $B_r(H)$ we let $\mathcal{N}_r(H) = \bigcup_{v \in V(H)} \mathcal{N}_r(v)$. For each flaw $f_v$ we define the causality neighborhood $\Gamma(f_v) = \mathcal{N}_{2r_0}(v)$, and for each flaw $f_H$ we define $\Gamma(f_H) = \mathcal{N}_{2r_0}(H)$, where $r_0$ is as defined in the previous subsection. Notice that this is a valid choice because flaw $f_v$ can only cause flaws in $\mathcal{N}_{r_0}(v)$ and flaw $f_H$ can only cause flaws in $\mathcal{N}_{r_0}(H)$. The reason why we choose these neighborhoods to be larger than seemingly necessary is that, as we will see, with respect to this causality graph our algorithm is commutative, allowing us to apply Theorem 2.6.

Lemma 4.4.

Let $f$ denote either $f_v$ for a vertex $v$ or $f_H$ for a connected subgraph $H$ of $G$ with an odd number of vertices, and let $\gamma_f$ denote its charge. For every $\epsilon > 0$ there exists $\Delta_\epsilon$ such that if $c \ge \Delta_\epsilon$ then

  (a) $\gamma_f \le e^{-\beta \epsilon^2 c}$ for a constant $\beta = \beta(\epsilon) > 0$;

  (b) $\sum_{g \in \Gamma(f)} \sqrt{\gamma_g} \le \frac{1}{4}$,

where the charges are computed with respect to the measure $\mu$ and the algorithm that samples from the ideal distributions.

The proof of Lemma 4.4 can be found in Section 4.3. Lemma 4.5 establishes that our algorithm is commutative with respect to the causality relation induced by the neighborhoods $\Gamma(\cdot)$ defined above. Its proof can be found in Section 4.4.

Lemma 4.5.

For each pair of flaws $f \not\sim g$, the matrices $A_f$ and $A_g$ commute.

Setting $\psi_f = \sqrt{\gamma_f}$ for each flaw $f$, we see that condition (3) with $\eta = 1/2$ is implied by

$\max_{f \in F} \sqrt{\gamma_f} \prod_{g \in \Gamma(f)} \big(1 + \sqrt{\gamma_g}\big) \le \frac{1}{2},$ (8)

which is true for $\Delta_\epsilon$ large enough according to Lemma 4.4. Notice further that the causality graph induced by $\Gamma$ can be partitioned into $n$ cliques, one for each vertex of $G$, with potentially further edges between them. Indeed, flaws indexed by subgraphs that contain a certain vertex of $G$ form a clique in the causality graph. Combining Lemma 4.5 with the latter observation, we are able to apply Theorem 2.6, which implies that our algorithm terminates after an expected number of at most $\mathrm{poly}(n)$ steps. (This is because we assume that $\theta = \mu$ per our discussion above, so that $T_0 = 0$.)

This completes the proof of Lemma 4.1 and hence, as explained at the beginning of Section 4, Theorem 1.3 follows. It remains, however, to go back and prove Lemmas 4.4 and 4.5, which we do in the next two subsections.

4.3 Proof of Lemma 4.4

In this section we prove Lemma 4.4. Given a state $\sigma = (M_1, \ldots, M_N)$, a connected subgraph $H$ of $G$, and an integer $r$, let

$\sigma_r^H = \big(M_1^{\mathrm{out}} \cup M_1^{\partial}, \ldots, M_N^{\mathrm{out}} \cup M_N^{\partial}\big),$

where we define $M_i^{\mathrm{out}}, M_i^{\partial}$ as in procedure Resample$(H, \sigma, r)$. Moreover, let $\sigma_r^H(i)$ denote the $i$-th entry of $\sigma_r^H$. Finally, let $G_B$ be the multigraph induced by $B_r(H)$ and $\mathcal{M}_i$ be the set of matchings of $G_B$ that are compatible with $\sigma_r^H(i)$. That is, for any matching $M \in \mathcal{M}_i$ we have that $M \cup \sigma_r^H(i)$ is also a matching of $G$.

Remark 4.1.

Recall the definition of the multigraph $G_i$ in Line 7 of procedure Resample and observe that the set of matchings $\mathcal{M}_i$ is exactly the set of matchings of this multigraph. As we saw earlier, this implies that any hard-core distribution over $\mathcal{M}_i$ is efficiently samplable via the algorithm of [15]. We introduce this equivalent definition of $\mathcal{M}_i$ here because it will be convenient in defining events with respect to the probability space induced by $\pi$.

Proof of part (a).

We will need the following key lemma, which was essentially proved in [18]. Its proof can be found in Appendix A. Recall that $\mu$ is the distribution over $\Omega$ induced by taking $N$ independent samples from $\pi$.

Lemma 4.6.

For every $\epsilon > 0$ there exists $\Delta_\epsilon$ such that if $c \ge \Delta_\epsilon$ then for any random state $\sigma$ distributed according to $\mu$:

  1. for every flaw $f_v$ and state $\tau$: $\Pr\big[\sigma \in f_v \mid \sigma_{r_0}^{\{v\}} = \tau_{r_0}^{\{v\}}\big] \le e^{-\beta \epsilon^2 c}$, and

  2. for every flaw $f_H$ and state $\tau$: $\Pr\big[\sigma \in f_H \mid \sigma_{r_0}^{H} = \tau_{r_0}^{H}\big] \le e^{-\beta \epsilon^2 c}$.

We show the proof of part (a) of Lemma 4.4 only for the case of $f_v$ flaws, as the proof for $f_H$ flaws is very similar. Specifically, our goal will be to prove that

$\gamma_{f_v} \le \max_{\tau \in \Omega} \Pr_{\sigma \sim \mu}\big[\sigma \in f_v \mid \sigma_{r_0}^{\{v\}} = \tau_{r_0}^{\{v\}}\big].$ (9)

Lemma 4.6 then concludes the proof.

Let $x_\sigma \in \{0,1\}^N$ denote the vector such that $x_\sigma(i) = |M_i \cap E_v|$, where $E_v$ is the set of edges adjacent to $v$. Notice that $x_\sigma(i) \in \{0,1\}$ since $M_i$ is a matching. For a vector $x \in \{0,1\}^N$ define $\Omega_x = \{\sigma \in \Omega : x_\sigma = x\}$ and observe that $d_\sigma(v) = \deg_G(v) - \sum_{i=1}^N x_\sigma(i)$, so that membership of $\sigma$ in $f_v$ is determined by $x_\sigma$. Define the set $X = \{x \in \{0,1\}^N : \deg_G(v) - \sum_{i=1}^N x(i) > (1-\epsilon-\frac{\epsilon^2}{2})c\}$ and notice that the latter observation implies that $\sigma \in f_v$ iff $x_\sigma \in X$. (In other words, the sets $\Omega_x$, $x \in X$, induce a partition of $f_v$.) Hence, for a fixed state $\tau$ and a random sample $\sigma$ from $\mu$, we have

$\Pr\big[\sigma \in \Omega_x \mid \sigma_{r_0}^{\{v\}} = \tau_{r_0}^{\{v\}}\big] = \prod_{i=1}^N \Pr\big[|M_i \cap E_v| = x(i) \mid (M_i)_{r_0}^{\{v\}} = \tau_{r_0}^{\{v\}}(i)\big],$ (10)

since $\mu$ corresponds to $N$ independent samples from $\pi$. Recall that $\pi$ is associated with a set of activities $\vec{\lambda}$. Thus, for any vector $x \in X$, we obtain

$\Pr\big[|M_i \cap E_v| = x(i) \mid (M_i)_{r_0}^{\{v\}} = \tau_{r_0}^{\{v\}}(i)\big] = \frac{\sum_{M \in \mathcal{M}_i : |M \cap E_v| = x(i)} \lambda_M}{\sum_{M \in \mathcal{M}_i} \lambda_M},$ (11)

where recall that $\mathcal{M}_i$ denotes the set of matchings of $G_B$ that are compatible with $\tau_{r_0}^{\{v\}}(i)$. To get (11) we used the form of $\pi$ to cancel the contributions of the edges in $\tau_{r_0}^{\{v\}}(i)$.

We will use (10) and (11) to prove that, for $\sigma$ distributed according to $\mu$, and any state $\tau$,

$\sum_{\sigma' \in f_v} \mu(\sigma')\, \rho_{f_v}(\sigma', \tau) \le \mu(\tau) \Pr\big[\sigma \in f_v \mid \sigma_{r_0}^{\{v\}} = \tau_{r_0}^{\{v\}}\big].$ (12)

According to the definition of $\gamma_{f_v}$, maximizing (12) over $\tau$ yields (9) and completes the proof.

Fix $x \in X$. To compute the sum on the left-hand side of (12) we need to determine the set of states $\sigma' \in \Omega_x$ for which $\rho_{f_v}(\sigma', \tau) > 0$. To do this, recall that given as input a state $\sigma'$, procedure Resample$(\{v\}, \sigma', r_0)$ modifies one by one each matching of $\sigma'$, $i \in [N]$, "locally" around $v$. In particular, observe that the support of the distribution for updating the $i$-th matching is exactly the set $\mathcal{M}_i$ and, hence, it has to be that $(\sigma')_{r_0}^{\{v\}} = \tau_{r_0}^{\{v\}}$ for every $\sigma'$ and state $\tau$ with $\rho_{f_v}(\sigma', \tau) > 0$. This also implies that, for every such $\sigma'$,

$\rho_{f_v}(\sigma', \tau) = \prod_{i=1}^N \frac{\lambda_{T_i}}{\sum_{M \in \mathcal{M}_i} \lambda_M},$ (13)

where $T_i \in \mathcal{M}_i$ denotes the portion of the $i$-th matching of $\tau$ inside $G_B$. Recall now that we have assumed that the hard-core distribution in Lines 8-9 of Resample is induced by the ideal vector of activities $\vec{\lambda}$. In particular, we have