Local Computation Algorithms for the Lovász Local Lemma

09/21/2018 ∙ by Dimitris Achlioptas, et al.

We consider the task of designing Local Computation Algorithms (LCA) for applications of the Lovász Local Lemma (LLL). LCA is a class of sublinear algorithms proposed by Rubinfeld et al. that has received a lot of attention in recent years. The LLL is an existential, sufficient condition for a collection of sets to have non-empty intersection (in applications, often, each set comprises all objects having a certain property). The ground-breaking algorithm of Moser and Tardos made the LLL fully constructive, following earlier works by Beck and Alon giving algorithms under significantly stronger LLL-like conditions. LCAs under those stronger conditions were given in the paper of Rubinfeld et al. and later work by Alon et al., where it was asked if the Moser-Tardos algorithm can be used to design LCAs under the standard LLL condition. The main contribution of this paper is to answer this question affirmatively. In fact, our techniques yield LCAs for settings beyond the standard LLL condition.


1 Introduction

The Lovász Local Lemma (LLL) [12] is a powerful tool of probabilistic combinatorics for establishing the existence of objects satisfying certain properties (constraints). As a probability statement, it asserts that given a family of “bad” events, if each bad event is individually not very likely and, in addition, is independent of all but a small number of other bad events, then the probability of avoiding all bad events is strictly positive. Given a collection of constraints, one uses the LLL to prove the existence of an object satisfying all of them (a perfect object) by considering, for example, the uniform measure on all candidate objects and defining one bad event for each constraint (containing all candidate objects that violate the constraint). Making the LLL constructive was the subject of intensive research for over two decades, during which several constructive versions were developed [7, 5, 28, 11, 39], but always under conditions stronger than those of the LLL. In a breakthrough work [30, 31], Moser and Tardos made the LLL constructive for any product probability measure (over explicitly presented variables). Specifically, they proved that whenever the LLL condition holds, their Resample algorithm, which repeatedly selects any occurring bad event and resamples all its variables according to the measure, quickly converges to a perfect object.

In this paper we consider the task of designing Local Computation Algorithms (LCA) for applications of the LLL. This is a class of sublinear algorithms proposed by Rubinfeld et al. in [35] that has received a lot of attention in recent years [6, 15, 18, 23, 24, 25, 26, 34]. For an instance x, a local computation algorithm should answer in an online fashion, for any index i, the i-th bit of one of the possibly many solutions of x, so that the answers given are consistent with some specific solution of x. As an example, given a constraint satisfaction problem and a sequence of queries corresponding to variables of the problem, the algorithm should output a value assignment for each queried variable that agrees with some full assignment satisfying all constraints (assuming one exists).

The motivation behind the study of LCAs becomes apparent in the context of computations on massive data sets. In such a setting, inputs to and outputs from algorithms may be too large to handle within an acceptable amount of time. On the other hand, oftentimes only small portions of the output are required at any point in time by any specific user, in which case the use of a local computation algorithm is appropriate. We also note that LCAs can be seen as a generalization of several models, such as local algorithms [40], locally decodable codes [41] and local reconstruction algorithms, e.g., [20, 10, 4, 8, 36].

1.1 Related work in local computation algorithms

The original paper of Rubinfeld et al. [35] as well as the follow-up work of Alon et al. [6] provide LCAs for several problems, including applications of the LLL to k-SAT and hypergraph 2-coloring. The LCAs for LLL applications given in these works, though, are based on the earlier constructive versions of the LLL by Beck [7] and by Alon [5], thus requiring significantly stronger conditions than the (standard) LLL condition. Indeed, it was left as a major open question in [35] whether the Moser-Tardos algorithm can be used to design LCAs under the LLL condition.

1.2 Our contributions

Our main contribution is to make the LLL locally constructive, i.e., to give an LCA under the LLL condition. Our techniques actually yield an LCA under more general recent conditions for the success of stochastic local search algorithms [2, 1, 17] that go beyond the variable setting of Moser and Tardos. For simplicity of exposition, though, we focus our presentation on the variable setting of Moser and Tardos, as it captures the great majority of LLL applications, and discuss the more general settings later. That is, we focus on constraint satisfaction problems Φ = (V, C), where V is a set of variables and C is a set of constraints over these variables. Given a product measure μ over the variables, the LLL condition is said to be satisfied with ε-slack for the family of bad events induced by C, if the “badness” of each bad event is bounded by 1−ε (instead of 1, as in the standard condition). Given an instance Φ, we assume that each constraint entails at most k variables, and each variable is entailed by at most d constraints. Finally, a (t, s, δ)-LCA responds to each query in time t, using memory s, and makes no error with probability at least 1−δ. An informal version of our main result can thus be stated as follows.

Theorem 1.1 (Informal Statement).

If Φ satisfies the LLL conditions with ε-slack, then there exists a (t, s, δ)-LCA for Φ, for every t and δ satisfying a trade-off condition that relaxes as ε grows (made precise in Theorem 3.1).

Theorem 1.1 gives a trade-off between the running time (per query) and the probability of error, while establishing that both decrease with the slack in the LLL conditions. Moreover, as we will see, if we know beforehand that the total number of queries to our algorithm will be polylogarithmic, then the condition of Theorem 1.1 can be significantly improved. Using our general results we design LCAs for the following problems, chosen to highlight different features of our results. As we will see formally in Section 2.2, our results apply to constraint satisfaction problems of large size, i.e., we assume that the number of variables is sufficiently large. This mild assumption is essentially inherent in the model of local computation algorithms.

1.2.1 k-SAT

Gebauer, Szabó and Tardos [14] used the LLL to prove that any k-CNF formula where every variable appears in at most d clauses is satisfiable if d ≤ 2^{k+1}/(e(k+1)) and, moreover, that this is asymptotically tight in k. We show the following.

Theorem 1.2.

Let F be a k-CNF formula on n variables with m clauses where every variable appears in at most d clauses.

  1. Suppose that , for some . Then, for every such that , there exists a -LCA for that answers up to queries.

  2. Suppose that , for some . For every , there exists a -LCA for that answers up to queries.

For comparison, the work of Rubinfeld et al. [35] gave an LCA for k-CNF formulas only under conditions relating k and d that are significantly stronger than those of Theorem 1.2.

Notably, the LCA of [35] is logarithmic in time and space [6]. Unfortunately, the techniques of [6] that allow for space-efficient local algorithms are tailored to the LLL-algorithm of Alon [5] and do not appear to be compatible with our results.

1.2.2 Coloring Graphs

In graph vertex coloring one is given a graph G = (V, E) and the goal is to find a mapping of V to a set of colors so that no edge in E is monochromatic. The chromatic number, χ(G), of G is the smallest number of colors for which this is possible. Trivially, if the maximum degree of G is Δ, then χ(G) ≤ Δ + 1. Molloy and Reed [27] proved that this can be significantly improved for graphs where the neighborhood of every vertex is bounded away from being a clique.

Theorem 1.3 ([27]).

There exists Δ₀ such that if G has maximum degree Δ ≥ Δ₀ and the neighborhood of every vertex of G contains at most (1−B)·Δ(Δ−1)/2 edges, where B ∈ (0, 1), then χ(G) ≤ (1 − B/e⁶)Δ.

Theorem 1.3 is a sophisticated application of the LLL. Our results imply local algorithms for finding the colorings promised by Theorem 1.3 that exhibit no trade-off between speed and accuracy, in the sense that for large enough Δ both constants c₁, c₂ below can be made arbitrarily small.

Theorem 1.4.

Let G be any graph on n vertices, with m edges and maximum degree Δ, satisfying the conditions of Theorem 1.3. Then there exist constants c₁, c₂ > 0, depending only on Δ, and an (n^{c₁}, n^{c₁}, c₂)-local algorithm for coloring G.

1.2.3 Non-Uniform Hypergraph Coloring

Our results can also handle applications of the LLL in non-uniform settings, i.e., where the probabilities of bad events may vary significantly. For example, it is known that a hypergraph of sufficiently large minimum edge size, in which every vertex lies in at most d_k edges of size k, is 2-colorable if the d_k are appropriately bounded (see Theorem 19.2 in [29]).

Using our main theorem we can design a local algorithm for this problem when the number of queries is polylogarithmic. (Our main result, as well as extensions of the techniques in [35], can be applied to give local algorithms with no restriction on the number of queries, but under significantly stronger assumptions on the d_k.)

Theorem 1.5.

Fix one constant arbitrarily small and another arbitrarily large. Let H be the set of hypergraphs of sufficiently large minimum edge size in which each vertex lies in at most d_k edges of size k, where the d_k satisfy

(1)

For every q polylogarithmic in the number of vertices, there exists a (t, s, δ)-LCA for 2-coloring hypergraphs in H that answers up to q queries.

2 Background

2.1 The Lovász Local Lemma

To prove that a set of objects Ω contains at least one element satisfying a collection of constraints, we introduce a probability measure μ on Ω, thus turning the set of objects violating each constraint into a bad event.

General LLL.

Let (Ω, μ) be a probability space and A = {A₁, A₂, …, A_m} be a set of (bad) events. For each i ∈ [m], let Γ(i) ⊆ [m] be such that A_i is mutually independent of the events {A_j : j ∉ Γ(i) ∪ {i}}. If there exist positive real numbers {ψ_i}_{i∈[m]} such that for all i ∈ [m],

$$\frac{\mu(A_i)}{\psi_i} \prod_{j \in \Gamma(i) \cup \{i\}} (1+\psi_j) \;\le\; 1, \qquad (2)$$

then the probability that none of the events in A occurs is at least $\prod_{i \in [m]} \frac{1}{1+\psi_i} > 0$.

Remark 2.1.

Condition (2) above is equivalent to the more well-known form μ(A_i) ≤ x_i ∏_{j∈Γ(i)} (1−x_j), where x_i = ψ_i/(1+ψ_i). As we will see, formulation (2) facilitates refinements.
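For completeness, the substitution behind Remark 2.1 is the following one-line calculation, with x_i = ψ_i/(1+ψ_i) as above:

$$\mu(A_i) \;\le\; x_i \prod_{j \in \Gamma(i)} (1-x_j) \;=\; \frac{\psi_i}{1+\psi_i} \prod_{j \in \Gamma(i)} \frac{1}{1+\psi_j} \;=\; \frac{\psi_i}{\prod_{j \in \Gamma(i) \cup \{i\}} (1+\psi_j)},$$

which is exactly (2) after rearranging.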

Definition 2.1.

We say that the general LLL condition holds with ε-slack if the left-hand side of (2) is bounded by 1−ε for every i ∈ [m].

Let D be the digraph over the vertex set [m] having an arc from each i ∈ [m] to each element of Γ(i). We call such a graph a dependency graph. Therefore, at a high level, the LLL states that if there exists a sparse dependency graph and each bad event is not too likely, then we can avoid all bad events with positive probability.

2.2 Local Computation Algorithms

Definition 2.2.

For any input x, let F(x) ⊆ {0,1}* denote the set of feasible solutions for x. The search problem, given x, is to find any y ∈ F(x). We use n to denote the length of the input x.

Our definition of LCA algorithms is almost identical to the one of [35], the only difference being that it also takes as a parameter the number of queries to the algorithm.

Local Algorithms.

Let F, x and n be as in Definition 2.2. A (q, t, s, δ)-local computation algorithm A is a (randomized) algorithm which satisfies the following: A receives a sequence of up to q queries i₁, …, i_q one by one; upon receiving each query i_j it produces an output y_{i_j}; with probability at least 1−δ, there exists y ∈ F(x) such that y_{i_j} is the i_j-th bit of y for every j ∈ [q]. A has access to a random tape and local computation memory on which it can perform computations, as well as store and retrieve information from previous computations. We assume that the input x, the local computation tape and any random bits used are all presented in the RAM word model, i.e., A is given the ability to access a word of any of these in one step. The running time of A on any query is at most t = t(n), which is sublinear in n, and the local computation memory of A is at most s = s(n). Unless stated otherwise, we always assume that the error parameter δ is at most some constant, say, 1/3. We say that A is a strongly local computation algorithm if both t and s are upper bounded by logᶜ n for some constant c.
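To make the contract concrete, here is a minimal Python sketch of the interface such an algorithm exposes; the class and method names are ours, not from [35], and a real implementation would fill in query():

```python
import random

class LocalComputationAlgorithm:
    """Minimal sketch of the (q, t, s, delta)-LCA contract: each call to
    query(i) must return bit i of some solution y, and all answers must be
    consistent with a single y with probability at least 1 - delta over
    the random tape.  Illustrative only."""

    def __init__(self, instance, seed):
        self.instance = instance
        self.rng = random.Random(seed)  # the shared random tape
        self.memory = {}                # local computation memory (size <= s)

    def query(self, i):
        """Return the i-th bit of the (implicitly fixed) solution y,
        computing for at most t time and touching at most s words."""
        raise NotImplementedError
```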

As we have already mentioned, in this paper we will be interested in local computation algorithms for constraint satisfaction problems Φ = (V, C), where V is a set of variables and C is a set of constraints over these variables. To simplify the statement of our results, whenever we say that there exists a (q, t, s, δ)-local computation algorithm for Φ we mean that there exist n₀ and an algorithm A such that A is a (q, t, s, δ)-local computation algorithm when the input is restricted to instances of Φ with at least n₀ variables. In other words, our results apply to constraint satisfaction problems of large size.

3 Statement of Results

For simplicity, we will present our results and techniques for the general LLL in the variable setting, i.e., the setting considered by Moser and Tardos [31]. In Section 7 we discuss how our techniques can be adapted to capture improved LLL criteria and generalized to settings beyond the one of [31].

The Setting.

Let V = {v₁, …, v_n} be a set of variables with domains D₁, …, D_n. We define Ω := D₁ × ⋯ × D_n to be the set of possible value assignments for the variables of V, and we sometimes refer to its elements as states. We also consider a set of constraints C = {c₁, …, c_m}. Each constraint c_j is associated with a set of variables var(c_j) ⊆ V and corresponds to a set of forbidden value assignments for these variables, i.e., assignments that violate the constraint.

We consider an arbitrary product measure μ over the variables of V, along with the family of bad events A = {A_j}_{j∈[m]}, where A_j corresponds to the states in Ω that violate c_j. The dependency graph related to Φ is the graph G with vertex set [m] and an edge between i and j whenever var(c_i) ∩ var(c_j) ≠ ∅. (Notice that since this dependence relationship is always symmetric, we have a graph instead of a digraph.) The neighborhood of an event A_j is defined as Γ(j) := {i ≠ j : var(c_i) ∩ var(c_j) ≠ ∅}, and notice that A_j is mutually independent of {A_i : i ∉ Γ(j) ∪ {j}}. Finally, for i, j ∈ [m] we denote by dist(i, j) the length of a shortest path between i and j in G.

Assumptions.

We will make computational assumptions similar to [35]. For a variable v, we let C(v) denote the set of constraints that contain v and define d := max_{v∈V} |C(v)|. We further define an incidence matrix M such that, for any variable v and constraint c, M[v, c] = 1 if c ∈ C(v) and M[v, c] = 0, otherwise. The input constraint satisfaction problem will be represented by its variable-constraint incidence matrix M. Let k denote the maximum number of variables associated with a constraint. We will also assume that kd is at most polylogarithmic in n, which means that matrix M is necessarily very sparse. Therefore, we also assume that the matrix M is implemented via linked lists for each row (i.e., variable v) and each column (i.e., constraint c), so that the non-zero entries of any row or column can be retrieved in time linear in their number. (In most applications kd = O(1).) We can now state our main result precisely.
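A possible realization of this representation in Python (a sketch; the field and method names are ours) stores the row and column lists directly, which is also how the dependency-graph neighborhoods used later are enumerated:

```python
class CSP:
    """Sparse variable-constraint incidence structure: row lists
    (constraints of each variable) and column lists (variables of each
    constraint), as assumed above."""

    def __init__(self, n_vars, constraint_vars):
        # constraint_vars[j] is the tuple var(c_j).
        self.cons_vars = list(constraint_vars)       # column lists
        self.var_cons = [[] for _ in range(n_vars)]  # row lists
        for j, vs in enumerate(constraint_vars):
            for v in vs:
                self.var_cons[v].append(j)

    def dependency_neighbors(self, j):
        """Gamma(j): constraints sharing a variable with c_j, found in
        O(kd) time by walking the two list levels, without ever
        scanning all of C."""
        seen = set()
        for v in self.cons_vars[j]:
            seen.update(self.var_cons[v])
        seen.discard(j)
        return seen
```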

Theorem 3.1.

Assume that Φ satisfies the Lovász Local Lemma conditions with ε-slack and define ψ_max := max_{j∈[m]} ψ_j. Let q, t, δ be such that the interval in (9) below is non-empty. Then there exists a (q, t, O(t), δ)-local computation algorithm for Φ.

Remark 3.1.

If the number of queries q is polylogarithmic in n, the probability of error δ is inverse polylogarithmic in n, and the LLL conditions hold with ε-slack for some fixed constant ε > 0, then for any arbitrarily small constant θ > 0 there exists an LCA that takes n^θ time per query (for all sufficiently large n).

4 Our Algorithm

In this section we describe our algorithm formally as well as the main idea behind its analysis.

To describe our algorithm, we first recall the algorithm of Moser and Tardos, as well as a couple of useful facts about its performance.

1:procedure RESAMPLE(Φ)
2:     Sample all variables in V according to μ
3:     while violated constraints exist do
4:         Pick an arbitrary violated constraint c
5:         (Re)sample every variable in var(c) according to μ

Notice that the most expensive operation of the Moser-Tardos algorithm is searching for constraints which are currently violated. In [38], a simple optimization is suggested to reduce this cost, which will be helpful to us as well. The idea is to keep a stack which, at every step, contains all the currently violated constraints. To do that, initially, we go over all the constraints and add the violated ones to the stack. Then, each time we resample a constraint c, in order to update the stack, we are only required to check the constraints that share variables with c to determine whether they became violated, in which case we add them to the stack. The main benefit of maintaining this data structure is that we avoid going over the whole set of constraints at each step. In particular, using this method, we only have to put a polylogarithmic (in n) amount of work after each resampling. This method is usually referred to as Depth-First MT.
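A minimal Python sketch of Depth-First MT, under the assumption that a sub-problem exposes its constraints, each constraint can report whether it is violated, and mu_sample(v) draws a fresh value of v from μ (all names are ours):

```python
def depth_first_mt(subproblem, assignment, mu_sample, max_steps):
    """Run at most `max_steps` resamplings on `subproblem`, starting
    from the complete `assignment`; returns True iff every constraint
    of the sub-problem ends up satisfied."""
    # Initialize the stack with all currently violated constraints.
    stack = [c for c in subproblem.constraints if c.violated(assignment)]
    steps = 0
    while stack:
        c = stack.pop()
        if not c.violated(assignment):
            continue          # fixed as a side effect of an earlier resampling
        if steps == max_steps:
            return False      # budget exhausted: the caller should Abort
        for v in c.variables: # resample every variable of c according to mu
            assignment[v] = mu_sample(v)
        steps += 1
        # Only constraints sharing a variable with c may have just become
        # violated, so the update costs O(kd) checks, not a pass over C.
        for c2 in subproblem.constraints_sharing(c):
            if c2.violated(assignment):
                stack.append(c2)
        if c.violated(assignment):
            stack.append(c)   # c itself may still be violated
    return True
```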

In the following, when we say “apply the Depth-First MT algorithm for at most T steps”, we mean that we apply the Resample algorithm above for at most T steps, without performing the initial sampling of the variables (all relevant variables will have been assigned values by other means).

For j ∈ [m] and r ∈ ℕ, let B_r(j) be the elements of [m] whose distance to j in G is at most r. Furthermore, for a variable v we denote by Φ_r(v) the sub-problem of Φ induced by the constraints in ⋃_{c∈C(v)} B_r(c) and the variables they contain. Notice that if Φ satisfies the LLL conditions, then Φ_r(v) does as well, for any v and r. We are now ready to describe our meta-algorithm, which takes as input q, t, δ and ε, i.e., the number of queries, the desired upper bounds on the running time per query, the probability of error, and the slack, respectively. For the sake of brevity, we slightly abuse notation and for i ∈ [q] denote by v_i the variable of the i-th query.

1:procedure Respond to Queries(q, t, δ, ε)
2:     Compute the radius r and the step budget T dictated by q, t, δ, ε (cf. (9))
3:     S ← ∅    ▹ variables that have been assigned values so far
4:     for i = 1 to q do
5:         Sample each variable in var(Φ_r(v_i)) ∖ S according to μ, where v_i is the i-th query
6:         S ← S ∪ var(Φ_r(v_i))
7:         Apply the Depth-First MT algorithm to Φ_r(v_i) for at most T steps
8:         if a satisfying assignment for Φ_r(v_i) is found then
9:              Output the value of v_i
10:        else
11:             Abort

The main idea behind our algorithm comes from the following property of the Moser-Tardos algorithm. Assume that in an execution of the Moser-Tardos algorithm, in the current step, every constraint in a ball of radius r around variable v is satisfied. We prove that the probability that the algorithm will have to resample v in a later step drops exponentially fast with r. In other words, for large enough r, the current value of v is a good guess for the value of v in the final output. To exploit this fact, we use that in the Moser-Tardos algorithm the strategy for choosing which violated constraint to resample can be arbitrary, so that we get an LCA as follows: upon receiving query (variable) v, our algorithm tries to create a large ball of satisfied constraints around v, by executing the Moser-Tardos algorithm with a strategy prioritizing the constraints in the ball. Naturally, then, the radius of the ball governs the trade-off between speed and accuracy.
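Putting the pieces together, a sketch of the meta-algorithm (using depth_first_mt from above and the hypothetical helper ball_subproblem, a BFS sketched in Section 5.1):

```python
def respond_to_queries(queries, r, max_steps, problem, mu_sample):
    """Answer queries one by one; the values committed for earlier
    queries are never resampled again by this procedure."""
    assignment = {}   # values assigned so far, shared across queries
    answers = []
    for v in queries:
        sub = ball_subproblem(problem, v, r)
        # Line 5: lazily sample any ball variable not assigned earlier,
        # so the run is a prefix of a complete MT execution.
        for u in sub.variables:
            if u not in assignment:
                assignment[u] = mu_sample(u)
        if depth_first_mt(sub, assignment, mu_sample, max_steps):
            answers.append(assignment[v])   # Line 9
        else:
            raise RuntimeError("Abort")     # Line 11
    return answers
```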

5 Proof of Theorem 3.1

In this section we present the proof of Theorem 3.1. Clearly, the local computation memory required by our algorithm is bounded by its running time, which on any query is at most t. Thus, we will focus on bounding the probability that our algorithm makes an error.

Observe that Line 5 allows us to see the execution of our algorithm as a prefix of a complete execution of the Moser-Tardos algorithm from a random initial state. The probability that our algorithm makes an error is bounded by the sum of (i) the probability that our algorithm ever aborts in Line 11; (ii) the probability that the complete execution of the Moser-Tardos algorithm resamples a (queried) variable after our algorithm has returned its response for it. We start by bounding the former, since it’s a more straightforward task.

5.1 Bounding the Running Time as a Function of the Radius

To bound the probability that our algorithm aborts in Line 11 we will use Theorem 5.1 below, a direct corollary of the main result in [1], which bounds the running time of the Depth-First MT algorithm started from an arbitrary initial state.

Theorem 5.1.

If the LLL conditions hold with ε-slack, then there exists an explicitly computable quantity T₀ (polynomial in the size of the instance and in 1/ε) such that, for every s ≥ 0, the probability that the MT algorithm starting at an arbitrary initial state has not terminated after T₀ + s steps is at most (1−ε)^s.

There are two reasons why we need to use Theorem 5.1 instead of the original running time bound of Moser and Tardos [31]. The first, and most important, one is that the original bound assumes that the initial state of the algorithm is selected according to the product measure μ. However, when we run the MT algorithm in response to a query for variable v, some of the variables of Φ_r(v) may have been resampled multiple times in earlier executions of the for loop and, thus, be correlated with each other. The second reason is that Theorem 5.1 exploits the slack in the LLL conditions to ensure that the algorithm terminates fast with high probability and not just in expectation.

We are now ready to give a tail bound for the running time of our algorithm on a single query, as a function of the radius r. Recall that each constraint contains at most k variables, that each variable is contained in at most d constraints, and that k, d are at most polylogarithmic in n. We use Õ(·) notation to hide poly-logarithmic factors in n.

Lemma 5.2.

Let T₀ be as in Theorem 5.1, applied to Φ_r(v). For every s ≥ 0, Step 7 takes more than Õ((kd)^{r+1} + T₀ + s) time with probability at most (1−ε)^s.

Proof.

Let us first derive an upper bound on the number of constraints (and of variables) of Φ_r(v). Since the maximum degree of the dependency graph is at most kd and the subgraph that maximizes the number of constraints inside a ball of radius r is the full kd-ary tree of depth r, we see that the number of constraints of Φ_r(v) is Õ((kd)^r), since k, d are at most poly-logarithmic. Thus, we can assume that the size of every sub-problem our algorithm considers is Õ((kd)^r).

The running time of our algorithm on query v consists of the time to compute the sub-problem Φ_r(v) and the time to then apply Depth-First MT to it. By “computing the sub-problem Φ_r(v)” we mean creating an incidence matrix that corresponds to the subgraph of the dependency graph associated with Φ_r(v), represented similarly to M via linked lists. To perform this task we can do a Breadth-First Search, for depth r, starting from each node c such that v ∈ var(c). This takes Õ((kd)^r) time, since we can find the neighbors of a constraint in the dependency graph in poly-logarithmic time and the subgraph of the dependency graph that corresponds to Φ_r(v) has at most Õ((kd)^r) edges.
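In code, the BFS is the obvious one; a sketch reusing the sparse representation of Section 3 (the methods constraints_of, variables_of and induced_subproblem are assumptions standing for the row/column lists of M):

```python
from collections import deque

def ball_subproblem(problem, v, r):
    """Collect all constraints within distance r (in the dependency
    graph) of the constraints containing variable v."""
    dist = {c: 0 for c in problem.constraints_of(v)}
    queue = deque(dist)
    while queue:
        c = queue.popleft()
        if dist[c] == r:
            continue  # do not expand past the ball's boundary
        for u in problem.variables_of(c):         # at most k variables
            for c2 in problem.constraints_of(u):  # at most d constraints each
                if c2 not in dist:
                    dist[c2] = dist[c] + 1
                    queue.append(c2)
    return problem.induced_subproblem(set(dist))
```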

For the application of Depth-First MT to Φ_r(v), Theorem 5.1 asserts that the probability that a satisfying assignment is not found after T₀ + s resamplings is at most (1−ε)^s. Recalling that the number of constraints of Φ_r(v) is Õ((kd)^r), that the amount of work per resampling is poly-logarithmic, and adding the bound above for formulating each subproblem, concludes the proof. ∎

5.2 Bounding the Probability of Revising a Variable as a Function of the Radius

To bound the probability of error of our algorithm we first need to recall a key element of the analysis of [31].

5.2.1 Witness Trees

We denote by (σ₀, w₁, σ₁, w₂, σ₂, …) the random variable that equals the trajectory of an execution of the Moser-Tardos algorithm, where, for each t ≥ 1, σ_t denotes the t-th state of the trajectory and w_t the index of the bad event resampled at the t-th step. We also call the random variable W = (w₁, w₂, …) the witness sequence of the trajectory.

We first recall the definition of witness trees from [31], while slightly reformulating to fit our setting. A witness tree τ is a finite rooted, unordered tree along with a labelling of its vertices with indices of bad events, such that the children of a vertex labelled j receive labels from Γ(j) ∪ {j}. To lighten notation, we will sometimes write [v] to denote the label of a vertex v, and A_{[v]} for the corresponding bad event. Given a witness sequence W we associate with each step t a witness tree τ(t) constructed in steps as follows: let τ_t(t) be an isolated vertex labelled by w_t; then, going backwards, for each s = t−1, …, 1: if there is a vertex v in τ_{s+1}(t) such that w_s ∈ Γ([v]) ∪ {[v]}, then among those vertices we choose the one having maximum distance from the root (breaking ties arbitrarily) and attach to it a new child vertex that we label w_s to get τ_s(t). If there is no such vertex then τ_s(t) = τ_{s+1}(t). Finally, τ(t) := τ₁(t).
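The backward construction is easy to mechanize; a sketch (gamma[j] stands for Γ(j) ∪ {j}, and the returned dictionaries are our own choice of output format):

```python
def witness_tree(witness_sequence, t, gamma):
    """Build tau(t) from the witness sequence (w_1, ..., w_T), here
    0-indexed.  Returns label, parent and depth maps keyed by vertex id."""
    root = 0
    label = {root: witness_sequence[t]}
    parent = {root: None}
    depth = {root: 0}
    next_id = 1
    for s in range(t - 1, -1, -1):
        w = witness_sequence[s]
        # Vertices whose label has w in its inclusive neighborhood.
        candidates = [u for u in label if w in gamma[label[u]]]
        if candidates:
            # Attach below a candidate of maximum distance from the root.
            u = max(candidates, key=lambda x: depth[x])
            label[next_id] = w
            parent[next_id] = u
            depth[next_id] = depth[u] + 1
            next_id += 1
    return label, parent, depth
```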

We will say that a witness tree τ occurs in a trajectory with witness sequence W if there is t such that τ(t) = τ. Finally, we use the notation Pr_MT to refer to the probability of events in the probability space induced by the execution of the Moser-Tardos algorithm.

Lemma 5.3 (The witness tree lemma [31]).

For every witness tree τ, Pr_MT[τ occurs] ≤ ∏_{v∈τ} μ(A_{[v]}).

5.2.2 The Analysis

Let Q_i be the event that the complete execution of the Moser-Tardos algorithm resamples the query variable v_i after the time, t_i, at which our algorithm returned a response for it. Let c be a constraint that contains v_i and let Q_i(c) denote the event that constraint c is resampled after t_i. Clearly, Q_i ⊆ ⋃_{c∈C(v_i)} Q_i(c). The key insight is that in order for Q_i(c) to occur, at least r constraints that form a path in the dependency graph ending in c must have been resampled after t_i. This is because, by the nature of our algorithm, right after step t_i, every constraint of Φ_r(v_i) is satisfied. This implies that the (first) resampling of the bad event that corresponds to Q_i(c) occurring will be associated with a witness tree of size at least r. Thus, if r is large, Q_i(c) is unlikely. Lemma 5.4 makes this idea rigorous.

Lemma 5.4.

Let W_r(j) denote the set of all witness trees of size at least r whose root is labelled by j. Then,

$$\sum_{\tau \in W_r(j)} \Pr_{\mathrm{MT}}[\tau \text{ occurs}] \;\le\; \psi_j \, (1-\epsilon)^r.$$

We prove Lemma 5.4 in Subsection 5.2.3. Using it, we can show the following.

Lemma 5.5.

Let ψ_max := max_{j∈[m]} ψ_j. If r is large enough that

$$q \, d \, \psi_{\max} \, (1-\epsilon)^r \;\le\; \delta/2,$$

then the probability that our algorithm answers at least one query incorrectly is at most δ/2.

Proof.

Combining Lemma 5.4 with our observation regarding the minimum size of witness trees related to the event Q_i(c), we obtain

$$\Pr[Q_i] \;\le\; \sum_{c \in C(v_i)} \Pr[Q_i(c)] \;\le\; \sum_{c \in C(v_i)} \psi_c \, (1-\epsilon)^r \;\le\; d \, \psi_{\max} \, (1-\epsilon)^r.$$

Thus, applying the union bound over the q queries, we obtain

$$\Pr[\exists i \in [q] : Q_i] \;\le\; q \, d \, \psi_{\max} \, (1-\epsilon)^r \;\le\; \delta/2. \qquad (3)$$
∎

5.2.3 Proof of Lemma 5.4

A typical argument used in the algorithmic LLL literature to estimate sums over sets of witness trees, such as the sum in the statement of Lemma 5.4, is to consider a Galton-Watson branching process that produces each witness tree in the set of interest (and perhaps other trees) with positive probability. The idea is then to relate the probability of the branching process generating each tree to the probability that the tree occurs in the algorithm, and to exploit the fact that the sum of the probabilities in the process is, by definition, bounded by 1.

Lemma 5.6 ([31]).

Let W(j) denote the set of witness trees whose root is labeled by j. There exists a branching process that outputs each witness tree τ ∈ W(j) with probability

$$p_\tau \;=\; \frac{1}{\psi_j} \prod_{v \in \tau} \frac{\psi_{[v]}}{\prod_{u \in \Gamma([v]) \cup \{[v]\}} (1+\psi_u)}.$$

Observe that since ∑_{τ∈W(j)} p_τ ≤ 1, Lemma 5.6 implies that

$$\sum_{\tau \in W(j)} \prod_{v \in \tau} \frac{\psi_{[v]}}{\prod_{u \in \Gamma([v]) \cup \{[v]\}} (1+\psi_u)} \;\le\; \psi_j. \qquad (4)$$

Lemma 5.3 implies (5) below, the fact that the LLL conditions hold with ε-slack implies (6), the fact that every witness tree in W_r(j) has size at least r implies (7), while inequality (4), finally, implies (8):

$$\sum_{\tau \in W_r(j)} \Pr_{\mathrm{MT}}[\tau \text{ occurs}] \;\le\; \sum_{\tau \in W_r(j)} \prod_{v \in \tau} \mu(A_{[v]}) \qquad (5)$$
$$\le\; \sum_{\tau \in W_r(j)} (1-\epsilon)^{|\tau|} \prod_{v \in \tau} \frac{\psi_{[v]}}{\prod_{u \in \Gamma([v]) \cup \{[v]\}} (1+\psi_u)} \qquad (6)$$
$$\le\; (1-\epsilon)^r \sum_{\tau \in W(j)} \prod_{v \in \tau} \frac{\psi_{[v]}}{\prod_{u \in \Gamma([v]) \cup \{[v]\}} (1+\psi_u)} \qquad (7)$$
$$\le\; (1-\epsilon)^r \, \psi_j. \qquad (8)$$

5.3 Concluding the Proof

Recall that q is the number of queries, that δ is the allowed probability of error, and that t denotes the required upper bound on the running time of our algorithm on a single query. Lemma 5.2 and Lemma 5.5 imply that there exists a choice of the radius r such that if

(9)

where T is the step budget of Line 7, then the probability that the algorithm aborts in Line 11 or responds inaccurately on some query is at most δ.

It is not hard to see that if q, t and δ are as in the statement of Theorem 3.1, then the interval in (9) is non-empty for large enough n, concluding the proof of Theorem 3.1. The proof of Remark 3.1 is very similar.

6 Proofs for our Applications

In this section we prove Theorems 1.2, 1.4 and 1.5.

6.1 Proof of Theorem 1.2

We first briefly recall the application of the LLL in [14]. For each variable v, let d_v ≤ d denote the number of clauses in which it occurs, and let γ_v d_v of these occurrences be positive, for some γ_v ∈ [0, 1]. In [14] it is shown that if d ≤ 2^{k+1}/(e(k+1)), then one can choose a product measure μ over the variables of F, setting each variable v to true with a probability determined by γ_v, together with values ψ_c for the clauses, so that the LLL conditions are satisfied.
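For a feel of the threshold, a quick tabulation of the [14] bound (a sketch; the slack bookkeeping of (10)-(11) is not reproduced here):

```python
import math

def gst_degree_bound(k):
    """Largest clause-degree d for which [14] guarantees that a k-CNF
    formula is satisfiable: d <= 2^(k+1) / (e (k+1))."""
    return 2 ** (k + 1) / (math.e * (k + 1))

for k in (3, 5, 10, 20):
    print(k, int(gst_degree_bound(k)))  # e.g., k=10 allows d up to 68
```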

To establish part (a) of Theorem 1.2, recall the definitions of ε and ψ in Theorem 3.1 and notice that the choice of parameters in [14] implies that

(10)

Thus, in order to meet the requirement of Theorem 3.1 that the LLL conditions hold with an ε-slack, it is enough that

(11)

Choosing the parameters in (11) appropriately, we get the condition of part (a) of Theorem 1.2, concluding the proof. Part (b) of Theorem 1.2 is a straightforward application of Theorem 3.1 and Remark 3.1.

6.2 Proof of Theorem 1.4

We will need to briefly recall the key ideas in the analysis of the algorithm of [27].

In the first phase, the algorithm operates on the set Ω of complete, but not necessarily proper, colorings of G with at most K colors, where K is the number of colors promised by Theorem 1.3. For a vertex v and a state σ ∈ Ω, say that a color is stable if it is assigned to at least two non-adjacent neighbors of v and, moreover, no neighbor of v with that color belongs in a monochromatic edge in σ. Let S_v = S_v(σ) be the number of stable colors for v at σ. For each vertex v, define a bad event B_v expressing that S_v is too small, with respect to the probability space (Ω, μ), where μ is the uniform measure over Ω. A coloring σ that avoids all bad events can be efficiently transformed into a proper coloring of G. To see this, consider the partial proper coloring that results by uncoloring every vertex in σ that belongs in a monochromatic edge. Since σ avoided all bad events, in the neighborhood of every uncolored vertex of σ sufficiently many colors appear at least twice. Therefore, in σ, for every vertex v both of the following hold: (i) v has few uncolored neighbors, and (ii) the number of colors available to v, i.e., colors that do not appear in v’s neighborhood, exceeds the number of its uncolored neighbors. Thus, the graph induced by the uncolored vertices can be colored with available colors using the greedy heuristic.

To prove that we can efficiently find a coloring that avoids all bad events, we use the following two lemmas from the analysis of [27] (slightly modified to fit our needs). Below, both the expectation and the probability are with respect to μ.

Lemma 6.1 ([27]).

.

Lemma 6.2 ([27]).

.

Lemmata 6.1 and 6.2 imply that if Δ is large enough, then Pr[B_v] is very small for every v. On the other hand, each bad event B_v is mutually independent from all but at most O(Δ⁴) other bad events, since it only depends on the colors of vertices which are joined to v by a path of length at most 2. Thus, for large enough Δ, if e · Pr[B_v] · O(Δ⁴) ≤ 1 for every v, then the LLL condition is satisfied, implying that the Moser-Tardos algorithm quickly finds a coloring that avoids all bad events.

Proof of Theorem 1.4.

We consider the constraint satisfaction problem with one variable and one constraint per vertex v of the graph, the variable expressing the color of v, and the constraint including all vertices (variables) within distance 2 from v and forbidding all joint value assignments for which B_v occurs. Observe that knowing the colors of the vertices included by the constraint of v is (more than) enough information to determine if v belongs in any monochromatic edge and, thus, whether it retains its color when we uncolor all vertices belonging in monochromatic edges. With this in mind, our local algorithm is the following.

Let q be the number of queries. For each i ∈ [q], to answer the i-th query we use the procedure described in the proof of Theorem 3.1 to satisfy all constraints within a ball of some radius r of the queried vertex u_i. Naturally, this colors all vertices in the neighborhood of u_i (and probably many others). If, in the resulting coloring, we find that u_i participates in a monochromatic edge we “guess” that u_i will be uncolored at the end of the first phase; otherwise we “guess” that it will have the color assigned by our procedure. In the latter case, we return this color as our answer. To answer queries for vertices that we guess will be uncolored at the end of the first phase, we simulate the greedy coloring of the second phase, using the ordering of the vertices by the queries, as sketched below. That is, whenever we guess that u_i is uncolored, we choose one of its available colors, x, return it as our answer to the query, and record x as the color of u_i. If we later need to answer a query for a neighbor of u_i that we also guess to be uncolored, we do not consider x an available color for it. Thus, if our algorithm does not make any wrong guesses, it makes no error at all.
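A minimal sketch of this second-phase bookkeeping (the helpers guessed_uncolored, first_phase_color and neighbors are assumptions standing for the procedure above, and K is the palette size):

```python
def answer_uncolored_query(v, committed, first_phase_color,
                           guessed_uncolored, neighbors, K):
    """Greedy second-phase simulation: pick for v a color unused by its
    colored neighbors and by previously committed uncolored neighbors,
    then record it so later answers stay consistent."""
    banned = {committed[u] for u in neighbors(v) if u in committed}
    banned |= {first_phase_color(u) for u in neighbors(v)
               if not guessed_uncolored(u)}
    # In the regime of Theorem 1.3 an available color always exists.
    color = next(x for x in range(K) if x not in banned)
    committed[v] = color
    return color
```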

To bound the probability of error we use Theorem 3.1. Since the constraints correspond to the events B_v, and each such constraint contains the variables within distance 2 of v, we see that k ≤ 1 + Δ + Δ² and, likewise, d ≤ 1 + Δ + Δ². We are interested in responding to at most n queries, i.e., q = n. Choosing the ψ values uniformly over the events, we see that the LLL condition is satisfied with ε-slack for some ε depending only on Δ. Since k, d and ε are constants with respect to n, the condition of Theorem 3.1 is met for large enough n, and the resulting constants c₁ and c₂ of Theorem 1.4 can be made arbitrarily small by taking Δ large enough. Thus, Theorem 3.1 applies, concluding the proof. ∎

6.3 Proof of Theorem 1.5

We consider the uniform measure over all possible 2-colorings of the vertices of H and define one bad event, A_e, for each edge e, corresponding to e being monochromatic. Clearly, μ(A_e) = 2^{1−|e|}. If we set ψ_e to an appropriate value depending only on |e|, the LLL conditions are satisfied assuming that

(12)

(For more details, see the proof of Theorem 19.2 in [29].) Now, since the constants fixed in Theorem 1.5 do not depend on n, we see that the condition of Theorem 1.5 implies that the LLL condition holds with slack and, thus, the proof follows directly from Theorem 3.1 and Remark 3.1.

7 Improved LLL Criteria and Commutative Algorithms

Our techniques can be generalized in two distinct directions. First, so that they apply under more permissive LLL conditions such as the cluster expansion condition [9] and Shearer’s condition [37]. Second, they can be used to design local computation algorithms that simulate algorithms in the abstract settings of the algorithmic Lovász Local Lemma [2, 17, 1], where the probability space does not necessarily correspond to a product measure, and which capture the lopsided version of the LLL [13]. We briefly discuss these extensions below.

Given a dependency graph G over A and a set S of vertices of G, we denote by Ind(S) the family of subsets of S that correspond to independent sets in G.

Cluster Expansion condition.

The cluster expansion criterion strictly improves upon the General LLL criterion (2) by taking advantage of the local density of the dependency graph.

Definition 7.1.

Given a sequence of positive real numbers {ψ_i}_{i∈[m]}, we say that the cluster expansion condition is satisfied if for each i ∈ [m],

$$\frac{\mu(A_i)}{\psi_i} \sum_{S \in \mathrm{Ind}(\Gamma(i) \cup \{i\})} \prod_{j \in S} \psi_j \;\le\; 1.$$
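The improvement over (2) is transparent in this notation: only independent subsets of the neighborhood contribute, and

$$\sum_{S \in \mathrm{Ind}(\Gamma(i) \cup \{i\})} \prod_{j \in S} \psi_j \;\le\; \sum_{S \subseteq \Gamma(i) \cup \{i\}} \prod_{j \in S} \psi_j \;=\; \prod_{j \in \Gamma(i) \cup \{i\}} (1+\psi_j),$$

so any {ψ_i} witnessing (2) also witnesses the cluster expansion condition, with the gap growing as the neighborhood of i gets denser.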

Shearer’s condition.

Shearer’s condition improves upon the general and cluster expansion LLL conditions by exploiting the global structure of the dependency graph. It is best possible in the sense that if it is not satisfied, then one can always construct a probability space and bad events that are compatible with the given dependency graph, for which the probability of avoiding all bad events is zero.

Definition 7.2.

Let p = (p₁, …, p_m) be the real vector such that p_i = μ(A_i). For S ⊆ [m] define p_S := ∏_{i∈S} p_i and the polynomial

$$q_S(p) := \sum_{\substack{I \in \mathrm{Ind}([m]) \\ S \subseteq I}} (-1)^{|I| - |S|} \, p_I.$$

We say that Shearer’s condition is satisfied if q_S(p) ≥ 0 for all S ∈ Ind([m]), and q_∅(p) > 0.

For the variable setting, the statements of our results remain identical under the cluster expansion condition and essentially identical under Shearer’s condition (the role of ψ_i is played by the corresponding quantities derived from the q_S(p), and we say that the condition holds with ε-slack for a given vector p if it simply holds for the vector p/(1−ε)). The only thing that changes in the analysis is the bound for the sum of probabilities of witness trees of large size in Lemma 5.6. (We refer the reader to Section 4 in [19] for further details.)

The first result that made the LLL constructive in a non-product probability space was due to Harris and Srinivasan [16], who considered the space of permutations endowed with the uniform measure. Subsequent works by Achlioptas and Iliopoulos [2, 1, 3], introducing the flaws/actions framework, and by Harvey and Vondrák [17], introducing the resampling oracles framework, made the LLL constructive in more general settings. These frameworks [2, 1, 17, 3] provide tools for analyzing focused stochastic search algorithms [32], i.e., algorithms which, like the Moser-Tardos algorithm, search by repeatedly selecting a flaw of the current state and moving to a random nearby state that avoids it, in the hope that, more often than not, more flaws are removed than introduced, so that a flawless object is eventually reached.

Our techniques can be extended to these more general settings assuming they are commutative, a notion introduced by Kolmogorov [22, 3]. While we will not define the class of commutative algorithms here for the sake of brevity, we note that it contains the vast majority of LLL algorithms, including the Moser-Tardos algorithm. The reason our results apply in this case is that the witness tree lemma, i.e., Lemma 5.3 for the case of the Moser-Tardos algorithm (which was key to our analysis), holds for commutative algorithms [19, 3].

References

  • [1] Dimitris Achlioptas and Fotis Iliopoulos. Focused stochastic local search and the Lovász local lemma. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2016, Arlington, VA, USA, January 10-12, 2016, pages 2024–2038, 2016.
  • [2] Dimitris Achlioptas and Fotis Iliopoulos. Random walks that find perfect objects and the Lovász local lemma. J. ACM, 63(3):22:1–22:29, July 2016.
  • [3] Dimitris Achlioptas, Fotis Iliopoulos, and Alistair Sinclair. A new perspective on stochastic local search and the Lovász local lemma. CoRR, abs/1805.02026, 2018.
  • [4] Nir Ailon, Bernard Chazelle, Seshadhri Comandur, and Ding Liu. Property-preserving data reconstruction. Algorithmica, 51(2):160–182, April 2008.
  • [5] Noga Alon. A parallel algorithmic version of the local lemma. Random Struct. Algorithms, 2(4):367–378, 1991.
  • [6] Noga Alon, Ronitt Rubinfeld, Shai Vardi, and Ning Xie. Space-efficient local computation algorithms. In Proceedings of the twenty-third annual ACM-SIAM symposium on Discrete Algorithms, pages 1132–1139. Society for Industrial and Applied Mathematics, 2012.
  • [7] József Beck. An algorithmic approach to the Lovász local lemma. I. Random Structures Algorithms, 2(4):343–365, 1991.
  • [8] Arnab Bhattacharyya, Elena Grigorescu, Madhav Jha, Kyomin Jung, Sofya Raskhodnikova, and David P. Woodruff. Lower bounds for local monotonicity reconstruction from transitive-closure spanners. SIAM Journal on Discrete Mathematics, 26(2):618–646, 2012.
  • [9] Rodrigo Bissacot, Roberto Fernández, Aldo Procacci, and Benedetto Scoppola. An improvement of the Lovász local lemma via cluster expansion. Combinatorics, Probability & Computing, 20(5):709–719, 2011.
  • [10] M. Blum, M. Luby, and R. Rubinfeld. Self-testing/correcting with applications to numerical problems. In Proceedings of the Twenty-second Annual ACM Symposium on Theory of Computing, STOC ’90, pages 73–83, New York, NY, USA, 1990. ACM.
  • [11] Artur Czumaj and Christian Scheideler. Coloring non-uniform hypergraphs: a new algorithmic approach to the general Lovász local lemma. In Proceedings of the Eleventh Annual ACM-SIAM Symposium on Discrete Algorithms (San Francisco, CA, 2000), pages 30–39, 2000.
  • [12] Paul Erdős and László Lovász. Problems and results on 3-chromatic hypergraphs and some related questions. In Infinite and finite sets (Colloq., Keszthely, 1973; dedicated to P. Erdős on his 60th birthday), Vol. II, pages 609–627. Colloq. Math. Soc. János Bolyai, Vol. 10. North-Holland, Amsterdam, 1975.
  • [13] Paul Erdös and Joel Spencer. Lopsided Lovász local lemma and latin transversals. Discrete Applied Mathematics, 30(2-3):151–154, 1991.
  • [14] Heidi Gebauer, Tibor Szabó, and Gábor Tardos. The local lemma is tight for SAT. In Dana Randall, editor, SODA, pages 664–674. SIAM, 2011.
  • [15] Mohsen Ghaffari, Fabian Kuhn, and Yannic Maus. On the complexity of local distributed graph problems. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pages 784–797. ACM, 2017.
  • [16] David G. Harris and Aravind Srinivasan. A constructive algorithm for the Lovász local lemma on permutations. In Chandra Chekuri, editor, Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2014, Portland, Oregon, USA, January 5-7, 2014, pages 907–925. SIAM, 2014.
  • [17] Nicholas J. A. Harvey and Jan Vondrák. An algorithmic proof of the Lovász local lemma via resampling oracles. In Venkatesan Guruswami, editor, IEEE 56th Annual Symposium on Foundations of Computer Science, FOCS 2015, Berkeley, CA, USA, 17-20 October, 2015, pages 1327–1346. IEEE Computer Society, 2015.
  • [18] Avinatan Hassidim, Yishay Mansour, and Shai Vardi. Local computation mechanism design. ACM Transactions on Economics and Computation (TEAC), 4(4):21, 2016.
  • [19] Fotis Iliopoulos. Commutative algorithms approximate the LLL-distribution. In Eric Blais, Klaus Jansen, José D. P. Rolim, and David Steurer, editors, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2018, August 20-22, 2018, Princeton, NJ, USA, volume 116 of LIPIcs, pages 44:1–44:20. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2018.
  • [20] M. Jha and S. Raskhodnikova. Testing and reconstruction of lipschitz functions with applications to data privacy. In 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science, pages 433–442, Oct 2011.
  • [21] Kashyap Babu Rao Kolipaka and Mario Szegedy. Moser and Tardos meet Lovász. In STOC, pages 235–244. ACM, 2011.
  • [22] Vladimir Kolmogorov. Commutativity in the algorithmic Lovász local lemma. In Irit Dinur, editor, IEEE 57th Annual Symposium on Foundations of Computer Science, FOCS 2016, 9-11 October 2016, Hyatt Regency, New Brunswick, New Jersey, USA, pages 780–787. IEEE Computer Society, 2016.
  • [23] Reut Levi, Dana Ron, and Ronitt Rubinfeld. Local algorithms for sparse spanning graphs. In Klaus Jansen, José D. P. Rolim, Nikhil R. Devanur, and Cristopher Moore, editors, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2014, September 4-6, 2014, Barcelona, Spain, volume 28 of LIPIcs, pages 826–842. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2014.
  • [24] Reut Levi, Dana Ron, and Ronitt Rubinfeld. A local algorithm for constructing spanners in minor-free graphs. In Klaus Jansen, Claire Mathieu, José D. P. Rolim, and Chris Umans, editors, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2016, September 7-9, 2016, Paris, France, volume 60 of LIPIcs, pages 38:1–38:15. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2016.
  • [25] Yishay Mansour, Aviad Rubinstein, Shai Vardi, and Ning Xie. Converting online algorithms to local computation algorithms. In Artur Czumaj, Kurt Mehlhorn, Andrew M. Pitts, and Roger Wattenhofer, editors, Automata, Languages, and Programming - 39th International Colloquium, ICALP 2012, Warwick, UK, July 9-13, 2012, Proceedings, Part I, volume 7391 of Lecture Notes in Computer Science, pages 653–664. Springer, 2012.
  • [26] Yishay Mansour and Shai Vardi. A local computation approximation scheme to maximum matching. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pages 260–273. Springer, 2013.
  • [27] Michael Molloy and Bruce Reed. A bound on the strong chromatic index of a graph. journal of combinatorial theory, Series B, 69(2):103–109, 1997.
  • [28] Michael Molloy and Bruce Reed. Further algorithmic aspects of the local lemma. In STOC ’98 (Dallas, TX), pages 524–529. ACM, New York, 1999.
  • [29] Michael Molloy and Bruce Reed. Graph colouring and the probabilistic method, volume 23 of Algorithms and Combinatorics. Springer-Verlag, Berlin, 2002.
  • [30] Robin A. Moser. A constructive proof of the Lovász local lemma. In STOC’09—Proceedings of the 2009 ACM International Symposium on Theory of Computing, pages 343–350. ACM, New York, 2009.
  • [31] Robin A. Moser and Gábor Tardos. A constructive proof of the general Lovász local lemma. J. ACM, 57(2):Art. 11, 15, 2010.
  • [32] Christos H. Papadimitriou. On selecting a satisfying truth assignment. In FOCS, pages 163–169. IEEE Computer Society, 1991.
  • [33] Wesley Pegden. An extension of the Moser-Tardos algorithmic local lemma. SIAM J. Discrete Math., 28(2):911–917, 2014.
  • [34] Omer Reingold and Shai Vardi. New techniques and tighter bounds for local computation algorithms. Journal of Computer and System Sciences, 82(7):1180–1200, 2016.
  • [35] Ronitt Rubinfeld, Gil Tamir, Shai Vardi, and Ning Xie. Fast local computation algorithms. In Bernard Chazelle, editor, Innovations in Computer Science - ICS 2010, Tsinghua University, Beijing, China, January 7-9, 2011. Proceedings, pages 223–238. Tsinghua University Press, 2011.
  • [36] Michael Saks and C. Seshadhri. Local monotonicity reconstruction. SIAM Journal on Computing, 39(7):2897–2926, 2010.
  • [37] J.B. Shearer. On a problem of Spencer. Combinatorica, 5(3):241–245, 1985.
  • [38] Joel Spencer. Needles in exponential haystacks II.
  • [39] Aravind Srinivasan. Improved algorithmic versions of the Lovász local lemma. In Shang-Hua Teng, editor, SODA, pages 611–620. SIAM, 2008.
  • [40] Jukka Suomela. Survey of local algorithms. ACM Computing Surveys (CSUR), 45(2):24, 2013.
  • [41] Sergey Yekhanin et al. Locally decodable codes. Foundations and Trends® in Theoretical Computer Science, 6(3):139–255, 2012.