Relating Complexity-theoretic Parameters with SAT Solver Performance

06/26/2017 · by Edward Zulkoski et al., University of Waterloo

Over the years complexity theorists have proposed many structural parameters to explain the surprising efficiency of conflict-driven clause-learning (CDCL) SAT solvers on a wide variety of large industrial Boolean instances. While some of these parameters have been studied empirically, until now there has not been a unified comparative study of their explanatory power on a comprehensive benchmark. We correct this state of affairs by conducting a large-scale empirical evaluation of CDCL SAT solver performance on nearly 7000 industrial and crafted formulas against several structural parameters such as backdoors, treewidth, backbones, and community structure. Our study led us to several results. First, we show that while such parameters only weakly correlate with CDCL solving time, certain combinations of them yield much better regression models. Second, we show how some parameters can be used as a "lens" to better understand the efficiency of different solving heuristics. Finally, we propose a new complexity-theoretic parameter, which we call learning-sensitive with restarts (LSR) backdoors, that extends the notion of learning-sensitive (LS) backdoors to incorporate restarts, and we discuss algorithms to compute them. We mathematically prove that for certain classes of instances minimal LSR-backdoors are exponentially smaller than minimal LS-backdoors.


1 Introduction

Modern conflict-driven clause-learning (CDCL) satisfiability (SAT) solvers routinely solve real-world Boolean instances with millions of variables and clauses, despite the Boolean satisfiability problem being NP-complete and widely regarded as intractable in general. This has perplexed both theoreticians and solver developers alike over the last two decades. A commonly proposed explanation is that these solvers somehow exploit the underlying structure inherent in industrial instances. Previous work has attempted to identify a variety of structural parameters, such as backdoors [37, 36, 13], community structure modularity [4, 28], and treewidth [25]. Additionally, researchers have undertaken limited studies to correlate the size/quality measures of these parameters with CDCL SAT solver runtime. For example, Newsham et al. [28] showed that there is moderate correlation between the runtime of CDCL SAT solvers and modularity of community structure of industrial instances. Others have shown that certain classes of crafted and industrial instances often have favorable parameter values, such as small backdoor sizes [13, 19, 20, 37].

There are several reasons for trying to understand why SAT solvers work as well as they do on industrial instances, and what, if any, structure they exploit. First and foremost is the scientific motivation: a common refrain about heuristics in computer science is that we rarely fully understand why and under what circumstances they work well. Second, a deeper understanding of the relationship between the power of SAT heuristics and the structure of instances over which they work well can lead us to develop better SAT solvers. Finally, we hope that this work will eventually lead to new parameterized complexity results relevant to classes of industrial instances.

Before we get into the details of our empirical study, we briefly discuss several principles that guided us in this work: first, we wanted our study to be comprehensive both in terms of the parameters as well as the large size and variety of benchmarks considered. This approach enabled us to compare the explanatory strengths of different parameters in a way that previous studies could not. Also, the large scale and variety of benchmarks allowed us to draw more general conclusions than otherwise. Second, we were clear from the outset that we would be agnostic to the type of correlations (strong or moderate, positive or negative) obtained, as poor correlations are crucial in ruling out parameters that are not explanatory. Third, parameter values should enable us to distinguish between industrial and crafted instances.

Although many studies of these parameters have been performed in isolation, a comprehensive comparison has not been performed between them. A primary reason for this is that most of these parameters are difficult to compute – often NP-hard – and many take longer to compute than solving the original formula. Further, certain parameters such as weak backdoor size are only applicable to satisfiable instances [37]. Hence, such parameters have often been evaluated on incomparable benchmark sets, making a proper comparison between them difficult. We correct this issue in our study by focusing on instances found in previous SAT competitions, specifically from the Application, Crafted, and Agile tracks [2]. These instances are used to evaluate state-of-the-art SAT solvers on a yearly basis. Application instances are derived from a wide variety of sources and can be considered a small sample of the types of SAT instances found in practice, such as from verification domains. Crafted instances mostly contain encodings of combinatorial/mathematical properties, such as the pigeon-hole principle or pebbling formulas. While many of these instances are much smaller than industrial instances, they are often very hard for CDCL solvers. The Agile track evaluates solvers on bit-blasted quantifier-free bit-vector instances generated from the whitebox fuzz tester SAGE [16]. In total, we consider approximately 1200 application instances, 800 crafted instances, and 5000 Agile instances.

Contributions. We make the following four key contributions in this paper:

  1. We perform a large-scale evaluation of several structural parameters on approximately 7000 SAT instances obtained from a wide variety of benchmark suites, and relate them to CDCL solver performance. We show moderate correlations to solving time for certain combinations of features, and very high correlations for the Agile benchmark. We further show that application and Agile instances have significantly better structural parameter values compared to crafted instances. To the best of our knowledge, this is the first such comprehensive study. (See Section 4)

  2. We introduce a new structural parameter, which we call learning-sensitive with restarts (LSR) backdoors, and describe a novel algorithm for computing upper bounds on minimum LSR-backdoor sizes, using the concept of clause absorption from [6]. The LSR-backdoor concept naturally extends the idea of learning-sensitive (LS) backdoors introduced by Dilkina et al. [12] by taking into account restarts, a key feature of CDCL SAT solvers. (See Sections 2 and 5)

  3. We show how these structural parameters can be used as a lens to compare various solving heuristics, with a current focus on restart policies. For example, we show that the “always restart” policy not only produces the smallest backdoors (with respect to the algorithm we use from Section 5), but also has the fastest runtime for a class of instances. We hope that our work can be used as a guide by theorists and SAT solver developers to rule out the study of certain types of parameters (and rule in new types of parameters) for understanding the efficiency of CDCL SAT solvers. (See Section 4)

  4. We mathematically prove that minimal LSR-backdoors can be exponentially smaller than LS-backdoors for certain formulas. (See Section 6)

2 Background

CDCL SAT Solvers. We assume basic familiarity with the satisfiability problem, CDCL solvers, and the standard notation used by solver developers and complexity theorists; for an overview we refer to [8]. We assume that Boolean formulas are given in conjunctive normal form (CNF). For a formula F, we denote its variables as vars(F). A model refers to a complete satisfying assignment to a formula. The trail refers to the sequence of variable assignments, in the order they have been assigned, at any given point in time during the run of a solver. Learnt clauses are derived by analyzing the conflict analysis graph (CAG), which records the decisions and propagations that led to a conflict. We assume the first unique implication point (1st-UIP) clause learning scheme throughout this paper, which is the most common in practice [27]. The learnt clause (aka conflict clause) defines a cut in the CAG; we denote the subgraph on the side of the cut which contains the conflict node as the conflict side, and the other side as the reason side. For a set of clauses Γ and a literal l, Γ ⊢₁ l denotes that unit propagation can derive l from Γ.
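To make the UP subroutine concrete, the following is a minimal Python sketch (illustrative only; not the solver code used in this study). Clauses are lists of nonzero DIMACS-style integer literals, and an assignment maps a variable to a Boolean.

```python
def unit_propagate(clauses, assignment):
    """Repeatedly assert unit clauses; return (assignment, conflict flag)."""
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned = []
            satisfied = False
            for lit in clause:
                var, want = abs(lit), lit > 0
                if var not in assignment:
                    unassigned.append(lit)
                elif assignment[var] == want:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not unassigned:          # all literals falsified: conflict
                return assignment, True
            if len(unassigned) == 1:    # unit clause: forced assignment
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment, False
```

For example, on the formula (x1) ∧ (¬x1 ∨ x2) ∧ (¬x2 ∨ x3), UP alone assigns all three variables true without any decisions.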

Backdoors and Backbones. Backdoors are defined with respect to subsolvers, which are algorithms that can solve certain classes of SAT instances in polynomial time. An example subsolver is the unit propagation (UP) algorithm [13, 37], which is also what we focus on, as it is a standard subroutine implemented in CDCL SAT solvers. Given a partial assignment α, the simplification of F with respect to α, denoted F[α], removes all clauses that are satisfied by α, and removes any literals that are falsified by α from the remaining clauses. A strong backdoor of a formula F is a set of variables B ⊆ vars(F) such that for every assignment α to B, F[α] can be determined to be satisfiable or unsatisfiable by the UP subsolver [37]. A set of variables B is a weak backdoor with respect to a subsolver if there exists an assignment α to B such that the UP subsolver determines F[α] to be satisfiable. Backdoors were further extended to allow clause learning to occur while exploring the search space of the backdoor:

Definition 1 (Learning-sensitive (LS) backdoor [11])

A set of variables B ⊆ vars(F) is an LS-backdoor of a formula F with respect to a subsolver S if there exists a search-tree exploration order such that a CDCL SAT solver without restarts, branching only on B and learning clauses at the leaves of the tree with subsolver S, either finds a model for F or proves that F is unsatisfiable.

The backbone of a SAT instance is the maximal set of variables such that all variables in the set have the same polarity in every model [26]. Note that weak backdoors and backbones are typically only defined over satisfiable instances. Further, while the backbone of an instance is unique, many backdoors may exist; we typically try to find the smallest backdoors possible.
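For intuition, the backbone of a tiny formula can be computed by exhaustive model enumeration. The brute-force Python sketch below is illustrative only; practical backbone tools such as those evaluated in [17] instead use repeated SAT calls with UNSAT-core-based optimizations.

```python
from itertools import product

def models(clauses, variables):
    """Enumerate all satisfying assignments of a (tiny) CNF formula."""
    for bits in product([False, True], repeat=len(variables)):
        a = dict(zip(variables, bits))
        if all(any(a[abs(l)] == (l > 0) for l in c) for c in clauses):
            yield a

def backbone(clauses, variables):
    """Variables taking the same polarity in every model (SAT instances only)."""
    ms = list(models(clauses, variables))
    return {v for v in variables
            if ms and len({m[v] for m in ms}) == 1}
```

For example, (x1 ∨ x2) ∧ (¬x2) has the single model x1 = true, x2 = false, so its backbone is {x1, x2}; the lone clause (x1 ∨ x2) has three models and an empty backbone.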

Learning-sensitive with Restarts (LSR) Backdoors. We introduce the concept of an LSR-backdoor here. Section 5 formalizes our approach.

Definition 2 (Learning-sensitive with restarts (LSR) backdoor)

A set of variables B ⊆ vars(F) is an LSR-backdoor of the formula F with respect to a subsolver S if there exists a search-tree exploration order such that a CDCL SAT solver (with restarts), branching only on B and learning clauses at the leaves of the tree with subsolver S, either finds a model for F or proves that F is unsatisfiable.

By allowing restarts, the solver may learn clauses from different parts of the search-tree over B, which would otherwise not be accessible without restarts. In Section 5, we demonstrate an approach to computing upper bounds on minimal LSR-backdoor sizes using algorithms for clause absorption from [6], which intrinsically relies upon restarts.

Graph Parameters. We refer to treewidth [33] and community structure [4] as graph parameters. All graph parameters are computed over the variable incidence graph (VIG) of a CNF formula F [4]. The VIG contains a vertex for every variable in F, and an edge between two vertices if their corresponding variables appear together in a clause (weighted according to clause size). Intuitively, the community structure of a graph is a partition of the vertices into communities such that there are more intra-community edges than inter-community edges. The modularity or Q value denotes how easily separable the communities of the graph are. The Q value ranges over [−1, 1], where values near 1 mean the communities are highly separable and the graph intuitively has better community structure. Treewidth intuitively measures how much a graph resembles a tree; actual trees have treewidth 1. Further details can be found in [33].
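These definitions can be sketched in a few lines of Python (illustrative only; this is not the tool from [28]). The edge-weighting scheme, in which a clause of size k contributes 1/C(k,2) to each pair of its variables, follows [4]; here the community partition is given rather than discovered by the Louvain method.

```python
from collections import defaultdict
from itertools import combinations

def vig(clauses):
    """Weighted variable incidence graph: a clause of size k contributes
    1/C(k,2) to the weight of each pair of its variables."""
    w = defaultdict(float)
    for clause in clauses:
        vs = sorted({abs(l) for l in clause})
        pairs = list(combinations(vs, 2))
        for u, v in pairs:
            w[(u, v)] += 1.0 / len(pairs)
    return dict(w)

def modularity(w, part):
    """Q: sum over communities of intra-edge fraction minus squared
    degree fraction, given a map part: variable -> community id."""
    m = sum(w.values())                  # total edge weight
    deg = defaultdict(float)             # weighted degree per vertex
    intra = defaultdict(float)           # intra-community edge weight
    for (u, v), wt in w.items():
        deg[u] += wt
        deg[v] += wt
        if part[u] == part[v]:
            intra[part[u]] += wt
    return sum(intra[c] / m - (sum(deg[v] for v in part if part[v] == c)
                               / (2.0 * m)) ** 2
               for c in set(part.values()))
```

Two variable-disjoint clauses, with each clause's variables placed in their own community, give Q = 0.5, reflecting two fully separable communities.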

3 Related Work

Backdoor-related Parameters. Traditional weak and strong backdoors for both SAT and CSP were introduced by Williams et al. [37]. Kilby et al. introduced a local search algorithm for computing weak backdoors [19], and showed that random SAT instances have medium-sized weak backdoors (of roughly 50% of the variables), and that the size of weak backdoors did not correlate strongly with solving time. Li et al. introduced an improved Tabu local search heuristic for weak backdoors [20]. They demonstrated that many satisfiable industrial instances from SAT competitions have very small weak backdoors, often around 1% of the variables. The size of backdoors with respect to subsolvers other than UP was considered in [13, 20]. Monasson et al. introduced backbones to study random 3-SAT instances [26]. Janota et al. [17] introduced and empirically evaluated several algorithms for computing backbones. Several extensions of traditional strong and weak backdoors have been proposed. Learning-sensitive (LS) backdoors also consider the assignment tree of backdoor variables, but additionally allow clause learning to occur while traversing the tree, which may yield exponentially smaller backdoors than strong backdoors [11, 12].

Graph Abstraction Parameters. Mateescu computed lower and upper bounds on the treewidth of large application formulas [25]. Ansótegui et al. introduced community structure abstractions of SAT formulas, and demonstrated that industrial instances tend to have much better structure than other classes, such as random [4]

. It has also been shown that community-based features are useful for classifying industrial instances into subclasses (which distinguish types of industrial instances in the SAT competition)

[18]. Community-based parameters have also recently been shown to be one of the best predictors for SAT solving performance [28].

Type Benchmarks Unsat? Tool Description
Weak [19, 20, 38] 3SAT, GC, SR No Perform Tabu-based local search to minimize the number of decisions in the final model.
LS [12, 13] LP Yes Run a clause-learning solver, recording all decisions, which constitute a backdoor.
Backbones [17, 19, 38] 3SAT, GC, Comps No Repeated SAT calls with UNSAT-core based optimizations.
Treewidth [21, 25] C09, FM Yes Heuristically compute a residual graph; its max-clique is an upper bound.
Modularity [4, 28] Comps Yes The Louvain method [9] – greedily join communities to improve modularity.
Table 1: Previously studied benchmarks for each considered parameter, as well as descriptions of the tools used to compute them. The “Unsat?” column indicates whether the parameter is defined on unsatisfiable instances. Abbreviations: 3SAT – random 3-SAT; GC – graph coloring; LP – logistics planning; SR – SAT Race 2008; C09 – SAT Competition 2009; Comps – 2009–2014 SAT competitions; FM – feature models.
Benchmark Instances LSR Weak Cmty Bones TW
Application 1238 420 306 984 218 1181
Crafted 753 327 195 613 154 753
Random 126 123 76 126 59 126
Agile 4968 2828 464 4968 208 4968
Total 7085 3698 1041 6691 639 7028
Table 2: The number of instances for which we were able to successfully compute each parameter. “Cmty” refers to the community parameters; “TW” denotes the treewidth upper bound; “Bones” denotes backbone size.
Feature Set Application Crafted Random Agile
0.03 (1237) 0.04 (753) 0.04 (126) 0.84 (4968)
0.06 (982) 0.22 (613) 0.17 (126) 0.86 (4968)
0.14 (420) 0.26 (327) 0.26 (123) 0.87 (2828)
0.04 (299) 0.11 (195) 0.08 (76) 0.54 (464)
0.18 (218) 0.39 (154) 0.04 (59) 0.39 (208)
0.05 (1180) 0.07 (753) 0.11 (126) 0.91 (4968)
0.29 (420) 0.34 (327) 0.13 (123) 0.90 (2828)
0.12 (420) 0.57 (327) 0.08 (123) 0.92 (2828)
0.22 (420) 0.35 (327) 0.45 (123) 0.89 (2828)
0.18 (420) 0.29 (327) 0.04 (123) 0.93 (2828)
Table 3: Adjusted R² values for the given feature sets, compared to the log of MapleCOMSPS’ solving time. The number in parentheses indicates the number of instances considered in each case. The lower section considers heterogeneous sets of features across different parameter types.
Benchmark LSR/V Weak/V Q Bones/V TW/V
Agile 0.18 (0.13) 0.01 (0.01) 0.82 (0.07) 0.17 (0.11) 0.16 (0.08)
Application 0.35 (0.34) 0.03 (0.05) 0.75 (0.19) 0.64 (0.38) 0.32 (0.22)
Crafted 0.58 (0.35) 0.08 (0.11) 0.58 (0.24) 0.39 (0.41) 0.44 (0.29)
Random 0.64 (0.32) 0.11 (0.10) 0.14 (0.10) 0.47 (0.40) 0.82 (0.12)
Table 4: Mean (std. dev.) of several parameter values.

Other work such as SatZilla [39] focuses on large sets of easy-to-compute parameters that can be used to quickly predict the runtime of SAT solvers. In this paper, our focus is on parameters that, if sufficiently favorable, offer provable parameterized complexity-theoretic guarantees on worst-case runtime [14]. The study of structural parameters of SAT instances was inspired by the work on clause-variable ratio and the phase transition phenomenon observed for randomly-generated SAT instances in the late 1990’s [10, 26, 34].

Table 1 lists previous results on empirically computing several parameters and correlating them with SAT solving time. While weak backdoors, backbones, and treewidth have been evaluated on some industrial instances from the SAT competitions, only modularity has been evaluated across a wide range of instances. Some benchmarks, such as the random 3-SAT and graph coloring instances considered in [19], are too small/easy for modern-day CDCL solvers to permit meaningful analysis. Additionally, the benchmarks used in previous works to evaluate each parameter are mostly disjoint, making comparisons across the data difficult.

4 Analysis of Structural SAT Parameters

Our first set of experiments investigates the relationship between structural parameters and CDCL performance. While we would like to evaluate all parameters considered in Section 3, we focus on weak backdoors, backbones, community structure, treewidth, and LSR-backdoors. We note that obtaining any non-trivial upper bound on the size of the strong backdoor seems infeasible at this time.

Experimental Setup, Tools, and Benchmarks. We use off-the-shelf tools to compute weak backdoors [20], community structure and modularity [28], backbones [17], and treewidth [25]. Their approaches are briefly described in Table 1. Due to the difficulty of exactly computing these parameters, the algorithms used in previous work (and our experiments) do not find optimal solutions, with the exception of backbones; e.g., the output may be an upper bound on the size of the minimum backdoor. We compute LSR-backdoors using a tool we developed called LaSeR, which computes an upper bound on the size of the minimal LSR-backdoor. The tool is built on top of the MapleSat SAT solver [22], an extension of MiniSat [15]. We describe the LaSeR algorithm in Section 5. We use MapleCOMSPS, the winner of the 2016 SAT competition main track, as our reference solver for runtime.

Table 2 shows the data sources for our experiments. We include all instances from the Application and Crafted tracks of the SAT competitions from 2009 to 2014, as well as the 2016 Agile track. We additionally included a small set of random instances as a baseline. As the random instances from recent SAT competitions are too difficult for CDCL solvers, we include a set of instances from older competitions. We pre-selected all random instances from the 2007 and 2009 SAT competitions that could be solved in under 5 minutes by MapleCOMSPS. All instances were simplified using MapleCOMSPS’ preprocessor before computing the parameters. The preprocessing time was not included in solving time.

Experiments were run on an Azure cluster, where each node contained two 3.1 GHz processors and 14 GB of RAM. Each experiment was limited to 6 GB. For the Application, Crafted, and Random instances, we allotted 5000 seconds for MapleCOMSPS solving (the same as used in the SAT competition), 24 hours for backbone and weak backdoor computation, 2 hours for community structure computation, and 3 hours for LSR computation. For the Agile instances, we allowed 60 seconds for MapleCOMSPS solving and 300 seconds for LSR computation; the remaining Agile parameter computations had the same cutoffs as Application. Because these parameters are difficult to compute, we do not obtain values for all instances; time or memory limits were sometimes exceeded. Several longer-running experiments were run on the SHARCNET Orca cluster [3], whose nodes contain cores clocked between 2.2 GHz and 2.7 GHz.

Structural Parameters and Solver Runtime Correlation: The first research question we posed is the following: Do parameter values correlate with solving time? In particular, can we build significantly stronger regression models by incorporating combinations of these features?

To address this, we construct ridge regression models from subsets of features related to these parameters. We used ridge regression, as opposed to linear regression, in order to penalize multi-collinear features in the data. We consider the following “base” features: number of variables (V), number of clauses (C), number of communities (Cmtys), modularity (Q), weak backdoor size (Weak), the number of minimal weak backdoors computed (#Min_Weak), LSR-backdoor size (LSR), treewidth upper-bound (TW), and backbone size (Bones). For each of C, Cmtys, Weak, LSR, TW, and Bones, we include its ratio with respect to V (e.g., C/V). We also include the ratio feature Q/Cmtys, as used in [28]. All features are normalized to have mean 0 and standard deviation 1. For a given subset of these features under consideration, we use the “∗” symbol to indicate that our regression model contains these base features, as well as all higher-order combinations of them (combined by multiplication). For example, {V, C}∗ contains four features: V, C, and V·C, plus one “intercept” feature. Our dependent variable is the log of the runtime of the MapleCOMSPS solver.
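A minimal numpy sketch of this setup follows. It is illustrative only: the λ value is arbitrary, and for brevity the expansion includes only pairwise products, whereas the models above include all higher-order products of the chosen base features.

```python
import numpy as np

def expand(X):
    """Intercept, base features, and all pairwise products (incl. squares)."""
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^(-1) X^T y.
    The penalty damps coefficients of multi-collinear (interaction) columns."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)
```

In our setting, the rows of X would be (normalized) structural feature vectors of instances and y the log of solving time.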

In Table 3, we first consider sets of homogeneous features with respect to a given parameter, e.g., only weak backdoor features or only community structure features, along with V and C as baseline features. Each cell reports the adjusted R² value of the regression, as well as the number of instances considered in each case (which corresponds to the number of instances for which we have data for every feature in the regression). It is important to note that since different subsets of SAT formulas are used for each regression (since our dataset is incomplete), we should not directly compare the cells in the top section of the table. Nonetheless, the results do give some indication of whether each parameter relates to solving time.

In order to show that combinations of these features can produce stronger regression models, in the bottom half of Table 3 we consider all instances for which we have LSR, treewidth, and community structure data. We exclude backbones and weak backdoors in this case, as they would limit our study to satisfiable instances and greatly reduce the number of datapoints. We considered all subsets of base features of size 5, and report the best model for each benchmark according to adjusted R² (i.e., the bolded data along the diagonal). This results in notably stronger correlations than with any of the homogeneous feature sets. Although we report our results with five base features (whereas most homogeneous models only used four), similar results appear if we only use four base features. We also note that R² values are higher for the Agile instances, as compared to application and crafted instances. This is somewhat expected, as the Agile instances are all derived from the SAGE whitebox fuzzer [16], whereas our other benchmarks come from a heterogeneous set of sources.

For each row corresponding to the heterogeneous feature sets, the base features are ordered according to the confidence level (derived from p-values) that the feature is significant to the regression (highest first), with respect to the model used to produce the bold data along the diagonal (i.e., the best model for each benchmark). Confidence values are measured as a percentage; for brevity, we consider values over 99% as very significant, values between 95% and 99% as significant, and values below 95% as insignificant. For application instances, Q, C/V, and LSR/V are all very significant, Q/Cmtys is significant, and the remaining base feature is not significant. For crafted instances, TW/V and Q were very significant, but the other base features were insignificant. No features are significant for the random track, indicating that the R² value is likely spurious, partly due to the small size of the benchmark. For the Agile benchmark, all five features are very significant. In each model, several higher-order features are also reported as significant, including several where the base feature is not considered significant.

We also remark that previous work showed notably higher R² values for community-based features [28]. There are several significant differences between our experiments and theirs: first, our instances are pre-simplified before computing community structure; second, their experiments grouped all Application, Crafted, and Random instances into a single benchmark, whereas ours are split.

Structural Parameters for Industrial vs. Crafted Instances: The research question we posed here is the following: Do real-world SAT instances have significantly more favorable parameter values (e.g., smaller backdoors or higher modularity) when compared to crafted or random instances? A positive result, showing that instances from application domains (including Agile) have better structural values, would support the hypothesis that such structure may relate to the efficiency of CDCL solvers. Table 4 summarizes our results. We note that while application and Agile instances indeed appear more structured with respect to these parameters, the application benchmark has high standard deviation values. This could be due to the application instances coming from a wide variety of sources.

4.1 Using Structural Parameters to Compare Solving Heuristics

When comparing different solvers or heuristics, the most common approach is to run them on a benchmark to compare solving times, or in some cases the number of conflicts during solving. However, such an approach does not lend much insight as to why one heuristic is better than another, nor does it suggest any ways in which a less performant heuristic may be improved. By contrast, in recent work, Liang et al. [22, 23] drew comparisons between various branching heuristics by comparing their locality with respect to the community structure, or the “learning rate” of the solver. This eventually led them to build much better branching heuristics. We hope to do the same by using LSR-backdoors as a lens to compare restart policies.

Starting from the baseline MapleSat solver, we consider three restart heuristics: 1) the default heuristic based on the Luby sequence [24]; 2) restarting after every conflict (always restart); and 3) never restarting. We test the following properties of these heuristics, which may relate to their effect on solving performance. First, we consider the LSR-backdoor size, as computed by the approach discussed in Section 5. A run of the solver that focuses on fewer variables can be seen as being more “local,” which may be favorable in terms of runtime. When computing LSR-backdoor sizes, each learnt clause c is annotated with a set of dependency variables (denoted DV(c) in Section 5), which intuitively is a sufficient set of variables such that a fresh solver, branching only on variables in this set, can learn c. Our second measure looks at the average dependency-set size per learnt clause.

Property Luby Always Restart Never Restart
LSR Size 0.16 (0.11) 0.13 (0.08) 0.25 (0.17)
Avg. Clause LSR 0.11 (0.08) 0.05 (0.05) 0.19 (0.14)
Num Conflicts 133246 (206441) 50470 (84606) 256046 (347899)
Solving Time (s) 8.59 (14.26) 6.09 (11.01) 18.09 (24.86)
Table 5: Comparison of LSR measures and solving time for various restart policies on the Agile benchmark. LSR sizes are normalized by the number of variables.

Due to space limitations, we only consider the Agile instances, as this gives us a large benchmark where all instances are from the same domain. The data in Table 5 corresponds to the average and standard deviation values across the benchmark. We only consider instances for which we can compute all data for each heuristic, in total 2145 instances. The always-restart policy emits smaller LSR sizes, both overall and on average per clause, and, somewhat surprisingly, always restarting outperforms the more standard Luby policy in this context. Note that we do not expect this result to hold across all benchmarks, as it has been shown that different restart policies are favorable for different types of instances [7]. However, given the success of always restarting here, results such as ours may promote techniques to improve rapidly restarting solvers, such as in [32].
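For reference, the Luby policy above sets the i-th restart interval to luby(i) times a base conflict budget. A standard recursive sketch of the sequence (ours for illustration, not MapleSat's implementation):

```python
def luby(i):
    """i-th term (1-based) of the Luby sequence: 1, 1, 2, 1, 1, 2, 4, 1, 1, ..."""
    k = 1
    while (1 << k) - 1 < i:       # find the smallest k with i <= 2^k - 1
        k += 1
    if i == (1 << k) - 1:         # i ends a block: the term is 2^(k-1)
        return 1 << (k - 1)
    return luby(i - ((1 << (k - 1)) - 1))   # otherwise recurse into the prefix
```

A solver using this policy restarts after luby(1)·b, luby(2)·b, ... conflicts for some base budget b, so long runs are attempted only after many short ones.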

5 Computing LSR-Backdoors

Dilkina et al. [11, 13] incorporated clause learning into the concept of strong backdoors by introducing LS-backdoors, and additionally described an approach for empirically computing upper bounds on minimal LS-backdoors. We refer the reader to our [anonymized] extended technical report for complete proofs [1].

We propose a new concept called learning-sensitive with restarts (LSR) backdoors, and an approach that takes advantage of restarts, which can often greatly reduce the number of decisions necessary to construct such a backdoor, especially if many “unnecessary” clauses are derived during solving. Our key insight is that, as observed in [5, 29], most learnt clauses are ultimately not used to determine SAT or UNSAT, and we therefore only need to consider the variables required to derive the “useful” clauses. Our result for unsatisfiable formulas shows that the set of variables within the set of learnt clauses in the UNSAT proof constitutes an LSR-backdoor. The result for satisfiable formulas shows that the set of decision variables on the final trail of the solver, along with the variables in certain learnt clauses, constitutes an LSR-backdoor. Before describing these results, we first recall the properties of absorption, 1-empowerment, and 1-provability, which were initially used to demonstrate that CDCL can simulate general resolution within some polynomial-size bound:

Definition 3 (Absorption [6])

Let Γ be a set of clauses, let C be a non-empty clause, and let l be a literal in C. Then Γ absorbs C at l if every non-conflicting state of the solver that falsifies C \ {l} assigns l to true. If Γ absorbs C at every literal, then Γ absorbs C.

The intuition behind absorbed clauses is that adding an already-absorbed clause C to Γ is in some sense redundant, since any unit propagation that could be realized with C is already realized by the clauses in Γ.

Definition 4 (1-Empowerment [30])

Let C = (α ⇒ l) be a clause, where l is some literal in the clause and α is a conjunction of literals. The clause C is 1-empowering with respect to a set of clauses Γ if:

  1. Γ ⊨ C: the clause is implied by Γ.

  2. Γ ∧ α does not result in a conflict detectable by unit propagation.

  3. Γ ∧ α ⊬₁ l: unit propagation cannot derive l after asserting the literals in α.

Definition 5 (1-Provability [31])

Given a set of clauses Γ, a clause C is 1-provable with respect to Γ iff Γ ∧ ¬C ⊢₁ false.

An important note is that every learnt clause is both 1-empowering and 1-provable, and therefore not absorbed, at the moment it is derived by a CDCL solver (i.e., before being added to Γ) [30, 31].
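Both 1-provability and absorption reduce to unit-propagation queries and can be checked mechanically. The Python sketch below (our own, for illustration only) tests these properties of a clause C against a clause set Γ, using DIMACS-style integer literals.

```python
def up(clauses, assignment):
    """Unit propagation; returns (final assignment, conflict flag)."""
    a = dict(assignment)
    changed = True
    while changed:
        changed = False
        for c in clauses:
            if any(a.get(abs(l)) == (l > 0) for l in c):
                continue                       # clause already satisfied
            free = [l for l in c if abs(l) not in a]
            if not free:
                return a, True                 # every literal falsified
            if len(free) == 1:
                a[abs(free[0])] = free[0] > 0  # unit clause: force literal
                changed = True
    return a, False

def one_provable(gamma, c):
    """Gamma AND not(C) yields a conflict by unit propagation alone."""
    falsified = {abs(l): (l < 0) for l in c}   # falsify every literal of C
    return up(gamma, falsified)[1]

def absorbs(gamma, c):
    """Gamma absorbs C: for every literal l of C, falsifying C \\ {l}
    either conflicts under UP or unit-propagates l."""
    for l in c:
        a = {abs(x): (x < 0) for x in c if x != l}
        res, conflict = up(gamma, a)
        if not conflict and res.get(abs(l)) != (l > 0):
            return False
    return True
```

For example, Γ = {(¬x1 ∨ x2), (¬x2 ∨ x3)} both 1-proves and absorbs the resolvent (¬x1 ∨ x3), whereas {(x1 ∨ x2)} does not absorb the 1-empowering unit clause (x1).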

Lemma 1

Let Γ be a set of clauses, and suppose that C is a 1-empowering and 1-provable clause with respect to Γ. Then there exists a sequence of decisions and restarts D, containing only variables from C, such that Γ together with the set of clauses learned from applying D absorbs C.

Proof

The proof follows directly from the construction of such a decision sequence in the proof of Proposition 2 of [31].

Our result additionally makes use of the following notation. Let F be a formula and S be a CDCL solver. We denote the full set of learnt clauses derived during solving as L. For every conflicting state, let c denote the clause that will be learned through conflict analysis. We let R(c) be the set of clauses on the conflict side of the implication graph used to derive c, and let R*(c) = R(c) ∪ ⋃_{c' ∈ R(c) ∩ L} R*(c') recursively define the set of clauses needed to derive c (so R*(c) = R(c) when c is derived from clauses of F only). For every learnt clause c we define D(c) = vars({c} ∪ (R*(c) ∩ L)), the set of variables in the clause itself as well as in any learnt clause used in the derivation of the clause (recursively). Intuitively, D(c) is a sufficient set of dependency variables, such that a fresh SAT solver can absorb c by only branching on variables in the set. For a set of clauses C, we let vars(C) = ⋃_{c ∈ C} vars(c) and D(C) = ⋃_{c ∈ C} D(c).
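The dependency sets described above can be computed by a simple traversal of the derivation DAG. The following Python sketch is our own illustration (the clause names and the `learnt` map are hypothetical, not the solver's actual data structures):

```python
# Hypothetical bookkeeping (ours, not MapleSat's actual data structures).
# Each learnt clause id maps to (literals, learnt_antecedents), the learnt
# clauses that appeared on its conflict side during conflict analysis.

def dep_vars(clause_id, learnt, memo=None):
    """D(c): the variables of c plus, recursively, the variables of every
    learnt clause used in its derivation."""
    if memo is None:
        memo = {}
    if clause_id in memo:
        return memo[clause_id]
    lits, antecedents = learnt[clause_id]
    vs = {abs(l) for l in lits}
    for a in antecedents:  # derivations form a DAG, so this terminates
        vs |= dep_vars(a, learnt, memo)
    memo[clause_id] = vs
    return vs

# c1 was derived from original clauses only; c2 used c1 on its conflict side.
learnt = {
    "c1": ([1, -2], []),
    "c2": ([3], ["c1"]),
}
print(sorted(dep_vars("c2", learnt)))  # → [1, 2, 3]
```

Inside a solver the same effect can be achieved eagerly, annotating each learnt clause with the union of its own variables and those of its learnt antecedents, so the recursive clause set never needs to be materialized.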

Lemma 2

Let S be a CDCL solver used to determine the satisfiability of some formula F. Let L' ⊆ L be a set of clauses learned while solving F. Then a fresh solver S' can absorb all clauses in L' by only branching on the variables in D(L').

Proof

Let c_1, …, c_m be the sequence over L' ∪ ⋃_{c ∈ L'} (R*(c) ∩ L) in the order that the original solver derived the clauses, and suppose we have already absorbed the first i − 1 clauses by only branching upon D(L'). Then, in particular, the clauses in R*(c_i) have been absorbed, so c_i must be 1-provable. If c_i is not 1-empowering, then it is absorbed and we are done. If c_i is 1-empowering, we can invoke Lemma 1 to absorb c_i by only branching on the variables in vars(c_i), and vars(c_i) ⊆ D(L') by construction.

Theorem 5.1 (LSR Computation, SAT case)

Let S be a CDCL solver, F be a satisfiable formula, and t be the final trail of the solver immediately before returning SAT, which is composed of a sequence of decision variables d_1, …, d_k and propagated variables p_1, …, p_l. For each p_i, let the clause used to unit propagate p_i be c_i, and let the full set of such clauses be P. Then D(P) ∪ {d_1, …, d_k} constitutes an LSR-backdoor for F.

Proof

Using Lemma 2, we first absorb all clauses in P by branching on D(P). We can then restart the solver to clear the trail, and branch on the variables d_1, …, d_k, using the same order and polarity as the final trail of S. Since we have absorbed each c_i, every p_i will be propagated.

Theorem 5.2 (LSR Computation, UNSAT case)

Let S be a CDCL solver, F be an unsatisfiable formula, and U ⊆ L be the set of learnt clauses used to derive the final conflict. Then D(U) constitutes an LSR-backdoor for F.

Proof

The result follows similarly to the satisfiable case. We learn all clauses relevant to the proof using Lemma 2, which then allows unit propagation to derive UNSAT.

We make some observations about our approach. First, it can be easily lifted to learning schemes other than 1st-UIP, which is currently the most widely used scheme. Second, the set of variables that constitutes an LSR-backdoor may be disjoint from the set of decisions made by the solver. Third, the approach depends on the ability to restart, and therefore cannot be used to compute LS-backdoors; in particular, the construction of the decision sequence for Lemma 1, as described in [31], requires restarting after every conflict. As an additional remark of practical importance, modern CDCL solvers often perform clause minimization to shrink a newly learnt clause before adding it to the clause database [35], which can have a significant impact on performance. Intuitively, this procedure reduces the clause by finding dependencies among its literals. In order to allow clause minimization in our experiments, for each clause c we include all clauses used by the minimizer in the set R(c).

For our empirical results, we modified an off-the-shelf solver, MapleSat [22], by annotating each learnt clause c with D(c). Note that we do not need to explicitly record the set R*(c) at any time. As in the LS-backdoor experiments in [11], different LSR-backdoors can be obtained by randomizing the branching heuristic and polarity selection. However, given the size and number of instances considered here, we only perform one run per instance.

To ensure that our output is indeed an LSR-backdoor, we implemented a verifier that works in three phases. First, we compute an LSR-backdoor B as described above. Second, we re-run the solver, and record every learnt clause c such that vars(c) ⊆ B. Third, we run a final solver with a modified branching heuristic that iterates through the sequence of learnt clauses from the second phase, absorbing each as described in Lemma 2 (first checking that the clause is either absorbed or 1-provable upon being reached in the sequence). We ensure that the solver is in a final state by the end of the sequence.

6 Separating LS and LSR-Backdoors

In this section we prove that for certain kinds of formulas the minimal LSR-backdoors are exponentially smaller than the minimal LS-backdoors, under the assumption that the learning scheme is 1st-UIP and that the CDCL solver is only allowed to backtrack (and not backjump). In [12], the authors demonstrate that LS-backdoors may be exponentially smaller than strong backdoors under the 1st-UIP learning scheme without restarts.

Let n be a positive integer and let X = {x_1, x_2, …, x_n} be a set of Boolean variables. For any Boolean variable x, let x^1 denote the positive literal x and x^0 denote the negative literal ¬x. For any assignment α ∈ {0,1}^n, let C_α denote the clause on the variables x_1, x_2, …, x_n which is uniquely falsified by the assignment α.
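The clause uniquely falsified by an assignment α can be constructed mechanically: the i-th bit of α fixes the polarity of x_i, flipped so that α is the unique falsifying assignment. A small Python sketch (our own encoding, with signed integers for literals):

```python
from itertools import product

# Our own encoding: variable i+1 holds bit alpha[i]; literals are signed ints.

def clause_falsified_by(alpha):
    """C_alpha: negate exactly the literals that alpha satisfies, so alpha
    is the unique assignment falsifying the clause."""
    return {-(i + 1) if bit else (i + 1) for i, bit in enumerate(alpha)}

def falsifies(clause, alpha):
    """True iff alpha falsifies every literal of the clause."""
    val = {i + 1: bit for i, bit in enumerate(alpha)}
    return all(val[abs(l)] == (0 if l > 0 else 1) for l in clause)

alpha = (1, 0, 1)
c = clause_falsified_by(alpha)   # the clause (-x1 v x2 v -x3)
print([a for a in product((0, 1), repeat=3) if falsifies(c, a)] == [alpha])  # → True
```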

Our family of formulas will be defined using the following template. Let σ be any total ordering of {0,1}^n; we write ≺_σ to denote the relation induced by this ordering. The formula F_σ is defined on the variables x_1, x_2, …, x_n, along with three auxiliary sets of variables. Given an ordering σ we define the formula F_σ.

This family was introduced by Dilkina et al. [12], where the formula using the lexicographic ordering provides an exponential separation between the sizes of LS-backdoors and strong backdoors. Their key insight was that if a CDCL solver without restarts queried the x-variables in the lexicographic ordering of assignments, it would learn crucial conflict clauses that would enable it to establish the unsatisfiability of the instance without having to query any additional variables. (By "querying a variable" we mean that the solver assigns a value to it and then performs any unit propagations.) Since strong backdoors cannot benefit from clause learning, they will necessarily have to query additional variables to hit every conflict.

We show that the same family of formulas (but with a different ordering σ) can be used to separate LS-backdoor size from LSR-backdoor size. Observe that for any ordering σ, the x-variables form an LSR-backdoor for F_σ.

Lemma 3

Let σ be any ordering of {0,1}^n. The x-variables form an LSR-backdoor for the formula F_σ.

Proof

For each assignment α ∈ {0,1}^n (ordered by σ), assign α to the x-variables by decision queries. By the structure of F_σ, as soon as we have a complete assignment to the x-variables, we will unit-propagate to a conflict and learn a unit clause on an auxiliary variable; after that we restart. Once all of these assignments are explored we will have learned such a unit clause for every assignment α, and so we can simply query the x-variables in any order (without restarts) to yield a contradiction, since every assignment to the x-variables will falsify the formula.

Note that the formula F_σ depends on Θ(2^n) variables, and so the size of this LSR-backdoor, n, is logarithmic in the number of variables. Furthermore, observe that the x-variables will also form an LS-backdoor if we can query the assignments according to σ without needing to restart (for example, if σ is the lexicographic ordering). This suggests the following definition, which captures the orderings of {0,1}^n that can be explored by a CDCL algorithm without restarts:

Definition 6

Let T_n be the collection of all depth-n decision trees on the x-variables, where we label each leaf u of a tree T with the assignment obtained by taking the assignments to the x-variables on the path from the root of T to u. For any T ∈ T_n, let σ_T be the ordering of {0,1}^n obtained by reading the assignments labelling the leaves of T from left to right.

To get some intuition for our lower-bound argument, consider an ordering σ_T for some decision tree T. By the argument in Lemma 3, the formula F_{σ_T} will have a small LS-backdoor, obtained by querying the x-variables according to the decision tree T. Now, take any two assignments α and β, and let σ' be the ordering obtained from σ_T by swapping the positions of α and β. If we try to execute the same CDCL algorithm without restarts (corresponding to the ordering σ_T) on the new formula F_{σ'}, the algorithm will reach an inconclusive state once it reaches the clause corresponding to β in σ', since at that point the assignment to the x-variables will be α. Thus, it will have to query at least one more variable, which increases the size of the backdoor by one. We can generalize the above argument to multiple "swaps": the CDCL algorithm without restarts querying the variables according to σ_T would then have to query one extra variable for every assignment which is "out-of-order" with respect to σ'.

This discussion leads us to the following complexity measure: for any ordering σ (not necessarily obtained from a decision tree T) and any ordering of the form σ_T, let

inv(σ, σ_T) = |{α ∈ {0,1}^n : ∃β such that β ≺_{σ_T} α but α ≺_σ β}|.

Informally, inv(σ, σ_T) counts the number of elements of {0,1}^n which are "out-of-order" with respect to σ, as we have discussed above. We are able to show that the above argument is fully general:
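The decision-tree orderings σ_T and the inversion measure can be made concrete with a toy implementation (our own, under one natural reading of the definition): a decision tree is a nested (variable, left, right) triple, its leaf ordering is read off left to right, and inv counts assignments preceded in σ_T by some assignment that follows them in σ:

```python
from itertools import product

# Toy model (ours) of Definition 6: a decision tree is a nested
# (variable, left_subtree, right_subtree) triple with variables 0..n-1
# (standing in for x_1..x_n), None marking a leaf; the 0-branch is
# explored before the 1-branch.

def leaf_ordering(tree, n, partial=None):
    """sigma_T: the assignments labelling the leaves, read left to right."""
    if partial is None:
        partial = {}
    if tree is None:
        return [tuple(partial[i] for i in range(n))]
    var, left, right = tree
    out = []
    for bit, sub in ((0, left), (1, right)):
        partial[var] = bit
        out += leaf_ordering(sub, n, partial)
    del partial[var]
    return out

def inv(sigma, sigma_t):
    """inv(sigma, sigma_T): assignments preceded in sigma_T by some
    assignment that follows them in sigma."""
    pos_s = {a: i for i, a in enumerate(sigma)}
    pos_t = {a: i for i, a in enumerate(sigma_t)}
    return sum(
        1
        for a in sigma
        if any(pos_t[b] < pos_t[a] and pos_s[b] > pos_s[a] for b in sigma)
    )

# For n = 2, the tree querying variable 0 then variable 1 induces the
# lexicographic ordering, which has no inversions against itself.
lex_tree = (0, (1, None, None), (1, None, None))
sigma_t = leaf_ordering(lex_tree, 2)
lex = sorted(product((0, 1), repeat=2))
print(sigma_t == lex, inv(lex, sigma_t))  # → True 0
```

Swapping any two assignments in σ relative to σ_T makes inv positive, matching the informal "one extra variable per swap" argument above.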

Lemma 4

Let σ be any ordering of {0,1}^n, and let T_n denote the collection of all complete depth-n decision trees on the x-variables. Then any learning-sensitive backdoor of F_σ has size at least n + min_{T ∈ T_n} inv(σ, σ_T).

This reduces our problem to finding an ordering σ for which every ordering of the form σ_T has many elements which are "out-of-order" with respect to σ (again, intuitively, for every mis-ordered element the LS-backdoor will have to contain at least one more auxiliary variable).

Lemma 5

For any n there exists an ordering σ of {0,1}^n such that for every decision tree T ∈ T_n we have inv(σ, σ_T) = Ω(2^n).

Proof

We define the ordering here and leave the full proof of correctness to the companion technical report [1]. Let α_1, α_2, …, α_{2^{n−1}} be the lexicographic ordering of the strings in {0,1}^n that begin with 0, and for any string α define ᾱ to be the string obtained by flipping each bit in α. Then let σ be the ordering α_1, ᾱ_1, α_2, ᾱ_2, …, α_{2^{n−1}}, ᾱ_{2^{n−1}}.
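Under our reading of this construction (interleave the lexicographically ordered strings that begin with 0 with their bitwise complements, so no string repeats), the resulting ordering splits every coordinate-value class evenly across its two halves; combined with the Key Property of decision-tree orderings used in the appendix, this forces exponentially many inversions. A small Python check of the even-split property:

```python
from itertools import product

# Our reading of the construction: interleave the lexicographically ordered
# strings that begin with 0 with their bitwise complements, so every string
# of {0,1}^n appears exactly once.

def interleaved_ordering(n):
    first_half = [(0,) + rest for rest in product((0, 1), repeat=n - 1)]
    sigma = []
    for a in first_half:
        sigma.append(a)
        sigma.append(tuple(1 - b for b in a))  # bitwise complement of a
    return sigma

def splits_evenly(sigma, n):
    """Check: for every coordinate i and bit b, exactly half of the strings
    with that bit value lie in the first half of sigma."""
    half = set(sigma[: len(sigma) // 2])
    for i in range(n):
        for b in (0, 1):
            with_bit = [a for a in sigma if a[i] == b]
            if 2 * sum(a in half for a in with_bit) != len(with_bit):
                return False
    return True

sigma = interleaved_ordering(3)
print(len(set(sigma)) == 8, splits_evenly(sigma, 3))  # → True True
```

Every consecutive pair (α, ᾱ) contributes exactly one string to each coordinate-value class, which is why the split is exact; a decision-tree ordering, by contrast, fixes some coordinate throughout its entire first half.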

Theorem 6.1

For every n, there is a formula F_σ on Θ(2^n) variables such that the minimal LSR-backdoor has O(n) variables, but every LS-backdoor has Ω(2^n) variables.

7 Future Work and Conclusions

We presented a large-scale study of several structural parameters of SAT instances. We showed that combinations of these features can lead to improved regression models, and that in general, industrial instances tend to have much more favorable parameter values than crafted or random instances. Further, we gave examples of how such parameters may be used as a "lens" to evaluate different solver heuristics. We hope these studies can be used by complexity theorists as a guide for which parameters to focus on in future analyses of CDCL. Finally, we introduced LSR-backdoors, which characterize a sufficient subset of the variables that CDCL can branch on to solve the formula. In doing so, we presented an algorithm to compute LSR-backdoors that exploits the notion of clause absorption, and further showed that certain formulas have exponentially smaller minimal LSR-backdoors than LS-backdoors, under the assumption that the CDCL solver only backtracks (not backjumps) and uses the 1st-UIP learning scheme. From a theoretical point of view, it remains open whether there is a separation between LS- and LSR-backdoors when backjumping is allowed. On the empirical side, we plan to investigate approaches to computing many small LSR-backdoors for a given formula. The intuition is that CDCL solvers may be more efficient on Application instances because these instances have many LSR-backdoors, so the solver can "latch" on to one relatively easily, while the solvers are less efficient on crafted instances because those have too few backdoors. Further, we plan to refine our results by analyzing individual sub-categories of the benchmarks studied, with a particular focus on scaling to crafted instances.

References

  • [1] Extended Version of Paper (2017), https://drive.google.com/open?id=0B7cGd_mBqlwKWl9KSFNzbXFEd3c
  • [2] The international SAT Competitions web page (2017), http://www.satcompetition.org/
  • [3] SHARCNET (2017), http://www.sharcnet.ca/
  • [4] Ansótegui, C., Giráldez-Cru, J., Levy, J.: The community structure of SAT formulas. In: International Conference on Theory and Applications of Satisfiability Testing. pp. 410–423. Springer (2012)
  • [5] Ansótegui, C., Giráldez-Cru, J., Levy, J., Simon, L.: Using community structure to detect relevant learnt clauses. In: International Conference on Theory and Applications of Satisfiability Testing. pp. 238–254. Springer (2015)
  • [6] Atserias, A., Fichte, J.K., Thurley, M.: Clause-learning algorithms with many restarts and bounded-width resolution. Journal of Artificial Intelligence Research 40, 353–373 (2011)
  • [7] Biere, A., Fröhlich, A.: Evaluating CDCL restart schemes. In: Pragmatics of SAT workshop (2015)
  • [8] Biere, A., Heule, M., van Maaren, H.: Handbook of satisfiability, vol. 185. IOS press (2009)
  • [9] Blondel, V.D., Guillaume, J.L., Lambiotte, R., Lefebvre, E.: Fast unfolding of communities in large networks. Journal of statistical mechanics: theory and experiment 2008(10), P10008 (2008)
  • [10] Coarfa, C., Demopoulos, D.D., Aguirre, A.S.M., Subramanian, D., Vardi, M.Y.: Random 3-SAT: The plot thickens. In: International Conference on Principles and Practice of Constraint Programming. pp. 143–159. Springer (2000)
  • [11] Dilkina, B., Gomes, C., Malitsky, Y., Sabharwal, A., Sellmann, M.: Backdoors to combinatorial optimization: Feasibility and optimality. In: Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems. pp. 56–70. Springer (2009)
  • [12] Dilkina, B., Gomes, C., Sabharwal, A.: Backdoors in the Context of Learning. In: International Conference on Theory and Applications of Satisfiability Testing. pp. 73 – 79. Springer (2009)
  • [13] Dilkina, B., Gomes, C.P., Sabharwal, A.: Tradeoffs in the complexity of backdoors to satisfiability: dynamic sub-solvers and learning during search. Annals of Mathematics and Artificial Intelligence 70(4), 399–431 (2014)
  • [14] Downey, R.G., Fellows, M.R.: Fundamentals of parameterized complexity, vol. 4. Springer (2013)
  • [15] Eén, N., Sörensson, N.: An extensible SAT-solver. In: International Conference on Theory and Applications of Satisfiability Testing. pp. 502–518. Springer (2003)
  • [16] Godefroid, P., Levin, M.Y., Molnar, D.A., et al.: Automated whitebox fuzz testing. In: Network and Distributed System Security Symposium. pp. 151–166. Internet Society (2008)
  • [17] Janota, M., Lynce, I., Marques-Silva, J.: Algorithms for computing backbones of propositional formulae. AI Communications 28(2), 161–177 (2015)
  • [18] Levy, J.: On the classification of industrial SAT families. In: International Conference of the Catalan Association for Artificial Intelligence. p. 163. IOS Press (2015)
  • [19] Kilby, P., Slaney, J., Thiébaux, S., Walsh, T.: Backbones and backdoors in satisfiability. In: AAAI Conference on Artificial Intelligence. pp. 1368–1373. AAAI Press (2005)
  • [20] Li, Z., Van Beek, P.: Finding small backdoors in SAT instances. In: Canadian Conference on Artificial Intelligence. pp. 269–280. Springer (2011)
  • [21] Liang, J.H., Ganesh, V., Czarnecki, K., Raman, V.: SAT-based analysis of large real-world feature models is easy. In: International Conference on Software Product Line. pp. 91–100. ACM (2015)
  • [22] Liang, J.H., Ganesh, V., Poupart, P., Czarnecki, K.: Learning rate based branching heuristic for SAT solvers. In: International Conference on Theory and Applications of Satisfiability Testing. pp. 123–140. Springer (2016)
  • [23] Liang, J.H., Ganesh, V., Zulkoski, E., Zaman, A., Czarnecki, K.: Understanding VSIDS branching heuristics in conflict-driven clause-learning SAT solvers. In: Haifa Verification Conference. pp. 225–241. Springer (2015)
  • [24] Luby, M., Sinclair, A., Zuckerman, D.: Optimal speedup of Las Vegas algorithms. Information Processing Letters 47(4), 173–180 (1993)
  • [25] Mateescu, R.: Treewidth in Industrial SAT Benchmarks (2011), http://research.microsoft.com/pubs/145390/MSR-TR-2011-22.pdf
  • [26] Monasson, R., Zecchina, R., Kirkpatrick, S., Selman, B., Troyansky, L.: Determining computational complexity from characteristic ‘phase transitions’. Nature 400(6740), 133–137 (1999)
  • [27] Moskewicz, M.W., Madigan, C.F., Zhao, Y., Zhang, L., Malik, S.: Chaff: Engineering an efficient SAT solver. In: Design Automation Conference. pp. 530–535. ACM (2001)
  • [28] Newsham, Z., Ganesh, V., Fischmeister, S., Audemard, G., Simon, L.: Impact of community structure on SAT solver performance. In: International Conference on Theory and Applications of Satisfiability Testing. pp. 252–268. Springer (2014)
  • [29] Oh, C.: Between SAT and UNSAT: the fundamental difference in CDCL SAT. In: International Conference on Theory and Applications of Satisfiability Testing. pp. 307–323. Springer (2015)
  • [30] Pipatsrisawat, K., Darwiche, A.: A new clause learning scheme for efficient unsatisfiability proofs. In: AAAI Conference on Artificial Intelligence. pp. 1481–1484 (2008)
  • [31] Pipatsrisawat, K., Darwiche, A.: On the power of clause-learning SAT solvers with restarts. In: International Conference on Principles and Practice of Constraint Programming. pp. 654–668. Springer (2009)
  • [32] Ramos, A., Van Der Tak, P., Heule, M.J.: Between restarts and backjumps. In: International Conference on Theory and Applications of Satisfiability Testing. pp. 216–229. Springer (2011)
  • [33] Robertson, N., Seymour, P.D.: Graph minors. III. Planar tree-width. Journal of Combinatorial Theory, Series B 36(1), 49–64 (1984)
  • [34] Selman, B., Mitchell, D.G., Levesque, H.J.: Generating hard satisfiability problems. Artificial intelligence 81(1-2), 17–29 (1996)
  • [35] Sörensson, N., Biere, A.: Minimizing learned clauses. In: International Conference on Theory and Applications of Satisfiability Testing. pp. 237–243. Springer (2009)
  • [36] Szeider, S.: On fixed-parameter tractable parameterizations of SAT. In: International Conference on Theory and Applications of Satisfiability Testing. pp. 188–202. Springer (2003)
  • [37] Williams, R., Gomes, C., Selman, B.: On the connections between backdoors, restarts, and heavy-tailedness in combinatorial search. In: International Conference on Theory and Applications of Satisfiability Testing. pp. 222–230. Springer (2003)
  • [38] Williams, R., Gomes, C.P., Selman, B.: Backdoors to typical case complexity. In: International Joint Conference on Artificial Intelligence. pp. 1173–1178. No. 24, AAAI Press (2003)
  • [39] Xu, L., Hutter, F., Hoos, H.H., Leyton-Brown, K.: SATzilla: portfolio-based algorithm selection for SAT. Journal of artificial intelligence research 32, 565–606 (2008)

Appendix 0.A LSR Notation Example

Figure 1: Example conflict analysis graph depicting the set of relevant clauses and variables to some learnt clause . Nodes are literals. Edges labeled with some are previously learnt clauses; all other edges depicting propagations are from the original formula . The clauses used to derive and are not shown, but would be in the respective conflict analysis graphs of and . The clauses and are not included in since they occur on the reason side of the graph.

Appendix 0.B Full Proofs on Computing LSR-Backdoors

In this appendix, we present the proofs for Lemma 2 and Theorem 5.1. We define the following notation. We let F be a formula, overloaded to also denote its set of clauses. Let Δ be a set of clauses and C be some clause we would like to absorb. We define the function Absorb(Δ, C), which produces a new clause set Δ' ⊇ Δ such that Δ' absorbs C, by applying Lemma 1. If C is already absorbed by Δ, then Absorb(Δ, C) = Δ, and if C is not 1-provable with respect to Δ, then Absorb(Δ, C) = fail. We overload Absorb to take a sequence of clauses, Absorb(Δ, (C_1, …, C_m)), which applies Absorb to the clauses in order, so that for each intermediate Δ_i produced, every clause C_1, …, C_i is absorbed. Again, if any clause C_i is not 1-provable with respect to Δ_{i−1}, then Absorb returns fail.

Lemma 2

Let S be a CDCL solver used to determine the satisfiability of some formula F. Let L' ⊆ L be a set of clauses learned during solving. Then a fresh solver S' can absorb all clauses in L' by only branching on the variables in D(L').

Proof

We show that S' can absorb all clauses in L' ∪ ⋃_{c ∈ L'} (R*(c) ∩ L), which includes L'. Let c_1, …, c_m be the sequence over this set in the order that the original solver derived the clauses. Consider the first clause c_1. By construction, it does not depend upon any learnt clauses (i.e., it was derived from original clauses only), and since S learned c_1, it must be 1-empowering and 1-provable with respect to the initial clause set. By Lemma 1, we can absorb c_1 by only branching on variables in vars(c_1), which again by construction are in D(L'). We therefore have that Absorb(F, c_1) absorbs c_1.

Suppose S' has absorbed the first i − 1 clauses by only branching on the variables in D(L'), and we wish to absorb c_i. There are two cases to consider. First, c_i may already be absorbed, since the clauses learnt by S' may absorb clauses in addition to c_1, …, c_{i−1}, in which case we are done. So suppose c_i is not absorbed. Since every previous clause in the sequence has been absorbed, we in particular have that the clauses in R*(c_i) have been absorbed, so c_i must be 1-provable. To see this, suppose that instead of absorbing we had learned the exact set of clauses in R*(c_i). Then by construction, negating all literals in c_i must lead to a conflict through unit propagation. Since we have instead absorbed the clauses in R*(c_i), any propagation that was used to derive the conflict must also be possible using the clauses that absorb them (by the definition of absorption).

We also know that c_i is 1-empowering with respect to the current clause set, since otherwise it would be absorbed by definition, and we assumed this is not the case. Therefore, we can invoke Lemma 1 to absorb c_i by only branching on the variables in vars(c_i). Again, vars(c_i) ⊆ D(L') by construction.

Theorem 1 (LSR Computation, SAT case)

Let S be a CDCL solver, F be a satisfiable formula, and t be the final trail of the solver immediately before returning SAT, which is composed of a sequence of decision variables d_1, …, d_k and propagated variables p_1, …, p_l. For each p_i, let the clause used to unit propagate p_i be c_i, and let the full set of such clauses be P. Then D(P) ∪ {d_1, …, d_k} constitutes an LSR-backdoor for F.

Proof

Using Lemma 2, we first absorb all clauses in P by branching on D(P). We can then restart the solver to clear the trail, and branch on the variables d_1, …, d_k, using the same order and polarity as the final trail of S. If any p_i is not already assigned by the learnt clauses used to absorb P, unit propagation will be able to derive the literals propagated by c_i, since we have absorbed c_i. Note that with this final branching scheme, we cannot reach a state where the wrong polarity of a variable in t becomes implied through propagation (i.e., with respect to the final trail polarities), since the solver is sound and this would block the model found by the original solver S.

Appendix 0.C Proof Separating LS and LSR-Backdoors

Here we present the full proofs for Section 6.

Let n be a positive integer and let X = {x_1, x_2, …, x_n} be a set of Boolean variables. For any Boolean variable x, let x^1 denote the positive literal x and x^0 denote the negative literal ¬x. For any assignment α ∈ {0,1}^n, let C_α denote the clause on the variables x_1, x_2, …, x_n which is uniquely falsified by the assignment α. Throughout, we implicitly assume that the clause-learning algorithm is 1st-UIP.

Our formula will be defined by the following template. Let σ be an ordering of {0,1}^n; we write ≺_σ to denote the relation induced by this ordering. Given an ordering σ we define the formula F_σ.

Lemma 3

Let σ be any ordering of {0,1}^n. The x-variables form a learning-sensitive backdoor with restarts for the formula F_σ.

Proof

Query the x-variables according to the ordering given by σ. As soon as we have a complete assignment to the x-variables, we will unit-propagate to a conflict and learn a unit clause on an auxiliary variable; after that we restart. Once all such assignments are explored we can simply query the x-variables in any order (without restarts) to yield a contradiction, since every assignment to the x-variables will falsify the formula.

Consider any decision tree T of depth n in which the x-variables have been queried in any order along each path. From this tree we obtain a natural ordering σ_T of {0,1}^n by reading off the strings labelling the leaves in left-to-right order. Note that the orderings of the form σ_T are exactly the orderings which can be generated by DPLL algorithms (or, more generally, by a CDCL algorithm without restarts). For any such complete decision tree T and any ordering σ of {0,1}^n, define

inv(σ, σ_T) = |{α ∈ {0,1}^n : ∃β such that β ≺_{σ_T} α but α ≺_σ β}|.

That is, inv(σ, σ_T) is the number of strings which are "out of order" in σ_T with respect to σ.

Lemma 4

Let σ be any ordering of {0,1}^n, and let T_n denote the collection of all complete depth-n decision trees on the x-variables. Then any learning-sensitive backdoor of F_σ has size at least n + min_{T ∈ T_n} inv(σ, σ_T).

Proof

Let B be a minimal learning-sensitive backdoor of F_σ. First we show that without loss of generality the following holds:

  1. X ⊆ B.

  2. All queries to auxiliary variables occur before any query to an x-variable.

We begin with an observation: any query to an auxiliary variable can be assumed to be negative. To see this, notice that querying it positively will immediately unit propagate to a conflict, and 1st-UIP learning will immediately yield the corresponding unit clause. Thus we can always replace positive queries in-situ with negative queries without affecting the rest of the algorithm's execution.

Let us first show that . Suppose that there is a variable (the case where is symmetric). We show that can be replaced with . Consider the first time that is queried as a decision variable. By the structure of , we can assume that has not been assigned before querying as a decision variable (as we have shown above, if this is the case then , which eliminates the two clauses containing ). If , then is unit-propagated, and all clauses containing or will be removed. It follows that any conflict following this assignment must occur in a clause containing for some , and thus replacing with will not affect the queries to the backdoor in this case. So, instead, suppose that . Clearly, in order for this assignment to have been necessary, the variable must be assigned after assigning (either by unit propagation or a decision). Observe that if is set to true after assigning , then the two clauses containing are removed and so assigning to begin with would not have affected the execution of the algorithm on the backdoor. Similarly, setting will unit propagate to a conflict in this case and we would learn the unit clause as above, again showing that we could have replaced with without affecting the backdoor. Thus can be replaced with without loss of generality.

Next we show that all decision queries to auxiliary variables can be made before queries to x-variables without loss of generality. As argued above, we can assume that when an auxiliary variable is queried as a decision variable it is queried negatively (as this will lead immediately to learning the corresponding unit clause). Consider any conflict which occurs in the process of querying the variables of the backdoor that does not happen because of such a query. Since X ⊆ B, by the structure of F_σ any such conflict must occur after assigning all x-variables to some string α. Corresponding to this total assignment to the x-variables is the unique clause C_α which is falsified by this assignment, so consider the clause in F_σ of the form

After assigning a subset of the auxiliary variables of B and assigning all x-variables to α, a conflict can occur in two ways:

  1. for all and the clause above is the conflict clause, or

  2. there exists a single which is unassigned after assigning all of these variables; the above clause leads to assigning and thus learning the unit clause , as argued above.

In either case, moving all queries to the beginning of the algorithm will not affect these conflicts: this is clear in case (1); in case (2) the only possibility is that was queried at the beginning and so the conflict will change to a case (1) conflict.

We are now ready to finish the proof of the lemma. We encode the execution of the CDCL algorithm as a decision tree, wherein we first query all auxiliary variables and then query all x-variables. Note that we must assign all x-variables to hit a conflict by the structure of F_σ, and so we let T denote the complete depth-n decision tree querying the x-variables obtained from the CDCL execution tree. Recalling that X ⊆ B, to prove the lemma it suffices to prove that B contains at least inv(σ, σ_T) variables besides the x-variables.

Since B is a backdoor, it must be that at every leaf of the tree the unit propagator will propagate the input to a conflict. Consider the ordering σ_T of the assignments induced by this tree T. At each leaf of this tree, which corresponds to an assignment α to the x-variables, consider the clause of F_σ containing C_α:

There are two possibilities: either we learn a unit clause at this leaf or we do not. If we do not, then we must have already learned the unit clauses for all of the relevant auxiliary literals; in particular, we must have already learned the unit clauses corresponding to each β satisfying β ≺_{σ_T} α. This implies that B contains at least inv(σ, σ_T) auxiliary variables,

as for each out-of-order α we must query, at least, auxiliary variables corresponding to assignments which occur after α in σ but before α in σ_T. This proves the lemma.

By the previous lemma, to lower-bound the size of the learning-sensitive backdoor it suffices to find an ordering σ of {0,1}^n for which any ordering σ_T produced by a decision tree has many "inversions" with respect to σ.

Lemma 5

There exists an ordering σ of {0,1}^n such that for every decision tree T ∈ T_n we have inv(σ, σ_T) = Ω(2^n).

Proof

Let T be any decision tree, and write σ_T for the ordering of its leaves. We use the following key property of orderings generated from decision trees:

Key Property. For any decision tree T there is a coordinate i and a bit b such that α_i = b for every α in the first half of the ordering σ_T, and α_i = 1 − b for every α in the second half.

This property is easy to prove: simply let i be the index of the variable labelling the root of the decision tree T. If the decision tree queries the bit b in the left subtree, then the first half of the strings in the ordering will have the i-th bit set to b, and the second half will have it set to 1 − b.

To use this property, let α_1, α_2, …, α_{2^{n−1}} be the lexicographic ordering of the strings in {0,1}^n that begin with 0, and for any string α define ᾱ to be the string obtained by flipping each bit in α. Then let σ be the ordering α_1, ᾱ_1, α_2, ᾱ_2, …, α_{2^{n−1}}, ᾱ_{2^{n−1}}.

It follows that for each coordinate i and each bit b, half of the strings with α_i = b will be in the first half of the ordering σ and half of the strings with α_i = b will be in the second half of σ. The lemma follows by applying the Key Property.

Corollary 1

There exists an ordering σ such that any learning-sensitive backdoor for the formula F_σ has size Ω(2^n).

Theorem 3

For every n, there is a formula F_σ on Θ(2^n) variables such that the minimal LSR-backdoor has O(n) variables, but every LS-backdoor has Ω(2^n) variables.