Solving MaxSAT by Successive Calls to a SAT Solver

03/11/2016
by Mohamed El Halaby et al.
Cairo University

The Maximum Satisfiability (MaxSAT) problem is the problem of finding a truth assignment that maximizes the number of satisfied clauses of a given Boolean formula in Conjunctive Normal Form (CNF). Many exact solvers for MaxSAT have been developed in recent years, and many of them were presented at the well-known SAT conference. Algorithms for MaxSAT generally fall into two categories: (1) branch and bound algorithms and (2) algorithms that make successive calls to a SAT solver (SAT-based), which are the focus of this paper. On practical problems, SAT-based algorithms have been shown to be more efficient. This paper provides an experimental investigation comparing the performance of recent SAT-based and branch and bound algorithms on the benchmarks of the MaxSAT Evaluations.


1 Introduction and Preliminaries

A Boolean variable $x$ can take one of two possible values, 0 (false) or 1 (true). A literal is a variable $x$ or its negation $\neg x$. A clause is a disjunction of literals, i.e., $C = l_1 \vee l_2 \vee \dots \vee l_k$. A CNF formula is a conjunction of clauses. Formally, a CNF formula $\phi$ composed of $m$ clauses, where each clause $C_i$ is composed of $k_i$ literals, is defined as $\phi = \bigwedge_{i=1}^{m} C_i$, where $C_i = l_{i,1} \vee \dots \vee l_{i,k_i}$.

In this paper, a set of clauses is referred to as a Boolean formula. A truth assignment $I$ satisfies a Boolean formula $\phi$ if it satisfies every clause of $\phi$.

Given a CNF formula $\phi$, the satisfiability problem (SAT) is deciding whether $\phi$ has a satisfying truth assignment (i.e., an assignment to the variables of $\phi$ that satisfies every clause). The Maximum Satisfiability (MaxSAT) problem asks for a truth assignment that maximizes the number of satisfied clauses in $\phi$.

Many theoretical and practical problems can be encoded into SAT and MaxSAT, such as debugging [51], circuit design and scheduling of Earth-observation satellite photography [56], course timetabling [11, 45, 41, 34], software package upgrades [24], routing [58, 46], reasoning [52] and protein structure alignment in bioinformatics [50].

Let $\phi = \{(h_1,\infty),\dots,(h_m,\infty),(s_1,w_1),\dots,(s_n,w_n)\}$ be a CNF formula, where $w_1,\dots,w_n$ are natural numbers. The Weighted Partial MaxSAT (WPMaxSAT) problem asks for an assignment that satisfies all of $h_1,\dots,h_m$ (called hard clauses) and maximizes the sum of the weights of the satisfied clauses among $s_1,\dots,s_n$ (called soft clauses).

In general, exact MaxSAT solvers follow one of two approaches: successively calling a SAT solver (sometimes called the SAT-based approach) or branch and bound. The SAT-based approach converts the MaxSAT (or WPMaxSAT) problem into a sequence of SAT instances, one for each hypothesized cost, and decides them with a SAT solver in order to determine the actual optimum. One way to do this, given an unweighted MaxSAT instance, is to check whether there is an assignment that falsifies no clauses. If no such assignment exists, we check whether there is an assignment that falsifies only one clause. This is repeated, incrementing the number of clauses that are allowed to be falsified each time, until the SAT solver returns satisfiable, meaning that the minimum number of falsified clauses has been determined. Recent comprehensive surveys of SAT-based algorithms can be found in [43, 8].

The second approach performs a depth-first branch and bound search in the space of possible assignments. An evaluation function that computes a bound is applied at each search node to detect pruning opportunities. This paper surveys the satisfiability-based approach and provides an experimental comparison of the performance of both approaches on sets of benchmarks.

Because of the numerous calls this approach makes to a SAT solver, any improvement to SAT algorithms immediately benefits SAT-based MaxSAT methods. Experimental results from the MaxSAT Evaluations (http://www.maxsat.udl.cat) have shown that SAT-based solvers handle large MaxSAT instances from industrial applications better than branch and bound methods.

2 Linear Search Algorithms

A simple way to solve WPMaxSAT is to augment each soft clause $s_i$ with a new variable $b_i$ (called a blocking variable), and then to add a constraint (specified in CNF) saying that the sum of the weights of the falsified soft clauses must be less than a given value $k$. Next, the formula (without the weights) together with the constraint is sent to a SAT solver to check whether or not it is satisfiable. Depending on the answer, $k$ is updated and the process continues until the transition between satisfiable and unsatisfiable is located, at which point the cost of the optimal solution is known. The algorithm can start searching for the optimal cost from an upper bound initialized with the maximum possible cost (i.e., $\sum_{i=1}^{n} w_i$) and decrease it down to the optimal cost, or it can set a lower bound to 0 and increase it up to the optimal cost. Solvers that employ the former approach are called satisfiability-based solvers (not to be confused with the name of the general method), while those that follow the latter are called UNSAT-based solvers. A cost of 0 means all the soft clauses are satisfied and a cost of $\sum_{i=1}^{n} w_i$ means all the soft clauses are falsified.
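For concreteness, the formula handed to the SAT solver for a candidate bound $k$ can be written as follows (one possible way to express it in the notation above, where $\mathrm{CNF}(\cdot)$ denotes any CNF encoding of the pseudo-Boolean constraint):

\[
\phi_k \;=\; \{h_1,\dots,h_m\} \;\cup\; \{\,s_1 \vee b_1,\ \dots,\ s_n \vee b_n\,\} \;\cup\; \mathrm{CNF}\!\left(\sum_{i=1}^{n} w_i\, b_i < k\right).
\]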

Algorithm 1 employs the UNSAT-based method to search for the optimal cost, maintaining a lower bound $\lambda$ initialized to 0 (line 1).

Input: A WPMaxSAT instance $\phi = \{(h_1,\infty),\dots,(h_m,\infty),(s_1,w_1),\dots,(s_n,w_n)\}$
Output: A WPMaxSAT solution to $\phi$
1 $\lambda \leftarrow 0$
2 foreach $s_i \in \{s_1,\dots,s_n\}$ do
3        let $b_i$ be a new blocking variable
4        $\phi \leftarrow (\phi \setminus \{(s_i,w_i)\}) \cup \{(s_i \vee b_i, w_i)\}$
5 while true do
6        $(st, I) \leftarrow \mathrm{SAT}(\{h_1,\dots,h_m\} \cup \{s_1 \vee b_1,\dots,s_n \vee b_n\} \cup \mathrm{CNF}(\sum_{i=1}^{n} w_i b_i \le \lambda))$
7        if $st = \textsc{sat}$ then
8               return $I$
9        $\lambda \leftarrow \mathrm{UpdateBound}(\lambda)$
Algorithm 1 (LinearUNSAT): Linear search UNSAT-based algorithm for solving WPMaxSAT.

Next, the algorithm relaxes each soft clause with a new blocking variable in lines 2-4. The formula now contains each soft clause augmented with a new blocking variable. The while loop in lines 5-9 sends the clauses of $\phi$ (without the weights), together with the constraint $\sum_{i=1}^{n} w_i b_i \le \lambda$, to a SAT solver (line 6). If the SAT solver returns satisfiable, LinearUNSAT terminates and returns a solution (lines 7-8). Otherwise, the lower bound is updated and the loop continues until the SAT solver returns satisfiable. The function UpdateBound in line 9 updates the lower bound either by simply increasing it or by other means that depend on the distribution of the weights of the input formula. Later in this paper we will see how the subset sum problem can serve as a possible implementation of UpdateBound. Note that it could be inefficient if $\lambda$ changes by one in each iteration. Consider a WPMaxSAT formula with five soft clauses having the weights 1, 1, 1, 1 and 100. The cost of the optimal solution cannot be anything other than $0, 1, 2, 3, 4, 100, 101, 102, 103$ or $104$. Thus, assigning $\lambda$ any of the values $5,\dots,99$ is unnecessary and would only result in a larger number of iterations.
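The following is a minimal Python sketch of Algorithm 1, not the authors' implementation. It assumes the python-sat (pysat) package, whose PBEnc encoder (which needs the optional pypblib back-end) plays the role of $\mathrm{CNF}(\sum_i w_i b_i \le \lambda)$; Glucose3 is just one of pysat's solvers, the helper names are ours, and UpdateBound is simplified to incrementing $\lambda$ by one.

from pysat.formula import WCNF
from pysat.pb import PBEnc
from pysat.solvers import Glucose3

def relax(wcnf: WCNF):
    """Append a fresh blocking variable to every soft clause."""
    top, blocking = wcnf.nv, []
    clauses = [list(c) for c in wcnf.hard]
    for clause in wcnf.soft:
        top += 1
        blocking.append(top)
        clauses.append(list(clause) + [top])
    return clauses, blocking, top

def linear_unsat(wcnf: WCNF):
    """Return (optimal cost, model) of a WPMaxSAT instance given as a WCNF."""
    clauses, blocking, top = relax(wcnf)
    lb = 0                                   # lower bound on the optimal cost
    while True:
        # CNF encoding of  sum(w_i * b_i) <= lb
        pb = PBEnc.leq(lits=blocking, weights=wcnf.wght, bound=lb, top_id=top)
        with Glucose3(bootstrap_with=clauses + pb.clauses) as solver:
            if solver.solve():
                return lb, solver.get_model()
        lb += 1                              # a subset-sum over the weights could
                                             # jump directly to the next feasible cost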

Example 2.1.

Let $\phi$ be a WPMaxSAT instance with hard clauses, soft clauses and weights as specified. If we run LinearUNSAT on $\phi$, the soft clauses are relaxed and $\lambda$ is initialized to 0. The sequence of iterations is:

  1. The constraint is included, , .

  2. The constraint is included, , .

  3. The constraint is included, , .

  4. The constraint is included, , .

  5. The constraint is included. The SAT solver returns an assignment, which leads to a WPMaxSAT solution of cost 20 if we ignore the values of the blocking variables.

The next algorithm describes the satisfiability-based technique. Algorithm 2 starts by initializing the upper bound to one plus the sum of the weights of the soft clauses (line 1).

Input: A WPMaxSAT instance $\phi = \{(h_1,\infty),\dots,(h_m,\infty),(s_1,w_1),\dots,(s_n,w_n)\}$
Output: A WPMaxSAT solution to $\phi$
1 $\mu \leftarrow 1 + \sum_{i=1}^{n} w_i$
2 foreach $s_i \in \{s_1,\dots,s_n\}$ do
3        let $b_i$ be a new blocking variable; $\phi \leftarrow (\phi \setminus \{(s_i,w_i)\}) \cup \{(s_i \vee b_i, w_i)\}$
4 while true do
5        $(st, I) \leftarrow \mathrm{SAT}(\{h_1,\dots,h_m\} \cup \{s_1 \vee b_1,\dots,s_n \vee b_n\} \cup \mathrm{CNF}(\sum_{i=1}^{n} w_i b_i < \mu))$
6        if $st = \textsc{unsat}$ then
7               return the last satisfying assignment $I$ found
8        $\mu \leftarrow \sum \{\, w_i : I \text{ falsifies } s_i \,\}$
Algorithm 2 (LinearSAT): Linear search SAT-based algorithm for solving WPMaxSAT.

In each iteration of Algorithm 2 except the last, the formula is satisfiable; the cost of the optimal solution is found immediately after the transition from a satisfiable to an unsatisfiable instance. LinearSAT begins by initializing the upper bound $\mu$ to one plus the sum of the weights of the soft clauses (line 1). The while loop (lines 4-8) continues until the formula becomes unsatisfiable (line 6), at which point the algorithm returns a WPMaxSAT solution and terminates (line 7). As long as the formula is satisfiable, it is sent to the SAT solver along with the constraint ensuring that the sum of the weights of the falsified soft clauses is less than $\mu$ (line 5), and the upper bound is updated to the sum of the weights of the soft clauses falsified by the assignment returned by the SAT solver (line 8).

Note that updating the upper bound to the cost of the assignment just found is more efficient than simply decreasing the upper bound by one, because it requires fewer iterations and thus the problem is solved with fewer SAT calls.
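Below is a matching Python sketch of Algorithm 2, under the same pysat-based assumptions and illustrative helper names as the LinearUNSAT sketch above; it is an illustration, not a reference implementation.

from pysat.formula import WCNF
from pysat.pb import PBEnc
from pysat.solvers import Glucose3

def relax(wcnf: WCNF):
    top, blocking = wcnf.nv, []
    clauses = [list(c) for c in wcnf.hard]
    for clause in wcnf.soft:
        top += 1
        blocking.append(top)
        clauses.append(list(clause) + [top])
    return clauses, blocking, top

def falsified_weight(wcnf: WCNF, model):
    """Sum of the weights of the soft clauses falsified by `model`."""
    assignment = set(model)
    return sum(w for clause, w in zip(wcnf.soft, wcnf.wght)
               if not any(lit in assignment for lit in clause))

def linear_sat(wcnf: WCNF):
    """Return (optimal cost, model); assumes the hard clauses are satisfiable."""
    clauses, blocking, top = relax(wcnf)
    ub, best_model = sum(wcnf.wght) + 1, None
    while True:
        # the falsified soft weight must stay strictly below the upper bound
        pb = PBEnc.leq(lits=blocking, weights=wcnf.wght, bound=ub - 1, top_id=top)
        with Glucose3(bootstrap_with=clauses + pb.clauses) as solver:
            if not solver.solve():
                return ub, best_model        # the last satisfiable bound is optimal
            best_model = solver.get_model()
        ub = falsified_weight(wcnf, best_model)
        if ub == 0:                          # every soft clause satisfied: optimal
            return 0, best_model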

Example 2.2.

If we run LinearSAT on $\phi$ from the previous example, the soft clauses are relaxed and $\mu$ is initialized to one plus the sum of the weights of the soft clauses. The sequence of iterations is:

  1. The constraint is included, , , .

  2. The constraint is included, , , .

  3. The constraint is included, , , .

  4. The constraint is included and the SAT solver returns unsatisfiable. The assignment from the previous step is indeed a solution to $\phi$, with cost 20, if we ignore the values of the blocking variables.

3 Binary Search-based Algorithms

The number of iterations linear search algorithms for WPMaxSAT can take is linear in the sum of the weights of the soft clauses. Thus, in the worst case a linear search WPMaxSAT algorithm can make $O(\sum_{i=1}^{n} w_i)$ calls to the SAT solver. Since we are searching for a single value (the optimal cost) within a range of values (from 0 to $\sum_{i=1}^{n} w_i$), binary search can be used, which requires fewer iterations than linear search. Algorithm 3 searches for the cost of the optimal assignment by using binary search.

Input: A WPMaxSAT instance $\phi = \{(h_1,\infty),\dots,(h_m,\infty),(s_1,w_1),\dots,(s_n,w_n)\}$
Output: A WPMaxSAT solution to $\phi$
1 if $\mathrm{SAT}(\{h_1,\dots,h_m\}) = \textsc{unsat}$ then
2        return $\emptyset$
3 $\lambda \leftarrow -1$; $\mu \leftarrow 1 + \sum_{i=1}^{n} w_i$
4 foreach $s_i \in \{s_1,\dots,s_n\}$ do
5        let $b_i$ be a new blocking variable; $\phi \leftarrow (\phi \setminus \{(s_i,w_i)\}) \cup \{(s_i \vee b_i, w_i)\}$
6 while $\mu - \lambda > 1$ do
7        $\nu \leftarrow \lfloor (\lambda + \mu)/2 \rfloor$
8        $(st, I) \leftarrow \mathrm{SAT}(\{h_1,\dots,h_m\} \cup \{s_1 \vee b_1,\dots,s_n \vee b_n\} \cup \mathrm{CNF}(\sum_{i=1}^{n} w_i b_i \le \nu))$
9        if $st = \textsc{sat}$ then
10              $\mu \leftarrow \nu$
11       else
12              $\lambda \leftarrow \nu$
return the last satisfying assignment $I$ found
Algorithm 3 (BinS-WPMaxSAT): Binary search based algorithm for solving WPMaxSAT.

BinS-WPMaxSAT begins by checking the satisfiability of the hard clauses (line 1) before starting the search for a solution. If the SAT solver returns unsatisfiable, BinS-WPMaxSAT returns the empty assignment and terminates (line 2). The algorithm maintains a lower bound $\lambda$ and an upper bound $\mu$, initialized respectively to $-1$ and one plus the sum of the weights of the soft clauses (line 3). The soft clauses are augmented with blocking variables (lines 4-5). At each iteration of the main loop (lines 6-12), the middle value $\nu$ is set to the average of $\lambda$ and $\mu$ (line 7), and a constraint is added requiring the sum of the weights of the relaxed soft clauses to be less than or equal to the middle value. The clauses describing this constraint are sent to the SAT solver along with the clauses of $\phi$ (line 8). If the SAT solver returns satisfiable (line 9), then the cost of the optimal solution is at most $\nu$, and $\mu$ is updated (line 10). Otherwise, the algorithm looks for the optimal cost above $\nu$, and so $\lambda$ is updated (line 12). The main loop continues until $\mu - \lambda \le 1$, and the number of iterations BinS-WPMaxSAT executes is proportional to $\log_2(\sum_{i=1}^{n} w_i)$, which is considerably lower than the complexity of the linear search methods.
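A Python sketch of the binary search scheme of Algorithm 3 is shown below, with the same pysat-based assumptions and illustrative helper names as the linear search sketches; it is an illustration only.

from pysat.formula import WCNF
from pysat.pb import PBEnc
from pysat.solvers import Glucose3

def relax(wcnf: WCNF):
    top, blocking = wcnf.nv, []
    clauses = [list(c) for c in wcnf.hard]
    for clause in wcnf.soft:
        top += 1
        blocking.append(top)
        clauses.append(list(clause) + [top])
    return clauses, blocking, top

def bins_wpmaxsat(wcnf: WCNF):
    """Return (optimal cost, model), or (None, None) if the hard part is UNSAT."""
    with Glucose3(bootstrap_with=wcnf.hard) as solver:
        if not solver.solve():
            return None, None                # hard clauses alone are unsatisfiable

    clauses, blocking, top = relax(wcnf)
    lb, ub = -1, sum(wcnf.wght) + 1          # strict lower / inclusive upper bound
    best_model = None
    while ub - lb > 1:
        mid = (lb + ub) // 2
        pb = PBEnc.leq(lits=blocking, weights=wcnf.wght, bound=mid, top_id=top)
        with Glucose3(bootstrap_with=clauses + pb.clauses) as solver:
            if solver.solve():
                best_model = solver.get_model()
                ub = mid                     # the optimum is at most mid
            else:
                lb = mid                     # the optimum is strictly above mid
    return ub, best_model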

The following example illustrates how Algorithm 3 proceeds.

Example 3.1.

Consider $\phi$ from Example 2.1 with all the weights of the soft clauses set to 1. At the beginning, $\lambda = -1$ and $\mu$ is one plus the number of soft clauses. The following is the sequence of iterations Algorithm 3 executes.

  1. , the constraint is included, , , .

  2. , the constraint is included, , , , . The assignment is indeed an optimal one, falsifying four clauses.

It is often stated that a binary search algorithm performs better than linear search. Although this is true most of the time, there are instances for which linear search is faster than binary search. Let $c$ be the sum of the weights of the soft clauses falsified by the assignment returned by the SAT solver in the first iteration. If $c$ is indeed the optimal cost, linear search methods would discover this fact in the next iteration, while binary search ones would take about $\log_2(\sum_{i=1}^{n} w_i)$ iterations to declare $c$ the optimal cost. In order to benefit from both search methods, An et al. [3] developed a PMaxSAT algorithm called QMaxSAT (version 0.4) that alternates between linear search and binary search (see Algorithm 4).

Input: A WPMaxSAT instance $\phi = \{(h_1,\infty),\dots,(h_m,\infty),(s_1,w_1),\dots,(s_n,w_n)\}$
Output: A WPMaxSAT solution to $\phi$
1 if $\mathrm{SAT}(\{h_1,\dots,h_m\}) = \textsc{unsat}$ then
2        return $\emptyset$
3 foreach $s_i \in \{s_1,\dots,s_n\}$ do
4        let $b_i$ be a new blocking variable; $\phi \leftarrow (\phi \setminus \{(s_i,w_i)\}) \cup \{(s_i \vee b_i, w_i)\}$
5 $\lambda \leftarrow -1$; $\mu \leftarrow 1 + \sum_{i=1}^{n} w_i$
6 $mode \leftarrow \text{binary}$
7 while $\mu - \lambda > 1$ do
8        if $mode = \text{binary}$ then
9               $\nu \leftarrow \lfloor (\lambda + \mu)/2 \rfloor$
10       else
11              $\nu \leftarrow \mu - 1$
12       $(st, I) \leftarrow \mathrm{SAT}(\{h_1,\dots,h_m\} \cup \{s_1 \vee b_1,\dots,s_n \vee b_n\} \cup \mathrm{CNF}(\sum_{i=1}^{n} w_i b_i \le \nu))$
13       if $st = \textsc{sat}$ then
14              $\mu \leftarrow \sum \{\, w_i : I \text{ falsifies } s_i \,\}$
15       else
16              $\lambda \leftarrow \nu$
17       flip $mode$ between binary and linear
return the last satisfying assignment $I$ found
Algorithm 4 (BinLin-WPMaxSAT): Alternating binary and linear searches for solving WPMaxSAT.

Algorithm 4 begins by checking that the set of hard clauses is satisfiable (line 1). If not, the algorithm returns the empty assignment and terminates (line 2). Next, the soft clauses are relaxed (lines 3-4) and the lower and upper bounds are initialized respectively to $-1$ and one plus the sum of the weights of the soft clauses (line 5). BinLin-WPMaxSAT has two execution modes, binary and linear; the mode of execution is initialized to binary search (line 6). At each iteration of the main loop (lines 7-17), the SAT solver is called on the clauses of $\phi$ with the constraint bounded by the mid point if the current mode is binary (line 9), or by the upper bound if the mode is linear (line 11), and the call is made in line 12. If the formula is satisfiable (line 13), the upper bound is updated (line 14). Otherwise, the lower bound is updated to the tested bound (line 16). At the end of each iteration, the mode of execution is flipped (line 17).
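The alternation can be sketched in Python as follows, again under the same pysat assumptions and illustrative helper names; this illustrates the idea of Algorithm 4 rather than reproducing QMaxSAT's implementation.

from pysat.formula import WCNF
from pysat.pb import PBEnc
from pysat.solvers import Glucose3

def relax(wcnf: WCNF):
    top, blocking = wcnf.nv, []
    clauses = [list(c) for c in wcnf.hard]
    for clause in wcnf.soft:
        top += 1
        blocking.append(top)
        clauses.append(list(clause) + [top])
    return clauses, blocking, top

def falsified_weight(wcnf: WCNF, model):
    assignment = set(model)
    return sum(w for clause, w in zip(wcnf.soft, wcnf.wght)
               if not any(lit in assignment for lit in clause))

def binlin_wpmaxsat(wcnf: WCNF):
    """Return (optimal cost, model), or (None, None) if the hard part is UNSAT."""
    with Glucose3(bootstrap_with=wcnf.hard) as solver:
        if not solver.solve():
            return None, None

    clauses, blocking, top = relax(wcnf)
    lb, ub = -1, sum(wcnf.wght) + 1
    best_model, mode = None, "binary"
    while ub - lb > 1:
        bound = (lb + ub) // 2 if mode == "binary" else ub - 1
        pb = PBEnc.leq(lits=blocking, weights=wcnf.wght, bound=bound, top_id=top)
        with Glucose3(bootstrap_with=clauses + pb.clauses) as solver:
            if solver.solve():
                best_model = solver.get_model()
                ub = falsified_weight(wcnf, best_model)    # tighten to the real cost
            else:
                lb = bound                                 # optimum is above `bound`
        mode = "linear" if mode == "binary" else "binary"  # flip the search mode
    return ub, best_model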

Since the cost of the optimal solution is an integer, it can be represented as an array of bits. Algorithm 5 uses this fact to determine the solution bit by bit. BitBased-WPMaxSAT starts from the most significant bit and at each iteration it moves one bit closer to the least significant bit, at which point the optimal cost is found.

Input: A WPMaxSAT instance $\phi = \{(h_1,\infty),\dots,(h_m,\infty),(s_1,w_1),\dots,(s_n,w_n)\}$
Output: A WPMaxSAT solution to $\phi$
Algorithm 5 (BitBased-WPMaxSAT): A bit-based algorithm for solving WPMaxSAT.

At the beginning of the algorithm, as in the previous ones, the satisfiability of the hard clauses is checked and the soft clauses are relaxed. The sum of the weights of the soft clauses is an upper bound on the cost, and it is computed to determine the number of bits needed to represent the optimal solution. The index of the current bit being considered is initialized to the most significant position, and the value of the solution being constructed is initialized. The main loop terminates once the least significant bit has been processed. At each iteration, the SAT solver is called on $\phi$ with a constraint saying that the sum of the weights of the falsified soft clauses must be less than the current candidate value. If the SAT solver returns satisfiable, the sum of the weights of the soft clauses falsified by the current assignment is computed, its binary representation is determined, and the index of the current bit is moved down to the next set bit of that value; if no such index exists, the index becomes $-1$ and the algorithm terminates in the following iteration. On the other hand, if the SAT solver returns unsatisfiable, the search continues towards the least significant bit by decrementing the index, and since the optimal cost is greater than the currently tested value, the candidate value is adjusted accordingly.
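The bit-by-bit idea can be sketched in Python as below. This is one way to realize the scheme, under the same pysat assumptions and illustrative helper names as the earlier sketches, fixing the bits of the optimal cost from most to least significant; it is not a line-by-line transcription of Algorithm 5.

from pysat.formula import WCNF
from pysat.pb import PBEnc
from pysat.solvers import Glucose3

def relax(wcnf: WCNF):
    top, blocking = wcnf.nv, []
    clauses = [list(c) for c in wcnf.hard]
    for clause in wcnf.soft:
        top += 1
        blocking.append(top)
        clauses.append(list(clause) + [top])
    return clauses, blocking, top

def bit_based(wcnf: WCNF):
    """Return (optimal cost, model); assumes the hard clauses are satisfiable."""
    clauses, blocking, top = relax(wcnf)
    n_bits = sum(wcnf.wght).bit_length()     # bits needed for the maximum cost
    cost = 0
    for j in range(n_bits - 1, -1, -1):      # most to least significant bit
        # can bit j of the optimum be 0, given the higher bits fixed in `cost`?
        bound = cost + (1 << j) - 1
        pb = PBEnc.leq(lits=blocking, weights=wcnf.wght, bound=bound, top_id=top)
        with Glucose3(bootstrap_with=clauses + pb.clauses) as solver:
            if not solver.solve():
                cost += 1 << j               # no: bit j of the optimal cost is 1
    # one more call to obtain an assignment witnessing the optimal cost
    pb = PBEnc.leq(lits=blocking, weights=wcnf.wght, bound=cost, top_id=top)
    with Glucose3(bootstrap_with=clauses + pb.clauses) as solver:
        solver.solve()
        return cost, solver.get_model()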

Example 3.2.

Consider $\phi$ from Example 2.1 with all the weights of the soft clauses being 1. At the beginning of the algorithm the soft clauses are relaxed, and the bit index and the number of bits are both initialized to 2. The following are the iterations BitBased-WPMaxSAT executes.

  1. The constraint is included, , , .

  2. The constraint , , , .

4 Core-guided Algorithms

As in the previous methods, core-guided (UNSAT-based) algorithms use SAT solvers iteratively to solve MaxSAT. Here, the purpose of the iterative SAT calls is to identify and relax unsatisfiable subformulas (unsatisfiable cores) of a MaxSAT instance. This method was first proposed in 2006 by Fu and Malik in [18] (see Algorithm 6). The algorithms described in this section are:

  1. Fu and Malik’s algorithm[18]

  2. WPM1[4]

  3. Improved WPM1[5]

  4. WPM2[7]

  5. WMSU1-ROR[21]

  6. WMSU3[37]

  7. WMSU4[38]

Definition 4.1 (Unsatisfiable core).

An unsatisfiable core of a CNF formula $\phi$ is a subset of $\phi$ that is unsatisfiable by itself.

Definition 4.2 (Minimum unsatisfiable core).

A minimum unsatisfiable core contains the smallest number of the original clauses required to still be unsatisfiable.

Definition 4.3 (Minimal unsatisfiable core).

A minimal unsatisfiable core is an unsatisfiable core such that any proper subset of it is not a core [15].

Modern SAT solvers provide an unsatisfiable core as a by-product of the proof of unsatisfiability. The idea in this paradigm is as follows. Given a WPMaxSAT instance $\phi = \{(h_1,\infty),\dots,(h_m,\infty),(s_1,w_1),\dots,(s_n,w_n)\}$, let $\Phi_k$ be a SAT instance that is satisfiable iff $\phi$ has an assignment with cost less than or equal to $k$. To encode $\Phi_k$, we can extend every soft clause $s_i$ with a new (auxiliary) variable $b_i$ and add the CNF conversion of the constraint $\sum_{i=1}^{n} w_i b_i \le k$. So, we have
\[
\Phi_k \;=\; \{h_1,\dots,h_m\} \;\cup\; \{\,s_1 \vee b_1,\ \dots,\ s_n \vee b_n\,\} \;\cup\; \mathrm{CNF}\!\left(\sum_{i=1}^{n} w_i b_i \le k\right).
\]

Let $k_{\mathrm{opt}}$ be the cost of the optimal assignment of $\phi$. Thus, $\Phi_k$ is satisfiable for all $k \ge k_{\mathrm{opt}}$ and unsatisfiable for all $k < k_{\mathrm{opt}}$, where $k$ may range from 0 to $\sum_{i=1}^{n} w_i$. Hence, the search for the optimal assignment corresponds to locating the transition between satisfiable and unsatisfiable $\Phi_k$. This encoding guarantees that the satisfying assignments (if any) of $\Phi_{k_{\mathrm{opt}}}$ are exactly the optimal assignments of the WPMaxSAT instance $\phi$.
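Equivalently, writing $W = \sum_{i=1}^{n} w_i$, the optimum is the smallest bound whose encoding is satisfiable:

\[
k_{\mathrm{opt}} \;=\; \min\{\, k \in \{0,1,\dots,W\} \;:\; \Phi_k \text{ is satisfiable} \,\}.
\]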

4.1 Fu and Malik’s algorithm

Fu and Malik implemented two PMaxSAT solvers, ChaffBS (which uses binary search to find the optimal cost) and ChaffLS (which uses linear search), on top of a SAT solver called zChaff [44]. Their PMaxSAT solvers participated in the first and second MaxSAT Evaluations [10]. Their method (Algorithm 6) is the basis for many WPMaxSAT solvers that came later. Notice that the input to Algorithm 6 is a PMaxSAT instance, since all the weights of the soft clauses are the same.

Input: A PMaxSAT instance $\phi = \{(h_1,\infty),\dots,(h_m,\infty),(s_1,1),\dots,(s_n,1)\}$
Output: The cost of the optimal assignment to $\phi$
1 if $\mathrm{SAT}(\{h_1,\dots,h_m\}) = \textsc{unsat}$ then
2        return $\infty$
3 $cost \leftarrow 0$   // the cost of the optimal solution, i.e., the number of clauses falsified
4 while true do
5        $(st, \phi_C, I) \leftarrow \mathrm{SAT}(\{c \mid (c,w) \in \phi\})$   // $\phi_C$ is an unsatisfiable core when $st = \textsc{unsat}$
6        if $st = \textsc{sat}$ then
7               return $cost$
8        $B \leftarrow \emptyset$
9        foreach soft clause $s \in \phi_C$ do
10              let $b_s$ be a new blocking variable
11              replace $s$ by $s \vee b_s$ in $\phi$; $B \leftarrow B \cup \{b_s\}$
12       $\phi \leftarrow \phi \cup \mathrm{CNF}(\sum_{b \in B} b = 1)$   // add the cardinality constraint as hard clauses
13       $cost \leftarrow cost + 1$
Algorithm 6 (Fu&Malik): Fu and Malik's algorithm for solving PMaxSAT.

Fu&Malik (Algorithm 6), also referred to as MSU1, begins by checking whether the hard clauses alone are unsatisfiable (line 1), and if so it terminates (line 2). Next, unsatisfiable cores are identified by iteratively calling a SAT solver on the working formula (line 5). If the working formula is satisfiable (line 6), the algorithm halts and returns the cost of the optimal assignment (line 7). If not, the algorithm relaxes each soft clause in the unsatisfiable core obtained earlier by adding a fresh blocking variable to it, collecting the new blocking variables in a set (lines 8-11). Next, constraints are added to the working formula indicating that exactly one of these variables should be true (line 12). Finally, the cost is increased by one (line 13), since one more clause must be falsified. This procedure continues until the SAT solver declares the formula satisfiable.
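A compact Python sketch of the core-guided loop is given below. It assumes the python-sat package, extracts cores through assumption literals, and encodes the "exactly one new blocking variable is true" constraint with pysat's CardEnc; the function names are ours, the sketch handles unit-weight soft clauses only, and it is not Fu and Malik's original implementation.

from pysat.card import CardEnc
from pysat.formula import WCNF
from pysat.solvers import Glucose3

def fu_malik(wcnf: WCNF):
    """Return the optimal cost of a PMaxSAT instance (all soft weights equal to 1)."""
    top = wcnf.nv
    hard = [list(c) for c in wcnf.hard]
    soft = [list(c) for c in wcnf.soft]      # soft clauses accumulate blocking vars
    selectors = []
    for _ in soft:                           # one assumption literal per soft clause
        top += 1
        selectors.append(top)

    cost = 0
    while True:
        clauses = hard + [c + [-a] for c, a in zip(soft, selectors)]
        with Glucose3(bootstrap_with=clauses) as solver:
            if solver.solve(assumptions=selectors):
                return cost                  # no more cores: the cost is optimal
            core = set(solver.get_core() or [])
        if not core:
            return None                      # the hard clauses alone are unsatisfiable
        fresh = []
        for i, a in enumerate(selectors):
            if a in core:                    # relax every soft clause in the core
                top += 1
                fresh.append(top)
                soft[i].append(top)
        # exactly one of the new blocking variables may be true (added as hard clauses)
        enc = CardEnc.equals(lits=fresh, bound=1, top_id=top)
        hard += enc.clauses
        top = max(top, enc.nv)
        cost += 1                            # one more soft clause must be falsified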

4.2 WPM1

Ansótegui, Bonet and Levy [4] extended Fu&Malik to WPMaxSAT. The resulting algorithm is called WPM1 and is described in Algorithm 7.

Input: A WPMaxSAT instance $\phi = \{(h_1,\infty),\dots,(h_m,\infty),(s_1,w_1),\dots,(s_n,w_n)\}$
Output: The optimal cost of the WPMaxSAT solution
1 if $\mathrm{SAT}(\{h_1,\dots,h_m\}) = \textsc{unsat}$ then
2        return $\infty$
3 $cost \leftarrow 0$
4 while true do
5        $(st, \phi_C, I) \leftarrow \mathrm{SAT}(\{c \mid (c,w) \in \phi\})$
6        if $st = \textsc{sat}$ then
7               return $cost$
8        if $\phi_C$ contains no soft clauses then
9               return $\infty$   // $\phi$ is unsatisfiable
10       $w_{\min} \leftarrow \min \{\, w : (s,w) \in \phi_C,\ s \text{ soft} \,\}$   // compute the minimum weight of all the soft clauses in $\phi_C$
11       $B \leftarrow \emptyset$
12       foreach soft clause $(s,w) \in \phi_C$ do
13              let $b_s$ be a new blocking variable; $B \leftarrow B \cup \{b_s\}$
14              replace $(s,w)$ by $(s, w - w_{\min})$ and $(s \vee b_s, w_{\min})$ in $\phi$
15       $\phi \leftarrow \phi \cup \mathrm{CNF}(\sum_{b \in B} b = 1)$   // add the cardinality constraint as hard clauses
16       $cost \leftarrow cost + w_{\min}$
Algorithm 7 (WPM1): The WPM1 algorithm for WPMaxSAT.

Just as in Fu&Malik, Algorithm 7 iteratively calls a SAT solver on the working formula, but without the weights (line 5). After the SAT solver returns an unsatisfiable core, the algorithm terminates if the core contains only hard clauses (lines 8-9); if it does not, the algorithm computes the minimum weight $w_{\min}$ of the soft clauses in the core (line 10). Next, the working formula is transformed by duplicating the soft clauses of the core (lines 12-14): one copy keeps the original clause with its original weight minus the minimum weight, and a second copy, augmented with a blocking variable, gets the minimum weight. Finally, the cardinality constraint on the blocking variables is added as hard clauses (line 15) and the cost is increased by the minimum weight (line 16).

WPM1 uses blocking variables in an efficient way: if an unsatisfiable core appears several times, all the copies get the same set of blocking variables. This is possible because the two formulae are MaxSAT equivalent, meaning that their minimum numbers of unsatisfied clauses are the same. However, the algorithm does not avoid using more than one blocking variable per clause. This disadvantage is eliminated by WMSU3 (described later).
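The weighted variant can be sketched along the same lines. The snippet below illustrates WPM1's core splitting (the minimum weight, the clause duplication, and a fresh exactly-one constraint per core) under the same pysat assumptions and illustrative names as the Fu&Malik sketch; it is only an illustration of Algorithm 7, not a reference implementation.

from pysat.card import CardEnc
from pysat.formula import WCNF
from pysat.solvers import Glucose3

def wpm1(wcnf: WCNF):
    """Return the optimal cost of a WPMaxSAT instance given as a WCNF."""
    top = wcnf.nv
    hard = [list(c) for c in wcnf.hard]
    soft = [(list(c), w) for c, w in zip(wcnf.soft, wcnf.wght)]

    cost = 0
    while True:
        selectors, clauses = [], list(hard)
        for c, _ in soft:                    # fresh assumption literal per soft clause
            top += 1
            selectors.append(top)
            clauses.append(c + [-top])
        with Glucose3(bootstrap_with=clauses) as solver:
            if solver.solve(assumptions=selectors):
                return cost
            core = set(solver.get_core() or [])
        in_core = {i for i, a in enumerate(selectors) if a in core}
        if not in_core:
            return None                      # the hard clauses alone are unsatisfiable
        wmin = min(soft[i][1] for i in in_core)
        fresh, new_soft = [], []
        for i, (c, w) in enumerate(soft):
            if i in in_core:
                top += 1
                fresh.append(top)
                new_soft.append((c + [top], wmin))   # relaxed copy with weight wmin
                if w > wmin:
                    new_soft.append((c, w - wmin))   # residual copy keeps the rest
            else:
                new_soft.append((c, w))
        soft = new_soft
        enc = CardEnc.equals(lits=fresh, bound=1, top_id=top)  # exactly-one, as hard clauses
        hard += enc.clauses
        top = max(top, enc.nv)
        cost += wmin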

Example 4.1.

Consider a small WPMaxSAT instance $\phi$. In the following, $b_i^j$ is the relaxation variable added to clause $i$ at the $j$th iteration. A possible execution sequence of the algorithm is:

  1. , , , , .

  2. , , , .

  3. , is an optimal assignment with