1 Introduction and Preliminaries
A Boolean variable can take one of two possible values: 0 (false) or 1 (true). A literal is a variable $x$ or its negation $\neg x$. A clause is a disjunction of literals, i.e., $C = \ell_1 \vee \cdots \vee \ell_k$. A CNF formula is a conjunction of clauses. Formally, a CNF formula $\varphi$ composed of $m$ clauses, where each clause $C_i$ is composed of $k_i$ literals, is defined as $\varphi = \bigwedge_{i=1}^{m} C_i$ where $C_i = \ell_{i,1} \vee \cdots \vee \ell_{i,k_i}$.
In this paper, a set of clauses is referred to as a Boolean formula. A truth assignment satisfies a Boolean formula if it satisfies every clause.
Given a CNF formula $\varphi$, the satisfiability problem (SAT) is deciding whether $\varphi$ has a satisfying truth assignment (i.e., an assignment to the variables of $\varphi$ that satisfies every clause). The Maximum Satisfiability (MaxSAT) problem asks for a truth assignment that maximizes the number of satisfied clauses in $\varphi$.
Many theoretical and practical problems can be encoded into SAT and MaxSAT, such as debugging [51], circuit design and scheduling the photographs an Earth-observation satellite captures [56], course timetabling [11, 45, 41, 34], software package upgrades [24], routing [58, 46], reasoning [52] and protein structure alignment in bioinformatics [50].
Let $\varphi = \varphi_h \cup \varphi_s$ be a CNF formula, where $\varphi_h$ is a set of clauses and $\varphi_s = \{(C_1, w_1), \ldots, (C_s, w_s)\}$ is a set of weighted clauses whose weights $w_i$ are natural numbers. The Weighted Partial MaxSAT (WPMaxSAT) problem asks for an assignment that satisfies all clauses in $\varphi_h$ (called hard clauses) and maximizes the sum of the weights of the satisfied clauses in $\varphi_s$ (called soft clauses).
In general, exact MaxSAT solvers follow one of two approaches: successively calling a SAT solver (sometimes called the SAT-based approach) and the branch-and-bound approach. The former converts the WPMaxSAT problem into a sequence of SAT instances, one per hypothesized maximum weight, and uses a SAT solver on this sequence to determine the actual optimum. One way to do this, given an unweighted MaxSAT instance, is to check if there is an assignment that falsifies no clauses. If such an assignment can not be found, we check if there is an assignment that falsifies only one clause. This is repeated, each time incrementing the number of clauses that are allowed to be falsified, until the SAT solver returns satisfiable, meaning that the minimum number of falsified clauses has been determined. Recent comprehensive surveys on SAT-based algorithms can be found in [43, 8].
The second approach performs a depth-first branch-and-bound search in the space of possible assignments. An evaluation function that computes a bound is applied at each search node to detect pruning opportunities. This paper surveys the satisfiability-based approach and provides an experimental investigation and comparison of the performance of both approaches on sets of benchmarks.
Because of the numerous calls to a SAT solver this approach makes, any improvement to SAT algorithms immediately benefits SAT-based MaxSAT methods. Experimental results from the MaxSAT Evaluations (http://www.maxsat.udl.cat) have shown that SAT-based solvers handle large MaxSAT instances from industrial applications better than branch-and-bound methods.
2 Linear Search Algorithms
A simple way to solve WPMaxSAT is to augment each soft clause $C_i$ with a new variable $b_i$ (called a blocking variable), and then add a constraint (specified in CNF) saying that the sum of the weights of the falsified soft clauses must be less than a given value $k$. Next, the formula (without the weights) together with the constraint is sent to a SAT solver to check whether or not it is satisfiable. If so, the cost of the optimal solution has been found and the algorithm terminates. Otherwise, $k$ is updated and the process continues. The algorithm can start searching for the optimal cost from an upper bound initialized with the maximum possible cost (i.e., the sum of the weights of the soft clauses) and decrease it down to the optimal cost, or it can start from a lower bound $k = 0$ and increase it up to the optimal cost. Solvers that employ the former approach are called satisfiability-based solvers (not to be confused with the name of the general method), while the ones that follow the latter are called UNSAT-based solvers. A cost of 0 means all the soft clauses are satisfied, and a cost equal to the sum of the weights means all the soft clauses are falsified.
Algorithm 1 (LinearUNSAT) employs the latter method, maintaining a lower bound $k$ initialized to 0 (line 1).
Next, the algorithm relaxes each soft clause with a new blocking variable in lines 2-4. The formula now contains each soft clause augmented with a new blocking variable. The while loop in lines 5-9 sends the clauses of the formula (without the weights) to a SAT solver (line 6). If the SAT solver returns satisfiable, then LinearUNSAT terminates returning a solution (lines 7-8). Otherwise, the lower bound is updated and the loop continues until the SAT solver returns satisfiable. The update function in line 9 advances the lower bound either by simply incrementing it or by other means that depend on the distribution of the weights of the input formula. Later in this paper we will see how the subset sum problem can provide a possible implementation of this update. Note that it can be inefficient if $k$ changes by one in each iteration. Consider a WPMaxSAT formula with five soft clauses having the weights 1, 1, 1, 1 and 100. The cost of the optimal solution can not be anything other than 0-4 and 100-104. Thus, assigning $k$ any of the values 5-99 is unnecessary and will result in a large number of iterations.
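The UNSAT-based loop can be sketched as follows. This is a minimal Python illustration, not the paper's pseudocode: a brute-force oracle stands in for the SAT solver, and it checks the weight bound directly instead of encoding the cardinality constraint in CNF; all function names are mine. Clauses use DIMACS-style integer literals (a positive integer means the variable is true, a negative one means it is negated).

```python
from itertools import product

def solve_with_bound(hard, soft, n, k):
    """Brute-force stand-in for the SAT call: return an assignment over
    variables 1..n satisfying all hard clauses whose falsified soft
    clauses have total weight at most k, or None if no such assignment exists."""
    def sat_clause(assign, clause):
        return any(assign[abs(l) - 1] == (l > 0) for l in clause)
    for assign in product([False, True], repeat=n):
        if not all(sat_clause(assign, c) for c in hard):
            continue
        cost = sum(w for c, w in soft if not sat_clause(assign, c))
        if cost <= k:
            return assign
    return None

def linear_unsat(hard, soft, n):
    """UNSAT-based linear search: raise the lower bound k from 0 until
    the first satisfiable call; that k is the optimal cost."""
    k = 0
    while True:
        model = solve_with_bound(hard, soft, n, k)
        if model is not None:
            return k, model
        k += 1  # the update step; smarter jumps (e.g. over subset sums) also work
```

For instance, with the hard clause $(x_1 \vee x_2)$ and unit-weight soft clauses $(\neg x_1)$ and $(\neg x_2)$, the first call (bound 0) fails and the second (bound 1) succeeds, so the optimal cost is 1.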
Example 2.1.
Let $\varphi$ be a WPMaxSAT instance. If we run LinearUNSAT on $\varphi$, the soft clauses will be relaxed and $k$ is initialized to 0. The sequence of iterations is:

The constraint is included, , .

The constraint is included, , .

The constraint is included, , .

The constraint is included, , .

The constraint is included. The SAT solver returns a satisfying assignment, which leads to a WPMaxSAT solution with cost 20 if we ignore the values of the blocking variables.
The next algorithm, LinearSAT (algorithm 2), describes the SAT-based technique. In each iteration of algorithm 2 except the last, the formula is satisfiable; the cost of the optimal solution is found immediately after the transition from a satisfiable to an unsatisfiable instance. LinearSAT begins by initializing the upper bound to one plus the sum of the weights of the soft clauses (line 1). The while loop (lines 4-8) continues until the formula becomes unsatisfiable (line 6), at which point the algorithm returns a WPMaxSAT solution and terminates (line 7). As long as the formula is satisfiable, it is sent to the SAT solver along with the constraint ensuring that the sum of the weights of the falsified soft clauses is less than the upper bound (line 5), and the upper bound is updated to the sum of the weights of the soft clauses falsified by the assignment returned by the SAT solver (line 8).
Note that updating the upper bound to the cost of the assignment just returned is more efficient than simply decreasing the upper bound by one, because it uses fewer iterations and thus the problem is solved with fewer SAT calls.
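The SAT-based variant can be sketched in the same style. Again this is a minimal illustration under my own naming, with a brute-force oracle in place of a SAT solver and the weight bound checked directly rather than encoded in CNF; the key point is the jump of the upper bound to the cost of each model found.

```python
from itertools import product

def clause_sat(assign, clause):
    # DIMACS-style literals: positive = variable true, negative = negated
    return any(assign[abs(l) - 1] == (l > 0) for l in clause)

def solve_below(hard, soft, n, ub):
    """Return (assignment, cost) with all hard clauses satisfied and
    falsified-soft weight strictly below ub, or None (brute-force oracle)."""
    for assign in product([False, True], repeat=n):
        if all(clause_sat(assign, c) for c in hard):
            cost = sum(w for c, w in soft if not clause_sat(assign, c))
            if cost < ub:
                return assign, cost
    return None

def linear_sat(hard, soft, n):
    """SAT-based linear search: tighten the upper bound to the cost of each
    model found; the last model before an unsatisfiable call is optimal."""
    ub = sum(w for _, w in soft) + 1
    best = None
    while True:
        found = solve_below(hard, soft, n, ub)
        if found is None:
            return ub, best        # best is None only if the hard clauses are unsatisfiable
        best, ub = found           # jump straight down to the model's cost
```

On the same toy instance as before (hard $(x_1 \vee x_2)$, unit-weight soft $(\neg x_1)$, $(\neg x_2)$), the first call finds a model of cost 1, and the second call at bound 1 fails, so cost 1 is returned after only two SAT calls.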
Example 2.2.
If we run LinearSAT on $\varphi$ from the previous example, the soft clauses will be relaxed and the upper bound is initialized to one plus the sum of the weights of the soft clauses. The sequence of iterations is:

The constraint is included, , , .

The constraint is included, , , .

The constraint is included, , , .

The constraint is included and the formula is now unsatisfiable. The assignment from the previous step is therefore a solution to $\varphi$ with cost 20 if we ignore the values of the blocking variables.
3 Binary Search-based Algorithms
The number of iterations a linear search algorithm for WPMaxSAT can take is linear in the sum of the weights of the soft clauses; thus, in the worst case it makes $O(\sum_i w_i)$ calls to the SAT solver. Since we are searching for a value (the optimal cost) within a known range (from 0 to the sum of the weights), binary search can be used, which requires fewer iterations than linear search. Algorithm 3 searches for the cost of the optimal assignment using binary search.
BinSWPMaxSAT begins by checking the satisfiability of the hard clauses (line 1) before beginning the search for the solution. If the SAT solver returns unsatisfiable (line 2), BinSWPMaxSAT returns the empty assignment and terminates (line 3). The algorithm maintains a lower bound and an upper bound, initialized respectively to $-1$ and one plus the sum of the weights of the soft clauses (lines 4-5). The soft clauses are augmented with blocking variables (lines 6-8). At each iteration of the main loop (lines 9-16), the middle value is set to the average of the lower and upper bounds, and a constraint is added requiring the sum of the weights of the relaxed soft clauses to be less than or equal to this middle value. The clauses describing this constraint are sent to the SAT solver along with the clauses of the formula (line 11). If the SAT solver returns satisfiable (line 12), then the cost of the optimal solution is at most the middle value, and the upper bound is updated (line 14). Otherwise, the algorithm looks for the optimal cost above the middle value, and so the lower bound is updated (line 16). The main loop continues until the two bounds meet, and the number of iterations BinSWPMaxSAT executes is proportional to $\log(\sum_i w_i)$, a considerably lower complexity than that of linear search methods.
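The bound-halving loop can be sketched as follows, under the same assumptions as the earlier sketches (brute-force oracle instead of a SAT solver, weight bound checked directly, names mine).

```python
from itertools import product

def clause_sat(assign, clause):
    return any(assign[abs(l) - 1] == (l > 0) for l in clause)

def solve_at_most(hard, soft, n, bound):
    """Brute-force stand-in for the SAT call: assignment with all hard
    clauses satisfied and falsified-soft weight <= bound, or None."""
    for assign in product([False, True], repeat=n):
        if all(clause_sat(assign, c) for c in hard):
            if sum(w for c, w in soft if not clause_sat(assign, c)) <= bound:
                return assign
    return None

def binary_search_maxsat(hard, soft, n):
    """Binary search on the optimal cost; O(log sum w_i) oracle calls."""
    if solve_at_most(hard, [], n, 0) is None:
        return None, None           # the hard clauses alone are unsatisfiable
    lb, ub = -1, sum(w for _, w in soft)
    model = solve_at_most(hard, soft, n, ub)   # always succeeds here
    while ub - lb > 1:
        mid = (lb + ub) // 2
        found = solve_at_most(hard, soft, n, mid)
        if found is not None:
            ub, model = mid, found  # the optimum is <= mid
        else:
            lb = mid                # the optimum is > mid
    return ub, model
```

The invariant is that the optimum always lies in the half-open interval (lb, ub], so when the interval shrinks to a single value, ub is the optimal cost and model witnesses it.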
In the following example, assigns to .
Example 3.1.
Consider $\varphi$ in example 2.1 with all the weights of the soft clauses set to 1. The bounds are initialized as described above. The following is the sequence of iterations algorithm 3 executes.

, the constraint is included, , , .

, the constraint is included, , , , . The assignment is indeed an optimal one, falsifying four clauses.
It is often stated that a binary search algorithm performs better than linear search. Although this is true most of the time, there are instances for which linear search is faster than binary search. Let $k$ be the sum of the weights of the soft clauses falsified by the assignment returned by the SAT solver in the first iteration. If $k$ is indeed the optimal cost, linear search methods would discover this fact in the next iteration, while binary search ones would take a logarithmic number of iterations to declare $k$ the optimal cost. In order to benefit from both search methods, An et al. [3] developed a PMaxSAT algorithm called QMaxSAT (version 0.4) that alternates between linear search and binary search (see algorithm 4).
Algorithm 4 begins by checking that the set of hard clauses is satisfiable (line 1). If not, then the algorithm returns the empty assignment and terminates (line 3). Next, the soft clauses are relaxed (lines 4-6) and the lower and upper bounds are initialized respectively to $-1$ and one plus the sum of the weights of the soft clauses (lines 7-8). BinLinWPMaxSAT has two execution modes, binary and linear. The mode of execution is initialized in line 9 to binary search. At each iteration of the main loop (lines 10-27), the SAT solver is called on the clauses of the formula with the cardinality constraint bounded by the mid point (line 12) if the current mode is binary, or by the upper bound if the mode is linear (line 14). If the formula is satisfiable (line 16), the upper bound is updated. Otherwise, the lower bound is updated to the mid point. At the end of each iteration, the mode of execution is flipped (lines 24-27).
Since the cost of the optimal solution is an integer, it can be represented as an array of bits. Algorithm 5 uses this fact to determine the solution bit by bit. BitBasedWPMaxSAT starts from the most significant bit and at each iteration it moves one bit closer to the least significant bit, at which point the optimal cost is found.
At the beginning of the algorithm, as in the previous ones, the satisfiability of the hard clauses is checked and the soft clauses are relaxed. The sum of the weights of the soft clauses is an upper bound on the cost, so it is computed to determine the number of bits needed to represent the optimal solution (line 7). The index of the current bit being considered is initialized to the most significant position (line 7), and the value of the solution being constructed is initialized (line 8). The main loop (lines 10-20) terminates when it passes the least significant bit. At each iteration, the SAT solver is called on the formula with a constraint saying that the sum of the weights of the falsified soft clauses must be less than the current candidate value (line 11). If the SAT solver returns satisfiable (line 12), the sum of the weights of the soft clauses falsified by the current assignment is computed and the set of bits needed to represent that number is determined as well (line 14); the index of the current bit is decreased to the next position that is set in that number (line 15). If such an index does not exist, the index drops below zero and in the following iteration the algorithm terminates. On the other hand, if the SAT solver returns unsatisfiable, the search continues toward the least significant bit by decrementing the index (line 19), and since the optimal cost is no smaller than the current candidate value, the candidate is adjusted accordingly (line 20).
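The paper's algorithm 5 additionally reuses the cost of each model to skip bits; the sketch below keeps only the core idea, fixing the optimum one bit at a time from the most significant side. As before, this is my own simplified illustration with a brute-force oracle in place of the SAT call.

```python
from itertools import product

def clause_sat(assign, clause):
    return any(assign[abs(l) - 1] == (l > 0) for l in clause)

def solve_at_most(hard, soft, n, bound):
    """Brute-force stand-in for the SAT call."""
    for assign in product([False, True], repeat=n):
        if all(clause_sat(assign, c) for c in hard):
            if sum(w for c, w in soft if not clause_sat(assign, c)) <= bound:
                return assign
    return None

def bit_based_maxsat(hard, soft, n):
    """Determine the optimal cost bit by bit, from the most significant
    bit of sum(w_i) down to the least significant one. The invariant is
    that the optimum is always >= v."""
    total = sum(w for _, w in soft)
    v = 0                                   # bits of the optimum fixed so far
    for i in range(total.bit_length() - 1, -1, -1):
        # ask: is there an assignment of cost <= v + 2^i - 1 ?
        if solve_at_most(hard, soft, n, v + (1 << i) - 1) is None:
            v += 1 << i                     # no: bit i of the optimum is 1
    return v, solve_at_most(hard, soft, n, v)   # retrieve an optimal model
```

Each query asks whether the optimum fits below the value obtained by leaving the current bit at 0 and setting all lower bits to 1; an unsatisfiable answer forces the current bit to 1. This makes one oracle call per bit, matching the logarithmic complexity of binary search.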
Example 3.2.
Consider from example 2.1 with all the weights of the soft clauses being 1. At the beginning of the algorithm, the soft clauses are relaxed and the formula becomes . Also, the variables , and are initialized to 2, 2 and respectively. The following are the iterations BitBasedWPMaxSAT executes.

The constraint is included, , , .

The constraint , , , .
4 Core-guided Algorithms
As in the previous methods, core-guided methods use SAT solvers iteratively to solve MaxSAT. Here, the purpose of the iterative SAT calls is to identify and relax unsatisfiable subformulas (unsatisfiable cores) of a MaxSAT instance. This method was first proposed in 2006 by Fu and Malik in [18] (see algorithm 6). The algorithms described in this section rely on the following definitions.
Definition 4.1 (Unsatisfiable core).
An unsatisfiable core of a CNF formula $\varphi$ is a subset of the clauses of $\varphi$ that is unsatisfiable by itself.
Definition 4.2 (Minimum unsatisfiable core).
A minimum unsatisfiable core is an unsatisfiable core that contains the smallest number of the original clauses required to still be unsatisfiable.
Definition 4.3 (Minimal unsatisfiable core).
A minimal unsatisfiable core is an unsatisfiable core such that any proper subset of it is not a core [15].
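To make the distinction concrete, the standard deletion-based procedure extracts a minimal (not necessarily minimum) core: try dropping each clause and keep the drop whenever the remainder stays unsatisfiable. This sketch is not from the paper; it uses a brute-force satisfiability check and is only practical for tiny formulas.

```python
from itertools import product

def satisfiable(clauses, n):
    """Brute-force satisfiability check over variables 1..n
    (DIMACS-style integer literals)."""
    return any(
        all(any(a[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        for a in product([False, True], repeat=n)
    )

def minimal_core(clauses, n):
    """Deletion-based minimal unsatisfiable core: the result is minimal
    (every proper subset is satisfiable) but not necessarily minimum."""
    assert not satisfiable(clauses, n)
    core = list(clauses)
    i = 0
    while i < len(core):
        trial = core[:i] + core[i + 1:]
        if not satisfiable(trial, n):
            core = trial   # clause i is not needed for unsatisfiability
        else:
            i += 1         # clause i is essential; keep it
    return core
```

For example, from the formula $\{(x_1), (\neg x_1), (x_1 \vee x_2), (x_2)\}$ the procedure strips the last two clauses and returns the core $\{(x_1), (\neg x_1)\}$, which here happens to be minimum as well.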
Modern SAT solvers provide an unsatisfiable core as a byproduct of the proof of unsatisfiability. The idea in this paradigm is as follows. Given a WPMaxSAT instance $\varphi$, let $F_k$ be a SAT instance that is satisfiable iff $\varphi$ has an assignment with cost less than or equal to $k$. To encode $F_k$, we can extend every soft clause $C_i$ with a new (auxiliary) variable $b_i$ and add the CNF conversion of the constraint $\sum_i w_i b_i \le k$. So, we have
$$F_k = \{C \in \varphi : C \text{ hard}\} \cup \{C_i \vee b_i : (C_i, w_i) \text{ soft}\} \cup \mathrm{CNF}\Big(\textstyle\sum_i w_i b_i \le k\Big).$$
Let $opt$ be the cost of the optimal assignment of $\varphi$. Thus, $F_k$ is satisfiable for all $k \ge opt$, and unsatisfiable for all $k < opt$, where $k$ may range from 0 to the sum of the weights of the soft clauses. Hence, the search for the optimal assignment corresponds to locating the transition between unsatisfiable $F_{opt-1}$ and satisfiable $F_{opt}$. This encoding guarantees that all the satisfying assignments (if any) of $F_{opt}$ form the set of optimal assignments of the WPMaxSAT instance $\varphi$.
4.1 Fu and Malik’s algorithm
Fu and Malik implemented two PMaxSAT solvers, ChaffBS (which uses binary search to find the optimal cost) and ChaffLS (which uses linear search), on top of a SAT solver called zChaff [44]. Their PMaxSAT solvers participated in the first and second MaxSAT Evaluations [10]. Their method (algorithm 6) is the basis for many WPMaxSAT solvers that came later. Notice that the input to algorithm 6 is a PMaxSAT instance, since all the weights of the soft clauses are the same.
Fu&Malik (algorithm 6, also referred to as MSU1) begins by checking whether the hard clauses alone are unsatisfiable (line 1), and if so it terminates (line 2). Next, unsatisfiable cores are identified by iteratively calling a SAT solver on the working formula (line 6). If the working formula is satisfiable (line 7), the algorithm halts returning the cost of the optimal assignment (line 8). If not, the algorithm starts its second phase by relaxing each soft clause in the unsatisfiable core obtained earlier, adding to it a fresh variable, in addition to saving the index of each relaxed clause (lines 11-14). Next, constraints are added to the new working formula indicating that exactly one of the fresh variables should be true (line 15). Finally, the cost is increased by one (line 16), since at least one clause of the core must be falsified. This procedure continues until the SAT solver declares the formula satisfiable.
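The loop above can be sketched as follows. This is a simplified illustration of the Fu&Malik style for unit weights, not their implementation: a brute-force model finder replaces the SAT solver, and for brevity the whole current working formula stands in for the extracted core (valid, since an unsatisfiable formula is a core of itself, though far less efficient than real core extraction). The exactly-one constraint is encoded with one at-least-one clause plus pairwise at-most-one clauses.

```python
from itertools import product

def find_model(clauses, n):
    """Brute-force model finder over variables 1..n (DIMACS literals)."""
    for a in product([False, True], repeat=n):
        if all(any(a[abs(l) - 1] == (l > 0) for l in c) for a_ok in [True] for c in clauses):
            return a
    return None

def fu_malik(hard, soft, n):
    """Core-guided loop in the style of Fu&Malik (unit weights): while the
    working formula is unsatisfiable, relax every soft clause of the 'core'
    with a fresh variable, add an exactly-one constraint over the fresh
    variables, and increase the cost by one."""
    soft = [list(c) for c in soft]
    cost = 0
    while True:
        model = find_model(hard + soft, n)
        if model is not None:
            return cost, model
        blocking = []
        for c in soft:                  # relax every soft clause of the "core"
            n += 1                      # fresh blocking variable
            blocking.append(n)
            c.append(n)
        hard = hard + [blocking]        # at least one blocking variable is true
        hard += [[-a, -b] for i, a in enumerate(blocking)
                 for b in blocking[i + 1:]]   # pairwise at-most-one
        cost += 1
```

On the formula with soft clauses $(x_1)$ and $(\neg x_1)$, one relaxation round suffices and the optimal cost 1 is returned.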
4.2 Wpm1
Ansótegui, Bonet and Levy [4] extended Fu&Malik's algorithm to WPMaxSAT. The resulting algorithm, WPM1, is described in algorithm 7.
Just as in Fu&Malik, algorithm 7 calls a SAT solver iteratively with the working formula, but without the weights (line 5). After the SAT solver returns an unsatisfiable core, the algorithm terminates if the core contains only hard clauses; if it does not, the algorithm computes the minimum weight $w_{min}$ of the clauses in the core (line 9). Next, the working formula is transformed by duplicating the core (line 13): one copy has the clauses with their original weight minus the minimum weight, and the second copy has the clauses augmented with blocking variables and carrying the minimum weight. Finally, the cardinality constraint on the blocking variables is added as hard clauses (line 18) and the cost is increased by the minimum weight (line 19).
WPM1 uses blocking variables in an efficient way: if an unsatisfiable core appears $t$ times, all the copies get the same set of blocking variables. This is possible because the two formulae are MaxSAT equivalent, meaning that the minimum number of falsified clauses of the two is the same. However, the algorithm does not avoid using more than one blocking variable per clause. This disadvantage is eliminated by WMSU3 (described later).
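The core-duplication step of WPM1 can be shown in isolation. This sketch is my own rendering of that single transformation, not algorithm 7 itself; the (clause, weight) pair representation and all names are assumptions, and variables are DIMACS-style integers so the next free variable index is passed in explicitly.

```python
def wpm1_split(soft, core_idx, next_var):
    """WPM1's transformation step: each soft clause in the core is
    duplicated, one copy keeping weight w - w_min (dropped if zero), the
    other getting a fresh blocking variable and weight w_min. Returns the
    new soft list, the blocking variables (over which the exactly-one
    constraint would be added as hard clauses), w_min, and the next free
    variable index."""
    w_min = min(soft[i][1] for i in core_idx)
    new_soft, blocking = [], []
    for i, (clause, w) in enumerate(soft):
        if i in core_idx:
            if w > w_min:                       # copy with the residual weight
                new_soft.append((clause, w - w_min))
            blocking.append(next_var)           # relaxed copy with weight w_min
            new_soft.append((clause + [next_var], w_min))
            next_var += 1
        else:
            new_soft.append((clause, w))
    return new_soft, blocking, w_min, next_var
```

Note that the total soft weight is preserved by the split, which is what makes the transformed formula MaxSAT equivalent to the original up to the cost $w_{min}$ accounted for separately.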
Example 4.1.
Consider the following WPMaxSAT instance. In what follows, $b_i^j$ is the relaxation variable added to clause $C_i$ at the $j$th iteration. A possible execution sequence of the algorithm is:

, , , , .

, , , .

, is an optimal assignment with the accumulated cost.