Unsatisfiable Cores and Lower Bounding for Constraint Programming

25 August 2015 · Nicholas Downing et al., The University of Melbourne

Constraint Programming (CP) solvers typically tackle optimization problems by repeatedly finding solutions to a problem while placing tighter and tighter bounds on the solution cost. This approach is somewhat naive, especially for soft-constraint optimization problems in which the soft constraints are mostly satisfied. Unsatisfiable-core approaches to solving soft-constraint problems in SAT (e.g. MAXSAT) force all soft constraints to be hard initially. When solving fails, they return an unsatisfiable core: a set of soft constraints that cannot hold simultaneously. These are reverted to soft and solving continues. Since lazy clause generation solvers can also return unsatisfiable cores, we can adapt this approach to constraint programming. We adapt the original MAXSAT unsatisfiable-core solving approach to be usable for constraint programming and define a number of extensions. Experimental results show that our methods are beneficial on a broad class of CP optimization benchmarks involving soft constraints, cardinality, or preferences.




1 Introduction

Earlier work on unsatisfiable cores for Maximum Satisfiability (MAXSAT) has shown that it is advantageous to consider soft constraints to be hard constraints initially, solve the problem using a modern SAT solver, and use the resulting evidence of infeasibility to see which (temporarily hard) constraints are conflicting with each other, and soften them again only as necessary [fu].

In this paper we extend the unsatisfiable cores algorithm from MAXSAT to Constraint Programming (CP). CP handles soft-constraint problems as minimization problems where the objective is a count of violations, the counts being derived from either reified primitive constraints (whose enforcement is controlled by an auxiliary variable) or soft global constraints (for examples of soft global constraints and their propagation algorithms see Van Hoeve [vanhoeve]).

One reason to expect that unsatisfiable cores will help in CP is that propagation solving relies on eliminating impossible (partial) solutions; unfortunately, when most constraints are soft, most partial solutions cannot be ruled out definitively, so propagation has little effect. Making as many constraints as possible hard should improve the propagation behaviour. Conversely, for the approach to be successful the solver needs to be able to prove infeasibility: if this is easy to do repeatedly, then unsatisfiable cores will be highly effective, but if it requires a lot of search, then it should be put off for as long as possible.

We work in the context of a Lazy Clause Generation (LCG) solver, because the LCG solver can easily ‘explain’ why failures occurred, which is useful because it tells us which (temporarily) hard constraints should be made soft again. LCG is a hybrid approach to CP that uses a traditional ‘propagation and search’ constraint solver as the outer layer which guides the solution process, plus an inner layer which lazily decomposes CP to Boolean satisfiability (SAT) and applies learning SAT solver technology to reduce search [moskewicz, ohrimenko].

The contributions of this paper are:

  • We translate the basic unsatisfiable core approach of SAT to CP solving.

  • We extend the basic unsatisfiable core approach to a nested version which more aggressively makes soft constraints hard.

  • We discuss how we can use the unsatisfiable cores generated to improve the estimation of the objective function in CP search.

  • We give experiments showing that for some CP optimization problems the unsatisfiable-core approach is significantly better than branch and bound.

2 Lazy Clause Generation

We give a brief description of propagation-based solving and LCG; for more details see [ohrimenko]. We consider problems consisting of constraints C over integer variables x_1, …, x_n, each with a given finite initial domain D_init(x_i). A feasible solution is a valuation θ to the variables which satisfies all constraints c ∈ C and lies in the initial domain, i.e. θ(x_i) ∈ D_init(x_i) for each i.

A propagation solver maintains a domain restriction D(x_i) ⊆ D_init(x_i) for each variable and considers only solutions that lie within D. Solving interleaves propagation, which repeatedly applies propagators to remove unsupported values, and search, which splits the domain of some variable and considers the resulting sub-problems. This continues until all variables are fixed (success) or failure is detected (backtrack and try another subproblem). A singleton domain D, where every variable is fixed with D(x_i) = {v_i}, corresponds to the valuation θ with θ(x_i) = v_i.
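The propagate-and-search loop described above can be sketched in a few lines. The set-based domain representation, the propagator protocol (False on wipe-out, "changed" on pruning), and the neq example are illustrative assumptions, not the paper's implementation.

```python
def propagate(domains, propagators):
    """Run all propagators to a fixed point; False signals a domain wipe-out."""
    changed = True
    while changed:
        changed = False
        for prop in propagators:
            result = prop(domains)          # prunes domains in place
            if result is False:
                return False                # some domain became empty: failure
            changed = changed or result == "changed"
    return True

def search(domains, propagators):
    """Interleave propagation with domain splitting; return a solution or None."""
    if not propagate(domains, propagators):
        return None                         # failed subproblem: backtrack
    if all(len(d) == 1 for d in domains.values()):
        return {x: next(iter(d)) for x, d in domains.items()}  # all fixed
    x = next(v for v, d in domains.items() if len(d) > 1)      # pick a variable
    v = min(domains[x])
    for sub in ({x: {v}}, {x: domains[x] - {v}}):              # split its domain
        trial = {**{y: set(d) for y, d in domains.items()}, **sub}
        sol = search(trial, propagators)
        if sol is not None:
            return sol
    return None

def neq(a, b):
    """Illustrative propagator for a != b: once one side is fixed,
    prune that value from the other side's domain."""
    def prop(domains):
        changed = False
        for p, q in ((a, b), (b, a)):
            if len(domains[p]) == 1:
                v = next(iter(domains[p]))
                if v in domains[q]:
                    domains[q] = domains[q] - {v}
                    changed = True
                    if not domains[q]:
                        return False
        return "changed" if changed else None
    return prop
```

For example, three pairwise-different variables over {0, 1} fail, while allowing a third value for one of them yields a solution.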

Lazy clause generation is implemented by introducing Boolean variables for each potential value of a CP variable, named [x = v], and for each bound, [x ≥ v]. Negating them gives [x ≠ v] and [x ≤ v − 1]. Fixing such a literal modifies D to make the corresponding fact true, and vice versa. Hence the literals give an alternate Boolean representation of the domain, which supports reasoning. Lazy clause generation makes use of clauses to record nogoods. A clause is a disjunction of literals, which we will often treat as a set of literals.
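As a sketch, this literal encoding can be generated mechanically; the naming scheme and index allocation below are illustrative only, not the paper's data structures.

```python
def encode(name, lo, hi):
    """Allocate a fresh Boolean index for each LCG literal of an integer
    variable with domain lo..hi: one [x = v] per value, one [x >= v] per
    bound (the trivially true [x >= lo] is skipped).  Negations [x != v]
    and [x <= v-1] are the complements of these Booleans."""
    lits = {}
    idx = 0
    for v in range(lo, hi + 1):
        lits[f"[{name} = {v}]"] = idx
        idx += 1
    for v in range(lo + 1, hi + 1):        # [x >= lo] is trivially true
        lits[f"[{name} >= {v}]"] = idx
        idx += 1
    return lits
```

A variable over 1..3 thus gets three equality literals and two bound literals.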

1:function LCG(C, D, obj)	% initial constraints, domains and objective
2:     S := empty stack	% empty stack of domains, one per decision level
3:     θ_best := none	% best solution found so far, initially none
4:     loop
5:         D := Propagate(C, D)	% the call also updates the implication graph and levels, not shown explicitly
7:         if D has an empty domain then
8:              % failure, with nogood N = {l_1, …, l_k} where each l_i is false in D
9:              if decision_level = 0 then	% conflict occurred at level 0
10:                  return θ_best	% no further improvement possible
11:             else
12:                  N := Analyze(N)	% make 1UIP nogood
13:                  pop S until reaching the highest decision level of the literals in N, or 0
14:                  D := pop from S	% backjump
15:                  C := C ∪ Learn(N)	% add redundant constraint to problem
16:             end if
17:         else if D is a singleton domain then
18:              % found solution θ; record θ_best := θ and restart with tighter objective constraint obj < obj(θ)
20:              pop S until reaching decision level 0
21:              D := pop from S
23:         else
24:              % reached a fixed point, execute the user's programmed search strategy
25:              push D onto S; D := Decide(D)
27:         end if
28:     end loop
29:end function
30:function Analyze(N)
32:     while there are multiple literals l ∈ N with level(l) = conflict_level do
33:         let l be the most recent unprocessed propagation at conflict_level
34:         if no such propagations remain unprocessed then break end if
35:         if l ∈ N then N := (N \ {l}) ∪ reason(l) end if
36:     end while
37:     return N
39:end function
40:function Learn(N)
41:     return {clause(N)}
42:end function
Algorithm 1 CP branch-and-bound with clause learning and backjumping

The high-level solving algorithm LCG, including propagation, search, and nogood generation, is shown as Algorithm 1. It is a standard CP branch-and-bound search, except that propagation (Propagate) must return a nogood as shown, explaining any failures that are detected by propagation. Propagation must also record an implication graph showing the reasons for each propagation step, and, for each literal which is fixed, its decision level (the depth of the stack S at the time of fixing). Conflict analysis derives new redundant constraints to avoid repeated search, and, as a side effect, modifies the backtracking procedure to backjump or restart solving at an appropriate point close to the failure [moskewicz].

The Analyze procedure reduces the information from the failure nogood N, and the implication graph, into a 1UIP nogood, which can be learnt as a new redundant constraint. It considers propagations at the conflict level, which is the highest level of any literal in N. Working in reverse chronological order, for each propagation whose propagated literal l occurs in N, this literal is replaced by its reason, giving N := (N \ {l}) ∪ reason(l). The process stops when there is at most one literal in N whose decision level is the conflict level, leaving a clause which propagates to fix that literal to its opposite value.
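A minimal sketch of this backward resolution, under the simplifying assumption that the implication graph is already flattened into reason and level maps over literal names (all names here are illustrative):

```python
def analyze_1uip(nogood, reason, level, trail):
    """Resolve the failure nogood backwards along the trail until at most one
    literal at the conflict (highest) level remains: the 1UIP nogood.
    `reason` maps each propagated literal to the set of literals implying it;
    decisions have no entry.  `trail` lists literals in chronological order."""
    nogood = set(nogood)
    conflict_level = max(level[l] for l in nogood)
    for lit in reversed(trail):                    # reverse chronological order
        at_cl = [l for l in nogood if level[l] == conflict_level]
        if len(at_cl) <= 1:
            break                                  # 1UIP reached
        if lit in nogood and lit in reason:        # replace lit by its reason
            nogood = (nogood - {lit}) | set(reason[lit])
    return nogood
```

For instance, two conflicting propagations that both stem from a single decision resolve back to a unit nogood containing only that decision.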

3 Soft Constraint Optimization

Soft constraints are constraints which should be respected if possible. When not all soft constraints can hold simultaneously we attach a cost to each violation. In the resulting optimization problem the overall cost is to be minimized. Soft constraints may be intensional or extensional. An intensional constraint is an equation or predicate capturing the desired relationship between variables, whereas an extensional constraint is written as a table with columns for the variables of interest, explicitly listing the allowed or disallowed tuples.
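For a concrete contrast, the same constraint x + y ≤ 3 over domains 0..3 can be stated intensionally as a predicate or extensionally as a table of allowed tuples (the constraint and ranges are illustrative):

```python
# Intensional form: an equation/predicate over the variables.
def intensional(x, y):
    return x + y <= 3

# Extensional form: an explicit table of allowed (x, y) tuples.
extensional = {(x, y) for x in range(4) for y in range(4) if x + y <= 3}
```

The two representations agree on every pair, but the table grows with the domain sizes while the predicate does not.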

Specialized solvers have been highly successful for soft-constraint problems in extensional form. All of these solvers attempt to discover conflicts between soft constraints, or unsatisfiable soft constraints, by using what are essentially lookahead approaches, followed by appropriate reformulation that exposes the increased lower bound on solution cost due to the conflict or violation.

For WCSP, in which each extensional table contains a weight column giving the cost to be paid if the row holds, the best solver seems to be toolbar/toulbar2 [degivry, larrosa]. It is based on a branch-and-bound search with consistency notions, where (loosely speaking) a variable is consistent if the costs of the minimum cost value(s) have been subtracted from the tables involving the variable and moved into the lower bound, thus fathoming unpromising branches.

For MAXSAT, good solvers in a recent evaluation [maxsat] included akmaxsat [kuegel] and MaxSatz [lin] variants. Essentially they use lookahead, with unit propagation and failed literal detection, to improve the lower bound [li2, li]. In restricted cases they use MAXSAT resolution, in which conflicting clauses are replaced by a unified clause plus compensation clauses [bonet]. WCSP solvers are also highly effective on MAXSAT, since MAXSAT is a special case of WCSP.

Recently there has also been considerable interest in decomposing MAXSAT to SAT, usually with an unsatisfiable-core approach [ansotegui, fu, marquessilva, marquessilva3]. Because they use learning instead of lookahead (and other improvements such as activity-based search [moskewicz]), they have a considerable advantage over the previously-described approaches. On the other hand they do not employ reformulation, and not all problems are suitable for unsatisfiable-core searches.

Pseudo-Boolean Optimization (PBO) is also promising for MAXSAT, which is a special case of PBO. In particular the Weighted Boolean Optimization (WBO) framework [manquinho] is an application of PBO to soft-constraint problems, using some of the specialized techniques discussed above. Another option is decomposition to SAT via the PBO solver MiniSAT+ [een], which could be useful if unsatisfiability-based methods aren’t applicable to a particular problem.

In this research we extend certain of the above techniques to intensional soft constraints. Modelling with intensional constraints has many advantages: (i) it is much easier, since constraints are expressed in a natural way; (ii) it handles more constraints, since decomposition to extensional form is not always practical; and (iii) it can be more efficient, since propagation is a reasoning task as opposed to an expensive table traversal. It also has some disadvantages: (i) propagators must be implemented for each type of intensional constraint, and (ii) due to the many ways that constraints can interact, reformulation is difficult.

We consider solving combinatorial constrained optimization problems with pseudo-Boolean objective (COPPBO). A COPPBO consists of a vector x of general variables x_1, …, x_n, a vector b of Boolean variables b_1, …, b_m which appear in the objective, an initial finite domain D_init, a set of constraints C, and an objective obj = Σ_j w_j b_j to be minimized, where w consists of positive constant weighting factors.¹ ² COPPBO problems encompass MAXSAT, partial MAXSAT, weighted partial MAXSAT, WBO and PBO.

¹In calculating obj we take false = 0 and true = 1.
²We make the coefficients positive by negating Boolean literals if necessary.

An important class of COPPBO problems are soft constraint optimization problems, given by a set of hard constraints C_h and a vector of soft constraints c_1, …, c_m, with corresponding weight vector w such that w_j is the cost of violating soft constraint c_j, 1 ≤ j ≤ m. The aim is to find a solution to the variables x which minimizes Σ_j w_j b_j, subject to C_h ∪ { b_j ∨ c_j | 1 ≤ j ≤ m }, where b consists of introduced relaxation variables for the constraints in c. Note that a CP system supporting a constraint c can be straightforwardly extended to support the softened form b ∨ c through half-reification [feydy].
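A toy rendering of this softening, where each soft constraint is a Python predicate over a valuation and the half-reified condition ¬b_j → c_j is checked explicitly (the dict-based interface is an illustration, not the paper's system):

```python
def soften(soft_constraints, weights):
    """Attach a relaxation Boolean b_j to each soft constraint c_j: setting
    b_j = True buys off a violation at cost w_j, and (not b_j) -> c_j must
    hold.  Returns a cost function over (valuation, relaxation assignment)."""
    def total_cost(valuation, b):
        # Every unrelaxed soft constraint must actually hold.
        assert all(b[j] or c(valuation)
                   for j, c in enumerate(soft_constraints))
        # The objective is the weighted sum of the relaxation variables.
        return sum(w for j, w in enumerate(weights) if b[j])
    return total_cost
```

For two contradictory soft constraints on x with weights 3 and 5, relaxing only the second yields cost 5.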

4 Basic Unsatisfiable Cores Algorithm

An unsatisfiable core is a clause which contains only literals b_j from b. Such a clause forces at least one objective variable to be true, and hence must add some positive value to the objective. Note that clauses containing a negated literal ¬b_j are not unsatisfiable cores. The unsatisfiable cores approach to optimization originally arose for solving MAXSAT problems: given a set of soft clauses, find a solution which satisfies the maximum number of them.

1: U := {b_1, …, b_m} initially
2:function Learn(N) where vars(N) ⊆ b	% otherwise fall back to Algorithm 1
3:     U := U \ vars(N)	% remove candidates that have appeared in an unsatisfiable core
4:     return ∅	% unchanged constraint set, to avoid risk of learning duplicates
5:end function
6: % U: set of literals which have never appeared in an unsatisfiable core
7:function Decide(D)
8:     if decision_level = 0 and there exists b_j ∈ U unfixed in D then
9:         for all such b_j restrict the corresponding domain to {false}
10:     else
11:         restrict some other domain according to user's programmed search
12:     end if
13:     return D
14:end function
Algorithm 2 Basic unsatisfiable core algorithm (relative to Algorithm 1)

The basic unsatisfiable core solver consists of the procedures in Algorithm 2, which are called by the high-level solver of Algorithm 1, and essentially modify the decision procedure based on information from conflict analysis. Each attempt fixes all unfixed variables in b that have never appeared in an unsatisfiable core to false, and solves the resulting problem. This either finds a solution (which should be of low cost), or it detects that the problem is unsatisfiable. In a learning solver such as a SAT or LCG solver, by fixing the b-variables as a (possibly) multiple decision in an artificial first decision level, a failure occurring at this level generates, as a side effect, a new unsatisfiable core. This continues until solutions are found, or the original problem is proved unsatisfiable.

We have to modify the standard LCG solver to allow multiple simultaneous decisions when branching in procedure Decide. The Analyze procedure then returns generalized 1UIP nogoods A → B, where A is a set of literals treated as a conjunction and B a set of literals treated as a disjunction. The code for Analyze in Algorithm 1 already handles this case, as line 34 will cause the loop to exit when only decisions remain; this line is unnecessary for search without multiple simultaneous decisions. Note also that, for the basic unsatisfiable core algorithm, we will only ever generate generalized 1UIP nogoods whose literals all come from b, but we will use the more general form in the next section.

Algorithm 2 keeps track of U, the set of optimization variables that have never appeared in an unsatisfiable core. The new post-analysis handler handles the case where the nogood N contains only literals over b, by removing the variables of N from U. Unlike the 1UIP case with a single literal at the conflict level, it does not learn the new nogood, since this may already be in the clause database, because two or more literals in b may have been set false simultaneously (propagating the database precludes single wrong decisions that violate a clause, but this does not extend to multiple decisions).
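The overall loop can be sketched over any solver that accepts assumptions and reports a core on failure; the solve_under interface and the toy_solve oracle below are illustrative stand-ins, not a real solver API.

```python
def unsat_core_optimize(solve_under, objective_vars):
    """Assume every not-yet-excused objective variable false; on failure,
    drop the returned core's literals from the assumption set and retry
    (the idea behind Algorithm 2)."""
    candidates = set(objective_vars)        # U: never appeared in a core
    while True:
        ok, payload = solve_under(assume_false=candidates)
        if ok:
            return payload                  # a (hopefully low-cost) solution
        if not payload:                     # empty core: truly unsatisfiable
            return None
        candidates -= set(payload)          # soften the core members again

def toy_solve(assume_false):
    """Toy oracle: the problem is infeasible whenever y1 and y3 are both
    assumed false (core {y1, y3}); otherwise each non-assumed var is true."""
    if {'y1', 'y3'} <= assume_false:
        return False, {'y1', 'y3'}
    return True, {v: v not in assume_false for v in ['y1', 'y2', 'y3', 'y4']}
```

On this toy problem the first attempt fails with core {y1, y3}, and the second attempt, assuming only y2 and y4 false, succeeds.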

Example 1

The MAXSAT instance (all constraints soft) comprises soft clauses c_1, …, c_4 over Boolean variables, each clause c_j entered with a relaxation variable y_j, and objective y_1 + y_2 + y_3 + y_4 to be minimized.

Here two of the soft clauses are clearly in conflict, but given the choice it is better to relax one of those so that the remaining soft clauses can be satisfied simultaneously. At the top level the solver creates the multiple decision ¬y_1, ¬y_2, ¬y_3, ¬y_4 (simultaneously), and solves. This fails with an unsatisfiable core over y_3 and y_4, which is the generalized 1UIP nogood resulting from the implication graph in Figure 1(a).

On the next attempt it sets only ¬y_1, ¬y_2, because y_3 and y_4 have appeared in an unsatisfiable core. The only possible solution, under these assumptions, relaxes one of the two conflicting soft clauses, with cost 1.

[Implication graphs: (a) ¬y_1 → ¬b; ¬y_3 → ¬a → false; ¬y_4 → false. (b) ¬y_1 → ¬a → false; ¬y_2 → ¬b → false; ¬y_3 → false; ¬y_4 → ¬c.]
Figure 1: Two implication graphs from solving the soft constraint problems of: (a) Example 1, and (b) Example LABEL:ex:branchandbound; assuming all soft constraints are satisfied.