1 Introduction
The shortcoming of Answer Set Programming (ASP; [Lifschitz (2008)]) to succinctly represent variables over large numeric domains has led to the development of several systems enhancing ASP with capabilities for finite domain Constraint Processing (CP; [Rossi et al. (2006)]). Starting from the seminal work in [Baselice et al. (2005)] and the consecutive development of traditional DPLL-style hybrid ASP solvers like adsolver [Mellarkod et al. (2008)] (DPLL tracing back to the Davis-Putnam-Logemann-Loveland procedure [Davis and Putnam (1960), Davis et al. (1962)]), modern hybrid ASP solvers take advantage of CDCL-based (Conflict-Driven Constraint Learning) solving technology [Marques-Silva and Sakallah (1999), Zhang et al. (2001), Gebser et al. (2007)] in different ways. Let us illustrate this by describing the approach of three representative Constraint Answer Set Programming (CASP; [Balduccini and Lierler (2013)]) systems.
A black-box approach is pursued in the two previous clingcon series, where the ASP solver clasp is combined with the CP solver gecode [Gecode Team (2006)] by following the lazy approach to SMT (Satisfiability Modulo Theories) solving [Barrett et al. (2009)]. In the clingcon setting, this means that clasp only generates truth assignments for abstracted constraint expressions, while gecode checks whether the actual constraints can be made true or false accordingly. On the one hand, this black-box approach benefits from the vast spectrum of constraints available in gecode and seamlessly keeps up with advanced CP technology, among others regarding preprocessing and propagation. Moreover, this approach avoids an explicit representation of integer variables in ASP and thus can deal with very large domains. On the other hand, the usage of an external CP solver restricts information exchange, which impedes the CDCL approach of clasp. First, neither conflict nor propagation information is provided by gecode, and thus both must be approximated within the interface to sustain conflict analysis in CDCL. Second, the granularity induced by constraint abstraction leads to weaker propagation than what is obtainable when encoding integer variables.
A translation-based approach is pursued by the aspartame system [Banbara et al. (2015)], where a CSP (Constraint Satisfaction Problem) is fully translated into ASP and then solved by an ASP solver. This approach follows that of the CP solver sugar [Tamura et al. (2009)], which translates CSPs to SAT (Satisfiability Testing; [Biere et al. (2009)]). This is done by representing each integer variable along with its domain according to the order encoding scheme [Crawford and Baker (1994)]. Such an approach is called eager in SMT solving. On the one hand, this approach benefits from the full power of CDCL-based search. Also, the granularity induced by an explicit representation of integer variables provides more accurate conflict and propagation information, and approximations of reasons and conflicts as used in the former clingcon system [Ostrowski and Schaub (2012)] become obsolete. On the other hand, such an explicit representation limits scalability: aspartame (just as sugar) can only deal with medium-sized domains of up to a few thousand integers. Also, when dealing with larger domains, CDCL search may suffer from congestion due to too much conflict information. Finally, aspartame cannot make use of readily available CP techniques for preprocessing and propagation; all of this must be captured in the underlying ASP encoding.
A lazy approach is pursued by the inca system [Drescher and Walsh (2012)], where the ASP solver clasp is augmented with dedicated propagators for linear and selected global constraints by following the approach of lazy clause generation [Ohrimenko et al. (2009)]. The idea is to make parts of the encoding explicit whenever they reflect a conflict or propagation signaled by a propagator. In this way, the explicit representation of constraints is only unfolded when needed, and its extent is controlled by the deletion scheme of the ASP solver. This approach also benefits from the full power of CDCL-based search but outsources constraint-oriented inferences. In this way, the overall size of the hybrid problem is under the control of the ASP solver. As a consequence, inca can deal with large domains. But it has its limits because the vocabulary and basic inference schemes of the order encoding must be provided at the outset by introducing auxiliary variables and nogoods; the propagators rely on this for making parts of the constraint encoding explicit. Moreover, this lazy approach cannot harness implemented CP techniques for preprocessing and propagation; inca provides advanced means for propagation but uses no sophisticated preprocessing techniques.
The third generation of clingcon also follows a lazy approach to hybrid ASP solving but largely extends the lazy one of inca while drawing on experience with aspartame and the previous clingcon series. The current version of clingcon 3 features propagators for linear constraints and can translate distinct constraints. The ultimate design goal was to conceive a hybrid solver architecture that integrates seamlessly with the infrastructure of the ASP system clingo in order to take advantage of its full spectrum of grounding and solving capabilities. For the latter, it is essential to give the solver access to the representation of constraint variables and their domains; otherwise, hybrid forms of multi-objective optimization or operations on models like intersection or union cannot reuse existing capacities. The lazy approach lets us accomplish this while controlling space demands. However, we take the approach of inca one step further by permitting lazy variable generation [Thibaut and Stuckey (2009)] to unfold the vocabulary and the basic inference schemes of the order encoding only when needed. This enables clingcon 3 to represent very large (and possibly non-contiguous) domains of integer variables. Furthermore, clingcon 3 features a variety of established CP preprocessing techniques to enhance its lazy approach. This also includes an initial eager translation that allows for unfolding parts or even the entire CSP up front.
What is more, clingcon is not restricted to single-shot solving but fully blends in with clingo’s multi-shot solving capabilities [Gebser et al. (2015)]. This not only allows for incremental hybrid solving but moreover equips clingcon with powerful APIs. For instance, the latter allow for conceiving reactive procedures that loop on solving while acquiring changes in the problem specification. In fact, due to our design, most of clingo’s elaborate features carry over to clingcon. Among others, this includes multi-threaded solving as well as unsatisfiable-core and model-driven multi-criteria optimization. Exceptions to this are signature-based forms of reasoning, like projective enumeration or heuristic modifications, which must be dealt with indirectly by associating constraint atoms with auxiliary regular atoms on which such operations can be performed.
Our paper is structured as follows. The next section provides the formal foundations of Constraint Answer Set Programming (CASP) and presents the basics of CDCL-based ASP solving along with their extension to CASP solving. Section 3 details relevant features of clingcon 3. We start with an architectural overview in Section 3.1 and introduce the input language of clingcon 3 in Section 3.2. We then explain clingcon’s extended solving algorithms in Section 3.3 and detail distinguished features in Section 3.4. The final subsection of Section 3 is dedicated to multi-shot CASP solving. Section 4 provides a detailed empirical analysis of clingcon’s features and performance in contrast to competing CP and CASP systems. We summarize the salient features of the new clingcon series in Section 5 and discuss related work.
2 Formal Preliminaries
We begin in Section 2.1 with a gentle introduction to CASP along with some auxiliary concepts. We then provide the basics of CDCL-based ASP solving and show how they extend to CASP solving in Section 2.2.
2.1 Constraint Answer Set Programming
Constraint logic programs consist of a logic program over two disjoint sets of propositional variables, together with an associated constraint satisfaction problem (CSP). Elements of the two sets are referred to as regular and constraint atoms, respectively. We consider linear CSPs, comprising a set of integer variables, a set of corresponding variable domains, and a set of linear constraints.
Logic programs.
A logic program consists of rules of the form (we present our approach in the context of normal logic programs, though it readily applies to disjunctive logic programs, as does clingcon 3)
(1)  a_1 ← a_2, …, a_m, not a_{m+1}, …, not a_n
where 0 ≤ m ≤ n and each a_i is an atom for 1 ≤ i ≤ n.
As an example, consider the following logic program:
(2)  
(3)  
(4) 
This program contains regular atoms along with a constraint atom; accordingly, the variable referred to by the constraint atom is an integer variable of the associated CSP.
We need the following auxiliary definitions. Given a rule r as in (1), we define head(r) = a_1 as its head and body(r) = {a_2, …, a_m, not a_{m+1}, …, not a_n} as its body. Moreover, we let body(r)+ = {a_2, …, a_m} and body(r)- = {a_{m+1}, …, a_n}. If body(r) is empty, r is called a fact. If the head is missing, r is called an integrity constraint and stands for a rule x ← body(r), not x, where x is a new atom. (As syntactic sugar, a rule with a constraint atom in its head is treated analogously as an integrity constraint.)
In ASP, the semantics of a logic program is given by its (constraint) stable models [Gelfond and Lifschitz (1988), Gebser et al. (2009)]. However, in view of our focus on computational aspects, we rather deal with Boolean assignments and constraints and give a corresponding characterization of a program’s stable models below.
Constraint Satisfaction Problems.
A linear CSP deals with linear constraints of the form
(5)  a_1·x_1 + … + a_n·x_n ≤ c
where c and each a_i are integers and each x_i is an integer variable for 1 ≤ i ≤ n. The domain of a variable x is given by dom(x). The complement of a constraint is obtained by negating the inequality. We require the set of constraints to be closed under complements. Constraint atoms are identified with constraints via a dedicated association function.
In our example, the constraint atom is associated with a linear constraint over the program's integer variable. Since we require the set of constraints to be closed under complements, it contains both this constraint and its complement.
An assignment satisfies a linear constraint if (5) holds after replacing each variable by its assigned value. Following [Drescher (2015)], we call the set of all constraints satisfied by an assignment a configuration. For instance, any assignment satisfying our example constraint induces a configuration containing that constraint.
Moreover, we rely on the CP concept of a view. Following [Schulte and Tack (2005)], a view on a variable x is an expression a·x + b for integers a and b; its image is the set {a·v + b | v ∈ dom(x)}. (Any linear expression with only one variable can be converted to this form.) Since a view can always be replaced with a fresh variable along with a linking constraint, we may use views nearly everywhere where we would otherwise use variables. For a view, we define its minimum and maximum as the smallest and largest value in its image, respectively; note that for a negative coefficient a, the roles of the underlying domain's endpoints are swapped. Further, we use a pair of functions mapping a value to the largest (smallest) element of the image that is smaller (larger) than the given value, if such an element exists, and to −∞ (+∞) otherwise.
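The notions above can be made concrete in a few lines. The following is an illustrative sketch, not clingcon's implementation; the class and method names are our own:

```python
# Sketch of a "view" a*x + b over an integer variable x with a finite
# domain. prev/next mirror the functions mapping a value to the largest
# (smallest) image element smaller (larger) than it, defaulting to -/+ infinity.

class View:
    def __init__(self, a, b, domain):
        self.a, self.b = a, b
        self.domain = sorted(domain)  # domain of the underlying variable x

    def image(self):
        return sorted(self.a * v + self.b for v in self.domain)

    def min(self):
        return self.image()[0]

    def max(self):
        return self.image()[-1]

    def prev(self, value):
        """Largest image element strictly smaller than value, else -inf."""
        smaller = [w for w in self.image() if w < value]
        return smaller[-1] if smaller else float("-inf")

    def next(self, value):
        """Smallest image element strictly larger than value, else +inf."""
        larger = [w for w in self.image() if w > value]
        return larger[0] if larger else float("inf")
```

For example, the view 2·x + 1 over dom(x) = {0,1,2,3} has image {1,3,5,7}, so its minimum is 1 and its maximum is 7.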
2.2 Basics of ASP and CASP Solving
The basic idea of CDCL-based ASP solving is to map inferences from rules as in (1) to unit propagation on Boolean constraints. Our description of this approach follows the one given in [Gebser et al. (2012)].
Accordingly, we represent a Boolean assignment over a set of atoms as a set of signed literals Ta and Fa, standing for the assignment of true and false to atom a, respectively. The complement of a signed literal is obtained by flipping its sign. An assignment is complete if it assigns exactly one truth value to every atom under consideration. For instance, a complete assignment for our example program assigns each of its atoms either true or false.
Boolean constraints are represented as nogoods. A nogood is a set of signed literals representing an invalid partial assignment; it is violated by a Boolean assignment whenever all of its literals are contained in that assignment. A complete Boolean assignment is a solution of a set of nogoods if it violates none of them. Given a Boolean assignment and a nogood such that all literals of the nogood but one belong to the assignment, while the complement of the remaining literal does not, we say that the nogood is unit and asserts the complement of that remaining literal, called the unit-resulting literal. For a set of nogoods and an assignment, unit propagation is the iterated process of extending the assignment with unit-resulting literals until no further literal is unit-resulting for any nogood in the set.
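Unit propagation as just defined can be sketched directly. In this illustrative encoding (our own, not clingcon's), a signed literal is a pair (atom, truth_value) and a nogood is a frozenset of such pairs:

```python
# Minimal sketch of unit propagation on nogoods.

def complement(lit):
    atom, sign = lit
    return (atom, not sign)

def unit_propagate(nogoods, assignment):
    """Extend assignment with unit-resulting literals until fixpoint.
    Returns the extended assignment, or None if a nogood is violated."""
    assignment = set(assignment)
    changed = True
    while changed:
        changed = False
        for ng in nogoods:
            unassigned = [l for l in ng
                          if l not in assignment
                          and complement(l) not in assignment]
            if not unassigned and ng <= assignment:
                return None  # nogood fully contained in assignment: conflict
            if len(unassigned) == 1 and ng - {unassigned[0]} <= assignment:
                # nogood is unit: assert the complement of the open literal
                assignment.add(complement(unassigned[0]))
                changed = True
    return assignment
```

For example, given the nogood {Ta, Fb} and an assignment containing Ta, propagation asserts Tb.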
With these concepts in hand, the Boolean constraints induced by a logic program can be captured as follows:
(6)  
Then, according to [Gebser et al. (2012)], a set of atoms is a stable model of a regular logic program iff it is induced by a (unique) solution of the program's completion and loop nogoods.
For example, the completion nogoods obtained in (6) for the atoms and rule bodies of our example program allow us, once part of an assignment is fixed, to derive further literals via unit propagation.
To extend this characterization to programs with constraint atoms, it is important to realize that the truth value of such atoms is determined external to the program. In CASP, this is reflected by the requirement that constraint atoms must not occur in the heads of rules. (In alternative semantic settings, theory atoms may also occur as rule heads; cf. [Gebser et al. (2016a)].) Hence, treating constraint atoms as regular ones leaves them unfounded. For instance, in our example, we would obtain from (6) and (2.2) a nogood permanently setting the constraint atom to false. To address this issue, [Drescher and Walsh (2012)] exempt constraint atoms from the respective sets of nogoods and define variants of (6) and (2.2) by restricting their qualification to regular atoms.
Then, [Ostrowski (2017)] shows that a set of atoms together with a constraint variable assignment is a constraint stable model of a program wrt the semantics defined in [Gebser et al. (2009)] iff it is induced by a (unique) solution of the resulting set of nogoods.
Accordingly, our example yields the following constraint stable models
(8) 
where the braced notation indicates that the variable may take any one of the listed values. For instance, the very first constraint stable model corresponds to a Boolean assignment paired with a corresponding constraint variable assignment.
Similar to logic programs, linear constraints can be represented as sets of nogoods by means of an order encoding [Tamura et al. (2009)]. This amounts to representing the above unit nogoods by more elaborate nogoods capturing the semantics of the associated constraints.
To this end, we introduce a set of order atoms, required to be disjoint from the program's regular and constraint atoms. More precisely, we introduce an order atom for each constraint variable x and each value v in its domain, intuitively representing that x ≤ v. We refer to signed literals over order atoms as signed order literals.
Now, we are ready to map a linear CSP into a set of nogoods.
First, we need to make sure that each variable takes exactly one value from its domain. To this end, we define the following set of nogoods.
(9) 
Intuitively, each such nogood stands for an implication “if x ≤ v then x ≤ v′” for consecutive domain values v < v′. In our example, we get the following nogoods.
(10) 
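Generating these domain nogoods is mechanical. A sketch under our illustrative encoding (order atoms as (variable, value) pairs, nogoods as frozensets of signed literals):

```python
# Nogoods linking consecutive order atoms (x <= v): it is forbidden that
# (x <= v) is true while (x <= v') is false for the next larger value v'.

def order_nogoods(var, domain):
    """Return the consistency nogoods {T(var<=v), F(var<=v')} for
    consecutive domain values v < v'."""
    values = sorted(domain)
    nogoods = []
    for v, nxt in zip(values, values[1:]):
        nogoods.append(frozenset({((var, v), True), ((var, nxt), False)}))
    return nogoods
```

For a domain of n values this yields n−1 nogoods, one per pair of consecutive values.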
Second, we need to establish the relation between constraint atoms and their associated linear constraints. Following [Feydy et al. (2011)], a reified constraint is an equivalence “p ↔ c” between a propositional atom p and a constraint c; it is decomposable into the two half-reified constraints “p → c” and “¬p → ¬c”. To proceed analogously, we extend the constraint association function to signed literals over constraint atoms, mapping a positive literal to the associated constraint and a negative literal to its complement.
To translate constraints into nogoods, we need to translate expressions of the form a·x + b ≤ v for a variable x and integers a, b, and v into signed order literals. (Any linear inequality using ≤ and one variable can be converted into this form.) Following [Tamura et al. (2009)], such an expression is mapped, for a positive coefficient a, to the positive order literal over the largest domain value of x still satisfying the inequality, and, for a negative coefficient a, to the negative order literal over the largest domain value for which the inequality fails. If the inequality holds for all domain values, the result is a tautological literal; if it holds for none, an unsatisfiable one. (We use dedicated symbols as representatives for tautological and unsatisfiable signed literals; they are removed in (12) and (13) below.) In terms of signed order literals, our example constraint thus yields an order literal whose value is the largest integer satisfying the constraint.
We sometimes use other comparison operators in such expressions and implicitly convert them to this normal form. Accordingly, the complementary constraint yields the complementary signed order literal.
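The mapping from a·x + b ≤ v to a signed order literal can be sketched as follows. This is illustrative only: the variable is hard-wired as "x", literals are ((var, value), truth) pairs, and the strings "TRUE"/"FALSE" stand in for the tautological and unsatisfiable cases:

```python
import math

def to_order_literal(a, b, v, domain):
    """Map a*x + b <= v (a != 0) to a signed order literal over (x <= w)."""
    dom = sorted(domain)
    if a > 0:
        w = math.floor((v - b) / a)       # a*x + b <= v  iff  x <= w
        cands = [d for d in dom if d <= w]
        if len(cands) == len(dom):
            return "TRUE"                 # holds for the whole domain
        if not cands:
            return "FALSE"                # holds for no domain value
        return (("x", cands[-1]), True)
    else:
        w = math.ceil((v - b) / a)        # a < 0: a*x + b <= v  iff  x >= w
        cands = [d for d in dom if d < w]
        if len(cands) == len(dom):
            return "FALSE"
        if not cands:
            return "TRUE"
        # x >= w  iff  not (x <= prev(w))
        return (("x", cands[-1]), False)
```

For instance, over dom(x) = {1,2,3}, the expression 2·x ≤ 5 maps to the positive literal over (x ≤ 2), while −x ≤ −2 (i.e., x ≥ 2) maps to the negative literal over (x ≤ 1).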
The actual relation between constraint atoms and their associated linear constraints is established via the following nogoods.
(11) 
For each constraint atom associated with a linear constraint, we define for both of its half-reified constraints the sets of nogoods
(12)  
(13) 
where
Note that nogoods containing tautological or unsatisfiable signed literals are simplified in (12) and (13). Also, observe that the underlying definition is recursive, although this does not show in our simple examples.
In our example, we obtain
(14)  
(15) 
Taken together, both nogoods realize the aforementioned equivalence between the constraint atom and its associated constraint. Note that the former is a constraint atom, while the order atoms belong to the encoding of the associated constraint. For further illustration, reconsider the Boolean assignment inducing the first constraint stable model in (8). Applying unit propagation, we derive an order literal via (15) and, in turn, further order literals via the nogoods in (10). Similarly, making the constraint atom true triggers analogous derivations via the nogoods in (10).
All in all, a CSP is characterized by the union of its domain nogoods and its constraint nogoods.
While in (8) the corresponding constraint variable assignment is determined externally, it can be directly extracted from a solution of these nogoods by means of bound functions: relative to a Boolean assignment, the upper bound of a view is the smallest value whose order atom is assigned true, and its lower bound is the least value exceeding all those whose order atoms are assigned false. For a complete solution, upper and lower bound coincide on every variable, which determines the constraint variable assignment.
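This bound extraction can be sketched as follows (illustrative encoding as before: order atoms as (variable, value) pairs, signed literals as ((var, value), truth)):

```python
def bounds(var, domain, assignment):
    """Lower/upper bound of var induced by the signed order literals
    over atoms (var <= v) contained in a Boolean assignment."""
    values = sorted(domain)
    lb, ub = values[0], values[-1]
    for i, v in enumerate(values):
        if ((var, v), True) in assignment:
            ub = min(ub, v)                 # var <= v holds
        if ((var, v), False) in assignment and i + 1 < len(values):
            lb = max(lb, values[i + 1])     # var > v holds
    return lb, ub
```

When the assignment is complete wrt the order atoms, lb and ub coincide and give the variable's value; on partial assignments the pair describes the remaining interval.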
Combining the nogoods stemming from the logic program and its associated CSP, we obtain the following characterization of constraint logic programs.
Theorem 2.1
Let a constraint logic program over regular and constraint atoms be associated with a CSP, and consider a total Boolean assignment over all atoms, including the order atoms.
Then, the induced pair of a set of atoms and a constraint variable assignment is a constraint stable model of the program wrt the semantics defined in [Gebser et al. (2009)] iff the variable assignment is a configuration of the CSP, the Boolean assignment is a (unique) solution of the combined set of nogoods, and both agree on the constraint atoms.
The proof of this theorem is obtained by combining existing characterizations of logic programs in terms of nogoods and similar ones for CSPs in terms of clauses in CNF [Ostrowski (2017)].
Nogood propagators.
The basic idea of lazy constraint propagation is to make the nogoods of the order encoding explicit only when needed. This is done by propagators corresponding to the respective sets of nogoods. A popular example of this is the unfounded-set algorithm in ASP solvers, which makes the loop nogoods in (2.2) explicit only when needed.
Following [Drescher and Walsh (2012)], a propagator for a set of nogoods is a function mapping a Boolean assignment to a subset of that set such that, for every total assignment, if some nogood of the set is violated by the assignment, then the propagator also yields a nogood violated by it. A propagator is conflict optimal if, already for partial assignments, the violation of some nogood in the set implies that the propagator yields a violated nogood. It is inference optimal if it is conflict optimal and additionally yields all nogoods of the set that are unit wrt the assignment.
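A trivial instance of this definition can be sketched for an explicitly given nogood set: a propagator that returns exactly the nogoods currently violated satisfies the condition on total assignments and is conflict optimal by construction (names illustrative):

```python
# Sketch: a propagator as a function from a Boolean assignment to a
# subset of its nogood set, here returning all currently violated nogoods.

def make_propagator(nogoods):
    def propagate(assignment):
        # a nogood is violated iff all its literals are in the assignment
        return [ng for ng in nogoods if ng <= assignment]
    return propagate
```

Real propagators, of course, avoid materializing the nogood set and compute violated (and unit) nogoods on demand from the constraint semantics; this sketch only mirrors the abstract definition.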
We obtain the following extension of Theorem 2.1.
Theorem 2.2
Let a constraint logic program be associated with a CSP, and let propagators be given for the domain nogoods, the constraint nogoods, and the loop nogoods, respectively.
Then, a Boolean assignment is a solution of the explicit set of nogoods iff it is a solution of the variant in which each of these nogood sets is handled by its propagator.
This theorem tells us that the nogoods in these sets need not be explicitly represented but can be computed by corresponding propagators that add them lazily when needed.
To relax the restrictions imposed by this theorem, the idea is to compile out a subset of constraints and variables of the CSP while leaving the others subject to lazy constraint propagation. This is captured by the following corollary to Theorem 2.2.
Corollary 2.1
Let a constraint logic program be associated with a CSP, and let propagators be given for subsets of the domain, constraint, and loop nogoods, respectively, while the remaining nogoods are represented explicitly.
Then, a Boolean assignment is a solution of the full set of nogoods iff it is a solution of the explicit nogoods together with the outputs of these propagators.
This correspondence nicely reflects the basic idea of our approach. While the entire set of loop nogoods is handled by the unfounded-set propagator as usual, the nogoods capturing the CSP are divided among the explicitly generated ones and the implicit ones handled by the constraint and domain propagators. Note that variables and domain elements are often only dealt with implicitly through their induced order atoms.
3 The clingcon system
We now detail various aspects of the new clingcon 3 system. We begin with an overview of its architecture along with its salient components. The next sections detail its input language and major algorithms. The subsequent section is dedicated to distinguished clingcon features, which are experimentally evaluated in Section 4. Finally, we illustrate in the last section clingcon’s multishot solving capabilities by discussing several incremental solutions to the queens puzzle.
3.1 Architecture
clingcon 3 is an extension of the ASP system clingo 5, which itself relies on the grounder gringo and the solver clasp. The architecture of clingcon 3 is given in Figure 1.
More precisely, clingcon uses gringo’s capabilities to specify and process customized theory languages. For this, it is sufficient to supply a grammar fixing the syntax of constraint-related expressions. As detailed in Section 3.2, this allows us to express linear constraints similar to standard ASP aggregates by using first-order variables. Unlike this, clingcon extends clasp in several ways to accommodate its lazy approach to constraint solving. First, clasp’s preprocessing capabilities are extended to integrate linear constraints. Second, dedicated propagators are added to account for lazy constraint propagation. Both extensions are detailed in Sections 3.3 and 3.4. And finally, a special output module was created to integrate CSP solutions. Notably, clingcon pursues a lazy yet twofold approach to constraint solving that allows for making a part of the nogoods explicit during preprocessing, while leaving the remaining constraints implicit and the creation of corresponding nogoods subject to the constraint propagator. In this way, a part of the CSP can be put right up front under the influence of CDCL-based search. All other constraints are only turned into nogoods when needed. Accordingly, only a limited subset of order atoms must be introduced at the beginning; further ones are only created if they are needed upon the addition of new nogoods. This is also called lazy variable generation.
It is worth mentioning that both the grounding and the solving component of clingcon can also be used separately via clingo’s option ‘--mode’. That is, the same result as with clingcon is obtained by passing the output of ‘clingcon --mode=gringo’ to ‘clingcon --mode=clasp’. The intermediate result of grounding a CASP program is expressed in the aspif format [Gebser et al. (2016b)], which accommodates both the regular ASP part of the program as well as its constraint-based extension. This modular design allows others to take advantage of clingcon’s infrastructure for their own CASP solvers. Also, other front ends can be used for generating ground CASP programs, e.g., the flatzinc translator used in Section 4.
Finally, extra effort was taken to transfer clasp-specific features to clingcon’s solving component. This includes multi-threading [Gebser et al. (2012)], unsatisfiable-core techniques [Andres et al. (2012)], multi-criteria optimization [Gebser et al. (2011)], domain-specific heuristics [Gebser et al. (2013)], multi-shot solving [Gebser et al. (2015)], and clasp’s reasoning modes like enumeration, intersection, and union of models. Vocabulary-sensitive reasoning modes like projective enumeration and domain-specific heuristics can be used via auxiliary atoms.
3.2 Language
As mentioned, the treatment of the extended input language of CASP programs can be mapped onto gringo’s theory language capabilities [Gebser et al. (2016a)]. For this, it is sufficient to supply a corresponding grammar fixing the syntax of the language extension. The one used for clingcon is given in Listing 1.
The grammar is named csp and consists of two parts, one defining theory terms in lines 2–27 and another defining theory atoms in lines 29–33. All regular terms are implicitly included in the respective theory terms. Theory terms are then used to represent constraint-related expressions that are turned by grounding into linear constraint atoms using predicate &sum, domain restrictions using predicate &dom, directives &show and &minimize, and the predefined global constraint &distinct.
Before delving into further details, let us illustrate the resulting syntax by the CASP program for two-dimensional strip packing given in Listing 2, originally due to [Soh et al. (2010)].
Given a set of rectangles, each represented by a fact r(I,W,H) where I identifies a rectangle with width W and height H, the task is to fit all rectangles into a container of width w and height ub while minimizing the needed height of the container. The first two lines of Listing 2 restrict the domain of the left lower corner of each rectangle I. The respective instantiations of x(I) and y(I) yield constraint variables denoting the x and y coordinate of I, respectively. Note that in both lines the consecutive dots ‘..’ construct a theory term ‘0..w-W’ and ‘0..ub-H’ once w and ub are replaced, respectively. The choice rule in Lines 4–7 lets us choose among all combinations of two rectangles, that is, which one is left, right, below, or above. At least one of these relations must hold so that no two rectangles overlap. Atoms of the form le(VI,C,VJ) indicate that coordinate VI+C must be less than or equal to VJ. This property is enforced by the linear constraint in Line 9. Finally, to minimize the overall height of (stacked) rectangles, we introduce the variable height. This variable’s value has to be greater than or equal to the y coordinate of any rectangle I plus the rectangle’s height H. This ensures that height is greater than or equal to the height of the highest rectangle. Finally, height is minimized in Line 13.
Now, if we take the three rectangles r(a,5,2), r(b,2,3), r(c,2,2) along with ub=10 and w=6, we obtain the ground program in Listing 3.
The domains of the constraint variables giving the x and y coordinates are delineated in Lines 3 and 4. Note that, in contrast to regular ASP, the grounder leaves terms with the theory symbol .. intact. The orientation of each pair of rectangles is chosen in Lines 6–11. If, for example, le(x(c),2,x(b)) becomes true, that is, rectangle c is left of b, then the corresponding constraint is enforced in Line 22. After setting the domain for the height variable in Line 26, we restrict it to be greater than or equal to the top y coordinate of all rectangles in Lines 28–30. Line 32 enforces the minimization of this variable. A solution with minimal height consists of the regular atoms le(y(b),3,y(a)), le(y(c),2,y(a)), and le(x(c),2,x(b)) and a corresponding constraint variable assignment. Of course, other minimal configurations exist.
We have seen above how seamlessly theory atoms capturing constraintrelated expressions can be used in logic programs. We detail below the five distinct atoms featured by clingcon and refer the interested reader for a general introduction to theory terms and atoms to [Gebser et al. (2016a)].
Actual constraints are represented by the theory atoms &dom, &sum, and &distinct. All three can occur in the head and body of rules, as indicated by any in Lines 29–31 in Listing 1. We discuss below their admissible format after grounding. In the following, a linear expression is a sum of integers, products of integers, or products of an integer and a constraint variable.
- Domain constraints are of the form &dom{d_1;…;d_n} = x, where each d_i is a domain term of the form l..u or a single value, l and u are constraint-variable-free linear expressions, and x is a linear expression containing exactly one constraint variable. Such an expression represents the constraint that the value of x lies in the union of the given ranges, a range l..u denoting all integers between the values of l and u. This constraint can be used to set the domains of variables, where even non-contiguous domains can be obtained from several domain terms. For example, a domain constraint combining the range 1..3 with the single value 5 restricts its variable to {1,2,3} ∪ {5}.

- Linear constraints are of the form &sum{e_1;…;e_n} ⋄ e_0, where each e_i is a linear expression containing at most one constraint variable and ⋄ is one of the operators <=, =, >=, <, >, !=. Such an expression represents the linear constraint e_1 + … + e_n ⋄ e_0, which can be translated into one or two linear constraints as in (5).

- Distinct constraints are of the form &distinct{e_1;…;e_n}, where each e_i is a linear expression containing at most one constraint variable. Such an expression stands for the constraints e_i ≠ e_j for all 1 ≤ i < j ≤ n. The distinct constraint is one of the most common global constraints in CP; we use it to show how global constraints can be incorporated into the language.
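The pairwise decomposition just described can be sketched in a few lines (illustrative; constraints are represented as plain triples here):

```python
from itertools import combinations

def decompose_distinct(expressions):
    """Pairwise inequality constraints e_i != e_j (i < j) for a
    &distinct atom over the given expressions."""
    return [(a, "!=", b) for a, b in combinations(expressions, 2)]
```

For n expressions this yields n·(n−1)/2 binary constraints; dedicated global propagators, by contrast, can treat all expressions at once and propagate more strongly than the decomposition.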
The two remaining theory atoms provide directives, similar to their regular counterparts.
- Output directives are of the form &show{s_1;…;s_n}, where each s_i is a show term of the form p/k, with p a function symbol and k a positive integer, or v, with v a constraint variable. While the latter adds variable v to the list of output variables, the former adds all constraint variables of the form p(t_1,…,t_k), for terms t_i, as output variables. For all constraint stable models, the values of the output variables are shown in a solution.

- Minimize directives are of the form &minimize{m_1;…;m_n}, where each m_i is a minimize term of the form e@l, with e a linear expression containing at most one constraint variable. Since we support multi-objective optimization, l is an integer stating the priority level; whenever @l is omitted, it is assumed to be zero. Priorities allow for representing lexicographically ordered minimization objectives. As in regular ASP, higher levels are more significant than lower ones.
Let us make precise how minimize statements induce optimal constraint stable models. For a constraint variable assignment and an integer l, define the cost at level l as the sum of the values of all occurrences of minimize terms with priority l in all minimize statements. A constraint stable model is dominated if there is another constraint stable model with a strictly smaller cost at some level l and an equal cost at all levels greater than l, and optimal otherwise. Maximization can be achieved by multiplying each minimize term by −1.
Note that the set of constraints supported by clingcon is only a subset of the constraints expressible with the syntax fixed in Listing 1. While for example expressions with more than one constraint variable are wellformed according to the syntax, they are not supported by clingcon.
3.3 Algorithms
As mentioned, clingcon pursues a lazy approach to constraint solving that distinguishes two phases. During preprocessing, any part of the nogoods representing a CSP can be made explicit and thus put right away under the influence of CDCL-based solving. Unlike this, the remaining constraints are at first kept implicit and their corresponding nogoods are only added via constraint propagators to CDCL solving when needed. This partitioning of constraints constitutes a tradeoff. On the one hand, constraint propagators are usually slower than unit propagation, in particular when dealing with sets of nogoods of moderate size, because of modern SAT techniques such as the two-watched-literals scheme [Zhang et al. (2001)]. On the other hand, translating all constraints is often impracticable, in particular when dealing with very large domains. Hence, a good tradeoff is to restrict the translation to “small constraints” in order to benefit from the high performance of CDCL solving and to unfold “larger constraints” only by need.
In what follows, we make clingcon’s twofold approach precise by presenting algorithms for translation and propagation of constraints before discussing implementation details in Section 3.4.
Partial Translation.
Following Corollary 2.1, a subset of the constraint atoms is selected for translation. For each selected atom, Algorithm 1 creates a set of nogoods equivalent to those defined in (12) and (13); in turn, they are used to create the relating nogoods shown in (11). To this end, the algorithm is initially invoked on the constraint associated with the atom.
We start the algorithm with a set of literals containing the literal of the constraint atom, and set the current value to the smallest value greater than the lower bound in the image of the first view; this is the smallest value needed to violate the constraint. If, under this value, the least sum contributed by all other views still stays below the bound (Line 4), we recursively translate the rest of the constraint, subtracting the current value from the right-hand side (Line 5). Otherwise, the constraint is already violated and we return all nogoods created so far (Line 7). We iteratively increase the current value (Line 8) and repeat this process (Line 3) for all values in the image. Note that this also involves adding all order atoms included in the created nogoods to the solver.
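The recursion can be sketched for the special case of unit coefficients. This is an illustrative reconstruction in the spirit of the order encoding, not clingcon's actual implementation; literals are ((var, value), truth) pairs, with ((var, v), False) read as "var > v":

```python
# Translate x_1 + ... + x_n <= c into nogoods over order literals.

def translate(vars_doms, c):
    """vars_doms: list of (name, domain) pairs; returns nogoods as
    frozensets of signed order literals forbidding violating ranges."""
    nogoods = []

    def rec(i, c_rem, lits):
        name, dom = vars_doms[i]
        dom = sorted(dom)
        if i == len(vars_doms) - 1:
            if c_rem < dom[-1]:           # last variable must be <= c_rem
                below = [w for w in dom if w <= c_rem]
                if below:
                    nogoods.append(frozenset(lits | {((name, below[-1]), False)}))
                else:
                    nogoods.append(frozenset(lits))
            return
        min_rest = sum(min(d) for _, d in vars_doms[i + 1:])
        for v in dom:
            prev = [w for w in dom if w < v]
            # assuming name >= v, expressed as F(name <= prev(v))
            lit = {((name, prev[-1]), False)} if prev else set()
            if v + min_rest <= c_rem:
                rec(i + 1, c_rem - v, lits | lit)   # translate the rest
            else:
                # name >= v already violates the constraint: emit and stop
                nogoods.append(frozenset(lits | lit))
                break

    rec(0, c, set())
    return nogoods
```

For x + y ≤ 3 with dom(x) = dom(y) = {1,2,3}, this yields three nogoods: y ≤ 2 must hold, x ≥ 2 forces y ≤ 1, and x ≤ 2 must hold.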
Which constraints to translate is subject to heuristics and command line options, as explained in Section 3.4.
Extended Conflict Driven Constraint Learning.
After translating a part of the problem into a set of nogoods over the order atoms, we explain how to solve the remaining constraint logic program. Our algorithmic approach follows the one in [Drescher and Walsh (2012)], where a modified CDCL algorithm supporting external propagators is presented. We extend this algorithm with lazy nogood and variable generation in Algorithm 2.
The algorithm relies upon a growing set of Boolean variables, which is initiated with all atoms (regular, constraint, and a subset of the order atoms) and subsequently expanded with further order atoms. Accordingly, the Boolean assignment is restricted to this set, and recorded nogoods are accumulated. Starting with an empty assignment, the Propagation method (Line 5) extends the assignment with propagated literals, adds new nogoods, and extends the set of atoms; this method is detailed below in Algorithm 3. When encountering a conflicting assignment (Line 6), we either backtrack (Line 8) or, if we cannot recover from the conflict, return unsatisfiable. Whenever all atoms are assigned (Line 9), we check in Line 10 whether a complete assignment for the constraint variables is obtained. If this is the case, we return the corresponding constraint stable model. Otherwise, a new order atom is created for the constraint variable with the currently largest domain, splitting that domain in half. If we face an incomplete assignment, we extend it using the Select function.