Answer set programming (ASP) is a rule-based approach to declarative problem solving [15, 22, 24]. The idea is to first formalize a given problem as a set of rules, also called an answer-set program, so that the answer sets of the program correspond to the solutions of the problem. Such problem descriptions are typically devised in a uniform way which distinguishes the general principles and constraints of the problem in question from any instance-specific data. To this end, term variables are deployed for the sake of a compact representation of rules. Solutions themselves can then be found by grounding the rules of the answer-set program and by computing answer sets for the resulting ground program using an answer set solver. State-of-the-art answer set solvers are already very efficient search engines [7, 11] and have a wide range of industrial applications.
The satisfiability modulo theories (SMT) framework follows a similar modelling philosophy, but its syntax is based on extensions of propositional logic rather than rules with term variables. The SMT framework enriches traditional satisfiability (SAT) checking with background theories, which are selected amongst a number of alternatives (see http://combination.cs.uiowa.edu/smtlib/). In parallel to propositional atoms, theory atoms involving non-Boolean variables can be used as references to potentially infinite domains. (Strictly speaking, variables in SMT are syntactically represented by (functional) constants having a free interpretation over a specific domain such as integers or reals.) Theory atoms are typically used to express various constraints, such as linear constraints or difference constraints, and they enable very concise representations of certain problem domains for which plain Boolean logic would be more verbose or insufficient in the first place.
As regards the relationship of ASP and SMT, it was quite recently shown [20, 25] that answer-set programs can be efficiently translated into a simple SMT fragment, namely difference logic (DL). This fragment is based on theory atoms of the form $x - y \le k$, formalizing an upper bound $k$ on the difference of two integer-domain variables $x$ and $y$. Although the required transformation is linear, it is not reasonable to expect that such theories are directly written by humans in order to express the essentials of ASP in SMT. The translations from [20, 25] and their implementation, called lp2diff (http://www.tcs.hut.fi/Software/lp2diff/), enable the use of particular SMT solvers for the computation of answer sets. Our experimental results indicate that the performance obtained in this way is surprisingly close to that of state-of-the-art answer set solvers. The results of the third ASP competition, however, suggest that the performance gap has grown since the previous competition. To address this trend, our current and future agendas include a number of points:
We gradually increase the number of supported SMT fragments which enables the use of further SMT solvers for the task of computing answer sets.
We continue the development of new translation techniques from ASP to SMT.
We submit ASP-based benchmark sets to future SMT competitions (SMT-COMPs) to foster the efficiency of SMT solvers on problems that are relevant for ASP.
We develop new integrated languages that combine features of ASP and SMT, and aim at implementations via translation into pure SMT as initiated in .
This paper contributes to the first item by devising a translation from answer-set programs into theories of bit-vector logic. There is great interest in developing efficient solvers for this particular SMT fragment due to its industrial relevance. In view of the second item, we generalize an existing translation to the case of bit-vector logic. Using an implementation of the new translation, viz. lp2bv, new benchmark classes can be created to support the third item on our agenda. Finally, the translation also creates new potential for language integration. In the long run, rule-based languages and, in particular, the modern grounders exploited in ASP can provide valuable machinery for the generation of SMT theories in analogy to answer-set programs: the source code of an SMT theory can be compacted using rules and term variables and specified in a uniform way which is independent of any concrete problem instances. Analogous approaches [2, 14, 23] combine ASP and constraint programming techniques without a translation.
The rest of this paper is organized as follows. First, the basic definitions and concepts of answer-set programs and fixed-width bit-vector logic are briefly reviewed in Section 2. The new translation from answer-set programs into bit-vector theories is then devised in Section 3. The extended rule types of smodels compatible systems are addressed in Section 4. Such extensions can be covered either by native translations into bit-vector logic or by translations into normal programs. As part of this research, we carried out a number of experiments using benchmarks from the second ASP competition and two state-of-the-art SMT solvers, viz. boolector and z3. The results of the experiments are reported in Section 5. Finally, we conclude this paper in Section 6 with a discussion of the results and future work.
The goal of this section is to briefly review the source and target formalisms for the new translation devised in the sequel. First, in Section 2.1, we recall normal logic programs subject to answer set semantics and the main notions exploited in their translation. A formal account of bit-vector logic follows in Section 2.2.
2.1 Normal Logic Programs
As usual, we define a normal logic program $P$ as a finite set of rules of the form

$a \leftarrow b_1,\dots,b_n,\ {\sim}c_1,\dots,{\sim}c_m$ (1)

where $a$, $b_1,\dots,b_n$, and $c_1,\dots,c_m$ are propositional atoms and ${\sim}$ denotes default negation. The head of a rule $r$ of the form (1) is $H(r) = a$, whereas the part after the $\leftarrow$ symbol forms the body of $r$, denoted by $B(r)$. The body consists of the positive part $B^+(r) = \{b_1,\dots,b_n\}$ and the negative part $B^-(r) = \{c_1,\dots,c_m\}$ so that $B(r) = B^+(r) \cup \{{\sim}c \mid c \in B^-(r)\}$. Intuitively, a rule $r$ of the form (1) appearing in a program $P$ is used as follows: the head $H(r)$ can be inferred by $r$ if the positive body atoms in $B^+(r)$ are inferable by the other rules of $P$, but not the negative body atoms in $B^-(r)$. The positive part of the rule, $r^+$, is defined as $H(r) \leftarrow B^+(r)$. A normal logic program is called positive if $B^-(r) = \emptyset$ holds for every rule $r \in P$.
To define the semantics of a normal program $P$, we let $\mathrm{At}(P)$ stand for the set of atoms that appear in $P$. An interpretation of $P$ is any subset $I \subseteq \mathrm{At}(P)$ such that an atom $a \in \mathrm{At}(P)$ is true in $I$, denoted $I \models a$, iff $a \in I$. For any negative literal ${\sim}c$, $I \models {\sim}c$ iff $I \not\models c$ iff $c \notin I$. A rule $r$ is satisfied in $I$, denoted $I \models r$, iff $I \models B(r)$ implies $I \models H(r)$. An interpretation $I$ is a classical model of $P$, denoted $I \models P$, iff $I \models r$ holds for every $r \in P$. A model $M \models P$ is a minimal model of $P$ iff there is no model $M' \models P$ such that $M' \subset M$. Each positive normal program $P$ has a unique minimal model, i.e., the least model of $P$, denoted by $\mathrm{LM}(P)$ in the sequel. The least model semantics can be extended to an arbitrary normal program $P$ by reducing $P$ into a positive program $P^M = \{ H(r) \leftarrow B^+(r) \mid r \in P,\ M \cap B^-(r) = \emptyset \}$ with respect to an interpretation $M$. Then answer sets, also known as stable models, can be defined.
Definition 1 (Gelfond and Lifschitz)
An interpretation $M \subseteq \mathrm{At}(P)$ is an answer set of a normal program $P$ iff $M = \mathrm{LM}(P^M)$.
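Definition 1 lends itself directly to a naive answer-set check. The following sketch (our illustrative representation, not part of the translation: each rule is a triple of head atom, positive body atoms, and negative body atoms) computes the reduct and its least model by fixpoint iteration:

```python
def least_model(pos_rules):
    """Least model of a positive program: iterate the immediate-consequence
    operator until no new heads can be derived."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in pos_rules:
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_answer_set(program, interp):
    """Check M = LM(P^M) for a program given as (head, pos, neg) triples.
    The reduct drops rules whose negative body intersects M and strips
    the negative bodies of the remaining rules."""
    reduct = [(h, pos) for h, pos, neg in program if not (neg & interp)]
    return least_model(reduct) == interp
```

For the two-rule program $\{a \leftarrow {\sim}b;\ b \leftarrow {\sim}a\}$, this accepts $\{a\}$ and $\{b\}$ but rejects $\emptyset$ and $\{a, b\}$.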
Consider a normal program $P$ consisting of the following six rules:
The answer sets of are and . To verify the latter, we note that for which . On the other hand, we have for so that .
The number of answer sets possessed by a normal program can vary in general. The set of answer sets of a normal program $P$ is denoted by $\mathrm{AS}(P)$. Next we present some concepts and results that are relevant for capturing answer sets in terms of propositional logic and its extensions in the SMT framework.
Given a normal program $P$ and an atom $a \in \mathrm{At}(P)$, the definition of $a$ in $P$ is the set of rules $\mathrm{Def}_P(a) = \{ r \in P \mid H(r) = a \}$. The completion of a normal program $P$, denoted by $\mathrm{Comp}(P)$, is a propositional theory which contains

$a \leftrightarrow \bigvee_{r \in \mathrm{Def}_P(a)} \big( \bigwedge_{b \in B^+(r)} b \wedge \bigwedge_{c \in B^-(r)} \neg c \big)$ (2)

for each atom $a \in \mathrm{At}(P)$. Given a propositional theory $T$ and its signature $\mathrm{At}(T)$, the semantics of $T$ is determined by its classical models. It is possible to relate $\mathrm{Comp}(P)$ with the models of a normal program $P$ by distinguishing supported models for $P$. A model $M \models P$ is a supported model of $P$ iff for every atom $a \in M$ there is a rule $r \in \mathrm{Def}_P(a)$ such that $B^+(r) \subseteq M$ and $M \cap B^-(r) = \emptyset$. In general, the set of supported models of a normal program $P$ coincides with the set of classical models of $\mathrm{Comp}(P)$. It can be shown that stable models are also supported models but not necessarily vice versa. This means that in order to capture $\mathrm{AS}(P)$ using $\mathrm{Comp}(P)$, the latter has to be extended in terms of additional constraints as done, e.g., in [17, 20].
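The supported-model condition can likewise be checked rule by rule; a minimal sketch, again assuming rules are given as (head, positive body, negative body) triples of atom names:

```python
def is_supported_model(program, interp):
    """M is a supported model iff (i) M satisfies every rule and
    (ii) every atom true in M has a rule with that head whose body
    is satisfied by M (this mirrors Clark's completion)."""
    for head, pos, neg in program:
        # Rule violated: body satisfied but head false.
        if pos <= interp and not (neg & interp) and head not in interp:
            return False
    for atom in interp:
        # Atom lacks a supporting rule.
        if not any(h == atom and pos <= interp and not (neg & interp)
                   for h, pos, neg in program):
            return False
    return True
```

For instance, for the self-supporting program $\{a \leftarrow a\}$, the set $\{a\}$ is a supported model but not a stable one, illustrating why the completion alone is insufficient.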
For the program of Example 1, the theory has formulas , , , and . The models of , i.e., its supported models, are , , and .
The positive dependency graph of a normal program $P$, denoted by $\mathrm{DG}^+(P)$, is a pair $\langle \mathrm{At}(P), \leq \rangle$ where $b \leq a$ holds iff there is a rule $r \in P$ such that $H(r) = a$ and $b \in B^+(r)$. Let $\leq^*$ denote the reflexive and transitive closure of $\leq$. A strongly connected component (SCC) of $\mathrm{DG}^+(P)$ is a maximal non-empty subset $S \subseteq \mathrm{At}(P)$ such that $a \leq^* b$ and $b \leq^* a$ hold for each $a, b \in S$. The set of defining rules is generalized for an SCC $S$ by $\mathrm{Def}_P(S) = \bigcup_{a \in S} \mathrm{Def}_P(a)$. This set can be naturally partitioned into the sets $\mathrm{Ext}_P(S) = \{ r \in \mathrm{Def}_P(S) \mid B^+(r) \cap S = \emptyset \}$ and $\mathrm{Int}_P(S) = \{ r \in \mathrm{Def}_P(S) \mid B^+(r) \cap S \neq \emptyset \}$ of external and internal rules associated with $S$, respectively. Thus, $\mathrm{Def}_P(S) = \mathrm{Ext}_P(S) \cup \mathrm{Int}_P(S)$ holds in general.
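Computing the SCCs is standard graph machinery; a compact sketch using Tarjan's algorithm, with rules again represented as (head, positive body, negative body) triples. The edge direction chosen below (head to positive body atoms) is immaterial here, since a graph and its reverse have the same SCCs:

```python
from itertools import count

def sccs(edges, nodes):
    """Tarjan's algorithm: returns the SCCs (including singletons) of the
    directed graph (nodes, edges) as a list of sets."""
    index, low, on_stack, stack, order = {}, {}, set(), [], count()
    out = []
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
    def visit(v):
        index[v] = low[v] = next(order)
        stack.append(v)
        on_stack.add(v)
        for w in adj[v]:
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            comp = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.add(w)
                if w == v:
                    break
            out.append(comp)
    for n in nodes:
        if n not in index:
            visit(n)
    return out

def positive_dep_graph(program):
    """Edges from each rule head to its positive body atoms."""
    return [(h, b) for h, pos, _ in program for b in pos]
```

For the program $\{a \leftarrow b;\ b \leftarrow a;\ c \leftarrow a\}$ this yields the components $\{a, b\}$ and $\{c\}$.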
In the case of the program from Example 1, the SCCs of are , , and . For , we have .
2.2 Bit-Vector Logic
Fixed-width bit-vector theories have been introduced for high-level reasoning about digital circuitry and computer programs in the SMT framework [27, 4]. Such theories are expressed in an extension of propositional logic where atomic formulas speak about bit vectors in terms of a rich variety of operators.
As usual in the context of SMT, variables are realized as constants that have a free interpretation over a particular domain such as integers or reals. (We typically use symbols $x, y, \dots$ to denote such free (functional) constants and $a, b, \dots$ to denote propositional atoms.) In the case of fixed-width bit-vector theories, this means that each constant symbol represents a vector of bits of a particular width $m$. Such vectors enable a more compact representation of structures like registers and often allow more efficient reasoning about them. A special notation is introduced to denote a bit vector that equals a given number $n$, i.e., provides a binary representation of $n$. We assume that the actual width is determined by the context where the notation is used. For the purposes of this paper, the most interesting arithmetic operator for combining bit vectors is the addition of two $m$-bit vectors, denoted by a parameterized function symbol in infix notation. The resulting vector is also $m$-bit, which can lead to an overflow if the sum exceeds $2^m - 1$. Moreover, we use Boolean operators with the usual meanings for comparing the values of two $m$-bit vectors. Thus, assuming that $x$ and $y$ are $m$-bit free constants, we may write atomic formulas in order to compare the $m$-bit values of $x$ and $y$. In addition to the syntactic elements mentioned so far, we can use the primitives of propositional logic to build more complex well-formed formulas of bit-vector logic. The syntax defined for the SMT library contains further primitives which are skipped in this paper. A theory in bit-vector logic is a set of well-formed bit-vector formulas, as illustrated by the following example.
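The wrap-around semantics of $m$-bit addition and unsigned comparison can be illustrated with a small sketch; plain Python integers stand in for bit vectors, and the helper names echo the SMT-LIB operators bvadd and bvult:

```python
def bvadd(x, y, m):
    """m-bit addition: the result wraps around modulo 2**m on overflow."""
    return (x + y) % (2 ** m)

def bvult(x, y):
    """Unsigned 'less than' on the integer values of two same-width vectors."""
    return x < y
```

For example, with width $m = 3$ the sum of 6 and 3 overflows: bvadd(6, 3, 3) evaluates to 1 rather than 9, so bvult(bvadd(6, 3, 3), 6) holds even though 9 > 6.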
Consider a system of two processes, say A and B, and a theory formalizing a scheduling policy for them. The intuitive reading of (resp. ) is that process A (resp. B) is scheduled with a higher priority and, thus, should start earlier. The constants and denote the respective starting times of A and B. Thus, e.g., means that process A starts before process B.
Given a bit-vector theory , we write and for the sets of propositional atoms and free constants, respectively, appearing in . To determine the semantics of , we define interpretations for as pairs where is a standard propositional interpretation and is a partial function that maps a free constant and an index to the set of bits . Given , a constant is mapped onto and, in particular, for any . To cover any well-formed terms and involving and $m$-bit constants from (the constants and operators appearing in a well-formed term are based on a fixed width $m$, and the width of each constant must be the same throughout the term), we define and . Hence, the value can be determined for any well-formed term, which enables the evaluation of more complex formulas as formalized below.
Let be a bit-vector theory, a propositional atom, and well-formed terms over such that , and and well-formed formulas. Given an interpretation for the theory , we define
or , and
if and only if .
The interpretation is a model of , i.e., , iff for all .
It is clear by Definition 2 that pure propositional theories are treated classically, i.e., iff in the sense of propositional logic. As regards the theory from Example 4, we have the sets of symbols and . Furthermore, we observe that there is no model of of the form because it is impossible to satisfy and simultaneously using any partial function . On the other hand, there are models of the form because can be satisfied in ways by picking different values for the 2-bit vectors and .
In this section, we present a translation of a logic program into a bit-vector theory that is similar to an existing translation into difference logic. Like its predecessor, the translation consists of two parts. Clark's completion, denoted by , forms the first part of . The second part, i.e., , is based on ranking constraints from so that . Intuitively, the idea is that the completion captures the supported models of and the further formulas in exclude the non-stable ones, so that any classical model of corresponds to a stable model of .
The completion is formed for each atom on the basis of (2):
If , the formula is included to capture the corresponding empty disjunction in (2).
If there is such that , then one of the disjuncts in (2) is trivially true and the formula can be used as such to capture the definition of .
If for a rule with , then we simplify (2) to a formula of the form
Otherwise, the set contains at least two rules (1) with and
is introduced using a new atom for each together with a formula
The rest of the translation exploits the SCCs of the positive dependency graph of that was defined in Section 2.1. The motivation is to limit the scope of ranking constraints, which reduces the length of the resulting translation. In particular, singleton components require no special treatment if tautological rules, i.e., rules (1) whose head appears amongst the positive body atoms, have been removed. Plain completion (2) is sufficient for atoms involved in such components. However, for each atom having a non-trivial component in such that , two new atoms and are introduced to formalize the external and internal support for , respectively. These atoms are defined in terms of the equivalences
Moreover, when and the atom happens to gain external support from these rules, the value of is fixed to by including the formula
Recall the program from Example 1. The completion is:
Since has only one non-trivial SCC, i.e., the component , the weak ranking constraints resulting in are
In addition to these, the formulas
are also included in .
Weak ranking constraints are sufficient whenever the goal is to compute only one answer set, or to check the existence of answer sets. However, they do not guarantee a one-to-one correspondence between the answer sets of the original program and the models obtained for the translation . To address this discrepancy, and to potentially make the computation of all answer sets, or counting their number, more effective, strong ranking constraints can be imported from as well. Actually, there are two mutually compatible variants of strong ranking constraints:
The local strong ranking constraint (11) is introduced for each . It is worth pointing out that the condition is equivalent to ; however, the form in (11) is used in our implementation, since and are amongst the base operators of the boolector system. On the other hand, the global variant (12) covers the internal support of entirely. Finally, in order to prune copies of models of the translation that would correspond to exactly the same answer set of the original program, a formula
is included for every atom involved in a non-trivial SCC. We write and for the respective extensions of with local/global strong ranking constraints, and obtained using both. Similar conventions are applied to to distinguish four variants in total. The correctness of these translations is addressed next.
Let $P$ be a normal program and let its bit-vector translation be as defined above.
If $M$ is an answer set of $P$, then there is a model of the translation whose propositional part coincides with $M$ on $\mathrm{At}(P)$.
If an interpretation is a model of the translation, then the restriction of its propositional part to $\mathrm{At}(P)$ is an answer set of $P$.
To establish the correspondence of answer sets and models as formalized above, we appeal to the analogous property of the translation of into difference logic (DL), denoted here by . In DL, theory atoms of the form $x - y \le k$ constrain the difference of two integer variables $x$ and $y$. Models can be represented as pairs where is a propositional interpretation and maps the constants of theory atoms to integers so that . The rest is analogous to Definition 2.
() Suppose that is an answer set of . Then the results of imply that there is a model of such that . The valuation is condensed for each non-trivial SCC of as follows. Let us partition into such that (i) for each and , (ii) for each (a special variable is used as a placeholder for the constant in the translation ), and (iii) for each , , and , . Then define the bit vector associated with an atom by setting iff the bit of is , i.e., . It follows that iff for any . Moreover, we have iff for any . Due to the similar structures of and , we obtain as desired.
() Let be a model of . Then define such that where on the left hand side stands for the integer variable corresponding to the bit vector on the right hand side. It follows that iff . By setting , we obtain if and only if . The strong analogy present in the structures of and implies that is a model of . Thus, is an answer set of by . ∎
Even tighter relationships of answer sets and models can be established for the translations , , and . It can be shown that the model of corresponding to an answer set of is unique, i.e., there is no other model of the translation such that . These results contrast with : the analogous extensions guarantee the uniqueness of in a model but there are always infinitely many copies of such that . Such a valuation can be simply obtained by setting for any .
4 Native Support for Extended Rule Types
The input syntax of the smodels system was soon extended by further rule types. In solver interfaces, these rule types usually take the following simple syntactic forms:
The body of a choice rule (14) is interpreted in the same way as that of a normal rule (1). The head, in contrast, allows one to derive any subset of its atoms if the body is satisfied, and to make a choice in this way. The head of a cardinality rule (15) is derived if its body is satisfied, i.e., the number of satisfied body literals reaches the lower bound. A weight rule of the form (16) generalizes this idea by assigning arbitrary positive weights to literals (rather than 1s). The body is satisfied if the sum of the weights of the satisfied literals reaches the bound, thus enabling one to infer the head using the rule. In practice, the grounding components used in ASP systems allow for more versatile use of cardinality and weight rules, but the primitive forms (14), (15), and (16) provide a solid basis for efficient implementation via translations. The reader is referred to for a generalization of answer sets for programs involving such extended rule types. The respective class of weight constraint programs (WCPs) is typically supported by smodels compatible systems.
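The satisfaction condition of a weight-rule body can be made concrete with a small sketch, assuming weighted literals are passed as atom-to-weight maps for the positive and negative parts (a representation of our choosing, for illustration only):

```python
def weight_body_holds(interp, pos_weights, neg_weights, bound):
    """A weight-rule body is satisfied when the weights of its satisfied
    literals sum to at least the bound: a positive literal is satisfied
    when its atom is true, a negative literal when its atom is false."""
    total = sum(w for atom, w in pos_weights.items() if atom in interp)
    total += sum(w for atom, w in neg_weights.items() if atom not in interp)
    return total >= bound
```

A cardinality rule is then simply the special case in which every weight equals 1.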
Whenever appropriate, it is possible to translate extended rule types as introduced above back to normal rules. To this end, a number of transformations are addressed in and have been implemented in a tool called lp2normal (http://www.tcs.hut.fi/Software/asptools/). For instance, the head of a choice rule (14) can be captured in terms of the rules
where are new atoms and is a new atom standing for the body of (14), which can be defined using (14) with the head replaced by . We assume that this transformation is applied first to remove choice rules when the goal is to translate extended rule types into bit-vector logic. The strengths of this transformation are its locality, i.e., it can be applied on a rule-by-rule basis, and its linearity with respect to the length of the original rule (14). In contrast, a linear normalization of cardinality and weight rules seems impossible. Thus, we also provide direct translations into formulas of bit-vector logic.
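The rule-by-rule character of choice-rule elimination can be sketched as follows; rules are (head, positive body, negative body) triples, the body is assumed to be already named by a fresh atom, and the primed atom names standing for the new complementary atoms are our illustrative convention:

```python
def normalize_choice_head(head_atoms, body_atom):
    """For a choice rule {a1,...,an} <- body, with the body named by the
    fresh atom body_atom, emit for each head atom a the pair of rules
        a  <- body_atom, not a'
        a' <- not a
    so that a can be freely included or excluded whenever the body holds."""
    rules = []
    for a in head_atoms:
        comp = a + "'"                      # fresh complement atom (assumed unused)
        rules.append((a, {body_atom}, {comp}))
        rules.append((comp, set(), {a}))
    return rules
```

Note that the output grows linearly in the number of head atoms, matching the locality and linearity properties discussed above.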
We present the translation of a weight rule (16); the translation of a cardinality rule (15) is obtained as a special case. The body of a weight rule can be evaluated using bit vectors of a suitable width constrained by formulas
The lower bound of (16) can be checked in terms of a formula , where we assume that is of width , since the rule can be safely deleted otherwise. In view of the overall translation, the formula can be used in conjunction with the completion formula (4). Weight rules also contribute to the dependency graph in analogy to normal rules, i.e., the head depends on all positive body atoms. In this way, the positive dependency graph generalizes for programs having extended rules.
5 Experimental Results
A new translator called lp2bv was implemented as a derivative of lp2diff (http://www.tcs.hut.fi/Software/lp2diff/), which translates logic programs into difference logic; in contrast, the new translator provides its output in the bit-vector format. In analogy to its predecessor, it expects its input in the smodels (http://www.tcs.hut.fi/Software/smodels/) file format. Models of the resulting bit-vector theory are searched for using boolector (v. 1.4.1, http://fmv.jku.at/boolector/) and z3 (v. 2.11, http://research.microsoft.com/en-us/um/redmond/projects/z3/) as back-end solvers. The goal of our preliminary experiments was to see how the performance of systems based on lp2bv compares with that of the state-of-the-art ASP solver clasp (v. 1.3.5, http://www.cs.uni-potsdam.de/clasp/). The experiments were based on the NP-complete benchmarks of the ASP Competition 2009. In this benchmark collection, there are 23 benchmark problems with 516 instances in total. Before invoking a translator and the respective SMT solver, we performed a few preprocessing steps, as detailed in Figure 1, by calling:
gringo (v. 2.0.5), for grounding the problem encoding and a given instance;
smodels (v. 2.34, http://www.tcs.hut.fi/Software/smodels/), for simplifying the resulting ground program;
lpcat (v. 1.18), for removing all unused atom numbers, for making the atom table of the ground program contiguous, and for extracting the symbols for later use; and
lp2normal (v. 1.11), for normalizing the program.
The last step is optional and is not included as part of the pipeline in Figure 1. Pipelines of this kind were executed under the Linux/Ubuntu operating system on six-core AMD Opteron 2435 processors with a 2.6 GHz clock rate and a 2.7 GB memory limit, which corresponds to the amount of memory available in the ASP Competition 2009.
For each system based on a translator and a back-end solver, there are four variants of the system to consider: W indicates that only weak ranking constraints are used, while L, G, and LG mean that either local, or global, or both local and global strong ranking constraints, respectively, are employed when translating the logic program.
| Benchmark | #Inst | clasp | lp2bv+boolector W | L | G | LG | lp2bv+z3 W | L | G | LG | lp2diff+z3 W | L | G | LG |
| Total | 516 | 347/118 | 188/88 | 161/83 | 174/87 | 176/80 | 142/75 | 147/69 | 124/70 | 135/69 | 257/103 | 251/98 | 225/99 | 226/98 |
| KnightTour | 10 | 8/0 | 2/0 | 1/0 | 0/0 | 0/0 | 1/0 | 0/0 | 0/0 | 1/0 | 6/0 | 6/0 | 4/0 | 5/0 |
| GraphColouring | 29 | 8/0 | 7/0 | 7/0 | 7/0 | 7/0 | 6/0 | 7/0 | 7/0 | 7/0 | 7/0 | 7/0 | 7/0 | 7/0 |
| WireRouting | 23 | 11/11 | 2/3 | 1/1 | 1/2 | 0/2 | 1/3 | 0/0 | 0/0 | 0/1 | 3/3 | 2/3 | 2/4 | 5/3 |
| DisjunctiveScheduling | 10 | 5/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 |
| GraphPartitioning | 13 | 6/7 | 3/0 | 3/0 | 3/0 | 3/0 | 4/0 | 4/0 | 4/0 | 3/0 | 6/2 | 6/1 | 6/1 | 6/1 |
| ChannelRouting | 11 | 6/2 | 6/2 | 6/2 | 6/2 | 6/2 | 5/2 | 6/2 | 6/2 | 6/2 | 6/2 | 6/2 | 6/2 | 6/2 |
| Solitaire | 27 | 19/0 | 2/0 | 5/0 | 1/0 | 4/0 | 0/0 | 0/0 | 0/0 | 0/0 | 21/0 | 21/0 | 20/0 | 21/0 |
| Labyrinth | 29 | 26/0 | 1/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 |
| WeightBoundedDominatingSet | 29 | 26/0 | 18/0 | 18/0 | 17/0 | 18/0 | 12/0 | 12/0 | 11/0 | 12/0 | 22/0 | 22/0 | 22/0 | 21/0 |
| 15Puzzle | 16 | 16/0 | 16/0 | 15/0 | 14/0 | 15/0 | 4/0 | 4/0 | 5/0 | 5/0 | 0/0 | 0/0 | 0/0 | 0/0 |
| BlockedNQueens | 29 | 15/14 | 2/2 | 0/2 | 1/2 | 0/2 | 1/0 | 2/0 | 2/0 | 0/0 | 15/13 | 15/13 | 15/12 | 15/13 |
| ConnectedDominatingSet | 21 | 10/10 | 10/11 | 9/8 | 10/11 | 6/3 | 10/10 | 9/10 | 10/9 | 10/9 | 9/8 | 7/6 | 9/7 | 7/6 |
| EdgeMatching | 29 | 29/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | 3/0 | 1/0 | 3/0 | 2/0 |
| Fastfood | 29 | 10/19 | 9/16 | 10/16 | 10/16 | 9/16 | 9/9 | 9/9 | 9/10 | 9/9 | 10/18 | 10/18 | 10/18 | 10/18 |
| GeneralizedSlitherlink | 29 | 29/0 | 29/0 | 20/0 | 29/0 | 29/0 | 29/0 | 29/0 | 16/0 | 29/0 | 29/0 | 29/0 | 29/0 | 29/0 |
| HamiltonianPath | 29 | 29/0 | 27/0 | 25/0 | 29/0 | 28/0 | 26/0 | 27/0 | 25/0 | 26/0 | 29/0 | 29/0 | 29/0 | 29/0 |
| Hanoi | 15 | 15/0 | 15/0 | 15/0 | 15/0 | 15/0 | 5/0 | 5/0 | 5/0 | 4/0 | 15/0 | 15/0 | 15/0 | 15/0 |
| HierarchicalClustering | 12 | 8/4 | 8/4 | 8/4 | 8/4 | 8/4 | 4/4 | 4/4 | 4/4 | 4/4 | 8/4 | 8/4 | 8/4 | 8/4 |
| Sudoku | 10 | 10/0 | 5/0 | 4/0 | 4/0 | 5/0 | 4/0 | 4/0 | 4/0 | 4/0 | 9/0 | 8/0 | 8/0 | 9/0 |
| TravellingSalesperson | 29 | 29/0 | 3/0 | 0/0 | 6/0 | 10/0 | 0/0 | 8/0 | 0/0 | 0/0 | 29/0 | 29/0 | 7/0 | 7/0 |
Table 1 collects the results from our experiments without normalization, whereas Table 2 shows the results when lp2normal was used to remove the extended rule types discussed in Section 4. In both tables, the first column gives the name of the benchmark, followed by the number of instances of that particular benchmark in the second column. The following columns indicate the numbers of instances that were solved by the systems considered in our experiments. A notation like 8/4 means that the system was able to solve eight satisfiable and four unsatisfiable instances of that particular benchmark. Hence, if there are 15 instances in a benchmark and the system could only solve 8/4, the system was unable to solve the remaining three instances within the time limit of 600 seconds, i.e., ten minutes, per instance. (One observation is that the performance of the systems based on lp2bv is quite stable: even when we extended the time limit to 20 minutes, the results did not change much; differences of only one or two instances were perceived in most cases.) As regards the number of solved instances in each benchmark, the best performing translation-based approaches are highlighted in boldface. Though they are not shown in all tables, we also ran the experiments using the translator lp2diff with z3 as the back-end solver, and a summary is included in Table 3, giving an overview of the experimental results in terms of the total numbers of instances solved out of 516.
It is apparent that the systems based on lp2bv did not perform very well without normalization. As indicated by Table 3, their overall performance was even worse than that of the systems using lp2diff for translation and z3 for model search. However, if the input was first translated into a normal logic program using lp2normal, i.e., before the translation into a bit-vector theory, the performance was clearly better. Actually, it exceeded that of the systems based on lp2diff and came closer to that of clasp. We note that normalization does not help as much in the case of lp2diff, and the experimental results obtained using normalized and unnormalized instances are quite similar in terms of solved instances. Thus it seems that solvers for bit-vector logic are not able to exploit the native translations of cardinality and weight rules from Section 4 in full. If an analogous translation into difference logic is used, as implemented in lp2diff, no such negative effect was perceived using z3. Our understanding is that the efficient graph-theoretic satisfiability check for difference constraints used in the search procedure of z3 makes the native translation feasible as well. As indicated by our test results, boolector is a clearly better back-end solver for lp2bv than z3. This was to be expected, since boolector is a native solver for bit-vector logic whereas z3 supports a wider variety of SMT fragments and can be used for more general purposes. Moreover, the design of lp2bv takes into account the operators of bit-vector logic which are directly supported by boolector rather than implemented as syntactic sugar.
| Benchmark | #Inst | clasp | lp2bv+boolector W | L | G | LG | lp2bv+z3 W | L | G | LG |
| Total | 516 | 346/113 | 279/102 | 243/100 | 278/101 | 281/100 | 240/106 | 231/99 | 224/101 | 232/99 |
| KnightTour | 10 | 10/0 | 2/0 | 2/0 | 1/0 | 0/0 | 1/0 | 0/0 | 0/0 | 0/0 |
| GraphColouring | 29 | 9/0 | 8/0 | 8/0 | 8/0 | 8/0 | 9/2 | 9/2 | 9/2 | 9/2 |
| WireRouting | 23 | 11/11 | 2/6 | 1/3 | 1/3 | 1/3 | 2/7 | 1/4 | 1/4 | 1/3 |
| GraphPartitioning | 13 | 4/1 | 5/0 | 5/0 | 4/0 | 5/0 | 2/1 | 2/1 | 2/1 | 2/0 |
| Solitaire | 27 | 18/0 | 23/0 | 23/0 | 23/0 | 23/0 | 22/0 | 22/0 | 22/0 | 22/0 |
| Labyrinth | 29 | 27/0 | 1/0 | 1/0 | 2/0 | 3/0 | 0/0 | 0/0 | 0/0 | 0/0 |
| WeightBoundedDominatingSet | 29 | 25/0 | 15/0 | 15/0 | 15/0 | 16/0 | 10/0 | 10/0 | 10/0 | 10/0 |
| 15Puzzle | 16 | 15/0 | 16/0 | 16/0 | 16/0 | 16/0 | 11/0 | 10/0 | 11/0 | 11/0 |
| EdgeMatching | 29 | 29/0 | 29/0 | 29/0 | 29/0 | 29/0 | 29/0 | 29/0 | 29/0 | 29/0 |
| GeneralizedSlitherlink | 29 | 29/0 | 29/0 | 21/0 | 29/0 | 29/0 | 29/0 | 29/0 | 21/0 | 29/0 |
| HamiltonianPath | 29 | 29/0 | 29/0 | 28/0 | 29/0 | 29/0 | 29/0 | 29/0 | 29/0 | 29/0 |
| Hanoi | 15 | 15/0 | 15/0 | 15/0 | 15/0 | 15/0 | 15/0 | 15/0 | 15/0 | 15/0 |
| TravellingSalesperson | 29 | 29/0 | 16/0 | 0/0 | 27/0 | 27/0 | 0/0 | 0/0 | 0/0 | 0/0 |
In addition, we note on the basis of our results that the performance of the state-of-the-art ASP solver clasp is significantly better, and the translation-based approaches to computing stable models are still left behind. By the results of Table 2, even the best variants of the systems based on lp2bv did not work well enough to compete with clasp. The difference is especially due to the following benchmarks: Knight Tour, Wire Routing, Graph Partitioning, Labyrinth, Weight Bounded Dominating Set, Fastfood, and Travelling Salesperson. All of them involve either recursive rules (Knight Tour, Wire Routing, and Labyrinth), weight rules (Weight Bounded Dominating Set and Fastfood), or both (Graph Partitioning and Travelling Salesperson). Hence, it seems that handling recursive rules and weight constraints in the translational approach is less efficient than their native implementation in clasp. When the current normalization techniques are used to remove cardinality and weight rules, the sizes of ground programs tend to increase significantly, in particular if weight rules are abundant. For example, after normalization the ground programs are ten times larger for the Weight Bounded Dominating Set benchmark and five times larger for Fastfood. It is also worth pointing out that the efficiency of clasp turned out to be insensitive to normalization.
While having trouble with recursive rules and weight constraints in particular benchmarks, the translational approach handles certain large instances quite well. The largest instances in the experiments belong to the Disjunctive Scheduling benchmark, all of whose instances are ground programs over one megabyte in size; yet after normalization (which does not significantly affect the size of the ground programs in this benchmark), the lp2bv systems can solve as many instances as clasp.
In this paper, we present a novel and concise translation from normal logic programs into fixed-width bit-vector theories. Moreover, the extended rule types supported by smodels compatible answer set solvers can be covered via native translations. The length of the resulting translation is linear with respect to the length of the original program. The translation has been implemented as a translator, lp2bv, which enables the use of bit-vector solvers in the search for answer sets. Our preliminary experimental results indicate a level of performance which is similar to that obtained using solvers for difference logic. However, this presumes that extended rule types are first translated into normal rules and the translation into bit-vector logic is applied thereafter. One potential explanation for such behavior is the way in which SMT solvers implement reasoning with bit vectors: a predominant strategy is to translate theory atoms involving bit vectors into propositional formulas and to apply satisfiability checking techniques systematically. We anticipate that improved performance could be obtained if native support for certain bit-vector primitives were incorporated into SMT solvers directly. When comparing to the state-of-the-art ASP solver clasp, we noticed that the performance of the translation-based approach compared unfavorably, in particular, for benchmarks which contain recursive rules, weight constraints, or both. This suggests that the performance can be improved by developing new translation techniques for these two features. In order to obtain a more comprehensive view of the performance characteristics of the translational approach, the plan is to extend our experimental setup to include benchmarks that were used in the third ASP competition. Moreover, we intend to use the new SMT library format in future versions of our translators.
This research has been partially funded by the Academy of Finland under the project “Methods for Constructing and Solving Large Constraint Models” (MCM, #122399).
-  Krzysztof Apt, Howard Blair, and Adrian Walker. Towards a theory of declarative knowledge. In Foundations of Deductive Databases and Logic Programming, pages 89–148. Morgan Kaufmann, 1988.
-  Marcello Balduccini. Industrial-size scheduling with ASP+CP. In Delgrande and Faber, pages 284–296.
-  Clark Barrett, Roberto Sebastiani, Sanjit Seshia, and Cesare Tinelli. Satisfiability modulo theories. In Biere et al., pages 825–885.
-  Clark Barrett, Aaron Stump, and Cesare Tinelli. The SMT-LIB standard version 2.0.
-  Armin Biere, Marijn Heule, Hans van Maaren, and Toby Walsh, editors. Handbook of Satisfiability, volume 185 of Frontiers in Artificial Intelligence and Applications. IOS Press, 2009.
-  Robert Brummayer and Armin Biere. Boolector: An efficient SMT solver for bit-vectors and arrays. In Stefan Kowalewski and Anna Philippou, editors, TACAS, volume 5505 of Lecture Notes in Computer Science, pages 174–177. Springer, 2009.
-  Francesco Calimeri, Giovambattista Ianni, Francesco Ricca, Mario Alviano, Annamaria Bria, Gelsomina Catalano, Susanna Cozza, Wolfgang Faber, Onofrio Febbraro, Nicola Leone, Marco Manna, Alessandra Martello, Claudio Panetta, Simona Perri, Kristian Reale, Maria Carmela Santoro, Marco Sirianni, Giorgio Terracina, and Pierfrancesco Veltri. The third answer set programming competition: Preliminary report of the system competition track. In Delgrande and Faber, pages 388–403.
-  Keith Clark. Negation as failure. In Logic and Data Bases, pages 293–322, 1977.
-  Leonardo Mendonça de Moura and Nikolaj Bjørner. Z3: An efficient SMT solver. In TACAS, volume 4963 of Lecture Notes in Computer Science, pages 337–340. Springer, 2008.
-  James Delgrande and Wolfgang Faber, editors. Logic Programming and Nonmonotonic Reasoning - 11th International Conference, LPNMR 2011, Vancouver, Canada, May 16-19, 2011. Proceedings, volume 6645 of Lecture Notes in Computer Science. Springer, 2011.
-  Marc Denecker, Joost Vennekens, Stephen Bond, Martin Gebser, and Miroslaw Truszczynski. The second answer set programming competition. In Erdem et al., pages 637–654.
-  Esra Erdem, Fangzhen Lin, and Torsten Schaub, editors. Logic Programming and Nonmonotonic Reasoning, 10th International Conference, LPNMR 2009, Potsdam, Germany, September 14-18, 2009. Proceedings, volume 5753 of Lecture Notes in Computer Science. Springer, 2009.
-  Martin Gebser, Benjamin Kaufmann, André Neumann, and Torsten Schaub. clasp: A conflict-driven answer set solver. In Chitta Baral, Gerhard Brewka, and John Schlipf, editors, LPNMR, volume 4483 of Lecture Notes in Computer Science, pages 260–265. Springer, 2007.
-  Martin Gebser, Max Ostrowski, and Torsten Schaub. Constraint answer set solving. In Patricia Hill and David Scott Warren, editors, ICLP, volume 5649 of Lecture Notes in Computer Science, pages 235–249. Springer, 2009.
-  Michael Gelfond and Nicola Leone. Logic programming and knowledge representation – the A-Prolog perspective. Artif. Intell., 138(1-2):3–38, 2002.
-  Michael Gelfond and Vladimir Lifschitz. The stable model semantics for logic programming. In ICLP/SLP, pages 1070–1080, 1988.
-  Tomi Janhunen. Some (in)translatability results for normal logic programs and propositional theories. Journal of Applied Non-Classical Logics, 16(1–2):35–86, June 2006.
-  Tomi Janhunen, Guohua Liu, and Ilkka Niemelä. Tight integration of non-ground answer set programming and satisfiability modulo theories. In Eugenia Ternovska and David Mitchell, editors, Working Notes of Grounding and Transformations for Theories with Variables, pages 1–13, Vancouver, Canada, May 2011.
-  Tomi Janhunen and Ilkka Niemelä. Compact translations of non-disjunctive answer set programs to propositional clauses. In Marcello Balduccini and Tran Cao Son, editors, Logic Programming, Knowledge Representation, and Nonmonotonic Reasoning, volume 6565 of Lecture Notes in Computer Science, pages 111–130. Springer, 2011.
-  Tomi Janhunen, Ilkka Niemelä, and Mark Sevalnev. Computing stable models via reductions to difference logic. In Erdem et al., pages 142–154.
-  Victor Marek and Venkatramana Subrahmanian. The relationship between stable, supported, default and autoepistemic semantics for general logic programs. Theor. Comput. Sci., 103(2):365–386, 1992.
-  Victor Marek and Mirek Truszczyński. Stable models and an alternative logic programming paradigm. In The Logic Programming Paradigm: A 25-Year Perspective, pages 375–398. Springer, 1999.
-  Veena Mellarkod and Michael Gelfond. Integrating answer set reasoning with constraint solving techniques. In Jacques Garrigue and Manuel Hermenegildo, editors, FLOPS, volume 4989 of Lecture Notes in Computer Science, pages 15–31. Springer, 2008.
-  Ilkka Niemelä. Logic programs with stable model semantics as a constraint programming paradigm. Ann. Math. Artif. Intell., 25(3-4):241–273, 1999.
-  Ilkka Niemelä. Stable models and difference logic. Ann. Math. Artif. Intell., 53(1-4):313–329, 2008.
-  Robert Nieuwenhuis and Albert Oliveras. DPLL(T) with exhaustive theory propagation and its application to difference logic. In Kousha Etessami and Sriram Rajamani, editors, CAV, volume 3576 of Lecture Notes in Computer Science, pages 321–334. Springer, 2005.
-  Silvio Ranise and Cesare Tinelli. The SMT-LIB format: An initial proposal.
-  Patrik Simons, Ilkka Niemelä, and Timo Soininen. Extending and implementing the stable model semantics. Artif. Intell., 138(1-2):181–234, 2002.