1 Introduction
Preferences are often an indispensable means in modeling since they allow for identifying preferred solutions among all feasible ones. Accordingly, many forms of preferences have already found their way into systems for Answer Set Programming (ASP; Baral (2003)). For instance, smodels provides optimization statements for expressing cost functions on sets of weighted literals (Simons et al. 2002), and dlv (Leone et al. 2006) offers weak constraints for the same purpose. Further approaches (Delgrande et al. 2003; Eiter et al. 2003) allow for expressing various types of preferences among rules. Unlike this, no readily applicable implementation techniques are available for qualitative preferences among answer sets, like inclusion minimality, Pareto-based preferences as used in (Sakama and Inoue 2000; Brewka et al. 2004), or more complex combinations as proposed in (Brewka 2004). This shortcoming is due to their higher expressiveness leading to a significant increase in computational complexity, lifting decision problems (for normal logic programs) from the first to the second level of the polynomial time hierarchy (cf.
(Garey and Johnson 1979)). Roughly speaking, preferences among answer sets combine a candidate generation problem with an optimality test. The first one defines feasible solutions, while the second one ensures that there are no better solutions according to the preferences at hand. For implementing such problems, Eiter and Gottlob (1995) invented the saturation technique, using the elevated complexity of disjunctive logic programming. In stark contrast to the ease of common ASP modeling (e.g., strategic companies can be "naturally" encoded in disjunctive ASP (Leone et al. 2006)), however, the saturation technique is rather involved and hardly usable by ASP laymen. For taking this burden of intricate modeling off the user, we propose a general, saturation-based implementation technique capturing various forms of qualitative preferences among answer sets. This is driven by the desire to guarantee immediate availability and thus to stay within the realm of ASP rather than to build separate (imperative) components. To this end, we take advantage of recent advances in ASP grounding technology, admitting an easy use of meta-modeling techniques. The idea is to reinterpret existing optimization statements in order to express complex preferences among answer sets. While, for instance in smodels, the meaning of a #minimize statement is to compute answer sets incurring minimum costs, we may alternatively use it for selecting inclusion-minimal ones. In contrast to the identification of minimal models, investigated by Janhunen and Oikarinen (2004; 2008), a major challenge lies in guaranteeing the stability property of implicit counterexamples, which must be more preferred answer sets rather than (arbitrary) models. For this purpose, we develop a refined meta-program qualifying answer sets as viable counterexamples. Unlike the approach of Eiter and Polleres (2006), our encoding avoids "guessing" a level mapping to describe the formation of a counterexample, but directly denies models for which there is no such construction.
Notably, our meta-programs apply to (reified) extended logic programs (Simons et al. 2002), possibly including choice rules and constraints, and we are unaware of any existing meta-encoding of their answer sets, either as candidates or as counterexamples refuting optimality.
2 Background
We consider extended logic programs (Simons et al. 2002) allowing for (proper) disjunctions in heads of rules (Gelfond and Lifschitz 1991). A rule r is of the following form:

  H ← B_1, …, B_n

By head(r) = H and body(r) = {B_1, …, B_n}, we denote the head and the body of r, respectively, where "not" stands for default negation. The head H is a disjunction a_1 ; … ; a_m over atoms a_i belonging to some alphabet A, or a constraint l [ℓ_1 = w_1, …, ℓ_k = w_k] u. In the latter, each ℓ_j = a_j or ℓ_j = not a_j is a literal and w_j a non-negative integer weight for 1 ≤ j ≤ k; l and u are integers providing a lower and an upper bound. Either or both of l and u can be omitted, in which case they are identified with the (trivial) bounds −∞ and +∞, respectively. A rule r such that head(r) is the empty disjunction is an integrity constraint. Each body component B_i is either an atom or a constraint for 1 ≤ i ≤ n. If n = 0, r is called a fact, and we skip "←" when writing facts below. For a set of atoms, a disjunction, or a constraint, we write atom(·) for the set of atoms occurring in the respective construct. Note that the elements ℓ_1 = w_1, …, ℓ_k = w_k of a constraint form a multiset, possibly containing duplicates. For the head or a body element e of a rule, atom(e) is defined accordingly.
A (Herbrand) interpretation X is represented by the set of its entailed atoms. The satisfaction relation "⊨" on rules is inductively defined as follows:

- X ⊨ a if a ∈ X, for an atom a,

- X ⊨ not B if X ⊭ B,

- X ⊨ l [ℓ_1 = w_1, …, ℓ_k = w_k] u if l ≤ Σ_{1 ≤ j ≤ k, X ⊨ ℓ_j} w_j ≤ u,

- X ⊨ a_1 ; … ; a_m if X ⊨ a_i for some 1 ≤ i ≤ m,

- X ⊨ body(r) if X ⊨ B for all B ∈ body(r), and

- X ⊨ r if X ⊨ head(r) or X ⊭ body(r).
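To make the satisfaction relation for constraints concrete, the following sketch evaluates a weight constraint against an interpretation. The data layout and function names are ours, chosen for illustration only.

```python
def holds_literal(lit, X):
    """A literal is an atom (a string) or a default-negated atom ('not', a)."""
    if isinstance(lit, tuple) and lit[0] == "not":
        return lit[1] not in X
    return lit in X

def holds_constraint(lower, weighted_lits, upper, X):
    """weighted_lits is a multiset, given as a list of (literal, weight);
    the constraint holds if the weights of satisfied literals sum up to a
    value between the lower and the upper bound."""
    total = sum(w for (lit, w) in weighted_lits if holds_literal(lit, X))
    return lower <= total <= upper
```

For instance, a constraint 1 [p = 1, not t = 1] 2 holds in the interpretation {p} (both weighted literals are satisfied, summing to 2) but not in {t} (neither is satisfied).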
A logic program P is a set of rules, and X is a model of P if X ⊨ r for every r ∈ P. The reduct P^X of P wrt X is obtained as follows. For all rules r ∈ P whose bodies are satisfied wrt X, constraints in heads are replaced with the individual atoms in them that belong to X, and negative components in bodies are eliminated, where the lower bounds of residual constraints (whose upper bounds become trivial) are reduced by the weights of eliminated components satisfied wrt X. Finally, X is an answer set of P if X is a model of P such that no proper subset of X is a model of P^X. In view of the latter condition, note that an answer set is a minimal model of its own reduct.
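As an illustration of the reduct-based definition, the following sketch checks the answer set property for normal rules only (no disjunctions or constraints); the rule representation and all names are ours, not the paper's.

```python
from itertools import combinations

def reduct(program, X):
    """program: list of rules (head, pos_body, neg_body) with atom sets;
    keep rules whose negative body is disjoint from X, dropping 'not'."""
    return [(h, pos) for (h, pos, neg) in program if not (neg & X)]

def is_model(positive_program, M):
    return all(h in M for (h, pos) in positive_program if pos <= M)

def is_answer_set(program, X):
    red = reduct(program, X)
    if not is_model(red, X):
        return False
    # minimality: no proper subset of X may be a model of the reduct
    atoms = sorted(X)
    return not any(is_model(red, set(sub))
                   for r in range(len(atoms))
                   for sub in combinations(atoms, r))
```

For the program {p ← not q, q ← not p}, both {p} and {q} pass the test, while {p, q} does not: its reduct is empty, so the empty set is a smaller model.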
The definition of answer sets provided above applies to logic programs containing extended constructs (such as weight constraints) under "choice semantics" (Simons et al. 2002), while additionally allowing for disjunctions under minimal-model semantics (wrt a reduct). We use these features to embed extended constructs of an object program into a disjunctive meta-program, so that their combination yields optimal answer sets of the object program. To this end, we reinterpret #minimize statements of the following form:
(1)  #minimize [ ℓ_1 = w_1 @ p_1, …, ℓ_n = w_n @ p_n ]
Like with constraints, every ℓ_j is a literal and every w_j an integer weight for 1 ≤ j ≤ n, while p_j additionally provides an integer priority level. (Explicit priority levels are supported in recent versions of the grounder gringo (Gebser et al.); this avoids a dependency of priorities on input order, which is considered by lparse (Syrjänen) if several statements are provided. Priority levels are also supported by dlv (Leone et al. 2006) in weak constraints.) Furthermore, we admit negative weights in #minimize statements, where they cannot raise semantic problems (cf. Ferraris (2005)) going along with the rewriting of constraints suggested in (Simons et al. 2002). Priorities allow for representing a sequence of lexicographically ordered objectives, where greater levels are more significant than smaller ones. By default, a #minimize statement distinguishes optimal answer sets of a program P in the following way. For any set X of atoms and any integer j, let Σ_j^X denote the sum of weights w_i over all occurrences ℓ_i = w_i @ j of weighted literals in (1) such that X ⊨ ℓ_i. An answer set X of P is dominated if there is an answer set Y of P such that, for some priority level j, Σ_j^Y < Σ_j^X and Σ_j'^Y = Σ_j'^X for all j' > j, and X is optimal otherwise.
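The default domination criterion just described can be sketched as follows; the triple representation (literal, weight, level) and all names are our own, and negated literals are evaluated as in the satisfaction relation above.

```python
def level_sum(weighted_lits, X, j):
    """Sum of weights at level j over literals satisfied by X."""
    def holds(lit):
        return lit[1] not in X if isinstance(lit, tuple) else lit in X
    return sum(w for (lit, w, lvl) in weighted_lits if lvl == j and holds(lit))

def dominates(weighted_lits, Y, X):
    """Y dominates X: strictly smaller sum at some level j, equal above j."""
    levels = sorted({lvl for (_, _, lvl) in weighted_lits}, reverse=True)
    for j in levels:
        sy, sx = level_sum(weighted_lits, Y, j), level_sum(weighted_lits, X, j)
        if sy < sx:
            return True   # better at j, and equal at all greater levels
        if sy > sx:
            return False
    return False
```

For example, with literals a and b at level 1 and c at level 2, the set {a} dominates {a, c} (smaller sum at the more significant level 2).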
In the following, we assume that every logic program P is accompanied with one (possibly empty) #minimize statement of the form (1). Instead of the default semantics, we consider Pareto efficiency wrt priority levels j, weights w, and several distinct optimization criteria. In view of this, we use levels for inducing a lexicographic order, while weights are used for grouping literals (rather than for summation). Pareto improvement then builds upon a two-dimensional structure of orderings among answer sets, induced by the priority levels j and weights w occurring in (1). In turn, each such pairing (j, w) is associated with some of the following orderings. By Y ⊑_c^{j,w} X, we denote that the cardinality of the multiset of occurrences ℓ_i = w @ j of weighted literals in (1) such that Y ⊨ ℓ_i is not greater than the one of the corresponding multiset for X. Furthermore, we write Y ⊑_i^{j,w} X if, for any weighted literal ℓ_i = w @ j occurring in (1), Y ⊨ ℓ_i implies X ⊨ ℓ_i. As detailed in the extended version of this paper (Gebser et al. 2011), we additionally consider the approach of Sakama and Inoue (2000) and denote by Y ⊑_p^{j,w} X that Y is preferable to X according to a (given) preference relation among the literals ℓ_i such that ℓ_i = w @ j occurs in (1). Given a logic program P and a collection ⊑ of relations of the form ⊑_o^{j,w} for priority levels j, weights w, and o ∈ {c, i, p}, an answer set Y of P dominates an answer set X of P wrt ⊑ if there are a priority level j and a weight w such that X ⊑_o^{j,w} Y does not hold, while Y ⊑_o^{j',w'} X holds for all relations ⊑_o^{j',w'} in ⊑ with j' ≥ j. In turn, an answer set X of P is optimal wrt ⊑ if there is no answer set Y of P that dominates X wrt ⊑.
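A minimal sketch of this dominance test, assuming our own representation of the (level, weight) groups and restricted to the cardinality and inclusion criteria:

```python
def holds(lit, X):
    return lit[1] not in X if isinstance(lit, tuple) else lit in X

def not_worse(criterion, lits, Y, X):
    """Y is not worse than X on one (level, weight) group of literals."""
    if criterion == "card":
        return sum(holds(l, Y) for l in lits) <= sum(holds(l, X) for l in lits)
    if criterion == "incl":
        return all(holds(l, X) for l in lits if holds(l, Y))
    raise ValueError(criterion)

def pareto_dominates(groups, Y, X):
    """groups: dict mapping level -> list of (weight, criterion, literals).
    Y dominates X if, at the most significant level where they differ,
    Y is not-worse in every group while X fails not-worse in some group."""
    for level in sorted(groups, reverse=True):
        y_ok = all(not_worse(c, ls, Y, X) for (_, c, ls) in groups[level])
        x_ok = all(not_worse(c, ls, X, Y) for (_, c, ls) in groups[level])
        if y_ok and not x_ok:
            return True
        if not y_ok:
            return False
    return False
```

For instance, with an inclusion group over {a, b} and a cardinality group over {c, d} at one level, {a} dominates {a, b}, while the converse fails.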
As an example, consider the following program, referred to as P0:
(2)  
(3)  
(4) 
This program has five answer sets. (Sets in (2) and (3) are used as shorthands for lists of weighted literals, each with weight 1.) In addition, let P1 denote the union of P0 with the following #minimize statement:
(5) 
This statement makes all but one of the atoms of P0 subject to minimization. Passing P1 to gringo and an answer set solver like smodels yields a single answer set incurring minimum costs. Note, however, that P1 has three inclusion-minimal answer sets, which cannot be computed directly from P1 via any available ASP system.
We implement the complex optimization criteria described above by meta-interpretation in disjunctive ASP. For transparency, we provide meta-programs as true ASP code in the first-order input language of gringo (Gebser et al.), including not and | as tokens for default negation and disjunction, respectively, as well as {ℓ_1, …, ℓ_k} as shorthand for [ℓ_1 = 1, …, ℓ_k = 1]. Further constructs are informally introduced by need in the remainder of this paper. Note that our (disjunctive) meta-programs apply to an extended object program that does not include proper disjunctions (over more than one atom). Unless stated otherwise, we below use the term extended program to refer to a logic program without proper disjunctions.
3 Basic MetaModeling
For reinterpreting #minimize statements by means of ASP, we take advantage of recent advances in ASP grounding, admitting an easy use of meta-modeling techniques. To be precise, we rely upon the unrestricted usage of function symbols and program reification as provided by gringo (Gebser et al.). The latter allows for turning an input program along with a #minimize statement into facts representing the structure of their ground instantiation.
For illustrating the format output by gringo, consider the facts in Line 1–15 of Listing 1, obtained by calling gringo with option --reify on program P0.
Let us detail the representation of the rule in (2), inducing the facts in Line 1–4. The predicate rule/2 is used to link the rule head and body. By convention, both are positive rule elements, as indicated via the functor pos/1. Furthermore, the term sum(1,0,2) tells us that the head is a constraint with lower bound 1 and (trivial) upper bound 2 over a list labeled 0 of weighted literals. In fact, the included literals are provided via the facts over wlist/4 given in Line 2, whose first arguments are 0. While the second arguments, 0 and 1, are simply indexes (enabling the representation of duplicates in multisets), the third ones provide the literals, p and t, each having the (default) weight 1, as given in the fourth arguments. Again by convention, the body of each rule is a conjunction, where the term conjunction(0) in Line 1 refers to the set labeled 0. Its single element, a positive constraint with lower bound 1 and upper bound 2 over a list labeled 1, is provided by the fact in Line 3. The corresponding weighted literals are described by the facts in Line 4; observe that the negative literal not t is represented in terms of the functor neg/1, applied to atom(t). The rules in (3) and (4) are represented analogously in Line 6–8 and 10–11, respectively. It is interesting to note that recurrences of lists of weighted literals (and sets) can reuse labels introduced before, as done in Line 8 by referring to 0. In fact, gringo identifies repetitions of structural entities and reuses labels. In addition to the rules of P0, the elements of nontrivial strongly connected components of its positive dependency graph (cf. (6) below) are provided in Line 13–15. While their usage is explained in the next section, note already that the members of the only such component, labeled 0, include atoms as well as (positive) body elements, i.e., conjunctions and constraints, connecting the component. Indeed, the existence of facts over scc/2 tells us that P0 is not tight (cf. Fages (1994)).
Now, we may compute all five answer sets of P0 (given in p0.lp) by combining the facts in Line 1–15 of Listing 1 with the basic meta-program in Listing 2 (meta.lp); following Unix customs, the minus symbol "-" below stands for the output of "gringo --reify p0.lp":
Each answer set of the meta-program applied to a reified program corresponds to an answer set of the reified program. More precisely, a set X of atoms is an answer set of the reified program iff the meta-program yields an answer set containing hold(atom(a)) exactly for the atoms a ∈ X. As indicated in the comments (preceded by %), our meta-program consists of three parts. Among the rule elements extracted in Line 3–13, only those occurring within bodies, identified via eleb/1, are relevant to the generation of answer sets specified in Line 17–28. (Additional head elements, given by elem/1, are of interest in the next section.) In fact, answer set generation follows the structure of reified programs, identifying conjunctions and constraints that hold (the ":" connective expands to the list of all instances of its left-hand side such that corresponding instances of literals on the right-hand side hold; cf. Syrjänen and Gebser et al.) to further derive atoms occurring in rule heads, either singular or within constraints (cf. Line 24–27). Line 28 deals with integrity constraints, represented via the constant false in heads of reified rules. The last part in Line 32 restricts the output of the meta-program's answer sets to the representations of original input atoms.
Finally, note that meta.lp does not inspect facts representing a reified #minimize statement, such as the ones in Line 17–19 of Listing 1, stemming from the statement in (5). Such facts over minimize/2 provide a priority level as the first argument and the label of a list of weighted literals, like the ones referred to from within terms over the functor sum/3, as the second argument. Rather than simply mirroring the standard meaning of #minimize statements (by encoding them analogously to rules; cf. Line 17–28 of Listing 2), we support flexible customizations. In fact, the next section presents our meta-programs implementing preference relations and Pareto efficiency, as described in the background.
4 Advanced MetaModeling
Given the reification of extended logic programs and the encoding of their answer sets in meta.lp, our approach to complex optimization is based on the idea that an answer set generated via meta.lp is optimal (and thus acceptable) only if it is not dominated by any other answer set. For implementing our approach, we exploit the capabilities of disjunctive ASP to compactly represent the space of all potential counterexamples, viz. answer sets dominating a candidate answer set at hand. To this end, we encode the subtasks of

(a) guessing an answer set as a potential counterexample and

(b) verifying that the counterexample dominates a candidate answer set.
A candidate answer set passes both phases if it turns out to be infeasible to guess a counterexample that dominates it. For expressing the non-existence of counterexamples, we make use of an error-indicating atom bot and saturation (Eiter and Gottlob 1995), deriving all atoms representing the space of counterexamples from bot. Since the semantics of disjunctive ASP is based on minimization, saturation makes sure that bot is derived only if it is inevitable, i.e., if it is impossible to construct a counterexample. However, via an integrity constraint, we can stipulate bot (and thus the non-existence of counterexamples) to hold, yet without providing any derivation of bot. In view of such a constraint and saturation, a successful candidate answer set is accompanied by all atoms representing counterexamples. Given that the reduct drops negative literals, the necessity that all atoms representing counterexamples are true implies that we cannot use their default negation in any meaningful way. Hence, we below encode potential counterexamples, i.e., answer sets of extended programs, and (non)dominance of a candidate answer set in disjunctive ASP without taking advantage of default negation (which is used in meta.lp).
For encoding the first subtask of guessing a counterexample, we rely on a characterization of answer sets in terms of an immediate consequence operator (cf. Lloyd (1987)), defined as follows for a logic program P and a set X of atoms: T_P(X) = {head(r) : r ∈ P, X ⊨ body(r)}. Furthermore, an iterative version of T_P can be defined in the following way: T_P^0(X) = X and T_P^{i+1}(X) = T_P(T_P^i(X)). In the context of an extended program P, possibly including choice rules, default negation, and upper bounds of weight constraints, we are interested in the least fixpoint of T applied wrt the reduct P^X. Since a fixpoint is reached in at most |atom(P)| applications of T_{P^X}, where atom(P) denotes the set of atoms occurring in P, the least fixpoint is given by T_{P^X}^{|atom(P)|}(∅). As pointed out in (Liu and You 2010), a model X of an extended program P is an answer set of P iff X = T_{P^X}^{|atom(P)|}(∅). Furthermore, Liu and You (2010) show that X violates the loop formula of some atom or loop if X is a model, but not an answer set, of P. This property motivates a "localization" of T_{P^X} on the basis of (circular) positive dependencies.
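The iterated operator on a (positive) reduct can be sketched as follows, using |atoms| rounds as the bound stated above; the rule representation (head atom, positive body set) is illustrative, not the paper's.

```python
def tp_step(positive_program, X):
    """One application of the operator: heads whose positive bodies hold."""
    return {h for (h, body) in positive_program if body <= X}

def tp_fixpoint(positive_program):
    atoms = {h for (h, _) in positive_program} | \
            {a for (_, body) in positive_program for a in body}
    X = set()
    for _ in range(len(atoms)):  # |atom(P)| rounds reach the least fixpoint
        X = tp_step(positive_program, X)
    return X
```

Per the cited characterization, a model X is then an answer set iff it coincides with the fixpoint computed on its reduct; in the example below, c is not derivable because d has no rule.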
The (positive) dependency graph of an extended program P is given by the following pair of nodes and directed edges:

(6)  (atom(P), {(a, b) : r ∈ P, a ∈ atom(head(r)), b an atom occurring in a positive body element of r})
A strongly connected component (SCC) is a maximal subgraph of the dependency graph of P such that all nodes are pairwise connected via paths. An SCC is trivial if it does not contain any edge, and nontrivial otherwise. Note that the SCCs of the dependency graph of P induce a partition of atom(P) such that every atom and every loop of P is contained in some part. Hence, we can make use of the partition to apply T separately to each part.
Proposition 1
Let P be an extended logic program, let C_1, …, C_n be the sets of atoms belonging to the SCCs of the dependency graph of P, and let X ⊆ atom(P).
Then, we have that X = T_{P^X}^{|atom(P)|}(∅) iff the atoms of each X ∩ C_i, for 1 ≤ i ≤ n, are reproduced by iterating T_{P^X} locally within C_i, taking the atoms of X outside C_i for granted.
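Proposition 1 presupposes the SCCs of the dependency graph; a naive mutual-reachability computation (quadratic, but transparent) might look as follows. All names are ours.

```python
def reachable(adj, start):
    """Set of nodes reachable from start via directed edges."""
    seen, stack = {start}, [start]
    while stack:
        n = stack.pop()
        for m in adj.get(n, ()):
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return seen

def sccs(nodes, edges):
    """Return the SCCs (as sets of nodes), via mutual reachability."""
    adj = {}
    for (u, v) in edges:
        adj.setdefault(u, []).append(v)
    reach = {n: reachable(adj, n) for n in nodes}
    comps, done = [], set()
    for n in nodes:
        if n in done:
            continue
        comp = {m for m in nodes if m in reach[n] and n in reach[m]}
        comps.append(comp)
        done |= comp
    return comps
```

Each returned component can then serve as one part C_i on which the operator T is iterated locally.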
The above property is used in our encoding of answer sets (as counterexamples) in disjunctive ASP. In a nutshell, it combines the following parts:

(a) guessing an interpretation,

(b) deriving the error-indicating atom bot if the interpretation is not a supported model (where each true atom occurs positively in the head of some rule whose body holds),

(c) deriving bot if the true atoms of some nontrivial SCC are not acyclically derivable (checked via determining the complement of a fixpoint of T), and

(d) saturating interpretations that do not correspond to answer sets by deriving all truth assignments (for atoms) from bot.

Note that the third part, checking acyclic derivability, concentrates on atoms of nontrivial SCCs, while checking support in the second part is already sufficient for trivial SCCs.
The meta-program in Listing 3 implements the sketched idea. In the following, we concentrate on describing its crucial features. For evaluating support, the meta-rules in Line 3 and 4 collect atoms having a positive occurrence in the head of a rule along with the rule's body. Note that, for atoms contained in a constraint in the head, the associated bounds and weights are inessential in the context of support. On the other hand, the meta-rule in Line 6 sums up the weights of all literals in a constraint; this is needed for evaluating bounds in the sequel, where (non-reified) default negation and upper bounds (acting negatively) are inapplicable in view of saturation.
The meta-rules in Line 10–29 generate an interpretation by guessing some truth value for each atom (Line 10) and evaluating further constructs occurring in a reified program accordingly (Line 12–29). While the special constant false (used as the head of integrity constraints) holds in no interpretation (fail(false) is a fact) and the evaluation of conjunctions is straightforward, more care is required for evaluating constraints. For instance, the case that a constraint holds is identified, in the meta-rule in Line 19–23, via sufficiently many literals that hold to achieve the lower bound L and also sufficiently many literals that do not hold to fill the gap between the upper bound U and the sum T of all weights. Note that the latter condition is encoded by the lower bound T-U, rather than taking U as an upper bound (as done in meta.lp). The complementary cases that a constraint does not hold are described in the same manner in Line 24–29, where the lower bound T-L+1 (or U+1) for weights of literals that do not hold (or hold) is used to indicate a violated lower (or upper) bound of the reified constraint.
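The complement-based bound tests just described can be sketched arithmetically; the function names and argument layout are ours. Since the sum of holding and non-holding weights is the total T, checking the upper bound positively amounts to requiring at least T-U weight among the non-holding literals.

```python
def constraint_holds(L, U, weights_holding, weights_not_holding):
    """Both bounds checked via lower bounds only, as in the saturation
    encoding: sum(holding) <= U  iff  sum(not holding) >= T - U."""
    T = sum(weights_holding) + sum(weights_not_holding)
    return sum(weights_holding) >= L and sum(weights_not_holding) >= T - U

def lower_bound_violated(L, weights_not_holding, T):
    """With integer weights: sum(holding) < L iff sum(not holding) >= T-L+1."""
    return sum(weights_not_holding) >= T - L + 1
```

For instance, with weights 1,1 holding and 1 not holding (T = 3), the bounds 1..2 are met, while 1..1 is not, since only 1 < 3-1 = 2 weight is missing.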
Given an interpretation of atoms and the corresponding truth values of further constructs in an extended program, the meta-rules in Line 33 and 34 are used to derive bot if the interpretation does not provide us with a supported model. To avoid such a derivation of bot, every rule of the reified program must be satisfied, and every true atom must have a positive occurrence in the head of some rule whose body holds.
It remains to check the acyclic derivability of atoms belonging to nontrivial SCCs. To this end, the meta-rule in Line 38 determines the number Z of atoms in an SCC labeled C as the maximum step at which a fixpoint of T, applied locally to C, is reached. Furthermore, the meta-rule in Line 40–41 derives sccw(A) if the atom referred to by A does not have a derivation external to C. (Recall that the positive body elements of rules internally connecting an SCC, i.e., rules contributing the SCC's edges to the dependency graph, are marked by facts over scc/2; cf. Listing 1.) The acyclic derivability of atoms indicated by sccw(A) is of particular interest in the sequel. In fact, our encoding identifies the complement of a fixpoint of T in terms of atoms A for which wait(atom(A),Z) is derived. To accomplish this, the meta-rule in Line 45 marks all atoms of C as underived at step 0. As encoded via the meta-rule in Line 46–47, an atom A stays underived at a later step D if there is no external derivation of A (sccw(A) holds) and the bodies B of all component-internal supports of A are still underived at step D-1 (wait(B,D-1) holds). The latter is checked via the meta-rules in Line 49–52 and 54–55, respectively. The former applies to constraints and identifies cases where the weights of literals that do not hold, along with the ones of yet underived atoms of C, exceed T-L, so that the lower bound L is not yet established. Similarly, the underivability of a conjunction is recognized via a yet underived positive body element internal to the component C. Also note that the falsity of elements of C is propagated via the meta-rule in Line 43, so that false atoms, constraints, and conjunctions do not contribute to derivations of atoms of C. As mentioned above, the complement of a fixpoint of T contains the atoms A such that wait(atom(A),Z) is eventually derived. If any such atom A is true, the failure to construct an answer set is indicated by deriving bot via the meta-rule in Line 57.
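The per-component fixpoint-complement computation can be sketched as follows, assuming (as a simplification) that no atom of the component has an external derivation, i.e., sccw holds for all of them; the representation of internal supports is ours.

```python
def underivable(C, supports):
    """C: set of atoms of one SCC; supports maps each atom of C to the
    list of its component-internal rule bodies, each a set of C-atoms
    (external parts are assumed settled). Returns the atoms that still
    'wait' after |C| steps, i.e., the complement of the local fixpoint."""
    wait = set(C)  # all atoms of the component count as underived at step 0
    for _ in range(len(C)):  # |C| steps suffice, matching the bound Z
        wait = {a for a in C
                if all(body & wait for body in supports.get(a, []))}
    return wait
```

For a two-atom loop supporting only itself, both atoms remain waiting (an unfounded set); once one atom gains a support with an empty body, both become derivable.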
Finally, the saturation of interpretations that do not correspond to answer sets is accomplished via the meta-rules in Line 61 and 62 of Listing 3. They make sure that bot is included in an answer set of the meta-program only if it is inevitable wrt every interpretation. When considering the encoding part in Listing 3 in isolation, it, like meta.lp, describes answer sets of a reified program, and bot is derived only if there is no such answer set.
Our meta-programs meta.lp and metaD.lp in Listing 2 and 3 have not yet considered facts minimize(J,S) in reified programs, reflecting input #minimize statements. In fact, complex optimization is addressed by the meta-program metaO.lp, shown in Listing 4. It allows for separate optimization criteria per priority level J and weight W (in facts wlist(S,Q,E,W)). Particular criteria can be provided via the user predicate optimize(J,W,O), where the values card, incl, and pref for O refer to minimality regarding cardinality, inclusion, and preference (Sakama and Inoue 2000), respectively, among the involved literals E. Such criteria are reflected via instances of cxopt(J,W,O), derived via the rules in Line 7 and 8–9, where card is taken by default if no criterion is provided by the user. At each priority level J, Pareto improvement of a counterexample (constructed via the rules in metaD.lp) over all weights W and criteria O such that cxopt(J,W,O) holds is used for deciding whether a candidate answer set (constructed via the rules in meta.lp) is optimal. To this end, similarity at a priority level J is indicated by deriving equal(J) from equal(J,W,O) over all instances of cxopt(J,W,O) via the rule in Line 13. Furthermore, the rules in Line 15–19 are used to chain successive priority levels, where a greater level J1 is more significant than its smaller neighbor J2, and to signal whether a priority level J2 is taken into account. The latter is the case if equal(J1) has been derived at all more significant priority levels J1. If it turns out that a candidate answer set is not refuted by a dominating counterexample, we derive bot via the rules in Line 21, 22, and 23: the first rule applies if there are no optimization criteria at all, the second one checks whether the counterexample is worse (or incomparable), as indicated by worse(J1) at an inspected priority level J1, and the third one detects the lack of Pareto improvement from equality at the lowest priority level.
Finally, the integrity constraint in Line 27 stipulates bot to hold. Along with saturation (in metaD.lp), this implies that a candidate answer set (constructed via the rules in meta.lp) is accepted only if there is no dominating counterexample, thus selecting exactly the optimal answer sets of an input program. The described rules serve the general purpose of identifying undominated answer sets, and the remainder of metaO.lp defines equal(J,W,O) and worse(J) relative to particular optimization criteria.
Inclusion-based minimization, indicated via cxopt(J,W,incl), is implemented by the rules in Line 31–45. The test for equality, attested by deriving equal(J,W,incl) via the rule in Line 40, is accomplished by checking whether a candidate answer set and a (comparable) counterexample agree on all involved literals E; otherwise, ndiff(E) is not derived via the rules in Line 31–38. Furthermore, the counterexample is incomparable to the candidate answer set if it includes some literal not shared by the latter; in such a case, worse(J) is derived via the rules in Line 42–43 and 44–45. In fact, the three inclusion-minimal answer sets of P1 (given in p1.lp), consisting of the rules in (2)–(4) and the #minimize statement in (5), can now be computed in the following way:
Observe that claspD (Drescher et al. 2008), the disjunctive extension of clasp (Gebser et al. 2007), is used for solving the proper disjunctive ground program obtained from gringo.
In addition to inclusion-based minimization, metaO.lp implements comparisons wrt cardinality (Simons et al. 2002) and literal preferences (Sakama and Inoue 2000), activatable via facts of the form optimize(J,W,card) and optimize(J,W,pref) (along with prefer(E1,E2)), respectively. For space reasons, the details are omitted here; they can be found in the extended version of this paper (Gebser et al. 2011). The latter also provides formal results and arguments demonstrating the correctness of our meta-programming technique wrt the specification of optimal answer sets in the background.
Regarding the computational complexity of tasks that can be addressed using our meta-programming approach to optimization, we first note that deciding whether there is an optimal answer set is in NP, as the existence of some answer set (decidable by means of meta.lp only) is sufficient for concluding that there is also an optimal one. However, the inherent complexity becomes more apparent if we consider the question of whether some atom a belongs to an optimal answer set. To decide it, one can augment the reified input program (but not the input program itself), meta.lp, metaD.lp, and metaO.lp with the integrity constraint ":- not hold(atom(a)).". Then, several complex optimization criteria at a single priority level 1 lead to completeness for Σ2P, the second level of the polynomial time hierarchy, thus showing that disjunctive ASP is appropriate to implement them. To see this, note that deciding whether an atom belongs to some answer set of a positive disjunctive logic program is Σ2P-complete (Eiter and Gottlob 1995). When disjunctions in the heads of rules are rewritten to constraints requiring at least one of the respective atoms, the question of whether an atom a belongs to an answer set of the original program can be addressed by reifying the rewritten program, adding the integrity constraint ":- not hold(atom(a)).", and applying meta.lp, metaD.lp, and metaO.lp wrt several optimization criteria. For one, we can include a #minimize statement over all atoms of the input program, each associated with a different weight, to exploit the Pareto improvement implemented in metaO.lp for refuting a candidate answer set including a if it does not correspond to a minimal model, i.e., an answer set of the original program. Alternatively, we can include a #minimize statement over all atoms of the input program, each having the weight 1, and augment the meta-program with the fact optimize(1,1,incl).
We could also use a #minimize statement over all atoms of the input program along with their negations, each having the weight 1, and add the facts optimize(1,1,pref) as well as prefer(neg(atom(a)),pos(atom(a))) for each atom a of the input program. In view of these reductions, we conclude that Pareto efficiency, inclusion, and literal preferences independently capture computational tasks located at the second level of the polynomial time hierarchy, and our meta-programs allow for addressing them via an extended program along with facts (and possibly also integrity constraints) steering optimization relative to its reification.
5 Applications: A Case Study
While the approach of Eiter and Polleres (2006) consists of combining two separate logic programs, one for "guessing" and a second one for "checking," into a disjunctive program addressing both tasks, our meta-programming technique applies to a single (reified) input program along with complex optimization criteria. In fact, we provide a generic implementation of such criteria on top of extended programs encoding solution spaces. Hence, our meta-programming technique allows for a convenient representation of reasoning tasks in which testing the optimality of solutions to an underlying problem in NP lifts the complexity to Σ2P-hardness. Respective formalisms include ordinary, parallel, as well as prioritized circumscription (McCarthy 1980; Lifschitz 1985), minimal consistency-based diagnosis (Reiter 1987), and preferred extensions of argumentation frameworks (Dung 1995). Similarly, Pareto efficiency is an important optimality condition in decision making (Chevaleyre et al. 2007) and system design (Gries 2004). In the following, we illustrate the application of our approach on the example of an existing real-world application: repair wrt large gene-regulatory networks (Gebser et al. 2010).
Listing 5 shows a simplified version of the repair encoding given in (Gebser et al. 2010). It applies to a regulatory network, a directed graph with (partially) labeled edges, represented by facts of the predicates vertex/1, edge/2, and obs_elabel/3, where a label S is 1 (activation) or -1 (inhibition). In addition, the data of experiments labeled P are provided by facts of the predicates exp/1, inp/2 denoting input vertices (subject to perturbations), and obs_vlabel/3, where a label S is again 1 (increase) or -1 (decrease). The regulatory network is consistent with the experiment data if there are total labelings of edges and vertices (for each experiment P) such that the label of every non-input vertex V is explained by the influence of some of its regulators U, where the influence is the product S*T of the edge label S and the label T of U (in experiment P). In the practice of systems biology, regulatory networks and experiment data often turn out to be mutually inconsistent, which makes it highly nontrivial to draw biologically meaningful conclusions in an automated way. To address this problem, several repair operations were devised in (Gebser et al. 2010); they can be enabled via facts of the form repair(K,J,W), where K indicates a certain kind of admissible repair operations, J a priority level, and W a weight. The repair operations R to apply are selected via the rule in Line 14 of Listing 5, and their effects are propagated via the rules in Line 18–29, thus obtaining total edge and vertex labelings witnessing the re-establishment of consistency. Given that applications of repair operations modify a regulatory network or experiment data, we are interested in applying few operations only, which is expressed by the #minimize statement in Line 33.
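The consistency condition just described can be sketched for a single experiment with total labelings; the dict-based data layout below is illustrative and unrelated to the reified format.

```python
def consistent(edges, elabel, vlabel, inputs):
    """edges: list of (u, v); elabel[(u, v)] and vlabel[x] are 1 or -1.
    Every non-input vertex with regulators must carry the label of some
    influence S*T of an incoming edge S and regulator label T."""
    for v in vlabel:
        if v in inputs:
            continue  # inputs are unconstrained (subject to perturbations)
        regulators = [u for (u, w) in edges if w == v]
        if regulators and not any(
                elabel[(u, v)] * vlabel[u] == vlabel[v] for u in regulators):
            return False
    return True
```

For example, an inhibiting edge (label -1) from an increasing input u (label 1) explains a decrease at v (label -1), but not an increase.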
A reasonable repair configuration could consist of facts of the following form:
 repair(ivert,J,W). (admitting to turn vertices into inputs in all experiments)
 repair(eflip,J,W). (admitting network modifications by flipping edge labels)
 repair(pvert,J,W). (admitting to turn vertices into inputs in specific experiments)
 repair(vflip,J,W). (admitting data modifications by flipping vertex labels)
While the kinds of repair referred to by ivert and eflip operate primarily on a network (in view of incompleteness or incorrectness), the ones denoted by pvert and vflip mainly address the data (which can be noisy). If we penalize all repair operations uniformly, using the same priority level J and weight W throughout, the instantiation of the #minimize statement in Line 33 represents ordinary cardinality-based optimization, as implemented in solvers like clasp and smodels. However, by adding optimize(J,W,incl) as a fact, we can easily switch to inclusion-based minimization and use a disjunctive solver like claspD to solve the more complex problem. While our meta-programs enable such a shift of optimization criteria by means of adding just one fact, a direct disjunctive encoding of inclusion-based minimization has been provided in Gebser et al. (2010); note that the latter is by far more involved than the basic repair encoding in Listing 5. Furthermore, our meta-programming approach allows us to distinguish between different kinds of repair operations (without prioritizing them) and optimize wrt Pareto efficiency. To accomplish this, one only needs to pick unequal weights W for the different kinds, where cardinality-based minimization wrt each W can selectively be replaced by inclusion via providing a fact optimize(J,W,incl). Finally, we can choose to rank kinds of repair operations by providing different priority levels J. In this respect, the flexibility gained due to meta-programming allows for deploying and comparing different optimization criteria, e.g., regarding the accuracy of resulting predictions (cf. Gebser et al. (2010)).
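The shift of optimization criteria described above can be sketched as follows; the concrete level and weight values are made up for illustration, while the fact formats repair(K,J,W) and optimize(J,W,incl) are those used in the text:

```
% uniform penalties: all repair kinds share level 1 and weight 1,
% yielding ordinary cardinality-based minimization
repair(ivert,1,1). repair(eflip,1,1).
repair(pvert,1,1). repair(vflip,1,1).

% adding this single fact switches weight group 1 at level 1
% to inclusion-based minimization (requiring a disjunctive solver)
optimize(1,1,incl).

% alternatively, distinct weights per kind (at a single level)
% induce Pareto efficiency across the four repair kinds:
% repair(ivert,1,1). repair(eflip,1,2).
% repair(pvert,1,3). repair(vflip,1,4).
```

Note that only the facts change between these configurations; the repair encoding and the meta-programs themselves stay untouched.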
For giving an account of the practical capabilities of our meta-programming approach, we empirically compared it to the direct encoding of inclusion-based minimization in Gebser et al. (2010). To this end, we ran gringo version 3.0.3 and claspD version 1.1 on 100 instances wrt three kinds of admissible repair operations, resulting in 300 runs each with our meta-programs and with the direct encoding. All runs were performed sequentially on a machine equipped with Intel Xeon E5520 processors and 48 GB main memory under Linux, imposing a time limit of 4000 sec per run. To our own surprise, more runs were completed in time with the meta-programs than with the direct encoding: 219 versus 150. (All instances and detailed results are available at http://www.cs.uni-potsdam.de/wv/metasp.) The disadvantages of the direct encoding show that further tuning would be required to improve solving efficiency, which adds to the difficulty of furnishing a functional saturation-based encoding. In view of this, we conclude that our meta-programming approach to complex optimization is a viable alternative. However, enhancements of disjunctive ASP solvers boosting its performance would still be desirable.
6 Discussion
Our integral approach to modeling complex optimization criteria in ASP brings about a number of individual contributions. To begin with, we introduce the reification capacities of our grounder gringo along with the associated meta-encoding, paving the way to the immediate availability of meta-modeling techniques. In fact, the full version of the basic meta-encoding in Listing 1, obtainable at http://www.cs.uni-potsdam.de/wv/metasp, covers the complete language of gringo, including disjunctions and diverse aggregates. Moreover, our meta-modeling techniques provide a general account of saturation and, thus, abolish its compulsory replication for expressing complex preferences. Of particular interest is the stability property of answer sets serving as implicit counterexamples. Unlike the approach of Eiter and Polleres (2006), our encoding avoids “guessing” level mappings. Also, our target language involves choice rules and constraints Simons et al. (2002), and we are unaware of any pre-existing meta-encoding of corresponding answer sets, neither as candidates nor as counterexamples. Likewise, related meta-programming approaches for generating consequences of logic programs Faber and Woltran (2009) or explanations wrt debugging queries Oetsch et al. (2010) do not consider such aggregates (but disjunctions in object programs).
We exploit the two-dimensionality of #minimize statements by using levels and weights for combining a lexicographic ranking with Pareto efficiency. At each level, groups of literals sharing the same weight can be compared wrt inclusion. This is extended in Gebser et al. (2011) by cardinality-based minimization and the framework of Sakama and Inoue (2000), relying on a preference relation among literals (given in addition to #minimize statements); the augmented encoding is also available at http://www.cs.uni-potsdam.de/wv/metasp. In fact, the approach of Section 4 allows for capturing the special cases of parallel and prioritized circumscription, investigated by Janhunen and Oikarinen (2004) and Oikarinen and Janhunen (2008). An interesting future extension is the encoding of optimality conditions for logic programs with ordered disjunction Brewka et al. (2004), whose custom-made implementation in the prototype psmodels interleaves two smodels oracles for accomplishing a generate-and-test approach similar to the idea of our meta-programs. Ultimately, our approach could serve as an implementation platform for answer set optimization Brewka et al. (2003) and the preference description language proposed in Brewka (2004). Last but not least, our meta-programs furnish a rich and readily available source of hard challenge problems, fostering the development of ASP solvers for disjunctive logic programming.
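A combination of a lexicographic ranking with Pareto efficiency, as described above, might be configured in the repair setting of Section 5 along the following lines; the particular assignment of repair kinds to levels and weights is hypothetical:

```
% level 2 is lexicographically more significant than level 1;
% at level 1, weight groups 1 and 2 form independent Pareto components
repair(eflip,2,1).    % network repairs dominate ...
repair(pvert,1,1).    % ... data repairs, which split into
repair(vflip,1,2).    % two Pareto components at level 1

optimize(1,1,incl).   % group (1,1) compared wrt inclusion
optimize(1,2,incl).   % group (1,2) compared wrt inclusion
```

Under this configuration, candidate solutions are first ranked by the cost incurred at level 2; ties are then broken by Pareto comparison of the two inclusion-ordered groups at level 1.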
Acknowledgments. This work was partly funded by DFG grant SCHA 550/8-2. We are grateful to Tomi Janhunen, Ilkka Niemelä, and the referees for their helpful comments.
References
 Baral (2003) Baral, C. 2003. Knowledge Representation, Reasoning and Declarative Problem Solving. Cambridge University Press.

 Brewka (2004) Brewka, G. 2004. Answer sets: From constraint programming towards qualitative optimization. In Proceedings of the Seventh International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR’04), V. Lifschitz and I. Niemelä, Eds. Lecture Notes in Artificial Intelligence, vol. 2923. Springer-Verlag, 34–46.
 Brewka et al. (2004) Brewka, G., Niemelä, I., and Syrjänen, T. 2004. Logic programs with ordered disjunction. Computational Intelligence 20, 2, 335–357.
 Brewka et al. (2003) Brewka, G., Niemelä, I., and Truszczynski, M. 2003. Answer set optimization. In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI’03), G. Gottlob and T. Walsh, Eds. Morgan Kaufmann Publishers, 867–872.
 Chevaleyre et al. (2007) Chevaleyre, Y., Endriss, U., Lang, J., and Maudet, N. 2007. A short introduction to computational social choice. In Proceedings of the Thirty-third Conference on Current Trends in Theory and Practice of Computer Science (SOFSEM’07), J. van Leeuwen, G. Italiano, W. van der Hoek, C. Meinel, H. Sack, and F. Plasil, Eds. Lecture Notes in Computer Science, vol. 4362. Springer-Verlag, 51–69.
 Delgrande et al. (2003) Delgrande, J., Schaub, T., and Tompits, H. 2003. A framework for compiling preferences in logic programs. Theory and Practice of Logic Programming 3, 2, 129–187.
 Drescher et al. (2008) Drescher, C., Gebser, M., Grote, T., Kaufmann, B., König, A., Ostrowski, M., and Schaub, T. 2008. Conflict-driven disjunctive answer set solving. In Proceedings of the Eleventh International Conference on Principles of Knowledge Representation and Reasoning (KR’08), G. Brewka and J. Lang, Eds. AAAI Press, 422–432.
 Dung (1995) Dung, P. 1995. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence 77, 2, 321–357.
 Eiter et al. (2003) Eiter, T., Faber, W., Leone, N., and Pfeifer, G. 2003. Computing preferred answer sets by meta-interpretation in answer set programming. Theory and Practice of Logic Programming 3, 4–5, 463–498.
 Eiter and Gottlob (1995) Eiter, T. and Gottlob, G. 1995. On the computational cost of disjunctive logic programming: Propositional case. Annals of Mathematics and Artificial Intelligence 15, 3–4, 289–323.
 Eiter and Polleres (2006) Eiter, T. and Polleres, A. 2006. Towards automated integration of guess and check programs in answer set programming: a meta-interpreter and applications. Theory and Practice of Logic Programming 6, 1–2, 23–60.
 Faber and Woltran (2009) Faber, W. and Woltran, S. 2009. Manifold answer-set programs for meta-reasoning. In Proceedings of the Tenth International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR’09), E. Erdem, F. Lin, and T. Schaub, Eds. Lecture Notes in Artificial Intelligence, vol. 5753. Springer-Verlag, 115–128.
 Fages (1994) Fages, F. 1994. Consistency of Clark’s completion and the existence of stable models. Journal of Methods of Logic in Computer Science 1, 51–60.
 Ferraris (2005) Ferraris, P. 2005. Answer sets for propositional theories. In Proceedings of the Eighth International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR’05), C. Baral, G. Greco, N. Leone, and G. Terracina, Eds. Lecture Notes in Artificial Intelligence, vol. 3662. Springer-Verlag, 119–131.
 Garey and Johnson (1979) Garey, M. and Johnson, D. 1979. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Co.
 Gebser et al. (2010) Gebser, M., Guziolowski, C., Ivanchev, M., Schaub, T., Siegel, A., Thiele, S., and Veber, P. 2010. Repair and prediction (under inconsistency) in large biological networks with answer set programming. In Proceedings of the Twelfth International Conference on Principles of Knowledge Representation and Reasoning (KR’10), F. Lin and U. Sattler, Eds. AAAI Press, 497–507.
 Gebser et al. Gebser, M., Kaminski, R., Kaufmann, B., Ostrowski, M., Schaub, T., and Thiele, S. A user’s guide to gringo, clasp, clingo, and iclingo. Available at http://potassco.sourceforge.net.
 Gebser et al. (2011) Gebser, M., Kaminski, R., and Schaub, T. 2011. Complex optimization in answer set programming: Extended version. Available at http://www.cs.uni-potsdam.de/wv/metasp. (This is an extended version of the paper at hand.)
 Gebser et al. (2007) Gebser, M., Kaufmann, B., Neumann, A., and Schaub, T. 2007. Conflict-driven answer set solving. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI’07), M. Veloso, Ed. AAAI Press/The MIT Press, 386–392.
 Gelfond and Lifschitz (1991) Gelfond, M. and Lifschitz, V. 1991. Classical negation in logic programs and disjunctive databases. New Generation Computing 9, 365–385.
 Gries (2004) Gries, M. 2004. Methods for evaluating and covering the design space during early design development. Integration 38, 2, 131–183.
 Janhunen and Oikarinen (2004) Janhunen, T. and Oikarinen, E. 2004. Capturing parallel circumscription with disjunctive logic programs. In Proceedings of the Ninth European Conference on Logics in Artificial Intelligence (JELIA’04), J. Alferes and J. Leite, Eds. Lecture Notes in Computer Science, vol. 3229. Springer-Verlag, 134–146.
 Leone et al. (2006) Leone, N., Pfeifer, G., Faber, W., Eiter, T., Gottlob, G., Perri, S., and Scarcello, F. 2006. The DLV system for knowledge representation and reasoning. ACM Transactions on Computational Logic 7, 3, 499–562.
 Lifschitz (1985) Lifschitz, V. 1985. Computing circumscription. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence (IJCAI’85), A. Joshi, Ed. Morgan Kaufmann Publishers, 121–127.
 Liu and You (2010) Liu, G. and You, J. 2010. Level mapping induced loop formulas for weight constraint and aggregate logic programs. Fundamenta Informaticae 101, 3, 237–255.
 Lloyd (1987) Lloyd, J. 1987. Foundations of Logic Programming, 2nd ed. Symbolic Computation. SpringerVerlag.
 McCarthy (1980) McCarthy, J. 1980. Circumscription — a form of nonmonotonic reasoning. Artificial Intelligence 13, 1–2, 27–39.
 metasp metasp. http://www.cs.uni-potsdam.de/wv/metasp.
 Oetsch et al. (2010) Oetsch, J., Pührer, J., and Tompits, H. 2010. Catching the ouroboros: On debugging non-ground answer-set programs. Theory and Practice of Logic Programming. Twenty-sixth International Conference on Logic Programming (ICLP’10) Special Issue 10, 4–6, 513–529.
 Oikarinen and Janhunen (2008) Oikarinen, E. and Janhunen, T. 2008. Implementing prioritized circumscription by computing disjunctive stable models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence: Methodology, Systems, and Applications (AIMSA’08), D. Dochev, M. Pistore, and P. Traverso, Eds. Lecture Notes in Artificial Intelligence, vol. 5253. Springer-Verlag, 167–180.
 Reiter (1987) Reiter, R. 1987. A theory of diagnosis from first principles. Artificial Intelligence 32, 1, 57–96.
 Sakama and Inoue (2000) Sakama, C. and Inoue, K. 2000. Prioritized logic programming and its application to commonsense reasoning. Artificial Intelligence 123, 1–2, 185–222.
 Simons et al. (2002) Simons, P., Niemelä, I., and Soininen, T. 2002. Extending and implementing the stable model semantics. Artificial Intelligence 138, 1–2, 181–234.
 Syrjänen (Syrjänen) Syrjänen, T. Lparse 1.0 user’s manual. Available at http://www.tcs.hut.fi/Software/smodels/lparse.ps.gz.