Recursive Rules with Aggregation: A Simple Unified Semantics

07/26/2020 · Yanhong A. Liu, et al. · Stony Brook University

Complex reasoning problems are most clearly and easily specified using logical rules, especially recursive rules with aggregation such as counts and sums for practical applications. Unfortunately, the meaning of such rules has been a significant challenge, leading to many different conflicting semantics. This paper describes a unified semantics for recursive rules with aggregation, extending the unified founded semantics and constraint semantics for recursive rules with negation. The key idea is to support simple expression of the different assumptions underlying different semantics, and orthogonally interpret aggregation operations straightforwardly using their simple usual meaning.


1 Introduction

Many computation problems, including complex reasoning problems in particular, are most clearly and easily specified using logical rules. However, such reasoning problems in practical applications, especially in large applications and when facing uncertain situations, require the use of recursive rules with aggregation such as counts and sums. Unfortunately, even the meaning of such rules has been challenging and remains a subject of significant complication and disagreement among experts.

As a simple example, consider a single rule for Tom to attend the logic seminar: Tom will attend the logic seminar if the number of people who will attend it is at least 20. What does the rule mean? If 20 or more other people will attend, then surely Tom will attend. If only 10 others will attend, then Tom clearly will not attend. What if only 19 other people will attend? Will Tom attend, or not? Despite its simplicity, this example already shows that, when aggregation is used in a recursive rule, the semantics of rules can be subtle.

In fact, the semantics of recursive rules with aggregation has been much more complex and tricky than even that of recursive rules with negation. The latter has been challenging for over 100 years, going back at least to Russell’s paradox, in which self-reference with negation is believed to form vicious circles [ID16]. Many different, mutually disagreeing semantics have been studied for recursive rules with negation, as surveyed briefly in Section 6. Two of them, well-founded semantics (WFS) [VRS91, VG93] and stable model semantics (SMS) [GL88], have been dominant for about 30 years.

Semantics of recursive rules with aggregation has been studied continuously for about 30 years, and intensively in the last several years, as discussed in Section 6, especially as such rules are needed in graph analysis and machine learning applications. However, the different semantics proposed, e.g., [VG92, GZ19], are even more sophisticated than WFS and SMS for recursive rules with negation, with some experts even changing their own minds about the desired semantics, e.g., [Gel02, GZ19]. With such complex semantics, aggregation would be too challenging for non-experts to use correctly.

This paper describes a simple unified semantics for recursive rules with aggregation, as well as negation and quantification. The semantics is built on and extends the founded semantics and constraint semantics of logical rules with negation and quantification developed recently by Liu and Stoller [LS18, LS19, LS20a]. The key idea is to support simple expression of the different assumptions underlying different semantics, and orthogonally interpret aggregation operations straightforwardly using their simple usual meaning. We present formal definitions for the new semantics and prove the consistency and correctness properties of the semantics.

We further show our semantics applied to a variety of different examples, including the longest and most sophisticated ones from dozens of previously studied examples [GZ19]. For these previously studied examples, instead of computing answer sets using naive guesses followed by sophisticated reducts, all of the results can be computed with a simple default assumption and a simple least fixed-point computation, as is used for formal inductive definitions and for commonsense reasoning. In all cases, we show that the resulting semantics matches the desired semantics.

The rest of the paper is organized as follows. Section 2 describes the motivation for the problems and solutions. Section 3 presents the language of recursive rules with unrestricted negation, quantification, and aggregation. Section 4 defines the formal semantics and states the consistency and correctness properties. Section 5 illustrates our semantics on a variety of examples from previous studies. Section 6 discusses related work and concludes. Appendix A contains proofs, and Appendix B gives additional examples.

2 Problem and solution overview

Consider one of the simplest examples of a recursive rule with aggregation [GZ14, GZ18], given as follows. It says that p is true for value a if the number of x’s for which p is true equals 1:

    p(a) \(\leftarrow\) count {x: p(x)} \(=\) 1
This rule is recursive because inferring a conclusion about p requires using p in a hypothesis. It uses an aggregation of count over a set. While each of recursion and aggregation by itself has a simple meaning, combining recursion with aggregation is tricky, because recursion is used to define a predicate, which is equivalent to a set, but aggregation over a set requires the set to be already defined.
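
To see the difficulty concretely, the following tiny Python sketch, which is ours and not part of the paper, naively iterates the rule as if it were a monotone inductive definition; with a hypothetical extra fact p('b'), the iteration never stabilizes, because the count comparison is not monotonic:

    # Naive iteration of  p(a) <- count {x: p(x)} = 1 ,
    # with a hypothetical extra fact p('b').
    p = {'b'}
    for step in range(4):
        p = {'b'} | ({'a'} if len(p) == 1 else set())
        print(step, sorted(p))
    # 0 ['a', 'b']   the count was 1, so p('a') is derived
    # 1 ['b']        the count became 2, the body is false, p('a') is dropped
    # 2 ['a', 'b']   and the iteration oscillates forever
    # 3 ['b']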

In practice, it is unlikely that someone would write a rule like this example, just as one would not write a rule like p(a) \(\leftarrow\) p(a), but when complex rules are written, they might end up being equivalent to such a rule, posing the same semantic challenges as these simplest cases. So defining the correct semantics is critical for practical applications.

Simple two models: Kemp-Stuckey 1991 and Gelfond 2002. According to Kemp and Stuckey [KS91] and Gelfond [Gel02], the above rule has two models: one empty model, that is, a model in which nothing is true and thus p(a) is false, and one containing only p(a), that is, p(a) is true and everything else is false.

Simple one model: Faber-Pfeifer-Leone 2011 and Gelfond-Zhang 2014-2019. According to Faber, Pfeifer, and Leone [FPL11] and Gelfond and Zhang [GZ14, Examples 2 and 7], [GZ18, Examples 4 and 6], and [GZ19, Example 9], the rule above has only one model: the empty model.

Complex possible models. As one of the several main efforts investigating aggregation, Gelfond and Zhang [Gel02, GZ14, ZR16, GZ17, GZ18, GZ19] have studied the challenges and solutions extensively, presenting dozens of definitions and propositions and discussing dozens of examples [GZ19]. Their examples where count is used in inequalities such as greater-than, with other rules and facts, or with more hypotheses in rules, are even more complicated, as they can have more possible models. We discuss their most extensive examples in Section 5.

Extending founded semantics and constraint semantics for aggregation. Aggregation, such as count, is a simple concept that even kids understand, just like negation. So it is simply stunning that so many sophisticated treatments by experts are needed to figure out its meaning when used in rules, not to mention that the different semantics give disagreeing results.

We develop a simple and unified semantics for rules with aggregation as well as negation and quantification by building on founded semantics and constraint semantics [LS18, LS20a] for rules with negation and quantification. The key insight is that the disagreeing complex semantics for rules with aggregation arise from different underlying assumptions, and these assumptions can be captured using the same simple binary declarations about predicates as in founded semantics and constraint semantics, generalized to include the meaning of aggregation.

  • First, if there is no aggregation, or no potential non-monotonicity, in recursion—where non-monotonicity means that adding new facts used in the hypotheses of a rule may change the conclusion of the rule from true to false—then the predicate in the conclusion can be declared “certain”.

    Being certain means that assertions of the predicate are given true or inferred true by simply following rules whose hypotheses are given or inferred true, and the remaining assertions of the predicate are false. This is both the founded semantics and constraint semantics.

    For the example of Tom attending the logic seminar, there is no potential non-monotonicity; with this declaration, when given that only 19 others will attend, the hypothesis of the rule is not true, so the conclusion cannot be inferred. Thus Tom will not attend.

  • Regardless of monotonicity, a predicate can be declared “uncertain”. It means that assertions of the predicate can be given or inferred true or false using what is given, and any remaining assertions of the predicate are undefined. This is the founded semantics.

    If there are undefined assertions from founded semantics, all combinations of true and false values are checked against the rules as constraints, yielding a set of possible satisfying combinations. This is the constraint semantics.

  • An uncertain predicate can be further declared “complete” or not. Being complete means that all rules that can conclude assertions of the predicate are given. Thus a new rule, called completion rule, can be created to infer negative assertions of the predicate when none of the given rules apply.

    Being not complete means that negative assertions cannot be inferred using completion rules, and thus all assertions of the predicate that were not inferred to be true are undefined.

    For the example of Tom attending the logic seminar, the completion rule essentially says: Tom will not attend the logic seminar if the number of people who will attend it is less than 20.

    When given that only 19 others will attend, due to the uncertainty of whether Tom will attend, neither the given rule nor the completion rule will fire. So whether one uses the declaration of complete or not, there is no way to infer that Tom will attend, or Tom will not attend. So, founded semantics says it is undefined.

    Then constraint semantics tries both for it to be true, and for it to be false; both satisfy the rule, so there are two models: one that Tom will attend, and one that Tom will not attend.

  • Finally, an uncertain complete predicate can be further declared “closed”, meaning that an assertion of the predicate is made false if inferring it to be true requires itself to be true.

    For the example of Tom attending the logic seminar, with this declaration, if there are only 19 others attending, then Tom will not attend in both founded semantics and constraint semantics. This is because inferring that Tom will attend requires Tom himself to attend to make the count to be 20, so it should be made false, meaning that Tom will not attend.

Returning to the simplest example about p in this section: the equality comparison is not monotonic, because adding facts can change it from true to false. Thus p must be declared uncertain.

  • Suppose p is declared not complete. Founded semantics does not infer p(a) to be true using the given rule because count {x: p(x)} = 1 is undefined, and nothing infers p(a) to be false. Thus p(a) is undefined, and so is p(b) for any constant b other than a. Constraint semantics gives a set of models each for a different combination of true and false values of p for different constants. This corresponds to what is often called open-world assumption and used informally in common-sense reasoning.

  • Suppose p is declared complete but not closed. A completion rule is first added. The precise completion rule is:

        \(\neg\) p(x) \(\leftarrow\) x \(\neq\) a \(\lor\) count {x: p(x)} \(\neq\) 1
    
    Founded semantics does not infer p(a) to be true or false using the given rule or completion rule, because count {x: p(x)} \(\neq\) 1 is also undefined. Thus p(a) is undefined. Founded semantics infers p(b) for any constant b other than a to be false using the completion rule. Constraint semantics gives two models: one with p(a) being true, and p(b) being false for any constant b other than a; and one with p being false for every constant. This is the two-model semantics above, Kemp-Stuckey 1991 and Gelfond 2002.

  • Suppose p is declared complete and closed. Both founded semantics and constraint semantics give only the second model above, that is, p(c) is false for every constant c. They have p(a) being false because inferring p(a) to be true requires p(a) itself to be true. This is the one-model semantics above, Faber-Pfeifer-Leone 2011 and Gelfond-Zhang 2014-2019.

We see that simple binary declarations of the underlying assumptions, with simple inference using rules and taking rules as constraints, give the different desired semantics.
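
As a concrete illustration, the following Python sketch, which is ours and assumes a two-constant universe {a, b}, brute-forces the 2-valued models of the rule together with its completion, and filters out the self-supported model when p is declared closed:

    # Brute-force models of  p(a) <- count {x: p(x)} = 1  plus completion.
    from itertools import chain, combinations

    universe = ['a', 'b']

    def models(closed=False):
        result = []
        for true_set in chain.from_iterable(
                combinations(universe, n) for n in range(len(universe) + 1)):
            p = set(true_set)
            # Rule plus completion: p('a') holds iff the count equals 1.
            if ('a' in p) != (len(p) == 1):
                continue
            if closed and 'a' in p:
                # Closed: p('a') would be self-supported (it is needed
                # to make the count 1), so it must be false.
                continue
            result.append(p)
        return result

    print(models())             # [set(), {'a'}]  (two-model semantics)
    print(models(closed=True))  # [set()]         (one-model semantics)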

3 Language

We consider Datalog rules extended with unrestricted negation, disjunction, quantification, aggregation, and comparison involving aggregation.

Datalog rules with unrestricted negation. We first present a simple core form of rules and then describe additional constructs that can appear in rules. The core form of a rule is the following, where any P\(_i\) may be preceded with \(\neg\):

    Q(X\(_1\),...,X\(_a\)) \(\leftarrow\) P\(_1\)(X\(_{11}\),...,X\(_{1a_1}\)) \(\land\) ... \(\land\) P\(_h\)(X\(_{h1}\),...,X\(_{ha_h}\))

Symbols \(\leftarrow\), \(\land\), and \(\neg\) indicate backward implication, conjunction, and negation, respectively. Q and the P\(_i\) are predicates, each argument X\(_i\) and X\(_{ij}\) is a constant or a variable, and each variable in the arguments of Q must also be in the arguments of some P\(_i\). Constants may be numbers or other values. The semantics does not restrict the type of numbers used in programs; they could be integers, rational numbers, Turing-computable real numbers, etc. In arguments of predicates in examples, we use numbers and quoted strings for constants and letters for variables.

If h \(=\) 0, there is no \(\leftarrow\) or P\(_i\), and each X\(_i\) must be a constant, in which case Q(X\(_1\),...,X\(_a\)) is called a fact. For the rest of the paper, “rule” refers only to the case where h \(\geq\) 1, in which case the left side of the backward implication is called the conclusion, the right side is called the body, and each conjunct in the body is called a hypothesis.

Disjunction. In a rule body, hypotheses may be combined using disjunction as well as conjunction. Conjunction and disjunction may be nested arbitrarily.

Quantification. A hypothesis can be an existential or universal quantification of the form

  \(\exists\) x\(_1\),...,x\(_n\) | Y    existential quantification
  \(\forall\) x\(_1\),...,x\(_n\) | Y    universal quantification

where the x\(_i\) are variables that appear in Y, and Y has the same form as a rule body, as defined above. The quantifications return true iff for some or all, respectively, combinations of values of x\(_1\),...,x\(_n\), the body Y is true. The domain of each quantified variable is the set of all constants in the program.

Aggregation and comparison. An aggregation has the form agg S, where agg is an aggregation operator (count, min, max, or sum), and S is a set expression. The aggregation returns the result of applying the respective agg operation on the set value of S. A set expression has the form {x\(_1\),...,x\(_n\): Y}, where each x\(_i\) is a variable in Y, and Y has the core form of a rule body, as defined above. The order used by min and max is the order on numbers, extended lexicographically to an order on tuples. We use sum for numbers, and the set expression after sum must collect values of one variable, that is, must have the form {x: Y}.

A hypothesis of a rule may be a comparison, specifically, an equality (\(=\)) or inequality (\(\neq\), \(<\), \(\leq\), \(>\), or \(\geq\)), with an aggregation on the left and a variable or constant on the right. We include a comprehensive set of comparison operators for readability and to eliminate the need to allow negation applied to comparisons; for example, the negation of a comparison using \(<\) is a comparison using \(\geq\).

The key idea here is that the value of an aggregation or comparison is undefined if there is not enough information about the predicates used to determine the value, or if the aggregation or comparison is applied to a value of a wrong type.

Additional aggregation and comparison functions, including summing only the first component of a set of tuples and using orders on characters and strings, can be supported in the same principled way as we support those discussed here.

Programs, atoms, and literals. A program \(\pi\) is a set of rules and facts, plus declarations for predicates, described after dependencies are introduced next.

An atom of \(\pi\) is a formula formed by applying a predicate symbol in \(\pi\) to constants in \(\pi\), or a comparison that appears in \(\pi\); these are called predicate atoms of \(\pi\) and comparison atoms, respectively.

A literal of \(\pi\) is an atom of \(\pi\) or the negation of a predicate atom of \(\pi\). These are called positive literals and negative literals, respectively.

Dependency graph. The dependency graph of a program summarizes dependencies between predicates induced by the rules, distinguishing positive from non-positive dependencies. We define the dependency graph before discussing declarations for predicates, because the permitted declarations and default declarations are determined by the dependency graph.

An occurrence of a predicate atom A in a hypothesis H is a positive occurrence if (1) H is the positive literal A; (2) H is a comparison atom of the form count S \(\geq\) v, count S \(>\) v, max S \(\geq\) v, max S \(>\) v, min S \(\leq\) v, or min S \(<\) v, and A is in a positive literal in the set expression S; or (3) H is a comparison atom of the form count S \(\leq\) v, count S \(<\) v, max S \(\leq\) v, max S \(<\) v, min S \(\geq\) v, or min S \(>\) v, and A is in a negative literal in the set expression S. Otherwise, the occurrence is a non-positive occurrence.

This definition ensures that hypotheses, and hence rule bodies, are monotonic with respect to positive occurrences of atoms, in the sense that truthifying a positive occurrence of an atom in a hypothesis (that is, changing the truth value of the atom to \(T\)) cannot un-truthify the hypothesis (that is, change the truth value of the hypothesis from \(T\)). In general, any occurrence of a predicate atom A in a hypothesis H (including any comparison involving any aggregation) is a positive occurrence if H can be determined to be monotonic with respect to A. For example, if predicate p holds for only positive numbers, then p(x) is a positive occurrence in sum {x: p(x)} \(\geq\) v.

The dependency graph DG(\(\pi\)) of program \(\pi\) is a directed graph with a node for each predicate of \(\pi\), and an edge from Q to P labeled positive (respectively, non-positive) if a rule whose conclusion contains Q has a hypothesis that contains a positive (respectively, non-positive) occurrence of an atom for P. If there is a path from Q to P in DG(\(\pi\)), then Q depends on P in \(\pi\). If the node for P is in a cycle containing a non-positive edge in DG(\(\pi\)), then P has circular non-positive dependency in \(\pi\).
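
The defaults in the next paragraphs can be computed from this graph. The following Python sketch, which is ours and not the paper's implementation, finds the predicates that must be declared uncertain: those in an SCC containing a non-positive edge (and hence on a cycle through that edge), plus anything that depends on an uncertain predicate:

    # Which predicates must be declared uncertain, per the dependency graph.
    from collections import defaultdict

    def sccs(nodes, edges):
        """Kosaraju's algorithm; returns a map from node to SCC id."""
        adj, radj = defaultdict(list), defaultdict(list)
        for u, v in edges:
            adj[u].append(v)
            radj[v].append(u)
        order, seen = [], set()
        def dfs1(u):
            seen.add(u)
            for v in adj[u]:
                if v not in seen:
                    dfs1(v)
            order.append(u)
        for u in nodes:
            if u not in seen:
                dfs1(u)
        comp = {}
        def dfs2(u, c):
            comp[u] = c
            for v in radj[u]:
                if v not in comp:
                    dfs2(v, c)
        for u in reversed(order):
            if u not in comp:
                dfs2(u, u)
        return comp

    def must_be_uncertain(nodes, pos_edges, nonpos_edges):
        comp = sccs(nodes, pos_edges | nonpos_edges)
        # Circular non-positive dependency: a non-positive edge inside an SCC.
        bad = {comp[u] for (u, v) in nonpos_edges if comp[u] == comp[v]}
        uncertain = {n for n in nodes if comp[n] in bad}
        edges = pos_edges | nonpos_edges   # u -> v means u depends on v
        while True:
            more = {u for (u, v) in edges if v in uncertain} - uncertain
            if not more:
                return uncertain
            uncertain |= more

    # win-not-win: win depends non-positively on itself via "not win(y)".
    print(must_be_uncertain({'win', 'move'},
                            {('win', 'move')}, {('win', 'win')}))  # {'win'}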

Declarations. A predicate declared certain means that each assertion of the predicate has a unique true (\(T\)) or false (\(F\)) value. A predicate declared uncertain means that each assertion of the predicate has a unique true, false, or undefined (\(U\)) value. A predicate declared complete means that all rules with that predicate in the conclusion are given in the program. A predicate declared closed means that an assertion of the predicate is made false, called self-false, if inferring it to be true using the given rules and facts requires assuming itself to be true.

A predicate must be declared uncertain if it has circular non-positive dependency or depends on an uncertain predicate; otherwise, it may be declared certain or uncertain and is by default certain. A predicate may be declared complete or not only if it is uncertain, and it is by default complete. A predicate may be declared closed or not only if it is uncertain and complete, and it is by default not closed.

We do not give here a syntax for declarations of predicates to be certain, complete, closed, or not, because it is straightforward, and in almost all examples, the default declarations are used. However, Liu and Stoller [LS20b] introduce a language that supports such declarations and supports the use of both founded semantics and constraint semantics.

Notations. In presenting the semantics, in particular the completion rules, we allow negation in the conclusion of rules, and we allow hypotheses to be equalities between variables.

4 Formal semantics

This section extends the definitions of founded semantics and constraint semantics in [LS18, LS20a] to handle aggregation and comparison. Most of the foundational definitions need to be extended, including the definitions of atom, literal, and positive occurrence of a predicate atom in Section 3, and of complement, ground instance, model, one-step derivability, and unfounded set in this section. By carefully extending these foundational definitions, we are able to avoid explicit changes to the definitions of other terms and functions built on them, including the definition of completion and the definition of the least fixed point at the heart of the semantics, embodied mainly in the function LFPbySCC.

Complements and consistency. The predicate literals P(c\(_1\),...,c\(_a\)) and \(\neg\) P(c\(_1\),...,c\(_a\)) are complements of each other. The following pairs of comparison literals are complements of each other: agg S \(=\) v and agg S \(\neq\) v; agg S \(<\) v and agg S \(\geq\) v; agg S \(>\) v and agg S \(\leq\) v.

A set of literals is consistent if it does not contain a literal and its complement.

Interpretations, ground instances, derivability of comparisons, models, and one-step derivability. An interpretation of a program \(\pi\) is a consistent set of literals of \(\pi\). Interpretations are generally 3-valued: a literal is true (\(T\)) in interpretation I if it is in I, is false (\(F\)) in I if its complement is in I, and is undefined (\(U\)) in I if neither it nor its complement is in I. An interpretation of \(\pi\) is 2-valued if it contains, for each predicate atom of \(\pi\), either the atom or its complement. Interpretations are ordered by set inclusion \(\subseteq\).

An occurrence of a variable x in a quantification Q is bound in Q if x is in the variable list to the left of the vertical bar in Q. An occurrence of a variable x in a set expression S is bound if x is in the variable list to the left of the colon in S. An occurrence of a variable in a rule R is free if it is not bound in a quantification or set expression in R.

A ground instance of a rule R in a program \(\pi\) is any rule obtained from R by expanding universal quantifications into conjunctions over all constants (in \(\pi\)), instantiating existential quantifications with any constants, and instantiating the remaining free occurrences of variables with any constants (of course, all free occurrences of the same variable are replaced with the same constant). A ground instance of a comparison atom c is a comparison atom obtained from c by instantiating the free occurrences of variables in c with any constants. A ground instance of a set expression {x\(_1\),...,x\(_n\): Y} is a pair ((c\(_1\),...,c\(_n\)), Y\(^\prime\)) obtained by instantiating all variables in x\(_1\),...,x\(_n\) and Y with any constants. Let ground(S) denote the set of ground instances of set expression S. For a set expression S, truth value t, and interpretation I, let S\(_t\)(I) denote the set of ground instances of S whose instantiated body has truth value t in I.

Figure 1: Derivability of comparisons. Num denotes the set of numbers that may appear in programs, and NumT denotes the set containing the numbers in Num and the tuples of numbers in Num. Biconditionals for derivability of other comparisons are obtained from those given as follows. (1) Biconditionals for deriving comparisons using min are obtained from those for max by replacing max with min and reversing the direction of inequalities \(\leq\) and \(<\) to \(\geq\) and \(>\), respectively. (2) For each aggregation operator agg, biconditionals for deriving agg S \(\leq\) v, agg S \(\geq\) v, and agg S \(\neq\) v are obtained from the given biconditionals for agg S \(<\) v, agg S \(>\) v, and agg S \(=\) v, respectively, by replacing “\(<\)” with “\(\leq\)”, “\(>\)” with “\(\geq\)”, and “\(=\)” with “\(\neq\)”, respectively.

Informally, a comparison c is derivable in I, denoted I \(\models\) c, if it must hold in I, regardless of whether atoms with undefined truth values are true or false. The formal definition, shown in Figure 1, is a case analysis on the aggregation operator and the comparison operator. The definition implies that, in an interpretation I, if a comparison atom involves applying min, max, or sum to a set containing a non-numeric value, then the comparison atom and its complement are not derivable in I, and therefore, its truth value is \(U\) in I.
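
For count, the case analysis can be sketched as follows; this is our code, and true_n and undef_n are our names, not the paper's notation, for the number of ground instances of the set expression that are true and undefined, respectively (the rest being false):

    def derivable(op, true_n, undef_n, v):
        """True iff 'count S op v' must hold no matter how the
        undefined instances turn out."""
        lo, hi = true_n, true_n + undef_n   # possible range of the count
        if op == '>=': return lo >= v
        if op == '>':  return lo > v
        if op == '<=': return hi <= v
        if op == '<':  return hi < v
        if op == '=':  return lo == hi == v
        if op == '!=': return hi < v or lo > v
        raise ValueError(op)

    # Tom's seminar: 19 others surely attend, Tom himself is undefined.
    print(derivable('>=', 19, 1, 20))  # False: not derivable
    print(derivable('<', 19, 1, 20))   # False: complement not derivable
    # Neither is derivable, so the comparison is undefined, as in Section 2.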

An interpretation I is a model of a program \(\pi\) if it (1) contains all facts in \(\pi\), (2) satisfies all rules of \(\pi\), interpreted as formulas in 3-valued logic [Fit85] (that is, for each ground instance of each rule, if the body is true in I, then so is the conclusion), and (3) contains all derivable ground instances of comparison atoms of \(\pi\) (that is, if I \(\models\) c, then c is true in I).

The one-step derivability function OneStep\(_\pi\) for program \(\pi\) performs one step of inference using rules of \(\pi\) and evaluation of comparisons in a given interpretation. Formally, A \(\in\) OneStep\(_\pi\)(I) iff (1) A is a fact of \(\pi\), (2) there is a ground instance R of a rule of \(\pi\) with conclusion A such that the body of R is true in interpretation I, or (3) A is a ground instance of a comparison atom of \(\pi\) and I \(\models\) A.

Founded semantics without closed declarations. We first define a version of founded semantics, denoted Founded\(_0\), that does not consider declarations of closed predicates. We then extend the definition to handle those declarations. Intuitively, the founded model of a program \(\pi\), denoted Founded(\(\pi\)), is the least set of literals that are given as facts or can be inferred by repeated use of the rules. We define Founded\(_0\)(\(\pi\)) \(=\) UnNameNeg(LFPbySCC(NameNeg(Cmpl(\(\pi\))))), where functions Cmpl, NameNeg, LFPbySCC, and UnNameNeg are defined as follows.

Completion. Completion function Cmpl returns the completed program of \(\pi\). Formally, Cmpl(\(\pi\)) \(=\) AddInv(Combine(\(\pi\))), where Combine and AddInv are defined as follows.

The function Combine returns the program obtained from \(\pi\) by replacing the facts and rules defining each uncertain complete predicate Q with a single combined rule for Q, defined as follows. Transform the facts and rules defining Q so they all have the same conclusion Q(V\(_1\),...,V\(_a\)), by replacing each fact or rule Q(X\(_1\),...,X\(_a\)) \(\leftarrow\) B with

    Q(V\(_1\),...,V\(_a\)) \(\leftarrow\) \(\exists\) Y\(_1\),...,Y\(_k\) | V\(_1\) \(=\) X\(_1\) \(\land\) ... \(\land\) V\(_a\) \(=\) X\(_a\) \(\land\) B

where V\(_1\),...,V\(_a\) are fresh variables (that is, not occurring in any given rule defining Q), and Y\(_1\),...,Y\(_k\) are all variables occurring in the original rule or fact. Combine the resulting rules for Q into a single rule defining Q whose body is the disjunction of the bodies of those rules. This combined rule for Q is logically equivalent to the original facts and rules for Q. Similar completion rules are used in Clark completion [Cla78] and Fitting semantics [Fit85].

The function AddInv returns the program obtained from \(\pi\) by adding, for each uncertain complete predicate Q, a completion rule that derives negative literals for Q. The completion rule for Q is obtained from the inverse of the combined rule defining Q (recall that the inverse of C \(\leftarrow\) B is \(\neg\) C \(\leftarrow\) \(\neg\) B), by (1) putting the body of the rule in negation normal form, that is, using laws of predicate logic to move negation inwards and eliminate double negations, and (2) using laws of arithmetic to eliminate negation applied to comparison atoms (for example, replace \(\neg\) (count S \(\leq\) v) with count S \(>\) v). As a result, in completion rules, negation is applied only to predicate atoms.

Least fixed point. The least fixed point is preceded and followed by functions that introduce and remove, respectively, new predicates representing the negations of the original predicates.

The function NameNeg returns the program obtained from \(\pi\) by replacing each negative literal \(\neg\) P(X\(_1\),...,X\(_a\)) with n.P(X\(_1\),...,X\(_a\)), where the new predicate n.P represents the negation of predicate P.

The function LFPbySCC uses a least fixed point to infer facts for each strongly connected component (SCC) in the dependency graph of \(\pi\), as follows. Let S\(_1\),...,S\(_n\) be a list of the SCCs in dependency order, so earlier SCCs do not depend on later ones; it is easy to show that any linearization of the dependency order leads to the same result for LFPbySCC. The projection of a program \(\pi\) onto an SCC S, denoted Proj(\(\pi\), S), contains all facts of \(\pi\) whose predicates are in S and all rules of \(\pi\) whose conclusions contain predicates in S.

Define LFPbySCC(\(\pi\)) \(=\) I\(_n\), where I\(_0\) \(=\) \(\emptyset\) and I\(_i\) \(=\) AddNeg(LFP(OneStep\(_{\pi_i}\)), S\(_i\)) for i \(=\) 1 to n, with \(\pi_i\) \(=\) Proj(\(\pi\), S\(_i\)) \(\cup\) I\(_{i-1}\). LFP is the least fixed point operator. AddNeg(I, S) returns the interpretation obtained from interpretation I by adding completion facts for certain predicates in S to I; specifically, for each certain predicate P in S, for each combination c of values of arguments of P, if I does not contain P(c), then add n.P(c).

The least fixed point is well-defined, because the one-step derivability function is monotonic. To see this, we show monotonicity of each of the three parts of the definition of OneStep. Part (1) adds a fixed set of facts and hence is trivially monotonic. Part (2) is monotonic because the rules do not contain negation, as a result of applying NameNeg. Part (3) is monotonic because the definition of derivability of comparisons ensures that adding literals to an interpretation cannot change the truth value of a comparison from true to false or vice versa (it can only change the truth value from undefined to true or false).
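
For intuition, here is a minimal sketch, ours and not the paper's implementation, of the least fixed point of one-step derivability for a ground positive program; rule bodies here are just sets of atoms, and a comparison hypothesis would be checked with a derivability test like the count sketch above:

    def lfp(facts, rules):
        """rules: list of (conclusion, set_of_hypotheses) pairs."""
        interp = set(facts)
        changed = True
        while changed:
            changed = False
            for concl, hyps in rules:
                if concl not in interp and hyps <= interp:
                    interp.add(concl)      # one step of derivability
                    changed = True
        return interp

    # A tiny transitive-closure-style program as a usage example.
    facts = {('edge', 1, 2), ('edge', 2, 3)}
    rules = [(('path', 1, 3), {('path', 1, 2), ('edge', 2, 3)}),
             (('path', 1, 2), {('edge', 1, 2)}),
             (('path', 2, 3), {('edge', 2, 3)})]
    print(sorted(a for a in lfp(facts, rules) if a[0] == 'path'))
    # [('path', 1, 2), ('path', 1, 3), ('path', 2, 3)]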

The function UnNameNeg returns the interpretation obtained from interpretation I by replacing each atom n.P(c\(_1\),...,c\(_a\)) with \(\neg\) P(c\(_1\),...,c\(_a\)).

Founded semantics with closed declarations. Informally, when an uncertain complete predicate is declared closed, an atom A of the predicate is false in an interpretation I for a program \(\pi\), called self-false in I, if every ground instance of a rule that concludes A has a hypothesis that is false in I or, recursively, is self-false in I. To simplify the formalization, the following definitions assume that ground instances of rules have been transformed to eliminate disjunction, by putting the body of each ground instance of a rule into disjunctive normal form (DNF) and then replacing it with multiple rules, one per disjunct of the DNF.

A set U of predicate atoms for closed predicates is an unfounded set of \(\pi\) with respect to an interpretation I of \(\pi\) iff U is disjoint from I and, for each atom A in U, for each ground instance R of a rule of \(\pi\) with conclusion A,

  1. some hypothesis of R is false in I,

  2. some positive predicate hypothesis of R for a closed predicate is in U, or

  3. some comparison hypothesis c of R is false when all atoms in U are false, that is, I \(\cup\) \(\neg\)\(\cdot\)U \(\models\) \(\neg\) c,

where, for a set S of positive literals, \(\neg\)\(\cdot\)S \(=\) {\(\neg\) A : A \(\in\) S}, called the element-wise negation of S, and where \(\neg\) c is implicitly simplified to eliminate the negation applied to c by changing the comparison operator in c. This is the same as the usual definition of unfounded set [VRS91] except that we inserted “for a closed predicate” in clause (2), and we added the new clause (3). Because comparisons are not conjunctions of literals, it is not easy to directly express (by analogy with clause (2)) that some atom in U is a necessary condition for c to be true, so we instead check in clause (3) that c is false when all atoms in U are false.

The definition of unfounded set ensures that extending I to make all atoms in U false is consistent with \(\pi\), in the sense that no atom in U can be inferred to be true in the extended interpretation. SelfFalse\(_\pi\)(I), the set of self-false atoms of \(\pi\) with respect to interpretation I, is the greatest unfounded set of \(\pi\) with respect to I.
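
The greatest unfounded set can be computed by starting from all candidate atoms and shrinking. The following sketch is ours and deliberately simplified to predicate hypotheses only (no comparisons, so only clauses (1) and (2) apply); the usage example recasts the mutual support in the correlated-counts example of Section 5.4 propositionally, whereas in the actual example the support is via count \(\geq\) 2:

    def greatest_unfounded(candidates, rules, false_in_I):
        """rules: list of (conclusion, set_of_positive_hypotheses)."""
        U = set(candidates)
        changed = True
        while changed:
            changed = False
            for concl, hyps in rules:
                if concl in U:
                    # The rule still supports concl unless a hypothesis is
                    # false in I (clause 1) or is itself in U (clause 2).
                    if not (hyps & false_in_I) and not (hyps & U):
                        U.discard(concl)
                        changed = True
        return U

    # p(2) and p(3) each only support each other, so both are unfounded.
    rules = [('p3', {'p2'}), ('p2', {'p3'})]
    print(greatest_unfounded({'p2', 'p3'}, rules, set()))  # {'p2', 'p3'}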

The founded semantics is defined by repeatedly computing the semantics given by Founded\(_0\) (founded semantics without closed declarations) and then setting self-false atoms to false, until a least fixed point is reached. Formally, the founded semantics is Founded(\(\pi\)) \(=\) LFP(F\(_\pi\)), where F\(_\pi\)(I) \(=\) Founded\(_0\)(\(\pi\) \(\cup\) I) \(\cup\) \(\neg\)\(\cdot\)SelfFalse\(_\pi\)(Founded\(_0\)(\(\pi\) \(\cup\) I)).

Constraint semantics. Constraint semantics is a set of 2-valued models based on founded semantics. A constraint model of \(\pi\) is a consistent 2-valued interpretation M such that M is a model of Cmpl(\(\pi\)) and Founded(\(\pi\)) \(\subseteq\) M. We define Constraint(\(\pi\)) to be the set of constraint models of \(\pi\). Constraint models can be computed from Founded(\(\pi\)) by iterating over all assignments of true and false to atoms that are undefined in Founded(\(\pi\)), and checking which of the resulting interpretations satisfy all rules of Cmpl(\(\pi\)).
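
That enumeration can be sketched directly; this is our code, with the rules abstracted as a satisfaction-checking function:

    from itertools import product

    def constraint_models(founded_true, undefined, satisfies):
        """satisfies: function from a set of true atoms to bool."""
        models = []
        for bits in product([False, True], repeat=len(undefined)):
            true_atoms = set(founded_true) | {
                a for a, b in zip(undefined, bits) if b}
            if satisfies(true_atoms):
                models.append(true_atoms)
        return models

    # The p(a) <- count {x: p(x)} = 1 example, complete but not closed:
    # founded semantics leaves p('a') undefined (and p('b') false).
    ok = lambda m: ('a' in m) == (len(m) == 1)   # rule plus completion
    print(constraint_models(set(), ['a'], ok))   # [set(), {'a'}]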

Consistency and correctness. The most important properties of the semantics are consistency and correctness. Proofs of the following theorems are in Appendix A.

Theorem 1. The founded model and constraint models of a program \(\pi\) are consistent.

Theorem 2. The founded model of a program \(\pi\) is a model of \(\pi\) and Cmpl(\(\pi\)). The constraint models of \(\pi\) are 2-valued models of \(\pi\) and Cmpl(\(\pi\)).

5 Examples

Many small examples similar to the example in Section 2 have been discussed extensively in the literature. The most recent work [GZ19] is the most comprehensive, discussing 28 examples; we discuss their Examples 1 and 28 to show the range of difficulties they deal with, Example 15, which resorts to a subset relation, and Example 25, which spans the most discussion. Appendix B contains additional examples.

5.1 Classes needing teaching assistants

This is Example 1 in [GZ19]. It considers a complete list of students enrolled in a class c, represented by a collection of facts:

  enrolled(’c’,’mike’)  enrolled(’c’,’john’)  ...
It defines a relation need_ta(c) that holds iff class c needs a teaching assistant, that is, the number of students enrolled in the class is greater than 20, and it gives a second rule for its negation, as follows:
  need_ta(c) \(\leftarrow\) count {x : enrolled(c,x)} \(>\) 20
  n_need_ta(c) \(\leftarrow\) \(\neg\) need_ta(c)

Because enrolled is certain, from the list being complete, and there is no aggregation in recursion, need_ta is certain by default. Thus need_ta is straightforward to infer by just doing the counting for each class c and then checking whether the count is greater than 20. Then n_need_ta is computed, simply concluding true for classes for which need_ta is false.
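
Since the original list of students is elided, the following sketch, ours with assumed enrollment data (25 hypothetical students in class 'c'), shows the direct computation:

    # Hypothetical data standing in for the elided enrollment facts.
    enrolled = {('c', 'student%d' % i) for i in range(25)}

    classes = {c for c, _ in enrolled}
    need_ta = {c for c in classes
               if sum(1 for c2, _ in enrolled if c2 == c) > 20}
    n_need_ta = classes - need_ta   # completion: negation of need_ta
    print(need_ta, n_need_ta)       # {'c'} set()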

5.2 Subset relation as conditions—with universal quantification

This is Example 15 in [GZ19]. It considers a knowledge base containing two complete lists of atoms, for two relations taken and required:

  taken(’mike’,’cs1’)  taken(’mike’,’cs2’)
  taken(’john’,’cs2’)
  required(’cs1’)  required(’cs2’)
It introduces a subset relation to define a new relation ready_to_graduate(s) that holds if student s has taken all the required classes:
  ready_to_graduate(s) \(\leftarrow\) {c: required(c)} \(\subseteq\) {c: taken(s,c)}
The problem description in [GZ19, Example 15] says that using the subset relation “avoids a more complex problem of introducing universal quantifiers and some kind of implication in the rules of the language”.

With our language, the rule can be written directly using universal quantification and implication, where P \(\Rightarrow\) Q can be trivially rewritten as \(\neg\) P \(\lor\) Q, yielding:

  ready_to_graduate(s) \(\leftarrow\) \(\forall\) c | \(\neg\) required(c) \(\lor\) taken(s,c)

Because taken and required as given are certain, and there is no negation or aggregation in recursion, ready_to_graduate is certain by default and can be computed simply as a least fixed point. This yields the same result for founded semantics and constraint semantics: ready_to_graduate(’mike’) is true and ready_to_graduate(’john’) is false.
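
A direct evaluation of the universally quantified rule can be sketched as follows; the code is ours, using the facts given above, and the quantifier domain is approximated by the class constants appearing in the program:

    taken = {('mike', 'cs1'), ('mike', 'cs2'), ('john', 'cs2')}
    required = {'cs1', 'cs2'}

    students = {s for s, _ in taken}
    classes = required | {c for _, c in taken}   # constants in the program
    # forall c | not required(c) or taken(s, c)
    ready = {s for s in students
             if all(c not in required or (s, c) in taken for c in classes)}
    print(ready)  # {'mike'}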

Example 15 in [GZ19] also discusses other assumptions and rules. They are either non-issues or straightforward to handle in our language. For example, if taken is not complete, founded semantics gives that ready_to_graduate(’mike’) is true and ready_to_graduate(’john’) is undefined, and constraint semantics gives two models: one with ready_to_graduate(’john’) being true and one with it being false.

5.3 Digital circuits—from the most complex to the simplest

This is Example 25, the one with the longest span of discussion in [GZ19], building on their Examples 11, 23, and 24 as simpler instances or parts. It considers a program for propagating binary signals through a digital circuit that has no feedback, consisting of the following facts (where input(w,g) means that wire w is an input to gate g, output(w,g) is similar, gate(g,’and’) means that gate g is an and gate, and val(w,v) means that wire w has value v):

  input(’w1’,’g1’)  input(’w2’,’g1’)  input(’w0’,’g2’)
  output(’w0’,’g1’)  output(’w3’,’g2’)
  gate(’g1’,’and’)  gate(’g2’,’and’)
  val(’w1’,0)  val(’w2’,1)
and a rule:
  val(w,0) \(\leftarrow\) output(w,g) \(\land\) gate(g,’and’)
              \(\land\) count {w: val(w,0), input(w,g)} \(>\) 0
Their Example 11 does not have the last fact on each line (that is, no gate g2, no input on w0, no output on w3, and no value on w2). (Their Example 11 also reverses the first two hypotheses of the rule; this appears to be accidental.) Their Examples 23 and 24 use simpler instances of the rule to illustrate their definitions of “splitting set” and “stratification”, respectively.

First, input, output, and gate as given are certain. Then, val is certain by default, even though val is defined using val in aggregation, because the dependency is positive—counting compared with \(>\) and with no negation is monotonic. Therefore, the semantics is simply a least fixed point using the given rule, yielding the same result for founded semantics and constraint semantics: val(’w0’,0) and val(’w3’,0), plus the given facts, consistent with all of Examples 11 and 23-25 in [GZ19].
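
The least fixed point for this example can be sketched directly; the code is ours, using exactly the facts given above:

    input_ = {('w1', 'g1'), ('w2', 'g1'), ('w0', 'g2')}
    output = {('w0', 'g1'), ('w3', 'g2')}
    gate = {('g1', 'and'), ('g2', 'and')}
    val = {('w1', 0), ('w2', 1)}

    changed = True
    while changed:
        changed = False
        for w, g in output:
            if (g, 'and') in gate:
                # count {w: val(w,0), input(w,g)} > 0
                zeros = sum(1 for w2, g2 in input_
                            if g2 == g and (w2, 0) in val)
                if zeros > 0 and (w, 0) not in val:
                    val.add((w, 0))
                    changed = True
    print(sorted(val))  # adds ('w0', 0) and then ('w3', 0)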

5.4 Correlated counts—with different predicate declarations

This is Example 28, the last example, in [GZ19]. It considers the following one fact and two rules:

  p(1)
  p(3) \(\leftarrow\) count {x: p(x)} \(\geq\) 2
  p(2) \(\leftarrow\) count {x: p(x)} \(\geq\) 2

We have that p is certain by default, even though p is defined using p in aggregation, because the dependency is positive—counting compared with \(\geq\) and with no negation is monotonic. The least fixed point infers p(1) to be true in one iteration, and ends with p(1) being true, and p(3) and p(2) being false, as the result of both founded semantics and constraint semantics. This is the same as the resulting answer set in [GZ19], but it is obtained straightforwardly, not using naive guesses followed by sophisticated reducts as in computing answer sets, which, for this example and for one of the answer sets, considers 9 S-reducts, each containing 3 rules or clauses, for a combination of three models each containing 2 of 3 possible facts [GZ19].

Suppose that the default is not used, and p is declared uncertain and complete. We add the following completion rule, which does not infer p(3) or p(2) to be false.

  \(\neg\) p(x) \(\leftarrow\) x \(\neq\) 1 \(\land\) (x \(\neq\) 3 \(\lor\) count {x: p(x)} \(<\) 2)
                   \(\land\) (x \(\neq\) 2 \(\lor\) count {x: p(x)} \(<\) 2)
Founded semantics gives that p(1) is true, and p(3) and p(2) are undefined. Constraint semantics gives two models: {p(1)} and {p(1),p(2),p(3)}.

Suppose that p is declared uncertain and not complete. It is straightforward that p(1) is true as given, and p(3) and p(2) are left undefined. Thus, founded semantics and constraint semantics are the same as when p is uncertain and complete.

Suppose that p is declared closed. Then the greatest unfounded set is {p(3),p(2)}, and founded semantics gives that p(1) is true, and p(3) and p(2) are false. That is, it makes the last two false, instead of undefined, and is the same as when p is certain. Since there are no undefined values, constraint semantics has one model: {p(1)}. This is again the same as in [GZ19], but note that using p being certain as above yields this desired result straightforwardly.

6 Related work and conclusion

The study of recursive rules with negation goes back at least to Russell’s paradox, discovered about 120 years ago [ID16]. Many logic languages and disagreeing semantics have since been proposed, with significant complications and challenges described in various survey and overview articles, e.g., [AB94, RU95, Fit02, Tru18], and in works on relating and unifying different semantics, e.g., [Dun92, Prz94, LZ04, DT08, HDCD10, BDT16, LS20a].

Recursive rules with aggregation became a subject of study soon after rules with negation were used in programming. They received a large variety of different semantics over 20 years, e.g., [KS91, VG92, RS92, SSRB93, CM93, SNS02, Gel02, PDB07, FPL08, FPL11], and even more intensive study in the last few years [GZ14, AFG15, AL15, Alv16, AFG16, ZR16, GZ17, ZYD17, ADM18, CFDCP18, GZ18, CFSS19, GZ19, GWM19, DLW19, ZDG19], especially as they are needed in graph analysis and machine learning applications.

Aggregation is even more challenging than negation because it is more general. For example, the count of all values x for which p(x) holds is 0 iff for all x, p(x) does not hold, but the count can be, say, 3, meaning that p(x) holds for some combination of 3 values, but does not for the other values, with many possibilities. This has led to more different and more sophisticated semantics than for negation.

Kemp and Stuckey [KS91] present one of the earliest comprehensive studies, improving over a number of previous works. They extend WFS and SMS to programs with aggregation and study previously defined classes of aggregate programs under several notions of stratification, as well as properties such as monotonicity. Their semantics requires that a set be fully defined before aggregation can be performed on it. This may leave too many values undefined and may give more models than desired in some use cases. Different use cases work with different assumptions, but before our work, there was no study that states the assumptions explicitly and simply and then accommodates them. Our founded semantics and constraint semantics allow easy expression of the underlying assumptions to obtain the different desired semantics.

Van Gelder [VG92] presents an early approach in which aggregations are defined using ordinary rules, rather than introduced as new primitives, in a language with 3-valued semantics. It illustrates the approach using examples involving min, max, subset, and sum, with the rules defining the aggregations customized in some cases to the problem at hand. The paper shows that the desired results are obtained for several non-trivial examples but not for some others. Unfortunately, it is hard to characterize the programs for which the approach gives desired results. In contrast, our work handles a clearly defined language of programs with aggregations, allows specification of different assumptions, and supports both 2-valued and 3-valued semantics. Also, our work allows rules with disjunction and quantifiers. These are not considered in [VG92].

Many other different semantics, some focused on restricted classes or issues, have been studied. For example, the survey by Ramakrishnan and Ullman [RU95] discusses some different semantics, optimization methods, and uses of recursive rules with aggregation in earlier projects. Ross and Sagiv [RS92] study monotonic aggregation but not general aggregation. Beeri et al. [BRSS92] present the valid model semantics for logic programs with negation, set expressions, and grouping. Sudarshan et al. [SSRB93] extend the valid model semantics for aggregation, give semantics to more programs than Van Gelder [VG92], and subsume a class of programs in Ganguly et al. [GGZ91], but theirs is only a 3-valued semantics. Hella et al. [HLNW99, HLNW01] study expressiveness of aggregation operators but without recursion. Pelov et al. [PDB07] formally study and compare different semantics for aggregation, especially in terms of precision.

Gelfond and Zhang [Gel02, GZ14, ZR16, GZ17, GZ18, GZ19] study the challenges and solutions for aggregation in recursion extensively, in an effort to establish a desired semantics for aggregation that corresponds to SMS, a set of 2-valued models. This resulted in changes from earlier semantics by Gelfond [Gel02], essentially to capture an implicit closed-world assumption. Their most recent work [GZ19] systematically presents dozens of definitions and propositions and discusses dozens of examples. A number of other works have followed their line of study for answer set programming (ASP) [CFDCP18, CFSS19, GZ19].

Zaniolo et al. [GGZ91, ZAO93, ZYD17, GWM19, DLW19, ZDG19] study recursive rules with aggregation for database applications, especially for big data analysis and machine learning applications in recent years. They study optimizations that exploit monotonicity as well as additional properties of the aggregation operators in computing the least fixed point, yielding the superior performance and scalability necessary for these large applications. They discuss insights from their application experience as well as prior research that center on fixed-point computation [ZYD17], which essentially corresponds to the assumption that predicates are certain.

Our founded semantics and constraint semantics for recursive rules with aggregation unify different previous semantics by allowing different underlying assumptions to be easily specified explicitly, and furthermore separately for each predicate if desired. Our semantics are also fully declarative, giving both a single 3-valued model from simply a least fixed-point computation and a set of 2-valued models from simply constraint solving.

The key enabling ideas of simple binary choices for expressing assumptions and simple least fixed-point computation and constraint solving are taken from Liu and Stoller [LS18, LS20a], who present a simple unified semantics for recursive rules with negation and quantification. To use the power of founded semantics and constraint semantics in programming, they propose DA logic, a logic for design and analysis [LS20b], which allows different assumptions to be specified as one of four meta-constraints, allows the resulting semantics to be referenced directly, and allows programs to be easily and modularly specified using knowledge units.

There are many directions for future research, including additional relationships with prior semantics, additional language features, efficient implementation methods, and complexity guarantees.

References

  • [AB94] Krzysztof R. Apt and Roland N. Bol. Logic programming and negation: A survey. Journal of Logic Programming, 19:9–71, 1994.
  • [ADM18] Mario Alviano, Carmine Dodaro, and Marco Maratea. Shared aggregate sets in answer set programming. Theory and Practice of Logic Programming, 18(3-4):301–318, 2018.
  • [AFG15] Mario Alviano, Wolfgang Faber, and Martin Gebser. Rewriting recursive aggregates in answer set programming: back to monotonicity. Theory and Practice of Logic Programming, 15(4-5):559–573, 2015.
  • [AFG16] Mario Alviano, Wolfgang Faber, and Martin Gebser. From non-convex aggregates to monotone aggregates in ASP. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 4100–4104, 2016.
  • [AL15] Mario Alviano and Nicola Leone. Complexity and compilation of GZ-aggregates in answer set programming. Theory and Practice of Logic Programming, 15(4-5):574–587, 2015.
  • [Alv16] Mario Alviano. Evaluating answer set programming with non-convex recursive aggregates. Fundamenta Informaticae, 149(1-2):1–34, 2016.
  • [BDT16] Maurice Bruynooghe, Marc Denecker, and Miroslaw Truszczynski. First order logic with inductive definitions for model-based problem solving. AI Magazine, 37(3):69–80, 2016.
  • [BRSS92] Catriel Beeri, Raghu Ramakrishnan, Divesh Srivastava, and S Sudarshan. The valid model semantics for logic programs. In Proceedings of the 11th ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, pages 91–104. ACM, 1992.
  • [CFDCP18] Pedro Cabalar, Jorge Fandinno, Luis Farinas Del Cerro, and David Pearce. Functional ASP with intensional sets: Application to Gelfond-Zhang aggregates. Theory and Practice of Logic Programming, 18(3-4):390–405, 2018.
  • [CFSS19] Pedro Cabalar, Jorge Fandinno, Torsten Schaub, and Sebastian Schellhorn. Gelfond-Zhang aggregates as propositional formulas. Artificial Intelligence, 274:26–43, 2019.
  • [Cla78] Keith L. Clark. Negation as failure. In H. Gallaire and J. Minker, editors, Logic and Databases, pages 293–322. Plenum Press, 1978.
  • [CM93] Mariano P Consens and Alberto O Mendelzon. Low-complexity aggregation in GraphLog and Datalog. Theoretical Computer Science, 116(1):95–116, 1993.
  • [DLW19] Ariyam Das, Youfu Li, Jin Wang, Mingda Li, and Carlo Zaniolo. Bigdata applications from graph analytics to machine learning by aggregates in recursion. In Proceedings of the 35th International Conference on Logic Programming (Technical Communications), pages 273–279, 2019.
  • [DT08] Marc Denecker and Eugenia Ternovska. A logic of nonmonotone inductive definitions. ACM Transactions on Computational Logic, 9(2):14, 2008.
  • [Dun92] Phan Minh Dung. On the relations between stable and well-founded semantics of logic programs. Theoretical Computer Science, 105(1):7–25, 1992.
  • [Fit85] Melvin Fitting. A Kripke-Kleene semantics for logic programs. Journal of Logic Programming, 2(4):295–312, 1985.
  • [Fit02] Melvin Fitting. Fixpoint semantics for logic programming: A survey. Theoretical Computer Science, 278(1):25–51, 2002.
  • [FPL08] Wolfgang Faber, Gerald Pfeifer, Nicola Leone, Tina Dell’Armi, and Giuseppe Ielpa. Design and implementation of aggregate functions in the DLV system. Theory and Practice of Logic Programming, 8(5-6):545–580, 2008.
  • [FPL11] Wolfgang Faber, Gerald Pfeifer, and Nicola Leone. Semantics and complexity of recursive aggregates in answer set programming. Artificial Intelligence, 175(1):278–298, 2011.
  • [Gel02] Michael Gelfond. Representing knowledge in A-Prolog. In Computational Logic: Logic Programming and Beyond, pages 413–451. Springer, 2002.
  • [GGZ91] Sumit Ganguly, Sergio Greco, and Carlo Zaniolo. Minimum and maximum predicates in logic programming. In Proceedings of the 10th ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, pages 154–163. ACM, 1991.
  • [GL88] Michael Gelfond and Vladimir Lifschitz. The stable model semantics for logic programming. In Proceedings of the 5th International Conference and Symposium on Logic Programming, pages 1070–1080. MIT Press, 1988.
  • [GWM19] Jiaqi Gu, Yugo H. Watanabe, William A. Mazza, Alexander Shkapsky, Mohan Yang, Ling Ding, and Carlo Zaniolo. RaSQL: Greater power and performance for big data analytics with recursive-aggregate-SQL on Spark. In Proceedings of the 2019 International Conference on Management of Data, pages 467–484, 2019.
  • [GZ14] Michael Gelfond and Yuanlin Zhang. Vicious circle principle and logic programs with aggregates. Theory and Practice of Logic Programming, 14(4-5):587–601, 2014.
  • [GZ17] Michael Gelfond and Yuanlin Zhang. Vicious circle principle and formation of sets in ASP based languages. In International Conference on Logic Programming and Nonmonotonic Reasoning, pages 146–159. Springer, 2017.
  • [GZ18] Michael Gelfond and Yuanlin Zhang. Vicious circle principle and logic programs with aggregates. arXiv preprint arXiv:1808.07050, 2018.
  • [GZ19] Michael Gelfond and Yuanlin Zhang. Vicious circle principle, aggregates, and formation of sets in ASP based languages. Artificial Intelligence, 275:28–77, 2019.
  • [HDCD10] P. Hou, B. De Cat, and M. Denecker. FO(FD): Extending classical logic with rule-based fixpoint definitions. Theory and Practice of Logic Programming, 10(4-6):581–596, 2010.
  • [HLNW99] Lauri Hella, Leonid Libkin, Juha Nurmonen, and Limsoon Wong. Logics with aggregate operators. In Proceedings of the 14th Annual IEEE Symposium on Logic in Computer Science, page 35. IEEE Computer Society, 1999.
  • [HLNW01] Lauri Hella, Leonid Libkin, Juha Nurmonen, and Limsoon Wong. Logics with aggregate operators. Journal of the ACM, 48(4):880–907, 2001.
  • [ID16] Andrew David Irvine and Harry Deutsch. Russell’s paradox. Stanford Encyclopedia of Philosophy, First published Fri Dec 8, 1995; substantive revision Sun Oct 9, 2016. https://plato.stanford.edu/entries/russell-paradox/ Accessed Nov. 6, 2019.
  • [KS91] David B Kemp and Peter J Stuckey. Semantics of logic programs with aggregates. In Proceedings of the International Symposium on Logic Programming, volume 91, pages 387–401, 1991.
  • [LS18] Yanhong A. Liu and Scott D. Stoller. Founded semantics and constraint semantics of logic rules. In Proceedings of the International Symposium on Logical Foundations of Computer Science, volume 10703 of Lecture Notes in Computer Science, pages 221–241. Springer, Jan. 2018.
  • [LS19] Yanhong A. Liu and Scott D. Stoller. Founded semantics and constraint semantics of logic rules: An overview. In Proceedings of the 35th International Conference on Logic Programming (Technical Communications), pages 367–368. Open Publishing Association, Sept. 2019.
  • [LS20a] Yanhong A. Liu and Scott D. Stoller. Founded semantics and constraint semantics of logic rules. Journal of Logic and Computation, 30(8), 2020. To appear.  http://arxiv.org/abs/1606.06269.
  • [LS20b] Yanhong A. Liu and Scott D. Stoller. Knowledge of uncertain worlds: Programming with logical constraints. In Proceedings of the International Symposium on Logical Foundations of Computer Science, volume 11972 of Lecture Notes in Computer Science, pages 111–127. Springer, Jan. 2020.
  • [LZ04] Fangzhen Lin and Yuting Zhao. ASSAT: Computing answer sets of a logic program by SAT solvers. Artificial Intelligence, 157(1-2):115–137, 2004.
  • [PDB07] Nikolay Pelov, Marc Denecker, and Maurice Bruynooghe. Well-founded and stable semantics of logic programs with aggregates. Theory and Practice of Logic Programming, 7(3):301–353, 2007.
  • [Prz94] Teodor C. Przymusinski. Well-founded and stationary models of logic programs. Annals of Mathematics and Artificial Intelligence, 12(3):141–187, 1994.
  • [RS92] Kenneth A Ross and Yehoshua Sagiv. Monotonic aggregation in deductive databases. In Proceedings of the 11th ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, pages 114–126. ACM, 1992.
  • [RU95] Raghu Ramakrishnan and Jeffrey D Ullman. A survey of deductive database systems. Journal of Logic Programming, 23(2):125–149, 1995.
  • [SNS02] Patrik Simons, Ilkka Niemelä, and Timo Soininen. Extending and implementing the stable model semantics. Artificial Intelligence, 138(1-2):181–234, 2002.
  • [SSRB93] S Sudarshan, Divesh Srivastava, Raghu Ramakrishnan, and Catriel Beeri. Extending the well-founded and valid semantics for aggregation. In Proceedings of the 1993 International Symposium on Logic programming, pages 590–608. MIT Press, 1993.
  • [Tru18] Miroslaw Truszczynski. An introduction to the stable and well-founded semantics of logic programs. In Michael Kifer and Yanhong Annie Liu, editors, Declarative Logic Programming: Theory, Systems, and Applications, pages 121–177. ACM and Morgan & Claypool, 2018.
  • [VG92] Allen Van Gelder. The well-founded semantics of aggregation. In Proceedings of the 11th ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, pages 127–138, June 2-4, 1992, San Diego, California, 1992.
  • [VG93] Allen Van Gelder. The alternating fixpoint of logic programs with negation. Journal of Computer and System Sciences, 47(1):185–221, 1993.
  • [VRS91] Allen Van Gelder, Kenneth Ross, and John S. Schlipf. The well-founded semantics for general logic programs. Journal of the ACM, 38(3):620–650, 1991.
  • [ZAO93] Carlo Zaniolo, Natraj Arni, and KayLiang Ong. Negation and aggregates in recursive rules: the LDL++ approach. In International Conference on Deductive and Object-Oriented Databases, pages 204–221. Springer, 1993.
  • [ZDG19] Carlo Zaniolo, Ariyam Das, Jiaqi Gu, Youfu Li, Mingda Li, and Jin Wang. Monotonic properties of completed aggregates in recursive queries. CoRR, abs/1910.08888, 2019.
  • [ZR16] Yuanlin Zhang and Maede Rayatidamavandi. A characterization of the semantics of logic programs with aggregates. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 1338–1344, 2016.
  • [ZYD17] Carlo Zaniolo, Mohan Yang, Ariyam Das, Alexander Shkapsky, Tyson Condie, and Matteo Interlandi. Fixpoint semantics and optimization of recursive datalog programs with aggregates. Theory and Practice of Logic Programming, 17(5-6):1048–1065, 2017.

Appendix A Proofs

Proof of Theorem 1. The proof of consistency of the founded model is an extension of the corresponding proof by induction for the language without aggregation [LS20a, Theorem 1]. The proof is by induction on the sequence of interpretations constructed in the semantics by steps that either apply one-step derivability or add the element-wise negation of a greatest unfounded set. Two extensions to the proof are needed to show consistency for the language extended with aggregation.

To show that steps that apply one-step derivability still preserve consistency, we extend the proof to show consistency for comparison atoms added to the interpretation. This follows directly from the definition of derivability of comparisons in Figure 1: for each pair of biconditionals for deriving complementary comparisons, the right sides of those biconditionals are mutually exclusive conditions, that is, the conjunction of those two conditions is not satisfiable.

To show that steps that add the element-wise negation of a greatest unfounded set still preserve consistency, we extend the proof to show that the extended definition of unfounded set still ensures that none of the atoms in an unfounded set U for an interpretation I are derivable in I \(\cup\) \(\neg\)\(\cdot\)U. This property still holds because the definition ensures that, for each rule instance R that could be used to derive an atom in U, (1) some hypothesis of R is false in I and hence is false in I \(\cup\) \(\neg\)\(\cdot\)U, (2) some positive predicate hypothesis of R is in U and hence is false in I \(\cup\) \(\neg\)\(\cdot\)U, or (3) some comparison hypothesis of R is false in I \(\cup\) \(\neg\)\(\cdot\)U. Note that these three cases correspond to the three cases in the extended definition of unfounded set.

For consistency of constraint semantics, note that constraint models are consistent by definition.

Proof of Theorem 2. The proof that Founded(\(\pi\)) is a model of \(\pi\) and Cmpl(\(\pi\)) is an extension of the corresponding proof for the language without aggregation [LS20a, Theorem 2]. The extension is to show that Founded(\(\pi\)) contains all derivable comparisons of \(\pi\). This follows from the fact that applying the one-step derivability function adds all derivable comparisons to the interpretation, except for one remaining issue: one-step derivability is not applied after AddNeg is used to add completion facts for the last SCC S\(_n\). Therefore, we need to show that these completion facts cannot give rise to new comparison atoms derivable by \(\models\).

We prove this by contradiction. Suppose AddNeg adds a negative literal of predicate P in S\(_n\) to the interpretation, and this causes a comparison atom c in a hypothesis of a ground instance R of a rule to become derivable. This implies that P occurs non-positively in c, for two reasons. First, P must occur in c by definition of derivability \(\models\). Second, positive occurrences of P in c cannot have this effect, which can be shown by case analysis for each of the forms of comparison atom in the definition of positive occurrence. P occurring non-positively in c implies that the predicate Q in the conclusion of R depends non-positively on P. Because S\(_n\) is the last SCC, Q must also be in S\(_n\); because P and Q are in the same SCC, P must also depend on Q. This implies that P has circular non-positive dependency and hence must be uncertain. However, this contradicts the assumption that AddNeg added a literal for P, because AddNeg adds literals only for certain predicates.

Constraint models are 2-valued models of Cmpl(\(\pi\)) by definition. Any model of Cmpl(\(\pi\)) is also a model of \(\pi\), because \(\pi\) is logically equivalent to the subset of Cmpl(\(\pi\)) obtained by removing the completion rules added by AddInv.

Appendix B Additional examples

Double-win game—for any kind of input moves

Consider the following game, called the double-win game. Given a set of moves, the game uses the following single rule, called the double-win rule, for winning:

    win(x) \(\leftarrow\) count {y: move(x,y) \(\land\) \(\neg\) win(y)} \(\geq\) 2
It says that x is a winning position if the set of positions y, such that there is a move from x to y and y is not a winning position, has at least two elements. That is, x is a winning position if there are at least two positions to move to from x that are not winning positions.

The double-win game is a generalization of the well-known win-not-win game [LS18, LS20a], which has a single rule stating that x is a winning position if there is a move from x to some position y and y is not a winning position:

    win(x) \(\leftarrow\) move(x,y) \(\land\) \(\neg\) win(y)
One could also rewrite the double-win rule using two explicit positions y1 and y2 and adding y1 \(\neq\) y2, but this approach does not scale when the count can be compared with any number, not just 2, and the number is not necessarily known in advance.

By default, move is certain, and win is uncertain but complete. First, add the completion rule:

    \(\neg\) win(x) \(\leftarrow\) count {y: move(x,y) \(\land\) \(\neg\) win(y)} \(<\) 2
Then, rename \(\neg\) win to n.win:
    win(x) \(\leftarrow\) count {y: move(x,y) \(\land\) n.win(y)} \(\geq\) 2
    n.win(x) \(\leftarrow\) count {y: move(x,y) \(\land\) n.win(y)} \(<\) 2
Now compute the least fixed point. Start with the base case, in the second rule, for positions x that have moves to fewer than 2 positions; this infers n.win(x) facts for those positions x. Then, the first rule infers win(x) facts for any position x that can move to 2 or more positions for which n.win is true.

This process iterates to infer more n.win and more win facts, until a fixed point is reached, where win gives winning positions, n.win gives losing positions, and the remaining positions are draw positions, corresponding to positions for which win is true, false, and undefined, respectively.
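
As a sanity check, this process can be sketched as follows; the code and the move facts are ours, with the derivability bounds for the count made explicit (a successor counts toward the lower bound once n.win is derived for it, and stops counting toward the upper bound once win is derived for it, since win(y) makes n.win(y) false):

    moves = {('a', 'b'), ('a', 'c'), ('b', 'd'), ('c', 'd')}
    positions = {p for m in moves for p in m}

    win, n_win = set(), set()
    changed = True
    while changed:
        changed = False
        for x in positions:
            succ = {y for (u, y) in moves if u == x}
            lower = len(succ & n_win)             # surely n.win successors
            upper = len(succ) - len(succ & win)   # possibly n.win successors
            if lower >= 2 and x not in win:       # win rule derivable
                win.add(x)
                changed = True
            if upper < 2 and x not in n_win:      # completion rule derivable
                n_win.add(x)
                changed = True

    draw = positions - win - n_win
    print(sorted(win), sorted(n_win), sorted(draw))
    # ['a'] ['b', 'c', 'd'] []  -- 'a' wins: it can move to the two
    # non-winning positions 'b' and 'c'; here no position is a draw.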