Founded Semantics and Constraint Semantics of Logic Rules

This paper describes a simple new semantics for logic rules, founded semantics, and its straightforward extension to another simple new semantics, constraint semantics. The new semantics support unrestricted negation, as well as unrestricted existential and universal quantifications. They are uniquely expressive and intuitive by allowing assumptions about the predicates and rules to be specified explicitly. They are completely declarative and easy to understand and relate cleanly to prior semantics. In addition, founded semantics can be computed in linear time in the size of the ground program.


1 Introduction

Logic rules and inference are fundamental in computer science, especially for solving complex modeling, reasoning, and analysis problems in critical areas such as program verification, security, and decision support.

The semantics of logic rules and their efficient computations have been a subject of significant study, especially for complex rules that involve recursive definitions and unrestricted negation and quantifications. Many different semantics and computation methods have been proposed. Even the two dominant semantics for logic programs, well-founded semantics (WFS) [VRS91, VG93] and stable model semantics (SMS) [GL88], are still difficult to understand intuitively, even for extremely simple rules; they also make implicit assumptions and, in some cases, do not capture common sense, especially ignorance.

This paper describes a simple new semantics for logic rules, founded semantics, that extends straightforwardly to another simple new semantics, constraint semantics.

• The new semantics support unrestricted negation (both stratified and non-stratified), as well as unrestricted combinations of existential and universal quantifications.

• They allow each predicate to be specified explicitly as certain (each assertion of the predicate has one of two values: true, false) or uncertain (has one of three values: true, false, undefined), and as complete (all rules defining the predicate are given) or not.

• Completion rules are added for predicates that are complete, as explicit rules for inferring the negation of those predicates using the negation of the hypotheses of the given rules.

• Founded semantics infers all true and false values that are founded, i.e., rooted in the given true or false values and exactly following the rules, and it completes certain predicates with false values and completes uncertain predicates with undefined values.

• Constraint semantics extends founded semantics by allowing undefined values to take all combinations of true and false values that satisfy the constraints imposed by the rules.

Founded semantics and constraint semantics unify the core of previous semantics and have three main advantages:

1. They are expressive and intuitive, by allowing assumptions about predicates and rules to be specified explicitly, by including the choice of uncertain predicates to support common-sense reasoning with ignorance, and by adding explicit completion rules to define the negation of predicates.

2. They are completely declarative and easy to understand. Founded semantics takes the given rules and completion rules as recursive definitions of the predicates and their negation, and is simply the least fixed point of the recursive functions. Constraint semantics takes the given rules and completion rules as constraints, and is simply the set of all solutions that are consistent with founded semantics.

3. They relate cleanly to prior semantics, including WFS and SMS, as well as Fitting semantics (also called Kripke-Kleene semantics) [Fit85], supported models [ABW88], stratified semantics [ABW88, VG89], and first-order logic, by explicitly specifying corresponding assumptions about the predicates and rules.

Additionally, founded semantics can be computed in linear time in the size of the ground program, as opposed to quadratic time for WFS.

Finally, founded semantics and constraint semantics can be extended to allow uncertain, complete predicates to be specified as closed—making an assertion of the predicate false if inferring it to be true (respectively false) using the given rules and facts requires assuming itself to be true (respectively false)—and thus match WFS and SMS, respectively.

2 Motivation for founded semantics and constraint semantics

Founded semantics and constraint semantics are designed to be intuitive and expressive. For rules with no negation or with restricted negation, which have universally accepted semantics, the new semantics are consistent with the accepted semantics. For rules with unrestricted negation, which so far lack a universally accepted semantics, the new semantics unify the core of prior semantics with two basic principles:

1. Assumptions about certain and uncertain predicates, with true (T) and false (F) values, or possibly undefined (U) values, and about whether the rules defining each predicate are complete, must be made explicit.

2. Any easy-to-understand semantics must be consistent with one where everything inferred that has a unique T or F value is rooted in the given T or F values and obtained by following the rules.

This section gives informal explanations.

Rules with no negation.

Consider a set of rules with no negation in the hypotheses, e.g., a rule can be “q(x) if p(x)” but not “q(x) if not p(x)” for predicates p and q and variable x. The meaning of the rules, given a set of facts, e.g., a fact p(a) for constant a, is the set of all facts that are given or can be inferred by applying the rules to the facts, e.g., {p(a),q(a)} using the example rule and fact given. In particular,

1. Everything is either T or F: T as given or inferred facts, and F otherwise. So one can just explicitly express what is T, and the rest is F.

2. Everything inferred must be founded, i.e., rooted in the given facts and following the rules. So anything that always depends on itself, e.g., p(a) given only the rule “p(x) if p(x)”, is not T.

In technical terms, the semantics is 2-valued, and the set of all facts, i.e., true assertions, is the minimum model, equal to the least fixed point of applying the rules starting from the given facts.
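The least-fixed-point computation just described can be sketched in Python. The encoding is illustrative, not from the paper: ground rules are (conclusion, body) pairs of string atoms, and a fact is a rule with an empty body.

```python
def least_fixed_point(rules):
    """Apply the ground rules to the given facts until no new atom is inferred."""
    true_atoms = set()
    changed = True
    while changed:
        changed = False
        for conclusion, body in rules:
            # A rule fires when every hypothesis in its body is already true.
            if conclusion not in true_atoms and all(h in true_atoms for h in body):
                true_atoms.add(conclusion)
                changed = True
    return true_atoms

# Fact "p(a)" and the rule "q(x) if p(x)" grounded over constant a:
rules = [("p(a)", []), ("q(a)", ["p(a)"])]
print(sorted(least_fixed_point(rules)))        # ['p(a)', 'q(a)']

# "p(x) if p(x)" alone infers nothing: p(a) is not founded.
print(least_fixed_point([("p(a)", ["p(a)"])])) # set()
```

The second call illustrates point 2 above: an atom that only depends on itself is never added, so it is F in this 2-valued setting.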

Rules with restricted negation.

Consider rules with negation in the hypotheses, but with each negation only on a predicate all of whose facts can be inferred without using the rule containing that negation, e.g., one can have “q(x) if not p(x)” but not “p(x) if not p(x)”. The meaning of the rules is as for rules with no negation except that a rule with negation is applied only after all facts of the negated predicates have been inferred. In other words,

1. The true assertions of any predicate do not depend on the negation of that predicate. So a negation could be just a test after all facts of the negated predicate are inferred. The rest remains the same as for rules with no negation.

In technical terms, this is stratified negation; the semantics is still 2-valued, the minimum model, and the set of all true assertions is the least fixed point of applying the rules in order of the strata.
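Evaluation in order of the strata can be sketched as follows, under an assumed encoding: ground rules are (conclusion, positive body, negative body) triples of string atoms, and the caller supplies the rule sets already grouped into strata so that each negated predicate is fully defined in earlier strata.

```python
def eval_stratified(strata):
    """Evaluate rule sets stratum by stratum; negation is a mere test
    against atoms inferred in earlier (lower) strata."""
    true_atoms = set()
    for rules in strata:
        changed = True
        while changed:
            changed = False
            for concl, pos, neg in rules:
                if (concl not in true_atoms
                        and all(h in true_atoms for h in pos)
                        and all(h not in true_atoms for h in neg)):
                    true_atoms.add(concl)
                    changed = True
    return true_atoms

# Stratum 1 defines p; stratum 2 uses "q(x) if not p(x)" grounded over a, b:
strata = [[("p(a)", [], [])],
          [("q(a)", [], ["p(a)"]), ("q(b)", [], ["p(b)"])]]
# q(a) fails because p(a) is true; q(b) succeeds because p(b) was never inferred.
print(sorted(eval_stratified(strata)))  # ['p(a)', 'q(b)']
```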

Rules with unrestricted negation.

Consider rules with unrestricted negation in the hypotheses, where a predicate may cyclically depend on its own negation, e.g., “p(x) if not p(x)”. Now the value of a negated assertion needs to be established before all facts of the negated predicate have been inferred. In particular,

• There may not be a unique T or F value for each assertion. For example, given only the rule “p(x) if not p(x)”, p(a) cannot be T, because inferring it by following the rule would require it to be F; and it cannot be F, because that would lead to it being T by following the rule. That is, there may not be a unique 2-valued model.

In technical terms, the negation may be non-stratified. There are two best solutions to this that generalize a unique 2-valued model: a unique 3-valued model and a set of 2-valued models, as in the dominant well-founded semantics (WFS) and stable model semantics (SMS), respectively.

In a unique 3-valued model, when a unique T or F value cannot be established for an assertion, a third value, undefined (U), is used. For example, given only the rule “p(x) if not p(x)”, p(a) is U, in both WFS and founded semantics.

• With the semantics being 3-valued, when one cannot infer that an assertion is T, one should be able to express whether it is F or U when there is a choice. For example, given only the rule “p(x) if p(x)”, p(a) is not T, so p(a) may in general be F or U.

• WFS requires that such an assertion be F, even though common sense generally says that it is U, expressing ignorance. WFS attempts to be the same as in the 2-valued case, even though one is now in a 3-valued situation.

• Founded semantics supports both, allowing one to choose explicitly when there is a choice. Founded semantics is more expressive by supporting the choice. It is also more intuitive by supporting the common-sense choice for expressing ignorance.

For a set of 2-valued models, similar considerations motivate our constraint semantics. In particular, given only the rule “p(x) if not p(x)”, the semantics is the empty set, in both SMS and constraint semantics, because no model can contain p(a) or not p(a) for any a, since p(a) can be neither T nor F, as discussed above. However, given only the rule “p(x) if p(x)”, SMS requires that p(a) be F in all models, while constraint semantics allows the choice of p(a) being F in all models or being T in some models and F in others.

Certain or uncertain.

Founded semantics and constraint semantics first allow a predicate to be declared certain (i.e., each assertion of the predicate has one of two values: T, F) or uncertain (i.e., each assertion of the predicate has one of three values: T, F, U) when there is a choice. If a predicate is defined (as conclusions of rules) with use of non-stratified negation, then it must be declared uncertain, because it might not have a unique 2-valued model. Otherwise, it may be declared certain or uncertain.

• For a certain predicate, everything T must be given or inferred by following the rules, and the rest are F, in both founded semantics and constraint semantics.

• For an uncertain predicate, everything T or F must be given or inferred, and the rest are U in founded semantics. Constraint semantics then extends everything U to be combinations of T and F that satisfy all the rules and facts as constraints.

Complete or not.

Founded semantics and constraint semantics then allow an uncertain predicate that is in the conclusion of a rule to be declared complete, i.e., all rules with that predicate in the conclusion are given.

• If a predicate is complete, then completion rules are added to define the negation of the predicate explicitly, using the negation of the hypotheses of all given rules and facts of that predicate.

• Completion rules, if any, and given rules are used together to infer everything T and F. The rest are U in founded semantics, or are combinations of T and F in constraint semantics, as described above.

Closed or not.

Finally, founded semantics and constraint semantics can be extended to allow an uncertain, complete predicate to be declared closed, i.e., an assertion of the predicate is made F, called self-false, if inferring it to be T (respectively F) using the given rules and facts requires assuming it to be T (respectively F).

• Determining self-false assertions is similar to determining unfounded sets in WFS. Repeatedly computing founded semantics and self-false assertions until a least fixed point is reached yields WFS.

• Among the combinations of T and F values for assertions with U values in WFS, removing each combination that has self-false assertions that are not already F in that combination yields SMS.

Correspondence to prior semantics, more on motivation.

Table 1 summarizes the corresponding declarations for different assumptions under prior semantics; formal definitions and proofs for these and for additional relationships appear in the following sections. Founded semantics and constraint semantics allow additional combinations of declarations beyond those in the table.

Some observations from the table may help one better understand founded semantics and constraint semantics.

• The top 4 wide rows cover all combinations of allowed declarations (for all predicates).

• Wide row 1 is a special case of wide row 4, because being certain implies being complete and closed. So one could prefer to use only the latter two choices and omit the first choice. However, being certain is uniquely important, both for conceptual simplicity and practical efficiency:

(1)  It covers the vast class of database applications that do not use non-stratified negation, for which stratified semantics is universally accepted. It does not need to be understood by explicitly combining the latter two more sophisticated notions.

(2)  It allows founded semantics to match WFS for all example programs we found in the literature, with predicates being certain when possible and complete otherwise, but without the last, most sophisticated notion of being closed; and the semantics can be computed in linear time.

• Wide rows 2 and 3 allow the assumption about predicates that are uncertain, not complete, or not closed to be made explicitly.

In a sense, WFS uses F for both false and some kinds of ignorance (no knowledge of something must mean it is F), uses T for both true and some kinds of ignorance inferred through negation of F, and uses U for conflict, for remaining kinds of ignorance from F and T, and for imprecision; SMS resolves the ignorance in U, but not the ignorance in F and T. In contrast,

• founded semantics uses T only for true, F only for false, and U for conflict, ignorance, and imprecision;

• constraint semantics further differentiates among conflict, ignorance, and imprecision—corresponding to there being no model, multiple models, and a unique model, respectively, consistent with founded semantics.

After all, any easy-to-understand semantics must be consistent with the T and F values that can be inferred by exactly following the rules and completion rules starting from the given facts.

• Founded semantics is the maximum set of such T and F assertions, as a least fixed point of the given rules and completion rules if any, plus U values for the remaining assertions.

• Constraint semantics is the set of combinations of all T and F assertions that are consistent with founded semantics and satisfy the rules as constraints.

Founded semantics without closed predicates can be computed easily and efficiently, as a least fixed point, instead of an alternating fixed point or iterated fixed point for computing WFS.

3 Language

We first consider Datalog with unrestricted negation in hypotheses. We extend it in Section 7 to allow unrestricted combinations of existential and universal quantifications and other features.

Datalog with unrestricted negation.

A program in the core language is a finite set of rules of the following form, where any Pi may be preceded with ¬, and any Pi and Q over all rules may be declared certain or uncertain, and declared complete or not:

 Q(X1,...,Xa) ← P1(X11,...,X1a1) ∧ ... ∧ Ph(Xh1,...,Xhah) (1)

Symbols ←, ∧, and ¬ indicate backward implication, conjunction, and negation, respectively; h is a natural number, each Pi (respectively Q) is a predicate of a finite number ai (respectively a) of arguments, each Xij and each Xk is either a constant or a variable, and each variable in the arguments of Q must also be in the arguments of some Pi.

If h = 0, there is no Pi or Xij, and each Xk must be a constant, in which case Q(X1,...,Xa) is called a fact. For the rest of the paper, “rule” refers only to the case where h ≥ 1, in which case each Pi(Xi1,...,Xiai) or ¬Pi(Xi1,...,Xiai) is called a hypothesis of the rule, and Q(X1,...,Xa) is called the conclusion of the rule. The set of hypotheses of the rule is called the body of the rule.

A predicate declared certain means that each assertion of the predicate has a unique true (T) or false (F) value. A predicate declared uncertain means that each assertion of the predicate has a unique true, false, or undefined (U) value. A predicate declared complete means that all rules with that predicate in the conclusion are given in the program.

A predicate in the conclusion of a rule is said to be defined using the predicates or their negation in the hypotheses of the rule, and this defined-ness relation is transitive.

• A predicate must be declared uncertain if it is defined transitively using its own negation, or is defined using an uncertain predicate; otherwise, it may be declared certain or uncertain and is by default certain.

• A predicate may be declared complete or not only if it is uncertain and is in the conclusion of a rule, and it is by default complete.

In examples with no explicit specification of declarations, default declarations are used.

Rules of form (1) without negation are captured exactly by Datalog [CGT90, AHV95], a database query language based on the logic programming paradigm. Recursion in Datalog allows queries not expressible in relational algebra or relational calculus. Negation allows more sophisticated logic to be expressed directly. However, unrestricted negation in recursion has been the main challenge in defining the semantics of such a language, e.g., [AB94, Fit02, Tru17], including whether the semantics should be 2-valued or 3-valued, and whether the rules are considered complete or not.

Example. We use win, the win-not-win game, as a running example, with default declarations: move is certain, and win is uncertain and complete. A move from position x to position y is represented by a fact move(x,y). The following rule captures the win-not-win game: a position x is winning if there is a move from x to some position y and y is not winning. Arguments x and y are variables.

    win(x) ← move(x,y) ∧ ¬win(y)


Notations.

In arguments of predicates, we use letter sequences for variables, and use numbers and quoted strings for constants. In presenting the semantics, in particular the completion rules, we use equality and the notations below for existential and universal quantifications, respectively, in the hypotheses of rules, and use negation in the conclusions.

∃ X1, ..., Xn | Y   existential quantification
∀ X1, ..., Xn | Y   universal quantification
(2)

The quantifications return T iff for some or all, respectively, combinations of values of X1, ..., Xn, the value of Boolean expression Y is T. The domain of each quantified variable is the set of all constants in the program.
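Over a finite constant domain, these quantifications correspond directly to Python's any and all when all values involved are 2-valued (a simplification of the 3-valued semantics; the domain and move relation below are illustrative):

```python
domain = ["a", "b"]          # the constants of the program
move = {("a", "b")}          # move(a,b) is the only move fact

# ∃ y | move(x,y) for x = "a": is there a move from "a" to some y?
print(any(("a", y) in move for y in domain))      # True

# ∀ y | ¬move(x,y) for x = "b": does "b" have no move to anywhere?
print(all(("b", y) not in move for y in domain))  # True
```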

4 Formal definition of founded semantics and constraint semantics

Atoms, literals, and projection.

Let π be a program. A predicate is intensional in π if it appears in the conclusion of at least one rule; otherwise, it is extensional. An atom of π is a formula formed by applying a predicate symbol of π to constants of π. A literal of π is an atom of π or the negation of an atom of π; these are called positive literals and negative literals, respectively. A literal and its negation are complements of each other. A set of literals is consistent if it does not contain a literal and its complement. The projection of a program π onto a set S of predicates contains all facts of π for predicates in S and all rules of π whose conclusions contain predicates in S.

Interpretations, ground instances, models, and derivability.

An interpretation of π is a consistent set of literals of π. Interpretations are generally 3-valued: a literal is true (T) in interpretation I if it is in I, is false (F) in I if its complement is in I, and is undefined (U) in I if neither it nor its complement is in I. An interpretation I of π is 2-valued if it contains, for each atom A of π, either A or its complement. An interpretation I is 2-valued for predicate P if, for each atom A of P, I contains A or its complement. Interpretations are ordered by set inclusion ⊆.

A ground instance of a rule R is any rule that can be obtained from R by expanding universal quantifications into conjunctions over all constants in the domain, instantiating existential quantifications with constants, and instantiating the remaining variables with constants. For example, win(1) ← move(1,2) ∧ ¬win(2) is a ground instance of the rule win(x) ← move(x,y) ∧ ¬win(y). An interpretation is a model of a program if it contains all facts in the program and satisfies all rules of the program, interpreted as formulas in 3-valued logic [Fit85], i.e., for each ground instance of each rule, if the body is true, then so is the conclusion. The one-step derivability operator Tπ for program π performs one step of inference using rules of π, starting from a given interpretation. Formally, A ∈ Tπ(I) iff A is a fact of π or there is a ground instance R of a rule of π with conclusion A such that each hypothesis of R is true in interpretation I.

Dependency graph.

The dependency graph of program π is a directed graph with a node for each predicate of π, and an edge from Q to P labeled + (respectively, −) if a rule whose conclusion contains Q has a positive (respectively, negative) hypothesis that contains P. If the node for predicate P is in a cycle containing only positive edges, then P has circular positive dependency in π; if it is in a cycle containing a negative edge, then P has circular negative dependency in π.
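As an illustration, circular negative dependency can be checked by searching the dependency graph for a path from the predicate back to itself that uses at least one negative edge. The rule encoding here is hypothetical: each rule is (conclusion predicate, list of (hypothesis predicate, is_negative)).

```python
def has_circular_negative_dependency(rules, pred):
    """Is pred on a dependency cycle that contains a negative edge?"""
    edges = set()
    for concl, hyps in rules:
        for hyp, negative in hyps:
            edges.add((concl, hyp, negative))
    # Depth-first search over states (node, seen_negative_edge).
    stack, visited = [(pred, False)], set()
    while stack:
        node, neg_seen = stack.pop()
        for c, h, negative in edges:
            if c != node:
                continue
            state = (h, neg_seen or negative)
            if h == pred and state[1]:
                return True          # closed a cycle with a negative edge
            if state not in visited:
                visited.add(state)
                stack.append(state)
    return False

# "win(x) <- move(x,y) and not win(y)": win depends negatively on itself.
rules = [("win", [("move", False), ("win", True)])]
print(has_circular_negative_dependency(rules, "win"))   # True
print(has_circular_negative_dependency(rules, "move"))  # False
```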

Founded semantics.

Intuitively, the founded model of a program π, denoted Founded(π), is the least set of literals that are given as facts or can be inferred by repeated use of the rules. We define Founded(π) = UnNameNeg(LFPbySCC(NameNeg(Cmpl(π)))), where the functions Cmpl, NameNeg, LFPbySCC, and UnNameNeg are defined as follows.

Completion.

The completion function, Cmpl, returns the completed program of π. Formally, Cmpl(π) = AddInv(Combine(π)), where Combine and AddInv are defined as follows.

The function Combine returns the program obtained from π by replacing the facts and rules defining each uncertain complete predicate Q with a single combined rule for Q, defined as follows. Transform the facts and rules defining Q so they all have the same conclusion Q(V1,...,Va), by replacing each fact or rule Q(X1,...,Xa) ← B with Q(V1,...,Va) ← ∃ Y1,...,Yk | V1 = X1 ∧ ... ∧ Va = Xa ∧ B, where V1,...,Va are fresh variables (i.e., not occurring in the given rules defining Q), and Y1,...,Yk are all variables occurring in the original fact or rule. Combine the resulting rules for Q into a single rule defining Q whose body is the disjunction of the bodies of those rules. This combined rule for Q is logically equivalent to the original facts and rules for Q. Similar completion rules are used in Clark completion [Cla87] and Fitting semantics [Fit85].

Example. For the win example, the rule for win becomes the following. For readability, we renamed variables to transform the equality conjuncts into identities and then eliminated them.

    win(x) ← ∃ y | (move(x,y) ∧ ¬win(y))


The function AddInv returns the program obtained from π by adding, for each uncertain complete predicate Q, a completion rule that derives negative literals for Q. The completion rule for Q is obtained from the inverse of the combined rule defining Q (recall that the inverse of Q ← B is ¬Q ← ¬B), by putting the body of the rule in negation normal form, i.e., using identities of predicate logic to move negation inwards and eliminate double negations, so that negation is applied only to atoms.

Example. For the win example, the added rule is

    ¬win(x) ← ∀ y | (¬move(x,y) ∨ win(y))


Least fixed point.

The least fixed point is preceded and followed by functions that introduce and remove, respectively, new predicates representing the negations of the original predicates.

The function NameNeg returns the program obtained from π by replacing each negative literal ¬P(X1,...,Xa) with n.P(X1,...,Xa), where the new predicate n.P represents the negation of predicate P.

Example. For the win example, this yields:

    win(x) ← ∃ y | (move(x,y) ∧ n.win(y))
    n.win(x) ← ∀ y | (n.move(x,y) ∨ win(y))


The function LFPbySCC uses a least fixed point to infer facts for each strongly connected component (SCC) in the dependency graph of π, as follows. Let S1,...,Sn be a list of the SCCs in dependency order, so earlier SCCs do not depend on later ones; it is easy to show that any linearization of the dependency order leads to the same result for LFPbySCC. For convenience, we overload Si to also denote the set of predicates in the SCC.

Define LFPbySCC(π) = In, where I0 is the empty set and, for i from 1 to n, Ii = AddNeg(LFP(Ii−1), Si), where LFP computes the least fixed point, starting from Ii−1, of the one-step derivability operator for the projection of π onto the predicates in S1 ∪ ... ∪ Si. The least fixed point is well-defined, because the one-step derivability function is monotonic, because the program does not contain negation (negative literals were replaced with positive literals of the new n.P predicates). The function AddNeg(I, S) returns the interpretation obtained from interpretation I by adding completion facts for certain predicates in S to I; specifically, for each such predicate P, for each combination of values v1,...,va of arguments of P, if I does not contain P(v1,...,va), then add n.P(v1,...,va).

Example. For the win example, the least fixed point calculation

1. infers n.win(x) for any x that does not have move(x,y) for any y, i.e., has no move to anywhere;

2. infers win(x) for any x that has move(x,y) for some y and n.win(y) has been inferred;

3. infers more n.win(x) for any x such that any y having move(x,y) has win(y);

4. repeatedly does 2 and 3 above until a fixed point is reached.
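The steps above can be sketched as a single joint fixed point over the two completed rules, with n.move(x,y) holding iff move(x,y) is absent (a simplification of the per-SCC computation described above, which coincides with it on this example; constants and moves below are illustrative):

```python
def founded_win(constants, move):
    """Joint least fixed point of the win / n.win rules for the win game."""
    win, n_win = set(), set()
    changed = True
    while changed:
        changed = False
        for x in constants:
            # n.win(x) <- forall y | (n.move(x,y) or win(y))
            if x not in n_win and all((x, y) not in move or y in win
                                      for y in constants):
                n_win.add(x); changed = True
            # win(x) <- exists y | (move(x,y) and n.win(y))
            if x not in win and any((x, y) in move and y in n_win
                                    for y in constants):
                win.add(x); changed = True
    return win, n_win   # positions in neither set are undefined (draw)

# Chain 1 -> 2 -> 3 (3 has no moves) plus the 2-cycle 4 <-> 5:
moves = {(1, 2), (2, 3), (4, 5), (5, 4)}
win, lose = founded_win([1, 2, 3, 4, 5], moves)
# win == {2}, lose == {1, 3}; positions 4 and 5 stay undefined (a draw cycle).
```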

The function UnNameNeg returns the interpretation obtained from interpretation I by replacing each atom n.P(v1,...,va) with ¬P(v1,...,va).

Example. For the win example, positions x for which win(x) is T, F, and U, respectively, in the founded model correspond exactly to the well-known win, lose, and draw positions, respectively. In particular,

1. a losing position is one that either does not have a move to anywhere or has moves only to winning positions;

2. a winning position is one that has a move to a losing position; and

3. a draw position is one not satisfying either case above, i.e., it is in a cycle of moves that do not have a move to a losing position, called a draw cycle, or is a position that has only sequences of moves to positions in draw cycles.

Example. If the running example uses the declaration that move is uncertain, instead of the default of being certain, then founded semantics infers that win is U for all positions.

Constraint semantics.

Constraint semantics is a set of 2-valued models based on founded semantics. A constraint model of π is a consistent 2-valued interpretation M such that M is a model of Cmpl(π) and Founded(π) ⊆ M. Let Constraint(π) denote the set of constraint models of π. Constraint models can be computed from Founded(π) by iterating over all assignments of true and false to atoms that are undefined in Founded(π), and checking which of the resulting interpretations satisfy all rules in Cmpl(π).
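The enumeration just described can be sketched as follows. The atom names and the satisfies check are illustrative, specialized here to the two undefined win atoms of a 2-cycle 4 ↔ 5.

```python
from itertools import product

def constraint_models(founded_true, founded_false, undefined_atoms, satisfies):
    """Try every True/False assignment to the undefined atoms; keep those
    that, combined with the founded values, satisfy the rules."""
    models = []
    for values in product([True, False], repeat=len(undefined_atoms)):
        model = dict.fromkeys(founded_true, True)
        model.update(dict.fromkeys(founded_false, False))
        model.update(zip(undefined_atoms, values))
        if satisfies(model):
            models.append(model)
    return models

# For the 2-cycle 4 <-> 5, the completed rule forces win(4) and win(5)
# to alternate: win(x) <-> (move(x,y) and not win(y)).
def satisfies(m):
    return (m["win(4)"] == (not m["win(5)"])
            and m["win(5)"] == (not m["win(4)"]))

models = constraint_models(set(), set(), ["win(4)", "win(5)"], satisfies)
# Exactly two models: {win(4): True, win(5): False} and its swap.
print(len(models))  # 2
```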

Example. For win, draw positions (i.e., positions for which win is undefined) are in draw cycles, i.e., cycles that do not have a move to an n.win position, or are positions that have only sequences of moves to positions in draw cycles.

1. If some SCC has draw cycles of only odd lengths, then there is no satisfying assignment of T and F to win for positions in the SCC, so there are no constraint models of the program.

2. If some SCC has draw cycles of only even lengths, then there are two satisfying assignments of T and F to win for positions in the SCC, with the truth values alternating between T and F around each cycle, and with the second truth assignment obtained from the first by swapping T and F. The total number of constraint models of the program is exponential in the number of such SCCs.

5 Properties of founded semantics and constraint semantics

Proofs of theorems appear in Appendix C.

Consistency and correctness.

The most important properties are consistency and correctness.

Theorem 1. The founded model and constraint models of a program are consistent.

Theorem 2. The founded model of a program π is a model of π and of Cmpl(π). The constraint models of π are 2-valued models of π and of Cmpl(π).

Same SCC, same certainty.

All predicates in an SCC have the same certainty.

Theorem 3. For every program, for every SCC S in its dependency graph, either all predicates in S are certain, or all of them are uncertain.

Higher-order programming.

Higher-order logic programs, in languages such as HiLog, can be encoded as first-order logic programs by a semantics-preserving transformation that replaces uses of the original predicates with uses of a single predicate holds whose first argument is the name of an original predicate [CKW93]. For example, win(x) is replaced with holds(win,x). This transformation merges a set of predicates into a single predicate, facilitating higher-order programming. We show that founded semantics and constraint semantics are preserved by merging of compatible predicates, defined below, if a simple type system is used to distinguish the constants in the original program from the new constants representing the original predicates.

We extend the language with a simple type system. A type denotes a set of constants. Each predicate has a type signature that specifies the type of each argument. A program is well-typed if, in each rule or fact, (1) each constant belongs to the type of the argument where the constant occurs, and (2) for each variable, all its occurrences are as arguments with the same type. In the semantics, the values of predicate arguments are restricted to the appropriate type.

Predicates of program π are compatible if they are in the same SCC of the dependency graph of π and have the same arity, same type signature, and (if uncertain) same completeness declaration. For a set S of compatible predicates of program π with arity a and type signature (T1,...,Ta), the predicate-merge transformation MergeS transforms π into a program MergeS(π) in which the predicates in S are replaced with a single fresh predicate holds whose first parameter ranges over S, and which has the same completeness declaration as the predicates in S. Each atom A in a rule or fact of π is replaced with m(A), where the function m on atoms is defined by: m(P(X1,...,Xa)) equals holds("P", X1, ..., Xa) if P is in S and equals P(X1,...,Xa) otherwise. We extend m pointwise to a function on sets of atoms and a function on sets of sets of atoms. The predicate-merge transformation introduces S as a new type. The type signature of holds is (S, T1,...,Ta).
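The atom-level part of the transformation can be sketched directly, under an assumed encoding of atoms as (predicate, arguments) pairs:

```python
def merge_atom(atom, merged_preds):
    """Replace an atom of a merged predicate with a holds atom whose
    first argument names the original predicate; leave other atoms as-is."""
    pred, args = atom
    if pred in merged_preds:
        return ("holds", (pred,) + args)
    return atom

S = {"win"}
print(merge_atom(("win", ("a",)), S))       # ('holds', ('win', 'a'))
print(merge_atom(("move", ("a", "b")), S))  # ('move', ('a', 'b'))
```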

Theorem 4. Let S be a set of compatible predicates of program π. Then π and MergeS(π) have the same founded semantics, in the sense that m(Founded(π)) = Founded(MergeS(π)). π and MergeS(π) also have the same constraint semantics, in the sense that m(Constraint(π)) = Constraint(MergeS(π)).

6 Comparison with other semantics

Stratified semantics.

A program has stratified negation if it does not contain predicates with circular negative dependencies. Such a program has a well-known and widely accepted semantics that defines a unique 2-valued model, denoted Stratified(π), as discussed in Section 2.

Theorem 5. For a program π with stratified negation and in which all predicates are certain, Founded(π) = Stratified(π).

First-order logic.

The next theorem relates constraint models with the interpretation of a program as a set of formulas in first-order logic; recall that the definition of a model of a program is based on that interpretation.

Theorem 6. For a program π in which all predicates are uncertain and not complete, the constraint models of π are exactly the 2-valued models of π.

Fitting semantics.

Fitting [Fit85] defines an interpretation to be a model of a program P iff it satisfies a formula we denote Fit(P), which is Fitting’s 3-valued-logic version of the Clark completion of P [Cla87]. Briefly, Fit(P) is the conjunction of two parts: the first is the conjunction of formulas corresponding to the combined rules introduced by completion, except with the implication replaced with ≡ (which is called “complete equivalence” and means “same truth value”); the second is the conjunction of formulas stating that predicates not used in any fact or in the conclusion of any rule are false for all arguments. The Fitting model of a program P, denoted Fitting(P), is the least model of Fit(P) [Fit85].

Theorem 7. For a program P in which all extensional predicates are certain, and all intensional predicates are uncertain and complete, Founded(P) = Fitting(P).

Founded semantics for some declarations is less defined than or equal to Fitting semantics, as stated in the following theorem. A simple program for which the inclusion is strict, as in part (b) of the theorem, is program 6 in Table 2, which has only the rule q ← p, with both predicates uncertain and complete: founded semantics leaves both p and q undefined, while the Fitting model makes both false (p is not used in any fact or conclusion).

Theorem 8. (a) For a program P in which all intensional predicates are uncertain and complete, Founded(P) ⊆ Fitting(P). (b) If, furthermore, some extensional predicate is uncertain, and some positive literal for some uncertain extensional predicate does not appear in P, then Founded(P) ⊂ Fitting(P).

Founded semantics for default declarations is at least as defined as Fitting semantics, as stated in the following theorem. A simple program for which the inclusion is strict, as in part (b) of the theorem, is program 3 in Table 2, which has only the rule q ← q: the Fitting model leaves q undefined, while founded semantics makes q false.

Theorem 9. (a) For a program P in which all predicates have default declarations as certain or uncertain and complete or not, Fitting(P) ⊆ Founded(P). (b) If, furthermore, Fitting(P) is not 2-valued for some certain intensional predicate, then Fitting(P) ⊂ Founded(P).

Well-founded semantics.

The well-founded model of a program P, denoted WFS(P), is the least fixed point of a monotone operator W_P on interpretations, defined as follows [VRS91]. A set U of atoms of a program P is an unfounded set of P with respect to an interpretation I of P iff, for each atom A in U, for each ground instance R of a rule of P with conclusion A, either (1) some hypothesis of R is false in I or (2) some positive hypothesis of R is in U. Intuitively, the atoms in U can be set to false, because each rule whose conclusion is in U either has a hypothesis already known to be false or has a hypothesis in U (which will be set to false). Let U_P(I) be the greatest unfounded set of program P with respect to interpretation I. For a set A of atoms, let ¬A denote the set containing the negations of those atoms. W_P is defined by W_P(I) = T_P(I) ∪ ¬U_P(I), where T_P is the one-step derivability operator. The well-founded model satisfies Fit(P), so Fitting(P) ⊆ WFS(P) for all programs P [VRS91].
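The definitions above can be sketched for finite propositional programs; the following is an illustrative implementation (not from the paper), with each rule represented as a triple (conclusion, positive hypotheses, negative hypotheses):

```python
# Illustrative sketch (not from the paper) of the well-founded model of a
# propositional program.  A rule is (conclusion, pos_hyps, neg_hyps);
# a fact is a rule with empty hypothesis lists.

def one_step(rules, true, false):
    # T_P: conclusions of rules whose hypotheses hold in (true, false)
    return {c for c, pos, neg in rules
            if all(p in true for p in pos) and all(n in false for n in neg)}

def greatest_unfounded(rules, atoms, true, false):
    # U_P: start from all non-true atoms and shrink; an atom leaves the set
    # if some rule for it has no false hypothesis and no positive hypothesis
    # still in the set
    u = set(atoms) - true
    changed = True
    while changed:
        changed = False
        for c, pos, neg in rules:
            if c in u:
                falsified = any(p in false for p in pos) or any(n in true for n in neg)
                if not falsified and not any(p in u for p in pos):
                    u.discard(c)
                    changed = True
    return u

def well_founded(rules, atoms):
    # iterate W_P(I) = T_P(I) union the negations of U_P(I) to a fixed point
    true, false = set(), set()
    while True:
        t = one_step(rules, true, false)
        u = greatest_unfounded(rules, atoms, true, false)
        if (t, u) == (true, false):
            return true, false   # atoms in neither set are undefined
        true, false = t, u

# program 8 of Table 2, q <- q /\ not q: the well-founded model makes q false
print(well_founded([("q", ["q"], ["q"])], {"q"}))
```

Running the sketch on program 8 shows q ending up false, matching the WFS column of Table 2.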

Theorem 10. For every program P, Founded(P) ⊆ WFS(P).

One might conjecture that Founded(P) = WFS(P) for propositional programs. However, this is false, as program 8 in Table 2 in Appendix A shows.

Supported models.

Supported model semantics of a logic program P is a set of 2-valued models. An interpretation I is a supported model of P if I is 2-valued and is a fixed point of the one-step derivability operator T_P [ABW88]. Let Supported(P) denote the set of supported models of P. Supported models, unlike Fitting semantics and WFS, allow atoms to be set to true when they have circular positive dependency.
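For a finite propositional program, the supported models can be enumerated directly as the 2-valued fixed points of one-step derivability; a small sketch (not from the paper), with rules as (conclusion, positive hypotheses, negative hypotheses) triples:

```python
from itertools import chain, combinations

# sketch: supported models of a propositional program as the 2-valued fixed
# points of one-step derivability; a model is the set of atoms it makes true,
# and a negative hypothesis holds iff its atom is absent from the model
def one_step(rules, model):
    return {c for c, pos, neg in rules
            if all(p in model for p in pos) and all(n not in model for n in neg)}

def supported_models(rules, atoms):
    atoms = sorted(atoms)
    candidates = chain.from_iterable(combinations(atoms, r)
                                     for r in range(len(atoms) + 1))
    return [set(m) for m in candidates if one_step(rules, set(m)) == set(m)]

# program 3 of Table 2, q <- q: both {} and {q} are supported models,
# showing how a circular positive dependency can make an atom true
print(supported_models([("q", ["q"], [])], {"q"}))
```

The exhaustive enumeration is exponential in the number of atoms, which is acceptable here only for illustration.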

The following three theorems relating constraint semantics with supported model semantics are analogous to the three theorems relating founded semantics with Fitting semantics. The inclusion in Theorem 12 is strict for the program q ← p described above: its only supported model makes both p and q false, while its constraint semantics also contains the model in which both are true. The inclusion in Theorem 13 is strict for the program q ← q described above: its only constraint model makes q false, while its supported models also include the one in which q is true.

Theorem 11. For a program P in which all extensional predicates are certain, and all intensional predicates are uncertain and complete, Constraint(P) = Supported(P).

Theorem 12. For a program P in which all intensional predicates are uncertain and complete, Supported(P) ⊆ Constraint(P).

Theorem 13. For a program P in which all predicates have default declarations as certain or uncertain and complete or not, Constraint(P) ⊆ Supported(P).

Stable models.

Gelfond and Lifschitz define stable model semantics (SMS) of logic programs [GL88]. They define the stable models of a program to be the 2-valued interpretations of the program that are fixed points of a particular transformation. Van Gelder et al. proved that the stable models of a program P are exactly the 2-valued fixed points of the operator W_P described above [VRS91, Theorem 5.4]. Let SMS(P) denote the set of stable models of P. The inclusion in Theorem 14 is strict for program 7 in Table 2, which has the two rules q ← q and q ← ¬q: this program has no stable model, while its constraint semantics contains the model in which q is true.

Theorem 14. For a program P in which all predicates have default declarations as certain or uncertain, SMS(P) ⊆ Constraint(P).

Example. For the win example with default declarations, Fitting semantics and WFS are the same as founded semantics in Section 4, and supported model semantics and SMS are the same as constraint semantics in Section 4. Additional examples can be found in Appendix B.

7 Computational complexity and extensions

Computing founded semantics and constraint semantics.

Theorem 15. Founded semantics can be computed in time linear in the size of the ground program.

Proof. First ground all given rules, using any grounding. Then add completion rules, if any, by adding an inverse rule for each group of the grounded given rules that have the same conclusion, yielding ground completion rules of the same asymptotic size as the grounded given rules. Now compute the least fixed point for each SCC of the resulting ground rules using a previous method [LS09]: introduce a new intermediate predicate and rule for each conjunction and disjunction in the rules, yielding a new set of rules of the same asymptotic size; each resulting rule incurs at most one rule firing, because there are no variables in the rule, and each firing takes worst-case constant time. Thus, the total time is worst-case linear in the size of the ground completion rules and of the grounded given rules.
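The counting technique behind this bound can be sketched for ground rules with positive hypotheses, in the style of Dowling and Gallier's linear-time Horn inference; this is an illustrative sketch, not the method of [LS09]:

```python
from collections import defaultdict, deque

# sketch: least fixed point of ground positive rules in linear time.
# Each rule is (conclusion, [hypotheses]); a counter per rule tracks the
# hypotheses not yet derived, so each rule fires at most once.
def least_fixed_point(rules, facts):
    remaining = [len(hyps) for _, hyps in rules]
    watchers = defaultdict(list)            # atom -> indices of rules using it
    for i, (_, hyps) in enumerate(rules):
        for h in hyps:
            watchers[h].append(i)
    derived = set(facts)
    queue = deque(derived)
    for i, (c, hyps) in enumerate(rules):   # rules with no hypotheses fire now
        if remaining[i] == 0 and c not in derived:
            derived.add(c)
            queue.append(c)
    while queue:
        a = queue.popleft()
        for i in watchers[a]:
            remaining[i] -= 1
            if remaining[i] == 0:
                c = rules[i][0]
                if c not in derived:
                    derived.add(c)
                    queue.append(c)
    return derived

rules = [("b", ["a"]), ("c", ["a", "b"]), ("d", ["e"])]
print(least_fixed_point(rules, {"a"}))
```

Each atom is enqueued at most once and each rule occurrence is decremented at most once per hypothesis, giving time linear in the total size of the ground rules.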

The size of the ground program is polynomial in the size n of the input data, i.e., the given facts, because each variable in each rule can be instantiated in at most n ways (because the domain size is at most n), there is a fixed number of variables in each rule, and the given rules have fixed size. Precisely, the size of the ground program is in the worst case O(s × n^v), where v is the maximum number of variables in a rule and s is the size of the given rules.

Computing constraint semantics may take exponential time in the size of the input data, because in the worst case, all assertions of all predicates may have undefined values in founded semantics, and there is an exponential number of combinations of true and false values for all such assertions, where each combination may be checked for whether it satisfies the constraints imposed by all rules.

These complexity analyses also apply to the extensions below, except that computing founded semantics with closed predicates may take quadratic time in the size of the ground program, because of the repeated computation of founded semantics and self-false assertions.

Closed predicate assumption.

We can extend the language to support declaring uncertain complete predicates as closed. Informally, this means that an atom A of such a predicate is false in an interpretation I, called self-false in I, if every ground instance of rules that concludes A, or recursively concludes some hypothesis of such a rule instance, has a hypothesis that is false or, recursively, is self-false in I. Self-false atoms are elements of unfounded sets.

Formally, the set of self-false atoms of program P with respect to interpretation I is defined in the same way as the greatest unfounded set of P with respect to I, except replacing “some positive hypothesis of R is in U” with “some positive hypothesis of R for a closed predicate is in U”. The founded semantics of this extended language is defined by repeatedly computing the semantics as per Section 4 and then setting self-false atoms to false, until a least fixed point is reached; formally, it is the least fixed point of the function that computes the semantics as in Section 4 and then adds the negations of the self-false atoms.

The constraint semantics for this extended language includes only interpretations containing the negative literals required by the closed declarations. A constraint model of a program P with closed declarations is a consistent 2-valued interpretation M such that M is a model of the completed program, M extends the founded model of P, and every atom that is self-false with respect to M is false in M. Let Constraint(P) denote the set of constraint models of P.

The next theorem states that changing predicate declarations from uncertain, complete, and closed to certain, or vice versa, preserves founded and constraint semantics. Theorem 5 implies that this change needs to be made for all predicates in an SCC.

Theorem 16. Let P be a program. Let C be an SCC in its dependence graph containing only predicates that are uncertain, complete, and closed and that can be declared certain, i.e., all SCCs that precede C in dependency order contain only certain predicates, and predicates in C do not have circular negative dependency. Let P′ be the program obtained from P by changing the declarations of the predicates in C from uncertain to certain. Then P and P′ have the same founded semantics and the same constraint semantics.

Theorem 17. For a program P in which every uncertain predicate is complete and closed, Founded(P) = WFS(P).

Theorem 18. For a program P in which every uncertain predicate is complete and closed, Constraint(P) = SMS(P).

Note, however, that founded semantics for default declarations (certain when possible, and otherwise uncertain and complete) allows the number of repetitions for computing self-false atoms to be greatly reduced, even to zero, compared with WFS, which does repeated computation of unfounded sets.

In all examples we have found in the literature, and all natural examples we have been able to think of, founded semantics for default declarations, without the closed predicate assumption, infers the same result as WFS. While founded semantics computes a single least fixed point without the outer repetition and is worst-case linear time, WFS computes an alternating fixed point or iterated fixed point and is worst-case quadratic. In fact, we have not found any natural example showing that an actual quadratic-time alternating or iterated fixed point for computing WFS is needed. (Even a contrived example that demonstrates the worst-case quadratic-time computation of WFS has been challenging to find. For example, the quadratic-time example in [Zuk01] turns out to be linear in XSB; after significant effort between us and Warren, we found a much more sophisticated one that appears to work, but a remaining bug in XSB makes the correctness of its computation unclear.)

Unrestricted existential and universal quantifications in hypotheses.

We extend the language to allow unrestricted combinations of existential and universal quantifications as well as negation, conjunction, and disjunction in hypotheses. The domain of each quantified variable is the set of all constants in the program.

Example. For the win example, the following two rules may be given instead:

    win(x) $$\leftarrow$$ $$\exists$$ y | move(x,y) $$\land$$ lose(y)
lose(x) $$\leftarrow$$ $$\forall$$ y | $$\neg$$ move(x,y) $$\lor$$ win(y)


The semantics in Section 4 is easily extended to accommodate this extension: these constructs simply need to be interpreted, using their 3-valued-logic semantics [Fit85], when defining one-step derivability. The preceding theorems hold for this extended language. The other semantics discussed above are not defined for this extension, so we do not have theorems relating to them.
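The needed interpretation of the new constructs is just Kleene's 3-valued logic; a minimal sketch (not from the paper), using None for undefined:

```python
# sketch: Kleene 3-valued connectives and quantifiers, with None as undefined
def k_not(v):
    return None if v is None else (not v)

def k_exists(values):
    # existential quantification over a finite domain of truth values
    values = list(values)
    if any(v is True for v in values):
        return True
    if all(v is False for v in values):
        return False
    return None            # no witness yet, but some value still undefined

def k_forall(values):
    return k_not(k_exists(k_not(v) for v in values))

# in the win/lose rules above, with move(a,b) true and win(b) undefined,
# the hypothesis of lose(a) is: forall y | not move(a,y) \/ win(y)
body_for_b = k_exists([k_not(True), None])   # disjunction as exists over the disjuncts
print(k_forall([body_for_b]))                # lose(a) comes out undefined
```

A quantified hypothesis contributes to one-step derivability only when it evaluates to True under these rules, and to the completion rule only when it evaluates to False.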

Negation in facts and conclusions.

We extend the language to allow negation in given facts and in conclusions of given rules; such facts and rules are said to be negative. The Yale shooting example in Appendix B is a simple example.

The definition of founded semantics applies directly to this extension, because it already introduces and handles negative rules, and it already infers and handles negative facts. Note that the completion step combines only positive facts and positive rules to form combined rules; negative facts and negative rules are copied unchanged into the completed program.

With this extension, a program and hence its founded model may be inconsistent; for example, a program could contain or imply both p and ¬p. Thus, consistency of the founded model is not guaranteed for such programs. When the founded model is inconsistent, the inconsistent literals in it can easily be reported to the user. When the founded model is consistent, the definition of constraint semantics applies directly, and the preceding theorems hold. The other semantics discussed above are not defined for this extended language, so we do not have theorems relating to them.

8 Related work and conclusion

There is a large literature on logic language semantics. Several overview articles [AB94, Prz94, RU95, Fit02, Tru17] give a good sense of the challenges posed by unrestricted negation. We discuss the major related work.

Clark [Cla87] describes completion of logic programs to give a semantics for negation as failure. Numerous others, e.g., [LT84, ST84, JLM86, Cha88, FRTW88, Stu91], describe similar additions. Fitting [Fit85] presents a semantics, called Fitting semantics or Kripke-Kleene semantics, that aims to give a least 3-valued model. Apt et al. [ABW88] define supported model semantics, which is a set of 2-valued models; the models correspond to extensions of the Fitting model. Apt et al. [ABW88] and Van Gelder [VG89] introduce stratified semantics. WFS [VRS91, VG93] also gives a 3-valued model but aims to maximize false values. SMS [GL88] also gives a set of 2-valued models and aims to maximize false values. Other formalisms and semantics include partial stable models, also called stationary models [Prz94], and FO(ID), for first-order logic with inductive definitions [DT08]. There are also many studies that relate different semantics, e.g., [Dun92].

Our founded semantics, which extends to constraint semantics, is unique in that it allows predicates to be specified as certain or uncertain, as complete or not, and as closed or not. These choices clearly and explicitly capture the different assumptions one can have about the predicates, rules, and reasoning, including the well-known closed-world assumption vs. open-world assumption—i.e., whether or not all rules and facts about a predicate are given in the program—and allow both to co-exist naturally. These choices make our new semantics more expressive and intuitive. Instead of using many separate semantics, one just needs to make the assumptions explicit; the same underlying logic is used for inference. In this way, founded semantics and constraint semantics unify the core of different semantics.

In addition, founded semantics and constraint semantics are completely declarative and easy to understand, as a least fixed point and as constraint satisfaction, respectively. Our default declarations without closed predicates lead to the same semantics as WFS and SMS for all natural examples we have found. Additionally, founded semantics without closed predicates can be computed in linear time in the size of the ground program, as opposed to quadratic time for WFS.

There are many directions for future study, including additional relationships with prior semantics, further extensions, efficient implementations, and many applications.

References

• [AB94] Krzysztof R. Apt and Roland N. Bol. Logic programming and negation: A survey. Journal of Logic Programming, 19:9–71, 1994.
• [ABW88] Krzysztof R. Apt, Howard A. Blair, and Adrian Walker. Towards a theory of declarative knowledge. In Foundations of Deductive Databases and Logic Programming, pages 89–148. Morgan Kaufman, 1988.
• [AHV95] Serge Abiteboul, Richard Hull, and Victor Vianu. Foundations of Databases: The Logical Level. Addison-Wesley, 1995.
• [CGT90] Stefano Ceri, Georg Gottlob, and Letizia Tanca. Logic Programming and Databases. Springer, 1990.
• [Cha88] David Chan. Constructive negation based on the completed database. In Proc. of the 5th Intl. Conf. and Symp. on Logic Programming, pages 111–125. MIT Press, 1988.
• [CKW93] Weidong Chen, Michael Kifer, and David S. Warren. HiLog: A foundation for higher-order logic programming. Journal of Logic Programming, 15(3):187–230, 1993.
• [Cla87] Keith L. Clark. Negation as failure. In H. Gallaire and J. Minker, editors, Logic and Databases, pages 293–322. Plenum Press, Apr. 1987.
• [DT08] M. Denecker and E. Ternovska. A logic of nonmonotone inductive definitions. ACM Transactions on Computational Logic, 9(2):14, 2008.
• [Dun92] Phan Minh Dung. On the relations between stable and well-founded semantics of logic programs. Theoretical Computer Science, 105(1):7–25, 1992.
• [Fit85] Melvin Fitting. A Kripke-Kleene semantics for logic programs. Journal of Logic Programming, 2(4):295–312, 1985.
• [Fit02] Melvin Fitting. Fixpoint semantics for logic programming: A survey. Theoretical Computer Science, 278(1):25–51, 2002.
• [FRTW88] Norman Y. Foo, Anand S. Rao, Andrew Taylor, and Adrian Walker. Deduced relevant types and constructive negation. In Proc. of the 5th Intl. Conf. and Symp. on Logic Programming, pages 126–139, 1988.
• [GL88] Michael Gelfond and Vladimir Lifschitz. The stable model semantics for logic programming. In Proc. of the 5th Intl. Conf. and Symp. on Logic Programming, pages 1070–1080. MIT Press, 1988.
• [JLM86] Joxan Jaffar, Jean-Louis Lassez, and Maher J. Maher. Some issues and trends in the semantics of logic programming. In Proc. on 3rd Intl. Conf. on Logic Programming, pages 223–241. Springer, 1986.
• [LS09] Yanhong A. Liu and Scott D. Stoller. From Datalog rules to efficient programs with time and space guarantees. ACM Transactions on Programming Languages and Systems, 31(6):1–38, 2009.
• [LT84] John W. Lloyd and Rodney W. Topor. Making Prolog more expressive. Journal of Logic Programming, 1(3):225–240, 1984.
• [Prz94] T.C. Przymusinski. Well-founded and stationary models of logic programs. Annals of Mathematics and Artificial Intelligence, 12(3):141–187, 1994.
• [RU95] Raghu Ramakrishnan and Jeffrey D Ullman. A survey of deductive database systems. Journal of Logic Programming, 23(2):125–149, 1995.
• [ST84] Taisuke Sato and Hisao Tamaki. Transformational logic program synthesis. In Proceedings of the International Conference on Fifth Generation Computer Systems, pages 195–201, 1984.
• [Stu91] Peter J Stuckey. Constructive negation for constraint logic programming. In Proceedings of the 6th Annual IEEE Symposium on Logic in Computer Science, pages 328–339, 1991.
• [Tru17] Mirek Truszczynski. An introduction to the stable and the well-founded semantics of logic programs. In Michael Kifer and Yanhong A. Liu, editors, Declarative Logic Programming: Theory, Systems, and Applications. ACM and Morgan & Claypool, 2017. Expected.
• [VG89] Allen Van Gelder. Negation as failure using tight derivations for general logic programs. Journal of Logic Programming, 6(1):109–133, 1989.
• [VG93] Allen Van Gelder. The alternating fixpoint of logic programs with negation. Journal of Computer and System Sciences, 47(1):185–221, 1993.
• [VRS91] Allen Van Gelder, Kenneth Ross, and John S. Schlipf. The well-founded semantics for general logic programs. Journal of the ACM, 38(3):620–650, 1991.
• [Zuk01] Ulrich Zukowski. Flexible Computation of the Well-Founded Semantics of Normal Logic Programs. PhD thesis, Faculty of Computer Science and Mathematics, University of Passau, 2001.

Appendix A Comparison of semantics for well-known small examples and more

Table 2 shows well-known example rules, plus more rules for tricky boundary cases in the semantics, where every uncertain predicate that appears in a conclusion is declared complete but not closed, and shows the different semantics for them.

• Programs 1 and 2 contain only negative cycles. All three of Founded, WFS, and Fitting agree. All three of Constraint, SMS, and Supported agree.

• Programs 3 and 4 contain only positive cycles. Founded for certain agrees with WFS; Founded for uncertain agrees with Fitting. Constraint for certain agrees with SMS; Constraint for uncertain agrees with Supported.

• Programs 5 and 6 contain no cycles. Founded for certain agrees with WFS and Fitting; Founded for uncertain has more undefined. Constraint for certain agrees with SMS and Supported; Constraint for uncertain has more models.

• Programs 7 and 8 contain both negative and positive cycles. For program 7, where the hypotheses q and ¬q are disjunctive, all three of Founded, WFS, and Fitting agree; Constraint and Supported agree, but SMS has no model. For program 8, where the hypotheses q and ¬q are conjunctive, Founded and Fitting agree, but WFS has q being false; all three of Constraint, SMS, and Supported agree.

For all 8 programs, with default complete but not closed predicates, we have the following:

• If all predicates are the default certain or uncertain, then Founded agrees with WFS, and Constraint agrees with SMS, with one exception for each:

(1) Program 7 concludes q whether q is true or false, so SMS having no model is an extreme outlier among all 6 semantics and is not consistent with common sense.

(2) Program 8 concludes q only if q is both true and false, which is impossible, so Founded semantics with q being undefined is imprecise, but Constraint has q being false. WFS has q being false because it uses false for ignorance.

• If predicates not in any conclusion are certain (not shown in Table 2, but this matters only for p in programs 5 and 6), and other predicates are uncertain, then Founded equals Fitting, and Constraint equals Supported, as captured in Theorems 7 and 11, respectively.

• If all predicates are uncertain, then Founded has all values being undefined, capturing the well-known unclear situations in all these programs, and Constraint gives all different models except for programs 2 and 5, and programs 4 and 6, which are pair-wise equivalent under completion, capturing exactly the differences among all these programs.

Finally, if all predicates in these programs are not complete, then Founded and Constraint are the same as in Table 2 except that Constraint for uncertain becomes equivalent to the models of the programs as formulas in first-order logic: programs 1 and 8 have an additional model, {q}, program 6 has an additional model, {¬p, q}, and programs 2 and 5 have an additional model, {p, q}.

Appendix B Additional examples

We discuss the semantics of some well-known examples.

Graph reachability.

A source vertex x is represented by a fact source(x). An edge from a vertex x to a vertex y is represented by a fact edge(x,y). The following two rules capture graph reachability, i.e., the set of vertices reachable from source vertices by following edges.

    reach(x) $$\leftarrow$$ source(x)
reach(y) $$\leftarrow$$ edge(x,y) $$\land$$ reach(x)

In the dependency graph, each predicate is in a separate SCC, and the SCC for reach is ordered after the other two. There is no negation in this program.

With the default declaration of predicates being certain, no completion rules are added. The least fixed point computation for founded semantics infers reach to be true for all vertices that are source vertices or are reachable from source vertices by following edges, as desired. For the remaining vertices, reach is false. This is the same as WFS.

If reach is declared uncertain and complete, but not closed, then after completion, we obtain

    reach(x) $$\leftarrow$$ source(x) $$\lor$$
($$\exists$$ y | (edge(y,x) $$\land$$ reach(y)))
n.reach(x) $$\leftarrow$$ n.source(x) $$\land$$
($$\forall$$ y | (n.edge(y,x) $$\lor$$ n.reach(y)))

The least fixed point computation for founded semantics infers reach to be true for all reachable vertices, as when predicates are certain, and infers reach to be false for all vertices that are not source vertices and that have no in-coming edges at all or have in-coming edges only from vertices for which reach is false. For the remaining vertices, i.e., those that are not reachable from the source vertices but are in cycles of edges, reach is undefined. This is the same as in Fitting semantics.
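The two inferences just described can be sketched as two fixed-point passes (an illustrative sketch, not the paper's algorithm): a positive pass follows the combined rule, and a negative pass follows the completion rule, leaving vertices on unreachable cycles undefined:

```python
# sketch of founded semantics for reach with reach uncertain and complete:
# a positive pass derives the true atoms; a negative pass uses the completion
# rule to derive the false atoms; the rest (unreachable cycles) stay undefined
def founded_reach(vertices, edges, sources):
    true = set(sources)
    changed = True
    while changed:                        # reach(y) <- edge(x,y) /\ reach(x)
        changed = False
        for x, y in edges:
            if x in true and y not in true:
                true.add(y)
                changed = True
    false = set()
    changed = True
    while changed:                        # completion: not a source, and all
        changed = False                   # in-edges come from false vertices
        for v in set(vertices) - true - false - set(sources):
            if all(x in false for x, y in edges if y == v):
                false.add(v)
                changed = True
    return true, false

vs = {"a", "b", "c", "d", "f"}
es = [("a", "b"), ("c", "d"), ("d", "c")]
print(founded_reach(vs, es, {"a"}))
```

With source a, the sketch marks a and b true, the isolated vertex f false, and leaves the unreachable cycle c, d undefined, matching the discussion above.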

Barber paradox.

Russell’s paradox is illustrated by the barber paradox. The barber is a man who shaves all those men, and only those men, who do not shave themselves, as specified below. The question is: Does the barber shave himself? That is: What is the truth value of shave(’barber’, ’barber’)?

    shave(’barber’,x) $$\leftarrow$$ man(x) $$\land$$ $$\neg$$ shave(x,x)
man(’barber’)


Since shave is defined using its own negation, it is uncertain. With the default declaration that shave is complete, but not closed, the completion step adds the rule

    $$\neg$$ shave(’barber’,x) $$\leftarrow$$ $$\neg$$ man(x) $$\lor$$ shave(x,x)

The completed program, after eliminating negation, is
    shave(’barber’,x) $$\leftarrow$$ man(x) $$\land$$ n.shave(x,x)
man(’barber’)
n.shave(’barber’,x) $$\leftarrow$$ n.man(x) $$\lor$$ shave(x,x)

The least fixed point computation for founded semantics infers no true or false facts of shave, so shave(’barber’,’barber’) is undefined. Constraint semantics has no model. These results correspond to WFS and SMS, respectively. All confirm the paradox.

Additionally, if there are other men besides the barber, then founded semantics will also infer shave(’barber’,x) to be true for every man x except ’barber’, and shave(x,y) to be false for every man x except ’barber’ and every man y, confirming that only shave(’barber’,’barber’) is undefined. Constraint semantics has no model. These results again correspond to WFS and SMS, respectively.

Even numbers.

In this example, even numbers are defined by the predicate even, and natural numbers in order are given using the predicate succ.

    even(n) $$\leftarrow$$ succ(m,n) $$\land$$ $$\neg$$ even(m)
even(0)
succ(0,1)
succ(1,2)
succ(2,3)

With default declarations, founded semantics infers that even(1) is false, even(2) is true, and even(3) is false. Constraint semantics is the same. These results are the same as WFS and SMS, respectively.
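This inference can be replayed with a small sketch (illustrative, not from the paper) that alternates the given rule and its completion rule over the succ facts:

```python
# sketch: founded semantics of the even program, with even uncertain/complete;
# rule even(n) <- succ(m,n) /\ not even(m), fact even(0), and the succ facts
succ = {(0, 1), (1, 2), (2, 3)}

def founded_even(numbers):
    true, false = {0}, set()              # even(0) is a given fact
    changed = True
    while changed:
        changed = False
        for n in numbers - true - false:
            preds = [m for m, k in succ if k == n]
            if any(m in false for m in preds):
                true.add(n)               # the given rule fires
                changed = True
            elif all(m in true for m in preds):
                false.add(n)              # the completion rule fires
                changed = True
    return true, false

print(founded_even({0, 1, 2, 3}))
```

The sketch derives even(2) true and even(1), even(3) false, as stated above; nothing is left undefined because the succ chain is acyclic.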

Yale shooting.

This example is about whether a turkey is alive, based on some facts and rules, given below, about whether and when the gun is loaded. It uses the extension that allows negative facts and negative conclusions.

    alive(0)
$$\neg$$ loaded(0)
loaded(1) $$\leftarrow$$ $$T$$
$$\neg$$ alive(3) $$\leftarrow$$ loaded(2)

Both predicates are declared uncertain and not complete. In the dependency graph, there are two SCCs: one with loaded and one with alive, and the former is ordered before the latter. Founded semantics infers that loaded(0) is false, loaded(1) is true, loaded(2) and loaded(3) are undefined, alive(0) is true, and alive(1), alive(2), and alive(3) are undefined. Constraint semantics has multiple models, some in which loaded(2) is true and alive(3) is false, and some in which loaded(2) is false and alive(3) is true. Both confirm the well-known outcome.

Variant of Yale shooting.

This is a variant of the Yale shooting problem, copied from [VRS91]:

    noise(T) $$\leftarrow$$ loaded(T) $$\land$$ shoots(T).
loaded(T) $$\leftarrow$$ succ(S,T) $$\land$$ loaded(S) $$\land$$ $$\neg$$ shoots(S).
shoots(T) $$\leftarrow$$ triggers(T).
triggers(1).
succ(0,1).

There is no circular negative dependency, so all predicates are certain by default, and no completion rules are added. Founded semantics and constraint semantics both yield: triggers(1) and shoots(1) are true, and triggers(0), shoots(0), noise(0), noise(1), loaded(0), and loaded(1) are false. This is the same as WFS, Fitting semantics, SMS, and supported models.

Appendix C Proofs

Proof of Theorem 5. First we show the founded model is consistent. A given program cannot contain negative facts or negative conclusions, so all negative literals in the founded model are added by the construction. For a predicate declared uncertain and not complete, no negative literals are added. For a predicate p declared uncertain and complete, consistency follows from the fact that the only rule defining n.p in the completed program is the inverse of the only rule defining p in it. The body of the former rule is the negation of the body of the latter rule. Monotonicity of the least fixed point computation implies that the truth value of a ground instance of the body cannot change from true to false, or vice versa, during the fixed point calculation for the SCC containing p. Using this observation, it is easy to show by induction on the number of iterations of the fixed point calculation for that SCC that an atom for p and its negation cannot both be added to the interpretation. For a certain predicate, consistency follows from the fact that the negative-inference step adds only literals whose complement is not in the interpretation. Constraint models are consistent by definition.

Proof of Theorem 5. First we show that the founded model is a model of the program. The founded model contains all facts of the program, because each fact is either merged into a combined rule or copied unchanged into the completed program, and in either case is added to the founded model by the LFP for some SCC. Consider a rule with a predicate p in its conclusion C. Note that C may be a positive or negative literal. If the body of the rule becomes true before or in the LFP for the SCC containing p, then the corresponding disjunct in the combined rule defining p becomes true before or in that LFP, so the conclusion is added to the interpretation by that LFP, and the rule is satisfied. It remains to show that the body could not become true after that LFP. The body cannot become true during processing of a subsequent SCC, because SCCs are processed in dependency order, so subsequent SCCs do not contain predicates used in the body. We prove by contradiction that the body cannot become true in the negative-inference step for the SCC S containing p: suppose the body becomes true in that step. That step adds only negative literals for certain predicates in S, so the body must contain such a literal, say ¬q. Predicates p and q are in the same SCC S, so q must be defined, directly or indirectly, in terms of p. Since q is certain and is defined in terms of p, p must be certain. But since p and q are defined in the same SCC S, and p depends negatively on q, p has a circular negative dependency, so p must be uncertain, a contradiction.

Constraint models are 2-valued models of the program by definition.

Any model of the completed program is also a model of the original program, because the original program is logically equivalent to the subset of the completed program obtained by removing the added completion rules.

Proof of Theorem 5. It suffices to show that, if some predicate in the SCC is uncertain, then all predicates in it are uncertain. Suppose the SCC contains an uncertain predicate p, and let q be another predicate in it. Then q is defined directly or indirectly in terms of p, and p is uncertain, so q must be uncertain.

Proof of Theorem 5. The proof is based on a straightforward correspondence between the constructions of the founded semantics of the original program and of the transformed program.

Note that:

• All of the merged predicates are certain, or all of them are uncertain, by Theorem 5.

• There is a 1-to-1 correspondence between the disjuncts in the bodies of the rules for the merged predicates in the original program and the disjuncts in the body of the rule for holds in the transformed program.

• If the merged predicates are uncertain and complete, there is a 1-to-1 correspondence between the conjuncts in the bodies of the completion rules for the merged predicates in the original program and the conjuncts in the body of the completion rule for holds in the transformed program.

Based on these observations, it is straightforward to show that:

• For each predicate p that is not merged, an atom for p or n.p is derivable in the semantics for the original program iff it is derivable in the semantics for the transformed program.

• In the LFP for the SCC containing the merged predicates, for each merged predicate p, an atom for p is derivable using a disjunct of the rule for p in the original program iff the corresponding holds atom is derivable using the corresponding disjunct of the rule for holds in the transformed program.

• In the LFP for the SCC containing the merged predicates, for each uncertain complete merged predicate p, an atom for n.p is derivable using the completion rule for p in the original program iff the corresponding atom for n.holds is derivable using the corresponding conjuncts in the completion rule for holds in the transformed program (the other conjuncts in the completion rule for holds have the form v ≠ "p" and hence are true when considering derivation of atoms of the form n.holds("p", …)).

• In the negative-inference step for the SCC containing the merged predicates, for each certain merged predicate p, a negative literal for p is inferred in the semantics for the original program iff the corresponding negative holds literal is inferred in the semantics for the transformed program.

Proof of Theorem 6. For certain predicates, the program completion has no effect, and the founded-semantics construction is essentially the same as the definition of stratified semantics, except using SCCs in the dependence graph instead of strata. The SCCs used in founded semantics subdivide the strata used in stratified semantics; intuitively, this is because predicates are put in different SCCs whenever possible, while predicates are put in different strata only when necessary. This subdivision of strata does not affect the result of the least fixed point computation, so founded semantics is equivalent to stratified semantics.

Proof of Theorem 6. Observe that, for a program satisfying the hypotheses of the theorem, the completed program is logically equivalent to the original program. Every constraint model is a 2-valued model of the completed program and hence a 2-valued model of the original program. Conversely, consider a 2-valued model M of the program. Since the program satisfies the hypotheses of the theorem, the founded model contains only positive literals, added by the LFPs in the construction. The LFPs add a positive literal to the founded model only if that literal is implied by the facts and rules of the program and therefore holds in all 2-valued models of the program. Therefore, the founded model is a subset of M. M satisfies the program and hence, by the above observation, also the completed program. Thus, M is a constraint model.

Proof of Theorem 6. Consider an intensional predicate . By assumption, is uncertain and complete. It is straightforward to show that the LFP for the SCC containing using the combined rule for in , of the form , and its inverse, of the form , is equivalent to satisfying the conjunct for in , of the form . The proof for the forward direction () of the equivalence is a case analysis on the truth value of the body in : (1) if is true, then the LFP uses the combined rule to infer is true, so holds; (2) if is false, then the LFP uses the inverse rule to infer is false, so holds; (3) if is undefined, then neither rule applies and is undefined, so holds. Similarly, the proof for the reverse direction () is a simple case analysis on the truth values of and (which are the same, since by assumption).
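The three-way case analysis in the forward direction can be sketched as a small function (an illustrative 3-valued encoding, not the paper's notation), with `None` standing for the undefined truth value:

```python
# With a combined rule Q <- B and its inverse (not Q <- not B), the truth
# value of the conclusion Q in a 3-valued interpretation mirrors the truth
# value of the body B, exactly as in the three cases of the proof.

def apply_completion(body_value):
    """Truth value of Q given the truth value of B."""
    if body_value is True:      # case (1): combined rule fires, Q is true
        return True
    if body_value is False:     # case (2): inverse rule fires, Q is false
        return False
    return None                 # case (3): neither applies, Q is undefined
```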

Consider an extensional predicate . By assumption, is certain. Let be the set of atoms for in . It is easy to show that and contain the atoms in and contain negative literals for for all other arguments.
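The treatment of a certain extensional predicate can be sketched as follows (the domain and facts are assumed for illustration): atoms listed among the facts become true, and negative literals are derived for all other arguments.

```python
# Completing one certain extensional predicate: true exactly on the given
# facts, false on every other argument in the domain (a 2-valued result).

def complete_extensional(facts, domain):
    """Return a 2-valued truth assignment for one predicate."""
    return {arg: (arg in facts) for arg in domain}

print(complete_extensional({"a", "b"}, {"a", "b", "c"}))
```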

Proof of Theorem 6. (a) This follows from Theorem 6 and the observation that, if satisfies the premises of Theorem 6, and is obtained from by changing the declarations of some extensional predicates from certain to uncertain, then ; intuitively, fewer assumptions are made about uncertain predicates, so contains fewer conclusions. (b) This follows from part (a) and the observation that is undefined in , and is false in (i.e., contains ), so the inclusion relation is strict.

Proof of Theorem 6. (a) This follows from Theorem 6, the differences between the declarations assumed in Theorem 6 and the default declarations, and the effect of those differences on the founded model. It is easy to show that the default declarations can be obtained from the declarations assumed in Theorem 6 by changing the declarations of some intensional predicates from uncertain and complete to certain. Let be such a predicate. This change does not affect the set of positive literals derived for , because the combined rule for is equivalent to the original rules and facts for . This change can only preserve or increase the set of negative literals derived for , because derives all negative literals for that can be derived while preserving consistency of the interpretation (in particular, negative literals for all arguments of not in ).

(b) This follows from the proof of part (a) and the observation that the additional premise for part (b) implies there is a literal for that is undefined in and defined (i.e., true or false) in (because is 2-valued for ), so the inclusion is strict.

Proof of Theorem 6. We prove an invariant that, at each step during the construction of , the current approximation to satisfies . It is straightforward to show, using the induction hypothesis, that literals added to by the LFPs in are in . Consider a literal added by a combined rule . This implies is true in . By the induction hypothesis, , so is true in . Using the rule in corresponding to a disjunct in that is true in , we conclude . The definition of implies is closed under , so . Consider a literal added by a combined rule . All of the disjuncts in the negation normal form of are true in , so the bodies of all rules in that derive are false in and, by the induction hypothesis, are false in , so . The definition of implies is closed under , so .

It remains to show that negative literals added to by are in . Consider an SCC in the dependency graph. Let be the set of atoms whose negations are added to by for . Let denote the interpretation produced by the LFP for . Since is monotone, it suffices to show that is an unfounded set for with respect to , i.e., for each atom in , for each ground instance of a rule of with conclusion , either (1) some hypothesis in is false in or (2) some positive hypothesis in is in . We use a case analysis on the truth value of in . cannot be true in , because if it were, would be added to by the LFP and would not be in . If is false in , then case (1) holds. Suppose is undefined in . This implies that at least one hypothesis in is undefined in . Let be the predicate in , and let be the predicate in . adds literals only for certain predicates, so is certain. depends on , so