1 Introduction
The central component of existing logic programming systems is a refutation procedure, which is based on the resolution rule created by Robinson [21]. The first such refutation procedure, called SLD-resolution, was introduced by Kowalski [13, 31], and further formalized by Apt and Van Emden [1]. SLD-resolution is only suitable for positive logic programs, i.e. programs without negation. Clark [8] extended SLD-resolution to SLDNF-resolution by introducing the negation as finite failure rule, which is used to infer negative information. SLDNF-resolution is suitable for general logic programs; under it, a ground negative literal ¬A succeeds if A finitely fails, and fails if A succeeds.
As an operational/procedural semantics of logic programs, SLDNF-resolution has many advantages, among the most important of which is the linearity of its derivations. Let G0 ⇒ G1 ⇒ … ⇒ Gi be a derivation with G0 the top goal and Gi the latest generated goal. A resolution is said to be linear for query evaluation if, when applying the most widely used depth-first search rule, it makes the next derivation step either by expanding Gi using a program clause (or a tabled answer), which yields Gi+1, or by expanding Gi−1 via backtracking. (The concept of "linear" here is different from the one used for SL-resolution [12].) It is with such linearity that SLDNF-resolution can be realized easily and efficiently using a simple stack-based memory structure [36, 38]. This has been sufficiently demonstrated by Prolog, the first and still the most popular logic programming language, which implements SLDNF-resolution.
However, SLDNF-resolution suffers from two serious problems. One is that the declarative semantics it relies on, i.e. the completion of programs [8], incurs some anomalies (see [15, 29] for a detailed discussion); the other is that it may generate infinite loops and a large amount of redundant subderivations [2, 9, 35].
The first problem with SLDNF-resolution has been settled by the discovery of the well-founded semantics [33]. (Some other important semantics, such as the stable model semantics [11], have also been proposed. However, for the purpose of query evaluation the well-founded semantics seems to be the most natural and robust.) Two representative methods were then proposed for top-down evaluation of this new semantics: Global SLS-resolution [18, 22] and SLG-resolution [6, 7].
Global SLS-resolution is a direct extension of SLDNF-resolution. It overcomes the semantic anomalies of SLDNF-resolution by treating infinite derivations as failed and infinite recursions through negation as undefined. Like SLDNF-resolution, it is linear for query evaluation. However, it inherits from SLDNF-resolution the problem of infinite loops and redundant computations. Therefore, as the authors themselves pointed out, Global SLS-resolution can be considered a theoretical construct [18] and is not effective in general [22].
SLG-resolution (similarly, Tabulated SLS-resolution [4]) is a tabling mechanism for top-down evaluation of the well-founded semantics. The main idea of tabling is to store intermediate results of relevant subgoals and then use them to solve variants of those subgoals whenever needed. With tabling, no variant subgoals will be recomputed by applying the same set of program clauses, so infinite loops can be avoided and redundant computations can be substantially reduced [4, 7, 30, 35, 37]. Like all other existing tabling mechanisms, SLG-resolution adopts the solution-lookup mode. That is, all nodes in a search tree/forest are partitioned into two subsets, solution nodes and lookup nodes. Solution nodes produce child nodes only using program clauses, whereas lookup nodes produce child nodes only using answers in the tables. As an illustration, consider a derivation G0 ⇒ G1 ⇒ … ⇒ Gi, where the subgoal selected in the latest generated goal Gi is a variant of the subgoal selected in G0. Assume that so far no answers for the subgoal in G0 have been derived (i.e., its table is currently empty). Since the subgoal in Gi is a variant of the one in G0 and is thus at a lookup node, the next derivation step is to expand G0 against a program clause, instead of expanding the latest generated goal Gi. Apparently, such a resolution is not linear for query evaluation. As a result, SLG-resolution cannot be implemented using a simple, efficient stack-based memory structure, nor can it be easily extended to handle some strictly sequential operators such as cuts in Prolog, because the sequentiality of these operators fully depends on the linearity of derivations. (It is well known that cuts are indispensable in real-world programming practice. This has been evidenced by the fact that XSB, the best-known state-of-the-art tabling system implementing SLG-resolution, disallows clauses in which a tabled predicate occurs in the scope of a cut [23, 24, 25].)
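The gain from tabling can be illustrated with a small bottom-up analogy of ours (not the SLG machinery itself): answers for a recursive predicate are stored in a table and reused until a fixpoint is reached, so evaluation terminates even where plain SLD-resolution would loop forever.

```python
# Sketch (our analogy, not SLG itself): tabled evaluation of
#   path(X,Y) :- edge(X,Y).
#   path(X,Y) :- path(X,Z), edge(Z,Y).
# Derived answers go into a table and are reused instead of being
# re-derived, so the computation reaches a fixpoint even on a cyclic
# edge relation, where plain SLD-resolution loops forever.
def tabled_path(edges):
    table = set(edges)              # answers from the first clause
    changed = True
    while changed:                  # reuse tabled answers until no new ones
        changed = False
        for (x, z) in list(table):  # tabled answer path(x, z)
            for (z2, y) in edges:   # second clause: path(x,z), edge(z,y)
                if z2 == z and (x, y) not in table:
                    table.add((x, y))
                    changed = True
    return table

edges = {('a', 'b'), ('b', 'c'), ('c', 'a')}   # a cycle
print(('a', 'a') in tabled_path(edges))        # True: terminates despite the cycle
```

Real tabling systems obtain this effect goal-directedly with variant checks on subgoals; the naive fixpoint loop above only illustrates why each answer need be computed once.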
One interesting question then arises: can we have a linear tabling method for top-down evaluation of the well-founded semantics of general logic programs, one that resolves infinite loops and redundant computations (like SLG-resolution) without sacrificing the linearity of SLDNF-resolution (like Global SLS-resolution)? In this paper, we give a positive answer to this question by developing a new tabling mechanism, called SLT-resolution. SLT-resolution is a substantial extension of SLDNF-resolution with tabling. Its main features are as follows.

SLT-resolution is based on finite SLT-trees. The construction of SLT-trees can be viewed as that of SLDNF-trees enhanced with some loop-handling mechanisms. Consider again the derivation G0 ⇒ G1 ⇒ … ⇒ Gi, in which the subgoal selected in Gi is a variant of the subgoal selected in G0. Note that the derivation has gone into a loop, since the proof of the subgoal in G0 needs the proof of its variant in Gi. By SLDNF- or Global SLS-resolution, the subgoal in Gi will be expanded using the same set of program clauses as its ancestor variant, which obviously leads to an infinite loop. In contrast, SLT-resolution will break the loop by disallowing the subgoal in Gi to use the clauses that have already been used by its ancestor variant. As a result, SLT-trees are guaranteed to be finite for programs with the bounded-term-size property.
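For the ground case, the loop-breaking idea can be sketched with a hypothetical toy prover of ours (not the actual SLT machinery): a subgoal may not reuse a clause already consumed by an ancestor variant of it on the current branch.

```python
# Toy ground program:  p :- p.   p :- q.   q.
# Clause = (head, (body atoms...)). Plain SLD loops forever on "p :- p";
# disallowing looping clauses forces the alternative "p :- q" instead.
clauses = {'p': [('p', ('p',)), ('p', ('q',))], 'q': [('q', ())]}

def prove(goal, used):
    """used maps a goal to the clauses its ancestor variants have already
    consumed on this branch (goals here are ground, so variant == equal)."""
    for clause in clauses.get(goal, []):
        if clause in used.get(goal, set()):
            continue                         # looping clause: skip it
        blocked = {g: set(s) for g, s in used.items()}
        blocked.setdefault(goal, set()).add(clause)
        if all(prove(b, blocked) for b in clause[1]):
            return True
    return False

print(prove('p', {}))   # True: succeeds via "p :- q" instead of looping
```

In full SLT-resolution, a failure caused by excluding looping clauses is only provisional; answers possibly missed this way are recovered by the iterative re-evaluation described later in the paper.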

SLT-resolution makes use of tabling to reduce redundant computations, but is linear for query evaluation. Unlike SLG-resolution and all other existing top-down tabling methods, SLT-resolution does not distinguish between solution and lookup nodes. All nodes are expanded by applying existing answers in tables, followed by program clauses. For instance, in the above example derivation, since currently there is no tabled answer available to the subgoal in Gi, it will be expanded using program clauses. If no program clauses are available to it, SLT-resolution moves back to Gi−1 (assuming a depth-first control strategy). This shows that SLT-resolution is linear for query evaluation. When SLT-resolution moves back to the ancestor goal G0, the program clauses that have already been used by the variant subgoal in Gi will no longer be used by the subgoal in G0. This avoids redundant computations.

SLT-resolution is terminating, and sound and complete w.r.t. the well-founded semantics, for any program with the bounded-term-size property and non-floundering queries. Moreover, its time complexity is comparable to that of SLG-resolution and is polynomial for function-free logic programs.

Because of its linearity for query evaluation, SLT-resolution can be implemented by an extension of any existing Prolog abstract machine, such as WAM [36] or ATOAM [38]. This differs significantly from non-linear resolutions such as SLG-resolution, whose derivations cannot be organized using a stack-based memory structure, which is the key to the Prolog implementation.
1.1 Notation and Terminology
We present our notation and review some standard terminology of logic programs [15].
Variables begin with a capital letter, and predicate, function and constant symbols with a lower-case letter. Let p be a predicate symbol. By p(X1, …, Xn) we denote an atom with the list X1, …, Xn of variables. Let S be a set of atoms. By ¬S we denote the set {¬A | A ∈ S}.
Definition 1.1
A general logic program (program for short) is a finite set of (program) clauses of the form

A ← L1, …, Ln

where A is an atom and the Li are literals. A is called the head and L1, …, Ln the body of the clause. If a program has no clause with negative literals in its body, it is called a positive program.
Definition 1.2 ([22])
Let P be a program, and let g, f and c be a predicate symbol, function symbol and constant symbol, respectively, none of which appears in P. The augmented program is P ∪ {g(f(c))}.
Definition 1.3
A goal is a headless clause ← L1, …, Ln, where each Li is called a subgoal. When n = 0, the "←" symbol is omitted. A computation rule (or selection rule) is a rule for selecting one subgoal from a goal.

Let G = ← L1, …, Li, …, Ln be a goal with Li a positive subgoal. Let C = A ← B1, …, Bm be a clause such that Liθ = Aθ, where θ is an mgu (i.e. most general unifier). The resolvent of G and C on Li is the goal ← (L1, …, Li−1, B1, …, Bm, Li+1, …, Ln)θ. In this case, we say that the proof of Li is reduced to the proof of (B1, …, Bm)θ.
The initial goal, G0, is called a top goal. Without loss of generality, we shall assume throughout the paper that a top goal consists of only one atom (i.e. n = 1 and L1 is a positive literal). Moreover, we assume that the same computation rule R always selects subgoals at the same position in any goals. For instance, if Li is selected by R in the goal G above, then in any other goal the subgoal standing at the same position as Li will be the one selected by R, since the two subgoals are at the same position in their respective goals.
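As an illustration of resolving a goal against a clause, here is a toy implementation of ours (the term encoding is hypothetical; unification is done without the occurs check, as in most Prolog systems):

```python
# Terms: tuples ('functor', arg, ...); variables are strings starting
# with an upper-case letter, following the paper's convention.
def is_var(t):
    return isinstance(t, str) and t[0].isupper()

def walk(t, s):
    """Follow variable bindings in substitution s."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(x, y, s):
    """Return an mgu extending s, or None (occurs check omitted)."""
    x, y = walk(x, s), walk(y, s)
    if x == y:
        return s
    if is_var(x):
        return {**s, x: y}
    if is_var(y):
        return {**s, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) \
       and len(x) == len(y) and x[0] == y[0]:
        for a, b in zip(x[1:], y[1:]):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

def subst(t, s):
    """Apply substitution s to term t."""
    t = walk(t, s)
    if isinstance(t, tuple):
        return (t[0],) + tuple(subst(a, s) for a in t[1:])
    return t

def resolvent(goal, i, head, body):
    """Resolve the i-th subgoal of `goal` against the clause head :- body."""
    s = unify(goal[i], head, {})
    if s is None:
        return None
    return [subst(g, s) for g in goal[:i] + body + goal[i + 1:]]

# Resolving ?- p(X, b) with the clause p(a, Y) :- q(Y) yields ?- q(b):
print(resolvent([('p', 'X', 'b')], 0, ('p', 'a', 'Y'), [('q', 'Y')]))
```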
Definition 1.4
Let P be a program. The Herbrand universe of P is the set of ground terms that use the function symbols and constants in P. (If there is no constant in P, an arbitrary one is added.) The Herbrand base of P, denoted HB(P), is the set of ground atoms formed by predicates in P whose arguments are in the Herbrand universe. By ∃(F) and ∀(F) we denote respectively the existential and universal closure of a formula F over the Herbrand universe.
Definition 1.5
A Herbrand instantiated clause of a program P is a ground instance of some clause in P that is obtained by replacing all variables in the clause with terms in the Herbrand universe of P. The Herbrand instantiation of P is the set of all Herbrand instantiated clauses of P.
Definition 1.6
Let P be a program and HB(P) its Herbrand base. A partial interpretation I of P is a set I = T ∪ ¬F such that T, F ⊆ HB(P) and T ∩ F = ∅. We use I⁺ and I⁻ to refer to T and F, respectively.
Definition 1.7
By a variant of a literal L we mean a literal L' that is the same as L up to variable renaming. (Note that L is a variant of itself.)
Finally, a substitution θ is more general than a substitution σ if there exists a substitution γ such that σ = θγ. Note that θ is more general than itself, because θ = θε, where ε is the identity substitution [15].
2 The Well-Founded Semantics
In this section we review the definition of the well-founded semantics of logic programs. We also present a new constructive definition of the greatest unfounded set of a program, which has technical advantages for the proofs of our results.
Definition 2.1 ([22, 33])
Let P be a program and HB(P) its Herbrand base. Let I be a partial interpretation. A set U ⊆ HB(P) is an unfounded set of P w.r.t. I if each atom A ∈ U satisfies the following condition: for each Herbrand instantiated clause C of P whose head is A, at least one of the following holds:

1. The complement of some literal in the body of C is in I.

2. Some positive literal in the body of C is in U.

The greatest unfounded set of P w.r.t. I, denoted U_P(I), is the union of all sets that are unfounded w.r.t. I.
Definition 2.2 ([22])
Define the following transformations:

1. A ∈ T_P(I) if and only if there is a Herbrand instantiated clause of P, A ← L1, …, Ln, such that all the Li are in I.

2. T_P↑ω(I) = ∪_{k≥0} T_P↑k(I), where T_P↑0(I) = I, and T_P↑(k+1)(I) = T_P↑k(I) ∪ T_P(T_P↑k(I)) for any k ≥ 0.

3. U_P(I) is the greatest unfounded set of P w.r.t. I, as in Definition 2.1.

4. W_P(I) = T_P↑ω(I) ∪ ¬U_P(I).
Since T_P derives only positive literals, the following result is straightforward.

Lemma 2.1

¬A ∈ T_P↑ω(I) if and only if ¬A ∈ I.
Definition 2.3 ([22, 33])
Let α and β be countable ordinals. The partial interpretations I_α are defined recursively by:

1. For limit ordinal α, I_α = ∪_{β<α} I_β (in particular, I_0 = ∅).

2. For successor ordinal α = β + 1, I_α = W_P(I_β).

The transfinite sequence {I_α} is monotonically increasing (i.e. I_α ⊆ I_β if α ≤ β), so there exists a first ordinal δ such that I_δ = W_P(I_δ). This fixpoint partial interpretation, denoted WF(P), is called the well-founded model of P. Then, for any A ∈ HB(P), A is true if A ∈ WF(P), false if ¬A ∈ WF(P), and undefined otherwise.
Lemma 2.2
For any α, I_α⁺ ⊆ WF(P)⁺ and I_α⁻ ⊆ WF(P)⁻.

Proof: Let WF(P) = I_δ. Since the sequence {I_α} is monotonically increasing, I_α⁺ ⊆ I_δ⁺ and I_α⁻ ⊆ I_δ⁻.
The following definition is adapted from [20].
Definition 2.4
The quotient program P/I is obtained from the Herbrand instantiation of P by

1. first deleting all clauses with a literal in their bodies whose complement is in I,

2. then deleting all negative literals in the remaining clauses.

Clearly P/I is a positive program. Note that for any partial interpretation I, T_{P/I}↑ω(I) is a partial interpretation that consists of I and all ground atoms that are iteratively derivable from P/I and I⁺. We observe that the greatest unfounded set of P w.r.t. I can be constructively defined based on P/I and T_{P/I}↑ω(I).
Definition 2.5
Define the following two transformations:

1. U'_P(I) = HB(P) − (T_{P/I}↑ω(I))⁺.

2. TU_P(I) = (T_{P/I}↑ω(I))⁺ − (T_P↑ω(I))⁺.

We will show that U'_P(I) = U_P(I) (see Theorem 2.5). The following result is immediate.
Lemma 2.3
(T_P↑ω(I))⁺, TU_P(I) and U'_P(I) are mutually disjoint, and (T_P↑ω(I))⁺ ∪ TU_P(I) ∪ U'_P(I) = HB(P).
From Definitions 2.4 and 2.5 it is easily seen that TU_P(I) = ∪_{k≥1} TU_k, where the layers TU_k are generated iteratively as follows. First, for each A ∈ TU_1 there must be a Herbrand instantiated clause of P of the form

(1) A ← B1, …, Bs, ¬D1, …, ¬Dt

where all the Bi and some of the ¬Dj are in I, and for each of the remaining ¬Dj (their set is not empty; otherwise A would be in (T_P↑ω(I))⁺) neither Dj nor ¬Dj is in I. Note that the proof of A can be reduced to the proof of these remaining ¬Dj's, given I. Then, for each A ∈ TU_2 there must be a clause like (1) above where the complement of no body literal is in I, some of the Bi are in TU_1, and the remaining body literals (whose set is not empty) are in I. Continuing this process of reduction, for each A ∈ TU_k with k > 2 there must be a clause like (1) above where the complement of no body literal is in I, some of the Bi are in TU_{k−1}, and the remaining body literals (whose set is not empty) are in I ∪ TU_1 ∪ … ∪ TU_{k−2}.
The following lemma shows a useful property of atoms in TU_P(I).
Lemma 2.4
Given I, the proof of any A ∈ TU_P(I) can be reduced to the proof of a set of ground negative literals ¬Dj where neither Dj nor ¬Dj is in I.
Proof: Let TU_P(I) = ∪_{k≥1} TU_k as above. The lemma is proved by induction on k. Obviously, it holds for each A ∈ TU_1. As inductive hypothesis, assume that the lemma holds for any A ∈ TU_j with j < k. We now prove that it holds for each A ∈ TU_k.

Let A ∈ TU_k. For convenience of presentation, in clause (1) above for A let B1, …, Bm be the body atoms in TU_1 ∪ … ∪ TU_{k−1}, let the remaining body literals be in I, and for each of the remaining ¬Dj let neither Dj nor ¬Dj be in I. By the inductive hypothesis, the proof of each such Bi can be reduced to the proof of a set Si of ground negative literals where, for each ¬D ∈ Si, neither D nor ¬D is in I. So the proof of A can be reduced to the proof of S1 ∪ … ∪ Sm together with the remaining ¬Dj's.
Theorem 2.5
U'_P(I) = U_P(I).

Proof: Let A ∈ U'_P(I), and let C = A ← L1, …, Ln be a Herbrand instantiated clause of P for A. By Definition 2.5, either the complement of some Li is in I, or (when C survives in P/I) there exists some positive Li such that Li is in neither (T_P↑ω(I))⁺ nor TU_P(I), i.e. Li ∈ U'_P(I) (see Lemma 2.3). By Definition 2.1, U'_P(I) is therefore an unfounded set w.r.t. I, so U'_P(I) ⊆ U_P(I).
Assume, on the contrary, that there is an A ∈ U_P(I) but A ∉ U'_P(I). Since A ∉ U'_P(I), A ∈ (T_{P/I}↑ω(I))⁺. So there exists a Herbrand instantiated clause of P for A,

A ← B1, …, Bs, ¬D1, …, ¬Dt,

such that it does not satisfy point 1 of Definition 2.1 (since A ∈ (T_{P/I}↑ω(I))⁺) and each Bi is either in I⁺ or in (T_{P/I}↑ω(I))⁺. Since A ∈ U_P(I), by point 2 of Definition 2.1 some Bi ∈ U_P(I), and thus Bi ∈ U_P(I) ∩ (T_{P/I}↑ω(I))⁺.

Repeating the above process leads to an infinite chain: the proof of A needs the proof of some A1, which needs the proof of some A2, and so on, where each Ai ∈ U_P(I) ∩ (T_{P/I}↑ω(I))⁺. Obviously, for no Ai along the chain can its proof be reduced to a set of ground negative literals ¬D where neither D nor ¬D is in I. This contradicts Lemma 2.4, so U_P(I) ⊆ U'_P(I).
Starting with I = ∅, we compute T_P↑ω(I), followed by U'_P(I) and TU_P(I). By Lemma 2.2 and Theorem 2.5, each A ∈ (T_P↑ω(I))⁺ (resp. each A ∈ U'_P(I)) is true (resp. false) under the well-founded semantics. ¬TU_P(I) is a set of temporarily undefined ground literals whose truth values cannot be determined at this stage of the transformations based on I. We then do iterative computations by letting I := T_P↑ω(I) ∪ ¬U'_P(I) until we reach a fixpoint. This forms the basis on which our operational procedure for top-down computation of the well-founded semantics is designed.
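For a finite ground program, the iteration just described can be sketched in a few lines. The rendering below is ours and uses the equivalent alternating-fixpoint formulation: definitely-true atoms are derived with negative body literals checked against the current false set, possibly-true atoms are derived with unsettled negative literals assumed to succeed (as in the quotient program), and atoms that are not even possibly true join the false set.

```python
# Clause = (head, [positive body atoms], [negated body atoms]).
def least_fixpoint(clauses, neg_ok):
    """Least set closed under the clauses, where neg_ok decides whether
    a clause's negative body literals count as satisfied."""
    m, changed = set(), True
    while changed:
        changed = False
        for head, pos, neg in clauses:
            if head not in m and all(b in m for b in pos) and neg_ok(neg):
                m.add(head)
                changed = True
    return m

def well_founded(clauses, atoms):
    true, false = set(), set()
    while True:
        # definite consequences: a negative literal holds only if settled false
        t = least_fixpoint(clauses, lambda neg: all(d in false for d in neg))
        # possible consequences: unsettled negative literals assumed to hold
        possible = least_fixpoint(clauses, lambda neg: all(d not in t for d in neg))
        f = atoms - possible
        if t == true and f == false:          # fixpoint reached
            return true, false, atoms - true - false
        true, false = t, f

# p :- not q.   q :- not p.   r :- not r.   s.   w :- v.
program = [('p', [], ['q']), ('q', [], ['p']),
           ('r', [], ['r']), ('s', [], []), ('w', ['v'], [])]
atoms = {'p', 'q', 's', 'r', 'v', 'w'}
print(well_founded(program, atoms))
# s is true; v and w are false; p, q and r are undefined
```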
3 SLT-Trees and SLT-Resolution
In this section, we define SLT-trees and SLT-resolution. Here "SLT" stands for "Linear Tabulated resolution using a Selection/computation rule."

Recall the familiar notion of a tree for describing the search space of a top-down proof procedure. For convenience, a node in such a tree is represented by N : G, where N is the node name and G is the goal labeling the node. We assume that no two nodes have the same name, so nodes can be referred to by their names.
Definition 3.1 ([26] with slight modification)
An ancestor list AL_L, consisting of pairs (N, A) where N is a node name and A an atom, is associated with each subgoal L in a tree, and is defined recursively as follows.

1. If L is a subgoal at the root, then AL_L = ∅ unless otherwise specified.

2. Let L be a subgoal at node N and let M be the parent node of N. If L is copied or instantiated from some subgoal L' at M, then AL_L = AL_{L'}.

3. Let M be a node whose goal contains a positive literal L'. Let L be a subgoal at a node N that is obtained from M by resolving against a clause A ← B1, …, Bm on the literal L' with an mgu θ. If L is Biθ for some i, then AL_L = {(M, L')} ∪ AL_{L'}.
Apparently, for any subgoals L1 and L2, if L1 is in the ancestor list of L2, i.e. (N1, L1) ∈ AL_{L2}, then the proof of L1 needs the proof of L2. In particular, if L2 is a variant of L1, the derivation goes into a loop. This leads to the following definition.
Definition 3.2
Let R be a computation rule, and let L1 and L2 be two subgoals that are selected by R at nodes N1 and N2, respectively. If (N1, L1) ∈ AL_{L2}, then L1 (resp. N1) is called an ancestor subgoal of L2 (resp. an ancestor node of N2). If L1 is both an ancestor subgoal and a variant of L2, i.e. an ancestor variant subgoal of L2, we say the derivation goes into a loop, where N2 and all its ancestor nodes involved in the loop are called loop nodes, and the clause used at the ancestor node N1 to generate this loop is called a looping clause of L2 w.r.t. L1. We say a node is loop-dependent if it is a loop node or an ancestor node of some loop node. Nodes that are not loop-dependent are loop-independent.
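The variant test on ancestor lists is straightforward to mechanize. In the sketch below (ours; the term encoding is hypothetical), variables are renamed in left-to-right order of first occurrence, so two atoms are variants exactly when their canonical forms coincide.

```python
# Terms: tuples ('functor', arg, ...); variables are strings starting
# with an upper-case letter, following the paper's convention.
def canonical(atom):
    """Rename variables canonically so that variants compare equal."""
    mapping = {}
    def walk(t):
        if isinstance(t, str) and t[0].isupper():
            return mapping.setdefault(t, 'V%d' % len(mapping))
        if isinstance(t, tuple):
            return tuple(walk(x) for x in t)
        return t
    return walk(atom)

def is_loop(selected, ancestor_list):
    """True iff `selected` has an ancestor variant subgoal, where
    ancestor_list is a list of (node name, atom) pairs."""
    key = canonical(selected)
    return any(canonical(atom) == key for _node, atom in ancestor_list)

# p(X, a) is a variant of p(Y, a), but p(a, Y) is not:
print(is_loop(('p', 'X', 'a'), [('n0', ('p', 'Y', 'a'))]))   # True
print(is_loop(('p', 'a', 'Y'), [('n0', ('p', 'X', 'a'))]))   # False
```

Note that the left-to-right renaming also distinguishes, e.g., p(X, X) from p(X, Y), as a correct variant check must.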
In tabulated resolutions, intermediate positive and negative (or, alternatively, undefined) answers of some subgoals will be stored in tables at some stages. Such answers are called tabled answers. Let F be a table that stores ground negative answers; i.e., each A ∈ F is a ground atom whose negation ¬A is a tabled answer. In addition, we introduce a special subgoal, u*, which is assumed to occur in neither programs nor top goals. u* will be used to substitute for some ground negative subgoals whose truth values are temporarily undefined. We now define SLT-trees.
Definition 3.3 (SLT-trees)
Let P be a program, G0 a top goal, and R a computation rule. Let F be a set of ground atoms such that, for each A ∈ F, ¬A is a tabled answer. The SLT-tree T_{G0} for (P, F, G0) via R is a tree rooted at node N0 : G0 such that, for any node N : G in the tree:

1. If G is the empty goal, then N is a success leaf.

2. If G consists only of the subgoal u*, then N is a temporarily undefined leaf.

3. Let L be a positive literal selected by R from G. Let C_L be the set of clauses in P whose heads unify with L, and let LC_L be the set of looping clauses of L w.r.t. its ancestor variant subgoals. If C_L − LC_L = ∅, then N is a failure leaf; else the children of N are obtained by resolving G with each of the clauses in C_L − LC_L over the literal L.

4. Let L = ¬A be a negative literal selected by R from G. If A is not ground, then N is a flounder leaf; else if A is in F, then N has only one child, labeled by the goal G with ¬A removed; else build an SLT-tree T_{←A} for (P, F, ← A) via R, in which the subgoal A at the root inherits the ancestor list of ¬A at N. We consider the following cases:

(a) If T_{←A} has a success leaf, then N is a failure leaf;

(b) If T_{←A} has no success leaf but has a flounder leaf, then N is a flounder leaf;

(c) Otherwise, N has only one child, labeled by the goal obtained from G by replacing ¬A with u* if G does not yet contain u*, or by removing ¬A if G already contains u*.

In an SLT-tree, there may be four types of leaves: success leaves, failure leaves, temporarily undefined leaves, and flounder leaves. These leaves respectively represent successful, failed, (temporarily) undefined, and floundering derivations (see Definition 3.5). In this paper we shall not discuss floundering, a situation in which a non-ground negative literal is selected by a computation rule (see [5, 10, 14, 19] for discussions of this topic). Therefore, in the sequel we assume that no SLT-trees contain flounder leaves.
The construction of SLT-trees can be viewed as that of SLDNF-trees [8, 15] enhanced with the following loop-handling mechanisms: (1) Loops are detected using ancestor lists of subgoals. Positive loops occur within SLT-trees, whereas negative loops (i.e. loops through negation) occur across SLT-trees (see point 4 of Definition 3.3, where the child SLT-tree T_{←A} is connected to its parent SLT-tree by letting the subgoal A at the root of T_{←A} inherit the ancestor list of ¬A). (2) Loops are broken by disallowing subgoals to use looping clauses for node expansion (see point 3 of Definition 3.3). This guarantees that SLT-trees are finite (see Theorem 3.1). (3) Due to the exclusion of looping clauses, some answers may be missed in an SLT-tree. Therefore, for any ground negative subgoal ¬A, its answer (true or false) can be definitely determined only when A is given to be false (i.e. A ∈ F) or the proof of ← A via the SLT-tree succeeds (i.e. T_{←A} has a success leaf). Otherwise, ¬A is assumed to be temporarily undefined and is replaced by u* (see point 4 of Definition 3.3). Note that u* is only introduced to signify the existence of subgoals whose truth values are temporarily undefined; keeping one u* in a goal is enough for this purpose (see point 4(c)). From point 2 of Definition 3.3 we see that goals with a u* subgoal cannot lead to a success leaf. However, they may arrive at a failure leaf if one of the remaining subgoals fails.
For convenience, we use dotted edges to connect parent and child SLT-trees, so that negative loops can be clearly identified (see Figure 1). Moreover, we refer to T_{G0}, the top SLT-tree, along with all its descendant SLT-trees, as a generalized SLT-tree for (P, F, G0), denoted GT_{G0} (or simply GT when no confusion would occur). Therefore, a path of a generalized SLT-tree may cross several SLT-trees through dotted edges.
Example 3.1
Consider the following program P and let G0 be the top goal. For convenience, we choose the leftmost computation rule and let F = ∅. The generalized SLT-tree GT_{G0} for (P, F, G0) is shown in Figure 1 (for simplicity, in depicting SLT-trees we omit the "←" symbol in goals), which consists of five SLT-trees. The leaves labeled by the empty goal are success leaves; the leaves whose selected subgoals have no clauses to unify with except looping clauses are failure leaves; and the leaves whose goals consist only of u* are temporarily undefined leaves.
SLT-trees have some nice properties. Before proving those properties, we reproduce the definition of the bounded-term-size property. The following definition is adapted from [32].
Definition 3.4
A program P has the bounded-term-size property if there is a function f(n) such that, whenever a top goal G0 has no argument whose term size exceeds n, no subgoal or tabled answer in any generalized SLT-tree GT_{G0} has an argument whose term size exceeds f(n).
The following result shows that the construction of SLT-trees is always terminating for programs with the bounded-term-size property.
Theorem 3.1
Let P be a program with the bounded-term-size property, G0 a top goal, and R a computation rule. The generalized SLT-tree GT_{G0} for (P, F, G0) via R is finite.
Proof: The bounded-term-size property guarantees that no term occurring on any path of GT_{G0} can have size greater than f(n), where n is a bound on the size of terms in the top goal G0. Assume, on the contrary, that GT_{G0} is infinite. Then it must have an infinite path, because its branching factor (i.e. the maximum number of children of a node) is bounded by the finite number of clauses in P. Since P has only a finite number of predicate, function and constant symbols, and no term on the path exceeds size f(n), some positive subgoal L0 selected by R must have infinitely many variant descendants L1, L2, … on the path, such that the proof of L0 needs the proof of L1, which needs the proof of L2, and so on. That is, each Li is an ancestor variant subgoal of every Lj with j > i. Let L0 have in total m clauses in P whose heads unify with it. Then by point 3 of Definition 3.3, Lm, when selected by R, will have no clause to unify with except looping clauses. That is, Lm should be at a failure leaf, contradicting that it has variant descendants on the path.
Definition 3.5
Let T_{G0} be the SLT-tree for (P, F, G0). A successful (resp. failed, or temporarily undefined) branch of T_{G0} is a branch that ends at a success (resp. failure, or temporarily undefined) leaf. A correct answer substitution for G0 is given by θ = θ1 … θn, where the θi are the most general unifiers used at each step along a successful branch of T_{G0}. An SLT-derivation of (P, F, G0) is a branch of the generalized SLT-tree GT_{G0}.
Another principal property of SLT-trees is that correct answer substitutions for top goals are sound w.r.t. the well-founded semantics.
Theorem 3.2
Let P be a program with the bounded-term-size property, G0 a top goal, and T_{G0} the SLT-tree for (P, F, G0), where every A ∈ F is false in WF(P). For any correct answer substitution θ for G0, ∀(G0 θ) is true in WF(P).
Proof: Let n be the depth of a successful branch. Without loss of generality, assume the branch is of the form

G0 ⇒_{θ1} G1 ⇒_{θ2} … ⇒_{θn} Gn

where Gn is the empty goal and θ = θ1 … θn. We show, by induction on n, that ∀(G0 θ) is true in WF(P).

Let n = 1. Since G1 is a success leaf, G0 has only one literal, say L. If L is positive, there must be a bodyless clause A ← in P such that Lθ1 = Aθ1. In such a case, ∀(Aθ1) is true in WF(P), so ∀(Lθ1) is true in WF(P). Otherwise, L is a ground negative literal ¬A. By point 4 of Definition 3.3, A ∈ F and thus ¬A is in WF(P). Therefore ∀(G0 θ) is true in WF(P) with θ = θ1.

As induction hypothesis, assume that ∀(G1 θ2 … θn) is true in WF(P) whenever G1 starts a successful branch of depth n − 1. We now prove that ∀(G0 θ1 … θn) is true in WF(P).

Let G0 = ← L1, …, Lm with Lk the selected literal. If Lk is negative, it must be ground, say Lk = ¬A with A ∈ F (otherwise the node is either a flounder leaf or a failure leaf, or its child contains the subgoal u*, in which case the branch could never lead to a success leaf). So G1 = ← L1, …, L_{k−1}, L_{k+1}, …, Lm with θ1 the identity substitution, and ¬A is in WF(P). By the induction hypothesis,

∀(G1 θ2 … θn) is true in WF(P), so ∀(G0 θ1 … θn) is true in WF(P).

Otherwise, Lk is positive. So there is a clause A ← B1, …, Bq in P with Lk θ1 = A θ1. That is, G1 = ← (L1, …, L_{k−1}, B1, …, Bq, L_{k+1}, …, Lm)θ1. Since ∀(G1 θ2 … θn) is true in WF(P), ∀((B1, …, Bq)θ1 θ2 … θn) is true in WF(P). So ∀(Lk θ1 … θn) is true in WF(P). Therefore

∀(G0 θ1 … θn) is true in WF(P).
SLT-trees provide a basis for us to develop a sound and complete method for computing the well-founded semantics.
Observe that the concept of correct answer substitutions for a top goal G0, defined in Definition 3.5, can be extended to any goal G at a node N in a generalized SLT-tree GT_{G0}. This is done simply by adding the condition that the (sub-)branch starts at N. For instance, in Figure 1 a branch that starts at a node N and ends at a success leaf yields a correct answer substitution for the goal at N, namely the composition of the mgus used at the expansion steps along the branch. From the proof of Theorem 3.2 it is easily seen that the theorem applies to correct answer substitutions for any goals in GT_{G0}.
Let G be a goal in GT_{G0} and L the selected positive subgoal in G. The partial branches of GT_{G0} that are used to prove L constitute subderivations for L. By Theorem 3.2, ∀(Lθ) is true in WF(P) for any correct answer substitution θ built from a successful subderivation for L. We refer to such intermediate results Lθ as tabled positive answers.
Let TB1 consist of all tabled positive answers in GT_{G0}. Then P ∪ TB1 is equivalent to P w.r.t. the well-founded semantics. Due to the addition of tabled positive answers, a new generalized SLT-tree for (P ∪ TB1, F, G0) can be built, with possibly more tabled positive answers derived. Let TB2 consist of all tabled positive answers in the new tree but not in TB1, and