1 General Information
RES is a system for modeling and reasoning about dynamic systems. Specifically, it implements the Language E [Kakas & Miller1997b, Kakas & Miller1997a], a specialist logic for reasoning about narratives of action occurrences and observations. RES is implemented in SICStus Prolog and runs on any platform for which SICStus is supported (e.g. Windows, Linux, UNIX, Mac). The program is about 300 lines long (a URL is given at the end of the paper). The semantics of E is model-theoretic, but this implementation is based on a sound and complete reformulation of E in terms of argumentation, and uses general computational techniques of argumentation frameworks. To describe the operation and utility of RES, it is necessary to first review the Language E.
1.1 The Language E
Like many logics, the Language E is really a collection of languages, since the particular vocabulary employed depends on the domain being modeled. The domain-dependent vocabulary always consists of a set of fluent constants, a set of action constants, and a partially ordered set of timepoints. A fluent literal is either a fluent constant F or its negation ¬F. In the current implementation of RES the only time structure that is supported is that of the natural numbers, so we restrict our attention here to domains of this type, using the standard ordering relation ≤ in all examples.
Domain descriptions in the Language E are collections of four kinds of statements (where A is an action constant, T is a timepoint, F is a fluent constant, L is a fluent literal and C is a set of fluent literals):

t-propositions (“t” for “timepoint”), of the form
L holds-at T

h-propositions (“h” for “happens”), of the form
A happens-at T

c-propositions (“c” for “causes”), either of the form
A initiates F when C
or of the form
A terminates F when C

r-propositions (“r” for “ramification”), of the form
L whenever C.
The precise semantics of E is described in [Kakas & Miller1997b] and [Kakas & Miller1997a]. T-propositions are used to record observations that particular fluents hold or do not hold at particular timepoints, and h-propositions are used to state that particular actions occur at particular timepoints. C-propositions state general “action laws” – the intended meaning of “A initiates F when C” is “C is a minimally sufficient set of conditions for an occurrence of A to have an initiating effect on F”. (When C is empty the proposition is stated simply as “A initiates F”.) R-propositions serve a dual role in that they describe both static constraints between fluents and ways in which fluents may be indirectly affected by action occurrences. The intended meaning of “L whenever C” is “at every timepoint at which C holds, L holds, and hence every action occurrence that brings about C also brings about L”.
E’s semantics is perhaps best understood by examples, and so several are given in the next subsection. The key features of the semantics are as follows.

Models are simply mappings of fluent/timepoint pairs to true or false which satisfy various properties relating to the propositions in the domain.

The semantics describes entailment (⊨) of extra t-propositions (but not h-, c- or r-propositions) from domain descriptions.

E is monotonic as regards the addition of t-propositions to domain descriptions, but nonmonotonic (in order to eliminate the frame problem) as regards the addition of h-, c- and r-propositions. The semantics encapsulates the assumptions that (i) no actions occur other than those explicitly represented by h-propositions, (ii) actions have no direct effects other than those explicitly described by c-propositions, and (iii) actions have no indirect effects other than those that can be explained by “chains” of r-propositions in the domain description. (Technically, these “chains” are defined using the notion of a least fixed point.)

The semantics ensures that fluents have a default persistence. In each model, fluents change truth values only at timepoints (called initiation points and termination points) where an h-proposition and a c-proposition (whose preconditions are satisfied in the model) combine to cause a change, or where an h-proposition, a c-proposition and a “chain” of r-propositions all combine to give an indirect or knock-on effect. All effects (direct and indirect) of an action occurrence are instantaneous, i.e. all changes are apparent immediately after the occurrence.

As well as indicating how the effects of action occurrences instantaneously propagate, r-propositions place constraints on which combinations of t-propositions referring to the same timepoint are allowable. In this latter respect they behave as ordinary classical implications.
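As a concrete illustration of the default persistence just described, the following sketch (illustrative Python only, not part of the SICStus implementation; all names are ours) computes a fluent's truth value in a single model from its initial value and its initiation and termination points:

```python
def holds_at(fluent, t, initial, initiations, terminations):
    """Truth value of `fluent` at time `t` in one model.

    initial: dict mapping each fluent to its initial truth value.
    initiations/terminations: sets of (fluent, timepoint) pairs.
    Changes become apparent immediately AFTER the causing timepoint,
    and the fluent persists between change points.
    """
    changes = [(tp, True) for (f, tp) in initiations if f == fluent and tp < t]
    changes += [(tp, False) for (f, tp) in terminations if f == fluent and tp < t]
    if not changes:
        return initial[fluent]   # default persistence of the initial value
    return max(changes)[1]       # the latest change point before t wins
```

For instance, with an initiation point for a fluent at 5 and termination points at 2 and 8, the fluent is false at 3, true at 6 and false again at 10, which is exactly the override behaviour exercised by the examples below.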
1.2 Example Language E Domain Descriptions
Each of the following domain descriptions illustrates how supports particular modes of reasoning about the effects of actions. These domain descriptions are used in subsequent sections of the paper to illustrate the functionality of the RES system.
Example 1
(Vaccinations)
This example concerns vaccinations against a particular
disease. Vaccine A only provides protection for people with
blood type O, and vaccine B only works on people with blood
type other than O. Fred’s blood type is not
known, so he is injected with vaccine A at 2 o’clock and
vaccine B at 3 o’clock. To describe this scenario we need a
vocabulary of two action constants InjectA and InjectB, and two
fluent constants Protected and TypeO. The domain description
consists of two c-propositions and two h-propositions:
(1)  InjectA initiates Protected when {TypeO}
(2)  InjectB initiates Protected when {¬TypeO}
(3)  InjectA happens-at 2
(4)  InjectB happens-at 3
If we now consider some time later than 3 o’clock, say 6 o’clock, we can see intuitively that Fred should be protected, and indeed the domain description entails Protected holds-at 6.
This is because there are two classes of models for this domain. In models of the first type, TypeO holds at all timepoints, so that (1) and (3) combine to form an initiation point for Protected at 2. In models of the second type, TypeO does not hold at any timepoint, and so (2) and (4) combine to form an initiation point for Protected at 3. In either type of model, Protected then persists from its initiation point up to timepoint 6, since the fluent has no intervening termination points to override its initiation. Note, however, that there are no default assumptions directly attached to t-propositions, so that for any time t it is neither the case that the domain entails TypeO holds-at t nor the case that it entails ¬TypeO holds-at t.
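The two-way case split above can be mimicked in a few lines of illustrative Python (a sketch of the semantics, not of the RES proof procedure; all function names are ours):

```python
def protected_at(t, type_o):
    # In TypeO models, (1) and (3) give an initiation point at 2;
    # in non-TypeO models, (2) and (4) give one at 3. Effects are
    # apparent immediately after the initiation point.
    initiation_point = 2 if type_o else 3
    return t > initiation_point

MODELS = (True, False)  # the two classes of models, by the value of TypeO

def sceptical_protected(t):
    # true in every class of models
    return all(protected_at(t, m) for m in MODELS)

def credulous_protected(t):
    # true in at least one class of models
    return any(protected_at(t, m) for m in MODELS)
```

Here `sceptical_protected(6)` holds, while at time 3 only the credulous check succeeds, mirroring the entailments described above.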
Example 2
(Photographs)
This example shows that the Language E can be used
to infer information about what conditions hold at the time of
an action occurrence, given other information about what held at
times before and afterwards. It concerns taking a photograph. There
is a single action Take, and two fluents Picture (representing that
a photograph has been successfully taken) and Loaded (representing that the
camera is loaded with film).
Suppose that the domain
description consists of a single c-proposition,
a single h-proposition and two t-propositions:
(1)  Take initiates Picture when {Loaded}
(2)  Take happens-at 2
(3)  ¬Picture holds-at 1
(4)  Picture holds-at 3
Since a change occurs in the truth value of Picture between 1 and 3, in all models an action must occur at some timepoint between 1 and 3 whose initiating conditions for the property Picture are satisfied at that point. The only candidate is the Take occurrence at 2, whose condition for initiating Picture is Loaded. Hence the domain entails Loaded holds-at 2.
Indeed, by the persistence of Loaded (in the absence of possible initiation or termination points for this fluent), for any time t, the domain entails Loaded holds-at t.
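This effect-to-cause step can be sketched as a consistency filter over candidate models (illustrative Python only; we take the observations, for concreteness, to be ¬Picture at time 1 and Picture at time 3, with the Take occurrence at 2 as in the text):

```python
def picture_at(t, loaded):
    # Take happens-at 2 and initiates Picture when Loaded, so
    # Picture holds after 2 exactly when Loaded held at 2.
    return loaded and t > 2

# Keep only the values of Loaded consistent with both observations:
consistent = [loaded for loaded in (True, False)
              if not picture_at(1, loaded) and picture_at(3, loaded)]
# Only loaded == True survives, so Loaded holds-at 2 sceptically.
```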
Example 3
(Cars)
This example illustrates the use of rpropositions, and shows how the
effects of later action occurrences override the effects of earlier action
occurrences. It concerns a car engine. The fluent Running represents
that the engine is running, the fluent Petrol represents that there
is petrol (gas) in the tank, the action TurnOn represents the
action of turning on the engine, the action TurnOff represents the
action of turning off the engine, and the action Empty represents the event of the tank becoming empty (or the action of
someone emptying the tank). We describe a narrative where the engine
is initially running, is turned off at time 2, is turned back on at
time 5, and runs out of petrol at time 8. We also want to state
the general constraint that the engine cannot run without petrol. The
domain description
consists of:
(1)  TurnOn initiates Running when {Petrol}
(2)  TurnOff terminates Running
(3)  Empty terminates Petrol
(4)  ¬Running whenever {¬Petrol}
(5)  Running holds-at 1
(6)  TurnOff happens-at 2
(7)  TurnOn happens-at 5
(8)  Empty happens-at 8
The Language E supports the following conclusions concerning the
fluents Running and Petrol:
(i)
For t ≤ 2, the domain entails Running holds-at t.
This is because of (5), and because there are no
relevant action occurrences before time 2 to override Running’s
default persistence.
(ii)
For t ≤ 8, the domain entails Petrol holds-at t.
This is because we obtain Petrol holds-at 1 directly from (4) and (5), and because there are no
relevant action occurrences before time 8 to override Petrol’s
default persistence. Note that in this case (4) has been used
(in the contrapositive) in its capacity as a static constraint at
time 1.
(iii)
For 2 < t ≤ 5, the domain entails ¬Running holds-at t.
This is because (2) and (6) combine to form a
termination point for Running at 2 (in all models).
(iv)
For 5 < t ≤ 8, the domain entails Running holds-at t.
This is because (1) and (7) combine to form an
initiation point for Running at 5, and this
overrides the earlier termination point (for all times greater than
5).
(v)
For t > 8, the domain entails ¬Petrol holds-at t.
This is because (3) and (8) combine to form a
termination point for Petrol at 8.
(vi)
For t > 8, the domain entails ¬Running holds-at t.
This is because (3), (4) and (8) combine to form a
termination point for Running at 8 which overrides the earlier
initiation point.
Note that E does not allow r-propositions to be
used in the contrapositive to generate extra initiation or
termination points. For example, if we were to add the two
propositions
(9)  JumpStart initiates Running
(10)  JumpStart happens-at 9
to the domain description, we would have inconsistency. The combination of (9) and (10) would give an initiation point for Running at time 9, so that at subsequent times Running would be true. However, (v) above shows that for such times Petrol is false, and this contradicts (4) in its capacity as a static constraint. (4) cannot be used in the contrapositive to “fix” this by generating a termination point for Petrol from the termination point for Running.
2 Description of the System
The system relies upon a reformulation of the Language E into argumentation as described in [Kakas, Miller, & Toni1999].
2.1 Argumentation Formulation of E
A domain description without t-propositions and without r-propositions is translated into an argumentation program with four components: a background theory, an argumentation theory (i.e. a set of argument rules), an argument base (a distinguished subset of the argumentation theory), and a priority relation over the (ground instances of the) argument rules. Intuitively, the sentences in the monotonic background theory can be seen as non-defeasible argument rules which must belong to any nonmonotonic extension of the theory. These extensions are given by the admissible subsets of the argument base, namely subsets that are both non-self-attacking and (counter)attack any set of argument rules attacking them. Whereas an admissible set can consist only of argument rules in the argument base, attacks against an admissible set are allowed to be subsets of the larger argumentation theory. The exact definition of an attack, which depends on the priority relation and the derivation of complementary literals, is given in [Kakas, Miller, & Toni1999].
Both the background theory and the argumentation theory use the predicates HappensAt, HoldsAt, Initiation and Termination. The background theory is a set of Horn clauses, corresponding to the h- and c-propositions in the domain, defining the above predicates except HoldsAt. The argumentation theory is a domain-independent set of generation, persistence and assumption rules for HoldsAt. For example, a generation rule is given by HoldsAt(F, T2) ← Initiation(F, T1), T1 < T2, where F is any fluent and T1, T2 are any two time points. The priority relation is such that the effects of later events take priority over the effects of earlier ones (see [Kakas, Miller, & Toni1999]). Given the translation, results in [Kakas, Miller, & Toni1999] show that there is a one-to-one correspondence between (i) models of the domain description and maximal admissible sets of arguments of its translation, and (ii) t-propositions entailed by the domain description and sceptical nonmonotonic consequences (of the corresponding HoldsAt literals) of its translation, where a given literal is a sceptical (resp. credulous) nonmonotonic consequence of an argumentation program iff it holds in all (resp. some) maximal admissible extension(s) of the program.
This method can be applied directly to conjunctions of literals rather than individual literals. Hence the above techniques can be straightforwardly applied to domains with t-propositions, simply by adding all t-propositions in the domain to the conjunction of literals whose entailment we want to check. Similarly, the above techniques can be directly adapted for domains with r-propositions by conjoining to the given literals the conclusions of ramification statements that are “fired”.
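The notion of admissibility used here can be illustrated with a generic sketch for abstract argumentation (illustrative Python; the actual attack relation used by RES also involves priorities and the derivability of complementary literals, as defined in [Kakas, Miller, & Toni1999]):

```python
def attacks(attack_rel, xs, ys):
    """Does some argument in xs attack some argument in ys?"""
    return any((x, y) in attack_rel for x in xs for y in ys)

def admissible(s, arguments, attack_rel):
    """s is admissible iff it is non-self-attacking and it
    counter-attacks every argument that attacks it."""
    if attacks(attack_rel, s, s):
        return False
    return all(attacks(attack_rel, s, {a})
               for a in arguments
               if attacks(attack_rel, {a}, s))
```

For example, with arguments {a, b, c} where b attacks a and c attacks b, the set {a, c} is admissible (c defends a against b), while {a} alone is not.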
2.2 Proof theory
Given the translation of an E domain description into an argumentation program, a proof theory can be developed directly [Kakas, Miller, & Toni1999], in terms of derivations of trees whose nodes are sets of arguments, each node attacking the arguments in its parent node. Suppose that we wish to demonstrate that a given t-proposition is entailed by the domain. Let Δ be a (non-self-attacking) set of arguments in the argument base which, together with the background theory, derives the translation of the t-proposition (such a Δ can easily be built by backward reasoning). Two kinds of derivations are defined:

successful derivations, building, from a tree consisting only of the root Δ, a tree whose root is an admissible set of arguments containing Δ, and

finitely failed derivations, guaranteeing the absence of any admissible set of arguments containing Δ.
Hence the given t-proposition is entailed by the domain if there exists a successful derivation with initial tree consisting only of the root Δ and, for every set Δ′ of argument rules such that Δ′ derives (via the background theory) the complement of the (literal translation of the) t-proposition, every derivation for Δ′ is finitely failed.
2.3 Implementation
The system is an implementation of the proof theory presented in [Kakas, Miller, & Toni1999], but it does not rely explicitly on tree-derivations. Instead, it implicitly manipulates trees via their frontiers, in a way similar to the proof procedure for computing partial stable models of logic programs in [Eshghi & Kowalski1989, Kakas & Mancarella1990]. (See also [Kakas & Toni1999] for a general discussion of this technique.) RES defines the Prolog predicates sceptical/1 and credulous/1. For some given Goal which is a list of literals, with each literal either of the form holds(f,t) or neg(holds(f,t)) (where the Prolog constant symbols f and t represent a ground fluent constant and time point respectively),

if sceptical(Goal) succeeds then each literal in Goal is a sceptical nonmonotonic consequence of the domain;

if sceptical(Goal) finitely fails then some literal in Goal is not a sceptical nonmonotonic consequence of the domain;

if credulous(Goal) succeeds then each literal in Goal is a credulous nonmonotonic consequence of the domain;

if credulous(Goal) finitely fails then some literal in Goal is not a credulous nonmonotonic consequence of the domain.
The implementation also defines the Prolog predicate credulous/2. This is such that for some given Goal,

if credulous(Goal,X) succeeds then each literal in Goal is a credulous nonmonotonic consequence of the domain, and the set of arguments in X provides the corresponding admissible extension of the argumentation program translation of the domain.
Hence credulous(Goal,X) can be used to provide an explanation X for the goal Goal.
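The intended reading of these predicates (a goal holds sceptically iff it holds in every maximal admissible extension, and credulously iff it holds in some) can be sketched as follows (illustrative Python only; RES itself computes these by proof search over attacks rather than by enumerating extensions):

```python
def sceptical(goal, extensions, entails):
    """goal (a list of literals) holds sceptically iff every
    maximal admissible extension entails all of its literals."""
    return all(all(entails(ext, lit) for lit in goal) for ext in extensions)

def credulous(goal, extensions, entails):
    """goal holds credulously iff some extension entails all of it."""
    return any(all(entails(ext, lit) for lit in goal) for ext in extensions)
```

For instance, with extensions [{'p','q'}, {'p'}] and set membership as the entailment check, ['p'] is a sceptical consequence while ['q'] is only a credulous one.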
Domain descriptions in E may sometimes be described using
meta-level quantification, and RES can support a restricted form of
non-propositional programs in which all c-propositions are “strongly
range-restricted”. (We assume the usual convention of universal
quantification over the whole proposition.) However, all
h-propositions and queries must be ground. Ramifications could also
be specified with variables, provided that they all satisfy an
analogous restriction. However, in the present implementation such
statements need to be ground before they can be handled by the system.
3 Applying the System
3.1 Methodology
The system relies upon the
formulation of problems as domains in the Language ,
and a simple and straightforward translation of these domains into their
logic-programming-based counterparts, which are directly manipulated by the system.
At the time of writing this report,
the translation needs to be performed by hand by the user.
However, the problem of automating this translation presents no conceptual
difficulties, and is scheduled to be implemented in the near future.
As an illustration, consider
Example 3. Its
translation is:
initiation(running,T) :-
    happens(turnOn,T), holds(petrol,T), true.
termination(running,T) :-
    happens(turnOff,T), true.
termination(petrol,T) :-
    happens(empty,T), true.
ram(neg(holds(running,T))) :-
    neg(holds(petrol,T)).
tprop(holds(running,1)).
happens(turnOff,2).
happens(turnOn,5).
happens(empty,8).
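Once this translation is loaded into SICStus Prolog alongside RES, it can be queried directly using the predicates described in Section 2.3. A hypothetical session (the exact prompt and answer format may differ):

```
| ?- sceptical([neg(holds(running,3))]).
yes
| ?- credulous([holds(running,3)]).
no
```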
3.2 Specifics
The system relies upon a logic-based representation of concrete domains. The system has been developed systematically from its specification given by the model-theoretic semantics, and this guarantees its correctness.
The system performs the kind of reasoning which forms the basis of a number of applications in computer science and artificial intelligence, such as simulation, fault diagnosis, planning and cognitive robotics. We are currently studying extensions of the system that can be used directly to perform planning in domains that are partially unknown
[Kakas, Miller, & Toni2000].
3.3 Users and Usability
The use of RES requires knowledge of the Language E, which (like the Language A [Gelfond & Lifschitz1993]) has been designed as a high-level specification tool, in E’s case for modeling dynamic systems as narratives of action occurrences and observations, where actions can have both direct and indirect effects. As mentioned above, RES is at an early stage of development, but we aim soon to provide a user interface that will allow domain descriptions to be described directly in E’s syntax.
4 Evaluating the System
The RES system is an initial prototype. The prototype has been evaluated in two distinct ways. First, theoretical results have been developed (documented in [Kakas, Miller, & Toni1999]) which verify that the system meets its specification, i.e. that it faithfully captures the entailment relation of the Language E. Second, the system has been evaluated by testing it on a suite of examples that involve different modes of reasoning about actions and change. These examples, which include those given above, provide “proof-of-principle” evidence for the Language E (and the argumentation approach taken in providing a computational counterpart to it) as a suitable framework for reasoning about actions and change.
RES correctly computes all the t-propositions entailed
by Examples 1, 2
and 3.
This involves reasoning
with incomplete information, reasoning from effects to causes,
reasoning backwards and forwards in time, reasoning about alternative
causes of effects, reasoning about indirect effects, and combining
these forms of reasoning. In the remainder of this section we consider in detail
how RES processes a small selection of the queries that can be associated with
these example domains.
Testing with Example 1
Example 1 can be used to test how RES
deals with incomplete information about fluents, and how it is able
to reason with alternatives.
As explained previously, up until time 3 the
truth value of Protected is unknown. In other words, for times less
than or equal to 3, the literals Protected and
¬Protected both hold credulously, but neither holds sceptically. Reflecting
this, for all t ≤ 3, RES succeeds on
credulous([holds(protected,t)])
credulous([neg(holds(protected,t))])
but fails on
sceptical([holds(protected,t)])
sceptical([neg(holds(protected,t))])
After time 3, however, the fluent Protected holds sceptically and so for all t > 3 the system correctly succeeds on
sceptical([holds(protected,t)])
and fails on
sceptical([neg(holds(protected,t))]).
Testing with Example 2
Example 2 can be used to test how RES
can reason from effects to causes, and how it is able
to reason both forwards and backwards in time.
The observed value of Picture at time 3 is explained by the fact that
at time 2, when a Take action occurred, Loaded held (there is no
alternative way to explain this in the given domain).
Hence the system reasons both backwards and forwards in time so that
sceptical([holds(loaded,2)])
succeeds. Furthermore, by persistence,
sceptical([holds(loaded,t)])
also succeeds for any time t.
Note however that if we remove (3) from the domain, then Loaded holds only credulously at any time t, and hence the system fails on
sceptical([holds(loaded,t)])
but succeeds on
credulous([holds(loaded,t)]).
If the domain description is augmented with the
c-proposition
(5)  Take initiates Picture when {Digital}
then Loaded no longer holds sceptically at any time, since there is now an alternative assumption to explain the observation given by (4), namely that Digital holds at 2. Thus, for any time t the system fails on
sceptical([holds(loaded,t)])
sceptical([holds(digital,t)])
but succeeds on
credulous([holds(loaded,t)])
credulous([holds(digital,t)]).
If we pose the query
credulous([holds(picture,3)],X).
then we will get the two explanations for this observation in terms
of the possible generation rules for Picture and assumptions on the
corresponding fluents in their preconditions. These
will be given by the system as:
X = [rule(gen,picture,3,2),rule(ass,loaded,2)],
X = [rule(gen,picture,3,2),rule(ass,digital,2)].
Testing with Example 3
Example 3 can be used to test how RES can
reason with ramification statements (rpropositions) and how it can
reason with a series of action occurrences where later occurrences
override the effects of earlier ones.
As required, RES succeeds on each of the following queries:
sceptical([holds(running,0)])
sceptical([neg(holds(running,3))])
sceptical([holds(running,6)])
sceptical([neg(holds(running,10))])
sceptical([holds(petrol,0)])
sceptical([holds(petrol,3)])
sceptical([holds(petrol,6)])
sceptical([neg(holds(petrol,10))])
and fails on each converse query.
A more complex example (a variation of Example 1) that reasons with alternatives from observations and ramifications is given as follows. Consider the domain description given by:
(1)  
(2)  
(3)  
(4)  
(5)  
(6)  
(7)  
(8)  
(9)  
(10) 
From the observation at time 4 the system is able to reason (both backwards and forwards in time) to prove that under either of the two possible alternatives Danger holds after time 4. Thus, for any time t after 4 the system succeeds on
sceptical([holds(danger,t)]).
Similarly, the system succeeds on
sceptical([holds(allergic,t)]),
for any time t after 3.
5 Conclusions and Future Work
We have described RES, a Language E based system for reasoning about narratives involving actions, change, observations and indirect effects via ramifications. The functionality of RES has been demonstrated both via theoretical results and by testing with benchmark problems. We have shown that the system is versatile enough to handle a variety of reasoning tasks in simple domains. In particular, RES can correctly reason with incomplete information, both forwards and backwards in time, from causes to effects and from effects to causes, and about the indirect effects of action occurrences.
The system still needs to be tested with very large problems, and possibly developed further to cope with the challenges that these pose. In particular, the current handling of t-propositions and ramification statements will probably be unsatisfactory for very large domains, and techniques will need to be devised to select and reason with only the t- and r-propositions that are relevant to the goal being asked.
Work is currently underway to extend the RES system so that it can carry out planning. The implementation will correspond to the Planner described in [Kakas, Miller, & Toni2000]. In our setting, planning amounts to finding a suitable set of h-propositions which, when added to the given domain description, allow the entailment of a desired goal. The Planner is especially suitable for planning under incomplete information, e.g. when we do not have full knowledge of the initial state of the problem, and the missing information cannot be “filled in” by performing additional actions (either because no actions exist which can affect the missing information, or because there is no time to perform such actions). The planner needs to be able to reason correctly despite this incompleteness, and construct plans (when such plans exist) in cases where missing information is not necessary for achieving the desired goal. For instance, in Example 1, if the h-propositions (3) and (4) are missing, and the goal to achieve is Protected holds-at 4, then the Planner generates InjectA happens-at t1 and InjectB happens-at t2, with t1, t2 < 4.
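The planning-as-abduction idea just described can be sketched generically (illustrative Python only; the actual Planner uses the argumentation proof theory rather than brute-force search, and all names here are ours):

```python
from itertools import chain, combinations

def plans(candidate_events, domain, goal, entails_sceptically):
    """Yield the sets of candidate h-propositions whose addition to
    the domain makes the goal a sceptical consequence, smallest first."""
    subsets = chain.from_iterable(
        combinations(candidate_events, r)
        for r in range(len(candidate_events) + 1))
    for events in subsets:
        if entails_sceptically(domain | set(events), goal):
            yield set(events)
```

With a toy entailment check that requires both injection events of Example 1 to be present, the first plan yielded is the pair of injections, mirroring the behaviour described above.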
5.1 Obtaining the System
Both RES and codings of example test domains are available from the Language and RES website at http://www.ucl.ac.uk/~uczcrsm/LanguageE/.
References
 [Eshghi & Kowalski1989] Eshghi, K., and Kowalski, R. 1989. Abduction compared with negation as failure. In ICLP’89, MIT Press.
 [Gelfond & Lifschitz1993] Gelfond, M., and Lifschitz, V. 1993. Representing action and change by logic programs. In JLP, 17 (2,3,4) 301–322.
 [Kakas & Mancarella1990] Kakas, A., and Mancarella, P. 1990. On the relation between truth maintenance and abduction. In Proceedings of the 2nd Pacific Rim International Conference on Artificial Intelligence.
 [Kakas & Miller1997a] Kakas, A., and Miller, R. 1997a. Reasoning about actions, narratives and ramifications. In J. of Electronic Transactions on A.I. 1(4), Linkoping University E. Press, http://www.ep.liu.se/ea/cis/1997/012/.
 [Kakas & Miller1997b] Kakas, A., and Miller, R. 1997b. A simple declarative language for describing narratives with actions. In JLP 31(1–3), 157–200.
 [Kakas & Toni1999] Kakas, A., and Toni, F. 1999. Computing argumentation in logic programming. In JLC 9(4), 515–562, O.U.P.
 [Kakas, Miller, & Toni1999] Kakas, A.; Miller, R.; and Toni, F. 1999. An argumentation framework for reasoning about actions and change. In LPNMR’99, 78–91, Springer Verlag.
 [Kakas, Miller, & Toni2000] Kakas, A.; Miller, R.; and Toni, F. 2000. Planning with incomplete information. In NMR’00, Session on Representing Actions and Planning.