E-RES: A System for Reasoning about Actions, Events and Observations

E-RES is a system that implements the Language E, a logic for reasoning about narratives of action occurrences and observations. E's semantics is model-theoretic, but this implementation is based on a sound and complete reformulation of E in terms of argumentation, and uses general computational techniques of argumentation frameworks. The system derives sceptical non-monotonic consequences of a given reformulated theory which exactly correspond to consequences entailed by E's model-theory. The computation relies on a complementary ability of the system to derive credulous non-monotonic consequences together with a set of supporting assumptions which is sufficient for the (credulous) conclusion to hold. E-RES allows theories to contain general action laws, statements about action occurrences, observations and statements of ramifications (or universal laws). It is able to derive consequences both forward and backward in time. This paper gives a short overview of the theoretical basis of E-RES and illustrates its use on a variety of examples. Currently, E-RES is being extended so that the system can be used for planning.


1 General Information

E-RES is a system for modeling and reasoning about dynamic systems. Specifically, it implements the Language E [Kakas & Miller1997b, Kakas & Miller1997a], a specialist logic for reasoning about narratives of action occurrences and observations. E-RES is implemented in SICStus Prolog and runs on any platform for which SICStus is supported (e.g. Windows, Linux, UNIX, Mac). The program is about 300 lines long (a URL is given at the end of the paper). The semantics of E is model-theoretic, but this implementation is based on a sound and complete reformulation of E in terms of argumentation, and uses general computational techniques of argumentation frameworks. To describe the operation and utility of E-RES, it is necessary to first review the Language E.

1.1 The Language E

Like many logics, the Language E is really a collection of languages, since the particular vocabulary employed depends on the domain being modeled. The domain-dependent vocabulary always consists of a set of fluent constants, a set of action constants, and a partially ordered set of time-points. A fluent literal is either a fluent constant F or its negation ¬F. In the current implementation of E-RES the only time structure that is supported is that of the natural numbers, so we restrict our attention here to domains of this type, using the standard ordering relation ≤ in all examples.

Domain descriptions in the Language E are collections of four kinds of statements (where A is an action constant, T is a time-point, F is a fluent constant, L is a fluent literal and C is a set of fluent literals):

  • t-propositions (“t” for “time-point”), of the form

    L holds-at T

  • h-propositions (“h” for “happens”), of the form

    A happens-at T

  • c-propositions (“c” for “causes”), either of the form

    A initiates F when C

    or of the form

    A terminates F when C

  • r-propositions (“r” for “ramification”), of the form

    L whenever C.
The precise semantics of E is described in [Kakas & Miller1997b] and [Kakas & Miller1997a]. T-propositions are used to record observations that particular fluents hold or do not hold at particular time-points, and h-propositions are used to state that particular actions occur at particular time-points. C-propositions state general “action laws” – the intended meaning of “A initiates F when C” is “C is a minimally sufficient set of conditions for an occurrence of A to have an initiating effect on F”. (When C is empty the proposition is stated simply as “A initiates F”.) R-propositions serve a dual role in that they describe both static constraints between fluents and ways in which fluents may be indirectly affected by action occurrences. The intended meaning of “L whenever C” is “at every time-point at which C holds, L also holds, and hence every action occurrence that brings about C also brings about L”.

E’s semantics is perhaps best understood by examples, and so several are given in the next sub-section. The key features of the semantics are as follows.

  • Models are simply mappings of fluent/time-point pairs to {true, false} which satisfy various properties relating to the propositions in the domain.

  • The semantics describes entailment (written ⊨) of extra t-propositions (but not h-, c- or r-propositions) from domain descriptions.

  • Entailment is monotonic as regards addition of t-propositions to domain descriptions, but non-monotonic (in order to eliminate the frame problem) as regards addition of h-, c- and r-propositions. The semantics encapsulates the assumptions that (i) no actions occur other than those explicitly represented by h-propositions, (ii) actions have no direct effects other than those explicitly described by c-propositions, and (iii) actions have no indirect effects other than those that can be explained by “chains” of r-propositions in the domain description. (Technically, these “chains” are defined using the notion of a least fixed point.)

  • The semantics ensures that fluents have a default persistence. In each model, fluents change truth values only at time-points (called initiation points and termination points) where an h-proposition and a c-proposition (whose preconditions are satisfied in the model) combine to cause a change, or where an h-proposition, a c-proposition and a “chain” of r-propositions all combine to give an indirect or knock-on effect. All effects (direct and indirect) of an action occurrence are instantaneous, i.e. all changes are apparent immediately after the occurrence.

  • As well as indicating how the effects of action occurrences instantaneously propagate, r-propositions place constraints on which combinations of t-propositions referring to the same time-point are allowable. In this latter respect they behave as ordinary classical implications.

1.2 Example Language E Domain Descriptions

Each of the following domain descriptions illustrates how E supports particular modes of reasoning about the effects of actions. These domain descriptions are used in subsequent sections of the paper to illustrate the functionality of the E-RES system.

Example 1

(Vaccinations)
This example concerns vaccinations against a particular disease. Vaccine A only provides protection for people with blood type O, and vaccine B only works on people with blood type other than O. Fred’s blood type is not known, so he is injected with vaccine A at 2 o’clock and vaccine B at 3 o’clock. To describe this scenario we need a vocabulary of two action constants InjectA and InjectB, and two fluent constants Protected and TypeO. The domain description consists of two c-propositions and two h-propositions:

InjectA initiates Protected when {TypeO} (1)
InjectB initiates Protected when {¬TypeO} (2)
InjectA happens-at 2 (3)
InjectB happens-at 3 (4)

If we now consider some time later than 3 o’clock, say 6 o’clock, we can see intuitively that Fred should be protected, and indeed it is the case that the domain description entails the t-proposition

Protected holds-at 6.

This is because there are two classes of models for this domain. In models of the first type, TypeO holds for all time-points, so that (1) and (3) combine to form an initiation point for Protected at 2. In models of the second type, TypeO does not hold for any time-point, and so (2) and (4) combine to form an initiation point for Protected at 3. In either type of model, Protected then persists from its initiation point up to time-point 6, since the fluent has no intervening termination points to override its initiation. Note, however, that there are no default assumptions directly attached to t-propositions, so that for any time t it is neither the case that the domain description entails TypeO holds-at t nor the case that it entails ¬TypeO holds-at t.

Example 2

(Photographs)
This example shows that the Language can be used to infer information about what conditions hold at the time of an action occurrence, given other information about what held at times before and afterwards. It concerns taking a photograph. There is a single action Take, and two fluents Picture (representing that a photograph has been successfully taken) and Loaded (representing that the camera is loaded with film). Suppose that the domain description consists of a single c-proposition, a single h-proposition and two t-propositions:

Take initiates Picture when {Loaded} (1)
Take happens-at 2 (2)
¬Picture holds-at 1 (3)
Picture holds-at 3 (4)

Since a change occurs in the truth value of Picture between times 1 and 3, in all models an action must occur at some time-point between 1 and 3 whose initiating conditions for the property Picture are satisfied at that point. The only candidate is the Take occurrence at 2, whose condition for initiating Picture is Loaded. Hence the domain description entails

Loaded holds-at 2.

Indeed, by the persistence of Loaded (in the absence of possible initiation or termination points for this fluent), the domain description entails Loaded holds-at t for any time t.

Example 3

(Cars)
This example illustrates the use of r-propositions, and shows how the effects of later action occurrences override the effects of earlier action occurrences. It concerns a car engine. The fluent Running represents that the engine is running, the fluent Petrol represents that there is petrol (gas) in the tank, the action TurnOn represents the action of turning on the engine, the action TurnOff represents the action of turning off the engine, and the action Empty represents the event of the tank becoming empty (or the action of someone emptying the tank). We describe a narrative where the engine is initially running, is turned off at time 2, is turned back on at time 5, and runs out of petrol at time 8. We also want to state the general constraint that the engine cannot run without petrol. The domain description consists of:

TurnOn initiates Running when {Petrol} (1)
TurnOff terminates Running (2)
Empty terminates Petrol (3)
¬Running whenever {¬Petrol} (4)
Running holds-at 1 (5)
TurnOff happens-at 2 (6)
TurnOn happens-at 5 (7)
Empty happens-at 8 (8)

The Language E supports the following conclusions concerning the fluents Running and Petrol:
(i) For t ≤ 2, the domain description entails Running holds-at t. This is because of (5), and because there are no relevant action occurrences before time t to override Running’s default persistence.
(ii) For t ≤ 8, the domain description entails Petrol holds-at t. This is because we obtain Petrol holds-at 1 directly from (4) and (5), and because there are no relevant action occurrences before time t to override Petrol’s default persistence. Note that in this case (4) has been used (in the contrapositive) in its capacity as a static constraint at time 1.
(iii) For 2 < t ≤ 5, the domain description entails ¬Running holds-at t. This is because (2) and (6) combine to form a termination point for Running at 2 (in all models).
(iv) For 5 < t ≤ 8, the domain description entails Running holds-at t. This is because (1) and (7) combine to form an initiation point for Running at 5, and this overrides the earlier termination point (for all times greater than 5).
(v) For t > 8, the domain description entails ¬Petrol holds-at t. This is because (3) and (8) combine to form a termination point for Petrol at 8.
(vi) For t > 8, the domain description entails ¬Running holds-at t. This is because (3), (4) and (8) combine to form a termination point for Running at 8 which overrides the earlier initiation point.

Note that E does not allow r-propositions to be used in the contrapositive to generate extra initiation or termination points. For example, if we were to add the two propositions

JumpStart initiates Running (9)
JumpStart happens-at T (10)

to the domain description, where T is some time-point later than 8, we would have inconsistency. The combination of (9) and (10) would give an initiation point for Running at time T, so that at subsequent times Running would be true. However, (v) above shows that for such times Petrol is false, and this contradicts (4) in its capacity as a static constraint. (4) cannot be used in the contrapositive to “fix” this by generating an initiation point for Petrol from the initiation point for Running.

2 Description of the System

The system relies upon a reformulation of the Language E into argumentation as described in [Kakas, Miller, & Toni1999].

2.1 Argumentation Formulation of E

A domain description without t-propositions and without r-propositions is translated into an argumentation program consisting of a monotonic background theory, an argumentation theory (i.e. a set of argument rules), an argument base (a distinguished subset of the argumentation theory), and a priority relation over the (ground instances of the) argument rules. Intuitively, the sentences in the monotonic background theory can be seen as non-defeasible argument rules which must belong to any non-monotonic extension of the theory. These extensions are given by the admissible subsets of the argument base, namely subsets that are both non-self-attacking and (counter)attack any set of argument rules attacking them. Whereas an admissible set can consist only of argument rules in the argument base, attacks against an admissible set are allowed to be subsets of the larger argumentation theory. The exact definition of an attack, which depends on the priority relation and on the derivation of complementary literals, is given in [Kakas, Miller, & Toni1999].

Both the background theory and the argumentation theory use the predicates HappensAt, HoldsAt, Initiation and Termination. The background theory is a set of Horn clauses, corresponding to the h- and c-propositions in the domain description, defining the above predicates except HoldsAt. The argumentation theory is a domain-independent set of generation, persistence and assumption rules for HoldsAt. For example, a generation rule is (roughly) of the form HoldsAt(F, T2) ← Initiation(F, T1), T1 < T2, where F is any fluent and T1, T2 are any two time-points. The priority relation is such that the effects of later events take priority over the effects of earlier ones (see [Kakas, Miller, & Toni1999]). Given the translation, results in [Kakas, Miller, & Toni1999] show that there is a one-to-one correspondence between (i) models of the domain description and maximal admissible sets of arguments of its translation, and (ii) t-propositions entailed by the domain description and sceptical non-monotonic consequences (of the form HoldsAt(F, T) or its negation) of the translation, where a given literal is a sceptical (resp. credulous) non-monotonic consequence of an argumentation program iff it holds with respect to every (resp. some) maximal admissible extension of the program.

This method can be applied directly to conjunctions of literals rather than individual literals. Hence the above techniques can be straightforwardly applied to domains with t-propositions simply by adding all t-propositions in the domain to the conjunction of literals whose entailment we want to check. Similarly, the above techniques can be directly adapted for domains with r-propositions by conjoining to the given literals the conclusions of ramification statements that are “fired”.
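As a rough illustration of the first of these ideas, the goal to be checked could be augmented with all recorded observations before entailment is tested. The following is a minimal sketch only (our own illustration, not code from the E-RES source), assuming that t-propositions are stored as tprop/1 facts in the style of the translation shown later in Section 3.1:

:- use_module(library(lists)).   % provides append/3 in SICStus Prolog

% Conjoin every recorded t-proposition (observation) to the goal before
% testing its entailment, so that the observations constrain the computation.
augment_with_observations(Goal, AugmentedGoal) :-
     findall(Obs, tprop(Obs), Observations),
     append(Goal, Observations, AugmentedGoal).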

2.2 Proof Theory

Given the translation of an E domain description into an argumentation program, a proof theory can be developed directly [Kakas, Miller, & Toni1999], in terms of derivations of trees whose nodes are sets of arguments in the argumentation theory attacking the arguments in their parent nodes. Suppose that we wish to demonstrate that a t-proposition is entailed by the domain description. Let Δ be a (non-self-attacking) set of arguments in the argument base which, together with the background theory, derives the (literal translation of the) t-proposition (such a Δ can easily be built by backward reasoning). Two kinds of derivations are defined:

  • successful derivations, building, from a tree consisting only of the root Δ, a tree whose root is an admissible subset of the argument base containing Δ, and

  • finitely failed derivations, guaranteeing the absence of any admissible set of arguments containing Δ.

Hence the given t-proposition is entailed by the domain description if there exists a successful derivation with initial tree consisting only of the root Δ and, for every set of argument rules Δ′ in the argumentation theory such that Δ′ derives (via the background theory) the complement of the (literal translation of the) t-proposition, every derivation for Δ′ is finitely failed.

2.3 Implementation

The system is an implementation of the proof theory presented in [Kakas, Miller, & Toni1999], but it does not rely explicitly on tree-derivations. Instead, it implicitly manipulates trees via their frontiers, in a way similar to the proof procedure for computing partial stable models of logic programs in [Eshghi & Kowalski1989, Kakas & Mancarella1990]. (See also [Kakas & Toni1999] for a general discussion of this technique.)

E-RES defines the Prolog predicates sceptical/1 and credulous/1. For some given Goal which is a list of literals, with each literal either of the form holds(f,t) or neg(holds(f,t)) (where the Prolog constant symbols f and t represent a ground fluent constant and a ground time-point respectively),

  • if sceptical(Goal) succeeds then each literal in Goal is a sceptical non-monotonic consequence of the domain

  • if sceptical(Goal) finitely fails then some literal in Goal is not a sceptical non-monotonic consequence of the domain

  • if credulous(Goal) succeeds then each literal in Goal is a credulous non-monotonic consequence of the domain

  • if credulous(Goal) finitely fails then some literal in Goal is not a credulous non-monotonic consequence of the domain.

The implementation also defines the Prolog predicate credulous/2. This is such that for some given Goal,

  • if credulous(Goal,X) succeeds then each literal in Goal is a credulous non-monotonic consequence of the domain, and the set of arguments in X provides the corresponding admissible extension of the argumentation program translation of the domain.

Hence credulous(Goal,X) can be used to provide an explanation X for the goal Goal.
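For example, with the (Prolog translation of the) vaccination domain of Example 1 loaded, a SICStus session might look roughly as follows. This is an illustrative sketch only; the exact prompt and answer formatting depends on the Prolog version, but the success/failure behaviour shown matches the queries discussed in Section 4:

| ?- sceptical([holds(protected,6)]).
yes
| ?- sceptical([holds(protected,2)]).
no
| ?- credulous([holds(protected,2)]).
yes
| ?- credulous([neg(holds(protected,2))]).
yes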

Domain descriptions in E may sometimes be described using meta-level quantification, and E-RES can support a restricted form of non-propositional programs in which all c-propositions are “strongly range-restricted”. (We assume the usual convention of universal quantification over the whole proposition.) However, all h-propositions and queries must be ground. Ramifications could also be specified with variables provided that they have a correspondingly restricted general form; however, in the present implementation such statements need to be ground before they can be handled by the system.

3 Applying the System

3.1 Methodology

The system relies upon the formulation of problems as domains in the Language E, and a simple and straightforward translation of these E-domains into their logic-programming based counterparts, which are directly manipulated by the system. At the time of writing this report, the translation needs to be performed by hand by the user. However, automating this translation presents no conceptual difficulties, and the automation is scheduled to be implemented in the near future. As an illustration, consider Example 3. Its translation is:

initiation(running,T):-
     happens(turnOn,T), holds(petrol,T), true.
termination(running,T):-
     happens(turnOff,T), true.
termination(petrol,T):-
     happens(empty,T), true.
ram(neg(holds(running,T))):-
     neg(holds(petrol,T)).
tprop(holds(running,1)).
happens(turnOff,2).
happens(turnOn,5).
happens(empty,8).
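
For comparison, a translation of Example 1 in the same style would look roughly as follows. This is our own rendering under the scheme just illustrated, not a listing taken from the E-RES test suite:

% InjectA initiates Protected when {TypeO}
initiation(protected,T):-
     happens(injectA,T), holds(typeO,T), true.
% InjectB initiates Protected when {¬TypeO}
initiation(protected,T):-
     happens(injectB,T), neg(holds(typeO,T)), true.
happens(injectA,2).
happens(injectB,3).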

3.2 Specifics

The system relies upon a logic-based representation of concrete domains. The system has been developed systematically from its specification given by the model-theoretic semantics, and this guarantees its correctness.

The system performs the kind of reasoning which forms the basis of a number of applications in computer science and artificial intelligence, such as simulation, fault diagnosis, planning and cognitive robotics. We are currently studying extensions of the system that can be used directly to perform planning in domains that are partially unknown [Kakas, Miller, & Toni2000].

3.3 Users and Usability

The use of E-RES requires knowledge of the Language E, which (like the Language A [Gelfond & Lifschitz1993]) has been designed as a high-level specification tool, in E’s case for modeling dynamic systems as narratives of action occurrences and observations, where actions can have both direct and indirect effects. As mentioned above, E-RES is at an early stage of development, but we aim soon to have a user interface that will allow domain descriptions to be described directly in E’s syntax.

4 Evaluating the System

The E-RES system is an initial prototype. The prototype has been evaluated in two distinct ways. First, theoretical results have been developed (documented in [Kakas, Miller, & Toni1999]) which verify that the system meets its specification, i.e. that it faithfully captures the entailment relation of the Language E. Second, the system has been evaluated by testing it on a suite of examples that involve different modes of reasoning about actions and change. These examples, which include those given above, provide “proof-of-principle” evidence for the Language E (and the argumentation approach taken in providing a computational counterpart to it) as a suitable framework for reasoning about actions and change.

E-RES correctly computes all the t-propositions entailed by Examples 1, 2 and 3. This involves reasoning with incomplete information, reasoning from effects to causes, reasoning backwards and forwards in time, reasoning about alternative causes of effects, reasoning about indirect effects, and combining these forms of reasoning. In the remainder of this section we consider in detail how E-RES processes a small selection of the queries that can be associated with these example domains.

Testing with Example 1
Example 1 can be used to test how E-RES deals with incomplete information about fluents, and how it is able to reason with alternatives. As explained previously, up until time 3 the truth value of Protected is unknown. In other words, for times less than or equal to 3, the literals Protected and ¬Protected both hold credulously, but neither holds sceptically. Reflecting this, for all t ≤ 3, E-RES succeeds on

credulous([holds(protected,t)])

credulous([neg(holds(protected,t))])

but fails on

sceptical([holds(protected,t)])

sceptical([neg(holds(protected,t))])

After time 3, however, the fluent Protected holds sceptically and so for all t > 3 the system correctly succeeds on

sceptical([holds(protected,t)])

and fails on

sceptical([neg(holds(protected,t))]).

Testing with Example 2
Example 2 can be used to test how E-RES can reason from effects to causes, and how it is able to reason both forwards and backwards in time. The observed value of Picture at time 3 is explained by the fact that at time 2, when a Take action occurred, Loaded held (there is no alternative way to explain this in the given domain). Hence the system reasons both backwards and forwards in time so that

sceptical([holds(loaded,2)])

succeeds. Furthermore, by persistence,

sceptical([holds(loaded,t)])

also succeeds for any time t.

Note however that if we remove (3) from the domain, then Loaded holds only credulously at any time t, and hence the system fails on

sceptical([holds(loaded,t)])

but succeeds on

credulous([holds(loaded,t)]).

If the domain description of Example 2 is augmented with the c-proposition

Take initiates Picture when {Digital} (5)

then Loaded no longer holds sceptically at any time, since there is now an alternative assumption to explain the observation given by (4), namely that Digital holds at 2. Thus, for any time t the system fails on

sceptical([holds(loaded,t)])

sceptical([holds(digital,t)])

but succeeds on

credulous([holds(loaded,t)])

credulous([holds(digital,t)]).

If we pose the query

credulous([holds(picture,3)],X).

then we will get the two explanations for this observation in terms of the possible generation rules for Picture and assumptions on the corresponding fluents in their preconditions. These will be given by the system as:
X = [rule(gen,picture,3,2),rule(ass,loaded,2)],
X = [rule(gen,picture,3,2),rule(ass,digital,2)].
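
Under the translation scheme of Section 3.1, the additional c-proposition (5) would be encoded along the following lines (again our own hedged rendering rather than a listing from the test suite):

% Take initiates Picture when {Digital}
initiation(picture,T):-
     happens(take,T), holds(digital,T), true.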

Testing with Example 3
Example 3 can be used to test how E-RES can reason with ramification statements (r-propositions) and how it can reason with a series of action occurrences where later occurrences override the effects of earlier ones. As required, E-RES succeeds on each of the following queries:

sceptical([holds(running,0)])

sceptical([neg(holds(running,3))])

sceptical([holds(running,6)])

sceptical([neg(holds(running,10))])

sceptical([holds(petrol,0)])

sceptical([holds(petrol,3)])

sceptical([holds(petrol,6)])

sceptical([neg(holds(petrol,10))])

and fails on each converse query.

A more complex example (a variation of Example 1) that reasons with alternatives from observations and ramifications is given by the following domain description:


(1)
(2)
(3)
(4)
(5)
(6)
(7)
(8)
(9)
(10)

From the observation in this domain the system is able to reason (both backwards and forwards in time) to prove that under either of the two possible alternatives Danger holds after time 4. Thus, for any time t after 4 the system succeeds on

sceptical([holds(danger,t)]).

Similarly, the system succeeds on

sceptical([holds(allergic,t)]),

for any time t after 3.

5 Conclusions and Future Work

We have described E-RES, a Language E based system for reasoning about narratives involving actions, change, observations and indirect effects via ramifications. The functionality of E-RES has been demonstrated both via theoretical results and by testing with benchmark problems. We have shown that the system is versatile enough to handle a variety of reasoning tasks in simple domains. In particular, E-RES can correctly reason with incomplete information, both forwards and backwards in time, from causes to effects and from effects to causes, and about the indirect effects of action occurrences.

The system still needs to be tested with very large problems, and possibly developed further to cope with the challenges that these pose. In particular, the current handling of t-propositions and ramification statements will probably be unsatisfactory for very large domains, and techniques will need to be devised to select and reason with only the t- and r-propositions that are relevant to the goal being asked.

Work is currently underway to extend the E-RES system so that it can carry out planning. The implementation will correspond to the E-Planner described in [Kakas, Miller, & Toni2000]. In our setting, planning amounts to finding a suitable set of h-propositions which, when added to the given domain description, allow the entailment of a desired goal. The E-Planner is especially suitable for planning under incomplete information, e.g. when we do not have full knowledge of the initial state of the problem, and the missing information cannot be “filled in” by performing additional actions (either because no actions exist which can affect the missing information, or because there is no time to perform such actions). The planner needs to be able to reason correctly despite this incompleteness, and construct plans (when such plans exist) in cases where the missing information is not necessary for achieving the desired goal. For instance, in Example 1, if the h-propositions (3) and (4) are missing, and the goal to achieve is Protected holds-at 4, then the E-Planner generates occurrences of both InjectA and InjectB, each constrained to happen at a time earlier than 4.

5.1 Obtaining the System

Both E-RES and codings of example test domains are available from the Language E and E-RES website at http://www.ucl.ac.uk/~uczcrsm/LanguageE/.

References

  • [Eshghi & Kowalski1989] Eshghi, K., and Kowalski, R. 1989. Abduction compared with negation as failure. In ICLP’89, MIT Press.
  • [Gelfond & Lifschitz1993] Gelfond, M., and Lifschitz, V. 1993. Representing action and change by logic programs. In JLP, 17 (2,3,4) 301–322.
  • [Kakas & Mancarella1990] Kakas, A., and Mancarella, P. 1990. On the relation between truth maintenance and abduction. In Proceedings of the 2nd Pacific Rim International Conference on Artificial Intelligence.
  • [Kakas & Miller1997a] Kakas, A., and Miller, R. 1997a. Reasoning about actions, narratives and ramifications. In J. of Electronic Transactions on A.I. 1(4), Linkoping University E. Press, http://www.ep.liu.se/ea/cis/1997/012/.
  • [Kakas & Miller1997b] Kakas, A., and Miller, R. 1997b. A simple declarative language for describing narratives with actions. In JLP 31(1–3), 157–200.
  • [Kakas & Toni1999] Kakas, A., and Toni, F. 1999. Computing argumentation in logic programming. In JLC 9(4), 515–562, O.U.P.
  • [Kakas, Miller, & Toni1999] Kakas, A.; Miller, R.; and Toni, F. 1999. An argumentation framework for reasoning about actions and change. In LPNMR’99, 78–91, Springer Verlag.
  • [Kakas, Miller, & Toni2000] Kakas, A.; Miller, R.; and Toni, F. 2000. Planning with incomplete information. In NMR’00, Session on Representing Actions and Planning.