Reasoning about Actions with Temporal Answer Sets

Laura Giordano, et al. · October 17, 2011

In this paper we combine Answer Set Programming (ASP) with Dynamic Linear Time Temporal Logic (DLTL) to define a temporal logic programming language for reasoning about complex actions and infinite computations. DLTL extends propositional temporal logic of linear time with regular programs of propositional dynamic logic, which are used for indexing temporal modalities. The action language allows general DLTL formulas to be included in domain descriptions to constrain the space of possible extensions. We introduce a notion of Temporal Answer Set for domain descriptions, based on the usual notion of Answer Set. Also, we provide a translation of domain descriptions into standard ASP and we use Bounded Model Checking techniques for the verification of DLTL constraints.


1 Introduction

Temporal logic is one of the main tools used in the verification of dynamic systems. In the last decades, temporal logic has been widely used also in AI in the context of planning, diagnosis, web service verification, agent interaction and, in general, in most of those areas having to do with some form of reasoning about actions.

The need for temporally extended goals in the context of planning was first motivated in [Bacchus and Kabanza (1998), Kabanza et al. (1997), Giunchiglia and Traverso (1999)]. In particular, [Giunchiglia and Traverso (1999)] developed the idea of planning as model checking in a temporal logic, where the properties of planning domains are formalized as temporal formulas in CTL. In general, temporal formulas can be usefully exploited both in the specification of a domain and in the verification of its properties. This has been done, for instance, for modeling the interaction of services on the web [Pistore et al. (2005)], as well as for the specification and verification of agent communication protocols [Giordano et al. (2007)]. Recently, Claßen and Lakemeyer [Claßen and Lakemeyer (2008)] have introduced a second-order extension of the temporal logic CTL* to express and reason about non-terminating Golog programs. The ability to capture infinite computations is important, as agents and robots usually fulfill non-terminating tasks.

In this paper we combine Answer Set Programming (ASP) [Gelfond (2007)] with Dynamic Linear Time Temporal Logic (DLTL) [Henriksen and Thiagarajan (1999)] to define a temporal logic programming language for reasoning about complex actions and infinite computations. DLTL extends propositional temporal logic of linear time with regular programs of propositional dynamic logic, which are used for indexing temporal modalities. Allowing program expressions within temporal formulas and including arbitrary temporal formulas in domain descriptions provides a simple way of constraining the (possibly infinite) evolutions of a system, as in Propositional Dynamic Logic (PDL). To combine ASP and DLTL, we define a temporal extension of ASP by allowing temporal modalities to occur within rules, and we introduce a notion of Temporal Answer Set, which captures the temporal dimension of the language as a linear structure and naturally allows us to deal with infinite computations. A domain description consists of two parts: a set of temporal rules (action laws, causal laws, etc.) and a set of constraints (arbitrary DLTL formulas). The temporal answer sets of the rules in the domain description which also satisfy the constraints are defined to be the extensions of the domain description.

We provide a translation into standard ASP for the temporal rules of the domain description. The temporal answer sets of an action theory can then be computed as the standard answer sets of the translation. To compute the extensions of a domain description, the temporal constraints are evaluated over temporal answer sets using bounded model checking techniques [Biere et al. (2003)]. The approach proposed for the verification of DLTL formulas extends the one developed in [Heljanko and Niemelä (2003)] for bounded LTL model checking with Stable Models.

The outline of the paper is as follows. In Section 2, we recall the temporal logic DLTL. In Section 3, we introduce our action theory in temporal ASP. In Section 4, we define the notions of temporal answer set and extension of a domain description. Section 5 describes the reasoning tasks, while Sections 6 and 7 describe the model checking problem and provide a translation of temporal domain descriptions into ASP. Section 8 is devoted to conclusions and related work.

2 Dynamic Linear Time Temporal Logic

In this section we briefly define the syntax and semantics of DLTL as introduced in [Henriksen and Thiagarajan (1999)]. In such a linear time temporal logic the next state modality is indexed by actions. Moreover (and this is the extension to LTL), the until operator U^π is indexed by a program π, as in PDL. In addition to the usual □ (always) and ◇ (eventually) temporal modalities of LTL, new modalities [π] and ⟨π⟩ are allowed. Informally, a formula [π]α is true in a world w of a linear temporal model (a sequence of propositional interpretations) if α holds in all the worlds of the model which are reachable from w through any execution of the program π. A formula ⟨π⟩α is true in a world w of a linear temporal model if there exists a world of the model, reachable from w through an execution of the program π, in which α holds. The program π can be any regular expression built from atomic actions using sequence (;), nondeterministic choice (+) and finite iteration (*). The usual modalities □, ◇ and ○ (next) of LTL are definable.

Let Σ be a finite non-empty alphabet representing actions. Let Σ* and Σ^ω be the set of finite and infinite words on Σ, and let Σ^∞ = Σ* ∪ Σ^ω. We denote by σ, σ′ the words over Σ^ω and by τ, τ′ the words over Σ*. For u ∈ Σ^∞, we denote by prf(u) the set of finite prefixes of u. Moreover, we denote by ≤ the usual prefix ordering over Σ*; namely, τ ≤ τ′ iff there exists τ″ such that ττ″ = τ′, and τ < τ′ iff τ ≤ τ′ and τ ≠ τ′.

The set Prg(Σ) of programs (regular expressions) generated by Σ is:

    π ::= a | π₁ + π₂ | π₁; π₂ | π*

where a ranges over Σ and π, π₁, π₂ range over Prg(Σ). A set of finite words is associated with each program by the mapping [[·]] : Prg(Σ) → 2^(Σ*), which is defined as follows:

  • [[a]] = {a};

  • [[π₁ + π₂]] = [[π₁]] ∪ [[π₂]];

  • [[π₁; π₂]] = {τ₁τ₂ : τ₁ ∈ [[π₁]] and τ₂ ∈ [[π₂]]};

  • [[π*]] = ⋃_{i≥0} [[πⁱ]], where

    • [[π⁰]] = {ε} and [[πⁱ⁺¹]] = {τ₁τ₂ : τ₁ ∈ [[π]] and τ₂ ∈ [[πⁱ]]}, for every i ≥ 0,

where ε is the empty word (the empty action sequence).
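For instance (an illustration of ours, not an example from the original paper), for actions a, b, c ∈ Σ the mapping yields:

    [[(a + b); c]] = {ac, bc}        [[a*; b]] = {b, ab, aab, aaab, …}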

Let P be a countable set of atomic propositions containing ⊤ and ⊥ (standing for true and false). The set of formulas of DLTL(Σ) is defined as follows:

    DLTL(Σ) ::= p | ¬α | α ∨ β | α U^π β

where p ∈ P, π ∈ Prg(Σ), and α and β range over DLTL(Σ).

A model of DLTL(Σ) is a pair M = (σ, V), where σ ∈ Σ^ω and V : prf(σ) → 2^P is a valuation function. Given a model M = (σ, V), a finite word τ ∈ prf(σ) and a formula α, the satisfiability of the formula α at τ in M, written M, τ ⊨ α, is defined as follows:

  • M, τ ⊨ ⊤;

  • M, τ ⊭ ⊥;

  • M, τ ⊨ p iff p ∈ V(τ), for p ∈ P;

  • M, τ ⊨ ¬α iff M, τ ⊭ α;

  • M, τ ⊨ α ∨ β iff M, τ ⊨ α or M, τ ⊨ β;

  • M, τ ⊨ α U^π β iff there exists τ′ ∈ [[π]] such that ττ′ ∈ prf(σ) and M, ττ′ ⊨ β. Moreover, for every τ″ such that ε ≤ τ″ < τ′, M, ττ″ ⊨ α.

A formula α is satisfiable iff there is a model M = (σ, V) and a finite word τ ∈ prf(σ) such that M, τ ⊨ α. The formula α U^π β is true at τ if “α until β” is true on a finite stretch of behavior which is in the linear time behavior of the program π.

The classical connectives ∧ and → are defined as usual. The derived modalities ⟨π⟩ and [π] can be defined as follows: ⟨π⟩α ≡ ⊤ U^π α and [π]α ≡ ¬⟨π⟩¬α. Furthermore, if we let Σ = {a₁, …, aₙ}, the U (until), ○ (next), ◇ and □ operators of LTL can be defined as follows: ○α ≡ ⋁_{a∈Σ} ⟨a⟩α,  α U β ≡ α U^(Σ*) β,  ◇α ≡ ⊤ U α,  □α ≡ ¬◇¬α,  where, in U^(Σ*), Σ is taken to be a shorthand for the program a₁ + … + aₙ. Hence, LTL(Σ) is a fragment of DLTL(Σ). As shown in [Henriksen and Thiagarajan (1999)], DLTL(Σ) is strictly more expressive than LTL(Σ). In fact, DLTL has the full expressive power of the monadic second order theory of ω-sequences.

3 Action theories in Temporal ASP

Let P be a set of atomic propositions, the fluent names. A simple fluent literal l is a fluent name f or its negation ¬f. Given a fluent literal l, such that l = f or l = ¬f, we define |l| = f. We denote by Lit_S the set of all simple fluent literals and, for each l ∈ Lit_S, we denote by l^c the complementary literal (namely, f^c = ¬f and (¬f)^c = f). Lit_T is the set of temporal fluent literals: if l ∈ Lit_S, then [a]l, ○l ∈ Lit_T (for a ∈ Σ). Let Lit = Lit_S ∪ Lit_T. Given a (temporal) fluent literal l, not l represents the default negation of l. A (temporal) fluent literal, possibly preceded by a default negation, will be called an extended fluent literal.

A state is a set of fluent literals in Lit_S. A state is consistent if it is not the case that both f and ¬f belong to the state, or that ⊥ belongs to the state. A state is complete if, for each fluent name f ∈ P, either f or ¬f belongs to it. The execution of an action in a state may possibly change the values of fluents in the state through its direct and indirect effects, thus giving rise to a new state.

Given a set of actions Σ, a domain description D over Σ is defined as a tuple (Π, C), where Π is a set of laws (action laws, causal laws, precondition laws, etc.) describing the preconditions and effects of actions, and C is a set of DLTL constraints. While Π contains the laws that are usually included in a domain description, which define the executability conditions for actions, their direct and indirect effects as well as conditions on the initial state, C contains general DLTL constraints which must be satisfied by the intended interpretations of the domain description. While the laws in Π define conditions on single states or on pairs of consecutive states, DLTL constraints define more general conditions on possible sequences of states and actions. Let us first describe the laws occurring in Π.

The action laws describe the immediate effects of actions. They have the form:

    □([a] l₀ ← t₁ ∧ … ∧ tₘ ∧ not tₘ₊₁ ∧ … ∧ not tₙ)        (1)

where l₀ is a simple fluent literal, a ∈ Σ, and the tᵢ's are either simple fluent literals or temporal fluent literals of the form [a]l. Its meaning is that executing action a in a state in which the conditions t₁, …, tₘ hold and the conditions tₘ₊₁, …, tₙ do not hold causes the effect l₀ to hold. Observe that a temporal literal [a]l is true in a state when the execution of action a in that state causes l to become true in the next state. For instance, the following action laws describe the deterministic effect of the actions shoot and load for the Russian Turkey problem:

    □([shoot] ¬alive ← loaded)        □([load] loaded)

Non-deterministic actions can be defined using default negation in the body of action laws. In the example, after spinning the gun, it may be loaded or not:

    □([spin] loaded ← not [spin] ¬loaded)        □([spin] ¬loaded ← not [spin] loaded)
Observe that, in this case, temporal fluent literals occur in the body of action laws.
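To give a concrete feel for how such laws look once time is made explicit in standard ASP, the following clingo-style sketch encodes the shoot, load and spin laws above over a bounded number of steps. It is a hand-written illustration under our own naming conventions (holds/2, occurs/2, the bound k), not the translation defined later in the paper.

    #const k = 5.
    time(0..k).
    fluent(alive). fluent(loaded).
    action(load). action(shoot). action(spin). action(wait).

    % exactly one action is executed at each step
    1 { occurs(A,T) : action(A) } 1 :- time(T), T < k.

    % deterministic effects: shooting a loaded gun kills the turkey, loading loads it
    -holds(alive,T+1) :- occurs(shoot,T), holds(loaded,T), T < k.
    holds(loaded,T+1) :- occurs(load,T), T < k.

    % nondeterministic effect of spin, via default negation:
    % after spinning, the gun may be loaded or not
    holds(loaded,T+1)  :- occurs(spin,T), not -holds(loaded,T+1), T < k.
    -holds(loaded,T+1) :- occurs(spin,T), not holds(loaded,T+1), T < k.

    % a possible initial state (our assumption)
    holds(alive,0). -holds(loaded,0).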

Causal laws are intended to express “causal” dependencies among fluents. In Π we allow two kinds of causal laws. Static causal laws have the form:

    □(l₀ ← l₁ ∧ … ∧ lₘ ∧ not lₘ₊₁ ∧ … ∧ not lₙ)        (2)

where the lᵢ's are simple fluent literals. Their meaning is: if l₁, …, lₘ hold in a state and lₘ₊₁, …, lₙ do not hold in that state, then l₀ is caused to hold in that state.

Dynamic causal laws have the form:

    □(○l₀ ← t₁ ∧ … ∧ tₘ ∧ not tₘ₊₁ ∧ … ∧ not tₙ)        (3)

where l₀ is a simple fluent literal and the tᵢ's are either simple fluent literals or temporal fluent literals of the form ○l. Their meaning is: if t₁, …, tₘ hold and tₘ₊₁, …, tₙ do not hold in a state, then l₀ is caused to hold in the next state. Observe that ○l holds in a state when l holds in the next state.

For instance, the static causal law □(frightened ← in_sight ∧ alive) states that the turkey being in sight of the hunter causes it to be frightened, if it is alive; alternatively, the dynamic causal law □(○frightened ← alive ∧ ¬in_sight ∧ ○in_sight) states that if the turkey is alive, it becomes frightened (if it is not already) when it starts seeing the hunter; but it can possibly become non-frightened later, due to other events, while still being in sight of the hunter. (Shorthands like those in [Denecker et al. (1998)] could be used, even though we do not introduce them in this paper, to express that a fluent or a complex formula is initiated, i.e., it is false in the current state and caused true in the next one.)
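Under the same illustrative time-step encoding used above (again our own sketch, not the paper's translation), a static causal law constrains the state it fires in, while a dynamic causal law constrains the next one:

    fluent(in_sight). fluent(frightened).

    % static causal law: in sight and alive cause frightened (same state)
    holds(frightened,T) :- holds(in_sight,T), holds(alive,T), time(T).

    % dynamic causal law: becoming in sight while alive causes frightened (next state)
    holds(frightened,T+1) :- holds(alive,T), -holds(in_sight,T), holds(in_sight,T+1), T < k.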

Besides action laws and causal laws, which apply to all states, we also allow for laws in Π that only apply to the initial state. They are called initial state laws and have the form:

    l₀ ← l₁ ∧ … ∧ lₘ ∧ not lₘ₊₁ ∧ … ∧ not lₙ        (4)

where the lᵢ's are simple fluent literals. Observe that initial state laws, unlike static causal laws, only apply to the initial state, as they are not prefixed by the □ modality. As a special case, the initial state can be defined as a set of simple fluent literals. For instance, an initial state in which the turkey is alive and the gun is not loaded is defined by the initial state laws:

    alive ←        ¬loaded ←

Given the laws introduced above, all the usual ingredients of action theories can be introduced in Π. In particular, let us consider the case when ⊥ can occur as a literal in the head of those laws.

Precondition laws are special kinds of action laws (1) with ⊥ as effect. They have the form:

    □([a] ⊥ ← l₁ ∧ … ∧ lₘ ∧ not lₘ₊₁ ∧ … ∧ not lₙ)

where a ∈ Σ and the lᵢ's are simple fluent literals. The meaning is that the execution of action a is not possible in a state in which l₁, …, lₘ hold and lₘ₊₁, …, lₙ do not hold (that is, no state may result from the execution of a in such a state).

State constraints that apply to the initial state or to all states can be obtained when ⊥ occurs in the head of initial state laws (4) or static causal laws (2):

    ⊥ ← l₁ ∧ … ∧ lₘ ∧ not lₘ₊₁ ∧ … ∧ not lₙ        □(⊥ ← l₁ ∧ … ∧ lₘ ∧ not lₘ₊₁ ∧ … ∧ not lₙ)

The first one means that it is not the case that, in the initial state, l₁, …, lₘ hold and lₘ₊₁, …, lₙ do not hold. The second one means that there is no state in which l₁, …, lₘ hold and lₘ₊₁, …, lₙ do not hold.

As in [Lifschitz (1990)] we call frame fluents those fluents to which the law of inertia applies. The persistency of frame fluents from a state to the next one can be enforced by introducing in Π a set of laws, called persistency laws,

    □(○f ← f ∧ not ○¬f)        □(○¬f ← ¬f ∧ not ○f)

for each simple fluent f to which inertia applies. Their meaning is that, if f holds in a state, then f will hold in the next state, unless its complement ¬f is caused to hold; and similarly for ¬f. Note that persistency laws are instances of dynamic causal laws (3). In the following, we use inertial f as a shorthand for the persistency laws for f.

The persistency of a fluent from a state to the next one is blocked by the execution of an action which causes the value of the fluent to change, or by a nondeterministic action which may cause it to change. For instance, the persistency of ¬loaded is blocked by load and by spin.
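In the clingo-style sketch, the persistency laws become the familiar ASP inertia rules; they are defeasible precisely so that effect laws (including the nondeterministic spin laws above) can block them:

    % inertia: a frame fluent keeps its value unless the complementary value is caused
    holds(F,T+1)  :- holds(F,T),  fluent(F), T < k, not -holds(F,T+1).
    -holds(F,T+1) :- -holds(F,T), fluent(F), T < k, not holds(F,T+1).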

Examples of non-inertial fluents, for which persistency laws are not included, are those taking a default truth value, as for a spring door which is normally closed:

    □(○closed ← not ○¬closed)

or those which always change, at least by default, e.g., in the case of a pendulum (see [Giunchiglia et al. (2004)]) always switching between the left and right position, which can be described by analogous default laws stating that the position changes at each step unless it is caused to persist.

Such default action laws play a role similar to that of inertia rules in [Giunchiglia and Lifschitz (1998)], [Giunchiglia et al. (2004)] and [Eiter et al. (2004)].

Initial state laws may incompletely specify the initial state. In this paper we want to reason about complete states, so that the execution of an infinite sequence of actions gives rise to a linear model as defined in Section 2. For this reason, we assume that, for each fluent f, Π contains the laws:

    f ← not ¬f        ¬f ← not f

As we will see later, this assumption in general is not sufficient to guarantee that all the states are complete.

Test actions, useful for checking the value of a fluent in a state in the definition of complex actions, can be defined through suitable laws as follows. Given a simple fluent literal l, the test action l? is executable only if l holds, and it has no effect on any fluent; its executability is captured by the precondition law:

    □([l?] ⊥ ← l^c)
The second component C of a domain description is the set of DLTL constraints, which allow very general temporal conditions to be imposed on the executions of the domain description (we will call them extensions). Their effect is that of restricting the space of the possible executions. For instance, the constraint:

    ¬loaded U in_sight

states that the gun is not loaded until the turkey is in sight. Its addition filters out all the executions in which the gun is loaded before the turkey is in sight.
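On the bounded clingo sketch, a constraint of this kind can be approximated by an integrity constraint. The following is our own simplified rendering of ¬loaded U in_sight; it only checks the finite prefix and ignores the machinery that genuine infinite runs require in bounded model checking:

    % seen(T): in_sight has held at some step T' <= T
    seen(T)   :- holds(in_sight,T), time(T).
    seen(T+1) :- seen(T), T < k.

    % the gun must not be loaded strictly before in_sight has first held
    :- holds(loaded,T), not seen(T), time(T).

    % (the eventuality part, "in_sight must eventually hold", would additionally
    %  need ":- not seen(k)." together with a proper treatment of loops)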

A temporal constraint can also require a complex behavior to be performed. The program

    (¬in_sight?; wait)*; in_sight?; load; shoot        (5)

describes the behavior of the hunter who waits for a turkey until it appears and, when it is in sight, loads the gun and shoots. Actions in_sight? and ¬in_sight? are test actions, as introduced before. If the constraint

    ⟨(¬in_sight?; wait)*; in_sight?; load; shoot⟩ ⊤

is included in C, then all the runs of the domain description which do not start with an execution of the given program will be filtered out. For instance, an extension in which in the initial state the turkey is not in sight and the hunter loads the gun and shoots is not allowed. In general, the inclusion of a constraint ⟨π⟩⊤ in C requires that there is an execution of the program π starting from the initial state.

Example 1

Let us consider a variant of the Yale shooting problem including some of the laws above, and some more stating that: if the hunter is in sight and the turkey is alive, the turkey is frightened; the turkey may come in sight or out of sight (nondeterministically) during waiting.

Let P = {alive, loaded, in_sight, frightened}, and let Σ contain the actions load, shoot and wait, together with the test actions used in program (5). We define a domain description D = (Π, C), where Π contains the following laws:

Immediate effects:

Causal laws:            

Initial state laws:                      

Precondition laws:    

All fluents in P are inertial: inertial alive, inertial loaded, inertial in_sight, inertial frightened.

Given this domain description, we may want to ask if it is possible for the hunter to execute a behavior described by the program π in (5) so that the turkey is not alive after that execution. The intended answer to the query would be yes, since there is a possible scenario in which this can happen.

Example 2

In order to see that the action theory in this paper is well suited to deal with infinite executions, consider a mail delivery agent, which repeatedly checks if there is mail in the mailboxes of a and b and then delivers the mail to a or to b, if there is any; otherwise, it waits. Then the agent starts the cycle again. The actions in Σ are: check(a), check(b) (the agent verifies if there is mail in the mailbox of a or of b), deliver(a), deliver(b) (the agent delivers the mail to a or to b), and wait (the agent waits). The fluent names are mail(a) (there is mail in the mailbox of a) and mail(b). The domain description contains the following laws for a:

Immediate effects:

    □([deliver(a)] ¬mail(a))
    □([check(a)] mail(a) ← not [check(a)] ¬mail(a))

Precondition laws:

    □([deliver(a)] ⊥ ← ¬mail(a))
    □([wait] ⊥ ← mail(a))

Their meaning is (in the order) that: after delivering mail to a, there is no mail for a anymore; the action of verifying if there is mail for a may (non-monotonically) cause mail(a) to become true; in case there is no mail for a, deliver(a) is not executable; in case there is mail for a, wait is not executable. The same laws are also introduced for the actions involving b.

All fluents in P are inertial: inertial mail(a), inertial mail(b). Observe that the persistency laws for inertial fluents interact with the immediate effect laws above. The execution of check(a) in a state in which there is no mail for a (¬mail(a)) may either lead to a state in which mail(a) holds (by the second action law) or to a state in which ¬mail(a) holds (by the persistency of ¬mail(a)).
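For concreteness, the laws for a can be rendered in the same clingo-style sketch as before; this is our own encoding (with the assumed action names check(a) and deliver(a)), where precondition laws become integrity constraints on action occurrences:

    person(a). person(b).
    fluent(mail(X))    :- person(X).
    action(check(X))   :- person(X).
    action(deliver(X)) :- person(X).
    action(wait).

    % immediate effects
    -holds(mail(X),T+1) :- occurs(deliver(X),T), person(X), T < k.
    holds(mail(X),T+1)  :- occurs(check(X),T), person(X), T < k,
                           not -holds(mail(X),T+1).

    % precondition laws: forbid non-executable occurrences
    :- occurs(deliver(X),T), -holds(mail(X),T), person(X), time(T).
    :- occurs(wait,T), holds(mail(X),T), person(X), time(T).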

C contains the following constraints:



The first one means that the cycle must start in the initial state. The second one means that, after each start of the cycle, the agent must check the mailbox of a and then that of b, then either deliver the mail to a or to b or wait, and then start the cycle again.

We may want to check that if there is mail for a, the agent will eventually deliver it to a. This property, which can be formalized by a formula of the form □(mail(a) → ◇⟨deliver(a)⟩⊤), does not hold, as there is a possible scenario in which there is mail for a, but the mail is repeatedly delivered to b and never to a. The mail delivery agent we have described is not fair.

Example 3

As an example of modeling a controlled system and its possible faults, we describe an adaptation of the qualitative causal model of the “common rail” diesel injection system from [Panati and Theseider Dupré (2001)] where:

  • Pressurized fuel is stored in a container, the rail, in order to be injected at high pressure into the cylinders. We ignore in the model the output flow through the injectors. Fuel from the tank is input to the rail through a pump.

  • A regulating system, including, in the physical system, a pressure sensor, a pressure regulator and an Electronic Control Unit, controls pressure in the rail; in particular, the pressure regulator, commanded by the ECU based on the measured pressure, outputs fuel back to the tank.

  • The control system repeatedly executes the sense_p (sense pressure) action while the physical system evolves through internal events.

Π contains formulas from the model describing, for instance, the effect of a fault event of the pump, the fact that flows influence the derivative of the pressure in the rail, and the fact that the pressure derivative influences pressure via an internal event. The model of the pressure regulating subsystem includes analogous laws, with the obvious mutual exclusion constraints among fluents. Initially, everything is normal and pressure is steady.

All fluents are inertial. We have the following temporal constraints in C:




The first one models conditions which imply a pressure change. The second one models the fact that a mode switch occurs when the system is operating in normal mode and the measured pressure is low. The third one models the fact that the control system repeatedly executes sense_p, but other actions may occur in between. The fourth one imposes that at most one fault may occur in a run.

Given this specification we can, for instance, check that if pressure is low in one state, it will be normal in the third next one, namely, that the corresponding temporal formula is satisfied in all the possible scenarios admitted by the domain description. That is, the system tolerates a weak fault of the pump (the only fault included in this model). In general, we could, e.g., be interested in proving properties that hold if at most one fault occurs, or at most one fault in a set of “weak” faults occurs.

As we have seen from the examples, our formalism naturally allows us to deal with infinite executions of actions. Such infinite executions define the models over which temporal formulas can be evaluated. In order to deal with cases (e.g., in planning) where we want to reason on finite action sequences, it is easy to see that any finite action sequence can be represented as an infinite one by adding to the domain description an action dummy, together with constraints stating that action dummy is eventually executed and that, from that point on, only the action dummy is executed. In the following, we will restrict our attention to infinite executions, assuming that the dummy action is introduced when needed.

4 Temporal answer sets and extensions for domain descriptions

Given a domain description D = (Π, C), the laws in Π are rules of a general logic program extended with a restricted use of temporal modalities. In order to define the extensions of a domain description, we introduce a notion of temporal answer set, extending the usual notion of answer set [Gelfond (2007)]. The extensions of a domain description will then be defined as the temporal answer sets of Π satisfying the constraints in C.

In the following, for conciseness, we call “simple (temporal) literals” the “simple (temporal) fluent literals”. We call rules the laws in Π, having one of the two forms:

    l₀ ← l₁ ∧ … ∧ lₘ ∧ not lₘ₊₁ ∧ … ∧ not lₙ        (6)

where the lᵢ's are simple literals, and

    □(t₀ ← t₁ ∧ … ∧ tₘ ∧ not tₘ₊₁ ∧ … ∧ not tₙ)        (7)

where the tᵢ's are simple or temporal literals, the first one capturing initial state laws, the second one all the other laws. To define the notion of extension, we also need to introduce rules of the form [a₁; …; aₕ](t₀ ← t₁ ∧ … ∧ tₘ ∧ not tₘ₊₁ ∧ … ∧ not tₙ), where the tᵢ's are simple or temporal literals, which will be used to define the reduct of a program. The modality [a₁; …; aₕ] means that the rule applies in the state obtained after the execution of the actions a₁, …, aₕ. Conveniently, the notion of temporal literal used so far also needs to be extended to include literals of the form [a₁; …; aₕ]l, meaning that l holds after the execution of the action sequence a₁, …, aₕ.

As we have seen, temporal models of DLTL are linear models, consisting of a sequence of actions σ and a valuation function V, associating a propositional evaluation with each state in the sequence (each state being denoted by a prefix of σ). We extend the notion of answer set to capture this linear structure of temporal models, by defining a partial temporal interpretation as a pair (σ, S), where σ ∈ Σ^ω and S is a set of literals of the form [a₁; …; aₕ]l, where a₁ … aₕ is a prefix of σ.

Definition 1

Let σ ∈ Σ^ω. A partial temporal interpretation over σ is a pair (σ, S), where S is a set of temporal literals of the form [a₁; …; aₕ]l, such that a₁ … aₕ is a prefix of σ, and it is not the case that both [a₁; …; aₕ]l and [a₁; …; aₕ]l^c belong to S, or that [a₁; …; aₕ]⊥ belongs to S (namely, S is a consistent set of temporal literals).

A temporal interpretation (σ, S) is said to be total if either [a₁; …; aₕ]f ∈ S or [a₁; …; aₕ]¬f ∈ S, for each prefix a₁ … aₕ of σ and for each fluent name f.

Observe that a partial interpretation (σ, S) provides, for each prefix of σ, a partial evaluation of fluents in the state corresponding to that prefix. The (partial) state obtained by the execution of the actions in the sequence a₁, …, aₕ can be defined as the set {l : [a₁; …; aₕ]l ∈ S}.

Given a partial temporal interpretation (σ, S) and a prefix τ = a₁ … aₕ of σ, we define the satisfiability of a simple, temporal and extended literal t in (σ, S) at τ (written (σ, S), τ ⊨ t) as follows:

 (σ, S), τ ⊨ l  iff  [a₁; …; aₕ]l ∈ S,  for a simple literal l

 (σ, S), τ ⊨ [a]l  iff  [a₁; …; aₕ; a]l ∈ S  or  a₁ … aₕ a is not a prefix of σ

 (σ, S), τ ⊨ ○l  iff  [a₁; …; aₕ; b]l ∈ S,  where a₁ … aₕ b is a prefix of σ

 (σ, S), τ ⊨ not t  iff  (σ, S), τ ⊭ t

The satisfiability of rule bodies in a partial interpretation is defined as usual:

 (σ, S), τ ⊨ t₁ ∧ … ∧ tₘ ∧ not tₘ₊₁ ∧ … ∧ not tₙ  iff  (σ, S), τ ⊨ tᵢ, for i = 1, …, m, and (σ, S), τ ⊨ not tⱼ, for j = m+1, …, n.

A rule l₀ ← l₁ ∧ … ∧ lₘ ∧ not lₘ₊₁ ∧ … ∧ not lₙ of form (6) is satisfied in a partial temporal interpretation (σ, S) if (σ, S), ε ⊨ l₁ ∧ … ∧ not lₙ implies (σ, S), ε ⊨ l₀, where ε is the empty action sequence.

A rule □(t₀ ← t₁ ∧ … ∧ tₘ ∧ not tₘ₊₁ ∧ … ∧ not tₙ) is satisfied in a partial temporal interpretation (σ, S) if, for all prefixes a₁ … aₕ of σ (including the empty one), (σ, S), a₁ … aₕ ⊨ t₁ ∧ … ∧ not tₙ implies (σ, S), a₁ … aₕ ⊨ t₀.

A rule [a₁; …; aₕ](t₀ ← t₁ ∧ … ∧ tₘ ∧ not tₘ₊₁ ∧ … ∧ not tₙ) is satisfied in a partial temporal interpretation (σ, S) if (σ, S), a₁ … aₕ ⊨ t₁ ∧ … ∧ not tₙ implies (σ, S), a₁ … aₕ ⊨ t₀.

We are now ready to define the notion of answer set for a set of rules that does not contain default negation. Let Π be a set of rules over an action alphabet Σ, not containing default negation, and let σ ∈ Σ^ω.

Definition 2

A partial temporal interpretation (σ, S) is a temporal answer set of Π if S is minimal (in the sense of set inclusion) among the S′ such that (σ, S′) is a partial interpretation satisfying the rules in Π.

In order to define the answer sets of a program Π possibly containing default negation, given a partial temporal interpretation (σ, S) over σ ∈ Σ^ω, we define the reduct of Π relative to (σ, S), extending Gelfond and Lifschitz’ transform [Gelfond and Lifschitz (1988)] to compute a different reduct of Π for each prefix of σ.

Definition 3

The reduct, Π^{S,τ}, of Π relative to (σ, S) and to the prefix τ = a₁ … aₕ of σ is the set of all the rules

    [a₁; …; aₕ](t₀ ← t₁ ∧ … ∧ tₘ)

such that □(t₀ ← t₁ ∧ … ∧ tₘ ∧ not tₘ₊₁ ∧ … ∧ not tₙ) is in Π and (σ, S), τ ⊭ tᵢ, for all i = m+1, …, n. The reduct Π^S of Π relative to (σ, S) is the union of the reducts Π^{S,τ} for all prefixes τ of σ.

In essence, given (σ, S), a different reduct Π^{S,τ} is defined for each finite prefix τ of σ, i.e., for each possible state corresponding to a prefix of σ.
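As a small worked example (ours, using the spin laws of Section 3 and the notation reconstructed above), let Π contain only the two rules

    □([spin] loaded ← not [spin] ¬loaded)        □([spin] ¬loaded ← not [spin] loaded)

and let σ = spin σ′ and S = {[spin] loaded}. At the prefix ε we have (σ, S), ε ⊭ [spin] ¬loaded, so the first rule contributes [ε]([spin] loaded ←) to the reduct, while (σ, S), ε ⊨ [spin] loaded, so the second rule contributes nothing. The minimal interpretation satisfying this reduct again contains [spin] loaded, so (σ, S) is, as far as this prefix is concerned, a temporal answer set; symmetrically, S′ = {[spin] ¬loaded} yields a second one, reflecting the nondeterminism of spin.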

Definition 4

A partial temporal interpretation (σ, S) is a temporal answer set of Π if (σ, S) is a temporal answer set of the reduct Π^S.

The definition above is a natural generalization of the usual notion of answer set to programs with temporal rules. Observe that σ has infinitely many prefixes, so that the reduct Π^S is infinite, and so are its answer sets. This is in accordance with the fact that temporal models are infinite.

In the following, we will devote our attention to those domain descriptions D = (Π, C) such that Π has total temporal answer sets. We will call such domain descriptions well-defined domain descriptions. As we will see below, total temporal answer sets can indeed be regarded as temporal models (according to the definition of model in Section 2). Although it is not possible to define general syntactic conditions which guarantee that the temporal answer sets of Π are total, this can be done in some specific cases. It is possible to prove the following:

Proposition 1

Let D = (Π, C) be a domain description over Σ such that all fluents are inertial, and let σ ∈ Σ^ω. Any answer set of Π over σ is a total answer set over σ.

This result is not surprising since, as we have assumed in the previous section, the laws for completing the initial state are implicitly added to Π, so that the initial state is complete. Moreover, it can be shown that (under the condition, stated in Proposition 1, that all fluents are inertial) the execution of an action in a complete state produces (nondeterministically, due to the presence of nondeterministic actions) a new complete state, which is determined only by the action laws, causal laws and persistency laws applied in that state.

In the following, we define the notion of extension of a well-defined domain description D = (Π, C) over Σ in two steps: first, we find the temporal answer sets of Π; second, we filter out all the temporal answer sets which do not satisfy the temporal constraints in C. For the second step, we need to define when a temporal formula is satisfied in a total temporal interpretation (σ, S). Observe that a total answer set (σ, S) can easily be seen as a temporal model, as defined in Section 2. Given a total answer set (σ, S), we define the corresponding temporal model as M = (σ, V_S), where p ∈ V_S(a₁ … aₕ) if and only if [a₁; …; aₕ]p ∈ S, for all atomic propositions p. We say that a total answer set (σ, S) over Σ satisfies a DLTL formula α if M, ε ⊨ α.

Definition 5

An extension of a well-defined domain description D = (Π, C) over Σ is a (total) answer set of Π which satisfies the constraints in C.

Notice that, in general, a domain description may have more than one extension, even for the same action sequence σ: the different extensions of D with the same σ account for the different possible initial states (when the initial state is incompletely specified) as well as for the different possible effects of nondeterministic actions.

Example 4

Assume the dummy action is added to the Russian Turkey domain in Section 3. Given a suitable infinite action sequence σ, ending with an infinite repetition of the dummy action, the domain description has (among others) an extension over σ containing, for each prefix of σ, the temporal literals describing the corresponding state: for instance, literals stating that the gun is loaded after the execution of load, and that the turkey is no longer alive after the subsequent execution of shoot, and so on. This extension satisfies the constraints in the domain description and corresponds to a linear temporal model, as defined in Section 2.

To conclude this section, we would like to point out that, given a domain description D = (Π, C) over Σ such that Π only admits total answer sets, a transition system TS = (W, W₀, Tr) can be associated with D, as follows:

  • W is the set of all the possible consistent and complete states of the domain description;

  • W₀ is the set of all the states in W satisfying the initial state laws in Π;

  • Tr is the set of all triples (s, a, s′) such that s, s′ ∈ W, a ∈ Σ and, for some total answer set (σ, S) of Π and some prefix a₁ … aₕ a of σ: s = {l : [a₁; …; aₕ]l ∈ S} and s′ = {l : [a₁; …; aₕ; a]l ∈ S}.

Intuitively, Tr is the set of transitions between states. A transition labelled a from s to s′ (represented by the triple (s, a, s′)) is present in Tr if there is a (total) answer set of Π in which s is a state and the execution of action a in s leads to the state s′.
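The one-step structure of such a transition system can be explored on the earlier clingo sketch by restricting it to a single step and letting the initial state be any complete state; each answer set then describes one labelled transition. Again, this is only our illustration, not the construction used in the paper.

    % run the earlier sketch with the horizon set to one step, e.g.: clingo sketch.lp -c k=1
    % and replace the fixed initial state by a guess over all complete consistent states
    holds(F,0)  :- fluent(F), not -holds(F,0).
    -holds(F,0) :- fluent(F), not holds(F,0).

    % together with the effect, causal and inertia rules above, each answer set
    % encodes a triple: the state at step 0, the chosen action, and the state at step 1
    #show occurs/2.
    #show holds/2.
    #show -holds/2.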

5 Reasoning tasks

Given a domain description D over Σ and a temporal goal α (a DLTL formula), we are interested in finding the extensions of D satisfying (or falsifying) α. While in the next section we will focus on the use of bounded model checking techniques for answering this question, in this section we show that many reasoning problems, including temporal projection, planning and diagnosis, can be characterized in this way.

Suppose that in Example 1 we want to know if there is a scenario in which the turkey is not alive after a given action sequence a₁, …, aₙ. We can solve this temporal projection problem by finding an extension of the domain description which satisfies the temporal formula

    ⟨a₁; …; aₙ⟩ ¬alive

The extension in Example 4 indeed satisfies a formula of this kind, since ¬alive is true in the linear model associated with the extension after the gun has been loaded and the shoot action has been executed.
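On the bounded clingo sketch, temporal projection amounts to fixing the action sequence and asking whether some answer set reaches the desired state; the sequence below is of our own choosing, since the one used in the paper's example is not reproduced here:

    % fix a concrete action sequence instead of leaving the choice rule free
    occurs(load,0). occurs(shoot,1). occurs(wait,2).

    % accept only scenarios in which the turkey is not alive afterwards;
    % the projection query has answer "yes" iff an answer set exists
    dead_after :- -holds(alive,3).
    :- not dead_after.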

It is well known that a planning problem can be formulated as a satisfiability problem [Giunchiglia and Traverso (1999)]. In the case of complete states and deterministic actions, the problem of finding a plan which makes the turkey not alive and the gun loaded can be stated as the problem of finding an extension of the domain description in which the formula ◇(¬alive ∧ loaded) is satisfied. The extension provides a plan for achieving the goal ¬alive ∧ loaded.
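In the same spirit, a (non-conformant) planning query can be posed on the bounded clingo sketch by keeping the choice rule over actions and constraining the state at the horizon; the plan is then read off the occurs/2 atoms of any answer set (again an illustration under our own naming conventions):

    % the goal must hold at the horizon k
    goal :- -holds(alive,k), holds(loaded,k).
    :- not goal.

    % the occurs/2 atoms of an answer set form the plan
    #show occurs/2.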

With an incomplete initial state, or with nondeterministic actions, the problem of finding a conformant/universal plan which works for all the possible completions of the initial state and for all the possible outcomes of nondeterministic actions cannot simply be solved by checking the satisfiability of the formula above. The computed plan must also be tested to be a conformant plan. On the one hand, it must be verified that the computed plan always achieves the given goal, i.e., that there is no extension of the domain description in which the plan is executed but the goal does not hold afterwards. On the other hand, it must be verified that the plan is executable in all initial states. This can be done, for instance, by adopting techniques similar to those in [Giunchiglia (2000)]. [Eiter et al. (2003)] addresses the problem of conformant planning in the DLV system. [Tu et al. (2011)] develops conformant planners based on a notion of approximation of action theories in the action language of [Baral and Gelfond (2000)].

As concerns diagnosis, consider systems like the one in Example 3. A diagnosis of a fault observation obs is a run from the initial state to a state in which obs holds and which does not contain fault observations in the previous states [Panati and Theseider Dupré (2000)], i.e., an extension satisfying the formula (¬obs₁ ∧ … ∧ ¬obsₙ) U obs, where obs₁, …, obsₙ are all the possible observations of fault. In Example 3 there is a single possible fault observation, hence a diagnosis for it is an extension of the domain description which satisfies the corresponding formula.

As concerns property verification, an example has been given in Example 2. We observe that the verification that a domain description is well-defined can be done by adding to the domain description a static law