Agent Programming with Declarative Goals

07/03/2002
by   F. S. de Boer, et al.

A long-standing problem in agent research has been to close the gap between agent logics and agent programming frameworks. The main reason this gap has persisted is that agent programming frameworks have not incorporated the concept of a `declarative goal'. Instead, such frameworks have focused mainly on plans or `goals-to-do' rather than on the end goals to be realised, also called `goals-to-be'. In this paper, a new programming language called GOAL is introduced which incorporates such declarative goals. The notion of a `commitment strategy' - one of the main theoretical insights due to agent logics, which explains the relation between beliefs and goals - is used to construct a computational semantics for GOAL. Finally, a proof theory for proving properties of GOAL agents is introduced. We thus offer a complete theory of agent programming in the sense that our theory provides both a programming framework and a programming logic for such agents. An example program is proven correct using this programming logic.

1 Goal-Oriented Agent Programming

Agent technology has come more and more into the limelight of computer science. Intelligent agents have not only become one of the central topics of artificial intelligence (nowadays sometimes even defined as "the study of agents" [RN95]), but mainstream computer science, especially software engineering, has also taken up agent-oriented programming as a new and exciting paradigm to investigate, while industry experiments with its use on a large scale, witness the results reported at conferences like Autonomous Agents (e.g. [AA97]) and in books such as [JW97].

Although the definition of an agent is subject to controversy, many researchers view it as a software (or hardware) entity that displays some form of autonomy, in the sense that an agent is both reactive (responding to its environment) and pro-active (taking initiative, independently of a user). Often this aspect of autonomy is translated into agents having a mental state comprising (at least) beliefs about the environment and goals that are to be achieved ([Wool95]).

In the early days of agent research, an attempt was made to make the concept of an agent more precise by means of logical systems. This effort resulted in a number of - mainly - modal logics for the specification of agents, which formally defined notions like belief, goal, intention, etc. associated with agents [Rao96, Lind96, Coh90, Coh95]. The relation of these logics to more practical approaches, however, remains unclear to this day. Several attempts have been made to bridge this gap. In particular, a number of agent programming languages have been developed to bridge the gap between theory and practice [Rao96a, Hin97]. These languages show a clear family resemblance to one of the first agent programming languages, Agent-0 [Sho93, Hin99a], and also to the language ConGolog [Gia99, Hin9807, Hin00].

These programming languages define agents in terms of their corresponding beliefs, goals, plans and capabilities. Although they define notions similar to those in the logical approaches, there is one notable difference. In logical approaches, a goal is a declarative concept (also called a goal-to-be), whereas in the cited programming languages goals are defined as sequences of actions or plans (or goals-to-do). The terminology used differs from case to case. However, whether they are called commitments (Agent-0), intentions (AgentSpeak [Rao96a]), or goals (3APL [Hin99b]) makes little difference: all these notions are structures built from actions and are therefore similar in nature to plans. In ConGolog, a more traditional computer science perspective is adopted, and the corresponding structures are simply called programs. The PLACA language [Tho93], a successor of AGENT0, also focuses more on extending AGENT0 to a language with complex planning structures (which are not part of the programming language itself!) than on providing a clear theory of declarative goals of agents as part of a programming language, and in this respect it is similar to AgentSpeak and 3APL. The type of goal included in these languages may also be called a goal-to-do and provides for a kind of procedural perspective on goals.

In contrast, a declarative perspective on goals in agent languages is still missing. Because of this mismatch, it has so far not been possible to use modal logics that include both belief and goal modalities for the specification and verification of programs written in such agent languages, and the gap between agent logics and programming frameworks has remained open. The value of adding declarative goals to agent programming lies both in the fact that they offer a new abstraction mechanism and in the fact that agent programs with declarative goals more closely approximate the intuitive concept of an intelligent agent. To fully realise the potential of the notion of an intelligent agent, a declarative notion of a goal should therefore also be incorporated into agent programming languages.

In this paper, we introduce the agent programming language GOAL (for Goal-Oriented Agent Language), which takes the declarative concept of a goal seriously and which provides a concrete proposal to bridge the gap between theory and practice. GOAL is inspired in particular by the language UNITY designed by Chandy and Misra [Cha88], although GOAL incorporates complex agent notions. We offer a complete theory of agent programming in the sense that our theory provides both a programming framework and a programming logic for such agents. In contrast with other attempts [Sho93, Wob99] to bridge the gap, our programming language and programming logic are related by means of a formal semantics. Only by providing such a formal relation is it possible to make sure that statements proven in the logic are indeed properties of the agent.

2 The Programming Language GOAL

In this section, we introduce the programming language GOAL. As mentioned in the previous section, GOAL is influenced by work in concurrent programming, in particular by the language UNITY ([Cha88]). The basic idea is that a program consists of a set of actions which execute in parallel. However, whereas UNITY is a language based on assignment to variables, GOAL is an agent-oriented programming language that incorporates more complex notions such as beliefs, goals, and agent capabilities, which operate on high-level information instead of simple values.

2.1 Mental States

As in most agent programming languages, GOAL agents select actions on the basis of their current mental state. A mental state consists of the beliefs and goals of the agent. However, in contrast to most agent languages, GOAL incorporates a declarative notion of a goal that is used by the agent to decide what to do. Both the beliefs and the goals are drawn from one and the same logical language L, with associated consequence relation ⊨. In this paper, L is a propositional language, and one may think of ⊨ as classical consequence. In general, however, the language may also be conceived of as an arbitrary constraint system, allowing one to combine tokens (predicates over a given universe) by means of an operator that accumulates pieces of information and an operator that hides information, in order to represent constraints over the universe of discourse (cf. [saraswat91]). In such a setting, one often assumes the presence of a constraint solver that tests entailment, i.e., whether one piece of information entails another.

Our GOAL agent thus keeps two databases, respectively called the belief base and the goal base. The difference between these two databases originates from the different meaning assigned to sentences stored in the belief base and sentences stored in the goal base. To clarify the interaction between beliefs and goals, one of the more important problems that needs to be solved is establishing a meaningful relationship between beliefs and goals. This problem is solved here by imposing a constraint on mental states that is derived from the default commitment strategy that agents use. The notion of a commitment strategy is explained in more detail below. The constraint imposed on mental states requires that an agent does not believe that φ is already the case if it has a goal to achieve φ, and, moreover, requires φ to be consistent if φ is a goal.

Definition 2.1

(mental state)
A mental state of an agent is a pair ⟨Σ, Γ⟩ where Σ ⊆ L are the agent's beliefs and Γ ⊆ L are the agent's goals (both sets may be infinite) and Σ and Γ are such that:

  • Σ is consistent (Σ ⊭ ⊥),

  • Γ is such that, for any γ ∈ Γ:

    1. γ is not entailed by the agent's beliefs (Σ ⊭ γ),

    2. γ is consistent (⊭ ¬γ), and

    3. for any φ ∈ L, if γ ⊨ φ and φ satisfies 1 and 2 above, then φ ∈ Γ.

A mental state does not contain a program or plan component in the 'classical' sense. Although both the beliefs and the goals of an agent are drawn from the same logical language, as we will see below, the formal meaning of beliefs and goals is very different. This difference in meaning reflects the different features of the beliefs and the goals of an agent. The declarative goals are best thought of as achievement goals in this paper. That is, these goals describe a goal state that the agent desires to reach. Mainly due to the temporal features of such goals, many properties of beliefs fail for goals. For example, the fact that an agent has the goal to be at home and the goal to be at the movies does not allow the conclusion that this agent also has the conjunctive goal to be at home and at the movies at the same time. As a consequence, less stringent consistency requirements are imposed on goals than on beliefs. An agent may have the goal to be at home and the goal to be at the movies simultaneously; even though these two goals cannot consistently be achieved at the same time, this does not mean that the agent cannot adopt both in the language GOAL.
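To make the constraints of Definition 2.1 concrete, the following sketch checks them for finite belief and goal bases. The code is illustrative only: it uses sympy's propositional satisfiability check for the consequence relation, the function names are ours, and only the conditions on individual goals are verified (the closure condition is left out).

from sympy import symbols
from sympy.logic.boolalg import And, Not
from sympy.logic.inference import satisfiable

def entails(premises, phi):
    # classical entailment: premises entail phi iff premises plus the negation of phi are unsatisfiable
    return satisfiable(And(*premises, Not(phi))) is False

def is_mental_state(beliefs, goals):
    # the belief base must be consistent
    if satisfiable(And(*beliefs)) is False:
        return False
    for goal in goals:
        if entails(beliefs, goal):       # 1. a goal may not already be believed
            return False
        if satisfiable(goal) is False:   # 2. each individual goal must be consistent
            return False
    return True

p, q = symbols("p q")
print(is_mental_state({p}, {q}))               # True
print(is_mental_state({p}, {p}))               # False: p is already believed
print(is_mental_state({p}, {And(q, Not(q))}))  # False: inconsistent goal

Note that the goal base as a whole is deliberately not required to be consistent, which is why only individual goals are tested.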

In this paper, we assume that the language used for representing beliefs and goals is a simple propositional language. As a consequence, we do not discuss the use of variables nor parameter mechanisms. Our motivation for this assumption is the fact that we want to present our main ideas in their simplest form and do not want to clutter the definitions below with details. Also, more research is needed to extend the programming language with a parameter passing mechanism, and to extend the programming logic for GOAL with first order features.

The language L for representing beliefs and goals is extended to a new language L_M which enables us to formulate conditions on the mental state of an agent. The language L_M consists of so-called mental state formulas. A mental state formula is a Boolean combination of the basic mental state formulas Bφ, which expresses that φ is believed to be the case, and Gφ, which expresses that φ is a goal of the agent.

Definition 2.2

(mental state formula)
The set L_M of mental state formulas is defined by:

  • if φ ∈ L, then Bφ ∈ L_M,

  • if φ ∈ L, then Gφ ∈ L_M,

  • if φ, ψ ∈ L_M, then ¬φ, φ ∧ ψ ∈ L_M.

The usual abbreviations for the propositional operators ∨, →, and ↔ are used. We write ⊤ as an abbreviation for φ ∨ ¬φ for some φ, and ⊥ for ¬⊤.

The semantics of belief conditions Bφ, goal conditions Gψ, and mental state formulas is defined in terms of the classical consequence relation ⊨.

Definition 2.3

(semantics of mental state formulas)
Let ⟨Σ, Γ⟩ be a mental state.

  • ⟨Σ, Γ⟩ ⊨ Bφ iff Σ ⊨ φ,

  • ⟨Σ, Γ⟩ ⊨ Gψ iff ψ ∈ Γ,

  • ⟨Σ, Γ⟩ ⊨ ¬φ iff ⟨Σ, Γ⟩ ⊭ φ,

  • ⟨Σ, Γ⟩ ⊨ φ ∧ ψ iff ⟨Σ, Γ⟩ ⊨ φ and ⟨Σ, Γ⟩ ⊨ ψ.

We write ⊨ φ for the fact that mental state formula φ is true in all mental states ⟨Σ, Γ⟩.
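The truth conditions of Definition 2.3 translate directly into code. The sketch below is again illustrative only, follows the reading of the goal modality given above (membership in the goal base), and reuses sympy for the classical consequence relation; Boolean combinations can then be evaluated with the ordinary Python connectives.

from sympy import symbols
from sympy.logic.boolalg import And, Not
from sympy.logic.inference import satisfiable

def entails(premises, phi):
    return satisfiable(And(*premises, Not(phi))) is False

def holds_B(beliefs, phi):
    # <Sigma, Gamma> |= B phi  iff  Sigma |= phi
    return entails(beliefs, phi)

def holds_G(goals, phi):
    # <Sigma, Gamma> |= G phi  iff  phi is an element of the goal base
    return phi in goals

p, q = symbols("p q")
beliefs, goals = {p}, {q}
print(holds_B(beliefs, p), holds_G(goals, q))   # True True
print(holds_B(beliefs, q), holds_G(goals, p))   # False False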

A number of properties of the belief and goal modalities and the relation between these operators are listed in Tables 1 and 2. Here, ⊢_C denotes derivability in classical logic, whereas ⊢ refers to derivability in the logic of mental state formulas L_M.

The first rule below states that mental state formulas that 'have the form of a classical tautology' (such as Bφ ∨ ¬Bφ) are also derivable. By the necessitation rule, an agent believes all classical tautologies. The distribution axiom expresses that the belief modality distributes over implication. This implies that the beliefs of an agent are closed under logical consequence. Finally, the consistency axiom states that the beliefs of an agent are consistent. In essence, the belief operator thus satisfies the properties of the system KD (see [fahamova94, MeyHoe94a]). Although in its current presentation our language does not allow for nested (belief) operators, from [MeyHoe94a, Section 1.7] we conclude that we may assume that our agent has positive (Bφ → BBφ) and negative (¬Bφ → B¬Bφ) introspective properties: every formula in the system KD45 (which is KD together with the two mentioned properties) is equivalent to a formula without nestings of operators.

if φ is an instantiation of a classical tautology, then ⊢ φ, for φ ∈ L_M
if ⊢_C φ, then ⊢ Bφ
⊢ B(φ → ψ) → (Bφ → Bψ)
⊢ ¬B⊥

Table 1: Properties of Beliefs

The first goal axiom below is a consequence of the constraint on mental states and expresses that if an agent believes φ, it does not have a goal to achieve φ. As a consequence, an agent cannot have a goal to achieve a tautology: ⊢ ¬G⊤. An agent also does not have inconsistent goals; that is, ¬G⊥ is an axiom (see Table 2). Finally, the conditions that allow one to conclude that the agent has a (sub)goal ψ are that the agent has a goal φ that logically entails ψ and that the agent does not believe that ψ is the case. The last axiom then allows one to conclude that Gψ holds. From now on, for any mental state formula φ, ⊢ φ means that there is a derivation of φ using the proof rules and axioms of Tables 1 and 2. If Δ is a set of mental state formulas from L_M, then Δ ⊢ φ means that there is a derivation of φ using the rules and axioms mentioned and the formulas of Δ as premises.
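In symbols, and as one plausible rendering of the properties just described (with ⊢_C denoting classical derivability), the goal axioms and rule read:

  ⊢ Bφ → ¬Gφ
  ⊢ ¬G⊥
  if ⊢_C φ → ψ, then ⊢ (Gφ ∧ ¬Bψ) → Gψ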

Table 2: Properties of Goals

The goal modality G is a weak logical operator. For example, the goal modality does not distribute over implication. A counterexample is provided by a goal base generated from the two goals φ → ψ and φ. The consequences of goals are only computed locally, from individual goals. But even from such a goal base one cannot conclude that ψ is a goal, since this conclusion is blocked in a mental state in which ψ is already believed. Deriving only consequences of goals locally ensures that from the fact that Gφ and Gψ hold, it is not possible to conclude that G(φ ∧ ψ) holds. This reflects the fact that individual goals cannot be combined into a single bigger goal; recall that two individual goals may be inconsistent (Gφ ∧ G¬φ is satisfiable), in which case taking the conjunction would lead to an inconsistent goal. In sum, most of the usual problems that many logical operators for motivational attitudes suffer from do not apply to our operator (cf. also [Mey99]). On the other hand, the last property of Lemma 2.4 justifies calling G a logical, and not just a syntactical, operator:

Lemma 2.4
  • ,

  • ,

One finds a similar quest for such weak operators in awareness logics for doxastic and epistemic modalities, see e.g. [faghal88, Thijsse93]. Just as agents do not want all the side-effects of their goals, being limited reasoners they also do not always adopt all the logical consequences of their beliefs or knowledge. However, the question remains whether modal logic is the right formal tool to reason with and about goals. Allowing explicitly for mutually inconsistent goals, our treatment of goals resides in the landscape of paraconsistent logic (cf. [gries89]). One might even go a step further and explore the use of linear logic [girard87] to reason about goals, which makes it possible to have the same goal more than once, and to model process and resource use in a fine-grained way. We will not pursue the different options for logics of goals in this paper.

Theorem 2.5

(Soundness and Completeness)
For any φ ∈ L_M, we have: ⊢ φ if and only if ⊨ φ.

Proof. We leave it for the reader to check soundness (i.e., the ‘’-direction). Here, and are immediate consequences of the definition of a belief as a consequence from a given consistent set, follows from condition of Definition 2.1, from property and from of that same definition.
For completeness, assume that . Then is consistent, and we will construct a mental state that verifies . First, we build a maximal -consistent set with . This can be split in a set and a set as follows: and . We now prove two properties of :

  1. is a mental state

  2. satisfies the following coincidence property:

The proofs for these claims are as follows:

  1. We must show that satisfies the properties of Definition 2.1. Obviously, is classically consistent, since otherwise we would have in the -consistent set , which is prohibited by axiom . Also, by axiom , no is equivalent to . We now show that no is classically entailed by . Suppose that we would have that , for certain and . Then, by construction of and , the formulas all are members of the maximal -consistent set . Since , by the deduction theorem for , and we conclude . But this means that both and are members of , which is prohibited by axiom . Finally, we show of Definition 2.1, Suppose , and that is consistent, and not classically entailed by . We have to , and this is immediately guaranteed by axiom .

  2. The base case for the second claim is about and , with . We have iff iff, by definition of , . Using compactness and the deduction theorem for classical logic, we find , for some propositional formulas . By the rule we conclude . By , this is equivalent to and, since all the are members of , we have . For the other base case, consider , which, using the truth-definition for , holds iff . By definition of , this means that , which was to be proven. The cases for negation and conjunction follow immediately from this. Hence, in particular, we have , and thus .

2.2 GOAL Agents

A third basic concept in GOAL is that of an agent capability. The capabilities of an agent consist of a set of so-called basic actions. The effects of executing such a basic action are reflected in the beliefs of the agent, and therefore a basic action is taken to be a belief update on the agent's beliefs. A basic action thus is a mental state transformer. Two examples of agent capabilities are an action for inserting a formula φ into the belief base and an action for removing φ from the belief base. Agent capabilities directly affect the belief base of the agent and not its goals, but because of the constraints on mental states they may, as a side effect, modify the current goals. For the purpose of modifying the goals of the agent, two special actions adopt(φ) and drop(φ) are introduced, to respectively adopt a new goal or drop some old goals. We write Cap to denote the set of all belief update capabilities of an agent; Cap thus does not include the two special actions adopt(φ) and drop(φ) for goal updating. The set of all capabilities is then defined as Cap together with these two special actions. Individual capabilities are denoted by a.

The set of basic actions or capabilities associated with an agent determines what an agent is able to do. It does not specify when such a capability should be exercised, nor when performing a basic action is to the agent's advantage. To specify such conditions, the notion of a conditional action is introduced. A conditional action consists of a mental state condition, expressed by a mental state formula, and a basic action. The mental state condition of a conditional action states the conditions that must hold for the action to be selected. Conditional actions are denoted by b throughout this paper.

Definition 2.6

(conditional action)
A conditional action b is a pair consisting of a mental state formula φ ∈ L_M and a capability a.

Informally, a conditional action b means that if the mental state condition φ holds, then the agent may consider doing basic action a. Of course, if the mental state condition holds in the current state, the action can only be successfully executed if it is enabled, that is, only if its precondition holds.

A GOAL agent consists of a specification of an initial mental state and a set of conditional actions.

Definition 2.7

(GOAL agent)
A GOAL agent is a triple ⟨Π, Σ₀, Γ₀⟩ where Π is a non-empty set of conditional actions, and ⟨Σ₀, Γ₀⟩ is the initial mental state.
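As a data structure, a GOAL agent is thus little more than a set of guarded capabilities together with an initial mental state. The following Python sketch fixes one possible, purely illustrative representation; the field names are ours, and mental state conditions are modelled as predicates over the two bases.

from dataclasses import dataclass
from typing import Any, Callable, FrozenSet, Tuple

# a mental state condition, modelled as a predicate over (beliefs, goals)
Condition = Callable[[FrozenSet, FrozenSet], bool]

@dataclass(frozen=True)
class ConditionalAction:
    condition: Condition   # the mental state formula guarding the action
    capability: Any        # the basic action: a belief update, an adopt, or a drop

@dataclass(frozen=True)
class GoalAgent:
    actions: Tuple[ConditionalAction, ...]  # non-empty set of conditional actions
    beliefs: FrozenSet                      # initial belief base
    goals: FrozenSet                        # initial goal base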

2.3 The Operational Semantics of GOAL

One of the key ideas in the semantics of GOAL is to incorporate into the semantics a particular commitment strategy (cf. [Rao90, Coh90]). The semantics is based on a particularly simple and transparent commitment strategy, called blind commitment. An agent that acts according to a blind commitment strategy drops a goal if and only if it believes that that goal has been achieved. By incorporating this commitment strategy into the semantics of GOAL, a default commitment strategy is built into agents. It is only a default strategy, however, and a programmer can override it by means of the drop action. It is not possible, however, to adopt a goal φ in case the agent believes that φ is already achieved.

The semantics of action execution should now be defined in conformance with this basic commitment principle. Recall that the basic capabilities of an agent were interpreted as belief updates. Because of the default commitment strategy, however, there is a relation between beliefs and goals, and we should extend the belief update associated with a capability to a mental state transformer that updates beliefs as well as goals according to the blind commitment strategy. To get started, we thus assume that some specification of the belief update semantics of all capabilities - except for the two special actions adopt and drop, which only update goals - is given. Our task is, then, to construct a mental state transformer semantics from this specification for each action. That is, we must specify how a basic action updates the complete current mental state of an agent, starting from a specification of the belief update associated with the capability only.

From the default blind commitment strategy, we conclude that if a basic action a - different from an adopt or drop action - is executed, then a goal is dropped only if the agent believes that the goal has been accomplished after doing a. The revision of goals thus is based on the beliefs of the agent. The beliefs of an agent represent all the information that is available to the agent to decide whether or not to drop or adopt a goal. So, in case the agent believes that a goal has been achieved by performing some action, this goal must be removed from the current goals of the agent. Besides the default commitment strategy, only the two special actions adopt and drop can result in a change to the goal base.

The initial specification of the belief updates associated with the capabilities is formally represented by a partial function T that maps a belief update capability a and a belief base Σ to the belief base T(a, Σ) that results from updating Σ by performing a. The fact that T is a partial function reflects that an action may not be enabled or executable in some belief states. The mental state transformer function M is derived from the semantic function T and is also a partial function. As explained, M removes any goals from the goal base that have been achieved by doing a. The function M also defines the semantics of the two special actions adopt and drop. An adopt(φ) action adds φ to the goal base if φ is consistent and φ is not believed to be the case. A drop(φ) action removes every goal that entails φ from the goal base. As an example, consider the two extreme cases: drop(⊥) removes no goals, whereas drop(⊤) removes all current goals.

Definition 2.8

(mental state transformer M)
Let ⟨Σ, Γ⟩ be a mental state, and let T be a partial function that associates belief updates with agent capabilities. Then the partial function M is defined by:

  M(a, ⟨Σ, Γ⟩) = ⟨T(a, Σ), Γ \ {ψ ∈ Γ | T(a, Σ) ⊨ ψ}⟩ for a ∈ Cap, if T(a, Σ) is defined,
  M(a, ⟨Σ, Γ⟩) is undefined for a ∈ Cap if T(a, Σ) is undefined,
  M(adopt(φ), ⟨Σ, Γ⟩) = ⟨Σ, Γ ∪ {φ}⟩ if ⊭ ¬φ and Σ ⊭ φ,
  M(adopt(φ), ⟨Σ, Γ⟩) is undefined if ⊨ ¬φ or Σ ⊨ φ,
  M(drop(φ), ⟨Σ, Γ⟩) = ⟨Σ, Γ \ {ψ ∈ Γ | ψ ⊨ φ}⟩.

The semantic function M maps an agent capability and a mental state to a new mental state. The capabilities of an agent are thus interpreted as mental state transformers by M. Although it is not allowed to adopt a goal that is inconsistent - such an adopt action is not enabled - there is no check on the global consistency of the goal base of an agent built into the semantics. This means that it is allowed to adopt a new goal which is inconsistent with another goal present in the goal base. For example, if the current goal base contains a goal φ, it is legal to execute an adopt action for a goal inconsistent with φ, resulting in a new goal base containing both (provided the new goal was not already believed). Although inconsistent goals cannot be achieved at the same time, they may be achieved in some temporal order. Individual goals in the goal base, however, are required to be consistent. Thus, whereas local consistency is required (i.e. individual goals must be consistent), global consistency of the goal base is not required.
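The case analysis of Definition 2.8 can be paraphrased in code as follows. The sketch is illustrative only: the belief update function is passed in as a parameter T, actions are represented as (kind, formula) pairs, and sympy is used for propositional entailment.

from sympy import symbols
from sympy.logic.boolalg import And, Not
from sympy.logic.inference import satisfiable

def entails(premises, phi):
    return satisfiable(And(*premises, Not(phi))) is False

def transform(action, beliefs, goals, T):
    """Return the next mental state, or None if the action is not enabled."""
    kind, phi = action
    if kind == "adopt":
        # adopt(phi): add phi as a goal, provided phi is consistent and not yet believed
        if satisfiable(phi) is False or entails(beliefs, phi):
            return None
        return beliefs, goals | {phi}
    if kind == "drop":
        # drop(phi): remove every goal that entails phi (always enabled)
        return beliefs, {g for g in goals if not entails([g], phi)}
    # belief update capability: update the beliefs via T, then drop achieved goals
    new_beliefs = T(action, beliefs)
    if new_beliefs is None:
        return None
    return new_beliefs, {g for g in goals if not entails(new_beliefs, g)}

# a toy belief update function: ("ins", phi) simply inserts phi into the belief base
def T(action, beliefs):
    kind, phi = action
    return beliefs | {phi} if kind == "ins" else None

p, q = symbols("p q")
print(transform(("ins", p), set(), {p, q}, T))   # goal p is dropped once it is believed
print(transform(("adopt", q), {q}, set(), T))    # None: q is already believed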

The second idea incorporated into the semantics concerns the selection of conditional actions. A conditional action may specify conditions on the beliefs as well as conditions on the goals of an agent. As usual, conditions on the beliefs are taken as a precondition for action execution: only if the agent's current beliefs entail the belief conditions associated with the action will the agent select it for execution. The goal condition, however, is used in a different way. It is used as a means for the agent to determine whether or not the action will help bring about a particular goal of the agent. In short, the goal condition specifies what the action is good for. This does not mean that the action necessarily establishes the goal immediately; rather, it may be taken as an indication that the action is helpful in bringing about a particular state of affairs.

In the definition below, we assume that the action component Π of an agent is fixed. The execution of an action gives rise to a computation step, formally denoted by the transition relation ⟶_b, where b is the conditional action executed in the computation step. More than one computation step may be possible in a given state, and the step relation thus denotes a possible computation step in a state. A computation step updates the current state and yields the next state of the computation. Note that because M is a partial function, a conditional action can only be successfully executed if both the condition is satisfied and the basic action is enabled.

Definition 2.9

(action selection)
Let ⟨Σ, Γ⟩ be a mental state and let b be a conditional action with mental state condition φ and capability a. Then, as a rule, we have:
If

  • the mental condition φ holds in ⟨Σ, Γ⟩, i.e. ⟨Σ, Γ⟩ ⊨ φ, and

  • a is enabled in ⟨Σ, Γ⟩, i.e. M(a, ⟨Σ, Γ⟩) is defined,

then ⟨Σ, Γ⟩ ⟶_b M(a, ⟨Σ, Γ⟩) is a possible computation step. The relation ⟶ is the smallest relation closed under this rule.

Now, the semantics of GOAL agents is derived directly from the operational semantics and the computation step relation ⟶. The meaning of a GOAL agent consists of a set of so-called traces. A trace is an infinite computation sequence of consecutive mental states interleaved with the actions that are scheduled for execution in each of those mental states. The fact that a conditional action is scheduled for execution in a trace does not mean that it is also enabled in the particular state for which it has been scheduled. In case an action is scheduled but not enabled, the action is simply skipped and the resulting state is the same as the state before. In other words, enabledness is not a criterion for selection; rather, it decides whether anything actually happens in a state once an action has been selected.

Definition 2.10

(trace)
A trace is an infinite sequence s₀, b₀, s₁, b₁, s₂, … such that each sᵢ is a mental state, each bᵢ is a conditional action, and for every i we have: sᵢ ⟶_{bᵢ} sᵢ₊₁, or bᵢ is not enabled in sᵢ and sᵢ₊₁ = sᵢ.

An important assumption in the semantics for GOAL is a fairness assumption. Fairness assumptions concern the fair selection of actions during the execution of a program. In our case, we make a weak fairness assumption [Man92]. A trace is weakly fair if it is not the case that an action is always enabled from some point in time on but is never selected for execution. This weak fairness assumption is built into the semantics by imposing a constraint on traces. By definition, a fair trace is a trace in which each of the actions is scheduled infinitely often. In a fair trace, there always will be a future time point at which an action is scheduled (considered for execution) and by this scheduling policy a fair trace implements the weak fairness assumption. However, note that the fact that an action is scheduled does not mean that the action also is enabled (and therefore, the selection of the action may result in an idle step which does not change the state).

The meaning of a GOAL agent now is defined as the set of fair traces in which the initial state is the initial mental state of the agent and each of the steps in the trace corresponds to the execution of a conditional action or an idle transition.

Definition 2.11

(meaning of a GOAL agent)
The meaning of a GOAL agent ⟨Π, Σ₀, Γ₀⟩ is the set of fair traces s₀, b₀, s₁, … such that s₀ = ⟨Σ₀, Γ₀⟩.
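An interpreter that respects Definitions 2.9-2.11 can be sketched as follows. Scheduling is round-robin, which makes every conditional action scheduled infinitely often and thus yields weakly fair traces; a scheduled action whose condition fails or whose capability is not enabled contributes an idle step. The helper functions holds and transform are assumed to be supplied (for instance along the lines of the earlier sketches), and the toy demo at the end uses plain strings as stand-ins for formulas.

import itertools

def run(conditional_actions, beliefs, goals, holds, transform, steps=20):
    # conditional_actions: a list of (condition, capability) pairs;
    # holds(cond, beliefs, goals) evaluates the mental state condition;
    # transform(cap, beliefs, goals) returns the next state, or None if not enabled.
    trace = [(beliefs, goals)]
    for cond, cap in itertools.islice(itertools.cycle(conditional_actions), steps):
        nxt = transform(cap, beliefs, goals) if holds(cond, beliefs, goals) else None
        if nxt is not None:                   # a genuine computation step
            beliefs, goals = nxt
        trace.append((beliefs, goals))        # otherwise an idle step: the state is unchanged
    return trace

# toy demo: believing an atom achieves the corresponding goal
holds = lambda cond, B, G: cond(B, G)
transform = lambda cap, B, G: (B | {cap}, G - {cap})
actions = [(lambda B, G: "at_home" in G and "at_home" not in B, "at_home")]
print(run(actions, frozenset(), frozenset({"at_home"}), holds, transform)[-1])
# (frozenset({'at_home'}), frozenset())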

2.4 Mental States and Enabledness

We formally said that a capability a is enabled in a mental state ⟨Σ, Γ⟩ in case M(a, ⟨Σ, Γ⟩) is defined. This definition implies that a belief update capability a is enabled if T(a, Σ) is defined. Let us assume that this only depends on the action a - this seems reasonable, since a paradigm like AGM ([agm]) only requires that a revision with φ fails iff φ is classically inconsistent, whereas expansions and contractions succeed for all φ, hence the question whether such an operation is enabled does not depend on the current beliefs. A conditional action b is enabled in a mental state s if there is a mental state s′ such that s ⟶_b s′. Note that if a capability is not enabled, the corresponding conditional action is also not enabled. The special predicate enabled is introduced to denote that a capability or conditional action is enabled (written enabled(a) and enabled(b), respectively).

The relation between the enabledness of capabilities and conditional actions is stated in the next table, together with the fact that drop actions are always enabled and a proof rule for deriving the enabledness of conditional actions. Let the language of mental state formulas be extended with Boolean combinations of mental state formulas and enabledness formulas, and let derivability be extended accordingly. The resulting system consists of the axioms and rules for mental state formulas, plus

, , if is defined ()

Table 3: Enabledness

Strictly speaking, enabledness is relative to the given belief update function, but in the sequel we will suppress this dependence. The semantics for the extended language is based on truth in pairs consisting of a mental state ⟨Σ, Γ⟩ and a partial function T for belief updates. For formulas of the form Bφ and Gφ, we just use the mental state and Definition 2.3 to determine their truth. For enabledness formulas, we have the following:

Definition 2.12

(Truth of enabledness)

  • ⟨Σ, Γ⟩, T ⊨ enabled(a) iff T(a, Σ) is defined, for a belief update capability a ∈ Cap,

  • ⟨Σ, Γ⟩, T ⊨ enabled(drop(φ)) iff true,

  • ⟨Σ, Γ⟩, T ⊨ enabled(adopt(φ)) iff ⊭ ¬φ and Σ ⊭ φ,

  • ⟨Σ, Γ⟩, T ⊨ enabled(b) iff ⟨Σ, Γ⟩ ⊨ φ and ⟨Σ, Γ⟩, T ⊨ enabled(a) at the same time, for a conditional action b with condition φ and capability a.

Note that we can summarize this definition to:

  • ⟨Σ, Γ⟩, T ⊨ enabled(a) iff M(a, ⟨Σ, Γ⟩) is defined, for any capability a,

  • ⟨Σ, Γ⟩, T ⊨ enabled(b) iff ⟨Σ, Γ⟩ ⊨ φ and there are Σ′, Γ′ such that ⟨Σ, Γ⟩ ⟶_b ⟨Σ′, Γ′⟩, for conditional actions b with condition φ.

Theorem 2.13

(Soundness and Completeness)
We have, for all formulas φ of the extended language: ⊢ φ if and only if ⊨ φ.

Proof. Again, checking soundness is straightforward and left to the reader. For the converse, we have to make a complexity measure explicit for -formulas, along which the induction can proceed. It suffices to stipulate that the complexity of is greater than that of and . Furthermore, the complexity of is greater than that of . Now, suppose that , i.e., is consistent. Note that the language is countable, so that we can by enumeration, extend to a maximal -consistent set . From this , we distill a pair as follows: , , and is defined iff , for any belief capability . We claim, for all :

For formulas of type and this is easily seen. Let us check it for enabledness formulas.

  • , with a belief capability. By construction of , the result immediately holds

  • . By construction of , every is an element of (because of axiom ), and also, every such formula is true in .

  • . Suppose . Then, and . By the induction hypothesis, we have that , hence , and, by , . For the converse, suppose . Then (by ), we cannot have that . Hence, , and by , we also have and hence, by applying the induction hypothesis, . Since is a sound rule, we finally conclude that .

  • . We can write this as and then use the induction hypothesis.

3 A Personal Assistant Example

In this section, we give an example to show how the programming language GOAL can be used to program agents. The example concerns a shopping agent that is able to buy books on the Internet on behalf of a user, and it provides a simple illustration of how the programming language works. The agent in our example uses a standard procedure for buying a book. It first goes to a bookstore, in our case Am.com. At the web site of Am.com it searches for a particular book, and if the relevant page with the book details shows up, the agent puts the book in its shopping cart. In case the shopping cart of the agent contains some items, the agent is allowed to buy the items on behalf of the user. The idea is that the agent adopts a goal to buy a book if the user instructs it to do so.

The set of capabilities of the agent is defined by

The first capability takes the agent to a selected web page. In our example, relevant web pages are the home page of the user, the main page of Am.com, web pages with information about books to buy, and a web page that shows the current items in the shopping cart of the agent. The second capability is an action that can be selected at the main page of Am.com and selects the web page with information about a particular book. The third action can be selected on the page concerning that book and puts it in the cart; a new web page then shows up displaying the contents of the cart. Finally, in case the cart is not empty, a paying action can be selected to pay for the books in the cart.

In the program text below, we assume a variable referring to the specifics of the book the user wants to buy (in the example, we use variables as a means for abbreviation; variables should be thought of as being instantiated with the relevant arguments in such a way that predicates with variables reduce to propositions). The initial beliefs of the agent are that the current web page is the home page of the user, and that it is not possible to be on two different web pages at the same time. We also assume that the user has provided the agent with the goals to buy The Intentional Stance by Daniel Dennett and Intentions, Plans, and Practical Reason by Michael Bratman.

GOAL Shopping Agent

Some of the details of this program will be discussed in the sequel, when we prove some properties of the program. The agent basically follows the recipe for buying a book outlined above. For now, however, just note that the program is quite flexible, even though the agent more or less executes a fixed recipe for buying a book. The flexibility results from the agent's knowledge state and the non-determinism of the program. In particular, the order in which the actions are performed by the agent - which book to find first, whether to buy the books one at a time or both in the same shopping cart, etc. - is not determined by the program. The scheduling of these actions thus is not fixed by the program, and might be fixed arbitrarily by the particular agent architecture used to run the program.
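For illustration, the recipe described above can be rendered as a list of condition/capability pairs in the style of the interpreter sketch from Section 2.3. All names below (goto, search, put_in_cart, pay, current_page, in_cart, bought) are paraphrases of the capabilities and predicates described in the prose, not the actual GOAL syntax of the program listing.

# conditional actions of the shopping agent, for a given book description
def shopping_actions(book):
    return [
        # the book is still a goal and the agent is not at Am.com: go there
        (lambda B, G: f"bought({book})" in G and "current_page(am.com)" not in B,
         "goto(am.com)"),
        # at the main page of Am.com: search for the book
        (lambda B, G: f"bought({book})" in G and "current_page(am.com)" in B,
         f"search({book})"),
        # on the page with the book details: put the book in the cart
        (lambda B, G: f"bought({book})" in G and f"current_page({book})" in B,
         f"put_in_cart({book})"),
        # the cart page is showing and the book is in it: pay
        (lambda B, G: f"in_cart({book})" in B and "current_page(cart)" in B,
         "pay"),
    ]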

4 Logic for GOAL

On top of the language GOAL and its semantics, we now construct a temporal logic to prove properties of GOAL agents. The logic is similar to other temporal logics, but its semantics is derived from the operational semantics of GOAL. Moreover, the logic incorporates the belief and goal modalities used in GOAL agents. We first informally discuss the use of Hoare triples for the specification of actions. In Section 4.3 we give a sound and complete system for such triples. These Hoare triples play an important role in the programming logic, since it can be shown that temporal properties of agents can be proven by means of proving Hoare triples for actions only. Finally, in Section 4.4 the language for expressing temporal properties and its semantics is defined, and it is proven that certain classes of interesting temporal properties can be reduced to properties of actions expressed by Hoare triples.

4.1 Hoare Triples

The specification of basic actions provides the basis for the programming logic and, as we will show below, is all we need to prove properties of agents. Because they play such an important role in the proof theory of GOAL, the specification of the basic agent capabilities requires special care. In the proof theory of GOAL, Hoare triples of the form {φ} a {ψ}, where φ and ψ are mental state formulas, are used to specify actions. The use of Hoare triples in a formal treatment of traditional assignments is well understood [And91]. Because the agent capabilities of GOAL agents are quite different from assignment actions, however, the traditional predicate transformer semantics is not applicable. GOAL agent capabilities are mental state transformers and, therefore, we require more extensive basic action theories to formally capture the effects of such actions. Hoare triples are used to specify the postconditions and the frame conditions of actions. The postconditions of an action specify the effects of an action, whereas the frame conditions specify what is not changed by the action. Axioms for the enabled predicate specify the preconditions of actions.

The formal semantics of a Hoare triple for conditional actions is derived from the semantics of a GOAL agent and is defined relative to the set of traces associated with that agent. A Hoare triple for conditional actions thus expresses a property of an agent and not just a property of an action. The semantics of the basic capabilities, however, is assumed to be fixed and is not defined relative to an agent.

Definition 4.1

(semantics of Hoare triples for basic actions)
A Hoare triple {φ} a {ψ} for basic capabilities means that for all mental states ⟨Σ, Γ⟩:

  • if ⟨Σ, Γ⟩ ⊨ φ and a is enabled in ⟨Σ, Γ⟩, then M(a, ⟨Σ, Γ⟩) ⊨ ψ, and

  • if ⟨Σ, Γ⟩ ⊨ φ and a is not enabled in ⟨Σ, Γ⟩, then ⟨Σ, Γ⟩ ⊨ ψ.

To explain this definition, note that we made a case distinction between states in which the basic action is enabled and in which it is not enabled. In case the action is enabled, the postcondition of the Hoare triple should be evaluated in the next state resulting from executing action . In case the action is not enabled, however, the postcondition should be evaluated in the same state because a failed attempt to execute action is interpreted as an idle step in which nothing changes.

Hoare triples for conditional actions are interpreted relative to the set of traces associated with the GOAL agent of which the action is a part. Below, we write sᵢ ⊨ φ to denote that a mental state formula φ holds in state sᵢ of a trace.

Definition 4.2

(semantics of Hoare triples for conditional actions)
Given an agent ⟨Π, Σ₀, Γ₀⟩, a Hoare triple {φ} b {ψ} for conditional actions (for b ∈ Π) means that for all traces t in the meaning of the agent and all time points i, we have that

  sᵢ ⊨ φ and bᵢ = b imply sᵢ₊₁ ⊨ ψ,

where bᵢ = b means that action b is taken in state sᵢ of trace t.

Of course, there is a relation between the execution of basic actions and that of conditional actions, and therefore there also is a relation between the two types of Hoare triples. The following lemma makes this relation precise.

Lemma 4.3

Let ⟨Π, Σ₀, Γ₀⟩ be a GOAL agent and let S be its meaning. Suppose that we have {φ ∧ c} a {ψ} and ⊨ (φ ∧ ¬c) → ψ. Then we also have {φ} b {ψ} for the conditional action b with condition c and capability a.

Proof:

We need to prove that sᵢ₊₁ ⊨ ψ whenever sᵢ ⊨ φ and b is taken in sᵢ. Therefore, assume sᵢ ⊨ φ. Two cases need to be distinguished: the case that the condition c holds in sᵢ and the case that it does not. In the former case, because we have {φ ∧ c} a {ψ}, we know that sᵢ₊₁ ⊨ ψ. In the latter case, the conditional action is not executed and sᵢ₊₁ = sᵢ. From (φ ∧ ¬c) → ψ, sᵢ ⊨ φ and sᵢ ⊨ ¬c it then follows that sᵢ₊₁ ⊨ ψ, since ψ is a state formula.

The definition of Hoare triples presented here formalises a total correctness property. A Hoare triple {φ} a {ψ} ensures that if φ initially holds, then an attempt to execute a results in a successor state, and ψ holds in that state. This is different from partial correctness, where no claims about the termination of actions and the existence of successor states are made.

4.2 Basic Action Theories

A basic action theory specifies the effects of the basic capabilities of an agent. It specifies when an action is enabled, what the effects of an action are, and what does not change when an action is executed. Therefore, a basic action theory consists of axioms for the enabled predicate for each basic capability, Hoare triples that specify the effects of basic capabilities, and Hoare triples that specify the frame axioms associated with these capabilities. Since the belief update capabilities of an agent are not fixed by the language GOAL but are user-defined, the user should specify the axioms and Hoare triples for belief update capabilities. The special actions adopt and drop for goal updating are part of GOAL, and a set of axioms and Hoare triples for these actions is specified below.

4.2.1 Actions on beliefs: capabilities of the shopping assistant

Because in this paper, our concern is not with the specification of basic action theories in particular, but with providing a programming framework for agents in which such specifications can be plugged in, we only provide some example specifications of the capabilities defined in the personal assistant example that we need in the proof of correctness below.

First, we specify a set of axioms for each of our basic actions that state when that action is enabled. Below, we abbreviate the two book titles of the example, The Intentional Stance and Intentions, Plans, and Practical Reason. In the shopping agent example, we then have:

Second, we list a number of effect axioms that specify the effects of a capability in particular situations defined by the preconditions of the Hoare triple (a schematic rendering of such axioms is sketched after the list).

  • The action results in moving to the relevant web page:
    ,

  • At Amazon.com, searching for a book results in finding a page with relevant information about the book:

  • On the page with information about a particular book, selecting the action
    results in the book being put in the cart; also, a new web page appears on which the contents of the cart are listed:

  • In case is in the cart, and the current web page presents a list of all the books in the cart, the action may be selected resulting in the buying of all listed books:
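Schematically, and with the same paraphrased predicate and action names as in the sketch of Section 3 (the actual atoms are those of the program listing), such effect axioms take the following shape:

  {B current_page(P)} goto(Q) {B current_page(Q)}
  {B current_page(am.com)} search(BOOK) {B current_page(BOOK)}
  {B current_page(BOOK)} put_in_cart(BOOK) {B in_cart(BOOK) ∧ B current_page(cart)}
  {B in_cart(BOOK) ∧ B current_page(cart)} pay {B bought(BOOK)}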

Finally, we need a number of frame axioms that specify which properties are not changed by each of the capabilities of the agent. For example, both the capabilities and do not change any beliefs about . Thus we have, e.g.:

It will be clear that we need more frame axioms than these two, and some of these will be specified below in the proof of the correctness of the shopping agent.

It is important to realise that the only Hoare triples that need to be specified for agent capabilities are Hoare triples that concern the effects upon the beliefs of the agent. Changes and persistence of (some) goals due to executing actions can be derived with the proof rules and axioms below that are specifically designed to reason about the effects of actions on goals.

4.2.2 Actions on goals

A theory of the belief update capabilities and their effects on the beliefs of an agent must be complemented with a theory about the effects of actions upon the goals of an agent. Such a theory should capture both the effects of the default commitment strategy and give a formal specification of the adopt and drop actions. Only in Section 4.3 do we aim at providing a complete system; in the discussion in the current section, there are dependencies between the axioms and rules discussed.

Default commitment strategy

The default commitment strategy imposes a constraint on the persistence of goals. A goal persists if it is not the case that after doing a the goal is believed to be achieved. Only the drop action is allowed to overrule this constraint. Therefore, in case a is not a drop action, we have that {Gφ} a {Bφ ∨ Gφ} (using the rule for conditional actions from Table 9, one can derive that this triple also holds for general conditional actions b, rather than just capabilities a). The Hoare triple precisely captures the default commitment strategy and states that after executing an action the agent either believes it has achieved φ, or it still has the goal φ if φ was a goal initially.

Table 4: Persistence of goals

A similar Hoare triple can be given for the persistence of the absence of a goal. Formally, we have

  {¬Gφ} a {¬Gφ ∨ ¬Bφ}    (1)

This Hoare triple states that the absence of a goal persists, and in case it does not persist the agent does not believe φ (anymore). The adoption of a goal may be the result of executing an adopt action, of course. However, it may also be the case that an agent believed it achieved φ but after doing a no longer believes this to be the case and adopts φ as a goal again. For example, if the agent believes φ, it does not have a goal to achieve φ even if φ follows from its goal base; however, in case an action changes the belief base such that φ is no longer believed, the agent has a goal to achieve φ (again). This provides for a mechanism similar to that of maintenance goals. We do not need the Hoare triple (1) as an axiom, however, since it is a direct consequence of the fact that ⊢ ¬Bφ ∨ ¬Gφ (this is exactly the postcondition of (1)). Note that the stronger {¬Gφ} a {¬Gφ} does not hold, even if a is not an adopt action. This occurs, for example, in the situation just sketched: the agent does not have φ as a goal, since it believes φ has already been achieved, but if it gives up φ as a belief, φ becomes a goal.

In the semantics of Hoare triples (Definition 4.2) we stipulated that if the action is not enabled, we verify the postcondition in the same state as the precondition:
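In Hoare-triple form, and using the enabled predicate of Section 2.4, this stipulation can be sketched as:

  {φ ∧ ¬enabled(a)} a {φ}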

Table 5: Infeasible actions
Frame properties on Beliefs

The specification of the special actions adopt and drop involves a number of frame axioms and a number of proof rules. The frame axioms capture the fact that neither of these actions has any effect on the beliefs of an agent. Note that, by combining such properties with e.g. the Consequence Rule (Table 10), further triples can be derived.
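For example, frame properties of the following shape (a sketch) express that adopt and drop leave the belief base untouched:

  {Bφ} adopt(ψ) {Bφ}    {¬Bφ} adopt(ψ) {¬Bφ}
  {Bφ} drop(ψ) {Bφ}    {¬Bφ} drop(ψ) {¬Bφ}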

Table 6: Frame Properties on Beliefs for adopt and drop
(Non-)effects of adopt and drop

The proof rules for the adopt and drop actions capture their effects on the goals of an agent. For each action, we list proof rules for the effect and the persistence ('non-effect') on the goal base, for adoption (Table 7) and dropping (Table 8) of goals, respectively.

An agent adopts a new goal φ in case the agent does not believe φ and φ is not a contradiction. Concerning persistence, an adopt action does not remove any current goals of the agent. Any existing goals thus persist when adopt(φ) is executed. The persistence of the absence of goals is somewhat more complicated in the case of an adopt action. An adopt(φ) action does not add a new goal ψ in case ψ is not entailed by φ or ψ is believed to be the case:
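Rendered as Hoare triples with side conditions (one possible formulation of the rules just described):

  {¬Bφ} adopt(φ) {Gφ}    provided ⊬_C ¬φ
  {Gψ} adopt(φ) {Gψ}
  {¬Gψ} adopt(φ) {¬Gψ}    provided φ ⊬_C ψ, and similarly when ψ is already believed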

Effects of adopt / Non-effects of adopt

Table 7: (Non-)effects of adopt

A drop(φ) action results in the removal of all goals that entail φ. This is captured by the first proof rule in Table 8.
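In the same style, the effect and non-effect of drop can be sketched as:

  {Gψ} drop(φ) {¬Gψ}    provided ⊢_C ψ → φ
  {Gψ} drop(φ) {Gψ}    provided ψ ⊬_C φ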

Effects of drop / Non-effects of drop

Table 8: (Non-)effects of drop