A Dynamic Epistemic Framework for Conformant Planning

06/24/2016
by Quan Yu, et al.

In this paper, we introduce a lightweight dynamic epistemic logical framework for automated planning under initial uncertainty. We reduce plan verification and conformant planning to model checking problems of our logic. We show that the model checking problem of the iteration-free fragment is PSPACE-complete. By using two non-standard (but equivalent) semantics, we give novel model checking algorithms to the full language and the iteration-free language.


1 Introduction

Conformant planning is the problem of finding a linear plan (a sequence of actions) to achieve a goal in the presence of uncertainty about the initial state (cf. [30]). For example, suppose that you are a rookie spy trapped in a foreign hotel with the following map at hand (a variant of the running example in [34]):

Now somebody spots you and sets off the alarm. In this case you need to move fast to one of the safe hiding places marked on the map (i.e., and ). However, in your panic you lost your way and you are not sure whether you are at or (denoted by the circle in the above graph). Now what should you do in order to reach a safe place quickly? Clearly, merely moving or moving may not guarantee your safety given the uncertainty. A simple plan is to move first and then , since this plan will take you to a safe place no matter where you actually are initially. This plan is conformant since it requires no feedback during the execution and it works in the presence of uncertainty about the initial state. More generally, a conformant plan should also work given actions with non-deterministic effects. Such a conformant plan is crucial when no feedback or observations are available during the execution of the plan (in many other cases, feedback may simply be too ‘expensive’ to obtain for a plan aiming at quick actions [9]). Note that since no information is provided during the execution, a conformant plan is simply a finite sequence of actions without any conditional moves.

As discussed in [10, 26], conformant planning can be reduced to classical planning, i.e., the planning problem without any initial uncertainty, over the space of belief states. Intuitively, a belief state is a subset of the state space which records the uncertainty during the execution of a plan; e.g., is an initial belief state in the above example. In order to make sure the goal is eventually achieved, it is crucial to track the transitions of belief states during the execution of the plan, and this may traverse exponentially many belief states in the size of the original state space. As one may expect, conformant planning is computationally harder than classical planning: checking the existence of a conformant plan is EXPspace-complete in the number of variables generating the state space [20]. In the literature, compact and implicit representations of belief states have been proposed, such as OBDD [14, 16, 15] and CNF [32], and different heuristics are used to guide the search for a plan, e.g., [12, 13].
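To make the belief-state reduction concrete, here is a minimal sketch of conformant planning as breadth-first search over belief states; the data layout and all names are our own choices for illustration, not notation from the paper. In the worst case it visits exponentially many belief states, in line with the complexity results cited above.

```python
from collections import deque

def conformant_plan(actions, rel, goal, init_belief):
    """Breadth-first search over belief states (frozensets of states).

    rel[a] maps each state to its set of a-successors; an action is
    applicable in a belief state only if it is executable from every
    state the agent considers possible.  Returns a list of actions
    (a conformant plan) or None if no plan exists.
    """
    start = frozenset(init_belief)
    queue, visited = deque([(start, [])]), {start}
    while queue:
        belief, plan = queue.popleft()
        if all(goal(s) for s in belief):          # goal is guaranteed
            return plan
        for a in actions:
            if not all(rel[a].get(s) for s in belief):
                continue                          # a might be blocked somewhere
            succ = frozenset(t for s in belief for t in rel[a][s])
            if succ not in visited:
                visited.add(succ)
                queue.append((succ, plan + [a]))
    return None
```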

Besides the traditional AI approaches, we can also take an epistemic-logical perspective on planning in the presence of initial uncertainty, based on dynamic epistemic logic (DEL) (cf. e.g., [17]). The central philosophy of DEL is to take the meaning of an action to be the change it brings to the knowledge of the agents. Intuitively, this is exactly what we need in order to track the belief states during the execution of a plan (here the belief states are actually about knowledge in the sense of epistemic logic). Indeed, in recent years there has been growing interest in using DEL to handle multi-agent planning with knowledge goals (cf. e.g., [8, 25, 3, 4, 35, 27]), while traditional AI planning focuses on the single-agent case. In particular, the event models of DEL (cf. [7]) are used to handle non-public actions that may cause different knowledge updates for different agents. In these DEL-based planning frameworks, states are epistemic models, actions are event models, and the state transitions are implicitly encoded by the update product, which computes a new epistemic model from an epistemic model and an event model.

One advantage of this approach is its expressiveness in handling scenarios which require reasoning about agents’ higher-order knowledge about each other in the presence of partially observable actions. However, this expressiveness comes at a price: as shown in [8, 5], multi-agent epistemic planning is undecidable in general. Many interesting decidable fragments have been found in the literature [8, 25, 35, 2], which suggests that the single-agent case and restrictions on the form of event models are the keys to decidability. However, if we focus on single-agent planning, a natural question arises: how do such DEL approaches compare with traditional AI planning? It seems that the DEL-based approaches are more suitable for planning with actions that change (higher-order) knowledge rather than planning with fact-changing actions, although the latter type of actions can also be handled in DEL. Moreover, the standard models of DEL are purely epistemic and thus do not directly encode the temporal information of available actions. This may limit the applicability of such approaches to planning problems based on transition systems.

In this paper, we tackle the standard single-agent conformant planning problem over transition systems, by using the core idea of DEL, but not its standard formalism. Our formal framework is based on the logic proposed by Wang and Li in [34], where the model is simply a transition system with initial uncertainty as in the motivating example, and an action is interpreted in the semantics as an update on the uncertainty of the agent. Our contributions are summarized as follows:

  • A lightweight dynamic epistemic framework with a simple language and a complete axiomatization.

  • A non-trivial reduction of conformant planning to a model checking problem using our language with programs.

  • Two novel model checking algorithms based on two alternative semantics for the proposed logic, which make the context-dependency in the original semantics explicit.

  • Model checking the iteration-free fragment of our language is Pspace-complete, model checking the full language is in EXPtime, and the model checking problem corresponding to conformant planning is in Pspace.

The last result may sound contradictory to the aforementioned result that conformant planning is EXPspace-complete. The apparent contradiction is due to the fact that the EXPspace-completeness result is measured in the number of state variables, which requires an exponential blow-up to generate the explicit transition systems that we use here. We will come back to this issue at the end of Section 4.3.

Our approach has the following advantages compared to the existing planning approaches:

  • The planning goals can be specified as arbitrary formulas in an epistemic language. Extra plan constraints (e.g., what actions to use) can be expressed explicitly by programs in the language. Therefore our approach may cover a richer class of (conformant) planning problems compared to the traditional AI approach, where a goal is Boolean (the goal in standard conformant planning is simply a set of valuations of the basic propositional variables). Our approach can even handle epistemic goals in negative forms; e.g., we may want to make sure that the agent knows something but does not know too much in the end.

  • The plans can be specified as regular expressions with tests in terms of arbitrary EPDL formulas, which generalizes the knowledge-based programs in [19, 23].

  • By reducing conformant planning to a model checking problem in an explicit logical language, we also see the subtleties hidden in the planning problem. In principle, there are various model checking techniques to be applied to conformant planning based on this reduction.

  • Our logical language and models are very simple compared to the standard action-model based DEL approach, yet we can encode the externally given executability of the actions in the model, inspired by epistemic temporal logic (ETL) [18, 28].

  • Our approach is flexible enough to provide, in the future, a unified platform to compare different planning problems under uncertainty. By studying different fragments of the logical language and model classes, we may categorize planning problems according to their complexity.

The rest of the paper is organized as follows: We introduce our basic logical framework and its axiomatization in Section 2, and extend it in Section 3 with programs to handle the conformant planning. The complexity analysis of the model checking problems is in Section 4 and we conclude in Section 5 with future directions.

2 Basic framework

2.1 Epistemic action language

To talk about the knowledge of the agent during the execution of a plan, we use the following language proposed in [34].

Definition 2.1 (Epistemic Action Language (Eal))

Given a countable set A of action symbols and a countable set P of atomic proposition letters , the language is defined as follows (we do need unboundedly many action symbols to encode the desired problems in the later discussion of model checking complexity):

where , . The following standard abbreviations are used: , .

says that the agent knows that , and expresses that if the agent can move forward by action , then after doing , holds. Throughout the paper, we fix some P and A, and refer to by EAL.

The size of EAL-formulas (notation ) is defined inductively: ; ; ; . The set of subformulas of , denoted as , is defined as usual.

Definition 2.2 (Uncertainty map)

Given P and A, a (multimodal) Kripke model is a tuple , where is a non-empty set of states, is a binary relation labelled by , is a valuation function. An uncertainty map is a Kripke model with a non-empty set . Given an uncertainty map , we refer to its components by , , , and . A pointed uncertainty map is an uncertainty map with a designated state . We write for .

Intuitively, a Kripke model encodes a map (transition system) and the uncertainty set encodes the uncertainty that the agent has about where he is in the map. The graph mentioned at the beginning of the introduction is a typical example of an uncertainty map. Note that there may be non-deterministic transitions in the model, i.e., there may be such that and for some .

Remark 1

It is crucial to notice that the designated state in a pointed uncertainty map must be one of the states in the uncertainty set.

Definition 2.3 (Semantics)

Given any uncertainty map and any state , the semantics is defined as follows:

where and . We say is valid (notation: ) if it is true on all pointed uncertainty maps. For an action sequence , we write for , and write for .

Intuitively, the agent ‘carries’ the uncertainty set with him when moving forward and obtains a new uncertainty set . Note that here we differ from [34] where the updated uncertainty set is further refined according to what the agent can observe at the new state. For conformant planning, we do not consider the observational power of the agent during the execution of a plan.
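As a minimal sketch of this update (the representation below is an assumption for illustration, not the paper’s notation): executing an action pushes the uncertainty set forward along the corresponding transitions, and the knowledge operator quantifies over the resulting set.

```python
def update(U, a, rel):
    """Uncertainty set after executing action a: all a-successors of
    states the agent currently considers possible."""
    return {t for s in U for t in rel[a].get(s, set())}

def knows(U, holds):
    """K phi holds iff phi holds at every state in the uncertainty set."""
    return all(holds(s) for s in U)
```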

Let us call the model mentioned in the introduction ; it is not hard to see that and are as follows:

Thus we have:

The usual global model checking algorithm for modal logics labels the states with the subformulas that are true at them. However, this cannot work here, since the truth value of epistemic formulas at states outside is simply undefined. Moreover, the exact truth value of an epistemic formula at a state depends on ‘how you get there’, as the following example shows (the underlined states mark the actual states):

Let the left-hand-side model be ; then it is clear that while , and thus . This shows that the truth value of an epistemic subformula w.r.t. a state in the model is ‘context-dependent’, which requires new techniques in model checking. We will make this explicit in Section 4.3 when we discuss the model checking algorithm.
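Reusing the update and knows helpers sketched above on a toy model (states, actions and valuation are made up for illustration), the same state can be reached with different uncertainty sets, and hence with different epistemic truths, depending on the action taken:

```python
rel = {'a': {1: {3}, 2: {4}}, 'b': {1: {3}, 2: {3}}}   # toy transition relation
val = {3: {'p'}}                                       # p holds only at state 3
U0 = {1, 2}                                            # initial uncertainty

# State 3 is the actual state after either action (starting from 1),
# but the agent's knowledge there depends on how it was reached:
knows(update(U0, 'a', rel), lambda s: 'p' in val.get(s, set()))  # False: 4 is possible
knows(update(U0, 'b', rel), lambda s: 'p' in val.get(s, set()))  # True
```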

2.2 Axiomatization

Following the axioms proposed in [34], we give the following axiomatization for EAL w.r.t. our semantics:

System
Axioms Rules
all axioms of propositional logic

where ranges over A and ranges over P; and denote the axioms of perfect recall and no miracles, respectively (cf. [33]).

Note that since we do not assume that the agent can observe the available actions, the axiom in [34] is dropped. For the same reason, the axiom of no miracles is also simplified.

We show the completeness of via a more direct proof strategy than the one used in [34].

Theorem 2.1

is sound and strongly complete w.r.t. EAL on uncertainty maps.

Proof: To prove that is sound on uncertainty maps, we need to show that all the axioms are valid and that all the inference rules preserve validity. Since the uncertainty set in a UM denotes an equivalence class, axioms ,  and  are valid; by the semantics, the validity of axioms and can be proved step by step; the others can be proved as usual.

To prove that is strongly complete on uncertainty maps, we only need to show that every -consistent set of formulas is satisfiable on some uncertainty map. The proof idea is as follows: we construct an uncertainty map consisting of maximal -consistent sets (MCSs), and then, using the Lindenbaum-like lemma that every -consistent set of formulas can be extended to an MCS (we omit the proof here), we only need to prove that every formula holds on the MCS to which it belongs.

Firstly, we construct a canonical Kripke model as follows:

  • is the set of all MCSs;

  • for any (equivalently for any );

  • .

Given , we define iff , and it is obvious that . Thus we have that for each , is an uncertainty map, and is a pointed uncertainty map.

Secondly, we prove the following claim.

Claim 2.1

If , then we have .

: Assuming , we need to show , namely we need to show that . Since , we have that there is such that . If , it follows by axiom  that . Thus we have . By axiom , it follows that . By and axiom , we have . It follows by that . If , we have . By axiom , we have . Similarly, we have . Thus we have .

: Assuming , we need to show , namely that there is such that . Let be . Then is consistent. For suppose not; then we have for some and . Since , we have . By rule  and axiom , we have . Since for each , we have . By axiom , it follows that . It follows by that . Since , by axiom , we have . This contradicts the fact that for each . Thus is consistent. By the Lindenbaum-like lemma, there exists an MCS extending . It follows by that and . We conclude that .

Finally, we show that iff . We prove it by induction on . Note that the ‘existence lemmas’ (that implies for some such that , and that implies for some ) also hold in the model . We only focus on the case of . With Claim 2.1, it follows that if . Then, by the induction hypothesis and the existence lemmas, it is easy to show that iff .  

3 An extension of EAL for conformant planning

3.1 Epistemic PDL over uncertainty maps

In this section we extend the language of EAL with programs as in propositional dynamic logic, and use this extended language to express the existence of a conformant plan.

Definition 3.1 (Epistemic PDL)

The Epistemic PDL Language (EPDL) is defined as follows:

where , . We use to denote , which is logically equivalent to . Given a finite , we write for , i.e., the iteration over the ‘sum’ of all the action symbols in B. The size of EPDL formulas/programs is given by: , , , , and .

Given any uncertainty map , any state , the semantics is given by a mutual induction on and (we only show the case about , other cases are as in EAL):

where , and on the right-hand side denote the usual composition, union and reflexive transitive closure of binary relations, respectively. Clearly this semantics coincides with the semantics of EAL on EAL formulas.
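For intuition, the purely relational part of the program semantics (ignoring tests, which are the context-dependent ingredient handled by the mutual induction) can be computed over a finite model in the standard way. The following is a sketch under that simplification, with relations stored as sets of pairs:

```python
def compose(R1, R2):
    """Relational composition R1 ; R2."""
    return {(s, u) for (s, t) in R1 for (t2, u) in R2 if t == t2}

def union(R1, R2):
    """Non-deterministic choice R1 + R2."""
    return R1 | R2

def star(R, states):
    """Reflexive transitive closure R*, iterating composition to a fixpoint."""
    closure = {(s, s) for s in states}
    frontier = set(R)
    while frontier - closure:
        closure |= frontier
        frontier = compose(frontier, R)
    return closure
```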

Note that each program can be viewed as a set of computation sequences, which are sequences of actions in A and tests with :

Here are some valid formulas which are useful in our later discussion:

()

We leave the complete axiomatization of EPDL on uncertainty maps to future work.

3.2 Conformant planning via model checking EPDL

Definition 3.2 (Conformant planning)

Given an uncertainty map , a goal formula , and a set , the conformant planning problem is to find a finite (possibly empty) sequence such that for each we have . The existence problem of conformant planning is to test whether such a sequence exists.

Recall that is shorthand for . Intuitively, we want a plan which is both executable and safe w.r.t. the non-deterministic actions and the initial uncertainty of the agent. The following example illustrates the crucial difference between and :

Example 1

Given uncertainty map depicted as follows, we have but .

Given and , verifying whether is a conformant plan can be formulated as the model checking problem: . On the other hand, the existence problem of a conformant plan is more complicated to formulate: it asks whether there exists a such that it can be verified as a conformant plan. A simple-minded attempt would be to check whether holds. Despite the -vs.- distinction, may hold on a model where the sequences needed to guarantee on different states in are different, as the following example shows:

Example 2

Given uncertainty map depicted as follows, let the goal formula be and . We have , but there is no solution to this conformant planning problem.

The right formula to check for the existence of a conformant plan w.r.t.  and is:

For example, if then . Intuitively, the conformant plan consists of actions that are always executable given the uncertainty of the agent (guaranteed by the guard ). In the end the plan should also make sure that must hold given the uncertainty of the agent (guaranteed by ). In the following, we will prove that this formula is indeed correct.
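The following sketch mirrors that intuition for a purely Boolean goal (epistemic goals would require evaluating K-formulas as in Section 2); the representation and names are our own. It steps through a candidate plan while maintaining the agent’s uncertainty set, checks the executability guard at every step, and checks that the goal is known at the end.

```python
def is_conformant_plan(plan, U, rel, goal):
    """Verify a candidate plan under initial uncertainty U.

    Every action must be executable from all states the agent might be
    in (the guard), and the goal must hold in every state that is still
    possible after the whole plan (so the agent knows the goal)."""
    current = set(U)
    for a in plan:
        if not all(rel[a].get(s) for s in current):
            return False                      # guard fails: a might be blocked
        current = {t for s in current for t in rel[a][s]}
    return all(goal(s) for s in current)
```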

First, we observe that the rule of substitution of equivalents is valid ( is obtained by replacing any occurrence of by , and similarly for ):

Proposition 3.1

If , then:

  • ;

  • .

Proposition 3.2

Proof: Since and , we only need to show that .

Left to right:
(L1) , by validity of Axiom
(L2) , by semantics
(L3) , by semantics
(L4) , by (L1)-(L3)

Right to left:
(R1) , by validity of Axiom
(R2) , by semantics
(R3) , by (R1)-(R2)  

Lemma 3.1

For any :

Proof: It is trivial when (i.e., the sequence is ), since the claim then boils down to . We prove the non-trivial cases by induction on . When , it follows from Proposition 3.2. Now, as the induction hypothesis, we assume that:

We need to show:

By IH,

Due to Propositions 3.1 and 3.2, we have:

The conclusion is immediate by combining (1) and (2).  

The following theorem follows from the above lemma.

Theorem 3.1

Given a pointed uncertainty map , an EPDL formula and a set , the following two are equivalent:

  • There is a such that ;

  • .

We would like to emphasise that the operator right before in the definition of cannot be omitted, as demonstrated by the following example:

Example 3

Given uncertainty map depicted as follows, let the goal formula be . As we can see, there is no solution to this conformant planning problem. Indeed with , but we could have .

We close this section with an example about planning with both positive and negative epistemic goals (the agent should know something, but not too much).

Example 4

Given the uncertainty map depicted as follows, let the goal be ; then both and are conformant plans. If the goal is , only is a good plan.

4 Model checking EPDL: complexity and algorithms

In this section, we first focus on the model checking problem of the following star-free fragment of EPDL:

We will show that model checking this star-free fragment is Pspace-complete. In particular, the upper bound is shown by making use of an alternative context-dependent semantics. Then we give an EXPtime algorithm for the model checking problem of the full EPDL, inspired by another alternative semantics based on 2-dimensional models. Finally we give a Pspace algorithm for the conformant planning problem in EPDL. Note that throughout this section we focus on uncertainty maps with finitely many states and assume for co-finitely many .

4.1 Complexity of model checking the star-free fragment

4.1.1 Lower Bound

To show the Pspace lower bound, we provide a polynomial reduction from QBF (quantified Boolean formula) truth testing to the model checking problem of the star-free fragment. Determining whether a given QBF (even one in prenex normal form whose matrix is in conjunctive normal form) is true is known to be Pspace-complete [31]. Our method is inspired by [29], which discusses the complexity of model checking temporal logics with past operators. Perhaps surprisingly, we can use the uncertainty sets to encode the ‘past’ and use the dual of the knowledge operator to ‘go back’ to the past. This intuitive idea will become clearer in the proof.
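For reference, the target of the reduction is ordinary QBF evaluation; a textbook recursive evaluator runs in polynomial space (recursion depth equals the number of variables) but exponential time. The encoding of prenex QBFs below is our own choice for illustration.

```python
def eval_qbf(prefix, cnf, assignment=None):
    """prefix: list of ('exists'|'forall', variable); cnf: list of clauses,
    each clause a list of (variable, polarity) literals."""
    assignment = dict(assignment or {})
    if not prefix:
        return all(any(assignment[v] == pol for v, pol in clause)
                   for clause in cnf)
    (q, v), rest = prefix[0], prefix[1:]
    branches = [eval_qbf(rest, cnf, {**assignment, v: val})
                for val in (True, False)]
    return any(branches) if q == 'exists' else all(branches)

# Example: forall x exists y. (x or y) and (not x or not y)  ->  True (take y = not x)
eval_qbf([('forall', 'x'), ('exists', 'y')],
         [[('x', True), ('y', True)], [('x', False), ('y', False)]])
```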


QBF formulas are where:

  • For , is if is odd, and is if is even.

  • is a propositional formula in CNF based on the variables .

For each such QBF with variables, we need to find a pointed model and a formula such that is true iff . The model is defined below.

Definition 4.1

Let and ; the uncertainty map is defined as:

  • , and for .

is linear in and can be depicted as follows:

Given , the formula is defined as

where is if is odd and is if is even, and is obtained from by replacing each with and with .

To ease the later proof, we first define the valuation tree below.

Definition 4.2 (V-tree)

A V-tree is a rooted tree such that 1) each node is or (except the root ); 2) each internal node in an even level has only one successor; 3) each internal node in an odd level has two successors: one is and the other one is ; 4) each edge to node of level is labelled ; 5) each edge to node of level is labelled . Given a V-tree with depth , a path is a sequence of where or . A path can also be seen as a valuation assignment for with the convention that if occurs in and if occurs in . Let be the set of all paths of .

As an example, a V-tree can be depicted as below:

It is not hard to see the following:

Proposition 4.1

For each , we have: is true iff there exists a V-tree with depth such that for each ( as a valuation).

Now let us see the result of updating by running a path on . Due to lack of space, we omit the proofs of the following two propositions.

Proposition 4.2

Given , let be a sequence of actions such that or for each ; then we have , where if and otherwise, for each .

Given where is or for each , let if and if . By Proposition 4.2, we always have with . Thus given ,  and , is a pointed uncertainty map.

Proposition 4.3

For each , we have iff there exists a V-tree with depth such that for each , where and is the state corresponding to the last edge of , e.g., .

Theorem 4.1

The following two are equivalent:

  • is true

  • in which is obtained from by replacing each with and with .

Proof: By Propositions 4.1 and 4.3, we only need to show that, given a V-tree with depth , if and only if for each . Since is in CNF, is also in a CNF-like form, obtained by replacing each with and each with for . Thus we only need to show that iff and iff . Since iff , we only need to show that iff and iff . By the definition of , we know that where is or for each .

Firstly, we show that if and only if . For the right-to-left direction, if , it follows by the definition of that . Then it must be the case that occurs in . Suppose not; then occurs in . It follows by Proposition 4.2 that . This contradicts . Thus it must be that occurs in . It follows by Proposition 4.2 that