Cooperative Epistemic Multi-Agent Planning for Implicit Coordination

03/07/2017 ∙ by Thorsten Engesser, et al. ∙ University of Freiburg

Epistemic planning can be used for decision making in multi-agent situations with distributed knowledge and capabilities. Recently, Dynamic Epistemic Logic (DEL) has been shown to provide a very natural and expressive framework for epistemic planning. We extend the DEL-based epistemic planning framework to include perspective shifts, allowing us to define new notions of sequential and conditional planning with implicit coordination. With these, it is possible to solve planning tasks with joint goals in a decentralized manner without the agents having to negotiate about and commit to a joint policy at plan time. First we define the central planning notions and sketch the implementation of a planning system built on those notions. Afterwards we provide some case studies in order to evaluate the planner empirically and to show that the concept is useful for multi-agent systems in practice.




1 Introduction

One important task in Multi-Agent Systems is to collaboratively reach a joint goal with multiple autonomous agents. The problem is particularly challenging in situations where the knowledge and capabilities required to reach the goal are distributed among the agents. Most existing approaches therefore apply some centralized coordinating instance from the outside, strictly separating the stages of communication and negotiation from the agents’ internal planning and reasoning processes. In contrast, building upon the epistemic planning framework by Bolander and Andersen [9], we propose a decentralized planning notion in which each agent has to individually reason about the entire problem and autonomously decide when and how to (inter-)act. For this, both reasoning about the other agents’ possible contributions and reasoning about their capabilities of performing the same reasoning is needed. We achieve our notion of implicitly coordinated plans by requiring all desired communicative abilities to be modeled as epistemic actions which then can be planned alongside their ontic counterparts, thus enabling the agents to perform observations and coordinate at run time. It captures the intuition that communication clearly constitutes an action by itself and, more subtly, that even a purely ontic action can play a communicative role (e.g. indirectly suggesting follow-up actions to another agent). Thus, for many problems our approach appears quite natural. On the practical side, the epistemic planning framework allows a very expressive way of defining both the agents’ physical and communicative abilities.

Consider the following example scenario. Bob would like to borrow the apartment of his friend Anne while she is away on vacation. Anne would be very happy to do him this favor. So they now have the joint goal of making sure that Bob can enter the apartment when he arrives. Anne will think about how to achieve the goal, and might come up with the following plan: Anne puts the key under the door mat; when Bob arrives, Bob takes the key from under the door mat; Bob opens the door with the key. Note that the plan does not only contain the actions required by Anne herself, but also the actions of Bob. These are the kind of multi-agent plans that this paper is about.

However, the plan just presented does not count as an implicitly coordinated plan. When Bob arrives at the apartment, he will clearly not know that the key is under the door mat, unless Anne has told him, and this announcement was not part of the plan just presented. If Anne has the ability to take Bob’s perspective (she has a Theory of Mind concerning Bob [30]), Anne should of course be able to foresee this problem, and realize that her plan can not be expected to be successful. An improved plan would then be: Anne puts the key under the door mat; Anne calls Bob to let him know where the key is; when Bob arrives, Bob takes the key from under the door mat; Bob opens the door with the key. This does qualify as an implicitly coordinated plan. Anne now knows that Bob will know that he can find the key under the door mat and hence will be able to reach the goal. Anne does not have to request or even coordinate the sub-plan for Bob (which is: take key under door mat; open door with key), as she knows he will himself be able to determine this sub-plan given the information she provides. This is an important aspect of implicit coordination: coordination happens implicitly as a consequence of observing the actions of others (including announcements), never explicitly through agreeing or committing to a specific plan. The essential contributions of this paper are to formally define this notion of implicitly coordinated plans as well as to document and benchmark an implemented epistemic planner that produces such plans.

Our work is situated in the area of distributed problem solving and planning [16] and directly builds upon the framework introduced by Bolander and Andersen [9] and Löwe, Pacuit, and Witzel [23], who formulated the planning problem in the context of Dynamic Epistemic Logic (DEL) [14]. Andersen, Bolander, and Jensen [5] extended the approach to allow strong and weak conditional planning in the single-agent case. Algorithmically, (multi-agent) epistemic planning can be approached either by compilation to classical planning [3, 22, 24] or by search in the space of “nested” [9] or “shallow” knowledge states [27, 28, 29]. Since compilation approaches to classical planning can only deal with bounded nesting of knowledge (or belief), similar to Bolander and Andersen [9], we use search in the space of epistemic states to find a solution. One of the important features that distinguishes our work from more traditional multi-agent planning [10] is the explicit notion of perspective shifts needed for agents to reason about the possible plan contributions of other agents—and hence needed to achieve implicit coordination.

Our concepts can be considered related to recent work in temporal epistemic logics [11, 20, 21], which addresses a question similar to ours, namely what groups of agents can jointly achieve under imperfect information. These approaches are based on concurrent epistemic game structures. Our approach is different in a number of ways, including: 1) As in classical planning, our actions and their effects are explicitly and compactly represented in an action description language (using the event models of DEL); 2) Instead of joint actions we have sequential action execution, where the order in which the agents act is not predefined; 3) None of the existing solution concepts considered in temporal epistemic logics capture the stepwise shifting perspective underlying our notion of implicitly coordinated plans.

The present paper is an extended and revised version of a paper presented at the Workshop on Distributed and Multi-Agent Planning (DMAP) 2015 (without archival proceedings). The present paper primarily offers an improved presentation: extended and improved introduction, improved motivation, better examples, improved formulation of definitions and theorems, simplified notation, and more discussions of related work. The technical content is essentially as in the original DMAP paper, except we now compare implicitly coordinated plans to standard sequential plans, we now formally derive the correct notion of an implicitly coordinated plan from the natural conditions it should satisfy, and we have added a proposition that gives a semantic characterisation of implicitly coordinated policies.

2 Theoretical Background

To represent planning tasks such as the ‘apartment borrowing’ example of the introduction, we need a formal framework in which (1) agents can reason about the knowledge and ignorance of other agents, and (2) both fully and partially observable actions can be described in a compact way (Bob doesn’t see Anne placing the key under the mat). Dynamic Epistemic Logic (DEL) satisfies these conditions. We first briefly recapitulate the foundations of DEL, following the conventions of Bolander and Andersen [9]. What is new in this exposition is mainly the parts on perspective shifts in Section 2.2.

We now define epistemic languages, epistemic states and epistemic actions. All of these are defined relative to a given finite set of agents $\mathcal{A}$ and a given finite set of atomic propositions $P$. To keep the exposition simple, we will not mention the dependency on $\mathcal{A}$ and $P$ in the following.

2.1 Epistemic Language and Epistemic States

Definition 1.

The epistemic language $\mathcal{L}_{KC}$ is given by $\varphi ::= \top \mid p \mid \neg\varphi \mid \varphi \wedge \varphi \mid K_i\varphi \mid C\varphi$, where $p \in P$ and $i \in \mathcal{A}$.

We read $K_i\varphi$ as “agent $i$ knows $\varphi$” and $C\varphi$ as “it is common knowledge that $\varphi$”.

Definition 2.

An epistemic model is $\mathcal{M} = \langle W, (\sim_i)_{i \in \mathcal{A}}, V \rangle$ where

  • The domain $W$ is a non-empty finite set of worlds.

  • $\sim_i \subseteq W \times W$ is an equivalence relation called the indistinguishability relation for agent $i \in \mathcal{A}$.

  • $V : P \to 2^W$ assigns a valuation $V(p) \subseteq W$ to each atomic proposition $p \in P$.

For $W_d \subseteq W$, the pair $(\mathcal{M}, W_d)$ is called an epistemic state (or simply a state), and the worlds of $W_d$ are called the designated worlds. A state is called global if $W_d = \{w\}$ for some world $w$ (called the actual world), and we then often write $(\mathcal{M}, w)$ instead of $(\mathcal{M}, \{w\})$. We use $\mathcal{S}^{gl}$ to denote the set of global states. For any state $s = (\mathcal{M}, W_d)$, we let $\mathrm{Globals}(s) = \{(\mathcal{M}, w) \mid w \in W_d\}$. A state $(\mathcal{M}, W_d)$ is called a local state for agent $i$ if $W_d$ is closed under $\sim_i$. A local state for $i$ is minimal if $W_d$ is a minimal set closed under $\sim_i$. We use $\mathcal{S}^i$ to denote the set of minimal local states of agent $i$. Given a state $s = (\mathcal{M}, W_d)$, the associated local state of agent $i$, denoted $s^i$, is $(\mathcal{M}, \{v \in W \mid v \sim_i w \text{ for some } w \in W_d\})$.

Definition 3.

Let $(\mathcal{M}, W_d)$ be a state with $\mathcal{M} = \langle W, (\sim_i)_{i \in \mathcal{A}}, V \rangle$. For $i \in \mathcal{A}$, $p \in P$ and $\varphi \in \mathcal{L}_{KC}$, we define truth as follows (with the propositional cases being standard and hence left out):

$(\mathcal{M}, W_d) \models \varphi$ iff $(\mathcal{M}, w) \models \varphi$ for all $w \in W_d$
$(\mathcal{M}, w) \models K_i \varphi$ iff $(\mathcal{M}, v) \models \varphi$ for all $v \sim_i w$
$(\mathcal{M}, w) \models C \varphi$ iff $(\mathcal{M}, v) \models \varphi$ for all $v \sim^* w$

where $\sim^*$ is the transitive closure of $\bigcup_{i \in \mathcal{A}} \sim_i$.

Example 1.

Let $\mathcal{A} = \{\mathrm{Anne}, \mathrm{Bob}\}$ and $P = \{\mathit{kum}\}$, where $\mathit{kum}$ is intended to express that the key is under the door mat. Consider the following global state $s_0 = (\mathcal{M}, w_1)$, where the nodes represent worlds, the edges represent the indistinguishability relations (reflexive edges left out), and a circled node is used for designated worlds:

[figure: two worlds, $w_1$ labeled $\mathit{kum}$ (designated) and $w_2$ with empty valuation, connected by an edge for Bob]

Each node is labeled by the name of the world, and the list of atomic propositions true at the world. The state represents a situation where the key is under the door mat ($\mathit{kum}$ is true at the actual world $w_1$), but Bob considers it possible that it isn’t ($\mathit{kum}$ is not true at the world $w_2$, indistinguishable from $w_1$ by Bob). We can verify that in this state Anne knows that the key is under the mat, Bob doesn’t, and Anne knows that he doesn’t: $s_0 \models K_{\mathrm{Anne}}\mathit{kum} \wedge \neg K_{\mathrm{Bob}}\mathit{kum} \wedge K_{\mathrm{Anne}} \neg K_{\mathrm{Bob}}\mathit{kum}$. The fact that Bob does not know the key to be under the mat can also be expressed in terms of local states. Bob’s local perspective on the state is his associated local state $s_0^{\mathrm{Bob}} = (\mathcal{M}, \{w_1, w_2\})$. We have $s_0^{\mathrm{Bob}} \not\models \mathit{kum}$, signifying that from Bob’s perspective, $\mathit{kum}$ can not be verified.
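The knowledge check of Example 1 can be sketched mechanically. The following Python fragment is an assumed toy encoding (world names, the atom name `kum`, and the partition representation are illustrative, not the paper's implementation): it evaluates $K_i p$ by quantifying over the agent's equivalence class.

```python
# Example 1 as data: world w1 (key under mat, atom "kum") and w2 (not),
# with Bob unable to distinguish the two worlds.
valuation = {"w1": {"kum"}, "w2": set()}
indist = {"Anne": [{"w1"}, {"w2"}], "Bob": [{"w1", "w2"}]}

def knows(agent, atom, world):
    """K_agent atom holds at world iff atom is true at every world
    the agent cannot distinguish from it."""
    cls = next(c for c in indist[agent] if world in c)
    return all(atom in valuation[v] for v in cls)

print(knows("Anne", "kum", "w1"))  # True:  Anne knows kum
print(knows("Bob", "kum", "w1"))   # False: Bob does not
```

Only the atomic case of $K_i$ is shown; nesting (e.g. $K_{\mathrm{Anne}} \neg K_{\mathrm{Bob}}\mathit{kum}$) would recurse on the same quantification.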

2.2 Perspective Shifts

In general, given a global state $s$, the associated local state $s^i$ will represent agent $i$’s internal perspective on that state. Going from $s^i$ to $(s^i)^j$ amounts to a perspective shift, where agent $i$’s perspective on the perspective of agent $j$ is taken. In Example 1, Anne’s perspective on the state $s_0$ is $s_0^{\mathrm{Anne}} = (\mathcal{M}, \{w_1\})$, which is $s_0$ itself. Bob’s perspective is $s_0^{\mathrm{Bob}} = (\mathcal{M}, \{w_1, w_2\})$. When Anne wants to reason about the actions available to Bob, e.g. whether he will be able to take the key from under the door mat or not, she will have to shift to his perspective, i.e. reason about what holds true in $(s_0^{\mathrm{Anne}})^{\mathrm{Bob}}$, which is the same as $s_0^{\mathrm{Bob}}$ in this case. This type of perspective shift is going to be central in our notion of implicitly coordinated plans, since it is essential to the ability of an agent to reason about other agents’ possible contributions to a plan from their own perspective. As the introductory example shows, this ability is essential: If Anne can not reason about Bob’s contribution to the overall plan from his own perspective, she will not realize that she needs to call him to let him know where the key is.
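Computing an associated local state is just closure of the designated worlds under the agent's indistinguishability relation. A minimal sketch, using the same illustrative partition encoding as before (not the paper's implementation):

```python
from itertools import chain

# Indistinguishability as per-agent partitions of the worlds of Example 1.
indist = {"Anne": [{"w1"}, {"w2"}], "Bob": [{"w1", "w2"}]}

def local_state(designated, agent):
    """Associated local state: close the designated set under ~agent by
    collecting every equivalence class that intersects it."""
    return set(chain.from_iterable(
        cls for cls in indist[agent] if cls & designated))

# Anne's perspective on the global state (designated {w1}) is itself;
# Bob's perspective contains both worlds.
print(local_state({"w1"}, "Anne"))  # {'w1'}
print(local_state({"w1"}, "Bob"))   # {'w1', 'w2'}
```

Iterating this function models the perspective shifts $s \mapsto s^i \mapsto (s^i)^j$ discussed above.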

Note that any local state $s$ induces a unique set of global states, $\mathrm{Globals}(s)$, and that we can hence choose to think of $s$ as a compact representation of the “belief state” over global states.

We have the following basic properties concerning perspective shifts (associated local states), where the third follows directly from the first two:

Proposition 1.

Let $s$ be a state, $i \in \mathcal{A}$, and $\varphi \in \mathcal{L}_{KC}$.

  1. $s \models K_i\varphi$ iff $s^i \models \varphi$.

  2. If $s$ is local for agent $i$ then $s^i = s$.

  3. If $s$ is local for agent $i$ then $s \models K_i\varphi$ iff $s \models \varphi$.∎

2.3 Dynamic Language and Epistemic Actions

To model actions, like announcing locations of keys and picking them up, we use the event models of DEL.

Definition 4.

An event model is $\mathcal{E} = \langle E, (\sim_i)_{i \in \mathcal{A}}, \mathit{pre}, \mathit{post} \rangle$ where

  • The domain $E$ is a non-empty finite set of events.

  • $\sim_i \subseteq E \times E$ is an equivalence relation called the indistinguishability relation for agent $i \in \mathcal{A}$.

  • $\mathit{pre} : E \to \mathcal{L}_{KC}$ assigns a precondition to each event.

  • $\mathit{post} : E \to \mathcal{L}_{KC}$ assigns a postcondition to each event. For all $e \in E$, $\mathit{post}(e)$ is a conjunction of literals (atomic propositions and their negations, including $\top$).

For $E_d \subseteq E$, the pair $(\mathcal{E}, E_d)$ is called an epistemic action (or simply an action), and the events in $E_d$ are called the designated events. An action is called global if $E_d = \{e\}$ for some event $e$, and we then often write $(\mathcal{E}, e)$ instead of $(\mathcal{E}, \{e\})$. Similar to states, $(\mathcal{E}, E_d)$ is called a local action for agent $i$ when $E_d$ is closed under $\sim_i$.

Each event of an epistemic action represents a different possible outcome. By using multiple events that are indistinguishable for some agent $i$ (i.e. $e \sim_i e'$ for $e \neq e'$), it is possible to obfuscate the outcomes for that agent, i.e. to model partially observable actions. Using event models with $|E_d| > 1$, it is also possible to model sensing actions (where a priori, multiple outcomes are considered possible), and nondeterministic actions [9].

The product update is used to specify the successor state resulting from the application of an action in a state.

Definition 5.

Let a state $s = (\mathcal{M}, W_d)$ and an action $a = (\mathcal{E}, E_d)$ be given with $\mathcal{M} = \langle W, (\sim_i)_{i \in \mathcal{A}}, V \rangle$ and $\mathcal{E} = \langle E, (\sim_i)_{i \in \mathcal{A}}, \mathit{pre}, \mathit{post} \rangle$. Then the product update of $s$ with $a$ is $s \otimes a = (\langle W', (\sim'_i)_{i \in \mathcal{A}}, V' \rangle, W'_d)$ where

  • $W' = \{(w, e) \in W \times E \mid (\mathcal{M}, w) \models \mathit{pre}(e)\}$;

  • $\sim'_i = \{((w, e), (w', e')) \in W' \times W' \mid w \sim_i w' \text{ and } e \sim_i e'\}$;

  • $V'(p) = \{(w, e) \in W' \mid \mathit{post}(e) \models p, \text{ or } (\mathcal{M}, w) \models p \text{ and } \mathit{post}(e) \not\models \neg p\}$;

  • $W'_d = \{(w, e) \in W' \mid w \in W_d \text{ and } e \in E_d\}$.

Example 2.

Consider the following epistemic action $\mathit{try\text{-}take} = (\mathcal{E}, \{e_1, e_2\})$, using the same conventions as for epistemic models, except each event is labeled by $\langle \mathit{pre}, \mathit{post} \rangle$:

[figure: two designated events, $e_1 : \langle \mathit{kum}, \neg\mathit{kum} \wedge \mathit{has\text{-}key} \rangle$ and $e_2 : \langle \neg\mathit{kum}, \top \rangle$, connected by an edge for Anne]

It represents the action of Bob attempting to take the key from under the mat. The event $e_1$ represents that if the key is indeed under the mat (the precondition $\mathit{kum}$ is true), then the result will be that Bob holds the key and it is no longer under the mat (the postcondition $\neg\mathit{kum} \wedge \mathit{has\text{-}key}$ becomes true). The event $e_2$ represents that if the key is not under the mat, nothing will happen (the postcondition is the trivial one, $\top$). Note the indistinguishability edge for Anne: She is not there to see whether the action is successful or not. Note however that she is still aware that either $e_1$ or $e_2$ happens, so the action represents the situation where she knows that he is attempting to take the key, but not necessarily whether he is successful (except if she already knows either $\mathit{kum}$ or $\neg\mathit{kum}$).
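The world component of the product update (Definition 5) can be sketched for this action in a few lines. The sketch below is an assumed encoding (preconditions as predicates on valuations, postconditions as atom assignments; the indistinguishability component $\sim'_i$ is omitted for brevity), not the paper's implementation:

```python
from itertools import product

def product_update(worlds, events):
    """Worlds: name -> set of true atoms. Events: name -> (pre, post),
    with pre a predicate on a valuation and post mapping atoms to bools.
    Keeps each (world, event) pair whose precondition holds, then applies
    the postcondition to that world's valuation."""
    new = {}
    for (w, val), (e, (pre, post)) in product(worlds.items(), events.items()):
        if pre(val):
            v = set(val)
            for atom, truth in post.items():
                (v.add if truth else v.discard)(atom)
            new[(w, e)] = v
    return new

# Example 1's worlds and the two try-take events.
worlds = {"w1": {"kum"}, "w2": set()}
events = {
    "e1": (lambda v: "kum" in v, {"has_key": True, "kum": False}),
    "e2": (lambda v: "kum" not in v, {}),
}
print(product_update(worlds, events))
# {('w1', 'e1'): {'has_key'}, ('w2', 'e2'): set()}
```

Note that the incompatible pairs $(w_1, e_2)$ and $(w_2, e_1)$ are filtered out, exactly as $W'$ requires.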

Letting $s_0$ denote the state from Example 1, we can calculate the result of executing $\mathit{try\text{-}take}$ in $s_0$:

[figure: the product update $s_0 \otimes \mathit{try\text{-}take}$, with single designated world $(w_1, e_1)$ satisfying $\mathit{has\text{-}key}$]

Note that the result is for Bob to have the key and for this to be common knowledge among Anne and Bob ($s_0 \otimes \mathit{try\text{-}take} \models C\,\mathit{has\text{-}key}$). So it seems that if we assume initially to be in the state $s_0$, and want to find a plan to achieve $\mathit{has\text{-}key}$, then the simple plan consisting only of the action $\mathit{try\text{-}take}$ should work. It is, however, a bit more complicated than that. Let us assume that Bob does strong planning, that is, only looks for plans that are guaranteed to reach the goal. The problem is then that, from Bob’s perspective, $\mathit{try\text{-}take}$ can not be guaranteed to reach the goal. This is formally because we have:

$s_0^{\mathrm{Bob}} \otimes \mathit{try\text{-}take} = (\mathcal{M}', \{(w_1, e_1), (w_2, e_2)\})$, where $(w_1, e_1)$ and $(w_2, e_2)$ are distinguishable by Bob.

Both worlds being designated, but distinguishable, means that Bob at plan time (before executing the action) considers them both as possible outcomes of $\mathit{try\text{-}take}$, but is aware that he will at run time (after having executed the action) know which one it is (see [9] for a fuller discussion of the plan time/run time distinction). Since the world $(w_2, e_2)$ is designated, we have $s_0^{\mathrm{Bob}} \otimes \mathit{try\text{-}take} \not\models \mathit{has\text{-}key}$. So from Bob’s perspective, $\mathit{try\text{-}take}$ might fail to produce $\mathit{has\text{-}key}$ and is hence not a strong plan to achieve it. Intuitively, this is of course simply because he does not, at plan time, know whether the key is under the mat or not.

Since $s_0^{\mathrm{Anne}} = s_0$ and $s_0 \otimes \mathit{try\text{-}take} \models \mathit{has\text{-}key}$, it might seem that $\mathit{try\text{-}take}$ is still a strong plan to achieve $\mathit{has\text{-}key}$ from the perspective of Anne. But in fact, it is not, at least not of the implicitly coordinated type we will define below. The issue is that $\mathit{try\text{-}take}$ is still an action that Bob has to execute, but Anne knows that Bob does not know it to be successful, and she can therefore not expect him to execute it. The idea is that when Anne comes up with a plan that involves actions of Bob, she should change her perspective to his, in order to find out what he can be expected to do. Technically speaking, it is because Anne knows that Bob cannot verify the plan to be successful that the plan is not implicitly coordinated from the perspective of Anne.

We extend the epistemic language into a dynamic language by adding a modality $[a]$ for each global action $a$. The truth conditions are extended with the following standard clause from DEL: $s \models [a]\varphi$ iff $s \models \mathit{pre}(e)$ implies $s \otimes a \models \varphi$, where $e$ is the designated event of $a$.

We define the abbreviation $\langle a \rangle \varphi := \neg [a] \neg \varphi$.

We say that an action $(\mathcal{E}, E_d)$ is applicable in a state $(\mathcal{M}, W_d)$ if for all $w \in W_d$ there is an event $e \in E_d$ s.t. $(\mathcal{M}, w) \models \mathit{pre}(e)$. Intuitively, an action is applicable in a state if for each possible situation (designated world), at least one possible outcome (designated event) is specified.

Example 3.

Consider again the state $s_0$ from Example 1 and the action $\mathit{try\text{-}take}$ from Example 2. The action $\mathit{try\text{-}take}$ is trivially seen to be applicable in the state $s_0$, since the designated event $e_1$ has its precondition satisfied in the designated world $w_1$. The action $\mathit{try\text{-}take}$ is also applicable in $s_0^{\mathrm{Bob}}$, since $(\mathcal{M}, w_1) \models \mathit{pre}(e_1)$ and $(\mathcal{M}, w_2) \models \mathit{pre}(e_2)$. This shows that $\mathit{try\text{-}take}$ is applicable from the perspective of Bob. Intuitively, this is so because it is only an action for attempting to take the key. Even if the key is not under the mat, the action will specify an outcome (the event $e_2$, having the trivial postcondition $\top$).
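The applicability check is a simple double quantification: every designated world must satisfy some designated event's precondition. A sketch in the same illustrative encoding as before (not the paper's code):

```python
# Worlds of Example 1 and the two try-take event preconditions,
# encoded as predicates on valuations.
worlds = {"w1": {"kum"}, "w2": set()}
pre = {
    "e1": lambda v: "kum" in v,      # key is under the mat
    "e2": lambda v: "kum" not in v,  # key is not under the mat
}

def applicable(designated_worlds, designated_events):
    """An action is applicable iff each designated world satisfies the
    precondition of at least one designated event."""
    return all(any(pre[e](worlds[w]) for e in designated_events)
               for w in designated_worlds)

print(applicable({"w1"}, {"e1", "e2"}))        # True: global state s0
print(applicable({"w1", "w2"}, {"e1", "e2"}))  # True: Bob's perspective
print(applicable({"w1", "w2"}, {"e1"}))        # False: without event e2
```

The third call illustrates why the trivial event $e_2$ matters: without it, Bob's local state would contain a designated world with no matching outcome.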

Let $s$ denote an epistemic state and $a$ an action. Andersen [4] shows that $a$ is applicable in $s$ iff $s \models \langle a \rangle \top$, and that $s \otimes a \models \varphi$ iff $s \models [a]\varphi$. We now define a further abbreviation: $[[a]]\varphi := \langle a \rangle \top \wedge [a]\varphi$. Hence:

(1)  $s \models [[a]]\varphi$ iff $a$ is applicable in $s$ and $s \otimes a \models \varphi$.

Thus $s \models [[a]]\varphi$ means that the application of $a$ in $s$ is possible and will (necessarily) lead to a state fulfilling $\varphi$.

3 Cooperative Planning

As mentioned in the introduction, in this paper we assume each action to be executable by a single agent, that is, we are not considering joint actions. In our ‘apartment borrowing’ example there are two agents, Anne and Bob. They are supposed to execute different parts of the plan to reach the goal of Bob getting access to the apartment. For instance, Anne is the one to put the key under the mat, and Bob is the one to take it from under the mat. To represent who performs which actions of a plan, we will introduce what we call an owner function (inspired by the approach of Löwe, Pacuit, and Witzel [23]). An owner function is defined to be a mapping $\omega : A \to \mathcal{A}$ from the set of actions $A$ to the set of agents $\mathcal{A}$, mapping each action to the agent who can execute it. For any action $a$, we call $\omega(a)$ the owner of $a$. Note that by defining the owner function this way, every action has a unique owner. This can be done without loss of generality, since we can always include any number of semantically equivalent actions in $A$, if we wish some action to be executable by several agents (e.g., if we want both Anne and Bob to be able to open and close the door). We can now define epistemic planning tasks.

Definition 6.

A cooperative planning task (or simply a planning task) $\Pi = \langle s_0, A, \omega, \gamma \rangle$ consists of an initial epistemic state $s_0$, a finite set of epistemic actions $A$, an owner function $\omega : A \to \mathcal{A}$, and a goal formula $\gamma$ of the epistemic language. Each $a \in A$ has to be local for $\omega(a)$. When $s_0$ is a global state, we call $\Pi$ a global planning task. When $s_0$ is local for agent $i$, we call $\Pi$ a planning task for agent $i$. Given a planning task $\Pi = \langle s_0, A, \omega, \gamma \rangle$ and a state $s$, we define $\Pi[s] = \langle s, A, \omega, \gamma \rangle$. Given a planning task $\Pi$, the associated planning task for agent $i$ is $\Pi^i = \Pi[s_0^i]$.

Given a multi-agent system facing a global planning task $\Pi$, each individual agent $i$ is facing the planning task $\Pi^i$ (agent $i$ cannot observe the global initial state $s_0$ directly, only the associated local state $s_0^i$).

In the following, we will investigate various possible solution concepts for cooperative planning tasks. The solution to a planning task is called a plan. A plan can either be sequential (a sequence of actions) or conditional (a policy). We will first, in Section 3.1, consider the simplest possible type of solution, a standard sequential plan. Then, in Section 3.2, we are going to introduce the more complex notion of a sequential implicitly coordinated plan, and in Section 3.3 this will be generalized to implicitly coordinated policies.

3.1 Standard Sequential Plans

The standard notion of a sequential plan in automated planning is as follows (see, e.g., [18]). An action sequence $(a_1, \dots, a_n)$ is called a (standard sequential) plan if for every $k \le n$, $a_k$ is applicable in the result of executing the action sequence $(a_1, \dots, a_{k-1})$ in the initial state, and when executing the entire sequence in the initial state, a state satisfying the goal formula is reached. Let us transfer this notion into the DEL-based setting of this paper. In our setting, the result of executing an action $a$ in a state $s$ is given as $s \otimes a$. Hence, the above conditions for being a plan can be expressed in the following way in our setting, where $s_0$ denotes the initial state, and $\gamma$ the goal formula: for every $k \le n$, $a_k$ is applicable in $s_0 \otimes a_1 \otimes \cdots \otimes a_{k-1}$, and $s_0 \otimes a_1 \otimes \cdots \otimes a_n \models \gamma$. Note that by equation (1) above, these conditions are equivalent to simply requiring $s_0 \models [[a_1]] \cdots [[a_n]] \gamma$. Hence we get the following definition.
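The conditions above can be checked by simply executing the plan, mirroring the nested formula $[[a_1]] \cdots [[a_n]]\gamma$. A toy sketch under assumed encodings (states as sets of true atoms, actions as applicability/effect pairs; none of this is the paper's implementation):

```python
def is_sequential_plan(plan, state, goal):
    """Check a standard sequential plan: each [[a]] requires applicability
    in the current state, then evaluates the remainder after the update."""
    for applicable, effect in plan:
        if not applicable(state):
            return False
        state = effect(state)
    return goal(state)

# Two illustrative actions from the apartment example.
put  = (lambda s: True,       lambda s: s | {"kum"})
take = (lambda s: "kum" in s, lambda s: (s - {"kum"}) | {"has_key"})

print(is_sequential_plan([put, take], set(), lambda s: "has_key" in s))  # True
print(is_sequential_plan([take], set(), lambda s: "has_key" in s))       # False
```

The second call fails on applicability: `take` requires the key to be under the mat first, just as $[[a]]\varphi$ requires $\langle a \rangle \top$.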

Definition 7.

Let $\Pi = \langle s_0, A, \omega, \gamma \rangle$ be a planning task. A standard sequential plan for $\Pi$ is a sequence $(a_1, \dots, a_n)$ of actions from $A$ satisfying $s_0 \models [[a_1]] \cdots [[a_n]] \gamma$.

This solution concept is equivalent to the one considered in [9]. As the following example shows, it is not sufficiently strong to capture the notion of an ‘implicitly coordinated plan’ that we are after.

Example 4.

Consider again the scenario of Example 2 where the key is initially under the mat, Bob does not know this, and the goal is for Bob to have the key. The only action available in the scenario is for Bob to attempt to take the key from under the mat. Using the states and actions defined in Examples 1 and 2, we can express this scenario as a cooperative planning task $\Pi = \langle s_0, A, \omega, \gamma \rangle$ where $s_0$ is the state of Example 1, $A = \{\mathit{try\text{-}take}\}$, $\omega(\mathit{try\text{-}take}) = \mathrm{Bob}$, and $\gamma = \mathit{has\text{-}key}$. In Example 2, we informally concluded that the plan only consisting of $\mathit{try\text{-}take}$ is a strong plan, since it is guaranteed to reach the goal, but that it is not a strong plan from the perspective of Bob. Given our formal definitions, we can now make this precise as follows:

  1. The sequence $(\mathit{try\text{-}take})$ is a standard sequential plan for $\Pi$.

  2. The sequence $(\mathit{try\text{-}take})$ is not a standard sequential plan for $\Pi^{\mathrm{Bob}}$.

The first item follows from the fact that $\mathit{try\text{-}take}$ is applicable in $s_0$ (Example 3), and that $s_0 \otimes \mathit{try\text{-}take} \models \mathit{has\text{-}key}$ (Example 2). The second item follows from $s_0^{\mathrm{Bob}} \otimes \mathit{try\text{-}take} \not\models \mathit{has\text{-}key}$ (Example 2).

We also have that $(\mathit{try\text{-}take})$ is a standard sequential plan for $\Pi^{\mathrm{Anne}}$, since $s_0^{\mathrm{Anne}} = s_0$. This proves that the notion of a standard sequential plan is insufficient for our purposes. If Anne is faced with the planning task $\Pi^{\mathrm{Anne}}$, she should not be allowed to consider $(\mathit{try\text{-}take})$ as a sufficient solution to the problem. She should be aware that the action $\mathit{try\text{-}take}$ is to be executed by Bob, and from Bob’s perspective, $(\mathit{try\text{-}take})$ is not a (strong) solution to the planning task (Item 2 above). So we need a way of explicitly integrating perspective shifts into our notion of a solution to a planning task, and this is what we will do next.

3.2 Implicitly Coordinated Sequential Plans

It follows from Definition 7 that $(a_1, \dots, a_n)$ is a standard sequential plan for some planning task $\Pi = \langle s_0, A, \omega, \gamma \rangle$ iff each $a_k \in A$ and the formula $[[a_1]] \cdots [[a_n]] \gamma$ is true in $s_0$. More generally, we can think of a planning notion as being defined via a mapping $\pi$ that takes a plan and a planning task as parameters, and returns a logical formula such that, for all states $s$, $s \models \pi((a_1, \dots, a_n), \Pi)$ iff $(a_1, \dots, a_n)$ is a plan for $\Pi[s]$. In the case of standard sequential plans, it follows directly from Definition 7 that $\pi$ would be defined like this:

(2)  $\pi((a_1, \dots, a_n), \Pi) = [[a_1]] \cdots [[a_n]] \gamma$

for all planning tasks $\Pi$ and all $a_1, \dots, a_n \in A$.

We now wish to define a similar mapping $\pi_{ic}$, so that $s \models \pi_{ic}((a_1, \dots, a_n), \Pi)$ iff $(a_1, \dots, a_n)$ is an implicitly coordinated plan for $\Pi[s]$. Our strategy is to list the natural conditions that $\pi_{ic}$ should satisfy, and then derive the exact definition of $\pi_{ic}$ (and hence implicitly coordinated plans) directly from those. First of all, the empty action sequence, denoted by $\epsilon$, should be an implicitly coordinated plan iff the state satisfies the goal formula, which is expressed by the following simple condition on $\pi_{ic}$:

(3)  $\pi_{ic}(\epsilon, \Pi) = \gamma$

For non-empty action sequences, the ‘apartment borrowing’ example studied above gives us the following insights. If Anne is trying to come up with a plan where one of the steps is to be executed by Bob, then Anne has to make sure that Bob can himself verify his action to be applicable, and that he can himself verify that executing the action will lead to a state where the rest of the plan will succeed. More generally, for an action sequence $(a_1, \dots, a_n)$ to be considered implicitly coordinated, the owner of the first action has to know that $a_1$ is applicable and will lead to a situation where $(a_2, \dots, a_n)$ is again an implicitly coordinated plan. This leads us directly to the following condition on $\pi_{ic}$, for all planning tasks $\Pi$ and all $a_1, \dots, a_n \in A$ with $n \ge 1$:

(4)  $\pi_{ic}((a_1, \dots, a_n), \Pi) = K_{\omega(a_1)} [[a_1]] \, \pi_{ic}((a_2, \dots, a_n), \Pi)$

It is now easy to see that any mapping $\pi_{ic}$ satisfying (3) and (4) must necessarily be defined as follows, for all planning tasks $\Pi$ and all action sequences $(a_1, \dots, a_n)$:

(5)  $\pi_{ic}((a_1, \dots, a_n), \Pi) = K_{\omega(a_1)} [[a_1]] K_{\omega(a_2)} [[a_2]] \cdots K_{\omega(a_n)} [[a_n]] \gamma$

This leads us directly to the following definition.

Definition 8.

Let $\Pi = \langle s_0, A, \omega, \gamma \rangle$ be a cooperative planning task. An implicitly coordinated plan for $\Pi$ is a sequence $(a_1, \dots, a_n)$ of actions from $A$ such that:

$s_0 \models K_{\omega(a_1)} [[a_1]] K_{\omega(a_2)} [[a_2]] \cdots K_{\omega(a_n)} [[a_n]] \gamma$

If $(a_1, \dots, a_n)$ is an implicitly coordinated plan for $\Pi^i$, then it is said to be an implicitly coordinated plan for agent $i$ to the planning task $\Pi$.

Note that the formula used to define implicitly coordinated plans above is uniquely determined by the natural conditions (3) and (4).

The following proposition gives a more structural, and semantic, characterization of implicitly coordinated plans. It becomes clear that such plans can be found by performing a breadth-first search over the set of successively applicable actions, shifting the perspective for each state transition to the owner of the respective action.

Proposition 2.

For a cooperative planning task $\Pi = \langle s_0, A, \omega, \gamma \rangle$, a non-empty sequence $(a_1, \dots, a_n)$ of actions from $A$ is an implicitly coordinated plan for $\Pi$ iff $a_1$ is applicable in $s_0^{\omega(a_1)}$ and $(a_2, \dots, a_n)$ is an implicitly coordinated plan for $\Pi[s_0^{\omega(a_1)} \otimes a_1]$.

The proposition can be seen as a semantic counterpart of (4), and is easily proven using (1), Proposition 1 and (5).
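The search sketched above can be illustrated on the apartment example. The toy code below hand-encodes the relevant knowledge conditions as state atoms, so the perspective check of Proposition 2 is folded into each action's applicability test; this only approximates the DEL semantics, and all names are illustrative assumptions, not the paper's planner:

```python
from collections import deque

# A state is (key_under_mat, bob_knows_location, bob_has_key).
# Each action: owner, applicability from the owner's perspective, effect.
ACTIONS = {
    "put":  ("Anne", lambda s: not s[2],      lambda s: (True, s[1], s[2])),
    "call": ("Anne", lambda s: s[0],          lambda s: (s[0], True, s[2])),
    "take": ("Bob",  lambda s: s[0] and s[1], lambda s: (False, s[1], True)),
}

def ic_plan(start, goal):
    """Breadth-first search for a plan whose every action can be verified
    applicable by its owner (take requires Bob to *know* the location)."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, plan = queue.popleft()
        if goal(state):
            return plan
        for name, (owner, app, eff) in ACTIONS.items():
            if app(state):
                nxt = eff(state)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, plan + [name]))
    return None

print(ic_plan((False, False, False), lambda s: s[2]))
# ['put', 'call', 'take']
```

Without the `s[1]` condition on `take`, the search would return the two-step plan that Example 5 shows is not implicitly coordinated: Bob would be expected to act without knowing where the key is.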

Example 5.

Consider again the planning task $\Pi = \langle s_0, \{\mathit{try\text{-}take}\}, \omega, \mathit{has\text{-}key} \rangle$ of Example 4. In Example 4 we concluded that $(\mathit{try\text{-}take})$ is a standard sequential plan for $\Pi$. From Example 2, we have that $s_0^{\mathrm{Bob}} \otimes \mathit{try\text{-}take} \not\models \mathit{has\text{-}key}$, and hence $s_0 \not\models K_{\mathrm{Bob}} [[\mathit{try\text{-}take}]] \mathit{has\text{-}key}$ (using (1) and Proposition 1). This shows that, as expected, $(\mathit{try\text{-}take})$ is not an implicitly coordinated plan for $\Pi$.

In the introduction, we noted that the solution to this problem would be for Anne to make sure to announce the location of the key to Bob. Let us now treat this formally within the developed framework. We need to give Anne the possibility of announcing the location of the key, so we define a new planning task $\Pi' = \langle s_0, \{\mathit{announce}, \mathit{try\text{-}take}\}, \omega, \mathit{has\text{-}key} \rangle$ with $\omega(\mathit{announce}) = \mathrm{Anne}$. Here $\mathit{announce}$ is the action consisting of a single (designated) event $e$ with $\mathit{pre}(e) = \mathit{kum}$ and $\mathit{post}(e) = \top$. In DEL, this action is known as a public announcement of $\mathit{kum}$. It can now easily be formally verified that

$s_0 \models K_{\mathrm{Anne}} [[\mathit{announce}]] K_{\mathrm{Bob}} [[\mathit{try\text{-}take}]] \mathit{has\text{-}key}$

In words: Anne knows that she can announce the location of the key, and that this will lead to a situation where Bob knows he can attempt to take the key, and he knows that he will be successful in this attempt. In other words, $(\mathit{announce}, \mathit{try\text{-}take})$ is indeed an implicitly coordinated plan to achieve that Bob has the key, consistent with our informal analysis in the introduction of the paper.

Example 6.

Consider a situation with three agents $\{1, 2, 3\}$ where a letter is to be passed from agent 1 to one of the other two agents, possibly via the third agent. Mutually exclusive propositions $\mathit{at}_1, \mathit{at}_2, \mathit{at}_3$ are used to denote the current carrier of the letter, while $\mathit{for}_2, \mathit{for}_3$ denote the addressee. In our example, agent 1 has a letter for agent 3, so $\mathit{at}_1$ and $\mathit{for}_3$ are initially true.

In $s_0$, all agents know that agent 1 has the letter ($\mathit{at}_1$), but agents 2 and 3 do not know which of them is the addressee ($\mathit{for}_2$ or $\mathit{for}_3$). We assume that agent 1 can only exchange letters with agent 2 and agent 2 can only exchange letters with agent 3. We thus define the four possible actions $\mathit{pass}_{i,j}$ for $(i,j) \in \{(1,2), (2,1), (2,3), (3,2)\}$, with $\mathit{pass}_{i,j}$ being the composite action of agent $i$ publicly passing the letter to agent $j$ and privately informing him about the correct addressee (the name of the addressee is on the envelope, but only visible to the receiver).

Given that the joint goal is to pass the letter to its addressee, the global planning task then is $\Pi = \langle s_0, A, \omega, \gamma \rangle$ with $A = \{\mathit{pass}_{i,j}\}$, $\omega(\mathit{pass}_{i,j}) = i$ for all $(i,j)$, and $\gamma = (\mathit{at}_2 \wedge \mathit{for}_2) \vee (\mathit{at}_3 \wedge \mathit{for}_3)$. Consider the action sequence $(\mathit{pass}_{1,2}, \mathit{pass}_{2,3})$: Agent 1 passes the letter to agent 2, and agent 2 passes it on to agent 3. It can now be verified that this sequence is an implicitly coordinated plan for $\Pi^1$, but not for $\Pi^2$ or $\Pi^3$. Hence it is an implicitly coordinated plan for agent 1, but not for agents 2 and 3. This is because in the beginning agents 2 and 3 do not know to which of them the letter is addressed and hence cannot verify that $\mathit{pass}_{2,3}$ will lead to a goal state. However, after agent 1’s execution of $\mathit{pass}_{1,2}$, agent 2 can distinguish between the possible addressees at run time, and find his subplan $(\mathit{pass}_{2,3})$, as contemplated by agent 1.

3.3 Conditional Plans

Sequential plans are often not sufficient to solve a given epistemic planning task. In particular, as soon as branching on nondeterministic action outcomes or obtained observations becomes necessary, we need conditional plans. Unlike Andersen, Bolander, and Jensen [5], who represent conditional plans as action trees with branches depending on knowledge formula conditions, we represent them as policy functions $\pi_i$, where each $\pi_i$ maps minimal local states of agent $i$ into actions of agent $i$. We now define two different types of policies, joint policies and global policies, and later show them to be equivalent.

Definition 9 (Joint policy).

Let $\Pi = \langle s_0, A, \omega, \gamma \rangle$ be a cooperative planning task. Then a joint policy $(\pi_i)_{i \in \mathcal{A}}$ consists of partial functions $\pi_i : \mathcal{S}^i \rightharpoonup A$ satisfying for all states $s \in \mathcal{S}^i$ and actions $a \in A$: if $\pi_i(s) = a$ then $\omega(a) = i$ and $a$ is applicable in $s$.

Definition 10 (Global policy).

Let $\Pi = \langle s_0, A, \omega, \gamma \rangle$ be a cooperative planning task. Then a global policy is a mapping $\pi : \mathcal{S}^{gl} \rightharpoonup 2^A$ satisfying the requirements knowledge of preconditions (kop), per-agent determinism (det), and uniformity (unif):

  • (kop) For all $s$ and all $a \in \pi(s)$: $a$ is applicable in $s^{\omega(a)}$.

  • (det) For all $a, a' \in \pi(s)$ with $\omega(a) = \omega(a')$: $a = a'$.

  • (unif) For all $s, s'$ with $s^{\omega(a)} = (s')^{\omega(a)}$ and $a \in \pi(s)$: $a \in \pi(s')$.

Proposition 3.

Any joint policy $(\pi_i)_{i \in \mathcal{A}}$ induces a global policy $\pi$ given by

$\pi(s) = \{\pi_i(s^i) \mid i \in \mathcal{A} \text{ and } \pi_i(s^i) \text{ is defined}\}$

Conversely, any global policy $\pi$ induces a joint policy $(\pi_i)_{i \in \mathcal{A}}$ given by

$\pi_i(s^i) = a$ iff $a \in \pi(s)$ and $\omega(a) = i$

Furthermore, the two mappings (mapping joint policies to induced global policies, and mapping global policies to induced joint policies) are each other’s inverse.


First we prove that the induced mapping $\pi$ as defined above is a global policy. Condition (kop): If $a \in \pi(s)$ then $a = \pi_i(s^i)$ for some $i$, and by definition of joint policy this implies that $a$ is applicable in $s^i = s^{\omega(a)}$. Condition (det): Assume $a, a' \in \pi(s)$ with $\omega(a) = \omega(a')$. By definition of $\pi$ we have $a = \pi_i(s^i)$ and $a' = \pi_j(s^j)$ for some $i, j$. By definition of joint policy, $\omega(a) = i$ and $\omega(a') = j$. Since $\omega(a) = \omega(a')$ we get $i = j$ and hence $s^i = s^j$. This implies $a = a'$. Condition (unif): Assume $a \in \pi(s)$ and $s^{\omega(a)} = (s')^{\omega(a)}$. By definition of $\pi$ and joint policy, we get $a = \pi_i(s^i)$ for $i = \omega(a)$. Thus $a = \pi_i((s')^i)$, and since $\pi_i((s')^i)$ is defined, we immediately get $a \in \pi(s')$. We now prove that the mappings $\pi_i$ induced by a global policy form a joint policy. Constraint (kop) ensures the applicability property as required by Definition 9, while the constraints (det) and (unif) ensure the right-uniqueness of each partial function $\pi_i$. It is easy to show that the two mappings are each other’s inverse, using their definitions. ∎

By Proposition 3, we can identify joint and global policies, and will in the following move back and forth between the two. Notice that Definitions 9 and 10 allow a policy to distinguish between modally equivalent states. A more sophisticated definition avoiding this is possible, but is beyond the scope of this paper. Usually, a policy is only considered to be a solution to a planning task if it is closed in the sense that the policy is defined for all non-goal states reachable by following it. Here, we want to distinguish between two different notions of closedness: one that refers to all states reachable from a centralized perspective, and one that refers to all states considered reachable when tracking perspective shifts. To that end, we distinguish between centralized and perspective-sensitive successor functions.

We take a successor function to be any function $f : \mathcal{S}^{gl} \times A \to 2^{\mathcal{S}^{gl}}$. Successor functions are intended to map pairs of states and actions into the states that can result from executing the action in the state. Which states can result from executing $a$ in $s$ depends on whether we take the objective, centralized view, or whether we take the subjective view of an agent. An agent might subjectively consider more outcomes possible than are objectively possible. We define the centralized successor function as $f_c(s, a) = \mathrm{Globals}(s \otimes a)$. It specifies the global states that are possible after the application of $a$ in $s$. If closedness of a global policy $\pi$ based on the centralized successor function is required, then no execution of $\pi$ will ever lead to a non-goal state where $\pi$ is undefined. Like for sequential planning, we are again interested in the decentralized scenario where each agent has to plan and decide when and how to act by himself under incomplete knowledge. We achieve this by encoding the perspective shifts to the next agent to act in the perspective-sensitive successor function $f_p(s, a) = \mathrm{Globals}(s^{\omega(a)} \otimes a)$. Unlike $f_c$, $f_p$ considers a state $t$ to be a successor of $s$ after application of $a$ if agent $\omega(a)$ considers $t$ possible after the application, not only if $t$ is actually possible from a global perspective. Thus, $f_c(s, a)$ is always a (possibly strict) subset of $f_p(s, a)$, and a policy that is closed wrt. $f_p$ must be defined for at least the states for which a policy that is closed wrt. $f_c$ must be defined. This corresponds to the intuition that solution existence for decentralized planning with implicit coordination is a stronger property than solution existence for centralized planning. For both successor functions, we can now formalize what a strong solution is that can be executed by the agents. Our notion satisfies the usual properties of strong plans [12], namely closedness, properness and acyclicity.

Definition 11 (Strong Policy).

Let ⟨s_0, γ⟩ be a cooperative planning task and σ a successor function. A global policy π is called a strong policy for ⟨s_0, γ⟩ with respect to σ if

  1. Finiteness: π is finite.

  2. Foundedness: for all global states s associated with s_0, (1) s is a goal state, or (2) π(s) ≠ ∅.

  3. Closedness: for all s with π(s) ≠ ∅, all a ∈ π(s), and all s′ ∈ σ(s, a), (1) s′ is a goal state, or (2) π(s′) ≠ ∅.

Note that we do not explicitly require acyclicity, since this is already implied by a literal interpretation of the product update semantics, which ensures unique new world names after each update. It then follows from finiteness and closedness that π is proper. We call strong policies with respect to σ_c centralized policies, and strong policies with respect to σ_i implicitly coordinated policies.
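For a finite, explicitly represented policy, the three conditions of Definition 11 can be checked directly. The following sketch is a hypothetical illustration (the name `is_strong_policy` and the dictionary-based policy representation are our own, not the planner's):

```python
def is_strong_policy(policy, sigma, goal, init_globals):
    """Check Definition 11 for a policy given as a dict: state -> set of actions.
    `sigma` is a successor function, `goal` a predicate on states, and
    `init_globals` the global states associated with the initial state."""
    # 1. Finiteness: the explicit dict is finite by construction.
    # 2. Foundedness: every initial global state is a goal state or covered.
    for s in init_globals:
        if not goal(s) and not policy.get(s):
            return False
    # 3. Closedness: every successor under a prescribed action is a goal
    #    state or covered by the policy as well.
    for s, actions in policy.items():
        for a in actions:
            for t in sigma(s, a):
                if not goal(t) and not policy.get(t):
                    return False
    return True

# Toy run: one initial state, one action leading straight to the goal.
policy = {"s0": {"deliver"}}
sigma = lambda s, a: {"sg"}
goal = lambda s: s == "sg"
assert is_strong_policy(policy, sigma, goal, ["s0"])
assert not is_strong_policy({}, sigma, goal, ["s0"])  # foundedness fails
```

Plugging in a centralized or a perspective-sensitive successor function for `sigma` yields the check for centralized and implicitly coordinated policies, respectively.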

Analogous to Proposition 2, we want to give a semantic characterization of implicitly coordinated policies. For this, we first define a successor of a state s by following a policy π to be any state in σ_i(s′, a) for an arbitrary state s′ that is s itself or a successor of s by following π, and an arbitrary action a ∈ π(s′). We can then show that if π is an implicitly coordinated policy for a planning task ⟨s_0, γ⟩, each successor state of the initial state either will already be a goal state, or there will be some agent who can find an implicitly coordinated policy for his own associated planning task prescribing an action for himself.

Proposition 4.

Let π be an implicitly coordinated policy for a planning task ⟨s_0, γ⟩ and let s be a non-goal successor state of s_0 by following π. Then there is an agent i such that π(s) contains at least one of agent i's actions and π is an implicitly coordinated policy for agent i's associated planning task ⟨s^i, γ⟩, where s^i is agent i's associated local state of s.


The existence of an action a ∈ π(s) with some owner i follows directly from the closedness of implicitly coordinated policies. We need to show that π is also implicitly coordinated for ⟨s^i, γ⟩. Finiteness and closedness of π still hold for ⟨s^i, γ⟩, since π was already finite and closed for ⟨s_0, γ⟩, and this does not change when replacing s_0 with s^i. For foundedness of π for ⟨s^i, γ⟩, we have to show that π is defined and returns a nonempty set of actions for all global states associated with s^i. For s itself, we already know that π(s) ≠ ∅. By uniformity, since all such global states are indistinguishable from s for agent i, π must assign the same actions to all of them. ∎

Example 7.

Consider again the letter passing problem introduced in Example 6. Let s and s′ denote the two global states that are initially considered possible by the planning agent.


With these, a policy π for the planning agent is given by assigning the other agent's passing action to both s and s′. After the contemplated application of this action (in both cases), the planning agent can distinguish the successor of s, where the goal is already reached and nothing has to be done, from the successor of s′, where it can apply its own passing action, leading directly to the goal state. Thus, π is an implicitly coordinated policy for the task. While in the sequential case, the planning agent has to wait for the first action of the other agent to be able to find its subplan, it can find the policy π in advance by explicitly planning for a run-time distinction.

In general, strong policies can be found by performing an AND-OR search, where AND branching corresponds to branching over different epistemic worlds and OR branching corresponds to branching over different actions. By considering modally equivalent states as duplicates and thereby transforming the procedure into a graph search, space and time requirements can be reduced, although great care has to be taken to deal with cycles correctly.
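The AND-OR search described above can be sketched as a simple recursive procedure. This is an illustrative depth-first variant with our own naming, not the implementation evaluated in Section 4; in particular, it neither merges modally equivalent states nor enforces uniformity, and it treats revisits along the current path as failures, which rules out cyclic policies.

```python
def and_or_search(s, sigma, goal, actions, path=frozenset()):
    """Return a policy dict (state -> chosen action) or None if none exists.
    OR branching: try each applicable action in turn.
    AND branching: every state in sigma(s, a) must be handled."""
    if goal(s):
        return {}
    if s in path:          # revisit along the current path: no acyclic policy
        return None
    for a in actions(s):   # OR: one workable action suffices
        policy = {s: a}
        ok = True
        for t in sigma(s, a):   # AND: every epistemic outcome must succeed
            sub = and_or_search(t, sigma, goal, actions, path | {s})
            if sub is None:
                ok = False
                break
            policy.update(sub)  # simplification: later branches may overwrite
        if ok:
            return policy
    return None

# Hypothetical three-state example: "split" leads to two distinguishable
# outcomes, each of which has its own action reaching the goal "G".
actions = lambda s: {"A": ["split"], "B": ["b"], "C": ["c"]}.get(s, [])
succ = lambda s, a: {"A": {"B", "C"}, "B": {"G"}, "C": {"G"}}[s]
goal = lambda s: s == "G"
assert and_or_search("A", succ, goal, actions) == {"A": "split", "B": "b", "C": "c"}
assert and_or_search("X", succ, goal, actions) is None  # no applicable action
```

Turning this tree search into a graph search over modally equivalent states, as the text suggests, saves work but requires the careful cycle handling mentioned above.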

4 Experiments

We implemented a planner that is capable of finding implicitly coordinated plans and policies (our code is available for download), and conducted two experiments: one small case study of the Russian card problem [13], intended to show how this problem can be modeled and solved from an individual perspective, and one experiment investigating the scaling behavior of our approach on private transportation problems in the style of Examples 6 and 7, using instances of increasing size.

4.1 Russian Card Problem

In the Russian card problem, seven cards, numbered 0 to 6, are randomly dealt to three agents. Alice and Bob get three cards each, while Eve gets the single remaining card. Initially, each agent only knows its own cards. The task is for Alice and Bob to inform each other about their respective cards using only public announcements, without revealing the holder of any single card to Eve. The problem was analyzed and solved from the global perspective by van Ditmarsch et al. [15], and a given protocol was verified from an individual perspective by Ågotnes et al. [2]. We want to solve the problem from the individual perspective of agent Alice and find an implicitly coordinated policy for her. To keep the problem computationally feasible, we impose some restrictions on the resulting policy, namely that the first action has to be Alice truthfully announcing five possible alternatives for her own hand, and that the second one has to be Bob announcing the card Eve is holding. Without loss of generality, we fix one specific initial hand for Alice. From a plan for this initial hand, plans for all other initial hands can be obtained by renaming. For simplicity, we only generate applicable actions for Alice, i.e. announcements that include her true hand. This still leaves a large number of options for the first action, and only a handful for the second. Moreover, the initial state consists of 140 worlds, one for each possible deal of cards. Agents can only distinguish worlds where their own hands differ. Alice's designated worlds in her associated local state are those four worlds in which she holds her fixed hand.
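The size of the initial state can be sanity-checked by enumerating the possible deals (seven cards into hands of sizes 3, 3 and 1):

```python
from itertools import combinations

# One epistemic world per deal: Alice's hand, Bob's hand, Eve's card.
cards = set(range(7))
deals = [
    (alice, bob, tuple(cards - set(alice) - set(bob)))
    for alice in combinations(sorted(cards), 3)
    for bob in combinations(sorted(cards - set(alice)), 3)
]
print(len(deals))  # 35 * 4 = 140 worlds

# Worlds Alice cannot distinguish: those in which she holds the same hand.
alice_hand = deals[0][0]
indist = [d for d in deals if d[0] == alice_hand]
print(len(indist))  # 4 designated worlds for Alice
```

This confirms the counts used in the text: 140 worlds overall, of which four are designated in Alice's associated local state.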

Our planner needs approximately two hours, as well as a substantial amount of memory, to come up with a solution policy. In the solution, Alice first announces her hand to be one of six hands, one of which is her true hand. Each of the five hands other than the true hand contains at least one of Alice's and one of Bob's cards, meaning that Bob will immediately be able to identify the complete deal. Also, Eve stays unaware of the individual cards of Alice and Bob, since she will be able to rule out exactly two of the hands, with each of Alice's and Bob's cards being present and absent in at least one of the remaining hands. Afterwards, Alice can wait for Bob to announce which of the remaining cards Eve is holding.

4.2 Mail Instances

Our second experiment concerns the letter passing problem from Examples 6 and 7. We generalized the scenario to allow an arbitrary number of agents with an arbitrary undirected neighborhood graph indicating which agents are allowed to directly pass letters to each other. As neighborhood graphs, we used randomly generated Watts-Strogatz small-world networks [31], which exhibit characteristics that can also be found in social networks. Watts-Strogatz networks have three parameters: the number n of nodes (determining the number of agents in our setting), the average number k of neighbors per node (roughly determining the average branching factor of a search for a plan), and the probability p of an edge being a “random shortcut” instead of a “local connection” (thereby influencing the shortest path lengths between agents). We only generate connected networks in order to guarantee plan existence.
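A Watts-Strogatz sampler with the rejection step for connectivity can be sketched in pure Python; libraries such as networkx also provide `connected_watts_strogatz_graph(n, k, p)` for the same purpose. The function names below are illustrative, not from our implementation.

```python
import random

def watts_strogatz(n, k, p, seed=None):
    """Small-world graph as a set of undirected edges: a ring lattice with
    k neighbors per node (k even), where each local edge is rewired into a
    random shortcut with probability p."""
    rng = random.Random(seed)
    edges = set()
    for u in range(n):
        for j in range(1, k // 2 + 1):
            v = (u + j) % n
            if rng.random() < p:  # rewire into a random shortcut
                choices = [w for w in range(n)
                           if w != u and (min(u, w), max(u, w)) not in edges]
                if choices:
                    v = rng.choice(choices)
            edges.add((min(u, v), max(u, v)))
    return edges

def connected(n, edges):
    """BFS reachability check used to discard disconnected samples."""
    adj = {u: set() for u in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, todo = {0}, [0]
    while todo:
        u = todo.pop()
        for w in adj[u] - seen:
            seen.add(w)
            todo.append(w)
    return len(seen) == n
```

To generate only connected networks, one simply resamples until `connected` succeeds; for the parameter ranges typical of small-world benchmarks, few retries are needed.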

We distinguish between the example domains MailTell and MailCheck. To guarantee plan existence, in both scenarios the actions are modeled so as to ensure that the letter position remains common knowledge among the agents in all reachable states. The mechanics of MailTell directly correspond to those given in Example 6. There is only one type of action: publicly passing the letter to a neighboring agent while privately informing him about the final addressee. This allows for sequential implicitly coordinated plans, in which letters are simply moved along a shortest path to the addressee. In contrast, in MailCheck, an agent holding the letter can only check whether he himself is the addressee, using a separate action (without learning the actual addressee if it is not him). To ensure plan existence in this scenario, we allow an agent to pass on the letter only if it is destined for someone else. Unlike in MailTell, conditional plans are required here: in a solution policy, the worst-case sequence of successively applied actions passes the letter to each agent at least once. As soon as the addressee has been reached, execution stops.
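The two path measures reported in Table 1 can be computed with plain breadth-first search. The sketch below is our own illustration: `bfs_dist` gives the direct path length used for MailTell, while `full_path_upper_bound` is a simple nearest-unvisited-agent heuristic bounding the MailCheck worst case, not the exact minimal traversal.

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src in an undirected graph given as adjacency sets."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def full_path_upper_bound(adj, src):
    """Length of a walk from src that visits every agent, always moving to
    the nearest unvisited one: an upper bound on the MailCheck worst case."""
    unvisited = set(adj) - {src}
    pos, length = src, 0
    while unvisited:
        dist = bfs_dist(adj, pos)
        nxt = min(unvisited, key=lambda v: dist[v])
        length += dist[nxt]
        unvisited.remove(nxt)
        pos = nxt
    return length

# Line graph 0-1-2-3: direct path 0 -> 3 has length 3, and a walk visiting
# every agent from 0 also needs 3 steps.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
assert bfs_dist(adj, 0)[3] == 3
assert full_path_upper_bound(adj, 0) == 3
```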

Table 1: Runtime evaluation for randomly generated MailTell instances (part (a), reporting the direct path measure) and MailCheck instances (part (b), reporting the full path measure).

Experiments were conducted for both scenarios with different parameters (Table 1). For finding sequential as well as conditional plans, our implementation uses breadth-first search over a regular graph and over an AND-OR graph, respectively. For each set of parameters, 100 trials were performed. For MailTell, direct path denotes the average shortest path length between sender and addressee, while for MailCheck, full path denotes the average length of a shortest path passing through all agents starting from the sender.

While the shortest path length between sender and addressee grows very slowly with the number of agents (due to the shortcut connections in the network), the shortest path passing through all agents grows roughly linearly with the number of agents. Since these measures directly correspond to the minimal plan lengths, the observed exponential growth of space and time requirements with respect to them is unsurprising.

Note also that in both scenarios, the number of agents determines the number of worlds (one for each possible addressee) in the initial state. Since the preconditions of the available actions are mutually exclusive, this constitutes an upper bound on the number of worlds per state throughout the search. Thus we get only a linear overhead in comparison to directly searching the networks for the relevant paths.

5 Conclusion and Future Work

We introduced a new cooperative, decentralized planning concept that removes the necessity of explicit coordination or negotiation. Instead, by modeling all possible communication directly as plannable actions and relying on the ability of the autonomous agents to put themselves into each other's shoes (using perspective shifts), some problems can be solved elegantly, with coordination between the agents arising implicitly. We demonstrated an implementation of both the sequential and conditional solution algorithms and evaluated its performance on the Russian card problem and two letter passing problems.

Based on the foundation this paper provides, a number of challenging problems need to be addressed. First of all, concrete problems (e.g. epistemic versions of established multi-agent planning tasks) need to be formalized, with a particular focus on which kinds of communicative actions the agents would need to solve these problems in an implicitly coordinated way. As seen in the MailTell benchmark, the dynamic epistemic treatment of a problem does not necessarily lead to more than linear overhead. It will be interesting to identify classes of tractable problems and to see how agents cope in a simulated environment. Another issue that is relevant in practice concerns the interplay of the single agents' individual plans. In our setting, the agents have to plan individually and decide autonomously when and how to act. Moreover, when it comes to action application, there is no predefined notion of agent precedence. This leads to the possibility of incompatible plans and, in consequence, to the necessity for agents to replan in some cases. While our notion of implicitly coordinated planning explicitly forbids the execution of actions leading to dead-end situations (i.e. non-goal states where there is no implicitly coordinated plan for any of the agents), replanning can still lead to livelocks. Both the conditions leading to livelocks and individually applicable strategies to avoid them need to be investigated.


  • [2] Thomas Ågotnes, Philippe Balbiani, Hans P. van Ditmarsch & Pablo Seban (2010): Group announcement logic. Journal of Applied Logic 8(1), pp. 62–81.
  • [3] Alexandre Albore, Hector Palacios & Hector Geffner (2009): A Translation-Based Approach to Contingent Planning. In: Proc. IJCAI 2009, pp. 1623–1628.
  • [4] Mikkel Birkegaard Andersen (2015): Towards Theory-of-Mind agents using Automated Planning and Dynamic Epistemic Logic. Ph.D. thesis, Technical University of Denmark.
  • [5] Mikkel Birkegaard Andersen, Thomas Bolander & Martin Holm Jensen (2012): Conditional Epistemic Planning. In: Proc. JELIA 2012, pp. 94–106.
  • [6] Mikkel Birkegaard Andersen, Thomas Bolander & Martin Holm Jensen (2015): Don’t Plan for the Unexpected: Planning Based on Plausibility Models. Logique et Analyse 58(230).
  • [7] Guillaume Aucher & Thomas Bolander (2013): Undecidability in Epistemic Planning. In: Proc. IJCAI 2013.
  • [8] Johan van Benthem, Jan van Eijck, Malvin Gattinger & Kaile Su (2015): Symbolic Model Checking for Dynamic Epistemic Logic. In: Proc. LORI 2015, pp. 366–378.
  • [9] Thomas Bolander & Mikkel Birkegaard Andersen (2011): Epistemic planning for single and multi-agent systems. Journal of Applied Non-Classical Logics 21(1), pp. 9–34.
  • [10] Michael Brenner & Bernhard Nebel (2009): Continual planning and acting in dynamic multiagent environments. Autonomous Agents and Multi-Agent Systems 19(3), pp. 297–331.
  • [11] Nils Bulling & Wojciech Jamroga (2014): Comparing variants of strategic ability: how uncertainty and memory influence general properties of games. Autonomous Agents and Multi-Agent Systems 28(3), pp. 474–518.
  • [12] Alessandro Cimatti, Marco Pistore, Marco Roveri & Paolo Traverso (2003): Weak, Strong, and Strong Cyclic Planning via Symbolic Model Checking. Artificial Intelligence 147(1–2), pp. 35–84.
  • [13] Hans P. van Ditmarsch (2003): The Russian Cards Problem. Studia Logica 75(1), pp. 31–62.
  • [14] Hans P. van Ditmarsch, Wiebe van der Hoek & Barteld Kooi (2007): Dynamic Epistemic Logic. Springer, Heidelberg.
  • [15] Hans P. van Ditmarsch, Wiebe van der Hoek, Ron van der Meyden & Ji Ruan (2006): Model Checking Russian Cards. Electronic Notes in Theoretical Computer Science 149(2), pp. 105–123.
  • [16] Edmund H. Durfee (2001): Distributed Problem Solving and Planning. In: Proc. ACAI 2001, pp. 118–149.
  • [17] Jan van Eijck (2004): Dynamic epistemic modelling. Technical report, CWI, Software Engineering [SEN].
  • [18] Malik Ghallab, Dana S. Nau & Paolo Traverso (2004): Automated Planning: Theory and Practice. Morgan Kaufmann.
  • [19] Wojciech Jamroga (2004): Strategic Planning through Model Checking of ATL Formulae. In: Proc. ICAISC 2004, pp. 879–884.
  • [20] Wojciech Jamroga & Thomas Ågotnes (2007): Constructive knowledge: what agents can achieve under imperfect information. Journal of Applied Non-Classical Logics 17(4), pp. 423–475.
  • [21] Wojciech Jamroga & Wiebe van der Hoek (2004): Agents that Know How to Play. Fundamenta Informaticae 63(2–3), pp. 185–219.
  • [22] Filippos Kominis & Hector Geffner (2015): Beliefs in Multiagent Planning: From One Agent to Many. In: Proc. ICAPS 2015, pp. 147–155.
  • [23] Benedikt Löwe, Eric Pacuit & Andreas Witzel (2011): DEL planning and some tractable cases. In: Proc. LORI 2011, pp. 179–192.
  • [24] Christian Muise, Vaishak Belle, Paolo Felli, Sheila McIlraith, Tim Miller, Adrian R. Pearce & Liz Sonenberg (2015): Planning over Multi-Agent Epistemic States: A Classical Planning Approach. In: Proc. AAAI 2015, pp. 3327–3334.
  • [25] Raz Nissim & Ronen I. Brafman (2014): Distributed Heuristic Forward Search for Multi-agent Planning. Journal of Artificial Intelligence Research 51, pp. 293–332.
  • [26] Héctor Palacios & Hector Geffner (2009): Compiling Uncertainty Away in Conformant Planning Problems with Bounded Width. Journal of Artificial Intelligence Research 35, pp. 623–675.
  • [27] Ronald P. A. Petrick & Fahiem Bacchus (2002): A Knowledge-Based Approach to Planning with Incomplete Information and Sensing. In: Proc. AIPS 2002, pp. 212–222.
  • [28] Ronald P. A. Petrick & Fahiem Bacchus (2004): Extending the Knowledge-Based Approach to Planning with Incomplete Information and Sensing. In: Proc. ICAPS 2004, pp. 2–11.
  • [29] Ronald P. A. Petrick & Mary Ellen Foster (2013): Planning for Social Interaction in a Robot Bartender Domain. In: Proc. ICAPS 2013, pp. 389–397.
  • [30] David Premack & Guy Woodruff (1978): Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences 1(4), pp. 515–526.
  • [31] Duncan J. Watts & Steven H. Strogatz (1998): Collective dynamics of ‘small-world’ networks. Nature 393(6684), pp. 440–442.