Privacy Preserving Multi-Agent Planning with Provable Guarantees

10/31/2018 · Amos Beimel et al. · Ben-Gurion University of the Negev

In privacy-preserving multi-agent planning, a group of agents attempts to cooperatively solve a multi-agent planning problem while keeping their data and actions private. Although much work has been carried out in this area in recent years, its theoretical foundations have not been fully worked out. Specifically, although algorithms with precise privacy guarantees exist, even their most efficient implementations are not fast enough on realistic instances, whereas for practical algorithms no meaningful privacy guarantees exist. Secure MAFS, a variant of the multi-agent forward search (MAFS) algorithm, is the only practical algorithm that attempts to offer more precise guarantees, but only in very limited settings and with proof sketches only. In this paper we formulate a precise notion of secure computation for search-based algorithms and prove that Secure MAFS has this property in all domains.


1 Introduction

As our world becomes better connected and more open ended, and autonomous agents are no longer science fiction, a need arises for enabling groups of agents to cooperate in generating a plan for diverse tasks that none of them can perform alone, in a cost-effective manner. Indeed, much like ad-hoc networks, one would expect various contexts to naturally lead to the emergence of ad-hoc teams of agents that can benefit from cooperation. Such teams could range from groups of manufacturers teaming up to build a product that none can build on their own, to groups of robots sent by different agencies or countries to help in disaster settings. To perform complex tasks, these agents need to combine their diverse skills effectively. Planning algorithms can help achieve this goal.

Most planning algorithms require full information about the set of actions and state variables in the domain. However, various aspects of this information are often private to an agent, which is reluctant to share them. For example, a manufacturer may be eager to let everyone know that it can supply motherboards, but it will not want to disclose the local process used to construct them, its suppliers, its inventory level, and the identity of its employees. Similarly, rescue forces of country A may be eager to help citizens of country B suffering from a tsunami, but without having to provide detailed information about the technology behind their autonomous bobcat to country B, or to country C’s humanoid evacuation robots. In both cases, agents have public capabilities they are happy to share, and private processes and information that support these capabilities, which they prefer (or possibly require) to be kept private.

With this motivation in mind, a number of algorithms have recently been devised for distributed privacy-preserving planning [Bonisoli et al. 2014, Torreño et al. 2014, Luis and Borrajo 2014, Nissim and Brafman 2014b]. In these algorithms, agents supply a public interface only, and through a distributed planning process, come up with a plan that achieves the desired goal without being required to share a complete model of their actions and local state with other agents. But there is a major caveat: it is well known from the literature on secure multi-party computation [Yao 1982] that the fact that a distributed algorithm does not require an agent to explicitly reveal private information does not imply that other agents cannot deduce such private information from other information communicated during the run of the algorithm. Consequently, given that privacy is the raison d’être for these algorithms, it is important to strive to improve the level of privacy provided, and to provide formal guarantees of such privacy properties.

To the best of our knowledge, to date, there have been two attempts to address this issue. In [Tožička et al. 2017], the authors describe a secure planner for multi-agent systems. However, as they themselves admit, this planner is impractical, as it requires computing all possible solutions. [Brafman 2015] describes secure mafs, a modification of the multi-agent forward search algorithm [Nissim and Brafman 2014a] in which an agent never sends similar states. secure mafs is an efficient algorithm. In fact, an implementation of it based on an equivalent macro-sending technique [Maliah et al. 2016] shows state-of-the-art performance. But it is not clear what security guarantees it offers. While [Brafman 2015] provides some privacy guarantees, they are restricted to very special cases, and it seems most plausible that secure mafs is not secure in general.

The goal of this paper is to place the development of secure mafs on firm footing by developing appropriate notions of privacy that are useful and realizable in the context of search algorithms, characterizing the privacy-preserving properties of secure mafs, and providing rigorous proofs of its correctness and completeness. We define a notion of f-indistinguishable secure computation, and more specifically, we suggest a notion of PST-secure computation, which is not as strong as strong privacy, but is meaningful and more stringent than weak privacy. Roughly speaking, given a function f on planning instances, we say that an algorithm is f-indistinguishable if it sends the same messages during its computation on any two instances whose f-value is identical. PST-secure computation refers to the special case in which f returns a projected version of the search space – one in which only the values of public variables are available.

The paper is structured as follows: First, we describe the basic model of privacy-preserving classical multi-agent planning. Then, we discuss some basic notions of privacy. Next, we gradually develop more practical versions of PST-secure planning algorithms, eventually describing an algorithm that is, essentially, secure mafs, and prove that the latter is sound, complete, and PST-secure.

2 The Model

ma-strips [Brafman and Domshlak 2008] is a minimal extension of strips to multi-agent domains. A strips problem is a 4-tuple Π = ⟨P, I, G, A⟩, where

  • P is a finite set of primitive propositions, which are essentially the state variables; a state is a truth assignment to P.

  • I is the initial state.

  • G is the set of goal states.

  • A is a set of actions. Each action a ∈ A has the form ⟨pre(a), eff(a)⟩, where pre(a) is the set of preconditions of a and eff(a) is a set of literals denoting the effects of a. We use a(s) to denote the state attained by applying a in s. The state a(s) is well defined iff s ⊨ pre(a). In that case, a(s) ⊨ p (for p ∈ P) iff p ∈ eff(a), or s ⊨ p and ¬p ∉ eff(a).

A plan π = (a_1, …, a_m) is a solution to Π iff a_m(⋯a_1(I)⋯) ∈ G.
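To make the semantics above concrete, the following Python sketch (illustrative only; the class and function names, the split of effects into add and delete sets, and the treatment of the goal as a set of propositions that must hold are not part of the formalism) shows how states, action application, and plan validation can be represented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    pre: frozenset       # propositions that must hold (preconditions)
    add: frozenset       # positive effects
    delete: frozenset    # negative effects

def applicable(state: frozenset, a: Action) -> bool:
    # s |= pre(a): every precondition holds in the state
    return a.pre <= state

def apply_action(state: frozenset, a: Action) -> frozenset:
    # a(s) is defined only when a is applicable in s
    assert applicable(state, a)
    return (state - a.delete) | a.add

def is_solution(init: frozenset, goal: frozenset, plan) -> bool:
    # a plan is a solution iff executing it from I yields a goal state
    s = init
    for a in plan:
        if not applicable(s, a):
            return False
        s = apply_action(s, a)
    return goal <= s
```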

An ma-strips problem is a strips problem in which the action set is partitioned among a set of k agents. Formally, Π = ⟨P, I, G, A_1, …, A_k⟩, where P, I, G are as above, and A_i is the set of actions of agent i.

Work on privacy-preserving multi-agent planning seeks algorithms that generate good, or possibly optimal, plans while not disclosing private information about the agents’ actions and the variables that they manipulate. For this to be meaningful, one has to first define what information is private and what information is not. Here we focus on the standard notion of private actions and private propositions. Thus, each action a ∈ A_i is either private to agent i or public. Similarly, each proposition p ∈ P is either private to some agent or public. To make sense, however, p can be private to agent i only if p does not appear in the description of an action a ∈ A_j for j ≠ i. Similarly, a ∈ A_i can be private to i only if all propositions in a’s preconditions are either public or private to i and all propositions in a’s effects are private to i.

Hence, a privacy preserving ma-strips problem (pp-mas) is defined as a set of local planning problems Π_i = ⟨P_i ∪ P_pub, I_i, G, A_i⟩, where P_i denotes the propositions private to agent i and P_pub denotes the public propositions. Here, I_i is the value of P_i ∪ P_pub in the initial state, and the goal G is shared among all agents and involves public propositions only. Furthermore, any action in A_i involves propositions in P_i ∪ P_pub only. A solution for a pp-mas problem is the sequence of all the public actions in a solution for the ma-strips problem.

We note that a more refined notion of privacy has been suggested in prior work. While we believe that the ideas discussed in this paper can be extended to that setting, we leave this for future work.

Recall that in classical planning, we assume that the world state is fully observable to the acting agent and actions are deterministic. The multi-agent setting shares these assumptions, except that each agent fully observes only the primitive propositions in its local problem, i.e., those in P_i ∪ P_pub.

An issue that often arises is whether private goals should be allowed, or whether all goals should be public. Public goals make it easier for all agents to detect goal achievement, and have been assumed in most past work. As there is a simple reduction from private to public goals, albeit one that makes public the fact that all private goals of an agent have been achieved, we will maintain the assumption that all goal propositions are public.

Next, we define the notion of a public projection. The public projection of an action a, denoted a^pub, is defined as ⟨pre(a) ∩ P_pub, eff(a) ∩ P_pub⟩. That is, it is the same action, but with its private propositions removed. Accordingly, a^pub is empty for a private action a. The public projection s^pub of a state s is the partial assignment obtained by projecting s to P_pub.

Now, we define Π^pub, the public projection of Π, to be the strips planning problem ⟨P_pub, I^pub, G, {a^pub : a is a public action}⟩.
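The public projection can be sketched directly on top of the earlier Python representation (again illustrative; it reuses the Action class from the previous sketch, and `project_state` and `project_action` are assumed helper names), keeping only the public propositions of states and actions.

```python
def project_state(state: frozenset, public_props: frozenset) -> frozenset:
    # s^pub: the restriction of s to the public propositions
    return state & public_props

def project_action(a: Action, public_props: frozenset) -> Action:
    # a^pub: the same action with its private propositions removed;
    # for a private action the projection is empty
    return Action(
        name=a.name,
        pre=a.pre & public_props,
        add=a.add & public_props,
        delete=a.delete & public_props,
    )
```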

The search-tree induced by a planning problem plays a key role in our definition of privacy in distributed forward search planning.

Definition 1.

The search tree associated with an MA planning problem Π, denoted by T_Π, is a tree inductively defined below, where every node is labeled by a state and is either private to some agent or public, and every edge is labeled by an action. The root is labeled by I, and is public. The children of a node n labeled by a state s are defined as follows:

  • If n is public, then for every action a applicable in s there is a child labeled by a(s).

  • If n is private to agent i, then for every a ∈ A_i applicable in s there is a child labeled by a(s).

  • In both cases, the child is public if a is public, and is private to agent i if a is private to i.

  • The edge from n to this child is labeled by a.

We will also assume the existence of some lexicographic ordering over states which defines the order of the children of a node. We assume that public variables appear before private variables in this order.

Next, we define the public projection of a search tree. First, we project all states onto their public parts. Then, we connect every public node to its closest public descendants, remove all private nodes, and remove duplicate children in the resulting tree. Formally:

Definition 2.

The public projection of the search tree of Π, denoted PST(Π), is a tree, defined below, whose nodes are labeled by assignments to the public variables of Π and whose edges are labeled by public actions. Each node in PST(Π) corresponds to a list of public nodes in the search tree T_Π, where the public states of all the nodes in the list equal the public state of the node in PST(Π) (this list is used only to construct PST(Π) from T_Π and is not part of PST(Π)). The tree is inductively defined.

  • The root of PST(Π) corresponds to the root of T_Π and is labeled by I^pub.

  • Let u be a node in PST(Π), with public state s_pub, that corresponds to public nodes n_1, …, n_m in the search tree T_Π. Denote the (public and private) states of n_1, …, n_m by s_1, …, s_m, respectively. We define the children of u in two stages:

    • First, for every n_j and every public descendant n of n_j such that all internal nodes on the path from n_j to n are private, i.e., the labels of the edges on the path from n_j to n are actions a_1, …, a_t such that a_1, …, a_{t−1} are private actions and a_t is a public action, we construct a child v of u. We label the edge from u to v by the last action on this path, namely, by a_t. The public state of v is the public state in n, and we associate n to v.

    • We remove duplicated children. That is, if v and v′ are children of u such that the actions labeling the edges (u, v) and (u, v′) are the same and the public states of v and v′ are the same, then we merge v and v′ and associate all the nodes associated with them to the merged node. We repeat this process until there are no children that can be merged.
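For intuition, here is a compact Python sketch of Definition 2 (illustrative names such as `TreeNode` and `build_pst`; it assumes an explicit, finite search tree is available, which of course is not how the later algorithms operate): it connects each node to its closest public descendants and merges duplicated children.

```python
class TreeNode:
    def __init__(self, state, public=True):
        self.state = state        # full (public + private) state
        self.public = public      # whether the node is public
        self.children = []        # list of (action, TreeNode); action = edge label

def closest_public_descendants(node):
    """Yield (last_public_action, descendant) for every public descendant of
    `node` reachable through private internal nodes only."""
    for action, child in node.children:
        if child.public:
            yield action, child
        else:
            yield from closest_public_descendants(child)

def build_pst(nodes, project):
    """Build one PST node from the list of search-tree nodes associated with it.
    `project(state)` returns the (hashable) assignment to the public variables.
    Returns (public_state, children), with children keyed by (action, public state)."""
    public_state = project(nodes[0].state)
    groups = {}   # duplicated children are merged: same edge label, same public state
    for n in nodes:
        for action, desc in closest_public_descendants(n):
            groups.setdefault((action, project(desc.state)), []).append(desc)
    children = {key: build_pst(group, project) for key, group in groups.items()}
    return public_state, children

# usage: pst = build_pst([search_tree_root], project_fn)
```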

3 Privacy Guarantees

The main property of interest in a solution algorithm for a pp-mas planning problem, aside from soundness and completeness, is the level of privacy it preserves. The main privacy-related question one asks about a pp-mas algorithm is whether coalitions of agents participating in the planning algorithm are able to gain information about the private propositions and actions of other agents.

In what follows we work under the following assumptions:

  • Agents are honest, but curious. This is a standard assumption in secure multi-party computation (see, e.g., [Lindell and Pinkas 2009]). According to this assumption, which we believe applies to many real-world interactions among business partners and ad-hoc teams, the agents execute the algorithm as specified, but are curious enough to collude and try to learn what they can about the other agents without acting maliciously. (Alternatively, consider malicious agents that eavesdrop on the communication among agents, but are not part of the team, so they cannot intervene.)

  • The algorithm is synchronous. That is, agents operate with a common clock and send messages in rounds; these messages are delivered immediately, without corruption or delay.

  • We require perfect security; that is, even an unbounded adversary cannot learn any additional information beyond the leakage function (defined below).

To date, most work has been satisfied with algorithms that never explicitly expose private information, typically by encrypting this information prior to communicating it to other agents. Accordingly, we say that an algorithm is weakly private if the names of private actions and private state variables and their values are never communicated explicitly.

However, the fact that information is not explicitly communicated is not sufficient. Consider, for example, an algorithm in which agents share with each other their complete domains, except that the names of private actions and state variables are obfuscated by (consistently) replacing each with some arbitrary random string. This satisfies the requirement of weak privacy, but provides the other agents with a complete model that is isomorphic to the real model. For example, imagine a producer who expects exclusivity from its suppliers. With this scheme, the producer will not know the real names of its suppliers’ other customers, but it will certainly learn of their existence. Similarly, a shipping company may not want others to learn the size of its fleet, or the number of workers it employs.

At the other extreme we have strong privacy. We say that an algorithm is strongly private if no coalition of agents can deduce from the information obtained during a run of the algorithm any information that it cannot deduce from the public projection of the planning problem, the private information the coalition has (i.e., the initial states and the actions of the agents in the coalition), and the public projection of “its solution”. As we are considering search problems, where many solutions can exist, the traditional privacy definition for functions does not apply. The problem is that the solution chosen by the algorithm can itself leak information (e.g., an algorithm that returns the lexicographically first solution leaks that no lexicographically smaller solution exists). See [Beimel et al. 2008] for a discussion of this problem and a suggested definition of privacy for search problems.

Furthermore, strong privacy is likely to be very difficult to achieve, and to prove, unless stronger cryptographic methods are introduced. With the latter, it would be possible to develop algorithms that are strongly private, but, at least with our current knowledge, this is likely to come at a substantial computational cost that renders them impractical for the size of inputs we would like to consider. Weak privacy, on the other hand, seems too weak in most cases, and provides no real guarantee, as it is not clear what information is deducible from the run of the algorithm.

Given this state of affairs, where in existing algorithms strong privacy is not as practical as desired, whereas weak privacy tells us little, if anything, about the information that might be leaked, it is important to provide tools for specifying the privacy guarantees of existing and new algorithms. Here we suggest a type of privacy “lower bound” in the form of an indistinguishability guarantee. More specifically, given a function f defined on planning domains, we say that an algorithm is f-indistinguishable if a coalition of agents participating in the planning algorithm solving a problem Π cannot distinguish between Π and any other domain Π′ such that f(Π) = f(Π′). We provide two equivalent definitions of this notion.

We define the view of a set of agents C, denoted view_C(Π_1, …, Π_k), in an execution of a deterministic algorithm with inputs Π_1, …, Π_k, as all the information C sees during the execution, namely, the inputs of the agents in C (that is, (Π_i)_{i∈C}) and the messages exchanged during the execution of the algorithm.

Definition 3.

Let f be a (leakage) function. We say that a deterministic algorithm is f-indistinguishable if for every set of agents C and for every two inputs (Π_1, …, Π_k) and (Π′_1, …, Π′_k) such that Π_i = Π′_i for every i ∈ C and f(Π_1, …, Π_k) = f(Π′_1, …, Π′_k), the view of C is the same, i.e., view_C(Π_1, …, Π_k) = view_C(Π′_1, …, Π′_k).

Definition 4.

Let f be a (leakage) function. We say that a deterministic algorithm is f-indistinguishable if there exists a simulator Sim such that for every set of agents C and for every input (Π_1, …, Π_k), the view of C is the same as the output of the simulator that is given (Π_i)_{i∈C} and f(Π_1, …, Π_k), i.e., view_C(Π_1, …, Π_k) = Sim((Π_i)_{i∈C}, f(Π_1, …, Π_k)).

In Definition 4, the simulator is given the inputs of the agents in C and f(Π_1, …, Π_k) – the output of the leakage function applied to the inputs of all agents. The simulator is required to produce all the messages that were exchanged during the algorithm. If such a simulator exists, then all the information that the adversary can learn from the execution of the algorithm is implied by the inputs of the parties in C and by f(Π_1, …, Π_k).
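The following toy harness (not from the paper; all names are illustrative) makes the simulation paradigm of Definition 4 concrete: the real view of a coalition consists of its own inputs plus the exchanged messages, and the simulator must reproduce it from the coalition's inputs and the leakage alone.

```python
def coalition_view(run_algorithm, inputs, coalition):
    """The view of the coalition: its own inputs and all exchanged messages.
    `run_algorithm(inputs)` is assumed to return the messages exchanged."""
    return {
        "inputs": {i: inputs[i] for i in coalition},
        "messages": run_algorithm(inputs),
    }

def check_simulation(run_algorithm, simulator, leakage, coalition, test_inputs):
    """Check Definition 4 on a finite set of test inputs: the simulator, given only
    the coalition's inputs and f(inputs), must reproduce the coalition's view."""
    for inputs in test_inputs:
        real = coalition_view(run_algorithm, inputs, coalition)
        simulated = simulator({i: inputs[i] for i in coalition}, leakage(inputs))
        if simulated != real:
            return False
    return True
```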

Claim 1.

The two definitions are equivalent.

Proof.

Assume that an algorithm is f-indistinguishable according to Definition 4. Let (Π_1, …, Π_k) and (Π′_1, …, Π′_k) be two inputs such that Π_i = Π′_i for every i ∈ C and f(Π_1, …, Π_k) = f(Π′_1, …, Π′_k). Thus, Sim((Π_i)_{i∈C}, f(Π_1, …, Π_k)) = Sim((Π′_i)_{i∈C}, f(Π′_1, …, Π′_k)). Therefore, by Definition 4, view_C(Π_1, …, Π_k) = view_C(Π′_1, …, Π′_k).

Assume that an algorithm is f-indistinguishable according to Definition 3. Let (Π_1, …, Π_k) be any input. We define a simulator for the algorithm. Given (Π_i)_{i∈C} and f(Π_1, …, Π_k), we construct a simulator Sim as follows:

  • Sim finds inputs (Π′_1, …, Π′_k) such that f(Π′_1, …, Π′_k) = f(Π_1, …, Π_k), where Π′_i = Π_i if i ∈ C.

  • Sim outputs view_C(Π′_1, …, Π′_k).

By Definition 3, view_C(Π′_1, …, Π′_k) = view_C(Π_1, …, Π_k); thus, Sim((Π_i)_{i∈C}, f(Π_1, …, Π_k)) = view_C(Π_1, …, Π_k), as required in Definition 4. ∎

Note that the simulator is not given the output of the function computed by the algorithm – information that is implied by the messages exchanged in the algorithm. The simulator can compute the view of C, hence the output, from the information it gets. This implies that the leakage (together with (Π_i)_{i∈C}) determines the output of the algorithm. This is an important feature of our definition, as we consider search problems where there can be many possible outputs. The output that an algorithm returns might leak information on the inputs (see [Beimel et al. 2008]), and it is not clear how to compare the privacy provided by two algorithms returning different solutions. Our definition bypasses this problem as it explicitly specifies the leakage.

In this paper, we focus on the particular leakage function PST, which returns the public projection of the problem’s search tree. That is, the algorithms we consider have the property that a set of agents cannot distinguish between two problem instances whose public projections and PSTs are identical. We refer to this as PST-indistinguishable security.

A recently proposed example of privacy w.r.t. a class of domains is cardinality-preserving privacy [Maliah et al. 2017], where the idea is that agents cannot learn the number of values of some variable, such as the number of locations served by a truck. (Defining this formally requires using multi-valued variable domains.) Another notion of privacy recently introduced is agent privacy [Faltings et al. 2008], in which agents are not aware of other agents with whom they do not have direct interactions – i.e., agents that require or affect some of the variables that appear in their own actions. This notion is more natural when such interactions are explicitly modelled using the notion of subset-private variables [Bonisoli et al. 2014]. These notions seem more ad hoc and weaker than our definition of privacy, and we will not discuss them further in this paper.

4 A PST-Indistinguishable Algorithm

The goal of this section is to show that secure mafs is PST-indistinguishable. We do so by gradually refining a very simple (and inefficient) algorithm to obtain an algorithm that is essentially identical to secure mafs, which, as shown by [Maliah et al. 2016], is quite efficient in practice, and is thus the first algorithm that is both practical and has clear theoretical guarantees. This gradual progression makes the proofs and ideas simpler.

4.1 A Simple Algorithm

We start with a very simple algorithm, which we call PST-Forward Search. The algorithm simply constructs PST(Π), the public projection of the search tree of Π. The search progresses level by level in the public projection of the search tree. In a given level ℓ of the tree, each agent i: (1) computes the children of all the nodes in level ℓ that result from its own actions, where a child of a node results from a sequence of private actions followed by a single public action of the agent, and (2) sends the public state of each child (as well as a description of the path to the child) to all other agents (removing duplicates). The PST-Forward Search algorithm is described in Algorithm 1. In this algorithm, the agents maintain a set L_ℓ for every level ℓ of the tree, which contains all nodes in level ℓ. Every element of the set is a node represented as a pair (σ, α), where σ = (s_0, …, s_ℓ) is a sequence of public states such that s_0 = I^pub, and α = (a_1, …, a_ℓ) is a sequence of public actions. Such a pair describes a path in the PST from the root to the node in level ℓ. To find the actions that an agent can apply from a node, it needs to compute the possible private states at that node, as this information is not contained in the messages it received. To do this, the agent reconstructs its private states, as described in Algorithm compute-private-states. This is, of course, highly inefficient, but it has the desired privacy property.

1:  initialization: ℓ = 0; set L_0 = {((I^pub), ())}. // L_ℓ will contain the states at level ℓ of the PST. Each agent maintains a copy of it.
2:  while goal has not been achieved do
3:     ℓ = ℓ + 1; every agent i sets L_ℓ = ∅ and M_i = ∅.
4:     for i = 1 to k do
5:        Agent i does the following:
6:        for each (σ, α) ∈ L_{ℓ−1} do
7:           let s be the last state in σ.
8:           executes S = compute-private-states(σ, α).
9:           for each private state s_priv ∈ S do
10:              for each sequence a_1, …, a_t of actions of i applicable from (s, s_priv), where a_1, …, a_{t−1} are private and a_t is public do
11:                 computes s′ = a_t(⋯a_1(s, s_priv)⋯) and M_i = M_i ∪ {(σ·s′^pub, α·a_t)}
12:        i sends M_i to all agents (where the elements of M_i are sent according to some canonical order).
13:        each agent updates its copy: L_ℓ = L_ℓ ∪ M_i.
14:        if the last state in some (σ, α) ∈ L_ℓ satisfies the goal then
15:           all agents output (σ, α) and halt.
Algorithm 1 PST Forward Search
1:  /* The algorithm reconstructs the possible private states of agent i at the node (σ, α): it starts from I_i and updates the set according to the public states in σ = (s_0, …, s_ℓ) and the actions of i in α = (a_1, …, a_ℓ). */
2:  let S_0 = {I_i^priv}.
3:  for j = 1 to ℓ do
4:     if a_j is not an action of i then
5:        S_j = S_{j−1}.
6:     else
7:        S_j = ∅.
8:        for each s_priv ∈ S_{j−1} and sequence b_1, …, b_t of private actions of i such that b_1, …, b_t, a_j is applicable from (s_{j−1}, s_priv) do
9:           let s′ = a_j(b_t(⋯b_1(s_{j−1}, s_priv)⋯)).
10:           if s′^pub = s_j then
11:              S_j = S_j ∪ {s′^priv}.
12:  return S_ℓ.
Algorithm 2 compute-private-states

In Algorithm 1, the messages sent correspond exactly to the PST nodes, and therefore two domains with an identical PST yield identical messages. To enable an exact simulation, we need to specify the order in which each agent sends the children it generates in a given level; we assume that this is done in some canonical order. We supply the formal proof of privacy in the next claim.

Claim 2.

Algorithm 1, the simple private search algorithm, is a PST-indistinguishable secure algorithm.

Proof.

The simulator, given the PST PST(Π), traverses the tree level by level; in each level it goes over all agents, starting from agent 1 and ending at agent k, and for each agent i it sends the nodes of the level resulting from an action of i, where for each node it sends the public states and the public actions on the path from the root to the node. The order of sending the nodes is as in the algorithm, according to the fixed canonical order. ∎

4.2 Using IDs

Next, we present an optimization of Algorithm 1 that eliminates the need to compute private states and merges some nodes in the tree, reducing the communication complexity of the algorithm. We call this version PST-ID Forward Search.

Notice that only actions of agent i change the local state of i. There are two ways to use this observation. In the first approach, for each node that is sent, the agent sending the node locally keeps a list containing its possible local states in that node. When an agent wants to compute the children of some node, it looks for its last action on the path to the node and retrieves its possible local states after that action. In the second approach, which we use, each agent associates the possible local states with a unique id and keeps the possible local states associated with this id. Each time an agent sends a node of the tree, it sends the public state of the node as well as the ids encoding the local states of each agent. Notice that each id is not a function of these local states, but only of the particular PST node with which it is associated. When an agent wants to compute the children of a node resulting from its actions, it does the following:

  • It retrieves all private states associated with its id in this state.

  • It expands the public state and each possible private state using all possible action sequences consisting of private actions and ending with a single public action.

  • For every node reached, it generates a new id and associates with that id its local states in the generated states sharing this projected public state, keeping the ids of all other agents associated with the original node.

  • It orders the nodes based on some lexicographic order.

  • It sends these nodes, with their public states and their associated ids, in this order to all agents.

Note that the above algorithm sends at each stage a vector consisting of a public state and an id for each agent. As this id encodes the private state(s) of the agent, we can think of the message as representing the state, with its private components encoded. The agent needs to send neither the actions leading to the new node nor the father of the new node. Furthermore, if two (or more) children of a node have the same public state, the agent does not need to send them twice; it can send one public state, together with the ids of the other agents taken from the original node, and one new id of its own associated with all of its possible private states associated with any of these children. We go one step further, merging all nodes generated by an agent in level ℓ (possibly with different fathers) if they have the same public state and the same ids for all other agents; a sketch of this id bookkeeping appears below.
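A minimal sketch of this per-agent id bookkeeping (illustrative; `IdTable` and its methods are not the paper's pseudocode, and all arguments are assumed hashable, e.g., frozensets and tuples) could look as follows.

```python
from itertools import count

class IdTable:
    """Per-agent id bookkeeping. An id stands for a set of possible local
    (private) states of this agent; the id itself reveals nothing about them."""

    def __init__(self):
        self._next = count(1)
        self._states = {}      # id -> set of possible private states
        self._merge_key = {}   # (level, public_state, other_agents_ids) -> id

    def assign(self, level, public_state, other_ids, private_state):
        """Return the id to attach to a node generated at `level`; nodes with the
        same public state and the same ids of the other agents are merged by
        extending the existing id instead of creating a new one."""
        key = (level, public_state, other_ids)
        if key in self._merge_key:
            node_id = self._merge_key[key]
            self._states[node_id].add(private_state)
        else:
            node_id = next(self._next)
            self._merge_key[key] = node_id
            self._states[node_id] = {private_state}
        return node_id

    def private_states(self, node_id):
        """All private states associated with an id (used when expanding a node
        that comes back carrying this id)."""
        return self._states[node_id]
```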

The formal description of the algorithm appears in Algorithm 3. The algorithm that recovers a solution after the goal has been reached is described in Algorithm 4.

1:  initialization: ℓ = 0; every agent i sets L_0 = {(I^pub, (0, …, 0))}, next_i = 1, and states_i(0) = {I_i^priv}. // states_i(id) denotes the local states of agent i associated with the id id.
2:  while goal has not been achieved do
3:     ℓ = ℓ + 1; every agent i sets L_ℓ = ∅, M_i = ∅, and N_i = ∅.
4:     for i = 1 to k do
5:        agent i does the following:
6:        for each (s, (id_1, …, id_k)) ∈ L_{ℓ−1} do
7:           for each private state s_priv ∈ states_i(id_i) do
8:              for each sequence a_1, …, a_t of actions of i applicable from (s, s_priv), where a_1, …, a_{t−1} are private and a_t is public do
9:                 s′ = a_t(⋯a_1(s, s_priv)⋯)
10:                 M_i = M_i ∪ {(s′^pub, (id_1, …, id_k), s′^priv)}.
11:        agent i sorts the elements of M_i, first by the public state, then by the ids, and then by the private state. Let e_1, …, e_m be the sorted elements of M_i.
12:        for j = 1 to m do
13:           if j > 1 and e_j and e_{j−1} have the same public state and the same ids of the other agents then
14:              states_i(id) = states_i(id) ∪ {the private state of e_j}, where id is the id created for e_{j−1}.
15:           else
16:              id = next_i; next_i = next_i + 1.
17:              states_i(id) = {the private state of e_j} and N_i = N_i ∪ {(the public state of e_j, the ids of e_j with id_i replaced by id)}.
18:        i sends N_i to all agents (where the elements of N_i are sent according to some canonical order).
19:        each agent updates: L_ℓ = L_ℓ ∪ N_i.
20:     if the state in some element of L_ℓ satisfies the goal then
21:        the agents execute recover-solution, output its result, and halt.
Algorithm 3 PST-ID Forward Search

Algorithm 4, described below, returns a solution to the planning problem, i.e., a sequence of public actions on a path from the root of the PST to a node in level ℓ that satisfies the goal. Clearly, this sequence of actions must be computed from the information computed by the algorithm so far. Furthermore, to guarantee privacy, this sequence of actions should be determined by the PST (that is, a simulator can generate it from the PST). In Algorithm 4 we choose it in a specific way that is fairly efficient (especially if the agents keep additional information during Algorithm 3).

In Algorithm 4, we say that an element e = (s, (id_1, …, id_k)) leads to an element e′ = (s′, (id′_1, …, id′_k)) by agent i if there exist private states s_priv ∈ states_i(id_i) and s′_priv ∈ states_i(id′_i) and a sequence of actions a_1, …, a_t of agent i such that a_1, …, a_{t−1} are private, a_t is public, and a_1, …, a_t are applicable from (s, s_priv) and lead to (s′, s′_priv).

1:  let e_ℓ be the first element in L_ℓ that satisfies the goal.
2:  /* recall that all agents have a copy of L_0, …, L_ℓ. */
3:  for j = ℓ downto 1 do
4:     let i be the agent performing the last action leading to e_j.
5:     agent i finds the first element e_{j−1} ∈ L_{j−1} leading to e_j.
6:     let a_j be the last action in a sequence of actions of i leading from e_{j−1} to e_j (if there is more than one such action, choose the lexicographically first action).
7:     agent i sends e_{j−1} and a_j to all other agents.
8:  return (a_1, …, a_ℓ).
Algorithm 4 recover-solution
Claim 3.

Algorithm 3, the PST-ID Forward Search algorithm, is a PST-indistinguishable secure algorithm.

Proof.

We construct a simulator proving that Algorithm 3 is a PST-indistinguishable secure algorithm. We first give a high-level description of the simulator. The simulator, given the PST, traverses the tree level by level and simulates the algorithm. For each level ℓ, it goes over the agents from agent 1 to agent k, and for each agent i it produces a list as the agent would have sent, using the nodes in level ℓ that result from an action of i. Recall that each element in such a list is a public state and a list of ids. To produce these ids (and to know which nodes should be merged), for every node v in level ℓ the simulator computes a label, denoted label(v), that contains k ids; this label is computed using the label of the father u of v, denoted label(u). The labels of v and u are the same except for the i-th id, which is carefully computed to simulate Algorithm 3. After reaching the first level in which there is a node satisfying the goal, the simulator, using the PST, reconstructs the solution that Algorithm 4 returns.

The simulator is formally described in Algorithm 5. The input of Algorithm 5 is a PST T; we denote its root by r. It can easily be proved by induction that the simulator computes the same messages as Algorithm 3. ∎

0:  Input: a PST tree T with root r.
1:  initialization: ℓ = 0; label(r) = (0, …, 0); for every agent i set next_i = 1.
2:  while goal has not been achieved do
3:     ℓ = ℓ + 1; for every agent i set M_i = ∅ and N_i = ∅.
4:     L_ℓ = ∅.
5:     for i = 1 to k do
6:        for each node v in level ℓ s.t. the edge (father(v), v) is labeled by an action of i do
7:           let u be the father of v and s be the state of node v.
8:           M_i = M_i ∪ {(s, label(u), v)}.
9:        sort the elements of M_i, first by the public state, then by the ids, and then by the order of the nodes v in T.
10:        let e_1, …, e_m be the sorted elements of M_i.
11:        for j = 1 to m do
12:           if j = 1 or the public state of e_j differs from that of e_{j−1} or the ids of the other agents in e_j differ from those in e_{j−1} then
13:              id = next_i.
14:              next_i = next_i + 1.
15:           label(v_j) = the ids of e_j with the i-th id replaced by id, where v_j is the node in e_j; N_i = N_i ∪ {(the public state of e_j, label(v_j))}.
16:        send N_i on behalf of i to all agents (where the elements of N_i are sent according to some canonical order).
17:        for every e_j ∈ M_i set L_ℓ = L_ℓ ∪ {(the public state of e_j, label(v_j))}.
18:     if the state in some element in L_ℓ satisfies the goal then
19:        execute sim-recover-solution, output its result, and halt.
Algorithm 5 Simulator for Algorithm 3 – The PST-ID Forward Search Algorithm
1:  let (s_ℓ, lab_ℓ) be the first element in L_ℓ that satisfies the goal and V_ℓ be all nodes in level ℓ whose public state is s_ℓ and whose label is lab_ℓ.
2:  for j = ℓ downto 1 do
3:     let i be the agent whose action labels the edges entering the nodes of V_j.
4:     let (s_{j−1}, lab_{j−1}) be the first element in L_{j−1} that labels a father of a node in V_j (where s_{j−1} is the public state of that father).
5:     let a_j be the lexicographically first action labeling an edge from a node u such that the public state of u is s_{j−1} and label(u) = lab_{j−1} to a node in V_j.
6:     let V_{j−1} be all nodes u in level j−1 such that the public state of u is s_{j−1}, label(u) = lab_{j−1}, and there exists an edge from u to a node in V_j labeled by the action a_j.
7:     Send the message (s_{j−1}, lab_{j−1}) and a_j on behalf of agent i.
8:  return (a_1, …, a_ℓ).
Algorithm 6 sim-recover-solution

4.3 Merging More Nodes

In PST-ID Forward Search, an agent merges two nodes if they are in the same level, have the same public state, and the ids of the other agents are the same. The simple case in which two nodes are merged is when they have the same parent and there are two action sequences ending with the same public state (if the last action in these sequences is the same, they are already merged in the PST). There are somewhat more complicated scenarios in which nodes are merged. For example, suppose that from some public state s and private state s_priv in level ℓ, agent i can apply two sequences of its actions, each containing two public actions, and both sequences result in the same public state s′. Then the two resulting nodes are in the same level (ℓ+2) and they will be merged. However, suppose that a single public action of i is also applicable in (s, s_priv) and results in public state s′ (in level ℓ+1). The resulting node is not in the same level, and the previous nodes are not merged with this new node. As a result, the current algorithm sends two nodes that are identical in every respect, except for agent i’s id. One key motivation for the original secure mafs algorithm was to prevent this situation and never send two nodes that differ only in the private state of the sending agent.

There is a simple (though probably inefficient) way of overcoming this. Observe that, under the assumption that an agent never sends two states that differ only in its own id, the only way two states generated by an agent i can be identical is if they have a common ancestor s, and both were generated from s by applying actions of i only. As in the above example, these could be sequences containing different numbers of public actions, and hence ending at different levels of the PST. However, once a public action is applied by some other agent j, j’s id changes, and hence the two states will differ on j’s id. Given this observation, it is easy to modify PST-ID FS so that an agent never sends two nodes that are identical in all but (possibly) its own id; we call the resulting algorithm PST-ID-E Forward Search. Whereas in PST-ID FS an agent sends each state obtained by applying exactly one public action, in PST-ID-E the agent expands the entire local sub-tree below a node in its open list. That is, it considers states reachable by applying more than one of its public actions. This could be a large sub-tree, of course, but under the assumption that all variables have finite domains, it is finite, and with appropriate bookkeeping (maintaining a closed list) it can be constructed in finite time; a sketch of this expansion appears after the modified line below. Thus, the only change is in line 8 of Algorithm 3, where the new line is

for each sequence a_1, …, a_t of actions of i applicable from (s, s_priv), where a_1, …, a_{t−1} are public or private and a_t is public do.
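A sketch of the corresponding expansion (illustrative helper names; `applicable`, `apply_action`, and `is_public` are assumed to be supplied by the caller) explores the agent's entire local sub-tree with a closed list, collecting every state reached immediately after one of the agent's public actions.

```python
from collections import deque

def expand_local_subtree(state, agent_actions, applicable, apply_action, is_public):
    """Explore everything reachable from `state` using this agent's own actions
    only, and collect every state reached right after one of its public actions.
    A closed list keeps the exploration finite for finite-domain variables."""
    closed = {state}
    frontier = deque([state])
    results = set()
    while frontier:
        s = frontier.popleft()
        for a in agent_actions:
            if not applicable(s, a):
                continue
            s2 = apply_action(s, a)
            if is_public(a):
                results.add(s2)      # candidate node to send (after id bookkeeping)
            if s2 not in closed:
                closed.add(s2)
                frontier.append(s2)
    return results
```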

Claim 4.

The PST-ID-E Forward Search algorithm is a PST-indistinguishable secure algorithm.

Proof.

This follows immediately from the proof of Claim 3 using the following observation: Take the PST, and add to it additional edges between every node and all its descendants that are reachable using public actions of the same agent only. Now, use the simulator for PST-ID FS on this modified tree. ∎

Note that given the modified tree in the proof above, it is possible to recover the original ordering by simply taking into account the number of public actions that were applied in the path from the initial state to the current state.

Claim 5.

In the PST-ID-E Forward Search algorithm, an agent never sends two states that differ only in its own id.

Proof.

Consider two states s_1 and s_2 sent by an agent i during the run of the algorithm. Let the level of a state denote the number of times (plus 1) a public action was applied on the path to this state by an agent that did not apply the previous public action on the path. First, assume that s_1 and s_2 have a common ancestor such that all actions on the paths from this ancestor to s_1 and to s_2 are of the same agent i. In this case, if they are identical in all other respects, an id that contains both their private states is formed, and only one state is sent. Now suppose that s_1 and s_2 do not have such an ancestor. Consider the sequences of states sent by agents on the paths from the root to s_1 and to s_2. At some point these states differ, and hence the id of the agent that sent them differs too. But from this point on, the ids of all sending agents must change. ∎

4.4 Heuristic Search

So far, the algorithms we described expanded nodes in breadth-first manner, following some canonical ordering within each level. PST-ID-E also fits this view, when levels are defined so that the level increases only when a public action is applied by an agent who did not apply the last public action. However, the privacy guarantees do not rest on this property. In principle, the PST can be traversed in any order, and all the above results remain correct provided the traversal order is a function of the PST only. Thus, for example, any heuristic search algorithm can be used, provided the heuristic depends only on the history of the public part of the state, or on the current public state, as in the sketch below. This follows trivially from the fact that a simulator that has access to the PST can simulate any such ordering.
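For illustration, a best-first traversal whose order depends only on public information (and hence only on the PST) could be sketched as follows; `expand` and `heuristic` are assumed callbacks that see public states only.

```python
import heapq
from itertools import count

def pst_best_first(root_public_state, expand, heuristic):
    """Visit PST nodes in an order determined only by public information:
    `expand(pub_state)` yields successor public states, and
    `heuristic(pub_state)` scores a node using the public state alone,
    so the visiting order is a function of the PST only."""
    tie = count()                 # deterministic tie-breaking (canonical order)
    queue = [(heuristic(root_public_state), next(tie), root_public_state)]
    visited = set()
    order = []
    while queue:
        _, _, s = heapq.heappop(queue)
        if s in visited:
            continue
        visited.add(s)
        order.append(s)
        for s2 in expand(s):
            if s2 not in visited:
                heapq.heappush(queue, (heuristic(s2), next(tie), s2))
    return order
```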

4.5 secure mafs

We are now ready to describe a PST-indistinguishable secure algorithm that is essentially a synchronous, breadth-first version of secure mafs [Brafman 2015]. secure mafs is similar to PST-ID Forward Search (i.e., a message is sent after the application of a public action), except that an agent never sends two states that differ only in its own private state – in our case, its own id. The PST-ID-E algorithm has this property, but requires that an agent first explore its entire local sub-tree.

To prevent resending identical states (modulo its own id), in secure mafs each agent maintains a list of states sent so far. Whenever it wishes to send a state s with local state s_priv, it first checks whether s was sent before. If it was, it simply updates the id associated with s to include s_priv.

Figure 1: An example for secure mafs.

However, this change alone is insufficient to maintain completeness. See Figure 1 for an illustration of the following example. Consider some state s that is being expanded by agent i. Suppose that the non-private part of the state is identical in s and a(s), but the local state is different, where a is a public action of i that changes only the private state of i. Let b be a public action of another agent j and c an action of i. We claim that i may never generate c(b(a(s))), although, as we shall see, it should. To see this, note that i will receive b(s) from j and will expand it before it generates a(s). Now, suppose that c cannot be applied in b(s) because of i’s local state, but it can be applied in b(a(s)). Eventually, i will generate a(s). However, it will not send it to j. It will simply update the id associated with s to include the local state of a(s). Since b(s) was already expanded, i will not attempt to re-expand it, and will miss the state c(b(a(s))).

To address this issue, secure mafs must re-expand previously expanded states when their id is modified. Specifically, in the above example, when i modifies the id associated with s, secure mafs adds b(s) (with the appropriate ids) to a local queue and later discovers that c is applicable from this state. The sketch below illustrates this bookkeeping.
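The following sketch (illustrative names; not the paper's pseudocode) combines the two ingredients just described: the sent-list check that suppresses states differing only in the agent's own private state, and the re-opening of previously received states when an id is extended.

```python
class SentList:
    """Per-agent bookkeeping: never broadcast two states that differ only in
    this agent's private state; instead extend the existing id and re-open
    states received earlier that carry that id (needed for completeness)."""

    def __init__(self):
        self.sent = {}            # (public_state, other_agents_ids) -> this agent's id
        self.id_states = {}       # id -> set of possible private states
        self.received_after = {}  # id -> states received later that carry this id

    def record_received(self, state, carried_id):
        # remember which received states carry one of our ids, for later re-opening
        self.received_after.setdefault(carried_id, []).append(state)

    def process(self, public_state, other_ids, private_state, new_id, reopen):
        """Handle one candidate state. Returns the message to broadcast, or None
        if an equivalent state (same public part, same other ids) was sent before."""
        key = (public_state, other_ids)
        if key in self.sent:
            old_id = self.sent[key]
            if private_state not in self.id_states[old_id]:
                self.id_states[old_id].add(private_state)
                # states expanded with the old private states may now admit new
                # successors, so re-open them
                for s in self.received_after.get(old_id, []):
                    reopen(s)
            return None
        self.sent[key] = new_id
        self.id_states[new_id] = {private_state}
        return (public_state, other_ids, new_id)
```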

1:  initialization: ℓ = 0; L_0 = {(I^pub, (0, …, 0))}; every agent i sets next_i = 1, states_i(0) = {I_i^priv}, and marks every element and private state as not yet evaluated.
2:  while goal has not been achieved do
3:     ℓ = ℓ + 1; every agent i sets L_ℓ = ∅, M_i = ∅, and N_i = ∅.
4:     for i = 1 to k do
5:        agent i does the following:
6:        for each (s, (id_1, …, id_k)) ∈ L_0 ∪ ⋯ ∪ L_{ℓ−1} do
7:           for each private state s_priv ∈ states_i(id_i) do
8:              if (s, (id_1, …, id_k)) and s_priv were not evaluated previously by i then
9:                 for each sequence a_1, …, a_t of actions of i applicable from (s, s_priv), where a_1, …, a_{t−1} are private and a_t is public do
10:                    s′ = a_t(⋯a_1(s, s_priv)⋯).
11:                    if s′ was not generated before then
12:                       M_i = M_i ∪ {(s′^pub, (id_1, …, id_k), s′^priv)}.
13:        agent i sorts the elements of M_i, first by the public state, then by the ids, and then by the private state. Let e_1, …, e_m be the sorted elements of M_i.
14:        for j = 1 to m do
15:           if there exist a previously sent element e and an id id such that e was sent with i’s id id, and e and e_j have the same public state and the same ids of the other agents then
16:              update states_i(id) = states_i(id) ∪ {the private state of e_j}.
17:              for each element e′ ∈ L_{j′}, for some j′ < ℓ, s.t. the i-th id of e′ is id do
18:                  update: mark e′ as not evaluated by i (so it will be re-expanded).
19:           else if j > 1 and e_j and e_{j−1} have the same public state and the same ids of the other agents then
20:              states_i(id) = states_i(id) ∪ {the private state of e_j}. //Collects ids of similar states in a level
21:           else
22:              update id = next_i; next_i = next_i + 1,
23:              update states_i(id) = {the private state of e_j}, and