# The Role of A-priori Information in Networks of Rational Agents

Until now, distributed algorithms for rational agents have assumed a-priori knowledge of n, the size of the network. This assumption is challenged here by proving how much a-priori knowledge is necessary for equilibrium in different distributed computing problems. Duplication - pretending to be more than one agent - is the main tool agents use to deviate and increase their utility when not enough knowledge about n is given. The a-priori knowledge of n is formalized as a Bayesian setting where at the beginning of the algorithm agents only know a prior σ, a distribution from which they know n originates. We begin by providing new algorithms for the Knowledge Sharing and Coloring problems when n is a-priori known to all agents. We then prove that when agents have no a-priori knowledge of n, i.e., the support of σ is infinite, equilibrium is impossible for the Knowledge Sharing problem. Finally, we consider priors with finite support and find bounds on the necessary interval [α,β] that contains the support of σ, i.e., α ≤ n ≤ β, for which we have an equilibrium. When possible, we extend these bounds to hold for any possible protocol.


## 1 Introduction

The complexity and simplicity of most distributed computing problems depend on the inherent a-priori knowledge given to all participants. Usually, the more information processors in a network start with, the more efficient and simple the algorithm for a problem is. Sometimes, this information renders an otherwise unsolvable problem solvable.

In game-theoretic distributed computing, algorithms run in a network of rational agents that may deviate from an algorithm if they deem the deviation more profitable for them. Rational agents have always been assumed to know the number of participants in the network [1, 4, 5, 24, 38], when in fact this assumption is not only unrealistic in today’s internet, but also provides agents with non-trivial information which is critical for equilibrium.

Consider, for example, a large world-wide social network that runs a distributed algorithm among a large portion of its members. It cannot necessarily take the time to verify the number of participants, or the service the algorithm provides will be too slow to be relevant. If n is known to all participants, as was assumed in previous works about rational agents, that would not be a problem. However, what if n is not known beforehand, allowing one of the participants to skew the result in its favor?

The problems we examine here can be solved in the game-theoretic setting when n is a-priori known. However, learning the size of the network reliably is not possible with rational agents, and thus we show that some a-priori knowledge of n is critical for equilibrium. That is, without any knowledge of n, equilibrium for some problems is impossible. In contrast, these problems can be solved without knowledge of n if the participants are not rational, since we can acquire the size of the network using broadcast and echo.

When n is not a-priori known, agents may deviate from the algorithm by duplicating themselves to affect the outcome. This deviation is also known as a Sybil Attack [20], commonly used to manipulate internet polls, increase page rankings in Google [15], and affect reputation systems such as eBay [14, 16]. In this paper, we use the Sybil Attack as a method for agents to skew protocols in their favor and increase their utility. For each problem presented, an equilibrium when n is known is provided here or in a previous work; thus, for these problems, agents that do not duplicate cannot increase their utility. Obviously, deviations from the algorithm that include both duplicating and additional cheating are also possible.

Intuitively, the more agents an agent disguises itself as, the more power it has to affect the output of the algorithm. For every problem, we strive to find the maximum number of agents a cheater may pretend to be without gaining an advantage over following the protocol honestly, i.e., the maximum number of duplications for which equilibrium is still possible. This maximum depends on whether other agents can detect that a duplication has taken place because the network could not possibly be that large. To detect this situation they need to possess some knowledge about the network size, or about a specific structure.

Agents may possess partial a-priori knowledge of n, i.e., they do not know n precisely but instead have a prior belief on its possible values. We formalize this notion by using a Bayesian setting in which agents a-priori know that n ∼ σ for some discrete distribution σ over ℕ. We use this Bayesian setting to determine the requirements on agents' a-priori knowledge of n for equilibria to be possible. More specifically, we prove impossibility of equilibria in some settings, and find bounds on σ such that if agents a-priori know that n originates from σ, i.e., n ∼ σ, an equilibrium exists. In the problems we examine, the important characteristic of σ is its support: whether the support is finite, and the size of the interval that contains the support of σ from which n is drawn. These bounds hold for both deterministic and non-deterministic algorithms.

Using these bounds, we show what algorithms may be used in specific networks. For example, in an internal business network, some algorithms may work because every member in the network knows there are no more than several thousand computers in the network, while for other algorithms this knowledge is not tight enough.

Table 1 summarizes our contributions and related previous work (where there is a citation). In every row, a different distributed computing problem is examined in different settings of a-priori knowledge. Known refers to the case where all agents in the network start the algorithm knowing n. Unknown refers to the Bayesian environment where agents know a prior σ from which n is drawn, but σ is a distribution with infinite support, namely, there is some n₀ such that for any x > n₀, Pr[n = x] > 0. The two rightmost columns refer to two types of priors with finite support: a uniform distribution on integers in an interval [α, β], and a geometrically decreasing distribution on integers in an interval [α, β].

### 1.1 Related Work

The connection between distributed computing and game theory stemmed from the problem of secret sharing [32]. Further works continued the research on secret sharing and multiparty computation when both Byzantine and rational agents are present [2, 18, 21, 22, 23, 29].

Another line of research presented the BAR model (Byzantine, acquiescent and rational) [8, 31, 37], while a related line of research discusses converting solutions with a mediator to cheap talk [2, 3, 12, 13, 19, 26, 30, 33, 35, 36].

Abraham, Dolev, and Halpern [4] were the first to present protocols for networks of rational agents, specifically protocols for Leader Election. In [5] the authors continue this line of research by providing basic building blocks for game-theoretic distributed algorithms, namely the Wake-Up and Knowledge Sharing equilibrium building blocks. Algorithms for Consensus, Renaming, and Leader Election are presented using these building blocks. Consensus was researched further by Halpern and Vilaça [24], who showed that there is no ex-post Nash equilibrium, and gave a Nash equilibrium that tolerates failures under some minimal assumptions on the failure pattern. Yifrach and Mansour [38] studied fair Leader Election protocols, giving an almost tight resilience analysis. Bank, Sulamy, and Waserman [11] examined the case where the id space is limited, calculating the minimal threshold for equilibrium.

Coloring and Knowledge Sharing have been studied extensively in a distributed setting [9, 10, 17, 25, 27, 28, 34]. An algorithm for Knowledge Sharing with rational agents was presented in [5], while Coloring with rational agents has not been studied previously, to the best of our knowledge.

## 2 Model

We use the standard synchronous message-passing model, where the network is a bidirectional graph G with n nodes, each node representing an agent with unlimited computational power, and edges over which the agents communicate in rounds. G is assumed to be 2-vertex-connected (this property was shown necessary in [5], since if a bottleneck node exists it can alter any message passing through it; such a deviation cannot be detected since all messages between the sub-graphs this node connects must traverse it, so the node can skew the algorithm according to its preferences). Throughout the entire paper, n always denotes the actual number of nodes in the network. In Section 3 it is the exact size of the network. In Section 4 and Section 5, agents treat it as a random variable drawn from a prior σ.

Initially, each agent knows its id and input, but not the id or input of any other agent. The agents in the network have a prior over the information they do not know. For any problem, we demand that the set of possible private information agents may have is finite, and so we assume that the prior agents have over other agents' private information is uniform. The exception to this assumption is n, the size of the network: agents either know n precisely or know σ, an explicitly stated prior.

We assume all agents start the protocol together at round 0, i.e., all agents wake up at the same time. If not, we can use the Wake-Up building block [5] to relax this assumption.

### 2.1 Equilibrium in Distributed Algorithms

Informally, a distributed algorithm is an equilibrium if no agent at no point in the execution can do better by unilaterally deviating from the algorithm. When considering a deviation, an agent assumes all other agents follow the algorithm, i.e., it is the only agent deviating.

Formally, let o_a be the output of agent a, let Θ be the set of all possible output vectors, and denote the output vector O = (o_1, …, o_n), where O ∈ Θ. Let Θ_L ⊆ Θ be the set of legal output vectors, in which the protocol terminates successfully, and let Θ_E be the set of erroneous output vectors, such that Θ_L ∪ Θ_E = Θ and Θ_L ∩ Θ_E = ∅.

Each agent a has a utility function u_a over the output vectors. The higher the value assigned by u_a to an output vector, the better this vector is for a. As in previous works [4, 5, 38], the utility function is required to satisfy the Solution Preference property, which guarantees that an agent never has an incentive to fail the algorithm; otherwise, it would simply be a Byzantine fault. An agent fails the algorithm only when it detects that another agent has deviated.

###### Definition 2.1 (Solution Preference).

The utility function u_a of an agent a never assigns a higher utility to an erroneous output than to a legal one, i.e.:

 ∀a, O_L ∈ Θ_L, O_E ∈ Θ_E: u_a(O_L) ≥ u_a(O_E)

We differentiate the legal output vectors, which ensure the output is valid and not erroneous, from the correct output vectors, which are output vectors that are a result of a correct execution of the algorithm, i.e., without any deviation. Solution Preference guarantees agents never prefer an erroneous output. However, they may prefer a legal but incorrect output.

The Solution Preference property introduces the threat agents face when deviating: agents know that if another agent catches them cheating, it outputs ⊥ and the algorithm fails. In other words, the output is erroneous, i.e., in Θ_E.

For simplicity, we assume agents only have preferences over their own output, i.e., for any O, O′ ∈ Θ in which o_a = o′_a, we have u_a(O) = u_a(O′). Additionally, each agent a has a single preferred output value p_a, and we normalize the utility function values such that (this is the weakest assumption, since it leaves a cheating agent with the highest incentive to deviate while still satisfying Solution Preference; a utility assigning failure a value lower than 0 would further deter a cheating agent from deviating):

 u_a(O) = { 1 if o_a = p_a and O ∈ Θ_L ;  0 if o_a ≠ p_a or O ∈ Θ_E }   (1)

These assumptions are for convenience only and can easily be removed. Our results hold for any utility function that satisfies Solution Preference.
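The normalized utility of Equation (1) can be sketched in code (a minimal illustration; the function and argument names are ours, not the paper's):

```python
# Sketch of the normalized utility of Equation (1).
def utility(output_vector, my_index, my_preference, legal):
    """1 iff the agent got its preferred output AND the vector is legal."""
    if not legal:                      # O is in Theta_E: erroneous output
        return 0
    return 1 if output_vector[my_index] == my_preference else 0

# A legal vector where agent 0 got its preference:
assert utility([3, 3, 3], 0, 3, legal=True) == 1
# Legal, but not the preferred value:
assert utility([2, 2, 2], 0, 3, legal=True) == 0
# An erroneous output is always worth 0, so Solution Preference holds:
assert utility([3, 1, 3], 0, 3, legal=False) == 0
```

Note that the erroneous case never exceeds any legal case, which is exactly the Solution Preference constraint.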

###### Definition 2.2 (Expected Utility).

Let r be a round in a specific execution of an algorithm, and let a be an arbitrary agent. For each possible output vector O ∈ Θ, let x_O(s, r) be the probability, estimated by agent a at round r, that O is output by the algorithm if a takes step s and all other agents follow the algorithm (a step specifies the entire operation of the agent in a round; this may include drawing a random number, performing any internal computation, and the contents and timing of any message delivery). The expected utility a estimates for step s in round r of that specific execution is:

 E_{s,r}[u_a] = ∑_{O∈Θ} x_O(s, r) · u_a(O)

We can also consider the expected utility from following a strategy. Let st denote a series of steps for agent a beginning at round r until the end of the protocol, i.e., a list of actions a performs every round from r on. These may specify different reactions to situations during the course of the protocol. We can then consider all possible outputs where a acts according to st and other agents act according to the protocol, and denote the expected utility over all these outputs as E_{st}[u_a].

An agent will deviate whenever the deviating step has a strictly higher expected utility than the expected utility of the next step of the algorithm, even if that deviating step also increases the risk of an erroneous output.

Let Λ be an algorithm. If by deviating from Λ and taking step s̄ the expected utility of a is higher, we say that agent a has an incentive to deviate (i.e., cheat). For example, at round r algorithm Λ may dictate that a flips a fair binary coin and sends the result to all of its neighbors. Any other action by a is considered a deviation: whether the message was not sent to all neighbors, was sent later than it should have been, or the coin toss was not fair, e.g., a always sends 1 instead of a random value. If no agent can unilaterally increase its expected utility by deviating from Λ, we say that Λ is an equilibrium. Equilibrium is defined over a single deviating agent, i.e., there are no coalitions of agents.

###### Definition 2.3 (Distributed Equilibrium).

Let s(r) denote the next step of algorithm Λ in round r. Λ is an equilibrium if for any deviating step s̄, at any round r of every possible execution of Λ in which all steps by all agents up to round r were according to Λ:

 ∀a, r, s̄: E_{s(r),r}[u_a] ≥ E_{s̄,r}[u_a]

Distributed Equilibrium is a sort of truthful equilibrium: a protocol where if agents assume all other agents follow the protocol truthfully, their best action is to truthfully follow the protocol as well.

It is important to emphasize that for any non-trivial distributed algorithm, the outcome cannot be calculated using only private data, without communication. For rational agents, no agent can calculate the output privately at the beginning of the algorithm: if it could calculate the output and knew that its resulting utility would be 0, it would surely lie about its initial information to avoid losing, preventing equilibrium. If it knows its resulting utility is 1, it has no incentive to cheat; but then there exists an instance of the same problem where the agent has a different preference over the output, making that protocol not an equilibrium. Formally, for any agent a and any step s of the agent that does not necessarily result in algorithm failure, it must hold that 0 < E_{s,r}[u_a] < 1 (a value of 0 means an agent will surely not get its preference, and 1 means it is guaranteed to get its preference).

### 2.2 Priors on n

In Section 4 and Section 5 agents are in a Bayesian environment where they do not know n, the size of the network, but instead know a prior σ over ℕ from which n originates. The support of σ is:

 supp(σ) = {x | Pr[n = x] > 0}
###### Definition 2.4 (Infinite Support).

A prior σ agents have on n has infinite support if there is some network size n₀ such that any network size larger than n₀ is possible with non-zero probability. Formally, a prior σ on n has infinite support if there exists n₀ ∈ ℕ s.t.:

 ∀x > n₀: Pr[n = x] > 0
###### Definition 2.5 (Finite support).

A prior σ has finite support in [α, β] if there exists an interval [α, β] over ℕ for which supp(σ) ⊆ [α, β], denoted in short σ ⊆ [α, β].

###### Definition 2.6 (Uniform Prior).

A uniform distribution over integers in an interval [α, β] is defined by the following cumulative distribution function:

 F(t) = Pr[n ≤ t] = (t − α + 1) / (β − α + 1)
###### Definition 2.7 (Geometric Prior).

As a decreasing geometric distribution we use a factor-½ geometric distribution starting at α and decreasing until β. Formally, let n be a random variable drawn from σ. σ is a geometric distribution starting at α until β, where the tail above β is spread uniformly along [α, β]. The "tail" is:

 Pr[n > β] = 1 − Pr[n ≤ β] = 1 − (1 − (½)^{β−α+1}) = 2^{α−β−1}

Thus for any α ≤ t ≤ β:

 f(t) = Pr[n = t] = 2^{α−t−1} + 2^{α−β−1}/(β − α + 1) = 2^{α−t−1} + c
 F(t) = Pr[n ≤ t] = ∑_{j=α}^{t} (2^{α−j−1} + c) = (1 − 2^{α−t−1}) + (t − α + 1)c

where c = 2^{α−β−1}/(β − α + 1).
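Both finite-support priors can be checked numerically; the following sketch assumes the formulas of Definitions 2.6 and 2.7 (function names are ours):

```python
from fractions import Fraction

# The uniform prior on integers in [alpha, beta] (Definition 2.6).
def uniform_prior(alpha, beta):
    p = Fraction(1, beta - alpha + 1)
    return {t: p for t in range(alpha, beta + 1)}

# The truncated geometric prior (Definition 2.7):
# Pr[n = t] = 2^(alpha - t - 1) + c, where the tail 2^(alpha - beta - 1)
# is spread uniformly, i.e., c = 2^(alpha - beta - 1) / (beta - alpha + 1).
def geometric_prior(alpha, beta):
    c = Fraction(1, 2 ** (beta - alpha + 1)) / (beta - alpha + 1)
    return {t: Fraction(1, 2 ** (t - alpha + 1)) + c
            for t in range(alpha, beta + 1)}

# Exact rational arithmetic confirms both are proper distributions.
for prior in (uniform_prior(5, 12), geometric_prior(5, 12)):
    assert sum(prior.values()) == 1
```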

### 2.3 Knowledge Sharing

The Knowledge Sharing problem (adapted from [5]) is defined as follows:

1. Each agent a has a private input i_a, in addition to its id, and a function q over all inputs, where q is identical at all agents. Agents know the possible output space of q before the algorithm begins.

2. A Knowledge Sharing protocol terminates legally if all agents output the same value, i.e., ∀a, b: o_a = o_b. Thus the set Θ_L is defined as Θ_L = {O ∈ Θ | ∀a, b: o_a = o_b}.

3. A Knowledge Sharing protocol terminates correctly (as described in Section 2.1) if each agent outputs the value of q over the input values of all agents (notice that any output is legal as long as it is the output of all agents, but only a single output value is considered correct for a given input vector).

4. The function q satisfies the Full Knowledge property:

###### Definition 2.8 (Full Knowledge Property).

A function q fulfills the full knowledge property if, for each agent that does not know the input values of all other agents, any output in the range of q is equally possible. Formally, fix the inputs of all agents except some agent a, and let I range over the input vectors that complete them with each possible input of a. A function q fulfills the full knowledge property if, for any possible output o in the range of q, the number of such vectors I with q(I) = o is the same (the definition assumes input values are drawn uniformly; otherwise it can be expanded to the sum of probabilities over every input value for a).
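A standard example of a function with the Full Knowledge property is the XOR of one-bit inputs; this sketch (ours, not from the paper) verifies the property for a single unknown input:

```python
from functools import reduce
from operator import xor

# XOR of all one-bit inputs: as long as at least one input is unknown
# and uniform, every output is equally likely.
def q(inputs):
    return reduce(xor, inputs)

# Fix the inputs of all agents except the last; let the unknown input
# range over {0, 1} and count how often each output occurs.
known = [1, 0, 1]
counts = {0: 0, 1: 0}
for unknown in (0, 1):
    counts[q(known + [unknown])] += 1

assert counts[0] == counts[1] == 1   # both outputs equally possible
```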

We differentiate two variants of Knowledge Sharing:

• k-Knowledge Sharing - where the output space of q is of size k and k is known to all agents at the beginning of the algorithm. For example, in 2-Knowledge Sharing the output space is {0, 1} and all agents know the possible outputs of the protocol are from {0, 1}. If every agent's input is a random bit, a 2-Knowledge Sharing protocol performs a shared coin flip.

• Knowledge Sharing - a protocol for Knowledge Sharing is a protocol that solves k-Knowledge Sharing for any possible k. In the same manner, such a protocol is an equilibrium if for any possible k, no agent has an incentive to cheat.

We assume that each agent a prefers a certain output value p_a.

### 2.4 Coloring

In the Coloring problem [17, 27], Θ_L is the set of output vectors O such that for every edge (a, b), o_a ≠ o_b. We assume that every agent a prefers a specific color p_a.

### 2.5 Sybil Attacks

Let a be a malicious agent. A possible deviation for a is to simulate imaginary agents, say a₁ and a₂, and to answer over some of its edges as a₁, and over the others as a₂, as illustrated in Figure 1. From this point on a acts as if it is 2 agents. We assume that the id space is much larger than n, allowing us to disregard the probability that a fake id collides with an existing id, an issue dealt with in [11].

Regarding the output vector, notice that an agent that pretends to be more than one agent still outputs a single output at the end. The duplication causes agents to execute the algorithm as if it were executed on a graph G′ (with the duplicated agents) instead of the original graph G; however, the output is considered legal if it is legal for the original graph G, rather than for G′.

## 3 Algorithms

Here we present algorithms for Knowledge Sharing (Section 3.1) and Coloring (Section 3.2).

The Knowledge Sharing algorithm presented here is an equilibrium in a ring network when no cheating agent pretends to be more than a certain number of agents, improving the Knowledge Sharing algorithm in [5]. The Coloring algorithms are equilibria in any 2-connected graph when agents a-priori know n.

Notice that using an algorithm as a subroutine is not trivial in this setting, even if the algorithm is an equilibrium, as the new context as a subroutine may allow agents to deviate towards a different objective than was originally proven. Thus, whenever a subroutine is used, the fact that it is an equilibrium should be proven.

### 3.1 Knowledge Sharing in a Ring

First we describe the Secret-Transmit(a, b, r) building block, in which agent b learns the input of agent a at round r, and no other agent in the ring learns any information about this input. To achieve this, agent a selects a random number R, and lets X = i_a ⊕ R. It sends R clockwise and X counter-clockwise until each reaches the corresponding neighbor of b. At round r − 1, these neighbors of b simultaneously send the values R and X, thus b receives the information at round r.
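The splitting step of Secret-Transmit is a one-time pad; it can be sketched as follows under our reading of the text (names are ours):

```python
import secrets

# One-time-pad splitting: the input is recoverable only when BOTH pieces
# meet, so neither route around the ring learns anything alone.
def split(input_value, bits=32):
    r = secrets.randbits(bits)           # piece sent clockwise
    x = input_value ^ r                  # piece sent counter-clockwise
    return r, x

def combine(r, x):
    return r ^ x                         # b recovers i_a at round r

i_a = 0b1011
r, x = split(i_a)
assert combine(r, x) == i_a              # both pieces together reveal the input
# Either piece alone is a uniformly random value, independent of i_a.
```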

We assume a global orientation around the ring. This assumption can easily be relaxed via Leader Election [5], which is an equilibrium in this application since the orientation has no effect on the output. The algorithm works as follows:

###### Theorem 3.1.

In a ring, Algorithm 1 is an equilibrium when no cheating agent pretends to be more than a certain number of agents.

###### Proof.

Assume by contradiction that a cheating agent pretending to be t agents has an incentive to deviate. W.l.o.g., the duplicated agents are a₁, …, a_t (recall that these indices are not known to the agents).

Let n′ be the size of the ring including the duplicated agents, i.e., n′ = n + t − 1. The clockwise neighbor of a_t is denoted b₁. Denote by b₂ the agent at the prescribed distance counter-clockwise from a₁, and note that b₂ ∉ {a₁, …, a_t}.

When b₁ calls Secret-Transmit, a neighbor of the recipient holds each piece of that transmission until the designated round, and the same holds when b₂ calls Secret-Transmit. By our assumption, the cheating agent duplicated into a₁, …, a_t. By the bound on t, the cheater receives at most one piece (R or X) of each of these transmissions before the designated round. So, there is at least one input that the cheater does not learn before that round. According to the Full Knowledge property (Definition 2.8), for the cheater at that round any output is equally possible, so its expected utility for any value it sends is the same; thus it has no incentive to cheat regarding the values it sends in that round.

Let a_j be an arbitrary duplicated agent. At the designated round, its input is known by its clockwise neighbor and by the agent at the prescribed distance counter-clockwise from it. Since the number of counter-clockwise consecutive agents involved is greater than the number of duplicated agents, at least one of these two agents is not a duplicated agent. Thus, at that round, the input of each agent in {a₁, …, a_t} is already known by at least one agent outside {a₁, …, a_t}.

At that round the cheater does not know the input value of at least one other agent, so by the Full Knowledge property it has no incentive to deviate. From that round on, each duplicated agent's input is already known by a non-duplicated agent, which prevents the cheater from lying about its input thereafter.

Thus, the cheating agent has no incentive to deviate, contradicting our assumption. ∎

### 3.2 Coloring in General Graphs

Here, agents are given exact a-priori knowledge of n.

We present Algorithm 2, the Witness Algorithm, that uses two subroutines to obtain a coloring: Draw and Prompt.

The Witness Algorithm involves three colorings. The agents already have an n-coloring set by their ids, but agents can cheat by lying about their ids. So each agent selects a new unique number from 1 to n, in a way that prevents it from cheating about the number. That is another n-coloring of the agents, but this coloring does not ensure that every agent with no neighbors that output p_a will output p_a. Worse, an agent that can still output p_a legally, i.e., none of its neighbors is numbered p_a, would output p_a instead of its unique number, which may also cause an illegal coloring in case two neighbors with the same preferred color behave this way. Instead, this second n-coloring is used as the order by which agents choose their final color, resulting in a legal coloring in which agents have no incentive to cheat.

The algorithm begins by running Wake-Up [5] to learn the graph topology and the ids of all agents. Then, in order of ids, each agent draws a random number with a neighboring "witness" agent as specified in Algorithm 3, and sends it to all of its neighbors. The number is drawn in a fixed range, and since agents draw numbers one by one, each agent knows the set of numbers already taken by its neighbors and uses it as the input for its call to Draw, ensuring no two neighbors draw the same number and resulting in a coloring of the graph. However, this coloring is not enough, since any equilibrium must ensure that an agent that can output its desired color will output it. In Draw, the agent cannot cheat in the random number generation process, since it exchanges a random number simultaneously with its respective witness, which is marked as a witness for future verification, ensuring the agent will not lie about its drawn number. When done, each agent simultaneously verifies the numbers published by its neighbors using Algorithm 4, which enables it to receive each value through two disjoint paths: directly from the neighbor, and via the shortest simple path to the neighbor's witness that does not include the neighbor. Then each agent, in order of the values drawn randomly, picks its preferred color if available, or the minimal available color otherwise, and sends its color to all of its neighbors.

Draw (Algorithm 3) is an equilibrium in which an agent a randomizes a number different from those of its neighbors and commits to it, and is depicted in Figure 5. For an arbitrary agent a and a subset D of available numbers, Draw begins with a sending the string witness to its neighbor with minimum id. The following round, a and the agent that received the witness string exchange a random number at the same time. The sum of these two randomized numbers, denoted x, is calculated at a and at its witness, and then a notifies all its neighbors that its result from the Draw process is the x'th largest number in D, ensuring that when these neighbors use Draw, they do not include that number in their respective subsets. Draw takes a constant number of rounds to complete. Prompt (Algorithm 4) is a query that ensures an agent receives the correct drawn number from a neighbor.
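The simultaneous exchange at the heart of Draw can be sketched as joint coin-flipping (our reconstruction; the exact range and indexing used in Algorithm 3 may differ):

```python
import random

# Agent and witness each commit a uniform value simultaneously; the sum
# selects one of the available numbers. Neither side can bias the result,
# since each value is fixed before the other side's value is seen.
def draw(available, agent_value, witness_value):
    """available: sorted list of numbers not taken by neighbors."""
    x = (agent_value + witness_value) % len(available)
    return available[x]

available = [2, 5, 7, 9]
a_val = random.randrange(len(available))     # agent's contribution
w_val = random.randrange(len(available))     # witness's contribution
chosen = draw(available, a_val, w_val)
assert chosen in available
# If either party's value is uniform, the chosen index is uniform too,
# which is why a unilateral deviation cannot improve the draw.
```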

The message complexity of the Witness Algorithm is composed of the following terms:

• Wake Up is

• Drawing a random number is called once per agent and uses a total of messages to publish these values to neighbors

• Verifying the value of a neighbor uses messages and is called times, for a total of

• Sending the output color takes an additional messages

Since the diameter of the network is at most n − 1 (attained by the line graph), summing these terms gives the total message complexity.

###### Theorem 3.2.

The Witness Algorithm (Algorithm 2) is an equilibrium for Coloring when agents a-priori know n.

###### Proof.

Let a be an arbitrary agent. Assume in contradiction that at some round r there is a possible cheating step s̄ such that E_{s̄,r}[u_a] > E_{s(r),r}[u_a]. Consider the possible deviations for a in every phase of Algorithm 2:

• Wake-Up: The order by which agents initiate Algorithm 3 has no effect on the order by which they will later set their colors. Hence, a has no incentive to publish a false id in the Wake-Up building block.

• Draw is an equilibrium: The topology and ids are known to all agents from Wake-Up, so all agents know who should be the witness of each agent (thus no agent can use Draw with the wrong witness). Agent a and its witness then exchange a random number at the same time, so a cannot affect the draw: its expected utility is the same regardless of the number it sends in the exchange.

• Publishing a false value will be caught by the verification in step 10 of Algorithm 2.

• Sending a color message not in order will be immediately recognized by the neighbors, since values were verified.

• Agent a might output a different color than the color dictated by Algorithm 2. But if the preferred color is available, then outputting it is the only rational behavior. Otherwise, the utility for the agent is already 0 in any case. ∎

## 4 Impossibility With No Knowledge

Here we prove that the common assumption that n is known is the key to the possibility of equilibrium for the Knowledge Sharing problem. When agents know that n ∼ σ where σ has infinite support, there is no equilibrium for the Knowledge Sharing problem. In other words, agents must have some a-priori information that n is part of a finite set.

In this section we label the agents in the graph 1, …, n in an arbitrary order in any topology. These labels are not known to the agents themselves.

### 4.1 Impossibility of Knowledge Sharing

###### Theorem 4.1.

There is no protocol for Knowledge Sharing that is an equilibrium in a 2-connected graph when agents' prior on n has infinite support. Formally, for any protocol Λ for Knowledge Sharing there exists a graph whose agents know n ∼ σ, where σ has infinite support, such that there exists an agent a and a strategy D for a such that:

 E_D[u_a] > E_Λ[u_a]
###### Proof.

Assume by contradiction that Λ is a Knowledge Sharing algorithm that is an equilibrium in any graph of rational agents with a prior with infinite support on n. Let G₁, G₂ be two 2-connected graphs of rational agents. Consider the execution of Λ on the graph G created by taking G₁ and G₂ and adding two nodes, connecting these nodes to 2 or more arbitrary nodes in both G₁ and G₂ (see Figure 8).

Recall that the vector of agents' inputs is denoted by I, and that the correct output is q(I). Let r₁ be the first round after which q(I) can be calculated from the collective information that all agents in G₁ have (regardless of the complexity of the computation), and similarly r₂ the first round after which q(I) can be calculated in G₂. Consider the following three cases:

1. r₁ < r₂: q(I) cannot yet be calculated in G₂ at round r₁. Since r₁ < r₂, the collective information in G₁ at round r₁ is enough to calculate q(I). Since n is not known, an agent a could emulate the behavior of G₁, making the agents believe the algorithm runs on G rather than on G₂ alone. In this case, this cheating agent knows at round r₁ the value o = q(I) in this execution, but the collective information of agents in G₂ is not enough to calculate q(I), which means the output of agents in G₂ still depends on messages from a, the cheater. Thus, if a learns that the output o ≠ p_a, it can simulate all possible runs of the algorithm in a state-tree, and select a course of action that has at least some probability of leading to an outcome o′ ≠ o. Such a set of messages surely exists because otherwise, G₂ would have also known the value of q(I). In other words, a finds a set of messages that might cause the agents in G₂ to decide a value o′ ≠ o. In the case where o ≠ p_a, agent a increases its expected utility by sending a set of messages different from that decreed by the protocol. Thus, agent a has an incentive to deviate, contradicting distributed equilibrium.

2. r_1 = r_2: both G_1 and G_2 have enough collective information to calculate o at the same round. The collective information in G_1 at round r_1 already exists in G_1 and the two added nodes at round r_1.

Since every path between G_1 and G_2 passes through the two added nodes, the collective information in G_2 alone is not enough to calculate o before round r_1. Thus, similarly to Case 1, a can emulate G_1 and the two added nodes and has an incentive to deviate.

3. r_1 > r_2: Symmetric to Case 1, swapping the roles of G_1 and G_2.

Thus, Λ is not an equilibrium for the Knowledge Sharing problem. ∎

## 5 How Much Knowledge Is Necessary?

In this chapter, agents know that α ≤ n ≤ β, where σ is a prior on n with finite support, i.e., the support of σ is contained in [α, β] for some α, β. We examine several distributed computing algorithms and find bounds on the interval [α, β] for which we have equilibria. We use two test-cases for possible priors, namely, a uniform prior and a geometric prior. We begin by showing bounds on the interval [α, β] such that for any n ∈ [α, β], Algorithm 1 is an equilibrium for Knowledge Sharing in a ring network. Then, we use a subset of these results to show a new algorithm for Coloring a ring network, and show that if Algorithm 1 is an equilibrium for 2-Knowledge Sharing over [α, β], then the Coloring algorithm provided is an equilibrium over [α, β] as well.

Finally, we look into bounds on [α, β] for Leader Election, Partition and Orientation.

Notice that of all the problems we examine, Partition and Orientation are an exception: We show that they have equilibria without any knowledge of n, i.e., even if the support agents have for σ is infinite; however, the former is constrained to even-sized rings, and the latter is a trivial problem in distributed computing (constant radius in the LOCAL model [28]).

Consider an agent a at the start of a protocol: If a pretends to be a group of t agents, it is caught when n − 1 + t > β, since when agents count the number of agents they identify that a deviation occurred. The finite support for σ creates a situation where any duplication now involves some risk, since the actual value of n is not known to the cheater (similar to [11]).

Let Λ denote the strategy of following the protocol, and for any agent a let E_Λ[u_a] be the expected utility for a by following the protocol without any deviation. An arbitrary cheating agent simulates executions of the algorithm for every possible duplication, and evaluates its expected utility compared to E_Λ[u_a]. Denote by D_t a duplication scheme in which an agent pretends to be t agents.

Let Pr_a[n − 1 + t > β] be the probability, from agent a's perspective, that the overall size of the network exceeds β. If for agent a there exists a duplication scheme D_t at round 0 such that E_{D_t,0}[u_a] > E_Λ[u_a], then agent a has an incentive to deviate and duplicate itself. (Agents are assumed to duplicate at round 0, because any other duplication can be detected by adding a Wake-Up routine which starts by mapping the topology. We consider round 0 as deciding on a duplication scheme before the Wake-Up is completed.)
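The detection condition above can be computed directly from a prior. The sketch below is illustrative only: the uniform prior and the parameter values (α = 4, β = 10, t = 5) are hypothetical and not taken from the paper.

```python
def detection_prob(prior, t, beta):
    """Probability, under the cheater's prior on n, that pretending to be
    t agents is detected, i.e. that the perceived size n - 1 + t exceeds beta."""
    return sum(p for n, p in prior.items() if n - 1 + t > beta)

# Illustrative uniform prior on [alpha, beta] = [4, 10].
alpha, beta = 4, 10
prior = {n: 1 / (beta - alpha + 1) for n in range(alpha, beta + 1)}

# Pretending to be t = 5 agents is detected whenever n - 1 + 5 > 10, i.e. n > 6,
# which covers 4 of the 7 equally likely network sizes.
risk = detection_prob(prior, 5, beta)
print(round(risk, 4))
```

Not duplicating at all (t = 1) carries no risk, which is exactly why the cheater must weigh the detection probability against the gain from a successful duplication.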

### 5.1 Knowledge Sharing in a Ring

In ring networks, we have shown Algorithm 1 is an equilibrium if no agent duplicates into more agents than the rest of the ring. In other words, the only way an agent can increase its expected utility is by duplicating into at least 2 more agents than the size of the remainder of the ring, i.e., the ring is made of the n − 1 honest agents and at least n + 1 duplicated cheater agents. It is easy to see that if a cheating agent had duplicated d > n agents and was not detected (i.e., n − 1 + d ≤ β), its utility is 1 since it controls the output. Notice that Algorithm 1 begins with Wake-Up, so a duplicating agent must decide on its duplication scheme before any communication other than Wake-Up occurs.

We consider a specific duplication scheme for the cheating agent: duplicating into d = ⌊β/2⌋ + 1 agents. This cheating strategy maximizes a cheater's expected utility by obtaining the optimal balance between avoiding detection and successfully duplicating into d > n agents (recall that n is a random variable from the cheater's perspective). Notice that duplicating is beneficial only when α ≤ ⌊β/2⌋. If α > ⌊β/2⌋, duplicating would either be detected and lead to 0 utility, or, in case n = ⌈β/2⌉, would result in d ≤ n and not increase the cheater's utility. Any duplication other than d = ⌊β/2⌋ + 1 would also result in d ≤ n (which does not improve the cheater's expected utility) or in detection.
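The claim that d = ⌊β/2⌋ + 1 balances detection risk against control of the output can be checked numerically. The sketch below scores every actual duplication scheme d ≥ 2 under an illustrative uniform prior; the values α = 4, β = 11, k = 3 are arbitrary test inputs, not from the paper.

```python
from math import floor

def cheater_utility(prior, d, beta, k):
    """Expected utility of duplicating into d agents at round 0: utility 1 when
    undetected with d > n, 1/k when undetected with d <= n, 0 when detected."""
    u = 0.0
    for n, p in prior.items():
        if n - 1 + d > beta:          # perceived ring size exceeds beta: detected
            continue
        u += p if d > n else p / k
    return u

alpha, beta, k = 4, 11, 3
prior = {n: 1 / (beta - alpha + 1) for n in range(alpha, beta + 1)}

# Among all actual duplications (d >= 2), the maximizer is floor(beta/2) + 1.
best = max(range(2, beta + 1), key=lambda d: cheater_utility(prior, d, beta, k))
print(best == floor(beta / 2) + 1)
```

Smaller d is more often detected-free but rarely achieves d > n; larger d is more often detected. The maximizer sits exactly at ⌊β/2⌋ + 1.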

Let a be an agent in the ring at round 0 with prior σ. Let D be a cheating strategy in which a duplicates into d = ⌊β/2⌋ + 1 agents, and if a is not caught, it skews the protocol to output its preferred value. The expected utility for a by following strategy D is:

 E_{D,0}[u_a] = 0·Pr[detected] + 1·Pr[undetected, duplicated d > n] + (1/k)·Pr[undetected, duplicated d ≤ n] = 1·Pr[n ≤ ⌊β/2⌋] + (1/k)·Pr[⌊β/2⌋ < n ≤ ⌈β/2⌉]

If for all agents this utility is at most E_Λ[u_a] = 1/k, the expected utility from following the protocol, then Algorithm 1 is an equilibrium. Writing this with the cumulative distribution function F of σ yields the following requirement for Algorithm 1 to be an equilibrium:

 E_{D,0}[u_a] = F(⌊β/2⌋) + (1/k)·(F(⌈β/2⌉) − F(⌊β/2⌋)) ≤ 1/k (2)

For each prior and each Knowledge Sharing variant, if E_{D,0}[u_a] > 1/k then there is an incentive to cheat by duplicating into ⌊β/2⌋ + 1 agents, and then using an optimal cheating strategy that derives a utility of 1. Table 2 shows the conditions on α and β for Algorithm 1 to be an equilibrium. Following the table are explanations of each entry.

1. For uniform σ and k = 2, setting the cumulative distribution function of the uniform distribution into Equation 2 yields:

 (⌊β/2⌋ − α + 1)/(β − α + 1) ≤ (1/2)·((β − α + 1 − ⌈β/2⌉ + ⌊β/2⌋)/(β − α + 1))
 ⌊β/2⌋ − α + 1 ≤ (1/2)·(2⌊β/2⌋ − α + 1)
 2⌊β/2⌋ − 2α + 2 ≤ 2⌊β/2⌋ − α + 1
 α ≥ 1

This inequality holds for any α ≥ 1. Thus when k = 2 and σ is a uniform prior, Algorithm 1 is an equilibrium for any α and β.

2. When σ is uniform and k > 2, setting the cumulative distribution function of the uniform distribution into Equation 2:

 (⌊β/2⌋ − α + 1)/(β − α + 1) ≤ (1/k)·((β − α + 1 + ⌊β/2⌋ − ⌈β/2⌉)/(β − α + 1))
 ⌊β/2⌋ − α + 1 ≤ (1/k)·(β − α + 1 + ⌊β/2⌋ − ⌈β/2⌉) = (1/k)·(2⌊β/2⌋ − α + 1)
 k ≤ (2⌊β/2⌋ − α + 1)/(⌊β/2⌋ − α + 1) = 1 + ⌊β/2⌋/(⌊β/2⌋ − α + 1) (3)

We can then consider two cases for the parity of β:

• β is even:

 k ≤ 1 + β/(β − 2α + 2)
 β ≤ (2α − 2)(k − 1)/(k − 2)

• β is odd; then, using the same derivation:

 k ≤ 1 + (β − 1)/(β − 1 − 2α + 2)
 β ≤ (2α(k − 1) − k)/(k − 2)

Notice that for any α, the even upper bound is lower, so Algorithm 1 is an equilibrium if and only if:

 β ≤ (2α − 2)(k − 1)/(k − 2)
3. We now want to know the bounds on α and β for which we have a general Knowledge Sharing protocol that is an equilibrium, namely, a Knowledge Sharing protocol that is an equilibrium for any k. Notice that we assume agents always know k beforehand, but we require that the same equilibrium should apply to any k. We can see that the demand for an equilibrium for constant k, namely:

 β ≤ (2α − 2)(k − 1)/(k − 2)

actually shows that, as a function of k, the upper bound for an equilibrium converges to 2α − 2. In other words, for any uniform σ with β > 2α − 2, there exists a large enough k so that the protocol is not an equilibrium, i.e., there exists an agent with an incentive to cheat. As stated above, if β ≤ 2α − 2, Algorithm 1 is an equilibrium for every k, because no agent can duplicate without surely being detected.

4. When σ is geometric, consider Equation 2. Notice that if the inequality holds, Algorithm 1 is an equilibrium. If β is even, the left-hand side of the inequality is smaller, so assume β is even; Equation 2 then yields that Algorithm 1 is an equilibrium when:

 F(β/2) ≤ 1/k

By definition of F for a geometric prior, F(β/2) > 1/k. In other words, for any k and any α, β, Equation 2 does not hold, so a cheating agent has an incentive to deviate for any k and any α, β. The same holds for the case where β is odd.
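Both the uniform-prior bound and the geometric-prior failure can be sanity-checked numerically. In this sketch, Equation 2 is evaluated directly from a prior's CDF; the closed-form comparison is for even β, and the geometric prior is truncated to [α, β] with an illustrative parameter p = 1/2 (the paper's exact geometric parametrization is not assumed here).

```python
from math import ceil, floor

def eq2_holds(F, beta, k):
    """Equation 2: F(floor(b/2)) + (1/k)(F(ceil(b/2)) - F(floor(b/2))) <= 1/k."""
    lo, hi = F(floor(beta / 2)), F(ceil(beta / 2))
    return lo + (hi - lo) / k <= 1 / k + 1e-12

def uniform_cdf(alpha, beta):
    return lambda x: max(0.0, min(1.0, (x - alpha + 1) / (beta - alpha + 1)))

def geometric_cdf(alpha, beta, p=0.5):
    """Geometric prior truncated to [alpha, beta] (illustrative parameter p)."""
    mass = [(1 - p) ** (n - alpha) * p for n in range(alpha, beta + 1)]
    total = sum(mass)
    supp = range(alpha, beta + 1)
    return lambda x: sum(m for n, m in zip(supp, mass) if n <= x) / total

# For even beta, Equation 2 agrees with the bound beta <= (2a - 2)(k - 1)/(k - 2).
for alpha in range(2, 12):
    for beta in range(2 * alpha, 60, 2):
        for k in range(3, 8):
            closed = beta <= (2 * alpha - 2) * (k - 1) / (k - 2)
            assert eq2_holds(uniform_cdf(alpha, beta), beta, k) == closed

# A geometric prior concentrates too much mass below beta/2, so Equation 2 fails.
print(eq2_holds(geometric_cdf(3, 12), 12, 4))
```

The inner loop is exactly the derivation in entry 2 run in reverse: the closed form and the direct CDF test never disagree on the grid checked.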

### 5.2 Coloring

In this section we show that for any ring G and any prior σ on n with finite support for which Algorithm 1 is an equilibrium for 2-Knowledge Sharing, there is an equilibrium for Coloring. We describe Algorithm 5, which uses a shared coin toss subroutine. Using Algorithm 1, we set each agent's input to be a uniformly random bit and define the output function to be the XOR of all input bits. This shared coin toss subroutine is an equilibrium as a subroutine if agents' utilities are either a direct result of the output of the coin flip, or completely independent of it.
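A quick sketch of why an XOR of input bits behaves as a shared coin (assuming, as above, that the output function XORs the agents' random bits): as long as at least one agent's bit is uniform and independent of the rest, the XOR is uniform regardless of how the other bits were chosen. The simulation below is purely illustrative.

```python
import random

def shared_coin(bits):
    """XOR of all agents' input bits."""
    out = 0
    for b in bits:
        out ^= b
    return out

random.seed(0)
fixed = [1, 1, 0, 1]   # bits an adversary might fix in advance
trials = 10000
heads = sum(shared_coin(fixed + [random.randint(0, 1)]) for _ in range(trials))
print(0.45 < heads / trials < 0.55)  # one honest uniform bit keeps the coin fair
```

This is the property the subroutine needs: an agent whose utility depends on the coin cannot bias it by choosing its own bit, because the other agents contribute at least one uniform bit.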

###### Theorem 5.1.

Algorithm 5 always ends in a legal coloring of the ring.

###### Proof.
1. Every agent outputs a color. All agents know the orientation and the order of agents around the ring according to it, and the vector of preferences. From these they can deduce their preference group and their position in it. All agents with even parity output a color at one step of the algorithm, and the rest of the agents output at the following step. The only exception is the sink, if there is one, which outputs at the final step. Any agent either has an even parity, has an odd parity, or is the sink. Thus every agent has chosen and output a color.

2. For any neighbors u, v, the round in which u chooses its color is different from the round in which v chooses its color. This is immediate: Since they are neighbors, their parity is different, so if neither of them is the sink, one would choose a color at the earlier step and the other at the later step. The separation into distinct steps is merely to prevent a monochromatic edge between two different preference groups when both "edge" agents did not receive their desired color (and outputting the minimal available color may collide). In this case, both agents already have a sure utility of 0. If one of them is the sink, assume w.l.o.g. it is u; then it chooses a color at the final step, and since there is only one sink, v chooses at an earlier step.

3. No agent would choose a color already taken. This is immediate from Solution Preference. ∎

###### Theorem 5.2.

Let G be a ring network and σ a finite-support prior that agents have on n. If Algorithm 1 is an equilibrium for 2-Knowledge Sharing in G, then Algorithm 5 is an equilibrium for Coloring in G.

###### Proof.

Denote by E_Γ[u_a] the expected utility for agent a by following Algorithm 5. Our proof uses 4 claims:

###### Claim (1).

For any agent a and any strategy s that does not include duplication, E_s[u_a] ≤ E_Γ[u_a]. In other words, no agent has an incentive to deviate without duplication.

###### Claim (2).

For any agent a and any duplicating strategy D, let r be the round in which a learns n. Then one of the following holds:

1. a had already stopped duplicating at round r, i.e., a committed to its duplication scheme before learning n.

2. a had duplicated at least n agents.

###### Claim (3).

If a duplicated d ≥ n agents, then the probability that a is detected and the algorithm is aborted is at least 1/2.

###### Claim (4).

If a duplicated d < n agents, the expected utility for a is at most the same as following the protocol without duplication.

These 4 claims suffice to prove that Algorithm 5 is an equilibrium: If a does not duplicate, it has no incentive to deviate (Claim 1), so any possible cheating strategy involves duplication. Let d be the number of agents a disguises itself as. According to Claim 2, a decides whether to duplicate into d agents before learning n. Then one of the following holds:

1. d < n. Then by Claim 4 the cheater's expected utility by deviating is at most that of following the protocol without deviation.

2. d ≥ n. By Claim 3, the probability of detection is at least 1/2. In other words, the probability the duplication is not detected is at most 1/2. Thus, even if D guarantees a utility of 1 when it is not detected, the expected utility from duplicating into d ≥ n agents is at most 1/2.

Thus for any d, the expected utility from deviating using duplication is at most max(E_Γ[u_a], 1/2).

On the other hand, consider the expected utility for agent a following Algorithm 5 with preferred color c_a:

• Let A denote the event that at least one of a's neighbors prefers c_a, and let Pr[A] be its probability given by a according to its prior beliefs.

• Let Pr[u|A] be the probability given by a, according to its prior beliefs, that n is odd and all agents prefer c_a, conditioned on A.

• Let Pr[s] be the probability that a is selected as the sink in a sink-election process.

Now consider the expected utility for a before it learns n, i.e., before the Wake-Up is complete: If none of its neighbors prefers c_a, a will be colored c_a, thus its utility in this case will be 1. If at least one neighbor prefers c_a, one of the following must hold:

• Agents are in an odd ring and all agents prefer c_a. Then a is colored c_a if it is not the sink and it wins the coin flip.

• Otherwise, a is colored c_a if it wins the coin flip.

Recall that E_Γ[u_a] denotes the expected utility for a by following Algorithm 5. Then we have:

 E_Γ[u_a] = (1 − Pr[A])·1 + Pr[A]·(Pr[u|A]·(1 − Pr[s])·(1/2) + (1 − Pr[u|A])·(1/2))
 = 1 − Pr[A] + (Pr[A]/2)·(Pr[u|A] − Pr[u|A]·Pr[s] + 1 − Pr[u|A])
 = 1 − Pr[A] − (Pr[A]·Pr[u|A]·Pr[s])/2 + Pr[A]/2
 = 1 − Pr[A]/2 − (Pr[A]·Pr[u|A]·Pr[s])/2 (4)
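The algebra collapsing the case analysis into the closed form can be verified mechanically. The sketch below checks, over a grid of exact rational probabilities (the grid values are arbitrary), that the case-by-case expectation equals the final line of Equation 4:

```python
from fractions import Fraction

def utility_direct(pA, pU, pS):
    """Case-by-case expectation, as in the first line of Equation 4."""
    half = Fraction(1, 2)
    return (1 - pA) + pA * (pU * (1 - pS) * half + (1 - pU) * half)

def utility_closed(pA, pU, pS):
    """Closed form: 1 - Pr[A]/2 - Pr[A]Pr[u|A]Pr[s]/2."""
    return 1 - pA / 2 - pA * pU * pS / 2

grid = [Fraction(i, 4) for i in range(5)]
ok = all(utility_direct(a, u, s) == utility_closed(a, u, s)
         for a in grid for u in grid for s in grid)
print(ok)
```

Using `Fraction` keeps every comparison exact, so agreement on the grid confirms the identity holds term by term rather than up to rounding.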

Recall that we assume the agents' prior on the preferences of other agents is uniform. Denote by C the set of possible color preferences, so the probability that any particular agent prefers c_a is 1/|C|. Since agents know there are at least 3 colors (otherwise, there would be no coloring for an odd ring), Pr[A] = 1 − (1 − 1/|C|)² ≤ 1 − (2/3)² = 5/9. Furthermore, in a sink-selection process where agents follow the protocol, the probability that a is selected as the sink is Pr[s] = 1/n.

Thus we can bound E_Γ[u_a] in Equation 4 from below:

 E_Γ[u_a] = 1 − Pr[A]/2 − (Pr[A]·Pr[u|A]·Pr[s])/2 ≥ 1 − (1/2)·(5/9) − (1/2)·(5/9)·(1/n)·Pr[u|A]