1 Introduction
The complexity or simplicity of most distributed computing problems depends on the a-priori knowledge given to all participants. Usually, the more information the processors in a network start with, the more efficient and simple the algorithm for a problem is. Sometimes, this information renders an otherwise unsolvable problem solvable.
We consider a network of rational agents [1, 4] who participate in an algorithm and may deviate from it if they deem a deviation more profitable for them, i.e., a deviation makes the execution more likely to produce their desired output. To differentiate from Byzantine faults, we require the Solution Preference property, which ensures agents never prefer an outcome in which the algorithm fails (e.g., terminates incorrectly) over an outcome with a legal output. Previous works in this setting [4, 5, 22] assumed agents a-priori know $n$, the number of agents in the network (henceforth called the actual number of agents).
Our model is motivated by multiagent protocols in which the participants may cheat in order to achieve the result they think is best for them. Consider a distributed frequency assignment (Coloring) between cellular network providers, in which each prefers a certain frequency (color) for which it already has equipment or infrastructure. Companies may then cheat in the distribution process in order for their preferred frequency to be assigned to them. Another example is an online game, in which players start by selecting the player that will host the game and thus enjoy the best network latency in the game to follow.
In this paper we examine the a-priori knowledge about $n$ required for equilibrium in a distributed network of rational agents, each of which has a preference over the output. Unlike the case in which $n$ is known, agents may also deviate from the algorithm by duplicating themselves to affect the outcome. This deviation is also known as a Sybil Attack [17], commonly used to manipulate internet polls, increase page rankings in Google [12], and affect reputation systems such as eBay [11, 13]. In this paper, we use a Sybil Attack to prove impossibility of equilibria. For each problem presented, an equilibrium when $n$ is known is provided herein or was provided in a previous work, so in these cases deviations that do not include duplication cannot benefit the agents. Obviously, deviations from the algorithm that include both duplication and additional cheating are also possible.
The problems we examine here can be solved in standard distributed computing without any knowledge about $n$, since we can easily acquire the size of the network by a broadcast and echo. However, learning the size of the network reliably is no longer possible with rational agents, and thus, for some problems, a-priori knowledge of $n$ is critical for equilibrium.
Intuitively, the more agents an agent is disguised as, the more power it has to affect the output of the algorithm. For every problem, we strive to find the maximum number of duplications a cheater may perform without gaining the ability to affect the output, i.e., such that equilibrium is still possible. This maximum number of duplications depends on whether other agents will detect that a duplication has taken place, since the network could not possibly be this large. To detect this situation they need to possess knowledge about the network size, or about a specific structure.
In this paper, we translate this intuition into a precise computation of the relation between the lower bound $\alpha$ and the upper bound $\beta$ on $n$ that must be a-priori known in order for equilibrium to be possible. We denote this relation the bound of the problem. These bounds hold for both deterministic and non-deterministic algorithms.
To find the bound of a problem, we first show the minimum number of duplications for which equilibrium is impossible when $n$ is not known at all, and then show an algorithm that is an equilibrium when the number of duplications is limited to that number, as well as when $n$ itself is a-priori known. Finally, we calculate the bound by balancing the profit an agent may gain by duplicating itself against the risk it takes of being caught.
Table 1 summarizes our contributions and related previous work (where there is a citation). A ✓ mark denotes that we provide herein an algorithm for this case, and an ✗ denotes that we prove that no equilibrium is possible in that case. Known refers to algorithms in which $n$ is a-priori known to all agents. Unknown refers to an algorithm or impossibility of equilibrium when agents a-priori know no bound on $n$. The bound for each problem is a function $f$ for which there is an equilibrium when the upper and lower bounds $\alpha, \beta$ on $n$ satisfy $\beta \le f(\alpha)$, and no equilibrium exists when $\beta > f(\alpha)$. A problem is $\infty$-bound if there is an equilibrium given any finite bound, but no equilibrium exists if no bound or information about $n$ is a-priori given. A problem is unbounded if there is an equilibrium even when neither $n$ nor any bound on $n$ is given.
Problem  n Known  n Unknown  Bound

Coloring  ✓  ✗*  
Leader Election  ✓ ADH’13 [4]  ✗ ADH’13 [4]  
Knowledge Sharing  ✓ AGLS’14 [5]  ✗*  
q-Knowledge Sharing  ✗*  
Partition, Orientation  ✓  ✓  Unbounded 
* bound proven for a ring graph
1.1 Related Work
The connection between distributed computing and game theory stemmed from the problem of secret sharing
[32]. Further works continued the research on secret sharing and multi-party computation when both Byzantine and rational agents are present [2, 15, 18, 20, 21, 29]. Another line of research presented the BAR model (Byzantine, acquiescent [37], and rational) [6, 31, 37], while a related line of research discusses converting solutions with a mediator into cheap talk [2, 3, 9, 10, 16, 25, 30, 33, 35, 36].
Abraham, Dolev, and Halpern [4] were the first to present protocols where processors in the network behave as rational agents, specifically protocols for Leader Election. In [5] the authors continue this line of research by providing basic building blocks for game-theoretic distributed algorithms, namely Wake-Up and Knowledge Sharing equilibrium building blocks. Algorithms for Consensus, Renaming, and Leader Election are presented using these building blocks. Consensus was researched further by Halpern and Vilaça [22], who showed that there is no ex-post Nash equilibrium, and presented a Nash equilibrium that tolerates failures under some minimal assumptions on the failure pattern.
2 Model
We use the standard message-passing model, where the network is a bidirectional graph $G$ with $n$ nodes, each node representing an agent, and edges over which they communicate. $G$ is assumed to be $2$-vertex-connected^{3}^{3}3 This property was shown necessary in [5]: if the graph has a cut node, that node can alter any message passing through it. Such a deviation cannot be detected since all messages between the subgraphs this node connects must traverse through it. This node can then skew the algorithm according to its preferences.. Throughout the entire paper, $n$ always denotes the actual number of nodes in the network. Initially, each agent knows its own id and input, but not the id or input of any other agent. We assume the prior of each agent over any information it does not know is uniform over all possible values. Each agent is assigned a unique id, taken from the set of natural numbers. Furthermore, we assume all agents start the protocol together, i.e., all agents wake up at the same time; if not, the Wake-Up [5] building block can be used to relax this assumption.
2.1 Equilibrium in Distributed Algorithms
Informally, a distributed algorithm is an equilibrium if no agent at no point in the execution can do better by unilaterally deviating from the algorithm. When considering a deviation, an agent assumes all other agents follow the algorithm, i.e., it is the only agent deviating.
Following the model in [4, 5] we assume an agent always aborts the algorithm whenever it detects a deviation by another agent, even if the detecting agent could gain by not aborting. In this manner, our notion of equilibrium is basically Bayes-Nash equilibrium^{4}^{4}4 In [4], sequential equilibrium is obtained via an additional assumption on the utility function; however, if an agent detects a deviation that does not necessarily lead to the algorithm's failure, it is still not a sequential equilibrium. , but not sequential equilibrium [23]. We elaborate on this difference in the Discussion (Section 6).
Each node in the network is a rational agent, following the model in [5]. The algorithms produce a single output per agent, once, at the end of the execution. Each agent has a preference only over its own output.
Formally, let $o_p$ be the output of agent $p$, let $O$ be the set of all possible output vectors, and denote the output vector $\vec{o} = (o_1, \dots, o_n)$, where $o_p$ is the output of agent $p$. Let $L \subseteq O$ be the set of legal output vectors, in which the protocol terminates successfully, and let $E$ be the set of erroneous output vectors, such that $L \cup E = O$ and $L \cap E = \emptyset$. Each agent $p$ has a utility function $u_p$ over output vectors. The higher the value assigned by $u_p$ to an output vector, the better this vector is for $p$. To differentiate rational agents from Byzantine faults, we assume the utility function satisfies Solution Preference [4, 5], which guarantees that an agent never has an incentive to cause the algorithm to fail.
Definition 2.1 (Solution Preference).
The utility function $u_p$ of an agent $p$ never assigns a higher utility to an erroneous output than to a legal one, i.e.: $\forall e \in E, \forall l \in L: u_p(e) \le u_p(l)$.
We differentiate the legal output vectors, which ensure the output is valid and not erroneous, from the correct output vectors, which are output vectors that are a result of a correct execution of the algorithm, i.e., without any deviation. The Solution Preference guarantees agents never prefer an erroneous output. However, they may prefer a legal but incorrect output.
Recall that we assume agents only have preferences over their own output, i.e., for any $\vec{o}, \vec{o}' \in L$ where $o_p = o'_p$, $u_p(\vec{o}) = u_p(\vec{o}')$. For simplicity, we also assume each agent $p$ has a single preferred output value $pref_p$, and we normalize the utility function values, such that^{5}^{5}5 This is the weakest assumption that satisfies Solution Preference, since it gives cheating agents the highest incentive to deviate. A utility assigning a lower value for failure than $0$ would deter a cheating agent from deviating. :
(1)  $u_p(\vec{o}) = 1$ if $\vec{o} \in L$ and $o_p = pref_p$, and $u_p(\vec{o}) = 0$ otherwise.
Our results hold for any utility function that satisfies Solution Preference.
Definition 2.2 (Expected Utility).
Let $r$ be a round in a specific execution of an algorithm. Let $p$ be an arbitrary agent. For each possible output vector $\vec{o}$, let $P_{p,r}(\vec{o} \mid s)$ be the probability, estimated by agent $p$ at round $r$, that $\vec{o}$ is output by the algorithm if $p$ takes step $s$^{6}^{6}6 A step specifies the entire operation of the agent in a round. This may include drawing a random number, performing any internal computation, and the contents and timing of any message delivery. , and all other agents follow the algorithm. The Expected Utility $p$ estimates for step $s$ in round $r$ of that specific execution is: $EU_p(r, s) = \sum_{\vec{o} \in O} P_{p,r}(\vec{o} \mid s) \cdot u_p(\vec{o})$. Note that agents can also estimate the expected utility of other agents by simply considering a different utility function.
An agent will deviate whenever a deviating step leads to a strictly higher expected utility than the expected utility of the next step of the algorithm. By the utility function (1), an agent will prefer any deviating step that increases the probability of getting its preferred output, even if that deviating step also increases the risk of an erroneous output.
Let Γ be an algorithm. If by deviating from Γ and taking step $s'$ the expected utility of $p$ is higher, we say that agent $p$ has an incentive to deviate (i.e., cheat). For example, at round $r$ algorithm Γ may dictate that $p$ flips a fair binary coin and sends the result to all of its neighbors. Any other action by $p$ is considered a deviation: whether the message is not sent to all neighbors, is sent later than it should have been, or the coin toss is not fair, e.g., $p$ always sends $0$ instead of a random value. If no agent can unilaterally increase its expected utility by deviating from Γ, we say that Γ is an equilibrium. We assume a single deviating agent, i.e., there are no coalitions of agents.
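As a toy illustration of expected utility and of why a biased coin need not help a cheater (the two-agent XOR protocol and all names below are our own, not the paper's): two agents each draw a random bit, the common output is the XOR of both bits, and agent $p$ prefers output $0$.

```python
from itertools import product

def utility(output, preferred):
    # Normalized utility of Eq. (1): 1 for the preferred legal output, 0 otherwise.
    return 1 if output == preferred else 0

def expected_utility(my_step_dist, other_bit_dist, preferred):
    # my_step_dist / other_bit_dist: {bit: probability} over the bit each agent sends.
    # The protocol's common output is the XOR of the two bits.
    eu = 0.0
    for (b1, p1), (b2, p2) in product(my_step_dist.items(), other_bit_dist.items()):
        eu += p1 * p2 * utility(b1 ^ b2, preferred)
    return eu

fair = {0: 0.5, 1: 0.5}
always_zero = {0: 1.0, 1: 0.0}  # deviation: always send 0 instead of a random bit

honest = expected_utility(fair, fair, preferred=0)
deviate = expected_utility(always_zero, fair, preferred=0)
print(honest, deviate)  # prints 0.5 0.5: the deviation gains nothing here
```

Since no unilateral step raises the deviator's expected utility in this toy protocol, the honest step satisfies the equilibrium condition of Definition 2.3 below.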
Definition 2.3 (Distributed Equilibrium).
Let $s_r$ denote the next step of algorithm Γ in round $r$. Γ is an equilibrium if for any agent $p$ and any deviating step $s' \neq s_r$, at any round $r$ of every possible execution of Γ: $EU_p(r, s_r) \ge EU_p(r, s')$.
2.2 Knowledge Sharing
The Knowledge Sharing problem (adapted from [5]) is defined as follows:

Each agent $p$ has a private input $i_p$, in addition to its id, and a function $f$ over all inputs, where $f$ is identical at all agents.

A Knowledge Sharing protocol terminates legally if all agents output the same value, i.e., $\forall p, q: o_p = o_q$. Thus the set $L$ is defined as: $L = \{\vec{o} \mid \forall p, q: o_p = o_q\}$.

A Knowledge Sharing protocol terminates correctly (as described in Section 2.1) if each agent outputs at the end the value of $f$ over the input values of all agents^{7}^{7}7Notice that any output is legal as long as it is the output of all agents, but only a single output value is considered correct for a given input vector..

The function $f$ satisfies the Full Knowledge property:
Definition 2.4 (Full Knowledge Property).
A function $f$ fulfills the full knowledge property if, for each agent that does not know the input values of all other agents, any output in the range of $f$ is equally possible. Formally, for any agent $p$, fix the inputs of all agents other than $p$ and denote by $f_p$ the resulting function of the single input $i_p$. A function $f$ fulfills the full knowledge property if, for any possible output $o$ in the range of $f$, the number of values of $i_p$ for which $f_p(i_p) = o$ is the same^{8}^{8}8The definition assumes input values are drawn uniformly; otherwise the definition can be expanded to the sum of probabilities over every input value for $i_p$..
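One standard function satisfying the property (our example; the paper does not mandate a specific $f$) is the sum of inputs modulo $q$ (XOR, for $q = 2$): fixing all inputs but one, every output has exactly one preimage, so each output stays equally likely.

```python
from itertools import product

Q = 3  # number of possible outputs; our choice for this sketch

def f(inputs):
    # Candidate knowledge-sharing function: sum of all inputs modulo Q.
    return sum(inputs) % Q

# Fix the inputs of all agents except the last one, then vary the missing input:
# every output in range(Q) is reached exactly once, as Definition 2.4 requires.
for known in product(range(Q), repeat=2):
    outputs = [f(list(known) + [x]) for x in range(Q)]
    counts = {o: outputs.count(o) for o in range(Q)}
    assert all(c == 1 for c in counts.values())

print("full knowledge property holds for sum mod", Q)
```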
We assume that each agent $p$ prefers a certain output value $pref_p$.
q-Knowledge Sharing
The q-Knowledge Sharing problem is a Knowledge Sharing problem with exactly $q$ distinct possible output values.
2.3 Coloring
We assume that every agent $p$ prefers a specific color $pref_p$.
3 Impossibility With No Knowledge
Here we show that without any a-priori knowledge about $n$, there is no algorithm that is an equilibrium for both Knowledge Sharing and Coloring.
A fundamental building block in many algorithms is Wake-Up [5], in which agents learn the graph topology. If $n$ is not known at all, how can an agent be sure the topology it learned is correct?
Let $m$ be a malicious agent with two or more outgoing edges. A possible deviation for $m$ is to simulate two imaginary agents $m_1, m_2$, and to answer over some of its edges as $m_1$, and over the others as $m_2$, as illustrated in Figure 1. From this point on $m$ acts as if it is 2 agents. Here we assume that the id space is much larger than $n$, allowing us to disregard the probability that a fake id collides with an existing id.
Note that an agent may be forced by the protocol to commit to its fake duplication early, if the algorithm starts with a process that maps the graph topology, such as the Wake-Up [5] algorithm. If an algorithm does not begin by mapping the topology, an agent could begin the protocol as a single agent and duplicate itself at a later stage. This allows the agent to “collect information” and increase its ability to affect the output. Since a Wake-Up protocol can be added at the beginning of every algorithm, we assume every algorithm starts by mapping the network, thus forcing a duplicating agent to commit to its duplication scheme at the beginning of the algorithm.
Regarding the output vector, notice that an agent that pretends to be more than one agent still outputs a single output at the end. The duplication causes the other agents to execute the algorithm as if it runs on a graph $G'$ that includes the duplicated agents, instead of the original graph $G$; however, the output is considered legal if it is legal with respect to $G$, rather than with respect to $G'$.
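The duplication deviation above can be sketched as a toy simulation (all class and variable names here are ours, purely for illustration): a cheater answers a topology probe with a fresh fake id on some of its ports, so its neighbors reconstruct a graph with one extra node.

```python
class HonestAgent:
    def __init__(self, uid):
        self.uid = uid

    def identify(self, port):
        return self.uid  # honest: one identity on every port

class SybilAgent:
    def __init__(self, real_uid, fake_uid, fake_ports):
        self.real_uid, self.fake_uid = real_uid, fake_uid
        self.fake_ports = set(fake_ports)

    def identify(self, port):
        # Deviation: answer as the imaginary agent on the chosen ports.
        return self.fake_uid if port in self.fake_ports else self.real_uid

cheater = SybilAgent(real_uid=7, fake_uid=8, fake_ports={2})
ids_seen = {cheater.identify(port) for port in range(3)}
print(sorted(ids_seen))  # prints [7, 8]: neighbors believe two distinct agents exist
```

With an id space much larger than $n$, the fake id is overwhelmingly unlikely to collide with a real one, which is exactly why such a deviation is undetectable without knowledge about $n$.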
It is important to emphasize that for any non-trivial distributed algorithm, the outcome cannot be calculated using only private data without communication, i.e., for rational agents, no agent can calculate the outcome privately at the beginning of the algorithm. This means that at round $0$, for any agent $p$ and any step $s$ of the agent that does not necessarily result in algorithm failure, it must hold that $0 < EU_p(0, s) < 1$ (a value of $0$ means an agent will surely not get its preference, and $1$ means it is guaranteed to get its preference).
In this section we label the agents in graph $G$ as $1, \dots, n$, set in a clockwise manner in a ring and arbitrarily in any other topology. These labels are not known to the agents themselves.
3.1 Impossibility of Knowledge Sharing
Theorem 3.1.
There is no algorithm for Knowledge Sharing that is an equilibrium in a connected graph when agents have no a-priori knowledge of $n$.
Proof.
Assume by contradiction that Γ is a Knowledge Sharing algorithm that is an equilibrium in any graph without knowing $n$. Let $D_1$, $D_2$ be two connected graphs of rational agents. Consider the execution of Γ on the graph $G$ created by taking $D_1$ and $D_2$ and adding two nodes, connecting these nodes to one or more arbitrary nodes in both $D_1$ and $D_2$ (see Figure 3).
Recall that the vector of agents’ inputs is denoted by $\vec{i}$, and the correct output is $f(\vec{i})$. Let $r_1$ be the first round after which $f(\vec{i})$ can be calculated from the collective information that all agents in $D_1$ have^{9}^{9}9 Regardless of the complexity of the computation. , and similarly $r_2$ the first round after which $f(\vec{i})$ can be calculated in $D_2$. Consider the following three cases:

$r_1 < r_2$: $f(\vec{i})$ cannot yet be calculated in $D_2$ at round $r_1$. Let $r = r_1$. Since $r_1 < r_2$, the collective information in $D_1$ at round $r$ is enough to calculate $f(\vec{i})$. Since $n$ is not known, an agent could emulate the behavior of $D_1$ and the two added nodes, making the agents in $D_2$ believe the algorithm runs on $G$ rather than on their actual network. In this case, this cheating agent knows at round $r$ the value of $f(\vec{i})$ in this execution, but the collective information of the agents in $D_2$ is not enough to calculate $f(\vec{i})$, which means the output of the agents in $D_2$ still depends on messages from the cheater. Thus, if the cheater learns that the output $f(\vec{i})$ is not its preferred value, it can send messages that may cause the agents in $D_2$ to decide a different value $o'$. In the case where $o'$ is the cheater's preferred value, the cheater increases its expected utility by sending a set of messages different from that decreed by the protocol. Thus, it has an incentive to deviate, contradicting distributed equilibrium.

$r_1 = r_2$: both $D_1$ and $D_2$ have enough collective information to calculate $f(\vec{i})$ at the same round $r$. A cheating agent emulating $D_1$ controls all messages from the emulated part, so the collective information of the agents in $D_2$ alone is still not enough to calculate $f(\vec{i})$ at round $r$. Thus, similarly to Case 1, the cheater can emulate $D_1$ and has an incentive to deviate.

$r_1 > r_2$: symmetric to Case 1.
Thus, Γ is not an equilibrium for the Knowledge Sharing problem. ∎
Applied with $D_1$ and $D_2$ of equal size, the proof of Theorem 3.1 brings us to the following corollary:
Corollary 3.2.
When a cheating agent may pretend to be more than $n$ agents, there is no algorithm for Knowledge Sharing that is an equilibrium when agents have no a-priori knowledge of $n$.
3.2 Impossibility of Coloring in a Ring
The proof of Theorem 3.1 relies on the Full Knowledge property of the Knowledge Sharing problem, i.e., no agent can calculate the output before knowing all the inputs. The Coloring problem, however, is a more local problem [28], and nodes may color themselves without knowing anything about distant nodes.
Theorem 3.3.
There is no algorithm for Coloring that is an equilibrium in a connected graph when agents have no a-priori knowledge of $n$.
Proof.
In order to show an incentive to deviate, we generalize the notion of expected utility. Recall that an agent outputs a single color, even if it pretends to be several agents. In Coloring, a cheating agent only wishes to influence the output colors of its original neighbors, to enable it to output its preferred color while maintaining the legality of the output.
Definition 3.4 (Group Expected Utility).
Let $r$ be a round in an execution $E$, and let $A$ be a group of agents. For any set of steps $S$ of the agents in $A$, let $X(S)$ be the set of all possible executions in which the same messages traverse the links incoming to and outgoing from $A$ as in $E$ until round $r$, and in round $r$ each agent in $A$ takes its corresponding step in $S$. For each possible output vector $\vec{o}$, let $P(\vec{o} \mid S)$ be the sum of probabilities over $X(S)$ that $\vec{o}$ is decided by the protocol. For any agent $p$, the Group Expected Utility of $A$ taking steps $S$ at round $r$ in execution $E$ is: $EU_A^p(r, S) = \sum_{\vec{o} \in O} P(\vec{o} \mid S) \cdot u_p(\vec{o})$.
Assume by contradiction that Γ is a Coloring algorithm that is an equilibrium in a ring with $n$ agents. Let $R$ be a ring with a segment of consecutive agents, all of which have the same color preference. Assume w.l.o.g. that the segment is centered around a single agent if its length is odd and around two agents if it is even. Let $A$ and $B$ be the two groups of agents on either side of the segment's center (see Figures 4 and 5).
Definition 3.5.
Let $C$ be a group of agents (e.g., $A$ or $B$). In any round $r$ in an execution, let $S_r$ denote the vector of steps of the agents in $C$ according to the protocol. We say $C$ knows the utility of agent $p$ if $EU_C^p(r, S_r) \in \{0, 1\}$. We say $C$ does not know the utility of agent $p$ if $0 < EU_C^p(r, S_r) < 1$.
Recall that at round $0$ no agent (or group of agents) knows its utility or the utility of any other agent. Consider an execution of Γ on the ring $R$ and the groups $A, B$ in the following cases:

$B$ does not know the utility of $p$ throughout the entire execution of the algorithm, i.e., for the agents in $B$ it holds that $0 < EU_B^p(r, S_r) < 1$ at every round $r$. Then if $B$ is emulated by a cheating agent, it has an incentive to deviate and set its output to its preferred color (as otherwise its utility is guaranteed to be $0$).

$B$ knows the utility of $p$ at some round $r$, and does not know it before round $r$. Consider round $r$ and group $A$: in round $r$, $A$ knows the utility of $p$, since the collective information of the agents in $B$ at round $r$ already exists in $A$ at round $r$. If $A$ knows that $u_p = 1$, then $p$ has already won; otherwise, $A$ knows that $u_p = 0$. Consider the group $B$, which does not know the utility of $p$ at round $r$. If $A$ is emulated by a cheating agent, it can send messages that increase its probability to output its preferred color, thus increasing its expected utility.

$A$ knows the utility of $p$ before round $r$: symmetric to Case 2.
Since a ring is a connected graph, the contradictory example for a ring shows there is no equilibrium for Coloring in all connected graphs; thus Γ is not an equilibrium for the Coloring problem. ∎
4 Algorithms
Here we present algorithms for Knowledge Sharing (Section 4.1) and Coloring (Section 4.2). The Knowledge Sharing algorithm is an equilibrium in a ring when no cheating agent pretends to be more than $n$ agents. The Coloring algorithm is an equilibrium in any connected graph when agents a-priori know $n$.
Using an algorithm as a subroutine is not trivial in this setting, even if the algorithm is an equilibrium, as the new context as a subroutine may allow agents to deviate towards a different objective than was originally proven. Thus, whenever a subroutine is used, its equilibrium should be justified.
The full descriptions and proofs of the algorithms and Theorem 4.1 can be found in Appendix A, in addition to a Coloring algorithm with improved time complexity.
4.1 Knowledge Sharing in a Ring
First we describe the SecretTransmit building block, in which agent $p$ delivers its input to some agent $q$ at round $R$, and no other agent in the ring learns any information about this input.
Agent $p$ selects a random number $r$ and computes $r' = x_p \oplus r$, where $x_p$ is its input. It then sends $r$ clockwise and $r'$ counter-clockwise until each reaches the agent just before $q$. At round $R$, each neighbor of $q$ simultaneously sends $q$ the value it received, either $r$ or $r'$.
We assume a global orientation around the ring. This assumption can easily be relaxed via Leader Election [5], which remains an equilibrium in this application since the orientation has no effect on the output. The algorithm works as follows:
All agents execute Wake-Up [5] to learn $n'$, the size of the ring, which may include duplications. For each agent $p$, denote by $next_p$ the clockwise neighbor of $p$, and by $target_p$ the agent at distance $\lceil n'/2 \rceil$ counterclockwise from $p$. All agents around the ring simultaneously use SecretTransmit, each transmitting its input secretly to its corresponding $target_p$, so that all inputs arrive at their destinations at the same round $R$. At round $R$, each agent sends its input around the ring.
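The two shares used by SecretTransmit can be sketched as follows. The XOR encoding is one standard way to realize a two-share secret and is our assumption; the paper's exact encoding may differ.

```python
import secrets

def secret_transmit_shares(x, bits=32):
    # Split input x into two shares: r (sent clockwise) and r' = x XOR r
    # (sent counter-clockwise). Either share alone is uniformly random,
    # so intermediate agents learn nothing about x.
    r = secrets.randbits(bits)
    return r, x ^ r

def reconstruct(r, r_prime):
    # The receiver q recovers the input only once both shares arrive, at round R.
    return r ^ r_prime

x = 0b1011
r, r_prime = secret_transmit_shares(x)
assert reconstruct(r, r_prime) == x
```

Because neither share reveals anything about $x$ on its own, no agent along either path can condition its behavior on the transmitted input before round $R$.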
Theorem 4.1.
In a ring, the algorithm above is an equilibrium when no cheating agent pretends to be more than $n$ agents.
4.2 Coloring

All agents execute Renaming [5], which gives new names to the agents. Since agents strive to minimize their names’ numerical value, Renaming is still an equilibrium.

Each agent, in order of the new names, picks its preferred color if available, or the minimal available color otherwise, and sends its color to all of its neighbors.
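Step 2 above can be sketched as the following sequential greedy rule (the adjacency structure and all names are our illustration; in the distributed setting each agent runs only its own step, in name order):

```python
def greedy_color(neighbors, preferred):
    # neighbors: adjacency dict keyed by the new names from Renaming;
    # preferred: each agent's preferred color. Agents choose in name order.
    color = {}
    for agent in sorted(neighbors):
        taken = {color[v] for v in neighbors[agent] if v in color}
        if preferred[agent] not in taken:
            color[agent] = preferred[agent]
        else:
            # otherwise, the minimal available color
            c = 0
            while c in taken:
                c += 1
            color[agent] = c
    return color

# Triangle where all three agents prefer color 0; only the smallest name gets it.
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
col = greedy_color(adj, {1: 0, 2: 0, 3: 0})
print(col)  # prints {1: 0, 2: 1, 3: 2}
```

Since the choice order is fixed by Renaming and each agent's color is sent to all neighbors, an agent cannot improve its own color by deviating without producing an illegal (detected) coloring.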
5 How Much Knowledge Is Necessary?
Here we examine the effects of a-priori knowledge that bounds the possible value of $n$. We show that the existence of algorithms that are equilibria depends on the range in which $n$ might be, and derive these ranges for different problems.
Table 2 summarizes our results. Partition and Orientation have equilibria without any knowledge of $n$; however, the former is constrained to even-sized rings, and the latter is a trivial problem in distributed computing (radius $1$ in the LOCAL model [27]).
Definition 5.1 (Knowledge).
We say agents have $(\alpha, \beta)$-Knowledge about the actual number of agents $n$, $\alpha \le \beta$, if all agents know that the value of $n$ is in $[\alpha, \beta]$. We assume agents have no information about the distribution of $n$ over $[\alpha, \beta]$, i.e., they assume it is uniform.
Definition 5.2 (Bound).
Let $f : \mathbb{N} \to \mathbb{N}$. A distributed computing problem $P$ is $f$-bound if:

There exists an algorithm for $P$ that is an equilibrium given $(\alpha, \beta)$-Knowledge for any $\alpha, \beta$ such that $\beta \le f(\alpha)$.

For any algorithm for $P$, there exist $\alpha, \beta$ where $\beta > f(\alpha)$ such that given $(\alpha, \beta)$-Knowledge the algorithm is not an equilibrium.
In other words, a problem is $f$-bound if given $(\alpha, \beta)$-Knowledge, there is an equilibrium when $\beta \le f(\alpha)$, and there is no equilibrium when $\beta > f(\alpha)$. A problem is $\infty$-bound if there is an equilibrium given any finite upper bound $\beta$, but there is no equilibrium when no upper bound on $n$ is known. A problem is unbounded if there is an equilibrium even with no knowledge about $n$ at all.
Bound  Problem (in a ring) 

Leader Election^{10}^{10}10 These results hold in general graphs, as well.  
Knowledge Sharing  
Coloring, q-Knowledge Sharing  
Partition, Orientation^{10} 
Consider an agent $p$ at the start of a protocol given $(\alpha, \beta)$-Knowledge. If $p$ pretends to be a group of $t$ agents, it can be caught when $n + t - 1 > \beta$, since the agents might discover the number of agents and catch the cheater. Moreover, any duplication now involves some risk, since the actual value of $n$ is not known to the cheater.
An arbitrary cheating agent simulates executions of the algorithm for every possible duplication and evaluates its expected utility. Denote by $t$-duplication a scheme in which an agent pretends to be $t$ agents. Let $P_t$ be the probability, from agent $p$’s perspective, that the overall size of the network, including its duplications, does not exceed $\beta$. If for agent $p$ there exists a $t$-duplication at round $r$ whose expected utility, weighted by $P_t$, exceeds the expected utility of following the protocol, then agent $p$ has an incentive to deviate and duplicate itself.
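The detection risk $P_t$ under the uniform prior of Definition 5.1 can be sketched as follows (the exact inequality balancing risk against gain is proved in Appendix B; the modelling choices here, such as conditioning on the agents $p$ has already observed, are our assumptions):

```python
def prob_undetected(alpha, beta, t, observed):
    # p pretends to be t agents (t - 1 duplications). If the true size n plus
    # the t - 1 fake agents exceeds beta, the cheat can be detected. With n
    # uniform on [alpha, beta], conditioned on n >= observed (agents p has
    # already seen), count the sizes for which the cheat stays plausible.
    lo = max(alpha, observed)
    sizes = range(lo, beta + 1)
    safe = [n for n in sizes if n + t - 1 <= beta]
    return len(safe) / len(sizes)

# With (alpha, beta) = (4, 8): doubling oneself (t = 2) is usually undetectable,
# while pretending to be 6 agents is detectable for every possible n.
p_small = prob_undetected(4, 8, t=2, observed=4)
p_large = prob_undetected(4, 8, t=6, observed=4)
print(p_small, p_large)  # prints 0.8 0.0
```

The narrower the range $[\alpha, \beta]$, the smaller $P_t$ becomes for any useful $t$, which is exactly how tighter a-priori knowledge removes the incentive to duplicate.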
The proofs of the following theorems and corollaries can be found in Appendix B. For each problem, we look for the maximal range of $[\alpha, \beta]$ for which no $t$-duplication satisfies the condition above.
Knowledge Sharing
Theorem 5.3.
Knowledge Sharing is bound.
Corollary 5.4.
Knowledge Sharing is bound.
Coloring
Theorem 5.5.
Coloring in a ring is bound.
Leader Election
In the Leader Election problem, each agent $p$ outputs $o_p \in \{0, 1\}$, where $1$ means that $p$ was elected leader and $0$ means otherwise. $L$ is the set of output vectors in which exactly one agent outputs $1$. We assume that every agent prefers either $0$ or $1$.
Theorem 5.6.
Leader Election is bound.
Ring Partition
In the Ring Partition problem, the goal is to partition the agents of an even-sized ring into two equally-sized groups: group $A$ and group $B$. We assume that every agent prefers to belong to either group $A$ or group $B$.
Theorem 5.7.
Ring Partition is unbounded.
Orientation
In the Orientation problem the two ends of each edge must agree on a direction for this edge. We assume that every agent prefers certain directions for its edges.
Unlike Ring Partition, Orientation is defined for any graph. It is, however, a very local problem (radius $1$ in the LOCAL model [27]).
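A radius-1 rule of the kind that makes Orientation so local can be sketched as follows (this id-comparison rule is our illustration, not necessarily the paper's protocol, and it assumes the ids of both endpoints are common knowledge on each edge): orient every edge from the smaller id to the larger id, so the outcome is fully determined by information the two endpoints already hold.

```python
def orient_edges(edges):
    # Orient every edge from the smaller id to the larger id. Both endpoints
    # can verify the direction locally, leaving no room for a profitable lie.
    return {frozenset(e): (min(e), max(e)) for e in edges}

ring = [(1, 2), (2, 3), (3, 4), (4, 1)]
oriented = orient_edges(ring)
print(sorted(oriented.values()))  # prints [(1, 2), (1, 4), (2, 3), (3, 4)]
```

Since the direction of each edge is a deterministic function of data both endpoints know, any deviation is immediately detected by the other endpoint, regardless of any knowledge about $n$.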
Theorem 5.8.
The Orientation problem is unbounded.
6 Discussion
Distributed algorithms are commonly required to work in arbitrarily large networks. In a realistic scenario, the exact size of the network may not be known to all of its members. In this paper, we have shown that in most problems duplication gives an agent power to affect the outcome of the algorithm. The number of duplications an agent can create is limited by the ability of other agents to detect this deviation, and the only tool enabling such detection is a-priori knowledge about $n$. Section 3 shows that without any such knowledge, some distributed problems become impossible to solve without some agent having an incentive to deviate from the algorithm.
The bounds we have proven for common distributed problems show that the initial knowledge required for equilibrium depends on the balance between two factors: (1) the number of duplications necessary to increase an agent’s expected utility, and by how much it increases; and (2) the expected utility for an agent that follows the protocol. For an agent to have an incentive to duplicate itself, an undetected duplication must either be considerably more profitable than following the algorithm, or involve a low risk of being caught.
Our results produce several directions that may be of interest:

Proving impossibility and bounds in general topology graphs, as for some of the problems we only discussed ring networks.

Proving impossibility and showing algorithms for other problems with rational agents, which result in other tight bounds.

Finding a problem that has an equilibrium only when $n$ is known exactly.

What defines a trivial or non-trivial problem with rational agents? More specifically, finding a characteristic that separates problems that can be solved without any knowledge about $n$ from ones in which at least some bound must be a-priori known.

Finding an unbounded problem not inherently limited (as Orientation or ring Partition are), or finding proof that no such problem exists.

Exploring the effects of initial knowledge about network size in an asynchronous setting.

Similar to [4, 5], our notion of equilibrium is basically Bayes-Nash equilibrium, but not sequential equilibrium [23]. Sequential equilibrium removes the assumption that agents fail the algorithm whenever they detect another agent cheating. In [4], the authors suggest an additional assumption on agents’ utility functions in order to obtain sequential equilibrium; however, in case an agent detects a deviation that does not necessarily lead to the algorithm's failure, it still has no incentive to cause the algorithm to fail, and thus it is not a sequential equilibrium. It would be interesting to find sequential equilibria and the problems for which they are possible.
7 Acknowledgment
We would like to thank Doron Mukhtar for showing us the Ring Partition problem and proving it is unbounded, when we thought no such problems exist. We would also like to thank Michal Feldman, Amos Fiat, and Yishay Mansour for helpful discussions.
References
 [1] I. Abraham, L. Alvisi, and J. Y. Halpern. Distributed computing meets game theory: Combining insights from two fields. SIGACT News, 42(2):69–76, June 2011.
 [2] I. Abraham, D. Dolev, R. Gonen, and J. Y. Halpern. Distributed computing meets game theory: robust mechanisms for rational secret sharing and multi-party computation. In PODC, pages 53–62, 2006.
 [3] I. Abraham, D. Dolev, and J. Y. Halpern. Lower bounds on implementing robust and resilient mediators. In TCC, pages 302–319, 2008.
 [4] I. Abraham, D. Dolev, and J. Y. Halpern. Distributed protocols for leader election: A game-theoretic perspective. In DISC, pages 61–75, 2013.
 [5] Y. Afek, Y. Ginzberg, S. Landau Feibish, and M. Sulamy. Distributed computing building blocks for rational agents. In Proceedings of the 2014 ACM Symposium on Principles of Distributed Computing, PODC ’14, pages 406–415, New York, NY, USA, 2014. ACM.
 [6] A. S. Aiyer, L. Alvisi, A. Clement, M. Dahlin, J.-P. Martin, and C. Porth. BAR fault tolerance for cooperative services. In SOSP, pages 45–58, 2005.
 [7] H. Attiya and J. Welch. Distributed Computing: Fundamentals, Simulations and Advanced Topics. John Wiley & Sons, 2004.
 [8] B. Awerbuch, M. Luby, A. V. Goldberg, and S. A. Plotkin. Network decomposition and locality in distributed computation. In Proceedings of the 30th Annual Symposium on Foundations of Computer Science, SFCS ’89, pages 364–369, Washington, DC, USA, 1989. IEEE Computer Society.
 [9] I. Bárány. Fair distribution protocols or how the players replace fortune. Math. Oper. Res., 17(2):327–340, May 1992.
 [10] E. Ben-Porath. Cheap talk in games with incomplete information. J. Economic Theory, 108(1):45–71, 2003.
 [11] R. Bhattacharjee and A. Goel. Avoiding ballot stuffing in ebaylike reputation systems. In Proceedings of the 2005 ACM SIGCOMM Workshop on Economics of Peertopeer Systems, P2PECON ’05, pages 133–137, New York, NY, USA, 2005. ACM.
 [12] M. Bianchini, M. Gori, and F. Scarselli. Inside pagerank. ACM Trans. Internet Technol., 5(1):92–128, Feb. 2005.
 [13] A. Cheng and E. Friedman. Sybilproof reputation mechanisms. In Proceedings of the 2005 ACM SIGCOMM Workshop on Economics of Peertopeer Systems, P2PECON ’05, pages 128–132, New York, NY, USA, 2005. ACM.
 [14] R. Cole and U. Vishkin. Deterministic coin tossing with applications to optimal parallel list ranking. Inf. Control, 70(1):32–53, July 1986.
 [15] V. Dani, M. Movahedi, Y. Rodriguez, and J. Saia. Scalable rational secret sharing. In PODC, pages 187–196, 2011.
 [16] Y. Dodis, S. Halevi, and T. Rabin. A cryptographic solution to a game theoretic problem. In CRYPTO, pages 112–130, 2000.
 [17] J. R. Douceur. The Sybil attack. In Revised Papers from the First International Workshop on Peer-to-Peer Systems, IPTPS ’01, pages 251–260, London, UK, 2002. Springer-Verlag.
 [18] G. Fuchsbauer, J. Katz, and D. Naccache. Efficient rational secret sharing in standard communication networks. In TCC, pages 419–436, 2010.

 [19] A. Goldberg, S. Plotkin, and G. Shannon. Parallel symmetry-breaking in sparse graphs. In Proceedings of the Nineteenth Annual ACM Symposium on Theory of Computing, STOC ’87, pages 315–324, New York, NY, USA, 1987. ACM.
 [20] S. D. Gordon and J. Katz. Rational secret sharing, revisited. In SCN, pages 229–241, 2006.
 [21] A. Groce, J. Katz, A. Thiruvengadam, and V. Zikas. Byzantine agreement with a rational adversary. In ICALP (2), pages 561–572, 2012.
 [22] J. Y. Halpern and X. Vilaça. Rational consensus: Extended abstract. In Proceedings of the 2016 ACM Symposium on Principles of Distributed Computing, PODC ’16, pages 137–146, New York, NY, USA, 2016. ACM.
 [23] D. M. Kreps and R. Wilson. Sequential equilibria. Econometrica, 50(4):863–894, 1982.
 [24] F. Kuhn and R. Wattenhofer. On the complexity of distributed graph coloring. In Proceedings of the Twenty-fifth Annual ACM Symposium on Principles of Distributed Computing, PODC ’06, pages 7–15, New York, NY, USA, 2006. ACM.
 [25] M. Lepinski, S. Micali, C. Peikert, and A. Shelat. Completely fair SFE and coalition-safe cheap talk. In PODC, pages 1–10, 2004.
 [26] N. Linial. Legal coloring of graphs. Combinatorica, 6(1):49–54, 1986.
 [27] N. Linial. Distributive graph algorithms: Global solutions from local data. In Proceedings of the 28th Annual Symposium on Foundations of Computer Science, SFCS ’87, pages 331–335, Washington, DC, USA, 1987. IEEE Computer Society.
 [28] N. Linial. Locality in distributed graph algorithms. SIAM Journal on Computing, 21(1):193–201, 1992.
 [29] A. Lysyanskaya and N. Triandopoulos. Rationality and adversarial behavior in multiparty computation. In CRYPTO, pages 180–197, 2006.
 [30] R. McGrew, R. Porter, and Y. Shoham. Towards a general theory of non-cooperative computation. In TARK, pages 59–71, 2003.
 [31] T. Moscibroda, S. Schmid, and R. Wattenhofer. When selfish meets evil: byzantine players in a virus inoculation game. In PODC, pages 35–44, 2006.
 [32] A. Shamir. How to share a secret. Commun. ACM, 22(11):612–613, 1979.
 [33] Y. Shoham and M. Tennenholtz. Non-cooperative computation: Boolean functions with correctness and exclusivity. Theoretical Computer Science, 343(1–2):97–113, 2005.
 [34] M. Szegedy and S. Vishwanathan. Locality based graph coloring. In Proceedings of the Twenty-fifth Annual ACM Symposium on Theory of Computing, STOC ’93, pages 201–207, New York, NY, USA, 1993. ACM.
 [35] A. Urbano and J. E. Vila. Computational complexity and communication: Coordination in two-player games. Econometrica, 70(5):1893–1927, September 2002.
 [36] A. Urbano and J. E. Vila. Computationally restricted unmediated talk under incomplete information. Economic Theory, 2004.
 [37] E. L. Wong, I. Levy, L. Alvisi, A. Clement, and M. Dahlin. Regret freedom isn’t free. In OPODIS, pages 80–95, 2011.
Appendix A Algorithms
a.1 Knowledge Sharing in a Ring
Here we present an algorithm for Knowledge Sharing that is an equilibrium in a ring when no cheating agent pretends to be more than agents. Clearly, when agents apriori know it is an equilibrium, since a cheating agent is further constrained not to duplicate at all. At any point in the algorithm, whenever an agent recognizes that another agent has deviated from the protocol it immediately outputs resulting in the failure of the algorithm.
We assume a global orientation around the ring. This assumption can be easily relaxed via Leader Election [5]. Since the orientation has no effect on the output, Leader Election is an equilibrium in this application.
According to Corollary 3.2, when a cheating agent pretends to be more than agents, there is no algorithm for Knowledge Sharing that is an equilibrium. On the other hand, the algorithm presented here is an equilibrium when the cheating agent pretends to be no more than agents, proving that the bound is tight.
We start by describing the intuition behind the algorithm. Let be the size of the ring, which may include duplications. Since a cheater may be at most agents, our algorithm must ensure that any group of consecutive agents never gains enough collective information to calculate , the output of Knowledge Sharing, before the collective information at the rest of the ring is also enough to calculate .
To ensure this property, we employ a method by which agent delivers its input to some agent at a specific round , without revealing any information about to any of the agents other than and . This method is used by every agent to send its input to two other agents that are distant enough to prevent a consecutive group of size from learning this input too early.
At round , the input values sent by this method are revealed simultaneously. Afterwards, every possible group of consecutive agents has already committed to the inputs of all its members, so it is too late to change them. Now every agent can simply send its input around the ring.
Algorithm 1 describes the SecretTransmit building block. The building block is called by an agent and receives three arguments: its input , a round , and a target agent . It assumes neighbors of know they are neighbors of . The building block delivers at round to agent , and no other agent around the ring gains any information about this input. Additionally, agent learns the input at round and not before. In the SecretTransmit building block, agent selects a random number and a value , the XOR of its input with . Each value is sent in a different direction around the ring until reaching a neighbor of . At round , both neighbors send the values and to , thus learns at round and no other agent around the ring has any information about at round .
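The XOR-masking at the heart of SecretTransmit can be sketched as follows. This is a minimal illustration, assuming standard XOR secret sharing; the function names are ours, and the routing of the two pieces in opposite directions around the ring is omitted:

```python
import random

def secret_split(secret: int, nbits: int = 32):
    """Split `secret` into two XOR shares; each share alone reveals
    nothing about the secret (the pad is uniformly random)."""
    r = random.getrandbits(nbits)  # the random pad sent one way
    x = secret ^ r                 # the masked value sent the other way
    return r, x

def secret_combine(r: int, x: int) -> int:
    """Recombine the two shares, as the target agent does at the
    designated reveal round."""
    return r ^ x

s = 0xC0FFEE
r, x = secret_split(s)
assert secret_combine(r, x) == s
```

Since only the target agent ever holds both pieces before the reveal round, no other single agent (or consecutive group missing one of the pieces) learns anything about the input.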
Algorithm 2 solves Knowledge Sharing in a ring using the SecretTransmit building block. All agents simultaneously transmit their input, each to other agents. For each agent , the input is sent using SecretTransmit to its clockwise neighbor, and to the agent that is at distance counterclockwise from . Note that these agents form the two ends of a group of consecutive agents that do not include . This guarantees that if is a cheater pretending to be agents, it does not learn the input before round , since at least one piece of each transmission has not reached any agent in at any round . At round , the agents in already committed all of their input values to some agents in the ring that are not in .
Theorem A.1.
In a ring, Algorithm 2 is an equilibrium when no cheating agent pretends to be more than agents.
Proof.
Assume by contradiction that a cheating agent pretending to be agents has an incentive to deviate from Algorithm 2, w.l.o.g., the duplicated agents are (recall the indices are not known to the agents).
Let be the size of the ring including the duplicated agents, i.e., . The clockwise neighbor of is . Denote the agent at distance counterclockwise from , and note that .
When calls SecretTransmit to , holds the piece of that transmission until round . When calls SecretTransmit to , holds the piece of that transmission until round . By our assumption, the cheating agent duplicated into . Since , the cheater receives at most one piece ( or ) of each of ’s transmissions before round . So, there is at least one input that the cheater does not learn before round . According to the Full Knowledge property (Definition 2.4), for the cheater at round any output is equally possible, so its expected utility for any value it sends is the same, thus it has no incentive to cheat regarding the values it sends in round .
Let be an arbitrary duplicated agent. In round , is known by its clockwise neighbor and by , the agent at distance counterclockwise from . Since the number of counterclockwise consecutive agents in is greater than , at least one of is not a duplicated agent. Thus, at round , the input of each agent in is already known by at least one agent .
At round the cheater does not know the input value of at least one other agent, so it has no incentive to deviate. At round for each duplicated agent the cheating agent pretends to be, its input is already known by a nonduplicated agent, which disables the cheater from lying about its input from round and on.
Thus, the cheating agent has no incentive to deviate, contradicting our assumption. ∎
a.2 Coloring Algorithm
Here, agents are given exact apriori knowledge of , i.e., they know the exact value of at the beginning of the protocol. We present two protocols for Coloring with rational agents, and discuss their properties. In both algorithms, whenever an agent recognizes that another agent has deviated from the protocol, it immediately outputs resulting in the failure of the algorithm.
a.2.1 Tie Breaking
In most algorithms with rational agents, a prominent strategy [5, 22, 16] is to create a neutral mechanism that when two agents’ preferences conflict, the mechanism decides which agent gets its preference, and which does not. We refer to such a mechanism as tie breaking.
Since agent s are private and agents may cheat about their , they cannot be used to break ties. However, an orientation over an edge shared by both agents, achieved without any agent deviating from the protocol that leads to it, can be such a tie-breaking mechanism for coloring: whenever neighbors prefer the same color, we break ties according to the orientation of the link between them. Breaking ties for coloring also requires the orientation to be acyclic, since a cycle in which all agents prefer the same color creates a “tie” that is not broken by the orientation.
Note that since the agents are rational, unless agent knows that one or more of its neighbors output its preferred color , it will output it itself regardless of the result of the algorithm, which is a deviation. Thus, any coloring algorithm must ensure that whenever an agent can output its preferred color, it does, otherwise the agent has an incentive to deviate.
We create an acyclic orientation by a Renaming algorithm that reaches equilibrium [5]. The algorithm gives new names to the agents, which is in fact an coloring of ; however due to the circumstances described above (each agent should output its preference if none of its neighbors does), this coloring is not enough. Instead, each agent, in order of the new names, picks its preferred color if available, or the minimal available color otherwise, and sends its color to all of its neighbors.
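The per-agent color choice described above can be sketched as follows. This is a hedged Python sketch under our own naming; the Renaming subroutine itself is taken as given and only its resulting order is used:

```python
def greedy_preferred_coloring(order, neighbors, preferred):
    """Each agent, in the order of its new name, takes its preferred
    color if no already-colored neighbor holds it, and otherwise the
    minimal available color.
    order: agent ids sorted by the names produced by Renaming.
    neighbors: adjacency dict. preferred: agent -> preferred color."""
    color = {}
    for v in order:
        taken = {color[u] for u in neighbors[v] if u in color}
        if preferred[v] not in taken:
            color[v] = preferred[v]   # the agent gets its preference
        else:
            c = 0
            while c in taken:         # otherwise: minimal available color
                c += 1
            color[v] = c
    return color
```

Note that whenever an agent's preferred color is still available, the rule outputs it, which is exactly the property required so that no rational agent benefits from deviating at its turn.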
Theorem A.2.
Algorithm 3 reaches Distributed Equilibrium for the coloring problem.
Proof.
Let be an arbitrary agent. Assume in contradiction that at some round there is a possible step such that:
First, it must be shown that an agent does not have an incentive to deviate in the subroutine in order to affect the output of the entire algorithm. In the case of Algorithm 3, the only deviation that would benefit in the Renaming subroutine is to minimize , i.e. ensuring it picks a color as early as possible. From the building block in [5] we get that the Renaming building block is an equilibrium for agents with preferences on the resulting names, thus ensuring that there is no relevant deviation possible in the subroutine as no agent can unilaterally improve the probability of having a lower value.
Another property of the Renaming subroutine is that, after its completion, all agents know the names assigned to all agents in the network.
Consider the possible steps could take at any round following the Renaming subroutine:

Sending a message out of order is immediately recognized, since all values, as well as the round number, are known to all agents; i.e., in any round , has no incentive to send any message at all, since doing so fails the algorithm.

At , must output a color and send it to its neighbors. If then outputs . It also has no incentive not to correctly notify its neighbors that it is its output, as this notification ensures none of them output (as that would result in utility for that neighbor). If then the color is taken by a neighbor, and has no incentive to deviate since its utility is already .
Thus, the algorithm solves Coloring and is an equilibrium. ∎
a.2.2 Improving The Algorithm
The Renaming process induces more than an acyclic orientation of graph ; it is a total ordering of all agents in the graph. Coloring, however, is in many cases a local property and can be decided locally [28, 14, 19]. Additionally, the Renaming protocol in [5] incurs a high message complexity.
We present another algorithm for coloring, detailed in Algorithm 6, which improves the message complexity to by computing an acyclic orientation of graph .
First, run WakeUp [5] to learn the graph topology and the s of all agents. Then, in order of s, each agent draws a random number with a neighboring “witness” agent as specified in Algorithm 4, and sends it to all of its neighbors. The number is drawn in the range and is different from the numbers of all neighbors of , which is in fact a coloring of . However, due to the circumstances described in A.2.1, this coloring is not enough. By picking a random number with a witness, the agent cannot cheat in the random number generation process, and is marked as a witness for future verification. When done, each agent simultaneously verifies the numbers published by its neighbors using Algorithm 5, which enables it to receive each value through two disjoint paths: directly from the neighbor, and via the shortest simple path to the neighbor’s witness that does not include the neighbor. Then each agent, in order of the values drawn randomly, picks its preferred color if available, or the minimal available color otherwise, and sends its color to all of its neighbors.
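The witness mechanism relies on both parties contributing randomness in the same round, so neither can bias the result. The exact combining rule is not specified here; a common choice, which we assume purely for illustration, is to add the two simultaneous draws modulo the range:

```python
import random

def joint_random(range_size: int) -> int:
    """Sketch of drawing a random number with a witness: the drawing
    agent and the witness each pick uniformly and exchange in the same
    round; the sum modulo range_size is uniform whenever at least one
    of the two parties is honest."""
    a = random.randrange(range_size)  # drawing agent's contribution
    b = random.randrange(range_size)  # witness's contribution
    return (a + b) % range_size
```

Because the messages cross in the same round, a cheater cannot wait to see the other contribution before choosing its own, and the witness later serves as an independent source for verification.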
The resulting message complexity of the algorithm is as follows: is . Drawing a random number is called times and thus uses messages in total, to publish values to neighbors. Verifying the value of a neighbor uses messages and is called times, for a total of messages. Sending the output color to all neighbors uses an additional messages. The total number of messages is thus .
Theorem A.3.
Algorithm 6 reaches Distributed Equilibrium for the coloring problem.
Proof.
Let be an arbitrary agent. Assume in contradiction that at some round there is a possible step such that:
Consider the possible deviations for in every phase of Algorithm 6:

Cheating in WakeUp. The expected utility is independent of the order in which agents draw their random numbers in Algorithm 4, i.e., the order in which agents initiate Algorithm 4 has no effect on the order in which they will later set their colors, so has no incentive to publish a false in the WakeUp building block.

Drawing a random number with a witness is an equilibrium: Both agents send a random number at the same round.

Publishing a false value will be caught by a future verification process with when all values are verified (step 10 of Algorithm 6).

Sending a color message not in order will be immediately recognized by the neighbors, since values were verified.

might output a different color than the color dictated by Algorithm 6. But if the preferred color is available, then outputting it is the only rational behavior. Otherwise, the utility for the agent is already in any case.
Thus, the algorithm solves Coloring and is an equilibrium. ∎
Appendix B How Much Knowledge Is Necessary?
b.1 Knowledge Sharing is bound
Proof.
Assume agents have knowledge for some . A cheating agent ’s goal is to choose a value , the number of agents it pretends to be, that maximizes its expected utility.
Let be the number of possible outputs of a Knowledge Sharing algorithm, i.e., the range of the output function is of size . By the Full Knowledge property (definition 2.4), any output is equally possible. Therefore, without deviation the expected utility of at round is: .
According to Theorem 4.1, Algorithm 2 is an equilibrium for Knowledge Sharing in a ring when a cheating agent pretends to be agents or less. Corollary 3.2 shows that when a cheating agent pretends to be more than agents, no algorithm for Knowledge Sharing is an equilibrium. Thus, looking at all possible values of in the range , wants to maximize the probability that and the duplication increases its utility, while also minimizing the probability that and the algorithm fails.
If cheats and pretends to be agents, then necessarily , otherwise according to Theorem 4.1 there is an equilibrium, and the duplication does not increase its utility for any value of in . Additionally it holds that , since any higher value of increases ’s chances of being caught and failing the algorithm (when ), without increasing the number of possible values of for which its utility is higher (when and ).
From ’s perspective at the beginning of the algorithm, the value of is uniformly distributed over , i.e., there are a total of equally possible values for . According to Corollary 3.2 when agent can increase its expected utility by deviating. Let be the utility gains by pretending to be agents successfully (i.e., when and the algorithm does not fail). Since is uniformly distributed over this has a probability of to occur. On the other hand, when pretending to be agents successfully the utility of does not change and is , and this has a probability of . In all other cases and the algorithm fails, resulting in a utility of . Thus, the expected utility of agent at round :
(2) 
By the constraints specified above, the value of that maximizes (2) is , and will deviate from the algorithm whenever . To find the bound on we derive as a function of :
(3) 
From (3) we can derive the following:

The inequalities are satisfiable only if . Since , Knowledge Sharing () cannot satisfy the inequalities, and never has an incentive to deviate, given any bound. This proves Corollary 5.4.

For Knowledge Sharing, we find the range that holds for any . As grows large, nears . Assuming the profit when duplication is successful is , agent has an incentive to deviate when . When is even: , and when is odd: . Thus, Algorithm 2 is an equilibrium for Knowledge Sharing when agents have knowledge such that , and for any algorithm for Knowledge Sharing there exist such that there is no equilibrium when agents have knowledge. This proves Theorem 5.3.
∎
b.2 Coloring is bound
Here we prove Theorem 5.5.
b.3 Leader Election is bound
Here we prove Theorem 5.6.
Proof.
Recall that any Leader Election algorithm must be fair [4], i.e., every agent must have equal probability of being elected leader for the algorithm to be an equilibrium.
Given , the actual number of agents is either or , decided by some distribution unknown to the agents. If an agent follows the protocol, the probability of being elected is . If it duplicates itself once, the probability that a duplicate is elected is , but if the protocol fails and the utility is . Thus , i.e., no agent has an incentive to deviate.
Given , then is in . If an agent follows the protocol, its expected utility is still . If it duplicates itself once, the probability that a duplicate is elected is still , however only if the protocol fails. Thus, for any . So the agent has an incentive to deviate.
Thus for the algorithm presented in [4] is an equilibrium, while for no algorithm for Leader Election is an equilibrium, since any algorithm must be fair. ∎
b.4 Ring Partition is Unbounded
Here we prove Theorem 5.7.
Proof.
It is clear that an agent will not duplicate itself to change the parity of the graph, as that will necessarily cause an erroneous output. So it is enough to show an algorithm that is an equilibrium for even graphs, when agents have no knowledge about . Consider the following algorithm:

Either one arbitrary agent wakes up or we run a WakeUp subroutine and then Leader Election [5]. Since the initiator (leader) has no effect on the output, both are an equilibrium in this application.

The initiating agent sends a token which alternately marks agents by 0 or 1 and also defines the direction of communication in the ring.

Upon reception of the token with value , an agent does one of the following:

If , send predecessor (denoted ) a random bit .

Else, if , wait for 1 round and send successor (denoted ) a random bit .


Upon reception of the neighbor’s bit (one round after receiving the token), set

As the token arrives back at the initiator, it checks the token’s parity. For even rings, it must be the opposite value from the value it originally sent.
This algorithm divides every pair of agents into one with output and one with output , as the token value is different, thus achieving a partition.
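One way to read the steps above is the following sketch, which we give under our own naming and with the token traversal collapsed into a loop: the token pairs consecutive agents, each pair shares one fresh random bit, and the two members of a pair output opposite values.

```python
import random

def ring_partition(n: int):
    """Sketch of the even-ring partition: the token alternately marks
    agents 0/1; each adjacent (0,1) pair shares a random bit r and the
    pair outputs (r, 1 - r), so exactly one member of each pair
    outputs 1, and which one is uniformly random."""
    assert n % 2 == 0, "the ring must be even"
    out = [None] * n
    for i in range(0, n, 2):        # pairs formed by the alternating token
        r = random.randrange(2)     # bit drawn jointly by the pair
        out[i], out[i + 1] = r, 1 - r
    return out
```

Within each pair, the output is a symmetric coin flip between the two members, so neither agent can unilaterally bias which of them gets its preferred value.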
We show that it is also an equilibrium. Assume an agent deviates at some round . If is in the WakeUp or Leader Election phase in order to be the initiator, it cannot improve its utility, since choosing the starting value of the token, choosing the direction, or being first cannot increase the agent’s utility. If it is a deviation while the token traverses other parts of the graph, any message sends will eventually be discovered, as the real token has either already passed or will eventually pass through the “cheated” agent. If changes the value of the token, a randomization between two agents will eventually be out of sync at the end of the token traversal, and the initiator will also recognize that the ring does not contain an even number of agents. During the exchange of , the result is independent of ’s choice of value for . So there is no round in which can deviate from the protocol. ∎
b.5 Orientation is Unbounded
Here we prove Theorem 5.8.
Proof.
We show a simple algorithm and prove that it is an equilibrium without any apriori knowledge of or bounds on . Assuming all agents start at the same round (otherwise run WakeUp), consider the following algorithm:

Each agent simultaneously sends a random number and its on each of its edges.

For each edge, each agent XORs the bit it sent and the bit it received over that edge.

If the result is 1, the edge is directed towards the agent with the higher , otherwise it is directed towards the lower .

Every agent outputs the list of pairs with and direction for each of its neighbors.
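The steps above can be sketched as follows; this is an illustration under our own naming, with the simultaneous bit exchange on each edge collapsed into two local draws:

```python
import random

def orient_edges(edges, ids):
    """Sketch of the orientation protocol: for each edge, the two
    endpoints exchange random bits in the same round; the XOR of the
    two bits decides whether the edge points to the higher- or
    lower-id endpoint."""
    orientation = {}
    for (u, v) in edges:
        bu = random.randrange(2)   # bit sent by u on this edge
        bv = random.randrange(2)   # bit sent by v on this edge
        hi, lo = (u, v) if ids[u] > ids[v] else (v, u)
        # XOR == 1 -> directed toward the higher id, else the lower
        orientation[(u, v)] = hi if bu ^ bv == 1 else lo
    return orientation
```

Each edge is oriented by an independent fair coin, so no single endpoint can bias the direction by its choice of bit, which is the equilibrium argument used in the proof.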
Since an agent’s utility is defined over its personal output, Solution Preference compels agents to output a correct set of pairs, so a cheater may only influence the direction of the edges. Since duplication does not create any new edges between the cheater and the original graph, and the orientation is decided over each edge independently, it does not affect any agent’s utility. Beyond that, randomizing a single bit over an edge in the same round is an equilibrium. So the algorithm is an equilibrium, and Orientation is unbounded. ∎