1 Introduction
Searching in groups is ubiquitous in multiple contexts, including in the biological world, in human populations as well as on the internet [10, 13, 18]. In many cases there is some prior on the distribution of the searched target. Moreover, when the space is large, each searcher typically needs to inspect multiple possibilities, which in some circumstances can only be done sequentially. This paper introduces a game theoretic perspective to such multiround treasure hunt searches, generalizing a basic collaborative Bayesian framework previously introduced in [10].
Consider the case that a treasure is placed in one of $M$ boxes according to a known distribution $p$ and that $k$ searchers are searching for it in parallel during $T$ rounds, each specifying a box to visit in each round. Assume w.l.o.g. that the boxes are ordered such that lower index boxes have higher probability to host the treasure, i.e., $p_1 \ge p_2 \ge \cdots \ge p_M$. We evaluate the group performance by the success probability, that is, the probability that the treasure is found by at least one searcher.
If coordination is allowed, letting searcher $i$ visit box $(t-1)k + i$ at round $t$ will maximize the success probability. However, as simple as this algorithm is, it is very sensitive to faults of all sorts. For example, if an adversary that knows where the treasure is can crash a searcher before the search starts (i.e., prevent it from searching), then it can reduce the success probability to zero.
The authors of [10] suggested the use of identical noncoordinating algorithms. In such scenarios all processors act independently, using no communication or coordination, executing the same probabilistic algorithm, differing only by the results of their coin flips. As argued in [10], in addition to their economic use of communication, identical noncoordinating algorithms enjoy inherent robustness to different kinds of faults. For example, assume that there are $k + k'$ searchers, and that an adversary can fail up to $k'$ searchers. Letting all searchers run the best noncoordinating algorithm for $k$ searchers guarantees that regardless of which searchers fail, the overall search efficiency is at least as good as the noncoordinating one for $k$ players. Of course, since $k'$ players might fail, any solution can only hope to achieve the best performance of $k$ players. As it applies to the group performance, we term this property group robustness. Among the main results in [10] is identifying a noncoordinating algorithm, denoted $A^\star$, whose expected running time is minimal among noncoordinating algorithms. Moreover, for every given $T$, if this algorithm runs for $T$ rounds, it also maximizes the success probability.
The current paper studies the game theoretic version of this multiround search problem. (We concentrate on the normal form version, in which players do not receive any feedback during the search, except when the treasure is found, in which case the game ends. In particular, we assume that players cannot communicate with each other.) The setting of [10] assumes that the searchers adhere fully to the instructions of a central entity. In contrast, in a game theoretic context, searchers are self-interested and one needs to incentivize them to behave as desired, e.g., by awarding those players that find the treasure first. Choosing a good rewarding policy now becomes a problem in algorithmic mechanism design [27]. Typically, a reward policy is evaluated by its price of anarchy (PoA), namely, the ratio between the performances of the best collaborative algorithm and the worst equilibrium [22]. Aiming both to accelerate the convergence to an equilibrium and to obtain a preferable one, the announcement of the reward policy can be accompanied by a proposition for players to play particular strategies that form a profile at equilibrium.
This paper highlights the benefits of suggesting (noncoordinating) symmetric equilibria in such scenarios, that is, to suggest the same noncoordinating strategy to be used by all players, such that the resulting profile is at equilibrium. This is of course relevant assuming that the price of symmetric stability (PoSS), namely, the ratio between the performances of the best collaborative algorithm and the best symmetric equilibrium, is low. Besides the obvious reasons of fairness and simplicity, from the perspective of a central entity who is interested in the overall success probability, we obtain the group robustness property mentioned above by suggesting that the players play according to the strategy that is a symmetric equilibrium for $k$ players. Obviously, this group robustness is valid only provided that the players indeed play according to the suggested strategy. However, the suggested strategy is guaranteed to be an equilibrium only for $k$ players, while in fact the adversary may keep some of the extra players alive. Interestingly, however, in many cases, a symmetric equilibrium for $k$ players also serves as an approximate equilibrium for $k'$ players, as long as $k' \le k$ is not much smaller than $k$. As we show, this equilibrium robustness property is rather general, holding for a class of games that we call monotonously scalable games.
1.1 The Collaborative Search Game
A treasure is placed in one of $M$ boxes according to a known distribution $p$ and $k$ players are searching for it in parallel during $T$ rounds. Assume w.l.o.g. that $p_x \ge p_{x+1}$ for every $x$ and that $p_x > 0$ for all $x$.
Strategies.
An execution of $T$ rounds is a sequence of box visitations $(b_1, \ldots, b_T)$, one box $b_t$ for each round $t$. We assume that a player visiting a box has no information on whether other players have already visited that box or are currently visiting it. Hence, a strategy of a player is a probability distribution over the space of executions of $T$ rounds. Note that the probability of visiting a box in a certain round may depend on the boxes visited by the player until this round, but not on the actions of other players. A strategy is nonredundant if at any given round it always checks a box it hasn't checked before (as long as there are such boxes). A profile is a collection of strategies, one for each player. Special attention will be devoted to symmetric profiles, which correspond to the cases in which all players play the same strategy (note that their actual executions may be different, due to different probabilistic choices).
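To make the notion of a strategy concrete, the following sketch samples executions of a simple nonredundant strategy (uniform over unvisited boxes) and estimates the frequency with which each box is first visited at each round. The sizes $M$ and $T$ and all names are our own illustrative choices, not from the paper.

```python
import random
from collections import Counter

M, T = 4, 3  # illustrative sizes (our choice, not from the text)

def sample_execution(rng):
    # One execution of the uniform nonredundant strategy:
    # at every round, visit a uniformly random box not visited yet.
    boxes = list(range(M))
    rng.shuffle(boxes)
    return tuple(boxes[:T])

rng = random.Random(0)
trials = 200_000
counts = Counter()
for _ in range(trials):
    for t, x in enumerate(sample_execution(rng)):
        counts[(x, t)] += 1

# emp[(x, t)] estimates the probability that box x is first visited
# at round t (0-indexed); for this strategy every entry is 1/M.
emp = {key: c / trials for key, c in counts.items()}
```

Since every round visits exactly one box, the per-round frequencies sum to one, which gives a quick sanity check on the estimate.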
Probability Matrix.
While slightly abusing notation, we shall associate each strategy $s$ with its probability matrix $N$, where $N(x,t)$ is the probability that strategy $s$ visits box $x$ for the first time at round $t$. We also denote the probability that $s$ does not visit $x$ by, and including, time $t$ by $\sigma(x,t)$. That is, $\sigma(x,t) = 1 - \sum_{t' \le t} N(x,t')$, and $\sigma(x,0) = 1$. For convenience we denote by $\mathbb{1}_{x,t}$ the matrix of all zeros except a one at entry $(x,t)$. Its dimensions will be clear from context.
Group Performance.
A profile is evaluated by its success probability, namely, the probability that at least one player finds the treasure by time $T$. Formally, let $P = (N_1, \ldots, N_k)$ be a profile. Then,
$$\mathrm{success}(P) = \sum_x p_x \Big(1 - \prod_{i=1}^{k} \sigma_i(x,T)\Big).$$
The expected running time in the symmetric case, which is $\sum_{t \ge 0} \sum_x p_x\, \sigma(x,t)^k$, was studied in [10]. That paper identified a strategy, denoted $A^\star$, that minimizes this quantity. In fact, it does so by minimizing the term $\sum_x p_x\, \sigma(x,t)^k$ for each $t$ separately. Note that minimizing the $t = T$ term is exactly the same as maximizing the success probability. Thus, restricted to the case where all searchers use the same strategy, $A^\star$ simultaneously optimizes the success probability as well as the expected running time. For completeness, a description of $A^\star$ is provided below.
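The success probability of a symmetric profile can be computed directly from a probability matrix via the survival quantities $\sigma(x,t)$. The sketch below (function and variable names are ours) assumes the formula $\mathrm{success} = \sum_x p_x (1 - \sigma(x,T)^k)$ from the discussion above.

```python
import numpy as np

def survival(N):
    # sigma[x, t] = probability the strategy has not visited box x
    # within the first t+1 rounds (t is 0-indexed here).
    return 1.0 - np.cumsum(N, axis=1)

def success_probability(p, N, k):
    # Probability that at least one of k independent players, all using
    # the strategy with first-visit matrix N, finds the treasure in time.
    sigma_T = survival(N)[:, -1]          # survival at the horizon T
    return float(np.sum(p * (1.0 - sigma_T ** k)))

# Toy numbers (ours): 3 boxes, 2 rounds, each box first-visited
# with probability 1/3 in each round.
p = np.array([0.5, 0.3, 0.2])
N = np.full((3, 2), 1 / 3)
```

With these numbers $\sigma(x,T) = 1/3$ for every box, so two players succeed with probability $1 - (1/3)^2 = 8/9$, and more players can only help.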
Algorithm $A^\star$.
Congestion Policies.
A natural way to incentivize players is by rewarding those players that find the treasure before others. A congestion policy $f$ is a function specifying the reward $f(\ell)$ that a player receives if it is one of $\ell$ players that (simultaneously) find the treasure for the first time. We assume that $f(1) = 1$, and that $f$ is nonnegative and nonincreasing. Due to the fact that the constant policy $f \equiv 1$ is rather degenerate, we henceforth assume that $f \not\equiv 1$. We shall give special attention to the following policies.

The sharing policy is defined by $f(\ell) = 1/\ell$, namely, the treasure is shared equally among all those who find it first.

The exclusive policy is defined by $f(1) = 1$, and $f(\ell) = 0$ for $\ell > 1$, namely, the treasure is given to the first one that finds it exclusively; if more than one discover it, they get nothing. (In the one round game, the exclusive policy yields a utility for a player that equals its marginal contribution to the social welfare, i.e., the success probability [29]. However, this is not the case in the multiround game.)
A configuration is a triplet $(f, k, p)$, where $f$ is a congestion policy, $k$ is a positive integer, and $p$ is a positive nonincreasing probability distribution on $M$ boxes.
Values, Utilities and Equilibria.
Let $(f, k, p)$ be a configuration. The value $v_P(x,t)$ of box $x$ at round $t$ when playing against a profile $P$ is the expected gain from visiting $x$ at round $t$. Formally,
$$v_P(x,t) = p_x \cdot \mathbb{E}\Big[\mathbb{1}\{\text{no player of } P \text{ visits } x \text{ before round } t\} \cdot f\big(1 + |\{i \in P : i \text{ visits } x \text{ at round } t\}|\big)\Big].$$
The utility of player $i$ in round $t$ and the utility of $i$ are defined as:
(1) $U_i(t) = \sum_x N_i(x,t)\, v_{P_{-i}}(x,t)$ and $U_i = \sum_{t=1}^{T} U_i(t)$,
where $P_{-i}$ is the set of players of $P$ excluding $i$. Here are some specific cases we are interested in:

For symmetric profiles, $v_s(x,t)$ denotes the value when playing against $k-1$ players playing $s$. Then
$$v_s(x,t) = p_x \sum_{j=0}^{k-1} \binom{k-1}{j}\, N(x,t)^j\, \sigma(x,t)^{k-1-j}\, f(j+1).$$

For the exclusive policy, $v_P(x,t) = p_x \prod_{i \in P} \sigma_i(x,t)$.

For the exclusive policy in symmetric profiles, $v_s(x,t) = p_x\, \sigma(x,t)^{k-1}$.
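The symmetric value can be written out mechanically. The sketch below implements our reading of the formula above, in which $j$ counts the opponents whose first visit to $x$ happens exactly at $t$, and checks that the exclusive policy collapses it to the closed form $p_x\,\sigma(x,t)^{k-1}$; treat the formula and all names as assumptions of this sketch.

```python
import math

def value_symmetric(p_x, N_xt, sigma_xt, k, f):
    # Value of box x at round t against k-1 players all playing s:
    # p_x * sum_j C(k-1, j) * N(x,t)^j * sigma(x,t)^(k-1-j) * f(j+1),
    # where sigma(x,t) is the probability an opponent has not visited
    # x by (and including) time t, and N(x,t) the probability its
    # first visit to x is exactly at t.
    return p_x * sum(
        math.comb(k - 1, j) * N_xt ** j * sigma_xt ** (k - 1 - j) * f(j + 1)
        for j in range(k)
    )

exclusive = lambda l: 1.0 if l == 1 else 0.0
sharing = lambda l: 1.0 / l

# arbitrary illustrative numbers
p_x, N_xt, sigma_xt, k = 0.2, 0.1, 0.6, 4
```

Note that the degenerate policy $f \equiv 1$ collapses, by the binomial theorem, to $p_x\,(N(x,t)+\sigma(x,t))^{k-1} = p_x\,\sigma(x,t-1)^{k-1}$, which is a convenient consistency check.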
A profile $P$ is a Nash equilibrium under a configuration if for any player $i$ and any other strategy $s'$, $U_i(P) \ge U_i(s'; P_{-i})$. Similarly, a strategy $s$ is called a symmetric equilibrium if the profile consisting of all players playing according to $s$ is an equilibrium. We also use the notion of approximate equilibrium. For $\epsilon > 0$, we say a profile $P$ is an $\epsilon$-equilibrium if for every player $i$ and for every strategy $s'$, $U_i(s'; P_{-i}) \le (1+\epsilon)\, U_i(P)$.
A Game of DoublySubstochastic Matrices.
Both the expression for the success probability and the one for the utility depend solely on the probability matrices associated with the strategies in question. Hence we view all strategies sharing the same matrix as equivalent. Note that a matrix does not necessarily correspond to a unique strategy, as illustrated by the following equivalent strategies, for which $N(x,t) = 1/M$ for every $x$ and $t \le M$, and $N(x,t) = 0$ thereafter:

Strategy $s_1$ chooses uniformly at every round one of the boxes it didn't choose yet.

Strategy $s_2$ chooses once a uniformly random box $x_0$. Then, at round $t$ it visits box $x_0 + t \pmod{M}$.
Matrices are much simpler to handle than strategies, and so we would rather think of our game as a game of probability matrices than a game of strategies. For this we need to characterize which matrices are indeed probability matrices of strategies. Clearly, a probability matrix is nonnegative. Also, by their definition, each row and each column sums to at most 1. Such a matrix is called doublysubstochastic. In Appendix B we prove the converse, i.e., that every doublysubstochastic matrix is a probability matrix of some strategy. Furthermore, this strategy is implementable as a polynomial algorithm. We will therefore view our game as a game of doublysubstochastic matrices.
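A quick sketch of the membership test (function and variable names are ours): a nonnegative matrix whose box-rows and round-columns all sum to at most one is doublysubstochastic, and the uniform matrix of the two equivalent strategies above satisfies this.

```python
import numpy as np

def is_doubly_substochastic(N, tol=1e-9):
    # Nonnegative, every row (box) and every column (round) sums to <= 1.
    return bool((N >= -tol).all()
                and (N.sum(axis=0) <= 1 + tol).all()
                and (N.sum(axis=1) <= 1 + tol).all())

M = 4
uniform = np.full((M, M), 1 / M)   # matrix of the two equivalent strategies above

too_heavy = np.zeros((M, M))
too_heavy[0, :] = 1.0              # total probability M on box 0: not a valid matrix
```

Note that a doublysubstochastic matrix need not have rows or columns summing to exactly one; scaling the uniform matrix down keeps it valid.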
Greediness.
Informally, a strategy is greedy at a round if its utility in this round is the maximum possible in this round. Formally, given a profile $P$ and some strategy $s$, we say that $s$ is greedy w.r.t. $P$ at time $t$ if for any strategy $s'$ such that $N_{s'}(x,t') = N_s(x,t')$ for every $x$ and $t' < t$, we have $U_{s'}(t) \le U_s(t)$. We say $s$ is greedy w.r.t. $P$ if it is greedy w.r.t. $P$ at each time $t$. A strategy $s$ is called selfgreedy (or sgreedy for short) if it is greedy w.r.t. the profile with $k-1$ players playing $s$.
Evaluating Policies.
Let $(f,k,p)$ be a configuration. Denote by $\mathcal{E}$ the set of $T$-round equilibria, and by $\mathcal{E}_{\mathrm{sym}}$ the set of $T$-round symmetric equilibria. Let $\mathcal{P}$ be the set of all profiles of $T$-round strategies. For a strategy $s$, let $s^k$ denote the profile of $k$ copies of $s$. We shall be interested in the following measures.

The Price of Anarchy (PoA) is $\max_{P \in \mathcal{P}} \mathrm{success}(P) \,/\, \min_{P \in \mathcal{E}} \mathrm{success}(P)$.

The Price of Symmetric Stability (PoSS) is $\max_{P \in \mathcal{P}} \mathrm{success}(P) \,/\, \max_{s \in \mathcal{E}_{\mathrm{sym}}} \mathrm{success}(s^k)$.

The Price of Symmetric Anarchy (PoSA) is $\max_{P \in \mathcal{P}} \mathrm{success}(P) \,/\, \min_{s \in \mathcal{E}_{\mathrm{sym}}} \mathrm{success}(s^k)$.
On the Difficulty of the MultiRound Game.
The multiround setting poses several challenges that do not exist in the single round game. An important one is the fact that, in contrast to the single round game, the multiround game is not a potential game. Indeed, being a potential game has several implications, a significant one being that such a game always has a pure equilibrium. However, we show that multiround games do not always have pure equilibria, and hence they are not potential games. Another important difference is that for policies that incur high levels of competition (such as the exclusive policy), profiles that maximize the success probability are at equilibrium in the single round case, whereas they may not be in the multiround game. See Appendix A for more details.
1.2 Our Results
Equilibrium Robustness.
We first provide a simple, yet general, robustness result, that holds for symmetric (approximate) equilibria in a family of games, termed monotonously scalable. Informally, these are games in which the sum of utilities of players can only increase when more players are added, yet for each player, its individual utility can only decrease. Our search game with the sharing policy is one such example.
Theorem 1.
Consider a symmetric monotonously scalable game. If $s$ is a symmetric $\epsilon$-equilibrium for $k$ players, then it is an $\epsilon'$-equilibrium when played by $k' \le k$ players, where $1 + \epsilon' = (1+\epsilon)\frac{k}{k'}$.
Theorem 1 is applicable in fault tolerant contexts. Consider a monotonously scalable game with $k$ players out of which at most $t$ may fail. Let $s$ be a symmetric (approximate) equilibrium designed for $k$ players and assume that its social utility is high compared to the optimal profile with $k - t$ players. The theorem implies that if the players play $s$, then regardless of which players fail (or decline to participate), the incentive to switch strategy would be very small, as long as $t$ is small compared to $k$. Moreover, due to symmetry, if the social utility of the game is monotone, then the social utility of $s$ when played with $k - t$ players is guaranteed when playing with more. Hence, in such cases we obtain both group robustness and equilibrium robustness.
General Congestion Policies.
Coming back to our search game, we consider general policies, focusing on symmetric profiles, and specifically on the properties of sgreedy strategies.
Theorem 2.
For every policy $f$ there exists a nonredundant sgreedy strategy. Moreover, all such strategies are equivalent and are symmetric equilibria.
In particular, this shows that a nonredundant sgreedy strategy is actually a symmetric equilibrium. When $f(k) > 0$, we also get that this is the only symmetric equilibrium (up to equivalence):
Claim 3.
For any policy such that $f(k) > 0$, all symmetric equilibria are equivalent.
Theorem 2 is nonconstructive because it requires calculating the inverse of nontrivial functions. Therefore, we resort to an approximate solution.
Theorem 4.
Given $\epsilon > 0$, there exists an algorithm that takes as input a configuration and produces a symmetric $\epsilon$-equilibrium. The algorithm runs in time polynomial in $k$, $M$, $T$, $1/\epsilon$, and the representation sizes of $p$ and $f$.
The Exclusive Policy.
Recall that the exclusive policy is defined by $f(1) = 1$ and $f(\ell) = 0$ for every $\ell > 1$. We show that $A^\star$ is a nonredundant and sgreedy strategy under the exclusive policy. Hence, Theorem 2 implies the following.
Theorem 5.
Under the exclusive policy, Strategy $A^\star$ of [10] is a symmetric equilibrium.
Claim 3, together with the fact (established in [10]) that $A^\star$ has the highest success probability among symmetric profiles, implies that both the PoSS and the PoSA of the exclusive policy are optimal (and equal) for any $k$ and $T$, when compared to any other policy. The next theorem considers general equilibria.
Theorem 6.
Consider the exclusive policy. For any profile $P$ at equilibrium and any symmetric profile $Q$, $\mathrm{success}(P) \ge \mathrm{success}(Q)$.
Observe that, as $A^\star$ is a symmetric equilibrium, Theorem 6 provides an alternative proof for the optimality of $A^\star$ among symmetric profiles (established in [10]). Interestingly, this alternative proof is based on game theoretic considerations, which is, to the best of our knowledge, quite rare in optimality proofs.
Corollary 7.
For any $k$ and $T$, the PoA of the exclusive policy is attained by the profile in which all players play $A^\star$. Moreover, for any policy $f$, $\mathrm{PoA}(f) \ge \mathrm{PoA}(\text{exclusive})$.
At first glance the effectiveness of the exclusive policy might not seem so surprising. Indeed, it seems natural that high levels of competition would incentivize players to disperse. However, it is important to note that the exclusive policy is not extreme in this sense, as one may allow congestion policies to also take negative values upon collisions. Moreover, one could potentially define more complex kinds of policies, for example, policies that depend on time and reward early finds more. However, the fact that $A^\star$ is optimal among all symmetric profiles, combined with the fact that any symmetric policy has a symmetric equilibrium [26], implies that no symmetric reward mechanism can improve either the PoSS, the PoSA, or the PoA of the exclusive policy.
We proceed to show a tight upper bound on the PoA of the exclusive policy. Note that as $k$ goes to infinity the bound converges to $\frac{e}{e-1}$.
Theorem 8.
For every $k$ and $T$, $\mathrm{PoA} \le \left(1 - \left(1 - \frac{1}{k}\right)^{k}\right)^{-1}$.
Concluding the results on the exclusive policy, we study the robustness of $A^\star$ in Appendices E.3 and E.4. Let $A^\star_k$ denote algorithm $A^\star$ when set to work for $k$ players. Unfortunately, for any $\epsilon > 0$, there are cases where $A^\star_k$ is not an $\epsilon$-equilibrium even when played by $k - 1$ players. However, as indicated below, $A^\star_k$ is robust to failures under reasonable assumptions regarding the distribution $p$.
Theorem 9.
Under a suitable flatness condition on the distribution $p$ (see Appendix E.4), $A^\star_k$ is an $\epsilon$-equilibrium when played by $k' \le k$ players.
The Sharing Policy.
Another important policy to consider is the sharing policy. This policy naturally arises in some circumstances, and may be considered as a less harsh alternative to the exclusive one. Although not optimal, it follows from Vetta [33] that its PoA is at most 2 (see Appendix F). Furthermore, as this policy yields a monotonously scalable game, a symmetric equilibrium under it is also robust. Therefore, the existence of a symmetric profile which is both robust and has a reasonable success probability is guaranteed.
Unfortunately, we did not manage to find a polynomial algorithm that generates an exact symmetric equilibrium for this policy. However, Theorem 4 gives a symmetric $\epsilon$-equilibrium in polynomial time for any $\epsilon > 0$. This strategy is also robust thanks to Theorem 1. Moreover, the proof in [33] regarding the PoA can be extended to hold for approximate equilibria. In particular, if $P$ is some $\epsilon$-equilibrium in the sharing policy, then its success probability is within a factor of $2(1+\epsilon)$ of the optimum (see Appendix F).
1.3 Related Works
Fault tolerance has been a major topic in distributed computing for several decades, and in recent years more attention has been given to these concepts in game theory
[15, 16]. For example, Gradwohl and Reingold studied conditions under which games are robust to faults, showing that equilibria in anonymous games are fault tolerant if they are "mixed enough" [14]. Restricted to a single round, the search problem becomes a coverage problem, which has been investigated in several papers. For example, Collet and Korman [32] studied (one round) coverage while restricting attention to symmetric profiles only. The main result therein is that the exclusive policy yields the best coverage among symmetric profiles. Gairing [12] also considered the single round setting, but studied the optimal PoA of a more general family of games called covering games (see also [29, 30]). Motivated by policies for research grants, Kleinberg and Oren [20] considered a one round model similar to that in [32]. Their focus, however, was on pure strategies only. The aforementioned papers give a good understanding of coverage games in the single round setting. As mentioned, however, the multiround setting studied here is substantially more complex than the single round setting.
The area of "incentivizing exploration" also studies the tradeoff between exploration, exploitation and incentives [24, 11, 28, 23]. This area often focuses on different variants of the Multi-Armed Bandit problem. The settings of selfish routing, job scheduling, and congestion games [25, 31] all bear similarities to the search game studied here; however, the social welfare measures of success probability and running time are very different from the measures studied in those frameworks, such as makespan or latency [1, 4, 27, 2].
2 Robustness in Symmetric Monotonously Scalable Games
Consider a symmetric game where the number of players is not fixed. Let $U_k(s'; s)$ denote the utility that a player playing $s'$ gets if the other $k-1$ players play according to $s$, and let $W_k(s)$ denote the sum of utilities of all players when all $k$ of them play $s$. We say that such a game is monotonously scalable if:

Adding more players can only increase the sum of utilities, i.e., if $k' \le k$ then $W_{k'}(s) \le W_k(s)$.

Adding more players can only decrease the individual utilities, i.e., if $k' \le k$ then for all $s'$, $U_{k'}(s'; s) \ge U_k(s'; s)$.
Theorem 1 (restated). Consider a symmetric monotonously scalable game. If $s$ is a symmetric $\epsilon$-equilibrium for $k$ players, then it is an $\epsilon'$-equilibrium when played by $k' \le k$ players, where $1 + \epsilon' = (1+\epsilon)\frac{k}{k'}$.
Proof.
On the one hand, by symmetry, the utility of a player when $k'$ players play $s$ is $W_{k'}(s)/k'$, where we also use the fact that the total utility $W$ is nondecreasing in the number of players. On the other hand, if $s'$ is some other strategy, the utility of deviating to $s'$ against $k'-1$ players playing $s$ can be bounded from above: the first inequality in the bound is because individual utilities are nonincreasing in the number of players, and the second is because $s$ is an $\epsilon$-equilibrium for $k$ players. Therefore, what a player can gain by switching from $s$ to $s'$ is at most a multiplicative factor of $(1+\epsilon)\frac{k}{k'}$. ∎
An example of such a game is our setting with the sharing policy. Note, however, that our game with the exclusive policy does not satisfy the first property, as adding more players can actually decrease the sum of utilities. Another example is a generalization known as covering games [12]. This sort of game is the same as the single round version of our game, except that each player chooses not necessarily one element, but a set of elements, from a prescribed set of sets. Again, to be a monotonously scalable game, the congestion policy should be the sharing policy. Note that one may consider a multiround version of these games, which will be monotonously scalable as well.
3 General Policies
The proofs of this section appear in Appendix C.
3.1 NonRedundancy and Monotonicity
A doublysubstochastic matrix $N$ is called nonredundant at time $t$ if $\sum_x N(x,t) = 1$ whenever $t \le M$. It is nonredundant if it is nonredundant at every time $t$. In the algorithmic view, as $\sum_x N(x,t)$ is the probability that a new box is opened at time $t$, a strategy's matrix is nonredundant iff the strategy never checks a box twice, unless it already checked all boxes.
Lemma 10.
If a profile is at equilibrium and $k < M$, then every player is nonredundant.
We will later see that in the symmetric case the condition in the lemma is not needed. However, the following example shows it is necessary in general. Let $k = M$, let $T \ge 2$, and assume that for every $i$, player $i$ goes to box $i$ in every round. Under the exclusive policy, this profile is an equilibrium, whereas each player is clearly redundant. The following monotonicity lemmas hold under any congestion policy $f$.
Lemma 11.
Consider two doublysubstochastic matrices $N$ and $N'$. If $N'(x,t) \le N(x,t)$, and $N'(y,t) = N(y,t)$ for all $y \ne x$, then $v_{N'}(x,t) \ge v_{N}(x,t)$.
Lemma 12.
Let $N$ be doublysubstochastic. For every $x$ and $t$, $v(x,t) \ge v(x,t+1) \ge 0$. Moreover, if $\sigma(x,t) > 0$ then the first inequality is strict.
Using the above, we prove a stronger result than Lemma 10 for the symmetric case:
Lemma 13.
If $s$ is a symmetric equilibrium then it is nonredundant.
Proof.
Because of Lemma 10 it is sufficient to consider only the case where $k \ge M$. Let $N$ be the probability matrix of $s$, and assume by contradiction that $s$ is redundant. Thus there is some $t \le M$ where $\sum_x N(x,t) < 1$. Therefore, there is some box $x$ such that $\sigma(x,t) > 0$, and so the value $v(x,t)$ is strictly positive by Lemma 12. As the round $t$ entries do not sum to 1, define $N' = N + \epsilon\,\mathbb{1}_{x,t}$. Taking $\epsilon$ small enough, $N'$ is doublysubstochastic. Also, $U_{N'} - U_{N} \ge \epsilon\, v(x,t) > 0$, contradicting the fact that $s$ is an equilibrium. ∎
3.2 Greedy Strategies
Lemma 14.
A nonredundant strategy $s$ is greedy w.r.t. $P$ at time $t$ iff for every pair of boxes $x$ and $y$, if $N_s(x,t) > 0$ and $\sigma_s(y,t-1) > 0$ then $v(x,t) \ge v(y,t)$.
The lemma above gives a useful equivalent definition of greediness. We can then prove: Theorem 2 (restated). For every policy there exists a nonredundant sgreedy strategy; moreover, all such strategies are equivalent and are symmetric equilibria.
Proof.
Proving the existence of a strategy that is nonredundant and sgreedy is deferred to the appendix (see Lemma 24). We prove here that such a strategy $s$ is an equilibrium. Consider a strategy $s'$. We compare the utility of $s'$ to that of $s$ when both play against $k-1$ players playing $s$. By nonredundancy, all of the values $v(x,t)$ are $0$ when $t > M$, and so we can assume $T \le M$.
Denote $v^{\max}(t) = \max_x v(x,t)$. Since the utility of a strategy in any round $t$ is a convex combination of the values $v(x,t)$, we have $U_{s'}(t) \le v^{\max}(t)$. We say that $s$ fills box $x$ at round $t$ if $\sigma(x,t) = 0$ and $\sigma(x,t-1) > 0$. The following four claims hold for any round $t$:

If $s$ does not fill any box at round $t$ then $U_s(t) = v^{\max}(t)$. This is because $U_s(t)$ is a convex combination of $v(x,t)$ for the boxes where $N(x,t) > 0$, which, by the characterization of greediness in Lemma 14, all have the same maximal value at time $t$.

$U_s(1) = v^{\max}(1)$. Why? If no box is filled in round 1, then Item 1 applies. Otherwise, for some box $x$, $N(x,1) = 1$, and all other boxes have $N(y,1) = 0$. The result follows again by Lemma 14.

For any $t' > t$, $U_s(t) \ge v^{\max}(t')$. We prove this by showing that for every $x$, $U_s(t) \ge v(x,t')$. If $v(x,t') = 0$, then the claim is clear. Otherwise, $\sigma(x,t') > 0$ or $N(x,t') > 0$ or both. Either way, $\sigma(x,t-1) > 0$. Therefore, as $s$ is sgreedy, for every $y$ such that $N(y,t) > 0$, $v(y,t) \ge v(x,t) \ge v(x,t')$. The last inequality follows from monotonicity, i.e., Lemma 12. As $U_s(t)$ is a convex combination of such $v(y,t)$'s we conclude.

If $s$ fills box $x$ at time $t$ then for any $t' \le t$, $U_s(t') \ge v(x,t)$. To see why, first note that $\sigma(x,t'-1) > 0$, and so, as $s$ is sgreedy, $U_s(t') \ge v(x,t')$. On the other hand, since $t' \le t$, $v(x,t') \ge v(x,t)$, because of Lemma 12. Combining the above two inequalities gives the result.
Denote by $A$ the set of rounds in which no box is filled by $s$, and let $B$ be the rest of the rounds, except for round 1, which is in neither. Also denote $U_A = \sum_{t \in A} U_s(t)$ and $U_B = \sum_{t \in B} U_s(t)$. Since $U_{s'}(t) \le v^{\max}(t)$ for every $t$, by Items 1, 2 and 3 above, the rounds of $A$ together with round 1 contribute at least as much to the utility of $s$ as they do to that of $s'$.
We conclude by using Items 4 and 2 to show that the same holds for the rounds of $B$. ∎
In Appendix C.3.2 we provide an example showing that in the sharing policy, a nonredundant sgreedy strategy is not necessarily at equilibrium. On the other hand, it is worth noting that for any policy, the existence of a symmetric equilibrium follows from [26], and when $f(k) > 0$ we get a full characterization of such equilibria (Claim 3). Interestingly, this result does not extend to nonsymmetric profiles even for the exclusive policy, as is demonstrated by the following example of a nongreedy nonredundant equilibrium. Consider three players and two rounds over four boxes, with $p_1 = p_2 = p_3 = (1-\epsilon)/3$ and $p_4 = \epsilon$, for some small positive $\epsilon$. Player 1 plays first box 4 and then box 1. Player 2 plays box 2 and then box 3, and player 3 plays box 3 and then box 2. This can be seen to be an equilibrium, yet player 1 is not greedy.
Finally, the proof of Theorem 4, which shows how to construct an approximate equilibrium in polynomial time, is deferred to Appendix D. The proof involves defining notions of approximate greediness and nonredundancy, proving an equivalent of Theorem 2 for them, and then using bounds on the rate of change that the values go through as a function of the matrix entries. This allows us to find an approximately sgreedy and nonredundant matrix in polynomial time, thus giving a polynomial strategy with our use of the Birkhoff–von Neumann theorem (Appendix B).
4 The Exclusive Policy
Missing proofs of this section appear in Appendix E. There, we first prove that under the exclusive policy, $A^\star$ is sgreedy and nonredundant. Hence, Theorem 5 follows from Theorem 2. According to Claim 3, all symmetric equilibria under the exclusive policy are equivalent, and thus equivalent to $A^\star$. Hence, the optimality of $A^\star$ (w.r.t. symmetric profiles) implies that both the PoSA and PoSS of the exclusive policy are optimal, i.e., at most the corresponding measures of any other policy, for every $k$ and $T$.
Our next goal is to establish the PoA of the exclusive policy. For this purpose, we first prove that the success probability of any equilibrium is at least as large as that of any symmetric profile (Theorem 6). Since $A^\star$ is a symmetric equilibrium, its optimality among symmetric profiles follows. Hence, the proof provides an alternative to the one in [10].
Proof.
Let $Q$ be a symmetric profile and $P$ be a profile at equilibrium with respect to the exclusive policy. If $k \ge M$, then the inequality is trivial. According to Lemma 10, we can therefore assume that all players of $P$ are nonredundant and that $k < M$. Denote the total probability of visiting box $x$ in profile $P$ by $q(x) = \sum_{i} \sum_{t} N_i(x,t)$.
We say that box $x$ is high with respect to a profile if $q(x) > kT/M$, low if $q(x) < kT/M$, and saturated if they are equal. The next lemma uses the fact that $Q$ is symmetric.
Lemma 15.
If a profile is nonredundant and contains no high boxes, then all boxes are saturated.
We proceed to prove a weak greediness property for equilibria. Call a box $x$ full for player $i$ if $\sum_t N_i(x,t) = 1$. Also, for readability of what follows, when the profile is clear from the context, we shall denote by $v_{-i}(x,t)$ the value of box $x$ at round $t$ for player $i$, i.e., the value against $P_{-i}$.
Lemma 16.
Consider a profile at equilibrium. For every player $i$ and boxes $x$ and $y$ such that $x$ is not full for $i$, if $N_i(y,t) > 0$ then $v_{-i}(y,t) \ge v_{-i}(x,t)$.
Proof.
Assume otherwise. Define an alternative matrix for player $i$ as $N_i' = N_i - \epsilon\,\mathbb{1}_{y,t} + \epsilon\,\mathbb{1}_{x,t}$. For a sufficiently small $\epsilon > 0$, $N_i'$ is a doublysubstochastic matrix because $x$ is not full for $i$. Then, $U_{N_i'} - U_{N_i} = \epsilon\,(v_{-i}(x,t) - v_{-i}(y,t)) > 0$, in contradiction. ∎
Let us define a process that starts with the profile $P$ and changes it by a sequence of alterations, each shifting some amount of probability between two boxes. Importantly, we make sure that each alteration can only decrease the success probability. Hence, the proof is concluded once we show that the final profile has a success probability that is at least as high as that of $Q$.
We first describe the alterations. Each alteration considers the current profile and changes it as follows. It takes some high box $x$, some low box $y$ (both w.r.t. the current profile), and the maximal $t$ such that there is a player $i$ with $N_i(x,t) > 0$. It defines $N_i' = N_i - \epsilon\,\mathbb{1}_{x,t} + \epsilon\,\mathbb{1}_{y,t}$, and lets the player that played $N_i$ play $N_i'$ instead. Here $\epsilon$ is taken to be the largest value so that $x$ does not become low, $y$ does not become high, and such that $\epsilon \le N_i(x,t)$, so that the entries remain nonnegative. Note that $N_i'$ is doubly substochastic, because taking care that $y$ remains low also means that $y$'s row in $N_i'$ still sums to less than 1.
After this alteration, either $x$ is saturated, $y$ is saturated, or $N_i'(x,t) = 0$. Clearly, in a finite number of alterations a profile is obtained for which either no box is high or no box is low.
Lemma 17.
In the final profile, all boxes are saturated.
Proof.
If the final profile contains no high boxes, then all boxes are saturated by Lemma 15. Otherwise, it contains no low boxes, that is, for every $x$, $q(x) \ge kT/M$. However, $\sum_x q(x) = kT$, and so again all boxes are saturated. ∎
Lastly, the following lemma concludes the proof of Theorem 6.
Lemma 18.
An alteration can only decrease the probability of success. ∎
Since $A^\star$ is a symmetric equilibrium, we immediately get that for every $k$ and $T$, the PoA of the exclusive policy is attained by $A^\star$. Since $A^\star$ has the best success probability among symmetric profiles, and every policy has some symmetric equilibrium, we get Corollary 7. To make this more concrete, Theorem 8 bounds the worst case over all configurations. Note that as $k$ goes to infinity the PoA bound converges to $\frac{e}{e-1}$.
5 Future Work and Open Questions
In [10], the main complexity measure was actually the running time and not the success probability. Our results about equilibria are also relevant to this measure, but the social gain is different. For example, it is still true that $A^\star$ is an equilibrium under the exclusive policy, and that all other symmetric equilibria in the exclusive policy are equivalent to it. As $A^\star$ is optimal among symmetric profiles w.r.t. the running time, the PoSA of the exclusive policy is equal to its PoSS, and it is also the best among all policies. Furthermore, importing from [10], we know that the PoSA (w.r.t. the running time) is about 4. However, showing the analogue of Corollary 7, namely, that the PoA of the exclusive policy is that achieved by $A^\star$, seems difficult, especially because general equilibria are not necessarily greedy. Moreover, the results of Vetta [33] do not apply when analyzing the running time, and finding the PoA, PoSA, and PoSS of the sharing policy, for example, remains open.
Another interesting variant would be to consider feedback during the search. For example, assuming that a player visiting a box knows whether or not other players were there before. Such a feedback can help in the case that the players collaborate [6], but seems to significantly complicate the analysis in the game theoretic variant.
Finally, we would like to encourage game theoretical studies of other frameworks of collaborative search, e.g., [5, 7, 8, 17].
Appendix
Appendix A On the Difficulty of the MultiRound Game.
The MultiRound Game is Not a Potential Game.
It is interesting to note that the single round game is an exact potential game, yet the multiround game is not. Indeed, for the single round, assume that $P$ is a deterministic profile, and let $n_x$ be the number of players that choose box $x$ in $P$. Denote
$$\Phi(P) = \sum_x p_x \sum_{\ell=1}^{n_x} f(\ell).$$
If a player changes strategy and chooses (deterministically) some box $y$ instead of box $x$, then the change in its utility is $p_y f(n_y + 1) - p_x f(n_x)$. This is also the change that $\Phi$ sees. This extends naturally to mixed strategies, and so the single round game is a potential game. This observation has several consequences, in particular that there always exists a pure Nash equilibrium.
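The exact-potential property just described can be checked mechanically. The sketch below (helper names ours) implements $\Phi$ and the single-round utility, and compares their changes under a unilateral deviation.

```python
import random

def potential(profile, p, f):
    # Phi(P) = sum_x p_x * sum_{l=1}^{n_x} f(l), with n_x the players on box x.
    return sum(
        p_x * sum(f(l) for l in range(1, sum(1 for b in profile if b == x) + 1))
        for x, p_x in enumerate(p)
    )

def utility(i, profile, p, f):
    # Single-round utility of player i: p_x * f(n_x) for the box x it chose.
    x = profile[i]
    return p[x] * f(sum(1 for b in profile if b == x))
```

For every deterministic profile and every unilateral deviation, the deviator's change in utility equals the change in $\Phi$, which is exactly what makes $\Phi$ an exact potential.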
On the other hand, the multiround game does not always have a pure equilibrium, and so is not a potential game. For example, the following holds for any policy $f$. Consider the case where $k = 2$, $M = 3$, $T = 2$, and all boxes have $p_x = 1/3$. Note that here $kT = 4 > 3 = M$, and recall that $f \not\equiv 1$. Assume there is some deterministic profile that is at equilibrium, and w.l.o.g. assume player 1's first pick is box 1. There are two cases:

Player 1 picks it again in the second round. Player 2’s strictly best response is to pick box 2 and then 3 (or the other way around). In this case, player 1 would earn more by first picking box 3 (box 2) and then box 1. In contradiction.

Player 1 picks a different box in the second round. W.l.o.g. assume it is box 2. Player 2’s strictly best response is to first take box 2 and then take box 3. However, player 1 would then prefer to start with box 3 and then box 1. Again a contradiction.
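The case analysis above can also be verified by brute force. The sketch below (ours) enumerates all deterministic profiles of the $k=2$, $M=3$, $T=2$ game with uniform $p$ and checks that none of them is a pure equilibrium under the exclusive policy.

```python
from itertools import product

BOXES, T, P = 3, 2, [1 / 3, 1 / 3, 1 / 3]

def utilities(profile, f):
    # profile: one execution (b_1, ..., b_T) per player; congestion
    # policy f is applied to the set of simultaneous first finders.
    us = [0.0] * len(profile)
    for x in range(BOXES):
        # first round in which each player would open box x (None = never)
        firsts = [ex.index(x) if x in ex else None for ex in profile]
        times = [t for t in firsts if t is not None]
        if not times:
            continue
        t0 = min(times)
        finders = [i for i, t in enumerate(firsts) if t == t0]
        for i in finders:
            us[i] += P[x] * f(len(finders))
    return us

def has_pure_equilibrium(f):
    execs = list(product(range(BOXES), repeat=T))
    for prof in product(execs, repeat=2):
        us = utilities(prof, f)
        if all(utilities(prof[:i] + (d,) + prof[i + 1:], f)[i] <= us[i] + 1e-12
               for i in range(2) for d in execs):
            return True
    return False

exclusive = lambda l: 1.0 if l == 1 else 0.0
```

The same enumeration can be rerun with other policies, e.g. sharing, in line with the claim that the phenomenon is not specific to the exclusive policy.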
Optimal Profiles May Not Be at Equilibrium.
A second notable difference concerns profiles that maximize the success probability. In the single-round game, when $k \le M$, the success probability is maximized when each player exclusively visits one of the $k$ most probable boxes with probability 1. Under the exclusive policy, for example, such a profile is also at equilibrium; under a suitable condition on the prior, the same is true for the sharing policy. In the multi-round setting, when $kT \le M$, an optimum is likewise achieved by a deterministic profile, e.g., when player $i$ visits box $(t-1)k + i$ in round $t$. However, this profile would typically not be an equilibrium, even under the exclusive policy. Indeed, when the prior is strictly decreasing, player 2, for example, can gain more by stealing box 1 from player 1 in the first round, then safely taking box 2 in the second round, and continuing from there as scheduled originally. This shows that in the multi-round game, the best equilibrium achieves only a suboptimal success probability.
Appendix B Every Doubly-Substochastic Matrix is a Probability Matrix
A matrix is called doubly-substochastic if it is nonnegative and each of its rows and columns sums to at most 1. A doubly-substochastic matrix is a partial permutation if it consists of only 0 and 1 values. The following is a generalization of the Birkhoff-von Neumann theorem, proved for example in [19].
Theorem 19.
A matrix is doubly-substochastic iff it is a convex combination of partial permutations.
Furthermore, Birkhoff's construction [3] finds this decomposition in polynomial time, and guarantees that the number of terms is at most the number of positive elements of the matrix. The generalization of [19] does not change this claim significantly, as it embeds the doubly-substochastic matrix in a doubly-stochastic one that is at most 4 times larger.
Corollary 20.
If a matrix $Q$ is doubly-substochastic then there is some strategy whose probability matrix is $Q$. Furthermore, this strategy can be found in polynomial time, and is implementable as a polynomial algorithm.
Proof.
First note that the claim is true if $Q$ is a partial permutation. The strategy in this case is deterministic, and may sometimes choose not to visit any box. In the general case, Theorem 19 states that there exist coefficients $\lambda_1, \dots, \lambda_m \ge 0$ with $\sum_j \lambda_j = 1$, and partial permutations $Q_1, \dots, Q_m$, such that $Q = \sum_j \lambda_j Q_j$. As mentioned, each $Q_j$ is the probability matrix of some strategy $s_j$. Define the strategy $s$ as follows: with probability $\lambda_j$, run strategy $s_j$. Then the probability matrix of $s$ is $\sum_j \lambda_j Q_j = Q$, as required. ∎
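For the doubly-stochastic case, Birkhoff's construction can be sketched as follows; the matching routine, function names, and the example matrix are illustrative (the substochastic case would first embed the matrix as in [19]):

```python
def birkhoff_decompose(A, tol=1e-12):
    """Greedily decompose a doubly-stochastic matrix into a convex
    combination of permutation matrices (Birkhoff's construction)."""
    n = len(A)
    A = [row[:] for row in A]          # work on a copy
    terms = []                         # list of (weight, row -> column map)
    while any(x > tol for row in A for x in row):
        match = [-1] * n               # column -> matched row

        def augment(r, seen):
            # Kuhn's augmenting-path step over the support of A.
            for c in range(n):
                if A[r][c] > tol and c not in seen:
                    seen.add(c)
                    if match[c] == -1 or augment(match[c], seen):
                        match[c] = r
                        return True
            return False

        for r in range(n):
            augment(r, set())          # a perfect matching on the support
                                       # exists by Hall's theorem
        perm = [0] * n
        for c, r in enumerate(match):
            perm[r] = c
        w = min(A[r][perm[r]] for r in range(n))
        for r in range(n):
            A[r][perm[r]] -= w         # zeroes at least one positive entry
        terms.append((w, perm))
    return terms

A = [[0.5, 0.5, 0.0],
     [0.25, 0.25, 0.5],
     [0.25, 0.25, 0.5]]
terms = birkhoff_decompose(A)
assert abs(sum(w for w, _ in terms) - 1.0) < 1e-9
```

Each iteration zeroes at least one entry of the working copy, so the number of terms is bounded by the number of positive entries, in line with the bound discussed above.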
Appendix C Proofs for Section 3: General Policies
A first general observation is that if a box has some probability of not being chosen, then it has a positive value. This is clear from the definition of utility, since the policy rewards a sole finder positively and the prior is nonnegative.
Observation 21.
If every player has probability strictly less than 1 of choosing box $x$, then the value of box $x$ is positive.
C.1 Non-Redundancy
First a simple observation:
Observation 22.
If is non-redundant then for all , and for all , .
See Lemma 10.
Proof.
As not all boxes are covered with certainty, there is some box $y$ such that with positive probability no player visits it. Fix such a $y$. Assume that some player plays a redundant matrix $Q$. This means that there is some time $t$ at which $Q$ is redundant. Define a new matrix $Q'$ that moves a small probability mass $\epsilon > 0$ at time $t$ to box $y$. Taking $\epsilon$ small enough ensures that $Q'$ is doubly-substochastic, since (1) in column $t$ there is space because of the redundancy of $Q$ at time $t$, and (2) in row $y$ there is space because $y$ is not chosen with certainty.
Therefore our player can play according to $Q'$ instead of $Q$; the mass moved was redundant, so nothing is lost.
Recall that by how we chose $y$ it is not chosen with certainty, and that $f$ is weakly decreasing. By Observation 21, the value of box $y$ is positive, and so the utility strictly increases, in contradiction to the profile being at equilibrium. ∎
C.2 Monotonicity
See Lemma 11.
Proof.
Denote:
Then,
(2) 
and similarly for . Denote , and .
First, by the properties of congestion policies, all these quantities are nonnegative and, by the assumption of the lemma, at least one is strictly positive. Now,
(3) 
The sum is the probability that there are at most $m$ ones among $n$ Bernoulli random variables, each sampled with probability $q$. Therefore it is strictly decreasing in $n$ for any $q > 0$. For the other case it is the same, and by the conditions of the lemma, the two are equal. As the relevant parameter is larger in the former, the sum is strictly larger there. By the interpretation of the sum in Eq. (3), the remaining term is the same in both cases. Therefore, the expression is strictly larger, and the claim follows. ∎
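The monotonicity fact used above, that the probability of at most a given number of ones among $n$ Bernoulli variables strictly decreases in $n$, is easy to check numerically (the threshold and success probability below are arbitrary):

```python
from math import comb

def binom_cdf(n, m, q):
    """P(at most m ones among n independent Bernoulli(q) variables)."""
    return sum(comb(n, j) * q**j * (1 - q)**(n - j) for j in range(m + 1))

m, q = 2, 0.3
vals = [binom_cdf(n, m, q) for n in range(m, 20)]
# For n <= m the probability is 1; from there on it strictly decreases in n.
assert all(a > b for a, b in zip(vals, vals[1:]))
```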
See Lemma 12.
Proof.
First, assume . The value at round is:
On the other hand,
because all the . As , we get that , as required.
Next, assume . Consider strategy and let be the same as except that for every , . By the above, . By Lemma 11,
and we conclude. ∎
C.3 Greediness
For a profile and some strategy , denote . Clearly, if is non-redundant then . For symmetric profiles, we simply denote for . A simple and useful observation is the following:
Observation 23.
Let be greedy at time w.r.t. profile . If then .
Proof.
Assume otherwise, that is, . Take some s.t.