Empirical strategy-proofness

07/29/2019 ∙ by Rodrigo A. Velez, et al. ∙ Texas A&M University

We study the plausibility of sub-optimal Nash equilibria of the direct revelation mechanism associated with a strategy-proof social choice function. By using the recently introduced empirical equilibrium analysis (Velez and Brown, 2019, arXiv:1804.07986) we determine that this behavior is plausible only when the social choice function violates a non-bossiness condition and information is not interior. Analysis of the accumulated experimental and empirical evidence on these games supports our findings.


1 Introduction

Strategy-proofness, a coveted property in market design, requires that truthful reports be dominant strategies in the simultaneous direct revelation game associated with a social choice function (scf). Despite the theoretical appeal of this property, experimental and empirical evidence suggests that when an scf satisfying this property is operated, agents may persistently exhibit weakly dominated behavior (Coppinger et al., 1980; Kagel et al., 1987; Kagel and Levin, 1993; Harstad, 2000; Attiyeh et al., 2000; Chen and Sönmez, 2006; Cason et al., 2006; Andreoni et al., 2007; Hassidim et al., 2016; Rees-Jones, 2017; Li, 2017; Artemov et al., 2017; Chen and Pereyra, 2018). In this paper we study the plausibility of Nash equilibria of the direct revelation game of strategy-proof scfs. By doing so we identify the circumstances in which empirical distributions of play in these games may persistently exhibit weakly dominated actions that approximate a Nash equilibrium that produces sub-optimal outcomes. [Footnote 2: Given an scf, we refer to a sub-optimal outcome as one that is different from the one intended by the scf for the true characteristics of the agents.]

The conventional wisdom on the plausibility of Nash equilibria offers no explanation for why weakly dominated behavior can be persistent in some dominant strategy games. Indeed, the most prominent theories either implicitly or explicitly assume that this behavior is not plausible (from the seminal tremble-based refinements of Selten (1975) and Myerson (1978), to their most recent forms in Milgrom and Mollner (2017, 2018) and Fudenberg and He (2018); see also Kohlberg and Mertens (1986), and van Damme (1991) for a survey up to the late 1980s, when this literature was most active).

In Velez and Brown (2019b) we attack the problem of plausibility of Nash equilibria with an alternative approach. We consider a model of unobservables that explains observable behavior in our environment. [Footnote 3: Our benchmark is an experimental environment in which the researcher observes payoffs and samples frequencies of play. This observable payoffs framework is also a valuable benchmark for the foundation of Nash equilibrium (Harsanyi, 1973).] For instance, we construct a randomly disturbed payoff model (Harsanyi, 1973; van Damme, 1991), a control cost model (van Damme, 1991), a structural QRE model (McKelvey and Palfrey, 1995), a regular QRE model (McKelvey and Palfrey, 1996; Goeree et al., 2005), etc. In order to bring our model to accepted standards of science we need to make sure it is falsifiable. We observe that in the most popular models for the analysis of experimental data, including the ones just mentioned, this has been done by requiring consistency with an a priori observable restriction for which there is empirical support, weak payoff monotonicity. [Footnote 4: Harsanyi (1973) does not explicitly impose weak payoff monotonicity in his randomly perturbed payoff models. The objective of his study is to show that certain properties hold for all randomly perturbed payoff models with vanishing perturbations for generic games. This makes it unnecessary to discipline the model with a priori restrictions. van Damme (1991) requires permutation invariance on Harsanyi (1973)'s models. This induces weak payoff monotonicity.] This property of the full profile of empirical distributions of play in a game requires that for each agent, differences in behavior reveal differences in expected utility. That is, between two alternative actions for an agent, say $a$ and $b$, if the agent plays $a$ with higher frequency than $b$, it is because given what the other agents are doing, $a$ has higher expected utility than $b$. Finally, we proceed with our study and define a refinement of Nash equilibrium by means of “approachability” by behavior in our model à la Harsanyi (1973), van Damme (1991), and McKelvey and Palfrey (1996). That is, we label as implausible the Nash equilibria of our game that are not the limit of a sequence of behavior that can be generated by our model. If our model is well-specified, the equilibria that are ruled implausible by our refinement will never be approached by observable behavior, even when distributions of play approach mutual best responses. [Footnote 5: We have in mind an unmodeled evolutionary process by which behavior approaches a Nash equilibrium. Thus, we are essentially interested in the situations in which eventually a game form is a good approximation of the strategic situation we model, as when perturbations vanish in Harsanyi (1973)'s approachability theory.] Of course, we are not sure what the true model is. Our thought experiment was already fruitful, however. We learned that if we were able to construct the true model and our a priori restriction does not hinder its specification, the Nash equilibria that we would identify as implausible will necessarily contain those in the complement of the closure of weakly payoff monotone behavior. This leads us to the definition of empirical equilibrium, a Nash equilibrium for which there is a sequence of weakly payoff monotone distributions of play converging to it. The complement of this refinement (in the Nash equilibrium set), the empirically implausible equilibria, are the Nash equilibria that are determined implausible by any theory that is disciplined by weak payoff monotonicity.

We can considerably advance our understanding of the direct revelation game of a strategy-proof scf by calculating its empirical equilibria. On the one hand, suppose that we find that for a certain game each empirical equilibrium is truthful equivalent. Then, we learn that as long as empirical distributions of play are weakly payoff monotone, behavior will never approximate a sub-optimal Nash equilibrium. On the other hand, if we find that some empirical equilibria are not truthful equivalent, this alerts us to the possibility that we may plausibly observe persistent behavior that generates sub-optimal outcomes and approximates mutual best responses.

We present two main results. The first is that non-bossiness in welfare-outcome—i.e., the requirement on an scf that no agent be able to change the outcome without changing her own welfare—is necessary and sufficient to guarantee that for each common prior type space, each empirical equilibrium of the direct revelation game of a strategy-proof scf in a private values environment produces, with certainty, the truthful outcome (Theorem 1). The second is that the requirement that a strategy-proof scf have essentially unique dominant strategies characterizes this form of robust implementation for type spaces with full support (Theorem 2). The sharp predictions of our theorems are consistent with experimental and empirical evidence on strategy-proof mechanisms (Sec. 6). Indeed, they are in line with some of the most puzzling evidence on the second-price auction, a strategy-proof mechanism that violates non-bossiness but whose dominant strategies are unique. Deviations from truthful behavior are persistently observed when this mechanism is operated, but mainly for information structures in which agents' types are common information (Andreoni et al., 2007).

The remainder of the paper is organized as follows. Sec. 2 places our contribution in the context of the literature. Sec. 3 presents the intuition of our results illustrated for the Top Trading Cycles (TTC) mechanism and the second-price auction, two cornerstones of the market design literature. Sec. 4 introduces the model. Section 5 presents our main results. Section 6 contrasts our results with experimental and empirical evidence. Section 7 contrasts them with the characterizations of robust full implementation (Bergemann and Morris, 2011; Saijo et al., 2007; Adachi, 2014) with which one can draw an informative parallel, and discusses our restriction to direct revelation mechanisms. Section 8 concludes. The Appendix collects all proofs.

2 Related literature

The literature on strategy-proof mechanisms was initiated by Gibbard (1973) and Satterthwaite (1975), who proved that this property implies dictatorship when there are at least three outcomes and preferences are unrestricted. The theoretical literature that followed has shown that this property is also restrictive in economic environments, but can be achieved by reasonable scfs in restricted preference environments (see Barbera, 2010, for a survey). Among these are the VCG mechanisms for the choice of an outcome with transferable utility, which include the second-price auction of an object and the Pivotal mechanism for the selection of a public project (see Green and Laffont, 1977, and references therein); the TTC mechanism for the reallocation of indivisible goods (Shapley and Scarf, 1974); the Student Proposing Deferred Acceptance (SPDA) mechanism for the allocation of school seats based on priorities (Gale and Shapley, 1962; Abdulkadiroğlu and Sönmez, 2003); the median voting rules for the selection of an outcome in an interval with satiable preferences (Moulin, 1980); and the Uniform rule in the rationing of a good with satiable preferences (Benassy, 1982; Sprumont, 1983). Even though Gibbard (1973) was not convinced about the positive content of dominant strategy equilibrium, the theoretical literature that followed endorsed the view that strategy-proofness provided a bulletproof form of implementation. Thus, when economics experiments were developed and gained popularity in the 1980s, the dominant strategy hypothesis became the center of attention of the experimental studies of strategy-proof mechanisms. Until recently the accepted wisdom was that behavior in a game with dominant strategies should be evaluated with respect to the benchmark of the dominant strategies hypothesis (e.g. Andreoni et al., 2007). The common finding in these experimental studies is a lack of support for this hypothesis (Sec. 6).

Economic theorists have been slowly reacting to the findings in laboratory experiments. The first attempt was made by Saijo et al. (2007), who looked for scfs that are implementable simultaneously in dominant strategies and Nash equilibrium in complete information environments. This form of implementation has a robustness property under multiple forms of incomplete information (Saijo et al., 2007; Adachi, 2014). Experiments have confirmed to some extent that their additional requirements actually improve the performance of mechanisms (Cason et al., 2006). Bochet and Tumennassan (2017) advanced another approach to the problem based on focality of equilibria and coalitional incentives. The main difference between our approach and theirs is that we anchor our theory solely to individual incentives and do not restrict our interest to complete information structures. Our results can be related to their Theorem 3.1, which finds strategy-proofness and non-bossiness in welfare to be a necessary and sufficient condition for a form of implementation in which truthful revelation is focal. [Footnote 6: Our results can also be related to Schummer and Velez (2019), who show that non-bossiness in welfare-outcome is sufficient for a deterministic sequential direct revelation game associated with a strategy-proof scf to implement the scf itself in sequential equilibria for almost every prior.] Also recently, Li (2017) looked for additional properties of mechanisms that induce agents to choose a dominant strategy in a dominant strategy game. Our study differs from Saijo et al. (2007) and Li (2017) in a similar way. We do not provide conditions guaranteeing that behavior will indeed quickly converge to a truthful equivalent Nash equilibrium. We characterize conditions that guarantee behavior will not accumulate around a sub-optimal Nash equilibrium. It is good news when we find an scf satisfies these properties. This means that an agent's sub-optimal choices will always continue to be disciplined by the choices of the other agents. A growing literature is now showing us that the higher aims of Saijo et al. (2007) and Li (2017) lead us to come up empty handed in many problems of interest (c.f. Bochet and Sakai, 2010; Fujinaka and Wakayama, 2011; Bade and Gonczarowski, 2017). Thus, studies like ours, which produce a better understanding of the incentives across all strategy-proof mechanisms, have significant value in the face of these impossibilities.

Strategy-proof mechanisms have been operated for some time in the field. Empirical studies of such mechanisms have generally corroborated the observations from laboratory experiments (e.g. Hassidim et al., 2016; Rees-Jones, 2017; Artemov et al., 2017; Chen and Pereyra, 2018), in such high stakes environments as career choice (Roth, 1984) and school choice (Abdulkadiroğlu and Sönmez, 2003). Among these papers, Artemov et al. (2017) and Chen and Pereyra (2018) are the closest to ours. Besides presenting empirical evidence of persistent violations of the dominant strategies hypothesis, they propose theoretical explanations for it. They restrict attention to school choice environments in which a particular mechanism is used. Artemov et al. (2017) study a continuum model in which the SPDA mechanism is operated in a full-support incomplete information environment. They conclude that it is reasonable that one can observe equilibria in which agents make inconsequential mistakes. Their construction is based on the approximation of the continuum economy by means of finite realizations of it in which agents are allowed to make mistakes that vanish as the population grows. Chen and Pereyra (2018) study a finite school choice environment in which there is a unique ranking of students across all schools. They argue, based on the analysis of an ordinal form of equilibrium, that only when information is not interior can an agent be expected to deviate from her truthful report. Our study differs substantially in scope from these two papers, because our results apply to all strategy-proof scfs. When applied to a school choice problem, our results are qualitatively in line with those in these two studies and thus provide a rationale for their empirical findings. However, our results additionally explain the causes of behavior in these environments (informational assumptions and specific properties of the mechanisms) and provide exact guidelines for when these phenomena will be present in any other environment that admits a strategy-proof mechanism.

Our study is part of a growing literature on behavioral mechanism design, which aims to inform the design of mechanisms with regularities observed in laboratory experiments and empirical data (c.f., Eliaz, 2002; de Clippel, 2014; de Clippel et al., 2017; Kneeland, 2017). The most substantial difference from these papers is that we study sub-optimal behavior when it is disciplined by convergence to a Nash equilibrium. Thus, our method can be understood as a continuous departure from the standard mechanism design and implementation theory based on Nash equilibrium. In this respect we have some similarities with Cabrales and Ponti (2000) and Tumennasan (2013), who study forms of implementation that, like empirical equilibrium, are determined by convergence processes. Their approaches require that strong conditions be satisfied for convergence, which limits the applicability of their results to general games.

Finally, this paper is part of the empirical equilibrium agenda, which consists of reevaluating game theory applications with the empirical equilibrium refinement. In Velez and Brown (2019b) we define empirical equilibrium and provide a foundation for it by means of the regular QRE model of McKelvey and Palfrey (1996) and Goeree et al. (2005). In Velez and Brown (2019c) we study the relationship of empirical equilibrium and the refinements obtained by means of approximation by the separable randomly perturbed payoff models of Harsanyi (1973) and McKelvey and Palfrey (1995). In Velez and Brown (2019a) we apply empirical equilibrium to the problem of full implementation. Finally, in Brown and Velez (2019) we test the comparative statics predicted by empirical equilibrium in a partnership dissolution problem in which dominant strategy mechanisms are not available.

3 The intuition: empirical plausibility of equilibria of TTC and second-price auction

Two mechanisms illustrate our main findings. The first is TTC for the reallocation of indivisible goods from individual endowments (Shapley and Scarf, 1974). The second is the popular second-price auction. For simplicity, let us consider two-agent stylized versions of these market design environments.

Suppose that two agents are to potentially trade the houses they own when each agent has strict preferences. TTC is the mechanism that operates as follows. Each agent is asked to point to the house that he or she prefers. Then, they trade if each agent points to the other agent's house and remain in their houses otherwise. It is well known that this mechanism is strategy-proof. That is, it is a dominant strategy for each agent to point to her preferred house. Thus, if one predicts that truthful dominant strategies will result when this mechanism is operated, one would obtain an efficient trade. There are more Nash equilibria of the game that ensues when this mechanism is operated. Consider the strategy profile where each agent unconditionally points to his or her own house, regardless of the information structure. This profile of strategies provides mutual best responses for expected utility maximizing agents, but does not necessarily produce the same outcomes as the truthful profile.
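To make the example concrete, the following sketch (ours, not from the paper) encodes the two-agent TTC reporting game with hypothetical cardinal payoffs in which each agent prefers the other's house, and checks by enumeration that truthful pointing is dominant while the profile in which both agents point to their own houses is also a Nash equilibrium.

```python
# Minimal sketch (ours, not from the paper): the two-agent TTC reporting game with
# hypothetical cardinal payoffs in which each agent prefers the other's house.
# Agents point either to their "own" house or to the "other" agent's house;
# they trade only if both point to the other's house.

ACTIONS = ("own", "other")

def outcome(r0, r1):
    return "trade" if (r0, r1) == ("other", "other") else "no trade"

def utility(agent, r0, r1):
    # hypothetical payoffs: 1 for ending up with the (preferred) other house, 0 otherwise
    return 1 if outcome(r0, r1) == "trade" else 0

def is_dominant(agent, action):
    # `action` must be a best response to every report of the other agent
    for other in ACTIONS:
        profile = (action, other) if agent == 0 else (other, action)
        for dev in ACTIONS:
            dev_profile = (dev, other) if agent == 0 else (other, dev)
            if utility(agent, *dev_profile) > utility(agent, *profile):
                return False
    return True

def is_nash(r0, r1):
    for agent, deviations in ((0, [(d, r1) for d in ACTIONS]),
                              (1, [(r0, d) for d in ACTIONS])):
        if any(utility(agent, *d) > utility(agent, r0, r1) for d in deviations):
            return False
    return True

print(all(is_dominant(i, "other") for i in (0, 1)))  # True: truthful pointing is dominant
print(is_nash("own", "own"))                         # True: "both point to own house" is also Nash
print(outcome("own", "own"), "vs", outcome("other", "other"))  # no trade vs trade (efficient)
```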

The second-price auction is a mechanism for the allocation of a good by a seller among some buyers. We suppose that there are two buyers, each of whom may have one of three types, $L$, $M$, or $H$. The value that an agent assigns to the object depends on her type: $v_L=0$, $v_M=1/2$, and $v_H=1$. Each agent has quasi-linear preferences, i.e., assigns zero utility to receiving no object, and $v_\theta-p$ to receiving the object and paying $p$ for it. In the second-price auction each agent reports his or her value for the object. Then an agent with higher valuation receives the object and pays the seller the valuation of the other agent. Ties are decided uniformly at random. It is well known that this mechanism is also strategy-proof. In its truthful dominant strategy equilibrium it obtains an efficient assignment of the object, i.e., an agent with higher value receives the object. Moreover, the revenue of the seller is the second highest valuation. There are more Nash equilibria of the game that ensues when this mechanism is operated. Contrary to TTC, these equilibria are not supported by actions that form a Nash equilibrium independent of the information structure. In order to exhibit such equilibria let us suppose that one agent has type $M$, the other agent has type $H$, and both agents have complete information of their types. Table 1 presents the normal form of the complete information game that ensues, with the type-$M$ agent in the role of row player and the type-$H$ agent in the role of column player. There are infinitely many Nash equilibria of this game. For instance, the type-$H$ agent reports her true type and the type-$M$ agent randomizes in some arbitrary way between $M$ and $L$. In these equilibria, the seller generically obtains lower revenue than in the truthful equilibrium.

                      Column agent (type H)
                      H           M           L
Row agent    H     -1/4, 0      0, 0       1/2, 0
(type M)     M      0, 1/2      0, 1/4     1/2, 0
             L      0, 1        0, 1       1/4, 1/2

Table 1: Normal form of the second-price auction with complete information when the row agent has type M and the column agent has type H. In each cell, the first entry is the row agent's payoff and the second is the column agent's payoff.
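The entries of Table 1 can be reproduced mechanically. The sketch below is ours, not the paper's; it assumes the valuations $v_L=0$, $v_M=1/2$, $v_H=1$ inferred from the table, computes the second-price auction payoffs with uniform tie-breaking, and verifies that the profile in which the column agent bids $H$ and the row agent bids $M$ or $L$ consists of mutual best responses.

```python
from fractions import Fraction as F

# Assumed valuations inferred from Table 1 (not stated explicitly in this extraction).
VALUES = {"L": F(0), "M": F(1, 2), "H": F(1)}
ROW_TYPE, COL_TYPE = "M", "H"            # true types of the row and column agents
BIDS = ("H", "M", "L")

def payoffs(row_bid, col_bid):
    """Two-bidder second-price auction with uniform tie-breaking: (row payoff, column payoff)."""
    rb, cb = VALUES[row_bid], VALUES[col_bid]
    if rb > cb:                          # row wins and pays the column agent's bid
        return VALUES[ROW_TYPE] - cb, F(0)
    if cb > rb:                          # column wins and pays the row agent's bid
        return F(0), VALUES[COL_TYPE] - rb
    return (VALUES[ROW_TYPE] - rb) / 2, (VALUES[COL_TYPE] - cb) / 2   # tie

# Reproduce Table 1 row by row
for r in BIDS:
    print(r, [payoffs(r, c) for c in BIDS])

def best_responses(agent, other_bid):
    utils = {b: (payoffs(b, other_bid)[0] if agent == "row" else payoffs(other_bid, b)[1])
             for b in BIDS}
    top = max(utils.values())
    return {b for b, u in utils.items() if u == top}

# Column bidding H and row bidding M or L are mutual best responses (a sub-optimal equilibrium).
print({"M", "L"} <= best_responses("row", "H"))                                  # True
print("H" in best_responses("col", "M") and "H" in best_responses("col", "L"))   # True
```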

Our quest is then to determine which, if any, of the sub-optimal equilibria of TTC, the second-price auction, and for that matter any strategy-proof mechanism, should concern a social planner who operates one of these mechanisms. In order to do so we calculate the empirical equilibria of the games induced by the operation of these mechanisms. It turns out that the Nash equilibria of the TTC and the second-price auction have a very different nature. No sub-optimal Nash equilibrium of the TTC game is an empirical equilibrium. By contrast, for some information structures, the second-price auction has empirical equilibria whose outcomes differ from those of the truthful ones.

Consider the TTC game and a weakly payoff monotone distribution of play. Since revealing her true preference is dominant, each agent with each possible type will reveal her preferences with probability at least 1/2 in such a strategy. Thus, in any limit of a sequence of weakly payoff monotone strategies, each agent reveals her true preference with probability at least 1/2. Consequently, in each empirical equilibrium there is a lower bound on the probability with which each agent is truthful. Suppose that information is given by a common prior. [Footnote 7: This can be relaxed to some extent. See Sec. 4.] Given the realization of agents' types, each agent always believes the true payoff type of the other agent is possible. Then, in each empirical equilibrium of the TTC, whenever trade is efficient (for the true types of the agents), each agent will place positive probability on the other agent pointing to her. Consequently, in each empirical equilibrium of the TTC, given that an agent prefers to trade, this agent will point to the other agent with probability one whenever efficient trade is possible. Thus, each empirical equilibrium of the TTC obtains the truthful outcome with certainty.
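In symbols (our notation): with only two reports available to each agent in this game, the truthful report $t$ and the alternative $o$, weak dominance of $t$ and weak payoff monotonicity give, for each agent $i$ and each of her types,
\[
U_i(t,\sigma_{-i})\ \ge\ U_i(o,\sigma_{-i})
\quad\Longrightarrow\quad
\sigma_i(t)\ \ge\ \sigma_i(o),
\qquad\text{and}\qquad
\sigma_i(t)+\sigma_i(o)=1
\quad\Longrightarrow\quad
\sigma_i(t)\ \ge\ \tfrac{1}{2}.
\]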

For the second-price auction consider the complete information structure whose associated normal form game is presented in Table 1. For each $\varepsilon>0$ small enough, let $\sigma^{\varepsilon}$ be a pair of probability distributions on each agent's action space defined as follows: the column agent (type $H$) places probability close to one on $H$ and positive probability, vanishing with $\varepsilon$ and decreasing from $M$ to $L$, on her other two actions; the row agent (type $M$) places probability close to $1/2$ on each of $M$ and $L$, with slightly more weight on $M$, and vanishing probability on $H$. One can easily see that when $\varepsilon$ is small, $\sigma^{\varepsilon}$ is weakly payoff monotone. Indeed, for the column agent, action $H$ weakly dominates $M$ and this last action weakly dominates $L$. Since the row agent's distribution is interior, the column agent's distribution is ordinally equivalent to the expected utility of her actions given the row agent's distribution. Now, for the row agent, action $M$ is weakly dominant. Moreover, for small $\varepsilon$ the column agent plays $H$ with probability close to one, and thus the expected utility of $H$ for the row agent is strictly less than that of $L$. Thus, the row agent's distribution is ordinally equivalent to the expected utility of her actions given the column agent's distribution. Clearly, as $\varepsilon\to0$, these distributions converge to a Nash equilibrium in which the column agent plays $H$ and the row agent randomizes between $M$ and $L$. Thus, the seller ends up selling for zero price with positive probability for a profile of types whose minimum valuation is positive.
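A construction of this kind can also be checked numerically. The sketch below is ours: it uses the valuations assumed above, picks one illustrative $\sigma^{\varepsilon}$ of the kind just described (not necessarily the paper's exact sequence), verifies weak payoff monotonicity directly from its definition, and confirms that the limit profile is a non-truthful Nash equilibrium.

```python
from fractions import Fraction as F

V = {"L": F(0), "M": F(1, 2), "H": F(1)}           # assumed valuations (see Table 1)
BIDS = ("H", "M", "L")

def u(my_type, my_bid, other_bid):
    """Utility of a bidder in the two-bidder second-price auction with uniform tie-breaking."""
    if V[my_bid] > V[other_bid]:
        return V[my_type] - V[other_bid]
    if V[my_bid] < V[other_bid]:
        return F(0)
    return (V[my_type] - V[my_bid]) / 2

def expected_u(my_type, my_bid, other_dist):
    return sum(p * u(my_type, my_bid, b) for b, p in other_dist.items())

def weakly_payoff_monotone(my_type, my_dist, other_dist):
    """A strictly higher frequency on a must come with strictly higher expected utility than b."""
    return all(not (my_dist[a] > my_dist[b])
               or expected_u(my_type, a, other_dist) > expected_u(my_type, b, other_dist)
               for a in BIDS for b in BIDS)

eps = F(1, 100)                                     # one illustrative point of the sequence
row = {"M": F(1, 2), "L": F(1, 2) - eps, "H": eps}  # type-M (row) agent
col = {"H": 1 - 3 * eps, "M": 2 * eps, "L": eps}    # type-H (column) agent

print(weakly_payoff_monotone("M", row, col))        # True
print(weakly_payoff_monotone("H", col, row))        # True

# Limit profile: column bids H for sure; row splits evenly between M and L.
row_lim = {"M": F(1, 2), "L": F(1, 2), "H": F(0)}
col_lim = {"H": F(1), "M": F(0), "L": F(0)}
print(max(expected_u("M", b, col_lim) for b in BIDS) == expected_u("M", "M", col_lim))  # True
print(max(expected_u("H", b, row_lim) for b in BIDS) == expected_u("H", "H", row_lim))  # True
```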

Empirical equilibrium allows us to draw a clear difference between TTC and the second-price auction. Suppose that agents’ behavior is weakly payoff monotone. Then, if these mechanisms are operated, one will never observe that empirical distributions of play in TTC approximate an equilibrium producing a sub-optimal outcome. By contrast, this possibility is not ruled out for the second-price auction.

It turns out that the differences between these two mechanisms can be pinned down to a property that TTC satisfies and the second-price auction violates: non-bossiness in welfare-outcome, i.e., in the direct revelation game of the mechanism, an agent cannot change the outcome without changing her welfare (Theorem 1).

For the strategy-proof mechanisms that do violate non-bossiness, it is useful to examine which information structures produce undesirable empirical equilibria. It turns out that for a strategy-proof mechanism with essentially unique dominant strategies, like the second-price auction, this cannot happen for information structures with full support (Theorem 2). Thus, our example above with the second-price auction actually requires the type of information structure we used.

Together, Theorems 1 and 2 produce sharp predictions about the type of behavior that is plausible when a strategy-proof scf is operated in different information structures. In Sec. 6 we review the relevant experimental and empirical literature and find that these predictions are consistent with it.

4 Model

A group of agents $N\equiv\{1,\dots,n\}$ is to select an alternative in a finite set $A$. Agents have private values, i.e., each agent $i\in N$ has a payoff type $\theta_i$, determining an expected utility index $u_{\theta_i}:A\to\mathbb{R}$. The set of possible payoff types for agent $i$ is $\Theta_i$ and the set of possible payoff type profiles is $\Theta\equiv\Theta_1\times\cdots\times\Theta_n$. We assume that $\Theta$ is finite. For each $S\subseteq N$, $\Theta_S$ is the cartesian product of the type spaces of the agents in $S$. The generic element of $\Theta_S$ is $\theta_S$. When $S=N\setminus\{i\}$ we simply write $\Theta_{-i}$ and $\theta_{-i}$. Consistently, whenever convenient, we concatenate partial profiles, as in $(\theta_i,\theta_{-i})$. We use this notation consistently when operating with vectors (as in strategy profiles). We assume that information is summarized by a common prior $p\in\Delta(\Theta)$. For each $\theta_i$ in the support of the marginal of $p$ on $\Theta_i$ and each $i\in N$, let $p(\cdot\mid\theta_i)$ be the distribution on $\Theta_{-i}$ conditional on agent $i$ drawing type $\theta_i$. [Footnote 8: Our results can be generalized for general type spaces à la Bergemann and Morris (2005) when one requires the type of robust implementation in our theorems only for the common support of the priors. We prefer to present our payoff type model for two reasons. First, it is much simpler and intuitive. Second, since our theorems are robust implementation characterizations, they are not necessarily stronger results when stated for larger sets of priors. By stating our theorems in our domain, the reader is sure that we do not make use of the additional freedoms that games with non-common priors allow.]

A social choice correspondence (scc) selects a set of alternatives for each possible state. The generic scc is $F$. A social choice function (scf) is a single-valued scc. The generic scf is $f:\Theta\to A$. Three properties of scfs play an important role in our results. An scf $f$,

  1. is strategy-proof (dominant strategy incentive compatible) if for each $(\theta_i,\theta_{-i})\in\Theta$, each $i\in N$, and each $\theta_i'\in\Theta_i$, $u_{\theta_i}(f(\theta_i,\theta_{-i}))\ge u_{\theta_i}(f(\theta_i',\theta_{-i}))$.

  2. is non-bossy in welfare-outcome if for each $(\theta_i,\theta_{-i})\in\Theta$, each $i\in N$, and each $\theta_i'\in\Theta_i$, $u_{\theta_i}(f(\theta_i',\theta_{-i}))=u_{\theta_i}(f(\theta_i,\theta_{-i}))$ implies that $f(\theta_i',\theta_{-i})=f(\theta_i,\theta_{-i})$.

  3. has essentially unique dominant strategies if for each $(\theta_i,\theta_{-i})\in\Theta$, each $i\in N$, and each $\theta_i'\in\Theta_i$, if $\theta_i'\neq\theta_i$ and $f(\theta_i',\theta_{-i})\neq f(\theta_i,\theta_{-i})$, then there is $\theta_{-i}'\in\Theta_{-i}$ such that $u_{\theta_i}(f(\theta_i',\theta_{-i}'))<u_{\theta_i}(f(\theta_i,\theta_{-i}'))$.

The first property is well-known. The second property requires that no agent, when telling the truth (in the direct revelation mechanism associated with the scf), be able to change the outcome by changing her report without changing her welfare. It is satisfied, among others, by TTC, the Median Voting rule, and the Uniform rule. It is a strengthening of the non-bossiness condition of Satterthwaite and Sonnenschein (1981), which applies only to environments with private consumption. Non-bossiness in welfare-outcome is violated by the Pivotal mechanism, the second-price auction, and SPDA. The third property requires that any consequential deviation from a truthful report by an agent can have adverse consequences for her. Restricted to strategy-proof scfs, this property says that, in the direct revelation game associated with the scf, for each agent, all dominant strategies are redundant. This is satisfied whenever true reports are the unique dominant strategies, as in the second-price auction. It is not necessary that dominant strategies be unique for this property to be satisfied. A student in a school choice environment with strict preferences, and in which SPDA is operated, may have multiple dominant strategies (think for instance of a student who is at the top of the ranking of each school). However, any misreport that is also a dominant strategy for this student cannot change the outcome. [Footnote 9: The following discussion uses the standard language of school choice problems (c.f. Abdulkadiroğlu and Sönmez, 2003). Suppose that preferences are strict and that, starting from a profile in which student $i$ is truthful, she changes her report but does not change the relative ranking of her assignment with respect to the other schools. The SPDA assignment for the first profile, say $\mu$, is again stable for the second profile. Thus, for the new profile, each other agent is weakly better off. Agent $i$'s allotment is the same in both markets because SPDA is strategy-proof. If another agent changes her allotment, it is because the new SPDA assignment was blocked in the original profile. Since the preferences of the other agents did not change, agent $i$ needs to be in the blocking pair for the new assignment in the original market. However, this means she is in a blocking pair for the new assignment in the new market. Thus, with this type of lie, agent $i$ cannot change the allotment of anybody else. If agent $i$ changes the relative ranking of her allotment in the original market, she can be worse off with the lie. For instance, suppose that she moves a school $s$ from the lower contour set at her allotment to the upper contour set; one can then construct a profile of reports for the other agents under which agent $i$ receives $s$ in the SPDA assignment and is thus worse off with the lie. See Fernandez (2018) for a related property of SPDA that guarantees students do not regret lying when one also considers possible changes in the priorities of schools.]
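For a small example, the two properties at the center of Theorem 1 can be verified by brute force. The sketch below is ours; it encodes the two-agent TTC scf from Sec. 3 with illustrative type labels and cardinal utilities and checks strategy-proofness and non-bossiness in welfare-outcome by enumerating all type profiles and unilateral misreports.

```python
from itertools import product

# Illustrative brute-force check (ours) for the two-agent TTC scf of Sec. 3;
# type labels and cardinal utilities are assumptions for the sketch.
AGENTS = (0, 1)
TYPES = ("own-first", "other-first")       # which house the agent truly prefers

def scf(theta):
    # TTC: trade iff both report that they prefer the other agent's house
    return "trade" if theta == ("other-first", "other-first") else "no trade"

def utility(true_type, outcome):
    gets_other = (outcome == "trade")
    prefers_other = (true_type == "other-first")
    return 1 if gets_other == prefers_other else 0

def replace(theta, i, report):
    t = list(theta)
    t[i] = report
    return tuple(t)

def strategy_proof():
    return all(
        utility(theta[i], scf(theta)) >= utility(theta[i], scf(replace(theta, i, r)))
        for theta in product(TYPES, repeat=2) for i in AGENTS for r in TYPES
    )

def non_bossy_in_welfare_outcome():
    # a misreport that leaves the agent's welfare unchanged must leave the outcome unchanged
    for theta in product(TYPES, repeat=2):
        for i in AGENTS:
            for r in TYPES:
                dev = replace(theta, i, r)
                if (utility(theta[i], scf(dev)) == utility(theta[i], scf(theta))
                        and scf(dev) != scf(theta)):
                    return False
    return True

print(strategy_proof())                    # True
print(non_bossy_in_welfare_outcome())      # True
```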

A mechanism is a pair $(M,g)$ where $M\equiv M_1\times\cdots\times M_n$ is an unrestricted message space and $g:M\to A$ is an outcome function. Given the common prior $p$, $(M,g)$ determines a standard Bayesian game, which we denote by $(M,g,p)$. When the prior is degenerate, i.e., places probability one on a payoff type profile $\theta$, we refer to this as a game of complete information and denote it simply by $(M,g,\theta)$. A (behavior) strategy for agent $i$ in $(M,g,p)$ is a function $\sigma_i:\Theta_i\to\Delta(M_i)$. We denote a profile of strategies by $\sigma\equiv(\sigma_i)_{i\in N}$. For each $S\subseteq N$ and each $\theta_S\in\Theta_S$, $\sigma_S(\theta_S)$ is the corresponding product measure $\times_{i\in S}\,\sigma_i(\theta_i)$. When $S=N\setminus\{i\}$ we simply write $\sigma_{-i}(\theta_{-i})$. We denote the measure that places probability one on $m_i$ by $\delta_{m_i}$. The expected utility of agent $i$ with type $\theta_i$ in $(M,g,p)$ from playing strategy $\sigma_i$ when the other agents select actions as prescribed by $\sigma_{-i}$ is
\[
U_{\theta_i}(\sigma_i,\sigma_{-i})\ \equiv\ \sum p(\theta_{-i}\mid\theta_i)\,\sigma_i(\theta_i)(m_i)\,\sigma_{-i}(\theta_{-i})(m_{-i})\,u_{\theta_i}(g(m_i,m_{-i})),
\]
where the summation is over all $\theta_{-i}\in\Theta_{-i}$ and $(m_i,m_{-i})\in M$. A profile of strategies $\sigma$ is a Bayesian Nash equilibrium of $(M,g,p)$ if for each $\theta_i$ in the support of the marginal of $p$ on $\Theta_i$, each $i\in N$, and each $m_i\in M_i$, $U_{\theta_i}(\sigma_i,\sigma_{-i})\ge U_{\theta_i}(\delta_{m_i},\sigma_{-i})$. The set of Bayesian Nash equilibria of $(M,g,p)$ is denoted $B(M,g,p)$. We say that an action $m_i\in M_i$ is a weakly dominant action for agent $i$ with type $\theta_i$ in $(M,g,p)$ if for each $m_{-i}\in M_{-i}$ and each $m_i'\in M_i$, $u_{\theta_i}(g(m_i,m_{-i}))\ge u_{\theta_i}(g(m_i',m_{-i}))$.
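For a concrete instance of these definitions, the following sketch (ours) computes interim expected utilities in the direct revelation game of the two-agent TTC under an assumed uniform common prior and checks that truthful reporting is a Bayesian Nash equilibrium.

```python
from itertools import product

# Sketch (ours): interim expected utility and a Bayesian Nash equilibrium check in the
# direct revelation game of the two-agent TTC under an assumed uniform common prior.
TYPES = ("own-first", "other-first")
PRIOR = {t: 0.25 for t in product(TYPES, repeat=2)}    # assumed common prior

def g(m):                                              # outcome function (two-agent TTC)
    return "trade" if m == ("other-first", "other-first") else "no trade"

def u(theta_i, outcome):                               # illustrative cardinal utility index
    return 1.0 if (outcome == "trade") == (theta_i == "other-first") else 0.0

def conditional(i, theta_i):
    """p(theta_{-i} | theta_i) under the common prior."""
    joint = {t[1 - i]: q for t, q in PRIOR.items() if t[i] == theta_i}
    total = sum(joint.values())
    return {t: q / total for t, q in joint.items()}

def expected_u(i, theta_i, m_i, sigma_other):
    """Interim expected utility of reporting m_i against the other agent's strategy."""
    return sum(
        p * q * u(theta_i, g((m_i, m_other) if i == 0 else (m_other, m_i)))
        for theta_other, p in conditional(i, theta_i).items()
        for m_other, q in sigma_other(theta_other).items()
    )

def truthful(theta):
    return {theta: 1.0}                                # pure strategy: report the true type

# Truthful reporting is a Bayesian Nash equilibrium of this direct revelation game.
print(all(
    expected_u(i, theta_i, theta_i, truthful) >= expected_u(i, theta_i, m, truthful)
    for i in (0, 1) for theta_i in TYPES for m in TYPES
))  # True
```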

Our main basis for empirical plausibility of behavior is the following weak form of rationality.

Definition 1 (Velez and Brown, 2019b).

$\sigma$ is weakly payoff monotone for $(M,g,p)$ if for each $i\in N$, each $\theta_i$ in the support of the marginal of $p$ on $\Theta_i$, and each pair $\{m_i,m_i'\}\subseteq M_i$ such that $\sigma_i(\theta_i)(m_i)>\sigma_i(\theta_i)(m_i')$, we have $U_{\theta_i}(\delta_{m_i},\sigma_{-i})>U_{\theta_i}(\delta_{m_i'},\sigma_{-i})$.

We then identify the Nash equilibria that can be approximated by empirically plausible behavior.

Definition 2 (Velez and Brown, 2019b).

An empirical equilibrium of $(M,g,p)$ is a Bayesian Nash equilibrium of $(M,g,p)$ that is the limit of a sequence of weakly payoff monotone distributions for $(M,g,p)$.

In any finite game, proper equilibria (Myerson, 1978), firm equilibria and approachable equilibria (van Damme, 1991), and the limiting logistic equilibrium (McKelvey and Palfrey, 1995) are empirical equilibria. Thus, existence of empirical equilibrium holds for each finite game (Velez and Brown, 2019b).

5 Results

We start with a key lemma that states that weakly dominant actions will always be part of the support of empirical equilibria.

Lemma 1.

Let $(M,g)$ be a mechanism and $p$ a common prior. Let $i\in N$ and let $\theta_i$ be in the support of the marginal of $p$ on $\Theta_i$. Suppose that $m_i$ is a weakly dominant action for agent $i$ with type $\theta_i$ in $(M,g,p)$. Let $\sigma$ be an empirical equilibrium of $(M,g,p)$. Then, $m_i$ is in the support of $\sigma_i(\theta_i)$.

The following theorem characterizes the strategy-proof scfs for which the empirical equilibria of the associated direct revelation game, $(\Theta,f,p)$, produce with certainty, for each common prior information structure, the truthful outcome.

Theorem 1.

Let $f$ be an scf. The following statements are equivalent.

  1. For each common prior $p$ and each empirical equilibrium of $(\Theta,f,p)$, say $\sigma$, we have that for each pair $(\theta,\theta')$ where $\theta$ is in the support of $p$ and $\theta'$ is in the support of $\sigma(\theta)$, $f(\theta')=f(\theta)$.

  2. $f$ is strategy-proof and non-bossy in welfare-outcome.

We now discuss the proof of Theorem 1. Let us discuss first why a strategy-proof and non-bossy in welfare-outcome scf has the robustness property in statement 1 of the theorem. Suppose that $\sigma$ is a Bayesian Nash equilibrium of $(\Theta,f,p)$, that the true type profile of the agents is $\theta$, and that the agents end up reporting $\theta'$ with positive probability under $\sigma$. Consider an arbitrary agent, say $i$. Since $f$ is strategy-proof, $\theta_i'$ can be a best response for agent $i$ with type $\theta_i$ only if it gives the agent the same utility as reporting $\theta_i$ for each report of the other agents that agent $i$ believes will be observed with positive probability. Thus, since there are rational expectations in a common prior game, report $\theta_i'$ needs to give agent $i$ the same utility as $\theta_i$ when the other agents report $\theta_{-i}'$. Since $f$ is non-bossy in welfare-outcome, it has to be the case that $f(\theta_i,\theta_{-i}')=f(\theta_i',\theta_{-i}')$. By Lemma 1, if $\sigma$ is an empirical equilibrium of $(\Theta,f,p)$, agent $i$ reports her true type with positive probability in $\sigma$. Thus, $(\theta_i,\theta_{-i}')$ is played with positive probability in $\sigma$. Thus, we can iterate over the set of agents and conclude that $f(\theta')=f(\theta)$.

Let us discuss now the proof of the converse statement. First, we observe that it is well known that the type of robust implementation in statement 1 of the theorem implies the scf is strategy-proof (Dasgupta et al., 1979; Bergemann and Morris, 2005). Thus, it is enough to prove that if $f$ is strategy-proof and satisfies the robustness property, it has to be non-bossy in welfare-outcome. Our proof of this statement is by contradiction. We suppose to the contrary that for some type profile $\theta$, an agent, say $i$, can change the outcome of $f$ by reporting some alternative type $\theta_i'$ without changing her welfare. We then show that the complete information game $(\Theta,f,\theta)$ has an empirical equilibrium in which $\theta_i'$ is observed with positive probability. The subtlety of doing this resides in that our statement is free of details about the payoff environment in which it applies. We have an arbitrary number of agents and we know little about the structure of agents' preferences. If we had additional information about the environment, say as for the second-price auction, the construction could be greatly simplified as in our illustrative example.

To solve this problem we design an operator that responds to four different types of signals, $(\varepsilon,\delta,\lambda,\beta)$, where $\varepsilon,\delta\in(0,1)$ and $\lambda,\beta\ge0$. This operator has fixed points that are always weakly payoff monotone distributions for $(\Theta,f,\theta)$. For a given $\varepsilon$, the operator restricts its search of distributions to those that place at least probability $\varepsilon$ on each action for agent $i$. For a given $\delta$, the operator restricts its search of distributions to those that place at least probability $\delta$ on each action for each agent $j\neq i$. If we take $\lambda$ to infinity, the operator looks for distributions in which agent $i$'s frequency of play is almost a best response to the other agents' distribution (constrained by $\varepsilon$). If we take $\beta$ to infinity, the operator looks for distributions in which, for each agent $j\neq i$, her frequency of play is almost a best response to the other agents' distribution (constrained by $\delta$). The proof is completed by proving that for the right sequence of signals, the operator will have fixed points that in the limit exhibit the required properties. To simplify our discussion without losing the core of the argument, let us suppose that each agent has a unique weakly dominant action for each type. Fix $\varepsilon$ and $\lambda$. Since we base the construction of our operator on continuous functions, one can prove that there is $\eta>0$ such that for each fixed point of the operator, if the expected utility of reports $\theta_i$ and $\theta_i'$ does not differ by more than $\eta$, then agent $i$ places almost the same probability on these two reports. If each agent $j\neq i$ approximately places probability $\delta$ on each action that is not weakly dominant and the rest on her dominant action, the utility of agent $i$ from reports $\theta_i$ and $\theta_i'$ will be almost the same when $\delta$ is small. Thus, one can calibrate $\delta$ for this difference to be less than $\eta$. Let $\delta^*$ be this value. If we take $\beta$ to infinity keeping $\delta^*$ constant, the distribution of each agent $j\neq i$ in each fixed point of the operator will place, approximately, probability $\delta^*$ on each action that is not weakly dominant. Thus, for large $\beta$, the operator has a fixed point in which agent $i$ is playing $\theta_i$ and $\theta_i'$ with almost the same probability and all other agents are playing their dominant strategy with almost certainty. We grab one of these distributions. It is the first point in our sequence, which we construct by repeating this argument starting from smaller $\varepsilon$'s and $\delta$'s and larger $\lambda$'s and $\beta$'s.
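A drastically simplified version of this idea can be illustrated computationally. The sketch below is ours and is not the operator used in the proof: in the complete information second-price auction example of Sec. 3 (with the valuations we assumed there), it iterates logit responses with a lower bound on each action's probability, using a low precision parameter for the "bossy" row agent and a high one for the column agent. The resulting point has the column agent playing her dominant strategy with near certainty while the row agent splits her play between the two reports, $M$ and $L$, that leave her welfare unchanged.

```python
import math

# Illustrative only: a constrained logit-response iteration in the complete information
# second-price auction of Table 1 (assumed valuations 0, 1/2, 1). This is NOT the
# operator used in the paper's proof, just a simplified analogue.
BIDS = ("H", "M", "L")
V = {"L": 0.0, "M": 0.5, "H": 1.0}
ROW_TYPE, COL_TYPE = "M", "H"

def u_row(rb, cb):
    if V[rb] > V[cb]:
        return V[ROW_TYPE] - V[cb]
    if V[rb] < V[cb]:
        return 0.0
    return (V[ROW_TYPE] - V[rb]) / 2      # uniform tie-breaking

def u_col(rb, cb):
    if V[cb] > V[rb]:
        return V[COL_TYPE] - V[rb]
    if V[cb] < V[rb]:
        return 0.0
    return (V[COL_TYPE] - V[cb]) / 2

def logit(eu, lam, floor):
    """Logit response with a lower bound `floor` on each action's probability."""
    w = [math.exp(lam * x) for x in eu]
    probs = [x / sum(w) for x in w]
    probs = [max(p, floor) for p in probs]            # crude stand-in for the constrained search
    return [p / sum(probs) for p in probs]

def iterate(lam_row, lam_col, floor, rounds=200):
    row = col = [1 / 3] * 3
    for _ in range(rounds):
        eu_row = [sum(c * u_row(a, b) for c, b in zip(col, BIDS)) for a in BIDS]
        eu_col = [sum(r * u_col(a, b) for r, a in zip(row, BIDS)) for b in BIDS]
        row, col = logit(eu_row, lam_row, floor), logit(eu_col, lam_col, floor)
    return row, col

# Low precision for the (bossy) row agent, high precision for the column agent.
row, col = iterate(lam_row=20.0, lam_col=200.0, floor=1e-3)
print({b: round(p, 3) for b, p in zip(BIDS, row)})   # most weight split between M and L
print({b: round(p, 3) for b, p in zip(BIDS, col)})   # nearly all weight on H
```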

Interestingly, the conclusions of Theorem 1 depend on our requirement that the empirical equilibria of the scf generate only truthful outcomes for type spaces in which an agent may know, with certainty, the payoff type of the other agents.

Theorem 2.

Let $f$ be an scf. The following statements are equivalent.

  1. For each full-support prior $p$ and each empirical equilibrium of $(\Theta,f,p)$, say $\sigma$, we have that for each pair $(\theta,\theta')$ where $\theta'$ is in the support of $\sigma(\theta)$, $f(\theta')=f(\theta)$.

  2. $f$ is strategy-proof and has essentially unique dominant strategies.

Lemma 1 and Theorems 1 and 2 give us a clear description of the weakly payoff monotone behavior that can be observed when a strategy-proof scf is operated. In the next section we contrast these predictions with experimental evidence on strategy-proof mechanisms.

6 Experimental and empirical evidence

6.1 Dominant strategy play

The performance of strategy-proof mechanisms in an experimental environment has attracted a fair amount of attention. Essentially, experiments have been run to test the hypothesis that dominant strategy equilibrium is a reasonable prediction for these games. The common finding is a lack of support for this hypothesis in most mechanisms. The only exceptions appear to be mechanisms for which dominant strategies are “obvious” (Li, 2017).

Our results provide an alternative theoretical framework from which one can reevaluate these experimental results. Theorems 1 and 2 state that as long as empirical distributions of play are weakly payoff monotone we should expect two features in data. First, we will never see agents’ behavior approximate a Nash equilibrium that is not truthful equivalent in two situations: (i) the scf is strategy-proof and non-bossy in welfare outcome; or (ii) each agent believes all other payoff types are possible and the scf has essentially unique dominant strategies. Second, one cannot rule out that sub-optimal equilibria are approximated by weakly payoff monotone behavior when the scf violates non-bossiness in welfare-outcome and information is complete.

It is informative to note that our first conclusion still holds if we only require, instead of weak payoff monotonicity, that there is a lower bound on the probability with which an agent reports truthfully, an easier hypothesis to test. Thus, in order to investigate whether a sub-optimal Nash equilibrium is approximated in situations (i) and (ii), it is enough to verify that truthful play is non-negligible and does not dissipate in experiments with multiple rounds. This is largely supported by data.

We survey the literature for experimental results with dominant strategy mechanisms and find nine studies across a variety of mechanisms. In all of these studies we are able to determine, based on the number of pure strategies available to each player, how often a dominant strategy would be played if subjects uniformly played all pure strategies. In every experiment, rates of dominant strategy play exceed this threshold. [Footnote 10: These results are not different if one looks only at initial or late play in the experiments.]

A simple binomial test—treating each of these nine papers as a single observation—rejects any null hypothesis that these rates of dominant strategy play are drawn from a random distribution with median probability at or below these levels. Thus one would reasonably conclude that rates of dominant strategy play should exceed that under uniform play. [Footnote 11: Our benchmark of uniform bids is well defined in each finite environment. Thus it allows for a meaningful aggregation of the different studies. For the second-price auction, an alternative comparison is the rate of dominant strategy play in this mechanism and the frequency of bids that are equal to the agent's own value in the first-price auction. Among the experiments we survey, Andreoni et al. (2007) allows for this direct comparison in experimental sessions that differ only in the price rule. In this experiment, dominant strategy play in the second-price auction is 68.25%, 57.50%, 51.25%; and in the first-price auction the percentage of agents bidding their value is 6.17%, 11.92%, 19.48% for the three corresponding information structures. Because there are only two sessions each under the two auction mechanisms, non-parametric tests cannot show these differences to be significant at the session level. They are significantly different at the subject level for a variety of non-parametric and parametric tests.]
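For concreteness, the study-level binomial (sign) test described above can be reproduced as follows; the sketch is ours and uses an illustrative null success probability of one half rather than the study-specific uniform-play thresholds.

```python
from math import comb

def binomial_upper_tail(successes, n, p):
    """One-sided P(X >= successes) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(successes, n + 1))

# Nine of the nine surveyed studies exceed their uniform-play benchmark. Under any null in
# which exceeding the benchmark has probability at most 1/2, the one-sided p-value is at most:
print(binomial_upper_tail(9, 9, 0.5))   # 0.001953125
```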

It is evident then that the accumulated experimental data supports the conclusion that under conditions (i) and (ii) agents' behavior is not likely to settle on a sub-optimal equilibrium. As long as agents are not choosing a best response, the behavior of the other agents will continue flagging their consequential deviations from truthful behavior as considerably inferior. [Footnote 12: Note that our prediction is that under conditions (i) and (ii), behavior will not settle in a suboptimal equilibrium, not that behavior will necessarily converge to a truthful equilibrium.]

6.2 Observing empirical equilibria

Among the experiments we surveyed, Cason et al. (2006) and Andreoni et al. (2007) involve the operation of a strategy-proof mechanism that violates non-bossiness in welfare-outcome in an information environment in which information is not interior. These experiments offer us the chance to observe Nash equilibria attaining outcomes different from the truthful one with positive probability.

In Cason et al. (2006), two-agent groups (row and column) play simultaneously eight to ten rounds in randomly rematched groups with the same pivotal mechanism payoff matrix over all rounds. This experiment was designed to test "secure implementation" (Saijo et al., 2007). This theory obtains a characterization of scfs whose direct revelation game implements the scf itself both in dominant strategies and Nash equilibria for all complete information priors. By running experiments with the pivotal mechanism, which has untruthful equilibria for some complete information structures, the authors illustrated that the lack of secure implementation may lead to equilibria that are not truthful equivalent. Indeed, these authors argue that even though deviations from dominant strategy play are arguably persistent in their pivotal mechanism experiment, virtually all subjects are playing mutual best responses to the population of subjects by the end of the experiment (c.f., Figure 7, Cason et al., 2006). [Footnote 13: Secure implementation is achieved by strategy-proof scfs that are non-bossy in welfare-outcome and satisfy a rectangularity condition we state in Theorem 3.] Empirical equilibrium analysis reveals that Cason et al. (2006)'s experiment likely succeeded in exhibiting an undesirable equilibrium because the scf they chose violates non-bossiness in welfare-outcome in an information structure that is arguably complete. Had these authors chosen an scf violating secure implementation but satisfying non-bossiness in welfare-outcome, like the TTC, it is unlikely that they would have observed behavior accumulating towards an untruthful Nash equilibrium (Sec. 6.1). See also our analysis of secure implementation in the context of robust implementation in Sec. 7.

Figure 1: Group outcomes in 4-person, second-price auctions in Andreoni et al. (2007) under full-support incomplete (a, left) and complete (b, right) information. The dark gray area indicates the proportion of outcomes where all subjects play mutual best responses to the actions of all other group members. The light gray area indicates outcomes where the transaction associated with the dominant strategy outcome occurs, that is, the subject with the highest valuation obtains the item and pays the amount of the second highest valuation. The medium gray area indicates the percentage of group outcomes where all subjects play a dominant strategy. Note that each level necessarily contains the subsequent level. Subjects are rematched randomly across a group of 20 each period.

In Andreoni et al. (2007), groups of four agents sequentially play three simultaneous games in each round for thirty rounds. Groups are rematched each round and play an auction game with the same values but increasing precision of information about the other players. The first game involves no information about the other players’ valuations beyond the distribution from which they are drawn. The final game involves complete information. These authors run separate sessions with the first-price auction and the second-price auction.

Andreoni et al. (2007)'s experiment offers a remarkable opportunity to test a range of predictions of empirical equilibrium analysis. These authors' main objective is to experimentally evaluate the comparative statics predicted for the first-price auction by symmetric risk-neutral Nash equilibrium for a family of information structures introduced by Kim and Che (2004). As is usual in experimental studies, these authors complemented their design by running, as a placebo treatment, the same experiments for the second-price auction, a mechanism for which the dominant strategy hypothesis predicts that the information structure should have no effect on agents' behavior. Thus, inadvertently, these authors designed an ideal experiment to test the effect of information on the operation of a bossy strategy-proof scf that has unique dominant strategies.

One can argue that frequencies of play in all treatments accumulate towards a Nash equilibrium. Fig. 1 shows the proportion of outcomes where all subjects in a group play best responses (dark gray), where the subject with the highest valuation obtains the object at the second highest valuation (light gray), and where all subjects play dominant strategies (medium gray) in Andreoni et al. (2007) under full-support incomplete information (left) and complete information (right). [Footnote 14: We concentrate our analysis on the extreme information structures in Andreoni et al. (2007)'s design, for which Theorems 1 and 2 produce sharp predictions.] In both cases, virtually all subjects are playing mutual best responses to the population of subjects in the second half of the experiment. Note that the frequencies of best response play plotted in Fig. 1 are the percentage of groups in which all four agents end up playing a best response to each other. Even when this percentage is 80%, individual rates of best response play are about 95%.

Empirical equilibrium analysis reveals that behavior that is weakly payoff monotone and approximates mutual best responses in this experiment will necessarily have certain characteristics. For the second-price auction if information is interior, as in the first information treatment, this type of behavior can only approximate a truthful equivalent Nash equilibrium. If information is complete, as in the last information treatment, this type of behavior can accumulate towards a Nash equilibrium in which the lower value agents randomize with positive probability. Both phenomena are supported by the data.

Fig. 2 allows us to understand behavior in both information structures. The figure standardizes bids to valuations (the highest valuation is assigned a value of 4, the second highest a value of 3, and so on) and shows the median bid and the range between the 15th and 85th percentiles of bids for each of the four ranked valuation types. In both treatments the median bid for any of the four types generally falls on its respective valuation, consistent with dominant strategy play.

In the full-support incomplete information treatment, agents' deviations from their dominant strategies do not induce consequential deviations from the truthful equilibrium. After the initial five rounds, median bids are the agents' own values (Fig. 2 (a)). In the last twenty five rounds, 74.4% of outcomes are truthful (Fig. 2 (a)); 97.2% of outcomes are efficient, i.e., such that a highest valuation agent wins the auction (Fig. 3 (a)); in 94.4% of outcomes the price is determined by the bid of a second valuation agent; and on average the price paid by the winner differs by 1.188 points from the second highest valuation (Fig. 3 (b)). Thus, the mechanism is arguably achieving the social planner's objectives. It is virtually assigning the good to a highest valuation agent and it is essentially raising revenue equal to the second highest valuation.

Figure 2: Median bid and 15th-85th percentile range by valuation type in 4-person, second-price auctions of Andreoni et al. (2007) under incomplete (a, left) and complete (b, right) information. Bids are standardized so that the 1st-4th valuations in the specific auction are assigned values 4–1, respectively. Bids of 100 (the highest possible valuation) and 200 (the highest possible bid) are assigned values of 5 and 6, respectively. If two valuation types have the same value, valuation order is randomly assigned. A bid $b$ strictly between two valuations is standardized as $s+(b-v)/(v'-v)$, where $v$ is the highest valuation the bid exceeds, $s$ is the standardized value of $v$, and $v'$ is the next highest valuation. Bids below the lowest valuation are standardized on the interval between 0 and the lowest valuation. Bids above the highest valuation are standardized either on the interval between the highest valuation and 100 (values of 4–5), or 100 and 200 (values of 5–6). For example, for the four valuations 80, 40, 25, 10, bids of 150, 40, 30, and 5 would be 5.5, 3, 2.33, and 0.5, respectively.

In the complete information treatment, after five rounds median bids are also the agents' own values (Fig. 2 (b)). Differently from the incomplete information case, deviations from truthful behavior do not dissipate and are consequential. In the last twenty five rounds, 38.4% of outcomes are truthful (Fig. 2 (b)); 91.6% of outcomes are efficient, i.e., such that a highest valuation agent wins the auction (Fig. 3 (a)); in 68.4% of outcomes the price is determined by the bid of a second valuation agent; and on average the price paid by the winner differs by 8.704 points from the second highest valuation (Fig. 3 (b)). [Footnote 15: Andreoni et al. (2007) only report two sessions under the second-price auction. Each features a within-session comparison of these two information structures. Because there are only two paired comparisons at the session level, non-parametric tests cannot show these differences to be significant. At the subject level, they are significantly different for a variety of non-parametric and parametric tests.] Thus, even though the mechanism is assigning the good to the right agent, it is raising a revenue that is persistently away from the social planner's objective.

A simple reason explains the differences in behavior between treatments. Under incomplete information there is a penalty for a player who deviates too much from his/her dominant strategy. There is no corresponding penalty under complete information. As long as a second or lower valuation player does not outbid the first, her payoff will be zero regardless. Together these experiments reveal that agents do react to pecuniary incentives and use information and observed frequencies of play of the other agents in a meaningful way. They do not preemptively react to a hypothetical tremble of the other agents, however. In the complete information case the highest valuation agent persistently overbids and the other agents persistently bid on a wide range under the highest valuation agent's value. As long as these behaviors are essentially separated, they are mutual best responses. On the other hand, in the incomplete information treatment, for each bid, there is a positive probability that at least one agent draws that bid as her valuation. Since agents bid their values with high probability (68.2% on average), there is a non-trivial chance that a significant deviation from truthful behavior is suboptimal. Thus, agents take into account a potential loss in utility, but only when there is an actual significant probability of it being realized. [Footnote 16: Since the first experiments on second-price auctions with private values of Coppinger et al. (1980) and Kagel and Levin (1993), experimental economists have observed that even though agents do not play their dominant strategy in these games, the probability with which they would have ended up disciplined by the market, given what the others are doing, is very low. Our analysis goes beyond this observation by showing that, as predicted by empirical equilibrium analysis, the degree to which these deviations are consequential is linked to the non-bossiness properties of the scf and the information structure.]

Figure 3: Frequency of efficient outcomes (left) and average distance between the price and a second valuation (right) in the second-price auction experiments of Andreoni et al. (2007) in the full-support incomplete and complete information treatments.

6.3 Payoff monotonicity

One of the advantages of empirical equilibrium analysis is that it is based on an observable property of behavior, not on a thought experiment. That is, the conclusions of our theorems will hold whenever empirical distributions are weakly payoff monotone. Thus, evaluating the extent to which agents' frequencies of play satisfy this property allows us to better understand the positive content of our theory.

Evaluating weak payoff monotonicity is an elusive task, however. In realistic games such as those in the experiments we surveyed, action spaces and type spaces are large (e.g., Attiyeh et al., 2000 has 2001 actions). This makes the data requirements for fully testing payoff monotonicity unrealistic. It is plausible that data can point to differences in frequencies of play between two given actions for a certain agent type. In order to test that this is consistent with weak payoff monotonicity one would need to verify that the expected payoffs of these actions, given what the other agents are doing, are ranked in accordance with the frequencies of play of these actions. Doing so requires that one has a good estimate of the whole distribution of play for all agent types.

Even though fully testing weak payoff monotonicity is not feasible with realistic data sets, one can test for certain markers of this property that are less demanding on data. First, in weakly payoff monotone data sets there should be a positive association between the frequencies with which actions are played and their empirical expected utility. For the four studies where we have sufficient data (Andreoni et al., 2007; Attiyeh et al., 2000; Cason et al., 2006; Li, 2017), we can compare the actual payoffs earned with each action choice with the counterfactual payoffs had a subject chosen a different action. If subjects choose actions independent of payoffs—a gross violation of weak monotonicity—we should suspect the differences between the average payoffs of played strategies and counterfactual payoffs of non-played strategies to be evenly distributed around zero. Instead we find in all cases the average payoffs of played strategies exceed those of non-played strategies. [Footnote 17: Using a conditional-logistic regression also produces positive coefficients in all cases. It also assumes a specific formalized structure on subject choice, making it a less general test.] Treating the 30 total sessions across these four studies as independent observations, we can easily reject the null hypothesis that strategies are played independent of expected payoffs. [Footnote 18: Specifically, in 30 out of 30 sessions the average strategy subjects played in a round had higher expected payoffs than those they didn't play. If we exclude all instances where subjects played a dominant strategy, this result holds in 28 out of 30 sessions.]
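The marker test described in this subsection can be sketched as follows; the data below are hypothetical, not the experimental data sets, and the payoff function reuses the second-price auction example with the valuations assumed in Sec. 3.

```python
from statistics import mean

# Hypothetical data (ours): each record is (chosen bid, opponent's bid) for a subject whose
# value corresponds to type M in the second-price auction example (assumed values 0, 1/2, 1).
BIDS = ("H", "M", "L")
V = {"L": 0.0, "M": 0.5, "H": 1.0}
MY_TYPE = "M"

def payoff(my_bid, other_bid):
    if V[my_bid] > V[other_bid]:
        return V[MY_TYPE] - V[other_bid]
    if V[my_bid] < V[other_bid]:
        return 0.0
    return (V[MY_TYPE] - V[my_bid]) / 2            # uniform tie-breaking

data = [("M", "H"), ("M", "L"), ("L", "H"), ("M", "M"), ("H", "L")]   # illustrative choices

played = [payoff(chosen, other) for chosen, other in data]
counterfactual = [payoff(b, other) for chosen, other in data for b in BIDS if b != chosen]

# The marker described above: played actions should on average outperform the
# counterfactual payoffs of the actions that were not played.
print(mean(played), mean(counterfactual))          # 0.2 0.1 for this illustrative data
```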

Not all features of the data are in line with weak payoff monotonicity, however. We are aware of three such features. First, in the Pivotal mechanism experiment of Cason et al. (2006), there are two dominant strategies for each agent. While the Column agent chooses them with similar frequencies (36.1% and 38.3%), the Row agent chooses them with frequencies 51.1% and 19.4%. Parametric paired t-tests and non-parametric signed-rank and sign tests suggest the latter difference is statistically significant at the subject level, but not the former. Second, there is a well-documented propensity to overbid in second-price auctions. This need not be at odds with weak payoff monotonicity. Agents who draw larger values will find overbidding with respect to their value a less costly mistake than underbidding, and low-value agents will place fewer bids below their value than above it. Thus, such an agent's distribution of play can still be weakly payoff monotone even if, in aggregate, overbidding is more common than underbidding. However, Figure 1 in Andreoni et al. (2007), which depicts the frequency of the difference between the bids of the low-value agents and the maximal value, shows that these agents place significantly higher weight on bids that are close to the maximal value. This is a clear violation of weak payoff monotonicity, which, as Andreoni et al. (2007) argue, may originate in spiteful behavior of the low-value agents. Finally, a simple behavioral regularity such as rounding to multiples of five can easily induce violations of weak payoff monotonicity (such patterns are present in the auction data of Andreoni et al., 2007; Brown and Velez, 2019; Li, 2017, for instance).
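The overbidding argument above can be made concrete with a small Monte Carlo computation. The value, the size of the deviation, and the assumption that a single opponent bids her value, drawn uniformly on [0, 1], are illustrative choices and are not taken from any of the cited experiments.

```python
import random

def expected_payoff(value, bid, n_draws=200_000, seed=0):
    """Monte Carlo expected payoff in a second-price auction against one
    opponent who bids her value, drawn uniformly on [0, 1]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        opp = rng.random()
        if bid > opp:           # we win and pay the opponent's bid
            total += value - opp
    return total / n_draws

v, delta = 0.9, 0.2
truthful = expected_payoff(v, v)
print("loss from overbidding :", truthful - expected_payoff(v, v + delta))  # about 0.005
print("loss from underbidding:", truthful - expected_payoff(v, v - delta))  # about 0.020
```

With a bounded value support, part of the overbidding region is truncated away, which is what makes the overbid the cheaper mistake for a high-value bidder in this illustration.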

In order to evaluate the positive content of empirical equilibrium analysis, it is necessary to understand the consequences for our analysis of these and other possible violations of weak payoff monotonicity. One avenue is to reconsider our construction and restart from a more basic principle than weak payoff monotonicity. Observe that this property can be stated in its contrapositive form as follows: if between two actions, say $a$ and $b$, an agent's expected utility of $a$ given what the others are doing is greater than or equal to that of $b$, then the frequency with which the agent plays $a$ should be no less than the frequency with which the agent plays $b$. Stated in this form, the property can be naturally weakened as follows. One can require the existence of some positive constant such that for any two actions available to an agent, say $a$ and $b$, if the expected utility of $a$ given what the others are doing is greater than or equal to that of $b$, then the frequency with which the agent plays $a$ should be no less than that constant times the frequency with which the agent plays $b$. One can verify that all our results go through if we take this weaker property as the basis for plausibility. It is interesting in itself that such a weak property still places empirical restrictions on data. Moreover, the main message of empirical equilibrium analysis in other applications, such as full implementation, is also preserved under this generalization (Velez and Brown, 2019b).
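A minimal sketch of the weakened restriction follows; the constant, labeled gamma below, is our own name for the unnamed constant in the text, and setting gamma = 1 recovers the contrapositive form of weak payoff monotonicity.

```python
def satisfies_gamma_monotonicity(freq, exp_util, gamma, tol=1e-9):
    """Check the weakened restriction: whenever action a is (weakly) better
    than action b in expectation, a must be played with frequency at least
    gamma times that of b."""
    actions = list(freq)
    for a in actions:
        for b in actions:
            if a != b and exp_util[a] >= exp_util[b] - tol:
                if freq[a] < gamma * freq[b] - tol:
                    return False
    return True

freq = {"a": 0.15, "b": 0.85}   # the better action is played less often...
util = {"a": 2.0, "b": 1.0}
print(satisfies_gamma_monotonicity(freq, util, gamma=1.0))  # False: weak payoff monotonicity fails
print(satisfies_gamma_monotonicity(freq, util, gamma=0.1))  # True: the weaker restriction holds
```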

We prefer to maintain the analysis based on weak payoff monotonicity because it strikes a balance: it provides substantial regularity while being challenged only by phenomena that (i) do not seem universally relevant, and (ii) seem to induce only continuous violations of this principle. Agents may round their bids, may be attracted by labels attached to certain actions, may exhibit other-regarding preferences in certain contexts, and so on. In the end, what matters for empirical equilibrium analysis is that these features of behavior are part of a bigger scheme in which agents are, to a significant extent, trying to hit their best payoffs given what the other agents are doing. By analyzing what happens when these less well understood effects are absent, we obtain a powerful benchmark that produces policy-relevant comparative statics.

Finally, an emerging empirical literature on strategy-proof mechanisms presents evidence in line with the predictions of empirical equilibrium. In empirical data, in which payoff types are not observable, it is of course challenging to determine what constitutes a deviation from truthful behavior. In some instances, however, the researcher is able to identify dominated actions, such as an agent refusing to apply for financial support when this does not influence her acceptance to an academic position (Hassidim et al., 2016), or by means of ex-post surveys (Rees-Jones, 2017). The common finding is that these types of reports are observed with positive probability. However, in line with our results, they are more common among the agents for whom they are less likely to be consequential (Hassidim et al., 2016; Rees-Jones, 2017; Artemov et al., 2017; Chen and Pereyra, 2018).

7 Robust mechanism design and revelation principle

One can draw an informative parallel between our results and the literature on robust full implementation of scfs (Bergemann and Morris, 2005). This literature articulates the idea that the designer should look for mechanisms that operate well independently of informational assumptions. Of course, one's judgement about this depends on the prediction that one uses. The news is as follows if one considers the Nash equilibrium prediction.[19] In a private values model, without imposing common prior discipline, very little can be done (Bergemann and Morris, 2011; Adachi, 2014). On the other hand, if one aims at obtaining the right outcomes at least when agents consider themselves mutually possible, which covers each possible realization in each common prior payoff-type space, the mechanisms characterized in Theorem 3 still do the job (Adachi, 2014).

[19] One can even go further and require this type of robustness for all realizations of agents' types in type spaces with no rational expectations à la Bergemann and Morris (2005).

Theorem 3.

Let $f$ be an scf. The following are equivalent.

  1. There is a finite mechanism $(M,g)$ such that for each possible common prior $\mu$, each Bayesian Nash equilibrium $\sigma$ of the game induced by $(M,g)$ and $\mu$, each payoff type profile $\theta$ in the support of $\mu$, and each message profile $m$ in the support of $\sigma(\theta)$, $u_i(g(m),\theta_i)=u_i(f(\theta),\theta_i)$ for each agent $i$.

  2. (i) $f$ is strategy-proof and non-bossy in welfare-outcome, and (ii) $f$ satisfies the outcome rectangular property, i.e., for each pair of payoff types $\theta$ and $\theta'$, if for each $i$, $u_i(f(\theta'_i,\theta_{-i}),\theta_i)=u_i(f(\theta),\theta_i)$, then $f(\theta')=f(\theta)$.

Theorem 3 allows us to make a precise comparison of Theorems 1 and 2 with the literature on robust implementation. A parallel statement to this result is due to Saijo et al. (2007) and Adachi (2014), in an environment in which they restrict attention to pure-strategy equilibria and consider implementation for type spaces larger than our payoff type space. Our statement includes mixed-strategy equilibria and does not impose any requirement on type spaces in which payoff types can be "cloned." Thus, the results of Saijo et al. (2007) and Adachi (2014) do not trivially imply Theorem 3 by means of the purification argument of Bergemann and Morris (2011, Sec. 6.3). The proof of Theorem 3 can be completed by adapting the arguments in these papers, however. We include it in an online Appendix.
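For a finite environment, conditions like those in statement 2(i) can be verified by brute force. The sketch below does so for a hypothetical two-agent, two-type environment; the utility table, the scf, and the reading of non-bossiness in welfare-outcome used in the code (a unilateral change of report that leaves the deviator's welfare unchanged must leave every agent's welfare unchanged) are our own illustrative choices.

```python
from itertools import product

# Hypothetical finite environment: two agents, two payoff types each,
# two outcomes, and a simple dictatorship by agent 1.
TYPES = {1: ["L", "H"], 2: ["L", "H"]}

def u(outcome, own_type):
    """Utility of an outcome for an agent whose payoff type is own_type."""
    return {("a", "L"): 1.0, ("b", "L"): 0.0,
            ("a", "H"): 0.0, ("b", "H"): 1.0}[(outcome, own_type)]

def f(profile):
    """The scf: agent 1 is a dictator and gets her top outcome."""
    return "a" if profile[1] == "L" else "b"

def profiles():
    for t1, t2 in product(TYPES[1], TYPES[2]):
        yield {1: t1, 2: t2}

def is_strategy_proof():
    for theta in profiles():
        for i in TYPES:
            for report in TYPES[i]:
                dev = dict(theta)
                dev[i] = report
                if u(f(dev), theta[i]) > u(f(theta), theta[i]):
                    return False
    return True

def is_non_bossy_in_welfare_outcome():
    # If a unilateral change of report leaves the deviator's welfare (at her
    # true type) unchanged, it must leave every agent's welfare unchanged.
    for theta in profiles():
        for i in TYPES:
            for report in TYPES[i]:
                dev = dict(theta)
                dev[i] = report
                if u(f(dev), theta[i]) == u(f(theta), theta[i]):
                    if any(u(f(dev), theta[j]) != u(f(theta), theta[j])
                           for j in TYPES):
                        return False
    return True

print(is_strategy_proof(), is_non_bossy_in_welfare_outcome())  # True True
```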

As mentioned in the introduction, the conditions in Theorem 3 are quite restrictive (cf. Saijo et al., 2007; Bochet and Sakai, 2010; Fujinaka and Wakayama, 2011). The outcome rectangular property is responsible for a large part of these restrictions (Table 2).

scf                  | Strategy proofness | Essentially unique dominant strategies | Non-bossiness in welfare-outcome | Outcome rectangular property
TTC                  | Yes                | Yes                                    | Yes                              | No
Uniform rule         | Yes                | Yes                                    | Yes                              | No
Median voting        | Yes                | Yes                                    | Yes                              | No
Second price auction | Yes                | Yes                                    | No                               | No
Pivotal              | Yes                | Yes                                    | No                               | No
SPDA                 | Yes                | Yes                                    | No                               | No

Table 2: Strategy-proof scfs and the outcome rectangular property.

Thus, the aim of designing mechanisms that produce, in all Nash equilibria for all information structures, only the desired outcomes may be unnecessarily pessimistic. None of the mechanisms in Table 2 pass this test. However, if one already believes that a Nash equilibrium will be a good prediction when the mechanism is operated, it is enough to be concerned with the Nash equilibria that are plausible. By Theorem 1, TTC, the Uniform rule, and median voting pass this more realistic test for all common prior type spaces. By Theorem 2, the second-price auction, the Pivotal mechanism, and SPDA pass the test for all full-support common prior type spaces.

It is worth noting that statement 1 in Theorem 3 is quantified over all finite mechanisms, while statement 1 in Theorem 1 only refers to the direct revelation game of the scf. It turns out that whenever statement 1 in Theorem 3 is satisfied by some mechanism for an scf, it is also satisfied by the scf’s direct revelation mechanism (Saijo et al., 2007). This means that a “revelation principle” holds for this type of implementation.

It is not clear that a revelation principle holds when empirical equilibrium is one's prediction in these games. That is, we do not know whether there is a strategy-proof scf that violates non-bossiness in welfare-outcome for which there is a mechanism with the properties in statement 1 of Theorem 1. The issue is interesting and subtle.

It is known that the restriction to direct revelation mechanisms is not without loss of generality for full implementation. That is, dominant strategy full implementation may require richer message spaces than the payoff type spaces (Dasgupta et al., 1979; Repullo, 1985). Strikingly, Repullo (1985) constructs a finite social choice environment that admits a strategy-proof social choice function whose direct revelation game, for a certain type profile, has a dominant strategy equilibrium that Pareto dominates the outcome selected by the scf for that type profile. Moreover, the social choice environment in this example also admits a mechanism that implements the social choice function in dominant strategies.

By Lemma 1 we know that a dominant strategy profile in a game will always be observed with positive probability in each empirical equilibrium of the game.[20] Thus, Repullo (1985)'s concern that undesirable outcomes of a direct revelation game for a strategy-proof scf (in this case, dominant strategy equilibrium outcomes) may be empirically plausible is well founded. As Repullo (1985) proves, it is possible to enlarge the message space and tighten the incentives for the selection of a particular outcome so that the desired outcome is the only dominant strategy outcome. It turns out that this type of message space enlargement, i.e., one that retains the existence of dominant strategies, will not resolve the issue.

[20] Observe also that, by Theorem 1, Repullo (1985)'s scf necessarily violates non-bossiness in welfare-outcome.

Theorem 4 (Revelation principle for dominant strategy finite mechanisms).

Let $f$ be an scf. The following are equivalent.

  1. There is a finite mechanism $(M,g)$ for which each agent type has a dominant strategy and such that for each possible common prior $\mu$, each empirical equilibrium $\sigma$ of the game induced by $(M,g)$ and $\mu$, each payoff type profile $\theta$ in the support of $\mu$, and each message profile $m$ in the support of $\sigma(\theta)$, $u_i(g(m),\theta_i)=u_i(f(\theta),\theta_i)$ for each agent $i$.

  2. For each common prior $\mu$ and each empirical equilibrium of the direct revelation game of $f$ under $\mu$, say $\sigma$, we have that for each pair $(\theta,m)$ where $\theta$ is in the support of $\mu$ and $m$ is in the support of $\sigma(\theta)$, $u_i(f(m),\theta_i)=u_i(f(\theta),\theta_i)$ for each agent $i$.

  3. $f$ is strategy-proof and non-bossy in welfare-outcome.

Theorem 4 implies that it is impossible to obtain, by means of a dominant strategy mechanism, robust implementation in empirical equilibrium of a social choice correspondence that violates non-bossiness in welfare-outcome. It is worth noting that enlarging the message space of the direct revelation game of a strategy-proof scf that violates non-bossiness in welfare-outcome may have a meaningful effect on the performance of the mechanism, even when one preserves the existence of dominant strategies.

Example 1.

Consider an environment with two agents whose payoff type spaces are and . There are two possible outcomes ; and , , and . Suppose that a social planner desires to implement the efficient dictatorship in which agent gets her top choice. One can easily see that for any common prior , for each empirical equilibrium of , say , agent with payoff type uniformly randomizes in . Thus, in each empirical equilibrium of , agent always achieves her top choice and agent receives her top choice with probability when this does not conflict with agent ’s preferences. Suppose now that the social planner uses mechanism defined as follows: , where , , and for each , . One can see easily that in each empirical equilibrium of , agent always achieves her top choice and agent receives her top choice with probability when this does not conflict with agent ’s preferences.

Finally, our results hold only for single-valued sccs. Bergemann and Morris (2005, Example 2) show that "partial" robust implementation can be achieved for an scc that does not possess any strategy-proof single-valued selection. The same argument allows one to construct an environment and a mechanism that robustly implements in empirical equilibrium an scc that has no strategy-proof selection (see Example 2 in the Appendix).

8 Conclusion

We have presented theoretical evidence that strategy-proof mechanisms are not all the same. Our analysis is based on empirical equilibrium, a refinement of Nash equilibrium that is based only on observables. It selects the Nash equilibria that are not rejected as implausible by at least one model of behavior disciplined by weak payoff monotonicity. We draw two main conclusions under the hypothesis that observable behavior satisfies this property. First, behavior from the operation of a strategy-proof and non-bossy in welfare-outcome scf will never approximate a sub-optimal Nash equilibrium. Second, if the mechanism violates the non-bossiness condition but has essentially unique dominant strategies, then behavior can approximate a sub-optimal equilibrium only if information is not interior. These predictions are supported by experimental data on multiple mechanisms. The weak payoff monotonicity hypothesis fares well in the data, but violations of it can be spotted in particular environments. These violations do not hinder the main conclusions of our study, however.

Our results can be interpreted as positive developments in the theory of mechanism design. The existence of strategy-proof mechanisms is difficult to obtain in itself, and many of them do not pass the higher bars set by other approaches (e.g., Saijo et al., 2007; Li, 2017). Instead of trying to redesign strategy-proof mechanisms, we tried to understand them better. Our results then allowed us to come to terms with the experimental data that go against the dominant strategy hypothesis. Essentially, we learned that even though behavior in strategy-proof mechanisms may not quickly converge to a truthful equilibrium, many of these mechanisms (those that are non-bossy in welfare-outcome) will likely never get stuck in a sub-optimal self-enforcing state, and most of these mechanisms (those with essentially unique dominant strategies) will have this problem only for corner information structures.

Appendix

Proof of Lemma 1.

Let $\Gamma$, $\sigma$, the agent $i$, the type $\theta_i$, and the dominant action $d_i$ be as in the statement of the lemma. Consider a sequence of weakly payoff monotone distributions for $\Gamma$, $\{\sigma^k\}_{k\in\mathbb{N}}$, such that for each $j$ and each $\theta_j$, $\sigma^k_j(\cdot\,|\,\theta_j)\to\sigma_j(\cdot\,|\,\theta_j)$ as $k\to\infty$. Since $d_i$ is a dominant action for agent $i$ with type $\theta_i$ in $\Gamma$, for each $k$ and each action $a_i$,
$$U_i(d_i,\sigma^k_{-i}\,|\,\theta_i)\ \geq\ U_i(a_i,\sigma^k_{-i}\,|\,\theta_i).$$
Since $\sigma^k$ is weakly payoff monotone for $\Gamma$, we have that for each $a_i$,
$$\sigma^k_i(d_i\,|\,\theta_i)\ \geq\ \sigma^k_i(a_i\,|\,\theta_i).$$
Convergence implies that for each $a_i$,
$$\sigma_i(d_i\,|\,\theta_i)\ \geq\ \sigma_i(a_i\,|\,\theta_i),$$
and since these probabilities add up to one over the finite set of actions, $\sigma_i(d_i\,|\,\theta_i)>0$. Thus, $d_i$ is in the support of $\sigma_i(\cdot\,|\,\theta_i)$. ∎

Proof of Theorem 1.

Suppose that $f$ is strategy-proof and non-bossy in welfare-outcome. Let $\mu$ be a common prior and $\sigma$ an empirical equilibrium of the direct revelation game of $f$ under $\mu$. Let $\theta$ be a payoff type profile in the support of $\mu$, and let $m$ be in the support of $\sigma(\theta)$, i.e., $m$ is a report profile that is observed with positive probability when the true types are $\theta$. Let $i$ be an agent, and let $\pi_i$ denote the distribution of the other agents' reports conditional on agent $i$'s type being $\theta_i$, induced by $\mu$ and $\sigma_{-i}$. Since $m_i$ is in the support of $\sigma_i(\theta_i)$ and $\sigma$ is a Bayesian Nash equilibrium, we have that

$$\int u_i(f(m_i,r_{-i}),\theta_i)\,d\pi_i(r_{-i})\ \geq\ \int u_i(f(\theta_i,r_{-i}),\theta_i)\,d\pi_i(r_{-i}).$$

Since $f$ is strategy-proof, the integrand of the expression on the right dominates pointwise the integrand of the expression on the left. Thus, the integrands are equal on the support of the common integrating measure. Notice that since $\theta$ is in the support of $\mu$ and $m$ is in the support of $\sigma(\theta)$, agent $i$ assigns positive probability to the event that the other agents' profile of reports is $m_{-i}$. Thus,

$$u_i(f(m),\theta_i)\ =\ u_i(f(\theta_i,m_{-i}),\theta_i).$$

Since $f$ is non-bossy in welfare-outcome,

$$u_j(f(m),\theta_j)\ =\ u_j(f(\theta_i,m_{-i}),\theta_j)\quad\text{for each agent } j. \qquad (1)$$

By Lemma 1, $\theta_i$ is in the support of $\sigma_i(\theta_i)$. Thus, $(\theta_i,m_{-i})$ is in the support of $\sigma(\theta)$. Applying this argument recursively, replacing one coordinate of $m$ at a time by the corresponding truthful report, shows that $u_j(f(m),\theta_j)=u_j(f(\theta),\theta_j)$ for each agent $j$.

Suppose that for each common prior there is a Bayesian Nash equilibrium of the direct revelation game of $f$ that, for each type profile in the support of the prior, obtains the outcome prescribed by $f$ with probability one. It is well known that this implies $f$ is strategy-proof (Dasgupta et al., 1979; Bergemann and Morris, 2005).[21] We now prove that if $f$ is strategy-proof and statement 1 in the theorem holds, then $f$ is non-bossy in welfare-outcome. The proof is by contradiction. Suppose that $f$ violates non-bossiness in welfare-outcome. Then there are $\theta$, $i$, and $\theta_i'$ such that $u_i(f(\theta_i',\theta_{-i}),\theta_i)=u_i(f(\theta),\theta_i)$ and, for some agent $j$, $u_j(f(\theta_i',\theta_{-i}),\theta_j)\neq u_j(f(\theta),\theta_j)$. For simplicity we will assume that . We construct a complete information structure and an empirical equilibrium of the revelation game of $f$ that is not truthful equivalent. In order to do so we make use of the so-called Quantal Response Equilibria (McKelvey and Palfrey, 1995), which are weakly payoff monotone distributions.

[21] This can be easily seen by analyzing, for each type profile and each agent, an appropriately chosen common prior. See Theorem 2 for a slightly stronger result where this is obtained for interior common priors.

A quantal response function for agent $i$ in type space $\Theta_i$ is a continuous function $Q_i:\Theta_i\times\mathbb{R}^{A_i}\to\Delta(A_i)$. For each type $\theta_i$, each vector of expected payoffs $u\in\mathbb{R}^{A_i}$, and each action $a\in A_i$, $Q_i(\theta_i,u)(a)$ denotes the probability assigned to $a$ by $Q_i(\theta_i,u)$. We refer to the list $Q\equiv(Q_i)_{i\in N}$ as a quantal response function for type space $\Theta$. Agent $i$'s quantal response function is monotone if, for each $\theta_i$, each $u\in\mathbb{R}^{A_i}$, and each pair $a,b\in A_i$ such that $u_a\geq u_b$, $Q_i(\theta_i,u)(a)\geq Q_i(\theta_i,u)(b)$ (Goeree et al., 2005).

The (type-independent) logistic quantal response function with parameter $\lambda\geq 0$, denoted by $l^\lambda$, assigns to each $u\in\mathbb{R}^{A_i}$ and each $a\in A_i$ the value

$$l^\lambda(u)(a)\ =\ \frac{e^{\lambda u_a}}{\sum_{b\in A_i} e^{\lambda u_b}}. \qquad (2)$$

It can easily be checked that for each $\lambda\geq 0$, the corresponding logistic quantal response function is continuous and monotone (McKelvey and Palfrey, 1995). A quantal response equilibrium of $\Gamma$ with respect to a quantal response function $Q$ is a fixed point of the composition of $Q$ and the expected payoff operator in $\Gamma$ (McKelvey and Palfrey, 1995), i.e., a profile of type-contingent distributions $\sigma$ such that for each $i$ and each $\theta_i$, $\sigma_i(\cdot\,|\,\theta_i)=Q_i(\theta_i,U_i(\cdot,\sigma_{-i}\,|\,\theta_i))$. Brouwer's fixed point theorem guarantees that for each continuous quantal response function there is a quantal response equilibrium associated with it (McKelvey and Palfrey, 1995). One can easily see that if the quantal response function is monotone, each of its quantal response equilibria is weakly payoff monotone.
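As an illustration, the following sketch computes a logistic quantal response equilibrium of a two-player game by damped fixed-point iteration, in the spirit of (2); the payoff matrices, the values of the parameter, and the damping scheme are illustrative choices and not part of the proof.

```python
import numpy as np

def logit(util, lam):
    """Logistic quantal response (2): probabilities proportional to exp(lam * utility)."""
    z = np.exp(lam * (util - util.max()))   # subtract max for numerical stability
    return z / z.sum()

def logistic_qre(A, B, lam, iters=10_000, damp=0.5):
    """Approximate a fixed point of the composition of the logit response and the
    expected payoff operator in a two-player game with payoff matrices A (row
    player) and B (column player)."""
    p = np.full(A.shape[0], 1 / A.shape[0])   # row player's mixed strategy
    q = np.full(A.shape[1], 1 / A.shape[1])   # column player's mixed strategy
    for _ in range(iters):
        p = (1 - damp) * p + damp * logit(A @ q, lam)
        q = (1 - damp) * q + damp * logit(B.T @ p, lam)
    return p, q

# Illustrative game: both players have a weakly dominant first action.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])
for lam in (1, 5, 50):
    p, q = logistic_qre(A, B, lam)
    print(lam, p.round(3), q.round(3))
```

As the parameter grows, the computed profiles place nearly all mass on the weakly dominant first actions, consistent with Lemma 1.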

Consider the complete information prior that places probability one in . Let be a Nash equilibrium in which, for type , agents play all their dominant strategies with equal probability and agent places equal probability among her best responses. For each , and let be the quantal response function that for each , . Since is continuous and monotone, so is . Fix , , and . As ,

By monotonicity of , places maximal probability on the best responses to . Thus there is such that for each that is a best response for to , the distance between and is at most . Let be a sequence that diverges to . Consider a sequence of quantal response equilibria for the sequence of quantal response functions

Compactness of the simplex of probabilities implies that there is a convergent subsequence. Without loss of generality we assume then that the sequence of quantal response equilibria is convergent. Since each agent places probability at least in each action, then the limit of the convergent sequence is interior. Thus, for each agent different from , say , the limit probability is . For agent , since both parameters in her quantal response function are fixed in the sequence, the limit is . There is such that the max distance, between the limit and the corresponding quantal response equilibrium in the sequence is . Let be the corresponding quantal response equilibrium. Thus, for each that is a best response to for agent  with type , the distance between and is at most . Consider sequences , , and . Then, . Thus, for each , . Thus, is also convergent. Let be the limit of this sequence. For each that is a best response for to , we have that , where the strict inequality holds because is a dominant strategy for type and is an empirical equilibrium (see Lemma 1). Suppose that for each agent with type , there is a unique dominant strategy, i.e., . Thus, places positive probability in . Thus there is an empirical equilibrium that obtains with positive probability when the true type is . This contradicts statement 1. Thus, it remains to prove that we have a contradiction when there is at least one agent different from who has another dominant strategy than her true type at . Suppose that is a profile of dominant strategies for true types such that at least an agent is not truthfully reporting. Let , , and . Since is an empirical equilibrium, both and are realized with positive probability when the true type is . Since statement 1 holds, . By Lemma 1, in the complete information prior that places probability one in , the truthful report is realized with positive probability. Thus, for this prior both and are realized with positive probability. Since statement 1 holds, . Thus, is a best response to for agent . Since places positive probability on the best responses for agent with type to , we have that obtains with positive probability when the true type is . This contradicts statement 1. ∎

Proof of Theorem 2.

Suppose that statement 1 is satisfied. Then $f$ is strategy-proof (Bergemann and Morris, 2005, Proposition 3). We prove this here for completeness because we include mixed strategy equilibria in our analysis. Let , , and . Let . Consider the common prior that places probability on and on , and places uniform probability on all other payoff types. Thus, this prior has full support. Let be a Bayesian Nash equilibrium of such that for each and each message in the support of produces . Thus, a report in the support of has an expected value for type that is greater than or equal to that of a report in the support of , i.e.,

Since as , , we have that . Thus, is strategy-proof.

We claim that $f$ has essentially unique dominant strategies. Suppose by contradiction that there are , , , such that , , and for each , . Let have full support. Let be an empirical equilibrium of . Since $f$ is strategy-proof, is a weakly dominant action for agent with type and, for each , is a dominant strategy for agent with type . By Lemma 1, places positive probability on . This contradicts statement 1 in the theorem.

Suppose now that $f$ is strategy-proof and has essentially unique dominant strategies. Let have full support and be an empirical equilibrium of . Let . We prove that obtains with probability one. Let . Suppose that is in the support of . We claim that for each , . Since is a Bayesian Nash equilibrium,

Since $f$ is strategy-proof, the integrand of the expression on the right dominates pointwise the integrand of the expression on the left. Thus, the integrands are equal on the support of the common integrating measure. Since and since, by Lemma 1, the probability with which is realized for is positive, we have that