1 Introduction
When the effects of an agent’s actions also depend upon the actions of other self-interested agents, a fully rational agent must form probabilistic beliefs about the other agents’ actions. When all of the agents in an environment are fully rational, they must therefore model not only the actions of the other agents, but also the beliefs of the agents about each other that lead to those actions, and the beliefs about those beliefs, and so on. This motivates the predictions of classical game theory, in which all agents are assumed to be fully rational and therefore to have beliefs about each other’s actions, beliefs, and higher-order beliefs that are in some sense accurate [e.g., Shoham and Leyton-Brown, 2008, chapter 3]. The models of the other agents’ reasoning that drive those beliefs, however, are typically underspecified.
Full rationality is an unrealistic goal in most real applications. But the strong prescriptions of classical game theory start to break down when the agents are only boundedly rational. For example, if the other agents behave according to a fixed, known rule, then there is no need for even a fully rational agent to model their beliefs. In strategic environments containing boundedly rational agents, we can begin to distinguish between degrees of strategic behavior, which is informed by a model of the other agents, and nonstrategic behavior, which is not.
It is well known that models from classical game theory—especially Nash equilibrium—often predict poorly in environments that contain human agents. The field of behavioral game theory aims to develop models that more accurately describe human behavior in strategic environments, based on both field data and experimental data [e.g., Camerer, 2003]. A prominent class of models from the behavioral game theory literature are the iterative models, such as the level-k [Nagel, 1995; Costa-Gomes et al., 2001], cognitive hierarchy [Camerer et al., 2004], and quantal cognitive hierarchy models [Wright and Leyton-Brown, 2017]. In all of these models, each agent has a nonnegative integer level, which represents the degree of strategic reasoning (i.e., modeling of recursive beliefs) of which the agent is capable. Level-0 agents are nonstrategic—they do not model other agents’ beliefs or actions at all; level-1 agents model level-0 agents’ actions; level-2 agents model the beliefs and actions of level-1 agents; and so forth.
Much work in this space assumes uniform randomization for level-0 behavior; however, recent work has shown that model performance can be substantially improved by allowing richer level-0 specifications [Wright and Leyton-Brown, to appear]. This raises the question: how rich should these specifications be allowed to become, given that level 0 is meant to describe nonstrategic behavior? We adopt the view that a strategic agent is one whose behavior relies partly upon an explicit, probabilistic belief about the other agents’ actions, which is driven by a model of the other agents’ beliefs and actions. A nonstrategic agent is thus one whose actions do not depend upon such a model.
The question of how strategically an agent behaves is distinct from the question of how rationally an agent behaves—a fully rational agent in the presence of other fully rational agents must also be fully strategic, but an agent can behave more or less rationally for a given degree of strategic reasoning, and depending on the other agents, different degrees of strategic reasoning may be optimal, especially when strategic reasoning is costly. Nonstrategic behavior may be quite sophisticated. In our companion paper, we proposed a collection of nonstrategic decision rules as a foundation for iterative models. However, those rules were handcrafted. One might aspire to learn the decision rules directly from data, as in Hartford et al. [2016], or by searching over the class of nonstrategic decision rules. However, defining an appropriate class of functions to search over requires a formal definition that corresponds to our intuitions about what strategic reasoning is.
This paper is about constructing just such a formal definition. It is not sufficient to characterize strategic rules as those that can be represented as a best response to beliefs about the other agents, for two reasons. First, the class of strategic decision rules must be strictly more powerful than the class of nonstrategic decision rules, because a strategic agent must be able to compute beliefs about nonstrategic behavior on the part of the other agents. Put another way, strategic models are those that require a model of the other agents, not those that can merely be represented as a response to such a model.¹

¹ For example, consider the maxmax-payoff decision rule, which prescribes that an agent choose the action with the highest maximum payoff. This rule clearly does not require a model of the other agents’ actions. However, it can also be represented as a best response to the belief that the other agents will play in such a way as to make the maxmax-payoff action actually yield its maximum payoff.
In this work we instead take the opposite approach: we propose a characterization of nonstrategic decision rules, and show that it cannot represent strategic reasoning. We start from a general class of agent models—functions that describe how an agent might choose to play a given game. We seek to partition this class into strategic and nonstrategic agent models. We begin by defining elementary agent models. Intuitively, these are agent models that can be computed by first assigning a single number to each outcome, considered in isolation from all the others. In Section 4 we prove that none of the exemplar strategic solution concepts that we define in Section 3.2 can be computed in this form. We then show that combining the output from a finite collection of elementary agent models, while strictly more powerful, is still not able to compute any of the same strategic solution concepts. This latter class, finite combinations of elementary agent models, is our proposed mathematical characterization of nonstrategic agent models.
2 Related Work
There is broad agreement in the economics and artificial intelligence literatures that strategic environments are those which contain multiple independent agents with independent goals, each of whose payoffs depend upon the actions of more than one agent. However, possibly because of the equivalence in strategic environments between perfectly rational behavior and strategic behavior, there is much less agreement on what constitutes strategic behavior. In economics, it is common to use “strategic behavior” strictly to mean behavior in a strategic environment [e.g., Bernheim, 1984; Pearce, 1984]. In the algorithmic game theory and artificial intelligence communities, there is a broad range of meanings. Some work refers to strategic agents as those that take account of the effects of their own actions at all, and work to maximize their own utility over those effects [Walsh et al., 2000; Airiau and Sen, 2003; Bar-Isaac and Ganuza, 2005]. In this sense, there can be a distinction between nonstrategic and strategic agents even in single-agent environments. The most common notion of a strategic agent in these communities, however, is one who acts to maximize its own utility based on explicit probabilistic beliefs about the actions of the other agents [Roth and Ockenfels, 2002; Li and Tesauro, 2003; Babaioff et al., 2004; Lee, 2014; Gerding et al., 2011; Ghosh and Hummel, 2012; Grabisch et al., 2017]. Similarly, the most common notion of a nonstrategic agent is one who follows some fixed, known decision rule, often truth-telling [Sandholm and Lesser, 2001; Airiau and Sen, 2003; Li and Tesauro, 2003; Lee, 2014; Gerding et al., 2011; Grabisch et al., 2017].
In the behavioral game theory literature, there can be a distinction between degrees of strategic reasoning or behavior, especially in work that treats iterative models [e.g., Nagel, 1995; CostaGomes et al., 2001; Camerer et al., 2004; Crawford et al., 2010]. This distinction is less common in the artificial intelligence community, but Sandholm and Lesser [2001] do distinguish between strategic threshold strategies and Nash threshold strategies.
3 Background
We begin by briefly defining our formal framework and notation.
3.1 NormalForm Games
In this work, we focus on unrepeated, simultaneousmove normalform games. These games turn out to be perfectly general, in a mathematical sense: any game, including repeated games or dynamic “game trees”, can be represented as a normalform game.
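As a concrete illustration (a minimal sketch in Python with numpy; the example game and helper names are ours, not notation from this paper), a two-player normal-form game can be stored as one payoff matrix per agent, and expected utility under mixed actions is a bilinear form:

```python
import numpy as np

# Prisoner's Dilemma as an illustrative two-player normal-form game.
# Rows/columns index actions: 0 = cooperate, 1 = defect.
u_row = np.array([[3, 0],
                  [5, 1]])   # row player's utility for each action profile
u_col = u_row.T              # symmetric game: column payoffs are the transpose

def expected_utility(u, mix_row, mix_col):
    """Expected utility of a mixed action profile for the agent with payoffs u."""
    return np.asarray(mix_row) @ u @ np.asarray(mix_col)
```

For example, under uniform mixing by both players the row player’s expected utility is (3 + 0 + 5 + 1)/4 = 2.25.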
A normal-form game is defined by a tuple $(N, A, u)$, where $N$ is a finite set of agents; $A = \prod_{i \in N} A_i$ is the set of possible action profiles; $A_i$ is the finite set of actions available to agent $i$; and $u = \{u_i\}_{i \in N}$ is a set of utility functions $u_i : A \to \mathbb{R}$, each of which maps from an action profile to a utility for agent $i$. Agents may play stochastic actions, called mixed actions.² We denote the set of agent $i$’s mixed actions by $\Sigma_i = \Delta(A_i)$, and the set of possible mixed action profiles by $\Sigma = \prod_{i \in N} \Sigma_i$, where $\Delta(X)$ is the set of probability distributions over a finite set $X$. Overloading notation, we represent the expected utility to agent $i$ of a profile of mixed actions $\sigma \in \Sigma$ by $u_i(\sigma)$. We use the notation $\sigma_{-i}$ to refer to the profile of mixed actions of all agents except $i$, and $(\sigma_i, \sigma_{-i})$ to represent a full mixed action profile.

² We use this terminology rather than the more standard “mixed strategies” to avoid causing confusion by talking about the strategies of nonstrategic agents.

3.2 Solution Concepts
A solution concept is a mapping from a game to a mixed action profile (or set of mixed action profiles) that satisfies some criteria. Solution concepts can be interpreted descriptively, as a prediction of how agents will actually play a game. They can also be interpreted normatively, as a claim about how rational agents ought to play a game.³ We will primarily be concerned with these solution concepts as formalizations of strategic behavior in games.

³ These two senses frequently overlap, as it is common to assume that agents will play a game rationally.
The foundational solution concept in game theory, and by far the most commonly used, is the Nash equilibrium.
Definition 1 (Nash equilibrium).
Let $BR_i(\sigma_{-i})$ denote the set of agent $i$’s best responses to a mixed action profile $\sigma_{-i}$. A Nash equilibrium is a mixed action profile in which every agent simultaneously best responds to all the other agents. Formally, $\sigma$ is a Nash equilibrium if

$$\sigma_i \in BR_i(\sigma_{-i}) \quad \text{for all } i \in N.$$
When agents play a Nash equilibrium, they must randomize independently. A correlated equilibrium relaxes this requirement, and allows for joint distributions of actions that are correlated.
Definition 2 (Correlated equilibrium).
A correlated equilibrium is a distribution $\sigma$ over action profiles $A$ which satisfies the following for every agent $i$ and every mapping $\delta_i : A_i \to A_i$:

$$\sum_{a \in A} \sigma(a)\, u_i(a) \;\ge\; \sum_{a \in A} \sigma(a)\, u_i\big(\delta_i(a_i), a_{-i}\big).$$
Note that every Nash equilibrium $\sigma$ induces a correlated equilibrium $\sigma'(a) = \prod_{i \in N} \sigma_i(a_i)$.
One important idea from behavioral economics is that people are more likely to make errors when those errors are less costly. This can be modeled by assuming that agents best respond quantally, rather than via strict maximization.
Definition 3 (Quantal best response).
A (logit) quantal best response by agent $i$ to $\sigma_{-i}$ in game $G$ is a mixed action $\sigma_i$ such that

$$\sigma_i(a_i) = \frac{\exp[\lambda \cdot u_i(a_i, \sigma_{-i})]}{\sum_{a_i' \in A_i} \exp[\lambda \cdot u_i(a_i', \sigma_{-i})]}, \tag{1}$$

where $\lambda$ (the precision parameter) indicates how sensitive agents are to utility differences. When $\lambda = 0$, quantal best response is equivalent to uniform randomization; as $\lambda \to \infty$, quantal best response corresponds to best response in the sense that actions are played with positive probability only if they are best responses, i.e. $\lim_{\lambda \to \infty} \sigma_i(a_i) > 0$ only if $a_i \in BR_i(\sigma_{-i})$.
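Equation (1) is straightforward to compute. Here is a minimal numpy sketch (function and parameter names are ours) of logit quantal best response for a row player, using the usual max-subtraction trick for numerical stability:

```python
import numpy as np

def quantal_best_response(payoffs, opponent_mix, lam):
    """Logit quantal best response for the row player.

    payoffs: (m, n) array of the agent's utilities u_i(a_i, a_-i).
    opponent_mix: length-n distribution over the opponent's actions.
    lam: precision; lam = 0 yields uniform play, lam -> infinity
    approaches best response.
    """
    expected = payoffs @ np.asarray(opponent_mix)  # E[u_i(a_i)] per own action
    logits = lam * expected
    logits -= logits.max()                         # stabilize the exponentials
    weights = np.exp(logits)
    return weights / weights.sum()
```

At `lam=0` the output is uniform regardless of payoffs; at large `lam` nearly all mass lands on the best response.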
The generalization from best response to quantal best response gives rise to a generalization of Nash equilibrium known as the quantal response equilibrium (“QRE”) [McKelvey and Palfrey, 1995].
Definition 4 (QRE).
A quantal response equilibrium with precision $\lambda$ is a mixed action profile $\sigma$ in which every agent’s strategy is a quantal best response to the strategies of the other agents; i.e., $\sigma_i = QBR_i(\sigma_{-i}; \lambda)$ for all agents $i$.
The level-k model captures the insight from behavioral economics that humans can only perform a limited number of steps of strategic reasoning. Unlike the previous concepts, which are defined as fixed points, the level-k model [Nagel, 1995; Costa-Gomes et al., 2001] is built up iteratively. Each agent is associated with a level $k \ge 0$, corresponding to the number of steps of reasoning the agent is able to perform. A level-0 agent plays nonstrategically (i.e., without reasoning about its opponents); a level-k agent (for $k \ge 1$) best responds to the belief that all other agents are level-(k−1). The level-k model implies a distribution over play for all agents when combined with a distribution over levels.
Definition 5 (Level-k prediction).
Fix a distribution $f$ over levels and a level-0 behavior $\sigma^0_i$ for each agent $i$. Then the level-$k$ strategy for an agent $i$ is defined as

$$\sigma^k_i \in BR_i(\sigma^{k-1}_{-i}).$$

The level-k prediction for a game is the average of the level-$k$ strategies weighted by the frequencies of the levels,

$$\sigma_i = \sum_{k \ge 0} f(k)\, \sigma^k_i.$$
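The iterative construction can be sketched in a few lines of Python (names are our own illustration; level 0 is taken here to be uniform randomization, a common but not mandatory choice, and ties in best response are broken toward the lowest-index action):

```python
import numpy as np

def level_k_strategies(payoff_row, payoff_col, K):
    """Build level-0..K strategies for a two-player game.

    Level 0 is uniform; level k best responds (as a pure action) to the
    opponent's level-(k-1) strategy.
    """
    m, n = payoff_row.shape
    row = [np.full(m, 1.0 / m)]
    col = [np.full(n, 1.0 / n)]
    for k in range(1, K + 1):
        row_eu = payoff_row @ col[k - 1]     # expected utility per row action
        col_eu = payoff_col.T @ row[k - 1]   # expected utility per column action
        row.append(np.eye(m)[np.argmax(row_eu)])
        col.append(np.eye(n)[np.argmax(col_eu)])
    return row, col

def level_k_prediction(strategies, level_probs):
    """Average the level strategies, weighted by the level frequencies f(k)."""
    return sum(f * s for f, s in zip(level_probs, strategies))
```

In the Prisoner’s Dilemma, for instance, every level above 0 defects, so the prediction is a mixture of uniform play and defection.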
Cognitive hierarchy [Camerer et al., 2004] is a very similar model from behavioral game theory in which agents respond to the distribution of lower-level agents, rather than believing that every other agent performs exactly one fewer step of reasoning.
Definition 6 (Cognitive hierarchy prediction).
Fix a distribution $f$ over levels and a level-0 behavior $\sigma^0_i$ for each agent $i$. Then the level-$k$ hierarchical strategy for an agent $i$ is

$$\pi^k_i \in BR_i\!\left(\sum_{l=0}^{k-1} \frac{f(l)}{F(k-1)}\, \pi^l_{-i}\right),$$

where $\pi^0_i = \sigma^0_i$ and $F(k-1) = \sum_{l=0}^{k-1} f(l)$. The cognitive hierarchy prediction is again the average of the level-$k$ hierarchical strategies weighted by the frequencies of the levels,

$$\pi_i = \sum_{k \ge 0} f(k)\, \pi^k_i.$$
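The only change relative to the level-k sketch is the belief: a level-k agent responds to the reweighted mixture of all lower levels. A minimal Python sketch (names ours; level 0 again taken to be uniform):

```python
import numpy as np

def cognitive_hierarchy(payoff_row, payoff_col, level_probs):
    """Cognitive hierarchy strategies and row-player prediction (two players).

    level_probs: frequencies f(0), ..., f(K). A level-k agent best responds
    to the mixture of levels 0..k-1, renormalized to sum to one.
    """
    m, n = payoff_row.shape
    row = [np.full(m, 1.0 / m)]   # level 0: uniform (one common choice)
    col = [np.full(n, 1.0 / n)]
    for k in range(1, len(level_probs)):
        trunc = np.array(level_probs[:k]) / sum(level_probs[:k])
        col_belief = sum(p * s for p, s in zip(trunc, col))
        row_belief = sum(p * s for p, s in zip(trunc, row))
        row.append(np.eye(m)[np.argmax(payoff_row @ col_belief)])
        col.append(np.eye(n)[np.argmax(payoff_col.T @ row_belief)])
    prediction = sum(f * s for f, s in zip(level_probs, row))
    return row, prediction
```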
As with Nash equilibrium, it is possible to generalize these iterative solution concepts by basing agents’ behavior on quantal best response to the lower levels rather than best response. The resulting models are called quantal level-k and quantal cognitive hierarchy [e.g., Stahl and Wilson, 1994; Wright and Leyton-Brown, 2017].
All of the iterative solution concepts described above rely on the specification of a nonstrategic level-0 behavior. This need motivates the current paper, in which we explore which behaviors can be candidates for this specification.
4 Elementary Agent Models
We start by formally defining what we mean by a model of an agent’s behavior.
Definition 7 (Agent models).
An agent model for agent $i$ is a function $f_i$ that maps every game to a mixed action; i.e., $f_i(G) \in \Sigma_i$ for all games $G$. We use a function name with no agent subscript, such as $f$, to denote a profile of agent models with one function for each agent. We write $f(G)$ to denote the mixed action profile that results from applying each $f_i$ to $G$.
Agent models differ from solution concepts in two ways. First, they represent a prediction of a single agent’s behavior, rather than a prediction about the joint behavior of all the agents. Second, they are required to return a single distribution over the agent’s actions (i.e., a single mixed action), rather than a set of mixed actions.
We gave the intuition that elementary agent models make their decisions based on “a single number”. Since it is possible to encode a multidimensional quantity into a single real number,⁴ we must make that intuition more precise using the following definitions.

⁴ In information economics this is referred to as dimension smuggling [e.g., Nisan and Segal, 2006].
Definition 8 (Dictatorial function).
A function $g : \mathbb{R}^n \to \mathbb{R}$ is dictatorial in input $j$ if its value is completely determined by that input alone: for all $x, y \in \mathbb{R}^n$, $x_j = y_j$ implies $g(x) = g(y)$.
Definition 9 (No-smuggling condition).
A function $g : \mathbb{R}^n \to \mathbb{R}$, for $n \ge 2$, satisfies the no-smuggling condition iff, for every input $j$, either $g$ is dictatorial in input $j$ or there exist $x, y \in \mathbb{R}^n$ such that $x_j \ne y_j$ and $g(x) = g(y)$.
Definition 10 (Elementary).
An agent model $f_i$ is elementary if it can be represented as $f_i = h \circ \Phi$, where

1. $\Phi$ applies a fixed function $\phi : \mathbb{R}^n \to \mathbb{R}$ separately to the utilities of every action profile $a \in A$,

2. $\phi$ satisfies the no-smuggling condition, and

3. $h$ is an arbitrary function.

For convenience, when condition 1 holds we refer to $\phi$ as the potential map for $f_i$.
Clearly any agent model that can be computed using only the agent’s own utilities (such as the minimax regret rule [Savage, 1951]) qualifies as elementary; agent models of this kind correspond to the potential $\phi(u_1(a), \ldots, u_n(a)) = u_i(a)$. However, elementary agent models may also consider the other agents’ utilities, so long as they are aggregated in some way that does not smuggle dimensions; e.g., the “maxmax welfare” agent model, which plays only actions leading to the highest total payoff among all agents.
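To make the definition concrete, here is a minimal Python sketch (the names, and the particular choice of decision function, are our own illustration) of an elementary agent model: a potential map first summarizes every outcome into one number, and the decision then depends only on that table of numbers. Instantiating the potential with the agent’s own utility yields a maxmax-payoff rule; instantiating it with the sum of all agents’ utilities yields maxmax welfare.

```python
import numpy as np

def elementary_model(potential, payoff_tensors):
    """Sketch of an elementary agent model for the first (axis-0) agent.

    payoff_tensors: one utility array per agent, each indexed by the full
    action profile. `potential` maps the agents' utilities at a single
    outcome to one real number; it is applied outcome-by-outcome.
    The decision function here is a maxmax-style rule: play the own
    action whose best achievable potential is highest.
    """
    shape = payoff_tensors[0].shape
    pot = np.empty(shape)
    for idx in np.ndindex(shape):
        pot[idx] = potential([t[idx] for t in payoff_tensors])
    best = pot.max(axis=tuple(range(1, pot.ndim)))  # best potential per own action
    mix = np.zeros(shape[0])
    mix[np.argmax(best)] = 1.0
    return mix

maxmax_payoff = lambda us: us[0]      # potential = agent's own utility
maxmax_welfare = lambda us: sum(us)   # potential = total welfare
```

In the Prisoner’s Dilemma, the maxmax-payoff instantiation defects (its own payoff of 5 is the largest attainable), while the maxmax-welfare instantiation cooperates (total payoff 6).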
We are particularly concerned with agent models that represent behavior that is at least partially selfinterested.
Definition 11 (Self-responsive).
We say that an agent model that responds to changes in the associated agent’s payoffs is self-responsive. Formally, an agent model $f_i$ is self-responsive if for any game $G$, there exists a game $G'$ such that

1. $f_i(G') \ne f_i(G)$, and

2. $u'_j(a) = u_j(a)$ for all $j \ne i$ and $a \in A$.
We now have enough definitions to prove our first result. In our main lemma, we show that elementary agent models are not sufficiently expressive to represent quantal best response to any self-responsive agent model (which need not, itself, be elementary). We then show that quantal best response is, itself, self-responsive. Given these two facts, it is straightforward to demonstrate that neither iterative strategic reasoning nor fixed-point strategic reasoning can be represented by elementary agent models.
Lemma 12.
For any precision $\lambda > 0$ and profile $f$ of self-responsive agent models, the agent model $G \mapsto QBR_i(f_{-i}(G); \lambda)$ is not elementary.
Proof.
Suppose for contradiction that , where is the potential map for , and satisfies the no-smuggling condition. There are two cases:

is a dictator for . Consider the following game :
Let be the row player and be the column player. Choose a game with that differs only in ’s payoffs; such a game is guaranteed to exist because is self-responsive. Notice that . Since , , and therefore . But is a dictator for , so , a contradiction.

is not a dictator for . That means that there exist such that and . Now consider the following games and :
Since , . Similarly . But , so , and hence . But that means that
a contradiction. ∎
Lemma 13.
For any profile of agent models and , the agent model is self-responsive.
Proof.
We prove this by showing how to construct a game satisfying Definition 11 from any game .
Fix an arbitrary and . One of agent ’s actions must be assigned weakly less probability by than all others. Call this action . Now construct as follows:
Clearly differs from only in ’s payoffs. Action was assigned weakly less probability by than any other action. However, is strictly dominant in , and hence must be assigned strictly greater probability than any other action by . Therefore, , and is self-responsive. ∎
Putting these pieces together, there is no way to represent any of the most common fixed-point or iterative solution concepts using elementary agent models.
Theorem 14 (No elementary representation of fixed-point strategic reasoning).
There do not exist a precision $\lambda > 0$ and a profile $f$ of elementary agent models that satisfy $f_i(G) = QBR_i(f_{-i}(G); \lambda)$ for all games $G$ and players $i$.
Corollary 15.
None of QRE, Nash equilibrium, or correlated equilibrium can be represented as a profile of elementary agent models.
Proof.
Immediate from Theorem 14 and the observation that best response is a special case of quantal best response. ∎
Theorem 16 (No elementary representation of iterative strategic reasoning).
Let $f^0$ be a profile of agent models and let $\lambda_k > 0$ for all $k \ge 1$. Then there do not exist profiles $f^1, f^2, \ldots$ of elementary agent models with $f^k_i(G) = QBR_i(f^{k-1}_{-i}(G); \lambda_k)$ for all $k \ge 1$, games $G$, and players $i$.
Corollary 17.
None of level-k, cognitive hierarchy, or quantal cognitive hierarchy can be represented as a profile of elementary agent models.
Proof.
Immediate from Theorem 16 and the observation that best response is a special case of quantal best response. ∎
4.1 Finite Combinations of Elementary Models
It turns out that the class of functions that can be represented by combining a finite number of elementary models is strictly larger than the class of elementary agent models. However, under a natural condition on potential functions, finite combinations of elementary agent models are still not expressive enough to represent the exemplar strategic solution concepts that we consider.
We first formally define finite combinations:
Definition 18.
An agent model $f_i$ is a finite combination of elementary agent models if it can be represented as

$$f_i(G) = g\big(f^1_i(G), \ldots, f^m_i(G)\big),$$

where $m \in \mathbb{N}$, $g$ is an arbitrary function, and $f^1_i, \ldots, f^m_i$ are all elementary agent models.
Finite combinations of elementary agent models are strictly more expressive than individual elementary agent models.
Proposition 19.
Finite combinations of elementary agent models are not necessarily elementary themselves.
Proof.
We prove the result by providing an example of two elementary agent models whose linear combination is not elementary.
Let be the maxmax payoff decision rule,
where .
Let be the maxmax welfare decision rule,
where . Let and
Suppose that is elementary. Since is a function of all agents’ utilities, its potential is not dictatorial for any agent; therefore, it must satisfy the no-smuggling condition. Let
be two utility vectors satisfying
and , where is ’s associated potential. Let , , and be any utility vector satisfying and . Now consider the following two-player games and , with row player :
Since the two games differ only in outcome , and , it must be that . But . So we have a contradiction, and is not elementary. ∎
However, when the potentials associated with a finite collection of elementary models are jointly no-smuggling, we recover the results of Theorems 14 and 16.
Definition 20 (Jointly no-smuggling).
A set of functions $\{\phi_1, \ldots, \phi_m\}$, each $\phi_k : \mathbb{R}^n \to \mathbb{R}$ for $n \ge 2$, is jointly no-smuggling if the function

$$\Phi(x) = \big(\phi_1(x), \ldots, \phi_m(x)\big)$$

satisfies the no-smuggling condition.
Theorem 21.
Let be a set of elementary agent models. If the potentials associated with the models in are jointly no-smuggling, then:

There do not exist and a profile of finite combinations of that satisfy
for all games and players .

For any profile of agent models and set of precisions, there do not exist profiles of finite combinations of with and for all games and players .
Proof.
Lemma 12 can be reproved for finite combinations in exactly the same way, since the joint no-smuggling condition guarantees the existence of utility vectors that are indistinguishable by the joint potential but have . The proof of statement 1 then follows the proof of Theorem 14, and the proof of statement 2 follows the proof of Theorem 16. ∎
The joint no-smuggling condition is a benign assumption. In particular, it is satisfied by every set of continuous potentials,⁵ including all linear combinations.

⁵ Note that an agent model with a continuous potential need not be continuous itself.
5 Discussion and Future Work
In this work, we propose finite combinations of elementary agent models as a mathematical characterization of the class of nonstrategic decision rules. This class is constructively defined, in the sense that membership of a rule is verified by demonstrating how to represent the rule in a specific form—as a function of the output of a no-smuggling potential map—rather than by proving that it cannot be represented as a response to probabilistic beliefs. Indeed, many intuitively nonstrategic rules can be represented as responses to probabilistic beliefs; the crucial component of our notion of nonstrategic behavior is that they need not be.
There are two different approaches to conceptualizing bounded rationality. The first is to view boundedness as error. For example, quantal response notions, especially quantal response equilibrium, assume that agents make an error by choosing a less-than-optimal action with increasing probability as the suboptimal action becomes closer to the optimum [McKelvey and Palfrey, 1995; Train, 2009]. An $\epsilon$-equilibrium is an equilibrium in which every agent comes within $\epsilon$ of best responding to the others [Shoham and Leyton-Brown, 2008]. More generally, a common approach to modeling less-than-fully-rational agents is to mix in uniform noise representing the probability of making an error [e.g., Costa-Gomes et al., 2001].
The other approach is to view boundedness as a structural property of the agent’s reasoning process. For example, work in program equilibrium often models boundedness by a bounded number of states in a finite state automaton [Gilboa and Samet, 1989; Rubinstein, 1998]. Iterative models both distinguish between strategic and nonstrategic reasoning, and distinguish degrees of strategic reasoning [Nagel, 1995; CostaGomes et al., 2001; Camerer et al., 2004]. Similarly, bounded rationalizability can be defined for different depths of reasoning [Bernheim, 1984; Pearce, 1984]. Work on the mean field equilibrium solution concept also distinguishes between “cognizant” strategies, which track the current state of other agents, and “oblivious” strategies, which do not [Adlakha and Johari, 2013].
Our proposed characterization is a structural notion: it restricts the information that agents are permitted to use by restricting them to summarize all outcomes into a single number before performing their reasoning. This is also a binary distinction: in this view, an agent model is either nonstrategic, or it is not. One possible direction for future work is to extend this distinction to be more quantitative; i.e., is there a sense in which agents are nonstrategic to a greater or lesser degree that is distinct from the number of steps of strategic reasoning that they perform?
There are a number of special cases of strategic solution concepts that fit our definition of nonstrategic. Specifically, the equilibrium of a two-player zero-sum game can be computed by considering only the utility of a single agent, and hence the mixed action for an equilibrium-playing agent in such a game can be computed by an elementary agent model. Similarly, an equilibrium of a potential game can of course be computed in terms of outcome values computed by a potential function [Monderer and Shapley, 1996]. In repeated settings, many no-regret learning rules (which are guaranteed to converge to a coarse correlated equilibrium) can be executed by agents that take account only of their own utilities. One thing that these exceptions all have in common is that they are also computationally easy, unlike general equilibrium computation, which is known to be hard in a precise computational sense [Daskalakis et al., 2009; Chen and Deng, 2006]. The equilibrium of a zero-sum game can be found in polynomial time by a linear program; the equilibrium of a potential game can be found simply by maximizing the potential function over all the outcomes. No-regret algorithms are cheap to run, requiring in each time period work that is linear in the number of actions, and converge rapidly to a coarse correlated equilibrium.
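To illustrate the no-regret point concretely, here is a minimal sketch (in Python; the function and parameter names are ours) of multiplicative-weights learning in a two-player zero-sum game. Each player updates using only its own expected payoffs, yet the time-averaged strategies approach equilibrium:

```python
import numpy as np

def multiplicative_weights(payoff, T=2000, eta=0.05):
    """No-regret (multiplicative-weights) learning in a two-player zero-sum game.

    payoff: the row player's utility matrix; the column player receives its
    negation. Each player's update uses only its own expected payoff vector.
    Returns the time-averaged mixed actions of both players.
    """
    m, n = payoff.shape
    wr = np.arange(1.0, m + 1)   # slightly asymmetric start, to avoid a fixed point
    wc = np.arange(1.0, n + 1)
    avg_r, avg_c = np.zeros(m), np.zeros(n)
    for _ in range(T):
        pr, pc = wr / wr.sum(), wc / wc.sum()
        avg_r += pr
        avg_c += pc
        wr *= np.exp(eta * (payoff @ pc))      # row player sees only its own payoffs
        wc *= np.exp(eta * (-payoff.T @ pr))   # column player likewise
    return avg_r / T, avg_c / T
```

On matching pennies, for example, both players’ average strategies drift toward the uniform equilibrium even though neither models the other’s reasoning.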
The connection between ease of computation and strategic simplicity seems natural, but it is not an equivalence. For example, correlated equilibrium in general games is computationally easy, as it can be computed in polynomial time by a linear program, but cannot in general be computed by an elementary agent model (see Corollary 15). An attractive future direction is to clarify the connection between computational and strategic simplicity.
References
 Adlakha and Johari [2013] Adlakha, S. and Johari, R. (2013). Mean field equilibrium in dynamic games with strategic complementarities. Operations Research, 61(4):971–989.
 Airiau and Sen [2003] Airiau, S. and Sen, S. (2003). Strategic bidding for multiple units in simultaneous and sequential auctions. Group Decision and Negotiation, 12(5):397–413.
 Babaioff et al. [2004] Babaioff, M., Nisan, N., and Pavlov, E. (2004). Mechanisms for a spatially distributed market. In Proceedings of the 5th ACM Conference on Electronic Commerce, pages 9–20.
 Bar-Isaac and Ganuza [2005] Bar-Isaac, H. and Ganuza, J.-J. (2005). Teaching to the top and searching for superstars. Technical report, New York University, Leonard N. Stern School of Business, Department of Economics.
 Bernheim [1984] Bernheim, B. (1984). Rationalizable Strategic Behavior. Econometrica, 52(4):1007–1028.
 Camerer et al. [2004] Camerer, C., Ho, T., and Chong, J. (2004). A cognitive hierarchy model of games. Quarterly Journal of Economics, 119(3):861–898.
 Camerer [2003] Camerer, C. F. (2003). Behavioral Game Theory: Experiments in Strategic Interaction. Princeton University Press.
 Chen and Deng [2006] Chen, X. and Deng, X. (2006). Settling the complexity of two-player Nash equilibrium. In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS’06), pages 261–272.
 Costa-Gomes et al. [2001] Costa-Gomes, M., Crawford, V., and Broseta, B. (2001). Cognition and behavior in normal-form games: An experimental study. Econometrica, 69(5):1193–1235.
 Crawford et al. [2010] Crawford, V. P., Costa-Gomes, M. A., Iriberri, N., et al. (2010). Strategic thinking. Working paper.
 Daskalakis et al. [2009] Daskalakis, C., Goldberg, P. W., and Papadimitriou, C. H. (2009). The complexity of computing a Nash equilibrium. SIAM Journal on Computing, 39(1):195–259.
 Gerding et al. [2011] Gerding, E. H., Robu, V., Stein, S., Parkes, D. C., Rogers, A., and Jennings, N. R. (2011). Online mechanism design for electric vehicle charging. In The 10th International Conference on Autonomous Agents and Multiagent Systems, Volume 2, pages 811–818.
 Ghosh and Hummel [2012] Ghosh, A. and Hummel, P. (2012). Implementing optimal outcomes in social computing: a gametheoretic approach. In Proceedings of the 21st International Conference on World Wide Web, pages 539–548.
 Gilboa and Samet [1989] Gilboa, I. and Samet, D. (1989). Bounded versus unbounded rationality: The tyranny of the weak. Games and Economic Behavior, 1(3):213–221.
 Grabisch et al. [2017] Grabisch, M., Mandel, A., Rusinowska, A., and Tanimura, E. (2017). Strategic influence in social networks. Mathematics of Operations Research, 43(1):29–50.
 Hartford et al. [2016] Hartford, J. S., Wright, J. R., and Leyton-Brown, K. (2016). Deep learning for predicting human strategic behavior. In Advances in Neural Information Processing Systems, pages 2424–2432.
 Lee [2014] Lee, H. (2014). Algorithmic and gametheoretic approaches to group scheduling. In Proceedings of the 2014 International Conference on Autonomous Agents and MultiAgent Systems, pages 1709–1710.
 Li and Tesauro [2003] Li, C. and Tesauro, G. (2003). A strategic decision model for multi-attribute bilateral negotiation with alternating. In Proceedings of the 4th ACM Conference on Electronic Commerce, pages 208–209.
 McKelvey and Palfrey [1995] McKelvey, R. and Palfrey, T. (1995). Quantal response equilibria for normal form games. Games and Economic Behavior, 10(1):6–38.
 Monderer and Shapley [1996] Monderer, D. and Shapley, L. S. (1996). Potential games. Games and Economic Behavior, 14(1):124–143.
 Nagel [1995] Nagel, R. (1995). Unraveling in guessing games: An experimental study. American Economic Review, 85(5):1313–1326.
 Nisan and Segal [2006] Nisan, N. and Segal, I. (2006). The communication requirements of efficient allocations and supporting prices. Journal of Economic Theory, 129(1):192–224.
 Pearce [1984] Pearce, D. (1984). Rationalizable Strategic Behavior and the Problem of Perfection. Econometrica, 52(4):1029–1050.
 Roth and Ockenfels [2002] Roth, A. E. and Ockenfels, A. (2002). Last-minute bidding and the rules for ending second-price auctions: Evidence from eBay and Amazon auctions on the Internet. American Economic Review, 92(4):1093–1103.
 Rubinstein [1998] Rubinstein, A. (1998). Modeling Bounded Rationality. MIT Press.
 Sandholm and Lesser [2001] Sandholm, T. W. and Lesser, V. R. (2001). Leveled commitment contracts and strategic breach. Games and Economic Behavior, 35(1–2):212–270.
 Savage [1951] Savage, L. (1951). The Theory of Statistical Decision. Journal of the American Statistical Association, 46(253):55–67.
 Shoham and Leyton-Brown [2008] Shoham, Y. and Leyton-Brown, K. (2008). Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. Cambridge University Press.
 Stahl and Wilson [1994] Stahl, D. and Wilson, P. (1994). Experimental evidence on players’ models of other players. Journal of Economic Behavior and Organization, 25(3):309–327.
 Train [2009] Train, K. (2009). Discrete Choice Methods with Simulation. Cambridge University Press.
 Walsh et al. [2000] Walsh, W. E., Wellman, M. P., and Ygge, F. (2000). Combinatorial auctions for supply chain formation. In Proceedings of the 2nd ACM conference on Electronic commerce, pages 260–269.
 Wright and Leyton-Brown [2017] Wright, J. R. and Leyton-Brown, K. (2017). Predicting human behavior in unrepeated, simultaneous-move games. Games and Economic Behavior, 106:16–37.
 Wright and Leyton-Brown [to appear] Wright, J. R. and Leyton-Brown, K. (to appear). Level-0 models for predicting human behavior in games. Journal of Artificial Intelligence Research.