A Formal Separation Between Strategic and Nonstrategic Behavior

12/30/2018 ∙ by James R. Wright, et al. ∙ University of Alberta ∙ The University of British Columbia

It is common to make a distinction between `strategic' behavior and other forms of intentional but `nonstrategic' behavior: typically, that strategic agents model other agents while nonstrategic agents do not. However, a crisp boundary between these concepts has proven elusive. This problem is pervasive throughout the game theoretic literature on bounded rationality. It is particularly critical in parts of the behavioral game theory literature that make an explicit distinction between the behavior of `nonstrategic' level-0 agents and `strategic' higher-level agents (e.g., the level-k and cognitive hierarchy models). The literature gives no clear guidance on how the rationality of nonstrategic agents must be bounded, instead typically just singling out specific decision rules and informally asserting them to be nonstrategic (e.g., truthfully revealing private information; randomizing uniformly). In this work, we propose a new, formal characterization of nonstrategic behavior. Our main contribution is to show that it satisfies two properties: (1) it is general enough to capture all purportedly `nonstrategic' decision rules of which we are aware; (2) behavior that obeys our characterization is distinct from strategic behavior in a precise sense.




1 Introduction

Our focus in this paper is on game-theoretic environments, characterized by multiple independent agents with independent goals, each of whose payoffs depend upon more than just their own actions. How should agents be assumed to behave in such environments? The classical answer is that agents should be modeled as fully rational. This is an extremely demanding standard. For example, when a fully rational agent i's payoffs depend upon the actions of some other self-interested agent j, i must form probabilistic beliefs about j's actions. If i believes j to be fully rational as well, j's actions are thus responses to beliefs about i, and so i must also hold beliefs about j's beliefs about i, and about j's beliefs about i's beliefs about j, and so on. Although such assumptions may seem baroque, they in fact undergird the predictions of classical game theory, which assumes all agents to be fully rational; e.g., one way of defining the Nash equilibrium is as such a set of beliefs in which every agent is correct in its beliefs about the others. The behavior of fully rational agents is commonly said to be strategic. Indeed, in the early days of game theory, the term “strategic” was used as a synonym for full rationality (e.g., Bernheim, 1984; Pearce, 1984).

At the other extreme, we might assume that agents are entirely irrational and do not model other agents at all, following some fixed rule like truthfully revealing all private information or uniformly randomizing across all available actions. The behavior of such agents is commonly said to be nonstrategic.

Things get muddier in between these extremes. Human players are clearly not fully rational; e.g., nobody knows the Nash equilibrium strategy for chess. However, at least some of us surely do reason about the behavior and beliefs of other agents with whom we interact. The literature generally also calls such “boundedly rational” behavior strategic, even when the behavior is inconsistent with full rationality; intuitively, the dividing line is generally taken to be the question of whether agents model other agents and their incentives when deciding how to act. More formally, the term “strategic” is generally used to describe agents who act to maximize their own utilities based on explicit probabilistic beliefs about the actions of other agents (Roth and Ockenfels, 2002; Li and Tesauro, 2003; Babaioff et al., 2004; Lee, 2014; Gerding et al., 2011; Ghosh and Hummel, 2012; Grabisch et al., 2017), and the term “nonstrategic” is generally used to describe agents who follow some fixed, known decision rule, often truth-telling (Sandholm and Lesser, 2001; Airiau and Sen, 2003; Li and Tesauro, 2003; Lee, 2014; Gerding et al., 2011; Grabisch et al., 2017). (Of course, the term “strategic” is not used entirely consistently in the literature. For example, some work refers to strategic agents as those that pay any attention to the effects of their own actions when reasoning about how to maximize their own utility (Walsh et al., 2000; Airiau and Sen, 2003; Bar-Isaac and Ganuza, 2005). Under this meaning of the term, there exists a meaningful distinction between nonstrategic and strategic agents even in single-agent environments: the former are myopic about the effects of their own actions.) The key theme of this paper is that the task of making this idea precise is both underexplored in the literature and tricky in practice.

The literature has taken two main approaches to conceptualizing bounded rationality. The first views boundedness as error. For example, quantal response notions, especially quantal response equilibrium, assume that agents err by choosing a less-than-optimal action with increasing probability as the payoffs from taking the suboptimal action approach the payoffs from taking the optimal action (McKelvey and Palfrey, 1995; Train, 2009). An ε-equilibrium is an equilibrium in which every agent comes within ε of best responding to the others (Shoham and Leyton-Brown, 2008). More generally, a common approach to modeling less than fully rational agents is to mix in uniform noise representing the probability of making an error (e.g., Costa-Gomes et al., 2001).

The other main approach to conceptualizing bounded rationality views boundedness as a structural property of an agent’s reasoning process. Some work distinguishes between distinct forms of reasoning. For example, Sandholm and Lesser (2001) distinguish between strategic threshold strategies and Nash threshold strategies; work on the mean field equilibrium solution concept distinguishes between “cognizant” strategies, which track the current state of other agents, and “oblivious” strategies, which do not (Adlakha and Johari, 2013). Other work considers finer gradations between the levels of reasoning that different agents are able to perform, as in bounded rationalizability (Bernheim, 1984; Pearce, 1984) and work in program equilibrium, which often models boundedness by a bounded number of states in a finite state automaton (Gilboa and Samet, 1989; Rubinstein, 1998).

Another example in this latter vein serves as the most concrete application of both strategic and nonstrategic descriptions of behavior within a single model of which we are aware: the iterative models of behavioral game theory. These models thus serve as a running example throughout the paper and as one application area in which our work has immediate implications. Overall, the field of behavioral game theory aims to develop models that more accurately describe human behavior in game-theoretic environments, based on both field data and experimental data (e.g., Camerer, 2003). Iterative models are one prominent class of models from this literature; they include the level-k (Nagel, 1995; Costa-Gomes et al., 2001; Crawford et al., 2010), cognitive hierarchy (Camerer et al., 2004), and quantal cognitive hierarchy models (Wright and Leyton-Brown, 2017). In all of these models, each agent has a non-negative integer level representing the degree of strategic reasoning (i.e., modeling of recursive beliefs) of which the agent is capable. Level-0 agents are nonstrategic—they do not model other agents' beliefs or actions at all; level-1 agents model level-0 agents' actions; level-2 agents model the beliefs and actions of level-1 agents; and so forth.

So far so good; the catch comes when we start getting fancy with the definition of level-0 agents. In most work in the literature, the issue does not come up; level-0 behavior is defined simply as uniform randomization. However, we showed in recent work that model performance can be substantially improved by allowing for richer level-0 specifications (Wright and Leyton-Brown, 2014; Hartford et al., 2016; Wright and Leyton-Brown, 2019). This raises the question of how rich these specifications should be allowed to become before level-0 stops being plausible as a description of nonstrategic behavior. In our past work we followed the AI tradition described earlier, saying that a strategic agent's behavior can depend on an explicit, probabilistic belief about the other agents' actions while a nonstrategic agent's behavior must not. But this argument has a crucial weakness: just because a proposed level-0 behavior can be written without reference to beliefs about other agents' strategies, we cannot conclude that there does not exist another, equivalent way of writing it that does depend on such beliefs. Things get even worse if we aspire to learn the level-0 specification directly from data, effectively optimizing over a space of specifications, as we do in our most recent work (Hartford et al., 2016): the task now becomes reassuring a skeptic that no point in this space corresponds to behavior that could somehow be rewritten in strategic terms.

This paper establishes a firm and constructive dividing line between nonstrategic and strategic behaviors. Specifically, it characterizes a broad family of “nonstrategic” decision rules and shows that they deserve the name: i.e., that no rule in this class can represent strategic reasoning. In the sense of the literature on bounded rationality just described, our proposed characterization is a structural notion: it restricts the information that agents are permitted to use by restricting them to summarize all outcomes into a single number before performing their reasoning. In what follows, we introduce notation and important background in Section 2. In particular, we introduce the concept of behavioral models in Section 2.3: functions that map from an arbitrary game to a probability distribution over a single agent's actions in that game. We define strategic behavioral models in Section 3; these are behavioral models that require agents to take account of the incentives of the other agents and to be self-interested, both in a precise sense. In Section 4, we prove that all of the existing solution concepts described in Section 2 are strategic. Section 5 then defines elementary behavioral models—agent behaviors that can be computed in terms of a matrix of real-valued potentials that score each outcome in the game independently—and shows how a wide range of “level-0” behaviors from the literature can be encoded as elementary models. In Section 6, we prove our main result: that minimally self-interested behavioral models are partitioned into nonstrategic behavioral models, all of which are elementary, and strategic behavioral models, none of which are elementary. Finally, in Section 7, we show that aggregating a finite number of elementary behavioral models results in a strictly more expressive set of (not necessarily even minimally self-interested) models that still contains only nonstrategic behavioral models. We conclude in Section 8 with discussion and some future directions.

2 Background

We begin by briefly defining our formal framework and notation, discussing normal-form games, solution concepts, and behavioral models.

2.1 Normal-Form Games

In this work, we focus on unrepeated, simultaneous-move normal-form games. These games turn out to be perfectly general, in a mathematical sense: any game, including repeated games or dynamic “game trees”, can be represented as a normal-form game.

A normal-form game is defined by a tuple G = (N, A, u), where N is a finite set of agents; A = ×_{i∈N} A_i is the set of possible action profiles; A_i is the finite set of actions available to agent i; and u = (u_1, …, u_n) is a set of utility functions u_i : A → ℝ, each of which maps from an action profile to a utility for agent i. Agents may also randomize over their actions. It is standard in the literature to call such randomization a mixed strategy; however, for our purposes this terminology would be confusing, since it would lead us to discuss the strategies of nonstrategic agents. We thus instead adopt the somewhat nonstandard terminology behavior for this concept. We denote the set of agent i's possible behaviors by S_i = Δ(A_i), and the set of possible behavior profiles by S = ×_{i∈N} S_i, where Δ^k denotes the standard k-simplex (the set {x ∈ ℝ^{k+1} : x_j ≥ 0, Σ_j x_j = 1}), and hence Δ(X) is the set of probability distributions over a discrete set X. Overloading notation, we represent the expected utility to agent i of a behavior profile s ∈ S by u_i(s). We use the notation s_{-i} to refer to the behavior profile of all agents except i, and (s_i, s_{-i}) to represent a full behavior profile.
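To make this concrete, a two-player normal-form game can be encoded as a pair of payoff matrices, and the expected utility of a behavior profile computed directly. This is an illustrative sketch only; the NumPy encoding and function names are our own, not part of the formal development, and the prisoner's dilemma payoffs are one conventional choice.

```python
import numpy as np

# Assumed encoding: U[i][a1, a2] is agent i's utility for the
# pure action profile (a1, a2) in a two-player game.
prisoners_dilemma = (
    np.array([[-1.0, -3.0],   # row player's utilities
              [ 0.0, -2.0]]),
    np.array([[-1.0,  0.0],   # column player's utilities
              [-3.0, -2.0]]),
)

def expected_utility(U_i, s_row, s_col):
    """Expected utility u_i(s) when the two players randomize
    independently according to behaviors s_row and s_col."""
    return float(np.asarray(s_row) @ U_i @ np.asarray(s_col))

# Uniform randomization by both agents:
u_row = expected_utility(prisoners_dilemma[0], [0.5, 0.5], [0.5, 0.5])
```

Here a behavior is simply a probability vector over an agent's actions, and a behavior profile is a pair of such vectors.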

2.2 Solution Concepts

A solution concept is a mapping from a game to a behavior profile (or set of behavior profiles) that satisfies some criteria. Solution concepts can be interpreted descriptively, as a prediction of how agents will actually play a game. They can also be interpreted normatively, as a claim about how rational agents ought to play a game. (These two senses frequently overlap, as it is common to assume that agents will play a game rationally.) We will primarily be concerned with these solution concepts as formalizations of strategic behavior in games.

The foundational solution concept in game theory, and the most commonly used, is the Nash equilibrium.

Definition 1 (Nash equilibrium).

Let BR_i(s_{-i}) = argmax_{s_i ∈ S_i} u_i(s_i, s_{-i}) denote the set of agent i's best responses to a behavior profile s_{-i}. A Nash equilibrium is a behavior profile in which every agent simultaneously best responds to all the other agents. Formally, s is a Nash equilibrium if s_i ∈ BR_i(s_{-i}) for every agent i ∈ N.

When agents play a Nash equilibrium, they must randomize independently. A correlated equilibrium relaxes this requirement, and allows for joint distributions of actions that are correlated.

Definition 2 (Correlated equilibrium).

A correlated equilibrium is a distribution σ over action profiles which satisfies the following for every agent i and every mapping δ_i : A_i → A_i:

Σ_{a∈A} σ(a) u_i(a_i, a_{-i}) ≥ Σ_{a∈A} σ(a) u_i(δ_i(a_i), a_{-i}).

Note that every Nash equilibrium s corresponds to a correlated equilibrium σ with σ(a) = Π_{i∈N} s_i(a_i).

One important idea from behavioral economics is that people become more likely to make errors as the cost of making those errors decreases. This can be modeled by assuming that agents best respond quantally, rather than via strict maximization.

Definition 3 (Quantal best response).

The (logit) quantal best response QBR_i(s_{-i}; λ) by agent i to s_{-i} in game G is a behavior s_i such that

s_i(a_i) = exp[λ · u_i(a_i, s_{-i})] / Σ_{a_i′ ∈ A_i} exp[λ · u_i(a_i′, s_{-i})],

where λ (the precision parameter) indicates how sensitive agents are to utility differences. When λ = 0, quantal best response is equivalent to uniform randomization; as λ → ∞, quantal best response corresponds to best response in the sense that actions are played with positive probability only if they are best responses, i.e., only if a_i ∈ BR_i(s_{-i}).
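The logit formula of Definition 3 can be sketched computationally as follows; this is an illustration with an assumed NumPy game encoding and function name, not a reference implementation.

```python
import numpy as np

def quantal_best_response(U_i, s_other, lam):
    """Logit quantal best response of the row player to the column
    player's behavior s_other, with precision lam. lam = 0 yields
    uniform randomization; large lam concentrates probability on
    best responses."""
    eu = U_i @ np.asarray(s_other)    # expected utility of each action
    logits = lam * eu
    logits -= logits.max()            # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

U = np.array([[1.0, 1.0],   # action 0 always pays 1
              [0.0, 0.0]])  # action 1 always pays 0
uniform = quantal_best_response(U, [0.5, 0.5], lam=0.0)
sharp = quantal_best_response(U, [0.5, 0.5], lam=50.0)
```

Subtracting the maximum logit before exponentiating leaves the distribution unchanged while avoiding overflow for large λ.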

The generalization from best response to quantal best response gives rise to a generalization of Nash equilibrium known as the quantal response equilibrium (“QRE”) (McKelvey and Palfrey, 1995).

Definition 4 (QRE).

A quantal response equilibrium with precision λ is a behavior profile s in which every agent's behavior is a quantal best response to the behaviors of the other agents; i.e., for all agents i,

s_i = QBR_i(s_{-i}; λ).

2.3 Models of Agent Behavior

We now turn to what we term behavioral models, functions that return a behavior (a probability distribution over a single agent’s action space) for every given game. They differ from solution concepts because they represent a prediction of a single agent’s behavior, rather than a prediction about all agents’ joint behavior. Profiles of behavioral models can thus be seen as solution concepts that always encode a single product distribution over a given set of individual behaviors. We consider what can be said about the profiles of behavioral models induced by existing solution concepts in Section 4.

Before we can define behavioral models, we must introduce some basic notation. Let:

  • 𝒢 denote the space of all normal-form games;

  • 𝒱 denote the space of all finite vectors;

  • 𝒯 denote the space of all finite-sized, finite-dimensional tensors;

  • 𝒟 denote the space of all finite standard simplices; and

  • A_i(G) denote player i's action space in game G.

Definition 5 (Behavioral models).

A behavioral model is a function π_i : 𝒢 → 𝒟; π_i(G) ∈ Δ(A_i(G)) for all games G. We use a function name with no agent subscript, such as π, to denote a profile of behavioral models with one function for each agent. We write π(G) to denote the behavior profile that results from applying each π_i to G.

Much work in behavioral game theory proposes behavioral models rather than solution concepts (though the formal definition of a behavioral model is our own). One key idea from that literature is that humans can only perform a limited number of steps of strategic reasoning, or equivalently that they only reason about higher-order beliefs up to some fixed, maximum order.

We begin with the so-called level-k model (Nagel, 1995; Costa-Gomes et al., 2001). Unlike Nash equilibrium, correlated equilibrium, and quantal response equilibrium, all of which describe fixed points, the level-k model is computed via a finite number of best response calculations. Each agent is associated with a level k ∈ ℕ, corresponding to the number of steps of reasoning the agent is able to perform. A level-0 agent plays nonstrategically (i.e., without reasoning about its opponent); a level-k agent (for k ≥ 1) best responds to the belief that all other agents are level-(k − 1). The level-k model implies a distribution over play for all agents when combined with a distribution over levels.

Definition 6 (Level-k prediction).

Fix a distribution g over levels and a level-0 behavior π_i^0 for each agent i. Then the level-k behavior for an agent i is defined as

π_i^k(G) = BR_i(π_{-i}^{k-1}(G)).

The level-k prediction for a game G is the average of the level-k behaviors weighted by the frequency of the levels,

π_i(G) = Σ_k g(k) π_i^k(G).
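The level-k recursion can be sketched for two players as follows; this is an illustration under assumptions of our own (a NumPy payoff-matrix encoding, uniform randomization as the level-0 behavior, and uniform tie-breaking among best responses).

```python
import numpy as np

def uniform_best_response(U_i, s_other):
    """Randomize uniformly over the row player's best responses to
    the column player's behavior s_other."""
    eu = U_i @ np.asarray(s_other)
    best = np.isclose(eu, eu.max())
    return best / best.sum()

def level_k(U1, U2, k):
    """Level-k behaviors of both players: each level best responds
    to an opponent exactly one level lower, starting from uniform
    level-0 behaviors."""
    s1 = np.ones(U1.shape[0]) / U1.shape[0]   # level-0: uniform
    s2 = np.ones(U1.shape[1]) / U1.shape[1]
    for _ in range(k):
        # simultaneous update: level-m responds to level-(m-1)
        s1, s2 = uniform_best_response(U1, s2), uniform_best_response(U2.T, s1)
    return s1, s2

# Asymmetric coordination game: both prefer matching; profile (0, 0)
# pays more than (1, 1).
U1 = np.array([[3.0, 0.0], [0.0, 2.0]])
U2 = np.array([[3.0, 0.0], [0.0, 2.0]])
s1_level1, _ = level_k(U1, U2, 1)
```

Against a uniform level-0 opponent, action 0 has the higher expected utility, so the level-1 row player plays it with certainty.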

Cognitive hierarchy (Camerer et al., 2004) is a very similar model in which agents respond to the distribution of lower-level agents, rather than believing that every agent performs exactly one step less of reasoning.

Definition 7 (Cognitive hierarchy prediction).

Fix a distribution g over levels and a level-0 behavior π_i^0 for each agent i. Then the level-m hierarchical behavior for an agent i is

π_i^m(G) = BR_i(π̄_{-i}^{m-1}(G)),

where π̄^{m-1} = Σ_{l=0}^{m-1} ĝ(l) π^l and ĝ(l) = g(l) / Σ_{l'=0}^{m-1} g(l'). The cognitive hierarchy prediction is again the average of the level-m hierarchical behaviors weighted by the frequencies of the levels,

π_i(G) = Σ_m g(m) π_i^m(G).

As we did with quantal response equilibrium, it is possible to generalize these iterative solution concepts by basing agents' behavior on quantal best responses rather than best responses. The resulting models are called quantal level-k and quantal cognitive hierarchy (e.g., Stahl and Wilson, 1994; Wright and Leyton-Brown, 2017).

3 Strategic Behavioral Models

As discussed in the introduction, there is general qualitative agreement in the literature that strategic agents act to maximize their own utilities based on explicit probabilistic beliefs about the actions of other agents. Our ultimate goal is to characterize behavioral models that are unambiguously nonstrategic; thus, to strengthen our results, we adopt a somewhat more expansive notion of strategic behavior. Specifically, we define an agent as strategic if it satisfies two conditions, which we call (1) other responsiveness and (2) dominance responsiveness. These conditions require that the agent chooses actions both (1) with at least some dependence on others’ payoffs; and (2) with at least some concern for its own payoffs.

Definition 8 (Strategic behavioral model).

A behavioral model is strategic if it is both other responsive and dominance responsive.

The key feature of strategic agents is that they take account of the incentives of other agents when choosing their own actions. To capture this intuition via the weakest possible necessary condition, we say that an agent is other responsive if its behavior is ever influenced by changes (only) to the utilities of other agents.

Definition 9 (Other responsiveness).

A behavioral model π_i is other responsive if there exists a pair of games G and G′ such that u_i(a) = u_i′(a) for all a ∈ A, but π_i(G) ≠ π_i(G′).

It is also traditional to assume that agents always act to maximize their expected utilities. This assumption is too strong for our purposes; for example, we want to allow for deviations from perfect utility maximization such as quantal best response. However, it does not seem reasonable to call an agent strategic if it pays no attention whatsoever to its own payoffs. We thus introduce a concept that we call dominance responsiveness, which is a considerably weaker sense in which an agent might show concern for its own payoffs. The condition requires only that gross changes in agents' own incentives will cause them to change their behavior. More formally, we say that a behavioral model is dominance responsive if, whenever an action is strictly dominant in one game and strictly dominated in another, the behavioral model does not behave identically in the two games.

Definition 10 (Strict dominance).

In a game G, an action a_i strictly dominates an action a_i′ if and only if u_i(a_i, a_{-i}) > u_i(a_i′, a_{-i}) for all a_{-i} ∈ A_{-i}. If an action strictly dominates every other action in game G, then it is strictly dominant in G.

Definition 11 (Dominance responsiveness).

A pair of games G and G′ are dominance reversed for agent i if there exist a_i, a_i′ ∈ A_i such that a_i is strictly dominant in G, but a_i′ strictly dominates a_i in G′. A behavioral model π_i is dominance responsive if for every pair of games G and G′ that are dominance reversed for i, π_i(G) ≠ π_i(G′).

Observe that the two games G and G′ are required to have nothing in common beyond their sizes and their use of the same names for the actions; the latter is to enable the statement π_i(G) ≠ π_i(G′) to make sense. (Each evaluation of π_i is a vector; the inequality compares them element-wise, pairing up actions with the same names.) Also observe that we do not require i to play the dominant action in G or to do anything in particular in G′; i is simply required to change behavior in some way when a strictly dominant action changes to become dominated (and the rest of the game changes in an arbitrary way). Again, this is meant to be an extremely weak condition capturing the notion that an agent responds to its own payoffs.

4 Existing Solution Concepts are Strategic

We now demonstrate that our definition of strategic behavioral models does more than describe qualitative patterns of behavior that have been called “strategic” in the past: it also formally captures the predictions of various solution concepts both from classical game theory and from behavioral game theory (Nash equilibrium, correlated equilibrium, quantal response equilibrium, level-k, cognitive hierarchy, and quantal cognitive hierarchy).

We begin with a technical definition that is used in the proofs in this section: another notion of self-interest that we call self responsiveness.

Definition 12 (Self responsiveness).

A behavioral model π_i is self responsive if for any game G, there exists a game G′ such that

  1. π_i(G) ≠ π_i(G′), and

  2. u_j(a) = u_j′(a) for all j ≠ i and all a ∈ A.

We now show that quantal best response to a self responsive behavioral model is strategic.

Theorem 13.

Fix a set of precisions {λ_i > 0} and a profile of behavioral models π that satisfies

π_i(G) = QBR_i(π_{-i}(G); λ_i)

for all games G and players i. Then every behavioral model π_i is strategic.


Proof. Suppose that π satisfies the above for some set of precisions. We then prove the claim in two parts. In Part 1 we show that every behavioral model in π must be self responsive. In Part 2, we show that every behavioral model in π is strategic.

Part 1: For any profile of behavioral models π, precision λ_i > 0, and player i, the behavioral model G ↦ QBR_i(π_{-i}(G); λ_i) is self responsive.

We prove this by showing how to construct a game G′ satisfying Definition 12 from any game G. Fix an arbitrary G and i. One of agent i's actions must be assigned weakly less probability by QBR_i(π_{-i}(G); λ_i) than all others. Call this action a_i*. Now construct G′ as follows: set u_j′ = u_j for every j ≠ i, and set u_i′(a_i*, a_{-i}) = 1 and u_i′(a_i, a_{-i}) = 0 for every a_i ≠ a_i* and every a_{-i}.

Clearly G′ differs from G only in i's payoffs. Action a_i* was assigned weakly less probability by QBR_i(π_{-i}(G); λ_i) than any other action. However, a_i* is strictly dominant in G′, and hence must be assigned strictly greater probability than any other action by QBR_i(π_{-i}(G′); λ_i). Therefore, QBR_i(π_{-i}(G); λ_i) ≠ QBR_i(π_{-i}(G′); λ_i), and the model is self responsive.

Part 2: For any precision λ_i > 0 and profile π of self responsive behavioral models, the behavioral model G ↦ QBR_i(π_{-i}(G); λ_i) is strategic.

First, observe that strictly dominant actions always have higher expected utility than the actions that they dominate, and hence quantal best response to any behavioral model is dominance responsive, since higher expected utility actions are always played with higher probability.

It remains only to show that quantal best response is other responsive. Consider a game G in which agent i's expected utility vector varies with every change in agent j's behavior; a matching pennies game is one example. Let i be the row player and j be the column player. Choose a game G′ with u_i′ = u_i that differs only in j's payoffs; such a game is guaranteed to exist because π_j is self responsive. Notice that π_j(G) ≠ π_j(G′). Since u_i′ = u_i but π_j(G) ≠ π_j(G′), agent i's expected utilities differ between the two games, and therefore QBR_i(π_{-i}(G); λ_i) ≠ QBR_i(π_{-i}(G′); λ_i). ∎

Corollary 14.

All of QRE, Nash equilibrium, and correlated equilibrium are (profiles of) strategic behavioral models.


Proof. Immediate from Theorem 13 and the observation that best response is a special case of quantal best response. ∎

Theorem 15.

All of level-k, cognitive hierarchy, and quantal cognitive hierarchy are strategic behavioral models for agents of level 1 or level 2 (and higher).


Proof. Let π^0 be a profile of level-0 behavioral models, and fix precisions λ_k > 0 for all k ≥ 1. Choose behavioral model profiles π^1 and π^2 satisfying π_i^1(G) = QBR_i(π_{-i}^0(G); λ_1) and π_i^2(G) = QBR_i(π_{-i}^1(G); λ_2) for all games G and players i.

If all of the behavioral models in π^0 are self responsive, then by the argument in Part 2 of the proof of Theorem 13, all of the behavioral models in π^1 are strategic. Thus, noting that best response is a special case of quantal best response, all of the listed iterative models are strategic for levels 1 and higher.

Otherwise, all of the behavioral models in π^1 are self responsive by the argument in Part 1 of the proof of Theorem 13, and thus by the argument in Part 2 of the proof of Theorem 13, all of the behavioral models in π^2 are strategic. Thus, noting that best response is a special case of quantal best response, all of the listed iterative models are strategic for levels 2 and higher. ∎

5 Elementary Behavioral Models

Our main task in this paper is to separate nonstrategic behavior from strategic behavior. Now that we have formally defined the latter, we can introduce a class of behavioral models, called elementary models, that we will ultimately show are always nonstrategic. Observe that an agent reasoning strategically needs to account both for its own payoffs (in order to be dominance responsive) and for others’ payoffs (in order to be other responsive); thus, it must evaluate each outcome in multiple terms. Our key idea is thus to require that nonstrategic behavior independently “scores” each outcome using a single number. In this section, we formalize such a notion and illustrate its generality via examples of how it can be used to encode previously proposed “nonstrategic” behaviors.

5.1 Defining Elementary Behavioral Models

The formal definition of elementary behavioral models is unfortunately more complex than the intuition we just gave. The reason is that any tuple of real values can be encoded into a single real number; in information economics this is referred to as dimension smuggling (e.g., Nisan and Segal, 2006). Without ruling out dimension smuggling, therefore, a restriction that nonstrategic agents rely on only a single number would lack any force. We thus define a no smuggling criterion; it depends in turn on the concept of a dictatorial function.

Definition 16 (Dictatorial function).

A function f is dictatorial if its value is completely determined by a single input: there exists a j such that for all inputs x and x′, x_j = x′_j implies f(x) = f(x′).

This class of functions takes its name from the social choice condition by which it is inspired; one input to the function acts as a dictator over f's output.

Our no smuggling condition says that either a function is dictatorial or that for every input dimension (i.e., action chosen by each player), there exist inputs to the function differing in that dimension that produce the same output.

Definition 17 (No smuggling).

A function f satisfies no smuggling iff either f is dictatorial or, for every input dimension j and every output dimension k, there exist inputs x and x′ such that x_j ≠ x′_j and f(x)_k = f(x′)_k.

The no smuggling condition considers functions that reduce a vector of numbers to a smaller number of dimensions. For our purposes at the moment, it is helpful to imagine f : ℝ^n → ℝ (the vector is reduced to a single real number), though we will appeal to the general case in Section 7. The condition requires that f summarizes its input in a meaningful sense, rather than simply encoding all of the original numbers into the infinite number of digits of a single real number. Observe that if f is one to one (i.e., if every value in f's domain maps to a different value in its range) then it can be inverted, meaning that it performs dimension smuggling. The alternative is that there be at least one pair of inputs that produce the same output. For the sake of convenience, we impose a slightly stronger condition: either there exists at least one such pair of vectors differing in each input dimension, or the function is dictatorial. (That is, for the input dimension that completely determines a dictatorial function, we do not require the existence of a pair of inputs that produce the same output.)

Definition 18 (Elementary behavioral model).

A behavioral model π_i is elementary if it can be represented as f_i ∘ Φ, where

  1. Φ(a) = φ(u_1(a), …, u_n(a)) for every action profile a ∈ A,

  2. φ satisfies no smuggling, and

  3. f_i is an arbitrary function; we use it to map from Φ to Δ(A_i(G)).

For convenience, when condition 1 holds we refer to Φ as the potential map for φ.

That is, an elementary behavioral model works as follows. First, given an arbitrary game G, and for each action profile a, we apply the same no-smuggling function φ to the n-tuple of real values (u_1(a), …, u_n(a)), producing in each case a single real value. We represent all of these real values in a mapping we call Φ; this potential map is a function of the same size as each of the utility functions. We then apply an arbitrary function f_i to Φ, producing a probability distribution over A_i.
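The two-stage composition (a no-smuggling function φ applied profile by profile, followed by an arbitrary function f_i) can be sketched as follows. The encoding and function names here are our own illustrative assumptions; the usage example encodes a maxmax-style model for the row player via a dictatorial φ.

```python
import numpy as np

def elementary_model(U_list, phi, f_i):
    """Two-player sketch of an elementary behavioral model: apply the
    same function phi to the tuple of utilities at every action
    profile to obtain the potential map Phi, then hand Phi to an
    arbitrary function f_i returning a distribution over actions."""
    U = np.stack(U_list, axis=-1)          # shape (|A_1|, |A_2|, n)
    Phi = np.apply_along_axis(phi, -1, U)  # one real number per profile
    return f_i(Phi)

def uniform_over_best_rows(Phi):
    """An example f_i: randomize uniformly over the row player's
    actions that appear in a profile of maximal potential."""
    row_best = Phi.max(axis=1)
    best = np.isclose(row_best, row_best.max())
    return best / best.sum()

# A maxmax-style model for the row player: phi is the dictatorial map
# (u_1, u_2) -> u_1, so it satisfies no smuggling.
U1 = np.array([[3.0, 0.0], [2.0, 2.0]])
U2 = np.array([[1.0, 1.0], [1.0, 1.0]])
maxmax = elementary_model([U1, U2], lambda u: u[0], uniform_over_best_rows)
```

Note that f_i sees only the single-number scores in Φ, never the underlying utility tuples; this is exactly the restriction the definition imposes.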

5.2 Examples of Elementary Behavioral Models

To demonstrate the generality of elementary behavioral models, we show how to encode each of the candidate level-0 behavioral models that we proposed in our past work (Wright and Leyton-Brown, 2014, 2019). (Thus, although that work only appealed to intuition, we can now conclude that these behavioral models are all nonstrategic.)

We begin with the simplest behavioral models: those that depend only on a given agent i's utilities u_i.

Example 19 (Maxmax behavioral model).

A maxmax action for agent i is an action giving rise to the best best case: a_i* ∈ argmax_{a_i ∈ A_i} max_{a_{-i} ∈ A_{-i}} u_i(a_i, a_{-i}). An agent that wishes to maximize its possible payoff will play a maxmax action. The maxmax behavioral model uniformly randomizes over all of i's maxmax actions in G.

Example 20 (Maxmin behavioral model).

A maxmin action for agent i is the action with the best worst-case guarantee: a_i* ∈ argmax_{a_i ∈ A_i} min_{a_{-i} ∈ A_{-i}} u_i(a_i, a_{-i}). This is the safest action to play against hostile agents. The maxmin behavioral model uniformly randomizes over all of i's maxmin actions in G.

Example 21 (Minimax regret behavioral model).

Following Savage (1951), for each action profile, an agent has a possible regret: how much more utility could the agent have gained by playing the best response to the other agents' actions? Each of the agent's actions is therefore associated with a vector of possible regrets, one for each possible profile of the other agents' actions. A minimax regret action is an action whose maximum regret (in the vector of possible regrets) is minimal. The minimax regret behavioral model uniformly randomizes over all of i's minimax regret actions in G. That is, if

r(a_i, a_{-i}) = [max_{a_i′ ∈ A_i} u_i(a_i′, a_{-i})] − u_i(a_i, a_{-i})

is the regret of agent i in action profile (a_i, a_{-i}), then the minimax regret behavioral model uniformly randomizes over argmin_{a_i ∈ A_i} max_{a_{-i} ∈ A_{-i}} r(a_i, a_{-i}).

Because each of the maxmax, maxmin, and minimax regret behavioral models depends only on agent i's payoffs, we can set φ(u_1(a), …, u_n(a)) = u_i(a) in each case; φ satisfies no smuggling because it is dictatorial. The encodings differ only in their choice of f_i. These vary in their complexity (e.g., for maxmax, f_i simply randomizes uniformly over all actions corresponding to the largest potential value; for minimax regret, it is necessary to compute a best response for each action profile). However, recall that f_i is an arbitrary function; thus, this complexity is not a problem for our encoding.
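Under this dictatorial φ, the potential map for agent i is simply i's own payoff matrix, and the maxmin and minimax regret encodings differ only in f_i. The following is an illustrative sketch with names of our own choosing, for the row player of a two-player game.

```python
import numpy as np

def maxmin_f(Phi_i):
    """f_i for the maxmin model: randomize uniformly over the row
    actions with the best worst case (Phi_i is the potential map,
    here agent i's own payoff matrix)."""
    worst = Phi_i.min(axis=1)             # worst case of each row action
    best = np.isclose(worst, worst.max())
    return best / best.sum()

def minimax_regret_f(Phi_i):
    """f_i for the minimax regret model: the regret of a profile is
    the gap to the best response against that column."""
    regret = Phi_i.max(axis=0) - Phi_i    # r(a_i, a_-i)
    worst_regret = regret.max(axis=1)
    best = np.isclose(worst_regret, worst_regret.min())
    return best / best.sum()

Phi = np.array([[4.0, 0.0],
                [3.0, 1.0],
                [2.0, 2.0]])
```

On this example potential map the two models disagree: maxmin selects the third action (guaranteeing 2), while minimax regret selects the second (whose worst regret is 1).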

Other behavioral models depend on both agents’ utilities, and so require different functions.

Example 22 (Max welfare behavioral model).

A max welfare action is part of some action profile that maximizes the sum of the agents' utilities. The max welfare behavioral model uniformly randomizes over max welfare actions in G.

We can encode the max welfare behavioral model as elementary by setting the potential to the sum of the agents' utilities, Φ(u) = Σ_j u_j; Φ satisfies no smuggling because it is continuous. We then define the decision function to uniformly randomize over all actions that tie for the largest potential value.
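A sketch of this encoding: compute the welfare potential of each outcome, then randomize uniformly over row-player actions that appear in some welfare-maximizing profile (the matrices are hypothetical):

```python
# Max welfare behavioral model (sketch), row player's perspective.
# U1 and U2 are hypothetical payoff matrices for the two agents.

def max_welfare_distribution(U1, U2):
    # Potential of each outcome: the sum of the agents' utilities.
    welfare = [[U1[a][b] + U2[a][b] for b in range(len(U1[0]))]
               for a in range(len(U1))]
    top = max(max(row) for row in welfare)
    # Row actions appearing in some welfare-maximizing profile.
    winners = sorted({a for a, row in enumerate(welfare)
                      for w in row if w == top})
    return {a: 1 / len(winners) for a in winners}

U1 = [[2, 0], [1, 1]]
U2 = [[1, 0], [1, 4]]
print(max_welfare_distribution(U1, U2))  # {1: 1.0}
```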

Example 23 (Fair behavioral model).

Let the unfairness of an action profile a be the difference between the maximum and minimum payoffs among the agents under that action profile:

unfairness(a) = max_j u_j(a) − min_j u_j(a).

Then a "fair" outcome minimizes this difference in utilities. A fair action is part of a minimally unfair action profile. The fair behavioral model uniformly randomizes over fair actions in G.

We can encode the fair behavioral model as elementary by setting Φ(u) = max_j u_j − min_j u_j; again, Φ satisfies no smuggling because it is continuous. We then define the decision function to uniformly randomize over all actions that tie for the smallest potential value.
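The same pattern works for the fair model, now minimizing the unfairness potential (for two agents the max–min spread is just the absolute payoff difference; the matrices are hypothetical):

```python
# Fair behavioral model (sketch) for two agents: uniform over row
# actions appearing in a minimally unfair profile. U1, U2 hypothetical.

def fair_distribution(U1, U2):
    # Unfairness of each outcome: spread between the agents' payoffs.
    unfairness = [[abs(U1[a][b] - U2[a][b]) for b in range(len(U1[0]))]
                  for a in range(len(U1))]
    low = min(min(row) for row in unfairness)
    fair = sorted({a for a, row in enumerate(unfairness)
                   for d in row if d == low})
    return {a: 1 / len(fair) for a in fair}

U1 = [[5, 0], [2, 3]]
U2 = [[1, 0], [2, 0]]
print(fair_distribution(U1, U2))  # {0: 0.5, 1: 0.5}
```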

Finally, we note that all of the examples just given are binary: actions are either fair/maxmin/etc. or they are not. By changing only the decision function, we could similarly construct continuous variants of each concept, in which actions that achieve nearly maximal potentials are played nearly as often by the behavioral model, and so on.
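For instance, one continuous variant of maxmax (our own choice of functional form, not one proposed in the paper) replaces argmax-plus-uniform with a softmax over best-case payoffs, governed by a hypothetical precision parameter lam:

```python
import math

# Continuous variant of maxmax (sketch): actions with nearly maximal
# best-case payoff receive nearly as much probability mass.

def soft_maxmax(U_i, lam=1.0):
    best_case = [max(row) for row in U_i]
    weights = [math.exp(lam * v) for v in best_case]
    z = sum(weights)
    return [w / z for w in weights]

U_i = [[3, 0], [2, 2]]
print(soft_maxmax(U_i, lam=2.0))  # most mass on action 0, some on action 1
```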

6 Elementary Behavioral Models are Nonstrategic

We are now ready to show that elementary behavioral models are never strategic. This result is important because it achieves our key goal of distinguishing strategic from nonstrategic behavioral models via a formal mathematical criterion, rather than relying on the intuitive sense that a model “depends on” an explicit model of an opponent’s behavior. In fact, we do a bit better than simply showing that elementary models are nonstrategic: we show that the space of dominance responsive behavioral models is exactly partitioned into elementary models and strategic models.

Theorem 24.

A dominance responsive behavioral model is strategic if and only if it is not elementary.


Proof. If direction: no elementary behavioral model is strategic. Suppose for contradiction that some elementary behavioral model f is strategic, and let Φ be the potential map for f. There are two ways in which Φ can satisfy the no-smuggling condition:

  1. Φ is a dictator for agent i. Because f is other responsive, there exist games G and G′ that agree on i's utilities but for which f(G) ≠ f(G′). But since Φ is a dictator for i, the two games induce identical potential values, and hence f(G) = f(G′), a contradiction.

  2. For every agent i, there exist utility tuples u and u′ such that Φ(u) = Φ(u′) and u_i ≠ u′_i. Let u and u′ be two such tuples; we use them to construct two-player games G and G′ in which i has two actions and in which u and u′ are the only payoff tuples that occur.



    Note that one of i's two actions is strictly dominant for i in G and strictly dominated for i in G′. By dominance responsiveness, f(G) ≠ f(G′). But Φ(u) = Φ(u′), and since u and u′ are the only payoff tuples that occur in either G or G′, the two games induce identical potential values, and hence f(G) = f(G′), a contradiction.

Only-if direction: if a dominance responsive behavioral model f is not strategic, then it is elementary. We show how to represent f by constructing an appropriate potential map and decision function. Since f is dominance responsive but not strategic, it is not other responsive. Therefore, for every pair of games G and G′ that agree on i's utilities, f(G) = f(G′). Define Φ(u) = u_i, which clearly satisfies no smuggling. Define the decision function to apply f to the game that assigns i the utilities given by the potential values and assigns every other player utility 0. Since differences in the other agents' utilities never change the output of f, this representation agrees with f on every game. ∎

7 Finite Aggregations of Elementary Models

We now consider behavioral models that are constructed by drawing together the predictions of multiple elementary models. For example, we might build a behavioral model from some convex combination of the predictions of the elementary behavioral models defined in Section 5.2 (as, indeed, we did in our past work (Wright and Leyton-Brown, 2014, 2019)). Our key result in this section is that the class of such models is strictly larger than the class of elementary behavioral models. However, this larger class still consists entirely of nonstrategic models, under an appropriate strengthening of our no smuggling condition.

How can this be—didn’t we already characterize all nonstrategic behavioral models? Not quite: recall that Theorem 24 partitioned the set of dominance responsive behavioral models into strategic and elementary behavioral models. In addition, some elementary models are non-dominance-responsive. The set of non-dominance-responsive behavioral models also includes the extra models made possible by taking finite aggregations. This section thus introduces a natural class of nonstrategic behavior that sometimes violates our minimal notion of self interest without allowing completely arbitrary behavior.

Definition 25 (Finite aggregations of elementary behavioral models).

A behavioral model f is a finite aggregation of elementary behavioral models if it can be represented as

f(G) = g(f_1(G), …, f_m(G)),

where g is an arbitrary function, m is finite, and each f_k (k = 1, …, m) is an elementary behavioral model.
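A minimal sketch of such an aggregation, taking g to be a convex combination (one simple choice of aggregation function) of two hypothetical elementary predictions:

```python
# Finite aggregation of elementary behavioral models (sketch): apply an
# aggregation function g to the component models' predictions. Here g
# is a convex combination with fixed weights.

def aggregate(predictions, weights):
    # predictions: distributions over the same action set, one per model
    n = len(predictions[0])
    return [sum(w * p[a] for w, p in zip(weights, predictions))
            for a in range(n)]

maxmax_pred  = [1.0, 0.0]   # e.g., a maxmax model picks action 0
welfare_pred = [0.0, 1.0]   # e.g., a max welfare model picks action 1
print(aggregate([maxmax_pred, welfare_pred], [0.5, 0.5]))  # [0.5, 0.5]
```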

The class of finite aggregations of elementary behavioral models is strictly larger than the class of elementary behavioral models.

Theorem 26.

Finite aggregations of elementary behavioral models are not necessarily elementary themselves.


Proof. It suffices to exhibit two elementary behavioral models whose linear combination is not elementary. Let f_1 be the maxmax behavioral model defined in Example 19; let f_2 be the max welfare behavioral model defined in Example 22. Let f be their equal-weight mixture, f = ½f_1 + ½f_2.

Suppose for contradiction that f is elementary, with potential map Φ. Since f is a function of all agents' utilities, Φ is not dictatorial for any agent. Therefore, from no smuggling, there must exist two utility tuples u and u′ satisfying Φ(u) = Φ(u′) and differing in agent 1's utility. Let u and u′ be two such tuples, and choose the remaining payoff entries appropriately. Now consider the following two-player games G and G′, in which agent 1 is the row player.



Since the two games differ only in a single outcome, at which the payoff tuples are u and u′ respectively, and Φ(u) = Φ(u′), it must be that f(G) = f(G′). But in fact f(G) ≠ f(G′), yielding our contradiction. ∎

We need to strengthen our no-smuggling condition for behavioral models based on finite aggregations of elementary models: the existence of multiple potential functions, combined with the fact that we allow arbitrary aggregation functions, makes our previous definition too weak to prevent dimension smuggling.

Definition 27 (Joint no smuggling).

A set of potential functions Φ_1, …, Φ_m is jointly no-smuggling if the combined function Φ(u) = (Φ_1(u), …, Φ_m(u)) satisfies the no-smuggling condition.

The joint no-smuggling condition is a benign assumption. In particular, it is satisfied by every set of continuous potentials, including all linear combinations of utilities (note that a behavioral model with a continuous potential need not be continuous itself).

This new definition suffices to give us our final result: that finite aggregations of elementary behavioral models are always nonstrategic.

Theorem 28.

Let F be a finite set of elementary behavioral models. If the potentials associated with the models in F are jointly no-smuggling, then no finite aggregation of F is strategic.


Proof. The argument in the proof of the if direction of Theorem 24 applies to finite aggregations of elementary behavioral models in exactly the same way, since the joint no-smuggling condition guarantees the existence of utility tuples that are indistinguishable by the joint potential but that differ in agent i's utility. ∎

8 Discussion and Future Work

In this work, we proposed elementary behavioral models and their finite aggregations as mathematical characterizations of classes of nonstrategic decision rules. These classes are constructively defined, in the sense that membership of a rule is verified by demonstrating how to represent the rule in a specific form—as a function of the output of a non-smuggling potential map—rather than by proving that it cannot be represented as a response to probabilistic beliefs.

It is interesting to note that various special cases of strategic solution concepts are nonstrategic under our definition. For example, the equilibrium of a two-player zero-sum game can be computed by considering only the utility of a single agent, and hence the behavior of an equilibrium-playing agent in such a game can be computed by an elementary behavioral model. Similarly, an equilibrium of a potential game can of course be computed in terms of outcome values computed by a potential function (Monderer and Shapley, 1996). In repeated settings, many no-regret learning rules (which are guaranteed to converge to a coarse correlated equilibrium) can be executed by agents that take account only of their own utilities. One thing these exceptions all have in common is that they are also computationally easy, unlike general Nash equilibrium, which is known to be hard in a precise computational sense (Daskalakis et al., 2009; Chen and Deng, 2006). The equilibrium of a zero-sum game can be found in polynomial time by a linear program; an equilibrium of a potential game can be found simply by maximizing the potential function over all the outcomes. No-regret algorithms are cheap to run, requiring work in each time period that is linear in the number of actions, and converge rapidly to coarse correlated equilibrium. However, the connection between ease of computation and strategic simplicity is not an equivalence. For example, correlated equilibrium in general games can be computed in polynomial time by a linear program, but cannot in general be computed by an elementary behavioral model (see Corollary 14). An attractive future direction is to shed further light on the connection between computational and strategic simplicity.
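To illustrate the second of these easy cases: in an exact potential game, any outcome maximizing the potential is a pure Nash equilibrium, so equilibrium computation reduces to a scan over outcomes (the potential matrix below is hypothetical):

```python
# Finding an equilibrium of a potential game (sketch): any maximizer of
# the potential function over outcomes is a pure Nash equilibrium.

def potential_maximizer(P):
    best, arg = None, None
    for a, row in enumerate(P):
        for b, v in enumerate(row):
            if best is None or v > best:
                best, arg = v, (a, b)
    return arg

P = [[1, 0],
     [2, 3]]  # hypothetical potential values for a 2x2 game
print(potential_maximizer(P))  # (1, 1)
```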

We also observe that our characterization of nonstrategic behavior in this paper is a binary distinction: in the view we have advanced, a behavioral model is either nonstrategic or it is not. An intriguing question for future work is whether such a distinction can be made more quantitative: i.e., is there a sense in which agents are nonstrategic to a greater or lesser degree that is distinct from the number of steps of strategic reasoning that they perform?


This work was funded in part by an NSERC E.W.R. Steacie Fellowship and an NSERC Discovery Grant. Part of this work was done at Microsoft Research New York while the first author was a postdoctoral researcher and the second author was a visiting researcher.


  • Adlakha and Johari (2013) Adlakha, S. and Johari, R. (2013). Mean field equilibrium in dynamic games with strategic complementarities. Operations Research, 61(4):971–989.
  • Airiau and Sen (2003) Airiau, S. and Sen, S. (2003). Strategic bidding for multiple units in simultaneous and sequential auctions. Group Decision and Negotiation, 12(5):397–413.
  • Babaioff et al. (2004) Babaioff, M., Nisan, N., and Pavlov, E. (2004). Mechanisms for a spatially distributed market. In Proceedings of the 5th ACM Conference on Electronic Commerce, pages 9–20.
  • Bar-Isaac and Ganuza (2005) Bar-Isaac, H. and Ganuza, J.-J. (2005). Teaching to the top and searching for superstars. Technical report, New York University, Leonard N. Stern School of Business, Department of Economics.
  • Bernheim (1984) Bernheim, B. (1984). Rationalizable Strategic Behavior. Econometrica, 52(4):1007–1028.
  • Camerer et al. (2004) Camerer, C., Ho, T., and Chong, J. (2004). A cognitive hierarchy model of games. Quarterly Journal of Economics, 119(3):861–898.
  • Camerer (2003) Camerer, C. F. (2003). Behavioral Game Theory: Experiments in Strategic Interaction. Princeton University Press.
  • Chen and Deng (2006) Chen, X. and Deng, X. (2006). Settling the complexity of two-player Nash equilibrium. In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06), pages 261–272.
  • Costa-Gomes et al. (2001) Costa-Gomes, M., Crawford, V., and Broseta, B. (2001). Cognition and behavior in normal-form games: An experimental study. Econometrica, 69(5):1193–1235.
  • Crawford et al. (2010) Crawford, V. P., Costa-Gomes, M. A., Iriberri, N., et al. (2010). Strategic thinking. Working paper.
  • Daskalakis et al. (2009) Daskalakis, C., Goldberg, P. W., and Papadimitriou, C. H. (2009). The complexity of computing a Nash equilibrium. SIAM Journal on Computing, 39(1):195–259.
  • Gerding et al. (2011) Gerding, E. H., Robu, V., Stein, S., Parkes, D. C., Rogers, A., and Jennings, N. R. (2011). Online mechanism design for electric vehicle charging. In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 2, pages 811–818.
  • Ghosh and Hummel (2012) Ghosh, A. and Hummel, P. (2012). Implementing optimal outcomes in social computing: a game-theoretic approach. In Proceedings of the 21st International Conference on World Wide Web, pages 539–548.
  • Gilboa and Samet (1989) Gilboa, I. and Samet, D. (1989). Bounded versus unbounded rationality: The tyranny of the weak. Games and Economic Behavior, 1(3):213–221.
  • Grabisch et al. (2017) Grabisch, M., Mandel, A., Rusinowska, A., and Tanimura, E. (2017). Strategic influence in social networks. Mathematics of Operations Research, 43(1):29–50.
  • Hartford et al. (2016) Hartford, J. S., Wright, J. R., and Leyton-Brown, K. (2016). Deep learning for predicting human strategic behavior. In Advances in Neural Information Processing Systems, pages 2424–2432.
  • Lee (2014) Lee, H. (2014). Algorithmic and game-theoretic approaches to group scheduling. In Proceedings of the 2014 International Conference on Autonomous Agents and Multi-Agent Systems, pages 1709–1710.
  • Li and Tesauro (2003) Li, C. and Tesauro, G. (2003). A strategic decision model for multi-attribute bilateral negotiation with alternating. In Proceedings of the 4th ACM Conference on Electronic Commerce, pages 208–209.
  • McKelvey and Palfrey (1995) McKelvey, R. and Palfrey, T. (1995). Quantal response equilibria for normal form games. Games and Economic Behavior, 10(1):6–38.
  • Monderer and Shapley (1996) Monderer, D. and Shapley, L. S. (1996). Potential games. Games and Economic Behavior, 14(1):124–143.
  • Nagel (1995) Nagel, R. (1995). Unraveling in guessing games: An experimental study. American Economic Review, 85(5):1313–1326.
  • Nisan and Segal (2006) Nisan, N. and Segal, I. (2006). The communication requirements of efficient allocations and supporting prices. Journal of Economic Theory, 129(1):192–224.
  • Pearce (1984) Pearce, D. (1984). Rationalizable Strategic Behavior and the Problem of Perfection. Econometrica, 52(4):1029–1050.
  • Roth and Ockenfels (2002) Roth, A. E. and Ockenfels, A. (2002). Last-minute bidding and the rules for ending second-price auctions: Evidence from eBay and Amazon auctions on the internet. American Economic Review, 92(4):1093–1103.
  • Rubinstein (1998) Rubinstein, A. (1998). Modeling bounded rationality. MIT press.
  • Sandholm and Lesser (2001) Sandholm, T. W. and Lesser, V. R. (2001). Leveled commitment contracts and strategic breach. Games and Economic Behavior, 35(1-2):212–270.
  • Savage (1951) Savage, L. (1951). The Theory of Statistical Decision. Journal of the American Statistical Association, 46(253):55–67.
  • Shoham and Leyton-Brown (2008) Shoham, Y. and Leyton-Brown, K. (2008). Multiagent Systems: Algorithmic, Game-theoretic, and Logical Foundations. Cambridge University Press.
  • Stahl and Wilson (1994) Stahl, D. and Wilson, P. (1994). Experimental evidence on players’ models of other players. Journal of Economic Behavior and Organization, 25(3):309–327.
  • Train (2009) Train, K. (2009). Discrete Choice Methods with Simulation. Cambridge University Press.
  • Walsh et al. (2000) Walsh, W. E., Wellman, M. P., and Ygge, F. (2000). Combinatorial auctions for supply chain formation. In Proceedings of the 2nd ACM conference on Electronic commerce, pages 260–269.
  • Wright and Leyton-Brown (2014) Wright, J. R. and Leyton-Brown, K. (2014). Level-0 meta-models for predicting human behavior in games. In Proceedings of the ACM Conference on Economics and Computation (EC'14), pages 857–874.
  • Wright and Leyton-Brown (2017) Wright, J. R. and Leyton-Brown, K. (2017). Predicting human behavior in unrepeated, simultaneous-move games. Games and Economic Behavior, 106:16–37.
  • Wright and Leyton-Brown (2019) Wright, J. R. and Leyton-Brown, K. (2019). Level-0 models for predicting human behavior in games. Journal of Artificial Intelligence Research, 64:357–383.