# The Exact Computational Complexity of Evolutionarily Stable Strategies

While the computational complexity of many game-theoretic solution concepts, notably Nash equilibrium, has now been settled, the question of determining the exact complexity of computing an evolutionarily stable strategy has resisted solution since attention was drawn to it in 2004. In this paper, I settle this question by proving that deciding the existence of an evolutionarily stable strategy is Σ_2^P-complete.


## 1 Introduction

Game theory provides ways of formally representing strategic interactions between multiple players, as well as a variety of solution concepts for the resulting games. The best-known solution concept is that of Nash equilibrium (Nash, 1950), where each player plays a best response to all the other players’ strategies. The computational complexity of, given a game in normal form, computing a (any) Nash equilibrium, remained open for a long time and was accorded significant importance (Papadimitriou, 2001). (I will give a brief introduction to / review of computational complexity in Section 2; the reader unfamiliar with it may prefer to read this section first.) An elegant algorithm for the two-player case, the Lemke-Howson algorithm (Lemke and Howson, 1964), was proved to require exponential time on some game families by Savani and von Stengel (2006). Finally, in a breakthrough series of papers, the problem was established to be PPAD-complete, even in the two-player case (Daskalakis et al., 2009; Chen et al., 2009). (Depending on the precise formulation, the problem can actually be FIXP-complete for more than two players (Etessami and Yannakakis, 2010).)

Not all Nash equilibria are created equal; for example, one can Pareto-dominate another. Moreover, generally, the set of Nash equilibria does not satisfy interchangeability. That is, if player 1 plays her strategy from one Nash equilibrium, and player 2 plays his strategy from another Nash equilibrium, the result is not guaranteed to be a Nash equilibrium. This leads to the dreaded equilibrium selection problem: if one plays a game for the first time, how is one to know according to which equilibrium to play? This problem is arguably exacerbated by the fact that determining whether equilibria with particular properties, such as placing probability on a particular pure strategy or having at least a certain level of social welfare, exist is NP-complete in two-player games (and associated optimization problems are inapproximable unless P=NP) (Gilboa and Zemel, 1989; Conitzer and Sandholm, 2008). In any case, equilibria are often seen as a state to which play could reasonably converge, rather than an outcome that can necessarily be arrived at immediately by deduction.

In this paper, we consider the concept of evolutionarily stable strategies, a solution concept for symmetric games with two players. s (or s′) will denote a pure strategy and σ (or σ′) a mixed strategy, where σ(s) denotes the probability that mixed strategy σ places on pure strategy s. u(s, s′) is the utility that a player playing s obtains when playing against a player playing s′, and

  u(σ, σ′) = ∑_{s,s′} σ(s) σ′(s′) u(s, s′)

is the natural extension to mixed strategies.
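As an illustration (mine, not from the paper), this extension is just a bilinear form over the pure-strategy payoff matrix, which makes it a one-liner to compute:

```python
# Expected utility u(sigma, sigma') in a symmetric two-player game,
# computed as the bilinear form sigma^T U sigma'.
# U[s][s2] is the row player's payoff for pure strategies s vs. s2.

def mixed_utility(U, sigma, sigma2):
    return sum(sigma[s] * sigma2[s2] * U[s][s2]
               for s in range(len(U)) for s2 in range(len(U)))

# Demo payoff matrix: the Hawk-Dove game discussed below
# (rows/columns ordered Dove, Hawk).
U = [[1, 0], [2, -1]]
```

For instance, `mixed_utility(U, [0.5, 0.5], [0.5, 0.5])` evaluates to 0.5.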

###### Definition 1 (Maynard Smith and Price (1973))

Given a symmetric two-player game, a mixed strategy σ is said to be an evolutionarily stable strategy (ESS) if both of the following properties hold.

1. (Symmetric Nash equilibrium property) For any mixed strategy σ′, we have u(σ, σ) ≥ u(σ′, σ).

2. For any mixed strategy σ′ (σ′ ≠ σ) for which u(σ′, σ) = u(σ, σ), we have u(σ, σ′) > u(σ′, σ′).

The intuition behind this definition is that a population of players playing σ cannot be successfully “invaded” by a small population of players playing some σ′ ≠ σ, because they will perform strictly worse than the players playing σ and therefore they will shrink as a fraction of the population. They perform strictly worse either because (1) u(σ′, σ) < u(σ, σ), and because σ has dominant presence in the population this outweighs performance against σ′; or because (2) u(σ′, σ) = u(σ, σ) so the second-order effect of performance against σ′ becomes significant, but in fact σ′ performs worse against itself than σ performs against it, that is, u(σ′, σ′) < u(σ, σ′).

Example (Hawk-Dove game (Maynard Smith and Price, 1973)). Consider the following symmetric two-player game:

|      | Dove | Hawk  |
|------|------|-------|
| Dove | 1,1  | 0,2   |
| Hawk | 2,0  | -1,-1 |

The unique symmetric Nash equilibrium σ of this game is 50% Dove, 50% Hawk. For any σ′, we have u(σ′, σ) = 1/2. That is, everything is a best response to σ. We also have u(σ, σ′) = 2σ′(Dove) − 1/2, and u(σ′, σ′) = −2σ′(Dove)² + 4σ′(Dove) − 1. The difference between the former and the latter expression is 2(σ′(Dove) − 1/2)². This difference is clearly positive for all σ′ ≠ σ, implying that σ is an ESS.
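The arithmetic in this example can be checked mechanically. The sketch below is mine; the payoff matrix is transcribed from the table above, and the check verifies that the gap u(σ, σ′) − u(σ′, σ′) equals 2(σ′(Dove) − 1/2)²:

```python
# Verify, for the Hawk-Dove game, that
#   u(sigma, sigma') - u(sigma', sigma') = 2 * (p - 1/2)**2,
# where sigma = (1/2, 1/2) and sigma' = (p, 1 - p).

U = [[1, 0], [2, -1]]  # rows/columns: Dove, Hawk

def u(x, y):
    return sum(x[i] * y[j] * U[i][j] for i in range(2) for j in range(2))

sigma = [0.5, 0.5]
for p in [0.0, 0.3, 0.5, 0.9, 1.0]:
    s2 = [p, 1 - p]
    gap = u(sigma, s2) - u(s2, s2)
    assert abs(gap - 2 * (p - 0.5) ** 2) < 1e-9
```

The gap is zero exactly at σ′ = σ and strictly positive elsewhere, which is the ESS condition.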

Intuitively, the problem of computing an ESS appears significantly harder than that of computing a Nash equilibrium, or even a Nash equilibrium with a simple additional property such as those described earlier. In the latter type of problem, while it may be difficult to find the solution, once found, it is straightforward to verify that it is in fact a Nash equilibrium (with the desired simple property). This is not so for the notion of ESS: given a candidate strategy, it does not appear straightforward to figure out whether there exists a strategy that successfully invades it. However, appearances can be deceiving; perhaps there is a not entirely obvious, but nevertheless fast and elegant way of checking whether such an invading strategy exists. Even if not, it is not immediately clear whether this makes the problem of finding an ESS genuinely harder. Computational complexity provides the natural toolkit for answering these questions.

The complexity of computing whether a game has an evolutionarily stable strategy (for an overview, see Chapter 29 of the Algorithmic Game Theory book (Suri, 2007)) was first studied by Etessami and Lochbihler (2008), who proved that the problem is both NP-hard and coNP-hard, as well as that the problem is contained in Σ_2^P (the class of decision problems that can be solved in nondeterministic polynomial time when given access to an NP oracle; see also Section 2). Nisan (2006) subsequently proved the stronger hardness result that the problem is coDP-hard. (An early version of Etessami and Lochbihler (2008) appeared in 2004.) He also observed that it follows from his reduction that the problem of determining whether a given strategy is an ESS is coNP-hard (and Etessami and Lochbihler (2008) then pointed out that this also follows from their reduction). Etessami and Lochbihler (2008) also showed that the problem of determining the existence of a regular ESS is NP-complete. As was pointed out in both papers, all of this still leaves the main question of the exact complexity of the general ESS problem open. In this paper, this is settled: the problem is in fact Σ_2^P-complete. After the review of computational complexity (Section 2), I will briefly discuss the significance of this result (Section 3).

The remainder of the paper—to which the reader not interested in a review of computational complexity or a discussion of the significance of the result is welcome to jump—contains the proof, which is structured as follows. In Section 4, Lemma 1 states that the slightly more general problem of determining whether an ESS exists whose support is restricted to a subset of the strategies is Σ_2^P-hard. This is the main part of the proof. Then, in Section 5, Lemma 2 points out that if two pure strategies are exact duplicates, neither of them can occur in the support of any ESS. By this, we can disallow selected strategies from taking part in any ESS simply by duplicating them. Combining this with the first result, we arrive at the main result, Theorem 1.

One may well complain that Lemma 2 is a bit of a cheat; perhaps we should just consider duplicate strategies to be “the same” strategy and merge them back into one. As the reader probably suspects, such a hasty and limited patch will not avoid the hardness result. Even something a little more thorough, such as iterated elimination of very weakly dominated strategies (in some order), will not suffice: in Appendix A I show, with additional analysis and modifications, that the result holds even in games where each pure strategy is the unique best response to some mixed strategy.

## 2 Brief Background on Computational Complexity

Much of theoretical computer science is concerned with designing algorithms that solve computational problems fast (as well as, of course, correctly). For example, one computational problem is the following: given a two-player game in normal form, determine whether there exists a Nash equilibrium in which player 1 obtains at least a given utility. A specific two-player normal-form game would be an instance of that problem. What does it mean to solve a problem fast? This is fundamentally about how the runtime scales with the size of the input (e.g., the size of the game). The focus is generally primarily on whether the runtime scales as a polynomial function of the input, which is considered fast (or efficient)—as opposed to, say, an exponential function.

For many problems, including the one described in the previous paragraph, we do not have any efficient algorithm, nor do we have a proof that no such algorithm exists. However, in these situations, we can often prove that the problem is at least as hard as any other problem in a large class. That is, we can prove that if the problem under consideration admits an efficient algorithm, then so do all other problems in a large class. The most famous such class is NP, which consists of decision problems, i.e., problems for which every instance has a “yes” or “no” answer. Specifically, it consists of decision problems that are such that for every “yes” instance, there is a succinct proof (that can be efficiently checked) that the answer is “yes.” A problem that is at least as hard as any problem in NP is said to be NP-hard. If an NP-hard problem is also in the class NP, it is said to be NP-complete; thus, in a sense, all NP-complete problems are equally hard.

Many problems of interest are NP-complete. The paradigmatic NP-complete problem is the satisfiability problem, which asks, given a propositional logic formula, whether there is a way to set the variables in this formula to true or false in such a way that the formula as a whole evaluates to true. For example, the formula (x₁ ∨ x₂) ∧ (¬x₁ ∨ ¬x₂) is a “yes” instance, because setting x₁ to true and x₂ to false results in the formula evaluating to true. The succinct proof that an instance is a “yes” instance consists simply of values that the variables can take to make the formula evaluate to true. As it turns out, the problem introduced at the beginning of this section is NP-complete. It is in NP because given the supports of the strategies in a Nash equilibrium with high utility for player 1, we can easily reconstruct such an equilibrium; therefore, the supports serve as the proof that it is a “yes” instance. Many similar problems are also NP-complete (Gilboa and Zemel, 1989; Conitzer and Sandholm, 2008).

A standard way to prove that a problem B is NP-hard is to take another problem A that is already known to be NP-hard, and reduce it to problem B. A reduction here is an efficiently computable function that maps every instance of A to some instance of B with the same truth value (“yes” or “no”). Given such a reduction, an efficient algorithm for B could be used to solve A as well, proving that in the relevant sense, B is at least as hard as A.

There are other classes of interest besides NP, with hardness and completeness defined similarly. For example, coNP consists of problems where there is a succinct proof of an instance being a “no” instance. The class Σ_2^P is most easily illustrated by a standard complete problem for it. As in the satisfiability problem, we are given a propositional logic formula, but this time, the variables are split into two sets, X₁ and X₂. We are asked whether there exists a way to set the variables in X₁ such that no matter how the variables in X₂ are set, the formula evaluates to true. (Note here the similarity to the ESS problem, where we are asked whether there exists a strategy σ such that no matter which σ′ invades, the invasion is repelled.) Similarly, a complete problem for the class Π_2^P (which equals coΣ_2^P) asks whether no matter how the variables in X₁ are set, there is a way to set the variables in X₂ so that the formula evaluates to true. These classes are said to be at the second level of the polynomial hierarchy, and the generalization to higher levels is straightforward.
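For intuition, the ∃∀ problem can be decided by enumerating all assignments. The brute-force sketch below is mine and runs in exponential time, which is exactly why the class is believed to be hard:

```python
# Brute-force decision of the canonical Sigma_2^P-complete problem:
# does there EXIST an assignment to X1 such that FOR ALL assignments
# to X2 the formula evaluates to true?  (Exponential time; for intuition only.)
from itertools import product

def exists_forall(formula, n1, n2):
    # formula takes two tuples of booleans (values for X1 and X2)
    return any(all(formula(x1, x2)
                   for x2 in product([False, True], repeat=n2))
               for x1 in product([False, True], repeat=n1))

# (x1 or y1) and (not x1 or not y1) is true iff x1 != y1,
# so no fixed x1 works against every y1: a "no" instance.
f = lambda x1, x2: (x1[0] or x2[0]) and (not x1[0] or not x2[0])
# (x1 or y1): setting x1 = True works for every y1: a "yes" instance.
g = lambda x1, x2: x1[0] or x2[0]
```

Swapping `any` and `all` gives the corresponding Π_2^P-complete ∀∃ problem.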

## 3 Significance of the Result

What is the significance of establishing the Σ_2^P-completeness of deciding whether an evolutionarily stable strategy exists? When the computational problem of determining the existence of an ESS comes up, it is surely more satisfying to be able to simply state the exact complexity of the problem than to have to state that it is hard for some classes, included in another, and the exact complexity is unknown. Moreover, the latter situation also left open the possibility that the ESS problem exposed a fundamental gap in our understanding of computational complexity theory. It could even have been the case that the ESS problem required the definition of an entirely new complexity class for which the problem was complete. (In the case of computing one Nash equilibrium, the class PPAD had previously been defined (Papadimitriou, 1994), but it did not have much in the way of known complete problems before the Nash equilibrium result, and the standing of the class was quite diminished by this lack of natural problems known to be complete for it.) The result presented here implies that this is not the case; while Σ_2^P is not as well known as NP, it is a well-established complexity class.

Additionally, some of the significance of the result is in the irony that a key solution concept in evolutionary game theory, which is often taken to be a model of how equilibria might actually be reached in practice by a simple process, is actually computationally significantly less tractable (as far as our current understanding of computational complexity goes) than the concept of Nash equilibrium. This was already implied by the earlier hardness results referenced in the introduction, but the result obtained here shows the gap to be even wider. This perhaps suggests that modified solution concepts are called for, and more generally that the computational complexity of solution concepts should be taken into account in assessing their reasonableness for the purpose at hand. On the other hand, it is important to note that it may yet be possible to find evolutionarily stable strategies fast for most games actually encountered in practice. Games encountered in practice may have additional structure that puts the problem in a lower complexity class, possibly even P. If so, this would clearly reduce the force of the call for new solution concepts.

## 4 Hardness with Restricted Support

Having completed a review of the relevant computational complexity theory and a discussion of the significance of the result, we now begin the technical part of the paper. As outlined earlier, we first introduce a slightly different problem, which we will then show is Σ_2^P-hard. From this, it will be fairly easy to show, in Section 5, that the main problem is Σ_2^P-hard.

###### Definition 2

In ESS-RESTRICTED-SUPPORT, we are given a symmetric two-player normal-form game G with strategies S, and a subset T ⊆ S. We are asked whether there exists an evolutionarily stable strategy of G that places positive probability only on strategies in T (but not necessarily on all strategies in T).

We will establish Σ_2^P-hardness by reduction from (the complement of) the following problem.

###### Definition 3 (Minmax-Clique)

We are given a graph G = (V, E), sets I and J, a partition of V into subsets V_{ij} for i ∈ I and j ∈ J, and a number k. We are asked whether it is the case that for every function t : I → J, there is a clique of size (at least) k in the subgraph induced on ∪_{i∈I} V_{i,t(i)}. (Without loss of generality, we will require k ≥ 2.)

Example. Figure 1 shows a tiny MINMAX-CLIQUE instance: I = J = {1, 2}, each V_{ij} consists of a single vertex v_{ij}, the edges are (v_{11}, v_{21}), (v_{11}, v_{22}), and (v_{12}, v_{22}), and k = 2.

The answer to this instance is “no” because for t(1) = 2, t(2) = 1, the graph induced on V_{1,2} ∪ V_{2,1} = {v_{12}, v_{21}} has no clique of size at least 2.
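This check can also be done mechanically. The sketch below is mine; the vertex labels and edge set are my reading of the figure, which match the example game given later in the reduction:

```python
# Brute-force check of the tiny MINMAX-CLIQUE instance described above.
from itertools import combinations, product

cells = {(1, 1): ["v11"], (1, 2): ["v12"], (2, 1): ["v21"], (2, 2): ["v22"]}
edges = {frozenset(e) for e in [("v11", "v21"), ("v11", "v22"), ("v12", "v22")]}
I, J, k = [1, 2], [1, 2], 2

def has_clique(vertices, k):
    # Is there a set of k vertices that are pairwise adjacent?
    return any(all(frozenset(p) in edges for p in combinations(c, 2))
               for c in combinations(vertices, k))

# The instance is a "yes" iff for EVERY t : I -> J the induced subgraph
# on the union of the chosen cells contains a k-clique.
answer = all(has_clique([v for i in I for v in cells[(i, t[i - 1])]], k)
             for t in product(J, repeat=len(I)))
```

Running this yields `answer == False`, with t = (2, 1) as the witness choice that leaves no edge.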

We have the following known hardness result for this problem. (Recall that Π_2^P = coΣ_2^P.)

###### Known Theorem 1 (Ko and Lin (1995))

MINMAX-CLIQUE is Π_2^P-complete.

We are now ready to present the main part of the proof.

###### Lemma 1

ESS-RESTRICTED-SUPPORT is Σ_2^P-hard.

Proof: We reduce from the complement of MINMAX-CLIQUE. That is, we show how to transform any instance of MINMAX-CLIQUE into a symmetric two-player normal-form game with a distinguished subset T of its strategies, so that this game has an ESS with support in T if and only if the answer to the MINMAX-CLIQUE instance is “no.”

The Reduction. For every i ∈ I and every j ∈ J, create a strategy s_{ij}. For every v ∈ V, create a strategy s_v. Finally, create a single additional strategy s_0. The utilities are as follows.

• For all i ∈ I and j ∈ J, u(s_{ij}, s_{ij}) = 1.

• For all i ∈ I and j, j′ ∈ J with j ≠ j′, u(s_{ij}, s_{ij′}) = 0.

• For all i, i′ ∈ I with i ≠ i′ and j, j′ ∈ J, u(s_{ij}, s_{i′j′}) = 2.

• For all i ∈ I, j ∈ J, and v ∈ V, u(s_{ij}, s_v) = 2 − 1/|I|.

• For all i ∈ I and j ∈ J, u(s_{ij}, s_0) = 2 − 1/|I|.

• For all i ∈ I, j ∈ J, and v ∈ V_{ij}, u(s_v, s_{ij}) = 2 − 1/|I|.

• For all i ∈ I, j, j′ ∈ J with j ≠ j′, and v ∈ V_{ij}, u(s_v, s_{ij′}) = 0.

• For all i, i′ ∈ I with i ≠ i′, j, j′ ∈ J, and v ∈ V_{ij}, u(s_v, s_{i′j′}) = 2 − 1/|I|.

• For all v ∈ V, u(s_v, s_v) = 0.

• For all v, v′ ∈ V with v ≠ v′ where (v, v′) ∈ E, u(s_v, s_{v′}) = (k/(k−1))(2 − 1/|I|).

• For all v, v′ ∈ V with v ≠ v′ where (v, v′) ∉ E, u(s_v, s_{v′}) = 0.

• For all v ∈ V, u(s_0, s_v) = 0.

• For all i ∈ I and j ∈ J, u(s_0, s_{ij}) = 2 − 1/|I|.

• For all v ∈ V, u(s_v, s_0) = 0.

• u(s_0, s_0) = 0.

We are asked whether there exists an ESS that places positive probability only on the strategies s_{ij} with i ∈ I and j ∈ J. That is, T = {s_{ij} : i ∈ I, j ∈ J}.
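To make the reduction concrete, here is a sketch (mine, following one consistent reading of the utility rules above; in particular, it pays (k/(k−1))(2 − 1/|I|) for every edge regardless of which cells its endpoints lie in) that builds the payoff matrix:

```python
# Build the reduction's payoff matrix from a MINMAX-CLIQUE instance.
# Strategy order: all s_ij (in I x J order), then all s_v, then s_0.
def build_game(I, J, cells, edges, k):
    ij = [(i, j) for i in I for j in J]
    V = [v for c in cells.values() for v in c]
    cell_of = {v: c for c, vs in cells.items() for v in vs}
    names = [("ij", p) for p in ij] + [("v", v) for v in V] + [("0", None)]
    high = 2 - 1 / len(I)                 # 2 - 1/|I|
    edge_pay = (k / (k - 1)) * high       # (k/(k-1)) (2 - 1/|I|)

    def u(a, b):
        ka, va = a
        kb, vb = b
        if ka == "ij":
            if kb == "ij":
                if va == vb:
                    return 1
                return 0 if va[0] == vb[0] else 2
            return high                   # vs. any s_v, and vs. s_0
        if ka == "v":
            if kb == "ij":
                i, j = cell_of[va]
                if vb[0] != i:
                    return high           # other i
                return high if vb[1] == j else 0
            if kb == "v":
                return edge_pay if frozenset((va, vb)) in edges else 0
            return 0                      # vs. s_0
        return high if kb == "ij" else 0  # s_0 row

    return names, [[u(a, b) for b in names] for a in names]

# The tiny example instance (my labels for the figure's vertices):
cells = {(1, 1): ["v11"], (1, 2): ["v12"], (2, 1): ["v21"], (2, 2): ["v22"]}
edges = {frozenset(e) for e in [("v11", "v21"), ("v11", "v22"), ("v12", "v22")]}
names, M = build_game([1, 2], [1, 2], cells, edges, 2)
```

For this instance, the resulting matrix M reproduces the 9x9 example table shown next.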

Example. Consider again the MINMAX-CLIQUE instance from Figure 1. The game to which the reduction maps this instance is:

|      | s11 | s12 | s21 | s22 | sv11 | sv12 | sv21 | sv22 | s0  |
|------|-----|-----|-----|-----|------|------|------|------|-----|
| s11  | 1   | 0   | 2   | 2   | 3/2  | 3/2  | 3/2  | 3/2  | 3/2 |
| s12  | 0   | 1   | 2   | 2   | 3/2  | 3/2  | 3/2  | 3/2  | 3/2 |
| s21  | 2   | 2   | 1   | 0   | 3/2  | 3/2  | 3/2  | 3/2  | 3/2 |
| s22  | 2   | 2   | 0   | 1   | 3/2  | 3/2  | 3/2  | 3/2  | 3/2 |
| sv11 | 3/2 | 0   | 3/2 | 3/2 | 0    | 0    | 3    | 3    | 0   |
| sv12 | 0   | 3/2 | 3/2 | 3/2 | 0    | 0    | 0    | 3    | 0   |
| sv21 | 3/2 | 3/2 | 3/2 | 0   | 3    | 0    | 0    | 0    | 0   |
| sv22 | 3/2 | 3/2 | 0   | 3/2 | 3    | 3    | 0    | 0    | 0   |
| s0   | 3/2 | 3/2 | 3/2 | 3/2 | 0    | 0    | 0    | 0    | 0   |

It has an ESS with weight 1/2 on each of s12 and s21. In contrast, the strategy with weight 1/2 on each of s11 and s21 (for example) is invaded by the strategy with weight 1/2 on each of sv11 and sv21: where σ denotes the former and σ′ the latter, u(σ′, σ) = 3/2 = u(σ, σ) and u(σ′, σ′) = 3/2 ≥ 3/2 = u(σ, σ′).
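The invasion claim can be verified directly from the table. The matrix below is transcribed from it (strategy order s11, s12, s21, s22, sv11, sv12, sv21, sv22, s0); the check is my own:

```python
# Check the invasion computation for the 9x9 example game.
M = [
    [1,   0,   2,   2,   1.5, 1.5, 1.5, 1.5, 1.5],
    [0,   1,   2,   2,   1.5, 1.5, 1.5, 1.5, 1.5],
    [2,   2,   1,   0,   1.5, 1.5, 1.5, 1.5, 1.5],
    [2,   2,   0,   1,   1.5, 1.5, 1.5, 1.5, 1.5],
    [1.5, 0,   1.5, 1.5, 0,   0,   3,   3,   0  ],
    [0,   1.5, 1.5, 1.5, 0,   0,   0,   3,   0  ],
    [1.5, 1.5, 1.5, 0,   3,   0,   0,   0,   0  ],
    [1.5, 1.5, 0,   1.5, 3,   3,   0,   0,   0  ],
    [1.5, 1.5, 1.5, 1.5, 0,   0,   0,   0,   0  ],
]

def u(x, y):
    return sum(x[i] * y[j] * M[i][j] for i in range(9) for j in range(9))

def point(*idx):  # uniform mixture over the given strategy indices
    p = [0.0] * 9
    for i in idx:
        p[i] = 1 / len(idx)
    return p

sigma = point(0, 2)   # 1/2 on s11, 1/2 on s21 -- NOT an ESS
inv   = point(4, 6)   # 1/2 on sv11, 1/2 on sv21
assert u(inv, sigma) == u(sigma, sigma)   # the invader is a best response
assert u(inv, inv) >= u(sigma, inv)       # and it is not repelled
```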

Proof of equivalence. Suppose there exists a function t : I → J such that every clique in the subgraph induced on ∪_{i∈I} V_{i,t(i)} has size strictly less than k. We will show that the mixed strategy σ that places probability 1/|I| on s_{i,t(i)} for each i ∈ I (and probability 0 everywhere else) is an ESS.

First, we show that σ is a best response against itself. For any s_{i,t(i)} in the support of σ, we have u(s_{i,t(i)}, σ) = (1/|I|)·1 + (1 − 1/|I|)·2 = 2 − 1/|I|, and hence we also have u(σ, σ) = 2 − 1/|I|. For s_{ij} not in the support of σ, we have u(s_{ij}, σ) = (1 − 1/|I|)·2 = 2 − 2/|I| < 2 − 1/|I|. For all i ∈ I and v ∈ V_{i,t(i)}, we have u(s_v, σ) = 2 − 1/|I|. For all i ∈ I, j ∈ J with j ≠ t(i), and v ∈ V_{ij}, we have u(s_v, σ) = (1 − 1/|I|)(2 − 1/|I|) < 2 − 1/|I|. Finally, u(s_0, σ) = 2 − 1/|I|. So σ is a best response to itself.

It follows that if there were a strategy σ′ that could successfully invade σ, then σ′ must put probability only on best responses to σ. Based on the calculations in the previous paragraph, these best responses are s_0, and, for any i ∈ I, s_{i,t(i)}, and, for all v ∈ ∪_{i∈I} V_{i,t(i)}, s_v. The expected utility of σ against any of these is 2 − 1/|I| (in particular, for any v ∈ V, we have u(σ, s_v) = 2 − 1/|I|). Hence, u(σ, σ′) = 2 − 1/|I|, and to successfully invade, σ′ must attain u(σ′, σ′) ≥ 2 − 1/|I|.

We can write σ′ = p₁σ′₁ + p₀s_0 + p₂σ′₂, where p₁ + p₀ + p₂ = 1, σ′₁ only puts positive probability on the s_{i,t(i)} strategies, and σ′₂ only puts positive probability on the strategies s_v with v ∈ ∪_{i∈I} V_{i,t(i)}. The strategy that results from conditioning σ′ on the s_{i,t(i)} strategies not being played may be written as

  (p₀/(p₀+p₂))s_0 + (p₂/(p₀+p₂))σ′₂

and thus we may write

  u(σ′, σ′) = p₁²u(σ′₁, σ′₁) + p₁(p₀+p₂)u(σ′₁, (p₀/(p₀+p₂))s_0 + (p₂/(p₀+p₂))σ′₂)
  + (p₀+p₂)p₁u((p₀/(p₀+p₂))s_0 + (p₂/(p₀+p₂))σ′₂, σ′₁)
  + (p₀+p₂)²u((p₀/(p₀+p₂))s_0 + (p₂/(p₀+p₂))σ′₂, (p₀/(p₀+p₂))s_0 + (p₂/(p₀+p₂))σ′₂)

Now, if we shift probability mass from s_0 to σ′₂, i.e., we decrease p₀ and increase p₂ by the same amount, this will not affect any of the coefficients in the previous expression; it will not affect any of

  u(σ′₁, σ′₁),
  u(σ′₁, (p₀/(p₀+p₂))s_0 + (p₂/(p₀+p₂))σ′₂) (because u(s_{ij}, s_v) = u(s_{ij}, s_0) = 2 − 1/|I|), and
  u((p₀/(p₀+p₂))s_0 + (p₂/(p₀+p₂))σ′₂, σ′₁) (because u(s_0, s_{ij}) = u(s_v, s_{ij}) = 2 − 1/|I| when v ∈ V_{ij} or v ∈ V_{i′j′} with i′ ≠ i);

and it will not decrease

  u((p₀/(p₀+p₂))s_0 + (p₂/(p₀+p₂))σ′₂, (p₀/(p₀+p₂))s_0 + (p₂/(p₀+p₂))σ′₂) (because for any v ∈ V, u(s_0, s_0) = u(s_0, s_v) = u(s_v, s_0) = 0).

Therefore, we may assume without loss of generality that p₀ = 0, and hence σ′ = p₁σ′₁ + p₂σ′₂. It follows that we can write

  u(σ′, σ′) = p₁²u(σ′₁, σ′₁) + p₁p₂u(σ′₁, σ′₂) + p₂p₁u(σ′₂, σ′₁) + p₂²u(σ′₂, σ′₂)

We first note that u(σ′₁, σ′₁) can be at most 2 − 1/|I|. Specifically,

  u(σ′₁, σ′₁) = (∑_i σ′₁(s_{i,t(i)})²)·1 + (1 − ∑_i σ′₁(s_{i,t(i)})²)·2

and this expression is uniquely maximized by setting each σ′₁(s_{i,t(i)}) to 1/|I|. u(σ′₁, σ′₂) is easily seen to also be 2 − 1/|I|, and u(σ′₂, σ′₁) is easily seen to be at most 2 − 1/|I| (in fact, it is exactly that). Thus, to obtain u(σ′, σ′) ≥ 2 − 1/|I|, we must have either p₂ = 0 or u(σ′₂, σ′₂) ≥ 2 − 1/|I|. However, in the former case, we would require u(σ′₁, σ′₁) = 2 − 1/|I|, which can only be attained by setting each σ′₁(s_{i,t(i)}) to 1/|I|, but this would result in σ′ = σ. Thus, we can conclude u(σ′₂, σ′₂) ≥ 2 − 1/|I|. But then σ′₂ would also successfully invade σ. Hence, we can assume without loss of generality that σ′ = σ′₂, i.e., p₁ = 0 and p₂ = 1.

That is, we can assume that σ′ only places positive probability on strategies s_v with v ∈ ∪_{i∈I} V_{i,t(i)}. For any such v, v′ with v ≠ v′, we have u(s_v, s_{v′}) ≤ (k/(k−1))(2 − 1/|I|). Specifically, u(s_v, s_{v′}) = (k/(k−1))(2 − 1/|I|) if (v, v′) ∈ E, and u(s_v, s_{v′}) = 0 otherwise. Now, suppose that σ′(s_v) > 0 and σ′(s_{v′}) > 0 for v ≠ v′ with (v, v′) ∉ E. We can write σ′ = p₀σ′′ + p₁s_v + p₂s_{v′}, where σ′′ places probability 0 on s_v and s_{v′}, and p₀, p₁, p₂ sum to 1. We have

  u(σ′, σ′) = p₀²u(σ′′, σ′′) + 2p₀p₁u(σ′′, s_v) + 2p₀p₂u(σ′′, s_{v′})

(because u(s_v, s_v) = u(s_{v′}, s_{v′}) = u(s_v, s_{v′}) = u(s_{v′}, s_v) = 0, and the vertex strategies obtain symmetric utilities against each other). Suppose, without loss of generality, that u(σ′′, s_v) ≥ u(σ′′, s_{v′}). Then, if we shift all the mass from s_{v′} to s_v (so that the mass on the latter becomes p₁ + p₂), this can only increase u(σ′, σ′), and it reduces the size of the support of σ′ by one. By repeated application, we can assume without loss of generality that the support of σ′ corresponds to a clique of the induced subgraph on ∪_{i∈I} V_{i,t(i)}. We know this clique has size c where c < k. u(σ′, σ′) is maximized if σ′ randomizes uniformly over its support, in which case

  u(σ′, σ′) = ((c−1)/c)(k/(k−1))(2−1/|I|) < ((k−1)/k)(k/(k−1))(2−1/|I|) = 2−1/|I|

But this contradicts that σ′ would successfully invade σ. It follows that σ is indeed an ESS.

Conversely, suppose that there exists an ESS σ that places positive probability only on strategies s_{ij} with i ∈ I and j ∈ J. We must have u(σ, σ) ≥ 2 − 1/|I|, because otherwise s_0 would be a better response to σ. First suppose that for every i ∈ I, there is at most one j ∈ J such that σ places positive probability on s_{ij} (we will shortly show that this must be the case). Let t(i) denote the j such that σ(s_{ij}) > 0 (if there is no such j for some i, then choose an arbitrary j ∈ J to equal t(i)). Then, u(σ, σ) is uniquely maximized by setting σ(s_{i,t(i)}) = 1/|I| for all i ∈ I, resulting in

  u(σ, σ) = (1/|I|)·1 + (1 − 1/|I|)·2 = 2 − 1/|I|

Hence, this is the only way to ensure that u(σ, σ) ≥ 2 − 1/|I|, under the assumption that for every i ∈ I, there is at most one j ∈ J such that σ places positive probability on s_{ij}.

Now, let us consider the case where there exists an i ∈ I such that there exist j ≠ j′ with σ(s_{ij}) > 0 and σ(s_{ij′}) > 0, to show that such a strategy cannot obtain a utility of 2 − 1/|I| or more against itself. We can write σ = p₀σ′ + p₁s_{ij} + p₂s_{ij′}, where σ′ places probability zero on s_{ij} and s_{ij′}. We observe that u(σ′, s_{ij}) = u(s_{ij}, σ′) and u(σ′, s_{ij′}) = u(s_{ij′}, σ′), because when the game is restricted to the s_{ij} strategies, each player always gets the same payoff as the other player. Moreover, u(σ′, s_{ij}) = u(σ′, s_{ij′}), because σ′ does not place positive probability on either s_{ij} or s_{ij′}. Hence, we have that

  u(σ, σ) = p₀²u(σ′, σ′) + 2p₀(p₁+p₂)u(σ′, s_{ij}) + p₁² + p₂²

But then, if we shift all the mass from s_{ij′} to s_{ij} (so that the mass on the latter becomes p₁ + p₂) to obtain strategy σ′′, it follows that u(σ′′, σ′′) > u(σ, σ), because (p₁+p₂)² > p₁² + p₂². By repeated application, we can find a strategy σ′′ such that u(σ′′, σ′′) > u(σ, σ) and for every i ∈ I, there is at most one j ∈ J such that σ′′ places positive probability on s_{ij}. Because we showed previously that the latter type of strategy can obtain expected utility at most 2 − 1/|I| against itself, it follows that it is in fact the only type of strategy (among those that randomize only over the s_{ij} strategies) that can obtain expected utility 2 − 1/|I| against itself. Hence, we can conclude that the ESS σ must have, for each i ∈ I, exactly one j (to which we will refer as t(i)) such that σ(s_{i,t(i)}) = 1/|I|, and that σ places probability 0 on every other strategy.

Finally, suppose, for the sake of contradiction, that there exists a clique of size k in the induced subgraph on ∪_{i∈I} V_{i,t(i)}. Consider the strategy σ′ that places probability 1/k on each of the k corresponding strategies s_v. We have that u(σ′, σ) = 2 − 1/|I| = u(σ, σ). Moreover,

  u(σ′, σ′) = (1/k)·0 + ((k−1)/k)·(k/(k−1))(2−1/|I|) = 2−1/|I|

It follows that σ′ successfully invades σ (we also have u(σ, σ′) = 2 − 1/|I| = u(σ′, σ′)), but this contradicts σ being an ESS. It follows, then, that t is such that every clique in the induced subgraph on ∪_{i∈I} V_{i,t(i)} has size strictly less than k.

## 5 Hardness without Restricted Support

All that remains is to reduce the modified problem to the main problem of determining whether a game has an ESS. The following lemma makes this fairly straightforward.

###### Lemma 2 (No duplicates in ESS)

Suppose that strategies s and s′ (s ≠ s′) are duplicates, i.e., for all s′′, u(s, s′′) = u(s′, s′′). (It is fine to require u(s′′, s) = u(s′′, s′) for all s′′ as well, and we will do so in the proof of Theorem 1, but it is not necessary for this lemma to hold.) Then no ESS places positive probability on s or s′.

Proof: For the sake of contradiction, suppose σ is an ESS that places positive probability on s or s′ (or both). Then, let σ′ be identical to σ with the exception that σ′(s) ≠ σ(s) and σ′(s′) ≠ σ(s′) (but it must be that σ′(s) + σ′(s′) = σ(s) + σ(s′)). That is, σ′ redistributes some mass between s and s′. Then, σ cannot repel σ′, because u(σ′, σ) = u(σ, σ) and u(σ′, σ′) = u(σ, σ′).
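The two equalities at the end of this proof can be illustrated numerically. The toy game below is my own example (strategies 0 and 1 are duplicates, i.e., identical rows); it only shows that redistributing mass between duplicates changes neither quantity:

```python
# Illustration of the duplicates lemma: shifting mass between two
# duplicate strategies changes neither u(sigma', sigma) nor the
# difference between u(sigma', sigma') and u(sigma, sigma').
M = [[1, 1, 0],
     [1, 1, 0],   # duplicate of row 0
     [2, 2, 3]]

def u(x, y):
    return sum(x[i] * y[j] * M[i][j] for i in range(3) for j in range(3))

sigma  = [0.5, 0.0, 0.5]   # puts probability on duplicate strategy 0
sigma2 = [0.2, 0.3, 0.5]   # same, with mass redistributed between 0 and 1
assert abs(u(sigma2, sigma) - u(sigma, sigma)) < 1e-9
assert abs(u(sigma2, sigma2) - u(sigma, sigma2)) < 1e-9
```

With both equalities holding, condition 2 of Definition 1 fails for σ against σ′, so σ cannot be an ESS.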

We now formally define the main problem:

###### Definition 4

In ESS, we are given a symmetric two-player normal-form game G. We are asked whether there exists an evolutionarily stable strategy of G.

We now obtain the main result as follows.

###### Theorem 1

ESS is Σ_2^P-complete.

Proof: Etessami and Lochbihler (2008) proved membership in Σ_2^P. We prove hardness by reduction from ESS-RESTRICTED-SUPPORT, which is Σ_2^P-hard by Lemma 1. Given the game G with strategies S and subset T ⊆ S of strategies that can receive positive probability, construct a modified game G′ by duplicating all the strategies in S ∖ T. (At this point, for duplicate strategies s and s′, we require u(s′′, s) = u(s′′, s′) as well as u(s, s′′) = u(s′, s′′).) If G has an ESS σ that places positive probability only on strategies in T, this will still be an ESS in G′, because any strategy that uses the new duplicate strategies will still be repelled, just as its equivalent strategy that does not use the new duplicates was repelled in the original game. (Here, it should be noted that the equivalent strategy in the original game cannot turn out to be σ itself, because σ does not put any probability on a strategy that is duplicated.) On the other hand, if G′ has an ESS, then by Lemma 2, this ESS can place positive probability only on strategies in T. This ESS will still be an ESS in G (all of whose strategies also exist in G′), and naturally it will still place positive probability only on strategies in T.

## Appendix A Hardness without duplication

In this appendix, it is shown that with some additional analysis and modifications, the result holds even in games where each pure strategy is the unique best response to some mixed strategy. That is, the hardness is not simply an artifact of the introduction of duplicate or otherwise redundant strategies.

###### Definition 5

In the MINMAX-CLIQUE problem, say vertex v dominates vertex v′ if they are in the same partition element V_{ij}, there is no edge between them, and the set of neighbors of v is a superset (not necessarily strict) of the set of neighbors of v′.

###### Lemma 3

Removing a dominated vertex does not change the answer to a MINMAX-CLIQUE instance.

Proof: In any clique in which dominated vertex v′ participates (and therefore its dominator v does not, since there is no edge between them), v can participate in its stead.

###### Modified Lemma 1

ESS-RESTRICTED-SUPPORT is Σ_2^P-hard, even if every pure strategy is the unique best response to some mixed strategy.

Proof: We use the same reduction as in the proof of Lemma 1. We restrict our attention to instances of the MINMAX-CLIQUE problem where |I| ≥ 2, |J| ≥ 2, there are no dominated vertices, and every vertex is part of at least one edge. Clearly, the problem remains Π_2^P-complete when restricting attention to these instances. For the games resulting from these restricted instances, we show that every pure strategy is the unique best response to some mixed strategy. Specifically:

• s_{ij} is the unique best response to the strategy that places mass 1 − ε uniformly over the s_{i′j′} with i′ ≠ i, and the remaining mass ε on s_{ij}, for sufficiently small ε > 0. (This is because only pure strategies of the form s_{ij′} will get a utility of 2 against the part with mass 1 − ε, and among these only s_{ij} will get a utility of 1 against the part with mass ε.)

• s_v (with v ∈ V_{ij}) is the unique best response to the strategy that places probability p on s_{ij} and probability q on every s_{ij′} with j′ ≠ j, and that distributes the remaining mass uniformly over the vertex strategies corresponding to the neighbors of v, for suitably chosen small p and q. (This is because s_v obtains an expected utility of 2 − 1/|I| against the part with mass p, and an expected utility of (k/(k−1))(2 − 1/|I|) against the part placed on the neighbors of v; strategies s_{ij′} with j′ ≠ j obtain utility 0 against the part with mass p; and strategies s_0, s_{i′j′}, and s_{v′} with v′ ≠ v obtain utility at most 2 − 1/|I| against the part with mass p, and an expected utility of strictly less than (k/(k−1))(2 − 1/|I|) against the part placed on the neighbors of v. (In the case of s_{v′} with v′ ∈ V_{ij}, this is because by assumption, v′ does not dominate v, so either v has a neighbor that v′ does not have, which gets positive probability and against which s_{v′} gets a utility of 0; or, there is an edge between v and v′, so that s_{v′} gets positive probability and s_{v′} gets utility 0 against itself.))

• s_0 is the unique best response to the strategy that randomizes uniformly over all the s_{ij}. (This is because it obtains utility 2 − 1/|I| against that strategy, and all the other pure strategies obtain utility strictly less against that strategy, due to getting utility 0 against at least one pure strategy in its support.)

The following lemma is a generalization of Lemma 2.

###### Modified Lemma 2

Suppose that subset D ⊆ S satisfies:

• for all s, s′ ∈ D and s′′ ∈ S ∖ D, we have u(s, s′′) = u(s′, s′′) (that is, strategies in D are interchangeable when they face a strategy outside D; again, it is fine to require u(s′′, s) = u(s′′, s′) as well, and we will do so in the proof of Modified Theorem 1, but it is not necessary for the lemma to hold); and

• the restricted game where players must choose from D has no ESS.

Then no ESS of the full game places positive probability on any strategy in D.

Proof: Consider a strategy σ that places positive probability on D. We can write σ = p₁σ₁ + p₂σ₂, where p₁ + p₂ = 1, σ₁ places positive probability only on S ∖ D, and σ₂ places positive probability only on D. Because no ESS exists in the game restricted to D, there must be a strategy σ′₂ (with σ′₂ ≠ σ₂) whose support is contained in D that successfully invades σ₂, so either (1) u(σ′₂, σ₂) > u(σ₂, σ₂) or (2) u(σ′₂, σ₂) = u(σ₂, σ₂) and u(σ′₂, σ′₂) ≥ u(σ₂, σ′₂). Now consider the strategy σ′ = p₁σ₁ + p₂σ′₂; we will show that it successfully invades σ. This is because

  u(σ′, σ) = p₁²u(σ₁, σ₁) + p₁p₂u(σ₁, σ₂) + p₂p₁u(σ′₂, σ₁) + p₂²u(σ′₂, σ₂)
           = p₁²u(σ₁, σ₁) + p₁p₂u(σ₁, σ₂) + p₂p₁u(σ₂, σ₁) + p₂²u(σ′₂, σ₂)