Using game theory to examine multi-agent interactions in complex systems is a non-trivial task. Works by Walsh et al. (2002, 2003) and Wellman (2006) have shown the great potential of using heuristic strategies and empirical game theory to examine such interactions at a higher meta-level, instead of trying to capture the decision-making processes at the level of the atomic actions involved. Doing this turns the interaction into a smaller normal form game, or meta-game, with the higher-level strategies now being the primitive actions of the game, making the complex multi-agent interaction amenable to game theoretic analysis.
Others have built on this empirical game theoretic methodology and applied these ideas to no-limit Texas hold'em poker and various types of double auctions (see, e.g., Phelps et al. (2004); Ponsen et al. (2009); Phelps et al. (2007); Kaisers et al. (2008); Tuyls and Parsons (2007)), showing that a game theoretic analysis at the level of meta-strategies yields novel insights into the type and form of interactions in complex systems.
A major limitation of this empirical game theoretic approach is that it comes without theoretical guarantees on the approximation of the true underlying game by an estimated game based on sampled data, and it is unclear how many data samples are required to achieve a good approximation. Additionally, the method remains limited to symmetric situations, in which the agents or players have access to the same set of strategies and are interchangeable. One approach is to ignore asymmetry (types of players), and average over many samples of types, resulting in a single expected payoff to each player in each entry of the meta-game payoff table. Many real-world situations, though, are asymmetric in nature and involve various roles for the agents that participate in the interactions. For instance, buyers and sellers in auctions, or games such as Scotland Yard, but also different roles in, e.g., robotic soccer (defender vs. striker) and even natural language (hearer vs. speaker).
In this paper we tackle these problems. We prove that a Nash equilibrium of the estimated game is a 2ε-Nash equilibrium of the real underlying game, showing that we can closely approximate the real Nash equilibrium as long as we have enough data samples from which to build the meta-game payoff table. Furthermore, we also examine how many data samples are required to confidently approximate the underlying game. We also show how to generalise the heuristic payoff or meta-game method introduced by Walsh et al. to two-population asymmetric games.
Finally, we illustrate the generalised method in several domains. We carry out an experimental illustration on the AlphaGo algorithm Silver et al. (2016), Colonel Blotto Kohli et al. (2012) and an asymmetric Leduc poker game. In the AlphaGo experiments we show how a symmetric meta-game analysis can provide insights into the evolutionary dynamics and strengths of various versions of the AlphaGo algorithm while it was being developed, and how intransitive behaviour can occur when an unrelated strategy is introduced. In the Colonel Blotto game we illustrate how the methodology can provide insights into how humans play this game, constructing several symmetric meta-games from data collected on Facebook. Finally, we illustrate the method in Leduc poker, by examining an asymmetric meta-game generated by a recently introduced multiagent reinforcement learning algorithm, policy-space response oracles (PSRO) Lanctot et al. (2017). For this analysis we rely on some theoretical results that connect an asymmetric normal form game to its symmetric counterparts Tuyls et al. (2018).
In this section, we introduce the necessary background to describe our game theoretic meta-game analysis of the repeated interaction between players.
2.1. Normal Form Games
In a p-player Normal Form Game (NFG), players are involved in a single round strategic interaction. Each player i chooses a strategy s_i from a set of strategies S_i and receives a payoff r_i(s_1, ..., s_p). For the sake of simplicity, we will write the joint strategy s = (s_1, ..., s_p) and the joint reward r(s) = (r_1(s), ..., r_p(s)). Then a p-player NFG is a tuple (S, r). Each player interacts in this game by following a strategy profile x_i, which is a probability distribution over S_i.
A symmetric NFG captures interactions where players can be interchanged. The first condition is therefore that the strategy sets are the same for all players (i.e. S_i = S_j for all i, j, and will be written S). In a symmetric NFG, if a permutation is applied to the joint strategy s, the joint payoff is permuted accordingly. Formally, a game is symmetric if for all permutations π of p elements we have r_{π(i)}(s^π) = r_i(s) (where s^π denotes the permuted joint strategy, with s^π_{π(i)} = s_i). So for a game to be symmetric there are two conditions: the players need to have access to the same strategy set, and the payoff structure needs to be symmetric, such that players are interchangeable. If one of these two conditions is violated, the game is asymmetric.
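The two conditions above are mechanical to check in the two-player case. The following sketch (payoff values are our own illustrative example, not taken from the text) tests both conditions: equal strategy sets and payoffs invariant under swapping the players, i.e. B = A^T.

```python
import numpy as np

def is_symmetric_two_player(A, B):
    """A two-player NFG (A, B) is symmetric iff the strategy sets coincide
    (square matrices of the same shape) and the payoff structure is invariant
    under swapping the players, i.e. B equals A transposed."""
    A, B = np.asarray(A), np.asarray(B)
    return (A.shape == B.shape and A.shape[0] == A.shape[1]
            and np.array_equal(B, A.T))

# Battle-of-the-sexes-style payoffs: same strategy sets, but B != A^T.
A = np.array([[3, 0], [0, 2]])
B = np.array([[2, 0], [0, 3]])
print(is_symmetric_two_player(A, B))    # False: asymmetric game
print(is_symmetric_two_player(A, A.T))  # True
```

This mirrors the definition directly: the battle-of-the-sexes example below is asymmetric precisely because the second condition fails.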
In the asymmetric case our analysis will focus on the two-player case (two roles) and thus we introduce specific notations for the sake of simplicity. In a two-player normal-form game, each player's payoff can be seen as a matrix. We will write A for the payoff matrix of player one (i.e. A_{ij} = r_1(i, j)) and B for the payoff matrix of player two (i.e. B_{ij} = r_2(i, j)). In this two-player game, the column vector x is the strategy of player one and y the one of player two. In the end, a two-player NFG is defined by the tuple (A, B).
2.2. Nash Equilibrium
In a two-player game, a pair of strategies (x, y) is a Nash equilibrium of the game (A, B) if no player has an incentive to switch from their current strategy. In other words, (x, y) is a Nash equilibrium if x^T A y ≥ x'^T A y for all x' and x^T B y ≥ x^T B y' for all y'.
Evolutionary game theory often considers a single strategy x that plays against itself. In this situation, the game is said to have a single population. In a single population game, x is a Nash equilibrium if x^T A x ≥ x'^T A x for all x'.
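Since checking all mixed deviations reduces to checking pure ones, the equilibrium conditions above can be verified numerically. A minimal sketch (game and profiles are our own example):

```python
import numpy as np

def is_nash(x, y, A, B, tol=1e-9):
    """(x, y) is a Nash equilibrium of (A, B) iff no pure-strategy deviation
    improves either player (pure deviations suffice to cover mixed ones):
    max_i (A y)_i <= x^T A y  and  max_j (x^T B)_j <= x^T B y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return ((A @ y).max() <= x @ A @ y + tol
            and (x @ B).max() <= x @ B @ y + tol)

# Matching pennies: the uniform mix is the unique Nash equilibrium.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A
print(is_nash([0.5, 0.5], [0.5, 0.5], A, B))  # True
print(is_nash([1.0, 0.0], [0.5, 0.5], A, B))  # False: player 2 can exploit
```

The single-population condition is the special case obtained by setting y = x and checking only the first inequality.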
2.3. Replicator Dynamics
The replicator dynamics equation describes how a strategy profile evolves in the midst of others. This evolution is described according to a first order dynamical system. In a two-player NFG (A, B), the replicator equations are defined as:

dx_i/dt = x_i ((A y)_i − x^T A y),   dy_j/dt = y_j ((x^T B)_j − x^T B y)   (1)
The dynamics defined by these two coupled differential equations changes the strategy profile to increase the probability of the strategies that have the best return or are the fittest.
In the case of a symmetric two-player game (A = B^T), the replicator equations assume that both players play the same strategy profile (i.e. player one and two both play according to x) and the dynamics is defined as follows:

dx_i/dt = x_i ((A x)_i − x^T A x)   (2)
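The two equations above can be integrated with a simple Euler scheme; this sketch (step size and the toy game are our own choices) shows a dominant strategy absorbing the population, as predicted by the dynamics:

```python
import numpy as np

def replicator_step(x, y, A, B, dt=0.01):
    """One Euler step of the two-population replicator dynamics:
    dx_i/dt = x_i ((A y)_i - x^T A y),  dy_j/dt = y_j ((x^T B)_j - x^T B y)."""
    x_new = x + dt * x * ((A @ y) - x @ A @ y)
    y_new = y + dt * y * ((x @ B) - x @ B @ y)
    return x_new, y_new

def replicator_step_symmetric(x, A, dt=0.01):
    """Single-population variant (A = B^T): dx_i/dt = x_i ((A x)_i - x^T A x)."""
    return x + dt * x * ((A @ x) - x @ A @ x)

# With a strictly dominant first strategy the population flows towards it.
A = np.array([[2.0, 2.0], [1.0, 1.0]])
x = np.array([0.1, 0.9])
for _ in range(2000):
    x = replicator_step_symmetric(x, A)
print(np.round(x, 3))  # approximately [1. 0.]
```

Note that each Euler step preserves the probability simplex up to floating-point error, since the growth terms sum to zero.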
2.4. Meta Games
A meta game is a simplified model of a complex interaction. In order to analyze complex games like e.g. poker, we do not need to consider all possible strategies but only a set of relevant meta-strategies that are often played Ponsen et al. (2009). These meta strategies (or styles of play) over atomic actions are commonly played by players, such as for instance "passive/aggressive" or "tight/loose" in poker. A p-type meta game is now a p-player repeated NFG where players play a limited number of meta strategies. Following our poker example, the strategy set of the meta game will now be defined as the set {passive, aggressive, tight, loose} and the reward function as the outcome of a game between p players using different profiles.
There are now two possibilities, either the meta-game is symmetric, or it is asymmetric. We will start with the simpler symmetric case, which has been studied in empirical game theory, then we continue with asymmetric games, in which we consider two populations, or roles.
3.1. Symmetric Meta Games
We consider a set of n agents or players that can choose a strategy from a set S of k strategies, and can participate in one or more p-type meta-games with p ≤ n. If the game is symmetric then the formulation of meta strategies has the advantage that the payoff for a strategy does not depend on which player has chosen that strategy: the payoff for that strategy only depends on the composition of strategies it is facing in the game, and not on who is playing the strategy. This symmetry has been the main focus of the use of empirical game theory analysis Walsh et al. (2002); Wellman (2006); Ponsen et al. (2009); Phelps et al. (2007).
If we were to construct a classical payoff table for p players over k strategies we would require k^p entries in the table (which becomes large very quickly). Since all players can choose from the same strategy set and all players receive the same payoff for being in the same situation, we can simplify our payoff table.
Let N be a matrix, where each row N_i contains a discrete distribution of p players over k strategies. The matrix yields (p + k − 1 choose p) rows. Each distribution over strategies can be simulated (or derived from data), returning a vector of expected rewards u(N_i). Let U be a matrix which captures the rewards corresponding to the rows in N, i.e., U_i = u(N_i). We refer to a meta payoff table as M = (N, U). So each row N_i = (n_{i1}, ..., n_{ik}) yields a discrete profile indicating exactly how many players play each strategy, with Σ_j n_{ij} = p. A strategy profile x then equals (n_{i1}/p, ..., n_{ik}/p).
Suppose we have a meta-game with k meta-strategies and p players that interact in a p-type game; this leads to a meta game payoff table of (p + k − 1 choose p) entries (which is a good reduction from k^p). An important advantage of this type of table is that it easily extends to many agents, as opposed to the classical payoff matrix. Table 1 provides an example for three strategies. The left-hand side expresses the discrete profiles and corresponds to matrix N, while the right-hand side gives the payoffs for playing any of the strategies given the discrete profile and corresponds to matrix U.
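The rows of N are exactly the compositions of p players over k strategies, and their number is the binomial coefficient above. A small sketch enumerating them:

```python
from itertools import combinations_with_replacement
from math import comb

def profiles(p, k):
    """Enumerate the rows N_i of the meta payoff table: every discrete
    distribution of p indistinguishable players over k strategies."""
    rows = {tuple(c.count(s) for s in range(k))
            for c in combinations_with_replacement(range(k), p)}
    return sorted(rows, reverse=True)

p, k = 2, 3
rows = profiles(p, k)
print(len(rows), comb(p + k - 1, p))  # 6 6 (versus k**p = 9 classical entries)
print(rows)
```

For p = 2 and k = 3 this yields the six discrete profiles (2,0,0), (1,1,0), (1,0,1), (0,2,0), (0,1,1) and (0,0,2), versus 3^2 = 9 entries in a classical table; the gap widens rapidly as p grows.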
In order to analyse the evolutionary dynamics of high-level meta-strategies, we also need to estimate the expected payoff of such strategies relative to each other. In evolutionary game theoretic terms, this is the relative fitness of the various strategies, dependent on the current frequencies of those strategies in the population.
In order to approximate the payoff for an arbitrary mix of strategies in an infinite population of replicators distributed over the species according to x, p individuals are drawn randomly from the infinite distribution. The probability for selecting a specific row N_i can be computed from N and x as

P(N_i | x) = (p choose n_{i1}, ..., n_{ik}) Π_j x_j^{n_{ij}}

The expected payoff of strategy j, f_j(x), is then computed as the weighted combination of the payoffs given in all rows:

f_j(x) = Σ_i P(N_i | x) U_{ij}
This expected payoff function can be used in Equation 2 to compute the evolutionary population change according to the replicator dynamics, by replacing (A x)_j with f_j(x). Note that we need to re-normalize (in the denominator) by ignoring the rows that do not contribute to the payoff of a strategy because that strategy is not present in the distribution of the HPT row.
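Putting the multinomial weighting and the re-normalization together, the expected payoff can be sketched as follows (the table values are our own toy example, not from the text):

```python
import numpy as np
from math import factorial

def multinomial(row):
    """Number of ways to assign the p distinct players to the counts in row."""
    c = factorial(sum(row))
    for n in row:
        c //= factorial(n)
    return c

def expected_payoffs(N, U, x):
    """f_j(x): multinomially weighted average of the payoffs U_ij over the
    rows N_i in which strategy j is present, renormalised by the total
    probability mass of those rows (rows without j carry no payoff for j)."""
    N, U, x = np.asarray(N), np.asarray(U, float), np.asarray(x, float)
    f = np.zeros(N.shape[1])
    for j in range(N.shape[1]):
        mass = 0.0
        for row, u in zip(N, U):
            if row[j] == 0:
                continue  # strategy j absent: this row does not contribute
            w = multinomial(row) * np.prod(x ** row)  # P(N_i | x)
            f[j] += w * u[j]
            mass += w
        f[j] /= max(mass, 1e-12)
    return f

# Toy 2-player, 2-strategy table (hypothetical payoffs).
N = [(2, 0), (1, 1), (0, 2)]
U = [(3, 0), (1, 2), (0, 4)]
print(np.round(expected_payoffs(N, U, [0.5, 0.5]), 3))  # [1.667 2.667]
```

The resulting vector f(x) can be plugged directly into the symmetric replicator step in place of A x.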
3.2. Asymmetric Meta Games
One can now wonder how the previously introduced method extends to asymmetric games, which have not been considered in the literature. An example of an asymmetric game is the famous battle of the sexes game illustrated in Table 3. In this game both players have the same strategy set, i.e., go to the opera or go to the movies; however, the corresponding payoffs for each are different, expressing the differences in preferences that the two players have.
If we aim to carry out a similar evolutionary analysis as in the symmetric case, restricting ourselves to two populations or roles, we will need two meta game payoff tables, one for each player over its own strategy set. We will also need to use the asymmetric version of the replicator dynamics introduced in Section 2.3. Additionally, in order to compute the right payoffs for every situation we will have to interpret a discrete strategy profile in the meta-table slightly differently. Suppose we have a 2-type meta game, with three strategies in each player's strategy set. We introduce a generalisation of our meta-table for both players by means of an example shown in Table 4, which corresponds to the general NFG shown in Table 3.
Let's have a look at the first entry in Table 4, i.e., ((1, 0, 0), (1, 0, 0)). This entry means that both agents are playing their first strategy: the number of agents playing strategy 1 in the first population equals 1, and the number of agents playing strategy 1 in the second population equals 1 as well. The corresponding payoff for each player is given on the right-hand side of the table. Now let's have a look at the discrete profiles ((1, 0, 0), (0, 1, 0)) and ((0, 1, 0), (1, 0, 0)). The first one means that the first player is playing its first strategy while the second player is playing its second strategy, with the corresponding payoffs for the first and second player respectively. The second profile shows the reversed situation, in which the second player plays his first strategy and the first player plays his second strategy, again yielding the corresponding payoffs for the first and second player respectively. In order to turn the table into a similar format as for the symmetric case, we can now introduce two meta-tables, one for each player. More precisely, we get Tables 5 and 6 for players 1 and 2 respectively.
One needs to take care in correctly interpreting these tables. Let's have a look at the row (1, 1, 0), for instance. This should now be interpreted in two ways: one, the first player plays his first strategy while the other player plays his second strategy, and receives the corresponding payoff; two, the first player plays his second strategy while the other player plays his first strategy, and receives the corresponding payoff. The expected payoff can now be estimated in the same way as explained for the symmetric case, as we will be relying on symmetric replicator dynamics by decoupling asymmetric games into their symmetric counterparts (explained in the next section).
3.3. Linking symmetric and asymmetric games
Here we summarize the most important results on the link between an asymmetric game and its symmetric counterpart games; for a full treatment and discussion of these results see Tuyls et al. (2018). In a nutshell, this work proves that if (x, y) is a Nash equilibrium of the bimatrix game (A, B), where x and y have the same support (x and y have the same support if {i : x_i > 0} = {j : y_j > 0}), then y is a Nash equilibrium of the single population, or symmetric, game A and x is a Nash equilibrium of the single population, or symmetric, game B^T. Both symmetric games are called the counterpart games of the asymmetric game (A, B). The reverse is also true: if y is a Nash equilibrium of the single population game A and x is a Nash equilibrium of the single population game B^T (and if x and y have the same support), then (x, y) is a Nash equilibrium of the game (A, B). In our empirical analysis, we use this property to analyze an asymmetric game by looking at the counterpart single population games A and B^T.
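The decomposition can be checked numerically on a small example. The sketch below uses battle-of-the-sexes-style payoffs of our own choosing; the mixed equilibrium (x, y) of (A, B) has full (hence equal) support, so y must be an equilibrium of counterpart game A and x of counterpart game B^T:

```python
import numpy as np

def is_single_pop_nash(x, A, tol=1e-9):
    """x is a Nash equilibrium of the single-population game A iff no pure
    strategy earns more against x than x earns against itself."""
    return (A @ x).max() <= x @ A @ x + tol

# Asymmetric bimatrix game (our own example payoffs).
A = np.array([[3.0, 0.0], [0.0, 2.0]])
B = np.array([[2.0, 0.0], [0.0, 3.0]])
# Mixed Nash equilibrium (x, y) of (A, B); both strategies have full support.
x = np.array([0.6, 0.4])
y = np.array([0.4, 0.6])
# Decomposition: y is Nash in counterpart game A, x is Nash in counterpart B^T.
print(is_single_pop_nash(y, A), is_single_pop_nash(x, B.T))  # True True
```

This is the property exploited later when analysing the asymmetric PSRO meta-game through its two symmetric counterparts.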
4. Theoretical Insights
As illustrated in the previous section, the procedure for empirical meta-game analysis consists of two parts. Firstly, one needs to construct an empirical meta-game utility function for each player. This step can be performed using logs of interactions between players, or by playing the game sufficiently often. Secondly, one expects that analyzing the empirical game will give insights into the true underlying game itself (i.e. the game from which we sample). This section provides insights into the following: how much data is enough to generate a good approximation of the true underlying game? Is uniform sampling over actions or strategies the right method?
4.1. Main Lemma
Sometimes players receive a stochastic reward R_i(a) for a given joint action a. The underlying game we study is the average reward game r̄ (with r̄_i(a) = E[R_i(a)]), and for the sake of simplicity the joint action of every player but player i will be written a_{−i}. In the two following definitions, we introduce the concept of Nash equilibrium and ε-Nash equilibrium in p-player games (as we only introduced it in the 2-player case):
Definition 1: A joint strategy σ = (σ_i, σ_{−i}) is a Nash equilibrium if for all i:

E_σ[r̄_i(a_i, a_{−i})] ≥ E_{σ'_i, σ_{−i}}[r̄_i(a_i, a_{−i})] for all σ'_i

Definition 2: A joint strategy σ = (σ_i, σ_{−i}) is an ε-Nash equilibrium if for all i:

E_σ[r̄_i(a_i, a_{−i})] ≥ E_{σ'_i, σ_{−i}}[r̄_i(a_i, a_{−i})] − ε for all σ'_i
When running an analysis on a meta game, we do not have access to the average reward function r̄ but to an empirical estimate r̂. The following lemma shows that a Nash equilibrium for the empirical game r̂ is a 2ε-Nash equilibrium for the game r̄, where ε = max_{i,a} |r̂_i(a) − r̄_i(a)|.
Lemma: If σ is a Nash equilibrium for r̂, then it is a 2ε-Nash equilibrium for the game r̄, where ε = max_{i,a} |r̂_i(a) − r̄_i(a)|.
First we have the following relation, for any player i and any deviation σ'_i:

E_{σ'_i, σ_{−i}}[r̄_i(a)] ≤ E_{σ'_i, σ_{−i}}[r̂_i(a)] + ε ≤ E_σ[r̂_i(a)] + ε ≤ E_σ[r̄_i(a)] + 2ε
This lemma shows that if one can control the difference between r̂ and r̄ uniformly over players and actions, then an equilibrium for the empirical game is almost an equilibrium for the game defined by the average reward function r̄.
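The lemma's content can be illustrated numerically: perturb a known game by bounded noise (our own toy setup), measure the sup-norm error ε, and check that an exact equilibrium of one game has a deviation gap of at most 2ε in the other.

```python
import numpy as np

def nash_gap(x, y, A, B):
    """Largest amount any player can gain by deviating from (x, y) in (A, B);
    (x, y) is an eps-Nash equilibrium for every eps >= this gap."""
    return max((A @ y).max() - x @ A @ y, (x @ B).max() - x @ B @ y)

rng = np.random.default_rng(0)

# True game: matching pennies. Empirical game: entry-wise noisy estimate.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A
nA = rng.uniform(-0.05, 0.05, A.shape)
nB = rng.uniform(-0.05, 0.05, B.shape)
A_hat, B_hat = A + nA, B + nB
eps = max(np.abs(nA).max(), np.abs(nB).max())  # sup-norm estimation error

# The uniform profile is an exact Nash equilibrium of the true game; by the
# lemma's chain of inequalities its gap in the empirical game is at most 2*eps.
x = y = np.array([0.5, 0.5])
print(nash_gap(x, y, A_hat, B_hat) <= 2 * eps)  # True
```

The same argument runs in both directions, which is why an equilibrium of the empirical game is a 2ε-equilibrium of the true game.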
4.2. Finite Samples Analysis
This section details some concentration results. In practice, we often have access to a batch of observations of the underlying game, and we will run our analysis on an empirical estimate of the game denoted by r̂. The question then is either with which confidence 1 − δ can we say that a Nash equilibrium for r̂ is a 2ε-Nash equilibrium for r̄, or, for a fixed confidence, for which ε can we say that a Nash equilibrium for r̂ is a 2ε-Nash equilibrium for r̄. In the case we have access to game play, the question is how many samples we need to assess that a Nash equilibrium for r̂ is a 2ε-Nash equilibrium for r̄, for a fixed confidence and a fixed ε. For the sake of simplicity, we will assume that all payoffs are bounded in [0, 1].
4.2.1. The batch scenario
Here we assume that we are given a batch containing, for each player i and joint action a, n_i(a) independent samples from which the empirical average r̂_i(a) is computed. Then, by applying Hoeffding's inequality together with a union bound over the p players and the |A| joint actions, we can prove the following result: with probability at least 1 − δ,

max_{i,a} |r̂_i(a) − r̄_i(a)| ≤ sqrt( ln(2 p |A| / δ) / (2 min_{i,a} n_i(a)) )
4.2.2. Uniform Sampling
In this section we assume that we have a budget of n samples per joint action a and per player i. In that case we have the following bound: with probability at least 1 − δ,

max_{i,a} |r̂_i(a) − r̄_i(a)| ≤ sqrt( ln(2 p |A| / δ) / (2n) )

Then, if we want max_{i,a} |r̂_i(a) − r̄_i(a)| ≤ ε with a probability of at least 1 − δ, we need at least

n ≥ ln(2 p |A| / δ) / (2 ε²)

samples per entry.
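These Hoeffding-style bounds translate directly into a sample-size calculator; a short sketch (the player/strategy counts are our own example):

```python
from math import ceil, log, sqrt

def samples_needed(eps, delta, num_entries):
    """Samples per payoff entry so that, with probability >= 1 - delta, every
    empirical mean of a [0, 1]-bounded payoff is within eps of its true mean
    (Hoeffding's inequality plus a union bound over all entries)."""
    return ceil(log(2 * num_entries / delta) / (2 * eps ** 2))

def eps_achieved(n, delta, num_entries):
    """Inverse view: the eps guaranteed by n samples per entry at confidence 1 - delta."""
    return sqrt(log(2 * num_entries / delta) / (2 * n))

# e.g. p = 2 players with 3 strategies each: p * 3^2 = 18 payoff entries.
entries = 2 * 3 ** 2
print(samples_needed(0.05, 0.05, entries))   # 1316 samples per entry
print(eps_achieved(5000, 0.05, entries))
```

Note the 1/ε² dependence: halving the target ε quadruples the number of samples required per entry.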
5. Experiments

This section presents experiments that illustrate the meta-game approach and its feasibility for examining strengths and weaknesses of higher-level strategies in various domains, including AlphaGo, Colonel Blotto, and the meta-game generated by PSRO. Note that we restrict the meta-games to three strategies, as this can be nicely visualised in a phase plot, and such plots still provide useful information about the dynamics in the full strategy spaces.
5.1. AlphaGo

The data set under study consists of several AlphaGo variations and a number of different Go strategies such as Crazystone and Zen (previously the state of the art). α stands for the AlphaGo algorithm, and the indices r, v and p for the use of respectively rollouts, value nets and policy nets (e.g. α_rvp uses all three). For a detailed description of these strategies see Silver et al. (2016). The meta-game under study here concerns a 2-type NFG, and we will look at various 2-faces of the larger simplex. Table 9 in Silver et al. (2016) summarises all wins and losses between these various strategies (meeting several times), from which we can compute meta-game payoff tables.
5.1.1. Experiment 1: strong strategies
This first experiment examines three of the strongest AlphaGo strategies in the data set. As a first step we created a meta-game payoff table involving these three strategies, by looking at their pairwise interactions in the data set (summarised in Table 9 of Silver et al. (2016)). This set contains data for all strategies on how they interacted with the other strategies, listing the win rates that strategies achieved against one another (playing either as white or black) over several games. The meta-game payoff table derived for these three strategies is described in Table 7.
In Figure 2 we have plotted the directional field of the meta-game payoff table using the replicator dynamics for a number of strategy profiles in the simplex strategy space. From each of these points in strategy space an arrow indicates the direction of flow, or change, of the population composition over the three strategies. Figure 2 also shows a corresponding trajectory plot. From these plots one can easily observe that the strongest of the three strategies is a strong attractor and consumes the entire strategy space. This rest point is also a Nash equilibrium. This result is in line with what we would expect from the knowledge we have of the strengths of these various learned policies. Still, the arrows indicate how the strategy landscape flows into this attractor and therefore provide useful information, as we will discuss later.
5.1.2. Experiment 2: evolution and transitivity of strengths
We start by investigating the 2-face simplex involving three earlier versions of AlphaGo, for which we created a meta-game payoff table similarly as in the previous experiment (not shown). The evolutionary dynamics of this 2-face can be observed in Figure 5a: the strongest of the three versions is clearly a strong attractor and dominates the two other strategies. We now replace this attractor by a still stronger, later version and plot the resulting evolutionary dynamics in Figure 5b. What can be observed from both trajectory plots in Figure 5 is that the curvature is less pronounced in plot 5b than it is in plot 5a. The reason is that the difference in strength between the two remaining strategies is less apparent in the presence of an even stronger attractor: the new attractor pulls much more strongly on both of them, and consequently the flow goes more directly towards it. So even when a strategy space is dominated by one strategy, the curvature (or curl) is a promising measure for the strength of a meta-strategy.
What is worthwhile to observe from the AlphaGo dataset, and illustrated as a series in Figure 5, is that there is clearly an incremental increase in the strength of the AlphaGo algorithm going from earlier to later versions, building on previous strengths, without any intransitive behaviour occurring, when only considering a strategy space formed by the AlphaGo versions.
Finally, as discussed in Section 4, we can now examine how good an approximation an estimated game is. In the AlphaGo domain we only do this analysis for the games displayed in Figures 5a and 5b, as it is similar for the other experiments. We know that the dominant strategy is a pure Nash equilibrium of the estimated game analyzed in Figure 5a (meta table not shown). The outcome of each pair of strategies was estimated from a finite number of games, and because of the symmetry of the problem the bound in Section 4.2.1 can be tightened accordingly.
Therefore, we can conclude that this strategy is a 2ε-Nash equilibrium of the real game with high probability. The same calculation gives a corresponding confidence for the replicator dynamics studied in Figure 5b, with a different ε, as the numbers of samples differ.
5.1.3. Experiment 3: cyclic behaviour
A final experiment investigates what happens if we add a pre-AlphaGo state-of-the-art algorithm to the strategy space. We have observed that even though the strongest AlphaGo version remains the strongest strategy, dominating all other AlphaGo versions and previous state-of-the-art algorithms, cyclic behaviour can occur, something that cannot be measured or seen from Elo ratings. (An Elo rating or score is a measure to express the relative strength of a player, or strategy. It was named after Arpad Elo and originally introduced to rate chess players; for an introduction see e.g. Coulom (2008).) More precisely, we constructed a meta-game payoff table for two AlphaGo versions together with one of the previous commercial state-of-the-art algorithms. In Figure 5 we have plotted the evolutionary dynamics for this meta-game, and as can be observed there is a mixed equilibrium in strategy space, around which the dynamics cycle, indicating that the added strategy is capable of introducing intransitivity, with each of the three strategies dominating the next in a cycle.
5.2. Colonel Blotto
Colonel Blotto is a resource allocation game originally introduced by Borel (1953). Two players interact, each allocating troops over a number of locations. They do this separately without communication, after which both distributions are compared to determine the winner. When a player has more troops in a specific location, it wins that location, and the player winning the most locations wins the game. This game has many game theoretic intricacies; for an analysis see Kohli et al. (2012). Kohli et al. have run Colonel Blotto on Facebook (project Waterloo), collecting data describing how humans play this game, with each player having 100 troops to divide over 5 battlefields. The number of strategies in the game is vast: a game with 100 troops and 5 locations has (104 choose 4) = 4,598,126 strategies.
Based on Kohli et al. we carry out a meta game analysis of the strongest strategies and the most frequently played strategies on Facebook. We have a look at several 3-strategy simplexes, which can be considered as 2-faces of the entire strategy space.
An instance of a strategy in the game of Blotto will be denoted by its division of troops over the battlefields, with the troop counts summing to the total of 100. All permutations of this division of troops belong to the same strategy, and we assume that permutations are chosen uniformly by a player. Note that in this game there is no need to carry out the theoretical analysis of the approximation of the meta-game, as we are not examining heuristics or strategies over Blotto strategies, but rather these strategies themselves, for which the payoff against any other strategy will always be the same (by computation). Nevertheless, carrying out a meta-game analysis reveals interesting information.
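The exact payoff between two such strategies, under uniformly random permutations, can be computed by exhaustive enumeration. A sketch on a deliberately tiny instance (3 battlefields, 6 troops, allocations of our own choosing; the real game with 100 troops and 5 fields works the same way):

```python
from itertools import permutations

def blotto_payoff(s1, s2):
    """Expected payoff (win = 1, tie = 0.5, loss = 0) of troop division s1
    against s2 when each player independently plays a uniformly random
    permutation of its division over the battlefields."""
    total, games = 0.0, 0
    for p1 in set(permutations(s1)):
        for p2 in set(permutations(s2)):
            won = sum(a > b for a, b in zip(p1, p2))
            lost = sum(a < b for a, b in zip(p1, p2))
            total += 1.0 if won > lost else 0.5 if won == lost else 0.0
            games += 1
    return total / games

# Small 3-battlefield, 6-troop illustration:
print(blotto_payoff((4, 1, 1), (2, 2, 2)))  # 0.0: (2,2,2) always wins two fields
print(blotto_payoff((3, 2, 1), (3, 2, 1)))  # 0.5 by symmetry
```

Filling a 3x3 table of such payoffs for three chosen strategies yields exactly the kind of meta-game payoff table analysed in the experiments below.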
5.2.1. Experiment 1: Top performing strategies
In a first step we compute a meta-game payoff table for the three top performing strategies. The interactions are pairwise, and the expected payoff can be easily computed, assuming a uniform distribution over the different permutations of a strategy. This normalised payoff is shown in Table 9.
Using Table 9 we can compute the evolutionary dynamics using the standard replicator equation. The resulting trajectory plot can be observed in Figure 6a. The first thing we see is that we have one strong attractor and transitive behaviour among the three strategies. Although this attractor is the strongest strategy in this 3-strategy meta-game, the win rates (computed over all played strategies in project Waterloo) indicate that another of the three was more successful on Facebook. The differences are minimal, though, and on average it is better to choose the attractor, which was also the most frequently chosen strategy from the set of strong strategies; see Table 8. In Figure 6b we show a similar plot for the evolutionary dynamics of three of the most frequently played strong strategies from Table 8.
5.2.2. Experiment 2: most frequently played strategies
We compared the evolutionary dynamics of the eight most frequently played strategies and present here a selection of the results. The meta-game under study in this domain concerns a 2-type repeated NFG G with eight strategies; we will look at various 2-faces of the 7-simplex. The top eight most frequently played strategies are shown in Table 10.
First we investigate three strategies from our strategy set. In Table 11 we show the resulting meta-game payoff table of this 2-face simplex. Using this table we can again compute the replicator dynamics and investigate the trajectory plots in Figure 7a. We observe that the dynamics cycle around a mixed Nash equilibrium (every interior rest point is a Nash equilibrium). This intransitive behaviour makes sense when looking at the pairwise interactions between the strategies and the corresponding payoffs they receive in Table 11: the first strategy receives a lower expected payoff than the second when they meet, the second is in turn dominated by the third, and, to make the cycle complete, the third receives a lower expected payoff against the first. As such, the behaviour cycles around the Nash equilibrium.
An interesting question is where human players are situated in this cyclic behaviour landscape. In Figure 7b we show the same trajectory plot, but add a red marker to indicate the strategy profile based on the frequencies with which these three strategies were played by human players, derived from Table 10. If we assume that the human agents optimise their behaviour in a survival-of-the-fittest style, they will cycle along the red trajectory. In Figure 7c we illustrate similar intransitive behaviour for three other frequently played strategies.
5.3. PSRO-generated Meta-Game
We now turn our attention to an asymmetric game. Policy Space Response Oracles (PSRO) is a multiagent reinforcement learning process that reduces the strategy space of large extensive-form games via iterative best response computation. PSRO can be seen as a generalized form of fictitious play that produces approximate best responses, with arbitrary distributions over generated responses computed by meta-strategy solvers. Here PSRO was applied to a commonly-used benchmark problem known as Leduc poker (Southey et al., 2005), except with a fixed action space and penalties for taking illegal moves; hence PSRO learned to play from scratch, without knowing which moves were legal. Leduc poker has a deck of 6 cards (jack, queen, king in two suits). Each player receives an initial private card and can bet a fixed amount of 2 chips in the first round and 4 chips in the second round, with a maximum of two raises in each round. A public card is revealed before the second round starts.
In Table 12 we present such an asymmetric 2-player game, generated by the first few epochs of PSRO learning to play Leduc poker. In the game illustrated here, each player has three strategies that, for ease of exposition, we number 1 to 3 for each player. Each one of these strategies represents an approximate best response to a distribution over previous opponent strategies. In Table 13 we show the two symmetric counterpart games (see Section 3.3) of the empirical game produced by PSRO.
Again we can now analyse the equilibrium landscape of this game, but now using the asymmetric meta-game payoff table and the decomposition result introduced in Section 3.3. Since the PSRO meta game is asymmetric we need two populations for the asymmetric replicator equations. Analysing and plotting the evolutionary asymmetric replicator dynamics quickly becomes very tedious, as we deal with two simplices, one for each player. More precisely, if we consider a strategy profile for one player in its corresponding simplex, and that player is adjusting its strategy, this will immediately cause the second simplex to change, and vice versa. Consequently, it is no longer straightforward to analyse the dynamics.
In order to facilitate the process of analysing the dynamics we can apply the counterpart theorems to remedy the problem. In Figures 8 and 9 we show the evolutionary dynamics of the counterpart games. As can be observed in Figure 8, the first counterpart game has only one equilibrium: a pure Nash equilibrium that absorbs the entire strategy space. Looking at Figure 9 we see the situation is a bit more complex in the second counterpart game; here we observe three equilibria: two pure ones, and one unstable mixed equilibrium on the 1-face formed by the two pure strategies. All these equilibria are Nash in the respective counterpart games (the Banach solver, http://banach.lse.ac.uk/, which relies on Avis et al. (2010), was used to check the Nash equilibria). By applying the theory of Section 3.3 we now know that only the strategy combination supported in both counterpart games remains as a pure Nash equilibrium of the asymmetric PSRO empirical game, since these strategies have the same support as a Nash equilibrium in each counterpart game. The other equilibria in the second counterpart game can be discarded as candidates for Nash equilibria of the PSRO empirical game, since they do not appear as equilibria for player 1.
Finally, each joint action of the game was estimated from a finite number of samples. As the outcome of the game is bounded, we can only guarantee that the Nash equilibrium of the meta game we studied is a 2ε-Nash equilibrium of the unknown underlying game, and for this ε the confidence that can be guaranteed is modest; guaranteeing a higher confidence for the same value of ε would require substantially more samples.
In this paper we have generalised the heuristic payoff table method introduced by Walsh et al. (2002) to two-population asymmetric games. We call such games meta-games, as they consider complex strategies instead of the atomic actions found in normal-form games. As such they are well suited to investigate real-world multi-agent interactions, as they summarize behaviour in terms of high-level strategies rather than primitive actions. We have shown that a Nash equilibrium of the meta-game is a 2ε-Nash equilibrium of the true underlying game, providing theoretical bounds on how many data samples are required to build a reliable meta payoff table. As such our method allows for an equilibrium analysis with a certain confidence that this game is a good approximation of the underlying real game. Finally, we have carried out an empirical illustration of this method in three complex domains, i.e., AlphaGo, Colonel Blotto and PSRO, showing the feasibility and strengths of the approach.
We wish to thank Angeliki Lazaridou and Guy Lever for insightful comments, the DeepMind AlphaGo team for support with the analysis of the AlphaGo dataset, and Pushmeet Kohli for supporting us with the Colonel Blotto dataset.
- Avis et al. (2010) D. Avis, G. Rosenberg, R. Savani, and B. von Stengel. 2010. Enumeration of Nash Equilibria for Two-Player Games. Economic Theory 42 (2010), 9–37.
- Borel (1953) E. Borel. 1953. La théorie du jeu et les équations intégrales à noyau symétrique. Comptes Rendus de l'Académie des Sciences 173 (1921), 1304–1308; English translation by Savage, L.: The theory of play and integral equations with skew symmetric kernels. Econometrica 21 (1953), 97–100.
- Coulom (2008) Rémi Coulom. 2008. Whole-History Rating: A Bayesian Rating System for Players of Time-Varying Strength. In Computers and Games, 6th International Conference, CG 2008, Beijing, China, September 29 - October 1, 2008. Proceedings. 113–124.
- Kaisers et al. (2008) Michael Kaisers, Karl Tuyls, Frank Thuijsman, and Simon Parsons. 2008. Auction Analysis by Normal Form Game Approximation. In Proceedings of the 2008 IEEE/WIC/ACM International Conference on Intelligent Agent Technology, Sydney, NSW, Australia, December 9-12, 2008. 447–450.
- Kohli et al. (2012) Pushmeet Kohli, Michael Kearns, Yoram Bachrach, Ralf Herbrich, David Stillwell, and Thore Graepel. 2012. Colonel Blotto on Facebook: the effect of social relations on strategic interaction. In Web Science 2012, WebSci ’12, Evanston, IL, USA - June 22 - 24, 2012. 141–150.
- Lanctot et al. (2017) Marc Lanctot, Vinicius Zambaldi, Audrunas Gruslys, Angeliki Lazaridou, Karl Tuyls, Julien Perolat, David Silver, and Thore Graepel. 2017. A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning. In Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.). 4190–4203.
- Phelps et al. (2007) Steve Phelps, Kai Cai, Peter McBurney, Jinzhong Niu, Simon Parsons, and Elizabeth Sklar. 2007. Auctions, Evolution, and Multi-agent Learning. In Adaptive Agents and Multi-Agent Systems III. Adaptation and Multi-Agent Learning, 5th, 6th, and 7th European Symposium, ALAMAS 2005-2007 on Adaptive and Learning Agents and Multi-Agent Systems, Revised Selected Papers. 188–210.
- Phelps et al. (2004) Steve Phelps, Simon Parsons, and Peter McBurney. 2004. An Evolutionary Game-Theoretic Comparison of Two Double-Auction Market Designs. In Agent-Mediated Electronic Commerce VI, Theories for and Engineering of Distributed Mechanisms and Systems, AAMAS 2004 Workshop, AMEC 2004, New York, NY, USA, July 19, 2004, Revised Selected Papers. 101–114.
- Ponsen et al. (2009) Marc Ponsen, Karl Tuyls, Michael Kaisers, and Jan Ramon. 2009. An evolutionary game-theoretic analysis of poker strategies. Entertainment Computing 1, 1 (2009), 39–45.
- Silver et al. (2016) David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Vedavyas Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy P. Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. 2016. Mastering the game of Go with deep neural networks and tree search. Nature 529, 7587 (2016), 484–489.
- Southey et al. (2005) Finnegan Southey, Michael Bowling, Bryce Larson, Carmelo Piccione, Neil Burch, Darse Billings, and Chris Rayner. 2005. Bayes' bluff: Opponent modelling in poker. In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence (UAI-05).
- Tuyls and Parsons (2007) Karl Tuyls and Simon Parsons. 2007. What evolutionary game theory tells us about multiagent learning. Artif. Intell. 171, 7 (2007), 406–416.
- Tuyls et al. (2018) Karl Tuyls, Julien Perolat, Marc Lanctot, Rahul Savani, Joel Leibo, Toby Ord, Thore Graepel, and Shane Legg. 2018. Symmetric Decomposition of Asymmetric Games. Scientific Reports 8, 1 (2018), 1015.
- Walsh et al. (2002) W. E. Walsh, R. Das, G. Tesauro, and J.O. Kephart. 2002. Analyzing complex strategic interactions in multi-agent games. In AAAI-02 Workshop on Game Theoretic and Decision Theoretic Agents, 2002.
- Walsh et al. (2003) W. E. Walsh, D. C. Parkes, and R. Das. 2003. Choosing samples to compute heuristic-strategy Nash equilibrium. In Proceedings of the Fifth Workshop on Agent-Mediated Electronic Commerce.
- Wellman (2006) Michael P. Wellman. 2006. Methods for Empirical Game-Theoretic Analysis. In Proceedings, The Twenty-First National Conference on Artificial Intelligence and the Eighteenth Innovative Applications of Artificial Intelligence Conference, July 16-20, 2006, Boston, Massachusetts, USA. 1552–1556.