1 Introduction
This paper introduces a principled, practical, and descriptive methodology, which we call α-Rank. α-Rank enables evaluation and ranking of agents in large-scale multi-agent settings, and is grounded in a new game-theoretic solution concept, called Markov-Conley chains (MCCs), which captures the dynamics of multi-agent interactions. While much progress has been made in learning for games such as Go [1, 2] and Chess [3], computational gains are now enabling algorithmic innovations in domains of significantly higher complexity, such as Poker [4] and MuJoCo soccer [5], where ranking of agents is much more intricate than in classical simple matrix games. With multi-agent learning domains of interest becoming increasingly complex, we need methods for evaluation and ranking that are both comprehensive and theoretically well-grounded.
Evaluation of agents in a multi-agent context is a hard problem due to several complexity factors: strategy and action spaces of players quickly explode (e.g., multi-robot systems); models need to be able to deal with intransitive behaviors (e.g., cyclical best-responses in Rock-Paper-Scissors, but at a much larger scale); the number of agents can be large in the most interesting applications (e.g., Poker); the types of interactions between agents may be complex (e.g., MuJoCo soccer); and payoffs for agents may be asymmetric (e.g., a board game such as Scotland Yard).
This evaluation problem has been studied in Empirical Game Theory using the concept of empirical games or metagames, and the convergence of their dynamics to Nash equilibria
[6, 7, 8, 9]. A metagame is an abstraction of the underlying game, which considers metastrategies rather than primitive actions [6, 8]. In the Go domain, for example, metastrategies may correspond to different AlphaGo agents (e.g., each metastrategy is an agent using a set of specific training hyperparameters, policy representations, and so on). The players of the metagame now have a choice between these different agents (henceforth synonymous with metastrategies), and payoffs in the metagame are calculated corresponding to the win/loss ratio of these agents against each other over many rounds of the full game of Go. Metagames, therefore, enable us to investigate the strengths and weaknesses of these agents using gametheoretic evaluation techniques.
Existing metagame analysis techniques, however, are still limited in a number of ways: only a small number of players or a small number of agents (i.e., metastrategies) can feasibly be analyzed [6, 10, 8, 9]. Specifically, on the one hand, continuous-time metagame evaluation models, using replicator dynamics from Evolutionary Game Theory [11, 12, 13, 14, 15], are deployed to capture the micro-dynamics of interacting agents. These approaches study and visualize basins of attraction and equilibria of interacting agents, but are limited as they can only feasibly be applied to games involving few agents, exploding in complexity in the case of large and asymmetric games. On the other hand, existing discrete-time metagame evaluation models (e.g., [16, 17, 18, 19, 20]) capture the macro-dynamics of interacting agents, but involve a large number of evolutionary parameters and are not yet grounded in a game-theoretic solution concept.
To further compound these issues, using the Nash equilibrium as a solution concept for metagame evaluation in these dynamical models is in many ways problematic: first, computing a Nash equilibrium is computationally difficult [21, 22]; second, there are intractable equilibrium selection issues even if Nash equilibria can be computed [23, 24, 25]; finally, there is an inherent incompatibility in the sense that dynamical systems are not guaranteed to converge to a Nash equilibrium [26] or, in fact, to any fixed point. However, instead of taking this as a disappointing flaw of dynamical systems models, we see it as an opportunity to look for a novel solution concept that does not have the same limitations as Nash in relation to these dynamical systems. Specifically, exactly as J. Nash used one of the most advanced topological results of his time, i.e., Kakutani’s fixed point theorem [27], as the basis for the Nash solution concept, in the present work we employ Conley’s Fundamental Theorem of Dynamical Systems [28] and propose the solution concept of Markov-Conley chains (MCCs). Intuitively, Nash is a static solution concept based solely on fixed points. MCCs, by contrast, are a dynamic solution concept based not only on fixed points, but also on recurrent sets, periodic orbits, and limit cycles, which are fundamental ingredients of dynamical systems. The key advantages are that MCCs comprehensively capture the long-term behaviors of our (inherently dynamical) evolutionary systems, and that our associated α-Rank method runs in polynomial time with respect to the total number of pure strategy profiles (whereas computing a Nash equilibrium for a general-sum game is PPAD-complete [22]).
Main contributions: α-Rank and MCCs
While MCCs do not immediately address the equilibrium selection problem, we show that by introducing a perturbed variant that corresponds to a generalized multi-population discrete-time dynamical model, the underlying Markov chain containing them becomes irreducible and yields a unique stationary distribution. The ordering of the strategies of agents in this distribution gives rise to our α-Rank methodology. α-Rank provides a summary of the asymptotic evolutionary rankings of agents, in the sense of the time spent by interacting populations playing them, yielding insights into their evolutionary strengths. It both automatically produces a ranking over agents favored by the evolutionary dynamics and filters out transient agents (i.e., agents that go extinct in the long-term evolutionary interactions).
Paper Overview
Due to the interconnected nature of the concepts discussed herein, we provide in Fig. 1 an overview of the paper that highlights the relationships between them. Specifically, the paper is structured as follows: we first provide a review of preliminary game-theoretic concepts, including the Nash equilibrium (see Fig. 1), a long-standing yet static solution concept. We then overview the replicator dynamics micro-model, which provides low-level insights into agent interactions but is limited in the sense that it can only feasibly be used for evaluating three to four agents. We then introduce a generalized evolutionary macro-model that extends previous single-population discrete-time models and (as later shown) plays an integral role in our α-Rank method. Next, we highlight a fundamental incompatibility of dynamical systems and the Nash solution concept, establishing fundamental reasons that prevent dynamics from converging to Nash. This limitation motivates us to investigate a novel solution concept, using Conley’s Fundamental Theorem of Dynamical Systems as a foundation.
Conley’s Theorem leads us to the topological concept of chain components, which do capture the irreducible long-term behaviors of a continuous dynamical system, but are unfortunately difficult to analyze due to the lack of an exact characterization of their geometry and of the behavior of the dynamics inside them. We, therefore, introduce a discrete approximation of these limiting dynamics that is more feasible to analyze: our so-called Markov-Conley chains solution concept. While we show that Markov-Conley chains share a close theoretical relationship with both discrete-time and continuous-time dynamical models, they unfortunately suffer from an equilibrium selection problem and thus cannot directly be used for computing multi-agent rankings. To address this, we introduce a perturbed version of Markov-Conley chains that resolves the equilibrium selection issues and yields our α-Rank evaluation method. α-Rank both computes a ranking and assigns scores to agents using this perturbed model. We show that this perturbed model corresponds directly to the generalized macro-model under a particular setting of the latter’s so-called ranking-intensity parameter α. α-Rank not only captures the dynamic behaviors of interacting agents, but is also more tractable to compute than Nash for general games. We validate our methodology empirically by providing ranking analyses on datasets involving interactions of state-of-the-art agents including AlphaGo [1], AlphaZero [3], MuJoCo Soccer [5], and Poker [29], and also provide scalability properties and theoretical guarantees for the overall ranking methodology.
2 Preliminaries and Methods
In this section, we concisely outline the game-theoretic concepts and methods necessary to understand the remainder of the paper. For a detailed discussion of these concepts we refer the reader to [13, 30, 31, 8]. We also introduce a novel game-theoretic concept, Markov-Conley chains, in which we theoretically ground our results.
2.1 Game Theoretic Concepts
2.1.1 Normal Form Games
A K-wise interaction Normal Form Game (NFG) is defined as a tuple (K, S, M), where each player k ∈ {1, ..., K} chooses a strategy s^k from its strategy set S^k and receives a payoff M^k(s) ∈ ℝ. We henceforth denote the joint strategy space and payoffs, respectively, as S = Π_k S^k and M = (M^1, ..., M^K). We denote the strategy profile of all players by s = (s^1, ..., s^K), the strategy profile of all players except k by s^{-k}, and the payoff profile by M(s). An NFG is symmetric if the following two conditions hold: first, all players have the same strategy sets (i.e., S^k = S^l for all k, l); second, if a permutation is applied to the strategy profile, the payoff profile is permuted accordingly. The game is asymmetric if one or both of these conditions do not hold. Note that in a two-player (K = 2) NFG the payoffs for both players are typically represented by a bimatrix (A, B), where A gives the payoff of the row player and B the payoff of the column player. If S^1 = S^2 and A = B^T, then this two-player game is symmetric.
Naturally, the definitions of strategy and payoff can be extended in the usual multilinear fashion to allow for randomized (mixed) strategies. In that case, we usually overload notation in the following manner: if x^k is a mixed strategy for player k and x^{-k} the mixed profile excluding that player, then we denote by M^k(x^k, x^{-k}) the expected payoff of player k. Given these preliminaries, we are now ready to define the Nash equilibrium concept:
Definition 2.1.1 (Nash equilibrium).
A mixed strategy profile x = (x^1, ..., x^K) is a Nash equilibrium if, for all players k and all mixed strategies y^k: M^k(x^k, x^{-k}) ≥ M^k(y^k, x^{-k}).
Intuitively, a strategy profile is a Nash equilibrium of the NFG if no player has an incentive to unilaterally deviate from its current strategy.
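To make the unilateral-deviation condition concrete, it can be checked numerically for a two-player game given in bimatrix form. The sketch below is illustrative only (the helper name `is_nash` and the tolerance are our own); by multilinearity of expected payoffs, it suffices to check that no pure-strategy deviation improves on the candidate profile.

```python
import numpy as np

def is_nash(A, B, x, y, tol=1e-9):
    """Check whether the mixed profile (x, y) is a Nash equilibrium of the
    bimatrix game (A, B): no player gains by a unilateral pure deviation."""
    row_payoff = x @ A @ y              # row player's expected payoff
    col_payoff = x @ B @ y              # column player's expected payoff
    best_row = np.max(A @ y)            # best pure deviation for the row player
    best_col = np.max(x @ B)            # best pure deviation for the column player
    return best_row <= row_payoff + tol and best_col <= col_payoff + tol

# Rock-Paper-Scissors: the uniform mixture is the unique Nash equilibrium.
A = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
B = -A                                   # zero-sum
u = np.ones(3) / 3
print(is_nash(A, B, u, u))                        # True
print(is_nash(A, B, np.array([1., 0., 0.]), u))   # False (pure Rock is exploitable)
```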
2.1.2 Metagames
A metagame (or an empirical game) is a simplified model of an underlying multi-agent system (e.g., an auction, a real-time strategy game, or a robot football match), which considers metastrategies, or ‘styles of play’, of agents, rather than the full set of primitive strategies available in the underlying game [6, 7, 8]. In this paper, the metastrategies considered are learning agents (e.g., different variants of AlphaGo agents, as exemplified in Section 1). Thus, we henceforth refer to metagames and metastrategies, respectively, as ‘games’ and ‘agents’ when the context is clear. For example, in AlphaGo, styles of play may be characterized by a set of agents such as {AG(r), AG(v), AG(p)}, where AG stands for the algorithm and the indexes r, v, and p stand for rollouts, value networks, and policy networks, respectively, that lead to different play styles. The corresponding metapayoffs quantify the outcomes when players play profiles over this set of agents (e.g., the empirical win rates of the agents when played against one another). These payoffs can be calculated from available data of the agents’ interactions in the real multi-agent system (e.g., wins/losses in the game of Go), or they can be computed from simulations. The question of how many such interactions are necessary to obtain a good approximation of the true underlying metagame is discussed in [8]. A metagame is itself an NFG and can, thus, leverage the game-theoretic toolkit to evaluate agent interactions at a high level of abstraction.
2.1.3 Micro-model: Replicator Dynamics
Dynamical systems theory is a powerful mathematical framework for specifying the time dependence of the players’ behavior (see the Supplementary Material for a brief introduction).
For instance, in a two-player asymmetric metagame represented as an NFG (A, B), the evolution of the players’ strategy profiles under the replicator dynamics [32, 33] is given by,
(1)  dx_i/dt = x_i [ (A y)_i - x^T A y ],   dy_j/dt = y_j [ (x^T B)_j - x^T B y ],
where x_i and y_j are, respectively, the proportions of strategies s_i and s_j in two infinitely-sized populations, each corresponding to a player. This system of coupled differential equations models the temporal dynamics of the populations’ strategy profiles when they interact, and can be extended readily to the general K-wise interaction case (see Supplementary Material Section 5.2.2 for more details).
The replicator dynamics provide useful insights into the micro-dynamical characteristics of games, revealing strategy flows, basins of attraction, and equilibria [34] when visualized on a trajectory plot over the strategy simplex (e.g., Fig. 4). The accessibility of these insights, however, becomes limited for games involving large strategy spaces and many-player interactions. For instance, trajectory plots may be visualized only for subsets of three or four strategies in a game, and are complex to analyze for multi-population games due to the inherently-coupled nature of the trajectories. While methods for scalable empirical game-theoretic analysis of games have recently been introduced, they are still limited to two-population games [9, 8].
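For illustration, the standard single-population form of the replicator dynamics, dx_i/dt = x_i[(Ax)_i - x^T A x], can be integrated numerically to produce such trajectories. The snippet below is a minimal sketch (forward Euler, with step size, payoffs, and initial condition chosen by us, not taken from the paper) showing how an interior RPS trajectory cycles around the uniform mixture while remaining on the simplex.

```python
import numpy as np

def replicator_step(x, A, dt=1e-3):
    """One forward-Euler step of the single-population replicator dynamics
    dx_i/dt = x_i [ (A x)_i - x^T A x ]."""
    fitness = A @ x
    avg = x @ fitness                  # population-average fitness
    return x + dt * x * (fitness - avg)

A = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])  # RPS payoffs
x = np.array([0.5, 0.3, 0.2])
for _ in range(20000):
    x = replicator_step(x, A)
print(x, x.sum())  # trajectory cycles around (1/3, 1/3, 1/3); mass stays ~1
```

Note that each Euler step preserves the total probability mass exactly when the state sums to one, which is why the trajectory stays on the simplex.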
2.1.4 Macro-model: Discrete-time Dynamics

[Figure: Overview of the discrete-time macro-model. The strategy of the first individual is updated by either probabilistically copying the strategy of the second individual, mutating with a very small probability to a random strategy, or sticking with its own strategy; an individual in the focal population copies the mutant strategy; the mutant then propagates in the focal population, yielding a new monomorphic population profile.]
This section presents our main evolutionary dynamics model, which extends previous single-population discrete-time models, is later shown to play an integral role in our α-Rank method, and can also be seen as an instantiation of the framework introduced in [20].
A promising alternative to using the continuous-time replicator dynamics for evaluation is to consider discrete-time finite-population dynamics. As later demonstrated, an important advantage of the discrete-time dynamics is that they are not limited to only three or four strategies (i.e., the agents under evaluation) as in the continuous-time case. Even though we lose the micro-dynamical details of the strategy simplex, this discrete-time macro-dynamical model, in which we observe the flows over the edges of the high-dimensional simplex, still provides useful insights into the overall system dynamics.
To conduct this discrete-time analysis, we consider a selection-mutation process with a very small mutation rate (following the small-mutation-rate theorem; see [35]). Before elaborating on the details, we specify a number of important concepts used in the description below and clarify their respective meanings in Table 0(a). Let a monomorphic population denote a population wherein all individuals play identical strategies, and a monomorphic population profile a set of monomorphic populations, where each population may be playing a different strategy (see Fig. 1(a)). Our general idea is to capture the overall dynamics by defining a Markov chain over states that correspond to monomorphic population profiles. We can then calculate the transition probability matrix over these states, which captures the fixation probability of any mutation in any given population (i.e., the probability that the mutant will take over that population). By computing the stationary distribution over this matrix we find the evolutionary population dynamics, which can be represented as a graph. The nodes of this graph correspond to the states, with the stationary distribution quantifying the average time spent by the populations in each node [17, 36].
A large body of prior literature has conducted this discrete-time Markov chain analysis in the context of pairwise interaction games with symmetric payoffs [36, 17, 37, 19, 18]. Recent work applies the underlying assumption of small mutation rates [35] to propose a general framework for discrete-time multi-player interactions [20], which applies to games with asymmetric payoffs. In our work, we formalize how such an evolutionary model, in the micro/macro dynamics spectrum, should be instantiated so as to converge to our novel dynamical solution concept of MCCs. Additionally, we show (in Theorem 2.1.3) that in the case of identical per-population payoffs our generalization reduces to the single-population model used by prior works. For completeness, we also detail the single-population model in the Supplementary Material (see Section 5.3). We now formally define the generalized discrete-time model.
Recall from Section 2.1.1 that each individual in a K-wise interaction game receives a local payoff dependent on its identity k, its strategy s^k, and the strategy profile s^{-k} of the other individuals involved in the game. To account for the identity-dependent payoffs of such individuals, we consider the interactions of K finite populations, each corresponding to a specific identity k.
In each population k, we have a set S^k of strategies that we would like to evaluate for their evolutionary strength. We also have a set of individuals in each population k, each of which is programmed to play a strategy from the set S^k. Without loss of generality, we assume all populations have the same number m of individuals.
Individuals interact K-wise through empirical games. At each timestep, one individual from each population is sampled uniformly, and the resulting K individuals play a game. Let p^k_{s^k} denote the number of individuals in population k playing strategy s^k, and p = (p^1, ..., p^K) denote the joint population state (i.e., the vector of states of all populations). Under our sampling protocol, the fitness of an individual of population k that plays strategy s^k is,
(2)  f^k(s^k, p^{-k}) = Σ_{s^{-k} ∈ S^{-k}} M^k(s^k, s^{-k}) Π_{l ≠ k} ( p^l_{s^l} / m ).
We consider any two individuals from a population k, with respective strategies τ and σ and respective fitnesses f^k(τ, p^{-k}) and f^k(σ, p^{-k}) (dependent on the values of the metagame table). We introduce here a discrete-time dynamics, where the strategy of the first individual (playing τ) is then updated by either mutating with a very small probability to a random strategy (Fig. 1(b)), probabilistically copying the strategy σ of the second individual (Fig. 1(c)), or sticking with its own strategy τ. The idea is that strong individuals will replicate and spread throughout the population (Fig. 1(d)). While one could choose other variants of discrete-time dynamics [38], we show that this particular choice both yields useful closed-form representations of the limiting behaviors of the populations, and also coincides with the MCC solution concept we later introduce under specific conditions.
As individuals from the same population never directly interact, the state of a population has no bearing on the fitnesses of its own individuals. However, as evident in Eq. 2, each population’s fitness may directly be affected by the competing populations’ states. The complexity of analyzing such a system can be significantly reduced by making the assumption of a small mutation rate [35]. Let the ‘focal population’ denote a population wherein a mutant strategy appears. We denote the probability for a strategy to mutate randomly into another strategy by μ, and we will assume it to be infinitesimally small (i.e., we consider the small-mutation limit μ → 0). If we neglected mutations, the end state of this evolutionary process would be monomorphic. If we introduce a very small mutation rate, then either the mutant fixates and takes over the current population, or the current population wipes out the mutant strategy [35]. Therefore, given a small mutation rate, the mutant either fixates or disappears before a new mutant appears in the current population. This means that any given population will never contain more than two strategies at any point in time. We refer the interested reader to [20] for a more extensive treatment of these arguments.
Applying the same line of reasoning, in the small-mutation regime, the mutant strategy in the focal population will either fixate or go extinct well before the appearance of a mutant in any other population [35]. Thus, at any given time, there can be at most one population with a mutant, and the remaining populations will be monomorphic; i.e., in each competing population l, p^l_{s^l} = m for a single strategy s^l, and zero for the rest. As such, given a small enough mutation rate, analysis of any focal population needs only consider the monomorphic states of all other populations. Overloading the notation in Eq. 2, the fitness of an individual from population k that plays τ then considerably simplifies to
(3)  f^k(τ, s^{-k}) = M^k(τ, s^{-k}),
where s^{-k} denotes the monomorphic strategy profile of the other populations.
Let p^k_τ and p^k_σ respectively denote the number of individuals playing τ and σ in focal population k, where p^k_τ + p^k_σ = m. Per Eq. 3, the fitness of an individual playing τ in the focal population while the remaining populations play monomorphic strategies is given by f^k(τ, s^{-k}) = M^k(τ, s^{-k}). Likewise, the fitness of any individual in k playing σ is f^k(σ, s^{-k}) = M^k(σ, s^{-k}).
We randomly sample two individuals in population k and consider the probability that the one playing τ copies the other individual’s strategy σ. The probability with which the individual playing strategy τ will copy the individual playing strategy σ can be described by a selection function P, which governs the dynamics of the finite-population model. For the remainder of the paper, we focus on the logistic selection function (aka Fermi distribution),
(4)  P(τ → σ) = ( 1 + e^{-α (f^k(σ, s^{-k}) - f^k(τ, s^{-k}))} )^{-1},
with α determining the selection strength, which we call the ranking-intensity (the correspondence between α and our ranking method will become clear later). There are alternative definitions of the selection function that may be used here; we merely focus on the Fermi distribution due to its extensive use in the single-population literature [18, 19, 17].
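The logistic selection function of Eq. 4 is straightforward to implement; the sketch below (helper name ours) illustrates its two limiting regimes: at α = 0 copying is a fair coin flip (neutral drift), while for large α the fitter strategy is copied almost surely.

```python
import math

def fermi(f_sigma, f_tau, alpha):
    """Logistic (Fermi) selection: probability that an individual playing tau
    copies an individual playing sigma, given their fitnesses and the
    ranking-intensity alpha."""
    return 1.0 / (1.0 + math.exp(-alpha * (f_sigma - f_tau)))

print(fermi(1.0, 0.0, 0.0))    # 0.5: alpha = 0 means neutral drift
print(fermi(1.0, 0.0, 10.0))   # ~1.0: strong selection copies the fitter strategy
print(fermi(0.0, 1.0, 10.0))   # ~0.0: strong selection rejects the weaker strategy
```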
Based on this setup, we define a Markov chain over the set of strategy profiles S, with |S| = Π_k |S^k| states. Each state corresponds to one of the strategy profiles s ∈ S, representing a multi-population end-state where each population is monomorphic. The transitions between these states are defined by the corresponding fixation probabilities (the probability of the mutant overtaking the population) when a mutant strategy σ is introduced in any single monomorphic population k. We now define the Markov chain C, which has transition probabilities over all pairs of monomorphic multi-population states. Denote by ρ^k_{σ,τ}(s^{-k}) the probability of mutant strategy σ fixating in a focal population k of individuals playing τ, while the remaining populations remain in their monomorphic states s^{-k}. For any given monomorphic strategy profile, there are a total of Σ_k (|S^k| - 1) valid transitions to a subsequent profile where only a single population has changed its strategy. Thus, letting η = (Σ_k (|S^k| - 1))^{-1}, then η ρ^k_{σ,τ}(s^{-k}) is the probability that the joint population state transitions from s to state s' after the occurrence of a single mutation in population k. The stationary distribution over this Markov chain tells us how much time, on average, the dynamics will spend in each of the monomorphic states.
The fixation probabilities ρ^k_{σ,τ}(s^{-k}) (of a rare mutant playing σ overtaking the focal population k) can be calculated as follows. The probability that the number of mutants (individuals playing σ) decreases/increases by one in the focal population is given by,
(5)  T^{∓}(p^k_σ) = (p^k_σ / m) ((m - p^k_σ) / m) ( 1 + e^{±α (f^k(σ, s^{-k}) - f^k(τ, s^{-k}))} )^{-1}.
Then, the fixation probability of a single mutant with strategy σ in a population of m - 1 individuals playing τ is,
(6)  ρ^k_{σ,τ}(s^{-k}) = ( 1 + Σ_{l=1}^{m-1} Π_{j=1}^{l} T^{-}(j) / T^{+}(j) )^{-1}
(7)  = ( 1 + Σ_{l=1}^{m-1} Π_{j=1}^{l} e^{-α (f^k(σ, s^{-k}) - f^k(τ, s^{-k}))} )^{-1}
(8)  = ( 1 + Σ_{l=1}^{m-1} e^{-l α (f^k(σ, s^{-k}) - f^k(τ, s^{-k}))} )^{-1}
(9)  = ( 1 - e^{-α (f^k(σ, s^{-k}) - f^k(τ, s^{-k}))} ) / ( 1 - e^{-m α (f^k(σ, s^{-k}) - f^k(τ, s^{-k}))} ),
where the final equality holds when f^k(σ, s^{-k}) ≠ f^k(τ, s^{-k}), and ρ^k_{σ,τ}(s^{-k}) = 1/m otherwise (the neutral case).
This corresponds to computing an absorption probability of the birth-death Markov chain defined by Eq. 5 [39]. The quotient T^{-}(j)/T^{+}(j) = e^{-α (f^k(σ, s^{-k}) - f^k(τ, s^{-k}))} expresses the likelihood (odds) that the mutation process in population k continues in either direction: if it is close to zero then it is very likely that the number of mutants (individuals with strategy σ in population k) increases; if it is very large it is very likely that the number of mutants will decrease; and if it is close to one then increase and decrease of the number of mutants are equally likely. This yields the following Markov transition matrix corresponding to the jump from strategy profile s_i to s_j,
(10)  C_{i,j} = η ρ^k_{σ,τ}(s^{-k}) if s_j differs from s_i only in population k (with σ replacing τ), and C_{i,j} = 0 otherwise,
for all i ≠ j, where η = (Σ_k (|S^k| - 1))^{-1} and the diagonal entries are C_{i,i} = 1 - Σ_{j ≠ i} C_{i,j}.
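The closed form of Eq. 9 is cheap to evaluate; the sketch below (function name and parameter values ours) exploits the fact that, in the multi-population model, fitnesses do not depend on the focal population’s own state, so the odds T^-/T^+ are constant and the birth-death sum telescopes.

```python
import math

def fixation_probability(f_sigma, f_tau, m, alpha):
    """Fixation probability of a single mutant playing sigma in a focal
    population of m individuals playing tau, with constant fitnesses.
    gamma = T^-/T^+ = exp(-alpha (f_sigma - f_tau)) is constant, giving the
    closed form (1 - gamma) / (1 - gamma^m); a neutral mutant fixates
    with probability 1/m."""
    gamma = math.exp(-alpha * (f_sigma - f_tau))
    if abs(gamma - 1.0) < 1e-12:
        return 1.0 / m                  # neutral case
    return (1.0 - gamma) / (1.0 - gamma ** m)

print(fixation_probability(0.0, 0.0, m=50, alpha=1.0))  # 0.02, i.e., 1/m
print(fixation_probability(1.0, 0.0, m=50, alpha=1.0))  # advantageous mutant, ~0.63
```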
The following theorem formalizes the irreducibility of this finite-population Markov chain, a property that is well-known in the literature (e.g., see [35, Theorem 2] and [20, Theorem 1]) but stated here for our specialized model for completeness.
Theorem 2.1.2.
Given finite payoffs, the Markov chain with transition matrix C is irreducible (i.e., it is possible to get to any state starting from any state). Thus a unique stationary distribution π (with π_i ≥ 0 and Σ_i π_i = 1) exists.
Proof.
Refer to the Supplementary Material for the proof. ∎
This unique π provides the evolutionary ranking, or strength, of each strategy profile in the set S, expressed as the average time spent in each state under the distribution π.
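Given the irreducibility guaranteed by Theorem 2.1.2, π can be obtained numerically by solving the linear system π C = π subject to Σ_i π_i = 1. The sketch below is one common way to do this (helper name and the toy 3-state chain are ours, for illustration only).

```python
import numpy as np

def stationary_distribution(C):
    """Unique stationary distribution of an irreducible row-stochastic
    transition matrix C: solve pi C = pi with sum(pi) = 1 via least squares."""
    n = C.shape[0]
    M = np.vstack([C.T - np.eye(n), np.ones(n)])  # balance + normalization
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(M, b, rcond=None)
    return pi

# A small irreducible chain; pi gives the average time spent in each state.
C = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.2],
              [0.3, 0.0, 0.7]])
pi = stationary_distribution(C)
print(pi, pi.sum())  # non-negative, sums to 1, and satisfies pi @ C ~ pi
```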
This generalized discrete-time evolutionary model, as later shown, will form the basis of our α-Rank method. We next clarify the application of this general model to the single-population case, which applies only to symmetric two-player games and is commonly used in the literature (see Section 5.1).
Application to Single-Population (Symmetric Two-Player) Games
For completeness, we provide a detailed outline of the single population model in Supplementary Material Section 5.3.
Theorem 2.1.3 (The multi-population model generalizes the symmetric single-population model).
The general multi-population model inherently captures the dynamics of the single-population symmetric model.
Proof.
(Sketch) In the pairwise symmetric game setting, we consider only a single population of interacting individuals, where a maximum of two strategies may exist at any time in the population due to the small-mutation-rate assumption. At each timestep, two individuals (with respective strategies τ and σ) are sampled from this population and play a game. Their respective fitnesses then correspond directly to their payoffs, i.e., f(τ) = M(τ, σ) and f(σ) = M(σ, τ). With this change, all other derivations and results follow directly from the generalized model. For example, the probability of decrease/increase of a strategy of type σ in the single-population case translates to,
(11)  T^{∓}(p_σ) = (p_σ / m) ((m - p_σ) / m) ( 1 + e^{±α (f(σ) - f(τ))} )^{-1},
and likewise for the remaining equations. ∎
In other words, the generalized model is general in the sense that one can simulate not only symmetric pairwise interaction dynamics, but also K-wise and asymmetric interactions.
Linking the Micro- and Macro-dynamics Models
We have introduced, so far, a micro- and a macro-dynamics model, each with unique advantages in terms of analyzing the evolutionary strengths of agents. The formal relationship between these two models remains of interest, and is established in the limit of a large population:
Theorem 2.1.4 (Discrete-Continuous Edge Dynamics Correspondence).
In the large-population limit, the macro-dynamics model is equivalent to the micro-dynamics model over the edges of the strategy simplex. Specifically, the limiting model is a variant of the replicator dynamics, with the caveat that the Fermi revision function takes the place of the usual fitness terms.
Proof.
Refer to the Supplementary Material for the proof. ∎
Therefore, a correspondence exists between the two models on the ‘skeleton’ of the simplex, with the macro-dynamics model useful for analyzing the global evolutionary behaviors over this skeleton, and the micro-model useful for ‘zooming into’ the three- and four-strategy faces of the simplex to analyze the interior dynamics.
In the next sections, we first give a few conceptual examples of the generalized discrete-time model, then discuss the need for a new solution concept and the incompatibility between Nash equilibria and dynamical systems. We then directly link the generalized model to our new game-theoretic solution concept, Markov-Conley chains (in Theorem 2.5.1).
2.2 Conceptual Examples


We present two canonical examples that visualize the discrete-time dynamics and build intuition regarding the macro-level insights gained using this type of analysis.
2.2.1 Rock-Paper-Scissors
We first consider the single-population (symmetric) discrete-time model in the Rock-Paper-Scissors (RPS) game, with the payoff matrix shown in Fig. 2(a) (top). One can visualize the discrete-time dynamics using a graph that corresponds to the Markov transition matrix defined in Eq. 10, as shown in Fig. 2(a) (bottom).
Nodes in this graph correspond to the monomorphic population states. In this example, these are the states of the population where all individuals play as agents Rock, Paper, or Scissors. To quantify the time the population spends as each agent, we indicate the corresponding mass of the stationary distribution within each node. As can be observed in the graph, the RPS population spends exactly 1/3 of its time as each agent.
Edges in the graph correspond to the fixation probabilities for pairs of states. Edge directions correspond to the flow of individuals from one agent to another, with strong edges indicating rapid flows towards ‘fitter’ agents. We denote fixation probabilities as a multiple of the neutral fixation probability baseline ρ_m = 1/m, which corresponds to using the Fermi selection function with α = 0. To improve readability of the graphs, we also do not visualize edges looping a node back to itself, or edges with fixation probabilities lower than ρ_m. In this example, we observe a cycle (intransitivity) involving all three agents in the graph. While for small games such cycles may be apparent directly from the structure of the payoff table, we later show that the graph visualization can be used to automatically iterate through cycles even in K-player games involving many agents.
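The RPS chain above can be reproduced in a few lines. The sketch below builds the single-population transition matrix of Eq. 10 using the birth-death sum of Eq. 6; here we assume the common finite-population convention in which an individual’s fitness is its average payoff against the rest of the population (all names and the parameter values m = 50, α = 1 are ours). By the cyclic symmetry of RPS, the stationary distribution comes out uniform, matching the 1/3 mass per node described above.

```python
import numpy as np

def single_pop_transition_matrix(M, m=50, alpha=1.0):
    """Markov chain over monomorphic states for a single population.
    Entry (tau, sigma) is eta * rho: the chance that a mutant playing sigma
    appears in an all-tau population and then fixates."""
    n = M.shape[0]
    C = np.zeros((n, n))
    for t in range(n):
        for s in range(n):
            if s == t:
                continue
            total, prod = 1.0, 1.0
            for j in range(1, m):  # j mutants playing s among m individuals
                f_s = ((j - 1) * M[s, s] + (m - j) * M[s, t]) / (m - 1)
                f_t = (j * M[t, s] + (m - j - 1) * M[t, t]) / (m - 1)
                prod *= np.exp(-alpha * (f_s - f_t))   # odds T^-/T^+
                total += prod
            C[t, s] = (1.0 / (n - 1)) * (1.0 / total)  # eta * rho
        C[t, t] = 1.0 - C[t].sum()
    return C

M = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])  # RPS payoffs
C = single_pop_transition_matrix(M)
vals, vecs = np.linalg.eig(C.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
print(pi)  # ~[1/3, 1/3, 1/3]: the population cycles through all three agents
```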
2.2.2 Battle of the Sexes
Next we illustrate the generalized multi-population (asymmetric) model in the Battle of the Sexes game, with the payoff matrix shown in Fig. 2(b) (top). The graph now corresponds to the interaction of two populations, each representing a player type, with each node corresponding to a monomorphic population profile (s^1, s^2). Edges, again, correspond to fixation probabilities, but occur only when a single population changes its strategy to a different one (a consequence of our small-mutation assumption). In this example, it is evident from the stationary distribution that the populations spend an equal amount of time in each of the two coordination profiles (O, O) and (M, M), and a very small amount of time in the miscoordination states (O, M) and (M, O).
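The two-population computation can be sketched analogously. Here, since individuals never interact within their own population, fitnesses are constant during fixation and the closed form of Eq. 9 applies; the Battle of the Sexes payoffs, the parameter values m = 20 and α = 0.1, and all function names below are our own illustrative choices.

```python
import itertools
import numpy as np

def fixation(f_s, f_t, m, alpha):
    """Closed-form fixation probability (Eq. 9) for constant fitnesses."""
    gamma = np.exp(-alpha * (f_s - f_t))
    return 1.0 / m if abs(gamma - 1.0) < 1e-12 else (1 - gamma) / (1 - gamma ** m)

def two_pop_transition_matrix(M1, M2, m=20, alpha=0.1):
    """Markov chain over monomorphic profiles (s1, s2) for two populations;
    only one population may change its strategy per transition."""
    profiles = list(itertools.product(range(M1.shape[0]), range(M2.shape[1])))
    n = len(profiles)
    eta = 1.0 / ((M1.shape[0] - 1) + (M2.shape[1] - 1))
    C = np.zeros((n, n))
    for i, (s1, s2) in enumerate(profiles):
        for j, (t1, t2) in enumerate(profiles):
            if i == j or (t1 != s1 and t2 != s2):
                continue                      # at most one population mutates
            if t1 != s1:                      # mutant t1 invades population 1
                rho = fixation(M1[t1, s2], M1[s1, s2], m, alpha)
            else:                             # mutant t2 invades population 2
                rho = fixation(M2[s1, t2], M2[s1, s2], m, alpha)
            C[i, j] = eta * rho
        C[i, i] = 1.0 - C[i].sum()
    return profiles, C

M1 = np.array([[3., 0.], [0., 2.]])  # row population's Battle of the Sexes payoffs
M2 = np.array([[2., 0.], [0., 3.]])  # column population's payoffs
profiles, C = two_pop_transition_matrix(M1, M2)
vals, vecs = np.linalg.eig(C.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
print(dict(zip(profiles, np.round(pi, 3))))  # the two coordination profiles dominate equally
```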
2.3 The Incompatibility of Nash Equilibrium and Dynamical Systems
Continuous- and discrete-time dynamical systems have been used extensively in Game Theory, Economics, and Algorithmic Game Theory. In the particular case of multi-agent evaluation in metagames, this type of analysis is relied upon for revealing useful insights into the strengths and weaknesses of interacting agents [8]. Often, the goal of research in these areas is to establish that, in some sense and manner, the investigated dynamics actually converge to a Nash equilibrium; there has been limited success on this front, and there are some negative results [40, 41, 42]. In fact, all known dynamics in games (the replicator dynamics, the many continuous variants of the dynamics used in the proof of Nash’s theorem, etc.) do cycle. To compound this issue, metagames are often large, extend beyond pairwise interactions, and may not be zero-sum. While solving for a Nash equilibrium can be done in polynomial time for zero-sum games, doing so in general-sum games is known to be PPAD-complete [22], which severely limits the feasibility of using such a solution concept for evaluating our agents.
Of course, some dynamics are known to converge to relaxations of the Nash equilibrium, such as the correlated equilibrium polytope or the coarse correlated equilibria [43]. Unfortunately, this “convergence” is typically considered in the sense of time averages; time averages can be useful for establishing performance bounds in games, but tell us little about actual system behavior, which is a core component of what we study through games. For certain games, dynamics may indeed converge to a Nash equilibrium, but they may also cycle. For example, it is encouraging that in all 2 × 2 matrix games these equilibria, cycles, and slight generalizations thereof are the only possible limiting behaviors for continuous-time dynamics (i.e., flows). Unfortunately, this clean behavior (convergence to either a cycle or, as a special case, to a Nash equilibrium) is an artifact of the two-dimensional nature of these games, a consequence of the Poincaré–Bendixson theorem [44]. There is a wide range of results in different disciplines arguing that learning dynamics in games tend not to equilibrate to any Nash equilibrium, but instead exhibit complex, unpredictable behavior (e.g., [45, 46, 47, 48, 49, 50]). The dynamics of even simple two-person games with three or more strategies per player can be chaotic [51], that is, inherently complex and difficult to predict. Chaos goes against the core of our project; there seems to be little hope of building a predictive theory of player behavior based on dynamics in terms of the Nash equilibrium.
2.4 Markov-Conley chains: A Dynamical Solution Concept


Recall our overall objective: we would like to understand and evaluate multi-agent interactions using a detailed and realistic model of evolution, such as the replicator dynamics, in combination with a game-theoretic solution concept. We start by acknowledging the fundamental incompatibility between dynamics and the Nash equilibrium: dynamics are often incapable of reaching the Nash equilibrium. However, instead of taking this as a disappointing flaw of dynamics, we see it as an opportunity to look for a novel solution concept that does not share Nash's limitations in relation to these dynamical systems. We contemplate whether a plausible algorithmic solution concept can emerge by asking: what do these dynamics converge to? Our goal is to identify the nontrivial, irreducible behaviors of a dynamical system (i.e., behaviors that cannot be partitioned more finely in a way that respects the system dynamics) and thus provide a new solution concept, an alternative to Nash's, that will enable evaluation of multi-agent interactions using the underlying evolutionary dynamics. We carve a pathway towards this alternative solution concept by first considering the topology of dynamical systems.
2.4.1 Topology of Dynamical Systems and Conley’s Theorem
Dynamicists and topologists have worked throughout the past century to extend to higher dimensions the benign yet complete limiting dynamical behaviors described in Section 2.3 that one sees in two dimensions: convergence to cycles (or to equilibria as a special case). That is, they have been trying to find an appropriate relaxation of the notion of a cycle such that the two-dimensional picture is restored. After many decades of trial and error, new and intuitive conceptions of “periodicity” and “cycles” were indeed discovered, in the form of chain recurrent sets and chain components, which we define in this section. These key ingredients form the foundation of Conley's Fundamental Theorem of Dynamical Systems, which in turn leads to the formulation of our Markov-Conley chain solution concept and the associated multi-agent evaluation scheme.
Definitions
To make our treatment formal, we require definitions of the following set of topological concepts, based primarily on the work of Conley [28]. Our chain recurrence approach and the theorems in this section follow from [52]. We also provide the interested reader a general background on dynamical systems in Supplementary Material 5.2, in an effort to make our work self-contained.
Definition 2.4.1 (Flow).
A flow on a topological space X is a continuous mapping φ: ℝ × X → X such that:

(i) φ_t: X → X is a homeomorphism for each t ∈ ℝ.

(ii) φ_0(x) = x for all x ∈ X.

(iii) φ_{s+t}(x) = φ_s(φ_t(x)) for all s, t ∈ ℝ and all x ∈ X.
Depending on the context, we sometimes write φ_t(x) for φ(t, x) and denote a flow by (X, φ).
Definition 2.4.2 ((ε, T)-chain).
Let φ be a flow on a metric space (X, d). Given ε > 0, T > 0, and x, y ∈ X, an (ε, T)-chain from x to y with respect to φ and d is a pair of finite sequences x = x_1, x_2, …, x_{n−1}, x_n = y in X and t_1, …, t_{n−1} in [T, ∞), denoted together by (x_1, …, x_n; t_1, …, t_{n−1}), such that

(12) d(φ_{t_i}(x_i), x_{i+1}) < ε

for i = 1, …, n − 1.
Intuitively, an (ε, T)-chain corresponds to the forward dynamics under flow φ connecting the points x and y, with slight perturbations (of magnitude less than ε) allowed at each timestep (see Fig. 5 for an example). Note these deviations are allowed to occur only at step-sizes bounded away from 0 (each t_i ≥ T), as otherwise the accumulation of perturbations could yield trajectories completely dissimilar to those induced by the original flow [53].
Definition 2.4.3 (Forward chain limit set).
Let φ be a flow on a metric space (X, d). The forward chain limit set of x ∈ X with respect to φ and d is the set

(13) Ω(φ, x) = ⋂_{ε > 0, T > 0} { y ∈ X : there exists an (ε, T)-chain from x to y }.
Definition 2.4.4 (Chain equivalent points).
Let φ be a flow on a metric space (X, d). Two points x, y ∈ X are chain equivalent with respect to φ and d if y ∈ Ω(φ, x) and x ∈ Ω(φ, y).
Definition 2.4.5 (Chain recurrent point).
Let φ be a flow on a metric space (X, d). A point x ∈ X is chain recurrent with respect to φ and d if x is chain equivalent to itself; i.e., there exists an (ε, T)-chain connecting x to itself for every ε > 0 and T > 0.
Chain recurrence can be understood as an orbit with slight perturbations allowed at each time step (see Fig. 5), which constitutes a new conception of “periodicity” with a very intuitive explanation in Computer Science terms: Imagine Alice is using a computer to simulate the trajectory of a dynamical system that induces a flow φ. Every time she computes a single iteration of the dynamical process with a minimum step-size T, there is a rounding error of size at most ε. Consider an adversary, Bob, who can manipulate the result at each timestep within the ε-sphere of the actual result. If, regardless of ε or the minimum step-size T, Bob can persuade Alice that her dynamical system starting from a point returns back to this point in a finite number of steps, then this point is chain recurrent.
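The Alice-and-Bob picture can be made concrete on the simplest recurrent system: a constant-speed rotation on the circle. Following the exact flow in steps of length at least T, every point eventually re-enters any ε-ball around its start, so Bob can certainly close the chain (here he never even needs his perturbation budget). The flow and constants below are our own illustrative choices, not taken from the paper.

```python
import math

def steps_until_return(theta0, eps, T=1.0, max_steps=1_000_000):
    """Iterate the time-T map of the circle rotation flow phi_t(x) = x + t
    (mod 2*pi) and report how many steps the exact orbit takes to re-enter
    the eps-ball around theta0. Since even the unperturbed orbit returns,
    an adversary with any eps > 0 budget can close an (eps, T)-chain:
    theta0 is chain recurrent for every eps and T.
    """
    theta = theta0
    for step in range(1, max_steps + 1):
        theta = (theta + T) % (2.0 * math.pi)
        gap = abs(theta - theta0)
        if min(gap, 2.0 * math.pi - gap) < eps:
            return step
    return None
```

With T = 1 and ε = 0.05, the orbit starting at 0 first returns at step 44 (since 44 ≈ 7 · 2π); shrinking ε only lengthens, but never prevents, the return.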
This new notion of “periodicity” (i.e., chain recurrence) leads to a corresponding notion of a “cycle” captured in the concept of chain components, defined below.
Definition 2.4.6 (Chain recurrent set).
The chain recurrent set of flow φ, denoted R(φ), is the set of all chain recurrent points of φ.
Definition 2.4.7 (Chain equivalence relation ∼).
Let the relation ∼ on R(φ) be defined by x ∼ y if and only if x is chain equivalent to y. This is an equivalence relation on the chain recurrent set R(φ).
Definition 2.4.8 (Chain component).
The equivalence classes in R(φ) of the chain equivalence relation ∼ are called the chain components of φ.
In the context of the Alice and Bob example, chain components are the maximal sets such that, for any two points x and y in the same component, Bob can similarly persuade Alice that the flow induced by her dynamical system can get her from x to y in a finite number of steps. For example, the matching pennies replicator dynamics (shown in Fig. 3(a)) have one chain component, consisting of the entire domain; in the context of the Alice and Bob example, the cyclical nature of the dynamics throughout the domain means that Bob can convince Alice that any two points may be connected using a series of finite perturbations of size ε, for all ε > 0 and T > 0. On the other hand, the coordination game replicator dynamics (shown in Fig. 3(b)) have five chain components corresponding to the fixed points (which are recurrent by definition): four in the corners, and one mixed-strategy fixed point in the center. For a formal treatment of these examples, see [26].
Points in each chain component are chain equivalent by definition. Naturally, the chain recurrent set can be partitioned into a (possibly infinite) number of chain components. In other words, chain components cannot be partitioned more finely in a way that respects the system dynamics; they constitute the fundamental topological concept needed to define the irreducible behaviors we seek.
Conley’s Theorem
We now wish to characterize the role of chain components in the long-term dynamics of systems, such that we can evaluate the limiting behaviors of multi-agent interactions using our evolutionary dynamical models. Conley's Fundamental Theorem of Dynamical Systems leverages the above perspective on “periodicity” (i.e., chain recurrence) and “cycles” (i.e., chain components) to decompose the domain of any dynamical system into two classes: 1) chain components, and 2) transient points. To introduce Conley's theorem, we first need to define the notion of a complete Lyapunov function. The game-theoretic analogue of this idea is the notion of a potential function in potential games. In a potential game, as long as we are not at an equilibrium, the potential is strictly decreasing and guides the dynamics towards the standard game-theoretic solution concept, i.e., equilibria [54]. The notion of a complete Lyapunov function switches the target solution concept from equilibria to chain recurrent sets. More formally:
Definition 2.4.9 (Complete Lyapunov function).
Let φ be a flow on a metric space (X, d). A complete Lyapunov function for φ is a continuous function γ: X → ℝ such that:

(i) γ(φ_t(x)) is a strictly decreasing function of t for all x ∈ X \ R(φ),

(ii) for all points x, y ∈ R(φ), x and y are in the same chain component if and only if γ(x) = γ(y),

(iii) the image γ(R(φ)) ⊂ ℝ is nowhere dense.
Conley’s Theorem, the important result in topology that will form the basis of our solution concept and ranking scheme, is as follows:
Theorem 2.4.10 (Conley’s Fundamental Theorem of Dynamical Systems [28], informal statement).
The domain of any dynamical system can be decomposed into its (possibly infinitely many) chain components; the remaining points are transient, each led to the recurrent part by a Lyapunov function.
The powerful implication of Conley’s Theorem is that complete Lyapunov functions always exist.
Theorem 2.4.11 ([28]).
Every flow on a compact metric space has a complete Lyapunov function.
In other words, the space is decomposed into points that are chain recurrent and points that are led to the chain recurrent part in a gradient-like fashion with respect to a Lyapunov function that is guaranteed to exist. In game-theoretic terms, every game is a “potential” game, if only we change our solution concept from equilibria to chain recurrent sets.
2.4.2 Asymptotically Stable Sink Chain Components
Our objective is to investigate the likelihood of an agent being played in a given meta-game by using a detailed and realistic model of multi-agent evolution, such as the replicator dynamics. While chain components capture the limiting behaviors of dynamical systems (in particular, the evolutionary dynamics that we seek to use for our multi-agent evaluations), they can be infinite in number (as mentioned in Section 2.4.1); it may not be feasible to compute or use them in practice within our evaluation scheme. To resolve this, we narrow our focus to a particular class of chain components called asymptotically stable sink chain components, which we define in this section. Asymptotically stable sink chain components are a natural target for this investigation as they encode the possible “final” long-term behaviors of the system; by contrast, we can escape out of other chain components via infinitesimally small perturbations. We prove in the subsequent section (Theorem 2.4.24, specifically) that, in the case of the replicator dynamics and related variants, asymptotically stable sink chain components are finite in number; our desired solution concept is obtained as a byproduct of this proof.
We proceed by first showing that the chain components of a dynamical system can be partially ordered by reachability through chains, and we focus on the sinks of this partial order. We start by defining a partial order on the set of chain components:
Definition 2.4.12.
Let φ be a flow on a metric space (X, d) and let A, B be chain components of the flow. Define the relation A ⪯ B to hold if and only if there exist x ∈ A and y ∈ B such that x ∈ Ω(φ, y).
Intuitively, A ⪯ B if we can reach A from B with (ε, T)-chains for every ε > 0 and T > 0.
Theorem 2.4.13 (Partial order on chain components).
Let φ be a flow on a metric space (X, d). Then the relation ⪯ of Definition 2.4.12 is a partial order on the chain components of the flow.
Proof.
Refer to the Supplementary Material for the proof. ∎
We will be focusing on the minimal elements of this partial order, i.e., chain components A such that there does not exist any other chain component B ≠ A with B ⪯ A. We call such chain components sink chain components.
Definition 2.4.14 (Sink chain components).
A chain component A is called a sink chain component if there does not exist any other chain component B ≠ A such that B ⪯ A.
We can now define the useful notion of asymptotically stable sink chain components, which relies on the notions of Lyapunov stable, asymptotically stable, and attracting sets.
Definition 2.4.15 (Lyapunov stable set).
Let φ be a flow on a metric space (X, d). A set A ⊆ X is Lyapunov stable if for every neighborhood O of A there exists a neighborhood O′ of A such that every trajectory that starts in O′ is contained in O; i.e., if x ∈ O′ then φ_t(x) ∈ O for all t ≥ 0.
Definition 2.4.16 (Attracting set).
A set A ⊆ X is attracting if there exists a neighborhood U of A such that every trajectory starting in U converges to A.
Definition 2.4.17 (Asymptotically stable set).
A set A ⊆ X is called asymptotically stable if it is both Lyapunov stable and attracting.
Definition 2.4.18 (Asymptotically stable sink chain component).
A chain component is called an asymptotically stable sink chain component if it is both a sink chain component and an asymptotically stable set.
2.4.3 Markov-Conley chains
Although we wish to study asymptotically stable sink chain components, it is difficult to do so theoretically, as we do not have an exact characterization of their geometry or of the behavior of the dynamics inside them. This is a rather difficult task even experimentally. Replicator dynamics can be chaotic in both small and large games [51, 55]. Even when their behavior is convergent for all initial conditions, the resulting equilibrium can be hard to predict and can be highly sensitive to initial conditions [56]. It is, therefore, not clear how to extract any meaningful information even from many trial runs of the dynamics. These issues are exacerbated when games involve more than three or four strategies, where even visualization of trajectories becomes difficult. While studies of these dynamics have been conducted for such low-dimensional cases [57, 58], very little is known about the geometry and topology of the limit behavior of replicator dynamics for general games, making it hard to even make informed guesses about whether the dynamics have, for practical purposes, converged to an invariant subset (i.e., a sink chain component).
Instead of studying the actual dynamics, a computationally amenable alternative is to use a discrete-time, discrete-space approximation with similar limiting dynamics, but which can be directly and efficiently analyzed. We start with the crudest (but still meaningful) such approximation: a set of Markov chains whose state space is the set of pure strategy profiles of the game. We refer to each of these Markov chains as a Markov-Conley chain, and prove in Theorem 2.4.24 that a finite number of them exist in any game under the replicator dynamics (or variants thereof).
Let us now formally define the Markov-Conley chains of a game, which rely on the notions of the response graph of a game and its sink strongly connected components.
Definition 2.4.19 (Strictly and weakly better response).
Let σ, σ′ be any two pure strategy profiles of the game that differ in the strategy of a single player k. Player k's strategy in σ′ is a strictly (respectively, weakly) better response than her strategy in σ if her payoff at σ′ is larger than (respectively, at least as large as) her payoff at σ.
Definition 2.4.20 (Response graph of a game).
The response graph of a game G is a directed graph whose vertex set coincides with the set of pure strategy profiles of the game. Let σ, σ′ be any two pure strategy profiles that differ in the strategy of a single player k. We include a directed edge from σ to σ′ if σ′ is a weakly better response for player k as compared to σ.
Definition 2.4.21 (Strongly connected components).
The strongly connected components of a directed graph are the maximal subgraphs wherein there exists a path between each pair of vertices in the subgraph.
Definition 2.4.22 (Sink strongly connected components).
The sink strongly connected components of a directed graph are the strongly connected components with no outgoing edges.
The response graph of a game has a finite number of sink strongly connected components. If such a component is a singleton, it is a pure Nash equilibrium by definition.
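The definitions above are directly computable. The sketch below builds the response graph of a small normal-form game and extracts its sink strongly connected components with Tarjan's algorithm; the function names and game encoding are our own illustrative choices, not the paper's.

```python
from itertools import product

def response_graph(strategies, payoff, n_players):
    """Response graph (Def. 2.4.20): nodes are pure strategy profiles; there
    is an edge u -> v when v differs from u in one player's strategy and is
    a weakly better response for that player. payoff(profile, k) returns
    player k's payoff at the profile."""
    nodes = [tuple(p) for p in product(strategies, repeat=n_players)]
    edges = {u: [] for u in nodes}
    for u in nodes:
        for k in range(n_players):
            for s in strategies:
                if s == u[k]:
                    continue
                v = u[:k] + (s,) + u[k + 1:]
                if payoff(v, k) >= payoff(u, k):  # weakly better for player k
                    edges[u].append(v)
    return nodes, edges

def sink_sccs(nodes, edges):
    """Sink strongly connected components (Def. 2.4.22) via Tarjan's SCC."""
    index, low, on_stack, comp_of = {}, {}, set(), {}
    stack, comps, counter = [], [], [0]

    def visit(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in edges[v]:
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:      # v is the root of a component
            comp = set()
            while True:
                w = stack.pop(); on_stack.discard(w)
                comp.add(w); comp_of[w] = len(comps)
                if w == v:
                    break
            comps.append(comp)

    for v in nodes:
        if v not in index:
            visit(v)
    # a component is a sink iff none of its nodes has an edge leaving it
    return [c for i, c in enumerate(comps)
            if all(comp_of[w] == i for v in c for w in edges[v])]
```

For two-player Rock-Paper-Scissors the entire 9-profile graph forms a single sink component (the cycle), while a two-player pure-coordination game yields two singleton sinks, i.e., exactly its pure Nash equilibria, matching the remark above.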
Definition 2.4.23 (Markov-Conley chains (MCCs) of a game).
A Markov-Conley chain of a game G is an irreducible Markov chain whose state space is a sink strongly connected component of the response graph associated with G. Many MCCs may exist for a given game G. In terms of the transition probabilities out of a node of each MCC, a canonical way to define them is as follows: with some probability, the node self-transitions. The rest of the probability mass is split between all strictly and weakly improving responses of all players. Namely, the probabilities of strictly improving responses are set equal to each other for all players, and transitions between strategies of equal payoff happen with a smaller probability, also equal to each other for all players.
When the context is clear, we sometimes overload notation and refer to the set of pure strategy profiles in a sink strongly connected component (as opposed to the Markov chain over them) as an MCC. The structure of the transition probabilities introduced in Definition 2.4.23 has the advantage that it renders the MCCs invariant under arbitrary positive affine transformations of the payoffs; i.e., the resulting theoretical and empirical insights are insensitive to such transformations, which is a useful desideratum for a game-theoretic solution concept. There may be alternative definitions of the transition probabilities that warrant future exploration.
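The canonical scheme of Definition 2.4.23 fixes only the qualitative structure of a transition row, so a sketch necessarily picks free parameters: the self-transition mass and the strict-vs-tie weighting below are our own illustrative constants.

```python
def mcc_transition_matrix(component, payoff, deviations, p_self=0.5, tie_weight=0.5):
    """Canonical MCC transition probabilities over one sink strongly
    connected component (Def. 2.4.23), returned as a dict of rows.

    deviations(u) yields (v, k) pairs where profile v differs from u only in
    player k's strategy. Strictly improving deviations share equal weight 1;
    equal-payoff deviations share the smaller weight tie_weight; weakly worse
    deviations get no mass. Weakly improving moves out of a node of a sink
    component stay inside the component, so no filtering is needed.
    """
    P = {}
    for u in component:
        weights = {}
        for v, k in deviations(u):
            if payoff(v, k) > payoff(u, k):
                weights[v] = 1.0                 # strictly improving response
            elif payoff(v, k) == payoff(u, k):
                weights[v] = tie_weight          # equal-payoff transition
        total = sum(weights.values())
        row = {u: p_self if total > 0 else 1.0}  # self-transition mass
        for v, w in weights.items():
            row[v] = (1.0 - p_self) * w / total
        P[u] = row
    return P
```

Because only payoff comparisons enter the construction, the resulting chain is unchanged by positive affine transformations of the payoffs, matching the invariance property noted above.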
MCCs can be understood as a discrete approximation of the chain components of the continuous-time dynamics (hence the connection to Conley's Theorem). The following theorem formalizes this relationship and establishes the finiteness of MCCs:
Theorem 2.4.24.
Let φ be the replicator flow applied to an n-player game. The number of asymptotically stable sink chain components is finite. Specifically, every asymptotically stable sink chain component contains at least one MCC; each MCC is contained in exactly one chain component.
Proof.
Refer to the Supplementary Material for the proof. ∎
The notion of MCCs is thus used as a stepping stone, a computational handle that aims to mimic the long-term behavior of replicator dynamics in general games. Results similar to Theorem 2.4.24 apply for several variants of the replicator dynamics [13], as long as the dynamics are volume-preserving in the interior of the state space, preserve the support of mixed strategies, and act myopically in the presence of two strategies/options with fixed payoffs (i.e., they converge to the better strategy if the payoffs differ, and remain invariant if the payoffs are the same).
2.5 From Markov-Conley chains to the Discrete-time Macro-model
The key idea behind the ordering of agents we wish to compute is that the evolutionary fitness/performance of a specific strategy should be reflected by how often it is chosen by the system/evolution. We have established the solution concept of Markov-Conley chains (MCCs) as a discrete-time, discrete-space analogue of the continuous-time replicator dynamics, which captures these long-term recurrent behaviors for general meta-games (see Theorem 2.4.24). MCCs are attractive from a computational standpoint: they can be found efficiently in all games by computing the sink strongly connected components of the response graph, addressing one of the key criticisms of Nash equilibria. However, similar to Nash equilibria, even simple games may have many MCCs (e.g., five in the coordination game of Fig. 3(b)). The remaining challenge is, thus, to solve the MCC selection problem.
One of the simplest ways to resolve the MCC selection issue is to introduce noise into our system and study a stochastically perturbed version, such that the overall Markov chain is irreducible and therefore has a unique stationary distribution that can be used for our rankings. Specifically, we consider the following stochastically perturbed model: we choose an agent at random and, if it is currently playing strategy s, we choose one of its other strategies s′ at random and set the new system state accordingly. Remarkably, these perturbed dynamics correspond closely to the macro-model introduced in Section 2.1.4 for a particularly large choice of the ranking-intensity value α:
Theorem 2.5.1.
In the limit of infinite ranking-intensity α, the Markov chain associated with the generalized multi-population model introduced in Section 2.1.4 coincides with the MCC.
Proof.
Refer to the Supplementary Material for the proof. ∎
A low ranking-intensity (α → 0) corresponds to the case of weak selection, where even a weak mutant strategy can overtake a given population. A large ranking-intensity, on the other hand, ensures that the probability of a suboptimal strategy overtaking a given population is close to zero, which corresponds closely to the MCC solution concept. In practice, setting the ranking-intensity α to infinity may not be computationally feasible; in this case, the underlying Markov chain may be reducible, and the existence of a unique stationary distribution (which we use for our rankings) may not be guaranteed. To resolve the MCC selection problem, we therefore require a perturbed model with a ranking-intensity α large enough to approximate an MCC, but small enough that the MCCs remain connected. By introducing this perturbed version of Markov-Conley chains, the resulting Markov chain is irreducible (per Theorem 2.1.2). The long-term behavior is thus captured by the unique stationary distribution in the large-α limit. Our so-called α-Rank evaluation method then corresponds to the ordering of the agents in this particular stationary distribution. The perturbations introduced here imply the need for a sweep over the ranking-intensity parameter α (a single hyperparameter), which we find to be computationally feasible across all of the large-scale games we analyze using α-Rank.
The combination of Theorem 2.4.24 and Theorem 2.5.1 yields a unifying perspective involving a chain of models of increasing complexity: the continuous-time replicator dynamics is on one end, our generalized discrete-time model is on the other, and MCCs are the link in between.
3 Results
In the following we summarize our generalized ranking model and the main theoretical and empirical results. We start by outlining how the α-Rank procedure works, then illustrate α-Rank in a number of canonical examples. We then deepen the understanding of α-Rank's evolutionary dynamics model through further intuitions and theoretical results, and end with an empirical validation of α-Rank in various domains.
3.1 α-Rank: Evolutionary Ranking of Strategies
We first detail the α-Rank algorithm, then provide some insights and intuitions to further facilitate the understanding of our ranking method and solution concept.
3.1.1 Algorithm
Based on the dynamical concepts of chain recurrence and MCCs established above, we now detail a descriptive method, titled α-Rank, for computing strategy rankings in a multi-agent interaction:

1. Construct the meta-game payoff table for each population from data of multi-agent interactions, or from running game simulations.

2. Compute the transition matrix as outlined in Section 2.1.4. Per the discussion in Section 2.5, one must use a sufficiently large ranking-intensity value α in Eq. (4); this ensures that α-Rank preserves the ranking of strategies with closest correspondence to the MCC solution concept. As a large enough value of α depends on the domain under study, a useful heuristic is to conduct a sweep over α, starting from a small value and increasing it exponentially until the rankings converge.

3. Compute the unique stationary distribution, π, of the transition matrix. Each element of the stationary distribution corresponds to the time the populations spend in a given strategy profile.

4. Compute the agent rankings, which correspond to the ordered masses of the stationary distribution π. The stationary distribution mass for each agent constitutes a ‘score’ for it (as might be shown, e.g., on a leaderboard).
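The four steps above can be sketched end-to-end for a single-population (symmetric, pairwise) meta-game. This is a minimal illustration, not the paper's implementation: the Fermi-style fixation formula, population size m, α schedule, and convergence heuristic are all our own assumptions rather than the exact constants of Eq. (4).

```python
import math

def alpha_rank(M, m=50, alphas=(0.1, 1.0, 10.0, 100.0), iters=20000):
    """Sketch of the ranking pipeline for a payoff table M, where M[i][j]
    is the empirical payoff of strategy i against strategy j (step 1).
    Sweeps the ranking intensity alpha until the ordering stabilizes
    (steps 2-4), returning (ranking, stationary_distribution)."""
    n = len(M)

    def transition(alpha):
        # step 2: Markov transition matrix from Fermi-style fixation odds
        P = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                d = M[j][i] - M[i][j]  # mutant-vs-incumbent payoff gap
                x = alpha * d
                if abs(x) < 1e-12:
                    rho = 1.0 / m      # neutral fixation baseline
                else:
                    y = abs(x)         # numerically stable for large |x|
                    r = (1 - math.exp(-y)) / (1 - math.exp(-m * y))
                    rho = r if x > 0 else math.exp(-(m - 1) * y) * r
                P[i][j] = rho / (n - 1)
            P[i][i] = 1.0 - sum(P[i])
        return P

    ranking = None
    for alpha in alphas:
        P = transition(alpha)
        pi = [1.0 / n] * n  # step 3: stationary distribution, power iteration
        for _ in range(iters):
            pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        new_ranking = sorted(range(n), key=lambda s: -pi[s])  # step 4
        if new_ranking == ranking:
            break  # heuristic sweep: ordering has converged
        ranking = new_ranking
    return ranking, pi
```

On a transitive toy game where strategy 0 beats 1 and 2, and 1 beats 2, the sweep settles on the ranking [0, 1, 2]; on Rock-Paper-Scissors all three strategies receive equal stationary mass, reproducing the tied ranking discussed in Section 3.2.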
3.1.2 α-Rank and MCCs as a Solution Concept: A Paradigm Shift
The solution concept of MCCs is foundationally distinct from that of the Nash equilibrium. The Nash equilibrium is rooted in classical game theory, which not only models the interactions in multi-agent systems, but is also normative in the sense that it prescribes how a player should behave based on the assumption of individual rationality [59, 15, 13]. Beyond classical game theory's strong assumptions regarding the rationality of the players involved in the interaction, there exist many fundamental limitations of the Nash equilibrium concept: intractability (computing a Nash equilibrium is PPAD-complete), equilibrium selection, and the incompatibility of this static concept with the dynamic behaviors of agents in interacting systems. To compound these issues, even methods that aim to compute an approximate Nash equilibrium are problematic: a typical approach is to use exploitability to measure deviation from Nash, and thereby to closely approximate one; the problem is that exploitability is itself intractable to compute for large games (typically the ones we are interested in), and issues remain with using exploitability as a measure of strategy strength (e.g., see [60]). Overall, there seems little hope of deploying the Nash equilibrium as a solution concept for the evaluation of agents in general large-scale (empirical) games.
The concept of an MCC, by contrast, embraces the dynamical systems perspective, in a manner similar to evolutionary game theory. Rather than trying to capture the strategic behavior of players in an equilibrium, we deploy a dynamical system based on the evolutionary interactions of agents that captures and describes the long-term behavior of the players involved in the interaction. As such, our approach is descriptive rather than prescriptive: it does not prescribe the strategies that one should play; rather, it provides useful information regarding the strategies that are evolutionarily non-transient (i.e., resistant to mutants), and highlights the remaining strategies that one might play in practice. Understanding MCCs requires a shift away from the classical models described above for games and multi-agent interactions. Our new paradigm is to let the dynamics roll out and enable strong (i.e., non-transient) agents to emerge and weak (i.e., transient) agents to vanish naturally through their long-term interactions. The resulting solution concept not only permits an automatic ranking of agents' evolutionary strengths, but is powerful both in terms of computability and usability: our rankings are guaranteed to exist, can be computed tractably for any game, and involve no equilibrium selection issues, as the evolutionary process converges to a unique stationary distribution. Nash tries to identify static single points in the simplex that capture simultaneous best-response behaviors of agents, but comes with the range of complications mentioned above. The support of our stationary distribution, on the other hand, captures the strongest non-transient agents, which may be interchangeably played by the interacting populations and therefore constitutes a dynamic output of our approach.
Given that both Nash and MCCs share a common foundation in the notion of a best response (i.e., simultaneous best responses for Nash, and the sink components of a best-response graph for MCCs), it is interesting to consider the circumstances under which the two concepts coincide. There do, indeed, exist such exceptional circumstances: for example, for a potential game, every better-response sequence converges to a (pure) Nash equilibrium, which coincides with an MCC. However, even in relatively simple games, differences between the two solution concepts are expected to occur in general, due to the inherently dynamic nature of MCCs (as opposed to Nash). For example, in the Biased Rock-Paper-Scissors game detailed in Section 3.2.2, the Nash equilibrium and the stationary distribution are not equivalent due to the cyclical nature of the game: each player's symmetric Nash mixture is biased by the payoffs, whereas the stationary distribution is uniform over the three strategies. The key difference here is that whereas Nash is prescriptive and tells players which strategy mixture to use, namely the Nash mixture, assuming rational opponents, α-Rank is descriptive in the sense that it filters out evolutionarily transient strategies and yields a ranking of the remaining strategies in terms of their long-term survival. In the Biased Rock-Paper-Scissors example, α-Rank reveals that all three strategies are equally likely to persist in the long term, as they are part of the same sink strongly connected component of the response graph. In other words, the stationary distribution mass (i.e., the α-Rank score) on a particular strategy is indicative of its resistance to being invaded by any other strategy, including those in the distribution support. In the case of the Biased Rock-Paper-Scissors game, this means that the three strategies are equally likely to be invaded by a mutant, in the sense that their outgoing fixation probabilities are equivalent.
In contrast to our evolutionary ranking, Nash comes without any such stability properties (e.g., consider the interior mixed Nash in Fig. 3(b)). Even computing Evolutionarily Stable Strategies (ESS) [13], a refinement of Nash equilibria, is intractable [61, 62]. In larger games (e.g., AlphaZero in Section 3.4.2), the reduction in the number of agents that are resistant to mutations is more dramatic (in the sense of the stationary distribution support size being much smaller than the total number of agents) and less obvious (in the sense that more-resistant agents are not always the ones that have been trained for longer). In summary, the strategies chosen by our approach are those favored by evolutionary selection, as opposed to the Nash strategies, which are simultaneous best responses.
3.2 Conceptual Examples
We revisit the earlier conceptual examples of Rock-Paper-Scissors and Battle of the Sexes from Section 2.2 to illustrate the rankings provided by the α-Rank methodology. We use a fixed population size in our evaluations.
3.2.1 Rock-Paper-Scissors

In the Rock-Paper-Scissors game, recall the cyclical nature of the discrete-time Markov chain (shown in Fig. 5(a)) for a fixed value of the ranking-intensity parameter α. We first investigate the impact of the ranking-intensity α on overall strategy rankings by plotting the stationary distribution as a function of α in Fig. 5(b). The result is that the population spends one third of its time playing each strategy regardless of the value of α, which is in line with intuition due to the cyclical best-response structure of the game's payoffs. The Nash equilibrium, for comparison, is also (1/3, 1/3, 1/3). The α-Rank output (Table 0(b)), which corresponds to a high value of α, thus indicates a tied ranking for all three strategies, also in line with intuition.
3.2.2 Biased Rock-Paper-Scissors


Consider now the game of Rock-Paper-Scissors, but with biased payoffs (shown in Fig. 6(a)). The introduction of the bias moves the Nash from the center of the simplex towards one of the corners. It is worthwhile to investigate the corresponding variation of the stationary distribution masses as a function of the ranking-intensity α (Fig. 6(c)) in this case. As evident from the fixation probabilities in Eq. (9) of the generalized discrete-time model, very small values of α cause the raw payoff values to have very low impact on the dynamics captured by the discrete-time Markov chain; in this case, any mutant strategy has the same probability of taking over the population, regardless of the current strategy played by the population. This corresponds well to Fig. 6(c), where small values of α yield stationary distributions close to (1/3, 1/3, 1/3).
As α increases, payoff values play a correspondingly more critical role in dictating the long-term population state; in Fig. 6(c), the population tends to play Paper most often within this intermediate range of α. Most interesting to us, however, is the case where α increases to the point that our discrete-time model bears a close correspondence to the MCC solution concept (per Theorem 2.5.1). In this limit of large α, the striking outcome is that the stationary distribution once again converges to (1/3, 1/3, 1/3). Thus, Rank yields the high-level conclusion that in the long term, a monomorphic population playing any of the 3 given strategies can be completely and repeatedly displaced by a rare mutant, and as such assigns the same ranking to all strategies (Table 0(c)). This simple example illustrates perhaps the most important trait of the MCC solution concept and the resulting Rank methodology: they capture the fundamental dynamical structure of games and the long-term intransitivities that exist therein, with the rankings produced corresponding to the dynamical strategy-space consumption, or basins of attraction, of strategies.
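The α-sweep behavior described above can be reproduced on a hypothetical biased variant (the payoff values below are invented for illustration; only the intact cyclic structure matters): the stationary distribution is near-uniform for tiny α, skewed by the payoff magnitudes at intermediate α, and near-uniform again for large α, where the cycle dominates.

```python
import numpy as np

def fixation_prob(delta, alpha, m):
    # Probability a single mutant with payoff advantage `delta` fixates.
    if delta == 0:
        return 1.0 / m
    with np.errstate(over="ignore"):
        return float(np.expm1(-alpha * delta) / np.expm1(-alpha * m * delta))

def stationary(payoffs, alpha, m=50, iters=50000):
    """Stationary distribution of the discrete-time chain over
    monomorphic populations, via power iteration."""
    n = payoffs.shape[0]
    C = np.zeros((n, n))
    for s in range(n):
        for r in range(n):
            if r != s:
                C[s, r] = fixation_prob(payoffs[r, s] - payoffs[s, s], alpha, m) / (n - 1)
        C[s, s] = 1.0 - C[s].sum()
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):
        pi = pi @ C
    return pi

# Hypothetical biased Rock-Paper-Scissors: the cycle is intact, but one
# win (Rock over Scissors) pays twice as much as the others.
biased = np.array([[ 0.0, -1.0,  2.0],
                   [ 1.0,  0.0, -1.0],
                   [-2.0,  1.0,  0.0]])

low  = stationary(biased, alpha=1e-4)   # payoffs barely matter -> ~uniform
mid  = stationary(biased, alpha=0.5)    # payoff magnitudes skew the masses
high = stationary(biased, alpha=100.0)  # intransitive cycle dominates -> ~uniform again
```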
3.2.3 Battle of the Sexes

We consider next an example of Rank applied to an asymmetric game, the Battle of the Sexes. Figure 7(b) plots the stationary distribution against the ranking intensity α, where we again observe a uniform stationary distribution for very low values of α. As α increases, we observe the emergence of two sink chain components corresponding to the two coordination profiles (O, O) and (M, M) (where O and M denote the two pure strategies), which thus attain the top Rank scores in Table 0(d). Note the distinct convergence behaviors of the two miscoordination profiles (O, M) and (M, O) in Fig. 7(b), where the stationary distribution masses on these profiles vanish at different rates as α increases. This is directly due to the structure of the underlying payoffs and the resulting differences in fixation probabilities. Namely, starting from one miscoordination profile, a deviating player increases their local payoff from zero to the smaller of the two coordination payoffs; starting from the other, a deviating player’s payoff increases from zero to the larger coordination payoff. Correspondingly, the fixation probabilities out of the latter profile are higher than those out of the former (Fig. 7(a)), and its stationary distribution mass therefore vanishes faster as α increases. We note that these low-α behaviors, while interesting, have no impact on the final rankings computed in the limit of large α (Table 0(d)). We refer the interested reader to [63] for a detailed analysis of the non-coordination components of the stationary distribution in mutualistic interactions, such as the Battle of the Sexes.
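The sink components discussed here can be found mechanically from the game’s strict better-response graph. The sketch below does so for a bimatrix game; the Battle of the Sexes payoffs used (coordination payoffs 3 and 2, zero otherwise) are a standard textbook choice assumed purely for illustration.

```python
from itertools import product
import numpy as np

def sink_components(A, B):
    """Sink strongly-connected components of the strict better-response
    graph of a bimatrix game (row payoffs A, column payoffs B)."""
    profiles = list(product(range(A.shape[0]), range(A.shape[1])))
    idx = {p: k for k, p in enumerate(profiles)}
    n = len(profiles)
    adj = np.zeros((n, n), dtype=bool)
    for (i, j) in profiles:
        for i2 in range(A.shape[0]):   # row player deviates i -> i2
            if A[i2, j] > A[i, j]:
                adj[idx[(i, j)], idx[(i2, j)]] = True
        for j2 in range(A.shape[1]):   # column player deviates j -> j2
            if B[i, j2] > B[i, j]:
                adj[idx[(i, j)], idx[(i, j2)]] = True
    # Transitive closure of the deviation graph (fine for tiny games).
    reach = adj | np.eye(n, dtype=bool)
    for k in range(n):
        reach |= reach[:, [k]] & reach[[k], :]
    # Group mutually reachable profiles into strongly-connected components.
    components, seen = [], set()
    for k in range(n):
        if k not in seen:
            comp = {l for l in range(n) if reach[k, l] and reach[l, k]}
            seen |= comp
            components.append(comp)
    # A sink component has no deviation edge leaving it.
    return [{profiles[u] for u in comp} for comp in components
            if not any(adj[u, v] for u in comp for v in range(n) if v not in comp)]

# Battle of the Sexes with assumed payoffs: coordinating on the first
# strategy pays (3, 2), on the second (2, 3); miscoordination pays 0.
A = np.array([[3, 0], [0, 2]])
B = np.array([[2, 0], [0, 3]])
sinks = sink_components(A, B)   # two singleton sinks: (0, 0) and (1, 1)
```

For Rock-Paper-Scissors the same routine instead returns a single sink component containing all nine profiles, reflecting the intransitive cycle discussed earlier.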
We conclude this discussion by noting that despite the asymmetric nature of the payoffs in this example, the computational techniques used by Rank to conduct the evaluation are essentially identical to those used for the simpler (symmetric) Rock-Paper-Scissors game. This key advantage is especially evident in contrast to recent evaluation approaches that involve decomposition of an asymmetric game into multiple counterpart symmetric games, which must then be concurrently analyzed [9].
3.3 Theoretical Properties of Rank
This section presents key theoretical findings related to the structure of the underlying discrete-time model used in Rank, and the computational complexity of the ranking analysis. Proofs are presented in the Supplementary Material.
Property 3.3.1 (Structure of $C$).
Given a strategy profile $s$ corresponding to a row of the transition matrix $C$, the number of valid profiles it can transition to is $\sum_k (|S^k| - 1) + 1$ (i.e., either a self-transition occurs, or exactly one of the populations switches to a different monomorphic strategy). The sparsity of $C$ is then,
(14) $\text{Sparsity}(C) = 1 - \frac{\sum_k (|S^k| - 1) + 1}{\prod_k |S^k|}$.
Therefore, for games involving many players and strategies, the transition matrix $C$ is large (in the sense that there exist $\prod_k |S^k|$ states), but extremely sparse (in the sense that there exist only $\sum_k (|S^k| - 1) + 1$ outgoing edges from each state). For example, in a 3-wise interaction game where agents in each population have a choice over 25 strategies, $C$ is 99.53% sparse.
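The sparsity expression is straightforward to evaluate directly; a minimal sketch:

```python
from math import prod

def transition_sparsity(strategy_counts):
    """Fraction of zero entries in each row of the transition matrix C:
    of the prod(|S^k|) profiles, only sum(|S^k| - 1) + 1 are reachable
    (a self-transition, or one population switching strategies)."""
    total = prod(strategy_counts)
    reachable = sum(n - 1 for n in strategy_counts) + 1
    return 1.0 - reachable / total

# Three populations with 25 strategies each: C is ~99.53% sparse.
print(round(100 * transition_sparsity([25, 25, 25]), 2))  # 99.53
```

The function takes one strategy count per population, so asymmetric games (e.g., Kuhn Poker’s [4, 4, 4]) are handled identically.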
Property 3.3.2 (Computational complexity of solving for $\pi$).
The sparse structure of the Markov transition matrix $C$ (as identified in Property 3.3.1) can be exploited to solve for the stationary distribution $\pi$ efficiently; specifically, computing the stationary distribution can be formulated as an eigenvalue problem, which can be solved in time cubic in the total number of pure strategy profiles.
The Rank method is, therefore, tractable, in the sense that it runs in polynomial time with respect to the total number of pure strategy profiles. This yields a major computational advantage, in stark contrast to conducting rankings by solving for Nash equilibria (which is PPAD-complete for general-sum games [22], which our metagames may be).
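A sketch of the eigenvalue formulation follows (dense, for clarity; for the sparse matrices of Property 3.3.1, an iterative sparse eigensolver such as scipy.sparse.linalg.eigs would be used instead). The toy chain is illustrative, not from the text.

```python
import numpy as np

def stationary_distribution(C):
    """Solve pi C = pi by finding the left eigenvector of C associated
    with eigenvalue 1 (an O(n^3) eigenvalue computation in the number
    of pure strategy profiles n)."""
    evals, evecs = np.linalg.eig(C.T)    # left eigenvectors of C
    k = np.argmin(np.abs(evals - 1.0))   # eigenvalue closest to 1
    pi = np.real(evecs[:, k])
    return pi / pi.sum()

# Toy 2-state chain: stay in state 0 w.p. 0.9, leave state 1 w.p. 0.5.
C = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = stationary_distribution(C)          # -> [5/6, 1/6]
```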
3.4 Experimental Validation
Domain | Results | Symmetric? | # of Populations | # of Strategies
Rock-Paper-Scissors | Section 3.2.1 | ✓ | 1 | 3
Biased Rock-Paper-Scissors | Section 3.2.2 | ✓ | 1 | 3
Battle of the Sexes | Section 3.2.3 | ✗ | 2 | [2, 2]
AlphaGo | Section 3.4.1 | ✓ | 1 | 7
AlphaZero Chess | Section 3.4.2 | ✓ | 1 | 56
MuJoCo Soccer | Section 3.4.3 | ✓ | 1 | –
Kuhn Poker (3 players) | Section 3.4.4 | ✗ | 3 | [4, 4, 4]
Kuhn Poker (4 players) | Section 3.4.4 | ✗ | 4 | [4, 4, 4, 4]
Leduc Poker | Section 3.4.5 | ✗ | 2 | –
In this section we provide a series of experimental illustrations of Rank in a varied set of domains, including AlphaGo, AlphaZero Chess, MuJoCo Soccer, and both Kuhn and Leduc Poker. As evident in Table 1, the analysis conducted is extensive across multiple axes of complexity, as the domains considered include symmetric and asymmetric games with different numbers of populations and ranges of strategies.
3.4.1 AlphaGo

In this example we conduct an evolutionary ranking of AlphaGo agents based on the data reported in [1]. The metagame considered here corresponds to a 2-player symmetric NFG with 7 AlphaGo agents: AG(r), AG(v), AG(p), AG(rv), AG(rp), AG(vp), and AG(rvp), where r, v, and p respectively denote the combination of rollouts, value networks, and/or policy networks used by each variant. The corresponding payoffs are the win rates for each pair of agent matchups, as reported in Table 9 of [1].
In Table 1(a) we summarize the rankings of these agents using the Rank method. Rank is quite conclusive in the sense that the top agent, AG(rvp), attains all of the stationary distribution mass, dominating all other agents. Further insights into the pairwise agent interactions are revealed by visualizing the underlying Markov chain, shown in Fig. 8(a). Here the population flows (corresponding to the graph edges) indicate which agents are more evolutionarily viable than others; an edge directed from one agent to another indicates that the latter is stronger in the short term of evolutionary interactions. Moreover, the stationary distribution (corresponding to high values of α in Fig. 8(b)) reveals that all agents but AG(rvp) are transient in terms of the long-term dynamics, as a monomorphic population starting from any other agent node eventually reaches AG(rvp). In this sense, the AG(rvp) node constitutes an evolutionarily stable strategy. We also see in Fig. 8(a) that no cyclic behaviors occur in these interactions. Finally, we remark that the recent work of [8] also conducted a metagame analysis on these particular AlphaGo agents and drew similar conclusions to ours. The key limitation of their approach is that it can only directly analyze interactions between triplets of agents, as it relies on visualization of the continuous-time evolutionary dynamics on a 2-simplex. Thus, to draw conclusive results regarding the interactions of the full set of agents, they must concurrently conduct visual analysis of all possible 2-simplices (35 total in this case). This highlights a key benefit of Rank, as it succinctly summarizes agent evaluations with minimal intermediate human-in-the-loop analysis.
3.4.2 AlphaZero

AlphaZero is a generalized algorithm that has been demonstrated to master the games of Go, Chess, and Shogi without reliance on human data [3]. Here we demonstrate the applicability of the Rank evaluation method to large-scale domains by considering the interactions of a large number of AlphaZero agents playing the game of chess. In AlphaZero, training commences by randomly initializing the parameters of a neural network used to play the game in conjunction with a general-purpose tree search algorithm. To synthesize the corresponding metagame, we take a ‘snapshot’ of the network at various stages of training, each of which becomes an agent in our metagame. For example, one agent corresponds to a snapshot taken at approximately 27.5% of the total number of training iterations, while another corresponds to one taken at approximately the conclusion of training. We take 56 of these snapshots in total. The metagame considered here is then a symmetric 2-player NFG involving 56 agents, with payoffs again corresponding to the win rates of every pair of agent matchups. We note that there exist 27720 total simplex 2-faces in this dataset, substantially more than those investigated in [8], which quantifiably justifies the computational feasibility of our evaluation scheme.
We first analyze the evolutionary strengths of agents over a sweep of the ranking intensity α (Fig. 9(b)). While the overall rankings are quite invariant to the value of α, we note again that a large value of α dictates the final Rank evaluations attained in Table 1(b). To gain further insight into the inter-agent interactions, we consider the corresponding discrete-time evolutionary dynamics shown in Fig. 9(a). Note that these interactions are evaluated using the entire 56-agent dataset, though visualized only for the top-ranked agents for readability. The majority of top-ranked agents indeed correspond to snapshots taken near the end of AlphaZero training (i.e., the strongest agents in terms of training time). Specifically, the final snapshot in our dataset, and thus the most-trained agent, attains the top rank with a score of 0.39, in contrast to the second-ranked agent’s score of 0.22. This analysis does reveal some interesting outcomes, however: one agent is not only ranked 5th overall, but also ranked higher than several agents with longer training times.
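As an illustration of the pipeline from pairwise win rates to rankings, the sketch below ranks four hypothetical snapshots by stationary mass. The win-rate table, the conversion of win rate minus 0.5 into a relative payoff, and the parameter values are all assumptions of this sketch, not the actual AlphaZero data or methodology.

```python
import numpy as np

def fixation_prob(delta, alpha, m):
    # Probability a single mutant with payoff advantage `delta` fixates.
    if delta == 0:
        return 1.0 / m
    with np.errstate(over="ignore"):
        return float(np.expm1(-alpha * delta) / np.expm1(-alpha * m * delta))

def rank_scores(win_rates, alpha=5.0, m=50, iters=20000):
    """Score agents by the stationary mass of the discrete-time chain,
    using win rate minus 0.5 as the relative payoff of a matchup."""
    payoffs = win_rates - 0.5
    n = payoffs.shape[0]
    C = np.zeros((n, n))
    for s in range(n):
        for r in range(n):
            if r != s:
                C[s, r] = fixation_prob(payoffs[r, s] - payoffs[s, s], alpha, m) / (n - 1)
        C[s, s] = 1.0 - C[s].sum()
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):
        pi = pi @ C
    return pi

# Hypothetical snapshots: each later snapshot beats all earlier ones.
W = np.array([[0.5, 0.4, 0.3, 0.2],
              [0.6, 0.5, 0.4, 0.3],
              [0.7, 0.6, 0.5, 0.4],
              [0.8, 0.7, 0.6, 0.5]])
scores = rank_scores(W)
order = np.argsort(scores)[::-1]  # most-trained snapshot ranks first
```

With a strictly transitive table like this one, nearly all stationary mass concentrates on the strongest agent; intransitive tables instead spread mass across the cycle, as in the Rock-Paper-Scissors examples.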
We also investigate here the relationship between the Rank scores and Nash equilibria. A key point to recall is the equilibrium selection problem associated with Nash, as multiple equilibria can exist even in the case of twoplayer zerosum metagames. In the case of zerosum metagames, Balduzzi et al. show that there exists a unique maximum entropy (maxent) Nash equilibrium [64], which constitutes a natural choice that we also use in the below comparisons. For general games, unfortunately, this selection issue persists for Nash, whereas it does not for Rank due to the uniqueness of the associated ranking (see Theorem 2.1.2).
We compare the Rank scores and maxent Nash by plotting each throughout AlphaZero training in Fig. 10(a) and Fig. 10(b), respectively; we also plot their difference in Fig. 10(c). At a given training iteration, the corresponding horizontal slice in each plot visualizes the associated evaluation metric (i.e., Rank, maxent Nash, or the difference of the two) computed for all agent snapshots up to that iteration. We first note that both evaluation methods reach a consensus that the strengths of AlphaZero agents generally increase with training, in the sense that only the latest agent snapshots (i.e., the ones closest to the diagonal) appear in the support of both the Rank scores and Nash. An interesting observation is that less-trained agents sometimes reappear in the support of the distributions as training progresses; this behavior may even occur multiple times for a particular agent.
We consider also the quantitative similarity of Rank and Nash in this domain. Figure 10(c) illustrates that differences do exist, in the sense that certain agents are ranked higher via one method than the other. More fundamentally, however, we note that a relationship exists between Rank and Nash in the sense that they share a common rooting in the concept of best response: by definition, each player’s strategy in a Nash equilibrium is a best response to the other players’ strategies; in addition, Rank corresponds to the MCC solution concept, which is itself derived from the sink strongly-connected components of the game’s response graph. Despite the similarities, Rank is a more refined solution concept than Nash in the sense that it is rooted both in dynamical systems and in a best-response approach, which not only yields rankings, but also the associated dynamics graph (Fig. 9(a)) that gives insights into the long-term evolutionary strengths of agents. Beyond this, the critical advantage of Rank is its tractability for general-sum games (per Property 3.3.2), as well as its lack of underlying equilibrium selection issues; in combination, these features yield a powerful empirical methodology with little room for user confusion or interpretability issues.
This analysis reveals not only the benefits of using Rank to evaluate agents in a particular domain, but also an avenue of future work: embedding the evaluation methodology into the training pipeline of agents involved in large and general games.
3.4.3 MuJoCo Soccer

We consider here a dataset consisting of complex agent interactions in the continuous-action domain of MuJoCo soccer [5]. Specifically, this domain involves a multi-agent soccer physics-simulator environment with teams of 2 vs. 2 agents in the MuJoCo physics engine [