1 Introduction
Reinforcement learning (RL) from self-play has drawn tremendous attention over the past few years. Empirical successes have been observed in several challenging tasks, including Go [35, 37, 36], simulated hide-and-seek [4], simulated sumo wrestling [5], Capture the Flag [19], Dota 2 [6], StarCraft II [42], and poker [9], to name a few. During RL from self-play, the learner collects training data by competing with an opponent selected from its past self or an agent population. Self-play presumably creates an auto-curriculum for the agents to learn at their own pace. At each iteration, the learner always faces an opponent comparable in strength to itself, allowing continuous improvement.
The way the opponents are selected often follows human-designed heuristic rules in prior works. For example, AlphaGo [35] always competes with the latest agent, while the later generations AlphaGo Zero [37] and AlphaZero [36] generate self-play data with the maintained best historical agent. In specific tasks, such as OpenAI's sumo wrestling environment, competing against a randomly chosen historical agent leads to the emergence of more diverse behaviors [5] and more stable training than against the latest agent [1]. In population-based training [19, 25] and AlphaStar [42], an elite or random agent is picked from the agent population as the opponent.
Unfortunately, these rules may be inefficient and sometimes ineffective in practice, and they do not necessarily enjoy convergence guarantees to the "average-case optimal" solution even in tabular matrix games. In fact, in the simple Matching Pennies game, self-play with the latest agent fails to converge and falls into an oscillating behavior, as shown in Sec. 5.
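To make the oscillation concrete, here is a minimal numerical sketch (ours, not the paper's code) of naive self-play with the latest agent on Matching Pennies, with exact gradients on the "Heads" probabilities p and q; the step size and starting point are arbitrary choices:

```python
import numpy as np

# Matching Pennies: f(p, q) = 4pq - 2p - 2q + 1 is Player 2's expected payoff
# when the players put probabilities p, q on "Heads". The unique equilibrium
# is p = q = 0.5. Naive self-play = simultaneous descent (p) / ascent (q) on f.
p, q, eta = 0.6, 0.5, 0.02            # arbitrary start near the equilibrium
dist0 = np.hypot(p - 0.5, q - 0.5)
for _ in range(100):
    # simultaneous gradient descent-ascent against the latest opponent
    p, q = p - eta * (4 * q - 2), q + eta * (4 * p - 2)
dist = np.hypot(p - 0.5, q - 0.5)

# Each step multiplies the distance to the equilibrium by sqrt(1 + 16*eta^2),
# so the iterates spiral away from the equilibrium instead of converging.
assert dist > dist0
```

With these settings the distance grows from 0.10 to about 0.14 after 100 steps; projected or clipped variants orbit the equilibrium rather than converge, matching the oscillation discussed above.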
In this paper, we want to develop an algorithm that adopts a principled opponent-selection rule to alleviate some of the issues mentioned above. This requires first clarifying what the solution of self-play RL should be. From the game-theoretic perspective, Nash equilibrium is a fundamental solution concept that characterizes the desired "average-case optimal" strategies (policies). When each player assumes the other players also play their equilibrium strategies, no one in the game can gain more by unilaterally deviating to another strategy. Nash, in his seminal work [28], established the existence of mixed-strategy Nash equilibria in any finite game. Thus solving for a mixed-strategy Nash equilibrium is a reasonable goal of self-play RL.
We consider the particular case of two-player zero-sum games, a reasonable model for the competitive environments studied in the self-play RL literature. In this case, the Nash equilibrium is the same as the (global) saddle point and as the solution of the minimax program $\min_{x \in X} \max_{y \in Y} f(x, y)$. We denote $x, y$ as the strategy profiles (in RL language, policies) and $f(x, y)$ as the loss for $x$ or the utility/reward for $y$. A saddle point $(x^*, y^*)$, where $X, Y$ are the sets of all possible mixed strategies (stochastic policies) of the two players, satisfies the following key property
$$f(x^*, y) \le f(x^*, y^*) \le f(x, y^*), \quad \forall x \in X, \ \forall y \in Y. \tag{1}$$
Connections to the saddle point problem and game theory inspire us to borrow ideas from the abundant literature on finding saddle points in optimization [2, 22, 21, 29] and on finding equilibria in game theory [44, 8, 38]. One particular class of methods, the perturbation-based subgradient methods for finding saddles [22, 21], is especially appealing. This class of methods builds directly upon the inequality properties in Eq. 1 and has several advantages: (1) Unlike some algorithms that require knowledge of the game dynamics [35, 37, 30], it requires only subgradients; thus, it is easy to adapt to policy optimization with estimated policy gradients. (2) It is guaranteed to converge in its last iterate instead of an average iterate, hence alleviating the need to compute any historical averages as in [8, 38, 44], which can get complicated when neural nets are involved [16]. (3) Most importantly, it prescribes a simple, principled way to adversarially choose opponents, which can be naturally implemented with a concurrently-trained agent population.

To summarize, we adapt the perturbation-based methods of classical saddle point optimization to the model-free self-play RL regime. This results in a novel population-based policy gradient method for competitive self-play RL described in Sec. 3. Analogous to the standard model-free RL setting, we assume only "naive" players [20] for whom the game dynamics are hidden and only the rewards for their own actions are revealed. This enables broader applicability than many existing algorithms [35, 37, 30] to problems with mismatched or unknown game dynamics, such as many real-world or simulated robotic problems. In Sec. 4, we provide an approximate convergence theorem of the proposed algorithm for convex-concave games as a sanity check. Sec. 5 shows extensive experimental results favoring our algorithm's effectiveness on several games, including matrix games, a game of grid-world soccer, a board game, and a challenging simulated robot sumo game. Our method demonstrates better per-agent sample efficiency than baseline methods with alternative opponent-selection rules. Our trained agents can also outperform the agents trained by other methods on average.
2 Related Work
Reinforcement learning trains a single agent to maximize the expected return through interacting with an environment [39]. Multi-agent reinforcement learning (MARL), of which the two-agent setting is a special case, concerns multiple agents taking actions in the same environment [24]. Self-play is a training paradigm to generate data for learning in MARL and has led to great successes, achieving superhuman performance in many domains [40, 35, 9]. Applying RL algorithms naively as independent learners in MARL sometimes produces strong agents [40], but often does not converge to the equilibrium. People have studied ways to extend RL algorithms to MARL, e.g., minimax-Q [24], Nash-Q [18], WoLF-PG [7], etc. However, most of these methods are designed for tabular RL only and are therefore not readily applicable to continuous state-action spaces and complex policy functions, where gradient-based policy optimization methods are preferred. Very recently, there has been some work on the non-asymptotic regret analysis of tabular self-play RL [3]. While our work is rooted in practical concerns, these works enrich the theoretical understanding and complement ours.
There are algorithms developed from the game theory and online learning perspective [23, 30, 10], notably tree search, Fictitious self-play [8], regret minimization [20, 44], and mirror descent [26, 31]. Tree search such as minimax and alpha-beta pruning is particularly effective in small-state games. Monte Carlo Tree Search (MCTS) is also effective in Go [35]. However, tree search requires learners to know the exact game dynamics. The latter methods typically require maintaining some historical quantities. In Fictitious play, the learner best responds to a historical average opponent, and the average strategy converges. Similarly, the total historical regrets in all (information) states are maintained in (counterfactual) regret minimization [44]. The extra-gradient rule in [26] is a special case of the adversarial perturbation we rely on in this paper. Most of those algorithms are designed only for discrete state-action environments. Special care has to be taken with neural-net function approximators [16]. On the contrary, our method enjoys last-iterate convergence under proper convex-concave assumptions, does not require the complicated computation of averaging neural nets, and is readily applicable to continuous environments through policy optimization.
In two-player zero-sum games, the Nash equilibrium coincides with the saddle point. This enables the techniques developed for finding saddle points. While some saddle-point methods also rely on time averages [29], a class of perturbation-based gradient methods is known to converge under mild convex-concave assumptions for deterministic functions [21, 22, 14]. We develop an efficient sampling version of them for stochastic RL objectives, which leads to a more principled and effective way of choosing opponents in self-play. Our adversarial opponent-selection rule bears a resemblance to [12]. However, the motivation of our work (and hence our algorithm) is to improve self-play RL, while [12] aims at attacking deep self-play RL policies. Though the algorithm presented here builds upon policy gradient, the same framework may be extended to other RL algorithms such as MCTS, due to a recent interpretation of MCTS as policy optimization [13]. Finally, our way of leveraging Eq. 1 in a population may potentially work beyond gradient-based RL, e.g., in training generative adversarial networks similarly to [14], because of the same minimax formulation.

3 Method
Classical game theory defines a two-player zero-sum game as a tuple $(X, Y, f)$, where $X, Y$ are the sets of possible strategies of Players 1 and 2, respectively, and $f : X \times Y \to \mathbb{R}$ is a mapping from a pair of strategies to a real-valued utility/reward for Player 2. The game is zero-sum (fully competitive), and Player 1's reward is $-f$.
We consider mixed strategies (corresponding to stochastic policies in RL). In the discrete case, $X$ and $Y$ could be all probability distributions over the sets of actions. When parameterized function approximators are used, $X, Y$ can be the spaces of all policy parameters.

Multi-agent RL formulates the problem as a Stochastic Game [34], an extension to Markov Decision Processes (MDPs). Denote $a_t$ as the action of Player 1 and $b_t$ as the action of Player 2 at time $t$, and let $T$ be the time limit of the game; then the stochastic payoff writes as

$$f(x, y) = \mathbb{E}\left[ \sum_{t=1}^{T} \gamma^t\, r(s_t, a_t, b_t) \right]. \tag{2}$$
The state sequence follows a transition dynamic $s_{t+1} \sim P(\cdot \mid s_t, a_t, b_t)$. Actions are sampled according to the stochastic policies $a_t \sim \pi_x(\cdot \mid s_t)$ and $b_t \sim \pi_y(\cdot \mid s_t)$. And $r(s_t, a_t, b_t)$ is the reward (payoff) for Player 2 at time $t$, determined jointly by the state and actions. We use the terms 'agent' and 'player' interchangeably. In deep RL, $x$ and $y$ are the policy neural net parameters. In some cases [35], we can enforce $x = y$ by sharing parameters if the game is impartial or we do not want to distinguish them. The discounting factor $\gamma$ weights short- and long-term rewards and is optional. Note that when one agent is fixed (taking a fixed $y$ as an example), the problem the other agent $x$ faces reduces to an MDP, if we define a new state transition dynamic $P'(s' \mid s, a) = \mathbb{E}_{b \sim \pi_y}\, P(s' \mid s, a, b)$ and a new reward $r'(s, a) = \mathbb{E}_{b \sim \pi_y}\, r(s, a, b)$.
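This reduction can be illustrated with a small tabular sketch (a hypothetical example of ours; the array names are not from the paper): marginalizing the fixed opponent policy out of the joint dynamics yields a standard single-agent MDP for the remaining player.

```python
import numpy as np

# Hypothetical tabular game: P[s, a, b, s'] is the joint transition and
# r[s, a, b] the reward for Player 2. Fixing the opponent's stochastic policy
# pi_y[s, b] induces a single-agent MDP:
#   P'(s'|s, a) = sum_b pi_y(b|s) P(s'|s, a, b)
#   r'(s, a)    = sum_b pi_y(b|s) r(s, a, b)
rng = np.random.default_rng(0)
S, A = 3, 2
P = rng.random((S, A, A, S))
P /= P.sum(axis=-1, keepdims=True)            # normalize into a valid kernel
r = rng.random((S, A, A))
pi_y = rng.random((S, A))
pi_y /= pi_y.sum(axis=-1, keepdims=True)

P_new = np.einsum('sb,sabt->sat', pi_y, P)    # marginalize the opponent action
r_new = np.einsum('sb,sab->sa', pi_y, r)

assert np.allclose(P_new.sum(axis=-1), 1.0)   # still a valid transition kernel
```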
The naive algorithm provably works in strictly convex-concave games (where $f$ is strictly convex in $x$ and strictly concave in $y$) under the assumptions in [2]. However, in general, it does not enjoy last-iterate convergence to the Nash equilibrium. Even for simple games such as Matching Pennies and Rock Paper Scissors, as we shall see in our experiments, the naive algorithm generates cyclic sequences of $(x_t, y_t)$ that orbit around the equilibrium. This motivates us to study the perturbation-based method that converges under weaker assumptions.
Recall that the Nash equilibrium has to satisfy the saddle constraints in Eq. 1: $f(x^*, y) \le f(x^*, y^*) \le f(x, y^*)$. The perturbation-based methods build upon this property [29, 21, 22] and directly optimize for a solution that meets the constraints. They find perturbed points $\hat{x}$ of $x$ and $\hat{y}$ of $y$, and use the gradients at the perturbed pairs $(x, \hat{y})$ and $(\hat{x}, y)$ to optimize $x$ and $y$. Under some regularity assumptions, the gradient direction from a single perturbed point is adequate for proving convergence [29] for (not strictly) convex-concave functions. These methods can easily be extended to accommodate gradient-based policy optimization and the stochastic RL objective in Eq. 4.
We propose to find the perturbations from an agent population, resulting in the algorithm outlined in Alg. 1. The algorithm trains $n$ pairs of agents $(x_i, y_i)$ simultaneously. The pairwise competitions are run as the evaluation step (Alg. 1 Line 3), at the cost of some rollout trajectories. To save sample complexity, we may use these rollouts to do one policy update as well. Then a simple adversarial rule (Eq. 3) is adopted in Alg. 1 Line 6 to choose the opponents adaptively. The intuition is that $\hat{y}_i$ and $\hat{x}_i$ are the most challenging opponents in the population that the current $x_i$ and $y_i$ are facing.
$$\hat{y}_i = \operatorname*{arg\,max}_{y \in Y_i^{cand}} f(x_i, y), \qquad \hat{x}_i = \operatorname*{arg\,min}_{x \in X_i^{cand}} f(x, y_i). \tag{3}$$
The perturbations $\hat{x}_i$ and $\hat{y}_i$ always satisfy $f(\hat{x}_i, y_i) \le f(x_i, y_i) \le f(x_i, \hat{y}_i)$, since $x_i \in X_i^{cand}$ and $y_i \in Y_i^{cand}$. Then we run gradient descent on $x_i$ with the perturbed $\hat{y}_i$ as the opponent to minimize $f(x_i, \hat{y}_i)$, and run gradient ascent on $y_i$ with $\hat{x}_i$ as the opponent to maximize $f(\hat{x}_i, y_i)$. Intuitively, the duality gap between $\max_y f(x_i, y)$ and $\min_x f(x, y_i)$, approximated by $f(x_i, \hat{y}_i) - f(\hat{x}_i, y_i)$, is reduced, driving $(x_i, y_i)$ toward the saddle point (equilibrium).
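The duality-gap intuition can be checked directly on a matrix game. The following toy sketch (with helper names of our own choosing, assuming the Matching Pennies payoff) computes the gap max_y' f(x, y') - min_x' f(x', y), which is zero exactly at the saddle point:

```python
import numpy as np

A = np.array([[1., -1.], [-1., 1.]])  # Matching Pennies payoff for Player 2

def f(p, q):
    # bilinear payoff on mixed strategies (p, q = probability of "Heads")
    x = np.array([p, 1 - p])
    y = np.array([q, 1 - q])
    return x @ A @ y

def duality_gap(p, q, grid=np.linspace(0, 1, 101)):
    # gap(x, y) = max_y' f(x, y') - min_x' f(x', y); zero iff (x, y) is a saddle
    return max(f(p, qq) for qq in grid) - min(f(pp, q) for pp in grid)

assert abs(duality_gap(0.5, 0.5)) < 1e-9        # at the equilibrium: gap is 0
assert abs(duality_gap(1.0, 1.0) - 2.0) < 1e-9  # pure "Heads" vs "Heads": gap 2
```

Each update in Alg. 1 descends/ascends the two terms of this surrogate gap, which is exactly what shrinks it toward zero.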
We build the candidate opponent sets in Line 5 of Alg. 1 simply as the concurrently-trained agent population. Specifically, $X_i^{cand} = \{x_1, \ldots, x_n\}$ and $Y_i^{cand} = \{y_1, \ldots, y_n\}$. This is due to the following considerations. An alternative source of candidates is fixed known agents, such as a rule-based agent, which may be unavailable in practice. Another source is the extra-gradient methods [22, 26], where extra gradient steps are taken on the opponent before optimizing the learner. The extra-gradient method can be thought of as a local approximation to Eq. 3 with a neighborhood candidate opponent set, and is thus related to our method. However, this method could be less efficient, because the trajectory sample used in the extra-gradient estimation is wasted as it does not contribute to actually optimizing the learner. Yet another source could be the past agents. This choice is motivated by Fictitious play and ensures that the current learner always defeats a past self. But, as we shall see in the experiment section, self-play with a random past agent learns more slowly than our strategy. We expect all agents in the population in our algorithm to be strong and thus to provide stronger learning signals.
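Putting Eq. 3 and the population together, the following sketch (a simplified stand-in for Alg. 1, with our own function names, exact gradients, and Matching Pennies as the game) shows how each agent trains against its worst-case opponent drawn from the concurrently-trained population:

```python
import numpy as np

def f(p, q):
    # Matching Pennies payoff for the maximizing player
    return 4 * p * q - 2 * p - 2 * q + 1

def select_opponents(ps, qs):
    # Eq. 3: y_hat_i = argmax_j f(p_i, q_j), x_hat_i = argmin_j f(p_j, q_i)
    y_hat = [qs[np.argmax([f(p, q) for q in qs])] for p in ps]
    x_hat = [ps[np.argmin([f(p, q) for p in ps])] for q in qs]
    return np.array(y_hat), np.array(x_hat)

rng = np.random.default_rng(1)
n, eta = 4, 0.05
ps, qs = rng.random(n), rng.random(n)     # n concurrently-trained agent pairs
for _ in range(300):
    y_hat, x_hat = select_opponents(ps, qs)
    ps = np.clip(ps - eta * (4 * y_hat - 2), 0, 1)  # descent vs. perturbation
    qs = np.clip(qs + eta * (4 * x_hat - 2), 0, 1)  # ascent  vs. perturbation

# With adversarial perturbations the iterates stay bounded and empirically
# track the equilibrium 0.5 more closely than naive self-play does.
assert np.all((ps >= 0) & (ps <= 1)) and np.all((qs >= 0) & (qs <= 1))
```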
Finally, we use Monte Carlo estimates to compute the values and gradients of $f$. In the classical game theory setting, the game dynamics and payoffs are known, so it is possible to compute the exact values and gradients of $f$. But this is a rather restricted setting. In model-free MARL, we have to collect rollout trajectories to estimate both the function values, through policy evaluation, and the gradients, through the Policy Gradient Theorem [39]. After collecting $m$ independent trajectories $\{(s_t^{(j)}, a_t^{(j)}, b_t^{(j)}, r_t^{(j)})_{t \le T}\}_{j=1}^{m}$, we can estimate $f(x, y)$ by
$$\hat{f}(x, y) = \frac{1}{m} \sum_{j=1}^{m} \sum_{t=1}^{T} \gamma^t\, r_t^{(j)}. \tag{4}$$
And given estimates $\hat{Q}(s, a, b)$ to the state-action value function (assuming an MDP with $y$ as a fixed opponent of $x$), we can construct an estimator for $\nabla_x f$ (and similarly for $\nabla_y f$) by
$$\hat{\nabla}_x f = \frac{1}{m} \sum_{j=1}^{m} \sum_{t=1}^{T} \gamma^t\, \hat{Q}\big(s_t^{(j)}, a_t^{(j)}, b_t^{(j)}\big)\, \nabla_x \log \pi_x\big(a_t^{(j)} \mid s_t^{(j)}\big). \tag{5}$$
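For a one-step matrix game, Eqs. 4 and 5 reduce to plain Monte Carlo averaging and REINFORCE, which can be sanity-checked against the exact value and gradient. The setup below (fixed opponent strategy, per-sample reward used as the Q estimate, softmax logits) is a hypothetical example of ours, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1., -1.], [-1., 1.]])   # payoff matrix for Player 2
y = np.array([0.7, 0.3])               # fixed opponent mixed strategy

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

theta = np.zeros(2)                    # our logits; x = softmax(theta)
x = softmax(theta)
m = 20000
a = rng.choice(2, size=m, p=x)         # our sampled actions
b = rng.choice(2, size=m, p=y)         # opponent's sampled actions
r = A[a, b]                            # per-sample payoff (Player 2's reward)

f_hat = r.mean()                       # Eq. 4: Monte Carlo value estimate
# Eq. 5 with Q-hat = r: grad of log softmax(theta)[a] is onehot(a) - x
grad_log = np.eye(2)[a] - x
g_hat = (r[:, None] * grad_log).mean(axis=0)

f_true = x @ A @ y                     # exact value
g_true = (A @ y) * x - (x @ A @ y) * x # exact gradient w.r.t. the logits
assert abs(f_hat - f_true) < 0.05
assert np.allclose(g_hat, g_true, atol=0.05)
```

In the full stochastic-game setting, the per-sample reward is replaced by a learned critic's state-action value estimates, as in Eq. 5.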
4 Convergence Analysis
We establish an asymptotic convergence result in the Monte Carlo policy gradient setting in Thm. 2 for a variant of Alg. 1 under regularity assumptions. This algorithm variant poses assumptions on the candidate sets $X^{cand}, Y^{cand}$ (see the appendix) and uses vanilla SGD as the policy optimizer. We add a stopping criterion after Line 6 with an accuracy parameter $\epsilon$. The full proof can be found in the supplementary. Since the algorithm is symmetric between agents in the population, we drop the subscript $i$ for clarity.
Assumption 1.
$X, Y$ are compact sets. As a consequence, there exists $D > 0$ s.t. $\|x\|_2 \le D$ for all $x \in X$ and $\|y\|_2 \le D$ for all $y \in Y$. $X^{cand}, Y^{cand}$ are compact subsets of $X$ and $Y$. Further, assume $f$ is a bounded convex-concave function.
Theorem 1 (Convergence with exact gradients [21]).
The above case with exact subgradients is easy, since both the function values and the subgradients are deterministic. In the RL setting, we construct estimates of $f$ (Eq. 4) and its gradients (Eq. 5) from samples. The intuition is that, when the sample size is large enough, we can bound the deviation between the true values and the estimates by concentration inequalities; then a proof outline similar to [21] also goes through.
Thm. 2 requires an extra assumption on the boundedness of the value estimates and gradients. By showing that the policy gradient estimates are approximate sub/super-gradients of $f$, we are able to prove that the output of Alg. 1 is an approximate Nash equilibrium with high probability.
Assumption 2.
The Q-value estimate $\hat{Q}$ is unbiased and bounded, and the policy has bounded score-function gradients $\nabla \log \pi$.
Theorem 2 (Convergence with policy gradients).
Discussion. The theorems require $f$ to be convex in $x$ and concave in $y$, but not strictly. This is a weaker assumption than Arrow-Hurwicz-Uzawa's [2]. The purpose of this simple analysis is mainly a sanity check and an assurance of the correctness of our method. It applies to the setting in Sec. 5.1 but not beyond, as the assumptions do not necessarily hold for neural networks. The sample size is chosen loosely, as we are not aiming at a sharp finite-sample complexity or regret analysis. In practice, we can find empirically suitable sample sizes and learning rates, and adopt a modern RL algorithm with an advanced optimizer (e.g., PPO [33] with RmsProp [17]) in place of the SGD updates.

5 Experiments
We empirically evaluate our algorithm on several games with distinct characteristics. The implementation is based on PyTorch. Code, details, and demos are in the supplementary material.
Compared methods.
In the matrix games, we compare to the naive mirror descent method, which is essentially self-play with the latest agent. In the rest of the environments, we compare the results of the following methods:


Self-play with the latest agent (Arrow-Hurwicz-Uzawa). The learner always competes with the most recent agent. This is essentially the Arrow-Hurwicz-Uzawa method [2], i.e., naive mirror/alternating descent.

Self-play with a random past agent (Fictitious play). The learner competes against a randomly sampled historical opponent. This is the scheme in OpenAI sumo [5, 1]. It is similar to Fictitious play [8], since sampling a past agent uniformly at random is equivalent to playing against the historical average. However, Fictitious play only guarantees convergence of the average-iterate agent, not the last-iterate agent.

Ours($n$). This is our algorithm with a population of $n$ pairs of agents trained simultaneously, with each other as candidate opponents. The implementation can be distributed.
Evaluation protocols.
We mainly measure the strength of agents by Elo scores [11]. Pairwise competition results are gathered from a large tournament among all the checkpoint agents of all methods after training. Each competition has multiple matches to account for randomness. The Elo scores are computed by logistic regression, as Elo assumes the logistic relationship $P(\text{A beats B}) = 1 / \big(1 + 10^{(R_B - R_A)/400}\big)$.
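The logistic relationship behind the Elo computation can be sketched as follows (the standard Elo expected-score formula, not code from the paper):

```python
# Elo's logistic win-probability model; r_a, r_b are ratings in Elo points.
def expected_score(r_a, r_b):
    return 1.0 / (1.0 + 10 ** (-(r_a - r_b) / 400.0))

assert expected_score(0, 0) == 0.5                 # equal ratings: 50%
assert abs(expected_score(100, 0) - 0.64) < 0.005  # +100 Elo: ~64% win-rate
```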
A 100-point Elo difference corresponds to roughly a 64% win-rate. The initial agent's Elo is calibrated to 0. Another way to measure strength is to compute the average rewards (win-rates) against other agents. We also report average rewards in the supplementary.

5.1 Matrix games
We verified the last-iterate convergence to the Nash equilibrium in several classical two-player zero-sum matrix games. In comparison, vanilla mirror descent/ascent is known to produce oscillating behaviors [26]. Payoff matrices (for both players, separated by commas), phase portraits, and error curves are shown in Tabs. 1-4 and Fig. 4. Our observations are listed beside the figures.
We studied two settings: (1) Ours (Exact Gradient), the full-information setting, where the players know the payoff matrix and compute exact gradients on the action probabilities; (2) Ours (Policy Gradient), the reinforcement learning or bandit setting, where each player only receives the reward of its own action. The action probabilities were modeled by a probability vector on the simplex. We estimated the gradient w.r.t. the probabilities with the REINFORCE estimator [43], and applied constant-learning-rate SGD with proximal projection onto the simplex. We trained the agents jointly for Alg. 1 and separately for the naive mirror descent under the same initialization.

Game payoff matrix | Phase portraits and error curves


[Figures penny.pdf, penny2.pdf, rps.pdf, div.pdf: phase portraits and error curves for each game.] In the last game, any interpolation between the two indicated column strategies is an equilibrium column strategy; depending on the initialization, agents in our method converge to different equilibria.
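The proximal projection step used in the policy-gradient setting can be sketched with the standard sort-based Euclidean projection onto the probability simplex (our helper, not the paper's code):

```python
import numpy as np

# Euclidean projection onto the probability simplex, applied after each
# unconstrained SGD update on the action probabilities.
def project_simplex(v):
    u = np.sort(v)[::-1]                  # sort descending
    cssv = np.cumsum(u) - 1.0
    # largest index where the sorted coordinate is still above the threshold
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > cssv)[0][-1]
    theta = cssv[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

p = project_simplex(np.array([0.5, 0.8]))  # e.g., after an SGD step
assert np.allclose(p, [0.35, 0.65]) and abs(p.sum() - 1.0) < 1e-12
```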
5.2 Gridworld soccer game
We conducted experiments in a grid-world soccer game. Similar games were adopted in [15, 24]. Two players compete in a grid world, starting from random positions. The action space consists of 5 discrete moves. Once a player scores a goal, it gets a positive reward of 1.0, and the game ends. Up to 50 timesteps are allowed; the game ends with a draw if time runs out. The game has imperfect information, as the two players move simultaneously.
The policy and value functions were parameterized by simple one-layer networks, consisting of a one-hot encoding layer and a linear layer that outputs the action logits and values. The logits are transformed into probabilities via a softmax. We used Advantage Actor-Critic (A2C) [27] with Generalized Advantage Estimation (GAE) [32] and RmsProp [17] as the base RL algorithm. The hyperparameters for Alg. 1 (learning rate 0.1, sample size 32, 10 inner updates) and the others are listed in the supplementary. We kept track of the per-agent number of trajectories (episodes) each algorithm uses for a fair comparison. All methods were run multiple times to calculate confidence intervals.
In Fig. 7, Ours($n$) all perform better than the others, achieving higher Elo scores after experiencing the same number of per-agent episodes. The other methods fail to beat the rule-based agent after 32000 episodes. Competing with a random past agent learns the slowest, suggesting that, though it may stabilize training and lead to diverse behaviors [5], the learning efficiency is not as high, because a large portion of samples is devoted to weak opponents. Within our methods, the performance increases with a larger $n$, suggesting that a larger population may help find better perturbations.
5.3 Gomoku board game
We investigated the effectiveness in the Gomoku game, also known as Renju or Five-in-a-Row. In our variant, two players place black or white stones on a 9-by-9 board in turns. The player who first gets an unbroken row of five stones horizontally, vertically, or diagonally wins (reward 1). The game is a draw (reward 0) when no valid move remains. The game is sequential and has perfect information.
This experiment involved much more complex neural networks than before. We adopted a 4-layer convolutional ReLU network (kernel sizes 5, 5, 3, 1 and channel counts 16, 32, 64, 1; see the supplementary) for both the policy and value networks. Gomoku is hard to train from scratch with pure model-free RL without explicit tree search. Hence, we pretrained the policy nets on expert data collected from renjuoffline.com. We downloaded roughly 130 thousand games and applied behavior cloning. The pretrained networks were able to predict expert moves reasonably well and achieve an average score of 0.93 against a random-action player. We adopted A2C [27] with GAE [32] and RmsProp with the learning rate schedule given in the supplementary. Up to 40 iterations of Alg. 1 were run. The other hyperparameters are the same as those in the soccer game.
In Fig. 7, all methods are able to improve upon the behavior-cloned policies significantly. Ours($n$) demonstrates higher sample efficiency by achieving higher Elo ratings than the alternatives given the same amount of per-agent experience. This again suggests that the opponents are chosen more wisely, resulting in better policy improvements. Lastly, the more complex policy and value functions (multi-layer CNNs) do not seem to undermine the advantage of our approach.
5.4 RoboSumo Ants
Our last experiment is based on the RoboSumo simulation environment in [1, 5], where two Ants wrestle in an arena. This setting is particularly relevant to practical robotics research, as we believe success in this simulation could transfer to the real world. The Ants move simultaneously, trying to force the opponent out of the arena or onto the floor. The physics simulator is MuJoCo [41]. The observation and action spaces are continuous. This game is challenging since it involves a complex continuous-control problem with sparse rewards. Following [1, 5], we utilized PPO [33] with GAE [32] as the base RL algorithm, and used a 2-layer fully connected network of width 64 for function approximation. The hyperparameters are listed in the supplementary. In [1], a random past opponent is sampled in self-play, corresponding to the "Self-play w/ random past" baseline here. The agents are initialized by imitating the pretrained agents of [1]. We consider $n = 4$ and $n = 8$ in our method. From Fig. 7, we observe again that Ours($n$) outperforms the baseline methods by a statistical margin, and that our method benefits from a larger population size.
6 Conclusion
We propose a new algorithmic framework for competitive self-play policy optimization inspired by a perturbation-based subgradient method for saddle points. Our algorithm provably converges in convex-concave games and achieves better per-agent sample efficiency in several experiments. In the future, we hope to study larger population sizes (given sufficient computing power) and the possibilities of model-based and off-policy self-play RL under our framework.
References
 [1] (2018) Continuous adaptation via metalearning in nonstationary and competitive environments. In International Conference on Learning Representations, Cited by: §1, item 3, §5.4.

[2] (1958) Studies in linear and nonlinear programming. Vol. 2, Stanford University Press. Cited by: §1, §3, §4, item 1.
[3] (2020) Provable self-play algorithms for competitive reinforcement learning. arXiv preprint arXiv:2002.04017. Cited by: §2.
 [4] (2020) Emergent tool use from multiagent autocurricula. In International Conference on Learning Representations, Cited by: §1.
 [5] (2017) Emergent complexity via multiagent competition. ICLR. Cited by: §1, §1, item 3, §5.2, §5.4.
 [6] (2019) Dota 2 with large scale deep reinforcement learning. arXiv preprint arXiv:1912.06680. Cited by: §1.
 [7] (2002) Multiagent learning using a variable learning rate. Artificial Intelligence 136 (2), pp. 215–250. Cited by: §2.
 [8] (1951) Iterative solution of games by fictitious play. Activity analysis of production and allocation 13 (1), pp. 374–376. Cited by: §1, §2, item 3.
[9] (2019) Superhuman AI for multiplayer poker. Science 365 (6456), pp. 885–890. Cited by: §1, §2.

[10] (2019) Competing against Nash equilibria in adversarially changing zero-sum games. In International Conference on Machine Learning, pp. 921–930. Cited by: §2.
[11] (1978) The rating of chessplayers, past and present. Arco Pub. Cited by: §5.
 [12] (2019) Adversarial policies: attacking deep reinforcement learning. In International Conference on Learning Representations, Cited by: §2.
[13] (2020) Monte-Carlo tree search as regularized policy optimization. ICML. Cited by: §2.
 [14] (2018) Kbeam minimax: efficient optimization for deep adversarial learning. ICML. Cited by: §2.
 [15] (2016) Opponent modeling in deep reinforcement learning. ICML. Cited by: §5.2.
[16] (2016) Deep reinforcement learning from self-play in imperfect-information games. arXiv:1603.01121. Cited by: §1, §2.
[17] Neural networks for machine learning, Lecture 6a: overview of mini-batch gradient descent. Cited by: §4, §5.2.
[18] (2003) Nash Q-learning for general-sum stochastic games. JMLR 4 (Nov), pp. 1039–1069. Cited by: §2.
 [19] (2019) Humanlevel performance in 3d multiplayer games with populationbased reinforcement learning. Science 364 (6443), pp. 859–865. Cited by: §1, §1.
[20] (2001) On no-regret learning, fictitious play, and Nash equilibrium. ICML. Cited by: §1, §2.
 [21] (1994) Perturbation methods for saddle point computation. Cited by: §B.1, §1, §2, §3, §4, Theorem 1, Theorem 1.
 [22] (1976) The extragradient method for finding saddle points and other problems. Matecon 12, pp. 747–756. Cited by: §1, §2, §3, §3.
 [23] (2017) A unified gametheoretic approach to multiagent reinforcement learning. In Advances in neural information processing systems, pp. 4190–4203. Cited by: §2.
 [24] (1994) Markov games as a framework for multiagent reinforcement learning. In Machine Learning, pp. 157–163. Cited by: §2, §5.2.
 [25] (2019) Emergent coordination through competition. ICLR. Cited by: §1.
 [26] (2019) Optimistic mirror descent in saddlepoint problems: going the extra (gradient) mile. ICLR. Cited by: §2, §3, §5.1.
 [27] (2016) Asynchronous methods for deep reinforcement learning. In ICML, pp. 1928–1937. Cited by: §5.2, §5.3.
 [28] (1951) Noncooperative games. Annals of mathematics, pp. 286–295. Cited by: §1.
 [29] (2009) Subgradient methods for saddlepoint problems. Journal of optimization theory and applications. Cited by: §1, §2, §3.
 [30] (2012) Game theory and multiagent reinforcement learning. Reinforcement Learning, pp. 441. Cited by: §1, §1, §2.
 [31] (2013) Optimization, learning, and games with predictable sequences. In Advances in Neural Information Processing Systems, pp. 3066–3074. Cited by: §2.
 [32] (2016) Highdimensional continuous control using generalized advantage estimation. ICLR. Cited by: §5.2, §5.3, §5.4.
 [33] (2017) Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347. Cited by: §4, §5.4.
 [34] (1953) Stochastic games. PNAS 39 (10), pp. 1095–1100. Cited by: §3.
 [35] (2016) Mastering the game of go with deep neural networks and tree search. Nature 529 (7587), pp. 484. Cited by: §1, §1, §1, §1, §2, §2, §3.
 [36] (2018) A general reinforcement learning algorithm that masters chess, shogi, and go through selfplay. Science 362 (6419), pp. 1140–1144. Cited by: §1, §1, item 2.
 [37] (2017) Mastering the game of go without human knowledge. Nature 550 (7676), pp. 354. Cited by: §1, §1, §1, §1, item 2.
[38] (2000) Nash convergence of gradient dynamics in general-sum games. UAI. Cited by: §1.
 [39] (2018) Reinforcement learning: an introduction. MIT press. Cited by: §2, §3.
[40] (1995) Temporal difference learning and TD-Gammon. Communications of the ACM 38 (3), pp. 58–68. Cited by: §2.
 [41] (2012) Mujoco: a physics engine for modelbased control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033. Cited by: §5.4.
 [42] (2019) Grandmaster level in starcraft ii using multiagent reinforcement learning. Nature 575 (7782), pp. 350–354. Cited by: §1, §1.
[43] (1992) Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning 8 (3–4), pp. 229–256. Cited by: §5.1.
 [44] (2008) Regret minimization in games with incomplete information. In Advances in neural information processing systems, pp. 1729–1736. Cited by: §1, §2.
Appendix A Experiment details
a.1 Illustrations of the games in the experiments
[Table: illustrations of the games (soccer_illus.pdf for grid-world soccer, gomoku_illus.pdf for Gomoku, sumo_ants_400.jpg for RoboSumo Ants) and their properties.]
a.2 Hyperparameters
The hyperparameters in different games are listed in Tab. 5.
Hyperparam \ Game | Soccer | Gomoku | RoboSumo
Num. of iterations | 50 | 40 | 50
Learning rate | 0.1 | 0→0.001 in first 20 steps, then 0.001 | 3e-5→0 linearly
Value func learning rate | (same as above) | (same as above) | 9e-5
Sample size | 32 | 32 | 500
Num. of inner updates | 10 | 10 | 10
Env. time limit | 50 | 41 per player | 100
Base RL algorithm | A2C | A2C | PPO, clip 0.2, minibatch 512, epochs 3
Optimizer | RmsProp | RmsProp | RmsProp
Max gradient norm | 1.0 | 1.0 | 0.1
GAE parameter | 0.95 | 0.95 | 0.98
Discounting factor | 0.97 | 0.97 | 0.995
Entropy bonus coef. | 0.01 | 0.01 | 0
Policy function | Sequential[ OneHot[5832], Linear[5832,5], Softmax, CategoricalDist ] | Sequential[ Conv[c16,k5,p2], ReLU, Conv[c32,k5,p2], ReLU, Conv[c64,k3,p1], ReLU, Conv[c1,k1], Spatial Softmax, CategoricalDist ] | Sequential[ Linear[120,64], TanH, Linear[64,64], TanH, Linear[64,8], TanH, GaussianDist ] (TanH ensures the mean of the Gaussian is between -1 and 1; the density is corrected.)
Value function | Sequential[ OneHot[5832], Linear[5832,1] ] | Shares 3 Conv layers with the policy, with an additional head: global average pooling and Linear[64,1] | Sequential[ Linear[120,64], TanH, Linear[64,64], TanH, Linear[64,1] ]
a.3 Additional results
Win-rates (or average rewards).
Here we report additional results in terms of average win-rates, or equivalently average rewards (related by the linear transform win-rate = (reward + 1)/2), in Tab. 6 and 7. Since we treat each pair as one agent, the values in the first table are averaged over the two agents of each pair. The one-sided win-rates are in the second table. Means and 95% confidence intervals are estimated from multiple runs. The exact numbers of runs are in the captions of Fig. 7 of the main paper. The message is the same as that suggested by the Elo scores: our method consistently produces stronger agents. We hope the win-rates give better intuition about the relative performance of the different methods.

[Tab. 6 and Tab. 7: win-rate tables for (a) Soccer, (b) Gomoku, and (c) RoboSumo.]
Training time.
Thanks to the ease of parallelization, the proposed algorithm enjoys good scalability. We can either distribute the agents into processes that run concurrently, or make the rollouts parallel. Our implementation took the latter approach. In the most time-consuming RoboSumo Ants experiment, with 30 Intel Xeon CPUs, the baseline methods took approximately 2.4h to train, while Ours (n=4) took 10.83h (roughly 4.5 times) and Ours (n=8) took 20.75h (roughly 8.6 times). Note that Ours (n) trains n agent pairs simultaneously. If we trained n agents with the baseline methods by repeating the experiment n times, the time would be roughly 2.4n hours, which is comparable to Ours (n).
Chance of selecting the agent itself as opponent.
One big difference between our method and the compared baselines is the ability to select opponents adversarially from the population. Consider the agent pair $(x_i, y_i)$. When training $x_i$, our method finds the strongest opponent (the one that incurs the largest loss on $x_i$) from the population, whereas the baselines always choose (possibly past versions of) $y_i$. Since the candidate set contains $y_i$, the "fallback" case is to use $y_i$ as the opponent in our method. We report the frequency with which $y_i$ is chosen as the opponent for $x_i$ (and $x_i$ for $y_i$, likewise). This gives a sense of how often our method falls back to the baseline behavior. From Tab. 8, we can observe that, as $n$ grows larger, the chance of falling back decreases. This is understandable, since a larger population means larger candidate sets and a larger chance of finding good perturbations.
[Tab. 8: frequency of selecting the agent itself as the opponent, for Ours($n$) with different population sizes, in Soccer and Gomoku.]
Appendix B Proofs
We adopt the following variant of Alg. 1 in our asymptotic convergence analysis. For clarity, we investigate the learning process of one agent pair in the population and drop the index $i$. $X^{cand}$ and $Y^{cand}$ are not set simply to the population, for the sake of the proof. Instead, we pose some assumptions on them. Setting them to the population as in the main text may approximately satisfy the assumptions.
b.1 Proof of Theorem 1
We restate the assumptions and the theorem here more clearly for reference.
Assumption B.1.
$X, Y$ are compact sets. As a consequence, there exists $D > 0$ s.t. $\|x\|_2 \le D$ and $\|y\|_2 \le D$ for all $x \in X$, $y \in Y$. Further, assume $f$ is a bounded convex-concave function.
Assumption B.2.
$X^{cand}, Y^{cand}$ are compact subsets of $X$ and $Y$. Assume that, for a sequence $\{(x_k, y_k)\}$, the vanishing of the perturbation gap $f(x_k, \hat{y}_k) - f(\hat{x}_k, y_k)$ for some $\hat{x}_k \in X^{cand}$, $\hat{y}_k \in Y^{cand}$ implies that the limit is a saddle point.
Theorem 1 (Convergence with exact gradients [21]).
Assump. B.1 is standard, and it holds if $f$ is based on a payoff table and $X, Y$ are probability simplices as in matrix games, or if $f$ is quadratic and $X, Y$ are sets of unit-norm vectors. Assump. B.2 is about the regularity of the candidate opponent sets. It holds if $X^{cand}, Y^{cand}$ are compact and the perturbation gap vanishes only at a saddle point. A trivial example would be $X^{cand} = X$ and $Y^{cand} = Y$. Another example would be proximal regions around the current iterates. In practice, Alg. 1 constructs the candidate sets from the population, which needs to be adequately large and diverse to satisfy Assump. B.2 approximately.
The proof is due to [21], which we paraphrase here.