Reinforcement learning (RL) from self-play has drawn tremendous attention over the past few years. Empirical successes have been observed in several challenging tasks, including Go [35, 37, 36], simulated hide-and-seek , simulated sumo wrestling , Capture the Flag , Dota 2 , StarCraft II , and poker , to name a few. During RL from self-play, the learner collects training data by competing with an opponent selected from its past selves or an agent population. Self-play presumably creates an auto-curriculum for the agents to learn at their own pace: at each iteration, the learner faces an opponent comparable in strength to itself, allowing continuous improvement.
The way the opponents are selected often follows human-designed heuristic rules in prior works. For example, AlphaGo  always competes with the latest agent, while the later generation AlphaGo Zero  and AlphaZero  generate self-play data with the maintained best historical agent. In specific tasks, such as OpenAI’s sumo wrestling environment, competing against a randomly chosen historical agent leads to the emergence of more diverse behaviors  and more stable training than against the latest agent . In population-based training [19, 25] and AlphaStar , an elite or random agent is picked from the agent population as the opponent.
Unfortunately, these rules may be inefficient and sometimes ineffective in practice, and they do not necessarily enjoy a convergence guarantee to the “average-case optimal” solution even in tabular matrix games. In fact, in the simple Matching Pennies game, self-play with the latest agent fails to converge and falls into oscillating behavior, as shown in Sec. 5.
In this paper, we develop an algorithm that adopts a principled opponent-selection rule to alleviate some of the issues mentioned above. This requires first clarifying what the solution of self-play RL should be. From the game-theoretic perspective, the Nash equilibrium is a fundamental solution concept that characterizes the desired “average-case optimal” strategies (policies). When each player assumes the other players also play their equilibrium strategies, no one in the game can gain more by unilaterally deviating to another strategy. Nash, in his seminal work , established the existence of a mixed-strategy Nash equilibrium in any finite game. Thus, solving for a mixed-strategy Nash equilibrium is a reasonable goal of self-play RL.
We consider the particular case of two-player zero-sum games, a reasonable model for the competitive environments studied in the self-play RL literature. In this case, the Nash equilibrium is the same as the (global) saddle point and as the solution of the minimax program $\min_x \max_y f(x, y)$. We denote $x \in X$ and $y \in Y$ as the strategy profiles (in RL language, policies) and $f(x, y)$ as the loss for $x$ or utility/reward for $y$. A saddle point $(x^*, y^*)$, where $X, Y$ are the sets of all possible mixed strategies (stochastic policies) of the two players, satisfies the following key property:
$$f(x^*, y) \le f(x^*, y^*) \le f(x, y^*), \quad \forall x \in X,\ \forall y \in Y. \qquad (1)$$
Connections to the saddle point problem and game theory inspire us to borrow ideas from the abundant literature on finding saddle points in optimization [2, 22, 21, 29] and on finding equilibria in game theory [44, 8, 38]. One particular class of methods, the perturbation-based subgradient methods for finding saddles [22, 21], is especially appealing. This class of methods directly builds upon the inequality properties in Eq. 1 and has several advantages: (1) Unlike some algorithms that require knowledge of the game dynamics [35, 37, 30], it requires only subgradients; thus, it is easy to adapt to policy optimization with estimated policy gradients. (2) It is guaranteed to converge in its last iterate instead of an average iterate, which alleviates the need to compute any historical averages as in [8, 38, 44]; such averaging can get complicated when neural nets are involved . (3) Most importantly, it prescribes a simple principled way to adversarially choose opponents, which can be naturally implemented with a concurrently-trained agent population.
To summarize, we adapt the perturbation-based methods of classical saddle-point optimization to the model-free self-play RL regime. This results in a novel population-based policy gradient method for competitive self-play RL, described in Sec. 3. Analogous to the standard model-free RL setting, we assume only “naive” players , for whom the game dynamics are hidden and only the rewards of their own actions are revealed. This enables broader applicability than many existing algorithms [35, 37, 30] to problems with mismatched or unknown game dynamics, such as many real-world or simulated robotic problems. In Sec. 4, we provide an approximate convergence theorem of the proposed algorithm for convex-concave games as a sanity check. Sec. 5 shows extensive experimental results demonstrating our algorithm's effectiveness on several games, including matrix games, a grid-world soccer game, a board game, and a challenging simulated robot sumo game. Our method demonstrates better per-agent sample efficiency than baseline methods with alternative opponent-selection rules, and our trained agents also outperform the agents trained by other methods on average.
2 Related Work
Reinforcement learning trains a single agent to maximize the expected return through interaction with an environment . Multiagent reinforcement learning (MARL), of which the two-agent setting is a special case, concerns multiple agents taking actions in the same environment . Self-play is a training paradigm that generates data for learning in MARL and has led to great successes, achieving superhuman performance in several domains [40, 35, 9]. Naively applying RL algorithms as independent learners in MARL sometimes produces strong agents , but often does not converge to the equilibrium. Several works have extended RL algorithms to MARL, e.g., minimax-Q , Nash-Q , and WoLF-PG . However, most of these methods are designed for tabular RL only and are therefore not readily applicable to continuous state and action spaces or complex policy functions, where gradient-based policy optimization methods are preferred. Very recently, there has been work on the non-asymptotic regret analysis of tabular self-play RL . While our work is rooted in practical concerns, these works enrich the theoretical understanding and complement ours.
There are also algorithms developed from the game theory and online learning perspectives [23, 30, 10], notably tree search, Fictitious self-play , regret minimization [20, 44], and mirror descent [26, 31]. Tree search, such as minimax search with alpha-beta pruning, is particularly effective in small-state games, and Monte Carlo Tree Search (MCTS) is effective in Go . However, tree search requires the learner to know the exact game dynamics. The latter methods typically require maintaining historical quantities. In Fictitious play, the learner best-responds to a historical average opponent, and the average strategy converges. Similarly, the total historical regrets in all (information) states are maintained in (counterfactual) regret minimization . The extragradient rule in  is a special case of the adversarial perturbation we rely on in this paper. Most of these algorithms are designed only for discrete state and action environments; special care has to be taken with neural net function approximators . In contrast, our method enjoys last-iterate convergence under proper convex-concave assumptions, does not require the complicated computation of averaging neural nets, and is readily applicable to continuous environments through policy optimization.
In two-player zero-sum games, the Nash equilibrium coincides with the saddle point, which enables the techniques developed for finding saddle points. While some saddle-point methods also rely on time averages , a class of perturbation-based gradient methods is known to converge under mild convex-concave assumptions for deterministic functions [21, 22, 14]. We develop an efficient sampling version of these methods for stochastic RL objectives, which leads to a more principled and effective way of choosing opponents in self-play. Our adversarial opponent-selection rule bears a resemblance to ; however, the motivation of our work (and hence our algorithm) is to improve self-play RL, whereas  aims at attacking deep self-play RL policies. Though the algorithm presented here builds upon policy gradients, the same framework may be extended to other RL algorithms such as MCTS, due to a recent interpretation of MCTS as policy optimization . Finally, our way of leveraging Eq. 1 in a population may potentially work beyond gradient-based RL, e.g., in training generative adversarial networks , because of the same minimax formulation.
Classical game theory defines a two-player zero-sum game as a tuple $(X, Y, f)$, where $X$ and $Y$ are the sets of possible strategies of Players 1 and 2 respectively, and $f: X \times Y \to \mathbb{R}$ is a mapping from a pair of strategies to a real-valued utility/reward for Player 2. The game is zero-sum (fully competitive), so Player 1's reward is $-f(x, y)$.
We consider mixed strategies (corresponding to stochastic policies in RL). In the discrete case, $X$ and $Y$ could be the sets of all probability distributions over the actions. When parameterized function approximators are used, $X$ and $Y$ can be the spaces of all policy parameters.
Multiagent RL formulates the problem as a Stochastic Game , an extension of Markov Decision Processes (MDPs). Denote $a_t$ as the action of Player 1 and $b_t$ as the action of Player 2 at time $t$, and let $T$ be the time limit of the game; then the stochastic payoff writes as
$$f(x, y) = \mathbb{E}\left[ \sum_{t=0}^{T} \gamma^t \, r(s_t, a_t, b_t) \right]. \qquad (2)$$
The state sequence follows a transition dynamic $s_{t+1} \sim P(\cdot \mid s_t, a_t, b_t)$. Actions are sampled according to the stochastic policies $a_t \sim x(\cdot \mid s_t)$ and $b_t \sim y(\cdot \mid s_t)$. And $r(s_t, a_t, b_t)$ is the reward (payoff) for Player 2 at time $t$, determined jointly by the state and actions. We use the terms 'agent' and 'player' interchangeably. In deep RL, $x$ and $y$ are the policy neural net parameters. In some cases, we can enforce $x = y$ by sharing parameters if the game is impartial or we do not want to distinguish the players. The discounting factor $\gamma$ weights short- and long-term rewards and is optional. Note that when one agent is fixed, taking $y$ as an example, the problem $x$ faces reduces to an MDP, if we define a new state transition dynamic $P'(s_{t+1} \mid s_t, a_t) = \mathbb{E}_{b_t \sim y}\, P(s_{t+1} \mid s_t, a_t, b_t)$ and a new reward $r'(s_t, a_t) = \mathbb{E}_{b_t \sim y}\, r(s_t, a_t, b_t)$.
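This reduction can be illustrated numerically. The following minimal sketch (our illustration, not the paper's code) uses a randomly generated, hypothetical stochastic game and marginalizes a fixed opponent policy into the transition dynamic and reward:

```python
import numpy as np

# Hypothetical randomly generated stochastic game: 3 states, 2 actions per player.
rng = np.random.default_rng(0)
nS, nA, nB = 3, 2, 2
P = rng.dirichlet(np.ones(nS), size=(nS, nA, nB))  # P[s, a, b] -> dist over s'
r = rng.standard_normal((nS, nA, nB))              # reward for Player 2
y = rng.dirichlet(np.ones(nB), size=nS)            # fixed opponent policy y(b|s)

# Marginalize the fixed opponent into the dynamics: Player 1 now faces an MDP.
P_mdp = np.einsum('sabt,sb->sat', P, y)  # P'(s'|s,a) = E_{b~y} P(s'|s,a,b)
r_mdp = np.einsum('sab,sb->sa', r, y)    # r'(s,a)   = E_{b~y} r(s,a,b)

assert np.allclose(P_mdp.sum(axis=-1), 1.0)  # each P'(.|s,a) is a distribution
```

Any single-agent RL algorithm can then be run on `(P_mdp, r_mdp)` as usual.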
The naive algorithm provably works in strictly convex-concave games (where $f$ is strictly convex in $x$ and strictly concave in $y$) under the assumptions in . However, in general, it does not enjoy last-iterate convergence to the Nash equilibrium. Even in simple games such as Matching Pennies and Rock Paper Scissors, as we shall see in our experiments, the naive algorithm generates cyclic sequences of $(x_t, y_t)$ that orbit around the equilibrium. This motivates us to study the perturbation-based method that converges under weaker assumptions.
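This failure mode is easy to reproduce numerically. The sketch below (our own illustration, not the paper's code) runs simultaneous projected gradient descent/ascent on Matching Pennies with payoff $f(p, q) = (2p - 1)(2q - 1)$ for Player 2, whose unique equilibrium is $p = q = 1/2$; the iterates spiral away from the equilibrium instead of approaching it:

```python
import numpy as np

# Matching Pennies: f(p, q) = (2p - 1)(2q - 1) is Player 2's payoff, where
# p and q are the players' probabilities of playing Heads. Equilibrium: (1/2, 1/2).
def radius(p, q):
    return float(np.hypot(p - 0.5, q - 0.5))  # distance to the equilibrium

p, q, lr = 0.6, 0.6, 0.05
r0 = radius(p, q)
for _ in range(300):
    gp, gq = 2 * (2 * q - 1), 2 * (2 * p - 1)  # df/dp and df/dq
    p = float(np.clip(p - lr * gp, 0, 1))      # Player 1: gradient descent
    q = float(np.clip(q + lr * gq, 0, 1))      # Player 2: gradient ascent
assert radius(p, q) > r0  # the iterates orbit outward, not toward (1/2, 1/2)
```

Each Euler step rotates the point around the equilibrium while slightly increasing its distance, which is exactly the oscillation observed in Sec. 5.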
Recall that the Nash equilibrium has to satisfy the saddle constraints of Eq. 1: $f(x^*, y) \le f(x^*, y^*) \le f(x, y^*)$. The perturbation-based methods build upon this property [29, 21, 22] and directly optimize for a solution that meets the constraints. They find perturbed points $\hat{y}$ of $y$ and $\hat{x}$ of $x$, and use the gradients at $(x, \hat{y})$ and $(\hat{x}, y)$ to optimize $x$ and $y$, respectively. Under some regularity assumptions, the gradient direction from a single perturbed point is adequate for proving convergence  for (not strictly) convex-concave functions. These methods can be easily extended to accommodate gradient-based policy optimization and the stochastic RL objective of Eq. 4.
We propose to find the perturbations from an agent population, resulting in the algorithm outlined in Alg. 1. The algorithm trains $n$ pairs of agents $(x_i, y_i)$ simultaneously. The pairwise competitions are run as the evaluation step (Alg. 1 Line 3), at the cost of evaluation trajectories. To save sample complexity, we may use these rollouts to do one policy update as well. Then a simple adversarial rule is adopted in Alg. 1 Line 6 to choose the opponents adaptively:
$$\hat{y}_i = \arg\max_{y \in \mathcal{Y}_i} f(x_i, y), \qquad \hat{x}_i = \arg\min_{x \in \mathcal{X}_i} f(x, y_i), \qquad (3)$$
where $\mathcal{X}_i, \mathcal{Y}_i$ are the candidate opponent sets. The intuition is that $\hat{x}_i$ and $\hat{y}_i$ are the most challenging opponents in the population that the current $x_i$ and $y_i$ are facing.
The perturbations $\hat{x}_i$ and $\hat{y}_i$ always satisfy $f(\hat{x}_i, y_i) \le f(x_i, \hat{y}_i)$, since $f(\hat{x}_i, y_i) \le f(x_i, y_i) \le f(x_i, \hat{y}_i)$ by construction. Then we run gradient descent on $x_i$ with the perturbed $\hat{y}_i$ as the opponent to minimize $f(x_i, \hat{y}_i)$, and run gradient ascent on $y_i$ with $\hat{x}_i$ as the opponent to maximize $f(\hat{x}_i, y_i)$. Intuitively, the duality gap, approximated by $f(x_i, \hat{y}_i) - f(\hat{x}_i, y_i)$, is reduced, leading to convergence to the saddle point (equilibrium).
We build the candidate opponent sets in Line 5 of Alg. 1 simply as the concurrently-trained $n$-agent population: $\mathcal{X}_i = \{x_1, \ldots, x_n\}$ and $\mathcal{Y}_i = \{y_1, \ldots, y_n\}$. This choice is due to the following considerations. An alternative source of candidates is fixed known agents, such as a rule-based agent, which may be unavailable in practice. Another source is the extragradient methods [22, 26], where extra gradient steps are taken on $y$ before optimizing $x$. The extragradient method can be thought of as a local approximation to Eq. 3 with a neighborhood candidate opponent set, and is thus related to our method. However, it could be less efficient, because the trajectory samples used in the extragradient estimation are wasted: they do not contribute to actually optimizing $x$. Yet another source could be past agents. That choice is motivated by Fictitious play and ensures that the current learner always defeats a past self. But, as we shall see in the experiment section, self-play with a random past agent learns more slowly than our strategy. We expect all agents in the population in our algorithm to be strong and thus to provide stronger learning signals.
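For concreteness, the population-based adversarial opponent selection can be sketched in a few lines for bilinear matrix games $f(x, y) = x^\top A y$ with exact gradients. This is a schematic of the approach, not the paper's implementation; the Rock Paper Scissors payoff `A` and the helper `project_simplex` are our own illustrative choices:

```python
import numpy as np

A = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])  # RPS payoff for Player 2
f = lambda x, y: float(x @ A @ y)

def project_simplex(v):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

n, lr = 4, 0.05
rng = np.random.default_rng(0)
X = rng.dirichlet(np.ones(3), n)  # population of minimizing agents x_1..x_n
Y = rng.dirichlet(np.ones(3), n)  # population of maximizing agents y_1..y_n
for _ in range(300):
    # Adversarial opponent selection (Eq. 3): for each agent, pick the
    # strongest opponent from the concurrently-trained population.
    y_hat = [Y[np.argmax([f(X[i], y) for y in Y])] for i in range(n)]
    x_hat = [X[np.argmin([f(x, Y[i]) for x in X])] for i in range(n)]
    # Projected gradient descent/ascent against the perturbed opponents.
    X = np.stack([project_simplex(X[i] - lr * (A @ y_hat[i])) for i in range(n)])
    Y = np.stack([project_simplex(Y[i] + lr * (A.T @ x_hat[i])) for i in range(n)])
```

Under the convex-concave conditions of Sec. 4, the surrogate duality gap $f(x_i, \hat{y}_i) - f(\hat{x}_i, y_i)$ should shrink toward zero as the population approaches the uniform equilibrium of Rock Paper Scissors.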
Finally, we use Monte Carlo estimates to compute the values and gradients of $f$. In the classical game theory setting, the game dynamics and payoffs are known, so it is possible to compute the exact values and gradients of $f$; but this is a rather restricted setting. In model-free MARL, we have to collect roll-out trajectories to estimate both the function values, through policy evaluation, and the gradients, through the Policy Gradient Theorem . After collecting $m$ independent trajectories, we can estimate $f(x, y)$ by
$$\hat{f}(x, y) = \frac{1}{m} \sum_{k=1}^{m} \sum_{t=0}^{T} \gamma^t \, r\big(s_t^{(k)}, a_t^{(k)}, b_t^{(k)}\big). \qquad (4)$$
And given estimates $\hat{Q}$ of the state-action value function (assuming an MDP with $y$ as a fixed opponent of $x$), we can construct an estimator for $\nabla_x f$ (and similarly for $\nabla_y f$) by
$$\hat{\nabla}_x f = \frac{1}{m} \sum_{k=1}^{m} \sum_{t=0}^{T} \hat{Q}\big(s_t^{(k)}, a_t^{(k)}\big) \, \nabla_x \log x\big(a_t^{(k)} \mid s_t^{(k)}\big). \qquad (5)$$
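As a sanity check of these estimators, consider the one-step (bandit) special case, where Eq. 4 reduces to the empirical mean payoff and Eq. 5 reduces to the score-function (REINFORCE) estimator. The payoff matrix, strategies, and sample size below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# One-step (bandit) special case of the estimators in Eqs. 4-5.
A = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])  # payoff for Player 2
x = np.array([0.2, 0.3, 0.5])  # Player 1's mixed strategy
y = np.array([0.4, 0.4, 0.2])  # Player 2's mixed strategy

rng = np.random.default_rng(0)
m = 200_000                    # number of independent plays (trajectories)
a = rng.choice(3, size=m, p=x) # sampled Player 1 actions
b = rng.choice(3, size=m, p=y) # sampled Player 2 actions
r = A[a, b]                    # observed payoffs

f_hat = r.mean()               # Eq. 4: Monte Carlo estimate of f(x, y)

# Eq. 5 (score function): grad_x log x(a) = e_a / x(a) for a probability vector,
# so each sample contributes r / x[a] to coordinate a of the gradient estimate.
g_hat = np.zeros(3)
np.add.at(g_hat, a, r / x[a])
g_hat /= m

assert abs(f_hat - x @ A @ y) < 0.02         # true value: f(x, y) = x^T A y
assert np.allclose(g_hat, A @ y, atol=0.05)  # true gradient: grad_x f = A y
```

With enough samples, both estimators concentrate around the exact quantities, which is the property the analysis in Sec. 4 relies on.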
4 Convergence Analysis
We establish an asymptotic convergence result in the Monte Carlo policy gradient setting in Thm. 2 for a variant of Alg. 1 under regularity assumptions. This variant sets the candidate opponent sets according to the assumptions below and uses vanilla SGD as the policy optimizer. We add a stopping criterion after Line 6 with an accuracy parameter. The full proof can be found in the supplementary. Since the algorithm is symmetric between agents in the population, we drop the subscript $i$ for clarity.
$X, Y$ are compact sets. As a consequence, there exists $D > 0$ s.t. $\|x\| \le D$ for all $x \in X$ and $\|y\| \le D$ for all $y \in Y$. The candidate sets $\mathcal{X}, \mathcal{Y}$ are compact subsets of $X$ and $Y$. Further, assume $f$ is a bounded convex-concave function.
Theorem 1 (Convergence with exact gradients).
The above case with exact sub-gradients is easy, since both the values and the gradients are deterministic. In the RL setting, we construct estimates of $f$ (Eq. 4) and its gradients (Eq. 5) from $m$ samples. The intuition is that, when the sample size is large enough, we can bound the deviation between the true values and the estimates by concentration inequalities; then a proof outline similar to  also goes through.
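For example, under the boundedness assumption below, if each trajectory's discounted return lies in $[-R, R]$ ($R$ is our notation for the return bound, not the paper's), Hoeffding's inequality controls the deviation of the Eq. 4 estimate:

```latex
\Pr\left( \left| \hat{f}(x, y) - f(x, y) \right| \ge \epsilon \right)
  \le 2 \exp\!\left( - \frac{m \, \epsilon^2}{2 R^2} \right)
```

A union bound over the finitely many agent-opponent pairs evaluated at each iteration then controls all the estimates simultaneously with high probability.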
Thm. 2 requires an extra assumption on the boundedness of the value estimates and the policy gradients. By showing that the policy gradient estimates are approximate sub-/super-gradients of $f$, we are able to prove that the output of Alg. 1 is an approximate Nash equilibrium with high probability.
The Q value estimation is unbiased and bounded, and the policy has bounded gradients.
Theorem 2 (Convergence with policy gradients).
Discussion. The theorems require $f$ to be convex in $x$ and concave in $y$, but not strictly so. This is a weaker assumption than Arrow-Hurwicz-Uzawa's . The purpose of this simple analysis is mainly a sanity check and an assurance of the correctness of our method. It applies to the setting in Sec. 5.1 but not beyond, as the assumptions do not necessarily hold for neural networks. The sample size is chosen loosely, as we are not aiming at a sharp finite sample complexity or regret analysis. In practice, we can find an empirically suitable sample size $m$ and learning rates, and adopt a modern RL algorithm with an advanced optimizer (e.g., PPO  with RmsProp ) in place of the SGD updates.
We empirically evaluate our algorithm on several games with distinct characteristics. The implementation is based on PyTorch. Code, details, and demos are in the supplementary material.
In Matrix games, we compare to a naive mirror descent method, which is essentially Self-play with the latest agent. In the rest of the environments, we compare the results of the following methods:
Self-play with the latest agent (Arrow-Hurwicz-Uzawa). The learner always competes with the most recent agent. This is essentially the Arrow-Hurwicz-Uzawa method  or the naive mirror/alternating descent.
Self-play with a random past agent (Fictitious play). The learner competes against a randomly sampled historical opponent. This is the scheme in OpenAI sumo [5, 1]. It is similar to Fictitious play  since uniformly random sampling is equivalent to historical average by definition. However, Fictitious play only guarantees convergence of the average-iterate but not the last-iterate agent.
Ours($n$). This is our algorithm with a population of $n$ pairs of agents trained simultaneously, with each other as candidate opponents. The implementation can be distributed.
We mainly measure the strength of agents by Elo scores . Pairwise competition results are gathered from a large tournament among all the checkpoint agents of all methods after training. Each competition has multiple matches to account for randomness. The Elo scores are computed by logistic regression, since Elo assumes a logistic relationship: a 100-point Elo difference corresponds to roughly a 64% win-rate. The initial agent's Elo is calibrated to 0. Another way to measure strength is to compute the average rewards (win-rates) against other agents; we also report average rewards in the supplementary.
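Concretely, the Elo model posits a logistic win probability in the rating difference. A minimal sketch of the standard formula (scale 400; the function name is our own):

```python
def elo_expected_score(r_a: float, r_b: float) -> float:
    """Expected score of player A against player B under the Elo logistic model."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

# Equal ratings give an expected score of 1/2; a 100-point advantage gives ~64%.
assert elo_expected_score(1000, 1000) == 0.5
assert round(elo_expected_score(1100, 1000), 2) == 0.64
```

Fitting the ratings to observed match outcomes by maximum likelihood under this model is exactly a logistic regression on the pairwise results.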
5.1 Matrix games
We verified the last-iterate convergence to Nash equilibria in several classical two-player zero-sum matrix games. In comparison, vanilla mirror descent/ascent is known to produce oscillating behaviors . Payoff matrices (for both players, separated by commas), phase portraits, and error curves are shown in Tabs. 1-4 and Fig. 4. Our observations are listed beside the figures.
We studied two settings: (1) Ours (Exact Gradient), the full-information setting, where the players know the payoff matrix and compute exact gradients on the action probabilities; (2) Ours (Policy Gradient), the reinforcement learning or bandit setting, where each player only receives the reward of its own action. The action probabilities were modeled by a probability vector. We estimated the gradient w.r.t. the probability vector with the REINFORCE estimator  and applied constant learning rate SGD with proximal projection onto the probability simplex. We trained agents jointly for Alg. 1 and separately for the naive mirror descent under the same initialization.
|Game payoff matrix||Phase portraits and error curves|
Any interpolation between the two equilibrium column strategies is itself an equilibrium column strategy. Depending on initialization, agents in our method converge to different equilibria.
5.2 Grid-world soccer game
We conducted experiments in a grid-world soccer game; similar games were adopted in [15, 24]. Two players compete in a grid world, starting from random positions. The action space is discrete. Once a player scores a goal, it receives a positive reward of 1.0 and the game ends. Up to 50 timesteps are allowed; the game ends in a draw if time runs out. The game has imperfect information, as the two players move simultaneously.
The policy and value functions were parameterized by simple one-layer networks, consisting of a one-hot encoding layer and a linear layer that outputs the action logits and values. The logits are transformed into probabilities via softmax. We used Advantage Actor-Critic (A2C)  with Generalized Advantage Estimation (GAE)  and RmsProp  as the base RL algorithm. For a fair comparison, we kept track of the per-agent number of trajectories (episodes) each algorithm uses. The hyper-parameters of Alg. 1 and other details are in the supplementary. All methods were run multiple times to calculate the confidence intervals.
In Fig. 7, Ours($n$) with all tested population sizes performs better than the other methods, achieving higher Elo scores after experiencing the same number of per-agent episodes. The other methods fail to beat the rule-based agent after 32000 episodes. Competing with a random past agent learns the slowest, suggesting that, though it may stabilize training and lead to diverse behaviors , the learning efficiency is not as high, because a large portion of the samples is devoted to weak opponents. Within our methods, the performance increases with larger $n$, suggesting that a larger population may help find better perturbations.
5.3 Gomoku board game
We investigated the effectiveness of our method in the Gomoku game, also known as Renju or Five-in-a-Row. In our variant, two players alternately place black or white stones on a 9-by-9 board. The player who first gets an unbroken row of five stones horizontally, vertically, or diagonally wins (reward 1). The game is a draw (reward 0) when no valid move remains. The game is sequential and has perfect information.
This experiment involved much more complex neural networks than before. We adopted a 4-layer convolutional ReLU network for both the policy and value networks. Gomoku is hard to train from scratch with pure model-free RL without explicit tree search. Hence, we pre-trained the policy nets on expert data collected from renjuoffline.com. We downloaded roughly 130 thousand games and applied behavior cloning. The pre-trained networks were able to predict expert moves and achieved an average score of 0.93 against a random-action player. We adopted A2C  with GAE  and RmsProp as the base RL algorithm. Up to 40 iterations of Alg. 1 were run. The other hyper-parameters are the same as those in the soccer game.
In Fig. 7, all methods are able to improve significantly upon the behavior cloning policies. Ours($n$) demonstrates higher sample efficiency, achieving higher Elo ratings than the alternatives given the same amount of per-agent experience. This again suggests that the opponents are chosen more wisely, resulting in better policy improvements. Lastly, the more complex policy and value functions (multi-layer CNNs) do not seem to undermine the advantage of our approach.
5.4 RoboSumo Ants
Our last experiment is based on the RoboSumo simulation environment of [1, 5], in which two Ants wrestle in an arena. This setting is particularly relevant to practical robotics research, as we believe success in this simulation could transfer to the real world. The Ants move simultaneously, trying to force the opponent out of the arena or onto the floor. The physics simulator is MuJoCo . The observation and action spaces are continuous. This game is challenging, since it involves a complex continuous control problem with sparse rewards. Following [1, 5], we utilized PPO  with GAE  as the base RL algorithm and used a 2-layer fully connected network of width 64 for function approximation; the hyper-parameters are listed in the supplementary. In prior work , a random past opponent is sampled in self-play, corresponding to the “Self-play w/ random past” baseline here. The agents are initialized by imitating pre-trained agents. We consider $n$=4 and $n$=8 in our method. From Fig. 7, we observe again that Ours($n$) outperforms the baseline methods by a statistically significant margin and that our method benefits from a larger population size.
We propose a new algorithmic framework for competitive self-play policy optimization inspired by a perturbation-based subgradient method for saddle points. Our algorithm provably converges in convex-concave games and achieves better per-agent sample efficiency in several experiments. In the future, we hope to study larger population sizes (given sufficient computing power) and the possibilities of model-based and off-policy self-play RL under our framework.
- (2018) Continuous adaptation via meta-learning in nonstationary and competitive environments. ICLR.
- (1958) Studies in linear and non-linear programming. Vol. 2, Stanford University Press.
- (2020) Provable self-play algorithms for competitive reinforcement learning. arXiv:2002.04017.
- (2020) Emergent tool use from multi-agent autocurricula. ICLR.
- (2017) Emergent complexity via multi-agent competition. ICLR.
- (2019) Dota 2 with large scale deep reinforcement learning. arXiv:1912.06680.
- (2002) Multiagent learning using a variable learning rate. Artificial Intelligence 136(2), pp. 215-250.
- (1951) Iterative solution of games by fictitious play. Activity Analysis of Production and Allocation 13(1), pp. 374-376.
- (2019) Superhuman AI for multiplayer poker. Science 365(6456), pp. 885-890.
- Competing against Nash equilibria in adversarially changing zero-sum games. ICML, pp. 921-930.
- (1978) The rating of chessplayers, past and present. Arco Pub.
- (2019) Adversarial policies: attacking deep reinforcement learning. ICLR.
- (2020) Monte-Carlo tree search as regularized policy optimization. ICML.
- (2018) K-beam minimax: efficient optimization for deep adversarial learning. ICML.
- (2016) Opponent modeling in deep reinforcement learning. ICML.
- (2016) Deep reinforcement learning from self-play in imperfect-information games. arXiv:1603.01121.
- Neural networks for machine learning, lecture 6a: overview of mini-batch gradient descent.
- (2003) Nash Q-learning for general-sum stochastic games. JMLR 4(Nov), pp. 1039-1069.
- (2019) Human-level performance in 3D multiplayer games with population-based reinforcement learning. Science 364(6443), pp. 859-865.
- (2001) On no-regret learning, fictitious play, and Nash equilibrium. ICML.
- (1994) Perturbation methods for saddle point computation.
- (1976) The extragradient method for finding saddle points and other problems. Matecon 12, pp. 747-756.
- (2017) A unified game-theoretic approach to multiagent reinforcement learning. NeurIPS, pp. 4190-4203.
- (1994) Markov games as a framework for multi-agent reinforcement learning. In Machine Learning Proceedings, pp. 157-163.
- (2019) Emergent coordination through competition. ICLR.
- (2019) Optimistic mirror descent in saddle-point problems: going the extra (gradient) mile. ICLR.
- (2016) Asynchronous methods for deep reinforcement learning. ICML, pp. 1928-1937.
- (1951) Non-cooperative games. Annals of Mathematics, pp. 286-295.
- (2009) Subgradient methods for saddle-point problems. Journal of Optimization Theory and Applications.
- (2012) Game theory and multi-agent reinforcement learning. In Reinforcement Learning, p. 441.
- (2013) Optimization, learning, and games with predictable sequences. NeurIPS, pp. 3066-3074.
- (2016) High-dimensional continuous control using generalized advantage estimation. ICLR.
- (2017) Proximal policy optimization algorithms. arXiv:1707.06347.
- (1953) Stochastic games. PNAS 39(10), pp. 1095-1100.
- (2016) Mastering the game of Go with deep neural networks and tree search. Nature 529(7587), p. 484.
- (2018) A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 362(6419), pp. 1140-1144.
- (2017) Mastering the game of Go without human knowledge. Nature 550(7676), p. 354.
- (2000) Nash convergence of gradient dynamics in general-sum games. UAI.
- (2018) Reinforcement learning: an introduction. MIT Press.
- (1995) Temporal difference learning and TD-Gammon. Communications of the ACM 38(3), pp. 58-68.
- (2012) MuJoCo: a physics engine for model-based control. IROS, pp. 5026-5033.
- (2019) Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature 575(7782), pp. 350-354.
- (1992) Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning 8(3-4), pp. 229-256.
- (2008) Regret minimization in games with incomplete information. NeurIPS, pp. 1729-1736.
Appendix A Experiment details
A.1 Illustrations of the games in the experiments
The hyper-parameters in different games are listed in Tab. 5.
|Hyper-param \ Game||Soccer||Gomoku||RoboSumo|
|Num. of iterations||50||40||50|
|Policy learning rate||0.001||0 in first 20 steps, then 0.001||3e-5, decayed to 0 linearly|
|Value func learning rate||(Same as above.)||(Same as above.)||9e-5|
|Num. of inner updates||10||10||10|
|Env. time limit||50||41 per-player||100|
|Base RL algorithm||A2C||A2C||PPO, clip 0.2, minibatch 512, epochs 3|
|Max gradient norm||1.0||1.0||0.1|
|Entropy bonus coef.||0.01||0.01||0|
Tanh ensures the mean of the Gaussian is between -1 and 1. The density is corrected.
Share 3 Conv layers with the policy, but additional heads: global average and Linear[64,1]
A.3 Additional results
Win-rates (or average rewards).
Here we report additional results in terms of the average win-rates, or equivalently the average rewards through a linear transform, in Tab. 6 and 7. Since we treat each pair as one agent, the values in the first table are averaged over the two agents in each pair. The one-sided win-rates are in the second table. Means and 95% confidence intervals are estimated from multiple runs; the exact numbers of runs are in the corresponding figure captions of the main paper. The message is the same as that suggested by the Elo scores: our method consistently produces stronger agents. We hope the win-rates give better intuition about the relative performance of the different methods.
Thanks to the ease of parallelization, the proposed algorithm enjoys good scalability. We can either distribute the agents into processes that run concurrently, or parallelize the rollouts; our implementation took the latter approach. In the most time-consuming RoboSumo Ants experiment, with 30 Intel Xeon CPUs, the baseline methods took approximately 2.4h, while Ours (n=4) took 10.83h to train (roughly 4.5 times as long) and Ours (n=8) took 20.75h (roughly 8.6 times). Note that Ours (n) trains $n$ pairs of agents simultaneously. If we trained as many agents with the baseline methods by repeating the experiment $n$ times, the time would be roughly $2.4n$ hours (9.6h for n=4, 19.2h for n=8), which is comparable to Ours (n).
Chance of selecting the agent itself as opponent.
One big difference between our method and the compared baselines is the ability to select opponents adversarially from the population. Consider the agent pair $(x_i, y_i)$. When training $x_i$, our method finds the strongest opponent (the one that incurs the largest loss on $x_i$) from the population, whereas the baselines always choose (possibly past versions of) $y_i$. Since the candidate set contains $y_i$, the “fall-back” case is to use $y_i$ as the opponent in our method. We report the frequency with which $y_i$ is chosen as the opponent for $x_i$ (and $x_i$ for $y_i$, likewise). This gives a sense of how often our method falls back to the baseline method. From Tab. 8, we observe that, as $n$ grows larger, the chance of falling back decreases. This is understandable, since a larger population means larger candidate sets and a larger chance of finding good perturbations.
|Method||Ours ()||Ours ()||Ours ()|
|Frequency of self (Soccer)|
|Frequency of self (Gomoku)|
Appendix B Proofs
We adopt the following variant of Alg. 1 in our asymptotic convergence analysis. For clarity, we investigate the learning process of one agent pair in the population and drop the index $i$. The candidate sets $\mathcal{X}$ and $\mathcal{Y}$ are not set simply as the population, for the sake of the proof; alternatively, we pose some assumptions. Setting them to the population as in the main text may approximately satisfy the assumptions.
b.1 Proof of Theorem 1
We restate the assumptions and the theorem here more clearly for reference.
(B.1) $X, Y$ are compact sets. As a consequence, there exists $D > 0$ s.t. $\|x\| \le D$ for all $x \in X$ and $\|y\| \le D$ for all $y \in Y$. Further, assume $f$ is a bounded convex-concave function.
(B.2) $\mathcal{X}, \mathcal{Y}$ are compact subsets of $X$ and $Y$. Assume that, for a sequence $\{(x_k, y_k)\}$ with perturbations $\hat{x}_k \in \mathcal{X}$ and $\hat{y}_k \in \mathcal{Y}$, $f(x_k, \hat{y}_k) - f(\hat{x}_k, y_k) \to 0$ implies that any limit point of $\{(x_k, y_k)\}$ is a saddle point.
Theorem 1 (Convergence with exact gradients).
Assump. B.1 is standard; it holds, for instance, if $f$ is based on a payoff table and $X, Y$ are probability simplices as in matrix games, or if $f$ is quadratic and $X, Y$ are sets of unit-norm vectors. Assump. B.2 concerns the regularity of the candidate opponent sets. It holds if $\mathcal{X}, \mathcal{Y}$ are compact and the perturbed duality gap $f(x, \hat{y}) - f(\hat{x}, y)$ vanishes only at a saddle point. A trivial example would be $\mathcal{X} = X$ and $\mathcal{Y} = Y$. Another example would be the proximal regions around $x$ and $y$. In practice, Alg. 1 constructs the candidate sets from the population, which needs to be adequately large and diverse to satisfy Assump. B.2 approximately.
The proof is due to , which we paraphrase here.