Efficient Competitive Self-Play Policy Optimization

09/13/2020 ∙ Yuanyi Zhong et al. ∙ University of Illinois at Urbana-Champaign

Reinforcement learning from self-play has recently reported many successes. Self-play, where the agents compete with themselves, is often used to generate training data for iterative policy improvement. In previous work, heuristic rules are designed to choose an opponent for the current learner. Typical rules include choosing the latest agent, the best agent, or a random historical agent. However, these rules may be inefficient in practice and sometimes do not guarantee convergence even in the simplest matrix games. In this paper, we propose a new algorithmic framework for competitive self-play reinforcement learning in two-player zero-sum games. We recognize the fact that the Nash equilibrium coincides with the saddle point of the stochastic payoff function, which motivates us to borrow ideas from the classical saddle point optimization literature. Our method trains several agents simultaneously, and the agents intelligently take each other as opponents based on a simple adversarial rule derived from a principled perturbation-based saddle-point optimization method. We prove theoretically that our algorithm converges to an approximate equilibrium with high probability in convex-concave games under standard assumptions. Beyond the theory, we further show the empirical superiority of our method over baseline methods relying on the aforementioned opponent-selection heuristics in matrix games, grid-world soccer, Gomoku, and simulated robot sumo, with neural net policy function approximators.


1 Introduction

Reinforcement learning (RL) from self-play has drawn tremendous attention over the past few years. Empirical successes have been observed in several challenging tasks, including Go [35, 37, 36], simulated hide-and-seek [4], simulated sumo wrestling [5], Capture the Flag [19], Dota 2 [6], StarCraft II [42], and poker [9], to name a few. During RL from self-play, the learner collects training data by competing with an opponent selected from its past self or an agent population. Self-play presumably creates an auto-curriculum for the agents to learn at their own pace. At each iteration, the learner always faces an opponent that is comparable in strength to itself, allowing continuous improvement.

The way the opponents are selected often follows human-designed heuristic rules in prior works. For example, AlphaGo [35] always competes with the latest agent, while the later generation AlphaGo Zero [37] and AlphaZero [36] generate self-play data with the maintained best historical agent. In specific tasks, such as OpenAI’s sumo wrestling environment, competing against a randomly chosen historical agent leads to the emergence of more diverse behaviors [5] and more stable training than against the latest agent [1]. In population-based training [19, 25] and AlphaStar [42], an elite or random agent is picked from the agent population as the opponent.

Unfortunately, these rules may be inefficient and sometimes ineffective in practice, and they do not necessarily enjoy a convergence guarantee to the "average-case optimal" solution even in tabular matrix games. In fact, in the simple Matching Pennies game, self-play with the latest agent fails to converge and falls into oscillating behavior, as shown in Sec. 5.

In this paper, we aim to develop an algorithm that adopts a principled opponent-selection rule to alleviate some of the issues mentioned above. This requires first clarifying what the solution of self-play RL should be. From the game-theoretic perspective, the Nash equilibrium is a fundamental solution concept that characterizes the desired "average-case optimal" strategies (policies). When each player assumes the other players also play their equilibrium strategies, no one in the game can gain more by unilaterally deviating to another strategy. Nash, in his seminal work [28], established the existence of a mixed-strategy Nash equilibrium in any finite game. Thus, solving for a mixed-strategy Nash equilibrium is a reasonable goal of self-play RL.

We consider the particular case of two-player zero-sum games, a reasonable model for the competitive environments studied in the self-play RL literature. In this case, the Nash equilibrium coincides with the (global) saddle point and with the solution of the minimax program $\min_{x \in X} \max_{y \in Y} f(x, y)$. We denote $x \in X$ and $y \in Y$ as the strategy profiles (in RL language, policies) of the two players, and $f(x, y)$ as the loss for Player 1 or, equivalently, the utility/reward for Player 2. A saddle point $(x^*, y^*)$, where $X, Y$ are the sets of all possible mixed strategies (stochastic policies) of the two players, satisfies the following key property

$f(x^*, y) \le f(x^*, y^*) \le f(x, y^*), \quad \forall x \in X, \; \forall y \in Y.$   (1)
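
As a concrete illustration, take the Matching Pennies game of Sec. 5.1 with Player 2's payoff $f(x, y) = x^\top A y$, where $A = [[1, -1], [-1, 1]]$ and $x, y$ range over the 2-dimensional probability simplex. At $x^* = y^* = (\tfrac{1}{2}, \tfrac{1}{2})$ we have $(x^*)^\top A = (0, 0)$ and $A y^* = (0, 0)^\top$, so $f(x^*, y) = 0 = f(x, y^*)$ for every $x$ and $y$; both inequalities in Eq. 1 hold (here with equality), and the game value is 0, in agreement with Tab. 1.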

Connections to the saddle point problem and game theory inspire us to borrow ideas from the abundant literature on finding saddle points in optimization [2, 22, 21, 29] and on finding equilibria in game theory [44, 8, 38]. One particular class of methods, the perturbation-based subgradient methods for finding saddles [22, 21], is especially appealing. This class of methods builds directly upon the inequality properties in Eq. 1 and has several advantages: (1) Unlike some algorithms that require knowledge of the game dynamics [35, 37, 30], it requires only subgradients; thus, it is easy to adapt to policy optimization with estimated policy gradients. (2) It is guaranteed to converge in its last iterate instead of an average iterate, hence alleviating the need to compute any historical averages as in [8, 38, 44], which can get complicated when neural nets are involved [16]. (3) Most importantly, it prescribes a simple, principled way to adversarially choose opponents, which can be naturally implemented with a concurrently-trained agent population.

To summarize, we adapt the perturbation-based methods of classical saddle point optimization into the model-free self-play RL regime. This results in a novel population-based policy gradient method for competitive self-play RL described in Sec. 3. Analogous to the standard model-free RL setting, we assume only “naive” players [20] where the game dynamic is hidden and only rewards for their own actions are revealed. This enables broader applicability than many existing algorithms [35, 37, 30] to problems with mismatched or unknown game dynamics, such as many real-world or simulated robotic problems. In Sec. 4, we provide an approximate convergence theorem of the proposed algorithm for convex-concave games as a sanity check. Sec. 5 shows extensive experiment results favoring our algorithm’s effectiveness on several games, including matrix games, a game of grid-world soccer, a board game, and a challenging simulated robot sumo game. Our method demonstrates better per-agent sample efficiency than baseline methods with alternative opponent-selection rules. Our trained agents can also outperform the agents trained by other methods on average.

2 Related Work

Reinforcement learning trains a single agent to maximize the expected return through interacting with an environment [39]. Multiagent reinforcement learning (MARL), of which the two-agent setting is a special case, concerns multiple agents taking actions in the same environment [24]. Self-play is a training paradigm that generates data for learning in MARL and has led to great successes, achieving superhuman performance in many domains [40, 35, 9]. Applying RL algorithms naively as independent learners in MARL sometimes produces strong agents [40], but often does not converge to the equilibrium. People have studied ways to extend RL algorithms to MARL, e.g., minimax-Q [24], Nash-Q [18], WoLF-PG [7], etc. However, most of these methods are designed for tabular RL only and are therefore not readily applicable to continuous state-action spaces and complex policy functions, where gradient-based policy optimization methods are preferred. Very recently, there has been work on the non-asymptotic regret analysis of tabular self-play RL [3]. While our work is motivated by practical concerns, these works enrich the theoretical understanding and complement ours.

There are algorithms developed from the game theory and online learning perspective [23, 30, 10], notably Tree search, Fictitious self-play [8], Regret minimization [20, 44], and Mirror descent [26, 31]. Tree search, such as minimax and alpha-beta pruning, is particularly effective in small-state games, and Monte Carlo Tree Search (MCTS) is also effective in Go [35]. However, Tree search requires the learner to know the exact game dynamics. The latter methods typically require maintaining some historical quantities. In Fictitious play, the learner best-responds to a historical average opponent, and the average strategy converges. Similarly, the total historical regrets in all (information) states are maintained in (counterfactual) regret minimization [44]. The extragradient rule in [26] is a special case of the adversarial perturbation we rely on in this paper. Most of those algorithms are designed only for discrete state-action environments, and special care has to be taken with neural net function approximators [16]. In contrast, our method enjoys last-iterate convergence under proper convex-concave assumptions, does not require the complicated computation of averaging neural nets, and is readily applicable to continuous environments through policy optimization.

In two-player zero-sum games, the Nash equilibrium coincides with the saddle point, which enables the techniques developed for finding saddle points. While some saddle-point methods also rely on time averages [29], a class of perturbation-based gradient methods is known to converge under mild convex-concave assumptions for deterministic functions [21, 22, 14]. We develop an efficient sampling-based version of them for stochastic RL objectives, which leads to a more principled and effective way of choosing opponents in self-play. Our adversarial opponent-selection rule bears a resemblance to [12]. However, the motivation of our work (and hence our algorithm) is to improve self-play RL, while [12] aims at attacking deep self-play RL policies. Though the algorithm presented here builds upon policy gradient, the same framework may be extended to other RL algorithms such as MCTS, due to a recent interpretation of MCTS as policy optimization [13]. Finally, our way of leveraging Eq. 1 in a population may potentially work beyond gradient-based RL, e.g., in training generative adversarial networks similarly to [14], because of the same minimax formulation.

3 Method

Classical game theory defines a two-player zero-sum game as a tuple $(X, Y, f)$, where $X$ and $Y$ are the sets of possible strategies of Players 1 and 2 respectively, and $f: X \times Y \to \mathbb{R}$ maps a pair of strategies to a real-valued utility/reward for Player 2. The game is zero-sum (fully competitive), so Player 1's reward is $-f(x, y)$.

We consider mixed strategies (corresponding to stochastic policies in RL). In the discrete case, $X$ and $Y$ could be the sets of all probability distributions over the respective action sets. When parameterized function approximators are used, $X$ and $Y$ can be the spaces of all policy parameters.

Multiagent RL formulates the problem as a Stochastic Game [34], an extension of the Markov Decision Process (MDP). Denote $a_t$ as the action of Player 1 and $b_t$ as the action of Player 2 at time $t$, and let $T$ be the time limit of the game. The stochastic payoff $f$ then writes as

$f(x, y) = \mathbb{E}\Big[\textstyle\sum_{t=0}^{T} \gamma^t \, r(s_t, a_t, b_t)\Big].$   (2)

The state sequence follows a transition dynamic $s_{t+1} \sim P(\cdot \mid s_t, a_t, b_t)$. Actions are sampled according to the stochastic policies $a_t \sim \pi_x(\cdot \mid s_t)$ and $b_t \sim \pi_y(\cdot \mid s_t)$. And $r(s_t, a_t, b_t)$ is the reward (payoff) for Player 2 at time $t$, determined jointly by the state and both actions. We use the terms 'agent' and 'player' interchangeably. In deep RL, $x$ and $y$ are the policy neural net parameters. In some cases [35], we can enforce $x = y$ by sharing parameters if the game is impartial or we do not want to distinguish the players. The discounting factor $\gamma \in (0, 1]$ weights short- and long-term rewards and is optional. Note that when one agent is fixed, taking a fixed $y$ as an example, the problem $x$ faces reduces to an MDP, if we define a new state transition dynamic $P'(s_{t+1} \mid s_t, a_t) = \mathbb{E}_{b_t \sim \pi_y} P(s_{t+1} \mid s_t, a_t, b_t)$ and a new reward $r'(s_t, a_t) = \mathbb{E}_{b_t \sim \pi_y}\, r(s_t, a_t, b_t)$.
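
To make this reduction concrete, here is a minimal sketch (ours, not from the paper) of a wrapper that freezes the opponent; the two-player environment interface (`reset`, `step(a, b)` returning the next state, Player 2's reward, and a done flag) is an assumed one:

```python
class FixedOpponentMDP:
    """Present a two-player zero-sum game as an ordinary MDP for Player 1.

    The frozen opponent policy pi_y is folded into the transition dynamic
    and reward, as described in the text. Player 1 minimizes f, so it
    receives -r as its own reward, where r is Player 2's payoff.
    """

    def __init__(self, two_player_env, opponent_policy):
        self.env = two_player_env        # assumed interface: reset(), step(a, b)
        self.pi_y = opponent_policy      # frozen opponent (Player 2)
        self.state = None

    def reset(self):
        self.state = self.env.reset()
        return self.state

    def step(self, a):
        b = self.pi_y.sample(self.state)            # b_t ~ pi_y(.|s_t)
        next_state, r, done = self.env.step(a, b)   # r: reward for Player 2
        self.state = next_state
        return next_state, -r, done                 # Player 1's reward is -r
```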

The naive algorithm (simultaneous gradient descent on $x$ and ascent on $y$, i.e., self-play with the latest agent) provably works in strictly convex-concave games (where $f$ is strictly convex in $x$ and strictly concave in $y$) under the assumptions in [2]. However, in general, it does not enjoy last-iterate convergence to the Nash equilibrium. Even for simple games such as Matching Pennies and Rock Paper Scissors, as we shall see in our experiments, the naive algorithm generates cyclic sequences of $(x, y)$ iterates that orbit around the equilibrium. This motivates us to study the perturbation-based method that converges under weaker assumptions.

Recall that the Nash equilibrium has to satisfy the saddle constraints of Eq. 1: $f(x^*, y) \le f(x^*, y^*) \le f(x, y^*)$. The perturbation-based methods build upon this property [29, 21, 22] and directly optimize for a solution that meets the constraints. They find a perturbed point $\hat{y}$ of $y$ and a perturbed point $\hat{x}$ of $x$, and use the gradients at $(x, \hat{y})$ and $(\hat{x}, y)$ to optimize $x$ and $y$. Under some regularity assumptions, the gradient direction from a single perturbed point is adequate for proving convergence [29] for (not strictly) convex-concave functions. These methods can be easily extended to accommodate gradient-based policy optimization and the stochastic RL objective in Eq. 4.

Input: $T$: number of iterations, $\eta$: learning rates, $m$: sample size, $n$: population size, $k$: number of inner policy updates;
Result: $n$ pairs of policies $\{(x_i, y_i)\}_{i=1}^{n}$;
1 Initialize $\{(x_i, y_i)\}_{i=1}^{n}$;
2 for $t = 1, \dots, T$ do
3        Evaluate $\hat{f}(x_i, y_j)$ for all pairs $(i, j)$ with Eq. 4 and sample size $m$;
4        for $i = 1, \dots, n$ do
5               Construct candidate opponent sets $\mathcal{X}_i$ and $\mathcal{Y}_i$;
6               Find perturbed $v_i = \arg\max_{y \in \mathcal{Y}_i} \hat{f}(x_i, y)$ and perturbed $u_i = \arg\min_{x \in \mathcal{X}_i} \hat{f}(x, y_i)$;
7               Invoke a single-agent RL algorithm (e.g., A2C, PPO) on $x_i$ against the fixed opponent $v_i$ for $k$ times that:
8               Estimate the policy gradient $\hat{\nabla}_{x_i} f(x_i, v_i)$ with sample size $m$ (e.g., Eq. 5);
9               Update the policy by $x_i \leftarrow x_i - \eta \, \hat{\nabla}_{x_i} f(x_i, v_i)$ (or RmsProp);
10               Invoke a single-agent RL algorithm (e.g., A2C, PPO) on $y_i$ against the fixed opponent $u_i$ for $k$ times that:
11               Estimate the policy gradient $\hat{\nabla}_{y_i} f(u_i, y_i)$ with sample size $m$;
12               Update the policy by $y_i \leftarrow y_i + \eta \, \hat{\nabla}_{y_i} f(u_i, y_i)$ (or RmsProp);
13
14 return $\{(x_i, y_i)\}_{i=1}^{n}$;
Algorithm 1 Perturbation-based self-play policy optimization of an agent population.
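
For concreteness, a minimal Python sketch of the same loop is given below (ours; `evaluate_payoff` stands in for the Monte Carlo evaluation of Eq. 4 and `rl_update` for the inner A2C/PPO updates driven by Eq. 5, both assumed helpers):

```python
import numpy as np

def self_play_population(x, y, evaluate_payoff, rl_update, T, k):
    """Sketch of Alg. 1: perturbation-based self-play over n (x_i, y_i) pairs.

    x, y                    : lists of n policies for Players 1 and 2.
    evaluate_payoff(xi, yj) : Monte Carlo estimate of f(xi, yj)  (Eq. 4).
    rl_update(p, opponent, minimize) : one policy-gradient update of p
                              against the fixed opponent            (Eq. 5).
    """
    n = len(x)
    for t in range(T):
        # Evaluation step (Line 3): pairwise payoff matrix over the population.
        F = np.array([[evaluate_payoff(x[i], y[j]) for j in range(n)]
                      for i in range(n)])
        for i in range(n):
            # Adversarial opponent selection (Line 6 / Eq. 3); the candidate
            # sets are the concurrently trained population itself.
            v = y[int(np.argmax(F[i, :]))]   # most challenging opponent for x_i
            u = x[int(np.argmin(F[:, i]))]   # most challenging opponent for y_i
            for _ in range(k):               # Lines 7-9: descend on f(x_i, v)
                rl_update(x[i], opponent=v, minimize=True)
            for _ in range(k):               # Lines 10-12: ascend on f(u, y_i)
                rl_update(y[i], opponent=u, minimize=False)
    return list(zip(x, y))
```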

We propose to find the perturbations from an agent population, resulting in the algorithm outlined in Alg. 1. The algorithm trains $n$ pairs of agents $(x_i, y_i)$ simultaneously. The pairwise competitions are run as the evaluation step (Alg. 1, Line 3), costing $n^2$ pairwise evaluations of $m$ trajectories each. To save sample complexity, we may use these rollouts to do one policy update as well. Then a simple adversarial rule (Eq. 3) is adopted in Alg. 1, Line 6 to choose the opponents adaptively. The intuition is that $v_i$ and $u_i$ are the most challenging opponents in the population that the current $x_i$ and $y_i$ are facing.

$u_i = \arg\min_{x \in \mathcal{X}_i} \hat{f}(x, y_i), \qquad v_i = \arg\max_{y \in \mathcal{Y}_i} \hat{f}(x_i, y).$   (3)

The perturbations $u_i$ and $v_i$ always satisfy $\hat{f}(u_i, y_i) \le \hat{f}(x_i, y_i) \le \hat{f}(x_i, v_i)$, since $x_i \in \mathcal{X}_i$ and $y_i \in \mathcal{Y}_i$. Then we run gradient descent on $x_i$ with the perturbed $v_i$ as its opponent to minimize $f(x_i, v_i)$, and run gradient ascent on $y_i$ with $u_i$ as its opponent to maximize $f(u_i, y_i)$. Intuitively, the duality gap between $\max_y f(x_i, y)$ and $\min_x f(x, y_i)$, approximated by $\hat{f}(x_i, v_i) - \hat{f}(u_i, y_i)$, is reduced, leading to convergence of $(x_i, y_i)$ to the saddle point (equilibrium).

We build the candidate opponent sets in Line 5 of Alg. 1 simply as the concurrently-trained $n$-agent population. Specifically, $\mathcal{X}_i = \{x_j\}_{j=1}^{n}$ and $\mathcal{Y}_i = \{y_j\}_{j=1}^{n}$. This is due to the following considerations. One alternative source of candidates is a fixed set of known agents, such as a rule-based agent, which may be unavailable in practice. Another source is the extragradient methods [22, 26], where extra gradient steps are taken on $y$ before optimizing $x$. The extragradient method can be thought of as a local approximation to Eq. 3 with a neighborhood candidate opponent set, and is thus related to our method. However, it could be less efficient because the trajectory sample used in the extragradient estimation is wasted: it does not contribute to actually optimizing $x$. Yet another source could be the past agents. This choice is motivated by Fictitious play and ensures that the current learner always defeats a past self. But, as we shall see in the experiment section, self-play with a random past agent learns more slowly than our strategy. We expect all agents in the population of our algorithm to be strong, and thus to provide stronger learning signals.

Finally, we use Monte Carlo estimates to compute the values and gradients of $f$. In the classical game theory setting, the game dynamic and payoff are known, so it is possible to compute the exact values and gradients of $f$. But this is a rather restricted setting. In model-free MARL, we have to collect roll-out trajectories to estimate both the function values (through policy evaluation) and the gradients (through the Policy Gradient theorem [39]). After collecting $m$ independent trajectories $\{(s_t^{(j)}, a_t^{(j)}, b_t^{(j)}, r_t^{(j)})_{t=0}^{T}\}_{j=1}^{m}$, we can estimate $f(x, y)$ by

$\hat{f}(x, y) = \frac{1}{m} \sum_{j=1}^{m} \sum_{t=0}^{T} \gamma^t \, r_t^{(j)}.$   (4)

And given estimates $\hat{Q}(s, a)$ of the state-action value function (assuming an MDP with $y$ as a fixed opponent of $x$), we can construct an estimator for $\nabla_x f(x, y)$ (and similarly for $\nabla_y f(x, y)$) by

$\hat{\nabla}_x f(x, y) = \frac{1}{m} \sum_{j=1}^{m} \sum_{t=0}^{T} \gamma^t \, \hat{Q}(s_t^{(j)}, a_t^{(j)}) \, \nabla_x \log \pi_x(a_t^{(j)} \mid s_t^{(j)}).$   (5)
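
A sketch of the two estimators in PyTorch-style code (ours; the trajectory format and the critic `q_hat` supplying the $\hat{Q}$ values are assumptions):

```python
import torch

def estimate_payoff(trajectories, gamma):
    """Eq. 4: Monte Carlo estimate of f(x, y) from m independent trajectories.

    Each trajectory is a list of (state, action, reward) tuples, where the
    reward is Player 2's payoff at that step.
    """
    total = 0.0
    for traj in trajectories:
        total += sum(gamma ** t * r for t, (_, _, r) in enumerate(traj))
    return total / len(trajectories)

def policy_gradient_surrogate(policy, trajectories, q_hat, gamma):
    """Eq. 5: surrogate whose autograd gradient is the policy-gradient estimate.

    policy(s)  -> a torch.distributions object over actions (assumed helper);
    q_hat(s, a)-> detached scalar estimate of the state-action value in the
                  induced MDP (assumed helper).
    Calling .backward() on the returned value accumulates the Eq. 5 estimate
    into the policy parameters' gradients.
    """
    surrogate = 0.0
    for traj in trajectories:
        for t, (s, a, _) in enumerate(traj):
            surrogate = surrogate + gamma ** t * q_hat(s, a) * policy(s).log_prob(a)
    return surrogate / len(trajectories)
```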

4 Convergence Analysis

We establish an asymptotic convergence result in the Monte Carlo policy gradient setting in Thm. 2 for a variant of Alg. 1 under regularity assumptions. This algorithm variant sets the number of inner updates to $k = 1$ and uses vanilla SGD as the policy optimizer. We add a stopping criterion after Line 6 with an accuracy parameter $\epsilon$. The full proof can be found in the supplementary. Since the algorithm is symmetric between agents in the population, we drop the subscript $i$ for clarity.

Assumption 1.

$X$ and $Y$ are compact sets. As a consequence, there exists $D > 0$ s.t. $\|x\| \le D$ and $\|y\| \le D$ for all $x \in X$ and $y \in Y$. The candidate sets $\mathcal{X}$ and $\mathcal{Y}$ are compact subsets of $X$ and $Y$. Further, assume $f$ is a bounded convex-concave function.

Theorem 1 (Convergence with exact gradients [21]).

Under A1, and assuming that any convergent sequence $\{(x^t, y^t)\}$ whose duality gap $f(x^t, v^t) - f(u^t, y^t)$ vanishes has a saddle point as its limit, Alg. 1 (replacing all estimates with true values) produces a sequence of points convergent to a saddle point.

The above case with exact sub-gradients is easy since both the function values and the sub-gradients are deterministic. In the RL setting, we construct estimates of $f$ (Eq. 4) and of its gradients (Eq. 5) with samples. The intuition is that, when the sample size is large enough, we can bound the deviation between the true values and the estimates by concentration inequalities, and then a proof outline similar to [21] goes through.

Thm. 2 requires an extra assumption on the boundedness of the value estimates and the policy gradients. By showing that the policy gradient estimates are approximate sub-/super-gradients of $f$, we are able to prove that the output of Alg. 1 is an approximate Nash equilibrium with high probability.

Assumption 2.

The Q-value estimation is unbiased and bounded by a constant $B_Q$, and the policy log-density has a bounded gradient, $\|\nabla \log \pi\| \le B_\pi$.

Theorem 2 (Convergence with policy gradients).

Under A1 and A2, with the sample size $m_t$ and the learning rate $\eta_t$ at step $t$ chosen as specified in the supplementary, with probability at least $1 - \delta$ the Monte Carlo version of Alg. 1 generates a sequence of points convergent to an $\epsilon$-approximate equilibrium $(\bar{x}, \bar{y})$, that is, $f(\bar{x}, y) - \epsilon \le f(\bar{x}, \bar{y}) \le f(x, \bar{y}) + \epsilon$ for all $x \in X$ and $y \in Y$.

Discussion. The theorems require $f$ to be convex in $x$ and concave in $y$, but not strictly. This is a weaker assumption than Arrow-Hurwicz-Uzawa's [2]. The purpose of this simple analysis is mainly a sanity check and an assurance of the correctness of our method. It applies to the setting in Sec. 5.1 but not beyond, as the assumptions do not necessarily hold for neural networks. The sample size is chosen loosely, as we are not aiming at a sharp finite-sample complexity or regret analysis. In practice, we can find empirically suitable $m$ (sample size) and $\eta$ (learning rates), and adopt a modern RL algorithm with an advanced optimizer (e.g., PPO [33] with RmsProp [17]) in place of the SGD updates.

5 Experiments

We empirically evaluate our algorithm on several games with distinct characteristics. The implementation is based on PyTorch. Code, details, and demos are in the supplementary material.

Compared methods.

In the matrix games, we compare to a naive mirror descent method, which is essentially self-play with the latest agent. In the rest of the environments, we compare the results of the following methods:

  1. Self-play with the latest agent (Arrow-Hurwicz-Uzawa). The learner always competes with the most recent agent. This is essentially the Arrow-Hurwicz-Uzawa method [2], i.e., naive mirror/alternating descent.

  2. Self-play with the best past agent. The learner competes with the best historical agent maintained so far; a new agent replaces the best agent if it beats the previous best. This is the scheme in AlphaGo Zero and AlphaZero [37, 36].

  3. Self-play with a random past agent (Fictitious play). The learner competes against a randomly sampled historical opponent. This is the scheme in OpenAI sumo [5, 1]. It is similar to Fictitious play [8], since uniformly random sampling is equivalent to a historical average by definition. However, Fictitious play only guarantees convergence of the average-iterate, not the last-iterate, agent.

  4. Ours($n$). This is our algorithm with a population of $n$ pairs of agents trained simultaneously, taking each other as candidate opponents. The implementation can be distributed.

Evaluation protocols.

We mainly measure the strength of agents by Elo scores [11]. Pairwise competition results are gathered from a large tournament among all the checkpoint agents of all methods after training. Each competition consists of multiple matches to account for randomness. The Elo scores are computed by logistic regression, since Elo assumes a logistic relationship between the rating difference and the expected score: $\mathbb{P}(A \text{ beats } B) = 1 / (1 + 10^{(R_B - R_A)/400})$. A 100 Elo difference corresponds to roughly a 64% win-rate. The initial agent's Elo is calibrated to 0. Another way to measure strength is to compute the average rewards (win-rates) against other agents; we also report average rewards in the supplementary.
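
For reference, the logistic model underlying the Elo computation (the standard Elo convention, not specific to our implementation):

```python
def elo_expected_score(r_a, r_b):
    """Expected score of player A against player B under the Elo logistic model."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

# A 100-point Elo gap corresponds to roughly a 64% expected win-rate.
assert abs(elo_expected_score(100.0, 0.0) - 0.64) < 0.01
```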

5.1 Matrix games

We verified the last-iterate convergence to the Nash equilibrium in several classical two-player zero-sum matrix games. In comparison, vanilla mirror descent/ascent is known to produce oscillating behaviors [26]. Payoff matrices (for both players, separated by commas), phase portraits, and error curves are shown in Tab. 1-4 and Fig. 1-4. Our observations are listed alongside the tables and figures.

We studied two settings: (1) Ours (Exact Gradient), the full-information setting, where the players know the payoff matrix and compute exact gradients on the action probabilities; (2) Ours (Policy Gradient), the reinforcement learning or bandit setting, where each player only receives the reward of its own action. The action probabilities were modeled directly by a probability vector. We estimated the gradient w.r.t. this vector with the REINFORCE estimator [43] with sample size $m$, and applied constant-learning-rate SGD with a proximal projection onto the probability simplex. We trained $n = 4$ agents jointly for Alg. 1 and separately for the naive mirror descent under the same initialization.
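
The exact-gradient setting can be reproduced with a few lines of NumPy; the sketch below (ours) uses the Matching Pennies payoff of Tab. 1 and a crude clip-and-renormalize projection in place of the proximal projection used in the experiments:

```python
import numpy as np

A = np.array([[1., -1.], [-1., 1.]])    # Player 2's payoff; f(x, y) = x^T A y

def project_simplex(p, eps=1e-8):
    # crude stand-in for the proximal projection: clip and renormalize
    p = np.clip(p, eps, None)
    return p / p.sum()

n, eta, T = 4, 0.05, 2000
rng = np.random.default_rng(0)
xs = [project_simplex(rng.random(2)) for _ in range(n)]
ys = [project_simplex(rng.random(2)) for _ in range(n)]

for t in range(T):
    F = np.array([[xi @ A @ yj for yj in ys] for xi in xs])   # exact payoffs
    new_xs, new_ys = [], []
    for i in range(n):
        v = ys[int(np.argmax(F[i, :]))]    # Eq. 3: strongest opponent of x_i
        u = xs[int(np.argmin(F[:, i]))]    # Eq. 3: strongest opponent of y_i
        new_xs.append(project_simplex(xs[i] - eta * (A @ v)))    # descent on f(., v)
        new_ys.append(project_simplex(ys[i] + eta * (A.T @ u)))  # ascent on f(u, .)
    xs, ys = new_xs, new_ys

# each pair should drift toward the equilibrium (0.5, 0.5) of Tab. 1
print(xs[0], ys[0])
```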

Table 1: Matching Pennies, a classical game where two players simultaneously turn their pennies to heads or tails. If the pennies match, Player 2 wins one penny from Player 1; otherwise, Player 1 wins. Payoffs (Player 1, Player 2):

              Heads     Tails
  Heads      -1, +1    +1, -1
  Tails      +1, -1    -1, +1

The uniform strategy pair ((1/2, 1/2), (1/2, 1/2)) is the unique Nash equilibrium, with game value 0.

Table 2: Skewed Matching Pennies.

Observation (Fig. 1, 2): In the leftmost columns of Fig. 1 and 2, the naive mirror descent does not converge (pointwise); instead, it is trapped in a cyclic behavior. The trajectories of the probabilities of playing Heads orbit around the Nash equilibrium, showing as circles in the phase portrait. In contrast, our method enjoys approximate last-iterate convergence with both exact and policy gradients.

Table 3: Rock Paper Scissors. Payoffs (Player 1, Player 2):

               Rock      Paper     Scissors
  Rock         0, 0     -1, +1      +1, -1
  Paper       +1, -1     0, 0       -1, +1
  Scissors    -1, +1    +1, -1       0, 0

Observation (Fig. 3): Similar observations hold in Rock Paper Scissors (Fig. 3). The naive method circles around the equilibrium point (1/3, 1/3, 1/3) of each player, while our method converges.

Table 4: Extended Matching Pennies (rows A, B for the row player; columns a, b, c for the column player).

Observation (Fig. 4): Our method has the benefit of producing diverse solutions when there exist multiple Nash equilibria. The row player has a single equilibrium solution, while any interpolation between two particular column strategies is an equilibrium column strategy. Depending on the initialization, agents in our method converge to different equilibria.

Figure 1: Matching Pennies. (Top) The phase portraits. (Bottom) The squared distance to the equilibrium. Four colors correspond to the 4 agents in the population.
Figure 2: Skewed Matching Pennies. The unique Nash equilibrium has game value 0.8.
Figure 3: Rock Paper Scissors. (Top) Visualization of Player 1's strategies for one agent P0 from the population. (Bottom) The squared distance to the equilibrium.
Figure 4: Visualization of Player 2's strategies. (Left) Exact gradient; (Right) Policy gradient. The dashed line represents possible equilibrium strategies. The four agents (in different colors) of the population in our algorithm ($n = 4$) converge differently.

5.2 Grid-world soccer game

We conducted experiments in a grid-world soccer game; similar games were adopted in [15, 24]. Two players compete in a 6x9 grid world (Fig. 8), starting from random positions. The action space consists of discrete moves (up, down, left, right). Once a player scores a goal, it gets a positive reward of 1.0, and the game ends. Up to 50 timesteps are allowed; the game ends with a draw if time runs out. The game has imperfect information, as the two players move simultaneously.

The policy and value functions were parameterized by simple one-layer networks, consisting of a one-hot encoding layer followed by a linear layer that outputs the action logits and values. The logits are transformed into probabilities via a softmax. We used Advantage Actor-Critic (A2C) [27] with Generalized Advantage Estimation (GAE) [32] and RmsProp [17] as the base RL algorithm. The hyper-parameters of Alg. 1 are $T = 50$ iterations, sample size $m = 32$, and $k = 10$ inner updates (Tab. 5). We kept track of the per-agent number of trajectories (episodes) each algorithm uses for a fair comparison. Other hyper-parameters are in the supplementary. All methods were run multiple times to calculate the confidence intervals.
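
For completeness, a sketch of the GAE(λ) advantage recursion used inside the base A2C updates (the standard formulation; the γ and λ values here are the soccer settings from Tab. 5):

```python
def gae_advantages(rewards, values, last_value, gamma=0.97, lam=0.95):
    """Standard GAE(lambda) recursion over one trajectory.

    rewards[t]  : reward at step t,
    values[t]   : critic value V(s_t),
    last_value  : bootstrap value V(s_T), 0 if the episode terminated.
    """
    advantages = [0.0] * len(rewards)
    gae = 0.0
    next_value = last_value
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * next_value - values[t]
        gae = delta + gamma * lam * gae
        advantages[t] = gae
        next_value = values[t]
    return advantages
```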

In Fig. 5, Ours($n = 2, 4, 6$) all perform better than the others, achieving higher Elo scores after experiencing the same number of per-agent episodes. The other methods fail to beat the rule-based agent after 32000 episodes. Competing with a random past agent learns the slowest, suggesting that, though it may stabilize training and lead to diverse behaviors [5], the learning efficiency is not as high because a large portion of samples is devoted to weak opponents. Within our method, the performance increases with a larger $n$, suggesting that a larger population may help find better perturbations.

Figure 5: Soccer Elo curves averaged over 3 runs (seeds). For Ours($n$), the average is additionally taken over the $n$ agents in the population. Horizontal lines show the scores of the rule-based and the random-action agents.
Figure 6: Gomoku Elo curves averaged over 10 runs for the baseline methods, 6 runs (12 agents) for Ours($n=2$), 4 runs (16 agents) for Ours($n=4$), and 3 runs (18 agents) for Ours($n=6$).
Figure 7: RoboSumo Ants Elo curves averaged over 4 runs for the baseline methods and 2 runs for Ours($n=4$) and Ours($n=8$). A close-up is also drawn for better viewing.
* In all three figures, bars show the 95% confidence intervals. We compare per-agent sample efficiency.

5.3 Gomoku board game

We investigated the effectiveness of our method in the Gomoku game, also known as Renju or five-in-a-row. In our variant, two players place black and white stones on a 9-by-9 board in turns. The player who first gets an unbroken row of five stones horizontally, vertically, or diagonally wins (reward +1). The game is a draw (reward 0) when no valid move remains. The game is sequential and has perfect information.

This experiment involved much more complex neural networks than before. We adopted a 4-layer convolutional ReLU network (kernel sizes 5, 5, 3, 1 and channels 16, 32, 64, 1; see Tab. 5) for both the policy and value networks. Gomoku is hard to train from scratch with pure model-free RL without explicit tree search. Hence, we pre-trained the policy nets on expert data collected from renjuoffline.com. We downloaded roughly 130 thousand games and applied behavior cloning. The pre-trained networks were able to predict expert moves with reasonable accuracy and achieved an average score of 0.93 against a random-action player. We adopted A2C [27] with GAE [32] and RmsProp (learning rate schedule in Tab. 5). Up to 40 iterations of Alg. 1 were run. The other hyper-parameters are the same as those in the soccer game.
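
A PyTorch sketch of the policy architecture specified in Tab. 5 (the number of input channels for the board encoding is our assumption, and illegal-move masking is omitted); the final 1-channel map is flattened into a categorical distribution over the 81 board positions, i.e., the spatial softmax:

```python
import torch
import torch.nn as nn

class GomokuPolicy(nn.Module):
    """4-layer conv policy from Tab. 5: c16k5p2 -> c32k5p2 -> c64k3p1 -> c1k1."""

    def __init__(self, in_channels=3):   # 3-channel board encoding is an assumption
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=1),
        )

    def forward(self, board):             # board: (batch, in_channels, 9, 9)
        logits = self.body(board).flatten(1)     # (batch, 81) move logits
        return torch.distributions.Categorical(logits=logits)

dist = GomokuPolicy()(torch.zeros(1, 3, 9, 9))
move = dist.sample()                      # index into the flattened 9x9 board
```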

In Fig. 6, all methods are able to improve upon the behavior-cloning policies significantly. Ours($n = 2, 4, 6$) demonstrate higher sample efficiency by achieving higher Elo ratings than the alternatives given the same amount of per-agent experience. This again suggests that the opponents are chosen more wisely, resulting in better policy improvements. Lastly, the more complex policy and value functions (multi-layer CNNs) do not seem to undermine the advantage of our approach.

5.4 RoboSumo Ants

Our last experiment is based on the RoboSumo simulation environment in [1, 5], where two Ants wrestle in an arena. This setting is particularly relevant to practical robotics research, as we believe success in this simulation could transfer to the real world. The Ants move simultaneously, trying to force the opponent out of the arena or onto the floor. The physics simulator is MuJoCo [41]. The observation and action spaces are continuous. This game is challenging since it involves a complex continuous control problem with sparse rewards. Following [1, 5], we utilized PPO [33] with GAE [32] as the base RL algorithm, and used a 2-layer fully connected network of width 64 for function approximation. Hyper-parameters are listed in Tab. 5 of the supplementary. In [1], a random past opponent is sampled in self-play, corresponding to the "Self-play w/ random past" baseline here. The agents are initialized by imitating the pre-trained agents of [1]. We consider $n = 4$ and $n = 8$ in our method. From Fig. 7, we observe again that Ours($n = 4, 8$) outperform the baseline methods by a statistical margin and that our method benefits from a larger population size.

6 Conclusion

We propose a new algorithmic framework for competitive self-play policy optimization inspired by a perturbation-based subgradient method for saddle points. Our algorithm provably converges in convex-concave games and achieves better per-agent sample efficiency in several experiments. In the future, we hope to study larger population sizes (given sufficient computing power) and the possibilities of model-based and off-policy self-play RL under our framework.

References

  • [1] M. Al-Shedivat, T. Bansal, Y. Burda, I. Sutskever, I. Mordatch, and P. Abbeel (2018) Continuous adaptation via meta-learning in nonstationary and competitive environments. In International Conference on Learning Representations, Cited by: §1, item 3, §5.4.
  • [2] K. J. Arrow, H. Azawa, L. Hurwicz, and H. Uzawa (1958) Studies in linear and non-linear programming. Vol. 2, Stanford University Press. Cited by: §1, §3, §4, item 1.
  • [3] Y. Bai and C. Jin (2020) Provable self-play algorithms for competitive reinforcement learning. arXiv preprint arXiv:2002.04017. Cited by: §2.
  • [4] B. Baker, I. Kanitscheider, T. Markov, Y. Wu, G. Powell, B. McGrew, and I. Mordatch (2020) Emergent tool use from multi-agent autocurricula. In International Conference on Learning Representations, Cited by: §1.
  • [5] T. Bansal, J. Pachocki, S. Sidor, I. Sutskever, and I. Mordatch (2017) Emergent complexity via multi-agent competition. ICLR. Cited by: §1, §1, item 3, §5.2, §5.4.
  • [6] C. Berner, G. Brockman, B. Chan, V. Cheung, P. Debiak, C. Dennison, D. Farhi, Q. Fischer, S. Hashme, C. Hesse, et al. (2019) Dota 2 with large scale deep reinforcement learning. arXiv preprint arXiv:1912.06680. Cited by: §1.
  • [7] M. Bowling and M. Veloso (2002) Multiagent learning using a variable learning rate. Artificial Intelligence 136 (2), pp. 215–250. Cited by: §2.
  • [8] G. W. Brown (1951) Iterative solution of games by fictitious play. Activity analysis of production and allocation 13 (1), pp. 374–376. Cited by: §1, §2, item 3.
  • [9] N. Brown and T. Sandholm (2019) Superhuman ai for multiplayer poker. Science 365 (6456), pp. 885–890. Cited by: §1, §2.
  • [10] A. R. Cardoso, J. Abernethy, H. Wang, and H. Xu (2019) Competing against nash equilibria in adversarially changing zero-sum games. In International Conference on Machine Learning, pp. 921–930. Cited by: §2.
  • [11] A. E. Elo (1978) The rating of chessplayers, past and present. Arco Pub.. Cited by: §5.
  • [12] A. Gleave, M. Dennis, C. Wild, N. Kant, S. Levine, and S. Russell (2019) Adversarial policies: attacking deep reinforcement learning. In International Conference on Learning Representations, Cited by: §2.
  • [13] J. Grill, F. Altché, Y. Tang, T. Hubert, M. Valko, I. Antonoglou, and R. Munos (2020) Monte-carlo tree search as regularized policy optimization. ICML. Cited by: §2.
  • [14] J. Hamm and Y. Noh (2018) K-beam minimax: efficient optimization for deep adversarial learning. ICML. Cited by: §2.
  • [15] H. He, J. Boyd-Graber, K. Kwok, and H. Daumé III (2016) Opponent modeling in deep reinforcement learning. ICML. Cited by: §5.2.
  • [16] J. Heinrich and D. Silver (2016) Deep reinforcement learning from self-play in imperfect-information games. arXiv:1603.01121. Cited by: §1, §2.
  • [17] G. Hinton, N. Srivastava, and K. Swersky Neural networks for machine learning lecture 6a overview of mini-batch gradient descent. Cited by: §4, §5.2.
  • [18] J. Hu and M. P. Wellman (2003) Nash q-learning for general-sum stochastic games. JMLR 4 (Nov), pp. 1039–1069. Cited by: §2.
  • [19] M. Jaderberg, W. Czarnecki, I. Dunning, L. Marris, G. Lever, A. Castaneda, C. Beattie, N. Rabinowitz, A. Morcos, A. Ruderman, et al. (2019) Human-level performance in 3d multiplayer games with population-based reinforcement learning. Science 364 (6443), pp. 859–865. Cited by: §1, §1.
  • [20] A. Jafari, A. Greenwald, D. Gondek, and G. Ercal (2001) On no-regret learning, fictitious play, and nash equilibrium. ICML. Cited by: §1, §2.
  • [21] M. Kallio and A. Ruszczynski (1994) Perturbation methods for saddle point computation. Cited by: §B.1, §1, §2, §3, §4, Theorem 1, Theorem 1.
  • [22] G. Korpelevich (1976) The extragradient method for finding saddle points and other problems. Matecon 12, pp. 747–756. Cited by: §1, §2, §3, §3.
  • [23] M. Lanctot, V. Zambaldi, A. Gruslys, A. Lazaridou, K. Tuyls, J. Pérolat, D. Silver, and T. Graepel (2017) A unified game-theoretic approach to multiagent reinforcement learning. In Advances in neural information processing systems, pp. 4190–4203. Cited by: §2.
  • [24] M. L. Littman (1994) Markov games as a framework for multi-agent reinforcement learning. In Machine Learning, pp. 157–163. Cited by: §2, §5.2.
  • [25] S. Liu, G. Lever, J. Merel, S. Tunyasuvunakool, N. Heess, and T. Graepel (2019) Emergent coordination through competition. ICLR. Cited by: §1.
  • [26] P. Mertikopoulos, H. Zenati, B. Lecouat, C. Foo, V. Chandrasekhar, and G. Piliouras (2019) Optimistic mirror descent in saddle-point problems: going the extra (gradient) mile. ICLR. Cited by: §2, §3, §5.1.
  • [27] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu (2016) Asynchronous methods for deep reinforcement learning. In ICML, pp. 1928–1937. Cited by: §5.2, §5.3.
  • [28] J. Nash (1951) Non-cooperative games. Annals of mathematics, pp. 286–295. Cited by: §1.
  • [29] A. Nedić and A. Ozdaglar (2009) Subgradient methods for saddle-point problems. Journal of optimization theory and applications. Cited by: §1, §2, §3.
  • [30] A. Nowé, P. Vrancx, and Y. De Hauwere (2012) Game theory and multi-agent reinforcement learning. Reinforcement Learning, pp. 441. Cited by: §1, §1, §2.
  • [31] S. Rakhlin and K. Sridharan (2013) Optimization, learning, and games with predictable sequences. In Advances in Neural Information Processing Systems, pp. 3066–3074. Cited by: §2.
  • [32] J. Schulman, P. Moritz, S. Levine, et al. (2016) High-dimensional continuous control using generalized advantage estimation. ICLR. Cited by: §5.2, §5.3, §5.4.
  • [33] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov (2017) Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347. Cited by: §4, §5.4.
  • [34] L. S. Shapley (1953) Stochastic games. PNAS 39 (10), pp. 1095–1100. Cited by: §3.
  • [35] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. (2016) Mastering the game of go with deep neural networks and tree search. Nature 529 (7587), pp. 484. Cited by: §1, §1, §1, §1, §2, §2, §3.
  • [36] D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel, et al. (2018) A general reinforcement learning algorithm that masters chess, shogi, and go through self-play. Science 362 (6419), pp. 1140–1144. Cited by: §1, §1, item 2.
  • [37] D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, et al. (2017) Mastering the game of go without human knowledge. Nature 550 (7676), pp. 354. Cited by: §1, §1, §1, §1, item 2.
  • [38] S. Singh, M. Kearns, and Y. Mansour (2000) Nash convergence of gradient dynamics in general-sum games. UAI. Cited by: §1.
  • [39] R. Sutton and A. Barto (2018) Reinforcement learning: an introduction. MIT press. Cited by: §2, §3.
  • [40] G. Tesauro (1995) Temporal difference learning and td-gammon. Communications of the ACM 38 (3), pp. 58–68. Cited by: §2.
  • [41] E. Todorov, T. Erez, and Y. Tassa (2012) Mujoco: a physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033. Cited by: §5.4.
  • [42] O. Vinyals, I. Babuschkin, W. M. Czarnecki, M. Mathieu, A. Dudzik, J. Chung, D. H. Choi, R. Powell, T. Ewalds, P. Georgiev, et al. (2019) Grandmaster level in starcraft ii using multi-agent reinforcement learning. Nature 575 (7782), pp. 350–354. Cited by: §1, §1.
  • [43] R. J. Williams (1992) Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning 8 (3-4), pp. 229–256. Cited by: §5.1.
  • [44] M. Zinkevich, M. Johanson, M. Bowling, and C. Piccione (2008) Regret minimization in games with incomplete information. In Advances in neural information processing systems, pp. 1729–1736. Cited by: §1, §2.

Appendix A Experiment details

A.1 Illustrations of the games in the experiments

Properties of the three games (see Fig. 8, 9, and 10 for illustrations):

Grid-world soccer (Fig. 8).
Observation: a tensor encoding the two teams' player positions and whether team A has the ball.
Action: discrete moves (up, down, left, right).
Time limit: 50 moves.
Terminal reward: +1 for the winning team, -1 for the losing team, 0 if timeout.

Gomoku (Fig. 9).
Observation: a tensor over the 9x9 board; in the last dimension, 0: vacant, 1: black, 2: white.
Action: any valid location on the 9x9 board.
Time limit: 41 moves per player.
Terminal reward: +1 for the winning player, -1 for the losing player, 0 if timeout.

RoboSumo Ants (Fig. 10).
Observation: a continuous vector (the policy input; see Tab. 5).
Action: a continuous vector.
Time limit: 100 moves.
Reward: terminal reward plus shaping rewards (see Fig. 10).
Figure 8: Illustration of the 6x9 grid-world soccer game. Red and blue represent the two teams A and B. At the start, the players are initialized to random positions on their respective sides, and the ball is randomly assigned to one team. Players move up, down, left, and right. Once a player scores a goal, the corresponding team wins and the game ends. One player can intercept the other's ball by crossing the other player.
Figure 9: Illustration of the Gomoku game (also known as Renju or five-in-a-row). We study the 9x9 board variant. Two players sequentially place black and white stones on the board; black goes first. A player wins upon getting five stones in a row. In this illustration, black wins because there are five consecutive black stones in the 5th row. The numbers in the stones indicate the order in which they were placed.
Figure 10: Illustration of the RoboSumo Ants game. Two ants fight in the arena. The goal is to push the opponent out of the arena or down to the floor. Agent positions are initialized to be random at the start of the game. The game ends in a draw if the time limit is reached. In addition to the terminal reward, the environment comes with shaping rewards (motion bonus, closeness to opponent, etc.). In order to make the game zero-sum, we take the difference between the original rewards of the two ants.
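
A minimal sketch of this reward symmetrization (ours; which ant is treated as Player 2 is an arbitrary choice here):

```python
def zero_sum_rewards(r1, r2):
    """Make the RoboSumo rewards exactly zero-sum by taking their difference.

    r1, r2 are the two ants' original (shaped) rewards at a step; Player 2's
    payoff is r2 - r1 and Player 1 receives its negation.
    """
    f = r2 - r1
    return -f, f
```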

A.2 Hyper-parameters

The hyper-parameters in different games are listed in Tab. 5.

Hyper-param (Soccer / Gomoku / RoboSumo):
Num. of iterations: 50 / 40 / 50
Learning rate: 0.1 / warmed up from 0 to 0.001 over the first 20 steps, then 0.001 / 3e-5, linearly annealed to 0
Value func. learning rate: (same as above) / (same as above) / 9e-5
Sample size: 32 / 32 / 500
Num. of inner updates: 10 / 10 / 10
Env. time limit: 50 / 41 per player / 100
Base RL algorithm: A2C / A2C / PPO (clip 0.2, minibatch 512, 3 epochs)
Optimizer: RmsProp / RmsProp / RmsProp
Max gradient norm: 1.0 / 1.0 / 0.1
GAE parameter: 0.95 / 0.95 / 0.98
Discounting factor: 0.97 / 0.97 / 0.995
Entropy bonus coef.: 0.01 / 0.01 / 0
Policy function:
  Soccer: Sequential[ OneHot[5832], Linear[5832,5], Softmax, CategoricalDist ]
  Gomoku: Sequential[ Conv[c16,k5,p2], ReLU, Conv[c32,k5,p2], ReLU, Conv[c64,k3,p1], ReLU, Conv[c1,k1], Spatial Softmax, CategoricalDist ]
  RoboSumo: Sequential[ Linear[120,64], TanH, Linear[64,64], TanH, Linear[64,8], TanH, GaussianDist ] (TanH keeps the Gaussian mean between -1 and 1; the density is corrected accordingly.)
Value function:
  Soccer: Sequential[ OneHot[5832], Linear[5832,1] ]
  Gomoku: shares the 3 Conv layers with the policy, with an additional head: global average pooling and Linear[64,1]
  RoboSumo: Sequential[ Linear[120,64], TanH, Linear[64,64], TanH, Linear[64,1] ]

Table 5: Hyper-parameters.

A.3 Additional results

Win-rates (or average rewards).

Here we report additional results in terms of average win-rates, or equivalently average rewards under a linear transform, in Tab. 6 and 7. Since we treat each pair $(x_i, y_i)$ as one agent, the values in the first table are the average of $x$'s and $y$'s win-rates; the one-sided win-rates are in the second table. Means and 95% confidence intervals are estimated from multiple runs; the exact numbers of runs are given in the captions of Fig. 5, 6, and 7 of the main paper. The message is the same as that suggested by the Elo scores: our method consistently produces stronger agents. We hope the win-rates give better intuition about the relative performance of the different methods.

(a) Soccer
Soccer Self-play latest Self-play best Self-play rand Ours (n=2) Ours (n=4) Ours (n=6)
Self-play latest -
Self-play best -
Self-play rand -
Ours (n=2) -
Ours (n=4) -
Ours (n=6) -
Last-iter average
Overall average
(b) Gomoku
Gomoku Self-play latest Self-play best Self-play rand Ours (n=2) Ours (n=4) Ours (n=6)
Self-play latest -
Self-play best -
Self-play rand -
Ours (n=2) -
Ours (n=4) -
Ours (n=6) -
Last-iter average
Overall average
(c) RoboSumo
RoboSumo Self-play latest Self-play best Self-play rand Ours (n=4) Ours (n=8)
Self-play latest -
Self-play best -
Self-play rand -
Ours (n=4) -
Ours (n=8) -
Last-iter average
Overall average
Table 6: Average win-rates between the last-iterate (final) agents trained by different algorithms. The last two rows further show the average over the other last-iterate agents and over all other agents (historical checkpoints) included in the tournament, respectively. Since an agent consists of an $(x, y)$ pair, the win-rate is averaged over the two roles ($x$ against the opponents' $y$, and $y$ against the opponents' $x$). The lower the better within each column; the higher the better within each row.
(a) Soccer
row \ col Self-play latest Self-play best Self-play rand Ours (n=2) Ours (n=4) Ours (n=6)
Self-play latest
Self-play best
Self-play rand
Ours (n=2)
Ours (n=4)
Ours (n=6)
Last-iter average
Overall average
(b) Gomoku
row \ col Self-play latest Self-play best Self-play rand Ours (n=2) Ours (n=4) Ours (n=6)
Self-play latest
Self-play best
Self-play rand
Ours (n=2)
Ours (n=4)
Ours (n=6)
Last-iter average
Overall average
(c) RoboSumo
row \ col Self-play latest Self-play best Self-play rand Ours (n=4) Ours (n=8)
Self-play latest
Self-play best
Self-play rand
Ours (n=4)
Ours (n=8)
Last-iter average
Overall average
Table 7: Average one-sided win-rates between the last-iterate (final) agents trained by different algorithms. The win-rate is one-sided, i.e., not averaged over the two roles. The lower the better within each column; the higher the better within each row.

Training time.

Thanks to the ease of parallelization, the proposed algorithm enjoys good scalability. We can either distribute the agents into processes that run concurrently, or parallelize the rollouts; our implementation took the latter approach. In the most time-consuming RoboSumo Ants experiment, with 30 Intel Xeon CPUs, the baseline methods took approximately 2.4h, while Ours (n=4) took 10.83h to train (about 4.5 times) and Ours (n=8) took 20.75h (about 8.6 times). Note that Ours (n) trains $n$ agents simultaneously. If we were to train $n$ agents with a baseline method by repeating the experiment $n$ times, the time would be about $2.4n$ hours, which is comparable to Ours (n).

Chance of selecting the agent itself as opponent.

One big difference between our method and the compared baselines is the ability to select opponents adversarially from the population. Consider the agent pair $(x_i, y_i)$. When training $x_i$, our method finds the strongest opponent (the one that incurs the largest loss on $x_i$) from the population, whereas the baselines always choose (possibly past versions of) $y_i$. Since the candidate set contains $y_i$, the "fall-back" case is to use $y_i$ as the opponent in our method. We report the frequency with which $y_i$ is chosen as the opponent for $x_i$ (and $x_i$ for $y_i$, likewise). This gives a sense of how often our method falls back to the baseline method. From Tab. 8, we can observe that, as $n$ grows larger, the chance of fall-back decreases. This is understandable since a larger population means larger candidate sets and a larger chance of finding good perturbations.

Method Ours (n=2) Ours (n=4) Ours (n=6)
Frequency of self (Soccer)
Frequency of self (Gomoku)
Table 8: Average frequency of using the agent itself as the opponent, in the Soccer and Gomoku experiments. The frequency is calculated by counting over all agents and iterations. The ± shows the standard deviations estimated from 3 runs with different random seeds.

Appendix B Proofs

We adopt the following variant of Alg. 1 in our asymptotic convergence analysis. For clarity, we investigate the learning process of one agent in the population and drop the index $i$. The candidate sets $\mathcal{X}^t$ and $\mathcal{Y}^t$ are not set simply to the population, for the sake of the proof; instead, we pose some assumptions on them. Setting them to the population as in the main text may approximately satisfy these assumptions.

Input: $\eta_t$: learning rates, $m_t$: sample size;
Result: Pair of policies $(x, y)$;
1 Initialize $(x^1, y^1)$;
2 for $t = 1, 2, \dots$ do
3        Construct candidate opponent sets $\mathcal{X}^t$ and $\mathcal{Y}^t$;
4        Find perturbed $v^t = \arg\max_{y \in \mathcal{Y}^t} \hat{f}(x^t, y)$ and perturbed $u^t = \arg\min_{x \in \mathcal{X}^t} \hat{f}(x, y^t)$, where the evaluation is done with Eq. 4 and sample size $m_t$;
5        Compute the estimated duality gap $\hat{g}^t = \hat{f}(x^t, v^t) - \hat{f}(u^t, y^t)$;
6        if $\hat{g}^t \le \epsilon$ then
7               return $(x^t, y^t)$
8       Estimate policy gradients $\hat{\nabla}_x f(x^t, v^t)$ and $\hat{\nabla}_y f(u^t, y^t)$ w/ Eq. 5 and sample size $m_t$;
9        Update policy parameters with $x^{t+1} \leftarrow x^t - \eta_t \hat{\nabla}_x f(x^t, v^t)$ and $y^{t+1} \leftarrow y^t + \eta_t \hat{\nabla}_y f(u^t, y^t)$;
10
Algorithm 2 Simplified perturbation-based self-play policy optimization of one agent.

B.1 Proof of Theorem 1

We restate the assumptions and the theorem here more clearly for reference.

Assumption B.1.

$X$ and $Y$ are compact sets. As a consequence, there exists $D > 0$ s.t. $\|x\| \le D$ and $\|y\| \le D$ for all $x \in X$ and $y \in Y$.

Further, assume $f$ is a bounded convex-concave function.

Assumption B.2.

$\mathcal{X}^t$ and $\mathcal{Y}^t$ are compact subsets of $X$ and $Y$. Assume that a sequence $(x^t, y^t) \to (\bar{x}, \bar{y})$ with $f(x^t, v^t) - f(u^t, y^t) \to 0$ for some $v^t \in \mathcal{Y}^t$ and $u^t \in \mathcal{X}^t$ implies $(\bar{x}, \bar{y})$ is a saddle point.

Theorem 1 (Convergence with exact gradients [21]).

Under Assump. B.1 and B.2, and with the learning rates $\eta_t$ chosen as in [21], Alg. 2 (when replacing all estimates with true values) produces a sequence of points convergent to a saddle point.

Assump. B.1 is standard; it holds, e.g., if $f$ is given by a payoff table and $X, Y$ are probability simplices as in matrix games, or if $f$ is quadratic and $X, Y$ are sets of unit-norm vectors. Assump. B.2 concerns the regularity of the candidate opponent sets: it holds if $\mathcal{X}^t, \mathcal{Y}^t$ are compact and the duality gap vanishes only at a saddle point. A trivial example is $\mathcal{X}^t = X$ and $\mathcal{Y}^t = Y$; another example is the proximal regions around the current iterates. In practice, Alg. 1 constructs the candidate sets from the population, which needs to be adequately large and diverse to satisfy Assump. B.2 approximately.

The proof is due to [21], which we paraphrase here.

Proof.

We shall prove that one iteration of Alg. 2 decreases the distance between the current $(x^t, y^t)$ and an optimal $(x^*, y^*)$. Expand the squared distance,

$\|x^{t+1} - x^*\|^2 = \|x^t - \eta_t \nabla_x f(x^t, v^t) - x^*\|^2 = \|x^t - x^*\|^2 - 2\eta_t \langle \nabla_x f(x^t, v^t), x^t - x^* \rangle + \eta_t^2 \|\nabla_x f(x^t, v^t)\|^2.$   (6)

From Assump. B.1, convexity of $f(\cdot, v^t)$ on $X$ gives

$f(x^*, v^t) \ge f(x^t, v^t) + \langle \nabla_x f(x^t, v^t), x^* - x^t \rangle,$   (7)

which yields

$\langle \nabla_x f(x^t, v^t), x^t - x^* \rangle \ge f(x^t, v^t) - f(x^*, v^t).$   (8)

Similarly for $y$, concavity of $f(u^t, \cdot)$ on $Y$ gives