1 Introduction
Recent progress in reinforcement learning (RL) has achieved very impressive results, ranging from playing games (Mnih et al., 2015; Silver et al., 2016), to applications in dialogue systems (Li et al., 2016), to robotics (Levine et al., 2016). Despite this progress, the learning algorithms for solving many of these tasks are designed to deal with stationary environments. The real world, on the other hand, is often nonstationary, whether due to complexity (Sutton et al., 2007), changes in the dynamics or the objectives of the environment over the lifetime of a system (Thrun, 1998), or the presence of multiple learning actors (Lowe et al., 2017; Foerster et al., 2017a). Nonstationarity breaks the standard assumptions and requires agents to continuously adapt, both at training and execution time, in order to succeed.
Learning under nonstationary conditions is challenging. The classical approaches to dealing with nonstationarity are usually based on context detection (Da Silva et al., 2006) and tracking (Sutton et al., 2007), i.e., reacting to changes that have already happened in the environment by continuously fine-tuning the policy. Unfortunately, modern deep RL algorithms, while able to achieve superhuman performance on certain tasks, are known to be sample inefficient. At the same time, nonstationarity allows only for limited interaction before the properties of the environment change. Thus, it immediately puts learning into the few-shot regime and often renders simple fine-tuning methods impractical.
A nonstationary environment can be seen as a sequence of stationary tasks, and hence we propose to tackle it as a multi-task learning problem (Caruana, 1998). The learning-to-learn (or meta-learning) approaches (Schmidhuber, 1987; Thrun & Pratt, 1998) are particularly appealing in the few-shot regime, as they produce flexible learning rules that can generalize from only a handful of examples. Meta-learning has shown promising results in the supervised domain and has gained a lot of attention from the research community recently (e.g., Santoro et al., 2016; Ravi & Larochelle, 2016). In this paper, we develop a gradient-based meta-learning algorithm similar to that of Finn et al. (2017b) and suitable for continuous adaptation of RL agents in nonstationary environments. More concretely, our agents meta-learn to anticipate the changes in the environment and update their policies accordingly.
While virtually any change in an environment could induce nonstationarity (e.g., changes in the physics or in the characteristics of the agent), environments with multiple agents are particularly challenging due to the complexity of the emergent behavior, and they are of practical interest, with applications ranging from multiplayer games (Peng et al., 2017) to coordinating self-driving fleets (Cao et al., 2013). Multi-agent environments are nonstationary from the perspective of any individual agent, since all actors are learning and changing concurrently (Lowe et al., 2017). In this paper, we consider the problem of continuous adaptation to a learning opponent in a competitive multi-agent setting.
To this end, we design RoboSumo—a 3D environment with simulated physics that allows pairs of agents to compete against each other. To test continuous adaptation, we introduce iterated adaptation games—a new setting in which a trained agent competes against the same opponent for multiple rounds of a repeated game, while both are allowed to update their policies and change their behaviors between the rounds. In such iterated games, from the agent's perspective, the environment changes from round to round, and the agent ought to adapt in order to win the game. Additionally, the competitive component of the environment makes it not only nonstationary but also adversarial, which provides a natural training curriculum and encourages learning robust strategies (Bansal et al., 2018).
We evaluate our meta-learning agents along with a number of baselines on a (single-agent) locomotion task with handcrafted nonstationarity and on iterated adaptation games in RoboSumo. Our results demonstrate that meta-learned strategies clearly dominate other adaptation methods in the few-shot regime in both single- and multi-agent settings. Finally, we carry out a large-scale experiment in which we train a diverse population of agents with different morphologies, policy architectures, and adaptation methods, and make them interact by competing against each other in iterated games. We evaluate the agents based on their TrueSkill scores (Herbrich et al., 2007) in these games, and we also evolve the population as a whole for a few generations—the agents that lose disappear, while the winners get duplicated. Our results suggest that the agents with meta-learned adaptation strategies end up being the fittest. Videos that demonstrate adaptation behaviors are available at https://goo.gl/tboqaN.
2 Related Work
The problem of continuous adaptation considered in this work is a variant of continual learning (Ring, 1994; 1997) and is related to lifelong (Thrun & Pratt, 1998; Silver et al., 2013) and never-ending (Mitchell et al., 2015) learning. Lifelong learning systems aim at solving multiple tasks sequentially by efficiently transferring and utilizing knowledge from already learned tasks to new tasks, while minimizing the effect of catastrophic forgetting (McCloskey & Cohen, 1989). Never-ending learning is concerned with mastering a fixed set of tasks in iterations, where the set keeps growing and the performance on all the tasks in the set keeps improving from iteration to iteration.
The scope of continuous adaptation is narrower and more precise. While lifelong and never-ending learning settings are defined as general multi-task problems (Silver et al., 2013; Mitchell et al., 2015), continuous adaptation targets a single but nonstationary task or environment. In the former two problems, the nonstationarity is dictated by the selected sequence of tasks. In the latter case, we assume that nonstationarity is caused by some underlying dynamics in the properties of the given task in the first place (e.g., changes in the behavior of other agents in a multi-agent setting). Finally, in the lifelong and never-ending scenarios, the boundary between training and execution is blurred, as such systems constantly operate in the training regime. Continuous adaptation, on the other hand, expects a (potentially trained) agent to adapt to the changes in the environment at execution time, under the pressure of limited data or interaction experience between the changes. (The limited-interaction aspect of continuous adaptation makes the problem somewhat similar to the recently proposed lifelong few-shot learning (Finn et al., 2017a).)
Nonstationarity of multi-agent environments is a well-known issue that has been extensively studied in the context of learning in simple multiplayer iterated games (such as rock-paper-scissors), where each episode is a one-shot interaction (Singh et al., 2000; Bowling, 2005; Conitzer & Sandholm, 2007). In such games, discovering and converging to a Nash equilibrium strategy is a success for the learning agents. Modeling and exploiting opponents (Zhang & Lesser, 2010; Mealing & Shapiro, 2013), or even their learning processes (Foerster et al., 2017b), is advantageous, as it improves convergence or helps to discover equilibria with certain properties (e.g., it leads to cooperative behavior). In contrast, each episode in RoboSumo consists of multiple steps, happens in continuous time, and requires learning a good intra-episodic controller. Finding Nash equilibria in such a setting is hard. Thus, fast adaptation becomes one of the few viable strategies against changing opponents.
Our proposed method for continuous adaptation follows the general meta-learning paradigm (Schmidhuber, 1987; Thrun & Pratt, 1998), i.e., it learns a high-level procedure that can be used to generate a good policy each time the environment changes. There is a wealth of work on meta-learning, including methods for learning update rules for neural models explored in the past (Bengio et al., 1990; 1992; Schmidhuber, 1992), and more recent approaches that focus on learning optimizers for deep networks (Hochreiter et al., 2001; Andrychowicz et al., 2016; Li & Malik, 2016; Ravi & Larochelle, 2016), generating model parameters (Ha et al., 2016; Edwards & Storkey, 2016; Al-Shedivat et al., 2017), learning task embeddings (Vinyals et al., 2016; Snell et al., 2017), including memory-based approaches (Santoro et al., 2016), learning to learn implicitly via RL (Wang et al., 2016; Duan et al., 2016), or simply learning a good initialization (Finn et al., 2017b).
3 Method
The problem of continuous adaptation in nonstationary environments immediately puts learning into the few-shot regime: the agent must learn from only a limited amount of experience that it can collect before its environment changes. Therefore, we build our method upon previous work on gradient-based model-agnostic meta-learning (MAML), which has been shown successful in few-shot settings (Finn et al., 2017b). In this section, we re-derive MAML for multi-task reinforcement learning from a probabilistic perspective (cf. Grant et al., 2018) and then extend it to dynamically changing tasks.
3.1 A probabilistic view of model-agnostic meta-learning (MAML)
Assume that we are given a distribution over tasks, D(T), where each task, T, is a tuple:

    T := (L_T, P_T(x), P_T(x_{t+1} | x_t, a_t), H)    (1)

L_T is a task-specific loss function that maps a trajectory, τ := (x_0, a_1, x_1, ..., a_H, x_H), to a loss value, i.e., L_T : τ ↦ ℝ; P_T(x) and P_T(x_{t+1} | x_t, a_t) define the Markovian dynamics of the environment in task T; H denotes the horizon; observations, x_t, and actions, a_t, are elements (typically, vectors) of the observation space, X, and the action space, A, respectively.
The loss of a trajectory, τ, is the negative cumulative reward, L_T(τ) := −Σ_{t=1}^{H} R_T(x_t).

[Figure 1: (a) The probabilistic model of MAML in the multi-task RL setting: the task, policies, and trajectories are all random variables with dependencies encoded in the edges of the given graph. (b) Our extended model, suitable for continuous adaptation to a task changing dynamically due to nonstationarity of the environment. The policy and trajectories at a previous step are used to construct a new policy for the current step. (c) The computation graph for the meta-update from φ_i to φ_{i+1}. Boxes represent replicas of the policy graphs with the specified parameters. The model is optimized via truncated backpropagation through time starting from the loss on task T_{i+1}.]

The goal of meta-learning is to find a procedure which, given access to a limited experience on a task sampled from D(T), can produce a good policy for solving it. More formally, after querying K trajectories from a task T under policy π_θ, denoted τ_θ^{1:K}, we would like to construct a new, task-specific policy, π_φ, that would minimize the expected subsequent loss on the task T. In particular, MAML constructs the parameters of the task-specific policy, φ, using the gradient of L_T w.r.t. θ:
    φ := θ − α ∇_θ L_T(τ_θ^{1:K}),  where  L_T(τ_θ^{1:K}) := (1/K) Σ_{k=1}^{K} L_T(τ_θ^k)    (2)
We call (2) the adaptation update with step size α. The adaptation update is parametrized by θ, which we optimize by minimizing the expected loss over the distribution of tasks, D(T)—the meta-loss:
    min_θ E_{T ∼ D(T)} [L_T(θ)],  where  L_T(θ) := E_{τ_θ^{1:K} ∼ P_T(τ | θ)} E_{τ_φ ∼ P_T(τ | φ)} [L_T(τ_φ)]    (3)
where τ_θ^{1:K} and τ_φ are trajectories obtained under π_θ and π_φ, respectively.
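As a minimal illustration of the adaptation update (2), the sketch below applies one gradient step on a toy quadratic loss that stands in for the trajectory loss L_T (the loss, step size, and helper names here are ours, not the paper's):

```python
import numpy as np

# One MAML-style adaptation step: phi = theta - alpha * grad L(theta).
# Toy per-task loss: L(params) = 0.5 * ||params - c||^2, whose gradient is (params - c).
def adapt(theta, c_task, alpha=0.5):
    """One gradient step from the meta-parameters theta toward the task optimum c_task."""
    grad = theta - c_task          # gradient of 0.5 * ||theta - c_task||^2
    return theta - alpha * grad    # task-specific parameters phi

def loss(params, c):
    return 0.5 * np.sum((params - c) ** 2)

theta = np.zeros(3)                      # meta-learned initialization
c_task = np.array([1.0, -2.0, 0.5])      # optimum of this particular task
phi = adapt(theta, c_task)

assert loss(phi, c_task) < loss(theta, c_task)   # the adapted policy has lower task loss
```

The same structure carries over to RL, where the analytic gradient is replaced by a policy-gradient estimate computed from the K sampled trajectories.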
In general, we can think of the tasks, trajectories, and policies as random variables (Fig. 1a), where φ is generated from some conditional distribution P(φ | θ, τ_θ^{1:K}). The adaptation update (2) is equivalent to assuming the delta distribution, P(φ | θ, τ_θ^{1:K}) := δ(θ − α ∇_θ L_T(τ_θ^{1:K})). (Grant et al. (2018) similarly reinterpret adaptation updates, in non-RL settings, as Bayesian inference.) To optimize (3), we can use the policy gradient method (Williams, 1992), where the gradient of L_T(θ) is as follows:

    ∇_θ L_T(θ) = E_{τ_θ^{1:K} ∼ P_T(τ | θ), τ_φ ∼ P_T(τ | φ)} [ L_T(τ_φ) ( ∇_θ log π_φ(τ_φ) + Σ_{k=1}^{K} ∇_θ log π_θ(τ_θ^k) ) ]    (4)
The expected loss on a task, L_T(θ), can be optimized with the trust-region policy optimization (TRPO) (Schulman et al., 2015a) or proximal policy optimization (PPO) (Schulman et al., 2017) methods. For details and derivations, please refer to Appendix A.
3.2 Continuous adaptation via meta-learning
In the classical multi-task setting, we make no assumptions about the distribution of tasks, D(T). When the environment is nonstationary, we can see it as a sequence of stationary tasks on a certain timescale, where the tasks correspond to different dynamics of the environment. Then, D(T) is defined by the environment changes, and the tasks become sequentially dependent. Hence, we would like to exploit this dependence between consecutive tasks and meta-learn a rule that keeps updating the policy in a way that minimizes the total expected loss encountered during the interaction with the changing environment. For instance, in the multi-agent setting, when playing against an opponent that changes its strategy incrementally (e.g., due to learning), our agent should ideally meta-learn to anticipate the changes and update its policy accordingly.
In probabilistic language, our nonstationary environment is equivalent to a distribution of tasks represented by a Markov chain (Fig. 1b). The goal is to minimize the expected loss over a chain of tasks of some length L:

    min_θ E_{P(T_0), P(T_{i+1} | T_i)} [ Σ_{i=0}^{L−1} L_{T_i, T_{i+1}}(θ) ]    (5)
Here, P(T_0) and P(T_{i+1} | T_i) denote the initial and the transition probabilities in the Markov chain of tasks. Note that (i) we deal with Markovian dynamics on two levels of hierarchy, where the upper level is the dynamics of the tasks and the lower level is the MDPs that represent the particular tasks, and (ii) the objectives, L_{T_i, T_{i+1}}(θ), will depend on the way the meta-learning process is defined. Since we are interested in adaptation updates that are optimal with respect to the Markovian transitions between the tasks, we define the meta-loss on a pair of consecutive tasks as follows:

    L_{T_i, T_{i+1}}(θ) := E_{τ_{i,θ}^{1:K} ∼ P_{T_i}(τ | θ)} E_{τ_{i+1,φ} ∼ P_{T_{i+1}}(τ | φ)} [ L_{T_{i+1}}(τ_{i+1,φ}) ]    (6)
The principal difference between the losses in (3) and (6) is that the trajectories τ_{i,θ}^{1:K} come from the current task, T_i, and are used to construct a policy, π_φ, that is good for the upcoming task, T_{i+1}. Note that even though the policy parameters, φ_i, are sequentially dependent (Fig. 1b), in (6) we always start from the initial parameters, θ. (This is due to stability considerations: we find empirically that optimizing over sequential updates from φ_i to φ_{i+1} is unstable and often tends to diverge, while starting from the same initialization leads to better behavior.) Hence, optimizing L_{T_i, T_{i+1}}(θ) is equivalent to truncated backpropagation through time with a unit lag in the chain of tasks.
To construct the parameters of the policy for task T_{i+1}, we start from θ and do multiple meta-gradient steps with adaptive step sizes. (Empirically, constructing φ via multiple meta-gradient steps—between 2 and 5—with adaptive step sizes tends to yield better results in practice.) Assuming the number of steps is M:

    φ_i^0 := θ,  τ_{i,θ}^{1:K} ∼ P_{T_i}(τ | θ),
    φ_i^m := φ_i^{m−1} − α_m ∇_{φ_i^{m−1}} L_{T_i}(τ_{i,θ}^{1:K}),  m = 1, ..., M − 1,
    φ_{i+1} := φ_i^{M−1} − α_M ∇_{φ_i^{M−1}} L_{T_i}(τ_{i,θ}^{1:K})    (7)
where {α_m}_{m=1}^{M} is a set of meta-gradient step sizes that are optimized jointly with θ. The computation graph for the meta-update is given in Fig. 1c. The expression for the policy gradient is the same as in (4), but with the expectation now taken w.r.t. both T_i and T_{i+1}:

    ∇_{θ,α} L_{T_i, T_{i+1}}(θ, α) = E_{τ_{i,θ}^{1:K} ∼ P_{T_i}(τ | θ), τ_{i+1,φ} ∼ P_{T_{i+1}}(τ | φ)} [ L_{T_{i+1}}(τ_{i+1,φ}) ( ∇_{θ,α} log π_φ(τ_{i+1,φ}) + Σ_{k=1}^{K} ∇_{θ,α} log π_θ(τ_{i,θ}^k) ) ]    (8)
More details and the analog of the policy gradient theorem for our setting are given in Appendix A.
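The multi-step adaptation in (7) can be sketched on the same toy quadratic loss as before, with hand-picked step sizes standing in for the meta-learned α (all names and values here are illustrative, not the paper's):

```python
import numpy as np

# Sketch of (7): starting from the meta-learned theta, take M gradient steps on
# the loss of task T_i, each with its own step size alphas[m]. Toy task loss:
# 0.5 * ||phi - c||^2, with gradient (phi - c).
def multi_step_adapt(theta, c, alphas):
    phi = np.array(theta, dtype=float)
    for alpha_m in alphas:              # M steps; the paper uses between 2 and 5
        phi = phi - alpha_m * (phi - c)
    return phi

theta = np.zeros(2)
c = np.array([2.0, -1.0])               # optimum of the toy task loss
phi = multi_step_adapt(theta, c, alphas=[0.5, 0.3, 0.2])
```

In the actual method, each step differentiates a policy-gradient surrogate of L_{T_i} evaluated on the same trajectories τ_{i,θ}^{1:K}, and the whole chain of steps is itself differentiated to train θ and α.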
Note that computing the adaptation updates requires interacting with the environment under π_θ, while computing the meta-loss, L_{T_i, T_{i+1}}, requires using π_φ, and hence interacting with each task in the sequence twice. This is often impossible at execution time, and hence we use slightly different algorithms at training and execution times.
Meta-learning at training time. Once we have access to a distribution over pairs of consecutive tasks, P(T_i, T_{i+1}) (given a sequence of tasks generated by a nonstationary environment, T_0, T_1, T_2, ..., we use the set of all pairs of consecutive tasks, {(T_i, T_{i+1})}, as the training distribution), we can meta-learn the adaptation updates by optimizing θ and α jointly with a gradient method, as given in Algorithm 1. We use π_θ to collect trajectories from T_i and π_φ when interacting with T_{i+1}. Intuitively, the algorithm searches for θ and α such that the adaptation update (7) computed on the trajectories from T_i brings us to a policy, π_φ, that is good for solving T_{i+1}. The main assumption here is that the trajectories from T_i contain some information about T_{i+1}. Note that we treat the adaptation steps as part of the computation graph (Fig. 1c) and optimize θ and α via backpropagation through the entire graph, which requires computing second-order derivatives.
Adaptation at execution time. Note that to compute unbiased adaptation gradients at training time, we have to collect experience in T_i using π_θ. At test time, due to environment nonstationarity, we usually do not have the luxury of accessing the same task multiple times. Thus, we keep acting according to π_φ and reuse past experience to compute updates of φ for each new incoming task (see Algorithm 2). To adjust for the fact that the past experience was collected under a policy different from π_θ, we use importance weight correction. In the case of a single-step meta-update, we have:
    φ_i := θ − α (1/K) Σ_{k=1}^{K} [ π_θ(τ_k) / π_{φ_{i−1}}(τ_k) ] ∇_θ L_{T_{i−1}}(τ_k),  τ_k ∼ P_{T_{i−1}}(τ | φ_{i−1})    (9)
where π_{φ_{i−1}} and π_{φ_i} are used to roll out in T_{i−1} and T_i, respectively. Extending the importance weight correction to multi-step updates is straightforward and simply requires adding importance weights to each of the intermediate steps in (7).
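The effect of the importance weights in (9) can be illustrated with a toy off-policy expectation: samples are drawn under a "behavior" distribution (standing in for π_{φ_{i−1}}) while the quantity of interest is defined under a "target" distribution (standing in for π_θ). The Gaussian setup below is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Behavior vs. target: samples come from the "old" policy (mean mu_phi) but we
# want an expectation under the "new" one (mean mu_theta); weights pi_theta/pi_phi fix this.
mu_theta, mu_phi, sigma = 0.5, 0.0, 1.0
x = rng.normal(mu_phi, sigma, size=200_000)        # "rollouts" under the behavior policy

def log_density(v, mu):
    return -0.5 * ((v - mu) / sigma) ** 2          # up to a constant that cancels in the ratio

w = np.exp(log_density(x, mu_theta) - log_density(x, mu_phi))   # importance weights
estimate = float(np.mean(w * x))   # estimates E_{x ~ N(mu_theta, 1)}[x] = mu_theta
```

In (9) the same ratio reweights the policy-gradient terms so that experience collected under π_{φ_{i−1}} yields an (asymptotically) unbiased update w.r.t. θ.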
4 Environments
We have designed a set of environments for testing different aspects of continuous adaptation methods in two scenarios: (i) simple environments that change from episode to episode according to some underlying dynamics, and (ii) a competitive multi-agent environment, RoboSumo, that allows different agents to play sequences of games against each other and keep adapting to incremental changes in each other's policies. All our environments are based on the MuJoCo physics simulator (Todorov et al., 2012), and all agents are simple multi-leg robots, as shown in Fig. 2a.
4.1 Dynamic
First, we consider the problem of robotic locomotion in a changing environment. We use a six-leg agent (Fig. 2b) that observes the absolute position and velocity of its body and the angles and velocities of its legs, and it acts by applying torques to its joints. The agent is rewarded proportionally to its moving speed in a fixed direction. To induce nonstationarity, we select a pair of the agent's legs and scale down the torques applied to the corresponding joints by a factor that linearly changes from 1 to 0 over the course of 7 episodes. In other words, during the first episode all legs are fully functional, while during the last episode the agent has two legs fully paralyzed (even though the policy can generate torques, they are multiplied by 0 before being passed to the environment). The goal of the agent is to learn to adapt from episode to episode by changing its gait, so that it is able to move with maximal speed in a given direction despite the changes in the environment (cf. Cully et al., 2015). Since there are 15 ways to select a pair of legs of a six-leg creature, this gives us 15 different nonstationary environments, which allows us to use a subset of these environments for training and a separate held-out set for testing. The training and testing procedures are described in the next section.
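The handcrafted nonstationarity above can be sketched as follows (the linear schedule matches the description; the helper names are ours):

```python
from itertools import combinations

# All ways to choose a pair of legs to cripple on a six-leg agent: C(6, 2) = 15.
leg_pairs = list(combinations(range(6), 2))   # 15 nonstationary environments

def torque_scale(episode, total_episodes=7):
    """Factor applied to the chosen pair's torques in a given 0-indexed episode:
    linearly decays from 1 (fully functional) to 0 (fully paralyzed)."""
    return 1.0 - episode / (total_episodes - 1)

scales = [torque_scale(e) for e in range(7)]   # 1.0 in episode 0, 0.0 in episode 6
```

Splitting `leg_pairs` into 12 training and 3 held-out environments reproduces the train/test protocol described in the next section.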
4.2 Competitive
Our multi-agent environment, RoboSumo, allows agents to compete in the 1-vs-1 regime following the standard sumo rules (to win, an agent has to push the opponent out of the ring or make the opponent's body touch the ground). We introduce three types of agents, Ant, Bug, and Spider, with different anatomies (Fig. 2a). During the game, each agent observes the positions of itself and the opponent, its own joint angles, the corresponding velocities, and the forces exerted on its own body (i.e., the equivalent of tactile senses). The action spaces are continuous.
Iterated adaptation games. To test adaptation, we define the iterated adaptation game (Fig. 3)—a game between a pair of agents that consists of multiple rounds, each of which consists of one or more fixed-length episodes (500 time steps each). The outcome of each round is either a win, a loss, or a draw. The agent that wins the majority of rounds (with at least a 5% margin) is declared the winner of the game. There are two distinguishing aspects of our setup. First, the agents are trained either via pure self-play or versus opponents from a fixed training collection; at test time, they face a new opponent from a testing collection. Second, the agents are allowed to learn (or adapt) at test time. In particular, an agent should exploit the fact that it plays against the same opponent for multiple consecutive rounds and try to adjust its behavior accordingly. Since the opponent may also be adapting, the setup allows us to test different continuous adaptation strategies, one versus the other.
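The outcome rule of an iterated adaptation game (majority of rounds with at least a 5% margin) can be sketched as:

```python
# Decide the winner of an iterated adaptation game from per-round outcomes.
# The winner must take the majority of rounds by more than a 5% margin.
def game_winner(round_outcomes, margin=0.05):
    """round_outcomes: list of 'A', 'B', or 'draw', one entry per round."""
    n = len(round_outcomes)
    win_rate_a = round_outcomes.count('A') / n
    win_rate_b = round_outcomes.count('B') / n
    if win_rate_a - win_rate_b > margin:
        return 'A'
    if win_rate_b - win_rate_a > margin:
        return 'B'
    return 'draw'

assert game_winner(['A'] * 60 + ['B'] * 40) == 'A'
assert game_winner(['A'] * 51 + ['B'] * 49) == 'draw'   # within the 5% margin
```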
Reward shaping. In RoboSumo, rewards are naturally sparse: the winner gets +2000, the loser is penalized −2000, and in case of a draw both opponents receive −1000 points. To encourage fast learning at the early stages of training, we shape the rewards given to the agents in the following way: the agent (i) gets reward for staying closer to the center of the ring, for moving towards the opponent, and for exerting forces on the opponent's body, and (ii) gets a penalty inversely proportional to the opponent's distance to the center of the ring. At test time, the agents continue to have access to the shaped reward and may use it to update their policies. Throughout our experiments, we use discounted rewards with a fixed discount factor. More details are in Appendix D.2.
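The shaping terms described above can be sketched as follows; the functional forms and the coefficients k1-k4 are hypothetical placeholders (the actual shaping is specified in Appendix D.2):

```python
import numpy as np

# Hedged sketch of the shaped per-step reward described above. The coefficients
# k1..k4 are hypothetical, not the paper's values.
def shaped_reward(agent_pos, opp_pos, force_on_opp, ring_center,
                  k1=1.0, k2=1.0, k3=1.0, k4=1.0):
    r = 0.0
    r -= k1 * np.linalg.norm(agent_pos - ring_center)            # stay near the center
    r -= k2 * np.linalg.norm(agent_pos - opp_pos)                # move toward the opponent
    r += k3 * force_on_opp                                       # exert force on the opponent
    r -= k4 / (np.linalg.norm(opp_pos - ring_center) + 1e-6)     # penalty when opponent holds the center
    return r

center = np.zeros(2)
# An agent at the center scores higher than the same agent far from it.
r_center = shaped_reward(np.zeros(2), np.array([2.0, 0.0]), 0.0, center)
r_away = shaped_reward(np.array([3.0, 0.0]), np.array([2.0, 0.0]), 0.0, center)
```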
Calibration. To study adaptation, we need a well-calibrated environment in which none of the agents has an initial advantage. To ensure balance, we increased the masses of the weaker agents (Ant and Spider) such that the win rates in games between one agent type and another in the non-adaptation regime became almost equal (for details on calibration, see Appendix D.3).
5 Experiments
Our goal is to test different adaptation strategies in the proposed nonstationary RL settings. However, it is known that the test-time behavior of an agent may depend strongly on a variety of factors besides the chosen adaptation method, including the training curriculum, the training algorithm, the policy class, etc. Hence, we first describe the precise setup that we use in our experiments to eliminate irrelevant factors and focus on the effects of adaptation. Most of the low-level details are deferred to the appendices. Video highlights of our experiments are available at https://goo.gl/tboqaN.
5.1 The setup
Policies. We consider 3 types of policy networks: (i) a 2-layer MLP, (ii) an embedding (i.e., a fully-connected layer replicated across the time dimension) followed by a 1-layer LSTM, and (iii) RL² (Duan et al., 2016) of the same architecture as (ii), which additionally takes the previous reward and done signals as inputs at each step, keeps the recurrent state throughout the entire interaction with a given environment (or opponent), and resets the state once the latter changes. For advantage functions, we use networks of the same structure as for the corresponding policies, with no parameter sharing between the two. Our meta-learning agents use the same policy and advantage function structures as the baselines and learn a 3-step meta-update with adaptive step sizes, as given in (7). Illustrations and details on the architectures are given in Appendix B.
Meta-learning. We compute meta-updates via gradients of the negative discounted rewards received during a number of previous interactions with the environment. At training time, meta-learners interact with the environment twice, first using the initial policy, π_θ, and then the meta-updated policy, π_φ. At test time, the agents are limited to interacting with the environment only once, and hence always act according to π_φ and compute meta-updates using importance-weight correction (see Sec. 3.2 and Algorithm 2). Additionally, to reduce the variance of the meta-updates at test time, the agents store the experience collected during the interaction with the test environment (together with the corresponding importance weights) in an experience buffer and keep reusing that experience to update φ as in (7). The size of the experience buffer is fixed to 3 episodes for nonstationary locomotion and 75 episodes for RoboSumo. More details are given in Appendix C.1.

Adaptation baselines. We consider the following three baseline strategies:
(i) naive (or no adaptation),
(ii) implicit adaptation via RL², and
(iii) adaptation via tracking (Sutton et al., 2007), which keeps doing PPO updates at execution time.
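The fixed-size experience buffer used by the meta-learners at test time can be sketched with a bounded deque (the class name and interface are ours):

```python
from collections import deque

# Fixed-size buffer of the most recent episodes (3 for locomotion, 75 for
# RoboSumo) together with their importance weights, reused for meta-updates.
class ExperienceBuffer:
    def __init__(self, max_episodes):
        self.episodes = deque(maxlen=max_episodes)   # oldest episodes are evicted

    def add(self, trajectory, importance_weight):
        self.episodes.append((trajectory, importance_weight))

    def all(self):
        return list(self.episodes)

buf = ExperienceBuffer(max_episodes=3)
for i in range(5):
    buf.add(trajectory=f"episode-{i}", importance_weight=1.0)

assert len(buf.all()) == 3    # only the 3 most recent episodes are retained
```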
Training in nonstationary locomotion. We train all methods on the same collection of nonstationary locomotion environments, constructed by choosing all possible pairs of legs whose joint torques are scaled, except for 3 pairs that are held out for testing (i.e., 12 training and 3 testing environments for the six-leg creature). The agents are trained on the environments concurrently, i.e., to compute a policy update, we roll out in all environments in parallel and then compute, aggregate, and average the gradients (for details, see Appendix C.2). LSTM policies retain their state over the course of 7 episodes in each environment. Meta-learning agents compute meta-updates for each nonstationary environment separately.
Training in RoboSumo. To ensure consistency of the training curriculum for all agents, we first pretrain a number of policies of each type for every agent type via pure self-play with the PPO algorithm (Schulman et al., 2017; Bansal et al., 2018). We snapshot and save versions of the pretrained policies at each iteration. This lets us train other agents to play against versions of the pretrained opponents at various stages of mastery. Next, we train the baselines and the meta-learning agents against the pool of pretrained opponents concurrently. (In competitive multi-agent environments, besides self-play, there are plenty of ways to train agents, e.g., training them in pairs against each other concurrently, or randomly matching and switching opponents every few iterations. We found that concurrent training often leads to an unbalanced population of agents that have been trained under vastly different curricula, which introduces spurious effects that interfere with our analysis of adaptation. Hence, we leave the study of adaptation in naturally emerging curricula in multi-agent settings to future work.) At each iteration we (a) randomly select an opponent from the training pool, (b) sample a version of the opponent's policy (this ensures that even when the opponent is strong, an under-trained version is sometimes selected, which allows the agent to learn to win at early stages), and (c) roll out against that opponent. All baseline policies are trained with PPO; meta-learners also use PPO as the outer loop for optimizing the θ and α parameters. We retain the states of the LSTM policies over the course of interaction with the same version of the same opponent and reset them each time the opponent version is updated. Similarly to the locomotion setup, meta-learners compute meta-updates for each opponent in the training pool separately. A more detailed description of the distributed training is given in Appendix C.2.
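The opponent-sampling scheme in steps (a)-(b) above can be sketched as follows (the pool contents and names are hypothetical):

```python
import random

random.seed(0)

# At each iteration: pick a random opponent from the training pool, then a
# random saved version of its policy, so under-trained versions also appear.
def sample_opponent(pool):
    """pool: dict mapping opponent name -> list of saved policy snapshots."""
    name = random.choice(list(pool))
    version = random.randrange(len(pool[name]))   # any stage of mastery
    return name, pool[name][version]

pool = {"ant": ["ant-v0", "ant-v1", "ant-v2"],
        "bug": ["bug-v0", "bug-v1"]}
name, snapshot = sample_opponent(pool)
```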
[Figure 4: Episodic rewards for 7 consecutive episodes in 3 held-out nonstationary locomotion environments. To evaluate adaptation strategies, we ran each of them in each environment for 7 episodes, followed by a full reset of the environment, policy, and meta-updates (repeated 50 times). Shaded regions are 95% confidence intervals. Best viewed in color.]
Experimental design. We design our experiments to answer the following questions:

(1) When the interaction with the environment before it changes is strictly limited to one or very few episodes, how do different adaptation methods behave in nonstationary locomotion and competitive multi-agent environments?

(2) What is the sample complexity of different methods, i.e., how many episodes does a method require to successfully adapt to the changes? We test this by controlling the amount of experience the agent is allowed to get from the same environment before it changes.

Additionally, we ask the following questions specific to the competitive multi-agent setting:

(3) Given a diverse population of agents that have been trained under the same curriculum, how do different adaptation methods rank in a competition versus each other?

(4) When the population of agents is evolved for several generations—such that the agents interact with each other via iterated adaptation games, and those that lose disappear while the winners get duplicated—what happens to the proportions of different agents in the population?
5.2 Adaptation in the few-shot regime and sample complexity
Few-shot adaptation in nonstationary locomotion environments. Having trained the baselines and meta-learning policies as described in Sec. 5.1, we selected 3 testing environments that correspond to disabling 3 different pairs of legs of the six-leg agent: the back, middle, and front legs. The results are presented in Fig. 4. We make three observations. First, during the very first episode, the meta-learned initial policy, π_θ, turns out to be suboptimal for the task (it underperforms compared to the other policies). However, after 1-2 episodes (and environment changes), it starts performing on par with the other policies. Second, by the 6th and 7th episodes, the meta-updated policies perform much better than the rest. Note that we use 3 gradient meta-updates for the adaptation of the meta-learners; the meta-updates are computed based on experience collected during the previous 2 episodes. Finally, tracking is not able to improve upon the baseline without adaptation and sometimes leads to even worse results.
Adaptation in RoboSumo under the few-shot constraint. To evaluate different adaptation methods in the competitive multi-agent setting consistently, we consider a variation of the iterated adaptation game in which the changes in the opponent's policies at test time are predetermined but unknown to the agents. In particular, we pretrain 3 opponents (1 of each type, Fig. 2a) with LSTM policies with PPO via self-play (the same way we pretrain the training pool of opponents, see Sec. 5.1) and snapshot their policies at each iteration. Next, we run iterated games between our trained agents, which use different adaptation algorithms, and the policy snapshots of the pretrained opponents. Crucially, the policy version of the opponent keeps increasing from round to round, as if it were training via self-play. (At the beginning of the iterated game, both the agents and their opponent start from version 700, i.e., from the policy obtained after 700 iterations (PPO epochs) of learning, to ensure that the initial policy is reasonable.) The agents have to keep adapting to increasingly more competent versions of the opponent (see Fig. 3). This setup allows us to test different adaptation strategies consistently against the same learning opponents.

The results are given in Fig. 5. We note that the meta-learned adaptation strategies, in most cases, are able to adapt and improve their win-rates within about 100 episodes of interaction with constantly improving opponents. On the other hand, the performance of the baselines often deteriorates during the rounds of the iterated games. Note that the pretrained opponents were observing 90 episodes of self-play per iteration, while the agents have access to only 3 episodes per round.
Sample complexity of adaptation in RoboSumo. Meta-learning helps to find an update suitable for fast, or few-shot, adaptation. However, how do different adaptation methods behave when more experience is available? To answer this question, we employ the same setup as before and vary the number of episodes per round in the iterated game from 3 to 90. Each iterated game is repeated 20 times, and we measure the win-rates during the last 25 rounds of the game.
The results are presented in Fig. 6. When the number of episodes per round goes above 50, adaptation via tracking technically turns into "learning at test time," and it is able to learn to compete against the self-trained opponents that it has never seen at training time. The meta-learned adaptation strategy performs nearly the same in both the few-shot and the standard regimes. This suggests that the meta-learned strategy acquires a particular bias at training time that allows it to perform better from limited experience but also limits its capacity for utilizing more data. Note that, by design, the meta-updates are fixed to only 3 gradient steps from θ with step sizes α (learned at training time), while tracking keeps updating the policy with PPO throughout the iterated game. Allowing for meta-updates that become more flexible with the availability of data could help to overcome this limitation. We leave this to future work.
5.3 Evaluation on the population level
Combining different adaptation strategies with different policies and agents of different morphologies puts us in a situation where we have a diverse population of agents which we would like to rank according to the level of their mastery in adaptation (or find the “fittest”). To do so, we employ TrueSkill (Herbrich et al., 2007), a metric similar to the Elo rating but more popular in 1-vs-1 competitive video games.
In this experiment, we consider a population of 105 trained agents: 3 agent types, 7 different policy and adaptation combinations, and 5 different stages of training (from 500 to 2000 training iterations). First, we assume that the initial distribution of any agent’s skill is $\mathcal{N}(25, (25/3)^2)$ and use the default skill distance that guarantees about a 76% chance of winning, $\beta = 25/6$ (the defaults of the TrueSkill implementation we used, http://trueskill.org/). Next, we randomly generate 1000 matches between pairs of opponents and let them adapt while competing with each other in 100-round iterated adaptation games (the states of the agents are reset before each game). After each game, we record the outcome and update our belief about the skill of the corresponding agents using the TrueSkill algorithm. The distributions of the skill for the agents of each type after 1000 iterated adaptation games between randomly selected players from the pool are visualized in Fig. 7.
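For illustration, here is a minimal pure-Python sketch of the two-player TrueSkill belief update for a decisive game, assuming no draw margin and no dynamics noise (the function and helper names are ours; the paper used the trueskill.org implementation):

```python
import math

def _phi(x):  # standard normal pdf
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def _Phi(x):  # standard normal cdf
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def trueskill_1v1_update(winner, loser, beta=25.0 / 6):
    """One TrueSkill update for a decisive 1-vs-1 game.
    `winner` and `loser` are (mu, sigma) tuples of the skill posteriors."""
    (mu_w, s_w), (mu_l, s_l) = winner, loser
    c = math.sqrt(2 * beta ** 2 + s_w ** 2 + s_l ** 2)
    t = (mu_w - mu_l) / c
    v = _phi(t) / _Phi(t)          # additive correction to the means
    w = v * (v + t)                # multiplicative shrinkage of the variances
    mu_w += s_w ** 2 / c * v
    mu_l -= s_l ** 2 / c * v
    s_w *= math.sqrt(max(1 - s_w ** 2 / c ** 2 * w, 1e-12))
    s_l *= math.sqrt(max(1 - s_l ** 2 / c ** 2 * w, 1e-12))
    return (mu_w, s_w), (mu_l, s_l)
```

After an upset or an expected win, the winner’s mean increases, the loser’s decreases, and both uncertainties shrink, which is how repeated games between randomly drawn players converge to a stable skill ranking.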
There are a few observations we can make. First, recurrent policies were dominant. Second, adaptation via RL$^2$ tended to perform equally well or slightly worse than a plain LSTM with or without tracking in this setup. Finally, agents that meta-learned adaptation rules at training time consistently demonstrated higher skill scores in each of the categories corresponding to different policies and agent types.
Finally, we enlarge the population from 105 to 1050 agents by duplicating each of them 10 times and evolve it (in the “natural selection” sense) for several generations as follows. Initially, we start with a balanced population of the different creatures. Next, we randomly match 1000 pairs of agents, make them play iterated adaptation games, remove the losing agents from the population, and duplicate the winners. The same process is repeated 10 times. The result is presented in Fig. 8. We see that many agents quickly disappear from the initially uniform population and the meta-learners end up dominating.
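The selection loop just described can be sketched as follows. This is a simplification (we pair the whole population each generation instead of sampling 1000 random pairs, and `play_game` is a stand-in for an iterated adaptation game):

```python
import random

def evolve(population, play_game, generations=10):
    """'Natural selection' over agents: each generation, agents are matched in
    pairs, losers are removed, and winners are duplicated (population size is
    preserved for an even-sized population)."""
    pop = list(population)
    for _ in range(generations):
        random.shuffle(pop)
        survivors = []
        for a, b in zip(pop[0::2], pop[1::2]):
            winner = play_game(a, b)
            survivors.extend([winner, winner])  # duplicate winner, drop loser
        pop = survivors
    return pop
```

Under any transitive notion of strength, the strongest agent can never be eliminated, so repeated selection concentrates the population on the fittest agents, which matches the dominance of meta-learners observed in Fig. 8.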
6 Conclusion and Future Directions
In this work, we proposed a simple gradient-based meta-learning approach suitable for continuous adaptation in nonstationary environments. The key idea of the method is to regard nonstationarity as a sequence of stationary tasks and train agents to exploit the dependencies between consecutive tasks such that they can handle similar nonstationarities at execution time. We applied our method to nonstationary locomotion and to a competitive multi-agent setting. For the latter, we designed the RoboSumo environment and defined iterated adaptation games that allowed us to test various aspects of adaptation strategies. In both cases, meta-learned adaptation rules were more efficient than the baselines in the few-shot regime. Additionally, agents that meta-learned to adapt demonstrated the highest level of skill when competing in iterated games against each other.
The problem of continuous adaptation in nonstationary and competitive environments is far from solved, and this work is a first attempt to use meta-learning in such a setup. Indeed, our meta-learning algorithm has a few limiting assumptions and design choices that we made mainly due to computational considerations. First, our meta-learning rule is a one-step-ahead update of the policy and is computationally similar to backpropagation through time with a unit time lag. This could potentially be extended to fully recurrent meta-updates that take into account the full history of interaction with the changing environment. Additionally, our meta-updates were based on the gradients of a surrogate loss function. While such updates explicitly optimized the loss, they required computing second-order derivatives at training time, slowing down the training process by an order of magnitude compared to the baselines. Utilizing the information provided by the loss while avoiding explicit backpropagation through the gradients would be more appealing and scalable. Finally, our approach is unlikely to work with sparse rewards, as the meta-updates use policy gradients and heavily rely on the reward signal. Introducing auxiliary dense rewards designed to enable meta-learning is a potential way to overcome this issue that we would like to explore in future work.
Acknowledgements
We would like to thank Harri Edwards, Jakob Foerster, Aditya Grover, Aravind Rajeswaran, Vikash Kumar, Yuhuai Wu and many others at OpenAI for helpful comments and fruitful discussions.
References
 Al-Shedivat et al. (2017) Maruan Al-Shedivat, Avinava Dubey, and Eric P Xing. Contextual explanation networks. arXiv preprint arXiv:1705.10301, 2017.
 Andrychowicz et al. (2016) Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems, pp. 3981–3989, 2016.
 Bansal et al. (2018) Trapit Bansal, Jakub Pachocki, Szymon Sidor, Ilya Sutskever, and Igor Mordatch. Emergent complexity via multi-agent competition. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=Sy0GnUxCb.

 Bengio et al. (1992) Samy Bengio, Yoshua Bengio, Jocelyn Cloutier, and Jan Gecsei. On the optimization of a synaptic learning rule. In Preprints Conf. Optimality in Artificial and Biological Neural Networks, pp. 6–8. Univ. of Texas, 1992.
 Bengio et al. (1990) Yoshua Bengio, Samy Bengio, and Jocelyn Cloutier. Learning a synaptic learning rule. Université de Montréal, Département d’informatique et de recherche opérationnelle, 1990.
 Bowling (2005) Michael Bowling. Convergence and no-regret in multiagent learning. In Advances in Neural Information Processing Systems, pp. 209–216, 2005.
 Cao et al. (2013) Yongcan Cao, Wenwu Yu, Wei Ren, and Guanrong Chen. An overview of recent progress in the study of distributed multi-agent coordination. IEEE Transactions on Industrial Informatics, 9(1):427–438, 2013.
 Caruana (1998) Rich Caruana. Multitask learning. In Learning to Learn, pp. 95–133. Springer, 1998.
 Conitzer & Sandholm (2007) Vincent Conitzer and Tuomas Sandholm. AWESOME: A general multiagent learning algorithm that converges in self-play and learns a best response against stationary opponents. Machine Learning, 67(1–2):23–43, 2007.
 Cully et al. (2015) Antoine Cully, Jeff Clune, Danesh Tarapore, and JeanBaptiste Mouret. Robots that can adapt like animals. Nature, 521(7553):503–507, 2015.
 Da Silva et al. (2006) Bruno C Da Silva, Eduardo W Basso, Ana LC Bazzan, and Paulo M Engel. Dealing with nonstationary environments using context detection. In Proceedings of the 23rd international conference on Machine learning, pp. 217–224. ACM, 2006.
 Duan et al. (2016) Yan Duan, John Schulman, Xi Chen, Peter L Bartlett, Ilya Sutskever, and Pieter Abbeel. RL$^2$: Fast reinforcement learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779, 2016.
 Edwards & Storkey (2016) Harrison Edwards and Amos Storkey. Towards a neural statistician. arXiv preprint arXiv:1606.02185, 2016.
 Finn et al. (2017a) Chelsea Finn, Pieter Abbeel, and Sergey Levine. Lifelong few-shot learning. In Lifelong Learning: A Reinforcement Learning Approach ICML Workshop, 2017a.
 Finn et al. (2017b) Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. arXiv preprint arXiv:1703.03400, 2017b.
 Foerster et al. (2017a) Jakob Foerster, Gregory Farquhar, Triantafyllos Afouras, Nantas Nardelli, and Shimon Whiteson. Counterfactual multi-agent policy gradients. arXiv preprint arXiv:1705.08926, 2017a.
 Foerster et al. (2017b) Jakob N Foerster, Richard Y Chen, Maruan Al-Shedivat, Shimon Whiteson, Pieter Abbeel, and Igor Mordatch. Learning with opponent-learning awareness. arXiv preprint arXiv:1709.04326, 2017b.
 Grant et al. (2018) Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, and Thomas Griffiths. Recasting gradient-based meta-learning as hierarchical Bayes. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=BJ_ULk0b.
 Ha et al. (2016) David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.
 Herbrich et al. (2007) Ralf Herbrich, Tom Minka, and Thore Graepel. TrueSkill™: A Bayesian skill rating system. In Advances in Neural Information Processing Systems, pp. 569–576, 2007.
 Hochreiter et al. (2001) Sepp Hochreiter, A Steven Younger, and Peter R Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pp. 87–94. Springer, 2001.
 Levine et al. (2016) Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. Endtoend training of deep visuomotor policies. Journal of Machine Learning Research, 17(39):1–40, 2016.
 Li et al. (2016) Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. Deep reinforcement learning for dialogue generation. arXiv preprint arXiv:1606.01541, 2016.
 Li & Malik (2016) Ke Li and Jitendra Malik. Learning to optimize. arXiv preprint arXiv:1606.01885, 2016.
 Lowe et al. (2017) Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, and Igor Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. arXiv preprint arXiv:1706.02275, 2017.
 McCloskey & Cohen (1989) Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. Psychology of learning and motivation, 24:109–165, 1989.

 Mealing & Shapiro (2013) Richard Mealing and Jonathan L Shapiro. Opponent modelling by sequence prediction and lookahead in two-player games. In International Conference on Artificial Intelligence and Soft Computing, pp. 385–396. Springer, 2013.
 Mitchell et al. (2015) Tom M Mitchell, William W Cohen, Estevam R Hruschka Jr, Partha Pratim Talukdar, Justin Betteridge, Andrew Carlson, Bhavana Dalvi Mishra, Matthew Gardner, Bryan Kisiel, Jayant Krishnamurthy, et al. Never-ending learning. In AAAI, pp. 2302–2310, 2015.
 Mnih et al. (2015) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
 Peng et al. (2017) Peng Peng, Quan Yuan, Ying Wen, Yaodong Yang, Zhenkun Tang, Haitao Long, and Jun Wang. Multiagent bidirectionally-coordinated nets for learning to play StarCraft combat games. arXiv preprint arXiv:1703.10069, 2017.
 Ravi & Larochelle (2016) Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. 2016.
 Ring (1994) Mark B Ring. Continual learning in reinforcement environments. PhD thesis, University of Texas at Austin Austin, Texas 78712, 1994.
 Ring (1997) Mark B Ring. CHILD: A first step towards continual learning. Machine Learning, 28(1):77–104, 1997.
 Santoro et al. (2016) Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In International Conference on Machine Learning, pp. 1842–1850, 2016.
 Schmidhuber (1987) Jürgen Schmidhuber. Evolutionary principles in self-referential learning. On learning how to learn: The meta-meta-… hook. Diploma thesis, Institut f. Informatik, Tech. Univ. Munich, 1987.
 Schmidhuber (1992) Jürgen Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1), 1992.
 Schulman et al. (2015a) John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 1889–1897, 2015a.
 Schulman et al. (2015b) John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015b.
 Schulman et al. (2017) John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
 Silver et al. (2013) Daniel L Silver, Qiang Yang, and Lianghao Li. Lifelong machine learning systems: Beyond learning algorithms. In AAAI Spring Symposium: Lifelong Machine Learning, volume 13, pp. 05, 2013.
 Silver et al. (2016) David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
 Singh et al. (2000) Satinder Singh, Michael Kearns, and Yishay Mansour. Nash convergence of gradient dynamics in generalsum games. In Proceedings of the Sixteenth conference on Uncertainty in artificial intelligence, pp. 541–548. Morgan Kaufmann Publishers Inc., 2000.
 Snell et al. (2017) Jake Snell, Kevin Swersky, and Richard S Zemel. Prototypical networks for few-shot learning. arXiv preprint arXiv:1703.05175, 2017.
 Sutton et al. (2000) Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, pp. 1057–1063, 2000.
 Sutton et al. (2007) Richard S Sutton, Anna Koop, and David Silver. On the role of tracking in stationary environments. In Proceedings of the 24th international conference on Machine learning, pp. 871–878. ACM, 2007.
 Thrun (1998) Sebastian Thrun. Lifelong learning algorithms. Learning to learn, 8:181–209, 1998.
 Thrun & Pratt (1998) Sebastian Thrun and Lorien Pratt. Learning to learn. Springer, 1998.
 Todorov et al. (2012) Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 5026–5033. IEEE, 2012.
 Vinyals et al. (2016) Oriol Vinyals, Charles Blundell, Tim Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pp. 3630–3638, 2016.
 Wang et al. (2016) Jane X Wang, Zeb KurthNelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, and Matt Botvinick. Learning to reinforcement learn. arXiv preprint arXiv:1611.05763, 2016.
 Williams (1992) Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3–4):229–256, 1992.
 Zhang & Lesser (2010) Chongjie Zhang and Victor R Lesser. Multi-agent learning with policy prediction. In AAAI, 2010.
Appendix A Derivations and the policy gradient theorem
In this section, we derive the policy gradient update for MAML as given in (4) and formulate an equivalent of the policy gradient theorem (Sutton et al., 2000) in the learning-to-learn setting.
Our derivation is not bound to a particular form of the adaptation update. In general, we are interested in meta-learning a procedure, $g_\phi$, parametrized by $\phi$, which, given access to a limited experience on a task, can produce a good policy for solving it. Note that $g_\phi$ is responsible for both collecting the initial experience and constructing the final policy for the given task. For example, in the case of MAML (Finn et al., 2017b), $g_\phi$ is represented by the initial policy, $\pi_\phi$, and the adaptation update rule (4) that produces $\pi_\theta$ with $\theta := \phi - \alpha \nabla_\phi L_T(\tau_\phi)$.
More formally, after querying $K$ trajectories, $\tau^{1:K}_\phi$, we want to produce $\theta$ that minimizes the expected loss w.r.t. the distribution over tasks:
(10) $\min_\phi\; \mathbb{E}_{T \sim \mathcal{D}(T)}\left[\mathbb{E}_{\tau^{1:K}_\phi}\left[\mathbb{E}_{\tau_\theta}\left[L_T(\tau_\theta) \mid \tau^{1:K}_\phi\right]\right]\right]$
Note that the innermost expectation is conditional on the experience, $\tau^{1:K}_\phi$, which our meta-learning procedure, $g_\phi$, collects to produce a task-specific policy, $\pi_\theta$. Assuming that the loss is linear in trajectories, and using linearity of expectations, we can drop the superscript and denote the trajectory sampled under $\pi_\phi$ for task $T$ simply as $\tau_\phi$. At training time, we are given a finite sample of tasks, $\{T_i\}_{i=1}^N$, from the distribution and can search for $\phi$ close to optimal by optimizing over the empirical distribution:
(11) $\min_\phi\; \frac{1}{N}\sum_{i=1}^{N} \mathbb{E}_{\tau_\phi}\left[\mathbb{E}_{\tau_\theta}\left[L_{T_i}(\tau_\theta)\right]\right]$
We rewrite the objective function for task $T_i$ in (11) more explicitly by expanding the expectations:
(12) $L_{T_i}(\phi) = \iiint L_{T_i}(\tau_\theta)\, p_{T_i}(\tau_\theta \mid \theta)\, p(\theta \mid \tau_\phi, \phi)\, p_{T_i}(\tau_\phi \mid \phi)\, d\tau_\phi\, d\theta\, d\tau_\theta$
Trajectories, $\tau_\phi$ and $\tau_\theta$, and the parameters of the policy, $\theta$, can be thought of as random variables that we marginalize out to construct the objective that depends on $\phi$ only. The adaptation update rule (4) assumes the following $p(\theta \mid \tau_\phi, \phi)$:
(13) $p(\theta \mid \tau_\phi, \phi) = \delta\!\left(\theta - \phi + \alpha \nabla_\phi L_{T_i}(\tau_\phi)\right)$
Note that by specifying $p(\theta \mid \tau_\phi, \phi)$ differently, we may arrive at different meta-learning algorithms. After plugging (13) into (12) and integrating out $\theta$, we get the following expected loss for task $T_i$ as a function of $\phi$:
(14) $L_{T_i}(\phi) = \mathbb{E}_{\tau_\phi \sim p_{T_i}(\tau \mid \phi)}\left[\mathbb{E}_{\tau_\theta \sim p_{T_i}(\tau \mid \theta)}\left[L_{T_i}(\tau_\theta)\right]\right], \quad \theta := \phi - \alpha \nabla_\phi L_{T_i}(\tau_\phi)$
The gradient of (14) will take the following form:
(15) $\nabla_\phi L_{T_i}(\phi) = \mathbb{E}_{\tau_\phi, \tau_\theta}\left[L_{T_i}(\tau_\theta)\left(\nabla_\phi \log \pi_\theta(\tau_\theta) + \nabla_\phi \log \pi_\phi(\tau_\phi)\right)\right]$
where $\theta$ is as given in (14). Note that the expression consists of two terms: the first term is the standard policy gradient w.r.t. the updated policy, $\pi_\theta$, while the second one is the policy gradient w.r.t. the original policy, $\pi_\phi$, that is used to collect $\tau_\phi$. If we were to omit marginalization of $\tau_\phi$ (as was done in the original paper (Finn et al., 2017b)), the $\nabla_\phi \log \pi_\phi(\tau_\phi)$ terms would disappear. Finally, the gradient can be rewritten in a more succinct form:
(16) $\nabla_\phi L_{T_i}(\phi) = \mathbb{E}_{\tau_\phi, \tau_\theta}\left[L_{T_i}(\tau_\theta)\, \nabla_\phi\left(\log \pi_\phi(\tau_\phi) + \log \pi_\theta(\tau_\theta)\right)\right]$
The update given in (16) is an unbiased estimate of the gradient as long as the loss is simply the sum of discounted rewards (i.e., it extends the classical REINFORCE algorithm (Williams, 1992) to meta-learning). Similarly, we can define a loss that uses a value or advantage function and extend the policy gradient theorem (Sutton et al., 2000) to make it suitable for meta-learning. [Meta policy gradient theorem] For any MDP, the gradient of the value function w.r.t. $\phi$ takes the following form:
(17) $\nabla_\phi V(\phi) = \mathbb{E}_{\tau_\phi}\left[\sum_{s} d^{\pi_\theta}(s) \sum_{a} \nabla_\phi \pi_\theta(a \mid s)\, Q^{\pi_\theta}(s, a) + V^{\pi_\theta}(s_0)\, \nabla_\phi \log \pi_\phi(\tau_\phi)\right]$
where $d^{\pi_\theta}$ is the stationary distribution under policy $\pi_\theta$.
Proof.
We define task-specific value functions under the generated policy, $\pi_\theta$, as follows:
(18) $V^{\pi_\theta}(s) = \mathbb{E}_{\pi_\theta}\left[\sum_{t=0}^{\infty} \gamma^t r_t \,\middle|\, s_0 = s\right], \quad Q^{\pi_\theta}(s, a) = \mathbb{E}_{\pi_\theta}\left[\sum_{t=0}^{\infty} \gamma^t r_t \,\middle|\, s_0 = s,\, a_0 = a\right]$
where the expectations are taken w.r.t. the dynamics of the environment of the given task, $T$, and the policy, $\pi_\theta$. Next, we need to marginalize out $\tau_\phi$:
(19) $V(\phi) = \mathbb{E}_{\tau_\phi \sim p_T(\tau \mid \phi)}\left[V^{\pi_\theta}(s_0)\right]$
and after taking the gradient w.r.t. $\phi$, we arrive at:
(20) $\nabla_\phi V(\phi) = \mathbb{E}_{\tau_\phi}\left[\nabla_\phi V^{\pi_\theta}(s_0) + V^{\pi_\theta}(s_0)\, \nabla_\phi \log \pi_\phi(\tau_\phi)\right]$
where the first term is similar to the expression used in the original policy gradient theorem (Sutton et al., 2000) while the second one comes from differentiating trajectories that depend on $\phi$. Following Sutton et al. (2000), we unroll the derivative of the Q-function in the first term and arrive at the following final expression for the policy gradient:
(21) $\nabla_\phi V(\phi) = \mathbb{E}_{\tau_\phi}\left[\sum_{s} d^{\pi_\theta}(s) \sum_{a} \nabla_\phi \pi_\theta(a \mid s)\, Q^{\pi_\theta}(s, a) + V^{\pi_\theta}(s_0)\, \nabla_\phi \log \pi_\phi(\tau_\phi)\right]$
∎
The same theorem is applicable to the continuous setting, with the only changes being in the distributions used to compute the expectations in (17) and (18): the sums over states and actions are replaced with the corresponding integrals, with the outer expectation in (17) taken w.r.t. trajectories sampled under $\pi_\phi$ and the inner expectation w.r.t. the stationary distribution under $\pi_\theta$.
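To make the nested structure of the objective in (12)–(14) concrete, here is a toy two-armed-bandit instance in Python, where both expectations can be computed by exact enumeration instead of sampling. All names are ours, and the inner update is a single REINFORCE step, as in the one-step case above; the gradient is taken by finite differences rather than by backpropagating through the update:

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

def adapted_theta(phi, arm, rewards, alpha):
    # One REINFORCE inner step from a single sampled arm:
    # the gradient of -r[arm] * log pi_phi(arm) w.r.t. phi.
    p = softmax(phi)
    grad = [-rewards[arm] * ((1.0 if i == arm else 0.0) - p[i])
            for i in range(len(phi))]
    return [ph - alpha * g for ph, g in zip(phi, grad)]

def meta_objective(phi, rewards, alpha):
    """Exact nested expectation over both 'trajectories' (arm pulls):
    J(phi) = sum_a pi_phi(a) * sum_b pi_theta(a)(b) * (-r[b])."""
    p = softmax(phi)
    J = 0.0
    for a in range(len(rewards)):
        q = softmax(adapted_theta(phi, a, rewards, alpha))
        J += p[a] * sum(q[b] * (-rewards[b]) for b in range(len(rewards)))
    return J

def finite_diff_grad(f, phi, eps=1e-5):
    g = []
    for i in range(len(phi)):
        up, dn = list(phi), list(phi)
        up[i] += eps
        dn[i] -= eps
        g.append((f(up) - f(dn)) / (2 * eps))
    return g
```

Descending this meta-objective improves both how the initial policy explores and how well the one-step-adapted policy performs, which is exactly the trade-off the derivation above formalizes.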
a.1 Multiple adaptation gradient steps
All our derivations so far assumed a single gradient-based adaptation step. Experimentally, we found that the multi-step version of the update often leads to more stable training and better test-time performance. In particular, we construct $\theta$ via $M$ intermediate gradient steps:
(22) $\phi_0 := \phi, \qquad \phi_m := \phi_{m-1} - \alpha_m \nabla_{\phi_{m-1}} L_T(\tau_{\phi_{m-1}}), \quad m = 1, \dots, M, \qquad \theta := \phi_M$
where $\phi_1, \dots, \phi_{M-1}$ are intermediate policy parameters. Note that each intermediate step requires interacting with the environment and sampling intermediate trajectories, $\tau_{\phi_{m-1}}$. To compute the policy gradient, we need to marginalize out all the intermediate random variables: the parameters $\phi_m$, $m = 1, \dots, M$, and the trajectories $\tau_{\phi_m}$, $m = 0, \dots, M-1$. The objective function (12) takes the following form:
(23) $L_T(\phi) = \int L_T(\tau_\theta)\, p_T(\tau_\theta \mid \theta) \prod_{m=1}^{M} \left[p(\phi_m \mid \tau_{\phi_{m-1}}, \phi_{m-1})\, p_T(\tau_{\phi_{m-1}} \mid \phi_{m-1})\right] d\tau_\theta\, d\phi_{1:M}\, d\tau_{\phi_{0:M-1}}$
Since the distributions over the intermediate parameters are delta functions, the final expression for the multi-step MAML objective has the same form as (14), with integration taken w.r.t. all intermediate trajectories. Similarly, an unbiased estimate of the gradient of the objective gets additional score-function terms, one per intermediate policy:
(24) $\nabla_\phi L_T(\phi) = \mathbb{E}\left[L_T(\tau_\theta)\, \nabla_\phi\left(\log \pi_\theta(\tau_\theta) + \sum_{m=0}^{M-1} \log \pi_{\phi_m}(\tau_{\phi_m})\right)\right]$
where the expectation is taken w.r.t. trajectories (including all intermediate ones). Again, note that at training time we do not constrain the number of interactions with each particular environment and do rollouts using each intermediate policy to compute the updates. At test time, we interact with the environment only once and rely on the importance-weight correction as described in Sec. 3.2.
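The multi-step construction of $\theta$ above can be sketched as a simple loop. The helper names are hypothetical: `sample_trajs` stands for interacting with the environment under the current parameters, and `inner_grad` for the policy-gradient estimate computed from those trajectories:

```python
def adapt_multistep(phi, sample_trajs, inner_grad, step_sizes):
    """Construct adapted parameters theta via M intermediate gradient steps.

    phi          -- initial (meta-learned) parameter vector
    sample_trajs -- collects trajectories under the current parameters
    inner_grad   -- gradient estimate computed from those trajectories
    step_sizes   -- per-step learning rates alpha_1..alpha_M
    """
    theta = list(phi)
    for alpha in step_sizes:
        trajs = sample_trajs(theta)      # interact with the environment
        grad = inner_grad(theta, trajs)  # REINFORCE-style estimate
        theta = [t - alpha * g for t, g in zip(theta, grad)]
    return theta
```

Each pass through the loop corresponds to one intermediate policy $\pi_{\phi_m}$; keeping the per-step learning rates as learnable quantities is what makes the adaptation itself trainable.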
Appendix B Additional details on the architectures
The neural architectures used for our policies and value functions are illustrated in Fig. 9. Our MLP architectures were memoryless and reactive. The LSTM architectures used a fully connected embedding layer (with 64 hidden units) followed by a recurrent layer (also with 64 units). The state in the LSTM-based architectures was kept throughout each episode and reset to zeros at the beginning of each new episode. The RL$^2$ architecture additionally took the reward and done signals from the previous time step and kept its state throughout the whole interaction with a given environment (or opponent). The recurrent architectures were unrolled for a fixed number of time steps and optimized with PPO via backprop through time.



Appendix C Additional details on meta-learning and optimization
c.1 Meta-updates for continuous adaptation
Our meta-learned adaptation methods were used with both MLP and LSTM policies (Fig. 9). The meta-updates were based on 3 gradient steps with adaptive step sizes initialized to 0.001. There are a few additional details to note:


The $\phi$ and $\theta$ parameters were a concatenation of the policy and the value function parameters.

At the initial stages of optimization, meta-gradient steps often tended to “explode”; hence, we clipped their values to be between −0.1 and 0.1.

We used different surrogate loss functions for the meta-updates and for the outer optimization: for the meta-updates, we used vanilla policy gradients computed on the negative discounted rewards, while for the outer optimization loop we used the PPO objective.
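For the meta-update loss specifically, a minimal sketch of the vanilla policy-gradient surrogate on negative discounted rewards might look as follows (function names and the discount value are ours, for illustration only):

```python
def discounted_returns(rewards, gamma=0.99):
    """Discounted reward-to-go G_t = sum_{k>=t} gamma^{k-t} r_k for each step."""
    out, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        out.append(g)
    return list(reversed(out))

def vanilla_pg_loss(log_probs, rewards, gamma=0.99):
    """Surrogate whose gradient is the REINFORCE estimator on the negative
    discounted rewards (minimizing it increases the expected return)."""
    returns = discounted_returns(rewards, gamma)
    return -sum(lp * G for lp, G in zip(log_probs, returns)) / len(rewards)
```

The outer loop replaces this surrogate with the clipped PPO objective, which is better behaved for the larger updates taken there.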
c.2 On PPO and its distributed implementation
As mentioned in the main text, and similar to Bansal et al. (2018), large batch sizes were used to ensure sufficient exploration throughout policy optimization and were critical for learning in the competitive setting of RoboSumo. In our experiments, the PPO epoch size was set to 32,000 episodes and the batch size to 8,000. The PPO clipping hyperparameter was set to and the KL penalty was set to 0. In all our experiments, the learning rate (for meta-learning, the learning rate for and ) was set to . The generalized advantage estimator (GAE) (Schulman et al., 2015b) was optimized jointly with the policy (we used and ).

To train our agents in a reasonable time, we used a distributed implementation of the PPO algorithm. To do so, we versioned the agent’s parameters (i.e., kept the parameters after each update and assigned them a version number) and used a versioned queue for rollouts. Multiple worker machines generated rollouts in parallel for the most recent available version of the agent parameters and pushed them into the versioned rollout queue. The optimizer machine collected rollouts from the queue and made a PPO optimization step (see Schulman et al. (2017) for details) as soon as enough rollouts were available.
We trained agents on multiple environments simultaneously. In nonstationary locomotion, each environment corresponded to a different pair of legs of the creature becoming dysfunctional. In RoboSumo, each environment corresponded to a different opponent in the training pool. Simultaneous training was achieved by assigning these environments to rollout workers uniformly at random, so that the rollouts in each mini-batch were guaranteed to come from all training environments.
Appendix D Additional details on the environments
d.1 Observation and action spaces
Both the observation and action spaces in RoboSumo are continuous. The observations of each agent consist of the position of its own body (7 dimensions that include 3 absolute coordinates in the global Cartesian frame and 4 quaternion components), the position of the opponent’s body (7 dimensions), its own joint angles and velocities (2 angles and 2 velocities per leg), the forces exerted on each part of its own body (6 dimensions for the torso and 18 for each leg), and the forces exerted on the opponent’s torso (6 dimensions). All forces were squared and clipped at 100. Additionally, we normalized observations using a running mean and clipped their values to be between −5 and 5. The action space had one dimension per joint. Table 1 summarizes the observation and action spaces for each agent type.
Agent   | Self: Coordinates | Self: Velocities | Self: Forces | Opponent: Coordinates | Opponent: Forces | Action space
Ant     | 15                | 14               | 78           | 7                     | 6                | 8
Bug     | 19                | 18               | 114          | 7                     | 6                | 12
Spider  | 23                | 22               | 150          | 7                     | 6                | 16
Note that the agents observe neither any of the opponent’s velocities nor the positions of the opponent’s limbs. This allows us to keep the observation spaces consistent regardless of the type of the opponent. However, even though the agents are blind to the opponent’s limbs, they can sense them via the forces applied to their own bodies when in contact with the opponent.
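The dimensionalities in Table 1 follow directly from the per-leg layout described above; the helper below reproduces them. Note that the split of the velocity dimensions into 6 body and 2-per-leg joint velocities is our own inference from the totals in the table:

```python
def robosumo_obs_dims(n_legs):
    """Per-agent observation and action dimensionalities, following the
    paper's description (each leg has 2 joints)."""
    return {
        "self_coords": 7 + 2 * n_legs,   # body pose (3 xyz + 4 quat) + joint angles
        "self_vels":   6 + 2 * n_legs,   # body velocities + joint velocities (inferred split)
        "self_forces": 6 + 18 * n_legs,  # torso forces + per-leg contact forces
        "opp_coords":  7,                # opponent body pose only
        "opp_forces":  6,                # forces on the opponent's torso
        "action":      2 * n_legs,       # one dimension per joint, 2 joints per leg
    }
```

With 4, 6, and 8 legs for Ant, Bug, and Spider respectively, this recovers the 15/14/78, 19/18/114, and 23/22/150 observation splits and the 8/12/16 action dimensions from Table 1.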
d.2 Shaped rewards
In RoboSumo, the winner gets a reward of 2000, the loser is penalized with −2000, and in case of a draw both agents receive −1000. In addition to these sparse win/lose rewards, we used the following dense rewards to encourage fast learning at the early stages of training:


Quickly push the opponent outside. The agent was penalized at each time step proportionally to $\exp(-d_{opp})$, where $d_{opp}$ was the distance of the opponent from the center of the ring.

Moving towards the opponent. The agent was rewarded at each time step proportionally to the magnitude of its velocity component towards the opponent.

Hit the opponent. The agent was rewarded proportionally to the square of the total force exerted on the opponent’s torso.

Control penalty. A penalty on the actions to prevent jittery or unnatural movements.
d.3 RoboSumo calibration
To calibrate the RoboSumo environment, we used the following procedure. First, we trained each agent via pure self-play with an LSTM policy using PPO for the same number of iterations, tested them against one another (without adaptation), and recorded the win rates (Table 2). To ensure balance, we kept increasing the mass of the weaker agents and repeated the calibration procedure until the win rates equilibrated.
Masses (Ant, Bug, Spider) | Ant vs. Bug | Ant vs. Spider | Bug vs. Spider
Initial (10, 10, 10)      |             |                |
Calibrated (13, 10, 39)   |             |                |
Appendix E Additional details on experiments
e.1 Average win rates
Table 3 gives the average win rates for the last 25 rounds of iterated adaptation games played by different agents with different adaptation methods (win rates for each episode are visualized in Fig. 5).
Agent  | Opponent | Adaptation strategy: RL$^2$ | LSTM + PPO-tracking | LSTM + meta-updates
Ant    | Ant      |        |        |
Ant    | Bug      |        |        |
Ant    | Spider   |        |        |
Bug    | Ant      |        |        |
Bug    | Bug      |        |        |
Bug    | Spider   |        |        |
Spider | Ant      |        |        |
Spider | Bug      |        |        |
Spider | Spider   |        |        |
e.2 TrueSkill rank of the top agents
Rank | Agent              | TrueSkill rank*
1    | Bug + LSTM-meta    |
2    | Ant + LSTM-meta    |
3    | Bug + LSTM-track   |
4    | Ant + RL$^2$       |
5    | Ant + LSTM         |
6    | Bug + MLP-meta     |
7    | Ant + MLP-meta     |
8    | Spider + MLP-meta  |
9    | Spider + MLP       |
10   | Bug + MLP-track    |
* The rank is a conservative estimate of the skill, $\mu - 3\sigma$, to ensure that the actual skill of the agent is higher with 99% confidence.
Since TrueSkill represents the belief about the skill of an agent as a normal distribution (i.e., with two parameters, $\mu$ and $\sigma$), we can use it to infer the a priori probability of an agent, $a$, winning against its opponent, $b$, as follows (Herbrich et al., 2007):
(25) $P(a \text{ wins}) = \Phi\!\left(\frac{\mu_a - \mu_b}{\sqrt{2\beta^2 + \sigma_a^2 + \sigma_b^2}}\right), \quad \Phi(x) := \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right]$
The ranking of the top-5 agents with MLP and LSTM policies according to their TrueSkill is given in the table above, and the a priori win rates are shown in Fig. 10. Note that within the LSTM and MLP categories, the best meta-learners are 10 to 25% more likely to win against the best agents that use other adaptation strategies.
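The a priori win probability in (25) can be computed directly from the two skill posteriors; a small sketch (the function name is ours, and $\beta$ defaults to the trueskill.org value of 25/6):

```python
import math

def win_probability(mu_a, sigma_a, mu_b, sigma_b, beta=25.0 / 6):
    """A priori probability that player a beats player b under TrueSkill
    (Herbrich et al., 2007), ignoring draws: Phi((mu_a - mu_b) / c) with
    c = sqrt(2*beta^2 + sigma_a^2 + sigma_b^2)."""
    c = math.sqrt(2 * beta ** 2 + sigma_a ** 2 + sigma_b ** 2)
    return 0.5 * (1 + math.erf((mu_a - mu_b) / (c * math.sqrt(2))))
```

For two players with identical posteriors, this gives exactly 0.5, and the probability grows with the skill gap relative to the combined uncertainty, which is what the 10–25% gaps in Fig. 10 are measured against.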
e.3 Intolerance to large distributional shifts
Continuous adaptation via meta-learning assumes consistency in the changes of the environment or the opponent. What happens if the changes are drastic? Unfortunately, the training process of our meta-learning procedure turns out to be sensitive to such shifts and can diverge when the distributional shifts from iteration to iteration are large. Fig. 11 shows the training curves for a meta-learning agent with an MLP policy trained against versions of an MLP opponent pre-trained via self-play. At each iteration, we kept updating the opponent’s policy by 1 to 10 gradient steps. The meta-learned policy was able to achieve non-negative rewards by the end of training only when the opponent was changing by up to 4 steps per iteration.