1 Introduction
Both model-based and model-free reinforcement learning (RL) methods generally operate in one of two regimes: all training is performed in advance, producing a model or policy that can be used at test time to make decisions in settings that approximately match those seen during training; or, training is performed online (e.g., as in the case of online temporal-difference learning), in which case the agent can slowly modify its behavior as it interacts with the environment. However, in both of these cases, dynamic changes such as failure of a robot's components, encountering a new terrain, environmental factors such as lighting and wind, or other unexpected perturbations can cause the agent to fail. In contrast, humans can rapidly adapt their behavior to unseen physical perturbations and changes in their dynamics (Braun et al., 2009): adults can learn to walk on crutches in just a few seconds, people can adapt almost instantaneously to picking up an object that is unexpectedly heavy, and children that can walk on carpet and grass can quickly figure out how to walk on ice without having to relearn how to walk. How is this possible? If an agent has encountered a large number of perturbations in the past, it can in principle use that experience to learn how to adapt. In this work, we propose a meta-learning approach for learning online adaptation.
Motivated by the ability to tackle real-world applications, we specifically develop a model-based meta-reinforcement learning algorithm. In this setting, data for updating the model is readily available at every timestep in the form of recent experiences. More crucially, the meta-training process for training such an adaptive model can be much more sample efficient than model-free meta-RL approaches (Duan et al., 2016; Wang et al., 2016; Finn et al., 2017). Further, our approach forgoes the episodic framework on which model-free meta-RL approaches rely, where tasks are pre-defined to be different rewards or environments, and tasks exist at the trajectory level only. Instead, our method considers each timestep to potentially be a new “task,” where any detail or setting could have changed at any timestep. This view induces a more general meta-RL problem setting by allowing the notion of a task to represent anything from existing in a different part of the state space, to experiencing disturbances, or attempting to achieve a new goal.
Learning to adapt a model alleviates a central challenge of model-based reinforcement learning: the problem of acquiring a global model that is accurate throughout the entire state space. Furthermore, even if it were practical to train a globally accurate dynamics model, the dynamics inherently change as a function of uncontrollable and often unobservable environmental factors, such as those mentioned above. If we have a model that can adapt online, it need not be perfect everywhere a priori. This property has previously been exploited by adaptive control methods (Åström and Wittenmark, 2013; Sastry and Isidori, 1989; Pastor et al., 2011; Meier et al., 2016); but scaling such methods to complex tasks and nonlinear systems is exceptionally difficult. Even when working with deep neural networks, which have been used to model complex nonlinear systems (Kurutach et al., 2018), it is exceptionally difficult to enable adaptation, since such models typically require large amounts of data and many gradient steps to learn effectively. By specifically training a neural network model to require only a small amount of experience to adapt, we can enable effective online adaptation in complex environments while putting less pressure on needing a perfect global model.

The primary contribution of our work is an efficient meta-reinforcement learning approach that achieves online adaptation in dynamic environments. To the best of our knowledge, this is the first meta-reinforcement learning algorithm to be applied to a real robotic system. Our algorithm efficiently trains a global model that is capable of using its recent experiences to quickly adapt, achieving fast online adaptation in dynamic environments. We evaluate two versions of our approach, a recurrence-based adaptive learner (ReBAL) and a gradient-based adaptive learner (GrBAL), on stochastic, simulated continuous control tasks with complex contact dynamics (Fig. 2). In our experiments, we show a quadrupedal “ant” adapting to the failure of different legs, as well as a “half-cheetah” robot adapting to the failure of different joints, navigating terrains with different slopes, and walking on floating platforms of varying buoyancy. Our model-based meta-RL method attains substantial improvement over prior approaches, including standard model-based methods, online model-adaptive methods, model-free methods, and prior meta-reinforcement learning methods, when trained with similar amounts of data. In all experiments, meta-training across multiple tasks is sample efficient, using only the equivalent of a few hours of real-world experience, far less than what model-free methods require to learn a single task. Finally, we demonstrate GrBAL on a real dynamic legged millirobot (see Fig. 2).
To highlight not only the sample efficiency of our meta model-based reinforcement learning approach, but also the importance of fast online adaptation in the real world, we show the agent's learned ability to adapt online to tasks such as a missing leg, novel terrains and slopes, miscalibration or errors in pose estimation, and new payloads to be pulled.
2 Related Work
Advances in learning control policies have shown success on numerous complex and high-dimensional tasks (Schulman et al., 2015; Lillicrap et al., 2015; Mnih et al., 2015; Levine et al., 2016; Silver et al., 2017). While reinforcement learning algorithms provide a framework for learning new tasks, they primarily focus on mastery of individual skills, rather than generalizing and quickly adapting to new scenarios. Furthermore, model-free approaches (Peters and Schaal, 2008) require large amounts of system interaction to learn successful control policies, which often makes them impractical for real-world systems. In contrast, model-based methods attain superior sample efficiency by first learning a model of the system dynamics, and then using that model to optimize a policy (Deisenroth et al., 2013; Lenz et al., 2015; Levine et al., 2016; Nagabandi et al., 2017b; Williams et al., 2017). Our approach alleviates the need to learn a single global model by allowing the model to be adapted automatically to different scenarios online, based on recent observations. A key challenge with model-based RL approaches is the difficulty of learning a global model that is accurate for the entire state space. Prior model-based approaches tackled this problem by incorporating model uncertainty using Gaussian Processes (GPs) (Ko and Fox, 2009; Deisenroth and Rasmussen, 2011; Doerr et al., 2017). However, these methods make additional assumptions about the system (such as smoothness) and do not scale to high-dimensional environments. Chua et al. (2018) recently showed that neural network models can also benefit from incorporating uncertainty, which can lead to model-based methods that attain model-free performance with a significant reduction in sample complexity. Our approach is orthogonal to theirs, and can benefit from incorporating such uncertainty.
Prior online adaptation approaches (Tanaskovic et al., 2013; Aswani et al., 2012) have aimed to learn an approximate global model and then adapt it at test time. Dynamic evaluation algorithms (Rei, 2015; Krause et al., 2017, 2016; Fortunato et al., 2017), for example, learn an approximate global distribution at training time and adapt those model parameters at test time to fit the current local distribution via gradient descent. There exists extensive prior work on online adaptation in model-based reinforcement learning and adaptive control (Sastry and Isidori, 1989). In contrast to inverse model adaptation (Kelouwani et al., 2012; Underwood and Husain, 2010; Pastor et al., 2011; Meier et al., 2016; Meier and Schaal, 2016; Rai et al., 2017), we are concerned with the problem of adapting the forward model, closely related to online system identification (Manganiello et al., 2014). Work in model adaptation (Levine and Koltun, 2013; Gu et al., 2016; Fu et al., 2015; Weinstein and Botvinick, 2017) has shown that a perfect global model is not necessary, and that prior knowledge can be fine-tuned to handle small changes. These methods, however, face a mismatch between what the model is trained for and how it is used at test time. In this paper, we bridge this gap by explicitly training a model for fast and effective adaptation. As a result, our model achieves more effective adaptation compared to these prior works, as validated in our experiments.
Our problem setting relates to meta-learning, a long-standing problem of interest in machine learning that is concerned with enabling artificial agents to efficiently learn new tasks by learning to learn (Thrun and Pratt, 1998; Schmidhuber and Huber, 1991; Naik and Mammone, 1992; Lake et al., 2015). A meta-learner can control learning through approaches such as deciding the learner's architecture (Baker et al., 2016), or by prescribing an optimization algorithm or update rule for the learner (Bengio et al., 1990; Schmidhuber, 1992; Younger et al., 2001; Andrychowicz et al., 2016; Li and Malik, 2016; Ravi and Larochelle, 2018). Another popular meta-learning approach involves simply unrolling a recurrent neural network (RNN) that ingests the data (Santoro et al., 2016; Munkhdalai and Yu, 2017; Munkhdalai et al., 2017; Mishra et al., 2017) and learns internal representations of the algorithms themselves; one instantiation of our approach (ReBAL) builds on top of these methods. The other instantiation of our method (GrBAL) builds on top of MAML (Finn et al., 2017). GrBAL differs from the supervised version of MAML in that MAML assumes access to a hand-designed distribution of tasks. Instead, one of our primary contributions is the online formulation of meta-learning, where tasks correspond to temporal segments, enabling “tasks” to be constructed automatically from the experience in the environment.

Meta-learning in the context of reinforcement learning has largely focused on model-free approaches (Duan et al., 2016; Wang et al., 2016; Sung et al., 2017; Al-Shedivat et al., 2017). However, these algorithms incur even more (meta-)training sample complexity than non-meta model-free RL methods, which precludes them from real-world applications. Recent work (Sæmundsson et al., 2018) has developed a model-based meta-RL algorithm, framing meta-learning as a hierarchical latent variable model and training for episodic adaptation to dynamics changes; the modeling is done with GPs, and results are shown on cart-pole and double-pendulum agents. In contrast, we propose an approach for learning online adaptation of high-capacity neural network dynamics models; we present two instantiations of this general approach and show results on both simulated agents and a real legged robot.
3 Preliminaries
In this section, we present model-based reinforcement learning, introduce the meta-learning formulation, and describe the two main meta-learning approaches.
3.1 Model-Based Reinforcement Learning
Reinforcement learning agents aim to perform actions that maximize some notion of cumulative reward. Concretely, consider a Markov decision process (MDP) defined by the tuple (S, A, p, r, γ, ρ₀, H). Here, S is the set of states, A is the set of actions, p(s′ | s, a) is the state transition distribution, r(s, a) is a bounded reward function, ρ₀ is the initial state distribution, γ is the discount factor, and H is the horizon. A trajectory segment is denoted by τ(i, j) := (s_i, a_i, ..., s_j, a_j, s_{j+1}). Finally, the sum of expected rewards from a trajectory is the return. In this framework, RL aims to find a policy π that prescribes the optimal action to take from each state in order to maximize the expected return.

Model-based RL aims to solve this problem by learning the transition distribution p(s′ | s, a), which is also referred to as the dynamics model. This can be done using a function approximator p̂_θ(s′ | s, a) to approximate the dynamics, where the parameters θ are optimized to maximize the log-likelihood of the observed data D. In practice, this model is then used in the process of action selection by either producing data points from which to train a policy, or by producing predictions and dynamics constraints to be optimized by a controller.
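As a minimal numerical illustration of this model-fitting step (a sketch under simplifying assumptions, not the paper's implementation: the dynamics here are a made-up linear system, and with a fixed-variance Gaussian model, maximizing log-likelihood reduces to minimizing mean squared error):

```python
import numpy as np

# Sketch: fit a linear dynamics model s' ~ W @ [s, a] by minimizing mean
# squared error, which is equivalent to maximum likelihood under a Gaussian
# model with fixed variance. The "true" dynamics below are illustrative only.
rng = np.random.default_rng(0)

A_true = np.array([[1.0, 0.1], [0.0, 0.9]])   # hypothetical ground truth
B_true = np.array([[0.0], [0.5]])

def step(s, a):
    return A_true @ s + B_true @ a

# Collect (s, a, s') transitions.
S = rng.normal(size=(500, 2))
A = rng.normal(size=(500, 1))
S_next = np.array([step(s, a) for s, a in zip(S, A)])

# Fit W on inputs x = [s, a] by gradient descent on the MSE.
X = np.hstack([S, A])                  # (500, 3)
W = np.zeros((2, 3))
lr = 0.05
for _ in range(2000):
    pred = X @ W.T                     # predicted next states
    grad = (pred - S_next).T @ X / len(X)
    W -= lr * grad

mse = np.mean((X @ W.T - S_next) ** 2)
print(mse < 1e-3)                      # residual shrinks on this noiseless system
```

In practice the function approximator is a neural network rather than a linear map (as in our experiments), but the objective being optimized is the same.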
3.2 Meta-Learning
Meta-learning is concerned with automatically learning learning algorithms that are more efficient and effective than learning from scratch. These algorithms leverage data from previous tasks to acquire a learning procedure that can quickly adapt to new tasks. These methods operate under the assumption that the previous meta-training tasks and the new meta-test tasks are drawn from the same task distribution p(T) and share a common structure that can be exploited for fast learning. In the supervised learning setting, we aim to learn a function f_θ with parameters θ that minimizes a supervised loss L_T. Then, the goal of meta-learning is to find a learning procedure, denoted as θ′ = u_ψ(D_T^tr, θ), that can learn a range of tasks T from small datasets D_T^tr.

We can formalize this meta-learning problem setting as optimizing for the parameters (θ, ψ) of the learning procedure as follows:
min_{θ,ψ}  E_{T ∼ p(T)} [ L(D_T^test, θ′_T) ]   s.t.   θ′_T = u_ψ(D_T^tr, θ)        (1)
where D_T^tr and D_T^test are sampled without replacement from the meta-training dataset D_T.
Once meta-training optimizes for the parameters (θ*, ψ*), the learning procedure u_ψ(·, θ) can then be used to learn new held-out tasks from small amounts of data. We will also refer to the learning procedure u_ψ as the update function.
Gradient-based meta-learning.
Model-agnostic meta-learning (MAML) (Finn et al., 2017) aims to learn the initial parameters of a neural network such that taking one or several gradient descent steps from this initialization leads to effective generalization (or few-shot generalization) to new tasks. Then, when presented with new tasks, the model with the meta-learned initialization can be quickly fine-tuned using a few data points from the new tasks. Using the notation from before, MAML uses gradient descent as a learning algorithm:
u_ψ(D_T^tr, θ) = θ − α ∇_θ L(D_T^tr, θ)        (2)
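The update rule in Equation 2 can be sketched numerically as follows (an illustrative toy on a least-squares task, not the paper's code; the task, data, and learning rate α are all made up):

```python
import numpy as np

# Sketch of the MAML-style update u_psi(D_tr, theta) = theta - alpha * grad:
# one gradient step on a task's training data yields adapted parameters.
def loss_and_grad(theta, X, y):
    err = X @ theta - y
    return np.mean(err ** 2), 2 * X.T @ err / len(y)

def maml_update(theta, X_tr, y_tr, alpha=0.1):
    _, g = loss_and_grad(theta, X_tr, y_tr)
    return theta - alpha * g           # adapted parameters theta'

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 3))           # one toy regression "task"
true_theta = np.array([1.0, -2.0, 0.5])
y = X @ true_theta

theta0 = np.zeros(3)                   # stands in for the meta-learned initialization
theta_adapted = maml_update(theta0, X, y)
before, _ = loss_and_grad(theta0, X, y)
after, _ = loss_and_grad(theta_adapted, X, y)
print(after < before)                  # one gradient step reduces the task loss
```

Meta-training then optimizes the initialization θ₀ so that this single adapted step performs well across tasks; the sketch above only shows the inner update itself.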
The learning rate α may be a learnable parameter (in which case ψ = α) or fixed as a hyperparameter, leading to ψ = ∅. Despite the update rule being fixed, a learned initialization of an overparameterized deep network followed by gradient descent is as expressive as update rules represented by deep recurrent networks (Finn and Levine, 2017).

Recurrence-based meta-learning.
Another approach to meta-learning is to use recurrent models. In this case, the update function u_ψ is always learned, and ψ corresponds to the weights of the recurrent model that update the hidden state. The parameters θ of the prediction model correspond to the remainder of the weights of the recurrent model and the hidden state. Both gradient-based and recurrence-based meta-learning methods have been used for meta model-free RL (Finn et al., 2017; Duan et al., 2016). We will build upon these ideas to develop a meta model-based RL algorithm that enables adaptation in dynamic environments, in an online way.
4 Meta-Learning for Online Model Adaptation
In this section, we present our approach to meta-learning for online model adaptation. As explained in Section 3.2, standard meta-learning formulations require the learned model to learn using data points from some new “task.” In prior gradient-based and model-based meta-RL approaches (Finn et al., 2017; Sæmundsson et al., 2018), these data points have corresponded to entire trajectories, leading to episodic adaptation.
Our notion of a task is slightly more fluid: every segment of a trajectory can be considered a different “task,” and observations from the past M timesteps (rather than the past episodes) can be considered as providing information about the current task setting. Since changes in system dynamics, terrain details, or other environmental conditions can occur at any time, we consider (at every time step) the problem of adapting the model using the past M time steps to predict the next K timesteps. In this setting, M and K are pre-specified hyperparameters; see the appendix for a sensitivity analysis of these parameters.
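The construction of these per-timestep “tasks” from raw experience can be sketched as follows (a minimal illustration; `segment_tasks` and the toy transition records are our own names, not from the paper):

```python
# Sketch: constructing meta-learning "tasks" automatically from a trajectory
# by sliding an (M + K)-length window over the transitions.
def segment_tasks(transitions, M, K):
    tasks = []
    for t in range(M, len(transitions) - K + 1):
        adapt_data = transitions[t - M:t]   # past M steps: used to adapt the model
        eval_data = transitions[t:t + K]    # next K steps: used to score the adapted model
        tasks.append((adapt_data, eval_data))
    return tasks

transitions = list(range(10))               # stand-ins for (s, a, s') records
tasks = segment_tasks(transitions, M=3, K=2)
print(len(tasks))                           # 6 overlapping windows
```

Every timestep of every collected trajectory thus yields a (training, evaluation) pair for the meta-objective, with no hand-designed task boundaries.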
In this work, we use the notion of environment to denote different settings or configurations of a particular problem, ranging from malfunctions in the system's joints to the state of external disturbances. We assume a distribution of environments ρ(E) that share some common structure, such as the same observation and action space, but may differ in their dynamics p_E(s′ | s, a). We denote a trajectory segment by τ_E(i, j), which represents a sequence of states and actions sampled within an environment E. Our algorithm assumes that the environment is locally consistent, in that every segment of length j − i has the same environment. Even though this assumption is not always correct, it allows us to learn to adapt from data without knowing when the environment has changed. Due to the fast nature of our adaptation (less than a second), this assumption is seldom violated.
We pose the meta-RL problem in this setting as an optimization over (θ, ψ) with respect to a maximum likelihood meta-objective. The meta-objective is the likelihood of the data under a predictive model p̂_{θ′_E}(s′ | s, a) with parameters θ′_E, where θ′_E corresponds to model parameters that were updated using the past M data points. Concretely, this corresponds to the following optimization:
min_{θ,ψ}  E_{τ_E(t−M, t+K) ∼ D} [ L(τ_E(t, t+K), θ′_E) ]   s.t.   θ′_E = u_ψ(τ_E(t−M, t−1), θ)        (3)
Here, the expectation is over trajectory segments τ_E(t − M, t + K) sampled from our previous experience D, and the loss L corresponds to the negative log-likelihood of the data under the model:
L(τ_E(t, t+K), θ′_E) ≡ − (1/K) Σ_{k=t}^{t+K−1} log p̂_{θ′_E}(s_{k+1} | s_k, a_k)        (4)
In the meta-objective in Equation 3, note that the past M points are used to adapt θ into θ′_E, and the loss of this adapted model is evaluated on the future K points. Thus, we use the past M timesteps to provide insight into how to adapt our model to perform well for nearby future timesteps. As outlined in Algorithm 1, the update rule u_ψ for the inner update and a gradient step on θ for the outer update allow us to optimize this meta-objective of adaptation. Thus, we achieve fast adaptation at test time by being able to fine-tune the model using just M data points.
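A toy numerical sketch of this objective's inner step (our own 1-D example, not the authors' code: the dynamics, the prior parameter value, and the learning rate are all made up) shows how adapting on the past M steps reduces error on the following K steps:

```python
import numpy as np

# Sketch: adapt a 1-D linear dynamics model on the past M transitions with
# one gradient step, then evaluate prediction error on the next K transitions.
M, K = 16, 8
rng = np.random.default_rng(2)

def rollout(theta_true, T):
    # toy system s_{t+1} = theta_true * s_t + a_t, with random actions
    s, traj = 1.0, []
    for _ in range(T):
        a = rng.normal()
        s_next = theta_true * s + a
        traj.append((s, a, s_next))
        s = s_next
    return traj

def loss_and_grad(theta, segment):
    err = np.array([theta * s + a - sn for s, a, sn in segment])
    xs = np.array([s for s, _, _ in segment])
    return np.mean(err ** 2), 2 * np.mean(err * xs)

def adapt(theta, past, alpha=0.02):
    _, g = loss_and_grad(theta, past)
    return theta - alpha * g           # theta' in Equation 3

# The environment has changed: the prior parameter 0.5 stands in for the
# meta-learned model, but the current dynamics have theta_true = 0.9.
traj = rollout(0.9, M + K)
theta_prior = 0.5
theta_adapted = adapt(theta_prior, traj[:M])   # past M steps
pre, _ = loss_and_grad(theta_prior, traj[M:])  # error on next K steps
post, _ = loss_and_grad(theta_adapted, traj[M:])
print(post < pre)                      # adaptation reduces future prediction error
```

Meta-training additionally optimizes the prior parameters (here the fixed 0.5) so that this one-step adaptation is as effective as possible; the sketch only demonstrates the inner adaptation and evaluation split.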
While we focus on reinforcement learning problems in our experiments, this meta-learning approach could be used to learn to adapt online in a variety of sequence modeling domains. We present our algorithm using both a recurrence-based and a gradient-based meta-learner, as discussed next.
Gradient-Based Adaptive Learner (GrBAL).
GrBAL uses gradient-based meta-learning to perform online adaptation; in particular, the update rule u_ψ corresponds to the gradient step of Equation 2, here applied to the past M transitions.
Recurrence-Based Adaptive Learner (ReBAL).
ReBAL, instead, utilizes a recurrent model, which learns its own update rule (i.e., through its internal gating structure). In this case, ψ and u_ψ correspond to the weights of the recurrent model that update its hidden state.
5 Model-Based Meta-Reinforcement Learning
Now that we have discussed our approach for enabling online adaptation, we next propose how to build upon this idea to develop a model-based meta-reinforcement learning algorithm. First, we explain how the agent can use the adapted model to perform a task, given the parameters θ* and ψ* obtained by optimizing the meta-learning objective.
Given θ* and ψ*, we use the agent's recent experience to adapt the model parameters: θ′_E = u_ψ*(τ_E(t − M, t − 1), θ*). This results in a model p̂_{θ′_E} that better captures the local dynamics in the current setting, task, or environment. This adapted model is then passed to our controller, along with the reward function r and a planning horizon H. We use a planning horizon H that is smaller than the adaptation horizon K, since the adapted model is only valid within the current context. We use model predictive path integral control (MPPI) (Williams et al., 2015), but, in principle, our model adaptation approach is agnostic to the model predictive control (MPC) method used.
The use of MPC compensates for model inaccuracies by preventing accumulating errors, since we replan at each time step using updated state information. MPC also allows for further benefits in this setting of online adaptation, because the model itself will also improve by the next time step. After taking each step, we append the resulting state transition onto our dataset, reset the model parameters back to θ*, and repeat the entire planning process for each timestep. See Algorithm 2 for this adaptation procedure. Finally, in addition to test time, we also perform this online adaptation procedure during the meta-training phase itself, to provide on-policy rollouts for meta-training. For the complete meta-RL algorithm, see Algorithm 1.
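The adapt-then-plan control loop described above can be sketched as follows (a hedged toy, not the paper's implementation: a 1-D system, random-shooting MPC in place of MPPI, and illustrative constants throughout):

```python
import numpy as np

# Sketch of the control loop: at every step, adapt the model on the past M
# transitions, plan with the adapted model, execute the first action, then
# reset the parameters back to the meta-learned prior.
rng = np.random.default_rng(3)
M, H, N_CAND = 16, 5, 128              # adaptation window, plan horizon, candidates

def true_step(s, a):                   # hypothetical environment dynamics
    return 0.8 * s + a

def model_step(theta, s, a):           # learned 1-D model s' = theta * s + a
    return theta * s + a

def adapt(theta, past, alpha=0.02):
    err = np.array([theta * s + a - sn for s, a, sn in past])
    xs = np.array([s for s, _, _ in past])
    return theta - alpha * 2 * np.mean(err * xs)

def plan(theta, s):
    # Random shooting: sample action sequences, roll out the adapted model,
    # return the first action of the sequence with the lowest cost sum(s^2).
    best_a, best_cost = 0.0, np.inf
    for _ in range(N_CAND):
        seq = rng.uniform(-1, 1, size=H)
        sim, cost = s, 0.0
        for a in seq:
            sim = model_step(theta, sim, a)
            cost += sim ** 2
        if cost < best_cost:
            best_a, best_cost = seq[0], cost
    return best_a

theta_meta = 0.5                       # stands in for the meta-learned parameters
s, history = 5.0, []
for t in range(40):
    theta = adapt(theta_meta, history[-M:]) if history else theta_meta
    a = plan(theta, s)                 # plan with the locally adapted model
    s_next = true_step(s, a)
    history.append((s, a, s_next))     # parameters reset to theta_meta next step
    s = s_next
print(abs(s) < 1.0)                    # controller drives the state near zero
```

Note the reset of the parameters to the meta-learned prior at every step: adaptation always starts from θ* rather than accumulating, mirroring the procedure of Algorithm 2.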
6 Experiments
Our evaluation aims to answer the following questions: (1) Is adaptation actually changing the model? (2) Does our approach enable fast adaptation to varying dynamics, tasks, and environments, both inside and outside of the training distribution? (3) How does our method's performance compare to that of other methods? (4) How do GrBAL and ReBAL compare? (5) How does meta model-based RL compare to meta model-free RL in terms of sample efficiency and performance for these experiments? (6) Can our method learn to adapt online on a real robot, and if so, how does it perform? We next present our setup and results, motivated by these questions. Videos are available online (https://sites.google.com/berkeley.edu/metaadaptivecontrol), and further analysis is provided in the appendix. We first conduct a comparative evaluation of our algorithm on a variety of simulated robots using the MuJoCo physics engine (Todorov et al., 2012). For all of our environments, we model the transition probabilities as Gaussian random variables with mean parameterized by a neural network model (3 hidden layers of 512 units each and ReLU activations) and fixed variance. In this case, maximum likelihood estimation corresponds to minimizing the mean squared error. We now describe the setup of our environments (Fig. 2), where each agent requires different types of adaptation to succeed at runtime:

Half-cheetah (HC): disabled joint.
For each rollout during meta-training, we randomly sample a joint to be disabled (i.e., the agent cannot apply torques to that joint). At test time, we evaluate performance in two different situations: disabling a joint unseen during training, and switching between disabled joints during a rollout. The former examines extrapolation to out-of-distribution environments, and the latter tests fast adaptation to changing dynamics.
HC: sloped terrain.
For each rollout during meta-training, we randomly select an upward or downward slope of low steepness. At test time, we evaluate performance on unseen settings including a gentle upward slope, a steep upward slope, and a steep hill that first goes up and then down.
HC: pier.
In this experiment, the cheetah runs over a series of blocks that are floating on water. Each block moves up and down when stepped on, and the dynamics change rapidly because each block has different damping and friction properties. The HC is meta-trained by varying these block properties, and tested on a specific (randomly selected) configuration of properties.
Ant: crippled leg.
For each meta-training rollout, we randomly sample a leg to cripple on this quadrupedal robot. This causes unexpected and drastic changes to the underlying dynamics. We evaluate this agent at test time by crippling a leg from outside of the training distribution, as well as transitioning within a rollout from normal operation to having a crippled leg.
In the following sections, we evaluate our model-based meta-RL methods (GrBAL and ReBAL) in comparison to several prior methods:


Model-free RL (TRPO): To evaluate the importance of adaptation, we compare to a model-free RL agent that is trained across environments using TRPO (Schulman et al., 2015).

Model-free meta-RL (MAML-RL): We compare to a state-of-the-art model-free meta-RL method, MAML-RL (Finn et al., 2017).

Model-based RL (MB): Similar to the model-free agent, we also compare to a single model-based RL agent, to evaluate the importance of adaptation. This model is trained using supervised model-error and iterative model bootstrapping.

Model-based RL with dynamic evaluation (MB+DE): We compare to an agent trained with model-based RL, as above. However, at test time, the model is adapted by taking a gradient step at each timestep using the past observations, akin to dynamic evaluation (Krause et al., 2017). This final comparison evaluates the benefit of explicitly training for adaptability.
All model-based approaches (MB, MB+DE, GrBAL, and ReBAL) use model bootstrapping, use the same neural network architecture, and use the same planner within experiments: MPPI (Williams et al., 2015) for the simulated experiments and random shooting (RS) (Nagabandi et al., 2017a) for the real-world experiments.
6.1 Effect of Adaptation
First, we analyze the effect of the model adaptation, and show results from test-time runs on three environments: HC pier, HC sloped terrain with a steep up/down hill, and ant crippled leg with the chosen leg not seen as crippled during training. Figure 3 displays the distribution shift between the pre-update and post-update model prediction errors of three GrBAL runs, showing that using the past M timesteps to update θ (pre) into θ′_E (post) does indeed reduce model error on predicting the following K timesteps.
6.2 Performance and Meta-training Sample Efficiency
We first study the sample efficiency of the meta-training process. Figure 4 shows the average return across test environments with respect to the amount of data used for meta-training. We (meta-)train the model-free methods (TRPO and MAML-RL) until convergence, using the equivalent of about two days of real-world experience. In contrast, we meta-train the model-based methods (including our approach) using the equivalent of 1.5-3 hours of real-world experience. Our methods result in superior or equivalent performance to the model-free agent, which is trained with substantially more data. Our methods also surpass the performance of the non-meta-learned model-based approaches. Finally, our performance closely matches the high asymptotic performance of the model-free meta-RL method for half-cheetah disabled, and achieves suboptimal performance for ant crippled, but, again, it does so with substantially less data. Note that this suboptimality in asymptotic performance is a known issue with model-based methods, and thus an interesting direction for future efforts. The improvement in sample efficiency from using model-based methods matches prior findings (Deisenroth and Rasmussen, 2011; Nagabandi et al., 2017a; Kurutach et al., 2018); the most important evaluation, which we discuss in more detail next, is the ability of our method to adapt online to drastic dynamics changes in only a handful of timesteps.
6.3 Test-time Performance: Online Adaptation & Generalization
In our second comparative evaluation, we evaluate the final test-time performance of both GrBAL and ReBAL in comparison to the aforementioned methods. In the interest of developing efficient algorithms for real-world applications, we operate all methods in the low-data regime for all experiments: the amount of data available for (meta-)training is fixed across methods, and roughly corresponds to 1.5-3 hours of real-world experience, depending on the domain. We also provide the performance of an MB oracle, which is trained using unlimited data from only the given test environment (rather than needing to generalize to various training environments).
In these experiments, note that all agents were meta-trained on a distribution of tasks/environments (as detailed above), but we then evaluate their adaptation ability on unseen environments at test time. We test the ability of each approach to adapt to sudden changes in the environment, as well as to generalize beyond the training environments. We evaluate the fast adaptation (F.A.) component on the HC disabled joint, ant crippled leg, and HC pier environments. On the first two, we cause a joint/leg of the robot to malfunction in the middle of a rollout. We evaluate the generalization component also on the tasks of HC disabled joint and ant crippled leg, but this time, the leg/joint that malfunctions has not been seen as crippled during training. The last environment that we test generalization on is the HC sloped terrain for a hill, where the agent has to run up and down a steep slope, which is outside of the gentle slopes that it experienced during training. The results, shown in Fig. 5, report returns normalized such that the MB oracle achieves a return of 1.
In all experiments, due to the low quantity of training data, TRPO performs poorly. Although MB+DE achieves better generalization than MB, the slow nature of its adaptation causes it to fall behind MB in the environments that require fast adaptation. Our approach, on the other hand, surpasses the other approaches in all of the experiments. In fact, in the HC pier and the fast adaptation of ant environments, our approach even surpasses the model-based oracle. This result showcases the importance of adaptation in stochastic environments, where even a model trained with a lot of data cannot be robust to unexpected occurrences or disturbances. ReBAL displays its strengths in scenarios where longer sequential inputs allow it to better assess current environment settings, but overall, GrBAL seems to perform better for both generalization and fast adaptation.
6.4 Real-World Results
To test our meta modelbased RL method’s sample efficiency, as well as its ability to perform fast and effective online adaptation, we applied GrBAL to a real legged millirobot, comparing it to modelbased RL (MB) and modelbased RL with dynamic evaluation (MB+DE). Due to the cost of running real robot experiments, we chose the better performing method (i.e., GrBAL) to evaluate on the real robot. This small 6legged robot, as shown in Fig. 1 and Fig. 2, presents a modeling and control challenge in the form of highly stochastic and dynamic movement. This robot is an excellent candidate for online adaptation for many reasons: the rapid manufacturing techniques and numerous customdesign steps used to construct this robot make it impossible to reproduce the same dynamics each time, its linkages and other body parts deteriorate over time, and it moves very quickly and dynamically with
The state space of the robot is a 24dimensional vector, including center of mass positions and velocities, center of mass pose and angular velocities, backEMF readings of motors, encoder readings of leg motor angles and velocities, and battery voltage. We define the action space to be velocity setpoints of the rotating legs. The action space has a dimension of two, since one motor on each side is coupled to all three of the legs on that side. All experiments are conducted in a motion capture room. Computation is done on an external computer, and the velocity setpoints are streamed over radio at 10 Hz to be executed by a PID controller on the microcontroller onboard of the robot.
We metatrain a dynamics model for this robot using the metaobjective described in Equation 3, and we train it to adapt on entirely realworld data from three different training terrains: carpet, styrofoam, and turf. We collect approximately 30 minutes of data from each of the three training terrains. This data was entirely collected using a random policy, in conjunction with a safety policy, whose sole purpose was to prevent the robot from exiting the area of interest.
Our first group of results (Table 1) show that, when data from a random policy is used to train a dynamics model, both a model trained with a standard supervised learning objective (MB) and a GrBAL model achieve comparable performance for executing desired trajectories on terrains from the training distribution.
Next, we test the performance of our method on what it is intended for: fast online adaptation of the learned model to enable successful execution of new, changing, or outofdistribution environments at test time. Similar to the comparisons above, we compare GrBAL to a modelbased method (MB) that involves neither metatraining nor online adaptation, as well as a dynamic evaluation method that involves online adaptation of that MB model (MB+DE). Our results (Fig. 6) demonstrate that GrBAL substantially outperforms MB and MB+DE, and, unlike MB and MB+DE, and that GrBAL can quickly 1) adapt online to a missing leg, 2) adjust to novel terrains and slopes, 3) account for miscalibration or errors in pose estimation, and 4) compensate for pulling payloads. None of these environments were seen during training time, but the agent’s ability to learn how to learn enables it to quickly leverage its prior knowledge and finetune to adapt to new environments online. Furthermore, the poor performance of the MB and MB+DE baselines demonstrate not only the need for adaptation, but also the importance of good initial parameters to adapt from (in this case, metalearned parameters). The qualitative results of these experiments in Fig. 7 show that the robot is able to use our method to adapt online and effectively follow the target trajectories, even in the presence of new environments and unexpected perturbations at test time.
The robot uses bounding-style gaits; hence, its dynamics are strongly dependent on the terrain or environment at hand.
Terrain  Method  Left  Str  Zz  F8
Carpet  GrBAL  4.07  3.26  7.08  5.28
Carpet  MB  3.94  3.26  6.56  5.21
Styrofoam  GrBAL  3.90  3.75  7.55  6.01
Styrofoam  MB  4.09  4.06  7.48  6.54
Turf  GrBAL  1.99  1.65  2.79  3.40
Turf  MB  1.87  1.69  3.52  2.61
7 Conclusion
In this work, we present an approach for model-based meta-RL that enables fast, online adaptation of large and expressive models in dynamic environments. We show that meta-learning a model for online adaptation results in a method that is able to adapt to unseen situations or to sudden and drastic changes in the environment, and that is also sample-efficient to train. We provide two instantiations of our approach (ReBAL and GrBAL), and we provide a comparison with prior methods on a range of continuous control tasks. Finally, we show that, compared to model-free meta-RL approaches, our approach is practical for real-world applications, and that this capability to adapt quickly is particularly important under complex real-world dynamics.
References
 Al-Shedivat et al. [2017] M. Al-Shedivat, T. Bansal, Y. Burda, I. Sutskever, I. Mordatch, and P. Abbeel. Continuous adaptation via meta-learning in non-stationary and competitive environments. CoRR, abs/1710.03641, 2017.
 Andrychowicz et al. [2016] M. Andrychowicz, M. Denil, S. G. Colmenarejo, M. W. Hoffman, D. Pfau, T. Schaul, and N. de Freitas. Learning to learn by gradient descent by gradient descent. CoRR, abs/1606.04474, 2016.
 Åström and Wittenmark [2013] K. J. Åström and B. Wittenmark. Adaptive control. Courier Corporation, 2013.
 Aswani et al. [2012] A. Aswani, P. Bouffard, and C. Tomlin. Extensions of learning-based model predictive control for real-time application to a quadrotor helicopter. In American Control Conference (ACC), 2012. IEEE, 2012.
 Baker et al. [2016] B. Baker, O. Gupta, N. Naik, and R. Raskar. Designing neural network architectures using reinforcement learning. arXiv preprint arXiv:1611.02167, 2016.
 Bengio et al. [1990] Y. Bengio, S. Bengio, and J. Cloutier. Learning a synaptic learning rule. Université de Montréal, Département d'informatique et de recherche opérationnelle, 1990.
 Braun et al. [2009] D. A. Braun, A. Aertsen, D. M. Wolpert, and C. Mehring. Learning optimal adaptation strategies in unpredictable motor tasks. Journal of Neuroscience, 2009.
 Chua et al. [2018] K. Chua, R. Calandra, R. McAllister, and S. Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. arXiv preprint arXiv:1805.12114, 2018.
 Deisenroth and Rasmussen [2011] M. Deisenroth and C. E. Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In International Conference on Machine Learning (ICML), pages 465–472, 2011.
 Deisenroth et al. [2013] M. P. Deisenroth, G. Neumann, J. Peters, et al. A survey on policy search for robotics. Foundations and Trends® in Robotics, 2(1–2):1–142, 2013.
 Doerr et al. [2017] A. Doerr, D. Nguyen-Tuong, A. Marco, S. Schaal, and S. Trimpe. Model-based policy search for automatic tuning of multivariate PID controllers. CoRR, abs/1703.02899, 2017. URL http://arxiv.org/abs/1703.02899.
 Duan et al. [2016] Y. Duan, J. Schulman, X. Chen, P. L. Bartlett, I. Sutskever, and P. Abbeel. RL$^2$: Fast reinforcement learning via slow reinforcement learning. CoRR, abs/1611.02779, 2016.
 Finn and Levine [2017] C. Finn and S. Levine. Meta-learning and universality: Deep representations and gradient descent can approximate any learning algorithm. CoRR, abs/1710.11622, 2017.
 Finn et al. [2017] C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. CoRR, abs/1703.03400, 2017.
 Fortunato et al. [2017] M. Fortunato, C. Blundell, and O. Vinyals. Bayesian recurrent neural networks. arXiv preprint arXiv:1704.02798, 2017.
 Fu et al. [2015] J. Fu, S. Levine, and P. Abbeel. One-shot learning of manipulation skills with online dynamics adaptation and neural network priors. CoRR, abs/1509.06841, 2015.
 Gu et al. [2016] S. Gu, T. Lillicrap, I. Sutskever, and S. Levine. Continuous deep Q-learning with model-based acceleration. In International Conference on Machine Learning, pages 2829–2838, 2016.
 Kelouwani et al. [2012] S. Kelouwani, K. Adegnon, K. Agbossou, and Y. Dube. Online system identification and adaptive control for PEM fuel cell maximum efficiency tracking. IEEE Transactions on Energy Conversion, 27(3):580–592, 2012.
 Ko and Fox [2009] J. Ko and D. Fox. GP-BayesFilters: Bayesian filtering using Gaussian process prediction and observation models. Autonomous Robots, 27(1):75–90, 2009.
 Krause et al. [2016] B. Krause, L. Lu, I. Murray, and S. Renals. Multiplicative LSTM for sequence modelling. arXiv preprint arXiv:1609.07959, 2016.
 Krause et al. [2017] B. Krause, E. Kahembwe, I. Murray, and S. Renals. Dynamic evaluation of neural sequence models. CoRR, abs/1709.07432, 2017.
 Kurutach et al. [2018] T. Kurutach, I. Clavera, Y. Duan, A. Tamar, and P. Abbeel. Model-ensemble trust-region policy optimization. arXiv preprint arXiv:1802.10592, 2018.
 Lake et al. [2015] B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 2015.
 Lenz et al. [2015] I. Lenz, R. A. Knepper, and A. Saxena. DeepMPC: Learning deep latent features for model predictive control. In Robotics: Science and Systems, 2015.
 Levine and Koltun [2013] S. Levine and V. Koltun. Guided policy search. In International Conference on Machine Learning, pages 1–9, 2013.
 Levine et al. [2016] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research (JMLR), 2016.
 Li and Malik [2016] K. Li and J. Malik. Learning to optimize. arXiv preprint arXiv:1606.01885, 2016.
 Lillicrap et al. [2015] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. CoRR, abs/1509.02971, 2015.
 Manganiello et al. [2014] P. Manganiello, M. Ricco, G. Petrone, E. Monmasson, and G. Spagnuolo. Optimization of perturbative PV MPPT methods through online system identification. IEEE Trans. Industrial Electronics, 61(12):6812–6821, 2014.
 Meier and Schaal [2016] F. Meier and S. Schaal. Drifting Gaussian processes with varying neighborhood sizes for online model learning. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2016. IEEE, May 2016.
 Meier et al. [2016] F. Meier, D. Kappler, N. Ratliff, and S. Schaal. Towards robust online inverse dynamics learning. In Proceedings of the IEEE/RSJ Conference on Intelligent Robots and Systems. IEEE, 2016.
 Mishra et al. [2017] N. Mishra, M. Rohaninejad, X. Chen, and P. Abbeel. A simple neural attentive meta-learner. In NIPS 2017 Workshop on Meta-Learning, 2017.
 Mnih et al. [2015] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 2015.
 Munkhdalai and Yu [2017] T. Munkhdalai and H. Yu. Meta networks. arXiv preprint arXiv:1703.00837, 2017.
 Munkhdalai et al. [2017] T. Munkhdalai, X. Yuan, S. Mehri, T. Wang, and A. Trischler. Learning rapid-temporal adaptations. arXiv preprint arXiv:1712.09926, 2017.
 Nagabandi et al. [2017a] A. Nagabandi, G. Kahn, R. S. Fearing, and S. Levine. Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. CoRR, abs/1708.02596, 2017a.
 Nagabandi et al. [2017b] A. Nagabandi, G. Yang, T. Asmar, R. Pandya, G. Kahn, S. Levine, and R. S. Fearing. Learning image-conditioned dynamics models for control of underactuated legged millirobots. arXiv preprint arXiv:1711.05253, 2017b.
 Naik and Mammone [1992] D. K. Naik and R. Mammone. Meta-neural networks that learn by learning. In Neural Networks, 1992. IJCNN., International Joint Conference on, volume 1, pages 437–442. IEEE, 1992.
 Pastor et al. [2011] P. Pastor, L. Righetti, M. Kalakrishnan, and S. Schaal. Online movement adaptation based on previous sensor experiences. In IEEE International Conference on Intelligent Robots and Systems (IROS), pages 365–371, 9 2011.
 Peters and Schaal [2008] J. Peters and S. Schaal. Reinforcement learning of motor skills with policy gradients. Neural Networks, 2008.
 Rai et al. [2017] A. Rai, G. Sutanto, S. Schaal, and F. Meier. Learning feedback terms for reactive planning and control. In Proceedings 2017 IEEE International Conference on Robotics and Automation (ICRA), Piscataway, NJ, USA, May 2017. IEEE.
 Ravi and Larochelle [2018] S. Ravi and H. Larochelle. Optimization as a model for few-shot learning. International Conference on Learning Representations (ICLR), 2018.
 Rei [2015] M. Rei. Online representation learning in recurrent neural language models. CoRR, abs/1508.03854, 2015.
 Sæmundsson et al. [2018] S. Sæmundsson, K. Hofmann, and M. P. Deisenroth. Meta reinforcement learning with latent variable Gaussian processes. arXiv preprint arXiv:1803.07551, 2018.
 Santoro et al. [2016] A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap. One-shot learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065, 2016.
 Sastry and Isidori [1989] S. S. Sastry and A. Isidori. Adaptive control of linearizable systems. IEEE Transactions on Automatic Control, 1989.
 Schmidhuber [1992] J. Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 1992.
 Schmidhuber and Huber [1991] J. Schmidhuber and R. Huber. Learning to generate artificial fovea trajectories for target detection. International Journal of Neural Systems, 1991.
 Schulman et al. [2015] J. Schulman, S. Levine, P. Moritz, M. I. Jordan, and P. Abbeel. Trust region policy optimization. CoRR, abs/1502.05477, 2015.
 Silver et al. [2017] D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, et al. Mastering the game of Go without human knowledge. Nature, 2017.
 Sung et al. [2017] F. Sung, L. Zhang, T. Xiang, T. Hospedales, and Y. Yang. Learning to learn: Meta-critic networks for sample efficient learning. arXiv preprint arXiv:1706.09529, 2017.
 Tanaskovic et al. [2013] M. Tanaskovic, L. Fagiano, R. Smith, P. Goulart, and M. Morari. Adaptive model predictive control for constrained linear systems. In Control Conference (ECC), 2013 European. IEEE, 2013.
 Thrun and Pratt [1998] S. Thrun and L. Pratt. Learning to learn: Introduction and overview. In Learning to Learn. Springer, 1998.
 Todorov et al. [2012] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In IROS, pages 5026–5033. IEEE, 2012.
 Underwood and Husain [2010] S. J. Underwood and I. Husain. Online parameter estimation and adaptive control of permanent-magnet synchronous machines. IEEE Transactions on Industrial Electronics, 57(7):2435–2443, 2010.
 Wang et al. [2016] J. X. Wang, Z. Kurth-Nelson, D. Tirumala, H. Soyer, J. Z. Leibo, R. Munos, C. Blundell, D. Kumaran, and M. Botvinick. Learning to reinforcement learn. arXiv preprint arXiv:1611.05763, 2016.
 Weinstein and Botvinick [2017] A. Weinstein and M. Botvinick. Structure learning in motor control: A deep reinforcement learning model. CoRR, abs/1706.06827, 2017.
 Williams et al. [2015] G. Williams, A. Aldrich, and E. Theodorou. Model predictive path integral control using covariance variable importance sampling. CoRR, abs/1509.01149, 2015.
 Williams et al. [2017] G. Williams, N. Wagener, B. Goldfain, P. Drews, J. M. Rehg, B. Boots, and E. A. Theodorou. Information theoretic MPC for model-based reinforcement learning. In International Conference on Robotics and Automation (ICRA), 2017.
 Younger et al. [2001] A. S. Younger, S. Hochreiter, and P. R. Conwell. Meta-learning with backpropagation. In International Joint Conference on Neural Networks. IEEE, 2001.
Appendix A Model Prediction Errors: Pre-update vs. Post-update
In this section, we show the effect of adaptation in the case of GrBAL. In particular, we show the histogram of the normalized step error, as well as the per-timestep visualization of this error during a trajectory. Across all tasks and environments, the post-update model achieves lower prediction error than the pre-update model.
Appendix B Effect of Meta-Training Distribution
To see how the training distribution affects test performance, we ran an experiment that used GrBAL to train models of the 7-DoF arm, where each model was trained on the same number of datapoints during meta-training, but those datapoints came from different ranges of force perturbations. We observe (in the plot below) that:
1. Seeing more during training is helpful during testing: a model that saw a large range of force perturbations during training performed the best.
2. A model that saw no perturbation forces during training did the worst.
3. The middle three models show comparable performance in the "constant force = 4" case, which is an out-of-distribution task for those models. Thus, there is not actually a strong restriction on what needs to be seen during training in order for adaptation to occur at test time (though there is a general trend that more is better).
Appendix C Sensitivity of K and M
In this section we analyze how sensitive our algorithm is w.r.t. the hyperparameters K and M. In all experiments of the paper, we set K equal to M. Figure 11 shows the average return of GrBAL across meta-training iterations of our algorithm for different values of K = M. The performance of the agent is largely unaffected by different values of these hyperparameters, suggesting that our algorithm is not particularly sensitive to these values. For different agents, the optimal value of these hyperparameters depends on various task details, such as the amount of information present in the state (a fully informed state variable precludes the need for additional past timesteps) and the duration of a single timestep (a longer timestep duration makes it harder to predict many steps into the future).
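To make the roles of K and M concrete, the following hypothetical helper shows how a trajectory would be sliced into training segments: the first K transitions of each segment are used for the adaptation (inner) step, and the next M transitions for evaluating the adapted model. This is an illustrative utility, not code from our implementation.

```python
def make_segments(transitions, K, M):
    """Slice a trajectory (a list of transitions) into overlapping
    segments of K adaptation steps followed by M evaluation steps."""
    if len(transitions) < K + M:
        return []
    return [
        (transitions[t:t + K], transitions[t + K:t + K + M])
        for t in range(len(transitions) - K - M + 1)
    ]
```

With K = M (as used in all experiments of the paper), each segment simply splits a window of 2K consecutive timesteps in half.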
Appendix D Reward Functions
For each MuJoCo agent, the same reward function is used across its various tasks. Table 2 shows the reward functions used for each agent. We denote by x_t the x-coordinate of the agent at time t, by x_ee the position of the end-effector of the 7-DoF arm, and by x_g the position of the desired goal.
Agent  Reward function
Half-cheetah  (x_{t+1} - x_t)/dt - 0.05 ||a_t||^2
Ant  (x_{t+1} - x_t)/dt - 0.005 ||a_t||^2 + 0.05
7-DoF Arm  -||x_ee - x_g||
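For reference, rewards of this general family (forward progress per unit time minus a quadratic control cost for the locomotion agents, and negative distance to goal for the arm) can be computed as below; the `dt` value and control-cost coefficient are illustrative placeholders rather than the exact constants from our experiments.

```python
import numpy as np

def locomotion_reward(x_before, x_after, action, dt=0.01, ctrl_cost=0.05):
    """Forward progress per unit time minus a quadratic control penalty."""
    return (x_after - x_before) / dt - ctrl_cost * float(np.sum(np.square(action)))

def reaching_reward(ee_pos, goal_pos):
    """Negative Euclidean distance between end-effector and goal."""
    return -float(np.linalg.norm(np.asarray(ee_pos) - np.asarray(goal_pos)))
```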
Appendix E Hyperparameters
Below, we list the hyperparameters of our experiments. In all experiments we used a single gradient step for the update rule of GrBAL. The learning rate (LR) of TRPO corresponds to the Kullback–Leibler divergence constraint. # Tasks/itr corresponds to the number of tasks sampled for collecting data to train the model, whereas # TS/itr is the total number of time steps collected (across all tasks). Finally, T refers to the horizon of the task.
LR  Inner LR  Epochs  K  M  Batch Size  # Tasks/itr  # TS/itr  T  Train  H Train  Test  H Test
GrBAL  0.001  0.01  50  32  32  500  32  64000  1000  1000  10  2500  15 
ReBAL  0.001    50  32  32  500  32  64000  1000  1000  10  2500  15 
MB  0.001    50      500  64  64000  1000  1000  10  2500  15 
TRPO  0.05          50000  50  50000  1000         
LR  Inner LR  Epochs  K  M  Batch Size  # Tasks/itr  # TS/itr  T  Train  H Train  Test  H Test  
GrBAL  0.001  0.001  50  10  16  500  32  24000  500  1000  15  1000  15 
ReBAL  0.001    50  32  16  500  32  32000  500  1000  15  1000  15 
MB  0.001    70      500  10  10000  500  1000  15  1000  15 
TRPO  0.05          50000  50  50000  500         
LR  Inner LR  Epochs  K  M  Batch Size  # Tasks/itr  # TS/itr  T  Train  H Train  Test  H Test  
GrBAL  0.001  0.001  50  32  16  1500  32  24000  500  1000  15  1000  15 
ReBAL  0.001    50  32  16  1500  32  24000  500  1000  15  1000  15 
MB  0.001    70      10000  10  10000  500  1000  15  1000  15 
TRPO  0.05          50000  50  50000  500         