Model-based reinforcement learning approaches carry the promise of being data efficient. However, due to challenges in learning dynamics models that sufficiently match the real-world dynamics, they struggle to achieve the same asymptotic performance as model-free methods. We propose Model-Based Meta-Policy-Optimization (MB-MPO), an approach that foregoes the strong reliance on accurate learned dynamics models. Using an ensemble of learned dynamic models, MB-MPO meta-learns a policy that can quickly adapt to any model in the ensemble with one policy gradient step. This steers the meta-policy towards internalizing consistent dynamics predictions among the ensemble while shifting the burden of behaving optimally w.r.t. the model discrepancies towards the adaptation step. Our experiments show that MB-MPO is more robust to model imperfections than previous model-based approaches. Finally, we demonstrate that our approach is able to match the asymptotic performance of model-free methods while requiring significantly less experience.
Model-free (MF) algorithms tend to achieve optimal performance, are generally applicable, and are easy to implement. However, this is achieved at the cost of being data intensive, which is exacerbated when combined with high-capacity function approximators like neural networks. Their high sample complexity presents a major barrier to their application to robotic control tasks, where data gathering is expensive.
In contrast, model-based (MB) reinforcement learning methods are able to learn with significantly fewer samples by using a learned model of the environment dynamics against which policy optimization is performed. Learning dynamics models can be done in a sample-efficient way since they are trained with standard supervised learning techniques, allowing the use of off-policy data. However, accurate dynamics models can often be far more complex than good policies. For instance, pouring water into a cup can be achieved by a fairly simple policy while modeling the underlying dynamics of this task is highly complex. Hence, model-based methods have only been able to learn good policies on a much more limited set of problems, and even when good policies are learned, they typically saturate in performance at a level well below their model-free counterparts [4, 5].
Model-based approaches tend to rely on accurate (learned) dynamics models to solve a task. If the dynamics model is not sufficiently precise, the policy optimization is prone to overfit on the deficiencies of the model, leading to suboptimal behavior or even to catastrophic failures. This problem is known in the literature as model-bias. Previous work has tried to alleviate model-bias by characterizing the uncertainty of the models and learning a robust policy [6, 7, 8, 9, 10], often using ensembles to represent the posterior. This paper also uses ensembles, but very differently.
We propose Model-Based Meta-Policy-Optimization (MB-MPO), an orthogonal approach to previous model-based RL methods: while traditional model-based RL methods rely on the learned dynamics models to be sufficiently accurate to enable learning a policy that also succeeds in the real world, we forego reliance on such accuracy. We are able to do so by learning an ensemble of dynamics models and framing the policy optimization step as a meta-learning problem. Meta-learning, in the context of RL, aims to learn a policy that adapts fast to new tasks or environments [11, 12, 13, 14, 15]. Using the models as learned simulators, MB-MPO learns a policy that can be quickly adapted to any of the fitted dynamics models with one gradient step. This optimization objective steers the meta-policy towards internalizing the parts of the dynamics prediction that are consistent among the ensemble while shifting the burden of behaving optimally w.r.t. discrepancies between models towards the adaptation step. This way, the learned policy exhibits less model-bias without the need to behave conservatively. While much is shared with previous MB methods in terms of how trajectory samples are collected and the dynamics models are trained, the use of (and reliance on) learned dynamics models for the policy optimization is fundamentally different.
In this paper we show that 1) model-based policy optimization can learn policies that match the asymptotic performance of model-free methods while being substantially more sample efficient, 2) MB-MPO consistently outperforms previous model-based methods on challenging control tasks, 3) learning is still possible when the models are strongly biased. The low sample complexity of our method makes it applicable to real-world robotics. For instance, we are able to learn an optimal policy for high-dimensional and complex quadrupedal locomotion from two hours of real-world data. Note that the amount of data required to learn such a policy with model-free methods is 10-100 times higher, and, to the best knowledge of the authors, no prior model-based method has been able to attain model-free performance on such tasks.
In this section, we discuss related work, including model-based RL and approaches that combine elements of model-based and model-free RL. Finally, we outline recent advances in the field of meta-learning.
Model-Based Reinforcement Learning: Addressing Model Inaccuracies. Impressive results with model-based RL have been obtained using simple linear models [16, 17, 18, 19]. However, like Bayesian models [6, 20, 21], their application is limited to low-dimensional domains. Our approach, which uses neural networks (NNs), is easily able to scale to complex high dimensional control problems. NNs for model learning offer the potential to scale to higher dimensional problems with impressive sample complexity [22, 23, 24, 25]. A major challenge when using high-capacity dynamics models is preventing policies from exploiting model inaccuracies. Several works approach this problem of model-bias by learning a distribution of models [26, 7, 10, 23], or by learning adaptive models [27, 28, 29]. We incorporate the idea of reducing model-bias by learning an ensemble of models. However, we show that these techniques do not suffice in challenging domains, and demonstrate the necessity of meta-learning for improving asymptotic performance.
Past work has also tried to overcome model inaccuracies through the policy optimization process. Model Predictive Control (MPC) compensates for model imperfections by re-planning at each step, but it suffers from limited credit assignment and high computational cost. Robust policy optimization [7, 8, 9] looks for a policy that performs well across models; as a result, policies tend to be over-conservative. In contrast, we show that MB-MPO learns a robust policy in the regions where the models agree, and an adaptive one where the models yield substantially different predictions.
Model-Based + Model-Free Reinforcement Learning. Naturally, it is desirable to combine elements of model-based and model-free methods to attain high performance with low sample complexity. Attempts to combine them can be broadly categorized into three main approaches. First, differentiable trajectory optimization methods propagate the gradients of the policy or value function through the learned dynamics model [31, 32]. However, the models are not explicitly trained to approximate first-order derivatives, and, when backpropagating through them, they suffer from exploding and vanishing gradients. Second, model-assisted MF approaches use the dynamics models to augment the real environment data by imagining policy roll-outs [33, 29, 34, 22]. These methods still rely to a large degree on real-world data, which makes them impractical for real-world applications. Thanks to meta-learning, our approach could, if needed, adapt fast to the real world with fewer samples. Third, recent work fully decouples the MF module from the real environment by entirely using samples from the learned models [35, 10]. These methods, even though they consider model uncertainty, still rely on precise estimates of the dynamics to learn the policy. In contrast, we meta-learn a policy on an ensemble of models, which alleviates the strong reliance on precise models by training for adaptation when the prediction uncertainty is high. Kurutach et al. can be viewed as an edge case of our algorithm in which no adaptation is performed.
Our approach makes use of meta-learning to address model inaccuracies. Meta-learning algorithms aim to learn models that can adapt to new scenarios or tasks with few data points. Current meta-learning algorithms can be classified into three categories. One approach involves training a recurrent or memory-augmented network that ingests a training dataset and outputs the parameters of a learner model [36, 37]. Another set of methods feeds the dataset followed by the test data into a recurrent model that outputs the predictions for the test inputs [12, 38]. The last category embeds the structure of optimization problems into the meta-learning algorithm [11, 39, 40]. These algorithms have been extended to the context of RL [12, 13, 15, 11]. Our work builds upon MAML. However, while in previous meta-learning methods each task is typically defined by a different reward function, each of our tasks is defined by the dynamics of a different learned model.
A discrete-time finite Markov decision process (MDP) is defined by the tuple $(\mathcal{S}, \mathcal{A}, p, r, \rho_0, \gamma, H)$. Here, $\mathcal{S}$ is the set of states, $\mathcal{A}$ the action space, $p(s_{t+1} | s_t, a_t)$ the transition distribution, $r(s_t, a_t)$ is a reward function, $\rho_0$ represents the initial state distribution, $\gamma$ the discount factor, and $H$ is the horizon of the process. We define the return as the sum of rewards $\sum_{t=0}^{H-1} \gamma^t r(s_t, a_t)$ along a trajectory $\tau := (s_0, a_0, \dots, s_{H-1}, a_{H-1}, s_H)$. The goal of reinforcement learning is to find a policy $\pi$ that maximizes the expected return.
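As a small concrete illustration of the return defined above, the following sketch (names are our own, not the paper's) computes the discounted return of a single trajectory of rewards:

```python
def discounted_return(rewards, gamma):
    """Sum of discounted rewards R = sum_t gamma^t * r_t along one trajectory."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# Example: a three-step trajectory with reward 1.0 at every step and gamma = 0.5
# gives R = 1 + 0.5 + 0.25 = 1.75.
print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # 1.75
```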
While model-free RL does not explicitly model state transitions, model-based RL methods learn the transition distribution, also known as the dynamics model, from the observed transitions. This can be done with a parametric function approximator $\hat{p}_\phi(s_{t+1} | s_t, a_t)$. In such a case, the parameters $\phi$ of the dynamics model are optimized to maximize the log-likelihood of the observed state transitions.
Meta-RL aims to learn a learning algorithm which is able to quickly learn optimal policies in MDPs drawn from a distribution $\rho(\mathcal{T})$ over a set of MDPs. The MDPs may differ in their reward function $r(s_t, a_t)$ and transition distribution $p(s_{t+1} | s_t, a_t)$, but share the same action space $\mathcal{A}$ and state space $\mathcal{S}$.
Our approach builds on the gradient-based meta-learning framework MAML, which, in the RL setting, trains a parametric policy $\pi_\theta$ to quickly improve its performance on a new task with one or a few vanilla policy gradient steps. The meta-training objective for MAML can be written as:

$$\max_\theta \; \mathbb{E}_{\mathcal{T} \sim \rho(\mathcal{T})} \left[ J_{\mathcal{T}}(\theta') \right] \quad \text{with} \quad \theta' = \theta + \alpha \, \nabla_\theta J_{\mathcal{T}}(\theta)$$

where $J_{\mathcal{T}}(\theta)$ denotes the expected return of $\pi_\theta$ in task $\mathcal{T}$ and $\alpha$ the inner learning rate.
MAML attempts to learn an initialization $\theta$ such that, for any task $\mathcal{T}$, the policy attains maximum performance in that task after one policy gradient step.
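The mechanics of the MAML objective can be seen on a toy problem. The sketch below (our own illustrative setup, not the paper's) uses quadratic "tasks" $J_k(\theta) = -\tfrac{1}{2}\|\theta - c_k\|^2$ with analytic gradients, performs the one-step inner adaptation, and ascends the meta-objective; the meta-initialization converges to the point from which one gradient step best improves every task:

```python
import numpy as np

# Toy gradient-based meta-learning (MAML-style) on quadratic tasks.
# Task k has performance J_k(theta) = -0.5 * ||theta - c_k||^2, maximized at c_k.
# All names and the quadratic tasks are illustrative stand-ins.

def grad_J(theta, c):
    # Gradient of J_k w.r.t. theta for the quadratic task: -(theta - c_k)
    return -(theta - c)

def maml_meta_gradient(theta, centers, alpha):
    """Analytic meta-gradient of (1/K) sum_k J_k(theta'_k),
    with the inner adaptation step theta'_k = theta + alpha * grad_J(theta, c_k)."""
    grads = []
    for c in centers:
        theta_adapted = theta + alpha * grad_J(theta, c)   # inner (adaptation) step
        # Chain rule: d theta'_k / d theta = (1 - alpha) * I for this quadratic
        grads.append((1.0 - alpha) * grad_J(theta_adapted, c))
    return np.mean(grads, axis=0)

rng = np.random.default_rng(0)
centers = [np.array([1.0, 0.0]), np.array([-1.0, 2.0])]
theta = rng.normal(size=2)
for _ in range(500):
    theta = theta + 0.1 * maml_meta_gradient(theta, centers, alpha=0.5)

# For these symmetric quadratics the meta-initialization converges to the
# mean of the task optima, here (0.0, 1.0).
print(theta)
```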
Enabling complex and high-dimensional real robotics tasks requires extending current model-based methods to the capabilities of model-free ones while, at the same time, maintaining their data efficiency. Our approach, model-based meta-policy-optimization (MB-MPO), attains this goal by framing model-based RL as meta-learning a policy on a distribution of dynamics models, advocating to maximize the policy adaptation, instead of robustness, when models disagree. This not only removes the arduous task of optimizing for a single policy that performs well across differing dynamic models, but also results in better exploration properties and higher diversity of the collected samples, which leads to improved dynamics estimates.
We instantiate this general framework by employing an ensemble of learned dynamic models and meta-learning a policy that can be quickly adapted to any of the dynamic models with one policy gradient step. In the following, we first describe how the models are learned, then explain how the policy can be meta-trained on an ensemble of models, and, finally, we present our overall algorithm.
A key component of our method is learning a distribution over the real environment dynamics, in the form of an ensemble of dynamics models. In order to decorrelate the models, each model differs in its random initialization and is trained on a different randomly selected subset of the collected real environment samples. In order to address the distributional shift that occurs as the policy changes throughout the meta-optimization, we frequently collect samples under the current policy, aggregate them with the previous data, and retrain the dynamics models with warm starts.
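The decorrelation step can be sketched as follows (a minimal stand-in; the function name and the 80% subsample fraction are our own illustrative choices, not values from the paper): each ensemble member receives its own random subset of the aggregated transition buffer.

```python
import numpy as np

# Sketch: decorrelate an ensemble of dynamics models by giving each model
# a different random subset of the aggregated real-environment transitions.
# (In practice, each model would also get a different random weight init.)

def make_ensemble_datasets(transitions, num_models, subset_frac=0.8, seed=0):
    rng = np.random.default_rng(seed)
    n = len(transitions)
    k = int(subset_frac * n)
    datasets = []
    for _ in range(num_models):
        idx = rng.choice(n, size=k, replace=False)  # a different subset per model
        datasets.append([transitions[i] for i in idx])
    return datasets

# Placeholder buffer of (s, a, s') tuples
buffer = [(f"s{t}", f"a{t}", f"s{t+1}") for t in range(100)]
datasets = make_ensemble_datasets(buffer, num_models=5)
print(len(datasets), len(datasets[0]))  # 5 80
```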
In our experiments, we consider the dynamics models to be a deterministic function of the current state $s_t$ and action $a_t$, employing a feed-forward neural network to approximate them. We follow the standard practice in model-based RL of training the neural network to predict the change in state $\Delta s_t = s_{t+1} - s_t$ (rather than the next state $s_{t+1}$) [22, 6]. We denote by $\hat{f}_{\phi}$ the function approximator for the next state, which is the sum of the input state $s_t$ and the output of the neural network. The objective for learning each model $\hat{f}_{\phi_k}$ of the ensemble is to find the parameter vector $\phi_k$ that minimizes the one-step prediction loss:

$$\min_{\phi_k} \; \frac{1}{|\mathcal{D}_k|} \sum_{(s_t, a_t, s_{t+1}) \in \mathcal{D}_k} \left\| s_{t+1} - \hat{f}_{\phi_k}(s_t, a_t) \right\|_2^2$$
where $\mathcal{D}_k$ is a sampled subset of the training data set that stores the transitions the agent has experienced. Standard techniques to avoid overfitting and facilitate fast learning are followed; specifically, 1) early stopping of training based on the validation loss, 2) normalizing the inputs and outputs of the neural network, and 3) weight normalization.
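To make the delta-state parameterization and the normalization concrete, here is a minimal sketch. It is our own toy stand-in, not the paper's architecture: a linear "network" replaces the feed-forward net, trained by plain gradient descent on normalized inputs and normalized state changes, with the next state recovered as the input state plus the predicted change.

```python
import numpy as np

# Sketch of the "predict the state change" parameterization:
# next-state approximator f(s, a) = s + nn(s, a), where nn is trained on
# normalized inputs/targets. A linear map stands in for the neural network.

class DeltaStateModel:
    def __init__(self, obs_dim, act_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.01, size=(obs_dim + act_dim, obs_dim))

    def fit(self, S, A, S_next, lr=0.1, epochs=200):
        X = np.concatenate([S, A], axis=1)
        D = S_next - S                           # targets are state *changes*
        # Normalize inputs and targets, as is standard practice.
        self.x_mu, self.x_std = X.mean(0), X.std(0) + 1e-8
        self.d_mu, self.d_std = D.mean(0), D.std(0) + 1e-8
        Xn = (X - self.x_mu) / self.x_std
        Dn = (D - self.d_mu) / self.d_std
        for _ in range(epochs):                  # gradient descent on squared error
            grad = Xn.T @ (Xn @ self.W - Dn) / len(Xn)
            self.W -= lr * grad

    def predict(self, s, a):
        xn = (np.concatenate([s, a]) - self.x_mu) / self.x_std
        delta = (xn @ self.W) * self.d_std + self.d_mu
        return s + delta                         # next state = state + predicted change

# Usage on synthetic linear dynamics s' = s + 0.1 * a
rng = np.random.default_rng(1)
S = rng.normal(size=(256, 2)); A = rng.normal(size=(256, 2))
S_next = S + 0.1 * A
model = DeltaStateModel(obs_dim=2, act_dim=2)
model.fit(S, A, S_next)
print(np.allclose(model.predict(S[0], A[0]), S_next[0], atol=1e-2))  # True
```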
Given an ensemble of learned dynamics models for a particular environment, our core idea is to learn a policy which can adapt quickly to any of these models. To learn this policy, we use gradient-based meta-learning with MAML (described in Section 3.2). To properly formulate this problem in the context of meta-learning, we first need to define an appropriate task distribution. Considering the models $\hat{f}_{\phi_1}, \dots, \hat{f}_{\phi_K}$, which approximate the dynamics of the true environment, we can construct a uniform task distribution by embedding them into $K$ different MDPs using these learned dynamics models. We note that, unlike the experimental considerations of prior methods [12, 11, 14], in our work the reward function remains the same across tasks while the dynamics vary. Therefore, each task constitutes a different belief about what the dynamics in the true environment could be. Finally, we pose our objective as the following meta-optimization problem:

$$\max_\theta \; \frac{1}{K} \sum_{k=1}^{K} J_k(\theta'_k) \quad \text{s.t.} \quad \theta'_k = \theta + \alpha \, \nabla_\theta J_k(\theta)$$

with $J_k(\theta)$ being the expected return under the policy $\pi_\theta$ and the estimated dynamics model $\hat{f}_{\phi_k}$.
For estimating the expectation in Eq. 4 and computing the corresponding gradients, we sample trajectories from the imagined MDPs. The rewards are computed by evaluating the reward function, which we assume as given, on the predicted states and actions. In particular, when estimating the adaptation objectives $J_k(\theta)$, the meta-policy $\pi_\theta$ is used to sample a set of imaginary trajectories for each model $\hat{f}_{\phi_k}$. For the meta-objective, we generate trajectory roll-outs with the models $\hat{f}_{\phi_k}$ and the policies $\pi_{\theta'_k}$ obtained from adapting the parameters $\theta$ to the $k$-th model. Thus, no real-world data is used for the data intensive step of meta-policy optimization.
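The imaginary-trajectory collection described above can be sketched as follows (a toy stand-in: the model, policy, and reward below are our own one-dimensional placeholders, not the paper's components). The key point is that transitions come from the learned model and rewards are evaluated on the predicted states:

```python
# Sketch: sample an imaginary trajectory by rolling the policy out against a
# learned dynamics model and evaluating the (known) reward function on the
# predicted states. No real-environment samples are used.

def imaginary_rollout(model_step, policy, reward_fn, s0, horizon):
    s, traj = s0, []
    for _ in range(horizon):
        a = policy(s)
        s_next = model_step(s, a)              # predicted, not real, transition
        traj.append((s, a, reward_fn(s, a)))   # reward evaluated on predictions
        s = s_next
    return traj

# Toy instantiation: 1D point mass, model s' = s + a, reward -|s|
model_step = lambda s, a: s + a
policy = lambda s: -0.5 * s                    # move halfway toward the origin
reward_fn = lambda s, a: -abs(s)

traj = imaginary_rollout(model_step, policy, reward_fn, s0=2.0, horizon=4)
print([s for s, _, _ in traj])  # [2.0, 1.0, 0.5, 0.25]
```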
In practice, any policy gradient algorithm can be chosen to perform the meta-update of the policy parameters. In our implementation, we use Trust-Region Policy Optimization (TRPO) for maximizing the meta-objective, and employ vanilla policy gradient (VPG) for the adaptation step. To reduce the variance of the policy gradient estimates, a linear reward baseline is used.
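A linear baseline of the kind mentioned above can be sketched as a least-squares fit of returns against state features, whose prediction is subtracted from the returns before computing the policy gradient (the feature choice and names here are our own illustrative simplification):

```python
import numpy as np

# Sketch of a linear reward (value) baseline for variance reduction:
# fit b(s) = w^T [s, 1] to the observed returns, then use the residuals
# (advantages) in place of the raw returns in the policy gradient.

def fit_linear_baseline(states, returns):
    feats = np.concatenate([states, np.ones((len(states), 1))], axis=1)
    w, *_ = np.linalg.lstsq(feats, returns, rcond=None)
    return w

def advantages(states, returns, w):
    feats = np.concatenate([states, np.ones((len(states), 1))], axis=1)
    return returns - feats @ w   # centered estimates -> lower-variance gradients

# Synthetic check: returns that are nearly linear in the state
rng = np.random.default_rng(0)
states = rng.normal(size=(200, 3))
returns = states @ np.array([1.0, -2.0, 0.5]) + 3.0 + 0.1 * rng.normal(size=200)
w = fit_linear_baseline(states, returns)
adv = advantages(states, returns, w)
print(adv.std() < returns.std())  # True: the baseline removes most of the variance
```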
In the following, we describe the overall algorithm of our approach (see Algorithm 1). First, we initialize the models and the policy with different random weights. Then, we proceed to the data collection step. In the first iteration, a uniform random controller is used to collect data from the real world, which is stored in a buffer. At subsequent iterations, trajectories from the real world are collected with the adapted policies, and then aggregated with the trajectories from previous iterations. The models are trained with the aggregated real-environment samples following the procedure explained in section 4.1. The algorithm proceeds by imagining trajectories from each model of the ensemble using the policy $\pi_\theta$. These trajectories are used to perform the inner adaptation policy gradient step, yielding the adapted policies. Finally, we generate imaginary trajectories using the adapted policies and models, and optimize the policy towards the meta-objective (as explained in section 4.2). We iterate through these steps until the desired performance is reached. The algorithm returns the optimal pre-update parameters $\theta$.
Meta-learning a policy over an ensemble of dynamics models using imaginary trajectory roll-outs provides several benefits over traditional model-based and hybrid model-based/model-free approaches. In the following we discuss several such advantages, aiming to provide intuition for the algorithm.
Regularization effect during training. Optimizing the policy to adapt within one policy gradient step to any of the fitted models imposes a regularizing effect on the policy learning (as observed in the supervised learning case). The meta-optimization problem steers the policy towards higher plasticity in regions with high dynamics model uncertainty, shifting the burden of adapting to model discrepancies towards the inner policy gradient update.
We consider plasticity as the policy's ability to change its (conditional) distribution with a small change (i.e., gradient update) in the parameter space. The policy plasticity is manifested in the statistical distance between the pre- and post-update policy. In section 6.3 we analyze the connection between model uncertainty and policy plasticity, finding a strong positive correlation between the model ensemble's predictive variance and the KL-divergence between the pre- and post-update policies. This effect prevents the policy from learning the sub-optimal behaviors that arise in robust policy optimization. More importantly, this regularization effect fades away once the dynamics models become more accurate, which leads to asymptotically optimal policies if enough data is provided to the learned models. In section 6.4, we show how this property allows us to learn from noisy and highly biased models.
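Since the policies here are Gaussians, the plasticity measure is just the closed-form KL divergence between two diagonal Gaussians, which can be sketched as follows (a minimal stand-in; function and variable names are our own):

```python
import numpy as np

# Sketch of the plasticity measure: KL divergence between the pre-update
# policy and a post-update policy, both diagonal Gaussians over actions at
# a given state. This is the standard closed-form expression.

def kl_diag_gaussians(mu0, std0, mu1, std1):
    """KL( N(mu0, diag(std0^2)) || N(mu1, diag(std1^2)) )."""
    var0, var1 = std0 ** 2, std1 ** 2
    return 0.5 * np.sum(np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

mu, std = np.zeros(2), np.ones(2)
print(kl_diag_gaussians(mu, std, mu, std))               # 0.0: identical policies
print(kl_diag_gaussians(mu, std, mu + 1.0, std) > 0.0)   # True: shifted mean
```

A large value of this KL at a state indicates the adaptation step changed the action distribution substantially there, i.e., high plasticity.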
Tailored data collection for fast model improvement. Since we sample real-environment trajectories using the different policies obtained by adaptation to each model, the collected training data is more diverse which promotes robustness of the dynamic models. Specifically, the adapted policies tend to exploit the characteristic deficiencies of the respective dynamic models. As a result, we collect real-world data in regions where the dynamic models insufficiently approximate the true dynamics. This effect accelerates correcting the imprecision of the models leading to faster improvement. In Appendix A.1, we experimentally show the positive effect of tailored data collection on the performance.
Fast fine-tuning. Meta-learning optimizes a policy for fast adaptation to a set of tasks. In our case, each task corresponds to a different belief about what the real environment dynamics might be. When optimal performance is not achieved, the ensemble of models will present high discrepancy in their predictions, increasing the likelihood that the real dynamics lie within the support of the belief distribution. As a result, the learned policy is likely to exhibit high adaptability towards the real environment, and fine-tuning the policy with VPG on the real environment leads to faster convergence than training the policy from scratch or from any other MB initialization.
The aim of our experimental evaluation is to examine the following questions: 1) How does MB-MPO compare against state-of-the-art model-free and model-based methods in terms of sample complexity and asymptotic performance? 2) How does the model uncertainty influence the policy’s plasticity? 3) How robust is our method against imperfect models?
To answer the posed questions, we evaluate our approach on six continuous control benchmark tasks in the Mujoco simulator. A depiction of the environments as well as a detailed description of the experimental setup can be found in Appendix A.3. In all of the following experiments, the pre-update policy is used to report the average returns obtained with our method. The reported performance is an average over at least three random seeds. The source code and the experiment data are available on our supplementary website: https://sites.google.com/view/mb-mpo.
We compare our method in sample complexity and performance to four state-of-the-art model-free RL algorithms: Deep Deterministic Policy Gradient (DDPG), Trust Region Policy Optimization (TRPO), Proximal Policy Optimization (PPO), and Actor Critic using Kronecker-Factored Trust Region (ACKTR). The results are shown in Figure 1.
In all the locomotion tasks we are able to achieve maximum performance using between 10 and 100 times less data than model-free methods. In the most challenging domains (ant, hopper, and walker2D), the data complexity of our method is two orders of magnitude lower than that of the model-free methods. In the easier tasks (the simulated PR2 and swimmer), our method achieves the same performance as the model-free methods using 20-50 times less data. These results highlight the benefit of MB-MPO for real robotics tasks; the amount of real-world data needed for attaining maximum return corresponds to 30 minutes in the easier domains and to 90 minutes in the more complex ones.
We also compare our method against recent model-based work: Model-Ensemble Trust-Region Policy Optimization (ME-TRPO) , and the model-based approach introduced in Nagabandi et al. , which uses MPC for planning (MB-MPC).
The results, shown in Figure 2, highlight the strength of MB-MPO in complex tasks. MB-MPC struggles to perform well on tasks that require robust planning, and completely fails in tasks where medium/long-term planning is necessary (as in the case of hopper). In contrast, ME-TRPO is able to learn better policies, but converges to them more slowly than MB-MPO. Furthermore, while ME-TRPO converges to suboptimal policies in complex domains, MB-MPO is able to achieve maximal performance.
We hypothesized above that the meta-optimization steers the policy towards higher plasticity in regions with high dynamics model uncertainty while embedding consistent model predictions into the pre-update policy. To empirically analyze this hypothesis, we conduct an experiment in a simple 2D-Point environment where the agent, whose start position is sampled uniformly, must reach a fixed goal position. We use the average KL-divergence between the pre-update policy and the different adapted policies to measure the plasticity conditioned on the state.
Figure 3 depicts the KL-divergence between the pre- and post-update policy, as well as the standard deviation of the ensemble's predictions over the state space. Since the agent steers towards the center of the environment, more transition data is available in this region. As a result, the models are more accurate in the center. The results indicate a strong positive correlation between model uncertainty and the KL-divergence between the pre- and post-update policy. We find this connection between policy plasticity and predictive uncertainty consistently throughout training and across different hyper-parameter configurations.
We pose the question of how robust our proposed algorithm is w.r.t. imperfect dynamics predictions. We examine this in two ways. First, with an illustrative example of a model with clearly wrong dynamics: specifically, we add biased Gaussian noise to the next-state prediction, whereby the bias is re-sampled in every iteration for each model. Second, we present a realistic case in which long-horizon predictions are needed. Bootstrapping the model predictions over long horizons leads to high compounding errors, making policy learning on such predictions challenging.
Figure 4 depicts the performance comparison between our method and ME-TRPO on the half-cheetah environment for various noise and bias magnitudes. The results indicate that our method consistently outperforms ME-TRPO when exposed to biased and noisy dynamics models. ME-TRPO catastrophically fails to learn a policy in the presence of strong bias, but our method, despite the strongly compromised dynamics predictions, is still able to learn a locomotion behavior with a positive forward velocity.
This property also manifests itself in long-horizon tasks. Figure 5 compares the performance of our approach against the edge case of a zero inner learning rate, where no adaptation takes place. For each random seed, MB-MPO steadily converges to maximum performance. However, when there is no adaptation, the learning becomes unstable and different seeds exhibit different behavior: proper learning, getting stuck in sub-optimal behavior, and even unlearning good behaviors.
In this paper, we present a simple and generally applicable algorithm, model-based meta-policy optimization (MB-MPO), that learns an ensemble of dynamics models and meta-optimizes a policy for adaptation in each of the learned models. Our experimental results demonstrate that meta-learning a policy over an ensemble of learned models provides a recipe for reaching the same level of performance as state-of-the-art model-free methods with substantially lower sample complexity. We also compare our method against previous model-based approaches, obtaining better performance and faster convergence. Our analysis demonstrates the ineffectiveness of prior approaches to combat model-bias, and showcases the robustness of our method against imperfect models. As a result, we are able to extend model-based RL to more complex domains and longer horizons. One direction that merits further investigation is the use of Bayesian neural networks, instead of ensembles, to learn a distribution of dynamics models. Finally, an exciting direction of future work is the application of MB-MPO to real-world systems.
We thank A. Gupta, C. Finn, and T. Kurutach for the feedback on the earlier draft of the paper. IC was supported by La Caixa Fellowship. The research leading to these results received funding from the EU Horizon 2020 Research and Innovation programme under grant agreement No. 731761 (IMAGINE) and was supported by Berkeley Deep Drive, Amazon Web Services, and Huawei.
We present the effects of collecting data using tailored exploration. We refer to tailored exploration as the effect of collecting data using the post-update policies, i.e., the policies adapted to each specific model. When policies are trained on learned models they tend to exploit the deficiencies of the model, and thus overfit to it. Using the post-update policies to collect data results in exploring the regions of the state space where these policies overfit and the model is inaccurate. Iteratively collecting data in the regions where the models are inaccurate has been shown to greatly improve performance.
The effect of using tailored exploration is shown in Figure 6. In the half-cheetah and the walker we get an improvement of 12% and 11%, respectively. The tailored exploration effect cannot be accomplished by robust optimization algorithms, such as ME-TRPO. Those algorithms learn a single policy that is robust across models. Data collection using such a policy will not exploit the regions in which each individual model fails, resulting in less accurate models.
We perform a hyperparameter study (see Figure 7) to assess the sensitivity of MB-MPO to its parameters. Specifically, we vary the inner learning rate, the size of the ensemble, and the number of meta-gradient steps before collecting further real environment samples. Consistent with the results in Figure 5, we find that adaptation significantly improves the performance when compared to the non-adaptive case. Increasing the number of models and meta-gradient steps per iteration results in higher performance at a computational cost. However, as the computational burden increases, the performance gains diminish.
Up to a certain level, increasing the number of meta-gradient steps per iteration improves performance. However, too many meta-gradient steps (e.g., 60) can lead to early convergence to a suboptimal policy. This may be due to the fact that the variance of the Gaussian policy distribution is also learned. Usually, the policy's variance decreases during training. If the number of meta-gradient steps is too large, the policy loses its exploration capabilities too early and can hardly improve once the models become more accurate. This problem can be alleviated by using a fixed policy variance, or by adding an entropy bonus to the learning objective.
In the following we provide a detailed description of the setup used in the experiments presented in section 6:
We benchmark MB-MPO on six continuous control benchmark tasks in the Mujoco simulator, shown in Fig. 8. Five of these tasks, namely swimmer, half-cheetah, walker2D, hopper and ant, involve robotic locomotion and are provided through the OpenAI gym.
The sixth task requires the 7-DoF arm of the PR2 robot to reach arbitrary end-effector positions. Thereby, the PR2 robot is torque controlled. The reward function is comprised of the squared distance of the end-effector (TCP) to the goal and energy / control costs:

$$r(s_t, a_t) = -\| x_{TCP}(s_t) - x_{goal} \|_2^2 - c \, \| a_t \|_2^2$$
In section 6.3 we use the simple 2D-Point environment to analyze the connection between policy plasticity and model uncertainty. The corresponding MDP is defined as follows:
Policy: We use a Gaussian policy $\pi_\theta(a|s) = \mathcal{N}(\mu_\theta(s), \Sigma)$ with a diagonal covariance matrix. The mean is computed by a neural network (2 hidden layers of size 32, tanh nonlinearity) which receives the current state as input. During policy optimization, both the weights of the neural network and the standard deviation vector are learned.
Dynamics Model Ensemble: In all experiments (except in Figure 7b) we use an ensemble of 5 fully connected neural networks. For the different environments the following hidden layer sizes were used:
Ant, Walker: (512, 512, 512)
PR2, Swimmer, Hopper, Half-Cheetah: (512, 512)
2D-Point-Env: (128, 128)
In all models, we used weight normalization and ReLU nonlinearities. For the minimization of the prediction error, the Adam optimizer with a batch size of 500 was employed. In the first iteration, all models are randomly initialized. In later iterations, the models are trained with warm starts using the parameters of the previous iteration. In each iteration and for each model in the ensemble, the transition data buffer is randomly split into a training (80%) and validation (20%) set. The latter split is used to compute the validation loss after each training epoch on the shuffled training split. A rolling average of the validation losses with a persistence of 0.95 is maintained throughout the epochs. Each model's training is stopped individually as soon as the rolling validation loss average increases.
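The rolling-average early-stopping rule can be sketched as follows (a toy stand-in with our own names; training is stopped once the smoothed validation loss starts to rise, i.e., stops improving):

```python
# Sketch of early stopping via an exponential rolling average of the
# per-epoch validation loss (persistence 0.95): stop a model's training
# once the smoothed validation loss increases.

def train_epochs(val_losses, persistence=0.95):
    """Return the number of epochs run before early stopping triggers."""
    rolling = None
    for epoch, loss in enumerate(val_losses):
        new_rolling = loss if rolling is None else (
            persistence * rolling + (1.0 - persistence) * loss)
        if rolling is not None and new_rolling > rolling:
            return epoch  # stop: smoothed validation loss started increasing
        rolling = new_rolling
    return len(val_losses)

# Validation loss falls, then overfitting sets in and it climbs again.
losses = [1.0, 0.8, 0.6, 0.5, 0.55, 0.9, 1.4, 2.0]
print(train_epochs(losses))  # 6
```

Note how the heavy smoothing (persistence 0.95) tolerates the small uptick at epoch 4 and only stops once the loss rises persistently.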
Meta-Policy Optimization: As described in section 4.2, the policy parameters are optimized using the gradient-based meta-learning framework MAML. For the inner adaptation step we use a fixed gradient step-size. For maximizing the meta-objective specified in equation 3 we use the policy gradient method TRPO with a KL-constraint. Since computing the gradients of the meta-objective involves second-order terms such as the Hessian of the policy's log-likelihood, computing the necessary Hessian-vector products for TRPO analytically is very compute intensive. Hence, we use a finite difference approximation of the product of the Fisher Information Matrix and the gradients, as suggested in prior work. If not denoted differently, 30 meta-optimization steps are performed before new trajectories are collected from the real environment.
Trajectory collection: In each algorithm iteration 4000 environment transitions (20 trajectories of 200 time steps) are collected. For the meta-optimization, 100000 imaginary environment transitions are sampled.
In this section we compare the computational complexity of MB-MPO against TRPO. Specifically, we report the wall-clock time that it takes both algorithms to reach maximum performance on the half-cheetah environment when running the experiments on an Amazon Web Services EC2 c4.4xlarge compute instance. Our method only requires about 20% more compute time than TRPO (7 hours instead of 5.5), while attaining a 70-fold reduction in sample complexity. The main time bottleneck of our method compared with the model-free algorithms is training the models.
Notice that when running real-world experiments, our method will be significantly faster than model-free approaches, since the bottleneck then shifts to the data collection step.