Model-based Adversarial Meta-Reinforcement Learning

06/16/2020 ∙ by Zichuan Lin, et al. ∙ Stanford University

Meta-reinforcement learning (meta-RL) aims to learn from multiple training tasks the ability to adapt efficiently to unseen test tasks. Despite the success, existing meta-RL algorithms are known to be sensitive to the task distribution shift. When the test task distribution is different from the training task distribution, the performance may degrade significantly. To address this issue, this paper proposes Model-based Adversarial Meta-Reinforcement Learning (AdMRL), where we aim to minimize the worst-case sub-optimality gap – the difference between the optimal return and the return that the algorithm achieves after adaptation – across all tasks in a family of tasks, with a model-based approach. We propose a minimax objective and optimize it by alternating between learning the dynamics model on a fixed task and finding the adversarial task for the current model – the task for which the policy induced by the model is maximally suboptimal. Assuming the family of tasks is parameterized, we derive a formula for the gradient of the suboptimality with respect to the task parameters via the implicit function theorem, and show how the gradient estimator can be efficiently implemented by the conjugate gradient method and a novel use of the REINFORCE estimator. We evaluate our approach on several continuous control benchmarks and demonstrate its efficacy in the worst-case performance over all tasks, the generalization power to out-of-distribution tasks, and in training and test time sample efficiency, over existing state-of-the-art meta-RL algorithms.


1 Introduction

Deep reinforcement learning (deep RL) methods can successfully solve difficult tasks such as Go (silver2016mastering), Atari games (mnih2013playing), and robotic control (levine2016end), but often require a large number of interactions with the environment. Meta-reinforcement learning and multi-task reinforcement learning aim to improve sample efficiency by leveraging the shared structure within a family of tasks. For example, Model-Agnostic Meta-Learning (MAML) (finn2017model) learns at training time a shared policy initialization across tasks, from which it can adapt to new tasks at test time with a small number of samples. The more recent work PEARL (rakelly2019efficient) learns latent representations of the tasks at training time, and then at test time infers the representations of new tasks and adapts to them.

The existing meta-RL formulations and methods are largely distributional: the training tasks and the test tasks are assumed to be drawn from the same distribution of tasks. Consequently, the existing methods are prone to distribution shift, as shown in (mehta2020curriculum): when the tasks at test time are not drawn from the same distribution as in training, the performance degrades significantly. Figure 1 also confirms this issue for PEARL (rakelly2019efficient), a recent state-of-the-art meta-RL method, on the Ant2D-velocity tasks. PEARL adapts to tasks with smaller goal velocities much better than to tasks with larger goal velocities, in terms of the relative difference, or sub-optimality gap, from the optimal policy of the corresponding task. (The same conclusion holds if we measure the raw performance on the tasks, but that could be misleading because the tasks have varying optimal returns.) To address this issue, mehta2020curriculum propose an algorithm that iteratively re-defines the task distribution to focus more on the hard tasks.

[Figure 1: figs/pearl-gap.png]
Figure 1: The performance of PEARL (rakelly2019efficient) on Ant2D-velocity tasks. Each task is represented by the target velocity with which the ant should run. The training tasks are drawn uniformly from the task space. The color of each cell shows the sub-optimality gap of the corresponding task, namely, the optimal return of that task minus the return of PEARL. Lighter means a smaller sub-optimality gap and is better. High-velocity tasks tend to perform worse, which implies that if the test task distribution shifts towards high-velocity tasks, the performance will degrade.

In this paper, we instead take a non-distributional perspective by formulating the adversarial meta-RL problem. Given a parameterized family of tasks, we aim to minimize the worst-case sub-optimality gap (the difference between the optimal return and the return the algorithm achieves after adaptation) across all tasks in the family at test time. This can be naturally formulated as a minimax problem (or a two-player game) where the maximum is over all the tasks and the minimum is over the parameters of the algorithm (e.g., the shared policy initialization or the shared dynamics).

Our approach is model-based. We learn a shared dynamics model across the tasks at training time; at test time, given a new reward function, we train a policy on the learned dynamics. Model-based methods can significantly outperform model-free methods in sample efficiency even in the standard single-task setting (luo2018algorithmic; dong2019bootstrapping; janner2019trust; wang2019exploring; chua2018deep; buckman2018sample; nagabandi2018neural; kurutach2018model; feinberg2018model; rajeswaran2016epopt; rajeswaran2020game; wang2019benchmarking), and are particularly suitable for meta-RL settings where the optimal policies for tasks are very different but the underlying dynamics is shared (landolfi2019model). We apply natural adversarial training (madry2017towards) at the level of tasks: we alternate between minimizing the sub-optimality gap over the parameterized dynamics and maximizing it over the parameterized tasks.

The main technical challenge is to optimize over the task parameters in a sample-efficient way. The sub-optimality gap objective depends on the task parameters in a non-trivial way because the algorithm uses the task parameters iteratively in its adaptation phase at test time. The naive attempt to back-propagate through the sequential updates of the adaptation algorithm is computationally costly, especially because adaptation in the model-based approach is computationally expensive (despite being sample-efficient). Inspired by recent work on learning equilibrium models in supervised learning (bai2019deep), we derive an efficient formula for the gradient w.r.t. the task parameters via the implicit function theorem. The gradient involves an inverse-Hessian-vector product, which can be efficiently computed by conjugate gradients and the REINFORCE estimator (williams1992simple).

In summary, our contributions are:

  • We propose a minimax formulation of model-based adversarial meta-reinforcement learning (AdMRL, pronounced like “admiral”) with an adversarial training algorithm to address the distribution shift problem.

  • We derive an estimator of the gradient with respect to the task parameters, and show how it can be implemented efficiently in both samples and time.

  • Our approach significantly outperforms the state-of-the-art meta-RL algorithms in the worst-case performance over all tasks, the generalization power to out-of-distribution tasks, and in training and test time sample efficiency on a set of continuous control benchmarks.

2 Related Work

The idea of learning to learn was established in a series of previous works (utgoff1986shift; schmidhuber1987evolutionary; thrun1996learning; thrun2012learning). These papers propose to build a base learner for each task and train a meta-learner that learns the shared structure of the base learners and outputs a base learner for a new task. Recent literature mainly instantiates this idea in two directions: (1) learning a meta-learner to predict the base learner (wang2016learning; snell2017prototypical); (2) learning to update the base learner (hochreiter2001learning; bengio1992optimization; finn2017model). The goal of meta-reinforcement learning is to find a policy that can quickly adapt to new tasks by collecting only a few trajectories. In MAML (finn2017model), the shared structure learned at train time is a set of policy parameters. Some recent meta-RL algorithms propose to condition the policy on a latent representation of the task (rakelly2019efficient; zintgraf2019variational; wang2020learning; humplik2019meta). duan2016rl; wang2016learning represent the reinforcement learning algorithm as a recurrent network. mendonca2019guided improve the sample efficiency during meta-training by consolidating the solutions of individual off-policy learners into a single meta-learner. mehta2020curriculum attempt to address the distribution shift problem by introducing a curriculum over meta-training tasks. landolfi2019model also propose to share a dynamics model across tasks during meta-training and perform model-based adaptation on new tasks; their approach, however, is still distributional and suffers from distribution shift. We adversarially choose training tasks to address the distribution shift issue and show in the experiments that we outperform the same model-based approach trained on randomly chosen tasks. rothfuss2018promp improve sample efficiency during meta-training by overcoming the issue of poor credit assignment. schulzevaribad meta-learn to perform approximate inference on an unknown task, and incorporate task uncertainty directly during action selection.

Model-based approaches have long been recognized as a promising avenue for reducing the sample complexity of RL algorithms. One popular branch of MBRL is Dyna-style algorithms (sutton1990integrated), which iterate between collecting samples for model updates and improving the policy with virtual data generated by the learned model (luo2018algorithmic; janner2019trust; wang2019exploring; chua2018deep; buckman2018sample; kurutach2018model; feinberg2018model; rajeswaran2020game). Another branch of MBRL produces policies based on model predictive control (MPC), where at each time step the model is used to plan over a short horizon and select actions (chua2018deep; nagabandi2018neural; dong2019bootstrapping; wang2019exploring).

Our approach is also related to active learning (atlas1990training; lewis1994sequential; silberman1996active; settles2009active), which aims to find the most useful or difficult data points, whereas we operate in the task space. Our method is also related to curiosity-driven learning (pathak2017curiosity; burda2018large; burda2018exploration), which defines intrinsic curiosity rewards to encourage the agent to explore its environment. Instead of exploring in state space, our method “explores” in the task space. The work of jin2020reward aims to compute near-optimal policies for any reward function by sufficient exploration, while we search for the reward function with the worst suboptimality gap.

3 Preliminaries

Reinforcement Learning.

Consider a Markov Decision Process (MDP) with state space $\mathcal{S}$ and action space $\mathcal{A}$. A policy $\pi(\cdot \mid s)$ specifies the conditional distribution over the action space given a state $s$. The transition dynamics $T(\cdot \mid s, a)$ specifies the conditional distribution of the next state given the current state $s$ and action $a$. We will use $T^\star$ to denote the unknown true transition dynamics in this paper. A reward function $r(s, a)$ defines the reward at each step. We also consider a discount factor $\gamma$ and an initial state distribution $\rho_0$. We define the value function $V^{\pi, T}(s)$ at state $s$ for a policy $\pi$ on dynamics $T$ as $V^{\pi, T}(s) := \mathbb{E}\big[\sum_{t=0}^{\infty} \gamma^{t} r(s_t, a_t) \mid s_0 = s\big]$, where $a_t \sim \pi(\cdot \mid s_t)$ and $s_{t+1} \sim T(\cdot \mid s_t, a_t)$. The goal of RL is to seek a policy that maximizes the expected return $\eta(\pi, T) := \mathbb{E}_{s_0 \sim \rho_0}\big[V^{\pi, T}(s_0)\big]$.
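
As a concrete illustration of the notation above, the following minimal sketch (in Python, not taken from the paper's codebase) estimates the expected return by Monte-Carlo rollouts; the helper names are ours.

```python
# A minimal sketch (not from the paper's code): estimating the expected return
# eta(pi, T) by Monte-Carlo rollouts, using the discount gamma defined above.
import numpy as np

def discounted_return(rewards, gamma=0.99):
    """Return sum_t gamma^t * r_t for one trajectory (a list of rewards)."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

def estimate_return(rollouts, gamma=0.99):
    """Average discounted return over sampled trajectories (a list of reward lists)."""
    return np.mean([discounted_return(r, gamma) for r in rollouts])

# Example: two short trajectories with per-step rewards.
print(estimate_return([[1.0, 0.5, 0.0], [0.0, 1.0, 1.0]], gamma=0.9))
```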

Meta-Reinforcement Learning

In this paper, we consider a family of tasks parameterized by $\psi \in \Psi$ and a family of policies $\pi_\theta$ parameterized by $\theta$. The family of tasks is a family of Markov decision processes (MDPs) which all share the same dynamics but differ in the reward function $r_\psi$. We denote the value function of a policy $\pi$ on a task with reward $r_\psi$ and dynamics $T$ by $V^{\pi, T}_\psi$, and denote the expected return for task $\psi$ and dynamics $T$ by $\eta(\pi, \psi, T) := \mathbb{E}_{s_0 \sim \rho_0}\big[V^{\pi, T}_\psi(s_0)\big]$. For simplicity, we will use the shorthand $\eta(\pi, \psi) := \eta(\pi, \psi, T^\star)$ for the return under the true dynamics.

Meta-reinforcement learning leverages a shared structure across tasks. (The precise nature of this structure is algorithm-dependent.) Let $\Phi$ denote the set of all such structures. A meta-RL training algorithm seeks to find a shared structure $\phi \in \Phi$, which is subsequently used by an adaptation algorithm to learn quickly on new tasks. In this paper, the shared structure $\phi$ parameterizes the learned dynamics (more below).

Model-based Reinforcement Learning

In model-based reinforcement learning (MBRL), we parameterize the transition dynamics of the model $\widehat{T}_\phi$ (as a neural network) and learn the parameters $\phi$ so that it approximates the true transition dynamics $T^\star$. In this paper, we use Stochastic Lower Bound Optimization (SLBO) (luo2018algorithmic), an MBRL algorithm with theoretical guarantees of monotonic improvement. SLBO interleaves policy improvement and model fitting.

4 Model-based Adversarial Meta-Reinforcement Learning

4.1 Formulation

We consider a family of tasks whose reward functions $r_\psi$ are parameterized by some parameters $\psi \in \Psi$, and assume that $r_\psi(s, a)$ is differentiable w.r.t. $\psi$ for every $(s, a)$. We assume the reward function parameterization is known throughout the paper. (It is challenging to formulate the worst-case performance without knowing a reward family, e.g., when we only have access to randomly sampled tasks from a task distribution.) Recall that the total return of policy $\pi$ on dynamics $T$ and task $\psi$ is denoted by $\eta(\pi, \psi, T) = \mathbb{E}_{\tau \sim \pi, T}\big[R_\psi(\tau)\big]$, where $R_\psi(\tau) = \sum_t \gamma^t r_\psi(s_t, a_t)$ is the return of the trajectory $\tau$ under reward function $r_\psi$. As shorthand, we define $\eta(\pi, \psi) := \eta(\pi, \psi, T^\star)$ as the return in the real environment on task $\psi$, and $\hat{\eta}_\phi(\pi, \psi) := \eta(\pi, \psi, \widehat{T}_\phi)$ as the return on the virtual dynamics $\widehat{T}_\phi$ on task $\psi$.

Given a learned dynamics $\widehat{T}_\phi$ and a test task $\psi$, we can perform zero-shot model-based adaptation by computing the best policy for task $\psi$ under the dynamics $\widehat{T}_\phi$, namely, $\pi_{\phi, \psi} := \operatorname{argmax}_\pi \hat{\eta}_\phi(\pi, \psi)$. Let $\Delta(\phi, \psi)$, formally defined in Eq. (1) below, be the suboptimality gap of the model-optimal policy $\pi_{\phi, \psi}$ on task $\psi$, i.e., the difference between the performance of the best policy for task $\psi$ and the performance of the policy which is best for $\psi$ according to the model $\widehat{T}_\phi$. Our overall aim is to find the best shared dynamics $\widehat{T}_\phi$ such that the worst-case sub-optimality gap is minimized. This can be formally written as a minimax problem:

$$\min_{\phi}\; \max_{\psi \in \Psi}\; \Big[ \max_{\pi}\, \eta(\pi, \psi) \;-\; \eta(\pi_{\phi, \psi}, \psi) \Big] \qquad (1)$$

In the inner step (max over $\psi$), we search for the task that is hardest for our current model $\widehat{T}_\phi$, in the sense that the policy which is optimal under the dynamics $\widehat{T}_\phi$ is most suboptimal in the real MDP. In the outer step (min over $\phi$), we optimize for a model with low worst-case suboptimality. We remark that, in general, other definitions of the sub-optimality gap, e.g., the ratio between the optimal return and the achieved return, may also be used to formulate the problem.

Algorithmically, by training on the hardest task found in the inner step, we hope to obtain data that is most informative for correcting the model’s inaccuracies.
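
To make the alternation concrete, the sketch below outlines one possible implementation of the outer loop implied by Eq. (1); the helper functions (`train_model_on_task`, `task_gradient`) are hypothetical placeholders standing in for SLBO-style training and the task-gradient estimator derived in Section 4.2, not the paper's actual API.

```python
# A schematic sketch of the minimax alternation in Eq. (1); the helper functions
# below are hypothetical placeholders, not the paper's actual implementation.
import numpy as np

def admrl_outer_loop(psi0, n_tasks, task_lr, task_box, helpers):
    """Alternate: (min over phi) fit the model on the current task,
    then (max over psi) move the task toward a larger suboptimality gap."""
    psi, model = np.array(psi0, dtype=float), None
    for _ in range(n_tasks):
        # Inner "min" step: standard MBRL on the fixed task psi.
        model, policy_hat = helpers["train_model_on_task"](model, psi)
        # Estimate the suboptimality-gap gradient w.r.t. the task parameters
        # (Eq. (2), computed as in Section 4.2 / Algorithm 1).
        grad_psi = helpers["task_gradient"](model, policy_hat, psi)
        # Outer "max" step: projected gradient ascent on the task parameters.
        psi = np.clip(psi + task_lr * grad_psi, task_box[0], task_box[1])
    return model
```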

4.2 Computing Derivatives with respect to Task Parameters

To optimize Eq. (1), we alternate between the min and the max using gradient descent and ascent, respectively. Fixing the task $\psi$, minimizing $\Delta(\phi, \psi)$ over $\phi$ reduces to standard MBRL.

On the other hand, for a fixed model $\widehat{T}_\phi$, the inner maximization over the task parameter $\psi$ is non-trivial, and is the focus of this subsection. To perform gradient-based optimization, we need to estimate $\nabla_\psi \Delta(\phi, \psi)$. Let us define $\theta^\star_\psi$ such that $\pi_{\theta^\star_\psi}$ is the optimal policy under the true dynamics $T^\star$ and task $\psi$, and $\hat{\theta}_\psi$ such that $\pi_{\hat{\theta}_\psi}$ is the optimal policy under the virtual dynamics $\widehat{T}_\phi$ and task $\psi$. We assume there is a unique $\hat{\theta}_\psi$ for each $\psi$. Then,

$$\nabla_\psi \Delta(\phi, \psi) \;=\; \nabla_\psi \eta(\pi_{\theta^\star_\psi}, \psi) \;-\; \Big(\tfrac{\partial \hat{\theta}_\psi}{\partial \psi}\Big)^{\!\top} \nabla_\theta \eta(\pi_\theta, \psi)\Big|_{\theta = \hat{\theta}_\psi} \;-\; \nabla_\psi \eta(\pi_{\hat{\theta}_\psi}, \psi) \qquad (2)$$

Note that the first term comes from the usual (sub)gradient rule for pointwise maxima (the dependence of $\theta^\star_\psi$ on $\psi$ can be ignored when differentiating), and the remaining terms come from applying the chain rule to $\eta(\pi_{\hat{\theta}_\psi}, \psi)$, whose policy parameters $\hat{\theta}_\psi$ depend on $\psi$. Differentiation w.r.t. $\psi$ (with the policy parameters held fixed) commutes with the expectation over trajectories, so
$$\nabla_\psi \eta(\pi_\theta, \psi) \;=\; \mathbb{E}_{\tau \sim \pi_\theta, T^\star}\Big[\sum_t \gamma^t \nabla_\psi r_\psi(s_t, a_t)\Big]. \qquad (3)$$

Thus the first and last terms of the gradient in Eq. (2) can be estimated by simply rolling out $\pi_{\theta^\star_\psi}$ and $\pi_{\hat{\theta}_\psi}$ and differentiating the sampled rewards w.r.t. $\psi$. Let $A^{\pi_\theta}_\psi(s, a)$ be the advantage function. Then the term $\nabla_\theta \eta(\pi_\theta, \psi)\big|_{\theta = \hat{\theta}_\psi}$ in Eq. (2) can be computed by the standard policy gradient
$$\nabla_\theta \eta(\pi_\theta, \psi) \;=\; \mathbb{E}_{\tau \sim \pi_\theta, T^\star}\Big[\sum_t \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, A^{\pi_\theta}_\psi(s_t, a_t)\Big]. \qquad (4)$$
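
For illustration, a minimal Monte-Carlo implementation of the estimator in Eq. (4) might look as follows, assuming per-step score vectors and advantage estimates are already available (how they are computed is left abstract here).

```python
# A minimal sketch of the policy-gradient estimator in Eq. (4): given per-step
# score vectors grad_log_pi[t] = d/dtheta log pi_theta(a_t | s_t) and advantages
# adv[t] for sampled trajectories, average the advantage-weighted scores.
import numpy as np

def policy_gradient(trajectories):
    """trajectories: list of (grad_log_pi, adv) pairs, with grad_log_pi of
    shape (T, dim_theta) and adv of shape (T,)."""
    grads = [np.sum(g * a[:, None], axis=0) for g, a in trajectories]
    return np.mean(grads, axis=0)  # Monte-Carlo estimate of grad_theta eta
```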

The complicated part left in Eq. (2) is the Jacobian $\partial \hat{\theta}_\psi / \partial \psi$. We compute it using the implicit function theorem (wiki:implicit) (see Section A.1 for details):
$$\frac{\partial \hat{\theta}_\psi}{\partial \psi} \;=\; -\Big[\nabla^2_\theta \hat{\eta}_\phi(\pi_\theta, \psi)\Big]^{-1} \nabla_\psi \nabla_\theta \hat{\eta}_\phi(\pi_\theta, \psi)\,\Big|_{\theta = \hat{\theta}_\psi}. \qquad (5)$$
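
As a sanity check of Eq. (5), the following self-contained snippet compares the implicit-function-theorem formula with the closed-form Jacobian on a toy quadratic surrogate objective (our construction, purely for illustration).

```python
# A self-contained numerical check of Eq. (5) on a toy quadratic surrogate
# eta_hat(theta, psi) = -0.5 theta^T A theta + theta^T B psi, for which the
# model-optimal policy parameters are available in closed form.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); A = A @ A.T + 4 * np.eye(4)  # positive definite
B = rng.standard_normal((4, 2))

# Closed-form argmax: theta_hat(psi) = A^{-1} B psi, so d theta_hat / d psi = A^{-1} B.
jac_closed_form = np.linalg.solve(A, B)

# Implicit-function-theorem formula (Eq. (5)): -(Hessian)^{-1} (mixed derivative),
# with Hessian = -A and mixed derivative = B.
jac_ift = -np.linalg.solve(-A, B)

print(np.allclose(jac_closed_form, jac_ift))  # True
```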

The mixed-derivative term in the equation above can be computed by differentiating the policy gradient on the virtual dynamics w.r.t. $\psi$:
$$\nabla_\psi \nabla_\theta \hat{\eta}_\phi(\pi_\theta, \psi) \;=\; \mathbb{E}_{\tau \sim \pi_\theta, \widehat{T}_\phi}\Big[\nabla_\theta \log p_\theta(\tau)\, \nabla_\psi R_\psi(\tau)^{\!\top}\Big], \qquad (6)$$
where $p_\theta(\tau)$ denotes the probability density of trajectory $\tau$ under policy $\pi_\theta$ and the virtual dynamics.

An estimator for the Hessian term in Eq. (5) can be derived using the REINFORCE estimator (sutton2000policy), or the log-derivative trick (see Section A.2 for a detailed derivation):
$$\nabla^2_\theta \hat{\eta}_\phi(\pi_\theta, \psi) \;=\; \mathbb{E}_{\tau \sim \pi_\theta, \widehat{T}_\phi}\Big[ R_\psi(\tau)\Big(\nabla_\theta \log p_\theta(\tau)\, \nabla_\theta \log p_\theta(\tau)^{\!\top} + \nabla^2_\theta \log p_\theta(\tau)\Big)\Big]. \qquad (7)$$
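
A direct Monte-Carlo implementation of Eq. (7) could look like the following sketch, assuming the per-trajectory return, score vector, and log-probability Hessian are provided (e.g., by automatic differentiation as in Appendix B).

```python
# A minimal sketch of the REINFORCE-style Hessian estimator in Eq. (7): each
# trajectory contributes R(tau) * (score score^T + Hessian of log p_theta(tau)).
import numpy as np

def policy_hessian(trajectories):
    """trajectories: list of (R, score, hess_logp) with R a scalar return,
    score = grad_theta log p_theta(tau) of shape (d,), and
    hess_logp = hessian_theta log p_theta(tau) of shape (d, d)."""
    terms = [R * (np.outer(s, s) + H) for R, s, H in trajectories]
    return np.mean(terms, axis=0)
```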

By computing the gradient estimator via the implicit function theorem, we do not need to back-propagate through the sequential updates of our adaptation algorithm, and we can therefore estimate the gradient w.r.t. the task parameters in a sample-efficient and computationally tractable way.

4.3 AdMRL: a Practical Implementation

Algorithm 1 gives pseudo-code for our algorithm AdMRL, which alternates between updating the dynamics $\widehat{T}_\phi$ and the task $\psi$. Let VirtualTraining($\theta, \phi, \psi, \mathcal{D}, n$) be shorthand for the procedure of learning a dynamics $\widehat{T}_\phi$ using data $\mathcal{D}$ and then optimizing a policy $\pi_\theta$ from initialization $\theta$ on task $\psi$ under the dynamics $\widehat{T}_\phi$ with $n$ virtual steps. Here the parameterized arguments of the procedure are referred to by their parameters (so that the resulting policy and dynamics are written into $\theta$ and $\phi$). For each training task parameterized by $\psi$, we first initialize the policy randomly, and optimize it on the learned dynamics until convergence (Line 4), which we refer to as zero-shot adaptation. We then use the obtained policy to collect data from the real environment and perform the MBRL algorithm SLBO (luo2018algorithmic) by interleaving collecting samples, updating the model, and optimizing the policy (Line 5). After collecting samples and performing SLBO updates, we obtain a nearly optimal policy $\pi_{\theta^\star_\psi}$.

Then we update the task parameter $\psi$ by gradient ascent. With the policies $\pi_{\hat{\theta}_\psi}$ and $\pi_{\theta^\star_\psi}$, we compute each gradient component (Lines 9, 10), obtain the gradient w.r.t. the task parameters (Line 11), and perform projected gradient ascent on the task parameter (Line 12). This completes one outer iteration. Note that for the first training task, we skip the zero-shot adaptation phase and only perform SLBO updates because our dynamical model is untrained. Moreover, because the zero-shot adaptation step is not done, we cannot perform the task update either, since the task derivative depends on $\hat{\theta}_\psi$, the result of zero-shot adaptation (Line 8).

1: Initialize model parameters $\phi$, task parameters $\psi$, and dataset $\mathcal{D} \leftarrow \emptyset$
2: for $T_{\text{task}}$ iterations do
3:      Initialize policy parameters $\theta$ randomly
4:      If not the first task, $(\theta, \phi) \leftarrow$ VirtualTraining($\theta, \phi, \psi, \mathcal{D}, n_{\text{zero}}$) and set $\hat{\theta}_\psi \leftarrow \theta$ ⊳ Zero-shot adaptation
5:      for $n_{\text{slbo}}$ iterations do ⊳ SLBO
6:          $\mathcal{D} \leftarrow \mathcal{D} \cup \{$samples collected on the real environment using $\pi_\theta$ with noise$\}$
7:          $(\theta, \phi)$ = VirtualTraining($\theta, \phi, \psi, \mathcal{D}, n_{\text{inner}}$)
8:      if first task then randomly re-initialize $\psi$; otherwise:
9:          Compute the gradients $\nabla_\psi \eta(\pi_{\theta^\star_\psi}, \psi)$ and $\nabla_\psi \eta(\pi_{\hat{\theta}_\psi}, \psi)$ using Eq. 3; compute $\nabla_\theta \eta(\pi_\theta, \psi)\big|_{\theta = \hat{\theta}_\psi}$ using Eq. 4; compute $\nabla_\psi \nabla_\theta \hat{\eta}_\phi$ using Eq. 6; compute $\nabla^2_\theta \hat{\eta}_\phi$ using Eq. 7.
10:         Efficiently compute $\big[\nabla^2_\theta \hat{\eta}_\phi\big]^{-1} \nabla_\psi \nabla_\theta \hat{\eta}_\phi$ using the conjugate gradient method (see Section 4.3).
11:         Compute the final gradient $\nabla_\psi \Delta(\phi, \psi)$ using Eq. 2
12:         Perform projected gradient ascent on the task parameters: $\psi \leftarrow \Pi_{\Psi}\big(\psi + \alpha\, \nabla_\psi \Delta(\phi, \psi)\big)$
Algorithm 1 AdMRL: Model-based Adversarial Meta-Reinforcement Learning

Implementation Details. Computing Eq. (5) for each dimension of $\psi$ involves an inverse-Hessian-vector product. We note that we can compute Eq. (5) by approximately solving the linear system $Hx = g$, where $H$ is the Hessian $\nabla^2_\theta \hat{\eta}_\phi(\pi_\theta, \psi)$ and $g$ is a column of the mixed-derivative matrix $\nabla_\psi \nabla_\theta \hat{\eta}_\phi(\pi_\theta, \psi)$. However, in large-scale problems (e.g., when $\theta$ has thousands of dimensions), it is costly (in computation and memory) to form the full matrix $H$. Instead, the conjugate gradient method provides a way to approximately solve the equation without forming the full matrix, provided we can compute the mapping $v \mapsto Hv$. The corresponding Hessian-vector product can be computed as efficiently as evaluating the loss function (pearlmutter1994fast), up to a universal multiplicative factor. Please refer to Appendix B for a concrete implementation. In practice, we found that the matrix $H$ is generally not positive-definite, which hinders the convergence of the conjugate gradient method. Therefore, we instead solve the equivalent equation $H^\top H x = H^\top g$.
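
The following sketch (ours, not the released implementation) illustrates this matrix-free procedure: it solves $H^\top H x = H^\top g$ with conjugate gradients given only a Hessian-vector-product callable, and checks the result on a small indefinite symmetric matrix.

```python
# A sketch of the matrix-free conjugate-gradient step described above: solve
# (H^T H) x = H^T g using only Hessian-vector products hvp(v) = H v (H symmetric).
import numpy as np

def solve_normal_equations(hvp, g, iters=200, tol=1e-10):
    matvec = lambda v: hvp(hvp(v))          # (H^T H) v, since H is symmetric
    b = hvp(g)                              # H^T g
    x, r = np.zeros_like(b), b.copy()       # start from x = 0, residual r = b - A x
    p, rs = r.copy(), r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy check with an explicit (indefinite, symmetric) H.
H = np.array([[2.0, 0.3], [0.3, -1.0]])
g = np.array([1.0, -2.0])
x = solve_normal_equations(lambda v: H @ v, g)
print(np.allclose(H @ x, g, atol=1e-6))  # True
```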

In terms of time complexity, computing the gradient w.r.t. the task parameters is quite cheap compared to the other steps. On one hand, in each task iteration, the MBRL algorithm needs to collect samples for dynamics model fitting and then roll out a large number of virtual samples from the learned model for policy updates, so its cost scales with the number of virtual samples and the dimensionality of the states and actions. On the other hand, we only need to update the task parameters once in each task iteration, which, using the conjugate gradient method, costs roughly the number of conjugate gradient iterations times the cost of a Hessian-vector product, for each of the few task dimensions. In practice, the MBRL algorithm often needs a large number of virtual samples (e.g., millions) to solve the tasks, while the dimension of the task parameters is a small constant. Therefore, in our algorithm, the runtime of computing the gradient w.r.t. the task parameters is negligible.

In terms of sample complexity, although computing the gradient estimator requires real-environment samples, in practice we can reuse the samples already collected by the MBRL algorithm, which means we take almost no extra samples to compute the gradient w.r.t. the task parameters.

5 Experiments

In our experiments (our code is available at https://github.com/LinZichuan/AdMRL), we aim to study the following questions: (1) How does AdMRL perform on standard meta-RL benchmarks compared to prior state-of-the-art approaches? (2) Does AdMRL achieve better worst-case performance than distributional meta-RL methods? (3) How does AdMRL perform in environments where task parameters are high-dimensional? (4) Does AdMRL generalize better than distributional meta-RL on out-of-distribution tasks?

We evaluate our approach on a variety of continuous control tasks based on OpenAI gym (brockman2016openai), which uses the MuJoCo physics simulator (todorov2012mujoco).

Low-dimensional velocity-control tasks

Following and extending the setup of (finn2017model; rakelly2019efficient), we first consider a family of environments and tasks for 2-D or 3-D velocity control. We consider three popular MuJoCo environments: Hopper, Walker and Ant. For the 3-D task families, we have three task parameters which correspond to the target $x$-velocity, $y$-velocity, and $z$-position. Given the task parameter, the agent's goal is to match the target $x$ and $y$ velocities and $z$ position as closely as possible. The reward penalizes the deviations of the $x$ and $y$ velocities and the height from their targets, weighted by handcrafted coefficients $c_x$, $c_y$, $c_z$ that ensure each reward component contributes similarly (see Table 1 in Appendix C). The set of task parameters is a 3-D box, which can depend on the particular environment; e.g., for Ant3D the range of the target $z$-position is chosen so that the target is mostly achievable. For a 2-D task, the setup is similar except that only two of these three values are targeted. We experiment with Hopper2D, Walker2D and Ant2D. Details are given in Appendix C. We note that we extend the 2-D settings in (finn2017model; rakelly2019efficient) to 3-D because when the task parameters have more degrees of freedom, the task distribution shifts become more prominent.
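
For concreteness, a velocity-control reward of this family might be implemented as below; the absolute-error penalty form is our assumption for illustration, with the per-environment coefficients given in Table 1 (Appendix C).

```python
# A sketch of the velocity/height-tracking reward family; the absolute-error
# form is an assumption for illustration (the paper only states that each
# component is weighted by handcrafted coefficients, cf. Table 1).
def velocity_task_reward(v_x, v_y, height, psi, c_x=1.0, c_y=1.0, c_z=1.0):
    """psi = (target x-velocity, target y-velocity, target height);
    for a 2-D task, set the coefficient of the untargeted component to zero."""
    return -(c_x * abs(v_x - psi[0]) + c_y * abs(v_y - psi[1]) + c_z * abs(height - psi[2]))

# Example: a 3-D task with target velocities (1.5, -0.5) and target height 0.6.
print(velocity_task_reward(v_x=1.2, v_y=-0.4, height=0.55, psi=(1.5, -0.5, 0.6)))
```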

High-dimensional tasks

We also create a more complex family of high-dimensional tasks to test the strength of our algorithm in dealing with adversarial tasks among a large family of tasks with more degrees of freedom. Specifically, the reward function is linear in the post-transition state $s'$, parameterized by a task parameter $\psi \in \mathbb{R}^d$ (where $d$ is the state dimension): $r_\psi(s, a, s') = \psi^\top s'$. Here the task parameter set $\Psi$ is a $d$-dimensional box. In other words, the agent's goal is to take actions that make $s'$ as linearly correlated with the target vector $\psi$ as possible. We use the HalfCheetah environment. Note that to ensure that each state coordinate contributes similarly to the total reward, we normalize the states before computing the reward function, using the mean and standard deviation of all states collected by a random policy in the real environment. We call this family of high-dimensional tasks Cheetah-Highdim. Tasks parameterized in this way are surprisingly often semantically meaningful, corresponding to rotations, jumping, etc. Appendix D shows some visualizations of the trajectories.
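
A minimal sketch of this reward family, with the state normalization described above, is given below (the helper name and the small constant added to the standard deviation are ours).

```python
# A sketch of the high-dimensional linear reward: the post-transition state is
# normalized with statistics from random-policy rollouts, then dotted with psi.
import numpy as np

def make_highdim_reward(states_from_random_policy):
    s = np.asarray(states_from_random_policy)
    mean, std = s.mean(axis=0), s.std(axis=0) + 1e-8     # normalization statistics
    def reward(next_state, psi):
        return float(psi @ ((next_state - mean) / std))  # r_psi(s, a, s') = psi^T s'_normalized
    return reward
```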

Training

We compare our approach with previous meta-RL methods, including MAML (finn2017model) and PEARL (rakelly2019efficient). The training process for our algorithm is outlined in Algorithm 1. We build our algorithm on top of the code provided by luo2018algorithmic. We use the publicly available code for our baselines MAML and PEARL. Most hyper-parameters are taken directly from the supplied implementations. We list all the hyper-parameters used for all algorithms in Appendix C. We note here that we only run our algorithm on a small number of training tasks, whereas we allow MAML and PEARL to visit 150 tasks during meta-training for generosity of comparison. The training process of MAML and PEARL requires 80 and 2.5 million samples respectively, while our method AdMRL only requires 0.4 or 0.8 million samples.

Evaluation Metric

For low-dimensional tasks, we enumerate test tasks in a grid. For each 2-D environment (Hopper2D, Walker2D, Ant2D) we evaluate on a 2-D grid of test tasks, and for the 3-D tasks (Ant3D) we evaluate on a 3-D grid. For high-dimensional tasks, we randomly sample 20 test tasks uniformly on the boundary of the task space. For each task $\psi$, we compare different algorithms in terms of: the zero-shot adaptation performance with no samples, the adaptation performance after collecting samples, the suboptimality gap, and the worst-case suboptimality gap over tasks. In our experiments, we compare AdMRL with MAML and PEARL in all environments. We also compare AdMRL with distributional variants (i.e., model-based methods with a uniform or Gaussian task sampling distribution) on worst-case tasks, high-dimensional tasks and out-of-distribution (OOD) tasks.
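
The evaluation boils down to computing per-task and worst-case sub-optimality gaps, e.g., as in the following sketch, where `optimal_return` and `adapted_return` are hypothetical callables supplied by the evaluation harness.

```python
# A sketch of the evaluation metrics: per-task suboptimality gap and the
# worst-case gap over a set of test tasks. `optimal_return` and
# `adapted_return` are hypothetical lookups, not part of the released code.
import numpy as np

def suboptimality_gaps(test_tasks, optimal_return, adapted_return):
    gaps = np.array([optimal_return(psi) - adapted_return(psi) for psi in test_tasks])
    return gaps, gaps.max()   # per-task gaps and the worst-case gap
```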

5.1 Adaptation Performance Compared to Baselines

[Figure 2: panels figs/oursalleval-{hopper, walker, ant1d, ant2d, linear}.png]

Figure 2: Returns of adapted policies averaged over all test tasks (with 3 random seeds) for our algorithm, MAML and PEARL. Our approach substantially outperforms the baselines in training and test time sample efficiency, even with zero-shot adaptation.

For the tasks described in Section 5, we compare our algorithm against MAML and PEARL. Figure 2 shows the adaptation results on the set of test tasks. We produce the curves by: (1) running our algorithm on adversarially chosen training tasks and the baseline algorithms on uniformly sampled training tasks; (2) for each test task, first performing zero-shot adaptation for our algorithm, and then running our algorithm and the baseline algorithms while collecting samples; (3) estimating the average returns of the policies by sampling new roll-outs. The curves show the return averaged across all test tasks with three random seeds at test time. Our approach AdMRL outperforms MAML and PEARL across all test tasks, even though our method visits far fewer tasks and samples than the baselines during meta-training. AdMRL outperforms MAML and PEARL even with zero-shot adaptation, namely, collecting no samples. (Note that zero-shot model-based adaptation takes advantage of additional information, the reward function, which MAML and PEARL have no mechanism for using.) We also find that the zero-shot adaptation performance of AdMRL is often very close to the performance after collecting samples. This is a result of minimizing the sub-optimality gap in our method.

5.2 Comparing with Model-based Baselines in Worst-case Sub-optimality Gap

[Figure 3: sub-optimality heatmaps for AdMRL, MB-Gauss, and MB-Unif (a), and worst-case sub-optimality gap curves (b)]

Figure 3: (a) Sub-optimality gap of adapted policies for each test task from AdMRL, MB-Unif, and MB-Gauss. Lighter means smaller, which is better. For tasks on the boundary, AdMRL achieves a much lower gap than MB-Gauss and MB-Unif, which indicates that AdMRL generalizes better in the worst case. (b) The worst-case sub-optimality gap as a function of the number of adaptation samples. AdMRL successfully minimizes the worst-case suboptimality gap.

[Figure 4: visualizations of visited training tasks for MB-Unif, MB-Gauss, and AdMRL (a), and worst-case suboptimality gap curves for high-dimensional tasks (b)]

Figure 4: (a) Visualization of the training tasks visited by MB-Unif, MB-Gauss and AdMRL; AdMRL quickly visits tasks with a large suboptimality gap on the boundary and trains the model to minimize the worst-case suboptimality gap. (b) The worst-case suboptimality gap as a function of the number of adaptation samples for high-dimensional tasks. AdMRL significantly outperforms the baselines on such tasks.

In this section, we aim to investigate the worst-case performance of our approach. We compare our adversarial task selection with distributional variants: model-based training with tasks sampled from a uniform or a Gaussian distribution (with variance 1), denoted by MB-Unif and MB-Gauss, respectively. All methods are trained on 20 tasks and then evaluated on a grid of test tasks. We plot heatmaps of the sub-optimality gap for each test task in Figure 3. We find that while both MB-Gauss and MB-Unif tend to over-fit to the tasks in the center, AdMRL generalizes much better to the tasks on the boundary. Figure 3 also shows the adaptation performance on the tasks with the worst sub-optimality gap. We find that AdMRL achieves a lower sub-optimality gap in the worst cases.

Performance on high-dimensional tasks

Figure 4 shows the suboptimality gap during adaptation on high-dimensional tasks. We highlight that AdMRL performs significantly better than MB-Unif and MB-Gauss when the task parameters are high-dimensional. In the high-dimensional tasks, we find that each task has diverse optimal behavior. Thus, sampling from a fixed distribution of tasks during meta-training becomes less efficient: it is hard to cover the tasks with the worst suboptimality gap by randomly sampling from a given distribution. In contrast, our non-distributional adversarial selection searches for those hardest tasks efficiently and trains a model that minimizes the worst-case suboptimality gap.

Visualization. To understand how our algorithm works, we visualize the task parameters visited during meta-training in the Ant3D environment, comparing our method with MB-Unif and MB-Gauss in Figure 4. We find that our method quickly visits the hard tasks on the boundary, in the sense that it finds the most informative tasks to train our model. In contrast, sampling randomly from a uniform or Gaussian distribution is much less likely to visit the tasks on the boundary.

5.3 Out-of-distribution Performance

We evaluate our algorithm on out-of-distribution tasks in the Ant2D environment. We train agents with tasks drawn from an inner region of the task space while testing on OOD tasks outside that region (see Figure 5). Figure 5 shows the performance of AdMRL in comparison to MB-Unif and MB-Gauss. We find that AdMRL has a much lower suboptimality gap than MB-Unif and MB-Gauss on OOD tasks, which shows the generalization power of AdMRL.

[Figure 5: OOD sub-optimality heatmaps for AdMRL, MB-Gauss, and MB-Unif (a), and worst-case sub-optimality gap curves (b)]

Figure 5: (a) Sub-optimality gap of adapted policies for each OOD test task from AdMRL, MB-Unif and MB-Gauss. Lighter means smaller, which is better. Training tasks are drawn from the region shown in the red box, while we only test on the OOD tasks on the boundary. Our approach AdMRL generalizes much better and achieves a lower gap than MB-Unif and MB-Gauss on OOD tasks. (b) The worst-case sub-optimality gap as a function of the number of adaptation samples.

6 Conclusion

In this paper, we propose Model-based Adversarial Meta-Reinforcement Learning (AdMRL) to address the distribution shift issue of meta-RL. We formulate the adversarial meta-RL problem and propose a minimax formulation to minimize the worst-case sub-optimality gap. To optimize efficiently, we derive an estimator of the gradient with respect to the task parameters and implement the estimator efficiently using the conjugate gradient method. We provide extensive results on standard benchmark environments to show the efficacy of our approach over prior meta-RL algorithms. In the future, several interesting directions lie ahead: (1) applying AdMRL to more difficult settings such as visual domains; (2) replacing SLBO with other MBRL algorithms; (3) applying AdMRL to cases where the parameterization of the reward function is unknown.

Acknowledgement

We thank Yuping Luo for helpful discussions about the implementation details of SLBO. Zichuan was supported in part by the Tsinghua Academic Fund Graduate Overseas Studies and in part by the National Key Research & Development Plan of China (grant no. 2016YFA0602200 and 2017YFA0604500). TM acknowledges support of Google Faculty Award and Lam Research. The work is also in part supported by SDSI and SAIL.

References

Appendix A Omitted Derivations

A.1 Jacobian of $\hat{\theta}_\psi$ with respect to $\psi$

We begin with an observation: the first-order optimality conditions for $\hat{\theta}_\psi = \operatorname{argmax}_\theta \hat{\eta}_\phi(\pi_\theta, \psi)$ necessitate that
$$\nabla_\theta \hat{\eta}_\phi(\pi_\theta, \psi)\big|_{\theta = \hat{\theta}_\psi} = 0. \qquad (8)$$

Then, the implicit function theorem tells us that for a sufficiently small perturbation $\delta\psi$, there exists $\hat{\theta}_{\psi + \delta\psi}$ as a function of $\delta\psi$ such that
$$\nabla_\theta \hat{\eta}_\phi(\pi_\theta, \psi + \delta\psi)\big|_{\theta = \hat{\theta}_{\psi + \delta\psi}} = 0. \qquad (9)$$

To first order, we have
$$\nabla^2_\theta \hat{\eta}_\phi(\pi_\theta, \psi)\big|_{\theta = \hat{\theta}_\psi} \big(\hat{\theta}_{\psi + \delta\psi} - \hat{\theta}_\psi\big) + \nabla_\psi \nabla_\theta \hat{\eta}_\phi(\pi_\theta, \psi)\big|_{\theta = \hat{\theta}_\psi}\, \delta\psi \approx 0. \qquad (10)$$

Thus, solving for $\hat{\theta}_{\psi + \delta\psi} - \hat{\theta}_\psi$ as a function of $\delta\psi$ and taking the limit $\delta\psi \to 0$, we obtain
$$\frac{\partial \hat{\theta}_\psi}{\partial \psi} = -\Big[\nabla^2_\theta \hat{\eta}_\phi(\pi_\theta, \psi)\Big]^{-1} \nabla_\psi \nabla_\theta \hat{\eta}_\phi(\pi_\theta, \psi)\,\Big|_{\theta = \hat{\theta}_\psi}. \qquad (11)$$

A.2 Policy Hessian

Fix the dynamics $\widehat{T}_\phi$, and let $p_\theta(\tau)$ denote the probability density of trajectory $\tau$ under policy $\pi_\theta$. Then we have
$$\nabla_\theta p_\theta(\tau) = p_\theta(\tau)\, \nabla_\theta \log p_\theta(\tau). \qquad (12)$$

Thus we get the basic (REINFORCE) policy gradient
$$\nabla_\theta \hat{\eta}_\phi(\pi_\theta, \psi) = \nabla_\theta \int p_\theta(\tau)\, R_\psi(\tau)\, d\tau = \mathbb{E}_{\tau \sim p_\theta}\big[R_\psi(\tau)\, \nabla_\theta \log p_\theta(\tau)\big]. \qquad (13)$$

Differentiating our earlier expression for $\nabla_\theta p_\theta(\tau)$ once more, and then reusing that same expression again, we have
$$\nabla^2_\theta p_\theta(\tau) = \nabla_\theta p_\theta(\tau)\, \nabla_\theta \log p_\theta(\tau)^\top + p_\theta(\tau)\, \nabla^2_\theta \log p_\theta(\tau) \qquad (14)$$
$$= p_\theta(\tau)\Big(\nabla_\theta \log p_\theta(\tau)\, \nabla_\theta \log p_\theta(\tau)^\top + \nabla^2_\theta \log p_\theta(\tau)\Big). \qquad (15)$$

Thus
$$\nabla^2_\theta \hat{\eta}_\phi(\pi_\theta, \psi) = \nabla^2_\theta \int p_\theta(\tau)\, R_\psi(\tau)\, d\tau \qquad (16)$$
$$= \int \nabla^2_\theta p_\theta(\tau)\, R_\psi(\tau)\, d\tau \qquad (17)$$
$$= \int p_\theta(\tau)\Big(\nabla_\theta \log p_\theta(\tau)\, \nabla_\theta \log p_\theta(\tau)^\top + \nabla^2_\theta \log p_\theta(\tau)\Big) R_\psi(\tau)\, d\tau \qquad (18)$$
$$= \mathbb{E}_{\tau \sim p_\theta}\Big[R_\psi(\tau)\Big(\nabla_\theta \log p_\theta(\tau)\, \nabla_\theta \log p_\theta(\tau)^\top + \nabla^2_\theta \log p_\theta(\tau)\Big)\Big], \qquad (19)$$
which gives the estimator in Eq. (7).
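
As a quick sanity check of this identity, the snippet below verifies Eqs. (12)-(19) by exact enumeration on a one-parameter Bernoulli "policy" (a toy construction of ours).

```python
# A self-contained check of Eqs. (12)-(19) on a one-parameter Bernoulli "policy":
# the exact second derivative of E[R] matches the score-function expression
# E[R ((d log p)^2 + d^2 log p)], computed here by exact enumeration.
import numpy as np

theta, R1, R0 = 0.3, 2.0, -1.0
sig = 1.0 / (1.0 + np.exp(-theta))          # p(a = 1)

# Analytic second derivative of E[R] = sig*R1 + (1-sig)*R0 w.r.t. theta.
exact = sig * (1 - sig) * (1 - 2 * sig) * (R1 - R0)

# Score-function form, enumerated over the two outcomes a = 1 and a = 0.
terms = [
    sig * R1 * ((1 - sig) ** 2 - sig * (1 - sig)),        # a = 1
    (1 - sig) * R0 * (sig ** 2 - sig * (1 - sig)),        # a = 0
]
print(np.isclose(exact, sum(terms)))  # True
```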

Appendix B Implementation detail

This section discusses how to compute the Hessian-vector products needed for Eq. (7) using standard automatic differentiation packages. We first define the following function:
$$f(\theta_1, \theta_2) := \mathbb{E}_{\tau \sim p_{\theta_1}}\big[R_\psi(\tau)\, \log p_{\theta_2}(\tau)\big], \qquad (20)$$
where $\theta_1, \theta_2$ are parameter copies of $\theta$. We then use Hessian-vector products to avoid directly forming the second derivatives. Specifically, we compute the two parts of Eq. (7) respectively by first differentiating $f$ w.r.t. $\theta_1$ and $\theta_2$,
$$\nabla_{\theta_2} \nabla_{\theta_1} f(\theta_1, \theta_2)\big|_{\theta_1 = \theta_2 = \theta} = \mathbb{E}_{\tau \sim p_\theta}\big[R_\psi(\tau)\, \nabla_\theta \log p_\theta(\tau)\, \nabla_\theta \log p_\theta(\tau)^\top\big], \qquad (21)$$
and then differentiating $f$ w.r.t. $\theta_2$ twice,
$$\nabla^2_{\theta_2} f(\theta_1, \theta_2)\big|_{\theta_1 = \theta_2 = \theta} = \mathbb{E}_{\tau \sim p_\theta}\big[R_\psi(\tau)\, \nabla^2_\theta \log p_\theta(\tau)\big], \qquad (22)$$
and thus $\nabla^2_\theta \hat{\eta}_\phi(\pi_\theta, \psi)$ is the sum of Eq. (21) and Eq. (22).
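
In an automatic-differentiation framework, the required Hessian-vector products can be obtained with the double-backward (Pearlmutter) trick; the sketch below uses PyTorch with a toy objective standing in for the sampled surrogate of $\hat{\eta}_\phi$.

```python
# A sketch of a Hessian-vector product with automatic differentiation
# (the double-backward / Pearlmutter trick), used to avoid forming the
# Hessian explicitly; the scalar objective here is a stand-in for eta_hat.
import torch

def hessian_vector_product(objective, theta, v):
    """Return (d^2 objective / d theta^2) @ v without materializing the Hessian."""
    grad = torch.autograd.grad(objective(theta), theta, create_graph=True)[0]
    return torch.autograd.grad(grad @ v, theta)[0]

theta = torch.randn(5, requires_grad=True)
v = torch.randn(5)
f = lambda th: (th ** 4).sum() + (th[0] * th[1])   # toy scalar objective
hv = hessian_vector_product(f, theta, v)
print(hv.shape)  # torch.Size([5])
```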

Appendix C Hyper-parameters

We experimented with the following task settings: Hopper2D (target velocity and height), Walker2D (target velocity and height), Ant2D (target $x$ and $y$ velocities), Ant3D (target $x$ velocity, $y$ velocity and height), and Cheetah-Highdim (linear rewards in the post-transition state). We also list the coefficients of the parameterized reward functions in Table 1.

                      Hopper2D   Walker2D   Ant2D   Ant3D
$c_x$ ($x$-velocity)      1          1        1       1
$c_y$ ($y$-velocity)      0          0        1       1
$c_z$ (height)            5          5        0      30
Table 1: Coefficients of the parameterized reward functions

The hyper-parameters of MAML and PEARL are mostly taken directly from the supplied implementations of [finn2017model] and [rakelly2019efficient]. We run MAML for 500 training iterations: in each iteration, MAML uses a meta-batch size of 40 (the number of tasks sampled at each iteration) and a batch size of 20 (the number of rollouts used to compute the policy gradient updates). Overall, MAML requires 80 million samples during meta-training. For PEARL, we first collect a batch of 150 training tasks by uniformly sampling from the task space. We run PEARL for 500 training iterations: in each iteration, PEARL randomly samples 5 tasks and collects 1000 samples for each task from both the prior (400) and the posterior (600) of the context variables; for each gradient update, PEARL uses a meta-batch size of 10 and optimizes the parameters of the actor, critic and context encoder with 4000 steps of gradient descent. Overall, PEARL requires 2.5 million samples during meta-training.

For AdMRL, we first perform zero-shot adaptation for each task with 40 virtual training steps. We then perform SLBO [luo2018algorithmic] by interleaving data collection, dynamical model fitting and policy updates, using 3 outer iterations and 20 inner iterations. Algorithm 2 shows the pseudo-code of the virtual training procedure. In each inner iteration, we update the model for 100 steps and the policy for 20 steps, each policy step using 10000 virtual samples. For the first task, we use a larger number of SLBO iterations (one setting for Hopper2D and Walker2D, and another for Ant2D, Ant3D and Cheetah-Highdim). For all tasks, we sweep the task-parameter learning rate in {1, 2, 4, 8, 16, 32}, with different choices for Hopper2D, Walker2D, Ant2D/Ant3D and Cheetah-Highdim. To compute the gradient w.r.t. the task parameters, we run 200 iterations of the conjugate gradient method.

1: procedure VirtualTraining($\theta, \phi, \psi, \mathcal{D}, n_{\text{inner}}$)
2:      for $n_{\text{inner}}$ iterations do
3:          Optimize the virtual dynamics $\widehat{T}_\phi$ over $\phi$ with data sampled from $\mathcal{D}$ for $n_{\text{model}}$ steps
4:          for $n_{\text{policy}}$ iterations do
5:              $\mathcal{D}' \leftarrow$ {collect samples from the learned dynamics $\widehat{T}_\phi$ using $\pi_\theta$}
6:              Optimize $\pi_\theta$ by running TRPO on $\mathcal{D}'$
Algorithm 2 Virtual Training in AdMRL

Appendix D Examples of high-dimensional tasks

Figure 6 shows some trajectories in the high-dimensional task Cheetah-Highdim.

[Figure 6: figs/highdim.png]

Figure 6: The high-dimensional tasks are surprisingly often semantically meaningful. Policies learned in these tasks can have diverse behaviors, such as front flip (top row), back flip (middle row), jumping (bottom row), etc.