MANGA: Method Agnostic Neural-policy Generalization and Adaptation

11/19/2019 ∙ by Homanga Bharadhwaj, et al. ∙ UNIVERSITY OF TORONTO 23

In this paper we target the problem of transferring policies across multiple environments with different dynamics parameters and motor noise variations, by introducing a framework that decouples the processes of policy learning and system identification. Efficiently transferring learned policies to an unknown environment with changes in dynamics configurations in the presence of motor noise is very important for operating robots in the real world, and our work is a novel attempt in that direction. We introduce MANGA: Method Agnostic Neural-policy Generalization and Adaptation, that trains dynamics conditioned policies and efficiently learns to estimate the dynamics parameters of the environment given off-policy state-transition rollouts in the environment. Our scheme is agnostic to the type of training method used - both reinforcement learning (RL) and imitation learning (IL) strategies can be used. We demonstrate the effectiveness of our approach by experimenting with four different MuJoCo agents and comparing against previously proposed transfer baselines.







I Introduction

One of the most well-recognized goals of robotics research is to develop autonomous agents that can perform a wide variety of tasks in various complex environments. Recently, numerous deep reinforcement learning (RL) and imitation learning (IL) based approaches have sought to achieve good performance in complex robotic tasks with minimal supervision. However, a major concern in experimenting with the real environment directly is safety, both of the robot and of the environment. Safety concerns, along with the issue of reproducibility, have drawn robotics research extensively toward simulation environments.

An important benefit of simulators is that not only can we reset as many times as needed while varying the initial state and/or injecting stochastic noise such as observation noise and motor noise, but we can also arbitrarily configure the environment. This enables us to change dynamics parameters such as the mass, shape, size, and inertia of the agent, the friction between the agent and the environment, damping coefficients, and gravitational acceleration. We leverage this to develop our approach such that a wide ensemble of simulation configurations can be used in training to achieve robustness to a new environment. We especially focus on adaptation to the unknown dynamics of the new environment.

Most previous approaches for transfer to different environments [10, 2, 7, 29] have not explicitly taken advantage of the fact that we can dynamically sample a variety of environments in simulation, and some that have done so [16, 22, 32, 30, 27] do not attempt to learn an efficient off-policy scheme for inferring the dynamics of the environment. To remedy this, we adopt a two-fold approach, and claim the following contributions:

  • learn a good latent space by encoding observations through appropriate regularizations, explicitly concatenate to it an encoding of the vector of dynamics parameter configurations, and condition the policy decoder on this latent representation;

  • develop a Bayesian meta-learning scheme to infer the dynamics parameter configuration of a given environment from a dataset of off-policy rollouts in that environment.

We demonstrate that by randomly sampling the parameters of the simulation environments, and adapting the policy to these varied configurations in training, we can achieve successful transfer at test time to a completely unseen dynamics configuration of the environment. An important point to note is that at test time, we do not have access to the ground-truth system parameters. So, we develop a scheme to learn the system parameters from random off-policy state-transition data.

A desirable property of a transfer learning method is that it be zero-shot, in the sense that the transferred policy should not require any fine-tuning in the target environment, so that the safety of the real robot is not compromised when the method is applied to sim2real transfer. This is indeed realized by our proposed approach.

Although we evaluate our model in simulation only and study transfer across different simulation environments, the approach can be extended to sim2real transfer settings as well, provided there is access to a real robot, and we can mimic the real dynamics well when we set appropriate dynamics parameters in the simulator.

II Related Work

Training robots directly in the real environment is unsafe, especially for domains like navigation/locomotion [17, 25, 18], and hence training in simulation and deploying in the real world has become a common trend in robotics, under the theme of sim2real transfer [13, 34]. An important first step to sim2real transfer is sim2sim transfer [10, 31]. Numerous recent works have tackled a similar problem to ours and studied transfer of policies across different simulation environments, across dynamics models, and from simulation to real environments. Universal Planning Network (UPN) [21] trains for goal-directed tasks and in the process tries to capture ‘transferable representations’ such that the trained encoders can be used for reward-shaping an RL algorithm for a similar albeit slightly more complicated task. It is important to note that complete on-policy training of the RL algorithm still needs to be performed in the new environment; the only ‘transfer’ benefit provided by UPN is in reward shaping. Hence, good transfer cannot be achieved zero-shot.

Learning a policy that is robust to dynamics changes can be naively done by training a policy architecture across different domain-randomized configurations. This has been done in the Domain Randomization [16, 22] approaches, the main drawback of which is that they learn an ‘average’ policy that performs reasonably well in a wide range of test environments but is not ‘very good’ in each environment. Motivated by this drawback, we do not aim to develop ‘robust’ policies, but policies that can ‘adapt’ to a given new test environment. A simple way to do this, as shown in [30], could be to maintain a repertoire of policies corresponding to different dynamics configurations (this ensemble is called the strategy) and choose the best policy for the test environment by running a few episodes and selecting the policy that yields the most reward. However, since this method requires a number of execution episodes in the test environment that grows linearly with the number of policies in the strategy, the approach is not scalable.

In [1], the authors use LSTM [11] value and policy networks that implicitly learn the dynamics parameters of the environment during policy learning via dynamics randomization. However, learning a dynamics model together with policy learning gives less control over what is learned in the latent space and may lead to sensitive hyper-parameter optimization for achieving convergence. Hence, it is advisable to decouple the two procedures, as advocated in [32]. Another issue is the convergence of dynamics parameter estimation. Since the LSTM assumes time-varying latent variables, and the observations change at every time-step while the environmental dynamics remain fixed within an episode, achieving convergence to both a good policy and a good dynamics estimate may be difficult.

Meta-Learning [8, 28, 9, 20] attempts to develop general models that can adapt to new tasks with very few model updates. FastMAML [35] modifies MAML [8] by separating the model parameters into general and task-specific parameters. Only the task-specific parameters need to be updated when a new ‘test’ task is given. NoReward MAML [27] extends MAML [8] to handle tasks defined by different dynamics configurations of the environment. The main difference from vanilla MAML is that the authors meta-learn the advantage function, which is used to appropriately bias the Monte-Carlo sampling estimates of the policy during learning. An important drawback of this approach is that by considering non-temporal state-action transition sequences (just static tuples of the form (state, action, next state)), important dynamics parameters like friction, gravity, etc. cannot be appropriately modeled. Another drawback is that the method requires fine-tuning with some data samples in the test environment, and hence is not zero-shot.

Learning a domain-invariant latent space on which the policy is conditioned is another line of domain-adaptation based approaches for policy transfer. Zhang et al. [33] adapt the encoder from sim to real by performing adversarial domain adaptation (ADA) [4, 24] to match the latent spaces of the encodings in sim and real, but require intermediate supervision, in the form of the positions of robotic joints, for the latent state while training the encoders. Bharadhwaj et al. [2] do this end-to-end without requiring intermediate supervision. However, both of these approaches suffer from the drawback of not being able to transfer effectively to different dynamics configurations, as ADA cannot capture non-visual changes. Hence, they require fine-tuning in the real environment for aligning the dynamics modules, and so are not zero-shot approaches.

Yu et al. [29] adopt a two-stage process: system identification followed by policy transfer. The novelty of their method is in training a policy architecture conditioned on the roughly identified model parameters. However, a major concern with this approach is that on-policy state-transition data from the intermediately trained model is required in the target environment for system-parameter identification, which is not safe (since the model has not yet been fully trained). Also, the proposed method can only be used for transfer to a ‘fixed’ target environment: when the target environment is altered, i.e., the system dynamics parameters change, the entire method, including ‘pre-SysID’, needs to be re-trained. Our method, in contrast, can after training be deployed in any test environment with unknown dynamics parameters and does not need to be re-trained when the test environment changes.

III The Proposed Approach

The components of the proposed approach are described below:

III-A The basic model

Our basic model consists of an encoder for observations and a policy (or ‘action’) decoder. The model corresponds to the Markov chain s_t → z_t → a_t, where s_t is the input state (which can be either fully observable or partially observable), z_t is the latent encoding, and a_t is a sample from the action space, which is what the model outputs. We consider the distribution over a_t to be a normal distribution whose mean and variance are predicted by the decoder from the latent z_t. Our training scheme is end-to-end, and hence we do not need intermediate supervision for the latent z_t. In the subsequent sections, we refer to the observation encoder and the action decoder; later, we also introduce a dynamics encoder, an inverse dynamics model, and a state (reconstruction) decoder. The Dynamics Conditioned Policy module in Fig. 1 describes the basic architecture of MANGA. All the model components are realized by feed-forward neural networks.

Fig. 1: A schematic of the overall architecture of MANGA. The Elemental Dynamics Estimators and their aggregation are described in Sec. III-D.

III-B Dynamics conditioned policy (DCP)

We condition our policy decoder both on the current observation frame in the environment and on an encoding of the dynamics parameters of the agent and the environment. Many previous papers [16, 7] fed the raw dynamics parameters directly to the policy model (i.e., without encoding them separately from the input observations); however, it is important to use a separate encoding of the parameters so that they are well scaled and in sync with the latent encoding of the input observations. This is also important because the observations change at each time-step while the dynamics parameter vector, and thus the dynamics encoding, remains fixed within each episode of training.

Consider the process of training our model in a simulation environment E, and let the dynamics parameters of E be denoted by a d-dimensional vector μ. We encode the ground-truth dynamics parameters through a dynamics encoder and feed the resulting encoding e_μ into the bottleneck layer at each time-step t of our basic model. The bottleneck layer is the concatenation [z_t; e_μ], where z_t is the encoding of the observation s_t in E at time t. The policy decoder then takes the vector [z_t; e_μ] as input and outputs the mean and covariance matrix of the action distribution, from which the action a_t is sampled.

Here a_t is the output action of the model corresponding to the input observation s_t and the dynamics vector μ. The policy learned in E is not likely to work well in another environment E′, even if we provided its dynamics parameters μ′, because we have not trained the policy to distinguish between the dependence on the observations and the dependence on μ. To remedy this, we borrow the idea of dynamics randomization from Peng et al. [16].
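As an illustration of the dynamics-conditioned policy's forward pass, the following minimal NumPy sketch concatenates an observation encoding with a dynamics encoding and decodes a Gaussian action. All network sizes, names (`obs_encoder`, `dcp_action`, etc.), and the random untrained weights are our own assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    # Random fixed weights stand in for trained networks in this sketch.
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for W, b in layers:
        x = np.tanh(x @ W + b)
    return x

obs_dim, dyn_dim, z_dim, e_dim, act_dim = 8, 4, 16, 8, 2
obs_encoder = mlp([obs_dim, 32, z_dim])                 # observation encoder: s_t -> z_t
dyn_encoder = mlp([dyn_dim, 16, e_dim])                 # dynamics encoder: mu -> e_mu
policy_decoder = mlp([z_dim + e_dim, 32, 2 * act_dim])  # outputs mean and log-std

def dcp_action(s_t, mu):
    z_t = forward(obs_encoder, s_t)        # latent observation encoding
    e_mu = forward(dyn_encoder, mu)        # dynamics encoding
    out = forward(policy_decoder, np.concatenate([z_t, e_mu]))  # bottleneck [z_t; e_mu]
    mean, log_std = out[:act_dim], out[act_dim:]
    return mean + np.exp(log_std) * rng.standard_normal(act_dim)  # sample the action

a_t = dcp_action(rng.standard_normal(obs_dim), rng.standard_normal(dyn_dim))
```

The key design point is that the dynamics encoding is computed once per episode (μ is fixed within an episode) while z_t is recomputed every time-step.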

III-C Training the DCP - Improving Generalization through Dynamics Randomization

To implicitly learn the dependence of the policy on both the input observations and the dynamics of the environment, we train our model across different simulation environments by choosing random values for the dynamics parameters (within appropriate ranges) across environments. At the start of each episode we sample a dynamics vector μ that defines an environment, choose a random initial pose (state) from the distribution of all states, train the model in that environment, and sample another μ at the start of the next episode. This explicit conditioning over a wide ensemble of dynamics parameters enables transfer to unseen dynamics parameters.

Our proposed method is agnostic to the type of training procedure, and both Reinforcement Learning (RL) and Imitation Learning (IL) approaches can be used. However, in the experiments we consider a specific RL algorithm for training, for the sake of consistency in comparison. The detailed training procedure for the dynamics conditioned policy is described in Algorithm 1.

1:procedure Train(L-algo)
2:     Initialize the parameters of the model
3:     Generate randomized environments
4:     for each episode do
5:         Randomly choose an environment
6:         Obtain its dynamics parameters μ
7:         Randomly sample the initial state
8:         Train using L-algo
Algorithm 1 Training Procedure of DCP
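The loop in Algorithm 1 can be sketched as follows; `sample_dynamics`, `run_episode`, and the parameter names are hypothetical placeholders for the simulator configuration step and the learning algorithm L-algo.

```python
import random

rng = random.Random(0)

def sample_dynamics(base, rel_range):
    """Perturb each base parameter uniformly within +/- rel_range of its value."""
    return {k: v * (1.0 + rng.uniform(-rel_range, rel_range)) for k, v in base.items()}

def train_dcp(num_episodes, base_params, rel_range, run_episode):
    """Outer loop of Algorithm 1: a fresh randomized environment every episode."""
    history = []
    for _ in range(num_episodes):
        mu = sample_dynamics(base_params, rel_range)  # dynamics of this episode's env
        history.append(run_episode(mu))               # L-algo update (RL or IL), abstract here
    return history

base = {"mass": 1.0, "friction": 0.5, "gravity": 9.81}
logs = train_dcp(5, base, 0.25, run_episode=lambda mu: mu["mass"])
```

Any RL or IL update can be plugged in as `run_episode`, which is exactly the method-agnostic property the paper claims.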

III-D Inferring the dynamics parameters at test time

Since we do not have access to the system’s dynamics parameters at test time, we propose a scheme to learn the system parameters from random off-policy state-transition data. During training, we have access to the simulation environments E_i and their corresponding dynamics parameters μ_i. We consider a random policy that samples a random action in the range of allowed actions (or any pre-trained ‘safe’ policy) and allow it to run for a few episodes in each environment. We collect state-transition data in tuples of the form (state, action, next state), i.e., (s_t, a_t, s_{t+1}). Let f be a forward dynamics model of the simulator such that the true next state is given as

s_{t+1} = f(s_t, a_t; μ) + ε

when we have the true value of the system parameters μ. Here ε is a noise term, and for the sake of analytic simplicity, we assume ε is Gaussian with zero mean and variance σ². The above defines the likelihood model for state transitions, and our aim is to estimate μ through its posterior p(μ | D), where D denotes the collected transition data.
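A minimal sketch of the Gaussian transition log-likelihood this defines (our own helper, assuming the stated zero-mean isotropic noise):

```python
import numpy as np

def transition_log_likelihood(s_next, f_pred, sigma):
    """log N(s_{t+1} | f(s_t, a_t; mu), sigma^2 I), summed over state dimensions."""
    d = s_next.shape[0]
    resid = s_next - f_pred
    return -0.5 * (d * np.log(2 * np.pi * sigma**2) + np.sum(resid**2) / sigma**2)

# With zero residual the log-likelihood reduces to -d/2 * log(2*pi*sigma^2).
ll = transition_log_likelihood(np.zeros(3), np.zeros(3), sigma=1.0)
```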

Although some previous approaches [27] try to estimate system dynamics parameters from uncorrelated, stand-alone state-transition tuples, we postulate that to correctly estimate dynamics parameters, we must consider correlated state-transition data within episodes. We divide the horizon of the episodes in each E_i into chunks of length T each and estimate μ for each chunk in the form of a Gaussian distribution, whose mean and variance are produced by an elemental dynamics estimator. If we denote the observation sequence and action sequence within the j-th chunk in E_i by S_j and A_j, this amounts to estimating the posterior p(μ | S_j, A_j).

The chunk length T should be large, but not too large. To aggregate the chunk-wise estimates of μ, we exploit the relationship between the posterior of μ conditioned on a single chunk, p(μ | S_j, A_j), and the posterior of μ conditioned on the entire dataset D, p(μ | D). Because we assume the prior and the chunk posteriors to be independent Gaussian distributions, p(μ | D) can be obtained as a Gaussian after some elementary computations: its precision is the sum of the chunk precisions (corrected for the repeated prior), and its mean is the corresponding precision-weighted combination of the chunk means.

The posterior is parameterized through the mean and variance functions of the elemental estimator, which are realized by deep neural networks, together with a few scalar parameters. These parameters are optimized so as to approximate the true posterior well.
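The Gaussian aggregation step can be illustrated as a standard product-of-Gaussians (precision-weighted) combination. This helper is our own sketch, assuming an effectively flat prior, and treats a single scalar component of μ.

```python
import numpy as np

def aggregate_gaussians(means, variances, prior_mean=0.0, prior_var=1e6):
    """Combine independent Gaussian chunk posteriors over the same parameter
    by multiplying their densities: precisions add, means are precision-weighted."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    precision = 1.0 / prior_var + np.sum(1.0 / variances)
    mean = (prior_mean / prior_var + np.sum(means / variances)) / precision
    return mean, 1.0 / precision

# Two chunk estimates with equal variance: the aggregate sits at their average,
# with roughly half the variance of a single chunk.
m, v = aggregate_gaussians([1.0, 3.0], [2.0, 2.0])
```

This is why adding more chunks shrinks the posterior variance: each chunk contributes its precision additively.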

A popular way of approximating the posterior is to minimize the KL divergence between the true posterior and its parameterized approximation for each training environment. This posterior approximation problem can be solved without explicitly evaluating the true posterior by maximizing the corresponding evidence lower bound (ELBO) [12]. Here we can use the re-parameterization trick [12], which replaces the expectation with respect to the approximate posterior by an expectation over a standard Gaussian variable, by interpreting a sample from the Gaussian posterior as the result of the element-wise transformation μ = m + σ ⊙ ε with ε ∼ N(0, I).

It is important to consider a chunk of temporal sequences instead of stand-alone tuples (unlike [27]) so as to effectively capture the posterior over complex dynamics parameters like friction, gravity, etc. In general, the posterior of the dynamics parameters can be a complex multi-modal distribution, but it approaches a Gaussian as the number of samples in the temporal chunk increases, provided the statistical model is ‘regular’, by the central limit theorem [26]. The estimates of μ from the temporal chunks of length T in each episode of the rollouts constitute the ‘Elemental Dynamics Estimators’ in Fig. 1. Their ‘Aggregator’ is described by the optimization problem above.

Given state-transition data, we can use the trained model to infer the dynamics parameter vector of the test environment. It is important to note that collecting data in the test environment for this system parameter identification is inexpensive because we only need off-policy data, which can be collected by simply running a random policy or a different pre-trained ‘safe’ policy.

III-E Test-time inference

At test time we are given a simulation environment whose dynamics parameters are unknown. Our trained model components have been adapted through training in different dynamics configurations, and our aim now is to transfer the policy learned in training without any fine-tuning in the test environment, i.e., we are not allowed to train again there. We do this by using the learned estimate of the dynamics parameters in lieu of the ground truth and running forward inference through the trained model. This scheme is demonstrated in Algorithm 2. Although our approach learns a very good zero-shot initialization in the test environment, we also show comparisons in Section IV with other models that require on-policy fine-tuning in the test environment. Fine-tuning corresponds to updating the parameters of the policy architecture while executing in the test environment. For MANGA, to achieve a good zero-shot initialization, the only execution needed in the test environment is running a random policy (or some external trained policy) to collect state-transition data for the trained dynamics estimation module.

1:procedure Test(TrainedParams)
2:     Initialize the model with TrainedParams in the test environment
3:     Observe off-policy state-transition data
4:     Estimate the dynamics parameters
5:     Execute the policy from a given initial state with the trained model
Algorithm 2 Test Time Inference

III-F Adapting to variations in motor noise

In this section we discuss a scheme to make our model robust to motor noise, which is an important consideration for real robotic tasks [7]. We interpret the addition of motor noise as a form of domain randomization, and consider that in reality we have some specific state-dependent deviation. The effect of motor noise is the same as adding a disturbance δ_t to the output action of our policy model. In order to infer a model for the disturbance δ_t, we assume it to be a function of the current state s_t weighted by an environment-dependent parameter w. Hence, δ_t = g(s_t; w), where g is a non-linear mapping, specifically a feed-forward neural network whose parameters have been randomly assigned and fixed (similar random networks have been used for exploration and uncertainty estimation in RL [6]). When w is randomly set, with a large enough output dimension of g, during the training of the policy, the training scheme under this motor noise is similar to a form of domain randomization. However, we actively identify the perturbation this causes in an environment E through the estimation of w.

Let the originally predicted action at time-step t be a_t. Because of the motor noise, the action that is fed to the simulator becomes a_t + κ δ_t, where κ is a scalar multiplier to the noise. Since w is an environment-dependent parameter just like μ, we estimate the concatenated vector [μ; w] through the scheme described in Section III-D for estimating μ.
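The motor-noise model can be sketched as follows; the fixed random matrix `G`, the elementwise weighting by `w`, and all dimensions are illustrative assumptions rather than the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, act_dim = 6, 4
G = rng.standard_normal((state_dim, act_dim))  # fixed random network g: never trained

def disturbance(s_t, w):
    # Environment-dependent weights w scale a fixed random feature of the state.
    return w * np.tanh(s_t @ G)

def noisy_action(a_t, s_t, w, kappa):
    # Action actually executed by the simulator under motor noise: a_t + kappa * delta_t.
    return a_t + kappa * disturbance(s_t, w)

s = rng.standard_normal(state_dim)
a = np.zeros(act_dim)
executed = noisy_action(a, s, w=np.ones(act_dim), kappa=0.1)
```

Because `G` is fixed, the disturbance in a given environment is fully determined by w (and κ), which is why estimating [μ; w] suffices to predict the perturbation.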

III-G State reconstruction and ignoring nuisance correlates

Since we are training policies adaptable to variations in the environment, we need to ensure that our agent’s policy does not unfairly correlate its actions with state changes, such as changes in brightness, direction of light, or location of shadows, that do not occur as a result of the policy. Previous works like [16] do not consider this issue; we argue that it is important precisely because we consider randomized environments. To avoid learning such nuisance correlates, we enforce a regularization based on an inverse dynamics model, which was previously used in the Intrinsic Curiosity Module of [15]. Let z_t be the latent state at time-step t, and let the inverse dynamics model predict the action â_t from the pair (z_t, z_{t+1}). The loss function penalizes the discrepancy between the predicted action â_t and the executed action a_t.

In addition to the regularization via an inverse dynamics model, we also enforce input-state reconstruction from the learned latent representation. This is important because we do not want our policy to be conditioned on a latent state that can never be reached from the observation space. Thus we learn a reconstruction ŝ_t of the input state s_t from z_t, with a loss that penalizes the reconstruction error.
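The two regularization losses can be sketched with linear stand-ins for the inverse dynamics model and the state decoder (everything here is illustrative; the paper realizes these components as feed-forward networks):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear stand-ins for the inverse dynamics model and state decoder.
W_inv = rng.standard_normal((8, 2)) * 0.1  # maps [z_t; z_{t+1}] -> predicted action
W_dec = rng.standard_normal((4, 6)) * 0.1  # maps z_t -> reconstructed state

def regularization_losses(z_t, z_t1, a_t, s_t):
    a_hat = np.concatenate([z_t, z_t1]) @ W_inv  # inverse dynamics prediction
    s_hat = z_t @ W_dec                           # state reconstruction
    l_inv = np.mean((a_hat - a_t) ** 2)           # ignore nuisance correlates
    l_rec = np.mean((s_hat - s_t) ** 2)           # keep the latent grounded in observations
    return l_inv, l_rec

l_inv, l_rec = regularization_losses(rng.standard_normal(4), rng.standard_normal(4),
                                     np.zeros(2), np.zeros(6))
```

Both terms are added to the policy training objective so the latent space both explains the executed actions and remains reconstructable.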

IV Experiments

Through a series of experiments, we demonstrate the necessity of the different components of the proposed MANGA approach (ablation study) and compare against external baselines for adaptation to different dynamics at test time. We also experimented to see how adaptive MANGA is to changes in the range of dynamics parameter variations and how it adapts to motor noise variations at test time.

Fig. 2: Ablation study and comparison. All models are trained for 200,000 episodes and fine-tuned for 100 episodes in the test environment for effective comparison with MAML. Results are averaged over 1,000 episodes of execution in the unseen test environment, with dynamics parameters perturbed within a fixed range of the base values. NoReg is vanilla MANGA without any additional regularization. OnlyI is MANGA sans the state-reconstruction regularization. OnlyS is MANGA sans the inverse dynamics regularization. LSTM is the DR baseline trained with an LSTM policy architecture to implicitly estimate the dynamics parameters during policy learning [1]. FF is the DR baseline corresponding to MANGA without any regularization and with no dynamics estimation.
Fig. 3: Training with different ranges of system parameters for 200,000 episodes. The evaluation is on a randomly chosen previously unseen test environment within the same respective range. For effective comparison with MAML, all the models are updated in the test environment for the same number of episodes (100) as MAML. Higher reward is better.
Fig. 4: Plot of training with rollouts in the unseen test environment, with dynamics parameters perturbed within a fixed range of the base values. MANGA is the model trained by the proposed approach for 200,000 episodes; 200 episodes of the random policy are used to estimate the dynamics parameters of the test environment. DR is the Domain Randomization baseline (FF) corresponding to our model. Oracle is the version of our model that is directly trained in the test environment and has access to its true dynamics parameters.

IV-A MuJoCo Environments (OpenAI Gym)

We consider three different MuJoCo environments [23] of varying complexity - Humanoid-v2, HalfCheetah-v2, and Hopper-v2 - where the task in each environment is to move the agent as fast as possible without toppling over [5]. For consistency in comparison with external baselines, we use the default reward setting for each environment as specified in [5] and alter the following dynamics variables for evaluation: the mass and inertia of the agent, the gravitational acceleration, the friction coefficient between the agent and the environment, the stiffness coefficient of the joints, and the damping coefficient.

Each dynamics variable in MuJoCo is, in general, a vector of some dimension (for example, the mass variable consists of the masses of the different parts of the Half-Cheetah body). We consider the full dynamics parameter vector to be the linearized concatenation of all these variables. During training, we randomize this vector such that each component is perturbed within a specified range around its base value, and we perform experiments with several different maximum perturbation ranges.

IV-B Training details

Although our proposed approach is method-agnostic, and L-algo in Algorithm 1 can be any RL or IL algorithm, for our specific implementation we used the Proximal Policy Optimization (PPO) algorithm [19]. We used the SGD optimizer [3] for optimization and the PyTorch library [14] in Python for the implementation. For training the dynamics estimator in Fig. 1, we found that a fixed temporal chunk length (for each elemental dynamics estimator) performed well. All the functions described in Fig. 1 and Section III are realized by feed-forward neural networks. Other details, including the baselines, are described in the subsequent sections.

IV-C Ablation study

We postulate that the auxiliary modules, namely the inverse dynamics model and the state-reconstruction decoder, are needed to learn a good latent space for effective transfer. MANGA refers to the proposed approach with all components present. For reference, we compare MANGA with an Oracle: an agent of the same architecture as MANGA that has access to the ground-truth system parameters and is trained from scratch directly in the test environment. In Fig. 2, we show results obtained by selectively ablating different components of the proposed model when the dynamics parameters are perturbed within a fixed range of the base values. There is a clear drop in performance in the test environment when we remove either or both of the auxiliary modules. Interestingly, removing the inverse dynamics model causes a very sharp decrease in performance across all three MuJoCo domains. Hence, ignoring nuisance correlates between states and actions is important for quick and effective transfer.

IV-D Comparisons with existing methods in the literature

We compare the performance of MANGA with existing approaches in a new environment whose dynamics parameters are perturbed within a fixed range of the base values. Note that the perturbation is large enough to cause a significant performance drop when a version of MANGA trained only in the base environment is tested in the randomized test environment without any adaptation, for Half-Cheetah, Hopper, and Humanoid alike. The results are in Fig. 2.

We consider two external baselines, namely Domain Randomization (DR) and meta-learning. For DR, we followed the implementation of the state-of-the-art dynamics randomization paper [16] with two variants, LSTM and FF. LSTM is the variant that uses an LSTM policy and value architecture while implicitly identifying the system dynamics parameters during policy learning [1]. FF uses the same feed-forward policy architecture as our MANGA model, without any system parameter identification. Since the LSTM variant is computationally very expensive and takes a long time to train, we perform only one type of comparison against it. For meta-learning, we implemented No-Reward MAML [27], which performs significantly better than vanilla MAML [8] in the scenario of transfer across different environment dynamics. To ensure fair comparison, all models were trained for the same number of episodes, executed for the same maximum number of time-steps per episode, and optimized with the same optimizer across all experiments.

IV-E Analysis with different randomization ranges

The extent to which we need to randomize the dynamics parameters during training depends on how different the test environment is likely to be from the default setting. We experimented with test environments at several different maximum ranges of dynamics parameter variation from the default values. In each case, the range of environment randomization during training was the same as the corresponding test range.

As evident from Fig. 3, the performance of all the compared models decreases as the range of parameter variations increases. However, the drop in performance of MANGA is the smallest among the compared methods. We attribute this favourable behavior primarily to the fact that we have separated the processes of system parameter identification and policy learning with regularization. Hence, the latent space learned for conditioning the policy is not negatively affected by the training of the system parameter identification module.

IV-F Quick Adaptation: Rollouts in the test environment

Although most approaches for policy transfer [27, 30, 32] need rollouts in the test environment for reasonably good transfer, our proposed approach obtains a good policy zero-shot by estimating the dynamics parameters from random off-policy state-transition data, as shown by the reward at episode 0 of the plots in Fig. 4. Furthermore, we observe that if allowed to update model parameters in the test environment (i.e., fine-tuning), MANGA quickly converges and achieves reward equivalent to the Oracle within only a few hundred episodes.

IV-G Evaluation of performance in the presence of Motor Noise

We consider two variants of MANGA here: MANGA-Noise and MANGA-NoNoise. MANGA-Noise has been trained with a random value of the noise weight w for each randomized environment and a fixed noise network g (i.e., the weights of the random network are fixed) during training. We learn a model for estimating the value of w along with the dynamics parameters, as described in Sections III-D and III-F. At test time we consider two situations of motor noise: known noise and unknown noise. Known noise corresponds to the case where the value of the noise multiplier at test time is the same as during training, while unknown noise corresponds to the case where it differs from that during training.

It is evident from Fig. 5 that MANGA-Noise effectively estimates the weight vectors and achieves much higher reward than MANGA-NoNoise in the presence of noise in the test environment. This suggests the effectiveness of the noise estimation technique described in Section III F.

Fig. 5: Evaluation of variants of MANGA in the presence of motor noise in the unseen test environment, with dynamics parameters perturbed within a fixed range of the base values, after 200,000 episodes of training. MANGA-Noise corresponds to the case where motor noise is present in training and its weight is inferred as described in Section III-F. MANGA-NoNoise corresponds to the case where motor noise is present during training but no encoding of the noise weight is input to the latent, and the weight for the test environment is not inferred. κ denotes the magnitude of the noise multiplier. The top row corresponds to the scenario of known noise (κ the same as in training). The bottom row corresponds to the scenario of unknown noise (κ randomly chosen to be different from training).

V Conclusion

In this paper, we introduced a general framework for policy transfer that decouples policy learning from system identification, is agnostic to the algorithm used for training, and can quickly adapt at test time to an environment with variations in dynamics and motor noise. We compared the proposed approach with existing policy-transfer algorithms and demonstrated its efficacy in terms of robustness to a wide range of dynamics variations, robustness to variation in motor noise, quick adaptation to a test environment, and learning of a transferable latent space for policy conditioning.

VI Acknowledgement

We would like to acknowledge the support of Crissman Loomis-san, Takashi Abe-san, Masanori Koyama-san, Yasuhiro Fujita-san, and many other colleagues at Preferred Networks Tokyo who helped shape this work through research discussions during Homanga Bharadhwaj’s internship. We also thank Florian Shkurti (University of Toronto) for his valuable feedback on the draft and his help in editing it.


  • [1] M. Andrychowicz, B. Baker, M. Chociej, R. Jozefowicz, B. McGrew, J. Pachocki, A. Petron, M. Plappert, G. Powell, A. Ray, et al. (2018) Learning dexterous in-hand manipulation. arXiv preprint arXiv:1808.00177. Cited by: §II, Fig. 2, §IV-D.
  • [2] H. Bharadhwaj, Z. Wang, Y. Bengio, and L. Paull (2019-05) A data-efficient framework for training and sim-to-real transfer of navigation policies. In 2019 International Conference on Robotics and Automation (ICRA), Vol. , pp. 782–788. External Links: Document, ISSN 2577-087X Cited by: §I, §II.
  • [3] L. Bottou (2010) Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT’2010, pp. 177–186. Cited by: §IV-B.
  • [4] K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, and D. Krishnan (2017) Unsupervised pixel-level domain adaptation with generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3722–3731. Cited by: §II.
  • [5] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba (2016) Openai gym. arXiv preprint arXiv:1606.01540. Cited by: §IV-A.
  • [6] Y. Burda, H. Edwards, A. J. Storkey, and O. Klimov (2018) Exploration by random network distillation. CoRR abs/1810.12894. External Links: Link, 1810.12894 Cited by: §III-F.
  • [7] P. Christiano, Z. Shah, I. Mordatch, J. Schneider, T. Blackwell, J. Tobin, P. Abbeel, and W. Zaremba (2016) Transfer from simulation to real world through learning deep inverse dynamics model. arXiv preprint arXiv:1610.03518. Cited by: §I, §III-B, §III-F.
  • [8] C. Finn, P. Abbeel, and S. Levine (2017) Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1126–1135. Cited by: §II, §IV-D.
  • [9] C. Finn, K. Xu, and S. Levine (2018) Probabilistic model-agnostic meta-learning. In Advances in Neural Information Processing Systems, pp. 9516–9527. Cited by: §II.
  • [10] D. Gordon, A. Kadian, D. Parikh, J. Hoffman, and D. Batra (2019) SplitNet: sim2sim and task2task transfer for embodied visual navigation. In International Conference in Computer Vision (ICCV), Cited by: §I, §II.
  • [11] S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural computation 9 (8), pp. 1735–1780. Cited by: §II.
  • [12] D. P. Kingma and M. Welling (2013) Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Cited by: §III-D, §III-D.
  • [13] A. Pashevich, R. A. Strudel, I. Kalevatykh, I. Laptev, and C. Schmid (2019) Learning to augment synthetic images for sim2real policy transfer. arXiv preprint arXiv:1903.07740. Cited by: §II.
  • [14] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in pytorch. Cited by: §IV-B.
  • [15] D. Pathak, P. Agrawal, A. A. Efros, and T. Darrell (2017) Curiosity-driven exploration by self-supervised prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 16–17. Cited by: §III-G.
  • [16] X. B. Peng, M. Andrychowicz, W. Zaremba, and P. Abbeel (2018) Sim-to-real transfer of robotic control with dynamics randomization. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1–8. Cited by: §I, §II, §III-B, §III-B, §III-G, §IV-D.
  • [17] S. M. Richards, F. Berkenkamp, and A. Krause (2018) The Lyapunov neural network: adaptive stability certification for safe learning of dynamical systems. In Conference on Robot Learning, pp. 466–476. Cited by: §II.
  • [18] C. Richter and N. Roy. Safe visual navigation via deep learning and novelty detection. Cited by: §II.
  • [19] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov (2017) Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347. Cited by: §IV-B.
  • [20] J. Snell, K. Swersky, and R. Zemel (2017) Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pp. 4077–4087. Cited by: §II.
  • [21] A. Srinivas, A. Jabri, P. Abbeel, S. Levine, and C. Finn (2018) Universal planning networks: learning generalizable representations for visuomotor control. In International Conference on Machine Learning, pp. 4739–4748. Cited by: §II.
  • [22] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel (2017) Domain randomization for transferring deep neural networks from simulation to the real world. CoRR abs/1703.06907. External Links: Link, 1703.06907 Cited by: §I, §II.
  • [23] E. Todorov, T. Erez, and Y. Tassa (2012) Mujoco: a physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033. Cited by: §IV-A.
  • [24] E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell (2017) Adversarial discriminative domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7167–7176. Cited by: §II.
  • [25] F. Wang, B. Zhou, K. Chen, T. Fan, X. Zhang, J. Li, H. Tian, and J. Pan (2018) Intervention aided reinforcement learning for safe and practical policy optimization in navigation. In Conference on Robot Learning, pp. 410–421. Cited by: §II.
  • [26] S. Watanabe (2009) Algebraic geometry and statistical learning theory. Cambridge University Press. Cited by: §III-D.
  • [27] Y. Yang, K. Caluwaerts, A. Iscen, J. Tan, and C. Finn (2019) NoRML: no-reward meta learning. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems AAMAS, pp. 323–331. Cited by: §I, §II, §III-D, §III-D, §IV-D, §IV-F.
  • [28] J. Yoon, T. Kim, O. Dia, S. Kim, Y. Bengio, and S. Ahn (2018) Bayesian model-agnostic meta-learning. In Advances in Neural Information Processing Systems, pp. 7332–7342. Cited by: §II.
  • [29] W. Yu, V. C. V. Kumar, G. Turk, and C. K. Liu (2019) Sim-to-real transfer for biped locomotion. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, Cited by: §I, §II.
  • [30] W. Yu, C. K. Liu, and G. Turk (2019) Policy transfer with strategy optimization. In International Conference on Learning Representations (ICLR), External Links: Link Cited by: §I, §II, §IV-F.
  • [31] W. Yu, J. Tan, C. K. Liu, and G. Turk (2017) Preparing for the unknown: learning a universal policy with online system identification. arXiv preprint arXiv:1702.02453. Cited by: §II.
  • [32] A. Zhang, H. Satija, and J. Pineau (2018) Decoupling dynamics and reward for transfer learning. arXiv preprint arXiv:1804.10689. Cited by: §I, §II, §IV-F.
  • [33] F. Zhang, J. Leitner, Z. Ge, M. Milford, and P. Corke (2017) Adversarial discriminative sim-to-real transfer of visuo-motor policies. arXiv preprint arXiv:1709.05746. Cited by: §II.
  • [34] F. Zhu, L. Zhu, and Y. Yang (2019) Sim-real joint reinforcement transfer for 3d indoor navigation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 11388–11397. Cited by: §II.
  • [35] L. M. Zintgraf, K. Shiarlis, V. Kurin, K. Hofmann, and S. Whiteson (2018) Caml: fast context adaptation via meta-learning. arXiv preprint arXiv:1810.03642. Cited by: §II.