One of the primary appeals of reinforcement learning (RL) is that it provides a framework for the autonomous learning of complex behaviours without the need for human supervision. In recent years RL has had significant success in areas such as playing video games atari2013 ; agent57 , board games alphago ; alphazero and robotic control tasks MPPO ; DAPG ; dreamer . Despite this, progress in applying RL to more practically useful environments has been somewhat limited. One of the main problems is that RL algorithms generally require a well-shaped, dense reward function in order to make learning progress. Often a reward function that fully captures the desired behaviour of an agent is not readily available and has to be engineered manually for each task, requiring a lot of time and domain-specific knowledge. This defeats the purpose of designing an agent that is capable of learning autonomously. A more general approach is to learn with sparse rewards, where an agent only receives a reward once a task has been completed. This is much easier to specify and is applicable to a wide range of problems; however, training becomes significantly more challenging since the agent only receives infrequent feedback at the end of every rollout. This becomes especially challenging in the case of goal-conditioned RL HER ; nair2018visual , where the aim is to train a policy that can achieve a variety of different goals within the environment.
Much of RL’s success has come with model-free approaches, where the policy is learned directly from the reward signal obtained by interacting with the environment. However, recently there has been a lot of interest in applying model-based approaches to the same kind of problems dreamer ; muzero ; simple . One of the main drawbacks of model-free RL algorithms is that they tend to be very sample inefficient, requiring a huge number of interactions with the environment in order to make learning progress. On the other hand, model-based methods make use of a learned model to plan their actions without directly interacting with the environment. Learning a model allows these methods to exploit much more of the information present in the observed transitions than just the scalar reward signal, which generally leads to a significant improvement in sample efficiency. This efficiency can sometimes come at the cost of worse asymptotic performance due to errors in the model introducing a bias towards non-optimal actions, although current state-of-the-art approaches dreamer ; muzero are able to achieve comparable performance to some of the best model-free approaches d4pg ; curl . However, as with most RL algorithms, model-based approaches generally need a dense reward signal to work well. We are not aware of a model-based approach specifically designed to work in the sparse-reward, multi-goal setting.
To date, the most successful general-purpose RL algorithm for dealing with sparse rewards and multiple goals is Hindsight Experience Replay (HER) HER , a model-free algorithm. HER works by taking advantage of the fact that, when learning a goal-conditioned policy with an off-policy RL algorithm, observed transitions from a trajectory can be re-used as examples for attempting to achieve any goal. In particular, by re-labelling transitions with goals achieved at a later point during the same trajectory HER trains the goal-conditioned policy on examples that actually led to success — hence obtaining a much stronger learning signal.
In this paper we present PlanGAN, a model-based algorithm that can naturally be applied to sparse-reward environments with multiple goals. The core of our method builds upon the same principle that underlies HER — namely that any goal observed during a given trajectory can be used as an example of how to achieve that goal from states that occurred earlier on in that same trajectory. However, unlike HER, we do not directly learn a goal-conditioned policy/value function but rather train an ensemble of Generative Adversarial Networks (GANs) GANS which learn to generate plausible future trajectories conditioned on achieving a particular goal. We combine these imagined trajectories into a novel planning algorithm that can reach those goals in an efficient manner.
We test PlanGAN on a number of robotic manipulation and navigation tasks and show that it can achieve similar levels of performance to leading model-free methods (including Hindsight Experience Replay) but with substantially improved sample efficiency. The primary contribution of this paper is to introduce the first model-based method which is explicitly designed for multi-goal, sparse reward environments, leading to a significant improvement in sample efficiency.
2 Related Work
A number of model-based approaches have utilised explicit planning algorithms, but have mostly been applied to single tasks with relatively dense rewards. Nagabandi et al. nn_model use iterative random shooting within a deterministic neural network dynamics model in order to solve a number of continuous control tasks. Hafner et al. hafner2019planet learn a latent representation from images and then plan within this latent space using the cross-entropy method (CEM). Nagabandi et al. PDDM use a similar planning algorithm (MPPI) mppioriginal within an ensemble of learned models in order to perform dexterous manipulation tasks.
Other methods have had success with a hybrid approach, combining elements of model-based and model-free RL, and, as in this work, often use ensembles of models in order to improve robustness. STEVE STEVE uses rollouts produced by an ensemble of models and Q-functions in order to obtain a robust estimate of the Q-learning target. Model-Ensemble TRPO modelensembleTRPO uses an ensemble of models as a simulator for running a model-free RL algorithm (trust-region policy optimisation) whilst maintaining some level of uncertainty about when the models’ predictions are valid. I2A I2A learns to interpret imagined trajectories generated by a model to augment the model-free training of a policy/value function. Temporal Difference Models (TDMs) TDMs try to link model-based and model-free RL in the context of time-dependent, goal-conditioned value functions. Here, the model is itself the goal-conditioned value function, and is learned with model-free, off-policy RL. However, TDMs require a meaningful distance metric between states to be defined and so do not work with fully sparse rewards. Nasiriany et al. nasiriany2019planning combine TDMs as an implicit model with a planning algorithm that allows them to plan over multiple abstract sub-goals. They apply this to solve long-horizon, goal-conditioned tasks directly from images.
Azizzadenesheli et al. GATS use a Wasserstein GAN with spectral normalisation to learn a predictive model that they use with Monte-Carlo Tree Search to solve Atari games. Although they do not find particularly strong results overall, they show that they are able to learn an extremely accurate model with stable training of the GAN even in a non-stationary environment. A significant difference from our work is that they train a GAN that takes an action and a state and predicts the next state, whereas we train the GANs to imagine full trajectories (also, their focus is on image-based environments). GANs have also been used for curriculum learning in goal-conditioned RL goalgan , where a generator was trained to propose goals at an appropriate level of difficulty for the current agent to achieve.
In terms of learning with sparse rewards, a number of approaches have had success by providing the agent with intrinsic rewards in order to aid with exploration pathak_curiosity ; RND ; countexploration . However, in the multi-goal setting the majority of the most successful approaches have built upon Hindsight Experience Replay (HER) HER . Zhao & Tresp energyHER improve HER’s performance on certain robotics environments by more frequently resampling trajectories where the objects have higher energy. Fang et al. curriculumHER propose an adaptive mechanism to select failed experiences based on a combination of the diversity of the achieved goals and their proximity to the desired goals. Liu et al. CER propose a complementary re-labelling scheme in the context of a competitive exploration game between two agents in order to supplement HER. He et al. SHER introduce a method that combines HER with maximum entropy RL.
Taking a different approach (but still closely related to HER), Ghosh et al. noRLGC introduce a method that learns goal-conditioned policies without explicitly using reinforcement learning. They use supervised behavioural cloning (a form of imitation learning) to train a policy to reach the goals that have been observed on the trajectories the agent itself has generated. Whilst simpler than HER, this approach does not use a model and does not claim to significantly improve upon HER’s sample efficiency.
3.1 Goal-Conditioned Reinforcement Learning
We consider the problem of an agent interacting within an environment in order to learn how to achieve any given goal $g$ from a set of possible goals $\mathcal{G}$. We assume that the environment is fully observable and can be described by: a set of states, $\mathcal{S}$; a set of possible actions, $\mathcal{A}$; a distribution of initial states, $p(s_0)$; and a transition function, $P(s_{t+1} \mid s_t, a_t)$ ($s_t, s_{t+1} \in \mathcal{S}$, $a_t \in \mathcal{A}$). In the standard reinforcement learning setting we have a reward function, $r(s_t, a_t, s_{t+1})$. In the goal-conditioned setting the reward also depends on the goal that the agent is trying to achieve, i.e. $r(s_t, a_t, s_{t+1}, g)$. Assuming that goals are sampled from some distribution $p(\mathcal{G})$, the aim of goal-conditioned RL is to learn a policy, $\pi(a_t \mid s_t, g)$, that maximises the expected discounted sum of future rewards:
$$\mathbb{E}_{g \sim p(\mathcal{G}),\, s_0 \sim p(s_0),\, a_t \sim \pi(\cdot \mid s_t, g),\, s_{t+1} \sim P(\cdot \mid s_t, a_t)}\left[\sum_{t=0}^{\infty} \gamma^t\, r(s_t, a_t, s_{t+1}, g)\right],$$
where $\gamma \in [0, 1)$ is a discount factor assigning larger weights to more immediate rewards. We consider the special case where the reward function is sparse and given by an indicator function that only depends on the next state and the goal:
$$r(s_t, a_t, s_{t+1}, g) = \mathbb{1}\left[s_{t+1} \text{ achieves } g\right],$$
i.e. we have some criterion that tells us whether any given state $s$ achieves any given goal $g$, and we provide a reward only when this criterion is satisfied.
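Concretely, a sparse goal-conditioned reward of this form can be written as a small indicator function. The names `achieved_goal_fn` and the tolerance `tol` below are illustrative assumptions; in a pushing task, for instance, the achieved goal could be the object's position extracted from the state:

```python
import numpy as np

def sparse_reward(next_state, goal, achieved_goal_fn, tol=0.05):
    """Indicator reward: 1 only if the goal achieved in next_state matches g."""
    achieved = achieved_goal_fn(next_state)
    return 1.0 if np.linalg.norm(achieved - goal) < tol else 0.0

# Hypothetical example: the first 3 state dimensions hold the object position.
achieved_goal_fn = lambda s: s[:3]
s_next = np.array([0.1, 0.2, 0.0, 0.5])
assert sparse_reward(s_next, np.array([0.1, 0.2, 0.0]), achieved_goal_fn) == 1.0
assert sparse_reward(s_next, np.array([0.9, 0.2, 0.0]), achieved_goal_fn) == 0.0
```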
3.2 Hindsight Experience Replay (HER)
In complex environments it is extremely unlikely that the specified goal will ever be achieved by chance. As such, standard RL algorithms struggle in sparse-reward, multi-goal environments because they receive very little learning signal from which they can improve their policy. The key insight of HER is that trajectories that don’t achieve the specified goal still contain useful information about how to achieve other goals — namely those that are observed later on during the same trajectory. By using an off-policy RL algorithm such as DQN DQN or DDPG ddpg it is possible to re-label samples that were collected by the policy whilst attempting to achieve a goal $g$ with an alternative goal $g'$, and subsequently re-compute the reward. For example, if $(s_t, a_t, r_t, s_{t+1}, g)$ is sampled from a replay buffer of past experience, $g$ can be replaced with another goal $g'$ that occurs later in the trajectory, and then a reward for this new goal can be recomputed: $r'_t = r(s_t, a_t, s_{t+1}, g')$. This new transition can still be used in training an off-policy RL algorithm since the original goal only influences the agent’s action, but not the dynamics of the environment. By re-labelling transitions this way HER can significantly speed up the learning of a goal-conditioned policy since it increases the frequency with which the transitions seen in training actually lead to the specified goals being achieved.
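The re-labelling step can be sketched as a short function. The name `her_relabel` and the "future" goal-selection strategy shown here are a simplified illustration under stated assumptions, not the exact HER implementation:

```python
import random

def her_relabel(trajectory, reward_fn, achieved_goal_fn):
    """Re-label each transition (s, a, s_next, g) with a goal achieved at a
    later step of the same trajectory and recompute the sparse reward."""
    relabelled = []
    for t, (s, a, s_next, g) in enumerate(trajectory):
        future = random.randint(t, len(trajectory) - 1)  # 'future' strategy
        g_new = achieved_goal_fn(trajectory[future][2])  # goal achieved later on
        r_new = reward_fn(s_next, g_new)
        relabelled.append((s, a, r_new, s_next, g_new))
    return relabelled

# Toy 1-D example: states are positions, the achieved goal is the state itself.
traj = [(0, +1, 1, 9), (1, +1, 2, 9), (2, +1, 3, 9)]  # original goal 9 never reached
out = her_relabel(traj, lambda s_next, g: float(s_next == g), lambda s: s)
assert out[-1][2] == 1.0  # last transition relabelled with its own achieved goal
```

Even though the original goal (9) is never reached, the re-labelled transitions now include successful examples, which is exactly the stronger learning signal HER exploits.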
The key insight of our method is that the same principle underlying HER — i.e. that any observed trajectory contains useful information about how to achieve the goals observed during that trajectory — has the potential to be used more efficiently as part of a model-based algorithm. In particular, instead of re-labelling transitions and re-computing rewards, we propose to make more complete use of the information contained within the observed transitions by training a generative model that can generate plausible transitions leading from the current state towards a desired goal. That is, we use experience gathered by the agent to train a goal-conditioned model that can generate future trajectories (states and actions) that move the agent towards any goal that we specify. These imagined trajectories do not necessarily need to be optimal in the sense of moving directly towards the goal, since the second key component of our method involves feeding these proposed trajectories into a planning algorithm that decides which action to take in order to achieve the goal in as few steps as possible.
Whilst in principle a number of generative models could be used for this purpose, in this work we use GANs GANS , since they can easily deal with high-dimensional inputs and do not explicitly impose any restrictions on the form of the distribution produced by the generator. Specifically, we use Wasserstein GANs (WGANs) wgan with spectral normalisation spectralnorm , as recent work has shown that these can be trained in a stable manner even when the underlying training data is non-stationary GATS .
4.1 Training the GAN(s)
The aim of the first major component of our method is to train a generative model that can take in the current state $s_t$ along with a desired goal $g$ and produce an imagined action $a_t$ and next state $s_{t+1}$ that move the agent towards achieving $g$. We approach this by training an ensemble of $N$ conditional GANs, each consisting of a generator $G_{\theta_i}$ and a discriminator $D_{\phi_i}$, where $\theta_i$, $\phi_i$ ($i = 1, \ldots, N$) are the parameters of the neural networks that represent these functions. The generators take in the current state $s_t$, a noise vector $z$ and the target goal $g$ in order to produce an imagined action $a_t$ and next state $s_{t+1}$. The discriminators take in $s_t$, $a_t$, $s_{t+1}$ and $g$ and aim to distinguish whether or not this is a transition from a real trajectory that eventually reaches goal $g$ or an example created by the generator.
We also consider a variation where we concurrently train an ensemble of deterministic one-step predictive models of the environment. The aim of these predictive models is to take a state-action pair $(s_t, a_t)$ and predict the difference between the next state and the current state, $s_{t+1} - s_t$, as in nn_model . We denote these models as $f_{\psi_j}$, where $\psi_j$ are the parameters of the neural networks representing these functions. These predictive models can be used to provide an L2 regularisation term in the generator loss that encourages the generated actions and next states to be consistent with the predictions of the one-step models — although this is not necessary to make the method work (we study the effect of using predictive models this way in Section 5). The whole setup is shown schematically in Figure 1.
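A minimal sketch of such a one-step model and its ensemble-averaged prediction, with a random linear map standing in for the neural network (the class and function names are illustrative assumptions):

```python
import numpy as np

class OneStepModel:
    """Sketch of a one-step model predicting s_{t+1} - s_t from (s_t, a_t).
    A random linear map stands in for the neural network."""
    def __init__(self, state_dim, action_dim, rng):
        self.W = rng.normal(scale=0.01, size=(state_dim, state_dim + action_dim))

    def predict_next(self, s, a):
        delta = self.W @ np.concatenate([s, a])  # predicted state difference
        return s + delta                         # add the difference back to s_t

def ensemble_next_state(models, s, a):
    """Average next-state prediction over the ensemble, as used in the
    regularisation term of the generator loss."""
    return np.mean([m.predict_next(s, a) for m in models], axis=0)
```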
The loss for the generator is as follows:
$$\mathcal{L}_{G_i} = \mathbb{E}_{(s_t, g) \sim \mathcal{B},\, z \sim \mathcal{N}(0, I)}\left[-D_{\phi_i}\!\left(s_t, \hat{a}_t, \hat{s}_{t+1}, g\right) + \lambda \left\| \hat{s}_{t+1} - \left(s_t + \bar{f}_{\psi}(s_t, \hat{a}_t)\right) \right\|_2^2 \right],$$
where $(\hat{a}_t, \hat{s}_{t+1}) = G_{\theta_i}(s_t, z, g)$, $\mathcal{B}$ is a replay buffer of real experienced trajectories, $z$ is a noise vector where each component is sampled independently from the standard normal, $\bar{f}_{\psi}$ denotes the average prediction of the one-step models, and $\lambda$ is a parameter that weights how strongly we penalise deviations in the generated action/next state from the average predictions made by the one-step models. The loss for the discriminator is:
$$\mathcal{L}_{D_i} = \mathbb{E}_{(s_t, g) \sim \mathcal{B},\, z \sim \mathcal{N}(0, I)}\left[D_{\phi_i}\!\left(s_t, \hat{a}_t, \hat{s}_{t+1}, g\right)\right] - \mathbb{E}_{(s_t, a_t, s_{t+1}, g) \sim \mathcal{B}}\left[D_{\phi_i}\!\left(s_t, a_t, s_{t+1}, g\right)\right].$$
The replay buffer $\mathcal{B}$ is populated initially by random trajectories; however, we find it helpful to filter (i.e. not store) trajectories where the final achieved goal is identical to the initial achieved goal, since these provide nothing useful for the GANs to learn from. After some initial training, further trajectories generated by the planner (described in the next section) are also added to $\mathcal{B}$ whilst training continues, allowing for continuous, open-ended improvement. Note that this makes the data distribution we are trying to emulate non-stationary as new self-collected data is constantly being added. The sampled goals from the replay buffer are always taken as goals achieved at a randomly chosen time step that occurs later within the same trajectory.
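Putting these pieces together, the generator objective (WGAN term plus the optional one-step consistency penalty weighted by the parameter described above) might be sketched as follows; the function name and array shapes are illustrative assumptions rather than the exact training code:

```python
import numpy as np

def generator_loss(critic_scores, gen_next_states, onestep_next_states, lam=0.1):
    """WGAN generator term (drive the critic's score on generated transitions
    up) plus an L2 penalty pulling generated next states towards the one-step
    models' averaged predictions. A sketch, not the exact implementation."""
    wgan_term = -np.mean(critic_scores)  # WGAN: generator maximises the critic score
    reg_term = lam * np.mean(np.sum((gen_next_states - onestep_next_states) ** 2, axis=-1))
    return wgan_term + reg_term
```

Setting `lam=0` recovers the variant without one-step regularisation studied in the ablations.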
The basic building block is a generator that takes a state, goal and noise vector and produces an action and next state. However, during training we actually generate trajectories consisting of $\tau$ time steps. That is, we take the generated state from the previous step and use this as input to the generator to produce a new action/next state pair, and repeat. The generator is then trained on these trajectories end-to-end. In more detail, we sample batches of real trajectories made up of $\tau$ transitions from the buffer: $(s_t, a_t, s_{t+1}, a_{t+1}, \ldots, s_{t+\tau}, g)$, where each goal $g$ is an achieved goal at a later time along that same trajectory. We then use the generator to generate a trajectory $(s_t, \hat{a}_t, \hat{s}_{t+1}, \hat{a}_{t+1}, \ldots, \hat{s}_{t+\tau})$, where $(\hat{a}_{t+k}, \hat{s}_{t+k+1}) = G_{\theta_i}(\hat{s}_{t+k}, z_k, g)$ (with $\hat{s}_t = s_t$). Batches of these real and imagined trajectories are then used to calculate the expectations in the losses shown in Equations 3 and 4. Training end-to-end on sequences of transitions imposes more constraints on the generator, requiring full trajectories to be difficult for the discriminator to distinguish rather than just individual transitions, and is crucial for good performance.
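The rollout used during this end-to-end training can be sketched as a simple loop that feeds each generated state back into the generator; the `generator` callable here is a stand-in for one of the trained generators:

```python
import numpy as np

def generate_trajectory(generator, s0, goal, tau, rng, noise_dim=4):
    """Roll a generator forward tau steps, feeding each generated next state
    back in, so that the imagined trajectory can be trained end-to-end."""
    states, actions = [s0], []
    s = s0
    for _ in range(tau):
        z = rng.normal(size=noise_dim)  # fresh noise vector at each step
        a, s = generator(s, z, goal)    # generator returns (action, next state)
        actions.append(a)
        states.append(s)
    return states, actions

# Toy deterministic generator: each step moves the (scalar) state up by 1.
rng = np.random.default_rng(0)
states, actions = generate_trajectory(lambda s, z, g: (1.0, s + 1.0), 0.0, None, 5, rng)
assert len(states) == 6 and len(actions) == 5 and states[-1] == 5.0
```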
Each GAN and one-step model in the ensemble has a different random initialisation and is trained on different batches of data sampled from the same replay buffer. As discussed in the context of using an ensemble of one-step models for model-based RL PDDM , this is enough to give the models significant diversity. We study the benefits of using an ensemble over a single GAN in Section 5.
4.2 Planning to achieve a goal
Once we have an ensemble of GANs that has been trained on some amount of real data, we use them to plan the actions to take in the environment in order to achieve a given goal, $g$. Our planner’s basic structure shares similarities with a number of other model-predictive control based approaches nn_model ; hafner2019planet ; PDDM ; MPC — use a model to generate a number of imagined future trajectories, score them, use these scores to choose the next action, and repeat the whole procedure at the next step. The novelty of our approach lies in the fact that our trajectories are generated using GANs, the way that we score the trajectories and how we make use of an ensemble of models.
To plan the next action to take from the current state $s_t$ towards a desired goal $g$, we first sample a set of $K$ initial actions and next states, $\{(a_t^{(k)}, s_{t+1}^{(k)})\}_{k=1}^{K}$. For each $k$, $a_t^{(k)}$ and $s_{t+1}^{(k)}$ are generated from a random generator in the ensemble, conditioned on $g$, i.e. $(a_t^{(k)}, s_{t+1}^{(k)}) = G_{\theta_i}(s_t, z_k, g)$, where $i$ is sampled uniformly from the ensemble. Our aim is then to give each of these initially proposed actions a score which captures how effective they are in terms of moving towards the final goal $g$. A good score should reflect the fact that we want the next action to move us towards $g$ as quickly as possible whilst also ensuring that the goal can be retained at later time steps. For example, we would not want to score too highly an action that moved an object close to the desired goal with such high velocity that it would overshoot and not remain there at later time steps.
To obtain such a score we duplicate each of these initial actions and next states $M$ times. Each next state is then used as the starting point for a hypothetical trajectory of length $T$. These hypothetical trajectories are all generated using a different randomly chosen GAN at each time step; so, for example, $\hat{s}_{t+2}^{(k)}$ is generated from a random generator in the ensemble conditioned on $(\hat{s}_{t+1}^{(k)}, g)$.
Once we have generated these trajectories, we give each of them a score based on the fraction of time steps they spend achieving the goal. This means that trajectories that reach the goal quickly are scored highly, but only if they are able to remain there. Trajectories that do not reach the goal within $T$ steps are given a score of zero. We can then score each of the initial actions based on the average score of all the imagined trajectories that started with that action. These scores are normalised and denoted $A_k$. The final action returned by the planner is either the action with the maximum score or an exponentially weighted average of the initially proposed actions, $a_t = \sum_k e^{\kappa A_k} a_t^{(k)} / \sum_k e^{\kappa A_k}$, where $\kappa$ is a hyperparameter. The rationale for using a different random generator at each step of every hypothetical trajectory is that we give higher scores to initial actions that all of the GANs agree can spend a lot of time achieving the goal. This improves the robustness of the predictions and protects against unrealistic imagined future trajectories generated by any single GAN.
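The scoring and action-selection step can be sketched as follows, assuming the rollout scores (fraction of imagined steps spent achieving the goal) have already been computed for each of the $K$ initial actions and their $M$ duplicated rollouts; the function name is an illustrative assumption:

```python
import numpy as np

def plan_action(initial_actions, rollout_scores, kappa=5.0):
    """Average each of the K initial actions' M rollout scores, normalise,
    and return an exponentially weighted average of the actions; kappa is
    the temperature hyperparameter. An illustrative sketch."""
    mean_scores = rollout_scores.mean(axis=1)               # shape (K,)
    norm_scores = mean_scores / (mean_scores.sum() + 1e-8)  # normalised scores A_k
    weights = np.exp(kappa * norm_scores)
    weights /= weights.sum()
    return (weights[:, None] * initial_actions).sum(axis=0)

# Two candidate actions; the first spends far more imagined time at the goal.
actions = np.array([[1.0, 0.0], [0.0, 1.0]])
scores = np.array([[0.9, 0.8], [0.0, 0.1]])
a = plan_action(actions, scores)
assert a[0] > a[1]  # the mixture leans towards the better-scoring action
```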
We perform experiments in four continuous environments built in MuJoCo mujoco — Four Rooms Navigation, Three-link Reacher, Fetch Push and Fetch Pick And Place (see Figure 3). Full details about these environments, along with the hyperparameters used for the experiments, can be found in the Appendix. We evaluate performance in terms of the percentage of goals that the agent is able to reach vs. the number of time steps it has interacted with the real environment (videos of results are available in the supplementary materials).
We compare the performance of our algorithm against a number of leading model-free methods designed for multi-goal, sparse reward environments (Figure 4). The most natural baseline is HER (using DDPG as the core RL algorithm HER ), as this is based on a similar underlying principle to PlanGAN. We also include DDPG without HER to demonstrate how standard model-free RL methods struggle with these tasks. For both of these we use the implementations found in OpenAI Baselines baselines . We also include comparisons with two recently proposed modifications to HER, “Curriculum-Guided HER" curriculumHER (CHER) and “Soft HER" SHER (SHER), using their official implementations. Finally, for the Fetch Push and Fetch Pick And Place environments, we include comparisons with a recent method, “Simulated Locomotion Demonstrations" (SLD), which requires an object to be defined. SLD uses the fact that, within a simulator, objects can move by themselves, so a separate object policy can be learned in which the object moves itself to the desired goal. SLD leverages this object policy to guide the learning of the full robot policy. This gives it a significant advantage over PlanGAN, as it makes use of separately learned self-generated demonstrations to guide training; nevertheless, we see that PlanGAN still achieves significantly better data efficiency. All plots are based on running each experiment with 5 different random seeds, with the solid line representing the median performance and the shaded area representing one standard deviation around the mean. We also include a line showing the average asymptotic performance of HER (as this is the most directly comparable method).
In all of the tasks considered we find that PlanGAN is significantly more sample efficient than any of the other methods, requiring between 4 and 8 times less data to reach the same level of performance as HER. This is comparable to the sample-efficiency gains reported in nn_model for a model-based approach to dense reward tasks over leading model-free methods.
5.2 Ablation studies
In this section we study how various decisions we have made affect PlanGAN’s performance by performing ablation studies on the two more complicated environments considered (Fetch Push and Fetch Pick And Place). Firstly, we study whether the planner is a crucial component of our set-up. The first panel in Figure 5 in the Appendix shows a comparison of the full PlanGAN with a couple of variations that more directly use the actions proposed by the GANs. Both of these lead to significantly lower success rates, suggesting that the planner we use is crucial.
We then consider how the number of GANs in the ensemble affects PlanGAN’s performance. The second panel in Figure 5 shows results for ensembles made up of 1, 3 and 5 GANs respectively. Whilst less significant than the inclusion of the planner, we find that using only a single GAN leads to slower and significantly less stable training. We also see that the larger ensemble (5 GANs) outperforms the smaller ensemble (3 GANs), but the difference in performance is relatively small. Finally, we consider running the algorithm with the regularisation weight $\lambda$ set to zero, i.e. without any regularisation from the one-step predictive models. We see that the one-step model regularisation provides only a very minor improvement, suggesting that it is not a crucial component of PlanGAN.
We proposed PlanGAN, a model-based method for solving multi-goal environments with sparse rewards. We showed how to train a generative model in order to generate plausible future trajectories that lead from a given state towards a desired goal, and how these can be used within a planning algorithm to achieve these goals efficiently. We demonstrated that this approach leads to a substantial increase in sample efficiency when compared to leading model-free RL methods that can cope with sparse rewards and multiple goals.
In the future we would like to extend this work so that it can be applied to more complex environments. One of the main limitations with the current approach is the planner. When the number of time steps required to complete a task becomes large the planner becomes computationally expensive, since at each step we have to simulate a large number of future steps out until completion. We also need these trajectories to be at least reasonably accurate over a very large number of time steps, as imagined future trajectories that do not reach the desired goal are given a score of zero. If no imagined trajectories reach the goal then the planner is unable to meaningfully choose an action. Future work which may more efficiently deal with longer horizon tasks could involve combining the GAN training with a model-free goal-conditioned value function (creating a hybrid method, similar to STEVE STEVE and Dreamer dreamer ) which could learn to give a value to the actions proposed by the GANs, removing the need for a planner entirely.
-  A. Abdolmaleki, J. T. Springenberg, Y. Tassa, R. Munos, N. Heess, and M. Riedmiller. Maximum a posteriori policy optimisation. In International Conference on Learning Representations, 2018.
-  M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, O. Pieter Abbeel, and W. Zaremba. Hindsight experience replay. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5048–5058. Curran Associates, Inc., 2017.
-  M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. arXiv:1701.07875, 2017.
-  K. Azizzadenesheli, B. Yang, W. Liu, E. Brunskill, Z. C. Lipton, and A. Anandkumar. Sample-efficient deep RL with generative adversarial tree search. CoRR, abs/1806.05780, 2018.
-  A. P. Badia, B. Piot, S. Kapturowski, P. Sprechmann, A. Vitvitskyi, D. Guo, and C. Blundell. Agent57: Outperforming the atari human benchmark, 2020.
-  G. Barth-Maron, M. W. Hoffman, D. Budden, W. Dabney, D. Horgan, D. TB, A. Muldal, N. Heess, and T. Lillicrap. Distributional policy gradients. In International Conference on Learning Representations, 2018.
-  M. Bellemare, S. Srinivasan, G. Ostrovski, T. Schaul, D. Saxton, and R. Munos. Unifying count-based exploration and intrinsic motivation. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 1471–1479. Curran Associates, Inc., 2016.
-  J. Buckman, D. Hafner, G. Tucker, E. Brevdo, and H. Lee. Sample-efficient reinforcement learning with stochastic ensemble value expansion. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 8224–8234. Curran Associates, Inc., 2018.
-  Y. Burda, H. Edwards, A. Storkey, and O. Klimov. Exploration by random network distillation. In International Conference on Learning Representations, 2019.
-  P. Dhariwal, C. Hesse, O. Klimov, A. Nichol, M. Plappert, A. Radford, J. Schulman, S. Sidor, Y. Wu, and P. Zhokhov. Openai baselines. https://github.com/openai/baselines, 2017.
-  M. Fang, T. Zhou, Y. Du, L. Han, and Z. Zhang. Curriculum-guided hindsight experience replay. In Advances in Neural Information Processing Systems 32, pages 12623–12634. Curran Associates, Inc., 2019.
-  C. Florensa, D. Held, X. Geng, and P. Abbeel. Automatic goal generation for reinforcement learning agents, 2017.
-  D. Ghosh, A. Gupta, J. Fu, A. Reddy, C. Devin, B. Eysenbach, and S. Levine. Learning to reach goals without reinforcement learning. arXiv:1912.06088, 2019.
-  I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial networks, 2014.
-  D. Hafner, T. Lillicrap, J. Ba, and M. Norouzi. Dream to control: Learning behaviors by latent imagination. In International Conference on Learning Representations, 2020.
-  D. Hafner, T. Lillicrap, I. Fischer, R. Villegas, D. Ha, H. Lee, and J. Davidson. Learning latent dynamics for planning from pixels. In International Conference on Machine Learning, pages 2555–2565, 2019.
-  Q. He, L. Zhuang, and H. Li. Soft hindsight experience replay, 2020.
-  L. Kaiser, M. Babaeizadeh, P. Milos, B. Osinski, R. H. Campbell, K. Czechowski, D. Erhan, C. Finn, P. Kozakowski, S. Levine, A. Mohiuddin, R. Sepassi, G. Tucker, and H. Michalewski. Model-based reinforcement learning for atari, 2019.
-  T. Kurutach, I. Clavera, Y. Duan, A. Tamar, and P. Abbeel. Model-ensemble trust-region policy optimization. In International Conference on Learning Representations, 2018.
-  D. Liberzon. Calculus of Variations and Optimal Control Theory: A Concise Introduction. Princeton University Press, 2012.
-  T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning, 2015.
-  H. Liu, A. Trott, R. Socher, and C. Xiong. Competitive experience replay. In International Conference on Learning Representations, 2019.
-  T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida. Spectral normalization for generative adversarial networks. ICLR, 2018.
-  V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing atari with deep reinforcement learning. In NIPS Deep Learning Workshop, 2013.
-  V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
-  A. Nagabandi, G. Kahn, R. Fearing, and S. Levine. Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In IEEE International Conference on Robotics and Automation (ICRA), pages 7559–7566, 2018.
-  A. Nagabandi, K. Konoglie, S. Levine, and V. Kumar. Deep Dynamics Models for Learning Dexterous Manipulation. In Conference on Robot Learning (CoRL), 2019.
-  A. Nair, V. Pong, M. Dalal, S. Bahl, S. Lin, and S. Levine. Visual reinforcement learning with imagined goals, 2018.
-  S. Nasiriany, V. Pong, S. Lin, and S. Levine. Planning with goal-conditioned policies. Advances in Neural Information Processing Systems, 2019.
-  D. Pathak, P. Agrawal, A. A. Efros, and T. Darrell. Curiosity-driven exploration by self-supervised prediction. In ICML, 2017.
-  V. Pong*, S. Gu*, M. Dalal, and S. Levine. Temporal difference models: Model-free deep RL for model-based control. In International Conference on Learning Representations, 2018.
-  S. Racanière, T. Weber, D. P. Reichert, L. Buesing, A. Guez, D. Rezende, A. P. Badia, O. Vinyals, N. Heess, Y. Li, R. Pascanu, P. Battaglia, D. Hassabis, D. Silver, and D. Wierstra. Imagination-augmented agents for deep reinforcement learning. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, page 5694–5705, Red Hook, NY, USA, 2017. Curran Associates Inc.
-  A. Rajeswaran*, V. Kumar*, A. Gupta, G. Vezzani, J. Schulman, E. Todorov, and S. Levine. Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations. In Proceedings of Robotics: Science and Systems (RSS), 2018.
-  J. Schrittwieser, I. Antonoglou, T. Hubert, K. Simonyan, L. Sifre, S. Schmitt, A. Guez, E. Lockhart, D. Hassabis, T. Graepel, T. Lillicrap, and D. Silver. Mastering atari, go, chess and shogi by planning with a learned model, 2019.
-  D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of go with deep neural networks and tree search. Nature, 529:484–489, 2016.
-  D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel, T. Lillicrap, K. Simonyan, and D. Hassabis. A general reinforcement learning algorithm that masters chess, shogi, and go through self-play. Science, 362(6419):1140–1144, 2018.
-  A. Srinivas, M. Laskin, and P. Abbeel. Curl: Contrastive unsupervised representations for reinforcement learning, 2020.
-  E. Todorov, T. Erez, and Y. Tassa. Mujoco: A physics engine for model-based control, 2012.
-  G. Williams, B. Goldfain, P. Drews, K. Saigol, J. Rehg, and E. Theodorou. Robust sampling based model predictive control with sparse objective information. In Proceedings of Robotics: Science and Systems (RSS), 2018.
-  R. Zhao and V. Tresp. Energy-based hindsight experience prioritization. In A. Billard, A. Dragan, J. Peters, and J. Morimoto, editors, Proceedings of The 2nd Conference on Robot Learning, volume 87 of Proceedings of Machine Learning Research, pages 113–122. PMLR, 29–31 Oct 2018.
Appendix A Experimental Details
Four Rooms Navigation: A point mass is placed within a miniature four-room maze. Both the state space (four-dimensional: position and velocity of the mass) and the action space (two-dimensional: acceleration in the x and y directions) are continuous. Goals are two-dimensional (target position) and are sampled uniformly at random.
Reacher (Three Links): Three links are connected by hinges. The agent must apply torques at these hinges to move the end-point to a specified goal. The state space is 11-dimensional, the action space 3-dimensional and the goal space 2-dimensional.
Fetch Push: A robotic arm (with its gripper held shut) interacts with a cube. The aim is to push the cube to a desired goal position. The state space is 21-dimensional (robot positions/velocities, object position/velocity) and the action space is 4-dimensional. The goal space is the position of the cube (3-dimensional), and goals are sampled uniformly over a region of the table.
Fetch Pick And Place: The robotic arm is the same as in Fetch Push, but now the gripper can also be opened and closed. The aim is to pick up the cube and move it to a desired goal. The state space is 25-dimensional and the action space is 4-dimensional. The goal is again the position of the cube, which may now also be in the air (above the table), so that the gripper must be used to pick the cube up.
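All four environments are goal-conditioned with sparse rewards: the agent is only rewarded once the achieved goal lies within some tolerance of the desired goal. A minimal sketch of this reward convention (the threshold value and the 0/-1 reward levels are common conventions for such multi-goal tasks, assumed here rather than quoted from the paper):

```python
import numpy as np

def sparse_goal_reward(achieved_goal, desired_goal, threshold=0.05):
    """Return 0.0 on success (goal reached within `threshold`), -1.0 otherwise.

    The threshold and reward levels are illustrative assumptions following the
    usual sparse multi-goal convention, not values reported in this paper.
    """
    distance = np.linalg.norm(np.asarray(achieved_goal) - np.asarray(desired_goal))
    return 0.0 if distance < threshold else -1.0
```

For Fetch Push and Fetch Pick And Place the achieved goal would be the current cube position; for the navigation and reacher tasks, the mass position and end-point position respectively.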
The hyperparameters used were largely the same across all of the experiments reported. Below we list and describe them, together with their default values.
| Symbol in paper | Description | Default value |
| --- | --- | --- |
|  | Number of GANs in ensemble | 3 |
|  | Number of one-step predictive models | 3 |
|  | Number of initial random trajectories stored before training begins | 250 |
|  | Number of initial training steps | 100000 |
|  | Trajectory length of gathered rollouts (and planning horizon) | 50 |
|  | Number of additional trajectories gathered (after the initial random ones) | 1500 (Four Rooms, Reacher); 2000 (Fetch Push); 3000 (Fetch Pick And Place) |
|  | Number of training steps per new trajectory gathered | 250 |
|  | Number of initial actions proposed during planning | 25 |
|  | Number of copies of each initial action during planning | 100 |
|  | Length of trajectories sampled/generated during training | 5 |
|  | Regularisation strength of the one-step predictive model in the generator loss | 30.0 |
|  | Batch size for training the GANs | 128 |
|  | Batch size for training the one-step models | 256 |
|  | Parameter for weighting trajectory scores | 5 (Four Rooms, Fetch Pick And Place); N/A, max score used (Fetch Push, Reacher) |
| – | Generator optimiser | Adam (lr = 0.0001) |
| – | Discriminator optimiser | Adam (lr = 0.0001) |
| – | One-step model optimiser | Adam (lr = 0.001) |
| – | L2 regularisation of generator parameters | 0.0001 |
| – | L2 regularisation of discriminator parameters | 0.0001 |
Note that for Fetch Push and Fetch Pick And Place, the number of initial trajectories does not correspond directly to the number of environment interactions: many random trajectories never move the object, and hence never change the achieved goal, so they are not stored in the buffer. The count above therefore refers to the number of initial trajectories actually stored in the replay buffer (although the discarded trajectories are still counted when reporting the number of environment interactions).
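The planning hyperparameters listed above (25 proposed initial actions, 100 copies of each, and the score-weighting parameter) can be seen working together in the sketch below. The stub functions, the exponential form of the score weighting and all names are illustrative assumptions, not the paper's implementation; in PlanGAN the proposals come from the trained GAN ensemble and each copy is rolled out stochastically through imagined trajectories up to the planning horizon.

```python
import numpy as np

rng = np.random.default_rng(0)

N_INIT_ACTIONS = 25  # initial actions proposed during planning
N_COPIES = 100       # copies of each initial action
BETA = 5.0           # score-weighting parameter (exponential weighting assumed)

def propose_action(state, goal):
    # Stub for sampling one action proposal from a random GAN in the ensemble.
    return rng.normal(size=state.shape)

def imagined_score(state, goal, first_action):
    # Stub for rolling out an imagined trajectory and scoring it;
    # here simply the negative distance to the goal after one step.
    return -float(np.linalg.norm(state + first_action - goal))

def plan_first_action(state, goal, use_max=False):
    actions = np.array([propose_action(state, goal) for _ in range(N_INIT_ACTIONS)])
    actions = np.repeat(actions, N_COPIES, axis=0)   # 25 * 100 = 2500 candidates
    scores = np.array([imagined_score(state, goal, a) for a in actions])
    if use_max:  # the "N/A (max score)" setting in the table above
        return actions[np.argmax(scores)]
    weights = np.exp(BETA * (scores - scores.max()))  # stabilised exponential weights
    return (weights[:, None] * actions).sum(axis=0) / weights.sum()
```

With the stochastic GAN rollouts of the real method, the 100 copies of each initial action receive different scores; in this deterministic stub they coincide and serve only to illustrate the candidate count.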
A.3 Network Details
All of the generator, discriminator and one-step predictive models consist of fully connected neural networks with two hidden layers of size 512. The hidden layers in the generator have BatchNorm applied to them. Apart from the output layers all activations are ReLU.
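Read literally, the description above corresponds to the following forward pass, sketched in plain NumPy. The initialisation scheme, the simplified training-mode BatchNorm, and the input/output dimensions are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 512  # two hidden layers of this size, as described above

def init_mlp(in_dim, out_dim):
    # He-style initialisation (an assumed choice, not stated in the paper).
    dims = [in_dim, HIDDEN, HIDDEN, out_dim]
    return [(rng.normal(scale=np.sqrt(2.0 / d_in), size=(d_in, d_out)),
             np.zeros(d_out)) for d_in, d_out in zip(dims, dims[1:])]

def batch_norm(h, eps=1e-5):
    # Training-mode BatchNorm without learned scale/shift, for brevity.
    return (h - h.mean(axis=0)) / np.sqrt(h.var(axis=0) + eps)

def forward(params, x, use_batch_norm):
    h = x
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:       # hidden layers only
            if use_batch_norm:        # generator: BatchNorm on hidden layers
                h = batch_norm(h)
            h = np.maximum(h, 0.0)    # ReLU
    return h                          # output layer: no activation

# Hypothetical dimensions, e.g. a generator taking (state, goal, noise).
gen = init_mlp(in_dim=16, out_dim=4)
out = forward(gen, rng.normal(size=(32, 16)), use_batch_norm=True)
```

Discriminators and one-step models would use the same structure with `use_batch_norm=False`.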
Appendix B Ablation Studies
We performed ablation studies on the two more challenging environments (Fetch Push and Fetch Pick And Place). First, we asked how necessary the planner is. To do this we carried out experiments in which we trained the GANs as usual but then used their proposed actions directly to generate new trajectories. We considered two variants: simply executing the first action proposed by a randomly chosen GAN in the ensemble (NoPlanner in the first panel of Figure 5), and taking the average over a large number of actions proposed by the ensemble without scoring them (NoPlannerAvg in the first panel).
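In code terms, the two ablated variants differ only in how the executed action is picked from the ensemble's proposals. The sampler below is a stub standing in for a trained GAN; the names and action dimension are illustrative:

```python
import random
import statistics

ACTION_DIM = 2  # illustrative only, e.g. the Four Rooms action space

def sample_action(gan, state, goal):
    # Stub for one action proposal from a single GAN in the ensemble.
    return [random.gauss(0.0, 1.0) for _ in range(ACTION_DIM)]

def no_planner(ensemble, state, goal):
    # NoPlanner: execute the first action proposed by a randomly chosen GAN.
    return sample_action(random.choice(ensemble), state, goal)

def no_planner_avg(ensemble, state, goal, n_samples=100):
    # NoPlannerAvg: average many proposals from the ensemble without scoring.
    proposals = [sample_action(random.choice(ensemble), state, goal)
                 for _ in range(n_samples)]
    return [statistics.fmean(dim) for dim in zip(*proposals)]
```

Neither variant uses imagined rollouts or trajectory scores, which is exactly what the ablation removes relative to the full planner.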
The next question we addressed was how the number of GANs in the ensemble affects performance. We ran experiments with a small ensemble (3 GANs, the "standard" setting), a larger ensemble (5 GANs) and no ensemble (a single GAN). The results are shown in the second panel of Figure 5. Using no ensemble leads to slower and less stable training, particularly on Fetch Pick And Place. A larger ensemble does improve on the smaller one, although the difference is relatively small.
Finally, we ran the experiments without any regularisation from the one-step predictive model (NoOSMReg), i.e. with the regularisation coefficient set to zero. This leads to only a very minor decrease in performance, suggesting that the one-step predictive model is not a crucial component of PlanGAN.
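For concreteness, the ablated term can be written as a penalty on the generator objective. The functional form below is a hedged reconstruction from the description above, and the symbols $\mathcal{L}_{\mathrm{adv}}$, $f$, $\hat{s}_{t+1}$ and $\lambda$ are our notation rather than the paper's:

$$\mathcal{L}_{\mathrm{gen}} = \mathcal{L}_{\mathrm{adv}} + \lambda \,\big\| f(s_t, a_t) - \hat{s}_{t+1} \big\|^2,$$

where $f$ is the one-step predictive model, $\hat{s}_{t+1}$ is the next state proposed by the generator, and $\lambda$ is the regularisation strength listed in Appendix A (default 30.0). NoOSMReg then corresponds to $\lambda = 0$.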
Note that, other than the changes described, the parameters used for each experiment were as given in Appendix A.