One-shot imitation is a powerful way to show agents how to solve a task. For instance, one or a few demonstrations are typically enough to teach people how to solve a new manufacturing task. In this paper, we introduce an AI agent that, when provided with a novel demonstration, is able to (i) mimic the demonstration with high fidelity, or (ii) forego high-fidelity imitation to solve the intended task more efficiently. Both types of imitation can be useful in different domains.
Motor control is a notoriously difficult problem, and we are often deceived by how simple a manipulation task might appear to be. Tying shoe-laces, a behaviour many of us learn by imitation, might appear simple. Yet tying shoe-laces is something most six-year-olds struggle with, long after object recognition, walking, speech, often translation, and sometimes even reading comprehension. This long process of learning, which eventually results in our ability to rapidly imitate many behaviours, provides inspiration for the work in this paper.
We refer to high-fidelity imitation as the act of closely mimicking a demonstration trajectory, even when some actions may be accidental or irrelevant to the task. This is sometimes called over-imitation (McGuigan et al., 2011). It is known that humans over-imitate more than other primates (Horner & Whiten, 2005) and that this may be useful for rapidly acquiring new skills (Legare & Nielsen, 2015). For AI agents, however, learning to closely imitate even a single demonstration from raw sensory input can be difficult. Many recent works use expensive reinforcement learning (RL) methods to solve this problem (Sermanet et al., 2018; Liu et al., 2017; Peng et al., 2018; Aytar et al., 2018). In contrast, high-fidelity imitation in humans is often cheap: in one shot we can closely mimic a demonstration. Inspired by this, we introduce a meta-learning approach (MetaMimic; Figure 1) that learns high-fidelity one-shot imitation policies by off-policy RL. When deployed, these policies require only a single demonstration as input in order to mimic the new skill being demonstrated.
AI agents could acquire a large and diverse set of skills by high-fidelity imitation with RL. However, representing many behaviours requires a model of very high capacity, such as a very large deep neural network. Unfortunately, whether RL methods can be used to train massive deep neural networks has been an open question, because of the variance inherent to these methods. Indeed, traditional deep RL networks tend to be small, to the point that researchers have recently questioned their contribution (Rajeswaran et al., 2017b). In this paper, we show that it is possible to train massive deep networks by off-policy RL to represent many behaviours. Moreover, we show that bigger networks generalize better. These results therefore provide important evidence that RL is indeed a scalable and viable framework for the design of AI agents. Specifically, this paper makes the following contributions (videos presenting our results are available at https://vimeo.com/metamimic):
- It introduces the MetaMimic algorithm and shows that it is capable of one-shot high-fidelity imitation from video in a complex manipulation domain.
- It shows that MetaMimic can harness video demonstrations and enrich them with actions and rewards so as to learn unconditional policies capable of solving manipulation tasks more efficiently than the teleoperating human demonstrators. By retaining and taking advantage of all its experiences, MetaMimic also substantially outperforms the state-of-the-art D4PG RL agent when D4PG uses only the current task experiences.
- The experiments provide ablations showing that larger networks (to the best of our knowledge, the largest ever used in deep RL) lead to improved generalization in high-fidelity imitation. The ablations also highlight the value of instance normalization.
- The experiments show that increasing the number of demonstrations during training leads to better generalization on one-shot high-fidelity imitation tasks.
MetaMimic is an algorithm to learn both one-shot high-fidelity imitation policies and unconditional task policies that outperform demonstrators. Component 1 takes as input a dataset of demonstrations, and produces (i) a set of rich experiences and (ii) a one-shot high-fidelity imitation policy. Component 2 takes as input a set of rich experiences and produces an unconditional task policy.
Component 1 uses a dataset of demonstrations to define a set of imitation tasks and using RL it trains a conditional policy to perform well across this set. Here each demonstration is a sequence of observations, without corresponding actions. This component can use any RL algorithm, and is applicable whenever the agent can be run in the environment and the environment’s initial conditions can be precisely set. In practice we make use of D4PG (Barth-Maron et al., 2018), an efficient off-policy RL algorithm, for training the agent’s policy from demonstration data. In Section 2.1 we give a detailed description of our approach for learning one-shot high-fidelity imitation policies. Here we also describe the neural network architectures used in our approach to imitation; our results will show that as the number of imitation tasks increases it becomes necessary to train large-scale neural network policies to generalize well. Furthermore, the process of training the imitation policies results in a memory of experiences which includes both actions and rewards. As shown in Section 2.2 we can replay these experiences to learn unconditional policies capable of solving new tasks and outperforming human demonstrators.
Algorithm 1: High-fidelity imitation experience collection

Given: an experience replay memory R, a dataset of demonstrations D, and a reward function r
Initialize memory R
for each episode do
    Sample demo d from D
    Set the initial state o_1 to the initial demo state d_1
    for t = 1 to T do
        Sample action a_t = π(o_t, d)
        Execute a_t and observe o_{t+1} and the task reward r_task
        Calculate the imitation reward r_imitate = r(o_{t+1}, d_{t+1})
        Store (o_t, a_t, o_{t+1}, r_task, r_imitate) in R
    end for
end for
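The collection loop above can be sketched in Python. The environment, policy, and reward interfaces below are illustrative stand-ins, not the paper's actual implementation:

```python
import random

def collect_imitation_experience(env, policy, demos, imitation_reward,
                                 replay, episodes):
    """Fill a replay memory by tracking demonstrations, as in Algorithm 1.

    `env.reset_to` and `env.step` are hypothetical interfaces: the simulator
    must support resetting to a demonstration's initial state. Each demo is a
    sequence of observations d_1..d_T (no actions).
    """
    for _ in range(episodes):
        demo = random.choice(demos)       # sample a demonstration
        obs = env.reset_to(demo[0])       # set initial state to the demo's
        for t in range(len(demo) - 1):
            goal = demo[t + 1]            # next demo state acts as the goal
            action = policy(obs, goal)    # goal-conditional imitation policy
            next_obs, task_reward = env.step(action)
            r_im = imitation_reward(next_obs, goal)
            replay.append((obs, action, next_obs, task_reward, r_im))
            obs = next_obs
    return replay
```

Note that both the task reward and the computed imitation reward are stored with every transition, which is what later allows an unconditional task policy to be trained from the same memory.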
2.1 Learning a Policy for One-shot High-Fidelity Imitation
We consider a single stochastic task and define a demonstration of this task as a sequence of observations d = {d_1, ..., d_T}. While there is only one task, there is ample diversity in the environment's initial conditions and in the demonstrations. Each observation can be either an image or a combination of images and proprioceptive information. We let π_θ(o_t, d) represent a deterministic parameterized imitation policy that produces actions by conditioning on the current observation and a given demonstration. We also let E denote the environment renderer, which accepts an arbitrary policy and produces a sequence of observations o = {o_1, ..., o_T}. We can think of this last step as producing a rollout trajectory given the policy, as illustrated on the right side of Figure 2.
The goal of high-fidelity imitation is to estimate the parameters θ of a policy that maximizes the expected imitation return, i.e. the similarity between the rollout observations o and sampled demonstrations d. That is,

    θ* = argmax_θ E_{d ~ D} [ Σ_{t=1}^{T} γ^(t−1) r(o_t, d_t) ],  where o = E(π_θ(·, d)),

where r is a similarity measure (reward), discussed in greater detail at the end of this section, and γ is the discount factor. In general it is not possible to differentiate through the environment E. We therefore optimize this objective with RL, sampling trajectories by acting in the environment. Finally, we refer to this process as one-shot imitation because, although we make use of many demonstrations at training time to learn the imitation policy, at test time we can follow a single novel demonstration using the learned conditional policy π_θ.
We adopt the recently proposed D4PG algorithm (Barth-Maron et al., 2018) as a subroutine for training the imitation policy. This is a distributed off-policy RL algorithm that interacts with the environment using independent actors (see Figure 2), each of which inserts trajectory data into a replay table. A learner process in parallel samples (possibly prioritized) experiences from this replay dataset and optimizes the policy in order to maximize the expected return. MetaMimic first builds on this earlier work by making a very specific choice of reward and policy.
At the beginning of every episode a single demonstration d is sampled, and the initial conditions of the environment are set to those of the demonstration, i.e. o_1 = d_1. The actor then interacts with the environment by producing actions a_t = π_θ(o_t, d). While conditioning on the full demonstration is popular in the feature-based supervised one-shot imitation literature, in our case the observations and demonstrations are sequences of high-dimensional sensory inputs, so this approach becomes computationally prohibitive. To overcome this challenge, we simplify the model to consider only local context: a_t = π_θ(o_t, d_{t+1}). In this formulation, the future demonstration state d_{t+1} can be interpreted as a goal state, and the approach may be thought of as goal-conditional imitation with a time-varying goal.
At every timestep we compute the reward r_t = r(o_t, d_t). While in general this reward can depend on the entire trajectory, or on small subsequences of it, in practice we restrict it to depend only on the goal state d_t. Ideally, the reward function could be learned during training (Ganin et al., 2018; Nair et al., 2018). In this work, however, we experiment with a simple reward function based on the Euclidean distance over observations (Ganin et al. (2018) show that the L2-distance is an optimal discriminator for conditional generation):

    r(o_t, d_t) = −β_img ||o_t^img − d_t^img||_2 − β_body ||o_t^body − d_t^body||_2,
where o^img and d^img are the raw pixel observations and o^body and d^body are proprioceptive measurements (joint and end-effector positions and velocities). Both components of the reward function have limitations: the proprioceptive term carries no information about objects in the environment, so it may fail to encourage the imitator to interact with the objects, whereas the image term contains information about both the body and the objects but is insufficient to uniquely describe either. In practice, we found a combination of both to work best.
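As a sketch, such a reward might be computed as follows; the weighting coefficients and the exact functional form used in the paper are assumptions for illustration:

```python
import numpy as np

def imitation_reward(obs_img, obs_body, goal_img, goal_body,
                     beta_img=1.0, beta_body=1.0):
    """Euclidean-distance imitation reward over pixels and proprioception.

    The reward is maximal (zero) when the observation matches the goal
    exactly and decreases with distance; the beta_* weights are placeholders.
    """
    d_img = np.linalg.norm(obs_img.ravel() - goal_img.ravel())
    d_body = np.linalg.norm(obs_body - goal_body)
    return -beta_img * d_img - beta_body * d_body
```

In a real setup one would tune the weights so that neither the pixel term nor the proprioceptive term dominates, given their very different scales.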
Again we note that the next demonstration state can be interpreted as a goal state. In this perspective the goals are set by the demonstration, and the agent is rewarded by the degree to which it reaches those goals. Because the imitation goals are explicitly given to the policy, the imitation policy is able to imitate many diverse demonstrations, and even generalize to unseen demonstrations as described in Section 2.2.
High-fidelity imitation is a fine-grained perception task, and hence the choice of policy architecture is critical. In particular, the policy must be able to closely mimic not just one but many possible ways of accomplishing the same stochastic task under different environment configurations. This representational demand motivates the introduction of high-capacity deep neural networks. We found the architecture shown in Figure 3, with residual connections, 20 convolution layers with 512 channels (22 million parameters in total), and instance normalization, to drastically improve performance, as shown in Figure 6 of the Experiments section. Following the recommendations of Rajeswaran et al. (2017b), we compare this model with a smaller network (15 convolution layers with 32 channels and 1.6 million parameters) proposed recently by Espeholt et al. (2018), and find that size matters. We note, however, that the IMPALA network of Espeholt et al. (2018) is itself large in comparison to networks used in previous RL and control milestones, including AlphaGo (Silver et al., 2016), Atari DQN (Mnih et al., 2013), dexterous in-hand manipulation (OpenAI et al., 2018), QT-Opt for vision-based robotic manipulation (Kalashnikov et al., 2018), and Dota 2, among others.
2.2 Learning an Unconditional Task Policy
MetaMimic fills its replay memory with a rich set of experiences, which can also be used to learn a task more quickly. In order to train a task policy using RL we need to both explore, i.e. to find a sequence of actions that leads to high reward; and to learn, i.e. to harness reward signals to improve the generation of actions so as to generalize well. Unfortunately, stumbling on a rewarding sequence of actions for many control tasks is unlikely, especially when reward is only provided after a long sequence of actions.
A powerful way of using demonstrations for exploration is to inject the demonstration data directly into an off-policy experience-replay memory (Hester et al., 2017; Večerík et al., 2017; Nair et al., 2017b). However, these methods require access to privileged information about the demonstration – the sequences of actions and rewards – which is often not available. Our method takes a different approach. While our high-fidelity imitation policy attempts to imitate the demonstration from observations only, it generates its own observations, actions and rewards. These experiences are often rewarding enough to help with exploration. Therefore, instead of injecting demonstration trajectories, we place all experiences generated by our imitation policy in the experience-replay memory, as illustrated in Figure 1. The key design principle behind our approach is that RL agents should store all their experiences and take advantage of them for solving new problems.
More precisely, as the imitation policy interacts with the environment, we also assume the existence of a task reward r_task. Given these rewards, we can introduce an additional off-policy task learner that optimizes an unconditional task policy π_task(o_t). This policy can be learned from transitions generated asynchronously by the imitation actors following the conditional policy π_θ(o_t, d_{t+1}). This learning process is possible because the task learner simply optimizes the cumulative task reward in an off-policy fashion. Note that it requires no privileged information about the demonstrations, because the sampled transitions are generated by the imitation actors' own process of learning to imitate.
Due to the existence of demonstrations, the imitation trajectories are likely to lie in areas of high reward, and as a result these samples can help circumvent the exploration problem. However, they are also likely to be highly off-policy, especially early in learning. We therefore augment these trajectories with samples generated asynchronously by additional task actors following the unconditional task policy π_task. The task learner then trains its policy by sampling from both imitation and task trajectories. For more algorithmic details see Appendix A.
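A minimal sketch of this mixed sampling scheme is below; the buffer interfaces and the fixed mixing fraction are assumptions for illustration, not details from the paper:

```python
import random

def sample_mixed_batch(imitation_buffer, task_buffer, batch_size,
                       imitation_frac=0.5):
    """Draw a minibatch mixing imitation-actor and task-actor transitions.

    Early in training the task buffer may still be empty, in which case the
    batch falls back to imitation experience alone.
    """
    n_im = int(batch_size * imitation_frac) if task_buffer else batch_size
    n_task = batch_size - n_im
    batch = [random.choice(imitation_buffer) for _ in range(n_im)]
    batch += [random.choice(task_buffer) for _ in range(n_task)]
    random.shuffle(batch)
    return batch
```

A prioritized scheme, as used by D4PG's replay, could replace the uniform `random.choice` here without changing the overall structure.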
As the imitation policy improves, rewarding experiences are added to the replay memory and the task learner draws on these rewarding sequences to circumvent the exploration problem through off-policy learning. We will show this helps accelerate learning of the task policy, and that it works as well as methods that have direct access to expert actions and expert rewards.
In this section, we analyze the performance of our imitation and task policies. We chose to evaluate our methods in a particularly challenging environment: a robotic block stacking setup with both sparse rewards and diverse initial conditions, learned from visual observations. In this space, our goal is to learn a policy performing high-fidelity imitation from human demonstration, while generalizing to new initial conditions.
Our environment consists of a Kinova Jaco arm with six arm joints and three actuated fingers, simulated in MuJoCo. In the block stacking task (Nair et al., 2017b), the robot interacts with two blocks on a tabletop. The task reward is a sparse, piecewise-constant function, as described in Zhu et al. (2018). It defines three stages: (i) reaching, (ii) lifting, and (iii) stacking, and the reward changes only when the environment transitions from one stage to another. Our policy controls the simulated robot by setting joint velocity commands, producing 9-dimensional continuous velocities. The environment outputs a visual observation as an RGB image, as well as proprioceptive features consisting of the positions and angular velocities of the arm and finger joints.
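The staged, piecewise-constant reward can be illustrated with a small sketch; the stage predicates and the particular constants are placeholders, not the values from Zhu et al. (2018):

```python
def staged_task_reward(reached, lifted, stacked):
    """Piecewise-constant task reward over three stages.

    The reward depends only on the highest stage reached, so it changes only
    when the environment transitions between stages. The constants 1/2/3 are
    illustrative, not those of the original reward function.
    """
    if stacked:
        return 3.0
    if lifted:
        return 2.0
    if reached:
        return 1.0
    return 0.0
```

Because the signal is constant within a stage, random exploration receives no gradient of reward between stage transitions, which is exactly the exploration difficulty the imitation experiences help overcome.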
In this environment, we collected demonstrations using a SpaceNavigator 3D motion controller, which allows a human operator to control the robot arm with a position controller. We collected a set of demonstration episodes as imitation targets; a separate set of episodes was gathered for validation purposes by a different human demonstrator.
Note that the images shown in this paper have been rendered with the path-tracer Mitsuba for illustration purposes. Our agent does not, however, require such high-quality video input: the environment output generated by MuJoCo, which our agent observes, is lower in resolution and quality.
3.2 High-Fidelity One-Shot Imitation
We use D4PG (see Appendix C.1) to train the imitation policy in a one-shot manner (Section 2.1) on the block stacking task. The policy observes the visual input o_t as well as a demonstration sequence randomly sampled from the set of expert episodes. In Figure 4 we show that our policy can closely mimic novel, diverse, test-set demonstration videos. Recall that these test demonstrations are provided by a different expert and require generalization to a distinct stacking style. The test demonstrations differ from the training ones enough that their average cumulative reward is lower by as much as 70: the average episodic reward is 355 for the training demonstration set and 285 for the test set. On these novel demonstrations we achieve 52% of the demonstration reward without any task reward, solely by high-fidelity imitation.
It is important to note that we achieve this while placing significantly fewer assumptions on the environment and demonstrations than comparable methods. Unlike supervised methods (e.g. Nair et al. (2017b)), we can imitate without access to actions at training time. And while proprioceptive features are used in computing the imitation reward, they are not observed by the policy, which means MetaMimic's imitation policy can mimic block stacking demonstrations purely from video at test time. Finally, as opposed to pure reinforcement learning approaches (Barth-Maron et al., 2018), we do not train on a task reward.
Generalization: To analyze how well the learned policy generalizes to novel demonstrations, we run it conditioned on demonstrations from the validation set. As Figure 5 shows, validation rewards track the training curves fairly well. We also notice that policies trained on a small number of demonstrations achieve high imitation reward in training but low reward in validation, while policies trained on a larger set generalize much better. Although we do not use a task reward in training, we use it to measure performance on the stacking task, and see the same behavior as for the imitation reward: policies trained on sufficiently many demonstrations generalize very well. On the training set, performance ranges from 67% to 81% of the average demonstration reward.
Network architecture: Most reinforcement learning methods use comparatively small networks. However, high-fidelity imitation of this stochastic task requires the coordination of fine-grained perception and accurate motor control, problems that strongly benefit from the large architectures used for other difficult vision tasks. Moreover, because we train with a dense reward, the training signal is rich enough to properly train even large models. In Figure 6 we demonstrate that a large ResNet34-style network (He et al., 2016) indeed clearly outperforms the network from IMPALA (Espeholt et al., 2018). Additionally, we show that instance normalization (Ulyanov et al., 2016) improves performance even further. We chose instance normalization over batch normalization (Ioffe & Szegedy, 2015) to avoid distribution drift between training and running the policy (Ioffe, 2017). To our knowledge, this is the largest neural network trained end-to-end with reinforcement learning.
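Instance normalization computes statistics per sample and per channel, so the layer behaves identically at training and deployment time, which is the property motivating its use here. A minimal NumPy sketch, omitting the learned affine parameters of the full layer:

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Instance normalization for an (N, C, H, W) batch.

    Unlike batch norm, the mean and variance are computed per sample and per
    channel over the spatial dimensions only, so no statistics are shared
    across the batch and no running averages are needed at inference time.
    """
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)
```

A production layer would additionally learn a per-channel scale and shift applied after the normalization.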
3.3 Task Policy
In the previous section we showed that we can learn a high-fidelity imitation policy that mimics a novel demonstration in a single shot. It does, however, require a demonstration sequence as input at test time. The task policy (see Section 2.2) is not conditioned on a demonstration sequence; it is trained concurrently with the imitation policy and learns from the imitation experiences along with its own.
In Figure 7 we show a qualitative comparison of the task policy and a corresponding demonstration sequence, obtained by starting the task policy in the same initial state as the demonstration. The task policy is not merely imitating the demonstration sequence; it has learned to perform the same task in much less time. On our task, the policy outperforms the demonstrations it learned from by 50% in terms of task reward (see Figure 8). Pure RL approaches do not reach this performance: the D4PG baseline scored significantly below the demonstration reward.
A powerful technique in RL is to use demonstrations as a curriculum: during training, the episode's initial state is set to a random state along a random demonstration trajectory. This lets the agent frequently see the later, and often rewarding, part of a demonstration episode. The approach has been shown to benefit both RL and imitation learning (Resnick et al., 2018; Hosu & Rebedea, 2016; Popov et al., 2017). It does, however, require an environment that can be reset to any demonstration state, which is often only possible in simulation. We compare D4PG and our method both with and without a demonstration curriculum. As shown in previous results, the curriculum significantly improves convergence for both methods. Our method without a demonstration curriculum performs as well as D4PG with it, and when trained with the curriculum, our method significantly outperforms all other methods; see Figure 8.
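A demonstration curriculum of this kind can be sketched as follows; the `env.set_state` interface is a hypothetical stand-in for a simulator that can be restored to an arbitrary saved state:

```python
import random

def curriculum_reset(env, demo_states, rng=random):
    """Start an episode from a random state of a random demonstration.

    Biases training toward the later, rewarding parts of episodes. Only
    feasible when the simulator can be set to an arbitrary saved state,
    which generally rules out real hardware.
    """
    demo = rng.choice(demo_states)   # pick a demonstration trajectory
    t = rng.randrange(len(demo))     # pick a state index along it
    return env.set_state(demo[t]), t
```

Returning the index `t` alongside the state lets the caller truncate the episode horizon accordingly, a common companion trick when resetting mid-trajectory.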
Last but not least, we compare our task policy against D4PGfD (Vecerík et al., 2017) (see Section 4 for more details). D4PGfD differs from our approach in that it requires demonstration actions. While it takes time for our imitation policy to take off and start helping the task policy, D4PGfD can help with exploration right away and is therefore more efficient at first. Despite having no access to actions, however, our task policy catches up quickly and reaches the same performance as the policy trained with D4PGfD, which speaks to the efficiency of our imitation policy. See Figure 8 for more details.
4 Related Work
General imitation learning:
Many prior works use imitation learning to directly learn a task policy. There are two main approaches: behavior cloning (BC), which learns a task policy by supervised learning (Pomerleau, 1989), and inverse RL (IRL), which learns a reward function from a set of demonstrations and then uses RL to learn a task policy that maximizes the learned reward (Ng et al., 2000; Abbeel & Ng, 2004; Ziebart et al., 2008).
While BC has problems with accumulating errors over long sequences, it has been used successfully both on its own (Rahmatizadeh et al., 2017) and as an auxiliary loss in combination with RL (Rajeswaran et al., 2017a; Nair et al., 2017b). IRL methods do not necessarily require expert actions. Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016) is one example. GAIL constructs a reward function that measures the similarity between expert-generated observations and observations generated by the current policy. GAIL has been successfully applied in a number of different environments (Ho & Ermon, 2016; Li et al., 2017a; Merel et al., 2017; Zhu et al., 2018). While these methods work quite well, they focus on learning task policies, and not one-shot imitation.
One-shot imitation learning: Our approach is a form of one-shot imitation. A few recent works have explored one-shot task-based imitation learning (Finn et al., 2017; Duan et al., 2017; Wang et al., 2017), i.e. given a single demonstration, generalize to a new task instance with no additional environment interactions. These methods do not focus on high-fidelity imitation and therefore may not faithfully execute the same plan as the demonstrator at test time.
Imitation by tracking: Our method learns from demonstrations using a tracking reward (Atkeson & Schaal, 1997). This method has seen increased popularity in games (Aytar et al., 2018) and control (Sermanet et al., 2018; Liu et al., 2017; Peng et al., 2018). All these methods use tracking to imitate a single demonstration trajectory. Imitation by tracking has several advantages. For example it does not require access to expert actions at training time, can track long demonstrations, and is amenable to third person imitation (Sermanet et al., 2016). To our knowledge, MetaMimic is the first to train a single policy to closely track hundreds of demonstration trajectories, as well as generalize to novel demonstrations.
Inverse dynamics models: Our method is closely related to recent work on learned inverse dynamics models (Pathak et al., 2018; Nair et al., 2017a). These works train inverse dynamics models without expert demonstrations by self-supervision. However since these methods are based on random exploration they rely on high level control policies, structured exploration, and short horizon tasks. Torabi et al. (2018) also train an inverse dynamics model to learn an unconditional policy. Their method, however, uses supervised learning, and does not outperform BC.
Multi-task off-policy reinforcement learning: Our approach is related to recent work that learns a family of policies, with a shared pool of experiences (Sutton et al., 2011; Andrychowicz et al., 2017; Cabi et al., 2017; Riedmiller et al., 2018). This allows for sparse reward tasks to be solved faster, when paired with related dense reward tasks. Cabi et al. (2017) and Riedmiller et al. (2018) require the practitioner to design a family of tasks and reward functions related to the task of interest. In this paper, we circumvent the need of auxiliary task design via imitation.
fD-style methods: When demonstration actions are available, one can embed the expert demonstrations directly into the replay memory (Hester et al., 2017; Vecerík et al., 2017). Through off-policy learning, the demonstrations can lead to better exploration. This is similar to our approach as detailed in Section 2.2; we, however, eliminate the need for expert actions through high-fidelity imitation.
For a tabular comparison of the different imitation techniques, please refer to Table 2 in the Appendix.
5 Conclusions and Future Work
In this paper, we introduced MetaMimic, a method to 1) train a high-fidelity one-shot imitation policy, and 2) efficiently train a task policy. MetaMimic employs, to our knowledge, the largest neural network trained via RL, and works from vision without the need for expert actions. The one-shot imitation policy can generalize to unseen trajectories and mimic them closely. Bootstrapping on imitation experiences, the task policy can quickly outperform the demonstrator, and is competitive with methods that receive privileged information.
The framework presented in this paper can be extended in a number of ways. First, it would be exciting to combine this work with existing methods for learning third-person imitation rewards (Sermanet et al., 2016, 2018; Aytar et al., 2018). This would bring us a step closer to how humans imitate: by watching other agents act in the environment. Second, it would be exciting to extend MetaMimic to imitate demonstrations of a variety of tasks. This may allow it to generalize to demonstrations of unseen tasks.
To improve the ease of application of MetaMimic to robotic tasks, it would be desirable to address the question of how to relax the initialization constraints for high-fidelity imitation; specifically not having to set the initial agent observation to be close to the initial demonstration observation.
- Abbeel & Ng (2004) Pieter Abbeel and Andrew Y Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the twenty-first international conference on Machine learning, pp. 1. ACM, 2004.
- Andrychowicz et al. (2017) Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, OpenAI Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay. In Advances in Neural Information Processing Systems, pp. 5048–5058, 2017.
- Atkeson & Schaal (1997) Christopher G Atkeson and Stefan Schaal. Robot learning from demonstration. In ICML, volume 97, pp. 12–20. Citeseer, 1997.
- Aytar et al. (2018) Yusuf Aytar, Tobias Pfaff, David Budden, Tom Le Paine, Ziyu Wang, and Nando de Freitas. Playing hard exploration games by watching youtube. arXiv preprint arXiv:1805.11592, 2018.
- Ba et al. (2016) Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
- Barth-Maron et al. (2018) Gabriel Barth-Maron, Matthew W Hoffman, David Budden, Will Dabney, Dan Horgan, Alistair Muldal, Nicolas Heess, and Timothy Lillicrap. Distributed distributional deterministic policy gradients. arXiv preprint arXiv:1804.08617, 2018.
- Bellemare et al. (2017) Marc G Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning. arXiv preprint arXiv:1707.06887, 2017.
- Cabi et al. (2017) Serkan Cabi, Sergio Gómez Colmenarejo, Matthew W Hoffman, Misha Denil, Ziyu Wang, and Nando De Freitas. The intentional unintentional agent: Learning to solve many continuous control tasks simultaneously. arXiv preprint arXiv:1707.03300, 2017.
- Clevert et al. (2015) Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289, 2015.
- Duan et al. (2017) Yan Duan, Marcin Andrychowicz, Bradly Stadie, OpenAI Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, and Wojciech Zaremba. One-shot imitation learning. In Advances in neural information processing systems, pp. 1087–1098, 2017.
- Espeholt et al. (2018) Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymir Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, et al. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. arXiv preprint arXiv:1802.01561, 2018.
- Finn et al. (2016) Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In International Conference on Machine Learning, pp. 49–58, 2016.
- Finn et al. (2017) Chelsea Finn, Tianhe Yu, Tianhao Zhang, Pieter Abbeel, and Sergey Levine. One-shot visual imitation learning via meta-learning. arXiv preprint arXiv:1709.04905, 2017.
- Ganin et al. (2018) Yaroslav Ganin, Tejas Kulkarni, Igor Babuschkin, SM Eslami, and Oriol Vinyals. Synthesizing programs for images using reinforced adversarial learning. arXiv preprint arXiv:1804.01118, 2018.
- He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
- Hester et al. (2017) Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, et al. Deep q-learning from demonstrations. arXiv preprint arXiv:1704.03732, 2017.
- Ho & Ermon (2016) Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems, pp. 4565–4573, 2016.
- Horgan et al. (2018) Dan Horgan, John Quan, David Budden, Gabriel Barth-Maron, Matteo Hessel, Hado Van Hasselt, and David Silver. Distributed prioritized experience replay. arXiv preprint arXiv:1803.00933, 2018.
- Horner & Whiten (2005) Victoria Horner and Andrew Whiten. Causal knowledge and imitation/emulation switching in chimpanzees (pan troglodytes) and children (homo sapiens). Animal cognition, 8(3):164–181, 2005.
- Hosu & Rebedea (2016) Ionel-Alexandru Hosu and Traian Rebedea. Playing atari games with deep reinforcement learning and human checkpoint replay. arXiv preprint arXiv:1607.05077, 2016.
- Ioffe (2017) Sergey Ioffe. Batch renormalization: Towards reducing minibatch dependence in batch-normalized models. In Advances in Neural Information Processing Systems, pp. 1945–1953, 2017.
- Ioffe & Szegedy (2015) Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
- Kalashnikov et al. (2018) Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke, et al. Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation. arXiv preprint arXiv:1806.10293, 2018.
- Kingma & Ba (2014) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
- Legare & Nielsen (2015) Cristine H Legare and Mark Nielsen. Imitation and innovation: The dual engines of cultural learning. Trends in cognitive sciences, 19(11):688–699, 2015.
- Li et al. (2017a) Yunzhu Li, Jiaming Song, and Stefano Ermon. Inferring the latent structure of human decision-making from raw visual inputs. arXiv preprint arXiv:1703.08840, 2017a.
- Li et al. (2017b) Yunzhu Li, Jiaming Song, and Stefano Ermon. Infogail: Interpretable imitation learning from visual demonstrations. In Advances in Neural Information Processing Systems, pp. 3812–3822, 2017b.
- Liu et al. (2017) YuXuan Liu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine. Imitation from observation: Learning to imitate behaviors from raw video via context translation. arXiv preprint arXiv:1707.03374, 2017.
- McGuigan et al. (2011) Nicola McGuigan, Jenny Makinson, and Andrew Whiten. From over-imitation to super-copying: Adults imitate causally irrelevant aspects of tool use with higher fidelity than young children. British Journal of Psychology, 102(1):1–18, 2011.
- Merel et al. (2017) Josh Merel, Yuval Tassa, Sriram Srinivasan, Jay Lemmon, Ziyu Wang, Greg Wayne, and Nicolas Heess. Learning human behaviors from motion capture by adversarial imitation. arXiv preprint arXiv:1707.02201, 2017.
- Mnih et al. (2013) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
- Nair et al. (2017a) Ashvin Nair, Dian Chen, Pulkit Agrawal, Phillip Isola, Pieter Abbeel, Jitendra Malik, and Sergey Levine. Combining self-supervised learning and imitation for vision-based rope manipulation. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pp. 2146–2153. IEEE, 2017a.
- Nair et al. (2017b) Ashvin Nair, Bob McGrew, Marcin Andrychowicz, Wojciech Zaremba, and Pieter Abbeel. Overcoming exploration in reinforcement learning with demonstrations. arXiv preprint arXiv:1709.10089, 2017b.
- Nair et al. (2018) Ashvin Nair, Vitchyr Pong, Murtaza Dalal, Shikhar Bahl, Steven Lin, and Sergey Levine. Visual reinforcement learning with imagined goals. arXiv preprint arXiv:1807.04742, 2018.
- Ng et al. (2000) Andrew Y Ng, Stuart J Russell, et al. Algorithms for inverse reinforcement learning. In ICML, pp. 663–670, 2000.
- OpenAI et al. (2018) OpenAI, M. Andrychowicz, B. Baker, M. Chociej, R. Jozefowicz, B. McGrew, J. Pachocki, A. Petron, M. Plappert, G. Powell, A. Ray, J. Schneider, S. Sidor, J. Tobin, P. Welinder, L. Weng, and W. Zaremba. Learning Dexterous In-Hand Manipulation. ArXiv e-prints, August 2018.
- Pathak et al. (2018) Deepak Pathak, Parsa Mahmoudieh, Guanghao Luo, Pulkit Agrawal, Dian Chen, Yide Shentu, Evan Shelhamer, Jitendra Malik, Alexei A Efros, and Trevor Darrell. Zero-shot visual imitation. arXiv preprint arXiv:1804.08606, 2018.
- Peng et al. (2018) Xue Bin Peng, Pieter Abbeel, Sergey Levine, and Michiel van de Panne. Deepmimic: Example-guided deep reinforcement learning of physics-based character skills. arXiv preprint arXiv:1804.02717, 2018.
- Pomerleau (1989) Dean A Pomerleau. Alvinn: An autonomous land vehicle in a neural network. In Advances in neural information processing systems, pp. 305–313, 1989.
- Popov et al. (2017) Ivaylo Popov, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, and Martin Riedmiller. Data-efficient deep reinforcement learning for dexterous manipulation. arXiv preprint arXiv:1704.03073, 2017.
- Rahmatizadeh et al. (2017) Rouhollah Rahmatizadeh, Pooya Abolghasemi, Ladislau Bölöni, and Sergey Levine. Vision-based multi-task manipulation for inexpensive robots using end-to-end learning from demonstration. arXiv preprint arXiv:1707.02920, 2017.
- Rajeswaran et al. (2017a) Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, John Schulman, Emanuel Todorov, and Sergey Levine. Learning complex dexterous manipulation with deep reinforcement learning and demonstrations. arXiv preprint arXiv:1709.10087, 2017a.
- Rajeswaran et al. (2017b) Aravind Rajeswaran, Kendall Lowrey, Emanuel V. Todorov, and Sham M Kakade. Towards generalization and simplicity in continuous control. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 6550–6561. 2017b.
- Resnick et al. (2018) Cinjon Resnick, Roberta Raileanu, Sanyam Kapoor, Alex Peysakhovich, Kyunghyun Cho, and Joan Bruna. Backplay: "Man muss immer umkehren". arXiv preprint arXiv:1807.06919, 2018.
- Riedmiller et al. (2018) Martin Riedmiller, Roland Hafner, Thomas Lampe, Michael Neunert, Jonas Degrave, Tom Van de Wiele, Volodymyr Mnih, Nicolas Heess, and Jost Tobias Springenberg. Learning by playing-solving sparse reward tasks from scratch. arXiv preprint arXiv:1802.10567, 2018.
- Sermanet et al. (2016) Pierre Sermanet, Kelvin Xu, and Sergey Levine. Unsupervised perceptual rewards for imitation learning. arXiv preprint arXiv:1612.06699, 2016.
- Sermanet et al. (2018) Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, and Sergey Levine. Time-contrastive networks: Self-supervised learning from video. 2018.
- Silver et al. (2014) David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In ICML, 2014.
- Silver et al. (2016) David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484, 2016.
- Sutton et al. (2011) Richard S Sutton, Joseph Modayil, Michael Delp, Thomas Degris, Patrick M Pilarski, Adam White, and Doina Precup. Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 2, pp. 761–768. International Foundation for Autonomous Agents and Multiagent Systems, 2011.
- Torabi et al. (2018) Faraz Torabi, Garrett Warnell, and Peter Stone. Behavioral cloning from observation. arXiv preprint arXiv:1805.01954, 2018.
- Ulyanov et al. (2016) Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016.
- Večerík et al. (2017) Matej Večerík, Todd Hester, Jonathan Scholz, Fumin Wang, Olivier Pietquin, Bilal Piot, Nicolas Heess, Thomas Rothörl, Thomas Lampe, and Martin Riedmiller. Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards. arXiv preprint arXiv:1707.08817, 2017.
- Wang et al. (2017) Ziyu Wang, Josh S Merel, Scott E Reed, Nando de Freitas, Gregory Wayne, and Nicolas Heess. Robust imitation of diverse behaviors. In Advances in Neural Information Processing Systems, pp. 5326–5335, 2017.
- Zhu et al. (2018) Yuke Zhu, Ziyu Wang, Josh Merel, Andrei Rusu, Tom Erez, Serkan Cabi, Saran Tunyasuvunakool, János Kramár, Raia Hadsell, Nando de Freitas, et al. Reinforcement and imitation learning for diverse visuomotor skills. arXiv preprint arXiv:1802.09564, 2018.
- Ziebart et al. (2008) Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, and Anind K Dey. Maximum entropy inverse reinforcement learning. In AAAI, volume 8, pp. 1433–1438. Chicago, IL, USA, 2008.
Appendix A Additional Algorithm Details
Appendix B Hyperparameters
| Hyperparameter | Value | Tuning |
| --- | --- | --- |
| Image width | 128 | Did not tune |
| Image height | 128 | Did not tune |
| | 101 | Did not tune |
| N-step | 5 | Did not tune |
| Actor learning rate | loguniform([5e-5, 2e-4]) | – |
| Critic learning rate | loguniform([5e-5, 2e-4]) | – |
| Optimizer | Adam (Kingma & Ba, 2014) | Did not tune |
| Batch size | 64 | Did not tune |
| Target update period | 100 | Did not tune |
| Discount factor (γ) | 0.99 | Did not tune |
| Replay capacity | 1e6 | Did not tune |
| Early termination cutoff | 0.5 | Important |
| Pixel reward coefficient | 15 | – |
| Joint reward coefficient | 2 | – |
| Demo as a curriculum | – | See experiments |
| Replay sampling | Uniform | Did not tune |
| Replay removal | First-in-first-out | Did not tune |
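The replay settings above (uniform sampling, first-in-first-out removal, capacity 1e6) can be sketched as a minimal buffer. This is our own illustration, not code from the paper; the class name is hypothetical:

```python
import random
from collections import deque

class UniformFIFOReplay:
    """Minimal replay memory: uniform sampling, FIFO removal at capacity."""

    def __init__(self, capacity=1_000_000):
        # A deque with maxlen silently evicts the oldest item when full,
        # giving first-in-first-out removal for free.
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size=64):
        # Uniform sampling without replacement within the batch.
        return random.sample(list(self.buffer), batch_size)
```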
Appendix C Training Details
We use D4PG (Barth-Maron et al., 2018) as our main training algorithm. D4PG is a distributed off-policy reinforcement learning algorithm for continuous control: it uses Q-learning for policy evaluation and Deterministic Policy Gradients (DPG) (Silver et al., 2014) for policy optimization. An important characteristic of D4PG is that it maintains a replay memory (possibly prioritized (Horgan et al., 2018)) that stores SARS tuples, which allows for off-policy learning. D4PG also adopts target networks for increased training stability. On top of these principles, D4PG uses distributed training, distributional value functions, and multi-step returns to further improve efficiency and stability. In this section, we explain the different ingredients of D4PG.
D4PG maintains an online value network $Q(s, a; \theta)$ and an online policy network $\pi(s; \phi)$. The target networks share the same structure as the online value and policy networks, but are parameterized by separate parameters $\theta'$ and $\phi'$, which are periodically updated to the current parameters of the online networks.
Given the $Q$ function, we can update the policy $\pi$ using DPG:
$$\nabla_\phi J(\phi) = \mathbb{E}_{s} \left[ \nabla_\phi \pi(s; \phi) \, \nabla_a Q(s, a; \theta) \big|_{a = \pi(s; \phi)} \right].$$
Instead of using a scalar $Q$ function, D4PG adopts a distributional value function such that $Q(s, a; \theta) = \mathbb{E}\left[Z(s, a; \theta)\right]$, where $Z$ is a random variable such that $Z = z_i$ w.p. $p_i(s, a; \theta)$. The $z_i$'s take on $K$ discrete values that range uniformly between $V_{\min}$ and $V_{\max}$ such that $z_i = V_{\min} + \frac{i \, (V_{\max} - V_{\min})}{K - 1}$ for $i \in \{0, \dots, K-1\}$.
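The uniform support and the induced scalar value can be sketched as follows. The function names are ours, and the arguments stand in for the actual $V_{\min}$, $V_{\max}$, and $K$ hyperparameters:

```python
import numpy as np

def categorical_support(v_min, v_max, num_atoms):
    """Atoms z_i = v_min + i * (v_max - v_min) / (num_atoms - 1),
    spaced uniformly between v_min and v_max."""
    return np.linspace(v_min, v_max, num_atoms)

def expected_value(probs, support):
    """Q(s, a) is the expectation of the categorical variable Z."""
    return float(np.dot(probs, support))
```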
To construct a bootstrap target, D4PG uses N-step returns. Given a sampled tuple from the replay memory, $(s_t, a_t, \{r_t, \dots, r_{t+N-1}\}, s_{t+N})$, we construct a new random variable $Z'$ such that $Z' = \sum_{n=0}^{N-1} \gamma^n r_{t+n} + \gamma^N z_i$ w.p. $p_i$, where the $z_i$ and $p_i$ are given by the target networks evaluated at $(s_{t+N}, \pi(s_{t+N}; \phi'))$. Notice that $Z'$ no longer has the same support as $Z$. We therefore adopt the projection $\Phi$ employed by Bellemare et al. (2017). The training loss for the value function is then
$$\mathcal{L}(\theta) = H\left(\Phi(Z'), \, Z(s_t, a_t; \theta)\right),$$
where $H$ is the cross entropy.
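A minimal NumPy sketch of the projection and the cross-entropy loss, with our own function names (the projection itself follows Bellemare et al., 2017):

```python
import numpy as np

def project_distribution(target_support, target_probs, support):
    """Project a categorical distribution with arbitrary atoms onto the
    fixed support, splitting each atom's mass between its two nearest
    fixed atoms (the projection of Bellemare et al., 2017)."""
    v_min, v_max = support[0], support[-1]
    delta_z = support[1] - support[0]
    projected = np.zeros(len(support))
    for z, p in zip(target_support, target_probs):
        z = min(max(z, v_min), v_max)      # clip atoms outside the support
        b = (z - v_min) / delta_z          # fractional index into `support`
        lo, hi = int(np.floor(b)), int(np.ceil(b))
        if lo == hi:                       # lands exactly on a fixed atom
            projected[lo] += p
        else:
            projected[lo] += p * (hi - b)  # share mass with both neighbours
            projected[hi] += p * (b - lo)
    return projected

def cross_entropy(target_probs, pred_probs, eps=1e-12):
    """H(target, prediction): the training loss for the value function."""
    return -float(np.sum(target_probs * np.log(pred_probs + eps)))
```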
Last but not least, D4PG is distributed following Horgan et al. (2018). Since all learning only relies on the replay memory, we can easily decouple the 'actors' from the 'learner'. D4PG therefore uses a large number of independent actor processes which act in the environment and write data to a central replay memory process. The learner then draws samples from the replay memory for learning. The learner also serves as a parameter server for the actors, which periodically update their policy parameters from the learner. For more details see Algorithms 1-4.
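The N-step shift of the target atoms (before projection) can be sketched as below; the function name is a hypothetical helper of ours:

```python
import numpy as np

def n_step_target_support(rewards, gamma, target_support):
    """Shift each target atom by the discounted N-step return:
    z'_i = sum_{n=0}^{N-1} gamma^n * r_{t+n} + gamma^N * z_i."""
    n = len(rewards)
    ret = sum(gamma ** k * r for k, r in enumerate(rewards))
    return ret + gamma ** n * np.asarray(target_support)
```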
C.2 Further Details
When using a demonstration curriculum, we randomly sample the initial state from the first 300 steps of a random demonstration.
For the Jaco arm experiments, we compute the vector from the hand to the target block in both the environment and the demonstration, and measure the L2 distance between these two vectors. The episode terminates early if this distance exceeds a threshold (0.01).
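This termination check can be sketched as follows; the helper name and argument layout are our own, not from the paper:

```python
import numpy as np

def should_terminate(env_hand, env_block, demo_hand, demo_block, threshold=0.01):
    """Terminate the episode when the agent's hand-to-block vector drifts
    too far (in L2 distance) from the demonstration's at the same step."""
    env_vec = np.asarray(env_block) - np.asarray(env_hand)
    demo_vec = np.asarray(demo_block) - np.asarray(demo_hand)
    return bool(np.linalg.norm(env_vec - demo_vec) > threshold)
```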