Imitation Learning by Reinforcement Learning

08/10/2021
by Kamil Ciosek, et al.

Imitation Learning algorithms learn a policy from demonstrations of expert behavior. Somewhat counterintuitively, we show that, for deterministic experts, imitation learning can be done by reduction to reinforcement learning, which is commonly considered more difficult. We conduct experiments which confirm that our reduction works well in practice for a continuous control task.


1 Introduction

Typically, Reinforcement Learning (RL) assumes access to a pre-specified reward and then learns a policy maximizing the expected average of this reward along a trajectory. However, specifying rewards is difficult for many practical tasks (Atkeson and Schaal, 1997; Zhang et al., 2018; Ibarz et al., 2018). In such cases, it is convenient to instead perform Imitation Learning (IL), learning a policy from expert demonstrations.

There are two major categories of Imitation Learning algorithms: Behavioral Cloning and Inverse Reinforcement Learning. Behavioral Cloning learns the policy by supervised learning on expert data, but is not robust to training errors, failing in settings where expert data is limited (Ross and Bagnell, 2010). Inverse Reinforcement Learning (IRL) achieves improved performance on limited data by constructing reward signals and calling an RL oracle to maximize these rewards (Ng et al., 2000).

The most versatile IRL method is adversarial IL (Ho and Ermon, 2016; Li et al., 2017; Ghasemipour et al., 2020), which minimizes a divergence between the distribution of data produced by the agent and provided by the expert. Adversarial IL learns a representation and a policy simultaneously by using a non-stationary reward obtained from a discriminator network. However, training adversarial IL combines two components which are hard to stabilize: a discriminator network, akin to the one used in GANs, as well as a policy, typically learned with an RL algorithm with actor and critic networks. This complexity makes the training process both brittle and very costly.

There is a clear need for imitation learning algorithms that are simpler and easier to deploy. To address this need, Wang et al. (2019) proposed to reduce imitation learning to a single instance of a Reinforcement Learning problem, where the reward is defined to be one for state-action pairs from the expert trajectory and zero for other state-action pairs. A closely related, but not identical, algorithm has been proposed by Reddy et al. (2020) (we describe the differences in Section 5). However, while the empirical performance of these approaches has been good, they enjoy no performance guarantees at all, even in the asymptotic setting where expert data is infinite.

Contributions

We fill in the missing justification for this algorithm, providing the needed theoretical analysis. Specifically, in Sections 3 and 4, we show a total variation bound between the expert policy and the imitation policy, providing a high-probability performance guarantee for a finite dataset of expert data and linking the reduction to adversarial imitation learning algorithms. For stochastic experts, we describe how the reduction fails, completing the analysis. Moreover, in Section 6, we empirically evaluate the performance of the reduction as the amount of available expert data varies.

2 Preliminaries

Markov Decision Process

An average-reward Markov Decision Process (Puterman, 2014; Feinberg and Shwartz, 2012) is a tuple $(\mathcal{S}, \mathcal{A}, T, R, s_0)$, where $\mathcal{S}$ is a state space, $\mathcal{A}$ is the action space, $T \colon \mathcal{S} \times \mathcal{A} \to \Delta(\mathcal{S})$ is the transition model, $R \colon \mathcal{S} \times \mathcal{A} \to [0, 1]$ is a bounded reward function and $s_0$ is the initial state. Here, we write $\Delta(\mathcal{X})$ to denote probability distributions over a set $\mathcal{X}$. A stationary policy $\pi \colon \mathcal{S} \to \mathcal{A}$ maps environment states to actions. A policy $\pi$ induces a Markov chain over the states of the MDP. In the theoretical part of the paper, we treat MDPs with finite state and action spaces. Given a starting state $s_0$ of the MDP and a policy $\pi$, the limiting distribution over states is defined as

$$\rho^\pi_S = \lim_{N \to \infty} \frac{1}{N} \sum_{t=1}^{N} \mathbb{E}\left[ e_{s_t} \right], \qquad (1)$$

where $e_{s}$ denotes the indicator vector of state $s$. We adopt the convention that the subscript $S$ indicates a distribution over states and no subscript indicates a distribution over state-action pairs; we write $\rho^\pi$ for the limiting distribution over state-action pairs. While the limit in equation 1 is guaranteed to exist for all finite MDPs (Puterman, 2014), without requiring ergodicity, in this paper we consider policies that induce ergodic chains. Correspondingly, we drop the dependence on the initial state from the notation. The expected per-step reward of $\pi$ is defined as

$$\bar{R}_\pi = \lim_{N \to \infty} \frac{1}{N} \, \mathbb{E}\left[ J_N \right], \qquad (2)$$

where $J_N = \sum_{t=1}^{N} R(s_t, a_t)$ is the total return.
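As an illustration of equations 1 and 2, the following minimal sketch estimates the limiting state distribution and the expected per-step reward of an ergodic policy by a long Monte Carlo rollout. The step, policy and reward callables, integer-indexed states and the rollout horizon are assumptions made for this sketch, not part of the paper.

import numpy as np

def estimate_limiting_quantities(step, policy, reward, s0, n_states, horizon=100_000):
    # Monte Carlo estimates of the limiting state distribution (equation 1) and the
    # expected per-step reward (equation 2) for an ergodic policy. Assumes
    # step(s, a) samples the next state, policy(s) returns an action,
    # reward(s, a) returns R(s, a), and states are indexed 0..n_states-1.
    visits = np.zeros(n_states)   # running sum of the indicator vectors e_{s_t}
    total_return = 0.0            # running total return J_N
    s = s0
    for _ in range(horizon):
        a = policy(s)
        visits[s] += 1.0
        total_return += reward(s, a)
        s = step(s, a)
    # empirical estimates of the state distribution and the per-step reward
    return visits / horizon, total_return / horizon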

Expert Dataset

Assume that the expert follows an ergodic policy $\pi_E$. Denote the corresponding distribution over expert state-action pairs as $\rho^E$, where we dropped the dependency on the initial state by ergodicity. Consider a finite trajectory $(s_1, a_1, \dots, s_n, a_n)$. Denote by $\hat{\rho}^E$ the histogram (empirical distribution) of state-action pairs seen along this trajectory and denote by $\hat{\rho}^E_S$ the corresponding histogram over the states. Denote by $\hat{\mathcal{D}} = \operatorname{supp}(\hat{\rho}^E)$ the support of $\hat{\rho}^E$, i.e. the set of state-action tuples visited by the expert.
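A minimal sketch of how these quantities can be computed from a recorded expert trajectory in a finite MDP; the function name and the trajectory format (a list of hashable (state, action) tuples) are assumptions made for illustration.

from collections import Counter

def expert_histogram_and_support(trajectory):
    # trajectory: list of (state, action) tuples visited by the expert.
    # Returns the empirical state-action distribution (the histogram) and its
    # support, i.e. the set of state-action tuples visited by the expert.
    counts = Counter(trajectory)
    n = len(trajectory)
    histogram = {sa: c / n for sa, c in counts.items()}   # empirical distribution
    support = set(counts)                                  # visited (s, a) pairs
    return histogram, support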

Imitation Learning

Learning to imitate means obtaining a policy that mimics expert behavior. This can be formalized in two ways. First, we can seek an imitation policy $\pi_I$ which obtains an expected per-step reward that is $\epsilon$-close to the expert's on any bounded reward signal. Formally, we want to satisfy

$$\left| \bar{R}_{\pi_E} - \bar{R}_{\pi_I} \right| \le \epsilon, \qquad (3)$$

where $\epsilon$ is a small constant and we denote the limiting distribution of state-action tuples generated by the imitation learner with $\rho^I$. Second, we can seek to ensure that the distributions of state-action pairs generated by the expert and the imitation learner are similar (Ho and Ermon, 2016; Finn et al., 2016). In particular, if we measure the divergence between distributions with total variation, we want to ensure

$$d_{TV}\!\left( \rho^I, \rho^E \right) \le \epsilon. \qquad (4)$$

We recall in Section 4.1 that equation 3 and equation 4 are in fact closely related. We provide further background on adversarial imitation learning and other divergences in Section 5.

3 Imitation Learning by Reinforcement Learning

Algorithm

To perform imitation learning, we first obtain an expert dataset and construct an intrinsic reward

$$R_{\text{int}}(s, a) = \begin{cases} 1 & \text{if } (s, a) \in \hat{\mathcal{D}}, \\ 0 & \text{otherwise}. \end{cases} \qquad (5)$$

We can then use any RL algorithm to solve for this reward. We demonstrate in Section 6 that the algorithm can be effectively deployed in practice. We note that for finite MDPs equation 5 exactly matches the algorithm proposed by Wang et al. (2019) as a heuristic.
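The reduction requires very little machinery. Below is a hedged sketch for finite MDPs: an environment wrapper that replaces the extrinsic reward with the intrinsic reward of equation 5, so that any off-the-shelf RL algorithm can be run on it unchanged. The gym-style reset/step interface and the class name are assumptions for this sketch, not the paper's implementation.

class IntrinsicRewardWrapper:
    # Wraps a finite-MDP environment so that the reward is 1 for state-action
    # pairs in the expert support and 0 otherwise (equation 5). The wrapped
    # environment can then be handed to any RL oracle.

    def __init__(self, env, expert_support):
        self.env = env                        # assumed to expose reset() / step(a)
        self.expert_support = expert_support  # set of (state, action) pairs
        self._state = None

    def reset(self):
        self._state = self.env.reset()
        return self._state

    def step(self, action):
        intrinsic_reward = 1.0 if (self._state, action) in self.expert_support else 0.0
        next_state, _, done, info = self.env.step(action)  # extrinsic reward discarded
        self._state = next_state
        return next_state, intrinsic_reward, done, info

# Usage sketch: imitation_policy = rl_oracle(IntrinsicRewardWrapper(env, support)),
# where rl_oracle is any RL algorithm and support comes from the expert histogram above.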

Guarantee on Imitation Policy

The main focus of the paper lies in showing theoretical properties of the imitation policy obtained when solving for the reward signal in equation 5. Our formal guarantees hold under three assumptions.

Assumption 1.

The expert policy is deterministic.

Assumption 2.

The expert policy induces an irreducible and aperiodic Markov chain.

Assumption 3.

The imitation learner policy induces an irreducible and aperiodic Markov chain.

Assumption 1 is critical for our proof to go through and cannot be relaxed. It is also essential to rationalize the approach of Wang et al. (2019). In fact, no divergence-minimizing reduction to RL can exist for stochastic experts since for any MDP there is always an optimal policy which is deterministic. Assumptions 2 and 3 could in principle be relaxed, to allow for periodic chains, but it would complicate the reasoning, which we wanted to avoid. We note that we only interact with the expert policy via the finite dataset of samples.

Our main contribution is Proposition 1, in which we quantify the performance of our imitation learner. We state it below and prove it in Section 4.

Proposition 1.

Consider an imitation learner trained on a dataset of size , which attains the limiting distribution of state-action pairs . Under Assumptions 1, 2 and 3, given we have expert demonstrations, with probability at least , the imitation learner attains total variation distance from the expert of at most

(6)

The constants , and reflect the mixing time of the expert policy and are defined in Section 4.2. Moreover, with the same probability, for any extrinsic reward function, the imitation learner achieves expected per-step extrinsic reward of at least

(7)

Proposition 1 shows that the policy learned by the imitation learner satisfies two important desiderata: we can guarantee both closeness of the generated state-action distribution to the expert distribution and recovery of the expert reward. Moreover, the total variation bound equation 6 links the obtained policy to adversarial imitation learning algorithms. We describe this link in more detail in Section 5.

4 Proof

Structure of Results

In Sections 4.1 and 4.2, we recall in our notation standard results about mixing in Markov chains and about the total variation distance between probability distributions. We then give our proof, which has two main parts. In the first part, in Section 4.3, we show that using the intrinsic rewards as in equation 5, it is possible to achieve a high expected per-step intrinsic reward. In the second part (Section 4.4), we show that, for any bounded extrinsic reward, achieving a large expected per-step intrinsic reward guarantees a large expected per-step extrinsic reward. In Section 4.5, we combine these results and prove Proposition 1.

4.1 Total Variation

Consider probability distributions $p, q$ defined on a discrete set $\mathcal{X}$. The total variation distance between the distributions is defined as

$$d_{TV}(p, q) = \max_{A \subseteq \mathcal{X}} \left| p(A) - q(A) \right| = \tfrac{1}{2} \left\| p - q \right\|_1. \qquad (8)$$

In our application, the set $\mathcal{X}$ is either the set of MDP states $\mathcal{S}$ or the set of state-action pairs $\mathcal{S} \times \mathcal{A}$. Below, we restate in our notation a standard result about the total variation distance.

Lemma 1.

If for any vector it holds that , then we have .

Proof.

By setting , it follows that for all . Therefore for all . For any on the right-hand side of equation 8, we can instantiate , completing the proof. ∎

Lemma 2.

If , then we have for any vector .

Proof.

We have

Here, the first inequality follows because elements of are in the interval . The statement of the lemma follows from the property . ∎

Lemmas 1 and 2 are important because they connect the two desiderata, equation 3 and equation 4, that one might pose for imitation learners. Specifically, attaining an expected per-step reward close to the expert's is the same (up to a multiplicative constant) as closeness in total variation between the state-action distributions of the expert and the imitation learner.
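This connection can also be checked numerically. The snippet below draws two synthetic state-action distributions and verifies that, for any reward bounded in [0, 1], the gap in expected per-step reward never exceeds a constant multiple of the total variation distance of equation 8. The distributions and rewards are synthetic, and the constant 2 is a deliberately loose choice for this sketch.

import numpy as np

rng = np.random.default_rng(0)
n = 20                                   # number of state-action pairs
p = rng.dirichlet(np.ones(n))            # "expert" state-action distribution
q = rng.dirichlet(np.ones(n))            # "imitator" state-action distribution
tv = 0.5 * np.abs(p - q).sum()           # total variation distance (equation 8)

for _ in range(1000):
    r = rng.uniform(0.0, 1.0, size=n)    # an arbitrary reward bounded in [0, 1]
    gap = abs(p @ r - q @ r)             # difference in expected per-step reward
    assert gap <= 2 * tv + 1e-12         # gap is bounded by a constant times TV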

4.2 Mixing in Markov Chains

By standard properties of Markov chains, Assumption 2 implies exponential mixing, formalized in the Lemma below.

Lemma 3.

Denote with the Markov chain as induced by the expert policy. There are constants and such that . Moreover, defining , we have . The mixing time of the chain , defined as the smallest so that is bounded as .

Proof.

For the statement about distributions over states, see Theorem 4.9 in the book by Levin et al. (2017). For the statement about distributions over state-action pairs, we use the fact that the expert policy is deterministic so that . ∎

In the following Lemma, we recall McDiarmid’s inequality for Markov chains.

Lemma 4.

Consider a Markov Chain . For any coefficients and a function so that for all , we have

where is the mixing time of the chain.

Proof.

Instantiate Corollary 2.11 of Paulin et al. (2015), where the constant 4.5 appears by plugging in equation (2.9) of Paulin et al. (2015). ∎

We now show a Lemma that quantifies how fast the histogram approaches the stationary distribution.

Lemma 5.

The total variation distance between the expert distribution and the expert histogram based on samples can be bounded as

(9)

with probability at least (where the probability space is defined by the process generating the expert histogram).

Proof.

We first prove the statement for histograms over the states. Recall the notation . Recall that we denote the stationary distribution (over states) of the Markov chain induced by the expert with .

Define . For two state sequences and we have . Using this property, instantiate Lemma 4 with for each state separately with as above. Using the fact that , this gives us , where we introduced the notation . Applying the union bound over all states, we have . This implies . Introducing , this is equivalent to

(10)

We now quantify the distance between and the stationary distribution.

(11)

Here, the last inequality follows from Lemma 3 and the identity . We combine equation 10 and equation 11 using the triangle inequality. Using the property again and setting , we obtain

(12)

with probability at least . Since the expert policy is deterministic, the total variation distance between the distributions of state-action tuples and states is the same, i.e. . ∎

4.3 High Expected Intrinsic Per-Step Reward is Achievable

We now want to show that it is possible for the imitation learner to attain a large intrinsic reward. Informally, the proof asks what intrinsic reward the expert would have achieved. We then conclude that a learner specifically optimizing the intrinsic reward will obtain at least as much.

Lemma 6.

Generating an expert histogram with points, with probability at least , we obtain a dataset such that a policy maximizing the intrinsic reward satisfies , where we used the shorthand notation to denote the state-action distribution of .

Proof.

We invoke Lemma 5 obtaining with probability as in the statement of the lemma. First, we prove

(13)

by setting , and in equation 8.

Combining

(14)

and equation 13, we obtain

(15)

which means that the expert policy achieves expected per-step intrinsic reward of at least . This lower bounds the expected per-step reward obtained by the optimal policy. ∎

4.4 Maximizing Intrinsic Reward Leads to High Per-Step Extrinsic Reward

We now aim to prove that the intrinsic and extrinsic rewards are connected, i.e. that maximizing the intrinsic reward leads to a large expected per-step extrinsic reward. The proofs in this section are based on the insight that, by construction of the intrinsic reward in equation 5, attaining an intrinsic reward of one in a given state implies agreement with the expert. In the following Lemma, we quantify the outcome of maintaining such agreement for consecutive steps.

Lemma 7.

Assume an agent traverses a sequence of state-action pairs, in a way consistent with the expert policy. Denote the expert’s expected extrinsic per-step reward with . Denote by the expected state-action occupancy at time , starting in state distribution . The per-step extrinsic reward of the agent in expectation over realizations of the sequence satisfies

(16)
Proof.

Invoking Lemma 3, we have that . Invoking Lemma 2, we have that

(17)

for any timestep . In other words, the expected per-step reward obtained in step of the sequence is at least . The per-step reward in the sequence is at least

In Lemma 7, we have shown that, on average, agreeing with the expert for a number of steps guarantees a certain level of extrinsic reward. We will now use this result to guarantee extrinsic reward obtained over a long trajectory.

Lemma 8.

For any extrinsic reward signal bounded in , an imitation learner which attains expected per-step intrinsic reward of also attains extrinsic per-step reward of at least

with probability one, where is the expected per-step extrinsic reward of the expert.

Proof.

Consider the imitation learner’s trajectory of length . We will now consider sub-sequences where the agent agrees with the expert. Denote by the number of such sequences of length . Denote by the per-step extrinsic reward obtained by the agent in the th sequence of length .

Assuming the worst case reward on states which are not in any of the sequences (i.e. where the agent disagrees with the expert), the total extrinsic return along the trajectory is at least . Dividing by the sequence length, the per-step extrinsic reward is at least

(18)

Denote by the difference between the average extrinsic reward obtained in sequences of length and its expected value, where we denoted by the initial state distribution among sequences of length . We can now re-write equation 18 as:

(19)

Using Lemma 7, we can re-write this further

(20)
(21)

Denote by the number of timesteps the imitation learner disagrees with the expert. Observe that we have since bad timesteps can partition the trajectory into at most sub-sequences and is the total number of sub-sequences. This gives

(22)

Now, using the fact that the imitation learner policy is ergodic, we can take limits as . The left hand side converges to with probability one. On the right-hand side, (the fraction of time the imitation learner agrees with the expert) converges to with probability one. Using ergodicity again, converges to with probability one.

It remains to prove that the error term converges to zero with probability one. First, we will show that, for every , either is always zero, i.e. the streak length does not occur in sub-sequences of any length, or as . Indeed, the chain is ergodic, which means that, if we traversed a streak of length , we have non-zero probability of returning to where the streak began and then retracing it. Let us denote the set of all s where by . Moreover, let us denote the set of s with as with . We have:

(23)

The second term in the sum equals zero with because of the definition of . The term

converges to zero with probability one by the law of large numbers, where we use the fact that

as for . ∎

4.5 Proof of Proposition 1

We now show how Lemma 6 and Lemma 8 can be combined to obtain Proposition 1.

Proof.

We use the assumption that

(24)

We will instantiate Lemma 6 with

(25)

Equations 24 and 25 imply that . We can rewrite this as , which implies . This implies that

(26)

Moreover, using equation 24 again, we have that . This is equivalent to . Using equation 25, we obtain , equivalent to , which implies . Rewriting this gives

(27)

where we use the notation .

We first show the statement about recovering expected per-step expert reward equation 7. Invoking Lemma 6 and using equation 26, we have that it is possible for the imitation learner to achieve per-step expected intrinsic reward of at least with probability . Invoking Lemma 8, this implies achieving extrinsic reward of at least

This implies

where the last inequality follows from equation 27. Since this statement holds for any extrinsic reward signal , we obtain the total variation bound by invoking Lemma 1:

5 Related Work

Behavioral Cloning

The simplest way of performing Imitation Learning is to learn a policy by fitting a Maximum Likelihood model on the expert dataset. While such ‘behavioral cloning’ is easy to implement, it does not take into account the sequential nature of the Markov decision process, leading to catastrophic compounding of errors (Ross and Bagnell, 2010). In practice, the performance of policies obtained with behavioral cloning is highly dependent on the amount of available data. In contrast, our reduction avoids the problem faced by behavioral cloning and provably works with limited expert data.
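For concreteness, a behavioral-cloning baseline in its simplest form is sketched below: a maximum-likelihood classifier from states to expert actions. The use of scikit-learn's LogisticRegression for discrete actions is an illustrative assumption, not the implementation used in the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression

def behavioral_cloning(expert_states, expert_actions):
    # Fit a maximum-likelihood policy on expert data (behavioral cloning).
    # expert_states: array of shape (n, state_dim); expert_actions: array of
    # shape (n,) with discrete action labels. Returns a state -> action function.
    model = LogisticRegression(max_iter=1000).fit(expert_states, expert_actions)
    return lambda state: model.predict(np.asarray(state).reshape(1, -1))[0]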

Apprenticeship Learning

Apprenticeship Learning (Abbeel and Ng, 2004) assumes the existence of a pre-learned linear representation for rewards and then proceeds in multiple iterations. In each iteration, the algorithm first computes an intrinsic reward signal and then calls an RL oracle to obtain a policy. Our algorithm also uses an RL oracle, but unlike Apprenticeship Learning, we only call it once.

Adversarial IL

Modern adversarial IL algorithms (Ho and Ermon, 2016; Finn et al., 2016; Li et al., 2017; Sun et al., 2019) remove the requirement to provide a representation by learning it online. They work by minimizing a divergence between the expert state-action distribution and the one generated by the RL agent. For example, the GAIL algorithm (Ho and Ermon, 2016) minimizes the Jensen-Shannon divergence. It can be related to the total variation objective of equation 4 in two ways. First, the TV and JS divergences obey , so that implies . This means that GAIL indirectly minimizes the total variation distance. Second, we have . This means that our algorithm, which minimizes Total Variation, also minimizes the GAIL objective. Both of these properties imply that, up to multiplicative constants, running our algorithm is equivalent to running GAIL. This has huge practical significance because our algorithm does not need to train a discriminator.
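The inequalities alluded to above (which did not survive extraction here) can be recovered, up to constants, from standard arguments; one derivation is sketched below, and the constants need not match those used in the paper. Writing $M = \tfrac{1}{2}(\rho^E + \rho^I)$, Pinsker's inequality applied to $(\rho^E, M)$ together with $d_{TV}(\rho^E, M) = \tfrac{1}{2}\, d_{TV}(\rho^E, \rho^I)$ gives the lower bound, while $\log x \le x - 1$ applied inside each KL term gives the upper bound:

$$D_{JS}(\rho^E \,\|\, \rho^I) = \tfrac{1}{2}\, D_{KL}(\rho^E \,\|\, M) + \tfrac{1}{2}\, D_{KL}(\rho^I \,\|\, M),$$

$$\tfrac{1}{4}\, d_{TV}(\rho^E, \rho^I)^2 \;\le\; D_{JS}(\rho^E \,\|\, \rho^I) \;\le\; 2\, d_{TV}(\rho^E, \rho^I).$$

The left inequality shows that a small JS (GAIL) objective forces a small total variation distance, and the right inequality shows that minimizing total variation also drives the JS objective down.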

Another adversarial algorithm, InfoGAIL (Li et al., 2017) minimizes the 1-Wasserstein divergence, which obeys for any reward function Lipschitz in the norm. This is similar to the property equation 3, implied by our Proposition 1, except we guarantee closeness of expected per-step reward for any bounded reward function as opposed to Lipschitz-continuous functions.

Random Expert Distillation

Wang et al. (2019) propose an algorithm which, for finite MDPs, is the same as ours. However, they do not justify the properties of the imitation policy formally. Our work can be thought of as complementary. While Wang et al. (2019) conducted an empirical evaluation using sophisticated support estimators, we fill in the missing theory. Moreover, while our empirical evaluation is smaller in scope, we attempt to be more complete, demonstrating convergence in cases where only partial trajectories are given.

The SQIL heuristic

SQIL (Reddy et al., 2020) is close to our work in that it proposes a similar algorithm. At any given time, SQIL performs off-policy Reinforcement Learning on a dataset sampled from the mixture distribution , where is the distribution of data under the expert. Since the rewards are one for the data sampled from and zero for state-action pairs sampled from , the expected reward obtained at a state-action pair is given by , which is non-stationary and varies between zero and one. The benefit of SQIL is that it does not require an identity oracle. However, while SQIL has demonstrated good empirical performance, it does not come with a theoretical guarantee of any kind, making it hard to deploy in settings where we need a theoretical certificate of policy quality.
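A schematic of the sampling scheme described above: expert transitions carry reward one, agent transitions carry reward zero, and the off-policy learner trains on the 50/50 mixture. The buffer format and function name are illustrative assumptions for this sketch, not SQIL's actual code.

import random

def sample_sqil_batch(expert_buffer, agent_buffer, batch_size=256):
    # Build a training batch from the mixture distribution: half expert
    # transitions with reward 1, half agent transitions with reward 0.
    # Each buffer holds (state, action, next_state, done) tuples; the batch
    # is then consumed by an off-policy RL update.
    half = batch_size // 2
    expert = [(s, a, 1.0, s2, d) for (s, a, s2, d) in random.sample(expert_buffer, half)]
    agent = [(s, a, 0.0, s2, d) for (s, a, s2, d) in random.sample(agent_buffer, half)]
    return expert + agent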

Expert Feedback Loop

IL algorithms such as SMILe (Ross and Bagnell, 2010) and DAGGER (Ross et al., 2011) assume the ability to query the expert for more data. While this makes it easier to reproduce expert behavior, the ability to execute queries is not always available in realistic scenarios. Our reduction is one-off, and does not need to execute expert queries to obtain more data.

6 Experiment

In this section, we empirically investigate the performance of our reduction on a continuous control task. Because Wang et al. (2019) have already conducted an extensive evaluation of various ways of estimating the support for the expert distribution, we do not attempt this here. Instead, we focus on one aspect: the amount of expert data needed to achieve good imitation quality.

Implementation

We use the Hopper continuous control environment from the PyBullet gym suite (Ellenberger, 2018) due to its permissive MIT license. The expert policy is obtained by training a SAC agent for 200 000 steps. Since our proof is designed for finite MDPs, we need to redefine the reward signal in order to apply our algorithm to problems with a continuous state-action space. To do this, we define , where is the distance. Running our experiment takes less than two hours on a laptop.
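The exact definition of the redefined reward did not survive extraction above. A plausible stand-in consistent with the description, an indicator of whether a state-action pair lies within a distance eps of some expert pair, is sketched below; the threshold eps, the Euclidean metric and the KD-tree are assumptions of this sketch rather than the paper's construction.

import numpy as np
from scipy.spatial import cKDTree

class NearestExpertReward:
    # Approximate support-based intrinsic reward for continuous state-action
    # spaces: reward 1 if (s, a) is within distance eps of some expert
    # state-action pair, and 0 otherwise.

    def __init__(self, expert_states, expert_actions, eps=0.5):
        data = np.concatenate([expert_states, expert_actions], axis=1)
        self.tree = cKDTree(data)   # fast nearest-neighbour queries
        self.eps = eps

    def __call__(self, state, action):
        query = np.concatenate([state, action])
        dist, _ = self.tree.query(query)  # distance to the closest expert pair
        return 1.0 if dist <= self.eps else 0.0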

Figure 1: Performance of imitation learner as a function of available data.

Performance as Function of Quantity of Expert Data

The plot in Figure 1 shows the performance of the imitation learner (ILR) as a function of how much expert data is available, compared to behavioral cloning (BC). The amount of data is measured in episodes, where fractional episodes mean that state-action tuples are taken from the beginning of the episode. Confidence bars represent 1.96 standard deviations. Our plot confirms that the reduction is significantly more data-efficient than behavioral cloning.

7 Societal Implications

Negative Effects

Since our most important contribution is a bound stating that the imitation learner closely mimics the expert, the ethical implications of actions taken by our algorithm crucially depend on the provided demonstrations. The most direct avenue for misuse is for malicious actors to intentionally demonstrate unethical behavior. Second, assuming the demonstrations are provided in good faith, there is a risk of excessive reliance on the provided bound. Specifically, it is still possible that our algorithm fails to recover expert behavior in situations where assumptions needed by our proof are not met, for example if the expert policy is stochastic. In certain settings, the bounds can also turn out to be very loose, for example when the expert policy takes a long time to mix. This means that one should not deploy our reduction in safety-critical environments without validating the assumptions first.

Positive Effects

The proposed reduction has the benefits of adversarial imitation learning, but without having to train the discriminator. This means that training is cheaper, making imitation learning more affordable and research on imitation learning more democratic.

8 Conclusions

We have shown that, for deterministic experts, Imitation Learning can be performed with a single invocation of an RL oracle. We have derived a bound guaranteeing the performance of the obtained policy and relating the reduction to adversarial imitation learning algorithms. Finally, we have provided an evaluation of the proposed reduction on a continuous control task.

References

  • P. Abbeel and A. Y. Ng (2004) Apprenticeship learning via inverse reinforcement learning. In Proceedings of the twenty-first international conference on Machine learning, pp. 1. Cited by: §5.
  • C. G. Atkeson and S. Schaal (1997) Robot learning from demonstration. In ICML, Vol. 97, pp. 12–20. Cited by: §1.
  • B. Ellenberger (2018) PyBullet gymperium. Note: https://github.com/benelot/pybullet-gym Cited by: §6.
  • E. A. Feinberg and A. Shwartz (2012) Handbook of markov decision processes: methods and applications. Vol. 40, Springer Science & Business Media. Cited by: §2.
  • C. Finn, S. Levine, and P. Abbeel (2016) Guided cost learning: deep inverse optimal control via policy optimization. In International Conference on Machine Learning, pp. 49–58. Cited by: §2, §5.
  • S. K. S. Ghasemipour, R. Zemel, and S. Gu (2020) A divergence minimization perspective on imitation learning methods. In Conference on Robot Learning, pp. 1259–1277. Cited by: §1.
  • J. Ho and S. Ermon (2016) Generative adversarial imitation learning. In Advances in neural information processing systems, pp. 4565–4573. Cited by: §1, §2, §5.
  • B. Ibarz, J. Leike, T. Pohlen, G. Irving, S. Legg, and D. Amodei (2018) Reward learning from human preferences and demonstrations in atari. In Advances in neural information processing systems, pp. 8011–8023. Cited by: §1.
  • D. A. Levin, Y. Peres, and E. Wilmer (2017) Markov chains and mixing times. Vol. 107, American Mathematical Society. Cited by: §4.2.
  • Y. Li, J. Song, and S. Ermon (2017) Infogail: interpretable imitation learning from visual demonstrations. In Advances in Neural Information Processing Systems, pp. 3812–3822. Cited by: §1, §5, §5.
  • A. Y. Ng, S. J. Russell, et al. (2000) Algorithms for inverse reinforcement learning. In ICML, Vol. 1, pp. 2. Cited by: §1.
  • D. Paulin et al. (2015) Concentration inequalities for Markov chains by Marton couplings and spectral methods. Electronic Journal of Probability 20. Cited by: §4.2.
  • M. L. Puterman (2014) Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons. Cited by: §2.
  • S. Reddy, A. D. Dragan, and S. Levine (2020) SQIL: imitation learning via reinforcement learning with sparse rewards. In International Conference on Learning Representations, External Links: Link Cited by: §1, §5.
  • S. Ross and D. Bagnell (2010) Efficient reductions for imitation learning. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 661–668. Cited by: §1, §5, §5.
  • S. Ross, G. Gordon, and D. Bagnell (2011) A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pp. 627–635. Cited by: §5.
  • W. Sun, A. Vemula, B. Boots, and D. Bagnell (2019) Provably efficient imitation learning from observation alone. In International Conference on Machine Learning, pp. 6036–6045. Cited by: §5.
  • R. Wang, C. Ciliberto, P. V. Amadori, and Y. Demiris (2019) Random expert distillation: imitation learning via expert policy support estimation. In International Conference on Machine Learning, pp. 6536–6544. Cited by: §1, §3, §3, §5, §6.
  • T. Zhang, Z. McCarthy, O. Jow, D. Lee, X. Chen, K. Goldberg, and P. Abbeel (2018) Deep imitation learning for complex manipulation tasks from virtual reality teleoperation. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1–8. Cited by: §1.