Learning from Trajectories via Subgoal Discovery

11/03/2019 ∙ by Sujoy Paul, et al. ∙ MERL and University of California, Riverside

Learning to solve complex goal-oriented tasks with sparse terminal-only rewards often requires an enormous number of samples. In such cases, using a set of expert trajectories could help to learn faster. However, Imitation Learning (IL) via supervised pre-training with these trajectories may not perform well and generally requires additional fine-tuning with an expert in the loop. In this paper, we propose an approach which uses the expert trajectories and learns to decompose the complex main task into smaller sub-goals. We learn a function which partitions the state-space into sub-goals, which can then be used to design an extrinsic reward function. We follow a strategy where the agent first learns from the trajectories using IL and then switches to Reinforcement Learning (RL) using the identified sub-goals, to alleviate the errors of the IL step. To deal with states which are under-represented by the trajectory set, we also learn a function to modulate the sub-goal predictions. We show that our method is able to solve complex goal-oriented tasks which other RL and IL methods, or their combinations in the literature, are not able to solve.


1 Introduction

Reinforcement Learning (RL) aims to take sequential actions, by interacting with an environment, so as to maximize a pre-specified reward function designed for solving a task. RL using Deep Neural Networks (DNNs) has shown tremendous success in several tasks such as playing games Mnih et al. (2015); Silver et al. (2016) and solving complex robotics tasks Levine et al. (2016); Duan et al. (2016). However, with sparse rewards, these algorithms often require a huge number of interactions with the environment, which is costly in real-world applications such as self-driving cars Bojarski et al. (2016) and manipulation using real robots Levine et al. (2016). Manually designed dense reward functions could mitigate such issues; however, it is in general difficult to design detailed reward functions for complex real-world tasks.

Imitation Learning (IL) using trajectories generated by an expert can potentially be used to learn policies faster Argall et al. (2009). However, the performance of IL algorithms Ross et al. (2011) depends not only on the performance of the expert providing the trajectories, but also on the state-space distribution represented by the trajectories, especially in the case of high-dimensional states. In order to avoid such dependencies on the expert, some methods in the literature Sun et al. (2017); Cheng et al. (2018) combine RL and IL. However, these methods assume access to the expert value function, which may be impractical in real-world scenarios.

In this paper, we follow a strategy which starts with IL and then switches to RL. In the IL step, our framework performs supervised pre-training, which aims at learning a policy that best describes the expert trajectories. However, due to the limited availability of expert trajectories, the policy trained with IL will have errors, which can then be alleviated using RL. Similar approaches are taken in Cheng et al. (2018) and Nagabandi et al. (2018), where the authors show that supervised pre-training does help to speed up learning. However, note that the reward function in RL is still sparse, making it difficult to learn. With this in mind, we pose the following question: can we make more efficient use of the expert trajectories, instead of using them only for supervised pre-training?

Given a set of trajectories, humans can quickly identify waypoints which need to be completed in order to achieve the goal. We tend to break down the entire complex task into sub-goals and try to achieve them in the best order possible. The prior knowledge of humans helps to achieve tasks much faster Andreas et al. (2017); Dubey et al. (2018) than using only the trajectories for learning. This divide-and-conquer strategy has been crucial in several applications, and it motivates our algorithm, which learns to partition the state-space into sub-goals using expert trajectories. The learned sub-goals provide a discrete reward signal, unlike value-based continuous rewards Ng et al. (1999); Sun et al. (2018), which can be erroneous, especially with a limited number of trajectories in long-horizon tasks. As the expert trajectory set may not contain all the states the agent may visit during exploration in the RL step, we augment the sub-goal predictor via one-class classification to deal with such under-represented states. We perform experiments on three goal-oriented tasks in MuJoCo Todorov (2014) with sparse terminal-only rewards, which state-of-the-art RL, IL, or their combinations are not able to solve.

2 Related Works

Our work is closely related to learning from demonstrations or expert trajectories, as well as to discovering sub-goals in complex tasks. We first discuss works on imitation learning using expert trajectories or reward-to-go. We then discuss methods which aim to discover sub-goals online, during the RL stage, from the agent's past experience.

Imitation Learning. Imitation Learning Schaal (1999); Silver et al. (2008); Chernova and Veloso (2009); Rajeswaran et al. (2017); Hester et al. (2018) uses a set of expert trajectories or demonstrations to guide the policy learning process. A naive approach is to train a policy on such trajectories in a supervised manner. However, such a policy produces errors which grow quadratically with the task horizon. This can be alleviated using Behavioral Cloning (BC) algorithms Ross et al. (2011); Ross and Bagnell (2014); Torabi et al. (2018), which query expert actions at states visited by the agent after the initial supervised learning phase. However, such queried actions may be costly or difficult to obtain in many applications. Trajectories are also used by Levine and Koltun (2013) to guide the policy search, with the main goal of optimizing the return of the policy rather than mimicking the expert. Recently, some works Sun et al. (2017); Chang et al. (2015); Sun et al. (2018) aim to combine IL with RL by assuming access to the expert's reward-to-go at the states visited by the RL agent. Cheng et al. (2018) take a moderately different approach where they switch from IL to RL and show that randomizing the switch point can help to learn faster. The authors in Ranchod et al. (2015) use demonstration trajectories to perform skill segmentation in an Inverse Reinforcement Learning (IRL) framework. The authors in Murali et al. (2016) also perform expert trajectory segmentation, but do not show results on learning the task, which is our main goal. SWIRL Krishnan et al. (2019) makes certain assumptions on the expert trajectories to learn the reward function, and its performance depends on the discriminability of the state features, which we, on the other hand, learn end-to-end.

Learning with Options. Discovering and learning options has been studied in the literature Sutton et al. (1999); Precup (2000); Stolle and Precup (2002) and can be used to speed up the policy learning process. Silver and Ciosek (2012) developed a framework for planning with options in a hierarchical manner, such that low-level options can be used to build higher-level options. Florensa et al. (2017) propose to learn a set of options, or skills, by augmenting the state space with a latent categorical skill vector; a separate network is then trained to learn a policy over options. The Option-Critic architecture Bacon et al. (2017) provides a gradient-based framework to learn the options along with the policy. This framework is extended in Riemer et al. (2018) to handle a hierarchy of options. Held et al. (2017) proposed a framework where goals are generated using Generative Adversarial Networks (GANs) in a curriculum learning manner, with increasingly difficult goals. Researchers have shown that an important way of identifying sub-goals in several tasks is to identify bottleneck regions. Diverse Density McGovern and Barto (2001), Relative Novelty Şimşek and Barto (2004), graph partitioning Şimşek et al. (2005), and clustering Mannor et al. (2004) can be used to identify such sub-goals. However, unlike our method, these algorithms do not use a set of expert trajectories and would thus still find it difficult to identify useful sub-goals for complex tasks.

3 Methodology

We first provide a formal definition of the problem addressed in this paper, followed by an overview of our methodology and then a detailed description of our framework.

Problem Definition.

Consider a standard RL setting where an agent interacts with an environment modeled by a Markov Decision Process (MDP) $M = (\mathcal{S}, \mathcal{A}, r, \gamma, \rho_0)$, where $\mathcal{S}$ is the set of states, $\mathcal{A}$ is the set of actions, $r : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}$ is a scalar reward function, $\gamma \in [0,1)$ is the discount factor and $\rho_0$ is the initial state distribution. Our goal is to learn a policy $\pi(a \mid s)$, with $a \in \mathcal{A}$ and $s \in \mathcal{S}$, which optimizes the expected discounted reward $J(\pi) = \mathbb{E}\big[\sum_{t \ge 0} \gamma^t \, r(s_t, a_t, s_{t+1})\big]$, where $s_0 \sim \rho_0$, $a_t \sim \pi(\cdot \mid s_t)$, and $s_{t+1} \sim P(\cdot \mid s_t, a_t)$, with $P$ denoting the (unknown) transition dynamics.

With sparse rewards, optimizing the expected discounted reward using RL may be difficult. In such cases, it may be beneficial to use a set of state-action trajectories generated by an expert, $\mathcal{D} = \{\tau_i\}_{i=1}^{N}$ with $\tau_i = (s_1^i, a_1^i, \ldots, s_{T_i}^i, a_{T_i}^i)$, to guide the learning process. Here $N$ is the number of trajectories in the dataset and $T_i$ is the length of the $i$-th trajectory. We propose a methodology to efficiently use $\mathcal{D}$ by discovering sub-goals from these trajectories and using them to develop an extrinsic reward function.
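To fix ideas, a minimal sketch of how the expert set $\mathcal{D}$ might be represented in code is given below; the class and field names are illustrative, not part of the paper.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class Trajectory:
    """One expert rollout: states s_1..s_T and the actions taken at them."""
    states: np.ndarray   # shape (T, state_dim)
    actions: np.ndarray  # shape (T, action_dim), or (T,) for discrete actions

# The expert set D is a list of N such trajectories, each starting from the
# initial state distribution and ending at a terminal state.
ExpertSet = List[Trajectory]
```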

Figure 1: (a) Overview of our proposed framework to train the policy network along with the sub-goal based reward function with out-of-set augmentation. (b) An example state partition with two independent trajectories, in black and red. Note that the terminal state is shown as a separate state partition because we assume it is indicated by the environment and not learned.

Overall Methodology. Several complex, goal-oriented, real-world tasks can often be broken down into sub-goals with some natural ordering. Providing positive rewards after completing these sub-goals can help to learn much faster compared to sparse, terminal-only rewards. In this paper, we advocate that such sub-goals can be learned directly from a set of expert demonstration trajectories, rather than manually designing them.

A pictorial description of our method is presented in Fig. 1(a). We first use the set $\mathcal{D}$ to train a policy $\pi$ by supervised learning. This serves as a good initial point for policy search using RL. However, with sparse rewards, the search can still be difficult, and the network may forget the parameters learned in the first step if it does not receive sufficiently useful rewards. To avoid this, we use $\mathcal{D}$ to learn a function $g$ which, given a state, predicts its sub-goal. We use this function to obtain a new reward function, which intuitively informs the RL agent whenever it moves from one sub-goal to another. We also learn a utility function $u$ to modulate the sub-goal predictions on states which are not well represented in the set $\mathcal{D}$. We approximate the functions $\pi$, $g$, and $u$ using neural networks. We next describe what we mean by sub-goals, followed by an algorithm to learn them.

3.1 Sub-goal Definition

Definition 1. Consider that the state-space $\mathcal{S}$ is partitioned into $K$ sets of states $\mathcal{G}_1, \ldots, \mathcal{G}_K$, s.t. $\bigcup_{k=1}^{K} \mathcal{G}_k = \mathcal{S}$ and $\mathcal{G}_i \cap \mathcal{G}_j = \emptyset$ for $i \neq j$, where $K$ is the number of sub-goals specified by the user. For each transition $(s_t, a_t, s_{t+1})$, we say that the action $a_t$ takes the agent from one sub-goal to another iff $s_t \in \mathcal{G}_i$ and $s_{t+1} \in \mathcal{G}_j$ for some $i, j$ with $i \neq j$.

We assume that there is an ordering in which groups of states appear in the trajectories, as shown in Fig. 1(b). However, the states within these groups may appear in any order in the trajectories. These groups of states are not defined a priori, and our algorithm aims at estimating these partitions. Note that such orderings are natural in several real-world applications where a certain sub-goal can only be reached after completing one or more previous sub-goals. We show (empirically, in the supplementary) that our assumption is soft rather than strict, i.e., the degree to which the trajectories deviate from the assumption determines the granularity of the discovered sub-goals. We may consider that states in the trajectories of $\mathcal{D}$ appear in increasing order of sub-goal indices, i.e., achieving sub-goal $k+1$ is harder than achieving sub-goal $k$. This gives us a natural way of defining an extrinsic reward function, which helps towards faster policy search. Also, all the trajectories in $\mathcal{D}$ should start from the initial state distribution and end at the terminal states.

3.2 Learning Sub-Goal Prediction

We use $\mathcal{D}$ to partition the state-space into $K$ sub-goals, with $K$ being a hyperparameter. We learn a neural network to approximate $g : \mathcal{S} \to [0,1]^K$, which, given a state, predicts a probability mass function (p.m.f.) over the possible sub-goal partitions $\mathcal{G}_1, \ldots, \mathcal{G}_K$. The order in which the sub-goals occur in the trajectories, i.e., $\mathcal{G}_1 \prec \mathcal{G}_2 \prec \ldots \prec \mathcal{G}_K$, acts as a supervisory signal, which can be derived from our assumption mentioned above.

We propose an iterative framework to learn $g$ using these ordering constraints. In the first step, we learn a mapping from states to sub-goals using labels that equipartition each trajectory among the sub-goals. Then we infer the labels of the states in the trajectories and correct them by imposing the ordering constraints. We use the new labels to train the network again and repeat the procedure until convergence. These two steps are as follows.

Learning Step. In this step we assume that we have a set of tuples $\{(s_t^i, y_t^i)\}$, with $y_t^i \in \{1, \ldots, K\}$ the sub-goal label of state $s_t^i$, which we use to learn the function $g$. This can be posed as a multi-class classification problem with $K$ categories. We optimize the following cross-entropy loss function,

$\mathcal{L} = -\frac{1}{n} \sum_{i,t} \sum_{k=1}^{K} \mathbb{1}[y_t^i = k] \, \log g_k(s_t^i)$    (1)

where $\mathbb{1}[\cdot]$ is the indicator function and $n$ is the number of states in the dataset $\mathcal{D}$. To begin with, we do not have any labels $y_t^i$, and thus we equipartition each trajectory among the $K$ sub-goals. That is, given a trajectory of $T_i$ states, the initial sub-goals are

$y_t^i = \left\lceil \frac{t K}{T_i} \right\rceil, \quad t = 1, \ldots, T_i$    (2)

Using this initial labeling scheme, similar states across trajectories may have different labels, but the network is expected to converge to the Maximum Likelihood Estimate (MLE) over the entire dataset. We also optimize CASL Paul et al. (2018) for stable learning, as the initial labels can be erroneous. In the next iteration of the learning step, we use the inferred sub-goal labels, which we obtain as follows.
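As a concrete illustration, the following is a minimal PyTorch sketch of the equipartition labeling of Eqn. 2 and one learning step with the cross-entropy loss of Eqn. 1; the network architecture, hyperparameters, and the CASL term are omitted, and the function names are our own.

```python
import numpy as np
import torch
import torch.nn as nn

def equipartition_labels(traj_len: int, num_subgoals: int) -> np.ndarray:
    """Initial labels (Eqn. 2): split a trajectory of length T into
    num_subgoals contiguous, equally sized segments, in temporal order."""
    t = np.arange(traj_len)
    return np.minimum((t * num_subgoals) // traj_len, num_subgoals - 1)

def learning_step(subgoal_net: nn.Module, states: torch.Tensor,
                  labels: torch.Tensor, epochs: int = 10, lr: float = 1e-3) -> nn.Module:
    """Learning step: fit g(s) to the current sub-goal labels with the
    cross-entropy loss of Eqn. 1 (the CASL regularizer is omitted here)."""
    opt = torch.optim.Adam(subgoal_net.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        logits = subgoal_net(states)   # (n_states, K)
        loss = ce(logits, labels)      # labels: (n_states,) int64 in [0, K)
        loss.backward()
        opt.step()
    return subgoal_net
```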

Inference Step. Although the equipartition labels in Eqn. 2 may map similar states across different trajectories to dissimilar sub-goals, the learned network modeling $g$ maps similar states to the same sub-goal. However, Eqn. 1, and thus the predictions of $g$, do not account for the natural temporal ordering of the sub-goals. Even with architectures such as Recurrent Neural Networks (RNNs), it may be better to impose such temporal order constraints explicitly rather than relying on the network to learn them. We inject these ordering constraints using Dynamic Time Warping (DTW).

Formally, for the $i$-th trajectory in $\mathcal{D}$, we obtain the set $\{g(s_t^i)\}_{t=1}^{T_i}$, where $g(s_t^i) \in [0,1]^K$ is a vector representing the p.m.f. over the sub-goals $1, \ldots, K$. However, as these predictions do not consider temporal ordering, the constraint that sub-goal $k+1$ occurs after sub-goal $k$, for $k = 1, \ldots, K-1$, is not preserved. To impose such constraints, we use DTW between the two sequences $(e_1, \ldots, e_K)$, where $e_k$ is the $k$-th standard basis vector of the $K$-dimensional Euclidean space, and $(g(s_1^i), \ldots, g(s_{T_i}^i))$. We use the norm of the difference between two vectors as the distance measure in DTW. In this process, we obtain a sub-goal assignment for each state in the trajectories, which becomes the new set of labels for the learning step.

We then invoke the learning step with the new labels (instead of Eqn. 2), followed by the inference step to obtain the next set of sub-goal labels. We continue this process until the number of sub-goal labels changed between consecutive iterations is less than a certain threshold. This method is presented in Algorithm 1, where the superscript denotes the iteration number of the learning-inference alternation.
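The inference step can be implemented with a small DTW routine such as the NumPy sketch below, which aligns the predicted p.m.f. sequence of one trajectory with the ordered sub-goal templates $e_1, \ldots, e_K$; the Euclidean local distance and the tie-breaking rule when a state aligns to several sub-goals are our own choices, not prescribed by the paper.

```python
import numpy as np

def dtw_subgoal_labels(pmfs: np.ndarray) -> np.ndarray:
    """Align the (T, K) sequence of predicted sub-goal p.m.f.s with the
    ordered templates e_1, ..., e_K via DTW and return one monotonically
    non-decreasing sub-goal label per state of the trajectory."""
    T, K = pmfs.shape
    templates = np.eye(K)                                                      # e_1, ..., e_K
    cost = np.linalg.norm(pmfs[:, None, :] - templates[None, :, :], axis=-1)   # (T, K)

    # Accumulated cost with the usual DTW moves (diagonal, down, right).
    acc = np.full((T, K), np.inf)
    acc[0, 0] = cost[0, 0]
    for t in range(1, T):
        acc[t, 0] = acc[t - 1, 0] + cost[t, 0]
    for k in range(1, K):
        acc[0, k] = acc[0, k - 1] + cost[0, k]
    for t in range(1, T):
        for k in range(1, K):
            acc[t, k] = cost[t, k] + min(acc[t - 1, k], acc[t - 1, k - 1], acc[t, k - 1])

    # Backtrack from (T-1, K-1) to (0, 0); when a state aligns to several
    # sub-goals, keep the smallest aligned index for that state.
    labels = np.empty(T, dtype=int)
    t, k = T - 1, K - 1
    labels[t] = k
    while t > 0 or k > 0:
        if t == 0:
            k -= 1
        elif k == 0:
            t -= 1
        else:
            move = int(np.argmin([acc[t - 1, k], acc[t - 1, k - 1], acc[t, k - 1]]))
            if move == 0:
                t -= 1
            elif move == 1:
                t, k = t - 1, k - 1
            else:
                k -= 1
        labels[t] = k
    return labels
```

One call per trajectory yields the corrected labels that replace those of Eqn. 2 in the next learning step.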

Reward Using Sub-Goals. The ordering of the sub-goals, as discussed before, provides a natural way of designing a reward function as follows:

$r_{sg}(s_t, a_t, s_{t+1}) = \gamma \, \Phi(s_{t+1}) - \Phi(s_t), \quad \text{with } \Phi(s) = \arg\max_{k} g_k(s)$    (3)

where the agent in state $s_t$ takes action $a_t$ and reaches state $s_{t+1}$. The augmented reward function then becomes $r'(s_t, a_t, s_{t+1}) = r(s_t, a_t, s_{t+1}) + r_{sg}(s_t, a_t, s_{t+1})$, defining an augmented MDP $M'$. As $r_{sg}$ is a potential-based shaping reward of the form $\gamma \Phi(s') - \Phi(s)$ (taking, without loss of generality, the potential shifted so that $\Phi(s_0) = 0$ for the initial state $s_0$), it follows from Ng et al. (1999) that every optimal policy in $M'$ will also be optimal in $M$. However, the new reward function may help to learn the task faster.
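A minimal sketch of this shaping reward, assuming Eqn. 3 takes the standard potential-based form described above, with the potential given by the index of the most likely predicted sub-goal:

```python
import numpy as np

def potential(subgoal_pmf: np.ndarray) -> float:
    """Phi(s): index of the most likely sub-goal predicted by g for state s."""
    return float(np.argmax(subgoal_pmf))

def subgoal_reward(pmf_s: np.ndarray, pmf_s_next: np.ndarray, gamma: float = 0.99) -> float:
    """Potential-based reward of Eqn. 3: r_sg = gamma * Phi(s') - Phi(s).
    Added to the sparse environment reward, it leaves the set of optimal
    policies unchanged (Ng et al., 1999)."""
    return gamma * potential(pmf_s_next) - potential(pmf_s)

# Usage at every environment step:
#   r_total = env_reward + subgoal_reward(g(s), g(s_next), gamma)
```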

Out-of-Set Augmentation. In several applications, the trajectories may cover only a small subset of the state-space, while the agent, during the RL step, may visit states outside of those in $\mathcal{D}$. The sub-goals estimated at these out-of-set states may be erroneous. To alleviate this problem, we impose on the potential function the assertion that the sub-goal predictor should be trusted only for states which are well represented in $\mathcal{D}$, and not elsewhere. We learn a neural network to model a utility function $u$, which, given a state, predicts the degree to which it is represented in the dataset $\mathcal{D}$. To do this, we build upon Deep One-Class Classification Ruff et al. (2018), which performs well on the task of anomaly detection. The idea is derived from Support Vector Data Description (SVDD) Tax and Duin (2004), which aims to find the smallest hypersphere enclosing the given data points with minimum error. Data points outside the sphere are then deemed anomalous. We learn the parameters of $u$ by optimizing the following function:

$\min_{\mathcal{W}} \ \frac{1}{n} \sum_{i,t} \big\| \phi(s_t^i; \mathcal{W}) - c \big\|^2 + \frac{\lambda}{2} \sum_{l} \big\| W^l \big\|_F^2$

where $c$ is a vector determined a priori Ruff et al. (2018), $\phi(\cdot; \mathcal{W})$ is modeled by a neural network with parameters $\mathcal{W} = \{W^1, \ldots, W^L\}$, s.t. $\phi : \mathcal{S} \to \mathbb{R}^d$. The second part is the regularization loss over all the parameters of the network lumped into $\mathcal{W}$. The utility function can then be expressed as follows:

$u(s) = \big\| \phi(s; \mathcal{W}) - c \big\|^2$    (4)

A lower value of $u(s)$ indicates that the state $s$ has been seen in $\mathcal{D}$. We modify the potential function, and thus the extrinsic reward function, to incorporate the utility score as follows:

$\tilde{\Phi}(s) = \mathbb{1}\big[ u(s) \le \delta \big] \, \Phi(s)$    (5)

where $\tilde{\Phi}$ denotes the modified potential function and $\delta$ is a threshold on the utility score. It may be noted that, as the extrinsic reward function is still a potential-based function Ng et al. (1999), the optimality conditions between the MDPs $M$ and $M'$ still hold, as discussed previously.
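Under the same caveat that the exact gating of Eqn. 5 is our reading of the text, a minimal Deep-SVDD-style sketch of the utility function and the gated potential could look as follows (the network sizes, centre initialization, and threshold are illustrative):

```python
import torch
import torch.nn as nn

class OutOfSetUtility(nn.Module):
    """One-class model in the spirit of Deep SVDD (Ruff et al., 2018): states
    from D are mapped close to a fixed centre c, and the squared distance to c
    is used as the utility score u(s)."""
    def __init__(self, state_dim: int, feat_dim: int = 32):
        super().__init__()
        self.phi = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, feat_dim, bias=False),  # bias-free output, as suggested for Deep SVDD
        )
        # Centre c is fixed a priori, e.g., the mean embedding of D after initialization.
        self.register_buffer("c", torch.zeros(feat_dim))

    def utility(self, states: torch.Tensor) -> torch.Tensor:
        """u(s) = ||phi(s; W) - c||^2; small for states well represented in D."""
        return ((self.phi(states) - self.c) ** 2).sum(dim=-1)

def svdd_loss(model: OutOfSetUtility, states: torch.Tensor) -> torch.Tensor:
    """Mean squared distance of expert states to the centre; the weight-norm
    regularizer can be delegated to the optimizer's weight_decay."""
    return model.utility(states).mean()

def gated_potential(phi_s: float, u_s: float, threshold: float) -> float:
    """Modified potential (our reading of Eqn. 5): use the sub-goal index only
    when u(s) indicates the state is well represented in the expert set."""
    return phi_s if u_s <= threshold else 0.0
```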

  Input: Expert trajectory set $\mathcal{D}$
  Output: Sub-goal predictor $g$
  $j \leftarrow 0$
  Obtain $y_t^i$ for each $s_t^i$ using Eqn. 2
  repeat
     Optimize Eqn. 1 to obtain $g^{(j)}$
     Predict the p.m.f. of each $s_t^i$ using $g^{(j)}$
     Obtain new sub-goals $y_t^i$ using the p.m.f.s in DTW
     done = True if the number of changed labels is below the threshold, else False
     $j \leftarrow j + 1$
  until done is True
Algorithm 1 Learning Sub-Goal Prediction

Supervised Pre-Training. We first pre-train the policy network using the trajectories in $\mathcal{D}$ (details in the supplementary). The performance of the pre-trained policy network is generally quite poor and is upper bounded by the performance of the expert from which the trajectories are drawn. We then employ RL, starting from the pre-trained policy, to learn from the sub-goal based reward function. Unlike standard imitation learning algorithms, e.g., DAgger, which finetune the pre-trained policy with the expert in the loop, our algorithm only uses the initial set of expert trajectories and does not invoke the expert otherwise.
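The pre-training details are in the supplementary; as a generic illustration (not the authors' exact architecture or loss), behavior-cloning-style pre-training on $\mathcal{D}$ could be sketched as:

```python
import torch
import torch.nn as nn

def pretrain_policy(policy: nn.Module, states: torch.Tensor,
                    expert_actions: torch.Tensor, epochs: int = 50,
                    lr: float = 1e-3) -> nn.Module:
    """Supervised pre-training on the expert set D: regress the policy output
    onto the expert action at each state (for discrete actions, replace the
    MSE loss with cross-entropy over action logits)."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = mse(policy(states), expert_actions)
        loss.backward()
        opt.step()
    return policy
```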

4 Experiments

(a) BiMGame (b) AntTarget (c) AntMaze
Figure 2: This figure presents the three environments used in this paper - (a) Ball-in-Maze Game (BiMGame) (b) Ant locomotion in an open environment with an end goal (AntTarget) (c) Ant locomotion in a maze with an end goal (AntMaze)

In this section, we perform experimental evaluation of the proposed method of learning from trajectories and compare it with other state-of-the-art methods. We also perform ablation of different modules of our framework.

Tasks. We perform experiments on three challenging environments, shown in Fig. 2. The first is the Ball-in-Maze Game (BiMGame) introduced in van Baar et al. (2018), where the task is to move a ball from the outermost to the innermost ring using a set of five discrete actions: clockwise and anti-clockwise rotations of the board by a small fixed angle along its two principal dimensions, and a "no-op" where the current orientation of the board is maintained. The states are raw images of the board. The second environment is AntTarget, which involves the Ant from Schulman et al. (2015). The task is to reach the center of a circle, with the Ant being initialized on an arc of the circle; the state and action spaces are both continuous. The third environment, AntMaze, uses the same Ant, but in the U-shaped maze used in Held et al. (2017). The Ant is initialized at one end of the maze, with the goal being the other end, indicated in red in Fig. 2(c). Details about the network architectures we use for $\pi$, $g$ and $u$ can be found in the supplementary material.

Reward. For all tasks, we use a sparse terminal-only reward, i.e., a positive reward is given only upon reaching the goal state and zero otherwise. Standard RL methods such as A3C Mnih et al. (2016) are not able to solve these tasks with such sparse rewards.

Trajectory Generation. We generate trajectories from A3C Mnih et al. (2016) policies trained with dense rewards, which we do not use in any other experiment. We also generate sub-optimal trajectories for BiMGame and AntMaze. For BiMGame, we do so by running Model Predictive Control (MPC) on the simulator, as in Paul and van Baar (2018) (details in the supplementary). For AntMaze, we generate sub-optimal trajectories from an A3C policy stopped well before convergence. We generate a separate set of expert trajectories for each of the three tasks. As we have two sets of trajectories (optimal and sub-optimal) for BiMGame and AntMaze, we use the sub-optimal set in all experiments, unless otherwise mentioned.

(a) BiMGame
(b) AntTarget
(c) AntMaze
Figure 3: This figure shows the comparison of our proposed method with the baselines. Some lines may not be visible as they overlap. For tasks (a) and (c) our method clearly outperforms others. For task (b), although value reward initially performs better, our method eventually achieves the same performance. For a fair comparison, we do not use the out-of-set augmentation to generate this plot.

Baselines. We primarily compare our method with RL methods which utilize trajectory or expert information: AggreVaTeD Sun et al. (2017) and value-based reward shaping Ng et al. (1999), which is equivalent to the reward shaping used in THOR Sun et al. (2018). For these methods, we use $\mathcal{D}$ to fit a value function to the sparse terminal-only reward of the original MDP and use it as the expert value function. We also compare with standard A3C pre-trained using $\mathcal{D}$. Note that we pre-train all the methods using the trajectory set for a fair comparison. We report the mean and standard deviation of the cumulative reward over independent runs.

(a) BiMGame
(b) AntTarget
(c) AntMaze
Figure 4: This plot presents the learning curves for different numbers of learned sub-goals on the three tasks. For BiMGame and AntTarget, the number of sub-goals hardly matters. However, due to the inherently longer length of the AntMaze task, small numbers of sub-goals perform much worse than larger ones.

Comparison with Baselines. First, we compare our method with the baselines in Fig. 3. Note that since the out-of-set augmentation using $u$ can also be applied to other methods which learn from trajectories, such as value-based reward shaping, we present the comparison with the baselines without using $u$, i.e., with Eqn. 3. Later, we perform an ablation study with and without $u$. As may be observed, none of the baselines shows any sign of learning on these tasks, except for ValueReward, which performs comparably with the proposed method on AntTarget only. Our method, on the other hand, is able to learn and solve the tasks consistently over multiple runs. The expert cumulative rewards are also drawn as horizontal lines in the plots, and imitation learning methods such as DAgger Ross et al. (2011) can at best reach that mark. Our method is able to surpass the expert on all the tasks. In fact, for AntMaze, even with a rather sub-optimal expert, our algorithm reaches a much higher cumulative reward within a few million environment steps.

The poor performance of the ValueReward and AggreVaTeD can be attributed to the imperfect value function learned with a limited number of trajectories. Specifically, with an increase in the trajectory length, the variations in cumulative reward in the initial set of states are quite high. This introduces a considerable amount of error in the estimated value function in the initial states, which in turn traps the agent in some local optima when such value functions are used to guide the learning process.

Variations in Sub-Goals. The number of sub-goals $K$ is specified by the user based on domain knowledge. For example, in BiMGame, the task has four bottlenecks, states which must be visited to complete the task, and these can be considered sub-goals. We perform experiments with different numbers of sub-goals and present the plots in Fig. 4. It may be observed that for BiMGame and AntTarget, our method performs well over a wide range of $K$. On the other hand, for AntMaze, as the task is much longer than AntTarget (12m vs 5m), larger values of $K$ learn much faster than smaller ones, since a higher number of sub-goals provides more frequent rewards. Note that how the speed of learning varies with the number of sub-goals also depends on the number of expert trajectories. If the pre-training is good, then less frequent sub-goals might work fine, whereas with a small number of expert trajectories, the RL agent may need more frequent rewards (see the supplementary material for more experiments).

(a) BiMGame (b) AntTarget
Figure 5: This plot presents the comparison of our proposed method with and without the one-class classification method for out-of-set augmentation.

Effect of Out-of-Set Augmentation. The set $\mathcal{D}$ may not cover the entire state-space. To deal with this situation, we developed the extrinsic reward function in Eqn. 5 using $u$. To evaluate its effectiveness, we run our algorithm with Eqn. 3 and with Eqn. 5, and show the results in Fig. 5, with the legends indicating whether $u$ is used. For BiMGame, we used the optimal A3C trajectories for this evaluation. This is because using the MPC trajectories with Eqn. 3 can still solve the task with similar reward plots, since the MPC trajectories visit many more states due to their short-term planning. The (optimal) A3C trajectories, on the other hand, rarely visit some states, due to their long-term planning. In this case, using Eqn. 3 actually traps the agent in a local optimum (in the outermost ring), whereas using $u$ as in Eqn. 5 learns to solve the task consistently (Fig. 5(a)).

For AntTarget in Fig. 5(b), using $u$ performs better than not using it (and also surpasses value-based reward shaping). This is because the trajectories only span a small sector of the circle (Fig. 7(b)), while the Ant is allowed to visit states outside of it in the RL step. Thus, $u$ avoids incorrect sub-goal assignments to states not well represented in $\mathcal{D}$ and helps the overall learning.

(a) BiMGame (b) AntMaze
Figure 6: This plot presents a comparison of our proposed method for two different types of expert trajectories. The corresponding expert rewards are also plotted as horizontal lines.
(a) BiMGame (b) AntTarget (c) AntMaze
Figure 7: This figure presents the learned sub-goals for the three tasks which are color coded. Note that for (b) and (c), multiple sub-goals are assigned the same color, but they can be distinguished by their spatial locations.

Effect of Sub-Optimal Expert. In general, the optimality of the expert may have an effect on performance. The comparison of our algorithm with optimal vs. sub-optimal expert trajectories is shown in Fig. 6. As may be observed, the learning curves for both tasks are better with the optimal expert trajectories. However, even when using sub-optimal experts, our method is able to surpass them and perform much better. We also see that, for AntMaze, our method performs better than even the optimal expert (as it is only optimal w.r.t. some cost function).

Visualization. We visualize the sub-goals discovered by our algorithm and plot them on the x-y plane in Fig. 7. As can be seen for BiMGame, our method is able to discover the bottleneck regions of the board as distinct sub-goals. For AntTarget and AntMaze, the path to the goal is more or less equally divided into sub-goals. This shows that our method of sub-goal discovery works both for environments with bottleneck regions and for those without. The supplementary material has more visualizations and discussion.

5 Discussions

The experimental analysis presented in the previous section contains the following key observations:


  • Our method for sub-goal discovery works both for tasks with inherent bottlenecks (e.g. BiMGame) and for tasks without any bottlenecks (e.g. AntTarget and AntMaze), as long as there is a temporal ordering between groups of states in the expert trajectories, which is the case for many applications.

  • Experiments show that our assumption on the temporal ordering of groups of states in expert trajectories is soft, and that it determines the granularity of the discovered sub-goals (see supplementary).

  • Discrete rewards based on sub-goals perform much better than value-function-based continuous rewards. Moreover, value functions learned from long trajectories available only in limited number may be erroneous, whereas segmenting the trajectories based on temporal ordering may still work well.

  • As the expert trajectories may not cover all the state-space regions the agent visits during exploration in the RL step, augmenting the sub-goal based reward function with out-of-set augmentation performs better than not using it.

6 Conclusion

In this paper, we presented a framework to utilize demonstration trajectories efficiently by discovering sub-goals, i.e., waypoints that need to be completed in order to achieve a complex goal-oriented task. We use these sub-goals to augment the reward function of the task without affecting the optimality of the learned policy. Experiments on three complex tasks show that, unlike state-of-the-art RL, IL, or methods which combine them, our method is able to solve the tasks consistently. We also show that our method performs much better than the sub-optimal experts used to obtain the trajectories and at least as well as the optimal experts. Our future work will concentrate on extending our method to repetitive, non-goal-oriented tasks.

References

  • J. Andreas, D. Klein, and S. Levine (2017) Modular multitask reinforcement learning with policy sketches. In ICML, pp. 166–175. Cited by: §1.
  • B. D. Argall, S. Chernova, M. Veloso, and B. Browning (2009) A survey of robot learning from demonstration. Robotics and autonomous systems 57 (5), pp. 469–483. Cited by: §1.
  • P. Bacon, J. Harb, and D. Precup (2017) The option-critic architecture.. In AAAI, pp. 1726–1734. Cited by: §2.
  • M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, et al. (2016) End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316. Cited by: §1.
  • K. Chang, A. Krishnamurthy, A. Agarwal, H. Daume III, and J. Langford (2015) Learning to search better than your teacher. ICML. Cited by: §2.
  • C. Cheng, X. Yan, N. Wagener, and B. Boots (2018) Fast policy learning through imitation and reinforcement. UAI. Cited by: §1, §1, §2.
  • S. Chernova and M. Veloso (2009) Interactive policy learning through confidence-based autonomy. JAIR 34, pp. 1–25. Cited by: §2.
  • Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel (2016) Benchmarking deep reinforcement learning for continuous control. In ICML, pp. 1329–1338. Cited by: §1.
  • R. Dubey, P. Agrawal, D. Pathak, T. L. Griffiths, and A. A. Efros (2018) Investigating human priors for playing video games. arXiv preprint arXiv:1802.10217. Cited by: §1.
  • C. Florensa, Y. Duan, and P. Abbeel (2017) Stochastic neural networks for hierarchical reinforcement learning. ICLR. Cited by: §2.
  • D. Held, X. Geng, C. Florensa, and P. Abbeel (2017) Automatic goal generation for reinforcement learning agents. ICML. Cited by: §2, §4.
  • T. Hester, M. Vecerik, O. Pietquin, M. Lanctot, T. Schaul, B. Piot, D. Horgan, J. Quan, A. Sendonaris, I. Osband, et al. (2018) Deep Q-learning from demonstrations. In AAAI. Cited by: §2.
  • S. Krishnan, A. Garg, R. Liaw, B. Thananjeyan, L. Miller, F. T. Pokorny, and K. Goldberg (2019) SWIRL: a sequential windowed inverse reinforcement learning algorithm for robot tasks with delayed rewards. IJRR 38 (2-3), pp. 126–145. Cited by: §2.
  • S. Levine, C. Finn, T. Darrell, and P. Abbeel (2016) End-to-end training of deep visuomotor policies. JMLR 17 (1), pp. 1334–1373. Cited by: §1.
  • S. Levine and V. Koltun (2013) Guided policy search. In ICML, pp. 1–9. Cited by: §2.
  • S. Mannor, I. Menache, A. Hoze, and U. Klein (2004) Dynamic abstraction in reinforcement learning via clustering. In ICML, pp. 71. Cited by: §2.
  • A. McGovern and A. G. Barto (2001) Automatic discovery of subgoals in reinforcement learning using diverse density. ICML. Cited by: §2.
  • V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu (2016) Asynchronous methods for deep reinforcement learning. In ICML, pp. 1928–1937. Cited by: §4, §4.
  • V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. (2015) Human-level control through deep reinforcement learning. Nature 518 (7540), pp. 529. Cited by: §1.
  • A. Murali, A. Garg, S. Krishnan, F. T. Pokorny, P. Abbeel, T. Darrell, and K. Goldberg (2016) TSC-DL: unsupervised trajectory segmentation of multi-modal surgical demonstrations with deep learning. In ICRA, pp. 4150–4157. Cited by: §2.
  • A. Nagabandi, G. Kahn, R. S. Fearing, and S. Levine (2018) Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In ICRA, pp. 7559–7566. Cited by: §1.
  • A. Y. Ng, D. Harada, and S. Russell (1999) Policy invariance under reward transformations: theory and application to reward shaping. In ICML, Vol. 99, pp. 278–287. Cited by: §1, §3.2, §3.2, §4.
  • S. Paul, S. Roy, and A. K. Roy-Chowdhury (2018) W-TALC: weakly-supervised temporal activity localization and classification. In ECCV, pp. 563–579. Cited by: §3.2.
  • S. Paul and J. van Baar (2018) Trajectory-based learning for ball-in-maze games. arXiv preprint arXiv:1811.11441. Cited by: §4.
  • D. Precup (2000) Temporal abstraction in reinforcement learning. University of Massachusetts Amherst. Cited by: §2.
  • A. Rajeswaran, V. Kumar, A. Gupta, G. Vezzani, J. Schulman, E. Todorov, and S. Levine (2017) Learning complex dexterous manipulation with deep reinforcement learning and demonstrations. RSS. Cited by: §2.
  • P. Ranchod, B. Rosman, and G. Konidaris (2015) Nonparametric bayesian reward segmentation for skill discovery using inverse reinforcement learning. In IROS, pp. 471–477. Cited by: §2.
  • M. Riemer, M. Liu, and G. Tesauro (2018) Learning abstract options. NIPS. Cited by: §2.
  • S. Ross and J. A. Bagnell (2014) Reinforcement and imitation learning via interactive no-regret learning. arXiv preprint arXiv:1406.5979. Cited by: §2.
  • S. Ross, G. Gordon, and D. Bagnell (2011) A reduction of imitation learning and structured prediction to no-regret online learning. In AISTATS, pp. 627–635. Cited by: §1, §2, §4.
  • L. Ruff, N. Görnitz, L. Deecke, S. A. Siddiqui, R. Vandermeulen, A. Binder, E. Müller, and M. Kloft (2018) Deep one-class classification. In ICML, pp. 4390–4399. Cited by: §3.2.
  • S. Schaal (1999) Is imitation learning the route to humanoid robots?. Trends in cognitive sciences 3 (6), pp. 233–242. Cited by: §2.
  • J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel (2015) High-dimensional continuous control using generalized advantage estimation. ICLR. Cited by: §4.
  • D. Silver, J. Bagnell, and A. Stentz (2008) High performance outdoor navigation from overhead data using imitation learning. RSS. Cited by: §2.
  • D. Silver and K. Ciosek (2012) Compositional planning using optimal option models. arXiv preprint arXiv:1206.6473. Cited by: §2.
  • D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. (2016) Mastering the game of go with deep neural networks and tree search. nature 529 (7587), pp. 484. Cited by: §1.
  • Ö. Şimşek and A. G. Barto (2004) Using relative novelty to identify useful temporal abstractions in reinforcement learning. In ICML, pp. 95. Cited by: §2.
  • Ö. Şimşek, A. P. Wolfe, and A. G. Barto (2005) Identifying useful subgoals in reinforcement learning by local graph partitioning. In ICML, pp. 816–823. Cited by: §2.
  • M. Stolle and D. Precup (2002) Learning options in reinforcement learning. In SARA, pp. 212–223. Cited by: §2.
  • W. Sun, J. A. Bagnell, and B. Boots (2018) Truncated horizon policy search: combining reinforcement learning & imitation learning. arXiv preprint arXiv:1805.11240. Cited by: §1, §2, §4.
  • W. Sun, A. Venkatraman, G. J. Gordon, B. Boots, and J. A. Bagnell (2017) Deeply aggrevated: differentiable imitation learning for sequential prediction. In ICML, pp. 3309–3318. Cited by: §1, §2, §4.
  • R. S. Sutton, D. Precup, and S. Singh (1999) Between mdps and semi-mdps: a framework for temporal abstraction in reinforcement learning. Artificial intelligence 112 (1-2), pp. 181–211. Cited by: §2.
  • D. M. Tax and R. P. Duin (2004) Support vector data description. Machine learning 54 (1), pp. 45–66. Cited by: §3.2.
  • E. Todorov (2014) Convex and analytically-invertible dynamics with contacts and constraints: theory and implementation in mujoco.. In ICRA, pp. 6054–6061. Cited by: §1.
  • F. Torabi, G. Warnell, and P. Stone (2018) Behavioral cloning from observation. IJCAI. Cited by: §2.
  • J. van Baar, A. Sullivan, R. Cordorel, D. Jha, D. Romeres, and D. Nikovski (2018) Sim-to-real transfer learning using robustified controllers in robotic tasks involving complex dynamics. ICRA. Cited by: §4.