PICO: Primitive Imitation for COntrol

In this work, we explore a novel framework for control of complex systems called Primitive Imitation for COntrol (PICO). The approach combines ideas from imitation learning, task decomposition, and novel task sequencing to generalize from demonstrations to new behaviors. Demonstrations are automatically decomposed into existing or missing sub-behaviors, which allows the framework to identify novel behaviors while not duplicating existing ones. Generalization to new tasks is achieved through dynamic blending of behavior primitives. We evaluated the approach using demonstrations from two different robotic platforms. The experimental results show that PICO is able to detect the presence of a novel behavior primitive and build the missing control policy.



1 Introduction

Human-robotic fusions controlled through brain computer interfaces (BCI) have tremendous potential to impact human health and capabilities through applications like intelligent motor and cognitive prostheses [13, 23, 21, 10, 14, 32, 3]. BCI-based control approaches are currently limited by the number of independent degrees of freedom that can be reliably controlled directly. The number of degrees of freedom on motor prostheses, for example, can be several dozen [24]. This is about an order of magnitude more degrees of freedom than can be reliably generated by current non-invasive BCI systems. The dilemma is how to control high-degree-of-freedom complex machines with only a few control inputs.

One approach to increase the impact of limited control inputs is through modularity and hierarchical control mechanisms [32, 3]. The idea is to use the limited number of inputs to select a primitive control policy, from a library of primitive behaviors, and potentially a target. Complex tasks are performed by chaining primitive behaviors.

As an example of this scenario, consider a Universal Robots UR5 [2] manipulator mounted on a Clearpath Husky platform [1] as shown in Fig. 1. The UR5 is used to demonstrate reaching, grabbing, and lifting a block on a table. Other tasks may require performing these actions in another order, so it may be useful to learn and maintain a collection of these primitive behaviors for later use. While the underlying behavior primitives are well defined for the reach-and-grasp scenario, other example scenarios may not have as well defined or labeled primitives. In this work, we assume that the underlying label of the behaviors shown in the task demonstrations is unknown.

Figure 1: Husky-UR5 Reach and Grasp Environment

The questions we investigate are how might we learn and maintain the primitive library from unlabeled demonstrations and, assuming the behavior primitive library exists, how would one know when to use, adapt, or create a new primitive behavior. We propose that the behavior library should be actively maintained to minimize redundancy and maximize the ability to reconstruct complex tasks through chains of the primitive behaviors. In this work, we explore techniques to directly optimize for these criteria by building on methods that learn from demonstration.

We explore maintaining a behavior primitive library in an online learning scenario. Given a potentially non-empty behavior primitive library and a new set of unlabeled task demonstrations, we seek to update the behavior primitive library to maximally accommodate the new demonstrations while maintaining the ability to reconstruct previously demonstrated trajectories.

Our contribution is an approach called Primitive Imitation for COntrol (PICO) that simultaneously learns subtask decomposition from unlabeled task demonstrations, trains behavior primitives, and learns a hierarchical control mechanism that blends primitive behaviors to create even greater behavioral diversity; an overview is shown in Fig. 2. Our approach directly optimizes the contents of the primitive library to maximize the ability to reconstruct unlabeled task demonstrations from sequences of primitive behaviors.

Figure 2: An overview of PICO. The approach takes as input unlabeled demonstrations and a library of primitive behaviors. The goal is to predict the primitive behavior label associated with each time point in all demonstrations. Additional behavior primitive models can be trained to fill gaps that are not well represented by existing behavior primitives.

2 Preliminaries

Learning from demonstration (LfD) and imitation learning allow agents to execute a task by observing the task being performed [15]. In the robotics domain, a goal of imitation learning is to produce a mapping from states to actions, known as a control policy [6, 27], that has the maximum likelihood of producing the demonstration dataset $D = \{\tau_1, \ldots, \tau_N\}$, where each $\tau$ is a demonstration trajectory of state-action pairs $(s_t, a_t)$. The demonstrations can be created by another control policy [25], by a human expert [18], or in a simulated environment [29, 17]. Let $\pi_\theta$ be a policy parameterized by $\theta$. The goal is then to optimize Equation 1 by varying $\theta$:

$$\theta^* = \arg\max_\theta \sum_{\tau \in D} \sum_{(s_t, a_t) \in \tau} \log \pi_\theta(a_t \mid s_t) \qquad (1)$$
Following optimization, covariate shift can cause errors in the control process that place the robot in a previously unobserved state. A control policy will have higher action prediction errors in parts of the state space it has not observed, leading to poor action predictions and compounding errors with repeated application of the policy. One approach introduced to decrease the impact of covariate shift is to inject noise into the demonstrations used for learning [20]. This increases the amount of state space covered by the policy and improves action predictions around the demonstrations, leading to better generalization and error tolerance.
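The behavior-cloning step with noise injection can be sketched as follows. This is a minimal illustration, not the paper's implementation: we use a linear policy fit by least squares as a stand-in for maximizing Equation 1, and perturb the demonstrated states with Gaussian noise (the function name and noise scale are our own choices):

```python
import numpy as np

def fit_bc_policy(states, actions, noise_std=0.05, seed=0):
    """Fit a linear policy pi(s) = s @ W by least squares, a simple
    stand-in for the likelihood objective, after injecting Gaussian
    noise into the demonstrated states to widen state-space coverage."""
    rng = np.random.default_rng(seed)
    noisy_states = states + rng.normal(0.0, noise_std, size=states.shape)
    W, *_ = np.linalg.lstsq(noisy_states, actions, rcond=None)
    return W

# Toy demonstrations: actions are a fixed linear function of state.
states = np.random.default_rng(1).normal(size=(200, 3))
true_W = np.array([[1.0], [-2.0], [0.5]])
actions = states @ true_W
W = fit_bc_policy(states, actions)
```

With a small noise scale, the fitted weights stay close to the demonstrating policy while the training data covers a slightly wider region of state space.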

2.1 Model Agnostic Meta-Learning

In meta-learning, a model is trained on a variety of learning tasks and the parameters of the method are fine-tuned for generalization. The idea is to combine a set of learner models to improve performance on a task more quickly than training without pretrained models. This is a common strategy for one-shot [26] or few-shot scenarios, where a model must be trained using one or a few examples. Some approaches for meta-learning come from reinforcement learning [11], and they typically differ in how they update individual learners. Some meta-learning methods update models using gradient information [11] while others learn how to update learners from data [5, 7].

3 Related Work

Imitation learning alone does not provide a mechanism to generalize demonstrations to new tasks. One mechanism to address this challenge is task decomposition, which has the goal of identifying subtasks from demonstration. Subtasks can be made into sub-policies through imitation learning, including methods that combine subtask discovery with imitation learning [29, 31]. By decomposing demonstrations into subtasks, it becomes possible to permute the sequence of sub-policies to achieve greater task diversity and generalizability. However, decomposing demonstrations into subtasks that are maximally useful for recombination is a challenge in task decomposition [29].

Once sub-task policies are established, a hierarchical control policy can be learned that identifies the sequence of policies needed to achieve a specified goal. Given a sufficiently diverse set of demonstrations, the reasoning layer can be learned from the demonstrations [31]. Several approaches for learning hierarchical architectures for control policies from limited demonstrations have been proposed [29, 31, 9]. We were inspired by the work on mixtures of experts [28, 16], which includes a similar hierarchical representation.

Some approaches assume that the behavior primitive library is fully trained in advance [31]. In the reinforcement learning domain, the options framework [30, 4, 19] and hierarchical reinforcement learning [8] are common approaches for organizing hierarchies of policies. Techniques from reinforcement learning are often predicated on being able to interact with an environment and collect large amounts of data. In this work, we focus on learning hierarchical task decomposition strategies from a limited set of demonstrations.

3.1 Task Sketch for Sub-policy Discovery

Some related approaches [4, 22] perform demonstration decomposition by combining both demonstrations and task sketches. The literature refers to these approaches as weakly-supervised because the order of tasks is given and the exact transition points within a demonstration must be inferred.

Let $D = \{\tau_1, \ldots, \tau_N\}$ be our dataset containing trajectories of length $T$ composed of state-action tuples $(s_t, a_t)$ for state $s_t$ and action $a_t$. Given a library of sub-task policies $\Pi = \{\pi_1, \ldots, \pi_K\}$, a task sketch $L = (l_1, \ldots, l_M)$ is a sequence of sub-task labels where $M$ is the length of the sketch. A path $\rho = (\rho_1, \ldots, \rho_T)$ is a sequence of sub-task labels where $T$ is the length of a demonstration. We assume that $M \leq T$. We say that a path $\rho$ matches a task sketch $L$ if $L$ equals $\rho$ after removing all adjacent duplicate sub-task labels in $\rho$. For example, the path $(1, 1, 2, 2, 3)$ matches the task sketch $(1, 2, 3)$.
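The matching rule above can be written as a short helper (a minimal illustration; the function name is ours):

```python
def matches(path, sketch):
    """True if the path equals the sketch after collapsing runs of
    adjacent duplicate sub-task labels in the path."""
    collapsed = [l for i, l in enumerate(path) if i == 0 or l != path[i - 1]]
    return collapsed == list(sketch)
```

For instance, `matches([1, 1, 2, 2, 3], [1, 2, 3])` is true, while a path that revisits an earlier sub-task out of order does not match.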

4 Methods

In this section we describe the approaches most closely aligned with our work referred to as CTC [12] and TACO [29]. Then, we introduce our approach called Primitive Imitation for COntrol ().

4.1 Connectionist Temporal Classification

Given a dataset $D$ and task sketch $L$, one approach to obtain a set of generalizable sub-tasks is to separately learn an alignment of trajectories to the task sketch, then learn the control policies for sub-tasks with behavior cloning. Connectionist Temporal Classification (CTC) [12] addresses the problem of aligning sequences of dissimilar lengths. There are potentially multiple ways in which a path could be aligned to a task sketch. Let $A(L, T)$ be the set of all paths of length $T$ that match the task sketch $L$. The CTC objective maximizes the probability of the task sketch given the input trajectory $\tau$:

$$p(L \mid \tau) = \sum_{\rho \in A(L, T)} \prod_{t=1}^{T} p(\rho_t \mid s_t) \qquad (2)$$

Here $p(\rho_t \mid s_t)$ is commonly represented as a neural network with parameters $\phi$ that outputs the probability of each sub-task policy in $\Pi$. The objective is solved efficiently using dynamic programming. Inference using the neural network model is used to find a maximum likelihood path $\rho^*$ for a trajectory $\tau$. The labels in $\rho^*$ provide an association between state-action tuples and sub-task policies. The state-action tuples associated with a single sub-task are used to create a sub-task policy using behavior cloning.
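The dynamic program over matching paths can be sketched as follows. This is a simplified, blank-free variant assuming adjacent sketch labels are distinct (as they are after collapsing duplicates); `step_probs[t][k]` stands for the per-timestep sub-task probabilities produced by the network:

```python
def sketch_probability(step_probs, sketch):
    """Total probability, under per-timestep sub-task distributions
    step_probs[t][k], of all length-T label paths that match the sketch.
    alpha[t][m] accumulates paths that have consumed sketch[:m+1] by
    time t; at each step a path may stay on the current sub-task or
    advance to the next one."""
    T, M = len(step_probs), len(sketch)
    alpha = [[0.0] * M for _ in range(T)]
    alpha[0][0] = step_probs[0][sketch[0]]
    for t in range(1, T):
        for m in range(M):
            stay = alpha[t - 1][m]
            advance = alpha[t - 1][m - 1] if m > 0 else 0.0
            alpha[t][m] = (stay + advance) * step_probs[t][sketch[m]]
    return alpha[T - 1][M - 1]
```

For a 3-step trajectory and sketch (0, 1), the sum runs over the two matching paths (0, 0, 1) and (0, 1, 1), in time O(TM) rather than enumerating paths.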

4.2 Temporal Alignment for Control

Given a demonstration $\tau$ and a task sketch $L$, Temporal Alignment for Control (TACO) [29] learns where each sub-task begins and ends in the trajectory and simultaneously trains a library of sub-task policies $\Pi$. TACO maximizes the joint log likelihood of the task sequence and the actions from the sub-task policies in $\Pi$ conditioned on the states. Let $a_\tau$ and $s_\tau$ be the sequences of actions and states, respectively, in trajectory $\tau$:

$$p(L, a_\tau \mid s_\tau) = \sum_{\rho \in A(L, T)} p(\rho \mid s_\tau) \prod_{t=1}^{T} \pi_{\rho_t}(a_t \mid s_t) \qquad (3)$$

where $\prod_{t=1}^{T} \pi_{\rho_t}(a_t \mid s_t)$ is the product of action probabilities associated with any given path $\rho$. The path determines which data within $\tau$ corresponds to each sub-task policy, and each action-likelihood term corresponds to the behavior cloning objective from Equation 1.
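The TACO objective extends the CTC-style path-summing recursion by additionally weighting each timestep with the probability the active sub-task policy assigns to the demonstrated action. A minimal sketch under the same simplifying assumptions (no blank symbol, distinct adjacent sketch labels; argument names are ours):

```python
def taco_likelihood(gate_probs, action_likelihoods, sketch):
    """Joint likelihood of the sketch and demonstrated actions.
    gate_probs[t][k] plays the role of p(sub-task k | s_t) and
    action_likelihoods[t][k] the role of pi_k(a_t | s_t); both are
    (T, K) tables. Same stay-or-advance DP as for CTC, with each step
    weighted by the action likelihood of the active sub-task."""
    T, M = len(gate_probs), len(sketch)
    alpha = [[0.0] * M for _ in range(T)]
    k0 = sketch[0]
    alpha[0][0] = gate_probs[0][k0] * action_likelihoods[0][k0]
    for t in range(1, T):
        for m in range(M):
            k = sketch[m]
            prev = alpha[t - 1][m] + (alpha[t - 1][m - 1] if m > 0 else 0.0)
            alpha[t][m] = prev * gate_probs[t][k] * action_likelihoods[t][k]
    return alpha[T - 1][M - 1]
```

When every action likelihood is 1, this reduces to the CTC sketch probability, which makes the relationship between the two objectives explicit.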

4.3 Primitive Imitation for Control (PICO)

In this work, we introduce Primitive Imitation for Control (PICO). The approach differs from previous work in a few important ways. PICO similarly decomposes behavior primitives from demonstrations, but it optimizes the action conditioned on state and does not require a task sketch; unlike CTC [12], our approach simultaneously learns to segment demonstrations and trains the underlying behavior primitive models.

We aim to reconstruct the given trajectories as well as possible using the existing sub-task policy library. As shown in Equation 5, we seek to minimize the sum of squared error between the observed action $a_t^\tau$ and the predicted action $\hat{a}_t^\tau$ over all timepoints $t$ and all trajectories $\tau \in D$. We refer to this objective as minimizing reconstruction error. Let $(s_t^\tau, a_t^\tau)$ be the state-action tuple corresponding to timepoint $t$ in trajectory $\tau$:

$$\min \sum_{\tau \in D} \sum_{t=1}^{T} \| a_t^\tau - \hat{a}_t^\tau \|^2 \qquad (5)$$

The action prediction, Equation 6, combines the probability of each sub-task policy conditioned on the state with the action predicted by that policy for the state:

$$\hat{a}_t^\tau = \sum_{i=1}^{K} p(\pi_i \mid s_t^\tau) \, \pi_i(s_t^\tau) \qquad (6)$$

Substituting Equation 6 into Equation 5 results in Equation 7, which is the optimization problem for PICO:

$$\min \sum_{\tau \in D} \sum_{t=1}^{T} \Big\| a_t^\tau - \sum_{i=1}^{K} p(\pi_i \mid s_t^\tau) \, \pi_i(s_t^\tau) \Big\|^2 \qquad (7)$$
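The per-timestep blending can be sketched numerically. Here `gate` stands in for the learned primitive distribution and `primitives` for the behavior primitive models; both are placeholders, not the paper's networks:

```python
import numpy as np

def predict_action(state, primitives, gate):
    """Blend primitive actions: a linear combination of each primitive's
    predicted action weighted by the gate's primitive distribution.
    Also returns the maximum likelihood primitive label."""
    probs = np.asarray(gate(state))                       # p(pi_i | s), shape (K,)
    actions = np.stack([pi(state) for pi in primitives])  # (K, action_dim)
    return probs @ actions, int(np.argmax(probs))

# Toy example: two constant primitives and a fixed gate distribution.
primitives = [lambda s: np.array([1.0, 0.0]), lambda s: np.array([0.0, 1.0])]
gate = lambda s: [0.75, 0.25]
a_hat, label = predict_action(None, primitives, gate)
```

Because the blend is a convex combination, the predicted action interpolates smoothly between primitives when the gate is uncertain, rather than switching discretely.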
4.4 Neural Network Architecture

Estimates of both $p(\pi_i \mid s_t)$ and $\pi_i(s_t)$ are given by a recurrent neural network architecture. Figure 4 gives an overview of the recurrent and hierarchical network architecture. We solve for the objective in Equation 7 directly by backpropagation through a recurrent neural network with Equation 5 as the loss function. The model architecture is composed of two branches that are recombined to compute the action prediction at each timepoint.

To more easily compare with other approaches that do not blend sub-task policies, we estimate the maximum likelihood sub-task policy label at each timepoint. We refer to sub-task policies as behavior primitives. The behavior primitive label prediction is given by the maximum likelihood estimate shown in Equation 8 for time $t$ in trajectory $\tau$:

$$\hat{l}_t^\tau = \arg\max_i \; p(\pi_i \mid s_t^\tau) \qquad (8)$$

Figure 3 illustrates how we compute the predicted action $\hat{a}_t$ at time $t$. In the figure, the probability of $\pi_i$ given the state $s_t$ is written $p_i$. The latent representation at the current timepoint is a function of both the latent representation of the previous state and the current state.

Figure 3: Hierarchical recurrent deep network architecture for task decomposition, novel behavior primitive discovery, and behavior blending.
Figure 4: Neural network architecture for PICO. Given a set of input trajectories and a behavior primitive library, the core architecture follows two branches: the leftmost branch estimates a distribution over the behavior primitives, and the right-hand branch estimates the action prediction from each primitive behavior sub-model. We compute the predicted action as a linear combination of the behavior primitive distribution and the set of predicted actions from all behavior primitives.

Figure 4 details the architecture used for PICO based on the Husky+UR5 dataset example. Unless otherwise specified, the fully connected (FC) layers have ReLU activations, except for the output layers of the behavior primitive models. The last layer of each behavior primitive model has a linear activation to support diverse action predictions. While not shown in Figure 4, the network architecture also returns the predicted latent embedding and behavior primitive distribution for additional visualization and analysis.

4.5 Discovering and Training New Behavior Primitives

An important aspect of our approach is the ability to discover and create new behavior primitives from a set of trajectories and a partial behavior primitive library. PICO detects and trains new behavior primitive models simultaneously. As shown in Figure 3, PICO supports building new behavior primitive models by adding additional randomly initialized behavior models to the library prior to training. For our experiments, we assume that we know the correct number of missing primitives.

We define a gap in a trajectory as a region within a demonstration where actions are not predicted with high probability using the existing behavior primitive models. A gap implies that the current library of behavior primitives is insufficient to describe a set of state-action tuples in some part of the given trajectory. This also implies that the probability $p(\pi_i \mid s_t)$ that the data for timepoint $t$ was generated by the current library of behavior primitive models is low for all $i$. These low probabilities increase the likelihood that an additional randomly initialized behavior primitive policy $\pi_{K+1}$ might have a higher probability for $s_t$. The data is then used to train $\pi_{K+1}$. For nearby data in the same gap region, it becomes more likely that $p(\pi_{K+1} \mid s_t) > p(\pi_i \mid s_t)$ for $i \leq K$. This mechanism allows $\pi_{K+1}$ to develop into a new behavior primitive covering behavior that is not well represented by existing primitives.
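Locating gap regions from the best existing primitive's per-timestep probability can be sketched as a simple scan. The threshold and minimum run length are illustrative hyperparameters of our own, not values from the paper:

```python
def find_gaps(best_probs, threshold=0.5, min_len=5):
    """Return (start, end) index pairs (end exclusive) of runs where the
    best existing primitive's probability stays below threshold for at
    least min_len consecutive timepoints. Such runs mark candidate
    regions for a new, randomly initialized primitive."""
    gaps, start = [], None
    for t, p in enumerate(best_probs):
        if p < threshold:
            if start is None:
                start = t
        else:
            if start is not None and t - start >= min_len:
                gaps.append((start, t))
            start = None
    if start is not None and len(best_probs) - start >= min_len:
        gaps.append((start, len(best_probs)))
    return gaps
```

Requiring a minimum run length filters out isolated low-confidence timepoints, so only sustained regions of poor coverage are treated as missing primitives.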

4.6 Training Details

PICO is trained end-to-end by backpropagation. This is possible because all functions in the model are differentiable, with the exception of the argmax function. For experiments making use of pretrained behavior primitive models, the contents of the behavior primitive library are trained using the DART [20] technique for imitation learning.

As shown in Equation 5, the loss used to train the model is mean squared error between the predicted and observed actions over all timepoints and all demonstrations. There is no loss term for label prediction accuracy, because we assume that the demonstrations are unlabeled.

4.7 Metrics

Two metrics are computed to estimate performance. First, we evaluate mean squared error (MSE), as shown in Equation 5, between the predicted and given actions. Second, we compute behavior primitive label accuracy, which is a comparison between the predicted and given behavior primitive labels. Label accuracy is computed as the number of matching labels divided by the total number of comparisons. Both metrics are computed over all timepoints and over all demonstrations in the test set.
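Both metrics are straightforward to compute; a minimal sketch (function names are ours):

```python
import numpy as np

def reconstruction_mse(pred_actions, true_actions):
    """Mean squared error between predicted and observed actions,
    averaged over all timepoints and demonstrations."""
    pred, true = np.asarray(pred_actions), np.asarray(true_actions)
    return float(np.mean((pred - true) ** 2))

def label_accuracy(pred_labels, true_labels):
    """Fraction of timepoints whose predicted primitive label matches
    the given label."""
    pred, true = np.asarray(pred_labels), np.asarray(true_labels)
    return float(np.mean(pred == true))
```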

4.8 Baseline Implementations

Shiarlis et al. [29] developed TACO, which aligns subtasks to demonstrations given a library of primitives and a task sketch, where a task sketch describes the sequence in which subtasks will appear. In the same work [29], they extended the connectionist temporal classification (CTC) algorithm [12], commonly used to align sequences for speech recognition, to identify subtasks. For this work, we use TACO and the extended version of CTC as baseline comparisons for our algorithm, using an open-source implementation (https://github.com/KyriacosShiarli/taco). Both were tested using MLP and RNN architectures.

5 Experiments and Discussion

We evaluate PICO on a reach-grab-lift task using a Husky+UR5 environment. The dataset consists of 100 demonstrations of a Clearpath Husky robot with a UR5 manipulator performing a variety of reach, grasp, and lift tasks; see Figure 1. The number of time steps in the demonstrations varied from 1000 to 1800, but each used all three primitives: reach, grasp, and lift.

The first experiment quantifies the ability of PICO to identify primitive task labels from demonstrations independently from learning behavior primitives. The second experiment evaluates the ability of PICO to identify parts of demonstrations that are not represented by existing behavior primitives and rebuild the missing behavior primitive.

5.1 Reconstruction from existing primitives

Our initial experiment is an ablation study that separately evaluates the estimate of the primitive behavior probability distribution and the action predictions from learned behavior primitives. We train and freeze behavior primitive models for reach, grasp, and lift using the ground truth labeled data from trajectories. We evaluated PICO, TACO [29], and CTC based on label classification accuracy. For TACO and CTC, we additionally compared the methods using MLP- and RNN-based underlying network models. We evaluated all methods based on an 80/20 split of demonstrations into training and test sets. The average of five independent runs was obtained for each approach. In Table 1, we show the results of the comparison.

(a) Sample trajectory label accuracy
(b) Missing primitive label accuracy
Figure 5: Example behavior primitive label accuracy for a single test demonstration. We compare the label predictions given by PICO (red) to the ground truth (blue). (a) A sample reconstruction for a single trajectory with an existing behavior primitive library. Timepoints are on the x-axis and the behavior primitive label is on the y-axis. The labels 0, 1, and 2 correspond to reach, grasp, and lift respectively. (b) Reconstruction of an example trajectory and discovery of a missing behavior primitive (grasp).

Figure 5(a) shows a comparison between the predicted label based on Equation 8 and the ground truth label. Over all trajectories in the test set, the average label classification accuracy was 96%. A summary of results is shown in Table 1.

5.2 Behavior Primitive Discovery

In our next experiment, we evaluate the ability of PICO to recognize and build a missing behavior primitive model. We ran a leave-one-behavior-out experiment where one of the three primitives (i.e., reach, grasp, lift) was replaced with a randomly initialized behavior primitive. This experiment used the same 100 trajectories from the Husky+UR5 dataset discussed in the previous section and an 80/20 split between training and validation sets. Again, five trials were run with the training and validation sets randomly chosen. The label accuracy and action prediction MSE are shown in Figure 6. The leftmost bar shows the results with all primitives pre-trained with behavior cloning. The remaining bars show the accuracy when reach, grasp, and lift, respectively, were replaced with the gap primitive. Note that the gap primitive was updated throughout training with back-propagation such that the final primitive ideally would perform as well as the original pre-trained, behavior-cloned version; this comparison is shown with the action prediction MSE. The error bars show the standard deviation across the five trials. While the label accuracy across all three replaced primitives is approximately the same, the action prediction for the lift primitive is significantly worse. We believe this is due to the larger variance in lift trajectories. Unlike reach and grasp, which have restrictions placed on their final target position (it needs to be near the block), the final position of lift is randomly placed above the block's starting position.

As shown in the sample trajectory in Figure 5(b), the label prediction of the trained model closely aligns with the ground truth label from the example trajectory. Over all of the test trajectories, the average label classification accuracy was 96%.

(a) behavior label accuracy
(b) action prediction MSE
Figure 6: Accuracy of PICO in correctly identifying a primitive's label on the validation set (twenty randomly selected trajectories). (a) The leftmost bar shows performance when all primitives are in the library; successive bars denote accuracy when the reach, grasp, and lift primitives are dropped out and learned from a randomly initialized "gap" primitive. Error bars represent the standard deviation across five validation trials. (b) Mean squared error between the ground truth action and the learned model's estimate, averaged across twenty randomly selected test trajectories five times.

5.3 Visualizing the Learned Latent Space

To better understand the role of the embedding space in predicting the primitive probability distribution, we visualized the embedding of all state vectors from the test set in the recurrent hidden layer. We would expect a useful latent embedding to naturally cluster states that correspond to different primitives into distinct locations in the embedding space.

Figure 7: The organization of the learned latent space associated with the Husky-UR5 dataset for reach, grasp, and lift (red, green, and purple respectively).

Figure 7 shows the layout of the latent space in two dimensions. Each point corresponds to a state vector from the test dataset. The points are colored by the ground truth label.

5.4 Jaco Dial Domain Dataset

Figure 8: The Jaco dial domain scenario. A Jaco manipulator modeled in MuJoCo presses a sequence of 4 keys on a dialpad. The positions of the keys are randomly shuffled for each demonstration. The positions of the joints and positions of the keys are given as state information.

We also make use of the Jaco dial domain dataset [29] illustrated in Figure 8. The dial dataset is composed of demonstrations from a Jaco manipulator pressing 4 keys in sequence (e.g., 3, 5, 4, 7). The positions of the keys are randomly shuffled for each demonstration, but the position of each key is given in the state vector. The intention is to treat pressing an individual digit as a behavior primitive. For this dataset, label prediction accuracy is a challenging metric without a task sketch because the starting position of the Jaco may not provide clues about which button will be pressed. As the Jaco gets closer to a button, it becomes clearer which button will be pressed. The dataset of dialpad demonstrations was generated using default parameters and code from TACO [29].

5.5 Dial Domain Comparison

The goal of this comparison is to evaluate the label prediction accuracy of the metacontroller in PICO. To isolate the label predictions of the metacontroller, the behavior primitive library is pretrained on the training dataset of 1200 demonstrations and frozen. Label classification and action prediction accuracy are then evaluated on the test set of 280 demonstrations.

The average results of 5 runs are shown for TACO and CTC. We evaluate each approach using the same label accuracy and action prediction metrics. A summary of results is shown in Table 2. We found that our approach achieves the highest label accuracy at 65%. The overall label accuracy of PICO on the dial dataset is lower than on the Husky+UR5 dataset. Additional analysis revealed that many of the mislabelings occurred at the beginning of a new key press, where context about where the Jaco is moving next is weakest. The dataset is also more challenging than the Husky dataset because the number of unique behavior primitives has increased from 3 to 10.

Also of note, we compare our results to TACO, which is a weakly supervised approach: TACO is given the ordering of tasks. For task sequences of length 4, this means that a random baseline would be expected to achieve an accuracy of 25%. For an unlabeled approach like PICO, any of the 10 behavior primitives could be selected at each timepoint, so with unlabeled demonstrations the expected accuracy of a random baseline would be 10%.

6 Conclusion

In this paper, we describe PICO, an approach to learn behavior primitives from unlabeled demonstrations and a partial set of behavior primitives. We optimize a metric that directly minimizes reconstruction error for a set of demonstrations using sequences of behavior primitives. We directly compare our results to similar approaches using demonstrations generated from simulations of two different robotic platforms, and achieve both better label accuracy and better reconstruction accuracy as measured by action prediction mean squared error. While we have demonstrated success in these tasks, there are limitations to our approach. The number of additional primitives to add to the library must be decided prior to training. In spite of these limitations, we believe that PICO is a useful contribution to the community that may be relevant in a number of different domains.

Husky UR5	Label Accuracy	MSE Action Prediction
PICO	96%	0.053
TACO (MLP)	74%	3.59
TACO (RNN)	73%	3.75
CTC (MLP)	25%	4.20
CTC (RNN)	33%	2.68
Table 1: Method comparisons using the Husky UR5 Reach and Grasp dataset.
Jaco Pinpad	Label Accuracy	MSE Action Prediction
PICO	65%	0.0061
TACO (MLP)	47%	0.55
TACO (RNN)	*	*
CTC (MLP)	31%	0.57
CTC (RNN)	29%	0.58
Table 2: Method comparisons using the Jaco Pinpad dataset. *TACO (RNN) resulted in NaN loss after repeated attempts.


  • [1] Clearpath Robotics Husky unmanned ground vehicle. Note: www.clearpathrobotics.com/husky-unmanned-ground-vehicle-robot/. Accessed: 2019-09-10. Cited by: §1.
  • [2] Universal Robots. Note: www.universal-robots.com. Accessed: 2019-09-10. Cited by: §1.
  • [3] I. Akinola, B. Chen, J. Koss, A. Patankar, J. Varley, and P. Allen (2017-11) Task level hierarchical system for bci-enabled shared autonomy. In 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), Vol. , pp. 219–225. External Links: Document, ISSN Cited by: §1, §1.
  • [4] J. Andreas, D. Klein, and S. Levine (2017) Modular multitask reinforcement learning with policy sketches. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 166–175. Cited by: §3.1, §3.
  • [5] M. Andrychowicz, M. Denil, S. Gomez, M. W. Hoffman, D. Pfau, T. Schaul, B. Shillingford, and N. de Freitas (2016) Learning to learn by gradient descent by gradient descent. External Links: 1606.04474 Cited by: §2.1.
  • [6] B. Argall, S. Chernova, M. Veloso, and B. Browning (2009) A survey of robot learning from demonstration. Robotics and Autonomous Systems 57 (5), pp. 469 – 483. External Links: ISSN 0921-8890, Document, Link Cited by: §2.
  • [7] Y. Bengio, S. Bengio, and J. Cloutier (2002-01) Learning a synaptic learning rule. IJCNN-91-Seattle International Joint Conference on Neural Networks, pp. . External Links: Document Cited by: §2.1.
  • [8] T. G. Dietterich (2000-11) Hierarchical reinforcement learning with the maxq value function decomposition. Journal of Artificial Intelligence Research 13, pp. 227–303. External Links: ISSN 1076-9757, Link, Document Cited by: §3.
  • [9] Y. Duan, M. Andrychowicz, B. Stadie, J. Ho, J. Schneider, I. Sutskever, P. Abbeel, and W. Zaremba (2017) One-shot imitation learning. In Advances in neural information processing systems, pp. 1087–1098. Cited by: §3.
  • [10] M. S. Fifer, G. Hotson, B. Wester, D. McMullen, Y. Wang, Ma. Johannes, K. Katyal, J. Helder, M. Para, R. J. Vogelstein, et al. (2013) Simultaneous neural control of simple reaching and grasping with the modular prosthetic limb using intracranial eeg. IEEE transactions on neural systems and rehabilitation engineering 22 (3), pp. 695–705. Cited by: §1.
  • [11] C. Finn, P. Abbeel, and S. Levine (2017) Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1126–1135. Cited by: §2.1.
  • [12] A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber (2006) Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd international conference on Machine learning, pp. 369–376. Cited by: §4.1, §4.3, §4.8, §4.
  • [13] C. Guger, W. Harkam, C. Hertnaes, and G. Pfurtscheller (1999) Prosthetic control by an eeg-based brain-computer interface (bci). In Proc. aaate 5th european conference for the advancement of assistive technology, pp. 3–6. Cited by: §1.
  • [14] G. Hotson, D. P. McMullen, M. S. Fifer, M. S. Johannes, K. D. Katyal, M. P. Para, R. Armiger, W. S. Anderson, N. V. Thakor, B. A. Wester, et al. (2016) Individual finger control of a modular prosthetic limb using high-density electrocorticography in a human subject. Journal of neural engineering 13 (2), pp. 026017. Cited by: §1.
  • [15] A. Hussein, M. Gaber, E. Elyan, and C. Jayne (2017-04) Imitation learning: a survey of learning methods. ACM Comput. Surv. 50 (2), pp. 21:1–21:35. External Links: ISSN 0360-0300, Link, Document Cited by: §2.
  • [16] R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton (1991) Adaptive mixtures of local experts. Neural Computation 3, pp. 79–87. Cited by: §3.
  • [17] T. Kipf, Y. Li, H. Dai, V. Zambaldi, A. Sanchez-Gonzalez, E. Grefenstette, P. Kohli, and P. Battaglia (2019-09–15 Jun) CompILE: compositional imitation learning and execution. In Proceedings of the 36th International Conference on Machine Learning, Proceedings of Machine Learning Research, Vol. 97, Long Beach, California, USA, pp. 3418–3428. External Links: Link Cited by: §2.
  • [18] G. Konidaris, S. Kuindersma, R. Grupen, and A. Barto (2012) Robot learning from demonstration by constructing skill trees. The International Journal of Robotics Research 31 (3), pp. 360–375. Cited by: §2.
  • [19] T. Kulkarni, K. Narasimhan, A. Saeedi, and J. Tenenbaum (2016) Hierarchical deep reinforcement learning: integrating temporal abstraction and intrinsic motivation. In Advances in neural information processing systems, pp. 3675–3683. Cited by: §3.
  • [20] M. Laskey, J. Lee, R. Fox, A. Dragan, and K. Goldberg (2017) DART: noise injection for robust imitation learning. External Links: 1703.09327 Cited by: §2, §4.6.
  • [21] D. McFarland and J. Wolpaw (2008) Brain-computer interface operation of robotic and prosthetic devices. Computer 41 (10), pp. 52–56. Cited by: §1.
  • [22] T. Mu, K. Goel, and E. Brunskill (2019) PLOTS: procedure learning from observations using subtask structure. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, pp. 1007–1015. Cited by: §3.1.
  • [23] G. R. Muller-Putz and G. Pfurtscheller (2007) Control of an electrical prosthesis with an ssvep-based bci. IEEE Transactions on Biomedical Engineering 55 (1), pp. 361–364. Cited by: §1.
  • [24] P. F. Pasquina, B. N. Perry, M. E. Miller, G. S. Ling, and J. W. Tsao (2015) Recent advances in bioelectric prostheses. Neurology: Clinical Practice 5 (2), pp. 164–170. Cited by: §1.
  • [25] A. Rusu, S. Colmenarejo, C. Gulcehre, G. Desjardins, J. Kirkpatrick, R. Pascanu, V. Mnih, K. Kavukcuoglu, and R. Hadsell (2015) Policy distillation. External Links: 1511.06295 Cited by: §2.
  • [26] A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap (2016) One-shot learning with memory-augmented neural networks. External Links: 1605.06065 Cited by: §2.1.
  • [27] S. Schaal and C. Atkeson (2010) Learning control in robotics. IEEE Robotics & Automation Magazine 17 (2), pp. 20–29. Cited by: §2.
  • [28] N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. V. Le, G. E. Hinton, and J. Dean (2017) Outrageously large neural networks: the sparsely-gated mixture-of-experts layer. CoRR abs/1701.06538. External Links: Link, 1701.06538 Cited by: §3.
  • [29] K. Shiarlis, M. Wulfmeier, S. Salter, S. Whiteson, and I. Posner (2018-07) TACO: learning task decomposition via temporal alignment for control. In International Conference on Machine Learning, Cited by: §2, §3, §3, §4.2, §4.8, §4, §5.1, §5.4.
  • [30] M. Stolle and D. Precup (2002) Learning options in reinforcement learning. In Abstraction, Reformulation, and Approximation, Berlin, Heidelberg, pp. 212–223. External Links: ISBN 978-3-540-45622-3 Cited by: §3.
  • [31] D. Xu, S. Nair, Y. Zhu, J. Gao, A. Garg, L. Fei-Fei, and S. Savarese (2018) Neural task programming: learning to generalize across hierarchical tasks. In IEEE International Conference on Robotics and Automation, pp. 1–8. Cited by: §3, §3, §3.
  • [32] J. Zhao, W. Li, X. Mao, H. Hu, L. Niu, and G. Chen (2017-06) Behavior-based ssvep hierarchical architecture for telepresence control of humanoid robot to achieve full-body movement. IEEE Transactions on Cognitive and Developmental Systems 9 (2), pp. 197–209. External Links: Document, ISSN Cited by: §1, §1.