Human-robotic fusions controlled through brain-computer interfaces (BCIs) have tremendous potential to impact human health and capabilities through applications like intelligent motor and cognitive prostheses [13, 23, 21, 10, 14, 32, 3]. BCI-based control approaches are currently limited by the number of independent degrees of freedom that can be reliably controlled directly. The number of degrees of freedom on motor prostheses, for example, can be several dozen. This is about an order of magnitude more degrees of freedom than can be reliably generated by current non-invasive BCI systems. The dilemma is how to control complex, high-degree-of-freedom machines with only a few control inputs.
One approach to increase the impact of limited control inputs is through modularity and hierarchical control mechanisms [32, 3]. The idea is to use the limited number of inputs to select a primitive control policy from a library of primitive behaviors, and potentially a target. Complex tasks are performed by chaining primitive behaviors.
As an example of this scenario, consider a Universal Robots UR5 manipulator mounted on a Clearpath Husky platform, as shown in Fig. 1. The UR5 is used to demonstrate reaching, grabbing, and lifting a block on a table. Other tasks may require performing these actions in another order, so it may be useful to learn and maintain a collection of these primitive behaviors for later use. While the underlying behavior primitives are well defined for the reach-and-grasp scenario, other scenarios may not have such well-defined or labeled primitives. In this work, we assume that the underlying labels of the behaviors shown in the task demonstrations are unknown.
The questions we investigate are how might we learn and maintain the primitive library from unlabeled demonstrations and, assuming the behavior primitive library exists, how would one know when to use, adapt, or create a new primitive behavior. We propose that the behavior library should be actively maintained to minimize redundancy and maximize the ability to reconstruct complex tasks through chains of the primitive behaviors. In this work, we explore techniques to directly optimize for these criteria by building on methods that learn from demonstration.
We explore maintaining a behavior primitive library in an online learning scenario. Given a potentially non-empty behavior primitive library and a new set of unlabeled task demonstrations, we seek to update the behavior primitive library to maximally accommodate the new demonstrations while maintaining the ability to reconstruct previously demonstrated trajectories.
Our contribution is an approach called Primitive Imitation for Control (PICO) that simultaneously learns subtask decomposition from unlabeled task demonstrations, trains behavior primitives, and learns a hierarchical control mechanism that allows blending of primitive behaviors to create even greater behavioral diversity; an overview is shown in Fig. 2. Our approach directly optimizes the contents of the primitive library to maximize the ability to reconstruct unlabeled task demonstrations from sequences of primitive behaviors.
Learning from demonstration (LfD) and imitation learning allow agents to execute a task by observing the task being performed. In the robotics domain, a goal of imitation learning is to produce a mapping $\pi: S \rightarrow A$ from states to actions, known as a control policy [6, 27], that has the maximum likelihood of producing the demonstration dataset $D = \{\tau_1, \ldots, \tau_N\}$, where each $\tau_j$ is a demonstration trajectory of state-action pairs $(s_t, a_t)$. The demonstrations can be created by another control policy, by a human expert, or in a simulated environment [29, 17]. Let $\pi_\theta$ be a policy parameterized by $\theta$. The goal is then to optimize Equation 1 by varying $\theta$:

$$\theta^* = \arg\max_\theta \sum_{\tau \in D} \sum_{(s_t, a_t) \in \tau} \log \pi_\theta(a_t \mid s_t). \quad (1)$$
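As a concrete illustration of the behavior-cloning objective in Equation 1, the following sketch fits a linear Gaussian policy to toy demonstrations; under a Gaussian action model, maximizing the log-likelihood reduces to least-squares regression from states to actions. The function names and the linear-policy assumption are ours, chosen for illustration.

```python
import numpy as np

def behavior_cloning_fit(states, actions):
    """Fit a linear policy a ~ s @ W by maximum likelihood.

    With Gaussian action noise, the maximum-likelihood solution is
    ordinary least squares over all demonstrated (state, action) pairs.
    """
    W, *_ = np.linalg.lstsq(states, actions, rcond=None)
    return W

def policy(W, state):
    """Predict the action for a state under the fitted linear policy."""
    return state @ W

# Toy demonstrations generated by a known linear expert a = 2 s.
rng = np.random.default_rng(0)
S = rng.normal(size=(100, 3))
A = 2.0 * S
W = behavior_cloning_fit(S, A)
```

Because the toy expert is exactly linear, the recovered weights match the expert; on real demonstrations the same objective is optimized over neural-network parameters by gradient descent.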
Following optimization, covariate shift can cause errors in the control process that place the robot in a previously unobserved state. Control policies have higher action-prediction errors in parts of the state space they have not observed, leading to poor action predictions and compounding errors over successive iterations of the policy. One approach that has been introduced to decrease the impact of covariate shift is to inject noise into the demonstrations used for learning [20]. This approach increases the amount of state space covered by the policy and improves action predictions around the demonstrations, leading to better generalization and error tolerance.
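A minimal sketch of this noise-injection idea (in the spirit of DART) follows: demonstrated states are perturbed with Gaussian noise while the paired actions are kept unchanged, so the cloned policy learns to recover the demonstrated action from slightly off-trajectory states. The noise scale and number of copies are illustrative assumptions, not values from the paper.

```python
import numpy as np

def inject_noise(states, actions, noise_scale=0.05, copies=5, seed=0):
    """Augment demonstrations with Gaussian-perturbed copies of the states.

    Returns the original data plus `copies` perturbed versions; actions
    are repeated unchanged so off-trajectory states map back toward the
    demonstrated behavior.
    """
    rng = np.random.default_rng(seed)
    aug_states = [states]
    aug_actions = [actions]
    for _ in range(copies):
        aug_states.append(states + rng.normal(scale=noise_scale, size=states.shape))
        aug_actions.append(actions)
    return np.concatenate(aug_states), np.concatenate(aug_actions)

S = np.zeros((10, 2))   # toy demonstration states
A = np.ones((10, 2))    # toy demonstration actions
S_aug, A_aug = inject_noise(S, A)
```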
2.1 Model Agnostic Meta-Learning
In meta-learning, a model is trained on a variety of learning tasks and its parameters are fine-tuned for generalization. The idea is to combine a set of learner models to improve performance on a new task more quickly than a model without pretraining. This is a common strategy for one-shot or few-shot scenarios, where a model must be trained using one or a few examples. Some approaches to meta-learning come from reinforcement learning and typically differ in how they update individual learners: some update models using gradient information, and others learn how to update learners from data [5, 7].
3 Related Work
Imitation learning alone does not provide a mechanism to generalize demonstrations to new tasks. One mechanism to address this challenge is task decomposition, which has the goal of identifying subtasks from demonstration. Subtasks can be made into sub-policies through imitation learning, including methods that combine subtask discovery with imitation learning [29, 31]. By decomposing demonstrations into subtasks, it becomes possible to permute the sequence of sub-policies to achieve greater task diversity and generalizability. However, decomposing demonstrations into subtasks that are maximally useful for recombination remains a challenge.
Once sub-task policies are established, a hierarchical control policy can be learned that identifies the sequence of policies needed to achieve a specified goal. Given a sufficiently diverse set of demonstrations, this reasoning layer can itself be learned from the demonstrations. Several approaches for learning hierarchical architectures for control policies from limited demonstrations have been proposed [29, 31, 9]. We were inspired by the work on mixtures of experts [28, 16], which includes a similar hierarchical representation.
Some approaches assume that the behavior primitive library is fully trained in advance. In the reinforcement learning domain, the options framework [30, 4, 19] and hierarchical reinforcement learning are common approaches for organizing hierarchies of policies. These techniques are often predicated on being able to interact with an environment and collect large amounts of data. In this work, we focus on learning hierarchical task decomposition strategies from a limited set of demonstrations.
3.1 Task Sketch for Sub-policy Discovery
Some related approaches [4, 22] perform demonstration decomposition by combining both demonstrations and task sketches. The literature refers to these approaches as weakly supervised because the order of sub-tasks is given but the exact transition points within a demonstration must be inferred.
Let $D = \{\tau_1, \ldots, \tau_N\}$ be our dataset, containing trajectories $\tau_j$ of length $H_j$ composed of state-action tuples $(s_t, a_t)$ for state $s_t$ and action $a_t$. Given a library of sub-task policies $B = \{b_1, \ldots, b_K\}$, a task sketch $T = (T_1, \ldots, T_L)$ is a sequence of sub-task labels, where $L$ is the length of the sketch. A path $P = (p_1, \ldots, p_H)$ is a sequence of sub-task labels where $H$ is the length of a demonstration. We assume that $L \leq H$. We say that a path $P$ matches a task sketch $T$ if $T = P$ after removing all adjacent duplicate sub-task labels in $P$. For example, the path $(1, 1, 2, 2, 3)$ matches the task sketch $(1, 2, 3)$.
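The matching rule above is easy to state in code: collapse adjacent duplicate labels in the path and compare against the sketch. The helper names here are ours, for illustration.

```python
def collapse(path):
    """Remove adjacent duplicate labels, e.g. [1, 1, 2, 2, 3] -> [1, 2, 3]."""
    out = []
    for label in path:
        if not out or out[-1] != label:
            out.append(label)
    return out

def matches(path, sketch):
    """A path matches a sketch if collapsing duplicates yields the sketch."""
    return collapse(path) == list(sketch)
```

For example, `matches([1, 1, 2, 2, 3], [1, 2, 3])` holds, while a path that visits the sub-tasks out of order does not match.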
4.1 Connectionist Temporal Classification
Given a dataset $D$ and task sketch $T$, one approach to obtain a set of generalizable sub-tasks is to first learn an alignment of trajectories to the task sketch and then learn the control policies for the sub-tasks with behavior cloning. Connectionist Temporal Classification (CTC) [12] addresses the problem of aligning sequences of dissimilar lengths. There are potentially multiple ways in which a path could be aligned to a task sketch. Let $S_{T,H}$ be the set of all paths of length $H$ that match the task sketch $T$. The CTC objective maximizes the probability of the task sketch given the input trajectory $\tau$:

$$p(T \mid \tau) = \sum_{P \in S_{T,H}} \prod_{t=1}^{H} p(p_t \mid s_t). \quad (2)$$
The distribution $p(p_t \mid s_t)$ is commonly represented as a neural network with parameters $\phi$ that outputs the probability of each sub-task policy in $B$. The objective is solved efficiently using dynamic programming. Inference with the trained network is used to find a maximum-likelihood path $P^*$ for a trajectory $\tau$. The labels in $P^*$ provide an association between state-action tuples and sub-task policies. The state-action tuples associated with a single sub-task are then used to create a sub-task policy using behavior cloning.
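The dynamic program behind this objective can be sketched as follows for the blank-free setting used here: `alpha[t, i]` accumulates the probability of having emitted the first `i + 1` sketch labels by time `t`, where each step either stays on the current sketch label or advances to the next one. This is an illustrative reimplementation under our own naming, not the authors' code.

```python
import numpy as np

def sketch_probability(label_probs, sketch):
    """Probability of a task sketch given per-timestep label probabilities.

    label_probs: array of shape (T, K), where entry (t, k) is the
    probability of sub-task k at time t. Sums the product of per-step
    probabilities over every length-T path that matches the sketch.
    """
    T = label_probs.shape[0]
    L = len(sketch)
    alpha = np.zeros((T, L))
    alpha[0, 0] = label_probs[0, sketch[0]]
    for t in range(1, T):
        for i in range(L):
            stay = alpha[t - 1, i]                       # remain on label i
            advance = alpha[t - 1, i - 1] if i > 0 else 0.0  # move to label i
            alpha[t, i] = (stay + advance) * label_probs[t, sketch[i]]
    return alpha[-1, -1]
```

With uniform probabilities over two labels and a two-step trajectory, the only path matching the sketch `[0, 1]` is `(0, 1)`, so the sketch probability is 0.25.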
4.2 Temporal Alignment for Control
Given a demonstration $\tau$ and a task sketch $T$, Temporal Alignment for Control (TACO) [29] learns where each sub-task begins and ends in the trajectory and simultaneously trains a library of sub-task policies $B$. TACO maximizes the joint log likelihood of the task sequence and the actions from the sub-task policies in $B$ conditioned on the states. Let $A_\tau$ and $S_\tau$ be the set of actions and states, respectively, in trajectory $\tau$:

$$p(T, A_\tau \mid S_\tau) = \sum_{P \in S_{T,H}} p(P \mid S_\tau)\, p(A_\tau \mid P, S_\tau), \quad (3)$$

where

$$p(A_\tau \mid P, S_\tau) = \prod_{t=1}^{H} \pi_{p_t}(a_t \mid s_t) \quad (4)$$

is the product of action probabilities associated with any given path $P$. The path determines which data within $\tau$ corresponds to each sub-task policy, and each factor $\pi_{p_t}(a_t \mid s_t)$ is the behavior-cloning objective from Equation 1.
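To make the structure of this objective concrete, the brute-force sketch below sums, over every path that matches the sketch, the product of the action probabilities each sub-task policy assigns along that path. Real implementations use dynamic programming instead of enumeration; the function and its inputs are illustrative assumptions.

```python
import itertools

def joint_likelihood(action_probs, sketch):
    """Brute-force TACO-style joint likelihood.

    action_probs: list over timesteps of dicts mapping a sub-task label
    to the probability that sub-task's policy assigns the demonstrated
    action at that timestep. Sums path products over all matching paths.
    """
    T = len(action_probs)
    labels = sorted({b for step in action_probs for b in step})
    total = 0.0
    for path in itertools.product(labels, repeat=T):
        # Collapse adjacent duplicates to compare against the sketch.
        collapsed = [k for k, _ in itertools.groupby(path)]
        if collapsed == list(sketch):
            p = 1.0
            for t, b in enumerate(path):
                p *= action_probs[t][b]
            total += p
    return total
```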
4.3 Primitive Imitation for Control (PICO)
In this work, we introduce Primitive Imitation for Control (PICO). The approach differs from previous work in a few important ways. Like TACO, PICO decomposes behavior primitives from demonstrations; however, it optimizes the action conditioned on the state and does not require a task sketch, and unlike CTC, our approach simultaneously learns to segment demonstrations and trains the underlying behavior primitive models.
We aim to reconstruct the given trajectories as well as possible using the existing sub-task policy library. As shown in Equation 5, we seek to minimize the sum of squared error between the observed action $a_t^j$ and the predicted action $\hat{a}_t^j$ over all timepoints $t$ and all trajectories $\tau_j$. We refer to this objective as minimizing reconstruction error. Let $(s_t^j, a_t^j)$ be the state-action tuple corresponding to timepoint $t$ in trajectory $\tau_j$:

$$\min \sum_{j=1}^{N} \sum_{t=1}^{H_j} \left\lVert a_t^j - \hat{a}_t^j \right\rVert^2. \quad (5)$$

The action prediction, Equation 6, sums over primitives the product of the probability of a sub-task policy $b$ conditioned on the state and the action predicted by policy $\pi_b$ for the state $s_t^j$:

$$\hat{a}_t^j = \sum_{b \in B} p(b \mid s_t^j)\, \pi_b(s_t^j). \quad (6)$$

Substituting Equation 6 into Equation 5 results in Equation 7, the optimization problem for PICO:

$$\min \sum_{j=1}^{N} \sum_{t=1}^{H_j} \Big\lVert a_t^j - \sum_{b \in B} p(b \mid s_t^j)\, \pi_b(s_t^j) \Big\rVert^2. \quad (7)$$
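A numpy sketch of this computation: the predicted action is the probability-weighted blend of the actions proposed by each behavior primitive (Equation 6), and the reconstruction error is the squared distance between predicted and demonstrated actions (Equation 5). Function names are ours.

```python
import numpy as np

def blended_action(primitive_actions, primitive_probs):
    """Blend primitive proposals by their probabilities.

    primitive_actions: (K, A) actions proposed by each of K primitives.
    primitive_probs: (K,) probabilities p(b | s_t). Returns an (A,) action.
    """
    return primitive_probs @ primitive_actions

def reconstruction_error(pred_actions, demo_actions):
    """Mean over timepoints of the squared error between action vectors."""
    return float(np.mean(np.sum((pred_actions - demo_actions) ** 2, axis=-1)))

# Two toy primitives proposing opposite actions, blended 50/50.
acts = np.array([[1.0, 0.0], [-1.0, 0.0]])
probs = np.array([0.5, 0.5])
a_hat = blended_action(acts, probs)
```

Because the blend is differentiable in both the probabilities and the primitive outputs, the whole objective can be minimized end-to-end by gradient descent.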
4.4 Neural Network Architecture
Estimates of both $p(b \mid s_t)$ and $\pi_b(s_t)$ are given by a recurrent neural network architecture. Figure 4 gives an overview of the recurrent and hierarchical network architecture. We solve the objective in Equation 7 directly by backpropagation through a recurrent neural network with Equation 5 as the loss function. The model architecture is composed of two branches that are recombined to compute the action prediction at each timepoint.
To more easily compare with other approaches that do not blend sub-task policies, we estimate the maximum-likelihood sub-task policy label at each timepoint. We refer to sub-task policies as behavior primitives. The behavior primitive label prediction is given by the maximum-likelihood estimate of $p(b \mid s_t)$ shown in Equation 8 for time $t$ in trajectory $\tau_j$:

$$\hat{b}_t^j = \arg\max_{b \in B} p(b \mid s_t^j). \quad (8)$$
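Equation 8 as code, for a whole trajectory at once (an illustrative helper of our own naming):

```python
import numpy as np

def primitive_labels(primitive_probs):
    """Maximum-likelihood behavior-primitive label per timepoint.

    primitive_probs: (T, K) array of p(b | s_t). Returns (T,) labels.
    """
    return np.argmax(primitive_probs, axis=-1)
```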
Figure 3 illustrates how we compute the predicted action at time $t$. In the figure, the probability of primitive $b_i$ given state $s_t$ is written $p(b_i \mid s_t)$. The latent representation $h_t$ at the current timepoint is a function of both the latent representation at the previous timepoint and the current state: $h_t = f(h_{t-1}, s_t)$.
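The recurrent latent update can be sketched as a minimal Elman-style cell, in which the latent at time $t$ depends on the previous latent and the current state. The weight shapes and the tanh nonlinearity are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def latent_step(h_prev, s_t, W_h, W_s):
    """One recurrent update: h_t = tanh(h_{t-1} W_h + s_t W_s).

    h_prev: (H,) previous latent; s_t: (S,) current state;
    W_h: (H, H) recurrent weights; W_s: (S, H) input weights.
    """
    return np.tanh(h_prev @ W_h + s_t @ W_s)
```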
Figure 4 details the architecture used for PICO, based on the Husky+UR5 dataset example. Unless otherwise specified, the fully connected (FC) layers have ReLU activations, except for the output layers of the behavior primitive models. The last layer of each behavior primitive model has a linear activation to support diverse action predictions. While not shown in Figure 4, the network architecture also returns the predicted latent embedding and behavior primitive distribution for additional visualization and analysis.
4.5 Discovering and Training New Behavior Primitives
An important aspect of our approach is the ability to discover and create new behavior primitives from a set of trajectories and a partial behavior primitive library. PICO detects and trains new behavior primitive models simultaneously. As shown in Figure 3, PICO supports building new behavior primitive models by adding additional randomly initialized behavior models to the library prior to training. For our experiments, we assume that the correct number of missing primitives is known.
We define a gap in a trajectory as a region within a demonstration where actions are not predicted with high probability by the existing behavior primitive models. A gap implies that the current library of behavior primitives is insufficient to describe a set of state-action tuples in some part of the given trajectory; equivalently, the probability $p(b \mid s_t)$ that the data at timepoint $t$ was generated by the current library is low for all $b \in B$. These low probabilities increase the likelihood that an additional, randomly initialized behavior primitive policy $b_{new}$ will have a higher probability $p(b_{new} \mid s_t)$. The gap data is then used to train $b_{new}$. For nearby data in the same gap region, it then becomes more likely that $p(b_{new} \mid s_t) > p(b \mid s_t)$ for all $b \neq b_{new}$. This mechanism allows $b_{new}$ to develop into a new behavior primitive covering behavior that is not well represented by the existing primitives.
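Gap detection itself can be sketched as a scan for contiguous regions where no existing primitive explains the data with high probability. The threshold and function names are illustrative assumptions.

```python
import numpy as np

def find_gaps(primitive_probs, threshold=0.5):
    """Find contiguous low-confidence regions in a trajectory.

    primitive_probs: (T, K) array of p(b | s_t). Returns a list of
    (start, end) index pairs (end exclusive) where the best primitive's
    probability falls below the threshold.
    """
    low = np.max(primitive_probs, axis=-1) < threshold
    gaps, start = [], None
    for t, flag in enumerate(low):
        if flag and start is None:
            start = t                      # gap opens
        elif not flag and start is not None:
            gaps.append((start, t))        # gap closes
            start = None
    if start is not None:
        gaps.append((start, len(low)))     # gap runs to the end
    return gaps
```

The state-action tuples inside the returned regions are the natural candidates for training a newly added, randomly initialized primitive.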
4.6 Training Details
PICO is trained end-to-end by backpropagation. This is possible because all functions in the model are differentiable, with the exception of the argmax function. For experiments making use of pretrained behavior primitive models, the contents of the behavior primitive library are trained using the DART [20] technique for imitation learning.
As shown in Equation 5, the loss used to train the model is mean squared error between the predicted and observed actions over all timepoints and all demonstrations. There is no loss term for label prediction accuracy, because we assume that the demonstrations are unlabeled.
Two metrics are computed to estimate performance. First, we evaluate the mean squared error (MSE), as shown in Equation 5, between the predicted and given actions. Second, we compute behavior primitive label accuracy, a comparison between the predicted and given behavior primitive labels. Label accuracy is computed as the number of matching labels divided by the total number of comparisons. Both metrics are computed over all timepoints and all demonstrations in the test set.
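The two metrics can be stated directly in code (helper names are ours):

```python
import numpy as np

def action_mse(pred, demo):
    """Mean squared error between predicted and demonstrated actions."""
    return float(np.mean((pred - demo) ** 2))

def label_accuracy(pred_labels, true_labels):
    """Fraction of timepoints whose predicted primitive label matches."""
    pred_labels = np.asarray(pred_labels)
    true_labels = np.asarray(true_labels)
    return float(np.mean(pred_labels == true_labels))
```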
4.8 Baseline Implementations
Shiarlis et al. [29] developed TACO, which aligns sub-tasks to demonstrations given a library of primitives and a task sketch, where a task sketch describes the sequence in which sub-tasks will appear. In addition, in their recent work they extended the connectionist temporal classification (CTC) algorithm [12], commonly used to align sequences for speech recognition, to identify sub-tasks. For this work, we use TACO and the extended version of CTC as baseline comparisons for our algorithm, using an open-source implementation (https://github.com/KyriacosShiarli/taco). Both were tested using MLP and RNN architectures.
5 Experiments and Discussion
We evaluate PICO using a reach-grab-lift task in a Husky+UR5 environment. The dataset consists of 100 demonstrations of a Clearpath Husky robot with a UR5 manipulator performing a variety of reach, grasp, and lift tasks (see Figure 1). The number of time steps in the demonstrations varied from 1000 to 1800, but each used all three primitives: reach, grasp, and lift.
The first experiment quantifies the ability of PICO to identify primitive task labels from demonstrations independently of learning behavior primitives. The second experiment evaluates the ability of PICO to identify parts of demonstrations that are not represented by existing behavior primitives and rebuild the missing behavior primitive.
5.1 Reconstruction from existing primitives
Our initial experiment is an ablation study that separately evaluates the estimate of the primitive behavior probability distribution and the action predictions from the learned behavior primitives. We train and freeze behavior primitive models for reach, grasp, and lift using the ground-truth labeled data from the trajectories. We evaluated PICO, TACO [29], and CTC [12] based on label classification accuracy. For TACO and CTC, we additionally compared MLP- and RNN-based underlying network models. We evaluated all methods based on an 80/20 split of demonstrations into training and test sets. The average of five independent runs was obtained for each approach. Table 1 shows the results of the comparison.
5.2 Behavior Primitive Discovery
In our next experiment, we evaluate the ability of PICO to recognize and build a missing behavior primitive model. We ran a leave-one-behavior-out experiment in which one of the three primitives (i.e., reach, grasp, or lift) was replaced with a randomly initialized behavior primitive. This experiment used the same 100 trajectories from the Husky+UR5 dataset discussed in the previous section and an 80/20 split between training and validation sets. Again, five trials were run with randomly chosen training and validation sets. The label accuracy and action prediction MSE are shown in Figure 6. The leftmost bar shows the results with all primitives pretrained with behavior cloning. The remaining bars show the accuracy when reach, grasp, and lift, respectively, were replaced with the gap primitive. Note that the gap primitive was updated throughout training with backpropagation such that the final primitive ideally performs as well as the original pretrained, behavior-cloned version; this comparison is shown with the action prediction MSE. The error bars show the standard deviation across the five trials. While the label accuracy across all three replaced primitives is approximately the same, the action prediction for the lift primitive is significantly worse. We believe this is due to the larger variance in lift trajectories. Unlike reach and grasp, which have restrictions placed on their final target positions (they need to be near the block), the final position of lift is randomly placed above the block's starting position.
As shown in the sample trajectory in Figure 5(b), the label prediction of the trained model closely aligns with the ground truth label from the example trajectory. Over all of the test trajectories, the average label classification accuracy was 96%.
5.3 Visualizing the Learned Latent Space
To better understand the role of the embedding space in predicting the primitive probability distribution, we visualized the embedding of all state vectors from the test set in the recurrent hidden layer. We would expect a useful latent embedding to naturally cluster states that correspond to different primitives into distinct locations in the embedding space.
Figure 7 shows the layout of the latent space in two dimensions. Each point corresponds to a state vector from the test dataset. The points are colored by the ground-truth label.
5.4 Jaco Dial Domain Dataset
We also make use of the Jaco dial domain dataset illustrated in Figure 8. The dial dataset is composed of demonstrations of a Jaco manipulator pressing 4 keys in sequence (e.g., 3, 5, 4, 7). The positions of the keys are randomly shuffled for each demonstration, but the position of each key is given in the state vector. The intention is to treat pressing an individual digit as a behavior primitive. For this dataset, label prediction accuracy is a challenging metric without a task sketch because the starting position of the Jaco may not provide clues about which button will be pressed; as the Jaco gets closer to a button, it becomes clearer which button will be pressed. The dataset of dial-pad demonstrations was generated using the default parameters and code from TACO [29].
5.5 Dial Domain Comparison
The goal of this comparison is to evaluate the label prediction accuracy of the metacontroller in PICO. To isolate the label predictions of the metacontroller, the behavior primitive library is pretrained on the training dataset of 1200 demonstrations and frozen. Label classification and action prediction accuracy are then evaluated on the test set of 280 demonstrations.
The average results of five runs are shown for TACO and CTC. We evaluate each approach using the same label accuracy and action prediction metrics. The results are summarized in Table 2. We found that our approach achieves the highest label accuracy at 65%. The overall label accuracy of PICO on the dial dataset is lower than on the Husky+UR5 dataset. Additional analysis revealed that many of the mislabelings occurred at the beginning of a new key press, where context about where the Jaco is moving next is weakest. The dataset is also more challenging than the Husky dataset because the number of unique behavior primitives increases from 3 to 10.
Also of note, we compare our results to TACO, which is a weakly supervised approach: TACO is given the ordering of tasks. For task sequences of length 4, this means that a random baseline would be expected to achieve an accuracy of 25%. For an unlabeled approach like PICO, any of the 10 behavior primitives could be selected at each timepoint, so with unlabeled demonstrations the expected accuracy of a random baseline is 10%.
In this paper, we describe PICO, an approach to learn behavior primitives from unlabeled demonstrations and a partial set of behavior primitives. We optimize a metric that directly minimizes reconstruction error for a set of demonstrations using sequences of behavior primitives. We directly compare our results to similar approaches using demonstrations generated from simulations of two different robotic platforms and achieve both better label accuracy and better reconstruction accuracy as measured by action prediction mean squared error. While we have demonstrated success in these tasks, there are limitations to our approach. The number of additional primitives to add to the library must be decided prior to training. In spite of these limitations, we believe that PICO is a useful contribution to the community that may be relevant in a number of different domains.
Table 1: Husky+UR5 — label accuracy and MSE action prediction.
Table 2: Jaco Pinpad — label accuracy and MSE action prediction.
- www.clearpathrobotics.com/husky-unmanned-ground-vehicle-robot/ (accessed 2019-09-10).
- www.universal-robots.com (accessed 2019-09-10).
- (2017) Task level hierarchical system for BCI-enabled shared autonomy. In 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), pp. 219–225.
- (2017) Modular multitask reinforcement learning with policy sketches. In Proceedings of the 34th International Conference on Machine Learning, pp. 166–175.
- (2016) Learning to learn by gradient descent by gradient descent.
- (2009) A survey of robot learning from demonstration. Robotics and Autonomous Systems 57 (5), pp. 469–483.
- (2002) Learning a synaptic learning rule. In IJCNN-91-Seattle International Joint Conference on Neural Networks.
- Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research 13, pp. 227–303.
- (2017) One-shot imitation learning. In Advances in Neural Information Processing Systems, pp. 1087–1098.
- (2013) Simultaneous neural control of simple reaching and grasping with the modular prosthetic limb using intracranial EEG. IEEE Transactions on Neural Systems and Rehabilitation Engineering 22 (3), pp. 695–705.
- (2017) Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, pp. 1126–1135.
- (2006) Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning, pp. 369–376.
- (1999) Prosthetic control by an EEG-based brain-computer interface (BCI). In Proc. AAATE 5th European Conference for the Advancement of Assistive Technology, pp. 3–6.
- (2016) Individual finger control of a modular prosthetic limb using high-density electrocorticography in a human subject. Journal of Neural Engineering 13 (2), pp. 026017.
- (2017) Imitation learning: a survey of learning methods. ACM Computing Surveys 50 (2), pp. 21:1–21:35.
- (1991) Adaptive mixtures of local experts. Neural Computation 3, pp. 79–87.
- (2019) CompILE: compositional imitation learning and execution. In Proceedings of the 36th International Conference on Machine Learning, PMLR 97, Long Beach, California, USA, pp. 3418–3428.
- (2012) Robot learning from demonstration by constructing skill trees. The International Journal of Robotics Research 31 (3), pp. 360–375.
- (2016) Hierarchical deep reinforcement learning: integrating temporal abstraction and intrinsic motivation. In Advances in Neural Information Processing Systems, pp. 3675–3683.
- (2017) DART: noise injection for robust imitation learning.
- (2008) Brain-computer interface operation of robotic and prosthetic devices. Computer 41 (10), pp. 52–56.
- (2019) PLOTS: procedure learning from observations using subtask structure. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, pp. 1007–1015.
- (2007) Control of an electrical prosthesis with an SSVEP-based BCI. IEEE Transactions on Biomedical Engineering 55 (1), pp. 361–364.
- (2015) Recent advances in bioelectric prostheses. Neurology: Clinical Practice 5 (2), pp. 164–170.
- (2015) Policy distillation.
- (2016) One-shot learning with memory-augmented neural networks.
- (2010) Learning control in robotics. IEEE Robotics & Automation Magazine 17 (2), pp. 20–29.
- (2017) Outrageously large neural networks: the sparsely-gated mixture-of-experts layer. CoRR abs/1701.06538.
- (2018) TACO: learning task decomposition via temporal alignment for control. In International Conference on Machine Learning.
- (2002) Learning options in reinforcement learning. In Abstraction, Reformulation, and Approximation, Berlin, Heidelberg, pp. 212–223.
- (2018) Neural task programming: learning to generalize across hierarchical tasks. In IEEE International Conference on Robotics and Automation, pp. 1–8.
- (2017) Behavior-based SSVEP hierarchical architecture for telepresence control of humanoid robot to achieve full-body movement. IEEE Transactions on Cognitive and Developmental Systems 9 (2), pp. 197–209.