LASER: Learning a Latent Action Space for Efficient Reinforcement Learning

03/29/2021
by   Arthur Allshire, et al.
Stanford University

The process of learning a manipulation task depends strongly on the action space used for exploration: posed in the incorrect action space, solving a task with reinforcement learning can be drastically inefficient. Additionally, similar tasks or instances of the same task family impose latent manifold constraints on the most effective action space: the task family can be best solved with actions in a manifold of the entire action space of the robot. Combining these insights we present LASER, a method to learn latent action spaces for efficient reinforcement learning. LASER factorizes the learning problem into two sub-problems, namely action space learning and policy learning in the new action space. It leverages data from similar manipulation task instances, either from an offline expert or online during policy learning, and learns from these trajectories a mapping from the original to a latent action space. LASER is trained as a variational encoder-decoder model to map raw actions into a disentangled latent action space while maintaining action reconstruction and latent space dynamic consistency. We evaluate LASER on two contact-rich robotic tasks in simulation, and analyze the benefit of policy learning in the generated latent action space. We show improved sample efficiency compared to the original action space from better alignment of the action space to the task space, as we observe with visualizations of the learned action space manifold. Additional details: https://www.pair.toronto.edu/laser




I Introduction

Deep Reinforcement Learning (RL) has fueled rapid progress in robot manipulation by enabling learning of closed-loop visuomotor control policies that integrate perception and control in a single system [1]. However, the focus of end-to-end policy learning has been on the complexity of the observation (or state) space, while the decision-space parameterization that affords efficient learning has been less studied. The best action space for learning continuous control of a robotic task depends on the task's specific characteristics [2].

Consider the task of opening a door with unknown swing radius (kinematics) and torsion spring (dynamics). The agent must discover that the task progresses in a particular manifold of the action space, while yanking the arm up or down or flailing around has no value, as illustrated in Fig. 1. This form of reasoning is necessary to perform many similar everyday tasks and arguably forms the basis of efficient generalization. However, a generic RL agent’s policy is often trained in raw actuation spaces, such as joint angles or torques, discarding the latent structure in the manipulation task or task-family.

Fig. 1: Learning latent action spaces for efficient reinforcement learning. Manipulation tasks, such as opening a door, are often structured and do not require exploration in the entire action space, only on a certain manifold. LASER learns this action space manifold from data, either offline (expert) or online (training with LASER), enabling faster learning in subsequent novel instances of the task by transferring the knowledge via an efficient latent action space.

We can model the solution of a sensorimotor control task (a policy), without loss of generality, as a function that maps observations to control commands in the low-level action space that are sent to the robot's actuators. Inspired by prior work [3], we propose to factorize the original problem into two sub-problems: first, learning a mapping from observations to actions in a new space of reference signals provided by a robot controller; and second, using the robot controller to map from reference signals to actuation commands. The combined control law becomes a = g(z, s), where z is an abstract action providing a reference signal to be tracked by the mapping g. In contrast to prior works, we propose to learn the new action space (and the controller mapping to the robot's actuation commands) from the robot's experiences on similar tasks. As a result, the original hard policy learning problem, π: S → A, is factorized into two coupled, simpler problems: 1) finding a suitable action representation, i.e., defining the mapping between this space and the original low-level action space, g: Z × S → A, and 2) finding the mapping between observations and actions in this new latent representation space, π_z: S → Z.

In this work we propose an algorithmic approach to learn a Latent Action Space for Efficient exploration in Reinforcement learning (LASER) in a data-driven manner. LASER learns, from a set of task instances, an action space suited to all instances of the task. LASER's learned action space then accelerates subsequent training processes on previously unseen task instances by encoding the implicit structure of the task in an efficient latent action representation.

LASER is trained as an encoder-decoder model that learns to map the manifold of low-level control inputs (in our experiments, joint positions and joint torques) to a latent action space. To this end, in LASER latent spaces we enforce controllability (coverage of the raw controls required for a task or task family) as well as dynamical consistency across states (the same action in similar states has a similar effect). We experiment with two LASER variants: learning iteratively during policy learning (online), and learning from batch data generated by an expert policy (offline). We evaluate these LASER variants in two manipulation tasks and observe, in both the online and offline settings, that the learned action space accelerates training on new task instances. We also analyze the learned action space and observe that its dimensions are disentangled and well aligned with the task semantics.

Summary of contributions:

  1. We present LASER, an algorithmic approach to learning efficient latent action spaces from off-policy or online actions of an expert, to accelerate subsequent training on unseen tasks.

  2. We compare learning efficiency with RL in the original action space and in the LASER-learned action space, and observe that the action space learned by LASER provides marked improvements in subsequent learning iterations, indicating a transfer of information between tasks.

  3. We evaluate different variants of the LASER framework, including state-conditional reconstruction as well as variational reconstruction, in two manipulation tasks in simulation, and find that the learned action spaces correlate clearly with the dimensions of the task space.

II Related Work

Robot control literature offers multiple analytical maps (controllers) to transform action spaces (such as joint space) into task spaces (such as end-effector position, velocity, and acceleration) [4, 5, 6, 7]. Prior work has shown that the choice of action space, often based on sophisticated analytical mappings, affects policy learning [3, 8], and has proposed action space abstractions that facilitate learning of different families of tasks [3, 9, 10, 11, 12]. However, the optimal action space for a given task family is unclear a priori. For instance, in a tennis swing, it is important to control the position, velocity, and possibly the acceleration of the end-effector [13], while in a surface-to-surface alignment task, minimizing the moment around a contact is important for robustness [14]. In this work, we propose an autonomous data-driven method to infer a latent action space better suited for learning from experiences on similar tasks.

Data-driven discovery of (near-)optimal action abstractions for efficient RL has been scarcely studied as an alternative to human-derived analytical controllers. Only the discovery of temporal action abstractions (options) has received significant attention, both in hierarchical control and in the options framework [15, 16, 17, 18, 19, 20, 21, 22]. However, temporal action abstraction is orthogonal to the underlying action space and could be applied to LASER as well; it is, therefore, not the focus of this study.

Some very recent works have explored learning abstractions for action spaces. Some of them [23, 24] proposed to learn an equivalent latent full Markov decision process (MDP) of the original problem in which reinforcement learning (RL) is easier. In contrast, we only learn a transformation of the action space, without needing to learn the full latent MDP.

Chandak et al. [25] learn a continuous manifold to embed discrete actions based on the similarity of their effects. In the new continuous action space, the learning process is faster because the solution can exploit the correlated outcomes expected of close-by discrete actions. In contrast, we map between two continuous action spaces for robot control. Similarly to us, Losey et al. [26] learn a new action space as a manifold within the original low-level action space. However, their method is aimed not at facilitating policy learning but at simplifying human teleoperation.

The work we present here is connected to meta-learning, where the objective is to transfer knowledge from similar tasks when training on a novel instance of the task [27, 28, 29]. These methods use a policy to transfer information between tasks; we instead propose to use a learned action space as the medium for this knowledge transfer, indicating an alternative form of meta-learning.

III Problem Formulation

We formulate our continuous-control robot tasks as discrete-time Markov Decision Processes (MDPs) defined by the tuple M = (S, A, T, R, γ). Here, S is the state space, A is the action space, T(s' | s, a) is the state transition model characterizing the probability of transitioning to state s' from taking action a in state s, R is a state reward function, and γ is the discount factor. The goal of an RL agent is to learn an action selection policy π that maximizes the expected discounted sum of rewards, E[Σ_t γ^t R(s_t)], from any state [30].

Following the formalism of van der Pol et al. [24], we assume we can lift the original MDP, M, into a new MDP with latent action space, M_z = (S, Z, T_z, R, γ), where Z is the latent action space and T_z is the latent dynamics model. We assume there exists some optimal mapping, g: Z × S → A', where A' ⊆ A, satisfying the property that, for any task in the given task family, some sequence of control actions in A' is optimal for solving the task, so that RL exploration within the space Z is more efficient for solving unseen tasks within the given task family than RL exploration in A.

The latent MDP can thus be viewed as an abstraction of the original MDP such that g maps latent actions, z ∈ Z, to control actions, a ∈ A'. The insight here is that acting within this subset of the original action space prevents exploration of actions that would never be optimal for solving those tasks. Using this insight, the goal of LASER is to learn the transformation, g, that maps latent actions to actions in the original action space, allowing any task within the given task family to be solved efficiently.

Assuming that LASER has found an optimal mapping, g, from the latent action space to the original action space, an RL policy would be able to explore the optimal region of the original action space by acting in the latent action space of the lifted latent MDP M_z. A policy on the latent MDP, π_z: S → Z, generates a policy on the original MDP, π(s) = g(π_z(s), s). As shown by van der Pol et al. [24], the generated policy on the original MDP is optimal if the policy on the lifted MDP is optimal.
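The composition π(s) = g(π_z(s), s) can be sketched in a few lines; below, a random linear map stands in for the learned decoder g and a toy latent policy stands in for π_z (all names and dimensions are illustrative, not from the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
Z_DIM, A_DIM, S_DIM = 2, 7, 10
W_g = rng.normal(size=(A_DIM, Z_DIM + S_DIM)) * 0.1  # stand-in for trained decoder weights

def decoder_g(z, s):
    """Stand-in for the learned mapping g: Z x S -> A."""
    return W_g @ np.concatenate([z, s])

def latent_policy(s):
    """Toy latent policy pi_z: S -> Z."""
    return np.tanh(s[:Z_DIM])

def lifted_policy(s):
    """Generated original-space policy: pi(s) = g(pi_z(s), s)."""
    return decoder_g(latent_policy(s), s)

s = rng.normal(size=S_DIM)
assert lifted_policy(s).shape == (A_DIM,)  # actions land in the original action space
```

The RL algorithm only ever optimizes the latent policy; the decoder turns its low-dimensional outputs into full actuation commands.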

IV Learning Action Spaces for Efficient RL

This section describes LASER, our algorithm for representation learning of latent action spaces (see Fig. 2). LASER learns a transformation between a latent action space, Z, and the original action space, A, acting as a latent controller for more efficient policy learning in the latent action space. We outline the LASER learning algorithm in online and offline settings, and a procedure for transfer learning with LASER.

IV-A Representing Latent Action Spaces

Fig. 2: LASER Overview: We train a latent action space using batches of tuples (s, a, s'). These batches come from a previously generated dataset (LASER offline) or from a dynamically generated replay buffer (LASER online). Actions are used in an encoder-decoder architecture with a reconstruction loss, L_rec. The encoder's Gaussian prediction, N(μ, σ), is regularized via the KL loss L_KL. The robot components of the state, s_r, and next state, s'_r, together with the latent actions, are used to train a latent state transition model, T_z, with a dynamics loss, L_dyn. In the LASER online variant, this process alternates with policy learning, which generates new tuples to add to the dataset (replay buffer) and uses the actor loss, L_actor. The policy generates actions in the latent action space, z, that are decoded into the robot's original action space. Gradient updates from the actor loss also propagate through the decoder and are applied to online LASER's decoder during policy updates. The learned LASER action space accelerates subsequent training for new instances of the task.

LASER learns an MDP transformation map between original actions a ∈ A and latent actions z ∈ Z, as presented in Sec. III. We assume that the mapping between latent and original action spaces depends on the current state of the robot. This is the case for the analytical robot controllers that we take as inspiration for this work [3, 31], as we can see with an example. Suppose that the original action space of the MDP is the space of torques at the joints of a robot, a frequent low-level action space for robots in research. Moreover, suppose that the task is best learned in a latent action space corresponding to desired positions for each of the robot's joints. Given an action in the latent space, a desired joint configuration q_d, the joint torques corresponding to the desired joint position would depend on the current state of the robot: if the current robot state were close to or at the desired joint position, the torques would be close to zero, but if the robot state were very different from the desired joint position, the transformation into the original action space would lead to larger joint torques. The transformation is, however, independent of other state information such as the state of the environment or information about the task; that information is used by the policy to decide what actions to perform.

Based on these insights, in LASER we propose to learn a representation mapping between original and latent action spaces with an encoder-decoder neural architecture conditioned on the current state of the robot. We assume that the state of the environment, s, can be separated into a distinct robot state s_r and non-robot state s_e: s = (s_r, s_e). s_r can be extracted from the full state s and contains the kinematic and dynamic information of the current state of the robotic agent, such as joint configurations, accelerations, and the Cartesian pose of the end-effector, provided by the robot's proprioceptive sensing. s_e contains other information about the state of the environment that can be task-relevant, such as the goal of the task.

The action encoder of LASER is a variational neural network E, parameterized by θ_E, that encodes an action in the original action space, a ∈ A, conditioned on the current robot state s_r, into a latent action z ∈ Z. The function g defined in Sec. III for mapping from latent actions to control inputs in the original action space is represented by a latent state-dependent variational decoder neural network D, parameterized by θ_D, where â = D(z, s_r) is the reconstruction of a, an action in the original space that would have resulted in z. Finally, the latent state transition function can be modeled as a function T_z(s_r, z), parameterized by θ_T, that outputs the next robot state.

After learning an action space representation with LASER, an RL policy can then be trained in this latent action space, using the decoder D to map the policy's latent actions z back to the original action space A.
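As a concrete toy illustration of these interfaces, the sketch below uses random linear maps in place of the trained networks E, D, and T_z; the dimensions and helper names are ours, chosen only to show the state-conditioned encode/decode/predict signatures:

```python
import numpy as np

# Illustrative dimensions (assumed, not from the paper): 7-DoF action,
# 4-dim latent action, 10-dim robot state.
A_DIM, Z_DIM, SR_DIM = 7, 4, 10
rng = np.random.default_rng(0)

W_enc = rng.normal(size=(Z_DIM, A_DIM + SR_DIM)) * 0.1   # stand-in encoder (mean head)
W_dec = rng.normal(size=(A_DIM, Z_DIM + SR_DIM)) * 0.1   # stand-in decoder
W_dyn = rng.normal(size=(SR_DIM, SR_DIM + Z_DIM)) * 0.1  # stand-in latent transition model

def encode(a, s_r):
    """Mean of the state-conditioned variational encoder E(a, s_r) -> z."""
    return W_enc @ np.concatenate([a, s_r])

def decode(z, s_r):
    """State-conditioned reconstruction a_hat = D(z, s_r)."""
    return W_dec @ np.concatenate([z, s_r])

def predict_next(s_r, z):
    """Latent transition model T_z(s_r, z) -> predicted next robot state."""
    return W_dyn @ np.concatenate([s_r, z])

a, s_r = rng.normal(size=A_DIM), rng.normal(size=SR_DIM)
z = encode(a, s_r)
assert z.shape == (Z_DIM,) and decode(z, s_r).shape == (A_DIM,)
```

Note that both the encoder and decoder receive s_r but not s_e: only the robot's proprioceptive state conditions the action mapping, as argued above.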

IV-B Learning an Action Representation with LASER

To learn a latent action space with LASER, defined by an encoder-decoder action space mapping, we leverage a replay buffer with experiences of attempts to complete the task. These demonstrations may be collected ahead of time by an expert policy, as in offline LASER, or collected with a suboptimal policy as training progresses, as in online LASER (see Sec. IV-C). The replay buffer contains triplets of state, action, and next state, (s, a, s'), from the original MDP, with a ∈ A. The dataset therefore represents a distribution of control actions, in their associated states, useful for achieving tasks within the intended task family.

(a) Door
(b) Wipe
Fig. 3: Two simulation tasks used to evaluate LASER. a) Door: the agent controls a Panda robot and has to open a door a given angle. b) Wipe: the agent controls a Panda robot with an eraser end-effector and needs to contact a surface and wipe the dirt elements on it. The two tasks involve solving problems in clear contact-generated submanifolds of the action space. LASER can help an agent learn the actions to traverse these manifolds and accelerate training in new instances of the tasks.

As shown in Fig. 2, the LASER framework incorporates several loss terms to find a suitable latent action space. In order to allow the latent action space to have lower dimensionality than the original action space, we use autoencoders that preserve the principal dimensions of variation of the original space in the latent action space. Thus, the first loss we impose is a reconstruction loss. The decoder is trained to reconstruct an action â from the latent action z from the encoder and the given robot state s_r. This results in a typical autoencoder reconstruction loss [32] defined as:

L_rec = ||a − D(E(a, s_r), s_r)||²   (1)

As found in previous work on learning latent action spaces [26], an effective latent action space should satisfy three properties: latent controllability, latent consistency, and latent scaling. Latent controllability requires the transitions between two consecutive latent states to mimic the transitions between their corresponding states in the original MDP. Latent consistency enforces similar state transition behavior when the same latent action is taken in similar states: assuming that executing a latent action z at state s results in a transition to state s', if we execute another latent action z₂ close to z (z₂ ≈ z) at s, we transition to a new state s'₂ that is close to s' (s'₂ ≈ s'). Finally, latent scaling ensures that applying larger latent actions leads to larger changes in latent state. As found by van der Pol et al. [24], these properties can be achieved by incorporating a latent state dynamics loss that forces the predicted state from the learned latent state transition model, T_z(s_r, z), to be close to the true next robot state s'_r for a latent action z:

L_dyn = ||s'_r − T_z(s_r, z)||²   (2)

Finally, we also include a regularization component to the loss in the form of a Kullback–Leibler (KL) divergence term, as is common in variational autoencoder architectures [33]. The KL loss ensures the encoder learns a smooth latent space distribution with zero mean:

L_KL = KL(N(μ, σ) || N(0, I))   (3)

The final LASER loss function is a weighted sum of the aforementioned losses, with a constant weight for each component. The weights allow prioritizing some objectives over others, e.g. the reconstruction over the KL divergence loss, following a β-VAE approach [34]:

L(s, a, s', θ_E, θ_D, θ_T) = β_rec L_rec + β_dyn L_dyn + β_KL L_KL
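The combined objective can be sketched numerically. The following NumPy function is a minimal illustration of Eqs. 1-3 and their weighted sum; the shapes, default weights, and the closed-form Gaussian KL are our choices for the sketch, not values taken from the paper:

```python
import numpy as np

def laser_loss(a, a_hat, sr_next, sr_next_pred, mu, log_var,
               beta_rec=1.0, beta_dyn=1.0, beta_kl=1e-3):
    """Weighted LASER objective (sketch); beta weights are illustrative defaults."""
    l_rec = np.mean((a - a_hat) ** 2)                  # Eq. 1: action reconstruction
    l_dyn = np.mean((sr_next - sr_next_pred) ** 2)     # Eq. 2: latent dynamics consistency
    # Eq. 3: KL(N(mu, diag(sigma^2)) || N(0, I)) in closed form
    l_kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return beta_rec * l_rec + beta_dyn * l_dyn + beta_kl * l_kl

# Perfect reconstruction, perfect dynamics, and a standard-normal posterior
# make every term vanish:
zero = laser_loss(np.zeros(7), np.zeros(7), np.zeros(10), np.zeros(10),
                  np.zeros(4), np.zeros(4))
assert zero == 0.0
```

In practice each term would be computed on minibatches and backpropagated through the encoder, decoder, and transition networks jointly.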

Fig. 4: Exp1. LASER trained on offline batch data: SAC on the original action space, a LASER action space, and on ablations of LASER (Sec. V-B) on the Door (left) and Wipe (right) tasks. LASER and LASER ablations are trained offline (Sec. IV-C) on trajectories sampled from an expert SAC. Our results show that training in the LASER action space converges faster than training in the original action space.

IV-C Offline and Online LASER Variants

There are two alternative variants to train LASER: offline and online (Fig. 2). Both train using the LASER loss of Sec. IV-B, but differ in their training process; the online variant also leverages an additional loss, as we will see below. In the offline LASER variant, we leverage a dataset of expert policy experiences acquired for a base task in order to improve learning efficiency on subsequent transfer tasks. The offline LASER process is as follows. First, we train a standard RL algorithm to convergence on the base task, using the original action space of the robot. After convergence, we roll out episodes using the trained policy, generate a dataset of experiences (consisting of state, action, next state tuples), and train LASER on this dataset. Finally, we train RL agents on transfer tasks using the learned LASER action space: the trained policies generate actions in the latent space that are transformed into the original action space of the task by LASER's trained state-conditioned decoder.
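The three-stage offline procedure can be outlined in code. Here train_sac, rollout, and train_laser are hypothetical stand-ins for the actual training routines (our names, not an API from the paper); the sketch only shows the flow of data between the stages:

```python
def train_sac(task, action_space):
    """Stand-in: returns a trained policy acting in the given action space."""
    return {"task": task, "action_space": action_space}

def rollout(policy, task, length=3):
    """Stand-in: one episode of (s, a, s') tuples from the policy."""
    return [("s", "a", "s_next")] * length

def train_laser(dataset):
    """Stand-in: fits encoder/decoder/transition on the dataset, returns decoder."""
    return "laser_decoder"

def offline_laser(base_task, transfer_tasks, n_episodes=1000):
    expert = train_sac(base_task, action_space="original")       # 1) expert on base task
    dataset = [t for _ in range(n_episodes)
               for t in rollout(expert, base_task)]              # (s, a, s') experiences
    decoder = train_laser(dataset)                               # 2) fit LASER offline
    # 3) train on transfer tasks, acting through the learned decoder
    return [train_sac(task, action_space=decoder) for task in transfer_tasks]

policies = offline_laser("door", ["door_damped"])
assert policies[0]["action_space"] == "laser_decoder"
```

The key property is that stage 3 never touches the base task again: only the learned action space carries information forward.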

In the online LASER variant, we train LASER at the same time that a policy is learning a task, without any pre-training of either the policy or the representation. The policy uses the non-stationary action space provided by LASER. We alternate between LASER training, using experiences from the policy's replay buffer, and policy training in the latest LASER space. An online LASER policy consists of a multi-layer perceptron head followed by the LASER action decoder (Fig. 2, bottom). This decoder shares weights with the decoder in the encoder-decoder LASER architecture. To improve LASER training, we exploit the gradients from the policy to optimize LASER's decoder: we propagate gradients through the decoder during policy training iterations. We found that this significantly speeds up online training. In Sec. V-B (Exp 3), we show that the online LASER procedure can learn the action representation and the policy simultaneously, without incurring any efficiency penalty compared to training in the original action space. Moreover, it retains the benefit of being able to learn new policies in the learned action space.

V Experimental Evaluation

In our experiments, we aim to answer four questions:

  1. How does an RL policy learning in a LASER action space, Z, trained on offline batched data, compare to an RL policy learning in the original action space, A?

  2. How does the action space learned with LASER transfer to different variants of the task?

  3. Does online learning of a LASER action space affect policy learning?

  4. How do LASER action spaces align with the natural dimensions of a task?

Fig. 5: Exp2. LASER task transfer: SAC on the original and LASER action spaces in new instances of Door (left), and Wipe (right). In both tasks, we observe a significant benefit in efficiency when learning in the LASER action space compared to the original action space, indicating a transfer of information in the form of an efficient action space.

V-A Experimental Setup

Environments: We conduct experiments on two contact-rich tasks using the RoboSuite simulator  [35]: Door and Wipe (Fig. 3). The goals are to grasp and move a door to a predefined configuration, and to wipe spots of “dirt” on a table, respectively. All experiments are conducted using 16 environments in parallel. We use Soft Actor Critic (SAC, [36]) as our reinforcement learning algorithm. In the Door environment, the original action space is joint torque control; in the Wipe environment, the original action space is joint position control.

Ablations: LASER involves learning three models: 1) a robot state-conditioned encoder network, E, for lifting actions from the original action space, A, to the latent action space, Z; 2) a decoder network, D, for mapping actions from the latent action space, Z, back to the original action space, A; and 3) a latent state transition function, T_z, to impose a smooth transition of robot states from applying latent actions in the latent MDP. LASER uses a state-conditioned decoder (“C”), and losses on the dynamics (“D”) and on the KL divergence to an isotropic normal distribution in latent action space, corresponding to a variational auto-encoder architecture (“VAE”), summarized as the model “CDVAE”. We compare LASER to the following ablations: 1) variants without state-conditioning at the encoder (no “C” in the name), 2) variants without the dynamics loss (no “D” in the name), and 3) variants without KL regularization, leading to a simple auto-encoder architecture (“AE” in the name instead of “VAE”). For all variants, we use constant weights β_rec and β_dyn, with β_KL set to zero for the non-variational variants and nonzero for the variational ones (Sec. IV-B).

Fig. 6: Exp3. LASER trained online with policy learning: SAC on the original and online LASER action spaces on Door (left), and Wipe (right). The improved performance (faster and more optimal convergence) of SAC in the LASER action spaces indicates that it is possible to learn a latent action space simultaneously with policy learning, and that learning such an action space improves the efficiency of policy learning.

V-B Experiments

Exp1. LASER trained on offline batch data: The aim of this experiment is to test whether policy learning is more efficient in a learned latent action space than in the original action space. First, we train an RL policy to convergence on a set of tasks. We then sample 1,000 episodes from this expert policy on each task to form a dataset of expert experiences, which we use to train LASER and its ablations; we then train SAC in the resulting action spaces.

The results are shown in Fig. 4. In both the Door and Wipe tasks, SAC with offline LASER converges faster than SAC on the original action space. In Door, SAC achieves and maintains a reward of over 100 zero-shot in the LASER action space, while it takes 500,000 steps to achieve the same reward in the original action space. In the Wipe task, SAC reaches a reward of 300 in 600,000 steps with LASER, compared to 1.5 million steps in the original action space. This suggests that LASER is able to learn action spaces that simplify learning contact-rich tasks involving the manipulation of constrained mechanisms.

Exp2. LASER task transfer: We investigate task transfer with the offline version of LASER presented in Sec. IV-C. We use the action spaces learned with offline LASER for the Door and Wipe tasks (Exp1) to learn with SAC in a different instance of each base task, and compare to learning in the original action space.

In the unseen Door transfer task, we increase the damping coefficient of the door by a factor of 5. The optimal joint torques to solve the transfer task are of higher magnitude than the optimal joint torques to solve the base task, and hence the optimal submanifold of the original joint torque action space is out of the distribution of the LASER action space. In the unseen Wipe transfer task, we have the robot wipe randomly placed circles (instead of lines), so it needs to learn different motions within the same task manifold.

In the Door variant, we reach a reward of 100 zero-shot in the LASER action space, compared to over 200,000 steps in the original action space. Because the optimal task manifold of the transfer task differs from that of the base task, SAC in the LASER action space converges to a slightly lower final reward than SAC in the original action space. However, the zero-shot performance of SAC with offline LASER suggests that the action space captures information common to solving both tasks. In the Wipe variant, SAC reaches a reward of 300 in 600,000 steps with LASER, only achieving the same performance after 1.4 million steps in the original action space.

Our results, depicted in Fig. 5, suggest that the action space learned by LASER provides a useful representation for learning new task instances unseen during representation learning, improving efficiency in subsequent training processes.

Exp3. LASER trained online with policy learning: In this experiment we benchmark the performance when interleaving representation learning (using the LASER losses) with policy learning (using the SAC losses), as described in Sec. IV-C. Surprisingly, SAC in the action space learned simultaneously with online LASER converges faster than SAC in the original action space, as shown in the training curves in Fig. 6. This suggests that online learning of the action space does not impede policy learning but rather facilitates it.

Fig. 7: Exp4a. Dimensionality analysis (best viewed in color): Mean of each dimension of the variational encoder output for 10 different rollouts; each latent action dimension is shown in a different colour. Only 2 out of the 4 dimensions are utilised (those coloured in blue and red; while green and orange are zero), showing that LASER recovers an action space that is as low-dimensional as possible for efficient policy learning.

Exp4. Qualitative action space evaluation: In this experiment, we investigate the action space manifold learned by LASER in the Wipe environment. Our goal is to observe whether the latent action space aligns with the natural dimensions of the task space in the Wipe task.

In a first experiment, we retrieve the encoded actions during rollouts of the Wipe task to inspect the dimensionality of the learned latent manifold. The results are depicted in Fig. 7. We observe that although we allow a latent action space of dimensionality 4 to be learned, LASER reduces the dimensionality of the latent space to 2 dimensions, learning a manifold that corresponds to the 2-dimensional task-space.

In a second experiment, we traverse the latent action space by continuously applying latent actions in a sinusoidal pattern, executing the actions obtained from the LASER decoder outputs, and collecting the end-effector position. The end-effector motion is depicted in Fig. 8. We observe that the robot is controlled along the dimensions of interest to the task, the xy-plane over the table surface. The end-effector's motion is primarily lateral, encapsulating the motions necessary to wipe and pan around the table. These experiments indicate that the learned action representation aligns well with the task subspace, mapping to the submanifold of the original action space on which the task should be executed.

Fig. 8: Exp4b. Latent space traversal (best viewed in color): End-effector positions from 200 trajectories within a LASER space on Wipe. The traversals lead to end-effector motions approximately parallel to the xy-plane. The blue plane represents the table in the Wipe task. Trajectories are colored by timestep in the rollout, with lighter colors representing earlier timesteps. LASER learns a space which aligns with the natural task space.

VI Conclusion

We presented LASER, an approach to learning a latent action space for efficient reinforcement learning. LASER transforms the original MDP of an RL problem into a new MDP in which exploration is easier. The action representation is learned from expert data in an offline (pre-acquired data) or online manner (while the data is acquired). LASER is a variational encoder-decoder model that maps actions in the original action space into a disentangled latent space while maintaining both state-conditioned reconstruction and latent space dynamic consistency. We evaluated LASER and its ablations in two contact-rich manipulation tasks (door opening and surface wiping), combined with a state-of-the-art policy learning algorithm (SAC). Our results revealed that LASER facilitates training in the same tasks and helps transfer knowledge for faster exploration and convergence in transfer tasks. Visualizations of the learned action space indicate that LASER learns an action space aligned with the natural dimensions of the task space, leading to the observed improvement in subsequent training processes.

References

  • Levine et al. [2016] S. Levine, C. Finn, T. Darrell, and P. Abbeel, “End-to-end training of deep visuomotor policies,” JMLR, vol. 17, no. 1, 2016.
  • Bruyninckx and De Schutter [1996] H. Bruyninckx and J. De Schutter, “Specification of force-controlled actions in the "task frame formalism": a synthesis,” IEEE Transactions on Robotics and Automation, vol. 12, no. 4, pp. 581–589, 1996.
  • Martín-Martín et al. [2019] R. Martín-Martín, M. Lee, R. Gardner, S. Savarese, J. Bohg, and A. Garg, “Variable impedance control in end-effector space: an action space for reinforcement learning in contact-rich tasks,” in Proceedings of the International Conference on Intelligent Robots and Systems (IROS), 2019.
  • Mason [1981] M. T. Mason, “Compliance and force control for computer controlled manipulators,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 11, no. 6, pp. 418–432, 1981.
  • Khatib [1987] O. Khatib, “A unified approach for motion and force control of robot manipulators: The operational space formulation,” IEEE Journal on Robotics and Automation, vol. 3, no. 1, pp. 43–53, 1987.
  • Kröger et al. [2004] T. Kröger, B. Finkemeyer, U. Thomas, and F. M. Wahl, “Compliant motion programming: The task frame formalism revisited,” Mechatronics & Robotics, Aachen, Germany, 2004.
  • Hogan [1985] N. Hogan, “Impedance control: An approach to manipulation,” Journal of dynamic systems, measurement, and control, vol. 107, p. 17, 1985.
  • Varin et al. [2019] P. Varin, L. Grossman, and S. Kuindersma, “A comparison of action spaces for learning manipulation tasks,” in Proceedings of the International Conference on Intelligent Robots and Systems (IROS), 2019.
  • Bogdanovic et al. [2019] M. Bogdanovic, M. Khadiv, and L. Righetti, “Learning variable impedance control for contact sensitive tasks,” arXiv preprint arXiv:1907.07500, 2019.
  • Gao et al. [2020] J. Gao, Y. Zhou, and T. Asfour, “Learning compliance adaptation in contact-rich manipulation,” in Proceedings of the International Conference on Robotics and Automation (ICRA), 2020.
  • Pervez et al. [2017] A. Pervez, Y. Mao, and D. Lee, “Learning deep movement primitives using convolutional neural networks,” in 2017 IEEE-RAS 17th International Conference on Humanoid Robots (Humanoids).   IEEE, 2017, pp. 191–197.
  • Buchli et al. [2011] J. Buchli, F. Stulp, E. Theodorou, and S. Schaal, “Learning variable impedance control,” IJRR, vol. 30, no. 7, pp. 820–833, 2011.
  • Ijspeert et al. [2002] A. J. Ijspeert, J. Nakanishi, and S. Schaal, “Movement imitation with nonlinear dynamical systems in humanoid robots,” in ICRA, vol. 2, 2002, pp. 1398–1403.
  • Khansari et al. [2016] M. Khansari, E. Klingbeil, and O. Khatib, “Adaptive human-inspired compliant contact primitives to perform surface–surface contact under uncertainty,” IJRR, vol. 35, no. 13, pp. 1651–1675, 2016.
  • Sutton et al. [1999] R. S. Sutton, D. Precup, and S. Singh, “Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning,” Artificial Intelligence, vol. 112, no. 1-2, pp. 181–211, 1999.
  • Stolle and Precup [2002] M. Stolle and D. Precup, “Learning options in reinforcement learning,” in International Symposium on abstraction, reformulation, and approximation.   Springer, 2002, pp. 212–223.
  • Menache et al. [2002] I. Menache, S. Mannor, and N. Shimkin, “Q-cut—dynamic discovery of sub-goals in reinforcement learning,” in European Conference on Machine Learning.   Springer, 2002, pp. 295–306.
  • Konidaris et al. [2011] G. Konidaris, S. Kuindersma, R. Grupen, and A. Barto, “Autonomous skill acquisition on a mobile manipulator,” in Twenty-Fifth AAAI Conference on Artificial Intelligence, 2011.
  • Kulkarni et al. [2016] T. D. Kulkarni, K. Narasimhan, A. Saeedi, and J. Tenenbaum, “Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation,” in Advances in neural information processing systems, 2016, pp. 3675–3683.
  • Bacon et al. [2017] P.-L. Bacon, J. Harb, and D. Precup, “The option-critic architecture,” in Thirty-First AAAI Conference on Artificial Intelligence, 2017.
  • Krishnan* et al. [2017] S. Krishnan*, A. Garg*, S. Patil, C. Lea, G. Hager, P. Abbeel, and K. Goldberg (* equal contribution), “Transition state clustering: Unsupervised surgical trajectory segmentation for robot learning,” IJRR, vol. 36, no. 13-14, pp. 1595–1618, 2017.
  • Fang et al. [2019] K. Fang, Y. Zhu, A. Garg, S. Savarese, and L. Fei-Fei, “Dynamics learning with cascaded variational inference for multi-step manipulation,” in Conference on Robot Learning (CoRL), oct 2019.
  • Whitney et al. [2019] W. Whitney, R. Agarwal, K. Cho, and A. Gupta, “Dynamics-aware embeddings,” in International Conference on Learning Representations, 2019.
  • van der Pol et al. [2020] E. van der Pol, T. Kipf, F. A. Oliehoek, and M. Welling, “Plannable approximations to MDP homomorphisms: Equivariance under actions,” in Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems, 2020, pp. 1431–1439.
  • Chandak et al. [2019] Y. Chandak, G. Theocharous, J. Kostas, S. Jordan, and P. Thomas, “Learning action representations for reinforcement learning,” in International Conference on Machine Learning, 2019, pp. 941–950.
  • Losey et al. [2019] D. P. Losey, K. Srinivasan, A. Mandlekar, A. Garg, and D. Sadigh, “Controlling assistive robots with learned latent actions,” arXiv preprint arXiv:1909.09674, 2019.
  • Duan et al. [2016] Y. Duan, J. Schulman, X. Chen, P. L. Bartlett, I. Sutskever, and P. Abbeel, “RL²: Fast reinforcement learning via slow reinforcement learning,” arXiv preprint arXiv:1611.02779, 2016.
  • Botvinick et al. [2019] M. Botvinick, S. Ritter, J. X. Wang, Z. Kurth-Nelson, C. Blundell, and D. Hassabis, “Reinforcement learning, fast and slow,” Trends in cognitive sciences, vol. 23, no. 5, pp. 408–422, 2019.
  • Finn et al. [2017] C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of deep networks,” in Proceedings of the 34th International Conference on Machine Learning-Volume 70, 2017, pp. 1126–1135.
  • Sutton and Barto [2018] R. S. Sutton and A. G. Barto, Reinforcement learning: An introduction.   MIT press, 2018.
  • Siciliano and Khatib [2016] B. Siciliano and O. Khatib, Springer handbook of robotics.   Springer, 2016.
  • Baldi [2012] P. Baldi, “Autoencoders, unsupervised learning, and deep architectures,” in Proceedings of ICML Workshop on Unsupervised and Transfer Learning, 2012, pp. 37–49.
  • Kingma and Welling [2013] D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” arXiv preprint arXiv:1312.6114, 2013.
  • Higgins et al. [2017] I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner, “β-VAE: Learning basic visual concepts with a constrained variational framework,” in ICLR, 2017.
  • Zhu et al. [2020] Y. Zhu, J. Wong, A. Mandlekar, and R. Martín-Martín, “robosuite: A modular simulation framework and benchmark for robot learning,” arXiv preprint arXiv:2009.12293, 2020.
  • Haarnoja et al. [2018] T. Haarnoja, A. Zhou, K. Hartikainen, G. Tucker, S. Ha, J. Tan, V. Kumar, H. Zhu, A. Gupta, P. Abbeel et al., “Soft actor-critic algorithms and applications,” arXiv preprint arXiv:1812.05905, 2018.