I. Introduction
Animals can traverse complex environments with remarkable agility, bringing to bear broad repertoires of agile and acrobatic skills.
Reproducing such agile behaviors has been a longstanding challenge in robotics, with a large body of work devoted to designing control strategies for various locomotion skills [37, 49, 54, 18, 3]. However,
designing control strategies often involves a lengthy development process, and requires substantial expertise of both the underlying system and the desired skills. Despite the many successes in this domain, the capabilities achieved by these systems are still far from the fluid and graceful motions seen in the animal kingdom.
Learning-based approaches offer the potential to improve the agility of legged robots, while also automating a substantial portion of the manual effort involved in the development of controllers. In particular, reinforcement learning (RL) can be an effective and general approach for developing controllers that can perform a wide range of sophisticated skills [7, 43, 25, 44, 34]. While these methods have demonstrated promising results in simulation, agents trained through RL are prone to adopting unnatural behaviors that are dangerous or infeasible when deployed in the real world. Furthermore, designing reward functions that elicit the desired behaviors can itself require a laborious task-specific tuning process.
The comparatively superior agility seen in animals, as compared to robots, might lead one to wonder: can we build more agile robotic controllers with less effort by directly imitating animal motions? In this work, we propose an imitation learning framework that enables legged robots to learn agile locomotion skills from real-world animals. Our framework leverages reference motion data to provide priors regarding feasible control strategies for a particular skill. The use of reference motions alleviates the need to design skill-specific reward functions, thereby enabling a common framework to learn a diverse array of behaviors. To address the high sample requirements of current RL algorithms, the initial training phase is performed in simulation. In order to transfer policies learned in simulation to the real world, we propose a sample-efficient adaptation technique, which fine-tunes the behavior of a policy using a learned dynamics representation.
The central contribution of our work is a system that enables legged robots to learn agile locomotion skills by imitating animals. We demonstrate the effectiveness of our framework on a variety of dynamic locomotion skills with the Laikago quadruped robot [61], including different locomotion gaits, as well as dynamic hops and turns. In our ablation studies, we explore the impact of different design decisions made for the various components of our system.
II. Related Work
The development of controllers for legged locomotion has been an enduring subject of interest in robotics, with a large body of work proposing a variety of control strategies for legged systems [37, 49, 54, 20, 18, 64, 8, 3]. However, many of these methods require in-depth knowledge and manual engineering for each behavior, and as such, the resulting capabilities are ultimately limited by the designer's understanding of how to model and represent agile and dynamic behaviors.
Trajectory optimization and model predictive control can mitigate some of the manual effort involved in the design process, but due to the high-dimensional and complex dynamics of legged systems, reduced-order models are often needed to formulate tractable optimization problems [11, 17, 12, 2]. These simplified abstractions tend to be task-specific, and again require significant insight into the salient characteristics of each skill.
Motion imitation.
Imitating reference motions provides a general approach for robots to perform a rich variety of behaviors that would otherwise be difficult to manually encode into controllers [48, 21, 55, 63]. But applications of motion imitation to legged robots have predominantly been limited to behaviors that emphasize upper-body motions, with fairly static lower-body movements, where balance control can be delegated to separate control strategies [39, 27, 30]. In contrast to physical robots, substantially more dynamic skills can be reproduced by agents in simulation [38, 33, 9, 35]. Recently, motion imitation with reinforcement learning has been effective for learning a large repertoire of highly acrobatic skills in simulation [44, 34, 45, 32]. But due to the high sample complexity of RL algorithms and other physical limitations, many of the capabilities demonstrated in simulation have yet to be replicated in the real world.
Sim-to-real transfer.
The challenges of applying RL in the real world have driven the use of domain transfer approaches, where policies are first trained in simulation (source domain), and then transferred to the real world (target domain). Sim-to-real transfer can be facilitated by constructing more accurate simulations [58, 62], or adapting the simulator with real-world data [57, 23, 26, 36, 5]. However, building high-fidelity simulators remains a challenging endeavour, and even state-of-the-art simulators provide only a coarse approximation of the rich dynamics of the real world. Domain randomization can be incorporated into the training process to encourage policies to be robust to variations in the dynamics [52, 60, 47, 42, 41]. Sample-efficient adaptation techniques, such as fine-tuning [51] and meta-learning [13, 16, 6], can also be applied to further improve the performance of pretrained policies in new domains. In this work, we leverage a class of adaptation techniques, which we broadly refer to as latent space methods [24, 65, 67],
to transfer locomotion policies from simulation to the real world. During pretraining, these methods learn a latent representation of different behaviors that are effective under various scenarios. When transferring to a new domain, a search can be conducted in the latent space to find behaviors that successfully execute a desired task in the new domain. We show that by combining motion imitation and latent space adaptation, our system is able to learn a diverse corpus of dynamic locomotion skills that can be transferred to legged robots in the real world.
RL for legged locomotion. Reinforcement learning has been effective for automatically acquiring locomotion skills in simulation [44, 34, 32] and in the real world [31, 59, 14, 58, 22, 26]. Kohl and Stone [31] applied a policy gradient method to tune manually-crafted walking controllers for the Sony Aibo robot. By carefully modeling the motor dynamics of the Minitaur quadruped robot, Tan et al. [58] were able to train walking policies in simulation that can be directly deployed on a real robot. Hwangbo et al. [26] proposed learning a motor dynamics model using real-world data, which enabled direct transfer of a variety of locomotion skills to the ANYmal robot. Their system trained policies using manually-designed reward functions for each skill, which can be difficult to specify for more complex behaviors. Imitating reference motions can be a general approach for learning diverse repertoires of skills without the need to design skill-specific reward functions [35, 44, 45]. Xie et al. [62] trained bipedal walking policies for the Cassie robot by imitating reference motions recorded from existing controllers and keyframe animations. The policies are again transferred from simulation to the real world with the aid of careful system identification. Yu et al. [65] transferred bipedal locomotion policies from simulation to a physical Darwin OP2 robot using a latent space adaptation method, which mitigates the dependency on accurate simulators. In this work, we leverage a similar latent space method, but by combining it with motion imitation, our system enables real robots to perform more diverse and agile behaviors than have been demonstrated by these previous methods.
III. Overview
The objective of our framework is to enable robots to learn skills from real animals. Our framework receives as input a reference motion that demonstrates a desired skill for the robot, which may be recorded using motion capture (mocap) of real animals (e.g. a dog). Given a reference motion, it then uses reinforcement learning to synthesize a policy that enables a robot to reproduce that skill in the real world. A schematic illustration of our framework is available in Figure 2. The process is organized into three stages: motion retargeting, motion imitation, and domain adaptation. 1) The reference motion is first processed by the motion retargeting stage, where the motion clip is mapped from the original subject's morphology to the robot's morphology via inverse-kinematics. 2) Next, the retargeted reference motion is used in the motion imitation stage to train a policy to reproduce the motion with a simulated model of the robot. To facilitate transfer to the real world, domain randomization is applied in simulation to train policies that can adapt to different dynamics. 3) Finally, the policy is transferred to a real robot via a sample-efficient domain adaptation process, which adapts the policy's behavior using a learned latent dynamics representation.
IV. Motion Retargeting
When using motion data recorded from animals, the subject's morphology tends to differ from the robot's. To address this discrepancy, the source motions are retargeted to the robot's morphology using inverse-kinematics (IK) [19]. First, a set of source keypoints are specified on the subject's body, which are paired with corresponding target keypoints on the robot's body. An illustration of the keypoints is available in Figure 3. The keypoints include the positions of the feet and hips. At each timestep, the source motion specifies the 3D location $\hat{x}_i(t)$ of each keypoint $i$. The corresponding target keypoint $x_i(q_t)$ is determined by the robot's pose $q_t$, represented in generalized coordinates [15]. IK is then applied to construct a sequence of poses $q_{0:T}$ that track the keypoints at each frame,

$$\arg\min_{q_{0:T}} \; \sum_t \sum_i \left\| \hat{x}_i(t) - x_i(q_t) \right\|^2 + \left( \bar{q} - q_t \right)^T W \left( \bar{q} - q_t \right) \quad (1)$$

An additional regularization term is included to encourage the poses to remain similar to a default pose $\bar{q}$, and $W$ is a diagonal matrix specifying regularization coefficients for each joint.
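The per-frame objective in Equation 1 can be sketched as a small numerical optimization. In this sketch, `fk` is a hypothetical stand-in for the robot's forward kinematics, and the finite-difference gradient descent, step size, and iteration count are illustrative choices rather than the paper's solver:

```python
import numpy as np

def retarget_frame(fk, keypoints_src, q_default, w_reg=0.1,
                   iters=200, lr=0.1, eps=1e-5):
    """Solve the per-frame retargeting objective: minimize keypoint
    tracking error plus a regularizer pulling the pose toward a default
    pose. fk(q) -> (K, 3) array of target keypoint positions for pose q."""
    q = np.array(q_default, dtype=float)

    def objective(q):
        track = np.sum((keypoints_src - fk(q)) ** 2)   # keypoint tracking term
        reg = w_reg * np.sum((q_default - q) ** 2)     # pose regularization term
        return track + reg

    for _ in range(iters):
        # central finite-difference gradient of the objective
        grad = np.zeros_like(q)
        for j in range(len(q)):
            dq = np.zeros_like(q)
            dq[j] = eps
            grad[j] = (objective(q + dq) - objective(q - dq)) / (2 * eps)
        q -= lr * grad
    return q
```

Running this independently per frame recovers the pose sequence; a practical implementation would instead use an analytic-Jacobian IK solver.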
V. Motion Imitation
We formulate motion imitation as a reinforcement learning problem. In reinforcement learning, the objective is to learn a control policy that enables an agent to maximize its expected return for a given task [56]. At each timestep $t$, the agent observes a state $s_t$ from the environment, and samples an action $a_t$ from its policy $\pi(a_t | s_t)$. The agent then applies this action, which results in a new state $s_{t+1}$ and a scalar reward $r_t$. Repeated applications of this process generates a trajectory $\tau = \{(s_0, a_0, r_0), (s_1, a_1, r_1), \dots\}$. The objective then is to learn a policy that maximizes the agent's expected return $J(\pi)$,

$$J(\pi) = \mathbb{E}_{\tau \sim p(\tau | \pi)} \left[ \sum_{t=0}^{T-1} \gamma^t r_t \right] \quad (2)$$

where $T$ denotes the time horizon of each episode, and $\gamma \in [0, 1]$ is a discount factor. $p(\tau | \pi)$ represents the likelihood of a trajectory $\tau$ under a given policy $\pi$,

$$p(\tau | \pi) = p(s_0) \prod_{t=0}^{T-1} p(s_{t+1} | s_t, a_t)\, \pi(a_t | s_t) \quad (3)$$

with $p(s_0)$ being the initial state distribution, and $p(s_{t+1} | s_t, a_t)$ representing the dynamics of the system, which determines the effects of the agent's actions.
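As a concrete illustration of the objective in Equation 2, the inner discounted sum for a single recorded episode can be computed as:

```python
import numpy as np

def discounted_return(rewards, gamma=0.95):
    """Compute sum_t gamma^t * r_t for one episode of rewards."""
    rewards = np.asarray(rewards, dtype=float)
    discounts = gamma ** np.arange(len(rewards))  # [1, gamma, gamma^2, ...]
    return float(np.sum(discounts * rewards))
```

The expected return $J(\pi)$ is then the average of this quantity over trajectories sampled from the policy.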
To imitate a given reference motion, we follow a similar motion imitation approach as Peng et al. [44]. The inputs to the policy are augmented with an additional goal $g_t$, which specifies the motion that the robot should imitate. The policy is modeled as a feedforward network that maps a given state $s_t$ and goal $g_t$ to a distribution over actions $\pi(a_t | s_t, g_t)$. The policy is queried at 30Hz for a new action at each timestep. The state $s_t$ is represented by the poses of the robot in the three previous timesteps, along with the three previous actions. The pose features consist of IMU readings of the root orientation (roll, pitch, yaw) and the local rotations of every joint. The root position is not included among the pose features to avoid the need to estimate the root position during real-world deployment. The goal $g_t$ specifies target poses from the reference motion at four future timesteps, spanning approximately 1 second. The action $a_t$ specifies target rotations for PD controllers at each joint. To ensure smoother motions, the PD targets are first processed by a low-pass filter before being applied on the robot [4].

Reward Function. The reward function encourages the policy to track the sequence of target poses from the reference motion at every timestep. The reward function is similar to the one used by Peng et al. [44], where the reward $r_t$ at each timestep is given by:
$$r_t = w^p r^p_t + w^v r^v_t + w^e r^e_t + w^{rp} r^{rp}_t + w^{rv} r^{rv}_t \quad (4)$$

The pose reward $r^p_t$ encourages the robot to minimize the difference between the joint rotations specified by the reference motion and those of the robot. In the equation below, $\hat{q}^j_t$ represents the 1D local rotation of joint $j$ from the reference motion at time $t$, and $q^j_t$ represents the robot's joint,

$$r^p_t = \exp\left[ - \sum_j \left\| \hat{q}^j_t - q^j_t \right\|^2 \right] \quad (5)$$

Similarly, the velocity reward $r^v_t$ is calculated according to the joint velocities, with $\hat{\dot{q}}^j_t$ and $\dot{q}^j_t$ being the angular velocity of joint $j$ from the reference motion and robot respectively,

$$r^v_t = \exp\left[ - \sum_j \left\| \hat{\dot{q}}^j_t - \dot{q}^j_t \right\|^2 \right] \quad (6)$$

Next, the end-effector reward $r^e_t$ encourages the robot to track the positions of the end-effectors, where $x^e_t$ denotes the relative 3D position of end-effector $e$ with respect to the root,

$$r^e_t = \exp\left[ - \sum_e \left\| \hat{x}^e_t - x^e_t \right\|^2 \right] \quad (7)$$

Finally, the root pose reward $r^{rp}_t$ and root velocity reward $r^{rv}_t$ encourage the robot to track the reference root motion. $x^{\mathrm{root}}_t$ and $\dot{x}^{\mathrm{root}}_t$ denote the root's global position and linear velocity, while $q^{\mathrm{root}}_t$ and $\omega^{\mathrm{root}}_t$ are the rotation and angular velocity,

$$r^{rp}_t = \exp\left[ - \left\| \hat{x}^{\mathrm{root}}_t - x^{\mathrm{root}}_t \right\|^2 - \left\| \hat{q}^{\mathrm{root}}_t - q^{\mathrm{root}}_t \right\|^2 \right] \quad (8)$$

$$r^{rv}_t = \exp\left[ - \left\| \hat{\dot{x}}^{\mathrm{root}}_t - \dot{x}^{\mathrm{root}}_t \right\|^2 - \left\| \hat{\omega}^{\mathrm{root}}_t - \omega^{\mathrm{root}}_t \right\|^2 \right] \quad (9)$$
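The composite imitation reward can be sketched as follows. The per-term weights and error scales in this sketch are illustrative placeholders, not the values used in the paper:

```python
import numpy as np

def imitation_reward(ref, robot,
                     weights=(0.5, 0.1, 0.2, 0.1, 0.1),   # placeholder weights
                     scales=(5.0, 0.1, 40.0, 20.0, 2.0)):  # placeholder scales
    """Weighted sum of exponentiated tracking errors between a reference
    frame and the robot's current state. `ref` and `robot` are dicts with
    joint rotations q, joint velocities qd, end-effector positions x_e,
    and root position/rotation/linear/angular velocities."""
    def term(scale, err):
        return np.exp(-scale * np.sum(np.square(err)))

    r_pose = term(scales[0], ref["q"] - robot["q"])
    r_vel = term(scales[1], ref["qd"] - robot["qd"])
    r_ee = term(scales[2], ref["x_e"] - robot["x_e"])
    r_root_pose = term(scales[3], np.concatenate(
        [ref["x_root"] - robot["x_root"], ref["q_root"] - robot["q_root"]]))
    r_root_vel = term(scales[4], np.concatenate(
        [ref["v_root"] - robot["v_root"], ref["w_root"] - robot["w_root"]]))
    return float(np.dot(weights, [r_pose, r_vel, r_ee, r_root_pose, r_root_vel]))
```

With weights summing to 1, perfect tracking yields a reward of 1, and the reward decays smoothly toward 0 as tracking errors grow.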
VI. Domain Adaptation
Due to discrepancies between the dynamics of the simulation and the real world, policies trained in simulation tend to perform poorly when deployed on a physical system. Therefore, we propose a sample-efficient adaptation technique for transferring policies from simulation to the real world.
VI-A. Domain Randomization
Domain randomization is a simple strategy for improving a policy’s robustness to dynamics variations [52, 60, 42]. Instead of training a policy in a single environment with fixed dynamics, domain randomization varies the dynamics during training, thereby encouraging the policy to learn strategies that are functional across different dynamics. However, there may be no single strategy that is effective across all environments, and due to unmodeled effects in the real world, strategies that are robust to different simulated dynamics may nonetheless fail when deployed in a physical system.
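A minimal sketch of episode-level randomization follows. The randomized quantities mirror the categories in Table I, but the parameter names and numeric ranges here are illustrative placeholders:

```python
import numpy as np

# Illustrative multiplicative/absolute ranges; the paper randomizes mass,
# inertia, motor strength, motor friction, latency, and lateral friction,
# but the ranges shown here are placeholders, not the paper's values.
PARAM_RANGES = {
    "mass_scale": (0.8, 1.2),
    "inertia_scale": (0.8, 1.2),
    "motor_strength_scale": (0.8, 1.2),
    "latency_s": (0.0, 0.04),
    "lateral_friction": (0.5, 1.25),
}

def sample_dynamics(rng):
    """Draw one set of dynamics parameters at the start of an episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}
```

In simulation, each sampled dictionary would be applied to the physics engine (e.g. by rescaling link masses and friction coefficients) before the episode begins.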
VI-B. Domain Adaptation
In this work, we aim to learn strategies that are robust to variations in the dynamics of the environment, while also being able to adapt their behaviors as necessary for new environments. Let $\mu$ represent the values of the dynamics parameters that are randomized during training in simulation (Table I). At the start of each episode, a random set of parameters is sampled according to $\mu \sim p(\mu)$. The dynamics parameters are then encoded into a latent embedding $z \sim E(z | \mu)$ by a stochastic encoder $E$, and $z$ is provided as an additional input to the policy $\pi(a | s, z)$. For brevity, we have excluded the goal input for the policy. When transferring a policy to the real world, we follow a similar approach as Yu et al. [66], where a search is performed to find a latent encoding $z$ that enables the policy to successfully execute the desired behaviors on the physical system. Next, we propose an extension that addresses potential issues due to overfitting with the previously proposed method.
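The stochastic encoder can be sketched as follows, assuming a toy linear parameterization (the paper uses a fully-connected network, and the weight matrices here are hypothetical). The KL divergence to the unit-Gaussian prior, returned alongside the sample, is the quantity penalized by the information bottleneck described below:

```python
import numpy as np

def encode(mu_dyn, W_mean, W_logstd, rng):
    """Toy linear Gaussian encoder: maps dynamics parameters mu_dyn to a
    sampled latent z, and returns the KL divergence of the encoder
    distribution to the unit-Gaussian prior."""
    mean = W_mean @ mu_dyn
    log_std = W_logstd @ mu_dyn
    std = np.exp(log_std)
    # reparameterization trick: z = mean + std * eps, eps ~ N(0, I)
    z = mean + std * rng.standard_normal(mean.shape)
    # KL[N(mean, diag(std^2)) || N(0, I)] for a diagonal Gaussian
    kl = 0.5 * np.sum(mean ** 2 + std ** 2 - 1.0 - 2.0 * log_std)
    return z, float(kl)
```

During pretraining, the KL term is subtracted from the RL objective (weighted by the penalty coefficient), limiting how much dynamics information the latent code can carry.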
A potential degeneracy of the previously described approach is that the policy may learn strategies that depend on $z$ being an accurate representation of the true dynamics of the system. This can result in brittle behaviors, where the strategies utilized by the policy for a given $z$ overfit to the precise dynamics from the corresponding parameters $\mu$. Furthermore, due to unmodeled effects in the real world, there might be no $\mu$ that accurately models real-world dynamics. Therefore, to encourage the policy to be robust to uncertainty in the dynamics, we incorporate an information bottleneck into the encoder. The information bottleneck enforces an upper bound $I_c$ on the mutual information $I(\mu, z)$ between the dynamics parameters and the encoding. This results in the following constrained policy optimization objective,
$$\max_{\pi, E} \;\; \mathbb{E}_{\mu \sim p(\mu)}\, \mathbb{E}_{z \sim E(z | \mu)}\, \mathbb{E}_{\tau \sim p(\tau | \pi, z, \mu)} \left[ \sum_{t=0}^{T-1} \gamma^t r_t \right] \quad (10)$$

$$\text{s.t.} \;\; I(\mu, z) \le I_c \quad (11)$$

where the trajectory distribution is now given by,

$$p(\tau | \pi, z, \mu) = p(s_0) \prod_{t=0}^{T-1} p(s_{t+1} | s_t, a_t, \mu)\, \pi(a_t | s_t, z) \quad (12)$$

Since computing the mutual information $I(\mu, z)$ is intractable, the constraint in Equation 11 can be approximated with a variational upper bound using the KL divergence between $E(z | \mu)$ and a variational prior $\rho(z)$ [1],

$$I(\mu, z) \le \mathbb{E}_{\mu \sim p(\mu)} \left[ \mathrm{KL}\left[ E(z | \mu) \,\|\, \rho(z) \right] \right] \quad (13)$$

We can further simplify the objective by converting Equation 11 into a soft constraint, to yield the following information-regularized objective,

$$\max_{\pi, E} \;\; \mathbb{E}_{\mu}\, \mathbb{E}_{z}\, \mathbb{E}_{\tau} \left[ \sum_{t=0}^{T-1} \gamma^t r_t \right] - \beta\, \mathbb{E}_{\mu \sim p(\mu)} \left[ \mathrm{KL}\left[ E(z | \mu) \,\|\, \rho(z) \right] \right] \quad (14)$$

with $\beta$ being a Lagrange multiplier. In our experiments, we model the encoder $E(z | \mu)$ as a Gaussian distribution with input-dependent mean and standard deviation, and the prior $\rho(z)$ is given by the unit Gaussian. This objective can be interpreted as training a policy that maximizes the agent's expected return across different dynamics, while also being able to adapt its behaviors when necessary by relying on only a minimal amount of information from the ground-truth dynamics parameters. In our formulation, the Lagrange multiplier $\beta$ provides a trade-off between robustness and adaptability. Large values of $\beta$ restrict the amount of information that the policy can access from $\mu$. In the limit $\beta \rightarrow \infty$, the policy converges to a robust but non-adaptive policy that does not access the underlying dynamics parameters. Conversely, small values of $\beta$ provide the policy with unfettered access to the dynamics parameters, which can result in brittle strategies where the policy's behaviors overfit to the nuances of each setting of the dynamics parameters, potentially leading to poor generalization to real-world dynamics.

VI-C. Real-World Transfer
To adapt a policy to the real world, we directly search for a latent encoding $z$ that maximizes the policy's return on the physical system,

$$z^* = \arg\max_z \; \mathbb{E}_{\tau \sim p^*(\tau | \pi, z)} \left[ \sum_{t=0}^{T-1} \gamma^t r_t \right] \quad (15)$$

with $p^*(\tau | \pi, z)$ being the trajectory distribution under real-world dynamics. To identify $z^*$, we use advantage-weighted regression (AWR) [40, 46], a simple off-policy RL algorithm. Algorithm 1 summarizes the adaptation process. The search distribution $\omega(z)$ is initialized with the prior $\rho(z)$. At each iteration, we sample an encoding $z_i$ from the current distribution and execute an episode with the policy conditioned on $z_i$. The return $R_i$ for the episode is recorded and stored along with $z_i$ in a replay buffer $\mathcal{D}$ containing all samples from previous iterations. $\omega(z)$ is then updated by fitting a new distribution that assigns higher likelihoods to samples with larger advantages. The likelihood of each sample is weighted by the exponentiated advantage $\exp\left( (R_i - \bar{R}) / \lambda \right)$, where the baseline $\bar{R}$ is the average return of all samples in $\mathcal{D}$, and $\lambda$ is a manually specified temperature parameter. Note that, since $\omega(z)$ is Gaussian, the optimal distribution at each iteration (Line 9) can be determined analytically. However, we found that the analytic solution is prone to premature convergence to a suboptimal solution. Instead, we update $\omega(z)$ incrementally using a few steps of gradient descent. This process is repeated for a fixed number of iterations, and the mean of the final distribution is used as an approximation of the optimal encoding $z^*$ for deploying the policy in the real world.
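The search procedure can be sketched as follows, with `episode_return` standing in for a rollout on the physical robot. For simplicity, this sketch keeps a fixed unit standard deviation and performs only an incremental advantage-weighted update of the mean; the paper's algorithm also fits the distribution's spread:

```python
import numpy as np

def adapt_latent(episode_return, dim=2, iters=30, temperature=0.5,
                 lr=0.2, seed=0):
    """Search the latent space for an encoding that maximizes the return,
    by advantage-weighted updates of a Gaussian search distribution."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim)  # initialize from the prior N(0, I)
    buffer_z, buffer_R = [], []
    for _ in range(iters):
        z = mean + std * rng.standard_normal(dim)  # sample an encoding
        buffer_z.append(z)
        buffer_R.append(episode_return(z))         # one trial on the system
        Z, R = np.array(buffer_z), np.array(buffer_R)
        # exponentiated advantages, baselined by the buffer's average return
        w = np.exp((R - R.mean()) / temperature)
        w /= w.sum()
        # incremental step toward the advantage-weighted sample mean
        mean += lr * (np.sum(w[:, None] * Z, axis=0) - mean)
    return mean
```

In practice, each call to `episode_return` corresponds to one 5–10s trial on the robot, so only a few dozen samples are available for the search.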
TABLE I: Dynamics parameters and their respective range of values.

| Parameter        | Training Range  | Testing Range   |
|------------------|-----------------|-----------------|
| Mass             | × default value | × default value |
| Inertia          | × default value | × default value |
| Motor Strength   | × default value | × default value |
| Motor Friction   |                 |                 |
| Latency          |                 |                 |
| Lateral Friction |                 |                 |
VII. Experimental Evaluation
We evaluate our robotic learning system by learning to imitate a variety of dynamic locomotion skills using the Laikago robot [61], an 18 degrees-of-freedom quadruped with 3 actuated degrees-of-freedom per leg, and 6 under-actuated degrees of freedom for the root (torso). Behaviors learned by the policies are best seen in the supplementary video, and snapshots of the behaviors are also available in Figure 4. In the following experiments, we aim to evaluate the effectiveness of our framework on learning a diverse set of quadruped skills, and study how well real-world adaptation can enable more agile behaviors. We show that our adaptation method can efficiently transfer policies trained in simulation to the real world with a small number of trials on the physical system. We further study the effects of regularizing the latent dynamics encoding with an information bottleneck, and show that this provides a mechanism to trade off between the robustness and adaptability of the learned policies.

VII-A. Experimental Setup
Retargeting via inverse-kinematics and simulated training are performed using PyBullet [10]. Table I summarizes the dynamics parameters and their respective range of values. The motion dataset contains a mixture of mocap clips recorded from a dog and clips from artist-generated animations. The mocap clips are collected from a public dataset [68] and retargeted to the Laikago following the procedure in Section IV. Figure 5 lists the skills learned by the robot and summarizes the performance of the policies when deployed in the real world. Motion clips recorded from a dog are designated with "Dog", and the other clips correspond to artist-animated motions. Performance is recorded as the average normalized return, with 0 corresponding to the minimum possible return per episode and 1 being the maximum return. Note that the maximum return may not be achievable, since the reference motions are generally not physically feasible for the robot. Performance is calculated using the average of 3 policies initialized with different random seeds. Each policy is trained with proximal policy optimization [53] using about 200 million samples in simulation. Both the encoder and policy are trained end-to-end using the reparameterization trick [29]. Domain adaptation is performed on the physical system with AWR in the latent dynamics space, using approximately 50 real-world trials to adapt each policy. Trials vary between 5s and 10s in length depending on the space requirements of each skill. Hyperparameter settings are available in Appendix A.

Model representation.
All policies are modeled using the neural network architecture shown in Figure 6. The encoder is represented by a fully-connected network that maps the dynamics parameters $\mu$ to the mean and standard deviation of the encoder distribution. The policy network receives as input the state $s_t$, goal $g_t$, and dynamics encoding $z$, then outputs the mean of a Gaussian action distribution. The standard deviation of the action distribution is represented by a fixed matrix. The value function receives as input the state, goal, and dynamics parameters.

VII-B. Learned Skills
Our framework is able to learn a diverse set of locomotion skills for the Laikago, including dynamic gaits, such as pacing and trotting, as well as agile turning and spinning motions (Figure 4). Pacing is typically used for walking at slower speeds, and is characterized by each pair of legs on the same side of the body moving in unison (Figure 4(a)) [50]. Trotting is a faster gait, where diagonal pairs of legs move together (Figure 1). We are able to train policies for these different gaits just by providing the system with different reference motions. Furthermore, by simply playing the mocap clips backwards, we are able to train policies for different backwards walking gaits (Figure 4(b)). The gaits learned by our policies are faster than those of the manually-designed controller from the manufacturer. The fastest manufacturer gait reaches a top speed of about 0.84m/s, while the Dog Trot policy reaches a speed of 1.08m/s. The backwards trotting gait reaches an even higher speed of 1.20m/s. In addition to imitating mocap data from animals, our system is also able to learn from artist-animated motions. While these hand-animated motions are generally not physically correct, the policies are nonetheless able to closely imitate most motions with the real robot. This includes a highly dynamic Hop-Turn motion, in which the robot performs a 90-degree turn mid-air (Figure 4(e)). While our system is able to imitate a variety of motions, some motions, such as Running Man (Figure 4(f)), prove challenging to reproduce. The motion requires the robot to travel backwards while moving in a forward-walking manner. Our policies learn to keep the robot's feet on the ground and shuffle backwards, instead of lifting the feet during each step.
VII-C. Domain Adaptation
To determine the effects of domain adaptation, we compare our method to non-adaptive policies trained in simulation without randomization (No Rand), and robust policies trained with randomization (Robust) that do not perform adaptation in new environments. Real-world performance comparisons of these methods are shown in Figure 5; detailed performance statistics in simulation and the real world are available in Appendix B. When deployed on the real robot, the adaptive policies outperform their non-adaptive counterparts on most skills. For simpler skills, such as In-Place Steps and Side-Steps, the robust policies are sufficient for transfer to the real robot. But for more dynamic skills, such as Dog Pace and Dog Spin, the robust policies are prone to falling, while the adaptive policies can execute the skills more consistently. Policies trained without randomization fail to transfer to the real world for most skills. Figure 7 compares the time elapsed before the robot falls under the various policies. The adaptive policies are often able to maintain balance for a longer period of time than the other methods, with a significant performance improvement after adaptation.
To evaluate the policies' abilities to cope with unfamiliar dynamics, we test the policies in out-of-distribution simulated environments, where the dynamics parameters are sampled from a larger range of values than those used during training. The range of values used during training and testing are detailed in Table I. Figure 8 visualizes the performance of the policies in 100 simulated environments with different dynamics. The vertical axis represents the normalized return, and the horizontal axis records the portion of environments in which a policy achieves a return higher than a particular value. For example, in the case of Dog Pace, the adaptive policies achieve a return higher than 0.6 in a larger portion of the environments than the robust policy. The experiments are repeated 3 times for each method using policies initialized with different random seeds. In these experiments, the adaptive policies tend to outperform their non-adaptive counterparts across the various skills. This suggests that the adaptation process is able to better generalize to environments that differ from those encountered during training. To analyze the performance of policies during the adaptation process, we record the performance of individual policies after each update iteration. Figure 9 illustrates the learning curves in 5 different environments for each skill. The policies are generally able to adapt to new environments within a relatively small number of episodes.
VII-D. Information Bottleneck
Next, we evaluate the effects of the information bottleneck on adaptation performance. Figure 8 summarizes the performance of policies trained with different values of $\beta$ for the information penalty. Larger values of $\beta$ produce policies that access fewer bits of information from the dynamics parameters during pretraining. This encourages a policy to be less reliant on precise knowledge of the underlying dynamics, which in turn results in more robust behaviors that attain higher performance before adaptation. However, since the policy's behavior is less dependent on the latent variables, this can also result in less adaptable policies, which exhibit smaller performance improvements after adaptation. Similarly, smaller values of $\beta$ tend to produce less robust but more adaptive policies, exhibiting lower performance before adaptation, but a larger improvement after adaptation. In our experiments, we find that an intermediate value of $\beta$ provides a good trade-off between robustness and adaptability. We also compare the information-constrained latent representations to the unconstrained counterparts (No IB). The information-constrained policies generally achieve better performance both before and after adaptation.
VIII. Discussion and Future Work
We presented a framework for learning agile legged locomotion skills by imitating reference motion data. By simply providing the system with different reference motions, we are able to learn policies for a diverse set of behaviors with a quadruped robot, which can then be efficiently transferred from simulation to the real world. However, due to hardware and algorithmic limitations, we have not been able to learn more dynamic behaviors such as large jumps and runs. Exploring techniques that are able to reproduce these behaviors in the real world could significantly increase the agility of legged robots. The behaviors learned by our policies are currently not as stable as the best manually-designed controllers. Improving the robustness of these learned controllers would be valuable for more complex real-world applications. We are also interested in learning from other sources of motion data, such as video clips, which could substantially increase the volume of behavioral data that robots can learn from.
References
 Alemi et al. [2016] Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, and Kevin Murphy. Deep variational information bottleneck. CoRR, abs/1612.00410, 2016. URL http://arxiv.org/abs/1612.00410.
 Apgar et al. [2018] Taylor Apgar, Patrick Clary, Kevin Green, Alan Fern, and Jonathan Hurst. Fast online trajectory optimization for the bipedal robot Cassie. 06 2018. doi: 10.15607/RSS.2018.XIV.054.
 Bledt et al. [2018] Gerardo Bledt, Matthew J. Powell, Benjamin Katz, Jared Di Carlo, Patrick M. Wensing, and Sangbae Kim. MIT Cheetah 3: Design and control of a robust, dynamic quadruped robot. 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2245–2252, 2018.
 Butterworth et al. [1930] Stephen Butterworth et al. On the theory of filter amplifiers. Wireless Engineer, 7(6):536–541, 1930.
 Chebotar et al. [2018] Yevgen Chebotar, Ankur Handa, Viktor Makoviychuk, Miles Macklin, Jan Issac, Nathan D. Ratliff, and Dieter Fox. Closing the sim-to-real loop: Adapting simulation randomization with real world experience. CoRR, abs/1810.05687, 2018. URL http://arxiv.org/abs/1810.05687.
 Clavera et al. [2019] Ignasi Clavera, Anusha Nagabandi, Simin Liu, Ronald S. Fearing, Pieter Abbeel, Sergey Levine, and Chelsea Finn. Learning to adapt in dynamic, real-world environments through meta-reinforcement learning. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=HyztsoC5Y7.
 Coros et al. [2009] Stelian Coros, Philippe Beaudoin, and Michiel van de Panne. Robust task-based control policies for physics-based characters. ACM Trans. Graph. (Proc. SIGGRAPH Asia), 28(5):Article 170, 2009.
 Coros et al. [2010] Stelian Coros, Philippe Beaudoin, and Michiel van de Panne. Generalized biped walking control. ACM Transctions on Graphics, 29(4):Article 130, 2010.
 Coros et al. [2011] Stelian Coros, Andrej Karpathy, Ben Jones, Lionel Reveret, and Michiel van de Panne. Locomotion skills for simulated quadrupeds. ACM Transactions on Graphics, 30(4):Article TBD, 2011.

 Coumans and Bai [2016–2019] Erwin Coumans and Yunfei Bai. Pybullet, a python module for physics simulation for games, robotics and machine learning. http://pybullet.org, 2016–2019.
 de Lasa et al. [2010] Martin de Lasa, Igor Mordatch, and Aaron Hertzmann. Feature-Based Locomotion Controllers. ACM Transactions on Graphics, 29(3), 2010.
 Di Carlo et al. [2018] Jared Di Carlo, Patrick M Wensing, Benjamin Katz, Gerardo Bledt, and Sangbae Kim. Dynamic locomotion in the MIT Cheetah 3 through convex model-predictive control. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1–9. IEEE, 2018.
 Duan et al. [2016] Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, and Pieter Abbeel. Rl$^2$: Fast reinforcement learning via slow reinforcement learning. CoRR, abs/1611.02779, 2016. URL http://arxiv.org/abs/1611.02779.
 Endo et al. [2005] Gen Endo, Jun Morimoto, Takamitsu Matsubara, Jun Nakanishi, and Gordon Cheng. Learning CPG sensory feedback with policy gradient for biped locomotion for a full-body humanoid. In Proceedings of the 20th National Conference on Artificial Intelligence - Volume 3, AAAI’05, pages 1267–1273. AAAI Press, 2005. ISBN 157735236x.
 Featherstone [2007] Roy Featherstone. Rigid Body Dynamics Algorithms. Springer-Verlag, Berlin, Heidelberg, 2007. ISBN 0387743146.
 Finn et al. [2017] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1126–1135, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR. URL http://proceedings.mlr.press/v70/finn17a.html.
 Gehring et al. [2016] Christian Gehring, Stelian Coros, Marco Hutter, Dario Bellicoso, Huub Heijnen, Remo Diethelm, Michael Bloesch, Péter Fankhauser, Jemin Hwangbo, Mark Hoepflinger, and Roland Siegwart. Practice makes perfect: An optimization-based approach to controlling agile motions for a quadruped robot. IEEE Robotics & Automation Magazine, pages 1–1, 02 2016. doi: 10.1109/MRA.2015.2505910.
 Geyer et al. [2003] Hartmut Geyer, Andre Seyfarth, and Reinhard Blickhan. Positive force feedback in bouncing gaits? Proceedings. Biological sciences / The Royal Society, 270:2173–83, 11 2003. doi: 10.1098/rspb.2003.2454.
 Gleicher [1998] Michael Gleicher. Retargetting motion to new characters. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’98, pages 33–42, New York, NY, USA, 1998. ACM. ISBN 0897919998. doi: 10.1145/280814.280820. URL http://doi.acm.org/10.1145/280814.280820.
 Goswami [1999] A. Goswami. Foot rotation indicator (FRI) point: a new gait planning tool to evaluate postural stability of biped robots. In Proceedings 1999 IEEE International Conference on Robotics and Automation (Cat. No.99CH36288C), volume 1, pages 47–52 vol.1, May 1999. doi: 10.1109/ROBOT.1999.769929.
 Grimes et al. [2006] David B. Grimes, Rawichote Chalodhorn, and Rajesh P. N. Rao. Dynamic imitation in a humanoid robot through nonparametric probabilistic inference. In Gaurav S. Sukhatme, Stefan Schaal, Wolfram Burgard, and Dieter Fox, editors, Robotics: Science and Systems. The MIT Press, 2006. ISBN 0262693488. URL http://dblp.uni-trier.de/db/conf/rss/rss2006.html#GrimesCR06.
 Haarnoja et al. [2018] Tuomas Haarnoja, Aurick Zhou, Sehoon Ha, Jie Tan, George Tucker, and Sergey Levine. Learning to walk via deep reinforcement learning. CoRR, abs/1812.11103, 2018. URL http://arxiv.org/abs/1812.11103.
 Hanna and Stone [2017] Josiah Hanna and Peter Stone. Grounded action transformation for robot learning in simulation. In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI), February 2017.
 He et al. [2018] Zhanpeng He, Ryan Julian, Eric Heiden, Hejia Zhang, Stefan Schaal, Joseph J. Lim, Gaurav S. Sukhatme, and Karol Hausman. Zero-shot skill composition and simulation-to-real transfer by learning task representations. CoRR, abs/1810.02422, 2018. URL http://arxiv.org/abs/1810.02422.
 Heess et al. [2017] Nicolas Heess, Dhruva TB, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, S. M. Ali Eslami, Martin A. Riedmiller, and David Silver. Emergence of locomotion behaviours in rich environments. CoRR, abs/1707.02286, 2017. URL http://arxiv.org/abs/1707.02286.
 Hwangbo et al. [2019] Jemin Hwangbo, Joonho Lee, Alexey Dosovitskiy, Dario Bellicoso, Vassilios Tsounis, Vladlen Koltun, and Marco Hutter. Learning agile and dynamic motor skills for legged robots. Science Robotics, 4(26), 2019. doi: 10.1126/scirobotics.aau5872. URL https://robotics.sciencemag.org/content/4/26/eaau5872.
 Kim et al. [2009] S. Kim, C. Kim, B. You, and S. Oh. Stable whole-body motion generation for humanoid robots to imitate human motions. In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 2518–2524, Oct 2009. doi: 10.1109/IROS.2009.5354271.
 Kingma and Ba [2014] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
 Kingma and Welling [2013] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. CoRR, abs/1312.6114, 2013. URL http://dblp.uni-trier.de/db/journals/corr/corr1312.html#KingmaW13.
 Koenemann et al. [2014] Jonas Koenemann, Felix Burget, and Maren Bennewitz. Real-time imitation of human whole-body motions by humanoids. 2014 IEEE International Conference on Robotics and Automation (ICRA), pages 2806–2812, 2014.
 Kohl and Stone [2004] Nate Kohl and Peter Stone. Policy gradient reinforcement learning for fast quadrupedal locomotion. In ICRA, pages 2619–2624. IEEE, 2004. URL http://dblp.uni-trier.de/db/conf/icra/icra2004-3.html#KohlS04.
 Lee et al. [2019] Seunghwan Lee, Moonseok Park, Kyoungmin Lee, and Jehee Lee. Scalable muscle-actuated human simulation and control. ACM Trans. Graph., 38(4), July 2019. ISSN 07300301. doi: 10.1145/3306346.3322972. URL https://doi.org/10.1145/3306346.3322972.
 Lee et al. [2010] Yoonsang Lee, Sungeun Kim, and Jehee Lee. Data-driven biped control. ACM Trans. Graph., 29(4), July 2010. ISSN 07300301. doi: 10.1145/1778765.1781155. URL https://doi.org/10.1145/1778765.1781155.
 Liu and Hodgins [2018] Libin Liu and Jessica Hodgins. Learning basketball dribbling skills using trajectory optimization and deep reinforcement learning. ACM Trans. Graph., 37(4), July 2018. ISSN 07300301. doi: 10.1145/3197517.3201315. URL https://doi.org/10.1145/3197517.3201315.
 Liu et al. [2016] Libin Liu, Michiel van de Panne, and KangKang Yin. Guided learning of control graphs for physics-based characters. ACM Transactions on Graphics, 35(3), 2016.
 Lowrey et al. [2018] Kendall Lowrey, Svetoslav Kolev, Jeremy Dao, Aravind Rajeswaran, and Emanuel Todorov. Reinforcement learning for non-prehensile manipulation: Transfer from simulation to physical system. CoRR, abs/1803.10371, 2018. URL http://arxiv.org/abs/1803.10371.
 Miura and Shimoyama [1984] Hirofumi Miura and Isao Shimoyama. Dynamic walk of a biped. The International Journal of Robotics Research, 3:60 – 74, 1984.
 Muico et al. [2009] Uldarico Muico, Yongjoon Lee, Jovan Popović, and Zoran Popović. Contact-aware nonlinear control of dynamic characters. ACM Trans. Graph., 28(3), July 2009. ISSN 07300301. doi: 10.1145/1531326.1531387. URL https://doi.org/10.1145/1531326.1531387.
 Nakaoka et al. [2003] S. Nakaoka, A. Nakazawa, K. Yokoi, H. Hirukawa, and K. Ikeuchi. Generating whole body motions for a biped humanoid robot from captured human dances. In 2003 IEEE International Conference on Robotics and Automation (Cat. No.03CH37422), volume 3, pages 3905–3910 vol.3, Sep. 2003. doi: 10.1109/ROBOT.2003.1242196.
 Neumann and Peters [2009] Gerhard Neumann and Jan R. Peters. Fitted Q-iteration by advantage weighted regression. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 1177–1184. Curran Associates, Inc., 2009. URL http://papers.nips.cc/paper/3501-fitted-q-iteration-by-advantage-weighted-regression.pdf.
 OpenAI et al. [2018] OpenAI, Marcin Andrychowicz, Bowen Baker, Maciek Chociej, Rafal Józefowicz, Bob McGrew, Jakub W. Pachocki, Jakub Pachocki, Arthur Petron, Matthias Plappert, Glenn Powell, Alex Ray, Jonas Schneider, Szymon Sidor, Josh Tobin, Peter Welinder, Lilian Weng, and Wojciech Zaremba. Learning dexterous in-hand manipulation. CoRR, abs/1808.00177, 2018. URL http://arxiv.org/abs/1808.00177.
 Peng et al. [2018a] X. B. Peng, M. Andrychowicz, W. Zaremba, and P. Abbeel. Sim-to-real transfer of robotic control with dynamics randomization. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 1–8, May 2018a. doi: 10.1109/ICRA.2018.8460528.
 Peng et al. [2016] Xue Bin Peng, Glen Berseth, and Michiel van de Panne. Terrain-adaptive locomotion skills using deep reinforcement learning. ACM Trans. Graph., 35(4):81:1–81:12, July 2016. ISSN 07300301. doi: 10.1145/2897824.2925881. URL http://doi.acm.org/10.1145/2897824.2925881.
 Peng et al. [2018b] Xue Bin Peng, Pieter Abbeel, Sergey Levine, and Michiel van de Panne. DeepMimic: Example-guided deep reinforcement learning of physics-based character skills. ACM Trans. Graph., 37(4):143:1–143:14, July 2018b. ISSN 07300301. doi: 10.1145/3197517.3201311. URL http://doi.acm.org/10.1145/3197517.3201311.
 Peng et al. [2018c] Xue Bin Peng, Angjoo Kanazawa, Jitendra Malik, Pieter Abbeel, and Sergey Levine. SFV: Reinforcement learning of physical skills from videos. ACM Trans. Graph., 37(6), November 2018c.
 Peng et al. [2019] Xue Bin Peng, Aviral Kumar, Grace Zhang, and Sergey Levine. Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. CoRR, abs/1910.00177, 2019. URL https://arxiv.org/abs/1910.00177.
 Pinto et al. [2017] Lerrel Pinto, James Davidson, Rahul Sukthankar, and Abhinav Gupta. Robust adversarial reinforcement learning. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 2817–2826, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR. URL http://proceedings.mlr.press/v70/pinto17a.html.
 Pollard et al. [2002] Nancy Pollard, Jessica K. Hodgins, M.J. Riley, and Chris Atkeson. Adapting human motion for the control of a humanoid robot. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’02), May 2002.
 Raibert [1984] M. H. Raibert. Hopping in legged systems — modeling and simulation for the two-dimensional one-legged case. IEEE Transactions on Systems, Man, and Cybernetics, SMC-14(3):451–463, May 1984. ISSN 21682909. doi: 10.1109/TSMC.1984.6313238.
 Raibert [1990] Marc H Raibert. Trotting, pacing and bounding by a quadruped robot. Journal of biomechanics, 23:79–98, 1990.
 Rusu et al. [2017] Andrei A. Rusu, Matej Večerík, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, and Raia Hadsell. Sim-to-real robot learning from pixels with progressive nets. In Sergey Levine, Vincent Vanhoucke, and Ken Goldberg, editors, Proceedings of the 1st Annual Conference on Robot Learning, volume 78 of Proceedings of Machine Learning Research, pages 262–270. PMLR, 13–15 Nov 2017. URL http://proceedings.mlr.press/v78/rusu17a.html.
 Sadeghi and Levine [2016] Fereshteh Sadeghi and Sergey Levine. CAD2RL: Real single-image flight without a single real image. CoRR, abs/1611.04201, 2016. URL http://arxiv.org/abs/1611.04201.
 Schulman et al. [2017] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017. URL http://arxiv.org/abs/1707.06347.
 Schwind and Koditschek [1998] William J. Schwind and Daniel E. Koditschek. Spring loaded inverted pendulum running: a plant model. 1998.
 Suleiman et al. [2008] W. Suleiman, E. Yoshida, F. Kanehiro, J. Laumond, and A. Monin. On human motion imitation by humanoid robot. In 2008 IEEE International Conference on Robotics and Automation, pages 2697–2704, May 2008. doi: 10.1109/ROBOT.2008.4543619.
 Sutton and Barto [1998] Richard S. Sutton and Andrew G. Barto. Introduction to Reinforcement Learning. MIT Press, Cambridge, MA, USA, 1st edition, 1998. ISBN 0262193981.
 Tan et al. [2016] J. Tan, Z. Xie, B. Boots, and C. K. Liu. Simulation-based design of dynamic controllers for humanoid balancing. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2729–2736, Oct 2016. doi: 10.1109/IROS.2016.7759424.
 Tan et al. [2018] Jie Tan, Tingnan Zhang, Erwin Coumans, Atil Iscen, Yunfei Bai, Danijar Hafner, Steven Bohez, and Vincent Vanhoucke. Sim-to-real: Learning agile locomotion for quadruped robots. In Proceedings of Robotics: Science and Systems, Pittsburgh, Pennsylvania, June 2018. doi: 10.15607/RSS.2018.XIV.010.
 Tedrake et al. [2004] Russ Tedrake, Teresa Weirui Zhang, and H. Sebastian Seung. Stochastic policy gradient reinforcement learning on a simple 3D biped. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2004), volume 3, pages 2849–2854, Piscataway, NJ, USA, 2004. IEEE. ISBN 0780384636. URL http://www.cs.cmu.edu/~cga/legs/01389841.pdf.
 Tobin et al. [2017] Joshua Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, and Pieter Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. CoRR, abs/1703.06907, 2017. URL http://arxiv.org/abs/1703.06907.
 Wang [2018] Xingxing Wang. Laikago Pro, Unitree Robotics, 2018. URL http://www.unitree.cc/e/action/ShowInfo.php?classid=6&id=355.
 Xie et al. [2019] Zhaoming Xie, Patrick Clary, Jeremy Dao, Pedro Morais, Jonathan Hurst, and Michiel van de Panne. Learning locomotion skills for Cassie: Iterative design and sim-to-real. In Proc. Conference on Robot Learning (CoRL 2019), 2019.
 Yamane et al. [2010] K. Yamane, S. O. Anderson, and J. K. Hodgins. Controlling humanoid robots with human motion data: Experimental validation. In 2010 10th IEEE-RAS International Conference on Humanoid Robots, pages 504–510, Dec 2010. doi: 10.1109/ICHR.2010.5686312.
 Yin et al. [2007] KangKang Yin, Kevin Loken, and Michiel van de Panne. SIMBICON: Simple biped locomotion control. ACM Trans. Graph., 26(3):Article 105, 2007.
 Yu et al. [2019a] Wenhao Yu, Visak C. V. Kumar, Greg Turk, and C. Karen Liu. Sim-to-real transfer for biped locomotion. CoRR, abs/1903.01390, 2019a. URL http://arxiv.org/abs/1903.01390.
 Yu et al. [2019b] Wenhao Yu, C. Karen Liu, and Greg Turk. Policy transfer with strategy optimization. In International Conference on Learning Representations, 2019b. URL https://openreview.net/forum?id=H1g6osRcFQ.
 Yu et al. [2019c] Wenhao Yu, Jie Tan, Yunfei Bai, Erwin Coumans, and Sehoon Ha. Learning fast adaptation with meta strategy optimization, 2019c.
 Zhang et al. [2018] He Zhang, Sebastian Starke, Taku Komura, and Jun Saito. Mode-adaptive neural networks for quadruped motion control. ACM Trans. Graph., 37(4):145:1–145:11, July 2018. ISSN 07300301. doi: 10.1145/3197517.3201366. URL http://doi.acm.org/10.1145/3197517.3201366.