ALLSTEPS: Curriculum-driven Learning of Stepping Stone Skills

05/09/2020
by   Zhaoming Xie, et al.

Humans are highly adept at walking in environments with foot placement constraints, including stepping-stone scenarios where the footstep locations are fully constrained. Finding good solutions to stepping-stone locomotion is a longstanding and fundamental challenge for animation and robotics. We present fully learned solutions to this difficult problem using reinforcement learning. We demonstrate the importance of a curriculum for efficient learning and evaluate four possible curriculum choices compared to a non-curriculum baseline. Results are presented for a simulated human character, a realistic bipedal robot simulation and a monster character, in each case producing robust, plausible motions for challenging stepping stone sequences and terrains.


Introduction

Bipedal locomotion is a fundamental problem in computer animation and robotics, and there exist many proposed data-driven or physics-based solutions. However, a principal raison-d’être for legged locomotion is the ability to navigate over challenging irregular terrain, and this is unfortunately not reflected in the bulk of locomotion work, which targets flat terrain locomotion. Traversing irregular terrain is challenging, with the limiting case being that of navigation across a sequence of stepping-stones which fully constrain the location of each footstep. We wish to learn physics-based solutions to this classical stepping stone problem from scratch, i.e., without the use of motion capture data. The limits of the learned skills should ideally stem from the physical capabilities of the characters, and not from the learned control strategy.

We investigate the use of deep reinforcement learning (DRL) for computing solutions to this problem. We find a curriculum-based solution to be essential to achieving good results; the curriculum begins with easy steps and advances to challenging steps. We evaluate four different curricula, which each advance the learning based on different principles, and compare them against a no-curriculum baseline. Challenging stepping stone skills are demonstrated on a humanoid model, a fully-calibrated simulation of a large bipedal robot and a monster model. Finally, we demonstrate that the stepping stone policies can be directly applied to walking on challenging continuous terrain with pre-planned foot placements.

Our contributions are as follows:

  • We show how control policies for challenging stepping stone problems can be learned from scratch using reinforcement learning, as demonstrated across 3 bipeds and 2 simulators. Leveraging careful reward design, we are able to learn control policies producing plausible motions without the use of reference motion data.

  • We demonstrate the critical role of a curriculum in circumventing local minima in the optimization and supporting efficient learning for this task. We evaluate four curricula in comparison to a no-curriculum baseline.

  • We demonstrate that the stepping stone control policies are directly transferable to locomotion on continuous terrain. The learned stepping stone skills thus serve as a general solution for navigating many types of terrain.

Related Work

The stepping stone problem is of interest to several fields, including animation and robotics, as will be discussed in more detail below, as well as gait and posture, e.g., [19, 35], and neuromotor control, e.g., [31, 23]. In what follows, we focus principally on related work in animation and robotics.

Learning Bipedal Locomotion

Considerable progress has been made towards learning control policies for locomotion in the context of physics-based character animation, often via deep reinforcement learning. In many cases, these methods aim to satisfy an imitation objective and target motions on flat terrain, e.g., [21, 34, 20, 29, 3]. Other solutions learn in the absence of motion capture data, also for flat terrain, e.g., [45, 18, 15]. Environment information such as height maps [33, 32] or egocentric vision [24] can be fed into the policy to adapt to some degree of terrain irregularity. Learned kinematic locomotion controllers have recently achieved impressive results for terrains that include hills and obstacles [14, 46], although equivalent capability has not been demonstrated for physically simulated characters. The stepping-stone problem has also been tackled using trajectory optimization, e.g., [36].

Walking on Stepping Stones

Precise foot placement is needed to achieve stepping stone capability. There are many works in the robotics literature that achieve this capability by utilizing path planning techniques, including mixed integer programming [7] or variants of A* search [4, 11]. Such techniques are most often limited to either flat terrain [4] or to quasi-static walking that results in a slow walking speed. Another line of work uses a gait library [27], consisting of trajectories for different steps that are computed offline and are then used to achieve stepping stone walking on a bipedal robot whose motion is restricted to the sagittal plane.

3D stepping stone capability has been shown for several simulated bipedal character models. [28] approach this via control barrier functions, although this approach relies heavily on the feasibility of the resulting quadratic programming problem, which is not always satisfied. Furthermore, while the simulated model is 3D, the steps themselves are placed in a straight line on a horizontal plane, i.e., they vary only in distance, with no height variation or turning. There are also works in the computer animation literature demonstrating 3D stepping skills, e.g., [5] and [25], generally with limited realism and capabilities. Foot placement has also been used as guidance for reinforcement learning algorithms to achieve path following for a simulated biped [34]. There it is used to parameterize the possible steps on flat terrain, and in practice the policy does not always succeed in reaching the desired foot placements.

Curriculum-based Learning

Curriculum learning is a learning process in which task difficulty is increased over time during training [2]. It has also been applied to synthesizing navigation or locomotion policies, e.g., [44, 16, 9, 13, 45]. The task difficulty is usually determined by human intuition. Teacher-student curriculum learning [22] uses the progress on tasks as a metric for choosing the next task, and demonstrates automatic curriculum choices on decimal number addition of different lengths and Minecraft games. Intrinsic motivation [10] can also allow robots to begin with simple goals and advance towards more complicated ones. A curriculum policy [26] can further be learned by formulating the curriculum learning process as a Markov Decision Process (MDP). More recently, [40] proposes the POET algorithm, which allows a 2D walker to solve increasingly challenging terrains by co-evolving the environment and the policy. Reverse curriculum learning has been shown to be effective at balancing uneven data generation in DRL. For example, [41] and [29] propose a form of adaptive sampling where more difficult tasks are given higher priority during training.

System Overview

[Figure: figure/system_overview.pdf]

Figure : Overview of our curriculum learning system. The curriculum module improves learning efficiency by dynamically adjusting the terrain difficulty according to the progress of the policy.

An overview of our system is shown in Figure id1. The environment consists of a physics simulator and a step generator. The step generator samples a random sequence of steps from a given probability distribution over the step parameter space. In the case where no curriculum is applied, the step distribution is uniform across the parameter space for the entire duration of training. In contrast, a curriculum dynamically adjusts the step distribution depending on the progress made by the policy. We experiment with four different curricula and a baseline, each having its own motivation and benefits. We show experimentally that curriculum learning, when applied appropriately, allows the policy to solve the stepping stone task, which is otherwise very challenging with standard reinforcement learning.
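To make the role of the step generator concrete, the sketch below shows one way the sampling distribution could be represented and reshaped by a curriculum. The class and method names are illustrative only and are not the authors' implementation.

```python
import numpy as np

class StepGenerator:
    """Samples step parameters from a (possibly curriculum-shaped) distribution."""
    def __init__(self, param_grid):
        self.param_grid = np.asarray(param_grid)                       # candidate (yaw, pitch) pairs
        self.probs = np.full(len(param_grid), 1.0 / len(param_grid))   # uniform = no curriculum

    def set_distribution(self, weights):
        weights = np.asarray(weights, dtype=np.float64)
        self.probs = weights / weights.sum()                           # curriculum reshapes this over training

    def sample(self):
        idx = np.random.choice(len(self.param_grid), p=self.probs)
        return self.param_grid[idx]

# Toy 3x3 grid of (yaw, pitch) values in degrees.
grid = [(y, p) for y in (-20, 0, 20) for p in (-30, 0, 30)]
gen = StepGenerator(grid)
print(gen.sample())                                                    # uniform sampling (baseline)
gen.set_distribution([4, 2, 4, 1, 0, 1, 4, 2, 4])                      # e.g. a boundary-heavy distribution
print(gen.sample())
```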

The remainder of the paper is organized as follows: the stepping stones task definition and character modelling (§ id1), reinforcement learning and reward specifications (§ id1), learning curricula (§ id1), experimental results (§ id1), and discussion (§ id1).

Simulation Environments

We now describe the stepping stones parameter space and character models. We experiment with three different characters, Humanoid, Cassie and Monster, to show that the proposed curricula provide a robust approach for learning stepping stones skills.

Stepping Stones Generation

In the stepping stones task, the goal of the character is to make precise foot placements on a sequence of discrete footholds. The character receives the foothold positions of the two upcoming steps in its root space, as shown in Figure id1. We use two steps since two-step anticipation yields better performance than single-step anticipation [27], and it has been found that further anticipation may be of limited benefit [5].

[Figure: figure/step_generation.pdf]

Figure : Illustration of the stepping stone problem. The character observes the position of the next two steps with respect to its center-of-mass. The new target is generated from a distribution parameterized by three parameters: the step distance, yaw, and pitch.

Successive step placements are generated in spherical coordinates, where the step length , yaw , and pitch relative to the previous stone are the controllable parameters. This 3D parameter space is also illustrated in Figure id1. We limit the distance, yaw, and pitch to lie in the intervals , , and respectively. During training, we set and , which we find experimentally to be the upper limits of our character’s capability. For our 2D step-parameter tests, step distance is sampled uniformly from meters for the humanoid and meters for Cassie to account for the differences in character morphology. A 5D parameter space includes additional roll and pitch variations of the step surfaces, which supports transfer of the skills to smoothly-varying terrains. The roll and pitch variation of a step, , is generated by first applying the rotation relative to the previous foothold, then subsequently applying the and rotations about its x-axis and y-axis. In effect, this causes the step to become tilted as shown in Figure id1.

When the character successfully steps on the current target, its position is immediately replaced by that of the next target, and a new target pops into view. We introduce an artificial look-ahead delay to allow the stance foot to settle (see Table id1), by postponing this replacement process for a fixed number of frames. In practice, the look-ahead delay impacts the speed at which the character moves through the stepping stones and also enables it to stop on a given step. Lastly, to ensure that the character begins tackling variable steps from a predictable and feasible state, we fix the first three steps of the stepping stones sequence. Specifically, the first two steps are manually placed slightly below the character's feet, and the third step is always flat and directly ahead.
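As an illustration of how a step parameterized by distance, yaw, and pitch could be converted into a foothold position, the following sketch uses one plausible spherical-coordinate convention; the exact axes and sign conventions are assumptions rather than the paper's specification.

```python
import numpy as np

def next_step_position(prev_pos, prev_heading, distance, yaw, pitch):
    """Place the next foothold relative to the previous one using spherical coordinates.

    distance -- step length (m); yaw/pitch -- angles (rad) relative to the previous stone.
    Axis and sign conventions here are illustrative assumptions.
    """
    heading = prev_heading + yaw                      # accumulate turning along the sequence
    dx = distance * np.cos(pitch) * np.cos(heading)
    dy = distance * np.cos(pitch) * np.sin(heading)
    dz = distance * np.sin(pitch)
    return prev_pos + np.array([dx, dy, dz]), heading

pos, heading = np.zeros(3), 0.0
for d, yaw, pitch in [(0.8, 0.0, 0.0), (0.8, np.radians(15), np.radians(10))]:
    pos, heading = next_step_position(pos, heading, d, yaw, pitch)
    print(np.round(pos, 3))
```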

Character Models

[Figure: figure/walker3d_cassie_mike_intro.png]

Figure : Character models for the Humanoid (left), Cassie (middle), and the Monster (right).
Property Humanoid Cassie Monster
Height (m) 1.60 1.16 1.15
Mass (kg) 59 33 33
Action Parameters 21 10 21
Degrees of Freedom 27 20 27
State Features 60 51 60
Maximum Torque (Nm) 100 112.5 100
Simulation Freq. (Hz) 240 1000 240
Control Freq. (Hz) 60 33 60
Look-ahead Delay 30 3 30
Table : Properties of the characters.

The character models are shown in Figure id1 and the detailed specifications are summarized in Table id1. We focus our experiments and analysis on the Humanoid and Cassie models. However, we show that the curriculum-based learning pipeline can be directly applied to a third character, the Monster.

Humanoid

The Humanoid is simulated with 21 hinge joints using PyBullet [6] and is directly torque-controlled. As is standard in reinforcement learning, we normalize the policy output to lie in the interval [-1, 1], then multiply the action value for each joint by its corresponding torque limit. The state space contains the joint angles and velocities in parent space, the roll and pitch of the root orientation in global space, and the linear velocities of the body in character root space. Furthermore, the state space also includes the height of the pelvis relative to the lowest foot, as well as a binary contact indicator for each foot. We use the height information to detect when the character falls, so that the simulation can be terminated early. To improve the motion quality, we generate mirrored roll-out trajectories using the DUP method from [1] to encourage symmetric motions.
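A minimal sketch of this action-to-torque mapping is shown below; the 21 joints and 100 Nm cap come from the character table, while the explicit clipping is an added safety detail.

```python
import numpy as np

def action_to_torques(policy_action, torque_limits):
    """Scale a normalized policy output in [-1, 1] to per-joint torques."""
    a = np.clip(policy_action, -1.0, 1.0)        # Tanh output is already bounded; clip for safety
    return a * torque_limits

torque_limits = np.full(21, 100.0)               # 21 hinge joints, 100 Nm cap (character table)
torques = action_to_torques(np.random.uniform(-1, 1, 21), torque_limits)
print(torques[:3])
```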

Our humanoid character is carefully modelled to reflect joint and torque limits that are close to those documented for humans in [12]. Humanoid characters with unrealistic torque limits often produce unnatural motions unless guided with reference motions, e.g. [32, 24]. In our experiments, we find, as in [15], that natural motion is easier to achieve with the use of realistic torque limits.

Cassie

The action space of Cassie consists of the target joint angles of the ten actuated joints for a low-level PD controller. The PD controller operates at a much higher frequency than the policy to ensure stability in control and simulation. The state space of Cassie is mostly analogous to that of the Humanoid. One exception is that the binary contact indicators are replaced by a single phase variable used for tracking reference motion, since contact state can be estimated from the phase variable.
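The PD tracking loop implied here can be illustrated with a toy one-joint example; the gains, inertia, and single-joint model below are placeholders and not Cassie's actual parameters.

```python
# Toy single-joint illustration: the policy picks a target joint angle at the
# control rate (~33 Hz for Cassie) and a PD controller tracks it at the
# simulation rate (1000 Hz). Gains, inertia, and the 1-DoF model are placeholders.
kp, kd, inertia, dt = 40.0, 2.0, 0.1, 1e-3
q, qdot, q_target = 0.0, 0.0, 0.5

for _ in range(30):                                # roughly one control period of substeps
    tau = kp * (q_target - q) - kd * qdot          # PD torque toward the policy's target angle
    qdot += (tau / inertia) * dt
    q += qdot * dt
print(round(q, 3))
```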

The Cassie model is designed by Agility Robotics, simulated in MuJoCo [38], and validated to be very close to the physical robot [43]. Designing robust controllers for Cassie is challenging since it has 20 degrees of freedom but only ten actuators. Furthermore, due to the strong actuators on the robot, it is difficult to obtain high-quality motion directly from a simple reward specification. To bootstrap stepping stones training, we follow [42] to first obtain a natural forward-walking policy by tracking a reference motion. The reference motion is then discarded, i.e., it is not used during training in the stepping stone environment.

Monster

The third character, the Monster, is identical to the Humanoid except for body morphology, mass distribution, and slightly weaker arms.

Learning Control Policies

We use reinforcement learning to learn locomotion skills. However, as we show in Section id1, reinforcement learning alone, without curriculum, is insufficient for solving the stepping stones task. In this section, we provide the background for actor-critic-based policy-gradient algorithms. Importantly, the critic module can be used to estimate the performance of the policy, as shown in [41, 29]. Our adaptive curriculum (§ id1) uses the critic to adjust the task difficulty.

Proximal Policy Optimization with Actor-Critic

In reinforcement learning, at each time step $t$, the agent interacts with the environment by applying an action $a_t$ based on its observation $o_t$ from the environment, and receives a reward $r_t$ as feedback. Usually the agent acts according to a parametrized policy $\pi_\theta(a_t \mid o_t)$, where $\pi_\theta(a_t \mid o_t)$ is the probability density of $a_t$ under the current policy. In DRL, $\pi_\theta$ is a deep neural network with parameters $\theta$. The goal is to solve the following optimization problem:

$$\max_{\theta} \; J_{RL}(\theta) = \mathbb{E}_{a_t \sim \pi_\theta(\cdot \mid o_t)}\Big[\sum_{t=0}^{\infty} \gamma^{t}\, r(o_t, a_t)\Big],$$

where $\gamma \in (0, 1)$ is the discount factor so that the sum converges.

We solve this optimization problem with a policy-gradient actor-critic algorithm, updated using proximal policy optimization (PPO) [37]. We choose PPO because it is simple to implement and effective for producing high quality locomotion solutions, as demonstrated in previous work, e.g. [32, 45, 29, 41].

The critic, or value function, computes the total expected reward that a policy can obtain when starting from an observation $o$. The value function of a policy $\pi$ is defined as

$$V^{\pi}(o) = \mathbb{E}_{o_0 = o,\, a_t \sim \pi(\cdot \mid o_t)}\Big[\sum_{t=0}^{\infty} \gamma^{t}\, r(o_t, a_t)\Big].$$

In DRL, the total expected reward can often only be estimated, and so we collect trajectories by executing the current policy. Let an experience tuple be $(o_t, a_t, r_t, o_{t+1})$ and a trajectory be a sequence of such tuples; a Monte Carlo estimate of the value function at $o_t$ can then be computed recursively via $\hat{V}(o_t) = r_t + \gamma \hat{V}(o_{t+1})$, with $\hat{V}(o_T) = 0$ at the end of the trajectory. The value estimate is used to train a neural network-based value function using supervised learning in PPO. In policy-gradient algorithms, the value function is usually only used for computing the advantage for training the actor.

The policy, or actor, is updated by maximizing

$$L_{PPO}(\theta) = \frac{1}{T}\sum_{t=1}^{T} \min\big(\rho_t \hat{A}_t,\; \mathrm{clip}(\rho_t, 1-\epsilon, 1+\epsilon)\, \hat{A}_t\big),$$

where $\rho_t$ is an importance sampling term used for calculating the expectation under the old policy and $\hat{A}_t$ is the advantage estimate.
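For concreteness, a minimal PyTorch sketch of the recursive return estimate and the clipped surrogate objective is shown below; it is a standard PPO implementation pattern under the definitions above, not the authors' code.

```python
import torch

def monte_carlo_returns(rewards, gamma=0.99):
    """Recursive value estimates: V(o_t) = r_t + gamma * V(o_{t+1}), with V = 0 past the end."""
    returns = torch.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

def ppo_surrogate(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """Clipped PPO objective (to be maximized)."""
    rho = torch.exp(log_probs_new - log_probs_old)              # importance sampling ratio rho_t
    unclipped = rho * advantages
    clipped = torch.clamp(rho, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return torch.min(unclipped, clipped).mean()

# Toy usage with random data.
rewards = torch.rand(8)
returns = monte_carlo_returns(rewards)
lp_new = torch.randn(8, requires_grad=True)
lp_old = lp_new.detach() + 0.05 * torch.randn(8)
adv = returns - returns.mean()                                  # crude advantage stand-in
loss = -ppo_surrogate(lp_new, lp_old, adv)                      # negate for a gradient-descent optimizer
loss.backward()
print(float(loss))
```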

Reward Design

Despite recent advancements in DRL algorithms, it remains critical to design suitable reward signals to accelerate the learning process. We describe the reward specifications used for the stepping stones environment below.

Hitting the Target

The immediate goal of the character is to place one of its feet on the next stepping target. We define the target reward as an exponentially decaying function of the distance between the center of the target step and the contacting foot, with two constants defining the magnitude and sensitivity of the reward. To account for the differences in body morphology between the Humanoid and Cassie models, we use different constants for the two characters; the sensitivity term is chosen to reflect the approximate length of the foot. Note that the character receives the target reward only when contact with the desired step is made; otherwise it is set to zero.

In the initial stages of training, when the character makes contact with the target, the contact location may be far away from the center. Consequently, the gradient with respect to the target reward is large due to the exponential, which encourages the policy to move the foot closer to the center in the subsequent training iterations.

Progress Reward

The target reward is a sparse reward, which is generally more difficult for DRL algorithms to optimize. We provide an additional dense progress reward to guide the character across the steps. More specifically, let $d_{t-1}$ and $d_t$ be the distances from the root of the character to the center of the desired step at the previous and current time steps, projected onto the ground plane. A progress reward proportional to $(d_{t-1} - d_t)/\Delta t$ is added to encourage the character to move closer to the stepping target, where $\Delta t$ is the control period for each character in Table id1.
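A hedged sketch of the two task rewards follows; the exponential form of the target reward matches the description above, but the constants k_target and k_d are placeholders since the exact values were not preserved here.

```python
import numpy as np

def target_reward(foot_pos, step_center, contact, k_target=50.0, k_d=0.25):
    """Sparse reward, given only on contact, decaying exponentially with the
    distance between the contacting foot and the center of the target step."""
    if not contact:
        return 0.0
    d = np.linalg.norm(np.asarray(foot_pos) - np.asarray(step_center))
    return k_target * np.exp(-d / k_d)

def progress_reward(d_prev, d_curr, dt):
    """Dense reward for reducing the ground-plane distance between the root
    and the target step; dt is the character's control period."""
    return (d_prev - d_curr) / dt

print(target_reward([0.1, 0.0, 0.0], [0.0, 0.0, 0.0], contact=True))
print(progress_reward(d_prev=1.00, d_curr=0.98, dt=1.0 / 60.0))     # Humanoid controls at 60 Hz
```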

Additional Reward For Humanoid

It is common practice to incorporate task-agnostic rewards to encourage natural motion when working in the absence of any reference motion, e.g., [8, 45]. We use similar reward terms to shape the motions for the Humanoid. Four additional terms penalize the character for using excess energy, reaching joint limits, failing to maintain an upright posture, and unnaturally speeding across the steps. Most of the terms are adapted from the reward implementation in [6].

For the energy penalty, we penalize the total power expended across the joints of the Humanoid, computed from the normalized torque and the joint velocity of each joint.

The joint limit penalty is used to discourage the character from violating its joint limits. It is computed using an indicator function that checks whether each joint is beyond a fixed fraction of its natural range of motion; in essence, this penalty is proportional to the number of joints near their lower or upper limit.

The posture penalty depends on the roll and pitch of the body orientation in the global frame. The penalty applies only when the character is leaning sideways by more than 0.4 radians, backwards beyond 0.2 radians, or forwards beyond 0.4 radians.

We also observe that the Humanoid tends to move unnaturally fast to achieve a good progress reward. We therefore add a velocity penalty to discourage the character from exceeding a maximum root speed. This issue does not affect Cassie, since its speed is predetermined by the fixed gait period.

Finally, we add an alive bonus for every time step that the Humanoid keeps its root a minimum height above the lower foot; otherwise, the episode is terminated. This reward encourages the Humanoid to maintain balance and prevents it from being overly eager to maximize the progress reward.
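The following sketch collects plausible implementations of these shaping terms. The thresholds in the posture penalty come from the text; the coefficients, sign conventions, and exact functional forms are assumptions.

```python
import numpy as np

def energy_penalty(norm_torques, joint_vels):
    """Penalize mechanical power, summed over joints (functional form assumed)."""
    return float(np.sum(np.abs(np.asarray(norm_torques) * np.asarray(joint_vels))))

def joint_limit_penalty(q, q_low, q_high, margin=0.05):
    """Count joints that are within a small fraction of their range from a limit."""
    q, q_low, q_high = map(np.asarray, (q, q_low, q_high))
    span = q_high - q_low
    return float(np.sum((q < q_low + margin * span) | (q > q_high - margin * span)))

def posture_penalty(roll, pitch):
    """Non-zero only when leaning sideways > 0.4 rad, backwards > 0.2 rad,
    or forwards > 0.4 rad (sign convention assumed)."""
    p = max(0.0, abs(roll) - 0.4)
    p += max(0.0, -pitch - 0.2) + max(0.0, pitch - 0.4)
    return p

def velocity_penalty(root_speed, max_speed):
    """Penalize only the amount by which the root speed exceeds the cap."""
    return max(0.0, root_speed - max_speed)

def alive_bonus(root_height_above_lower_foot, min_height, bonus=1.0):
    """Constant bonus while the root stays high enough; otherwise terminate."""
    alive = root_height_above_lower_foot > min_height
    return (bonus if alive else 0.0), alive
```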

Learning Curricula

The learning efficiency for the stepping stones task is strongly correlated with the distribution of the step parameters. In this section, we describe five different sampling strategies, including uniform sampling as a baseline for comparison. For clarity, we focus on the 2D parameter subspace consisting of the step yaw and pitch; however, we further extend these strategies to the 3D and 5D step parameter spaces.

All strategies other than uniform sampling require dynamically adjusting the step parameter distribution. As such, we first discretize the sampling space evenly into a grid over the admissible yaw and pitch ranges, with the midpoint of the grid corresponding to zero yaw and zero pitch. Also note that the granularity of the yaw axis and the pitch axis differs, since the two ranges are not equal. The discretization process is illustrated in Figure id1.

[Figure: figure/sampling_prob_all.pdf]

Figure : Left: The fixed-order curriculum advances evenly through the sampling space. Middle: The fixed-order boundary curriculum advances evenly, but only samples steps on the boundary of the parameter space. Right: The adaptive curriculum is free to explore the parameter space at its own pace.

Uniform Sampling (Baseline)

The simplest strategy is to sample the parameters uniformly during training. This is effective if the sampling space only spans easy steps, e.g., steps with small yaw and pitch variations. As the step variations become larger, it becomes less likely for the policy to receive the step reward during random exploration, and so the gradient information is also reduced. We also refer to this strategy as the no-curriculum baseline, since it does not adjust the step parameter distribution during training.

Fixed-order Curriculum

This curriculum is designed based on our intuition of task difficulty. We first divide the grid into six stages, from the easiest to the most challenging. In each stage, the yaw and pitch are sampled uniformly from a sub-grid centered at the middle point, which grows from one stage to the next; e.g., in the first stage, we only sample the center point of the grid, which means that every step is generated with zero yaw and zero pitch. The curriculum advances when the average total reward during a training iteration exceeds a threshold (see Table id1). The curriculum becomes equivalent to uniform sampling when the last stage is reached, and the distribution then remains fixed until the end of training. The process is illustrated in Figure id1. We call this the fixed-order curriculum because the stages proceed in a predefined order, although the progression from one stage to the next is still tied to performance. Similar approaches have been shown to be effective for learning locomotion tasks, e.g., [45].

Fixed-order Boundary Curriculum

This strategy is similar to the fixed-order curriculum with one important modification: instead of sampling uniformly in the rectangular domain, it samples only in the boundary region. Please refer to Figure id1 for a visual illustration of the differences. The premise is that the policy can remember solutions to previously encountered step parameters, or that the solution which solves the new parameters also solves the inner region, so it is more efficient to sample only on the boundary.
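A compact sketch of both fixed-order variants is given below. It assumes an 11x11 parameter grid so that the six stages grow from the single center cell to the full grid; the grid size and stage growth rule are inferences, while the reward threshold is taken from the curriculum table.

```python
import numpy as np

class FixedOrderCurriculum:
    """Stage k samples uniformly from the centred (2k-1) x (2k-1) sub-grid;
    the boundary variant keeps only the outer ring of that sub-grid."""
    def __init__(self, n=11, n_stages=6, reward_threshold=2500.0):
        self.n, self.n_stages, self.threshold = n, n_stages, reward_threshold
        self.stage = 1

    def update(self, avg_episode_reward):
        if avg_episode_reward > self.threshold and self.stage < self.n_stages:
            self.stage += 1                                    # advance to harder steps

    def sampling_probs(self, boundary_only=False):
        probs = np.zeros((self.n, self.n))
        c, k = self.n // 2, self.stage - 1
        lo, hi = c - k, c + k + 1
        probs[lo:hi, lo:hi] = 1.0                              # current stage's sub-grid
        if boundary_only and self.stage > 1:
            probs[lo + 1:hi - 1, lo + 1:hi - 1] = 0.0          # keep only the boundary ring
        return probs / probs.sum()

cur = FixedOrderCurriculum()
cur.update(avg_episode_reward=3000.0)                          # crosses the Humanoid threshold
print(cur.stage, cur.sampling_probs(boundary_only=True).sum())
```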

Difficult-tasks-favored Sampling

This strategy is equivalent to the adaptive sampling introduced in [29] and [41]. The idea is that during task sampling, more difficult tasks cause more failures, leading to more frequent early termination. Because of this, even though the tasks are sampled uniformly, the collected data is biased towards easier tasks. To counter this, the sampling distribution is updated based on the current value function estimate of each task. This results in more difficult tasks being sampled more frequently, thus balancing the data distribution observed during training. In many ways, this strategy takes the opposite approach to the fixed-order curriculum, where the policy focuses on easy steps in the early stages of training and moves progressively into more difficult settings. We describe our implementation, which is shared with the adaptive curriculum, in Section id1.

Adaptive Curriculum

The motivating philosophy of our adaptive curriculum is that it is beneficial to avoid scenarios that are either too easy or too challenging during learning. Most of the trajectory samples should be devoted to medium difficulty steps that the policy can improve on in the short term.

We define the capability of a policy for a given yaw and pitch as the expected value function estimate for steps generated with those parameters, where the remaining step parameters are held fixed and a mapping converts the step parameters into the Cartesian vectors used by the policy and value function. In simple terms, the capability metric is an answer to the question: given two upcoming steps, what is the average performance of the current policy across all observed character states?

Evaluating this capability exactly is generally intractable, so we estimate it by executing the policy on an easy terrain, i.e., the terrain generated by the first stage defined in Section id1, once per episode. Each time the character makes contact with the target foothold, the curriculum evaluates the capability of the current policy for each pair in the grid by hallucinating their placements. The process is repeated for five steps to accumulate different character states, and the mean result is used as a proxy for capability. Also, note that only the parameters of the second step are used for evaluating the capability, i.e., the first step is always fixed. It is possible to use both steps for evaluation, but the second step will be replaced when the character makes contact with the first, since new steps are generated on every contact. Lastly, we observe that the value function is less sensitive to the second step for Cassie, possibly due to the pre-trained imitation controller, and so we vary the first step instead.

We then define the sampling probability of a set of parameters in the parameter grid to be proportional to a weight that measures how close the estimated capability is to a target difficulty value, and we normalize this proportionality into a probability distribution. One parameter controls the sensitivity to differences in capability values, while the target difficulty value decides the difficulty setting of the curriculum. In our experiments, we use a target difficulty of 0.9 for the Humanoid and 0.85 for Cassie (Table id1).

When the target difficulty is close to one, the curriculum prefers step parameters for which the estimated capability is high, i.e., steps where the policy has high confidence. In practice, these usually correspond to the easiest steps, e.g., ones without roll and pitch variations. Conversely, when the target difficulty is close to zero, the curriculum samples steps that are beyond the capability of the current policy. We use this setting as our implementation of difficult-tasks-favored sampling, as the two are similar in spirit.
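A sketch of the adaptive step distribution is shown below. The kernel peaked at the target difficulty is an assumed functional form that reproduces the described behaviour; the 0.9/0.85 difficulty values come from the parameter table, and the sensitivity value and grid size are placeholders.

```python
import numpy as np

def adaptive_sampling_probs(capability, target=0.9, sensitivity=20.0):
    """Turn a grid of capability estimates (assumed normalized to [0, 1]) into
    sampling probabilities concentrated where capability is near the target."""
    weights = np.exp(-sensitivity * (capability - target) ** 2)
    return weights / weights.sum()

capability = np.random.uniform(0.0, 1.0, size=(11, 11))        # assumed 11x11 parameter grid
probs = adaptive_sampling_probs(capability, target=0.9)        # 0.9 for Humanoid, 0.85 for Cassie
print(probs.sum(), probs.max())
```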

Results and Evaluations

Property Humanoid Cassie
Fixed-order reward threshold 2500 1000
Adaptive curriculum difficulty 0.9 0.85
Exploration noise (logstd)
Samples per iteration () 5 4
Table : Curriculum and learning parameters.
Humanoid Cassie
Task Parameter U FO FOB A U FO FOB A
Flat
   1.20, 1.20 1.20, 1.25 1.35, 1.35 1.45, 1.50 0.85 0.90 0.95 0.95
   1.15, 1.20 1.15, 1.20 1.25, 1.35 1.35, 1.40 0.75 0.80 0.85 0.90
Single-step
   0.75, 0.80 0.80, 0.80 0.80 0.80 0.85 0.60
   1.30, 1.50 1.50, 1.50 0.75, 1.00 0.90, 0.95 0.80 0.85 0.80 0.75
Continuous-step
   —, 0.65 0.40 0.45 0.40
   0.75, 0.80 —, 0.65 0.65, 0.70 0.35
Spiral
   0.75, 0.80 0.80, 0.85 0.50 0.65 0.60
   0.65, 0.70 1.40, 1.50 0.65, 0.75 1.00, 1.10 0.55 0.60
Table : Performance evaluation of policies under different settings, for Uniform (U), Fixed-Order (FO), Fixed-Order Boundary (FOB), and Adaptive (A) curricula. The performance numbers represent the maximum radial distances achievable; please see the text for a detailed explanation. Larger is better. Bold indicates the best result among the alternatives. Entries marked with a dash indicate that the policy fails for all tested values.

We train stepping stone policies for the Humanoid, Cassie and Monster. We then quantitatively evaluate and compare the differences between sampling strategies. Since the Humanoid and the Monster are similar in terms of control and reward specifications, we focus our evaluation on the Humanoid and Cassie.

We first summarize the high-level findings. All three curricula that gradually increase the task difficulty are able to do well at solving the stepping stone task; this includes the fixed-order, fixed-order boundary, and adaptive curricula. The remaining approaches, uniform sampling and difficult-tasks-favored sampling, each produce conservative policies that simply learn to stand on the first step when the alive bonus is present, and otherwise yield much less robust and less capable policies. The performance of the policies is best demonstrated in the supplementary video.

Policy Structures

All policies in our experiments are represented by two five-layer neural networks, with all hidden layers of equal width, trained with PPO. One network is the actor, which outputs the mean of a Gaussian policy, and the other is the critic, which outputs a single value indicating the value function estimate of the current policy. The first three hidden layers of the actor use the softsign [39] activation, while the final two layers use ReLU activation. We apply Tanh to the final output to normalize the action to have a maximum value of one. For the critic, we use ReLU for all hidden layers. The policy parameters are updated using the Adam optimizer [17] with a fixed mini-batch size and learning rate over several epochs in each roll-out. Training a single policy takes on the order of hours on a GPU, with simulation running in parallel on a 16-core CPU. The learning pipeline is implemented in PyTorch [30].
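The description above can be read as the following PyTorch sketch. The hidden width, the exact layer count, and the placement of the activations are one plausible reading of the text rather than the authors' verified architecture; the observation and action dimensions come from the character table.

```python
import torch
import torch.nn as nn

HIDDEN = 256   # hidden width was not preserved in this text; 256 is a placeholder

class Actor(nn.Module):
    """Actor: Softsign on the first three hidden layers, ReLU on the next two,
    and a Tanh-squashed linear output so actions lie in [-1, 1]."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, HIDDEN), nn.Softsign(),
            nn.Linear(HIDDEN, HIDDEN), nn.Softsign(),
            nn.Linear(HIDDEN, HIDDEN), nn.Softsign(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, act_dim), nn.Tanh(),
        )

    def forward(self, obs):
        return self.net(obs)          # mean of the Gaussian policy

class Critic(nn.Module):
    """Value network with ReLU on all hidden layers and a scalar output."""
    def __init__(self, obs_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, 1),
        )

    def forward(self, obs):
        return self.net(obs)

actor, critic = Actor(obs_dim=60, act_dim=21), Critic(obs_dim=60)   # Humanoid dims from the table
print(actor(torch.zeros(1, 60)).shape, critic(torch.zeros(1, 60)).shape)
```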

To reduce the amount of computation, we pre-train an initial locomotion controller on straight, flat terrain for both the Humanoid and Cassie, with the step length sampled from a fixed range for each character. These controllers are used as the starting point for all subsequent experiments. This also means that we are directly comparing the different sampling strategies on their performance for the stepping stones task. For the experiments described in this section, default settings are used unless otherwise specified. Other character-specific curriculum and learning parameters used for training are summarized in Table id1.

Learning Curves for 2D Parameter Space

The performance of different sampling strategies is shown in Figure id1. To ensure fairness in the learning curves comparison, we use uniform sampling to evaluate all policies. It is important to note that the learning curves may not reflect the performance of the policies as precisely as visual demonstrations. In particular, due to the presence of the alive bonus for the Humanoid, a simple policy can receive a maximum reward of 2000 by standing still on the first step. Please refer to the supplementary video for further details.

For the Humanoid, the learning curves capture the phenomenon of local and global optima, where the sampling strategies fall into two categories. In the first category, both the uniform and difficult-tasks-favored sampling strategies quickly achieve decent performance, but eventually converge to lower final rewards. The combination of difficult steps and sparse target reward discourages the policies trained with these two methods from making further progress after learning to balance on the first step. In contrast, the policies steadily improve under the fixed-order, fixed-order boundary, and adaptive curricula, due to the gradual build-up of step difficulty. These three curricula were able to guide the policies to solve the stepping stones task, and the differences in learning speed are insignificant. The distinction between these three curricula is clearer in their use cases, which we discuss in the next section.

[Figure: figure/learning_curve_with_boundary.pdf]

Figure : Learning curves for different sampling strategies, averaged over five runs. Left: Humanoid. Right: Cassie.

Curriculum Progress for 2D Parameter Space

The fixed-order curriculum is developed based on our intuition of task difficulty. However, the relationship between a task parameter and difficulty is not always obvious. The benefit of the adaptive curriculum is that it yields a smoothly-advancing curriculum with fine-grained control of the step distribution based on the policy's local capability.

Figure id1 shows the relative progress of the fixed-order and adaptive curriculum, where the heatmaps of the latter were captured at the end of each of the six stages. From the adaptive curriculum heatmaps, it is clear that the competency in the yaw dimension expands much faster than in the pitch dimension. This observation is consistent with our intuition that variations in the yaw dimension should be easier to learn. Furthermore, the high-probability, ring-structured region of each heatmap resembles that in the fixed-order boundary curriculum. Overall, the adaptive curriculum is flexible and has similar features to the fixed-order and fixed-order boundary curricula. One disadvantage is that it requires more computation to evaluate the capability of the policy.

3D Parameter Space

We extend the evaluation of the fixed-order, fixed-order boundary, and adaptive curricula to the 3D parameter space, now including the step distance. The step distance is sampled from 11 uniformly discretized values within a fixed range for the Humanoid and another for Cassie. For the fixed-order curriculum, in addition to the parameters defined in Section id1, the step distance starts from a single value in the first stage, and the sampling space expands by two grid points every time the reward threshold is met. The fixed-order boundary curriculum is similarly extended. For the adaptive curriculum, the capability of the policy defined in Section id1 is modified to take the step distance as an additional parameter.

For the fixed-order curriculum, it may be impossible to progress to the final stage due to the physical capability of the characters. However, it is entirely possible that a particular parameter combination in a later stage is within the capability limit, yet the fixed-order curriculum will never have the chance to attempt it, while the adaptive curriculum is free to advance unevenly through the parameter space. We observe this phenomenon in our experiments.

Policy Capability Limits

We also examine the performance of the policies by fixing the yaw and pitch while pushing the step distance to the limit. The test scenarios are summarized in Table id1. The single-step scenario means one inclined or declined step at the start, followed by horizontal straight-line steps until the end. The continuous-step variation is where all steps are on a constant incline or decline. Note that the pitch is defined such that a negative value produces an incline. The motions for some of the scenarios can be visualized in Figure id1.

[Figure: figure/3d_scenarios_v2.pdf]

Figure : Snapshot of the motions on different test scenarios.

We test whether the policy can sustain the performance level for ten consecutive steps. For the Humanoid, the simulation is not fully deterministic due to an observed underlying stochasticity in PyBullet's contact handling, and so we repeat each scenario five times and record two numbers. The first represents the maximum step distance for which the policy succeeds in all five runs, and it thus provides a conservative estimate. The second number represents the maximum step distance for which the policy succeeds in at least one of the runs. We observe empirically that the policies work consistently when the step distance is below the maximum value recorded, and thus the learned policies are generally quite robust.

When we decrease the pitch to 40 degrees in the single-step and continuous-step decline scenarios for the Humanoid, the adaptive curriculum is able to perform consistently for all five runs at 0.8 meters and 1.5 meters respectively. This suggests that the more extreme pitch settings may be near the physical limit of the Humanoid. Since the adaptive curriculum prioritizes medium-difficulty settings, the most extreme scenarios are likely to be sampled very rarely. The fixed-order curriculum does not suffer from this issue, since it is forced to sample the extreme scenarios once the final stage is reached.

5D Parameter Space

For the 5D parameter space, we also include the pitch and roll of each step, measured in their respective local frames, so that the generated steps are tilted. The roll and pitch of the steps are sampled within a fixed range of degrees, and each new dimension is discretized into the same number of intervals as before; the adaptive curriculum is then applied to train a new policy for each character. For comparison with their respective 3D policies, we evaluate the number of steps each policy can handle on ten randomly sampled 5D stepping stone sequences, each with 50 steps. The mean and standard deviation of successful steps are reported in Table id1. A snapshot of the motion on tilted steps can be seen in Figure id1.

Parameters Humanoid Cassie
3D Policy
5D Policy
Table : Robustness of 3D and 5D policies on 5D stepping stone sequences. The numbers represent the number of steps before falling.

[Figure: figure/tilted_5d_steps.pdf]

Figure : Steps with roll and pitch variations.

Walking on Variable Terrain

Given the considerable abilities of the characters to realize challenging stepping stone scenarios, we expect that the same control policies can execute similar steps on continuous terrain as they do on isolated footholds. The primary difference between the two scenarios is that continuous terrain may present tripping hazards for the swing foot that are not present in the case of isolated stepping stones. Also, continuous terrain may demand more precise foot placements, since the surfaces near the target locations have non-uniform slopes. We use the height field primitive in PyBullet to model continuous terrain generated using Perlin noise. We then synthesize footstep trajectories to create 5D stepping stone sequences from the character's initial position to feed to the policies. Note that the policy perceives discrete steps, as before, while the simulator sees only the height field. While we find height fields in PyBullet to have slightly different contact dynamics than the discrete footholds we used for stepping stones, our policies are robust enough to handle the differences without further training. Figure id1 shows the Humanoid walking on continuous terrain.

To demonstrate the generality of our approach, we apply the same learning pipeline to train a policy for the Monster with the same 5D parameter space. This policy achieves the same robustness and capabilities on the continuous terrain. Please refer to the supplementary video for visual results.

[Figure: figure/humanoid_terrain.pdf]

Figure : Stepping-stone policy applied to continuous terrain.

Discussion and Limitations

During training, we use stepping stone blocks which are five times wider than the ones used for rendering. We find this improves training consistency, as it makes the sparse target reward more discoverable during random exploration. However, it also causes the characters to occasionally miss the step for some extreme sampling parameters when tested on narrower steps. This issue could be addressed by adding the step width as a curriculum parameter and decreasing it over time during training.

The Humanoid and Cassie appear to use different anticipation horizons. Although we provide a two-step look-ahead for both the Humanoid and Cassie, the value function estimates indicate that Cassie's policy considers only the first step while the Humanoid uses both. This may be because Cassie has a fixed step timing, enforced by the phase variable, which limits the policy to taking more cautious steps. For the Humanoid, we observe that its step timing depends on the combination of the two upcoming steps. For example, the character prefers to quickly walk down consecutive descending steps, while taking other combinations more slowly. This gives the policy more flexibility and makes the second-step information more meaningful.

For the adaptive curriculum, we estimate the difficulty of a step by hallucinating it while traversing horizontal and straight steps. One limitation of this method is that it ignores the influence of step transitions. For example, it is generally easier to make a right-turning step when the swing foot is the right foot, and vice versa. A natural way to take the transition into account is to estimate the difficulty of each step immediately before it is generated within the training episode. However, this requires additional computation.

The purpose of the look-ahead delay is to emulate human reaction time and to produce more conservative motions. With the default delay of 30 frames, the Humanoid walks across the stepping stones at an average speed of 1.35 m/s, similar to a typical human walking pace. We can control the walking speed by adjusting the look-ahead delay and disabling the speed penalty. When the look-ahead delay is set to 2, the Humanoid traverses the terrain at an average speed of 2.10 m/s, which is closer to jogging.

Lastly, our policies seem to have reached the physical limits achievable with a normal stepping gait. Different locomotion modes are required to handle even more drastic terrain variations; e.g., the Humanoid could use its hands to clamber up steeper inclines. Despite being able to control its arm movements, the Humanoid prefers to keep its arms in a tucked position. An interesting future direction would be to learn different locomotion modes for handling different scenarios.

Conclusions

We have presented a general learned solution capable of solving challenging stepping stone sequences, as applicable to physics-based legged locomotion. To this end, we evaluated four different curricula and demonstrated that the key to solving this problem is using suitable learning curricula that gradually increase the task difficulty according to the capability of the policy. In the future we wish to integrate these stepping capabilities with a step planner, to rapidly generalize the capabilities to new characters, to support true omni-directional stepping, to integrate hands-assisted locomotion modes such as clambering, and to test the capabilities on physical robots. We believe that the simplicity of our key findings, in retrospect, makes them the perfect stepping-stone to future research on generalized locomotion.

  • [1] F. Adbolhosseini, H. Y. Ling, Z. Xie, X. B. Peng, and M. van de Panne (2019) On learning symmetric locomotion. In Proc. ACM SIGGRAPH Motion, Interaction, and Games (MIG 2019).
  • [2] Y. Bengio, J. Louradour, R. Collobert, and J. Weston (2009) Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 41–48.
  • [3] K. Bergamin, S. Clavet, D. Holden, and J. R. Forbes (2019) DReCon: data-driven responsive control of physics-based characters. ACM Transactions on Graphics (TOG) 38 (6), pp. 1–11.
  • [4] J. Chestnutt, M. Lau, G. Cheung, J. Kuffner, J. Hodgins, and T. Kanade (2005) Footstep planning for the Honda ASIMO humanoid. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, pp. 629–634.
  • [5] S. Coros, P. Beaudoin, K. K. Yin, and M. van de Panne (2008) Synthesis of constrained walking skills. In ACM Transactions on Graphics (TOG), Vol. 27, pp. 113.
  • [6] E. Coumans and Y. Bai (2016–2019) PyBullet, a Python module for physics simulation for games, robotics and machine learning. http://pybullet.org
  • [7] R. Deits and R. Tedrake (2014) Footstep planning on uneven terrain with mixed-integer convex optimization. In 2014 IEEE-RAS International Conference on Humanoid Robots, pp. 279–286.
  • [8] Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel (2016) Benchmarking deep reinforcement learning for continuous control. In International Conference on Machine Learning, pp. 1329–1338.
  • [9] C. Florensa, D. Held, M. Wulfmeier, M. Zhang, and P. Abbeel (2017) Reverse curriculum generation for reinforcement learning. arXiv preprint arXiv:1707.05300.
  • [10] S. Forestier, Y. Mollard, and P. Oudeyer (2017) Intrinsically motivated goal exploration processes with automatic curriculum learning. arXiv preprint arXiv:1708.02190.
  • [11] R. J. Griffin, G. Wiedebach, S. McCrory, S. Bertrand, I. Lee, and J. Pratt (2019) Footstep planning for autonomous walking over rough terrain. arXiv preprint arXiv:1907.08673.
  • [12] M. Grimmer (2015) Powered lower limb prostheses. Ph.D. Thesis, Technische Universität.
  • [13] N. Heess, S. Sriram, J. Lemmon, J. Merel, G. Wayne, Y. Tassa, T. Erez, Z. Wang, S. Eslami, M. Riedmiller, et al. (2017) Emergence of locomotion behaviours in rich environments. arXiv preprint arXiv:1707.02286.
  • [14] D. Holden, T. Komura, and J. Saito (2017) Phase-functioned neural networks for character control. ACM Transactions on Graphics (TOG) 36 (4), pp. 42.
  • [15] Y. Jiang, T. Van Wouwe, F. De Groote, and C. K. Liu (2019) Synthesis of biologically realistic human motion using joint torque actuation. arXiv preprint arXiv:1904.13041.
  • [16] A. Karpathy and M. van de Panne (2012) Curriculum learning for motor skills. In Canadian Conference on Artificial Intelligence, pp. 325–330.
  • [17] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • [18] S. Lee, M. Park, K. Lee, and J. Lee (2019) Scalable muscle-actuated human simulation and control. ACM Trans. Graph. 38 (4).
  • [19] U. Lindemann, J. Klenk, C. Becker, and R. Moe-Nilssen (2013) Assessment of adaptive walking performance. Medical Engineering & Physics 35 (2), pp. 217–220.
  • [20] L. Liu and J. Hodgins (2017) Learning to schedule control fragments for physics-based characters using deep Q-learning. ACM Transactions on Graphics (TOG) 36 (3), pp. 1–14.
  • [21] L. Liu, M. van de Panne, and K. Yin (2016) Guided learning of control graphs for physics-based characters. ACM Transactions on Graphics (TOG) 35 (3), pp. 1–14.
  • [22] T. Matiisen, A. Oliver, T. Cohen, and J. Schulman (2019) Teacher-student curriculum learning. IEEE Transactions on Neural Networks and Learning Systems.
  • [23] J. S. Matthis and B. R. Fajen (2014) Visual control of foot placement when walking over complex terrain. Journal of Experimental Psychology: Human Perception and Performance 40 (1), pp. 106.
  • [24] J. Merel, A. Ahuja, V. Pham, S. Tunyasuvunakool, S. Liu, D. Tirumala, N. Heess, and G. Wayne (2018) Hierarchical visuomotor control of humanoids. arXiv preprint arXiv:1811.09656.
  • [25] I. Mordatch, M. de Lasa, and A. Hertzmann (2010) Robust physics-based locomotion using low-dimensional planning. In ACM SIGGRAPH 2010 Papers, pp. 1–8.
  • [26] S. Narvekar and P. Stone (2019) Learning curriculum policies for reinforcement learning. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, pp. 25–33.
  • [27] Q. Nguyen, A. Agrawal, X. Da, W. C. Martin, H. Geyer, J. W. Grizzle, and K. Sreenath (2017) Dynamic walking on randomly-varying discrete terrain with one-step preview. In Robotics: Science and Systems.
  • [28] Q. Nguyen, A. Hereid, J. W. Grizzle, A. D. Ames, and K. Sreenath (2016) 3D dynamic walking on stepping stones with control barrier functions. In 2016 IEEE 55th Conference on Decision and Control (CDC), pp. 827–834.
  • [29] S. Park, H. Ryu, S. Lee, S. Lee, and J. Lee (2019) Learning predict-and-simulate policies from unorganized human motion data. ACM Trans. Graph. 38 (6).
  • [30] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al. (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pp. 8024–8035.
  • [31] A. E. Patla and J. N. Vickers (2003) How far ahead do we look when required to step on specific locations in the travel path during locomotion? Experimental Brain Research 148 (1), pp. 133–138.
  • [32] X. B. Peng, P. Abbeel, S. Levine, and M. van de Panne (2018) DeepMimic: example-guided deep reinforcement learning of physics-based character skills. ACM Transactions on Graphics (TOG) 37 (4), pp. 143.
  • [33] X. B. Peng, G. Berseth, and M. van de Panne (2016) Terrain-adaptive locomotion skills using deep reinforcement learning. ACM Transactions on Graphics (TOG) 35 (4), pp. 1–12.
  • [34] X. B. Peng, G. Berseth, K. Yin, and M. van de Panne (2017) DeepLoco: dynamic locomotion skills using hierarchical deep reinforcement learning. ACM Transactions on Graphics (TOG) 36 (4), pp. 41.
  • [35] Z. Potocanac, W. Hoogkamer, F. P. Carpes, M. Pijnappels, S. M. Verschueren, and J. Duysens (2014) Response inhibition during avoidance of virtual obstacles while walking. Gait & Posture 39 (1), pp. 641–644.
  • [36] A. Safonova, J. K. Hodgins, and N. S. Pollard (2004) Synthesizing physically realistic human motion in low-dimensional, behavior-specific spaces. In ACM Transactions on Graphics (TOG), Vol. 23, pp. 514–521.
  • [37] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov (2017) Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
  • [38] E. Todorov, T. Erez, and Y. Tassa (2012) MuJoCo: a physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033.
  • [39] J. Turian, J. Bergstra, and Y. Bengio (2009) Quadratic features and deep architectures for chunking. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers, pp. 245–248.
  • [40] R. Wang, J. Lehman, J. Clune, and K. O. Stanley (2019) Paired open-ended trailblazer (POET): endlessly generating increasingly complex and diverse learning environments and their solutions. arXiv preprint arXiv:1901.01753.
  • [41] J. Won and J. Lee (2019) Learning body shape variation in physics-based characters. ACM Trans. Graph. 38 (6).
  • [42] Z. Xie, G. Berseth, P. Clary, J. Hurst, and M. van de Panne (2018) Feedback control for Cassie with deep reinforcement learning. In Proc. IEEE/RSJ Intl Conf on Intelligent Robots and Systems (IROS 2018).
  • [43] Z. Xie, P. Clary, J. Dao, P. Morais, J. Hurst, and M. van de Panne (2019) Learning locomotion skills for Cassie: iterative design and sim-to-real. In Proc. Conference on Robot Learning (CORL 2019).
  • [44] K. Yin, S. Coros, P. Beaudoin, and M. van de Panne (2008) Continuation methods for adapting simulated skills. In ACM Transactions on Graphics (TOG), Vol. 27, pp. 81.
  • [45] W. Yu, G. Turk, and C. K. Liu (2018) Learning symmetric and low-energy locomotion. ACM Transactions on Graphics (TOG) 37 (4), pp. 144.
  • [46] H. Zhang, S. Starke, T. Komura, and J. Saito (2018) Mode-adaptive neural networks for quadruped motion control. ACM Trans. Graph. 37 (4), pp. 145:1–145:11.