Introduction
Bipedal locomotion is a fundamental problem in computer animation and robotics, and many data-driven and physics-based solutions have been proposed. However, a principal raison d'être for legged locomotion is the ability to navigate over challenging irregular terrain, which is unfortunately not reflected in the bulk of locomotion work, which targets flat terrain. Traversing irregular terrain is challenging, with the limiting case being navigation across a sequence of stepping stones that fully constrain the location of each footstep. We wish to learn physics-based solutions to this classical stepping stone problem from scratch, i.e., without the use of motion capture data. The limits of the learned skills should ideally stem from the physical capabilities of the characters, and not from the learned control strategy.
We investigate the use of deep reinforcement learning (DRL) for computing solutions to this problem. We find a curriculum-based solution to be essential to achieving good results; the curriculum begins with easy steps and advances to challenging ones. We evaluate four different curricula, each of which advances the learning based on different principles, and compare them against a no-curriculum baseline. Challenging stepping stone skills are demonstrated on a humanoid model, a fully-calibrated simulation of a large bipedal robot, and a monster model. Finally, we demonstrate that the stepping stone policies can be directly applied to walking on challenging continuous terrain with pre-planned foot placements.
Our contributions are as follows:

We show how control policies for challenging stepping stone problems can be learned from scratch using reinforcement learning, as demonstrated across three bipeds and two simulators. Leveraging careful reward design, we are able to learn control policies that produce plausible motions without the use of reference motion data.

We demonstrate the critical role of a curriculum in circumventing local minima in optimization and in supporting efficient learning for this task. We evaluate four curricula in comparison to a no-curriculum baseline.

We demonstrate that the stepping stone control policies are directly transferable to locomotion on continuous terrain. The learned stepping stone skills thus serve as a general solution for navigating many types of terrain.
Related Work
The stepping stone problem is of interest to many fields, including animation and robotics, as discussed in more detail below, as well as gait and posture, e.g., [19, 35], and neuromotor control, e.g., [31, 23]. In what follows, we focus principally on related work in animation and robotics.
Learning Bipedal Locomotion
Considerable progress has been made towards learning control policies for locomotion in the context of physics-based character animation, often via deep reinforcement learning. In many cases, these aim to satisfy an imitation objective and target motions on flat terrain, e.g., [21, 34, 20, 29, 3]. Other solutions learn in the absence of motion capture data, also for flat terrain, e.g., [45, 18, 15]. Environment information such as height maps [33, 32] or egocentric vision [24] can be fed into the policy to adapt to some degree of terrain irregularity. Learned kinematic locomotion controllers have recently achieved impressive results for terrains that include hills and obstacles [14, 46], although equivalent capability has not been demonstrated for physically simulated characters. The stepping-stone problem has also been tackled using trajectory optimization, e.g., [36].
Walking on Stepping Stones
Precise foot placement is needed to achieve stepping stone capability. Many works in the robotics literature achieve this capability by utilizing path planning techniques, including mixed integer programming [7] or variants of A* search [4, 11]. Such techniques are most often limited to either flat terrain [4] or to quasi-static walking, which results in slow walking speeds. Another line of work uses a gait library [27], consisting of trajectories for different steps that are computed offline and then used to achieve stepping stone walking on a bipedal robot whose motion is restricted to the sagittal plane.
3D stepping stone capability has been shown for several simulated bipedal character models. [28] approach this via control barrier functions, although the method relies heavily on the feasibility of the resulting quadratic programming problem, which is not always satisfied. Furthermore, while the simulated model is 3D, the steps themselves are placed in a straight line on a horizontal plane, i.e., they vary only in distance, with no height variation or turning. There are also works in the computer animation literature demonstrating 3D stepping skills, e.g., [5] and [25], generally with limited realism and capabilities. Foot placement has also been used as guidance for reinforcement learning algorithms to achieve path-following capability for a simulated biped [34]. There it parameterizes the possible steps on flat terrain, and in practice it does not always succeed at reaching the desired foot placements.
Curriculum-based Learning
Curriculum learning is a learning process in which task difficulty is increased over time during training [2]. It has been applied to synthesizing navigation and locomotion policies, e.g., [44, 16, 9, 13, 45]. The task difficulty is usually determined by human intuition. Teacher-student curriculum learning [22] uses progress on tasks as a metric for choosing the next tasks, and demonstrates automatic curriculum choices on decimal number addition of different lengths and on Minecraft games. Intrinsic motivation [10] can also let robots begin from simple goals and advance towards more complicated ones. A curriculum policy [26] can further be learned by formulating the curriculum learning process as a Markov Decision Process (MDP). More recently, [40] proposes the POET algorithm, which allows a 2D walker to solve increasingly challenging terrains by co-evolving the environment and the policy. Reverse curriculum learning has been shown to be effective at balancing uneven data generation in DRL. For example, [41] and [29] propose a form of adaptive sampling where more difficult tasks are given higher priority during training.
System Overview
An overview of our system is shown in Figure id1. The environment consists of a physics simulator and a step generator. The step generator samples a random sequence of steps from a given probability distribution over the step parameter space. In the case where no curriculum is applied, the step distribution is uniform across the parameter space for the entire duration of training. In contrast, a curriculum dynamically adjusts the step distribution depending on the progress made by the policy. We experiment with four different curricula and a baseline, each having its own motivation and benefits. We show experimentally that curriculum learning, when applied appropriately, allows the policy to solve the stepping stone task, which is otherwise very challenging for standard reinforcement learning.
The remainder of the paper is organized as follows: the stepping stones task definition and character modelling (§ id1), reinforcement learning and reward specifications (§ id1), learning curricula (§ id1), experimental results (§ id1), and discussion (§ id1).
Simulation Environments
We now describe the stepping stones parameter space and the character models. We experiment with three different characters, the Humanoid, Cassie, and the Monster, to show that the proposed curricula provide a robust approach for learning stepping stone skills.
Stepping Stones Generation
In the stepping stones task, the goal of the character is to make precise foot placements on a sequence of discrete footholds. The character receives the foothold positions of the two upcoming steps in its root space, as shown in Figure id1. We use two steps since two-step anticipation yields better performance than single-step anticipation [27], and it has been found that further anticipation may be of limited benefit [5].
Successive step placements are generated in spherical coordinates, where the step distance $r$, yaw $\theta$, and pitch $\phi$ relative to the previous stone are the controllable parameters. This 3D parameter space is also illustrated in Figure id1. We limit the distance, yaw, and pitch to lie in the intervals $[r_{min}, r_{max}]$, $[-\theta_{max}, \theta_{max}]$, and $[-\phi_{max}, \phi_{max}]$, respectively. During training, we set $\theta_{max}$ and $\phi_{max}$ to values we find experimentally to be the upper limits of our characters' capability. For our 2D step-parameter tests, the step distance is sampled uniformly from a fixed range, chosen separately for the Humanoid and for Cassie to account for the differences in character morphology. A 5D parameter space includes additional roll and pitch variations of the step surfaces, which supports transfer of the skills to smoothly-varying terrains. The roll and pitch variation of a step is generated by first applying the yaw and pitch rotations relative to the previous foothold, then subsequently applying the roll and pitch rotations of the step surface about its x-axis and y-axis. In effect, this causes the step to become tilted, as shown in Figure id1.
When the character successfully steps on the current target, its position is immediately replaced by that of the next target, and a new target pops into view. We introduce an artificial look-ahead delay to allow the stance foot to settle (see Table id1), by postponing this replacement for a fixed number of frames. In practice, the look-ahead delay affects the speed at which the character moves through the stepping stones and also enables it to stop on a given step. Lastly, to ensure that the character begins tackling variable steps from a predictable and feasible state, we fix the first three steps of the stepping stone sequence. Specifically, the first two steps are manually placed slightly below the character's feet, and the third step is always flat and directly ahead.
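The step generation above can be sketched as follows. This is a minimal illustration of converting the spherical step parameters (distance, yaw, pitch) into a Cartesian foothold position; the function name and conventions are our own, with the sign of the pitch chosen so that a negative value produces an incline, as stated later in the text.

```python
import numpy as np

def next_step_position(prev_pos, heading, r, yaw, pitch):
    """Generate the next foothold from spherical parameters (r, yaw, pitch),
    measured relative to the previous stone. `heading` is the accumulated
    yaw of the step sequence in the ground plane (a sketch, not the paper's code)."""
    new_heading = heading + yaw           # accumulate turning in the ground plane
    dx = r * np.cos(pitch) * np.cos(new_heading)
    dy = r * np.cos(pitch) * np.sin(new_heading)
    dz = -r * np.sin(pitch)               # negative pitch -> the step goes up
    return prev_pos + np.array([dx, dy, dz]), new_heading

# A flat, straight-ahead step of length 0.8 m from the origin.
pos, heading = next_step_position(np.array([0.0, 0.0, 0.0]), 0.0, 0.8, 0.0, 0.0)
```

Sampling `r`, `yaw`, and `pitch` from the curriculum's current distribution and chaining calls to this function produces an entire stepping stone sequence.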
Character Models
Property  Humanoid  Cassie  Monster 
Height (m)  1.60  1.16  1.15 
Mass (kg)  59  33  33 
Action Parameters  21  10  21 
Degrees of Freedom  27  20  27 
State Features  60  51  60 
Maximum Torque (Nm)  100  112.5  100 
Simulation Freq. (Hz)  240  1000  240 
Control Freq. (Hz)  60  33  60 
Look-ahead Delay (frames)  30  3  30 
The character models are shown in Figure id1, and their detailed specifications are summarized in Table id1. We focus our experiments and analysis on the Humanoid and Cassie models. However, we show that the curriculum-based learning pipeline can be directly applied to a third character, the Monster.
Humanoid
The Humanoid is simulated with 21 hinge joints using PyBullet [6] and is directly torque-controlled. As is standard in reinforcement learning, we normalize the policy output to lie in the interval $[-1, 1]$, then multiply the action value for each joint by its corresponding torque limit. The state space contains the joint angles and velocities in parent space, the roll and pitch of the root orientation in global space, and the linear velocities of the body in character root space. The state space also includes the height of the pelvis relative to the lowest foot, as well as a binary contact indicator for each foot. We use the height information to detect when the character falls, so as to terminate the simulation early. To improve motion quality, we generate mirrored rollout trajectories using the DUP method from [1] to encourage symmetric motions.
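The action normalization described above can be sketched as below; the specific per-joint torque limits here are placeholders for illustration, not the model's actual values (those follow Table id1 and the human data cited next).

```python
import numpy as np

# Hypothetical per-joint torque limits (N·m) for a 3-joint toy example.
TORQUE_LIMITS = np.array([100.0, 100.0, 60.0])

def action_to_torques(policy_output):
    """Squash a raw policy output into [-1, 1], then scale each joint's
    command by its corresponding torque limit."""
    normalized = np.tanh(policy_output)
    return normalized * TORQUE_LIMITS

# A saturated output drives a joint at (nearly) its full torque limit.
torques = action_to_torques(np.array([0.0, 10.0, -10.0]))
```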
Our Humanoid character is carefully modelled to reflect joint and torque limits close to those documented for humans in [12]. Humanoid characters with unrealistic torque limits often produce unnatural motions unless guided by reference motions, e.g., [32, 24]. In our experiments we find, as in [15], that natural motion is easier to achieve with realistic torque limits.
Cassie
The action space of Cassie consists of the target joint angles of the ten actuated joints for a low-level PD controller. The PD controller operates at a much higher frequency than the policy to ensure stability in control and simulation. The state space of Cassie is mostly analogous to that of the Humanoid. One exception is that the binary contact indicators are replaced by a single phase variable used for tracking the reference motion, since the contact state can be estimated from the phase variable.
The Cassie model is designed by Agility Robotics, simulated in MuJoCo [38], and validated to be very close to the physical robot [43]. Designing robust controllers for Cassie is challenging since it has 20 degrees of freedom but only ten actuators. Furthermore, due to the strong actuators on the robot, it is difficult to obtain high-quality motion directly with a simple reward specification. To bootstrap stepping stone training, we follow [42] to first obtain a natural forward-walking policy by tracking a reference motion. The reference motion is then discarded, i.e., it is not used during training in the stepping stone environment.
Monster
The third character, the Monster, is identical to the Humanoid except for body morphology, mass distribution, and slightly weaker arms.
Learning Control Policies
We use reinforcement learning to learn locomotion skills. However, as we show in Section id1, reinforcement learning alone, without a curriculum, is insufficient for solving the stepping stones task. In this section, we provide the background for actor-critic policy-gradient algorithms. Importantly, the critic can be used to estimate the performance of the policy, as shown in [41, 29]. Our adaptive curriculum (§ id1) uses the critic to adjust the task difficulty.
Proximal Policy Optimization with Actor-Critic
In reinforcement learning, at each time step $t$, the agent interacts with the environment by applying an action $a_t$ based on its observation $o_t$ from the environment, and receives a reward $r_t$ as feedback. Usually the agent acts according to a parameterized policy $\pi_\theta(a_t | o_t)$, where $\pi_\theta(a_t | o_t)$ is the probability density of $a_t$ under the current policy. In DRL, $\pi_\theta$ is a deep neural network with parameters $\theta$. The goal is to solve the following optimization problem:

$$\underset{\theta}{\text{maximize}} \;\; J_{RL}(\theta) = \mathbb{E}_{a_t \sim \pi_\theta(\cdot | o_t)}\left[\sum_{t=0}^{\infty} \gamma^t\, r(o_t, a_t)\right],$$

where $\gamma \in (0, 1)$ is the discount factor so that the sum converges. We solve this optimization problem with a policy-gradient actor-critic algorithm, updated using proximal policy optimization (PPO) [37]. We choose PPO because it is simple to implement and effective at producing high-quality locomotion solutions, as demonstrated in previous work, e.g., [32, 45, 29, 41].
The critic, or value function, computes the total expected reward a policy can obtain when starting from an observation $o$. The value function of a policy $\pi$ is defined as

$$V^\pi(o) = \mathbb{E}_{o_0 = o,\; a_t \sim \pi(\cdot | o_t)}\left[\sum_{t=0}^{\infty} \gamma^t\, r(o_t, a_t)\right].$$

In DRL, the total expected reward can often only be estimated, so we collect trajectories by executing the current policy. Let an experience tuple be $(o_t, a_t, r_t, o_{t+1})$ and a trajectory be $\tau = (o_0, a_0, r_0, o_1, \dots, o_T)$. A Monte Carlo estimate of the value function at $o_t$ can be recursively computed via $\hat{V}(o_t) = r_t + \gamma \hat{V}(o_{t+1})$, with $\hat{V}(o_T) = 0$ upon termination. The value estimate is used to train a neural-network value function with supervised learning in PPO. In policy-gradient algorithms, the value function is usually only used for computing the advantage for training the actor.
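The recursive Monte Carlo value estimate can be sketched as a single backward pass over a trajectory's rewards; the `bootstrap` argument is our own addition for the case where the trajectory is truncated rather than terminated.

```python
def monte_carlo_values(rewards, gamma=0.99, bootstrap=0.0):
    """Recursive Monte Carlo value estimates: V(o_t) = r_t + gamma * V(o_{t+1}).
    `bootstrap` is the value at the final observation (0 on termination)."""
    values = [0.0] * len(rewards)
    running = bootstrap
    for t in reversed(range(len(rewards))):   # sweep backwards through time
        running = rewards[t] + gamma * running
        values[t] = running
    return values

vals = monte_carlo_values([1.0, 1.0, 1.0], gamma=0.5)
# V(o_2) = 1.0, V(o_1) = 1 + 0.5*1 = 1.5, V(o_0) = 1 + 0.5*1.5 = 1.75
```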
The policy, or actor, is updated by maximizing

$$L_{ppo}(\theta) = \frac{1}{T}\sum_{t=1}^{T} \min\left(\rho_t \hat{A}_t,\; \text{clip}(\rho_t,\, 1 - \epsilon,\, 1 + \epsilon)\, \hat{A}_t\right),$$

where $\rho_t = \pi_\theta(a_t | o_t) / \pi_{\theta_{old}}(a_t | o_t)$ is an importance sampling term used for correcting the expectation taken under the old policy, and $\hat{A}_t$ is the advantage estimate.
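The clipped surrogate objective can be sketched directly; this is the standard PPO form [37], written here over a batch of precomputed ratios and advantages.

```python
import numpy as np

def ppo_surrogate(ratios, advantages, epsilon=0.2):
    """Clipped PPO objective: mean of min(rho * A, clip(rho, 1-eps, 1+eps) * A)."""
    clipped = np.clip(ratios, 1.0 - epsilon, 1.0 + epsilon)
    return np.mean(np.minimum(ratios * advantages, clipped * advantages))

# A large ratio with positive advantage is clipped to (1 + eps) * A = 1.2.
gain = ppo_surrogate(np.array([2.0]), np.array([1.0]), epsilon=0.2)
```

The clipping removes the incentive to move the new policy far from the old one within a single update, which is what makes PPO stable with the large rollout batches used here.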
Reward Design
Despite recent advancements in DRL algorithms, it remains critical to design suitable reward signals to accelerate the learning process. We describe the reward specifications used for the stepping stones environment below.
Hitting the Target
The immediate goal of the character is to place one of its feet on the next stepping target. We define the target reward as

$$r_{target} = k_t \exp(-d / k_s),$$

where $d$ is the distance between the center of the step target and the contacting foot, and $k_t$ and $k_s$ define the magnitude and sensitivity of the target reward. To account for the differences in body morphology, we use different values of $k_t$ and $k_s$ for the Humanoid and for Cassie. The sensitivity term $k_s$ is chosen to reflect the approximate length of the foot. Note that the character receives the target reward only when contact with the desired step is made; otherwise it is set to zero.
In the initial stages of training, when the character makes contact with the target, the contact location may be far from the center. Consequently, the gradient of the target reward is large due to the exponential, which encourages the policy to move the foot closer to the center in subsequent training iterations.
Progress Reward
The target reward is a sparse reward, which is generally more difficult for DRL algorithms to optimize. We therefore provide an additional dense progress reward to guide the character across the steps. More specifically, let $d_{t-1}$ and $d_t$ be the distances between the root of the character and the center of the desired step at the previous and current time steps, as projected onto the ground plane. A progress reward

$$r_{progress} = (d_{t-1} - d_t) / \delta t$$

is added to encourage the character to move closer to the stepping target, where $\delta t$ is the control period for each character in Table id1.
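The two task rewards can be sketched as below. The constants `k_t` and `k_s` are placeholders (the paper uses character-specific values), and the functional forms follow the descriptions above: an exponential of the contact distance for the sparse term, and the rate of change of the root-to-target distance for the dense term.

```python
import math

def target_reward(d, k_t=1.0, k_s=0.3, hit=True):
    """Sparse target reward on contact: k_t * exp(-d / k_s); zero when the
    foot does not land on the desired step. Constants are placeholders."""
    return k_t * math.exp(-d / k_s) if hit else 0.0

def progress_reward(d_prev, d_curr, dt=1.0 / 60.0):
    """Dense reward for closing the ground-plane distance to the target."""
    return (d_prev - d_curr) / dt
```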
Additional Reward For Humanoid
It is common practice to incorporate task-agnostic rewards to encourage natural motion when working in the absence of any reference motion, e.g., [8, 45]. We use similar reward terms to shape the motions of the Humanoid, adding four penalty terms that discourage the character from using excess energy, reaching joint limits, failing to maintain an upright posture, and unnaturally speeding across the steps. Most of the terms are adapted from the reward implementation in [6].
For the energy penalty, we have

$$r_{energy} = -w_E \sum_{j=1}^{J} |\tau_j \dot{q}_j|,$$

where $J$ is the number of joints on the Humanoid, $\tau_j$ is the normalized torque for joint $j$, and $\dot{q}_j$ is the joint velocity.
The joint limit penalty discourages the character from violating its joint limits, and is defined as

$$r_{limit} = -w_L \sum_{j=1}^{J} \mathbb{1}\left[q_j \notin \left(q_{min,j} + \delta_j,\; q_{max,j} - \delta_j\right)\right],$$

where $\mathbb{1}[\cdot]$ is an indicator function checking whether joint $j$ is near the boundary of its natural range of motion, defined by the limits $q_{min,j}$ and $q_{max,j}$ with a small margin $\delta_j$. In essence, this penalty is proportional to the number of joints near their lower or upper limits.
The posture penalty is

$$r_{posture} = -w_P\, \mathbb{1}\left[\,|\varphi_{roll}| > 0.4 \;\lor\; \varphi_{pitch} \notin (-0.2,\, 0.4)\,\right],$$

where $\varphi_{roll}$ and $\varphi_{pitch}$ are the roll and pitch of the body orientation in the global frame. The penalty applies only when the character leans sideways by more than 0.4 radians, backwards beyond 0.2 radians, or forwards beyond 0.4 radians.
We also observe that the Humanoid tends to move unnaturally fast to achieve a good progress reward. We add a velocity penalty

$$r_{speed} = -w_V \max\left(0,\; \|v_{root}\| - v_{max}\right)$$

to discourage the character from exceeding a maximum root speed $v_{max}$. This issue does not affect Cassie, since its speed is predetermined by the fixed gait period.
Finally, we add an alive bonus for every time step that the Humanoid keeps its root a minimum height above the lower foot; otherwise the episode is terminated. This reward encourages the Humanoid to maintain balance and prevents it from being overly eager to maximize the progress reward.
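The four shaping terms above can be sketched together as follows. All weights, the joint-limit margin, and the speed limit here are placeholder values (the paper's exact constants and sign conventions for forward/backward lean are not reproduced in this excerpt).

```python
import numpy as np

def shaping_penalties(tau, qvel, q, q_low, q_high, roll, pitch, speed,
                      margin=0.05, speed_limit=1.6):
    """Sketch of the Humanoid's four shaping terms (all unit-weighted here):
    energy, joint-limit, posture, and velocity penalties."""
    energy = -np.sum(np.abs(tau * qvel))                 # effort penalty
    near_limit = (q < q_low + margin) | (q > q_high - margin)
    joint_limit = -float(np.count_nonzero(near_limit))   # joints near limits
    posture = 0.0
    if abs(roll) > 0.4 or pitch > 0.4 or pitch < -0.2:   # leaning too far
        posture = -1.0
    velocity = -max(0.0, speed - speed_limit)            # too-fast penalty
    return energy + joint_limit + posture + velocity
```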
Learning Curricula
The learning efficiency for the stepping stones task is strongly correlated with the distribution of the step parameters. In this section, we describe five different sampling strategies, including uniform sampling as a baseline for comparison. For clarity, we focus on the 2D parameter subspace of yaw $\theta$ and pitch $\phi$. However, we further extend these strategies to the 3D and 5D step parameter spaces.
Except for uniform sampling, the strategies require dynamically adjusting the step parameter distribution. As such, we first discretize the sampling space evenly into an $N \times N$ grid inside the region defined by $[-\theta_{max}, \theta_{max}] \times [-\phi_{max}, \phi_{max}]$. The midpoint of the grid is precisely $\theta = 0$ and $\phi = 0$. Note that the granularity of the $\theta$-axis and the $\phi$-axis differs, since $\theta_{max}$ is not equal to $\phi_{max}$. The discretization process is illustrated in Figure id1.
Uniform Sampling (Baseline)
The simplest strategy is to sample the parameters uniformly throughout training. This is effective if the sampling space spans only easy steps, e.g., steps with small yaw and pitch variations. As the step variations become larger, it becomes less likely for the policy to receive the step reward during random exploration, and so the gradient information is also reduced. We also refer to this strategy as the no-curriculum baseline, since it does not adjust the step parameter distribution during training.
Fixed-order Curriculum
This curriculum is designed based on our intuition of task difficulty. We divide the grid into six stages, from the easiest to the most challenging. In stage $k$, $\theta$ and $\phi$ are sampled uniformly from the $(2k-1) \times (2k-1)$ subgrid centered at the midpoint. E.g., in the first stage, we only sample the center point of the grid, which means that every step is generated with $\theta = 0$ and $\phi = 0$. The curriculum advances when the average total reward during a training iteration exceeds a threshold (see Table id1). The curriculum becomes equivalent to uniform sampling when the last stage is reached, and remains fixed until the end of training. The process is illustrated in Figure id1. We call this the fixed-order curriculum because the stages proceed in a predefined order, although the progression from one stage to the next is still tied to performance. Similar approaches have been shown to be effective for learning locomotion tasks, e.g., [45].
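The stage-wise sampling can be sketched as below. The grid size, parameter ranges, and the odd-sized subgrid convention (stage 1 is the single center point) are illustrative assumptions consistent with the description above.

```python
import numpy as np

def sample_stage(stage, n=11, yaw_max=20.0, pitch_max=50.0, rng=None):
    """Fixed-order curriculum sketch: in stage k, sample (yaw, pitch)
    uniformly from an odd-sized subgrid centered on the n x n grid's midpoint.
    Stage 1 samples only the center point; the final stage spans the full grid."""
    if rng is None:
        rng = np.random.default_rng()
    yaws = np.linspace(-yaw_max, yaw_max, n)
    pitches = np.linspace(-pitch_max, pitch_max, n)
    mid = n // 2
    half = stage - 1                      # subgrid half-width grows per stage
    i = rng.integers(mid - half, mid + half + 1)
    j = rng.integers(mid - half, mid + half + 1)
    return yaws[i], pitches[j]

yaw, pitch = sample_stage(1)              # stage 1 is always the center (0, 0)
```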
Fixed-order Boundary Curriculum
This strategy is similar to the fixed-order curriculum, with one important modification: instead of sampling uniformly in the rectangular domain, it samples only in the boundary region. Please refer to Figure id1 for a visual illustration of the differences. The premise is that the policy can remember solutions to previously encountered step parameters, or that a solution which solves the new parameters also solves the inner region, and so it is more efficient to sample only on the boundary.
Difficult-tasks-favored Sampling
This strategy is equivalent to the adaptive sampling introduced in [29] and [41]. The idea is that during task sampling, more difficult tasks cause more failures, leading to more frequent early termination. Because of this, even though the tasks are sampled uniformly, the collected data is biased towards easier tasks. To counter this, the sampling distribution is updated based on the current value function estimate for each task. This results in more difficult tasks being sampled more frequently, balancing the data distribution observed during training. In many ways, this strategy takes the opposite approach to the fixed-order curriculum, where the policy focuses on easy steps in the early stages of training and moves progressively into more difficult settings. We describe the implementation in Section id1.
Adaptive Curriculum
The motivating philosophy of our adaptive curriculum is that it is beneficial to avoid scenarios that are either too easy or too challenging during learning. Most of the trajectory samples should be devoted to medium difficulty steps that the policy can improve on in the short term.
We define the capability of a policy $\pi$ for parameters $\theta$ and $\phi$ as

$$C(\theta, \phi) = \mathbb{E}_{o}\left[ V^\pi\big(o,\; g(\theta_0, \phi_0),\; g(\theta, \phi)\big) \right],$$

where the first step $(\theta_0, \phi_0)$ is fixed to be $(0, 0)$ and $g$ converts the step parameters to the Cartesian target vectors used by the policy and value function. In simple terms, the capability metric answers the question: given two upcoming steps, what is the average performance of the current policy across all observed character states?
Evaluating $C$ exactly is generally intractable, so we estimate it by executing the policy on an easy terrain, i.e., the terrain generated by the first stage defined in Section id1, once per episode. Each time the character makes contact with the target foothold, the curriculum evaluates the capability of the current policy for each $(\theta, \phi)$ pair in the grid by hallucinating their placements. The process is repeated for five steps to accumulate different character states, and the mean result is used as a proxy for capability. Also, note that only the parameters of the second step are varied for evaluating capability, i.e., the first step is always fixed. It is possible to use both steps for evaluation, but the second step would be replaced when the character makes contact with the first, since new steps are generated on every contact. Lastly, we observe that the value function is less sensitive to the second step for Cassie, possibly due to the pretrained imitation controller, and so we vary the first step instead.
We then define the sampling probability of a set of parameters $(\theta, \phi)$ in the parameter grid to be proportional to

$$p(\theta, \phi) \propto \exp\left(-\kappa \left(C(\theta, \phi) - \beta\, C_{max}\right)^2\right),$$

where $C_{max} = \max_{\theta, \phi} C(\theta, \phi)$. This proportionality is normalized into a probability distribution over the grid. Here $\kappa$ controls the sensitivity to differences in capability values, and $\beta$ decides the difficulty setting of the curriculum. In our experiments, we use $\beta = 0.9$ for the Humanoid and $\beta = 0.85$ for Cassie (see Table id1).
When $\beta = 1$, the curriculum prefers step parameters such that $C(\theta, \phi) \approx C_{max}$, i.e., steps where the policy has high confidence. In practice, these usually correspond to the easiest steps, e.g., ones without roll and pitch variations. Conversely, when $\beta = 0$, the curriculum samples steps that are beyond the capability of the current policy. We use this setting as our implementation of difficult-tasks-favored sampling, as they are similar in spirit.
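The conversion from a grid of capability values to a sampling distribution can be sketched as below. The value of `kappa` is a placeholder, and the squared-difference form is one plausible reading of the adaptive rule described above.

```python
import numpy as np

def adaptive_distribution(capability, beta=0.9, kappa=10.0):
    """Turn a grid of capability values C into sampling probabilities
    proportional to exp(-kappa * (C - beta * C_max)^2), so that parameters
    of medium difficulty (capability near beta * C_max) dominate."""
    target = beta * capability.max()
    weights = np.exp(-kappa * (capability - target) ** 2)
    return weights / weights.sum()      # normalize into a distribution

C = np.array([0.2, 0.5, 0.9, 1.0])      # toy capability values
p = adaptive_distribution(C, beta=0.9)
# the entry closest to 0.9 * C_max = 0.9 receives the highest probability
```

Setting `beta` near 1 concentrates sampling on high-confidence (easy) steps, while `beta` near 0 concentrates it on steps the policy expects to fail, mirroring the two regimes discussed above.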
Results and Evaluations
Property  Humanoid  Cassie 
Fixed-order reward threshold  2500  1000 
Adaptive curriculum β  0.9  0.85 
Exploration noise (log std)  
Samples per iteration  5  4 
Sampling strategies: U = uniform, FO = fixed-order, FOB = fixed-order boundary, A = adaptive. Each task has two parameter rows.

                  Humanoid                                            Cassie
Task              U           FO          FOB         A               U     FO    FOB   A
Flat              1.20, 1.20  1.20, 1.25  1.35, 1.35  1.45, 1.50      0.85  0.90  0.95  0.95
                  1.15, 1.20  1.15, 1.20  1.25, 1.35  1.35, 1.40      0.75  0.80  0.85  0.90
Single-step       —           0.75, 0.80  —           0.80, 0.80      0.80  0.80  0.85  0.60
                  1.30, 1.50  1.50, 1.50  0.75, 1.00  0.90, 0.95      0.80  0.85  0.80  0.75
Continuous-step   —           —           —           —, 0.65         —     0.40  0.45  0.40
                  —           0.75, 0.80  —, 0.65     0.65, 0.70      —     —     —     0.35
Spiral            —           0.75, 0.80  —           0.80, 0.85      —     0.50  0.65  0.60
                  0.65, 0.70  1.40, 1.50  0.65, 0.75  1.00, 1.10      —     0.55  —     0.60
We train stepping stone policies for the Humanoid, Cassie, and the Monster. We then quantitatively evaluate and compare the differences between the sampling strategies. Since the Humanoid and the Monster are similar in terms of control and reward specifications, we focus our evaluation on the Humanoid and Cassie.
We first summarize the high-level findings. All three curricula that gradually increase task difficulty do well at solving the stepping stone tasks: the fixed-order, fixed-order boundary, and adaptive curricula. The remaining approaches, uniform sampling and difficult-tasks-favored sampling, each produce conservative policies that simply learn to stand on the first step when the alive bonus is present, and otherwise yield much less robust and less capable policies. The performance of the policies is best demonstrated in the supplementary video.
Policy Structures
All policies in our experiments are represented by two five-layer neural networks trained with PPO. One network is the actor, which outputs the mean of a Gaussian policy; the other is the critic, which outputs a single value estimating the value function of the current policy. The first three hidden layers of the actor use the soft-sign activation [39], while the final two layers use ReLU activations. We apply Tanh to the final output to normalize the action to have a maximum magnitude of one. For the critic, we use ReLU for all hidden layers. The policy parameters are updated using the Adam optimizer [17] over multiple epochs in each rollout. Training a single policy takes several hours on a GPU, with simulation running in parallel on a 16-core CPU. The learning pipeline is implemented in PyTorch [30].

To reduce the amount of computation, we pretrain an initial locomotion controller on straight-line, flat terrain for both the Humanoid and Cassie, with the step length sampled from a narrow range appropriate to each character. These controllers are used as the starting point for all subsequent experiments. This also means that we are directly comparing the different sampling strategies on their performance for the stepping stones task. Unless otherwise specified, the character-specific curriculum and learning parameters used for training are those summarized in Table id1.
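The actor's forward pass, as described above, can be sketched without any framework; the layer widths in the usage example are placeholders, and this is an illustration of the activation scheme rather than the training code itself.

```python
import numpy as np

def softsign(x):
    """Soft-sign activation [39]: x / (1 + |x|)."""
    return x / (1.0 + np.abs(x))

def actor_forward(obs, weights, biases):
    """Forward pass matching the described actor: soft-sign on the first
    three hidden layers, ReLU on the remaining hidden layers, and Tanh
    on the output so actions lie in [-1, 1]. A sketch, not the code used."""
    h = obs
    for i, (W, b) in enumerate(zip(weights, biases)):
        h = W @ h + b
        if i == len(weights) - 1:
            h = np.tanh(h)                # final output: normalize actions
        elif i < 3:
            h = softsign(h)               # hidden layers 1-3
        else:
            h = np.maximum(h, 0.0)        # hidden layers 4-5: ReLU
    return h

# Toy network: 5 hidden layers plus an output layer, with placeholder widths.
rng = np.random.default_rng(0)
dims = [4, 8, 8, 8, 8, 8, 2]
Ws = [rng.normal(size=(dims[i + 1], dims[i])) for i in range(6)]
bs = [np.zeros(dims[i + 1]) for i in range(6)]
action = actor_forward(np.ones(4), Ws, bs)   # every entry lies in [-1, 1]
```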
Learning Curves for 2D Parameter Space
The performance of the different sampling strategies is shown in Figure id1. To ensure fairness in the learning-curve comparison, we use uniform sampling to evaluate all policies. It is important to note that the learning curves may not reflect the performance of the policies as precisely as visual demonstrations. In particular, due to the presence of the alive bonus for the Humanoid, a simple policy can receive a maximum reward of 2000 by standing still on the first step. Please refer to the supplementary video for further details.
For the Humanoid, the learning curves capture the phenomenon of local and global optima, and the sampling strategies fall into two categories. In the first category, both the uniform and difficult-tasks-favored sampling strategies quickly achieve decent performance but eventually converge to lower final rewards. The combination of difficult steps and a sparse target reward discourages the policies trained with these two methods from making further progress after learning to balance on the first step. In contrast, the policies steadily improve under the fixed-order, fixed-order boundary, and adaptive curricula, due to the gradual build-up of step difficulty. These three curricula are able to guide the policies to solve the stepping stones task, and the difference in learning speed is insignificant. The distinction between them is clearer in their use cases, which we discuss in the next section.
Curriculum Progress for 2D Parameter Space
The fixed-order curriculum is developed based on our intuition of task difficulty. However, the relationship between a task parameter and difficulty is not always obvious. The benefit of the adaptive curriculum is that it yields a smoothly-advancing curriculum with fine-grained control of the step distribution based on the policy's local capability.
Figure id1 shows the relative progress of the fixed-order and adaptive curricula, where the heatmaps of the latter were captured at the end of each of the six stages. From the adaptive curriculum heatmaps, it is clear that competency in the yaw dimension expands much faster than in the pitch dimension. This observation is consistent with our intuition that variations in the yaw dimension should be easier to learn. Furthermore, the high-probability, ring-structured region of each heatmap resembles the sampling region of the fixed-order boundary curriculum. Overall, the adaptive curriculum is flexible and shares features with both the fixed-order and fixed-order boundary curricula. One disadvantage is that it requires additional computation to evaluate the capability of the policy.
3D Parameter Space
We extend the evaluation of the fixed-order, fixed-order boundary, and adaptive curricula to the 3D parameter space, now including the step distance $r$. The step distance is sampled from 11 uniformly discretized values within a fixed range for each character. For the fixed-order curriculum, in addition to the parameters defined in Section id1, it starts at the nominal step distance in the first stage and expands the sampling space by two grid points every time the reward threshold is met. The fixed-order boundary curriculum is extended similarly. For the adaptive curriculum, the capability of the policy defined in Section id1 is modified to take the additional parameter $r$.
For the fixed-order curriculum, it may be impossible to progress to the final stage due to the physical capabilities of the characters. However, it is entirely possible that some parameter choices within that stage are within the capability limit, and the fixed-order curriculum will never have the chance to attempt them, while the adaptive curriculum is free to advance unevenly in the parameter space. We observe this phenomenon in our experiments.
Policy Capability Limits
We also examine the performance of the policies by fixing and while pushing to the limit. The test scenarios are summarized in Table id1. The single-step scenario consists of one inclined or declined step at the start, followed by horizontal straight-line steps until the end. In the continuous-step variation, all steps lie on a constant incline or decline. Note that is defined such that a negative value produces an incline. The motions for some of the scenarios are visualized in Figure id1.
We test whether the policy can sustain the performance level for ten consecutive steps. For the Humanoid, the simulation is not fully deterministic due to observed stochasticity in PyBullet's contact handling, so we repeat each scenario five times and record two numbers. The first is the maximum value of for which the policy succeeds in all five runs, providing a conservative estimate. The second is the maximum value of for which the policy succeeds in at least one of the runs. We observe empirically that the policies work consistently when is less than the recorded maximum value, and thus the learned policies are generally quite robust.
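The two-number evaluation protocol can be sketched as a short loop. Here `run_policy` is an assumed interface that rolls out one stochastic episode and reports whether the character completed ten consecutive steps at the given pitch; the function itself is not from the paper.

```python
def capability_limits(run_policy, pitches, n_runs=5):
    """Sketch of the robustness test under simulator stochasticity.

    `pitches` is assumed sorted from easy to hard. Returns the largest
    pitch that succeeds in all runs (the conservative estimate) and the
    largest that succeeds in at least one run.
    """
    max_all, max_any = None, None
    for pitch in pitches:
        results = [run_policy(pitch) for _ in range(n_runs)]
        if all(results):
            max_all = pitch
        if any(results):
            max_any = pitch
    return max_all, max_any
```

The gap between the two returned values gives a rough sense of how close a scenario is to the edge of the policy's reliable operating range.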
When we decrease to 40 degrees in the single-step and continuous-step decline scenarios for the Humanoid, the adaptive curriculum is able to perform consistently across all five runs at 0.8 meters and 1.5 meters, respectively. This suggests that may be near the physical limit of the Humanoid. Since the adaptive curriculum prioritizes medium-difficulty settings, e.g. , the most extreme scenarios are likely to be sampled very rarely. The fixed-order curriculum does not suffer from this issue, since it is forced to sample the extreme scenarios once the final stage is reached.
5D Parameter Space
For the 5D parameter space, we also include the pitch and roll of each step, as measured in their respective local frames, so that the generated steps are tilted. We sample degrees, where and are the roll and pitch of the steps. Each new dimension is discretized into intervals as before, and the adaptive curriculum is applied to train a new policy for each character. For comparison with their respective 3D policies, we evaluate the number of steps each policy can handle on ten randomly sampled 5D stepping stone sequences, each with 50 steps. The mean and standard deviation of successful steps are reported in Table
id1. A snapshot of the motion on tilted steps can be seen in Figure id1.

Parameters | Humanoid | Cassie
3D Policy  |          |
5D Policy  |          |
Walking on Variable Terrain
Given the considerable abilities of the characters to handle challenging stepping stone scenarios, we expect that the same control policies can execute similar steps on continuous terrain as they do on isolated footholds. The primary difference between the two scenarios is that continuous terrain may present tripping hazards for the swing foot that are absent in the case of isolated stepping stones. Continuous terrain may also demand more precise foot placements, since the surfaces near target locations have non-uniform slopes. We use the height field primitive in PyBullet to model continuous terrain generated with Perlin noise. We then synthesize footstep trajectories from the character's initial position to create 5D stepping stone sequences to feed to the policies. Note that the policy perceives discrete steps, as before, while the simulator sees only the height field. Although we find height fields in PyBullet to have slightly different contact dynamics than the discrete footholds we used for stepping stones, our policies are robust enough to handle the differences without further training. Figure id1 shows the Humanoid walking on continuous terrain.
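Converting a height field into a 5D step sequence can be sketched as follows. This is an assumption-laden illustration: foot targets are placed along a straight line for simplicity, and each target's pitch and roll are estimated from finite-difference slopes of the surface; the function and parameter names are hypothetical.

```python
import numpy as np

def steps_from_heightfield(height, cell, start, step_len, n_steps):
    """Sketch: derive 5D step targets (x, y, z, pitch, roll) from a height field.

    `height` is a 2D array of terrain heights with horizontal grid spacing
    `cell` (meters). Targets are placed `step_len` apart along +x starting
    from grid index `start`; local surface slopes give each step's tilt.
    """
    steps = []
    i, j = start
    di = max(int(round(step_len / cell)), 1)
    for _ in range(n_steps):
        i += di
        # Central differences approximate the local surface gradient.
        dzdx = (height[i + 1, j] - height[i - 1, j]) / (2 * cell)
        dzdy = (height[i, j + 1] - height[i, j - 1]) / (2 * cell)
        pitch = np.degrees(np.arctan(dzdx))  # slope along travel direction
        roll = np.degrees(np.arctan(dzdy))   # slope across travel direction
        steps.append((i * cell, j * cell, height[i, j], pitch, roll))
    return steps
```

A real planner would additionally curve the path and avoid placing targets on locally steep or concave patches, but the essential translation from a continuous surface to discrete 5D step parameters is the same.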
To demonstrate the generality of our approach, we apply the same learning pipeline to train a policy for the Monster with the same 5D parameter space. This policy achieves the same robustness and capabilities on the continuous terrain. Please refer to the supplementary video for visual results.
Discussion and Limitations
During training, we use stepping stone blocks that are five times wider than the ones used for rendering. We find this improves training consistency, as it makes the sparse target reward more discoverable during random exploration. However, it also causes the characters to occasionally miss the step for some extreme sampling parameters when testing on the narrower steps. This issue could be addressed by adding the step width as a curriculum parameter and decreasing it over time during training.
The Humanoid and Cassie appear to use different anticipation horizons. Although we provide a two-step lookahead for both characters, the value function estimates indicate that Cassie's policy considers only the first step, while the Humanoid uses both. This may be because Cassie has a fixed step timing, enforced by the phase variable, which limits the policy to taking more cautious steps. For the Humanoid, we observe that its step timing depends on the combination of the two upcoming steps. For example, the character prefers to walk quickly down consecutive descending steps, while taking other combinations more slowly. This gives the policy more flexibility and makes the second-step information more meaningful.
For the adaptive curriculum, we estimate the difficulty of a step by hallucinating it while traversing horizontal, straight steps. One limitation of this method is that it ignores the influence of step transitions. For example, it is generally easier to make a right-turning step when the swing foot is the right foot, and vice versa. A natural way to take the transition into account is to estimate the difficulty of each step at the moment it is generated within the training episode. However, this requires additional computation.
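The hallucination idea can be sketched using the critic learned during training. This is a hedged sketch with assumed interfaces: `value_fn`, the observation layout, and `encode_step` are placeholders, not the paper's code. The intuition is that a lower value estimate for a hallucinated upcoming step signals a harder step.

```python
def hallucinated_difficulty(value_fn, flat_obs, encode_step, step_params):
    """Sketch of difficulty estimation by step hallucination.

    While the character traverses flat, straight steps, we replace the
    upcoming-step portion of the observation with a candidate step and
    read off the critic's value estimate. Difficulty is measured as the
    drop in value relative to the easy flat continuation.
    """
    obs = dict(flat_obs)
    obs["next_step"] = encode_step(step_params)  # hallucinate the step
    baseline = value_fn(flat_obs)                # value of the flat step
    return baseline - value_fn(obs)              # larger drop = harder step
```

Because the character's pose is always the one reached on flat terrain, this estimate is transition-agnostic, which is exactly the limitation noted above.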
The purpose of the lookahead delay is to emulate human reaction time and produce more conservative motions. With the default delay of 30, the Humanoid walks across the stepping stones at an average speed of 1.35 m/s, similar to a typical human walking pace. We can control the walking speed by adjusting the lookahead delay and disabling the speed penalty. When the lookahead delay is set to 2, the Humanoid traverses the terrain at an average speed of 2.10 m/s, which is closer to jogging.
Lastly, our policies seem to have reached the physical limits achievable with a normal stepping gait. Different locomotion modes are required to handle even more drastic terrain variations; e.g., the Humanoid could use its hands to clamber up steeper inclines. Despite being able to control its arm movements, the Humanoid prefers to keep its arms in a tucked position. An interesting future direction would be to learn different locomotion modes for handling different scenarios.
Conclusions
We have presented a general learned solution capable of solving challenging stepping stone sequences for physics-based legged locomotion. To this end, we evaluated four different curricula and demonstrated that the key to solving this problem is a suitable learning curriculum that gradually increases the task difficulty according to the capability of the policy. In the future, we wish to integrate these stepping capabilities with a step planner, to rapidly generalize the capabilities to new characters, to support true omnidirectional stepping, to integrate hands-assisted locomotion modes such as clambering, and to test the capabilities on physical robots. We believe that the simplicity of our key findings makes them, in retrospect, a perfect stepping stone for future research on generalized locomotion.