GPU-Accelerated Robotic Simulation for Distributed Reinforcement Learning

10/12/2018
by   Jacky Liang, et al.
Carnegie Mellon University
NVIDIA

Most Deep Reinforcement Learning (Deep RL) algorithms require a prohibitively large number of training samples for learning complex tasks. Many recent works on speeding up Deep RL have focused on distributed training and simulation. While distributed training is often done on the GPU, simulation is not. In this work, we propose using GPU-accelerated RL simulations as an alternative to CPU ones. Using NVIDIA Flex, a GPU-based physics engine, we show promising speed-ups of learning various continuous-control locomotion tasks. With one GPU and one CPU core, we are able to train the Humanoid running task in less than 20 minutes, using 10-1000x fewer CPU cores than previous works. We also demonstrate the scalability of our simulator to multi-GPU settings to train more challenging locomotion tasks.

1 Introduction

Model-free Deep RL has seen impressive achievements [1, 2, 3] in recent years, but many methods and tasks require enormous amounts of compute due to the large sample complexity of exploration in high-dimensional state and action spaces. One approach to overcoming the exploration problem is to use human demonstrations [4, 5], but collecting human demonstrations remains challenging for many tasks and is difficult to scale. Another approach is to vastly scale up RL simulation and training to distributed settings, so that large amounts of data can be obtained in a relatively short amount of time. Many recent works taking this approach have seen scaling benefits by performing policy training on the GPU while scaling up environment simulation across many CPUs. In this work, we propose using a GPU-accelerated RL simulator to bring the benefits of the GPU’s parallelism to RL simulation as well.

Using Flex, a GPU-based physics engine developed with CUDA, we implement an OpenAI Gym-like interface to perform RL experiments for continuous control locomotion tasks. We benchmark our simulator on ant and humanoid running tasks as well as their more challenging variations, inspired by those proposed in OpenAI Roboschool and the DeepMind Parkour environments. They include learning to run toward changing target locations, recovering from falls, and running on complex, uneven terrains. Our choice of tasks is driven by their popularity and the challenges they offer to various Deep RL algorithms. Although our training results are not directly comparable to those obtained in physics simulators used in prior work (e.g. MuJoCo, Bullet) due to differences in physics simulation, we have endeavoured to do head-to-head comparisons wherever possible. Using our GPU-accelerated RL framework to simulate and train hundreds to thousands of agents at once on a single GPU, we were able to achieve faster training results than previous works which used large CPU clusters. In addition, the scale and speed-ups achieved through our simulator, especially on the more challenging tasks, make GPU-accelerated RL simulation a viable alternative to CPU-based simulation.
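To make the interface concrete, the following Python sketch shows what a batched, Gym-like environment that steps many agents at once might look like. It is an illustration only, not the authors' released API; the class and method names are hypothetical.

    import numpy as np

    class BatchedLocomotionEnv:
        # Hypothetical Gym-like wrapper around a GPU physics scene that steps
        # all agents together. Arrays are batched along axis 0 (one row per agent).
        def __init__(self, num_agents, obs_dim, act_dim):
            self.num_agents, self.obs_dim, self.act_dim = num_agents, obs_dim, act_dim

        def reset(self):
            # The real simulator would reset and slightly perturb all agents on the GPU.
            return np.zeros((self.num_agents, self.obs_dim), dtype=np.float32)

        def step(self, actions):
            # actions: (num_agents, act_dim) joint torques; a real implementation would
            # advance all agents in a single set of GPU kernel launches.
            assert actions.shape == (self.num_agents, self.act_dim)
            obs = np.zeros((self.num_agents, self.obs_dim), dtype=np.float32)
            rewards = np.zeros(self.num_agents, dtype=np.float32)
            dones = np.zeros(self.num_agents, dtype=bool)
            return obs, rewards, dones, {}

A PPO implementation then interacts with this single object exactly as it would with one Gym environment, except that every array carries a leading agent dimension.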

Figure 1: GPU-Accelerated RL Simulation. We use an in-house GPU-accelerated physics simulator to concurrently simulate hundreds to thousands of robots for Deep RL of continuous control locomotion tasks. Here we show the Ant, Humanoid, and Humanoid Flagrun Harder on Complex Terrain tasks benchmarked in our work. Using a single machine (1 GPU and 1 CPU core), we are able to train humanoids to run in less than 20 minutes.

We summarize our key contributions below:

  1. A GPU-accelerated RL simulator built with an in-house GPU-based physics engine. We plan to release our simulator in the near future to facilitate fast RL training for the community.

  2. Experiments on massively distributed simulations of hundreds to thousands of locomotion environments on single and multiple GPU settings.

  3. Improvements in training speed for various challenging locomotion tasks, learning the humanoid running task in less than 20 minutes on a single machine. See our trained policies at https://sites.google.com/view/accelerated-gpu-simulation/home.

We note that in this paper our focus is on the application of our GPU-based physics engine to RL simulation, and not on comparisons and benchmarks of the physics engine itself.

2 Related Work

2.1 Distributed Deep RL

Many prior works have explored parallelizing environment simulation and policy training on CPUs. Nair et al. [6] proposed the first massively parallelized method for training RL agents. Their method, Gorila DQN, has separate learners, actors, and parameter servers. Simulating and training with hundreds of CPU cores, it was able to achieve superhuman performance on most Atari games in a few days. Following Gorila DQN, Mnih et al. [1] proposed the Asynchronous Advantage Actor-Critic (A3C) algorithm, and with 16 CPU cores they were able to compete with Gorila DQN on Atari games in about the same training time.

Babaeizadeh et al. [7] extended A3C by proposing a CPU/GPU hybrid variant, GA3C, which moves the policy to the GPU, enabling GPU-accelerated policy inference and training. Their algorithm dynamically adjusts the number of parallel agents, trainers, and parameter servers to maximize training speed. On a single machine with 16 CPU cores and 1 Tesla K40 GPU, GA3C achieved a substantial speed-up over a CPU-only implementation of A3C. Adamski et al. [8] further scaled up A3C training by using larger batch sizes and a well-tuned Adam optimizer. Their method learned many Atari games with hundreds of CPU cores, and no GPU, in less than an hour (e.g. Breakout in minutes).

Salimans et al. [9] explored using evolution strategies (ES) for RL. The authors scaled ES to hundreds of CPU cores to learn Atari games in about one hour. They also performed learning experiments on MuJoCo locomotion tasks, and with 1440 CPU cores they were able to train a humanoid to walk in 10 minutes. Such et al. [10] applied Genetic Algorithms (GA), a gradient-free optimization method, to RL. GA is also amenable to parallelization, and it likewise learned Atari games in about one hour with a comparable number of CPU cores. On humanoid locomotion tasks, however, GA trained much more slowly than ES and did not achieve comparable performance in the same time frame.

Recent advances in parallel computation tools such as Horovod [11] and Ray have enabled researchers to easily scale up machine learning to distributed settings. Ray RLlib [12] is a distributed RL library built on the Ray framework. In their benchmarks, the authors were able to scale ES with Ray RLlib to thousands of CPU cores, learning the humanoid walking task in minutes. Mania et al. [13] also used Ray, but for their proposed algorithm, Augmented Random Search (ARS). ARS learned the humanoid walking task in 21 minutes with 48 CPU cores, while using less CPU time than ES.

Other previous works aimed to improve the efficiency of off-policy learning from the large amount of data generated by many parallel actors. Espeholt et al. [14] proposed IMPALA, which applies a large-scale, distributed learning system to multi-task RL problems. IMPALA is inspired by A3C, but the actors do not compute policy gradients and send them to the learners; they send the sampled trajectories instead. The authors scaled IMPALA to hundreds of CPU actors and multiple GPU learners, and it learned the benchmarked tasks (DMLab-30, a suite of multi-task video-game-like environments) in less than a day. Horgan et al. [15] introduced Distributed Prioritized Experience Replay, an off-policy RL algorithm that uses a novel technique to sample more important trajectories from its replay buffer for learning. This work uses many CPU cores for simulating the RL environment and 1 GPU for training. With hundreds of actors, the method learned most Atari games in a few hours, and it trained a humanoid to walk in an hour and to run in four hours. Stooke and Abbeel [16] explored optimizing existing RL algorithms for fast performance on a multi-GPU system. The authors used an entire NVIDIA DGX-1, which contains 8 NVIDIA Tesla V100 GPUs. Running parallel simulations on its CPU cores and performing training on all of the V100s, the authors report being able to train many Atari games in a matter of minutes. Recently, OpenAI massively scaled up Proximal Policy Optimization [3], using thousands of CPU cores to simulate in-hand manipulation tasks [17] and many more for playing Dota 2 (https://openai.com/five/).

2.2 Locomotion and Physics Simulation

We focus our attention on continuous control locomotion tasks first proposed with MuJoCo [18], a popular CPU-based physics simulator. DART and Bullet are other notable alternatives, but MuJoCo remains by far the most popular physics simulator in the Deep RL community [9, 10, 12, 13, 15, 19, 3, 20] for its efficient simulation and relatively stable contact models. Duan et al. [19] first benchmarked different RL algorithms on various continuous control tasks such as cartpole swing-up and humanoid walking. Later, Schulman et al. [3] introduced more complex locomotion tasks, such as humanoid flagrun, where the agent must learn to change direction and run toward different targets. Heess et al. [20] took this a step further and trained humanoid agents to walk and run on uneven and dynamic terrains. Taking inspiration from these works, we use the humanoid running task and its more challenging variations for benchmarking. Owing to the humanoid's high-dimensional control space, these tasks require most Deep RL algorithms to use a significant number of samples to learn, which provides an opportunity to improve learning speed by reducing simulation time.

Many previous works on distributed RL have focused on discrete control problems such as Atari games, which do not require physics simulation, and prior work on continuous control tasks has relied exclusively on CPU-based simulation. While GPU-accelerated physics simulations have been applied in scientific computing [21, 22] and healthcare [23, 24], they have yet to be applied in robotics. To achieve state-of-the-art performance, previous works often had to scale environment simulation to hundreds, if not thousands, of CPU cores. In our work, we explore GPU-accelerated simulation as an alternative to CPU-based simulation. Using a single GPU, we can simulate hundreds to thousands of robots and achieve state-of-the-art results in locomotion tasks on a single machine, learning the humanoid running task in less than 20 minutes.

3 GPU-Accelerated Robotics Simulation

3.1 GPU-based Physics Engine

Our in-house GPU-based physics engine uses a non-smooth Newton method for its rigid body solver and a maximal coordinate formulation. As with environments in MuJoCo and Bullet, we use torque control as the actuation model. Potential collisions and contacts among bodies are detected speculatively and are modeled through unilateral constraint functions with a smooth isotropic Coulomb friction model. We use a sliding friction coefficient of 1.0, the same as in MuJoCo [25]. The restitution coefficient and gravity are held fixed across all experiments, with gravity pointing downward. For time-stepping we use an implicit time discretization, also as in [25], with a fixed time step. Each Newton iteration is solved by a sparse iterative Krylov linear solver, using the minimum number of linear iterations for which the simulation remains stable in our experiments. We found that Krylov methods allowed sufficient stiffness to achieve realistic humanoid gaits, while relaxation methods like Projected Gauss-Seidel were less effective, especially when paired with a maximal coordinate representation.

We develop a GPU-accelerated robotics simulation framework for RL that can simulate many robot agents in parallel for a variety of tasks. To simulate multiple robots performing the same task in parallel, we load all robots and task-related objects into the same simulation. This is unlike previous works that parallelize simulation by using multiple CPU processes or threads, each simulating an individual environment. The parallelism and the speed-up in our simulation are achieved by performing the physics computations on the GPU. We note that in our simulations, agents are able to interact with each other, which is not possible when running multiple single-agent simulation processes.

3.2 GPU Simulation Speed

To illustrate the typical performance of our simulator in a single-machine setting, we measured the GPU simulation frame time for the humanoid task as we increase the number of humanoids concurrently simulated in the environment. The results are obtained on an NVIDIA Tesla V100 GPU. The GPU simulation frame time does not include the time needed for calculating rewards, applying actions, and transferring RL-related data back and forth from the Python client, as that overhead varies with the implementation of the RL framework. In Figure 2 we report two plots: the total number of simulation frames generated per second, calculated by multiplying the number of agents by the frames per second of the entire simulation, and the GPU frame time per agent. We note that both values converge as the number of concurrently simulated humanoids grows, where total throughput peaks and the mean frame time per agent reaches its minimum.
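For concreteness, the two quantities plotted in Figure 2 can be computed as in the short Python sketch below; the numbers in the example are illustrative only, not measurements from the paper.

    def throughput_stats(num_agents, scene_step_time_s):
        # scene_step_time_s: wall-clock time of one simulation step of the whole
        # scene, i.e. all agents advanced together on the GPU.
        scene_steps_per_s = 1.0 / scene_step_time_s
        total_agent_frames_per_s = num_agents * scene_steps_per_s
        frame_time_per_agent_ms = 1000.0 * scene_step_time_s / num_agents
        return total_agent_frames_per_s, frame_time_per_agent_ms

    # Example: 512 humanoids stepped in 10 ms -> 51,200 agent-frames/s and
    # roughly 0.02 ms of GPU time per agent per frame.
    print(throughput_stats(512, 0.010))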

We observe in our learning experiments that although there is an agent count at which the total number of simulation frames generated per second peaks, this is not necessarily the optimal number of agents for minimizing training time. The number of agents used affects how the learning algorithm and policy explore the state and action spaces, and the agent count that optimizes learning speed depends on the specific task.

Figure 2: GPU Simulation Speed. We measure the speed of GPU simulation for the humanoid task as we increase the number of concurrently simulated humanoids. The total number of simulation frames per second peaks once hundreds of humanoids are simulated concurrently, which is also where the mean GPU simulation frame time per agent is lowest. The simulation time grows much more slowly than the number of humanoids because of the constant CUDA kernel launch overhead, which dominates the total step time when only a few humanoids are simulated.

4 Experiments

To evaluate the performance of our GPU-accelerated physics simulator for RL, we perform a set of learning experiments on various locomotion tasks. We first measure the learning performance on a single GPU as we vary the number of parallel agents simulated on that GPU. Then we measure how well learning performance scales with multiple GPUs and nodes as we fix the number of agents simulated per GPU.

4.1 Tasks

We perform learning experiments on the following 4 tasks, 3 of which are shown in Figure 1.

Ant. We use the ant model commonly found in MuJoCo locomotion environments. It has 4 legs and 8 controllable joints that form the action space. The goal of the Ant task is to have the agent move forward as fast as possible. Ant is relatively easy to learn because the initial state of the task is stable, which makes it a useful task for sanity checks and debugging.

Humanoid. Like [20, 26], we use the humanoid model with 28 degrees of freedom and 21 actuated joints. This humanoid is more complex than the 24-DoF humanoid model with 17 actuated joints used in [3, 9, 12, 13]. The 28-DoF humanoid has 4 additional joints to control the ankle angles of the robot, whereas the 24-DoF one has ball-shaped feet that cannot be rotated. We choose the more complex humanoid for benchmarking, because the additional ankle joints allow the humanoid to learn more realistic running and turning motions. The goal of the Humanoid task is to have the agent move forward as fast as possible. This task is often used in the literature for benchmarking locomotion learning performance.

The observations for Ant and Humanoid include the agent's height, velocity, and joint angles, among others. See Appendix B for a detailed comparison of the observations used in our work and in previous works.

Humanoid Flagrun Harder (HFH). In the HFH task, a humanoid must learn not only to run forward but also to turn and run toward different target locations. The agent must also learn how to recover from falling by standing up. This is a much more challenging task than vanilla Humanoid, and it takes more time to train. The action space of this task is the same as that of the Humanoid task. We observe that training a humanoid for HFH leads to more robust and symmetric walking gaits; e.g., humanoids can maintain their stand-up and running skills even with as much as 50% higher or lower gravity.

Humanoid Flagrun Harder on Complex Terrain. In this task, the agent must learn to run and change directions on uneven terrain with static rectangular obstacles. The dimensions, locations, and orientations of the obstacles are randomly sampled from a uniform distribution. The action space of this task is the same as that of the Humanoid task. To help the humanoid navigate complex terrain and overcome obstacles, we augment the observation space with a 2D rectangular height map that follows the humanoid's center of mass. Similar to [20], our height map is denser near the humanoid.

For Ant and Humanoid, an episode terminates if the agent falls below a height threshold. For the two HFH tasks, we allow the agents to stay below the threshold for a grace period (160 frames) before terminating the episode, which enables the agents to learn to stand up and recover from a fall. The Flagrun targets for the HFH tasks change every 200 frames to a random location within a fixed radius of the agent, or earlier if the humanoid comes within a small distance of the target. For all tasks, the maximum episode length for both training and evaluation is capped at a fixed number of frames.
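A minimal sketch of this per-agent termination and target-switching logic is given below. The 160-frame grace period and 200-frame target period come from the text above; the height and distance thresholds are placeholders, since their exact values are not listed here.

    def update_episode_state(height, frames_below, frames_since_target,
                             dist_to_target, is_flagrun,
                             height_threshold, fall_grace_frames=160,
                             target_period=200, reach_radius=None):
        # Returns (done, new_target_needed, frames_below).
        if height < height_threshold:
            frames_below += 1
        else:
            frames_below = 0

        if is_flagrun:
            done = frames_below > fall_grace_frames   # grace period to stand back up
        else:
            done = frames_below > 0                   # terminate as soon as the agent falls

        reached = reach_radius is not None and dist_to_target < reach_radius
        new_target_needed = is_flagrun and (frames_since_target >= target_period or reached)
        return done, new_target_needed, frames_below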

Rewards. Similar to previous works, the reward functions for all tasks reward the current speed toward the desired target and penalize excessive torques applied at the joints. Our reward functions, however, are not directly comparable to those of previous works, owing to a smaller alive bonus and the addition of other terms that we empirically found to lead to more natural, symmetric running gaits. We note that the locomotion rewards used in MuJoCo and Bullet also differ from each other, arising from the vagaries of implementation details such as the solver and the number of iterations used for optimisation. See Appendices C and D for the exact rewards used and a comparison of our rewards with those used in previous work.

A common reward threshold for solving the humanoid running forward task is 6000 [9, 12, 13], but this threshold is for the 24-DoF humanoid, and to our knowledge there is no widely used reward threshold for the 28-DoF humanoid or for the HFH tasks. For the Humanoid task, we chose a reward threshold of 3000 for walking and a higher threshold for running; the running threshold corresponds to a forward speed about the same as that of Roboschool's example 24-DoF humanoid running policy. For Ant, we use one reward threshold for running and a higher one as the final reward. For the HFH task, we use 2500 as an intermediate reward threshold, around which the agents first learn to stand up from the ground, and 4000 as the final reward.

Initial State and Perturbations. Unlike parallel simulation on CPUs, which typically runs one single-agent environment per CPU core, we simulate all agents in one environment on the GPU. The initial positions, velocities, joint angles, and joint angular velocities of the agents are perturbed slightly at initialization. We also periodically exert random forces of a few Newtons on the agents in all 4 tasks. These external perturbations help the agents learn more robust policies. We also enable inter-agent collisions for the HFH tasks. The initial spacing of the humanoids affects the collision frequency, and the occasional collisions push the agents into states where they must learn to balance and recover from falls, leading to more robust policies.

4.2 Learning Algorithm

We use a variation of Proximal Policy Optimization (PPO) [3] for all our experiments on the benchmarked locomotion tasks. We adapted the open source OpenAI Baselines (https://github.com/openai/baselines) implementation of PPO to work with our simulation environment, where a single environment step simulates multiple agents concurrently. Similar to [3, 20], we also use an adaptive learning rate based on the KL divergence between the current and previous policies. Additionally, for stability we whiten the current observations by maintaining online statistics of the mean and standard deviation over the history of past observations.
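A minimal sketch of such online observation whitening, assuming the standard running mean and variance update (this is a simplified stand-in, not the exact Baselines filter):

    import numpy as np

    class RunningObsNormalizer:
        def __init__(self, obs_dim, eps=1e-8):
            self.mean = np.zeros(obs_dim)
            self.var = np.ones(obs_dim)
            self.count = eps
            self.eps = eps

        def update(self, batch):
            # batch: (num_agents, obs_dim) observations from one simulation step.
            b_mean, b_var, b_count = batch.mean(0), batch.var(0), batch.shape[0]
            delta = b_mean - self.mean
            tot = self.count + b_count
            self.mean = self.mean + delta * b_count / tot
            m_a = self.var * self.count
            m_b = b_var * b_count
            self.var = (m_a + m_b + delta ** 2 * self.count * b_count / tot) / tot
            self.count = tot

        def normalize(self, obs):
            return (obs - self.mean) / np.sqrt(self.var + self.eps)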

Our policy and value functions share the same feed-forward network architecture. As in the Baselines implementation of PPO, we use scaled exponential linear units (SELU) [27] as the activation function. SELU implicitly encourages normalized activations, and in our hyperparameter search, policies with SELU learned faster and achieved higher rewards than those with ReLU or tanh.

For our multi-GPU benchmarks, we implemented two variants of distributed PPO. Our first variant uses multiple GPUs to generate rollouts but trains the policy and value function on a single GPU; this mirrors common CPU-based implementations, where each CPU worker generates rollouts and one master CPU trains. Our second variant is similar to Distributed PPO [20]: it is a synchronous algorithm in which, at every iteration, gradients from all workers are applied to a central policy and the worker policies are then updated with the new policy parameters. This second variant is what we use for all our experiments, as we found that the first variant did not scale to multiple nodes. We also experimented with averaging parameters instead of gradients, but found that it performed significantly worse. We use Horovod (https://github.com/uber/horovod) for distributed simulation and training across multiple GPUs and nodes, where each GPU runs its own simulation and training instance in TensorFlow. The weight parameters are updated by averaging gradients across the GPU workers using efficient allreduce from NCCL (https://developer.nvidia.com/nccl) and broadcasting from a master GPU. Importantly, since the env.step() function is implemented directly on the GPU for multiple agents, we are able to leverage the parallelism offered by GPUs to obtain observation-action pairs concurrently for hundreds to thousands of agents.
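The distributed setup described above maps onto Horovod's standard TensorFlow 1.x pattern roughly as in the sketch below. This is an illustrative sketch, not the authors' code; the toy loss stands in for the PPO surrogate objective built from each worker's own rollouts.

    import tensorflow as tf
    import horovod.tensorflow as hvd

    hvd.init()

    # Pin each worker process to one GPU; each GPU runs its own simulation
    # and training instance.
    config = tf.ConfigProto()
    config.gpu_options.visible_device_list = str(hvd.local_rank())

    # Toy loss standing in for this worker's PPO surrogate objective.
    w = tf.Variable(tf.zeros([64]), name="policy_weights")
    loss = tf.reduce_sum(tf.square(w - 1.0))

    opt = tf.train.AdamOptimizer(1e-4)
    opt = hvd.DistributedOptimizer(opt)   # averages gradients across workers via allreduce
    train_op = opt.minimize(loss)

    # Broadcast the initial parameters from rank 0 so all workers start in sync.
    hooks = [hvd.BroadcastGlobalVariablesHook(0)]
    with tf.train.MonitoredTrainingSession(hooks=hooks, config=config) as sess:
        for _ in range(10):
            sess.run(train_op)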

4.3 Hardware

All experiments were done using NVIDIA Tesla V100 GPUs on NVIDIA’s internal compute cluster. Each single-GPU experiment uses 1 CPU core of a 20-Core Intel Xeon E5-2698 v4 processor running at 2.2 GHz. For multi-GPU experiments, we scale the number of CPU cores used to match the number of GPUs used.

4.4 Single-GPU Simulation and Training

We first performed hyperparameter grid-search on the number of frames used per PPO optimization iteration and the network architectures for training 1024 parallel agents on the Ant and Humanoid tasks. For both tasks, we found the best policy and value network architectures to have 3 hidden layers decreasing in size. See Appendix E for the specific architectures and hyperparameters used.

Within the values we searched, the best setting for Ant and Humanoid multiplies a fixed per-agent rollout length by the 1024 agents to give the total number of frames per optimization iteration. This total is kept constant as we scale the number of parallel agents up and down: halving the number of agents doubles the frames collected per agent, and doubling the number of agents halves them. Keeping the frames per iteration constant helps us attribute differences in learning speed to improvements in simulation speed rather than to changes in the behavior of the learning algorithm. We note that for high agent counts, the small number of frames collected per agent still enables learning, because our tasks use dense rewards.
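The bookkeeping is simple; the sketch below makes the scaling rule explicit with hypothetical batch sizes (the paper's exact totals are not reproduced here).

    def frames_per_agent(total_frames_per_iter, num_agents):
        # Keep the total experience per PPO update constant as the agent count varies.
        assert total_frames_per_iter % num_agents == 0
        return total_frames_per_iter // num_agents

    # Hypothetical total of 32,768 frames per iteration:
    # 64 frames/agent at 512 agents, 32 frames/agent at 1024 agents.
    print(frames_per_agent(32768, 512), frames_per_agent(32768, 1024))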

We report the time needed to reach certain reward thresholds for the Ant, Humanoid, and HFH tasks as we vary the number of agents in Figure 3. Because we keep the amount of experience used per PPO update constant, we observe a trade-off between increasing the number of agents while collecting fewer frames per agent and decreasing the number of agents while collecting more frames per agent. The point of diminishing returns varies across tasks and reward thresholds.

(a) Ant (b) Humanoid (c) Humanoid Flagrun Harder

Figure 3: Single GPU Experiments. We show the reward curves and the wall time needed to reach certain reward thresholds when training the various tasks with increasing numbers of simulated agents. The numbers of agents we evaluated vary in powers of 2. The reward thresholds were chosen to mark significant behavior changes; for example, at around 2500 reward the Humanoid Flagrun Harder agents learn to stand up from sitting positions and begin walking. We keep the amount of experience used per PPO update iteration constant across evaluations by decreasing the frames collected per agent as the number of agents increases. Using 512 agents we were able to train the Humanoid agent to run in about 16 minutes. All experiments were run against the same set of seeds for consistent comparison.

We also note the short time needed to learn these tasks using GPU-accelerated simulation. We list the training times and resources used for the humanoid task in prior works in Table 1, all of which used CPU-based physics simulation. With one GPU and one CPU core, our Humanoid agents learned to run in less than 20 minutes while using 10 to 1000 times fewer CPU cores than previous works [13, 9].

Algorithm CPU Cores GPUs Time (mins)
Evolution Strategies [9] 1440 - 10
Augmented Random Search [13] 48 - 21
Distributed Prioritized Experience Replay [15] 32 1 240
Proximal Policy Optimization w/ GPU Simulation (Ours) 1 1 16
Table 1: Resources and Times for Training a Humanoid to Run. Prior works all used CPU-based physics simulations. In this table, we do not include the original PPO paper [3], which used 128 CPU cores for the humanoid task but did not report training time. We also do not include the Distributed PPO paper [20]: their humanoid training took more than 40 hours to converge, but they did not report the number of CPU cores used.

4.5 Multi-GPU Distributed Simulation and Training

We extend our method to distribute GPU simulations across multiple GPU workers to see how learning speed can be improved on the Humanoid, HFH, and HFH on Complex Terrain tasks. For these experiments, we run a simulation and training instance on each GPU and use Horovod for distributed gradient averaging. We also normalize the advantage estimates across all GPUs and distribute them back to each GPU worker at every iteration, ensuring that advantages across all GPUs share a consistent global mean and standard deviation. The number of agents simulated per GPU for Humanoid and HFH is 1024. We use a smaller number of 512 agents per GPU for HFH on Complex Terrain to keep memory usage and simulation speed reasonable, as the addition of the height map significantly increases the dimensionality of the observations. Results are reported in Figure 4. We observe only limited scaling for the Humanoid task, which hits diminishing returns after 4 GPUs. In the more complex tasks, however, we observe noticeable speed-ups. For the HFH task, the 1-GPU run reached a reward of 4000 in about 2 hours, while the 8-GPU run reached it in about 30 minutes and the 16-GPU run in about 15. For the HFH on Complex Terrain task, we observe more apparent scaling benefits from multi-GPU simulation and training. On average, the 16- and 32-GPU runs learn the task faster than the 2-, 4-, and 8-GPU runs, while the large overlap in standard deviations between the 8-GPU runs and the 16- and 32-GPU runs shows the diminishing returns of using more agents for learning.
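One way to implement the cross-GPU advantage normalization described above is to allreduce the local sums and sums of squares and normalize with the resulting global statistics. The sketch below uses mpi4py for illustration rather than the NCCL/Horovod path used in the paper; the function name is ours.

    import numpy as np
    from mpi4py import MPI

    def normalize_advantages_globally(adv, comm=MPI.COMM_WORLD):
        # adv: this worker's local advantage estimates (1-D numpy array).
        local = np.array([adv.sum(), (adv ** 2).sum(), adv.size], dtype=np.float64)
        total = np.empty_like(local)
        comm.Allreduce(local, total, op=MPI.SUM)   # sum over all GPU workers
        g_mean = total[0] / total[2]
        g_var = max(total[1] / total[2] - g_mean ** 2, 0.0)
        return (adv - g_mean) / (np.sqrt(g_var) + 1e-8)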

(a) Humanoid (b) Humanoid Flagrun Harder (HFH) (c) HFH Complex Terrain

Figure 4: Multi-GPU Simulation and Training. We show how our GPU-accelerated RL simulation can be scaled to simulation and training with multiple GPUs. The overlap of the standard deviations indicates that there is little scaling benefit for the Humanoid task; the benefit of multi-GPU simulation and training only becomes apparent for more complex tasks and with a greater difference in the number of GPUs used. All experiments were run against the same set of seeds for consistent comparison.

5 Conclusion and Future Work

In this work, we used an in-house GPU-accelerated physics simulator to concurrently simulate hundreds to thousands of robots for Deep RL of continuous-control locomotion tasks. In contrast to prior works that trained locomotion tasks on CPU clusters, some using hundreds to thousands of CPU cores, we are able to train a humanoid to run in less than 20 minutes on a single machine with 1 GPU and 1 CPU core, making GPU-accelerated RL simulation a viable alternative to CPU-based simulation. Our RL simulation framework can also be scaled to multi-GPU and multi-node settings, and we observed that multi-GPU simulation and training yields greater learning speed improvements for more complex locomotion tasks. Given the recent successes of sim2real transfer learning, from grasping in clutter [28] and quadruped locomotion [29] to dexterous manipulation [30], all of which used Bullet to generate simulation data to train policies that worked in the real world, we believe our simulator can provide valuable speed-ups for similar applications in the future.

In future work, we plan to experiment with more complex humanoid environments by allowing the humanoid to actively control the orientation of the rays used to generate the height map. This may enable the humanoids to navigate dynamic obstacles and obstacles in mid-air. We also plan to use our simulator for manipulation tasks with robots such as the Fetch, Baxter, and YuMi. In this work, we have considered locomotion tasks with full state information. For many tasks in manipulation and navigation, however, training from vision data is preferred. For such tasks, we note the potential of zero-copy training: directly feeding simulation data generated by a GPU-based simulator, along with the task's states and rewards, into a deep learning framework without the data ever leaving the GPU. Zero-copy training eliminates the need to communicate data from the GPU to the CPU and can further improve training speed.
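As one possible realization of zero-copy training (an assumption for illustration, not what is implemented in this work), a simulation buffer already resident in GPU memory can be handed to a deep learning framework through DLPack without a host round trip, for example:

    import cupy as cp
    import torch
    from torch.utils.dlpack import from_dlpack

    # Stand-in for an observation buffer written by the simulator on the GPU.
    sim_obs_gpu = cp.random.standard_normal((1024, 52), dtype=cp.float32)

    # Wrap the same device memory as a torch tensor without copying to the host.
    obs_tensor = from_dlpack(sim_obs_gpu.toDlpack())
    assert obs_tensor.is_cuda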

We thank Phil Rogers, Vikrama Ditya, Christopher Lamb, Nathan Luehr, David Addison, Hari Sundararajan, Sivakumar Arayandi Thottakara, Julie Bernauer and many others who manage the NVIDIA GPU infrastructure for all the kind help they provided in carrying out the experiments on the GPU clusters.

References

Appendix A Rewards vs Frames

We plot the reward vs. frames curves for the single-GPU experiments in Figure 5 and for the multi-GPU experiments in Figure 6. A zoomed-in version of each plot is shown on the second row. The difference in the total number of frames across different numbers of agents is due to the fact that we stop training after a fixed amount of wall-clock time.

(a) Ant (b) Humanoid (c) Humanoid Flagrun Harder

Figure 5: Reward vs Frames for Single GPU Experiments.

(a) Humanoid (b) Humanoid Flagrun Harder (HFH) (c) HFH Complex Terrain

Figure 6: Reward vs Frames for Multi-GPU Experiments.

Appendix B Comparison of Observations

Table 2 compares the different observations for the Humanoid running task used in MuJoCo, Roboschool, Control Suite, and our environments.

MuJoCo (https://github.com/openai/gym/wiki/Humanoid-V1)   Roboschool (https://github.com/openai/roboschool/blob/master/roboschool/gym_forward_walker.py)   Control Suite (https://github.com/deepmind/dm_control/blob/master/dm_control/suite/humanoid.py)   Ours
Root Body Height
Root Body Rotation
Root Body Velocity -
Root Body Angular Velocity - - -
Root Body Heading Direction - -
Joint Angles
Joint Angle Velocities -
Positions of Hands and Feet - - -
Velocities of Bodies - -
Angular Velocities of Bodies - - -
Inertia Tensor and Mass - - -
Actuator Forces - -
External Forces on Bodies - - -
Whether Feet in Contact w/ Ground - -
Total
Table 2: Observation and dimensionality comparison for the Humanoid running task. The Root Body Heading Direction for Roboschool and for our agents is represented by the sine and cosine of the angular difference between the agent's heading and the direction to the target (which for the humanoid running task is simply forward from the starting location). The Root Body Rotation is represented as a quaternion for MuJoCo, as roll and pitch angles for Roboschool and for us, and as the z-projection of the rotation matrix for Control Suite.

Appendix C Rewards

The reward function we used for all four tasks combines the following terms. There is an alive bonus, which differs between Ant and the humanoids, and a bonus proportional to the speed toward the current target. A heading term depends on the angle from the robot's current heading to the direction of the target location, and a standing term depends on the angle of the robot's torso from the vertical axis (this angle is 0 when the humanoid stands upright). The cost terms involve the vector of motor torques applied at each joint together with the current action and the maximum torque that can be applied, the number of joints at their joint limits, and the number of feet in contact with the ground.
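As an illustrative sketch only, using notation we introduce for exposition rather than the paper's own symbols, a reward composed of the terms described above has the general form

    r_t \;=\; b_{\mathrm{alive}} \;+\; w_v\, v_{\mathrm{target}} \;+\; w_h\, f(\theta) \;+\; w_s\, g(\phi) \;-\; c_e\, E(\mathbf{u}, \mathbf{a}, u_{\max}) \;-\; c_l\, N_{\mathrm{limits}} \;-\; c_f\, N_{\mathrm{feet}},

where theta is the heading-to-target angle, phi the torso angle from the vertical, v_target the speed toward the target, u the joint torques, a the action, u_max the maximum torque, N_limits the number of joints at their limits, and N_feet the number of feet in contact with the ground. The functional forms f, g, and E and all coefficients are left unspecified here; Table 3 lists which of the corresponding terms appear in each environment.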

Appendix D Comparison of Reward Functions

Table 3 compares the coefficients for the summands in the reward function for the Humanoid running task used in MuJoCo, Roboschool, and our environments.

We don’t list the reward function coefficients for the humanoid walking task in Deepmind Control Suite, as it is very different in structure - it is the product of the running speed with two coefficients that depend on the magnitude of the controls and how upright the humanoid is. See here 888https://github.com/deepmind/dm_control/blob/master/dm_control/suite/humanoid.py for details.

MuJoCo (https://github.com/openai/gym/blob/master/gym/envs/mujoco/humanoid.py)   Roboschool (https://github.com/openai/roboschool/blob/master/roboschool/gym_forward_walker.py)   Ours
Alive Bonus
Running Speed Bonus
Heading Bonus - -
Standing Bonus - -
Control Cost
Electricity (Torque) Cost -
Joints at Limits Cost -
Feet Contact Cost -
External Forces Cost - -
Table 3: Reward function comparison for the Humanoid running task. Some of the coefficients are expressed in terms of the dimension of the controls. The External Forces Cost for MuJoCo multiplies the sum of squares of the external forces on all bodies.

Appendix E Hyperparameters

In Table 4 we give the hyperparameters used during training for the Ant, Humanoid, and Humanoid Flagrun Harder tasks. The timesteps per batch are given relative to a reference number of parallel simulated agents and are scaled across different experiments so that the total batch size stays constant; for example, an experiment with half as many agents uses twice as many timesteps per batch. The desired KL specifies the target KL value used to adapt the Adam step size at each iteration.

The value and policy networks in our PPO algorithm have the same feed-forward architectures, which are specified in a list format where the i-th value gives the size of the i-th hidden layer.

Hyperparameter Ant Humanoid HFH HFH Terrain
Timesteps per Batch 32 32 64 64

Num. Epochs

20 20 10 20
Minibatch Size per Agent 16 32 32 8
Desired KL 0.01 0.02 0.01 0.01
Neural Net Architecture [128, 64, 32] [256, 128, 64] [512, 256, 128] See Caption
Table 4: Hyperparameters used in the different tasks. For the HFH Terrain neural network, we pass the height map through two fully-connected (FC) layers and the other observations through one FC layer, then pass their concatenated outputs through two more FC layers before outputting the controls.
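A sketch of the two-branch HFH Terrain network described in the caption is shown below in TensorFlow 1.x style. The layer sizes and input dimensions are placeholders of our own choosing, since the exact values are not listed here; the 21 outputs correspond to the humanoid's actuated joints.

    import tensorflow as tf

    def hfh_terrain_policy(height_map, other_obs, act_dim=21):
        # Height-map branch (two FC layers) and proprioceptive branch (one FC layer),
        # concatenated and passed through a shared trunk. Layer sizes are placeholders.
        h = tf.layers.dense(height_map, 256, activation=tf.nn.selu)
        h = tf.layers.dense(h, 128, activation=tf.nn.selu)
        o = tf.layers.dense(other_obs, 256, activation=tf.nn.selu)
        x = tf.concat([h, o], axis=-1)
        x = tf.layers.dense(x, 256, activation=tf.nn.selu)
        x = tf.layers.dense(x, 128, activation=tf.nn.selu)
        return tf.layers.dense(x, act_dim)   # mean of the Gaussian policy

    # Example with hypothetical input sizes: a flattened 16x16 height map
    # and 52 other observation features.
    height_map_in = tf.placeholder(tf.float32, [None, 256])
    other_obs_in = tf.placeholder(tf.float32, [None, 52])
    controls = hfh_terrain_policy(height_map_in, other_obs_in)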

Appendix F MuJoCo Simulation Times

In Figure 7 we report plots similar to those in Figure 2, evaluating the total frames per second and the frame time per agent in MuJoCo 1.5 as the number of concurrently simulated humanoids in the scene increases. We measured MuJoCo's single-core CPU simulation time in a setup similar to the one used for our GPU simulation time: at every time step, random actions are given to the 28-DoF humanoids lying on the floor. The CPU used is an Intel Core i9-7960X running at 2.80 GHz. At the time of writing, MuJoCo 2.0 has just been released, but it is not yet supported by mujoco_py. We plot projected curves for MuJoCo 2.0 by reducing the simulation time of MuJoCo 1.5 by the speed-up reported at https://www.roboti.us/index.html#mujoco200.

We note that while MuJoCo performs well on one core with one humanoid, the simulation time increases as the number of humanoids in the scene increases. This is in contrast to our GPU simulation, where adding more concurrent agents, and with them more contacts and large-scale inter-agent interactions, still reduces the simulation time per agent.
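For reference, a single-core timing benchmark of this kind can be set up with mujoco_py roughly as follows. The scene XML path and step count are placeholders, and the snippet is a sketch of the measurement procedure, not the exact script used for Figure 7.

    import time
    import numpy as np
    from mujoco_py import MjSim, load_model_from_path

    # Hypothetical MJCF file containing several humanoids lying on a plane.
    model = load_model_from_path("humanoids_scene.xml")
    sim = MjSim(model)

    n_steps = 1000
    start = time.time()
    for _ in range(n_steps):
        # Apply random actions at every time step, as in the measurement above.
        sim.data.ctrl[:] = np.random.uniform(-1.0, 1.0, size=sim.data.ctrl.shape)
        sim.step()
    elapsed = time.time() - start
    print("mean frame time (ms):", 1000.0 * elapsed / n_steps)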

Figure 7: MuJoCo Simulation Speed.