DeepMind's software stack for physics-based simulation and Reinforcement Learning environments, using MuJoCo.
The DeepMind Control Suite is a set of continuous control tasks with a standardised structure and interpretable rewards, intended to serve as performance benchmarks for reinforcement learning agents. The tasks are written in Python and powered by the MuJoCo physics engine, making them easy to use and modify. We include benchmarks for several learning algorithms. The Control Suite is publicly available at https://www.github.com/deepmind/dm_control . A video summary of all tasks is available at http://youtu.be/rAai4QzcYbs .
Controlling the physical world is an integral part and arguably a prerequisite of general intelligence. Indeed, the only known example of general-purpose intelligence emerged in primates which had been manipulating the world for millions of years.
Physical control tasks share many common properties and it is sensible to consider them as a distinct class of behavioural problems. Unlike board games, language and other symbolic domains, physical tasks are fundamentally continuous in state, time and action. Their dynamics are subject to second-order equations of motion, implying that the underlying state is composed of position-like and velocity-like variables, while state derivatives are acceleration-like. Sensory signals (i.e. observations) usually carry meaningful physical units and vary over corresponding timescales.
This decade has seen rapid progress in the application of Reinforcement Learning (RL) techniques to difficult problem domains such as video games (Mnih2015). The Arcade Learning Environment (ALE, bellemare2012arcade) was a vital facilitator of these developments, providing a set of standard benchmarks for evaluating and comparing learning algorithms. The DeepMind Control Suite provides a similar set of standard benchmarks for continuous control problems.
The OpenAI Gym (brockman2016gym) currently includes a set of continuous control domains that has become the de-facto benchmark in continuous RL (duan2016benchmarking; 2017deepRLmatters). The Control Suite is also a set of tasks for benchmarking continuous RL algorithms, with a few notable differences. We focus exclusively on continuous control, e.g. separating observations with similar units (position, velocity, force etc.) rather than concatenating them into one vector. Our unified reward structure (see below) offers interpretable learning curves and aggregated suite-wide performance measures. Furthermore, we emphasise high-quality, well-documented code using uniform design patterns, offering a readable, transparent and easily extensible codebase. Finally, the Control Suite has equivalent domains to all those in the Gym while adding many more (with the notable exception of Philipp Moritz's "ant" quadruped, which we intend to replace soon; see Future Work).
In Section 2 we explain the general structure of the Control Suite and in Section 3 we describe each domain in detail. In Sections 4 and 5 we document the high and low-level Python APIs, respectively. Section 6 is devoted to our benchmarking results. We then conclude and provide a roadmap for future development.
The DeepMind Control Suite is a set of stable, well-tested continuous control tasks that are easy to use and modify. Tasks are written in Python and physical models are defined using MJCF. Standardised action, observation and reward structures make benchmarking simple and learning curves easy to interpret.
Verification in this context means making sure that the physics simulation is stable and that the task is solvable:
Simulated physics can easily destabilise and diverge, mostly due to errors introduced by time discretisation. Smaller time-steps are more stable, but require more computation per unit of simulation time, so the choice of time-step is always a trade-off between stability and speed (erez2015simulation). What's more, learning agents are adept at discovering and exploiting instabilities. (This phenomenon, sometimes known as Sims' Law, was first articulated in sims1994evolving: "Any bugs that allow energy leaks from non-conservation, or even round-off errors, will inevitably be discovered and exploited".)
It is surprisingly easy to write tasks that are much easier or harder than intended, that are impossible to solve, or that can be solved by very different strategies than expected (i.e. "cheats"). To prevent these situations, the Atari™ games that make up ALE were extensively tested over more than 10 man-years (Marc Bellemare, personal communication). However, continuous control domains cannot be solved by humans, so a different approach must be taken.
In order to tackle both of these challenges, we ran a variety of learning agents (e.g. lillicrap2015continuous; mnih2016asynchronous) against all tasks, and iterated on each task's design until we were satisfied that the physics was stable and non-exploitable, and that the task was solved correctly by at least one agent. Tasks that were solvable by some learning agent were collated into the benchmarking set; tasks that were not solved by any learning agent are in the extra set of tasks.
A continuous Markov Decision Process (MDP) is given by a set of states S, a set of actions A, a dynamics (transition) function f(s, a), an observation function o(s, a) and a scalar reward function r(s, a).
The state s is a vector of real numbers, with the exception of spatial orientations, which are represented by unit quaternions. States are initialised in some subset of initial states S0 by the begin_episode() method. To avoid memorised "rote" solutions, S0 is never a single state.
With the exception of the LQR domain (see below), the action vector is in the unit box, a ∈ [-1, 1]^dim(A).
While the state notionally evolves according to a continuous ordinary differential equation ds/dt = f(s, a), in practice temporal integration is discrete, with some fixed, finite time-step h. (Most domains use MuJoCo's default semi-implicit Euler integrator; a few which have smooth, nearly energy-conserving dynamics use 4th-order Runge-Kutta.)
The observation function o(s, a) describes the observations available to the learning agent. With the exception of point-mass:hard (see below), all tasks are strongly observable, i.e. the state can be recovered from a single observation. Observation features which depend only on the state (position and velocity) are functions of the current state. Features which are also dependent on controls (e.g. touch sensor readings) are functions of the previous transition. Observations are implemented as a Python OrderedDict.
With the exception of the LQR domain, rewards in the Control Suite are in the unit interval, r(s, a) ∈ [0, 1]. Some tasks have "sparse" rewards, i.e. r(s, a) ∈ {0, 1}. This structure is facilitated by the tolerance() function, see Figure 2. Since terms produced by tolerance() are in the unit interval, both averaging and multiplication operations maintain that property, facilitating cost design.
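dm_control's tolerance() (in dm_control.utils.rewards) supports several sigmoid shapes and options; the following self-contained sketch captures only the core idea, assuming the default Gaussian shaping with the value at the margin fixed to 0.1 (this particular parameterisation is an assumption mirroring the library default, not the library code itself):

```python
import numpy as np

def tolerance(x, bounds=(0.0, 0.0), margin=0.0):
    """Minimal sketch of a tolerance-style reward in the unit interval.

    Returns 1.0 when x is inside `bounds`. Outside, the reward decays
    with a Gaussian tail of width `margin`, reaching 0.1 at a distance
    of exactly `margin`. With margin=0 the reward is sparse (0 or 1).
    """
    x = np.asarray(x, dtype=float)
    lower, upper = bounds
    in_bounds = np.logical_and(lower <= x, x <= upper)
    if margin == 0:
        return np.where(in_bounds, 1.0, 0.0)
    # Distance outside the bounds, measured in units of `margin`.
    d = np.where(x < lower, lower - x,
                 np.where(x > upper, x - upper, 0.0)) / margin
    scale = np.sqrt(-2.0 * np.log(0.1))  # so that reward at distance `margin` is 0.1
    return np.where(in_bounds, 1.0, np.exp(-0.5 * (d * scale) ** 2))
```

Since each such term stays in [0, 1], several terms can be averaged or multiplied to compose a reward that remains in the unit interval.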
Control problems are classified as finite-horizon, first-exit and infinite-horizon (bertsekas1995dynamic). Control Suite tasks have no terminal states or time limit and are therefore of the infinite-horizon variety. Notionally the objective is the continuous-time infinite-horizon average return lim_{T→∞} T⁻¹ ∫₀ᵀ r(s_t, a_t) dt, but in practice all of our agents internally use the discounted formulation ∫₀^∞ e^{-t/τ} r(s_t, a_t) dt or, in discrete time, Σᵢ γⁱ r(sᵢ, aᵢ), where γ = e^{-h/τ} is the discount factor. In the limit τ → ∞ (equivalently γ → 1), the policies of the discounted-horizon and average-return formulations are identical.
While agents are expected to optimise for infinite-horizon returns, these are difficult to measure. As a proxy we use fixed-length episodes of 1000 time steps. Since all reward functions are designed so that r ≈ 1 at or near a goal state, learning curves measuring total returns all have the same y-axis limits of [0, 1000], making them easier to interpret.
MuJoCo (todorov2012mujoco) is a fast, minimal-coordinate, continuous-time physics engine. It compares favourably to other popular engines (erez2015simulation), especially for articulated, low-to-medium degree-of-freedom (DoF) models in contact with other bodies. The convenient MJCF definition format and reconfigurable computation pipeline have made MuJoCo popular for robotics and reinforcement learning research (e.g. schulman2015trust), along with the MultiBody branch of the Bullet physics engine.
A domain refers to a physical model, while a task refers to an instance of that model with a particular MDP structure. For example, the difference between the swingup and balance tasks of the cartpole domain is whether the pole is initialised pointing downwards or upwards, respectively. In some cases, e.g. when the model is procedurally generated, different tasks might have different physical properties. Tasks in the Control Suite are collated into tuples according to predefined tags. In particular, tasks used for benchmarking are in the BENCHMARKING tuple, while those not used for benchmarking (because they are particularly difficult, or because they don't conform to the standard structure) are in the EXTRA tuple. All suite tasks are accessible via the ALL_TASKS tuple. In the domain descriptions below, names are followed by three integers specifying the dimensions of the state, control and observation spaces, i.e. (dim(S), dim(A), dim(O)).
We enable humanoid_CMU to be used for imitation learning, as in merel2017learning, by providing tools for parsing, conversion and playback of human motion capture data from the CMU motion capture database (cmu_mocap). The convert() function in the parse_amc module loads an AMC data file and returns a sequence of configurations for the humanoid_CMU model. The example script CMU_mocap_demo.py uses this function to generate a video.
In this section we describe the following Python code:
The environment.Base class that defines a generic RL interface.
The suite module that contains the domains and tasks defined in Section 3.
The underlying MuJoCo bindings and the mujoco.Physics class that provides most of the functionality needed to interact with an instantiated MJCF model.
The class environment.Base, found within the dm_control.rl.environment module, defines the following abstract methods:
action_spec() and observation_spec() describe the actions accepted and the observations returned by an Environment. For all the tasks in the suite, actions are given as a single NumPy array. action_spec() returns an ArraySpec, with attributes describing the shape, data type, and optional minimum and maximum bounds for the action arrays. Observations consist of an OrderedDict containing one or more NumPy arrays. observation_spec() returns an OrderedDict of ArraySpecs describing the shape and data type of each corresponding observation.
reset() and step() respectively start a new episode, and advance time given an action.
Starting an episode and running it to completion might look like
Both reset() and step() return a TimeStep namedtuple with fields [step_type, reward, discount, observation]:
step_type is an enum taking a value in [FIRST, MID, LAST]. The convenience methods first(), mid() and last() return boolean values indicating whether the TimeStep's type is of the respective value.
reward is a scalar float.
discount is a scalar float γ ∈ [0, 1].
observation is an OrderedDict of NumPy arrays matching the specification returned by observation_spec().
Whereas the step_type specifies whether or not the episode is terminating, it is the discount that determines the termination type. A terminal TimeStep with γ = 0 corresponds to a terminal state (i.e. one where the sum of future rewards is equal to the current reward), as in the first-exit or finite-horizon formulations. A terminal TimeStep with γ = 1 corresponds to the infinite-horizon formulation. In this case an agent interacting with the environment should treat the episode as if it could have continued indefinitely, even though the sequence of observations and rewards is truncated. All Control Suite tasks, with the exception of LQR, return γ = 1 at every step, including on termination. (The LQR task terminates with γ = 0 when the state is very close to 0, which is a proxy for the infinite exponential convergence of stabilised linear systems.)
To load an environment representing a task from the suite, use suite.load():
Wrappers can be used to modify the behaviour of control environments:
By default, Control Suite environments return low-dimensional feature observations. The pixel.Wrapper adds or replaces these with images.
While the environment.Base class is specific to the Reinforcement Learning scenario, the underlying bindings and mujoco.Physics class provide a general-purpose wrapper of the MuJoCo engine. We use Python’s ctypes library to bind to MuJoCo structs, enums and functions.
The bindings provide easy access to all MuJoCo library functions, automatically converting NumPy arrays to data pointers where appropriate.
The Physics class encapsulates MuJoCo’s most commonly used functionality.
Loading an MJCF model
The Physics.from_xml_string() method loads an MJCF model and returns a Physics instance:
The Physics.render() method outputs a numpy array of pixel values.
Optional arguments to render can be used to specify the resolution, camera ID and whether to render RGB or depth images.
Physics.model and Physics.data
MuJoCo's mjModel and mjData structs, describing static and dynamic simulation parameters, can be accessed via the model and data properties of Physics. They contain NumPy arrays that are direct, writeable views onto MuJoCo's internal memory. Because the memory is owned by MuJoCo, an attempt to rebind an entire array (rather than write into it) will fail:
Setting the state with reset_context()
When setting the MuJoCo state, derived quantities like global positions or sensor measurements are not updated automatically. In order to facilitate synchronisation of derived quantities, we provide the Physics.reset_context() context manager:
Running the simulation
The physics.step() method is used to advance the simulation. Note that this method does not directly call MuJoCo's mj_step() function. At the end of an mj_step the state is updated, but the intermediate quantities stored in mjData were computed with respect to the previous state. To keep these derived quantities as closely synchronised with the current simulation state as possible, we use the fact that MuJoCo partitions mj_step into two parts: mj_step1, which depends only on the state, and mj_step2, which also depends on the control. Our physics.step first executes mj_step2 (assuming mj_step1 has already been called), and then calls mj_step1, beginning the next step. (In the case of Runge-Kutta integration, we simply conclude each RK4 step with an mj_step1.) The upshot is that quantities that depend only on position and velocity (e.g. camera pixels) are synchronised with the current state, while quantities that depend on force/acceleration (e.g. touch sensors) are with respect to the previous transition.
It is often more convenient and less error-prone to refer to elements in the simulation by name rather than by index. Physics.named.model and Physics.named.data provide array-like containers that provide convenient named views:
These containers can be indexed by name for both reading and writing, and support most forms of NumPy indexing:
Note that in the example above we use a joint name to index into the generalised position array qpos. Indexing into a multi-DoF ball or free joint would output the appropriate slice.
We provide baselines for two commonly employed deep reinforcement learning algorithms, A3C (williams1991function; mnih2016asynchronous) and DDPG (lillicrap2015continuous), as well as the recently introduced D4PG (d4pg). We refer to the relevant papers for algorithm motivation and details, and here provide only hyperparameter, network architecture and training configuration information (see the relevant sections below).
We study both the case of learning with state-derived features as observations and learning from raw-pixel inputs, for all the tasks in the Control Suite. It is of course possible to consider control via combined state and pixel features, but we do not study this case here. We present results for both final performance and learning curves that demonstrate aspects of data efficiency and stability of training.
Establishing baselines for reinforcement learning problems and algorithms is notoriously difficult (islam2017reproducibility; 2017deepRLmatters). Though we describe results for well-functioning implementations of the algorithms we present, it may be possible to perform better on these tasks with the same algorithms. For a given algorithm we ran experiments with a similar network architecture, set of hyperparameters, and training configuration as described in the original papers. We ran a simple grid search for each algorithm to find a well performing setting for each (see details for grid searches below). We used the same hyperparameters across all of the tasks (i.e. so that nothing is tuned per-task). Thus, it should be possible to improve performance on a given task by tuning parameters with respect to performance for that specific task. For these reasons, the results are not presented as upper bounds for performance with these algorithms, but rather as a starting point for comparison. It is also worth noting that we have not made a concerted effort to maximise data efficiency, for example by making many mini-batch updates using the replay buffer per step in the environment, as in popov2017data.
The following code block demonstrates how to load a single task from the benchmark suite, run a single episode with a random agent, and compute the return as we do for the results reported here. Note that we run each environment for 1000 time steps and sum the rewards provided by the environment after each call to step. Thus, the maximum possible score for any task is 1000. For many tasks the practical maximum is significantly less than 1000, since it may take many steps before it is possible to drive the system into a state that gives a full reward of 1.0 at each time step.
In the state-feature case we ran 15 different seeds for each task with A3C and DDPG; for results with D4PG, which was generally found to be more stable, we ran 5 seeds. In the raw-pixel case we also ran 5 different seeds for each task. The seed sets the network weight initialisation for the associated run. In all cases, initial network weights were sampled using standard TensorFlow initialisers. In the figures showing performance on individual tasks (Figures 4-7), the lines denote the median performance and the shaded regions denote the 5th and 95th percentiles across seeds. In the tables showing performance for individual tasks (Tables 1 & 2) we report means and standard errors across seeds.
As well as studying performance on individual tasks, we examined the performance of algorithms across all tasks by plotting a simple aggregate measure. Figure 3 shows the mean performance over environment steps and wallclock time, for both state features and raw pixels. These measures are of particular interest: they offer a view into the generality of a reinforcement learning algorithm. In this aggregate view, it is clear that D4PG is the best-performing agent on all metrics, with the exception that DDPG is more data efficient early in training. It is worth noting that the data efficiency of D4PG could be improved over DDPG by simply reducing the number of actor threads (experiments not shown here), since with 32 actors D4PG is somewhat wasteful of environment data (with the benefit of being more efficient in terms of wall-clock time).
While we have made a concerted effort to ensure reproducible benchmarks, it is worth noting that there remain uncontrolled aspects that introduce variance into the evaluations. For example, some tasks have a randomly placed target or a random initialisation of the model, and the sequence of these is not fixed across runs. Thus, each learning run will see a different sequence of episodes, which will lead to variance in performance. This might be fixed by introducing a fixed sequence of episode initialisations, but that is not in any case a practical solution for the common case of parallelised training, so our benchmarks simply reflect variability in the episode initialisation sequence.
mnih2016asynchronous proposed a version of the Advantage Actor-Critic (A2C; williams1991function) that could be trained asynchronously (A3C). Here we report results for A3C trained with 32 workers per task. The network consisted of 2 MLP layers shared between actor and critic, with 256 units in the first hidden layer. The grid search explored: learning rates in [1e-2, 1e-3, 1e-4, 1e-5, 3e-5, 4e-5, 5e-5]; the unroll length; the activation functions; the number of units in the second hidden layer; and whether to anneal the learning rate. The advantage baseline was computed using a linear layer after the second hidden layer. Actions were sampled from a multivariate Gaussian with diagonal covariance, parameterised by output vectors μ and σ². The logarithm of σ² was computed using a sigmoid activation (which was found to be more stable than the function used in the original A3C manuscript), while μ was computed from a hyperbolic tangent, both stemming from the second MLP layer. The RMSProp optimiser was used with fixed decay and damping factors, and a learning rate annealed throughout training using a linear schedule, with no gradient clipping. An entropy regularisation cost was added to the policy loss.
lillicrap2015continuous presented a Deep Deterministic Policy Gradients (DDPG) agent that performed well on an early version of the Control Suite. Here we present performance for a straightforward single actor/learner implementation of the DDPG algorithm. Both actor and critic networks were MLPs with ReLU nonlinearities, each with two hidden layers. The action vector was passed through a linear layer and summed with the activations of the second critic layer in order to compute Q-values. The grid search explored: discount factors; learning rates (fixed to be the same for both networks); the damping and spread parameters of the Ornstein-Uhlenbeck exploration process; and hard (swap at intervals of 100 steps) versus soft target updates. For the results shown here, the two networks were trained with independent Adam optimisers (kingma2014adam) sharing the same learning rate, with gradients clipped for the actor network. As in the original paper, we used a target network with soft updates and an Ornstein-Uhlenbeck process to add exploration noise that is correlated in time, with similar parameters, except for a slightly larger spread. Training used a replay buffer and a fixed minibatch size.
The Distributed Distributional Deep Deterministic Policy Gradients algorithm (d4pg) extends regular DDPG with the following features. First, the critic value function is modelled as a categorical distribution (bellemare2017distributional), using 101 categories spaced evenly across a fixed value range. Second, acting and learning are decoupled and deployed on separate machines using the Ape-X architecture described in (apex). We used 32 CPU-based actors and a single GPU-based learner for benchmarking. D4PG additionally applies n-step returns, and non-uniform replay sampling (schaul2015prioritized) and eviction strategies using a sample-based distributional KL loss (see (apex) and (d4pg) for details). D4PG hyperparameters were the same as those used for DDPG, with the exceptions that (1) hard target network updates are applied every 100 steps, and (2) exploration noise is sampled from a Gaussian distribution with a fixed standard deviation that varies across the actors.
Due to the different parallelisation architectures, the evaluation protocol for each agent was slightly different: DDPG was evaluated for 10 episodes every 100000 steps (with no exploration noise), while A3C was trained with 32 workers and concurrently evaluated with another worker that updated its parameters every 3 episodes, which produced intervals of 96000 steps per update on average. The plots in Figure 4 and Figure 5 show the median and the 5th and 95th percentiles of the returns over training. Each agent was run 15 times per task using different seeds (except for D4PG, which was run 5 times), using only low-dimensional state-feature information. D4PG tends to achieve better results in nearly all of the tasks. Notably, it manages to reliably solve the manipulator:bring_ball task, and achieves good performance on the acrobot tasks. We found that part of the reason the agent's returns are not higher on the acrobot task is the time it takes for the pendulum to be swung up, so its performance is probably close to the upper bound.
The DeepMind Control Suite can be configured to produce observations containing any combination of state vectors and pixels generated from the provided cameras. We also benchmarked a variant of D4PG that learns directly from pixel-only input, using RGB frames from the camera. To process pixel input, D4PG is augmented with a simple 2-layer ConvNet, using layer normalisation (ba2016layer) and tanh() activations. We explored four variants of the algorithm. In the first, there were separate networks for the actor and Q-critic. In the other three, the actor and critic shared the convolutional layers, and the actor and critic each had a separate fully connected layer before their respective outputs. The best performance was obtained by sharing the convolutional kernel weights between the actor and critic networks, and only allowing these weights to be updated by the critic optimiser (i.e. truncating the policy gradients after the actor MLP). The pixel-input variant internally frame-stacks 3 consecutive observations as the ConvNet input.
Results for 1 day of running time are shown in Figure 7; we plot the results for the three shared-weights variants of D4PG, with gradients into the ConvNet from the actor (dotted green), the critic (dashed green), or both (solid green). For the sake of comparison, we plot D4PG performance for low-dimensional features (solid blue) from Figure 5. The variant that employed separate networks for actor and critic performed significantly worse than the best of these and is not shown. Learning from pixel-only input is successful on many of the tasks, but fails completely in some cases. It is worth noting that the camera view for some of the task domains is not well suited to a pixel-only solution. Thus, some of the failure cases are likely due to the difficulty of positioning a camera that simultaneously captures both the navigation targets and the details of the agent's body: e.g. in the cases of swimmer:swimmer6, swimmer:swimmer15 and fish:swim.
The DeepMind Control Suite is a starting place for the design and performance comparison of reinforcement learning algorithms for physics-based control. It offers a wide range of tasks, from near-trivial to quite difficult. The uniform reward structure allows for robust suite-wide performance measures.
The results presented here for A3C, DDPG, and D4PG constitute baselines using, to the best of our knowledge, well performing implementations of these algorithms. At the same time, we emphasise that the learning curves are not based on exhaustive hyperparameter optimisation, and that for a given algorithm the same hyperparameters were used across all tasks in the Control Suite. Thus, we expect that it may be possible to obtain better performance or data efficiency, especially on a per-task basis.
We are excited to be sharing the Control Suite with the wider community and hope that it will be found useful. We look forward to the diverse research the Suite may enable, and to integrating community contributions in future releases.
Several elements are missing from the current release of the Control Suite.
Some features, like rich task variations, are missing by design. The Suite, and particularly the benchmarking set of tasks, is meant to be a stable, simple starting point for learning control. Task categories like full manipulation and locomotion in complex terrains require reasoning about a distribution of tasks and models, not only initial states. These require more powerful tools, which we hope to share in the future in a different branch.
There are several features that we hoped to include but that did not make it into this release; we intend to add these in the future. They include: a quadrupedal locomotion task, an interactive visualiser with which to view and perturb the simulation, support for C callbacks and multi-threaded dynamics, a MuJoCo TensorFlow op wrapper, and Windows™ support.