Traversing the Reality Gap via Simulator Tuning

by Jack Collins, et al.

The large demand for simulated data has made the reality gap a problem at the forefront of robotics. We propose a method to traverse the gap by tuning available simulation parameters. Through the optimisation of physics engine parameters, we show that we are able to narrow the gap between simulated solutions and a real world dataset, and thus allow more ready transfer of learned behaviours between the two. We subsequently gain understanding as to the importance of specific simulator parameters, which is of broad interest to the robotic machine learning community. We find that, even when optimised for different tasks, different physics engines perform better in certain scenarios, and that friction and maximum actuator velocity are tightly bounded parameters that greatly impact the transference of simulated solutions.






I Introduction

Physics simulations attempt to model some pertinent facets of the real world in software. Simulations are necessarily simplified for computational feasibility, yet reflect real-world phenomena at a given level of veracity, the extent of which is the result of a trade-off between accuracy and computational time. In the domain of robotics, rigid-body simulators are frequently used, as a large proportion of robots can be well-modelled as rigid bodies. Robotics simulators reproduce the most important physical phenomena (e.g. gravity, collisions) but replace detailed modelling of complex phenomena with computationally faster, less accurate high-level representations and constraints. Robotics simulations also depend on phenomena that are difficult to replicate accurately, e.g., actuators (torque characteristics, gear backlash, …), sensors (noise, latency, …), and rendered images (reflections, refraction, textures, …). This gap between reality and simulation is commonly referred to as the “Reality Gap”.

Although conducting research in simulation means having to overcome the reality gap, the associated pros outweigh the cons for many learning-based approaches. In simulation there is no risk of damaging hardware whilst having access to robots that are not physically available. In addition, many instantiations of a scene can be run in parallel potentially faster than real-time, and human intervention is not required to manage experiments. With the current surge in data-driven techniques like Deep Learning, simulation data is either a pre-requisite to using such techniques, or at least a more attainable alternative to the (generally) expensive, laborious and non-scaleable collection of real-world data.

Fig. 1: 3D plot of a robotic manipulator's end-effector trajectory comparing (i) real world, (ii) untuned simulator, and (iii) tuned simulator. See magnified Excerpt A, which visualises the advantages of the tuned simulation over the generic simulation.

Of course, working in simulation only makes sense if the eventually-learned behaviour can transfer to reality. Sim2real research aspires to make any simulated behaviour seamlessly transfer to hardware and operate in real-world conditions. There are three prevalent ideologies within the sim2real community for overcoming the reality gap: (i) data-driven improvement of simulation, (ii) generation of robust controllers, and (iii) a hybrid approach combining (i) and (ii). Approach (i) augments the simulation with real-world data, but suffers because collection of data from the real world remains expensive. In comparison, (ii) must expose a controller to a wide range of environments through the randomisation of a subset of simulation parameters or the introduction of noise; because the controller is exposed to both realistic and (a large number of) unrealistic scenarios, the creation of such controllers is time consuming. Approach (iii) attempts to mitigate the disadvantages of collecting real-world data by hand and of simulating a large number of unrealistic training scenes, but in doing so adds the complexity of integrating simulation and the real world into a single workflow.

In this work we build on past research that describes a method for recording data for comparison between reality and simulation [3], as well as the provision of metrics for a principled, numerical quantification of the differences between simulation and reality. Our extension investigates the optimisation of simulation parameters towards the goal of achieving real-world simulation performance. We show that our approach is able to attain better performance than generic simulator parameters (as visualised in Figure 1) and is a promising method for traversing the reality gap. Aside from a method to optimise simulator parameters, our contributions include an analysis of the parameters most influential in achieving real-world results.

The motivation for our work is to inform researchers and those applying sim2real techniques as to the important parameters to tune in simulation. As an extension of the results, further conclusions can be drawn as to the best data to collect from the real world to make realistic simulations, the parameters to randomise and the extent of learning approach refinement.

The question we endeavour to answer is: what are the simulation parameters that are most influential in arriving at a realistic simulation? Our experimentation involves optimising two popular rigid-body simulators used by the robotics community to faithfully replicate the results of a series of tasks conducted by a real robot in a motion capture system. The tasks are a range of kinematic movements and object interactions performed by a robotic manipulator, and the results of this ‘ground truth’ are available as a publicly-accessible dataset [4]. Optimisation of simulator parameters is via differential evolution, an Evolutionary Algorithm that performs well on high-dimensional optimisation problems.

II Background

II-A Simulation and Physics Engines

Rigid-body simulators are a class of simulators that simplify the world into rigid objects that are potentially connected through (actuated or unactuated) joints. The use of rigid-body simulators is prevalent in robotics due to the widespread use of rigid robots in the field. Simulators are often modular, with the simulator acting as a high level interface, typically including a graphical user interface, API accessibility from external programming languages, plugins, importers, and scene description formats [30].

Robotic simulators utilise one or more of a number of physics engines. Common physics engines include Bullet [7], Dynamic Animation and Robotics Toolkit (DART) [15] and Open Dynamics Engine (ODE) [25], all of which are licensed under free software licenses. The physics engine operates below the simulator with the goal of providing physically accurate movement of objects instantiated in the simulator.

A multitude of user-definable parameters are available to be tuned, however the exact number varies between engines and implementations. Parameters relate to visual aspects (colours, textures, etc.), material properties (frictions, restitution, etc.), object properties (mass, inertia, etc.), joint characteristics (type of constraint, actuation, etc.) and other more general physics engine properties (time step, solver settings, damping, etc.). Although a large tunable parameter space represents an opportunity to adapt a physics engine to accurately replicate real-world conditions, it also results in a high-dimensional optimisation problem, where the effects of varying parts of this parameter space on simulator performance are not intuitive.

II-B Reality Gap

Given the well-accepted problem of the reality gap, the only way currently to guarantee a solution will perform as expected in reality is to create the solution in reality; to this end, test-rigs are a proven method for overcoming some of the issues associated with working in the real world [10, 9]. A range of approaches have been proposed to cross the reality gap, starting most notably with Jakobi et al. [12] using targeted noise to generate robust controllers. Other sim2real approaches have focused on tuning controllers [22], optimising transferable controllers [14], adding perturbations to the environment [29] and learning the target platform's actuator responses [11].

There are several methods of overcoming the reality gap using real-world data augmentation. The most common is to replace the generic simulator settings with more accurate parameters that are collected from real-world measurements, derived from calculations, taken from researched values, or found experimentally [29, 28, 33]. Oftentimes parameters such as weight, physical dimensions, frictional coefficients, centre of mass, inertial properties, actuator control properties and more are used [8, 2]. Another common practice is to substitute a more accurate model of an actuator derived from the response of the physical robot or from system identification; examples of this include work by Andrychowicz et al. [1] and Tan et al. [29].

There are several documented approaches for updating parameters using recorded data, a portion of which update simulation parameters live. One example of live parameter updates is by Moeckel et al., who use a Kinect sensor coupled with background subtraction to detect gait transference [18]. Although motion capture systems that give accurate 6DOF pose are becoming increasingly common equipment in research labs, few methods have been reported that use them [1].

Domain randomisation is currently the most popular method for overcoming the reality gap in the machine learning domain. The parameters to be randomised are chosen according to the policy being learned: tasks that are entirely visual or require computer vision focus on randomising the visual characteristics of a simulation (i.e. parameters relating to cameras, colours, textures, lighting, etc.) [31], whereas a policy that requires environmental interactions will randomise physical properties (i.e. mass, inertia, friction, etc.) [20]. Randomised parameters are bounded with limits at initialisation which are hand-picked by the user; these are often plausible ranges that will be found within the real world, although not necessarily accurate to the operating conditions of the target robot and environment.

Presenting such varied scenes to an agent means extensive amounts of time are required for learning. In particular, slow initial learning rates are noted, and approaches have attempted to overcome this by progressively increasing the variation experienced [19]. Accurate parameter ranges that are specific to the target setting would further reduce the landscape and lead to quicker training times; providing these is one contribution of our work.
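As a minimal sketch of the bounded randomisation described above, assuming hypothetical parameter names and hand-picked ranges (the `PARAM_RANGES` values below are illustrative, not taken from any specific work), each training episode would draw uniformly within each bound:

```python
import random

# Hypothetical plausible ranges for randomised physical properties;
# narrower, real-world-grounded bounds shrink the search landscape.
PARAM_RANGES = {
    "mass_scale": (0.7, 1.3),         # fraction of nominal mass
    "lateral_friction": (0.0001, 1.25),
    "restitution": (0.0001, 0.9),
}

def sample_randomised_params(rng=random):
    """Draw one randomised parameter set, uniformly within each bound."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

params = sample_randomised_params()
```

Tightening these ranges around measured real-world values, as this work advocates, reduces the number of unrealistic scenes the agent must learn from.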

II-C Parameter Optimisation

As discussed, there are a large number of available parameters to optimise in simulators and physics engines. As the rigid-body simulators commonly used in robotics are non-differentiable [6], the optimisation of parameters relies on gradient-free algorithms. Bayesian optimisation was considered for the task of finding a global minimum as it provides an efficient sampling method requiring fewer simulation evaluations [26]. However, Bayesian optimisation is limited in the number of variables it is able to optimise, with common practice being to optimise only a small number of variables [32]. Extensions of Bayesian optimisation using dropout have pushed this limit further [16].

Evolutionary Algorithms (EAs) are another class of black-box optimisation algorithms that have been applied to problems with larger search spaces. Differential Evolution (DE) [27] is a popular EA; a global optimiser, it is easily parallelisable and able to scale to a large number of variables. Algorithmic parameters for tuning include the crossover rate CR, the mutation weight F and the population size NP [5]. Suitable values for F have been investigated by Ronkkonen et al. [24]. The population size is closely related to the mutation parameters, with problem dimensionality and problem properties also affecting the choice of population size; recommended population sizes as a function of problem dimensionality are given in a review conducted by Piotrowski [21].
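These DE knobs map directly onto SciPy's `differential_evolution`. The following toy sketch uses a quadratic stand-in objective (the `target` vector and bounds are invented for illustration; a real run would evaluate one simulation per candidate vector):

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-in for a simulator-fitness evaluation: squared distance from a
# "true" parameter vector. A real objective would launch a simulation.
target = np.array([0.3, 0.7, 0.5])

def fitness(x):
    return float(np.sum((x - target) ** 2))

result = differential_evolution(
    fitness,
    bounds=[(0.0, 1.0)] * 3,
    strategy="best1bin",   # best/1/bin mutation + binomial crossover
    mutation=(0.5, 1.0),   # mutation weight F, with dithering per generation
    recombination=0.7,     # crossover rate CR
    popsize=15,            # population = popsize * number of parameters
    tol=0.01,              # stop when population spread is small vs. its mean
    seed=0,
)
best_params = result.x     # tuned parameter vector
```

The `popsize` and `mutation` dithering settings shown here follow SciPy's defaults; the specific constants used in the paper's runs are described in Section IV.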

III Methodology

Our approach to simulator optimisation can be segmented into several components as listed below:

  • Real World Dataset Collection (Section III-A)

  • Robotic Simulations (Section III-B)

  • Simulator Parameter Selection (Section III-C)

  • Running the Optimisation Algorithm (Section III-D)

III-A Dataset

Our dataset is a publicly available collection of tasks completed by a robotic manipulator and recorded by a motion capture system [4]. The data gives a ground truth of the real world with 6DOF pose of the manipulator and manipulated objects recorded. There are 10 tasks in total, 2 of which are pure kinematics (no objects) and 8 of which involve non-prehensile manipulation. Tasks are purposefully elementary as they are foundational to larger compound tasks, making results derived from these tasks scale to harder and more complex applications. The manipulation tasks have interactions with objects including cubes, cuboids, cylinders and cones. Another useful property of the dataset is the contrast in object materials, with half the interaction tasks completed with plastic objects and the other half with exact wooden replicas. For a complete description of the dataset and its use in benchmarking reality, please see [4].

The dataset is released with simulation protocols that allow users to simulate the same scene and the same control of the robot arm that is used in reality. All scenes use a levelled plane, a Kinova 6DOF robotic arm fitted with a KG-3 gripper, and either zero or one object to manipulate. Dataset users must follow the explicit instructions on scene setup, robot configuration and motor controls, but are able to change any of the other user-definable parameters of their chosen simulator and physics engine.

III-B Simulation

We selected two popular robotic simulators: PyBullet (version 2.5.8) [7] and V-Rep (version 3.6.2, now known as CoppeliaSim) [23], accessed through the PyRep interface [13]. The two simulators were chosen as they provide a common interface and easy access to a multitude of physics engines. PyBullet uses Bullet 2.85, whilst V-Rep uses Bullet 2.83 and Bullet 2.78; V-Rep also provides access to the ODE and Newton physics engines.

PyBullet natively exposes a large number of settings to the user. V-Rep has an abstraction layer between the simulator and its physics engines, making it possible to interface with multiple different engines, and most of the parameters accessible through PyBullet are also available in the V-Rep physics engines. However, as we use the PyRep interface, not all parameters we require are accessible from the external API; we therefore use embedded scripts within the simulator, invoked from PyRep.

III-C Parameters

Across the different physics engines available (including the three versions of Bullet), many parameters create the same effect on the physics of the simulation but are implemented using different methods or different units. As such, it was necessary to distinguish the shared parameters that were directly comparable between physics engines from the settings that were not. We used two approaches: in the first, we compared only those parameters available across all simulators and physics engines (Shared); in the second, we allowed each to tune the fuller range of parameters available to it (Individual). Table I documents all the Shared parameters and the Individual parameters that we chose to simulate. The Individual parameter optimisation included both the Shared and Individual parameters.

Shared Parameter | Range
Time Step | [0.001, 0.05]
Mass (Links, Gripper, Objects) | [0.7*M, 1.3*M]
Maximum Joint Torque (6 Joints) | [100, 9000]
Maximum Joint Velocity (6 Joints) | [10, 40]
Lateral Friction (Gripper, Floor, Objects) | [0.0001, 1.25]

Individual Parameter | Range
Joint Damping (6 Joints) | [0.0001, 0.9]
Rolling Friction (Gripper, Floor, Objects) | [0.0001, 1.25]
Sliding Friction (Gripper, Floor, Objects) | [0.0001, 1.25]
Restitution (Gripper, Floor, Objects) | [0.0001, 0.9]
Linear Damping (Gripper, Floor, Objects) | [0.0001, 0.9]
Angular Damping (Gripper, Floor, Objects) | [0.0001, 0.9]
TABLE I: List of Parameters Used for Optimisation and the Range Limits

It would be infeasible to tune all available parameters. As positions (x,y,z), rotations (x,y,z,w) and inertias (xx,yy,zz) each require multiple parameters, it was impractical to create variables for the centres of mass, inertia positions, inertia rotations and inertia tensors.
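The bounds in Table I can be expanded into flat optimisation vectors along these lines; the sketch below assumes one scalar variable per joint and per body group, so the exact variable counts are our own expansion assumption and may differ from the paper's:

```python
# Parameter bounds transcribed from Table I; per-joint and per-body entries
# are expanded into one scalar variable each (an assumption for illustration).
SHARED = (
    [("time_step", 0.001, 0.05)]
    + [(f"max_torque_j{i}", 100, 9000) for i in range(1, 7)]
    + [(f"max_velocity_j{i}", 10, 40) for i in range(1, 7)]
    + [(f"lateral_friction_{b}", 0.0001, 1.25) for b in ("gripper", "floor", "objects")]
)
# Mass bounds are relative (0.7*M to 1.3*M), so they are expressed as scale factors.
SHARED += [(f"mass_scale_{b}", 0.7, 1.3) for b in ("links", "gripper", "objects")]

INDIVIDUAL_EXTRA = (
    [(f"joint_damping_j{i}", 0.0001, 0.9) for i in range(1, 7)]
    + [(f"{p}_{b}", lo, hi)
       for p, lo, hi in [("rolling_friction", 0.0001, 1.25),
                         ("sliding_friction", 0.0001, 1.25),
                         ("restitution", 0.0001, 0.9),
                         ("linear_damping", 0.0001, 0.9),
                         ("angular_damping", 0.0001, 0.9)]
       for b in ("gripper", "floor", "objects")]
)
# The Individual optimisation tunes the Shared parameters too.
INDIVIDUAL = SHARED + INDIVIDUAL_EXTRA

bounds_shared = [(lo, hi) for _, lo, hi in SHARED]
bounds_individual = [(lo, hi) for _, lo, hi in INDIVIDUAL]
```

Either bounds list can then be handed directly to a gradient-free optimiser such as SciPy's differential evolution.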

III-D Optimisation

We use DE as implemented in the SciPy optimise module. DE follows the same approach as most EAs in that it begins with a randomly initialised population of a set number of individuals, evolved across a number of generations. Individuals are vectors of parameters, and child populations are the succeeding generation (or offspring) of parent populations, with a chosen strategy dictating the creation of child populations.

We apply the ’best1bin’ strategy, which iterates over the parent population, creating a mutant vector for each individual (v_i) by perturbing the fittest individual in the parent population (x_best) by the difference between two randomly chosen individuals of the parent population, see Equation 1, where F is the mutation factor:

v_i = x_best + F (x_r1 − x_r2)   (1)

A child member is then created by choosing each parameter from either v_i or the parent x_i, as per a binomial crossover in which a uniformly drawn random number must be less than the recombination rate for the parameter to be selected from v_i. If a child vector is fitter than its parent, it replaces it in the current population. In comparison to other strategies, ’best1bin’ has strong supporting evidence that it is competitive [17].
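A minimal NumPy sketch of one best/1/bin child construction follows; the forced crossover index is common DE practice and the details may differ from SciPy's internals:

```python
import numpy as np

def best1bin_child(pop, fitnesses, i, F=0.8, CR=0.7, rng=None):
    """One best/1/bin step: mutate the best individual by the scaled
    difference of two random distinct individuals, then binomially
    recombine the mutant with parent i."""
    rng = rng or np.random.default_rng(0)
    best = pop[np.argmin(fitnesses)]            # fittest parent x_best
    r1, r2 = rng.choice([j for j in range(len(pop)) if j != i], 2, replace=False)
    mutant = best + F * (pop[r1] - pop[r2])     # Equation 1
    # Binomial crossover: take the mutant's parameter where u < CR, with one
    # randomly chosen index forced from the mutant so the child differs.
    u = rng.random(pop.shape[1])
    u[rng.integers(pop.shape[1])] = 0.0
    return np.where(u < CR, mutant, pop[i])

pop = np.array([[0.2, 0.4], [0.9, 0.1], [0.5, 0.5]])
child = best1bin_child(pop, fitnesses=[1.0, 3.0, 2.0], i=1)
```

In the full algorithm the child would replace parent `i` only if its evaluated fitness is lower.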


The fitness objective is to minimise the 3D Euclidean distance between the simulator and reality; this value dictates the fitness used by the DE for a given population member. For the kinematic tasks, this is the distance between the wrist joint of the robotic manipulator in simulation (p_sim) and in the dataset (p_real), summed throughout the duration of the simulation and divided by the number of data points (n), see Equation 2:

F = (1/n) Σ_{t=1}^{n} ||p_sim(t) − p_real(t)||   (2)

Tasks that include objects (simulation: o_sim, dataset: o_real) use the combined Euclidean distance of the arm and object, see Equation 3. The trajectory of the dataset object is a distribution, as one cannot create a mean object trajectory from multiple repeats in which the object did not have the same start and end position; therefore, we use the difference in the final position (at time T) of the object in simulation and the dataset:

F = (1/n) Σ_{t=1}^{n} ||p_sim(t) − p_real(t)|| + ||o_sim(T) − o_real(T)||   (3)
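Assuming trajectories stored as n×3 position arrays, the two fitness measures can be sketched as:

```python
import numpy as np

def kinematic_fitness(sim_wrist, real_wrist):
    """Mean 3D Euclidean distance between simulated and recorded wrist
    trajectories: per-timestep distances summed, divided by n."""
    return float(np.mean(np.linalg.norm(sim_wrist - real_wrist, axis=1)))

def object_task_fitness(sim_wrist, real_wrist, sim_obj_final, real_obj_final):
    """Wrist trajectory error plus the final object placement error
    (object trajectories are not averaged across repeats)."""
    return kinematic_fitness(sim_wrist, real_wrist) + float(
        np.linalg.norm(sim_obj_final - real_obj_final))
```

Each DE candidate is scored by running the simulation with its parameter vector and evaluating one of these measures against the dataset.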


IV Experimentation

We define an optimisation as an application of DE to optimise an array of either Shared or Individual simulation parameters. In total, 1,100 optimisations (2 × 11 × 5 × 10) were completed to make up the results. This is broken down into:

  • “Shared” and “Individual” parameters;

  • Eleven experiments: the ten tasks from the dataset plus a combination of all ten;

  • 5 physics engines; and

  • 10 repeats of each.

A large number of experimental runs were required, and as such they were scheduled on a High Performance Computing (HPC) cluster. A Singularity container with PyBullet, PyRep and V-Rep installed provided a distributed and scaleable deployment. Experiments were paired and scheduled onto a single node of the HPC, with each repeat given access to a single core. Each node comprised 20 cores of an Intel Xeon E5-2660 V3 processor.

The DE algorithm used a fixed recombination rate, a mutation factor with dithering, and a population size proportional to the length of the parameter array. The population was chosen to be low due to the large evaluation times of some physics engines and experiments, paired with the limited amount of compute time. An optimisation finished under one of three conditions:

  • Convergence (i.e. when the standard deviation of the current population was less than one percent of the population mean), or

  • The maximum number of DE generations was completed, or

  • The maximum allotted compute time was reached (a hard constraint of the HPC).
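The first stopping condition can be sketched as a simple check on the current population's fitness values:

```python
import numpy as np

def has_converged(pop_fitnesses, rel_tol=0.01):
    """Convergence test used as a stopping condition: the standard deviation
    of the current population's fitnesses falls below one percent of the
    population mean."""
    f = np.asarray(pop_fitnesses, dtype=float)
    return bool(np.std(f) < rel_tol * abs(np.mean(f)))
```

This is the same relative-spread criterion that SciPy's differential evolution exposes through its `tol` argument.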

The number of variables tuned differed between the Shared and Individual experiments, with Individual experiments tuning the Shared variables plus the additional Individual ones. See Table I for more details.

V Results

V-A Shared Parameters

V-A1 Performance

Table II shows the fitnesses for each experiment and physics engine when using generic simulator parameters. The fitness plots in Figure 2 demonstrate the convergence of the optimisations completed on the same physics engines, and the parameters shared between all physics engines. The smallest error obtained between simulation and the real world dataset for each experiment can be found in Table III.

Directly comparing the generic fitness values from Table II to the fitness values achieved when optimising shared parameters in Table III, we see that the tuned fitness is lower than that of every physics engine with generic parameters. Taking the best generic fitness and comparing it to the best tuned fitness for each experiment, we see improvements across all experiments. The effect of tuning the shared parameters is therefore significant and provides a more realistic simulation, closer aligned to the real world.

From Table III we see a correlation between the experiment ‘type’ and the physics engine with the least error. Newton was the best physics engine for Experiments 1 and 2, implying that it is better able to model arm kinematics without object interactions. PyBullet was best at Experiments 5, 6 and 8, all of which include rolling objects. The clustering of experiments and physics engines indicates that no physics engine is best equipped to deal with all simulation scenarios, but that a physics engine can have heightened performance in select scenarios over other physics engines.

Experiment PyBullet Bullet2.78 Bullet2.83 ODE Newton
1 0.1498 0.2660 0.2687 0.1800 0.1437
2 0.1283 0.1320 0.1582 0.1285 0.1147
3 0.1764 0.2533 0.2838 0.1943 0.2691
4 0.2107 0.2255 0.1966 0.1674 0.2877
5 491.2769 0.2967 0.3031 0.2909 0.2910
6 503.3037 0.6029 0.6096 0.5972 0.5977
7 0.1778 0.2500 0.3163 0.2370 0.2323
8 0.2132 0.3075 0.3412 0.2812 0.2759
9 0.1306 0.1950 0.1241 0.1171 0.1242
10 0.1242 0.1143 0.1155 0.1069 0.1176
11 995.8916 2.6433 2.7170 2.3005 2.4540
TABLE II: Fitness of Generic Physics Engine Settings
Experiment Best Physics Engine Shared Best Physics Engine Individual
1 Newton (0.0973) Newton (0.0973)
2 Newton (0.0984) Newton (0.0984)
3 Bullet 2.78 (0.0498) Bullet 2.78 (0.0498)
4 Bullet 2.83 (0.0629) Bullet 2.78 (0.0673)
5 PyBullet (0.0506) PyBullet (0.2407)
6 PyBullet (0.0552) PyBullet (0.5641)
7 Bullet 2.78 (0.0551) PyBullet (0.0551)
8 PyBullet (0.0442) Bullet 2.78 (0.0744)
9 ODE (0.0519) ODE (0.0482)
10 ODE (0.0503) ODE (0.0487)
11 ODE (1.7360) ODE (1.7714)
TABLE III: Best Optimised Fitness by Experiment and Parameters Tuned
Fig. 2: Convergence plots of all experiments using shared parameters. Each subplot plots the line of best fitness throughout the generations averaged across the repeats for each physics engine.
Fig. 3: Convergence plots of all experiments using individual parameters. Each subplot plots the line of best fitness throughout the generations averaged across the repeats for each physics engine.

Newton took the least total compute time across all experiments; PyBullet was the slowest. The time required to complete an optimisation gives some notion of its difficulty: experiment 11 (the combination of all dataset tasks) understandably took the longest. We also note extended completion times for the cylinder-rolling tasks.

Figure 2 displays in the y-axis of each subplot the number of generations required for the optimisation to terminate. Newton was consistently the physics engine with the lowest number of evolutionary generations, whilst Bullet 2.83 and Bullet 2.78 had the most, terminating at the maximum number of generations for some comparatively easy experiments. This is an interesting observation, as it alludes to Newton being an easier environment to optimise within, with either a reduced search space or less noise; however, Newton does not provide accurate environmental interactions when compared with the other physics engines reviewed. The large number of evolutionary generations required by both of V-Rep's Bullet environments, especially for the easier cuboid interactions, implies a noisy landscape for fitness optimisation.

V-A2 Parameters

From the Shared parameters we note that a select few have a large impact on the performance of the physics engine. We measure the importance of a parameter's value by its deviation across the 10 optimisation repeats. Owing to the large amount of data generated, we include exemplar box and whisker plots in Figure 4 for the most relevant data.

One of the most influential parameters found was the simulation timestep (see Figure 4); its deviation was consistently low across experiments and physics engines, except for the rolling tasks. PyBullet's median value stayed very close to its generic default with a low standard deviation for all experiments, and the V-Rep physics engines (Bullet 2.78, Bullet 2.83, ODE and Newton) likewise had medians very close to the generic timestep. We therefore recommend setting the physics engine timestep to the value recommended by the developer of the simulator. The tightly constrained value is very likely due to dependencies on other, physics-engine-specific parameters that would also need to be tuned.

Fig. 4: Box and whisker plots for each (i) physics engine and (ii) experiment. The subplots are all for the shared timestep parameter. This plot is meant as an exemplar of the other parameter plots for individual optimisations and parameter plots for shared optimisations.

Other parameters that largely influenced the realism of the simulation were the lateral frictions of the manipulated objects, i.e. the wood and plastic frictions. The frictions of the gripper and floor plane were not as influential, except for PyBullet during the two rolling experiments, where the floor plane friction had a notably low standard deviation. This is very likely the reason why PyBullet had the lowest error for three of the rolling experiments.

It was expected that the parameters influencing the response of the joints would have a large impact on the fitness, as there is a direct correlation between the measured wrist pose and the actuator response. This assumption was found to be true for the maximum joint velocity, but no such trend could be found for the maximum torque. The simpler experiments consistently had statistically significantly lower standard deviations across most joints; the joint that experienced restricted movement during the experiments was likely less influential in simulator realism, and although the more complex experiments did not display the same reliance on accurate joint velocity, this is likely because their resulting optimisations are harder. The results from the shared parameter optimisation show that we can perform context-sensitive tuning that positively influences the realism of the simulation for all environments.

V-B Individual Parameters

V-B1 Performance

Figure 3 depicts the fitness convergence for the individually tuned parameters. Made obvious by the plots is the difficulty that the extra parameters add, as some physics engines fail to converge appropriately for certain experiments. Comparing the lowest error for each experiment in Table III, there are only three instances where the individual optimisation improves upon the shared parameters, and several where it arrives at a worse solution. The added complexity of the additional parameters to tune is the likely cause of the worse fitnesses. Similar to the shared parameter optimisations, the individual runs show a correlation between the best physics engine and the type of task, i.e. Newton is best at kinematics and PyBullet is best at tasks that include rolling objects.

Taking into account the mean final fitness instead of the absolute best, the individual optimisation lessens the engine/experiment error in only a minority of cases. This is likely due to several reasons: (i) DE does not guarantee finding the optimal solution, (ii) the additional parameters add noise and complexity that make the problem harder to optimise, and (iii) when compute is terminated at the time limit, the algorithm is unable to complete the optimisation without convergence. Termination at the maximum run time occurred in a substantial fraction of the individual parameter optimisations.

V-B2 Parameters

The influential parameters for the individual optimisations were the same as those for the shared: timestep, lateral friction and maximum joint velocity. In addition, two further parameters had statistically significant standard deviations. The restitution of an object parameterises the conservation of energy after a contact; only some of our experiments contain contacts, most of which are at low speed. The experiment involving interaction with a plastic cuboid saw a consistently low standard deviation for three of the physics engines for the restitution value of plastic. This alludes to restitution being an important factor in experiments that include contacts above a contact-energy threshold.

The mass of the arm links was another parameter that saw noticeably low standard deviations in the individual parameter optimisations. Arm link masses produced an interesting result, with links displaying low standard deviations for Bullet 2.78, Bullet 2.83 and ODE on the simpler experiments. The smaller deviations across the easier experiments are likely due to the reduced noise whilst optimising the parameters.

VI Conclusion

In conclusion, we have investigated the influence of a range of simulation parameters when optimising 2 simulators and 5 physics engines towards more realistic simulations. Our method is significantly better than using generic simulator parameters, with all simulation environments and all experiments achieving an improved fitness. This was achieved by using a real-world dataset of motion-capture-recorded manipulation tasks and optimising both shared and individualised simulation parameters towards the dataset. The optimisation algorithm chosen was differential evolution (DE), due to the large number of parameters it is able to concurrently optimise and the non-differentiable nature of the problem. The fitness signal throughout the optimisation runs was the Euclidean distance error between the simulated and recorded wrist of the manipulator, summed with the final placement error of any objects in the scene.

We found that the most important parameters shared between physics engines were the simulation timestep, lateral object friction and maximum joint velocity. From the expanded range of parameters, we also found that restitution is likely to be important for high-energy contact tasks.

To improve simulator performance we recommend that users start with (i) the default simulator timestep, (ii) a researched or experimentally acquired, accurate friction value, and (iii) recorded or sourced accurate maximum joint velocities for each actuator. If accurate, these same parameters should not be excessively randomised, as our results indicate that their variation is tightly bounded to the real world value.
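The recommendation above implies an asymmetric randomisation scheme: narrow ranges around measured values for the sensitive parameters, looser ranges elsewhere. A minimal sketch, where all parameter names, measured values, and range widths are our own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# (measured value, +/- fractional randomisation width).
# Sensitive parameters (friction, max joint velocity) get narrow ranges;
# restitution may be randomised more freely for low-speed contact tasks.
PARAMS = {
    "lateral_friction":   (0.62, 0.05),
    "max_joint_velocity": (2.10, 0.05),
    "restitution":        (0.40, 0.50),
}

def sample_parameters(params, rng):
    """Draw one randomised parameter set, keeping sensitive parameters
    close to their measured values."""
    return {name: value * (1.0 + rng.uniform(-width, width))
            for name, (value, width) in params.items()}

sample = sample_parameters(PARAMS, rng)
```

Under this scheme, friction and maximum joint velocity stay within a few percent of the measured values, which is consistent with the finding that transference degrades quickly when these parameters drift from reality.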


  • [1] M. Andrychowicz, B. Baker, M. Chociej, R. Jozefowicz, B. McGrew, J. Pachocki, A. Petron, M. Plappert, G. Powell, A. Ray, et al. (2018) Learning dexterous in-hand manipulation. arXiv preprint arXiv:1808.00177. Cited by: §II-B, §II-B.
  • [2] Y. Chebotar, A. Handa, V. Makoviychuk, M. Macklin, J. Issac, N. Ratliff, and D. Fox (2019) Closing the Sim-to-Real Loop: Adapting Simulation Randomization with Real World Experience. In 2019 International Conference on Robotics and Automation (ICRA), Vol. , pp. 8973–8979. External Links: ISBN 1050-4729, Document Cited by: §II-B.
  • [3] J. Collins, D. Howard, and J. Leitner (2019) Quantifying the Reality Gap in Robotic Manipulation Tasks. In 2019 International Conference on Robotics and Automation (ICRA), Vol. , pp. 6706–6712. External Links: Document Cited by: §I.
  • [4] J. Collins, J. McVicar, D. Wedlock, R. Brown, D. Howard, and J. Leitner (2019) Benchmarking Simulated Robotic Manipulation through a Real World Dataset. IEEE Robotics and Automation Letters (), pp. 1. External Links: Document, ISSN 2377-3774 Cited by: §I, §III-A.
  • [5] S. Das and P. N. Suganthan (2011) Differential Evolution: A Survey of the State-of-the-Art. IEEE Transactions on Evolutionary Computation 15 (1), pp. 4–31. External Links: Document, ISSN 1941-0026 Cited by: §II-C.
  • [6] J. Degrave, M. Hermans, J. Dambre, and F. Wyffels (2019-03) A Differentiable Physics Engine for Deep Learning in Robotics. Frontiers in neurorobotics 13, pp. 6 (eng). External Links: Link, Document, ISSN 1662-5218 Cited by: §II-C.
  • [7] E. Coumans and Y. Bai (2016) Pybullet, a Python Module for Physics Simulation for Games, Robotics and Machine Learning. External Links: Link Cited by: §II-A, §III-B.
  • [8] M. Gautier and W. Khalil (1988) On the identification of the inertial parameters of robots. In Proceedings of the 27th IEEE Conference on Decision and Control, Vol. , pp. 2264–2269. External Links: ISBN null, Document Cited by: §II-B.
  • [9] H. Heijnen, D. Howard, and N. Kottege (2017-05) A testbed that evolves hexapod controllers in hardware. In Proceedings - IEEE International Conference on Robotics and Automation, pp. 1065–1071. External Links: Link, ISBN 9781509046331, Document, ISSN 10504729 Cited by: §II-B.
  • [10] D. Howard and T. Merz (2015-09) A platform for the direct hardware evolution of quadcopter controllers. In IEEE International Conference on Intelligent Robots and Systems, Vol. 2015-Decem, pp. 4614–4619. External Links: Link, ISBN 9781479999941, Document, ISSN 21530866 Cited by: §II-B.
  • [11] J. Hwangbo, J. Lee, A. Dosovitskiy, D. Bellicoso, V. Tsounis, V. Koltun, and M. Hutter (2019) Learning agile and dynamic motor skills for legged robots. Science Robotics 4 (26). External Links: Link, Document Cited by: §II-B.
  • [12] N. Jakobi, P. Husbands, and I. Harvey (1995) Noise and the reality gap: The use of simulation in evolutionary robotics. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 929, pp. 704–720. External Links: Link, ISBN 3540594965, Document, ISSN 16113349 Cited by: §II-B.
  • [13] S. James, M. Freese, and A. J. Davison (2019-06) PyRep: Bringing V-REP to Deep Robot Learning. External Links: Link Cited by: §III-B.
  • [14] S. Koos, J. Mouret, and S. Doncieux (2010) Crossing the reality gap in evolutionary robotics by promoting transferable controllers. In Proceedings of the 12th annual conference on Genetic and evolutionary computation - GECCO ’10, pp. 119. External Links: Link, ISBN 9781450300728, Document Cited by: §II-B.
  • [15] J. Lee, M. Grey, S. Ha, T. Kunz, S. Jain, Y. Ye, S. Srinivasa, M. Stilman, and C. Liu (2018) DART: Dynamic Animation and Robotics Toolkit. Journal of Open Source Software 3 (22), pp. 500. External Links: Link, Document Cited by: §II-A.
  • [16] C. Li, S. Gupta, S. Rana, V. Nguyen, S. Venkatesh, and A. Shilton (2018-02) High Dimensional Bayesian Optimization Using Dropout. IJCAI International Joint Conference on Artificial Intelligence, pp. 2096–2102. External Links: Link Cited by: §II-C.
  • [17] E. Mezura-Montes, J. Velázquez-Reyes, and C. A. Coello Coello (2006) A Comparative Study of Differential Evolution Variants for Global Optimization. In Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, GECCO ’06, New York, NY, USA, pp. 485–492. External Links: Link, ISBN 1595931864, Document Cited by: §III-D.
  • [18] R. Moeckel, Y. N. Perov, A. T. Nguyen, M. Vespignani, S. Bonardi, S. Pouya, A. Sproewitz, J. v. d. Kieboom, F. Wilhelm, and A. J. Ijspeert (2013) Gait optimization for roombots modular robots — Matching simulation and reality. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. , pp. 3265–3272. External Links: ISBN 2153-0858, Document Cited by: §II-B.
  • [19] OpenAI, I. Akkaya, M. Andrychowicz, M. Chociej, M. Litwin, B. McGrew, A. Petron, A. Paino, M. Plappert, G. Powell, R. Ribas, J. Schneider, N. Tezak, J. Tworek, P. Welinder, L. Weng, Q. Yuan, W. Zaremba, and L. Zhang (2019-10) Solving Rubik’s Cube with a Robot Hand. External Links: Link Cited by: §II-B.
  • [20] X. B. Peng, M. Andrychowicz, W. Zaremba, and P. Abbeel (2018) Sim-to-real transfer of robotic control with dynamics randomization. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1–8. External Links: ISBN 1538630818 Cited by: §II-B.
  • [21] A. P. Piotrowski (2017) Review of Differential Evolution population size. Swarm and Evolutionary Computation 32, pp. 1–24. External Links: Link, Document, ISSN 2210-6502 Cited by: §II-C.
  • [22] H. Qiu, M. Garratt, D. Howard, and S. Anavatti (2020) Crossing the reality gap with evolved plastic neurocontrollers. arXiv preprint arXiv:2002.09854. Cited by: §II-B.
  • [23] E. Rohmer, S. P.N. Singh, and M. Freese (2013-11) V-REP: A versatile and scalable robot simulation framework. In IEEE International Conference on Intelligent Robots and Systems, pp. 1321–1326. External Links: Link, ISBN 9781467363587, Document, ISSN 21530858 Cited by: §III-B.
  • [24] J. Ronkkonen, S. Kukkonen, and K. V. Price (2005) Real-parameter optimization with differential evolution. In 2005 IEEE Congress on Evolutionary Computation, Vol. 1, pp. 506–513. External Links: ISBN 1941-0026, Document Cited by: §II-C.
  • [25] R. Smith (2005) Open dynamics engine. External Links: Link Cited by: §II-A.
  • [26] J. Snoek, H. Larochelle, and R. P. Adams (2012) Practical Bayesian Optimization of Machine Learning Algorithms. In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 2, NIPS’12, Red Hook, NY, USA, pp. 2951–2959. Cited by: §II-C.
  • [27] R. Storn and K. Price (1997) Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. Journal of Global Optimization 11 (4), pp. 341–359. External Links: Link, Document, ISSN 1573-2916 Cited by: §II-C.
  • [28] J. Tan, Z. Xie, B. Boots, and C. K. Liu (2016) Simulation-based design of dynamic controllers for humanoid balancing. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vol. , pp. 2729–2736. External Links: ISBN 2153-0866, Document Cited by: §II-B.
  • [29] J. Tan, T. Zhang, E. Coumans, A. Iscen, Y. Bai, D. Hafner, S. Bohez, and V. Vanhoucke (2018-04) Sim-to-Real: Learning Agile Locomotion For Quadruped Robots. External Links: Link, Document Cited by: §II-B, §II-B.
  • [30] M. Torres-Torriti, T. Arredondo, and P. Castillo-Pizarro (2016) Survey and comparative study of free simulation software for mobile robots. Robotica 34 (4), pp. 791–822. External Links: ISSN 0263-5747 Cited by: §II-A.
  • [31] J. Tremblay, A. Prakash, D. Acuna, M. Brophy, V. Jampani, C. Anil, T. To, E. Cameracci, S. Boochoon, and S. Birchfield (2018) Training Deep Networks with Synthetic Data: Bridging the Reality Gap by Domain Randomization. Cited by: §II-B.
  • [32] Z. Wang, M. Zoghi, F. Hutter, D. Matheson, and N. De Freitas (2013) Bayesian Optimization in High Dimensions via Random Embeddings. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, IJCAI ’13, pp. 1778–1784. External Links: ISBN 9781577356332 Cited by: §II-C.
  • [33] R. L. Williams, B. E. Carter, P. Gallina, and G. Rosati (2002) Dynamic model with slip for wheeled omnidirectional robots. IEEE Transactions on Robotics and Automation 18 (3), pp. 285–293. External Links: Document, ISSN 2374-958X Cited by: §II-B.