Benchmarking Simulated Robotic Manipulation through a Real World Dataset

by Jack Collins et al.

We present a benchmark to facilitate simulated manipulation: an attempt to overcome the obstacles of physical benchmarks through the distribution of a real-world, ground-truth dataset. Users are given various simulated manipulation tasks with assigned protocols, with the objective of replicating the real-world results of a recorded dataset. The benchmark comprises a range of metrics used to characterise the successes of submitted environments whilst providing insight into their deficiencies. We apply our benchmark to two simulation environments, PyBullet and V-REP, and publish the results. All materials required to benchmark an environment, including protocols and the dataset, can be found at the benchmark's website.
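The paper's exact metrics are not reproduced here, but the core idea of scoring a simulator against recorded ground truth can be sketched as a trajectory-error computation. A minimal sketch, assuming the dataset provides time-aligned object positions and that a simple position RMSE is one of the characterising metrics (the function name and data layout below are illustrative, not from the benchmark):

```python
import numpy as np

def trajectory_rmse(real_positions, sim_positions):
    """Root-mean-square Euclidean error between time-aligned real and
    simulated object positions, each given as an (N, 3) sequence."""
    real = np.asarray(real_positions, dtype=float)
    sim = np.asarray(sim_positions, dtype=float)
    if real.shape != sim.shape:
        raise ValueError("real and simulated trajectories must be aligned")
    # Per-timestep squared Euclidean distance, averaged, then rooted.
    sq_err = np.sum((real - sim) ** 2, axis=1)
    return float(np.sqrt(np.mean(sq_err)))

# Hypothetical example: a recorded object slide vs. a simulated replay.
real = [[0.00, 0.0, 0.1], [0.05, 0.0, 0.1], [0.10, 0.0, 0.1]]
sim  = [[0.00, 0.0, 0.1], [0.04, 0.0, 0.1], [0.08, 0.0, 0.1]]
print(trajectory_rmse(real, sim))  # small error from the lagging simulation
```

A lower RMSE indicates the simulated environment more closely replicates the recorded real-world motion; per-metric breakdowns of this kind are what let a benchmark expose where a simulator's physics diverges.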


Related papers:

- Quantifying the Reality Gap in Robotic Manipulation Tasks
- BulletArm: An Open-Source Robotic Manipulation Benchmark and Learning Framework
- OffWorld Gym: open-access physical robotics environment for real-world reinforcement learning benchmark and research
- Sim2Real2Sim: Bridging the Gap Between Simulation and Real-World in Flexible Object Manipulation
- Continuous Optimization Benchmarks by Simulation
- A User's Guide to Calibrating Robotics Simulators
- A Systematic Comparison of Simulation Software for Robotic Arm Manipulation using ROS2