In recent years, deep learning has significantly impacted numerous areas of machine learning, improving state-of-the-art results in tasks such as image recognition, speech recognition, and language translation. Robotics has benefited greatly from this progress, with many robotics systems opting to use deep learning in many or all of the processing stages of a typical robotics pipeline [2, 3]. As we aim to endow robots with the ability to operate in complex and dynamic worlds, it becomes important to collect a rich variety of data of robots acting in these worlds. Deep learning, however, comes at the cost of requiring large amounts of training data, which can be particularly time consuming to collect in these dynamic environments. Simulations, then, can help in one of two primary ways:
- Rapid prototyping of learning algorithms, in the hope of finding data-efficient solutions that can be trained on small real-world datasets that are feasible to collect.
- Training data-hungry algorithms entirely in simulation, with the aim of transferring the learned behaviours to the real world (sim-to-real) [4-8].
Two common simulation environments in the literature today are Bullet [9] and MuJoCo [10]. However, given that these are physics engines rather than robotics frameworks, it can often be cumbersome to build rich environments and integrate standard robotics tooling such as inverse and forward kinematics, user interfaces, motion libraries, and path planners.
Fortunately, the Virtual Robot Experimentation Platform (V-REP) [11] is a robotics framework that makes it easy to design robotics applications. However, although the platform is highly customisable and ships with several APIs, including a Python remote API, it was not developed with the intention of being used for large-scale data collection. As a result, V-REP, when accessed via Python, is currently too slow for the rapid environment interaction that is needed in many robot learning methods, such as reinforcement learning (RL). To that end, PyRep is an attempt to bring the power of V-REP to the robot learning community. In addition to a new intuitive Python API and rendering engine, we modify the open-source version of V-REP to tailor it towards communicating with Python; as a result, we achieve speed boosts upwards of 4 orders of magnitude in comparison to the original V-REP Python API.
V-REP [11] is a general-purpose robot simulation framework maintained by Coppelia Robotics. Some of its many features include:
Cross-platform support (Linux, macOS, and Windows).
Several means of communication with the framework (including embedded Lua scripts, C++ plugins, remote APIs in 6 languages, ROS, etc.).
Support for 4 physics engines (Bullet, ODE, Newton, and Vortex), with the ability to quickly switch from one engine to another.
Inverse & forward kinematics.
Distributed control architecture based on embedded Lua scripts.
Python and C++ are primary languages for research in deep learning and robotics, and so it is imperative that communication times between a learning framework and V-REP are kept to a minimum. Given that V-REP was introduced in 2013 when deep learning was in its infancy, prioritisation was not given to rapid external API calls, which currently rely on inter-thread communication. As a result, this makes V-REP slow to use for external data-hungry applications.
Below we outline the modifications that were made to V-REP.
The 6 remote APIs offered suffer from 2 sources of communication delay. One of these comes from the socket communication between the remote API and the simulation environment (though this can be decreased considerably using shared memory). The second, and most notable, is the inter-thread communication between the main thread and the various communication threads. This communication latency can become noticeable when the environment needs to be queried synchronously at each timestep (which is often the case in RL). To remove these latencies, we have modified the open-source version of V-REP such that Python now has direct control of the simulation loop, meaning that commands sent from Python are executed directly on the same thread. With these modifications we were able to collect robot trajectories/episodes over 4 orders of magnitude faster than when using the original remote Python API, making PyRep an attractive platform for the evaluation of robot learning methods.
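A back-of-envelope sketch illustrates why this matters. The latency figures below are hypothetical, chosen only to show how a gap of roughly 4 orders of magnitude in per-call overhead compounds when the simulator is queried synchronously at every timestep:

```python
# Hypothetical latency figures, illustrative only: how per-step communication
# overhead dominates synchronous episode collection.

def episode_wall_time(steps: int, physics_time: float, call_overhead: float) -> float:
    """Wall-clock seconds to collect one episode when every timestep
    requires a round-trip between Python and the simulator."""
    return steps * (physics_time + call_overhead)

SOCKET_AND_THREAD = 5e-3  # assumed: socket + inter-thread handoff (~5 ms per call)
SAME_THREAD_CALL = 5e-7   # assumed: direct call on the simulation thread

# Per-call overhead alone differs by 4 orders of magnitude.
print(f"per-call overhead ratio: {SOCKET_AND_THREAD / SAME_THREAD_CALL:.0f}x")

# For a 200-step episode with 0.1 ms of physics per step, that overhead is
# the difference between ~1 s and ~20 ms of wall-clock time per episode.
remote = episode_wall_time(200, 1e-4, SOCKET_AND_THREAD)
direct = episode_wall_time(200, 1e-4, SAME_THREAD_CALL)
print(f"remote: {remote:.2f} s, direct: {direct:.3f} s")
```

Note that once the communication overhead is removed, wall-clock time is bounded by the physics computation itself, so the realised end-to-end speedup depends on how heavy each simulation step is.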
V-REP ships with 2 main renderers: a default OpenGL 2.0 renderer, and the POV-Ray ray-tracing renderer. POV-Ray produces high-quality images but at a very low framerate. The OpenGL 2.0 renderer, on the other hand, uses basic shadow-free rendering via the old-style fixed-function OpenGL pipeline. As part of this report, we release a new OpenGL 3.0+ renderer which supports shadow rendering from all V-REP-supported light types, including directional, spot, and point lights. Example renderings can be seen in Figure 1.
The new API manages simulation handles and provides an object-oriented way of interfacing with the simulation environment. Moreover, we have made it easy to add new robots with motion planning capabilities with only a few lines of Python code. An example of the API in use can be seen in Figure 2.
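To illustrate the pattern of handle management described above, the stub below (not the actual PyRep source; the simulator backend is faked with a dictionary) shows how raw integer simulation handles can be hidden behind an object-oriented Python layer:

```python
# Minimal stub (not the actual PyRep source) of object-oriented handle
# management: raw integer handles stay internal; users work with objects.

# Fake backend standing in for the simulator's C API: handle -> state.
_sim_state: dict = {}
_next_handle = 0

def _sim_create_object(name: str) -> int:
    """Stand-in for the simulator call that allocates a raw handle."""
    global _next_handle
    handle = _next_handle
    _next_handle += 1
    _sim_state[handle] = {'name': name, 'position': [0.0, 0.0, 0.0]}
    return handle

class SceneObject:
    """Wraps a raw simulation handle behind an object-oriented interface."""
    def __init__(self, name: str):
        self._handle = _sim_create_object(name)

    def set_position(self, position):
        # In PyRep this would be a direct, same-thread call into V-REP.
        _sim_state[self._handle]['position'] = list(position)

    def get_position(self):
        return list(_sim_state[self._handle]['position'])

cube = SceneObject('cube')
cube.set_position([0.1, 0.2, 0.5])
print(cube.get_position())  # [0.1, 0.2, 0.5]
```

The same pattern extends naturally to robots: an arm class can own the handles of its joints and expose higher-level methods (e.g. for kinematics or motion planning) in a few lines of Python.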
V-REP has been used extensively over the years in more traditional robotics research and development, but has been overlooked by the growing robot learning community. The new PyRep toolkit brings the power of V-REP to this community by providing a simple and flexible API, a significant run-time speedup, and the integration of an OpenGL 3.0+ renderer. We are eager to see the tasks that can be solved by new and exciting robot learning methods.
-  Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, p. 436, 2015.
-  A. Zeng, S. Song, K.-T. Yu, E. Donlon, F. R. Hogan, M. Bauza, D. Ma, O. Taylor, M. Liu, E. Romo, et al., “Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching,” International Conference on Robotics and Automation, 2018.
-  D. Morrison, A. W. Tow, M. McTaggart, R. Smith, N. Kelly-Boxall, S. Wade-McCue, J. Erskine, R. Grinover, A. Gurman, T. Hunn, et al., “Cartman: The low-cost cartesian manipulator that won the amazon robotics challenge,” International Conference on Robotics and Automation, 2018.
-  K. Bousmalis, A. Irpan, P. Wohlhart, Y. Bai, M. Kelcey, M. Kalakrishnan, L. Downs, J. Ibarz, P. Pastor, K. Konolige, et al., “Using simulation and domain adaptation to improve efficiency of deep robotic grasping,” International Conference on Robotics and Automation, 2018.
-  S. James, P. Wohlhart, M. Kalakrishnan, D. Kalashnikov, A. Irpan, J. Ibarz, S. Levine, R. Hadsell, and K. Bousmalis, “Sim-to-real via sim-to-sim: Data-efficient robotic grasping via randomized-to-canonical adaptation networks,” Conference on Computer Vision and Pattern Recognition, 2019.
-  J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel, “Domain randomization for transferring deep neural networks from simulation to the real world,” International Conference on Intelligent Robots and Systems, 2017.
-  S. James, A. J. Davison, and E. Johns, “Transferring end-to-end visuomotor control from simulation to real world for a multi-stage task,” Conference on Robot Learning, 2017.
-  J. Matas, S. James, and A. J. Davison, “Sim-to-real reinforcement learning for deformable object manipulation,” Conference on Robot Learning, 2018.
-  E. Coumans, “Bullet physics simulation,” in ACM SIGGRAPH 2015 Courses, SIGGRAPH ’15, (New York, NY, USA), ACM, 2015.
-  E. Todorov, T. Erez, and Y. Tassa, “Mujoco: A physics engine for model-based control,” International Conference on Intelligent Robots and Systems, 2012.
-  E. Rohmer, S. P. Singh, and M. Freese, “V-rep: A versatile and scalable robot simulation framework,” International Conference on Intelligent Robots and Systems, 2013.