LGSVL Simulator: A High Fidelity Simulator for Autonomous Driving

05/07/2020 ∙ by Guodong Rong, et al. ∙ LG Electronics Inc

Testing autonomous driving algorithms on real autonomous vehicles is extremely costly and many researchers and developers in the field cannot afford a real car and the corresponding sensors. Although several free and open-source autonomous driving stacks, such as Autoware and Apollo are available, choices of open-source simulators to use with them are limited. In this paper, we introduce the LGSVL Simulator which is a high fidelity simulator for autonomous driving. The simulator engine provides end-to-end, full-stack simulation which is ready to be hooked up to Autoware and Apollo. In addition, simulator tools are provided with the core simulation engine which allow users to easily customize sensors, create new types of controllable objects, replace some modules in the core simulator, and create digital twins of particular environments.




I Introduction

Autonomous vehicles have seen dramatic progress in the past several years. Research shows that autonomous vehicles have to be driven billions of miles to demonstrate their reliability [17], which is impossible without the help of simulation. From the very beginning of autonomous driving research [24], simulators have played a key role in the development and testing of autonomous driving (AD) stacks. Simulation allows developers to quickly test new algorithms without driving real vehicles. Compared to road testing, simulation has several important advantages: it is safer, particularly for dangerous scenarios (e.g. pedestrian jaywalking), and it can generate corner cases that are rarely encountered in the real world (e.g. extreme weather). Moreover, a simulator can exactly reproduce all factors of a problematic scenario, allowing developers to debug and test new patches.

More and more modules in today’s autonomous driving stacks utilize deep neural networks (DNNs) to improve performance. Training DNN models requires a large amount of labeled data. Traditional datasets for autonomous driving, such as KITTI [14] and Cityscapes [9], do not have enough data for DNNs to deal with complicated scenarios. Although several large datasets have recently been published by academia [35] and autonomous driving companies [4, 19, 30], these datasets, collected from real-world drives, are usually labeled manually (often with help from automated tools), which is slow, costly, and error-prone. For some ground truth types, such as pixel-wise segmentation or optical flow, it is extremely difficult or impossible to label the data manually. Simulators can easily generate accurately labeled datasets that are an order of magnitude larger, in parallel, with the help of cloud platforms.

Fig. 1: Rendering examples by LGSVL Simulator

In this paper, we introduce the LGSVL Simulator (https://www.lgsvlsimulator.com/). “LGSVL” stands for “LG Silicon Valley Lab”, which has since been renamed to LG Electronics America R&D Lab. The core simulation engine is developed using the Unity game engine [31] and is open source, with the source code freely available on GitHub (https://github.com/lgsvl/simulator). The simulator has a communication bridge that enables passing messages between the simulator and an AD stack. By default the bridge supports ROS, ROS2, and Cyber RT messages, making it ready to be used with Autoware (ROS-based) and Baidu Apollo (ROS-based for version 3.0 and earlier, Cyber RT-based for 3.5 and later), the two most popular open source AD stacks. Map tools are provided to import and export HD maps for autonomous driving in formats such as Lanelet2 [23], OpenDRIVE, and Apollo HD Map. Fig. 1 illustrates some rendering examples from LGSVL Simulator.

The rest of this paper is organized as follows: Section II reviews prior related work. A detailed overview of the LGSVL Simulator is provided in Section III. Some example applications of the simulator are listed in Section IV, and Section V concludes the paper with directions for future work.

II Related Work

Simulation has been widely used in the automotive industry, especially for vehicle dynamics. Some famous examples are: CarMaker [5], CarSim [6], and ADAMS [1]. Autonomous driving requires more than just vehicle dynamics, and factors such as complex environment settings, different sensor arrangements and configurations, and simulating traffic for vehicles and pedestrians, must also be considered. Some of the earlier simulators [34, 7] run autonomous vehicles in virtual environments, but lack important features such as support for different sensors and simulating pedestrians.

Gazebo [18] is one of the most popular simulation platforms used in robotics and related research areas. Its modular design allows different sensor models and physics engines to be plugged into the simulator. But it is difficult to create large and complex environments with Gazebo and it lacks the newest advancements in rendering available in modern game engines like Unreal [12] and Unity.

There are several other popular open source simulators for autonomous driving, such as AirSim [28], CARLA [10], and Deepdrive [32]. These simulators were typically created as research platforms to support reinforcement learning or synthetic data generation for machine learning, and may require significant additional effort to integrate with a user’s AD stack and communication bridge.

There are also several commercial automotive simulators including ANSYS [2], dSPACE [11], PreScan [29], rFpro [25], Cognata [8], Metamoto [20] and NVIDIA’s Drive Constellation [22]. However, because these simulators are not open source they can be difficult for users to customize to satisfy their own specific requirements or research goals.

Modern commercial video games related to driving offer realistic environments. Researchers have used games such as Grand Theft Auto V to generate synthetic datasets [27, 26, 16]. However, this usually requires some hacking to access resources in the game and can violate user license agreements. In addition, it is difficult if not impossible to support sensors other than a camera, or to deterministically control the vehicle and non-player characters such as pedestrians and traffic.

III Overview of LGSVL Simulator

The autonomous driving simulation workflow enabled by LGSVL Simulator is illustrated in Fig. 2. Details of each component are explained in the remainder of this section.

Fig. 2: Workflow of LGSVL Simulator

III-A User AD Stack

The user AD stack is the system that the user wants to develop, test, and verify through simulation. LGSVL Simulator currently provides reference out-of-the-box integration with the open source AD system platforms Apollo (http://apollo.auto/), developed by Baidu, and Autoware.AI (https://www.autoware.ai/) and Autoware.Auto (https://www.autoware.auto/), developed by the Autoware Foundation.

The user AD stack connects to LGSVL Simulator through a communication bridge interface; a bridge is selected based on the user AD stack’s runtime framework. For Baidu’s Apollo platform, which uses a custom runtime framework called Cyber RT, a custom bridge is provided to the simulator. Autoware.AI and Autoware.Auto, which run on ROS and ROS2, can connect to LGSVL Simulator through standard open source ROS and ROS2 bridges. Fig. 3 shows Autoware and Apollo running with LGSVL Simulator.

Fig. 3: Autoware (top) and Apollo (bottom) running with LGSVL Simulator
Fig. 4: High-level architecture of autonomous driving system and the roles of the simulation engine

If the user’s AD stack uses a custom runtime framework, a custom communication bridge interface can be easily added as a plug-in. Furthermore, LGSVL Simulator supports multiple AD systems connected simultaneously. Each AD system can communicate with the simulator through a dedicated bridge, enabling interaction between different autonomous systems in a unified simulation environment.
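At its core, the bridge pattern described above is topic-based publish/subscribe routing between the simulator and one or more AD stacks. The following is a minimal sketch of that pattern only; the class and method names are illustrative and not the actual LGSVL bridge implementation or API:

```python
from collections import defaultdict

class Bridge:
    """Routes messages from simulator-side publishers to AD-stack subscribers."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver to every subscriber of the topic; unknown topics are no-ops.
        for callback in self._subscribers[topic]:
            callback(message)

# Example: the simulator publishes GPS telemetry; an AD stack consumes it.
bridge = Bridge()
received = []
bridge.subscribe("/gps/fix", received.append)
bridge.publish("/gps/fix", {"lat": 37.4, "lon": -122.0})
bridge.publish("/unused/topic", "dropped")  # no subscribers, safely ignored
```

Running multiple AD systems simultaneously then amounts to each system subscribing through its own dedicated bridge instance.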

III-B Simulation Engine

LGSVL Simulator utilizes Unity’s game engine for simulation and takes advantage of the latest technologies in Unity, such as High Definition Render Pipeline (HDRP), in order to simulate photo-realistic virtual environments that match the real world.

Functions of the simulation engine can be broken down into: environment simulation, sensor simulation, and vehicle dynamics and control simulation of an ego vehicle. Fig. 4 shows the relationship between the simulation engine and the AD stack.

Environment simulation includes traffic simulation as well as physical environment simulation like weather and time-of-day. These aspects are important components for test scenario simulation. All aspects of environment simulation can be controlled via the Python API.

The simulation engine of LGSVL Simulator is developed as an open source project. The source code is available publicly on GitHub, and the executable can be downloaded and used for free.

III-C Sensor and Vehicle Models

The ego vehicle sensor arrangement in the LGSVL Simulator is fully customizable. The simulator’s web user interface accepts sensor configurations as JSON formatted text allowing easy setup of sensors’ intrinsic and extrinsic parameters. Each sensor entry describes the sensor type, placement, publishing rate, topic name, and reference frame of the measurements. Some sensors may also have additional fields to further define specifications; for example, each LiDAR sensor’s beam count is also configurable.
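A hypothetical configuration in the spirit of this JSON format might look as follows. The field names and values here are illustrative and may not match the exact LGSVL schema; the sketch only shows how such a configuration can be parsed and minimally validated:

```python
import json

# Illustrative sensor configuration: two sensors, each with a type, name,
# per-sensor parameters (including topic and rate), and a mounting transform.
config_text = """
[
  {
    "type": "Lidar",
    "name": "TopLidar",
    "params": {"LaserCount": 32, "Frequency": 10, "Topic": "/point_cloud"},
    "transform": {"x": 0.0, "y": 2.3, "z": -0.4, "pitch": 0, "yaw": 0, "roll": 0}
  },
  {
    "type": "Camera",
    "name": "FrontCamera",
    "params": {"Width": 1920, "Height": 1080, "Frequency": 15, "Topic": "/image"},
    "transform": {"x": 0.0, "y": 1.7, "z": 0.2, "pitch": 0, "yaw": 0, "roll": 0}
  }
]
"""

sensors = json.loads(config_text)
# Basic validation: every sensor needs a type, a name, and a mounting transform.
for sensor in sensors:
    assert {"type", "name", "transform"} <= sensor.keys()

lidar = next(s for s in sensors if s["type"] == "Lidar")
assert lidar["params"]["LaserCount"] == 32  # the configurable beam count
```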

The simulator has a default set of sensors to choose from, currently including camera, LiDAR, Radar, GPS, and IMU, as well as different virtual ground truth sensors. Users can also build their own custom sensors and add them to the simulator as sensor plugins. Fig. 5 illustrates some of the sensors in LGSVL Simulator: the left column shows physical sensors, including a fish-eye camera, a LiDAR, and a Radar; the right column shows virtual ground truth sensors, including a segmentation sensor, a depth sensor, and a 3D bounding box sensor.

For the segmentation sensor, we combine semantic segmentation and instance segmentation. Users can configure which semantic classes receive instance segmentation: each instance of an object in those classes gets a distinct segmentation color, while objects of all other classes share one segmentation color per class. For example, if the user configures only “car” and “pedestrian” for instance segmentation, all buildings share one segmentation color and all roads share another. Each car and each pedestrian receives its own color, but colors within a class remain similar (e.g. all cars bluish and all pedestrians reddish).
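The coloring scheme above can be sketched as follows. This is an illustration of the idea, not the simulator's actual implementation or palette: instance-segmented classes draw distinct colors from a per-class hue family, and all other classes map to a single fixed color:

```python
import colorsys

# Hue family per semantic class configured for instance segmentation
# (illustrative values): cars bluish, pedestrians reddish.
BASE_HUES = {"car": 0.60, "pedestrian": 0.00}
# Classes without instance segmentation share one fixed color per class.
FLAT_COLORS = {"building": (70, 70, 70), "road": (128, 64, 128)}

def instance_color(semantic, instance_id, jitter=0.05):
    """Return an RGB color; instances of one semantic stay in its hue family."""
    if semantic in BASE_HUES:
        # Deterministic per-instance hue offset within +/- jitter of the base.
        offset = ((instance_id * 37) % 100) / 100.0 * 2 * jitter - jitter
        hue = (BASE_HUES[semantic] + offset) % 1.0
        r, g, b = colorsys.hsv_to_rgb(hue, 0.9, 0.9)
        return (int(r * 255), int(g * 255), int(b * 255))
    return FLAT_COLORS[semantic]

# Two cars get distinct but similarly bluish colors; all roads share one color.
assert instance_color("car", 1) != instance_color("car", 2)
assert instance_color("road", 1) == instance_color("road", 2)
```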

Fig. 5: Different types of sensors. Left (top to bottom): Fish-eye camera, LiDAR, Radar; Right (top to bottom): Segmentation, Depth, 3D Bounding Box.

In addition to the default reference sensors, real world sensor models used in autonomous vehicle systems are also supported in LGSVL Simulator. These sensor plugins have parameters that match their real world counterparts, e.g. Velodyne VLP-16 LiDAR, and behave the same as a real sensor generating realistic data in the same format. Furthermore, users can create their own sensor plugins to implement new variations and even new types of sensors not supported by default in LGSVL Simulator.

LGSVL Simulator provides a basic vehicle dynamics model for the ego vehicle. Additionally, the vehicle dynamics system is set up to allow integration of external third party dynamics models through a Functional Mockup Interface (FMI) [21], shared libraries that can be loaded into the simulator, or separate IPC (Inter-Process Communication) interfaces for co-simulation. As a result, users can couple LGSVL Simulator together with third party vehicle dynamics simulation tools to take advantage of both systems.
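To make the co-simulation idea concrete, the following is a minimal stand-in for an external dynamics model of the kind that could sit behind an FMI or IPC interface: a kinematic bicycle model. This is a sketch under simplifying assumptions (no tire forces, no drivetrain), not the simulator's actual vehicle dynamics:

```python
import math
from dataclasses import dataclass

@dataclass
class VehicleState:
    x: float = 0.0    # position [m]
    y: float = 0.0
    yaw: float = 0.0  # heading [rad]
    v: float = 0.0    # speed [m/s]

def step(state, throttle_accel, steering_angle, dt, wheelbase=2.7):
    """Advance a kinematic bicycle model by one time step of dt seconds."""
    state.v += throttle_accel * dt
    state.yaw += state.v / wheelbase * math.tan(steering_angle) * dt
    state.x += state.v * math.cos(state.yaw) * dt
    state.y += state.v * math.sin(state.yaw) * dt
    return state

# Accelerate straight ahead for one second at 2 m/s^2, in 10 ms steps.
s = VehicleState()
for _ in range(100):
    step(s, throttle_accel=2.0, steering_angle=0.0, dt=0.01)
```

In a co-simulation setup, a loop like this would run in the external tool, exchanging control inputs and the resulting state with the simulator each step.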

III-D 3D Environment and HD Maps

The virtual environment is an important component of autonomous driving simulation, providing many of the inputs to an AD system.

As the source of input to all sensors, the environment affects an AD system’s perception, prediction, and tracking modules. The environment also affects vehicle dynamics, the key factor for the vehicle control module, and it influences the localization and planning modules through changes to the HD map, which depends on the actual 3D environment. Finally, the 3D environment is the basis for environmental simulation, including weather, time of day, traffic agents, and other dynamic objects.

While synthetic 3D environments can be created and used in simulation, we can also replicate and simulate real world locations by creating a digital twin of a real scene from logged data (images, point clouds, etc.). Fig. 6 shows a digital twin simulation environment we created for Borregas Avenue in Sunnyvale, California. In addition, we have collaborated with AAA Northern California, Nevada & Utah to make a digital twin of a portion of GoMentum Station [15]. GoMentum is an AV test facility located in Concord, CA, featuring 19 miles of roadways, 48 intersections, and 8 distinct testing zones over 2,100 acres. Using the GoMentum digital twin environment, we tested scenarios both in simulation and with a real test vehicle at the test facility.

Fig. 6: Digital twin of Borregas Avenue.

LGSVL Simulator supports creating, editing, and exporting HD maps of existing 3D environments. This feature allows users to create and edit custom HD map annotations in a 3D environment. While the 3D environment provides a realistic simulation of the roads, buildings, dynamic agents, and environmental conditions that can be perceived and reacted to, the map annotations are used by the other agents in a scenario (non-ego vehicles, pedestrians, controllable plugin objects). This means that vehicle agents in simulation can follow traffic rules, such as traffic lights, stop signs, lanes, and turns, and pedestrian agents can follow annotated routes. As shown in Fig. 7, LGSVL Simulator HD map annotations carry rich information such as traffic lanes, lane boundary lines, traffic signals, traffic signs, and pedestrian walking routes. On the right side of the figure, a user can make different annotations by choosing the corresponding options under Create Modes.

The HD map annotations can be exported into one of several formats: Apollo 5.0 HD Map, Autoware Vector Map, Lanelet2, and OpenDRIVE 1.4, so users can use the map files with their own autonomous driving stacks. Conversely, if users have a real-world HD map in a supported format, they can import it into a 3D environment in LGSVL Simulator, obtaining the map annotations that agents such as vehicles and pedestrians need to operate. Currently, the supported import formats are Apollo 5.0, Lanelet2, and OpenDRIVE 1.4. With the ability to both import and export map annotations, a user can import HD maps sourced elsewhere, edit the annotations, then export them again to make sure that the HD maps used in LGSVL Simulator are consistent with those used by the user’s autonomous driving system.

Fig. 7: HD Map example and annotation tool in LGSVL Simulator.

III-E Test Scenarios

Test scenarios consist of simulating an environment and situation in which an autonomous driving stack can be placed to verify correct and expected behavior. Many variables are involved, such as time of day, weather, and road conditions, as well as the distribution and movement of dynamic agents, e.g. cars and pedestrians.

LGSVL Simulator provides a Python API that enables users to control and interact with simulated environments. Users can write scripts to create scenarios for their needs, spawning and controlling NPC vehicles and pedestrians and setting environment parameters. With deterministic physics, scripting allows for repeatable testing in simulation. Improvements are continuously made to the platform to support better smart agents and traffic modeling, to recreate scenarios that are as close to reality as possible.
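The repeatability point can be illustrated with deterministic scenario-parameter sampling: a fixed seed reproduces the identical scenario on every run. The parameter names below are illustrative, not the actual LGSVL Python API, which exposes much richer controls for NPCs, pedestrians, and environment state:

```python
import random
from dataclasses import dataclass

@dataclass
class Scenario:
    time_of_day: float         # hours, 0-24
    rain: float                # 0 (dry) to 1 (downpour)
    npc_count: int             # number of NPC vehicles to spawn
    jaywalker_distance: float  # metres ahead of the ego vehicle

def sample_scenario(seed):
    rng = random.Random(seed)  # fixed seed -> identical scenario every run
    return Scenario(
        time_of_day=rng.uniform(0.0, 24.0),
        rain=rng.uniform(0.0, 1.0),
        npc_count=rng.randint(0, 20),
        jaywalker_distance=rng.uniform(5.0, 40.0),
    )

# The same seed always reproduces the same test scenario.
assert sample_scenario(42) == sample_scenario(42)
assert sample_scenario(1) != sample_scenario(2)
```

In an actual test script, the sampled values would be passed to the simulator's API calls that spawn agents and set the environment state.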

We also collaborated with UC Berkeley using SCENIC [13] to generate and test thousands of different scenario test cases by randomizing various parameters. Results from testing those generated scenarios in simulation (using the GoMentum digital twin) then informed which scenarios and parameters would be most useful to test in the real world test facility.

IV Applications

LGSVL Simulator enables various simulation applications for autonomous driving and beyond. Some examples are listed in this section. Since the ecosystem of LGSVL Simulator is an open environment, we believe users will extend this spectrum into additional domains.

IV-A SIL and HIL Testing

The LGSVL Simulator supports both software in the loop (SIL) and hardware in the loop (HIL) testing of AD stacks.

For SIL testing, LGSVL Simulator generates data for different perception sensors, e.g. images for camera sensors and point cloud data for LiDAR sensors, as well as GPS and IMU telemetry data which are used by the perception and localization modules of an AD stack. This enables end-to-end testing of the users’ AD stack. Furthermore, LGSVL Simulator also generates input for other AD stack modules to enable single module (unit) tests. For example, 3D bounding boxes can be generated to simulate output from a perception module as input for a planning module, so users can bypass the perception module (i.e. assuming perfect perception) to test just the planning module.
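The perception bypass can be sketched as a simple conversion from a ground-truth box to a planner input message. The field names below are illustrative, not the actual Apollo or Autoware obstacle message schema:

```python
from dataclasses import dataclass

@dataclass
class GroundTruthBox:
    """A ground-truth 3D bounding box from the simulator (illustrative fields)."""
    obj_id: int
    x: float        # centre position in the ego frame [m]
    y: float
    z: float
    length: float   # 3D extent [m]
    width: float
    height: float
    heading: float  # [rad]

def to_planner_obstacle(box):
    """Convert a ground-truth box into a planner obstacle message (dict)."""
    return {
        "id": box.obj_id,
        "position": (box.x, box.y, box.z),
        "size": (box.length, box.width, box.height),
        "heading": box.heading,
        "confidence": 1.0,  # ground truth: perception is assumed perfect
    }

obstacle = to_planner_obstacle(GroundTruthBox(7, 12.0, -1.5, 0.8, 4.6, 1.9, 1.5, 0.0))
```

Feeding such messages directly to the planning module isolates it from perception errors, so planner bugs can be diagnosed on their own.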

LGSVL Simulator supports a set of chassis commands, so that a machine running LGSVL Simulator can communicate with another machine running an AD stack, which can then control the simulated ego vehicle using these chassis commands. This enables HIL testing in which the AD stack cannot distinguish simulated inputs from those of a real car, and sends control commands to LGSVL Simulator in the same way it would to the real car.

IV-B Machine Learning and Synthetic Data Generation

The LGSVL Simulator provides an easy-to-use Python API that enables collecting and storing camera images and LiDAR data with various ground truth information: occlusion, truncation, 2D bounding boxes, 3D bounding boxes, semantic and instance segmentation, etc. Users can write Python scripts to configure sensor intrinsic and extrinsic parameters and generate labeled data in their own format for perception training. An example Python script that generates data in the KITTI format is provided in the documentation (https://www.lgsvlsimulator.com/docs/api-example-descriptions/#collecting-data-in-kitti-format).
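As a sketch of what such a labeling script produces (this is a stand-alone illustration of the public KITTI label layout, not the provided example script), each object becomes one 15-field text line:

```python
# Serialize one object label in the KITTI format: type, truncation, occlusion,
# alpha, 2D bounding box (pixels), 3D dimensions, location, and rotation_y.
def kitti_label_line(obj_type, truncated, occluded, alpha,
                     bbox, dimensions, location, rotation_y):
    left, top, right, bottom = bbox      # 2D box in image pixels
    height, width, length = dimensions   # 3D extent in metres
    x, y, z = location                   # camera coordinates, metres
    fields = [obj_type, f"{truncated:.2f}", str(occluded), f"{alpha:.2f}",
              f"{left:.2f}", f"{top:.2f}", f"{right:.2f}", f"{bottom:.2f}",
              f"{height:.2f}", f"{width:.2f}", f"{length:.2f}",
              f"{x:.2f}", f"{y:.2f}", f"{z:.2f}", f"{rotation_y:.2f}"]
    return " ".join(fields)

line = kitti_label_line("Car", 0.0, 0, -1.57,
                        bbox=(100.0, 150.0, 300.0, 250.0),
                        dimensions=(1.5, 1.8, 4.2),
                        location=(2.0, 1.6, 15.0),
                        rotation_y=-1.57)
assert len(line.split()) == 15
```

In a real collection script, the bounding boxes and poses would come from the simulator's ground truth sensors rather than hand-written values.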

Reinforcement learning is an active area of research for autonomous vehicles and robotics, often with the goal of training agents for planning and control. In reinforcement learning, an agent takes actions in an environment based on a policy, often implemented as a DNN, and receives a reward as feedback from the environment, which in turn is used to revise the policy. This process generally needs to be repeated through a large number of episodes before an optimal solution is achieved. The LGSVL Simulator provides out-of-the-box integration with OpenAI Gym [3] through the Python API (https://www.lgsvlsimulator.com/docs/openai-gym/), enabling LGSVL Simulator to be used as an environment for reinforcement learning with OpenAI Gym.
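The interaction loop described above can be sketched with a toy stand-in environment exposing the Gym-style reset/step interface. This is purely illustrative: the real Gym integration returns sensor observations and a driving reward from the simulator, not the 1-D task shown here:

```python
import random

class ToyDrivingEnv:
    """Toy environment: the agent must keep a 1-D 'lane offset' near zero."""

    def reset(self, seed=0):
        self.rng = random.Random(seed)
        self.offset = self.rng.uniform(-1.0, 1.0)  # initial lane offset
        return self.offset

    def step(self, action):           # action: a steering correction
        self.offset += action
        reward = -abs(self.offset)    # higher reward closer to lane centre
        done = abs(self.offset) > 2.0 # episode ends if the agent drifts away
        return self.offset, reward, done, {}

# A trivial proportional policy run for a short episode.
env = ToyDrivingEnv()
obs = env.reset(seed=0)
total_reward = 0.0
for _ in range(10):
    action = -0.5 * obs               # steer back toward the centre
    obs, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        break
```

In actual training, the proportional policy would be replaced by a DNN whose weights are updated from the accumulated rewards over many episodes.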

IV-C V2X Systems

In addition to sensing the world via equipped sensors, autonomous vehicles can also benefit from V2X (vehicle-to-everything) communication, such as getting information about other vehicles via V2V (vehicle-to-vehicle) and getting more environment information via V2I (vehicle-to-infrastructure). Testing V2X in the real world is even more difficult than testing a single autonomous vehicle, since it requires connected vehicles and infrastructure support. Researchers usually use simulators to test and verify V2X algorithms [33]. LGSVL Simulator supports the creation of real or virtual sensor plug-ins, which enables users to create special V2X sensors that get information from other vehicles (V2V), pedestrians (V2P), or surrounding infrastructure (V2I). Thus LGSVL Simulator can be used to test V2X systems as well as to generate synthetic data for training.

IV-D Smart City

Modern smart city systems utilize road-side sensors to monitor traffic flow, and the results can be used to control traffic lights, making traffic flow smoother. Such systems require metrics to evaluate traffic conditions. A typical example is “stop count”: the number of stops a car makes while driving through an intersection, where a stop is defined as the speed dropping below a given threshold for a certain amount of time. Ground truth for such metrics is difficult to collect manually. LGSVL Simulator is also suitable for this kind of application: using our sensor plug-in model, users can define a new type of sensor that counts stops, since exact speed and location information is available. Our controllable plug-in mechanism allows users to customize traffic lights and other special traffic signs, which can be controlled via the Python API.
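The stop-count metric just described can be sketched directly from a speed trace. The threshold and duration values below are illustrative, not values prescribed by the simulator:

```python
def stop_count(speeds, dt, speed_threshold=0.5, min_duration=1.0):
    """Count stops in a uniformly sampled speed trace (m/s, one sample per dt seconds).

    A stop is registered when speed stays below speed_threshold for at least
    min_duration; each continuous slow interval is counted once.
    """
    stops = 0
    below_time = 0.0
    counted = False
    for v in speeds:
        if v < speed_threshold:
            below_time += dt
            if below_time >= min_duration and not counted:
                stops += 1
                counted = True
        else:
            below_time = 0.0
            counted = False
    return stops

# A car decelerates, waits at a red light for 3 s, then drives on (0.1 s samples).
trace = [8.0] * 10 + [0.1] * 30 + [6.0] * 10
assert stop_count(trace, dt=0.1) == 1
```

A simulated stop-count sensor would run the same logic online, feeding it the ego vehicle's exact speed at each simulation step.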

V Conclusions

We introduced LGSVL Simulator, a Unity-based high fidelity simulator for autonomous driving and other related systems. It has been integrated with Autoware and Apollo AD stacks for end-to-end tests, and can be easily extended for other similar AD systems. Several application examples are provided to show the capabilities of the LGSVL Simulator.

The simulation engine is open source and the whole ecosystem is designed to be open, so that users can utilize LGSVL Simulator for different applications and add their own contributions to the ecosystem. The simulator will be continuously enhanced to address new requirements from the user community.


This work was done within the LG Electronics America R&D Lab. We thank all past and current colleagues who have contributed to this project. We also thank all external contributors on GitHub and all users who have provided feedback and suggestions.


  • [1] MSC Software (2020)(Website) External Links: Link Cited by: §II.
  • [2] Ansys(Website) External Links: Link Cited by: §II.
  • [3] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba (2016) OpenAI Gym. External Links: arXiv:1606.01540 Cited by: §IV-B.
  • [4] H. Caesar, V. Bankiti, A. H. Lang, S. Vora, V. E. Liong, Q. Xu, A. Krishnan, Y. Pan, G. Baldan, and O. Beijbom (2019) NuScenes: a multimodal dataset for autonomous driving. arXiv preprint arXiv:1903.11027. Cited by: §I.
  • [5] IPG (2020)(Website) External Links: Link Cited by: §II.
  • [6] Mechanical Simulation (2020)(Website) External Links: Link Cited by: §II.
  • [7] C. Chen, A. Seff, A. Kornhauser, and J. Xiao (2015-12) DeepDriving: learning affordance for direct perception in autonomous driving. In 2015 IEEE International Conference on Computer Vision (ICCV), pp. 2722–2730. External Links: Document, ISSN 2380-7504 Cited by: §II.
  • [8] Cognata(Website) External Links: Link Cited by: §II.
  • [9] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele (2016) The Cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Cited by: §I.
  • [10] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun (2017) CARLA: An open urban driving simulator. In Proceedings of the 1st Annual Conference on Robot Learning, pp. 1–16. Cited by: §II.
  • [11] dSPACE(Website) External Links: Link Cited by: §II.
  • [12] Epic Games Unreal Engine. External Links: Link Cited by: §II.
  • [13] D. J. Fremont, T. Dreossi, S. Ghosh, X. Yue, A. L. Sangiovanni-Vincentelli, and S. A. Seshia (2019) Scenic: a language for scenario specification and scene generation. In Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation, pp. 63–78. Cited by: §III-E.
  • [14] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013-09) Vision meets robotics: the KITTI dataset. International Journal of Robotics Research 32 (11), pp. 1231–1237. External Links: ISSN 0278-3649, Link, Document Cited by: §I.
  • [15] GoMentum Station (Website) External Links: Link Cited by: §III-D.
  • [16] M. Johnson-Roberson, C. Barto, R. Mehta, S. N. Sridhar, K. Rosaen, and R. Vasudevan (2017) Driving in the matrix: can virtual worlds replace human-generated annotations for real world tasks?. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 746–753. Cited by: §II.
  • [17] N. Kalra and S. M. Paddock (2016) Driving to safety: how many miles of driving would it take to demonstrate autonomous vehicle reliability?. Technical report Technical Report RR-1478-RC, Calif.: RAND Corporation. Cited by: §I.
  • [18] N. Koenig and A. Howard (2004-Sep.) Design and use paradigms for gazebo, an open-source multi-robot simulator. In 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No.04CH37566), Vol. 3, pp. 2149–2154 vol.3. External Links: Document Cited by: §II.
  • [19] Lyft (2019)(Website) External Links: Link Cited by: §I.
  • [20] Metamoto(Website) External Links: Link Cited by: §II.
  • [21] Modelica Association (2019)(Website) External Links: Link Cited by: §III-C.
  • [22] nVidia(Website) External Links: Link Cited by: §II.
  • [23] F. Poggenhans, J. Pauls, J. Janosovits, S. Orf, M. Naumann, F. Kuhnt, and M. Mayr (2018-11) Lanelet2: a high-definition map framework for the future of automated driving. In Proc. IEEE Intell. Trans. Syst. Conf., Hawaii, USA. External Links: Link Cited by: §I.
  • [24] D. Pomerleau (1989-01) ALVINN: an autonomous land vehicle in a neural network. In Proceedings of Advances in Neural Information Processing Systems 1, D.S. Touretzky (Ed.), pp. 305–313. Cited by: §I.
  • [25] rFpro(Website) External Links: Link Cited by: §II.
  • [26] S. R. Richter, Z. Hayder, and V. Koltun (2017) Playing for benchmarks. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pp. 2232–2241. External Links: Link, Document Cited by: §II.
  • [27] S. R. Richter, V. Vineet, S. Roth, and V. Koltun (2016) Playing for data: Ground truth from computer games. In European Conference on Computer Vision (ECCV), B. Leibe, J. Matas, N. Sebe, and M. Welling (Eds.), LNCS, Vol. 9906, pp. 102–118. Cited by: §II.
  • [28] S. Shah, D. Dey, C. Lovett, and A. Kapoor (2017) AirSim: high-fidelity visual and physical simulation for autonomous vehicles. In Field and Service Robotics, External Links: arXiv:1705.05065, Link Cited by: §II.
  • [29] Siemens(Website) External Links: Link Cited by: §II.
  • [30] P. Sun, H. Kretzschmar, X. Dotiwalla, A. Chouard, V. Patnaik, P. Tsui, J. Guo, Y. Zhou, Y. Chai, B. Caine, V. Vasudevan, W. Han, J. Ngiam, H. Zhao, A. Timofeev, S. Ettinger, M. Krivokon, A. Gao, A. Joshi, Y. Zhang, J. Shlens, Z. Chen, and D. Anguelov (2019) Scalability in perception for autonomous driving: waymo open dataset. arXiv preprint arXiv:1912.04838. Cited by: §I.
  • [31] Unity Technologies Unity. External Links: Link Cited by: §I.
  • [32] Voyage (2019)(Website) External Links: Link Cited by: §II.
  • [33] Z. Wang, G. Wu, K. Boriboonsomsin, M. J. Barth, K. Han, B. Kim, and P. Tiwari (2019-05) Cooperative ramp merging system: agent-based modeling and simulation using game engine. SAE International Journal of Connected and Automated Vehicles 2 (2). External Links: ISSN 2574-0741, Document Cited by: §IV-C.
  • [34] B. Wymann, E. Espié, C. Guionneau, C. Dimitrakakis, R. Coulom, and A. Sumner (2014) TORCS, The Open Racing Car Simulator. Note: http://www.torcs.org Cited by: §II.
  • [35] F. Yu, W. Xian, Y. Chen, F. Liu, M. Liao, V. Madhavan, and T. Darrell (2018) BDD100K: A diverse driving video database with scalable annotation tooling. CoRR abs/1805.04687. External Links: Link, 1805.04687 Cited by: §I.