Simulation Framework for Mobile Robots in Planetary-Like Environments

05/29/2020
by Riccardo Giubilato, et al.
DLR

In this paper we present a simulation framework for evaluating the navigation and localization performance of a robotic platform from a metrological standpoint. The simulator, based on ROS (Robot Operating System) and Gazebo, is targeted at a planetary-like research vehicle and allows testing various perception and navigation approaches under specific environment conditions. The possibility of simulating arbitrary sensor setups comprising cameras, LiDARs (Light Detection and Ranging) and IMUs makes Gazebo an excellent resource for rapid prototyping. In this work we evaluate a variety of open-source visual and LiDAR SLAM (Simultaneous Localization and Mapping) algorithms in a simulated Martian environment. Datasets are captured by driving the rover and recording sensor outputs as well as the ground truth for a precise performance evaluation.


I Introduction

Designing a mobile robot is a costly task, often carried out in an inevitable trial-and-error process. For this reason, simulation toolkits are precious assets for saving both time and expenses. Aside from mechanical or physical analysis software, which allows very specific design choices to be evaluated in detail, many solutions are available to assist the high-level design of the whole robot, such as Gazebo (gazebosim.org), V-REP (coppeliarobotics.com/coppeliaSim) or Microsoft AirSim (github.com/microsoft/AirSim) [26]. This family of software offers basic physics simulation capabilities, allowing the robot to interact with a simulated environment, and more importantly provides solutions to simulate the output of various types of perception sensors, such as cameras, range sensors, or Inertial Measurement Units (IMUs).

Among them, the Gazebo simulator offers a tight integration with the Robot Operating System (ROS), where the generated sensor outputs can be processed by Simultaneous Localization and Mapping (SLAM) algorithms for pose estimation and mapping [11, 5, 6, 12]. Motion planning algorithms can then output motor controls which move the robot in the virtual environment. This is beneficial not only for assisting the design process of the robot but also for thoroughly testing, under different operating conditions, all the algorithms involved. Furthermore, the tight integration with ROS allows the source code for all operations to be shared between the real robot and its virtual counterpart. This ensures that, when deployed in the field, the real robot will behave almost exactly as foreseen and tested in simulation. In addition, the evaluation of positioning algorithms in simulated environments is beneficial from the metrological perspective: the ground truth is exact, whereas during field testing it is inevitably affected by errors. Lastly, it is possible to evaluate the impact of sensor characteristics, such as FOV and resolution, on the reconstructed trajectory.

Fig. 1: (a) The MORPHEUS rover [12] and (b) its simulated counterpart in ROS Gazebo. (c) View of a synthetic environment modeled in Blender along with a rendered camera view.
Fig. 2: Simulated sensing modalities for the MORPHEUS rover. (a-d) Stereo camera output with left and right images (after intrinsic and extrinsic calibration) using the multicamera plugin, disparity map and generated point cloud. (e) Full 3D LiDAR scan generated by the gazebo_ros_laser_controller plugin, colormapped by height.

In this paper, we present a simulation framework dedicated to the validation of SLAM algorithms given the mobility capabilities of a rover and the Martian topography. The framework is based on ROS and Gazebo, and is targeted at the MORPHEUS rover [12], a research test-bed for autonomous space operations developed at the University of Padova (see Fig. 1a). A replica of the rover (Fig. 1b) is driven in a variety of simulated planetary environments, enriched with 3D models of rocks of various sizes to add structure. We evaluate both vision and LiDAR perception technologies. Vision has been extensively used for navigation on the NASA MER and MSL rovers, and will be used in the next rover missions: ESA ExoMars and NASA Mars 2020 [13]. Although to this day no LiDAR sensors have been used on planetary rovers, they have been employed for relative navigation in Earth-orbit space missions [7], opening the possibility of future implementation in planetary environments.

This paper is structured as follows: Section II introduces recent related works, Section III presents the simulation framework, Section IV introduces the tested localization algorithms, Section V reports an in-depth analysis of the results and Section VI contains some final remarks.

II Related Works

Algorithm       | Sensor                | Loop Closure | Implementation Notes
ORB-SLAM2 [22]  | Stereo / Mono / RGB-D | yes          | 2400 maximum ORB features
RTAB-MAP [20]   | Stereo / RGB-D        | yes          | ORB features for Loop Closure; Hypothesis Verification enabled; g2o optimizer [19]
LibVISO2 [10]   | Stereo / Mono         | no           | -
A-LOAM [31]     | 3D LiDAR              | no           | -
HDL-SLAM [18]   | 3D LiDAR              | yes          | scan registration with NDT [4]
LeGO-LOAM [27]  | 3D LiDAR              | yes          | -
TABLE I: Summary of the tested algorithms

A variety of research works in the literature make use of the Gazebo simulation environment. Many of them are related to indoor mapping and navigation [1, 29, 24]. In [17], a minimal simulated environment is used to test the operations of a planetary research platform, finally building a 3D representation of the observed environment in the form of an OctoMap [16].

Recently, Gazebo has been used to test and develop navigation strategies for Astrobee [8, 28], a flying robot for the International Space Station. The robot tracks its motion using Visual-Inertial sensing and uses a depth camera to build maps for path planning. All sensors are simulated in a virtual environment replicating the interiors of the ISS, allowing the full navigation pipeline to be tested under the proper operative conditions.

The authors of [2] used Gazebo to build a simulator for rover operations in a lunar environment. The Gazebo rendering engine has been modified to some extent in order to enable the loading of DTMs several kilometers wide while keeping the computational cost to a minimum. Photorealism is obtained through visual shaders replicating sun glare, improvements to the shadow generation, and custom bump maps that draw wheel marks on the ground.

To validate the vision-based navigation algorithm of the ExoMars rover, the authors of [21] developed a simulation capable of generating realistic Mars-like images. Their simulation was based on PANGU [23], the computer graphics utility of the University of Dundee.

III Simulated Rover and Test Environment

The MORPHEUS rover is a mobile platform targeted at unstructured terrains. Six wheels, individually powered by MAXON® motors, are mounted on three rockers passively connected to the rover body by revolute joints at their barycenter. Turning is performed by skid-steering, so that both spot turns and pivot turns are possible. The motor drivers are controlled by Arduino microcontrollers which receive inputs and communicate the motor status to an NVIDIA Jetson TX2 running Ubuntu 14.04, where all the local processing is done. The Jetson shares a Wi-Fi ROS network with a laptop intended as a base station, where the status of the robot can be monitored and user inputs can be forwarded. The rover is equipped with a Stereolabs ZED camera, which captures synchronized image pairs at variable framerates and resolutions. The stereo processing (distortion correction and stereo rectification) is performed on the embedded Tegra GPU. The rover also carries a plane-scanning LiDAR to perform obstacle avoidance.

III-A The Rover Model

A URDF model of the rover is exported from CAD drawings using the ROS add-on for SolidWorks, sw_urdf_exporter (wiki.ros.org/sw_urdf_exporter). As the complexity of the model imposes a significant computational load on the rendering and physics engines, we also provide a simplified version retaining complete functionality. The skid-steer locomotion is implemented using the diff_drive_controller (wiki.ros.org/diff_drive_controller).
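As a simple illustration, the following sketch drives the simulated rover through the diff_drive_controller interface by publishing velocity commands; the topic name and namespace are assumptions and must be remapped to match the actual controller configuration.

```python
#!/usr/bin/env python
# Minimal teleoperation sketch: publishes velocity commands to the simulated
# rover through diff_drive_controller. The topic name and namespace are
# assumptions and must match the controller configuration of the URDF model.
import rospy
from geometry_msgs.msg import Twist

def drive_pattern(pub, straight_duration=5.0, turn_duration=3.0):
    """Drive straight, then turn in place (skid-steer spot turn), repeatedly."""
    rate = rospy.Rate(10)  # 10 Hz command rate
    cmd = Twist()
    t0 = rospy.Time.now()
    while not rospy.is_shutdown():
        elapsed = (rospy.Time.now() - t0).to_sec() % (straight_duration + turn_duration)
        if elapsed < straight_duration:
            cmd.linear.x, cmd.angular.z = 0.3, 0.0   # forward at 0.3 m/s
        else:
            cmd.linear.x, cmd.angular.z = 0.0, 0.5   # spot turn at 0.5 rad/s
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    rospy.init_node("morpheus_teleop_sketch")
    # Hypothetical topic: remap to the actual cmd_vel of diff_drive_controller.
    pub = rospy.Publisher("/morpheus/diff_drive_controller/cmd_vel", Twist, queue_size=1)
    drive_pattern(pub)
```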

The stereo camera is implemented using the multicamera plugin, which allows simulating lens distortion and image noise. We combine this plugin with the recently released lens_flare_sensor [2] to simulate the lens flare effect on the image when the sun lies close to the line of sight. The LiDAR sensor is simulated by replicating a Velodyne VLP-16 with the gazebo_ros_velodyne_laser plugin (wiki.ros.org/velodyne_gazebo_plugins).
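The sketch below shows how a ROS node might consume the simulated sensor streams; the topic names are assumptions that depend on how the multicamera and Velodyne plugins are configured in the robot description.

```python
#!/usr/bin/env python
# Sketch of a consumer node for the simulated sensors. Topic names are
# assumptions: they depend on how the multicamera and velodyne plugins are
# configured in the robot description.
import rospy
from sensor_msgs.msg import Image, PointCloud2
import sensor_msgs.point_cloud2 as pc2

def on_left_image(msg):
    rospy.loginfo("left image %dx%d, encoding=%s", msg.width, msg.height, msg.encoding)

def on_cloud(msg):
    # Count valid returns in the simulated VLP-16 scan.
    n = sum(1 for _ in pc2.read_points(msg, field_names=("x", "y", "z"), skip_nans=True))
    rospy.loginfo("LiDAR scan with %d points", n)

if __name__ == "__main__":
    rospy.init_node("sensor_check_sketch")
    rospy.Subscriber("/morpheus/stereo/left/image_raw", Image, on_left_image)  # assumed topic
    rospy.Subscriber("/velodyne_points", PointCloud2, on_cloud)                # assumed topic
    rospy.spin()
```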

III-B The Environment

Fig. 3: Schematic workflow to generate a virtual environment in Gazebo from a Digital Terrain Model using the open source 3D modeler Blender. The inputs (source Digital Terrain Model, terrain texture, normal map) are processed in Blender (create a plane and divide it into a grid, apply the displacement map, generate random rock models, scatter the rocks using hair particles, generate the path) and exported to Gazebo (.stl and .obj meshes for the world model, .sdf files for the actor objects).

The virtual environment is modeled after a Digital Terrain Model (DTM) of the Gale crater on Mars, cropped to a planar landscape. A schematic overview of the map generation process is given in Fig. 3. The DTM is imported into Gazebo to create a featureless surface as the basis for the virtual environment. To populate the surface with rocks, we import the DTM into the 3D modeler Blender, applying it as a displacement map to a plane subdivided into a coarse grid. Two populations of rocks are then scattered over this surface using a manually weighted random distribution roughly matching the frequencies observed on the Martian surface [14]: a small population of large boulders and a large population of smaller rocks with diameters ranging from 0.1 to 0.5 meters. A scripted version of this procedure is sketched below.
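As a rough illustration of this step, the following Blender Python sketch (bpy, 2.8+ API) displaces a subdivided plane with a DTM-derived heightmap and scatters rock meshes with a hair particle system; file paths, object counts and parameter values are illustrative placeholders rather than the values actually used.

```python
# Minimal Blender (bpy) sketch of the terrain generation step: displace a
# subdivided plane with a DTM-derived heightmap and scatter rock meshes with a
# particle system. Paths, counts and strengths are illustrative only.
import bpy
import random

# 1) Create a subdivided plane and displace it with the DTM heightmap.
bpy.ops.mesh.primitive_plane_add(size=200.0)
terrain = bpy.context.active_object
subdiv = terrain.modifiers.new("subdiv", type='SUBSURF')
subdiv.subdivision_type = 'SIMPLE'
subdiv.levels = subdiv.render_levels = 7

heightmap = bpy.data.images.load("/path/to/gale_crater_dtm.png")  # hypothetical path
tex = bpy.data.textures.new("dtm", type='IMAGE')
tex.image = heightmap
disp = terrain.modifiers.new("displace", type='DISPLACE')
disp.texture = tex
disp.strength = 5.0  # vertical exaggeration, tuned by hand

# 2) Create a few random rock meshes (deformed icospheres) to instance.
rocks = []
for i in range(5):
    bpy.ops.mesh.primitive_ico_sphere_add(radius=random.uniform(0.1, 0.5))
    rock = bpy.context.active_object
    rock.scale = (random.uniform(0.7, 1.3), random.uniform(0.7, 1.3), random.uniform(0.5, 1.0))
    rocks.append(rock)

# 3) Scatter the rocks over the terrain with a hair particle system.
psys_mod = terrain.modifiers.new("rock_scatter", type='PARTICLE_SYSTEM')
settings = psys_mod.particle_system.settings
settings.type = 'HAIR'
settings.count = 2000                 # density of the small-rock population
settings.render_type = 'OBJECT'
settings.instance_object = rocks[0]   # one representative rock; repeat per rock class
settings.particle_size = 1.0
settings.size_random = 0.8            # diameter spread
```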

To precisely evaluate the performance of SLAM algorithms in this environment, we use Blender to generate two fixed paths along which the robot moves using the actor functionality of ROS Gazebo. To simulate the motion caused by the roughness of the terrain, we add noise to the camera orientations. The resulting sequences of poses are exported to SDF files to instruct the Gazebo actor objects, as sketched after this paragraph.
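A minimal sketch of this export step is given below: it samples waypoints along a closed loop, perturbs the orientations to mimic terrain-induced motion and writes a Gazebo actor trajectory in SDF. The path shape, noise levels and the trajectory type attribute are illustrative assumptions.

```python
# Sketch of the path export step: sample waypoints along a closed path, perturb
# the orientation to mimic terrain-induced motion, and write them as a Gazebo
# actor trajectory in SDF. Pose sampling and noise levels are illustrative.
import math
import random

def circular_path(radius=45.0, n=400, speed=0.5):
    """Closed path approximating a ~300 m loop, sampled at constant speed."""
    waypoints = []
    length = 2.0 * math.pi * radius
    for i in range(n + 1):
        s = i / float(n)
        x = radius * math.cos(2.0 * math.pi * s)
        y = radius * math.sin(2.0 * math.pi * s)
        yaw = 2.0 * math.pi * s + math.pi / 2.0   # tangent direction
        roll = random.gauss(0.0, 0.01)            # terrain-induced jitter [rad]
        pitch = random.gauss(0.0, 0.01)
        t = s * length / speed
        waypoints.append((t, x, y, 0.0, roll, pitch, yaw))
    return waypoints

def write_actor_sdf(waypoints, path="long_sequence_actor.sdf"):
    with open(path, "w") as f:
        f.write('<actor name="morpheus_actor">\n  <script>\n    <loop>false</loop>\n')
        # The trajectory "type" must match an animation defined for the actor (assumption).
        f.write('    <trajectory id="0" type="drive">\n')
        for t, x, y, z, roll, pitch, yaw in waypoints:
            f.write("      <waypoint>\n")
            f.write("        <time>%.2f</time>\n" % t)
            f.write("        <pose>%.3f %.3f %.3f %.4f %.4f %.4f</pose>\n"
                    % (x, y, z, roll, pitch, yaw))
            f.write("      </waypoint>\n")
        f.write("    </trajectory>\n  </script>\n</actor>\n")

if __name__ == "__main__":
    write_actor_sdf(circular_path())
```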

IV Localization Algorithms

In this paper we compare a variety of odometry and SLAM algorithms using either the virtual stereo camera or the 3D LiDAR to provide localization (i.e., to estimate the transformation between the robot and world reference frames), showing how our virtual environment can be used to guide the design choices for the perception system of a robot depending on the target environment. An overview is provided in Table I along with relevant implementation remarks about parameter values that differ from the default ones.
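For completeness, the following sketch shows one way to log the exact ground truth published by Gazebo on /gazebo/model_states together with a SLAM pose estimate for later evaluation; the model name and the estimated-pose topic are assumptions.

```python
#!/usr/bin/env python
# Sketch of a ground-truth logger: Gazebo publishes exact model poses on
# /gazebo/model_states, which can be recorded alongside the SLAM estimate for
# later evaluation. Model name and estimated-pose topic are assumptions.
import rospy
from gazebo_msgs.msg import ModelStates
from geometry_msgs.msg import PoseStamped

MODEL_NAME = "morpheus"   # hypothetical model name in the world file

def on_model_states(msg, log):
    if MODEL_NAME in msg.name:
        pose = msg.pose[msg.name.index(MODEL_NAME)]
        log.write("%.6f gt %.3f %.3f %.3f\n" % (rospy.Time.now().to_sec(),
                  pose.position.x, pose.position.y, pose.position.z))

def on_estimate(msg, log):
    p = msg.pose.position
    log.write("%.6f slam %.3f %.3f %.3f\n" % (msg.header.stamp.to_sec(), p.x, p.y, p.z))

if __name__ == "__main__":
    rospy.init_node("ground_truth_logger_sketch")
    log = open("trajectory_log.txt", "w")
    rospy.Subscriber("/gazebo/model_states", ModelStates, on_model_states, callback_args=log)
    rospy.Subscriber("/slam/pose", PoseStamped, on_estimate, callback_args=log)  # assumed topic
    rospy.spin()
```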

IV-A Visual SLAM

Among all available Visual SLAM algorithms for stereo cameras, we selected ORB-SLAM2 [22], RTAB-MAP [20] and LibVISO2 [10]. LibVISO2 is a widely used Visual Odometry algorithm without Loop Closure capabilities. RTAB-MAP is instead an RGB-D graph SLAM with a Bayesian Loop Closure detector that addresses multi-session mapping and is highly configurable through an easy Graphical User Interface (GUI) (e.g. types of feature detectors and descriptors, optimizers and respective parameters). ORB-SLAM2 is a Visual SLAM algorithm for monocular, stereo and RGB-D vision systems based on ORB features [25] which leverages a Bag of Words approach [9] for localization and Loop Closure detection.

IV-B LiDAR SLAM

In addition, we compare the performance of a variety of recently published LiDAR SLAM algorithms which are released open source and are compatible with the Robot Operating System. A-LOAM (github.com/HKUST-Aerial-Robotics/A-LOAM) is an implementation of LOAM [30] where odometry and mapping are decoupled to be performed at a faster and a slower rate respectively, and the poses are computed by matching edge and planar features across scans. LeGO-LOAM [27] improves the performance of LOAM by extracting and matching point clusters across LiDAR scans and by explicitly utilizing the ground to constrain the roll, pitch, and z coordinates during pose tracking. In addition to the original LOAM pipeline, a pose graph is maintained to include Loop Closures. Finally, we test hdl_graph_slam (github.com/koide3/hdl_graph_slam) [18] (referred to here as HDL-SLAM), an open source LiDAR SLAM package for the Robot Operating System which provides a modular graph SLAM for 3D LiDARs based on scan registration through ICP or NDT [3]. It provides interfaces for easy integration of IMU and GPS measurements and performs Loop Closure detection to correct a pose graph.

V Experiments and Discussion

         ORB    RTAB   VISO2  ALOAM  HDL    LeGO
ATE [m]  0.48   0.14   1.56   0.76   34.29  0.21
TDr [%]  6.31   0.43   0.34   0.81   60.3   0.47
TABLE II: Root Mean Square of the Absolute Trajectory Error and Median of the Translation Drift in the Long sequence

         ORB    RTAB   VISO2  ALOAM  HDL    LeGO
ATE [m]  0.07   0.04   0.39   7.65   2.49   0.63
TDr [%]  1.91   0.62   0.55   7.58   13.1   2.89
TABLE III: Root Mean Square of the Absolute Trajectory Error and Median of the Translation Drift in the Short sequence

We performed a variety of experiments on two sequences generated as explained in Sec. III-B. The first sequence, denominated Long, takes place in the environment visible in Fig. 1c, which comprises a denser distribution of small pebbles and a sparser distribution of larger rocks with dimensions comparable to those of the rover. A closed trajectory, about 300 meters long, allows evaluating the tracking performance of all algorithms in the presence of 90° turns as well as their Loop Closure capabilities, as the rover returns to the initial location with the same viewpoint. A second and shorter sequence, denominated here Short, is about 60 meters long and takes place around high boulders, generally bigger than the rover. An approximately triangular trajectory ends in proximity of its starting point, but with an opposite camera viewpoint: this prevents Loop Closure detection for the visual pipelines while, in principle, still allowing it for the LiDAR pipelines, which benefit from 360° range coverage.

The virtual rover is equipped with a stereo camera whose specifications make it equivalent to the Stereolabs ZED stereo camera mounted on our MORPHEUS rover (see Fig. 1a). The 3D LiDAR is modeled roughly after the Ouster OS-1 with 64 scan planes. The characteristics of both sensors are summarized in Table IV.

               Stereo camera         3D LiDAR
Resolution     1280x720 px           0.2° (H) x 0.4° (V)
FoV            90° (H) x 60° (V)     90° (H) x 30° (V)
Refresh Rate   30 Hz                 10 Hz
Baseline       0.12 m                -
TABLE IV: Camera and LiDAR characteristics

We test the performance of the algorithms introduced in Sec. IV in terms of how accurately they reconstruct the trajectories of the Long and Short sessions. We first align each estimated trajectory to the ground truth using Horn's method [15], given pose correspondences found by matching timestamps. In order not to underestimate the pose errors resulting from angular drift, only the first third of the trajectory is used for alignment. For each correspondence, we compute the Absolute Trajectory Error (ATE), i.e. the L2 distance between the aligned poses:

\mathrm{ATE}_i = \left\lVert \mathbf{p}^{GT}_i - \mathbf{p}^{SLAM}_i \right\rVert_2 \quad (1)

where \mathbf{p}^{GT}_i and \mathbf{p}^{SLAM}_i are the positions of corresponding poses from the ground truth and from the SLAM estimate, respectively, after alignment. We also compute the translation drift as the relative difference between the lengths of local segments of the estimated trajectory and of the ground truth. Let \mathbf{p}^{GT}_k and \mathbf{p}^{GT}_{k+n} be two ground truth poses such that the length of the trajectory that connects them is 10 meters, and let \mathbf{p}^{SLAM}_k and \mathbf{p}^{SLAM}_{k+n} be the poses estimated from SLAM that correspond to them via timestamps. We then define the local translation drift as:

\mathrm{TD}_k = \frac{\left| L\left(\mathbf{p}^{SLAM}_k, \mathbf{p}^{SLAM}_{k+n}\right) - L\left(\mathbf{p}^{GT}_k, \mathbf{p}^{GT}_{k+n}\right) \right|}{L\left(\mathbf{p}^{GT}_k, \mathbf{p}^{GT}_{k+n}\right)} \quad (2)

where L(\cdot,\cdot) denotes the length of the trajectory segment connecting the two poses.

Finally, we report a summary of the results for both sequences in Tables II and III by computing the Root Mean Square of the errors over all time points along the trajectories for each algorithm.
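As an illustration of this evaluation procedure, the sketch below aligns the estimated trajectory to the ground truth with a Horn/Umeyama-style closed-form fit on the first third of the poses, then computes the ATE of Eq. (1), the 10 m translation drift of Eq. (2) and their summary statistics; it is a re-implementation for clarity, not the code used for the reported results.

```python
# Sketch of the evaluation pipeline: align the estimated trajectory to the
# ground truth on its first third with a closed-form rigid fit, then compute
# ATE (Eq. 1), the 10 m translation drift (Eq. 2) and summary statistics.
# Inputs are timestamp-matched Nx3 position arrays.
import numpy as np

def rigid_alignment(est, gt):
    """Closed-form rigid alignment (rotation + translation) of est onto gt."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_g - R @ mu_e
    return R, t

def evaluate(est, gt, segment_len=10.0):
    n_align = len(gt) // 3                       # align on the first third only
    R, t = rigid_alignment(est[:n_align], gt[:n_align])
    est_aligned = est @ R.T + t

    ate = np.linalg.norm(est_aligned - gt, axis=1)           # Eq. (1)

    # Cumulative path length along the ground truth, used to cut 10 m segments.
    steps = np.linalg.norm(np.diff(gt, axis=0), axis=1)
    cumlen = np.concatenate(([0.0], np.cumsum(steps)))
    drift = []
    for k in range(len(gt)):
        idx = np.searchsorted(cumlen, cumlen[k] + segment_len)
        if idx >= len(gt):
            break
        l_gt = cumlen[idx] - cumlen[k]
        l_est = np.linalg.norm(np.diff(est_aligned[k:idx + 1], axis=0), axis=1).sum()
        drift.append(abs(l_est - l_gt) / l_gt)                # Eq. (2)

    return {"ATE_RMS": float(np.sqrt(np.mean(ate ** 2))),
            "TDr_median_pct": 100.0 * float(np.median(drift))}
```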

Fig. 4: Visualization of the maps built by a visual SLAM (ORB-SLAM2) and a LiDAR SLAM (LeGO-LOAM): (a) Visual SLAM map, (b) LiDAR SLAM map. These figures highlight the different appearance of a sparse visual map of 3D landmarks from detected image features and of a dense LiDAR map obtained by stacking 3D LiDAR scans given accurate estimates of the sensor poses.
Fig. 5: Performance and result visualization of the compared stereo and LiDAR SLAM systems in the Long and Short sequences: (c, d) trajectories and environment for the Long and Short sequences, (e, f) Absolute Trajectory Error, (g, h) Translation Drift. Trajectories are overlaid on top views of the environment, showing the amount and distribution of rocks. The ATE plots focus only on the algorithms that succeeded in estimating the trajectory.

A first means of comparison between the visual and LiDAR approaches is the quality and density of the map, which might be employed for the detection of geological features. Figure 4 presents the qualitative difference between a visual map, comprised of sparse 3D landmarks, and a LiDAR map, built by concatenating LiDAR scans. A quantitative evaluation of performance is instead presented in Figure 5, which reports the results of all algorithms on the Long and Short sequences. In addition, Tables II and III contain the RMS errors, highlighting the best scoring algorithms. Figures 5c and 5d show the resulting trajectories and ground truth overlaid on a top view of the environment to highlight the geometric context. In the Long sequence both ORB-SLAM2 and RTAB-MAP were able to successfully close the loop, therefore their ATE is close to zero at both the beginning and the end of the trajectory. However, ORB-SLAM2 accumulates some translation drift which manifests itself as higher ATEs in the middle of the trajectory (see Fig. 5e). Conversely, LibVISO2 exhibits the lowest translational drift but accumulates angular drift which cannot be recovered, as it is a pure visual odometry. RTAB-MAP instead shows consistent performance in both sequences, achieving the lowest ATE thanks to an accurate odometry and its Loop Closure capabilities. The LiDAR odometry A-LOAM outperforms the visual odometry LibVISO2 in the Long sequence, although it obtains the highest errors in the Short sequence. LeGO-LOAM even outperforms ORB-SLAM2 in the Long sequence. This result is surprising given the little geometric structure present in this sequence, but can be explained by the fact that both A-LOAM and LeGO-LOAM extract and match edge features from the LiDAR scans, which this environment offers in abundance and uniformly distributed, contrary to the Short sequence, which is characterized by bigger and sparsely distributed boulders that obstruct the view. HDL-SLAM instead relies on scan matching, which degenerates in the presence of planar scenes. This explains its extreme translation drift in the Long sequence.

VI Conclusions

In this work we presented a simulation framework for mobile robots based on ROS Gazebo. We demonstrated how it can be used to aid the selection of perception sensors based on the expected geometry and appearance of the environment. Furthermore, we compared the performance of a variety of open-source Visual and LiDAR SLAM algorithms in different environments characterized by different rock distributions and sizes. Although Visual SLAM proves to be accurate in the presence of textured ground, LiDAR SLAM has the advantage of building detailed maps in the form of point clouds. As future developments of this work, we plan to enhance the photo-realism of the simulation and to combine the advantages of both SLAM approaches by fusing 3D LiDARs with stereo cameras, using the simulated environment to validate the approach.

Acknowledgement

This work has been supported by Project “ARES”, Progetti Innovativi degli Studenti, University of Padova. We would also like to thank the Morpheus Team for the discussions and participation in the experiments.

References

  • [1] I. Afanasyev, A. Sagitov, and E. Magid (2015) ROS-based slam for a gazebo-simulated mobile robot in image-based 3d model of indoor environment. In Advanced Concepts for Intelligent Vision Systems, Cham, pp. 273–283. Cited by: §II.
  • [2] M. Allan, U. Wong, P. M. Furlong, A. Rogg, S. McMichael, T. Welsh, I. Chen, S. Peters, B. Gerkey, M. Quigley, et al. (2019) Planetary rover simulation for lunar exploration missions. In 2019 IEEE Aerospace Conference, pp. 1–19. Cited by: §II, §III-A.
  • [3] P. Biber and W. Strasser (2003-10) The normal distributions transform: a new approach to laser scan matching. In IROS, Vol. 3, pp. 2743–2748. External Links: Document, ISSN Cited by: §IV-B.
  • [4] P. Biber and W. Straßer (2003) The normal distributions transform: a new approach to laser scan matching. In IROS, Vol. 3, pp. 2743–2748. Cited by: TABLE I.
  • [5] S. Chiodini, R. Giubilato, M. Pertile, and S. Debei (2020) Retrieving scale on monocular visual odometry using low resolution range sensors. IEEE Transactions on Instrumentation and Measurement (), pp. . External Links: Document, ISSN 1557-9662 Cited by: §I.
  • [6] S. Chiodini, M. Pertile, R. Giubilato, F. Salvioli, D. Bussi, M. Barrera, P. Franceschetti, and S. Debei (2019-06) Rover relative localization testing in martian relevant environment. In 2019 IEEE 5th International Workshop on Metrology for AeroSpace (MetroAeroSpace), Vol. , pp. 473–478. External Links: ISSN 2575-7482 Cited by: §I.
  • [7] J. A. Christian and S. Cryan (2013) A survey of lidar technology and its use in spacecraft relative navigation. In AIAA Guidance, Navigation, and Control (GNC) Conference, pp. 4641. Cited by: §I.
  • [8] L. Fluckiger and B. Coltin (2019) Astrobee robot software: enabling mobile autonomy on the iss. Cited by: §II.
  • [9] D. Gálvez-López and J. D. Tardós (2012-10) Bags of binary words for fast place recognition in image sequences. IEEE Transactions on Robotics 28 (5), pp. 1188–1197. External Links: Document, ISSN 1552-3098 Cited by: §IV-A.
  • [10] A. Geiger, J. Ziegler, and C. Stiller (2011) StereoScan: dense 3d reconstruction in real-time. Cited by: TABLE I, §IV-A.
  • [11] R. Giubilato, M. Vayugundla, M. J. Schuster, W. Stuerzl, A. Wedler, R. Triebel, and S. Debei (2020) Relocalization with submaps: multi-session mapping for planetary rovers equipped with stereo cameras. IEEE Robotics and Automation Letters 5 (2), pp. 580–587. External Links: ISSN 2377-3774 Cited by: §I.
  • [12] R. Giubilato, S. Chiodini, M. Pertile, and S. Debei (2019) An evaluation of ros-compatible stereo visual slam methods on a nvidia jetson tx2. Measurement 140, pp. 161–170. Cited by: Fig. 1, §I, §I.
  • [13] S. B. Goldberg, M. W. Maimone, and L. Matthies (2002) Stereo vision and rover navigation software for planetary exploration. In Proceedings, IEEE Aerospace Conference, Vol. 5, pp. 5–5. Cited by: §I.
  • [14] M. P. Golombek, A. Haldemann, N. Forsberg-Taylor, E. N. Dimaggio, R. Schroeder, B. Jakosky, M. Mellon, and J. Matijevic (2003) Rock size-frequency distributions on mars and implications for mars exploration rover landing safety and operations. Journal of Geophysical Research: Planets 108 (E12). Cited by: §III-B.
  • [15] B. K. P. Horn (1987-04) Closed-form solution of absolute orientation using unit quaternions. J. Opt. Soc. Am. A 4 (4), pp. 629–642. External Links: Document Cited by: §V.
  • [16] A. Hornung, K. M. Wurm, M. Bennewitz, C. Stachniss, and W. Burgard (2013) OctoMap: an efficient probabilistic 3D mapping framework based on octrees. Autonomous Robots. External Links: Document Cited by: §II.
  • [17] P. G. Jayasekara, G. Ishigami, and T. Kubota (2012) Testing and validation of autonomous navigation for a planetary exploration rover using opensource simulation tools. In International Symposium on Arti-cial Intelligence, Robotics and Automation in Space, ISAIRAS, Cited by: §II.
  • [18] K. Koide, J. Miura, and E. Menegatti (2019) A portable three-dimensional lidar-based system for long-term and wide-area people behavior measurement. International Journal of Advanced Robotic Systems 16 (2). Cited by: TABLE I, §IV-B.
  • [19] R. Kummerle, B. Steder, C. Dornhege, M. Ruhnke, G. Grisetti, C. Stachniss, and A. Kleiner (2009-09-30) On measuring the accuracy of SLAM algorithms. Autonomous Robots 27 (4), pp. 387. External Links: ISSN 1573-7527 Cited by: TABLE I.
  • [20] M. Labbé and F. Michaud (2019) RTAB-map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operation. Journal of Field Robotics 36 (2), pp. 416–446. Cited by: TABLE I, §IV-A.
  • [21] M. McCrum, S. Parkes, I. Martin, and M. Dunstan (2010) Mars visual simulation for exomars navigation algorithm validation. In Proc. of i-SAIRAS, pp. 283–290. Cited by: §II.
  • [22] R. Mur-Artal and J. D. Tardós (2017) Visual-inertial monocular slam with map reuse. IEEE Robotics and Automation Letters 2 (2), pp. 796–803. Cited by: TABLE I, §IV-A.
  • [23] S. Parkes, M. Dunstan, I. Martin, M. McCrum, and O. Dubois-Matra (2009) Testing advanced navigation systems for planetary landers and rovers. In 60th International Astronautical Congress, IAC, pp. 869–877. Cited by: §II.
  • [24] Z. B. Rivera, M. C. De Simone, and D. Guida (2019) Unmanned ground vehicle modelling in gazebo/ros-based environments. Machines 7 (2), pp. 42. Cited by: §II.
  • [25] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski (2011) ORB: an efficient alternative to sift or surf. In 2011 International Conference on Computer Vision, pp. 2564–2571. Cited by: §IV-A.
  • [26] S. Shah, D. Dey, C. Lovett, and A. Kapoor (2018) Airsim: high-fidelity visual and physical simulation for autonomous vehicles. In Field and service robotics, pp. 621–635. Cited by: §I.
  • [27] T. Shan and B. Englot (2018) LeGO-LOAM: lightweight and ground-optimized lidar odometry and mapping on variable terrain. In IROS, pp. 4758–4765. Cited by: TABLE I, §IV-B.
  • [28] A. M. Vargas (2018) Astrobee: current status and future use as an international research platform. In International Astronautical Congress (IAC), Cited by: §II.
  • [29] C. Wang, L. Meng, S. She, I. M. Mitchell, T. Li, F. Tung, W. Wan, M. Q. Meng, and C. W. de Silva (2017) Autonomous mobile robot navigation in uneven and unstructured indoor environments. In IROS, pp. 109–116. Cited by: §II.
  • [30] J. Zhang and S. Singh (2014) LOAM: lidar odometry and mapping in real-time. In Robotics: Science and Systems Conference (RSS), Cited by: §IV-B.
  • [31] J. Zhang and S. Singh (2014) LOAM: lidar odometry and mapping in real-time.. In Robotics: Science and Systems, Vol. 2. Cited by: TABLE I.