Designing a mobile robot is a costly task, often carried out through an inevitable trial-and-error process. For this reason, simulation toolkits are precious assets to optimize both time and expenses. Aside from mechanical or physical analysis software, which allows detailed evaluation of very specific design choices, many solutions are available to assist the high-level design of the whole robot, such as Gazebo (gazebosim.org), V-REP (coppeliarobotics.com/coppeliaSim) or Microsoft AirSim (github.com/microsoft/AirSim). This family of software offers simple physical simulation capabilities that allow the robot to interact with a simulated environment and, more importantly, provides solutions to simulate the output of various types of perception sensors, such as cameras, range sensors, or Inertial Measurement Units (IMUs).
Among them, the Gazebo simulator offers tight integration with the Robot Operating System (ROS), where the generated sensor outputs can be processed by Simultaneous Localization and Mapping (SLAM) algorithms for pose estimation and mapping [11, 5, 6, 12]. Motion planning algorithms can then output motor controls which move the robot in the virtual environment. This is beneficial not only to assist the design process of the robot but also to thoroughly test, under different operating conditions, all the algorithms involved. Furthermore, the tight integration with ROS makes it possible to share the source code for all operations between the real robot and its virtual counterpart. This ensures that, when deployed in the field, the real robot will behave almost exactly as foreseen and tested in simulation. In addition, the evaluation of positioning algorithms in simulated environments is beneficial from the metrological perspective: the ground truth is exact, whereas during field testing it is inevitably affected by errors. Lastly, it is possible to evaluate the impact of sensor characteristics, such as field of view (FOV) and resolution, on the reconstructed trajectory.
In this paper, we present a simulation framework dedicated to the validation of SLAM algorithms given the mobility capabilities of a rover and the Martian topography. The framework is based on ROS and Gazebo, and is targeted at the MORPHEUS rover, a research test-bed for autonomous space operations developed at the University of Padova (see Fig. 1(a)). A replica of the rover (Fig. 1(b)) is driven in a variety of simulated planetary environments, enriched with 3D models of rocks of various sizes to add structure. We evaluate both vision and LiDAR perception technologies. Vision has been extensively used for navigation purposes on the NASA MER and MSL rovers, and will be used in the next rover missions: ESA ExoMars and NASA Mars 2020. Although to this day no LiDAR sensor has been used on planetary rovers, LiDARs have been employed for relative navigation in Earth-orbit space missions, opening the possibility of future implementation in planetary environments.
II Related Work
TABLE I: Overview of the compared algorithms.

| Algorithm | Sensor | Loop Closure | Implementation Notes |
|---|---|---|---|
| ORB-SLAM2 | Stereo / Mono / RGB-D | ✓ | 2400 maximum ORB features |
| RTAB-MAP | Stereo / RGB-D | ✓ | ORB features for Loop Closure; Hypothesis Verification enabled; g2o optimizer |
| LibVISO2 | Stereo / Mono | ✗ | - |
| A-LOAM | 3D LiDAR | ✗ | - |
| HDL-SLAM | 3D LiDAR | ✓ | scan registration with NDT |
| LeGO-LOAM | 3D LiDAR | ✓ | - |
A variety of research works in the literature make use of the Gazebo simulation environment. Many of them are related to indoor mapping and navigation [1, 29, 24]. In one such work, a minimal simulated environment is used to test the operations of a planetary research platform, finally building a 3D representation of the observed environment in the form of an OctoMap.
Recently, Gazebo has been used to test and develop navigation strategies for Astrobee [8, 28], a flying robot for the International Space Station. The robot tracks its motion using visual-inertial sensing and uses a depth camera to build maps for path planning. All sensors are simulated in a virtual environment replicating the interior of the ISS, making it possible to test the full navigation pipeline in the proper operating conditions.
The authors of another work used Gazebo to build a simulator for rover operations in a lunar environment. The Gazebo rendering engine was modified to some extent to enable loading of DTMs several kilometers wide while keeping the computational cost at a minimum. Photorealism is obtained through visual shaders replicating sun glare, improved shadow generation, and custom bump maps drawing wheel marks on the ground.
III Simulated Rover and Test Environment
The MORPHEUS rover is a mobile platform targeted at unstructured terrains. Six wheels, individually powered by MAXON® motors, are mounted on three rockers passively connected to the rover body by revolute joints at their barycenters. Turning is performed by skid steering, such that both spot turns and pivot turns are possible. The motor drivers are controlled by Arduino microcontrollers which receive inputs and communicate the motor status to an NVIDIA Jetson TX2 running Ubuntu 14.04, where all the local processing is done. The Jetson shares a Wi-Fi ROS network with a laptop intended as a base station, where the status of the robot can be monitored and user inputs can be forwarded. The rover is equipped with a Stereolabs ZED camera, which captures synchronized image pairs at variable framerates and resolutions. The stereo processing (distortion correction and stereo rectification) is performed on the embedded Tegra GPU. The rover also carries a plane-scanning LiDAR to perform obstacle avoidance.
III-A The Rover Model
A URDF model of the rover is exported from CAD drawings using the ROS add-on for SolidWorks, sw_urdf_exporter (wiki.ros.org/sw_urdf_exporter). As the complexity of the model imposes a significant computational load on the rendering and physics engines, we also provide a simplified version retaining complete functionality. The skid-steer locomotion is implemented using the diff_drive_controller (wiki.ros.org/diff_drive_controller).
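Such a skid-steer setup can be sketched with a diff_drive_controller configuration that groups all three wheels per side; the joint names, wheel geometry, and rates below are illustrative placeholders, not the actual MORPHEUS values.

```yaml
# Hypothetical controller configuration sketch (joint names and values
# are placeholders, not the actual MORPHEUS URDF joints).
morpheus_velocity_controller:
  type: "diff_drive_controller/DiffDriveController"
  # Skid steering: all three wheels per side are commanded together.
  left_wheel:  ["front_left_wheel_joint", "middle_left_wheel_joint", "rear_left_wheel_joint"]
  right_wheel: ["front_right_wheel_joint", "middle_right_wheel_joint", "rear_right_wheel_joint"]
  wheel_separation: 0.8   # [m] effective track width, tuned to match skid-steer slippage
  wheel_radius: 0.1       # [m]
  publish_rate: 50.0      # [Hz] odometry publication rate
  cmd_vel_timeout: 0.5    # [s] stop the wheels if no velocity command arrives
```

The effective wheel separation is typically tuned larger than the geometric track width to account for the slippage inherent in skid steering.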
The stereo camera is implemented using the multicamera plugin, which allows simulating lens distortion and image noise. We combine this plugin with the recently released lens_flare_sensor to simulate the lens flare effect on the image when the sun lies close to the line of sight. The LiDAR sensor is simulated by replicating a Velodyne VLP-16 using the gazebo_ros_velodyne_laser plugin (wiki.ros.org/velodyne_gazebo_plugins).
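As an illustration, a stereo sensor of this kind is declared in the robot description roughly as follows; the link name, baseline, noise, and distortion values are assumptions for the sketch, not the exact parameters of the simulated ZED.

```xml
<!-- Sketch of a Gazebo multicamera sensor; numeric values are assumed. -->
<gazebo reference="camera_link">
  <sensor type="multicamera" name="stereo_camera">
    <update_rate>30.0</update_rate>
    <camera name="left">
      <horizontal_fov>1.5708</horizontal_fov>  <!-- 90 deg -->
      <image><width>1280</width><height>720</height><format>R8G8B8</format></image>
      <noise><type>gaussian</type><mean>0.0</mean><stddev>0.007</stddev></noise>
      <distortion><k1>-0.05</k1><k2>0.01</k2><k3>0.0</k3><p1>0.0</p1><p2>0.0</p2></distortion>
    </camera>
    <camera name="right">
      <pose>0 -0.12 0 0 0 0</pose>  <!-- 12 cm baseline, a ZED-like assumption -->
      <horizontal_fov>1.5708</horizontal_fov>
      <image><width>1280</width><height>720</height><format>R8G8B8</format></image>
    </camera>
    <plugin name="stereo_camera_controller" filename="libgazebo_ros_multicamera.so">
      <cameraName>stereo</cameraName>
      <frameName>camera_link</frameName>
    </plugin>
  </sensor>
</gazebo>
```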
III-B The Environment
The virtual environment is modeled after a Digital Terrain Model (DTM) of the Gale crater on Mars, cropped to a planar landscape. A schematic overview of the map generation process is given in Fig. 3. The DTM is imported in Gazebo to create a featureless surface as a basis for the virtual environment. To populate the surface with rocks, we import the DTM into the 3D modeler Blender, applying a displacement map to a plane segmented into a coarse grid. Two populations of rocks are scattered over this surface using a manually weighted random distribution roughly matching the frequencies observed on the Martian surface: a small population of large boulders and a large population of smaller rocks with diameters ranging from 0.1 to 0.5 meters.
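The rock-scattering step can be sketched as follows. The 0.1–0.5 m small-rock range comes from the text; the counts, the boulder size range, and the power-law weighting toward small diameters are illustrative assumptions.

```python
import numpy as np

def scatter_rocks(extent_m, n_small, n_large, rng=None):
    """Sample 2D positions and diameters for two rock populations.

    Sketch of the map-population step: a large population of small rocks
    (0.1-0.5 m, as in the paper) and a small population of boulders.
    Counts and the boulder size range are illustrative assumptions.
    """
    rng = rng or np.random.default_rng(0)
    # Uniform random placement over a square patch of the terrain.
    xy_small = rng.uniform(0.0, extent_m, size=(n_small, 2))
    xy_large = rng.uniform(0.0, extent_m, size=(n_large, 2))
    # Small-rock diameters in [0.1, 0.5] m, weighted toward smaller sizes
    # to roughly mimic Martian rock size-frequency distributions.
    d_small = 0.1 + 0.4 * rng.power(0.5, n_small)
    d_large = rng.uniform(1.0, 2.5, size=n_large)  # boulder diameters (assumed)
    return (xy_small, d_small), (xy_large, d_large)
```

Each sampled (position, diameter) pair would then be instantiated as a scaled rock mesh in the Gazebo world file.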
To precisely evaluate the performance of SLAM algorithms in this environment, we use Blender to generate two fixed paths along which the robot moves using the actor functionality of Gazebo. To simulate the motion induced by the roughness of the terrain, we add noise to the camera orientations. The resulting sequences of poses are exported to SDF files to drive the Gazebo actor objects.
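The orientation-noise step can be sketched as below; the pose layout matches SDF `<pose>` elements, while the 1-degree standard deviation and the choice of perturbing only roll and pitch are assumptions for the sketch.

```python
import numpy as np

def perturb_orientations(poses, rot_std_deg=1.0, rng=None):
    """Add Gaussian noise to the roll/pitch of a sequence of poses.

    `poses` is an (N, 6) array of [x, y, z, roll, pitch, yaw] in meters
    and radians, the layout used by SDF <pose> elements. The 1-degree
    default standard deviation is an assumption, not the paper's value.
    """
    rng = rng or np.random.default_rng(0)
    noisy = np.asarray(poses, dtype=float).copy()
    # Independent Gaussian perturbation per pose, roll and pitch only;
    # yaw is left to follow the planned path.
    noise = np.deg2rad(rot_std_deg) * rng.standard_normal((len(noisy), 2))
    noisy[:, 3:5] += noise
    return noisy
```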
IV Localization Algorithms
In this paper we compare a variety of odometry and SLAM algorithms using either the virtual stereo camera or the 3D LiDAR to provide localization (i.e., to compute the transformation between the robot and world reference frames), showing how our virtual environment can be used to aid the design choices for the perception system of a robot depending on the target environment. An overview is provided in Table I, along with relevant implementation remarks about parameter values that differ from the defaults.
IV-A Visual SLAM
Among the available Visual SLAM algorithms for stereo cameras, we selected ORB-SLAM2, RTAB-MAP and LibVISO2. LibVISO2 is a widely used Visual Odometry algorithm without Loop Closure capabilities. RTAB-MAP is instead an RGB-D Graph SLAM with a Bayesian Loop Closure detector that addresses multi-session mapping and is highly configurable through an easy Graphical User Interface (GUI) (e.g., types of feature detectors and descriptors, optimizers and their respective parameters). ORB-SLAM2 is a Visual SLAM algorithm for monocular, stereo and RGB-D vision systems based on ORB features, which leverages a Bag of Words approach for localization and Loop Closure detection.
IV-B LiDAR SLAM
In addition, we compare the performance of a variety of recently published LiDAR SLAM algorithms which are released open source and are compatible with the Robot Operating System. A-LOAM (github.com/HKUST-Aerial-Robotics/A-LOAM) is an implementation of LOAM where odometry and mapping are decoupled, running at a faster and a slower rate respectively, and poses are computed by matching edge and planar features across scans. LeGO-LOAM improves on LOAM by extracting and matching point clusters across LiDAR scans and by explicitly using the ground to constrain the roll, pitch and z coordinates during pose tracking. In addition to the original LOAM pipeline, a pose graph is maintained to include Loop Closures. Finally, we test hdl_graph_slam (github.com/koide3/hdl_graph_slam), referred to here as HDL-SLAM, an open-source LiDAR SLAM package for the Robot Operating System which provides a modular graph SLAM for 3D LiDARs based on scan registration through ICP or NDT. It provides interfaces for easy integration of IMU and GPS measurements and performs Loop Closure detection to correct a pose graph.
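As background, the scan-registration principle behind such packages can be illustrated with a minimal point-to-point ICP sketch (brute-force nearest neighbors, closed-form rigid update); production implementations use k-d trees, outlier rejection, and, for NDT, voxelized normal distributions instead.

```python
import numpy as np

def icp_point_to_point(source, target, iters=20):
    """Minimal point-to-point ICP sketch (brute-force correspondences).

    Aligns `source` (N,3) onto `target` (M,3), returning the accumulated
    rotation R and translation t such that source @ R.T + t ~ target.
    Illustrative only: no outlier handling, O(N*M) association.
    """
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # 1. Associate each source point with its nearest target point.
        d2 = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
        matched = tgt[d2.argmin(axis=1)]
        # 2. Solve the incremental rigid alignment in closed form (Kabsch/Horn).
        mu_s, mu_m = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_m - R @ mu_s
        # 3. Apply and accumulate the incremental transform.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

The degeneracy discussed later for planar scenes shows up here directly: on a flat scan the translation along the plane is unconstrained by the point-to-point residuals.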
V Experiments and Discussion
We performed a variety of experiments in two sequences generated as explained in Sec. III-B. The first sequence, denominated Long, takes place in the environment visible in Fig. 1(c), which comprises a denser distribution of small pebbles and a sparser distribution of larger rocks with dimensions comparable to those of the rover. A closed trajectory, about 300 meters long, allows us to evaluate the tracking performance of all algorithms in the presence of 90° turns, as well as their Loop Closure capabilities, since the rover returns to the initial location with the same viewpoint. A second, shorter sequence, denominated here Short, is about 60 meters long and takes place around high boulders, generally bigger than the rover. An approximately triangular trajectory ends in proximity of its beginning, but with an opposite camera viewpoint: this prevents detection of Loop Closures by the visual pipelines but, in principle, allows it for LiDAR pipelines, which benefit from 360° range coverage.
The virtual rover is equipped with a stereo camera whose specifications make it equivalent to the Stereolabs ZED stereo camera mounted on our MORPHEUS rover (see Fig. 1(a)). The 3D LiDAR is modeled roughly after the Ouster OS-1 with 64 scan planes. The main characteristics of both sensors are reported in Table IV.
TABLE IV: Specifications of the simulated sensors.

| | Stereo camera | 3D LiDAR |
|---|---|---|
| Resolution | 1280x720 px | 0.2° (H) x 0.4° (V) |
| FoV | 90° (H) x 60° (V) | 90° (H) x 30° (V) |
| Refresh Rate | 30 Hz | 10 Hz |
We test the performance of the algorithms introduced in Sec. IV in terms of how accurately they reconstruct the trajectories from the Long and Short sessions. We first align each estimated trajectory to the ground truth using Horn's method, given pose correspondences found by matching timestamps. To avoid underestimating the pose errors resulting from angular drift, only the first third of the whole trajectory is used for alignment. For each correspondence, we compute the Absolute Trajectory Error (ATE), i.e. the L2 distance between the aligned poses:

$\mathrm{ATE}_i = \lVert \mathbf{p}_i - \hat{\mathbf{p}}_i \rVert_2$

where $\mathbf{p}_i$ and $\hat{\mathbf{p}}_i$ are the positions of corresponding poses from the ground truth and the SLAM estimate, respectively, after alignment. We also compute the translation drift as the relative difference between the lengths of local segments of the estimated trajectory and of the ground truth. Let $\mathbf{p}_i$ and $\mathbf{p}_j$ be two ground-truth poses such that the length of the trajectory connecting them is 10 meters, and let $\hat{\mathbf{p}}_i$ and $\hat{\mathbf{p}}_j$ be the estimated poses from SLAM that correspond to $\mathbf{p}_i$ and $\mathbf{p}_j$ via timestamps. Thus, we define the local translation drift as:

$\delta_{ij} = \dfrac{\ell(\hat{\mathbf{p}}_i, \hat{\mathbf{p}}_j) - \ell(\mathbf{p}_i, \mathbf{p}_j)}{\ell(\mathbf{p}_i, \mathbf{p}_j)}$

where $\ell(\cdot,\cdot)$ denotes the length of the trajectory segment connecting the two poses.
Finally, we summarize the results for both sequences by computing the Root Mean Square (RMS) of the errors over all time points along the trajectories, for each algorithm.
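The alignment-and-ATE step can be sketched as follows; this is a minimal reconstruction of the procedure described above (Horn's closed-form alignment computed on the first third of the correspondences, then per-pose L2 errors aggregated as an RMS), not the exact evaluation script.

```python
import numpy as np

def align_horn(gt, est):
    """Rigid alignment of `est` onto `gt` (both (N,3) position arrays)
    via the closed-form SVD solution to Horn's absolute-orientation problem."""
    mu_g, mu_e = gt.mean(0), est.mean(0)
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard keeps R a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_g - R @ mu_e
    return R, t

def ate_rms(gt, est):
    """RMS Absolute Trajectory Error, aligning on the first third only
    so that angular drift later in the trajectory is not averaged away."""
    n = len(gt) // 3
    R, t = align_horn(gt[:n], est[:n])
    est_aligned = est @ R.T + t
    errors = np.linalg.norm(gt - est_aligned, axis=1)  # per-pose ATE
    return float(np.sqrt(np.mean(errors ** 2)))
```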
A first means of comparison among visual and LiDAR approaches is the quality and density of the map, which might be employed for the detection of geological features. Figure 4 presents the qualitative difference between a visual map, comprised of sparse 3D landmarks, and a LiDAR map, built by concatenating LiDAR scans. A quantitative evaluation of performance is instead presented in Figure 5, which reports the results of all algorithms on the Long and Short sequences. In addition, Tables II and III contain the RMS errors, highlighting the best scoring algorithms. Figures 5(c) and 5(d) show the resulting trajectories and ground truth overlaid on a top view of the environment to highlight the geometric context. In the Long sequence both ORB-SLAM2 and RTAB-MAP were able to successfully close the loop, therefore their ATE is close to zero at both the beginning and the end of the trajectory. However, ORB-SLAM2 accumulates some translation drift, which manifests itself as higher ATEs in the middle of the trajectory (see Fig. 5(e)). In contrast, LibVISO2 exhibits the lowest translational drift but accumulates angular drift which cannot be recovered, as it is a pure visual odometry. RTAB-MAP instead shows consistent performance in both sequences, achieving the lowest ATE errors thanks to an accurate odometry and Loop Closure capabilities. The LiDAR odometry A-LOAM outperforms the visual odometry LibVISO2 in the Long sequence, although it obtains the highest errors in the Short sequence. LeGO-LOAM even outperforms ORB-SLAM2 in the Long sequence. This result is surprising given the little geometric structure present in this sequence, but it can be explained by the fact that both A-LOAM and LeGO-LOAM extract and match edge features from the LiDAR scans, which this environment offers in abundance and uniformly distributed, unlike the Short sequence, which is characterized by bigger and sparsely distributed boulders that obstruct the view. HDL-SLAM instead relies on scan matching, which degenerates in the presence of planar scenes. This explains its extreme translation drift in the Long sequence.
VI Conclusions
In this work we presented a simulation framework for mobile robots based on ROS Gazebo. We demonstrated how it can be used to aid the selection of perception sensors based on the expected geometry and appearance of the environment. Furthermore, we compared the performance of a variety of open-source Visual and LiDAR SLAM algorithms in environments characterized by different rock distributions and sizes. Although visual SLAM proves to be accurate in the presence of textured ground, LiDAR SLAM has the advantage of building detailed maps in the form of point clouds. As future developments of this work, we plan to enhance the photo-realism of the simulation and to join the advantages of both SLAM approaches, fusing 3D LiDARs with stereo cameras and using the simulated environment to validate the approach.
Acknowledgments
This work has been supported by Project “ARES”, Progetti Innovativi degli Studenti, University of Padova. We would also like to thank the Morpheus Team for the discussions and participation in the experiments.
-  (2015) ROS-based SLAM for a Gazebo-simulated mobile robot in image-based 3D model of indoor environment. In Advanced Concepts for Intelligent Vision Systems, Cham, pp. 273–283. Cited by: §II.
-  (2019) Planetary rover simulation for lunar exploration missions. In 2019 IEEE Aerospace Conference, pp. 1–19. Cited by: §II, §III-A.
-  (2003) The normal distributions transform: a new approach to laser scan matching. In IROS, Vol. 3, pp. 2743–2748. Cited by: TABLE I, §IV-B.
-  (2020) Retrieving scale on monocular visual odometry using low resolution range sensors. IEEE Transactions on Instrumentation and Measurement. Cited by: §I.
-  (2019-06) Rover relative localization testing in Martian relevant environment. In 2019 IEEE 5th International Workshop on Metrology for AeroSpace (MetroAeroSpace), pp. 473–478. Cited by: §I.
-  (2013) A survey of LiDAR technology and its use in spacecraft relative navigation. In AIAA Guidance, Navigation, and Control (GNC) Conference, pp. 4641. Cited by: §I.
-  (2019) Astrobee robot software: enabling mobile autonomy on the ISS. Cited by: §II.
-  (2012-10) Bags of binary words for fast place recognition in image sequences. IEEE Transactions on Robotics 28 (5), pp. 1188–1197. Cited by: §IV-A.
-  (2011) StereoScan: dense 3d reconstruction in real-time. Cited by: TABLE I, §IV-A.
-  (2020) Relocalization with submaps: multi-session mapping for planetary rovers equipped with stereo cameras. IEEE Robotics and Automation Letters 5 (2), pp. 580–587. Cited by: §I.
-  (2019) An evaluation of ROS-compatible stereo visual SLAM methods on a NVIDIA Jetson TX2. Measurement 140, pp. 161–170. Cited by: Fig. 1, §I, §I.
-  (2002) Stereo vision and rover navigation software for planetary exploration. In Proceedings, IEEE Aerospace Conference, Vol. 5, pp. 5–5. Cited by: §I.
-  (2003) Rock size-frequency distributions on Mars and implications for Mars Exploration Rover landing safety and operations. Journal of Geophysical Research: Planets 108 (E12). Cited by: §III-B.
-  (1987-04) Closed-form solution of absolute orientation using unit quaternions. J. Opt. Soc. Am. A 4 (4), pp. 629–642. Cited by: §V.
-  (2013) OctoMap: an efficient probabilistic 3D mapping framework based on octrees. Autonomous Robots. Cited by: §II.
-  (2012) Testing and validation of autonomous navigation for a planetary exploration rover using opensource simulation tools. In International Symposium on Artificial Intelligence, Robotics and Automation in Space, ISAIRAS. Cited by: §II.
-  (2019) A portable three-dimensional LiDAR-based system for long-term and wide-area people behavior measurement. International Journal of Advanced Robotic Systems 16 (2). Cited by: TABLE I, §IV-B.
-  (2009-09-30) On measuring the accuracy of SLAM algorithms. Autonomous Robots 27 (4), pp. 387. Cited by: TABLE I.
-  (2019) RTAB-Map as an open-source LiDAR and visual simultaneous localization and mapping library for large-scale and long-term online operation. Journal of Field Robotics 36 (2), pp. 416–446. Cited by: TABLE I, §IV-A.
-  (2010) Mars visual simulation for ExoMars navigation algorithm validation. In Proc. of i-SAIRAS, pp. 283–290. Cited by: §II.
-  (2017) Visual-inertial monocular SLAM with map reuse. IEEE Robotics and Automation Letters 2 (2), pp. 796–803. Cited by: TABLE I, §IV-A.
-  (2009) Testing advanced navigation systems for planetary landers and rovers. In 60th International Astronautical Congress, IAC, pp. 869–877. Cited by: §II.
-  (2019) Unmanned ground vehicle modelling in gazebo/ros-based environments. Machines 7 (2), pp. 42. Cited by: §II.
-  (2011) ORB: an efficient alternative to SIFT or SURF. In 2011 International Conference on Computer Vision, pp. 2564–2571. Cited by: §IV-A.
-  (2018) AirSim: high-fidelity visual and physical simulation for autonomous vehicles. In Field and Service Robotics, pp. 621–635. Cited by: §I.
-  (2018) LeGO-LOAM: lightweight and ground-optimized LiDAR odometry and mapping on variable terrain. In IROS, pp. 4758–4765. Cited by: TABLE I, §IV-B.
-  (2018) Astrobee: current status and future use as an international research platform. In International Astronautical Congress (IAC), Cited by: §II.
-  (2017) Autonomous mobile robot navigation in uneven and unstructured indoor environments. In IROS, pp. 109–116. Cited by: §II.
-  (2014) LOAM: LiDAR odometry and mapping in real-time. In Robotics: Science and Systems (RSS), Vol. 2. Cited by: TABLE I, §IV-B.