Monocular Direct Sparse Localization in a Prior 3D Surfel Map

In this paper, we introduce an approach to tracking the pose of a monocular camera in a prior surfel map. By rendering vertex and normal maps from the prior surfel map, global planar information for the sparse tracked points in the image frame is obtained. Tracked points with and without global planar information contribute global and local constraints, respectively, between frames. Our approach formulates all constraints as direct photometric errors within a local window of frames. The final optimization utilizes these constraints to provide accurate estimates of global 6-DoF camera poses with absolute scale. Extensive simulation and real-world experiments demonstrate that our monocular method provides accurate camera localization results under various conditions.

I Introduction

Localization is one of the fundamental requirements for autonomous vehicles. Various sensors and algorithms have been developed for real-time localization or simultaneous localization and mapping (SLAM). Though GPS can provide global position information over the earth, its localization results are easily influenced by multipath effects, and it cannot be used indoors.

On-board sensors which do not rely on external infrastructure have become a necessity for reliable localization. Cameras and lidars are two of the most popular sensors employed for localization and SLAM. Lidars provide accurate and long-range measurements of the environment, and many lidar-based localization and mapping methods [1, 2, 3] show good performance indoors and outdoors. However, a typical 3D lidar is bulky and expensive, which limits its application to small or low-cost platforms.

Cameras have become an alternative to lidars thanks to their light weight and low cost. Camera-based methods, or visual-inertial methods that additionally fuse an inertial measurement unit (IMU), can meet the same localization demands [4, 5, 6]. However, compared to lidar-based systems, they have inferior accuracy and robustness. In particular, monocular camera systems suffer from scale drift [7]. Appearance changes, including weather and illumination, can also destabilize camera-based methods. Finding associations across multi-session maps can address this problem [8], but storing multiple maps of the same place is costly.

An affordable way to combine the advantages of lidars and cameras is to use a 3D lidar to build a 3D map and then achieve camera-based localization in this built map. In this way, accurate and large-scale maps can be efficiently built by 3D lidar mapping methods. Then, the cameras can utilize the geometric information from the map to reduce the long-term drift and gain more accurate localization results.

Fig. 1: Our method localizes a monocular camera in the surfel map. The red trajectory shows the camera positions in the 3D map, which is colorized by the color information from the camera (top). Green points are those with surfel constraints, while blue points are those without surfel constraints in a local window of frames (bottom).

Based on this idea, we present a novel monocular camera localization system in a 3D surfel map, called DSL (Direct Sparse Localization). The main contributions of our paper are as follows:

  • A cross-modality localization algorithm is proposed to localize camera poses in a prior surfel map. All the constraints are from direct photometric energy functions, making our system efficient to track and optimize camera poses.

  • Global constraints from the map make the monocular system aware of the absolute scale and global transform. We adopt a surfel representation, which stores 3D information efficiently and allows depth, vertex and normal maps to be rendered efficiently on a modern GPU.

  • We provide a degeneration analysis of our method, which offers a hint as to the uncertainty of the localization accuracy in real-world applications.

  • The proposed system is validated in both simulation and real-world experiments. Our method outperforms many state-of-the-art visual(-inertial) localization or SLAM algorithms.

II Related Work

In this section, we mainly discuss the literature focused on cross-modality localization, especially camera localization in 3D maps.

By finding correspondences between the two sensor modalities, several methods use common objects that can be observed in both camera and lidar views. In [9], manually labeled road markings in a 3D lidar map were used to construct a sparse point cloud. Combined with epipolar geometry and the vehicle odometry, the Chamfer distance between the edge image and the projected sparse point cloud was used to estimate 6-DoF camera poses. In [10], vertical planes were extracted from both vision and lidar data, and the correspondences of visual and lidar planes were used as coplanarity constraints in the global bundle adjustment.

Mutual information is an effective metric for cross-modality matching and is adopted in many methods to localize a camera in maps produced by heterogeneous sensors. In [11], the reflectivities from the lidar map were used to render synthetic images given potential camera poses; a 3-DoF search over the potential poses was applied, and the optimal pose was determined by maximizing the normalized mutual information between the camera input and the synthetic image to achieve 2D localization. Using the derivatives of the analytical normalized information distance (NID), Pascoe et al. [12] extended this method to 6-DoF camera pose estimation. In [13], a similar NID-based method was proposed to evaluate the similarity between the live image and images generated from a textured 3D prior mesh to obtain the camera pose. Finally, based on mutual projections, the similarity between synthetic depth images and images from a panoramic camera was fit into a particle filter-based Monte Carlo localization framework in [14].

Exploiting geometric information is another strategy. Typically, these methods extract feature points in their visual modules and follow the scheme of indirect visual methods. Caselitz et al. [15] introduced a monocular camera localization method that works in an iterative closest point (ICP) scheme: it iteratively associates and aligns the sparse point cloud produced by monocular visual SLAM with the lidar map to estimate the 7-DoF similarity transformation. In [16], a method for stereo camera pose estimation was proposed that minimizes the residuals between the depth from stereo matching and the depth of points projected from the lidar map. Ding et al. [17] used a hybrid bundle adjustment to optimize the visual map from a stereo visual-inertial system and to align the sparse visual map against the pre-built lidar map at the same time. Zuo et al. [18] took the tightly-coupled MSCKF [19] as front-end tracking and registered the refined semi-dense point cloud from stereo matching to the prior lidar map using a normal distribution transform (NDT)-based method [20]. Using a Signed Distance Field (SDF) representation built from stereo vision, Huang et al. [21] proposed a monocular camera localization method that increases the coherence between the indirect local structure and the SDF model. Instead of using indirect visual pipelines, our method benefits from direct visual tracking [5], which does not rely on explicit feature extraction. The correspondences of pixels among the frames can be updated during the optimization, and the plane information from the surfel representation further helps to make the system aware of the global pose and scale.

III Method

III-A Notation

In this paper, we denote the transformation matrix as $\mathbf{T}_{ab} \in SE(3)$, which transforms a point in the frame $b$ into the frame $a$. The corresponding Lie-algebra element, $\boldsymbol{\xi}_{ab} \in \mathfrak{se}(3)$, which, for brevity, is expressed as a vector $\boldsymbol{\xi}_{ab} \in \mathbb{R}^6$, can be mapped by the exponential map, $\exp(\cdot)$, to $\mathbf{T}_{ab}$. The rotation matrix and translation vector of $\mathbf{T}_{ab}$ are denoted as $\mathbf{R}_{ab}$ and $\mathbf{t}_{ab}$, respectively. $I_i[\mathbf{p}]$ returns the pixel intensity of the image corresponding to the frame $i$, given the homogeneous pixel coordinates $\mathbf{p}$. We use the pinhole model with $\mathbf{K}$ as the camera intrinsic matrix and assume all images are undistorted.
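To make the notation concrete, the following minimal sketch (ours, not the authors' code) shows the $\mathfrak{se}(3)$ exponential map and the pinhole projection assumed throughout this section; all function and symbol names are illustrative.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def exp_se3(xi):
    """Exponential map from a 6-vector xi = (rho, phi) to a 4x4 transform T."""
    rho, phi = xi[:3], xi[3:]
    theta = np.linalg.norm(phi)
    W = hat(phi)
    if theta < 1e-9:
        R, V = np.eye(3) + W, np.eye(3) + 0.5 * W
    else:
        R = np.eye(3) + np.sin(theta) / theta * W \
            + (1 - np.cos(theta)) / theta**2 * W @ W
        V = np.eye(3) + (1 - np.cos(theta)) / theta**2 * W \
            + (theta - np.sin(theta)) / theta**3 * W @ W
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ rho
    return T

def project(K, X):
    """Pinhole projection of a 3D point X to homogeneous pixel coordinates."""
    x = K @ X
    return x / x[2]
```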

III-B System Overview

Fig. 2: Framework of the proposed algorithm. Our system contains two sub-routines, the direct sparse localization and map rendering. Note that the parts represented by dashed lines in the diagram are only needed in the initialization.

The framework of our method is shown in Fig. 2. With a rough initial pose, our system first initializes direct sparse visual odometry with the depth map generated by the map rendering module (Sec. III-C). A candidate point with a valid value in the depth map is assigned a rough inverse depth, which is used for subsequent camera tracking.

After initialization, the system obtains the vertex & normal maps of the last keyframe from the map rendering module (Sec. III-C). This rendering step runs only once after a new keyframe is added and optimized. Within a local window of keyframes, we track points across all image regions following [5]. The plane information of a tracked point is acquired from the vertex and normal maps if the corresponding pixel is valid in both maps. Under the assumption that nearby tracked points lie on the same local plane in the world frame, most of the tracked points are associated with the correct global surfels even when the global keyframe pose is uncertain. This is illustrated in Fig. 3.

Since each surfel can be considered as a local plane represented in the global frame, we use this plane information to constrain the relative poses between camera frames, as well as the global poses of frames hosting the sparse tracked points (Sec. III-D and III-E).

Fig. 3: Assumption of global surfel association. If the error between the ground-truth and the estimated poses is small, the tracked points can still be associated to the same global surfel or its neighbor surfels with close plane coefficients.

III-C Map Production and Rendering

Our surfel map is represented as a list of unordered surfels, similar to the representation proposed by Whelan et al. [22]. In our method, only the position, normal and radius of each surfel are used for camera pose estimation.

Provided a point cloud map from lidar mapping, we build the surfel map as follows. First, voxel-grid downsampling reduces the number of points and makes them evenly distributed. Then, the normal of each point is estimated by principal component analysis (PCA) of its neighboring points [23]. The surfel map is built by assigning each surfel a point position, the estimated normal of that point in the processed point cloud, and a radius according to the voxel size used in the downsampling step. When the system starts, this surfel map is loaded once onto the GPU.
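As an illustration of the map-building step just described, the sketch below re-implements voxel downsampling and PCA normal estimation with NumPy/SciPy, assuming the lidar map is given as an (N, 3) array `points`; it is not the authors' pipeline, and the parameter values are placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_surfel_map(points, voxel_size=0.1, knn=20):
    # Voxel-grid downsampling: keep one point per occupied voxel.
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    centers = points[np.sort(idx)]

    # Normal estimation by PCA over the k nearest neighbors of each center.
    tree = cKDTree(points)
    normals = np.empty_like(centers)
    for i, c in enumerate(centers):
        _, nn = tree.query(c, k=knn)
        nbrs = points[nn] - points[nn].mean(axis=0)
        # Singular vector of the smallest singular value = plane normal.
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        normals[i] = vt[-1]

    radii = np.full(len(centers), voxel_size)   # radius tied to the voxel size
    return centers, normals, radii
```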

Given a camera pose in the global frame, the map rendering module projects the surfels into the local frame and returns either the depth map or the vertex & normal maps. Similarly to [22], we use the OpenGL Shading Language to render these maps, which have the same size as the input image. For each pixel, the depth map provides the depth in the given camera frame, while the vertex & normal maps provide the surfel position and normal in the global frame. Fig. 4 shows a sample input image and the corresponding rendered maps given an estimated camera pose.
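The actual system rasterizes surfel disks with the OpenGL Shading Language on the GPU; the CPU sketch below only splats each surfel center into a single pixel, to illustrate what the depth, vertex and normal maps contain. Names and the near-plane value are assumptions.

```python
import numpy as np

def render_maps(centers, normals, T_cw, K, width, height):
    """Return depth, vertex and normal maps for camera pose T_cw (world->camera)."""
    depth  = np.full((height, width), np.inf)
    vertex = np.zeros((height, width, 3))   # surfel position in the world frame
    normal = np.zeros((height, width, 3))   # surfel normal in the world frame

    Xc = (T_cw[:3, :3] @ centers.T + T_cw[:3, 3:4]).T   # world -> camera
    for X_w, n_w, X_c in zip(centers, normals, Xc):
        if X_c[2] <= 0.1:                                # behind or too close
            continue
        u, v, _ = (K @ X_c) / X_c[2]
        u, v = int(round(u)), int(round(v))
        if 0 <= u < width and 0 <= v < height and X_c[2] < depth[v, u]:
            depth[v, u]  = X_c[2]                        # z-buffer test
            vertex[v, u] = X_w
            normal[v, u] = n_w
    return depth, vertex, normal
```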

(a) Raw input
(b) Vertex map
(c) Normal map
Fig. 4: A sample camera input and the rendered vertex & normal maps given an estimated camera pose. The incompleteness of the rendered maps is due to the sparsity of the lidar inputs and the different fields of view of the camera and lidar.

III-D Homography Constraints from Global Surfels

To use direct photometric errors as constraints, we follow [5] to formulate our energy functions. For each tracked point, the photometric residual can be written as

$$r_{\mathbf{p}} = \left(I_j[\mathbf{p}'] - b_j\right) - \frac{t_j e^{a_j}}{t_i e^{a_i}}\left(I_i[\mathbf{p}] - b_i\right), \qquad (1)$$

where

$$\mathbf{p}' \simeq \mathbf{K}\left(\mathbf{R}_{ji}\,\mathbf{K}^{-1}\mathbf{p}\,/\,d_{\mathbf{p}} + \mathbf{t}_{ji}\right), \qquad (2)$$

where $t_i$ ($t_j$) is the exposure time of the host (target) image; $a_i, b_i$ and $a_j, b_j$ are the coefficients of the affine brightness functions of the host and target images; $d_{\mathbf{p}}$ is the inverse depth of the corresponding normalized point, $\mathbf{K}^{-1}\mathbf{p}$, in the host frame $i$; and $\simeq$ indicates equality up to a scale factor.
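A hedged sketch of Eqns. 1-2 follows: it warps a host pixel into the target frame using its inverse depth and compares intensities under the affine brightness model. The argument names (exposure times, affine parameters) mirror the definitions above but are otherwise our own; bilinear interpolation and bounds checks are omitted.

```python
import numpy as np

def photometric_residual(I_i, I_j, p, inv_depth, K, R_ji, t_ji,
                         t_exp_i, t_exp_j, a_i, b_i, a_j, b_j):
    # Back-project the host pixel p = (u, v) with its inverse depth, transform, project.
    X_i = np.linalg.inv(K) @ np.array([p[0], p[1], 1.0]) / inv_depth
    X_j = R_ji @ X_i + t_ji
    u, v, _ = (K @ X_j) / X_j[2]

    # Nearest-neighbor intensity lookup (bilinear interpolation omitted).
    I_target = I_j[int(round(v)), int(round(u))]
    I_host   = I_i[int(p[1]), int(p[0])]

    # Affine brightness compensation between the two exposures.
    scale = (t_exp_j * np.exp(a_j)) / (t_exp_i * np.exp(a_i))
    return (I_target - b_j) - scale * (I_host - b_i)
```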

Given the plane coefficients $(\mathbf{n}_i, d_i)$ of the pixel in the host frame $i$, such that $\mathbf{n}_i^\top\mathbf{x} + d_i = 0$ for any point $\mathbf{x}$ on the plane, the homography [24] between the host frame $i$ and the target frame $j$ can be written as

$$\mathbf{H}_{ji} = \mathbf{K}\left(\mathbf{R}_{ji} - \mathbf{t}_{ji}\,\mathbf{n}_i^\top / d_i\right)\mathbf{K}^{-1}. \qquad (3)$$

The variables to estimate are the relative pose $\mathbf{T}_{ji}$ and the affine brightness parameters between frames $i$ and $j$; we denote the full set of variables as $\mathbf{x}$. Note that the map rendering module stores the vertex & normal information in the global frame $w$. Thus, the plane information in the host frame needs to be transformed from the global frame with the estimated global pose of the host frame, $\mathbf{T}_{wi}$, as

$$\mathbf{n}_i = \mathbf{R}_{wi}^\top\mathbf{n}_w, \qquad d_i = d_w + \mathbf{n}_w^\top\mathbf{t}_{wi}. \qquad (4)$$
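The sketch below illustrates Eqns. 3-4 under the plane convention $\mathbf{n}^\top\mathbf{x} + d = 0$ used above: transform a surfel plane from the world frame into the host frame, then build the plane-induced homography. It is an illustration, not the paper's implementation.

```python
import numpy as np

def plane_world_to_host(n_w, d_w, R_wh, t_wh):
    """Plane (n_w, d_w) in the world frame -> (n_h, d_h) in the host frame,
    where (R_wh, t_wh) maps host-frame points into the world frame."""
    n_h = R_wh.T @ n_w
    d_h = d_w + n_w @ t_wh
    return n_h, d_h

def plane_homography(K, R_ji, t_ji, n_h, d_h):
    """Homography mapping host pixels to target pixels for points on the plane."""
    return K @ (R_ji - np.outer(t_ji, n_h) / d_h) @ np.linalg.inv(K)

# Usage: p_target ~ H @ p_host for a homogeneous host pixel p_host on the plane.
```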

Combining Eqn. 1, 3 and 4, the photometric residual with the surfel constraint can be derived as

$$r_{\mathbf{p}} = \left(I_j[\mathbf{H}_{ji}\,\mathbf{p}] - b_j\right) - \frac{t_j e^{a_j}}{t_i e^{a_i}}\left(I_i[\mathbf{p}] - b_i\right). \qquad (5)$$

Note that Eqn. 5 does not contain the inverse depths of the points with surfel constraints, but it includes the global poses of the host frames, $\mathbf{T}_{wi}$, through Eqn. 4. This helps to constrain the camera poses globally. For simplicity, we denote the warped pixel $\mathbf{H}_{ji}\,\mathbf{p}$ as $\mathbf{p}'$, and the to-be-optimized pose variables as $\boldsymbol{\xi}_{ji}$ and $\boldsymbol{\xi}_{wi}$. Then, the Jacobian of $r_{\mathbf{p}}$ w.r.t. $\boldsymbol{\xi}_{ji}$ and $\boldsymbol{\xi}_{wi}$ follows from the chain rule as

$$\mathbf{J} = \frac{\partial r_{\mathbf{p}}}{\partial\mathbf{p}'}\left[\frac{\partial\mathbf{p}'}{\partial\boldsymbol{\xi}_{ji}},\ \frac{\partial\mathbf{p}'}{\partial\boldsymbol{\xi}_{wi}}\right], \qquad (6)$$

where

$$\frac{\partial\mathbf{p}'}{\partial\boldsymbol{\xi}_{ji}} = \frac{\partial\mathbf{p}'}{\partial\mathbf{H}_{ji}}\,\frac{\partial\mathbf{H}_{ji}}{\partial\boldsymbol{\xi}_{ji}}, \qquad (7)$$

$$\frac{\partial\mathbf{p}'}{\partial\boldsymbol{\xi}_{wi}} = \frac{\partial\mathbf{p}'}{\partial\mathbf{H}_{ji}}\,\frac{\partial\mathbf{H}_{ji}}{\partial(\mathbf{n}_i, d_i)}\,\frac{\partial(\mathbf{n}_i, d_i)}{\partial\boldsymbol{\xi}_{wi}}, \qquad (8)$$

i.e., the image gradient at $\mathbf{p}'$ chained with the derivatives of the homography in Eqn. 3 and of the plane transform in Eqn. 4.
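Because the analytic Jacobians of Eqns. 6-8 chain several terms, a central-difference check is a cheap way to validate an implementation. The sketch below is generic: `residual_fn` is any (hypothetical) function of the stacked optimization variables, and pose increments are assumed to be applied inside it (e.g., via the exponential map).

```python
import numpy as np

def numeric_jacobian(residual_fn, x, eps=1e-6):
    """Central-difference Jacobian of residual_fn at x (x is a flat array)."""
    r0 = np.atleast_1d(residual_fn(x))
    J = np.zeros((r0.size, x.size))
    for k in range(x.size):
        dx = np.zeros_like(x)
        dx[k] = eps
        J[:, k] = (np.atleast_1d(residual_fn(x + dx))
                   - np.atleast_1d(residual_fn(x - dx))) / (2 * eps)
    return J

# Example: compare an analytic Jacobian J_analytic of the surfel residual against
# the numeric one at the current estimate x0:
# assert np.allclose(J_analytic, numeric_jacobian(residual, x0), atol=1e-4)
```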

III-E Optimization

Fig. 5: Surfel and non-surfel constraints. A point associated with a surfel forms the homography (surfel) constraint of Eqn. 5, which constrains the global poses; a point without an association forms the normal photometric (non-surfel) constraint of Eqn. 1, which constrains the relative poses between frames.

The final optimization is based on relative constraints from direct sparse tracking and global constraints from global surfels, as shown in Fig. 5. Due to map incompleteness or estimation uncertainty, not all tracked points can be associated with a global surfel. Thus, the final energy function to be optimized becomes

$$E = E_{s} + E_{ns}, \qquad (9)$$

where $E_{s}$ and $E_{ns}$ are the energy functions corresponding to the surfel and non-surfel constraints, respectively. In detail,

$$E_{*} = \sum_{i\in\mathcal{F}}\ \sum_{\mathbf{p}\in\mathcal{P}_i}\ \sum_{j\in\mathrm{obs}(\mathbf{p})}\ \sum_{\mathbf{q}\in\mathcal{N}_{\mathbf{p}}} w_{\mathbf{q}}\,\rho_h\!\left(r_{\mathbf{q}}\right), \qquad (10)$$

where $E_{*}$ denotes $E_{s}$ or $E_{ns}$, with the residual $r_{\mathbf{q}}$ given by Eqn. 5 or Eqn. 1, respectively; $\mathcal{F}$ is the set of all keyframes, $\mathcal{P}_i$ is the set of tracked points (pixels) in keyframe $i$, $\mathrm{obs}(\mathbf{p})$ is the set of frames in which the point $\mathbf{p}$ is visible, and $\mathcal{N}_{\mathbf{p}}$ is the set of pixels in the patch centered on $\mathbf{p}$; $w_{\mathbf{q}}$ is the gradient-dependent weighting defined in [5] and $\rho_h$ is the Huber loss. The above is a non-linear least squares problem, which can be solved by the Gauss-Newton or Levenberg-Marquardt method.
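As a sketch of how the Huber loss and the gradient-dependent weights of Eqn. 10 enter the solver, the snippet below performs one IRLS-weighted Gauss-Newton step on stacked residuals; variable names and the Huber threshold are illustrative, not the paper's.

```python
import numpy as np

def huber_weight(r, delta):
    """IRLS weight of the Huber loss for residual r."""
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / a)

def gauss_newton_step(J, r, w_grad, delta=9.0):
    """One undamped Gauss-Newton step for stacked residuals r, Jacobian J and the
    gradient-dependent weights w_grad of Eqn. 10."""
    w = w_grad * huber_weight(r, delta)
    H = J.T @ (w[:, None] * J)          # weighted normal equations
    b = -J.T @ (w * r)
    return np.linalg.solve(H, b)        # state increment delta_x
```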

III-F Implementation Details

In this section, we briefly introduce the implementation details of the remaining parts of our system.

III-F1 Filtering and Association of Tracked Points

In the following, we consider only the pixels that have valid (non-zero) values in the rendered vertex & normal maps.

After each iteration of the optimization, the updated inverse depth of each tracked point and its projected pixel coordinates in the target frame can be obtained. From the intersection of the ray through the host point with the plane of the associated surfel, the surfel-induced inverse depth and projected pixel can also be obtained. We filter and associate these tracked points by the following criteria:

  • If the disagreement between the estimated and surfel-induced inverse depths, or between the estimated and surfel-induced projected pixels, exceeds an outlier threshold, the point is considered an outlier and is filtered out;

  • If both disagreements fall below a (tighter) convergence threshold, the point is regarded as a converged point and is associated with the corresponding surfel.

A filtered point is removed from Eqn. 9, while an associated point is moved from the non-surfel term $E_{ns}$ to the surfel term $E_{s}$.

After a point is associated with a surfel, its inverse depth is no longer a variable to estimate and can be determined from the optimized pose of its host frame. Thus, the semi-dense depth map used for frame tracking takes the surfel-induced depth as the pixel's depth, with the uncertainty obtained from the resolution of the map.
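The decision rule above can be sketched as follows; the thresholds are hypothetical placeholders, since the paper's actual values are not reproduced here.

```python
import numpy as np

OUTLIER_DEPTH_TH = 0.1    # hypothetical inverse-depth disagreement threshold
OUTLIER_PIXEL_TH = 3.0    # hypothetical reprojection disagreement [pixels]
ASSOC_DEPTH_TH   = 0.01
ASSOC_PIXEL_TH   = 1.0

def classify_point(inv_depth_est, inv_depth_surfel, pix_est, pix_surfel):
    """Return 'outlier', 'associate' or 'keep' for a tracked point, comparing the
    optimized estimate with the surfel-induced inverse depth / projection."""
    d_err = abs(inv_depth_est - inv_depth_surfel)
    p_err = np.linalg.norm(np.asarray(pix_est) - np.asarray(pix_surfel))
    if d_err > OUTLIER_DEPTH_TH or p_err > OUTLIER_PIXEL_TH:
        return "outlier"      # drop from Eqn. 9
    if d_err < ASSOC_DEPTH_TH and p_err < ASSOC_PIXEL_TH:
        return "associate"    # move to the surfel-constrained energy term
    return "keep"             # keep as a non-surfel (relative) constraint
```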

III-F2 Front-end

For more details of the front-end implementation, we refer readers to [5]. We follow similar frame and point management to that described in [5]. The frame management tracks the frame by coarse-to-fine direct alignment, and creates and marginalizes keyframes to maintain the local window; the point management selects, tracks and activates points for tracking and optimization.

III-F3 Marginalization

For the points without surfel constraints, we apply the same marginalization process as in [5], where the First Estimate Jacobian [25, 26] is used. Since Eqn. 5 does not involve inverse depths, the points with surfel constraints do not need to be marginalized; their residuals are involved only in the marginalization of frames.

IV Degeneration Analysis

Degeneration appears when the surfel structure cannot uniquely constrain the camera poses with respect to the global surfel map. In this section, several common degeneration cases are discussed. We then show how the system recovers from localization drift given sufficient observations. The detailed derivations for this section can be found in the supplementary material [27].

Surfel distributions, especially the normal directions of surfels, can influence the performance of the proposed system. The tracked points with no surfel associations can provide relative constraints for camera poses. If all the constraints are from non-surfel points, the system is equivalent to visual-only odometry, which has scale ambiguity [28].

To simplify the analysis, we consider that the constraints come directly from corresponding points in the image planes of the host frame $i$ and the target frame $j$, instead of their intensity values, and that the pose of the first frame is given. Then, in this visual-only case, the camera poses $\mathbf{T}_{wi}$ and $\mathbf{T}_{wj}$ are constrained by the following relationship:

$$\mathbf{p}_j \simeq \mathbf{K}\left(\mathbf{R}_{ji}\,\mathbf{K}^{-1}\mathbf{p}_i\,/\,d_{\mathbf{p}} + \mathbf{t}_{ji}\right). \qquad (11)$$

For any scale factor $s$ and global transform $\mathbf{T}_g$ (with corresponding rotation $\mathbf{R}_g$ and translation $\mathbf{t}_g$), identical measurements of $\mathbf{p}_i$ and $\mathbf{p}_j$ are produced by the following variables with a tilde sign:

$$\tilde{\mathbf{R}}_{wi} = \mathbf{R}_g\mathbf{R}_{wi}, \qquad \tilde{\mathbf{t}}_{wi} = s\,\mathbf{R}_g\mathbf{t}_{wi} + \mathbf{t}_g, \qquad \tilde{d}_{\mathbf{p}} = d_{\mathbf{p}}\,/\,s. \qquad (12)$$

By Eqn. 12, the relative pose between frames $i$ and $j$ and the inverse depth of any point from non-surfel constraints become

$$\tilde{\mathbf{R}}_{ji} = \mathbf{R}_{ji}, \qquad \tilde{\mathbf{t}}_{ji} = s\,\mathbf{t}_{ji}, \qquad \tilde{d}_{\mathbf{p}} = d_{\mathbf{p}}\,/\,s. \qquad (13)$$

The identical measurements can be verified by substituting Eqn. 13 into Eqn. 11 as

$$\tilde{\mathbf{p}}_j \simeq \mathbf{K}\left(\mathbf{R}_{ji}\,\mathbf{K}^{-1}\mathbf{p}_i\,\frac{s}{d_{\mathbf{p}}} + s\,\mathbf{t}_{ji}\right) = s\,\mathbf{K}\left(\mathbf{R}_{ji}\,\mathbf{K}^{-1}\mathbf{p}_i\,/\,d_{\mathbf{p}} + \mathbf{t}_{ji}\right) \simeq \mathbf{p}_j. \qquad (14)$$
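The gauge freedom of Eqns. 12-14 can be checked numerically: scaling the translation by $s$ and dividing the inverse depth by $s$ leaves the reprojection unchanged, so a monocular system without surfel constraints cannot fix the scale. The values below are arbitrary.

```python
import numpy as np

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R = np.eye(3)
t = np.array([0.3, 0.0, 0.0])
p = np.array([300.0, 200.0, 1.0])        # host pixel (homogeneous)
inv_d = 0.2                              # inverse depth in the host frame

def reproject(R, t, inv_d):
    X = np.linalg.inv(K) @ p / inv_d     # back-project with depth 1/inv_d
    x = K @ (R @ X + t)
    return x[:2] / x[2]

s = 2.5                                  # arbitrary scale factor
print(reproject(R, t, inv_d))            # original projection
print(reproject(R, s * t, inv_d / s))    # scaled variables: identical pixel
```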

To analyze the degeneration with surfel constraints, we make two assumptions: 1) The first assumption is that the visual system can track points ideally. This leads to a relatively accurate visual structure, camera poses up to scale and an unknown global transform. 2) The second assumption is that the surfel coefficients, as well as the associations between tracked points and surfels, are known.

From the above assumptions and the accurate relative pose relationship of Eqn. 14, we can regard the tracked points, the relative rotations $\mathbf{R}_{ji}$, the up-to-scale relative translations $\mathbf{t}_{ji}$ and inverse depths $d_{\mathbf{p}}$, and the surfel associations as known and locally constrained. The remaining uncertainty comes from the unknown scale $s$ and global transform $\mathbf{T}_g$.

We can rewrite the surfel constraints as

$$\mathbf{n}_w^\top\left(s\,\mathbf{R}_g\,\mathbf{x}_{\mathbf{p}} + \mathbf{t}_g\right) + d_w = 0, \qquad (15)$$

for every tracked point $\mathbf{p}$ associated with a surfel $(\mathbf{n}_w, d_w)$, where $\mathbf{x}_{\mathbf{p}}$ denotes the up-to-scale position of the point reconstructed by the locally constrained visual structure. Degeneration exists when two or more different state pairs ($s$ and $\mathbf{T}_g$) satisfy the same constraints of Eqn. 15.

IV-A Single Plane

The first degeneration case occurs when all points are on the same plane and thus share the same plane coefficients $(\mathbf{n}_w, d_w)$. In this case, identical measurements can be produced by any state pair satisfying

$$\mathbf{R}_g\,\mathbf{n}_w = \mathbf{n}_w, \qquad \mathbf{n}_w^\top\mathbf{t}_g = (s - 1)\,d_w, \qquad (16)$$

i.e., any rotation about the plane normal combined with a matching translation and scale. Neither $s$ nor $\mathbf{T}_g$ can be uniquely determined.

IV-B Parallel Planes

The parallel-plane case appears when all points come from the two sides of a long passage. In this case, the absolute scale can be determined; however, the global transform remains ambiguous. Any $\mathbf{R}_g$ and $\mathbf{t}_g$ satisfying

$$\mathbf{R}_g\,\mathbf{n} = \mathbf{n}, \qquad \mathbf{n}^\top\mathbf{t}_g = 0 \qquad (17)$$

will not violate the surfel or non-surfel constraints, where $\mathbf{n}$ is the shared plane normal. This can be considered a particular case of Sec. IV-A with the scale fixed at $s = 1$: $\mathbf{R}_g$ can be any rotation about the plane normal $\mathbf{n}$, and $\mathbf{t}_g$ any translation perpendicular to $\mathbf{n}$.

IV-C Non-parallel Planes with Co-planar Normals

In this case, all normal vectors of the surfel constraints lie on a single plane, $\Pi$. We denote the normal vector of $\Pi$ as $\mathbf{n}_\Pi$. In contrast to the case in Sec. IV-B, two or more non-parallel planes now exist, so there is no ambiguity in the global rotation and scale, i.e., $\mathbf{R}_g = \mathbf{I}$ and $s = 1$. The ambiguity appears only when $\mathbf{t}_g$ satisfies $\mathbf{n}_k^\top\mathbf{t}_g = 0$ for every surfel normal $\mathbf{n}_k$. Since all normal vectors lie on $\Pi$, any $\mathbf{t}_g$ of the form

$$\mathbf{t}_g = \lambda\,\mathbf{n}_\Pi, \quad \lambda\in\mathbb{R}, \qquad (18)$$

leads to $\mathbf{n}_k^\top\mathbf{t}_g = 0$. Thus, $\mathbf{t}_g$ cannot be determined uniquely.

IV-D Recovery with Sufficient Observations

If there are sufficiently many and diverse surfels (in terms of their normals and distances) observed in multiple host frames, $s$ and $\mathbf{T}_g$ can be determined uniquely. The influence of the surfel distribution on the localization accuracy is further evaluated by simulation in Sec. V-B.
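A practical corollary of Sec. IV-A to IV-C is that the unobservable translation directions are the near-null directions of the covariance of the observed surfel normals. The sketch below (ours, with an illustrative tolerance) flags those directions; for two parallel walls it reports the two in-plane directions as unconstrained.

```python
import numpy as np

def unconstrained_translation_dirs(normals, rel_tol=1e-3):
    """Return eigenvectors of the normal covariance whose eigenvalues are
    negligible; those directions of the global translation are unobservable."""
    N = np.asarray(normals, dtype=float)
    C = N.T @ N / len(N)
    w, V = np.linalg.eigh(C)
    return V[:, w < rel_tol * w.max()]

# Two parallel walls (Sec. IV-B): translation along the corridor and vertically
# is unconstrained; only the direction across the walls is fixed.
walls = [[1, 0, 0]] * 10 + [[-1, 0, 0]] * 10
print(unconstrained_translation_dirs(walls))    # two unobservable directions
```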

V Results

In this section, quantitative results are provided to validate our method. Fig. 1 shows some qualitative results on our HKUST dataset with lidar, IMU, camera and GPS data collected from a golf cart. More qualitative results can be found in our supplementary material [27] and video.

V-A EuRoC Indoor Quantitative Results

We compared our DSL method with the state-of-the-art stereo visual-inertial localization method (MSCKF w/ map) [18], the visual-inertial SLAM method with loop closures (VINS-Mono) [6] and direct sparse odometry (DSO) [5] on the EuRoC dataset [29]. The EuRoC dataset provides stereo grayscale images, IMU data, ground-truth poses and a ground-truth lidar map. For the following results, our method and DSO were evaluated with only one of the cameras as input (the camera inputs for DSL and DSO on V1_03_difficult were photometrically calibrated with [30] to compensate for the unknown exposure time); both camera inputs and IMU data were used for MSCKF w/ map; and for VINS-Mono, the left camera and IMU data were the inputs.

When our localization method starts, the initial pose of the first camera frame is provided. We found that our system could recover from perturbations of this initial guess of around 0.3 m and 5 degrees, thanks to the constraints from the global model.

The absolute trajectory error (ATE) of each sequence and the relative pose error (RPE) over all trajectories [31] are shown in Table I, where the results of MSCKF w/ Map and VINS-Mono (loop) are those reported in [18]. The estimated and ground-truth poses were aligned for all methods, and additionally scaled for the monocular method DSO, using [32]. The results were averaged over 5 runs to reduce randomness. In the ATE results, our method outperforms the visual-only and visual-inertial methods. In the RPE results, our method yields similar errors across different trajectory segment lengths, showing that it provides accurate pose estimation over both short and long distances.

ATE per sequence:
Dataset           DSL (ours)       DSO              MSCKF w/ Map   VINS-Mono (loop)
                  left    right    left    right
V1_01_easy        0.035   0.039    0.091   0.065    0.056          0.044
V1_02_medium      0.034   0.026    0.212   0.177    0.055          0.054
V1_03_difficult   0.045   0.047    0.161   0.234    0.087          0.209
V2_01_easy        0.026   0.023    0.047   0.043    0.069          0.062
V2_02_medium      0.023   0.025    0.074   0.08     0.089          0.114
V2_03_difficult   0.103   0.083    X       X        0.149          0.149

RPE per segment length:
Segment Length    DSL (ours)       MSCKF w/ Map   VINS-Mono (loop)
                  left    right
7m                0.121   0.111    0.143          0.156
14m               0.121   0.106    0.154          0.160
21m               0.133   0.120    0.184          0.208
28m               0.108   0.100    0.175          0.223
35m               0.118   0.111    0.191          0.260
TABLE I: Average ATE (upper table, per sequence) and RPE (lower table, per segment length) [31] over 5 runs. Units are in meters; 'left'/'right' denote the camera used as input, and 'X' marks a failed run.
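For reference, the evaluation protocol described above (Umeyama alignment [32], with scale only for the monocular baseline, followed by the ATE RMSE) can be sketched as follows; this is a compact re-implementation, not the benchmark code of [31].

```python
import numpy as np

def umeyama_align(est, gt, with_scale=False):
    """Return (s, R, t) minimizing || gt - (s * R @ est + t) ||; est, gt: (N, 3)."""
    mu_e, mu_g = est.mean(0), gt.mean(0)
    E, G = est - mu_e, gt - mu_g
    U, D, Vt = np.linalg.svd(G.T @ E / len(est))
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                       # avoid a reflection
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / E.var(0).sum() if with_scale else 1.0
    t = mu_g - s * R @ mu_e
    return s, R, t

def ate_rmse(est, gt, with_scale=False):
    """ATE RMSE after (optionally scaled) rigid alignment."""
    s, R, t = umeyama_align(est, gt, with_scale)
    err = gt - (s * (R @ est.T).T + t)
    return np.sqrt((err ** 2).sum(axis=1).mean())
```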

V-B CARLA Simulator Outdoor Tests

(a) Ratio of constraints
(b) Surfel distribution
Fig. 6: Localization performance under different surfel conditions (better viewed in color). (a) Translation errors of camera poses w.r.t. the ratio of surfel constraints to total constraints. (b) Translation errors w.r.t. the ratios of the eigenvalues of the covariance matrix of the surfel normals.

We next evaluated our method in the CARLA simulator [33], which can generate maps, camera inputs and ground-truth poses. To evaluate the effects of the surfel distribution and of the ratio of surfel constraints to total constraints, we recorded the localization errors and, at the same time, all the constraints used.

Due to estimation errors of the inverse depths and incompleteness of the rendered maps (as shown in Fig. 4), not all pixels can be associated with surfels. We tested our method with different randomly sampled maps. Fig. 6(a) shows the translation errors of the camera poses w.r.t. the ratio of surfel constraints to total constraints. Large errors occur when the surfel constraints are insufficient, i.e., when this ratio is small: with insufficient surfel constraints our method degrades to a monocular visual method, which can suffer scale or pose drift without the global constraints. Thus, to ensure accurate results, a surfel map covering most of the camera observations is recommended; in practice, the lidar map used to produce the surfel map should have sufficient overlap with the camera inputs.

To show the effects of the surfel distribution, we collected the plane coefficients of the surfel constraints in each frame and computed the covariance matrix of all surfel normals, whose eigenvalues are denoted as $\lambda_1 \le \lambda_2 \le \lambda_3$. The ratios of these eigenvalues were calculated and compared with the errors of the camera poses in Fig. 6(b). On the bottom left of the figure, the translation errors are larger because all the surfels have almost the same normal direction, corresponding to the cases in Sec. IV-A and IV-B, while on the top right of the figure, where the surfel normals are distributed evenly in space, the error is smaller. These results are consistent with our analysis in Sec. IV.

Furthermore, to show the influence of map noise, we added Gaussian noise with different noise levels to the original point cloud and re-generated the surfel maps. The translation and rotation errors w.r.t. the different map noise levels are shown in Table II. We found that our method could still localize reliably at moderate noise levels. When the noise was too large, the pose accuracy degraded due to the inaccuracy of the normal estimation and of the vertex positions. However, this extreme case can be avoided by checking the map quality.

Surfel noise [m] 0.0 0.1 0.2 0.3 0.4 0.5
Translation error [m] 0.12 0.19 0.22 0.35 0.52 2.08
Rotation error [deg] 0.29 0.57 0.49 0.74 0.86 0.88
TABLE II: Average pose errors of DSL w.r.t. map noise (standard deviation of the added Gaussian noise).

V-C Runtime

Runtime analysis on different datasets (run on an Intel i7-8700K CPU with an Nvidia GTX-1080Ti GPU) can be found in Table III. Compared to the runtime of DSO [5], our proposed method adds almost no overhead from the global surfel constraints and rendering.

Dataset              EuRoC         CARLA         HKUST
Rendering (ms)       8 ± 1         11 ± 1        9 ± 1
Tracking (ms)        22 ± 20       23 ± 19       15 ± 7
Optimization (ms)    117 ± 31      112 ± 37      102 ± 34
Number of surfels    3.20E+06      8.24E+06      9.44E+06
Surfel radius (m)    0.01          0.1           0.05
Image size           752 × 480     800 × 600     640 × 480
TABLE III: Runtime analysis on different datasets (timings are mean ± standard deviation).

VI Conclusion

In this work, we have introduced DSL, a cross-modality algorithm for monocular direct sparse camera localization in a prior surfel map, which provides accurate 6-DoF camera poses. The proposed method uses a surfel representation of the 3D map. Given an estimated pose, we render the surfels into vertex and normal maps, from which we obtain the plane coefficients of the associated pixels. The plane coefficients of the surfels form the proposed homography constraints, making the whole system aware of the absolute scale and global poses. The final optimization combines the tracked points with and without surfel constraints in a fully direct photometric formulation. We have also presented a degeneration analysis of our method, which can be used to indicate the reliability of the system. Comprehensive evaluation shows that our method outperforms many state-of-the-art visual(-inertial) localization and SLAM algorithms. Our future work will investigate online map updating from camera observations and the application of DSL to more dynamic and challenging scenarios.

References

  • [1] J. Zhang and S. Singh, “Loam: Lidar odometry and mapping in real-time.” in Robotics: Science and Systems, vol. 2, 2014, p. 9.
  • [2] T. Shan and B. Englot, “Lego-loam: Lightweight and ground-optimized lidar odometry and mapping on variable terrain,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2018, pp. 4758–4765.
  • [3] H. Ye, Y. Chen, and M. Liu, “Tightly coupled 3d lidar inertial odometry and mapping,” in 2019 IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2019.
  • [4] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos, “Orb-slam: a versatile and accurate monocular slam system,” IEEE transactions on robotics, vol. 31, no. 5, pp. 1147–1163, 2015.
  • [5] J. Engel, V. Koltun, and D. Cremers, “Direct sparse odometry,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 3, pp. 611–625, 2017.
  • [6] T. Qin, P. Li, and S. Shen, “Vins-mono: A robust and versatile monocular visual-inertial state estimator,” IEEE Transactions on Robotics, vol. 34, no. 4, pp. 1004–1020, 2018.
  • [7] H. Strasdat, J. Montiel, and A. J. Davison, “Scale drift-aware large scale monocular slam,” Robotics: Science and Systems VI, vol. 2, no. 3, p. 7, 2010.
  • [8] W. Churchill and P. Newman, “Experience-based navigation for long-term localisation,” The International Journal of Robotics Research, vol. 32, no. 14, pp. 1645–1661, 2013.
  • [9] Y. Lu, J. Huang, Y.-T. Chen, and B. Heisele, “Monocular localization in urban environments using road markings,” in 2017 IEEE Intelligent Vehicles Symposium (IV).   IEEE, 2017, pp. 468–474.
  • [10] Y. Lu, J. Lee, S.-H. Yeh, H.-M. Cheng, B. Chen, and D. Song, “Sharing heterogeneous spatial knowledge: Map fusion between asynchronous monocular vision and lidar or other prior inputs,” in The International Symposium on Robotics Research (ISRR), Puerto Varas, Chile, vol. 158, 2017.
  • [11] R. W. Wolcott and R. M. Eustice, “Visual localization within lidar maps for automated urban driving,” in 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems.   IEEE, 2014, pp. 176–183.
  • [12] G. Pascoe, W. P. Maddern, and P. Newman, “Robust direct visual localisation using normalised information distance.” in BMVC, 2015, pp. 70–1.
  • [13] G. Pascoe, W. Maddern, A. D. Stewart, and P. Newman, “Farlap: Fast robust localisation using appearance priors,” in 2015 IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2015, pp. 6366–6373.
  • [14] P. Neubert, S. Schubert, and P. Protzel, “Sampling-based methods for visual navigation in 3d maps by synthesizing depth images,” in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2017, pp. 2492–2498.
  • [15] T. Caselitz, B. Steder, M. Ruhnke, and W. Burgard, “Monocular camera localization in 3d lidar maps,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2016, pp. 1926–1931.
  • [16] Y. Kim, J. Jeong, and A. Kim, “Stereo camera localization in 3d lidar maps,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2018, pp. 1–9.
  • [17] X. Ding, Y. Wang, D. Li, L. Tang, H. Yin, and R. Xiong, “Laser map aided visual inertial localization in changing environment,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2018, pp. 4794–4801.
  • [18] X. Zuo, P. Geneva, Y. Yang, W. Ye, Y. Liu, and G. Huang, “Visual-inertial localization with prior lidar map constraints,” IEEE Robotics and Automation Letters, pp. 1–1, 2019.
  • [19] A. I. Mourikis and S. I. Roumeliotis, “A multi-state constraint kalman filter for vision-aided inertial navigation,” in Proceedings 2007 IEEE International Conference on Robotics and Automation.   IEEE, 2007, pp. 3565–3572.
  • [20] B. Huhle, M. Magnusson, W. Straßer, and A. J. Lilienthal, “Registration of colored 3d point clouds with a kernel-based extension to the normal distributions transform,” in 2008 IEEE International Conference on Robotics and Automation.   IEEE, 2008, pp. 4025–4030.
  • [21] H. Huang, Y. Sun, H. Ye, and M. Liu, “Metric monocular localization using signed distance fields,” in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019.
  • [22] T. Whelan, R. F. Salas-Moreno, B. Glocker, A. J. Davison, and S. Leutenegger, “Elasticfusion: Real-time dense slam and light source estimation,” The International Journal of Robotics Research, vol. 35, no. 14, pp. 1697–1716, 2016.
  • [23] R. B. Rusu, “Semantic 3d object maps for everyday manipulation in human living environments,” KI-Künstliche Intelligenz, vol. 24, no. 4, pp. 345–348, 2010.
  • [24] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision.   Cambridge University Press, 2003.
  • [25] G. P. Huang, A. I. Mourikis, and S. I. Roumeliotis, “A first-estimates jacobian ekf for improving slam consistency,” in Experimental Robotics.   Springer, 2009, pp. 373–382.
  • [26] S. Leutenegger, S. Lynen, M. Bosse, R. Siegwart, and P. Furgale, “Keyframe-based visual–inertial odometry using nonlinear optimization,” The International Journal of Robotics Research, vol. 34, no. 3, pp. 314–334, 2015.
  • [27] H. Ye, H. Huang, and M. Liu, “Supplementary material to: Monocular direct sparse localization in a prior 3d surfel map,” Tech. Rep. [Online]. Available: https://sites.google.com/view/dsl-ram-lab/
  • [28] E. S. Jones and S. Soatto, “Visual-inertial navigation, mapping and localization: A scalable real-time causal approach,” The International Journal of Robotics Research, vol. 30, no. 4, pp. 407–430, 2011.
  • [29] M. Burri, J. Nikolic, P. Gohl, T. Schneider, J. Rehder, S. Omari, M. W. Achtelik, and R. Siegwart, “The euroc micro aerial vehicle datasets,” The International Journal of Robotics Research, vol. 35, no. 10, pp. 1157–1163, 2016.
  • [30] P. Bergmann, R. Wang, and D. Cremers, “Online photometric calibration of auto exposure video for realtime visual odometry and slam,” IEEE Robotics and Automation Letters, vol. 3, no. 2, pp. 627–634, 2017.
  • [31] J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers, “A benchmark for the evaluation of rgb-d slam systems,” in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems.   IEEE, 2012, pp. 573–580.
  • [32] S. Umeyama, “Least-squares estimation of transformation parameters between two point patterns,” IEEE Transactions on Pattern Analysis & Machine Intelligence, no. 4, pp. 376–380, 1991.
  • [33] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun, “Carla: An open urban driving simulator,” arXiv preprint arXiv:1711.03938, 2017.