I Introduction
Localization is one of the fundamental requirements for autonomous vehicles. Various sensors and algorithms have been developed to achieve real-time localization or simultaneous localization and mapping (SLAM). Though GPS can provide global position information over the earth, its localization results are easily influenced by multipath effects, and it cannot be used indoors.
Onboard sensors which do not rely on external infrastructure have therefore become a necessity for reliable localization. Cameras and lidars are two of the most popular sensors employed for localization and SLAM. Lidars can provide accurate and long-range measurements of the environment, and many lidar-based localization and mapping methods [1, 2, 3] show good performance indoors and outdoors. However, a typical 3D lidar is bulky and expensive, which limits its application on small or low-cost platforms.
Cameras have become an alternative to lidars thanks to their light weight and low cost. Camera-based methods, or those fused with an inertial measurement unit (IMU), i.e., visual-inertial methods, can meet the same demands for localization [4, 5, 6]. However, compared to lidar-based systems, they have inferior accuracy and robustness. In particular, monocular camera systems face the scale-drift problem [7]. Appearance changes, including weather and illumination, can also destabilize camera-based methods. Finding associations across multiple session maps can address this problem [8], but it is costly to store multiple maps of the same place.
An affordable way to combine the advantages of lidars and cameras is to use a 3D lidar to build a 3D map and then perform camera-based localization in this built map. In this way, accurate and large-scale maps can be built efficiently by 3D lidar mapping methods. Then, the cameras can utilize the geometric information from the map to reduce the long-term drift and obtain more accurate localization results.
Based on this idea, we present a novel monocular camera localization system in a 3D surfel map, called DSL (Direct Sparse Localization). The main contributions of our paper are as follows:

A cross-modality localization algorithm is proposed to localize camera poses in a prior surfel map. All the constraints come from direct photometric energy functions, making our system efficient in tracking and optimizing camera poses.

Global constraints from the map make the monocular system aware of the absolute scale and the global transform. We adopt the surfel representation, which makes it efficient to store 3D information and to render depth, vertex and normal maps with a modern GPU.

A degeneration analysis of our method is provided, which can hint at the uncertainty of the localization accuracy in real-world applications.

The proposed system is validated in both simulation and real-world experiments. Our method outperforms many state-of-the-art visual(-inertial) localization or SLAM algorithms.
II Related Work
In this section, we mainly discuss the literature on cross-modality localization, especially camera localization in 3D maps.
By finding the correspondences between the two sensor inputs, several methods use common objects which can be observed in both camera and lidar views. In [9], manually labeled road markings in a 3D lidar map were used to construct a sparse point cloud. Combined with epipolar geometry and the vehicle odometry, the Chamfer distance between the edge image and the projected sparse point cloud was used to estimate 6-DoF camera poses. In [10], vertical planes from both vision and lidar data were extracted. The authors took the correspondences of visual and lidar planes as coplanarity constraints in the global bundle adjustment.
Mutual information is an effective metric for cross-modality matching, and it is adopted in many methods to localize the camera in maps produced by heterogeneous sensors. In [11], the reflectivities from the lidar map were used to render synthetic images given the potential camera poses. A 3-DoF search over the potential camera poses was applied, and the optimal pose was determined by maximizing the normalized mutual information between the camera input and the synthetic image to achieve 2D localization. Using the derivatives of the analytical normalized information distance (NID), Pascoe et al. [12] extended this method to 6-DoF camera pose estimation. In [13], a similar NID-based method was proposed to evaluate the similarity between the live image and images generated from a textured 3D prior mesh to obtain the camera pose. Finally, based on mutual projections, the similarity between synthetic depth images and images from a panoramic camera was fit into a particle filter-based Monte Carlo localization framework in [14].

Exploiting the geometric information is another strategy. Typically, these methods extract feature points in their visual modules, following the scheme of indirect visual methods. Caselitz et al. [15] introduced a monocular camera localization method which performs in an iterative closest point (ICP) scheme: it iteratively associates and aligns the sparse point cloud produced by monocular visual SLAM with the lidar map to estimate the 7-DoF similarity transformation. In [16], a method for stereo camera pose estimation was proposed, which estimates the camera pose by minimizing the residuals between the depth from stereo matching and the depth of the points projected from the lidar map. Ding et al. [17] used a hybrid bundle adjustment to optimize the visual map from a stereo visual-inertial system and to align the sparse visual map against the prebuilt lidar map at the same time. Zuo et al. [18] took the tightly-coupled MSCKF [19] as front-end tracking and registered the refined semi-dense point cloud from stereo matching to the prior lidar map using a normal distribution transform (NDT)-based method [20]. Using the Signed Distance Field (SDF) representation built from stereo vision, Huang et al. [21] proposed a monocular camera localization method that increases the coherence between the indirect local structure and the SDF model. Instead of using indirect visual pipelines, our method benefits from direct visual tracking [5], which does not rely on explicit feature extractors. The correspondences of pixels among frames can be updated during the optimization, and the plane information from the surfel representation can further help make the system aware of the global pose and scale.

III Method
III-A Notation
In this paper, we denote the transformation matrix as $\mathbf{T}_{AB} \in SE(3)$, which transforms a point in the frame $B$ into the frame $A$. The corresponding Lie-algebra element, $\boldsymbol{\xi}_{AB}^{\wedge} \in \mathfrak{se}(3)$, which, for brevity, is expressed as a vector $\boldsymbol{\xi}_{AB} \in \mathbb{R}^{6}$, can be mapped by the exponential map, $\exp(\cdot)$, to $\mathbf{T}_{AB}$. The rotation matrix and translation vector of $\mathbf{T}_{AB}$ are denoted as $\mathbf{R}_{AB}$ and $\mathbf{t}_{AB}$, respectively. $I_i[\mathbf{p}]$ returns the pixel intensity of the image corresponding to the frame $i$, given the homogeneous pixel coordinates $\mathbf{p}$. We use the pinhole model with $\mathbf{K}$ as the camera intrinsic matrix and assume all images are undistorted.

III-B System Overview
The framework of our method is shown in Fig. 2. With a rough initial pose, our system first initializes direct sparse visual odometry with the depth map generated by the map rendering module (Sec. III-C). With a valid value in the depth map, a candidate point is assigned a rough inverse depth, which is used for subsequent camera tracking.
After initialization, the system obtains the vertex & normal maps of the last keyframe from the map rendering module (Sec. III-C). This rendering step runs only once after a new keyframe is added and optimized. In a local window with a fixed number of keyframes, we track points across all image regions following [5]. We can acquire the plane information of the tracked points from the vertex and normal maps if a pixel is valid in both maps. With the assumption that locally tracked points share the same plane in the world frame, we can ensure that most of the tracked points are associated with the correct global surfels, even though uncertainty may exist in the global keyframe pose. This is illustrated in Fig. 3.
III-C Map Producing and Rendering
Our surfel map is represented as a list of unordered surfels, similar to the ones proposed by Whelan et al. [22]. In our method, only the position, normal and radius are used for the camera pose estimation.
Provided a point cloud map from lidar mapping, we can build the surfel map as follows. First, voxel-grid downsampling reduces the number of points and makes them evenly distributed. Then, the normal of each point is estimated by principal component analysis (PCA) of its neighboring points [23]. The surfel map is built by assigning each surfel a point position, with its estimated normal from the processed point cloud and a radius set according to the voxel size used in the downsampling step. When the system starts, this surfel map is loaded once onto the GPU.

Given a camera pose in the global frame, the map rendering module projects the surfels into the local frame and returns the depth map, $\mathcal{D}$, or the vertex & normal maps, $\mathcal{V}$ and $\mathcal{N}$. Similarly to [22], we use the OpenGL Shading Language to predict these maps. The rendered maps have the same size as the input image. For each pixel in the rendered maps, $\mathcal{D}$ provides its depth in the given camera frame, while $\mathcal{V}$ and $\mathcal{N}$ provide its surfel position and normal in the global frame. Fig. 4 shows a sample input image and the corresponding $\mathcal{D}$ given an estimated camera pose.
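The map-building steps above can be sketched in a few lines of Python; `voxel_downsample` and `estimate_normal` are hypothetical helper names of ours, and the PCA normal is the eigenvector of the smallest eigenvalue of the neighborhood covariance, as in [23]:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one representative point (the centroid) per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    out = np.zeros((inv.max() + 1, 3))
    counts = np.bincount(inv).astype(float)
    for d in range(3):
        out[:, d] = np.bincount(inv, weights=points[:, d]) / counts
    return out

def estimate_normal(neighbors):
    """PCA normal: eigenvector of the smallest eigenvalue of the covariance."""
    cov = np.cov(neighbors.T)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending eigenvalues
    return eigvecs[:, 0]
```

In the full system, each downsampled point would then become one surfel, with its radius set from the voxel size.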
III-D Homography Constraints from Global Surfels
To use direct photometric errors as constraints, we follow [5] to formulate our energy functions. For each tracked point, the photometric residual can be written as

$$r_{ji} = I_j[\mathbf{p}'] - b_j - \frac{t_j e^{a_j}}{t_i e^{a_i}} \left( I_i[\mathbf{p}] - b_i \right), \quad (1)$$

where

$$\mathbf{p}' \doteq \mathbf{K} \left( \mathbf{R}_{ji}\, \rho_{\mathbf{p}}^{-1}\, \mathbf{K}^{-1} \mathbf{p} + \mathbf{t}_{ji} \right), \quad (2)$$

where $t_i$ and $t_j$ are the exposure times of the host and target images; $a_i, b_i$ and $a_j, b_j$ are the coefficients of the affine brightness functions of the two images; $\rho_{\mathbf{p}}$ is the inverse depth of the corresponding normalized point, $\mathbf{K}^{-1}\mathbf{p}$, in the host frame, $i$; and $\doteq$ indicates equality up to a scale factor.
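A DSO-style residual of this form is straightforward to write down; the sketch below treats images as 2-D arrays, and the function name and toy indexing are ours, meant only as an illustration of Eqn. 1:

```python
import numpy as np

def photometric_residual(I_i, I_j, p, p_proj, t_i, t_j, a_i, b_i, a_j, b_j):
    """r = I_j[p'] - b_j - (t_j e^{a_j}) / (t_i e^{a_i}) * (I_i[p] - b_i),
    where t_* are exposure times and (a_*, b_*) affine brightness parameters."""
    gain = (t_j * np.exp(a_j)) / (t_i * np.exp(a_i))
    return I_j[p_proj] - b_j - gain * (I_i[p] - b_i)
```

With identical images, unit exposure times and zero affine parameters, the residual vanishes, as expected.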
Given the plane coefficients of the pixel $\mathbf{p}$ in the host frame $i$, $(\mathbf{n}_i, d_i)$, such that $\mathbf{n}_i^\top \mathbf{x} = d_i$ for any point $\mathbf{x}$ on the plane, the homography [24] between $i$ and the target frame, $j$, can be written as

$$\mathbf{p}' \doteq \mathbf{K} \left( \mathbf{R}_{ji} + \frac{\mathbf{t}_{ji}\, \mathbf{n}_i^\top}{d_i} \right) \mathbf{K}^{-1} \mathbf{p}. \quad (3)$$

The variables to estimate are the relative poses, $\mathbf{T}_{ji}$, and the affine brightness parameters between $i$ and $j$. We denote the full set of variables as $\mathcal{X}$. Note that the map rendering module provides the vertex & normal information in the global frame, $w$. Thus, the plane information in $i$ needs to be transformed from the global frame with the estimated global pose of the host frame, $\mathbf{T}_{wi}$, as

$$\mathbf{n}_i = \mathbf{R}_{wi}^\top \mathbf{n}_w, \qquad d_i = d_w - \mathbf{n}_w^\top \mathbf{t}_{wi}, \quad (4)$$

where $(\mathbf{n}_w, d_w)$ are the plane coefficients of the associated surfel in $w$. Combining Eqn. 1, 3 and 4, the photometric residual with the surfel constraint can be derived as

$$r_{ji} = I_j\!\left[ \mathbf{K}\!\left( \mathbf{R}_{ji} + \frac{\mathbf{t}_{ji}\, \mathbf{n}_i^\top}{d_i} \right) \mathbf{K}^{-1} \mathbf{p} \right] - b_j - \frac{t_j e^{a_j}}{t_i e^{a_i}} \left( I_i[\mathbf{p}] - b_i \right). \quad (5)$$
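The plane-induced warp can be checked numerically: for a 3-D point on the plane, warping its pixel through the homography of Eqn. 3 must agree with projecting the transformed 3-D point. A sketch with assumed intrinsics and the plane convention $\mathbf{n}^\top \mathbf{x} = d$ in the host frame (all values here are illustrative):

```python
import numpy as np

def rot_y(th):
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, 0., s], [0., 1., 0.], [-s, 0., c]])

K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])  # assumed intrinsics
R, t = rot_y(0.1), np.array([0.1, 0.05, -0.2])   # relative pose, target <- host
n, d = np.array([0., 0., 1.]), 2.0               # plane n^T x = d in the host frame
X = np.array([0.3, -0.2, 2.0])                   # a 3-D point on that plane (n @ X == d)

# plane-induced homography, as in Eqn. 3
H = K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)

p = K @ X; p = p / p[2]            # host pixel (homogeneous)
p_h = H @ p; p_h = p_h / p_h[2]    # warped via the homography
X_j = R @ X + t                    # warped via the full 3-D transform
p_j = K @ X_j; p_j = p_j / p_j[2]
```

Both routes yield the same pixel, which is exactly why points on a surfel plane need no individual inverse depth.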
Note that Eqn. 5 does not contain the inverse depths of the points with surfel constraints, but includes the global poses of the host frames, $\mathbf{T}_{wi}$. This helps to constrain the camera poses globally in $w$. For simplicity, we denote the warped pixel in Eqn. 5 as $\mathbf{p}'$, and the to-be-optimized pose variables as $\boldsymbol{\xi}_{ji}$ and $\boldsymbol{\xi}_{wi}$. Then, the Jacobian of the residual w.r.t. $\boldsymbol{\xi}_{ji}$ and $\boldsymbol{\xi}_{wi}$ can be written, by the chain rule, as

$$\frac{\partial r_{ji}}{\partial \boldsymbol{\xi}} = \frac{\partial r_{ji}}{\partial \mathbf{p}'}\, \frac{\partial \mathbf{p}'}{\partial \boldsymbol{\xi}}, \quad \boldsymbol{\xi} \in \{ \boldsymbol{\xi}_{ji}, \boldsymbol{\xi}_{wi} \}, \quad (6)$$

where

$$\frac{\partial r_{ji}}{\partial \mathbf{p}'} = \nabla I_j[\mathbf{p}'] \quad (7)$$

is the image gradient at the warped pixel, and

$$\frac{\partial \mathbf{p}'}{\partial \boldsymbol{\xi}_{ji}}, \quad \frac{\partial \mathbf{p}'}{\partial \boldsymbol{\xi}_{wi}} \quad (8)$$

follow from differentiating the homography of Eqn. 3, with the plane coefficients of Eqn. 4, w.r.t. the relative pose and the global host pose, respectively.
III-E Optimization
The final optimization is based on relative constraints from direct sparse tracking and global constraints from the global surfels, as shown in Fig. 5. Due to map incompleteness or estimation uncertainty, not all tracked points can be associated with a global surfel. Thus, the final energy function to be optimized becomes

$$E = E_s + E_{ns}, \quad (9)$$

where $E_s$ and $E_{ns}$ are the energy functions corresponding to the surfel and non-surfel constraints, respectively. In detail,

$$E_{*} = \sum_{i \in \mathcal{F}} \sum_{\mathbf{p} \in \mathcal{P}_i} \sum_{j \in \mathrm{obs}(\mathbf{p})} \sum_{\tilde{\mathbf{p}} \in \mathcal{N}_{\mathbf{p}}} w_{\tilde{\mathbf{p}}} \left\| r_{ji}(\tilde{\mathbf{p}}) \right\|_{\gamma}, \quad (10)$$

where $E_*$ denotes $E_s$ or $E_{ns}$, with the residual represented in Eqn. 5 or 1, respectively; $\mathcal{F}$ is the set of all keyframes, $\mathcal{P}_i$ is the set of tracked points (pixels) in $i$, $\mathrm{obs}(\mathbf{p})$ is the set of frames where the point is visible, and $\mathcal{N}_{\mathbf{p}}$ is the set of pixels in the patch centered on $\mathbf{p}$; $w_{\tilde{\mathbf{p}}}$ is the gradient-dependent weighting defined in [5] and $\|\cdot\|_{\gamma}$ is the Huber loss. The above problem can be regarded as a nonlinear least-squares problem, which can be solved by the Gauss-Newton or Levenberg-Marquardt method.
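The robust least-squares problem of Eqn. 9-10 is typically solved by iteratively re-weighted Gauss-Newton; below is a generic sketch on a toy 1-D fitting problem with one gross outlier, not the full photometric system (all names are ours):

```python
import numpy as np

def huber_weight(r, delta=1.0):
    """IRLS weight of the Huber loss: 1 in the quadratic region, delta/|r| outside."""
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / a)

def gauss_newton(residual_jac, x0, iters=20):
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        r, J = residual_jac(x)
        w = huber_weight(r)
        H = J.T @ (w[:, None] * J)   # weighted normal equations
        g = J.T @ (w * r)
        x -= np.linalg.solve(H, g)
    return x

# toy problem: fit the slope m of y = m * t, with one corrupted sample
t = np.array([1., 2., 3., 4.])
y = 2.0 * t
y[3] += 50.0                         # gross outlier

def rj(x):
    return x[0] * t - y, t[:, None]  # residuals and Jacobian

m = gauss_newton(rj, [0.0])
```

The Huber re-weighting down-weights the outlier, so the estimate stays near the inlier slope instead of being dragged away, mirroring the role of $\|\cdot\|_{\gamma}$ in Eqn. 10.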
III-F Implementation Details
In this section, we briefly introduce the implementation details of the remaining parts of our system.
III-F1 Filtering and Association of Tracked Points
We consider the pixels which have nonzero values in the rendered vertex & normal maps in the following.

After each iteration of the optimization, the updated inverse depth and the projected pixel coordinates in the target frame can be obtained. From the intersection of the ray of the host point and the plane associated with the surfel, the inverse depth and projected pixel induced by the surfel can also be obtained. We filter and associate these tracked points by the following criterion:

If the optimized inverse depth and projected pixel agree with the surfel-induced ones within predefined thresholds, the point is regarded as a converged point and we associate it to the corresponding surfel. Points that fail this check are removed from Eqn. 9, while each associated point is moved from the non-surfel energy term into the surfel energy term.
After associating a point with a surfel, the inverse depth of the point is no longer a variable to estimate and can be determined from the optimized pose of the host frame. Thus, the semi-dense depth map for frame tracking will set the surfel-induced depth as the pixel's depth, with the uncertainty obtained from the resolution of the map.
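The surfel-induced depth used above is just a ray-plane intersection; a minimal sketch with assumed intrinsics, writing the surfel plane as $\mathbf{n}^\top \mathbf{x} = d$ in the camera frame:

```python
import numpy as np

def depth_from_plane(K, pixel, n, d):
    """Intersect the viewing ray through `pixel` with the plane n^T x = d
    (camera frame). Returns z such that x = z * K^{-1} [u, v, 1] lies on it."""
    ray = np.linalg.solve(K, np.array([pixel[0], pixel[1], 1.0]))
    return d / (n @ ray)

K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])  # assumed intrinsics
z = depth_from_plane(K, (320.0, 240.0), np.array([0., 0., 1.]), 3.0)
```

For the principal point and a fronto-parallel plane at 3 m, the recovered depth is exactly 3 m; the inverse depth is then simply `1.0 / z`.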
III-F2 Front-end
For more details of the front-end implementation, we refer readers to [5]. We follow frame and point management similar to that described in [5]. The frame management tracks each frame by coarse-to-fine direct alignment, and creates and marginalizes keyframes to maintain the local window; the point management selects, tracks and activates points for tracking and optimization.
III-F3 Marginalization
For the points without surfel constraints, we apply the same marginalization process as the one in [5], where the First-Estimate Jacobian [25, 26] is applied. Since Eqn. 5 does not involve inverse depths, we do not need to marginalize the points with surfel constraints; the related residuals of these points are involved in the marginalization of frames only.
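The frame marginalization can be sketched as a Schur complement on the normal equations; this is a generic illustration of ours and omits the First-Estimate Jacobian bookkeeping:

```python
import numpy as np

def marginalize(H, b, keep, marg):
    """Remove the `marg` block from H x = b via the Schur complement,
    leaving an equivalent prior on the `keep` block."""
    Hkk = H[np.ix_(keep, keep)]
    Hkm = H[np.ix_(keep, marg)]
    Hmm = H[np.ix_(marg, marg)]
    Hmm_inv = np.linalg.inv(Hmm)
    H_new = Hkk - Hkm @ Hmm_inv @ Hkm.T
    b_new = b[keep] - Hkm @ Hmm_inv @ b[marg]
    return H_new, b_new
```

The key property is that solving the reduced system gives exactly the kept sub-vector of the full solution, so dropping old states loses no information about the remaining ones (at the linearization point).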
IV Degeneration Analysis
Degeneration appears when the surfel structure cannot uniquely constrain the camera poses to the global surfel map. In this section, several common degeneration cases are discussed. Then, we show how the system recovers from localization drift given sufficient observations. The detailed derivations for this section can be found in the supplementary material [27].
Surfel distributions, especially the normal directions of the surfels, can influence the performance of the proposed system. The tracked points with no surfel associations can provide only relative constraints for camera poses. If all the constraints come from non-surfel points, the system is equivalent to visual-only odometry, which has scale ambiguity [28].
To simplify the analysis, we consider that the constraints come directly from points in the image planes of $i$ and $j$, instead of their intensity values, and that the pose of the first frame is given. Then, in this visual-only case, the camera poses and inverse depths are constrained by the following relationship:

$$\mathbf{p}' \doteq \mathbf{K} \left( \mathbf{R}_{ji}\, \rho_{\mathbf{p}}^{-1}\, \mathbf{K}^{-1} \mathbf{p} + \mathbf{t}_{ji} \right). \quad (11)$$

For any scale factor $s$ and global transform $\mathbf{T}_g$ (with corresponding rotation $\mathbf{R}_g$ and translation $\mathbf{t}_g$), identical measurements of $\mathbf{p}$ and $\mathbf{p}'$ are produced by the following variables with tilde sign:

$$\tilde{\mathbf{R}}_{wi} = \mathbf{R}_g \mathbf{R}_{wi}, \quad \tilde{\mathbf{t}}_{wi} = s\, \mathbf{R}_g \mathbf{t}_{wi} + \mathbf{t}_g, \quad \tilde{\rho}_{\mathbf{p}} = \rho_{\mathbf{p}} / s. \quad (12)$$

By Eqn. 12, the relative pose between $i$ and $j$ and the inverse depth of any point from non-surfel constraints become

$$\tilde{\mathbf{R}}_{ji} = \mathbf{R}_{ji}, \quad \tilde{\mathbf{t}}_{ji} = s\, \mathbf{t}_{ji}, \quad \tilde{\rho}_{\mathbf{p}} = \rho_{\mathbf{p}} / s, \quad (13)$$

which leave Eqn. 11 unchanged.
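The invariance stated in Eqn. 12 and 13 can be verified numerically: rescaling all camera centers by $s$ and applying a rigid transform leaves every relative rotation unchanged and scales every relative translation by $s$. A sketch with poses stored as world-from-camera pairs (R, t); all values are illustrative:

```python
import numpy as np

def rot_z(th):
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])

def relative(Ra, ta, Rb, tb):
    """Pose of frame b expressed in frame a: T_ab = T_wa^{-1} T_wb."""
    return Ra.T @ Rb, Ra.T @ (tb - ta)

# two world-from-camera poses
R1, t1 = rot_z(0.2), np.array([1., 0., 0.])
R2, t2 = rot_z(0.5), np.array([2., 1., 0.])

# gauge transform: scale s and rigid (Rg, tg), applied as in Eqn. 12
s, Rg, tg = 3.0, rot_z(1.0), np.array([5., -2., 1.])
R1t, t1t = Rg @ R1, s * (Rg @ t1) + tg
R2t, t2t = Rg @ R2, s * (Rg @ t2) + tg

Rrel, trel = relative(R1, t1, R2, t2)
Rrelt, trelt = relative(R1t, t1t, R2t, t2t)
# Rrelt equals Rrel, and trelt equals s * trel, as in Eqn. 13
```

Since monocular reprojection is insensitive to a uniform rescaling of the translations, these transformed poses explain the image measurements equally well.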
To analyze the degeneration with surfel constraints, we make two assumptions: 1) the visual system can track points ideally, which leads to a relatively accurate visual structure, with camera poses known up to a scale and an unknown global transform; 2) the surfel coefficients, as well as the associations between tracked points and surfels, are known.
From the above assumptions and the accurate relative pose relationship from Eqn. 14, we can regard the relative poses, inverse depths and pixel observations as known and locally constrained. The uncertainty comes from the unknown scale $s$ and global transform $\mathbf{T}_g$.
We can rewrite the surfel constraints as

$$\mathbf{n}_w^\top \left( s\, \mathbf{R}_g\, \mathbf{x}_w + \mathbf{t}_g \right) = d_w, \quad (15)$$

for every tracked point $\mathbf{x}_w$ associated with a surfel of plane coefficients $(\mathbf{n}_w, d_w)$ in the global frame. The degeneration exists when two or more different state pairs ($s$ and $\mathbf{T}_g$) hold the same constraints from Eqn. 15.
IV-A Single Plane
The first degeneration case is when all points are on the same plane and share the same plane coefficients, $(\mathbf{n}_w, d_w)$. In this case, identical measurements can be produced by any $s$, $\mathbf{R}_g$ and $\mathbf{t}_g$ satisfying

$$\mathbf{R}_g \mathbf{n}_w = \mathbf{n}_w, \qquad \mathbf{n}_w^\top \mathbf{t}_g = (1 - s)\, d_w. \quad (16)$$

Neither $s$ nor $\mathbf{T}_g$ can be uniquely determined.
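The single-plane ambiguity can be illustrated numerically: any rigid motion that maps the plane onto itself (here, the in-plane part of the degenerate family, with $s = 1$) leaves every surfel constraint satisfied:

```python
import numpy as np

n, d = np.array([0., 0., 1.]), 2.0           # a single global plane n^T x = d
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(100, 2))
pts = np.column_stack([xy, np.full(100, d)])  # 100 points on the plane z = 2

# an in-plane motion: rotation about the normal plus translation within the plane
th = 0.7
Rg = np.array([[np.cos(th), -np.sin(th), 0.],
               [np.sin(th),  np.cos(th), 0.],
               [0., 0., 1.]])
tg = np.array([0.4, -1.3, 0.0])
pts2 = pts @ Rg.T + tg

# every transformed point still satisfies the same plane equation, so the
# surfel constraint cannot distinguish the two configurations
residual = pts2 @ n - d
```

The residual is identically zero, so this entire family of transforms is invisible to a single-plane surfel map.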
IV-B Parallel Planes
The parallel-planes case appears when, e.g., all the points come from the two sides of a long passage. In this case, the absolute scale can be determined, but the global transform still remains ambiguous. Any $\mathbf{R}_g$ and $\mathbf{t}_g$ meeting

$$\mathbf{R}_g \mathbf{n}_w = \mathbf{n}_w, \qquad \mathbf{n}_w^\top \mathbf{t}_g = 0 \quad (17)$$

will not violate the surfel or non-surfel constraints. It can be considered as a particular case of Sec. IV-A, where the scale $s = 1$. $\mathbf{T}_g$ can be formed by any rotation aligned with the plane normal, $\mathbf{n}_w$, and any translation perpendicular to $\mathbf{n}_w$.
IV-C Non-parallel Planes with Coplanar Normals
In this case, all normal vectors of the surfel constraints spread on a single plane, $\Pi$. We denote the normal vector of $\Pi$ as $\mathbf{n}_\Pi$. Different from the case in Sec. IV-B, two or more non-parallel planes now exist. There will be no ambiguity in the global rotation and scale, i.e., $\mathbf{R}_g = \mathbf{I}$ and $s = 1$. The ambiguity appears only when $\mathbf{t}_g$ satisfies $\mathbf{n}_w^\top \mathbf{t}_g = 0$ for all associated surfel normals $\mathbf{n}_w$. Since all normal vectors spread on $\Pi$, any

$$\mathbf{t}_g = \lambda\, \mathbf{n}_\Pi, \quad \lambda \in \mathbb{R}, \quad (18)$$

leads to $\mathbf{n}_w^\top \mathbf{t}_g = 0$. Thus, $\mathbf{t}_g$ cannot be determined uniquely.
IV-D Recovery with Sufficient Observations
If there are sufficient surfels with diverse normals and offsets observed in multiple host frames, $s$ and $\mathbf{T}_g$ can be determined uniquely. The influence of the surfel distribution on the localization accuracy is further evaluated by simulation in Sec. V-B.
V Results
In this section, quantitative results are provided to validate our method. Fig. 1 shows some qualitative results on our HKUST dataset with lidar, IMU, camera and GPS data collected from a golf cart. More qualitative results can be found in our supplementary material [27] and video.
V-A EuRoC Indoor Quantitative Results
We compared our DSL method with the state-of-the-art stereo-inertial localization method (MSCKF w/ map) [18], the visual-inertial SLAM method with loop closures (VINS-Mono) [6] and direct sparse odometry (DSO) [5] on the EuRoC dataset [29]. The EuRoC dataset provides stereo grayscale images, IMU data, ground-truth poses and a ground-truth lidar map. For the following results, our method and DSO were evaluated with one of the cameras as the only input (for V1_03_difficult, the camera inputs for DSL and DSO are photometrically calibrated by [30] to compensate for the unknown exposure time); both camera inputs and IMU data were used for MSCKF w/ map; and for VINS-Mono, the left camera and IMU data were the inputs.
When our localization method starts, the initial pose of the first camera frame is provided. We found that our system could recover from perturbations of the initial guess of around 0.3 m and 5 degrees, thanks to the constraints from the global model.
The absolute trajectory error (ATE) of each sequence and the relative pose error (RPE) over all trajectories [31] are shown in Table I, where the results of MSCKF w/ map and VINS-Mono (loop) are those reported by [18]. The estimated and ground-truth poses were aligned for all methods, and additionally scaled for the monocular method DSO, by [32]. The results were averaged over 5 runs to reduce randomness. In the ATE results, our method outperforms the visual-only and visual-inertial methods. In the RPE results, our method performs consistently across different lengths of trajectory segment, which shows that it provides accurate pose estimation over both short and long distances.


V-B CARLA Simulator Outdoor Tests
[Figure: (a) Translation errors of camera poses w.r.t. the ratio of the surfel constraints to the total constraints. (b) Translation errors w.r.t. the ratio of eigenvalues of the covariance matrix of the surfel normals (better viewed in color).]

We next evaluated our method within the CARLA simulator [33], which is capable of generating maps, camera inputs and ground-truth poses. To evaluate the effects of the surfel distribution and the ratio of the surfel constraints to the total constraints, we collected the localization errors and all the constraints used at the same time.
Due to the estimation errors of the inverse depths and the incompleteness of the rendered maps, as shown in Fig. 4, not all pixels can be associated to surfels. We tested our method with different randomly sampled maps. In Fig. 5(a), the translation errors of camera poses w.r.t. the ratio of the surfel constraints to the total constraints are shown. We can see that large errors exist when the surfel constraints are not sufficient, i.e., when the ratio is small. This is because with insufficient surfel constraints, our method degrades to a monocular visual method, which can have scale or pose drift without the global constraints. Thus, to ensure accurate results, a surfel map covering most of the camera observations is recommended. In practice, the lidar map used to produce the surfel map should have sufficient overlap with the camera inputs.
To show the effects of the surfel distribution, we collected the plane coefficients of the surfel constraints in each frame and computed the covariance matrix of all surfel normals, whose eigenvalues are denoted as $\lambda_1 \leq \lambda_2 \leq \lambda_3$. The ratios of these eigenvalues were calculated and are compared with the errors of the camera poses in Fig. 5(b). On the bottom left of the figure, the translation errors are larger because all the surfels have almost the same normal direction, corresponding to the cases in Sec. IV-A or IV-B, while on the top right of the figure, the errors are smaller, as the surfel normals are distributed evenly in space. These results are consistent with our analysis in Sec. IV.
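The conditioning metric used here, the eigenvalues of the covariance of the surfel normals, can be computed directly; near-parallel normals drive all but one eigenvalue toward zero, while well-spread normals keep them balanced (a sketch of ours):

```python
import numpy as np

def normal_spread(normals):
    """Eigenvalues (ascending) of the covariance of unit surfel normals."""
    N = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    cov = np.cov(N.T)
    return np.sort(np.linalg.eigvalsh(cov))

parallel = np.tile([0., 0., 1.], (50, 1))          # degenerate: one direction only
spread = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.],
                   [-1., 0., 0.], [0., -1., 0.], [0., 0., -1.]], dtype=float)
l_par = normal_spread(parallel)
l_spr = normal_spread(spread)
```

For the parallel set all eigenvalues vanish, while for the evenly spread set they are equal, matching the two extremes of Fig. 5(b).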
Furthermore, to show the influence of map noise, we added Gaussian noise with different noise levels to the original point cloud and regenerated the surfel maps. The translation and rotation errors w.r.t. different map noises are shown in Table II. We found that our method could still deliver reliable localization with a noise standard deviation of up to 0.4 m. When the noise was too large, the pose accuracy degraded due to the inaccuracy of the normal estimation and of the vertex positions. However, this extreme case can be avoided by checking the map quality.

Surfel noise [m]       0.0   0.1   0.2   0.3   0.4   0.5
Translation error [m]  0.12  0.19  0.22  0.35  0.52  2.08
Rotation error [deg]   0.29  0.57  0.49  0.74  0.86  0.88
V-C Runtime
Runtime analysis (run on an Intel i7-8700K CPU with an Nvidia GTX 1080 Ti GPU) on the different datasets can be found in Table III. Compared to the runtime of DSO [5], our proposed method incurs almost no additional overhead from the global surfel constraints and rendering.

Dataset            EuRoC     CARLA     HKUST
Rendering (ms)     8 ± 1     11 ± 1    9 ± 1
Tracking (ms)      22 ± 20   23 ± 19   15 ± 7
Optimization (ms)  117 ± 31  112 ± 37  102 ± 34
Number of surfels  3.20E+06  8.24E+06  9.44E+06
Surfel radius (m)  0.01      0.1       0.05
Image size         752×480   800×600   640×480
VI Conclusion
In this work, we have introduced a cross-modality algorithm for monocular direct sparse camera localization in a prior surfel map (DSL), which can provide accurate 6-DoF camera poses. The proposed method uses a surfel representation of the 3D map. Given an estimated pose, we render the surfels into vertex and normal maps, from which we obtain the plane coefficients of the associated pixels. These plane coefficients form the proposed homography constraints, which make the whole system aware of the absolute scale and global poses. The final optimization combines the tracked points with and without surfel constraints in a fully direct photometric formulation. We have also presented a degeneration analysis of our method, which can be used to indicate the reliability of the system. Comprehensive evaluation shows that our method outperforms many state-of-the-art visual(-inertial) localization or SLAM algorithms. Our future work will investigate online map updating from camera observations and the application of DSL in more dynamic and challenging scenarios.
References
 [1] J. Zhang and S. Singh, “LOAM: Lidar odometry and mapping in real-time,” in Robotics: Science and Systems, vol. 2, 2014, p. 9.
 [2] T. Shan and B. Englot, “LeGO-LOAM: Lightweight and ground-optimized lidar odometry and mapping on variable terrain,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018, pp. 4758–4765.
 [3] H. Ye, Y. Chen, and M. Liu, “Tightly coupled 3d lidar inertial odometry and mapping,” in 2019 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2019.
 [4] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos, “ORB-SLAM: A versatile and accurate monocular SLAM system,” IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147–1163, 2015.
 [5] J. Engel, V. Koltun, and D. Cremers, “Direct sparse odometry,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 3, pp. 611–625, 2017.
 [6] T. Qin, P. Li, and S. Shen, “VINS-Mono: A robust and versatile monocular visual-inertial state estimator,” IEEE Transactions on Robotics, vol. 34, no. 4, pp. 1004–1020, 2018.
 [7] H. Strasdat, J. Montiel, and A. J. Davison, “Scale drift-aware large scale monocular SLAM,” Robotics: Science and Systems VI, vol. 2, no. 3, p. 7, 2010.
 [8] W. Churchill and P. Newman, “Experience-based navigation for long-term localisation,” The International Journal of Robotics Research, vol. 32, no. 14, pp. 1645–1661, 2013.
 [9] Y. Lu, J. Huang, Y.T. Chen, and B. Heisele, “Monocular localization in urban environments using road markings,” in 2017 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2017, pp. 468–474.
 [10] Y. Lu, J. Lee, S.H. Yeh, H.M. Cheng, B. Chen, and D. Song, “Sharing heterogeneous spatial knowledge: Map fusion between asynchronous monocular vision and lidar or other prior inputs,” in The International Symposium on Robotics Research (ISRR), Puerto Varas, Chile, vol. 158, 2017.
 [11] R. W. Wolcott and R. M. Eustice, “Visual localization within lidar maps for automated urban driving,” in 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2014, pp. 176–183.
 [12] G. Pascoe, W. P. Maddern, and P. Newman, “Robust direct visual localisation using normalised information distance.” in BMVC, 2015, pp. 70–1.
 [13] G. Pascoe, W. Maddern, A. D. Stewart, and P. Newman, “Farlap: Fast robust localisation using appearance priors,” in 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015, pp. 6366–6373.
 [14] P. Neubert, S. Schubert, and P. Protzel, “Sampling-based methods for visual navigation in 3d maps by synthesizing depth images,” in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017, pp. 2492–2498.
 [15] T. Caselitz, B. Steder, M. Ruhnke, and W. Burgard, “Monocular camera localization in 3d lidar maps,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016, pp. 1926–1931.
 [16] Y. Kim, J. Jeong, and A. Kim, “Stereo camera localization in 3d lidar maps,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018, pp. 1–9.
 [17] X. Ding, Y. Wang, D. Li, L. Tang, H. Yin, and R. Xiong, “Laser map aided visual inertial localization in changing environment,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018, pp. 4794–4801.
 [18] X. Zuo, P. Geneva, Y. Yang, W. Ye, Y. Liu, and G. Huang, “Visual-inertial localization with prior lidar map constraints,” IEEE Robotics and Automation Letters, pp. 1–1, 2019.

 [19] A. I. Mourikis and S. I. Roumeliotis, “A multi-state constraint Kalman filter for vision-aided inertial navigation,” in Proceedings 2007 IEEE International Conference on Robotics and Automation. IEEE, 2007, pp. 3565–3572.
 [20] B. Huhle, M. Magnusson, W. Straßer, and A. J. Lilienthal, “Registration of colored 3d point clouds with a kernel-based extension to the normal distributions transform,” in 2008 IEEE International Conference on Robotics and Automation. IEEE, 2008, pp. 4025–4030.
 [21] H. Huang, Y. Sun, H. Ye, and M. Liu, “Metric monocular localization using signed distance fields,” in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019.
 [22] T. Whelan, R. F. SalasMoreno, B. Glocker, A. J. Davison, and S. Leutenegger, “Elasticfusion: Realtime dense slam and light source estimation,” The International Journal of Robotics Research, vol. 35, no. 14, pp. 1697–1716, 2016.
 [23] R. B. Rusu, “Semantic 3d object maps for everyday manipulation in human living environments,” KI-Künstliche Intelligenz, vol. 24, no. 4, pp. 345–348, 2010.

 [24] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision. Cambridge University Press, 2003.
 [25] G. P. Huang, A. I. Mourikis, and S. I. Roumeliotis, “A first-estimates Jacobian EKF for improving SLAM consistency,” in Experimental Robotics. Springer, 2009, pp. 373–382.
 [26] S. Leutenegger, S. Lynen, M. Bosse, R. Siegwart, and P. Furgale, “Keyframe-based visual–inertial odometry using nonlinear optimization,” The International Journal of Robotics Research, vol. 34, no. 3, pp. 314–334, 2015.
 [27] H. Ye, H. Huang, and M. Liu, “Supplementary material to: Monocular direct sparse localization in a prior 3d surfel map,” Tech. Rep. [Online]. Available: https://sites.google.com/view/dslramlab/
 [28] E. S. Jones and S. Soatto, “Visual-inertial navigation, mapping and localization: A scalable real-time causal approach,” The International Journal of Robotics Research, vol. 30, no. 4, pp. 407–430, 2011.
 [29] M. Burri, J. Nikolic, P. Gohl, T. Schneider, J. Rehder, S. Omari, M. W. Achtelik, and R. Siegwart, “The euroc micro aerial vehicle datasets,” The International Journal of Robotics Research, vol. 35, no. 10, pp. 1157–1163, 2016.
 [30] P. Bergmann, R. Wang, and D. Cremers, “Online photometric calibration of auto exposure video for realtime visual odometry and slam,” IEEE Robotics and Automation Letters, vol. 3, no. 2, pp. 627–634, 2017.
 [31] J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers, “A benchmark for the evaluation of RGB-D SLAM systems,” in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2012, pp. 573–580.
 [32] S. Umeyama, “Leastsquares estimation of transformation parameters between two point patterns,” IEEE Transactions on Pattern Analysis & Machine Intelligence, no. 4, pp. 376–380, 1991.
 [33] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun, “CARLA: An open urban driving simulator,” arXiv preprint arXiv:1711.03938, 2017.