I. Introduction

Accurate robot localization in urban environments is in great demand for autonomous vehicles. Since GPS localization is unstable without direct line-of-sight to the satellites, LiDAR-based localization modules are often used for their accurate range measurements. Additionally, urban scenes have relatively time-invariant geometric structures (e.g., buildings), so a one-time 3D map construction can serve long-term localization. However, the high cost and heavy weight of LiDAR sensors limit their wide application. Cameras and IMUs are low-cost, light-weight, and commonly available sensors, and current visual-inertial pose estimation and mapping methods are well-developed for a variety of robot systems [1, 2, 3]. Nevertheless, state estimation methods that use only image features are prone to failures under lighting or texture changes in the environment. Consequently, if camera localization modules can be associated with a prior 3D map, i.e., if visual information can be fused with range measurements, these light-weight and small camera modules have great potential for accurate localization in urban environments without a LiDAR.
However, the fusion of image data with 3D point clouds is challenging due to appearance differences and modality gaps. Current approaches typically transfer the 3D data into 2D space, or reconstruct 3D point clouds from 2D images, to align the data for pose estimation. Based on the characteristics of urban environments, our intuition is that the major geometric structures, such as lines and planes, can be captured in both 3D maps and 2D images regardless of appearance differences and modality gaps. Direct 2D-3D geometric co-occurrence correspondence is more robust and precise than the association of domain-transferred data. Therefore, our purpose is to directly estimate 2D-3D geometric line correspondences for accurate and long-term camera localization.
In this work, we propose an approach for real-time, light-weight monocular camera localization in prior 3D LiDAR maps using direct 2D-3D geometric line correspondences. We assume a coarse pose initialization is given and focus on pose tracking in maps, following the related works [4, 5]. For geometric co-occurrence feature extraction, 3D line segments are detected offline from LiDAR maps, while robust 2D line segments are extracted online from video sequences. Using the 6-DoF pose prediction from VIO, local visible 3D lines in the camera field-of-view (FoV) are extracted and directly matched with the 2D line features to obtain coarse 2D-3D line correspondences. Finally, the camera pose and the 2D-3D correspondences are iteratively optimized by minimizing the projection error of the correspondences and rejecting outliers.
The main contribution of this work is to estimate geometric 2D-3D line correspondences for camera localization, which efficiently associates every keyframe with the prior LiDAR map. The 2D-3D line correspondence estimation is robust to appearance changes and suitable for camera localization in urban environments. Fig. 1 shows a camera image with 2D-3D line correspondences and estimated camera poses in the LiDAR map.
II. Related Work
Vision-based localization in maps establishes correspondences between 2D and 3D modalities to improve localization robustness and accuracy. To overcome the modality gaps and appearance differences, general approaches use “intermediate products” to transfer the matching into the same space, i.e., into 2D space or into 3D space [4, 8].
The first kind of camera localization in 3D maps matches photometry in 2D image space. In most visual simultaneous localization and mapping (SLAM) methods [1, 2], sparse point cloud maps are reconstructed with photometric (or visual) descriptors, and loop closure detection is conducted by matching these descriptors to decrease drift. In image-based localization methods [7, 9], large-scale maps are reconstructed using SfM with visual features, and these features are efficiently matched to yield 2D-3D correspondences for camera localization. Nevertheless, appearance and visual features are sensitive to illumination changes and lighting conditions, which makes the correspondences unstable for long-term camera localization. Additionally, to handle LiDAR maps without associated visual features, LiDAR appearance synthetic images are often used to directly match with camera images by normalized mutual information (NMI) or normalized information distance (NID). More recent methods utilize colored point cloud projections and synthesized depth images to match with live images for camera localization. All of these localization methods try to transfer the 2D-3D matching problem into a 2D-2D matching problem.
The second strategy matches geometry in 3D space to estimate camera poses. Using local bundle adjustment, [4] proposed to match a sparse locally reconstructed 3D point cloud with a given 3D LiDAR map, which solves the scale estimation problem of a monocular VO system and achieves online estimation of 6-DoF camera poses. Similarly, 3D structural descriptors have been used for matching LiDAR maps with sparse visually reconstructed point clouds. In [8, 5], dense local point clouds are reconstructed from a stereo camera to match with the 3D LiDAR maps, and the matching results are then loosely or tightly coupled into the VO or VIO system for optimizing camera poses. These localization-by-3D-registration methods obtain feasible results compared with vision-only methods. However, the localization accuracy highly depends on the local reconstruction performance: SfM suffers from scale ambiguity and the reconstructed sparse points may have no correspondences in the map, while stereo reconstruction gives dense local point clouds but is mostly time-consuming and does not scale well to long-range depth estimation.
In contrast to these local point cloud matching methods, we aim to directly extract 2D lines from a monocular camera and match them with 3D lines from LiDAR maps for camera localization, which relies on neither SfM nor stereo reconstruction modules. Global 2D-3D registration is known as a kidnapped robot problem because of the feature description gap and the non-convexity. RANSAC-based and branch-and-bound strategies are often used to search for the global optimum without a pose prior [15, 16]. With the camera pose prediction from GPS or VO (VIO), we can obtain a pose prior for local 2D-3D matching. SoftPosit [17] and BlindPnP [18] are conventional approaches for simultaneous 2D-3D correspondence and pose estimation given a pose initialization. For camera-to-LiDAR sensor calibration [19, 20], geometric feature correspondences across images and point clouds are often used to optimize the extrinsic parameters. Recently, some learning-based methods were proposed to directly register images to LiDAR maps with a GPS pose initialization [21, 22]. These methods suffer from either high computational complexity or unstable outputs, and real-time camera localization in 3D LiDAR maps still needs exploration.
III. Proposed Method
The proposed method simultaneously estimates 6-DoF camera poses and 2D-3D line correspondences in LiDAR maps. The 2D-3D line correspondences are utilized to optimize camera poses by minimizing the 3D line projection error, while the refined camera poses help reject outlier correspondences. As a preliminary to the online 2D-3D correspondence estimation, 3D line features are extracted offline from the large-scale 3D LiDAR maps. A coarse pose initialization is given for the first frame by a PnP solver [23] on manually labeled 2D-3D point correspondences. Then VINS-Mono [2] is utilized to predict the camera motion between adjacent keyframes. With the predicted poses, local 3D lines in the camera field-of-view (FoV) are extracted and directly matched with the 2D lines extracted online from the image sequences. Finally, the camera poses and 2D-3D correspondences are iteratively updated. The pipeline is shown in Fig. 2.
III-A. 2D and 3D Line Extraction
In urban environments, the geometric structures are often represented by line segments and planes. Here we utilize a simple segmentation-based 3D line detection method [24] to extract 3D lines from the LiDAR maps. The general idea is to cluster point clouds into plane regions and fit contour lines to obtain 3D line segments. This method is efficient and robust for large-scale unorganized point clouds in urban environments. Although it takes time to process millions of points, the 3D lines of a map are extracted only once, before we start tracking camera poses in the map.
For 2D line extraction, we want to extract the major geometric 2D lines which are consistent with the 3D lines and robust to noise. This is challenging in urban scenes because substantial texture noise yields fragmented 2D line segments, and some geometric edges are invisible in 2D images on color-homogeneous structures (e.g., a white wall). Many state-of-the-art line segment detection (LSD) methods have been proposed in computer vision [25, 26], where traditional hand-crafted methods are efficient enough to run online on a CPU. However, the detected lines are fragmented and noisy; an example is shown in Fig. 3(a). Such fragmented and noisy features can produce many 2D-3D matching outliers. Considering line completeness and robustness to noise, we finally employ a learning-based LSD [26] which uses the attraction field map (AFM) to cast the LSD problem as a region coloring problem. For each pixel p in the image, the model first learns a 2D attraction vector from the pixel to the nearest point p' on the nearest line segment:

a(p) = p' - p,    (1)

where p is the coordinate of the pixel. Then an AFM is generated by encoding the pixel-to-line associations. During training, ground-truth line segments are transferred to AFMs and the network directly learns a model that minimizes the difference between its output and the ground-truth AFM. At test time, AFM representations are first generated from images, and a squeeze module [26] then iteratively groups pixels belonging to the same line to fit line segments. Fig. 3(b) shows the detected line segments of the example image; they are consistent with the geometric 3D structures and robust to texture noise.
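To make the attraction field definition concrete, the following minimal sketch (our own illustrative helper, not the paper's or the AFM authors' code) computes the attraction vector from a pixel to the nearest point on one line segment:

```python
import numpy as np

def attraction_vector(p, seg_a, seg_b):
    """Attraction vector a(p): from pixel p to the nearest point
    on the line segment (seg_a, seg_b). All inputs are 2D coordinates."""
    p = np.asarray(p, float)
    a, b = np.asarray(seg_a, float), np.asarray(seg_b, float)
    ab = b - a
    # Parameter of the orthogonal projection, clamped to the segment.
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    nearest = a + t * ab
    return nearest - p

# A pixel left of a vertical segment: the attraction vector points right.
v = attraction_vector([0, 5], [10, 0], [10, 10])
```

In the full AFM, this vector is computed per pixel against the nearest ground-truth segment, producing the dense map the network regresses.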
III-B. 2D-3D Line Matching
For a single frame, the main steps to obtain 2D-3D correspondences are initial camera pose prediction, visible 3D line collection, and individual 2D-3D line correspondence estimation. Extracting only the 3D lines in the FoV improves efficiency, because the local 3D lines in the FoV are few compared with all 3D lines in the map. Since occlusion checking is difficult to conduct on a lines-only 3D map, we keep all 3D lines in the FoV without discarding occluded ones.
For an image at time k, the corresponding pose estimate from VINS-Mono is denoted as T_k^odom, and the pose estimated from 2D-3D correspondences as T_k^map. Using the estimated pose T_{k-1}^map from the 2D-3D correspondences of the last frame and the camera motion ΔT_k = (T_{k-1}^odom)^{-1} T_k^odom from VINS-Mono, the predicted pose of the current frame can be computed as

T̂_k^map = T_{k-1}^map ΔT_k.    (2)
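This pose-chaining step can be sketched in a few lines of numpy, assuming 4x4 homogeneous pose matrices and the composition convention described above (the helper names are ours, for illustration only):

```python
import numpy as np

def predict_pose(T_map_prev, T_odom_prev, T_odom_curr):
    """Predicted map pose of the current frame: chain the last map-frame
    estimate with the VIO relative motion dT = inv(T_odom_prev) @ T_odom_curr.
    All arguments are 4x4 homogeneous transforms."""
    dT = np.linalg.inv(T_odom_prev) @ T_odom_curr
    return T_map_prev @ dT

def make_T(t):
    # Helper: pure-translation pose for the toy example below.
    T = np.eye(4)
    T[:3, 3] = t
    return T

# VIO reports a +1 m motion in x; the map-frame prediction follows it.
T_pred = predict_pose(make_T([5, 0, 0]), make_T([0, 0, 0]), make_T([1, 0, 0]))
```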
With the pose prediction T̂_k^map, the local 3D lines in the FoV can be extracted efficiently based on the projections of their two endpoints. To check the visibility of a 3D point P in the FoV, it is projected as

[u, v, w]^T = K (R P + t),    (3)

where K is the camera intrinsic matrix and (R, t) is the predicted camera pose. If w > 0, 0 ≤ u/w < W, and 0 ≤ v/w < H (with W and H the image width and height), the 3D point is in the FoV. For a 3D line segment, the visibility check is more complicated. Based on the visibility of the two endpoints, there are three cases to validate:
If both endpoints are in the FoV, as shown in Fig. 4(a), we keep the whole 3D line segment as a local visible feature.
When only one endpoint is in the FoV (Fig. 4(b)), we iteratively sample new 3D points on the segment, starting from the visible endpoint, by a ratio of the line length, and check the visibility of each new sample point. The visible sub-segment with the longest length in the FoV is stored as a local visible feature.
When both endpoints are out of the FoV but a sub-segment lies in the FoV, as shown in Fig. 4(c), we could also sample points to extract that sub-segment. However, most invisible 3D lines fall into this case, so for efficiency we discard all 3D segments whose two endpoints are both out of the FoV.
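A minimal sketch of the visibility check and endpoint sampling (an illustrative uniform-sampling variant under our own helper names, not the paper's exact ratio-based scheme):

```python
import numpy as np

def in_fov(P, K, R, t, width, height):
    """True if 3D point P projects in front of the camera and inside the image."""
    u, v, w = K @ (R @ np.asarray(P, float) + t)
    return w > 0 and 0 <= u / w < width and 0 <= v / w < height

def visible_subsegment(A, B, K, R, t, width, height, samples=100):
    """Sample points along segment AB and return the endpoints of the
    longest visible run, or None if no sample is visible."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    pts = [A + s * (B - A) for s in np.linspace(0.0, 1.0, samples)]
    vis = [in_fov(p, K, R, t, width, height) for p in pts]
    best, cur = None, None
    for i, ok in enumerate(vis):
        if ok:
            cur = (cur[0], i) if cur else (i, i)
            if best is None or cur[1] - cur[0] > best[1] - best[0]:
                best = cur
        else:
            cur = None
    return None if best is None else (pts[best[0]], pts[best[1]])

# Toy camera: 100 px focal length, 100x100 image, centered principal point.
K = np.array([[100., 0., 50.], [0., 100., 50.], [0., 0., 1.]])
# A segment crossing the image plane: only the front half is visible.
seg = visible_subsegment([0, 0, 1], [0, 0, -1], K, np.eye(3), np.zeros(3), 100, 100)
```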
Now, with the predicted camera pose T̂_k^map, the 2D lines l = (p_s, p_e) are directly matched with the local 3D lines in the FoV, where p_s and p_e are the start point and endpoint of a 2D line, respectively. For each possible 2D-3D correspondence, we use a 3D vector (d_a, d_d, d_o) to measure the similarity: the 2D angle distance d_a, the distance d_d of the two 3D endpoint projections to the corresponding infinite 2D line, and the overlap length d_o of the finite 2D line with the 3D line projection.
The projection of a 3D line onto the image plane is obtained by projecting its two endpoints with Eq. 3. With n and n' denoting the normalized 2D orientations of the extracted 2D line and the 3D line projection, the 2D angle distance can be computed by

d_a = arccos(|n · n'|).
Assume the parametric representation of an extracted 2D line is l = (a, b, c) with a² + b² = 1, i.e., the infinite line a u + b v + c = 0. The distance of an endpoint projection p̃ = (u, v, 1)^T to this infinite line can be computed by

d = |a u + b v + c| = |l · p̃|,

and d_d combines the distances of the two 3D endpoint projections.
Using the point-to-line foot points, we can compute the overlap length

d_o = |overlap((q_s, q_e), (p_s, p_e))|,

where q_s and q_e are the foot points of the two projected 3D endpoints on the infinite 2D line, and the overlap is taken with the finite 2D segment (p_s, p_e). Then, for each extracted 2D line l, a brute-force search is used to find a 3D line whose angle distance d_a and point-to-line distance d_d are within the thresholds. Thus we obtain a set of coarse 2D-3D line correspondences for further camera pose estimation.
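The three similarity terms can be sketched as follows (a self-contained illustration under assumed conventions: normalized line coefficients, undirected angles, and a 1D interval intersection for the overlap; function names are ours):

```python
import numpy as np

def line_coeffs(p, q):
    """Normalized coefficients (a, b, c) of the infinite 2D line
    through points p and q, with a^2 + b^2 = 1."""
    (x1, y1), (x2, y2) = p, q
    a, b, c = y1 - y2, x2 - x1, x1 * y2 - x2 * y1
    return np.array([a, b, c]) / np.hypot(a, b)

def angle_distance(p, q, r, s):
    """d_a: 2D angle (radians) between segments (p,q) and (r,s), undirected."""
    d1 = np.subtract(q, p, dtype=float); d1 /= np.linalg.norm(d1)
    d2 = np.subtract(s, r, dtype=float); d2 /= np.linalg.norm(d2)
    return np.arccos(np.clip(abs(d1 @ d2), 0.0, 1.0))

def point_line_distance(l, pt):
    """Distance from a 2D point to the infinite line l = (a, b, c)."""
    return abs(l @ np.array([pt[0], pt[1], 1.0]))

def overlap_length(p, q, r, s):
    """d_o: overlap of segment (r,s) projected onto the finite segment
    (p,q), computed as a 1D interval intersection along (p,q)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    length = np.linalg.norm(q - p)
    d = (q - p) / length
    t = sorted([(np.asarray(r, float) - p) @ d, (np.asarray(s, float) - p) @ d])
    return max(0.0, min(length, t[1]) - max(0.0, t[0]))
```

A candidate pair would then be accepted when `angle_distance` and `point_line_distance` fall below the thresholds, with `overlap_length` used to rank the candidates.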
III-C. Pose Optimization and Correspondence Refinement
For a single frame, we can optimize the camera pose by minimizing the point-to-infinite-line distances of the two 3D endpoint projections to their corresponding 2D lines. The Lie algebra of the estimated camera pose is denoted as ξ ∈ se(3), and the coefficient vector of an infinite 2D line as l = (a, b, c)^T. Considering all the 2D-3D line correspondences, the minimization function is

ξ* = argmin_ξ Σ_{i=1}^{N} Σ_{j∈{s,e}} d(l_i, π(exp(ξ^∧) P_{i,j}))²,

where {P_{i,s}, P_{i,e}} are the two endpoints of the i-th matched 3D line, π(·) denotes the camera projection of Eq. 3 in homogeneous pixel coordinates, d(·,·) is the point-to-infinite-line distance defined above, and N denotes the number of correspondences. This is a non-linear least-squares minimization problem. We initialize the camera pose with the predicted pose from VINS-Mono. With the Lie algebra parameterization, the typical L-M algorithm can find the optimal camera pose ξ*.
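The structure of this least-squares problem can be illustrated with a toy Gauss-Newton loop (the paper uses Levenberg-Marquardt via Ceres; here we assume an axis-angle-plus-translation parameterization, a finite-difference Jacobian, and synthetic data, so this is a sketch rather than the actual implementation):

```python
import numpy as np

def exp_so3(w):
    """Rodrigues' formula: rotation matrix from an axis-angle vector."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    Kx = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * Kx + (1 - np.cos(th)) * Kx @ Kx

def residuals(xi, K_cam, lines, endpoints):
    """Signed point-to-infinite-line distances of the projected 3D
    endpoints; xi = (rotation vector, translation), lines hold
    normalized (a, b, c) coefficients."""
    R, t = exp_so3(xi[:3]), xi[3:]
    res = []
    for l, seg in zip(lines, endpoints):
        for P in seg:
            u, v, w = K_cam @ (R @ P + t)
            res.append(l @ np.array([u / w, v / w, 1.0]))
    return np.array(res)

def optimize_pose(xi0, K_cam, lines, endpoints, iters=15):
    """Minimal Gauss-Newton loop with a finite-difference Jacobian."""
    xi = np.array(xi0, float)
    for _ in range(iters):
        r = residuals(xi, K_cam, lines, endpoints)
        J = np.empty((len(r), 6))
        for j in range(6):
            d = np.zeros(6)
            d[j] = 1e-6
            J[:, j] = (residuals(xi + d, K_cam, lines, endpoints) - r) / 1e-6
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        xi = xi + step
    return xi

# Synthetic check: six 3D segments observed under a ground-truth pose
# (identity rotation, small translation); recover the pose from zero.
rng = np.random.default_rng(0)
K_cam = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
t_true = np.array([0.1, -0.05, 0.2])
endpoints = [(rng.uniform(-2, 2, 3) + [0, 0, 6], rng.uniform(-2, 2, 3) + [0, 0, 6])
             for _ in range(6)]
lines = []
for Ps, Pe in endpoints:
    a = K_cam @ (Ps + t_true)
    b = K_cam @ (Pe + t_true)
    l = np.cross(a / a[2], b / b[2])        # infinite line through both pixels
    lines.append(l / np.hypot(l[0], l[1]))  # normalize so l . p is in pixels
xi = optimize_pose(np.zeros(6), K_cam, lines, endpoints)
```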
However, single-frame 2D-3D correspondence observations are not robust enough for online camera localization. When the 3D lines in the FoV are few or parallel to each other in 3D space, the 2D-3D correspondences cannot constrain the 6-DoF pose. Additionally, even when the correspondences suffice for pose estimation, the geometric localization noise of both the 2D and 3D lines makes the estimate jitter around the true pose. To solve these problems, a sliding window is utilized to add previous correspondence observations for optimizing the current pose (as shown in Fig. 5). We assume the camera motion estimated by VINS-Mono between two adjacent keyframes is accurate, which is reasonable because the VINS-Mono drift over two adjacent keyframes is small. Let ΔT_j = (T_{j-1}^odom)^{-1} T_j^odom denote the camera motion from keyframe j-1 to keyframe j. The camera pose of the i-th previous keyframe can then be formulated as

T_{k-i}^map = T_k^map (ΔT_{k-i+1} ⋯ ΔT_k)^{-1}.    (9)
Then all previous 2D-3D correspondences in the sliding window become “visible” in the current frame. In Eq. 9, the previous camera pose T_{k-i}^map is related to the current pose T_k^map, while (ΔT_{k-i+1} ⋯ ΔT_k)^{-1} is a constant pose transformation, whose Lie algebra is denoted as ξ_{Δi}. Thus the pose optimization function can be written as

ξ* = argmin_ξ Σ_{i=0}^{n} Σ_{j=1}^{N_{k-i}} Σ_{m∈{s,e}} d(l_{ij}, π(exp(ξ^∧) exp(ξ_{Δi}^∧) P_{ij,m}))²,

where n is the number of previous frames in the sliding window, N_{k-i} is the number of correspondences of frame k-i, π(·) is the homogeneous pixel projection of Eq. 3, and ξ_{Δ0} = 0 for the current frame. Therefore, more observations make the estimation more robust to outliers, and the motion constraints smooth the pose trajectory. When using the sliding window, an imbalance problem arises from the different numbers of 2D-3D correspondences per frame. To equalize the contribution of each frame and improve efficiency, a threshold on the maximum number of correspondences per frame is set, discarding the 2D-3D correspondences with short overlap length d_o. We employ the Ceres Solver [27] to implement the optimization. After an optimized camera pose is obtained, the 2D-3D correspondences are re-estimated with the updated pose to reject outliers, following the 2D-3D line matching in Section III-B with stricter thresholds (e.g., smaller d_a and d_d thresholds). A more accurate camera pose can then be updated with the new correspondences. After several iterations, both the camera pose and the 2D-3D correspondences are optimized.
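The frame-chaining of Eq. 9 is easy to verify numerically; the sketch below (illustrative helper names, 4x4 homogeneous matrices assumed) recovers a past keyframe pose from the current pose and the stored motion chain:

```python
import numpy as np

def previous_pose(T_k, deltas):
    """Eq. 9 sketch: pose of the i-th previous keyframe from the current
    pose T_k and the chain of adjacent-keyframe motions
    deltas = [dT_{k-i+1}, ..., dT_k] (4x4 homogeneous matrices)."""
    D = np.eye(4)
    for dT in deltas:
        D = D @ dT
    return T_k @ np.linalg.inv(D)

def trans_x(x):
    # Pure translation along x, for the toy example below.
    T = np.eye(4)
    T[0, 3] = x
    return T

# Three keyframes moving +1 m in x per step: the pose two frames back
# is recovered from the current pose and the motion chain.
T_back = previous_pose(trans_x(3.0), [trans_x(1.0), trans_x(1.0)])
```

This constant chain is exactly the term exp(ξ_{Δi}^∧) held fixed during the sliding-window optimization.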
IV. Experimental Results
We tested the proposed method in two different real-world experiments. The first was conducted on the publicly available EuRoC MAV dataset [28] with ground-truth trajectories. Second, we performed experiments on our own dataset, collected by a RealSense D435i camera under varying conditions, to validate the performance of the proposed method.
IV-A. Implementation Details

Considering that 3D LiDAR maps are often large in scale with a high data volume, we first sub-sample the point clouds using CloudCompare [29] with a constant spatial resolution (1-5 cm in the experiments). Then the 3D line extraction based on [24] is conducted to obtain 3D lines. Since an arbitrary pose initialization in the map is a kidnapped robot problem for global 2D-3D correspondence estimation, a coarse pose initialization is given for the first frame by the PnP solver [23] on manually labeled 2D-3D point correspondences (at least 4 pairs; we use 6 pairs for robustness). For the learning-based 2D line extraction, we directly use the trained U-Net model [26] from the Wireframe dataset [30] and modify it for online line segment detection. The testing platform is a desktop with an Intel Core i7-4790K CPU, 32 GB RAM, and an Nvidia GeForce GTX 980Ti GPU. The GPU is only used for 2D line detection.
During the 2D-3D correspondence estimation, the angle distance threshold is set to 10 degrees to constrain the 3D projections to be almost parallel with the corresponding 2D lines, and the point-to-line distance threshold is set to a few pixels to collect matching pairs. If the number of correspondences for an image is less than a threshold (set to 8 empirically), the frame is identified as unstable and the camera pose is predicted by the camera motion only. If the number exceeds the maximum threshold, we discard the correspondences with short overlap length d_o for efficiency; with the per-frame maximum and the window size both set empirically, the number of correspondences in the sliding window is bounded. In the experiments, 2 or 3 iterations of optimization are usually enough to obtain a stable camera pose and 2D-3D correspondences.
Since our method is based on monocular visual-inertial odometry without loop closure, we compare it with two versions of VINS-Mono [2], i.e., with and without loop closure. Loop closure helps reduce the overall absolute trajectory error and improves mapping, but the refinement of past poses in the loop does not help real-time localization. Moreover, it causes pose jumps in the current pose, which have side effects on robot navigation.
IV-B. Results on the EuRoC MAV Dataset
The EuRoC MAV dataset [28] is a visual-inertial dataset collected onboard a UAV. It contains stereo images (20 Hz), synchronized IMU data (200 Hz), accurate ground-truth trajectories of about 70 meters, and a scanned LiDAR map. The room subsets consist of 2 LiDAR maps with 3 video sequences each. The 2D-3D correspondence results and the estimated trajectories in the LiDAR maps are shown in Fig. 6. All 3D lines in the FoV are projected onto the image plane without occlusion checking. From the top-left image, we can observe that the projections of the 3D lines (in green) are shifted when using the estimated pose of VINS-Mono (odom). By iteratively optimizing the 2D-3D correspondences and camera poses, stable 2D-3D correspondences are obtained with a more accurate pose, and the position drifts are greatly reduced.
For quantitative analysis, the first 200 estimated states are used for trajectory alignment with the ground-truth, consistently for all methods. The absolute trajectory error (ATE) results are shown in Table I; it is clear that the 2D-3D correspondences improve the pose estimation accuracy compared with odometry alone. The V2 room has more noise, which sometimes makes the 2D-3D correspondences unstable for pose optimization; this is why the improvements for the V2 room are less significant than for the V1 room. In the worst case, no stable 2D-3D correspondence is available and the final estimates follow the odometry. Furthermore, our method shows competitive results compared with VINS-Mono (loop). The loop closure, however, optimizes the past poses in the loop and produces a pose jump for the current pose; for real-time localization, refining the past poses does not help, and the pose jumps have side effects on navigation. Our method always estimates the current poses in the sliding window, which greatly reduces the drift and avoids pose jumps.
The relative pose errors (RPE), which show the growth of position error with trajectory length, are reported in Table II and Fig. 7. The accumulated drift of VINS-Mono (odom) grows with the traveled length, while the error of our method stays small and stable along the way, bounded by the localization accuracy of the 2D and 3D lines.
IV-C. Evaluation on Our Dataset
To further evaluate our method in varying urban environments, we tested it on our own data collected in indoor corridors and around outdoor buildings. An Intel RealSense D435i camera is used to collect synchronized images and IMU data: the left global-shutter imager captures monocular image sequences at 30 Hz (with the IR projector turned off), together with synchronized IMU data (200 Hz). The LiDAR maps are obtained by registering several scans of a FARO Focus3D S laser scanner, as shown in Fig. 8. Both the indoor corridors (Fig. 8a) and Smith Hall (Fig. 8c) have many occlusions, while the NSH building (Fig. 8b) is much simpler with fewer occlusions. In these experiments, each trajectory follows the same pattern: run a complete loop and return to the start point to examine the position drift. Loop closure was not stably detected in all runs, so the results of VINS-Mono with loop closure are not discussed here.
The indoor corridor results are shown in Fig. 1, and the results for the two outdoor buildings are shown in Fig. 9. Since we do not have ground-truth trajectories, estimation accuracy is validated in the following two ways. For qualitative analysis, the 3D line features are projected onto the image plane using the estimated poses of VINS-Mono (odom) and of our method. For VINS-Mono odometry, it can be observed in the top-left images that the 3D line projections (green lines) are shifted and scaled by the inaccurate camera poses, while with our method the pose estimates provide more accurate and stable 2D-3D structure correspondences in the bottom-left images. Additionally, we can observe an obvious misalignment between the two trajectories. More online correspondence and localization results are shown in the video (https://youtu.be/H80Bnxm8IPE).
For the localization accuracy evaluation, we pick 5 frames along each trajectory, manually label 10 pairs of 2D-3D point correspondences per frame, and use the PnP solver to estimate ground-truth camera poses for these frames. The 5 frames are sampled at different fractions of the total running time along the trajectories, and the RMSE of the positions over 5 runs is used for the final evaluation. The position errors are shown in Table III. The accumulated drift of VINS-Mono (odom) increases substantially along the trajectories, while our method greatly improves the localization accuracy with the assistance of stable 2D-3D line correspondences; the localization error stays very small along the long trajectories. Another interesting observation is that the accumulated error can drift back when the system keeps its heading and moves back toward the start position, as shown in the VINS-Mono results on the NSH building.
TABLE III: Position errors for the Corridors, NSH building, and Smith Hall sequences.
In terms of efficiency, VINS-Mono does not use the map information, and its output frequency can be customized; however, the odometry results vary considerably with different frequency settings, so we select the most stable one at 15 Hz. The estimation of the 2D-3D correspondences and the camera pose costs about 0.012 seconds per keyframe. Since the 3D line extraction is done offline before the system starts and the 2D feature extraction runs at 25 Hz, our method finally runs at 15 Hz.
V. Conclusion

In this paper, we presented a novel monocular camera localization approach for prior LiDAR maps of urban environments. With 3D geometric lines from the LiDAR maps and robust online 2D line detection, our method efficiently obtains coarse 2D-3D line correspondences based on the camera motion prediction from VINS-Mono. The pose optimization with 2D-3D correspondences greatly reduces the pose estimation drift of the VIO system without using visual-revisiting loop closure. Both qualitative and quantitative results on real-world datasets demonstrate that our method can efficiently obtain reliable 2D-3D correspondences and accurate camera poses in LiDAR maps. As future work, we intend to enhance the robustness of the 2D-3D correspondence estimation to inaccurate pose predictions, such as directly using the pose of the last frame as the prediction.
Acknowledgment

This work is supported by the Shimizu Institute of Technology, Tokyo, and the China Scholarship Council.
References

-  R. Mur-Artal and J. D. Tardós, “ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras,” IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017.
-  T. Qin, P. Li, and S. Shen, “Vins-mono: A robust and versatile monocular visual-inertial state estimator,” IEEE Transactions on Robotics, vol. 34, no. 4, pp. 1004–1020, 2018.
-  K. Sun, K. Mohta, B. Pfrommer, M. Watterson, S. Liu, Y. Mulgaonkar, C. J. Taylor, and V. Kumar, “Robust stereo visual inertial odometry for fast autonomous flight,” IEEE Robotics and Automation Letters, vol. 3, no. 2, pp. 965–972, 2018.
-  T. Caselitz, B. Steder, M. Ruhnke, and W. Burgard, “Monocular camera localization in 3d lidar maps,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2016, pp. 1926–1931.
-  X. Zuo, P. Geneva, Y. Yang, W. Ye, Y. Liu, and G. Huang, “Visual-inertial localization with prior LiDAR map constraints,” IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 3394–3401, 2019.
-  N. Piasco, D. Sidibé, C. Demonceaux, and V. Gouet-Brunet, “A survey on visual-based localization: On the benefit of heterogeneous data,” Pattern Recognition, vol. 74, pp. 90–109, 2018.
-  T. Sattler, B. Leibe, and L. Kobbelt, “Efficient & effective prioritized matching for large-scale image-based localization,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 9, pp. 1744–1756, 2017.
-  Y. Kim, J. Jeong, and A. Kim, “Stereo camera localization in 3d lidar maps,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2018, pp. 1–9.
-  Y. Feng, L. Fan, and Y. Wu, “Fast localization in large-scale environments using supervised indexing of binary features,” IEEE Transactions on Image Processing, vol. 25, no. 1, pp. 343–358, 2016.
-  R. W. Wolcott and R. M. Eustice, “Visual localization within lidar maps for automated urban driving,” in 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2014, pp. 176–183.
-  A. D. Stewart and P. Newman, “Laps-localisation using appearance of prior structure: 6-dof monocular camera localisation using prior pointclouds,” in 2012 IEEE International Conference on Robotics and Automation, 2012, pp. 2625–2632.
-  G. Pascoe, W. Maddern, and P. Newman, “Direct visual localisation and calibration for road vehicles in changing city environments,” in IEEE International Conference on Computer Vision Workshops, 2015, pp. 9–16.
-  P. Neubert, S. Schubert, and P. Protzel, “Sampling-based methods for visual navigation in 3d maps by synthesizing depth images,” in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017, pp. 2492–2498.
-  A. Gawel, T. Cieslewski, R. Dubé, M. Bosse, R. Siegwart, and J. Nieto, “Structure-based vision-laser matching,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2016, pp. 182–188.
-  D. J. Campbell, L. Petersson, L. Kneip, and H. Li, “Globally-optimal inlier set maximisation for camera pose and correspondence estimation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.
-  M. Brown, D. Windridge, and J.-Y. Guillemaut, “A family of globally optimal branch-and-bound algorithms for 2d–3d correspondence-free registration,” Pattern Recognition, vol. 93, pp. 36–54, 2019.
-  P. David, D. DeMenthon, R. Duraiswami, and H. Samet, “Simultaneous pose and correspondence determination using line features,” in IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, 2003, pp. II–II.
-  F. Moreno-Noguer, V. Lepetit, and P. Fua, “Pose priors for simultaneously solving alignment and correspondence,” in European Conference on Computer Vision, 2008, pp. 405–418.
-  J. Levinson and S. Thrun, “Automatic online calibration of cameras and lasers.” in Robotics: Science and Systems, vol. 2, 2013, p. 7.
-  L. Zhou, Z. Li, and M. Kaess, “Automatic extrinsic calibration of a camera and a 3d lidar using line and plane correspondences,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2018, pp. 5562–5569.
-  D. Cattaneo, M. Vaghi, A. L. Ballardini, S. Fontana, D. G. Sorrenti, and W. Burgard, “Cmrnet: Camera to lidar-map registration,” in 2019 IEEE Intelligent Transportation Systems Conference, 2019, pp. 1283–1289.
-  M. Feng, S. Hu, M. H. Ang, and G. H. Lee, “2d3d-matchnet: learning to match keypoints across 2d image and 3d point cloud,” in 2019 International Conference on Robotics and Automation, 2019, pp. 4790–4796.
-  V. Lepetit, F. Moreno-Noguer, and P. Fua, “Epnp: An accurate o (n) solution to the pnp problem,” International Journal of Computer Vision, vol. 81, no. 2, p. 155, 2009.
-  X. Lu, Y. Liu, and K. Li, “Fast 3d line segment detection from unorganized point cloud,” arXiv preprint arXiv:1901.02532, 2019.
-  R. G. Von Gioi, J. Jakubowicz, J.-M. Morel, and G. Randall, “Lsd: A fast line segment detector with a false detection control,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 4, pp. 722–732, 2008.
-  N. Xue, S. Bai, F. Wang, G.-S. Xia, T. Wu, and L. Zhang, “Learning attraction field representation for robust line segment detection,” in IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 1595–1603.
-  S. Agarwal, K. Mierle, and Others, “Ceres solver,” http://ceres-solver.org.
-  M. Burri, J. Nikolic, P. Gohl, T. Schneider, J. Rehder, S. Omari, M. W. Achtelik, and R. Siegwart, “The euroc micro aerial vehicle datasets,” The International Journal of Robotics Research, vol. 35, no. 10, pp. 1157–1163, 2016.
-  D. Girardeau-Montaut, “Cloudcompare-open source project,” OpenSource Project, 2011.
-  K. Huang, Y. Wang, Z. Zhou, T. Ding, S. Gao, and Y. Ma, “Learning to parse wireframes in images of man-made environments,” in IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 626–635.
-  J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers, “A benchmark for the evaluation of rgb-d slam systems,” in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012, pp. 573–580.
-  Z. Zhang and D. Scaramuzza, “A tutorial on quantitative trajectory evaluation for visual (-inertial) odometry,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2018, pp. 7244–7251.