I. Introduction
Accurate robot localization in urban environments is in great demand for autonomous vehicles. Since GPS localization is unstable without a direct line-of-sight to the satellites, LiDAR-based localization modules are often used because of their accurate range measurements. Additionally, urban scenes have relatively time-invariant geometric structures (e.g., buildings), so a one-time 3D map construction can be used for long-term localization. However, the expensive cost and heavy weight of LiDAR sensors limit their wide application. Cameras and IMUs are low-cost, lightweight, and commonly available sensors, and current visual-inertial pose estimation and mapping methods are well-developed for a variety of robot systems
[1, 2, 3]. Nevertheless, state estimation methods that use only image features are prone to failures due to lighting or texture changes in the environment. Consequently, if camera localization modules can be associated with a prior 3D map, i.e., if visual information can be fused with range measurements, there is great potential to use these lightweight and small camera modules for accurate localization in urban environments without a LiDAR.
However, the fusion of image data with 3D point clouds is challenging due to appearance differences and modality gaps. Current approaches typically transfer the 3D data into 2D space or reconstruct 3D point clouds from 2D images to align the data for pose estimation. Based on the characteristics of urban environments, our intuition is that the major geometric structures, such as lines and planes, can be captured in both 3D maps and 2D images regardless of appearance differences and modality gaps. Such direct 2D-3D geometric co-occurrence correspondences are more robust and precise than associations of domain-transferred data. Therefore, our purpose is to directly estimate 2D-3D geometric line correspondences for accurate and long-term camera localization.
In this work, we propose an approach for real-time, lightweight monocular camera localization in prior 3D LiDAR maps using direct 2D-3D geometric line correspondences. We assume a coarse pose initialization is given and focus on pose tracking in the map, following the related works [4, 5]. For co-occurring geometric feature extraction, 3D line segments are detected offline from the LiDAR maps, while robust 2D line segments are extracted online from the video sequences. By employing the 6-DoF pose prediction from VIO, local visible 3D lines in the field-of-view (FoV) are extracted and directly matched with the 2D line features to obtain coarse 2D-3D line correspondences. Finally, the camera pose and the 2D-3D correspondences are iteratively optimized by minimizing the projection errors of the correspondences and rejecting outliers.
The main contribution of this work is the estimation of geometric 2D-3D line correspondences for camera localization, which efficiently associates every keyframe with the prior LiDAR map. The 2D-3D line correspondence estimation is robust to appearance changes and suitable for camera localization in urban environments. Fig. 1 shows a camera image with 2D-3D line correspondences and the estimated camera poses in the LiDAR map.
II. Related Work
Vision-based localization in maps establishes correspondences between the 2D and 3D modalities to improve localization robustness and accuracy [6]. To overcome the modality gaps and appearance differences, general approaches use "intermediate products" to transfer the matching into the same space, i.e., into 2D space [7] or into 3D space [4, 8].
The first kind of camera localization in 3D maps matches photometry in 2D image space. Most visual simultaneous localization and mapping (SLAM) methods [1, 2] reconstruct sparse point cloud maps with photometric (visual) descriptors, and loop closure detection is conducted by matching these descriptors to reduce drift. In image-based localization methods [7, 9], large-scale maps are reconstructed using SfM with visual features, and these features are efficiently matched to yield 2D-3D correspondences for camera localization. Nevertheless, appearance and visual features are sensitive to illumination changes and lighting conditions, which makes the correspondences unstable for long-term camera localization. To handle LiDAR maps without associated visual features, synthetic LiDAR appearance images are often matched directly with camera images by normalized mutual information (NMI) [10] or normalized information distance (NID) [11]. More recent methods utilize colored point cloud projections [12] and synthesized depth images [13] to match with live images for camera localization. All of these localization methods try to transfer the 2D-3D matching problem into a 2D-2D matching problem.
The second strategy matches geometry in 3D space to estimate camera poses. Using local bundle adjustment, [4] proposed to match a sparse reconstructed local 3D point cloud with a given 3D LiDAR map, which solves the scale estimation problem of a monocular VO system and achieves online estimation of 6-DoF camera poses. Similarly, in [14], 3D structural descriptors are used for matching LiDAR maps with sparse visually reconstructed point clouds. In [8, 5], dense local point clouds are reconstructed from a stereo camera to match with the 3D LiDAR maps, and the matching results are loosely or tightly coupled into the VO or VIO system for optimizing camera poses. These localization-by-3D-registration methods obtain feasible results compared with vision-only methods. However, the localization accuracy highly depends on the local reconstruction performance: SfM suffers from scale ambiguity and the reconstructed sparse points may have no correspondences in the map, while stereo reconstruction gives dense local point clouds but is mostly time-consuming and does not scale well for long-range depth estimation.
Compared with these local point cloud matching methods, we aim at directly extracting 2D lines from a monocular camera and matching them with 3D lines from LiDAR maps for camera localization, which does not rely on SfM or stereo reconstruction modules. Global 2D-3D registration is known as a kidnapped robot problem because of the feature description gap and the non-convexity [15]. RANSAC-based and branch-and-bound strategies are often used to find the global optimum without a pose prior [15, 16]. With a camera pose prediction from GPS or VO (VIO), we can obtain a pose prior for local 2D-3D matching. SoftPosit [17] and BlindPnP [18] are conventional approaches that simultaneously estimate 2D-3D correspondences and poses given a pose initialization. In camera-to-LiDAR sensor calibration methods [19, 20], geometric feature correspondences across images and point clouds are often used to optimize the extrinsic parameters. Recently, some learning-based methods were proposed to directly register images to LiDAR maps with a GPS pose initialization [21, 22]. These methods suffer from either high computational complexity or unstable outputs, so real-time camera localization in 3D LiDAR maps still needs exploration.
III. Proposed Method
The proposed method simultaneously estimates 6-DoF camera poses and 2D-3D line correspondences in LiDAR maps. The 2D-3D line correspondences are utilized to optimize camera poses by minimizing the 3D line projection error, while the refined camera poses help reject outlier correspondences. As a preliminary step for the online 2D-3D correspondence estimation, 3D line features are extracted offline from the large-scale 3D LiDAR maps. A coarse pose initialization is given for the first frame by the PnP solver [23] on manually labeled 2D-3D point correspondences. Then VINS-Mono [2] is utilized to predict the camera motion between adjacent keyframes. With the predicted poses, local 3D lines in the camera field-of-view (FoV) are extracted and directly matched with the 2D lines extracted online from the image sequences. Finally, camera poses and 2D-3D correspondences are iteratively updated. The pipeline is shown in Fig. 2.
III-A. 2D and 3D Line Extraction
In urban environments, the geometric structures are often represented by line segments and planes. Here we utilize a simple segmentation-based 3D line detection method [24] to extract 3D lines from the LiDAR maps. The general idea is to cluster point clouds into planar regions and use contour line fitting to obtain 3D line segments. This method is efficient and robust for large-scale unorganized point clouds in urban environments. Although it takes time to process millions of points, the 3D lines of each map are extracted only once before we start tracking the camera poses in the map.
For 2D line extraction, we want to extract the major geometric 2D lines that are consistent with the 3D lines and robust to noise. This is challenging in urban scenes because substantial texture noise yields fragmented 2D line segments, and some geometric edges are invisible in 2D images on color-homogeneous structures (e.g., a white wall). Many state-of-the-art line segment detection (LSD) methods have been proposed in computer vision [25, 26]; traditional hand-crafted methods are efficient enough to run online on a CPU. However, their detected lines are fragmented and noisy; an example is shown in Fig. 3(a). Such fragmented and noisy features produce many 2D-3D matching outliers. Considering line completeness and robustness to noise, we employ a learning-based LSD [26] that uses the attraction field map (AFM) to transform the LSD problem into a region coloring problem. For each pixel p in the image, the model first learns the 2D vector from the pixel to the nearest point p' on the nearest line segment:

a(p) = p' - p,  (1)

where p = (x, y) is the coordinate of the pixel. Then an AFM is generated by encoding these pixel-to-line associations. During training, ground-truth line segments are converted into AFMs and the network directly learns a model to minimize the difference between its output and the true AFM. At test time, AFM representations are first generated from images, then a squeeze module [25] is used to iteratively group pixels belonging to the same line and fit line segments. Fig. 3(b) shows the detected line segments of the example image; they are consistent with the geometric 3D structures and robust to texture noise.
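The attraction field value of Eq. (1) can be sketched as follows. This is an illustrative brute-force implementation, not the authors' network nor the AFM paper's code; the function name and the segment representation are assumptions.

```python
import numpy as np

def attraction_vector(p, segments):
    """Return the 2D vector a(p) = p' - p from pixel p to the nearest
    point p' on the nearest line segment (the quantity of Eq. 1).
    `segments` is an (N, 4) array of (x1, y1, x2, y2) endpoints."""
    p = np.asarray(p, dtype=float)
    best_vec, best_d2 = None, np.inf
    for x1, y1, x2, y2 in segments:
        a, b = np.array([x1, y1]), np.array([x2, y2])
        ab = b - a
        # Clamp the projection parameter of p onto the segment to [0, 1].
        t = np.clip(np.dot(p - a, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
        q = a + t * ab                  # nearest point on this segment
        v = q - p
        d2 = np.dot(v, v)
        if d2 < best_d2:
            best_d2, best_vec = d2, v
    return best_vec
```

In [26] the network regresses this field densely; the squeeze module then inverts it to recover the segments.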
III-B. 2D-3D Line Matching
For a single frame, the main steps to obtain 2D-3D correspondences are initial camera pose prediction, visible 3D line collection, and individual 2D-3D line correspondence estimation. Extracting only the 3D lines in the FoV improves efficiency, because the local 3D lines in the FoV are very few compared with all the 3D lines in the map. Since occlusion checking is difficult to conduct on a map that contains only 3D lines, we keep all the 3D lines in the FoV without discarding occluded lines.
For an image at time k, the corresponding pose estimate from VINS-Mono is denoted as T^o_k, and the pose estimated from 2D-3D correspondences is denoted as T_k. By using the estimated pose T_{k-1} from the 2D-3D correspondences of the last keyframe and the relative camera motion (T^o_{k-1})^{-1} T^o_k from VINS-Mono, the predicted pose T̂_k of the current frame can be computed as

T̂_k = T_{k-1} (T^o_{k-1})^{-1} T^o_k.  (2)
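The pose-prediction step of Eq. (2) is a single composition of rigid transforms. A minimal sketch, assuming 4x4 homogeneous camera-to-world matrices (the function name is illustrative):

```python
import numpy as np

def predict_pose(T_prev_map, T_prev_odom, T_curr_odom):
    """Eq. (2): chain the last map-aligned pose with the VIO relative
    motion. All poses are 4x4 camera-to-world matrices; the map frame
    and the odometry frame only need to agree up to accumulated drift,
    which cancels inside the relative motion."""
    delta = np.linalg.inv(T_prev_odom) @ T_curr_odom  # VIO motion k-1 -> k
    return T_prev_map @ delta
```

Because only the relative VIO motion enters, the prediction inherits the map alignment of the previous frame rather than the odometry drift.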
With the pose prediction T̂_k, the local 3D lines in the FoV can be extracted based on the projections of their two endpoints for efficiency. The visibility of a 3D point P in the FoV is checked by

[u, v, w]^T = K (R P + t),  (3)

where K is the camera intrinsic matrix and (R, t) is the predicted camera pose. If w > 0, 0 ≤ u/w < W, and 0 ≤ v/w < H (with W × H the image size), the 3D point is in the FoV. For a 3D line segment, the visibility checking is more complicated. Based on the visibility of the two endpoints, there are three cases for validation:

- If both endpoints are in the FoV, as shown in Fig. 4(a), we keep the whole 3D line segment as a local visible feature.

- When only one endpoint is in the FoV (Fig. 4(b)), we iteratively sample new 3D points on the line segment, starting from the visible endpoint, by a ratio of the line length, and check the visibility of the sampled points. The generated 3D sub-segment with the longest length in the FoV is stored as a local visible feature.

- When both endpoints are out of the FoV but a sub-segment is in the FoV, as shown in Fig. 4(c), we could also sample points to extract the visible sub-segment. However, most invisible 3D lines also fall into this case, so for efficiency we discard all 3D segments with both endpoints out of the FoV.
Now, with the predicted camera pose T̂_k, the 2D lines l_i = (s_i, e_i) are directly matched with the local 3D lines L_j in the FoV, where s_i and e_i are the start point and endpoint of a 2D line, respectively. For each possible 2D-3D correspondence, we use a 3D vector to measure the similarity: the 2D angle distance θ_ij, the distance d_ij of the two projected 3D endpoints to the corresponding infinite 2D line, and the overlapped length o_ij of the finite 2D line with the 3D line projection:

v_ij = (θ_ij, d_ij, o_ij).  (4)
The projection of a 3D line L_j onto the image plane is denoted as L'_j (by Eq. 3), with normalized orientation n'_j. The normalized orientation of the 2D line l_i is denoted as n_i. Then the 2D angle distance can be computed by

θ_ij = arccos(|n_i · n'_j|).  (5)
Assume the parametric representation of the extracted 2D line feature l_i is a_i u + b_i v + c_i = 0 with a_i² + b_i² = 1. The distance d_ij can be computed by

d_ij = |a_i u_s + b_i v_s + c_i| + |a_i u_e + b_i v_e + c_i|,  (6)

where (u_s, v_s) and (u_e, v_e) are the projections of the two 3D endpoints of L_j.
By projecting the two endpoints of L'_j onto the finite 2D line segment l_i (clamping to its extent), we can compute the overlapped length

o_ij = ||q_s - q_e||,  (7)

where q_s and q_e are the projection points corresponding to the two endpoints of L'_j, respectively. Then, for each extracted 2D line l_i, a brute-force search is used to find the 3D line whose angle distance θ_ij and point-to-line distance d_ij are within their thresholds and whose overlap o_ij is the longest. Thus we obtain a set of coarse 2D-3D line correspondences for further camera pose estimation.
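The three similarity terms of Eqs. (5)-(7) can be computed per candidate pair as sketched below, assuming the 3D endpoints have already been projected into the image (the function names and the summed endpoint distance are illustrative assumptions).

```python
import numpy as np

def line_coeffs(s, e):
    """Unit-normalized coefficients (a, b, c) of the infinite 2D line
    through start point s and endpoint e: a*u + b*v + c = 0."""
    (x1, y1), (x2, y2) = s, e
    a, b = y1 - y2, x2 - x1
    n = np.hypot(a, b)
    return np.array([a / n, b / n, (x1 * y2 - x2 * y1) / n])

def match_scores(s2d, e2d, proj_s, proj_e):
    """Similarity terms for one candidate pair: angle (deg) between the
    2D line and the projected 3D line (Eq. 5), summed point-to-line
    distance of the two projected endpoints (Eq. 6), and overlap length
    along the finite 2D segment (Eq. 7)."""
    d2d = np.asarray(e2d, float) - np.asarray(s2d, float)
    d3d = np.asarray(proj_e, float) - np.asarray(proj_s, float)
    cosang = abs(np.dot(d2d, d3d)) / (np.linalg.norm(d2d) * np.linalg.norm(d3d))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    l = line_coeffs(s2d, e2d)
    dist = sum(abs(l @ np.append(p, 1.0)) for p in (proj_s, proj_e))
    # Overlap: project both endpoints onto the 2D segment and clamp to it.
    u = d2d / np.dot(d2d, d2d)
    ts = sorted(np.clip([np.dot(np.asarray(p, float) - np.asarray(s2d, float), u)
                         for p in (proj_s, proj_e)], 0.0, 1.0))
    overlap = (ts[1] - ts[0]) * np.linalg.norm(d2d)
    return angle, dist, overlap
```

A brute-force matcher would evaluate `match_scores` for every 2D line against every visible 3D line, then keep pairs passing the angle and distance thresholds with the largest overlap.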
III-C. Pose Optimization and Correspondence Refinement
For a single frame, we can optimize the camera pose by minimizing the point-to-infinite-line distances from the two projected 3D endpoints to the corresponding 2D lines. The Lie algebra of the estimated (world-to-camera) pose is denoted as ξ, and the coefficient vector of the infinite 2D line as l_i = (a_i, b_i, c_i)^T. Considering all the 2D-3D line correspondences, the minimization function is

ξ* = argmin_ξ Σ_{i=1}^{m} Σ_{P ∈ {P_i^s, P_i^e}} || l_i^T π(K exp(ξ^∧) P) ||²,  (8)

where {P_i^s, P_i^e} are the two 3D endpoints of the matched line, π(·) denotes dehomogenization, and m denotes the number of correspondences. This is a nonlinear least-squares minimization problem. We initialize the camera pose with the predicted pose from VINS-Mono; with the Lie-algebra parameterization, the typical Levenberg-Marquardt (LM) algorithm can find the optimal camera pose.
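The residual of Eq. (8) can be sketched in plain NumPy as below, to be handed to any least-squares solver (the paper uses Ceres; here the axis-angle packing and the function name are illustrative assumptions).

```python
import numpy as np

def reprojection_residuals(rvec_t, K, lines2d, lines3d):
    """Stacked point-to-infinite-line residuals of Eq. (8).
    `rvec_t` packs an axis-angle rotation and a translation of the
    world-to-camera transform; each correspondence pairs the unit line
    coefficients (a, b, c) with the two 3D endpoints of the map line."""
    rvec, t = rvec_t[:3], rvec_t[3:]
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:  # Rodrigues' formula for the rotation matrix
        k = rvec / theta
        Kx = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(theta) * Kx + (1 - np.cos(theta)) * Kx @ Kx
    res = []
    for l, (P1, P2) in zip(lines2d, lines3d):
        for P in (P1, P2):
            p = K @ (R @ P + t)
            # Signed distance of the projected endpoint to the infinite 2D line.
            res.append(l @ np.array([p[0] / p[2], p[1] / p[2], 1.0]))
    return np.array(res)
```

An LM solver (e.g., `scipy.optimize.least_squares`) minimizing these residuals, started from the VIO prediction, plays the role Ceres plays in the paper.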
However, single-frame 2D-3D correspondence observations are not robust enough for online camera localization. When the 3D lines in the FoV are few or parallel to each other in 3D space, the 2D-3D correspondences cannot fully constrain the 6-DoF pose. Additionally, even when there are enough correspondences for pose estimation, the geometric localization noise of both the 2D and 3D lines makes the estimate jitter around the true pose. To solve these problems, a sliding window is utilized to add previous correspondence observations for optimizing the current pose (as shown in Fig. 5). We assume the camera motion estimated by VINS-Mono between two adjacent keyframes is accurate, which is reasonable because the drift of VINS-Mono over two adjacent keyframes is small. Let ΔT_j = (T^o_{j-1})^{-1} T^o_j denote the camera motion between adjacent keyframes j-1 and j. The camera pose of the previous i-th keyframe can then be formulated as

T_{k-i} = T_k ΔT_k^{-1} ΔT_{k-1}^{-1} ··· ΔT_{k-i+1}^{-1}.  (9)
Then all previous 2D-3D correspondences in the sliding window become "visible" in the current frame. In Eq. 9, the previous camera pose T_{k-i} is related to the current pose T_k through a constant pose transformation whose Lie algebra is denoted as ξ_{Δi}. Thus the pose optimization function can be written as

ξ* = argmin_ξ Σ_{i=0}^{n} Σ_{j=1}^{m_i} Σ_{P ∈ {P_j^s, P_j^e}} || l_j^T π(K exp(ξ_{Δi}^∧) exp(ξ^∧) P) ||²,  (10)

where n is the number of previous frames in the sliding window and m_i is the number of correspondences kept for frame k-i. More observations make the estimation more robust to outliers, and the motion constraints smooth the pose trajectory. When using the sliding window, an imbalance problem arises because the number of 2D-3D correspondences differs between frames. To equalize the contribution of each frame's correspondences and improve efficiency, a threshold on the maximum number of correspondences per frame is set, and the 2D-3D correspondences with short overlap length o are discarded. We employ the Ceres Solver [27] to implement the optimization. After an optimized camera pose is obtained, the 2D-3D correspondences are re-estimated with the updated pose to reject outliers, following the 2D-3D line matching of Section III-B with stricter thresholds. A more accurate camera pose can then be updated with the new correspondences. After several iterations, both the camera pose and the 2D-3D correspondences converge.
IV. Experimental Results
We tested the proposed method in two different real-world experiments. The first was conducted on the publicly available EuRoC MAV dataset [28], which provides ground-truth trajectories. Second, we performed experiments on our own dataset, collected with a RealSense D435i camera under varying conditions, to validate the performance of the proposed method.
IV-A. Implementation
Considering that 3D LiDAR maps are often large-scale with a large data volume, we first subsample the point clouds using CloudCompare [29] with a constant spatial resolution (15 cm in our experiments). Then the 3D line extraction based on [24] is conducted to obtain 3D lines. Since arbitrary pose initialization in maps is a kidnapped robot problem for global 2D-3D correspondence estimation, a coarse pose initialization is given for the first frame by the PnP solver [23] on manually labeled 2D-3D point correspondences (at least 4 pairs are required; we use 6 pairs for robustness). For the learning-based 2D line extraction, we directly use the U-Net model [26] trained on the Wireframe dataset [30] and modify it for online line segment detection. The testing platform is a desktop with an Intel Core i7-4790K CPU, 32 GB RAM, and an Nvidia GeForce GTX 980 Ti GPU. The GPU is only used for 2D line detection.
During the 2D-3D correspondence estimation, the angle distance threshold is set to 10 degrees to constrain the 3D projections to be almost parallel with the corresponding 2D lines, and a point-to-line distance threshold (in pixels) is used to collect matching pairs. If the number of correspondences for an image is less than a threshold (set to 8 empirically), the frame is identified as unstable and the camera pose is predicted by the camera motion only. If the number exceeds the maximum threshold, we discard the correspondences with short overlap length o for efficiency (both the maximum threshold and the window size are set empirically, which bounds the total number of correspondences in the sliding window). In the experiments, 2 or 3 optimization iterations are often enough to obtain a stable camera pose and 2D-3D correspondences.
Since our method is based on monocular visual-inertial odometry without loop closure, we compare it with two versions of VINS-Mono [2], i.e., with and without loop closure. Loop closure helps reduce the overall absolute trajectory error and improves mapping, but the refinement of past poses in the loop does not help real-time localization. Additionally, it causes pose jumps for the current pose, which have side effects on robot navigation.
IV-B. Results on the EuRoC MAV Dataset
The EuRoC MAV dataset [28] is a visual-inertial dataset collected onboard a UAV. It contains stereo images (20 Hz), synchronized IMU data (200 Hz), accurate ground-truth trajectories of about 70 meters, and scanned LiDAR maps. The Vicon room subsets consist of two LiDAR maps with three video sequences each. The 2D-3D correspondence results and the estimated trajectories in the LiDAR maps are shown in Fig. 6. All the 3D lines in the FoV are projected onto the image plane without occlusion checking. In the top-left image, we can observe that the projections of the 3D lines (in green) are shifted when using the estimated pose of VINS-Mono (odom). By iteratively optimizing the 2D-3D correspondences and camera poses, stable 2D-3D correspondences are obtained with a more accurate pose, and the position drifts are greatly reduced.
Table I: Absolute trajectory error (ATE, m) on the EuRoC MAV dataset.

Dataset       | VINS-Mono (odom) [2] | VINS-Mono (loop) [2] | 2D-3D matching
V1_easy       | 0.147                | 0.075                | 0.089
V1_medium     | 0.153                | 0.127                | 0.069
V1_difficult  | 0.301                | 0.152                | 0.173
V2_easy       | 0.275                | 0.099                | 0.266
V2_medium     | 0.221                | 0.135                | 0.132
V2_difficult  | 0.726                | 0.324                | 0.635
For quantitative analysis, the first 200 estimated states are used for trajectory alignment with the ground truth, consistently for all methods [32]. The absolute trajectory error (ATE) [31] results are shown in Table I; it is clear that the 2D-3D correspondences improve the pose estimation accuracy compared with odometry alone. The V2 room has more noise, making the 2D-3D correspondences sometimes unstable for pose optimization, which is why the improvements for the V2 room are less significant than those for the V1 room. In the worst case, no stable 2D-3D correspondence is available and the final estimates follow the odometry. Furthermore, our method shows competitive results compared with VINS-Mono (loop). However, loop closure optimizes the past poses in the loop and produces a pose jump for the current pose; for real-time localization, refining past poses is of no benefit, and the pose jumps have side effects on navigation. Our method always estimates the current poses in the sliding window, which greatly reduces the drifts without the pose-jump issue.
The relative pose errors (RPE) [32], shown in Table II and Fig. 7, illustrate the growth of position error with trajectory length. The accumulated drift of VINS-Mono (odom) grows with the traveled length, while the error of our method stays small and stable along the way, being related only to the localization accuracy of the 2D and 3D lines.
IV-C. Evaluation on Our Dataset
To further evaluate our method in varying urban environments, we tested it on our own data collected in indoor corridors and around outdoor buildings. An Intel RealSense D435i camera is used to collect synchronized images and IMU data: the left global-shutter imager captures monocular image sequences at 30 Hz (with the IR projector turned off) together with synchronized IMU data (200 Hz). The LiDAR maps are obtained by registering several scans of a FARO Focus3D S laser scanner, as shown in Fig. 8. Both the indoor corridors (Fig. 8a) and Smith Hall (Fig. 8c) have many occlusions, while the NSH building (Fig. 8b) is much simpler with fewer occlusions. In these experiments, all trajectories follow the same pattern: a complete loop returning to the start point, so the position drift can be observed. Loop closure was not stably detected in all runs, so the results of VINS-Mono with loop closure are not discussed here.
The indoor corridor results are shown in Fig. 1, and the results for the two outdoor buildings are shown in Fig. 9. Since we do not have ground-truth trajectories, the estimation accuracy is validated in the following two ways. For the qualitative analysis, the 3D line features are projected onto the image plane using the estimated poses of VINS-Mono (odom) and of our method. For VINS-Mono odometry, it can be observed in the top-left images that the 3D line projections (green lines) are shifted and scaled by the inaccurate camera poses, while with our method the pose estimates provide more accurate and stable 2D-3D structure correspondences in the bottom-left images. Additionally, we can observe obvious misalignment between the two trajectories. More online correspondence and localization results are shown in the video: https://youtu.be/H80Bnxm8IPE.
For the localization accuracy evaluation, we pick 5 frames along each trajectory and manually label 10 pairs of 2D-3D point correspondences per frame, then use the PnP solver to estimate the ground-truth camera poses of these frames. The 5 frames are sampled at different fractions of the total running time T (t/T = 0.1, 0.3, 0.5, 0.8, 1.0), and the RMSE of the positions over 5 runs against these ground truths is reported. The position errors are shown in Table III. The accumulated drifts of VINS-Mono (odom) increase considerably along the trajectories, while our method greatly improves the localization accuracy with the assistance of stable 2D-3D line correspondences: the localization error stays very small along the long trajectories. Another interesting observation is that the accumulated error can partially drift back when the system keeps its heading and moves backward to the start position, as shown in the VINS-Mono results on the NSH building.
Table III: Position errors (m) at sampled fractions t/T of the trajectories.

       | Corridors             | NSH building          | Smith Hall
t/T    | VINS-Mono | 2D-3D     | VINS-Mono | 2D-3D     | VINS-Mono | 2D-3D
0.1    | 0.125     | 0.109     | 0.137     | 0.152     | 0.354     | 0.180
0.3    | 0.596     | 0.128     | 0.319     | 0.127     | 0.507     | 0.154
0.5    | 0.438     | 0.094     | 1.203     | 0.170     | 1.057     | 0.201
0.8    | 0.575     | 0.120     | 0.352     | 0.156     | 1.245     | 0.156
1.0    | 0.705     | 0.112     | 0.621     | 0.132     | 2.176     | 0.178
Length (m) | 130               | 95                    | 120
In terms of efficiency, VINS-Mono does not use the map information, and its output frequency can be customized; however, different frequency settings change the odometry results considerably, so we select the most stable setting at 15 Hz. The estimation of 2D-3D correspondences and the camera pose costs about 0.012 seconds per keyframe. Since the 3D line extraction is performed offline before the system starts and the 2D line extraction runs at 25 Hz, our method finally runs at 15 Hz.
V. Conclusion
In this paper, we presented a novel monocular camera localization approach for prior LiDAR maps of urban environments. With the 3D geometric lines from the LiDAR maps and robust online 2D line detection, our method efficiently obtains coarse 2D-3D line correspondences based on the camera motion prediction from VINS-Mono. The pose optimization with 2D-3D correspondences greatly reduces the pose estimation drift of the VIO system without using visual-revisiting loop closure. Both qualitative and quantitative results on real-world datasets demonstrate that our method can efficiently obtain reliable 2D-3D correspondences and accurate camera poses in LiDAR maps. As future work, we intend to enhance the robustness of the 2D-3D correspondence estimation to inaccurate pose predictions, such as directly using the pose of the last frame as the prediction.
Acknowledgment
This work is supported by the Shimizu Institute of Technology, Tokyo and China Scholarship Council.
References

[1] R. Mur-Artal and J. D. Tardós, "ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017.
[2] T. Qin, P. Li, and S. Shen, "VINS-Mono: A robust and versatile monocular visual-inertial state estimator," IEEE Transactions on Robotics, vol. 34, no. 4, pp. 1004–1020, 2018.
[3] K. Sun, K. Mohta, B. Pfrommer, M. Watterson, S. Liu, Y. Mulgaonkar, C. J. Taylor, and V. Kumar, "Robust stereo visual inertial odometry for fast autonomous flight," IEEE Robotics and Automation Letters, vol. 3, no. 2, pp. 965–972, 2018.
[4] T. Caselitz, B. Steder, M. Ruhnke, and W. Burgard, "Monocular camera localization in 3D LiDAR maps," in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2016, pp. 1926–1931.
[5] X. Zuo, P. Geneva, Y. Yang, W. Ye, Y. Liu, and G. Huang, "Visual-inertial localization with prior LiDAR map constraints," IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 3394–3401, 2019.
[6] N. Piasco, D. Sidibé, C. Demonceaux, and V. Gouet-Brunet, "A survey on visual-based localization: On the benefit of heterogeneous data," Pattern Recognition, vol. 74, pp. 90–109, 2018.
[7] T. Sattler, B. Leibe, and L. Kobbelt, "Efficient & effective prioritized matching for large-scale image-based localization," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 9, pp. 1744–1756, 2017.
[8] Y. Kim, J. Jeong, and A. Kim, "Stereo camera localization in 3D LiDAR maps," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2018, pp. 1–9.
[9] Y. Feng, L. Fan, and Y. Wu, "Fast localization in large-scale environments using supervised indexing of binary features," IEEE Transactions on Image Processing, vol. 25, no. 1, pp. 343–358, 2016.
[10] R. W. Wolcott and R. M. Eustice, "Visual localization within LiDAR maps for automated urban driving," in 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2014, pp. 176–183.
[11] A. D. Stewart and P. Newman, "LAPS - localisation using appearance of prior structure: 6-DoF monocular camera localisation using prior pointclouds," in 2012 IEEE International Conference on Robotics and Automation, 2012, pp. 2625–2632.
 [12] G. Pascoe, W. Maddern, and P. Newman, “Direct visual localisation and calibration for road vehicles in changing city environments,” in IEEE International Conference on Computer Vision Workshops, 2015, pp. 9–16.
[13] P. Neubert, S. Schubert, and P. Protzel, "Sampling-based methods for visual navigation in 3D maps by synthesizing depth images," in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017, pp. 2492–2498.
[14] A. Gawel, T. Cieslewski, R. Dubé, M. Bosse, R. Siegwart, and J. Nieto, "Structure-based vision-laser matching," in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2016, pp. 182–188.
[15] D. J. Campbell, L. Petersson, L. Kneip, and H. Li, "Globally-optimal inlier set maximisation for camera pose and correspondence estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.
[16] M. Brown, D. Windridge, and J.-Y. Guillemaut, "A family of globally optimal branch-and-bound algorithms for 2D-3D correspondence-free registration," Pattern Recognition, vol. 93, pp. 36–54, 2019.
 [17] P. David, D. DeMenthon, R. Duraiswami, and H. Samet, “Simultaneous pose and correspondence determination using line features,” in IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, 2003, pp. II–II.
 [18] F. MorenoNoguer, V. Lepetit, and P. Fua, “Pose priors for simultaneously solving alignment and correspondence,” in European Conference on Computer Vision, 2008, pp. 405–418.
 [19] J. Levinson and S. Thrun, “Automatic online calibration of cameras and lasers.” in Robotics: Science and Systems, vol. 2, 2013, p. 7.
[20] L. Zhou, Z. Li, and M. Kaess, "Automatic extrinsic calibration of a camera and a 3D LiDAR using line and plane correspondences," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2018, pp. 5562–5569.
[21] D. Cattaneo, M. Vaghi, A. L. Ballardini, S. Fontana, D. G. Sorrenti, and W. Burgard, "CMRNet: Camera to LiDAR-map registration," in 2019 IEEE Intelligent Transportation Systems Conference, 2019, pp. 1283–1289.
[22] M. Feng, S. Hu, M. H. Ang, and G. H. Lee, "2D3D-MatchNet: Learning to match keypoints across 2D image and 3D point cloud," in 2019 International Conference on Robotics and Automation, 2019, pp. 4790–4796.
[23] V. Lepetit, F. Moreno-Noguer, and P. Fua, "EPnP: An accurate O(n) solution to the PnP problem," International Journal of Computer Vision, vol. 81, no. 2, p. 155, 2009.
[24] X. Lu, Y. Liu, and K. Li, "Fast 3D line segment detection from unorganized point cloud," arXiv preprint arXiv:1901.02532, 2019.
[25] R. G. Von Gioi, J. Jakubowicz, J.-M. Morel, and G. Randall, "LSD: A fast line segment detector with a false detection control," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 4, pp. 722–732, 2008.
[26] N. Xue, S. Bai, F. Wang, G.-S. Xia, T. Wu, and L. Zhang, "Learning attraction field representation for robust line segment detection," in IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 1595–1603.
[27] S. Agarwal, K. Mierle, and Others, "Ceres solver," http://ceres-solver.org.
[28] M. Burri, J. Nikolic, P. Gohl, T. Schneider, J. Rehder, S. Omari, M. W. Achtelik, and R. Siegwart, "The EuRoC micro aerial vehicle datasets," The International Journal of Robotics Research, vol. 35, no. 10, pp. 1157–1163, 2016.
[29] D. Girardeau-Montaut, "CloudCompare - open source project," Open-Source Project, 2011.
[30] K. Huang, Y. Wang, Z. Zhou, T. Ding, S. Gao, and Y. Ma, "Learning to parse wireframes in images of man-made environments," in IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 626–635.
[31] J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers, "A benchmark for the evaluation of RGB-D SLAM systems," in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012, pp. 573–580.
 [32] Z. Zhang and D. Scaramuzza, “A tutorial on quantitative trajectory evaluation for visual (inertial) odometry,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2018, pp. 7244–7251.