I. Introduction
Accurate extrinsic calibration has gained importance for vehicles equipped with a large number of sensors. Traditional manual calibration techniques, which require known calibration targets [1, 2, 3, 4, 5], suffer from limited flexibility and tend not to scale well to multi-sensor configurations.
This paper focuses on automatic calibration methods for 3D lidars. With the development of mobile robots, lidars have become one of the most popular sensors for perceiving the environment. Thanks to their accuracy and stability in measuring distance, they are used in many applications [6, 7, 8]. Much recent work prefers a configuration with multiple lidars over a single lidar, since it renders a richer view of the environment, offers denser measurements, and avoids problems such as occlusion and sparsity. On any mobile platform carrying multiple lidars, it is of paramount importance that the sensors can be calibrated automatically. Once the calibration is finished, all measurements can be correctly projected into a unified coordinate system.
Over the past years, automatic methods for calibrating sensors, e.g., lidar to camera [9], multi-camera [10], and camera to IMU [11], have been proposed. However, few efforts have investigated multi-lidar calibration, since this problem is challenging; as Choi et al. [12] explained, "searching for correspondences among scan points is difficult". [13] and [14] are two multi-lidar calibration methods, but they present several drawbacks. Firstly, they rely on an additional sensor. Secondly, their success depends on the quality of the initialization provided by users. Thirdly, both of them assume that the mobile platform undergoes sufficient motion.
To tackle these issues, we propose a novel approach for calibrating dual lidars without any additional sensors or artificial markers. The method assumes that three linearly independent planar surfaces forming a wall-corner shape are available as the calibration target. By matching these planes, our method acquires the unknown extrinsic parameters in two steps: a closed-form solution for initialization, and an optimizer that refines it by minimizing a defined cost function. This method is used to calibrate three lidars with overlapping regions on our mobile platform [see Fig. 1], and an overview of the method is shown in Fig. 1. Toward calibration with minimal human intervention, we make two significant contributions in this paper:

We make it possible to use objects of unknown size in outdoor environments as the calibration target.

We demonstrate that our method is efficient in applications, since the extrinsic parameters can be obtained immediately from a one-shot measurement.
II. Related Work
II-A. Calibration for Multi-Lidar Systems
Gao and Spletzer [13] proposed an algorithm to calibrate multiple lidars using point constraints provided by retro-reflective tapes placed in the scene. He et al. [14] demonstrated a technique to extract geometric features from point clouds, which enables an offline algorithm to calibrate multiple 2D lidars in arbitrary scenes. Shortly after, their approach was extended to a challenging scenario, an underground parking lot where GPS is not available [15]. However, such methods rely on an additional localization module, which complicates the calibration process.
Artificial landmarks are widely used to find correspondences among sensor data. Xie et al. [16] provided a general solution to jointly calibrate multiple cameras and lidars in a prebuilt environment with AprilTags. Steder et al. [17] proposed a tracking-based method to calibrate multiple 2D lidars using a moving object that appears in their overlapping areas. Building on this, Quenzel et al. [18] calibrated the same sensors with an additional verification step. However, these approaches require known markers to be placed in the scene. In this paper, we exploit common planar surfaces as the calibration target, inspired by [12], but our approach differs in relaxing the orthogonality assumption on these surfaces to achieve outdoor calibration.
II-B. Calibration for Other Sensing Systems
There are several published works on lidar-to-camera, multi-camera, and camera-to-IMU calibration. One of the first works to solve online camera-lidar calibration is [19], in which edge features in images are associated with lidar measurements using depth discontinuities, and the extrinsic parameters are optimized by minimizing a cost function. Metrics based on the Gradient Orientation Measure (GOM) [9], Mutual Information (MI) [20], and line-plane constraints [3] were also proposed. However, all of them require an initialization provided by users. In our proposed method, we introduce an algorithm to automatically initialize the extrinsic parameters by exploiting the geometric constraints of planar surfaces.
Developed from hand-eye calibration using structure-from-motion techniques, motion-based approaches have also been applied to this calibration problem. Heng et al. [10] proposed CamOdoCal, an automatic algorithm for four-camera calibration without the assumption of overlapping fields of view. They decouple the calibration process into initialization and refinement. In initialization, a rough estimate of the extrinsic parameters is computed by combining visual odometry with the vehicle's ego-motion. To refine the estimates, a bundle adjustment is used to optimize all of the cameras' poses and feature data. This pipeline is also employed in our method. However, CamOdoCal was explicitly designed for vision sensors, which may not be feasible for other sensor configurations. In contrast, Taylor and Nieto released a system [21, 22] to calibrate multiple heterogeneous sensors and their time offset. Generally, motion-based methods work for a variety of configurations and can be integrated into several SLAM systems [23]. However, their calibration accuracy is limited by the drift of the computed odometry, which needs to be refined using appearance cues in the surroundings.
III. Methodology
Our approach makes use of three linearly independent planar surfaces to calibrate a dual-lidar system. In this work, we introduce a robust algorithm to extract planes from scan points. The geometric structure of these planes provides enough constraints on the extrinsic parameters. We define $F_i$ as the 3D coordinate system with its origin at the geometric center of lidar $i$; its $x$, $y$ and $z$ axes point forward, left, and upward respectively. In this paper, we consider $F_r$ as the reference frame and $F_t$ as the target frame. The point cloud perceived by a lidar is denoted by $P$, and the coordinates of a point in $P$ are represented as $x$. Detailed notations are listed in Table I and visualized in Fig. 2.
Notation  Explanation

$F_r$ / $F_t$  Reference / target coordinate system
$P_i^r$ / $P_i^t$  Planar surfaces in $F_r$ / $F_t$
$p^r$ / $p^t$  Intersection point in $F_r$ / $F_t$
$m_i^r$ / $m_i^t$  Coefficients of $P_i^r$ / $P_i^t$
$n_i^r$ / $n_i^t$  Unit normal vector of $P_i^r$ / $P_i^t$
III-A. Plane Extraction
Denoting by $m = [a, b, c, d]^T$ the coefficients of a plane $P$, with normal direction $n = [a, b, c]^T$, the distance between a point $x$ and $P$ [see Fig. 3] is computed as follows:

\[ D(x, P) = \frac{\left| n^T x + d \right|}{\| n \|}. \tag{1} \]
To fit a planar model to a set of discrete points, we employ the random sample consensus (RANSAC) algorithm. By randomly selecting points from the cloud, candidate planar coefficients are acquired by solving a least-squares problem [24]:

\[ m^{*} = \arg\min_{m,\ \|n\| = 1} \sum_{k} \left( n^T x_k + d \right)^2, \tag{2} \]

where the parameter vector $m$ is updated iteratively until an optimal model with the maximum number of inlier points is acquired. To determine whether a point is an inlier, its squared distance to the plane is computed. To extract three models, the RANSAC procedure is executed three times; at each pass, points belonging to previously extracted models are ignored. Finally, we obtain three groups of planar coefficients, denoted by $m_1$, $m_2$, and $m_3$, describing the planar surfaces. Hence, we can compute the intersection point $p$ by solving the set of linear systems

\[ n_i^T p + d_i = 0, \quad i = 1, 2, 3. \tag{3} \]
After computing $p$, the unit normal vectors $n_1$, $n_2$ and $n_3$ can be represented up to scale. According to our assumption of linear independence, there exist three nonzero scalars $a_1$, $a_2$ and $a_3$ that satisfy the following equation:

(4)

where we can fix the directions of the normal vectors to make $a_1$, $a_2$ and $a_3$ positive.
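The plane-extraction step above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation (which uses PCL's RANSAC); the function names, thresholds, and iteration budget are our own choices:

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, dist_thresh=0.05, rng=None):
    """Fit plane coefficients m = [a, b, c, d] (unit normal) via RANSAC."""
    rng = np.random.default_rng(rng)
    best_inliers = None
    for _ in range(n_iters):
        # Sample 3 points and form the candidate plane through them.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                      # degenerate (collinear) sample
        n /= np.linalg.norm(n)
        d = -n @ p0
        dist = np.abs(points @ n + d)     # point-to-plane distances, Eq. (1)
        inliers = dist < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Least-squares refit on the inliers: the plane normal is the singular
    # vector of the centered inlier points with the smallest singular value.
    P = points[best_inliers]
    c = P.mean(axis=0)
    n = np.linalg.svd(P - c)[2][-1]
    return np.append(n, -n @ c), best_inliers

def extract_three_planes(points):
    """Sequentially extract three planes, removing each plane's inliers."""
    planes = []
    for _ in range(3):
        m, inliers = fit_plane_ransac(points)
        planes.append(m)
        points = points[~inliers]
    return planes

def intersection_point(planes):
    """Solve n_i^T p + d_i = 0, i = 1..3, for the corner point p (Eq. (3))."""
    N = np.stack([m[:3] for m in planes])
    d = np.array([m[3] for m in planes])
    return np.linalg.solve(N, -d)
```

The sequential removal of inliers mirrors the paper's "points belonging to previously extracted models are ignored" step.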
It is impossible to match these planar surfaces between lidars directly using the above results. Fig. 1 shows an example of the extracted planes, where the color values represent their extraction order; we can observe that the corresponding planes do not have the same order. By utilizing the wall-corner shape [see Fig. 2], these orders can be determined uniquely. Without loss of generality, we set $P_1$ and $P_2$ as the left and right plane respectively, and $P_3$ as the bottom plane. Their normal vectors should follow the right-hand rule:

(5)

Following the above steps, we can correctly match the corresponding planes between the two lidars.
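One concrete, assumed form of this ordering rule is to pick the bottom plane as the one whose normal is most vertical, then order the two side planes so that the scalar triple product of the three normals is positive. The sketch below encodes that convention; the paper's exact test in Eq. (5) may differ:

```python
import numpy as np

def order_planes(planes, up=np.array([0.0, 0.0, 1.0])):
    """Reorder three fitted planes [a, b, c, d] so that the third is the
    bottom (ground) plane and (n1 x n2) . n3 > 0 holds, a right-hand-rule
    convention (our assumed concrete form of the ordering test)."""
    # Flip each plane so its normal points toward the lidar origin
    # (the side where n^T 0 + d > 0).
    planes = [m if m[3] > 0 else -m for m in planes]
    # Bottom plane: normal most aligned with the vertical axis.
    k = max(range(3), key=lambda i: abs(planes[i][:3] @ up))
    side = [planes[i] for i in range(3) if i != k]
    n1, n2, n3 = side[0][:3], side[1][:3], planes[k][:3]
    if np.cross(n1, n2) @ n3 < 0:         # enforce right-hand orientation
        side = side[::-1]
    return [side[0], side[1], planes[k]]
```

Applying the same rule to both lidars yields a consistent plane ordering, so the matched pairs can be fed to the initialization step.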
III-B. Initialization Using a Closed-Form Solution
We can formulate dual-lidar calibration as a nonlinear optimization problem that minimizes the distance between corresponding planes. However, the defined cost function is nonconvex, as described in Section III-C. To avoid local minima, the parameters must first be initialized.
According to Section III-A, we already have two sets of fitted planes $\{P_i^r\}$ and $\{P_i^t\}$ with known unit normal vectors. Their relative rotation can thus be computed for initialization with the Kabsch algorithm [25], an effective approach that provides a least-squares solution for the rotation between a pair of vector sets. We use $M_r$ and $M_t$ to denote two $3 \times 3$ matrices whose $i$-th columns are $n_i^r$ and $n_i^t$ respectively, and define the cross-covariance matrix $H = M_t M_r^T$. By calculating the singular value decomposition (SVD) $H = U \Sigma V^T$, the rotation $R$ can be computed as:

\[ R = V \,\mathrm{diag}\!\left(1,\, 1,\, \det(V U^T)\right) U^T. \tag{6} \]
The relative translation can be computed directly from the plane intersection points:

\[ t = p^r - R\, p^t. \tag{7} \]
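The closed-form initialization can be sketched as follows; `kabsch` and `init_extrinsics` are illustrative names, and the sketch assumes the unit normals have already been matched across the two lidars:

```python
import numpy as np

def kabsch(N_t, N_r):
    """Rotation R minimizing sum_i ||R n_i^t - n_i^r||^2.
    N_t, N_r are 3x3 matrices whose columns are the matched unit normals."""
    H = N_t @ N_r.T                       # cross-covariance of the vector sets
    U, _, Vt = np.linalg.svd(H)
    V = Vt.T
    # diag(1, 1, det(V U^T)) guards against a reflection solution, Eq. (6).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(V @ U.T))])
    return V @ D @ U.T

def init_extrinsics(normals_t, normals_r, p_t, p_r):
    """Closed-form initialization: rotation from matched normals (Kabsch),
    translation from the corner intersection points, t = p^r - R p^t."""
    R = kabsch(np.stack(normals_t, axis=1), np.stack(normals_r, axis=1))
    t = p_r - R @ p_t
    return R, t
```

Because the three normals are linearly independent, the cross-covariance has full rank and the recovered rotation is unique.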
III-C. Nonlinear Optimization
The initial solution is further refined via nonlinear optimization. We define a cost function describing the Euclidean distance between corresponding planes, computed as the sum of squared distances between each point and its corresponding plane. We write down the cost function and adopt the Levenberg-Marquardt algorithm for the nonlinear optimization:

\[ (R^{*}, t^{*}) = \arg\min_{R,\, t} \sum_{i=1}^{3} \sum_{x \in P_i^t} \left( (n_i^r)^T (R x + t) + d_i^r \right)^2, \tag{8} \]

where $P_i^r$ is the counterpart of $P_i^t$, and $R$ as well as $t$ are the rotation matrix and translation vector from $F_t$ to $F_r$ respectively.
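The refinement can be prototyped with SciPy's Levenberg-Marquardt solver in place of Ceres; the function name `refine`, the rotation-vector parameterization, and the data layout are our own assumptions:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine(R0, t0, target_planes_points, ref_plane_coeffs):
    """Refine (R, t) by minimizing the point-to-plane residuals of Eq. (8):
    r = n_i^T (R x + t) + d_i for every target point x on plane i.
    target_planes_points[i] is an (N_i, 3) array of points on target plane i;
    ref_plane_coeffs[i] = [a, b, c, d] with unit normal in the reference frame."""
    x0 = np.concatenate([Rotation.from_matrix(R0).as_rotvec(), t0])

    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        t = x[3:]
        res = []
        for pts, m in zip(target_planes_points, ref_plane_coeffs):
            n, d = m[:3], m[3]
            res.append((pts @ R.T + t) @ n + d)   # signed point-to-plane distances
        return np.concatenate(res)

    sol = least_squares(residuals, x0, method="lm")  # Levenberg-Marquardt
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```

Starting from the closed-form initialization of Section III-B, the solver only has to remove residual noise, so convergence is fast.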
IV. Experiments
To evaluate the proposed extrinsic calibration method, we test it with different configurations of dual lidars. Experiments are conducted on both synthetic data and real sensor data. All resulting values are compared, in terms of accuracy, against the ground truth or against the values provided by competing methods.
IV-A. Implementation Details
We adopt PCL (http://pointclouds.org) to preprocess the point clouds and implement the RANSAC-based plane fitting. The Eigen library (http://eigen.tuxfamily.org) is used to implement the Kabsch algorithm, and Ceres Solver (http://ceres-solver.org) is used to solve the nonlinear optimization problem. In the optimization, we bound the maximum number of iterations and set a stopping tolerance.
IV-B. Experiments on Synthetic Data
To verify the performance of the proposed algorithm, we randomly generate scan points (planar points and noisy points) in a bounded space. The planar points are generated evenly on three planar surfaces and are subjected to zero-mean Gaussian noise with a small standard deviation. The rotation angle on the z-axis between $P_1$ and $P_2$ varies from 60° to 120° at intervals of 10°, while $P_3$ is set on the bottom, orthogonal to $P_1$ and $P_2$. The noisy points are distributed in the space following a zero-mean Gaussian distribution. The target lidar is placed arbitrarily such that all the planar surfaces can be observed. Rotations from $F_r$ to $F_t$ are randomly generated on the $x$, $y$ and $z$ axes, and translations are generated within bounded ranges. An example of the sensor configuration and the generated points is visualized in Fig. 4(a). In our experiments, we randomly select two configurations with different rotations and translations [see Table II (top)] as the ground truth to compare with the resulting values. The difference in rotation is measured as the angle difference between the ground-truth and the resulting rotation on each axis, and the difference in translation is computed using vector subtraction.
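These error metrics can be computed as below. Treating the paper's per-axis angle operator as an XYZ Euler-angle difference is our own assumption; we additionally report the overall geodesic angle between the two rotations:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def calib_errors(R_gt, t_gt, R_est, t_est):
    """Per-axis rotation differences in degrees (assumed concrete form of the
    per-axis angle operator), the overall geodesic angle between the two
    rotations, and the translation error by vector subtraction."""
    e_axes = (Rotation.from_matrix(R_gt).as_euler("xyz", degrees=True)
              - Rotation.from_matrix(R_est).as_euler("xyz", degrees=True))
    # Geodesic angle: cos(theta) = (trace(R_gt^T R_est) - 1) / 2.
    cos = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    e_total = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return e_axes, e_total, np.asarray(t_est) - np.asarray(t_gt)
```

Averaging these errors over repeated noisy trials yields the mean and standard deviation statistics reported in Table II.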
For each pair of rotation and translation settings, we perform 10 trials on the noisy data and compute the mean as well as the standard deviation of the rotation and translation errors. In Fig. 5, blue bars and red lines indicate the mean and standard deviation respectively. Detailed calibration results are shown in Table II (bottom), and an example of the calibrated point cloud is shown in Fig. 4(b). In summary, the rotation and translation errors on the synthetic data are small, which shows that the proposed method can successfully calibrate the extrinsic parameters.
Conf.  Rotation []  Translation [] 

1  2.7337, 0.3946, 0.1809  0.8766, 0.4672, 1.0474 
2  0.5174, 0.1277, 0.1222  1.3785, 1.3929, 1.3020 
Conf.  []  Rotation Error []  Translation Error []  

mean  std.  mean  std.  
1  60  0.0035  0.0035  0.0107  0.0161 
70  0  0  0.0001  0  
80  0.0024  0.0056  0.0045  0.0085  
90  0.0016  0.0040  0.0100  0.0243  
100  0.0051  0.0107  0.0087  0.0178  
110  0.0043  0.0063  0.0097  0.0161  
120  0.0018  0.0051  0.0083  0.0243  
2  60  0.0096  0.0104  0.0260  0.0349 
70  0.0036  0.0083  0.0101  0.0274  
80  0.0021  0.0064  0.0039  0.0109  
90  0.0033  0.0071  0.0052  0.0101  
100  0.0126  0.0170  0.0245  0.0339  
110  0.0029  0.0057  0.0084  0.0163  
120  0.0097  0.0139  0.0217  0.0352 
IV-C. Experiments on Real Data
We calibrate a sensor system consisting of three 16-beam RS-LiDAR sensors (https://www.robosense.ai/rslidar/rslidar16) on our vehicle. As presented in Fig. 1, these lidars are mounted at the front, top, and tail positions respectively. In particular, the tail lidar is mounted with approximately 180° rotation offset in yaw. In later sections, a configuration refers to a pair of these lidars. The surrounding buildings and the ground are used as the planar surfaces for calibration. We select two outdoor calibration environments with two difficulty levels (easy and hard), which are shown in Fig. 6.
IV-C.1 Easy
Calibration is performed for two different configurations: one as a standard setup and one as a challenging setup. We compare against three methods. The first two are derived from the proposed one with some steps modified, while the last is based on motion-based techniques:

W/O refinement: the refinement step is removed.

ICP refinement: the nonlinear optimization refinement is replaced by a point-to-plane ICP [26].

Motion-based: the extrinsic parameters are estimated from the sensors' motion [21, 22].
Since the precise extrinsic parameters of the multi-lidar system are unknown, we use the values provided by the manufacturer as the ground truth to evaluate these methods.
Method  Rotation []  Error [] 
Ground truth  0.0096, 0.0989, 0.0425  — 
W/O refinement  0.0046, 0.1138, 0.0193  0.2221 
ICP refinement  0.0882, 0.4254, 0.0462  0.5333 
Motion-based  0.2269, 0.0451, 0.0007  0.2332 
Proposed  0.0012, 0.0892, 0.0276  0.0203 
Translation []  Error []  
Ground truth  0.377002, 0.03309009, 1.23236  — 
W/O refinement  0.314979, 0.0509327, 1.23395  0.102694 
ICP refinement  1.21884, 0.0670357, 1.55964  0.903941 
Motion-based  0.2115, 0.4892, 1.1903  0.5474 
Proposed  0.336322, 0.00271319, 1.19076  0.067196 
Method  Rotation []  Error [] 
Ground truth  0.0161, 0.0192, 3.1392  — 
W/O refinement  0.0527, 0.0212, 3.1328  0.0373 
ICP refinement  0.0333, 0.1329, 3.1277  0.1605 
Motion-based  0.0044, 0.0037, 2.6462  0.4986 
Proposed  0.0004, 0.0020, 3.1375  0.0238 
Translation []  Error []  
Ground truth  1.96443, 0.04073154, 1.13756  — 
W/O refinement  1.92115, 0.0968707, 0.80199  0.341959 
ICP refinement  1.95929, 0.0473068, 0.383619  0.753959 
Motion-based  1.7794, 0.4670, 1.11136  0.5472 
Proposed  1.9103, 0.052992, 1.0744  0.08357 
The results for the different configurations are listed in Tables III and IV. The estimated extrinsic parameters of the proposed algorithm are quite close to the ground truth, and our method achieves the smallest relative rotation and translation errors among the compared approaches. We observe that larger errors are caused by the ICP refinement and motion-based methods. The ICP approach fails because of wrong matching of corresponding planes, while for the motion-based approach, the drift and uncertainty of the estimated motion significantly reduce the accuracy of the calibration results. The proposed method also performs better than the W/O refinement method, since the nonlinear optimization further reduces the noise. During the calibration, the optimization time is short for both configurations.
To evaluate the calibration results qualitatively, we transform the point cloud in the target frame into the reference frame using the extrinsic parameters provided by the ground truth, W/O refinement, and the proposed method respectively. The top view of these fused point clouds can be seen in Fig. 7. We observe that the point clouds calibrated by the proposed approach show little uncertainty on the planar surfaces. Therefore, our algorithm can successfully calibrate dual lidars on real data with low error.
IV-C.2 Hard
In the following, we study the performance of our approach in the hard scenario, which is more challenging because several objects on the planes may influence the plane extraction and the optimization results. In this experiment, one configuration is calibrated. The relative rotation and translation errors compared with the ground truth remain small, and the optimization time is short. The fused point clouds are shown in Fig. 8, where the planar surfaces are registered well without much offset. We conclude that the extrinsic parameters can be recovered in this scenario.
IV-D. Discussion
This method makes three assumptions: the lidars are horizontally mounted; they share large overlapping fields of view with each other; and three planar surfaces are available for calibration. Therefore, the proposed method may fail in several cases. For instance, if the lidars are mounted at arbitrary orientations, the initialization will be wrong. Another failure case is that, if the planar surfaces are wrongly detected, their correspondences will be mismatched.
V. Conclusion and Future Work
In this paper, we have presented an automatic algorithm for calibrating a dual-lidar system without any additional sensors, artificial landmarks, or motion information provided by other sensors. A RANSAC-based model-fitting approach is used to extract three linearly independent planar surfaces from scan points. The geometric structure of these planar surfaces provides linear constraints for initialization, and a final nonlinear optimization refines the estimates. The proposed method has been demonstrated to recover the extrinsic parameters of a dual-lidar system with small rotation and translation errors under different testing conditions.
It would be beneficial to determine the parameters in more general cases, e.g., when the sensors do not share any overlapping fields of view or are arbitrarily mounted on a vehicle. Such problems may be solved by developing a simultaneous localization and mapping (SLAM) system in which the extrinsic parameters are jointly optimized with the odometry and the map within a unified framework.
References
 [1] A. Geiger, F. Moosmann, Ö. Car, and B. Schuster, “Automatic camera and range sensor calibration using a single shot,” in Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012, pp. 3936–3943.
 [2] G. Xie, T. Xu, C. Isert, M. Aeberhard, S. Li, and M. Liu, "Online active calibration for a multi-LRF system," in 2015 IEEE 18th International Conference on Intelligent Transportation Systems. IEEE, 2015, pp. 806–811.
 [3] L. Zhou, Z. Li, and M. Kaess, “Automatic extrinsic calibration of a camera and a 3d lidar using line and plane correspondences,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018, pp. 5562–5569.
 [4] Q. Liao, M. Liu, L. Tai, and H. Ye, “Extrinsic calibration of 3d range finder and camera without auxiliary object or human intervention,” 2017.
 [5] Q. Liao, Z. Chen, Y. Liu, Z. Wang, and M. Liu, “Extrinsic calibration of lidar and camera with polygon,” in 2018 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2018, pp. 200–205.
 [6] T. Shan and B. Englot, "LeGO-LOAM: Lightweight and ground-optimized lidar odometry and mapping on variable terrain," 2018.
 [7] Y. Zhou and O. Tuzel, "VoxelNet: End-to-end learning for point cloud based 3d object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4490–4499.
 [8] P. Yun, L. Tai, Y. Wang, C. Liu, and M. Liu, "Focal loss in 3d object detection," IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 1263–1270, April 2019.
 [9] Z. Taylor, J. Nieto, and D. Johnson, “Automatic calibration of multimodal sensor systems using a gradient orientation measure,” in Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on. IEEE, 2013, pp. 1293–1300.
 [10] L. Heng, B. Li, and M. Pollefeys, “Camodocal: Automatic intrinsic and extrinsic calibration of a rig with multiple generic cameras and odometry,” in Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on. IEEE, 2013, pp. 1793–1800.
 [11] Z. Yang, T. Liu, and S. Shen, “Selfcalibrating multicamera visualinertial fusion for autonomous mavs,” in Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on. IEEE, 2016, pp. 4984–4991.
 [12] D.G. Choi, Y. Bok, J.S. Kim, and I. S. Kweon, “Extrinsic calibration of 2d lidars using two orthogonal planes,” IEEE Transactions on Robotics, vol. 32, no. 1, pp. 83–98, 2016.
 [13] C. Gao and J. R. Spletzer, “Online calibration of multiple lidars on a mobile vehicle platform,” in Robotics and Automation (ICRA), 2010 IEEE International Conference on. IEEE, 2010, pp. 279–284.
 [14] M. He, H. Zhao, F. Davoine, J. Cui, and H. Zha, “Pairwise lidar calibration using multitype 3d geometric features in natural scene,” in Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on. IEEE, 2013, pp. 1828–1835.
 [15] M. He, H. Zhao, J. Cui, and H. Zha, “Calibration method for multiple 2d lidars system,” in Robotics and Automation (ICRA), 2014 IEEE International Conference on. IEEE, 2014, pp. 3034–3041.
 [16] Y. Xie, R. Shao, P. Guli, B. Li, and L. Wang, “Infrastructure based calibration of a multicamera and multilidar system using apriltags,” in 2018 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2018, pp. 605–610.
 [17] J. Röwekämper, M. Ruhnke, B. Steder, W. Burgard, and G. D. Tipaldi, “Automatic extrinsic calibration of multiple laser range sensors with little overlap,” in Robotics and Automation (ICRA), 2015 IEEE International Conference on. IEEE, 2015, pp. 2072–2077.
 [18] J. Quenzel, N. Papenberg, and S. Behnke, “Robust extrinsic calibration of multiple stationary laser range finders,” in Automation Science and Engineering (CASE), 2016 IEEE International Conference on. IEEE, 2016, pp. 1332–1339.
 [19] J. Levinson and S. Thrun, “Automatic online calibration of cameras and lasers.” in Robotics: Science and Systems, vol. 2, 2013.
 [20] G. Pandey, J. R. McBride, S. Savarese, and R. M. Eustice, “Automatic extrinsic calibration of vision and lidar by maximizing mutual information,” Journal of Field Robotics, vol. 32, no. 5, pp. 696–722, 2015.
 [21] Z. Taylor and J. Nieto, “Motionbased calibration of multimodal sensor arrays,” in Robotics and Automation (ICRA), 2015 IEEE International Conference on. IEEE, 2015, pp. 4843–4850.
 [22] Z. Taylor and J. Nieto, "Motion-based calibration of multimodal sensor extrinsics and timing offset estimation," IEEE Transactions on Robotics, vol. 32, no. 5, pp. 1215–1229, 2016.
 [23] T. Qin, P. Li, and S. Shen, “Vinsmono: A robust and versatile monocular visualinertial state estimator,” IEEE Transactions on Robotics, vol. 34, no. 4, pp. 1004–1020, 2018.
 [24] R. Fan, J. Jiao, J. Pan, H. Huang, S. Shen, and M. Liu, "Real-time dense stereo embedded in a UAV for road inspection," 2019.
 [25] W. Kabsch, “A discussion of the solution for the best rotation to relate two sets of vectors,” Acta Crystallographica Section A: Crystal Physics, Diffraction, Theoretical and General Crystallography, vol. 34, no. 5, pp. 827–828, 1978.
 [26] F. Pomerleau, F. Colas, R. Siegwart, and S. Magnenat, “Comparing icp variants on realworld data sets,” Autonomous Robots, vol. 34, no. 3, pp. 133–148, 2013.
 [27] G. Pandey, S. Giri, and J. R. Mcbride, “Alignment of 3d point clouds with a dominant ground plane,” in Intelligent Robots and Systems (IROS), 2017 IEEE/RSJ International Conference on. IEEE, 2017, pp. 2143–2150.