I Introduction
Unmanned driving technology is becoming increasingly popular [1]. Nowadays, 5G technology is accelerating the development of cellular vehicle-to-everything (C-V2X) technology [2], in which unmanned vehicles need to perceive various objects to navigate and avoid obstacles [3]. In such C-V2X systems, LiDARs are used in addition to cameras, because illumination can affect the image quality of cameras [4], and the position estimation of feature points depends on the accuracy of the cameras' intrinsic and extrinsic parameters [5]. However, LiDARs have several limitations. First, LiDARs have blind areas [6]: for example, if a vehicle is surrounded by tall trucks, it loses most of its observations. Localizing an unmanned system requires the point cloud information around the vehicle, and point-cloud-based localization has shortcomings of its own: for various reasons, such as road construction or vegetation pruning at the roadside, the prior 3D surfel map may change. Second, LiDARs are very expensive, and multi-LiDAR solutions are costly [7]. Mounting LiDARs on lampposts, as shown in Fig. 1, can solve the problems above. Lamppost LiDARs can provide a real-time surfel map, so moving vehicles can directly receive the 3D information provided by the lampposts through the 5G network.
However, the extrinsic parameters of LiDARs mounted on lampposts or other urban facilities are unknown, and we need to know their poses in the world to obtain a complete real-time 3D surfel map of the city. To solve these problems, the calibration of such a multi-LiDAR configuration is necessary.
Over the past years, many methods for calibrating LiDARs have been proposed, but they present several drawbacks. Some rely exclusively on an additional sensor, need a good initialization provided by users, or need structures such as walls, which may not exist on docks and in other scenarios. Motion-based approaches require the LiDARs to be installed on mobile platforms such as cars, so they are not feasible in the case shown in Fig. 1. Tracking-based methods need moving objects to be tracked, but the tracking accuracy affects the calibration result, and such methods require significant human intervention.
In this paper, we propose an automatic calibration approach for the proposed multi-LiDAR system. First, we place two poles on the ground. After that, we extract the poles from the LiDAR data using intensity information. Then we use the identified poles to construct constraints, so that the calibration problem is transformed into an optimization problem. Finally, since there are several locally optimal solutions, we provide a way to choose the correct one. We conduct a variety of experiments to show the reliability and accuracy of the proposed approach. The contributions of this paper are summarized as follows:

We propose a simple method to calculate the positions of the points that a LiDAR generates on an arbitrary pole, so that the errors can be analyzed theoretically.

We give a method of calibrating LiDARs using two non-parallel poles stickered with retro-reflective tape. This method obtains good results and can be used in scenarios where the LiDARs cannot be moved, as described in Fig. 1.

We provide extensive evaluation experiments on simulated and real-world datasets.
II Related Work
There are two main types of LiDAR calibration approaches: appearance-based and motion-based. The former turn calibration into a registration problem and usually need fixed markers or a prior map of the environment. The latter utilize the constraints of the sensors' motion to recover the extrinsic parameters and are formulated as the well-known hand-eye calibration problem. The accuracy of such approaches depends on the accuracy of the estimated motions, which is easily affected by accumulated drift.
Underwood et al. [8] propose a calibration method that needs one vertical pole with retro-reflective tape and a sensor platform limited to a planar surface. The platform must be moved so that the sensors can observe the environment from different headings. Gao et al. [9] use retro-reflective targets placed in scenes to calibrate a multi-LiDAR system, and this approach needs the position of the vehicle platform and an initial calibration estimate. All these approaches need a platform that can be moved, so they are hard to apply in a fixed-LiDAR system like that in Fig. 1.
Shang et al. [10] present a calibration method for 3D LiDARs which only needs an orthogonal normal vector pair, where the normal vectors are generated from a planar ground plane and a vertical wall. Jiao et al. [11] use three linearly independent planar surfaces to find correspondences to enable automatic LiDAR calibration, but the requirement of three planar surfaces is very demanding. Muhammad et al. [12] propose a method for multi-beam LiDARs, based on an optimization process that estimates the LiDAR calibration parameters from a coarse initial calibration. The drawback of all these calibration approaches is that they need a specific environment, a mobile platform, or a reliable initial value, which are hard to obtain in some situations. Our approach only depends on two non-parallel poles stickered with retro-reflective tape, which are easy to place in any situation. We need neither a movable LiDAR platform nor an initial calibration estimate, which makes our approach more general and practical.
Many LiDAR-camera calibration algorithms have also emerged. Levinson et al. [13] introduce techniques that enable camera-laser calibration online, automatically, and in arbitrary environments, using a probabilistic monitoring algorithm. Martin et al. [14] present a pipeline for mutual pose and orientation estimation of a LiDAR-camera system using a coarse-to-fine approach. Pandey et al. [15] report on a mutual information (MI) algorithm; MI as the registration criterion works without any specific calibration targets.
Motion-based approaches all require that the LiDARs can be moved. Heng et al. [16] publish a tool called CamOdoCal, a versatile algorithm which does not need any prior knowledge about the rig setup. Jiao et al. [17] align the estimated motions of each sensor as an initialization, then refine them with an appearance-based method. In our case, shown in Fig. 1, motion-based approaches are all impossible, because the LiDARs cannot be moved. Quenzel et al. [18] present a method using pose-graph optimization to calibrate the extrinsic parameters of LiDARs; however, this method needs each object to be clustered exactly into one segment.
Many of the above techniques rely on one or more assumptions and are not applicable in our case. Our approach only needs two poles with retro-reflective tape, which helps us recognize the poles. It involves neither ego-motion nor a prior assumption about the environment. We only need to place the two poles, not parallel to each other, in the LiDARs' overlapping area.
III Methodology
We first place two poles so that they are not parallel to each other. In this section, we provide details of the process.
III-A Pole Extraction and Representation
Because the poles are stickered with retro-reflective tape, it is easy to identify them in the point cloud using a simple threshold on the intensity value. When the distance between the LiDAR and the pole is about 5 meters, the threshold can be chosen as presented in Tab. I.
LiDAR Manufacturer | Threshold of Intensity
Velodyne           | 230
Hesai              | 200
Leishen            | 200
RoboSense          | 200
Even though the threshold filters out many irrelevant points, some outliers remain. These can be removed with any clustering algorithm, because in the majority of cases they are few in number.
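As a concrete illustration, the threshold-and-cluster step above might look as follows in Python. This is a minimal sketch, not the implementation used in the paper; the function name, the single-linkage clustering, and the `cluster_radius` parameter are our own illustrative choices:

```python
import numpy as np

def extract_pole_points(points, intensities, threshold=200, cluster_radius=0.3):
    """Keep high-intensity returns (retro-reflective tape), then drop
    sparse outliers by keeping only the largest Euclidean cluster.

    points:      (N, 3) array of XYZ coordinates
    intensities: (N,) array of per-point intensity values
    threshold:   manufacturer-dependent intensity cut-off (see Tab. I)
    """
    candidates = points[intensities > threshold]

    # Simple single-linkage clustering: grow each cluster by `cluster_radius`.
    unassigned = list(range(len(candidates)))
    clusters = []
    while unassigned:
        seed = [unassigned.pop(0)]
        cluster = []
        while seed:
            i = seed.pop()
            cluster.append(i)
            near = [j for j in unassigned
                    if np.linalg.norm(candidates[i] - candidates[j]) < cluster_radius]
            for j in near:
                unassigned.remove(j)
            seed.extend(near)
        clusters.append(cluster)

    # The pole is assumed to produce the largest high-intensity cluster.
    largest = max(clusters, key=len)
    return candidates[largest]
```

Any off-the-shelf clustering (e.g. DBSCAN) would serve equally well here; the key point is only that the pole yields a dense, dominant cluster of high-intensity returns.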
The pole point cloud can be represented by a line, written in vector form as l(s) = p + s d, where s is a scalar, p is a point on the line, and d is the direction of the line; we denote this line as (p, d). Another way of expressing the pole point cloud is as the set of its points {x_k}, where x_k is a point in the point cloud. For convenience, T is used to represent the transformation from the world frame to the LiDAR frame, and P to represent the pole captured by the LiDAR.
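Fitting the line representation (a point plus a unit direction) to an extracted pole cloud can be done with a principal-component fit. The sketch below assumes the pole points are roughly collinear; the function name is our own:

```python
import numpy as np

def fit_line(points):
    """Fit a 3D line to pole points: return (p, d), where p is the centroid
    and d is the unit direction of largest variance."""
    p = points.mean(axis=0)
    # Right-singular vector of the centred points = principal direction.
    _, _, vt = np.linalg.svd(points - p)
    d = vt[0]
    if d[2] < 0:          # orient upward for a roughly vertical pole
        d = -d
    return p, d
```

The centroid together with the leading singular vector is the standard total-least-squares line fit, which is appropriate here since LiDAR noise affects all coordinates.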
III-B Initialization for Calibration
As shown in Fig. 2, we can place the poles arbitrarily. Considering the type of LiDAR, such as the VLP-16¹ or the Pandar64², we should not arrange a pole horizontally, because the LiDAR's beams may not scan it, as can be seen in Fig. 3.
¹ The Velodyne 16-channel mechanical LiDAR, one of the most common LiDARs.
² A 64-channel mechanical LiDAR from Hesai Technology.
LiDAR calibration is accomplished by aligning the poles observed by the different LiDARs. Essentially, we want to obtain a rotation matrix R and a translation vector t that describe the pose relationship between the two LiDARs. We represent the poles from the two LiDARs in the two ways described above, as lines (p, d) with unit direction d and as point clouds X, respectively. The parameters (R, t) are then acquired by solving a least-squares problem:
(R, t) = argmin_{R, t} Σ_{x ∈ X} ‖(R x + t − p) × d‖²    (1)
where ‖·‖ denotes the norm of a vector. In the above case there is only one line (p, d) and one point cloud X, so there are infinitely many solutions. If there are two lines and two point clouds, and the lines are not parallel, the number of solutions is greatly reduced. We therefore introduce correspondence variables s_ij: if point cloud X_i corresponds to line (p_j, d_j), then s_ij = 1; otherwise s_ij = 0. We then slightly extend (1) to obtain
(R, t) = argmin_{R, t} Σ_i Σ_j s_ij Σ_{x ∈ X_i} ‖(R x + t − p_j) × d_j‖²    (2)
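The objective in (2) can be evaluated as a sum of point-to-line distances. The sketch below assumes the residual is the perpendicular distance from each transformed point to its matched line; `calibration_cost` and its argument layout are our own illustrative choices, not the paper's code, and a nonlinear solver would minimize this cost over (R, t):

```python
import numpy as np

def point_to_line_dist(x, p, d):
    """Perpendicular distance from point x to the line through p with unit direction d."""
    return np.linalg.norm(np.cross(x - p, d))

def calibration_cost(R, t, clouds, lines, s):
    """Sum of squared distances from the transformed points of cloud i to
    line j, gated by the correspondence variable s[i][j] in {0, 1}.

    clouds: list of (N_i, 3) point arrays from one LiDAR
    lines:  list of (p, d) pairs (unit d) extracted from the other LiDAR
    s:      correspondence matrix, s[i][j] = 1 iff cloud i matches line j
    """
    cost = 0.0
    for i, X in enumerate(clouds):
        for j, (p, d) in enumerate(lines):
            if s[i][j]:
                for x in X:
                    cost += point_to_line_dist(R @ x + t, p, d) ** 2
    return cost
```

With two non-parallel lines the cost still leaves a discrete ambiguity (the locally optimal solutions discussed next), which is why the correct result must be selected afterwards.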
III-C Determination of Correspondence Relation
If we have an excellent prior, it is easy to determine the correspondence relationship s_ij, but in many cases such a prior is not available. Although a reliable initial pose can be obtained by manually adjusting the point clouds in an editing tool such as CloudCompare [19], doing so is cumbersome. Each pairing relationship has four different solutions, and there are two different correspondence relationships, so there are eight different situations. Fig. 4 illustrates the four situations that can be matched.
We can enumerate all the corresponding relationships and then find a reasonable result.
Although it is easy for a human to judge what a reasonable result is, this is not easy for a computer, because it lacks the semantic information needed to find the correspondence. One solution is to use the Iterative Closest Point (ICP) algorithm [20] to register the two point clouds and choose the result that is closest to the identity. If the resulting rotation is R and the resulting translation is t, the rotation error can be represented as ‖Log(R)‖, where Log(·) maps SO(3) to so(3), and the translation error is ‖t‖. In most situations, the correct result's rotation and translation errors are both minimal. The relevant results can be seen in Sect. IV.
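The selection rule can be sketched as follows: the rotation error is the angle of the axis-angle form of R (the norm of its so(3) logarithm), and candidates are ranked by closeness to the identity. The function names and the tie-breaking order (rotation first, then translation) are our own assumptions:

```python
import numpy as np

def rotation_angle(R):
    """Geodesic distance of R from the identity: the angle of its
    axis-angle form, i.e. the norm of Log(R) in so(3)."""
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(c)

def pick_best_candidate(icp_results):
    """Among ICP refinements (R, t) of the candidate calibrations, pick the
    one closest to the identity, comparing rotation first, then translation."""
    return min(icp_results,
               key=lambda Rt: (rotation_angle(Rt[0]), np.linalg.norm(Rt[1])))
```

The intuition is that if a candidate calibration is correct, ICP between the already-aligned clouds should barely move them, so its refinement stays near the identity.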
III-D Accuracy Evaluation
We next address why a pole can be represented by a line and how to measure the error. First, assume the pole is placed vertically at the origin, with its central axis along the Z-axis, and the LiDAR is placed on the X-axis with an arbitrary orientation. The parameters are then only the LiDAR's position and direction; the direction can be represented as a vector n perpendicular to the central plane of the LiDAR, and the radius of the pole is r. We then obtain the equation of the cylinder:
x² + y² = r²    (3)
In addition, since the pole is a cylinder of finite height, the Z-value is restricted to a range. Then, for a point on the pole whose coordinates satisfy (3), we can get
(4)  
where the first equation corresponds to (3), the LiDAR's position is as mentioned above, and the remaining parameter is the angular resolution of each beam. When the information is sufficient, it is easy to solve for the variables using an algorithm for nonlinear equation systems, such as preconditioned conjugate gradients [21], the trust-region-dogleg algorithm [22], or the Levenberg-Marquardt method [23], [24]. Therefore, given a LiDAR position and its beam parameters, we can use (4) to infer the coordinates of all points on the pole. The result can be seen in Fig. 5.
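Because the pole is a vertical cylinder centred on the Z-axis, each beam-pole intersection can also be computed in closed form as a ray-cylinder intersection (a quadratic in the ray parameter), rather than with a general nonlinear solver. The following sketch assumes each beam is given as a ray origin and direction; the function name is our own:

```python
import numpy as np

def beam_cylinder_hit(origin, direction, r):
    """First intersection of the ray origin + s*direction (s > 0) with the
    vertical cylinder x^2 + y^2 = r^2 (the pole around the Z-axis), or None."""
    ox, oy = origin[0], origin[1]
    dx, dy = direction[0], direction[1]
    # Substitute the ray into x^2 + y^2 = r^2: a s^2 + b s + c = 0.
    a = dx * dx + dy * dy
    b = 2.0 * (ox * dx + oy * dy)
    c = ox * ox + oy * oy - r * r
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return None                       # beam parallel to the axis, or it misses the pole
    s = (-b - np.sqrt(disc)) / (2.0 * a)  # nearer root: the front surface of the pole
    if s <= 0.0:
        return None                       # intersection behind the LiDAR
    return np.asarray(origin) + s * np.asarray(direction)
```

Sweeping this over all beam directions of a LiDAR model (e.g. the VLP-16's 16 elevation angles at each azimuth step) reproduces the simulated pole points used in the accuracy evaluation.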
When we have all the points on a pole, they can be fitted by a straight line, which approximates the central axis of the pole. The fitted line can be represented by a direction vector and a point on the line. We can use
e = ( ‖(q_max − p) × d‖ + ‖(q_min − p) × d‖ ) / 2    (5)
to measure the error between the fitted line (p, d) and the central axis of the pole, where q_max and q_min are the points with the maximum and minimum Z-coordinates on the pole.
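One plausible reading of (5), used in the sketch below, is the average perpendicular distance from the extreme pole points to the fitted line; the exact form of the metric may differ, and the function name is our own:

```python
import numpy as np

def axis_fit_error(p, d, q_top, q_bottom):
    """Average perpendicular distance from the highest and lowest pole
    points to the fitted line (p, d); d need not be pre-normalized."""
    u = d / np.linalg.norm(d)

    def dist(q):
        # Perpendicular distance from q to the line through p along u.
        return np.linalg.norm(np.cross(q - p, u))

    return 0.5 * (dist(q_top) + dist(q_bottom))
```

Evaluating the error at the two Z-extremes penalizes both offset and tilt of the fitted line relative to the true central axis.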
The smaller the result of (5), the better the fit. Because the VLP-16 is a common LiDAR, we choose its parameters. Quaternions are used to indicate the orientation of the LiDAR, and the results for two reference orientations are listed in Tab. II.
   | r   | Error    | Error
10 | 0.3 | 0.061196 | 0.060038
10 | 0.2 | 0.027408 | 0.028638
10 | 0.1 | 0.007277 | 0.008005
 6 | 0.3 | 0.059603 | 0.058643
 6 | 0.2 | 0.026762 | 0.026904
 6 | 0.1 | 0.006515 | 0.007360
 4 | 0.3 | 0.061122 | 0.060220
 4 | 0.2 | 0.026510 | 0.026064
 4 | 0.1 | 0.006632 | 0.006237
It can be seen in Tab. II that the smaller the pole radius r, the higher the accuracy. Balancing accuracy and convenience, we choose the pole radius accordingly for our later experiments.
IV Experiment
In this section, we divide the evaluation into two separate steps: the calibration is first evaluated on simulated data, and then tested on real sensor data.
IV-A Experiments on Simulated Data
First, we assume that the positions of the poles and LiDARs are known. Then the calibration results are calculated from theoretical data using the approach described above and compared with the ground truth.
We assume that the LiDAR pose and a pole's parameters, its axis line and radius, are known in the world frame. The task is to obtain the pole's points in the LiDAR's coordinate system.
First, we move the pole to the origin of the world frame's coordinate axes. Then we get the new LiDAR pose:
(6) 
where
(7) 
(8) 
We use the result as the translation vector. Then, we rotate the LiDAR onto the X-axis of the pole's coordinate system to get:
(9) 
Denoting its translation vector accordingly, we can get the final LiDAR pose:
(10) 
where
(11) 
(12) 
Now we can obtain one LiDAR's points in the pole coordinate system using the method in the previous section, and then transform these points into the LiDAR's coordinate system. If we have two LiDARs and take the first as the reference, the ground truth is the relative transformation from the second LiDAR to the first.
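The ground-truth extrinsic between the two simulated LiDARs follows from composing their world poses. The sketch below assumes 4x4 homogeneous transformation matrices, with LiDAR 1 as the reference frame; the function name is our own:

```python
import numpy as np

def relative_extrinsic(T_w_l1, T_w_l2):
    """Ground-truth extrinsic between two LiDARs from their 4x4 world
    poses: the transform taking LiDAR-2 coordinates into LiDAR-1
    coordinates, T_l1_l2 = inv(T_w_l1) @ T_w_l2."""
    return np.linalg.inv(T_w_l1) @ T_w_l2
```

Comparing the optimizer's output against this matrix yields the rotation and translation errors reported below.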
We randomly generate the parameters of the two poles and the positions of the LiDARs. The Z-component of each pole's direction vector is larger than 0.9. Regarding each pole as a line, we take the distance between the two poles to be the distance between a pair of corresponding points on the two lines, and this distance is between 1.5 and 4. The LiDARs are fixed in position, and their orientations are represented as quaternions.
For the tests, we perform ten trials on noisy data and compute the mean and standard deviation of the rotation and translation errors. Each LiDAR's points are subjected to zero-mean Gaussian noise with a standard deviation of 0.006 [25]. To measure the difference between our results and the ground truth, we use the formula:
(13) 
where d_min and d_max represent the nearest and farthest distances at which the LiDAR is used, so (13) describes the average error over the LiDAR's working range. The estimated extrinsic parameters of this algorithm are quite close to the ground truth, which shows that the proposed method can successfully calibrate the extrinsic parameters.
IV-B Experiments on Real Data
We calibrate a sensor system consisting of two LiDARs in a corridor. Since the precise extrinsic parameters of the system are unknown and cannot be measured accurately, we use Rényi's Quadratic Entropy (RQE) [26] to evaluate our results.
We take one method for comparison: an algorithm based on planar surfaces [11]. We select an indoor corridor as the calibration environment, since it has enough mutually perpendicular planes for the planar-surfaces approach.
We obtain several results, because different matching relationships lead to different results. Fig. 6 illustrates the four results corresponding to Fig. 4. We use the ICP algorithm to select the most appropriate one: each pair of red and black point clouds in Fig. 6 is registered with ICP, and we then evaluate the difference between each result and the identity matrix. The results are given in Tab. III. Since a larger RQE represents better calibration, we can see that our results are similar to those of the state-of-the-art approach.

Number | Rotation Error | Translation Error
a      | 0.1027         | 0.3949
b      | 0.8393         | 0.9455
c      | 1.6875         | 17.4749
d      | 0.4021         | 8.4218
Approach        | RQE
Planar Surfaces | 0.0824
Proposed        | 0.0822
ICP             | 0.0461
Wrong Result    | 0.0497
IV-C Discussion
This method assumes that the LiDARs share overlapping fields of view. It may fail in several cases: for instance, if the poles are wrongly detected, the entire system will produce a wrong result, and some beams of the LiDAR may return very inaccurate data, which impacts the results. However, compared with the planar-surfaces approach, our method has some advantages:

Poles can be fitted with very few points, whereas the state-of-the-art approach requires more points on the surfaces.

Fewer constraints are required: it suffices that the poles can be detected by both LiDARs.

There is no requirement on how a LiDAR must be installed, so the method can be flexibly adapted to various situations, unlike the planar-surfaces approach, which needs the LiDAR to be installed horizontally on a platform.
V Conclusions
In this paper, we presented an automatic approach for calibrating LiDARs with two poles stickered with retro-reflective tape. Some problems remain; for instance, pole recognition depends on the intensity information from the LiDAR, and this data is not stable, decreasing as the distance increases. Although in some cases its calibration accuracy is lower than that of other algorithms, the method does not depend on the terrain and does not require the platform to move, so calibration tasks can be performed in any scenario, such as an open harbor.
Acknowledgment
This work was supported by the National Natural Science Foundation of China, under grant No. U1713211, the Research Grant Council of Hong Kong SAR Government, China, under Project No. 11210017, No. 21202816, and the Shenzhen Science, Technology and Innovation Commission (SZSTI) under grant JCYJ20160428154842603, awarded to Prof. Ming Liu.
References
 [1] J. A. Brink, R. L. Arenson, T. M. Grist, J. S. Lewin, and D. Enzmann, “Bits and bytes: The future of radiology lies in informatics and information technology,” European Radiology, vol. 27, no. 9, pp. 3647–3651, 2017.
 [2] R. Fan, “Real-time computer stereo vision for automotive applications,” PhD thesis, University of Bristol, 2018.
 [3] R. Fan and N. Dahnoun, “Real-time implementation of stereo vision based on optimised normalised cross-correlation and propagated search range on a GPU,” in 2017 IEEE International Conference on Imaging Systems and Techniques (IST), pp. 1–6, IEEE, 2017.
 [4] Y. Zhu, B. Xue, L. Zheng, H. Huang, M. Liu, and R. Fan, “Real-time, environmentally-robust 3d lidar localization,” arXiv:1910.12728, 2019.
 [5] M. Pereira, D. Silva, V. Santos, and P. Dias, “Self calibration of multiple lidars and cameras on autonomous vehicles,” Robotics and Autonomous Systems, vol. 83, pp. 326–337, 2016.
 [6] R. Fan, J. Jiao, H. Ye, Y. Yu, I. Pitas, and M. Liu, “Key ingredients of self-driving cars,” arXiv:1906.02939, 2019.
 [7] L. Zheng, Y. Zhu, B. Xue, M. Liu, and R. Fan, “Low-cost GPS-aided lidar state estimation and map building,” arXiv:1910.12731, 2019.
 [8] J. Underwood, A. Hill, and S. Scheding, “Calibration of range sensor pose on mobile platforms,” in 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3866–3871, IEEE, 2007.
 [9] C. Gao and J. R. Spletzer, “Online calibration of multiple lidars on a mobile vehicle platform,” in 2010 IEEE International Conference on Robotics and Automation, pp. 279–284, IEEE, 2010.
 [10] E. Shang, X. An, M. Shi, D. Meng, J. Li, and T. Wu, “An efficient calibration approach for arbitrary equipped 3d lidar based on an orthogonal normal vector pair,” Journal of Intelligent and Robotic Systems, vol. 79, no. 1, pp. 21–36, 2015.
 [11] J. Jiao, Q. Liao, Y. Zhu, T. Liu, Y. Yu, R. Fan, L. Wang, and M. Liu, “A novel dual-lidar calibration algorithm using planar surfaces,” arXiv preprint arXiv:1904.12116, 2019.
 [12] N. Muhammad and S. Lacroix, “Calibration of a rotating multi-beam lidar,” in 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5648–5653, IEEE, 2010.
 [13] J. Levinson and S. Thrun, “Automatic online calibration of cameras and lasers,” in Robotics: Science and Systems, vol. 2, 2013.

 [14] V. Martin, M. Z. Michal, and H. Adam, “Calibration of RGB camera with velodyne lidar,” in International Conference on Computer Graphics, Visualization and Computer Vision, 2014.
 [15] G. Pandey, J. R. McBride, S. Savarese, and R. M. Eustice, “Automatic targetless extrinsic calibration of a 3d lidar and camera by maximizing mutual information,” in Twenty-Sixth AAAI Conference on Artificial Intelligence, 2012.
 [16] L. Heng, B. Li, and M. Pollefeys, “Camodocal: Automatic intrinsic and extrinsic calibration of a rig with multiple generic cameras and odometry,” in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1793–1800, IEEE, 2013.
 [17] J. Jiao, Y. Yu, Q. Liao, H. Ye, and M. Liu, “Automatic calibration of multiple 3d lidars in urban environments,” arXiv preprint arXiv:1905.04912, 2019.
 [18] J. Quenzel, N. Papenberg, and S. Behnke, “Robust extrinsic calibration of multiple stationary laser range finders,” in 2016 IEEE International Conference on Automation Science and Engineering (CASE), pp. 1332–1339, IEEE, 2016.

 [19] D. Girardeau-Montaut, “CloudCompare: open source project,” Open-Source Project, 2011.
 [20] F. Pomerleau, F. Colas, R. Siegwart, and S. Magnenat, “Comparing ICP variants on real-world data sets,” Autonomous Robots, vol. 34, no. 3, pp. 133–148, 2013.
 [21] R. Barrett, M. Berry, T. F. Chan, J. Demmel, J. M. Donato, J. Dongarra, V. Eijkhout, R. Pozo, C. Romine, and H. Van der Vorst, “Templates for the solution of linear systems: Building blocks for iterative methods,” Society for Industrial and Applied Mathematics, Philadelphia, USA, pp. 64–68, 1994.
 [22] M. Powell, “A fortran subroutine for solving systems of nonlinear algebraic equations,” Numerical Methods for Nonlinear Algebraic Equations, pp. 150–166, 1970.
 [23] K. Levenberg, “A method for the solution of certain nonlinear problems in least squares,” Quarterly of applied mathematics, vol. 2, no. 2, pp. 164–168, 1944.
 [24] D. W. Marquardt, “An algorithm for least-squares estimation of nonlinear parameters,” Journal of the Society for Industrial and Applied Mathematics, vol. 11, no. 2, pp. 431–441, 1963.
 [25] J. R. Kidd, “Performance evaluation of the velodyne vlp16 system for surface feature surveying,” in Canadian Hydrographic Conference, 2016.
 [26] M. Sheehan, A. Harrison, and P. Newman, “Self-calibration for a 3d laser,” The International Journal of Robotics Research, vol. 31, no. 5, pp. 675–687, 2012.