
Automatic Calibration of Dual-LiDARs Using Two Poles Stickered with Retro-Reflective Tape

11/02/2019
by Bohuan Xue, et al.

Multi-LiDAR systems have been prevalently applied in modern autonomous vehicles to render a broad view of the environments. The rapid development of 5G wireless technologies has brought a breakthrough for current cellular vehicle-to-everything (C-V2X) applications. Therefore, a novel localization and perception system has been proposed, in which multiple LiDARs are mounted around cities for autonomous vehicles. However, the existing calibration methods require specific hard-to-move markers, ego-motion, or good initial values given by users. In this paper, we present a novel approach that enables automatic multi-LiDAR calibration using two poles stickered with retro-reflective tape. This method does not depend on prior environmental information, initial values of the extrinsic parameters, or movable platforms such as a car. We analyze the LiDAR-pole model, verify the feasibility of the algorithm on simulated data, and present a simple method to measure the calibration errors w.r.t. the ground truth. Experimental results demonstrate that our approach achieves better flexibility and higher accuracy compared with the state-of-the-art approach.


I Introduction

Unmanned driving technology is becoming more and more popular [1]. Nowadays, 5G technology is accelerating the development of cellular vehicle-to-everything (C-V2X) technology [2], in which unmanned vehicles need to perceive various objects to navigate and avoid obstacles [3]. In such C-V2X systems, LiDARs are used in addition to cameras, because illumination can affect the image quality of cameras [4], and the position estimation of feature points depends on the accuracy of the cameras' intrinsic and extrinsic parameters [5]. However, LiDARs have several limitations. Firstly, LiDARs have blind areas [6]. For example, if vehicles are surrounded by tall trucks, they will lose most of the observation information. Locating an unmanned system requires the point cloud information around the vehicle, and point-cloud-based localization has several shortcomings: for various reasons, the prior 3D surfel maps may change, for example due to road construction or vegetation pruning at the roadside. Secondly, LiDARs are very expensive, and multi-LiDAR solutions are costly [7].

Mounting LiDARs on lampposts, as shown in Fig. 1, can solve the aforementioned problems. Lamppost LiDARs can provide a real-time surfel map, so moving vehicles can directly receive 3D information from the lampposts through the 5G network.

However, the extrinsic parameters of LiDARs mounted on lampposts or other urban facilities are unknown, and we need to know their positions in the world frame to obtain a complete real-time 3D surfel map of the city. To solve these problems, the calibration of the multi-LiDAR configuration is necessary.

Fig. 1: LiDARs mounted on lampposts. Because the LiDARs are mounted high, they can provide additional information about the ground. Moreover, using multiple LiDARs leaves fewer blind spots.

Over the past years, many methods for calibrating LiDARs have been proposed, but they have several drawbacks. Some rely on an additional sensor, need a good initialization provided by users, or need structured objects such as walls, which may not exist on docks and in other scenarios. Motion-based approaches require the LiDARs to be installed on mobile platforms such as cars, so these methods are not feasible in the case shown in Fig. 1. Tracking-based methods, for example, need moving objects to be tracked, but the accuracy of the tracking affects the calibration result, and such methods require significant human intervention.

In this paper, we propose an automatic calibration approach for the proposed multi-LiDAR system. First, we place the two poles on the ground. After that, we extract the poles from the LiDAR data using intensity information. Then we use the identified poles to construct constraints, so that the calibration problem is transformed into an optimization problem. Finally, we provide a way to choose the correct result, because there are several locally optimal solutions. We conduct a variety of experiments to show the reliability and accuracy of our proposed approach. The contributions of this paper are summarized as follows:

  • We propose a simple method that calculates the positions of the points a LiDAR generates on an arbitrary pole, so that the errors can be analyzed theoretically.

  • We give a method for calibrating LiDARs using two non-parallel poles stickered with retro-reflective tape. The method obtains good results and can be used in scenarios where the LiDARs cannot be moved, as shown in Fig. 1.

  • We provide extensive evaluation experiments on simulated and real-world datasets.

The remainder of the paper is organized as follows. Section II gives a review of related works. The methodology of our approach is described in Section III. Implementation and tests are shown in Section IV. Finally, Section V concludes this work.

II Related Work

There are two main types of LiDAR calibration approaches: appearance-based and motion-based. The former turns calibration into a registration problem and usually needs fixed markers or prior knowledge of the environment. The latter utilizes the constraints of the sensors' motion to recover the extrinsic parameters and is formulated as the well-known hand-eye calibration problem. The accuracy of such approaches depends on the accuracy of the estimated motions, which is easily affected by accumulated drift.

Underwood et al. [8] propose a calibration method that needs one vertical pole with retro-reflective tape and a sensor platform limited to a planar surface. The platform must be moved so that the sensors can observe the environment from different headings. Gao et al. [9] use retro-reflective targets placed in scenes to calibrate a multi-LiDAR system; this approach needs the position of the vehicle platform and an initial calibration estimate. All these approaches need a movable platform, so they are hard to apply in a fixed-LiDAR system like that in Fig. 1.

Shang et al. [10] present a calibration method for 3D LiDARs that only needs an orthogonal normal-vector pair, generated from a planar ground surface and a vertical wall. Jiao et al. [11] use three linearly independent planar surfaces to find correspondences for automatic LiDAR calibration, but the requirement of three planar surfaces is very demanding. Muhammad et al. [12] propose a method for multi-beam LiDARs based on an optimization that estimates the LiDAR calibration parameters from a coarse initial calibration. The drawback of all these calibration approaches is that they need a specific environment, a mobile platform, or a reliable initial value, which are hard to obtain in some situations. Our approach only depends on two non-parallel poles stickered with retro-reflective tape, which are easy to place in any situation. We need neither a movable LiDAR platform nor an initial calibration estimate, which makes our approach more general and practical.

Many LiDAR-camera calibration algorithms have also emerged. Levinson et al. [13] introduce techniques that enable online, automatic camera-laser calibration in arbitrary environments, using a probabilistic monitoring algorithm. Martin et al. [14] present a pipeline for mutual pose and orientation estimation of a LiDAR-camera system using a coarse-to-fine approach. Pandey et al. [15] report a mutual information (MI) algorithm; using MI as the registration criterion works without any specific calibration targets.

Motion-based approaches all require that the LiDARs can be moved. Heng et al. [16] publish CamOdoCal, a versatile tool that does not need any prior knowledge about the rig setup. Jiao et al. [17] align the estimated motions of each sensor as an initialization and then refine the result with an appearance-based method. In the case shown in Fig. 1, motion-based approaches are all infeasible because the LiDARs cannot be moved. Quenzel et al. [18] present a method that uses pose-graph optimization to calibrate the extrinsic parameters of LiDARs; however, this method needs each object to be clustered exactly into one segment.

Many of the above techniques rely on one or more assumptions and are not applicable in our case. Our approach only needs two poles with retro-reflective tape, which helps us to recognize the poles. It involves neither ego-motion nor prior assumptions about the environment; we only need to place the two poles, non-parallel to each other, in the LiDARs' overlapping area.

III Methodology

We first place the two poles so that they are not parallel to each other. In this section, we provide the details of the process.

III-A Pole Extraction and Representation

Because the poles are stickered with retro-reflective tape, it is easy to identify them in the point cloud using a simple threshold on the intensity. When the distance between the LiDAR and the pole is about 5 meters, the threshold can be chosen from the values presented in Tab. I.

LiDAR Manufacturer   Threshold of Intensity
Velodyne             230
Hesai                200
Leishen              200
RoboSense            200
TABLE I: Parameter selection for different LiDARs

Even though the threshold filters out many irrelevant points, some outliers remain. These can be removed with any clustering algorithm, because the outliers are few in the majority of cases.
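
As a concrete illustration, the following minimal sketch implements this two-step extraction, an intensity gate followed by DBSCAN clustering; the threshold comes from Tab. I, while `eps` and `min_samples` are assumed values that would be tuned to the scene.

```python
# Hypothetical sketch of the pole-extraction step described above.
import numpy as np
from sklearn.cluster import DBSCAN

def extract_pole_points(points, intensities, threshold=230):
    """points: (N, 3) array; intensities: (N,) array; threshold from Tab. I."""
    # 1. Intensity gate: retro-reflective tape returns very high intensity.
    candidates = points[intensities >= threshold]

    # 2. Cluster the survivors; stray high-intensity returns end up in tiny
    #    clusters (or as DBSCAN noise, label -1) and are discarded.
    labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(candidates)
    clusters = [candidates[labels == k] for k in set(labels) if k != -1]

    # Keep the two largest clusters, one per pole.
    clusters.sort(key=len, reverse=True)
    return clusters[:2]
```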

The pole point cloud can be represented by a line, written as the vector equation $\mathbf{x} = \mathbf{a} + t\,\mathbf{n}$, where $t$ is a scalar. We denote this line as $L(\mathbf{a}, \mathbf{n})$, where $\mathbf{a}$ is a point the line passes through and $\mathbf{n}$ denotes the direction of the line. Another way of expressing the pole point cloud is as the set $\mathcal{P} = \{\mathbf{p}_i\}$, where $\mathbf{p}_i$ is the $i$-th point in the point cloud. For convenience, $\mathbf{T}^l_w$ is used to represent the transformation from the world frame to the LiDAR frame, and $L^l$ to represent the pole captured by the LiDAR.

III-B Initialization for Calibration

As shown in Fig. 2, we can place the poles almost arbitrarily. Considering the type of LiDAR, such as the VLP-16 (a 16-channel mechanical LiDAR from Velodyne, one of the most common models) or the Pandar64 (a 64-channel mechanical LiDAR from Hesai Technology), we should not arrange a pole horizontally, because the LiDAR's beams may fail to scan it, as can be seen in Fig. 3.

Fig. 2: The poles stickered with retro-reflective tape.
Fig. 3: The LiDAR beams and the pole. The orange line indicates the pole, and the black lines indicate the LiDAR's beams. A horizontal pole can cause the beams to miss the pole entirely; placing the pole vertically yields more useful data.

LiDAR calibration is accomplished by aligning the poles observed by the different LiDARs. Essentially, we want to find a rotation matrix $\mathbf{R}$ and a translation vector $\mathbf{t}$ that describe the pose relationship between the two LiDARs. We represent the pole seen by the two LiDARs in the two ways introduced above: as a line $L(\mathbf{a}, \mathbf{n})$ in the reference LiDAR's frame and as a point cloud $\mathcal{P}$ in the other LiDAR's frame, respectively. These parameters are then acquired by solving a least-squares problem:

$\mathbf{R}^*, \mathbf{t}^* = \operatorname{arg\,min}_{\mathbf{R}, \mathbf{t}} \sum_{\mathbf{p}_i \in \mathcal{P}} \big\| (\mathbf{I} - \mathbf{n}\mathbf{n}^\top)(\mathbf{R}\mathbf{p}_i + \mathbf{t} - \mathbf{a}) \big\|_2^2$   (1)

where $\|\cdot\|_2$ indicates the $\ell_2$-norm of a vector. For the above case, there is only one line $L$ and one point cloud $\mathcal{P}$, so we get infinitely many solutions. If there are two lines and two point clouds, and they are not parallel, the number of solutions is greatly reduced. So we introduce the correspondence variables $s_{jk} \in \{0, 1\}$: if point cloud $\mathcal{P}_j$ corresponds to line $L_k$, then $s_{jk} = 1$; otherwise $s_{jk} = 0$. We subsequently extend (1) to

$\mathbf{R}^*, \mathbf{t}^* = \operatorname{arg\,min}_{\mathbf{R}, \mathbf{t}} \sum_{j,k} s_{jk} \sum_{\mathbf{p}_i \in \mathcal{P}_j} \big\| (\mathbf{I} - \mathbf{n}_k\mathbf{n}_k^\top)(\mathbf{R}\mathbf{p}_i + \mathbf{t} - \mathbf{a}_k) \big\|_2^2$   (2)
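
The following sketch shows one way to solve (2) once the correspondences $s_{jk}$ are fixed, using SciPy's nonlinear least-squares solver with a rotation-vector parameterization; the function and variable names are illustrative, not from the paper.

```python
# Hedged sketch: minimize the point-to-line cost of (2) for a fixed
# correspondence, i.e. point_clouds[j] is matched to lines[j] = (a, n).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(x, point_clouds, lines):
    """x = [rotation vector (3), translation (3)]."""
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    t = x[3:]
    res = []
    for pts, (a, n) in zip(point_clouds, lines):
        q = pts @ R.T + t - a               # points relative to line origin
        res.append(q - np.outer(q @ n, n))  # component perpendicular to n
    return np.concatenate(res).ravel()

def calibrate(point_clouds, lines):
    sol = least_squares(residuals, np.zeros(6), args=(point_clouds, lines))
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```

Here `n` is assumed to be a unit vector, so subtracting the projection onto `n` leaves exactly the point-to-line residual of (2).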

III-C Determination of Correspondence Relation

If we have an excellent prior, it is easy to determine the correspondence relationship $s_{jk}$, but in many cases such a prior is hard to get. Although a reliable initial pose can be obtained by manually adjusting the point clouds in editing tools like CloudCompare [19], doing so is laborious. Each pairing relationship has four different solutions, and there are two different correspondence relationships, so there are eight different situations in total. Fig. 4 illustrates the four situations that we can match. We can therefore enumerate all the corresponding relationships and then pick out a reasonable result.

(a)
(b)
(c)
(d)
Fig. 4: Four different correspondence relationships. The red, blue, and green lines represent the three axes of a coordinate frame. We assume that the green and blue directions are the two poles' directions. The ground truth is (a); by rotating and translating the ground truth, the three other convergent results can be obtained.

Although it is easy for human beings to judge what is a reasonable result, this is not easy for a computer, because it lacks the semantic information needed to find the correspondence. There is a solution: we can use the Iterative Closest Point (ICP) algorithm [20] to register the two point clouds under each candidate and choose the result whose ICP correction is closest to the identity. Let the ICP result be a rotation $\mathbf{R}_{icp}$ and a translation $\mathbf{t}_{icp}$; the rotation error can be represented as $\| \log(\mathbf{R}_{icp})^\vee \|_2$, where $\log(\cdot)^\vee$ transforms $SO(3)$ to $\mathbb{R}^3$, and similarly the translation error is $\| \mathbf{t}_{icp} \|_2$. In most situations, the correct result's rotation and translation errors are both minimal. The relevant results can be seen in Sect. IV.
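
One way to implement this selection rule is sketched below with Open3D's ICP; the distance threshold and helper names are assumptions, not the authors' code.

```python
# Hedged sketch: run ICP from each candidate calibration and keep the one
# whose ICP correction is closest to the identity (Sec. III-C).
import numpy as np
import open3d as o3d

def pick_solution(source, target, candidates, max_dist=0.5):
    """source, target: o3d.geometry.PointCloud; candidates: list of 4x4."""
    best, best_err = None, np.inf
    for T in candidates:
        icp = o3d.pipelines.registration.registration_icp(
            source, target, max_dist, T,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        delta = icp.transformation @ np.linalg.inv(T)   # ICP's correction
        cos_a = (np.trace(delta[:3, :3]) - 1.0) / 2.0   # residual rotation
        err = np.arccos(np.clip(cos_a, -1.0, 1.0)) + \
              np.linalg.norm(delta[:3, 3])
        if err < best_err:
            best, best_err = T, err
    return best
```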

III-D Accuracy Evaluation

We next address why a pole can be represented by a line and how to measure the resulting error. First, assume the pole is placed vertically at the origin of the coordinate axes, with its central axis on the Z-axis, and the LiDAR is placed on the X-axis with arbitrary orientation. The parameters are only the LiDAR's position and direction; the direction is a vector $\mathbf{v}$ perpendicular to the central plane of the LiDAR. With the radius of the pole denoted $r$, we can get the equation of the cylinder:

$x^2 + y^2 = r^2$   (3)

In addition, since the pole is a finite cylinder, the value of $z$ is restricted to a range. Then, for a point $\mathbf{p}$ on the pole whose coordinates satisfy (3), we can get

$\mathbf{p} = \mathbf{o} + s\,\mathbf{d}(\alpha, \omega), \quad f(\mathbf{p}) = 0$   (4)

where the function $f$ corresponds with (3), $\mathbf{o}$ is the LiDAR's position mentioned above, and $(\alpha, \omega)$ are the angular parameters (azimuth and elevation) of each beam. When the information is sufficient, it is easy to get the values of the variables using an algorithm for solving nonlinear equation systems, such as preconditioned conjugate gradients [21], the trust-region-dogleg algorithm [22], or the Levenberg-Marquardt method [23], [24]. Therefore, given a LiDAR position and its beam parameters, we can use (4) to infer the coordinates of all the points on the pole. The result can be seen in Fig. 5.
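
For the vertical cylinder of (3), the system (4) even admits a closed-form solution: substituting the beam ray into (3) gives a quadratic in the range $s$. The sketch below solves it under that assumption (the finite z-range check of the pole is omitted for brevity; names are illustrative).

```python
# Sketch: intersect one LiDAR beam with the cylinder x^2 + y^2 = r^2.
import numpy as np

def beam_cylinder_hit(o, d, r):
    """o: LiDAR position (3,), d: unit beam direction (3,), r: pole radius.
    Returns the nearest intersection point, or None if the beam misses."""
    a = d[0]**2 + d[1]**2
    b = 2.0 * (o[0]*d[0] + o[1]*d[1])
    c = o[0]**2 + o[1]**2 - r**2
    disc = b*b - 4.0*a*c
    if a == 0.0 or disc < 0.0:
        return None                       # beam parallel to axis, or a miss
    s = (-b - np.sqrt(disc)) / (2.0*a)    # smaller positive root = near side
    return o + s*d if s > 0.0 else None
```

A beam direction `d` would be built from the beam's elevation and azimuth angles, e.g. `(cos w * cos a, cos w * sin a, sin w)`.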

(a)
(b)
Fig. 5: The points on the cylinder. The coloured lines are the LiDAR's beams that touch the cylinder; the LiDAR's pose is represented as a quaternion. Fig. (a) shows the normal perspective, from which we can see the points on the cylinder. Fig. (b) is the top view, in which all points fall precisely on the edge of the cylinder.

When we get all the points on a pole, these points can be fitted with a straight line, which approximates the central axis of the pole. The fitted line can be represented by a direction vector and a point on the line. We can use

$e = \frac{1}{2} \big( d(\hat{L}(z_{\max}), Z) + d(\hat{L}(z_{\min}), Z) \big)$   (5)

to measure the error between the fitted line $\hat{L}$ and the central axis $Z$ of the pole, where $\hat{L}(z)$ denotes the point on the fitted line at height $z$, and $z_{\max}$ and $z_{\min}$ are the maximum and minimum Z-coordinates of the points on the pole.
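
A sketch of this evaluation, under the reconstruction of (5) given above: the line is fitted to the simulated pole hits by SVD, and its deviation from the true axis (the Z-axis) is measured at the extreme heights of the data.

```python
# Sketch of the accuracy check (5); helper names are illustrative.
import numpy as np

def fit_line(points):
    """Least-squares 3D line fit: returns (centroid, unit direction)."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[0]                            # principal direction

def axis_error(points):
    c, v = fit_line(points)
    z_max, z_min = points[:, 2].max(), points[:, 2].min()
    at = lambda z: c + v * (z - c[2]) / v[2]   # line point at height z
    dist = lambda p: np.hypot(p[0], p[1])      # distance to the Z-axis
    return 0.5 * (dist(at(z_max)) + dist(at(z_min)))
```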

The smaller the result of (5), the better the fit. Because the VLP-16 is a common LiDAR, we choose its beam parameters for this analysis. Quaternions are used to indicate the orientation of the LiDAR; we evaluate two reference orientations and report the corresponding errors in Tab. II.

d      r      Error (q1)     Error (q2)
10     0.3    0.061196       0.060038
10     0.2    0.027408       0.028638
10     0.1    0.007277       0.008005
6      0.3    0.059603       0.058643
6      0.2    0.026762       0.026904
6      0.1    0.006515       0.007360
4      0.3    0.061122       0.060220
4      0.2    0.026510       0.026064
4      0.1    0.006632       0.006237
TABLE II: Error from different parameters (d: LiDAR-to-pole distance, r: pole radius; the two error columns correspond to the two reference orientations q1 and q2)

It can be seen in Tab. II that the smaller the value of $r$, the higher the accuracy. Balancing accuracy against convenience, we choose the pole radius for our later experiments accordingly.

IV Experiment

In this section, we divide the evaluation into two separate steps: the calibration is first validated on simulated data, and then the approach is tested on real sensor data.

IV-A Experiments on Simulated Data

First, we assume that the positions of the poles and LiDARs are known. Then the calibration results are computed from the synthetic data using the approach described above and compared with the ground truth.

We assume that the LiDAR pose $\mathbf{T}$ and a pole's parameters, its line $L(\mathbf{a}, \mathbf{n})$ and radius $r$, are known in the world frame. The task is to get the points in the LiDAR's coordinate system.

First, we move the pole to the origin of the world frame so that its central axis coincides with the Z-axis. Then we get the new LiDAR pose $\mathbf{T}_1$:

$\mathbf{T}_1 = \begin{bmatrix} \mathbf{R}_1 & \mathbf{t}_1 \\ \mathbf{0}^\top & 1 \end{bmatrix} \mathbf{T}$   (6)

where

$\mathbf{R}_1 \mathbf{n} = [0, 0, 1]^\top$   (7)
$\mathbf{t}_1 = -\mathbf{R}_1 \mathbf{a}$   (8)

We use $[x_1, y_1, z_1]^\top$ to denote the translation vector of $\mathbf{T}_1$. Then, we rotate the LiDAR into the XZ-plane of the pole coordinate system to get LiDAR pose $\mathbf{T}_2$:

$\mathbf{T}_2 = \begin{bmatrix} \mathbf{R}_z(\theta) & \mathbf{0} \\ \mathbf{0}^\top & 1 \end{bmatrix} \mathbf{T}_1$   (9)

where $\theta = -\operatorname{atan2}(y_1, x_1)$. Denoting the translation vector of $\mathbf{T}_2$ as $\mathbf{t}_2$, we can get the final LiDAR pose $\mathbf{T}_3$, which lies on the X-axis:

$\mathbf{T}_3 = \begin{bmatrix} \mathbf{I} & \mathbf{t}_3 \\ \mathbf{0}^\top & 1 \end{bmatrix} \mathbf{T}_2$   (10)

where

$\mathbf{t}_2 = [x_2, y_2, z_2]^\top$   (11)
$\mathbf{t}_3 = [0, 0, -z_2]^\top$   (12)

Now we can get one LiDAR's points in the pole coordinate system using the method described in the previous section, and then transform these points into the LiDAR coordinate system. If we have two LiDARs with world poses $\mathbf{T}^{(1)}$ and $\mathbf{T}^{(2)}$, taking LiDAR 1 as the reference, the ground truth is $\mathbf{T}_{gt} = (\mathbf{T}^{(1)})^{-1} \mathbf{T}^{(2)}$.
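
For reference, this bookkeeping takes only a few lines with 4x4 homogeneous matrices; the sketch below assumes each pose maps LiDAR coordinates to world coordinates, and the names are illustrative.

```python
# Sketch: simulation ground truth and world-to-LiDAR point transfer.
import numpy as np

def ground_truth_extrinsics(T1, T2):
    """Extrinsics of LiDAR 2 expressed in the frame of LiDAR 1."""
    return np.linalg.inv(T1) @ T2

def to_lidar_frame(points_world, T):
    """Map simulated pole points from the world into a LiDAR frame."""
    hom = np.hstack([points_world, np.ones((len(points_world), 1))])
    return (np.linalg.inv(T) @ hom.T).T[:, :3]
```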

We randomly generate the parameters of the two poles and the positions of the LiDARs. The z-component of each pole's directional vector is larger than 0.9. If we treat each pole as a line, the distance between the two poles is taken as the distance between one sampled point on each line, and this distance is between 1.5 and 4. The sampled points lie along the y-axis. The LiDARs themselves are fixed, with given positions and with orientations represented as quaternions.

For the tests, we perform ten trials on the noisy data and compute the mean as well as the standard deviation of the rotation and translation errors. Each LiDAR's points are subjected to zero-mean Gaussian noise with a standard deviation of 0.006 [25].

To measure the difference between our results and the ground truth, we use the formula:

$e = \| \hat{\mathbf{t}} - \mathbf{t}_{gt} \|_2 + \frac{d_{\min} + d_{\max}}{2} \, \| \log( \hat{\mathbf{R}}^\top \mathbf{R}_{gt} )^\vee \|_2$   (13)

where $d_{\min}$ and $d_{\max}$ represent the nearest and farthest working distances of the LiDAR in use, which can be chosen according to the sensor's specifications. Therefore, (13) describes the average error incurred when using the LiDAR. The estimated extrinsic parameters of this algorithm are quite close to the ground truth, which shows that the proposed method can successfully calibrate the extrinsic parameters.
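
Assuming the reconstruction of (13) above, the metric can be computed as follows (a sketch, not the authors' code):

```python
# Sketch of the evaluation metric (13): translation error plus the
# rotation angle scaled by the mid working range, i.e. the average
# displacement seen by a point at that range.
import numpy as np

def calibration_error(T_est, T_gt, d_min, d_max):
    delta = np.linalg.inv(T_gt) @ T_est
    cos_a = (np.trace(delta[:3, :3]) - 1.0) / 2.0
    angle = np.arccos(np.clip(cos_a, -1.0, 1.0))
    return np.linalg.norm(delta[:3, 3]) + 0.5 * (d_min + d_max) * angle
```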

IV-B Experiments on Real Data

We calibrate a sensor system consisting of two LiDARs in a corridor. Since the precise extrinsic parameters of the system are unknown and cannot be measured accurately, we use Rényi's Quadratic Entropy (RQE) [26] to evaluate our results.
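
For context, RQE rewards "crisp" merged point clouds: the better the alignment, the higher the pairwise kernel correlation of the combined cloud. A brute-force sketch in the spirit of [26] is given below; the kernel bandwidth `sigma` is an assumed parameter, and the O(N^2) pairwise sum would be subsampled or accelerated with a KD-tree on real clouds.

```python
# Hedged sketch of an RQE-style crispness score (larger = better aligned).
import numpy as np

def rqe_score(points, sigma=0.05):
    """points: merged (N, 3) cloud from both LiDARs after calibration."""
    diff = points[:, None, :] - points[None, :, :]
    sq_dist = np.sum(diff**2, axis=-1)
    # Gaussian kernel with variance 2*sigma^2 (each point's kernel
    # correlated with every other point's).
    return np.exp(-sq_dist / (4.0 * sigma**2)).mean()
```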

(a)
(b)
(c)
(d)
Fig. 6: Four different correspondence relationships. The red point cloud is the reference from LiDAR 1, and the blue one is the aligned point cloud from LiDAR 2. Fig. 6(a) is the reasonable result; all the others are incorrect.
(a)
(b)
Fig. 7: Stacking of different results. The reference point cloud is red, the green point cloud is generated by the planar-surfaces approach, and the blue one is our method's result. Fig. 7(a) is the normal view and Fig. 7(b) the top view. We can see that the results are similar.

We take one method for comparison: an algorithm based on planar surfaces [11]. We select an indoor corridor as the calibration environment, since it has enough mutually perpendicular planes to run the planar-surfaces approach.

We get several results because different matching relationships lead to different solutions. Fig. 6 illustrates the four results corresponding to the four situations of Fig. 4. We use the ICP algorithm to select the most appropriate one: each pair of point clouds in Fig. 6 is registered with ICP, and we then evaluate the difference between the ICP result and the identity matrix. These differences are given in Tab. III. Since a larger RQE represents better calibration, Tab. IV shows that our results are similar to those of the state-of-the-art approach.

Number Rotation Error Translation Error
a 0.1027 0.3949
b 0.8393 0.9455
c 1.6875 17.4749
d 0.4021 8.4218
TABLE III: Difference between the ICP results in Fig. 6 and the identity matrix
Approaches RQE
Planar Surfaces 0.0824
Proposed 0.0822
ICP 0.0461
Wrong Result 0.0497
TABLE IV: Evaluation of different approaches by RQE

IV-C Discussion

This method makes one assumption: the LiDARs share overlapping fields of view. The proposed method may fail in several cases. For instance, if the poles are wrongly detected, the entire system will produce a wrong result; likewise, some beams of the LiDAR may return very inaccurate data, which will impact the results. However, compared with the planar-surfaces approach, our proposed method has some advantages:

  • Poles can be fitted with very few points, but the state-of-the-art approach requires more points on the surfaces.

  • Fewer constraints are required, as long as the poles can be detected by two LiDARs.

  • There is no requirement on how a LiDAR must be installed, so the method can be flexibly adapted to various situations, unlike the planar surfaces approach, which requires the LiDAR to be installed horizontally on a platform.

V Conclusions

In this paper, we presented an automatic approach for calibrating LiDARs with two poles stickered with retro-reflective tape. There are still some problems with this method; for instance, the pole recognition depends on the intensity information from the LiDAR, which is not stable and decreases as the distance increases. Although in some cases its calibration accuracy is lower than that of other algorithms, it does not depend on the terrain and does not require the platform to move, so calibration tasks can be performed in any scenario, such as an open harbor.

Acknowledgment

This work was supported by the National Natural Science Foundation of China, under grant No. U1713211, the Research Grant Council of Hong Kong SAR Government, China, under Project No. 11210017, No. 21202816, and the Shenzhen Science, Technology and Innovation Commission (SZSTI) under grant JCYJ20160428154842603, awarded to Prof. Ming Liu.

References

  • [1] J. A. Brink, R. L. Arenson, T. M. Grist, J. S. Lewin, and D. Enzmann, “Bits and bytes: The future of radiology lies in informatics and information technology,” European Radiology, vol. 27, no. 9, pp. 3647–3651, 2017.
  • [2] R. Fan, “Real-time computer stereo vision for automotive applications,” PhD thesis, University of Bristol, 2018.
  • [3] R. Fan and N. Dahnoun, “Real-time implementation of stereo vision based on optimised normalised cross-correlation and propagated search range on a GPU,” in 2017 IEEE International Conference on Imaging Systems and Techniques (IST), pp. 1–6, IEEE, 2017.
  • [4] Y. Zhu, B. Xue, L. Zheng, H. Huang, M. Liu, and R. Fan, “Real-time, environmentally-robust 3d lidar localization,” arXiv:1910.12728, 2019.
  • [5] M. Pereira, D. Silva, V. Santos, and P. Dias, “Self calibration of multiple lidars and cameras on autonomous vehicles,” Robotics and Autonomous Systems, vol. 83, pp. 326–337, 2016.
  • [6] R. Fan, J. Jiao, H. Ye, Y. Yu, I. Pitas, and M. Liu, “Key ingredients of self-driving cars,” arXiv:1906.02939, 2019.
  • [7] L. Zheng, Y. Zhu, B. Xue, M. Liu, and R. Fan, “Low-cost GPS-aided lidar state estimation and map building,” arXiv:1910.12731, 2019.
  • [8] J. Underwood, A. Hill, and S. Scheding, “Calibration of range sensor pose on mobile platforms,” in 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3866–3871, IEEE, 2007.
  • [9] C. Gao and J. R. Spletzer, “On-line calibration of multiple lidars on a mobile vehicle platform,” in 2010 IEEE International Conference on Robotics and Automation, pp. 279–284, IEEE, 2010.
  • [10] E. Shang, X. An, M. Shi, D. Meng, J. Li, and T. Wu, “An efficient calibration approach for arbitrary equipped 3-d lidar based on an orthogonal normal vector pair,” Journal of Intelligent and Robotic Systems, vol. 79, no. 1, pp. 21–36, 2015.
  • [11] J. Jiao, Q. Liao, Y. Zhu, T. Liu, Y. Yu, R. Fan, L. Wang, and M. Liu, “A novel dual-lidar calibration algorithm using planar surfaces,” arXiv preprint arXiv:1904.12116, 2019.
  • [12] N. Muhammad and S. Lacroix, “Calibration of a rotating multi-beam lidar,” in 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5648–5653, IEEE, 2010.
  • [13] J. Levinson and S. Thrun, “Automatic online calibration of cameras and lasers,” in Robotics: Science and Systems, vol. 2, 2013.
  • [14] V. Martin, M. Z. Michal, and H. Adam, “Calibration of rgb camera with velodyne lidar,” in International Conference on Computer Graphics, Visualization and Computer Vision, 2014.
  • [15] G. Pandey, J. R. McBride, S. Savarese, and R. M. Eustice, “Automatic targetless extrinsic calibration of a 3d lidar and camera by maximizing mutual information,” in Twenty-Sixth AAAI Conference on Artificial Intelligence, 2012.
  • [16] L. Heng, B. Li, and M. Pollefeys, “Camodocal: Automatic intrinsic and extrinsic calibration of a rig with multiple generic cameras and odometry,” in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1793–1800, IEEE, 2013.
  • [17] J. Jiao, Y. Yu, Q. Liao, H. Ye, and M. Liu, “Automatic calibration of multiple 3d lidars in urban environments,” arXiv preprint arXiv:1905.04912, 2019.
  • [18] J. Quenzel, N. Papenberg, and S. Behnke, “Robust extrinsic calibration of multiple stationary laser range finders,” in 2016 IEEE International Conference on Automation Science and Engineering (CASE), pp. 1332–1339, IEEE, 2016.
  • [19] D. Girardeau-Montaut, “CloudCompare: open source project,” OpenSource Project, 2011.
  • [20] F. Pomerleau, F. Colas, R. Siegwart, and S. Magnenat, “Comparing ICP variants on real-world data sets,” Autonomous Robots, vol. 34, no. 3, pp. 133–148, 2013.
  • [21] R. Barrett, M. Berry, T. F. Chan, J. Demmel, J. M. Donato, J. Dongarra, V. Eijkhout, R. Pozo, C. Romine, and H. Van der Vorst, “Templates for the solution of linear systems: Building blocks for iterative methods,” Society for Industrial and Applied Mathematics, Philadelphia, USA, pp. 64–68, 1994.
  • [22] M. Powell, “A fortran subroutine for solving systems of nonlinear algebraic equations,” Numerical Methods for Nonlinear Algebraic Equations, pp. 150–166, 1970.
  • [23] K. Levenberg, “A method for the solution of certain non-linear problems in least squares,” Quarterly of applied mathematics, vol. 2, no. 2, pp. 164–168, 1944.
  • [24] D. W. Marquardt, “An algorithm for least-squares estimation of nonlinear parameters,” Journal of the society for Industrial and Applied Mathematics, vol. 11, no. 2, pp. 431–441, 1963.
  • [25] J. R. Kidd, “Performance evaluation of the velodyne vlp-16 system for surface feature surveying,” in Canadian Hydrographic Conference, 2016.
  • [26] M. Sheehan, A. Harrison, and P. Newman, “Self-calibration for a 3d laser,” The International Journal of Robotics Research, vol. 31, no. 5, pp. 675–687, 2012.