I. Introduction
Many robotics applications rely on a camera and a laser scanner, rigidly attached together, as the main sensors for localization, navigation, and mapping of the surrounding environment. Effective fusion of laser measurements with camera images requires knowledge of the extrinsic transformation between the two sensors.
The process of calibrating either a 2D or 3D laser scanner to a camera by computing the rigid transformation between their reference frames is known as camera-laser calibration. Various methods have been developed for this [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], including free toolboxes (e.g. [8] for 2D lasers and [7, 11] for 3D lasers). Although a few checkerboard-free calibration methods have been developed in the literature (see e.g. [2, 5, 4, 6]), a very popular approach is still to place one or multiple known targets (e.g. planar checkerboards) in the scene such that they are observed in both the image and the laser scans (see e.g. [1, 3, 10, 11, 7, 8, 9, 13, 14, 15]). We take the latter approach in this paper. Here, one or multiple sets of laser-image pairs are collected. The checkerboard corners associated with each image are identified and the checkerboard plane is obtained in the camera coordinate frame. Then, the laser scan points that hit the checkerboard plane in each image are extracted and used to solve a nonlinear optimization that estimates the extrinsic transformation between the camera and the laser scanner.
An important part of the process is checkerboard extraction, in which we must determine which laser scan points fall on the checkerboard and which lie on the background. Manual extraction of those inliers is a time-consuming and inaccurate process, although some popular toolboxes still rely on it (see e.g. [7]). Automatic checkerboard extraction has been poorly studied in the literature, and [8] appears to be the only reference that investigates this problem in detail. Other references only briefly discuss checkerboard extraction as a block in the pipeline of the camera-laser calibration process [9, 3, 11]. All of these methods are heuristic in nature, imposing assumptions on the environment or the camera-laser setup to identify candidate laser points that might correspond to the checkerboard. Those candidates are then passed to a randomised hypothesise-and-test search (e.g. RANSAC [16]) to eliminate the outliers and identify the correspondence between the laser points and the checkerboards. The imposed assumptions depend on whether a 2D or 3D laser scanner is employed and are specific to particular situations. For instance, where a 2D laser scanner is employed,
[9, 8] and [3, Section 4] rely on range discontinuities to partition the range measurements into line segments that might correspond to a planar checkerboard. Additionally, [8] assumes that the background is stationary in most of the scans and relies on the frequency of occurrence of range measurements. Assumptions on the minimum and maximum distance of the checkerboard to the laser scanner during different scans have also been used to enhance the checkerboard extraction process [11]. The heuristic methods have demonstrated good performance in some applications [8, 11, 3, 9, 16]. However, imposing assumptions limits the application of these methods to particular scenarios. Assumptions such as a stationary background restrict the developed techniques to controlled environments, preventing their application to open spaces where the background inevitably changes due to moving objects. Also, in some practical scenarios, stationary known targets are used for calibration and the camera-laser pair (usually mounted on a vehicle) is moved around to collect the data. In such situations, the background inevitably changes. Relying on range discontinuities is limited to cases where the checkerboard is not placed on a wall and where few other objects are present in the scene that might cause unwanted range discontinuities. Little attention has been given in the literature to the design of systematic methods for checkerboard extraction, based on rigorous theoretical foundations, that can perform robustly in all practical situations.
Branch-and-bound (BnB) is a systematic method for discrete and combinatorial optimization. In the context of geometric model fitting, BnB is a popular technique in situations where outliers are present or measurement correspondences are unknown (see e.g. [17, 18, 19, 20] and the references therein). Building on a rigorous theoretical foundation, this method systematically searches the state space for inliers that optimize an objective function and is guaranteed to find the globally optimal state. A BnB method has recently been developed for the hand-eye calibration problem [21, 22] and has demonstrated successful application. To the best of our knowledge, the BnB method has not been applied to the camera-laser calibration problem prior to the current contribution.
In this paper, we propose a BnB technique for checkerboard extraction in the camera-laser calibration problem. Following [1], we assume that checkerboard normals are obtained in the camera coordinate frame by processing the checkerboard corners. We propose an objective function that counts the laser points falling within a box around the checkerboard with a small margin (the inliers). A key contribution of the paper is a tight upper bound for the objective function that ensures the BnB method finds the globally optimal inlier laser points. Our method is not heuristic and relies only on the underlying geometry of the camera-laser calibration problem, without imposing any assumptions or constraints on the environment. Hence, it is quite general and performs robustly in all practical situations. It is directly applicable to both 2D and 3D laser scanners, to the case where multiple checkerboards are present in the scene, as well as to the case where some of the checkerboards either do not fall, or only partly fall, in the field of view of the laser scanner or the camera. We demonstrate successful application of our methodology by providing simulation studies and experimental results.
The inlier points extracted by our BnB approach can be used in any nonlinear optimization method to explicitly compute the extrinsic camera-to-laser transformation. The BnB method also provides a rough estimate of the extrinsic transformation, which can be used to initialize the subsequent nonlinear optimization close to the optimal transformation, keeping it away from local optima.
II. Problem formulation
Consider a camera-laser setup attached rigidly together. Denote the camera-fixed coordinate frame and the laser-fixed coordinate frame by and , respectively. For simplicity, we assume that only one checkerboard is present in the scene. Nevertheless, the method presented in this paper is directly applicable to the case where multiple checkerboards are present (see Remark 1). The representations of a point in the camera frame and the laser scanner frame are denoted by and , respectively. One has
(1) 
where and , respectively, denote the rotation and translation from to , and together represent the extrinsic camera-laser calibration.
Assume that a calibration plane (e.g. a checkerboard) is placed in the environment such that it falls within the field of view of both the camera and the laser scanner. The laser scanner measures the 3D coordinates of points of the environment in the laser-fixed frame. Some of these points fall on the plane of the checkerboard, while others belong to the background. For the purpose of this paper, we do not differentiate between 2D and 3D laser scanners, as the theory presented here applies to both without any modification.
We assume the dimensions of the checkerboard are known and use standard image-based techniques to extract the locations of the intersection points in the image and to find the camera intrinsic parameters as well as the pose parameters that relate the checkerboard’s reference frame to the camera’s. This task is easily accomplished using off-the-shelf software packages such as [23]. Changing the relative pose of the checkerboard with respect to (wrt.) the camera-laser setup (by moving either the checkerboard or the camera-laser setup), we collect sets of laser scan points, each containing points ( and ). Each set of these points corresponds to a scene in which a checkerboard with a known extrinsic transformation wrt. the camera frame is present. The main objective of this paper is to develop an algorithm that takes the sets of laser scan points and their corresponding camera images and separates the laser points that correspond to the checkerboard (inliers) from those that belong to the background (outliers).
III. Checkerboard extraction using BnB
We propose a branch-and-bound (BnB) method [18, 20] for checkerboard extraction. Consider a coordinate frame attached to the center of the checkerboard with its z-axis normal to the checkerboard plane pointing outward (toward the camera), its y-axis horizontal, and its x-axis vertical, as depicted in Fig. 1. Denoting the dimensions of the checkerboard along its and directions by and , a laser point is considered an inlier if it falls inside a box around the calibration board whose dimensions along the , , and axes of the checkerboard are , , and , respectively, where is a user-defined threshold (see Fig. 1). For the checkerboard in the th image, denote by , , and the vectors along the , , and axes of the checkerboard whose magnitudes equal the distances of the center of the camera frame to the , , and planes of the checkerboard coordinate frame, respectively. By detecting and processing the checkerboard corners, one can compute , , and in the camera coordinate frame. It is straightforward to show that a laser point falls inside the box of Fig. 1 iff the following conditions hold [1].
(2a)  
(2b)  
(2c) 
where denotes the unit vector parallel to . Inspired by [18], we propose the following objective function.
(3) 
where is the indicator function. For a given transformation , any point that satisfies (2) is an inlier and contributes to the summation in (3). Hence, represents the total number of inliers for a given transformation. We wish to maximize over the transformation .
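As a concrete illustration, the objective (3) can be evaluated by counting, for a candidate transformation, the laser points that pass the box test of (2). The minimal Python sketch below is not the authors' implementation: it assumes the conditions (2) compare the projection of the transformed point onto each (unit-normalized) board axis vector against the norm of that vector, with user-chosen box half-extents, and all function names (`is_inlier`, `objective`) are hypothetical.

```python
import numpy as np

def is_inlier(p, R, t, x, y, z, half_extents):
    """Box-membership test in the assumed form of conditions (2).

    p            -- laser point in the laser frame (3-vector)
    R, t         -- candidate extrinsic rotation and translation
    x, y, z      -- board axis vectors in the camera frame; the norm of
                    each equals the distance from the camera origin to
                    the corresponding board plane (Section III)
    half_extents -- (hx, hy, hz): assumed box half-lengths, e.g. the
                    board dimensions plus the threshold
    """
    q = R @ p + t  # express the laser point in the camera frame
    for axis, h in zip((x, y, z), half_extents):
        n = np.linalg.norm(axis)
        # signed offset of q from the corresponding board plane
        if abs(axis @ q / n - n) > h:
            return False
    return True

def objective(scans, normals, R, t, half_extents):
    """Objective in the spirit of (3): total inlier count over all
    image-laser pairs; normals holds one (x, y, z) triple per image."""
    total = 0
    for (x, y, z), points in zip(normals, scans):
        total += sum(is_inlier(p, R, t, x, y, z, half_extents)
                     for p in points)
    return total
```

A point on the board plane projects onto each axis at exactly the camera-to-plane distance, so its offsets are near zero and it passes the test; background points violate at least one of the three conditions.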
Algorithm 2 in the Appendix summarizes the BnB method used in this paper. We use the angle-axis representation to parametrize . A 3D rotation is represented as a vector whose direction and norm specify the axis and angle of rotation, respectively [17, 24]. This simplifies the search space of to a ball with a radius of . We enclose this ball with a box with a half side length of , denoted by . In practice, an initial estimate of the bound on the amplitude of the rotation is usually available. This initial estimate can be used to initialize the search space of rotations in the queue with a smaller box with . Similarly, the search space of translations is initialized with , where represents an estimate of the maximum distance of the center of the camera frame from the center of the laser scanner.
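The angle-axis machinery above can be sketched as follows, assuming the standard Rodrigues formula; the initial boxes are illustrative only (the ball of angle-axis rotations, of radius π, fits in a box of half side length π, while the translation half length below is an assumed bound, not a value from the paper):

```python
import numpy as np

def angle_axis_to_matrix(r):
    """Rodrigues' formula: angle-axis vector r (axis r/||r||,
    angle ||r||) to a 3x3 rotation matrix."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)  # near-zero rotation
    k = r / theta
    # cross-product (skew-symmetric) matrix of the unit axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# Illustrative initial search boxes, stored as (center, half side length):
# the rotation ball of radius pi is enclosed in a box of half length pi;
# the translation half length is an assumed bound on the camera-laser offset.
rot_box0 = (np.zeros(3), np.pi)
trans_box0 = (np.zeros(3), 1.0)
```

A tighter initial rotation box (smaller half length) directly shrinks the search space and the number of BnB iterations, which is why an a-priori bound on the rotation amplitude is worth exploiting.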
The BnB method requires an upper bound for (3). Our main contribution is to propose such an upper bound in Section IV (see Lemma 1). For branching, we simultaneously divide the boxes into eight sub-boxes, each with half the side length of the original box. Since we branch the rotation and translation search spaces simultaneously at lines 7 and 8 of Algorithm 2, this branching yields a total of new branches. Note that the inliers that satisfy condition (2) are detected during the evaluation of the objective function. The indices of these inliers are returned as an output of the BnB method (see line 21 of Algorithm 2).
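The simultaneous branching step can be sketched as follows (hypothetical helper names): splitting each cube into its eight octants, and pairing every rotation child with every translation child, gives 8 × 8 = 64 child branches per iteration.

```python
import itertools
import numpy as np

def split_box(center, half):
    """Split a cube (center, half side length) into its 8 octants,
    each with half the side length of the parent."""
    h = half / 2.0
    return [(center + h * np.array(s), h)
            for s in itertools.product((-1.0, 1.0), repeat=3)]

def branch(rot_box, trans_box):
    """Branch the rotation and translation boxes simultaneously:
    8 rotation children x 8 translation children = 64 branches."""
    return [(rb, tb)
            for rb in split_box(*rot_box)
            for tb in split_box(*trans_box)]
```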
IV. Upper-bound function
The key step in employing the BnB method is to propose an upper bound for (3). The following lemma proposes an upper bound and proves that it qualifies as a valid bounding function for BnB. Detailed derivations are given in the proof.
Lemma 1
Consider the boxes and with centers and and half side lengths and , respectively. Let
(4) 
where is computed for each laser point as
(5) 
The candidate upper bound (4) satisfies
(6) 
Also, if and collapse to the single points , we have .
Proof of Lemma 1: Using the triangle inequality, we have
(7) 
for any . Applying the triangle inequality again yields
(8) 
Since and is the center of the box with half side length , we have . Also, since and is the center of the box with half side length , resorting to [25, equation (6)] and noting that , we have . Combining these with (7) and (8), we obtain
(9) 
Hence, by (5), implies for any . Consequently, . The above derivations remain valid if one replaces with , , or and substitutes for , , or , respectively. This proves (6). If and collapse to single points, we have , , and (thus ). Substituting these into (4) yields and completes the proof.
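As a hedged sketch of the per-point relaxation used in (5): the proof bounds the translation deviation over a box of half side length by its half diagonal, and the rotation deviation via the standard angle-axis bound invoked through [25, equation (6)]. Assuming that form (the exact constants of (5) are elided above, so this is a reconstruction, not the paper's formula), a per-point slack could look like:

```python
import numpy as np

def slack(p, half_rot, half_trans):
    """Per-point relaxation in the spirit of (5), for a rotation box of
    half side length half_rot (in angle-axis space) and a translation
    box of half side length half_trans.

    Rotation part: the standard angle-axis bound
        ||R p - R0 p|| <= 2 sin(min(sqrt(3) half_rot / 2, pi / 2)) ||p||;
    translation part: the box half diagonal sqrt(3) half_trans.
    """
    ang = min(np.sqrt(3.0) * half_rot / 2.0, np.pi / 2.0)
    return 2.0 * np.sin(ang) * np.linalg.norm(p) + np.sqrt(3.0) * half_trans
```

Consistent with the last claim of Lemma 1, this slack vanishes when both boxes collapse to single points, so the relaxed inlier test reduces to the exact one and the upper bound coincides with the objective.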
IV-A. Improving the tightness of the upper bound
The tightness of the upper bound function (4) depends directly on the value of . The less conservative the value of , the tighter the upper bound, and the fewer iterations BnB needs to find the global optimum. In this section, we tighten the upper bound function proposed in Section IV.
The derivation of (5) relies on upper-bounding the inner products and over . These upper bounds are obtained in (8) by bounding each inner product by the product of the magnitudes of the involved vectors, without considering the angle between them. This yields a reasonably tight result for the inner product , as the vector belongs to the 3D box and there inevitably exists some in the box for which the angle between and is zero. For the inner product , however, the angle between the involved vectors might not be zero (or even close to zero), since and the vector belongs to the spherical cap shown in Fig. 2 (see [17]). In fact, the upper bound (5) is obtained by bounding the spherical cap with a ball of radius . Here, we obtain a tighter upper bound by maximizing the inner product directly on the spherical cap, taking into account the angle between and . We have
(10) 
where . Hence, the particular value of that maximizes (10) corresponds to the value that either minimizes or maximizes . Algorithm 1 provides a fast approach for computing the maximum and minimum values of , denoted by and , respectively (the proof of this algorithm is given in Appendix A). Using Algorithm 1, a tight upper bound for (10) is obtained as . In order to implement this new upper bound instead of the one proposed in Section IV, one only needs to replace (5) with the following equation (the rest of Algorithm 2 remains unchanged).
(11) 
Similar to (5), the new upper bound (11) still depends on the laser points and needs to be computed for each laser point separately. It is straightforward to show that the tight upper bound based on (11) also satisfies the requirements of Lemma 1. We provide a numerical comparison of the bounds (5) and (11) in Section V (see Fig. 4).
Remark 1
Multiple checkerboards are sometimes used for camera-laser calibration [11]. In this case, detecting the inliers is harder, as it is not known which of the laser points in a given image-laser scan pair is associated with which of the checkerboards. Assume that checkerboards are visible in the th image and denote the normal vectors associated with the th checkerboard () by , , and . The number of observed checkerboards in each image can differ, and some of the checkerboards are allowed not to fall within the field of view of the camera or the laser scanner in some image-laser pairs. We extend the objective function (3) to
(12) 
Using the operator where the correspondence between the measurements is unknown has been successfully practiced in the literature (see e.g. [18, 25]). For a given calibration , the objective function (12) chooses the correspondence between the checkerboards and the laser scans that yields the maximum number of inliers. Note that extending the objective function to (12) does not change the structure of Algorithm 2 at all, and the results of Lemma 1 still hold. One only needs to compute and using (12) instead of (3). The optimum correspondence between the laser points and the checkerboards in each image is obtained at line 21 of Algorithm 2.
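The extension (12) can be sketched by taking, for each laser point, the max of the box test over all boards visible in that image, so the unknown point-to-board correspondence is resolved implicitly. This is a minimal Python illustration under the same assumed form of the conditions (2) as before; `_box_test` and `objective_multi` are hypothetical names.

```python
import numpy as np

def _box_test(p, R, t, x, y, z, half_extents):
    """Box-membership test in the assumed form of conditions (2):
    the projection of the transformed point onto each (unit-normalized)
    board axis must lie within the corresponding half-extent of the
    camera-to-plane distance."""
    q = R @ p + t
    return all(
        abs(a @ q / np.linalg.norm(a) - np.linalg.norm(a)) <= h
        for a, h in zip((x, y, z), half_extents)
    )

def objective_multi(scans, boards_per_image, R, t, half_extents):
    """Extended objective in the spirit of (12): each laser point
    contributes via the max over the boards visible in its image."""
    total = 0
    for boards, points in zip(boards_per_image, scans):
        for p in points:
            total += max(_box_test(p, R, t, x, y, z, half_extents)
                         for (x, y, z) in boards)
    return total
```

Because the max is taken per point, a point matching any one board counts once, and images with different numbers of visible boards are handled uniformly.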
V. Simulation results
This section provides simulation studies demonstrating that the proposed BnB method is able to effectively extract the checkerboards even in challenging situations where some of the checkerboards are placed on the walls (hence do not create discontinuities in the range measurements), or where some of the checkerboards are either only partially hit, or not hit at all, by the laser scans in some of the image-laser scan pairs.
We choose the extrinsic rotation to be a rotation of degrees around the axis of the laser scanner and we choose the extrinsic translation . A checkerboard is placed in different positions in front of the camera and laser scanner, each time with a different orientation (see Fig. 3). We consider a 2D laser scanner providing range measurements in a horizontal plane, every degrees from degrees to degrees, yielding a total of measured laser points in each image. The laser-camera setup is placed in a triangular-shaped room where the distance of the walls to the laser scanner is meters and the intersection line of the walls is meters away from the laser scanner. The laser points that do not hit the checkerboards hit the walls. A uniformly distributed random noise between centimeters and centimeters is added to the resulting range measurements. The synthetically generated sets are depicted in Fig. 3. To make the data more challenging for checkerboard extraction, in Fig. 2(d) the checkerboard is placed exactly on the wall, preventing any discontinuity in the range measurements. Also, we choose the checkerboard orientation such that the laser scans only partially hit it in Fig. 2(e) and do not hit it at all in Fig. 2(f). A total of laser points are measured, out of which points correspond to the checkerboards (inliers) and the rest fall on the walls. The checkerboard normals represented in the camera frame are polluted by a uniformly distributed random rotation of at most degrees along each axis of the camera to model the camera noise. The resulting checkerboard normals, along with the laser scan points, are passed to Algorithm 2. The threshold is chosen to be centimeters and the initial search space is chosen as and .
Fig. 4 shows the evolution of and for the first iterations of BnB as they are examined at line 5 of Algorithm 2. Fig. 4 includes both the case where the original bound (5) and the case where the tighter bound (11) is used in BnB. The upper bounds initially decay very fast, but their rate of decay decreases for large iterations (a typical characteristic of objective functions designed using the floor operator). As expected, computed using (11) decays faster than that of (5), showing that (11) indeed yields a tighter upper bound. The BnB method is able to find all of the inliers after iterations when the tight upper bound is used and after iterations when the original upper bound is used. The runtime is reported in Section VI. Observe that still decays at the th iteration and will eventually reach , at which point the algorithm terminates. It is common practice to terminate BnB algorithms once a large enough number of inliers has been detected and stops growing for a large enough number of consecutive iterations. Fig. 5 shows the inliers detected at line 21 of Algorithm 2 (for both bounds (5) and (11)), demonstrating that all of the inliers are successfully detected.
VI. Experimental results
In this section, we verify the effectiveness of the proposed BnB method using the real images and 2D laser scans of the dataset of [8] (available online: http://www-personal.acfr.usyd.edu.au/akas9185/AutoCalib/). The dataset contains images together with their corresponding laser scans. Each laser scan contains range measurements in the horizontal plane, measured every degrees from degrees to degrees. A checkerboard is placed in a different place in each image. The images are processed using the camera calibration toolbox [23]. This toolbox extracts the corners of the checkerboard, computes the camera intrinsics and lens distortion parameters, and provides the extrinsic transformation of each checkerboard plane wrt. the camera frame. The output of [23] is used both in our BnB method and in the toolbox of [8]. Given the output of the camera calibration toolbox, it is straightforward to compute the checkerboard normals , , and required by Algorithm 2. The dimensions of the checkerboard rectangles are provided in the dataset, but the dimensions of the calibration board itself (which are larger than the area of the checkerboard, see Fig. 7) are not. From the images, we estimate the dimensions of the calibration board to be at least and we use these values as and in Algorithm 2. The search space is initialized to and and the inlier threshold is chosen as meters. The search space is initialized large enough to contain the true calibration parameters and : by estimating the calibration parameters via the toolbox of [8], we verified that the angle of rotation corresponding to is less than degrees and the amplitude of the translation is less than meters along each axis. Algorithm 2 still works if a larger search space is chosen, although this increases the number of iterations.
Fig. 6 shows the extracted checkerboards after the first iterations of our BnB method with the tight bound (top plot) versus the checkerboards extracted by the toolbox of [8] (we adjusted the line extraction thresholds of [8] to detect all of the checkerboards; otherwise, one of the checkerboards would not be detected). A total of inlier laser points are extracted by our method and inliers are detected by the toolbox of [8]. The extracted inlier laser points of both methods are almost identical. No ground truth inliers are given in the dataset. Nevertheless, since the employed data has been verified to work efficiently with the toolbox of [8], the validity of the extracted inliers of the BnB method is verified. Fig. 7 shows a sample of the image-laser scan pairs. Some of the points near the edges of the checkerboard fall on the hand or clothes of the person holding the checkerboard, and are correctly classified as outliers by Algorithm 2. Also, since and are chosen smaller than the true dimensions of the checkerboard, occasionally one or two inlier laser points at the edge of the checkerboard are classified as outliers by our method. The same effect is observed in the results of [8]. This is not important in practice, as those laser points are naturally noisier and might not be very helpful in calibration [26]. On our machine, equipped with a Core i7-4720HQ processor, the total iterations of BnB take seconds, while the toolbox of [8] extracts the checkerboards in less than seconds. We emphasize, however, that calibration is a one-off process and the runtime of BnB is negligible compared to the data acquisition, a price justified by its generality, flexibility in practical conditions, and robustness gains. We also emphasize that [8] is tailored to 2D laser scanners, necessarily requires a stationary background, and relies on range discontinuities, all of which are satisfied in the employed dataset. Our BnB method does not impose any of these assumptions and is applicable to 3D laser scanners as well.
VII. Conclusion
We formulate checkerboard extraction as a combinatorial optimization problem with a clear-cut objective function and propose a branch-and-bound technique for optimizing the objective. The proposed BnB method is able to robustly extract the checkerboard in a diverse range of practical scenarios, including, but not limited to, those where either 2D or 3D laser scanners are used, multiple checkerboards are present in the scene, checkerboards are placed on the walls and no range discontinuity is associated with the checkerboard edges, some of the checkerboards are only partially hit or not hit at all by the laser scans, the background changes from one scan to another, and multiple unwanted objects are present in the scene, creating undesired range discontinuities. We demonstrate the effective application of the proposed method via simulation and experimental studies.
Appendix A
Consider a coordinate frame whose center is at the origin of and whose axis is along (see Fig. 8). Choose the axis of this coordinate frame such that it falls within the plane containing and . The axis is defined by . Any vector is expressed in this coordinate frame as , where is the angle between and the axis , and is the angle between the axis and the projection of onto the plane. Similarly, the expression of is given by , where . We have
(13) 
Finding the value of that maximizes or minimizes (13) boils down to maximizing/minimizing (13) wrt. and . Observe that and . We have . This implies that the extrema of (13) occur on the intersection of the plane that contains and (this plane is characterized by or ) and the spherical cap of Fig. 2 (the intersection is depicted in Fig. 8). Substituting or into (13), we have and . Hence, and . Observe that yields either or . Since , only is valid. Similarly, yields either or , where is invalid. If either of or corresponds to a rotation that falls within the box , that rotation corresponds to an extremum of . These correspond to the cases where either , yielding the maximum of (13) as , or , yielding the minimum of (13) as . Otherwise, since is continuous on , the extrema might occur at the boundary points , which yield the candidate extrema and . Thus, in order to find the extrema, one should first check whether either of or falls inside the box , yielding the maximum or minimum, respectively. Otherwise, the maximum/minimum is obtained from the boundary points. Algorithm 1 summarizes this method.
Require: , , , , , , for .
References
 [1] Q. Zhang and R. Pless, “Extrinsic calibration of a camera and laser range finder (improves camera calibration),” in Proc. IEEE/RSJ International Conf. Intelligent Robots and Systems, vol. 3, 2004, pp. 2301–2306.
 [2] D. Scaramuzza, A. Harati, and R. Siegwart, “Extrinsic self calibration of a camera and a 3d laser range finder from natural scenes,” in Proc. IEEE/RSJ International Conf. Intelligent Robots and Systems, 2007, pp. 4164–4169.
 [3] F. Vasconcelos, J. P. Barreto, and U. Nunes, “A minimal solution for the extrinsic calibration of a camera and a laser-rangefinder,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2097–2107, 2012.
 [4] A. Napier, P. Corke, and P. Newman, “Cross-calibration of push-broom 2D lidars and cameras in natural scenes,” in Proc. IEEE International Conf. Robotics and Automation, 2013, pp. 3679–3684.
 [5] J. Castorena, U. S. Kamilov, and P. T. Boufounos, “Autocalibration of lidar and optical cameras via edge alignment,” in IEEE International Conf. Acoustics, Speech and Signal Processing, 2016, pp. 2862–2866.

 [6] G. Pandey, J. R. McBride, S. Savarese, and R. M. Eustice, “Automatic targetless extrinsic calibration of a 3D lidar and camera by maximizing mutual information,” in Proc. AAAI Conf. on Artificial Intelligence, 2012.
 [7] R. Unnikrishnan and M. Hebert, Fast extrinsic calibration of a laser rangefinder to a camera. Carnegie Mellon University, 2005.
 [8] A. Kassir and T. Peynot, “Reliable automatic camera-laser calibration,” in Proc. Australasian Conference on Robotics & Automation, 2010.
 [9] F. M. Mirzaei, D. G. Kottas, and S. I. Roumeliotis, “3D LIDAR–camera intrinsic and extrinsic calibration: Identifiability and analytical leastsquaresbased initialization,” The International Journal of Robotics Research, vol. 31, no. 4, pp. 452–467, 2012.
 [10] D. Herrera, J. Kannala, and J. Heikkilä, “Joint depth and color camera calibration with distortion correction,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 2058–2064, 2012.
 [11] A. Geiger, F. Moosmann, Ö. Car, and B. Schuster, “Automatic camera and range sensor calibration using a single shot,” in IEEE International Conf. Robotics and Automation, 2012, pp. 3936–3943.
 [12] S. Wasielewski and O. Strauss, “Calibration of a multisensor system laser rangefinder/camera,” in Proc. Intelligent Vehicles’ Symposium, 1995, pp. 472–477.
 [13] G. Pandey, J. McBride, S. Savarese, and R. Eustice, “Extrinsic calibration of a 3D laser scanner and an omnidirectional camera,” IFAC Proceedings Volumes, vol. 43, no. 16, pp. 336–341, 2010.
 [14] D. Herrera, J. Kannala, and J. Heikkilä, “Accurate and practical calibration of a depth and color camera pair,” in International Conf. Computer analysis of images and patterns, 2011, pp. 437–445.
 [15] C. Mei and P. Rives, “Calibration between a central catadioptric camera and a laser range finder for robotic applications,” in Proc. IEEE Int. Conf. on Robotics and Automation, 2006, pp. 532–537.
 [16] M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981.

 [17] R. I. Hartley and F. Kahl, “Global optimization through rotation space search,” International Journal of Computer Vision, vol. 82, no. 1, pp. 64–79, 2009.
 [18] T. M. Breuel, “Implementation techniques for geometric branch-and-bound matching methods,” Computer Vision and Image Understanding, vol. 90, no. 3, pp. 258–294, 2003.
 [19] A. Parra Bustos, T.-J. Chin, A. Eriksson, H. Li, and D. Suter, “Fast rotation search with stereographic projections for 3D registration,” IEEE Trans. Pattern Analysis and Machine Intelligence, (accepted in Dec. 2015).
 [20] C. Olsson, F. Kahl, and M. Oskarsson, “Branch-and-bound methods for Euclidean registration problems,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 31, no. 5, pp. 783–794, 2009.

 [21] J. Heller, M. Havlena, and T. Pajdla, “A branch-and-bound algorithm for globally optimal hand-eye calibration,” in IEEE Conf. Computer Vision and Pattern Recognition, 2012, pp. 1608–1615.
 [22] Y. Seo, Y.-J. Choi, and S. W. Lee, “A branch-and-bound algorithm for globally optimal calibration of a camera-and-rotation-sensor system,” in IEEE International Conf. Computer Vision, 2009, pp. 1173–1178.
 [23] J.-Y. Bouguet, “Camera calibration toolbox for MATLAB,” http://www.vision.caltech.edu/bouguetj/calib_doc/ (visited in August 2016).
 [24] A. Khosravian and M. Namvar, “Globally exponential estimation of satellite attitude using a single vector measurement and gyro,” in Proc. IEEE Conf. Decision and Control, 2010.
 [25] A. Parra Bustos, T.-J. Chin, and D. Suter, “Fast rotation search with stereographic projections for 3D registration,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2014, pp. 3930–3937.
 [26] W. Boehler, M. B. Vicent, and A. Marbs, “Investigating laser scanner accuracy,” The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 34, pp. 696–701, 2003.