I Introduction
Autonomous mobile robots require sensors to perceive the environment and estimate their state. Combining measurements from different sensors is needed to improve robustness and to compensate for individual sensor limitations. In order to express measurements in a common reference system, accurate relative transformations between the sensors are required (i.e. extrinsic calibration). For example, when performing Visual Odometry [1] it is common practice to also consider information from an odometer [2, 3, 4], which requires knowing the spatial transformation between the camera and the wheel odometry system.
Two main approaches can be found in the literature for extrinsic calibration: i) exploiting a priori knowledge about the environment or ii) relying on per-sensor motion estimates. In the first case, a priori information is typically in the form of scene patterns [5, 6, 7] or known landmarks [4]. Pattern-based calibration methods are usually developed for specific pairs of sensors, for example a camera and a 2D laser scanner [5], or monocular and depth cameras [6]. Using specific calibration techniques to cover all sensor pairs in a multi-sensor configuration makes the calibration process complex and almost impractical. Additionally, relying on known landmarks for calibration prevents the automatic execution of the calibration process, since they have to be placed in the environment beforehand. Automatic calibration methods, however, are appealing for mobile robots, which demand recalibration capabilities for long-term operation, since the relative transformation between their sensing devices can change due to crashes, vibrations, human intervention, etc. [8, 9].
In contrast to pattern-based calibration methods, motion-based ones do not need to modify the environment, and they can be used with any sensing devices from which ego-motion can be inferred. Monocular cameras are an exception: because of the scale ambiguity problem, they are excluded unless the scale is considered explicitly. Another practical limitation arises from the planar movement of mobile robots, which prevents the estimation of the full 6DoF extrinsic calibration parameters solely from incremental motions, as shown in [2].
In this paper, we contribute a method to extrinsically calibrate multiple heterogeneous sensors such as odometers, visual and depth cameras, 2D and 3D laser scanners, etc., mounted on a robot moving on a planar ground (see Fig. 1). Our approach computes the 6 extrinsic parameters in two main steps: the 2D calibration parameters (x, y, yaw) are obtained from per-sensor incremental motions, while the remaining 3 parameters (z, pitch, roll) are computed from the observation of the ground plane. The ground plane only has to be observed for a short period of time, a requirement fulfilled by most robotic sensor configurations. By decoupling the extrinsic calibration problem into two simpler subproblems we are able to solve all parameters in closed form, thus removing the need to provide initial estimates. Another important aspect of our method is that it formulates the calibration problem taking into account the scale ambiguity of monocular cameras, which is estimated in the motion-based step. Thus, our approach allows combining these commonly-used devices with other types of sensors in the same calibration framework. A C++ open source implementation of the contributed method is publicly available for download at https://github.com/dzunigan/robot_autocalibration.
Our proposal has been validated in a simulation experiment and assessed with real data from two different datasets: one in outdoor scenarios containing information from a monocular camera and a GPS [10], and another gathered indoors using one of our mobile robotic platforms, equipped with an odometer, a monocular camera and a 2D laser scanner [11]. We also report improvements in accuracy and consistency of the estimates over the state-of-the-art motion-based approach from Della Corte et al. [8].
II Related Work
Most extrinsic calibration methods found in the literature consider specific sensors and exploit some a priori information about the observed scene. For example, the work in [5] uses scene corners (orthogonal trihedrons) to calibrate a 2D laser scanner and a camera. The information provided by a spherical object is used in [6] to calibrate a camera and a depth sensor. The overlap requirement between the sensors' fields of view is relaxed in [7] to calibrate multiple depth cameras from the common observation of planar structures. In contrast to these methods, motion-based calibration approaches do not require a priori knowledge, and hence can be applied automatically in unstructured environments.
The extrinsic calibration from individual sensor ego-motion was first considered, to the best of our knowledge, by Brookshire and Teller in [12]. They present an observability analysis of the 2D extrinsic calibration problem and a solution based on the iterative minimization of a least-squares cost function. Similarly, in [13], Censi et al. consider the calibration of both a generic sensor (for which ego-motion can be estimated) with respect to the robot reference system, and the intrinsic parameters of a differential drive configuration. They also analyze the observability of the calibration problem and propose a closed-form solution to a least-squares formulation. In [14], Brookshire and Teller extend their previous formulation by considering the calibration of the 3D transformation parameters. Schneider et al. present in [15] an online extrinsic calibration approach based on the Unscented Kalman Filter.
Unlike these previous works, the authors in [2] address the scale ambiguity of monocular cameras, proposing a closed-form least-squares solution to the odometer-camera calibration problem based on incremental motions. Their approach does not require any a priori information about the environment, and thus allows for an automatic execution. Zienkiewicz and Davison consider in [3] a similar calibration problem. They propose a method based on the parameterization of the homography arising from planar motions observing the ground plane; the solution is found by minimizing the induced photometric errors.
All aforementioned works are limited to the calibration of pairs of sensors. In contrast, the work in [16] considers the calibration of multiple cameras with an odometer. Their pipeline estimates the intrinsic camera parameters as well as the extrinsic parameters with respect to an odometer (initialized from [2]), but it is designed to work only with imaging sensors. The work in [17] tackles the calibration of multiple sensors explicitly. Their formulation is based on the Gauss-Helmert model, where all motions have to be expressed in the same scale, and thus it is not suitable for cameras. Taylor and Nieto propose in [18] an algorithm to extrinsically calibrate multiple sensors from 3D motions, including cameras, by solving multiple hand-eye calibration problems. Recently, Della Corte et al. presented in [8] an extrinsic calibration framework based on 3D motions, where each sensor's time delay is also estimated. However, as shown in [2], the full 6DoF calibration parameters cannot be estimated solely from planar motions performed by mobile robots.
In contrast to previous approaches, the method presented in this paper estimates the 6DoF calibration parameters of multiple heterogeneous sensors (including monocular cameras) mounted on a mobile robot performing planar motions. For that, it relies on incremental sensor motions as well as on the observation of the ground plane. The a priori information requirement introduced by the ground-plane-based calibration step is soft enough to allow automatic execution with different sensors (e.g. monocular cameras or 3D laser scanners) without requiring overlapping fields of view. Additionally, all extrinsic calibration parameters are estimated in closed form, and hence initial guess values are not required. As a consequence, our method is suitable for automatic execution by mobile robots.
III Problem Formulation
III-A Motion-based Calibration of Coplanar Sensors
First, we consider the motion-based extrinsic calibration of two coplanar sensors under the assumptions that: i) 2D synchronized incremental poses are available for each of them, and ii) the translation components may be expressed in different scales. Our objective is to estimate the fixed 2D transformation between the two sensors that best explains the difference in the observed incremental motions.
More formally, our aim is to estimate the parameters of the 2D similarity transformation $\mathbf{x}_{ij} = (x,\ y,\ \theta,\ s) \in \text{Sim}(2)$ between the $i$-th and the $j$-th sensors, where $(x, y)$ are the translation components, $\theta$ the rotation angle, and $s > 0$ the scaling factor (see Fig. 2), with Sim(2) the group of orientation-preserving similarity transformations (similarity transformations are rigid transformations followed by a scaling [19]). The calibration transformation expresses the measurements $\mathbf{p}^j$ taken from the $j$-th sensor into the $i$-th reference frame as [19]:

$\mathbf{p}^i = s\,\mathbf{R}(\theta)\,\mathbf{p}^j + (x,\ y)^\top$  (1)
where $\mathbf{R}(\theta)$ is the 2D rotation matrix defined by the angle $\theta$:

$\mathbf{R}(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \phantom{-}\cos\theta \end{pmatrix}$  (2)
From (1), we can define the group operator $\oplus$ of Sim(2), for two elements $\mathbf{x}_1 = (\mathbf{t}_1, \theta_1, s_1)$ and $\mathbf{x}_2 = (\mathbf{t}_2, \theta_2, s_2)$, as:

$\mathbf{x}_1 \oplus \mathbf{x}_2 = \big(s_1\,\mathbf{R}(\theta_1)\,\mathbf{t}_2 + \mathbf{t}_1,\ \theta_1 + \theta_2,\ s_1 s_2\big)$  (3)
and its inverse operator $\ominus$ as:

$\ominus\mathbf{x} = \big({-s^{-1}}\,\mathbf{R}(-\theta)\,\mathbf{t},\ -\theta,\ s^{-1}\big)$  (4)
We consider incremental rigid body motions as input for the calibration process, expressed as $\mathbf{m}_k$ between sample times $k-1$ and $k$. Notice that an SE(2) transformation is a particular case of Sim(2) with the scaling factor set to one; thus the previously defined operators for Sim(2) also hold for SE(2) transformations.
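The Sim(2) operators above are straightforward to implement; a minimal sketch, with each element as a (x, y, theta, s) tuple (the tuple layout and function names are our own, not part of the paper):

```python
import math

def compose(a, b):
    """Sim(2) composition a (+) b: rotate and scale b's translation by a,
    add the angles, multiply the scales."""
    c, s = math.cos(a[2]), math.sin(a[2])
    return (a[0] + a[3] * (c * b[0] - s * b[1]),
            a[1] + a[3] * (s * b[0] + c * b[1]),
            a[2] + b[2],
            a[3] * b[3])

def inverse(a):
    """Sim(2) inverse (-) a, so that compose(inverse(a), a) is the identity
    element (0, 0, 0, 1)."""
    c, s = math.cos(a[2]), math.sin(a[2])
    return ((-c * a[0] - s * a[1]) / a[3],
            (s * a[0] - c * a[1]) / a[3],
            -a[2],
            1.0 / a[3])
```

An SE(2) element is simply a tuple whose last component is 1.0, so the same two functions cover both cases.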
The incremental motions can be derived from the sensor poses $\mathbf{x}_k$ as (throughout this paper, we give $\ominus$ a higher precedence than $\oplus$ to improve the readability of expressions involving Sim(2) operations):
$\mathbf{m}_k = \ominus\mathbf{x}_{k-1} \oplus \mathbf{x}_k$  (5)
The poses corresponding to the $j$-th sensor can be expressed in terms of the $i$-th sensor poses and the extrinsic calibration $\mathbf{x}_{ij}$ between them:
$\mathbf{x}^j_k = \mathbf{x}^i_k \oplus \mathbf{x}_{ij}$  (6)
and, therefore, its incremental motion is given by:
$\mathbf{m}^j_k = \ominus\mathbf{x}_{ij} \oplus \mathbf{m}^i_k \oplus \mathbf{x}_{ij}$  (7)

Equivalently, the calibration must satisfy, for every pair of synchronized incremental motions:

$\mathbf{x}_{ij} \oplus \mathbf{m}^j_k = \mathbf{m}^i_k \oplus \mathbf{x}_{ij}$  (8)

We thus estimate the extrinsic calibration as the solution of the least-squares problem:

$\hat{\mathbf{x}}_{ij} = \arg\min_{\mathbf{x} \in \text{Sim}(2)} E(\mathbf{x})$  (9)

with cost function:

$E(\mathbf{x}) = \sum_k \|\mathbf{e}_k(\mathbf{x})\|^2$  (10)

and error terms comparing the observed and predicted incremental motions of the $j$-th sensor:

$\mathbf{e}_k(\mathbf{x}) = \mathbf{m}^j_k - \ominus\mathbf{x} \oplus \mathbf{m}^i_k \oplus \mathbf{x}$  (11)
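The pose and motion relations above can be checked numerically; a self-contained sketch (with elements as (x, y, theta, s) tuples and arbitrary illustrative values for the calibration and poses):

```python
import math

def compose(a, b):
    # Sim(2) composition: elements are (x, y, theta, s)
    c, s = math.cos(a[2]), math.sin(a[2])
    return (a[0] + a[3] * (c * b[0] - s * b[1]),
            a[1] + a[3] * (s * b[0] + c * b[1]),
            a[2] + b[2], a[3] * b[3])

def inverse(a):
    # Sim(2) inverse
    c, s = math.cos(a[2]), math.sin(a[2])
    return ((-c * a[0] - s * a[1]) / a[3],
            (s * a[0] - c * a[1]) / a[3], -a[2], 1.0 / a[3])

def increments(poses):
    # Incremental motions between consecutive poses
    return [compose(inverse(p), q) for p, q in zip(poses, poses[1:])]

# Hypothetical calibration x_ij and reference-sensor poses (SE(2), scale 1)
x = (0.3, -0.2, 0.4, 2.0)
poses_i = [(0.0, 0.0, 0.0, 1.0), (1.0, 0.2, 0.1, 1.0), (1.8, 0.5, 0.3, 1.0)]
poses_j = [compose(p, x) for p in poses_i]      # second sensor's poses
for mi, mj in zip(increments(poses_i), increments(poses_j)):
    rhs = compose(inverse(x), compose(mi, x))   # conjugated increment
    assert all(abs(u - v) < 1e-12 for u, v in zip(mj, rhs))
```

The assertion verifies that the second sensor's increments are exactly the reference increments conjugated by the calibration, which is the constraint the least-squares problem exploits.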
III-B Sensor Coplanarity Relaxation
Now, consider the case of a sensor in a generic configuration, under the assumption that the plane in which motions are performed can be observed. Our objective is to set a common reference system and estimate the relative pose parameters required to project the estimated motions into this plane, hence enforcing the coplanarity constraint.
Formally, we want to estimate the relative rigid body motion between the $j$-th sensor and the ground plane. For convenience, we set the plane's local reference system to be at the projection of the sensor's origin onto the plane, with its $z$ axis parallel to the plane's normal and pointing upwards (see Fig. 1). Note that the in-plane rotation (around the $z$ axis) can be arbitrarily set, as it will be calibrated later on (as the $\theta$ parameter).
Thus, ignoring the relative in-plane position and orientation, the remaining 3DoF parameters for each sensor are the target of our estimation at this step: the perpendicular distance $z$ and two rotation angles, $\phi$ (pitch) and $\psi$ (roll). More specifically, we define the relative SE(3) transformation to be:
$\mathbf{T} = \begin{pmatrix} \mathbf{R}_x(\psi)\,\mathbf{R}_y(\phi) & (0,\ 0,\ z)^\top \\ \mathbf{0}^\top & 1 \end{pmatrix}$  (12)
where $\mathbf{R}_x(\psi)$ and $\mathbf{R}_y(\phi)$ represent parameterized 3D rotation matrices about the $x$ and $y$ axes, respectively.
Hence, the $k$-th 3D point $\mathbf{p}_k$ observed by the $j$-th sensor can be expressed in the local ground coordinates by applying the relative rigid body transformation as:
$\mathbf{p}^g_k = \mathbf{R}\,\mathbf{p}_k + (0,\ 0,\ z)^\top, \qquad \mathbf{R} = \mathbf{R}_x(\psi)\,\mathbf{R}_y(\phi)$  (13)
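The sensor-to-ground mapping can be sketched as follows; the factorization R = Rx(roll) Ry(pitch) and the function names are our own illustrative assumptions:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def sensor_to_ground(p, z, pitch, roll):
    """Map a sensor-frame 3D point into the local ground frame, assuming the
    rotation factors as R = Rx(roll) @ Ry(pitch) and translation (0, 0, z)."""
    R = rot_x(roll) @ rot_y(pitch)
    return R @ p + np.array([0.0, 0.0, z])
```

A quick sanity check: a point that lies on the ground plane must come out with a third coordinate of zero after the mapping.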
Based on this relationship, we can define a cost function for a weighted least-squares formulation of the coplanarity relaxation problem:

$\hat{\mathbf{R}},\ \hat{z} = \arg\min_{\mathbf{R},\ z}\ E(\mathbf{R}, z)$  (14)

$E(\mathbf{R}, z) = \sum_k w_k\, e_k^2, \qquad e_k = \mathbf{n}^\top\big(\mathbf{R}\,\mathbf{p}_k + (0,\ 0,\ z)^\top\big) - d$  (15)

where each term $e_k$ represents the perpendicular distance of the $k$-th point to the ground plane and $w_k$ the weight. The ground plane is defined through the unit normal vector $\mathbf{n}$ and the distance $d$ to the origin (Hessian normal form). For convenience, we set the ground plane parameters to be $\mathbf{n} = (0, 0, 1)^\top$ and $d = 0$ (see Section IV-B). The observed 3D incremental motions for the $j$-th sensor can be projected to the ground plane using the estimated rotation matrix from (14) as:
$\tilde{\mathbf{R}}_k = \hat{\mathbf{R}}\,\mathbf{R}_k\,\hat{\mathbf{R}}^\top, \qquad \tilde{\mathbf{t}}_k = \hat{\mathbf{R}}\,\mathbf{t}_k$  (16)

where $(\mathbf{R}_k, \mathbf{t}_k)$ is the observed SE(3) incremental motion,
and then the SE(2) incremental motions can be easily recovered as the $x$-$y$ translation components of $\tilde{\mathbf{t}}_k$, and by extracting the rotation angle about the $z$ axis from $\tilde{\mathbf{R}}_k$.
These three parameters allow us to enforce the coplanarity constraint required by the motionbased calibration described in Section IIIA and, therefore, to estimate the full 6DoF extrinsic parameters.
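The projection step described above can be sketched end-to-end; a numpy-based sketch with our own helper names, where the round-trip test builds a purely planar motion as seen from a tilted sensor and recovers its SE(2) part:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def project_motion(R_g, R_m, t_m):
    """Project an SE(3) incremental motion (R_m, t_m), observed in the sensor
    frame, into the ground frame via the sensor-to-ground rotation R_g, and
    recover its planar SE(2) part (x, y, yaw)."""
    R_p = R_g @ R_m @ R_g.T   # conjugate the rotation into the ground frame
    t_p = R_g @ t_m           # rotate the translation into the ground frame
    yaw = np.arctan2(R_p[1, 0], R_p[0, 0])
    return t_p[0], t_p[1], yaw
```

When the underlying motion really is planar, the projected translation keeps a negligible third component, which can double as a consistency check on the estimated tilt.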
IV Method Description
In the proposed method, we estimate the 6DoF calibration parameters in two main steps. First, for each sensor, we estimate the parameters of the relative rigid body transformation with respect to the ground plane. Then, we estimate the extrinsic 2D parameters for each sensor with respect to the reference one from coplanar incremental motions. More specifically, the calibration pipeline can be summarized as follows:

1. Acquire sensor data while the robot is moving.
2. Estimate per-sensor ego-motion.
3. Estimate z, pitch and roll relative to the ground plane.
4. Project the estimated trajectories to the ground plane and resample synchronous incremental motions.
5. Perform a motion-based calibration of the remaining parameters: x, y, and yaw.
6. Refine the initial 2D parameters (x, y, yaw) in a joint optimization framework.
The rest of this section is structured as follows. In Section IV-A we show how to solve the calibration of two coplanar sensors from incremental motions. Next, in Section IV-B, we show how the parameters relative to the ground plane can be solved. Finally, in Section IV-C we discuss practical aspects, covering: the sampling of synchronous incremental motions, considerations for robust estimation, the final refinement, and the metric estimation of z in the monocular case.
IV-A Closed-Form Solution to the Motion-based Pairwise Calibration Problem
In order to solve (9), we follow an approach similar to the one described in [13]. The solution is obtained by first reducing the least-squares formulation to a quadratic system with a quadratic constraint. The constrained optimization problem is then uniquely solved in closed form using the method of Lagrange multipliers.
To reduce the calibration problem to a quadratic system, we first rewrite the translation error terms in (11), rotated by $\mathbf{R}(\theta)$ (a norm-preserving operation), as:

$\mathbf{e}_k = \mathbf{R}(\theta)\,\mathbf{t}^j_k - \frac{1}{s}\Big(\big(\mathbf{R}(\theta^i_k) - \mathbf{I}\big)\,\mathbf{t} + \mathbf{t}^i_k\Big)$  (17)

where $\mathbf{t}^i_k, \theta^i_k$ and $\mathbf{t}^j_k, \theta^j_k$ denote the translation and rotation components of $\mathbf{m}^i_k$ and $\mathbf{m}^j_k$, respectively, and $\mathbf{t} = (x, y)^\top$.
Next, we parameterize the calibration angle $\theta$ by two independent variables: $\cos\theta$ and $\sin\theta$. Grouping the unknown parameters into the vector

$\boldsymbol{\varphi} = \big(x/s,\ y/s,\ \cos\theta,\ \sin\theta,\ 1/s\big)^\top$  (18)
we can write the error terms (17) as:

$\mathbf{e}_k = \mathbf{Q}_k\,\boldsymbol{\varphi}$  (19)

where the matrix $\mathbf{Q}_k \in \mathbb{R}^{2 \times 5}$ contains the known coefficients:

$\mathbf{Q}_k = \begin{pmatrix} \mathbf{I} - \mathbf{R}(\theta^i_k) & \begin{matrix} t^j_{x,k} & -t^j_{y,k} \\ t^j_{y,k} & \phantom{-}t^j_{x,k} \end{matrix} & -\mathbf{t}^i_k \end{pmatrix}$  (20)
The cost function (10) can then be written compactly as:

$E(\boldsymbol{\varphi}) = \boldsymbol{\varphi}^\top \mathbf{M}\,\boldsymbol{\varphi} + c$  (21)

where $c$ is a constant term (collecting the rotation errors, which do not depend on the unknowns) and $\mathbf{M}$ the symmetric matrix:

$\mathbf{M} = \sum_k \mathbf{Q}_k^\top \mathbf{Q}_k$  (22)
Therefore, we can write the least-squares problem (9) equivalently as a quadratic system with a quadratic constraint:

$\hat{\boldsymbol{\varphi}} = \arg\min_{\boldsymbol{\varphi}}\ \boldsymbol{\varphi}^\top \mathbf{M}\,\boldsymbol{\varphi} \quad \text{s.t.} \quad \boldsymbol{\varphi}^\top \mathbf{W}\,\boldsymbol{\varphi} = 1$  (23)

The constraint in (23) corresponds to $\cos^2\theta + \sin^2\theta = 1$ and can be written in matrix form as:

$\mathbf{W} = \text{diag}(0,\ 0,\ 1,\ 1,\ 0)$  (24)
Thus, the constrained optimization problem (23) can be solved considering the Lagrangian:

$L(\boldsymbol{\varphi}, \lambda) = \boldsymbol{\varphi}^\top \mathbf{M}\,\boldsymbol{\varphi} + \lambda\,\big(\boldsymbol{\varphi}^\top \mathbf{W}\,\boldsymbol{\varphi} - 1\big)$  (25)

and its necessary conditions for optimality:

$(\mathbf{M} + \lambda\,\mathbf{W})\,\boldsymbol{\varphi} = \mathbf{0}, \qquad \boldsymbol{\varphi}^\top \mathbf{W}\,\boldsymbol{\varphi} = 1$  (26)
Since the scale factor $s$ has to be a positive real number, $\boldsymbol{\varphi}$ cannot be the zero vector, and thus the matrix $\mathbf{M} + \lambda\mathbf{W}$ must be singular to satisfy (26). Thus, we solve for the values of $\lambda$ that make:

$\det(\mathbf{M} + \lambda\,\mathbf{W}) = 0$  (27)

and then we find the solution in the kernel of $\mathbf{M} + \lambda\,\mathbf{W}$.
For the matrix $\mathbf{W}$ in (24), the expression (27) is a second-order polynomial in $\lambda$ and, therefore, can be solved in closed form. The two candidate values $\lambda_{1,2}$ are computed as the real roots of this second-order polynomial. The rank of the matrix $\mathbf{M} + \lambda\mathbf{W}$ is then at most 4 by construction. Provided that at least two linearly independent incremental motions are observed and that the two sensors follow non-degenerate trajectories, the matrix has rank exactly 4 and hence a one-dimensional kernel. For a detailed observability analysis we refer the reader to [12, 13].
To obtain the solution associated to each $\lambda$, consider any nonzero vector $\mathbf{v}$ in the kernel of $\mathbf{M} + \lambda\mathbf{W}$. Then, we impose the constraint in (23) and the fact that $s$ must be positive to uniquely identify $\hat{\boldsymbol{\varphi}}$ from the nullspace:

$\hat{\boldsymbol{\varphi}} = \pm\,\mathbf{v}\,\big/\sqrt{\mathbf{v}^\top \mathbf{W}\,\mathbf{v}}$  (28)

with the sign chosen such that the scale-related component is positive.
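The pattern used here — a quadratic cost with a quadratic constraint, solved by finding the Lagrange multipliers that make the system matrix singular and then rescaling a kernel vector — can be sketched generically. The function name and the polynomial-sampling shortcut for the determinant are ours, and a small 2x2 example stands in for the calibration matrices:

```python
import numpy as np

def solve_quadratic_constrained(M, W):
    """Minimize phi^T M phi subject to phi^T W phi = 1 (M, W symmetric).
    Candidate Lagrange multipliers are the values of lam that make
    M + lam*W singular; for each, the solution lies in its kernel."""
    n = M.shape[0]
    # det(M + lam*W) is a polynomial in lam of degree <= n: sample it at
    # n+1 points and recover the coefficients exactly by fitting.
    xs = np.arange(n + 1, dtype=float)
    ys = [np.linalg.det(M + x * W) for x in xs]
    coeffs = np.polyfit(xs, ys, n)
    lams = [r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-8]
    best, best_cost = None, np.inf
    for lam in lams:
        # Kernel vector of M + lam*W: the last right-singular vector.
        _, _, Vt = np.linalg.svd(M + lam * W)
        v = Vt[-1]
        c = v @ W @ v
        if c <= 0:        # cannot be rescaled to satisfy the constraint
            continue
        phi = v / np.sqrt(c)
        cost = phi @ M @ phi
        if cost < best_cost:
            best, best_cost = phi, cost
    return best
```

In the calibration problem, the sign of the returned vector would additionally be fixed by requiring a positive scale factor.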
IV-B Closed-Form Solution to the Coplanarity Relaxation Problem
In order to solve (14), we again reduce the least-squares formulation to a quadratic system with a quadratic constraint. Then, the constrained optimization problem is uniquely solved in closed form using the method of Lagrange multipliers.
To reduce the relaxation problem to a quadratic system, we start by simplifying the error function in (15). Recalling from the problem formulation, the Hessian normal parameters of the ground plane are:

$\mathbf{n} = (0,\ 0,\ 1)^\top, \qquad d = 0$  (30)
and thus, the perpendicular distance of the $k$-th point to the ground plane reduces to its third component:

$e_k = \mathbf{n}^\top\big(\mathbf{R}\,\mathbf{p}_k + (0,\ 0,\ z)^\top\big) - d$  (31)

$\phantom{e_k} = \mathbf{r}_3\,\mathbf{p}_k + z$  (32)

where $\mathbf{r}_3$ represents the third row of the rotation matrix $\mathbf{R}$ as defined in (12), for the $j$-th sensor.
Therefore, we parameterize the $\phi$ and $\psi$ angles by three independent variables: the entries $r_{31}$, $r_{32}$ and $r_{33}$ of $\mathbf{r}_3$. Grouping the unknown parameters into the vector

$\boldsymbol{\varphi} = (r_{31},\ r_{32},\ r_{33},\ z)^\top$  (33)

we can write the error terms in matrix form:

$e_k = (\mathbf{p}_k^\top,\ 1)\,\boldsymbol{\varphi}$  (34)
and, then, the cost function in (15) yields:

$E(\boldsymbol{\varphi}) = \boldsymbol{\varphi}^\top \mathbf{M}\,\boldsymbol{\varphi}$  (35)

with the symmetric matrix $\mathbf{M} = \sum_k w_k\,(\mathbf{p}_k^\top,\ 1)^\top (\mathbf{p}_k^\top,\ 1)$.
Therefore, we can write the least-squares problem (14) as the following quadratic system with a quadratic constraint:

$\hat{\boldsymbol{\varphi}} = \arg\min_{\boldsymbol{\varphi}}\ \boldsymbol{\varphi}^\top \mathbf{M}\,\boldsymbol{\varphi} \quad \text{s.t.} \quad \boldsymbol{\varphi}^\top \mathbf{W}\,\boldsymbol{\varphi} = 1$  (36)

The constraint in (36) corresponds to the orthogonality property of SO(3) matrices (the third row has unit norm, $\|\mathbf{r}_3\| = 1$) and can be written as:

$\mathbf{W} = \text{diag}(1,\ 1,\ 1,\ 0)$  (37)
For this particular matrix $\mathbf{W}$, the necessary conditions for optimality are characterized by a third-order polynomial in $\lambda$ and, therefore, can be solved in closed form, with a maximum of three different candidate values $\lambda_i$. As before, each solution associated to a $\lambda_i$ can be uniquely recovered from any nonzero vector in the kernel of $\mathbf{M} + \lambda_i\mathbf{W}$ by imposing the constraint in (36) and the fact that the perpendicular distance $z$ has to be positive.
IV-C Practical Considerations
First, the motion-based calibration procedure described in Section IV-A requires synchronous incremental motions. However, heterogeneous sensors are in general asynchronous and have different sampling rates. We overcome this problem by selecting one sensor to provide the time reference and then, for the other sensors, resampling synchronous incremental motions computed from linearly interpolated poses of the planar trajectories.
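The resampling step can be sketched as follows: linear interpolation of (x, y, theta) at the reference timestamps, with unwrapped angles to handle wrap-around, followed by relative motions between consecutive interpolated poses (the helper name and data layout are our own assumptions):

```python
import numpy as np

def resample_increments(t_query, t, xs, ys, thetas):
    """Linearly interpolate a planar trajectory (x, y, theta) at the
    reference timestamps t_query, then form the relative SE(2) motions
    between consecutive interpolated poses."""
    xq = np.interp(t_query, t, xs)
    yq = np.interp(t_query, t, ys)
    thq = np.interp(t_query, t, np.unwrap(thetas))  # avoid angle wrap-around
    motions = []
    for k in range(1, len(t_query)):
        dx, dy = xq[k] - xq[k - 1], yq[k] - yq[k - 1]
        c, s = np.cos(thq[k - 1]), np.sin(thq[k - 1])
        # Express the displacement in the previous pose's frame
        motions.append((c * dx + s * dy, -s * dx + c * dy, thq[k] - thq[k - 1]))
    return motions
```

Linear interpolation is adequate when the reference rate is low relative to the interpolated trajectory, which is why the lower-rate sensor is a natural choice of time reference.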
Secondly, motion-based calibration methods rely on trajectories estimated by other means. However, motion estimation algorithms are subject to large errors arising from local tracking failures or drift. In order to limit the impact that erroneous observations have on the calibration, a Random Sample Consensus (RANSAC) [20] framework can be used with our closed-form solution. A RANSAC scheme requires a predefined threshold on the error terms in order to discard outliers. However, the error function as defined in (11) mixes the rotation error and the translation error, which have different magnitudes. What is more, the translation error is defined in the space of the $j$-th sensor, and has an arbitrary scale in the case of a monocular camera. Therefore, we use a slightly different error function for the RANSAC outlier detection step:

$e'_k(\mathbf{x}) = \big\|\, s\,\mathbf{R}(\theta)\,\mathbf{t}^j_k + \mathbf{t} - \mathbf{R}(\theta^i_k)\,\mathbf{t} - \mathbf{t}^i_k \,\big\|$  (38)

which is the translation error expressed in the metric space of the $i$-th sensor. The error terms defined in this way depend on the translation as well as on the rotation parameters, and allow us to set a predefined threshold intuitively, with a simple interpretation. Note that the error function defined in (11) is still used for the closed-form solution; (38) is used only for the inlier-outlier classification in a RANSAC framework.
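A generic RANSAC loop of the kind described here might look as follows; a sketch with caller-supplied fit and error callbacks, not the paper's implementation:

```python
import random

def ransac(data, fit, error, min_samples, threshold, iters=100, seed=0):
    """Generic RANSAC: fit a model on a minimal random sample, count the
    points whose error falls below the threshold, keep the model with the
    largest support, and refit it on all of its inliers."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        model = fit(rng.sample(data, min_samples))
        if model is None:
            continue
        inliers = [d for d in data if error(model, d) < threshold]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    if not best_inliers:
        return None, []
    return fit(best_inliers), best_inliers
```

The threshold has the same units as the error function, which is exactly why an error expressed in the metric space of the reference sensor (e.g. meters) makes the threshold easy to choose.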
Another important consideration for the practical calibration of multiple sensors is the consistency of the transformations between all sensors. The motion-based calibration presented in Section IV-A only considers constraints between each sensor and the reference one. We can improve the consistency of the calibration by also considering constraints between the other sensors in a joint optimization framework.
Let the reference sensor have index 0. Assuming we consider $N$ additional sensors for calibration, we want to estimate the calibration parameters $\mathbf{x}_{01}, \ldots, \mathbf{x}_{0N}$. We define the joint calibration problem as:

$\arg\min_{\mathbf{x}_{01}, \ldots, \mathbf{x}_{0N}}\ \sum_{n=1}^{N} \sum_k e'_k(\mathbf{x}_{0n})^2\ +\ \sum_{(a,\,b)\,\in\,\mathcal{P}}\ \sum_k e'_k(\mathbf{x}_{ab})^2$  (39)

where $\mathcal{P}$ represents the set containing all sensor pairs for which additional constraints are considered. The modified error function in (38) is used again to express the error terms of all sensors in the same metric space. Note that only pairs of sensors whose first sensor provides metrically accurate motions may be considered for additional constraints. The expression $\mathbf{x}_{ab} = \ominus\mathbf{x}_{0a} \oplus \mathbf{x}_{0b}$ is just the relative transformation expressed in terms of the calibration parameters. In our implementation, we used the Cauchy loss function to cope with unmodeled errors not detected during the RANSAC step. The joint calibration problem in (39) can be solved iteratively, starting from the closed-form solution described in Section IV-A. For more information on how to solve this optimization problem, we refer the interested reader to [21].

Finally, in the monocular camera case, the metric value of $z$ can be recovered by applying the scale factor estimated in the motion-based calibration to the value observed during the coplanarity relaxation. However, the same reconstruction has to be used for both problems in order to guarantee that they share the same scale.
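To illustrate how a Cauchy loss tempers unmodeled errors, here is an iteratively reweighted least-squares sketch on a toy 1D location problem (not the paper's joint Sim(2) optimization; the constant c and iteration count are arbitrary):

```python
import numpy as np

def cauchy_robust_mean(x, c=1.0, iters=30):
    """Estimate a location parameter under the Cauchy loss
    rho(r) = log(1 + (r/c)^2) via iteratively reweighted least squares:
    each residual gets weight w = 1 / (1 + (r/c)^2), so large (outlier)
    residuals are progressively downweighted."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)            # robust starting point
    for _ in range(iters):
        r = x - mu
        w = 1.0 / (1.0 + (r / c) ** 2)
        mu = np.sum(w * x) / np.sum(w)
    return mu
```

An outlier that slips past the RANSAC step still contributes to the cost, but its influence is bounded instead of growing quadratically as in a plain least-squares fit.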
V Experimental Evaluation
We have conducted several experiments in order to demonstrate the suitability of our calibration method in practice. First, we validated our method with simulated data, where the ground truth parameters are known (Section V-A). Next, we assessed the calibration accuracy of our method in both outdoor (Section V-B) and indoor (Section V-C) environments with real data, and compared it with the state-of-the-art motion-based calibration approach proposed by Della Corte et al. [8].
V-A Validation with synthetic data
The goal of the simulation experiment is to characterize the accuracy of our method. Here, the calibration parameters are known and the sensors are perfectly synchronized. We considered a two-sensor setting for the validation: an odometer and a monocular camera. While the odometer provides metrically accurate motions, the camera yields motions at a different scale (imitating the scale ambiguity problem).
In the simulation, we commanded the robot to follow an eight-shaped path several times. The incremental motions are affected by unbiased, normally distributed noise in each axis independently. We considered different levels of noise by scaling the translation and rotation standard deviations with a common noise level factor. The odometer is affected by noise only in the x-y axes for translation and about the z axis for rotation, since it provides 2D motion estimates. On the other hand, the camera motion has noise in all three axes (for both translation and rotation). The simulated camera has a QVGA resolution, square pixels, and a fixed diagonal field of view. For the coplanarity relaxation, the camera estimates a dense, scaled reconstruction of the ground plane in the same scale as the incremental motions. We added unbiased, normally distributed noise, scaled by the noise level, to the depth measurements.

We considered 10 independent runs for different levels of noise. The relative transformation between the sensors is fixed and shared by all runs. The estimated parameters, as well as the ground truth, are listed in Table I. We also report the average and the absolute and relative Root Mean Squared Error (RMSE). We included the noise-free case to show the correctness of our formulation and compared it with the method in [8]. The approach in [8] incorrectly detects a time delay, which biases its solution. Additionally, it cannot handle higher levels of noise. On the other hand, our method retrieves the correct solution in the noise-free case and is more robust under noisy observations. From the table we can also conclude that the calibration parameters estimated from the coplanarity relaxation are more accurate than the motion-based ones, mainly due to a higher number of observations. Finally, the calibration errors are concentrated in the 2D translation parameters, since they depend on the estimation of all the other parameters.
[Table I: Calibration results on synthetic data. For each noise level, the table lists the ground truth parameters, the average estimates, and the absolute and relative RMSE, for our method and for Della Corte et al. [8].]
V-B Outdoor evaluation
For the outdoor experimental evaluation, we chose the publicly available dataset of Blanco et al. [10], as it provides highly accurate trajectories for different rigidly mounted sensors (including cameras) and ground truth extrinsic calibrations. The dataset was recorded with an electric buggy driven on both planar and non-planar surfaces. The vehicle's motion was estimated from a total of three Real Time Kinematics (RTK) GPS receivers, and the ground truth extrinsic calibrations were initialized from manual measurements and then refined via nonlinear optimization techniques.
We selected the vehicle's trajectory as the reference motion and used the trajectory of the left camera, as provided by the dataset, for extrinsic calibration. Note that even though the sensor being calibrated is a camera, we are using the trajectory provided by the dataset, which is metrically accurate. The vehicle's 6DoF poses were recorded at a lower rate than the camera images.
As our method requires synchronous incremental motions, we synchronized the trajectory of the camera with the vehicle's trajectory by linear interpolation. We chose the vehicle's trajectory as the synchronization time reference, as it has the lower rate of the two, and thus interpolated camera poses can be computed more accurately. Both the proposed method and the motion-based approach in [8] are given the same input trajectories for the sake of fairness.
The calibration results of both methods for the three planar sequences are presented in Table II. For our method, the 3DoF calibration parameters of the relative transformation to the ground plane are estimated only once for all sequences (that is why no standard deviation values for these estimations are shown in the table). The 3D ground points are extracted from images by first running the Structure-from-Motion (SfM) pipeline [22] on 50 of them with enough texture on the ground, and then detecting planar points in the reconstructed 3D scene through homographies. We used the metrically accurate trajectory of the camera to compute the scale factor in order to retrieve the metric value of z. That parameter is missing for [8], since it is unobservable solely from planar motions [2]. In general, both methods provide consistent parameter estimates. However, the method in [8] required a close initial guess of the translation component in order to provide reasonable results.
[Table II: Outdoor calibration results for the camera on the parkin_0L, parkin_2L and parkin_6L sequences: ground truth, per-sequence estimates, averages and standard deviations, for our method and for Della Corte et al. [8].]
V-C Indoor evaluation
For the indoor experimental evaluation, we used the Giraff mobile robot [23, 24], equipped with an odometer (by integrating wheel encoders of a differential drive configuration), a Hokuyo UTM-30LX 2D laser scanner and a uEye UI-1240SE-M-GL monocular camera. The laser scanner is mounted parallel to the ground, while the camera points downwards at a fixed incidence angle, as depicted in Fig. 3 (left). The laser scanner and the camera are rigidly connected to the robot and, therefore, the relative transformation between them does not change during the experiments. Both the odometer and the camera were intrinsically calibrated before attempting extrinsic calibration.
We considered five independent calibrations and then analyzed the mean and deviation of the estimated parameters. To this end, we recorded three data sequences with software acquisition timestamps while driving the robot through our lab, following an eight-shaped path several times. The incremental motions of the laser scanner are estimated using the method in [25], which provides 2D pose estimates at a high rate. For the camera, we ran ORB-SLAM [26].

This time we synchronized the incremental motions of the laser with respect to the keyframes selected by the SLAM solution. We chose the camera as the synchronization time reference in order to interpolate between odometry and laser poses, which are available at higher and more consistent frame rates. Each data sequence provides a set of synchronized incremental motions.
The experimental results in indoor environments for the considered methods are presented in Table III, together with the expected parameters for reference (obtained from manual measurements). The robotic platform uses the laser for localization and thus already contains rough calibration parameters for it. The motion estimation algorithm for the laser uses them to estimate the pose of the robot instead of that of the laser. Thus, even though there is a physical translation along the x axis, the expected calibration parameter is zero. The z, pitch and roll parameters of the laser are omitted in Table III, since they are unobservable with the current setup [14]. Therefore, in the case of the laser scanner, we skipped the coplanarity relaxation for our method, and for [8] we locked the affected parameters to zero. As before, for our method we estimated the 3DoF parameters of the camera relative to the ground plane only once for all sequences. The method in [8] requires metrically accurate trajectories; therefore, we applied the scale factor estimated by our method to the 3D trajectories estimated with ORB-SLAM. The results are similar to the ones from the simulation experiment (Section V-A). Both methods agree on several parameters, while there are slight discrepancies in the others. Looking at the deviations, both methods are consistent in their estimates. We argue that these differences are due to the time delay estimation in [8]. Overall, our method provides calibration parameters closer to the measured ones.
[Table III: Indoor calibration results for the laser scanner and the monocular camera: expected (manually measured) parameters, per-run estimates, averages and standard deviations, for our method and for Della Corte et al. [8].]
VI Conclusions
In this paper, we presented a new method for the extrinsic calibration of multiple sensors, suitable for automatic execution on mobile robots. In particular, we first formulated a least-squares problem to estimate the 2D calibration parameters of two coplanar sensors from incremental motions. Next, we relaxed the coplanarity requirement by first estimating the 3DoF transformation relative to the ground plane, and then projecting motions into the common plane. Then, we extended the two-sensor case by considering all sensor pairs in a joint least-squares framework. Our formulation allows accurately estimating the 6DoF calibration parameters of multiple heterogeneous sensors under the assumptions that: i) the ground plane can be observed, and ii) accurate per-sensor motions are available. A scale factor is also considered as an estimation parameter during the motion-based calibration and, therefore, our method can also handle monocular cameras. Finally, the proposed approach has been validated with simulated data and assessed in both indoor and outdoor scenarios, supporting its practical application and showing improved performance over a state-of-the-art motion-based approach. Currently, we are investigating closed-form solutions to estimate the sensors' time delays.
References
 [1] D. Nistér, O. Naroditsky, and J. Bergen, “Visual odometry for ground vehicle applications,” J. Field Robot., vol. 23, no. 1, 2006.
 [2] C. X. Guo, F. M. Mirzaei, and S. I. Roumeliotis, “An analytical least-squares solution to the odometer-camera extrinsic calibration problem,” in IEEE Int. Conf. Robot. Autom. (ICRA), 2012.
 [3] J. Zienkiewicz and A. Davison, “Extrinsics autocalibration for dense planar visual odometry,” J. Field Robot., vol. 32, no. 5, 2015.
 [4] G. Antonelli, F. Caccavale, F. Grossi, and A. Marino, “Simultaneous calibration of odometry and camera for a differential drive mobile robot,” in IEEE Int. Conf. Robot. Autom. (ICRA), 2010.
 [5] R. Gomez-Ojeda, J. Briales, E. Fernandez-Moral, and J. Gonzalez-Jimenez, “Extrinsic calibration of a 2D laser-rangefinder and a camera based on scene corners,” in IEEE Int. Conf. Robot. Autom. (ICRA), 2015.
 [6] A. N. Staranowicz, G. R. Brown, F. Morbidi, and G. L. Mariottini, “Practical and accurate calibration of RGB-D cameras using spheres,” Comput. Vis. Image Underst., vol. 137, 2015.
 [7] E. Fernández-Moral, J. González-Jiménez, P. Rives, and V. Arévalo, “Extrinsic calibration of a set of range cameras in 5 seconds without pattern,” in IEEE Int. Conf. Robot. Autom. (ICRA), 2014.
 [8] B. Della Corte, H. Andreasson, T. Stoyanov, and G. Grisetti, “Unified motion-based calibration of mobile multi-sensor platforms with time delay estimation,” IEEE Robot. Autom. Lett., vol. 4, no. 2, 2019.
 [9] J. R. Ruiz-Sarmiento, C. Galindo, and J. Gonzalez-Jimenez, “Building multiversal semantic maps for mobile robot operation,” Knowl.-Based Syst., vol. 119, 2017.
 [10] J. L. Blanco, F. A. Moreno, and J. Gonzalez, “A collection of outdoor robotic datasets with centimeter-accuracy ground truth,” Auton. Robots, vol. 27, no. 4, 2009.
 [11] J. González-Jiménez, C. Galindo, and J. R. Ruiz-Sarmiento, “Technical improvements of the Giraff telepresence robot based on users’ evaluation,” in IEEE Int. Symp. Robot Human Interactive Commun. (RO-MAN), 2012.
 [12] J. Brookshire and S. Teller, “Automatic calibration of multiple coplanar sensors,” in Robotics: Science and Systems VII (RSS), 2012.
 [13] A. Censi, A. Franchi, L. Marchionni, and G. Oriolo, “Simultaneous calibration of odometry and sensor parameters for mobile robots,” IEEE Trans. Robot., vol. 29, no. 2, 2013.
 [14] J. Brookshire and S. Teller, “Extrinsic calibration from per-sensor ego-motion,” in Robotics: Science and Systems VIII (RSS), 2013.
 [15] S. Schneider, T. Luettel, and H.-J. Wuensche, “Odometry-based online extrinsic sensor calibration,” in IEEE Int. Conf. Intell. Robot. Syst. (IROS), 2013.
 [16] L. Heng, B. Li, and M. Pollefeys, “CamOdoCal: Automatic intrinsic and extrinsic calibration of a rig with multiple generic cameras and odometry,” in IEEE Int. Conf. Intell. Robot. Syst. (IROS), 2013.
 [17] K. Huang and C. Stachniss, “Extrinsic multi-sensor calibration for mobile robots using the Gauss-Helmert model,” in IEEE Int. Conf. Intell. Robot. Syst. (IROS), 2017.
 [18] Z. Taylor and J. Nieto, “Motion-based calibration of multimodal sensor arrays,” in IEEE Int. Conf. Robot. Autom. (ICRA), 2015.
 [19] S. Leonardos, C. Allen-Blanchette, and J. Gallier, “The exponential map for the group of similarity transformations and applications to motion interpolation,” in IEEE Int. Conf. Robot. Autom. (ICRA), 2015.
 [20] R. Raguram, J.-M. Frahm, and M. Pollefeys, “A comparative analysis of RANSAC techniques leading to adaptive real-time random sample consensus,” in Europ. Conf. Comput. Vis. (ECCV), 2008.
 [21] R. Kümmerle, G. Grisetti, H. Strasdat, K. Konolige, and W. Burgard, “g2o: A general framework for graph optimization,” in IEEE Int. Conf. Robot. Autom. (ICRA), 2011.
 [22] J. L. Schönberger and J.-M. Frahm, “Structure-from-Motion revisited,” in IEEE Conf. Comput. Vis. Pattern Recog. (CVPR), 2016.
 [23] J. R. Ruiz-Sarmiento, C. Galindo, and J. Gonzalez-Jimenez, “Robot@home, a robotic dataset for semantic mapping of home environments,” Int. J. Robot. Research, vol. 36, no. 2, 2017.
 [24] A. Orlandini, A. Kristoffersson, L. Almquist, P. Björkman, A. Cesta, G. Cortellessa, C. Galindo, J. Gonzalez-Jimenez, K. Gustafsson, A. Kiselev et al., “ExCITE project: A review of forty-two months of robotic telepresence technology evolution,” Presence: Teleoperators Virtual Environ., vol. 25, no. 3, 2016.
 [25] M. Jaimez, J. Monroy, M. Lopez-Antequera, and J. Gonzalez-Jimenez, “Robust planar odometry based on symmetric range flow and multi-scan alignment,” IEEE Trans. Robot., vol. 34, no. 6, 2018.
 [26] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardós, “ORB-SLAM: A versatile and accurate monocular SLAM system,” IEEE Trans. Robot., vol. 31, no. 5, 2015.