Calibration of Multiple Fish-Eye Cameras Using a Wand

07/04/2014 · Qiang Fu, et al. · Beihang University

Fish-eye cameras are becoming increasingly popular in computer vision, but their use for 3D measurement is limited partly due to the lack of an accurate, efficient and user-friendly calibration procedure. For such a purpose, we propose a method to calibrate the intrinsic and extrinsic parameters (including radial distortion parameters) of two/multiple fish-eye cameras simultaneously by using a wand under general motions. Thanks to the generic camera model used, the proposed calibration method is also suitable for two/multiple conventional cameras and mixed cameras (e.g. two conventional cameras and a fish-eye camera). Simulation and real experiments demonstrate the effectiveness of the proposed method. Moreover, we have developed a camera calibration toolbox, which is available online.


I Introduction

Camera calibration is very important in computer vision, and much research has been devoted to it. Most existing studies are based on conventional cameras, which obey the pinhole projection model and provide a limited overlap region of the field of view (FOV). The overlap region can be expanded greatly by using fish-eye cameras [1], because fish-eye cameras can provide images with a very large FOV (about 180°) without requiring external mirrors or rotating devices [2]. Fish-eye cameras have been used in many applications, such as robot navigation [3], 3D measurement [4] and city modeling [5]. The drawbacks of fish-eye cameras are low resolution and significant distortion, and their use for 3D measurement is limited partly due to the lack of an accurate, efficient and user-friendly calibration procedure.

So far, many methods of calibrating conventional cameras [6],[7] have been proposed, but they are not directly applicable to fish-eye camera calibration, because the pinhole camera model no longer holds for cameras with a very large FOV. Existing methods of calibrating fish-eye cameras fall roughly into three categories: i) methods based on 3D calibration patterns [8],[9]; ii) methods based on 2D calibration patterns [10],[11],[12]; iii) self-calibration methods [13],[14]. The most widely used methods are based on 2D calibration patterns, which are usually applicable to a single camera. In order to calibrate the geometric relation between multiple cameras, all cameras must observe a sufficient number of points simultaneously [6]. This is difficult to achieve with 3D/2D calibration patterns if two of the cameras face each other. On the other hand, many wand-based calibration methods [6],[15],[16] have been proposed for motion capture systems consisting of multiple cameras, such as the well-known Vicon system [17]. However, most of them are dedicated to conventional cameras. As far as we know, calibration methods for fish-eye cameras with a 1D wand have not been discussed in the literature.

For such a purpose, we propose a new method to calibrate the intrinsic and extrinsic parameters (including radial distortion parameters) of two/multiple fish-eye cameras simultaneously with a freely-moving wand. Thanks to the generic camera model used, the proposed calibration method is also suitable for two/multiple conventional cameras and mixed cameras (e.g. two conventional cameras and a fish-eye camera). The calibration procedure for two cameras is summarized as follows. First, the intrinsic and extrinsic parameters are initialized and optimized by using prior information such as the real wand lengths and the nominal focal length provided by the camera manufacturer. Then, bundle adjustment [18] is adopted to refine all unknowns, which consist of the intrinsic parameters (including radial distortion parameters), the extrinsic parameters and the coordinates of the 3D points. With the help of the vision graphs in [19], the proposed method is further extended to the case of multiple cameras, which does not require all the cameras to have a common FOV. The calibration procedure for multiple cameras is summarized as follows. First, the intrinsic and extrinsic parameters of each camera are initialized from pairwise calibration results. Then, bundle adjustment is used to refine all unknowns, which consist of the intrinsic and extrinsic parameters (including radial distortion parameters) of each camera and the coordinates of the 3D points.

This paper is organized as follows. Some preliminaries are introduced in Section II. In Section III, the calibration algorithm for two cameras and multiple cameras is presented. Then the experimental results are reported in Section IV, followed by the conclusions in Section V.

II Preliminaries

II-A Generic camera model

The perspective projection is described by the following equation [10]:

$r = f \tan\theta$   (1)

where $\theta$ is the angle between the optical axis and the incoming ray, the focal length $f$ is fixed for a given camera, and $r$ is the distance between the image point and the principal point. By contrast, fish-eye lenses are usually designed to obey one of the following projections:

$r = 2f \tan(\theta/2)$ (stereographic projection)   (2)
$r = f \theta$ (equidistance projection)   (3)
$r = 2f \sin(\theta/2)$ (equisolid angle projection)   (4)
$r = f \sin\theta$ (orthogonal projection)   (5)

In practice, real lenses do not satisfy the designed projection model exactly. A generic camera model for fish-eye lenses is therefore proposed in [10]:

$r(\theta) = k_1\theta + k_2\theta^3 + k_3\theta^5 + k_4\theta^7 + k_5\theta^9 + \cdots$   (6)

It is found that the first five terms can approximate the different projection curves well. Therefore, in this paper we choose the model that contains only the five parameters $k_1, \dots, k_5$.

As shown in Fig. 1, a 3D point is imaged at $p$ by a fish-eye camera, while it would be imaged at $p'$ by a pinhole camera. Let $O_c\text{-}x_cy_cz_c$ denote the camera coordinate system and $o\text{-}xy$ the image coordinate system (unit: mm). We can obtain the image coordinates $(x, y)$ of $p$ by

$x = r(\theta)\cos\varphi, \qquad y = r(\theta)\sin\varphi$   (7)

where $r(\theta)$ is defined in (6), and $\varphi$ is the angle between the radial direction and the $x$-axis. Then we can get the pixel coordinates $(u, v)$ from

$u = m_u x + u_0, \qquad v = m_v y + v_0$   (8)

where $(u_0, v_0)$ is the principal point, and $m_u$ and $m_v$ are the number of pixels per unit distance in the horizontal and vertical directions, respectively. Thus, for each fish-eye camera, the intrinsic parameters are $\{k_1, \dots, k_5, m_u, m_v, u_0, v_0\}$.

Note that in this paper we do not choose the equivalent sphere model in [20]. If that generic model were used, the following calibration process would not change except for some intrinsic parameters. Besides, the tangential distortion is not considered here for simplicity. As pointed out in [21], lens manufacturing technology is now at a sufficiently high level that the tangential distortion can be ignored. Otherwise, tangential distortion terms would need to be added to (6); with them, the following calibration process would not change except for some additional unknown parameters.
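As an illustration, the following Python sketch (assuming NumPy; it is not taken from the authors' toolbox, and the function name and argument layout are our own) maps a 3D point in the camera frame to pixel coordinates via the chain (6)-(8):

```python
import numpy as np

def project_fisheye(X, k, m_u, m_v, u0, v0):
    """Project a 3D point X (camera frame) to pixels via (6)-(8).
    k = (k1, ..., k5) are the radial polynomial coefficients, (m_u, m_v)
    the pixels per unit distance, (u0, v0) the principal point."""
    x, y, z = X
    theta = np.arctan2(np.hypot(x, y), z)        # angle from the optical axis
    phi = np.arctan2(y, x)                       # radial direction angle
    r = sum(ki * theta ** (2 * i + 1) for i, ki in enumerate(k))   # eq. (6)
    xi, yi = r * np.cos(phi), r * np.sin(phi)    # eq. (7), image plane (mm)
    return np.array([m_u * xi + u0, m_v * yi + v0])  # eq. (8), pixel coords

# Illustrative call (all numbers hypothetical):
# project_fisheye(np.array([0.2, -0.1, 1.0]), [1.8, 0, 0, 0, 0], 180, 180, 329, 246)
```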

Fig. 1: Fish-eye camera model [10]. The 3D point is imaged at $p$ by a fish-eye camera, while it would be imaged at $p'$ by a pinhole camera.

II-B Essential matrix

As shown in Fig. 2, the 1D wand has three collinear feature points ($A_j$, $B_j$ and $C_j$ denote their locations for the $j$th image pair), which satisfy

$\|A_j - B_j\| = d_{AB}, \quad \|B_j - C_j\| = d_{BC}, \quad \|A_j - C_j\| = d_{AC}$

where $\|\cdot\|$ denotes the Euclidean vector norm and the wand lengths $d_{AB}$, $d_{BC}$ and $d_{AC}$ are known. Let $O_l$ and $O_r$ denote the camera coordinate systems of the left and the right cameras, respectively. The 3D points are projected to $a_j$, $b_j$, $c_j$ on the unit hemisphere centered at $O_l$ and to $a'_j$, $b'_j$, $c'_j$ on the unit hemisphere centered at $O_r$. The extrinsic parameters are the rotation matrix $R$ and the translation vector $t$ from the left camera to the right camera.

Fig. 2: Illustration of the 1D calibration wand. The 3D points $A_j$, $B_j$ and $C_j$ denote the feature point locations for the $j$th image pair.

Suppose that a 3D point is projected to $x_l$ on the unit hemisphere centered at $O_l$ and to $x_r$ on the unit hemisphere centered at $O_r$, respectively. Since $x_r$, $t$ and $Rx_l$ are all coplanar, we have [22]

$x_r^\top \left( t \times R x_l \right) = 0$   (9)

where the cross product with $t$ can be written as the skew-symmetric matrix

$[t]_\times = \begin{bmatrix} 0 & -t_3 & t_2 \\ t_3 & 0 & -t_1 \\ -t_2 & t_1 & 0 \end{bmatrix}$   (10)

Furthermore, (9) is rewritten in the form

$x_r^\top E \, x_l = 0$   (11)

where $E = [t]_\times R$ is known as the essential matrix.
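A minimal sketch of (9)-(11), assuming NumPy; the helper names are ours:

```python
import numpy as np

def skew(t):
    """[t]_x from (10): the matrix such that [t]_x v = t x v."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_residual(x_l, x_r, R, t):
    """Coplanarity constraint (11): x_r^T E x_l with E = [t]_x R.
    Zero (up to noise) for rays x_l, x_r of a true correspondence."""
    return float(x_r @ (skew(t) @ R) @ x_l)
```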

II-C Reconstruction algorithm

In this section, a linear reconstruction algorithm for spherical cameras is presented, which is the direct analogue of the linear triangulation method for perspective cameras [18]. Suppose that the homogeneous coordinates of a 3D point are $X$, expressed in $O_l$. The 3D point is projected to $x_l$ and $x_r$ on the unit hemispheres centered at $O_l$ and $O_r$, respectively. Then we have

$\lambda_l x_l = P_l X, \qquad \lambda_r x_r = P_r X$   (12)

where $\lambda_l$, $\lambda_r$ are scale factors and $P_l = [\,I \mid 0\,]$, $P_r = [\,R \mid t\,]$. For each image point on the unit hemisphere, the scale factor can be eliminated by a cross product to give three equations, two of which are linearly independent. So the four independent equations are written in the form

$A X = 0$   (13)

with

$A = \begin{bmatrix} x_{l1}\, p^{3\top} - x_{l3}\, p^{1\top} \\ x_{l2}\, p^{3\top} - x_{l3}\, p^{2\top} \\ x_{r1}\, q^{3\top} - x_{r3}\, q^{1\top} \\ x_{r2}\, q^{3\top} - x_{r3}\, q^{2\top} \end{bmatrix}$   (14)

where $p^{i\top}$ and $q^{i\top}$ are the $i$th rows of $P_l$ and $P_r$, respectively. Based on (13), $X$ is the singular vector corresponding to the smallest singular value of $A$. So far, given $x_l$ and $x_r$, the homogeneous coordinates $X$ of the 3D point in $O_l$ are reconstructed. This is called the linear reconstruction algorithm.

Note that equation (13) provides only a linear solution, which is not very accurate in the presence of noise. It could be refined by minimizing reprojection errors or Sampson errors [18]. However, since the reconstruction algorithm is carried out at each optimization iteration, it is more efficient to use the linear reconstruction algorithm mentioned above. Furthermore, the linear reconstruction algorithm can be extended easily to the case of $n$-view ($n > 2$) triangulation for the calibration of multiple cameras (Section III-B) [18].
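The following sketch implements the linear reconstruction in its $n$-view form, which also serves the multi-camera triangulation of Section III-B (NumPy assumed; naming ours):

```python
import numpy as np

def triangulate_rays(xs, Ps):
    """Linear reconstruction (13)-(14) for rays on the unit sphere, with two
    rows per view. xs: list of 3-vector rays; Ps: list of 3x4 matrices such
    as [I|0] and [R|t]. Returns the inhomogeneous 3D point."""
    rows = []
    for x, P in zip(xs, Ps):
        rows.append(x[0] * P[2] - x[2] * P[0])   # two independent rows of
        rows.append(x[1] * P[2] - x[2] * P[1])   # the cross-product equations
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    X = Vt[-1]                # singular vector of the smallest singular value
    return X[:3] / X[3]       # de-homogenise
```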

III Calibration algorithm

III-A Calibration of two cameras

Based on the preliminaries in Section II, we next present a generic method to simultaneously calibrate the intrinsic and extrinsic parameters (including radial distortion parameters) of two cameras with a freely-moving 1D wand, which contains three points in known positions, as shown in Fig. 7 (a). This method is simple and user-friendly, and can be used to calibrate two fish-eye cameras. Let the intrinsic parameters of the $i$th camera be denoted as in Section II-A. Without loss of generality, we take a pair of cameras as an example in this subsection. The first three steps of the calibration procedure involve only twelve intrinsic parameters of the two cameras, leaving the remaining parameters to be dealt with in the final step.

Step 1: Initialization of intrinsic parameters. For the $i$th camera, the principal point $(u_0, v_0)$ is initialized by the coordinates of the image center, and the pixel sizes $m_u$ and $m_v$ are given by the camera manufacturer. Whether the camera is a conventional or a fish-eye camera, the initial values of $k_1, \dots, k_5$ are obtained by fitting the model (6) to the projections (1)-(5). Concretely, let the interval $[0, \theta_{\max}]$ be equally divided into many pieces $\theta_1, \dots, \theta_N$. Then we have

$r_n = f \tan\theta_n, \quad n = 1, \dots, N$ (taking the perspective projection (1) as an example; (2)-(5) are sampled likewise)   (15)

where $f$ is the nominal focal length of the camera and the maximum viewing angle $\theta_{\max}$ is provided by the camera manufacturer. Based on (15), the distortion coefficients are determined by the linear least-squares fit

$\min_{k_1, \dots, k_5} \sum_{n=1}^{N} \left( k_1\theta_n + k_2\theta_n^3 + k_3\theta_n^5 + k_4\theta_n^7 + k_5\theta_n^9 - r_n \right)^2$   (16)

So far, we get the initialization of the intrinsic parameters. Note that the method in [10] requires the projection type of the cameras to be specified in advance; otherwise it may give inaccurate calibration results. This is not a problem in this paper, because we obtain the best initialization automatically by fitting all candidate projections. Besides, initializing the principal point at the image center is reasonable, because the principal point of modern digital cameras lies close to the center of the image [18].
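A sketch of this initialization under the stated assumptions (NumPy; the exact sampling density and fit weighting in the toolbox may differ):

```python
import numpy as np

def init_distortion(f, theta_max, n=100):
    """Initialise k1..k5 by fitting the generic model (6) to each designed
    projection (1)-(5) over [0, theta_max], as in (15)-(16); the best fit
    is kept automatically. A sketch, not the paper's exact toolbox code."""
    theta = np.linspace(0.0, theta_max, n)
    candidates = {
        'stereographic': 2 * np.tan(theta / 2),   # (2)
        'equidistance':  theta,                   # (3)
        'equisolid':     2 * np.sin(theta / 2),   # (4)
        'orthogonal':    np.sin(theta),           # (5)
    }
    if theta_max < np.pi / 2:                     # (1) defined below 90 deg only
        candidates['perspective'] = np.tan(theta)
    # Model (6) is linear in k, so (16) is an ordinary least-squares problem.
    Phi = np.stack([theta ** (2 * i + 1) for i in range(5)], axis=1)
    fits = {name: np.linalg.lstsq(Phi, f * r, rcond=None)[0]
            for name, r in candidates.items()}
    errors = {name: np.linalg.norm(Phi @ fits[name] - f * candidates[name])
              for name in fits}
    best = min(errors, key=errors.get)            # best-fitting projection
    return fits[best], best
```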

Step 2: Initialization of extrinsic parameters. With the intrinsic parameters and the pixel coordinates of the image points for the $j$th image pair, we can compute $a_j$, $b_j$, $c_j$ and $a'_j$, $b'_j$, $c'_j$ by (6)-(8). Therefore, according to (11), the essential matrix $E$ is obtained by using the 5-point random sample consensus (RANSAC) algorithm [23] if five or more corresponding points are given.

Once the essential matrix is known, the initial values for the extrinsic parameters $R$ and $\hat{t}$ are obtained by the singular value decomposition of $E$ [18]. Note that $\|\hat{t}\| = 1$, so the obtained translation vector differs from the real translation vector by a scale factor. Let $\hat{A}_j$, $\hat{B}_j$, $\hat{C}_j$ denote the reconstructed points of $A_j$, $B_j$, $C_j$ for the $j$th image pair, which are given by the linear reconstruction algorithm based on (13) with the intrinsic and extrinsic parameters obtained above. In order to minimize errors, the scale factor is taken over all image pairs as

$s = \frac{1}{N} \sum_{j=1}^{N} \frac{d_{AC}}{\|\hat{A}_j - \hat{C}_j\|}$   (17)

where $N$ is the number of image pairs. Finally, the initial value for the translation vector is

$t = s\,\hat{t}$   (18)

Thus, we obtain the initialization of the extrinsic parameters $R$ and $t$.
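The scale recovery of (17)-(18) can be sketched as follows, assuming (17) averages the ratio of the true to the reconstructed A-C length over all image pairs:

```python
import numpy as np

def recover_scale(recon_endpoints, d_ac):
    """Scale factor (17): average, over the N image pairs, of the ratio of
    the true A-C wand length to its reconstructed counterpart; the exact
    averaging form is an assumption. `recon_endpoints` is a list of
    (A_hat, C_hat) pairs from the linear reconstruction (13)."""
    return float(np.mean([d_ac / np.linalg.norm(A - C)
                          for A, C in recon_endpoints]))

# Metric translation (18): t = s * t_hat, with ||t_hat|| = 1 from the SVD of E.
```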

Step 3: Nonlinear optimization of intrinsic and extrinsic parameters. Denote the reconstructed points of $A_j$, $B_j$, $C_j$ for the $j$th image pair by $\hat{A}_j$, $\hat{B}_j$, $\hat{C}_j$, respectively, which are given by the linear reconstruction algorithm based on (13) with the intrinsic and extrinsic parameters obtained above. Because of noise, there exist distance errors as follows:

$e_{1j} = \|\hat{A}_j - \hat{B}_j\| - d_{AB}$   (19)
$e_{2j} = \|\hat{B}_j - \hat{C}_j\| - d_{BC}$   (20)
$e_{3j} = \|\hat{A}_j - \hat{C}_j\| - d_{AC}$   (21)

where $j = 1, \dots, N$. In particular, the rotation vector $r$ and the rotation matrix $R$ are related by the Rodrigues formula, namely $R = e^{[r]_\times}$ [18, p. 585], so the rotation is parameterized by $r$ during the optimization. Therefore, according to equations (19)-(21), the objective function for optimization is

$\min \sum_{j=1}^{N} \left( e_{1j}^2 + e_{2j}^2 + e_{3j}^2 \right)$   (22)

which is solved by using the Levenberg-Marquardt method [18].
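A sketch of this objective, assuming SciPy's Levenberg-Marquardt solver; the parameter packing and the `triangulate` callback are placeholders for the linear reconstruction of Section II-C:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def rodrigues(rvec):
    """Rotation vector -> rotation matrix, R = exp([r]_x) (Rodrigues)."""
    return Rotation.from_rotvec(rvec).as_matrix()

def wand_residuals(params, pairs, d_ab, d_bc, d_ac, triangulate):
    """Distance residuals (19)-(21) over all image pairs, i.e. the terms of
    the objective (22). `triangulate(params, pair)` is assumed to rebuild
    A, B, C via the linear reconstruction (13) from the packed intrinsic
    and extrinsic parameters; the packing scheme here is illustrative."""
    res = []
    for pair in pairs:
        A, B, C = triangulate(params, pair)
        res += [np.linalg.norm(A - B) - d_ab,
                np.linalg.norm(B - C) - d_bc,
                np.linalg.norm(A - C) - d_ac]
    return np.asarray(res)

# Levenberg-Marquardt, as the paper uses:
# sol = least_squares(wand_residuals, x0, method='lm',
#                     args=(pairs, d_ab, d_bc, d_ac, triangulate))
```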

Step 4: Bundle adjustment. The solution above can be refined through bundle adjustment [18], which involves both the camera parameters and the 3D space points. For the $j$th image pair, we can compute $\hat{A}_j$, $\hat{B}_j$, $\hat{C}_j$ by the linear reconstruction algorithm based on equation (13) with the camera parameters obtained in Step 3. If the distance errors (19)-(21) of the $j$th image pair exceed a preset threshold, then that image pair is removed from the observations. After this, the number of image pairs reduces from $N$ to $N'$. Without loss of generality, suppose the removed image pairs are the $(N'+1)$th to the $N$th. Since the 3D space points $A_j$, $B_j$ and $C_j$ are collinear, they satisfy

$B_j = A_j + d_{AB}\, n_j, \qquad C_j = A_j + d_{AC}\, n_j, \qquad n_j = \left( \sin\theta_j \cos\phi_j,\ \sin\theta_j \sin\phi_j,\ \cos\theta_j \right)^\top$   (23)

where $(\theta_j, \phi_j)$ are spherical coordinates centered at $A_j$ and $n_j$ denotes the orientation of the 1D wand.
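The parameterization (23) keeps the wand rigid with only five unknowns per placement; a sketch in our notation:

```python
import numpy as np

def wand_points(A, theta, phi, d_ab, d_ac):
    """Collinearity relation (23): B and C lie on the ray from A with the
    direction given by the spherical angles (theta, phi), so each wand
    placement contributes only five unknowns (A, theta, phi) to the
    bundle adjustment."""
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])                # unit wand direction
    return A + d_ab * n, A + d_ac * n            # B_j, C_j
```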

The six additional camera parameters for the two cameras (the remaining radial distortion coefficients) are initialized to zero first, and together with the parameters optimized in Step 3 they constitute the parameter vector $\Theta$. Let the functions $\pi_i$ ($i = 0, 1$) denote the projection of a 3D point onto the $i$th camera image plane under the parameters $\Theta$. Bundle adjustment minimizes the following reprojection error:

$\min_{\Theta,\ \{A_j, \theta_j, \phi_j\}}\ \sum_{i=0}^{1} \sum_{j=1}^{N'} \left( \|a_{ij} - \pi_i(A_j)\|^2 + \|b_{ij} - \pi_i(B_j)\|^2 + \|c_{ij} - \pi_i(C_j)\|^2 \right)$   (24)

where $a_{ij}$, $b_{ij}$, $c_{ij}$ are the image points of the 3D points $A_j$, $B_j$, $C_j$ in the $i$th camera, respectively. Since $\hat{A}_j$, $\hat{B}_j$, $\hat{C}_j$ are known, we can obtain $\hat{\theta}_j$ and $\hat{\phi}_j$ from (23). Then $A_j$, $\theta_j$, $\phi_j$ are initialized by $\hat{A}_j$, $\hat{\theta}_j$, $\hat{\phi}_j$, respectively. After all the optimization variables are initialized, the nonlinear minimization is done using the sparse Levenberg-Marquardt algorithm [24].

Note that the main difference here from existing work is that the extra radial distortion parameters are included in the set of unknowns in the bundle adjustment.

III-B Calibration of multiple cameras

Step 1: Initialization of intrinsic and extrinsic parameters. The multiple camera system can be represented by a weighted undirected graph as in [19]. For example, the vision graph of a system consisting of five cameras is shown in Fig. 3. Each vertex represents an individual camera, and the weight $w_{ij}$ of an edge decreases as $n_{ij}$, the number of points in the common field of view of cameras $i$ and $j$, increases. If $n_{ij} = 0$, then the vertices corresponding to the two cameras are not connected. Next, we use Dijkstra's shortest path algorithm [25] to find the optimal path from a reference camera to the other cameras. With the shortest paths from the reference camera to the other cameras and the corresponding pairwise calibration results, we can get the rotation matrices and translation vectors that represent the transformations from the reference camera to the other cameras. For example, if the transformations from the $i$th camera to the $j$th camera and from the $j$th camera to the $k$th camera are $(R_{ij}, t_{ij})$ and $(R_{jk}, t_{jk})$, respectively, then the transformation from the $i$th camera to the $k$th camera is obtained as follows:

$R_{ik} = R_{jk} R_{ij}, \qquad t_{ik} = R_{jk} t_{ij} + t_{jk}$   (25)

Fig. 3: Vision graph and the optimal path (solid lines) from reference camera 0 to the other four cameras. $n_{ij}$ is the number of common points between cameras $i$ and $j$, and $w_{ij}$ is the corresponding edge weight. Vertices 0 and 4 are not connected because $n_{04} = 0$.

If the length of a path from the reference camera is longer than two, we can apply equation (25) sequentially along the entire path, as sketched below. Besides, the initial value of each camera's intrinsic parameters is taken from the pairwise calibration result of the camera pair with the most points in its common field of view.

Note that only the pairwise calibrations involved in the optimal paths are performed using the two-camera calibration algorithm described above. If all the camera pairs were calibrated as in [19], the procedure would be very time-consuming, especially when the number of cameras is large.
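A compact sketch of this initialization step (plain-Python Dijkstra over the vision graph plus the composition rule (25); the weight convention is an assumption):

```python
import heapq
import numpy as np

def shortest_paths(weights, src=0):
    """Dijkstra over the vision graph: `weights[i][j]` is the edge weight
    (e.g. a decreasing function of n_ij, absent when n_ij = 0). Returns
    the predecessor map used to chain pairwise calibrations."""
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, np.inf):
            continue                      # stale queue entry
        for v, w in weights.get(u, {}).items():
            if d + w < dist.get(v, np.inf):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return prev

def compose(R_ij, t_ij, R_jk, t_jk):
    """Chain pairwise extrinsics along a path, as in (25):
    R_ik = R_jk R_ij,  t_ik = R_jk t_ij + t_jk."""
    return R_jk @ R_ij, R_jk @ t_ij + t_jk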

Step 2: Bundle adjustment. As in the two-camera calibration algorithm, the points $\hat{A}_j$, $\hat{B}_j$, $\hat{C}_j$ are computed by the $n$-view ($n \geq 2$) triangulation method in Section II-C, and a distance error threshold can be set to remove outliers. The intrinsic and extrinsic parameters of all the cameras (except the extrinsic parameters of the reference camera, the 0th camera, which are constantly $R = I$ and $t = 0$) constitute $\Theta$. Let the functions $\pi_i$ denote the projection of a 3D point onto the $i$th camera image plane; then bundle adjustment minimizes the following reprojection error:

$\min_{\Theta,\ \{A_j, \theta_j, \phi_j\}}\ \sum_{i} \sum_{j \in V_i} \left( \|a_{ij} - \pi_i(A_j)\|^2 + \|b_{ij} - \pi_i(B_j)\|^2 + \|c_{ij} - \pi_i(C_j)\|^2 \right)$   (26)

where $a_{ij}$, $b_{ij}$, $c_{ij}$ are the image points of the 3D points in the $i$th camera, and $V_i$ indexes the wand placements viewed by the $i$th camera (so $|V_i|$ is the number of times the wand points are viewed in that camera). After all the optimization variables are initialized, the nonlinear minimization is done by using the sparse Levenberg-Marquardt algorithm [24].

IV Experimental results

IV-A Simulation experiments

IV-A1 Simulation setting

In the simulation experiments, the three fish-eye cameras (the 0th, 1st and 2nd cameras) all have image resolutions of 640 pixels × 480 pixels, with identical pixel sizes and FOVs. As for the 1D calibration wand, the feature points $A_j$, $B_j$ and $C_j$ satisfy the known length constraints $d_{AB}$, $d_{BC}$ and $d_{AC}$.

Suppose that the 1D calibration wand undergoes 300 general motions inside the measurement volume. The rotation matrices from the 0th camera to the 1st and 2nd cameras are specified in the form of Euler angles (unit: degree), and the translation vectors from the 0th camera to the 1st and 2nd cameras are specified accordingly. The calibration error of rotation is measured by the absolute error in degrees between the true rotation matrix $R$ and the estimated rotation matrix $\hat{R}$, defined as [26]

$e_R = \max_{i \in \{1,2,3\}} \arccos\left( r_i^\top \hat{r}_i \right) \times \frac{180}{\pi}$   (27)

where $r_i$ and $\hat{r}_i$ are the $i$th columns of $R$ and $\hat{R}$, respectively. The calibration error of translation is measured by

$e_t = \frac{\|t - \hat{t}\|}{\|t\|} \times 100\%$   (28)

where the true translation vector is $t$ and the estimated translation vector is $\hat{t}$. If there are $M$ 3D points viewed by a camera, the global calibration accuracy of this camera is evaluated by the root-mean-squared (RMS) reprojection error

$e_{\mathrm{RMS}} = \sqrt{\frac{1}{M} \sum_{i=1}^{M} \|m_i - \hat{m}_i\|^2}$   (29)

where $m_i$ denotes the image point of the $i$th 3D point and $\hat{m}_i$ is the corresponding reprojection point obtained using the calibration results. Next, we perform simulations for both two cameras (the 0th and 1st fish-eye cameras) and multiple cameras (the 0th, 1st and 2nd fish-eye cameras).
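Under the assumption that (27) takes the maximum over the three column angles and that (28) is a relative error in percent, these metrics can be sketched as:

```python
import numpy as np

def rotation_error_deg(R_true, R_est):
    """Rotation error (27): the largest angle, in degrees, between the
    corresponding columns of the true and estimated rotation matrices [26]."""
    cos_angles = np.clip(np.sum(R_true * R_est, axis=0), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_angles).max()))

def translation_error_percent(t_true, t_est):
    """Relative translation error (28); the percentage form is an assumption."""
    return 100.0 * np.linalg.norm(t_true - t_est) / np.linalg.norm(t_true)
```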

IV-A2 Noise simulations

The true values of the three cameras' focal lengths are 2 mm, while the initial values are 1.8 mm; the principal points are likewise initialized with an offset from their true values. Gaussian noise with zero mean and a standard deviation $\sigma$ varying from 0 to 2 pixels is added to the image points. Simulations are performed 10 times for each noise level and the average of the estimated parameters is taken as the result. Fig. 4 (a)-(e) show the calibration errors of the intrinsic and extrinsic parameters, while Fig. 4 (f) gives the RMS reprojection errors of the cameras. In Fig. 4, '2cams' means calibration of the 0th and 1st fish-eye cameras (two cameras), and '3cams' means calibration of the 0th, 1st and 2nd fish-eye cameras (multiple cameras).

Fig. 4: Calibration errors of the intrinsic and extrinsic parameters and reprojection errors for different noise levels. '2cams' means calibration of the two fish-eye cameras and '3cams' means calibration of the three fish-eye cameras: (a) focal lengths; (b) principal points ($u_0$); (c) principal points ($v_0$); (d) calibration error of rotation; (e) calibration error of translation; (f) RMS reprojection error.

As shown in Fig. 4, the calibration errors of the intrinsic and extrinsic parameters do not change drastically with the noise level. Moreover, the RMS reprojection errors of the cameras increase almost linearly with the noise level. All these errors remain small even at the largest noise level of $\sigma = 2$ pixels. This shows that the calibration algorithm in this paper performs well and achieves high stability for the cases of both two cameras and multiple cameras.

IV-A3 Initial value simulations

The true values of the three cameras' principal points are fixed, while the initial values are offset from them. Gaussian noise with zero mean and a fixed standard deviation is added to the image points. The true values of the three cameras' focal lengths vary from 1.5 mm to 2.5 mm, while the initial values are fixed to 2 mm. Simulations are performed 10 times for each focal length and the average of the estimated parameters is taken as the result. Fig. 5 (a)-(e) show the calibration errors of the intrinsic and extrinsic parameters, while Fig. 5 (f) gives the RMS reprojection errors of the cameras.

Fig. 5: Calibration errors of the intrinsic and extrinsic parameters and reprojection errors for different focal length offsets. '2cams' means calibration of the two fish-eye cameras and '3cams' means calibration of the three fish-eye cameras: (a) focal lengths; (b) principal points ($u_0$); (c) principal points ($v_0$); (d) calibration error of rotation; (e) calibration error of translation; (f) RMS reprojection error.

Next, the true values of the three cameras' focal lengths are 2 mm, while the initial values are 1.8 mm. Gaussian noise with zero mean and a fixed standard deviation is added to the image points. The true values of the three cameras' principal points vary along the image diagonal, while the initial values are fixed. Experiments are performed 10 times for each principal point and the average of the estimated parameters is taken as the result. Fig. 6 (a)-(e) show the calibration errors of the intrinsic and extrinsic parameters, while Fig. 6 (f) gives the RMS reprojection errors of the cameras.

Fig. 6: Calibration errors of the intrinsic and extrinsic parameters and reprojection errors for different center offsets of the principal points. '2cams' means calibration of the two fish-eye cameras and '3cams' means calibration of the three fish-eye cameras: (a) focal lengths; (b) principal points ($u_0$); (c) principal points ($v_0$); (d) calibration error of rotation; (e) calibration error of translation; (f) RMS reprojection error.

As shown in Fig. 5 and Fig. 6, the calibration errors of the intrinsic and extrinsic parameters change only to a small extent with the focal length offset or the center offset of the principal point. Moreover, the RMS reprojection errors of the cameras remain almost constant. In summary, the optimization always converges to a good solution even when the initial solutions differ considerably from the true solution.

IV-B Real experiments

In the real experiments, we use Basler scA640-120gm/gc cameras with an image resolution of 658 pixels × 492 pixels, equipped with either conventional lenses (Pentax C60402KP) or fish-eye lenses (Fujinon FE185C057HA-1). The nominal focal lengths of the conventional and fish-eye lenses are 4.2 mm and 1.8 mm, respectively. The 1D calibration wand is a hollow wand with three collinear LEDs on it (see Fig. 7 (a)), and the distances between the LEDs are known.

In order to observe the LEDs clearly, the influence of outside light can be minimized by setting the exposure time of each camera to a small value. The wand undergoes general rigid motion many times so that the image points fill the image plane as far as possible. Meanwhile, the pixel coordinates of corresponding image points are obtained by using the geometry of the three collinear LEDs.

In the following, we investigate the performance of the proposed method and compare it with the state-of-the-art checkerboard-based methods proposed by Bouguet [27] and Kannala [10]. We use a checkerboard pattern (see Fig. 7 (b)) whose corner points are detected automatically by using the method in [28]. Compared to conventional checkerboard-based methods, the proposed method is more convenient and more efficient, especially when there are many cameras to calibrate. The deficiency of the proposed method is that it is less accurate than conventional checkerboard-based methods, because the feature extraction is less accurate.

First, we perform experiments on two cameras, including two conventional cameras, two fish-eye cameras and two mixed cameras (camera 0 is a fish-eye camera and camera 1 is a conventional camera). With some prior knowledge given by the camera manufacturer and the wand constraints, the intrinsic and extrinsic parameters (including radial distortion parameters) of the two cameras can be calibrated simultaneously by the proposed algorithm in Section III-A. The calibration results are shown in Tables 1-3, from which we find that the three methods give similar calibration results.

We also perform experiments on multiple cameras, including two conventional cameras (camera 0 and camera 2) and a fish-eye camera (camera 1). The two conventional cameras have a small common FOV, so it is impractical to use checkerboard-based methods to calibrate the intrinsic and extrinsic parameters of these three cameras simultaneously. However, it is easy to accomplish this task by using the proposed algorithm in Section III-B. Fig. 8 shows the vision graph generated from the calibration with camera 0 chosen as the reference camera. Due to the small overlap between camera 0 and camera 2, the optimal transformation path for camera 2 is 0-1-2, rather than 0-2. The calibration results of the intrinsic parameters are shown in Table 4, from which it is found that the three methods again give similar results.

After calibration, we perform 3D reconstruction with the calibration results for all the camera setups above. The 1D calibration wand is placed randomly at twenty different positions in a measurement volume of 3 m × 3 m × 3 m. Thus, twenty images are taken for each camera, samples of which are shown in Fig. 9. The corresponding pixel coordinates of the 3D points are extracted manually. Tables 1-4 also give the reconstruction results, where the measurement error $E$ is the RMS deviation of the reconstructed wand length from its true value:

$E = \sqrt{\frac{1}{20} \sum_{j=1}^{20} \left( \|\hat{A}_j - \hat{C}_j\| - d_{AC} \right)^2}$   (30)

with $\hat{A}_j$, $\hat{B}_j$, $\hat{C}_j$ being the reconstructed points of $A_j$, $B_j$, $C_j$ for the $j$th image pair or image triple. We know from these tables that the proposed method and [27] have similar measurement accuracy in the case of two conventional cameras. However, if there are two fish-eye cameras or two mixed cameras, then our method gives better results than [27]. This is probably because: i) the 1D calibration wand is freely placed in the scene volume, which can increase the calibration accuracy; ii) our 2D pattern is a paper printout mounted on a board, and is thus not accurate enough. From Tables 1-4, it is concluded that the measurement error of the proposed method is about 1% for all camera setups.

Table 1. Calibration results of the intrinsic and extrinsic parameters and reconstruction results of two conventional cameras.
Parameter                    Proposed              Bouguet               Kannala
                             cam 0      cam 1      cam 0      cam 1      cam 0      cam 1
f (mm)                       4.5932     3.3307     N/A        N/A        4.3547     4.0564
Distortion coefficient       -0.6424    0.0200     N/A        N/A        -0.5023    0.2402
u0 (pixel)                   355.4040   376.8750   354.8109   370.3515   343.2948   361.6145
v0 (pixel)                   236.2023   271.7577   230.5293   268.0877   223.6586   293.3133
RMS reprojection (pixel)     0.5817     0.5421     N/A        N/A        N/A        N/A
E (mm)                       6.3160                5.9858                N/A
Table 2. Calibration results of the intrinsic and extrinsic parameters and reconstruction results of two fish-eye cameras.
Parameter                    Proposed              Bouguet               Kannala
                             cam 0      cam 1      cam 0      cam 1      cam 0      cam 1
f (mm)                       1.8449     1.7273     N/A        N/A        1.7558     1.7083
Distortion coefficient       -0.0033    0.0753     N/A        N/A        0.0706     0.1061
u0 (pixel)                   350.2229   352.2591   355.6940   357.9337   344.8255   356.7091
v0 (pixel)                   238.8122   256.1896   236.9247   257.9265   237.3625   248.5101
RMS reprojection (pixel)     0.3948     0.3575     N/A        N/A        N/A        N/A
E (mm)                       5.0890                16.1052               N/A
Table 3. Calibration results of the intrinsic and extrinsic parameters and reconstruction results of two mixed cameras.
Parameter                    Proposed              Bouguet               Kannala
                             cam 0      cam 1      cam 0      cam 1      cam 0      cam 1
f (mm)                       1.8192     4.1128     N/A        N/A        1.7448     4.0191
Distortion coefficient       -0.1012    0.5818     N/A        N/A        0.0235     -0.5901
u0 (pixel)                   357.1181   381.2913   348.0850   362.4492   359.5363   360.5308
v0 (pixel)                   237.8219   258.8621   243.3855   271.0786   240.3054   248.9361
RMS reprojection (pixel)     0.3526     0.6980     N/A        N/A        N/A        N/A
E (mm)                       8.3876                12.3939               N/A
Table 4. Calibration results of the intrinsic parameters and reconstruction results of multiple cameras.
Parameter                  Proposed                        Bouguet                         Kannala
                           cam 0     cam 1     cam 2       cam 0     cam 1     cam 2       cam 0     cam 1     cam 2
f (mm)                     3.2958    1.7573    4.1353      N/A       N/A       N/A         4.1519    1.6884    4.1626
Distortion coefficient     -1.0775   -0.0872   -1.6900     N/A       N/A       N/A         -0.1672   -0.0177   -0.2902
u0 (pixel)                 338.1441  346.9810  356.9225    324.4709  354.5205  347.8188    324.8254  355.0662  347.5127
v0 (pixel)                 231.6928  251.2369  269.0179    228.0510  260.3225  267.9654    228.9251  259.4588  260.4136
RMS reprojection (pixel)   0.5364    0.3148    0.5763      N/A       N/A       N/A         N/A       N/A       N/A
E (mm)                     3.9091                          N/A                             N/A

Note that the methods in [3] and [4] require parallel stereo vision to perform 3D measurement, whereas this is not a requirement for the proposed method. In the experiments above, the differences between the measured distances and the ground truth may come from several sources, such as inaccurate extraction of image points and manufacturing errors of the 1D calibration wand. Despite these error sources, the calibration accuracy obtained by the proposed method is satisfactory. These experiments also demonstrate the practicality of the proposed calibration method.

V Conclusions

A calibration method with a one-dimensional object under general motions is proposed in this paper to calibrate multiple fish-eye cameras. Simulations and real experiments have demonstrated that the calibration method is accurate, efficient and user-friendly. The proposed method is generic and also suitable for two/multiple conventional cameras and mixed cameras. When there are two conventional cameras, the proposed method and 2D pattern based methods have similar calibration accuracy. However, the proposed method gives more accurate calibration results in the case of two fish-eye cameras or two mixed cameras (a conventional camera and a fish-eye camera). The achieved level of accuracy for two fish-eye cameras is promising, especially considering their use for 3D measurement purposes. In addition, 2D/3D pattern based methods are inapplicable when multiple cameras have little or no common FOV, whereas this is not a problem for the proposed method.

References

  • [1] Li S.: ‘Binocular spherical stereo’, IEEE Transactions on Intelligent Transportation Systems, 2008, 9, (4), pp. 589-600.
  • [2] Abraham S., Förstner W.: ‘Fish-eye-stereo calibration and epipolar rectification’, ISPRS Journal of Photogrammetry and Remote Sensing, 2005, 59, (5), pp. 278-288.
  • [3] Shah S., Aggarwal J.K.: ‘Mobile robot navigation and scene modeling using stereo fish-eye lens system’, Machine Vision and Applications, 1997, 10, (4), pp. 159-173.
  • [4] Yamaguchi J.: ‘Three dimensional measurement using fisheye stereo vision’, in Bhatti, A. (Ed.): ‘Advances in Theory and Applications of Stereo Vision’ (InTech, 2011.), pp. 151-164.
  • [5] Havlena M., Pajdla T., Cornelis K.: ‘Structure from omnidirectional stereo rig motion for city modeling’. Proceedings of the International Conference on Computer Vision Theory and Applications, Funchal, Portugal, 2008, pp. 407-414.
  • [6] Zhang Z.: ‘Camera calibration with one-dimensional objects’, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004, 26, (7), pp. 892-899.
  • [7] Heikkilä J.: ‘Geometric camera calibration using circular control points’, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22, (10), pp. 1066-1077.
  • [8] Puig L., Bastanlar Y., Sturm P., Guerrero J.J., Barreto J.: ‘Calibration of central catadioptric cameras using a DLT-like approach’, International Journal of Computer Vision, 2011, 93, (1), pp. 101-114.
  • [9] Du B., Zhu H.: ‘Estimating fisheye camera parameters using one single image of 3D pattern’. Proceedings of the International Conference on Electric Information and Control Engineering, Wuhan, China, 2011, pp. 367-370.
  • [10] Kannala J., Brandt S.: ‘A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses’, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28, (8), pp. 1335-1340.
  • [11] Mei C., Rives P.: ‘Single view point omnidirectional camera calibration from planar grids’. Proceedings of the IEEE International Conference on Robotics and Automation, Roma, Italy, 2007, pp. 3945-3950.
  • [12] Feng W., Röning J., Kannala J., Zong X., Zhang B.: ‘A general model and calibration method for spherical stereoscopic vision’. Proceedings of the SPIE 8301, Intelligent Robots and Computer Vision XXIX: Algorithms and Techniques, Burlingame, USA, 2012, 830107.
  • [13] Micusik B., Pajdla T.: ‘Structure from motion with wide circular field of view cameras’, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28, (7), pp. 1135-1149.
  • [14] Espuny F., Burgos Gil J.: ‘Generic self-calibration of central cameras from two rotational flows’, International Journal of Computer Vision, 2011, 91, (2), pp. 131-145.
  • [15] De França J.A., Stemmer M.R., França M.B.d.M., Piai J.C.: ‘A new robust algorithmic for multi-camera calibration with a 1D object under general motions without prior knowledge of any camera intrinsic parameter’, Pattern Recognition, 2012, 45, (10), pp. 3636-3647.
  • [16] Pribanic T., Sturm P., Peharec S.: ‘Wand-based calibration of 3D kinematic system’, IET Computer Vision, 2009, 3, (3), pp. 124-129.
  • [17] ‘Vicon Real-time motion capture system’, http://www.vicon.com/System/Calibration, accessed June 2014.
  • [18] Hartley R., Zisserman A.: ‘Multiple View Geometry in Computer Vision’ (Cambridge University Press, second edition, 2004).
  • [19] Kurillo G., Li Z., Bajcsy R.: ‘Wide-area external multi-camera calibration using vision graphs and virtual calibration object’. Proceedings of Second ACM/IEEE International Conference on Distributed Smart Cameras, Stanford, USA, 2008, pp. 1-9.
  • [20] Geyer C., Daniilidis K.: ‘A unifying theory for central panoramic systems and practical implications’. Proceedings of the European Conference on Computer Vision, Dublin, Ireland, 2000, pp. 445-461.
  • [21] Kanatani K.: ‘Calibration of ultrawide fisheye lens cameras by eigenvalue minimization’, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35, (4), pp. 813-822.
  • [22] Svoboda T., Pajdla T., Hlaváč V.: ‘Motion estimation using central panoramic cameras’. Proceedings of the IEEE Conference on Intelligent Vehicles, Stuttgart, Germany, 1998, pp. 335-340.
  • [23] Nistér D.: ‘An efficient solution to the five-point relative pose problem’, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004, 26, (6), pp. 756-770.
  • [24] Lourakis M.: ‘Sparse non-linear least squares optimization for geometric vision’. Proceedings of the European Conference on Computer Vision, Crete, Greece, 2010, pp. 43-56.
  • [25] Chen J.: ‘Dijkstra’s shortest path algorithm’, Formalized Mathematics, 2003, 11, (3), pp. 237-247.
  • [26] Zheng Y., Kuang Y., Sugimoto S., Åström K., Okutomi M.: ‘Revisiting the pnp problem: a fast, general and optimal solution’. Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 2013, pp. 2344-2351.
  • [27] Bouguet J-Y.: ‘Camera calibration toolbox for matlab’, http://www.vision.caltech.edu/bouguetj/calib_doc/, accessed June 2014.
  • [28] Geiger A., Moosmann F., Car Ö., Schuster B.: ‘Automatic camera and range sensor calibration using a single shot’. Proceedings of the International Conference on Robotics and Automation, Saint Paul, USA, 2012, pp. 3936-3943.
Fig. 7: Calibration objects: (a) the one-dimensional calibration wand used in the proposed method; (b) the checkerboard used in the comparison methods.
Fig. 8: Vision graph and the optimal path (solid lines) from reference camera 0 to the other two cameras. The numbers indicate the number of common points between cameras and the corresponding edge weights.
Fig. 9: Sample images of the 1D object captured by the cameras for reconstruction: (a) a conventional camera; (b) a fish-eye camera.