Efficient Pose Selection for Interactive Camera Calibration

07/09/2019 · Pavel Rojtberg et al. · Fraunhofer

The choice of poses for camera calibration with planar patterns is only rarely considered, yet the calibration precision heavily depends on it. This work presents a pose selection method that finds a compact and robust set of calibration poses and is suitable for interactive calibration. Consequently, singular poses that would lead to an unreliable solution are avoided explicitly, while poses reducing the uncertainty of the calibration are favoured. For this, we use uncertainty propagation. Our method takes advantage of a self-identifying calibration pattern to track the camera pose in real-time. This allows us to iteratively guide the user to the target poses until the desired quality level is reached. Therefore, only a sparse set of key-frames is needed for calibration. The method is evaluated on separate training and testing sets, as well as on synthetic data. Our approach performs better than comparable solutions while requiring 30% fewer calibration frames.

1 Preliminaries

We will use the pinhole camera model that, given the camera orientation $R$, position $t$ and the parameter vector $C$, maps a 3D world point $P$ to a 2D image point $\hat{p}$:

$$\hat{p} = K \cdot d\left(\tfrac{1}{z}\,[R \mid t]\,P\right) \qquad (1)$$

Here $[R \mid t]$ is a 3x4 affine transformation, $z$ denotes the depth of $P$ after the affine transformation, and $K$ is the camera calibration matrix containing the focal lengths $f_x, f_y$ (and aspect ratio) and the principal point $c_x, c_y$. Zhang [16] also includes a skew parameter $\gamma$; however, for CCD cameras it is safe to assume $\gamma$ to be zero [12, 6]. $d$ models the commonly used [12] radial (2a) and tangential (2b) lens distortions (following [7]) as

$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right) \begin{pmatrix} x \\ y \end{pmatrix} \qquad (2a)$$

$$\begin{pmatrix} x'' \\ y'' \end{pmatrix} = \begin{pmatrix} x' \\ y' \end{pmatrix} + \begin{pmatrix} 2 p_1 x y + p_2 (r^2 + 2x^2) \\ p_1 (r^2 + 2y^2) + 2 p_2 x y \end{pmatrix} \qquad (2b)$$

where $r^2 = x^2 + y^2$. Therefore $C = (f_x, f_y, c_x, c_y, k_1, k_2, k_3, p_1, p_2)$.
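To make the model concrete, below is a minimal numpy sketch of eqs. (1) and (2). The function name and explicit parameter list are illustrative, and the distortion ordering follows the common OpenCV-style convention assumed above.

```python
import numpy as np

def project(P, R, t, K, k1, k2, k3, p1, p2):
    """Sketch of eq. (1)/(2): map a 3D world point P (3,) to a 2D image point."""
    X = R @ P + t                       # affine transformation [R|t]
    x, y = X[0] / X[2], X[1] / X[2]     # divide by depth z
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3              # (2a)
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)   # (2b)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    # K applies the focal lengths and the principal point
    return np.array([K[0, 0] * x_d + K[0, 2], K[1, 1] * y_d + K[1, 2]])
```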

1.1 Estimation and error analysis

Given $N$ images, each containing $M$ point correspondences, the underlying calibration method [16] minimizes the geometric error

$$\sum_{i=1}^{N} \sum_{j=1}^{M} \left\| p_{ij} - \hat{p}(C, R_i, t_i, P_j) \right\|^2 \qquad (3)$$

where $p_{ij}$ is an observed (noisy) 2D point in image $i$ and $P_j$ is the corresponding 3D object point.

Eq. (3) is also referred to as the reprojection error and often used to assess the quality of a calibration. Yet, it only measures the residual error and is subject to over-fitting, particularly if only a minimal number of point correspondences is used [6, §7.1].

The actual objective for calibration, however, is the estimation error, i.e. the distance between the estimated solution and the (unknown) ground truth. Richardson et al. [10] propose the Max ERE as an alternative metric that correlates with the estimation error and also has a similar value range (pixels). However, it requires sampling and re-projecting the current solution. Yet for user guidance and monitoring of convergence only the relative error of the parameters is needed. Therefore, we directly use the variance $\sigma_i^2$ of the estimated parameters $C_i$. Particularly, we use the index of dispersion (IOD), i.e. the variance-to-mean ratio $\sigma_i^2 / C_i$, to ensure comparability of the parameters among each other.

Given the covariance $\Sigma_p$ of the image points, the backward transport of covariance [6, §5.2.3] is used to obtain

$$\Sigma_C = \left(J^\top \Sigma_p^{-1} J\right)^{+} \qquad (4)$$

where $J$ is the Jacobian matrix of the reprojection with respect to the vector of unknowns and $(\cdot)^{+}$ denotes the pseudo inverse. For simplicity, and because of the lack of prior knowledge, we assume a standard deviation of 1 px in each coordinate direction for the image points, thus $\Sigma_p = I$.

The diagonal entries of $\Sigma_C$ contain the variances $\sigma_i^2$ of the estimated parameters. $J$ is already computed in the Levenberg-Marquardt step of [16].
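As a sketch, the backward transport of covariance and the per-parameter index of dispersion can be computed as follows; the assumption that the intrinsic parameters occupy the first entries of the unknown vector is ours.

```python
import numpy as np

def parameter_dispersion(J, C_est, sigma_px=1.0):
    """Eq. (4) plus index of dispersion (variance-to-mean ratio), as a sketch.

    J     : Jacobian of all residuals w.r.t. the vector of unknowns
    C_est : current estimate of the intrinsic parameters, assumed to be
            the first entries of the unknown vector."""
    Sigma_p_inv = np.eye(J.shape[0]) / sigma_px**2   # Sigma_p = I for 1 px noise
    Sigma = np.linalg.pinv(J.T @ Sigma_p_inv @ J)    # pseudo inverse of J^T Sigma_p^-1 J
    var = np.diag(Sigma)[:len(C_est)]                # variances of the intrinsics
    iod = var / np.abs(C_est)                        # index of dispersion per parameter
    return var, iod
```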

1.2 Calibration pattern

Our approach works with any planar calibration target, e.g. the common chessboard and circle grid patterns. However, for interactive user guidance a fast board detection is crucial. Therefore, we use the self-identifying ChArUco [5] pattern as implemented in OpenCV. Compared to the classical chessboard, this saves the time-consuming ordering of the detected rectangles into a canonical topology. However, one can alternatively use any of the recently developed self-identifying targets [1, 2, 4] here.

The pattern size is set to 9x6 squares, resulting in up to 40 measurements at the interior chessboard joints per captured frame. This allows the initialization to be completed successfully even if not all markers are detected, as discussed in Section 3.3.
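For illustration, a possible detection routine using OpenCV's aruco module is sketched below. The dictionary choice and the square/marker sizes are placeholders, and the sketch assumes the pre-4.7 contrib-style aruco API (the interface changed in later OpenCV releases).

```python
import cv2
import cv2.aruco as aruco

# 9x6 ChArUco board; dictionary and physical sizes are placeholder values
DICT = aruco.getPredefinedDictionary(aruco.DICT_4X4_50)
BOARD = aruco.CharucoBoard_create(9, 6, 0.030, 0.023, DICT)

def detect_board(gray):
    """Detect the ChArUco pattern and return up to 40 chessboard-joint measurements."""
    corners, ids, _ = aruco.detectMarkers(gray, DICT)
    if ids is None or len(ids) == 0:
        return None, None
    # interpolate the chessboard joints from the detected markers
    _, joint_corners, joint_ids = aruco.interpolateCornersCharuco(corners, ids, gray, BOARD)
    return joint_corners, joint_ids
```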

2 Pose selection

The core idea of our approach is to explicitly specify individual key-frames, which are then used for calibration with the method of Zhang [16].

In this section, we first discuss the relation between intrinsic parameters and board poses to motivate our split of the parameter vector into pinhole and distortion parameters. For each parameter group we then present our set of rules to generate an optimal pose while explicitly avoiding degenerate configurations.

2.1 Splitting pinhole and distortion parameters

Looking at eq. (1) we see that both $K$ and $d$ are applied after the projection and thus describe 2D-to-2D mappings. Therefore, one might consider estimating $C$ from just one board pose that uniformly samples the image. However, as both intrinsic and extrinsic parameters are estimated simultaneously by [16], ambiguities arise.

Assuming $R = I$ and the distortion parameters to be zero, by multiplying out (1) we get

$$\hat{p} = \begin{pmatrix} f_x \frac{X + t_x}{Z + t_z} + c_x \\[4pt] f_y \frac{Y + t_y}{Z + t_z} + c_y \end{pmatrix} \qquad (5)$$

for all pattern points $P = (X, Y, Z)^\top$. In this case there are two ambiguities between

  1. the focal lengths $f_x, f_y$ and the distance to the camera $t_z$, and

  2. the in-plane translation $t_x, t_y$ and the principal point $c_x, c_y$.

These ambiguities can be resolved by requiring the pattern to be tilted towards the image plane such that there is only one parameter set that satisfies eq. (1) for all pattern points.
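A small numeric check, assuming the simplified frontal form of eq. (5), illustrates the first ambiguity: scaling the focal length and the distance to the camera by the same factor leaves the projection of a fronto-parallel pattern unchanged.

```python
import numpy as np

# fronto-parallel pattern: R = I, all points in the plane Z = 0
pts = np.array([[X, Y, 0.0] for X in range(5) for Y in range(5)])

def project_frontal(f, c, t):
    z = pts[:, 2] + t[2]
    return np.stack([f * (pts[:, 0] + t[0]) / z + c[0],
                     f * (pts[:, 1] + t[1]) / z + c[1]], axis=1)

a = project_frontal(800.0,  (640, 360), (0.1, 0.2, 1.0))
b = project_frontal(1600.0, (640, 360), (0.1, 0.2, 2.0))   # f and t_z doubled
print(np.allclose(a, b))   # True: f / t_z ambiguity of eq. (5)
```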

(a) Estimated distortion map. (b) Target pose at maximal distortion.
Figure 2: Distortion map showing the magnitude of the per-pixel displacement caused by the lens distortion. To find the target pose we apply thresholding and fit an axis-aligned bounding box.

Considering the distortion parameters on the other hand, there are no similar ambiguities due to the non-linearity of the mapping. These parameters are rather determined by the maximal distortion strength evident in the image. Here it is more important to accurately measure the distortion in the corresponding image regions (see Figure 2a).

Therefore, we split the parameter vector $C$ into the pinhole parameters $(f_x, f_y, c_x, c_y)$ and the distortion parameters $(k_1, k_2, k_3, p_1, p_2)$ and consider each group separately.

2.2 Avoiding pinhole singularities

While optimizing the pinhole parameters, singular poses must be avoided. In addition to the case discussed above, we incorporate the cases identified in [11]. Particularly, we restrict the 3D configuration of the calibration pattern as follows:

  • The pattern must not be parallel to the image plane.

  • The pattern must not be parallel to one of the image axes.

  • Given two patterns, the "reflection constraint" must be fulfilled. This means that the vanishing lines of the two planes are not reflections of each other along both a horizontal and a vertical line in the image.

These restrictions ensure that each pose adds information that further constrains the pinhole parameters.

2.3 Pose generation

Figure 3: Exemplary pose selection state. Top: Index of dispersion. Left: Intrinsic calibration position candidates after one (magenta) and two (yellow) subdivision steps. Right: Distortion map with already visited regions masked out.

As described in Section 2.1, each parameter group requires a different strategy to generate an optimal calibration pose.

For the pinhole parameters we follow [13, 16] and aim at maximizing the angular spread between the image plane and the calibration pattern. Accordingly, poses are generated as follows:

  1. We choose a distance such that the whole pattern is visible, maximising the number of observed 2D points.

  2. Depending on the targeted parameter, the pattern is tilted around the corresponding principal axis within a fixed angular range. The actual angle is interpolated using a sequence that corresponds to the binary subdivision of the range, i.e. 1/2, then 1/4 and 3/4, then 1/8, 3/8, and so on (see Figure 3 and the code sketch after this list). This strategy, as desired, maximizes the angular spread.

  3. The resulting pose would still be parallel to one of the image axes, which prevents the estimation of the principal point along that axis [11]. Therefore, the resulting view is additionally rotated in-plane, which implements this requirement while keeping the principal orientation.

  4. When determining the principal point, the view is further shifted along the respective image axis by 5% of the image size. This increases the spread along that axis and leads to faster convergence in our experiments.
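The binary subdivision mentioned in step 2 can be generated as in the following sketch; the mapping to a concrete tilt range (here ±70° as an assumed example, since the actual limits were tuned empirically, see below) is for illustration only.

```python
def binary_subdivision():
    """Yield 1/2, 1/4, 3/4, 1/8, 3/8, 5/8, 7/8, ... (progressive refinement)."""
    depth = 1
    while True:
        step = 1.0 / 2 ** depth
        for k in range(1, 2 ** depth, 2):   # odd numerators only
            yield k * step
        depth += 1

# example: map the fractions to tilt angles in an assumed range of [-70, 70] degrees
gen = binary_subdivision()
angles = [-70 + 140 * next(gen) for _ in range(7)]   # [0, -35, 35, -52.5, -17.5, 17.5, 52.5]
```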

For the distortion parameters the goal is to increase sampling accuracy in image regions exhibiting strong distortion. For this we generate a distortion map based on the current calibration estimate that encodes the displacement for each pixel. Using this map we search for the distorted regions as follows (a code sketch is given after the list):

  1. Threshold the distortion map (Figure 2a) to find the region with the strongest distortion.

  2. Given the threshold image, an axis aligned bounding box (AABB) is fitted to the region, corresponding to a parallel view on the pattern. Note that the pose constraints for the pinhole parameters do not apply here.

  3. The area covered by the AABB is excluded from subsequent searches (see Figure 3). Effectively, the distorted regions are thereby visited in order of distortion strength.

  4. The pattern is aligned with the top-left corner of the AABB and positioned at a depth s.t. its projection covers 33% of the image width.

The angular range and width limits mentioned above were set such that the calibration pattern could be reliably detected using the Logitech C525 camera.
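A possible realization of the distortion-region search (steps 1-3 above) is sketched below. It approximates the per-pixel displacement via cv2.undistortPoints and uses a relative threshold; both the threshold value and the exact way the displacement map is computed are assumptions of this sketch.

```python
import cv2
import numpy as np

def next_distortion_region(K, dist, image_size, visited, rel_thresh=0.5):
    """Threshold the distortion map and fit an axis-aligned bounding box (AABB)."""
    w, h = image_size
    pts = np.stack(np.meshgrid(np.arange(w), np.arange(h)), -1)
    pts = pts.reshape(-1, 1, 2).astype(np.float32)
    moved = cv2.undistortPoints(pts, K, dist, P=K)          # displacement per pixel
    dmap = np.linalg.norm(moved - pts, axis=2).reshape(h, w)
    dmap[visited] = 0                                       # mask already visited regions
    ys, xs = np.where(dmap >= rel_thresh * dmap.max())      # strongest distortion
    x0, y0, x1, y1 = xs.min(), ys.min(), xs.max(), ys.max()
    visited[y0:y1 + 1, x0:x1 + 1] = True                    # exclude from next searches
    return x0, y0, x1, y1                                   # AABB of the target region
```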

2.4 Initialization

The underlying calibration method [16] requires at least two views of the pattern for an initial solution which we select as follows:

  • For the pinhole parameters a pose tilted by 45° is selected (see Section 2.3). This particular angle was suggested by [16] and lies in between the extrema of 0°, where the focal length cannot be determined, and 90°, where the aspect ratio and principal point cannot be determined.

  • Without any prior knowledge we aim at a uniform sampling for estimating the distortion parameters. To this end we compute a pose such that the pattern is parallel to the image plane and covers the whole view. While this violates the axis-alignment requirements for pinhole poses, it still provides extra information as it is not coplanar to the first pose [16]. Furthermore, the reflection constraint is fulfilled.

To render an accurate overlay for the first pose without prior knowledge of the used camera, we employ a bootstrapping strategy similar to [10]; if the pattern can be detected, we perform a single-frame calibration estimating the focal length only, with the principal point fixed at the image center and the distortion set to zero.
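One way to realize this bootstrap with OpenCV is sketched below: a single-view calibration where only the focal length is free, the principal point is fixed at the image center and the distortion coefficients are forced to zero. The initial focal guess and the helper signature are assumptions of this sketch.

```python
import cv2
import numpy as np

def bootstrap_focal(obj_pts, img_pts, image_size):
    """Single-frame bootstrap: estimate the focal length only (sketch)."""
    w, h = image_size
    K0 = np.array([[1000.0, 0, w / 2],       # arbitrary initial focal guess
                   [0, 1000.0, h / 2],
                   [0, 0, 1]])
    flags = (cv2.CALIB_USE_INTRINSIC_GUESS |
             cv2.CALIB_FIX_PRINCIPAL_POINT |   # principal point stays at the center
             cv2.CALIB_FIX_ASPECT_RATIO |
             cv2.CALIB_ZERO_TANGENT_DIST |     # distortion is kept at zero
             cv2.CALIB_FIX_K1 | cv2.CALIB_FIX_K2 | cv2.CALIB_FIX_K3)
    _, K, dist, _, _ = cv2.calibrateCamera(
        [obj_pts.astype(np.float32)], [img_pts.astype(np.float32)],
        (w, h), K0, None, flags=flags)
    return K
```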

3 Calibration process

In the following we present the parameter refinement and user guidance parts as well as any employed heuristics. This completes the calibration pipeline as used for the real data experiments.

3.1 Parameter refinement

After obtaining an initial solution using two key-frames, the goal is to minimize the cumulated variance of the estimated parameters. We approach this problem by targeting the variance of a single parameter at a time. Here we pick the parameter $C_i$ with the highest index of dispersion (MaxIOD), i.e. $i = \arg\max_j \sigma_j^2 / C_j$. Depending on the parameter group, a pose is then generated as described in Section 2.

For determining convergence, we use a ratio test on the parameter variance: if the relative reduction achieved by the latest key-frame is below a given threshold, we assume the parameter to be converged and exclude it from further refinement. Here, we only consider parameters from the same group, as there is typically only little reduction in the complementary group. The calibration terminates once all parameters have converged.
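The refinement loop can be summarized as in the sketch below. The state object `calib` and the pose generators are hypothetical placeholders, and treating the first four parameters as the pinhole group is an assumption of this sketch.

```python
import numpy as np

def refine_step(calib, converged, ratio_thresh=0.1):
    """One refinement iteration: target the MaxIOD parameter, then test convergence."""
    iod = calib.variances / np.abs(calib.estimates)   # index of dispersion
    iod[converged] = 0                                # ignore converged parameters
    i = int(np.argmax(iod))                           # MaxIOD parameter
    # first four parameters assumed to be the pinhole group (f_x, f_y, c_x, c_y)
    pose = generate_pinhole_pose(i) if i < 4 else generate_distortion_pose(calib)
    var_before = calib.variances[i]
    calib.add_keyframe(pose)                          # user captures the suggested pose
    if 1.0 - calib.variances[i] / var_before < ratio_thresh:
        converged[i] = True                           # ratio test on the variance
    return pose
```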

3.2 User Guidance

To guide the user, the targeted camera pose is projected using the current estimate of the intrinsic parameters. This projection is then displayed as an overlay on top of the live video stream (see Figure 1 and the video in the supplemental material).

To verify whether the user is sufficiently close to the target pose, we use the Jaccard index (intersection over union) computed from the image area covered by the projection of the pattern from the target pose and the area covered by the projection from the current pose estimate. We assume that the user has reached the desired pose once this overlap exceeds a fixed threshold.

Comparing the projection overlap instead of using the estimated pose directly is more robust since the pose estimate is often unreliable — especially during initialization.
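A sketch of the overlap test is given below; the projected pattern outlines are rasterized into binary masks and their intersection over union is compared against a threshold (the concrete threshold value used here is an assumption).

```python
import cv2
import numpy as np

def pose_reached(target_quad, current_quad, image_size, thresh=0.8):
    """Jaccard index (IoU) of the two projected pattern outlines (sketch)."""
    w, h = image_size
    a = np.zeros((h, w), np.uint8)
    b = np.zeros((h, w), np.uint8)
    cv2.fillConvexPoly(a, target_quad.astype(np.int32), 1)   # target pose projection
    cv2.fillConvexPoly(b, current_quad.astype(np.int32), 1)  # current pose estimate
    union = np.logical_or(a, b).sum()
    inter = np.logical_and(a, b).sum()
    return union > 0 and inter / union > thresh
```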

3.3 Heuristics

Throughout the process we enforce the common heuristic [6, §7.2] that the number of constraints should exceed the number of unknowns by a factor of five. The used calibration method [16] not only estimates the 9 intrinsic parameters, but also the relative pose of the model plane and the image plane, i.e. a 3D rotation and a 3D translation per view. When using $N$ calibration images we thus have $9 + 6N$ unknowns, and each point correspondence provides two constraints. For initialization ($N = 2$) we thus have 21 unknowns, meaning at least 53 point correspondences are needed in total, or 27 correspondences per frame. Any subsequent frame adds 6 unknowns, so only 15 points are required for it.

To prevent inaccurate measurements due to motion blur and rolling-shutter artifacts, the pattern should be still. To ensure this we require all points to be re-detected in the consecutive frame and the mean motion of the points to be smaller than a fixed pixel threshold (determined empirically).
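The stillness check can be realized as in the following sketch; the pixel threshold is a placeholder, since the paper only states that the value was determined empirically.

```python
import numpy as np

def pattern_still(prev_pts, prev_ids, curr_pts, curr_ids, max_motion_px=2.0):
    """All points re-detected and mean motion below a threshold (sketch)."""
    if prev_ids is None or curr_ids is None:
        return False
    if not np.array_equal(np.sort(prev_ids.ravel()), np.sort(curr_ids.ravel())):
        return False                                   # not all points re-detected
    order_p = np.argsort(prev_ids.ravel())
    order_c = np.argsort(curr_ids.ravel())
    motion = np.linalg.norm(prev_pts.reshape(-1, 2)[order_p] -
                            curr_pts.reshape(-1, 2)[order_c], axis=1)
    return motion.mean() < max_motion_px
```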

(a) Poses 11-20 optimize one parameter group (pinhole or distortion). (b) Poses 11-20 optimize the complementary group.
Figure 4: Correlation of pose selection strategies and calibration parameter uncertainty, expressed using the standard deviation $\sigma_i$ (the error bars thus denote the variance of $\sigma_i$).
The first two poses are selected according to the initialization method. Poses 2-10 and 11-20 are selected by complementary strategies. Evaluated with synthetic images on 20 camera models sampled around the estimate of the Logitech C525 camera.

4 Evaluation

The presented method was evaluated on both synthetic and real data. The synthetic experiments aimed at validating the parameter splitting and pose generation rules presented in Section 2, while the real data was used for comparison with other methods. Furthermore, the compactness of the results with real data was estimated by optimizing directly on the testing set.

4.1 Synthetic data

We performed multiple calibrations, each using 20 synthetic images. The first two camera poses were chosen as described in Section 2.4 to allow a rough initial solution. The next 8 poses were chosen to optimize one parameter group while the last 10 poses optimized the complementary group (and vice versa).

The camera parameters were based on the calibration parameters $C_0$ of a Logitech C525 camera. However, the actual parameters were sampled around $C_0$ using a covariance matrix that allows 10% deviation for each of the parameters:

$$C \sim \mathcal{N}\!\left(C_0,\ \operatorname{diag}\!\left((0.1 \cdot C_0)^2\right)\right) \qquad (6)$$

Therefore, each synthetic calibration corresponds to using a different camera C with known ground truth parameters. To allow generalization to different camera models, we kept the above pose generation sequence, but used 20 different cameras C.
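The sampling of eq. (6) can be sketched as follows; the listed C525 parameter values are placeholders, not the actual calibration result.

```python
import numpy as np

rng = np.random.default_rng(0)

# estimate C_0 of the Logitech C525 (placeholder values: fx, fy, cx, cy, k1..k3, p1, p2)
C0 = np.array([900.0, 900.0, 640.0, 360.0, 0.1, -0.2, 0.05, 0.001, 0.001])

def sample_camera():
    """Draw ground-truth parameters around C_0 with 10% standard deviation (eq. 6)."""
    Sigma0 = np.diag((0.1 * C0) ** 2)
    return rng.multivariate_normal(C0, Sigma0)

cameras = [sample_camera() for _ in range(20)]   # 20 synthetic camera models
```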

Figure 4 shows the mean standard deviation of the parameters. Notably, there is a significant drop in the standard deviation of a parameter group if and only if a pose matching that group is used.

We also evaluated the usage of MaxIOD as an error metric by comparing it to MaxERE [10] and the known estimation error. Just as MaxERE, MaxIOD correlates with the estimation error (see Figure 5a). Additionally, as Figure 5b indicates, the IOD reduction is suitable for balancing calibration quality and the number of required calibration frames.

Figure 5: (a) Comparison of error metrics on synthetic data: both MaxERE and the proposed MaxIOD correlate with the estimation error (standard deviation over 20 samples). (b) Required number of frames and resulting estimation error with respect to the variance reduction threshold.

4.2 Real data

For evaluating our method with real images, we recorded a separate testing set consisting of 50 images at various distances and angles covering the whole field of view. All images were captured using a Logitech C525 webcam at a resolution of 1280x720px. The autofocus was fixed throughout the whole evaluation, while exposure was fixed per sequence. Our method was compared to AprilCal [10] and calibrating without any pose restrictions using OpenCV.

We used the pattern described in Section 1.2, which provides 40 measurements per frame, for OpenCV as well as for our method. With AprilCal, we used the 5x7 AprilTag target that generates approximately the same number of measurements.

The convergence threshold was set to 10% for our method and the stopping accuracy parameter of AprilCal was set to 2.0. As the OpenCV method does not provide convergence monitoring, we stopped calibration after 10 frames here.

Method           | mean error on testing set [px] | frames used | mean error on training set [px]
Pose selection   | 0.518                          | 9.4         | 0.470
OpenCV [3]       | 1.465                          | 10          | 0.345
AprilCal [10]    | 0.815                          | 13.4        | 1.540
Compactness test | 0.514                          | 7           | 0.476
Table 1: Our method compared to AprilCal and OpenCV on real data, showing the average over five runs. The last row reports training directly on the testing set (compactness test, Section 4.3).

Table 1 shows the mean results over 5 calibration runs for each method, measuring the required number of frames as well as the mean errors on the training and testing sets. Here our method requires only 70% of the frames required by AprilCal while arriving at a 36% lower testing error (64% lower compared to OpenCV).

4.3 Analyzing the calibration compactness

The results in the previous section show that our method provides the lowest calibration error while using fewer calibration frames than comparable approaches. However, it is not clear whether the solution uses the minimal number of frames or whether a subset of the frames would yield the same calibration error.

Therefore, we further tested the compactness of our calibration result. We used a greedy algorithm that, given a set of frames captured by our method, tries to find a smaller subset. It optimizes for the testing set, directly minimizing the estimation error.

The algorithm proceeds as follows, given a set of training images (the calibration sequence):

  1. The initialization frames as described in Section 2.4 are added unconditionally.

  2. Each of the remaining frames is individually added to the key-frame set and a calibration is computed.

  3. For each calibration the estimation error is computed using the testing frames.

  4. The frame that minimizes the estimation error is incorporated into the key-frame set. Continue at step 2.

  5. Terminate if the error cannot be further reduced or all frames have been used.
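A compact sketch of this greedy selection is given below; `calibrate` (running [16] on a key-frame set) and `estimation_error` (evaluating a result on the testing frames) are hypothetical helpers.

```python
import numpy as np

def greedy_subset(train_frames, test_frames, init_frames):
    """Greedy compactness test: grow the key-frame set only while the test error drops."""
    selected = list(init_frames)                   # step 1: initialization frames
    remaining = [f for f in train_frames if f not in selected]
    best_err = estimation_error(calibrate(selected), test_frames)
    while remaining:
        # steps 2-3: try each remaining frame and score it on the testing frames
        errs = [estimation_error(calibrate(selected + [f]), test_frames)
                for f in remaining]
        i = int(np.argmin(errs))
        if errs[i] >= best_err:                    # step 5: no further reduction
            break
        best_err = errs[i]                         # step 4: keep the best frame
        selected.append(remaining.pop(i))
    return selected, best_err
```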

The greedy optimal solution requires only 75% of the frames used by the proposed method while keeping the same estimation error (see Table 1). This indicates that, while a significant improvement over [10], our method is not yet optimal in the compactness sense. The greedy algorithm, however, requires an a priori recorded testing set and only finds a minimal subset of an existing calibration sequence; it cannot generate any calibration poses.

4.4 User survey

We performed an informal survey among 5 co-workers to measure the required calibration time when using our method. Each participant used the tool for the first time, and the only instruction given was that the overlay should be matched with the calibration pattern. The camera was fixed and the pattern had to be moved. On average the users required 1:33 min for capturing 8.7 frames.

5 Conclusion and future work

We have presented a calibration approach that generates a compact set of calibration frames and is suitable for interactive user guidance. Singular pose configurations are avoided such that capturing about 9 key-frames is sufficient for a precise calibration; this is 30% fewer than comparable solutions. The provided user guidance allows even inexperienced users to perform the calibration in less than 2 minutes. Calibration precision can be weighted against the required calibration time using the convergence threshold. The camera parameter uncertainty is monitored throughout the process, ensuring that a given confidence level can be reached repeatedly.

Our evaluation shows that the number of required frames can still be reduced to speed up the process even more. We only use a widespread and simple distortion model; additional distortion coefficients such as thin prism [15], rational [8] and tilted-sensor models are to be considered in future work. Eventually one could incorporate a detection of unused parameters. This would allow starting with the most complex distortion model, which could then be gradually reduced during calibration.

Furthermore, the method needs adaptation to special cases like microscopy, where the depth of field limits the possible calibration angles, or calibration at large distances, where scaling the pattern accordingly is not desirable.

The OpenCV based implementation of the presented algorithm is available open-source at https://github.com/paroj/pose_calib.

References

  • Atcheson et al. [2010] B. Atcheson, F. Heide, and W. Heidrich. Caltag: High precision fiducial markers for camera calibration. In VMV, volume 10, pages 41–48, 2010.
  • Birdal et al. [2016] T. Birdal, I. Dobryden, and S. Ilic. X-tag: A fiducial tag for flexible and accurate bundle adjustment. In 3D Vision (3DV), 2016 Fourth International Conference on, pages 556–564. IEEE, 2016.
  • Bradski et al. [2005] G. Bradski, A. Kaehler, and V. Pisarevsky. Learning-based computer vision with intel’s open source computer vision library. Intel Technology Journal, 9(2), 2005.
  • Fiala and Shu [2008] M. Fiala and C. Shu. Self-identifying patterns for plane-based camera calibration. Machine Vision and Applications, 19(4):209–216, 2008.
  • Garrido-Jurado [2017] S. Garrido-Jurado. Detection of ChArUco Corners, 2017. URL http://docs.opencv.org/3.2.0/df/d4a/tutorial_charuco_detection.html. [Online; accessed 11-February-2017].
  • Hartley and Zisserman [2005] R. Hartley and A. Zisserman. Multiple view geometry in computer vision. Robotica, 23(2):271–271, 2005.
  • Heikkila and Silvén [1997] J. Heikkila and O. Silvén. A four-step camera calibration procedure with implicit image correction. In Computer Vision and Pattern Recognition, 1997 IEEE Computer Society Conference on, pages 1106–1112. IEEE, 1997.
  • Ma et al. [2004] L. Ma, Y. Chen, and K. L. Moore. Rational radial distortion models of camera lenses with analytical solution for distortion correction. International Journal of Information Acquisition, 1(02):135–147, 2004.
  • Pankratz and Klinker [2015] F. Pankratz and G. Klinker. [poster] ar4ar: Using augmented reality for guidance in augmented reality systems setup. In Mixed and Augmented Reality (ISMAR), 2015 IEEE International Symposium on, pages 140–143. IEEE, 2015.
  • Richardson et al. [2013] A. Richardson, J. Strom, and E. Olson. AprilCal: Assisted and repeatable camera calibration. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), November 2013.
  • Sturm and Maybank [1999] P. F. Sturm and S. J. Maybank. On plane-based camera calibration: A general algorithm, singularities, applications. In Computer Vision and Pattern Recognition, IEEE Computer Society Conference on., volume 1. IEEE, 1999.
  • Sun and Cooperstock [2005] W. Sun and J. R. Cooperstock. Requirements for camera calibration: Must accuracy come with a high price? In Application of Computer Vision, WACV/MOTIONS’05 Volume 1. Seventh IEEE Workshops on, volume 1, pages 356–361. IEEE, 2005.
  • Triggs [1998] B. Triggs. Autocalibration from planar scenes. In Computer Vision—ECCV’98, pages 89–105. Springer, 1998.
  • Tsai [1987] R. Y. Tsai. A versatile camera calibration technique for high-accuracy 3d machine vision metrology using off-the-shelf tv cameras and lenses. Robotics and Automation, IEEE Journal of, 3(4):323–344, 1987.
  • Weng et al. [1992] J. Weng, P. Cohen, and M. Herniou. Camera calibration with distortion models and accuracy evaluation. IEEE Transactions on Pattern Analysis & Machine Intelligence, (10):965–980, 1992.
  • Zhang [2000] Z. Zhang. A flexible new technique for camera calibration. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 22(11):1330–1334, 2000.