Direct Triangulation with Spherical Projection for Omnidirectional Cameras

06/08/2022
by Ciaran Eising, et al.
University of Limerick

In this paper, it is proposed to solve the problem of triangulation for calibrated omnidirectional cameras through the optimisation of ray-pairs on the projective sphere. The proposed solution boils down to finding the roots of a quadratic function, and as such is closed form, completely non-iterative and computationally inexpensive when compared to previous methods. In addition, even though the motivation is clearly to solve the triangulation problem for omnidirectional cameras, it is demonstrated that the proposed methods can be applied to non-omnidirectional, narrow field-of-view cameras.


I Introduction

Triangulation is the process of finding an estimate of a point in 3D space given its projection in at least two images, given the set of projection matrices that describe the relative positions and orientations of the cameras (at least up to scale), and given the camera intrinsics (except in the case of projective reconstruction). Naturally, in the absence of measurement uncertainty this is trivial, but in the presence of uncertainty one should optimise the projected point. This is typically done for pinhole cameras without considering lens distortion. A very nice overview of the various methods of triangulation under the pinhole assumption is given in [1], including discussions on linear triangulation, iterative techniques, and norm-optimal triangulation. As they provide a quite complete overview, readers are referred to that paper.

Omnidirectional cameras have found much use recently, as they have the distinct advantage of an extremely wide field of view. Fisheye cameras have found useful applications over the last number of decades, including localisation [2], Simultaneous Localisation and Mapping [3], and object detection [4], among many others. Such cameras offer a significantly wider field-of-view than standard cameras, often greater than 180°. In consumer electronics, 360° cameras are becoming popular [5], though the images from such cameras are typically stitched from a camera cluster, e.g., dual fisheye [6].

In this paper, a completely closed form, non-iterative solution to the spherical triangulation problem is proposed, which selects the point that optimises the reprojection error to the unit sphere. Such an approach is suitable for omnidirectional cameras, as discussed in [7], as the spherical camera model can have a complete surround field of view. In fact, in [7, §13.4.1.1], Förstner and Wrobel describe an iterative approach for triangulation based on the spherical camera model. What is proposed in this paper optimises the same function, and numerical equivalence to the Förstner-Wrobel method is demonstrated. However, the triangulation proposed here is significantly simpler and computationally cheaper, as it directly solves the triangulation problem without iteration.

Of course, any two-view triangulation will always be suboptimal, due to the implicit assumption of complete knowledge of the relative pose. A rigorous solution is achieved only within a bundle adjustment framework, whereby the scene structure and camera pose are optimised over many frames. Triangulation is typically used to find initial values for bundle adjustment. However, one should not dismiss triangulation because of this, as a good initial seed is useful for the optimisation within bundle adjustment. Bundle adjustment is a topic for textbooks, and the reader is referred to [7].

II Spherical Triangulation

In this section, the spherical triangulation methods are described. First, the spherical versions of the Midpoint and Direct Linear triangulation methods are presented, followed by two ray optimisation methods.

II-A Omnidirectional Camera Models

What is initially required is an injective map from the image domain Ω ⊂ ℝ² to the unit central projective sphere S² embedded in Euclidean space ℝ³,

u ↦ s,

where u ∈ Ω and s ∈ S². In principle, any appropriate definition for the mapping function can be used. Such a mapping is done via a model of the camera geometry, of which several exist for omnidirectional cameras (e.g., as discussed in [8, 9, 10]).

II-B Euclidean and similarity reconstruction

In the following sections, it is assumed that the camera poses, described by the rotation matrices R and R′ and the camera position vectors c and c′, are known in an external world coordinate system. This may not always be the case. Often the relative poses of the cameras are only known up to an undetermined scale factor. For two cameras, this is represented either as an essential matrix E or is decomposed into a rotation matrix R and a translation vector t (recalling that E = [t]×R, where [t]× is the matrix representation of the cross product with t). The scale ambiguity is represented here by the fact that only a unit vector t in the direction of translation is known, and the translation distance is unknown.

In such a case, a Euclidean triangulation (without scale ambiguity) is reduced to a similarity triangulation (with scale ambiguity). In the following sections, the position of the reference camera can be replaced with the zero vector, the position of the second camera becomes the unit translation vector t, the rotation of the first camera can be set to the identity, and the rotation of the second camera is the relative rotation per the decomposition of the essential matrix. Beyond these adaptations, the following works with no issues for similarity triangulation.
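As a hedged illustration of this setup (not part of the original paper), the sketch below recovers a relative pose from an essential matrix using OpenCV's decomposeEssentialMat; the cheirality test needed to select the correct candidate rotation/translation is omitted.

import cv2
import numpy as np

def similarity_setup(E):
    # Sketch only: decompose E into candidate relative poses and adopt the
    # similarity-triangulation convention (reference camera at the origin,
    # identity rotation; second camera at the unit translation vector).
    # The cheirality check to pick between R1/R2 and +/-t is omitted.
    R1, R2, t = cv2.decomposeEssentialMat(E)
    R_ref, c_ref = np.eye(3), np.zeros(3)
    R_sec, c_sec = R1, t.ravel()          # first candidate, unit-norm baseline
    return (R_ref, c_ref), (R_sec, c_sec)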

II-C Midpoint method

The midpoint method is outlined in [11] for the case of points in the projective plane. Here, the equivalent method for points on the unit sphere is described. Assuming the two observed rays, with directions d and d′ rotated into a common coordinate system and originating at the camera centres c and c′, the equation c + λd = c′ + λ′d′ is formed, giving three equations in the two unknowns λ and λ′, which can be solved using linear least squares. In this case, c + λd and c′ + λ′d′ define the point along each ray that is closest to the other ray. The midpoint is given by the mean of these two points. Alternatively, to avoid an explicit least-squares algorithm, the closed-form solution of the corresponding 2×2 normal equations can be used.

In the results presented later, Midpoint is used to refer to this method.
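A minimal sketch of this spherical Midpoint method is given below (an illustration, not the paper's reference implementation), assuming the rays have already been rotated into a common world frame.

import numpy as np

def midpoint_triangulate(c0, d0, c1, d1):
    # Solve c0 + l0*d0 ~= c1 + l1*d1: three equations in the two unknowns
    # l0, l1, solved by linear least squares.
    A = np.column_stack([d0, -d1])
    b = c1 - c0
    (l0, l1), *_ = np.linalg.lstsq(A, b, rcond=None)
    p0 = c0 + l0 * d0          # point on ray 0 closest to ray 1
    p1 = c1 + l1 * d1          # point on ray 1 closest to ray 0
    return 0.5 * (p0 + p1)     # the midpoint estimate of the 3D point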

II-D Spherical linear triangulation

Linear triangulation in the case of features on the projective plane is well known [11]. However, consider s as the projection of the 3D point X onto the unit sphere S². In the absence of uncertainty, s is collinear with the ray from the camera to X, i.e., s × R⊤(X − c) = 0, where R and c describe the orientation and position of the camera respectively. The cross product can be made explicit using the skew-symmetric matrix [s]×, so that [s]×R⊤X = [s]×R⊤c. An equation of the form AX = b can be formed, which can be solved using the pseudo-inverse A⁺ = (A⊤A)⁻¹A⊤. In this case,

A = [ [s]×R⊤ ; [s′]×R′⊤ ]   (1)

and

b = [ [s]×R⊤c ; [s′]×R′⊤c′ ].   (2)

Naturally, at least two observations of X are required for linear triangulation. Multiple observations can be used for triangulation by extending A and b in (1) and (2). In the results presented later, the term Sph-Lin refers to this method.
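The following sketch illustrates the stacked linear system (1)–(2) as reconstructed above, assuming the convention that an observation satisfies s ∝ R⊤(X − c); it is an illustration only, and the convention may differ from the authors' implementation.

import numpy as np

def skew(v):
    # Matrix representation of the cross product, [v]x.
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def sph_linear_triangulate(observations):
    # observations: iterable of (s, R, c) with s a unit ray in camera
    # coordinates, R and c the camera orientation and position.
    A_rows, b_rows = [], []
    for s, R, c in observations:
        M = skew(s) @ R.T
        A_rows.append(M)
        b_rows.append(M @ c)
    A = np.vstack(A_rows)
    b = np.concatenate(b_rows)
    return np.linalg.pinv(A) @ b    # least-squares estimate of X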

II-E Direct spherical triangulation

Fig. 1: Spherical triangulation

II-E1 Cost function

It has long been known that two points in a perspective image pair (denoted by the correspondence x ↔ x′) that observe the same 3D point must adhere to the epipolar constraint in the absence of uncertainty, i.e., x′⊤Ex = 0 [12]. Geometrically, this means that the points should lie on the corresponding epipolar lines in each of the perspective images. Hartley and Sturm [11] exploited this fact to find the optimal epipolar line pair in the presence of uncertainty, onto which the point correspondences were projected. However, this cannot be used for omnidirectional imagery, as the perspective image is a very poor model of such a camera. Instead, a similar process to select the optimal epipolar plane is employed.

In spherical coordinates, it is understood that a noisy correspondence pair s ↔ s′ does not generally satisfy the epipolar constraint s′⊤Es = 0. The goal is to find the pair ŝ ↔ ŝ′ that is as close as possible to the original pair s ↔ s′ while satisfying the epipolar constraint, i.e., that minimises the function

d(s, ŝ)² + d(s′, ŝ′)²   subject to   ŝ′⊤Eŝ = 0.   (3)

This is similar at first glance to the cost function used by Hartley and Sturm [11], but in this case d(·, ·) refers to the 3D Euclidean distance (Figure 1). The restriction that ŝ′⊤Eŝ = 0 means that the estimated pair must both lie on a common epipolar plane Π. The goal is, therefore, to find the optimal parameterisation of Π. This is also the same function that is optimised by the Förstner-Wrobel triangulation [7, §13.4.1]. However, as mentioned previously, Förstner and Wrobel describe an iterative solution to finding the optimised point pair, whereas here a direct solution is proposed.

II-E2 Parameterising the plane

The following relates to Figure 1. In the case that an estimate of the essential matrix E is available, it is decomposed into a rotation matrix R and translation vector t, which describe the relative pose of the two cameras. As the translation is resolved only up to scale, the translation vector t is represented as a unit vector. t defines the camera baseline and is in fact the epipole e on the first spherical surface (the correspondingly rotated vector defines the epipole on the second spherical surface). The goal is to find the optimal epipolar plane, which is one of the pencil of planes defined by the baseline between the camera positions.

Firstly, observe that in the first spherical image, the epipole e is the image of the other camera position on the unit sphere, and lies on the baseline between the cameras. Now parameterise a pencil of planes in Hessian normal form:

n(λ)⊤x = 0,   n(λ) = λn₁ + n₂,   (4)

where n₁ and n₂ can be any non-parallel normal vectors perpendicular to the baseline t. It therefore makes some sense to pick normal vectors that simplify the problem as much as possible. The basis of the first spherical image can be changed such that the z-axis aligns with the epipole e, by applying the appropriate rotation Rₑ to the measured feature s. Note that Rₑ is fixed per camera pair, so it can be applied to all features as a pre-processing step.

If the z-axis z = [0, 0, 1]⊤ is not already aligned with e, a rotation can be defined by

k = (e × z)/‖e × z‖,   θ = arccos(e⊤z),   (5)

where k and θ are the axis-angle representation of the rotation and can be converted trivially to the matrix representation Rₑ. ‖·‖ refers to the Euclidean norm. In the case that the z-axis is already aligned with e, then Rₑ is set to the identity. From now on, each feature refers to the feature in the new rotated coordinate system. That is, all features s are replaced with Rₑs.
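A small sketch of this pre-rotation is given below, using the Rodrigues formula for the axis-angle pair in (5) as reconstructed here; the handling of the anti-parallel case is an added assumption.

import numpy as np

def rotation_to_z(e):
    # Rotation R_e such that R_e @ e is the +z axis (Rodrigues formula).
    e = e / np.linalg.norm(e)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(e, z)
    s = np.linalg.norm(v)              # sin(theta)
    c = float(e @ z)                   # cos(theta)
    if s < 1e-12:                      # already aligned or anti-parallel
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    k = v / s                          # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)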

Now, with the epipole aligned to the z-axis, it is possible to select n₁ = [1, 0, 0]⊤ and n₂ = [0, 1, 0]⊤. This combination can be used to represent any plane that contains the camera baseline, except for the plane orthogonal to n₁, which is dealt with shortly. The parameterisation of the plane becomes:

n(λ) = [λ, 1, 0]⊤,   (6)

where λ ∈ ℝ.

Here, λ can be arbitrarily large. For example, in the case that s is close to the plane orthogonal to the x-axis (i.e., the y-z plane), λ will be some large value, and as s approaches the y-z plane, numerical issues can dominate. It is, therefore, desirable to keep the value of λ within some reasonable range. In the case that s is closer to the y-z plane than the x-z plane (i.e., |s_x| < |s_y|), then n₁ and n₂ can be swapped, and (6) becomes

n(λ) = [1, λ, 0]⊤,   (7)

where λ ∈ ℝ. In all practical cases, by selecting between (6) and (7) based on the proximity of s to the x-z and y-z planes, the valid range of λ is kept to approximately [−1, 1]. It may drift slightly above 1 or below −1 in some cases, depending on the second observation s′.

II-E3 Minimizing sum of squares

The aim is to pick the value of λ that minimises the squared distance of the observed correspondence pair to the plane with normal n(λ). The cost function is therefore

C(λ) = d(s, n(λ))² + d(s′, n(λ))²,   (8)

where d(s, n) denotes the Euclidean distance of the point s on the unit sphere to the plane through the origin with normal n.

Again, to simplify a bit, s′ is put in the same orientation as s by applying the relative rotation between the camera orientations (followed by Rₑ). If there exists an external world coordinate system, the relative rotation is determined from the rotation matrices R and R′ that describe the orientation of the two cameras. From here, this updated feature is simply referred to as s′.

The Euclidean distance of a point s on the unit sphere to the plane with normal n is given by

d(s, n) = |n⊤s| / ‖n‖,   (9)

and the total squared distance is given by

C(λ) = N(λ) / (λ² + 1),   (10)

where

N(λ) = (λs_x + s_y)² + (λs′_x + s′_y)²,   (11)

with s_x, s_y and s′_x, s′_y denoting the x- and y-components of s and s′ (taking the non-swapped parameterisation (6); the swapped case follows analogously).

Elementary calculus tells us that the minima and maxima of C(λ) occur when dC/dλ = 0. The derivative of C(λ) is given by

dC/dλ = 2(−Pλ² + Qλ + P) / (λ² + 1)²,   with P = s_x s_y + s′_x s′_y and Q = s_x² + s′_x² − s_y² − s′_y².   (12)

The solutions of dC/dλ = 0 occur when the numerator (which is a quadratic polynomial in λ) is equal to zero. Therefore, the numerator is equated with zero and the roots are obtained using the standard quadratic formula. In fact, only one of the two roots need be evaluated, as taking the negative sign in the quadratic formula leads to the smaller value of the cost (10), i.e., the minimum of C(λ). Therefore, λ can be directly estimated as:

λ̂ = (Q − √(Q² + 4P²)) / (2P).   (13)

λ̂ will always be real, as Q² + 4P² ≥ 0. One must handle the case that P = 0, in which case the root of the remaining linear term is λ̂ = 0, except in the case that Q = 0. In the case that P = 0 and Q = 0, the derivative vanishes identically, and thus there is no unique solution and λ̂ remains undefined. Thus, the optimal plane normal is

n̂ = λ̂n₁ + n₂.   (14)

The correspondences on the unit spheres for the two views are corrected through orthogonal projection onto the optimal plane described by the normal vector n̂:

ŝ = s − (n̂⊤s / n̂⊤n̂) n̂,   and similarly for ŝ′.   (15)

This vector need not lie on the unit sphere. Of course, if needed, it can be projected to the unit sphere by normalisation. However, as is discussed soon, the Midpoint method will be used for the final triangulation, and this does not require the vectors to lie on the unit sphere. In the results presented later, this method is referred to as Sph-Quad.
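The sketch below illustrates the Sph-Quad plane selection and the correction (15), following the reconstruction of (10)–(14) above; the swap between (6) and (7) is decided from the first observation, which is an assumption on the exact rule used, and the candidate roots are compared by their cost rather than by sign, which is equivalent but more defensive.

import numpy as np

def sph_quad_normal(s1, s2):
    # s1, s2: observations rotated so that the baseline is the z-axis.
    x1, y1, x2, y2 = s1[0], s1[1], s2[0], s2[1]
    swap = abs(x1) < abs(y1)            # keep the plane parameter bounded
    if swap:
        x1, y1, x2, y2 = y1, x1, y2, x2

    def cost(lam):                      # squared-distance cost C(lambda)
        return ((lam * x1 + y1) ** 2 + (lam * x2 + y2) ** 2) / (lam ** 2 + 1.0)

    P = x1 * y1 + x2 * y2
    Q = x1 ** 2 + x2 ** 2 - y1 ** 2 - y2 ** 2
    if abs(P) < 1e-12:
        roots = [0.0]                   # degenerate quadratic
    else:
        disc = np.sqrt(Q ** 2 + 4.0 * P ** 2)
        roots = [(Q - disc) / (2.0 * P), (Q + disc) / (2.0 * P)]
    lam = min(roots, key=cost)

    return np.array([1.0, lam, 0.0]) if swap else np.array([lam, 1.0, 0.0])

def project_to_plane(s, n):
    # Orthogonal projection of an observation onto the epipolar plane (15);
    # the corrected vector need not have unit length.
    return s - (n @ s) / (n @ n) * n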

II-E4 Minimizing sum of magnitudes

Hartley and Sturm also proposed an approach to minimise the sum of magnitudes of distances for perspective images and demonstrated that this had slightly better performance than the sum-of-squares approach when considering reprojection error [11], albeit at the cost of requiring root finding over an 8th-order polynomial. It is worth investigating if a similar approach can be taken for the case of a spherical camera model. For the moment, assume the non-swapped parameterisation (6), such that n(λ) = [λ, 1, 0]⊤. The aim is to select the value of λ that minimises

C(λ) = (σ n(λ)⊤s + σ′ n(λ)⊤s′) / ‖n(λ)‖,   (16)

where σ and σ′ are equal to −1 or +1 depending on the signs of n(λ)⊤s and n(λ)⊤s′ respectively.

The derivative is given as

(17)

Setting the numerator equal to zero and rearranging gives

(18)

σ and σ′ are functions of λ, the value of which is being sought. It is desirable to remove these, so both sides are squared and rearranged to give a quadratic in λ.

(19)

The roots of this are found using the quadratic formula, resulting in

(20)

The correct root is found by selecting the value of λ that returns the smallest value of (16).

Following the above steps for the case that n₁ and n₂ are swapped (i.e., the parameterisation (7)), it can be seen that only the denominator of (20) changes. Thus, the complete solution for λ is

(21)

In the results presented later, this method is referred to as Sph-Abs.
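A corresponding sketch of the Sph-Abs selection is given below; the stationary points follow from the reconstruction of the derivative condition above, the swapped parameterisation and degenerate denominators are handled only crudely, and the candidate giving the smallest value of (16) is kept.

import numpy as np

def sph_abs_normal(s1, s2):
    # s1, s2: observations rotated so that the baseline is the z-axis
    # (non-swapped parameterisation (6) assumed for brevity).
    x1, y1, x2, y2 = s1[0], s1[1], s2[0], s2[1]

    def cost(lam):                      # sum-of-magnitudes cost (16)
        return (abs(lam * x1 + y1) + abs(lam * x2 + y2)) / np.sqrt(lam ** 2 + 1.0)

    candidates = []
    for num, den in [(x1 + x2, y1 + y2), (x1 - x2, y1 - y2)]:
        if abs(den) > 1e-12:
            candidates.append(num / den)
    if not candidates:
        candidates = [0.0]
    lam = min(candidates, key=cost)
    return np.array([lam, 1.0, 0.0])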

II-E5 Note on the geodesic distance to epipolar plane

It is perhaps a little more natural to think about the error on the surface of S², as this is the projection surface, in place of the Euclidean error in ℝ³. The geodesic distance is the angle between a point and the great circle defined by the intersection of the plane Π with the unit sphere S². The corresponding cost is formed from the angles α and α′ between the plane and s and s′ respectively. Using the small-angle assumption:

(22)

Noting that d(s, n) = sin α ≈ α for small values of α, the same solution as in Sec. II-E3 is realised.

The same can be realised for the minimisation of the magnitudes,

(23)

which is the same solution as in Sec. II-E4. The assumption that the angles are small is valid; a large angle would mean a poor feature registration, or a feature on a moving object, which cannot be triangulated.

II-E6 Triangulation

The corrected pair ŝ, ŝ′ now lie on the plane Π and, except for numerical issues, the corresponding rays will intersect at a unique point in ℝ³. A basis for the two-dimensional subspace of ℝ³ corresponding to the plane Π could be found, and the intersection calculated in that subspace. The linear method defined by (1) and (2) could also be used. However, a computationally less expensive, and equally valid, approach is to use the Midpoint method discussed earlier.

Förstner and Wrobel [7, Alg. 21] describe a spherical triangulation method that can return a point at infinity and handle rays that converge in a backward direction. Their triangulation method is based on the observation that the view of the end points of a line segment in one camera is the mathematical equivalent of the observation of a single point from two camera locations, with the camera baseline playing the role of the line segment [7, §13.3.3, §12.3.3]. Without going into the detail (for which the reader is referred to the mentioned book), it can be thought of as measuring the parallelity of the two observation rays.
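For completeness, one possible composition of the sketches above into a full Sph-Quad triangulation is shown below (rotation_to_z, sph_quad_normal, project_to_plane and midpoint_triangulate are assumed to be in scope, with the same world-frame convention as before).

import numpy as np

def sph_quad_triangulate(s0, R0, c0, s1, R1, c1):
    d0, d1 = R0 @ s0, R1 @ s1                 # rays in the world frame
    t = (c1 - c0) / np.linalg.norm(c1 - c0)   # baseline direction
    Re = rotation_to_z(t)                     # align the baseline with +z

    n = sph_quad_normal(Re @ d0, Re @ d1)     # optimal epipolar-plane normal
    d0_hat = Re.T @ project_to_plane(Re @ d0, n)
    d1_hat = Re.T @ project_to_plane(Re @ d1, n)

    return midpoint_triangulate(c0, d0_hat, c1, d1_hat)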

II-F Assumptions of the proposed solution

The proposed method is optimal if it is assumed that the uncertainty in the ray directions is isotropic and homogeneous. The former indicates that the covariance matrix of each observation is a scaled identity, with the unit-norm constraint enforced. The latter indicates that the variances of the correspondence pair are equal. Additionally, the sum-of-squares optimisation is optimal in the case of Gaussian uncertainty, whereas the sum-of-absolutes solution is optimal in the case of Laplacian uncertainty. A great deal of space is dedicated in [7, §10.2, §12.2.1] to discussing uncertain camera coordinates, both for planar and spherically normalised coordinates. While not wishing to replicate what is contained in that work, it is pertinent to make some observations.

All uncertainty models are approximations of the actual uncertainty characteristics. Consider a non-omnidirectional camera, that is, one whose image is reasonably close to perspective but with radial distortion. Most methods of triangulation in this case assume isotropic and homogeneous coordinate uncertainty on the corrected Euclidean plane [11, 13]. In the presence of radial distortion, the standard approach is to first undistort the coordinates. This, however, means that the uncertainty model assumed by those methods is no longer correct: the image undistortion alters the uncertainty characteristics. Kukelova and Larsson [1], it seems, are the first to attempt to optimise the projection of the point in the presence of radial distortion, providing an iterative method that is based on either the single-parameter division model [14] or the two-parameter polynomial [15] model of radial distortion. While the method of [1] can be extended to higher-order versions of the division or polynomial distortion models, it is not easy to see whether it is extendable to omnidirectional models, which often incorporate trigonometric functions or have no analytic inverse in the case of polynomial projection models.

Regardless of the applicability to omnidirectional cameras, the work of Kukelova and Larsson only addresses radial distortion. There are, however, other sources of inhomogeneity and anisotropy of point uncertainty. For example, as discussed in [7, §13.3.1], if correlation is used for correspondence, the structure tensor scaled with the image uncertainty provides an approximation of the uncertainty of the correspondence displacement. Optics can impact the performance of point correspondences, as lens characteristics tend to worsen towards the image periphery, particularly for omnidirectional/fisheye cameras [16]. Artefacts such as vignetting and reduced optical resolution incur a deviation from the assumed uncertainty model. The point spread function of a camera, particularly a fisheye camera, is often far from isotropic [17].

The aim, of course, is not to dismiss triangulation methods as fruitless. Quite to the contrary, it would seem that despite these confounding factors, the triangulation performance reported over the last couple of decades is at least acceptable for the given applications, particularly when acknowledging that, in fact, all triangulation methods are suboptimal as they assume a known and uncertainty-free relative pose of the camera(s), which is resolved only in the context of bundle adjustment.

III Experimental Evaluation

In the results presented, F-W refers to the Förstner-Wrobel triangulation method [7, Alg. 21], and Pln-Poly refers to Hartley-Sturm's polynomial optimisation on the image plane [11], along with the terms already introduced (being Midpoint, Sph-Lin, Sph-Quad, and Sph-Abs).

In this section, an experimental evaluation of the methods in this paper, compared to existing methods, is presented. The aims are threefold: to demonstrate 1) the numerical equivalence to F-W, 2) the effectiveness of the method for both omnidirectional and non-omnidirectional cameras, and 3) the computational efficiency compared to existing methods. To compare spherical methods, an omnidirectional set of near points is created. For non-omnidirectional reconstruction, a set of narrow field-of-view far points is used. The following outcomes of the experiments can be expected:

  • Sph-Quad and F-W should return numerically the same results across all tests, as they optimise the same function.

  • In the case that Gaussian uncertainty is added to the synthetic data, Sph-Quad should perform better than Sph-Abs. The inverse should be true for when Laplacian uncertainty is added.

  • All of the methods that optimise the ray pair should perform better than the methods that do not (being Midpoint and Sph-Lin), particularly for the reprojection error.

  • Sph-Quad and Sph-Abs should be computationally faster than the other optimisation methods, as they are direct methods with no iterative parts.

  • For non-omnidirectional cameras, Pln-Poly should outperform the spherical methods. However, the difference should be mild, considering that the surface of the sphere approaches a planar surface as the field-of-view shrinks.

III-A Evaluation on Synthetic Data

III-A1 Synthetic data creation

The triangulation methods are tested on synthetically generated 3D data. The scenes were generated by defining a grid of points in three dimensions that is 20 units in the X and Z dimensions and 10 units in the Y dimension. For the omnidirectional set of points, the grid is placed 1 unit from the primary camera (near points), and for the narrow field-of-view set of points, it is placed at 10 m from the camera (far points). The primary camera is fixed at the origin, and the second camera is randomly placed at unit distance. A random rotation about each axis, within a limited range, is applied to the second camera. In the case of the tests on the spherical triangulation, this rotation is inconsequential. Figure 2 shows the grid of features in 3D.

For the case of modelling uncertainty on the sphere, the points are projected to each of the unit spheres, giving the true coordinates on the spherical image pair. The direction vectors are then perturbed, giving the modelled measurements s and s′. The perturbation is added by applying a random rotation to each vector, the rotation being represented as a quaternion [7, §8.1.5.3] whose vector part consists of random normal variables (zero mean, with variance set per test). When modelling uncertainty on the omnidirectional image or on the projective plane, a 2D random normal variable is added to the projected location.
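A hedged sketch of this perturbation is shown below; the exact quaternion convention of [7, §8.1.5.3] is not reproduced here, so the unit-quaternion construction (1, q1, q2, q3)/‖·‖ is an assumption.

import numpy as np
from scipy.spatial.transform import Rotation

def perturb_direction(s, sigma, rng=None):
    # sigma: standard deviation of the random vector part of the quaternion.
    if rng is None:
        rng = np.random.default_rng()
    q = rng.normal(0.0, sigma, size=3)         # random vector part
    quat = np.concatenate([q, [1.0]])          # scipy order: (x, y, z, w)
    quat /= np.linalg.norm(quat)
    return Rotation.from_quat(quat).apply(s)   # small random rotation of s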

Finally, the performance of the triangulation when the uncertainty is in the omnidirectional image is tested. A fisheye camera is modelled by projecting the 3D points to the image using the double sphere model, with the parameters of the BF2M2020S23 lens from [8] (focal lengths, principal point, and the ξ and α parameters of the model). The camera is only calibrated using data from within the image area, and as such the model may not be valid outside of this range. Therefore, any points from the set that do not project to within the image bounds are removed. Figure 2 shows the two omnidirectional images of the grid of features.
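For reference, a sketch of the double sphere projection of [8] used to synthesise the fisheye image is shown below; the lens parameter values themselves are taken from [8] and are not reproduced here, and the check that a point lies in the model's valid projection region is omitted.

import numpy as np

def double_sphere_project(X, fx, fy, cx, cy, xi, alpha):
    x, y, z = X
    d1 = np.sqrt(x * x + y * y + z * z)   # distance to the first sphere centre
    zs = xi * d1 + z                      # shift onto the second sphere
    d2 = np.sqrt(x * x + y * y + zs * zs)
    denom = alpha * d2 + (1.0 - alpha) * zs
    return np.array([fx * x / denom + cx, fy * y / denom + cy])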

Fig. 2: Synthetically generated test data, showing the grid of 3D points (near points) and the randomised camera positions and the two synthetically created images.

III-A2 Comparison to Förstner-Wrobel Triangulation

As mentioned previously, both Sph-Quad and F-W optimise the same function under the coplanarity constraint (3). Given that both methods optimise the same function, they should return the same results, within some tolerance to allow for computational differences (for example, the fact that the F-W method is iterative with a finite number of iterations). It is worth demonstrating this computationally to remove any doubts.

Sph-Quad and F-W are applied to the synthetic spherical image data (near points) in the presence of Gaussian uncertainty, returning the optimised pairs for each method. In the F-W method, a tight convergence tolerance is used. In Table I, the mean and maximum differences of the estimates and the mean residuals are shown, for varying levels of uncertainty. The difference between the corresponding optimised vectors is measured using the Euclidean norm, and the residual is the Euclidean distance between each optimised vector and the original observation (similarly for the case of F-W triangulation). The difference between the resultant optimised vectors is very small, and in particular the residuals are almost identical.

TABLE I: Comparison to Förstner-Wrobel [7, Alg. 21], for added Gaussian uncertainty σ ∈ {0.001, 0.01, 0.1}.

III-A3 Synthetic Results

Fig. 3: Residuals of the various spherical triangulation methods on the near points, with both Gaussian (1st column) and Laplacian (2nd Column) uncertainty added on the projection sphere . In all cases, Sph-Quad, Sph-Abs and F-W are almost identical. Regardless of whether Laplacian or Gaussian uncertainty is added, the methods perform similarly. Additionally, a fisheye camera is modelled, and Gaussian uncertainty is added on the modelled fisheye image (3rd column), in which case the Midpoint method performs marginally worse than the optimisation methods.

In Figure 3, the results for the spherical triangulation methods are presented, using the near points. The sum-of-squares optimisation approach (Sph-Quad) is optimal in the case of normal uncertainty, whereas the sum-of-absolutes (Sph-Abs) optimisation is optimal for Laplacian uncertainty. For this reason, it makes sense to check the performance of the triangulation methods in the presence of both Gaussian and Laplacian uncertainty. Additionally, the set of points projected to the omnidirectional image has normal uncertainty applied on the image; these are unprojected to the sphere prior to spherical triangulation. In this case, none of the methods will be optimal, as the uncertainty model has been distorted by the process of raising the image points to the sphere.

Finally, it should be noted that some of the steps in Sph-Quad and Sph-Abs are simpler than in Pln-Poly, and as such should be computationally less expensive. It is therefore interesting to see how well the spherical methods work when applied to standard, narrow field-of-view cameras. In this case, the far points are projected to the projective plane, and normal uncertainty is added to the projected features (i.e., on the plane). Results are given in Figure 4. Even though the uncertainty is modelled on the projective plane, residuals on the sphere and in 3D are also presented for completeness.

Fig. 4: Residuals of the various triangulation methods (including the planar methods) for the far points, with uncertainty added on the projective plane . The Sph-Quad, Sph-Abs, F-W and Pln-Poly methods perform almost identically in all cases, with the Midpoint and Sph-Lin methods being significantly worse.

III-B Evaluation on Real Data

Of course, with any real data sequence, the issue is always having a reliable ground truth. Calibration checkerboard sequences, thankfully, easily allow for an implicit ground truth without requiring an external reference. The sets of fisheye calibration images from [18, 19] are used, along with two sets of narrow field-of-view images from [20]. The checkerboard pattern is extracted using the OpenCV findChessboardCorners and cornerSubPix functions, and the ground truth is generated using fisheye::calibrate [21]. The ground truth consists of the position and orientation of the cameras relative to the checkerboard pattern. The results are given in Table II. Again, the Euclidean norm is used for the reprojection and 3D errors.

            LMS Fisheye [18]       KIT IPF Fisheye [19]    MATLAB GoPro [20]        MATLAB Mono [20]
                                                           (narrow, mild dist.)     (narrow, no dist.)
Midpoint    2.010  4.238  1.214    4.058  8.336  1.347     1.982  3.643  6.281      5.247  1.311  1.797
Sph-Lin     2.075  4.383  1.256    4.071  8.361  1.385     1.974  3.741  6.252      5.239  1.293  1.846
Sph-Quad    1.672  3.786  1.007    3.784  8.123  1.249     1.785  3.366  6.004      5.068  1.201  1.734
Sph-Abs     1.716  3.895  1.008    3.804  8.167  1.245     1.794  3.343  5.994      5.066  1.207  1.736
Pln-Poly    1.986  4.128  1.148    4.080  8.413  1.366     1.809  3.340  5.999      5.068  1.202  1.733
TABLE II: Median reprojection and 3D errors (three error measures per dataset) for the real calibration image datasets. F-W is not presented here, as it returns the same results as Sph-Quad.
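A hedged sketch of the ground-truth extraction described above is given below; the checkerboard pattern size, refinement window, and termination criteria are placeholders rather than the values used for these datasets.

import cv2
import numpy as np

def board_ground_truth(image_files, pattern=(9, 6)):
    # Object points of the checkerboard in its own coordinate system.
    objp = np.zeros((1, pattern[0] * pattern[1], 3), np.float32)
    objp[0, :, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_pts, img_pts, size = [], [], None
    for fname in image_files:
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if not found:
            continue
        corners = cv2.cornerSubPix(
            gray, corners, (5, 5), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-6))
        obj_pts.append(objp)
        img_pts.append(corners.reshape(1, -1, 2))

    # fisheye::calibrate returns, per image, the pose of the camera relative
    # to the checkerboard, which serves as the implicit ground truth.
    rms, K, D, rvecs, tvecs = cv2.fisheye.calibrate(obj_pts, img_pts, size, None, None)
    return K, D, rvecs, tvecs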

III-C Runtime

All methods have been implemented in Python. No specific optimisations have been performed, and NumPy functions have been used where appropriate (e.g., root finding, matrix inversion). Therefore, the results here should not be taken as an absolute measure of potential computational performance, but rather as a comparison between the approaches. Table III shows the runtimes. Where opt. only is noted, only the optimisation of the features on the spherical surface is considered; otherwise, the runtime includes the final triangulation. So that like can be compared with like, the F-W method with the Midpoint method as the final triangulation is also presented.

Algorithm Runtime
Midpoint 11.7
Sph-Lin 114.6
Sph-Quad (Midpoint) 24.4
Sph-Abs (Midpoint) 28.4
F-W 865.9
F-W (Midpoint) 252.0
Pln-Poly 307.3
Sph-Quad (Opt. only) 14.7
Sph-Abs (Opt. only) 18.5
F-W (Opt. Only) 225.3
Pln-Poly (Opt. Only) 283.2
TABLE III: Runtime per triangulated point

III-D Discussion

Unsurprisingly, throughout the results, F-W and Sph-Quad return the same results, though Sph-Quad is computationally less expensive. Somewhat surprisingly, in the synthetic data, Sph-Abs is almost the same as Sph-Quad (Figure 3, left columns), regardless of the uncertainty model. There are divergences in the omnidirectional real data (Table II, left columns), where Sph-Quad outperforms the other methods, including Sph-Abs, in reprojection. Throughout, Midpoint and Sph-Lin perform worse than the optimisation methods, as would be expected.

In the synthetic far points data (Figure 4), the Pln-Poly method is only very slightly better than the spherical optimisation methods (Sph-Quad and Sph-Abs). In contrast, the Midpoint and Sph-Lin methods perform quite poorly on this data. In the real data (Table II, right columns), there is little difference overall between the Sph-Quad, Sph-Abs, and Pln-Poly methods. This supports the earlier assertion that the spherical optimisation methods proposed here can also be used on narrow field-of-view cameras, with reduced computational cost compared to Pln-Poly. However, it must be noted that the spherical methods are unsuitable for uncalibrated cameras, unlike Pln-Poly.

As expected, Midpoint is computationally the fastest method. Of the methods that optimise on the projection surface, the Sph-Quad and Sph-Abs methods are an order of magnitude faster than the F-W and Pln-Poly methods.

IV Conclusions

In this paper, an efficient method for triangulation that optimises the reprojection error on a spherical surface is presented and is shown to boil down to the solution of a quadratic equation. The proposed direct spherical optimisation method, based on the sum-of-squares, returns the same result as that proposed by Förstner and Wrobel [7] within computational accuracy. A second method, based on the sum-of-absolutes, is also presented (and also boils down to the solution of a quadratic), but performs slightly worse than the former on the real fisheye image datasets that were tested. Both presented methods are shown to be significantly computationally cheaper than existing methods. Finally, it is also shown that, when applied to calibrated ordinary field-of-view cameras, the results are essentially the same as those of the Hartley-Sturm method [11]. One must note, however, that the methods presented here are only defined for calibrated cameras, whereas that of Hartley-Sturm can be used in the uncalibrated case.

References

  • [1] Z. Kukelova and V. Larsson, “Radial distortion triangulation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 9681–9689.
  • [2] Z. Gu, H. Liu, and G. Zhang, “Real-time indoor localization of service robots using fisheye camera and laser pointers,” in Proceedings of the International Conference on Robotics and Biomimetics (ROBIO), 2014.
  • [3] S. Ji, Z. Qin, J. Shan, and M. Lu, “Panoramic slam from a multiple fisheye camera rig,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 159, pp. 169–183, 2020.
  • [4] J. Zhu, J. Zhu, X. Wan, and C. Xu, “Downside hemisphere object detection and localization of mav by fisheye camera,” in Proceedings of the International Conference on Control, Automation, Robotics and Vision (ICARCV), 2018.
  • [5] T. Jokela, J. Ojala, and K. Väänänen, “How people use 360-degree cameras,” in Proceedings of the 18th International Conference on Mobile and Ubiquitous Multimedia, 2019.
  • [6] T. Ho and M. Budagavi, “Dual-fisheye lens stitching for 360-degree imaging,” in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017, pp. 2172–2176.
  • [7] W. Förstner and B. P. Wrobel, Photogrammetric Computer Vision: Statistics, Geometry, Orientation and Reconstruction, 1st ed.   Springer Publishing Company, Incorporated, 2016.
  • [8] V. Usenko, N. Demmel, and D. Cremers, “The double sphere camera model,” in Proceedings of the International Conference on 3D Vision (3DV), 07 2018.
  • [9] C. Hughes, P. Denny, M. Glavin, and E. Jones, “Equidistant fish-eye calibration and rectification by vanishing point extraction,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2289–2296, 2010.
  • [10] B. Khomutenko, G. Garcia, and P. Martinet, “An enhanced unified camera model,” IEEE Robotics and Automation Letters, vol. 1, no. 1, pp. 137–144, 2016.
  • [11] R. Hartley and P. Sturm, “Triangulation,” Computer Vision and Image Understanding, vol. 68, no. 2, pp. 146–157, 1997.
  • [12] H. C. Longuet-Higgins, “A computer algorithm for reconstructing a scene from two projections,” Nature, vol. 293, no. 5828, pp. 133–135, 1981.
  • [13] K. Kanatani, Y. Sugaya, and H. Niitsuma, “Triangulation from two views revisited: Hartley-sturm vs. optimal correction,” in Proceedings of the British Machine Vision Conference (BMVC), 2008, pp. 18.1–18.10.
  • [14] A. Fitzgibbon, “Simultaneous linear estimation of multiple view geometry and lens distortion,” in Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), 2001.
  • [15] D. C. Brown, “Alternative models for fish-eye lenses,” Photogrammetric Engineering, vol. 32, no. 2, pp. 444–462, 1966.
  • [16] J. Schneider, C. Stachniss, and W. Förstner, “On the accuracy of dense fisheye stereo,” IEEE Robotics and Automation Letters, vol. 1, no. 1, pp. 227–234, 2016.
  • [17] M. Lehmann, C. Wittpahl, H. B. Zakour, and A. Braun, “Resolution and accuracy of nonlinear regression of point spread function with artificial neural networks,” Optical Engineering, vol. 58, no. 4, pp. 1–9, 2019.
  • [18] A. Eichenseer and A. Kaup, “A data set providing synthetic and real-world fisheye video sequences,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2016, pp. 1541–1545.
  • [19] S. Urban and B. Jutzi, “LaFiDa—a laserscanner multi-fisheye camera dataset,” Journal of Imaging, vol. 3, no. 1, 2017.
  • [20] The Mathworks, Inc., MATLAB R2022a: Computer Vision Toolbox Reference, 2022.
  • [21] Itseez, “OpenCV 3.4: Open Source Computer Vision Library,” https://opencv.org/, 2020, online; accessed 16 December 2020.