1 Introduction
Ellipse fitting in images has been a long-researched problem in computer vision [24]. Ellipses can be used for camera calibration [14, 13], for estimating the position of parts in an assembly system [25], or for defect detection in printed circuit boards (PCBs) [21]. This paper deals with a special application of ellipse fitting: the detected ellipses are applied to the calibration of multi-sensor systems via estimating the spatial locations of the spheres corresponding to the ellipses.
In particular, this paper concentrates on two separate problems: (i) automatic and accurate ellipse detection is addressed first; then (ii) the spatial location of the corresponding sphere is calculated, provided that the radius of the sphere is known.
Ellipse detection. There are several solutions for ellipse fitting, but only a few of them can detect ellipse contours both accurately and robustly. Traditional algorithms can be divided into two main groups: (i) Hough transform and (ii) edge following.
Hough transform (HT) based methods for ellipse detection tend to be slow. A general ellipse has five degrees of freedom, and it is found by an exhaustive search on the edge points: each edge pixel in the image votes for the corresponding ellipses [6]. Therefore, evaluating the edge pixels in the five-dimensional parameter space has high computation and memory costs. The Probabilistic Hough Transform (PHT) is a variant of the classical HT: it randomly selects a small subset of the edge points, which is used as input for the HT [19]. The 5D parameter space can also be divided into two pieces: first, the ellipse center is estimated, then the remaining three parameters are found in a second stage [29, 27].

Edge-following methods try to connect line segments, usually obtained by the widely-used Canny edge detector [4, 23]. These segments are refined in order to fit the curve of an ellipse. The method of Kim et al. [18] merges short line segments into longer arc segments, where arc-fitting algorithms are frequently called. Mai et al. published another method [5] based on a similar idea; the difference lies in linking the segments and edge points by adjacency and curvature conditions.
Lu et al. detect ellipses based on arc-support line segments [21]. First, arc-support groups are formed from line segments detected by the Canny [4] or Sobel [17] operators. Then an initial ellipse set is generated, and ellipse clustering is applied to remove the duplicated candidates. Finally, a candidate verification process removes some of the candidates and refits the remaining ones.
[2] proposed the Randomized Hough Transform (RHT) for ellipse detection. Their work is based on [28] but achieves a significantly faster detection time. In addition to randomization, further filtering methods are applied to remove false detections.
The method introduced by Fornaciari et al. approaches real-time performance [9]. It detects arcs from Sobel derivatives and classifies them according to their convexity. Based on their convexity, mutual positions, and implied ellipse centers, the arcs are grouped and the ellipse parameters are estimated. Finally, parameter clustering is applied to remove duplicated ellipses.
Nowadays, deep-learning methods [16] are very popular, and they can also be applied to ellipse detection. However, the authors of this paper think that ellipse detection is a purely geometric problem; therefore, deep-learning solutions are not the best choice for this task. The problem is not only the lack of sufficiently large training datasets but also the time demand of training. Thus, deep-learning based methods are considered overkill for ellipse detection.

Sphere location in 3D. The main application area of the proposed ellipse detector is the calibration of different sensors, especially range sensors and cameras. Usually, chessboards [10] or other planar calibration targets [22] are applied for this task; however, recently spherical calibration objects [20] have also begun to be used for this purpose. The basic idea is that a sphere can be accurately and automatically detected in both depth-sensor data and camera images. The extrinsic parameters can then be computed by point-set registration methods [1] if at least four sphere centers are localized. Unfortunately, the detection of planar targets [10] is inaccurate due to the sparsity of the point cloud measured by a depth camera or LiDAR device.
The theoretical background of our solution is as follows: (i) an ellipse determines a cone in 3D space; (ii) if the radius of the sphere is known, its 3D location can be computed using the fact that the cone is tangent to the sphere.
Our method differs from that of Kummerle et al. [20] in the sense that the 3D sphere and 2D ellipse parameters are analytically determined. The 3D position of the sphere is directly computed from these parameters, contrary to [20], in which the 3D estimation is based on the perspective projection of the reconstructed sphere and the edges are used to tune the parameters.
Contribution. The novelty of the paper is twofold: (i) A novel, robust ellipse estimation pipeline is proposed that yields accurate ellipse parameters. It has no parameters to be set; it is fully automatic. (ii) A 3D location estimation procedure is also proposed. It consists of two steps: first, a rough estimate of the sphere location is given, then the obtained coordinates are refined via numerical optimization. Both steps and substeps are novel to the best of our knowledge.
Structure of the paper. Section 2 shows how a sphere is perspectively projected onto an image and how the ellipse parameters corresponding to the contour points are obtained. Section 3 introduces the RANSAC threshold selection for ellipse estimation via circle fitting. Then, Section 4 describes our method for ellipse detection and 3D center estimation. Section 5 evaluates the performance of the proposed method against state-of-the-art techniques, and finally, Section 6 concludes our work.
2 Theoretical Background: Perspective Projection of Sphere onto Camera Image
In this section, we show how the spatial location of the sphere determines the ellipse parameters in the images.
A calibrated pinhole camera is assumed, thus the intrinsic parameters are known. Let $K$ denote the camera matrix. Its elements are as follows:

$$K = \begin{bmatrix} f_u & 0 & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (1)$$

where $f_u$, $f_v$, and the location $[u_0, v_0]^T$ are the horizontal and vertical focal lengths and the principal point [12], respectively.
Without loss of generality, the coordinate system is assumed to be aligned to the camera. The location of a pixel in the image is denoted by the 2D vector $p = [u, v]^T$. Then the normalized image coordinates are calculated as follows:

$$\tilde{p} = K^{-1} [u, v, 1]^T \qquad (2)$$

where $\tilde{p}$ denotes the normalized (homogeneous) coordinate of $p$.
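As a concrete illustration, the normalization above can be sketched as follows; the intrinsic values in the example are hypothetical, chosen only for demonstration:

```python
# Convert a pixel (u, v) to normalized image coordinates by inverting the
# pinhole mapping u = f_u * x + u_0, v = f_v * y + v_0 (zero skew assumed).
def normalize(u, v, fu, fv, u0, v0):
    return ((u - u0) / fu, (v - v0) / fv)

# Hypothetical intrinsics: 800 px focal lengths, principal point (320, 240).
x, y = normalize(400.0, 300.0, 800.0, 800.0, 320.0, 240.0)  # -> (0.1, 0.075)
```

The principal point maps to the origin of the normalized frame, as expected.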
The back-projection of a pixel corresponds to a ray in 3D space. The spatial points of the ray can be written as $\lambda \tilde{p}$, $\lambda \in \mathbb{R}$. Using the implicit equation of the sphere, i.e. $(X - S)^T (X - S) = r^2$, the intersection of a ray and the sphere can be determined as

$$(\lambda \tilde{p} - S)^T (\lambda \tilde{p} - S) = r^2 \qquad (3)$$

where $r$ and $S$ denote the radius and the center of the sphere, respectively. It is straightforward that the previous equation is a quadratic function in $\lambda$:

$$\lambda^2 \, \tilde{p}^T \tilde{p} - 2 \lambda \, \tilde{p}^T S + S^T S - r^2 = 0 \qquad (4)$$
The quadratic function has a single root when the ray is tangent to the spherical surface, i.e. it touches the border of the sphere. Thus, the discriminant of the quadratic equation equals zero in this case:

$$\left( \tilde{p}^T S \right)^2 - \left( \tilde{p}^T \tilde{p} \right) \left( S^T S - r^2 \right) = 0 \qquad (5)$$

This constraint on the discriminant can be rearranged into the implicit form of a conic section $A x^2 + B x y + C y^2 + D x + E y + F = 0$, written in the normalized coordinates $\tilde{p} = [x, y, 1]^T$, where the coefficients are as follows:

$$\begin{aligned} A &= s_x^2 - d, & B &= 2 s_x s_y, & C &= s_y^2 - d, \\ D &= 2 s_x s_z, & E &= 2 s_y s_z, & F &= s_z^2 - d, \end{aligned} \qquad (6)$$

with $S = [s_x, s_y, s_z]^T$ and $d = S^T S - r^2$.
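The coefficient derivation can be verified numerically. The sketch below recomputes the conic coefficients from a sphere center and radius via the zero-discriminant condition, then checks that a tangent viewing direction satisfies the conic equation; the test sphere is a hypothetical example, not taken from the paper:

```python
import math

def sphere_conic(S, r):
    # Conic coefficients (A, B, C, D, E, F) of the sphere silhouette in
    # normalized image coordinates, from the zero-discriminant condition
    # (x*sx + y*sy + sz)^2 = (x^2 + y^2 + 1) * (||S||^2 - r^2).
    sx, sy, sz = S
    d = sx*sx + sy*sy + sz*sz - r*r
    return (sx*sx - d, 2*sx*sy, sy*sy - d, 2*sx*sz, 2*sy*sz, sz*sz - d)

# Sphere straight ahead at depth 5 with unit radius: the silhouette is the
# circle x^2 + y^2 = tan^2(beta), where sin(beta) = r / ||S|| = 1/5.
A, B, C, D, E, F = sphere_conic((0.0, 0.0, 5.0), 1.0)
x = 1.0 / math.sqrt(24.0)   # tan(beta) for sin(beta) = 1/5
residual = A*x*x + B*x*0.0 + C*0.0 + D*x + E*0.0 + F   # ~0 on the contour
```

For this axial sphere the silhouette degenerates to a circle, which is the special case used later in Sec. 3.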
3 Ellipse Approximation With a Circle
One critical part of the sphere center estimation is to detect the projection of the sphere in the images. As we proved in Sec. 2, this shape is a conic section, usually an ellipse. One particular case is visualized in Fig. 1, where the intersection point of the ellipse axes lands in one of the corners of the image. To robustly fit an ellipse to image points, either HT or RANSAC [7] has to be applied, but both methods are slow. The former needs to accumulate results in the 5D parameter space, and the latter needs a large number of iterations, which principally depends on the inlier/outlier ratio and the number of parameters to be estimated. A general ellipse detector needs five points; however, only three points are needed for circle fitting. Therefore, the iteration number of RANSAC can be significantly reduced by estimating the ellipse with a circle in the first place.

This section introduces a RANSAC threshold selection process for circle fitting. The threshold should be larger than the largest distance between the ellipse and circle contours, but it also has to be small enough not to classify too many points as false-positive inliers. The first condition is fulfilled by our definition of the threshold. The theoretically obtained values in Fig. 5 show that our approach, which is based on the method described in Sec. 3.1, satisfies the second condition as well.
Therefore, one of the novelties of this paper is to show that circle fitting with a higher threshold can be applied for ellipse fitting. The next section introduces how this threshold can be selected.
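The gain from the smaller sample size can be quantified with the standard RANSAC iteration bound; the inlier ratio below (20%) is a hypothetical figure chosen only for illustration:

```python
import math

def ransac_iters(p_success, inlier_ratio, sample_size):
    # Number of random samples needed so that, with probability p_success,
    # at least one sample contains only inliers.
    return math.ceil(math.log(1.0 - p_success) /
                     math.log(1.0 - inlier_ratio ** sample_size))

n_circle  = ransac_iters(0.99, 0.2, 3)   # 3 points determine a circle
n_ellipse = ransac_iters(0.99, 0.2, 5)   # 5 points determine a conic
# At a 20% inlier ratio, the circle sampler needs roughly 25x fewer draws.
```

This is the quantitative reason for approximating the ellipse with a circle first.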
3.1 Threshold Selection for Circle Fitting
The circle model is a rough approximation of ellipses in robust RANSAC-based [7] fitting. Although the error threshold for inlier selection is of paramount importance, it is difficult to define manually. Basically, this threshold is needed because of the noise in the data points. The main idea here is that ellipses can be considered as noisy circles. Realistic camera setups confirm this assumption: if the spherical target is not too close to the image plane, then the ratio of the ellipse axes will be close to one. This holds even in the most extreme visible position, when the ellipse is as far as possible from the principal point, i.e. near one of the image corners, as visualized in Fig. 1.
We propose an adaptive solution to find the threshold based on the ratio of the minor and major axes, denoted by $a$ and $b$, respectively. In our algorithm, the coherence of the variables is defined as $t = r_c (1 - a/b)$, where $r_c$ is the radius of the actually fitted circle in the image and $t$ is the threshold. Hence, the searched variable is the axis ratio $q = a/b$. For a realistic simulation, pixel-level noise has to be added as well.
The discussed 3D problem is visualized in Fig. 1, where not the whole ellipse falls into the area of the image. Since the main goal is to estimate $q$, two angles have to be defined: the angle $\alpha$ between the axis of the cone and the image plane, and the angle $\beta$ between a generator of the cone and the axis of the cone. To find these two angles, consider the two planar sections of the cone which contain the axis of the cone and either the minor or the major axis of the ellipse. The 2D visualizations are pictured in Figs. 2 and 3. These two views are perpendicular to each other because the axes of the ellipse are perpendicular. Fig. 3 shows that the minor axis and the axis of the cone are also perpendicular. If the ellipse center gets further from the principal point, then $\alpha$ is smaller, and the difference between $a$ and $b$ steadily increases, as seen in Fig. 2. However, if $\alpha = 90°$, the major axis is also perpendicular to the axis of the cone: this is the special case when $a = b$ and the projection is a circle whose center is the principal point.
First, the vectors and distances that determine $\alpha$ and $\beta$ have to be defined. The 2D coordinates of the intersection point of the ellipse axes in the image are denoted by $e = [e_u, e_v]^T$, the camera position is the origin $C$, and the distance of the image plane to the camera in the camera coordinate system is the focal length $f$. The 3D coordinates of $e$ become

$$E = [e_u - u_0, \; e_v - v_0, \; f]^T \qquad (7)$$
The angle $\alpha$ between the axis of the cone and the image plane is determined by two vectors: the direction vector $v_1$ from $C$ to $E$, and the vector $v_2$ from $E$ to the principal point $P = [u_0, v_0]^T$, because the line of the major axis of the conic section contains the principal point of the image. The vector coordinates become

$$v_1 = E - C = E, \qquad v_2 = [u_0 - e_u, \; v_0 - e_v, \; 0]^T \qquad (8)$$
Using vectors $v_1$ and $v_2$, the angle $\alpha$ between the image plane and the cone axis is calculated as

$$\alpha = \arccos \left( \frac{\left| v_1^T v_2 \right|}{\| v_1 \| \, \| v_2 \|} \right)$$
Similarly, the angle $\beta$ between a generator and the axis of the cone depends on the distance $d$ of the sphere center to the camera and the radius $r$ of the sphere, as shown in Figs. 2 and 3. In our approach, $r$ is given; however, $d$ is unknown because of the varying position of the sphere. The distance is estimated using the focal length, the radius of the sphere, and the radius $r_c$ of the fitted circle: $d \approx f r / r_c$. Considering the two scalars $d$ and $r$, the angle becomes

$$\beta = \arcsin \left( \frac{r}{d} \right) \qquad (9)$$
The searched ratio $q = a/b$ of the minor and major ellipse axes can be estimated using the angles $\alpha$ and $\beta$ and the special properties of the projection, as discussed in App. C:

$$q = \frac{a}{b} = \frac{\sqrt{\cos^2 \beta - \cos^2 \alpha}}{\cos \beta} \qquad (10)$$

Note that $q = 1$ for $\alpha = 90°$, in accordance with the circular special case.
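Assuming the axis-ratio relation $q = \sqrt{\cos^2 \beta - \cos^2 \alpha} / \cos \beta$, with $\alpha$ the angle between the cone axis and the image plane and $\beta$ the half opening angle of the cone, the sketch below cross-checks it numerically against the silhouette conic of Sec. 2: the same ratio must come out of the eigenvalues of the quadratic part of the conic. The sphere position is a hypothetical example:

```python
import math

def conic_axis_ratio(S, r):
    # Minor/major axis ratio of the sphere silhouette, computed from the
    # eigenvalues of the 2x2 quadratic part [[A, B/2], [B/2, C]] of the conic.
    sx, sy, sz = S
    d = sx*sx + sy*sy + sz*sz - r*r
    A, B, C = sx*sx - d, 2.0*sx*sy, sy*sy - d
    m, h = 0.5*(A + C), math.hypot(0.5*(A - C), 0.5*B)
    lam1, lam2 = abs(m - h), abs(m + h)
    return math.sqrt(min(lam1, lam2) / max(lam1, lam2))

def ratio_from_angles(S, r):
    # Angle-based formula: q = sqrt(cos^2(beta) - cos^2(alpha)) / cos(beta),
    # where sin(beta) = r / ||S|| and the cone axis points towards S.
    dist2 = S[0]**2 + S[1]**2 + S[2]**2
    cos_b2 = 1.0 - r*r / dist2
    cos_a2 = (S[0]**2 + S[1]**2) / dist2
    return math.sqrt((cos_b2 - cos_a2) / cos_b2)

q_conic  = conic_axis_ratio((2.0, 1.5, 10.0), 0.5)
q_angles = ratio_from_angles((2.0, 1.5, 10.0), 0.5)
# Both evaluate to ~0.97: the silhouette is indeed close to a circle.
```

The agreement of the two routes supports the claim of Sec. 3 that the projected contour is well approximated by a circle in realistic setups.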
However, the pixels can be scaled horizontally and vertically, as visualized in Fig. 4. The modified ratio $q'$ depends on the scaling factors $s_u$ and $s_v$, and on the angle $\gamma$ between the direction vector along the $u$ axis and the major axis of the ellipse:

$$q' = q \, \sqrt{ \frac{s_u^2 \sin^2 \gamma + s_v^2 \cos^2 \gamma}{s_u^2 \cos^2 \gamma + s_v^2 \sin^2 \gamma} } \qquad (11)$$
After applying elementary trigonometric identities, the modified ratio becomes

$$q' = q \, \sqrt{ \frac{s_u^2 \tan^2 \gamma + s_v^2}{s_u^2 + s_v^2 \tan^2 \gamma} } \qquad (12)$$
Then the RANSAC threshold can be computed from the detected circle radius $r_c$ and the calculated ratio $q'$ as $t = r_c (1 - q')$.
Fig. 5 shows the validation of the automatic RANSAC threshold estimation with varying depth of the sphere center and varying distance between the ellipse center and the principal point in the image plane, in three different test environments, which are detailed in Sec. 5. The red curve segments show the range of the thresholds applied in our tests. The thresholds are realistic and were successfully applied in our ellipse detection method.
4 Proposed Algorithms
In this section, the proposed algorithms for spatial sphere estimation are overviewed. Basically, the estimation consists of two parts: the first one finds the contours of the ellipse in the image; the second part determines the 3D location of the sphere corresponding to the ellipse parameters. Simply speaking, the 2D image processing task has to be solved first, then the 3D estimation of the location is carried out.
4.1 Ellipse Detector
The main challenge for ellipse estimation is that a real image may contain many contour lines that are independent of the observed sphere. There are several techniques to find an ellipse in images, as overviewed in the introduction.
Our ellipse fitting method is divided into several subparts:

Edge points are detected in the images.

The largest circle is found in the processed image by RANSAC-based [7] circle fitting on the edge points.

Then the edge points with high magnitude around the selected circle are collected.

Finally, another RANSAC [7] cycle is run in order to robustly estimate the ellipse parameters.
Edge point detection. RANSAC-based algorithms [7] usually work on data points. For this purpose, 2D points are retrieved from edge maps. The Sobel operator [23] is applied to the original image, then the strong edge points, i.e. points with a higher edge magnitude than a given threshold, are selected. The edge selection is based on edge magnitudes; however, edge directions are also important, as the tangent directions of the contours of ellipses highly depend on the ellipse parameters. Finally, the points are made sparser: if there are multiple strong points within a given window, only one of them is kept.
Circle fitting. This is the most critical substep of the whole algorithm. As shown in Sec. 3.1, the lengths of the two principal axes of the ellipse are close to each other. As a consequence, a circle-fitting method can be applied, together with the RANSAC threshold overviewed in Sec. 3.1.
The main challenge is that only a minority of the points are related to the circle. As pictured in the left image of Fig. 6, only a small fraction of the points can be considered as inliers for circle fitting. Therefore, a very high repetition number is required for RANSAC. In our practice, millions of iterations are set to obtain acceptable results for real or realistic test cases.
An important novelty of our circle detector is that the edge directions are also considered: edge directions of inliers must be orthogonal to the tangent of the circle curve.
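A minimal sketch of such a direction-aware RANSAC circle step is given below. It illustrates the idea rather than reproducing the paper's implementation: the point/gradient layout, the 0.9 radial-alignment bound, and the iteration count are assumptions for the example:

```python
import math, random

def circle_from_3(p1, p2, p3):
    # Circumcircle of three points via the perpendicular-bisector equations.
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2.0 * (ax*(by - cy) + bx*(cy - ay) + cx*(ay - by))
    if abs(d) < 1e-12:
        return None                               # (nearly) collinear sample
    ux = ((ax*ax + ay*ay)*(by - cy) + (bx*bx + by*by)*(cy - ay)
          + (cx*cx + cy*cy)*(ay - by)) / d
    uy = ((ax*ax + ay*ay)*(cx - bx) + (bx*bx + by*by)*(ax - cx)
          + (cx*cx + cy*cy)*(bx - ax)) / d
    return ux, uy, math.hypot(ax - ux, ay - uy)

def ransac_circle(points, gradients, thr, iters=2000, seed=0):
    # points: edge pixels; gradients: unit edge-gradient directions.
    # An inlier must lie near the circle AND have a roughly radial gradient.
    rng = random.Random(seed)
    best, best_inl = None, []
    for _ in range(iters):
        model = circle_from_3(*rng.sample(points, 3))
        if model is None:
            continue
        cx, cy, r = model
        inl = []
        for (x, y), (gx, gy) in zip(points, gradients):
            dx, dy = x - cx, y - cy
            dist = math.hypot(dx, dy)
            if dist < 1e-9 or abs(dist - r) > thr:
                continue
            if abs((dx*gx + dy*gy) / dist) > 0.9:  # gradient ~ radial
                inl.append((x, y))
        if len(inl) > len(best_inl):
            best, best_inl = model, inl
    return best, best_inl
```

With the adaptive threshold of Sec. 3.1 passed in as `thr`, this step returns the dominant circle together with its supporting edge points; the gradient test discards clutter edges that merely happen to lie near the circle.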
Collection of strong edge points. The initial circle estimation yields only a preliminary result. For the final estimation, the points are relocated. For this purpose, the proposed algorithm searches for the strongest edge points along the radial directions. Examples of the candidate points are visualized in the right image of Fig. 7.
Final ellipse fitting. The majority of the obtained points belong to the ellipse. However, there can be incorrect points, affected by e.g. shadows or occlusion. Therefore, robustification of the final ellipse fitting is a must. Standard RANSAC-based [7] ellipse fitting is applied in our approach. We select the method of Fitzgibbon et al. [8] to estimate a general ellipse. A candidate ellipse is then obtained for each circle. Figure 8 shows the candidate ellipses on the left. There are many ellipses around the correct solution; the similar ellipses have to be found, and the final parameters are selected as their average. An example of the final ellipse is visualized in the right picture of Fig. 8.
4.2 3D Estimation
When the ellipse parameters are estimated, the 3D location of the sphere can be estimated as well if the radius of the sphere is known. All points and the corresponding tangent lines of the ellipse, in conjunction with the camera focal point, determine tangent planes of the sphere. If the ellipse is represented by the quadratic equation

$$A u^2 + B u v + C v^2 + D u + E v + F = 0$$

it can be rewritten in matrix form, if the points are written in homogeneous coordinates, as

$$\begin{bmatrix} u & v & 1 \end{bmatrix} \begin{bmatrix} A & B/2 & D/2 \\ B/2 & C & E/2 \\ D/2 & E/2 & F \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = 0$$

The inner matrix, denoted by $Q$ in this paper, determines the ellipse. The tangent line at an ellipse point $p$, in homogeneous coordinates, can be determined as $l = Q p$, as discussed in [12]. Then the tangent plane of the sphere, corresponding to this location, can be straightforwardly determined.
The 3D problem is visualized in Fig. 9. The tangent plane of the sphere contains the focal point $C$. Two of its tangent vectors are the tangent direction of the ellipse in image space (with the third coordinate set to zero in 3D) and the vector from $C$ to the tangent point. The plane normal is the cross product of these two tangent vectors.
If a point $p$ of the tangent plane and its normal $n$ are given, the distance of the sphere center from this plane is the radius itself. If the length of the normal is unit, in other words $\| n \| = 1$, the distance can be written as

$$n^T (S - p) = r$$

where $S$ is the center of the sphere, i.e. the unknown vector of the problem. Each tangent plane gives one equation for the center. If there are three tangent planes, the location can be determined; in the case of at least four planes, the problem is overdetermined. The estimation is given via an inhomogeneous linear system of equations as follows:

$$\begin{bmatrix} n_1^T \\ n_2^T \\ \vdots \\ n_N^T \end{bmatrix} S = \begin{bmatrix} n_1^T p_1 + r \\ n_2^T p_2 + r \\ \vdots \\ n_N^T p_N + r \end{bmatrix} \qquad (13)$$
For the overdetermined case, the pseudoinverse of the matrix on the left-hand side has to be computed and multiplied with the vector on the right-hand side, as the problem is an inhomogeneous linear one [3].
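Since every tangent plane contains the camera center, a convenient special case of Eq. 13 takes $p_i = C = 0$, so each plane contributes $n_i^T S = r$ for a unit normal oriented towards the sphere. The sketch below solves the resulting normal equations in pure Python; the sphere position and the plane sampling are hypothetical examples:

```python
import math

def solve3(M, b):
    # Gaussian elimination with partial pivoting for a 3x3 system M x = b.
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda k: abs(A[k][i]))
        A[i], A[p] = A[p], A[i]
        for k in range(i + 1, 3):
            f = A[k][i] / A[i][i]
            A[k] = [akj - f * aij for akj, aij in zip(A[k], A[i])]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (A[i][3] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x

def sphere_center(normals, r):
    # Least-squares solution of n_i . S = r via the normal equations
    # (equivalent to the pseudoinverse for this overdetermined system).
    AtA = [[sum(n[i] * n[j] for n in normals) for j in range(3)] for i in range(3)]
    Atb = [sum(n[i] * r for n in normals) for i in range(3)]
    return solve3(AtA, Atb)

# Build tangent-plane normals for a hypothetical sphere at (1, 2, 10), r = 1:
# unit normals with n . S0 = r lie on a circle around the direction of S0.
S0, r = (1.0, 2.0, 10.0), 1.0
ns = math.sqrt(sum(c * c for c in S0))
sh = [c / ns for c in S0]
e1 = [1.0 - sh[0] * sh[0], -sh[0] * sh[1], -sh[0] * sh[2]]
n1 = math.sqrt(sum(c * c for c in e1)); e1 = [c / n1 for c in e1]
e2 = [sh[1]*e1[2] - sh[2]*e1[1], sh[2]*e1[0] - sh[0]*e1[2], sh[0]*e1[1] - sh[1]*e1[0]]
a = r / ns; b = math.sqrt(1.0 - a * a)
normals = [[a * s + b * (math.cos(t) * u + math.sin(t) * v)
            for s, u, v in zip(sh, e1, e2)] for t in (0.0, 1.0, 2.5, 4.0)]
S = sphere_center(normals, r)   # recovers (1, 2, 10) up to round-off
```

With four or more well-spread tangent planes the normal-equation matrix has full rank, so the least-squares solution coincides with the exact center in the noise-free case.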
Numerical optimization. As the 3D estimation of the sphere center defined in Eq. 13 minimizes the distances between the sphere center and the tangent planes in 3D, it is not optimal, since the detection error appears in the image space. Therefore, we finally apply numerical optimization for estimating the sphere center. Initial values for the final optimization are given by the geometric minimization; then the final solution is obtained numerically. The Levenberg-Marquardt technique is used in our approach.
5 Experiments
We have tested the proposed ellipse fitting algorithm in three different testing scenarios:

Synthesized test. In order to test the accuracy and numerical stability of the algorithms, we have tested the proposed fitting and 3D estimation algorithms in a synthetic environment. We have selected Octave for this purpose. As the 3D models are generated by the testing environment, a quantitative evaluation can be carried out. The weak point of the fully synthetic test is that only point coordinates are generated; therefore, our image-processing algorithm cannot be tested.

Semi-synthetic test. Semi-synthetic tests extend the fully synthetic one with images. There are rendering tools, usually applied in Computer Graphics (CG) applications, that can generate images of a known virtual 3D scene. We have used Blender (http://blender.org), as it is one of the best open-source CG tools and can produce photorealistic images. As the virtual 3D models are known, ground-truth sphere locations as well as camera parameters can be retrieved from the tool; therefore, a quantitative evaluation of the algorithms is possible.

The calibration of different devices is very important for autonomous systems; therefore, we have also tried the proposed methods in an autonomous driving simulator. We selected the open-source simulator CARLA (www.carla.org) for this purpose. It is based on the widely-used Unreal Engine.

Real test. Even the most accurate CG tools cannot generate fully realistic images; therefore, real testing cannot be neglected. The GT data in these tests are obtained by using a novel sphere fitting algorithm designed for LiDAR data processing [26], and by LiDAR-camera calibration.
5.1 Synthetic tests
The synthetic test was only constructed in order to validate that the formula given in Eq. 13 is correct. For this purpose, a simple synthetic testing environment was implemented in Octave (an open-source MATLAB clone, see http://www.octave.org). Camera parameters as well as the ground-truth (GT) sphere parameters were randomly generated; the ellipse parameters were computed by projecting the points into the camera image, as overviewed in Section 2.
Conclusion of the synthetic test. It was successfully validated that Equation 13 is correct: the GT sphere parameters can always be exactly retrieved.
5.2 Semisynthetic tests
The semi-synthetic tests include virtual scenes containing simple shaded spheres. An example is visualized in the left image of Fig. 6. The images are generated by the well-known free and open-source 3D creation suite Blender. We have generated four test images; they are visualized in Fig. 10 and Fig. 11.
Two scenes are generated by Blender. The first contains only a single sphere with Phong shading. It is visualized in Fig. 10. This is considered an easier test case, because the images contain only a single ellipse. The second scene is the well-known classroom scene, which is widely used in computer graphics papers. It contains several objects and a sphere; therefore, the synthetic images are rich in edges, which makes the detectors work harder. The images are visualized in Fig. 11.
Since the application of these algorithms to LiDAR-camera calibration is also important, one additional scene was rendered by the CARLA simulator. This scene contains a typical driving simulation with an additional sphere on the road. It is visualized in Fig. 12.
5.3 Real tests
The main challenge for a realistic test is to find a good, large sphere. We have selected a plastic gymnastics ball with a known radius for this purpose. The images were taken by an iCube camera, whose intrinsic parameters were calibrated using the well-known Zhang calibration technique [30]. Then the radial distortion was removed, and the algorithms were run with the calibrated focal lengths (the optics have a narrow field of view) and principal point. Fig. 13 shows the images and the detected ellipses.
5.4 Evaluation
The proposed ellipse detector is compared to four state-of-the-art (SoA) methods. The error of the methods is measured as the Euclidean distance between the GT spatial sphere centers and the estimated ones, calculated from the ellipse parameters.
In the real-world tests, the GT is obtained by fitting a sphere to the LiDAR data using a novel method [26] that is tailored to LiDAR data points. Finally, the sphere centers are transformed from the LiDAR coordinate system to the camera coordinate system. The GT data in the synthetic tests are obtained from the testing environment.
Four SoA methods are compared to the proposed one. These are as follows:

FastAndEffective: [9] proposed a method that can be used for real-time ellipse detection. First, arc groups are formed from the Sobel derivatives, then the ellipse parameters are estimated. The main problem with this method is the large number of parameters, which have to be tuned for each image sequence individually.

Randomized Hough Transform (RHT): The method introduced by [2] reduces the computation time of the HT by randomization. The method achieves lower detection accuracy, since it considers only the edge point positions, not their gradients.

HighQuality: [21] proposed this method, which yields high-quality ellipse parameters. Arcs are generated from Sobel or Canny edge detectors, and several properties of these arcs are exploited, e.g. the overall gradient distribution, arc-support directions, or polarity. This method needs parameters to be tuned for each test sequence.

FastInvariant: [15] trades off accuracy for further speed improvements. The method removes straight arcs based on line characteristics and obtains ellipses from the remaining conics.
Table 1 shows the results of both the synthetic and real-world tests. The rows containing integer values are the indices of the different images in the same test environment, and the Euclidean error is shown for every method. In some cases, the methods were not able to find the right ellipses in the images, even after careful parameter tuning; in these cases, the error of the method is not presented. The accuracy of the proposed method (denoted by the rows beginning with Prop.) is comparable to the SoA. The best method is clearly HighQuality in all test cases; however, it was not able to find any ellipse in the 5th image of the real-world test. While RHT achieves the worst accuracy, the FastInvariant and FastAndEffective methods have almost the same results. RHT needs to know the approximate size of the major axis and the ratio between the minor and major axes of the ellipse in pixels. The latter two methods require more than eight parameters to be set. Even though the proposed method does not achieve significantly better results than the others, it is the only completely parameter-free, thus fully automatic, method.
Blensor  1  2  3  4  Real  1  2  3  4  5 

Prop.  0.6463  0.0100  0.0372  0.2327  Prop.  0.9115  1.1317  1.7169  0.8702  0.4443 
FAE  0.0660  0.1087  0.0601  1.7892  FAE  0.6926  0.2386  0.1566  0.1618   
RHT  0.7831  0.0537  0.0396  0.1907  RHT  0.6449  0.3330  0.3781  0.1667  0.7859 
HQ  0.0516  0.0527  0.0135  0.0263  HQ  0.5231  0.4587  0.3239  0.1734   
FI  0.0660  0.1087  0.0601  1.7892  FI  0.6926  0.2386  0.1566  0.1618   
Classroom  1  2  3  4  5  6  7  8 

Prop.  0.016  0.0213  0.0302    0.2094    0.0555   
FAE  0.0168  0.0168  0.0682  0.0553    0.0985  0.1650   
RHT  0.0689  0.0689  0.1698  0.1165  0.6551  0.41658  7.8553  0.0574 
HQ  0.0141  0.0141  0.0037  0.0058  0.0571  0.0056  0.03422  0.0081 
FI  0.0168  0.0168    0.0553    0.0985  0.1650   
Carla  1  2  3  4  5  6  7  8 
Prop.  0.2993    1.5381  2.6420  0.3870  1.4974  3.1761  0.55548 
FAE    0.1846  0.1038  2.2196  0.1074  0.6769  0.5710  0.2690 
RHT  0.2260  2.17628  1.8085  0.8335  1.0599  0.3954  0.0733  0.32469 
HQ  0.0586  0.0486  0.0434  0.0740  0.0933  0.0836  0.1260  0.0391 
FI    0.1846  0.1038  2.2196  0.1074  0.4107  0.5710   
6 Conclusions
This paper proposes a novel 3D location estimation pipeline for spheres. It consists of two steps: (i) the ellipse is estimated first; (ii) then the spatial location is computed from the ellipse parameters, provided that the radius of the sphere is given and the cameras are calibrated. Our ellipse detector is accurate, as validated by the tests. The main benefit of our approach is that it is fully automatic, as all parameters, including the RANSAC [7] threshold for circle fitting, are fixed in the implementation. To the best of our knowledge, our second method, i.e. the estimator for the sphere location, is a real novelty in 3D vision. The main application area of our pipeline is the calibration of digital cameras to LiDAR devices and depth sensors.
Acknowledgements
T. Tóth and Z. Pusztai were supported by the project EFOP-3.6.3-VEKOP-16-2017-00001: Talent Management in Autonomous Vehicle Control Technologies, by the Hungarian Government, and co-financed by the European Social Fund. L. Hajder was supported by the project no. ED_18-1-2019-0030 (Application-domain-specific highly reliable IT solutions subprogramme). It has been implemented with the support provided from the National Research, Development and Innovation Fund of Hungary, financed under the Thematic Excellence Programme funding scheme.
References
 [1] (1987) Leastsquares fitting of two 3D point sets. PAMI 9 (5), pp. 698–700. Cited by: §1.
 [2] (2005) Randomized hough transform for ellipse detection with result clustering. EUROCON 2005  The International Conference on ”Computer as a Tool” 2, pp. 1397–1400. Cited by: §1, item 2.
 [3] (1996) Numerical Methods for Least Squares Problems. Siam. Cited by: §4.2.
 [4] (1986) A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 8 (6), pp. 679–698. Cited by: §1, §1.
 [5] (2011) A split and merge based ellipse detector with selfcorrecting capability. IEEE Trans. Image Processing 20 (7), pp. 1991–2006. Cited by: §1.
 [6] (1972) Use of the hough transformation to detect lines and curves in pictures. Commun. ACM 15 (1), pp. 11–15. Cited by: §1.
 [7] (1981) Random Sample Consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. Assoc. Comp. Mach. 24, pp. 358–367. Cited by: §3.1, §3, item 2, item 4, §4.1, §4.1, §6.
 [8] (1999) Direct Least Square Fitting of Ellipses. IEEE Trans. on PAMI 21 (5), pp. 476–480. Cited by: Appendix A, §4.1.
 [9] (2014) A fast and effective ellipse detector for embedded vision applications. Pattern Recogn. 47 (11), pp. 3693–3708. Cited by: §1, item 1.
 [10] (2012) Automatic camera and range sensor calibration using a single shot. In IEEE International Conference on Robotics and Automation, ICRA 2012, 14-18 May, 2012, St. Paul, Minnesota, USA, pp. 3936–3943. Cited by: §1.
 [11] (1998) Numerically stable direct least squares fitting of ellipses. In International Conference in Central Europe on Computer Graphics, Visualization and Interactive Digital Media, pp. 125–132. Cited by: Appendix A.
 [12] (2003) Multiple View Geometry in Computer Vision. Cambridge University Press. Cited by: §2, §4.2.
 [13] (2000) Geometric camera calibration using circular control points. IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (10), pp. 1066–1077. Cited by: §1.
 [14] (2001) Camera selfcalibration from ellipse correspondences. Proceedings 2001 ICRA. IEEE International Conference on Robotics and Automation (Cat. No.01CH37164) 3, pp. 2191–2196 vol.3. Cited by: §1.
 [15] (2016) A fast ellipse detector using projective invariant pruning. IEEE Transactions on Image Processing. Cited by: item 4.

 [16] (2019) Ellipse proposal and convolutional neural network discriminant for autonomous landing marker detection. J. Field Robotics 36 (1), pp. 6–16. Cited by: §1.
 [17] (1988) Design of an image edge detection filter using the sobel operator. IEEE Journal of Solid-State Circuits 23 (2), pp. 358–367. Cited by: §1.
 [18] (2002) Fast and robust ellipse extraction from complicated images. In Proceedings of IEEE Information Technology and Applications, Cited by: §1.
 [19] (1991) A probabilistic hough transform. Pattern Recognition 24 (4), pp. 303–316. Cited by: §1.
 [20] (2018) Automatic calibration of multiple cameras and depth sensors with a spherical target. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2018, Madrid, Spain, October 1-5, 2018, pp. 1–8. Cited by: §1, §1.
 [21] (2020) Arcsupport line segments revisited: an efficient highquality ellipse detection. IEEE Transactions on Image Processing 29 (), pp. 768–781. External Links: Document, ISSN 19410042 Cited by: §1, §1, item 3.
 [22] (2014) Calibration between color camera and 3D LiDAR instruments with a polygonal planar board. Sensors (Basel, Switzerland) 14, pp. 5333–5353. External Links: Document. Cited by: §1.
 [23] (2000) Color Image Processing and Applications. Springer-Verlag, Berlin, Heidelberg. External Links: ISBN 3-540-66953-1. Cited by: §1, §4.1.
 [24] (1982) The measurement of circularity and ellipticity on a digital grid. Pattern Recognition 15 (5), pp. 383–387. Cited by: §1.
 [25] (2011) RHT-based ellipse detection for estimating the position of parts on an automobile cowl cross bar assembly. Journal of Biosystems Engineering 36, pp. 377–383. External Links: Document. Cited by: §1.
 [26] (2019) Robust fitting of geometric primitives on LiDAR data. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Volume 5: VISAPP, pp. 622–629. External Links: Document, ISBN 978-989-758-354-4. Cited by: 3rd item, §5.4.
 [27] (1978) Detection of ellipses by a modified Hough transformation. IEEE Transactions on Computers 27 (8), pp. 777–781. Cited by: §1.
 [28] (1988) Ellipse detection using the Hough transform. External Links: Document. Cited by: §1.
 [29] (1989) Detecting partially occluded ellipses using the Hough transform. Image and Vision Computing 7 (1), pp. 31–37. Cited by: §1.
 [30] (2000) A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (11), pp. 1330–1334. Cited by: §5.3.
Appendix
Appendix A Ellipse fitting to 2D points.
The task is to estimate an ellipse when data points are given. The ellipse is given by its implicit form
$$a x^2 + b x y + c y^2 + d x + e y + f = 0,$$
where parameters $a$–$f$ represent the quadratic curve. If there are $N$ data points $(x_i, y_i)$, the algebraic fitting problem can be written as $\min_{\mathbf{p}} \| A \mathbf{p} \|^2$, where
$$A = \begin{bmatrix} x_1^2 & x_1 y_1 & y_1^2 & x_1 & y_1 & 1 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ x_N^2 & x_N y_N & y_N^2 & x_N & y_N & 1 \end{bmatrix}, \qquad \mathbf{p} = \begin{bmatrix} a & b & c & d & e & f \end{bmatrix}^T.$$
The scale of the parameters is arbitrarily selected. Quadratic curves can be ellipses, parabolas, or hyperbolas. For the former case, the constraint $b^2 - 4ac < 0$ has to hold. This can be enforced through the quadratic equation $\mathbf{p}^T C \mathbf{p} = 1$, where $C$ is zero everywhere except its upper-left $3 \times 3$ block:
$$C_{1:3,\,1:3} = \begin{bmatrix} 0 & 0 & 2 \\ 0 & -1 & 0 \\ 2 & 0 & 0 \end{bmatrix},$$
so that $\mathbf{p}^T C \mathbf{p} = 4ac - b^2 = 1$. Then the scale ambiguity of the parameters is also eliminated, since the constraint fixes the scale. If a Lagrangian multiplier $\lambda$ is introduced, the constrained problem, i.e. the algebraic fitting of an ellipse itself, can be solved by the generalized eigenvalue technique [8] as $S \mathbf{p} = \lambda C \mathbf{p}$, where $S = A^T A$.

Appendix B Data normalization for ellipse fitting
As frequently happens with numerical methods, data normalization is critical for efficient parameter estimation. If the coordinates in the coefficient matrix $A$ come from pixel row/column numbers, then the orders of magnitude of its columns differ significantly: with pixel coordinates around $10^3$, the quadratic, linear, and constant columns contain elements around $10^6$, $10^3$, and $1$, respectively.
A usual technique for data normalization is to move the origin to the center of gravity of the points, represented by the vector $\mathbf{t} = [t_x, t_y]^T$, and then to scale the data by a factor $s$. These transformations can be represented by the product of matrices as
$$T = \begin{bmatrix} s & 0 & 0 \\ 0 & s & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & -t_x \\ 0 & 1 & -t_y \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} s & 0 & -s\,t_x \\ 0 & s & -s\,t_y \\ 0 & 0 & 1 \end{bmatrix}.$$
If the original estimated ellipse is written in matrix form as $\tilde{\mathbf{x}}^T Q \tilde{\mathbf{x}} = 0$, where $\tilde{\mathbf{x}}$ is a point in homogeneous coordinates and
$$Q = \begin{bmatrix} a & b/2 & d/2 \\ b/2 & c & e/2 \\ d/2 & e/2 & f \end{bmatrix},$$
then the normalized one is as follows:
$$\left( T \tilde{\mathbf{x}} \right)^T Q' \left( T \tilde{\mathbf{x}} \right) = 0.$$
The conversion between the original and normalized ellipse parameters is therefore
$$Q' = T^{-T} Q\, T^{-1}.$$
Vice versa:
$$Q = T^{T} Q'\, T.$$
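The fitting and normalization steps of Appendices A and B can be sketched together as follows. This is an illustrative sketch only, using the numerically stable partitioning of the constrained problem from [11]; the function name and the NumPy-based formulation are ours, not part of the paper.

```python
import numpy as np

def fit_ellipse(x, y):
    """Direct least-squares ellipse fit with the 4ac - b^2 = 1 constraint,
    solved in the numerically stable partitioned form of [11].
    Returns p = (a, b, c, d, e, f) of a x^2 + b xy + c y^2 + d x + e y + f = 0
    in the original (unnormalized) coordinates."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)

    # Appendix B: translate the centroid to the origin and scale the data.
    tx, ty = x.mean(), y.mean()
    s = np.sqrt(((x - tx) ** 2 + (y - ty) ** 2).mean())
    xn, yn = (x - tx) / s, (y - ty) / s

    # Appendix A: design matrix, split into quadratic and linear parts.
    D1 = np.column_stack([xn ** 2, xn * yn, yn ** 2])
    D2 = np.column_stack([xn, yn, np.ones_like(xn)])
    S1, S2, S3 = D1.T @ D1, D1.T @ D2, D2.T @ D2

    T = -np.linalg.solve(S3, S2.T)      # expresses (d,e,f) from (a,b,c)
    M = S1 + S2 @ T                     # reduced scatter matrix
    # Multiply by inv(C1) for the constraint block C1 = [[0,0,2],[0,-1,0],[2,0,0]].
    M = np.array([[0.0, 0.0, 0.5], [0.0, -1.0, 0.0], [0.5, 0.0, 0.0]]) @ M

    w, v = np.linalg.eig(M)
    v = np.real(v)
    # The valid solution satisfies the ellipse constraint 4ac - b^2 > 0.
    k = int(np.argmax(4 * v[0] * v[2] - v[1] ** 2))
    p1 = v[:, k]
    p = np.concatenate([p1, T @ p1])    # parameters in the normalized frame

    # Appendix B: denormalize via Q = T^T Q' T (here H plays the role of T).
    a, b, c, d, e, f = p
    Qn = np.array([[a, b / 2, d / 2],
                   [b / 2, c, e / 2],
                   [d / 2, e / 2, f]])
    H = np.array([[1 / s, 0.0, -tx / s],
                  [0.0, 1 / s, -ty / s],
                  [0.0, 0.0, 1.0]])
    Q = H.T @ Qn @ H
    return np.array([Q[0, 0], 2 * Q[0, 1], Q[1, 1],
                     2 * Q[0, 2], 2 * Q[1, 2], Q[2, 2]])
```

Without the normalization step, the columns of the design matrix would differ by several orders of magnitude for pixel coordinates, degrading the conditioning of the eigenvalue problem.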
Appendix C Ratio of ellipse axes
The ratio of the axes of an ellipse determines the RANSAC threshold of circle fitting in our method, as discussed in Section 3. To find this ratio, we project the two conic sections from Figs. 2 and 3 into one figure, rotating one of the sections. The result of the projection is visualized in Fig. 14. The length of the section perpendicular to the main axis of the cone is the same in every view; hence the combined figure, containing both sections supplemented by the known angles, looks as in this image.
Denoting the two parts of the perpendicular section as in Fig. 14, the axes of the ellipse, and therefore their ratio, depend on these two lengths.
Applying the law of cosines in the two triangles of Fig. 14 and the angle bisector rule, the following formulas can be written:
(14) 
(15) 
(16) 
Substituting Eqs. 14 and 15 into Eq. 16 yields:
(17) 
This can be reordered as:
(18) 
After elementary manipulations, the following equation is obtained:
(19) 
Therefore, the scale is
(20) 
Due to the law of sines:
(21) 
Hence, substituting Eq. 21 into Eq. 20, the sought ratio is:
(22) 
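While Eq. 22 expresses the axis ratio from the viewing geometry, the same quantity can also be computed directly from the coefficients of a fitted ellipse: the ratio equals the square root of the ratio of the eigenvalues of the quadratic-form matrix. A minimal sketch (the helper name is ours, not from the paper):

```python
import numpy as np

def axis_ratio(a, b, c):
    """Minor/major axis ratio of the ellipse a x^2 + b xy + c y^2 + ... = 0.
    Only the quadratic part matters: for an ellipse, both eigenvalues of
    [[a, b/2], [b/2, c]] share the same sign, and the semi-axes are
    proportional to 1/sqrt(|eigenvalue|)."""
    lam = np.abs(np.linalg.eigvalsh(np.array([[a, b / 2.0], [b / 2.0, c]])))
    return float(np.sqrt(lam.min() / lam.max()))
```

For a circle ($a = c$, $b = 0$) the ratio is 1; for the axis-aligned ellipse $x^2/4 + y^2 = 1$ ($a = 0.25$, $b = 0$, $c = 1$) it is 0.5. Since the eigenvalues are rotation invariant, a rotated copy of the same ellipse yields the same ratio.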