1 Introduction
Finding the relative pose between two calibrated views using 2D-2D correspondences is a cornerstone of geometric vision [1], and it is a basic building block in many structure-from-motion (SfM), visual odometry, and simultaneous localization and mapping (SLAM) systems. The essential matrix for calibrated cameras, which encodes the relative pose between two projective views, is the most popular representation of relative pose [2].
Due to the scale ambiguity of translation, the relative pose has only 5 degrees-of-freedom (DoFs): 3 for rotation and 2 for translation. Except for degenerate configurations, 5 point correspondences are hence sufficient to determine the relative pose. Given 5 point correspondences, the five-point method using the essential matrix representation [1] can be used to calculate the essential matrix efficiently. This is the so-called minimal solution to the problem. The minimal problem is usually integrated into a hypothesis-and-test framework, such as RANSAC [3], to find the solution corresponding to the largest consensus set; hence it gains robustness against outlier correspondences. Once the maximal consensus set is found in such a framework, all inliers should be used to re-estimate the model to reduce the influence of noise. In the relative pose estimation context, this is known as
non-minimal relative pose estimation. We will call it the N-point problem in this paper. As has been investigated in [4, 5, 6], non-minimal relative pose estimation leads to more accurate results than minimal-case solutions. Though it seems to be a rather fundamental problem, it is surprising that the N-point problem has received very little attention. The well-known direct linear transformation (DLT) technique [2] can be used to estimate the essential matrix using 8 or more point correspondences. However, DLT ignores the inherent nonlinear constraints of the essential matrix. To remedy this problem, a valid essential matrix is recovered after obtaining an approximate essential matrix from the DLT solution. Recently, an eigenvalue-based formulation and its variant were proposed to solve the N-point problem [5, 6]. However, all of the above-mentioned methods fail to guarantee global optimality and efficiency simultaneously. The progress on the N-point problem is far behind that of absolute pose estimation from 2D-3D point correspondences (perspective-n-point (PnP)) [7] and 3D-3D point correspondences [8, 9]. Take PnP as an example: EPnP [7] is rather efficient and has linear complexity in the number of observations. It has been successfully used in the RANSAC framework given an arbitrary number of inlier 2D-3D correspondences. Arguably, relative pose estimation is more difficult than absolute pose estimation. It is desirable to find a practical solution to the N-point problem whose efficiency and global optimality are both satisfactory. This is the motivation of this paper.
Among the existing parameterizations of the relative pose, the essential matrix is the most popular one. Despite its great success, the essential matrix parameterization suffers from the pure-rotation case and from solution multiplicity. To overcome these problems, some methods using a rotation matrix parameterization have been proposed [5, 6]. More recent advances have shown that the rotation matrix can still be accurately recovered from the essential matrix in pure-rotation situations [10]. Besides, criteria have been proposed to detect pure rotation, so that the unique correct solution can be selected without triangulation. Based on this progress, this paper also utilizes the essential matrix parameterization due to its simplicity.
The paper is organized as follows. Section 2 describes the background materials and related works. In Section 3, we introduce a novel parameterization of the essential matrix manifold and provide formulations for minimizing the algebraic error. Based on this, Section 4 derives a convex optimization approach by semidefinite relaxation (SDR). Section 5 provides the proof of tightness and local stability of the SDR. Section 6 evaluates the performance of our method in comparison to other approaches, followed by a concluding discussion in Section 7.
2 Related Works
Finding the optimal essential matrix by a norm objective and a branch-and-bound method was proposed in [11]. An algebraic error for essential matrix estimation was investigated in [4]. Its global optimality was obtained by a square matrix representation of homogeneous forms and relaxation. An eigenvalue-based formulation was proposed to estimate the rotation matrix [5], in which the problem is optimized by local gradient descent or branch-and-bound. It was later improved by a certifiably globally optimal solution based on relaxation and semidefinite programming (SDP) [6].
Another related area is non-minimal fundamental matrix estimation for uncalibrated cameras. The fundamental matrix has fewer constraints than the essential matrix: a matrix is a fundamental matrix if and only if it has two nonzero singular values, while an essential matrix has the additional property that the two nonzero singular values are equal [2]. The algebraic error has been widely used in fundamental matrix estimation [12, 13, 14]. In [13], a method for minimizing the algebraic error is proposed which enforces the rank-2 constraint. However, it does not guarantee to avoid local minima in the general case. In [15, 16], the constraint for a fundamental matrix is imposed by setting its determinant to 0, leading to a cubic polynomial constraint. In [14], the fundamental matrix estimation problem is reduced to one or several constrained polynomial optimization problems by imposing the constraint that the null space of the solution must contain a nonzero vector.
For both the essential matrix and the fundamental matrix, pose estimation from N points using the algebraic error can be formulated as a polynomial optimization problem [17]. Polynomial optimization problems can always be rewritten as quadratically constrained quadratic programs (QCQPs), for which a broad range of methods exist. In multiple view geometry, the semidefinite relaxation (SDR) of polynomial optimization problems was first studied by Kahl & Henrion in [18]. Recently, a number of certifiably correct solutions to estimation problems in computer vision and robotics have been developed. For example, SDR and the Lagrangian duality of QCQPs have been used in point cloud registration [19], triangulation [20], pose graph optimization [21], and special Euclidean group synchronization [22]. These methods rely either on SDR or on the dual form of the primal problem. However, SDR does not guarantee a priori that it generates an optimal solution; global optimality for general QCQPs is still an open problem. When the QCQP satisfies certain conditions and the data noise lies within a critical threshold, a recent study proves that the solution of the SDP optimization is guaranteed to be globally optimal [23, 24].
Essential matrix or fundamental matrix estimation by the algebraic error has been extensively exploited in previous literature [12, 13, 14, 4, 5, 6]. Perhaps the paper most related to ours is the work of [6], which converts an eigenvalue-based formulation of the N-point problem to a QCQP and applies SDR. However, its efficiency is not satisfactory and the tightness of the SDR has not been proved. The present paper minimizes an equivalent algebraic error and reformulates it as a much simpler QCQP. Proofs of tightness and local stability are provided, based on recent works of Cifuentes et al. [23, 24]. The contribution of this paper is threefold.

Formulation. A novel parameterization is proposed to characterize the normalized essential matrix manifold, which has fewer constraints than others. An N-point formulation is proposed based on this parameterization.

Efficiency. Based on the proposed parameterization, we obtain a formulation for the N-point problem which has far fewer variables and constraints. Its solver is about two orders of magnitude faster than the state-of-the-art method. The optimality conditions to recover the solution of the original QCQP are provided.

Theoretical aspects. We provide theoretical guarantees for the SDP relaxation, including SDR tightness and local stability under observation noise. To the best of our knowledge, these are the first results of this kind for the N-point problem.
3 Formulations of Minimizing the Algebraic Error
In this work, we assume a central and calibrated camera. Consider N correspondences of image points observing the same 3D world points from two distinct viewpoints. They are represented as homogeneous coordinates in the normalized image plane.^{1} Each point in the normalized image plane can be translated into a unique unit bearing vector originating from the camera center. Let (f_i, f_i') denote a correspondence of bearing vectors pointing at the same 3D world point from two distinct viewpoints, where f_i represents the observation from the first viewpoint, and f_i' the one from the second. The bearing vectors are obtained by normalizing the homogeneous coordinates to unit length. ^{1}Bold capital letters denote matrices (e.g., E); bold lowercase letters denote column vectors (e.g., t); non-bold lowercase letters represent scalars. Vectors whose entries are separated by semicolons/commas stand for column/row vectors.
The relative pose is given by the translation t – expressed in the first frame and denoting the position of the second frame w.r.t. the first one – and the rotation R – transforming vectors from the second into the first frame. The normalized translation direction will be identified with a point t on the unit sphere S^2, i.e.,
(1) S^2 = { t ∈ R^3 : ||t|| = 1 }.
The 3D rotation will be featured as an orthogonal matrix with positive determinant belonging to the special orthogonal group SO(3), i.e.,
(2) SO(3) = { R ∈ R^{3×3} : R^T R = I_3, det(R) = 1 },
where I_3 is the 3×3 identity matrix.
3.1 Essential Matrix Manifold
The essential matrix is defined as [2]
(3) E = [t]_x R,
where the subscript (·)_x constructs the corresponding skew-symmetric matrix from a 3-dimensional vector, i.e.,
(4) [t]_x = [ 0, -t_3, t_2; t_3, 0, -t_1; -t_2, t_1, 0 ].
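The skew-symmetric operator in Eq. (4) is easy to get wrong by a sign; a minimal sketch in Python/NumPy (the function name `skew` is ours, and the paper's own implementation is in MATLAB):

```python
import numpy as np

def skew(t):
    """Return [t]_x, the 3x3 skew-symmetric matrix of a 3-vector t,
    so that skew(t) @ v equals the cross product t x v."""
    t1, t2, t3 = t
    return np.array([[0.0, -t3,  t2],
                     [ t3, 0.0, -t1],
                     [-t2,  t1, 0.0]])
```

By construction, `skew(t) @ v` agrees with `np.cross(t, v)` and `skew(t)` is antisymmetric.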
Denote the essential matrix as
(5) E = [e_1^T; e_2^T; e_3^T],
where e_i is the i-th row of E (written as a column vector). Denote its corresponding vector as
(6) e = vec(E) ∈ R^9,
where vec(·) means stacking all the entries of a matrix in column-first order.
In this paper, an essential matrix set is defined as
(7) M_E := { E = [t]_x R : t ∈ S^2, R ∈ SO(3) }.
This essential matrix set is called the normalized essential matrix manifold [25, 26]. Theorem 1 provides an equivalent condition defining M_E, which will greatly simplify the optimization in our method.
Theorem 1.
A real 3×3 matrix E is an element of M_E if and only if there exists a vector t satisfying the following two conditions:
(8) (i) E E^T = I_3 - t t^T; (ii) t^T t = 1.
Proof.
For the "if" direction, first it can be verified that E E^T t = 0. Combining this result with condition (i), we can see that E E^T has an eigenvalue 1 with multiplicity 2 and an eigenvalue 0. According to the definition of singular values, the nonzero singular values of E are the square roots of the nonzero eigenvalues of E E^T. Thus the two nonzero singular values of E are equal to 1. According to Theorem 1 in [27], E is an essential matrix. Combining this with condition (ii), E is an element of M_E.
For the "only if" direction, E is supposed to be an element of M_E. According to the definition of M_E, there exists a vector t ∈ S^2 satisfying condition (ii). Besides, there exists a rotation matrix R such that E = [t]_x R. It can be verified that E E^T = [t]_x R R^T [t]_x^T = (t^T t) I_3 - t t^T = I_3 - t t^T, thus condition (i) is also satisfied. ∎
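Theorem 1 can be checked numerically. The following sketch (Python/NumPy, with our own helper names and random test data) builds E = [t]_x R and verifies condition (i) together with the singular values (1, 1, 0) of a normalized essential matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# random unit translation t on S^2
t = rng.standard_normal(3)
t /= np.linalg.norm(t)

# random rotation R in SO(3) via QR with determinant fix
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R = Q if np.linalg.det(Q) > 0 else -Q

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

E = skew(t) @ R
# condition (i): E E^T = I - t t^T
assert np.allclose(E @ E.T, np.eye(3) - np.outer(t, t))
# singular values of a normalized essential matrix are (1, 1, 0)
assert np.allclose(np.linalg.svd(E, compute_uv=False), [1.0, 1.0, 0.0])
```

The determinant fix after the QR factorization guarantees a proper rotation, since negating an orthogonal 3×3 matrix flips the sign of its determinant.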
3.2 Minimizing the Algebraic Error
For noise-free cases, the epipolar constraint implies that [2]
(9) f_i^T E f_i' = 0, i = 1, …, N.
Due to the existence of noise, this equality will not hold exactly. Instead, we pursue the optimal pose by minimizing an algebraic error:
(10) min_{E ∈ M_E} Σ_{i=1}^{N} (f_i^T E f_i')^2.
In this problem, the objective is called the algebraic error and has been widely used in previous literature [12, 13, 14, 4].
The objective in problem (10) can be reformulated as a quadratic form
(11) Σ_{i=1}^{N} (f_i^T E f_i')^2 = e^T C e,
where
(12) C = Σ_{i=1}^{N} (f_i' ⊗ f_i)(f_i' ⊗ f_i)^T,
and "⊗" means the Kronecker product. Note that C is a Gram matrix, so it is positive semidefinite and symmetric.
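The Gram matrix of Eqs. (11)-(12) can be assembled in a few lines. A sketch in Python/NumPy, assuming the column-first vec convention of Eq. (6) (the names `build_C`, `f1`, `f2` are ours):

```python
import numpy as np

def build_C(f1, f2):
    """Build the 9x9 Gram matrix C of the algebraic error.
    Rows of f1/f2 are unit bearing vectors in view 1 / view 2.
    With e = vec(E) stacked column-first, f^T E f' = (f' kron f)^T e."""
    C = np.zeros((9, 9))
    for f, fp in zip(f1, f2):
        a = np.kron(fp, f)      # 9-vector with a @ vec(E) = f^T E f'
        C += np.outer(a, a)     # rank-1 contribution of one observation
    return C
```

For noise-free data, vec(E) of the true essential matrix lies in the null space of C, and for generic configurations with N ≥ 8 the rank of C is 8, consistent with Lemma 2 below? (here we only rely on the rank statement made later in Section 5).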
3.3 QCQP Formulations
By explicitly writing the constraints of the essential matrix manifold M_E, the problem (10) of minimizing the algebraic error can be reformulated as
(13) min e^T C e,
s.t. E = [t]_x R, t^T t = 1, R ∈ SO(3).
This problem is a QCQP: the objective is a sum-of-squares (SoS), i.e., a positive semidefinite quadratic polynomial; the constraint on the translation vector, t^T t = 1, is also quadratic; a rotation matrix can be fully defined by quadratic constraints [29]; and, lastly, the relationship E = [t]_x R between E and (t, R) is also quadratic. This formulation has 21 unknowns and a correspondingly large number of quadratic constraints.
According to Theorem 1, an equivalent QCQP form of minimizing the algebraic error is
(14) min_{e, t} e^T C e,
s.t. E E^T = I_3 - t t^T, t^T t = 1.
There are 12 variables and 7 constraints in this problem. The constraints can be written explicitly as below:
(15a) e_1^T e_1 + t_1^2 = 1,
(15b) e_2^T e_2 + t_2^2 = 1,
(15c) e_3^T e_3 + t_3^2 = 1,
(15d) e_1^T e_2 + t_1 t_2 = 0,
(15e) e_1^T e_3 + t_1 t_3 = 0,
(15f) e_2^T e_3 + t_2 t_3 = 0,
(15g) t_1^2 + t_2^2 + t_3^2 = 1.
Problems (13) and (14) are equivalent because their objectives are the same and their feasible regions are equivalent. Both of them are nonconvex. Appendix A provides another equivalent optimization problem (35). In the following text, we will only consider problem (14) due to its simplicity. It is homogeneous and does not need any homogenization, which also makes it simpler than the alternative formulations.
Remark: Both problem formulations (13) and (14) are equivalent to the eigenvalue-based formulation [5, 6]. The supplementary material of [6] provides a proof that the objective in the eigenvalue-based formulation is equivalent to the algebraic error. Since all formulations essentially use the normalized essential matrix manifold as the feasible region and have equivalent objectives, these formulations are equivalent. However, our formulations (13) and (14) have the following two advantages:
(1) Our formulations have fewer variables and constraints. In contrast, the eigenvalue-based formulation involves 40 variables and 536 constraints. As shown in the following sections, the simplicity of our formulations results in much more efficient solvers and enables the proof of tightness and local stability.
(2) Our formulations easily integrate priors for each 2D-2D correspondence by simply introducing weights in the objective. For example, we may introduce per-sample weights by slightly changing the objective to Σ_i w_i (f_i^T E f_i')^2, where w_i is the weight for the i-th observation. In this paper, we simply set w_i = 1 without loss of generality. For general cases in which w_i ≠ 1, we can keep the current formulation by simply conducting the variable substitutions f_i ← w_i^{1/4} f_i and f_i' ← w_i^{1/4} f_i'.
4 Semidefinite Relaxation and Optimization
The QCQP is a well-studied problem in the global optimization literature with many applications. Solving its general case is NP-hard. Global optimization methods for QCQPs are typically based on convex relaxations of the problem. There are two main relaxations for QCQPs: semidefinite relaxation and the reformulation-linearization technique. In this paper, we use SDR because it usually has better performance [30] and it is convenient for tightness analysis.
Let us consider a QCQP in a general form as
(16) min_x x^T A_0 x,
s.t. x^T A_i x = b_i, i = 1, …, m.
The matrices A_0, A_i ∈ S^n, where S^n denotes the set of all n×n real symmetric matrices. In our problem,
(17) x = [e; t] ∈ R^12
is a vector stacking all entries of the essential matrix E and the translation vector t; A_0 is the matrix C zero-padded to size 12×12; A_1, …, A_7 and b_1, …, b_7 correspond to the canonical forms of Eqs. (15a)-(15g), respectively.
A crucial first step in deriving an SDR of problem (16) is to observe that
(18) x^T A_0 x = tr(A_0 x x^T),
(19) x^T A_i x = tr(A_i x x^T).
In particular, both the objective and the constraints in problem (16) are linear in the matrix x x^T. Thus, we introduce a new variable X and note that X = x x^T is equivalent to X being a rank-one symmetric positive semidefinite (PSD) matrix. We obtain the following equivalent formulation of problem (16):
(20) min_X tr(A_0 X),
s.t. tr(A_i X) = b_i, i = 1, …, m; X ⪰ 0; rank(X) = 1.
Here, X ⪰ 0 means that X is PSD. Solving rank-constrained semidefinite programs (SDPs) is NP-hard [31]. SDR drops the rank constraint to obtain the following relaxed version of problem (20):
(21) min_X tr(A_0 X),
s.t. tr(A_i X) = b_i, i = 1, …, m; X ⪰ 0.
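The lifting behind Eqs. (18)-(21) rests on the identity x^T A x = tr(A x x^T); a quick numerical illustration with hypothetical random data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(12)
A = rng.standard_normal((12, 12))
A = (A + A.T) / 2.0                 # symmetric, like the QCQP data matrices
X = np.outer(x, x)                  # the rank-one PSD lifting X = x x^T

# the quadratic form becomes linear in X
assert np.isclose(x @ A @ x, np.trace(A @ X))

# X is PSD with rank one: its only nonzero eigenvalue is ||x||^2
evals = np.linalg.eigvalsh(X)
assert np.isclose(evals[-1], x @ x)
assert np.allclose(evals[:-1], 0.0)
```

Dropping the rank-one condition on X is exactly what turns the nonconvex problem (20) into the convex SDP (21).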
Problem (21) turns out to be an instance of semidefinite programming [31], which belongs to convex optimization and can be readily solved using primal-dual interior point methods [32]. Its dual problem is
(22) max_λ b^T λ,
s.t. Q(λ) := A_0 - Σ_{i=1}^{m} λ_i A_i ⪰ 0,
where λ = [λ_1; …; λ_m]. Problem (22) is called the Lagrangian dual of problem (16), and Q(λ) is the Hessian of the Lagrangian. In our problem, m = 7; b_i = 1 if i ∈ {1, 2, 3, 7}, and b_i = 0 otherwise.
In summary, the relationships between the main formulations are illustrated in Fig. 1.
Lemma 1. Strong duality holds between the SDP primal problem (21) and its dual problem (22); moreover, both optimal values are attained.
Proof.
Denote the optimal values of problem (21) and its dual problem (22) as p* and d*. The inequality d* ≤ p* follows from weak duality. Equality, and the existence of primal and dual optimizers that attain the optimal values, follow if we can show that the feasible regions of both the primal and dual problems have nonempty interiors; see Theorem 3.1 in [31] (also known as Slater's constraint qualification [33]).
For the primal problem, it can be verified that X = diag((2/9) I_9, (1/3) I_3) satisfies all the constraints in (21) and is positive definite, so it is an interior point of the feasible domain of the primal problem. For the dual problem, let λ_1 = λ_2 = λ_3 = -ε for some ε > 0 and the remaining entries be zero. Recall that A_0 ⪰ 0, and it can be seen that Q(λ) = A_0 + ε I_12 ≻ 0. That means λ is an interior point of the feasible domain of the dual problem. ∎
According to Lemma 1, the Lagrangian dual is equivalent to the SDR for our formulation of the N-point problem. The Lagrangian dual provides an important theoretical tool for analyzing tightness and local stability.
4.1 Essential Matrix and Relative Pose Recovery
Once the optimal X* of the SDP primal problem (21) is calculated by an SDP solver, it remains to recover the optimal essential matrix E*. First we extract the top-left 9×9 submatrix of X*:
(23) X*_E = X*(1:9, 1:9).
Empirically, we found that rank(X*_E) = 1. Denote the unit eigenvector corresponding to its nonzero eigenvalue σ_1 as v; then the optimal essential matrix is recovered by
(24) E* = mat(√σ_1 v),
where mat(·) means reshaping a 9-vector into a 3×3 matrix in column-first order.
Once the essential matrix is obtained, we can recover the rotation matrix and translation vector by the standard method in textbooks [2]. In Section 4.2, a theoretical guarantee for this pose recovery method is provided. In Section 5, the tightness that guarantees global optimality is established.
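The standard textbook decomposition referenced here [2] is the SVD-based four-candidate factorization; a sketch (the cheirality, i.e., positive-depth, check that selects the correct candidate is omitted):

```python
import numpy as np

def decompose_E(E):
    """Factor an essential matrix into the four (R, t) candidates via the
    standard SVD construction; the correct candidate is normally chosen
    afterwards by a cheirality check, omitted in this sketch."""
    U, _, Vt = np.linalg.svd(E)
    # ensure det(U) = det(Vt) = +1 so both R candidates are rotations
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    t = U[:, 2]                        # translation direction, up to sign
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```

Exactly one of the four candidates reconstructs E via [t]_x R with the correct sign and places the triangulated points in front of both cameras.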
4.2 Necessary and Sufficient Conditions for Global Optimality
The following Theorem 2 provides a theoretical guarantee for the previously described pose recovery method. Denote by X*_E the top-left 9×9 submatrix of X* corresponding to the entries of the essential matrix, and by X*_t the bottom-right 3×3 submatrix corresponding to the entries of the translation vector, i.e., X*_E = X*(1:9, 1:9) and X*_t = X*(10:12, 10:12).
Theorem 2. The SDP relaxation (21) is tight if and only if rank(X*_E) = 1 and rank(X*_t) = 1.
Proof.
First, we prove the "if" part. Note that X*_E and X*_t are real symmetric matrices because they are submatrices of a feasible point of the primal SDP. Besides, it is given that rank(X*_E) = rank(X*_t) = 1, thus there exist two vectors e* and t* satisfying X*_E = e* (e*)^T and X*_t = t* (t*)^T.
Since the constraints in problem (14) do not include any cross term between e and t, the off-diagonal block of X coupling e and t is unconstrained in the SDP problem. Substituting e* and t* into Eq. (19), it can be verified that e* and t* satisfy the constraints of the primal problem (14). We can see that x* = [e*; t*] and its uniquely determined lift are feasible solutions of the primal problem and the SDP relaxation, respectively, with equal objective values. Thus the relaxation is tight.
Then we prove the "only if" part. Since the SDP relaxation is tight, we have rank(X*) = 1. Then rank(X*_E) ≤ 1 and rank(X*_t) ≤ 1. Since X*_E and X*_t cannot be zero matrices (otherwise X* is not in the feasible region), the equalities hold. ∎
Theorem 2 provides a necessary and sufficient global optimality condition to recover the optimal solution of the primal problem. Empirically, the optimal X* of the SDP problem always satisfies this condition. Specifically, there are two nonzero diagonal blocks in it, which are X*_E and X*_t, i.e.,
(25) X* = [ X*_E, ✻; ✻^T, X*_t ].
The ✻ parts could take arbitrary values satisfying the matrix symmetry.
Remark: The block-diagonal structure of X* in Eq. (25) is caused by the sparsity pattern of the problem. The aggregate sparsity pattern of our SDP problem, which is the union of the individual sparsity patterns of the data matrices A_0, …, A_7, includes two cliques: one includes the 1st-9th entries of x, and the other includes the 10th-12th entries of x; see Fig. 2(a). There are no common nodes in these two cliques; see Fig. 2(b). The chordal decomposition theory of sparse SDPs explains the structure of X* well. The interested reader may refer to [34, 35] for more details.
4.3 Time Complexity
First, we consider the time complexity with respect to the observation number N. The construction of the optimization problem is linear in the observation number, so the time complexity of problem construction is O(N); the computation of C is rather efficient, see Eq. (12). The size of the SDP is independent of the sample number N, so the time complexity of the SDP optimization is O(1).
Second, we discuss the time complexity with respect to the variable number n and the constraint number m. Most convex optimization solvers handle SDPs using an interior-point algorithm. The SDP problem (21) can be solved with a worst-case complexity of
(26) O(max{m, n}^4 n^{1/2} log(1/ε)),
given a solution accuracy ε [32]. It can be seen that the time complexity is largely reduced for smaller m and n. Since our formulations have much fewer variables and constraints than those in [6], they have much lower time complexity. The above complexity does not assume sparsity or any special structure in the data matrices. In practice, our formulation is about two orders of magnitude faster than that in [6].
Finally, we discuss the time complexity of pose recovery. In our method, the essential matrix is recovered by finding the eigenvector corresponding to the largest eigenvalue of a 9×9 matrix. In contrast, the method in [6] needs to calculate the eigenvectors corresponding to the largest eigenvalues of a 40×40 matrix. Thus our method has a smaller time complexity for pose recovery.
5 Tightness and Local Stability of the SDP Relaxation
In this section, we prove the tightness and local stability of the SDR for our problem. Our proof is mainly based on [23, 24]. First, we prove that the SDR is tight given noise-free observations; the proof is based on Lemma 2.4 in [24] and is presented in Section 5.1. Then we prove that the SDR is locally stable near noise-free observations; the proof is based on Theorem 5.1 in [24] and is presented in Section 5.2. To understand the proofs in this section, preliminary knowledge of optimization, manifolds, and algebraic geometry is necessary. We recommend that readers refer to Chapter 6 of [23] or to [24] for more details.
In the following text, a bar on a symbol stands for a value in the noise-free case. For example, C̄ stands for the matrix C in the objective constructed from noise-free observations; x̄ is the optimal state estimated from noise-free observations; t̄ is the optimal translation estimated from noise-free observations.
5.1 Tightness of the SDP Relaxation
Lemma 2.
If the point observations are noise-free, the matrix C̄ in Eq. (12) satisfies rank(C̄) ≤ min{N, 8}, where N is the number of point matches. Equality holds except for degenerate configurations, including points on a ruled quadric, points on a plane, and no translation (an explanation of these degeneracies can be found in [36]).
Proof.
(i) When N ≤ 8: from the construction of C̄ in Eq. (12), each observation adds a rank-1 matrix to the Gram matrix C̄. These rank-1 matrices are linearly independent except in the degenerate cases [36]. Thus the rank of C̄ is N in the usual case. When a degeneracy occurs, there is linear dependence between these rank-1 matrices, and the rank drops below N. (ii) When N > 8: there exists the stacked vector ē of an essential matrix satisfying C̄ ē = 0, so the upper limit of rank(C̄) is 8. We complete the proof by combining these two properties. ∎
Given a set of point correspondences, if N ≥ 8 after excluding the points that belong to a planar degeneracy, the rank of C̄ is 8. This principle is the basis of the eight-point method [37]. Note that we do not need to identify which points belong to degenerate configurations, so the rank-8 assumption used in the following text is easily satisfied in the N-point problem, in which N ≥ 8.
Lemma 3.
Let Q be positive semidefinite. If x^T Q x = 0 for a given vector x, then Q x = 0.
Proof.
Since Q is positive semidefinite, its eigenvalues are nonnegative. Suppose the rank of Q is r. The eigenvalues can be listed as λ_1 ≥ ⋯ ≥ λ_r > 0 = λ_{r+1} = ⋯ = λ_n. Denote the i-th eigenvector as u_i. The eigenvectors are orthogonal to each other, i.e., u_i^T u_j = 0 when i ≠ j. The vector x can be expressed as x = Σ_i α_i u_i. Thus, x^T Q x = Σ_{i=1}^{r} λ_i α_i^2, and Q x = Σ_{i=1}^{r} λ_i α_i u_i. Given x^T Q x = 0, we have α_i = 0 for i ≤ r. So Q x = 0 is attained. ∎
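Lemma 3 can be sanity-checked numerically with hypothetical data: build a PSD matrix with a known null space, take x in that null space, and verify that x^T Q x = 0 indeed forces Q x = 0.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 9))        # rank-deficient Gram factor
Q = B.T @ B                            # PSD with a 3-dimensional null space

# a vector with x^T Q x = 0: any right-singular vector of B for sigma = 0
_, _, Vt = np.linalg.svd(B)
x = Vt[-1]
assert np.isclose(x @ Q @ x, 0.0)
assert np.allclose(Q @ x, 0.0)         # the conclusion of Lemma 3
```

This is exactly the mechanism used below for C̄: the noise-free objective value ē^T C̄ ē = 0 implies C̄ ē = 0.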
Lemma 4. If the point observations are noise-free, the SDP relaxation is tight, i.e., the optimum of problem (21) coincides with that of the QCQP (14).
Proof.
Our proof is based on Lemma 2.4 in [24]. Let x̄ = [ē; t̄] in the primal problem, where ē and t̄ are the ground truth, and let λ = 0 in the Lagrangian dual (22). The three conditions needed in Lemma 2.4 of [24] are satisfied: (i) Primal feasibility. Substituting x̄ into the primal problem, the constraints are satisfied since x̄ is the ground truth and the point observations are noise-free. (ii) Dual feasibility. Q(0) = A_0 ⪰ 0. (iii) Lagrangian multiplier condition. Since ē is the ground truth, ē^T C̄ ē = 0 according to Eq. (11). Recall that C̄ is a Gram matrix and thus positive semidefinite. According to Lemma 3, C̄ ē = 0 is attained, hence Q(0) x̄ = 0. ∎
5.2 Local Stability of the SDP Relaxation
In this subsection, we prove that our QCQP formulation has a zero-duality-gap regime when its problem parameters are perturbed (e.g., by noise in the case of sensor measurements). Following [24, 23], we use the following notation in the remainder of this section for simplicity.

θ denotes the problem parameters, and θ̄ is a zero-duality-gap parameter. In our problem, the parameter is the data matrix in the objective, i.e., θ̄ = C̄.

Given noise-free observations, let x̄ be optimal for the primal problem, and λ̄ be optimal for the dual problem. In our problem, x̄ = [ē; t̄] and λ̄ = 0. According to the proof procedure of Lemma 4, C̄ ē = 0 and Q(λ̄) x̄ = 0.

Denote Q̄ = Q(λ̄) as the Hessian of the Lagrangian at λ̄. In our problem, Q̄ = A_0.

Denote X(θ) as the primal feasible set given θ, and denote X̄ = X(θ̄).

Denote by V the variety defined by the equality constraints of problem (14).
In our QCQP, the objective is convex since A_0 ⪰ 0. However, the presence of the auxiliary variable t makes the objective not strictly convex. Theorem 5.1 in [24] provides a framework for proving local stability for such problems.
Theorem 3 (Theorem 5.1 in [24]).
Assume that the following conditions are satisfied:
RS (restricted Slater): There exists λ such that Q(λ) x̄ = 0 and v^T Q(λ) v > 0 for all nonzero v ∈ W, where W is a complement of the tangent space of the feasible set at x̄ (see [24] for the precise statement).
R1 (constraint qualification): Abadie constraint qualification holds.
R2 (smoothness): X̄ is a smooth manifold near x̄.
R3 (not a branch point): x̄ is not a branch point of X̄ with respect to the constraint map.
Then the SDR is tight when θ is close enough to θ̄. Moreover, the QCQP has a unique optimal solution, and the SDR problem has a unique optimal solution (the rank-one lift of the former).
Among the four conditions in Theorem 3, RS (restricted Slater) is the main assumption, which is related to the convexity of the Lagrangian function; it corresponds to the strict feasibility of an SDP. R1-R3 are regularity assumptions related to the continuity of the Lagrange multipliers. In the following text, we prove that the restricted Slater condition and R3 are satisfied for problem (14), which builds an important foundation for proving local stability. In our problem, the feasible set is independent of θ, thus X(θ) = X̄ for all θ.
Lemma 5 (Restricted Slater).
For QCQP problem (14), suppose rank(C̄) = 8. Then there exists λ such that Q(λ) x̄ = 0 and v^T Q(λ) v > 0 for all nonzero v ∈ W, where W is a complement of the tangent space at x̄.
Proof.
Considering the constraints Eqs. (15a)-(15g) of the QCQP, the gradient of the k-th constraint at x̄ is
(27) 2 A_k x̄, k = 1, …, 7.
Let
(28) λ be a multiplier vector determined by t̄, built from the constraints (15a)-(15g).
Note that for noise-free cases C̄ ē = 0,
so A_0 x̄ = 0.
Combining with this result, it can be verified that Q(λ) x̄ = 0.
It remains to check the positivity condition. According to the definition of C̄, it is a Gram matrix built from the vectors f̄_i' ⊗ f̄_i. Since C̄ is constructed from noise-free observations, C̄ ē = 0; in other words, ē is orthogonal to the space spanned by these vectors. It is given that rank(C̄) = 8, so the null space of C̄ is one-dimensional and spanned by ē. Considering ē as a nontrivial solution of a homogeneous linear equation system, a complement W of the tangent space can be expressed in a coordinate system adapted to ē and t̄.
Only the entries of this coordinate system that correspond to t are nonzero in the directions that remain to be checked. Taking the Hessian with respect to t and calculating the linear combination with the chosen coefficients, we have
(29)
Recall that in the proof of Theorem 1 we showed that the eigenvalues of Ē Ē^T are 1, 1, and 0. So the eigenvalues of I_3 − Ē Ē^T are 0, 0, and 1, and it can be verified that t̄ is the normalized eigenvector corresponding to the eigenvalue 1 of I_3 − Ē Ē^T. Thus, the eigenspace of the eigenvalue 0 is the orthogonal complement of t̄.
Since for any remaining direction v the corresponding entries are orthogonal to these null directions, v^T Q(λ) v is strictly positive. It follows that the restricted Slater condition holds. ∎
Definition 1 (Branch Point [24]).
Let π be a linear map. Let Z be the zero set of the equation system defining the constraints, and let T_x̄ Z denote the tangent space of Z at x̄. We say that x̄ is a branch point of Z with respect to π if there is a nonzero vector v ∈ T_x̄ Z with π(v) = 0.
Lemma 6.
For QCQP problem (14), suppose rank(C̄) = 8. Then x̄ is not a branch point of the constraint variety with respect to the map π in Definition 1.
Proof.
It can be verified that the tangent space T_x̄ Z can be parameterized by free parameters; the last equivalence takes advantage of C̄ ē = 0. If x̄ were a branch point, there would exist a nonzero vector v ∈ T_x̄ Z with π(v) = 0. We will prove that such a nonzero v does not exist. Substituting v into the equation system (see Eq. (5.2)), we obtain a homogeneous linear system in the unknowns:
(30a)  
(30b)  
(30c)  
(30d)  
(30e)  
(30f)  
(30g) 
Eliminating unknowns from Eqs. (30a), (30b), (30c), and (30g), we obtain a reduced system. Substituting it back, it can be further verified that the system has only zero as its solution. So v can only be the zero vector. ∎
Theorem 4.
For problem (14) and its Lagrangian dual (22), let C̄ be constructed from noise-free observations with rank(C̄) = 8. (i) There is zero duality gap whenever C is close enough to C̄. In other words, there exists a hypersphere of nonzero radius centered at C̄ such that for any C inside it there is zero duality gap. (ii) Moreover, the SDP relaxation recovers the optimum of the original problem.
Proof.
From Lemma 4, when the point observations are noise-free, the relaxation is tight. From the proof procedure, the optimum is x̄ = [ē; t̄], which uniquely determines the zero-duality-gap parameter θ̄ = C̄. Our proof is an application of Theorem 3. The four conditions needed in Theorem 3 are satisfied: (RS) From Lemma 5, the restricted Slater condition is satisfied. (R1) The equality constraints of the primal problem form a variety; the Abadie constraint qualification (ACQ) holds everywhere since the variety is smooth and the ideal is radical, cf. Lemma 6.1 in [24]. (R2) Smoothness: the normalized essential matrix manifold is smooth [26]. (R3) From Lemma 6, x̄ is not a branch point. ∎
This completes the proof that the SDR of our proposed QCQP is tight under low-noise observations. In other words, when the observation noise is small, Theorem 4 guarantees that the optimum of the original QCQP can be found by optimizing its SDR. Our proof is an application of the method in [24, 23], though the proof is nontrivial. First, we proposed a compact parameterization without any redundant constraints, which enables the proof. Second, we proved that the conditions are satisfied by construction. To the best of our knowledge, this is the first time such theoretical results have been achieved for the SDR of the N-point problem; it is also the first application of Theorem 3 to a practical problem. It should be noted that finding the noise bound that the SDR can tolerate is still an open problem. In practice, we found that the SDR is tight even for noise levels much larger than those occurring in practice.
6 Experimental Results
We compared our method against several classical and state-of-the-art methods on synthetic and real data. Specifically, we compared our method to:

the general five-point method 5pt-Nister [1] for relative pose estimation.

the two-view bundle adjustment (BA) method nonlinear.
Among these methods, the implementation of SDP-Briales was provided by the authors, and the implementations of the other comparison methods were provided by OpenGV [38]. The methods eigensolver and nonlinear need initialization; we initialized them using 8pt on all point correspondences.
To evaluate the performance of the proposed algorithm, we separately compare the relative rotation and translation accuracy. Specifically, we define

the angle difference for rotations as arccos((tr(R_gt^T R_est) - 1) / 2),

and the translation direction error as arccos(t_gt^T t_est / (||t_gt|| ||t_est||)).
In the above criteria, R_gt and R_est are the ground-truth and estimated rotations, respectively; t_gt and t_est are the ground-truth and estimated translations, respectively.
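The two criteria can be written compactly as follows (a Python/NumPy sketch with assumed variable names; clipping guards the arccos against round-off just outside [-1, 1]):

```python
import numpy as np

def rotation_error_deg(R_gt, R_est):
    """Angle of the residual rotation R_gt^T R_est, in degrees."""
    c = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def translation_error_deg(t_gt, t_est):
    """Angle between the estimated and ground-truth translation
    directions, in degrees (magnitudes are irrelevant due to scale
    ambiguity)."""
    c = t_gt @ t_est / (np.linalg.norm(t_gt) * np.linalg.norm(t_est))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
```

For example, a residual rotation of 30 degrees about any axis yields a rotation error of 30 degrees, and orthogonal translation directions yield a 90-degree translation error.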
6.1 Efficiency and SDP Solver Selection
We test two free interior point method (IPM) solvers, SDPT3 [39] and SeDuMi [40]. Default parameters are used for all solvers in the following experiments. Our method is implemented in MATLAB, and all experiments are performed on an Intel Core i7 CPU.
The two IPM solvers generate similar results, while SDPT3 has slightly better accuracy than SeDuMi. In total, it takes about 62 ms to solve an N-point problem with SeDuMi, including problem construction, SDP optimization, essential matrix recovery, and pose decomposition. Thus our method is much more efficient than SDP-Briales, which takes about 5.19 s in total. The comparison of ours and SDP-Briales is listed in Table I.
method           #variables  #constraints  total time
SDP-Briales [6]  40          536           5.19 s
ours             12          7             62 ms
There are some specific structures in our problem: the data matrices are sparse, and the solution rank is known a priori to be small relative to the dimension of the problem. Some SDP solvers exploit such structures, e.g., the sparsity of the data and the low-rank property of the solution, to accelerate the optimization [41, 42, 43]. Their efficiency has been successfully validated on large-scale problems. However, the SDP in our method is small-scale, and these solvers are not necessarily more efficient than general-purpose solvers. We tested SparseCoLO [41], MoDuMi (modified SeDuMi for large-and-sparse low-rank SDPs) [42], and CDCS (cone decomposition conic solver) [43].
Compared with SeDuMi, we found that: (i) SparseCoLO is slightly slower; (ii) MoDuMi [42] is much slower; (iii) CDCS is faster than SeDuMi, but its accuracy is unsatisfactory. Based on these results, we use SeDuMi in the following experiments. Pursuing a customized, more efficient solver is a promising direction for future work.