Plücker Correction Problem: Analysis and Improvements in Efficiency

02/18/2016 ∙ by João R. Cardoso, et al. ∙ Coimbra Institute of Engineering ∙ University of Lisbon ∙ University of Coimbra

A given six-dimensional vector represents a 3D straight line in Plücker coordinates if its coordinates satisfy the Klein quadric constraint. In many problems that aim to find the Plücker coordinates of lines, noise in the data and other types of errors yield 6D vectors that, because of that constraint, do not correspond to lines. A common procedure to overcome this drawback is to find the Plücker coordinates of the lines that are closest to those vectors. This is known as the Plücker correction problem. In this article we propose a simple, closed-form, and global solution for this problem. Compared with the state-of-the-art method, our algorithm is simpler and requires far fewer operations (in particular, it does not require the Singular Value Decomposition).




1 Introduction

In many 3D vision problems, ranging from camera calibration to robot navigation, it is required to represent 3D straight lines. Plücker coordinates are one of the most used formulations [article:pottman:2001]. Vectors of Plücker coordinates are built by stacking the direction and the moment of the respective line (both 3D vectors), giving a 6D vector. Direction and moment vectors must satisfy the so-called Klein quadric constraint, that is, they need to be orthogonal to each other. In this paper we address the problem of estimating Plücker coordinates from general (unconstrained) 6D vectors. This is called the Plücker correction problem.

For many reasons, especially when considering noisy data, it is hard to include the orthogonality constraint in the estimator (frequently, it requires non-linear procedures). Some authors do not consider this constraint in their methods, or propose optimization techniques involving the constraint but, to avoid unnecessary computational effort, stop the non-linear procedures before the constraint is fulfilled. Several examples can be found in the literature, e.g. camera models (mapping between pixels and 3D straight lines) [article:miraldo:2013]; triangulation of 3D lines [article:josephson:2008]; structure-from-motion using lines [Bartoli05, article:lemaire:2007]; and 3D reconstruction of lines using a single image of a non-central catadioptric camera [article:lemaire:2007]. Thus, to get Plücker coordinates in these cases, a Plücker correction needs to be applied.

The state-of-the-art method for solving the Plücker correction problem is due to Bartoli and Sturm [Bartoli05], hereafter called the BS method. It finds the closest Plücker coordinates (in the Euclidean sense) to an unconstrained six-dimensional vector by orthogonally projecting the input vector onto the Klein quadric (so that the result verifies the orthogonality constraint). The approach involves SVD decompositions and can be found in [Bartoli05, p. 425]. It should be noticed that, in the description of the BS Plücker correction algorithm (Table 2 in [Bartoli05]), there is a typo in the entries of the matrix T. The correct matrix is given by

    T = [ z12   z22 ]
        [ z21  -z11 ],

where the z entries come from the matrix Z = S Vᵀ of the algorithm (see Appendix A).
We performed a detailed analysis of the proof of the BS method [Bartoli05, p. 425] and propose some clarifications. The proof of the BS method uses the following identity:

    ‖A − U B‖_F = ‖Uᵀ A − B‖_F,     (2)

where A and B are matrices of sizes n × m and k × m, respectively, with k ≤ n, and U is an n × k matrix with orthonormal columns (i.e. Uᵀ U = I_k, but U Uᵀ ≠ I_n). This identity is related to the invariance of the Frobenius norm under left multiplication by matrices with orthonormal columns.

However, (2) does not hold in general. It is valid if, and only if, the column space of A is contained in the column space of U, in which case A = U C, for some matrix C of size k × m. We note that, in the proof of the BS method, this requirement is not explicitly mentioned.
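This point is easy to witness numerically. The following NumPy sketch (with arbitrary random matrices; the sizes n, k, m are illustrative) shows that ‖A − UB‖_F = ‖UᵀA − B‖_F fails for a generic A, but holds when A = UC has its columns in the column space of U:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 5, 2, 3

# U: n x k with orthonormal columns (Q factor of a random matrix)
U, _ = np.linalg.qr(rng.standard_normal((n, k)))
B = rng.standard_normal((k, m))
fro = np.linalg.norm   # Frobenius norm for matrices by default

# Generic A: the "identity" fails (the left-hand side is strictly larger).
A = rng.standard_normal((n, m))
lhs1, rhs1 = fro(A - U @ B), fro(U.T @ A - B)
print(lhs1 > rhs1)    # True for a generic A

# A = U C, i.e. the columns of A lie in the column space of U: equality.
C = rng.standard_normal((k, m))
A = U @ C
lhs2, rhs2 = fro(A - U @ B), fro(U.T @ A - B)
print(abs(lhs2 - rhs2) < 1e-10)   # True
```

The gap in the generic case is exactly the energy of A outside the column space of U, since ‖A − UB‖²_F = ‖UᵀA − B‖²_F + ‖(I − UUᵀ)A‖²_F.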

Paper [Wu15] also addresses the same problem. However, it does not address computational efficiency, and the proposed approach is quite different from the method described in this paper. Moreover, even if the solution proposed in [Wu15] is indeed a global minimum, no formal proof of that fact is given. We recall that Lagrange multipliers yield only local minima, unless the existence of a global minimum is guaranteed.

1.1 Notations and Problem Definition

Column vectors are represented by bold small letters (e.g. a for an n-dimensional vector). Bold capital letters denote matrices (e.g. A for an n × m matrix). Regular small letters denote scalars (e.g. a). ‖·‖ denotes the Frobenius norm for matrices or the 2-norm for vectors. Recall that, for any matrix A, the Frobenius norm is given by ‖A‖_F = √(tr(AᵀA)) and, for any vector a, the 2-norm satisfies ‖a‖₂ = √(aᵀa).

The Plücker coordinates of a 3D straight line can be represented by a six-dimensional vector

    l = (dᵀ, mᵀ)ᵀ ∈ R⁶,     (3)

where d, m ∈ R³ are, respectively, the direction and the moment of the line, verifying the Klein quadric constraint

    dᵀ m = 0;     (4)

see [article:pottman:2001].
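As a concrete illustration (a NumPy sketch; the two-point construction d = p1 − p0, m = p0 × d is standard line geometry, not specific to this paper), Plücker coordinates built from two points on a line satisfy the Klein quadric constraint by construction:

```python
import numpy as np

p0 = np.array([1.0, 0.0, 2.0])   # two (arbitrary) points on the line
p1 = np.array([3.0, 1.0, -1.0])

d = p1 - p0           # direction of the line
m = np.cross(p0, d)   # moment of the line

line = np.concatenate([d, m])   # 6D vector of Plucker coordinates
print(abs(d @ m) < 1e-12)       # Klein quadric constraint holds -> True
```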

Let a and b be given vectors in R³ (not necessarily satisfying the orthogonality constraint), and let x and y denote vectors in R³. Mathematically, the Plücker correction problem corresponds to solving the nonlinear constrained optimization problem

    min_{x, y}  ‖x − a‖² + ‖y − b‖²   subject to   xᵀ y = 0.     (5)

While the objective function

    f(x, y) = ‖x − a‖² + ‖y − b‖²     (6)

is convex, the Klein quadric constraint is not. The optimization problem (5) belongs to a class of non-convex problems known in the literature as quadratically constrained quadratic programs. This means, in particular, that even the existence of a global minimum may be a non-trivial problem.
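The non-convexity of the constraint set can be seen with a two-line check (a NumPy sketch with hand-picked feasible pairs): the Klein quadric is not closed under averaging.

```python
import numpy as np

# Two feasible pairs (x, y) with x'y = 0.
x1, y1 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
x2, y2 = np.array([0.0, 1.0, 0.0]), np.array([1.0, 0.0, 0.0])
print(x1 @ y1 == 0.0 and x2 @ y2 == 0.0)   # both feasible -> True

# Their midpoint violates the constraint, so the feasible set is non-convex.
xm, ym = (x1 + x2) / 2, (y1 + y2) / 2
print(xm @ ym)   # 0.5, not 0
```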

The main goals of this paper are to prove the existence of a global minimum for (5) and to give an explicit formula for computing such a minimum. Our approach is essentially based on the application of the classical Lagrange multipliers to the constrained problem (5). It does not involve singular value decompositions which turns it faster than the BS method. Our results are supported by mathematical proofs and numerical experiments. In addition our method is designed to deal with general -dimensional vectors and , and not exclusively with -dimensional vectors.

1.2 The Explicit Formula

Let us consider two general 3D vectors a and b, such that a ≠ b and a ≠ −b. As will be shown in the following sections, the proposed solution for the global minimum of (5) is given by

    x = (a − μ b) / (1 − μ²),    y = (b − μ a) / (1 − μ²),     (7)

with

    μ = 2 aᵀb / ( aᵀa + bᵀb + √( (aᵀa + bᵀb)² − 4 (aᵀb)² ) ).     (8)
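Formula (7) translates directly into code (a NumPy sketch mirroring the MATLAB listing LMPC in Appendix A; it assumes a ≠ ±b):

```python
import numpy as np

def plucker_correct(a, b):
    """Closed-form Plucker correction: closest pair (x, y) with x'y = 0.

    Assumes a != +-b, so that mu != +-1 (see Sec. 2.2 for those cases).
    """
    p = a @ b
    q = a @ a + b @ b
    mu = 2.0 * p / (q + np.sqrt(q * q - 4.0 * p * p))
    x = (a - mu * b) / (1.0 - mu * mu)
    y = (b - mu * a) / (1.0 - mu * mu)
    return x, y

a = np.array([1.0, 2.0, 0.5])
b = np.array([0.3, -1.0, 2.0])
x, y = plucker_correct(a, b)
print(abs(x @ y) < 1e-12)   # the corrected pair satisfies the Klein constraint
```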

2 Plücker Correction using Lagrange Multipliers

Suppose again that a and b satisfy a ≠ b and a ≠ −b, and let L denote the Lagrangian function associated to (5). Some calculation yields

    L(x, y, μ) = ‖x − a‖² + ‖y − b‖² + 2 μ xᵀy,     (9)

where the real number μ is the Lagrange multiplier, and x and y are the aimed solutions. The partial derivatives of the Lagrangian with respect to x, y and μ are (for formulae of derivatives with respect to vectors see [Lutkepohl96])

    ∂L/∂x = 2 (x − a) + 2 μ y,     (10)
    ∂L/∂y = 2 (y − b) + 2 μ x,     (11)
    ∂L/∂μ = 2 xᵀy.     (12)
Equating these partial derivatives to zero, one obtains the first order optimality conditions (also known as Karush–Kuhn–Tucker conditions; see [Luenberger08, Nocedal99]):

    x − a + μ y = 0,     (13)
    y − b + μ x = 0,     (14)
    xᵀ y = 0.     (15)

From (13) and (14),

    x = (a − μ b) / (1 − μ²),    y = (b − μ a) / (1 − μ²)     (16)
(a geometric interpretation of (15) and (16) can be found in Figure 1). Replacing x and y in (15) leads to the quadratic equation in μ

    p μ² − q μ + p = 0,     (17)

where p and q are

    p = aᵀ b,    q = aᵀ a + bᵀ b.     (18)

Hence, the solutions of (17) are

    μ₁ = (q − √(q² − 4p²)) / (2p) = 2p / (q + √(q² − 4p²)),    μ₂ = (q + √(q² − 4p²)) / (2p).     (19)

Denoting, for i = 1, 2,

    (xᵢ, yᵢ) = ( (a − μᵢ b) / (1 − μᵢ²),  (b − μᵢ a) / (1 − μᵢ²) ),     (20)
we know that (x₁, y₁) and (x₂, y₂) are the candidates to be a local minimum of the Lagrangian L. However, since the gradient of the constraint involved in (5) vanishes at (0, 0), one also needs to consider this non-regular, non-stationary point. Thus a local minimum of (5) must be attained at one of the following three points: (x₁, y₁), (x₂, y₂), or (0, 0).

Now we shall note the following facts.

  • The assumption that a and b are such that a ≠ ±b guarantees, in particular, that q ≠ 2|p|, so that μᵢ cannot be ±1, which ensures that x and y in (16) are well defined.

  • In (19), μᵢ is always a real number because q ≥ 2|p|. Indeed, by the Cauchy–Schwarz inequality,

        q − 2|p| = aᵀa + bᵀb − 2|aᵀb| ≥ aᵀa + bᵀb − 2‖a‖‖b‖ = (‖a‖ − ‖b‖)² ≥ 0.

    Moreover, if p = 0, then a and b are already orthogonal, which means that the global minimum of the objective function is attained at (x, y) = (a, b).
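Both roots in (19) can be checked numerically on arbitrary inputs with a ≠ ±b and p ≠ 0 (a NumPy sketch): the discriminant is non-negative, the product of the roots is 1, and both candidate pairs satisfy the orthogonality constraint.

```python
import numpy as np

a = np.array([2.0, -1.0, 0.5])
b = np.array([1.0, 3.0, -2.0])

p = a @ b
q = a @ a + b @ b
disc = np.sqrt(q * q - 4.0 * p * p)   # real, since q >= 2|p|
mu1 = 2.0 * p / (q + disc)            # rationalized form of (q - disc)/(2p)
mu2 = (q + disc) / (2.0 * p)          # requires p != 0

print(abs(mu1 * mu2 - 1.0) < 1e-12)   # the roots of (17) have product 1

for mu in (mu1, mu2):
    x = (a - mu * b) / (1.0 - mu * mu)
    y = (b - mu * a) / (1.0 - mu * mu)
    print(abs(x @ y) < 1e-9)          # both candidates satisfy x'y = 0
```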

We end this section by showing that the objective function (6) satisfies

    f(x₁, y₁) ≤ f(x₂, y₂) ≤ f(0, 0),     (21)

which proves that (x₁, y₁) given in (20) is the local minimum at which the objective function attains the smallest value. In Sec. 2.1 it is shown that (x₁, y₁) is also the global minimum. Substituting x and y given in (16) into the objective function (6), some calculation yields the following real function depending on the real variable μ:

    g(μ) = μ² ( (1 + μ²) q − 4 p μ ) / (1 − μ²)².     (22)
Now, since f(xᵢ, yᵢ) = g(μᵢ) and f(0, 0) = aᵀa + bᵀb = q, (21) can be rewritten as

    g(μ₁) ≤ g(μ₂) ≤ q.     (23)

Let us write the function g in the form

    g(μ) = μ² h(μ) / (1 − μ²)²,

with h(μ) = (1 + μ²) q − 4 p μ and p, q given in (18).

For μ₁ and μ₂ given in (19), it is not hard to check that μ₁ μ₂ = 1, which implies |μ₁| ≤ 1 ≤ |μ₂|. Hence, to prove the first inequality of (23), it remains to show that g(μᵢ) = p μᵢ, for i = 1, 2.

In fact, because μᵢ (i = 1, 2) satisfies the quadratic equation (17), one has q μᵢ = p (1 + μᵢ²). Therefore,

    h(μᵢ) = (1 + μᵢ²) q − 4 p μᵢ = p (1 + μᵢ²)² / μᵢ − 4 p μᵢ = p (1 − μᵢ²)² / μᵢ

and, consequently,

    g(μᵢ) = μᵢ² h(μᵢ) / (1 − μᵢ²)² = p μᵢ.
Finally, considering the equality g(μ₂) = p μ₂ = (q + √(q² − 4p²))/2 and the fact that the quadratic function 4p² is non-negative for all real p, the second inequality of (23) arises as a consequence of the equivalences

    g(μ₂) ≤ q  ⟺  √(q² − 4p²) ≤ q  ⟺  −4p² ≤ 0.
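The relations used in this section can be confirmed numerically (a NumPy sketch with arbitrary inputs; f is the objective (6), p = aᵀb and q = aᵀa + bᵀb): the objective value at each candidate equals p μᵢ, and the first candidate attains the smallest value.

```python
import numpy as np

a = np.array([1.5, -0.5, 2.0])
b = np.array([0.5, 1.0, -1.0])

def f(x, y):
    # objective (6)
    return np.sum((x - a) ** 2) + np.sum((y - b) ** 2)

p, q = a @ b, a @ a + b @ b
disc = np.sqrt(q * q - 4.0 * p * p)
mus = (2.0 * p / (q + disc), (q + disc) / (2.0 * p))

vals = []
for mu in mus:
    x = (a - mu * b) / (1.0 - mu * mu)
    y = (b - mu * a) / (1.0 - mu * mu)
    vals.append(f(x, y))
    print(abs(vals[-1] - p * mu) < 1e-9)   # f(x_i, y_i) = p * mu_i -> True

# f(x1, y1) <= f(x2, y2) <= f(0, 0) = q
print(vals[0] <= vals[1] <= q)   # True
```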

2.1 Existence of a Global Minimum

Given a unit vector u in the Euclidean space R³, consider the line L_u through the origin with the direction of u. Given another vector a, it is well known that the unique vector of L_u that is closest to a (in the Euclidean sense) is the orthogonal projection of a onto L_u, which is the vector (uᵀa) u (see, for instance, [Meyer00, p. 435]).

Figure 1: Geometric interpretation of the Plücker correction problem. Given vectors a and b, the goal is to find two perpendicular vectors x and y that minimize the sum of the distance between a and the line through the origin with the direction of the unit vector x/‖x‖ and the distance between b and the line through the origin with the direction of the unit vector y/‖y‖.

Hence, the constrained problem (5) corresponds to finding orthonormal vectors u and v minimizing the sum of the distances

    d(a, L_u) = ‖a − (uᵀa) u‖,    d(b, L_v) = ‖b − (vᵀb) v‖,     (24)

between a and the line through the origin with the direction of u, and between b and the line through the origin with the direction of v (see Fig. 1). This means that (5) can be reformulated as

    min_{u, v}  d(a, L_u)² + d(b, L_v)²   subject to   uᵀv = 0,  uᵀu = vᵀv = 1.     (25)

This formulation of the Plücker correction problem is apparently more complicated and less practical than (5), but it is helpful for showing that (5) does in fact have a global minimum. To see this, one just needs to observe that the constraints in (25) define a closed and bounded (and, consequently, compact) set in R⁶ endowed with the Euclidean metric. Thus, by the classical Weierstrass theorem (see, for instance, [Luenberger08, Appendix A.6]), there exists at least one global minimum. This proves that the analysis carried out in Section 2, using the Lagrange multipliers, guarantees that the objective function (6) attains a global minimum at

    (x₁, y₁) = ( (a − μ₁ b) / (1 − μ₁²),  (b − μ₁ a) / (1 − μ₁²) ),

where μ₁ is defined as in (19).
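Global optimality can also be sanity-checked by sampling (a NumPy sketch; random sampling is only supporting evidence, not a proof): feasible pairs built from orthogonal directions never beat the closed-form value.

```python
import numpy as np

rng = np.random.default_rng(1)
a = np.array([1.0, 2.0, -1.0])
b = np.array([0.5, -1.5, 1.0])

# Closed-form candidate (formula (7)).
p, q = a @ b, a @ a + b @ b
mu = 2.0 * p / (q + np.sqrt(q * q - 4.0 * p * p))
x = (a - mu * b) / (1.0 - mu * mu)
y = (b - mu * a) / (1.0 - mu * mu)
f_star = np.sum((x - a) ** 2) + np.sum((y - b) ** 2)

best = np.inf
for _ in range(5000):
    u = rng.standard_normal(3)
    v = rng.standard_normal(3)
    v -= (v @ u) / (u @ u) * u          # force u'v = 0
    # the best point on each line is the orthogonal projection (u'a)u / etc.
    xs = (u @ a) / (u @ u) * u
    ys = (v @ b) / (v @ v) * v
    best = min(best, np.sum((xs - a) ** 2) + np.sum((ys - b) ** 2))

print(f_star <= best + 1e-9)   # no sampled feasible pair does better -> True
```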

2.2 The Cases a = ±b

As far as we know, the cases when

    a = b   or   a = −b

(equivalently, q = 2|p|, for which μ₁ = ±1 and formula (7) is not defined) rarely occur in practical problems of computer vision. However, it is worth making some comments on this particular situation.

If a = b = 0, the solution of (5) is obviously x = 0 and y = 0. If a = b but a ≠ 0, then replacing b by a in the Lagrangian (9) we get a simpler expression. Finding the first order optimality conditions and solving them, one easily concludes that any pair of vectors of the form (x, y) = (a − y, y), with yᵀ(a − y) = 0 and y otherwise arbitrary, gives a local minimum. The value of the Lagrangian at all these local minima is ‖a‖² and does not depend on y. Choosing, for instance, y = 0, it follows that the pair (a, 0) is a local minimum of (5). Using an argument similar to that of Sec. 2.1, this pair is also a global minimum.

Similarly, if a = −b, with a ≠ 0, it can be concluded that (a, 0) also provides a global minimum for (5).
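For the case a = b (with a ≠ 0), any pair with x + y = a and xᵀy = 0 is feasible and attains objective value ‖a‖². This can be checked directly (a NumPy sketch; the particular pair built from a vector w orthogonal to a, with ‖w‖ = ‖a‖/2, is an illustrative choice):

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])   # case a = b, with a != 0
b = a.copy()

def f(x, y):
    return np.sum((x - a) ** 2) + np.sum((y - b) ** 2)

# The pair (a, 0) is feasible and attains ||a||^2.
print(abs(f(a, np.zeros(3)) - a @ a) < 1e-12)   # True

# Another pair with x + y = a and x'y = 0 attains exactly the same value.
w = np.array([2.0, -1.0, 0.0])                      # w orthogonal to a
w *= np.linalg.norm(a) / (2.0 * np.linalg.norm(w))  # scale to ||a|| / 2
x, y = a / 2 + w, a / 2 - w
print(abs(x @ y) < 1e-12, abs(f(x, y) - a @ a) < 1e-12)   # True True
```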

3 Experiments

In this section, we compare the method based on the explicit formula (7) with the method of Bartoli and Sturm [Bartoli05], in terms of computational effort. Both methods were implemented in MATLAB. The code will be made available on the authors' website. We consider three different algorithms:

  • LMPC, which denotes the method derived in this paper;

  • BS, which corresponds to Bartoli and Sturm's approach; and

  • BS-LSVD, which denotes the method proposed by Bartoli and Sturm, where the SVDs are computed using closed-form techniques.

A detailed description of each algorithm is given in Appendix A.

To compare the methods we used the following procedure: we randomly generated unconstrained vectors (the Klein quadric constraint is not enforced). For each trial, we applied both Plücker correction algorithms to the respective six-dimensional vectors, storing the values of the corresponding objective function (6). The results in terms of the computational time required by each algorithm are shown in Tab. 1.

Algorithm For all trials For each trial (median)
LMPC 6.5797 5.3940
BS 43.688 34.827
BS-LSVD 14.0058 11.805
Table 1: Evaluation of the computational time for the three algorithms, for general unconstrained trials.

Both the LMPC and BS-LSVD methods can be implemented using only closed-form steps. However, while LMPC can be computed in a few steps (it only requires six lines of code), BS-LSVD requires more algebraic operations and takes longer to run, see Tab. 1. Indeed, BS-LSVD is about two times slower than LMPC. On the other hand, the original Bartoli and Sturm approach BS requires the computation of two singular value decompositions, which involve iterative steps and, as a result, is significantly slower. From our experiments, one can see that this method is more than six times slower than our approach.
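The gap can be reproduced in spirit with a small benchmark (a Python/NumPy sketch; this is not the MATLAB code used for Table 1, and numpy.linalg.svd is only a generic stand-in for the per-correction SVD work of BS):

```python
import timeit
import numpy as np

rng = np.random.default_rng(2)
pairs = [(rng.standard_normal(3), rng.standard_normal(3)) for _ in range(1000)]

def closed_form(a, b):
    # LMPC: explicit formula, a handful of arithmetic operations
    p, q = a @ b, a @ a + b @ b
    mu = 2.0 * p / (q + np.sqrt(q * q - 4.0 * p * p))
    return (a - mu * b) / (1.0 - mu * mu), (b - mu * a) / (1.0 - mu * mu)

def svd_stand_in(a, b):
    # generic stand-in for the SVD computed per correction by BS
    return np.linalg.svd(np.column_stack([a, b]))

t_cf = timeit.timeit(lambda: [closed_form(a, b) for a, b in pairs], number=3)
t_svd = timeit.timeit(lambda: [svd_stand_in(a, b) for a, b in pairs], number=3)
print(f"closed form: {t_cf:.4f}s, svd stand-in: {t_svd:.4f}s")
```

Exact timings depend on the machine and the linear algebra backend; the point is simply that the closed form replaces an SVD call with a few arithmetic operations.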

3.1 Discussion of the Experimental Results

In many applications there are estimates of 3D line coordinates that are not obtained from points. 3D lines can be estimated from intersections of planes, for example, or they can be obtained from non-conventional sensors, such as non-central generic cameras. Calibrating a non-central generic camera consists in estimating a 3D line for each pixel [article:sturm:2011, article:miraldo:2013]. If we represent the 3D lines using Plücker coordinates, the Klein quadric constraint has to be enforced; in this case the correction has to be applied to each and every pixel, which corresponds to calling the Plücker correction algorithm a very large number of times. For a camera system with a standard image size, and from the experimental results, one can conclude that the Plücker correction step using the method proposed in this paper will be at least two times faster than the Bartoli and Sturm method, which amounts to saving more than eight seconds.
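When the correction must run for every pixel, the closed form vectorizes trivially over an entire map of 6D vectors (a NumPy sketch; the image size and the random per-pixel estimates are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
H, W = 120, 160                      # illustrative image size
d = rng.standard_normal((H, W, 3))   # per-pixel direction estimates
m = rng.standard_normal((H, W, 3))   # per-pixel moment estimates

# Vectorized closed-form correction over the whole H x W map.
p = np.einsum('hwc,hwc->hw', d, m)
q = np.einsum('hwc,hwc->hw', d, d) + np.einsum('hwc,hwc->hw', m, m)
mu = (2.0 * p / (q + np.sqrt(q * q - 4.0 * p * p)))[..., None]

x = (d - mu * m) / (1.0 - mu * mu)   # corrected directions
y = (m - mu * d) / (1.0 - mu * mu)   # corrected moments

res = np.einsum('hwc,hwc->hw', x, y)
print(np.max(np.abs(res)) < 1e-6)    # Klein constraint holds at every pixel
```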

Another example, using perspective cameras, is the estimation of 3D lines from the intersection of planes back-projected from two or more images.

4 Conclusions

In this paper we have addressed the Plücker correction problem, minimizing the Euclidean distance between the estimated and the input vectors. By solving the corresponding optimization problem using Lagrange multipliers, we have proposed a simple solution that can be computed in closed form with a very small number of operations. In contrast to the state-of-the-art method, we have proved theoretically that our method always computes the global minimum. In addition, the special cases (where the explicit formula cannot be applied) are analysed. As the experimental results show, the proposed method is faster.



  • (1) H. Lütkepohl, Handbook of Matrices, John Wiley and Sons, 1996.
  • (2) J. Nocedal and S. Wright, Numerical Optimization, Springer-Verlag New York, 1999.
  • (3) C. D. Meyer, Matrix Analysis and Applied Linear Algebra, SIAM, Philadelphia, 2000.
  • (4) H. Pottmann and J. Wallner, “Computational Line Geometry”, Springer-Verlag Berlin Heidelberg, 2001.
  • (5) R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd Edition, Cambridge University Press, 2004.
  • (6) A. Bartoli and P. Sturm, “Structure-from-Motion Using Lines: Representation, Triangulation and Bundle Adjustment”, Computer Vision and Image Understanding, 2005.
  • (7) D. Lanman, M. Wachs, G. Taubin, and F. Cukierman, "Reconstructing a 3D Line from a Single Catadioptric Image", IEEE Int'l Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT), 2006.
  • (8) T. Lemaire and S. Lacroix, “Monocular-vision based SLAM using Line Segments”, IEEE Proc. Int’l Conf. Robotics and Automation (ICRA), 2007.
  • (9) D. Luenberger and Y. Ye, Linear and Nonlinear Programming, Third Ed., Springer, 2008.
  • (10) K. Josephson and F. Kahl, "Triangulation of Points, Lines and Conics", J Math Imaging Vis, 2008.
  • (11) P. Sturm and S. Ramalingam, "A Generic Concept for Camera Calibration", European Conf. Computer Vision (ECCV), 2011.
  • (12) P. Miraldo and H. Araujo, "Calibration of Smooth Camera Models", IEEE Trans. Pattern Analysis and Machine Intelligence, 2013.
  • (13) P. Miraldo and H. Araujo, “Planar Pose Estimation for General Cameras using Known 3D Lines”, IEEE/RSJ Proc. Int’l Conf. Intelligent Robots and Systems (IROS), 2014.
  • (14) P. Miraldo, H. Araujo, and N. Gonçalves, “Pose Estimation for General Cameras using Lines”, IEEE Trans. Cybernetics, 2015.
  • (15) F. Wu, M. Zhang, G. Wang, and Z. Hu, “Algebraic Error Based Triangulation and Metric of Lines”, PLoS ONE 10(7), e0132354, 2015.

Appendix A Algorithms

In this appendix we show the code used in the experiments. Given two general vectors a = (a1, a2, a3) and b = (b1, b2, b3) that do not verify the Klein constraint, using our method, the closest orthogonal vectors x and y are given by:

LMPC:

  p  = a1*b1 + a2*b2 + a3*b3;
  q  = a1*a1 + a2*a2 + a3*a3 + b1*b1 + b2*b2 + b3*b3;
  mu = 2*p/(q + sqrt(q*q - 4*p*p));
  u_ = 1/(1 - mu*mu);
  x1 = (a1 - mu*b1)*u_;  x2 = (a2 - mu*b2)*u_;  x3 = (a3 - mu*b3)*u_;
  y1 = (b1 - mu*a1)*u_;  y2 = (b2 - mu*a2)*u_;  y3 = (b3 - mu*a3)*u_;

The algorithm proposed by Bartoli and Sturm is based on the singular value decomposition and can be implemented as:

BS:

  [U,S,V] = svd(A,0);
  Z = S*V';
  z11 = Z(1,1); z21 = Z(2,1); z22 = Z(2,2); z12 = Z(1,2);
  T = [z12, z22; z21, -z11];
  [~,St,V_] = svd(T);
  hv = V_(:,2);
  hV = [hv(1), -hv(2); hv(2), hv(1)];
  R = U*hV*diag(diag(hV'*S*V'));
  x = R(:,1); y = R(:,2);

Usually, the singular value decomposition requires iterative techniques. However, since in this case we are dealing with small matrices, it is possible to derive a closed-form solution for these decompositions. Thus, in our experiments we also implemented a closed-form version of the Bartoli and Sturm method:

BS-LSVD:

  s1 = sqrt((a11*a11*a11*a11 + 2*a11*a11*a12*a12 + 2*a11*a11*a21*a21 - ... )
  s2 = sqrt(a11*a11/2 - (a11*a11*a11*a11 + 2*a11*a11*a12*a12 + ... );
  s2 = sqrt(s2);
  v11 = -((a11*a12 + a21*a22 + a31*a32))/(a11^2 + a21^2 + a31^2 - s1*s1);
  v21 = 1;
  nv = (v11*v11 + v21*v21)^(1/2);
  v11 = v11/nv; v21 = v21/nv;
  v12 = v21; v22 = -v11;
  u11 = (a12*v21 + a11*v11)/s1;  u12 = (a12*v22 + a11*v12)/s2;
  u21 = (a21*v11 + a22*v21)/s1;  u22 = (a21*v12 + a22*v22)/s2;
  u31 = (a31*v11 + a32*v21)/s1;  u32 = (a31*v12 + a32*v22)/s2;
  z11 = s1*v11; z12 = s1*v21; z21 = s2*v12; z22 = s2*v22;
  t11 = z12; t12 = z22; t21 = z21; t22 = -z11;
  st1 = t11/2 + t22/2 - (t11*t11 - 2*t11*t22 + t22*t22 + 4*t12*t21)^(1/2)/2;
  st2 = t11/2 + t22/2 + (t11*t11 - 2*t11*t22 + t22*t22 + 4*t12*t21)^(1/2)/2;
  if st1 < st2
    v1 = (t12*1)/(st1 - t11); v2 = 1;
  else
    v1 = (t12*1)/(st2 - t11); v2 = 1;
  end
  nv = (v1*v1 + v2*v2)^(1/2);
  v1 = v1/nv; v2 = v2/nv;
  h11 = v1; h12 = -v2; h21 = v2; h22 = v1;
  x1 = (u11*h11 + u12*h21)*(h11*s1*v11 + h21*s2*v12);
  y1 = (u11*h12 + u12*h22)*(h12*s1*v21 + h22*s2*v22);
  x2 = (u21*h11 + u22*h21)*(h11*s1*v11 + h21*s2*v12);
  y2 = (u21*h12 + u22*h22)*(h12*s1*v21 + h22*s2*v22);
  x3 = (u31*h11 + u32*h21)*(h11*s1*v11 + h21*s2*v12);
  y3 = (u31*h12 + u32*h22)*(h12*s1*v21 + h22*s2*v22);

(The elided coefficients in the first two lines are truncated in the source and are kept as "..." here.)

As can easily be seen in the previous listings, our method LMPC requires far fewer operations than the closed form of the Bartoli and Sturm algorithm, BS-LSVD. Moreover, unlike the BS method, it does not require iterative techniques, which usually make the algorithm slower.