This paper addresses the application of robust decomposition techniques, like the successful Robust PCA (RPCA) , to the problem of outlier (i.e., gross noise) detection in the special case of camera-pose recovery and Visual Odometry, referred to in this paper as motion estimation.
The emergence of outliers in real situations is unavoidable, and estimation techniques need to achieve a certain level of robustness to avoid producing wrong estimates. In the context of motion estimation (the practical case studied in this paper) outliers arise for two main reasons. The first is the violation of model assumptions, for instance, when a scene is considered static but foreground objects start moving (see the car in Fig. 1). The second is the failure of matching methods to retrieve correct point matches at different viewpoints.
Recently, it has been shown that RPCA can be used to recover an uncorrupted version of a matrix affected by outliers , but such a recovery is only achievable when matrices fulfil several well-understood constraints (see Section III). This is not the case for the data matrices arising in motion estimation problems, and therefore RPCA cannot be used to exactly recover an uncorrupted version of the data, i.e., such matrices are outside the recovery boundaries of RPCA .
In this paper we propose to exploit the information resulting from the sparse and low-rank decomposition to make a binary decision on each tuple of point matches (column) about its membership in the outlier set. As we will show throughout this paper, despite the impossibility of performing an exact recovery of every element of the observation matrix, the resulting information is enough to make this set of binary decisions. Unlike works like , our outlier detection technique has the advantage that it is not based on the correct recovery of the low-rank matrix and the sparse matrix (or their column spaces). Instead, we exploit the rank constraints present in motion estimation problems, leading to a novel Robust Decomposition with Constrained Rank (RD-CR). Our technique is based on proximal gradients, inspired by the Accelerated Proximal Gradient algorithm (APG) , but extended to deal with fixed-rank constraints efficiently. We show that our approach presents extended boundaries of application, making robust decomposition techniques useful for motion estimation applications.
We also present a new framework for robust motion estimation that applies RD-CR as a preprocessing stage, prior to the application of a very fast compressed least-squares method. Compressed least-squares makes use of an efficient data compression scheme known as the Reduced Measurement Matrix (RMM)  to condense thousands of constraints (point matches) into a compact matrix surrogate that allows for very fast optimization. RD-CR mitigates the robustness problem of compressed least-squares methods by filtering out the outliers. This finally leads to an efficient and robust regression framework. We demonstrate this framework on the stereo Visual Odometry problem with evaluations in both synthetic cases and the realistic and challenging sequences of the KITTI dataset . This evaluation also serves to test the applicability of the RD-CR technique in complex scenarios, showing its strengths when compared against state-of-the-art techniques. To the best of our knowledge this is the first time robust decompositions are tested in such challenging scenarios.
II Related Work
II-A Robust Estimation
Robust estimation aims at providing algorithms with a certain level of resistance to noise and outliers . It is a well-studied problem within the field of statistics and plays a crucial role in computer vision and robotics, where these techniques are widely applied to motion estimation problems, such as camera pose estimation and Visual Odometry (VO). In these domains, state-of-the-art approaches are usually based on consensus strategies, like the RANdom SAmple Consensus (RANSAC) or any of its variants . The main problem of RANSAC is its computational complexity: it estimates robust models by identifying inliers via the generation of several hypotheses (models), a process that grows exponentially w.r.t. the percentage of outliers, leading to poor performance in practical applications. A recent variation of consensus methods, called robust averaging , avoids this drawback by merging several “weak” models in a robust fashion to produce a robust outcome. In this case fewer models are needed, leading to a more efficient approach. However, these methods have a breakdown point of .
On many occasions, simpler alternatives, such as robust cost functions, are preferred to perform a quick and simple job. An instance of this is the Huber M-estimator , a variation of least-squares that uses two different modes to weight data points close to zero (inliers) and those that are far away (outliers) . Unfortunately, although efficient, its theoretical breakdown point is , meaning that in the worst case a single gross error could produce an arbitrarily large bias in the estimation. Another example is the Iteratively Reweighted Least Squares (IRLS) technique . In this case robustness is achieved by dynamically modifying a set of weights describing the influence of each observation. When this update is based on a robust cost like the -norm (IRLS-L1), the breakdown point becomes , making it a good alternative to consensus approaches .
Achieving a breakdown point of is the best one can expect without any extra information about the data distribution. In the previous cases, all the methods try to detect outliers by analysing how data samples agree with a set of putative models, and this exhaustive model generation leads to an expensive process. A promising alternative, which we use in this paper, consists of analysing the quality of the data with respect to an abstract set of constraints, such as the rank constraints arising in , groups. These rank constraints can be exploited very efficiently without the need to create a set of putative models. Instead, observations are checked as a “whole” against the rank constraint to decide which elements are more likely to violate the condition. Moreover, these properties still hold beyond the breakdown point , at least theoretically.
II-B RPCA for Outlier Detection
Outlier detection has been previously addressed by RPCA-like techniques in different domains, such as identifying outliers in digit images . However, in these cases the ambient dimension is high and the rank of the data matrix is low (i.e., the recovery guarantees are met). As a consequence, outlier detection is posed as the problem of recovering a clean column space of the input data by means of a robust decomposition based on a penalizer. However, the problem of motion estimation from stereo devices involves dealing with a low ambient dimension, which is very challenging due to theoretical limitations.
An initial attempt to deal with this problem can be found in . The proposed algorithm performs outlier detection for 3D registration by using an RPCA-like approach. However, it always stays within the recovery boundaries by gathering as many frames as possible, creating a high ambient dimension. This avoids the main problem of the low ambient dimension but at the cost of limiting the applicability of the technique, which can only work in small scenarios where conditions remain similar through time.
In this paper we propose to overcome these limitations with a more general alternative that is suitable for large sequences in outdoor environments, where only pairs of frames are required. The recovery boundaries are not fulfilled, but even in these extreme situations, our approach produces useful information that helps detect outliers. Moreover, we assume that our models are based on multiple 2D observations  instead of 3D ones. This has proven to be more convenient for Visual Odometry, where least-squares-based methods are frequently used to model the problem and the Maximum Likelihood Estimator (MLE) of these functions is achieved by using 2D observations.
III Problem Formulation
In this work, we consider the estimation of motion models , between pairs of stereo frames along a given sequence of frames . Each frame consists of two images taken from the left and right cameras at the same time instant . This formulation is suitable for the stereo Visual Odometry problem addressed in this work. To this end, we gather observations from two cameras at two different time instants. Camera models are represented by , , where superindices stand for the (l)eft and (r)ight cameras and subindices denote time instants. Without loss of generality, we assume that the stereo rig is calibrated and that the image planes of both cameras, and , are aligned and have a known baseline . The motion of the stereo rig from time to is given by a rigid 3D transformation as shown in (1),
This motion is represented as an element of the Special Euclidean group in 3D as in
where is a rotation matrix and
a translation vector. Each of the cameras can be algebraically described as shown in (3)–(6),
given that , represents a projection matrix, and
is a calibration matrix with focal length and principal point at . Given this configuration, the transformation of the stereo rig can be estimated from a set of 2D correspondences coming from the views , respectively . When estimating the transformation , one should account for the presence of noise and outliers in the observations in order to avoid a biased solution.
Our proposal exploits the rank constraints present in rigid 3D motions to identify outliers. To achieve this, it is important to notice that some arrangements of the input data as a matrix lead to such rank constraints . As an example of this concept, let us consider two sets of homogeneous 3D points at two time instants , stacked as described in (8)-left, while the right side expresses the matrix in terms of an initial set of points and two motion transformations acting on it, as
From this expression it is easy to see that the rank of the matrix is , since . This example shows the rank constraints exploited by our approach. However, in our case we use multi-view 2D observations from a calibrated stereo rig, instead of 3D points. The reasoning for this case is given in Lemma 1 and Fig. 2.
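The rank bound in this example can be checked numerically. The following sketch (with hypothetical random motions and points, not the paper's data) stacks two rigidly transformed copies of a homogeneous 3D point set and verifies that the stacked matrix has rank 4:

```python
import numpy as np

rng = np.random.default_rng(0)

# n homogeneous 3D points (4 x n) in general position
n = 50
X = np.vstack([rng.standard_normal((3, n)), np.ones((1, n))])

def random_se3(rng):
    """A random rigid transformation in homogeneous 4x4 form (illustrative)."""
    A = rng.standard_normal((3, 3))
    Q, _ = np.linalg.qr(A)
    Q *= np.sign(np.linalg.det(Q))  # force a proper rotation (det = +1)
    T = np.eye(4)
    T[:3, :3] = Q
    T[:3, 3] = rng.standard_normal(3)
    return T

T1, T2 = random_se3(rng), random_se3(rng)

# Stack the two transformed point sets: W = [T1 X; T2 X] is 8 x n,
# but W = [T1; T2] @ X factors through a 4-column matrix, so rank(W) <= 4.
W = np.vstack([T1 @ X, T2 @ X])
```

Although the ambient dimension of `W` is 8, its rank never exceeds 4, which is exactly the kind of structural constraint the decomposition exploits.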
Let us assume a measurement matrix made of uncorrupted 2D observations that have been normalized to remove their mean or transformed according to (known). consists of observations from four views of a moving stereo rig. Let us also consider, without loss of generality, that the planes of the left and right cameras are aligned and that the separation between the cameras forming the stereo rig is defined in one direction (the x-axis). In these conditions, the 3D motion of the stereo rig will lead to as expressed in (9).
Here, is a non-linear function that converts each point from homogeneous to non-homogeneous coordinates. This operation raises the maximum rank from 4, the case explained before for 3D data, to 6. In the following section we explain how to perform outlier detection by exploiting Lemma 1.
A direct geometrical proof can be derived by considering the degrees of freedom of points in a sequence of stereo frames (see Fig. 2). Given an arbitrary point seen by the left camera , the location of its counterpart in is determined by an epipolar constraint with one DoF. The same is true for the match between and . Finally, the associations along time, and , given by the motion transformation are determined by two more DoF, summing up to six DoF. ∎
III-A Robust Decomposition with Constrained Rank
We use the rank constraint in (9) to detect potential outliers present in the data matrix . The corrupted matrix can be decomposed as a sum of two terms . In this decomposition we assume that is the uncorrupted version of , which satisfies the correct rank constraint. On the other hand, is a sparse matrix accounting for the outliers, i.e., “infrequent events” of unbounded magnitude. This decomposition can be achieved by imposing sparsity constraints on the error matrix as
However, this formulation becomes intractable due to the presence of the -norm, whose minimization is of combinatorial nature and leads to an NP-hard optimization problem . Recent discoveries have led to the proposal of convex counterparts of (10), referred to in the literature as Robust PCA or Robust Principal Component Pursuit. These techniques are based on a convex relaxation of (10) as
where is the nuclear norm that acts as the convex envelope of the rank function, and the -norm acts as the convex envelope of the norm.
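The proximal operator associated with the nuclear-norm term in (11) is the well-known singular value thresholding. The following minimal sketch (illustrative only, not a full RPCA solver) shrinks the singular values of a matrix by a threshold:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm.
    Shrinks every singular value of X by tau and discards the ones that
    fall below zero, yielding the closest matrix under the nuclear-norm
    penalty."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)  # soft-threshold the spectrum
    return (U * s) @ Vt
```

For example, applied to a diagonal matrix `diag(3, 1)` with `tau = 2`, the operator keeps only the dominant direction and returns `diag(1, 0)`.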
The convex problem (11), although very convenient, suffers from two conceptual limitations when applied to low-dimensional ambient spaces. First, it does not impose the required rank constraint, i.e., . This could produce low-rank solutions with ranks different from, and generally smaller than, the one arising in our problem, thus leading to wrong solutions. Second, and more importantly, (10) and (11) are not equivalent in those cases where the rank is close to the dimensions of the matrix , as in this case. The behaviour of these limits is abrupt, which is why this phenomenon is known as phase transition, having a universal character, as stated in . For most methods, the region of applicability is approximately of the form
where is the rank fraction and is the proportion of sparsity (i.e., outliers). The constant depends on the specific formulation but is usually (see ). Also, RPCA convergence theorems assume that the support of
is uniformly distributed, meaning that cases where the sparse components are localized in blocks, rows or columns fall outside the applicability of RPCA. Nonetheless, some experiments have been reported in , where has support on columns or rows and is still applicable, as long as the other requirements are met. To this end,  proposes the -norm as a natural way to deal with outliers as columns. However, in our tests, when the ambient dimension is low, the behaviour of methods is equal to or worse than the counterpart, and computationally more costly.
Our proposal to deal with these problems consists of extending the region of applicability of by modifying the program (11) to explicitly incorporate the rank constraints. To this end, we reformulate (11) as
We refer to (13) as Robust Decomposition with Constrained Rank (RD-CR). This formulation enables solving problems in harder conditions, i.e., higher ranks and greater proportions of outliers. However, in motion estimation problems, the rank is still too high to achieve an exact estimation of and . For this reason, we tackle the problem by using the residual matrix to infer which columns (point matches) are outliers. The steps required to perform this process are explained in detail in Section IV.
IV Motion Estimation via RD-CR
This section explains our proposed two-stage framework for robust estimation of motion models (see Fig. 1). The first stage uses the RD-CR method as a preprocessing step and the second stage applies a compressed regression procedure that exploits data reduction to achieve high performance.
IV-A RD-CR as a preprocessing stage
The process of outlier detection starts by solving the program (13) with the proposed RD-CR approach, which is based on proximal gradients, following the philosophy of APG . However, in our case the use of the nuclear norm is avoided and the SVD is replaced by a more convenient skinny SVD that serves to generate feasible candidates of rank with a computational complexity of per iteration . Our technique also serves to deal with matrices presenting a rank below , as the skinny SVD will produce singular values very close to zero in those cases. This method, summarized in Algorithm 1, consists of three conceptual steps; namely, initialization, projected gradient descent and continuation.
IV-A1 Initialization (lines 1–3)
The imposed constrained rank makes (13) non-convex, for which a proper initialization of and is required. A good initialization is achieved by solving the convex surrogate (11) through the use of the APG method . APG uses a standard proximal algorithm improved by Nesterov acceleration  and a continuation technique to speed up the convergence of the terms involving proximity operators , i.e. both the nuclear and norms. Despite its convexity, it should be noticed that in the extreme cases we are addressing (high ) APG might not converge to a suitable solution. However, when APG is run for a fixed amount of iterations (), it produces candidates with enough quality to initialize our method. The remaining parameters are set to , and .
IV-A2 Projected Gradient Descent (lines 6–13)
The main loop of Algorithm 1 performs iterations, each one consisting of the computation of a projected descent step for and . To this end, descent steps and are computed from the Euclidean gradient . is projected to the rank-6 manifold through the skinny SVD, while is projected to promote sparsity through the soft-thresholding operator
We use fixed-length steps , for the descent process with values in the ranges , , which are close to the Lipschitz constant of (13).
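One iteration of the projected gradient descent described above can be sketched as follows. The step sizes, the rank bound and the sparsity weight in this sketch are hypothetical placeholders for the tuned values discussed in the text:

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of the l1-norm: shrink every entry toward zero."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def project_rank(X, r):
    """Skinny-SVD projection onto the set of matrices of rank <= r."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def rdcr_step(W, L, S, r=6, eta_L=0.5, eta_S=0.5, mu=0.1):
    """One projected-gradient iteration for the model W ~ L + S.
    The gradient of 0.5*||W - L - S||_F^2 w.r.t. both L and S is
    -(W - L - S), so both variables descend along the same residual."""
    G = W - L - S                          # residual (negative gradient)
    L = project_rank(L + eta_L * G, r)     # descend, then enforce rank(L) <= r
    S = soft_threshold(S + eta_S * G, mu)  # descend, then promote sparsity in S
    return L, S
```

Iterating `rdcr_step` drives the residual `W - L - S` down while keeping `L` on the rank-6 manifold and `S` sparse, mirroring the main loop of Algorithm 1.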
IV-A3 Continuation (lines 4 and 14–16)
Continuation is a technique to improve the convergence of proximity operators like the soft-thresholding by dynamically adapting the parameter following a geometric series . However, in our approach we adapt the continuation scheme to be more coherent with the current estimation. This is done by taking as a fraction of the mean size of the singular values of the Euclidean gradient , as,
Then, is used to generate a new , which empirically leads to higher performance.
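The continuation rule above can be sketched as follows, where the fraction of the mean singular value is a hypothetical placeholder for the value used in the paper:

```python
import numpy as np

def continuation_mu(G, fraction=0.1):
    """Set the sparsity weight as a fraction of the mean singular value
    of the current Euclidean gradient G, so the threshold adapts to the
    scale of the current estimate (fraction is a hypothetical value)."""
    s = np.linalg.svd(G, compute_uv=False)
    return fraction * s.mean()
```

Because the weight follows the spectrum of the gradient rather than a fixed geometric schedule, the soft-thresholding level stays coherent with the current residual magnitude.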
A proof of convergence for Algorithm 1 is very hard to obtain due to the large value of in our low ambient dimension scenario. However, this does not limit its applicability to real problems, since in our experiments running the algorithm for iterations has produced good results. Moreover, the proposed method is very efficient due to the moderate computational complexity of Algorithm 1, i.e. , and given that and .
After applying Algorithm 1, the resultant contains outliers distributed in columns. Then, in order to decide whether the -th column (point match) is an outlier or not, we apply the following decision criterion,
The decision made by is based on a threshold directly proportional to the average -norm of the matrix , up to a saturation point . This expression serves to adapt the threshold to the mean residual of the data, which in our experiments has yielded the best performance through cross-validation. In the cases where , the point match is removed from the correspondence set.
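A minimal sketch of this column-wise decision criterion, with hypothetical values for the proportionality constant and the saturation point (the paper sets them by cross-validation):

```python
import numpy as np

def detect_outlier_columns(E, kappa=1.0, sat=None):
    """Flag column j of the residual matrix E as an outlier when its
    l2-norm exceeds a threshold proportional to the average column norm.
    kappa and sat are hypothetical tuning parameters."""
    col_norms = np.linalg.norm(E, axis=0)
    thr = kappa * col_norms.mean()   # adapt threshold to the mean residual
    if sat is not None:
        thr = min(thr, sat)          # saturate the threshold
    return col_norms > thr           # boolean mask over point matches
```

Columns flagged `True` correspond to point matches that would be removed from the correspondence set before the regression stage.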
IV-B Fast Compressed Least-Squares
The second stage of our framework consists of the actual estimation of the transformation . In order to make this stage as fast as possible we consider those cost functions that are easy to linearise with respect to the unknowns. We have selected the algebraic cost function proposed in , defined as:
This function measures the reprojection error of points in the two stereo views as the misalignment of homogeneous vectors. Since this cost is algebraic instead of geometric, it requires a proper normalization of the data, i.e., removing its mean and scaling it by its standard deviation as described in . In this context, stands for homogeneous 3D points and for homogeneous 2D points. is the standard cross product of vectors. The rest of the notation remains as explained before.
The choice of (18) is motivated by its amenability to efficient compression. Compression consists of turning (18), which sums over all the observations, into a compact equivalent , known as the Reduced Measurement Matrix (RMM). In this case, the compression technique used is fully equivalent to the original data, i.e., it is a lossless compression. The compression step is performed by following the process,
For our particular problem is a matrix, whose size is determined by the number of coefficients required to linearise the original cost function (18). Here, , for , is the matrix of linearised coefficients which results from combining the calibration , and the observations, as shown in (24). Then is the stack of the matrices and , i.e., a vector resulting from stacking the components of with a homogeneous component.
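As an illustration of the lossless-compression idea (not the exact blocks used in the paper), a sum of quadratic costs over all matches can be condensed once into a single compact matrix, after which every cost evaluation is independent of the number of matches:

```python
import numpy as np

def reduced_measurement_matrix(A_list):
    """Accumulate per-match constraint blocks A_i into M = sum_i A_i^T A_i,
    so that sum_i ||A_i x||^2 = x^T M x for any parameter vector x.
    Generic lossless compression of a quadratic cost; the paper's RMM
    applies the same principle to its linearised coefficient matrices."""
    d = A_list[0].shape[1]
    M = np.zeros((d, d))
    for A in A_list:
        M += A.T @ A
    return M
```

Once `M` is built, thousands of constraints collapse into a single d-by-d matrix, which is why each optimization iteration becomes so cheap.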
The reduction of (19) is very efficient due to its algebraical nature and just has to be performed once, before the optimization takes place. Then, the optimization is posed as a compressed least-squares problem on the manifold in order to fulfil the orthonormal constraint of 3D rigid motions. This can be formulated as,
where is the estimated motion transformation given as an element of the Lie algebra, , which is the tangent space of at the identity. The use of the exponential map serves to move from the tangent space back to the manifold. This map is given by the Rodrigues formula  and is a requirement to perform the evaluation of the cost function. Despite the special properties of (20), it can be easily optimized by using Levenberg-Marquardt. The only requirement for this optimization is the computation of the Jacobian of (20), something that is done by computing its partial derivatives as,
The partial derivatives of are calculated by considering a -order approximation (with in our experiments) of the expression from the Taylor series, defined as,
The optimization of this function is very fast and converges in a couple of iterations. The use of reduces each iteration to a few matrix-vector and vector-vector products, which makes the compressed least-squares procedure very efficient. The total time for the compression and the optimization is around one millisecond on a standard desktop computer.
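The exponential map from the tangent space back to the manifold used in (20) can be sketched with the closed-form Rodrigues formula. The ordering of the translation and rotation parts of the tangent vector below is an assumption for illustration:

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix such that hat(w) @ v equals the cross product w x v."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_se3(xi):
    """Exponential map se(3) -> SE(3) via the Rodrigues formula.
    xi = (v, w): translation part v, rotation part w (assumed ordering)."""
    v, w = xi[:3], xi[3:]
    th = np.linalg.norm(w)
    W = hat(w)
    if th < 1e-12:                      # near-zero rotation: series limit
        R, V = np.eye(3), np.eye(3)
    else:
        A = np.sin(th) / th
        B = (1.0 - np.cos(th)) / th**2
        C = (th - np.sin(th)) / th**3
        R = np.eye(3) + A * W + B * (W @ W)   # Rodrigues rotation
        V = np.eye(3) + B * W + C * (W @ W)   # left Jacobian for translation
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ v
    return T
```

Evaluating the cost on the manifold then amounts to applying `exp_se3` to the current tangent-space estimate before projecting the observations.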
V Experimental Evaluation
This section shows the experiments performed on synthetic and real data to validate our approach. On one hand, synthetic data is used as a control environment to analyse the behaviour of the RD-CR algorithm in different situations. On the other hand, experiments with real data serve to compare the performance of the whole framework against state-of-the-art techniques for stereo Visual Odometry. From the results we conclude that our approach is competitive against state-of-the-art methods in terms of accuracy and is more efficient in terms of computation.
V-A Synthetic data tests
We have created a set of tests that simulate the conditions of a stereo rig performing a rigid motion in 3D, which is the application addressed in this paper. For each case, we evaluate the method for a specific matrix size (i.e., number of matches) and proportion of outliers . We define the stereo rig with the reference camera matrices . This means that is the reference frame, while is set at baseline distance . Then, motion transformations are created such that each . For each model we create matches , according to the four views. Point matches are then contaminated with random noise of magnitude and impulsive noise of magnitude , to represent outliers. In each case the of elements are corrupted by in all their components. This data is reorganized to form a matrix as explained in Sec. III.
We have run this test repeatedly for values of from to matches and ratios of outliers from to . For each combination of and , repetitions are performed and from them the average outlier detection rate, i.e., the accuracy of the method, is calculated. The results of this experiment are shown in Fig. 3 (Top). Our approach RD-CR shows a high success ratio, identified by warm colours, for the different numbers of matches and percentages of outliers considered. Additionally, as we can see in Fig. 3 (Bottom), the percentage of outlier removal of RD-CR closely matches the expected percentage of corruption, meaning that the method's recall is high and it does not produce many false positives.
By imposing the correct rank, RD-CR reduces the percentage of excessive elimination of outliers, a problem that is present in APG, as shown in Fig. 4. Given several levels of corruption we compare the excess of outlier removal (false positives) caused by RD-CR and APG. From these results we can conclude that, in general, RD-CR surpasses APG at detecting outliers and causes fewer false positives for the different configurations of noise. This is mainly due to the correct rank estimation done by RD-CR, as APG tends to underestimate the rank, leading to more false positives.
This synthetic experiment aims to show that it is possible to exploit RPCA-like methods to detect outliers even when working beyond the recovery boundaries of these techniques, which is the initial assumption of this work. In these cases, the high accuracy of our approach at outlier detection is closely linked to a precise selection of the threshold within the function . However, when working in real scenarios this information cannot be determined in a precise fashion. In addition, real scenarios bring extra challenges, such as multiple corruptions per correspondence, i.e., more than one point severely affected within a match, and can adopt moderate values, making the detection process more challenging. Nevertheless, our next experiments prove that RD-CR still works in real scenarios under these adverse conditions.
V-B Real data tests
We now show the good performance of our framework applied to stereo Visual Odometry in real sequences. Data sets are taken from the challenging KITTI benchmark  in its Visual Odometry modality. KITTI sequences correspond to large urban environments, providing stereo images taken from a car. We use sequences 00–10 for which ground truth is available.
The comparison is carried out with 5 state-of-the-art methods for motion estimation: RANSAC  and L1-averaging , representing classical consensus and modern consensus-by-averaging methods; IRLS-L1  and the Huber M-estimator , as representatives of fast robust estimators; and Compressed Least-squares (C-LS) —the compressed flavour of least-squares presented in Section IV-B— representing non-robust methods. In addition, APG is also tested in order to highlight the difference in accuracy when the rank constraint is not imposed. Other methods, like Bundle Adjustment, are excluded since our goal is the estimation of individual models, without considering the temporal relationships among the different motion transformations.
In our experiments RANSAC and L1-averaging are set to produce a maximum of and models respectively during the consensus, which is a realistic number for real-time applications. The constant for the Huber M-estimator is set to , after a process of validation. The number of point matches per frame is around . For all methods the computation of the models is done by an optimization process on , via a couple of iterations of Levenberg–Marquardt. The cost function used in this optimization follows a standard projection of 3D points into the left and right cameras , over a fixed number of matches , i.e., the minimum number required to generate a valid model. In our experiments this cost function is defined as,
As an error metric we use the mean of the individual relative errors along a whole sequence. To measure this error, we have considered the most natural metric, i.e., the one induced by the group structure of rigid transformations, which is defined as:
In this expression, is the logarithm of the group, mapping from the group to its tangent space. This tool is used to measure the difference between the estimated motion and the ground truth . From that, a relative error is computed to facilitate an intuitive interpretation,
It should be stressed that this metric provides a measure of the local error, and does not take into account the cumulative effect of the error on the global trajectories. This point is critical in order to understand the differences between the results provided by Table I, which is focused on local errors, and Fig. 5 and Fig. 6, which show the cumulative effects of the errors.
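A simplified sketch of such a relative-error measure through the group structure, splitting the logarithm of the relative transformation into a rotation angle and a translation magnitude (a stand-in for the paper's exact metric, not its definition):

```python
import numpy as np

def se3_relative_error(T_est, T_gt):
    """Compare an estimated rigid motion against ground truth through the
    relative transformation D = T_gt^{-1} T_est. Returns the rotation
    angle of D (the norm of the rotation part of log(D)) and the norm of
    its translation part."""
    D = np.linalg.inv(T_gt) @ T_est
    R, t = D[:3, :3], D[:3, 3]
    # angle of a rotation matrix from its trace, clipped for numerical safety
    cos_th = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos_th), np.linalg.norm(t)
```

Both components vanish exactly when the estimate equals the ground truth, and averaging them per frame gives a local (non-cumulative) error along a sequence.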
Table I groups the compared methods into consensus-based methods, non-consensus-based methods and robust decomposition-based methods.
Table II reports a time analysis per method. The quantities involved are: the number of models generated from random matches; the time used to generate each model by means of an iterative process on ; the number of point matches considered; the time employed to check the membership of a specific match in the inlier set; the time taken by the initialization of the -averaging; the average number of iterations of a given method (Iters); the time for projecting a model to the tangent space ; the time needed to map a model from its tangent space back to the manifold; the time used to compute a cost function and its partial derivatives; the time used for updating the weights of after each iteration; the time for compressing the initial matches into the Reduced Measurement Matrix; the time used to perform an optional refinement based on the re-estimation of the inlier set; the numbers of iterations needed by APG (ItersAPG) and by RD-CR (ItersRDCR); the time to compute an economic SVD; the time to compute a skinny SVD (compact SVD plus thresholding for ); the time needed to compute the Riemannian gradients; the time to compute the nuclear norm; the time to compute the soft-thresholding operator; the time used to compute approximate points by following Nesterov’s rule; and the time required to perform the Compressed Least-squares method.
Table I summarizes the average results of the selected methods when applied to the KITTI sequences for the task of Visual Odometry. The analysis of this table reveals that RD-CR leads to results at the level of the consensus methods. Moreover, our proposal is more efficient since it does not need to generate multiple hypotheses, being able to detect outliers from the structure of the data and leading to a promising trade-off (see Table II).
As shown, RD-CR produces better results than the L1-averaging for all the sequences but seq. 01. RD-CR also performs similarly to RANSAC, achieving better results for sequences 02, 05 and 07. In general, our approach outperforms robust regression methods like IRLS-L1 and the Huber M-estimator. In challenging scenes, such as sequences 01, 07 and 10, RD-CR produces significantly better results, while IRLS-L1, the Huber method and C-LS are seriously affected by outliers.
Our results also show that skipping the rank constraint has a dramatic negative influence on the outlier detection process. For this reason, APG performs dramatically worse than the proposed RD-CR technique. There are two phenomena involved in this problem. When the rank is overestimated, APG will miss outliers (false negatives), since they will be absorbed into the low-rank matrix . On the contrary, if APG underestimates the rank, the number of linear dependencies in will increase and some elements will migrate to (false positives), losing some inliers needed for a good estimation of the solution.
Figure 5 shows results from the point of view of the estimated trajectories. The analysis of the first two plots, Fig. 5(a)–(b) shows that the accuracy of our proposal is at the level of RANSAC or even higher. This is depicted in the zoomed region of Fig. 5(b), where the trajectory estimated by RD-CR is closer to the ground truth. This behaviour is repeated for the trajectories shown in Fig. 5(c),(d),(e) and (f), where the difference of RD-CR with respect to the others is more pronounced. In the last sequences (c)–(f), the trajectories estimated by RD-CR are several meters closer to the ground truth than those from the state-of-the-art methods.
Finally, we would like to consider the failure cases. In the case of seq. 01, depicted in Fig. 6(a), we attribute the bad results to the severe visual aliasing found in this highway scene, a phenomenon leading to a very high ratio of matches that are outliers but behave similarly to inliers, i.e., they are mistaken for a valid motion. It is also fair to highlight that robust estimators like IRLS-L1 perform very well in some sequences. For instance, in seq. 04 (see Fig. 6(b)), IRLS-L1 produces excellent results. The main problem RD-CR finds in this sequence is the matches corresponding to a car moving at the same speed as our vehicle. This subtle motion is wrongly combined with the predominant motion transformation, leading to a small but continuous drift through the sequence. We are currently working on a way to solve this specific issue for future versions of the approach.
V-B.1 Complexity evaluation
We demonstrate the excellent computational performance of the proposed approach by comparing it against selected state-of-the-art techniques. A special effort has been made to produce a fair comparison of the methods. Since all the algorithms are coded in Matlab, one needs to account for the poor scalability of loops, which usually leads to a super-linear cost with respect to the loop length. To avoid this problem we analyse iterative blocks in terms of their average number of iterations and the average cost of each iteration; the average cost per iteration is measured using short loops, which stay clear of the region of bad scalability.
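The measurement protocol above can be sketched as follows, shown in Python for illustration (the helper name and parameters are our own; the paper's evaluation is in Matlab): time several short loops, average within each loop, and keep the best repeat to suppress scheduler noise.

```python
import time

def avg_cost_per_iteration(step_fn, n_short=20, n_repeats=5):
    """Hypothetical helper mirroring the protocol above: estimate the
    average cost of one iteration by timing short loops, which avoid
    the super-linear-cost region of long loops."""
    per_iter = []
    for _ in range(n_repeats):
        t0 = time.perf_counter()
        for _ in range(n_short):
            step_fn()
        # Average the short loop down to a per-iteration cost.
        per_iter.append((time.perf_counter() - t0) / n_short)
    # Best-of-repeats is less sensitive to transient system load.
    return min(per_iter)

# Total cost of an iterative block is then estimated as
# (average number of iterations) x (average cost per iteration).
```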
The results of our analysis are summarized in Table II. For each method we highlight the blocks that dominate its computational cost.
In this evaluation, RD-CR proves to be one of the most efficient approaches, second only to the non-robust C-LS, which is itself the second stage of RD-CR. Consensus methods spend a large amount of time generating putative models and checking the pertinence of matches to the inlier set. This is especially noticeable in our experiments since the number of matches is large, a choice justified by previous works showing that managing large numbers of matches improves stability and accuracy (up to a limit). Methods based on least-squares, such as IRLS and the Huber M-estimator, suffer from a similar problem due to the computation of large Jacobians.
The behaviour of APG and RD-CR is quite different from that of the previous methods. These approaches do not deal with individual instances of the data, but with the whole data set as a single entity, which allows APG and RD-CR to remain very efficient even when facing a large number of matches. Furthermore, due to its structure, RD-CR requires fewer iterations to converge than APG; that is, RD-CR achieves higher accuracy with fewer iterations, which makes it computationally very appealing.
VI Conclusion and future work
We have shown that outlier detection in motion estimation problems can benefit from the use of robust decomposition techniques, even in those cases that lie beyond the theoretical limits of robust recovery. We have exploited the information available in deficient recoveries to infer which columns are outliers. To this end, we have proposed a new method, RD-CR, able to impose the rank constraints arising in motion estimation problems in an efficient way. Additionally, we presented a robust regression framework that uses RD-CR as a preprocessing stage prior to a fast compressed least-squares algorithm based on data reduction. We verified that this design alleviates known problems of compressed least-squares methods and leads to a robust estimator that is both precise and fast. We have validated our approach with tests on both synthetic and real data sets, providing a good view of the theoretical behaviour of the method and its good performance in real scenarios. As future improvements, we are experimenting with new ways of addressing rank constraints that have proven to be computationally more efficient and capable of dealing with large-scale problems.
This work has been supported by the Universitat Autònoma de Barcelona, the Fundación Séneca 08814PI08, the Spanish government through the projects FIS2011-29813-C02-01 and TRA2011-29454-C03-01 (eCo-DRIVERS), and NICTA. NICTA is an entity funded by the Australian Government as represented by the Department of Broadband, Communications, and the Digital Economy, and the ARC through the ICT Centre of Excellence Program.
-  J. Wright, “Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization,” in NIPS, 2009.
-  E. Candès, X. Li, Y. Ma, and J. Wright, “Robust principal component analysis?” Journal of the ACM, vol. 58, no. 3, May 2011.
-  H. Xu, C. Caramanis, and S. Sanghavi, “Robust pca via outlier pursuit,” in NIPS, 2010.
-  D. L. Donoho and J. Tanner, “Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing.” Philosophical transactions. Series A, Mathematical, physical, and engineering sciences, vol. 367, Nov. 2009.
-  Z. Lin, A. Ganesh, J. Wright, L. Wu, M. Chen, and Y. Ma, “Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix,” in Workshop on Comp. Adv. in Multi-Sensor Adapt. Processing, 2009.
-  R. Hartley, “Minimizing algebraic error in geometric estimation problems,” in ICCV, 1998.
-  G. Ros, J. Guerrero, A. Sappa, D. Ponsa, and A. López, “Fast and robust l1-averaging-based pose estimation for driving scenarios,” in BMVC, 2013.
-  A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? the KITTI vision benchmark suite,” in CVPR, 2012.
-  R. Raguram, J. Frahm, and M. Pollefeys, “A comparative analysis of RANSAC techniques leading to adaptive real-time random sample consensus,” in ECCV, 2008.
-  F. R. Hampel, E. M. Ronchetti, P. J. Rousseeuw, and W. A. Stahel, Robust Statistics: The Approach Based on Influence Functions (Wiley Series in Probability and Statistics). New York: Wiley-Interscience, 2005.
-  M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Commun. ACM, vol. 24, no. 6, 1981.
-  V. M. Govindu, “Lie-algebraic averaging for globally consistent motion estimation,” in CVPR, 2004.
-  R. Hartley, K. Aftab, and J. Trumpf, “L1 rotation averaging using the Weiszfeld algorithm,” in CVPR, 2011.
-  P. J. Huber, Robust Statistics, ser. Wiley Series in Probability and Statistics - Applied Probability and Statistics Section Series. Wiley, 2004.
-  P. W. Holland and R. E. Welsch, “Robust regression using iteratively reweighted Least-Squares,” Communications in Statistics: Theory and Methods, pp. 813–827, 1977.
-  D. P. O’Leary, “Robust regression computation using iteratively reweighted least squares,” SIAM Journal on Matrix Analysis and Applications, pp. 466–480, 1990.
-  C. Slaughter, A. Y. Yang, J. Bagwell, C. Checkles, L. Sentis, and S. Vishwanath, “Sparse Online Low-rank projection and Outlier rejection (SOLO) for 3-D rigid-body motion registration.” in ICRA, 2012.
-  G. Sibley, C. Mei, I. Reid, and P. Newman, “Vast-scale outdoor navigation using adaptive relative bundle adjustment,” International J. of Robotics Research, 2010.
-  B. Kitt, A. Geiger, and H. Lategahn, “Visual odometry based on stereo image sequences with RANSAC-based outlier rejection scheme,” in IV Symposium, 2010.
-  D. Nistér, O. Naroditsky, and J. Bergen, “Visual odometry,” in CVPR, vol. 1, 2004.
-  H. Strasdat, A. J. Davison, J. M. M. Montiel, and K. Konolige, “Double window optimisation for constant time visual SLAM,” in ICCV, 2011.
-  C. Tomasi and T. Kanade, “Shape and motion from image streams under orthography: A factorization method,” Int. J. Comput. Vision, 1992.
-  Z. Lin, M. Chen, L. Wu, and Y. Ma, “The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices,” UIUC, Tech. Rep., 2009.
-  T. Bouwmans and E. Zahzah, “Robust PCA via Principal Component Pursuit: A review for a comparative evaluation in video surveillance,” Computer Vision and Image Understanding (special issue), 2014.
-  Z. Lin, R. Liu, and Z. Su, “Linearized alternating direction method with adaptive penalty for low-rank representation,” in NIPS, 2011.
-  Y. Nesterov, “Smooth minimization of non-smooth functions,” Math. Program., 2005.
-  E. Hale, W. Yin, and Y. Zhang, “Fixed-point continuation for l1-minimization: Methodology and convergence,” SIAM Journal on Optimization, 2008.
-  R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision. Cambridge University Press, 2003.
-  G. Ros, J. Guerrero, A. D. Sappa, D. Ponsa, and A. M. López, “VSLAM pose initialization via Lie groups and Lie algebras optimization,” in Proceedings of the IEEE ICRA, Karlsruhe, Germany, 2013.