Image-based estimation of camera motion, known as visual odometry (VO), plays an important role in many robotic applications such as control and navigation of unmanned mobile robots, especially when no external navigation reference signal is available. Although a number of successful works have been presented over the past decade in this relatively mature field, no single method works in every scenario. For example, sparse methods based on salient features, such as [1, 2], do not work well when there is insufficient texture in the image for defining feature points. By taking advantage of all intensity information, direct methods like [3, 4, 5, 6] achieve better performance in textureless environments as long as the assumption of photometric consistency is sufficiently met. Engel et al. [7, 8] further improve efficiency by using only the photometric information around the semi-dense region. Other systems such as [9, 10, 11, 12] track the camera using an iterative closest point (ICP) algorithm over the depth information only. This, however, requires the presence of sufficient 3D structure and fails, for instance, for a planar scene. Furthermore, the ICP algorithm is computationally expensive, and usually depends on GPU resources for real-time performance.
In this work we combine the merits of semi-dense processing and ICP based tracking. Compared to sparse methods, the proposed semi-dense method exploits the common structure of man-made environments, and can thus handle relatively textureless situations. It ensures computational efficiency while being able to work in the degenerate case for ICP trackers (i.e., a single plane). Instead of applying a direct method, which is sensitive to illumination changes, an accurate ICP-inspired geometric framework is proposed. More precisely, the estimation of the camera pose of the current frame with respect to the reference frame is cast as a 2D-3D registration problem. The 3D part is given by a 3D semi-dense map defined in the reference frame, and the 2D part is given by the semi-dense region extracted in the current frame. Similar to the classical ICP framework, which aims at aligning surfaces in 3D, our method aims at non-parametric curve-to-curve registration. To improve robustness against sensor noise and outliers, we apply a probabilistic model in the spirit of earlier probabilistic VO formulations. The resulting maximum a posteriori (MAP) problem is equivalent to a weighted least squares problem which can be solved using the iteratively re-weighted least squares (IRLS) method.
Our main contributions are four-fold:
Introducing the idea of approximate nearest neighbour field, which permits the use of compact Gauss-Newton updates in the registration.
Exploring the optimal robust weight function for the probabilistically formulated, 2D-3D semi-dense ICP based motion estimation.
A real-time implementation running at 25 Hz on a laptop using only CPU resources.
Extensive evaluation under varying experimental conditions and varying algorithm setups (e.g., different methods for extracting the semi-dense region and reweighting residuals) and performance comparison against state-of-the-art solutions on publicly available datasets.
The paper is organized as follows. More related work is discussed in Section II. Section III provides an in-depth review of geometric semi-dense 2D-3D registration. Section IV then presents the core of our new approach, including the idea of an approximate nearest neighbour field and the keys to robust motion estimation despite occlusion, noise and outliers. An overview of our framework is given in Section V, followed by extensive evaluation, including the exploration of the best configuration and a comparison against state-of-the-art methods, in Section VI.
II Related Work
Line, curve based and semi-dense methods: Lines are alternative features to points and have been widely used in many VO and SLAM frameworks such as [13, 14]. One reason is that lines are abundant in man-made structures and environments, and do not depend on sufficient texture. Another reason is that line features are easily parametrized and included in a bundle adjustment (BA) pipeline [13, 15] for the purpose of global optimization. However, straight lines are still not a general feature because object contours can be arbitrary curves in 3D space. Therefore, Nurutdinova et al. [16] present a method which uses parametric curves as landmarks for motion estimation and BA. Furthermore, Engel et al. apply a direct method to the semi-dense region [7, 8], which fully utilizes the photometric information around all boundaries, edges and contours. The most relevant work to ours is [17], which presents a direct edge alignment approach for 6-DOF tracking. They address the non-differentiability of their distance transform (DT) based cost function by using a sub-gradient method. Conversely, we improve the differentiability of the cost function intrinsically and achieve more accurate results at a comparable computational cost.
ICP: The Iterative Closest Point (ICP) algorithm is a fundamental component of our method and has been used extensively in 3D-3D registration problems. Typical issues when applying those methods are missing data, noise, outliers, and local minima in the registration process. Yang et al. investigate globally optimal solutions to the point set registration problem [18]. However, this method is not efficient enough for real-time applications, where the frame-to-frame displacement remains small enough anyway for a successful application of local methods. The most closely related work is [19], which applies ICP and distance transforms to semi-dense 2D-3D registration. A Chebyshev/Chamfer distance field is chosen as an approximation of the Euclidean distance field to achieve real-time performance. Without discussing how to propagate the reference frame, [19] stops at solving an absolute pose estimation problem rather than providing a full VO system.
Photometric and hybrid registration methods: The ICP algorithm and its close derivatives [11, 12, 9, 10] still represent the methods of choice for real-time LIDAR tracking, though expensive computational resources like GPUs are sometimes necessary. The advent of RGB-D cameras has, however, led to a new generation of 2D-3D registration algorithms that exercise a hybrid use of both depth and RGB information. For instance, Steinbrücker et al. use the depth information along with the optimized relative transformation to warp one RGB-D image onto the next [5], thus permitting direct and dense photometric error minimization. A similar idea is applied in [4, 7, 8].
Robust M-estimators and IRLS: When system noise and outliers are taken into account, M-estimators are popular choices for re-weighting the naïve least squares problem. The earliest tutorial on using different M-estimators in the application of conic fitting was given in [20]. Recently, Aftab and Hartley investigated the full range of robust M-estimators that are amenable to IRLS [21]. In consideration of the great success of applying IRLS and M-estimators in motion estimation works such as [3, 4, 7], we utilize them in our work as well.
III Review of Geometric Semi-dense 2D-3D Registration
This section reviews the basic idea behind geometric semi-dense 2D-3D registration. After a clear problem definition, we will review existing registration methods, and conclude with a brief summary of the open problems addressed in this paper.
III-A Problem formulation
Fig. 2: The image gradient is calculated in both the horizontal and vertical direction at each pixel location. The Euclidean norm of each gradient vector is illustrated in (a) (brighter means larger, darker means smaller). The semi-dense region is obtained by thresholding this gradient-norm map. By accessing the depth information of the semi-dense region, a 3D semi-dense map (b) is created, in which hot colors indicate close points and cold colors indicate far ones.
Let the semi-dense region be a set of pixel locations in a frame. As illustrated in Fig. 2, it is obtained by thresholding the norm of the image gradient, which could, in the simplest case, originate from a convolution with Sobel kernels. Let us further assume that a depth value is available for each pixel in the semi-dense region. In the pre-registered case, these are simply obtained by looking up the corresponding pixel location in the associated depth image. For each pixel, a small local patch is visited and the smallest depth is selected in the case of a depth discontinuity. (The depths of all pixels in the patch are sorted and clustered based on a simple Gaussian noise assumption; if there exists a cluster center that is closer to the camera, the depth value of the current pixel is replaced by the depth of that center. This circumvents resolution loss and the elimination of fine depth texture.) This operation ensures that we always retrieve the foreground pixel despite possible misalignments caused by calibration errors or asynchronous measurements under motion. An example result is indicated in Fig. 2(b). We furthermore assume that both the RGB and the depth camera are fully calibrated (intrinsically and extrinsically). We thus have accurate knowledge of a world-to-camera transformation function π(·) projecting any point along the ray defined by a unit vector onto the corresponding image location. The inverse transformation π⁻¹(·), transforming points in the image plane into unit direction vectors located on the unit sphere around the center of the camera, is also known. If the RGB image and the depth map are already registered, the extrinsic parameters can be omitted. Our discussion from now on is based on this assumption.
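As a toy illustration of this extraction step, the following Python sketch (assuming NumPy is available; the threshold value and the use of central differences instead of Sobel kernels are simplifications, not choices from the paper) computes a semi-dense region from a synthetic image:

```python
import numpy as np

def semi_dense_region(gray, thresh):
    # Central-difference gradients along rows (y) and columns (x).
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)             # Euclidean norm of the gradient
    ys, xs = np.nonzero(mag > thresh)
    return np.stack([xs, ys], axis=1)  # (N, 2) pixel coordinates

# Toy image with a single vertical step edge between columns 3 and 4.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
pts = semi_dense_region(img, 0.25)
```

Only the pixels on either side of the step edge survive the threshold, which is exactly the behaviour desired for contours and boundaries.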
Consider the 3D semi-dense map (defined in the reference frame) as a curve in 3D, and its projection into the current frame as a curve in 2D. The goal of the registration step consists of retrieving the pose of the current frame (namely its position and orientation) such that the projected 2D curve aligns well with the semi-dense region extracted in the current frame. Note that, due to perspective transformations, this is of course not a one-to-one correspondence problem. Also note that we parametrize our curves by a set of points originating from pixels in an image.
III-B ICP-based motion estimation
The problem can be formulated as follows. Let

    S_r = { P_i = d_i π⁻¹(p_i) }_{i=1..N}    (1)

denote the 3D semi-dense map in the reference frame, where d_i denotes the distance of point P_i to the optical center. Its projection into the current frame results in the semi-dense region

    { π(R P_i + t) }_{i=1..N}.    (2)

Let

    n(p) = argmin_{q ∈ S_c} || p − q ||    (3)

be a function that returns the nearest neighbour of p in the semi-dense region S_c of the current frame under the Euclidean distance metric. The overall objective of the registration is to find

    θ* = argmin_θ Σ_i || π(R P_i + t) − n(π(R P_i + t)) ||²,    (4)
where θ represents the parameter vector that defines the pose (R, t) of the camera: three Cayley parameters [22] for the orientation R, and three parameters for the position t. (Note that the orientation is always optimized as a change with respect to the previous orientation in the reference frame. The chosen Cayley parametrization is therefore equivalent to the local tangential space at the previous quaternion orientation and hence a viable parameter space for local optimization of the camera pose.) The above objective is of the same form as the classical ICP problem, which alternates between finding approximate nearest neighbours and registering those putative correspondences, except that in the present case the correspondences are between 2D and 3D entities. A very similar objective function has already been exploited by [19] for robust semi-dense 2D-3D registration in a hypothesis-and-test scheme, proceeding by iterative sparse sampling and closed-form registration of approximate nearest neighbours.
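For readers unfamiliar with the Cayley parametrization, the following Python sketch (assuming NumPy; the convention R = (I − [c]×)⁻¹(I + [c]×) is one common form of the transform) illustrates how a 3-vector maps to a proper rotation matrix, with the zero vector giving the identity, which suits the optimization of small orientation changes:

```python
import numpy as np

def skew(c):
    # Skew-symmetric cross-product matrix [c]x.
    return np.array([[0.0, -c[2], c[1]],
                     [c[2], 0.0, -c[0]],
                     [-c[1], c[0], 0.0]])

def cayley_to_rot(c):
    # Cayley transform: R = (I - [c]x)^(-1) (I + [c]x).
    C = skew(np.asarray(c, dtype=float))
    I = np.eye(3)
    return np.linalg.solve(I - C, I + C)

R = cayley_to_rot([0.1, -0.05, 0.2])
```

The result is orthonormal with determinant one for any parameter vector, so no re-normalization step is needed inside the optimization loop.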
III-C Distance fields
As already outlined in [19], the repetitive explicit search of nearest neighbours is too slow even in the case of robust sparse sampling. This is due to the fact that all distances need to be computed in order to rank the hypotheses, which would again require an exhaustive nearest neighbour search. This is where distance transforms come into play. The explicit location of a nearest neighbour does not necessarily matter for evaluating the optimization objective (4); the distance alone may already be sufficient. Therefore, we can pre-process the semi-dense region in the current frame and derive an auxiliary image in which the value at every pixel simply denotes the Euclidean distance to the nearest point in the original semi-dense region. Euclidean distance fields can be computed very efficiently using region-growing techniques. The Chebyshev distance is an alternative when faster performance is required. For further information, the interested reader is referred to [23].
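In practice, such a Euclidean distance field can be obtained with an off-the-shelf distance transform. The sketch below (assuming SciPy is available; `distance_transform_edt` implements an efficient exact Euclidean distance transform) illustrates the idea on a toy mask:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Binary mask: True where a semi-dense (edge) pixel lies.
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True

# distance_transform_edt measures the distance to the nearest zero
# entry, so the mask is inverted: edge pixels become the zeros.
dist_field = distance_transform_edt(~mask)
```

Every pixel of `dist_field` now holds its exact Euclidean distance to the nearest semi-dense pixel, ready for constant-time look-ups during optimization.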
Let D(p) be the function that retrieves the distance to the nearest neighbour by simply looking up the value at p inside the chosen distance field. The optimization objective (4) can now easily be rewritten as

    θ* = argmin_θ Σ_i D(π(R P_i + t))².    (5)
Note that, in order to emulate a smooth optimization objective and bypass the effects of image discretization, the distances in the field are sampled using bilinear interpolation.
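A minimal sketch of such a bilinear look-up (assuming NumPy; `field[y, x]` indexing and in-bounds sample locations are assumptions of this sketch):

```python
import numpy as np

def bilinear(field, x, y):
    # Integer corner and fractional offsets of the sample location.
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    ax, ay = x - x0, y - y0
    # Weighted average of the four surrounding grid values.
    return ((1 - ax) * (1 - ay) * field[y0, x0]
            + ax * (1 - ay) * field[y0, x0 + 1]
            + (1 - ax) * ay * field[y0 + 1, x0]
            + ax * ay * field[y0 + 1, x0 + 1])

f = np.array([[0.0, 1.0], [2.0, 3.0]])
val = bilinear(f, 0.5, 0.5)
```

Sampling at sub-pixel locations in this way makes the looked-up distance vary smoothly with the projected point, which is what the continuous optimization requires.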
There are a few problems with objective (5):
It is the sum of squared residual distances. The residual distance is a positive quantity, which means that it is hard to optimize by techniques other than gradient-descent-like methods. (Since the values of the residual distance are always positive, the Gauss-Newton method is not applicable; we discuss how to enable Gauss-Newton updates by introducing a novel alternative to the Euclidean distance field in the following section.) While gradient descent may have good convergence properties, it is known to be slow due to the cascaded update procedure, which may for instance involve a bisectioning line-search along the gradient direction.
As very well explained in [17], the distance transform may easily lead to wrong registrations. For instance, if only a part of a model curve is observed in the current frame, the optimization over the corresponding distance field may easily converge to a wrong location, even if only translational displacements in the image plane are taken into account. A detailed illustration of this problem is given in Fig. 3 of [17]. In their work, the problem is solved through a variable lifting strategy, which however blows up the space of optimized parameters quite significantly.
Even in the absence of the above two problems, a simple continuous minimization of the L2-norm of the residual distances would simply fail because it is easily affected by outlier associations. In [17], this problem is circumvented by switching to the L1-norm of the residual distances. While a direct continuous minimization of the L1-norm is practically feasible, it remains conceptually questionable since, as noted in [17], the plain residual distance is not necessarily differentiable around zero.
The following section will address these problems one by one.
The idea of an approximate nearest neighbour field is first discussed, which enables registration through only a few Gauss-Newton iterations. Then we introduce gradient directions onto which the residuals are projected, thus leading to correct registration even if only part of a model is observed. Afterwards, we follow the probabilistic formulation given by [4] and solve the equivalent weighted least squares problem. Finally, the sensor model is learned to determine the optimal weight function. Note that our tracking approach has similarities with [19]. However, our approximate nearest neighbour field obeys the Euclidean distance metric, and we provide a more concise derivation of the Gauss-Newton update steps, including robustification against outliers.
IV-A Approximate Nearest Neighbour Field
As discussed in III-C, a full (signed) residual is needed to make Gauss-Newton updates applicable. Thus, we replace the Euclidean distance field with one that can retrieve the exact location of the nearest point on a curve. There is a straightforward alternative to the commonly used distance field that maintains all necessary information for computing full residuals, namely an Approximate Nearest Neighbour Field (ANNF). An ANNF is given by a w × h integer matrix, where w denotes the width of the image and h its height. The integer at coordinates (x, y) simply denotes the pixel index of the nearest neighbour, rather than the distance to it. An example of an ANNF is illustrated in Fig. 3(a).
What is perhaps surprising is that the ANNF can be computed just as efficiently as the distance field. The reason is simply the way efficient Euclidean distance field extraction algorithms work: they perform region growing starting from the semi-dense region itself, and the border of the growing region updates and propagates a reference to the closest point in the seed region (i.e., the original semi-dense region). Extracting a distance field or an ANNF is simply a matter of which piece of information is retained.
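SciPy's distance transform exposes exactly this choice: with `return_indices=True` the same pass that yields the distances also yields, per pixel, the coordinates of the nearest semi-dense pixel, i.e. an ANNF. A toy sketch (assuming SciPy; the two seed pixels are made up):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

mask = np.zeros((6, 6), dtype=bool)
mask[1, 1] = True
mask[4, 4] = True

# One pass returns both the distance field and, per pixel, the
# (row, col) of the nearest semi-dense pixel -- the ANNF.
dist, nn_idx = distance_transform_edt(~mask, return_indices=True)
nn_rows, nn_cols = nn_idx
```

Looking up `(nn_rows[y, x], nn_cols[y, x])` replaces the explicit nearest-neighbour search with a constant-time read, which is what enables the compact Gauss-Newton updates below.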
Using the ANNF, the nearest-neighbour function n(·) from (3) now boils down to a trivial look-up. This enables us to go back to objective (4) and attempt a solution via Gauss-Newton updates. Let us define the residuals

    r_i(θ) = π(R P_i + t) − n(π(R P_i + t)),    (6)

and let r(θ) denote the vector stacking all r_i(θ), such that the objective reads

    θ* = argmin_θ || r(θ) ||².    (7)
Supposing that r(θ) were a linear expression of θ, it is clear that solving (7) would be equivalent to solving an ordinary linear least squares problem. The idea of Gauss-Newton updates (or iterative least squares) consists of iteratively performing a first-order linearization of r about the current value of θ, and then each time improving the latter by solving the resulting linear least squares problem. The linear problem to solve in each iteration is therefore given by

    Δθ* = argmin_{Δθ} || r(θ) + J Δθ ||²,    (8)

where J = ∂r/∂θ denotes the Jacobian of the stacked residual vector,
and, using the pseudo-inverse of J, its solution is given by

    Δθ* = −(Jᵀ J)⁻¹ Jᵀ r(θ).    (9)
The motion vector is finally updated as θ ← θ + Δθ*.
While this may sound straightforward, there is one element that requires particular attention: the nearest neighbour of each point should remain fixed within each round of iterative least squares. This statement particularly addresses the (numerical) Jacobian computation, as even tiny variations of θ can easily lead to a potentially substantial change of the nearest neighbour (e.g., from a point on one curve to a point on a completely different curve). We circumvent this problem by fixing the nearest neighbours during the Jacobian computation. The Jacobian simply becomes

    J_i = ∂π(R P_i + t) / ∂θ,    (10)

since the look-up n(·) is treated as a constant.
IV-B Projection of residual vectors
While the fixation of the nearest neighbours during the Jacobian computation has clear benefits, it also leads to one further problem. Imagine a case where we have to register one long horizontal and one short vertical line in the image plane, and there are only two degrees of freedom. The horizontal line is already registered, but the vertical one is not yet. A shift along the horizontal axis would solve the problem; however, the Jacobian will not provoke an overall error reduction along this dimension. This is because, with fixed nearest neighbours, the “registered” points along the horizontal edge may produce spurious residual errors for any horizontal shift, which ultimately outweigh the error reduction along the short vertical line.
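The remedy of projecting residuals onto the local gradient direction, introduced below, can be illustrated with a minimal numeric sketch (assuming NumPy; the coordinates are made up): for a point lying on a horizontal edge, the image gradient is vertical, so the tangential residual component caused by a horizontal shift is discarded.

```python
import numpy as np

# A model point projected onto a horizontal edge: the image gradient
# is vertical (normal to the edge) and of unit norm.
g = np.array([0.0, 1.0])
p = np.array([5.0, 3.0])     # projected model point (x, y)
nn = np.array([4.0, 3.0])    # its (fixed) nearest neighbour on the edge

full_res = p - nn            # spurious tangential residual vector
proj_res = g @ (p - nn)      # signed scalar residual: zero, as desired
```

The full residual vector is non-zero along the edge, yet the projected residual vanishes, so already-registered edge points no longer penalize a shift along their own direction.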
As shown in Fig. 3(b), we solve this problem by projecting the residual vectors onto the local gradient directions. The new residual is given by

    r̃_i(θ) = g_iᵀ ( π(R P_i + t) − n(π(R P_i + t)) ),    (11)
where g_i denotes the (unit-norm) image gradient of the registered point in the reference frame, and remains fixed throughout the optimization. This is only an approximation of the local curve gradient in the current frame, which is sufficiently valid under the assumption that the frame-to-frame transformation, and notably the rotation about the principal axis of the camera, is small enough. Also, while the residual errors have now become scalars again, they remain signed quantities, and thus Gauss-Newton remains applicable. The new Jacobian is finally given by

    J̃_i = g_iᵀ ∂π(R P_i + t) / ∂θ.    (12)
Note that the projection of the residual vectors onto the local gradient direction also helps to better approximate the orthogonal distance between curves, and thus addresses the problem raised in [17]: how to avoid wrong registrations in the case where some of the curves are only partially observed.
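The fixed-correspondence Gauss-Newton scheme above can be illustrated with a toy Python sketch (assuming NumPy; translation-only in 2D, brute-force nearest neighbours instead of an ANNF, and unprojected residual vectors, all simplifications relative to the paper's 6-DoF formulation). Nearest neighbours are recomputed between iterations but held fixed within each update:

```python
import numpy as np

def nearest(p, targets):
    # Brute-force stand-in for the ANNF look-up.
    return targets[np.argmin(np.linalg.norm(targets - p, axis=1))]

def gauss_newton_translation(model, targets, iters=10):
    t = np.zeros(2)
    for _ in range(iters):
        warped = model + t
        # Nearest neighbours are held fixed within this iteration.
        nn = np.array([nearest(p, targets) for p in warped])
        r = warped - nn          # residual vectors, with J_i = I
        # Gauss-Newton step; with identity Jacobians the normal
        # equations reduce to subtracting the mean residual.
        t = t - r.mean(axis=0)
    return t

# A closed curve (circle) registered against a shifted copy of itself.
angles = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
model = np.stack([np.cos(angles), np.sin(angles)], axis=1)
t_true = np.array([0.05, -0.03])
targets = model + t_true
t_est = gauss_newton_translation(model, targets)
```

Because the initial displacement is smaller than half the point spacing, the correspondences lock in immediately and the estimate converges to the true translation.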
IV-C Robust motion estimation
From a probabilistic point of view, the motion is estimated by maximizing the posterior in the presence of noise. Following the derivation in [4], the Maximum A Posteriori (MAP) problem is translated into the weighted least squares minimization problem

    θ* = argmin_θ Σ_i w(r̃_i) r̃_i²,    (13)
where the weight w(r̃_i) is a function of the assumed sensor model, i.e., the probability distribution of the residuals. IRLS is used for solving (13), and we discuss in the next section how to determine the optimal weight function by learning the statistical characteristics of the sensor model.
IV-D Learning the sensor model
We investigate several of the most widely used robust weight functions. They and their corresponding weighted squared errors are illustrated in Fig. 4. The interested reader can find more details in [20, 21].
The choice of the weight function depends on the statistics of the residuals, which we identify in a dedicated experiment. We start by defining reference frames in a sequence by applying the same criteria for creating new reference frames as in the full pipeline (cf. Fig. 6). The residuals are calculated using the ground-truth relative pose between the reference frame and the current frame. The residuals are collected over an entire sequence and then summarized in a histogram as shown in Fig. 5. By fitting the various distribution models depicted in Fig. 4 to the data, we finally identify the t-distribution as the best model for the residual statistics. Assuming the mean of the t-distribution is always zero, only two parameters (the degrees of freedom and the scale) have to be determined during the model fitting. As shown in [4], the variance is later on recursively updated on the actual data before being used for calculating the weights.
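A hedged sketch of the resulting IRLS weighting (assuming NumPy; the degrees-of-freedom value ν = 5 is an assumed placeholder, not the fitted value from the paper), including the fixed-point variance update on the actual residuals:

```python
import numpy as np

def t_weights(r, nu=5.0, iters=10):
    # Start from the raw variance; the epsilon guards an all-zero r.
    sigma2 = np.mean(r**2) + 1e-12
    for _ in range(iters):
        # Student-t IRLS weight: w(r) = (nu + 1) / (nu + r^2 / sigma^2).
        w = (nu + 1.0) / (nu + r**2 / sigma2)
        # Fixed-point update of the scale from the weighted residuals.
        sigma2 = np.mean(w * r**2)
    return w

rng = np.random.default_rng(0)
r = rng.standard_normal(500)
r[:10] += 20.0            # inject gross outliers
w = t_weights(r)
```

Inlier residuals keep weights near one while the injected outliers are strongly down-weighted, which is precisely the behaviour the weighted least squares problem (13) relies on.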
V Framework overview
Here we discuss how to further improve the robustness of the method by incorporating a constant velocity motion model. Finally, we describe the complete VO pipeline.
V-A Constant velocity motion model
Given a sufficiently high processing rate, even a simple motion model can be very helpful to predict a good starting point for the optimization, relatively close to the optimum where the residuals are minimal. This strategy has been widely used in VO and SLAM work [1, 25, 4], and it improves the robustness of the system by effectively avoiding local minima in the optimization. Rather than assuming a prior distribution for the motion, we implement a simple decaying velocity model. It effectively improves the convergence speed and the tracking robustness, especially when the displacement between the reference frame and the current frame is relatively large.
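A minimal sketch of such a decaying velocity prediction (assuming NumPy; shown on the translation component only for brevity, and the damping factor is an assumed tuning parameter, not a value from the paper):

```python
import numpy as np

def predict(t_prev, v_prev, decay=0.9):
    # Damp the previous velocity, then extrapolate the pose with it.
    v_pred = decay * v_prev
    return t_prev + v_pred, v_pred

t = np.array([1.0, 0.0, 0.0])    # previous translation
v = np.array([0.1, 0.0, 0.0])    # previous per-frame velocity
t_next, v_next = predict(t, v)
```

The predicted pose seeds the Gauss-Newton optimization; damping the velocity keeps the prediction conservative when the motion suddenly stops.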
V-B Complete VO system
The complete VO system is designed around the above robust motion estimation method. Two main threads run in parallel, marked with dashed lines in Fig. 6. In the motion estimation thread, only the RGB image is used for the extraction of the semi-dense region and the subsequent ANNF computation. The objective is constructed and then optimized via the Gauss-Newton method. The reference frame needs to be updated once the current frame is too far away from it. We therefore track the disparity between the semi-dense region in the reference frame and the corresponding pixels in the registered current frame. If the median disparity is larger than a given threshold, a new reference frame is created by the 3D semi-dense map (3DSDM) preparation thread, in which the depth information is loaded and corrected by the foreground reasoning operation described in Section III-A.
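The keyframe criterion can be sketched as follows (assuming NumPy; the disparity threshold is an assumed tuning parameter, not the value used in the paper):

```python
import numpy as np

def needs_new_reference(ref_px, cur_px, thresh=20.0):
    # Median pixel disparity between registered semi-dense points;
    # a new reference frame is triggered once it grows too large.
    disp = np.linalg.norm(cur_px - ref_px, axis=1)
    return np.median(disp) > thresh

ref = np.zeros((100, 2))                  # reference pixel locations
cur = ref + np.array([25.0, 0.0])         # registered current locations
```

Using the median rather than the mean makes the trigger insensitive to a few badly registered outlier points.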
VI Experimental Evaluation
We now proceed to the evaluation of the proposed method. We start by exploring various configurations in which performance under different semi-dense region extractors and different weight functions are assessed and compared. Then we provide a comparison between standard Euclidean distance field based method and our method, showing the advantage of the ANNF. We furthermore evaluate our algorithm on a set of benchmark datasets and compare the performance with several state-of-the-art camera tracking solutions. Finally, a semi-dense reconstruction result of two indoor scenes is provided which qualitatively demonstrates that the proposed method is able to work in relatively large-scale environments.
Note that the errors listed in all tables are given in terms of either the root-mean-square error or the median error of the rotational and translational components. The best performance is always highlighted in bold.
VI-A Performance: Different gradient extractors
A good extraction of the semi-dense region is key to good motion estimation accuracy. We therefore provide a comparison in Table II where several different methods are applied for calculating the image gradients. “Smoothed” refers to smoothing the image with a Gaussian kernel, and “Gradient” refers to a Sobel-like gradient computation. The table shows the impact of each method on accuracy versus the required computational time. Note that the t-distribution based IRLS is used, and that the run-time for computing the semi-dense region is expressed in seconds.
VI-B Performance: Different weight functions
In order to confirm experimentally that the chosen weight function is optimal, we compare the performance of all robust weight functions over several sequences of the TUM benchmark datasets. A comparison on fr2/desk is provided as an example in Table II. “Smoothed + Gradient” is used for extracting the semi-dense region. The run-time is given in seconds and includes the extraction of the semi-dense region, the ANNF computation and the subsequent optimization.
Smoothed + Sobel: 0.581 / 0.018 / 0.00948
Smoothed + Gradient: 0.560 / 0.016 / 0.01863
VI-C Euclidean distance field vs. ANNF
As discussed earlier, by using the ANNF we are able to calculate the signed orthogonal residual between the registered curves, thus enabling Gauss-Newton updates for solving the problem. Here we confirm that this leads to faster convergence compared to gradient descent (over the Euclidean distance field). The result in Fig. 7 demonstrates that the ANNF based method converges much faster than the standard Euclidean distance field based method. Note that “Smoothed + Gradient” and the t-distribution based IRLS are used.
VI-D Comparison against the state of the art
We compare the performance of our method against three state-of-the-art, open-source motion estimation frameworks: DVO [4], LSD-SLAM [7, 8] and ICP. For best performance, we apply “Smoothed + Gradient” for extracting the semi-dense region, the t-distribution based IRLS, and the enabled motion model. All methods are evaluated on published and challenging indoor benchmark datasets from the TUM RGB-D series [26]. The datasets we picked for evaluation and the corresponding results are listed in Table III. DVO, LSD-SLAM and our method run comparably efficiently on a laptop with only CPU resources (30 Hz, 30 Hz and 25 Hz, respectively), while ICP achieves 60 Hz on a GPU (NVIDIA Tesla K40). It can easily be observed that our method provides the best overall performance on the TUM dataset.
During the evaluation on the TUM dataset, we discovered that almost all underperforming registration results of our method are related to motion blur in the image. The reason is that the semi-dense region cannot be accurately extracted from blurry images, which also harms the resulting 3D semi-dense map. Consequently, motion estimation based on an inaccurate 3D semi-dense map will not be accurate either. Deblurring techniques or adaptive thresholds could alleviate this problem, but a much more straightforward solution consists of simply discarding frames for which the semi-dense region reveals a sudden jump in cardinality.
VI-E Semi-dense reconstruction
In order to show that our method is capable of working in relatively large-scale environments, we provide reconstruction results on two sequences from the TAMU RGB-D datasets and the ICL-NUIM synthetic datasets [27]. As shown in Fig. 8, the semi-dense reconstruction is much more visually expressive than sparse point clouds.
We presented a robust real-time semi-dense visual odometry algorithm for RGB-D cameras. The camera motion is estimated through a non-parametric 2D-3D geometric curve registration approach. The introduction of the ANNF enables the use of the Gauss-Newton method. To improve robustness against occlusion, noise and outliers, the ICP-based pipeline is formulated as a maximum a posteriori problem, which is subsequently transformed into a weighted least squares problem. Furthermore, we explored a number of robust M-estimators by studying the statistical properties of the sensor model, and picked an adequate choice. Experiments show that our geometric registration alternative outperforms state-of-the-art camera tracking solutions in most cases. The method may be pushed even further by a more accurate and robust method for extracting contours, a more elaborate motion estimation filter, as well as a sliding-window refinement. Hybrid cues (geometric and photometric) are viable for our framework and will be implemented in future work.
-  G. Klein and D. Murray, “Parallel tracking and mapping for small AR workspaces,” in Mixed and Augmented Reality, 6th IEEE and ACM International Symposium on, 2007, pp. 225–234.
-  R. Mur-Artal, J. Montiel, and J. D. Tardós, “ORB-SLAM: a versatile and accurate monocular SLAM system,” IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147–1163, 2015.
-  T. Tykkälä, C. Audras, and A. I. Comport, “Direct iterative closest point for real-time visual odometry,” in Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on, pp. 2050–2056.
-  C. Kerl, J. Sturm, and D. Cremers, “Robust odometry estimation for RGB-D cameras,” in Robotics and Automation (ICRA), 2013 IEEE International Conference on, pp. 3748–3754.
-  F. Steinbrücker, J. Sturm, and D. Cremers, “Real-time visual odometry from dense RGB-D images,” in Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on, pp. 719–722.
-  C. Audras, A. Comport, M. Meilland, and P. Rives, “Real-time dense appearance-based SLAM for RGB-D sensors,” in Australasian Conf. on Robotics and Automation, vol. 2, 2011, pp. 2–2.
-  J. Engel, J. Sturm, and D. Cremers, “Semi-dense visual odometry for a monocular camera,” in Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 1449–1456.
-  J. Engel, T. Schöps, and D. Cremers, “LSD-SLAM: Large-scale direct monocular SLAM,” in European Conference on Computer Vision. Springer, 2014, pp. 834–849.
-  R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, P. Kohli, J. Shotton, S. Hodges, and A. Fitzgibbon, “KinectFusion: Real-time dense surface mapping and tracking,” in Mixed and Augmented Reality (ISMAR), 10th IEEE International Symposium on, 2011, pp. 127–136.
-  T. Whelan, M. Kaess, M. Fallon, H. Johannsson, J. Leonard, and J. McDonald, “Kintinuous: Spatially extended KinectFusion,” in 3rd RSS Workshop on RGB-D: Advanced Reasoning with Depth Cameras, Sydney, Australia, 2012.
-  F. Pomerleau, S. Magnenat, F. Colas, M. Liu, and R. Siegwart, “Tracking a depth camera: Parameter exploration for fast ICP,” in 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3824–3829.
-  F. Pomerleau, F. Colas, R. Siegwart, and S. Magnenat, “Comparing ICP variants on real-world data sets,” Autonomous Robots, vol. 34, no. 3, pp. 133–148, 2013.
-  E. Eade and T. Drummond, “Edge landmarks in monocular SLAM,” Image and Vision Computing, vol. 27, no. 5, pp. 588–596, 2009.
-  Y. Lu and D. Song, “Robustness to lighting variations: An RGB-D indoor visual odometry using line segments,” in Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, pp. 688–694.
-  G. Klein and D. Murray, “Improving the agility of keyframe-based SLAM,” in European Conference on Computer Vision. Springer, 2008, pp. 802–815.
-  I. Nurutdinova and A. Fitzgibbon, “Towards pointless structure from motion: 3D reconstruction and camera parameters from general 3d curves,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 2363–2371.
-  M. P. Kuse and S. Shen, “Robust camera motion estimation using direct edge alignment and sub-gradient method,” in IEEE International Conference on Robotics and Automation (ICRA-2016), Stockholm, Sweden, 2016.
-  J. Yang, H. Li, and Y. Jia, “Go-ICP: Solving 3D registration efficiently and globally optimally,” in Proceedings of the 14th International Conference on Computer Vision (ICCV), 2013, pp. 1457–1464.
-  L. Kneip, Z. Yi, and H. Li, “SDICP: Semi-dense tracking based on iterative closest points,” in Proceedings of the British Machine Vision Conference (BMVC), M. W. J. Xianghua Xie and G. K. L. Tam, Eds. BMVA Press, September 2015, pp. 100.1–100.12. [Online]. Available: https://dx.doi.org/10.5244/C.29.100
-  Z. Zhang, “Parameter estimation techniques: A tutorial with application to conic fitting,” Image and Vision Computing, vol. 15, no. 1, pp. 59–76, 1997.
-  K. Aftab and R. Hartley, “Convergence of iteratively re-weighted least squares to robust M-estimators,” in 2015 IEEE Winter Conference on Applications of Computer Vision, pp. 480–487.
-  A. Cayley, “About the algebraic structure of the orthogonal group and the other classical groups in a field of characteristic zero or a prime characteristic,” in Reine Angewandte Mathematik, 1846.
-  R. Fabbri, L. D. F. Costa, J. C. Torelli, and O. M. Bruno, “2D euclidean distance transform algorithms: A comparative survey,” ACM Computing Surveys (CSUR), vol. 40, no. 1, p. 2, 2008.
-  J. J. Tarrio and S. Pedre, “Realtime edge-based visual odometry for a monocular camera,” in IEEE International Conference on Computer Vision (ICCV), 2015, pp. 702–710.
-  P. Tanskanen, K. Kolev, L. Meier, F. Camposeco, O. Saurer, and M. Pollefeys, “Live metric 3D reconstruction on mobile phones,” in Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 65–72.
-  J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers, “A benchmark for the evaluation of RGB-D SLAM systems,” in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 573–580.
-  A. Handa, T. Whelan, J. McDonald, and A. Davison, “A benchmark for RGB-D visual odometry, 3D reconstruction and SLAM,” in IEEE Intl. Conf. on Robotics and Automation, ICRA, Hong Kong, China, May 2014.