1 Introduction
Point cloud reconstruction of outdoor scenes has many important applications such as 3D architectural modeling, terrestrial surveying, and Simultaneous Localization and Mapping (SLAM) for autonomous vehicles. Compared to images, point clouds from 3D scanners exhibit less variation under different weather or lighting conditions, e.g., summer and winter (Fig. 1), or day and night (Fig. 5). Furthermore, the depths of point clouds from 3D scanners are more accurate than image-based reconstructions. Consequently, point clouds from 3D scanners are preferred for large-scale outdoor 3D reconstruction.
Most existing methods for 3D reconstruction follow a two-step approach: a front-end data association step and a back-end optimization step. More specifically, data association is used to establish feature matches [30] in point cloud fragments for registration, and loop-closures [26] between point cloud fragments for pose-graph [21] optimization. Unfortunately, no existing algorithm for feature matching and loop-closure detection guarantees the complete elimination of outliers. Although outlier feature matches are usually handled with RANSAC-based geometric verification [16, 30], such pairwise checks do not consider global consistency. In addition, the numerous efforts on improving the accuracy of loop-closure detection [6, 8, 20, 26] are not completely free from false positives. Many back-end optimization algorithms [13, 14, 21] are based on non-linear least-squares that lack the robustness to cope with outliers. A small number of outliers can consequently lead to catastrophic failures in the 3D reconstructions. Several prior works focus on disabling outlier loop-closures in the back-end optimization [5, 15, 25]. However, these methods do not consider the effect of outlier feature matches, with the exception of [34], which solves global geometric registration only in a very small-scale problem setting.
The main contribution of this paper is a probabilistic approach for robust back-end optimization that handles outliers from a weak front-end data association in large-scale point cloud based reconstructions. Our approach simultaneously suppresses outlier feature matches and loop-closures. To this end, we model our robust point cloud reconstruction problem as a Bayesian network. The global poses of the point cloud fragments are the unknown parameters, and the odometry and loop-closure constraints are the observed variables. A binary latent variable is assigned to each loop-closure constraint; it determines whether the loop-closure constraint is an inlier or an outlier. We model the feature matches in the odometry constraints with a long-tail Cauchy distribution to gain robustness to outlier matches. Additionally, we use a Cauchy-Uniform mixture model for the loop-closure constraints: the uniform and Cauchy distributions model the outlier loop-closures and the feature matches in the inlier loop-closures, respectively. In contrast to many existing back-end optimizers that use rigid transformations as the odometry and loop-closure constraints [5, 14, 15, 21, 25], we use the distances between feature matches to exert direct influence on these matches.
We use the Expectation-Maximization (EM) algorithm [3, 15] to find the globally consistent poses of the point cloud fragments (Sec. 4). The EM algorithm iterates between the Expectation and Maximization steps. In the Expectation step, the posterior of a loop-closure constraint being an inlier is updated. In the Maximization step, a locally optimal solution for the global poses is found by maximizing the expected complete data log-likelihood over the posterior from the Expectation step. We also generalize our approach to solve reconstruction problems in an easier setting (Sec. 5), where a strong assumption is imposed: the odometry and inlier loop-closure constraints are free from outlier feature matches. We show that by using a Gaussian-Uniform mixture model, our approach degenerates to the formulation of a state-of-the-art approach for robust indoor reconstruction [5]. Fig. 1 shows an example of our reconstruction result compared to other methods in the presence of outliers.
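The EM alternation described above can be sketched in a few lines. The callback names `e_step` and `m_step` below are hypothetical placeholders for the posterior update of the Expectation step and the pose refinement of the Maximization step; this is an illustrative skeleton under those assumptions, not the authors' implementation.

```python
import numpy as np

def em_reconstruct(poses, odometry, loop_closures, e_step, m_step,
                   max_iters=50, tol=1e-6):
    """Skeleton of the EM alternation (hypothetical callback API).

    e_step(poses, loop_closures) -> posterior inlier probability for
    each loop-closure under the current poses.
    m_step(poses, odometry, loop_closures, posteriors) -> poses that
    maximize the expected complete-data log-likelihood.
    """
    posteriors = None
    for _ in range(max_iters):
        posteriors = e_step(poses, loop_closures)                       # Expectation
        new_poses = m_step(poses, odometry, loop_closures, posteriors)  # Maximization
        if np.max(np.abs(new_poses - poses)) < tol:                     # converged?
            return new_poses, posteriors
        poses = new_poses
    return poses, posteriors
```

In practice the M-step is itself an iterative non-linear least-squares solve (Sec. 4.4), so the convergence check would typically monitor the objective rather than raw pose deltas.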
2 Related Work
Reconstruction of outdoor scenes has been studied in [22, 23]. Schöps et al. [23] propose a set of filtering steps to detect and discard unreliable depth measurements acquired from an RGB-D camera. However, loop-closures are not detected, which can lead to reconstruction failures. Relying on very accurate GPS/INS, Pollefeys et al. [22] propose a 3D reconstruction system from RGB images. However, the GPS/INS signal may be unavailable or unreliable, especially on cloudy days or in urban canyons. Our work relies on neither GPS/INS nor RGB images. In contrast, we focus on reconstruction from point cloud data acquired with 3D scanners, which is less sensitive to weather or lighting changes. There are also many works on indoor scene reconstruction. Since the seminal KinectFusion [18], several follow-up algorithms have appeared [4, 19, 27]. Unfortunately, these methods do not detect loop-closures. Nonetheless, there are many RGB-D reconstruction methods with loop-closure detection [5, 7, 10, 11, 24, 28, 31, 32, 33].
Choi et al. [5] achieve state-of-the-art performance for indoor reconstruction with robust loop-closures. However, they assume no outlier feature matches in the odometry and inlier loop-closure constraints. We relax this assumption to achieve robust feature matching. More specifically, [5] estimates a switch variable [25] for each loop-closure constraint using line processes [2]. Outlier loop-closures are disabled by setting their respective switch variables to zero. Additional switch prior terms are imposed and chosen empirically [25] to prevent the trivial solution of removing all loop-closure constraints. In comparison, our approach does not require these additional prior terms. We estimate the posterior of a loop-closure being an inlier constraint in the Expectation step, as shown in Sec. 4. The EM approach is also used by Lee et al. [15]. However, they solve a robust pose-graph optimization problem without coping with the feature matches for reconstruction.
3 Overview
In this section, we provide an overview of our reconstruction pipeline, which consists of four main components: point cloud fragment construction, point cloud registration, loop-closure detection, and robust reconstruction with EM.
Point cloud fragment construction.
A single scan from a 3D scanner, e.g., LiDAR, contains a limited number of points. We integrate multiple consecutive scans with odometry readings obtained from dead reckoning, e.g., an Inertial Navigation System (INS) [17], to form local point cloud fragments. A set of 3D features is then extracted from each point cloud fragment using [30].
Point cloud registration.
We perform point cloud registration between every two consecutive point cloud fragments; the resulting set of feature matches forms an odometry constraint (Sec. 4.1), which can contain outlier matches.
Loop-closure detection.
It is inefficient to perform exhaustive pairwise registration for large-scale outdoor scenes with many point cloud fragments. Hence, we perform point cloud based place-recognition [26] to identify a set of candidate loop-closures. We retain the top potential loop-closures for each fragment and remove the duplicates. For each loop-closure between two fragments, we keep the set of top feature matches. We define this set of feature matches as a loop-closure constraint, which can either be an inlier or an outlier. Similar to the odometry constraints, an inlier loop-closure can also contain outlier feature matches.
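The candidate bookkeeping described above (retaining the top loop-closures per fragment and removing duplicates) can be sketched as follows; the `(i, j, score)` tuple layout is an assumption for illustration, not the authors' data structure.

```python
def dedup_loop_closures(candidates):
    """Remove duplicate loop-closure candidates: (i, j) vs (j, i)
    count as the same pair, and self-matches are dropped.

    `candidates` is a list of (i, j, score) tuples from place
    recognition, ordered by retrieval rank.
    """
    seen = set()
    unique = []
    for i, j, score in candidates:
        key = (min(i, j), max(i, j))   # unordered fragment pair
        if i == j or key in seen:
            continue
        seen.add(key)
        unique.append((i, j, score))
    return unique
```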
Robust reconstruction with EM.
The constraints from point cloud registration and loop-closure detection can contain outliers. In particular, both odometry and loop-closure constraints can contain outlier feature matches. Moreover, many detected loop-closures are false positives. In the next section, we describe our probabilistic modeling approach to simultaneously suppress outlier feature matches and false loop-closures. The EM algorithm is used to solve for the globally consistent fragment poses. Optional refinement using ICP can be applied to further improve the global point cloud registration.
4 Robust Reconstruction with EM
We model the robust reconstruction problem as the Bayesian network shown in Fig. 2, over the fragment poses, the odometry constraints obtained from point cloud registration, and the loop-closure constraints obtained from loop-closure detection. We explicitly assign the loop-closure constraints into two clusters that represent the inliers and outliers. For each loop-closure constraint, we introduce a corresponding assignment variable: a one-hot vector whose two states assign the loop-closure constraint as an inlier or an outlier, respectively. The fragment poses are the unknown parameters, the assignment variables are the latent variables, and the odometry and loop-closure constraints are the observed variables. Robust reconstruction can be solved by finding the Maximum a Posteriori (MAP) solution. However, the MAP solution involves an intractable marginalization over the latent variables. We circumvent this problem with the EM algorithm, which maximizes the expected complete data log-likelihood over the posterior of the latent variables. The EM algorithm iterates between the Expectation and Maximization steps. In the Expectation step, we use the fragment poses solved in the previous iteration to find the posterior distribution of the latent variables,
(1) 
in which the odometry constraints do not appear in the posterior, since they are conditionally independent of the assignment variables given the fragment poses, according to the Bayesian network in Fig. 2.
In the Maximization step, the posterior distribution (Eq. (1)) is used to update the fragment poses by maximizing the expectation of the complete data log-likelihood,
(2)  
We split this expectation into two terms: one over the odometry constraints and one over the loop-closure constraints.
Initialization.
The unknown parameters, i.e., global poses of the fragments, are initialized with the relative poses computed from odometry constraints using ICP. Other dead reckoning methods such as wheel odometry and/or INS readings can also be used.
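A minimal sketch of this initialization, assuming 4x4 homogeneous transforms where `relative_poses[k]` maps fragment k+1 into fragment k's frame (a convention chosen here for illustration):

```python
import numpy as np

def initialize_global_poses(relative_poses):
    """Chain 4x4 relative odometry transforms into global fragment
    poses; the first fragment defines the world frame.
    """
    poses = [np.eye(4)]
    for rel in relative_poses:
        poses.append(poses[-1] @ rel)   # compose along the odometry chain
    return poses
```

Any source of relative transforms (ICP, wheel odometry, INS) can be plugged in, matching the text above.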
4.1 Modeling Odometry Constraints
Odometry constraints are obtained from the point cloud registration between two consecutive point cloud fragments. Recall that an odometry constraint is a set of feature matches between consecutive fragments, which can contain outlier matches. To gain robustness, we model each feature match with a long-tail multivariate Cauchy distribution. Assuming these feature matches are independent and identically distributed (i.i.d.), we take the geometric mean over their product to get
(3) 
where
(4) 
where we assume an isotropic covariance with a scale parameter, and the Mahalanobis distance is defined such that
(5) 
The value of the scale parameter is set based on the density of the extracted features in the outdoor dataset.
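As a concrete illustration of this robust odometry term, the snippet below evaluates the log of the geometric mean in Eq. (3) for a 3D multivariate Cauchy with isotropic covariance (the exponent -(1+3)/2 = -2 follows from the standard multivariate Cauchy density, dropping constant normalizers). The world-from-fragment pose convention and the scale value `sigma=0.5` are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def cauchy_match_log_likelihood(src_pts, dst_pts, pose_src, pose_dst, sigma=0.5):
    """Log of the geometric mean of 3D Cauchy densities over matches.

    src_pts/dst_pts are (M, 3) matched feature coordinates in their
    own fragment frames; poses are 4x4 world-from-fragment transforms
    (assumed convention). Constant normalizers are dropped.
    """
    def transform(points, pose):
        return points @ pose[:3, :3].T + pose[:3, 3]

    resid = transform(src_pts, pose_src) - transform(dst_pts, pose_dst)
    d2 = np.sum(resid ** 2, axis=1) / sigma ** 2   # squared Mahalanobis (isotropic)
    logf = -2.0 * np.log1p(d2)                     # (1 + d2)^(-2) in log-domain
    return float(logf.mean())                      # log of the geometric mean
```

The log grows only logarithmically with the residual, which is exactly the long-tail behavior that keeps outlier matches from dominating the objective.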
4.2 Modeling Loop-Closure Constraints
A loop-closure constraint is the set of feature matches between two fragments. We propose a Cauchy-Uniform mixture model to cope with (1) the outlier loop-closure constraints and (2) the outlier feature matches in the inlier loop-closure constraints.
To distinguish between inlier and outlier loop-closures, we model the distribution of the assignment variable as a Bernoulli distribution defined by the inlier probability,
(6) 
Next, we use two distributions, Cauchy and uniform, to model the inlier and outlier loop-closure constraints, respectively.
Cauchy distribution – inlier loop-closure constraints.
The inlier loop-closure constraints can contain outlier feature matches. We use the same multivariate Cauchy distribution as in Eq. (3) and reorganize the terms for brevity, such that
(7) 
in which
(8) 
and the normalizing count denotes the number of feature matches in the loop-closure constraint.
Uniform distribution – outlier loop-closure constraints.
We model the outlier loop-closure constraints with a uniform distribution defined by a constant probability,
(9) 
4.3 Expectation Step
Recall that the Expectation step is evaluated in Eq. (1). Plugging Eqs. (6), (7) and (9) into Bayes' formula, we obtain the posterior of a loop-closure constraint being an inlier,
(10) 
where
(11) 
This constant combines two distribution parameters: the probability of being an inlier loop-closure, and the constant probability of uniformly sampling a random loop-closure. Both are difficult to set manually across different datasets. Hence, we propose to estimate the constant from the input data. More specifically, we learn it from the odometry constraints, since all odometry constraints are effectively inlier loop-closure constraints.
The learning process is as follows. First, for each odometry constraint, we compute its corresponding error term (analogous to Eq. (10)), where
(12) 
Next, we compute the median of these error terms. Since we regard all odometry constraints as inlier loop-closure constraints, let
(13) 
where the inlier posterior at the median error is set to a high value, meaning that a loop-closure with an error no larger than the median is very likely to be an inlier. Finally, we solve for the constant using Eq. (13).
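Since the exact functional form of the posterior is not reproduced here, this calibration step can be sketched generically: given any posterior that is monotonically decreasing in the constant, pin the posterior at the median odometry error to a high target value (0.9 is an illustrative choice, not necessarily the paper's) and solve for the constant by bisection.

```python
def fit_constant_from_median(posterior, median_err, target=0.9,
                             lo=1e-12, hi=1e12, iters=200):
    """Solve posterior(median_err, Lambda) == target for Lambda.

    `posterior(err, Lambda)` is assumed monotonically decreasing in
    Lambda; bisection uses geometric midpoints to cover the wide
    search range efficiently.
    """
    for _ in range(iters):
        mid = (lo * hi) ** 0.5
        if posterior(median_err, mid) > target:
            lo = mid            # posterior still too high -> larger constant
        else:
            hi = mid
    return (lo * hi) ** 0.5
```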
4.4 Maximization Step
In the Maximization step, we solve for the fragment poses that maximize the expected complete data log-likelihood, i.e., the sum of the odometry and loop-closure terms defined in Eq. (2). These two terms are evaluated independently, and then optimized jointly.
Evaluate the odometry term.
Assuming the odometry constraints are i.i.d., the joint probability of all odometry constraints is given by
(14) 
Substituting the joint probability of the feature matches within each odometry constraint (Eq. (3)), we can rewrite the odometry term as
(15) 
Evaluate the loop-closure term.
Using the product rule, we factorize the joint probability of the loop-closure constraints and their corresponding assignment variables. Plugging Eqs. (6), (7) and (9) in, we have
(16) 
We can rewrite the loop-closure term as
(17) 
with the joint probability from Eq. (16) and the posterior from Eq. (10), which can be further expanded to
(18) 
Maximize the objective.
The maximization of the expected complete data log-likelihood can be reformulated into a non-linear least-squares problem with the following objective function
(19)  
which can be optimized using the sparse Cholesky solver in Google Ceres [1]. The computational complexity is cubic in the total number of feature matches.
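For intuition, the posterior-weighted least-squares structure of the objective can be minimized with a tiny Gauss-Newton loop over weighted normal equations; this is a toy stand-in for the sparse Cholesky solver in Ceres, not the paper's solver, and the callback names are illustrative.

```python
import numpy as np

def weighted_gauss_newton(residual_fn, jac_fn, x0, weights, iters=20):
    """Minimize sum_k w_k * r_k(x)^2 by Gauss-Newton.

    residual_fn(x) -> (K,) residual vector, jac_fn(x) -> (K, n)
    Jacobian; `weights` carries the per-term posterior weights.
    """
    x = np.asarray(x0, dtype=float)
    w = np.asarray(weights, dtype=float)
    for _ in range(iters):
        r = residual_fn(x)
        J = jac_fn(x)
        JtW = J.T * w                                # apply per-residual weights
        step = np.linalg.solve(JtW @ J, -JtW @ r)    # weighted normal equations
        x = x + step
    return x
```

In the full problem the weights are the inlier posteriors from the Expectation step, so suspect loop-closures contribute almost nothing to the normal equations.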
5 Generalization using EM
In the previous section, we solved the problem where the constraints are contaminated with outlier feature matches. In this section, we study a problem with an easier setting where the correct loop-closure constraints contain no outlier feature matches. Recall that the long-tail multivariate Cauchy distribution is used to gain robustness against outlier feature matches. We replace the multivariate Cauchy distribution with a multivariate Gaussian distribution for this easier problem without outlier feature matches, and show that our EM formulation degenerates to the formulation of a state-of-the-art approach for robust indoor reconstruction [5]. To avoid repetition, we only highlight the major differences to the previous section. Each analogous term is augmented with a superscript that stands for “Gaussian”.
Odometry constraints.
Replacing the multivariate Cauchy distribution in Eq. (3) with a multivariate Gaussian distribution, we have
(20) 
where
(21) 
and the remaining terms are unchanged.
Loop-closure constraints.
We note that the Bernoulli distribution in Eq. (6) still holds; the major changes start from Eq. (7). Using the multivariate Gaussian distribution, we have
(22) 
in which
(23) 
where the exponent is the number of feature matches. We note that the Gaussian error term is a sum-of-squares error that can lead to arithmetic overflow in the exponential term of the posterior of the latent variable (analogous to Eq. (10)). In contrast, there is no arithmetic overflow in Eq. (10), since the error term from Eq. (8) is a sum-of-log error. We propose to alleviate the arithmetic overflow problem by using a Pareto distribution that approximates Eq. (22) as
(24) 
where a scale parameter is introduced. For the outlier loop-closures, the uniform distribution in Eq. (9) still holds.
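The overflow issue motivating this approximation can also be illustrated numerically: computing a two-component posterior directly from negative log-likelihoods avoids exponentiating large sums of squares. This is a generic log-domain trick shown for context, not the authors' formulation.

```python
import math

def stable_inlier_posterior(neg_log_inlier, neg_log_outlier):
    """Compute p(inlier) = a / (a + b) given -log a and -log b,
    without exponentiating large magnitudes.
    """
    # p = 1 / (1 + exp(neg_log_inlier - neg_log_outlier))
    diff = neg_log_inlier - neg_log_outlier
    if diff > 0:
        e = math.exp(-diff)        # exponentiate only a negative number
        return e / (1.0 + e)
    return 1.0 / (1.0 + math.exp(diff))
```

A Gaussian sum-of-squares error of a few thousand would overflow `exp` if handled naively, whereas the branch above only ever exponentiates non-positive values.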
Table 1: Recall and precision of the loop-closures before and after pruning.

| Method | Metric | Living room 1 | Living room 2 | Office 1 | Office 2 | Average |
|---|---|---|---|---|---|---|
| Before pruning | Recall (%) | 61.2 | 49.7 | 64.4 | 61.5 | 59.2 |
| Before pruning | Precision (%) | 27.2 | 17.0 | 19.2 | 14.9 | 19.6 |
| Choi et al. [5] after pruning | Recall (%) | 57.6 | 49.7 | 63.3 | 60.7 | 57.8 |
| Choi et al. [5] after pruning | Precision (%) | 95.1 | 97.4 | 98.3 | 100.0 | 97.7 |
| Ours (Sec. 5) after pruning | Recall (%) | 58.7 | 48.4 | 63.9 | 61.5 | 58.1 |
| Ours (Sec. 5) after pruning | Precision (%) | 97.0 | 94.9 | 96.6 | 93.6 | 95.4 |
Table 2: Reconstruction accuracy (mean distance of the reconstructed surfaces to the ground-truth surfaces).

| Method | Living room 1 | Living room 2 | Office 1 | Office 2 | Average |
|---|---|---|---|---|---|
| Whelan et al. [27] | 0.22 | 0.14 | 0.13 | 0.13 | 0.16 |
| Kerl et al. [12] | 0.21 | 0.06 | 0.11 | 0.10 | 0.12 |
| SUN3D [29] | 0.09 | 0.07 | 0.13 | 0.09 | 0.10 |
| Choi et al. [5] | 0.04 | 0.07 | 0.03 | 0.04 | 0.05 |
| Ours (Sec. 5) | 0.06 | 0.09 | 0.05 | 0.04 | 0.06 |
| GT Trajectory | 0.04 | 0.04 | 0.03 | 0.03 | 0.04 |
Expectation step.
Using the approximation in Eq. (24), the posterior from Eq. (10) becomes
(25) 
where
(26) 
It becomes apparent that the arithmetic overflow problem is alleviated by this replacement. In the previous section, the constant in Eq. (13) is learned from the median of the error terms of the odometry constraints. Unfortunately, this median error becomes uninformative here because we assume no outlier feature matches. Despite the absence of outlier feature matches, the error is still upper bounded by some threshold. Hence, the mean error term can be directly estimated from Eq. (23). Subsequently, let
(27) 
where we set the inlier posterior to a high value and solve for the constant. The threshold for our experiments on the indoor dataset (see next section) is set based on the typical magnitude of sensor noise.
Maximization step.
Finally, we reformulate the maximization problem as a non-linear least-squares problem with the following objective function
(28)  
which is similar to the formulation in [5], with two minor differences. First, we average the squared errors over the number of feature matches, whereas [5] does not. Second, we estimate the posterior by iterating between the Expectation and Maximization steps, whereas [5] estimates it using line processes [2]. It is important to note that Eq. (28) is derived from the original Gaussian formulation in Eq. (22) instead of the Pareto approximation in Eq. (24).
6 Evaluation
We use the experimental results from two datasets to compare our approach with the state-of-the-art approach [5]. The first dataset consists of small-scale indoor scenes with no outlier feature matches in the odometry and inlier loop-closure constraints; the second consists of large-scale outdoor scenes with outlier feature matches. Our Gaussian-Uniform EM (Sec. 5) and Cauchy-Uniform EM (Sec. 4) are evaluated on the small-scale indoor and large-scale outdoor datasets, respectively.
6.1 Small-Scale Indoor Scenes
The “Augmented ICL-NUIM Dataset”, provided and augmented by [9] and [5], respectively, is used as the small-scale indoor dataset. This dataset is generated from synthetic indoor environments and includes two models: a living room and an office. There are two RGB-D image sequences for each model, resulting in a total of four test cases. To ensure a fair comparison, we follow the same evaluation criteria and experimental settings as [5].
Results.
Tab. 1 compares the average recall and precision of the loop-closures (1) before pruning, (2) for [5] after pruning, and (3) for our method after pruning. Here, “before pruning” refers to the loop-closures from the loop-closure detection, and “after pruning” refers to the inlier loop-closures after robust optimization. It can be seen that the average precision and recall of our method are comparable to [5]. This is an expected result, since we showed in Sec. 5 that our method degenerates to the method in [5] with minor differences in the absence of outlier feature matches. We further evaluate the reconstruction accuracy of the final model using the error metric proposed in [9], i.e., the mean distance of the reconstructed surfaces to the ground truth surfaces. Tab. 2 compares the reconstruction accuracy of our method to other existing approaches. In addition, as suggested in [5], the reconstruction accuracy of the model obtained from fusing the input depth images with the ground truth trajectory (denoted as GT Trajectory in Tab. 2) is reported for reference. As expected, our method shows comparable results with the state-of-the-art on the indoor dataset.
6.2 Large-Scale Outdoor Scenes
The large-scale outdoor dataset is based on the “Oxford RobotCar Dataset” [17]. It consists of 3D point clouds captured with a LiDAR sensor mounted on a car that repeatedly drives through Oxford, UK, at different times over a year. We select two different driving routes from the dataset: a short route (about 1 km) and a long route (city-scale). Furthermore, we take two traversals at different times for each route, resulting in four traversals in total. Unlike the synthetic indoor dataset, there is no ground truth of the surface geometry, so we evaluate the trajectory accuracy against the GPS/INS readings as an indirect measurement of the reconstruction accuracy. We prepare the dataset as follows:


Point cloud fragments. We integrate the push-broom 2D LiDAR scans and their corresponding INS readings into 3D point clouds. We segment the data into fragments with a 30 m radius at every 10 m interval. Each fragment is then downsampled using a VoxelGrid filter with a grid size of 0.2 m. 242 and 1770 fragments are constructed for the 1 km route and the city-scale route, respectively.

Odometry trajectory. The odometry trajectory is disconnected due to discontinuous INS data, since we are combining two traversals. We simulate the odometry trajectory via geometric registrations between consecutive point cloud fragments, and manually identify one linkage transformation between the two traversals. We also check the entire odometry trajectory to ensure that there are no remaining erroneous transformations. The resulting odometry trajectory is used to initialize the fragment poses.

Odometry constraints. For every two consecutive fragments along the odometry trajectory, we perform point cloud registration as described in Sec. 3. Specifically, we extract 1024 features from each fragment, and collect the top 200 feature matches to form an odometry constraint. Note that the feature matches are selected without additional geometric verification, and can therefore contain outliers. 241 and 1769 odometry constraints are constructed for the 1 km route and the city-scale route, respectively.

Loop-closure constraints. We perform loop-closure detection as described in Sec. 3. We take every 5th fragment along the trajectory as a keyframe fragment; loop-closures are detected among the selected keyframe fragments. For the 1 km route, we find the top 5 loop-closures for each keyframe fragment and then remove the duplicates; for the city-scale route, we find the top 10. 171 and 1438 loop-closure constraints are constructed for the 1 km and city-scale routes, respectively. The outlier loop-closure ratio is more than 80% for both routes.
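The VoxelGrid downsampling step mentioned in the fragment-construction item above can be sketched in plain NumPy (one centroid per occupied voxel); this is an illustrative reimplementation, not the exact filter used by the authors.

```python
import numpy as np

def voxel_downsample(points, voxel=0.2):
    """Average-pool an (N, 3) point array into a voxel grid of the
    given edge length (0.2 m matches the grid size quoted above).
    Returns the centroid of the points in each occupied voxel.
    """
    keys = np.floor(points / voxel).astype(np.int64)        # voxel index per point
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)                        # sum points per voxel
    return sums / counts[:, None]                           # centroid per voxel
```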
Table 3: Mean distance of the estimated fragment poses to the GPS/INS trajectory.

| Method | 1 km | City-scale |
|---|---|---|
| Odometry | 1.85 | 11.81 |
| Choi et al. [5] (identity covariance) | 123.24 | 207.93 |
| Choi et al. [5] | 1.97 | 50.92 |
| Ours (Sec. 4) | 1.34 | 2.45 |
Baseline Methods.
We compare our approach with two baseline methods based on [5]: a stronger and a weaker baseline. The stronger baseline encodes the uncertainty information of the feature matches between two fragments into a covariance matrix; the feature matches used to construct the covariance matrix are those within 1 m apart after geometric registration. Refer to [5] for more details on the covariance matrix. The covariance matrix of the weaker baseline is set to identity, i.e., it carries no uncertainty information about the feature matches. The relative poses between the point cloud fragments computed from ICP are used as the odometry and loop-closure constraints in the baseline methods.
Results.
Tab. 3 summarizes the mean distances of the estimated poses to the GPS/INS trajectory as an indirect measure of the reconstruction accuracy on the 1 km and city-scale outdoor datasets. Figs. 3 and 4 show the plots of the trajectories. We align the first five fragment poses with the GPS/INS trajectory; error measurements start after the 5th fragment pose. The results show that the accuracy increases as more information about the feature matches is considered in the optimization process. We can see from Tab. 3 and Figs. 3 and 4 that the weaker baseline ([5] with an uninformative identity covariance), which carries no information about the feature matches, gives the worst performance. The stronger baseline ([5] with an informative covariance matrix) that encodes information about the feature matches performs better. Our method, which directly takes the feature matches as the odometry and loop-closure constraints, outperforms both baselines. Furthermore, Figs. 1 and 5 show reconstruction results for qualitative evaluation. It can be seen from the bottom left and right plots in Fig. 5 that our method produces the sharpest reconstructions of the 3D point clouds.
7 Conclusion
In this paper, we proposed a probabilistic approach for robust point cloud reconstruction of large-scale outdoor scenes. Our approach leverages a Cauchy-Uniform mixture model to simultaneously suppress outlier feature matches and loop-closures. Moreover, we showed that by using a Gaussian-Uniform mixture model, our approach degenerates to the formulation of a state-of-the-art approach for robust indoor reconstruction. We verified our proposed methods on both indoor and outdoor benchmark datasets.
Acknowledgement
This work is supported in part by a Singapore MOE Tier 1 grant R252000A65114.
References
 [1] S. Agarwal, K. Mierle, and others. Ceres Solver. http://ceres-solver.org.
 [2] M. J. Black and A. Rangarajan. On the unification of line processes, outlier rejection, and robust statistics with applications in early vision. IJCV, 1996.
 [3] G. Celeux and G. Govaert. A classification EM algorithm for clustering and two stochastic versions. CSDA, 1992.
 [4] J. Chen, D. Bautembach, and S. Izadi. Scalable real-time volumetric surface reconstruction. TOG, 2013.
 [5] S. Choi, Q.-Y. Zhou, and V. Koltun. Robust reconstruction of indoor scenes. In CVPR, 2015.
 [6] M. Cummins and P. Newman. Appearance-only SLAM at large scale with FAB-MAP 2.0. IJRR, 2011.
 [7] F. Endres, J. Hess, J. Sturm, D. Cremers, and W. Burgard. 3-D mapping with an RGB-D camera. TRO, 2014.
 [8] D. Galvez-Lopez and J. D. Tardos. Real-time loop detection with bags of binary words. In IROS, 2011.
 [9] A. Handa, T. Whelan, J. McDonald, and A. J. Davison. A benchmark for RGB-D visual odometry, 3D reconstruction and SLAM. In ICRA, 2014.
 [10] P. Henry, D. Fox, A. Bhowmik, and R. Mongia. Patch volumes: Segmentation-based consistent mapping with RGB-D cameras. In 3DV, 2013.
 [11] P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox. RGB-D mapping: Using Kinect-style depth cameras for dense 3D modeling of indoor environments. IJRR, 2012.
 [12] C. Kerl, J. Sturm, and D. Cremers. Dense visual SLAM for RGB-D cameras. In IROS, 2013.
 [13] F. R. Kschischang, B. J. Frey, and H.-A. Loeliger. Factor graphs and the sum-product algorithm. TIT, 2001.
 [14] R. Kümmerle, G. Grisetti, H. Strasdat, K. Konolige, and W. Burgard. g2o: A general framework for graph optimization. In ICRA, 2011.
 [15] G. H. Lee, F. Fraundorfer, and M. Pollefeys. Robust pose-graph loop-closures with expectation-maximization. In IROS, 2013.
 [16] G. H. Lee and M. Pollefeys. Unsupervised learning of threshold for geometric verification in visual-based loop-closure. In ICRA, 2014.
 [17] W. Maddern, G. Pascoe, C. Linegar, and P. Newman. 1 year, 1000 km: The Oxford RobotCar dataset. IJRR, 2017.
 [18] R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, P. Kohli, J. Shotton, S. Hodges, and A. Fitzgibbon. KinectFusion: Real-time dense surface mapping and tracking. In ISMAR, 2011.
 [19] M. Nießner, M. Zollhöfer, S. Izadi, and M. Stamminger. Real-time 3D reconstruction at scale using voxel hashing. TOG, 2013.
 [20] D. Nister and H. Stewenius. Scalable recognition with a vocabulary tree. In CVPR, 2006.
 [21] E. Olson, J. Leonard, and S. Teller. Fast iterative alignment of pose graphs with poor initial estimates. In ICRA, 2006.
 [22] M. Pollefeys, D. Nistér, J.-M. Frahm, A. Akbarzadeh, P. Mordohai, B. Clipp, C. Engels, D. Gallup, S.-J. Kim, P. Merrell, et al. Detailed real-time urban 3D reconstruction from video. IJCV, 2008.
 [23] T. Schöps, T. Sattler, C. Häne, and M. Pollefeys. Large-scale outdoor 3D reconstruction on a mobile device. CVIU, 2017.
 [24] F. Steinbrucker, C. Kerl, and D. Cremers. Large-scale multi-resolution surface reconstruction from RGB-D sequences. In ICCV, 2013.
 [25] N. Sünderhauf and P. Protzel. Switchable constraints for robust pose graph SLAM. In IROS, 2012.
 [26] M. A. Uy and G. H. Lee. PointNetVLAD: Deep point cloud based retrieval for large-scale place recognition. In CVPR, 2018.
 [27] T. Whelan, H. Johannsson, M. Kaess, J. J. Leonard, and J. McDonald. Robust real-time visual odometry for dense RGB-D mapping. In ICRA, 2013.
 [28] T. Whelan, M. Kaess, J. J. Leonard, and J. McDonald. Deformation-based loop closure for large scale dense RGB-D SLAM. In IROS, 2013.
 [29] J. Xiao, A. Owens, and A. Torralba. SUN3D: A database of big spaces reconstructed using SfM and object labels. In ICCV, 2013.
 [30] Z. J. Yew and G. H. Lee. 3DFeat-Net: Weakly supervised local 3D features for point cloud registration. In ECCV, 2018.
 [31] Q.-Y. Zhou and V. Koltun. Dense scene reconstruction with points of interest. TOG, 2013.
 [32] Q.-Y. Zhou and V. Koltun. Simultaneous localization and calibration: Self-calibration of consumer depth cameras. In CVPR, 2014.
 [33] Q.-Y. Zhou, S. Miller, and V. Koltun. Elastic fragments for dense scene reconstruction. In ICCV, 2013.
 [34] Q.-Y. Zhou, J. Park, and V. Koltun. Fast global registration. In ECCV, 2016.