1 Introduction
Point cloud registration is an important task in 3D computer vision. The same object or scene is scanned from two view points, e.g. with a laser scanner, and the goal is to recover the rigid transformation (rotation and translation) that aligns the two scans. Realistic scenarios add complications: measurement noise, occlusions due to the change in view point, and outliers due to the independent motion of free-moving objects in the scene (distractors). This makes robustness a central issue for point cloud registration algorithms.
Probably the most popular approach to the problem is some variant of the Iterative Closest Point (ICP) algorithm [3]. This method iterates between two stages: first match pairs of points between the two clouds, then apply the transformation that minimizes a loss defined by the distance between the two points in each pair. The simplest version of ICP uses the Euclidean distance between points, but later versions use more complex distance measures to achieve faster and more accurate convergence. Some of the most popular and successful variants use local normals to define point-to-plane distance measures. ICP-like methods are typically sensitive to noise, and require steps such as explicit outlier removal to improve their robustness.
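The two-stage ICP loop described above can be sketched in a few lines. This is a generic illustration (NumPy, brute-force matching, Kabsch/SVD update), not the implementation of any of the cited works:

```python
import numpy as np

def icp_point_to_point(src, tgt, iters=30):
    """Minimal point-to-point ICP: alternate nearest-neighbor matching
    with a closed-form (Kabsch/SVD) rigid-transform update."""
    cur = src.copy()
    R_tot, t_tot = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # 1) match: nearest neighbor in tgt for every current source point
        d2 = ((cur[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
        matched = tgt[d2.argmin(axis=1)]
        # 2) transform: rigid motion minimizing summed squared pair distances
        mu_s, mu_t = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_t - R @ mu_s
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t  # compose overall transform
    return R_tot, t_tot
```

With exact correspondences this update is optimal in one step; the iteration is needed because the matches themselves are only estimates.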
Recently, Oron et al. introduced the Best-Buddies Similarity (BBS) measure [21]. BBS counts the number of mutual nearest neighbors between two point sets. This simple measure was used for template matching between images and proved to be resilient to outliers and occluders. This success motivated us to study how the BBS measure could be adapted to the task of point cloud registration. We suggest several differentiable loss functions inspired by BBS. Our registration algorithms consist of optimizing over these losses with a variant of gradient descent (Adam [16]), to recover the parameters of the aligning transformation. We collectively name the resulting algorithms Best Buddy Registration (BBR), and demonstrate their high level of robustness to noise, occlusions and distractors, as well as an ability to cope with extremely sparse point clouds. Some of the algorithms are able to achieve very high accuracy in noisy settings where robustness is essential.
Deep neural networks (DNNs) are increasingly used to process point clouds. BBR can easily be integrated into such DNNs as a registration stage, and be optimized as part of the overall gradient-descent optimization of the network. For example, the DeepMapping network [7] includes a registration stage based on the non-robust Chamfer distance, which could be replaced by BBR. To facilitate this, we implemented BBR in PyTorch (https://github.com/AmnonDrory/BestBuddiesRegistration), which also makes it possible to run the algorithms on widely available neural network infrastructure, such as GPUs (see Figure 2).
The main contributions of this paper are:

A robust and accurate point cloud registration algorithm that is especially useful in realistic scenarios with a large time offset between the pair of point clouds, which implies large occlusions and outlier motions.

The algorithm fits naturally into deep learning settings as a component: it can be implemented using operations that already exist in deep learning frameworks, and optimized using Adam gradient descent, which is commonly used for neural network optimization.
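The Adam optimizer mentioned above is the standard one [16]. As a generic sketch (NumPy; in BBR the optimized vector would hold the 6 pose parameters and the temperature, while the toy usage here is purely illustrative):

```python
import numpy as np

def adam_minimize(grad, x0, lr=0.05, steps=1000, b1=0.9, b2=0.999, eps=1e-8):
    """Plain Adam gradient descent on a parameter vector, given a gradient
    function. Hyper-parameter values here are the common defaults."""
    x = np.asarray(x0, dtype=float).copy()
    m, v = np.zeros_like(x), np.zeros_like(x)
    for k in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g          # first-moment estimate
        v = b2 * v + (1 - b2) * g * g      # second-moment estimate
        mhat = m / (1 - b1 ** k)           # bias correction
        vhat = v / (1 - b2 ** k)
        x -= lr * mhat / (np.sqrt(vhat) + eps)
    return x

# illustrative toy usage: minimize (x - 3)^2
x_star = adam_minimize(lambda x: 2 * (x - 3.0), np.array([0.0]))
```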
2 Related Work
There are various approaches to point cloud registration. These algorithms can be divided into classic (i.e., non-deep) and deep methods.
Classic methods. ICP was introduced by Besl and McKay [3], and Chen and Medioni [5]; see the survey of Rusinkiewicz and Levoy [27] or the recent review by Pomerleau et al. [24]. The basic ICP algorithm deals with point-to-point registration, but [5] already considered point-to-plane registration to improve accuracy. This, however, requires normals as an additional source of information.
Segal et al. [31] later extended ICP to a full plane-to-plane formulation and gave it a probabilistic interpretation. Jian and Vemuri [15] proposed a robust point-set registration approach that reformulates ICP as the problem of aligning two Gaussian mixtures such that a statistical discrepancy measure between the two mixtures is minimized. It was recently accelerated by Eckart et al. [8], who introduced a Hierarchical multi-scale Gaussian Mixture Representation (HGMR) of the point clouds. Similarly, FilterReg [10] is a probabilistic point-set registration method that is considerably faster than alternative methods due to its computationally efficient probabilistic model. Its key idea is to treat registration as maximum-likelihood estimation, which can be solved with the EM algorithm: with a simple augmentation, the E-step is formulated as a filtering problem and solved using advances in efficient Gaussian filters.
ICP is prone to errors due to outliers and missing data, and a variety of heuristics, as well as more principled methods, have been introduced to deal with this. Chetverikov et al. [6] proposed a robust version of ICP, termed Trimmed ICP, based on Least Trimmed Squares, designed to robustify the minimization of the error function. Bouaziz et al. [4] used sparsity-inducing norms to cope with missing points and outliers. Rusinkiewicz [28] recently introduced a symmetric objective function for ICP that approximates a locally second-order surface centered around the corresponding points. The proposed objective achieves a larger basin of convergence than regular ICP, while providing state-of-the-art accuracy.
Fitzgibbon [9] replaces ICP with a general-purpose non-linear optimization (the Levenberg-Marquardt algorithm) that minimizes the registration error directly. His surprising finding is that this technique is comparable in speed to the special-purpose ICP algorithm.
Another line of research gives the correspondence problem a probabilistic interpretation: instead of assuming a one-to-one correspondence, assignments are assumed to be probabilistic. Like ours, the methods described next use gradient descent to find the optimal registration between two point clouds.
The differentiable approximation we take resembles that of SoftAssign [26], which solves the correspondence problem, as an intermediate step, using a permutation matrix. Because that matrix is non-differentiable, it is replaced with a doubly-stochastic matrix.
EM-ICP [12] treats point matches as hidden variables and suggests a method that corresponds to an ICP with multiple matches weighted by normalized Gaussian weights, giving the method its EM-ICP name.
In KCReg [34], the authors take an information-theoretic approach: they define a kernel correlation that measures affinity between every pair of points, use it to measure the compactness of a point set, and show that registering two point sets minimizes the overall compactness. They also show that this is equivalent to minimizing Rényi's quadratic entropy. In fact, the only difference between the gradients of KCReg [34] and EM-ICP [12] is a normalization term.
Deep methods. The introduction of PointNet [25] for processing unordered point clouds led to the development of PointNet-based registration algorithms.
PointNetLK [1] maps the two point clouds to a latent space in which it applies Lucas-Kanade registration [19]. To do so, it defines a supervised learning problem that takes two rotated versions of the same point cloud and produces the rotation between them. The method is implemented as a recurrent neural network, and avoids the costly step of point correspondence. On the downside, it requires a training phase to learn the embedding space, unlike our work, which requires no training at all.
Deep Closest Point [37] consists of three parts: a point cloud embedding network, an attention-based module combined with a pointer generation layer [36] to approximate combinatorial matching, and a differentiable singular value decomposition (SVD) layer to extract the final rigid transformation. PointGMM [14] represents the data via a hierarchical Gaussian mixture and learns to perform registration by transforming shapes to their canonical position. DeepVCP [17], for Virtual Corresponding Points, trains a network to detect keypoints, match them probabilistically, and recover the registration from the matches.
A major drawback of deep-learning-based registration methods is that they strongly depend on the data they were trained on: a registration network trained on a given dataset does not necessarily generalize well to other datasets [30]. As we have no training step, our approach does not suffer from this problem.
Our approach builds on the work of Oron et al. [21] and of Plötz and Roth [23]. Oron et al. introduced the best-buddies similarity measure as a robust method for template matching in images. The idea is to map image patches to points in a high-dimensional space and count the number of mutual nearest-neighbor matches between the two point sets. This count was shown to converge to a statistical distance measure when the number of points tends to infinity.
Plötz and Roth [23] proposed an approximation scheme for the nearest-neighbor problem: instead of selecting a particular element as the nearest neighbor of a query point, they use a soft approximation governed by a temperature parameter. As the temperature goes to zero, the approximation converges to the deterministic nearest neighbor. Like ICP and its variants, the best-buddies similarity relies on nearest-neighbor search, and we use this nearest-neighbor approximation in our work.
3 Method
We consider two point clouds $P=\{p_i\}_{i=1}^{N}$ and $Q=\{q_j\}_{j=1}^{M}$, where $p_i,q_j\in\mathbb{R}^3$. We wish to find the transformation that aligns them; in this work we assume it is a rigid transformation with 6 degrees of freedom (6DOF). We define several differentiable loss functions inspired by the Best Buddy Similarity measure (BBS). We collectively name our registration algorithms Best Buddy Registration (BBR). For each loss function $L$, the corresponding algorithm works by optimizing over this loss function to find the aligning transformation:
(1) $\quad R^{*},\mathbf{t}^{*} = \arg\min_{R,\mathbf{t},t}\, L\!\left(RP+\mathbf{t},\,Q;\,t\right)$
where $R$ is a 3D rotation matrix, $\mathbf{t}$ a 3D translation vector, and $t$ a temperature parameter (discussed ahead). We parameterize the rotation using Euler angles: $R=R(\alpha,\beta,\gamma)$. We next describe the four variants of our algorithm: BBR-softBBS, BBR-softBD, BBR-N and BBR-F.
We start by defining the BBS measure. Let $D$ denote the $N\times M$ matrix of distances between points in $P$ and points in $Q$, $D_{ij}=\|p_i-q_j\|$. A best-buddies matrix $BB$ determines whether a pair of points $p_i$ and $q_j$ are mutual nearest neighbors:
(2) $\quad BB_{ij} = \left[\, i = \arg\min_k D_{kj} \,\right]\cdot\left[\, j = \arg\min_k D_{ik} \,\right]$
where $[\cdot]$ equals $1$ if the term in the brackets is true and zero otherwise.
The Best-Buddies Similarity (BBS) loss is the negative of the number of best-buddy pairs (in the original definition [21], BBS is normalized by the point-set size; we omit that here):
(3) $\quad L_{BBS} = -\sum_{i,j} BB_{ij}$
The best-buddies similarity measure was shown to be very robust to outliers and missing data [21] in the context of template matching for images. We bring it to 3D point clouds. $L_{BBS}$ is a robust measure of the quality of the matching between two point clouds $P$ and $Q$. However, it cannot be used for gradient-descent optimization, because it uses the non-differentiable argmin operator. To overcome this, we use the soft argmin approximation introduced by [23] for Neural Nearest Neighbors Networks. Specifically, a soft best-buddies matrix $\widetilde{BB}$ approximates $BB$ as follows:
(4) $\quad \widetilde{BB}_{ij}(t) = \frac{e^{-D_{ij}/t}}{\sum_k e^{-D_{kj}/t} + \varepsilon}\cdot\frac{e^{-D_{ij}/t}}{\sum_k e^{-D_{ik}/t} + \varepsilon}$
where $t$ is a temperature parameter (see ahead), and $\varepsilon$ is a small constant used for numerical stability. The matrix $\widetilde{BB}$ is the element-wise multiplication of the row-wise and column-wise soft argmin of the distance matrix $D$. Observe how it corresponds to the brackets in Equation (2). The soft-BBS loss is now given by:
(5) $\quad L_{softBBS}(t) = -\sum_{i,j} \widetilde{BB}_{ij}(t)$
$\widetilde{BB}$ is not only a differentiable approximation to $BB$, but also a generalization. While $BB_{ij}$ is only non-zero if $p_i$ and $q_j$ are mutual nearest neighbors, $\widetilde{BB}_{ij}$ can also be non-zero when, for example, $q_j$ is $p_i$'s 3rd nearest neighbor, while $p_i$ is $q_j$'s 4th nearest neighbor. The value of the temperature parameter $t$ controls this behaviour: the smaller it is, the stricter $\widetilde{BB}$ becomes, i.e. the more similar to $BB$. $t$ is also learned during the optimization (together with the other optimized parameters, $R$ and $\mathbf{t}$). However, we find it important to initialize it with a reasonable value: a bad choice of $t$ can result in a flat loss, unsuitable for gradient descent. In all our experiments we initialize $t$ to the same value, which generally provides a smooth approximation to $L_{BBS}$ with a good slope near the minimum. For numerical stability, we limit how far it can decrease.
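A minimal NumPy sketch of the soft best-buddies computation (the paper's implementation is in PyTorch; function names here are illustrative):

```python
import numpy as np

def soft_bb_matrix(D, t, eps=1e-12):
    """Soft best-buddies matrix: element-wise product of the column-wise
    and row-wise soft argmin of the distance matrix D."""
    E = np.exp(-D / t)
    col_soft = E / (E.sum(axis=0, keepdims=True) + eps)  # soft argmin over i
    row_soft = E / (E.sum(axis=1, keepdims=True) + eps)  # soft argmin over j
    return col_soft * row_soft

def soft_bbs_loss(D, t):
    # negated (soft) count of best-buddy pairs
    return -soft_bb_matrix(D, t).sum()
```

As the temperature shrinks, the matrix approaches the hard mutual-nearest-neighbor matrix, so the loss approaches the negated best-buddy count.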
The next loss we suggest is the soft buddy-distance loss, $L_{softBD}$, which makes use of the distance between the two points in each pair. This is in contrast to the BBS measure, which only counts the number of best buddies, and to the soft-BBS loss, which is a soft approximation of that count. We define soft-BD as follows:
(6) $\quad L_{softBD}(t) = \sum_{i,j} \widetilde{BB}_{ij}(t)\, D_{ij}$
The next loss is the soft-BD with normals loss, $L_{N}$, which uses a point-to-plane distance measure calculated from local normals. Such a distance measure is used in some of the most popular and successful ICP variants, such as generalized ICP [31] and symmetric ICP [28]. In the previously suggested methods, we used the Euclidean point-to-point distance to create the distance matrix $D$. In BBR-N we replace it with the following symmetric point-to-plane distance, based on Rusinkiewicz [28]:
(7) $\quad D^{N}_{ij} = \left( (p_i - q_j) \cdot (n_{p_i} + n_{q_j}) \right)^2$
where $n_{p_i}$ is the normal at point $p_i$, $n_{q_j}$ is the normal at point $q_j$, and $D^N$ denotes the version of the distance matrix calculated using this distance. The loss function is then calculated as in soft-BD, except using $D^N$ instead of $D$. This distance is symmetric in the sense that it uses the normals from both points, unlike algorithms such as point-to-plane ICP [5] that use normals from only one of the point clouds. (This is not to be confused with the symmetric nature of the best-buddies similarity measure that we use here.)
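The symmetric point-to-plane distance matrix can be sketched as follows, assuming the squared form of the distance (names are illustrative):

```python
import numpy as np

def sym_plane_dist_matrix(P, Q, nP, nQ):
    """Symmetric point-to-plane distance matrix D^N, here assumed to be
    ((p_i - q_j) . (n_i + n_j))^2, i.e. using normals from both clouds."""
    diff = P[:, None, :] - Q[None, :, :]    # (N, M, 3) pairwise differences
    nsum = nP[:, None, :] + nQ[None, :, :]  # (N, M, 3) summed normals
    return (diff * nsum).sum(-1) ** 2
```

Note that, unlike the Euclidean distance, this measure vanishes whenever the displacement between a pair of points is tangent to the local surface.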
The final loss we present is the best-buddy filtering loss, $L_{F}$. At the heart of the BBS measure lies the robustness achieved by using mutual nearest neighbors. BBR-F translates this idea into best-buddy filtering: using only pairs that are mutual nearest neighbors. In addition, it follows the trend in ICP-like algorithms of using both point-to-point and point-to-plane distance measures: the Euclidean point-to-point distance is used for matching pairs between the two point clouds, and the symmetric point-to-plane distance between these pairs then defines the loss:
(8) $\quad L_{F} = \frac{1}{|BB|} \sum_{(i,j)\,:\,BB_{ij}=1} D^{N}_{ij}$
where $|BB|$ is the number of best-buddy pairs.
Notice that in this variant of BBR we have a hard selection of pairs, which isn’t optimized during gradient descent. Instead, the distances between the points in each pair are minimized.
The central difference between BBR-F and symmetric ICP [28] is that best-buddy filtering replaces explicit outlier rejection. This removes the need to calibrate outlier-rejection parameters, while yielding better accuracy in settings where robustness is important, as Section 4 shows. Another difference is that BBR-F uses Adam gradient descent for optimization, while symmetric ICP uses a closed-form solution to an approximately linearized version of the symmetric point-to-plane distance measure.
BBR-F is especially useful for very large point clouds, where memory and running time become a constraint, because it does not require the full distance matrix $D$ or $D^N$. For the pair-matching step, we use the KD-tree method [2], and then calculate the point-to-plane distances only for best-buddy pairs.
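A sketch of best-buddy filtering with KD-trees (here SciPy's cKDTree; names are illustrative), which never materializes the full distance matrix:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_buddy_pairs(P, Q):
    """Mutual nearest neighbors under the Euclidean distance, found with
    two KD-tree queries instead of an (N x M) distance matrix."""
    nn_pq = cKDTree(Q).query(P)[1]  # NN in Q of each point in P
    nn_qp = cKDTree(P).query(Q)[1]  # NN in P of each point in Q
    i = np.arange(len(P))
    mutual = nn_qp[nn_pq] == i      # p_i is the NN of its own NN
    return i[mutual], nn_pq[mutual]

def bbr_f_style_loss(P, Q, nP, nQ):
    """Mean symmetric point-to-plane distance over best-buddy pairs only
    (squared form assumed, as in the sketch above)."""
    i, j = best_buddy_pairs(P, Q)
    d = np.einsum('ij,ij->i', P[i] - Q[j], nP[i] + nQ[j])
    return (d ** 2).mean()
```

Each query is O(n log n), which is what makes the variant practical for clouds of tens of thousands of points.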
4 Experiments
We present experiments designed to analyze the behaviour of the BBR methods, and evaluate them on several datasets: Stanford [35], TUM RGB-D [33], KITTI Odometry [11], and ApolloSouthbay [18]. We compare our approach to several established alternatives, focusing on classic approaches (e.g., ICP) rather than learned methods, as the latter do not necessarily generalize well across datasets [30].
Figure 1 shows the best-buddy pairs during a typical run of BBR-softBBS on the Stanford Bunny model [35]. The algorithm converges after 120 iterations. At first there are few best-buddy pairs, but as the optimization progresses their number grows until convergence.
4.1 Performance Evaluation Setup
We conduct a set of experiments to evaluate accuracy and robustness. To do that, we apply different rotations, translations, subsampling, and noise to each point cloud. We compare ourselves to the following popular point cloud registration algorithms: (i) HGMR [8], a GMM-based method; (ii) Coherent Point Drift (CPD) [20], a probabilistic algorithm based on GMMs; (iii) Generalized ICP (GICP) [31], a very popular and accurate ICP variant that uses local normals; and (iv) Symmetric ICP (SymICP) [28], which provides state-of-the-art performance on several point cloud registration challenges.
ICP algorithms are sensitive to noise, and therefore commonly employ a set of standard practices for outlier rejection. Our BBR methods require no such processes.
Setting.
We test the four variants of our algorithm: BBR-softBBS, BBR-softBD, BBR-N and BBR-F. BBR-softBBS tends to be less accurate than the others, and we therefore omit it from most experiments.
The local normals used in BBR-N and BBR-F are estimated by calculating the principal axes of a neighborhood of nearest neighbors around each point in the full cloud (before subsampling). For consistency, SymICP was given the same normals used for BBR-N. GICP calculates its own normals on the fly, from the subsampled point cloud that it takes as input. For all BBR methods, the optimization is performed by running Adam for a predefined number of iterations.
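The normal-estimation step can be sketched as a per-point PCA; the neighborhood size below is an arbitrary illustrative choice, not the paper's:

```python
import numpy as np

def estimate_normals(points, k=16):
    """Per-point normals: the direction of least variance (smallest
    principal axis) of each point's k-nearest-neighbor neighborhood.
    Brute-force neighbor search, for illustration only."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, :k]  # k nearest neighbors (incl. self)
    normals = np.empty_like(points)
    for idx, nb_idx in enumerate(knn):
        nb = points[nb_idx] - points[nb_idx].mean(axis=0)
        _, _, Vt = np.linalg.svd(nb)     # rows of Vt: principal axes
        normals[idx] = Vt[-1]            # least-variance direction
    return normals
```

The sign of each normal is ambiguous; consistent orientation (if needed) is a separate step.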
We use the Probreg (https://github.com/neka-nat/probreg) library's implementations of HGMR and CPD, and the original SymICP implementation, with default parameters for all of them. We use the Point Cloud Library's [29] implementation of GICP, setting the parameters as in [17].
4.2 Comparing Accuracy and Robustness between BBR variants
We start by comparing the different variants of BBR in a simple experiment, testing their accuracy and ability to converge from different initial rotations: 5, 10, 30, 60 and 90 degrees. For each angle, we repeat the following 20 times: select two random subsets of 500 points from the Stanford Bunny point cloud, rotate the target point cloud around a random axis, perform registration, and measure the angular error. We consider registration to have failed if the final angular error exceeds a threshold.
Results are shown in Figure 3. BBR-softBBS is the clear leader in robustness to large initial error: unlike the others, it handles even the largest initial rotations with hardly any failures. BBR-N is especially sensitive to large initial rotations, due to its reliance on normals to recognize best buddies: when the initial rotation is large, the same object will have very different normals in each of the two scans. For all algorithms, large initial rotations do not degrade accuracy for the attempts that did succeed. Looking at the accuracy of the successful attempts, it is clear that the methods that make use of local normals (BBR-N and BBR-F) are significantly more accurate.
4.3 Accuracy
The experiments shown in Figure 4 demonstrate our ability to register point clouds under random rotation and translation, using the Bunny, Horse, and Dragon point clouds from [35]. We randomly select a source and a target subset from the original point cloud. The target point cloud is rotated around a randomly selected axis and then translated along a random direction. We then run the registration algorithms on the point-cloud pair and record their translational and angular errors. We repeat the experiment several times and report the median error for each algorithm and each cardinality of the point clouds.
For all experiments in Figure 4 we use the same optimization settings; the point-cloud sizes are set per model, up to 2000 points for the Dragon. BBR-F achieves the lowest error across almost all point cloud sizes, followed closely by BBR-N. Only when the point density is high does SymICP perform on par with BBR-N. This demonstrates BBR's ability to work well with very sparse point clouds. It should also be pointed out that BBR-softBD outperforms the comparable registration methods that do not use normals (as well as GICP).
4.4 Robustness
The next experiments demonstrate the resistance of the algorithm to a variety of challenges, including occlusions, the presence of a distractor, measurement noise, and a large initial error.
Partial Overlap and Occlusion.
The experiment shown in Figure 5 evaluates resistance to partial overlap and occlusion. We perform registration between two partial scans of the Stanford Bunny, bun000 and bun090, each captured from a different view point. Following Rusinkiewicz's [28] experimental setup, we first align the scans according to the ground-truth motion, and then follow the same experimental method as in Section 4.3. BBR-F achieves the most accurate results, followed by BBR-N and SymICP. However, SymICP deteriorates considerably when given a very sparse point cloud, while both of our algorithms cope with it very well.
Resistance to Distractors.
This experiment evaluates the effect of distractors: in addition to the main object of interest, the scene also contains a second object with a different motion. In this synthetic experiment, the main object, a large horse, was randomly translated and rotated as in Section 4.3. The distractor object, a small horse, underwent a different motion. Such a situation may occur when estimating the ego-motion of a vehicle from scans that include other, independently moving vehicles. The main object contains 1000 points, and we vary the number of points in the distractor object from 200 up to 900. Figure 6 shows the median error as a function of the number of points in the distractor. BBR-F shows strong resistance to the distractor in this experiment, while SymICP is quite susceptible to it. Among methods that do not make use of normals, BBR-softBD is the most accurate.
Measurement noise.
The TUM RGB-D dataset [33] contains point clouds of indoor scenes captured with the Kinect sensor, and thus contains natural measurement noise due to the warp and scanning noise of the Kinect. Rusinkiewicz [28] noted that this dataset poses a qualitatively different challenge than the Bunny point cloud, and demonstrated his algorithm on a specific pair of partially-overlapping scans from it. We use the same pair, 1305031104.030279 and 1305031108.503548 of the freiburg1_xyz sequence (Figure 7), sample 1000 points from each, and add a random rotation around a random axis. We repeat this 50 times and perform registration for each, showing the cumulative distribution of the final errors in Figure 8. The BBR algorithms perform better than all competing methods.
We test another pair of scans from TUM RGB-D, 1305031794.813436 and 1305031794.849048 of the freiburg1_xyz sequence (Figure 9). We sample 1500 points from each and add a random rotation around a random axis. We repeat this 50 times and perform registration for each, showing the cumulative distribution of the final errors in Figure 10. BBR-softBD (labeled BBR) performs considerably better than either SymICP [28] or CPD [20], showing its robustness to realistic measurement noise and occlusions.
Large initial error with partial overlap.
In this experiment we evaluate the ability to converge when the source and target point clouds overlap only partially and the initial error is large. We use two scans of the Bunny, bun180 and bun270, captured from significantly different view points. As before, we first align them, sample 1000 points from each, and then apply a random motion to one of them. In this experiment the motion is large: a random rotation around a random axis, and a translation of half the extent of the point cloud along a random direction. Figure 11 shows the cumulative distribution of errors over 50 repeats. BBR-softBD performs considerably better than all other algorithms.
BBRsoftBBS Convergence.
We have previously demonstrated the ability of BBR-softBD, BBR-N and BBR-F to handle moderate initial errors. Here we show that BBR-softBBS, thanks to its large basin of convergence, can be used when the initial error is much larger. We use the same Bunny, Horse and Dragon models as in the accuracy test, this time with a large random rotation, and run BBR-softBBS. As can be seen in Figure 12, BBR-softBBS reduces the rotation error significantly in all experiments.
4.5 Odometry
In this section we present experiments in the realistic setting of vehicle navigation, focusing on a difficult scenario where the separation between the two clouds is relatively large, leading to significant occlusions and to outliers caused by independently moving objects in the scene. We use two datasets of high-resolution Lidar scans, KITTI Odometry [11] and ApolloSouthbay [18], both consisting of large point clouds of over 100K points. We follow the experimental setup of [17], where the test set consists of pairs of clouds scanned by the same vehicle at two times, during which the vehicle has travelled up to 5 meters, over up to 2 seconds (KITTI) or even 5 seconds (ApolloSouthbay). An initial estimate of the motion is assumed to be available, inaccurate by up to 1 meter in each of x, y, z, and up to 1 degree in each of the three rotation angles. We report the mean translation and angular errors, as well as the maximum (worst case).
For this test we use the BBR-F variant of our algorithm. Normals are estimated from local neighborhoods in the full cloud. To achieve high accuracy, we only moderately subsample the point clouds, to 30K points, uniformly at random.
Tables 1 and 2 compare our results to those reported in [17] and to those of SymICP [28]. In both experiments, BBR-F achieves state-of-the-art accuracy on three of the four accuracy measures. This demonstrates BBR's ability to achieve high accuracy in realistic scenarios that include occlusions, outlier motions and measurement noise.
Method | Angular Error (°) Mean | Max | Translation Error (m) Mean | Max
ICP-Po2Po [3] | 0.139 | 1.176 | 0.089 | 2.017
ICP-Po2Pl [3] | 0.084 | 1.693 | 0.065 | 2.050
GICP [31] | 0.067 | 0.375 | 0.065 | 2.045
AA-ICP [22] | 0.145 | 1.406 | 0.088 | 2.020
NDT-P2D [32] | 0.101 | 4.369 | 0.071 | 2.000
CPD [20] | 0.461 | 5.076 | 0.804 | 7.301
3DFeat-Net [38] | 0.199 | 2.428 | 0.116 | 4.972
DeepVCP-Base [17] | 0.195 | 1.700 | 0.073 | 0.482
DeepVCP-Duplication [17] | 0.164 | 1.212 | 0.071 | 0.482
SymICP [28] | 0.066 | 0.422 | 0.058 | 0.863
BBR-F (ours) | 0.065 | 0.356 | 0.058 | 0.730
Method | Angular Error (°) Mean | Max | Translation Error (m) Mean | Max
ICP-Po2Po [3] | 0.051 | 0.678 | 0.089 | 3.298
ICP-Po2Pl [3] | 0.026 | 0.543 | 0.024 | 4.448
GICP [31] | 0.025 | 0.562 | 0.014 | 1.540
AA-ICP [22] | 0.054 | 1.087 | 0.109 | 5.243
NDT-P2D [32] | 0.045 | 1.762 | 0.045 | 1.778
CPD [20] | 0.054 | 1.177 | 0.210 | 5.578
3DFeat-Net [38] | 0.076 | 1.180 | 0.061 | 6.492
DeepVCP-Base [17] | 0.135 | 1.882 | 0.024 | 0.875
DeepVCP-Duplication [17] | 0.056 | 0.875 | 0.018 | 0.932
SymICP [28] | 0.018 | 2.589 | 0.010 | 6.625
BBR-F (ours) | 0.015 | 0.308 | 0.007 | 2.517
4.6 Runtime
Table 3 shows the time for a single iteration of each variant of our loss function, as a function of the number of points (measured with a PyTorch implementation on a GTX 980 Ti GPU). All variants typically converge within a few hundred iterations, depending on the learning rate.
Points | BBR-softBBS | BBR-softBD | BBR-N | BBR-F
200 | 4 | 4 | 4 | 6
500 | 4 | 4 | 6 | 6
1000 | 8 | 8 | 13 | 8
5000 | 140 | 140 | 240 | 24
30000 | - | - | - | 125
5 Conclusions
We proposed Best Buddy Registration (BBR), a family of point cloud registration algorithms inspired by the Best Buddy Similarity (BBS) measure. We first showed that registration can be performed by running gradient descent on a differentiable approximation of the negated BBS measure. The resulting algorithm is quite robust to noise, occlusions and distractors, and able to cope with very sparse point clouds. We then presented additional algorithms that achieve higher accuracy while maintaining robustness, by incorporating point-to-point and point-to-plane distances into the loss function. Finally, we presented the BBR-F algorithm, which uses best-buddy filtering to achieve state-of-the-art accuracy on challenges that include significant noise, occlusions and distractors, including registration of automotive Lidar scans that are relatively widely separated in time. Our algorithms are implemented in PyTorch and optimized with Adam gradient descent, allowing them to be incorporated as a registration stage in deep neural networks that process point clouds.
6 Acknowledgements
This research was supported by ERCStG grant no. 757497 (SPADE) and by ISF grant number 1549/19.
References

[1] (2019) PointNetLK: robust & efficient point cloud registration using PointNet. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[2] (1975) Multidimensional binary search trees used for associative searching. Communications of the ACM 18, pp. 509–517.
[3] (1992) A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14(2), pp. 239–256.
[4] (2013) Sparse iterative closest point. In Proceedings of the Eleventh Eurographics/ACM SIGGRAPH Symposium on Geometry Processing, SGP '13, pp. 113–123.
[5] (1992) Object modelling by registration of multiple range images. Image and Vision Computing 10(3), pp. 145–155.
[6] (2005) Robust Euclidean alignment of 3D point sets: the trimmed iterative closest point algorithm. Image and Vision Computing 23(3), pp. 299–309.
[7] (2019) DeepMapping: unsupervised map estimation from multiple point clouds. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[8] (2018) HGMR: hierarchical Gaussian mixtures for adaptive 3D registration. In European Conference on Computer Vision (ECCV).
[9] (2001) Robust registration of 2D and 3D point sets. In British Machine Vision Conference, pp. 662–670.
[10] (2019) FilterReg: robust and efficient probabilistic point-set registration using Gaussian filter and twist parameterization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11095–11104.
[11] (2012) Are we ready for autonomous driving? The KITTI vision benchmark suite. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[12] (2002) Multi-scale EM-ICP: a fast and robust approach for surface registration. In European Conference on Computer Vision, pp. 418–432.
[13] (2012) Rotation averaging. International Journal of Computer Vision 103, pp. 267–305.
[14] (2020) PointGMM: a neural GMM network for point clouds. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 12051–12060.
[15] (2011) Robust point set registration using Gaussian mixture models. IEEE Trans. Pattern Anal. Mach. Intell. 33(8), pp. 1633–1645.
[16] (2014) Adam: a method for stochastic optimization. In International Conference on Learning Representations.
[17] (2019) DeepVCP: an end-to-end deep neural network for point cloud registration. In IEEE International Conference on Computer Vision (ICCV).
[18] (2019) L3-Net: towards learning based LiDAR localization for autonomous driving. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[19] (1981) An iterative image registration technique with an application to stereo vision. In Proceedings of the 7th International Joint Conference on Artificial Intelligence, IJCAI '81, pp. 674–679.
[20] (2010) Point set registration: coherent point drift. IEEE Trans. Pattern Anal. Mach. Intell. 32(12), pp. 2262–2275.
[21] (2018) Best-buddies similarity—robust template matching using mutual nearest neighbors. IEEE Transactions on Pattern Analysis and Machine Intelligence 40(8), pp. 1799–1813.
[22] (2018) AA-ICP: iterative closest point with Anderson acceleration. In IEEE International Conference on Robotics and Automation (ICRA), pp. 3407–3412.
[23] (2018) Neural nearest neighbors networks. In Advances in Neural Information Processing Systems (NeurIPS).
[24] (2015) A review of point cloud registration algorithms for mobile robotics.
[25] (2017) PointNet: deep learning on point sets for 3D classification and segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 652–660.
[26] (1997) The softassign Procrustes matching algorithm. In Biennial International Conference on Information Processing in Medical Imaging, pp. 29–42.
[27] (2001) Efficient variants of the ICP algorithm. In Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling, pp. 145–152.
[28] (2019) A symmetric objective function for ICP. ACM Transactions on Graphics (Proc. SIGGRAPH) 38(4).
[29] (2011) 3D is here: Point Cloud Library (PCL). In IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China.
[30] (2019) PCRNet: point cloud registration network using PointNet encoding. arXiv:1908.07906.
[31] (2009) Generalized-ICP. In Robotics: Science and Systems.
[32] (2012) Fast and accurate scan registration through minimization of the distance between compact 3D NDT representations. International Journal of Robotics Research 31(12), pp. 1377–1393.
[33] (2012) A benchmark for the evaluation of RGB-D SLAM systems. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 573–580.
[34] (2004) A correlation-based approach to robust point set registration. In European Conference on Computer Vision, pp. 558–569.
[35] (1994) Zippered polygon meshes from range images. In Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '94, pp. 311–318.
[36] (2015) Pointer networks. In Advances in Neural Information Processing Systems 28, pp. 2692–2700.
[37] (2019) Deep closest point: learning representations for point cloud registration. In IEEE International Conference on Computer Vision (ICCV).
[38] (2018) 3DFeat-Net: weakly supervised local 3D features for point cloud registration. In European Conference on Computer Vision (ECCV), pp. 630–646.