Point cloud registration is an important task in 3D computer vision. The same object or scene is scanned from two viewpoints, e.g. with a laser scanner, and the goal is to recover the rigid transformation (rotation and translation) that aligns the two scans to each other. Realistic scenarios add complications: measurement noise, occlusions due to the change in viewpoint, and outliers due to independent motions of freely moving objects in the scene (distractors). This makes robustness a central issue for point cloud registration algorithms.
Probably the most popular approach to the problem is some variant of the Iterative Closest Point (ICP) algorithm. This method iterates between two stages: first match pairs of points between the two clouds, and then apply the transformation that minimizes a loss defined by the distance between the two points in each pair. The simplest version of ICP uses the Euclidean distance between points, but later versions use more complex distance measures to achieve faster and more accurate convergence. Some of the most popular and successful variants use local normals to define point-to-plane distance measures. ICP-like methods are typically sensitive to noise, requiring steps such as explicit outlier removal to improve their robustness.
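The two-stage loop above can be sketched in a few lines. This is a minimal point-to-point ICP illustration, not the paper's method: the matching step is brute force, and the fitting step uses the standard closed-form least-squares rigid fit (Kabsch algorithm); all function names are our own.

```python
import numpy as np

def icp_point_to_point(src, dst, n_iters=50):
    """Minimal point-to-point ICP sketch. src is (N,3), dst is (M,3).
    Returns R (3x3) and t (3,) such that src @ R.T + t approaches dst."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(n_iters):
        moved = src @ R.T + t
        # Stage 1: match each moved source point to its nearest target point.
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(axis=1)]
        # Stage 2: closed-form least-squares rigid fit (Kabsch algorithm).
        mu_s, mu_d = moved.mean(0), nn.mean(0)
        H = (moved - mu_s).T @ (nn - mu_d)
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ S @ U.T
        t_step = mu_d - R_step @ mu_s
        # Compose the incremental update into the accumulated transform.
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```

On clean data with a small initial offset this converges quickly; the robustness problems described above appear precisely when the nearest-neighbor matches in stage 1 are corrupted by outliers or occlusions.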
Recently, Oron et al. introduced the Best-Buddies Similarity (BBS) measure. BBS counts the number of mutual nearest neighbors between two point sets. This simple measure was used for template matching between images and proved to be resilient to outliers and occluders. This success motivated us to study how the BBS measure could be adapted to the task of point cloud registration. We suggest several differentiable loss functions inspired by BBS. Our registration algorithms consist of optimizing over these losses with a variant of gradient descent (Adam) to recover the parameters of the aligning transformation. We collectively name the resulting algorithms Best Buddy Registration (BBR), and demonstrate their high level of robustness to noise, occlusions and distractors, as well as an ability to cope with extremely sparse point clouds. Some of the algorithms achieve very high accuracy in noisy settings where robustness is essential.
Deep neural networks (DNNs) have increasingly been used for the processing of point clouds. BBR can easily be integrated into such DNNs as a registration stage, and be optimized as part of the overall gradient descent optimization of the network. (For example, the DeepMapping network includes a registration stage based on the non-robust Chamfer distance, which could be replaced by BBR.)
To facilitate this, we implemented BBR in PyTorch (https://github.com/AmnonDrory/BestBuddiesRegistration), which also makes it possible to run the algorithms on widely available neural network infrastructure, such as GPUs (see Figure 2).
The main contributions of this paper are:
A robust and accurate point cloud registration algorithm that is especially useful in realistic scenarios with a large time offset between the pair of point clouds, meaning large occlusions and outlier motions.
The algorithm naturally fits into the deep learning setting as a component: it can be implemented using operations that already exist in deep learning frameworks, and optimized using Adam gradient descent, which is commonly used for neural network optimization.
2 Related Work
There are various approaches to the problem of point cloud registration. These algorithms can be divided into classic (i.e., non-deep) and deep methods.
Classic methods. ICP was introduced by Besl and McKay, and by Chen and Medioni. See the survey of Rusinkiewicz and Levoy, or the recent review of the topic by Pomerleau et al. The basic ICP algorithm deals with point-to-point registration, but Chen and Medioni already considered point-to-plane registration to improve accuracy. This, however, requires the use of normals as an additional source of information.
Segal et al. later extended ICP to a full plane-to-plane formulation and gave it a probabilistic interpretation. Jian and Vemuri proposed a robust point-set registration method. Their approach reformulated ICP as the problem of aligning two Gaussian mixtures such that a statistical discrepancy measure between the two mixtures is minimized. It was recently accelerated by Eckart et al., who introduced a Hierarchical multi-scale Gaussian Mixture Representation (HGMR) of the point clouds. Similarly, FilterReg is a probabilistic point-set registration method that is considerably faster than alternative methods due to its computationally efficient probabilistic model. Its key idea is to treat registration as maximum likelihood estimation, which can be solved using the EM algorithm. With a simple augmentation, the E step is formulated as a filtering problem and solved using advances in efficient Gaussian filters.
ICP is prone to errors due to outliers and missing data. Thus, a variety of heuristics, as well as more principled methods, were introduced to deal with these issues. Chetverikov et al. proposed a robust version of ICP, termed Trimmed ICP, which is based on Least Trimmed Squares and is designed to robustify the minimization of the error function. Bouaziz et al. used sparsity-inducing norms to cope with missing points and outliers.
Rusinkiewicz recently introduced a symmetric objective function for ICP that approximates a locally second-order surface centered around the corresponding points. The proposed objective function achieves a larger basin of convergence than regular ICP, while providing state-of-the-art accuracy.
Fitzgibbon  replaces ICP with a general-purpose nonlinear optimization (the Levenberg-Marquardt algorithm) that minimizes the registration error directly. His surprising finding is that his technique is comparable in speed to the special-purpose ICP algorithm.
Another line of research gives the correspondence problem a probabilistic interpretation. Instead of assuming a one-to-one correspondence, assignments are assumed to be probabilistic. Similar to us, these methods, described next, use gradient descent to find the optimal registration between two point clouds.
The differentiable approximation we take resembles that of SoftAssign. There, the correspondence problem is solved, as an intermediate step, using a permutation matrix. Because that matrix is non-differentiable, it is replaced with a doubly-stochastic matrix.
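The doubly-stochastic relaxation can be illustrated with Sinkhorn normalization, which alternately rescales rows and columns of a positive matrix. This is a generic sketch of the idea, not SoftAssign's exact procedure; the function name and parameters are our own.

```python
import numpy as np

def sinkhorn(D, temperature=0.1, n_iters=50):
    """Relax a hard assignment into a doubly-stochastic matrix by
    alternately normalizing rows and columns of exp(-D / temperature)."""
    M = np.exp(-D / temperature)
    for _ in range(n_iters):
        M /= M.sum(axis=1, keepdims=True)  # make rows sum to 1
        M /= M.sum(axis=0, keepdims=True)  # make columns sum to 1
    return M
```

As the temperature decreases, the result concentrates toward a hard permutation, while remaining differentiable with respect to the entries of `D` at any positive temperature.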
EM-ICP  treats point matches as hidden variables and suggests a method that corresponds to an ICP with multiple matches weighted by normalized Gaussian weights, giving birth to the EM-ICP acronym of the method.
In KCReg , the authors take an information theoretic approach to the problem. First, they define a kernel correlation that measures affinity between every pair of points. Then, they use that to measure the compactness of a point set and then show that registering two point sets minimizes the overall compactness. In addition, they show that this is equivalent to minimizing Renyi’s Quadratic Entropy. In fact, the only difference between the gradients of KCReg  and EM-ICP  is the normalization term.
Deep methods. The introduction of PointNet  for processing unordered point clouds led to the development of PointNet-based registration algorithms.
PointNetLK performs registration by aligning the two point clouds in a learned embedding space. To do that, it defines a supervised learning problem that takes two rotated versions of the same point cloud and produces the rotation between the two. The method is implemented using a recurrent neural network, and avoids the costly step of point correspondence. On the downside, it requires a training phase to learn the embedding space, unlike our work, which requires no training at all.
Deep Closest Point learns point embeddings, uses an attention-based pointer mechanism to approximate combinatorial matching, and applies a differentiable singular value decomposition (SVD) layer to extract the final rigid transformation. PointGMM represents the data via a hierarchical Gaussian mixture and learns to perform registration by transforming shapes to their canonical position. DeepVCP, for Virtual Corresponding Points, trains a network to detect keypoints, match them probabilistically, and recover the registration using them.
A major drawback of deep learning based registration methods is that they strongly depend on the data that they have been trained on. A registration network that is trained for a given dataset does not necessarily generalize well to other datasets. As we do not have a training step, our approach does not suffer from this problem.
Our approach builds on the work of Oron et al. and that of Plötz and Roth. Oron et al. introduced the best-buddies similarity measure as a robust method for template matching in images. The idea was to map image patches to points in some high-dimensional space and count the number of mutual nearest-neighbor matches between the two point sets. This count was shown to converge to a well-defined error measure when the number of points tends to infinity.
Plötz and Roth  proposed an approximation scheme to the nearest neighbor problem. Instead of selecting a particular element to be the nearest neighbor to a query point, they use a soft approximation that is governed by a temperature parameter. When the temperature goes to zero, the approximation converges to the deterministic nearest neighbor. Similarly to ICP and its variants, the best-buddies similarity relies on nearest neighbor search, and we use this nearest neighbor approximation in our work.
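The temperature-governed relaxation of nearest-neighbor selection can be sketched as a softmin over distances. This is an illustration of the general idea, with our own names; as the temperature goes to zero the weight vector tends to a one-hot indicator of the nearest neighbor.

```python
import numpy as np

def soft_argmin_weights(d, temperature):
    """Soft relaxation of nearest-neighbor selection: returns a weight
    vector over candidates that tends to one-hot(argmin d) as T -> 0."""
    logits = -d / temperature
    logits -= logits.max()  # shift for numerical stability
    w = np.exp(logits)
    return w / w.sum()

# Example: distances from a query point to four candidates.
d = np.array([0.9, 0.2, 0.5, 0.4])
diffuse = soft_argmin_weights(d, 1.0)    # high temperature: spread-out weights
peaked = soft_argmin_weights(d, 0.01)    # low temperature: nearly one-hot
```

Unlike a hard argmin, the weights are differentiable with respect to the distances, which is what makes gradient-descent optimization possible.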
We consider two point clouds $P=\{p_i\}_{i=1}^{N}$ and $Q=\{q_j\}_{j=1}^{M}$, where $p_i, q_j \in \mathbb{R}^3$. We wish to find the transformation that aligns them, and in this work we assume this is a rigid transformation with 6 degrees of freedom (6DOF). We define several differentiable loss functions inspired by the Best-Buddies Similarity measure (BBS). We collectively name our registration algorithms Best Buddy Registration (BBR). For each loss function $\mathcal{L}$, the algorithm BBR-$\mathcal{L}$ works by optimizing over this loss function to find the aligning transformation:

$$\hat{R},\hat{t} = \operatorname*{argmin}_{R,\,t,\,T}\ \mathcal{L}\left(P,\ R\,Q + t;\ T\right),$$

where $R$ is a 3D rotation matrix, $t$ a 3D translation vector, and $T$ a temperature parameter (discussed ahead). We parameterize the rotation using Euler angles: $R = R(\alpha,\beta,\gamma)$. We next describe the four variants of our algorithm: BBR-softBBS, BBR-softBD, BBR-N and BBR-F.
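The Euler-angle parameterization can be sketched as the composition of three elementary rotations. The Z-Y-X composition order below is a common convention chosen for illustration; the paper does not specify its axis order.

```python
import numpy as np

def rotation_from_euler(alpha, beta, gamma):
    """Compose a 3D rotation matrix from Euler angles (Z-Y-X order,
    a common convention; other orders are equally valid)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rx = np.array([[1, 0, 0], [0, cg, -sg], [0, sg, cg]])
    return Rz @ Ry @ Rx
```

Because each entry of the resulting matrix is a smooth function of the three angles, gradients with respect to the angles flow through the loss, which is what allows Adam to optimize the rotation directly.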
We start by defining the BBS measure. Let $D$ denote the distance matrix between points in $P$ and points in $Q$, with $D_{ij} = \lVert p_i - q_j \rVert$. A best-buddies matrix $BB$ determines whether a pair of points $p_i$ and $q_j$ are mutual nearest neighbors:

$$BB_{ij} = \Big[\, i = \operatorname*{argmin}_{k} D_{kj} \,\Big] \cdot \Big[\, j = \operatorname*{argmin}_{k} D_{ik} \,\Big], \qquad (2)$$

where $[\cdot]$ equals 1 if the term in the brackets is true and zero otherwise. The Best-Buddies Similarity (BBS) loss is the negative of the number of best-buddy pairs (in the original definition, BBS is normalized by the sizes of the point sets; we omit that here):

$$\mathcal{L}_{BBS} = -\sum_{i,j} BB_{ij}.$$
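The hard best-buddies count can be computed directly from the distance matrix. This is a brute-force NumPy sketch of the definition above (function name ours), useful for checking intuition on small clouds:

```python
import numpy as np

def bbs_loss(P, Q):
    """Negative count of mutual nearest-neighbor (best-buddy) pairs
    between point clouds P (N,3) and Q (M,3)."""
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # (N, M)
    row_nn = D.argmin(axis=1)   # for each p_i, its nearest q_j
    col_nn = D.argmin(axis=0)   # for each q_j, its nearest p_i
    BB = np.zeros_like(D)
    # BB[i, row_nn[i]] = 1 exactly when the match is mutual.
    BB[np.arange(len(P)), row_nn] = (col_nn[row_nn] == np.arange(len(P)))
    return -BB.sum()
```

Note that every point pair contributes at most once, and the globally closest pair between the two clouds is always a best buddy, so the loss is never zero for non-empty clouds.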
The best-buddies similarity measure was shown to be very robust to outliers and missing data in the context of template matching for images. We bring it to 3D point clouds. $\mathcal{L}_{BBS}$ is a robust measure of the quality of the matching between two point clouds $P$ and $Q$. However, it cannot be used for gradient-descent optimization, because it uses the non-differentiable argmin operator. To overcome this, we use the soft argmin approximation introduced by Plötz and Roth for Neural Nearest Neighbors Networks. Specifically, $BB^{soft}$ approximates $BB$ as follows:

$$BB^{soft}_{ij} = \frac{e^{-D_{ij}/T}}{\sum_{k} e^{-D_{kj}/T} + \epsilon} \cdot \frac{e^{-D_{ij}/T}}{\sum_{k} e^{-D_{ik}/T} + \epsilon},$$

where $T$ is a temperature parameter (see ahead), and $\epsilon$ is a small constant used for numerical stability. The matrix $BB^{soft}$ is the element-wise multiplication of the row-wise and column-wise soft argmin of the distance matrix $D$. Observe how this corresponds to the brackets in Equation (2). The softBBS loss is now given by:

$$\mathcal{L}_{softBBS} = -\sum_{i,j} BB^{soft}_{ij}.$$
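The softBBS loss is a few lines of array code. This NumPy sketch mirrors the formulation above (names ours); the paper's PyTorch implementation would compute the same quantity with autograd tracking enabled.

```python
import numpy as np

def soft_bbs_loss(P, Q, temperature=0.1, eps=1e-6):
    """Differentiable best-buddies count: elementwise product of the
    row-wise and column-wise soft argmin of the distance matrix."""
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    E = np.exp(-D / temperature)
    row_soft = E / (E.sum(axis=1, keepdims=True) + eps)  # soft argmin over Q
    col_soft = E / (E.sum(axis=0, keepdims=True) + eps)  # soft argmin over P
    return -(row_soft * col_soft).sum()
```

For well-separated, perfectly aligned clouds and a small temperature, each diagonal entry of the product approaches 1, so the loss approaches the negative of the number of points, matching the hard BBS count.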
$\mathcal{L}_{softBBS}$ is not only a differentiable approximation to $\mathcal{L}_{BBS}$, but also a generalization. While $BB_{ij}$ is only non-zero if $p_i$ and $q_j$ are mutual nearest neighbors, $BB^{soft}_{ij}$ can also be non-zero when, for example, $q_j$ is $p_i$'s 3rd nearest neighbor, while $p_i$ is $q_j$'s 4th nearest neighbor. The value of the temperature parameter $T$ controls this behaviour: the smaller it is, the stricter $BB^{soft}$ becomes, meaning more similar to $BB$. $T$ is also learned during the optimization (together with the other optimized parameters, $R$ and $t$). However, we find it important to initialize it with a reasonable value: a bad choice of $T$ can result in a flat loss, unsuitable for gradient descent. In all our experiments we initialize $T$ to the same fixed value, as it generally provides a smooth approximation to $BB$ with a good slope near the minimum. For numerical stability, we only allow it to decrease down to a small positive lower bound.
The next loss we suggest is the soft best-buddy distance loss, $\mathcal{L}_{softBD}$, which makes use of the distance between the two points in each pair. This is in contrast to the BBS measure, which only counts the number of best buddies, and to the softBBS loss, which is a soft approximation of that count. We define softBD as the pairwise distances weighted by the soft best-buddy scores:

$$\mathcal{L}_{softBD} = \frac{\sum_{i,j} BB^{soft}_{ij}\, D_{ij}}{\sum_{i,j} BB^{soft}_{ij}}.$$
The next loss is the softBD with normals loss, $\mathcal{L}_{N}$, which uses a point-to-plane distance measure calculated from local normals. Such a distance measure is used in some of the most popular and successful ICP variants, such as generalized ICP and symmetric ICP. In the previous methods we suggested, we used the Euclidean point-to-point distance to create the distance matrix $D$. In BBR-N we replace that with the following symmetric point-to-plane distance, based on Rusinkiewicz:

$$D^{n}_{ij} = \left|\, (p_i - q_j) \cdot (n_{p_i} + n_{q_j}) \,\right|,$$

where $n_{p_i}$ is the normal at point $p_i$, $n_{q_j}$ is the normal at point $q_j$, and $D^{n}$ denotes the version of the distance matrix calculated using this distance. The loss function is then calculated as in softBD, except using $D^{n}$ instead of $D$. This distance is symmetric in the sense that it uses the normals from both points, unlike algorithms such as point-to-plane ICP that only use normals from one of the point clouds. (This is not to be confused with the symmetric nature of the best-buddies similarity measure that we introduce here.)
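The symmetric point-to-plane distance matrix is straightforward to vectorize. This NumPy sketch (names ours) assumes normals are supplied per point for both clouds:

```python
import numpy as np

def symmetric_plane_distance_matrix(P, Q, NP, NQ):
    """D^n_ij = |(p_i - q_j) . (n_{p_i} + n_{q_j})|, using normals NP, NQ
    from both clouds. P is (N,3), Q is (M,3); result is (N, M)."""
    diff = P[:, None, :] - Q[None, :, :]    # (N, M, 3) displacement vectors
    nsum = NP[:, None, :] + NQ[None, :, :]  # (N, M, 3) summed normals
    return np.abs((diff * nsum).sum(axis=-1))
```

A useful sanity check: for two clouds sampled from the same plane, with normals perpendicular to that plane, this distance is zero everywhere, even where the Euclidean point-to-point distance is large; sliding along the surface is not penalized.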
The final loss we present is the best-buddy filtering loss, $\mathcal{L}_{F}$. At the heart of the BBS measure lies the robustness achieved by using mutual nearest neighbors. BBR-F translates this idea into best buddy filtering: using only pairs that are mutual nearest neighbors. In addition, it follows the trend in ICP-like algorithms in that it uses both point-to-point and point-to-plane distance measures: the Euclidean point-to-point distance is used for matching pairs between the two point clouds, and the symmetric point-to-plane distance between these pairs is then used to define the following loss:

$$\mathcal{L}_{F} = \sum_{(i,j)\,:\,BB_{ij}=1} \big( (p_i - q_j) \cdot (n_{p_i} + n_{q_j}) \big)^2.$$
Notice that in this variant of BBR the selection of pairs is hard, and is not itself optimized during gradient descent; instead, the distances between the points in each selected pair are minimized.
The central difference between BBR-F and symmetric ICP  is that best buddy filtering replaces explicit outlier rejection. This removes the necessity to calibrate outlier-rejection parameters, while resulting in better accuracy in settings where robustness is important, as Section 4 shows. Another difference is that BBR-F uses Adam gradient descent for optimization, while symmetric ICP uses a closed form solution to an approximate linearized version of the symmetric point-to-plane distance measure.
BBR-F is especially useful for very large point clouds, where memory and running time become a constraint, because it does not require the full distance matrix $D$ or $D^{n}$. For the pair-matching step, we use the KD-tree method, and then calculate the point-to-plane distances only for best-buddy pairs.
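The best-buddy filtering step can be sketched as follows. This is a brute-force NumPy illustration with our own names; as noted above, a KD-tree is the scalable choice for the matching step on large clouds.

```python
import numpy as np

def best_buddy_pairs(P, Q):
    """Indices (i, j) of mutual nearest-neighbor pairs under the
    Euclidean distance. Brute force; use a KD-tree for large clouds."""
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    row_nn = D.argmin(axis=1)           # nearest q_j for each p_i
    col_nn = D.argmin(axis=0)           # nearest p_i for each q_j
    i = np.arange(len(P))
    mutual = col_nn[row_nn] == i        # keep only mutual matches
    return i[mutual], row_nn[mutual]

def bbr_f_loss(P, Q, NP, NQ):
    """Symmetric point-to-plane error summed over best-buddy pairs only."""
    i, j = best_buddy_pairs(P, Q)
    resid = ((P[i] - Q[j]) * (NP[i] + NQ[j])).sum(axis=-1)
    return (resid ** 2).sum()
```

Points without a mutual match (e.g. occluded regions or distractors) simply drop out of the sum, which is what replaces explicit outlier rejection.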
We present experiments that are designed to analyze the behaviour of the BBR methods, and evaluate them on several datasets, including Stanford, TUM RGB-D, KITTI Odometry, and Apollo-Southbay. We compare our approach to several established alternatives, focusing on classic approaches (e.g., ICP) as opposed to learned methods, as the latter do not necessarily generalize well across datasets.
Figure 1 shows the best-buddy pairs during a typical run of BBR-softBBS on the Stanford Bunny model. The algorithm converges after 120 iterations. At first, there are few best-buddy pairs, but as the optimization progresses their number grows until convergence.
4.1 Performance Evaluation Setup
We conduct a set of experiments to evaluate accuracy and robustness. To do that, we apply different rotations, translations, sub-sampling, and noise to each point cloud. We compare ourselves to the following popular point cloud registration algorithms: (i) HGMR, a GMM-based method; (ii) Coherent Point Drift (CPD), a probabilistic algorithm based on GMMs; (iii) Generalized ICP (G-ICP), a very popular and accurate ICP variant that uses local normals; and (iv) Symmetric ICP (Sym-ICP), which provides state-of-the-art performance on several point cloud registration challenges.
ICP algorithms are sensitive to noise, and therefore commonly employ a set of standard practices for outlier rejection. Our BBR methods require no such processes.
We test the 4 variants of our algorithm: BBR-softBBS, BBR-softBD, BBR-N and BBR-F. BBR-softBBS tends to be less accurate than the others, and therefore we omit it from most experiments.
The local normals that are used in BBR-N and BBR-F are estimated by calculating the principal axes of a neighborhood of points around each point in the full cloud (before subsampling). For consistency, Sym-ICP was given the same normals used for BBR-N. G-ICP calculates its own normals on the fly, from the subsampled point cloud that it takes as input. For all BBR methods, the optimization is performed by running Adam for a pre-defined number of iterations.
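Principal-axis normal estimation can be sketched as follows: the normal at each point is taken as the direction of least variance of its local neighborhood. This is an illustrative NumPy version with our own names and a brute-force neighbor search; the neighborhood size `k` is a hypothetical parameter, not a value from the paper.

```python
import numpy as np

def estimate_normals(cloud, k=10):
    """Per-point normal: the eigenvector of the local covariance with the
    smallest eigenvalue, computed over the k nearest neighbors."""
    D = np.linalg.norm(cloud[:, None, :] - cloud[None, :, :], axis=-1)
    knn = np.argsort(D, axis=1)[:, :k]   # indices of k nearest (incl. self)
    normals = np.empty_like(cloud)
    for idx, nbrs in enumerate(knn):
        patch = cloud[nbrs] - cloud[nbrs].mean(axis=0)
        # eigh returns eigenvalues in ascending order, so column 0 is the
        # direction of least variance, i.e. the surface normal.
        _, vecs = np.linalg.eigh(patch.T @ patch)
        normals[idx] = vecs[:, 0]
    return normals
```

Note that the sign of each normal is arbitrary; this does not matter for the symmetric point-to-plane distance up to a consistent orientation, but consumers that need oriented normals must disambiguate the sign separately.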
We use the Probreg (https://github.com/neka-nat/probreg) library's implementations of HGMR and CPD, and the original Sym-ICP implementation, for all of which we use the default parameters. We use the Point Cloud Library's implementation of G-ICP, setting the parameters as in previous work.
4.2 Comparing Accuracy and Robustness between BBR variants
We start by comparing the different variants of BBR in a simple experiment, testing their accuracy and ability to converge with different initial rotations: 5, 10, 30, 60 and 90 degrees. For each angle, we repeat the following 20 times: select two random subsets of 500 points from the Stanford Bunny point cloud, rotate the target point cloud around a random axis, perform registration, and measure the angular error. We consider registration to have failed if the final error exceeds a fixed threshold.
Results are shown in Figure 3. BBR-softBBS is the clear leader in robustness to large initial error. Unlike the others, it is able to handle even the largest initial rotations with hardly any failures. Notice that BBR-N is especially sensitive to large initial rotations, due to its reliance on normals to recognize best buddies: when the initial rotation is large, the same object will have very different normals in each of the two scans. For all algorithms, large initial rotations do not degrade accuracy for the attempts that did succeed. When looking at the accuracy of the successful attempts, it is clear that the methods that make use of local normals (BBR-N and BBR-F) are significantly more accurate.
The experiments shown in Figure 4 demonstrate our ability to register point clouds with random rotation and translation, using the Bunny, Horse, and Dragon point clouds. We randomly select a source and a target subset from the original point cloud, each containing the same number of points. The target point cloud is rotated around a randomly selected axis and then translated along a random axis. We then run the registration algorithms on the point-cloud pair and record their translational and angular errors. We repeat the experiment multiple times and report the median error for each algorithm and each cardinality of the point clouds.
For all experiments in Figure 4 we use the same optimization settings, varying only the per-model point counts and motion magnitudes. BBR-F achieves the lowest error rate across almost all point cloud sizes. It is followed closely by BBR-N; only when the point density is high does Sym-ICP perform on par with BBR-N. This demonstrates BBR's ability to work well with very sparse point clouds. It should also be pointed out that BBR-softBD outperforms the comparable registration methods that do not use normals (as well as G-ICP).
The next experiments demonstrate the resistance of the algorithm to a variety of challenges, including occlusions, the presence of a distractor, measurement noise, and a large initial error.
Partial Overlap and Occlusion.
The experiment shown in Figure 5 evaluates the resistance to partial overlap and occlusion. We perform registration between two partial scans of the Stanford Bunny, bun000 and bun090, each captured from a different view point. Following Rusinkiewicz's experimental setup, we first align the scans according to the ground-truth motion. Then we follow the same experimental method as in Section 4.3, with the same motion parameters. BBR-F achieves the most accurate results, followed by BBR-N and Sym-ICP. However, Sym-ICP deteriorates considerably when given a very sparse point cloud, while both of our algorithms cope with it very well.
Resistance to Distractors.
This experiment evaluates the effects of distractor noise, i.e., the case where in addition to the main object of interest, the scene also contains a second object with a different motion. In this synthetic experiment, the main object, a large horse, was randomly translated and rotated as in Section 4.3. The distractor object, a small horse, underwent a different motion. Such a situation may occur when attempting to estimate the ego-motion of a vehicle using scans that include other, independently moving vehicles. The main object contains 1000 points, and we vary the number of points in the distractor object from 200 up to 900. In Figure 6 we show the median error as a function of the number of points in the distractor. BBR-F shows a strong resistance to distractor noise in this experiment, while Sym-ICP is quite susceptible to it. Among methods that do not make use of normals, BBR-softBD is the most accurate.
The TUM RGB-D dataset contains point clouds of indoor scenes captured with the Kinect sensor. It contains natural measurement noise, due to the warp and scanning noise of the Kinect sensor. It has been noted by Rusinkiewicz that this dataset poses a qualitatively different challenge than the bunny point cloud. He demonstrates his algorithm on a specific pair of partially-overlapping scans from this dataset. We use the same pair, 1305031104.030279 and 1305031108.503548 of the freiburg1_xyz sequence from TUM RGB-D (Figure 7), sample 1000 points from each, and experiment with adding a random rotation around a random axis. We repeat this 50 times and perform registration for each repeat, showing the cumulative distribution of the final errors in Figure 8. The BBR algorithms perform better than all competing methods.
We test another pair of scans from TUM RGB-D, 1305031794.813436 and 1305031794.849048 of the freiburg1_xyz sequence (Figure 9). We sample 1500 points from each, and experiment with adding a random rotation around a random axis. We repeat this 50 times and perform registration for each repeat, showing the cumulative distribution of the final errors in Figure 10. BBR-softBD (labeled BBR) performs considerably better than either Sym-ICP or CPD, showing its robustness to realistic measurement noise and occlusions.
Large initial error with partial overlap.
In this experiment we evaluate the ability to converge when the source and target point clouds are only partially overlapping, starting from a large initial error. We use two scans of the bunny point cloud, bun180 and bun270, that were captured from significantly different view points. As before, we first align them, sample 1000 points from each, and then apply a random motion to one of them. In this experiment the motion is large: a random rotation within a wide range of angles around a random axis, and a translation of half the extent of the point cloud along a random direction. Figure 11 shows the cumulative distribution of errors over 50 repeats. BBR-softBD performs considerably better than all other algorithms.
We have previously demonstrated the ability of BBR-softBD, BBR-N and BBR-F to handle cases with a moderate initial error. Here we show that BBR-softBBS can be used to register in situations where the initial error is much larger, due to its large basin of convergence. In this section we use the same Bunny, Horse and Dragon models that were used in the accuracy test, this time with a large random rotation, and run BBR-softBBS. As can be seen in Figure 12, BBR-softBBS manages to reduce the rotation error significantly in all experiments.
In this section we present experiments in the realistic setting of vehicle navigation, specifically focusing on a difficult setting where the time separation between the two clouds is relatively large, leading to significant occlusions and outliers caused by independently moving objects in the scene. We use two datasets of high-resolution lidar scans: KITTI Odometry and Apollo-Southbay, both of which consist of large point clouds of over 100K points. We follow a previously proposed experimental setup, where the test set consists of pairs of clouds scanned by the same vehicle at two times, during which the vehicle has travelled up to 5 meters, over up to 2 seconds (KITTI) or even 5 seconds (Apollo-Southbay). An initial estimate of the motion is assumed to be available, which is inaccurate by up to 1 meter in each of x, y, z, and up to 1 degree in each of the three rotation angles. We report the mean translation and angular errors, as well as the maximum (worst case).
For this test, we use the BBR-F variant of our algorithm. Normals are estimated from local neighborhoods of points in the full cloud. To achieve high accuracy, we only moderately subsample the point clouds, to 30K points, uniformly at random.
Tables 1 and 2 compare our results to previously reported results, and to those of Sym-ICP. In both experiments, BBR-F achieves state-of-the-art accuracy by three of the four accuracy measures. This demonstrates BBR's ability to achieve high accuracy in realistic scenarios that include occlusions, outlier motions and measurement noise.
Table 3 shows the time it takes to run a single iteration of each variant of our loss function, as a function of the number of points (measured with a PyTorch implementation running on a GTX 980 Ti GPU). All of the algorithm variants typically converge within a few hundred iterations, depending on the learning rate.
We proposed Best Buddy Registration (BBR), a family of point cloud registration algorithms inspired by the Best-Buddies Similarity (BBS) measure. First, we showed that registration can be performed by running gradient descent on a differentiable approximation of the negated BBS measure. This results in an algorithm that is quite robust to noise, occlusions and distractors, and able to cope with very sparse point clouds. We then presented additional algorithms that achieve higher accuracy while maintaining robustness, by incorporating point-to-point and point-to-plane distances into the loss function. Finally, we presented the BBR-F algorithm, which uses best buddy filtering to achieve state-of-the-art accuracy on challenges that include significant noise, occlusions and distractors, including registration of automotive lidar scans that are relatively widely separated in time. Our algorithms are implemented in PyTorch and optimized with Adam gradient descent, allowing them to be incorporated as a registration stage in deep neural networks for processing point clouds.
This research was supported by ERC-StG grant no. 757497 (SPADE) and by ISF grant number 1549/19.
- PointNetLK: robust & efficient point cloud registration using PointNet. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- (1975) Multidimensional binary search trees used for associative searching. Commun. ACM 18, pp. 509–517.
- (1992) A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14(2), pp. 239–256.
- (2013) Sparse iterative closest point. In Proceedings of the Eleventh Eurographics/ACM SIGGRAPH Symposium on Geometry Processing, SGP '13, pp. 113–123.
- (1992) Object modelling by registration of multiple range images. Image and Vision Computing 10(3), pp. 145–155.
- (2005) Robust Euclidean alignment of 3D point sets: the trimmed iterative closest point algorithm. Image and Vision Computing 23(3), pp. 299–309.
- (2019) DeepMapping: unsupervised map estimation from multiple point clouds. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- (2018) HGMR: hierarchical Gaussian mixtures for adaptive 3D registration. In European Conference on Computer Vision (ECCV).
- (2001) Robust registration of 2D and 3D point sets. In British Machine Vision Conference, pp. 662–670.
- (2019) FilterReg: robust and efficient probabilistic point-set registration using Gaussian filter and twist parameterization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11095–11104.
- (2012) Are we ready for autonomous driving? The KITTI vision benchmark suite. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- (2002) Multi-scale EM-ICP: a fast and robust approach for surface registration. In European Conference on Computer Vision, pp. 418–432.
- (2012) Rotation averaging. International Journal of Computer Vision 103, pp. 267–305.
- (2020) PointGMM: a neural GMM network for point clouds. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 12051–12060.
- Robust point set registration using Gaussian mixture models. IEEE Trans. Pattern Anal. Mach. Intell. 33(8), pp. 1633–1645.
- (2014) Adam: a method for stochastic optimization. International Conference on Learning Representations.
- (2019) DeepVCP: an end-to-end deep neural network for point cloud registration. In IEEE International Conference on Computer Vision (ICCV).
- (2019) L3-Net: towards learning based LiDAR localization for autonomous driving. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- An iterative image registration technique with an application to stereo vision. In Proceedings of the 7th International Joint Conference on Artificial Intelligence, IJCAI '81, pp. 674–679.
- (2010) Point set registration: coherent point drift. IEEE Trans. Pattern Anal. Mach. Intell. 32(12), pp. 2262–2275.
- (2018) Best-buddies similarity—robust template matching using mutual nearest neighbors. IEEE Trans. Pattern Anal. Mach. Intell. 40(8), pp. 1799–1813.
- (2018) AA-ICP: iterative closest point with Anderson acceleration. In IEEE International Conference on Robotics and Automation (ICRA), pp. 3407–3412.
- (2018) Neural nearest neighbors networks. In Advances in Neural Information Processing Systems (NeurIPS).
- (2015) A review of point cloud registration algorithms for mobile robotics.
- (2017) PointNet: deep learning on point sets for 3D classification and segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 652–660.
- (1997) The SoftAssign Procrustes matching algorithm. In Biennial International Conference on Information Processing in Medical Imaging, pp. 29–42.
- (2001) Efficient variants of the ICP algorithm. In Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling, pp. 145–152.
- (2019) A symmetric objective function for ICP. ACM Transactions on Graphics (Proc. SIGGRAPH) 38(4).
- (2011) 3D is here: Point Cloud Library (PCL). In IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China.
- (2019) PCRNet: point cloud registration network using PointNet encoding. arXiv abs/1908.07906.
- (2009) Generalized-ICP. In Robotics: Science and Systems.
- (2012) Fast and accurate scan registration through minimization of the distance between compact 3D NDT representations. International Journal of Robotics Research 31(12), pp. 1377–1393.
- (2012) A benchmark for the evaluation of RGB-D SLAM systems. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 573–580.
- (2004) A correlation-based approach to robust point set registration. In European Conference on Computer Vision, pp. 558–569.
- (1994) Zippered polygon meshes from range images. In Proceedings of SIGGRAPH '94, pp. 311–318.
- (2015) Pointer networks. In Advances in Neural Information Processing Systems 28, pp. 2692–2700.
- (2019) Deep closest point: learning representations for point cloud registration. In IEEE International Conference on Computer Vision (ICCV).
- (2018) 3DFeat-Net: weakly supervised local 3D features for point cloud registration. In European Conference on Computer Vision, pp. 630–646.