Deep Global Registration

04/24/2020 · by Christopher Choy, et al.

We present Deep Global Registration, a differentiable framework for pairwise registration of real-world 3D scans. Deep global registration is based on three modules: a 6-dimensional convolutional network for correspondence confidence prediction, a differentiable Weighted Procrustes algorithm for closed-form pose estimation, and a robust gradient-based SE(3) optimizer for pose refinement. Experiments demonstrate that our approach outperforms state-of-the-art methods, both learning-based and classical, on real-world data.


1 Introduction

A variety of applications, including 3D reconstruction, tracking, pose estimation, and object detection, invoke 3D registration as part of their operation [35, 5, 33]. To maximize the accuracy and speed of 3D registration, researchers have developed geometric feature descriptors [22, 11, 41, 9], pose optimization algorithms [38, 45, 24, 50], and end-to-end feature learning and registration pipelines [41, 2].

Figure 1: Pairwise registration results on the 3DMatch dataset [47]. Panels: (a) RANSAC [35], (b) FGR [50], (c) DCP [41], (d) Ours. Our method successfully aligns a challenging 3D pair (left), while RANSAC [35], FGR [50], and DCP [41] fail. On an easier pair (right), our method achieves finer alignment.

In particular, recent end-to-end registration networks have proven effective relative to classical pipelines. However, these end-to-end approaches have drawbacks that limit their accuracy and applicability. For example, PointNetLK [2] uses globally pooled features to encode the entire geometry of a point cloud, which decreases spatial acuity and registration accuracy. Deep Closest Point [41] makes strong assumptions about the distribution of points and correspondences, which do not hold for partially overlapping 3D scans.

In this work, we propose three modules for robust and accurate registration that resolve these drawbacks: a 6-dimensional convolutional network for correspondence confidence estimation, a differentiable Weighted Procrustes method for scalable registration, and a robust optimizer for fine-tuning the final alignment.

The first component is a 6-dimensional convolutional network that analyzes the geometry of 3D correspondences and estimates their accuracy. Our approach is inspired by a number of learning-based methods for estimating the validity of correspondences in 2D [46, 34] and 3D [31]. These methods stack the coordinates of each correspondence pair $(\mathbf{x}_i, \mathbf{y}_i)$, forming a vector $[\mathbf{x}_i^T, \mathbf{y}_i^T]^T$ for each correspondence. Prior methods treat these higher-dimensional vectors as an unordered set and apply global set-processing models for analysis. Such models largely disregard local geometric structure. Yet the correspondences are embedded in a metric space ($\mathbb{R}^6$ in our setting) that induces distances and neighborhood relationships. In particular, 3D correspondences form a geometric structure in 6-dimensional space [8], and we use a high-dimensional convolutional network to analyze this 6D structure and estimate the likelihood that a given correspondence is correct (i.e., an inlier).

The second component we develop is a differentiable Weighted Procrustes solver. The Procrustes method [15] provides a closed-form solution for rigid registration in $SE(3)$. A differentiable version of the Procrustes method by Wang et al. [41] has been used for end-to-end registration. However, that method passes gradients through correspondence coordinates, which requires $\mathcal{O}(N^2)$ time and memory for $N$ keypoints, limiting the number of keypoints that can be processed by the network. We instead use the inlier probabilities predicted by our first module (the 6D convolutional network) to guide the Procrustes method, thus forming a differentiable Weighted Procrustes method. This method passes gradients through the weights associated with correspondences rather than through their coordinates. The computational complexity of the Weighted Procrustes method is linear in the number of correspondences, allowing the registration pipeline to use dense correspondence sets rather than sparse keypoints. This substantially increases registration accuracy.

Our third component is a robust optimization module that fine-tunes the alignment produced by the Weighted Procrustes solver. This optimization module minimizes a differentiable loss via gradient descent on the continuous representation space [52]. The optimization is fast since it does not require neighbor search in the inner loop [48].

Experimentally, we validate the presented modules on a real-world pairwise registration benchmark [47] and large-scale scene reconstruction datasets [18, 6, 32]. We show that our modules are robust, accurate, and fast in comparison to both classical global registration algorithms [50, 35, 45] and recent end-to-end approaches [31, 41, 2]. All training and experiment scripts are available at https://github.com/chrischoy/DeepGlobalRegistration.

2 Related Work

We divide the related work into three categories following the stages of standard registration pipelines that deal with real-world 3D scans: feature-based correspondence matching, outlier filtering, and pose optimization.

Feature-based correspondence matching.

The first step in many 3D registration pipelines is feature extraction. Local and global geometric structure in 3D is analyzed to produce high-dimensional feature descriptors, which can then be used to establish correspondences.

Traditional hand-crafted features commonly summarize pairwise or higher-order relationships in histograms [20, 37, 40, 36, 35]. Recent work has shifted to learning features via deep networks [47, 22]. A number of recent methods are based on global pooling models [12, 11, 49], while others use convolutional networks [14, 9].

Our work is agnostic to the feature extraction mechanism. Our modules primarily address subsequent stages of the registration pipeline and are compatible with a wide variety of feature descriptors.

Outlier filtering. Correspondences produced by matching features are commonly heavily contaminated by outliers. These outliers need to be filtered out for robust alignment. A widely used family of techniques for robust model fitting is based on RANdom SAmple Consensus (RANSAC) [38, 1, 35, 29, 19], which iteratively samples small sets of correspondences in the hope of drawing a subset that is free from outliers. Other algorithms are based on branch-and-bound [45], semi-definite programming [28, 25], and maximal clique selection [44]. These methods are accurate, but commonly require longer iterative sampling or more expensive computation as the signal-to-noise ratio decreases. One exception is TEASER [44], which remains effective even with high outlier rates. Other methods use robust loss functions to reject outliers during optimization [50, 4].

Our work uses a convolutional network to identify inliers and outliers. The network needs only one feed-forward pass at test time and does not require iterative optimization.

Pose optimization. Pose optimization is the final stage that minimizes an alignment objective on filtered correspondences. Iterative Closest Point (ICP) [3] and Fast Global Registration (FGR) [50] use second-order optimization to optimize poses. Makadia et al. [26] propose an iterative procedure to minimize correlation scores. Maken et al. [27] propose to accelerate this process by stochastic gradient descent.

Recent end-to-end frameworks combine feature learning and pose optimization. Aoki et al. [2] combine PointNet global features with an iterative pose optimization method [24]. Wang et al. [41, 42] train graph neural network features by backpropagating through pose optimization.

We further advance this line of work. In particular, our Weighted Procrustes method reduces the complexity of optimization from quadratic to linear and enables the use of dense correspondences for highly accurate registration of real-world scans.

3 Deep Global Registration

3D reconstruction systems typically take a sequence of partial 3D scans as input and recover a complete 3D model of the scene. These partial scans are scene fragments, as shown in Fig. 1. In order to reconstruct the scene, reconstruction systems often begin by aligning pairs of fragments [6]. This stage is known as pairwise registration. The accuracy and robustness of pairwise registration are critical and often determine the accuracy of the final reconstruction.

Our pairwise registration pipeline begins by extracting pointwise features. These are matched to form a set of putative correspondences. We then use a high-dimensional convolutional network (ConvNet) to estimate the veracity of each correspondence. Lastly, we use a Weighted Procrustes method to align 3D scans given correspondences with associated likelihood weights, and refine the result by optimizing a robust objective.

The following notation will be used throughout the paper. We consider two point clouds, $X = [\mathbf{x}_1, \ldots, \mathbf{x}_{N_x}]$ and $Y = [\mathbf{y}_1, \ldots, \mathbf{y}_{N_y}]$, with $N_x$ and $N_y$ points respectively, where $\mathbf{x}_i, \mathbf{y}_j \in \mathbb{R}^3$. A correspondence between $\mathbf{x}_i$ and $\mathbf{y}_j$ is denoted as $(i, j)$ or $(\mathbf{x}_i, \mathbf{y}_j)$.

3.1 Feature Extraction

To prepare for registration, we extract pointwise features that summarize geometric context in the form of vectors in metric feature space. Our pipeline is compatible with many feature descriptors. We use Fully Convolutional Geometric Features (FCGF) [9], which have recently been shown to be both discriminative and fast. FCGF are also compact, with dimensionality as low as 16 to 32, which supports rapid neighbor search in feature space.

3.2 Correspondence Confidence Prediction

Given the features $\mathcal{F}_X$ and $\mathcal{F}_Y$ of two 3D scans, we use nearest-neighbor search in feature space to generate a set of putative correspondences or matches $\mathcal{M} = \{(i, j)\}$. This procedure is deterministic and can be hand-crafted to filter out noisy correspondences with ratio or reciprocity tests [50]. However, we propose to learn this heuristic filtering process with a convolutional network that analyzes the underlying geometric structure of the correspondence set.
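For concreteness, the following is a minimal sketch of the matching step; the function name and the brute-force `torch.cdist` search are our own illustrative choices (an actual pipeline may use an accelerated nearest-neighbor search).

```python
import torch

def putative_correspondences(feat_x, feat_y):
    # feat_x: (Nx, D), feat_y: (Ny, D) pointwise features (e.g., FCGF).
    # Brute-force nearest neighbor in feature space; returns (Nx, 2) index pairs (i, J(i)).
    dist = torch.cdist(feat_x, feat_y)      # (Nx, Ny) pairwise L2 distances
    j = dist.argmin(dim=1)                  # nearest neighbor of each x_i in Y
    i = torch.arange(feat_x.shape[0])
    return torch.stack([i, j], dim=1)
```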

We first provide a 1-dimensional analogy to explain the geometry of correspondences. Let $X$ be a set of 1-dimensional points and $Y$ a translated copy of it: $y = x + t$ for a fixed offset $t$. If an algorithm returns a set of candidate correspondences $(x_i, y_j)$, the correct correspondences (inliers) will lie on the line $y = x + t$ when each pair is plotted as a 2D point, whereas incorrect correspondences (outliers) will form random noise off this line. If we extend this to 3D scans and point clouds, we can likewise represent a 3D correspondence $(\mathbf{x}_i, \mathbf{y}_j)$ as a point $[\mathbf{x}_i^T, \mathbf{y}_j^T]^T$ in 6-dimensional space. The inlier correspondences will be distributed on a lower-dimensional surface in this 6D space, determined by the geometry of the 3D input, while the outliers will be scattered away from this surface. We define the set of inliers as the correspondences that align accurately up to a threshold $\tau$ under the ground-truth transformation $T^*$, i.e., $\mathcal{M}_{in} = \{(i, j) \in \mathcal{M} : \|T^*(\mathbf{x}_i) - \mathbf{y}_j\| < \tau\}$. To identify the inliers, we use a convolutional network. Such networks have proven effective in related dense prediction tasks, such as 3D point cloud segmentation [16, 7]. The convolutional network in our setting operates in 6-dimensional space [8]. For each correspondence, which is a point in 6D space, the network predicts the likelihood that the correspondence is true: an inlier.
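A minimal sketch of how such ground-truth inlier labels can be computed, assuming a homogeneous 4x4 ground-truth transform and an illustrative threshold; the helper name is ours.

```python
import torch

def inlier_labels(x, y, matches, T_gt, tau=0.05):
    # x: (Nx, 3), y: (Ny, 3), matches: (M, 2) index pairs (i, j), T_gt: (4, 4).
    # A correspondence is an inlier if T_gt maps x_i to within tau of y_j.
    xi = x[matches[:, 0]]
    yj = y[matches[:, 1]]
    xi_t = xi @ T_gt[:3, :3].t() + T_gt[:3, 3]   # apply the ground-truth transform
    return (xi_t - yj).norm(dim=1) < tau
```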

Note that the convolution operator is translation invariant, so our 6D ConvNet generates the same output regardless of the absolute position of the inputs in 3D. We use a network architecture similar to Choy et al. [9] to create a 6D convolutional network with skip connections between layers of matching spatial resolution. The architecture of the 6D ConvNet is shown in Fig. 2. During training, we optimize the network parameters with the binary cross-entropy loss between the predicted likelihood $p_{(i,j)}$ that a correspondence is an inlier and the ground-truth correspondences:

$$\mathcal{L}_{bce} = \frac{1}{|\mathcal{M}|} \sum_{(i,j) \in \mathcal{M}} \mathrm{BCE}\big(p_{(i,j)},\, \mathbb{1}[(i,j) \in \mathcal{M}_{in}]\big), \tag{1}$$

where $\mathbb{1}[\cdot]$ is the indicator function and $|\mathcal{M}|$ is the cardinality of the set of putative correspondences.
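A minimal sketch of this training loss, assuming the hypothetical inlier_labels helper above provides the ground-truth labels:

```python
import torch
import torch.nn.functional as F

def inlier_bce_loss(logits, x, y, matches, T_gt, tau=0.05):
    # logits: (M,) raw network outputs, one per putative correspondence.
    # inlier_labels is the illustrative helper sketched above.
    labels = inlier_labels(x, y, matches, T_gt, tau).float()
    return F.binary_cross_entropy_with_logits(logits, labels)
```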

Figure 2: 6-dimensional convolutional network architecture for inlier likelihood prediction (Sec. 3.2). The network has a U-Net structure with residual blocks between strided convolutions. Best viewed on screen.

3.3 Weighted Procrustes for SE(3)

The inlier likelihood estimated by the 6D ConvNet provides a weight for each correspondence. The original Procrustes method [15] minimizes the mean squared error between corresponding points, $\frac{1}{N}\sum_i \|\mathbf{y}_i - (R\mathbf{x}_i + \mathbf{t})\|^2$, and thus gives equal weight to all correspondences. In contrast, we minimize a weighted mean squared error $\sum_i w_i \|\mathbf{y}_i - (R\mathbf{x}_i + \mathbf{t})\|^2$. This change allows us to pass gradients through the weights, rather than through the positions [41], and enables the optimization to scale to dense correspondence sets.

Formally, Weighted Procrustes analysis minimizes:

$$e^2(R, \mathbf{t}) = \sum_{i=1}^{|\mathcal{M}|} \tilde{w}_i \,\big\|\mathbf{y}_{J(i)} - (R\mathbf{x}_i + \mathbf{t})\big\|^2 \tag{2}$$
$$= \big\|(Y - RX - \mathbf{t}\mathbf{1}^T)\, W^{1/2}\big\|_F^2 \tag{3}$$
$$= \operatorname{Tr}\!\big((Y - RX - \mathbf{t}\mathbf{1}^T)\, W\, (Y - RX - \mathbf{t}\mathbf{1}^T)^T\big) \tag{4}$$

where $X = [\mathbf{x}_1, \ldots, \mathbf{x}_{|\mathcal{M}|}]$, $Y = [\mathbf{y}_{J(1)}, \ldots, \mathbf{y}_{J(|\mathcal{M}|)}]$, and $\mathbf{1}$ is the all-ones vector. $J$ is a list of indices that defines the correspondences $(i, J(i))$. $\mathbf{w}$ is the weight vector and $\tilde{\mathbf{w}} = \phi(\mathbf{w}) / (\mathbf{1}^T \phi(\mathbf{w}))$ denotes the normalized weight after a nonlinear transformation $\phi$ that applies heuristic prefiltering. $W = \operatorname{diag}(\tilde{\mathbf{w}})$ forms the diagonal weight matrix.

Theorem 1: The $R$ and $\mathbf{t}$ that minimize the squared error $e^2(R, \mathbf{t})$ are $\hat{R} = U S V^T$ and $\hat{\mathbf{t}} = \bar{\mathbf{y}} - \hat{R}\bar{\mathbf{x}}$, where $\bar{\mathbf{x}} = X\tilde{\mathbf{w}}$ and $\bar{\mathbf{y}} = Y\tilde{\mathbf{w}}$ are the weighted centroids, $U \Sigma V^T = \mathrm{SVD}(\Sigma_{xy})$ with $\Sigma_{xy} = (Y - \bar{\mathbf{y}}\mathbf{1}^T)\, W\, (X - \bar{\mathbf{x}}\mathbf{1}^T)^T$, and $S = \operatorname{diag}(1, 1, \det(U)\det(V))$.

Sketch of proof. First, we differentiate $e^2$ w.r.t. $\mathbf{t}$ and equate the partial derivative to 0. This gives us $\hat{\mathbf{t}} = \bar{\mathbf{y}} - R\bar{\mathbf{x}}$. Next, we substitute $\hat{\mathbf{t}}$ into Eq. 4 and do the same for $R$. Expanding the result yields two squared terms that do not depend on $R$ plus the negative term $-2\operatorname{Tr}\!\big(R\, \Sigma_{xy}^T\big)$. We maximize this last negative term, whose maximum is the sum of all singular values of $\Sigma_{xy}$, which leads to $\hat{R} = U S V^T$. The full derivation is in the supplement.
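A minimal differentiable sketch of the Weighted Procrustes solver in PyTorch, written directly from the closed-form solution above; the function name, variable names, and the eps guard are our own illustrative choices and this is not the authors' released implementation. Gradients flow to the weights through the weighted centroids and the weighted covariance, which is what enables end-to-end training.

```python
import torch

def weighted_procrustes(x, y, w, eps=1e-8):
    # x, y: (N, 3) corresponding points; w: (N,) non-negative weights.
    # Closed-form minimizer of sum_i w_i ||y_i - (R x_i + t)||^2 (cf. Theorem 1).
    w = w / (w.sum() + eps)                       # normalized weights
    mu_x = (w[:, None] * x).sum(0)                # weighted centroids
    mu_y = (w[:, None] * y).sum(0)
    xc, yc = x - mu_x, y - mu_y
    cov = (yc * w[:, None]).t() @ xc              # Sigma_xy = Y_c W X_c^T
    U, S, Vh = torch.linalg.svd(cov)
    d = torch.sign(torch.linalg.det(U) * torch.linalg.det(Vh))
    D = torch.diag(torch.stack([torch.ones_like(d), torch.ones_like(d), d]))
    R = U @ D @ Vh                                # reflection-corrected rotation
    t = mu_y - R @ mu_x
    return R, t
```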

We can easily extend the above theorem to incorporate a scaling factor $c$, or anisotropic scaling, for tasks such as scan-to-CAD registration, but in this paper we assume that partial scans of the same scene have the same scale.

The Weighted Procrustes method generates the rotation $\hat{R}$ and translation $\hat{\mathbf{t}}$ as outputs that depend on the weight vector $\mathbf{w}$. In our current implementation, $\hat{R}$ and $\hat{\mathbf{t}}$ are directly sent to the robust registration module in Section 4 as an initial pose. However, we briefly demonstrate that they can also be embedded in an end-to-end registration pipeline, since Weighted Procrustes is differentiable. From a top-level loss function $\mathcal{L}(\hat{R}, \hat{\mathbf{t}})$, we can pass the gradient through the closed-form solver and update the parameters of the modules that produced the weights:

$$\frac{\partial \mathcal{L}}{\partial \mathbf{w}} = \frac{\partial \mathcal{L}}{\partial \hat{R}} \frac{\partial \hat{R}}{\partial \mathbf{w}} + \frac{\partial \mathcal{L}}{\partial \hat{\mathbf{t}}} \frac{\partial \hat{\mathbf{t}}}{\partial \mathbf{w}}, \tag{5}$$

where $\mathcal{L}$ can be defined as a combination of the differentiable translation error (TE) and rotation error (RE) between the prediction $(\hat{R}, \hat{\mathbf{t}})$ and the ground truth $(R^*, \mathbf{t}^*)$:

$$\mathrm{TE}(\hat{\mathbf{t}}) = \|\hat{\mathbf{t}} - \mathbf{t}^*\|_2^2 \tag{6}$$
$$\mathrm{RE}(\hat{R}) = \arccos\frac{\operatorname{Tr}(\hat{R}^T R^*) - 1}{2} \tag{7}$$

or the Frobenius norm of relative transformation matrices as defined in [2, 41]. The final loss is a weighted sum of the correspondence loss in Eq. 1, the translation error, and the rotation error.
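A minimal sketch of Eqs. 6-7 as differentiable PyTorch functions; the clamp guard is our own addition to keep arccos numerically stable.

```python
import torch

def translation_error(t_hat, t_gt):
    # Eq. 6: squared L2 distance between predicted and ground-truth translations.
    return ((t_hat - t_gt) ** 2).sum(-1)

def rotation_error(R_hat, R_gt, eps=1e-7):
    # Eq. 7: angle of the relative rotation R_hat^T R_gt.
    cos = ((R_hat.transpose(-1, -2) @ R_gt).diagonal(dim1=-2, dim2=-1).sum(-1) - 1) / 2
    return torch.arccos(cos.clamp(-1 + eps, 1 - eps))
```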

4 Robust Registration

In this section, we propose a fine-tuning module that minimizes a robust loss function of choice to improve registration accuracy. We use a gradient-based method to refine poses, adopting a continuous representation of rotations [52] to remove discontinuities and construct a smooth optimization space. This module initializes the pose from the prediction of the Weighted Procrustes method. During iterative optimization, unlike Maken et al. [27], who find the nearest neighbor of each point at every gradient step, we rely on the correspondence likelihoods from the 6D ConvNet, which are estimated only once per pair.

In addition, our framework naturally offers a failure detection mechanism. In practice, Weighted Procrustes may generate numerically unstable solutions when the number of valid correspondences is insufficient due to small overlap or noisy correspondences between the input scans. By computing the ratio of the sum of the filtered weights to the total number of correspondences, i.e., $\mathbf{1}^T \phi(\mathbf{w}) / |\mathcal{M}|$, we can easily approximate the fraction of valid correspondences and predict whether an alignment may be unstable. When this fraction is low, we resort to a more time-consuming but accurate registration algorithm such as RANSAC [38, 1, 35] or a branch-and-bound method [45] to find a numerically stable solution. In other words, we can detect when our system might fail before it returns a result and fall back to a more accurate but time-consuming algorithm. This is in contrast to previous end-to-end methods that use globally pooled latent features [2] or a singly stochastic matrix [41] – such latent representations are more difficult to interpret.
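A minimal sketch of this safeguard check; the threshold values used here are illustrative rather than the exact values used in the experiments.

```python
import torch

def needs_safeguard(w, tau=0.5, min_ratio=0.05):
    # w: (M,) predicted inlier likelihoods; phi is modeled as hard thresholding.
    valid_ratio = (w * (w > tau)).sum() / w.numel()
    return bool(valid_ratio < min_ratio)          # fall back to RANSAC etc. if True
```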

4.1 SE(3) Representation and Initialization

We use the 6D representation of 3D rotation proposed by Zhou et al. [52], rather than Euler angles or quaternions. The representation uses 6 parameters $\mathbf{a}_1, \mathbf{a}_2 \in \mathbb{R}^3$ and can be transformed into a $3 \times 3$ orthogonal matrix by

$$R = \big[\,\mathbf{b}_1 \;\; \mathbf{b}_2 \;\; \mathbf{b}_3\,\big], \quad \mathbf{b}_1 = N(\mathbf{a}_1), \quad \mathbf{b}_2 = N\big(\mathbf{a}_2 - (\mathbf{b}_1 \cdot \mathbf{a}_2)\,\mathbf{b}_1\big), \quad \mathbf{b}_3 = \mathbf{b}_1 \times \mathbf{b}_2, \tag{8}$$

where $\mathbf{b}_1$, $\mathbf{b}_2$, and $\mathbf{b}_3$ are the columns of $R$ and $N(\cdot)$ denotes L2 normalization. Thus, the final representation that we use is $(\mathbf{a}_1, \mathbf{a}_2, \mathbf{t})$, which is equivalent to $(R, \mathbf{t})$ via Eq. 8.

To initialize $(\mathbf{a}_1, \mathbf{a}_2)$, we simply use the first two columns of the rotation matrix $\hat{R}$, i.e., $\mathbf{a}_1 = \hat{R}_{:,1}$, $\mathbf{a}_2 = \hat{R}_{:,2}$. For convenience, we treat this as the inverse of Eq. 8, though this inverse function is not unique as there are infinitely many choices of $(\mathbf{a}_1, \mathbf{a}_2)$ that map to the same $R$.
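A minimal sketch of the Gram-Schmidt construction of Eq. 8 in PyTorch; the function name is ours.

```python
import torch
import torch.nn.functional as F

def rot6d_to_matrix(a):
    # a: (..., 6) -> R: (..., 3, 3) via the Gram-Schmidt construction of Eq. 8.
    a1, a2 = a[..., :3], a[..., 3:]
    b1 = F.normalize(a1, dim=-1)
    b2 = F.normalize(a2 - (b1 * a2).sum(-1, keepdim=True) * b1, dim=-1)
    b3 = torch.cross(b1, b2, dim=-1)
    return torch.stack([b1, b2, b3], dim=-1)      # columns b1, b2, b3
```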

4.2 Energy Minimization

We use a robust loss function to fine-tune the registration between predicted inlier correspondences. The general form of the energy function is

$$E(R, \mathbf{t}) = \sum_{i=1}^{|\mathcal{M}|} \phi(w_i)\, L\big(\mathbf{y}_{J(i)},\, R\mathbf{x}_i + \mathbf{t}\big), \tag{9}$$

where $\mathbf{x}_i$ and $\mathbf{y}_{J(i)}$ are defined as in Eq. 3 and $\phi$ is a prefiltering function. In the experiments, we use a $\phi$ that clips weights below a threshold elementwise, since the neural network outputs bounded likelihood scores. $L(\cdot, \cdot)$ is a pointwise loss function between $\mathbf{y}_{J(i)}$ and $R\mathbf{x}_i + \mathbf{t}$; we use the Huber loss in our implementation. The energy function is parameterized by $R$ and $\mathbf{t}$, which in turn are represented as $(\mathbf{a}_1, \mathbf{a}_2, \mathbf{t})$. We can apply first-order optimization algorithms such as SGD or Adam to minimize the energy function, but higher-order optimizers are also applicable since the number of parameters is small. The complete algorithm is described in Alg. 1.

Input: point clouds $X$, $Y$
Output: rotation $\hat{R}$ and translation $\hat{\mathbf{t}}$
1  $\mathcal{F}_X, \mathcal{F}_Y \leftarrow$ FeatureExtraction$(X)$, FeatureExtraction$(Y)$   // § 3.1
2  $\mathcal{M} \leftarrow$ NearestNeighborMatch$(\mathcal{F}_X, \mathcal{F}_Y)$   // § 3.2
3  $\mathbf{w} \leftarrow$ 6D-ConvNet$(\mathcal{M})$
4  if $\mathbf{1}^T \phi(\mathbf{w}) / |\mathcal{M}| < \tau_s$ then
5        return SafeGuardRegistration$(X, Y, \mathcal{M})$   // § 4
6  else
7        $\hat{R}, \hat{\mathbf{t}} \leftarrow$ WeightedProcrustes$(X, Y, \mathcal{M}, \mathbf{w})$   // § 3.3
8        $\mathbf{a}_1, \mathbf{a}_2, \mathbf{t} \leftarrow \hat{R}_{:,1}, \hat{R}_{:,2}, \hat{\mathbf{t}}$   // § 4.1
9        while not converging do
10             $E \leftarrow \sum_i \phi(w_i)\, L\big(\mathbf{y}_{J(i)},\, R(\mathbf{a}_1, \mathbf{a}_2)\,\mathbf{x}_i + \mathbf{t}\big)$
11             update $\mathbf{a}_1, \mathbf{a}_2, \mathbf{t}$ by a gradient step on $E$
12       return $R(\mathbf{a}_1, \mathbf{a}_2), \mathbf{t}$
Algorithm 1 Deep Global Registration
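A minimal sketch of the gradient-based refinement loop (lines 9-11 of Alg. 1), assuming the rot6d_to_matrix sketch from Sec. 4.1 above; all names and hyperparameter values here are illustrative, not those of the released implementation.

```python
import torch

def refine_pose(x, y, w, R_init, t_init, tau=0.5, lr=1e-2, iters=100):
    # Minimize Eq. 9 with a Huber loss, starting from the Weighted Procrustes pose.
    keep = w > tau                                          # hard prefiltering phi
    xs, ys, ws = x[keep], y[keep], w[keep]
    a = R_init[:, :2].t().reshape(-1).clone().requires_grad_(True)  # (a1, a2) init
    t = t_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([a, t], lr=lr)
    huber = torch.nn.HuberLoss(reduction='none')
    for _ in range(iters):
        opt.zero_grad()
        R = rot6d_to_matrix(a)                              # Eq. 8
        resid = ys - (xs @ R.t() + t)
        energy = (ws * huber(resid, torch.zeros_like(resid)).sum(-1)).sum()
        energy.backward()
        opt.step()
    return rot6d_to_matrix(a).detach(), t.detach()
```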

5 Experiments

We analyze the proposed model in two registration scenarios: pairwise registration, where we estimate an SE(3) transformation between two 3D scans or fragments, and multi-way registration, which generates a final reconstruction and camera poses for all fragments that are globally consistent. Pairwise registration serves as a critical module in multi-way registration.

For pairwise registration, we use the 3DMatch benchmark [47], which consists of 3D point cloud pairs from various real-world scenes with ground-truth transformations estimated by RGB-D reconstruction pipelines [17, 10]. We follow the train/test split and the standard procedure to generate pairs with at least 30% overlap for training and testing [12, 11, 9]. For multi-way registration, we use the simulated Augmented ICL-NUIM dataset [6, 18] for quantitative trajectory results, and the Indoor LiDAR RGB-D dataset [32] and Stanford RGB-D dataset [6] for qualitative registration visualizations. Note that in these experiments we use networks trained on the 3DMatch training set and do not fine-tune on the other datasets; this illustrates the generalization ability of our models. Lastly, we use KITTI LIDAR scans [13] for outdoor pairwise registration. As the official splits do not have labels for pairwise registration, we follow Choy et al. [9] to create pairwise registration train/val/test splits.

For all indoor experiments, we use 5cm voxel downsampling [35, 51], which randomly subsamples a single point within each 5cm voxel to generate point clouds with uniform density. For safeguard registration, we use RANSAC and a safeguard threshold of $\tau_s = 0.05$, i.e., at least 5% of the correspondences should be predicted valid. We train learning-based state-of-the-art models and our network on the training split of the 3DMatch benchmark. During training, we augment data by applying random rotations varying from -180 to 180 degrees around a random axis. Ground-truth pointwise correspondences are found using nearest neighbor search in 3D space. We train the 6-dimensional ConvNet on a single Titan XP with batch size 4, using SGD with an exponentially decaying learning rate (decay factor 0.99).
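The rotation augmentation can be implemented with Rodrigues' formula; a minimal sketch is below (the helper name and the uniform axis/angle sampling are our own illustrative choices).

```python
import torch
import torch.nn.functional as F

def random_rotation(max_deg=180.0):
    # Random axis-angle rotation via Rodrigues' formula, for data augmentation.
    axis = F.normalize(torch.randn(3), dim=0)
    angle = torch.deg2rad((torch.rand(()) * 2 - 1) * max_deg)
    K = torch.zeros(3, 3)                         # skew-symmetric cross-product matrix
    K[0, 1], K[0, 2] = -axis[2], axis[1]
    K[1, 0], K[1, 2] = axis[2], -axis[0]
    K[2, 0], K[2, 1] = -axis[1], axis[0]
    return torch.eye(3) + torch.sin(angle) * K + (1 - torch.cos(angle)) * (K @ K)
```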

5.1 Pairwise Registration

Figure 3: Global registration results of our method on all 8 different test scenes in 3DMatch [47]. Best viewed in color.
Figure 4: Overall pairwise registration recall (y-axis) on the 3DMatch benchmark with varying rotation (left image) and translation (right image) error thresholds (x-axis). Our approach outperforms baseline methods for all thresholds while being faster than the most accurate baseline.

In this section, we report the registration results on the test set of the 3DMatch benchmark [47], which contains 8 different scenes as depicted in Fig. 3. We measure translation error (TE) defined in Eq. 6, rotation error (RE) defined in Eq. 7, and recall. Recall is the ratio of successful pairwise registrations and we define a registration to be successful if its rotation error and translation error are smaller than predefined thresholds. Average TE and RE are computed only on these successfully registered pairs since failed registrations return poses that can be drastically different from the ground truth, making the error metrics unreliable.
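A minimal sketch of these metrics, using the 30cm / 15 degree thresholds reported later in this section; the helper name is ours.

```python
import torch

def registration_metrics(re_deg, te_cm, re_thresh=15.0, te_thresh=30.0):
    # re_deg, te_cm: (P,) per-pair rotation / translation errors.
    # Recall counts pairs under both thresholds; TE/RE are averaged over those pairs only.
    success = (re_deg < re_thresh) & (te_cm < te_thresh)
    recall = success.float().mean()
    return recall, te_cm[success].mean(), re_deg[success].mean()
```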

Figure 5: Analysis of 3DMatch registration results per scene. Row 1: recall rate (higher is better). Row 2-3: TE and RE measured on successfully registered pairs (lower is better). Our method is consistently better on all scenes, which were not seen during training. Note: a missing bar corresponds to zero successful alignments in a scene.
Method Recall TE (cm) RE (deg) Time (s)
Ours w/o safeguard 85.2% 7.73 2.58 0.70
Ours 91.3% 7.34 2.43 1.21
FGR [50] 42.7% 10.6 4.08 0.31
RANSAC-2M [35] 66.1% 8.85 3.00 1.39
RANSAC-4M 70.7% 9.16 2.95 2.32
RANSAC-8M 74.9% 8.96 2.92 4.55
Go-ICP [45] 22.9% 14.7 5.38 771.0
Super4PCS [29] 21.6% 14.1 5.25 4.55
ICP (P2Point) [51] 6.04% 18.1 8.25 0.25
ICP (P2Plane) [51] 6.59% 15.2 6.61 0.27
DCP [41] 3.22% 21.4 8.42 0.07
PointNetLK [2] 1.61% 21.3 8.04 0.12
Table 1: Row 1-6: registration results of our method and classical global registration methods on point clouds voxelized with 5cm voxel size. Our method outperforms RANSAC and FGR while being as fast as FGR. Row 7-10: results of ICP variants. Row 11 - 12: results of learning-based methods. The learning-based methods generally fail on real-world scans. Time includes feature extraction.
Scene ElasticFusion [43] InfiniTAM [21] BAD-SLAM [39] Multi-way + FGR [50] Multi-way + RANSAC [51] Multi-way + Ours
Living room 1 66.61 46.07 fail 78.97 110.9 21.06
Living room 2 24.33 73.64 40.41 24.91 19.33 21.88
Office 1 13.04 113.8 18.53 14.96 14.42 15.76
Office 2 35.02 105.2 26.34 21.05 17.31 11.56
Avg. Rank 3 5 5 3.5 2.5 2
Table 2: ATE (cm) error on the Augmented ICL-NUIM dataset with simulated depth noise. For InfiniTAM, the loop closure module is disabled since it fails in all scenes. For BAD-SLAM, the loop closure module only succeeds in ‘Living room 2’.
(a) Real-world: Apartment
(b) Real-world: Boardroom
(c) Synthetic: Office
(d) Real-world: Copyroom
(e) Real-world: Loft
(f) Synthetic: Livingroom
Figure 6: Fragment registrations on [6, 32]. From left to right: FGR [50], RANSAC [35], Ours. Row 1-3: our method succeeds on scenes with small overlaps or ambiguous geometry structures while other methods fail. Row 4-6: by combining Weighted Procrustes and gradient-based refinement, our method outputs more accurate registrations in one pass, leading to better aligned details.

We compare our method with various classical methods [50, 35, 45] and state-of-the-art learning-based methods [41, 42, 2, 31]. All experiments are evaluated on an Intel i7-7700 CPU and a GTX 1080Ti graphics card, except for Go-ICP [45], which is tested on an Intel i7-5820K CPU. In Table 1, we measure recall with a TE threshold of 30cm, which is typical for indoor scene relocalization [30], and an RE threshold of 15 degrees, which we found practical for partially overlapping scans. In Fig. 4, we plot the sensitivity of recall to both thresholds by varying one threshold and setting the other to infinity. Fig. 5 includes detailed statistics on the separate test scenes. Our system outperforms all baselines on recall by a large margin and consistently achieves the lowest translation and rotation errors on most scenes.

Classical methods. To compare with classical methods, we evaluate point-to-point ICP, point-to-plane ICP, RANSAC [35], and FGR [50], all implemented in Open3D [51]. In addition, we test the open-source Python bindings of Go-ICP [45] and Super4PCS [29]. For RANSAC and FGR, we extract FPFH features from voxel-downsampled point clouds. The results are shown in Table 1.

ICP variants mostly fail as the dataset contains challenging 3D scan sequences with small overlap and large camera viewpoint change. Super4PCS, a sampling-based algorithm, performs similarly to Go-ICP, an ICP variant with branch-and-bound search.

Feature-based methods, FGR and RANSAC, perform better. When aligning 5cm-voxel-downsampled point clouds, RANSAC achieves recall as high as 70%, while FGR reaches 40%. Table 1 also shows that increasing the number of RANSAC iterations by a factor of 2 only improves performance marginally. Note that our method is about twice as fast as RANSAC with 2M iterations while achieving higher recall and registration accuracy.

Learning-based methods. We use 3DRegNet [31], Deep Closest Point (DCP) [41], PRNet [42], and PointNetLK [2] as our baselines. We train all the baselines on 3DMatch with the same setup and data augmentation as ours for all experiments.

For 3DRegNet, we follow the setup outlined in [31], except that we do not manually filter outliers with ground truth, and train and test with the standard realistic setup. We find that the registration loss of 3DRegNet does not converge during training and the rotation and translation errors are consistently above 30 degrees and 1m during test.

We train Deep Closest Point (DCP) with 1024 randomly sampled points for each point cloud for 150 epochs [41], initializing the network with the pretrained weights provided by the authors. Although the training loss converges, DCP fails to achieve reasonable performance on point clouds with partial overlap. DCP uses a singly stochastic matrix to find correspondences, but this formulation assumes that all points in point cloud $X$ have at least one corresponding point in the convex hull of point cloud $Y$. This assumption fails when some points in $X$ have no corresponding points in $Y$, as is the case for partially overlapping fragments. We also tried to train PRNet [42] in our setup, but failed to get reasonable results due to random crashes and high-variance training losses.

Lastly, we fine-tune PointNetLK [2] on 3DMatch for 400 epochs, starting from the pretrained weights provided by the authors. PointNetLK uses a single globally pooled feature per point cloud and regresses the relative pose between them; we suspect that a globally pooled feature fails to capture the complexity of scenes such as those in 3DMatch.

In conclusion, while working well on object-centric synthetic datasets, current end-to-end registration approaches fail on real-world data. Unlike synthetic data, real 3D point cloud pairs contain multiple objects, partial scans, self-occlusion, substantial noise, and may have only a small degree of overlap between scans.

5.2 Multi-way Registration

Multi-way registration for RGB-D scans proceeds via multiple stages. First, the pipeline estimates the camera pose via off-the-shelf odometry and integrates multiple 3D scans to reduce noise and generate accurate 3D fragments of a scene. Next, a pairwise registration algorithm roughly aligns all fragments, followed by multi-way registration [6] which optimizes fragment poses with robust pose graph optimization [23].

We use a popular open-source implementation of this registration pipeline [51] and replace the pairwise registration stage in the pipeline with our proposed modules. Note that we use the networks trained on the 3DMatch training set and test on the multi-way registration datasets [18, 32, 6]; this demonstrates cross-dataset generalization.

We test the modified pipeline on the Augmented ICL-NUIM dataset [6, 18] for quantitative trajectory results, and Indoor LiDAR RGB-D dataset [32] and Stanford RGB-D dataset [6] for qualitative registration visualizations. We measure the absolute trajectory error (ATE) on the Augmented ICL-NUIM dataset with simulated depth noise. As shown in Table 2, compared to state-of-the-art online SLAM [43, 21, 39] and offline reconstruction methods [50], our approach yields consistently low error across scenes.
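For reference, a simplified sketch of the ATE metric, assuming the two trajectories are already expressed in a common frame (a full ATE evaluation first aligns the estimated trajectory to the ground truth, e.g., with a rigid fit such as the weighted_procrustes sketch above using uniform weights).

```python
import torch

def absolute_trajectory_error(traj_est, traj_gt):
    # traj_*: (T, 3) estimated / ground-truth camera positions; returns the RMSE.
    return ((traj_est - traj_gt).norm(dim=1) ** 2).mean().sqrt()
```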

For qualitative results, we compare pairwise fragment registration on these scenes against FGR and RANSAC in Fig. 6. Full scene reconstruction results are shown in the supplement.

5.3 Outdoor LIDAR Registration

We use outdoor LIDAR scans from the KITTI dataset [13] for registration, following [9]. The registration split of Choy et al. [9] uses GPS-IMU to create pairs that are at least 10m apart, and generates ground-truth transformations using GPS followed by ICP to fix errors in the GPS readings. We use FCGF features [9] trained on the training set of the registration split to find correspondences, and train the 6D ConvNet for inlier confidence prediction in the same way as for indoor registration. We use a 30cm voxel size for downsampling the point clouds in all experiments. Registration results are reported in Tab. 3 and visualized in Fig. 7.

Figure 7: KITTI registration results on three test pairs: (a) pair 1, (b) pair 2, (c) pair 3. Pairs of scans are at least 10m apart.
Method Recall TE (cm) RE (deg) Time (s)
FGR [50] 0.2% 40.7 1.02 1.42
RANSAC [35] 34.2% 25.9 1.39 1.37
FCGF [9] 98.2% 10.2 0.33 6.38
Ours 96.9% 21.7 0.34 2.29
Ours + ICP 98.0% 3.46 0.14 2.51
Table 3: Registration on the KITTI test split [13, 9]. We use thresholds of 0.6m and 5 degrees. ‘Ours + ICP’ refers to our method followed by ICP for fine-grained pose adjustment. The runtime includes feature extraction.

6 Conclusion

We presented Deep Global Registration, a learning-based framework that robustly and accurately aligns real-world 3D scans. To achieve this, we used a 6D convolutional network for inlier detection, a differentiable Weighted Procrustes algorithm for scalable registration, and a gradient-based optimizer for pose refinement. Experiments show that our approach outperforms both classical and learning-based registration methods, and can serve as a ready-to-use plugin to replace alternative registration methods in off-the-shelf scene reconstruction pipelines.

References

  • [1] D. Aiger, N. J. Mitra, and D. Cohen-Or (2008) 4-points congruent sets for robust pairwise surface registration. ACM Transactions on Graphics. Cited by: §2, §4.
  • [2] Y. Aoki, H. Goforth, R. A. Srivatsan, and S. Lucey (2019) PointNetLK: robust & efficient point cloud registration using PointNet. In CVPR, Cited by: §1, §1, §1, §2, §3.3, §4, §5.1, §5.1, §5.1, Table 1.
  • [3] P. J. Besl and N. D. McKay (1992) A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §2.
  • [4] S. Bouaziz, A. Tagliasacchi, and M. Pauly (2013) Sparse iterative closest point. In Eurographics, Cited by: §2.
  • [5] Q. Cai, D. Gallup, C. Zhang, and Z. Zhang (2010) 3D deformable face tracking with a commodity depth camera. In ECCV, Cited by: §1.
  • [6] S. Choi, Q. Zhou, and V. Koltun (2015) Robust reconstruction of indoor scenes. In CVPR, Cited by: §1, §3, Figure 6, §5.2, §5.2, §5.2, §5.
  • [7] C. Choy, J. Gwak, and S. Savarese (2019) 4D spatio-temporal ConvNets: Minkowski convolutional neural networks. In CVPR, Cited by: §3.2.
  • [8] C. Choy, J. Lee, R. Ranftl, J. Park, and V. Koltun (2020) High-dimensional convolutional networks for geometric pattern recognition. In CVPR, Cited by: §1, §3.2.
  • [9] C. Choy, J. Park, and V. Koltun (2019) Fully convolutional geometric features. In ICCV, Cited by: §1, §2, §3.1, §3.2, §5.3, Table 3, §5.
  • [10] A. Dai, M. Nießner, M. Zollöfer, S. Izadi, and C. Theobalt (2017) BundleFusion: real-time globally consistent 3D reconstruction using on-the-fly surface re-integration. ACM Transactions on Graphics. Cited by: §5.
  • [11] H. Deng, T. Birdal, and S. Ilic (2018) PPF-FoldNet: unsupervised learning of rotation invariant 3D local descriptors. In ECCV, Cited by: §1, §2, §5.
  • [12] H. Deng, T. Birdal, and S. Ilic (2018) PPFNet: global context aware local features for robust 3D point matching. In CVPR, Cited by: §2, §5.
  • [13] A. Geiger, P. Lenz, and R. Urtasun (2012) Are we ready for autonomous driving? the kitti vision benchmark suite. In CVPR, Cited by: §5.3, Table 3, §5.
  • [14] Z. Gojcic, C. Zhou, J. D. Wegner, and A. Wieser (2019) The perfect match: 3D point cloud matching with smoothed densities. In CVPR, Cited by: §2.
  • [15] J. C. Gower (1975) Generalized procrustes analysis. Psychometrika. Cited by: §1, §3.3.
  • [16] B. Graham, M. Engelcke, and L. van der Maaten (2018) 3D semantic segmentation with submanifold sparse convolutional networks. In CVPR, Cited by: §3.2.
  • [17] M. Halber and T. Funkhouser (2017) Fine-to-coarse global registration of RGB-D scans. In CVPR, Cited by: §5.
  • [18] A. Handa, T. Whelan, J. McDonald, and A. J. Davison (2014) A benchmark for RGB-D visual odometry, 3D reconstruction and SLAM. In ICRA, Cited by: §1, §5.2, §5.2, §5.
  • [19] D. Holz, A. E. Ichim, F. Tombari, R. B. Rusu, and S. Behnke (2015) Registration with the point cloud library: a modular framework for aligning in 3-D. RA-L. Cited by: §2.
  • [20] A. E. Johnson and M. Hebert (1999) Using spin images for efficient object recognition in cluttered 3D scenes. Pattern Analysis and Machine Intelligence. Cited by: §2.
  • [21] O. Kähler, V. A. Prisacariu, and D. W. Murray (2016) Real-time large-scale dense 3D reconstruction with loop closure. In ECCV, Cited by: §5.2, Table 2.
  • [22] M. Khoury, Q. Zhou, and V. Koltun (2017) Learning compact geometric features. In ICCV, Cited by: §1, §2.
  • [23] R. Kümmerle, G. Grisetti, H. Strasdat, K. Konolige, and W. Burgard (2011) G2o: a general framework for graph optimization. In ICRA, Cited by: §5.2.
  • [24] B. D. Lucas and T. Kanade (1981) An iterative image registration technique with an application to stereo vision. In DARPA Image Understanding Workshop, Cited by: §1, §2.
  • [25] J. Maciel and J. P. Costeira (2003) A global solution to sparse correspondence problems. Pattern Analysis and Machine Intelligence. Cited by: §2.
  • [26] A. Makadia, A. Patterson, and K. Daniilidis (2006) Fully automatic registration of 3D point clouds. In CVPR, Cited by: §2.
  • [27] F. A. Maken, F. Ramos, and L. Ott (2019) Speeding up iterative closest point using stochastic gradient descent. In ICRA, Cited by: §2, §4.
  • [28] H. Maron, N. Dym, I. Kezurer, S. Kovalsky, and Y. Lipman (2016) Point registration via efficient convex relaxation. ACM Transactions on Graphics. Cited by: §2.
  • [29] N. Mellado, D. Aiger, and N. J. Mitra (2014) Super 4PCS fast global pointcloud registration via smart indexing. Computer Graphics Forum. Cited by: §2, §5.1, Table 1.
  • [30] R. Mur-Artal and J. D. Tardós (2015) ORB-SLAM: a versatile and accurate monocular SLAM system. IEEE Transactions on Robotics. Cited by: §5.1.
  • [31] G. D. Pais, P. Miraldo, S. Ramalingam, J. C. Nascimento, V. M. Govindu, and R. Chellappa (2019) 3DRegNet: a deep neural network for 3D point registration. arXiv. Cited by: §1, §1, §5.1, §5.1, §5.1.
  • [32] J. Park, Q. Zhou, and V. Koltun (2017) Colored point cloud registration revisited. In ICCV, Cited by: §1, Figure 6, §5.2, §5.2, §5.
  • [33] C. Qian, X. Sun, Y. Wei, X. Tang, and J. Sun (2014) Realtime and robust hand tracking from depth. In CVPR, Cited by: §1.
  • [34] R. Ranftl and V. Koltun (2018) Deep fundamental matrix estimation. In ECCV, Cited by: §1.
  • [35] R. B. Rusu, N. Blodow, and M. Beetz (2009) Fast point feature histograms (FPFH) for 3D registration. In ICRA, Cited by: Figure 1, §1, §1, §2, §2, §4, Figure 6, §5.1, §5.1, Table 1, Table 3, §5.
  • [36] R. B. Rusu, N. Blodow, Z. C. Marton, and M. Beetz (2008) Aligning point cloud views using persistent feature histograms. In IROS, Cited by: §2.
  • [37] S. Salti, F. Tombari, and L. di Stefano (2014) SHOT: unique signatures of histograms for surface and texture description. CVIU. Cited by: §2.
  • [38] R. Schnabel, R. Wahl, and R. Klein (2007) Efficient RANSAC for point-cloud shape detection. Computer Graphics Forum. Cited by: §1, §2, §4.
  • [39] T. Schops, T. Sattler, and M. Pollefeys (2019) BAD SLAM: bundle adjusted direct RGB-D SLAM. In CVPR, Cited by: §5.2, Table 2.
  • [40] F. Tombari, S. Salti, and L. D. Stefano (2010) Unique shape context for 3D data description. In ACM Workshop on 3D Object Retrieval, Cited by: §2.
  • [41] Y. Wang and J. M. Solomon (2019) Deep closest point: learning representations for point cloud registration. In ICCV, Cited by: Figure 1, §1, §1, §1, §1, §2, §3.3, §3.3, §4, §5.1, §5.1, §5.1, Table 1.
  • [42] Y. Wang and J. M. Solomon (2019) PRNet: self-supervised learning for partial-to-partial registration. In NeurIPS, Cited by: §2, §5.1, §5.1, §5.1.
  • [43] T. Whelan, S. Leutenegger, R. F. Salas-Moreno, B. Glocker, and A. J. Davison (2015) ElasticFusion: dense SLAM without a pose graph. In Robotics: Science and Systems, Cited by: §5.2, Table 2.
  • [44] H. Yang and L. Carlone (2019) A polynomial-time solution for robust registration with extreme outlier rates. In Robotics: Science and Systems, Cited by: §2.
  • [45] J. Yang, H. Li, D. Campbell, and Y. Jia (2015) Go-ICP: a globally optimal solution to 3D ICP point-set registration. Pattern Analysis and Machine Intelligence. Cited by: §1, §1, §2, §4, §5.1, §5.1, Table 1.
  • [46] K. M. Yi, E. Trulls, Y. Ono, V. Lepetit, M. Salzmann, and P. Fua (2018) Learning to find good correspondences. In CVPR, Cited by: §1.
  • [47] A. Zeng, S. Song, M. Nießner, M. Fisher, J. Xiao, and T. Funkhouser (2017) 3DMatch: learning local geometric descriptors from RGB-D reconstructions. In CVPR, Cited by: Figure 1, §1, §2, Figure 3, §5.1, §5.
  • [48] Z. Zhang (1994) Iterative point matching for registration of free-form curves and surfaces. IJCV. Cited by: §1.
  • [49] C. Zhao, Z. Cao, C. Li, X. Li, and J. Yang (2019) NM-Net: mining reliable neighbors for robust feature correspondences. In CVPR, Cited by: §2.
  • [50] Q. Zhou, J. Park, and V. Koltun (2016) Fast global registration. In ECCV, Cited by: Figure 1, §1, §1, §2, §2, §3.2, Figure 6, §5.1, §5.1, §5.2, Table 1, Table 2, Table 3.
  • [51] Q. Zhou, J. Park, and V. Koltun (2018) Open3D: a modern library for 3D data processing. arXiv. Cited by: §5.1, §5.2, Table 1, Table 2, §5.
  • [52] Y. Zhou, C. Barnes, J. Lu, J. Yang, and H. Li (2019) On the continuity of rotation representations in neural networks. In CVPR, Cited by: §1, §4.1, §4.