1 Introduction
Finding structure in noisy data is a general problem that arises in many different disciplines. For example, robust linear regression requires finding a pattern (line, plane) in noisy data. 3D registration of point clouds requires the identification of veridical correspondences in the presence of spurious ones
[6]. Structure from motion (SfM) pipelines use verification based on prescribed geometric models to filter spurious image matches [40]. A variety of such applications can benefit from improved methods for the detection of geometric structures in noisy data. Such detection is challenging: data points that belong to the sought-after structure often constitute only a small fraction, while the majority are outliers. Various algorithms have been proposed over the years to cope with noisy data
[3, 11, 16, 21, 22, 34, 38, 41], but they are usually specific to a subset of problems. Recent works have advocated for using deep networks [35, 48, 51]
to learn robust models to classify geometric structures in the presence of outliers. Deep networks offer significant flexibility and the promise to replace handcrafted algorithms and heuristics by models learned directly from data. However, due to the unstructured nature of the data in geometric problems, existing works have treated such data as unordered sets, and relied on network architectures based predominantly on global pooling operators and multilayer perceptrons (MLPs)
[33, 49]. Such network architectures lack the capacity to model local geometric structures and do not leverage the nature of the data, which is often embedded in a (high-dimensional) metric space and has a meaningful geometric structure.

In this work, we introduce a novel type of deep convolutional network that can operate in high dimensions. Our network takes a sparse tensor as input and employs high-dimensional convolutions as the fundamental operator. The distinguishing characteristic of our approach is that it is able to effectively leverage local neighborhood relations together with global context even for high-dimensional data. Our network is fully-convolutional, translation-invariant, and incorporates best practices from the development of ConvNets for two-dimensional image analysis
[18, 23, 36].

To demonstrate the effectiveness and generality of our approach, we tackle various geometric pattern recognition problems. We begin with the diagnostic setting of linear subspace detection and show that our construction is effective in high-dimensional spaces and at low signal-to-noise ratios. We then apply the presented construction to geometric pattern recognition problems that arise in computer vision, including registration of three-dimensional point sets under rigid motion (a 6D problem) and correspondence estimation between images under epipolar constraints (a 4D problem). In both settings, the problem is made difficult by the presence of outliers.
Our experiments indicate that the presented construction can reliably detect geometric patterns in high-dimensional data that is heavily contaminated by noise. It can operate in regimes where existing algorithms break down. Our approach significantly improves 3D registration performance when combined with standard approaches for 3D point cloud alignment [6, 39, 53, 54]. The presented high-dimensional convolutional network also outperforms state-of-the-art methods for correspondence estimation between images [48, 51]. All networks and training scripts are available at https://github.com/chrischoy/HighDimConvNets.
2 Related Work
Robust model fitting. Fitting a geometric model to a set of observations that are contaminated by outliers is a fundamental problem that frequently arises in computer vision and related fields. The most widely used approach for robust geometric model fitting is RANdom SAmple Consensus (RANSAC) [15]. Due to its fundamental importance, many variants and improvements of RANSAC have been proposed over the years [3, 11, 25, 34, 38, 41, 43, 44].
Alternatively, algorithms for robust geometric model fitting are frequently derived using techniques from robust statistics [16, 21, 22, 52], where outlier rejection is performed by equipping an estimator with a cost function that is insensitive to gross outliers. While the resulting algorithms are computationally efficient, they require careful initialization and optimization procedures to avoid poor local optima [53].
Another line of work proposes to find globally optimal solutions to the consensus maximization problem [5, 26, 47]. However, these approaches are currently computationally too demanding for many practical applications.
3D registration. Finding reliable correspondences between a pair of surfaces is an essential step for 3D reconstruction [6, 12, 14, 31]. The problem has been conventionally framed as an energy minimization problem which can be solved using various techniques such as branch and bound [46], Riemannian optimization [37], mixed-integer programming [24], robust error minimization [53], semidefinite programming [20, 28], or random sampling [6].
Recent work has begun to leverage deep networks for geometric registration [2, 30, 32]. These works are predominantly based on PointNet and related architectures, which detect patterns via global pooling operations [33, 49]. In contrast, we develop a high-dimensional convolutional network that operates at multiple scales and can leverage not just global but also local geometric structure.
Image correspondences. Yi et al. [48] and Zhang et al. [51] reduce essential matrix estimation to classification of correspondences into inliers and outliers. Ranftl and Koltun [35] present a similar formulation for fundamental matrix estimation. Brachmann and Rother [4] propose to learn a neural network that guides hypothesis sampling in RANSAC for model fitting problems. Dang et al. [13] propose a numerically stable loss function for essential matrix estimation.
All of these works employ variants of PointNets to classify inliers in an unordered set of putative correspondences. These architectures, based on pointwise MLPs, lack the capacity to model local geometric structure. In contrast, we develop convolutional networks that directly leverage neighborhood relations in the high-dimensional space of correspondences.
3 High-Dimensional Convolutional Networks
In this section, we introduce the two main building blocks of our high-dimensional convolutional network construction: generalized sparse tensors and generalized convolutions.
3.1 Sparse Tensor and Convolution
A tensor is a multi-dimensional array that represents high-order data. A D-th order tensor requires D indices to uniquely access its elements. We denote such indices or coordinates as u and the element at the coordinate as x[u], similar to how we access components in a matrix. Likewise, a sparse tensor is a high-dimensional extension of a sparse matrix where the majority of the elements are 0. Concretely,

x[u_i] = f_i if u_i ∈ C, and 0 otherwise,   (1)

where C = {u_i | i = 1, ..., N} is the set of coordinates with nonzero values, N is the number of nonzero elements, and f_i is the nonzero value at the i-th coordinate. A sparse tensor feature map is a (D+1)-th order tensor with f_i ∈ R^c, as we use the last dimension to denote the feature dimension. A sparse tensor has the constraint that u_i ≠ u_j for i ≠ j. We extend the sparse tensor coordinates to integer indices, u_i ∈ Z^D, to define a generalized sparse tensor whose coordinate set C is a finite subset of the integer space Z^D.
A convolution on this generalized sparse tensor can then be defined as a simple extension of the generalized sparse convolution [8]:

x_out[u] = Σ_{i ∈ N^D(u, C_in)} W_i x_in[u + i]  for u ∈ C_out,   (2)

where C_in and C_out are the sets of input and output locations, which are predefined by the user; W_i is a weight matrix; and N^D(u, C_in) = {i | u + i ∈ C_in} defines the set of neighbors of u, which is determined by the shape of the convolution kernel. For example, if the convolution kernel is a hypercube of size k, N^D(u, C_in) is the set of all the nonzero elements of the input sparse tensor centered at u within the L∞-ball of extent k.
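A minimal Python sketch of the generalized sparse convolution in Eq. (2) may clarify the mechanics; a plain dictionary stands in for the hash-table coordinate lookup, and the function name and toy sizes are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

def sparse_conv(coords_in, feats_in, coords_out, kernel_offsets, weights):
    """Generalized sparse convolution (Eq. 2), sketched over a dict.

    coords_in:      list of D-dim integer tuples (nonzero input sites C_in)
    feats_in:       (N_in, c_in) array of input features
    coords_out:     list of D-dim integer tuples (output sites C_out)
    kernel_offsets: list of D-dim integer offsets i defining the kernel shape
    weights:        dict mapping offset i -> (c_in, c_out) weight matrix W_i
    """
    table = {u: f for u, f in zip(coords_in, feats_in)}  # coordinate lookup
    c_out = next(iter(weights.values())).shape[1]
    out = np.zeros((len(coords_out), c_out))
    for n, u in enumerate(coords_out):
        # accumulate only over neighbors u + i that exist in the input tensor
        for i in kernel_offsets:
            v = tuple(a + b for a, b in zip(u, i))
            if v in table:
                out[n] += table[v] @ weights[i]
    return out

# tiny check: the output at (0, 0) gathers the site itself and its +x neighbor
coords_in = [(0, 0), (1, 0)]
feats_in = np.array([[1.0], [2.0]])
weights = {(0, 0): np.array([[2.0]]), (1, 0): np.array([[3.0]])}
y = sparse_conv(coords_in, feats_in, [(0, 0)], list(weights), weights)
```

Output locations with no nonzero neighbors simply stay zero, which is what makes the operator well-defined on arbitrarily sparse high-dimensional tensors.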
3.2 Convolutional Networks
We design a high-dimensional fully-convolutional neural network for sparse tensors (a sparse tensor network) based on the generalized convolution [8, 10]. We use U-shaped networks [36] to capture large receptive fields while maintaining the original resolution at the final layer. The network has residual connections [18] within layers with the same resolution and across the network to speed up convergence and to recover the lost spatial resolution in the last layer. The network architecture is illustrated in Fig. 1.

For computational efficiency in high dimensions, we use the cross-shaped kernel [8] for all convolutions. We denote the kernel size by k. The cross-shaped kernel has nonzero weights only for the nearest neighbors along each axis, which results in one weight parameter for the center location and k-1 weight parameters for each axis. Note that a cross-shaped kernel is similar to separable convolution, where a full convolution is approximated by D one-dimensional convolutions of size k. Both types of kernels are rank-1 approximations of the full hypercubic kernel with k^D parameters, but separable convolution requires kD matrix multiplications, whereas the cross-shaped kernel requires only (k-1)D + 1 matrix multiplications.
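The parameter counts for the two kernel shapes are easy to verify in code; the sketch below (helper names are ours) enumerates the nonzero offsets of a cross-shaped kernel and compares against the full hypercube:

```python
import numpy as np

def cross_kernel_offsets(D, k):
    """Nonzero offsets of a cross-shaped kernel of (odd) size k in D dims:
    the center plus the k-1 nearest offsets along each coordinate axis."""
    assert k % 2 == 1
    offsets = [tuple([0] * D)]
    for axis in range(D):
        for step in range(1, (k - 1) // 2 + 1):
            for s in (step, -step):
                o = [0] * D
                o[axis] = s
                offsets.append(tuple(o))
    return offsets

def hypercube_kernel_size(D, k):
    """Number of weights in a full hypercubic kernel of size k in D dims."""
    return k ** D

# counts discussed in the text: (k-1)*D + 1 for the cross, k^D for the cube
D, k = 6, 3
n_cross = len(cross_kernel_offsets(D, k))   # 13 for D=6, k=3
n_full = hypercube_kernel_size(D, k)        # 729 for D=6, k=3
```

The gap widens rapidly with dimension: at D = 16 and k = 3, the cross-shaped kernel has 33 weight matrices while the hypercube would need more than 43 million.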
3.3 Implementation
We extend the implementation of Choy et al. [8], which supports arbitrary kernel shapes, to high-dimensional convolutional networks. To implement the sparse tensor networks, we need an efficient data structure that can generate a new sparse tensor as well as find neighbors within the sparse tensor. Choy et al. [8] use a hash table that is efficient for both insertion and search. We replaced this hash table with a faster and more efficient variant [1]. In addition, as the neighbor search is parallelizable, we create an iterator function that runs in parallel with OpenMP [29] by dividing the table into smaller parallelization blocks.
Lastly, U-shaped networks generate hierarchical feature maps that expand the receptive field. Choy et al. [8] use stride-2 convolutions with kernel size 2 to generate lower-resolution hierarchical feature maps. However, such an implementation requires iterating over at least 2^D elements within a hypercubic kernel, as the coordinates are stored in a hash table, which results in O(2^D N) complexity, where N is the cardinality of the input. Consequently, it becomes infeasible to store the weights on the GPU for high-dimensional spaces. Instead of strided convolutions, we propose an efficient implementation of stride-2 sum pooling layers with kernel size 2. Instead of iterating over all possible neighbors, we iterate over all input coordinates and round them down to multiples of 2, which requires only O(N) complexity.

4 Geometric Pattern Recognition
Our approach uses convolutional networks to recognize geometric patterns in high-dimensional spaces. Specifically, we classify each point in a high-dimensional dataset as an inlier or an outlier. We start by validating our approach on synthetic datasets of varying dimensionality and then show results on 3D registration and essential matrix estimation.
Qi et al. [33]  Zaheer et al. [49] + BN + IN  Yi et al. [48]  Ours  

Dim.  Inlier Ratio  MSE  F1  AP (AUC)  MSE  F1  AP (AUC)  MSE  F1  AP (AUC)  MSE  F1  AP (AUC) 
4  15.59%  1.337  0.025  0.164  6.33E-4  0.867  0.936  9.11E-5  0.981  0.996  2.33E-5  0.998  0.999 
8  5.54%  2.369  0.022  0.065  0.001  0.891  0.955  2.45E-4  0.946  0.989  1.64E-5  0.999  0.999 
16  2.75%  3.854  0.012  0.034  4.86E-4  0.970  0.992  0.002  0.962  0.986  3.39E-5  0.999  0.999 
24  0.40%  5.372  0.021  0.011  0.676  0.634  0.691  0.775  0.610  0.674  5.34E-5  0.994  0.996 
32  0.07%  6.715  0.012  6.71E-5    0.0  0.295    0.0  0.050  0.010  0.669  0.689 
Table 1: Line detection in high-dimensional spaces in the presence of extreme noise. All networks are trained with the cross-entropy loss for 40 epochs. Inlier ratios are listed on the left. In the 32-dimensional setting, only 7 out of 10,000 data points are inliers. The table reports Mean Squared Error (MSE), F1 score, and Average Precision (AP) of our approach (a high-dimensional ConvNet) versus baselines (PointNet variants). MSE: lower is better; F1 and AP: higher is better.
Qi et al. [33]  Zaheer et al. [49] + BN + IN  Yi et al. [48]  Ours  

Dim.  Inlier Ratio  F1  AP (AUC)  F1  AP (AUC)  F1  AP (AUC)  F1  AP (AUC) 
4  29.96%  0.0  0.315  0.980  0.996  0.993  0.999  0.991  0.998 
8  8.07%  0.0  0.088  0.985  0.999  0.990  0.999  0.998  0.999 
16  0.34%  0.0  0.004  0.155  0.299  0.182  0.359  0.951  0.961 
24  0.01%  0.0  1.61E-4  0.032  0.133  0.0  0.081  0.304  0.346 
32  4.64E-3%  0.0  5.56E-5  0.0  0.221  0.0  0.023  0.138  0.240 
Table 2: Plane detection in high-dimensional spaces in the presence of extreme noise. The table reports F1 score and Average Precision (AP); higher is better.
Figure 3: 16D line detection, progression of training. We plot the running mean and standard deviation of precision, recall, and F1 score on the validation set for Qi et al. [33], Zaheer et al. [49], Yi et al. [48], and our approach. Our high-dimensional convolutional network quickly attains much higher accuracy than the baselines.
For all experiments, we first quantize the input coordinates to create a sparse tensor of order D+1, where the last dimension denotes the feature channels. The network then predicts a logit score for each nonzero element in the sparse tensor to indicate whether a point is part of the geometric pattern or an outlier.
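The coordinate quantization step can be sketched in NumPy; this is an illustrative stand-in (the function name, the feature-averaging rule, and the toy inputs are our own choices), not the paper's implementation:

```python
import numpy as np

def quantize(points, voxel_size):
    """Map continuous D-dim points to integer coordinates and merge
    duplicates, producing the coordinates and features of a sparse tensor.
    Features of points that fall into the same cell are averaged here."""
    coords = np.floor(points / voxel_size).astype(np.int64)
    uniq, inv = np.unique(coords, axis=0, return_inverse=True)
    inv = np.asarray(inv).reshape(-1)
    feats = np.zeros((len(uniq), points.shape[1]))
    counts = np.zeros(len(uniq))
    for row, p in zip(inv, points):
        feats[row] += p
        counts[row] += 1
    return uniq, feats / counts[:, None]

# two nearby points land in the same cell; the third gets its own cell
pts = np.array([[0.011, 0.020],
                [0.012, 0.021],
                [0.510, 0.520]])
coords, feats = quantize(pts, voxel_size=0.05)
```

The same floor-to-grid idea underlies the O(N) strided sum pooling of Sec. 3.3: rounding coordinates down to multiples of the stride assigns each input site to its output cell without any neighbor iteration.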
4.1 Line and Plane Detection
We first test the capabilities of our fully-convolutional networks on simple high-dimensional pattern recognition problems that involve detecting linear subspaces amidst noise. Our dataset consists of noise sampled uniformly from the D-dimensional domain and a small number of samples from a line, perturbed by Gaussian noise. The number of outliers increases exponentially in the dimension (∝ L^D), while the number of inliers increases sublinearly (∝ √D L), where L is the extent of the domain. Further details are given in the supplement. The network predicts a likelihood score for each nonzero element in the input sparse tensor, and we classify elements whose predicted inlier probability exceeds a threshold as inliers. We estimate the line equation from the predicted inliers using unweighted least squares.

We use PointNet variants as the baselines for this experiment [33, 48, 49]. For Zaheer et al. [49], we were not able to get reasonable results with the network architecture proposed in the paper. We thus augmented the architecture with batch normalization and instance normalization layers after each linear transformation, similar to Yi et al. [48], which boosted performance significantly. For all experiments, we use the cross-entropy loss. We used the same training hyperparameters, including loss, batch size, optimizer, and learning rate schedule, for all approaches.
We use three metrics to analyze the performance of the networks: Mean Squared Error (MSE), F1 score, and Average Precision (AP). For the MSE, we estimate the line equation with least squares on the predicted inliers. The second metric is the F1 score, the harmonic mean of precision and recall. In many problems, the F1 score is a direct indicator of the performance of a classifier, and we also found a strong correlation between the F1 score and the mean squared error. The final metric is Average Precision (AP), which measures the area under the precision-recall curve. We report the results in Tab. 1 and provide qualitative examples in Fig. 2. Tab. 1 lists the inlier ratio to indicate the difficulty of each task.

In a second experiment, we create another synthetic dataset where the inlier pattern is sampled from a plane spanned by two basis vectors, i.e., points of the form a v1 + b v2. The two basis vectors are sampled uniformly from the D-dimensional unit hypercube. We use the same training procedure for our network and the baselines, and report the results in Tab. 2. We found that the convolutional network is more robust to noise in high-dimensional spaces than the PointNet variants. In addition, the convolutional network training converges quickly, as shown in Fig. 3, which is a further indication that the architecture can effectively leverage the structure of the data.
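The line-detection setup and the unweighted least-squares evaluation can be sketched as follows; the dimension, extent, noise level, and point counts are our own illustrative choices, and we fit on the ground-truth inliers where, in the experiments, the network's predictions play this role:

```python
import numpy as np

rng = np.random.default_rng(0)
D, L = 8, 10.0                       # dimension and extent of the domain

# inliers: points on a random line through the origin, with Gaussian noise
direction = rng.normal(size=D)
direction /= np.linalg.norm(direction)
t = rng.uniform(-L, L, size=200)
inliers = t[:, None] * direction + 0.01 * rng.normal(size=(200, D))

# outliers: uniform noise filling the D-dimensional domain; their count
# grows like L^D while the inlier count only grows like sqrt(D) * L
outliers = rng.uniform(-L, L, size=(5000, D))

# unweighted least-squares line fit: the leading principal direction
# of the mean-centered predicted inliers
X = inliers - inliers.mean(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
est = vt[0]

# cosine of the angle between the estimated and true line directions
cosine = abs(est @ direction)
```

With a well-classified inlier set the recovered direction is essentially exact; the MSE column in Tab. 1 degrades precisely when misclassified outliers leak into this fit.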
4.2 3D Registration
A typical 3D registration pipeline consists of 1) feature extraction, 2) feature matching, 3) match filtering, and 4) global registration. In this section, we show that in the match filtering stage the correct (inlier) correspondences form a 6-dimensional geometric structure. We then extend our geometric pattern recognition networks to identify inlier correspondences in this 6-dimensional space.

Let X = {x_i} be a set of points sampled from a 3D surface, x_i ∈ R^3, and let Y be a subset of X that went through a rigid transformation T, y = Rx + t. For example, Y could be a 3D scan from a different perspective that has an overlap with X. We denote a correspondence between points x_i and y_j as (x_i, y_j). When we form an ordered pair (x_i, y_j) ∈ R^6, the ground-truth correspondences satisfy y_j = R x_i + t along the common 3D geometry, whereas an incorrect correspondence implies y_j ≠ R x_i + t. For example, in Fig. 4, we visualize the ordered pairs from 1-dimensional sets. Note that the inliers follow the geometry of the inputs and form a line segment. Similarly, the set of ordered pairs (x, Rx + t) forms a surface in 6-dimensional space.

Theorem 1. The 6D representation of ground-truth 3D correspondences, (x, y) ∈ R^6 with y = Rx + t, lies on the intersection of three hyperplanes, where each hyperplane is a row of the block equation [R  -I](x, y)^T + t = 0.

Proof: A ground-truth 3D correspondence, (x, y), must satisfy y = Rx + t. We move y to the LHS and convert both x and y to the 6D representation to get the three hyperplane equations Rx - y + t = 0.

Corollary 1.1. The rank of the block matrix [R  -I] is 3, since R and I are orthogonal matrices. Thus, all hyperplanes defined by the block matrix intersect each other, and the 6D inlier correspondences form a 3D plane, since the solution set of the intersection of three independent hyperplanes in the 6D space R^6 is a 3D plane.
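Theorem 1 and Corollary 1.1 are easy to verify numerically. The following sketch (with an arbitrary random rigid transform of our choosing) checks that ground-truth 6D correspondences satisfy the three hyperplane equations and span a 3-dimensional subspace:

```python
import numpy as np

rng = np.random.default_rng(1)

# random rigid transform: a proper rotation from a QR decomposition
A = rng.normal(size=(3, 3))
Q, _ = np.linalg.qr(A)
R = Q * np.sign(np.linalg.det(Q))   # ensure det(R) = +1
t = rng.normal(size=3)

# ground-truth correspondences (x, y = Rx + t), stacked as 6D points
x = rng.normal(size=(100, 3))
y = x @ R.T + t
corr6d = np.hstack([x, y])

# block matrix [R  -I]: every inlier satisfies [R -I](x, y)^T + t = 0
M = np.hstack([R, -np.eye(3)])
residual = corr6d @ M.T + t
rank_M = np.linalg.matrix_rank(M)   # 3, so the solution set is 3-dimensional

# the mean-centered 6D inliers indeed span a 3-dimensional subspace
rank_pts = np.linalg.matrix_rank(corr6d - corr6d.mean(axis=0), tol=1e-8)
```

Outlier pairs (x_i, y_j) with y_j ≠ R x_i + t leave this 3D plane, which is exactly the structure the 6D network is trained to detect.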
We can thus use our highdimensional convolutional network construction to segment the 6dimensional set of correspondences into inliers and outliers by estimating the inlier likelihood for each correspondence.
Network. We use a 6-dimensional instantiation of the U-shaped convolutional network presented in Sec. 3. As the dimensionality is manageable, we use hypercubic kernels. The network takes an order-6 sparse tensor whose coordinates are correspondences (x_i, y_j) ∈ R^6. We discretize the coordinates with the voxel size used to extract features. Our baseline is Yi et al. [48], which takes dimensionless mean-centered correspondences without discretization. We train the networks to predict the inlier probability of each correspondence with the balanced cross-entropy loss.
Dataset. We use the 3DMatch dataset for this experiment [50]. The 3DMatch dataset is a composition of various 3D scan datasets [17, 45, 50] and thus covers a wide range of scenes and different types of 3D cameras. We integrate RGB-D images to form fragments of the scenes, following [50]. During training, we randomly rotate each scene on the fly to augment the dataset. We use a popular hand-designed feature descriptor, FPFH [39], to compute correspondences. Note, however, that our pipeline is agnostic to the choice of feature and can also be used with learned features [10].
FPFH + FGR  FPFH + Ours + FGR  FPFH + RANSAC  FPFH + Ours + RANSAC  

Inlier Ratio  TE  RE  Succ. Rate  TE  RE  Succ. Rate  TE  RE  Succ. Rate  TE  RE  Succ. Rate  
Kitchen  1.62%  10.98  4.99  37.15  5.68  2.21  65.61  6.25  2.17  44.47  5.90  1.98  69.57 
Home 1  2.71%  11.12  4.40  45.51  6.52  2.08  80.77  7.07  2.19  61.54  6.00  1.87  80.13 
Home 2  2.83%  9.61  3.83  36.54  7.13  2.56  64.42  6.47  2.40  50.00  7.86  2.56  69.71 
Hotel 1  1.35%  12.31  5.09  33.19  7.95  2.65  76.11  7.48  2.75  48.67  7.38  2.38  80.09 
Hotel 2  1.54%  12.27  5.22  25.00  7.86  2.56  69.23  9.54  3.18  47.12  6.40  2.25  70.19 
Hotel 3  1.59%  13.52  7.04  27.78  5.39  1.99  72.22  5.91  2.46  59.26  5.85  2.36  81.48 
Study  0.87%  16.10  6.01  16.78  9.61  2.64  53.42  10.05  3.01  30.48  8.51  2.23  56.16 
Lab  1.59%  10.48  4.80  42.86  7.69  2.44  61.04  8.01  2.31  45.45  6.64  2.12  68.83 
Average    12.05  5.17  33.10  7.23  2.39  67.85  7.60  2.56  48.37  6.82  2.22  72.02 
Table 3: 3D registration performance with and without our network for outlier filtering. TE: translation error; RE: rotation error; Succ. Rate: registration success rate (%).
Figure 6: Qualitative registration results on the Kitchen, Bedroom, Livingroom, and Study scenes.
We follow the standard procedures in the 3D registration literature to generate candidate correspondences. First, since 3D scans often exhibit irregular densities, we resample the input point clouds using a voxel grid to produce a regular point cloud. We use voxel sizes of 2.5cm and 5cm for our experiments. Next, we compute FPFH features and find the nearest neighbor for each point in feature space to form correspondences. The correspondences obtained from this procedure often exhibit a very low inlier ratio, as little as 0.87% with a 2.5cm voxel size. Among these correspondences, we regard a correspondence (x, y) as an inlier if it satisfies ||Rx + t - y|| < τ, and all others as outliers. We set τ to be two times the voxel size.
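The inlier labeling rule can be written down directly; a small sketch (the helper name is ours, and τ follows the two-times-voxel-size rule from the text):

```python
import numpy as np

def label_inliers(x, y, R, t, tau):
    """Label correspondence (x_i, y_i) as an inlier when it agrees with the
    ground-truth rigid transform: ||R x_i + t - y_i|| < tau."""
    residual = np.linalg.norm(x @ R.T + t - y, axis=1)
    return residual < tau

voxel = 0.025                  # 2.5cm voxel size
tau = 2 * voxel                # inlier threshold: two times the voxel size

# identity transform: the first pair is a near-perfect match, the second is not
x = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
y = np.array([[0.0, 0.0, 0.01], [0.0, 0.0, 1.0]])
labels = label_inliers(x, y, np.eye(3), np.zeros(3), tau)
```

Tying τ to the voxel size keeps the labels consistent across discretization resolutions: a correspondence is an inlier if it lands within two grid cells of its transformed counterpart.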
Finally, we use a registration method to convert the filtered correspondences into a final registration result. We show results with two different registration methods. The first is Fast Global Registration [53], which directly minimizes a robust error metric. The second is a variant of RANSAC [15] that is specialized to 3D registration [54].
Evaluation. We use three standard metrics to evaluate registration performance: rotation error, translation error, and success rate. The rotation error measures the absolute angular deviation of the estimated rotation R̂ from the ground-truth rotation R*, arccos((Tr(R̂ᵀ R*) - 1) / 2). Similarly, the translation error measures the deviation ||t̂ - t*|| of the estimated translation t̂ from the ground truth t*. When we report these metrics, we exclude alignments whose errors exceed the thresholds, following [10], since the results of the registration methods [53, 15] can be arbitrarily bad when registration fails. Finally, the success rate is the ratio of registrations that were successful; a registration is considered successful if both rotation and translation errors are within the respective thresholds. For all experiments, we use a rotation error of 15 degrees and a translation error of 30cm as the thresholds.
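These metrics translate directly into code; a minimal sketch (function names are ours):

```python
import numpy as np

def rotation_error_deg(R_est, R_gt):
    """Absolute angular deviation: arccos((Tr(R_est^T R_gt) - 1) / 2)."""
    c = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))  # clip for stability

def translation_error(t_est, t_gt):
    """Euclidean deviation of the estimated translation."""
    return np.linalg.norm(t_est - t_gt)

def success(R_est, t_est, R_gt, t_gt, re_thresh=15.0, te_thresh=0.30):
    """Registration succeeds if both errors are within the thresholds
    (15 degrees and 30cm in the experiments)."""
    return (rotation_error_deg(R_est, R_gt) <= re_thresh
            and translation_error(t_est, t_gt) <= te_thresh)

# sanity check: a 10-degree rotation about z is within both thresholds
a = np.radians(10.0)
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
re = rotation_error_deg(Rz, np.eye(3))
ok = success(Rz, np.zeros(3), np.eye(3), np.zeros(3))
```

The clip inside the arccos guards against floating-point values marginally outside [-1, 1], which otherwise produce NaNs for near-perfect registrations.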
Tab. 3 compares 3D registration pipelines with and without our network for filtering outliers. Note that FGR [53] benefits considerably from our network, since FGR assumes relatively accurate correspondences as input. The improvement is smaller with RANSAC, since it is more robust to high outlier rates. Although the inlier ratio is as low as 1% for many 3D scene pairs, our network generates very accurate predictions. Similar to the line detection experiments in Fig. 3, we find that the network converges very quickly. To study the robustness of the 6-dimensional convolutional network to the voxel size, we compare to the model of Yi et al. [48] in Fig. 5 with a 5cm voxel size, and find that the convolutional network improves the registration success rate significantly even at the higher inlier ratio seen with a 5cm discretization resolution. Qualitatively, our network accurately filters out outliers even in the presence of extreme noise (Fig. 6).
4.3 Filtering Image Correspondences
In this section, we apply the high-dimensional convolutional network to inlier detection for image correspondences. In the projective space, an inlier correspondence must satisfy p'ᵀ E p = 0, where E is an essential matrix, p denotes a normalized homogeneous coordinate p = K⁻¹ x̃, x̃ is the corresponding homogeneous image coordinate, and K is the camera intrinsic matrix. When we expand p'ᵀ E p = 0, we get a quadrivariate quadratic function. If there is a real-valued solution, there are infinitely many solutions that form either an ellipse (or circle), a parabola, or a hyperbola; these are known as conic sections. Thus, a set of ground-truth image correspondences will form a hyper conic section in 4-dimensional space. We use a convolutional network to predict the likelihood that a correspondence is an inlier.
Dataset: YFCC100M. We use the large-scale photo-tourism dataset YFCC100M [42] for the experiment. The dataset contains 100M Flickr images of tourist hotspots with metadata, which has been curated into 72 locations with camera extrinsics estimated using SfM [19]. We follow Zhang et al. [51] to generate a dataset, using 68 locations for training and the others for testing. We filter out any image pairs that have fewer than 100 overlapping 3D points from SfM, to guarantee nonzero overlap between images.
We use SIFT features [27] to create correspondences and label a correspondence as a ground-truth inlier if its symmetric epipolar distance, computed with the provided camera parameters, is below a threshold, i.e.,

d(p, p') = |p'ᵀ E p| (1 / √(l₁² + l₂²) + 1 / √(l'₁² + l'₂²)),   (3)

where l = Eᵀ p' is a homogeneous line in the first image, l' = E p is the corresponding line in the second image, and l₁, l₂ denote the first two components of a line.
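A minimal numeric sketch of the epipolar geometry above may help; the specific motion and 3D point are arbitrary illustrative choices, and we use one common form of the symmetric epipolar distance, the sum of point-to-epipolar-line distances:

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]x, so that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def sym_epipolar_distance(p, p2, E):
    """Sum of distances of p2 and p to their epipolar lines E p and E^T p2."""
    l2 = E @ p            # epipolar line of p in the second image
    l1 = E.T @ p2         # epipolar line of p2 in the first image
    v = abs(p2 @ E @ p)
    return v / np.hypot(l2[0], l2[1]) + v / np.hypot(l1[0], l1[1])

# two views related by a small rigid motion (R, t); E = [t]x R
a = 0.1
R = np.array([[np.cos(a), 0.0, np.sin(a)],
              [0.0, 1.0, 0.0],
              [-np.sin(a), 0.0, np.cos(a)]])
t = np.array([0.1, 0.02, 0.05])
E = skew(t) @ R

# project one 3D point into both normalized cameras: a true correspondence
X = np.array([0.3, -0.2, 4.0])
p = np.array([X[0] / X[2], X[1] / X[2], 1.0])
X2 = R @ X + t
p2 = np.array([X2[0] / X2[2], X2[1] / X2[2], 1.0])

d_inlier = sym_epipolar_distance(p, p2, E)
d_outlier = sym_epipolar_distance(p, p2 + np.array([0.3, 0.3, 0.0]), E)
```

True correspondences drive both the bilinear constraint p'ᵀ E p and the distance to zero, while a perturbed (mismatched) pair yields a distance orders of magnitude larger, which is what makes thresholding Eq. (3) a usable labeling rule.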
Network. We convert a set of candidate image correspondences into an order-5 sparse tensor with four spatial dimensions and vectorized features. The coordinates are defined as normalized image coordinates (p, p'). We define integer coordinates by discretizing the normalized image coordinates with a quantization resolution of 0.01, and additionally use the normalized coordinates as features. We compare against state-of-the-art baselines on the YFCC dataset and evaluate two variants of convolutional networks for this task. The first variant is a U-shaped convolutional network (Ours); the second is a ResNet-like network with spatial correlation modules (Ours + SC). Spatial correlation modules [51] are blocks of shared MLPs that encode global context from a set of correspondences.
Note that unlike the FPFH descriptor [39], which extracts features densely, SIFT features are extracted sparsely, from only a few keypoints in the image. In the 4-dimensional space of correspondences, the sparsity gets even worse, as the volume increases multiplicatively while the number of points (correspondences) stays the same. Such sparsity leads to fewer neighbors in the high-dimensional space, which causes the convolution to degenerate to a multilayer perceptron. Our convolutional networks remain effective in this setting, but their distinctive ability to leverage local geometric structure is not strongly utilized. To increase the density of the correspondences in the 4-dimensional space, we use a dense fully-convolutional feature, UCN [7]. We train the UCN on the YFCC100M training set for 100 epochs and follow the preprocessing stage outlined in the open-source version [9] to create correspondences. Training and testing with UCN follow the same standard procedure. We use the balanced cross-entropy loss [48] for all experiments.

Yi et al. [48]  Zhang et al. [51]  Ours  Ours + SC  Ours + UCN  

Prec.  Recall  F1  Prec.  Recall  F1  Prec.  Recall  F1  Prec.  Recall  F1  Prec.  Recall  F1  
Buckingham  0.497  0.772  0.605  0.486  0.889  0.629  0.535  0.822  0.648  0.611  0.835  0.705  0.769  0.892  0.826 
Notre dame  0.581  0.894  0.705  0.629  0.951  0.757  0.647  0.915  0.758  0.721  0.929  0.812  0.844  0.954  0.896 
Reichtag  0.747  0.877  0.807  0.734  0.917  0.815  0.695  0.911  0.789  0.769  0.897  0.827  0.856  0.937  0.895 
Sacre coeur  0.658  0.871  0.750  0.662  0.948  0.780  0.632  0.917  0.748  0.718  0.932  0.811  0.855  0.948  0.899 
Average  0.621  0.854  0.717  0.628  0.926  0.745  0.628  0.891  0.736  0.704  0.898  0.789  0.830  0.933  0.879 
Table 4: Correspondence classification on the YFCC100M test scenes. The table reports precision, recall, and F1 score; higher is better.
Figure 7: Qualitative results on the Buckingham, Notre Dame, and Sacre Coeur scenes.
Evaluation. We use precision, recall, and F1 score to evaluate correspondence classification accuracy, with ground-truth correspondences defined by thresholding the symmetric epipolar distance in Eq. (3). Our network predicts a correspondence to be an inlier if its predicted inlier probability is above 0.5. We provide quantitative results in Tab. 4 and qualitative results in Fig. 7. Our approach outperforms the PointNet variants [48, 51] as measured by the F1 score.
5 Conclusion
Many interesting problems in computer vision involve geometric patterns in high-dimensional spaces. We proposed high-dimensional convolutional networks for geometric pattern recognition. We presented a fully-convolutional network architecture that is efficient and able to find patterns in high-dimensional data even in the presence of severe noise. We validated the efficacy of our approach on tasks such as line and plane detection, 3D registration, and geometric filtering of image correspondences.
Acknowledgements
This work was partially supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 2020R1C1C1015260).
References
 [1] (2019) Robin hood hashing. GitHub. Note: https://github.com/martinus/robinhoodhashing Cited by: §3.3.
 [2] (2019) PointNetLK: robust & efficient point cloud registration using pointnet. In CVPR, Cited by: §2.
 [3] (2017) DSAC  differentiable RANSAC for camera localization. In CVPR, Cited by: §1, §2.
 [4] (2019) Neuralguided RANSAC: learning where to sample model hypotheses. In ICCV, Cited by: §2.
 [5] (2017) Efficient globally optimal consensus maximisation with tree search. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §2.
 [6] (2015) Robust reconstruction of indoor scenes. In CVPR, Cited by: §1, §1, §2.
 [7] (2016) Universal correspondence network. In Advances in Neural Information Processing Systems, Cited by: §4.3.
 [8] (2019) 4D spatiotemporal ConvNets: minkowski convolutional neural networks. In CVPR, Cited by: §3.1, §3.2, §3.2, §3.3, §3.3.
 [9] (2019) Open universal correspondence network. Note: https://github.com/chrischoy/openucn Cited by: §4.3.
 [10] (2019) Fully convolutional geometric features. In ICCV, Cited by: §3.2, §4.2, §4.2.
 [11] (2005) Matching with PROSAC - progressive sample consensus. In CVPR, Cited by: §1, §2.
 [12] (2017) BundleFusion: realtime globally consistent 3d reconstruction using onthefly surface reintegration. ACM Transactions on Graphics. Cited by: §2.
 [13] (2018) Eigendecomposition-free training of deep networks with zero eigenvalue-based losses. In ECCV, Cited by: §2.
 [14] (2019) GPU accelerated robust scene reconstruction. In IROS, Cited by: §2.
 [15] (1981) Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM. Cited by: §2, §4.2, §4.2.
 [16] (2003) Robust registration of 2D and 3D point sets. Image and Vision Computing. Cited by: §1, §2.
 [17] (2013) Realtime RGBD camera relocalization. In ISMAR, Cited by: §4.2.
 [18] (2016) Deep residual learning for image recognition. In CVPR, Cited by: §1, §3.2.
 [19] (2015) Reconstructing the world in six days. In CVPR, Cited by: §4.3.
 [20] (2014) Convex relaxations of SE(2) and SE(3) for visual pose estimation. In ICRA, Cited by: §2.
 [21] (2011) An M-estimator for high breakdown robust estimation in computer vision. Computer Vision and Image Understanding. Cited by: §1, §2.
 [22] (2011) Robust statistics. Springer. Cited by: §1, §2.
 [23] (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In ICML, Cited by: §1.
 [24] (2017) Globally optimal object pose estimation in point clouds with mixedinteger programming. In International Symposium on Robotics Research, Cited by: §2.
 [25] (2012) Fixing the locally optimized RANSAC - full experimental evaluation. In British Machine Vision Conference, Cited by: §2.
 [26] (2009) Consensus set maximization with guaranteed global optimality for robust geometry estimation. In ICCV, Cited by: §2.
 [27] (2004) Distinctive image features from scaleinvariant keypoints. International Journal of Computer Vision. Cited by: §4.3.
 [28] (2016) Point registration via efficient convex relaxation.. ACM Transactions on Graphics. Cited by: §2.
 [29] (2008) OpenMP application program interface version 3.0. Cited by: §3.3.
 [30] (2019) 3DRegNet: A deep neural network for 3D point registration. arXiv. Cited by: §2.
 [31] (2017) Colored point cloud registration revisited. In ICCV, Cited by: §2.
 [32] (2019) Unsupervised learning of consensus maximization for 3D vision problems. In CVPR, Cited by: §2.
 [33] (2017) PointNet: deep learning on point sets for 3D classification and segmentation. In CVPR, Cited by: §1, §2, Figure 2, §4.1, Table 1, Table 2.
 [34] (2013) USAC: A universal framework for random sample consensus. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §1, §2.
 [35] (2018) Deep fundamental matrix estimation. In ECCV, Cited by: §1, §2.
 [36] (2015) Unet: convolutional networks for biomedical image segmentation. In MICCAI, Cited by: §1, §3.2.
 [37] (2019) SEsync: a certifiably correct algorithm for synchronization over the special euclidean group. International Journal of Robotics Research. Cited by: §2.
 [38] (1984) Least median of squares regression. Journal of the American Statistical Association. Cited by: §1, §2.
 [39] (2009) Fast point feature histograms (FPFH) for 3d registration. In ICRA, Cited by: §1, Figure 5, §4.2, §4.3.
 [40] (2016) Structurefrommotion revisited. In CVPR, Cited by: §1.
 [41] (2016) Robust model fitting using higher than minimal subset sampling. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §1, §2.
 [42] (2016) YFCC100M: the new data in multimedia research. Communications of the ACM. Cited by: §4.3.
 [43] (2002) Bayesian model estimation and selection for epipolar geometry and generic manifold fitting. International Journal of Computer Vision. Cited by: §2.
 [44] (2000) MLESAC: a new robust estimator with application to estimating image geometry. Computer Vision and Image Understanding. Cited by: §2.
 [45] (2013) Sun3D: a database of big spaces reconstructed using sfm and object labels. In ICCV, Cited by: §4.2.
 [46] (2013) GoICP: solving 3D registration efficiently and globally optimally. In ICCV, Cited by: §2.
 [47] (2014) Optimal essential matrix estimation via inlierset maximization. In ECCV, Cited by: §2.
 [48] (2018) Learning to find good correspondences. In CVPR, Cited by: §1, §1, §2, Figure 2, Figure 5, Figure 7, §4.1, §4.2, §4.2, §4.3, §4.3, Table 1, Table 2, Table 4.
 [49] (2017) Deep sets. In Advances in Neural Information Processing Systems, Cited by: §1, §2, Figure 2, §4.1, Table 1, Table 2.
 [50] (2017) 3DMatch: learning local geometric descriptors from RGBD reconstructions. In CVPR, Cited by: Figure 5, §4.2.
 [51] (2019) Learning twoview correspondences and geometry using orderaware network. In ICCV, Cited by: §1, §1, §2, Figure 7, §4.3, §4.3, §4.3, Table 4.
 [52] (1998) Determining the epipolar geometry and its uncertainty: A review. International Journal of Computer Vision. Cited by: §2.
 [53] (2016) Fast global registration. In ECCV, Cited by: §1, §2, §2, Figure 5, §4.2, §4.2, §4.2.
 [54] (2018) Open3D: A modern library for 3D data processing. arXiv. Cited by: §1, §4.2.