High-dimensional Convolutional Networks for Geometric Pattern Recognition

05/17/2020, by Christopher Choy, et al.

Many problems in science and engineering can be formulated in terms of geometric patterns in high-dimensional spaces. We present high-dimensional convolutional networks (ConvNets) for pattern recognition problems that arise in the context of geometric registration. We first study the effectiveness of convolutional networks in detecting linear subspaces in high-dimensional spaces with up to 32 dimensions: much higher dimensionality than prior applications of ConvNets. We then apply high-dimensional ConvNets to 3D registration under rigid motions and image correspondence estimation. Experiments indicate that our high-dimensional ConvNets outperform prior approaches that relied on deep networks based on global pooling operators.




1 Introduction

Finding structure in noisy data is a general problem that arises in many different disciplines. For example, robust linear regression requires finding a pattern (line, plane) in noisy data. 3D registration of point clouds requires the identification of veridical correspondences in the presence of spurious ones [6]. Structure from motion (SfM) pipelines use verification based on prescribed geometric models to filter spurious image matches [40]. A variety of such applications can benefit from improved methods for the detection of geometric structures in noisy data.

Such detection is challenging. Data points that belong to the sought-after structure often constitute only a small fraction of the data, while the majority are outliers. Various algorithms have been proposed over the years to cope with noisy data [3, 11, 16, 21, 22, 34, 38, 41], but they are usually specific to a subset of problems.

Recent works have advocated for using deep networks [35, 48, 51] to learn robust models that classify geometric structures in the presence of outliers. Deep networks offer significant flexibility and the promise of replacing hand-crafted algorithms and heuristics with models learned directly from data. However, due to the unstructured nature of the data in geometric problems, existing works have treated such data as unordered sets and relied on network architectures based predominantly on global pooling operators and multi-layer perceptrons (MLPs) [33, 49]. Such network architectures lack the capacity to model local geometric structures and do not leverage the nature of the data, which is often embedded in a (high-dimensional) metric space and has meaningful geometric structure.

In this work, we introduce a novel type of deep convolutional network that can operate in high dimensions. Our network takes a sparse tensor as input and employs high-dimensional convolutions as the fundamental operator. The distinguishing characteristic of our approach is that it effectively leverages local neighborhood relations together with global context, even for high-dimensional data. Our network is fully convolutional and translation-invariant, and incorporates best practices from the development of ConvNets for two-dimensional image analysis [18, 23, 36].

To demonstrate the effectiveness and generality of our approach, we tackle various geometric pattern recognition problems. We begin with the diagnostic setting of linear subspace detection and show that our construction is effective in high-dimensional spaces and at low signal-to-noise ratios. We then apply the presented construction to geometric pattern recognition problems that arise in computer vision, including registration of three-dimensional point sets under rigid motion (a 6D problem) and correspondence estimation between images under epipolar constraints (a 4D problem). In both settings, the problem is made difficult by the presence of outliers.

Our experiments indicate that the presented construction can reliably detect geometric patterns in high-dimensional data that is heavily contaminated by noise. It can operate in regimes where existing algorithms break down. Our approach significantly improves 3D registration performance when combined with standard approaches for 3D point cloud alignment [6, 39, 53, 54]. The presented high-dimensional convolutional network also outperforms state-of-the-art methods for correspondence estimation between images [48, 51]. All networks and training scripts are available at https://github.com/chrischoy/HighDimConvNets.

2 Related Work

Robust model fitting. Fitting a geometric model to a set of observations that are contaminated by outliers is a fundamental problem that frequently arises in computer vision and related fields. The most widely used approach for robust geometric model fitting is RANdom SAmple Consensus (RANSAC) [15]. Due to its fundamental importance, many variants and improvements of RANSAC have been proposed over the years [3, 11, 25, 34, 38, 41, 43, 44].

Alternatively, algorithms for robust geometric model fitting are frequently derived using techniques from robust statistics [16, 21, 22, 52], where outlier rejection is performed by equipping an estimator with a cost function that is insensitive to gross outliers. While the resulting algorithms are computationally efficient, they require careful initialization and optimization procedures to avoid poor local optima [53].

Another line of work proposes to find globally optimal solutions to the consensus maximization problem [5, 26, 47]. However, these approaches are currently computationally too demanding for many practical applications.

3D registration. Finding reliable correspondences between a pair of surfaces is an essential step for 3D reconstruction [6, 12, 14, 31]. The problem has been conventionally framed as an energy minimization problem which can be solved using various techniques such as branch and bound [46], Riemannian optimization [37], mixed-integer programming [24], robust error minimization [53], semi-definite programming [20, 28], or random sampling [6].

Recent work has begun to leverage deep networks for geometric registration [2, 30, 32]. These works are predominantly based on PointNet and related architectures, which detect patterns via global pooling operations [33, 49]. In contrast, we develop a high-dimensional convolutional network that operates at multiple scales and can leverage not just global but also local geometric structure.

Image correspondences. Yi et al. [48] and Zhang et al. [51] reduce essential matrix estimation to the classification of correspondences into inliers and outliers. Ranftl and Koltun [35] present a similar formulation for fundamental matrix estimation. Brachmann and Rother [4] propose to learn a neural network that guides hypothesis sampling in RANSAC for model fitting problems. Dang et al. [13] propose a numerically stable loss function for essential matrix estimation.

All of these works employ variants of PointNets to classify inliers in an unordered set of putative correspondences. These architectures, based on pointwise MLPs, lack the capacity to model local geometric structure. In contrast, we develop convolutional networks that directly leverage neighborhood relations in the high-dimensional space of correspondences.

3 High-Dimensional Convolutional Networks

In this section, we introduce the two main building blocks of our high-dimensional convolutional network construction: generalized sparse tensors and generalized convolutions.

3.1 Sparse Tensor and Convolution

A tensor is a multi-dimensional array that represents high-order data. A D-th order tensor requires D indices to uniquely access its elements. We denote such indices or coordinates as x = [x_1, ..., x_D] and the element at that coordinate as T[x], similar to how we access components in a matrix. Likewise, a sparse tensor is a high-dimensional extension of a sparse matrix where the majority of the elements are 0. Concretely,

T[x] = f_x if x ∈ C, and T[x] = 0 otherwise,

where C = {x_i}_{i=1}^N is the set of coordinates with non-zero values, N is the number of non-zero elements, and f_i is the non-zero value at the i-th coordinate. A sparse tensor feature map is a (D+1)-th order tensor, as we use the last dimension to denote the feature dimension. Such a sparse tensor has the constraint that T[x] = f_x ∈ R^{N_F}. We extend the sparse tensor coordinates to integer indices and define C ⊂ Z^D, where Z^D denotes the D-dimensional integer space, to define a generalized sparse tensor.
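As an illustration, a sparse tensor can be held in a coordinate-to-feature map. The sketch below is our own minimal Python mock-up (class and method names are ours, not the paper's API): it stores only non-zero elements and returns a zero vector for empty locations.

```python
import numpy as np

class SparseTensor:
    """Minimal sparse tensor: a map from integer D-dim coordinates to
    N_F-dim feature vectors. Coordinates absent from the map are zero.
    (Illustrative sketch only, not the paper's implementation.)"""

    def __init__(self, coords, feats):
        # coords: (N, D) integer array, feats: (N, N_F) float array
        coords = np.asarray(coords, dtype=np.int64)
        feats = np.asarray(feats, dtype=np.float64)
        assert coords.shape[0] == feats.shape[0]
        self.D = coords.shape[1]
        self.nf = feats.shape[1]
        self.data = {tuple(c): f for c, f in zip(coords, feats)}

    def __getitem__(self, coord):
        # Return the stored feature, or a zero vector for empty locations.
        return self.data.get(tuple(coord), np.zeros(self.nf))

# A 4th-order sparse tensor with two non-zero elements and 2-dim features.
st = SparseTensor([[0, 0, 1, 2], [3, 1, 0, 0]], [[1.0, 2.0], [3.0, 4.0]])
print(st[(0, 0, 1, 2)])   # stored feature
print(st[(9, 9, 9, 9)])   # zero vector for an empty location
```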

A convolution on this generalized sparse tensor can then be defined as a simple extension of the generalized sparse convolution [8]:

f_out[u] = Σ_{i ∈ N^D(u, C_in)} W_i f_in[u + i]  for u ∈ C_out,

where C_in and C_out are the sets of input and output locations, which are predefined by the user; W_i is a weight matrix; and N^D(u, C_in) defines the set of neighbors of u, which is determined by the shape of the convolution kernel. For example, if the convolution kernel is a hypercube of size K, N^D(u, C_in) is the set of all non-zero elements of the input sparse tensor centered at u within the ℓ∞-ball of extent K.
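The generalized sparse convolution above can be sketched in a few lines. The reference implementation below is our own naive version (names are ours): it accumulates W_i f_in[u + i] over the kernel offsets that land on a non-zero input coordinate.

```python
import itertools
import numpy as np

def sparse_conv(coords_in, feats_in, coords_out, weights, offsets):
    """Generalized sparse convolution over a coordinate hash map:
    f_out[u] = sum_i W_i @ f_in[u + i], summing only over kernel
    offsets i that hit a non-zero input coordinate."""
    table = {tuple(c): np.asarray(f, float) for c, f in zip(coords_in, feats_in)}
    out = []
    for u in coords_out:
        acc = np.zeros(weights[0].shape[0])
        for w, off in zip(weights, offsets):
            nbr = tuple(np.add(u, off))
            if nbr in table:            # skip empty (zero) locations
                acc = acc + w @ table[nbr]
        out.append(acc)
    return np.array(out)

# 2D toy example: a 3x3 hypercubic kernel with all-ones 1x1 weight matrices.
offsets = list(itertools.product([-1, 0, 1], repeat=2))
weights = [np.ones((1, 1)) for _ in offsets]
out = sparse_conv([(0, 0), (1, 0)], [[1.0], [2.0]], [(0, 0)], weights, offsets)
print(out)  # [[3.]] - both non-zero inputs fall inside the kernel at (0, 0)
```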

3.2 Convolutional Networks

We design a high-dimensional fully-convolutional neural network for sparse tensors (a sparse tensor network) based on the generalized convolution [8, 10]. We use U-shaped networks [36] to capture large receptive fields while maintaining the original resolution at the final layer. The network has residual connections [18] within layers of the same resolution and across the network to speed up convergence and to recover the spatial resolution lost before the last layer. The network architecture is illustrated in Fig. 1.

For computational efficiency in high dimensions, we use the cross-shaped kernel [8] for all convolutions. We denote the kernel size by K. The cross-shaped kernel has non-zero weights only for the nearest neighbors along each axis, which results in one weight parameter for the center location and K − 1 weight parameters per axis. Note that a cross-shaped kernel is similar to a separable convolution, where a full convolution is approximated by D one-dimensional convolutions of size K. Both types of kernels are rank-1 approximations of the full hyper-cubic kernel with K^D weights, but a separable convolution requires K × D matrix multiplications per output, whereas the cross-shaped kernel requires only D(K − 1) + 1.
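The parameter counts implied by this comparison can be checked numerically. The snippet below is our own sketch, assuming the counts K^D for the hypercubic kernel and D(K − 1) + 1 for the cross-shaped kernel as stated above, evaluated for K = 3.

```python
def hypercubic_params(K, D):
    # Full hypercubic kernel: one weight per offset, K^D in total.
    return K ** D

def cross_params(K, D):
    # Cross-shaped kernel: one center weight plus (K - 1) per axis.
    return D * (K - 1) + 1

# Exponential vs. linear growth in the dimension D.
for D in (4, 8, 16, 32):
    print(D, hypercubic_params(3, D), cross_params(3, D))
```

At D = 32 the hypercubic kernel would need 3^32 ≈ 1.9 × 10^15 weights, while the cross-shaped kernel needs only 65, which is why the latter remains feasible in high dimensions.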

3.3 Implementation

We extend the implementation of Choy et al. [8], which supports arbitrary kernel shapes, to high-dimensional convolutional networks. To implement sparse tensor networks, we need an efficient data structure that can generate a new sparse tensor as well as find neighbors within a sparse tensor. Choy et al. [8] use a hash table that is efficient for both insertion and search. We replaced this hash table with a faster and more memory-efficient variant [1]. In addition, as the neighbor search can be parallelized, we create an iterator function that runs in parallel with OpenMP [29] by dividing the table into smaller parallelization blocks.

Lastly, U-shaped networks generate hierarchical feature maps that expand the receptive field. Choy et al. [8] use stride-2 convolutions with kernel size 2 to generate lower-resolution hierarchical feature maps. However, such an implementation requires iterating over at least 2^D elements within a hypercubic kernel, as the coordinates are stored in a hash table, which results in O(2^D N) complexity, where N is the cardinality of the input. Consequently, it becomes infeasible to store the kernel weights on the GPU for high-dimensional spaces. Instead of strided convolutions, we propose an efficient implementation of stride-2 sum pooling layers with kernel size 2. Instead of iterating over all possible neighbors, we iterate over all input coordinates and round them down to multiples of 2, which requires only O(N) complexity.
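The coordinate-rounding trick can be sketched as follows. This is our own minimal reference implementation of stride-2 sum pooling, not the paper's optimized code: one pass over the N input coordinates, with each coordinate floored to a multiple of 2.

```python
import numpy as np

def stride2_sum_pool(coords, feats):
    """Stride-2 sum pooling in O(N): round each integer coordinate down
    to a multiple of 2 and sum the features that land in the same cell,
    instead of iterating over all 2^D neighbors of every output."""
    pooled = {}
    for c, f in zip(coords, feats):
        key = tuple((x // 2) * 2 for x in c)
        pooled[key] = pooled.get(key, 0.0) + np.asarray(f, float)
    return list(pooled.keys()), [pooled[k] for k in pooled]

# (0, 0) and (1, 1) fall into the same cell; (2, 3) into another.
coords = [(0, 0), (1, 1), (2, 3)]
feats = [[1.0], [2.0], [4.0]]
out_coords, out_feats = stride2_sum_pool(coords, feats)
print(out_coords)  # [(0, 0), (2, 2)]
print(out_feats)   # [array([3.]), array([4.])]
```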

4 Geometric Pattern Recognition

Our approach uses convolutional networks to recognize geometric patterns in high-dimensional spaces. Specifically, we classify each point in a high-dimensional dataset as an inlier or an outlier. We start by validating our approach on synthetic datasets of varying dimensionality and then show results on 3D registration and essential matrix estimation.

Figure 1: A generic U-shaped high-dimensional convolutional network architecture. The numbers next to each block indicate kernel size, stride, and the number of channels. The strided convolutions that reduce the resolution of activations are shifted upward to indicate different levels of resolution.
Qi et al. [33] Zaheer et al. [49] + BN + IN Yi et al. [48] Ours
Dim. Inlier Ratio MSE F1 AP (AUC) MSE F1 AP (AUC) MSE F1 AP (AUC) MSE F1 AP (AUC)
4 15.59% 1.337 0.025 0.164 6.33E-4 0.867 0.936 9.11E-5 0.981 0.996 2.33E-5 0.998 0.999
8 5.54% 2.369 0.022 0.065 0.001 0.891 0.955 2.45E-4 0.946 0.989 1.64E-5 0.999 0.999
16 2.75% 3.854 0.012 0.034 4.86E-4 0.970 0.992 0.002 0.962 0.986 3.39E-5 0.999 0.999
24 0.40% 5.372 0.021 0.011 0.676 0.634 0.691 0.775 0.610 0.674 5.34E-5 0.994 0.996
32 0.07% 6.715 0.012 6.71E-5 - 0.0 0.295 - 0.0 0.050 0.010 0.669 0.689
Table 1:

Line detection in high-dimensional spaces in the presence of extreme noise. All networks are trained with the cross-entropy loss for 40 epochs. Inlier ratios are listed on the left. In the 32-dimensional setting, only 7 out of 10,000 data points are inliers. The table reports Mean Squared Error (MSE), F1 score, and Average Precision (AP) of our approach (a high-dimensional ConvNet) versus baselines (PointNet variants). MSE: lower is better. F1 and AP: higher is better.

Qi et al. [33] Zaheer et al. [49] + BN + IN Yi et al. [48] Ours
Dim. Inlier Ratio F1 AP (AUC) F1 AP (AUC) F1 AP (AUC) F1 AP (AUC)
4 29.96% 0.0 0.315 0.980 0.996 0.993 0.999 0.991 0.998
8 8.07% 0.0 0.088 0.985 0.999 0.990 0.999 0.998 0.999
16 0.34% 0.0 0.004 0.155 0.299 0.182 0.359 0.951 0.961
24 0.01% 0.0 1.61E-4 0.032 0.133 0.0 0.081 0.304 0.346
32 4.64E-3% 0.0 5.56E-5 0.0 0.221 0.0 0.023 0.138 0.240
Table 2: Plane detection in high-dimensional spaces in the presence of extreme noise. Inlier ratios are listed on the left. In the 32-dimensional setting, fewer than 5 out of 100,000 data points are inliers. The table reports the F1 score and Average Precision (AP) of our approach (a high-dimensional ConvNet) versus baselines (PointNet variants). Higher is better.
Qi et al. [33] Zaheer et al. [49] Yi et al. [48] Ours
Figure 2: 16D line detection projected to a 2D plane for visualization. Black dots are noise and blue dots are samples from a line in the 16D space. The dashed red line is the prediction of the respective method. Samples from the ground-truth line (blue) are enlarged by a factor of 10 for visualization.
Figure 3:

16D line detection, progression of training. We plot the running mean and standard deviation of precision, recall, and F1 score on the validation set. Our high-dimensional convolutional network quickly attains much higher accuracy than the baselines.

For all experiments, we first quantize the input coordinates to create a sparse tensor of order D + 1, where the last dimension denotes the feature channels. The network then predicts a logit score for each non-zero element in the sparse tensor to indicate whether a point is part of the geometric pattern or an outlier.

4.1 Line and Plane Detection

We first test the capabilities of our fully-convolutional networks on simple high-dimensional pattern recognition problems that involve detecting linear subspaces amidst noise. Our dataset consists of noise sampled uniformly from the D-dimensional space and a small number of samples from a line under a Gaussian noise model. The number of outliers increases exponentially with the dimension, O(L^D), while the number of inliers increases sublinearly, O(√D L), where L is the extent of the domain. Further details are given in the supplement. The network predicts a likelihood score for each non-zero element in the input sparse tensor, and we classify as inliers the points whose predicted probability exceeds 0.5. We estimate the line equation from the predicted inliers using unweighted least squares.
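The unweighted least-squares line fit used on the predicted inliers can be realized via total least squares. The sketch below is our own (using SVD): it recovers the direction of a noise-free line embedded in a 16-dimensional space.

```python
import numpy as np

def fit_line(points):
    """Fit a line through D-dim points by total least squares:
    center the points, then take the top right-singular vector of the
    centered matrix as the line direction. Returns (centroid, unit dir)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

# Noise-free sanity check in 16D: points c + t * d on a line.
rng = np.random.default_rng(0)
d = rng.normal(size=16)
d /= np.linalg.norm(d)
t = np.linspace(-1, 1, 50)
pts = 0.5 + t[:, None] * d          # centroid is the all-0.5 vector
c, d_hat = fit_line(pts)
print(abs(d_hat @ d))               # ~1.0: recovered direction parallel to d
```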

We use PointNet variants as the baselines for this experiment [33, 48, 49]. For Zaheer et al. [49], we were not able to get reasonable results with the network architecture proposed in the paper. We thus augmented the architecture with batch normalization and instance normalization layers after each linear transformation, similar to Yi et al. [48], which boosted performance significantly. For all experiments, we use the cross-entropy loss. We used the same training hyperparameters, including loss, batch size, optimizer, and learning rate schedule, for all approaches.

We use three metrics to analyze the performance of the networks: Mean Squared Error (MSE), F1 score, and Average Precision (AP). For the MSE, we estimate the line equation by fitting a least-squares line to the predicted inliers and measure its deviation from the ground truth. The second metric is the F1 score, the harmonic mean of precision and recall. In many problems, the F1 score is a direct indicator of the performance of a classifier, and we also found a strong correlation between the F1 score and the mean squared error. The final metric is Average Precision (AP), which measures the area under the precision-recall curve. We report the results in Tab. 1 and provide qualitative examples in Fig. 2. Tab. 1 lists the inlier ratio to indicate the difficulty of each task.

In a second experiment, we create another synthetic dataset where the inlier pattern is sampled from a plane spanned by two basis vectors. The two basis vectors are sampled uniformly from the D-dimensional unit hypercube. We use the same training procedure for our network and the baselines, and report the results in Tab. 2.

We found that the convolutional network is more robust to noise in high-dimensional spaces than the PointNet variants. In addition, the convolutional network training converges quickly, as shown in Fig. 3, which is a further indication that the architecture can effectively leverage the structure of the data.

4.2 3D Registration

A typical 3D registration pipeline consists of 1) feature extraction, 2) feature matching, 3) match filtering, and 4) global registration. In this section, we show that in the match filtering stage the correct (inlier) correspondences form a 6-dimensional geometric structure. We then extend our geometric pattern recognition networks to identify inlier correspondences in this 6-dimensional space.

Let X be a set of points sampled from a 3D surface, X ⊂ R³, and let Y be a subset of X that went through a rigid transformation T, Y = {T(x) | x ∈ X′} for some X′ ⊆ X. For example, Y could come from a 3D scan taken from a different perspective that has an overlap with X. We denote a correspondence between points x ∈ X and y ∈ Y as x ↔ y. When we form an ordered pair (x, y) ∈ R⁶, the ground-truth correspondences satisfy y = T(x) along the common 3D geometry, whereas an incorrect correspondence implies y ≠ T(x). For example, in Fig. 4, we visualize the ordered pairs formed from 1-dimensional sets. Note that the inliers follow the geometry of the inputs and form a line segment. Similarly, the geometry of (x, T(x)) for x ∈ X forms a surface in 6-dimensional space.

Theorem 1

The 6D representation (x, y) of a ground-truth 3D correspondence x ↔ y with y = Rx + t lies on the intersection of three hyperplanes, where each hyperplane is a row of the following block equation:

[R  −I] [x; y] = −t.

Proof: A ground-truth 3D correspondence (x, y) must satisfy y = Rx + t. We move y to the LHS and stack x and y into the 6D representation (x, y) to obtain three hyperplane equations, one per row of the block matrix.

Corollary 1.1

The rank of the block matrix [R −I] is 3, since both R and I are orthogonal matrices. Thus, the three hyperplanes defined by the block matrix intersect each other, and the 6D inlier correspondences form a 3D plane, since the solution of the intersection of three independent hyperplanes in the 6D space, {(x, y) ∈ R⁶ | [R −I][x; y] = −t}, is a 3-dimensional affine subspace.
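Theorem 1 and its corollary can be verified numerically. The sketch below is our own: it builds a random rigid motion, forms 6D inlier pairs, and checks that they satisfy the three hyperplane equations and that the block matrix has rank 3.

```python
import numpy as np

rng = np.random.default_rng(1)
# Random rigid motion: R from the QR decomposition of a Gaussian matrix.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = q * np.sign(np.linalg.det(q))   # force det(R) = +1 (proper rotation)
t = rng.normal(size=3)

x = rng.normal(size=(100, 3))       # 3D points
y = x @ R.T + t                     # ground-truth correspondences y = Rx + t

# Block matrix [R  -I]: every 6D inlier (x, y) satisfies [R -I](x;y) = -t.
A = np.hstack([R, -np.eye(3)])
residual = A @ np.hstack([x, y]).T + t[:, None]
print(np.abs(residual).max())       # ~0: inliers lie on all three hyperplanes
print(np.linalg.matrix_rank(A))     # 3: the solution set is a 3D affine subspace
```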

Figure 4: The set Y is a rigid translation of the set X, y = x + t. The ordered pairs of correspondences (blue) form a line segment, while outliers (red) form random noise outside the line.

We can thus use our high-dimensional convolutional network construction to segment the 6-dimensional set of correspondences into inliers and outliers by estimating the inlier likelihood for each correspondence.

Network. We use a 6-dimensional instantiation of the U-shaped convolutional network presented in Sec. 3. As the dimensionality is manageable, we use hypercubic kernels. The network takes an order-6 sparse tensor whose coordinates are the correspondences (x, y). We discretize the coordinates with the same voxel size used to extract features. Our baseline is Yi et al. [48], which takes dimensionless mean-centered correspondences without discretization. We train the networks to predict the inlier probability of each correspondence with the balanced cross-entropy loss.

Dataset. We use the 3DMatch dataset for this experiment [50]. The 3DMatch dataset is a composition of various 3D scan datasets [17, 45, 50] and thus covers a wide range of scenes and different types of 3D cameras. We integrate RGB-D images to form fragments of the scenes following [50]. During training, we randomly rotate each scene on the fly to augment the dataset. We use a popular hand-designed feature descriptor, FPFH [39], to compute correspondences. Note, however, that our pipeline is agnostic to the choice of feature and can also be used with learned features [10].

Inlier Ratio TE RE Succ. Rate TE RE Succ. Rate TE RE Succ. Rate TE RE Succ. Rate
Kitchen 1.62% 10.98 4.99 37.15 5.68 2.21 65.61 6.25 2.17 44.47 5.90 1.98 69.57
Home 1 2.71% 11.12 4.40 45.51 6.52 2.08 80.77 7.07 2.19 61.54 6.00 1.87 80.13
Home 2 2.83% 9.61 3.83 36.54 7.13 2.56 64.42 6.47 2.40 50.00 7.86 2.56 69.71
Hotel 1 1.35% 12.31 5.09 33.19 7.95 2.65 76.11 7.48 2.75 48.67 7.38 2.38 80.09
Hotel 2 1.54% 12.27 5.22 25.00 7.86 2.56 69.23 9.54 3.18 47.12 6.40 2.25 70.19
Hotel 3 1.59% 13.52 7.04 27.78 5.39 1.99 72.22 5.91 2.46 59.26 5.85 2.36 81.48
Study 0.87% 16.10 6.01 16.78 9.61 2.64 53.42 10.05 3.01 30.48 8.51 2.23 56.16
Lab 1.59% 10.48 4.80 42.86 7.69 2.44 61.04 8.01 2.31 45.45 6.64 2.12 68.83
Average 12.05 5.17 33.10 7.23 2.39 67.85 7.60 2.56 48.37 6.82 2.22 72.02
Table 3: Pairwise registration on 3DMatch test scenes with 2.5cm downsampling. Translation Error (TE, in cm), Rotation Error (RE, in degrees), and success rate (%). Registration is considered successful if TE < 30cm and RE < 15°.
Figure 5: The success rate of baseline methods and ours on the 3DMatch benchmark [50] with 5cm voxel size. FGR denotes registration with FPFH [39] and Zhou et al. [53]; Yi et al. + X denotes FPFH filtering with Yi et al. [48] followed by registration with X; and Ours + X denotes our method for filtering followed by registration with X.
Kitchen Bedroom
Livingroom Study
Figure 6: Visualization of color-coded correspondences before and after outlier filtering (Sec. 4.2). For each pair, we visualize 100 random correspondences from the candidate set on the left and 100 random correspondences after outlier pruning on the right. Red lines are outlier correspondences and blue lines are inlier correspondences. In the bottom-right pair, there are two identical chairs. The average inlier ratio is 1.76%.

We follow the standard procedures in the 3D registration literature to generate candidate correspondences. First, since 3D scans often exhibit irregular densities, we resample the input point clouds using a voxel grid to produce a regular point cloud. We use voxel sizes of 2.5cm and 5cm in our experiments. Next, we compute FPFH features and find the nearest neighbor of each point in feature space to form correspondences. The correspondences obtained from this procedure often exhibit a very low inlier ratio, as low as 0.87% with a 2.5cm voxel size. Among these correspondences, we regard (x, y) as an inlier if it satisfies ‖T(x) − y‖ < τ, and all others as outliers. We set τ to be two times the voxel size.
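The inlier-labeling criterion can be sketched as follows. This is our reading of the rule above, with τ = 2 × voxel size; the function name and array layout are our own.

```python
import numpy as np

def label_inliers(x, y, R, t, voxel_size):
    """Label candidate correspondences (x_i, y_i) as inliers when
    ||R x_i + t - y_i|| < tau with tau = 2 * voxel_size.
    x, y: (N, 3) arrays of corresponding 3D points."""
    tau = 2.0 * voxel_size
    residual = np.linalg.norm(np.asarray(x) @ np.asarray(R).T + t - y, axis=1)
    return residual < tau

# Identity transform: a 1cm error passes at 2.5cm voxels, a 1m error fails.
x = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
y = np.array([[0.01, 0.0, 0.0], [1.0, 0.0, 0.0]])
mask = label_inliers(x, y, np.eye(3), np.zeros(3), voxel_size=0.025)
print(mask)  # [ True False]
```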

Finally, we use a registration method to convert the filtered correspondences into a final registration result. We show results with two different registration methods. The first is Fast Global Registration [53], which directly minimizes a robust error metric. The second is a variant of RANSAC [15] that is specialized to 3D registration [54].

Evaluation. We use three standard metrics to evaluate registration performance: rotation error, translation error, and success rate. The rotation error measures the absolute angular deviation arccos((Tr(R̂ᵀR) − 1)/2) of the estimated rotation R̂ from the ground-truth rotation R. Similarly, the translation error ‖t̂ − t‖ measures the deviation of the estimated translation t̂ from the ground truth t. When we report these metrics, we exclude alignments whose errors exceed the thresholds, following [10], since the results of the registration methods [53, 15] can be arbitrarily bad when registration fails. Finally, the success rate is the ratio of successful registrations; a registration is considered successful if both rotation and translation errors are within the respective thresholds. For all experiments, we use a rotation error of 15 degrees and a translation error of 30cm as the thresholds.
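These metrics can be sketched as follows. The rotation error uses the standard geodesic formula arccos((Tr(R_gtᵀR_est) − 1)/2), which we assume here since the paper does not reproduce its exact expression; the threshold defaults match the values stated above.

```python
import numpy as np

def rotation_error_deg(R_est, R_gt):
    """Angular deviation between two rotation matrices, in degrees,
    via the standard geodesic formula (assumed, see lead-in)."""
    cos = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def translation_error(t_est, t_gt):
    return float(np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt)))

def success(R_est, t_est, R_gt, t_gt, re_deg=15.0, te_m=0.30):
    # A registration succeeds when both errors are within the thresholds.
    return (rotation_error_deg(R_est, R_gt) < re_deg
            and translation_error(t_est, t_gt) < te_m)

# 10-degree rotation about z with a 10cm translation error: a success.
a = np.radians(10.0)
Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
print(rotation_error_deg(Rz, np.eye(3)))                 # ~10.0
print(success(Rz, [0.1, 0, 0], np.eye(3), np.zeros(3)))  # True
```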

Tab. 3 shows the results of 3D registration pipelines with and without our network for filtering outliers. Note that for FGR [53], we observe a considerable improvement with our network, since FGR assumes relatively accurate correspondences as input. The improvement is smaller with RANSAC, since it is more robust to high outlier rates. Although the inlier ratio is as low as 1% for many 3D scene pairs, our network generates very accurate predictions. Similar to the linear regression experiments in Fig. 3, we find that the network converges very quickly. We compare to the model of Yi et al. [48] in Fig. 5 with 5cm voxel size to study the robustness of the 6-dimensional convolutional network to voxel size, and find that the convolutional network improves the registration success rate significantly even at the higher inlier ratio seen with a 5cm discretization resolution. Qualitatively, our network accurately filters out outliers even in the presence of extreme noise (Fig. 6).

4.3 Filtering Image Correspondences

In this section, we apply the high-dimensional convolutional network to image correspondence inlier detection. In the projective space P², an inlier correspondence must satisfy x′ᵀEx = 0, where E is an essential matrix, x denotes a normalized homogeneous coordinate x = K⁻¹u, u is the corresponding homogeneous image coordinate, and K is the camera intrinsic matrix. When we expand x′ᵀEx = 0, we get a quadrivariate quadratic function in the coordinates of x and x′. If there is a real-valued solution, there are infinitely many solutions that form either an ellipse, a parabola, or a hyperbola; these are known as conic sections. Thus, a set of ground-truth image correspondences forms a hyper conic section in 4-dimensional space. We use a convolutional network to predict the likelihood that a correspondence is an inlier.

Dataset: YFCC100M. We use the large-scale photo-tourism dataset YFCC100M [42] for this experiment. The dataset contains 100M Flickr images of tourist hot-spots with metadata, curated into 72 locations with camera extrinsics estimated using SfM [19]. We follow Zhang et al. [51] to generate the dataset and use 68 locations for training and the remainder for testing. We filter out image pairs that have fewer than 100 overlapping 3D points from SfM to guarantee non-zero overlap between images.

We use SIFT features [27] to create correspondences and label a correspondence as a ground-truth inlier if its symmetric epipolar distance, computed with the provided camera parameters, is below a threshold, i.e.,

|x′ᵀEx| (1/‖l₍₁:₂₎‖ + 1/‖l′₍₁:₂₎‖) < τ,

where l = Ex and l′ = Eᵀx′ are the homogeneous epipolar lines, and l₍₁:₂₎ denotes the first two components of l.
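The symmetric epipolar distance can be sketched as below. Since the exact normalization is not reproduced in the text, this sketch uses one common form: the algebraic residual |x′ᵀEx| scaled by the inverse norms of the two epipolar lines.

```python
import numpy as np

def symmetric_epipolar_distance(x1, x2, E):
    """One common form of the symmetric epipolar distance between
    normalized homogeneous points x1, x2 under essential matrix E
    (assumed normalization, see lead-in)."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    l1 = E @ x1                      # epipolar line in image 2
    l2 = E.T @ x2                    # epipolar line in image 1
    algebraic = abs(x2 @ E @ x1)
    return algebraic * (1.0 / np.linalg.norm(l1[:2])
                        + 1.0 / np.linalg.norm(l2[:2]))

# Pure-translation essential matrix E = [t]_x with t = (1, 0, 0).
E = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
d0 = symmetric_epipolar_distance([0, 0, 1], [0, 0, 1], E)    # perfect match
d1 = symmetric_epipolar_distance([0, 0, 1], [0, 0.1, 1], E)  # off-line match
print(d0, d1)
```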

Network. We convert a set of candidate image correspondences into an order-5 sparse tensor with four spatial dimensions and vectorized features. The coordinates are the normalized image coordinates (x, x′). We define integer coordinates by discretizing the normalized image coordinates with a quantization resolution of 0.01, and additionally use the continuous normalized coordinates as features. We compare against state-of-the-art baselines on the YFCC100M dataset and evaluate two variants of convolutional networks for this task. The first variant is a U-shaped convolutional network (Ours); the second is a ResNet-like network with spatial correlation modules (Ours + SC). Spatial correlation modules [51] are blocks of shared MLPs that encode global context from a set of correspondences.

Note that unlike the FPFH descriptor [39], which extracts features densely, SIFT features are extracted sparsely, at only a few keypoints in the image. In the 4-dimensional space of correspondences, the sparsity gets even more severe, as the volume increases multiplicatively while the number of points (correspondences) stays the same. Such sparsity leads to fewer neighbors in the high-dimensional space, which causes the convolution to degenerate toward a multi-layer perceptron. Our convolutional networks remain effective in this setting, but their distinctive ability to leverage local geometric structure is not fully utilized. To increase the density of correspondences in the 4-dimensional space, we use a dense fully-convolutional feature extractor, UCN [7]. We train the UCN on the YFCC100M training set for 100 epochs and follow the preprocessing stage outlined in the open-source version [9] to create correspondences. Training and testing with UCN follow the same standard procedure. We use the balanced cross-entropy loss [48] for all experiments.

Yi et al. [48] Zhang et al. [51] Ours Ours + SC Ours + UCN
Prec. Recall F1 Prec. Recall F1 Prec. Recall F1 Prec. Recall F1 Prec. Recall F1
Buckingham 0.497 0.772 0.605 0.486 0.889 0.629 0.535 0.822 0.648 0.611 0.835 0.705 0.769 0.892 0.826
Notre dame 0.581 0.894 0.705 0.629 0.951 0.757 0.647 0.915 0.758 0.721 0.929 0.812 0.844 0.954 0.896
Reichtag 0.747 0.877 0.807 0.734 0.917 0.815 0.695 0.911 0.789 0.769 0.897 0.827 0.856 0.937 0.895
Sacre coeur 0.658 0.871 0.750 0.662 0.948 0.780 0.632 0.917 0.748 0.718 0.932 0.811 0.855 0.948 0.899
Average 0.621 0.854 0.717 0.628 0.926 0.745 0.628 0.891 0.736 0.704 0.898 0.789 0.830 0.933 0.879
Table 4: Classification scores on the YFCC100M test set. We consider a correspondence a true positive if its symmetric epipolar distance is below the threshold and its predicted confidence is above 0.5.
Buckingham Notre Dame Sacre Coeur
Figure 7: Matching results using Yi et al. [48] (first row), Zhang et al. [51] (second row), Ours + SC (third row), and Ours + UCN (last row). We visualize correspondences with inlier probability above 0.5. We color a correspondence green if it is a true positive (symmetric epipolar distance below the threshold) and red otherwise.

Evaluation. We use precision, recall, and F1 score to evaluate correspondence classification accuracy, with the same symmetric epipolar distance threshold used to define ground-truth correspondences. Our network predicts a correspondence to be an inlier if its predicted probability is above 0.5. We provide quantitative results in Tab. 4 and qualitative results in Fig. 7. Our approaches outperform the PointNet variants [48, 51] as measured by the F1 score.

5 Conclusion

Many interesting problems in computer vision involve geometric patterns in high-dimensional spaces. We proposed high-dimensional convolutional networks for geometric pattern recognition. We presented a fully-convolutional network architecture that is efficient and able to find patterns in high-dimensional data even in the presence of severe noise. We validated the efficacy of our approach on tasks such as line and plane detection, 3D registration, and geometric filtering of image correspondences.


This work was partially supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 2020R1C1C1015260).


  • [1] M. Ankerl (2019) Robin hood hashing. GitHub. Note: https://github.com/martinus/robin-hood-hashing Cited by: §3.3.
  • [2] Y. Aoki, H. Goforth, R. Arun Srivatsan, and S. Lucey (2019) PointNetLK: robust & efficient point cloud registration using pointnet. In CVPR, Cited by: §2.
  • [3] E. Brachmann, A. Krull, S. Nowozin, J. Shotton, F. Michel, S. Gumhold, and C. Rother (2017) DSAC - differentiable RANSAC for camera localization. In CVPR, Cited by: §1, §2.
  • [4] E. Brachmann and C. Rother (2019) Neural-guided RANSAC: learning where to sample model hypotheses. In ICCV, Cited by: §2.
  • [5] T. Chin, P. Purkait, A. P. Eriksson, and D. Suter (2017) Efficient globally optimal consensus maximisation with tree search. IEEE Transactions on Pattern Anaysis and Machine Intelligence. Cited by: §2.
  • [6] S. Choi, Q. Zhou, and V. Koltun (2015) Robust reconstruction of indoor scenes. In CVPR, Cited by: §1, §1, §2.
  • [7] C. B. Choy, J. Gwak, S. Savarese, and M. Chandraker (2016) Universal correspondence network. In Advances in Neural Information Processing Systems, Cited by: §4.3.
  • [8] C. Choy, J. Gwak, and S. Savarese (2019) 4D spatio-temporal ConvNets: minkowski convolutional neural networks. In CVPR, Cited by: §3.1, §3.2, §3.2, §3.3, §3.3.
  • [9] C. Choy and J. Lee (2019) Open universal correspondence network. Note: https://github.com/chrischoy/open-ucn Cited by: §4.3.
  • [10] C. Choy, J. Park, and V. Koltun (2019) Fully convolutional geometric features. In ICCV, Cited by: §3.2, §4.2, §4.2.
  • [11] O. Chum and J. Matas (2005) Matching with PROSAC - progressive sample consensus. In CVPR, Cited by: §1, §2.
  • [12] A. Dai, M. Nießner, M. Zollhöfer, S. Izadi, and C. Theobalt (2017) BundleFusion: real-time globally consistent 3D reconstruction using on-the-fly surface re-integration. ACM Transactions on Graphics. Cited by: §2.
  • [13] Z. Dang, K. M. Yi, Y. Hu, F. Wang, P. Fua, and M. Salzmann (2018) Eigendecomposition-free training of deep networks with zero eigenvalue-based losses. In ECCV, Cited by: §2.
  • [14] W. Dong, J. Park, Y. Yang, and M. Kaess (2019) GPU accelerated robust scene reconstruction. In IROS, Cited by: §2.
  • [15] M. A. Fischler and R. C. Bolles (1981) Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM. Cited by: §2, §4.2, §4.2.
  • [16] A. W. Fitzgibbon (2003) Robust registration of 2D and 3D point sets. Image and Vision Computing. Cited by: §1, §2.
  • [17] B. Glocker, S. Izadi, J. Shotton, and A. Criminisi (2013) Real-time RGB-D camera relocalization. In ISMAR, Cited by: §4.2.
  • [18] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, Cited by: §1, §3.2.
  • [19] J. Heinly, J. L. Schonberger, E. Dunn, and J. Frahm (2015) Reconstructing the world in six days. In CVPR, Cited by: §4.3.
  • [20] M. B. Horowitz, N. Matni, and J. W. Burdick (2014) Convex relaxations of SE(2) and SE(3) for visual pose estimation. In ICRA, Cited by: §2.
  • [21] R. Hoseinnezhad and A. Bab-Hadiashar (2011) An m-estimator for high breakdown robust estimation in computer vision. Computer Vision and Image Understanding. Cited by: §1, §2.
  • [22] P. J. Huber (2011) Robust statistics. Springer. Cited by: §1, §2.
  • [23] S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In ICML, Cited by: §1.
  • [24] G. Izatt and R. Tedrake (2017) Globally optimal object pose estimation in point clouds with mixed-integer programming. In International Symposium on Robotics Research, Cited by: §2.
  • [25] K. Lebeda, J. Matas, and O. Chum (2012) Fixing the locally optimized RANSAC–full experimental evaluation. In British Machine Vision Conference, Cited by: §2.
  • [26] H. Li (2009) Consensus set maximization with guaranteed global optimality for robust geometry estimation. In ICCV, Cited by: §2.
  • [27] D. G. Lowe (2004) Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision. Cited by: §4.3.
  • [28] H. Maron, N. Dym, I. Kezurer, S. Kovalsky, and Y. Lipman (2016) Point registration via efficient convex relaxation.. ACM Transactions on Graphics. Cited by: §2.
  • [29] OpenMP Architecture Review Board (2008-05) OpenMP application program interface version 3.0. External Links: Link Cited by: §3.3.
  • [30] G. D. Pais, P. Miraldo, S. Ramalingam, V. M. Govindu, J. C. Nascimento, and R. Chellappa (2019) 3DRegNet: A deep neural network for 3D point registration. arXiv. Cited by: §2.
  • [31] J. Park, Q. Zhou, and V. Koltun (2017) Colored point cloud registration revisited. In ICCV, Cited by: §2.
  • [32] T. Probst, D. P. Paudel, A. Chhatkuli, and L. V. Gool (2019) Unsupervised learning of consensus maximization for 3D vision problems. In CVPR, Cited by: §2.
  • [33] C. R. Qi, H. Su, K. Mo, and L. J. Guibas (2017) PointNet: deep learning on point sets for 3D classification and segmentation. In CVPR, Cited by: §1, §2, Figure 2, §4.1, Table 1, Table 2.
  • [34] R. Raguram, O. Chum, M. Pollefeys, J. Matas, and J. Frahm (2013) USAC: A universal framework for random sample consensus. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §1, §2.
  • [35] R. Ranftl and V. Koltun (2018) Deep fundamental matrix estimation. In ECCV, Cited by: §1, §2.
  • [36] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In MICCAI, Cited by: §1, §3.2.
  • [37] D. M. Rosen, L. Carlone, A. S. Bandeira, and J. J. Leonard (2019) SE-sync: a certifiably correct algorithm for synchronization over the special euclidean group. International Journal of Robotics Research. Cited by: §2.
  • [38] P. J. Rousseeuw (1984) Least median of squares regression. Journal of the American Statistical Association. Cited by: §1, §2.
  • [39] R. B. Rusu, N. Blodow, and M. Beetz (2009) Fast point feature histograms (FPFH) for 3D registration. In ICRA, Cited by: §1, Figure 5, §4.2, §4.3.
  • [40] J. L. Schönberger and J. Frahm (2016) Structure-from-motion revisited. In CVPR, Cited by: §1.
  • [41] R. B. Tennakoon, A. Bab-Hadiashar, Z. Cao, R. Hoseinnezhad, and D. Suter (2016) Robust model fitting using higher than minimal subset sampling. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §1, §2.
  • [42] B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L. Li (2016) YFCC100M: the new data in multimedia research. Communications of the ACM. Cited by: §4.3.
  • [43] P. H. S. Torr (2002) Bayesian model estimation and selection for epipolar geometry and generic manifold fitting. International Journal of Computer Vision. Cited by: §2.
  • [44] P. H. Torr and A. Zisserman (2000) MLESAC: a new robust estimator with application to estimating image geometry. Computer Vision and Image Understanding. Cited by: §2.
  • [45] J. Xiao, A. Owens, and A. Torralba (2013) SUN3D: a database of big spaces reconstructed using SfM and object labels. In ICCV, Cited by: §4.2.
  • [46] J. Yang, H. Li, and Y. Jia (2013) Go-ICP: solving 3D registration efficiently and globally optimally. In ICCV, Cited by: §2.
  • [47] J. Yang, H. Li, and Y. Jia (2014) Optimal essential matrix estimation via inlier-set maximization. In ECCV, Cited by: §2.
  • [48] K. M. Yi, E. Trulls, Y. Ono, V. Lepetit, M. Salzmann, and P. Fua (2018) Learning to find good correspondences. In CVPR, Cited by: §1, §1, §2, Figure 2, Figure 5, Figure 7, §4.1, §4.2, §4.2, §4.3, §4.3, Table 1, Table 2, Table 4.
  • [49] M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Poczos, R. R. Salakhutdinov, and A. J. Smola (2017) Deep sets. In Advances in Neural Information Processing Systems, Cited by: §1, §2, Figure 2, §4.1, Table 1, Table 2.
  • [50] A. Zeng, S. Song, M. Nießner, M. Fisher, J. Xiao, and T. Funkhouser (2017) 3DMatch: learning local geometric descriptors from RGB-D reconstructions. In CVPR, Cited by: Figure 5, §4.2.
  • [51] J. Zhang, D. Sun, Z. Luo, A. Yao, L. Zhou, T. Shen, Y. Chen, L. Quan, and H. Liao (2019) Learning two-view correspondences and geometry using order-aware network. In ICCV, Cited by: §1, §1, §2, Figure 7, §4.3, §4.3, §4.3, Table 4.
  • [52] Z. Zhang (1998) Determining the epipolar geometry and its uncertainty: A review. International Journal of Computer Vision. Cited by: §2.
  • [53] Q. Zhou, J. Park, and V. Koltun (2016) Fast global registration. In ECCV, Cited by: §1, §2, §2, Figure 5, §4.2, §4.2, §4.2.
  • [54] Q. Zhou, J. Park, and V. Koltun (2018) Open3D: A modern library for 3D data processing. arXiv. Cited by: §1, §4.2.