1 Introduction
Solving the Perspective-n-Point (PnP) problem with unknown correspondences involves estimating the 6DoF absolute pose (rotation $R$ and translation $\mathbf{t}$) of the camera with respect to a reference coordinate frame, given a 2D point set from an image captured by the camera and a 3D point set of the environment in the reference frame. Importantly, the 2D–3D correspondences are unknown: any 2D point could correspond to any 3D point or to none. This is a nontrivial chicken-and-egg problem, since the estimation of correspondences and pose is coupled. Moreover, cross-modal correspondences between image pixels and 3D points are difficult to obtain. Even if the 2D and 3D sets are known to overlap, finding the specific correspondences between 2D and 3D points is an unsolved problem.
When correspondences are known, the problem reduces to the standard PnP problem [10, 17, 33, 19]. When correspondences are unknown, the problem is blind PnP, for which several traditional geometry-based methods were proposed, including SoftPOSIT [7], BlindPnP [24], GOPAC [3] and GOSMA [2]. The local methods [7, 24] require a good pose prior to find a reasonable solution, while the global methods [2, 3] systematically search the space of rotations and translations for a global optimum with respect to an objective function and are thus quite slow.
Instead of relying on a good prior on the camera pose, or exhaustively searching over all possible camera poses, we propose to estimate the correspondence matrix directly. Once the 2D–3D correspondences have been found, the camera pose can be recovered efficiently using an off-the-shelf PnP solver inside a RANSAC [8] framework. While a straightforward idea, finding the correspondence matrix is challenging because we need to identify inliers from a correspondence set with cardinality $mn$, where $m$ is the number of 3D points and $n$ is the number of 2D points. A naïve RANSAC-like search of this correspondence space [9] has prohibitive complexity [7]. We instead estimate the correspondence matrix directly using a CNN-based method that takes only the original 2D and 3D coordinates as input.
The proposed method extracts discriminative feature descriptors from the point sets that encode both local geometric structure and global context at each point. The intuition is that the local geometric structure about a point in 3D is likely to bear some resemblance to the local geometric structure of the corresponding point in 2D, modulo the effects of projection and occlusion. We then combine the features from each point set in a novel global feature matching module to estimate the 2D–3D correspondences. This module computes a weighting (joint probability) matrix using optimal transport, where each element describes the matchability of a particular 3D point with a particular 2D point. Sorting the 2D–3D matches in decreasing order by weight produces a prioritized match list, which can be used to recover the camera pose. To further disambiguate inlier and outlier 2D–3D matches from the prioritized match list, we append an inlier classification CNN similar to that of Yi
et al. [22] and use the filtered output to estimate the camera pose. Our correspondence estimation CNN is trained end-to-end, and the code and data will be released to facilitate future research. The overall framework is illustrated in Figure 1. Our contributions are:
a new deep method to solve the blind PnP problem with unknown 2D–3D correspondences. To the best of our knowledge, there is no existing deep method that takes unordered 2D and 3D point sets (with unknown correspondences) as inputs, and outputs a 6DoF camera pose;

a two-stream network to extract discriminative features from the point sets, which encodes both local geometric structure and global context; and

an original global feature matching network based on a recurrent Sinkhorn layer to find 2D–3D correspondences, with a loss function that maximizes the matching probabilities of inlier matches.
Our method achieves state-of-the-art performance, and is orders of magnitude faster than existing blind PnP approaches.
2 Related Work
When 2D–3D correspondences are known, PnP methods [10, 17, 33, 19]
can be used to solve the 6DoF pose estimation problem. When correspondences are unknown, the local optimization method SoftPOSIT
[7] iterates between solving for correspondences and solving for pose. The correspondences are estimated from a zero–one assignment matrix using Sinkhorn’s algorithm [29]. This method requires a good pose prior and can only find a locally-optimal pose within the convergence basin of the prior. BlindPnP [24] also relies on good pose priors to restrict the number of possible 2D–3D matches. To avoid getting trapped in local optima, the global methods GOPAC [3] and GOSMA [2] were proposed. Though guaranteed to find the optimum, they can only handle a moderate number of points (often hundreds), since the time-consuming branch-and-bound [18] algorithm is used. Furthermore, they are affected by geometric ambiguities, meaning that many different camera poses can be considered equivalently good. When 3D points are not utilized, the PoseNet algorithms [15, 14] can directly regress a camera pose. However, the accuracy of the regressed 6DoF poses is inferior to geometry-based methods that use 3D points.
With PointNet [26], deep networks can now handle sparse and unordered point sets. Though most PointNet-based works focus on classification and segmentation tasks [31, 26], traditional geometry problems can now be addressed as well, for example, 3D–3D registration [1], 2D–2D outlier correspondence rejection [22], and 2D–3D outlier correspondence rejection [6]. In contrast, this paper tackles the problem of PnP with unknown correspondences using a deep network, which has not previously been demonstrated. The key challenge is encoding sufficient information in pointwise features from the geometry of the points alone, and matching these 2D and 3D features effectively.
3 Learning Correspondences for Blind PnP
3.1 Problem Formulation
Let $\mathcal{P} = \{\mathbf{p}_i\}_{i=1}^{m}$ denote a 3D point set with $m$ points in the reference coordinate system, $\mathcal{Q} = \{\mathbf{q}_j\}_{j=1}^{n}$ denote a 2D point set with $n$ points in an image coordinate system, and $M \in \{0,1\}^{m \times n}$ denote the correspondence matrix between $\mathcal{P}$ and $\mathcal{Q}$. We assume that the camera is calibrated, and thus the intrinsic camera matrix $K$ [11] is known.
Blind PnP aims to estimate a rotation matrix $R \in SO(3)$ and a translation vector $\mathbf{t} \in \mathbb{R}^3$ which transform the 3D points in $\mathcal{P}$ to align with the 2D points in $\mathcal{Q}$. Specifically, $\tilde{\mathbf{q}}_j \propto K(R\mathbf{p}_i + \mathbf{t})$ for each corresponding pair $(\mathbf{p}_i, \mathbf{q}_j)$, where $\tilde{\mathbf{q}}_j$ is the homogeneous coordinate of $\mathbf{q}_j$. The difficult part of this problem is to estimate the correspondence matrix $M$. Once it is found, a traditional PnP algorithm can solve the problem. We propose to estimate $M$
using a deep neural network. Specifically, for each tentative 2D–3D match, we calculate a weight $W_{ij}$ describing the matchability of $\mathbf{p}_i$ and $\mathbf{q}_j$. We can obtain a set of 2D–3D matches by taking the Top-$K$ matches according to these weights. We present our method for extracting pointwise discriminative features in Section 3.2. We then describe our global feature matching method for obtaining 2D–3D match probabilities in Section 3.3. Finally, we provide our match refinement strategy using a classification CNN to further disambiguate inlier and outlier matches in Section 3.4.
3.2 Feature Extraction
To learn discriminative features $\mathbf{f}^{3D}_i$ and $\mathbf{f}^{2D}_j$ for each 3D point $\mathbf{p}_i$ and 2D point $\mathbf{q}_j$ respectively, we propose a two-stream network. One branch takes the 3D points of $\mathcal{P}$ as input and the other takes the 2D points of $\mathcal{Q}$. The two branches do not share weights. Both aim to encode information about the local geometric structure at each point as well as global contextual information. The detailed structure for a single branch is given in Figure 2.
Preprocessing: The 2D points are transformed to normalized coordinates using the camera intrinsic matrix to improve numerical stability [11]. The 3D points are aligned to a canonical direction, similarly to PointNet [26], which is beneficial for extracting features. Specifically, a transformation matrix is learned and applied to the original coordinates of 3D points.
Encoding Local Geometry: To extract pointwise local features, we first perform a $k$-nearest neighbor search and build a pointwise KNN graph. For a KNN graph around the anchor point indexed by $i$, the edges from the anchor to its neighbors capture the local geometric structure. Similar to EdgeConv [31], we concatenate the anchor and edge features, and then pass them through an MLP module to extract local features around the $i$-th point. Specifically, the operation is defined by
(1)  $\mathbf{f}_i = \operatorname{avg}_{j \in \mathcal{N}(i)} h\big( \theta(\mathbf{x}_j - \mathbf{x}_i) \,\big\|\, \phi\,\mathbf{x}_i \big)$

where $j \in \mathcal{N}(i)$ denotes that point $j$ is in the neighborhood of point $i$, $\theta$ and $\phi$ are MLP weights applied to the edge and anchor point respectively, $\|$ denotes concatenation, and avg($\cdot$) denotes average pooling over the neighborhood after the MLP to extract a single feature for point $i$. We detail the above operations in Figure 2. Note that for the first layer of our CNN, the MLP module lifts the raw 3D and 2D coordinates to the network’s feature dimension.
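As a concrete sketch, the aggregation in (1) can be emulated with plain numpy, using single linear maps in place of the learned MLPs (the function and weight names here are illustrative, not the paper's implementation; applying separate linear maps to the edge and anchor and summing them is equivalent to one linear map on their concatenation):

```python
import numpy as np

def local_features(points, knn_idx, W_edge, W_anchor):
    """Eq. (1)-style aggregation: apply linear maps (standing in for the
    MLPs) to the edge vectors and the anchor, ReLU, then average-pool
    over the k neighbors of each point.

    points:   (N, d) input coordinates (d = 3 or d = 2)
    knn_idx:  (N, k) indices of each point's k nearest neighbors
    W_edge, W_anchor: (d, c) weight matrices for edge and anchor terms
    """
    anchors = points[:, None, :]            # (N, 1, d)
    edges = points[knn_idx] - anchors       # (N, k, d) edge vectors
    feat = np.maximum(edges @ W_edge + anchors @ W_anchor, 0.0)  # (N, k, c)
    return feat.mean(axis=1)                # avg-pool over the neighborhood

# Toy usage: 5 random 3D points, k = 2 neighbors (indices chosen by hand).
rng = np.random.default_rng(0)
pts = rng.normal(size=(5, 3))
idx = np.array([[1, 2], [0, 2], [0, 1], [4, 0], [3, 0]])
f = local_features(pts, idx, rng.normal(size=(3, 8)), rng.normal(size=(3, 8)))
```

The output is one $c$-dimensional descriptor per point, regardless of the neighborhood ordering, which preserves permutation invariance.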
Encoding Global Context: After extracting pointwise local features, we aim to also embed global contextual information within them. We use Context Normalization [22] to share global information while remaining permutation invariant. This layer normalizes the feature distribution across the point set, applying the non-parametric operation $\mathbf{f}_i' = (\mathbf{f}_i - \boldsymbol{\mu}) / \boldsymbol{\sigma}$, where $\mathbf{f}_i$ is the $i$-th feature descriptor, and $\boldsymbol{\mu}$ and $\boldsymbol{\sigma}$
are the mean and standard deviation of the features across the point set. Context-normalized features are then passed through batch normalization, ReLU, and shared MLP layers to output the final pointwise features.
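Context Normalization itself is simple enough to sketch in a few lines of numpy (a minimal illustration; the real layer operates on intermediate CNN activations):

```python
import numpy as np

def context_normalize(feats, eps=1e-8):
    """Normalize each feature dimension across the point set:
    f_i' = (f_i - mu) / sigma. Non-parametric and permutation-invariant."""
    mu = feats.mean(axis=0, keepdims=True)
    sigma = feats.std(axis=0, keepdims=True)
    return (feats - mu) / (sigma + eps)

rng = np.random.default_rng(1)
F = rng.normal(loc=3.0, scale=2.0, size=(100, 16))  # 100 points, 16-dim
Fn = context_normalize(F)
```

Because the statistics are computed over the whole set, every point's output depends on every other point, which is how global context enters the otherwise local features.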
We replicate the local and global feature extraction modules multiple times with residual connections [12] to extract deep features. Finally, we normalize all feature vectors to unit length to embed them in a metric space.
3.3 Global Feature Matching
Given a learned feature descriptor per point in $\mathcal{P}$ and $\mathcal{Q}$, we perform global feature matching to estimate the likelihood that a given 2D–3D pair matches. To do so, we compute the pairwise distance matrix $C \in \mathbb{R}^{m \times n}$, which measures the cost of assigning 3D points to 2D points. Each element of $C$ is the distance between the features at points $\mathbf{p}_i$ and $\mathbf{q}_j$, i.e., $C_{ij} = \|\mathbf{f}^{3D}_i - \mathbf{f}^{2D}_j\|_2$. Furthermore, to model the likelihood that a given point has a match and is not an outlier, we define unary matchability vectors, denoted by $\mathbf{r}$ and $\mathbf{c}$ for the 3D and 2D sets respectively.
From $C$, $\mathbf{r}$ and $\mathbf{c}$ we estimate a weighting matrix $W \in \mathbb{R}^{m \times n}$, where each element $W_{ij}$ represents the matchability of the 3D–2D pair $(\mathbf{p}_i, \mathbf{q}_j)$. Note that each element is estimated from the full cost matrix $C$ and the unary matchability vectors $\mathbf{r}$ and $\mathbf{c}$, rather than locally from $C_{ij}$. In other words, the weighting matrix globally handles pairwise descriptor distance ambiguities in $C$, while respecting the unary priors. The overall pipeline is given in Figure 3.
Prior Matchability: For each point we define a prior unary matchability measuring how likely it is to have a match. Formally, let $r_i$ and $c_j$ denote the unary matchabilities of points $\mathbf{p}_i$ and $\mathbf{q}_j$
respectively. Collecting the matchabilities for all 3D or 2D points yields a matchability histogram, a 1D probability distribution, given by
$\mathbf{r} \in \Sigma_m$ and $\mathbf{c} \in \Sigma_n$, where the simplex in $\mathbb{R}^k$ is defined as $\Sigma_k = \{\mathbf{x} \in \mathbb{R}^k : x_i \geq 0,\ \sum_{i=1}^{k} x_i = 1\}$. We make the assumption that the unary matchabilities are uniformly distributed, that is, $r_i = 1/m$ and $c_j = 1/n$. This means that each point has the same prior likelihood of matching. While our model can predict non-uniform priors, we found that using learned priors led to overfitting.
Solving for $W$: From optimal transport theory [30, 5, 4], the joint probability matrix $W$ can be obtained by solving
(2)  $W = \operatorname*{arg\,min}_{W \in U(\mathbf{r}, \mathbf{c})} \ \langle W, C \rangle_F - \tfrac{1}{\lambda} E(W)$
where $\langle \cdot, \cdot \rangle_F$ is the Frobenius dot product and $U(\mathbf{r}, \mathbf{c})$ is the transport polytope that couples the two unary matchability vectors $\mathbf{r}$ and $\mathbf{c}$, given by
(3)  $U(\mathbf{r}, \mathbf{c}) = \big\{ W \in \mathbb{R}_{+}^{m \times n} : W\mathbf{1}_n = \mathbf{r},\ W^{\top}\mathbf{1}_m = \mathbf{c} \big\}$
where $\mathbf{1}_k$ denotes the $k$-dimensional vector of ones. The constraint on $W$ ensures that we distribute the matchability of each 3D point over all 2D points without altering the unary matchability of that point. The entropy regularization term facilitates efficient computation [5] and is defined by
(4)  $E(W) = -\sum_{i=1}^{m} \sum_{j=1}^{n} W_{ij} \log W_{ij}$
To solve (2), we use a variant of Sinkhorn’s Algorithm [29, 21], given in Algorithm 1. Unlike the standard algorithm, which generates a square, doubly-stochastic matrix, our version generates a rectangular joint probability matrix, whose existence is guaranteed [21].
Joint Probability Loss Function: To train the feature extraction and matching network, we apply a loss function to the weighting matrix $W$. Since $W$ models the joint probability distribution of 2D–3D matches, we can maximize the joint probability of inlier correspondences and minimize the joint probability of outlier correspondences using
(5)  $\mathcal{L}_m = \sum_{i=1}^{m} \sum_{j=1}^{n} \left( 1 - 2 M_{ij} \right) W_{ij}$
where the ground-truth correspondence matrix has $M_{ij} = 1$ if $(\mathbf{p}_i, \mathbf{q}_j)$ is a true correspondence and $M_{ij} = 0$ otherwise. The loss is bounded, with $-1 \leq \mathcal{L}_m \leq 1$, since $\sum_{i,j} W_{ij} = 1$.
If ground-truth correspondence labels are not available, they can be obtained in a weakly-supervised fashion by projecting the 3D points onto the image using the ground-truth camera pose and applying an inlier threshold.
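A minimal numpy sketch of the rectangular Sinkhorn normalization described above, assuming uniform priors and the entropy parameter $\lambda$ of (2) (`sinkhorn_rectangular` is an illustrative name, not the paper's code, and the iteration count here is arbitrary):

```python
import numpy as np

def sinkhorn_rectangular(C, r, c, lam=10.0, n_iters=200):
    """Entropy-regularized optimal transport on a rectangular m x n cost
    matrix C. r (m,) and c (n,) are the prior matchability histograms;
    the result W lies in the transport polytope U(r, c)."""
    K = np.exp(-lam * C)                 # Gibbs kernel for entropy weight lam
    u = np.ones_like(r)
    for _ in range(n_iters):             # alternating row/column scaling
        u = r / (K @ (c / (K.T @ u)))
    v = c / (K.T @ u)
    return u[:, None] * K * v[None, :]   # W = diag(u) K diag(v)

# Uniform priors over m = 3 3D points and n = 4 2D points, random costs.
rng = np.random.default_rng(0)
W = sinkhorn_rectangular(rng.random((3, 4)), np.full(3, 1/3), np.full(4, 1/4))
```

After convergence the row sums equal $\mathbf{r}$ and the column sums equal $\mathbf{c}$, so $W$ is a valid joint probability matrix summing to one.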
Remark 1: A common objective in geometric optimization is minimizing the reprojection error. With the estimated weighting matrix $W$, the weighted angular reprojection error is defined by:
(6)  $\mathcal{L}_r = \sum_{i=1}^{m} \sum_{j=1}^{n} W_{ij} \left( 1 - \overline{\tilde{\mathbf{q}}_j}^{\top}\, \overline{R^{\star}\mathbf{p}_i + \mathbf{t}^{\star}} \right)$
where $R^{\star}$ and $\mathbf{t}^{\star}$ are the ground-truth rotation and translation and $\overline{(\cdot)}$ denotes normalization to unit length. This loss minimizes the sum of weighted angular distances between the image rays and the rays connecting the camera center and the transformed 3D points $R^{\star}\mathbf{p}_i + \mathbf{t}^{\star}$. While this loss is geometrically meaningful, we will show in the experiments that it is inferior to our loss.
Remark 2: A common technique for learning discriminative cross-modal features is deep metric learning. We tested a triplet loss [13] that minimizes the feature distance between matchable 2D–3D pairs and maximizes the feature distance between non-matchable 2D–3D pairs, given by
(7)  $\mathcal{L}_t = \sum \max\left( \big\| \mathbf{f}^{a} - \mathbf{f}^{p} \big\|_2^2 - \big\| \mathbf{f}^{a} - \mathbf{f}^{n} \big\|_2^2 + \alpha,\ 0 \right)$
where the margin $\alpha$ is chosen empirically. We use the ground-truth labels to select, for each anchor $\mathbf{f}^{a}$, a positive example $\mathbf{f}^{p}$ at random from its matchable features, while the negative $\mathbf{f}^{n}$ is randomly selected from the non-matchable features. While this loss is effective, it also performs worse than our loss.
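For illustration, the hinge form of (7) can be sketched as follows (the helper name and margin value are illustrative, not the paper's settings):

```python
import numpy as np

def triplet_loss(f_anchor, f_pos, f_neg, margin=0.5):
    """Hinge triplet loss of Eq. (7): pull matchable 2D-3D descriptor pairs
    together and push non-matchable pairs at least `margin` apart."""
    d_pos = np.sum((f_anchor - f_pos) ** 2, axis=-1)
    d_neg = np.sum((f_anchor - f_neg) ** 2, axis=-1)
    return np.maximum(d_pos - d_neg + margin, 0.0).sum()

# A perfectly separated toy triplet incurs zero loss; a swapped one does not.
fa = np.array([[1.0, 0.0]])
loss_ok = triplet_loss(fa, f_pos=fa.copy(), f_neg=np.array([[-1.0, 0.0]]))
loss_bad = triplet_loss(fa, f_pos=np.array([[-1.0, 0.0]]), f_neg=fa.copy())
```

The loss is zero exactly when every positive pair is closer than its negative pair by at least the margin.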
Retrieving Correspondences: To retrieve the 2D–3D correspondences from the weighting matrix $W$, we test the following methods.
1. Top-$K$ Prioritized Matches: To obtain a list of prioritized matches, we either (i) reshape $W$ into a 1D correspondence probability vector and sort by decreasing probability; or (ii) reshape $C$ into a 1D correspondence distance vector and sort by increasing distance; and then retrieve the associated 2D and 3D point indices. Given this list of matches prioritized by weight or distance respectively, we truncate it to obtain the Top-$K$ matches. We denote the former by TopK_w and the latter by TopK_f. Instead of enforcing one-to-one 2D–3D matches, we defer disambiguation to the match refinement stage.
2. Nearest Neighbors (NNs): For each 2D point $\mathbf{q}_j$, we find its nearest 3D neighbor with respect to (i) the probability matrix, that is, $\arg\max_i W_{ij}$; or (ii) the regressed descriptors, that is, $\arg\min_i \|\mathbf{f}^{3D}_i - \mathbf{f}^{2D}_j\|_2$. We denote the former by NN_w and the latter by NN_f. This approach retrieves $n$ matches.
3. Mutual Nearest Neighbors (MNNs): We extend the previous approach by also enforcing a one-to-one constraint, that is, only keeping the correspondence $(\mathbf{p}_i, \mathbf{q}_j)$ if $\mathbf{p}_i$ is the nearest neighbor of $\mathbf{q}_j$ and $\mathbf{q}_j$ is the nearest neighbor of $\mathbf{p}_i$. We again compute nearest neighbors with respect to (i) the probability matrix (MNN_w); or (ii) the regressed descriptors (MNN_f). This approach retrieves at most $\min(m, n)$ correspondences.
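The three retrieval strategies, applied to the weighting matrix, can be sketched with numpy as follows (illustrative helpers operating on a toy weighting matrix; the paper's implementation may differ):

```python
import numpy as np

def topk_matches(W, k):
    """Top-K prioritized matches: flatten W, sort by decreasing weight."""
    flat = np.argsort(W, axis=None)[::-1][:k]
    return np.stack(np.unravel_index(flat, W.shape), axis=1)  # (k, 2) of (i, j)

def nn_matches(W):
    """For each 2D point j, the 3D point i with the largest weight."""
    return np.stack([W.argmax(axis=0), np.arange(W.shape[1])], axis=1)

def mnn_matches(W):
    """Mutual nearest neighbors: keep (i, j) only if i = argmax_i W[:, j]
    and j = argmax_j W[i, :]."""
    return np.array([(i, j) for i, j in nn_matches(W) if W[i].argmax() == j])

# Toy 2 x 2 weighting matrix with an unambiguous assignment.
W = np.array([[0.5, 0.1],
              [0.1, 0.3]])
```

On this toy matrix all three strategies agree; they differ on ambiguous rows and columns, which is where deferring disambiguation to the refinement stage matters.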
3.4 Correspondence Set Refinement
We now have a set of putative 2D–3D correspondences, some of which are outliers, and want to estimate the camera pose. This is the standard PnP problem with outlier correspondences, and may be solved using RANSAC [8] with a minimal P3P solver [10]. However, recent work [22, 6]
has shown that outliers can be filtered more efficiently using deep learning. Therefore, we apply the 2D–2D correspondence classification network from Yi
et al. [22] to reject outliers, with a modified input dimension and loss function.
Regression Loss: We directly regress the rotation $R$ and translation $\mathbf{t}$
using the weighted Direct Linear Transform (DLT). Given at least six correspondences
$(\mathbf{p}_i, \mathbf{q}_j)$, $R$ and $\mathbf{t}$ can be estimated by solving an SVD problem [11]. We first construct the linear system $A\mathbf{v} = \mathbf{0}$, where each 2D–3D match supplies two rows of $A$, giving
(8)  $\begin{bmatrix} \mathbf{p}_i^{\top} & 1 & \mathbf{0}^{\top} & 0 & -u_j \mathbf{p}_i^{\top} & -u_j \\ \mathbf{0}^{\top} & 0 & \mathbf{p}_i^{\top} & 1 & -v_j \mathbf{p}_i^{\top} & -v_j \end{bmatrix} \mathbf{v} = \mathbf{0}$
where $\mathbf{q}_j = (u_j, v_j)$ in normalized coordinates, $\mathbf{v} = [\mathbf{r}_1^{\top}\ t_1\ \mathbf{r}_2^{\top}\ t_2\ \mathbf{r}_3^{\top}\ t_3]^{\top} \in \mathbb{R}^{12}$, and $\mathbf{r}_i$ is the $i$-th row of the rotation matrix. The camera pose can be estimated by taking the SVD of $\operatorname{diag}(\mathbf{w})A$, where $\mathbf{w}$
is the vector of weights predicted by the classification network. The right singular vector associated with the smallest singular value is the solution $\mathbf{v}$,
from which $\hat{R}$ and $\hat{\mathbf{t}}$ can be assembled up to a sign ambiguity. Given ground-truth rotation $R^{\star}$ and translation $\mathbf{t}^{\star}$, we define our pose loss as
(9)  $\mathcal{L}_p = \big\| \hat{R} - R^{\star} \big\|_F + \big\| \hat{\mathbf{t}} - \mathbf{t}^{\star} \big\|_2$
Although we do not impose an orthogonality constraint on $\hat{R}$, minimizing the above loss function pushes $\hat{R}$ towards the rotation group $SO(3)$.
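A numpy sketch of the weighted DLT of (8), verified on noise-free synthetic data with uniform weights (function and variable names are illustrative; the recovered solution is defined only up to scale and sign, as noted above):

```python
import numpy as np

def weighted_dlt(p3d, p2d, w):
    """Sketch of the weighted DLT in Eq. (8). Each match (p_i, (u_j, v_j))
    contributes two rows of A; the pose vector v = [r1 t1 r2 t2 r3 t3] is
    the right singular vector of diag(w)A with the smallest singular value.
    p3d: (N, 3), p2d: (N, 2) normalized image coords, w: (N,) match weights."""
    rows = []
    for X, (u, v) in zip(p3d, p2d):
        rows.append(np.concatenate([X, [1.0], np.zeros(4), -u * X, [-u]]))
        rows.append(np.concatenate([np.zeros(4), X, [1.0], -v * X, [-v]]))
    A = np.asarray(rows) * np.repeat(w, 2)[:, None]   # weight both rows
    sol = np.linalg.svd(A)[2][-1]                     # null-space direction
    M = sol.reshape(3, 4)
    return M[:, :3], M[:, 3]                          # R and t, up to scale/sign

# Synthetic sanity check: identity rotation, translation along the z-axis.
rng = np.random.default_rng(0)
P = rng.uniform(-1.0, 1.0, size=(8, 3))
t_true = np.array([0.0, 0.0, 4.0])
Pc = P + t_true                          # camera-frame points (R = I)
Q = Pc[:, :2] / Pc[:, 2:3]               # normalized pinhole projection
R_est, t_est = weighted_dlt(P, Q, np.ones(8))
s = t_est[2] / t_true[2]                 # resolve the global scale and sign
```

Dividing by `s` recovers the identity rotation and the true translation, illustrating why the DLT solution only determines the pose up to a scale and sign that must be fixed afterwards.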
RANSAC and Nonlinear Optimization:
4 Experiments
Our experiments are conducted on both synthetic (ModelNet40 [32] and NYU-RGBD [25]) and real-world (MegaDepth [20]) datasets. Sample 3D and 2D point clouds from these datasets are given in the Appendix. We first validate the components of our pipeline and then compare it with state-of-the-art methods.
ModelNet40 [32]: We use the default train and test splits of CAD mesh models. We uniformly sample 3D points from the surface of each model and generate virtual camera viewpoints as follows: Euler rotation angles and translations are drawn uniformly at random, and a constant translation offset is applied along the viewing axis to ensure that all 3D points are in front of the camera. The 3D points are projected onto a virtual image plane, and Gaussian pixel noise is added to the 2D points, yielding the training and testing 2D–3D pairs.
NYU-RGBD [25]: We use train and test splits of aligned RGB and depth image pairs from the labeled NYU Depth V2 dataset. We uniformly sample 2D points from each RGB image, normalize the points using the intrinsic camera matrix, and find the corresponding 3D points in the depth image. We transform the 3D points using virtual rotations and translations generated in the same way as for the ModelNet40 dataset, without the translation offset, and add Gaussian pixel noise to the 2D points, yielding the training and testing 2D–3D pairs. Note that the scenes in the train and test sets do not overlap.
MegaDepth [20]: MegaDepth is a multi-view Internet photo dataset with multiple landmark scenes obtained from Flickr. It has diverse scene contents, image resolutions, 2D–3D point distributions, and camera poses. The dataset provides 3D point sets reconstructed using COLMAP [28], and 2D SIFT keypoints detected from images. We randomly select several landmarks, yielding the 2D–3D training and testing sets. The number of 2D–3D correspondences varies from tens to thousands. Note that the landmarks in the train and test sets do not overlap.
Evaluation metrics: We report the number of inlier 2D–3D matches among all matches found, using ground-truth correspondence labels. We also report the rotation error, given by $\arccos\big( (\operatorname{tr}(R^{\star\top}\hat{R}) - 1)/2 \big)$, where $R^{\star}$ is the ground-truth rotation and $\hat{R}$ is the estimated rotation, and the translation error, given by the distance between the estimated and ground-truth translation vectors.
We also calculate the recalls (percentage of poses) by varying predefined thresholds on rotation and translation error. For each threshold, we count the number of poses with error less than that threshold, and then normalise the number by the total number of poses.
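Both metrics are straightforward to compute; a minimal numpy sketch with illustrative helper names:

```python
import numpy as np

def rotation_error_deg(R_gt, R_est):
    """Angular distance arccos((trace(R_gt^T R_est) - 1) / 2), in degrees."""
    cos = np.clip((np.trace(R_gt.T @ R_est) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def recall_at(errors, thresholds):
    """Fraction of poses with error below each threshold."""
    errors = np.asarray(errors)
    return [float((errors < t).mean()) for t in thresholds]

# A 90-degree rotation about z as the estimate, identity as ground truth.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
err = rotation_error_deg(np.eye(3), Rz)
rec = recall_at([0.5, 2.0, 10.0], thresholds=[1.0, 5.0])
```

The `np.clip` guards against `arccos` domain errors from floating-point round-off when the two rotations are nearly identical.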
Implementation details:
Our 12-layer two-stream network is implemented in TensorFlow and is trained from scratch using the Adam optimizer [16], with fixed settings for the learning rate, batch size, per-layer output channel dimension, number of Sinkhorn iterations, entropy parameter $\lambda$, and number of neighbors $k$ in the KNN graph. We utilize a two-stage training strategy: we first train our feature extraction and matching network until convergence, and then train the classification network to refine the 2D–3D matches. Our model is trained on a single NVIDIA Tesla P40 GPU. Code and data will be released.
4.1 Synthetic Data Experiments
To validate the components of our pipeline, we perform experiments on the synthetic ModelNet40 [32] and NYU-RGBD [25] datasets.
The effectiveness of global matching: Given the pointwise regressed descriptors and the 2D–3D weighting matrix $W$, we have six methods for retrieving 2D–3D correspondences, as listed in Section 3.3: TopK_w, TopK_f, NN_w, NN_f, MNN_w and MNN_f. We calculate the number of inlier matches using ground-truth labels, and the results are shown in Figure 4. The number of inlier 2D–3D correspondences found by NN_w and MNN_w is consistently greater than the number found by NN_f and MNN_f respectively. This demonstrates that retrieving 2D–3D matches from the weighting matrix is better than performing nearest neighbor search using the regressed descriptors.
For the methods TopK_w and TopK_f, since we can truncate the prioritized 2D–3D matching list at any position $K$, we plot the curve showing the number of inliers with respect to the number of found 2D–3D correspondences. Again, TopK_w outperforms TopK_f. Interestingly, the match–inlier tuples found by NN_w and MNN_w lie very close to the TopK_w curve. We use the TopK_w method (omitting the subscript hereafter) in the remaining experiments, since it enables us to select a sufficient number of matches while also finding a large number of inliers. Our method finds fewer inlier correspondences on the ModelNet40 dataset than the NYU-RGBD dataset, since the virtual camera can only view part of the whole 3D model, with the remainder being occluded.
The effectiveness of 2D–3D classification: We evaluate the ability of the 2D–3D correspondence classification module to disambiguate inlier and outlier correspondences by running this network on the Top-$K$ matches from the prioritized 2D–3D match list. We calculate the average inlier ratio (#inliers/#matches) with respect to the number of found 2D–3D matches, as shown in Figure 5. The results demonstrate that the classification network significantly improves the average inlier ratio, which improves the pose estimation considerably, as shown in the next experiment.
Method   Error           ModelNet40              NYU-RGBD
                         Q1     Med.   Q3        Q1     Med.   Q3
TopK     Rot. err. (°)   4.316  10.85  19.30     0.303  0.448  0.645
         Trans. err.     0.041  0.088  0.196     0.014  0.022  0.033
TopKC    Rot. err. (°)   1.349  2.356  5.260     0.202  0.291  0.407
         Trans. err.     0.018  0.037  0.070     0.009  0.014  0.020
Table 1: Comparison of rotation and translation errors on the ModelNet40 and NYU-RGBD datasets. Q1 denotes the first quartile, Med. the median, and Q3 the third quartile.
Method          Rotation (°)            Translation
                Q1     Med.   Q3        Q1     Med.   Q3
P3P-RANSAC      90.82  138.5  164.8     0.433  1.147  3.077
SoftPOSIT [7]   16.10  21.75  28.00     0.332  0.488  0.719
GOSMA [2]       10.08  22.06  52.01     0.254  0.464  0.746
Ours            1.349  2.356  5.260     0.018  0.037  0.070
Estimating 6DoF pose: Once the 2D–3D matches have been established, we apply the P3P algorithm in a RANSAC framework to estimate the 6DoF camera pose. We compare two methods: (a) P3P with Top-$K$ matches; and (b) P3P with Top-$K$ matches and classification network filtering. The results are shown in Figure 6. Observe that after classification filtering we consistently obtain higher recall at each error threshold. The same trend is visible in the rotation and translation error statistics, shown in Table 1. Due to these conclusive results, we use the TopK + Classification (TopKC) method as our default configuration. The performance of state-of-the-art methods on ModelNet40 is presented in Table 2; see the Appendix for a full comparison on both synthetic datasets. Our method outperforms all others by a large margin, in addition to a substantial speedup.
Method          Rotation (°)            Translation
                Q1     Med.   Q3        Q1     Med.   Q3
P3P-RANSAC      66.64  122.1  155.4     6.796  15.18  28.18
SoftPOSIT [7]   1.806  21.39  165.4     0.242  1.532  6.101
GOSMA [2]       8.685  86.78  144.5     1.070  5.670  9.335
Ours            0.028  0.056  0.137     0.002  0.005  0.018
4.2 Real Data Experiments
We further demonstrate the effectiveness of our method at handling real-world data on the MegaDepth [20] dataset.
Comparison with state-of-the-art methods: The unavailability of Gaussian pose priors and the sheer number of 2D and 3D points preclude the use of BlindPnP [24] and GOPAC [3]. We compare our method against P3P-RANSAC with randomly-sampled 2D–3D correspondences, the state-of-the-art local solver SoftPOSIT [7], and the global solver GOSMA [2]. All methods are terminated after a fixed time budget per alignment, returning the best value found so far. Note that with this approach, GOSMA's guarantee of global optimality is traded off against its runtime. For randomly-sampled 2D–3D correspondences in P3P-RANSAC, the probability of finding a minimal all-inlier correspondence set approaches zero (see Appendix). Since SoftPOSIT requires a good prior pose, we simulate one by adding a small perturbation to the ground-truth pose, with the angular and translation perturbations drawn uniformly from small ranges. The configuration details of these methods are given in the Appendix. The performance for 6DoF pose estimation is shown in Figure 7 and Table 3. Our method outperforms the second-best method GOSMA, with median rotation and translation errors of 0.056° and 0.005 for our method, versus 86.78° and 5.670 for GOSMA. (SoftPOSIT is initialized with a good prior pose derived from the ground truth, and so cannot be compared directly.) The qualitative comparisons in Figure 8 show that the projection of the 3D points using our method's estimated pose aligns very well with the images. Our method is also far faster on average than SoftPOSIT, P3P-RANSAC and GOSMA, each of which was run up to its maximum allotted runtime.
Robustness to outliers: To demonstrate the effectiveness of our method at handling outliers, we add outliers to both the 3D and 2D point sets. Specifically, for original 3D and 2D point sets with cardinality $m$ and $n$, we add $\rho m$ and $\rho n$ outliers to the 3D and 2D point sets, respectively, for an outlier ratio $\rho$. We add two types of outliers: synthetic and real. Synthetic outliers are generated uniformly within the bounding boxes enclosing the 3D and 2D point sets; the rotation and translation errors with respect to the outlier ratio are given in Figure 9 (Left). For real outliers, 2D outliers are added from detected SIFT keypoints that do not have a matchable 3D point, and 3D outliers are added from 3D model points that do not have a matchable 2D point; the corresponding errors are given in Figure 9 (Right). The performance of our method degrades gracefully as the outlier ratio increases. See the Appendix for more comparisons.
Backbone networks: To demonstrate the effectiveness of our method at regressing pointwise descriptors, we compare four networks: PointNet [26], PointNet++ [27], CN-Net [22] and DGCNN [31]. The features from PointNet [26] and DGCNN [31] are taken before global pooling; CN-Net [22] and PointNet++ [27] generate pointwise features directly. For all networks, the output pointwise feature vectors are normalized to embed them in a metric space. The configuration details are given in the Appendix. We compute the average number of inliers with respect to the number of found 2D–3D matches for each backbone network, as shown in Figure 10 (Left). Our feature extraction network significantly outperforms all other networks, finding more inlier 2D–3D matches.
Loss functions: To show the effectiveness of our proposed inlier-set probability maximization loss, we compare it with two others: (a) the reprojection loss; and (b) the triplet loss. For the triplet loss, we use an exhaustive mini-batch strategy to maximize the number of triplets. Using the weighting matrices learned with the different losses, we calculate the average number of inliers with respect to the number of found 2D–3D matches, shown in Figure 10 (Right). The proposed loss outperforms the other losses, finding more inlier matches.
5 Conclusion
We have proposed a new deep network to solve the blind PnP problem of simultaneously estimating the 2D–3D correspondences and 6DoF camera pose from 2D and 3D points. The key idea is to extract discriminative pointwise feature descriptors that encode local geometric structure and global context, and use these to establish 2D–3D matches via a global feature matching module. The high-quality correspondences found by our method facilitate the direct application of traditional PnP methods to recover the camera pose. Our experiments demonstrate that our method significantly outperforms traditional geometry-based methods with respect to speed and accuracy.
In this Appendix, we first give additional experimental results and then describe implementation details.
Appendix 0.A Comparison with StateoftheArt Methods
In this section, we provide the full set of results for the synthetic ModelNet40 [32] and NYU-RGBD [25] datasets. We provide the 6DoF pose estimation results in Table 4. Our method outperforms all state-of-the-art methods by a large margin.
Method          Error           ModelNet40              NYU-RGBD
                                Q1     Med.   Q3        Q1     Med.   Q3
P3P-RANSAC      Rot. err. (°)   90.82  138.6  164.8     40.11  99.26  154.0
                Trans. err.     0.433  1.147  3.077     0.827  1.295  2.023
SoftPOSIT [7]   Rot. err. (°)   16.10  21.75  28.00     12.88  20.61  31.32
                Trans. err.     0.332  0.488  0.719     0.646  0.935  1.299
GOSMA [2]       Rot. err. (°)   10.08  22.06  52.01     1.364  3.184  21.98
                Trans. err.     0.254  0.464  0.746     0.126  0.212  0.688
Ours            Rot. err. (°)   1.349  2.356  5.260     0.202  0.291  0.407
                Trans. err.     0.018  0.037  0.070     0.009  0.014  0.020
Appendix 0.B Visualization of the Weighting Matrix
We provide a sample visualization of the weighting matrix $W$ in Figure 11 (left), with ground-truth 2D–3D matches on the diagonal. The corresponding convergence curve of the Sinkhorn algorithm is given in Figure 11 (right).
Appendix 0.C Sample 3D and 2D Point Clouds
Our experiments are conducted on both synthetic (ModelNet40 [32] and NYURGBD [25]) and realworld (MegaDepth [20]) datasets. Figure 12 presents sample 3D and 2D point clouds from these datasets.
Appendix 0.D More Qualitative Results
We give more qualitative results on the real-world MegaDepth [20] dataset in Figure 13. We add green borders to images if their corresponding poses are estimated with rotation error below a fixed threshold and translation error less than 0.5, and red borders to images whose estimated poses exceed either threshold.
Appendix 0.E Robustness to outliers
In the main paper, to demonstrate the effectiveness of our method at handling outliers, we added outliers to both the 3D and 2D point sets at the same time. To further test the robustness of our method to real-world outliers, we now add real-world outliers to the 3D and 2D point sets separately.
Specifically, for original 3D and 2D point sets with cardinality $m$ and $n$, we add $\rho_{3D}\,m$ and $\rho_{2D}\,n$ outliers to the 3D and 2D point sets, respectively, for outlier ratios $\rho_{3D}$ and $\rho_{2D}$. Note that the configuration of the main paper corresponds to $\rho_{3D} = \rho_{2D}$.
We first set $\rho_{2D} = 0$ (i.e., no outliers are added to the 2D points), and test the robustness of our method to real-world 3D outliers. We then set $\rho_{3D} = 0$ (i.e., no outliers are added to the 3D points), and test the robustness of our method to real-world 2D outliers. The rotation and translation errors with respect to the outlier ratio are given in Figure 14. We also include the results of the main paper for completeness.
The performance of our method degrades gracefully as the outlier ratio increases. Interestingly, our method is more robust to 3D outliers than to 2D outliers. A potential reason is that the geometric structure of the 2D point cloud is more easily destroyed by outliers than that of the 3D point cloud: the perspective projection of a pinhole camera does not preserve the Euclidean properties of the 3D geometric structure.
Appendix 0.F Implementation Details
0.F.1 State-of-the-Art Methods
GOSMA [2]:
We provide a translation domain to the GOSMA algorithm in the following way, in order to reduce its search space: (i) find the axis-aligned bounding box that includes all points in the 3D model excluding outliers (i.e., excluding points below the 2.5th and above the 97.5th percentiles); (ii) extend the bounding box to include the ground-truth camera position; and (iii) increase the size of the resulting bounding box by 10%. In this way, we ensure that the search space encompasses all reasonable camera positions. The runtime is capped at s per alignment for time considerations. As a result, the algorithm does not always converge to the global optimum or provide an optimality guarantee, since doing so can require minutes per alignment, which is impractical for a large dataset.
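The translation-domain construction above can be sketched in a few lines of NumPy. The function name and parameters are illustrative, not GOSMA's actual interface:

```python
import numpy as np

def translation_domain(points_3d, gt_position, trim_pct=2.5, inflate=0.10):
    """Sketch of the translation-domain construction:
    (i)   percentile-trimmed axis-aligned bounding box of the 3D model,
    (ii)  extended to contain the ground-truth camera position,
    (iii) inflated by 10% about its centre."""
    lo = np.percentile(points_3d, trim_pct, axis=0)          # step (i)
    hi = np.percentile(points_3d, 100.0 - trim_pct, axis=0)
    lo = np.minimum(lo, gt_position)                         # step (ii)
    hi = np.maximum(hi, gt_position)
    centre = 0.5 * (lo + hi)                                 # step (iii)
    half = 0.5 * (hi - lo) * (1.0 + inflate)
    return centre - half, centre + half
```

Because the box is extended before inflation, the ground-truth camera position is guaranteed to lie inside the returned domain.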
SoftPOSIT [7]:
SoftPOSIT requires an initial estimate of the rotation and translation. For the synthetic datasets ModelNet40 and NYU-RGBD, we use the mean translation and the average rotation to initialise SoftPOSIT. The median initial rotation error is and the median initial translation error is . For the real-world dataset MegaDepth, we set the initial pose by adding a perturbation to the ground-truth poses. The angular perturbation on rotation is uniformly drawn from degrees, and the perturbation on translation is uniformly drawn from . The number of iterations is set to , which corresponds to s for 2D/3D points in our experiments.
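The MegaDepth initialisation described above (perturbing the ground-truth pose) can be sketched as follows. The bounds `max_angle_deg` and `max_trans` are placeholders, since the paper's exact perturbation ranges are not reproduced here:

```python
import numpy as np

def perturb_pose(R_gt, t_gt, max_angle_deg, max_trans, rng):
    """Perturb a ground-truth pose by a random rotation of bounded angle
    (Rodrigues' formula about a random axis) and a random translation offset."""
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = np.deg2rad(rng.uniform(-max_angle_deg, max_angle_deg))
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])  # skew-symmetric cross-product matrix
    dR = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    dt = rng.uniform(-max_trans, max_trans, size=3)
    return dR @ R_gt, t_gt + dt
```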
P3P-RANSAC:
We compare our method against P3P-RANSAC with randomly sampled 2D–3D correspondences. When 2D–3D correspondences are sampled at random in P3P-RANSAC, the probability of finding inlier 2D–3D correspondences approximates zero within . The number of RANSAC iterations N is given by:

N = log(1 − p) / log(1 − ε^m), (10)

where p is the confidence level, ε is the ground-truth inlier ratio of 2D–3D correspondences, and m is the minimal number of 2D–3D correspondences for P3P (one more 2D–3D correspondence is used to prune the multiple solutions of P3P). For a moderate confidence , with the numbers of 2D and 3D points at (), the number of RANSAC iterations approximates .
Within , we can evaluate hypotheses, resulting in a RANSAC success ratio at .
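The iteration count of Eq. (10) can be evaluated in a few lines of Python. The specific values below are illustrative, not the paper's experimental settings:

```python
import math

def ransac_iterations(p, eps, m):
    """Standard RANSAC sample count: the smallest N with
    1 - (1 - eps**m)**N >= p, i.e. N = log(1 - p) / log(1 - eps**m)."""
    return math.ceil(math.log(1.0 - p) / math.log(1.0 - eps ** m))

# Illustrative values: with a 50% inlier ratio, 4-point samples and 99%
# confidence, 72 iterations suffice; a tiny inlier ratio, as arises when
# correspondences are sampled blindly, blows the count up by many orders
# of magnitude.
n_easy = ransac_iterations(p=0.99, eps=0.5, m=4)   # 72
n_hard = ransac_iterations(p=0.99, eps=1e-2, m=4)  # hundreds of millions
```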
0.F.2 Backbone Network Architectures
PointNet [26]:
The architecture is cropped from the segmentation model. The input point cloud is passed to a transformation network that regresses a transformation matrix, which is applied to each point. After this alignment stage, each point is passed through an MLP. The output features are then passed to a second transformation network that regresses a feature-transformation matrix, which is applied to each feature. After the feature alignment, the features are passed through a further MLP. The output feature of each point is normalized to embed it into a metric space.
PointNet++ [27]:
The architecture is cropped from the segmentation model. The input point cloud is passed through four set-abstraction (SA) modules followed by four feature-propagation (FP) layers. The output feature of each point is normalized.
DGCNN [31]:
The architecture is cropped from the part segmentation model. The number of nearest neighbors is set to . It contains three MLP blocks. In the first block, the point cloud is passed through a per-point MLP and local features are aggregated using max-pooling. In the second block, the features are passed through another MLP, and local features are again aggregated using max-pooling. In the last block, the features are passed through a final MLP, followed by max-pooling. The outputs of the three MLP blocks are concatenated and passed to a further MLP. The output feature of each point is normalized.
CN-Net [22]:
The architecture is cropped before the ReLU+Tanh operation. The output feature of each point is normalized.
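The per-point feature normalisation mentioned for all four backbones can be sketched as plain L2 normalisation. This is a NumPy sketch of the normalisation step only, not the training code:

```python
import numpy as np

def l2_normalize(features, eps=1e-8):
    """Project each per-point feature vector onto the unit hypersphere, so
    that feature similarity between 2D and 3D points lives in a common
    metric space. `eps` guards against division by zero."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    return features / np.maximum(norms, eps)

feats = np.random.default_rng(0).normal(size=(5, 32))
unit = l2_normalize(feats)  # every row now has unit L2 norm
```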
References

[1] (2019) PointNetLK: robust & efficient point cloud registration using pointnet. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7163–7172. Cited by: §2.
 [2] (2019) The alignment of the spheres: globally-optimal spherical mixture alignment for camera pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 11796–11806. Cited by: Table 4, §0.F.1, §1, §2, §4.2, Table 2, Table 3.
 [3] (2017) Globally-optimal inlier set maximisation for simultaneous camera pose and feature correspondence. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1–10. Cited by: §1, §2, §4.2.
 [4] (2016) Optimal transport for domain adaptation. IEEE transactions on pattern analysis and machine intelligence 39 (9), pp. 1853–1865. Cited by: Figure 1, §3.3.
 [5] (2013) Sinkhorn distances: lightspeed computation of optimal transport. In Advances in neural information processing systems, pp. 2292–2300. Cited by: Figure 1, §3.3.
 [6] (2018) Eigendecomposition-free training of deep networks with zero eigenvalue-based losses. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 768–783. Cited by: §2, §3.4.
 [7] (2004) SoftPOSIT: simultaneous pose and correspondence determination. International Journal of Computer Vision 59 (3), pp. 259–284. Cited by: Table 4, §0.F.1, §1, §1, §2, §4.2, Table 2, Table 3.
 [8] (1981) Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM 24 (6), pp. 381–395. Cited by: §1, §3.4, §3.4.
 [9] (1990) Object recognition by computer: the role of geometric constraints. Mit Press. Cited by: §1.
 [10] (1841) Das pothenotische Problem in erweiterter Gestalt nebst über seine Anwendungen in der Geodäsie. Grunerts Archiv für Mathematik und Physik, pp. 238–248. Cited by: §1, §2, §3.4, §3.4.
 [11] (2003) Multiple view geometry in computer vision. Cambridge university press. Cited by: §3.1, §3.2, §3.4.
 [12] (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §3.2.
 [13] (2018) CVM-Net: cross-view matching network for image-based ground-to-aerial geo-localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7258–7267. Cited by: §3.3.
 [14] (2017) Geometric loss functions for camera pose regression with deep learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5974–5983. Cited by: §2.
 [15] (2015) PoseNet: a convolutional network for real-time 6-DOF camera relocalization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2938–2946. Cited by: §2.
 [16] (2015) Adam: a method for stochastic optimization. In International Conference on Learning Representations (ICLR), Cited by: §4.
 [17] (2011) A novel parametrization of the perspective-three-point problem for a direct computation of absolute camera position and orientation. In CVPR 2011, pp. 2969–2976. Cited by: §1, §2.
 [18] (2010) An automatic method for solving discrete programming problems. In 50 Years of Integer Programming 1958–2008, pp. 105–132. Cited by: §2.
 [19] (2009) EPnP: an accurate O(n) solution to the PnP problem. International Journal of Computer Vision 81 (2), pp. 155. Cited by: §1, §2.
 [20] (2018) MegaDepth: learning singleview depth prediction from internet photos. In Computer Vision and Pattern Recognition (CVPR), Cited by: Figure 11, Appendix 0.C, Appendix 0.D, §4.2, §4, §4.
 [21] (1968) Scaling of matrices to achieve specified row and column sums. Numerische Mathematik 12 (1), pp. 83–90. Cited by: §3.3.
 [22] (2018) Learning to find good correspondences. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2666–2674. Cited by: §0.F.2, §1, §2, §3.2, §3.4, §4.2.
 [23] (1978) The Levenberg-Marquardt algorithm: implementation and theory. In Numerical Analysis, pp. 105–116. Cited by: §3.4.
 [24] (2008) Pose priors for simultaneously solving alignment and correspondence. In European Conference on Computer Vision, pp. 405–418. Cited by: §1, §2, §4.2.
 [25] (2012) Indoor segmentation and support inference from RGB-D images. In ECCV, Cited by: Appendix 0.A, Appendix 0.C, §4.1, §4, §4.
 [26] (2017) Pointnet: deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 652–660. Cited by: §0.F.2, §2, §3.2, §4.2.
 [27] (2017) PointNet++: deep hierarchical feature learning on point sets in a metric space. arXiv preprint arXiv:1706.02413. Cited by: §0.F.2, §4.2.
 [28] (2016) Structure-from-motion revisited. In Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.
 [29] (1964) A relationship between arbitrary positive matrices and doubly stochastic matrices. The annals of mathematical statistics 35 (2), pp. 876–879. Cited by: §2, §3.3.
 [30] (2009) Optimal transport–old and new, volume 338 of a series of comprehensive studies in mathematics. Springer. Cited by: Figure 1, §3.3.
 [31] (2018) Dynamic graph cnn for learning on point clouds. arXiv preprint arXiv:1801.07829. Cited by: §0.F.2, §2, §3.2, §4.2.
 [32] (2015) 3d shapenets: a deep representation for volumetric shapes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1912–1920. Cited by: Appendix 0.A, Appendix 0.C, §4.1, §4, §4.
 [33] (2013) Revisiting the pnp problem: a fast, general and optimal solution. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2344–2351. Cited by: §1, §2.