Solving the Perspective-n-Point (PnP) problem with unknown correspondences involves estimating the 6-DoF absolute pose (rotation and translation) of the camera with respect to a reference coordinate frame, given a 2D point set from an image captured by the camera and a 3D point set of the environment in the reference frame. Importantly, the 2D–3D correspondences are unknown: any 2D point could correspond to any 3D point, or to none. This is a non-trivial chicken-and-egg problem, since the estimation of correspondences and pose is coupled. Moreover, cross-modal correspondences between image pixels and 3D points are difficult to obtain. Even if the 2D and 3D sets are known to overlap, finding the specific correspondences between 2D and 3D points remains an unsolved problem.
When correspondences are known, the problem reduces to the standard PnP problem [10, 17, 33, 19]. When correspondences are unknown, the problem is blind PnP, for which several traditional geometry-based methods have been proposed, including SoftPOSIT, BlindPnP, GOPAC and GOSMA. The local methods [7, 24] require a good pose prior to find a reasonable solution, while the global methods [2, 3] systematically search the space of rotations and translations for a global optimum with respect to an objective function and are thus quite slow.
Instead of relying on a good prior on the camera pose, or exhaustively searching over all possible camera poses, we propose to estimate the correspondence matrix directly. Once the 2D–3D correspondences have been found, the camera pose can be recovered efficiently using an off-the-shelf PnP solver inside a RANSAC framework. While a straightforward idea, finding the correspondence matrix is challenging because we need to identify inliers from a correspondence set with cardinality mn, where m is the number of 3D points and n is the number of 2D points. A naive RANSAC-like search of this correspondence space has prohibitive combinatorial complexity. We instead estimate the correspondence matrix directly using a CNN-based method that takes only the original 2D and 3D coordinates as input.
The proposed method extracts discriminative feature descriptors from the point sets that encode both local geometric structure and global context at each point. The intuition is that the local geometric structure about a point in 3D is likely to bear some resemblance to the local geometric structure of the corresponding point in 2D, modulo the effects of projection and occlusion. We then combine the features from each point set in a novel global feature matching module to estimate the 2D–3D correspondences. This module computes a weighting (joint probability) matrix using optimal transport, where each element describes the matchability of a particular 3D point with a particular 2D point. Sorting the 2D–3D matches in decreasing order by weight produces a prioritized match list, which can be used to recover the camera pose. To further disambiguate inlier and outlier 2D–3D matches from the prioritized match list, we append an inlier classification CNN similar to that of Yi et al. and use the filtered output to estimate the camera pose. Our correspondence estimation CNN is trained end-to-end, and the code and data will be released to facilitate future research. The overall framework is illustrated in Figure 1. Our contributions are:
a new deep method to solve the blind PnP problem with unknown 2D–3D correspondences. To the best of our knowledge, there is no existing deep method that takes unordered 2D and 3D point-sets (with unknown correspondences) as inputs, and outputs a 6-DoF camera pose;
a two-stream network to extract discriminative features from the point sets, which encodes both local geometric structure and global context; and
an original global feature matching network based on a recurrent Sinkhorn layer to find 2D–3D correspondences, with a loss function that maximizes the matching probabilities of inlier matches.
Our method achieves state-of-the-art performance and is orders of magnitude faster than existing blind PnP approaches.
2 Related Work
When 2D–3D correspondences are known, standard PnP solvers can be used to solve the 6-DoF pose estimation problem. When correspondences are unknown, the local optimization method SoftPOSIT iterates between solving for correspondences and solving for pose. The correspondences are estimated from a zero–one assignment matrix using Sinkhorn's algorithm. This method requires a good pose prior and can only find a locally-optimal pose within the convergence basin of the prior. BlindPnP also relies on good pose priors to restrict the number of possible 2D–3D matches. To avoid getting trapped in local optima, the global methods GOPAC and GOSMA were proposed. Though guaranteed to find the optimum, they can only handle a moderate number of points (often hundreds) since the time-consuming branch-and-bound algorithm is used. Furthermore, they are affected by geometric ambiguities, meaning that many different camera poses can be considered equivalently good.
When 3D points are not utilized, the PoseNet algorithms [15, 14] can directly regress a camera pose. However, the accuracy of the regressed 6-DoF poses is inferior to geometry-based methods that use 3D points.
With PointNet, deep networks can now handle sparse and unordered point-sets. Though most PointNet-based works focus on classification and segmentation tasks [31, 26], traditional geometry problems can now also be addressed, for example 3D–3D registration, 2D–2D outlier correspondence rejection, and 2D–3D outlier correspondence rejection. In contrast, this paper tackles the problem of PnP with unknown correspondences using a deep network, which has not previously been demonstrated. The key challenge is encoding sufficient information in point-wise features from the geometry of the points alone, and matching these 2D and 3D features effectively.
3 Learning Correspondences for Blind PnP
3.1 Problem Formulation
Let P denote a 3D point set with m points in the reference coordinate system, Q denote a 2D point set with n points in an image coordinate system, and C denote the correspondence matrix between P and Q. We assume that the camera is calibrated, and thus the intrinsic camera matrix K is known.
Blind PnP aims to estimate a rotation matrix R and a translation vector t that transform the 3D points to align with the 2D points. Specifically, for a correspondence (p_j, q_i), the projection satisfies q̃_i ∝ K(R p_j + t), where q̃_i is the homogeneous coordinate of the 2D point q_i. The difficult part of this problem is to estimate the correspondence matrix C. Once it is found, a traditional PnP algorithm can solve the problem.
We propose to estimate C using a deep neural network. Specifically, for each tentative 2D–3D match (p_j, q_i), we calculate a weight W_ji describing the matchability of p_j and q_i. We can then obtain a set of 2D–3D matches by taking the Top-K matches according to these weights.
We present our method for extracting point-wise discriminative features in Section 3.2. We then describe our global feature matching method for obtaining 2D–3D match probabilities in Section 3.3. Finally, we provide our match refinement strategy using a classification CNN to further disambiguate inlier and outlier matches in Section 3.4.
3.2 Feature Extraction
To learn discriminative features for each 3D point p_j and 2D point q_i, we propose a two-stream network. One branch takes the 3D points from P as inputs and the other takes the 2D points from Q. The two branches do not share weights. Both aim to encode information about the local geometric structure at each point as well as global contextual information. The detailed structure for a single branch is given in Figure 2.
Pre-processing: The 2D points are transformed to normalized coordinates using the camera intrinsic matrix to improve numerical stability . The 3D points are aligned to a canonical direction, similarly to PointNet , which is beneficial for extracting features. Specifically, a transformation matrix is learned and applied to the original coordinates of 3D points.
Encoding Local Geometry: To extract point-wise local features, we first perform a k-nearest neighbor search and build a point-wise KNN graph. For a KNN graph around the anchor point indexed by i, the edges from the anchor to its neighbors capture the local geometric structure. Similar to EdgeConv, we concatenate the anchor and edge features, and then pass them through an MLP module to extract local features around the i-th point. Specifically, the operation is defined by
f_i = avg_{j : (i,j) ∈ E} ( θ(x_j − x_i) + φ(x_i) ),
where (i,j) ∈ E denotes that point j is in the neighborhood of point i, θ and φ are the MLP weights applied to the edge and anchor point respectively, and avg(·) denotes average pooling over the neighborhood after the MLP, extracting a single feature for point i. We detail the above operations in Figure 2. Note that for the first layer of our CNN, the MLP module lifts the 3D and 2D point coordinates to a higher feature dimension.
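As a concrete sketch of this kind of operation (not the paper's exact implementation), the MLPs θ and φ can be stood in for by single linear maps, with a ReLU and average pooling over the KNN neighborhood; all weights and sizes below are illustrative:

```python
import numpy as np

def knn_indices(points, k):
    """Indices of the k nearest neighbors of each point (excluding itself)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]            # (N, k)

def edge_conv(points, k, theta, phi):
    """One EdgeConv-style layer: combine edge (x_j - x_i) and anchor (x_i)
    features, apply a shared nonlinearity, then average-pool over neighbors."""
    nbrs = knn_indices(points, k)                  # (N, k)
    edges = points[nbrs] - points[:, None, :]      # x_j - x_i, shape (N, k, C)
    msg = edges @ theta.T + (points @ phi.T)[:, None, :]
    return np.maximum(msg, 0.0).mean(axis=1)       # ReLU, then avg over k

rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))                     # toy 3D point set
theta = rng.normal(size=(16, 3))                   # stand-in "MLP" weights
phi = rng.normal(size=(16, 3))
feats = edge_conv(pts, k=8, theta=theta, phi=phi)
print(feats.shape)  # (50, 16): one 16-D local feature per point
```

In the real network this layer is stacked, with learned multi-layer θ and φ, but the neighborhood-gather / transform / pool structure is the same.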
Encoding Global Context: After extracting point-wise local features, we aim to also embed global contextual information within them. We use Context Normalization to share global information while remaining permutation invariant. This layer normalizes the feature distribution across the point set, applying the non-parametric operation CN(f_i) = (f_i − μ)/σ, where f_i is the i-th feature descriptor, and μ and σ are the mean and standard deviation across the point set. Context-normalized features are then passed through batch normalization, ReLU, and shared MLP layers to output the final point-wise features.
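A minimal NumPy sketch of the non-parametric normalization step alone (the subsequent batch-norm/ReLU/MLP layers are omitted); the toy feature matrix is hypothetical:

```python
import numpy as np

def context_norm(features, eps=1e-8):
    """Normalize each channel across the point set: (f - mu) / sigma.
    Permutation invariant: reordering the rows reorders the output rows."""
    mu = features.mean(axis=0, keepdims=True)
    sigma = features.std(axis=0, keepdims=True)
    return (features - mu) / (sigma + eps)

f = np.random.default_rng(0).normal(loc=5.0, scale=2.0, size=(100, 16))
g = context_norm(f)
# Each channel is now (approximately) zero-mean, unit-variance:
print(np.allclose(g.mean(axis=0), 0.0, atol=1e-8),
      np.allclose(g.std(axis=0), 1.0, atol=1e-3))
```

Because μ and σ are statistics of the whole set, every output feature depends on every input point, which is how global context enters each point-wise descriptor.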
3.3 Global Feature Matching
Given a learned feature descriptor per point in P and Q, we perform global feature matching to estimate the likelihood that a given 2D–3D pair matches. To do so, we compute the pairwise distance matrix M, which measures the cost of assigning 3D points to 2D points. Each element of M is the distance between the features at points p_j and q_i, i.e., M_ji = ‖f_j − f_i‖. Furthermore, to model the likelihood that a given point has a match and is not an outlier, we define unary matchability vectors, denoted by r and c for the 3D and 2D sets respectively.
From M, r and c we estimate a weighting matrix W, where each element W_ji represents the matchability of the 3D–2D pair (p_j, q_i). Note that each element W_ji is estimated from the entire cost matrix M and the unary matchability vectors r and c, rather than locally from M_ji. In other words, the weighting matrix globally handles pairwise descriptor distance ambiguities in M, while respecting the unary priors. The overall pipeline is given in Figure 3.
Prior Matchability: For each point we define a prior unary matchability measuring how likely it is to have a match. Formally, let r_j and c_i denote the unary matchabilities of points p_j and q_i respectively. Collecting the matchabilities for all 3D or 2D points yields a matchability histogram, a 1D probability distribution, given by r ∈ Δ^m and c ∈ Δ^n, where the simplex Δ^d is defined as Δ^d = { x ∈ R^d : x_i ≥ 0, Σ_i x_i = 1 }.
We make the assumption that the unary matchabilities are uniformly distributed, that is, r = (1/m)1_m and c = (1/n)1_n. This means that each point has the same prior likelihood of matching. While our model can predict non-uniform priors, we found that using learned priors led to overfitting.
Optimal Transport: We estimate W as the solution of the entropy-regularized optimal transport problem
W = argmin_{W ∈ U(r, c)} ⟨W, M⟩_F − (1/λ) h(W),
where ⟨·,·⟩_F is the Frobenius dot product and U(r, c) is the transport polytope that couples the two unary matchability vectors r and c, given by
U(r, c) = { W ∈ R_+^{m×n} : W 1_n = r, Wᵀ 1_m = c },
where 1_d denotes the d-dimensional vector of ones. The constraint on W ensures that we assign the binary matchabilities of each 3D point to all 2D points without altering the unary matchability of the point. The entropy regularization term facilitates efficient computation via Sinkhorn's algorithm and is defined by h(W) = −Σ_{j,i} W_ji log W_ji. Unlike the standard algorithm, which generates a square, doubly-stochastic matrix, our version generates a rectangular joint probability matrix, whose existence is guaranteed by Sinkhorn's theorem.
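A minimal Sinkhorn sketch for a rectangular transport problem of this kind, assuming uniform unary matchabilities; the regularization weight `lam` and iteration count are illustrative, not the paper's settings:

```python
import numpy as np

def sinkhorn(M, r, c, lam=5.0, iters=1000):
    """Entropy-regularized OT: approximate the W in U(r, c) minimizing
    <W, M>_F - (1/lam) h(W), via alternating row/column scaling."""
    K = np.exp(-lam * M)                      # Gibbs kernel
    u = np.ones_like(r)
    for _ in range(iters):
        v = c / (K.T @ u)                     # enforce the column constraint
        u = r / (K @ v)                       # enforce the row constraint
    return u[:, None] * K * v[None, :]        # W = diag(u) K diag(v)

rng = np.random.default_rng(0)
m, n = 6, 4
M = rng.random((m, n))                        # pairwise descriptor distances
r = np.full(m, 1.0 / m)                       # uniform 3D matchabilities
c = np.full(n, 1.0 / n)                       # uniform 2D matchabilities
W = sinkhorn(M, r, c)
print(np.allclose(W.sum(axis=1), r), np.allclose(W.sum(axis=0), c))
```

Note that the matrix need not be square: the two scaling vectors simply have different lengths, and the result is a rectangular joint probability matrix whose marginals are r and c.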
Joint Probability Loss Function: To train the feature extraction and matching network, we apply a loss function to the weighting matrix W. Since W models the joint probability distribution of P and Q, we can maximize the joint probability of inlier correspondences and minimize the joint probability of outlier correspondences using
L = Σ_{j,i} (1 − 2 C*_ji) W_ji,
where the ground-truth correspondence matrix C* has C*_ji = 1 if (p_j, q_i) is a true correspondence and C*_ji = 0 otherwise. The loss is bounded, with −1 ≤ L ≤ 1, since Σ_{j,i} W_ji = 1.
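A loss of this form, i.e. outlier probability mass minus inlier probability mass, is a few lines of code; the 2×2 matrices below are toy values, not from the paper:

```python
import numpy as np

def joint_probability_loss(W, C_gt):
    """Outlier probability mass minus inlier probability mass."""
    inlier_mass = float((W * C_gt).sum())
    outlier_mass = float((W * (1.0 - C_gt)).sum())
    return outlier_mass - inlier_mass  # in [-1, 1] when W sums to 1

W = np.array([[0.4, 0.1],
              [0.1, 0.4]])   # toy joint probability matrix (sums to 1)
C_gt = np.eye(2)             # true matches on the diagonal
print(joint_probability_loss(W, C_gt))  # ≈ -0.6: mass is mostly on inliers
```

Minimizing this drives the transport mass onto ground-truth matches, so the loss approaches −1 as W concentrates on the inlier entries.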
If ground-truth correspondence labels are not available, they can be obtained in a weakly-supervised fashion by projecting the 3D points onto the image using the ground-truth camera pose and applying an inlier threshold.
Remark 1: A common objective in geometry optimization is minimizing the re-projection error. With the estimated weighting matrix W, the weighted angular reprojection error is defined by
E = Σ_{j,i} W_ji ∠( q̄_i , (R* p_j + t*) / ‖R* p_j + t*‖ ),
where R* and t* are the ground-truth rotation and translation, ·̄ denotes normalization, and ∠(·,·) is the angle between two rays. This loss minimizes the sum of weighted angular distances between the image rays and the rays connecting the camera center and the transformed 3D points R* p_j + t*. While this loss is geometrically meaningful, we show in the experiments that it is inferior to our loss.
Remark 2: A common technique for learning discriminative cross-modal features is deep metric learning. We tested a triplet loss that minimizes the feature distance between matchable 2D–3D pairs and maximizes the feature distance between non-matchable 2D–3D pairs, given by
L_t = max( ‖f_a − f_p‖² − ‖f_a − f_n‖² + α, 0 ),
where the margin α is chosen empirically. We use the ground-truth labels to select the positive feature f_p for each anchor f_a, while the negative feature f_n is randomly selected from the non-matchable features. While this loss is effective, it also performs worse than our loss.
Retrieving Correspondences: To retrieve the 2D–3D correspondences from the weighting matrix , we test the following methods.
1. Top-K Prioritized Matches: To obtain a list of prioritized matches, we either (i) reshape W into a 1D correspondence probability vector and sort by decreasing probability; or (ii) reshape the pairwise feature distances into a 1D correspondence distance vector and sort by increasing distance, and then retrieve the associated 2D and 3D point indices. Given this list of matches prioritized by weight or distance, respectively, we truncate it to obtain the Top-K matches. We denote the former by Top-K_w and the latter by Top-K_f. Instead of enforcing one-to-one 2D–3D matches, we defer disambiguation to the match refinement stage.
2. Nearest Neighbors (NNs): For each 2D point q_i, we find its nearest 3D neighbor with respect to (i) the probability matrix, that is, argmax_j W_ji; or (ii) the regressed descriptors, that is, argmin_j ‖f_j − f_i‖. We denote the former by NN_w and the latter by NN_f. This approach retrieves n matches, one per 2D point.
3. Mutual Nearest Neighbors (MNNs): We extend the previous approach by also enforcing a one-to-one constraint, that is, only keeping a correspondence if p_j is the nearest neighbor of q_i and q_i is the nearest neighbor of p_j. We again compute nearest neighbors with respect to (i) the probability matrix (MNN_w); or (ii) the regressed descriptors (MNN_f). This approach retrieves at most min(m, n) correspondences.
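The mutual-nearest-neighbor retrieval (the MNN_w variant) can be sketched directly from a weighting matrix; the toy matrix below is hypothetical:

```python
import numpy as np

def mutual_nn_from_weights(W):
    """Keep (j, i) only if j = argmax_j W[j, i] AND i = argmax_i W[j, i]."""
    best_3d_for_2d = W.argmax(axis=0)   # best 3D index for each 2D point
    best_2d_for_3d = W.argmax(axis=1)   # best 2D index for each 3D point
    return [(int(j), i) for i, j in enumerate(best_3d_for_2d)
            if best_2d_for_3d[j] == i]

W = np.array([[0.5, 0.1, 0.1],
              [0.1, 0.4, 0.3],
              [0.1, 0.1, 0.2]])         # toy weighting matrix (m = 3, n = 3)
print(mutual_nn_from_weights(W))        # [(0, 0), (1, 1)]
```

In this example the third 2D point is dropped: its best 3D point (row 1) prefers a different 2D point, so the one-to-one constraint rejects the pair.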
3.4 Correspondence Set Refinement
We now have a set of putative 2D–3D correspondences, some of which are outliers, and want to estimate the camera pose. This is the standard PnP problem with outlier correspondences, and may be solved using RANSAC with a minimal P3P solver. However, recent work [22, 6] has shown that outliers can be filtered more efficiently using deep learning. Therefore, we apply the 2D–2D correspondence classification network from Yi et al. to reject outliers, with a modified input dimension and loss function.
We directly regress the rotation R and translation t using the weighted Direct Linear Transform (DLT). Given at least six correspondences, R and t can be estimated by solving an SVD problem. We first construct the linear system A x = 0, where each 2D–3D match (p_j, q_i) with weight w_ji supplies two rows in A, giving
w_ji [ p_jᵀ 1 0ᵀ 0 −u_i p_jᵀ −u_i ; 0ᵀ 0 p_jᵀ 1 −v_i p_jᵀ −v_i ],
where q_i = (u_i, v_i), x = [r_1ᵀ t_1 r_2ᵀ t_2 r_3ᵀ t_3]ᵀ, and r_i is the i-th row of the rotation matrix. The camera pose can be estimated by taking the SVD of A, where the right singular vector associated with the smallest singular value yields x, from which R and t can be assembled up to a sign ambiguity. Given the ground-truth rotation R* and translation t*, we define our pose loss as the deviation of the estimated R and t from R* and t*.
Although we do not impose an orthogonality constraint on R, minimizing the above loss function pushes R towards the special orthogonal group SO(3).
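A minimal NumPy sketch of a weighted DLT of this kind, verified on a synthetic noise-free pose; the helper names, the scale/sign recovery, and the test pose are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def weighted_dlt(P, Q, w):
    """Weighted DLT: each normalized match (p, (u, v)) with weight w adds two
    rows to A; x = [r1 t1 r2 t2 r3 t3] is the right singular vector of A
    associated with the smallest singular value."""
    rows = []
    for p, (u, v), wi in zip(P, Q, w):
        px, py, pz = p
        rows.append(wi * np.array([px, py, pz, 1, 0, 0, 0, 0,
                                   -u*px, -u*py, -u*pz, -u]))
        rows.append(wi * np.array([0, 0, 0, 0, px, py, pz, 1,
                                   -v*px, -v*py, -v*pz, -v]))
    x = np.linalg.svd(np.stack(rows))[2][-1]
    Rt = x.reshape(3, 4)                      # rows: [r_i | t_i]
    if np.linalg.det(Rt[:, :3]) < 0:          # resolve the sign ambiguity
        Rt = -Rt
    s = np.cbrt(np.linalg.det(Rt[:, :3]))     # undo the unit-norm scaling
    return Rt[:, :3] / s, Rt[:, 3] / s

# Hypothetical check: exact correspondences under a random pose.
rng = np.random.default_rng(1)
R = np.linalg.qr(rng.normal(size=(3, 3)))[0]
if np.linalg.det(R) < 0:
    R[:, 0] *= -1                             # ensure a proper rotation
t = np.array([0.1, -0.2, 6.0])
P = rng.normal(size=(10, 3))
cam = P @ R.T + t
Q = cam[:, :2] / cam[:, 2:3]                  # normalized image coordinates
R_est, t_est = weighted_dlt(P, Q, np.ones(len(P)))
print(np.allclose(R_est, R, atol=1e-6), np.allclose(t_est, t, atol=1e-6))
```

With noisy, weighted matches the recovered 3×3 block is only approximately orthogonal, which is why a loss that pulls it towards SO(3) (or a projection onto SO(3)) is needed in practice.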
RANSAC and Nonlinear Optimization:
4 Experiments
Our experiments are conducted on both synthetic (ModelNet40 and NYU-RGBD) and real-world (MegaDepth) datasets. Sample 3D and 2D point clouds from these datasets are given in the Appendix. We first validate the components of our pipeline and then compare it with state-of-the-art methods.
ModelNet40 : We use the default train and test splits of and CAD mesh models respectively. We uniformly sample 3D points from the surface of each model and generate virtual camera viewpoints as follows: Euler rotation angles are uniformly drawn from , translations are uniformly drawn from , and a translation offset of is applied along the axis to ensure that all 3D points are in front of the camera. The 3D points are projected onto a virtual image plane with size and focal length and Gaussian pixel noise () is added to the 2D points. In total, training and testing 2D–3D pairs are generated.
NYU-RGBD : We use train and test splits of and aligned RGB and depth image pairs respectively from the labeled NYU Depth V2 dataset. We uniformly sample 2D points from each RGB image, normalize the points using the intrinsic camera matrix, and find the corresponding 3D points in the depth image. We transform the 3D points using virtual rotations and translations generated in the same way as for the ModelNet40 dataset, without the translation offset, and add Gaussian pixel noise () to the 2D points. In total, training and testing 2D–3D pairs are generated. Note that the scenes in the train and test sets do not overlap.
MegaDepth : MegaDepth is a multi-view Internet photo dataset with multiple landmark scenes obtained from Flickr. It has diverse scene contents, image resolutions, 2D–3D point distributions, and camera poses. The dataset provides 3D point sets reconstructed using COLMAP , and 2D SIFT keypoints detected from images. We randomly select several landmarks, yielding a total number of 2D–3D training sets and testing sets. The number of 2D–3D correspondences varies from tens to thousands. Note that the landmarks in the train and test sets do not overlap.
Evaluation metrics: We report the number of inlier 2D–3D matches among all matches found, using the ground-truth correspondence labels. We also report the rotation error, given by the angle of the residual rotation, arccos((trace(R*ᵀ R̂) − 1)/2), where R* is the ground-truth rotation and R̂ is the estimated rotation, and the translation error, given by the distance between the estimated and ground-truth translation vectors.
We also calculate recalls (percentage of poses) by varying pre-defined thresholds on the rotation and translation errors. For each threshold, we count the number of poses with error less than that threshold, and then normalize this count by the total number of poses.
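These metrics can be sketched as follows, assuming the standard angle-of-residual-rotation error; the example rotation and error lists are hypothetical:

```python
import numpy as np

def rotation_error_deg(R_gt, R_est):
    """Angle of the residual rotation R_gt^T R_est, in degrees."""
    cos = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def recall(errors, thresholds):
    """Fraction of poses with error below each threshold."""
    errors = np.asarray(errors)
    return [float((errors < th).mean()) for th in thresholds]

a = np.radians(10.0)                          # 10-degree rotation about z
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
print(rotation_error_deg(np.eye(3), Rz))      # ≈ 10.0
print(recall([1.0, 4.0, 12.0], [5.0, 15.0]))  # [0.666..., 1.0]
```

The `np.clip` guards against the trace argument drifting slightly outside [−1, 1] through floating-point error, which would otherwise make `arccos` return NaN.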
Our 12-layer two-stream network is implemented in TensorFlow and is trained from scratch using the Adam optimizer with a learning rate of and a batch size of . Every layer has an output channel dimension of . The number of Sinkhorn iterations is set to , is set to , and the number of neighbors in the KNN-graph is set to . We utilize a two-stage training strategy: we first train our feature extraction and matching network until convergence and then train the classification network to refine 2D–3D matches. Our model is trained on a single NVIDIA Tesla P40 GPU in days. Code and data will be released.
4.1 Synthetic Data Experiments
The effectiveness of global matching: Given the point-wise regressed descriptors and the 2D–3D weighting matrix W, we have six methods for retrieving 2D–3D correspondences, as listed in Section 3.3: Top-K_w, Top-K_f, NN_w, NN_f, MNN_w and MNN_f. We calculate the number of inlier matches using the ground-truth labels; the results are shown in Figure 4. The number of inlier 2D–3D correspondences found by NN_w and MNN_w is consistently greater than the number found by NN_f and MNN_f, respectively. This demonstrates that retrieving 2D–3D matches from the weighting matrix is better than performing nearest neighbor search on the regressed descriptors.
For the methods Top-K_w and Top-K_f, since we can truncate the prioritized 2D–3D matching list at the K-th position, we plot the curve showing the number of inliers with respect to the number of found 2D–3D correspondences. Again, the Top-K_w method outperforms Top-K_f. Interestingly, the match–inlier tuples found by NN_w and MNN_w lie very close to the Top-K_w curve. We use the Top-K_w method (omitting the subscript hereafter) in the remaining experiments, since it enables us to select a sufficient number of matches while also finding a large number of inliers. Our method finds fewer inlier correspondences on the ModelNet40 dataset than on the NYU-RGBD dataset, since the virtual camera can only view part of the whole 3D model, with the remainder being occluded.
The effectiveness of 2D–3D classification: We evaluate the ability of the 2D–3D correspondence classification module to disambiguate inlier and outlier correspondences by running this network on the Top-K matches from the prioritized 2D–3D match list. We calculate the average inlier ratio (#inliers/#matches) with respect to the number of found 2D–3D matches, as shown in Figure 5. The results demonstrate that the classification network significantly improves the average inlier ratio, which in turn improves the pose estimation considerably, as shown in the next experiment.
|Method|Metric|Q1|Med.|Q3|Q1|Med.|Q3|
|Top-K|Rot. err. (°)|4.316|10.85|19.30|0.303|0.448|0.645|
|Top-K-C|Rot. err. (°)|1.349|2.356|5.260|0.202|0.291|0.407|
Comparison of rotation and translation errors on the ModelNet40 (left) and NYU-RGBD (right) datasets. Q1 denotes the first quartile, Med. denotes the median, and Q3 denotes the third quartile.
Estimating 6-DoF pose: Once the 2D–3D matches have been established, we apply the P3P algorithm in a RANSAC framework to estimate the 6-DoF camera pose. We compare two methods: (a) P3P with the Top-K matches; and (b) P3P with the Top-K matches after classification network filtering. The results are shown in Figure 6. Observe that after classification filtering we consistently obtain larger recalls at each error threshold. The same trend is visible in the rotation and translation error statistics, shown in Table 4. Given these conclusive results, we use the Top-K + Classification (Top-K-C) method as our default configuration. The performance of state-of-the-art methods on ModelNet40 is presented in Table 2; see the Appendix for a full comparison on both synthetic datasets. Our method outperforms all others by a large margin, in addition to a significant speed-up.
4.2 Real Data Experiments
We further demonstrate the effectiveness of our method to handle real-world data on the MegaDepth  dataset.
Comparison with state-of-the-art methods: The unavailability of Gaussian pose priors and the sheer number of 2D and 3D points preclude the use of BlindPnP and GOPAC. We compare our method against P3P-RANSAC with randomly-sampled 2D–3D correspondences, the state-of-the-art local solver SoftPOSIT, and the global solver GOSMA. All methods are terminated after a fixed time budget per alignment, returning the best value found so far. Note that with this approach, GOSMA's guarantee of global optimality is traded off against its runtime. For randomly-sampled 2D–3D correspondences in P3P-RANSAC, the probability of finding a minimal set of inlier correspondences is approximately zero (see Appendix). Since SoftPOSIT requires a good prior pose, we simulate one by adding a small uniformly-drawn perturbation to the ground-truth rotation and translation. The configuration details of these methods are given in the Appendix. The performance for 6-DoF pose estimation is shown in Figure 7 and Table 3, which show that our method outperforms the second-best method GOSMA in terms of median rotation and translation errors (SoftPOSIT is initialized with a good prior pose from the ground-truth, and so cannot be compared directly). The qualitative comparisons in Figure 8 show that the projection of 3D points using our method's pose aligns very well with the images. Our average runtime is also far lower than that of SoftPOSIT, P3P-RANSAC and GOSMA, each of which was run up to the maximum runtime per alignment.
Robustness to outliers: To demonstrate the effectiveness of our method at handling outliers, we add outliers to both the 3D and 2D point-sets. Specifically, for original 3D and 2D point-sets with cardinalities m and n, we add ρm and ρn outliers to the 3D and 2D point-sets, respectively, for an outlier ratio ρ. We add two types of outliers: synthetic and real. Synthetic outliers are generated uniformly within the bounding boxes enclosing the 3D and 2D point-sets; the rotation and translation errors with respect to the outlier ratio are given in Figure 9 (Left). For real outliers, 2D outliers are added from detected SIFT keypoints that do not have a matchable 3D point, and 3D outliers are added from 3D model points that do not have a matchable 2D point; the rotation and translation errors with respect to the outlier ratio are given in Figure 9 (Right). The performance of our method degrades gracefully with an increasing outlier ratio. See the Appendix for more comparisons.
Backbone networks: To demonstrate the effectiveness of our method at regressing point-wise descriptors, we compare four networks: PointNet, PointNet++, CnNet and DGCNN. The features from PointNet and DGCNN are taken before global pooling; CnNet and PointNet++ generate point-wise features directly. For all networks, the output point-wise feature vectors are normalized to embed them in a metric space. The configuration details are given in the Appendix. We compute the average number of inliers with respect to the number of found 2D–3D matches using each backbone network, as shown in Figure 10 (Left). Our feature extraction network significantly outperforms all other networks, finding more inlier 2D–3D matches.
Loss functions: To show the effectiveness of our proposed inlier set probability maximization loss, we compare it with two others: (a) a reprojection loss; and (b) a triplet loss. For the triplet loss, we use an exhaustive mini-batch strategy to maximize the number of triplets. Using the weighting matrix learned using the different losses, we calculate the average number of inliers with respect to the number of found 2D–3D matches, shown in Figure 10 (Right). It shows that the proposed loss outperforms other losses, finding more inlier matches.
We have proposed a new deep network to solve the blind PnP problem of simultaneously estimating the 2D–3D correspondences and the 6-DoF camera pose from 2D and 3D points. The key idea is to extract discriminative point-wise feature descriptors that encode local geometric structure and global context, and to use these to establish 2D–3D matches via a global feature matching module. The high-quality correspondences found by our method facilitate the direct application of traditional PnP methods to recover the camera pose. Our experiments demonstrate that our method significantly outperforms traditional geometry-based methods with respect to both speed and accuracy.
In this Appendix, we first give additional experimental results and then describe implementation details.
Appendix 0.A Comparison with State-of-the-Art Methods
In this section, we provide the full set of results for the synthetic ModelNet40 and NYU-RGBD datasets. We provide the 6-DoF pose estimation results in Table 4, which shows that our method outperforms all state-of-the-art methods by a large margin.
|Method|Metric|Q1|Med.|Q3|Q1|Med.|Q3|
|P3P-RANSAC|Rot. err. (°)|90.82|138.6|164.8|40.11|99.26|154.0|
|SoftPOSIT|Rot. err. (°)|16.10|21.75|28.00|12.88|20.61|31.32|
|GOSMA|Rot. err. (°)|10.08|22.06|52.01|1.364|3.184|21.98|
|Ours|Rot. err. (°)|1.349|2.356|5.260|0.202|0.291|0.407|
Appendix 0.B Visualization of the Weighting Matrix
We provide a sample visualization of the weighting matrix in Figure 11 (left) with ground-truth 2D–3D matches on the diagonal. The corresponding convergence curve of the Sinkhorn algorithm is given in Figure 11 (right).
Appendix 0.C Sample 3D and 2D Point Cloud
Appendix 0.D More Qualitative Results
We give more qualitative results on the real-world MegaDepth dataset in Figure 13. We add green borders to images whose poses are estimated with a rotation error below the threshold and a translation error less than 0.5, and red borders to images whose poses exceed either threshold.
Appendix 0.E Robustness to outliers
In the main paper, to demonstrate the effectiveness of our method at handling outliers, we add outliers to both the 3D and 2D point-sets at the same time. To further test the robustness of our method to real-world outliers, we add real-world outliers to the 3D and 2D point-sets separately.
Specifically, for original 3D and 2D point-sets with cardinalities m and n, we add ρ_3D·m and ρ_2D·n outliers to the 3D and 2D point-sets, respectively, for outlier ratios ρ_3D and ρ_2D. Note that the configuration in the main paper corresponds to ρ_3D = ρ_2D.
We first set ρ_2D = 0 (i.e., no outliers are added to the 2D points) and test the robustness of our method to real-world 3D outliers. We then set ρ_3D = 0 (i.e., no outliers are added to the 3D points) and test the robustness of our method to real-world 2D outliers. The rotation and translation errors with respect to the outlier ratio are given in Figure 14. We also include the results from the main paper for completeness.
The performance of our method degrades gracefully with an increasing outlier ratio. Interestingly, our method is more robust to 3D outliers than to 2D outliers. A potential reason is that the geometric structure of the 2D point cloud is more easily destroyed by outliers than that of the 3D point cloud; note that the perspective projection of a pinhole camera does not preserve the Euclidean properties of the 3D geometric structure.
Appendix 0.F Implementation Details
0.f.1 State-of-the-Art Methods
We provide a translation domain to the GOSMA algorithm in the following way, in order to reduce its search space: (i) find the axis-aligned bounding box that includes all points in the 3D model excluding outliers (i.e., excluding the 2.5% percentile minimum and maximum); (ii) extend the bounding box to include the ground-truth camera position; and (iii) increase the size of the resulting bounding box by 10%. In this way, we ensure that the search space encompasses all reasonable camera positions. The runtime is capped per alignment for time considerations. As a result, the algorithm does not always converge to the global optimum or provide an optimality guarantee, since doing so can require minutes per alignment and is therefore impractical for a large dataset.
SoftPOSIT requires an initial estimate of the rotation and translation. For the synthetic datasets ModelNet40 and NYU-RGBD, we use the mean translation and the average rotation to initialize SoftPOSIT. For the real-world dataset MegaDepth, we set the initial pose by adding a perturbation to the ground-truth poses, with the angular perturbation on the rotation and the perturbation on the translation each drawn uniformly from a small range. The number of iterations is set so that the runtime matches the per-alignment time budget in our experiments.
We compare our method against P3P-RANSAC with randomly-sampled 2D–3D correspondences, for which the probability of finding a set of inlier 2D–3D correspondences within the time budget is approximately zero. The number of RANSAC iterations is given by
k = log(1 − p) / log(1 − w^s),
where p is the confidence level, w is the ground-truth inlier ratio of the 2D–3D correspondences, and s = 4 is the minimal number of 2D–3D correspondences for P3P (one additional correspondence is used to prune the multiple solutions of P3P). For a moderate confidence level and the number of 2D and 3D points in our experiments, the number of required RANSAC iterations is enormous.
Within the time budget, we can evaluate only a small fraction of these hypotheses, resulting in a RANSAC success ratio close to zero.
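The iteration-count formula can be sketched as follows; the confidence level and inlier ratios below are illustrative, not the experimental values:

```python
import math

def ransac_iterations(p, w, s):
    """k = log(1 - p) / log(1 - w^s): iterations needed to draw at least one
    all-inlier minimal sample of size s with confidence p and inlier ratio w."""
    return math.ceil(math.log(1.0 - p) / math.log(1.0 - w ** s))

# Hypothetical numbers: with a healthy 50% inlier ratio, P3P + 1 (s = 4)
# needs only a few dozen iterations...
print(ransac_iterations(0.99, 0.5, 4))    # 72
# ...but at a 1% inlier ratio the count explodes to roughly 4.6e8 iterations.
print(ransac_iterations(0.99, 0.01, 4))
```

The w^s term is what makes blind PnP hopeless for plain RANSAC: when correspondences are sampled at random from all mn pairs, w is tiny and k grows as 1/w^4.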
0.F.2 Backbone Network Architectures
The architecture is cropped from the segmentation model. The input point cloud is passed to a transformation network that regresses a matrix, which is applied to each point. After this alignment stage, the point cloud is passed to an MLP (,) network (with layer output sizes , ) applied to each point. The output features are then passed to a second transformation network that regresses a matrix, which is applied to each feature. After this feature alignment, the features are passed to an MLP (,,) network. The output feature of each point is normalized to embed it in a metric space.
The architecture is cropped from the segmentation model. The input point cloud is passed through set abstraction (SA) modules and feature propagation (FP) layers. The configurations of the SA modules are: SA (, , [,,]), SA (, , [,,]), SA (, , [,,]) and SA (, , [,,]). The configurations of the FP layers are: FP (,), FP (,), FP (,) and FP (,,). The output feature of each point is normalized.
The architecture is cropped from the part segmentation model. The number of nearest neighbors is set to . It contains MLP blocks. In the first MLP block, the point cloud is passed to an MLP (,) network (with layer output sizes , ) applied to each point, and local features are aggregated using max-pooling. In the second MLP block, the features are passed to an MLP (,) network, and local features are again aggregated using max-pooling. In the last MLP block, the features are passed to an MLP () network, followed by max-pooling. The outputs of the MLP blocks are concatenated and passed to an MLP () network. The output feature of each point is normalized.
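The max-pooling aggregation over each point's nearest neighbors can be sketched as follows. The neighborhood size `k` and the feature dimensions are unspecified in the text, so everything here is a placeholder; a brute-force distance computation is used for clarity:

```python
import numpy as np

def knn_max_pool(features, points, k):
    """Sketch of local max-pooling over the k nearest neighbors of each
    point, as applied after each MLP block (k is a free parameter here)."""
    # pairwise squared distances between points (brute force, O(n^2))
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]   # indices of the k nearest neighbors
    return features[idx].max(axis=1)      # per-point max over the neighborhood
```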
The architecture is cropped before the ReLU+Tanh operation. The output feature of each point is normalized.
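The final per-point normalization shared by all backbone variants can be sketched as below, assuming L2 normalization onto the unit sphere (the text does not specify the norm, so this is an assumption):

```python
import numpy as np

def normalize_features(f, eps=1e-8):
    """L2-normalize each point's output feature so the embeddings live on
    the unit sphere; a sketch of the final normalization step (assumed L2)."""
    return f / (np.linalg.norm(f, axis=-1, keepdims=True) + eps)
```

Unit-norm features make inner products equivalent to cosine similarity, which is convenient when matching 2D and 3D embeddings in a shared metric space.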
-  (2019) PointNetLK: robust & efficient point cloud registration using PointNet. In , pp. 7163–7172. Cited by: §2.
-  (2019) The alignment of the spheres: globally-optimal spherical mixture alignment for camera pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 11796–11806. Cited by: Table 4, §0.F.1, §1, §2, §4.2, Table 2, Table 3.
-  (2017) Globally-optimal inlier set maximisation for simultaneous camera pose and feature correspondence. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1–10. Cited by: §1, §2, §4.2.
-  (2017) Optimal transport for domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (9), pp. 1853–1865. Cited by: Figure 1, §3.3.
-  (2013) Sinkhorn distances: lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems, pp. 2292–2300. Cited by: Figure 1, §3.3.
-  (2018) Eigendecomposition-free training of deep networks with zero eigenvalue-based losses. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 768–783. Cited by: §2, §3.4.
-  (2004) SoftPOSIT: simultaneous pose and correspondence determination. International Journal of Computer Vision 59 (3), pp. 259–284. Cited by: Table 4, §0.F.1, §1, §1, §2, §4.2, Table 2, Table 3.
-  (1981) Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM 24 (6), pp. 381–395. Cited by: §1, §3.4, §3.4.
-  (1990) Object recognition by computer: the role of geometric constraints. MIT Press. Cited by: §1.
-  (1841) Das Pothenotische Problem in erweiterter Gestalt nebst über seine Anwendungen in der Geodäsie. Grunerts Archiv für Mathematik und Physik, pp. 238–248. Cited by: §1, §2, §3.4, §3.4.
-  (2003) Multiple view geometry in computer vision. Cambridge university press. Cited by: §3.1, §3.2, §3.4.
-  (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778. Cited by: §3.2.
-  (2018) CVM-Net: cross-view matching network for image-based ground-to-aerial geo-localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7258–7267. Cited by: §3.3.
-  (2017) Geometric loss functions for camera pose regression with deep learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5974–5983. Cited by: §2.
-  (2015) PoseNet: a convolutional network for real-time 6-DoF camera relocalization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2938–2946. Cited by: §2.
-  (2015) Adam: a method for stochastic optimization. In International Conference on Learning Representations (ICLR), Cited by: §4.
-  (2011) A novel parametrization of the perspective-three-point problem for a direct computation of absolute camera position and orientation. In CVPR 2011, pp. 2969–2976. Cited by: §1, §2.
-  (2010) An automatic method for solving discrete programming problems. In 50 Years of Integer Programming 1958-2008, pp. 105–132. Cited by: §2.
-  (2009) EPnP: an accurate O(n) solution to the PnP problem. International Journal of Computer Vision 81 (2), pp. 155. Cited by: §1, §2.
-  (2018) MegaDepth: learning single-view depth prediction from internet photos. In Computer Vision and Pattern Recognition (CVPR), Cited by: Figure 11, Appendix 0.C, Appendix 0.D, §4.2, §4, §4.
-  (1968) Scaling of matrices to achieve specified row and column sums. Numerische Mathematik 12 (1), pp. 83–90. Cited by: §3.3.
-  (2018) Learning to find good correspondences. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2666–2674. Cited by: §0.F.2, §1, §2, §3.2, §3.4, §4.2.
-  (1978) The levenberg-marquardt algorithm: implementation and theory. In Numerical analysis, pp. 105–116. Cited by: §3.4.
-  (2008) Pose priors for simultaneously solving alignment and correspondence. In European Conference on Computer Vision, pp. 405–418. Cited by: §1, §2, §4.2.
-  (2012) Indoor segmentation and support inference from RGBD images. In ECCV, Cited by: Appendix 0.A, Appendix 0.C, §4.1, §4, §4.
-  (2017) PointNet: deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 652–660. Cited by: §0.F.2, §2, §3.2, §4.2.
-  (2017) PointNet++: deep hierarchical feature learning on point sets in a metric space. arXiv preprint arXiv:1706.02413. Cited by: §0.F.2, §4.2.
-  (2016) Structure-from-motion revisited. In Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.
-  (1964) A relationship between arbitrary positive matrices and doubly stochastic matrices. The Annals of Mathematical Statistics 35 (2), pp. 876–879. Cited by: §2, §3.3.
-  (2009) Optimal transport: old and new, volume 338 of A Series of Comprehensive Studies in Mathematics. Springer. Cited by: Figure 1, §3.3.
-  (2018) Dynamic graph CNN for learning on point clouds. arXiv preprint arXiv:1801.07829. Cited by: §0.F.2, §2, §3.2, §4.2.
-  (2015) 3D ShapeNets: a deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1912–1920. Cited by: Appendix 0.A, Appendix 0.C, §4.1, §4, §4.
-  (2013) Revisiting the PnP problem: a fast, general and optimal solution. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2344–2351. Cited by: §1, §2.