1 Introduction
Finding and matching sparse 2D feature points across images has been a longstanding problem in computer vision [17]. Feature detection algorithms enable the creation of vivid 3D models from image collections [19, 53, 43], building maps for robotic agents [29, 30], recognizing places [42, 22, 33] and precise locations [23, 49, 39] as well as recognizing objects [24, 32, 36, 1, 2]. Naturally, the design of feature detection and description algorithms, subsumed as feature detection in the following, has received tremendous attention in computer vision research since its early days. Although invented two decades ago, the seminal SIFT algorithm [24] remains the gold standard feature detection pipeline to this day.
With the recent advent of powerful machine learning tools, some authors replace classical, feature-based vision pipelines by neural networks [20, 50, 4]. However, independent studies suggest that these learned pipelines have not yet reached the accuracy of their classical counterparts [44, 40, 56, 41], due to limited generalization abilities. Alternatively, one prominent strain of current research aims to keep the concept of sparse feature detection but replaces handcrafted designs like SIFT [24] with data-driven, learned representations. Initial works largely focused on learning to compare image patches to yield expressive feature descriptors [16, 47, 51, 27, 25, 48]. Fewer works attempt to learn feature detection [10, 5] or a complete architecture for feature detection and description [55, 33, 12]. Training of these methods is usually driven by optimizing low-level matching scores inspired by metric learning [51], with the necessity to define ground truth correspondences between patches or images. When evaluated on low-level matching benchmarks like HPatches [3], such methods regularly achieve highly superior scores compared to a SIFT baseline. HPatches [3] defines sets of matching image patches that undergo severe illumination and viewpoint changes. However, the increased accuracy in such matching tasks does not necessarily translate to increased accuracy in high-level vision pipelines. For example, we show that the state-of-the-art learned SuperPoint detector [12], while highly superior to SIFT [24] on HPatches [3], does not reach SIFT’s capabilities when estimating an essential matrix for an image pair. Similar observations were reported in earlier studies, where the supposedly superior learned LIFT detector [55] failed to produce richer reconstructions than SIFT [24] in a structure-from-motion pipeline [44].
Some authors took notice of the discrepancy between low-level training and high-level performance, and developed training protocols that mimic properties of high-level vision pipelines. Luo et al. [25] perform hard negative mining of training patches in a way that simulates the problem of self-similarity when matching at the image level. Revaud et al. [37] train a detector to find few but reliable key points. Similarly, Cieslewski et al. [10] learn to find key points with a high probability of being inliers in robust model fitting.
In this work, we take a more radical approach. Instead of handcrafting a training procedure that emulates aspects of high-level vision pipelines, we embed the feature detector in a complete vision pipeline during training. Particularly, our pipeline addresses the task of relative pose estimation, a central component in camera relocalization, structure-from-motion or SLAM. The pipeline incorporates key point selection, descriptor matching and robust model fitting. We do not need to predefine ground truth correspondences, dispensing with the need for hard-negative mining. Furthermore, we do not need to speculate whether it is more beneficial to find many matches or few, reliable matches. All these aspects are solely guided by the task loss, i.e. by minimizing the relative pose error between two images.
Key point selection and descriptor matching are discrete operations which cannot be directly differentiated. However, since many feature detectors predict key point locations as heat maps, we can reformulate key point selection as a sampling operation. Similarly, we lift feature matching to a distribution where the probability of a match stems from its descriptor distance. This allows us to apply principles from reinforcement learning [46] to directly optimize a high-level task loss. Particularly, all operations after the feature matching stage, e.g. robust model fitting, do not need to be differentiable, since they only provide a reward signal for learning. In summary, our training methodology puts few restrictions on the feature detection architecture or the vision task to be optimized for.
We demonstrate our approach using the SuperPoint detector [12], which regularly ranks among the top methods in independent evaluations [14, 5, 37]. We train SuperPoint for the task of relative pose estimation by robust fitting of the essential matrix. For this task, our training procedure closes the gap between SuperPoint and a state-of-the-art SIFT-based pipeline, see Fig. 1 for a comparison of results.
We summarize our main contributions:

• A new training methodology which allows for learning a feature detector and descriptor, embedded in a complete vision pipeline, to optimize its performance for a high-level vision task.

• We apply our method to a state-of-the-art architecture, SuperPoint [12], and train it for the task of relative pose estimation.
2 Related Work
Of all handcrafted feature detectors, SIFT [24] stands out for its long-lasting success. SIFT finds key point locations as difference-of-Gaussian filter responses in the scale space of an image, and describes features using histograms of oriented gradients [11]. Arandjelovic and Zisserman [1] improve the matching accuracy of SIFT by normalizing its descriptor, also called RootSIFT. Other handcrafted feature detectors improve efficiency for real-time applications while sacrificing as little accuracy as possible [6, 38].
MatchNet [16] is an early example of learning to compare image patches using a patch similarity network. The reliance on a network as a similarity measure prevents the use of efficient nearest neighbour search schemes. L2Net [47], and subsequent works, instead learn patch descriptors to be compared using the Euclidean distance. Balntas et al. [51] demonstrated the advantage of using a triplet loss for descriptor learning over losses defined on pairs of patches only. A triplet combines two matching and one non-matching patch, and the triplet loss optimizes relative distances within a triplet. HardNet [27] employs a “hardest-in-batch” strategy when assembling triplets for training, i.e. for each matching patch pair, they search for the most similar non-matching patch within a mini-batch. GeoDesc [25] constructs mini-batches for training that contain visually similar but non-matching patch pairs to mimic the problem of self-similarity when matching two images. SOSNet [48] uses second-order similarity regularization to enforce a structure of the descriptor space that leads to well-separated clusters of similar patches.
Learning feature detection has also started to attract attention recently. ELF [7] shows that feature detection can be implemented using gradient tracing within a pre-trained neural network. Key.Net [5] combines handcrafted and learned filters to avoid overfitting. The detector is trained using a repeatability objective, i.e. finding the same points in two related images, synthetically created by homography warping. SIPs [10] learns to predict a pixel-wise probability map of inlier locations as key points, an inlier being a correspondence which can be continuously tracked throughout an image sequence by an off-the-shelf feature tracker.
LIFT [55] was the first complete learning-based architecture for feature detection and description. It rebuilds the main processing steps of SIFT with neural networks, and is trained using sets of matching and non-matching image patches extracted from structure-from-motion datasets. DELF [33] learns detection and description for image retrieval, where coarse key point locations emerge by training an attention layer on top of a dense descriptor tensor. D2Net [14] implements feature detection and description by searching for local maxima in the filter response map of a pre-trained CNN. R2D2 [37] proposes a learning scheme for identifying feature locations that can be matched uniquely among images, avoiding repetitive patterns.

All mentioned learning-based works design training schemes that emulate difficult conditions for a feature detector when employed for a vision task. Our work is the first to directly embed a feature detector in a complete vision pipeline for training, where all real-world challenges occur naturally. We realize our approach using the SuperPoint [12] architecture, a fully convolutional CNN for feature detection and description, pre-trained on synthetic and homography-warped real images. In principle, our training scheme can be applied to architectures other than SuperPoint, like LIFT [55] or R2D2 [37], and also to separate networks for feature detection and description.
3 Method
As an example of a high-level vision task, we estimate the relative transformation $T = (R, t)$, with rotation $R$ and translation $t$, between two images $I$ and $I'$. We solve the task using sparse feature matching. We determine 2D key points $\mathbf{x}_i$, indexed by $i$, and compute a descriptor vector $\mathbf{d}(\mathbf{x}_i)$ for each key point. Using nearest neighbour matching in descriptor space, we establish a set of tentative correspondences $\mathcal{M}$ between images $I$ and $I'$. We solve for the relative pose based on these tentative correspondences by robust fitting of the essential matrix [18]. We apply a robust estimator like RANSAC [15] with a 5-point solver [31] to find the essential matrix which maximises the inlier count among all correspondences. An inlier is defined as a correspondence with a distance to the closest epipolar line below a threshold [18]. Decomposition of the essential matrix yields an estimate of the relative transformation $\hat{T} = (\hat{R}, \hat{t})$.

We implement feature detection using two networks: a detection network and a description network. In practice, we use a joint architecture, SuperPoint [12], where most weights are shared between detection and description. The main goal of this work is to optimize the learnable parameters $\mathbf{w}$ of both networks such that their accuracy for the vision task is enhanced. For our application, the networks should predict key points and descriptors such that the relative pose error between two images is minimized. Key point selection and feature matching are discrete, non-differentiable operations. Therefore, we cannot directly propagate gradients of our estimated transformation $\hat{T}$ back to update the network weights, as in standard supervised learning. Components of our vision pipeline, like the robust estimator (e.g. RANSAC [15]) or the minimal solver (e.g. the 5-point solver [31]), might also be non-differentiable. To still optimize the neural network parameters for our task, we apply principles from reinforcement learning [46]. We formulate feature detection and feature matching as probabilistic actions where the probability of taking an action, i.e. selecting a key point or matching two features, depends on the output of the neural networks. During training, we sample different instantiations of key points and their matchings based on the probability distributions predicted by the neural networks. We observe how well these key points and their matching perform in the vision task, and adjust the parameters of the networks such that a good outcome with a low loss becomes more probable. An overview of our approach is shown in Fig. 2.

In the following, we firstly describe how to reformulate key point selection and feature matching as probabilistic actions. Thereafter, we formulate our learning objective, and how to efficiently approximate it using sampling.
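The inlier criterion above can be made concrete with a short sketch. The following numpy snippet is a minimal illustration, not the paper's implementation; the threshold value is a placeholder. It scores correspondences in normalized camera coordinates by their symmetric epipolar distance under an essential matrix E:

```python
import numpy as np

def epipolar_inlier_mask(E, pts1, pts2, threshold=1e-3):
    """Mark correspondences whose symmetric epipolar distance under the
    essential matrix E falls below `threshold` (a placeholder value).
    pts1, pts2: Nx2 arrays of normalized camera coordinates."""
    ones = np.ones((pts1.shape[0], 1))
    x1 = np.hstack([pts1, ones])       # homogeneous points in image I
    x2 = np.hstack([pts2, ones])       # homogeneous points in image I'
    Ex1 = x1 @ E.T                     # epipolar lines in image I'
    Etx2 = x2 @ E                      # epipolar lines in image I
    x2Ex1 = np.sum(x2 * Ex1, axis=1)   # epipolar constraint residual
    # squared symmetric epipolar distance: residual normalized by the
    # line gradients in both images
    d = x2Ex1 ** 2 * (1.0 / (Ex1[:, 0] ** 2 + Ex1[:, 1] ** 2)
                      + 1.0 / (Etx2[:, 0] ** 2 + Etx2[:, 1] ** 2))
    return d < threshold ** 2
```

A robust estimator like RANSAC would evaluate this mask for every essential matrix hypothesis and keep the hypothesis with the largest inlier count.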
3.1 Probabilistic Key Point Selection
We assume that the detection network predicts a key point heat map $f(I; \mathbf{w})$ for an input image $I$, as is common in many architectures [55, 12, 37, 10]. Feature locations are usually selected from the heat map by taking all local maxima, combined with local non-max suppression to keep only the strongest responses.

To make key point selection probabilistic, we instead interpret the heat map as a probability distribution over key point locations, parameterized by the network parameters $\mathbf{w}$. We define a set of key points $\mathcal{X} = \{\mathbf{x}_i\}$ for image $I$ as sampled independently according to

$\mathbf{x}_i \sim P(\mathbf{x}; \mathbf{w}) = f(I; \mathbf{w}),$  (1)

see also Fig. 2, bottom left. Similarly, we define $\mathcal{X}'$ for image $I'$. We give the joint probability of sampling key points independently in each image as

$P(\mathcal{X}, \mathcal{X}'; \mathbf{w}) = \prod_{\mathbf{x} \in \mathcal{X}} P(\mathbf{x}; \mathbf{w}) \prod_{\mathbf{x}' \in \mathcal{X}'} P(\mathbf{x}'; \mathbf{w}).$  (2)
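Sampling key points according to Eq. 1 amounts to drawing pixel locations from the normalized heat map, treated as a categorical distribution. A minimal numpy sketch (function and argument names are our own, not the paper's code):

```python
import numpy as np

def sample_keypoints(heatmap, num_samples, rng=None):
    """Treat a detector heat map as a categorical distribution over pixel
    locations (Eq. 1) and draw `num_samples` key points independently."""
    rng = np.random.default_rng() if rng is None else rng
    probs = heatmap.ravel() / heatmap.sum()        # normalize to a distribution
    idx = rng.choice(probs.size, size=num_samples, p=probs)
    ys, xs = np.unravel_index(idx, heatmap.shape)  # back to 2D pixel coordinates
    return np.stack([xs, ys], axis=1)              # (x, y) key point locations
```

In the full pipeline the heat map would come from the detection network; here it can be any non-negative array.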
3.2 Probabilistic Feature Matching
We assume that a second description network predicts a feature descriptor $\mathbf{d}(\mathbf{x})$ for a given key point $\mathbf{x}$. To simplify notation, we use $\mathbf{w}$ to denote the learnable parameters associated with both feature detection and description. We define a feature match as a pairing of one key point from image $I$ and one from image $I'$, respectively: $m_{ij} = (\mathbf{x}_i, \mathbf{x}'_j)$. We give the probability of a match between two key points $\mathbf{x}_i$ and $\mathbf{x}'_j$ as a function of their descriptor distance,

$P(m_{ij} \mid \mathcal{X}, \mathcal{X}'; \mathbf{w}) = \frac{\exp\left[-\lVert \mathbf{d}(\mathbf{x}_i) - \mathbf{d}(\mathbf{x}'_j) \rVert\right]}{\sum_{k,l} \exp\left[-\lVert \mathbf{d}(\mathbf{x}_k) - \mathbf{d}(\mathbf{x}'_l) \rVert\right]}.$  (3)

Note that the matching probability is conditioned on the sets of key points which we selected in an earlier step. The matching distribution is normalized using all possible matches with $\mathbf{x}_k \in \mathcal{X}$ and $\mathbf{x}'_l \in \mathcal{X}'$. The matching distribution assigns a low probability to a match if the associated key points have very different descriptors. In turn, if the network wants to increase the probability of a (good) match during training, it has to reduce the descriptor distance of the associated key points relative to all other matches for the same image pair.

We define a complete set of matches $\mathcal{M} = \{m_{ij}\}$ between $\mathcal{X}$ and $\mathcal{X}'$, sampled independently according to

$P(\mathcal{M} \mid \mathcal{X}, \mathcal{X}'; \mathbf{w}) = \prod_{m_{ij} \in \mathcal{M}} P(m_{ij} \mid \mathcal{X}, \mathcal{X}'; \mathbf{w}).$  (4)
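The match distribution and the independent sampling of matches can be sketched in a few lines of numpy. The helper names are ours; a real implementation would restrict the normalization to nearest-neighbour candidates as discussed in Sec. 3.3:

```python
import numpy as np

def match_distribution(desc1, desc2):
    """Probability of each tentative match m_ij as a softmax over negative
    descriptor distances (cf. Eq. 3). desc1: NxD, desc2: MxD arrays."""
    dists = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    logits = -dists
    logits -= logits.max()          # shift for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()      # normalized over all N*M candidate pairs

def sample_matches(desc1, desc2, num_samples, rng=None):
    """Draw a set of matches independently from the distribution (cf. Eq. 4)."""
    rng = np.random.default_rng() if rng is None else rng
    probs = match_distribution(desc1, desc2)
    flat = rng.choice(probs.size, size=num_samples, p=probs.ravel())
    return np.stack(np.unravel_index(flat, probs.shape), axis=1)  # (i, j) pairs
```

Lowering the descriptor distance of one pair raises its probability at the expense of all other pairs, which is exactly the training signal described above.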
3.3 Learning Objective
We learn the network parameters $\mathbf{w}$ in a supervised fashion, i.e. we assume to have training data of the form $\{(I, I', T^{gt})\}$ with ground truth transformation $T^{gt}$. Note that we do not need ground truth key point locations, nor ground truth image correspondences.

Our learning formulation is agnostic to the implementation details of how the vision task is solved using the tentative image correspondences $\mathcal{M}$. We treat the exact processing pipeline after the feature matching stage as a black box which produces only one output: a loss value $\ell(\mathcal{X}, \mathcal{X}', \mathcal{M})$, which depends on the key points $\mathcal{X}$ and $\mathcal{X}'$ that we selected for both images, and the matches $\mathcal{M}$ that we selected among the key points. For relative pose estimation, calculating $\ell$ entails robust fitting of the essential matrix, its decomposition to yield an estimated relative camera transformation $\hat{T}$, and its comparison to the ground truth transformation $T^{gt}$. We only require the loss value itself, not its gradients.

Our training objective aims at reducing the expected task loss when sampling key points and matches according to the probability distributions parameterized by the learnable parameters $\mathbf{w}$:

$\mathcal{L}(\mathbf{w}) = \mathbb{E}_{\mathcal{X}, \mathcal{X}' \sim P(\mathcal{X}, \mathcal{X}'; \mathbf{w})} \left[ \mathbb{E}_{\mathcal{M} \sim P(\mathcal{M} \mid \mathcal{X}, \mathcal{X}'; \mathbf{w})} \left[ \ell(\mathcal{X}, \mathcal{X}', \mathcal{M}) \right] \right],$  (5)

where we abbreviate $\ell(\mathcal{X}, \mathcal{X}', \mathcal{M})$ to $\ell$. We split the expectation into key point selection and match selection. Firstly, we select key points $\mathcal{X}$ and $\mathcal{X}'$ according to the heat map predictions of the detection network (see Eq. 1 and 2). Secondly, we select matches $\mathcal{M}$ among these key points according to a probability distribution calculated from descriptor distances (see Eq. 3 and 4).
Calculating the expectation and its gradients exactly would necessitate summing over all possible key point sets, and all possible matchings, which is clearly infeasible. To make the calculation tractable, we assume that the network is already initialized, and makes sensible predictions that we aim at optimizing further for our task. In practice, we take an off-the-shelf architecture, like SuperPoint [12], which was trained on a low-level matching task. For such an initialized network, we observe the following properties:

1) Heat maps predicted by the feature detector are sparse. The probability of selecting a key point is zero at almost all image pixels (see Fig. 2 bottom, left). Therefore, only few image locations have an impact on the expectation.

2) Matches among unrelated key points have a large descriptor distance. Such matches have a probability close to zero, and no impact on the expectation.
Observation 1) means we can just sample from the key point heat map, and ignore other image locations. Observation 2) means that for the key points we selected, we do not have to realise a complete matching of all key points in $\mathcal{X}$ to all key points in $\mathcal{X}'$. Instead, we rely on a k-nearest-neighbour matching with some small $k$. All nearest neighbours beyond $k$ likely have large descriptor distances, and hence near-zero probability. In practice, we found no advantage in using a $k > 1$, which means we can do a normal nearest neighbour matching during training when calculating the expectation (see Fig. 2 bottom, right).
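The training-time matching with k = 1, restricted to mutual nearest neighbours as done in the experiments, can be sketched as follows (a simple numpy illustration with our own naming):

```python
import numpy as np

def mutual_nearest_neighbours(desc1, desc2):
    """Nearest-neighbour matching (k = 1) in descriptor space, keeping only
    matches that are mutual nearest neighbours in both directions."""
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    nn12 = d.argmin(axis=1)   # best candidate in image I' for each key point i
    nn21 = d.argmin(axis=0)   # best candidate in image I for each key point j
    # keep (i, j) only if i and j pick each other
    return [(i, j) for i, j in enumerate(nn12) if nn21[j] == i]
```

Only the match probabilities of these candidate pairs then enter the sampled matching distribution.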
We update the learnable parameters $\mathbf{w}$ according to the gradients of Eq. 5, following the classic REINFORCE algorithm [52] of Williams:

$\frac{\partial}{\partial \mathbf{w}} \mathcal{L}(\mathbf{w}) = \mathbb{E}_{\mathcal{X}, \mathcal{X}'} \left[ \mathbb{E}_{\mathcal{M}} \left[ \ell \right] \frac{\partial}{\partial \mathbf{w}} \log P(\mathcal{X}, \mathcal{X}'; \mathbf{w}) + \mathbb{E}_{\mathcal{M}} \left[ \ell \frac{\partial}{\partial \mathbf{w}} \log P(\mathcal{M} \mid \mathcal{X}, \mathcal{X}'; \mathbf{w}) \right] \right].$  (6)
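A toy numpy version of this score-function estimator for a single categorical distribution, using the mean sampled loss as a variance-reducing baseline [46]; names and the black-box loss function are illustrative, not the paper's implementation:

```python
import numpy as np

def reinforce_gradient(logits, loss_fn, num_samples, rng=None):
    """Score-function (REINFORCE) estimate of d/dlogits E[loss] for a
    categorical distribution. `loss_fn(a)` plays the role of the
    non-differentiable vision pipeline returning a scalar task loss."""
    rng = np.random.default_rng() if rng is None else rng
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    actions = rng.choice(len(probs), size=num_samples, p=probs)
    losses = np.array([loss_fn(a) for a in actions])
    baseline = losses.mean()                    # variance reduction
    grad = np.zeros_like(probs)
    for a, l in zip(actions, losses):
        # d/dlogits log P(a) = one_hot(a) - probs  (softmax identity)
        one_hot = np.eye(len(probs))[a]
        grad += (l - baseline) * (one_hot - probs)
    return grad / num_samples
```

Only the log-probability of the sampled action is differentiated; the loss itself enters as a plain number, which is what allows non-differentiable components like RANSAC inside the pipeline.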
Note that we only need to calculate the gradients of the log probabilities of key point selection and feature matching. We approximate the expectations in the gradient calculation by sampling. We approximate $\mathbb{E}_{\mathcal{X}, \mathcal{X}'}$ by drawing samples of key point sets $(\mathcal{X}, \mathcal{X}')$. For a given key point sample, we approximate $\mathbb{E}_{\mathcal{M}}$ by drawing samples of match sets $\mathcal{M}$. For each sample combination, we run the vision pipeline and observe the associated task loss $\ell$. To reduce the variance of the gradient approximation, we subtract the mean loss over all samples as a baseline [46]. We found a small number of samples for both key points and matches sufficient for the pipeline to converge.

4 Experiments
We train the SuperPoint [12] architecture for the task of relative pose estimation, and report our main results in Sec. 4.1. Furthermore, we analyse the impact of reinforcing SuperPoint for relative pose estimation on a low-level matching benchmark (Sec. 4.2), and in a structure-from-motion task (Sec. 4.3).
4.1 Relative Pose Estimation
Network Architecture.
SuperPoint [12] is a fully-convolutional neural network which processes full-sized images. The network has two output heads: one produces a heat map from which key points can be picked, and the other head produces 256-dimensional descriptors as a dense descriptor field over the image. The descriptor output of SuperPoint fits well into our training methodology, as we can look up descriptors for arbitrary image locations without doing repeated forward passes of the network. Both output heads share a common encoder which processes the image and reduces its dimensionality, while the output heads act as decoders. We use the network weights provided by the authors as an initialization.
Task Description.
We calculate the relative camera pose between a pair of images by robust fitting of the essential matrix. We show an overview of the processing pipeline in Fig. 2. The feature detector produces a set of tentative image correspondences. We estimate the essential matrix using the 5-point algorithm [31] in conjunction with a robust estimator. For the robust estimator, we conducted experiments with a standard RANSAC [15] estimator, as well as with the recent NG-RANSAC [8]. NG-RANSAC uses a neural network to suppress outlier correspondences, and to guide RANSAC sampling towards promising candidates for the essential matrix. As a learning-based robust estimator, NG-RANSAC is particularly interesting in our setup, since we can refine it in conjunction with SuperPoint during end-to-end training.
Datasets.
To facilitate comparison to other methods, we follow the evaluation protocol of Yi et al. [56] for relative pose estimation. They evaluate using a collection of 7 outdoor and 16 indoor datasets from various sources [45, 19, 54]. One outdoor scene and one indoor scene serve as training data, the remaining 21 scenes serve as test set. All datasets come with covisibility information for the selection of suitable image pairs, and ground truth poses.
Training Procedure.
We interpret the output of the detection head of SuperPoint as a probability distribution over key point locations. We sample 600 key points for each image, and we read out the descriptor for each key point from the descriptor head output. Next, we perform a nearest neighbour matching between key points, accepting only matches of mutual nearest neighbours in both images. We calculate a probability distribution over all the matches depending on their descriptor distance (according to Eq. 3). We randomly choose a subset of all matches from this distribution for the relative pose estimation pipeline. We fit the essential matrix, and estimate the relative pose up to scale. We measure the angle between the estimated and ground truth rotation, as well as the angle between the estimated and the ground truth translation vector. We take the maximum of both angles as our task loss $\ell$. For difficult image pairs, essential matrix estimation can fail, and the task loss can be very large. To limit the influence of such large losses, we apply a square root soft clamping [8] of the loss beyond a threshold, and a hard clamping at a larger threshold.
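The task loss just described can be sketched as follows; the soft and hard clamping thresholds in this snippet are illustrative placeholders, not values taken from the text:

```python
import numpy as np

def pose_error_loss(R_est, t_est, R_gt, t_gt, soft=25.0, hard=75.0):
    """Task loss: maximum of rotation and translation angular error (degrees),
    with square-root soft clamping beyond `soft` and hard clamping at `hard`
    (both thresholds are illustrative placeholders)."""
    # rotation angle of the relative rotation R_est^T R_gt
    cos_r = np.clip((np.trace(R_est.T @ R_gt) - 1.0) / 2.0, -1.0, 1.0)
    rot_err = np.degrees(np.arccos(cos_r))
    # angle between translation directions (scale is unobservable)
    cos_t = np.clip(np.dot(t_est, t_gt)
                    / (np.linalg.norm(t_est) * np.linalg.norm(t_gt)), -1.0, 1.0)
    trans_err = np.degrees(np.arccos(cos_t))
    loss = max(rot_err, trans_err)
    if loss > soft:                       # square-root soft clamping [8]
        loss = soft + np.sqrt(loss - soft)
    return min(loss, hard)                # hard clamping
```

Since the loss only needs to be evaluated, not differentiated, any failure mode of the essential matrix estimation simply shows up as a large (clamped) loss value.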
To approximate the expected task loss and its gradients in Eq. 5 and Eq. 6, we draw multiple sets of key points and, for each set of key points, multiple sets of matches. Therefore, for each training iteration, we run the vision pipeline several times, which takes 1.5s to 2.1s on a single Tesla K80 GPU, depending on the termination of the robust estimator. We train using the Adam [21] optimizer with a small learning rate for 150k iterations, which takes approximately 60 hours. Our training code is based on PyTorch [35] for SuperPoint [12] integration and learning, and on OpenCV [9] for estimating the relative pose. We will make our source code publicly available to ensure reproducibility of our approach.

Test Procedure.
For testing, we revert to a deterministic procedure for feature detection, instead of doing sampling. We select the strongest 2000 key points from the detector heat map using local non-max suppression. We remove very weak key points with a heat map value below a small threshold. We do a nearest neighbour matching of the corresponding feature descriptors, and keep all matches of mutual nearest neighbours. We adhere to this procedure for SuperPoint before and after our training, to ensure comparability of the results.
Discussion.
We report test accuracy in accordance with Yi et al. [56], who calculate the pose error as the maximum of rotation and translation angular error. For each dataset, the area under the cumulative error curve (AUC) is calculated, and the mean AUC for outdoor and indoor datasets is reported separately.
Firstly, we train and test our pipeline using a standard RANSAC estimator for essential matrix fitting, see Fig. 3 a). We compare to a state-of-the-art SIFT-based [24] pipeline, which uses RootSIFT descriptor normalization [1]. For RootSIFT, we apply Lowe’s ratio criterion [24] to filter matches where the distance ratio of the nearest and second nearest neighbour is above 0.8. We also compare to the LIFT feature detector [55], with and without the learned inlier classification scheme of Yi et al. [56] (denoted InClass). Finally, we compare the results of SuperPoint [12] before and after our proposed training (denoted Reinforced SP).
Reinforced SuperPoint exceeds the accuracy of SuperPoint across all thresholds, proving that our training scheme indeed optimizes the performance of SuperPoint for relative pose estimation. The effect is particularly strong for outdoor environments. For indoors, the training effect is weaker, because large textureless areas make these scenes difficult for sparse feature detection in principle. SuperPoint exceeds the accuracy of LIFT by a large margin, but does not reach the accuracy of RootSIFT. We found that the excellent accuracy of RootSIFT is largely due to the effectiveness of Lowe’s ratio filter for removing unreliable SIFT matches. We tried the ratio filter for SuperPoint as well, but we found no ratio threshold value that would consistently improve accuracy across all datasets.
To implement a similarly effective outlier filter for SuperPoint, we substitute the RANSAC estimator in our vision pipeline with the recent learning-based NG-RANSAC [8] estimator. We train NG-RANSAC for SuperPoint using the public code of Brachmann and Rother [8], and with the initial weights for SuperPoint by DeTone et al. [12]. With NG-RANSAC as a robust estimator, SuperPoint almost reaches the accuracy of RootSIFT, see Fig. 3 b). Finally, we embed both SuperPoint and NG-RANSAC in our vision pipeline, and train them jointly and end-to-end. After our training scheme, Reinforced SuperPoint matches and slightly exceeds the accuracy of RootSIFT. Fig. 3 c) shows an ablation study where we either update only NG-RANSAC, only SuperPoint, or both during end-to-end training. While the main improvement comes from updating SuperPoint, updating NG-RANSAC as well allows the robust estimator to adapt to the changing matching statistics of SuperPoint throughout the training process.
Analysis.
We visualize the effect of our training procedure on the outputs of SuperPoint in Fig. 4. For the key point heat maps, we observe two major effects. Firstly, many key points seem to be discarded, especially for repetitive patterns that would result in ambiguous matches. Secondly, some key points are kept, but their position is adjusted, presumably to achieve a lower relative pose error. For the descriptor distribution, we see a tendency of reducing the descriptor distance for correct matches, and increasing the descriptor distance for wrong matches. Quantitative analysis confirms these observations, see Table 1. While the number of key points reduces after end-to-end training, the overall matching quality increases, measured as the ratio of estimated inliers and the ratio of ground truth inliers.
4.2 LowLevel Matching Accuracy
We investigate the effect of our training scheme on low-level matching scores. To this end, we analyse the performance of Reinforced SuperPoint, trained for relative pose estimation (see previous section), on the HPatches [3] benchmark. The benchmark consists of 116 test sequences showing images under increasing viewpoint and illumination changes. We adhere to the evaluation protocol of Dusmanu et al. [14]. That is, we find key points and matches between the image pairs of a sequence, accepting only matches of mutual nearest neighbours between two images. We calculate the reprojection error of each match using the ground truth homography. We measure the average percentage of correct matches for reprojection error thresholds ranging from 1px to 10px. We compare to a RootSIFT [1] baseline with a Hessian-affine detector [26] (denoted HA+RootSIFT) and several learned detectors, namely HardNet++ [27] with learned affine normalization [28] (denoted HANet+HN++), LF-Net [34], DELF [33] and D2Net [14]. The original SuperPoint [12] beats all competitors in terms of AUC when combining illumination and viewpoint sequences. In particular, SuperPoint significantly exceeds the matching accuracy of RootSIFT on HPatches, although RootSIFT outperforms SuperPoint in the task of relative pose estimation. This confirms that low-level matching accuracy does not necessarily translate to accuracy in a high-level vision task, see our earlier discussion. As for Reinforced SuperPoint, we observe an increased matching accuracy compared to SuperPoint, due to having fewer but more reliable and precise key points.
Table 1: Matching statistics of SuperPoint before and after our end-to-end training (number of key points and matches, ratio of estimated inliers, and ratio of ground truth inliers).

Outdoors
Method            Kps   Matches  Inliers  GT Inl.
SuperPoint (SP)   1993  1008     24.8%    21.9%
Reinf. SP (ours)  1892  955      28.4%    25.3%

Indoors
Method            Kps   Matches  Inliers  GT Inl.
SuperPoint (SP)   1247  603      13.4%    9.6%
Reinf. SP (ours)  520   262      16.4%    11.1%
Table 2: Structure-from-motion statistics per dataset (number of reconstructed 3D points, average track length, and average reprojection error).

Method      3D Points  Track Length  Reproj. Error
DSP-SIFT    15k        4.79          0.41
GeoDesc     17k        4.99          0.46
SuperPoint  31k        4.75          0.97
Reinf. SP   9k         4.86          0.87

DSP-SIFT    8k         4.22          0.46
GeoDesc     9k         4.34          0.55
SuperPoint  21k        4.10          0.95
Reinf. SP   7k         4.32          0.82

DSP-SIFT    113k       5.92          0.58
GeoDesc     170k       5.21          0.64
SuperPoint  160k       7.83          0.92
Reinf. SP   102k       7.86          0.88
4.3 StructurefromMotion
We evaluate the performance of Reinforced SuperPoint, trained for relative pose estimation, in a structure-from-motion (SfM) task. We follow the protocol of the SfM benchmark of Schönberger et al. [44]. We select three of the smaller scenes from the benchmark, and extract key points and matches using SuperPoint and Reinforced SuperPoint. We create a sparse SfM reconstruction using COLMAP [43], and report the number of reconstructed 3D points, the average track length of features (indicating feature stability across views), and the average reprojection error (indicating key point precision). We report our results in Table 2, which confirms the findings of our previous experiments: while the number of key points reduces, the matching quality increases, as measured by track length and reprojection error. For reference, we also show results for DSP-SIFT [13], the best of all SIFT variants on the benchmark [44], and GeoDesc [25], a learned descriptor which achieves state-of-the-art results on the benchmark. Note that SuperPoint only provides pixel-accurate key point locations, compared to the sub-pixel accuracy of DSP-SIFT and GeoDesc. Hence, the reprojection error of SuperPoint is higher.
5 Conclusion
We have presented a new methodology for end-to-end training of feature detection and description which includes key point selection, feature matching and robust model estimation. We applied our approach to the task of relative pose estimation between two images. We observe that our end-to-end training increases the pose estimation accuracy of a state-of-the-art feature detector by removing unreliable key points, and by refining the locations of the remaining key points. We require a good initialization of the network, which might have a limiting effect on training. In particular, we observe that the network rarely discovers new key points: key point locations with very low initial probability will never be selected, and hence cannot be reinforced. In future work, we could combine our training scheme with importance sampling, for biased sampling of interesting locations.
Acknowledgements:
This project has received funding from the European Social Fund (ESF) and Free State of Saxony under SePIA grant 100299506, DFG grant 389792660 as part of TRR 248, the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 647769), and DFG grant COVMAP: Intelligente Karten mittels gemeinsamer GPS und Videodatenanalyse (RO 4804/21). The computations were performed on an HPC Cluster at the Center for Information Services and High Performance Computing (ZIH) at TU Dresden.
References
 [1] R. Arandjelovic and A. Zisserman. Three things everyone should know to improve object retrieval. In CVPR, 2012.
 [2] R. Arandjelovic and A. Zisserman. All about VLAD. In CVPR, 2013.
 [3] V. Balntas, K. Lenc, A. Vedaldi, and K. Mikolajczyk. HPatches: A benchmark and evaluation of handcrafted and learned local descriptors. In CVPR, 2017.
 [4] V. Balntas, S. Li, and V. A. Prisacariu. Relocnet: Continuous metric learning relocalisation using neural nets. In ECCV, 2018.
 [5] A. BarrosoLaguna, E. Riba, D. Ponsa, and K. Mikolajczyk. Key.Net: Keypoint detection by handcrafted and learned CNN filters. In ICCV, 2019.
 [6] H. Bay, T. Tuytelaars, and L. V. Gool. SURF: Speeded up robust features. In ECCV, 2006.
 [7] A. Benbihi, M. Geist, and C. Pradalier. ELF: Embedded localisation of features in pretrained CNN. In ICCV, 2019.
 [8] E. Brachmann and C. Rother. NeuralGuided RANSAC: Learning where to sample model hypotheses. In ICCV, 2019.
 [9] G. Bradski. The OpenCV Library. Dr. Dobb’s Journal of Software Tools, 2000.
 [10] T. Cieslewski, K. G. Derpanis, and D. Scaramuzza. SIPs: Succinct interest points from unsupervised inlierness probability learning. In 3DV, 2019.
 [11] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
 [12] D. DeTone, T. Malisiewicz, and A. Rabinovich. SuperPoint: Selfsupervised interest point detection and description. In CVPR Workshops, 2018.

 [13] J. Dong and S. Soatto. Domain-size pooling in local descriptors: DSP-SIFT. In CVPR, 2015.
 [14] M. Dusmanu, I. Rocco, T. Pajdla, M. Pollefeys, J. Sivic, A. Torii, and T. Sattler. D2Net: A trainable CNN for joint detection and description of local features. In CVPR, 2019.
 [15] M. A. Fischler and R. C. Bolles. Random Sample Consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM, 1981.
 [16] X. Han, T. Leung, Y. Jia, R. Sukthankar, and A. C. Berg. MatchNet: Unifying feature and metric learning for patch-based matching. In CVPR, 2015.
 [17] C. Harris and M. Stephens. A combined corner and edge detector. In AVC, 1988.
 [18] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2004.
 [19] J. Heinly, J. L. Schönberger, E. Dunn, and J.-M. Frahm. Reconstructing the World* in Six Days *(As Captured by the Yahoo 100 Million Image Dataset). In CVPR, 2015.
 [20] A. Kendall, M. Grimes, and R. Cipolla. PoseNet: A convolutional network for realtime 6DoF camera relocalization. In ICCV, 2015.
 [21] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
 [22] Y. Li, N. Snavely, and D. P. Huttenlocher. Location recognition using prioritized feature matching. In ECCV, 2010.
 [23] Y. Li, N. Snavely, D. P. Huttenlocher, and P. Fua. Worldwide pose estimation using 3D point clouds. In ECCV, 2012.
 [24] D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 2004.
 [25] Z. Luo, T. Shen, L. Zhou, S. Zhu, R. Zhang, Y. Yao, T. Fang, and L. Quan. GeoDesc: Learning local descriptors by integrating geometry constraints. In ECCV, 2018.
 [26] K. Mikolajczyk and C. Schmid. Scale & affine invariant interest point detectors. IJCV, 2004.
 [27] A. Mishchuk, D. Mishkin, F. Radenovic, and J. Matas. Working hard to know your neighbors margins: Local descriptor learning loss. In NeurIPS, 2017.
 [28] D. Mishkin, F. Radenovic, and J. Matas. Repeatability is not enough: Learning affine regions via discriminability. In ECCV, 2018.
 [29] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardós. ORB-SLAM: A versatile and accurate monocular SLAM system. TRO, 2015.
 [30] R. Mur-Artal and J. D. Tardós. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. TRO, 2017.
 [31] D. Nistér. An efficient solution to the five-point relative pose problem. TPAMI, 2004.
 [32] D. Nistér and H. Stewénius. Scalable recognition with a vocabulary tree. In CVPR, 2006.
 [33] H. Noh, A. Araujo, J. Sim, T. Weyand, and B. Han. Large-scale image retrieval with attentive deep local features. In ICCV, 2017.
 [34] Y. Ono, E. Trulls, P. Fua, and K. M. Yi. LF-Net: Learning local features from images. In NeurIPS, 2018.
 [35] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in PyTorch. In NeurIPSW, 2017.
 [36] J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman. Object retrieval with large vocabularies and fast spatial matching. In CVPR, 2007.
 [37] J. Revaud, P. Weinzaepfel, C. R. de Souza, N. Pion, G. Csurka, Y. Cabon, and M. Humenberger. R2D2: Repeatable and reliable detector and descriptor. In NeurIPS, 2019.
 [38] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski. ORB: An efficient alternative to SIFT or SURF. In ICCV, 2011.
 [39] T. Sattler, B. Leibe, and L. Kobbelt. Efficient & effective prioritized matching for large-scale image-based localization. TPAMI, 2016.
 [40] T. Sattler, W. Maddern, C. Toft, A. Torii, L. Hammarstrand, E. Stenborg, D. Safari, M. Okutomi, M. Pollefeys, J. Sivic, F. Kahl, and T. Pajdla. Benchmarking 6DOF outdoor visual localization in changing conditions. In CVPR, 2018.
 [41] T. Sattler, Q. Zhou, M. Pollefeys, and L. Leal-Taixé. Understanding the limitations of CNN-based absolute camera pose regression. In CVPR, 2019.
 [42] G. Schindler, M. Brown, and R. Szeliski. City-scale location recognition. In CVPR, 2007.
 [43] J. L. Schönberger and J.-M. Frahm. Structure-from-Motion Revisited. In CVPR, 2016.
 [44] J. L. Schönberger, H. Hardmeier, T. Sattler, and M. Pollefeys. Comparative evaluation of handcrafted and learned local features. In CVPR, 2017.
 [45] C. Strecha, W. von Hansen, L. Van Gool, P. Fua, and U. Thoennessen. On benchmarking camera calibration and multi-view stereo for high resolution imagery. In CVPR, 2008.
 [46] R. S. Sutton and A. G. Barto. Introduction to Reinforcement Learning. MIT Press, 1998.
 [47] Y. Tian, B. Fan, and F. Wu. L2-Net: Deep learning of discriminative patch descriptor in Euclidean space. In CVPR, 2017.
 [48] Y. Tian, X. Yu, B. Fan, F. Wu, H. Heijnen, and V. Balntas. SOSNet: Second order similarity regularization for local descriptor learning. In CVPR, 2019.
 [49] C. Toft, E. Stenborg, L. Hammarstrand, L. Brynte, M. Pollefeys, T. Sattler, and F. Kahl. Semantic match consistency for long-term visual localization. In ECCV, 2018.
 [50] B. Ummenhofer, H. Zhou, J. Uhrig, N. Mayer, E. Ilg, A. Dosovitskiy, and T. Brox. DeMoN: Depth and motion network for learning monocular stereo. In CVPR, 2017.
 [51] V. Balntas, E. Riba, D. Ponsa, and K. Mikolajczyk. Learning local feature descriptors with triplets and shallow convolutional neural networks. In BMVC, 2016.
 [52] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 1992.
 [53] C. Wu. Towards linear-time incremental structure from motion. In 3DV, 2013.
 [54] J. Xiao, A. Owens, and A. Torralba. SUN3D: A database of big spaces reconstructed using SfM and object labels. In ICCV, 2013.
 [55] K. M. Yi, E. Trulls, V. Lepetit, and P. Fua. LIFT: Learned invariant feature transform. In ECCV, 2016.
 [56] K. M. Yi, E. Trulls, Y. Ono, V. Lepetit, M. Salzmann, and P. Fua. Learning to find good correspondences. In CVPR, 2018.