The first step in geometric computer vision tasks such as Simultaneous Localization and Mapping (SLAM), Structure-from-Motion (SfM), camera calibration, and image matching is to extract interest points from images. Interest points are 2D locations in an image that are stable and repeatable across different lighting conditions and viewpoints. The subfield of mathematics and computer vision known as Multiple View Geometry consists of theorems and algorithms built on the assumption that interest points can be reliably extracted and matched across images. However, the inputs to most real-world computer vision systems are raw images, not idealized point locations.
Convolutional neural networks have been shown to be superior to hand-engineered representations on almost all tasks requiring images as input. In particular, fully-convolutional neural networks which predict 2D “keypoints” or “landmarks” are well-studied for a variety of tasks such as human pose estimation , object detection , and room layout estimation . At the heart of these techniques is a large dataset of 2D ground truth locations labeled by human annotators.
It seems natural to similarly formulate interest point detection as a large-scale supervised machine learning problem and train the latest convolutional neural network architecture to detect interest points. Unfortunately, when compared to semantic tasks such as human-body keypoint estimation, where a network is trained to detect body parts such as the corner of the mouth or the left ankle, the notion of an interest point is semantically ill-defined. Thus, training convolutional neural networks with strong supervision of interest points is non-trivial.
Instead of using human supervision to define interest points in real images, we present a self-supervised solution using self-training. In our approach, we create a large dataset of pseudo-ground truth interest point locations in real images, supervised by the interest point detector itself, rather than a large-scale human annotation effort.
To generate the pseudo-ground truth interest points, we first train a fully-convolutional neural network on millions of examples from a synthetic dataset we created called Synthetic Shapes (see Figure 2a). The synthetic dataset consists of simple geometric shapes with no ambiguity in the interest point locations. We call the resulting trained detector MagicPoint—it significantly outperforms traditional interest point detectors on the synthetic dataset (see Section 4). MagicPoint performs surprisingly well on real images despite domain adaptation difficulties . However, when compared to classical interest point detectors on a diverse set of image textures and patterns, MagicPoint misses many potential interest point locations. To bridge this gap in performance on real images, we developed a multi-scale, multi-transform technique called Homographic Adaptation.
Homographic Adaptation is designed to enable self-supervised training of interest point detectors. It warps the input image multiple times to help an interest point detector see the scene from many different viewpoints and scales (see Section 5). We use Homographic Adaptation in conjunction with the MagicPoint detector to boost the performance of the detector and generate the pseudo-ground truth interest points (see Figure 2b). The resulting detections are more repeatable and fire on a larger set of stimuli; thus we named the resulting detector SuperPoint.
The most common step after detecting robust and repeatable interest points is to attach a fixed-dimensional descriptor vector to each point for higher-level semantic tasks, e.g., image matching. Thus we lastly combine SuperPoint with a descriptor subnetwork (see Figure 2c). Since the SuperPoint architecture consists of a deep stack of convolutional layers which extract multi-scale features, it is straightforward to combine the interest point network with an additional subnetwork that computes interest point descriptors (see Section 3). The resulting system is shown in Figure 1.
2 Related Work
Traditional interest point detectors have been thoroughly evaluated [24, 16]. The FAST corner detector  was the first system to cast high-speed corner detection as a machine learning problem, and the Scale-Invariant Feature Transform (SIFT)  is still probably the most well-known traditional local feature descriptor in computer vision.
Our SuperPoint architecture is inspired by recent advances in applying deep learning to interest point detection and descriptor learning. In its ability to match image sub-structures, our approach is similar to UCN  and, to a lesser extent, DeepDesc ; however, neither performs interest point detection. On the other end, LIFT , a recently introduced convolutional replacement for SIFT, stays close to the traditional patch-based detect-then-describe recipe. The LIFT pipeline contains interest point detection, orientation estimation and descriptor computation, but additionally requires supervision from a classical SfM system. These differences are summarized in Table 1.
On the other extreme of the supervision spectrum, Quad-Networks  tackles interest point detection with an unsupervised approach; however, their system is patch-based (inputs are small image patches) and uses a relatively shallow 2-layer network. The TILDE  interest point detection system used a principle similar to Homographic Adaptation; however, their approach does not benefit from the power of large fully-convolutional neural networks.
Our approach can also be compared to other self-supervised and synthetic-to-real domain-adaptation methods. A similar approach to Homographic Adaptation is used by Honari et al.  under the name “equivariant landmark transform.” Also, Geometric Matching Networks  and Deep Image Homography Estimation  use a similar self-supervision strategy to create training data for estimating global transformations. However, these methods lack interest points and point correspondences, which are typically required for higher-level computer vision tasks such as SLAM and SfM. Joint pose and depth estimation models also exist [33, 30, 28], but do not use interest points.
3 SuperPoint Architecture
We designed a fully-convolutional neural network architecture called SuperPoint which operates on a full-sized image and produces interest point detections accompanied by fixed length descriptors in a single forward pass (see Figure 3). The model has a single, shared encoder to process and reduce the input image dimensionality. After the encoder, the architecture splits into two decoder “heads”, which learn task specific weights – one for interest point detection and the other for interest point description. Most of the network’s parameters are shared between the two tasks, which is a departure from traditional systems which first detect interest points, then compute descriptors and lack the ability to share computation and representation across the two tasks.
3.1 Shared Encoder
Our SuperPoint architecture uses a VGG-style  encoder to reduce the dimensionality of the image. The encoder consists of convolutional layers, spatial downsampling via pooling and non-linear activation functions. Our encoder uses three max-pooling layers, letting us define $H_c = H/8$ and $W_c = W/8$ for an image sized $H \times W$. We refer to the pixels in the lower dimensional output as “cells,” where the three $2 \times 2$ non-overlapping max pooling operations in the encoder result in $8 \times 8$ pixel cells. The encoder maps the input image $I \in \mathbb{R}^{H \times W}$ to an intermediate tensor $\mathcal{B} \in \mathbb{R}^{H_c \times W_c \times F}$ with smaller spatial dimension and greater channel depth (i.e., $H_c < H$, $W_c < W$ and $F > 1$).
3.2 Interest Point Decoder
For interest point detection, each pixel of the output corresponds to a probability of “point-ness” for that pixel in the input. The standard network design for dense prediction involves an encoder-decoder pair, where the spatial resolution is decreased via pooling or strided convolution, and then upsampled back to full resolution via upconvolution operations, as done in SegNet . Unfortunately, upsampling layers tend to add a high amount of computation and can introduce unwanted checkerboard artifacts , thus we designed the interest point detection head with an explicit decoder (this decoder has no parameters, and is known as “sub-pixel convolution” ) to reduce the computation of the model.
The interest point detector head computes $\mathcal{X} \in \mathbb{R}^{H_c \times W_c \times 65}$ and outputs a tensor sized $H_c \times W_c \times 65$. The 65 channels correspond to local, non-overlapping $8 \times 8$ grid regions of pixels plus an extra “no interest point” dustbin. After a channel-wise softmax, the dustbin dimension is removed and a $\mathbb{R}^{H_c \times W_c \times 64} \Rightarrow \mathbb{R}^{H \times W}$ reshape is performed.
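The decode step described above can be sketched in NumPy (a minimal, illustrative implementation; the function name is ours):

```python
import numpy as np

def detector_head_to_heatmap(x):
    """Convert the detector head output (Hc, Wc, 65) into a full-resolution
    (8*Hc, 8*Wc) probability heatmap.  Channel 64 is the "no interest point"
    dustbin; the remaining 64 channels tile an 8x8 cell."""
    hc, wc, c = x.shape
    assert c == 65
    # channel-wise softmax (numerically stabilized)
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)
    probs = probs[:, :, :64]                      # remove the dustbin channel
    # pixel-shuffle: each cell's 64 channels become an 8x8 spatial block
    probs = probs.reshape(hc, wc, 8, 8)
    heatmap = probs.transpose(0, 2, 1, 3).reshape(hc * 8, wc * 8)
    return heatmap
```

The transpose-reshape pair is the parameter-free “sub-pixel convolution” layout change: no learned weights, only a rearrangement of activations.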
3.3 Descriptor Decoder
The descriptor head computes $\mathcal{D} \in \mathbb{R}^{H_c \times W_c \times D}$ and outputs a tensor sized $H_c \times W_c \times D$.
To output a dense map of L2-normalized fixed length descriptors, we use a model similar to UCN  to first output a semi-dense grid of descriptors (e.g., one every 8 pixels). Learning descriptors semi-densely rather than densely reduces training memory and keeps the run-time tractable. The decoder then performs bicubic interpolation of the descriptor and L2-normalizes the activations to be unit length. This fixed, non-learned descriptor decoder is shown in Figure 3.
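The descriptor sampling step can be sketched as follows (an illustrative NumPy version; we use bilinear rather than bicubic interpolation to keep the sketch short, and the function name is ours):

```python
import numpy as np

def sample_descriptors(desc_map, points, cell=8):
    """Sample L2-normalized descriptors at full-resolution point locations
    from a semi-dense (Hc, Wc, D) descriptor map.  The paper's decoder uses
    bicubic interpolation; bilinear is used here for brevity."""
    hc, wc, d = desc_map.shape
    out = np.zeros((len(points), d))
    for i, (x, y) in enumerate(points):
        # map full-resolution (x, y) to descriptor-grid coordinates
        gx = np.clip(x / cell - 0.5, 0, wc - 1)
        gy = np.clip(y / cell - 0.5, 0, hc - 1)
        x0, y0 = int(gx), int(gy)
        x1, y1 = min(x0 + 1, wc - 1), min(y0 + 1, hc - 1)
        fx, fy = gx - x0, gy - y0
        v = (desc_map[y0, x0] * (1 - fx) * (1 - fy) +
             desc_map[y0, x1] * fx * (1 - fy) +
             desc_map[y1, x0] * (1 - fx) * fy +
             desc_map[y1, x1] * fx * fy)
        out[i] = v / (np.linalg.norm(v) + 1e-10)  # L2-normalize to unit length
    return out
```

Because interpolation and normalization are fixed operations, this decoder adds no parameters to the model.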
3.4 Loss Functions
The final loss is the sum of two intermediate losses: one for the interest point detector, $\mathcal{L}_p$, and one for the descriptor, $\mathcal{L}_d$. We use pairs of synthetically warped images which have both (a) pseudo-ground truth interest point locations and (b) the ground truth correspondence from a randomly generated homography $\mathcal{H}$ which relates the two images. This allows us to optimize the two losses simultaneously, given a pair of images, as shown in Figure 2c. We use $\lambda$ to balance the final loss:

$$\mathcal{L}(\mathcal{X}, \mathcal{X}', \mathcal{D}, \mathcal{D}'; Y, Y', S) = \mathcal{L}_p(\mathcal{X}, Y) + \mathcal{L}_p(\mathcal{X}', Y') + \lambda \mathcal{L}_d(\mathcal{D}, \mathcal{D}', S).$$
The interest point detector loss function $\mathcal{L}_p$ is a fully-convolutional cross-entropy loss over the cells $\mathbf{x}_{hw} \in \mathcal{X}$. We call the set of corresponding ground-truth interest point labels $Y$ (if two ground truth corner positions land in the same bin, we randomly select one ground truth corner location) and individual entries $y_{hw}$. The loss is:

$$\mathcal{L}_p(\mathcal{X}, Y) = \frac{1}{H_c W_c} \sum_{h=1, w=1}^{H_c, W_c} l_p(\mathbf{x}_{hw}; y_{hw}), \quad \text{where} \quad l_p(\mathbf{x}_{hw}; y) = -\log\left(\frac{\exp(\mathbf{x}_{hwy})}{\sum_{k=1}^{65} \exp(\mathbf{x}_{hwk})}\right).$$
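This per-cell cross-entropy can be sketched in NumPy (the function name and array layout are our assumptions):

```python
import numpy as np

def detector_loss(X, Y):
    """Cross-entropy loss over cells.
    X: (Hc, Wc, 65) logits; Y: (Hc, Wc) integer labels in [0, 64],
    where 64 is the "no interest point" dustbin."""
    z = X - X.max(axis=-1, keepdims=True)          # stabilize the softmax
    logp = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    hc, wc = Y.shape
    rows = np.arange(hc)[:, None]
    cols = np.arange(wc)[None, :]
    return float(-logp[rows, cols, Y].mean())      # average over all cells
```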
The descriptor loss is applied to all pairs of descriptor cells, $\mathbf{d}_{hw} \in \mathcal{D}$ from the first image and $\mathbf{d}'_{h'w'} \in \mathcal{D}'$ from the second image. The homography-induced correspondence between the $(h, w)$ cell and the $(h', w')$ cell can be written as follows:

$$s_{hwh'w'} = \begin{cases} 1 & \text{if } \|\widehat{\mathcal{H} \mathbf{p}_{hw}} - \mathbf{p}_{h'w'}\| \leq 8 \\ 0 & \text{otherwise,} \end{cases}$$

where $\mathbf{p}_{hw}$ denotes the location of the center pixel in the $(h, w)$ cell, and $\widehat{\mathcal{H} \mathbf{p}_{hw}}$ denotes multiplying the cell location $\mathbf{p}_{hw}$ by the homography $\mathcal{H}$ and dividing by the last coordinate, as is usually done when transforming between Euclidean and homogeneous coordinates. We denote the entire set of correspondences for a pair of images with $S$.
We also add a weighting term $\lambda_d$ to help balance the fact that there are more negative correspondences than positive ones. We use a hinge loss with positive margin $m_p$ and negative margin $m_n$. The descriptor loss is defined as:

$$\mathcal{L}_d(\mathcal{D}, \mathcal{D}', S) = \frac{1}{(H_c W_c)^2} \sum_{h=1, w=1}^{H_c, W_c} \sum_{h'=1, w'=1}^{H_c, W_c} l_d(\mathbf{d}_{hw}, \mathbf{d}'_{h'w'}; s_{hwh'w'}),$$

where $l_d(\mathbf{d}, \mathbf{d}'; s) = \lambda_d \cdot s \cdot \max(0, m_p - \mathbf{d}^T \mathbf{d}') + (1 - s) \cdot \max(0, \mathbf{d}^T \mathbf{d}' - m_n).$
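The hinge loss over all descriptor-cell pairs can be sketched as follows (a NumPy illustration; the default values for the weighting term and margins are assumptions for the sketch):

```python
import numpy as np

def descriptor_loss(D1, D2, S, lambda_d=250.0, m_p=1.0, m_n=0.2):
    """Hinge loss over all pairs of descriptor cells.
    D1, D2: (Hc*Wc, D) descriptors from the two images;
    S: (Hc*Wc, Hc*Wc) binary correspondence matrix.
    Default hyper-parameter values are illustrative assumptions."""
    dot = D1 @ D2.T                                   # d^T d' for every pair
    pos = lambda_d * S * np.maximum(0.0, m_p - dot)   # pull matching pairs together
    neg = (1 - S) * np.maximum(0.0, dot - m_n)        # push non-matching pairs apart
    return float((pos + neg).mean())
```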
4 Synthetic Pre-Training
In this section, we describe our method for training a base detector (shown in Figure 2a) called MagicPoint which is used in conjunction with Homographic Adaptation to generate pseudo-ground truth interest point labels for unlabeled images in a self-supervised fashion.
4.1 Synthetic Shapes
No large database of interest-point-labeled images exists today. Thus, to bootstrap our deep interest point detector, we first create a large-scale synthetic dataset called Synthetic Shapes that consists of simplified 2D geometry rendered from quadrilaterals, triangles, lines and ellipses. Examples of these shapes are shown in Figure 4. In this dataset, we are able to remove label ambiguity by modeling interest points with simple Y-junctions, L-junctions, T-junctions as well as centers of tiny ellipses and end points of line segments.
Once the synthetic images are rendered, we apply homographic warps to each image to augment the number of training examples. The data is generated on-the-fly and no example is seen by the network twice. While the types of interest points represented in Synthetic Shapes are only a subset of all potential interest points found in the real world, we found them to work reasonably well in practice when used to train an interest point detector.
We use the detector pathway of the SuperPoint architecture (ignoring the descriptor head) and train it on Synthetic Shapes. We call the resulting model MagicPoint.
Interestingly, when we evaluate MagicPoint against other traditional corner detection approaches such as FAST , Harris corners  and Shi-Tomasi’s “Good Features To Track”  on the Synthetic Shapes dataset, we discovered a large performance gap in our favor. We measure the mean Average Precision (mAP) on held-out images of the Synthetic Shapes dataset, and report the results in Table 2. The classical detectors struggle in the presence of imaging noise – qualitative examples of this are shown in Figure 4. More detailed experiments can be found in Appendix B.
The MagicPoint detector performs very well on Synthetic Shapes, but does it generalize to real images? To summarize a result that we later present in Section 7.2, the answer is yes, but not as well as we hoped. We were surprised to find that MagicPoint performs reasonably well on real world images, especially on scenes which have strong corner-like structure such as tables, chairs and windows. Unfortunately in the space of all natural images, it under-performs when compared to the same classical detectors on repeatability under viewpoint changes. This motivated our self-supervised approach for training on real-world images which we call Homographic Adaptation.
5 Homographic Adaptation
Our system bootstraps itself from a base interest point detector and a large set of unlabeled images from the target domain (e.g., MS-COCO). Operating in a self-supervised paradigm (also known as self-training), we first generate a set of pseudo-ground truth interest point locations for each image in the target domain, then use traditional supervised learning machinery. At the core of our method is a process that applies random homographies to warped copies of the input image and combines the results – a process we call Homographic Adaptation (see Figure 5).
Homographies give exact or almost exact image-to-image transformations for camera motion with only rotation around the camera center, scenes with large distances to objects, and planar scenes. Moreover, because most of the world is reasonably planar, a homography is a good model for what happens when the same 3D point is seen from different viewpoints. Because homographies do not require 3D information, they can be randomly sampled and easily applied to any 2D image – involving little more than bilinear interpolation. For these reasons, homographies are at the core of our self-supervised approach.
5.1 Formulation
Let $f_\theta(\cdot)$ represent the initial interest point function we wish to adapt, $I$ the input image, $\mathbf{x}$ the resulting interest points and $\mathcal{H}$ a random homography, so that:

$$\mathbf{x} = f_\theta(I).$$
An ideal interest point operator should be covariant with respect to homographies. A function $f_\theta(\cdot)$ is covariant with $\mathcal{H}$ if the output transforms with the input. In other words, a covariant detector will satisfy, for all $\mathcal{H}$ (for clarity, we slightly abuse notation and allow $\mathcal{H} \mathbf{x}$ to denote the homography matrix $\mathcal{H}$ being applied to the resulting interest points, and $\mathcal{H}(I)$ to denote the entire image being warped by $\mathcal{H}$):

$$\mathcal{H} \mathbf{x} = f_\theta(\mathcal{H}(I)).$$
Moving homography-related terms to the right, we get:

$$\mathbf{x} = \mathcal{H}^{-1} f_\theta(\mathcal{H}(I)).$$
In practice, a detector will not be perfectly covariant – different homographies in Equation 9 will result in different interest points $\mathbf{x}$. The basic idea behind Homographic Adaptation is to perform an empirical sum over a sufficiently large sample of random $\mathcal{H}$’s (see Figure 5). The resulting aggregation over samples thus gives rise to a new and improved, super-point detector, $\hat{F}(\cdot)$:

$$\hat{F}(I; f_\theta) = \frac{1}{N_h} \sum_{i=1}^{N_h} \mathcal{H}_i^{-1} f_\theta(\mathcal{H}_i(I)).$$
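The aggregation can be sketched as follows (an illustrative NumPy version with nearest-neighbor warping of dense detector responses; a real implementation would use proper interpolation and mask out-of-view borders):

```python
import numpy as np

def warp_image(img, H):
    """Inverse-warp a 2D array by homography H (nearest-neighbor sampling,
    zeros outside the source image)."""
    h, w = img.shape
    out = np.zeros_like(img)
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = Hinv @ pts
    sx = np.round(src[0] / src[2]).astype(int)     # homogeneous divide
    sy = np.round(src[1] / src[2]).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[ys.ravel()[ok], xs.ravel()[ok]] = img[sy[ok], sx[ok]]
    return out

def homographic_adaptation(img, detector, homographies):
    """Average the unwarped detector responses over random homographies.
    The raw detection on the original image is included, so an empty list
    of homographies reduces to plain detection."""
    acc = np.array(detector(img), dtype=float)     # response on the original
    for H in homographies:
        warped_response = detector(warp_image(img, H))
        acc += warp_image(warped_response, np.linalg.inv(H))  # unwarp back
    return acc / (len(homographies) + 1)
```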
5.2 Choosing Homographies
Not all 3x3 matrices are good choices for Homographic Adaptation. To sample good homographies which represent plausible camera transformations, we decompose a potential homography into simpler, less expressive transformation classes. We sample within pre-determined ranges for translation, scale, in-plane rotation, and symmetric perspective distortion using a truncated normal distribution. These transformations are composed together with an initial root center crop to help avoid bordering artifacts. This process is shown in Figure 6.
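The decomposition can be sketched as follows (the parameter ranges here are illustrative, not the exact values used in our experiments):

```python
import numpy as np

def sample_homography(rng, max_angle=np.pi / 9, max_scale=0.2,
                      max_trans=0.2, max_persp=1e-3):
    """Sample a plausible homography by composing simple transform classes:
    in-plane rotation, isotropic scale, translation, and symmetric
    perspective distortion, each drawn from a truncated normal."""
    def trunc_normal(scale):
        return np.clip(rng.normal(0.0, scale / 2), -scale, scale)

    a = trunc_normal(max_angle)
    R = np.array([[np.cos(a), -np.sin(a), 0],
                  [np.sin(a),  np.cos(a), 0],
                  [0, 0, 1]])                      # in-plane rotation
    s = 1.0 + trunc_normal(max_scale)
    S = np.diag([s, s, 1.0])                       # isotropic scale
    T = np.eye(3)
    T[0, 2], T[1, 2] = trunc_normal(max_trans), trunc_normal(max_trans)
    P = np.eye(3)
    P[2, 0], P[2, 1] = trunc_normal(max_persp), trunc_normal(max_persp)
    return T @ R @ S @ P                           # compose the classes
```

Composing restricted classes keeps every sample a valid, invertible homography while avoiding the degenerate or extreme warps that arbitrary 3x3 matrices would produce.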
When applying Homographic Adaptation to an image, we use the average response across a large number of homographic warps of the input image. The number of homographic warps $N_h$ is a hyper-parameter of our approach. We typically enforce the first homography to be equal to identity, so that $N_h = 1$ in our experiments corresponds to doing no adaptation. We performed an experiment to determine the best value for $N_h$, varying it from “small” ($N_h = 10$), to “medium” ($N_h = 100$), and “large” ($N_h = 1000$). Our experiments suggest that there are diminishing returns when performing more than 100 homographies. On a held-out set of images from MS-COCO, we obtain a repeatability boost of 21% over no Homographic Adaptation when performing $N_h = 100$ transforms, and a boost of 22% when $N_h = 1000$, thus the added benefit of using more than 100 homographies is minimal. For a more detailed analysis and discussion of this experiment see Appendix C.
5.3 Iterative Homographic Adaptation
We apply the Homographic Adaptation technique at training time to improve the generalization ability of the base MagicPoint architecture on real images. The process can be repeated iteratively to continually self-supervise and improve the interest point detector. In all of our experiments, we call the resulting model, after applying Homographic Adaptation, SuperPoint and show the qualitative progression on images from HPatches in Figure 7.
6 Experimental Details
In this section we provide some implementation details for training the MagicPoint and SuperPoint models. The encoder has a VGG-like  architecture with eight 3x3 convolution layers sized 64-64-64-64-128-128-128-128. Every two layers there is a 2x2 max-pool layer. Each decoder head has a single 3x3 convolutional layer of 256 units followed by a 1x1 convolution layer with 65 units for the interest point detector and 256 units for the descriptor. All convolution layers in the network are followed by ReLU non-linear activation and BatchNorm normalization.
To train the fully-convolutional SuperPoint model, we start with a base MagicPoint model trained on Synthetic Shapes. The MagicPoint architecture is the SuperPoint architecture without the descriptor head. The MagicPoint model is trained for 200,000 iterations on synthetic data. Since the synthetic data is simple and fast to render, it is rendered on-the-fly, thus no single example is seen twice by the network.
We generate pseudo-ground truth labels using the MS-COCO 2014  training dataset split, which has 80,000 images, and the MagicPoint base detector. The images are resized to a lower resolution and converted to grayscale. The labels are generated using Homographic Adaptation with $N_h = 100$, as motivated by our results from Section 5.2. We repeat the Homographic Adaptation a second time, using the resulting model trained from the first round of Homographic Adaptation.
The joint training of SuperPoint is also done on grayscale COCO images. For each training example, a homography is randomly sampled. It is sampled from a more restrictive set of homographies than during Homographic Adaptation to better model the target application of pair-wise matching (e.g., we avoid sampling extreme in-plane rotations as they are rarely seen in HPatches). The image and corresponding pseudo-ground truth are transformed by the homography to create the needed inputs and labels. The descriptor size used in all experiments is $D = 256$. We use a weighting term $\lambda_d$ to keep the descriptor learning balanced, and the descriptor hinge loss uses a positive margin $m_p$ and negative margin $m_n$. We use a factor $\lambda$ to balance the two losses.
All training is done using PyTorch  with mini-batch sizes of 32 and the ADAM solver with default parameters. We also use standard data augmentation techniques such as random Gaussian noise, motion blur, and brightness level changes to improve the network’s robustness to lighting and viewpoint changes.
7 Experiments
In this section we present quantitative results of the methods presented in the paper. Evaluation of interest points and descriptors is a well-studied topic, thus we follow the evaluation protocol of Mikołajczyk et al. . For more details on our evaluation metrics, see Appendix A.
7.1 System Runtime
We measure the run-time of the SuperPoint architecture using a Titan X GPU and the timing tool that comes with the Caffe deep learning library. A single forward pass of the model runs in approximately ms with inputs sized , which produces the point detection locations and a semi-dense descriptor map. To sample the descriptors at the higher resolution from the semi-dense descriptor, it is not necessary to create the entire dense descriptor map – we can just sample from the 1000 detected locations, which takes about ms on a CPU implementation of bi-cubic interpolation followed by L2 normalization. Thus we estimate the total runtime of the system on a GPU to be about ms or FPS.
7.2 HPatches Repeatability
In our experiments we train SuperPoint on the MS-COCO images, and evaluate using the HPatches dataset . HPatches contains 116 scenes with 696 unique images. The first 57 scenes exhibit large changes in illumination and the other 59 scenes have large viewpoint changes.
To evaluate the interest point detection ability of the SuperPoint model, we measure repeatability on the HPatches dataset. We compare it to the MagicPoint model (before Homographic Adaptation), as well as FAST , Harris  and Shi , all implemented using OpenCV. Repeatability is computed at resolution with points detected in each image. We also vary the Non-Maximum Suppression (NMS) applied to the detections. We use a correct distance of pixels. Applying larger amounts of NMS helps ensure that the points are evenly distributed in the image, useful for certain applications such as ORB-SLAM , where a minimum number of FAST corner detections is forced in each cell of a coarse grid.
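The Non-Maximum Suppression applied to the detections can be sketched as a simple greedy procedure (an illustrative version; fast implementations typically use a grid-based approach instead):

```python
import numpy as np

def nms_points(points, scores, dist_thresh=4):
    """Greedy non-maximum suppression: visit points in decreasing score
    order, keeping a point only if no already-kept point lies within
    dist_thresh pixels of it."""
    order = np.argsort(-np.asarray(scores))
    kept = []
    for i in order:
        p = points[i]
        if all(np.hypot(p[0] - points[j][0], p[1] - points[j][1]) > dist_thresh
               for j in kept):
            kept.append(i)
    return [points[i] for i in kept]
```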
In summary, the Homographic Adaptation technique used to transform MagicPoint into SuperPoint gives a large boost in repeatability, especially under large viewpoint changes. Results are shown in Table 3. The SuperPoint model outperforms classical detectors under illumination changes and performs on par with classical detectors under viewpoint changes.
7.3 HPatches Homography Estimation
To evaluate the performance of the SuperPoint interest point detector and descriptor network, we compare matching ability on the HPatches dataset. We evaluate SuperPoint against three well-known detector and descriptor systems: LIFT , SIFT  and ORB . For LIFT we use the pre-trained model (Picadilly) provided by the authors. For SIFT and ORB we use the default OpenCV implementations. We use a correct distance of pixels for Rep, MLE, NN mAP and MScore. We compute a maximum of points for all systems at a resolution and compute a number of metrics for each image pair. To estimate the homography, we perform nearest neighbor matching from all interest points+descriptors detected in the first image to all the interest points+descriptors in the second. We use an OpenCV implementation (findHomography() with RANSAC) with all the matches to compute the final homography estimate.
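The nearest-neighbor matching step can be sketched as follows (a brute-force NumPy illustration; the function name is ours):

```python
import numpy as np

def nearest_neighbor_matches(desc1, desc2):
    """Match every descriptor in the first image to its nearest neighbor
    (L2 distance) in the second image; returns (index1, index2, distance)
    triples that would then be fed to a RANSAC homography estimator."""
    dists = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    nn = dists.argmin(axis=1)
    return [(i, int(nn[i]), float(dists[i, nn[i]])) for i in range(len(desc1))]
```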
[Table 4 columns: Homography Estimation; Detector Metrics (Rep., MLE); Descriptor Metrics (NN mAP, M. Score)]
The homography estimation results are shown in Table 4. SuperPoint outperforms LIFT and ORB and performs comparably to SIFT for homography estimation on HPatches using various thresholds of correctness. Qualitative examples of SuperPoint versus LIFT, SIFT and ORB are shown in Figure 8. Please see Appendix D for even more homography estimation example pairs. SuperPoint tends to produce a larger number of correct matches which densely cover the image, and is especially effective against illumination changes.
Quantitatively we outperform LIFT in almost all metrics. LIFT is also outperformed by SIFT in most metrics. This may be due to the fact that HPatches includes indoor sequences and LIFT was trained on a single outdoor sequence. Our method was trained on hundreds of thousands of warped MS-COCO images that exhibit a much larger diversity and more closely match the diversity in HPatches.
SIFT performs well for sub-pixel precision homographies and has the lowest mean localization error (MLE). This is likely due to the fact that SIFT performs extra sub-pixel localization, while other methods do not perform this step.
ORB achieves the highest repeatability (Rep.); however, its detections tend to form sparse clusters throughout the image as shown in Figure 8, thus scoring poorly on the final homography estimation task. This suggests that optimizing solely for repeatability does not result in better matching or estimation further up the pipeline.
8 Conclusion
We have presented a fully-convolutional neural network architecture for interest point detection and description trained using a self-supervised domain adaptation framework called Homographic Adaptation. Our experiments demonstrate that it is possible to transfer knowledge from a synthetic dataset onto real-world images, that sparse interest point detection and description can be cast as a single, efficient convolutional neural network, and that the resulting system works well for geometric computer vision matching tasks such as homography estimation.
Future work will investigate whether Homographic Adaptation can boost the performance of models such as those used in semantic segmentation (e.g., SegNet  ) and object detection (e.g., SSD ). It will also carefully investigate the ways that interest point detection and description (and potentially other tasks) benefit each other.
Lastly, we believe that our SuperPoint network can be used to tackle all visual data-association in 3D computer vision problems like SLAM and SfM, and that a learning-based Visual SLAM front-end will enable more robust applications in robotics and augmented reality.
-  V. Badrinarayanan, A. Kendall, and R. Cipolla. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. PAMI, 2017.
-  V. Balntas, K. Lenc, A. Vedaldi, and K. Mikolajczyk. HPatches: A benchmark and evaluation of handcrafted and learned local descriptors. In CVPR, 2017.
-  C. B. Choy, J. Gwak, S. Savarese, and M. Chandraker. Universal Correspondence Network. In NIPS. 2016.
-  D. DeTone, T. Malisiewicz, and A. Rabinovich. Deep image homography estimation. arXiv preprint arXiv:1606.03798, 2016.
-  D. DeTone, T. Malisiewicz, and A. Rabinovich. Toward Geometric DeepSLAM. arXiv preprint arXiv:1707.07410, 2017.
-  E. Simo-Serra, E. Trulls, L. Ferraz, I. Kokkinos, P. Fua, and F. Moreno-Noguer. Discriminative learning of deep convolutional feature point descriptors. In ICCV, 2015.
-  Y. Ganin and V. Lempitsky. Unsupervised domain adaptation by backpropagation. In ICML, 2015.
-  C. Harris and M. Stephens. A combined corner and edge detector. In Alvey vision conference, volume 15, pages 10–5244. Manchester, UK, 1988.
-  R. Hartley and A. Zisserman. Multiple View Geometry in computer vision. 2003.
-  S. Honari, P. Molchanov, S. Tyree, P. Vincent, C. Pal, and J. Kautz. Improving landmark localization with semi-supervised learning. arXiv preprint arXiv:1709.01591, 2017.
-  Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
-  C.-Y. Lee, V. Badrinarayanan, T. Malisiewicz, and A. Rabinovich. RoomNet: End-to-end room layout estimation. In ICCV, 2017.
-  T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
-  W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. SSD: Single shot multibox detector. In ECCV, 2016.
-  D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 2004.
-  K. Mikolajczyk and C. Schmid. A performance evaluation of local descriptors. PAMI, 2005.
-  R. Mur-Artal, J. Montiel, and J. D. Tardos. ORB-SLAM: a versatile and accurate monocular SLAM system. IEEE Transactions on Robotics, 2015.
-  A. Odena, V. Dumoulin, and C. Olah. Deconvolution and checkerboard artifacts. Distill, 2016.
-  A. Paszke, S. Gross, S. Chintala, and G. Chanan. PyTorch. https://github.com/pytorch/pytorch.
-  I. Rocco, R. Arandjelović, and J. Sivic. Convolutional neural network architecture for geometric matching. In CVPR, 2017.
-  E. Rosten and T. Drummond. Machine learning for high-speed corner detection. In ECCV, 2006.
-  E. Rublee, V. Rabaud, K. Konolige, and G. Bradski. ORB: An efficient alternative to SIFT or SURF. In ICCV, 2011.
-  N. Savinov, A. Seki, L. Ladicky, T. Sattler, and M. Pollefeys. Quad-networks: unsupervised learning to rank for interest point detection. In CVPR, 2017.
-  C. Schmid, R. Mohr, and C. Bauckhage. Evaluation of interest point detectors. IJCV, 2000.
-  J. Shi and C. Tomasi. Good features to track. In CVPR, 1994.
-  W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In CVPR, 2016.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
-  B. Ummenhofer, H. Zhou, J. Uhrig, N. Mayer, E. Ilg, A. Dosovitskiy, and T. Brox. DeMoN: Depth and motion network for learning monocular stereo. In CVPR, 2017.
-  Y. Verdie, K. Yi, P. Fua, and V. Lepetit. TILDE: A Temporally Invariant Learned DEtector. In CVPR, 2015.
-  S. Vijayanarasimhan, S. Ricco, C. Schmid, R. Sukthankar, and K. Fragkiadaki. SfM-Net: Learning of structure and motion from video. arXiv preprint arXiv:1704.07804, 2017.
-  S.-E. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh. Convolutional pose machines. In CVPR, 2016.
-  K. M. Yi, E. Trulls, V. Lepetit, and P. Fua. LIFT: Learned Invariant Feature Transform. In ECCV, 2016.
-  T. Zhou, M. Brown, N. Snavely, and D. G. Lowe. Unsupervised learning of depth and ego-motion from video. In CVPR, 2017.
Appendix A Evaluation Metrics
In this section we present more details on the metrics used for evaluation. In our experiments we follow the protocol of , with one exception: since our fully-convolutional model does not use local patches, we instead compare detections by measuring the distance between the 2D detection centers, rather than measuring patch overlap. For multi-scale methods such as SIFT and ORB, we compare distances at the highest resolution scale.
Corner Detection Average Precision. We compute Precision-Recall curves and the corresponding Area-Under-Curve (also known as Average Precision), the pixel location error for correct detections, and the repeatability rate. For corner detection, we use a threshold $\epsilon$ to determine if a returned point location $\mathbf{x}$ is correct relative to a set of ground-truth corners $\mathcal{X}^*$. We define the correctness as follows:

$$\text{Corr}(\mathbf{x}) = \begin{cases} 1 & \text{if } \min_{\mathbf{x}^* \in \mathcal{X}^*} \|\mathbf{x} - \mathbf{x}^*\| \leq \epsilon \\ 0 & \text{otherwise.} \end{cases}$$
The precision-recall curve is created by varying the detection confidence and summarized with a single number, namely the Average Precision, which ranges from 0 to 1; larger AP is better.
Localization Error. To complement the AP analysis, we compute the corner localization error, but solely for the correct detections: the Localization Error is the average distance between each correct detection and its closest ground-truth corner. The Localization Error is between 0 and $\epsilon$, and lower LE is better.
Repeatability. We compute the repeatability rate for an interest point detector on a pair of images. Since the SuperPoint architecture is fully-convolutional and does not rely on patch extraction, we cannot compute patch overlap and instead compute repeatability by measuring the distance between the extracted 2D point centers. We use $\epsilon$ to represent the correct distance threshold between two points. More concretely, let us assume we have $N_1$ points in the first image and $N_2$ points in the second image. We define correctness for repeatability experiments as follows: a point is correct if, once mapped into the other image by the ground truth homography, some detected point lies within $\epsilon$ pixels of it.
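Repeatability under this definition can be sketched as follows (an illustrative NumPy version; names are ours):

```python
import numpy as np

def repeatability(pts1, pts2, H, eps=3.0):
    """Fraction of points detected in both images: a point counts as
    repeated if, after warping by the homography H, a detection in the
    other image lies within eps pixels.  Averaged over both directions."""
    def warp(pts, M):
        p = np.hstack([pts, np.ones((len(pts), 1))]) @ M.T
        return p[:, :2] / p[:, 2:3]                # homogeneous divide
    def frac(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return float((d.min(axis=1) <= eps).mean())
    pts1 = np.asarray(pts1, float)
    pts2 = np.asarray(pts2, float)
    return 0.5 * (frac(warp(pts1, H), pts2) +
                  frac(warp(pts2, np.linalg.inv(H)), pts1))
```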
Repeatability simply measures the probability that a point is detected in the second image.
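The repeatability computation can be sketched as follows, assuming a known ground-truth homography between the two views; the symmetric averaging over both images shown here is one plausible reading of the protocol, and `eps` is an assumed threshold.

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography to an (N, 2) array of (x, y) points."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def repeatability(pts1, pts2, H, eps=2.0):
    """Fraction of points whose warped location has a detection within eps
    pixels in the other image, counted symmetrically in both directions."""
    def n_correct(a, b, Hab):
        d = np.linalg.norm(warp_points(Hab, a)[:, None] - b[None], axis=-1)
        return int((d.min(axis=1) <= eps).sum())
    n1, n2 = len(pts1), len(pts2)
    return (n_correct(pts1, pts2, H) + n_correct(pts2, pts1, np.linalg.inv(H))) / (n1 + n2)
```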
Nearest Neighbor mean Average Precision. This metric captures how discriminative the descriptor is by evaluating it at multiple descriptor distance thresholds. It is computed by measuring the Area Under Curve (AUC) of the Precision-Recall curve, using the Nearest Neighbor matching strategy. This metric is computed symmetrically across the pair of images and averaged.
Matching Score. This metric measures the overall performance of the interest point detector and descriptor combined. It measures the ratio of ground-truth correspondences that can be recovered by the whole pipeline to the number of features proposed by the pipeline in the shared viewpoint region. This metric is computed symmetrically across the pair of images and averaged.
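As a sketch, a mutual nearest-neighbor matcher and a simplified one-directional matching score might look like the following. The shared-viewpoint filtering is omitted for brevity, and the helper `warp_points` and the threshold `eps` are illustrative assumptions.

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography to an (N, 2) array of (x, y) points."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def nn_matches(desc1, desc2):
    """Mutual (cross-checked) nearest-neighbor matches between descriptor sets."""
    d = np.linalg.norm(desc1[:, None] - desc2[None], axis=-1)
    nn12 = d.argmin(axis=1)
    nn21 = d.argmin(axis=0)
    return [(i, j) for i, j in enumerate(nn12) if nn21[j] == i]

def matching_score(pts1, pts2, desc1, desc2, H, eps=2.0):
    """Fraction of proposed features whose nearest-neighbor match lands within
    eps pixels of the ground-truth correspondence (one direction only)."""
    matches = nn_matches(desc1, desc2)
    warped = warp_points(H, pts1)
    good = sum(bool(np.linalg.norm(warped[i] - pts2[j]) <= eps) for i, j in matches)
    return good / len(pts1)
```

In practice the score would be computed in both directions and averaged, as the text describes.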
Homography Estimation. We measure the ability of an algorithm to estimate the homography relating a pair of images by comparing the estimated homography $\hat{\mathcal{H}}$ to the ground-truth homography $\mathcal{H}$. It is not straightforward to compare the matrices directly, since different entries in the matrix have different scales. Instead we compare how well the homography transforms the four corners of one image onto the other. We define the four corners of the first image as $\mathbf{c}_1, \mathbf{c}_2, \mathbf{c}_3, \mathbf{c}_4$. We then apply the ground-truth $\mathcal{H}$ to get the ground-truth corners in the second image, $\mathbf{c}'_i = \mathcal{H}\mathbf{c}_i$, and the estimated homography $\hat{\mathcal{H}}$ to get $\hat{\mathbf{c}}'_i = \hat{\mathcal{H}}\mathbf{c}_i$. We use a threshold $\varepsilon$ to denote a correct homography:

$$\mathrm{CorrectH} = \left(\frac{1}{4}\sum_{i=1}^{4} \|\mathbf{c}'_i - \hat{\mathbf{c}}'_i\|_2 \leq \varepsilon\right).$$
The scores range between $0$ and $1$; higher is better.
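The corner-transfer check can be sketched in a few lines of numpy. Whether the mean or the maximum corner error is thresholded, and the value `eps=3.0`, are assumptions here.

```python
import numpy as np

def homography_correct(H_est, H_gt, shape, eps=3.0):
    """Warp the four image corners with both homographies and declare the
    estimate correct if the mean corner distance is at most eps pixels."""
    h, w = shape
    corners = np.array([[0, 0], [w, 0], [0, h], [w, h]], float)
    def warp(H, pts):
        p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
        return p[:, :2] / p[:, 2:3]
    err = np.linalg.norm(warp(H_est, corners) - warp(H_gt, corners), axis=1).mean()
    return bool(err <= eps)
```

Comparing corner transfer rather than matrix entries sidesteps the scale differences between homography entries noted above.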
Appendix B Additional Synthetic Shapes Experiments
We present the full results of the SuperPoint interest point detector (ignoring the descriptor head) trained and evaluated on the Synthetic Shapes dataset. (An earlier version of our MagicPoint experiments can be found in our “Toward Geometric DeepSLAM” paper .) We call this detector MagicPoint. The data consists of simple synthetic geometry that a human could easily label with the ground-truth corner locations. We expect a good point detector to easily detect the correct corners in these scenarios. In fact, we were surprised at how difficult these simple geometries were for classical point detectors such as FAST , Harris , and the Shi-Tomasi “Good Features to Track” .
We evaluated two models: MagicPointL and MagicPointS. Both models share the same encoder architecture, but differ in the number of neurons per layer: MagicPointL uses 64-64-64-64-128-128-128-128-128 and MagicPointS uses 9-9-16-16-32-32-32-32-32.
We created an evaluation dataset with our Synthetic Shapes generator to determine how well our detector is able to localize simple corners. There are 10 categories of images, shown in Figure 9.
Mean Average Precision and Mean Localization Error. For each category, there are 1000 images sampled from the Synthetic Shapes generator. We compute Average Precision and Localization Error with and without added imaging noise. A summary of the per-category results is shown in Figure 10 and the mean results are shown in Table 5. The MagicPoint detectors outperform the classical detectors in all categories. There is a significant performance gap in mAP in all categories in the presence of noise.
Effect of Noise Magnitude. Next we study the effect of noise more carefully by varying its magnitude. We were curious whether the noise we add to the images is too extreme and unreasonable for a point detector. To test this hypothesis, we linearly interpolate between the clean image and the noisy image. To push the detectors to the extreme, we also interpolate between the noisy image and pure random noise. The random noise images contain no geometric shapes, and thus produce an mAP score of $0$ for all detectors. An example of the varying degrees of noise and the corresponding plots are shown in Figure 11.
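The two-segment interpolation schedule can be sketched as below; the parameterization (a single knob `s` running from the clean image at `s=0`, through the noisy image at `s=1`, to pure random noise at `s=2`) is an illustrative reading of the experiment, not the exact implementation.

```python
import numpy as np

def interpolate_noise(clean, noisy, s, rng=None):
    """Blend from the clean image (s=0) through the noisy image (s=1)
    toward a pure random-noise image (s=2)."""
    rng = np.random.default_rng(0) if rng is None else rng
    random_img = rng.uniform(0.0, 1.0, size=clean.shape)
    if s <= 1.0:
        return (1 - s) * clean + s * noisy          # clean -> noisy segment
    t = s - 1.0
    return (1 - t) * noisy + t * random_img          # noisy -> random segment
```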
Effect of Noise Type. We group the noise into eight types. We study the effect of these noise types individually to better understand which has the biggest effect on the point detectors. Speckle noise is particularly difficult for traditional detectors. Results are summarized in Figure 12.
Blob Detection. We experimented with our model’s ability to detect the centers of shapes such as quadrilaterals and ellipses. We used the MagicPointL architecture (as described above) and augmented the Synthetic Shapes training set to include blob centers in addition to corners. We observed that our model was able to detect such blobs as long as the entire shape was not too large. However, the confidences produced for such “blob detection” are typically lower than those for corners, making it somewhat cumbersome to integrate both kinds of detections into a single system. For the main experiments in the paper, we omit training with blobs, except for the following experiment.
We created a sequence of images of a black square on a white background, varying the square’s width, and report MagicPoint’s confidence for two special pixels in the output heatmap: the center pixel (the location of the blob) and the square’s top-left pixel (an easy-to-detect corner). The MagicPoint blob+corner confidence plot for this experiment can be seen in Figure 13. We observe that the center of the blob is detected confidently for small squares (red region in Figure 13), detected with lower confidence at intermediate widths (yellow region in Figure 13), and not detected once the square grows too large (blue regions in Figure 13).
Appendix C Homographic Adaptation Experiment
When combining interest point response maps, it is important to differentiate between within-scale aggregation and across-scale aggregation. Real-world images typically contain features at different scales: some points which would be deemed interesting in a high-resolution image are often not even visible in a coarser, lower-resolution image. However, within a single scale, transformations of the image such as rotations and translations should not make interest points appear or disappear. This underlying multi-scale nature of images has different implications for the two aggregation strategies: within-scale aggregation should behave like a set intersection, while across-scale aggregation should behave like a set union. In other words, it is the average response within a scale that we really want, and the maximum response across scales. We can additionally use the average response across scales as a multi-scale measure of interest point confidence: it is maximized when the interest point is visible at all scales, and such points are likely to be the most robust interest points for tracking applications.
Within-scale aggregation. We use the average response across a large number of homographic warps of the input image. Care should be taken in choosing random homographies, because not all homographies are realistic image transformations. The number of homographic warps, $N_h$, is a hyper-parameter of our approach. We typically enforce the first homography to be the identity, so that $N_h = 1$ corresponds to doing no homographies (or equivalently, applying only the identity homography). Our experiments range from “small” to “medium” and “large” values of $N_h$.
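One way to realize the random homography sampling is to jitter the image corners and solve the direct linear transform (DLT) for the 3x3 matrix. The corner-jitter scheme and its magnitude below are assumptions for illustration, not the paper’s exact sampling distribution.

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve the 8x8 DLT system mapping four src points onto four dst points."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def sample_homography(h, w, jitter=0.1, rng=None, identity=False):
    """Random homography obtained by jittering the four image corners by up to
    a `jitter` fraction of the image size (an assumed parameterization)."""
    if identity:                      # the first homography is kept as identity
        return np.eye(3)
    corners = np.array([[0, 0], [w, 0], [0, h], [w, h]], float)
    rng = np.random.default_rng() if rng is None else rng
    dst = corners + rng.uniform(-jitter, jitter, corners.shape) * [w, h]
    return homography_from_points(corners, dst)
```

Keeping the jitter moderate is one way to bias the samples toward realistic image transformations, per the caveat above.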
Across-scale aggregation. When aggregating across scales, the number of scales considered, $N_s$, is a hyper-parameter of our approach. The setting $N_s = 1$ corresponds to no multi-scale aggregation (or simply aggregating over the largest possible image size only). For $N_s > 1$, we refer to the multi-scale set of images being processed as “the multi-scale image pyramid.” We consider weighting schemes that weigh levels of the pyramid differently, giving higher-resolution images a larger weight. This is important because interest points detected at lower resolutions have poorer localization ability, and we want the final aggregated points to be localized as well as possible.
We experimented with within-scale and across-scale aggregation on a held-out test set of MS-COCO images. The results are summarized in Figure 14. We find that within-scale aggregation has the biggest effect on repeatability.
Appendix D Extra Qualitative Examples
We show extra qualitative examples of SuperPoint, LIFT, SIFT and ORB on HPatches matching in Figure 15.