Learning discriminative local descriptors from image patches is a fundamental ingredient of various computer vision tasks, including structure-from-motion [24] and panorama stitching. Conventional approaches mostly utilize hand-crafted descriptors, such as SIFT, which have been successfully employed in a variety of applications. Recently, with the emergence of large-scale annotated datasets [5, 3], data-driven methods have started to demonstrate their effectiveness, and learning-based descriptors have gradually come to dominate this field. Specifically, convolutional neural network (CNN) based descriptors [10, 35, 34, 30, 21, 31] achieve state-of-the-art performance on various tasks, including patch retrieval and 3D reconstruction.
Notably, the triplet loss is adopted in many well-performing descriptor learning frameworks. Nevertheless, the quality of the learned descriptors heavily relies on the triplet selection, and mining suitable triplets from a large database is a challenging task. To address this challenge, Balntas et al. propose an in-triplet hard negative mining strategy called anchor swapping. Tian et al. progressively sample unbalanced training pairs in favor of negatives, and Mishchuk et al. further simplify this idea to mine the hardest negatives within the mini-batch. Despite the significant progress on performance and generalization ability, however, two potential problems still exist in the current hardest-in-batch sampling solution: i) hard negatives are mined at the batch level, while randomly selected matching pairs can still be easily discriminated by the descriptor network; ii) it does not take the interaction between the training progress and the hardness of the training samples into consideration. To this end, we propose a novel triplet mining pipeline to adaptively construct highly informative batches in a principled manner.
Our proposed method is named AdaSample: matching pairs are sampled from the dataset based on their informativeness to construct mini-batches. The methodology builds on an informativeness analysis of the training examples and employs maximum loss minimization to boost the generalization ability of the descriptor network. Under this training framework, we can adaptively adjust the overall hardness of the training examples fed to the network, based on the training progress. Comprehensive evaluation results and ablation studies on several standard benchmarks [5, 3] demonstrate the effectiveness of our proposed method.
In summary, our contributions are three-fold:
We theoretically analyze the informativeness of potential training examples and formulate a principled sampling approach for descriptor learning.
We propose a hardness-aware training protocol inspired by maximum loss minimization, in which the overall hardness of the generated triplets is adaptively adjusted to match the training progress.
Comprehensive evaluation results on popular benchmarks demonstrate the efficacy of our proposed AdaSample framework.
2 Related work
Local Descriptor Learning.
Traditional descriptors [17, 15] mostly utilize hand-crafted features to extract low-level textures from image patches. The seminal work, i.e., SIFT, computes smoothed weighted histograms using the gradient field of the image patch. PCA-SIFT further improves the descriptors by applying Principal Component Analysis (PCA) to the normalized image gradient. A comprehensive overview of hand-crafted descriptors can be found in the literature.
Recently, due to the rapid development of deep learning, CNN-based methods enable us to learn feature descriptors directly from raw image patches. MatchNet proposes a two-stage Siamese architecture to extract feature embeddings and measure patch similarity, which significantly improves performance and demonstrates the great potential of CNNs in descriptor learning. DeepDesc trains the network with Euclidean distance and adopts a mining strategy to sample hard examples. DeepCompare explores various architectures of the Siamese network and develops a two-stream network focusing on image centers.
With the advances of metric learning, triplet-based architectures have gradually replaced the pair-based ones. TFeat adopts the triplet loss and mines in-triplet hard negatives with a strategy named anchor swapping. L2-Net employs progressive sampling and requires that matching patches have minimal distances within the mini-batch. HardNet further develops this idea to mine the hardest-in-batch negatives with a simple triplet margin loss. DOAP imposes a ranking-based loss directly optimized for average precision. GeoDesc further incorporates geometric constraints from multi-view reconstructions and achieves significant improvement on the 3D reconstruction task. SOSNet proposes a second-order similarity regularization term and achieves more compact patch clusters in the feature space. A very recent work relaxes the hard margin in the triplet margin loss with a dynamic soft margin, avoiding manual tuning of the margin by human heuristics.
From prior art, we find that the triplet mining framework can generally be decoupled into two stages, i.e., batch construction from the dataset and triplet generation within the mini-batch. Previous works [4, 30, 21] mostly focus on mining hard negatives in the second stage, while neglecting batch construction in the first. Besides, their sampling approaches do not take the training progress into consideration when generating triplets. Therefore, we argue that their triplet mining solutions still cannot exploit the full potential of the entire dataset to produce triplets of suitable hardness. To alleviate this issue, we analyze the contributing gradients of the potential training examples and sample informative matching pairs for batch construction. Then, we propose a hardness-aware training protocol inspired by maximum loss minimization, in which the overall hardness of the selected triplets is correlated with the training progress. Incorporating the hardest-in-batch negative mining solution, we formulate a powerful triplet mining framework, AdaSample, for descriptor learning, in which the quality of the learned descriptors can be significantly improved by a simple sampling strategy.
Hard Negative Mining.
Hard negative mining has been widely used in deep metric learning, such as face verification, as it can progressively select hard negatives for triplet loss and Siamese networks to boost performance and speed up convergence. FaceNet samples semi-hard triplets within the mini-batch to avoid overfitting to outliers. Wu et al. select training examples based on their relative distances. Zheng et al. augment the training data by adaptively synthesizing hardness-aware and label-preserving examples. However, our sampling solution differs from these in that we analyze the informativeness
of the training data and ensure that the sampled data can provide gradients contributing most to the parameter update. Besides, our method can adaptively adjust the hardness of the selected training data as training progresses. In this way, well-classified samples are filtered out, and the network is always fed with informative triplets with suitable hardness. Comprehensive evaluation results demonstrate consistent performance improvement contributed by our proposed approach.
3.1 Problem Overview
Given a dataset that consists of classes (the term "class" stands for the image patches that come from the same 3D location; for our sampling purpose, patches from a single class are matching, while non-matching pairs come from different classes), each containing several matching patches, we decompose the triplet generation into two stages. Firstly, we select matching pairs (positives) to form a mini-batch of a fixed batch size. This is done by our proposed AdaSample, as introduced in Sec. 3.2. Secondly, we mine the hardest-in-batch negative for each matching pair and use the triplet loss to supervise the network training, as in Sec. 3.3. See Fig. 1 for an illustration of the two-stage sampling pipeline. Finally, the overall solution is summarized in Sec. 3.4.
Previous works [30, 21] sample positives randomly to construct mini-batches, yielding a majority of similar matching pairs which can be easily discriminated by the network. This practice may reduce the overall hardness of the triplets. Motivated by the hardest-in-batch mining strategy, a straightforward solution is to select the most dissimilar matching pairs. However, a potential issue arises: the network may be trained with a bias in favor of the most dissimilar matching pairs, while other cases are under-considered. We validate this solution, named Hardpos, in our experiments (Sec. 5.4).
A more principled solution is to sample positives based on their informativeness. Here, we assume that informative pairs are those contributing most to the optimization, namely, providing effective gradients for parameter updates. Therefore, we quantify the informativeness of the matching pairs by measuring their contributing gradients during training. Moreover, we employ maximum loss minimization  to improve the generalization ability of the learned model and show that the resulting gradient estimator is an unbiased estimator of the actual gradient. In the following, we introduce our derivation and elaborate on the theoretical justification in Sec. 4.
Informativeness Based Sampling.
In the end-to-end deep learning literature, the training data contribute to optimization via gradients, so we measure the informativeness of training examples by analyzing their resulting gradients. Generally, we consider the generic deep learning framework. Let $(x_i, y_i)$ be the $i$-th data-label pair of the training set, $f_\theta$ be the model parameterized by $\theta$, and $\ell$ be a differentiable loss function. The goal is to find the optimal model parameter $\theta^*$ that minimizes the average loss, i.e.,

$$\theta^* = \arg\min_\theta \frac{1}{N} \sum_{i=1}^{N} \ell(f_\theta(x_i), y_i),$$

where $N$ denotes the number of training examples. Then, we proceed with the following definition of informativeness.
The informativeness of a training example is quantified by its resulting gradient norm at iteration $t$, namely, $I_i^{(t)} = \left\| \nabla_\theta \ell(f_{\theta_t}(x_i), y_i) \right\|$.
At iteration $t$, let $p_i^{(t)}$ be the sampling probability of each datum in the training set. More generally, we also re-weight each sample by a scalar $w_i^{(t)}$. Let the random variable $I_t$ denote the sampled index at iteration $t$; then $I_t$ follows the sampling distribution, namely, $P(I_t = i) = p_i^{(t)}$. We record the re-weighted gradient induced by the $i$-th training sample as

$$G_i^{(t)} = w_i^{(t)} \nabla_\theta \ell(f_{\theta_t}(x_i), y_i).$$

For simplicity, we omit the superscript $(t)$ when no ambiguity arises. By setting $w_i = 1/(N p_i)$, we can make the gradient estimator $G_{I_t}$ an unbiased estimator of the actual gradient, i.e.,

$$\mathbb{E}[G_{I_t}] = \sum_{i=1}^{N} p_i \cdot \frac{1}{N p_i} \nabla_\theta \ell_i = \frac{1}{N} \sum_{i=1}^{N} \nabla_\theta \ell_i,$$

where $\ell_i$ is shorthand for $\ell(f_{\theta_t}(x_i), y_i)$.
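As a quick numerical illustration of the unbiasedness argument, the sketch below checks on toy gradients (all names and shapes are ours, purely for illustration) that re-weighting by $w_i = 1/(N p_i)$ recovers the average gradient in expectation, for an arbitrary sampling distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-sample "gradients" for N training examples over a 5-dim parameter.
N, D = 8, 5
grads = rng.normal(size=(N, D))
true_mean_grad = grads.mean(axis=0)

# Any valid sampling distribution p over examples, e.g. prop. to gradient norm.
norms = np.linalg.norm(grads, axis=1)
p = norms / norms.sum()

# With w_i = 1 / (N p_i), the expectation of the re-weighted sampled gradient
# is  sum_i p_i * (1 / (N p_i)) * g_i = (1/N) sum_i g_i,  the average gradient.
w = 1.0 / (N * p)
estimator_mean = (p[:, None] * w[:, None] * grads).sum(axis=0)

assert np.allclose(estimator_mean, true_mean_grad)
```

The same identity holds for any strictly positive sampling distribution, which is what makes the later choice of an informativeness-driven $p_i$ legitimate.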
Without loss of generality, we use stochastic gradient descent (SGD) to update the model parameters:

$$\theta_{t+1} = \theta_t - \eta_t G_{I_t},$$

where $\eta_t$ is the learning rate at iteration $t$. As the goal is to find the optimal $\theta^*$, we define the expected progress towards the optimum at each iteration as follows.
At iteration $t$, the expected parameter rectification $R_t$ is defined as the expected reduction of the squared distance between the parameter and the optimum after iteration $t$,

$$R_t = \mathbb{E}\left[ \|\theta_t - \theta^*\|^2 - \|\theta_{t+1} - \theta^*\|^2 \right].$$
Generally, tens of thousands of iterations are included in the training, so the empirical average parameter rectification converges to the average of $R_t$ asymptotically. Therefore, by maximizing $R_t$, we guarantee the most progressive step towards the parameter optimum at each iteration in the expectation sense. Inspired by the greedy algorithm, we aim to maximize $R_t$ at each iteration.
It can be shown that maximizing $R_t$ is equivalent to minimizing $\mathbb{E}[\|G_{I_t}\|^2]$ (Thm. 1). Under this umbrella, we show that the optimal sampling probability is proportional to the per-sample gradient norm (a special case of Thm. 2). Therefore, the optimal sampling probability of each datum happens to be proportional to its informativeness. This property justifies our definition of informativeness as the resulting gradient norm of each training example.
However, as the neural network has multiple layers with a large number of parameters, it is computationally prohibitive to calculate the full gradient norm. Instead, we prove in Sec. 4.2 that the matching distance in the feature space is a good approximation to the informativeness (the approximation is up to a constant factor, which is insignificant as it will be offset by the learning rate; the same reasoning applies to the approximation of gradients in the Maximum Loss Minimization paragraph). Concretely, for each class consisting of patches $\{x_j\}$, we first select a patch randomly, which serves as the anchor patch $a$, and then sample a matching patch $x_j$ with probability

$$p_j \propto d\big(f(a), f(x_j)\big),$$

where $f(x)$ is the extracted descriptor of patch $x$, and $d(\cdot, \cdot)$ measures the discrepancy of the extracted descriptors. See specific choices of $d$ in Sec. 3.4.
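A minimal sketch of this positive-sampling step, assuming descriptors are already extracted and using plain Euclidean distance as $d$ (the function name `sample_positive` is ours, not from the paper):

```python
import numpy as np

def sample_positive(anchor_desc, class_descs, rng):
    """Sample one matching patch with probability proportional to its
    descriptor distance from the anchor (more dissimilar -> more likely)."""
    dists = np.linalg.norm(class_descs - anchor_desc, axis=1)
    probs = dists / dists.sum()
    idx = rng.choice(len(class_descs), p=probs)
    return idx, probs

# Toy class: unit-normalized descriptors of 4 patches from one 3D location.
rng = np.random.default_rng(1)
descs = rng.normal(size=(4, 128))
descs /= np.linalg.norm(descs, axis=1, keepdims=True)

anchor = descs[0]
idx, probs = sample_positive(anchor, descs[1:], rng)
```

Note that the anchor itself is excluded from the candidates, so no candidate has zero distance and the distribution is always well defined.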
Maximum Loss Minimization.
Minimizing the average loss may be sub-optimal because the training tends to be overwhelmed by well-classified examples that provide noisy gradients. On the contrary, well-classified examples can be adaptively filtered out by minimizing the maximum loss $\max_i \ell_i$, which can further improve the generalization ability. However, directly minimizing the maximum loss may lead to insufficient usage of the training data and sensitivity to outliers, so we approximate the gradient of the maximum loss by $\sum_i \ell_i^{k-1} \nabla_\theta \ell_i$ (up to normalization), in which $k$ is sufficiently large. As $G_{I_t}$ is used to update parameters, consider its expectation

$$\mathbb{E}[G_{I_t}] = \sum_{i=1}^{N} p_i w_i \nabla_\theta \ell_i.$$

To guarantee that $G_{I_t}$ is an unbiased estimator of this approximated gradient (we impose the unbiasedness constraint due to its theoretical convergence guarantees; for example, non-asymptotic error bounds for unbiased gradient estimates and improved convergence rates for re-weighted SGD, as in our case, have been established in prior work), it suffices to set

$$p_i w_i = q_i, \qquad q_i = \frac{\ell_i^{k-1}}{\sum_{j=1}^{N} \ell_j^{k-1}},$$

as in this case,

$$\mathbb{E}[G_{I_t}] = \sum_{i=1}^{N} q_i \nabla_\theta \ell_i \propto \sum_{i=1}^{N} \ell_i^{k-1} \nabla_\theta \ell_i.$$
Following the previous reasoning, we need to minimize $\mathbb{E}[\|G_{I_t}\|^2]$ under the constraints specified by Eqn. 9 in order to step most progressively at each iteration. In Thm. 2, we show that the optimal sampling probability and re-weighting scalar should be given by

$$p_i \propto q_i \|\nabla_\theta \ell_i\| \propto \ell_i^{k-1} \|\nabla_\theta \ell_i\|, \qquad w_i = \frac{q_i}{p_i}.$$
As previously claimed, we approximate the gradient norm via the matching distance $d_i$ in the feature space. Besides, in our case, the hinge triplet loss (Eqn. 14) is positively (or even linearly) correlated with the squared matching distance. Therefore, we use the squared matching distance as an approximation of the hinge triplet loss. Thus, the sampling probability and re-weighting scalar are given by

$$p_i \propto d_i^{2(k-1)} \cdot d_i = d_i^{2k-1}, \qquad w_i = \frac{q_i}{p_i}.$$
Moreover, for better approximation, it is preferable to adjust $k$ adaptively, namely, to increase it as training progresses. Intuitively, once easy matching pairs have been correctly classified, we should focus more on hard ones. A good indicator of the training progress is the average loss. As a result, instead of pre-defining a sufficiently large $k$, we set the exponent of the sampling probability to $\lambda / \bar{L}$, where $\lambda$ is a tunable hyperparameter, and $\bar{L}$ is the moving average of the history loss. Formally, we formulate our sampling probability and re-weighting scalar as

$$p_i \propto d_i^{\lambda / \bar{L}}, \qquad w_i = \frac{q_i}{p_i}.$$
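Under the exponent $\lambda / \bar{L}$ described above, the adaptive sampling distribution can be sketched as follows (the names `lam` and `loss_avg`, and the numerical-stability details, are ours):

```python
import numpy as np

def adasample_probs(match_dists, lam, loss_avg):
    """Hardness-aware sampling probabilities: p_i proportional to
    d_i ** (lam / loss_avg). As the moving-average loss shrinks, the
    exponent grows and sampling concentrates on harder (more distant) pairs."""
    k = lam / max(loss_avg, 1e-8)
    logits = k * np.log(np.maximum(match_dists, 1e-12))
    logits -= logits.max()  # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

d = np.array([0.2, 0.4, 0.8, 1.6])
early = adasample_probs(d, lam=1.0, loss_avg=2.0)  # near-uniform early on
late = adasample_probs(d, lam=1.0, loss_avg=0.2)   # peaked on hard pairs later
assert early.max() < late.max()

# lam = 0 recovers uniform (random) sampling, i.e. the HardNet batch scheme.
uniform = adasample_probs(d, lam=0.0, loss_avg=1.0)
assert np.allclose(uniform, 0.25)
```

As the assertions illustrate, the same hyperparameter $\lambda$ interpolates between random sampling and hardest-positive sampling.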
The exponent increases adaptively as training progresses, so that hardness-aware training examples are generated and fed to the network. Our sampling approach is thus named AdaSample.
3.3 Triplet generation by hardest-in-batch
AdaSample focuses on the batch construction stage; for a complete triplet mining framework, we need to mine negatives from the mini-batch as well. Here, we adopt the hardest-in-batch strategy of HardNet. Formally, given a mini-batch of matching pairs $\{(a_i, p_i)\}_{i=1}^{n}$, let $\{(f(a_i), f(p_i))\}_{i=1}^{n}$ be the descriptors extracted from them (for clarity, $(a_i, p_i)$ denotes a selected matching pair, with different pairs belonging to different classes). For each matching pair $(a_i, p_i)$, we select the non-matching patch $n_i$ which lies closest to one of the matching patches in the feature space. Then, the Hinge Triplet (HT) loss is defined as follows:

$$L_{HT} = \frac{1}{n} \sum_{i=1}^{n} \max\Big(0,\; m + d^2\big(f(a_i), f(p_i)\big) - d^2\big(f(a_i), f(n_i)\big)\Big),$$

where $m$ denotes the margin. Incorporating the re-weighting scalar, we update the model parameters via the gradient estimator $w_{I_t} \nabla_\theta L_{HT}$.
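A sketch of the hardest-in-batch HT loss as reconstructed above, using squared Euclidean distances; for simplicity it mines negatives only among the positive descriptors, a slight simplification of the full strategy, and the function name is ours:

```python
import numpy as np

def hardest_in_batch_ht_loss(anchors, positives, margin=1.0):
    """Hinge triplet loss with hardest-in-batch negatives (a sketch).

    anchors, positives: (n, d) L2-normalized descriptors; row i of each matches.
    For each pair, the negative is the closest non-matching positive descriptor,
    and distances are squared Euclidean."""
    n = anchors.shape[0]
    # Pairwise squared distances between every anchor and every positive.
    d2 = ((anchors[:, None, :] - positives[None, :, :]) ** 2).sum(-1)
    pos_d2 = np.diag(d2)                 # matching distances
    masked = d2 + np.eye(n) * 1e9        # mask out the matching pair
    neg_d2 = masked.min(axis=1)          # hardest (closest) negative per anchor
    return np.maximum(0.0, margin + pos_d2 - neg_d2).mean()

# Toy usage on random unit descriptors.
rng = np.random.default_rng(0)
a = rng.normal(size=(6, 128))
a /= np.linalg.norm(a, axis=1, keepdims=True)
p = rng.normal(size=(6, 128))
p /= np.linalg.norm(p, axis=1, keepdims=True)
loss = hardest_in_batch_ht_loss(a, p)
```

The full strategy additionally considers anchor-to-anchor candidates when searching for the hardest negative; the masking trick above extends directly to that case.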
3.4 Distance Metric
Euclidean distance is widely used in previous works [28, 30, 21, 31]. However, as the descriptors lie on the unit hypersphere (Sec. 5.1), it is more natural to adopt the geodesic distance of the embedded manifold. Therefore, we adopt the angular distance as follows:

$$d_{ang}(x, y) = \arccos\big(\langle x, y \rangle\big),$$

where $\langle \cdot, \cdot \rangle$ denotes the inner product operator. We name our loss function the Angular Hinge Triplet (AHT) loss, which is demonstrated to result in consistent performance improvement (Sec. 5.4).
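The angular distance is straightforward to implement; a minimal sketch, assuming L2-normalized descriptors (the clipping guards against floating-point values slightly outside [-1, 1]):

```python
import numpy as np

def angular_distance(x, y):
    """Geodesic distance on the unit hypersphere: arccos of the inner product.
    Assumes x and y are L2-normalized descriptors."""
    return np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
assert np.isclose(angular_distance(x, x), 0.0)       # identical descriptors
assert np.isclose(angular_distance(x, y), np.pi / 2) # orthogonal descriptors
```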
Alg. 1 summarizes the overall triplet generation framework. For each training iteration, we first randomly pick $n$ distinct classes from the dataset and extract descriptors for the patches belonging to these classes (Steps 1, 2). Then, we randomly choose a patch as the anchor from each of the selected classes (Step 4) and adopt our proposed AdaSample to select an informative matching patch (Step 5). With the generated mini-batch, we mine hard negatives following the hardest-in-batch strategy and compute the Angular Hinge Triplet (AHT) loss (Step 7).
4 Theoretical Analysis
4.1 Informativeness Formulation
Due to unbiasedness (Eqn. 4), the first two terms in Eqn. 16 are fixed, so maximizing $R_t$ is equivalent to minimizing $\mathrm{Tr}\big(\mathbb{E}[G_{I_t} G_{I_t}^\top]\big) = \mathbb{E}[\|G_{I_t}\|^2]$. Thm. 2 specifies the optimal probabilities that minimize the aforementioned trace under a more general assumption.
Let $G_i$ be defined as in Eqn. 3 and suppose the sampled index $I$ obeys the distribution $P(I = i) = p_i$. Then, given the constraints $p_i w_i = q_i$, $\mathrm{Tr}\big(\mathbb{E}[G_I G_I^\top]\big)$ is minimized by the following optimal sampling probabilities:

$$p_i^* \propto q_i \|\nabla_\theta \ell_i\|.$$
As $G_I$ is an unbiased estimator of the actual gradient (Eqn. 4), $\mathbb{E}[G_I]$ is fixed in our case, denoted by $\bar{g}$ for short. By the linearity of the trace and the expectation, we have

$$\mathrm{Tr}\big(\mathbb{E}[G_I G_I^\top]\big) = \mathbb{E}\big[\|G_I\|^2\big] = \sum_{i=1}^{N} p_i w_i^2 \|\nabla_\theta \ell_i\|^2 = \sum_{i=1}^{N} \frac{q_i^2 \|\nabla_\theta \ell_i\|^2}{p_i}.$$

Mathematically, given the constraints $\sum_i p_i = 1$ and $p_i \geq 0$, the sum above reaches its minimum (by the Cauchy-Schwarz inequality) when the probabilities satisfy

$$p_i \propto q_i \|\nabla_\theta \ell_i\|.$$

Dividing by a normalization factor, we get the expression in Eqn. 17. ∎
Note that in the special case of $q_i = 1/N$, the constraints degrade into $w_i = 1/(N p_i)$, and the optimal sampling probabilities become $p_i \propto \|\nabla_\theta \ell_i\|$.
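This special case can be checked numerically: among candidate distributions, $p_i \propto \|\nabla_\theta \ell_i\|$ yields the smallest second moment of the re-weighted gradient. A toy sketch (gradient norms and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
g = np.abs(rng.normal(size=6)) + 0.1  # per-sample gradient norms ||grad_i||
N = len(g)

def second_moment(p):
    # E[||w_I g_I||^2] with unbiased weights w_i = 1/(N p_i):
    # sum_i p_i * (||g_i|| / (N p_i))^2 = (1/N^2) sum_i ||g_i||^2 / p_i
    return (g ** 2 / p).sum() / N ** 2

p_opt = g / g.sum()              # p_i proportional to ||g_i|| (Thm. 2 case)
p_uniform = np.full(N, 1.0 / N)  # plain random sampling
p_rand = rng.dirichlet(np.ones(N))

assert second_moment(p_opt) <= second_moment(p_uniform)
assert second_moment(p_opt) <= second_moment(p_rand)
```

By the Cauchy-Schwarz inequality, $(\sum_i \|g_i\|)^2 \leq \sum_i \|g_i\|^2 / p_i$ for any distribution $p$, with equality exactly at $p_i \propto \|g_i\|$, so the assertions hold for any random draw.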
4.2 Approximation of Informativeness
As mentioned in Sec. 3.2, the matching distance can serve as a good approximation of the informativeness, which we justify here. For simplicity, we introduce some notation for an $L$-layer multi-layer perceptron (MLP). Let $W^{(l)}$ be the weight matrix for layer $l$ and $\sigma(\cdot)$ be a Lipschitz continuous activation function. Then the multi-layer perceptron can be formulated as follows:

$$h^{(0)} = x, \qquad h^{(l)} = \sigma\big(W^{(l)} h^{(l-1)}\big), \quad l = 1, \dots, L.$$

Note that although our notation describes only MLPs without bias, our analysis holds for any affine transformation followed by a Lipschitz continuous non-linearity. Therefore, our reasoning naturally extends to CNNs.
Various data preprocessing, weight initialization [9, 11], and activation normalization [13, 2, 32] techniques uniformize the activations of each layer across samples. Therefore, the variation of gradient norms is mostly captured by the gradient of the loss function w.r.t. the output of the neural network,

$$\|\nabla_\theta \ell_i\| \approx c \left\| \frac{\partial \ell_i}{\partial f_\theta(x_i)} \right\|,$$

where $c$ is a constant, and the right-hand side serves as a precise approximation of the full gradient norm. For simplicity, we consider the hinge triplet loss (Eqn. 14) here. Then, the gradient norm w.r.t. the descriptor of the matching patch is just twice the matching distance (this relation holds only when the hinge triplet loss is positive; empirically, due to the relatively large margin, the hinge loss never becomes zero),

$$\left\| \frac{\partial L_{HT}}{\partial f(p_i)} \right\| = \big\| 2\big(f(p_i) - f(a_i)\big) \big\| = 2\, d\big(f(a_i), f(p_i)\big).$$
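The claimed relation is easy to verify numerically for the squared-distance term of the loss that involves the positive descriptor; a finite-difference sketch on toy descriptors (all names and dimensions are ours):

```python
import numpy as np

rng = np.random.default_rng(2)

# f_a: anchor descriptor; f_p: matching-patch descriptor (toy, 8-dim).
f_a = rng.normal(size=8)
f_p = rng.normal(size=8)

def loss_wrt_positive(fp):
    # The d^2 term of the hinge triplet loss that involves the positive,
    # assuming the hinge is active (loss > 0).
    return ((f_a - fp) ** 2).sum()

# Analytic gradient w.r.t. f_p is 2 (f_p - f_a); its norm is twice the distance.
grad = 2.0 * (f_p - f_a)
match_dist = np.linalg.norm(f_a - f_p)
assert np.isclose(np.linalg.norm(grad), 2.0 * match_dist)

# Central finite-difference cross-check of the analytic gradient.
eps = 1e-6
num_grad = np.array([
    (loss_wrt_positive(f_p + eps * e) - loss_wrt_positive(f_p - eps * e)) / (2 * eps)
    for e in np.eye(8)
])
assert np.allclose(num_grad, grad, atol=1e-4)
```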
As a result, we reach the conclusion that the matching distance is a good approximation to the informativeness. Also, we empirically verify this in Sec. 5.4.
5.1 Implementation Details
We sample a fixed number of matching pairs for each epoch and train for a fixed number of epochs. The learning rate is divided by a constant factor at the end of several pre-defined epochs.
We compare our method with both handcrafted and deep methods (note that the training dataset of GeoDesc is not released, so the comparison may be unfair; besides, some recent works [31, 36] explore different directions and their training code is not publicly available, so we leave the efficacy comparison and system combination for future work), including SIFT, DeepDesc, TFeat, L2-Net, HardNet, HardNet with global orthogonal regularization (GOR), DOAP, and GeoDesc. Comprehensive evaluation results and ablation studies on two standard descriptor datasets, UBC Phototour (Sec. 5.2) and HPatches (Sec. 5.3), demonstrate the efficacy of our proposed sampling framework.
5.2 UBC Phototour
UBC Phototour, also known as the Brown dataset, consists of three subsets: Liberty, Notre Dame, and Yosemite, each containing several hundred thousand normalized patches. Keypoints are detected by the DoG detector and verified by a 3D model. The testing set consists of labeled matching and non-matching pairs for each sequence. For evaluation, models are trained on one subset and tested on the other two. The metric is the false positive rate (FPR) at 95% true positive recall. The evaluation results are reported in Tab. 1.
Our method outperforms the other approaches by a significant margin. We randomly flip patches and rotate them by 90 degrees for data augmentation (noted in Tab. 1). Besides, for our method, we also generate positive patches by random rotation so that each class contains more patches (noted by *). We augment matching pairs because there are too few patches (two or three) corresponding to one class in the UBC Phototour dataset, which limits the capacity of our method. To analyze its effect, we also apply this augmentation to the HardNet baseline. It can be seen that our method consistently outperforms the baseline, indicating the effectiveness of our adaptive sampling solution.
5.3 HPatches
HPatches consists of 116 sequences of images. The dataset is split into two parts: viewpoint, 59 sequences with significant viewpoint change, and illumination, 57 sequences with significant illumination change. According to the level of geometric noise, the patches can be further divided into three groups: easy, hard, and tough. There are three evaluation tasks: patch verification, image matching, and patch retrieval. Following the standard evaluation protocols of the dataset, we show results in Fig. 2. Our method compares favorably against other methods on the patch verification task, which is consistent with the patch classification results in Tab. 1. Furthermore, our descriptors achieve the best results on the more challenging image matching and patch retrieval tasks, indicating the improved generalization ability contributed by our approach.
5.4 Ablation Study
We empirically verify the conclusion in Sec. 4.2 that the probability induced by the matching distance approximates well the one induced by the informativeness (Fig. 3, Left). Besides, the results show that the Pearson correlation remains consistently high during training (Fig. 3, Right), which indicates that these probabilities are strongly correlated statistically.
Impact of λ and Distance Metric.
We experiment with varying λ in AdaSample to control the overall hardness of the selected matching pairs. A large λ indicates that hard matching pairs are more likely to be selected. When λ = 0, our method degrades into random sampling and the overall framework becomes HardNet, and as λ → ∞, the framework becomes Hardpos. Therefore, both HardNet and Hardpos are special cases of our proposed AdaSample. Tab. 2 shows the results on the HPatches dataset, where a moderate λ leads to the best results in most cases. This demonstrates the advantage of our balanced sampling strategy over the hardest-only solution. Also, Tab. 2 demonstrates that the angular hinge triplet (AHT) loss outperforms the commonly-used hinge triplet (HT) loss in most cases.
Stability and Reproducibility.
The sampling naturally introduces stochasticity. To ensure reproducibility, we conduct experiments over five runs with different random seeds and report the means and standard deviations of the patch classification results in Tab. 3. The results demonstrate the stability of our sampling solution. We argue that a possible explanation of this stability is the unbiasedness of the gradient estimator (Eqn. 10). As the number of training triplets is huge, the estimated gradients converge to the actual gradient asymptotically. Therefore, the gradients can guide the network towards the parameter optimum as training progresses, regardless of the specific random condition.
Since previous methods have been approaching the saturation point in terms of performance on the UBC Phototour dataset, it is challenging to make progress on top of the HardNet baseline. However, with the proposed method, we still observe a consistent improvement, as demonstrated in Tab. 3. Our method gives a relative improvement of up to 8.38% in terms of patch classification accuracy. To be more principled, we also demonstrate the statistical significance of our improvement upon the baseline. Specifically, we adopt non-parametric hypothesis testing, i.e., the classic Mann-Whitney U test, to test whether one random variable is stochastically larger than the other. In our setting, the two random variables are the performance of AdaSample and that of the HardNet baseline, respectively, and the null hypothesis is that our method cannot significantly improve the performance. The p-values under different experimental settings are summarized in Tab. 3. At the chosen significance level, we can reject the null hypothesis in 5 of the 6 experiments in total. For the only anomaly, i.e., training on Notre Dame and testing on Liberty, we conjecture that the reason lies in the extremely strong performance of the HardNet baseline (about 0.4% in terms of FPR). Therefore, we argue that the statistical significance under the other 5 experimental settings is sufficient to verify the effectiveness of our approach.
6 Conclusion
This paper proposes AdaSample for descriptor learning, which adaptively samples hard positives to construct informative mini-batches during training. We demonstrate the efficacy of our method from both theoretical and empirical perspectives. Theoretically, we give a rigorous definition of the informativeness of potential training examples. Then, we reformulate the problem and derive a tractable sampling probability expression (Eqn. 13) to generate hardness-aware training triplets. Empirically, we obtain a consistent and statistically significant performance gain on top of the HardNet baseline when evaluated on various tasks, including patch classification, patch verification, image matching, and patch retrieval.
- Agarwal, S., Snavely, N., Simon, I., Seitz, S. M., and Szeliski, R. (2009) Building Rome in a day. In IEEE International Conference on Computer Vision (ICCV), pp. 72–79.
- Ba, J. L., Kiros, J. R., and Hinton, G. E. (2016) Layer normalization. arXiv preprint arXiv:1607.06450.
- Balntas, V., Lenc, K., Vedaldi, A., and Mikolajczyk, K. (2017) HPatches: a benchmark and evaluation of handcrafted and learned local descriptors. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5173–5182.
- Balntas, V., Riba, E., Ponsa, D., and Mikolajczyk, K. (2016) Learning local feature descriptors with triplets and shallow convolutional neural networks. In British Machine Vision Conference (BMVC).
- Brown, M., Hua, G., and Winder, S. (2011) Discriminative learning of local image descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 33 (1), pp. 43–57.
- Brown, M. and Lowe, D. G. (2007) Automatic panoramic image stitching using invariant features. International Journal of Computer Vision (IJCV) 74 (1), pp. 59–73.
- Deng, J., Guo, J., Xue, N., and Zafeiriou, S. (2019) ArcFace: additive angular margin loss for deep face recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4690–4699.
- Edmonds, J. (1971) Matroids and the greedy algorithm. Mathematical Programming 1 (1), pp. 127–136.
- Glorot, X. and Bengio, Y. (2010) Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 249–256.
- Han, X., Leung, T., Jia, Y., Sukthankar, R., and Berg, A. C. (2015) MatchNet: unifying feature and metric learning for patch-based matching. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3279–3286.
- He, K., Zhang, X., Ren, S., and Sun, J. (2015) Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In IEEE International Conference on Computer Vision (ICCV), pp. 1026–1034.
- He, K., Lu, Y., and Sclaroff, S. (2018) Local descriptors optimized for average precision. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 596–605.
- Ioffe, S. and Szegedy, C. (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML), pp. 448–456.
- Katharopoulos, A. and Fleuret, F. (2018) Not all samples are created equal: deep learning with importance sampling. In International Conference on Machine Learning (ICML), PMLR, pp. 2525–2534.
- Ke, Y. and Sukthankar, R. (2004) PCA-SIFT: a more distinctive representation for local image descriptors. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 506–513.
- Lin, T.-Y., Goyal, P., Girshick, R., He, K., and Dollár, P. (2017) Focal loss for dense object detection. In IEEE International Conference on Computer Vision (ICCV), pp. 2980–2988.
- Lowe, D. G. (2004) Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision (IJCV) 60 (2), pp. 91–110.
- Luo, Z., Shen, T., Zhou, L., Zhu, S., Zhang, R., Yao, Y., Fang, T., and Quan, L. (2018) GeoDesc: learning local descriptors by integrating geometry constraints. In European Conference on Computer Vision (ECCV), pp. 168–183.
- Mann, H. B. and Whitney, D. R. (1947) On a test of whether one of two random variables is stochastically larger than the other. The Annals of Mathematical Statistics, pp. 50–60.
- Mikolajczyk, K. and Schmid, C. (2005) A performance evaluation of local descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), pp. 1615–1630.
- Mishchuk, A., Mishkin, D., Radenović, F., and Matas, J. (2017) Working hard to know your neighbor's margins: local descriptor learning loss. In Neural Information Processing Systems (NIPS), pp. 4826–4837.
- Moulines, E. and Bach, F. (2011) Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In Neural Information Processing Systems (NIPS), pp. 451–459.
- Needell, D., Ward, R., and Srebro, N. (2014) Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm. In Neural Information Processing Systems (NIPS), pp. 1017–1025.
- Philbin, J., Isard, M., Sivic, J., and Zisserman, A. (2010) Descriptor learning for efficient retrieval. In European Conference on Computer Vision (ECCV), pp. 677–691.
- Schroff, F., Kalenichenko, D., and Philbin, J. (2015) FaceNet: a unified embedding for face recognition and clustering. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 815–823.
- Shalev-Shwartz, S. and Wexler, Y. (2016) Minimizing the maximal loss: how and why. In International Conference on Machine Learning (ICML), pp. 793–801.
- Simo-Serra, E., Trulls, E., Ferraz, L., Kokkinos, I., Fua, P., and Moreno-Noguer, F. (2015) Discriminative learning of deep convolutional feature point descriptors. In IEEE International Conference on Computer Vision (ICCV), pp. 118–126.
- Paszke, A., et al. (2019) PyTorch: an imperative style, high-performance deep learning library. In Neural Information Processing Systems (NeurIPS).
- Tian, Y., Fan, B., and Wu, F. (2017) L2-Net: deep learning of discriminative patch descriptor in Euclidean space. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 661–669.
- Tian, Y., Yu, X., Fan, B., Wu, F., Heijnen, H., and Balntas, V. (2019) SOSNet: second order similarity regularization for local descriptor learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- Ulyanov, D., Vedaldi, A., and Lempitsky, V. (2016) Instance normalization: the missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022.
- Wu, C.-Y., Manmatha, R., Smola, A. J., and Krähenbühl, P. (2017) Sampling matters in deep embedding learning. In IEEE International Conference on Computer Vision (ICCV), pp. 2840–2848.
- Yi, K. M., Trulls, E., Lepetit, V., and Fua, P. (2016) LIFT: learned invariant feature transform. In European Conference on Computer Vision (ECCV), pp. 467–483.
- Zagoruyko, S. and Komodakis, N. (2015) Learning to compare image patches via convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4353–4361.
- Zhang, L. and Rusinkiewicz, S. (2019) Learning local descriptors with a CDF-based dynamic soft margin. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2969–2978.
- Zhang, X., Yu, F. X., Kumar, S., and Chang, S.-F. (2017) Learning spread-out local feature descriptors. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4595–4603.
- Zheng, W., Chen, Z., Lu, J., and Zhou, J. (2019) Hardness-aware deep metric learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 72–81.