1 Introduction
Feature descriptors, i.e. invariant and discriminative representations of local image patches, are a major research topic in computer vision. The field reached maturity with SIFT (Lowe, 2004), which has since become the cornerstone of a wide range of applications in recognition and registration. While most descriptors use hand-crafted features (Lowe, 2004; Bay et al., 2006; Kokkinos et al., 2012; Trulls et al., 2013; Simo-Serra et al., 2015), there has recently been interest in using machine learning algorithms to learn descriptors from large databases.
In this paper we draw inspiration from the recent success of deep convolutional neural networks on large-scale image classification problems (Krizhevsky et al., 2012; Szegedy et al., 2013) to build discriminative descriptors for local patches. Specifically, we propose an architecture based on a siamese structure of two CNNs that share their parameters. We compute the L2 norm of the difference between their outputs, i.e. the descriptors, and use a loss that enforces this distance to be small for corresponding patches and large otherwise. We demonstrate that this approach allows us to learn compact and discriminative representations.
To implement this approach we rely on the dataset of Brown et al. (2011), which contains over 1.5M grayscale image patches from different views of 500K different 3D points. With such large datasets it becomes intractable to exhaustively explore all corresponding and non-corresponding pairs. Random sampling is typically used; however, most correspondences are not useful and hinder the learning of a discriminative mapping. We address this issue with aggressive mining of “hard” positives and negatives, a strategy that we denote as “fracking”, and which proves fundamental to obtaining discriminative learned descriptors. In particular, in some of the tests we obtain up to a 169% increase in performance over the SIFT baseline.
2 Related Work
Local features have proven very successful at matching points across images, and are nearly ubiquitous in modern computer vision, with a broad range of applications encompassing stereo, pose estimation, classification, detection, medical imaging and many others. Recent developments in the design of local image descriptors are moving from carefully-engineered features (Lowe, 2004; Bay et al., 2006) towards learning features from large volumes of data. This line of work includes unsupervised techniques based on hashing as well as supervised approaches using Linear Discriminant Analysis (Brown et al., 2011; Gong et al., 2012; Strecha et al., 2012), boosting (Trzcinski et al., 2013), and convex optimization (Simonyan et al., 2014).
In this paper we explore solutions based on deep convolutional neural networks (CNNs). CNNs have been used in computer vision for decades, but are currently experiencing a resurgence kick-started by the accomplishments of Krizhevsky et al. (2012) on large-scale image classification. The application of CNNs to the problem of descriptor learning has already been explored by some researchers (Jahrer, 2008; Osendorfer et al., 2013). These works are however preliminary, and many open questions remain regarding the practical application of CNNs to learning descriptors, such as the most adequate network architectures and application-dependent training schemes. In this paper we aim to provide a rigorous analysis of several of these topics. In particular, we use a siamese network (Bromley et al., 1994) to train the models, and experiment with different network configurations inspired by the state-of-the-art in deep learning.
Additionally, we demonstrate that aggressive mining of both “hard” positive and negative matching pairs greatly enhances the learning process. Mining hard negatives is a well-known procedure in sliding-window detectors (Felzenszwalb et al., 2010), where the number of negative samples is virtually unlimited and yet most negatives are easily discriminated. Similar techniques have been applied to CNNs for object detection (Szegedy et al., 2013; Girshick et al., 2014).
3 Learning Deep Descriptors
Given an intensity patch x, the descriptor of x is a non-linear mapping D(x) that is expected to be discriminative, i.e. descriptors for image patches corresponding to the same point should be similar, and dissimilar otherwise.
In the context of multiple-view geometry, descriptors are typically computed for salient points where scale and orientation can be reliably estimated, for invariance. Patches then capture local projections of 3D scenes. Let us consider that each image patch x_i has an index p_i that uniquely identifies the 3D point which roughly projects onto the 2D patch, from a specific viewpoint. Therefore, taking the L2 norm as a similarity metric between descriptors, for an ideal descriptor we would wish that
min ||D(x_1) − D(x_2)||_2  if  p_1 = p_2,    max ||D(x_1) − D(x_2)||_2  if  p_1 ≠ p_2.   (1)
We propose learning descriptors using a siamese network (Bromley et al., 1994), i.e. optimizing the model for pairs of corresponding or non-corresponding patches, as shown in Fig. 1. We propagate the patches through the model to extract the descriptors and then compute the L2 norm of their difference, which is a standard similarity measure for image descriptors. We then compute the loss function on this distance. Given a pair of patches x_1 and x_2 we define a loss function of the form

l(x_1, x_2, δ) = δ · l_P(d(x_1, x_2)) + (1 − δ) · l_N(d(x_1, x_2)),   (2)

where d(x_1, x_2) = ||D(x_1) − D(x_2)||_2, δ is the indicator function, which is 1 if p_1 = p_2 and 0 otherwise, and l_P and l_N are the partial loss functions for patches corresponding to the same 3D point and to different points, respectively. When performing back-propagation, the gradients are independently accumulated for both descriptors, but jointly applied to the weights, as they are shared.
Although it would be ideal to optimize directly for Eq. (1), we relax it, using a margin C for l_N. In particular, we consider the hinge embedding criterion (Mobahi et al., 2009)

l_P(d) = d,    l_N(d) = max(0, C − d).   (3)
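The pair loss of Eqs. (2)-(3) can be sketched as follows; this is a minimal NumPy sketch, where the function name and the margin value are illustrative choices rather than the paper's:

```python
import numpy as np

def pair_loss(d1, d2, same_point, margin=4.0):
    """Loss of Eqs. (2)-(3) for one pair of descriptors produced by the
    two weight-sharing branches of the siamese network. The margin C is
    set to an illustrative value; the paper does not fix it here."""
    dist = np.linalg.norm(d1 - d2)        # L2 distance between descriptors
    if same_point:                        # delta = 1: corresponding patches
        return float(dist)                # l_P(d) = d
    return max(0.0, margin - dist)        # l_N(d) = max(0, C - d)
```

Corresponding pairs are penalized in proportion to their distance, while non-corresponding pairs incur a loss only when they fall inside the margin.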
3.1 Convolutional Neural Network Descriptors
When designing the structure of the CNN we are limited by the size of the input data, in our case 64×64 patches from the dataset of Brown et al. (2011). Note that larger patches would allow us to consider deeper networks, and possibly more informative descriptors, but at the same time they would also be more susceptible to occlusions. We consider networks of up to three convolutional layers, followed by up to a single additional fully-connected layer. We target descriptors of size 128, the same as SIFT (Lowe, 2004); this value also constrains the architectures we can explore.
As usual, each convolutional layer consists of four sub-layers: filter layer, non-linearity layer, pooling layer and normalization layer. Since sparser connectivity has been shown to improve performance while lowering the number of parameters and increasing speed (Culurciello et al., 2013), except for the first layer the filters are not densely connected to the previous layer. Instead, they are sparsely connected at random, so that the mean number of connections each input map has is constant.
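This kind of sparse random connectivity can be sketched as a connection table with a fixed fan-in per output feature map. The helper below is hypothetical, similar in spirit to the random connection tables available in Torch7's nn package:

```python
import random

def random_connection_table(n_in, n_out, fan_in, seed=0):
    """Sparse random connectivity between feature maps: each output map is
    connected to exactly `fan_in` distinct input maps, so the mean number
    of connections per input map stays constant in expectation
    (hypothetical helper for illustration)."""
    rng = random.Random(seed)
    table = []
    for out_map in range(n_out):
        for in_map in rng.sample(range(n_in), fan_in):  # distinct inputs
            table.append((in_map, out_map))
    return table
```

Each entry (i, o) says that output map o convolves over input map i; with fan_in fixed, the parameter count grows linearly in the number of output maps rather than quadratically.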
Regarding the non-linearity layer, we use hyperbolic tangent units (Tanh), as we found they perform better than Rectified Linear Units (ReLU). We use L2 pooling for the pooling sub-layers, which has been shown to outperform the more standard max pooling (Sermanet et al., 2012). Normalization has been shown to be important for deep networks (Jarrett et al., 2009) and fundamental for descriptors (Mikolajczyk & Schmid, 2005). We use subtractive normalization over a 5×5 neighbourhood with a Gaussian kernel. We will justify these decisions empirically in Sec. 4.
An overview of the architectures we consider is given in Fig. 1. We choose a set of six networks, from 2 up to 4 layers. The architecture hyperparameters (number of layers and convolutional/pooling filter size) are chosen so that no padding is needed. We consider models with a final fully-connected layer as well as fully convolutional models, where the last sub-layer is a pooling layer. Our implementation is based on Torch7 (Collobert et al., 2011).
3.2 Stochastic Sampling Strategy and “Fracking”
Our goal is to optimize the network parameters from an arbitrarily large set of training patches. Let us consider a dataset with N patches and P unique 3D patch indices, each with k_i associated image patches. Then, the number of matching image patches (positives), s_P, and the number of non-matching image patches (negatives), s_N, in the dataset are

s_P = Σ_{i=1}^{P} C(k_i, 2),    s_N = C(N, 2) − s_P,   (4)

where C(n, 2) = n(n − 1)/2 is the number of unordered pairs.
In general both s_P and s_N are intractable to iterate over exhaustively. We approach the problem with random sampling. For gathering positive samples we can randomly choose a set of 3D point indices, and for each index randomly pick two patches with that 3D point index. For negatives it is sufficient to choose random pairs of patches with non-matching indices.
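For intuition, the pair counts of Eq. (4) can be verified on a toy dataset; this is an illustrative sketch, not part of the training code:

```python
from math import comb

def count_pairs(patches_per_point):
    """Count corresponding (positive) and non-corresponding (negative)
    patch pairs, given the number of patches observed for each unique
    3D point in the dataset."""
    n = sum(patches_per_point)                        # total patches N
    positives = sum(comb(k, 2) for k in patches_per_point)
    negatives = comb(n, 2) - positives                # all pairs minus positives
    return positives, negatives
```

For instance, two 3D points with 3 and 2 patches give C(3,2) + C(2,2) = 4 positives and C(5,2) − 4 = 6 negatives, and the negative count quickly dwarfs the positive one as the dataset grows.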
However, when the pool of negative samples is very large, random sampling will produce many negatives with a very small loss, which do not contribute to the global loss and thus stifle the learning process. Instead, we can iterate over non-corresponding patch pairs to search for “hard” negatives, i.e. those with a high loss. In this manner it becomes feasible to train discriminative models faster while also increasing performance. This technique is commonplace in sliding-window classification.
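A mining step of this kind can be sketched as follows; the array layout, function name, and margin are assumptions for illustration:

```python
import numpy as np

def mine_hard_negatives(desc_a, desc_b, keep, margin=4.0):
    """From a pool of non-corresponding descriptor pairs (row i of desc_a
    vs. row i of desc_b), return the indices of the `keep` pairs with the
    highest hinge loss, i.e. the "hardest" negatives."""
    dists = np.linalg.norm(desc_a - desc_b, axis=1)   # L2 distance per pair
    losses = np.maximum(0.0, margin - dists)          # loss of Eq. (3)
    return np.argsort(-losses)[:keep]                 # sort hardest first
```

Only the selected indices would then be back-propagated, so most of the pool costs a forward pass but no weight update.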
Therefore, at each epoch we generate a set of randomly chosen patch pairs, and after forward-propagating them through the network and computing their loss we keep only a subset of the “hardest” negatives, which are back-propagated through the network in order to update the weights. Additionally, the same procedure can be used over the positive samples, i.e. we can sample corresponding patch pairs and prune them down to the “hardest” positives. We show that the combination of aggressively mining positive and negative patch pairs, which we call “fracking”, allows us to greatly improve the discriminative capability of learned descriptors. Note that extracting descriptors with the learned models no longer requires the siamese network and does not incur the computational costs related to mining.
3.3 Learning
We normalize the dataset by subtraction of the mean of the training patches and division by their standard deviation. We then learn the weights by performing stochastic gradient descent. We use a learning rate that decreases by an order of magnitude every fixed number of iterations. Additionally, we use standard momentum in order to accelerate the learning process. We use a subset of the data for validation, and stop training when the metric we use to evaluate the learned models converges. Due to the exponentially large pool of positives and negatives available for training and the small number of parameters of the architectures, no techniques to cope with overfitting are used. The particulars of the learning procedure are detailed in the following section.
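The learning-rate schedule and momentum update described above can be sketched as follows; the momentum coefficient and all concrete values are illustrative, as the paper does not list its exact settings here:

```python
import numpy as np

def step_decay(base_lr, iteration, drop_every):
    """Learning rate reduced by an order of magnitude every `drop_every`
    iterations, as described in the text (values illustrative)."""
    return base_lr * (0.1 ** (iteration // drop_every))

def sgd_momentum_step(w, grad, velocity, lr, momentum=0.9):
    """One stochastic gradient descent update with classical momentum
    (momentum coefficient assumed for illustration)."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity
```

The velocity accumulates past gradients, which accelerates progress along consistent descent directions.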
4 Results
For evaluation we use the Multi-view Stereo Correspondence dataset (Brown et al., 2011), which consists of 64×64 grayscale image patches sampled from 3D reconstructions of the Statue of Liberty (LY), Notre Dame (ND) and Half Dome in Yosemite (YO). Patches are extracted using the Difference of Gaussians detector (Lowe, 2004), and determined to be a valid correspondence if they are within 5 pixels in position, 0.25 octaves in scale and radians in angle. Figure 2 shows samples from each set, which contain significant changes in position, rotation and illumination conditions, and often exhibit very noticeable perspective changes.
We join the data from LY and YO to form a training set with over a million patches. Of these we reserve a subset of 10,000 unique 3D points for validation (roughly 30,000 patches). The resulting training dataset contains 1,133,525 possible positive patch combinations and 1.117 possible negative combinations. This skew is common in correspondence problems such as stereo or structure from motion; we address it with aggressive mining, or “fracking”.
A popular metric for classification systems is the Receiver Operating Characteristic (ROC), used e.g. in Brown et al. (2011), which can be summarized by the area under its curve (AUC). However, ROC curves can be misleading when the numbers of positive and negative samples are very different (Davis & Goadrich, 2006), and the ROC is already nearly saturated for the baseline descriptor, SIFT (see Sec. 6). A richer metric is the Precision-Recall curve (PR). We benchmark our models with PR curves and their AUC. In particular, for each of the 10,000 unique points in the validation set we randomly sample two corresponding patches and 1,000 non-corresponding patches, and use them to compute the PR curve. We rely on the validation set for the LY+YO split to examine different configurations, network architectures and mining techniques; these results are presented in Secs. 4.1-4.4.
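The PR curve construction used for evaluation can be sketched as follows; this is a simplified sketch where a lower descriptor distance means a better match, and the paper's exact curve construction may differ in details such as interpolation:

```python
import numpy as np

def precision_recall_auc(pos_dists, neg_dists):
    """PR AUC for a matching task. pos_dists/neg_dists hold descriptor
    distances for corresponding and non-corresponding pairs."""
    dists = np.concatenate([pos_dists, neg_dists])
    labels = np.concatenate([np.ones(len(pos_dists)), np.zeros(len(neg_dists))])
    order = np.argsort(dists)                 # best (smallest) distance first
    labels = labels[order]
    tp = np.cumsum(labels)                    # true positives at each threshold
    fp = np.cumsum(1 - labels)                # false positives at each threshold
    precision = tp / (tp + fp)
    recall = tp / labels.sum()
    # prepend the recall = 0 endpoint and integrate by the trapezoidal rule
    recall = np.concatenate([[0.0], recall])
    precision = np.concatenate([[precision[0]], precision])
    return float(np.sum(np.diff(recall) * (precision[1:] + precision[:-1]) / 2))
```

A descriptor that ranks every corresponding pair ahead of every non-corresponding pair attains an AUC of 1.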
Architecture  Parameters  PR AUC 

SIFT  —  0.361 
CNN1_NN1  68,352  0.032 
CNN2  27,776  0.379 
CNN2a_NN1  145,088  0.370 
CNN2b_NN1  48,576  0.439 
CNN3_NN1  62,784  0.289 
CNN3  46,272  0.558 
Architecture  PR AUC 

SIFT  0.361 
CNN3  0.558 
CNN3 ReLU  0.442 
CNN3 No Norm  0.511 
CNN3 MaxPool  0.420 
Finally, we evaluate the top-performing models on the test set in Sec. 4.5. We follow the same procedure as for validation, compiling the results for 10,000 points with 1,000 non-corresponding matches each, now over 10 different folds (see Sec. 6 for details). To test generalization, we run three different splits: LY+YO (tested on ND), LY+ND (tested on YO), and YO+ND (tested on LY).
We consider all hyperparameters to be the same unless otherwise mentioned, i.e. a learning rate that decreases by a factor of 10 every fixed number of iterations. Unless stated otherwise we use negative mining only, i.e. the 1/2 setting described in Sec. 4.3.
4.1 Depth and Fully Convolutional Architectures
The network depth is constrained by the size of the patch. We consider up to 3 convolutional layers (CNN1-CNN3). Additionally, we consider adding a single fully-connected layer at the end (NN1). Fully-connected layers increase the number of parameters by a large factor, which increases the difficulty of learning and can lead to overfitting. We show the results of the various architectures we evaluate in Table 2 and Fig. 4. Deeper networks outperform shallower ones, and architectures with a fully-connected layer at the end do worse than fully convolutional architectures. In the following experiments we consider only models with 3 convolutional layers.
Pool (pos.)  Pool (neg.)  Mining (pos.)  Mining (neg.)  Cost  PR AUC 
128  128  1  1  —  0.366 
256  256  1  1  —  0.374 
512  512  1  1  —  0.369 
1024  1024  1  1  —  0.325 
128  256  1  2  20%  0.558 
256  256  2  2  35%  0.596 
512  512  4  4  48%  0.703 
1024  1024  8  8  67%  0.746 
2048  2048  16  16  80%  0.538 
Architecture  Output  Parameters  PR AUC 

SIFT  128f  —  0.361 
CNN3  128f  46,272  0.596 
CNN3 Wide  128f  110,496  0.552 
CNN3_NN1  128f  62,784  0.456 
CNN3_NN1  32f  50,400  0.389 
4.2 Hidden Units Mapping, Normalization, and Pooling
It is generally accepted that Rectified Linear Units (ReLU) perform much better than other non-linear functions in classification tasks (Krizhevsky et al., 2012). They are, however, ill-suited for tasks with continuous outputs, such as regression or the problem at hand, as they can only output positive values. We consider both the standard Tanh and ReLU. In the ReLU case we still use Tanh for the last layer. We also consider not using the normalization sub-layer in each of the convolutional layers. Finally, we consider using max pooling rather than L2 pooling. We show results for the fully-convolutional CNN3 architecture in Table 2 and Fig. 4. The best results are obtained with Tanh, normalization and L2 pooling (‘CNN3’ in the table/plot). We use this configuration in the following experiments.
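The subtractive normalization sub-layer evaluated here can be sketched in NumPy; the 5×5 kernel width follows Sec. 3.1, while the sigma value and edge padding mode are assumptions of this sketch:

```python
import numpy as np

def subtractive_normalization(patch, size=5, sigma=1.0):
    """Subtract a local Gaussian-weighted mean from every pixel of a 2D
    patch (a NumPy sketch in the spirit of Torch7's spatial subtractive
    normalization; sigma and padding mode are assumed)."""
    ax = np.arange(size) - size // 2
    g1 = np.exp(-ax**2 / (2 * sigma**2))
    kernel = np.outer(g1, g1)
    kernel /= kernel.sum()                    # weights sum to 1
    pad = size // 2
    padded = np.pad(patch, pad, mode="edge")
    h, w = patch.shape
    local_mean = np.zeros((h, w))
    for i in range(h):                        # direct 2D correlation
        for j in range(w):
            local_mean[i, j] = (kernel * padded[i:i + size, j:j + size]).sum()
    return patch - local_mean
```

The output has zero response on locally constant regions, which removes low-frequency illumination offsets before the next layer.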
4.3 Fracking
We analyze the effect of both positive and negative mining, or “fracking”, by training different models in which a large initial pool of positives and negatives is pruned down to a smaller number of ‘hard’ positive and negative matches, which are used to update the parameters of the network. We observe that increasing the batch size does not offer benefits in training: see Table 4 and Fig. 7. We thus keep the batch size fixed and increase the mining ratios of both negatives and positives, keeping all other parameters constant. For brevity, we denote the positive/negative mining ratios jointly, e.g. 2/2.
Large mining factors have a high computational cost, up to 80% of the total, which includes mining (i.e. the forward propagation of all sampled pairs) and learning (i.e. back-propagating the “hard” positive and negative samples). In order to speed up the learning process we initialize the CNN3 models with positive mining, i.e. 2/2, 4/4, 8/8 and 16/16, from an early iteration of a model trained only with negative mining (1/2).
Results are shown in Table 4 and Fig. 7. We see that for this particular problem aggressive mining is fundamental. This is likely due to the extremely large number of both negatives and positives in the dataset, in combination with models with a low number of parameters. We observe a drastic increase in performance up to 8/8 mining factors.
Train  Test  SIFT  CNN3 mine-1/2  CNN3 mine-2/2  CNN3 mine-4/4  CNN3 mine-8/8  PR AUC Increase (Best vs SIFT) 
LY+YOS  ND  0.349  0.535  0.555  0.630  0.667  91.1% 
LY+ND  YOS  0.425  0.383  0.390  0.502  0.545  28.2% 
YOS+ND  LY  0.226  0.460  0.483  0.564  0.608  169.0% 
Test  SIFT (128f)  BGM (256b)  LBGM (64f)  BinBoost-64 (64b)  BinBoost-128 (128b)  BinBoost-256 (256b)  Ours, best (128f)  PR AUC Increase (Ours vs Best) 
ND  0.349  0.487  0.495  0.267  0.451  0.549  0.667  21.5% 
YOS  0.425  0.495  0.517  0.283  0.457  0.533  0.545  0.2% 
LY  0.226  0.268  0.355  0.202  0.346  0.410  0.608  48.3% 
4.4 Number of Filters and Descriptor Dimension
We analyze increasing the number of filters in the CNN3 model, and adding a fully-connected layer that can be used to decrease the dimensionality of the descriptor. We consider increasing the number of filters in layers 1 and 2 from 32 and 64 to 64 and 96, respectively. Additionally, we double the number of internal connections between layers. This more than doubles the number of parameters in the network. To analyze descriptor dimensions we consider the CNN3_NN1 model and change the number of outputs in the last fully-connected layer from 128 to 32. In this case we use positive mining with a ratio of 2 (i.e. 2/2). Results can be seen in Table 4 and Fig. 7. The best results are obtained with smaller filters and fully-convolutional networks. Additionally, we notice that mining is also instrumental for models with the NN1 layer (compare these results with Table 2).
4.5 Generalization and Comparison to StateoftheArt
In this section we consider the three dataset splits. We train the best performing models, i.e. CNN3 with different mining ratios, on a combination of two sets, and test them on the remaining set. We select the training iteration that performs best over the validation set. The test datasets are very large (up to 633K patches) and we use the same procedure as for validation, evaluating 10,000 unique points, each with 1,000 random noncorresponding matches. We repeat this process over 10 folds, thus considering 100,000 sets of one corresponding patch vs 1,000 noncorresponding patches. We show results in terms of PR AUC in Table 5, and the corresponding PR curves are pictured in Fig. 8.
We report consistent improvements over the baseline, i.e. SIFT. The performance varies significantly from split to split; this is due to the nature of the different sets. ‘Yosemite’ contains mostly fronto-parallel translations with illumination changes and no occlusions (Fig. 2, row 3); SIFT performs well on this type of data. Our learned descriptors outperform SIFT in the high-recall regime (over 20% of the samples; see Fig. 8), and are 28% better overall in terms of PR AUC. The effect is much more dramatic on ‘Notre Dame’ and ‘Liberty’, which contain significant patch translation and rotation, as well as viewpoint changes around outcropping, non-convex objects that result in occlusions (see Fig. 2, rows 1-2). Our learned descriptors outperform SIFT by 91% and 169% when testing over ND and LY, respectively.
We additionally compare against the state-of-the-art (Trzcinski et al., 2013). In particular, we compare against four binary descriptor variants (BGM, BinBoost-64, BinBoost-128, and BinBoost-256) and LBGM, as well as SIFT. For the binary descriptors we use the Hamming distance instead of the Euclidean distance. The results are summarized in Table 6 and shown in Fig. 9. Our approach outperforms all descriptors, with the largest relative improvement on the ‘Liberty’ (LY) dataset. Due to the binary nature of the Hamming distance, the curves for the binary descriptors exhibit a sawtooth shape, where each tooth corresponds to a 1-bit difference.
4.6 Qualitative analysis
Figure 10 shows samples of matches retrieved with our CNN3 network with 4/4 mining, over the validation set for the first split. In this experiment the corresponding patches were ranked in the first position in 76.5% of cases; a remarkable result, considering that every true match had to be chosen from a pool of 1,000 false correspondences. The right-hand image shows cases where the ground truth match was not ranked first; notice that most of these patches exhibit significant changes in appearance. We include a failure case (highlighted in red), caused by a combination of large viewpoint and illumination changes; such misdetections are however very uncommon.
5 Conclusions
We use siamese networks to train deep convolutional models for the extraction of image descriptors for correspondence matching. This problem typically involves small patches and large volumes of data. The former constrains the size of the network, limiting the discriminating power of the models, while the latter makes it intractable to exhaustively iterate over all the training samples. We address this problem with a novel training scheme based on aggressive mining of both positive and negative correspondences.
Current research in convolutional neural networks for computer vision is focused on classification rather than regression. Despite previous research in this area (Jahrer, 2008; Osendorfer et al., 2013), it remains unclear what the most adequate architecture for the problem at hand is. This is a critical point, as evidenced by the dramatic effect Krizhevsky et al. (2012) had in reinvigorating the field. In this paper we investigate a wide range of architectures (filter size, Tanh and ReLU units, normalization), and consider both fully-convolutional networks and networks with a fully-connected layer at the end.
We show that the combination of state-of-the-art techniques with aggressive mining in the training stage can result in large performance gains. We obtain up to over 2.5x the performance of SIFT, and up to 1.5x the performance of the state-of-the-art in terms of Precision-Recall AUC. Our experiments over different data splits suggest that learning descriptors is particularly relevant for hard correspondence problems (e.g. the ‘Liberty’ and ‘Notre Dame’ sets). The best models are fully convolutional.
We identify multiple directions for future research. Color has proved informative in deep learning for detection problems; our current models are built on grayscale data due to data restrictions. Our networks are likewise restricted by the size of the patches (64×64 pixels), so that we currently limit our models to three convolutional layers. We intend to explore the performance of deeper networks along with larger patches. Additionally, our study indicates that fully-convolutional models outperform models with a fully-connected layer at the end, but we intend to study this problem in further detail, particularly with dimensionality reduction in mind.
Acknowledgments
This project was partially funded by the ERA-net CHISTERA project VISEN (PCIN-2013-047), ARCAS (FP7-ICT-2011-287617), PAU+ (DPI-2011-27510), grant ANR-10-JCJC-0205 (HICORE), MOBOT (FP7-ICT-2011-600796) and RECONFIG (FP7-ICT-600825). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPUs used for this research.
References
 Bay et al. (2006) Bay, H., Tuytelaars, T., and Van Gool, L. SURF: Speeded Up Robust Features. In ECCV, 2006.
 Bromley et al. (1994) Bromley, J., Guyon, I., Lecun, Y., Säckinger, E., and Shah, R. Signature verification using a ”siamese” time delay neural network. In NIPS, 1994.
 Brown et al. (2011) Brown, M., Hua, Gang, and Winder, S. Discriminative learning of local image descriptors. PAMI, 33(1):43–57, 2011.
 Collobert et al. (2011) Collobert, R., Kavukcuoglu, K., and Farabet, C. Torch7: A MATLAB-like environment for machine learning. In BigLearn, NIPS Workshop, 2011.
 Culurciello et al. (2013) Culurciello, E., Jin, J., Dundar, A., and Bates, J. An analysis of the connections between layers of deep neural networks. CoRR, abs/1306.0152, 2013.
 Davis & Goadrich (2006) Davis, J. and Goadrich, M. The relationship between PR and ROC curves. In ICML, 2006.
 Felzenszwalb et al. (2010) Felzenszwalb, P., Girshick, R., McAllester, D., and Ramanan, D. Object detection with discriminatively trained part-based models. PAMI, 32(9):1627–1645, 2010.
 Girshick et al. (2014) Girshick, R., Donahue, J., Darrell, T., and Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.

 Gong et al. (2012) Gong, Y., Lazebnik, S., Gordo, A., and Perronnin, F. Iterative quantization: A procrustean approach to learning binary codes for large-scale image retrieval. PAMI, 2012.
 Jahrer (2008) Jahrer, M., Grabner, M., and Bischof, H. Learned local descriptors for recognition and matching. In Computer Vision Winter Workshop, 2008.
 Jarrett et al. (2009) Jarrett, K., Kavukcuoglu, K., Ranzato, M., and LeCun, Y. What is the best multi-stage architecture for object recognition? In ICCV, 2009.
 Kokkinos et al. (2012) Kokkinos, I., Bronstein, M., and Yuille, A. Dense scaleinvariant descriptors for images and surfaces. In INRIA Research Report 7914, 2012.
 Krizhevsky et al. (2012) Krizhevsky, A., Sutskever, I., and Hinton, G. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
 Lowe (2004) Lowe, D. Distinctive image features from scale-invariant keypoints. IJCV, 60:91–110, 2004.
 Mikolajczyk & Schmid (2005) Mikolajczyk, K. and Schmid, C. A performance evaluation of local descriptors. PAMI, 27(10):1615–1630, 2005.
 Mobahi et al. (2009) Mobahi, H., Collobert, R., and Weston, J. Deep learning from temporal coherence in video. In ICML, 2009.
 Osendorfer et al. (2013) Osendorfer, C., Bayer, J., Urban, S., and van der Smagt, P. Convolutional neural networks learn compact local image descriptors. In ICONIP, volume 8228. 2013.
 Sermanet et al. (2012) Sermanet, P., Chintala, S., and LeCun, Y. Convolutional neural networks applied to house numbers digit classification. In ICPR, 2012.
 Simo-Serra et al. (2015) Simo-Serra, E., Torras, C., and Moreno-Noguer, F. DaLI: deformation and light invariant descriptor. IJCV, 2015.
 Simonyan et al. (2014) Simonyan, K., Vedaldi, A., and Zisserman, A. Learning local feature descriptors using convex optimisation. PAMI, 2014.
 Strecha et al. (2012) Strecha, C., Bronstein, A., Bronstein, M., and Fua, P. LDAHash: Improved matching with smaller descriptors. In PAMI, volume 34, 2012.
 Szegedy et al. (2013) Szegedy, C., Toshev, A., and Erhan, D. Deep neural networks for object detection. In NIPS, 2013.
 Trulls et al. (2013) Trulls, E., Kokkinos, I., Sanfeliu, A., and Moreno-Noguer, F. Dense segmentation-aware descriptors. CVPR, 2013.
 Trzcinski et al. (2013) Trzcinski, T., Christoudias, M., Fua, P., and Lepetit, V. Boosting binary keypoint descriptors. In CVPR, 2013.
6 Supplemental Material
This section contains supplemental material. As we argue in Sec. 4, PrecisionRecall (PR) curves are the most appropriate metric for our problem; however, we also consider ROC curves and Cumulative Match Curves (CMC).
ROC curves are created by plotting the true positive rate (TPR) as a function of the true negative rate (TNR), where:

TPR = TP / (TP + FN),    TNR = TN / (TN + FP).   (5)

Alternatively, the CMC curve is created by plotting the rank against the ratio of correct matches. That is, CMC(k) is the fraction of correct matches that have rank ≤ k. In particular, CMC(1) is the percentage of examples in which the ground truth match is retrieved in the first position.
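The CMC computation can be sketched as follows; this is an illustrative sketch of the protocol described above, where each query's true match competes against its own pool of non-corresponding matches:

```python
import numpy as np

def cmc(true_dists, neg_dists_per_query):
    """Cumulative Match Characteristic: CMC(k) is the fraction of queries
    whose true match ranks within the top k of its candidate pool."""
    ranks = []
    for d_true, d_negs in zip(true_dists, neg_dists_per_query):
        # rank of the true match: 1 + number of negatives scoring better
        ranks.append(1 + int(np.sum(np.asarray(d_negs) < d_true)))
    ranks = np.asarray(ranks)
    max_rank = 1 + max(len(d) for d in neg_dists_per_query)
    return np.array([np.mean(ranks <= k) for k in range(1, max_rank + 1)])
```

The first entry of the returned curve is CMC(1), i.e. the fraction of queries whose ground truth match is retrieved in the first position.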
We report these results for either metric in terms of the curves (plots) and their AUC (tables), for the bestperforming iteration.
6.1 Experiments: (1) Depth and architectures
We extend the results of Sec. 4.1, which are summarized in Table 7. Figs. 11, 12 and 13 show the PR, ROC and CMC curves respectively.
Architecture  Parameters  PR AUC  ROC AUC  CMC AUC 

SIFT  —  0.361  0.944  0.953 
CNN1_NN1  68,352  0.032  0.929  0.929 
CNN2  27,776  0.379  0.971  0.975 
CNN2a_NN1  145,088  0.370  0.987  0.988 
CNN2b_NN1  48,576  0.439  0.985  0.986 
CNN3_NN1  62,784  0.289  0.980  0.982 
CNN3  46,272  0.558  0.986  0.987 
6.2 Experiments: (2) Hidden units, normalization and pooling
We extend the results of Sec. 4.2, which are summarized in Table 8. Figs. 14, 15 and 16 show the PR, ROC and CMC curves respectively.
Architecture  PR AUC  ROC AUC  CMC AUC 

SIFT  0.361  0.944  0.953 
CNN3  0.558  0.986  0.987 
CNN3 ReLU  0.442  0.973  0.976 
CNN3 No Norm  0.511  0.980  0.982 
CNN3 MaxPool  0.420  0.973  0.975 
6.3 Experiments: (3) Fracking
We extend the results of Sec. 4.3, which are summarized in Table 9. Figs. 17, 18 and 19 show the PR, ROC and CMC curves respectively.
Mining (pos.)  Mining (neg.)  PR AUC  ROC AUC  CMC AUC  

1  1  0.366  0.977  0.979 
1  2  0.558  0.986  0.987 
2  2  0.596  0.988  0.989 
4  4  0.703  0.993  0.993 
8  8  0.746  0.994  0.994 
16  16  0.538  0.983  0.986 
6.4 Experiments: (4) Number of filters and descriptor dimension
We extend the results of Sec. 4.4, which are summarized in Table 10. Figs. 20, 21 and 22 show the PR, ROC and CMC curves respectively.
Architecture  Output  Parameters  PR AUC  ROC AUC  CMC AUC 

SIFT  128D  —  0.361  0.944  0.953 
CNN3  128D  46,272  0.596  0.988  0.989 
CNN3 Wide  128D  110,496  0.552  0.987  0.988 
CNN3_NN1  128D  62,784  0.456  0.988  0.988 
CNN3_NN1  32D  50,400  0.389  0.986  0.987 
6.5 Experiments: (5) Generalization
In this section we extend the results of Sec. 4.5. We summarize the results over three different dataset splits, each with ten test folds of 10,000 randomly sampled positives, each paired against 1,000 randomly sampled negatives. We show the PR results in Tables 11-13 and Figs. 23-25, the ROC results in Tables 14-16 and Figs. 26-28, and the CMC results in Tables 17-19 and Figs. 29-31.
PrecisionRecall AUC, Train: LY+YOS, Test: ND (10 folds)  
Model  F1  F2  F3  F4  F5  F6  F7  F8  F9  F10  Avg.  Increase 
SIFT  .364  .352  .345  .343  .349  .350  .350  .351  .341  .348  .349  — 
CNN3, mine1/2  .535  .527  .538  .537  .548  .529  .537  .535  .530  .529  .535  53.3% 
CNN3, mine2/2  .559  .548  .560  .556  .566  .554  .557  .554  .550  .549  .555  59.0% 
CNN3, mine4/4  .628  .619  .635  .632  .639  .625  .636  .631  .624  .626  .630  80.5% 
CNN3, mine 8/8  .667  .658  .669  .667  .678  .659  .672  .667  .662  .666  .667  91.1% 
PrecisionRecall AUC, Train: LY+ND, Test: YOS (10 folds)  
Model  F1  F2  F3  F4  F5  F6  F7  F8  F9  F10  Avg.  Increase 
SIFT  .428  .419  .413  .416  .414  .427  .429  .442  .432  .430  .425  — 
CNN3, mine1/2  .381  .385  .367  .386  .366  .390  .393  .401  .383  .376  .383  9.9% 
CNN3, mine2/2  .388  .395  .377  .393  .376  .397  .401  .405  .388  .381  .390  8.2% 
CNN3, mine4/4  .502  .504  .483  .509  .485  .515  .513  .516  .499  .489  .502  18.1% 
CNN3, mine8/8  .547  .547  .528  .551  .528  .559  .556  .561  .546  .530  .545  28.2% 
PrecisionRecall AUC, Train: YOS+ND, Test: LY (10 folds)  
Model  F1  F2  F3  F4  F5  F6  F7  F8  F9  F10  Avg.  Increase 
SIFT  .223  .226  .229  .228  .226  .222  .233  .235  .219  .223  .226  — 
CNN3, mine1/2  .460  .464  .464  .460  .454  .452  .462  .463  .462  .456  .460  103.5% 
CNN3, mine2/2  .482  .487  .490  .485  .478  .472  .484  .488  .486  .478  .483  113.7% 
CNN3, mine4/4  .564  .566  .569  .562  .560  .557  .564  .567  .570  .562  .564  149.6% 
CNN3, mine8/8  .607  .611  .610  .604  .603  .604  .606  .615  .612  .608  .608  169.0% 
ROC AUC, Train: LY+YOS, Test: ND (10 folds)  
Model  F1  F2  F3  F4  F5  F6  F7  F8  F9  F10  Avg. 
SIFT  .956  .954  .955  .958  .957  .955  .955  .955  .956  .955  .956 
CNN3, mine1/2  .979  .978  .979  .982  .980  .978  .981  .981  .979  .979  .980 
CNN3, mine2/2  .981  .980  .981  .983  .982  .980  .983  .982  .981  .981  .981 
CNN3, mine4/4  .985  .984  .985  .987  .986  .985  .988  .986  .985  .985  .986 
CNN3, mine8/8  .986  .985  .986  .988  .987  .986  .989  .986  .986  .986  .987 
ROC AUC, Train: LY+ND, Test: YOS (10 folds)  
Model  F1  F2  F3  F4  F5  F6  F7  F8  F9  F10  Avg. 
SIFT  .949  .947  .948  .949  .949  .950  .949  .950  .950  .950  .949 
CNN3, mine1/2  .956  .953  .955  .956  .957  .958  .957  .957  .958  .957  .956 
CNN3, mine2/2  .958  .955  .957  .958  .959  .959  .958  .959  .960  .958  .958 
CNN3, mine4/4  .971  .969  .971  .971  .973  .973  .972  .972  .973  .971  .972 
CNN3, mine8/8  .974  .972  .975  .974  .976  .975  .975  .975  .976  .974  .975 
ROC AUC, Train: YOS+ND, Test: LY (10 folds)  
Model  F1  F2  F3  F4  F5  F6  F7  F8  F9  F10  Avg. 
SIFT  .938  .939  .936  .938  .933  .935  .936  .938  .937  .936  .937 
CNN3, mine1/2  .973  .973  .973  .971  .972  .972  .972  .973  .973  .972  .972 
CNN3, mine2/2  .975  .976  .975  .974  .976  .975  .974  .976  .976  .974  .975 
CNN3, mine4/4  .980  .980  .980  .979  .980  .980  .979  .982  .981  .979  .980 
CNN3, mine8/8  .983  .983  .983  .981  .983  .982  .982  .984  .983  .982  .982 
CMC AUC, Train: LY+YOS, Test: ND (10 folds)  
Model  F1  F2  F3  F4  F5  F6  F7  F8  F9  F10  Avg. 
SIFT  .964  .962  .963  .966  .965  .963  .964  .963  .964  .962  .963 
CNN3, mine1/2  .982  .981  .981  .984  .982  .981  .983  .983  .982  .981  .982 
CNN3, mine2/2  .983  .982  .983  .985  .984  .982  .985  .984  .984  .983  .984 
CNN3, mine4/4  .987  .987  .987  .989  .988  .987  .989  .988  .987  .987  .988 
CNN3, mine8/8  .988  .988  .988  .990  .989  .988  .990  .989  .989  .989  .989 
CMC AUC, Train: LY+ND, Test: YOS (10 folds)  
Model  F1  F2  F3  F4  F5  F6  F7  F8  F9  F10  Avg. 
SIFT  .956  .955  .956  .956  .956  .958  .956  .957  .956  .958  .956 
CNN3, mine1/2  .964  .962  .963  .964  .965  .966  .964  .965  .966  .966  .965 
CNN3, mine2/2  .966  .965  .966  .967  .968  .968  .967  .968  .969  .968  .967 
CNN3, mine4/4  .977  .976  .978  .978  .980  .980  .977  .979  .980  .980  .978 
CNN3, mine8/8  .980  .979  .981  .981  .982  .982  .980  .982  .982  .982  .981 
CMC AUC, Train: YOS+ND, Test: LY (10 folds)  
Model  F1  F2  F3  F4  F5  F6  F7  F8  F9  F10  Avg. 
SIFT  .948  .949  .947  .948  .945  .945  .948  .949  .948  .947  .948 
CNN3, mine1/2  .976  .975  .976  .974  .975  .975  .975  .977  .976  .975  .975 
CNN3, mine2/2  .978  .979  .978  .977  .978  .978  .978  .979  .979  .977  .978 
CNN3, mine4/4  .983  .983  .983  .982  .983  .982  .982  .984  .984  .982  .983 
CNN3, mine8/8  .985  .985  .985  .984  .985  .984  .985  .986  .986  .985  .985 