Designing high quality descriptors for finding correspondences between images is crucial for many computer vision tasks such as 3D reconstruction, structure from motion (SFM), wide-baseline matching, image panorama stitching, and tracking [9, 3]. Finding correspondences in the wild is challenging due to changes in viewpoint, scale, illumination, occlusion, and shading.
Traditional handcrafted descriptors [11, 3] encode pixel, super-pixel, or sub-pixel level statistics and similarity, but do not have the ability to capture higher-level structural information. There are, however, tasks that depend heavily on pixel-level statistics, and in such tasks handcrafted features perform better. The resurgence of ConvNets has resulted in many recent works proposing learning-based descriptors [13, 21, 8, 2]. ConvNet-based descriptors have the potential to capture higher-level structural information and generalize well if properly trained on a good dataset.
As noted in prior work, current benchmark datasets limit the extent to which ConvNet-based learning algorithms can be evaluated across different datasets. The frequently used datasets for patch matching are the Multi-View Stereo (MVS) dataset and the Oxford ACRD dataset. The MVS dataset has only three scenes and does not provide sufficient variation in terms of scene content, viewpoint, and scale. Further, most of the non-matching pairs in the dataset are totally distinct from each other, which seldom happens in real-world scenarios. The Oxford ACRD dataset, created a decade ago, is very small for today's computing power, is prone to over-fitting, and thus cannot help any descriptor generalize to be robust in the wild. Even the recently published HPatches dataset contains scenes with variations only in illumination and viewpoint on flat surfaces such as walls. Such scenes do not suffer from occlusions; in contrast, scenes capturing real-world 3D non-planar objects at various angles will experience partial occlusions. Hence, a good dataset should include these characteristics to be more challenging and to effectively evaluate feature descriptors for 3D reconstruction of non-planar objects.
For efficient ConvNet-based descriptors, it is important to have a good combination of the ConvNet architecture and the dataset on which it is trained. Selecting an architecture that is robust to geometric and scale variations is as essential as a good dataset. Along these lines, in this paper we propose a descriptor based on a multi-resolution ConvNet architecture. The ConvNet is trained on a new, larger dataset with greater geometric and photometric variation in scene content, number of viewpoints, and scale, which also includes scenes capturing 3D objects that suffer partial occlusions. We have evaluated the proposed descriptor on patch matching and keypoint matching and found it competitive with state-of-the-art descriptors. Further, we have conducted 3D reconstruction evaluations and found that the proposed method produces significantly better results.
1.1 Related Work
Several papers in the literature address the challenges involved in designing image descriptors that are in turn used to find image correspondences via local patch matching. These include traditional hand-crafted descriptors such as SIFT and SURF and more recent ConvNet-based descriptors such as DeepDesc, DeepCompare, MatchNet, and TFeat. Learning descriptors for local patches using ConvNets was attempted early by Jahrer et al. but was not followed up due to numerous practical issues and limited evaluation. However, with the recent success of ConvNets and deep learning, matching local image patches via learned descriptors has become a widespread study and many ConvNet-based architectures have been proposed [21, 8, 13, 2]. It has been shown in the literature that descriptors learned using Siamese ConvNet architectures considerably improve matching performance [21, 8, 13].
A few papers in the literature study patch matching as a task in itself [21, 8], where the feature layers (Siamese network) and the metric learning layers (fully connected layers) are jointly learnt end-to-end. Such ConvNets cannot be used as general-purpose descriptors for tasks other than patch matching, such as reconstruction. In contrast, other work uses the features extracted at the output of the Siamese network without learning any non-linear decision network or metric learning layer. Such descriptors are generic in nature and can be used as drop-in replacements for traditional descriptors in many tasks, including keypoint matching, 3D reconstruction, and tracking. Since no metric for comparing patches is learned, a generic distance is used to compare patches and train the network. Learning feature descriptors from triplets of patches was investigated using shallow networks in order to reduce descriptor extraction time. Similar to [13, 2], the aim of the proposed approach is to extract descriptors for local image patches that can be used for 3D reconstruction.
Inspired by the multi-bank architecture used in human-pose estimation, the proposed network uses a three-bank design to encode scale variations of the image patches. The banks share common weights, so the scaled patch inputs undergo the same transformation before being combined and processed further. This makes the proposed network more robust to scale changes. A similar multi-resolution architecture has been proposed as a variant (the central-surround two-stream model) of DeepCompare; there, the multi-resolution streams produce independent outputs that are combined by metric learning layers. In the current literature, this type of architecture has not been studied for stand-alone descriptors.
2 Multi-Resolution Convolutional Neural Network
The multi-resolution convolutional neural network, which has the capability to better capture scale variation, is adapted in a Siamese fashion to learn patch descriptors. The proposed multi-bank network accepts image patches scaled to different resolutions, analogous to approximating a Laplacian pyramid of the input patch. The network has three channels, as shown in Fig. 1. The first channel takes the full patch downsampled to the common working resolution, the second channel takes a center-cropped patch, and the third channel takes a smaller center-cropped patch upsampled to the common resolution. Each channel has an identical structure consisting of three convolution layers and shares its parameters across the banks, as shown in Fig. 1. The output maps from the channels are then concatenated to form one bank and passed through further convolution layers. The result is flattened to form a one-dimensional tensor and passed to a fully connected layer. The output of this fully connected layer of the trained network is used as the descriptor for the input patch.
Besides providing scale invariance, the multi-resolution architecture also captures information at different spatial extents. The top bank has a wider support region due to the larger patch, enabling the network to better distinguish locally repeated patterns. In contrast, the bottom bank, which is fed an up-sampled central patch, captures subtle changes that help discriminate between nearby points. We also adopt a learned combination of the three outputs rather than mere concatenation as done in DeepCompare.
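The three-scale input construction described above can be sketched as follows. The 64x64 input patch, 32x32 per-bank resolution, and nearest-neighbor resampling are illustrative assumptions (the paper's exact patch dimensions are not reproduced in this text):

```python
import numpy as np

def resize_nn(patch, size):
    """Nearest-neighbor resize to (size, size); a stand-in for bilinear resampling."""
    h, w = patch.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return patch[rows][:, cols]

def multi_resolution_inputs(patch, out=32):
    """Build the three scaled views of one patch (approximating a Laplacian pyramid).

    Bank 1: the full patch downsampled to out x out (widest support region).
    Bank 2: a central crop at the native resolution.
    Bank 3: a smaller central crop upsampled to out x out (finest detail).
    """
    h = patch.shape[0]
    c2 = (h - out) // 2
    crop_mid = patch[c2:c2 + out, c2:c2 + out]      # bank 2: center crop, out x out
    q = out // 2
    c3 = (h - q) // 2
    crop_small = patch[c3:c3 + q, c3:c3 + q]        # smaller center crop
    return (resize_nn(patch, out),                  # bank 1: downsampled full patch
            crop_mid,                               # bank 2: center crop as-is
            resize_nn(crop_small, out))             # bank 3: upsampled small crop

patch = np.random.rand(64, 64).astype(np.float32)
b1, b2, b3 = multi_resolution_inputs(patch)
```

The three views are then fed to the weight-shared banks; since all banks see the same resolution, one set of convolution parameters serves all scales.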
Training of the network is performed in a Siamese fashion using the contrastive loss function (Eq. 1) as used in [7]. The loss operates on the distance between the network outputs for a pair of patches; a binary indicator marks whether the pair forms a match. The margin is the minimum distance by which a non-matching pair should be apart.
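Eq. 1 is not reproduced in this text. As a reconstruction, the standard contrastive loss of Hadsell et al. [7], written here with the convention $y = 1$ for a matching pair, takes the form

```latex
D_{\theta}(x_1, x_2) = \left\lVert f_{\theta}(x_1) - f_{\theta}(x_2) \right\rVert_2
\qquad
L(x_1, x_2, y) = y \, D_{\theta}(x_1, x_2)^2
  + (1 - y)\, \max\!\bigl(0,\; m - D_{\theta}(x_1, x_2)\bigr)^2
```

where $f_{\theta}$ is the network with parameters $\theta$ and $m$ is the margin: matching pairs are pulled together, while non-matching pairs contribute a gradient only when their distance falls below $m$.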
3 The PS Dataset
In this paper we propose a new dataset for learning generic descriptors, called the PhotoSynth-based (PS) dataset. This dataset consists of two types of scenes: multi-image and single-image scenes.
Multi-image scene: The scenes in this category focus on 3D objects having distinctive edges. Each scene consists of a set of color images and a corresponding sparse 3D point cloud created using SFM [19, 20]. Unlike the MVS dataset, which has only three scenes, the proposed dataset has scenes with considerable photometric and geometric variations, and the number of patches varies widely from scene to scene. Image patches are created by defining a square neighborhood around the projections of 3D points in the images. The SFM process provides correspondences with wide baselines and large scale variations, which cannot be obtained by stereo matching using handcrafted descriptors. Sample images of this category are illustrated in Fig. 2.
For a particular multi-image scene, consider the set of all patches belonging to a 3D point. The scale of a projection is given by the ratio of the focal length of the camera corresponding to the image onto which the 3D point is projected to the distance between that camera's center and the 3D point along the camera's view direction; this scale varies over a range across the dataset. The viewpoint difference between a pair of images is measured as the angle between their view directions and likewise spans a wide range. Square patches are cropped from the images.
Single-image scene: The scenes in this category contain images focusing on a flat surface with varied textures, e.g., a wall. In such a scene, pairs are formed by taking a patch from the image and a random affine transform of that patch. Such transformations can be generated dynamically while training the network. This process aids training in two ways: (i) it provides a wide variety of affine transforms between patches that are not present in the multi-image scenes; (ii) it avoids over-fitting, since the network sees the same patch taken from an image with a different affine transform each time. In total there are 10 images in this category. Sample images of this category are illustrated in Fig. 3.
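Dynamic pair generation from a single-image scene can be sketched as below. The transform ranges, 64x64 patch size, and nearest-neighbor warping are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def random_affine(rng):
    """Sample a small random 2x2 affine map: rotation, anisotropic scale, shear.
    Ranges are illustrative placeholders."""
    theta = rng.uniform(-np.pi / 6, np.pi / 6)
    sx, sy = rng.uniform(0.8, 1.2, size=2)
    shear = rng.uniform(-0.1, 0.1)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    scale_shear = np.array([[sx, shear], [0.0, sy]])
    return rot @ scale_shear

def warp(patch, A):
    """Warp a square patch about its center with nearest-neighbor sampling."""
    h, w = patch.shape
    c = np.array([(h - 1) / 2.0, (w - 1) / 2.0])
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1) - c
    src = coords @ np.linalg.inv(A).T + c           # inverse map: output -> input
    src = np.clip(np.rint(src).astype(int), 0, [h - 1, w - 1])
    return patch[src[:, 0], src[:, 1]].reshape(h, w)

rng = np.random.default_rng(0)
patch = rng.random((64, 64)).astype(np.float32)
positive_pair = (patch, warp(patch, random_affine(rng)))   # matching pair
other = rng.random((64, 64)).astype(np.float32)
negative_pair = (patch, warp(other, random_affine(rng)))   # non-matching pair
```

Because the transform is resampled every epoch, the network never sees the exact same pair twice, which is what gives the over-fitting benefit noted above.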
The dataset in total contains 6 indoor scenes and 19 outdoor scenes. The images come in one of two resolutions. The format of our PS dataset (which will be made publicly available) is similar to that of the MVS dataset. Each scene contains RGB patches. Each scene is provided with a patch information list with an entry for every patch in that scene, giving the 3D point index to which the patch belongs and the coordinates of the center of the patch in the grid image. Additionally, each scene contains a match list of index pairs from the information list for all the matching pairs; the number of matching pairs varies from scene to scene. For training, we use subsets of scenes from the multi-image and single-image categories; the remaining scenes from each category form the test scenes. The test scenes have an additional list containing randomly selected matching and non-matching pairs. Sample patches from our dataset, compared with the MVS dataset, are shown in Fig. 4.
4 Experimental setup
The experimental setup for evaluating the proposed approach and comparing it with other approaches in the literature is detailed in this section. The evaluation metrics and training methodology are described in Sections 4.1 and 4.2 respectively.
4.1 Evaluation metrics
Following [13, 2], matching score is used as the metric for patch matching. Matching score is the ratio of the number of correctly predicted matches to the number of ground-truth correspondences. Ground-truth correspondences are computed using the homography associated with each image pair. For a point in one image, its nearest neighbor in the other image is predicted as a match.
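The matching score computation can be sketched as follows. The pixel tolerance `tol` for accepting a ground-truth correspondence is an illustrative assumption:

```python
import numpy as np

def matching_score(desc1, kp1, desc2, kp2, H, tol=3.0):
    """Matching score: fraction of ground-truth correspondences whose nearest
    neighbor in descriptor space is the geometrically correct keypoint.

    kp1, kp2: (N, 2) keypoint locations; H: 3x3 homography mapping image 1 -> 2.
    """
    # Project keypoints from image 1 into image 2 via the homography.
    pts = np.hstack([kp1, np.ones((len(kp1), 1))]) @ H.T
    proj = pts[:, :2] / pts[:, 2:3]

    # Ground truth: keypoint in image 2 closest to the projection, within tol px.
    geo_d = np.linalg.norm(proj[:, None, :] - kp2[None, :, :], axis=2)
    gt = geo_d.argmin(axis=1)
    has_gt = geo_d.min(axis=1) < tol

    # Prediction: nearest neighbor in descriptor space.
    desc_d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    pred = desc_d.argmin(axis=1)

    n_corr = int(has_gt.sum())
    return float(((pred == gt) & has_gt).sum() / n_corr) if n_corr else 0.0
```

With identical keypoints, identical descriptors, and an identity homography, the score is 1.0 by construction, which is a convenient sanity check.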
We used vl_covdet from the vl_benchmark library to extract patches and compute SIFT descriptors. The patches extracted by vl_covdet are affine normalized. We have also extracted unnormalized patches, which provides a way to evaluate learnt descriptors without using scale and rotation information from the keypoint.
To evaluate 3D reconstructions via SFM using putative matches from different descriptors, DoG keypoints are computed with vl_covdet. False positives are pruned by selecting only those pairs which form mutual nearest neighbors. VisualSFM is used for reconstruction. The total number of points triangulated, average re-projection error, and average track length (projections per 3D point) are reported.
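The mutual nearest-neighbor pruning step can be sketched as a cycle-consistency check over the descriptor distance matrix:

```python
import numpy as np

def mutual_nearest_neighbors(desc1, desc2):
    """Keep only putative matches (i, j) where j is the nearest neighbor of i
    in image 2 AND i is the nearest neighbor of j in image 1."""
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    nn12 = d.argmin(axis=1)        # best match in image 2 for each descriptor in image 1
    nn21 = d.argmin(axis=0)        # best match in image 1 for each descriptor in image 2
    i = np.arange(len(desc1))
    keep = nn21[nn12[i]] == i      # cycle-consistency: i -> j -> i
    return np.stack([i[keep], nn12[keep]], axis=1)
```

Only pairs that pick each other survive, which removes most one-sided false positives before the matches are handed to SFM.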
4.2 Training methodology
For training the proposed network, mini-batch gradient descent is used. Each batch contains both matching and non-matching pairs. Further, the matching pairs in a batch are systematically distributed across distance ranges, defined relative to the margin of the contrastive loss, in fixed ratios. For proper training, we have used a negative mining strategy in which the wrongly classified pairs from one epoch are used for training in subsequent epochs.
Similar to the matching pairs, all non-matching pairs are also divided into four ranges based on the margin distance. A subset of patches is taken from all the patches. For every patch in this subset, we divide the remaining patches into four buckets (the first containing the closest patches and the last containing the farthest patches) and sample patches from the first two buckets in a fixed ratio to form non-matching pairs with it. Care is taken to ensure that none of the matching patches are paired as non-matching. We do not look beyond the first two buckets, as it has been observed that after the first epoch the distances of non-matching pairs lying in the 3rd and 4th buckets are above the margin and do not contribute to the gradients.
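The bucketed negative sampling described above can be sketched as follows. The per-bucket sample counts are illustrative placeholders, since the exact ratios are not reproduced in this text:

```python
import numpy as np

def sample_negatives(desc, match_set, n_buckets=4, rng=None):
    """For each anchor patch, rank all other patches by descriptor distance,
    split the ranking into n_buckets buckets, and sample negatives only from
    the first two buckets (the hardest ones); farther buckets exceed the margin
    and contribute no gradient. match_set holds index pairs that are true
    matches and must never be paired as negatives."""
    rng = rng or np.random.default_rng()
    pairs = []
    for a in range(len(desc)):
        d = np.linalg.norm(desc - desc[a], axis=1)
        order = np.argsort(d)[1:]                  # all other patches, nearest first
        buckets = np.array_split(order, n_buckets)
        # illustrative ratio: two negatives from bucket 1, one from bucket 2
        cand = list(rng.choice(buckets[0], size=2, replace=False)) + \
               list(rng.choice(buckets[1], size=1))
        for b in cand:
            b = int(b)
            if (a, b) not in match_set and (b, a) not in match_set:
                pairs.append((a, b))
    return pairs
```

Re-running this each epoch keeps refreshing the hard negatives as the descriptor space shifts during training.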
To reduce over-fitting and achieve rotation and scaling invariance, the patches are perturbed randomly during training. The perturbations include rotating and scaling the patch by random values within fixed ranges. Perturbations are also used to create matching and non-matching pairs from the single-image scenes: a matching pair is formed by pairing a patch with an affine transformation of itself, while for a non-matching pair a patch is paired with an affine transformation of some other patch from the same single image.
5 Results
In this section, we evaluate the performance of the proposed approach on patch pair classification, keypoint matching, and 3D reconstruction tasks and compare it with recent approaches in the literature. The patch pair classification task is to classify a given pair of patches as matching or non-matching. Though this type of classification is not directly applicable in real-world pipelines, we report the performance for completeness. The keypoint matching task is to find matching patches around keypoints detected in images captured from different views. The results of pair classification, keypoint matching, and 3D reconstruction are reported below.
5.1 Patch pair classification
The MVS dataset is used to measure the ability of a descriptor to discriminate positive pairs of patches from negative pairs. It has three scenes, Liberty (Lib), Notredame (Not), and Yosemite (Yos), each with a large number of patches. Each scene is also provided with a list of matching and non-matching pairs. Approaches in [8, 21, 2] use the evaluation protocol of [4], where the model is trained on a single scene; in other work, training is based on two scenes and testing on the remaining one. The evaluation is performed by thresholding the distance scores between patch pairs on the ROC curve. The results are shown in Table 1; the numbers reported are the false positive rate at 95% true positive rate (FPR95). It is observed that the model trained on single scenes performs marginally lower only in some cases. Since our model capacity is intentionally made large (in order to achieve descriptor generalization using the proposed PS dataset), and the MVS dataset is small, over-fitting is observed when training on single scenes. However, it should be noted that pair classification is of less practical importance than keypoint matching and is only reported for completeness.
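The FPR95 metric can be sketched directly from its definition: find the distance threshold that accepts 95% of the true matching pairs, then measure what fraction of non-matching pairs fall under it:

```python
import numpy as np

def fpr_at_95_tpr(dist, labels):
    """FPR95: false positive rate at the distance threshold where 95% of the
    true matching pairs are accepted. dist holds descriptor distances;
    labels are 1 for matching pairs and 0 for non-matching pairs."""
    dist = np.asarray(dist, dtype=float)
    labels = np.asarray(labels)
    pos = dist[labels == 1]
    neg = dist[labels == 0]
    thr = np.percentile(pos, 95)        # threshold accepting 95% of positives
    return float((neg <= thr).mean())   # fraction of negatives wrongly accepted
```

Lower is better; a perfectly separated distance distribution gives 0.0.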
| Descriptor | Dim | FPR95 (%) over the six train/test splits | Mean |
| --- | --- | --- | --- |
| DeepCompare siam | 256 | 15.89, 19.91, 13.24, 17.25, 8.38, 6.01 | 13.45 |
| TFeat margin | 128 | 7.08, 7.82, 7.22, 9.79, 3.85, 3.12 | 6.47 |
5.2 Keypoint matching
The Oxford ACRD and SG datasets are used for evaluating the keypoint matching performance of different descriptors. The Oxford ACRD contains real images with different geometric and photometric transformations across different scene types. We consider four scenes: boat (zoom, rotation), graffiti (viewpoint), leuven (light), and wall (viewpoint). As mentioned in Section 4, we use matching score (MScore) and mAP as metrics. Fig. 5 and Fig. 6 show the MScore comparison on scenes from the Oxford ACRD for normalized patches obtained using Harris-Affine keypoints. As can be observed, the proposed descriptor outperforms all the other descriptors on all the scenes.
The SG dataset consists of scenes each represented by a reference image. For each scene, the reference image is synthetically warped geometrically and photometrically to generate new images. The transformations include blur, lighting, rotation, zoom (scaling), and perspective (viewpoint). Fig. 7 and Fig. 8 show the MScore of different descriptors on the SG dataset for normalized patches (using Harris-Affine keypoints) and unnormalized patches respectively. The plots show comparisons for different degrees of each transformation. For each transformation, as the degree of variation increases, the proposed descriptor is observed to perform better than the other descriptors. For the unnormalized patches, even though SIFT is better for large values of zoom and rotation, the proposed descriptor is better for all other transformations and is always better than or comparable to DeepDesc and TFeat. Thus, the proposed descriptor is robust to different transformations and handles large variations better than the other descriptors. In Fig. 9, a comparison of mAP on the SG dataset is shown.
We observe that our descriptor outperforms all the other descriptors on the Oxford ACRD for all scenes and on the majority of the transformations in the SG dataset. For the low geometric transformation 'lighting' in the SG dataset, we have inferior mAP values compared to DeepDesc and TFeat despite a similar matching score (as shown in Fig. 6). One possible reason is that our PS dataset, which is used for training, has much more difficult matching and non-matching pairs than MVS. This makes the distances between matching and non-matching pairs more spread out, lowering mAP, which is based on thresholding descriptor distances.
5.3 3D reconstruction
In this section, we compare reconstructions using putative matches obtained from our model, DeepDesc, TFeat, and SIFT. We use the fountain-P11, Herz-Jesu-P8, and entry-P10 datasets from Strecha et al. to reconstruct 3D points using SFM. The metrics used for evaluation are discussed in Sec. 4.1. Table 2 shows the results of reconstructions obtained using different descriptors.
| Avg. track len. | F-P11 | 3.83 | 3.73 | 3.82 | 3.90 |
From Table 2, we observe that our proposed model performs better than DeepDesc and TFeat on all four metrics considered. In comparison to SIFT, the proposed descriptor is better on three of the four metrics. Even though the re-projection error is higher for the proposed descriptor than for SIFT, the number of inlier matches between different views is higher for our descriptor, as is the average track length (a measure of the number of projections of a 3D point across views). 3D reconstructions for the fountain-P11 and Herz-Jesu-P8 scenes of Strecha are shown in Fig. 10 and Fig. 11 respectively. We observe that all methods produce visually indistinguishable results in most parts of the reconstructions. However, in Fig. 10, the bottom part of the fountain reconstruction is better for the proposed descriptor.
Thus, the proposed descriptor is better than the other descriptors for the reconstruction task, especially among the learned descriptors.
We used an Nvidia Titan X for training and testing. For a batch size of 128, averaged over 1000 batches, the forward propagation times of TFeat, DeepDesc, and our network are 3.5, 175, and 14 microseconds respectively. Our network is only slightly slower than TFeat despite having multi-resolution banks and operating on larger patch sizes.
In this paper, we proposed a learning-based local image descriptor for patch matching and 3D reconstruction. For designing efficient learning-based descriptors using ConvNets, a good combination of dataset and architecture is important. We proposed the use of a multi-resolution architecture and introduced a new dataset with scenes of varied content containing images with large geometric transformations. By training the ConvNet on our dataset to obtain descriptors, we found the result to be more invariant to geometric changes than other learned descriptors when keypoint information is not used. We also found that it generated more points on average than other descriptors during reconstruction, and the proposed combination produced increased image coverage per point on wide-baseline scenes. With these results, we conclude that the proposed combination of a multi-resolution ConvNet with the new dataset produces a descriptor that generalizes across datasets.
-  V. Balntas, K. Lenc, A. Vedaldi, and K. Mikolajczyk. Hpatches: A benchmark and evaluation of handcrafted and learned local descriptors. CVPR, 2017.
-  V. Balntas, E. Riba, D. Ponsa, and K. Mikolajczyk. Learning local feature descriptors with triplets and shallow convolutional neural networks. BMVC, 2016.
-  H. Bay, T. Tuytelaars, and L. V. Gool. Surf: Speeded up robust features. ECCV, 2006.
-  M. Brown, G. Hua, and S. Winder. Discriminative learning of local image descriptors. IEEE TPAMI, 2011.
-  M. Brown and D. G. Lowe. Automatic panoramic image stitching using invariant features. IJCV, 2007.
-  P. Fischer, A. Dosovitskiy, and T. Brox. Descriptor matching with convolutional neural networks: a comparison to SIFT. arXiv preprint arXiv:1405.5769, 2014.
-  R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality reduction by learning an invariant mapping. CVPR, 2006.
-  X. Han, T. Leung, Y. Jia, R. Sukthankar, and A. C. Berg. Matchnet: Unifying feature and metric learning for patch-based matching. CVPR, 2015.
-  W. He, T. Yamashita, H. Lu, and S. Lao. Surf tracking. ICCV, 2009.
-  M. Jahrer, M. Grabner, and H. Bischof. Learned local descriptors for recognition and matching. CVWW, 2008.
-  D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 2004.
-  K. Mikolajczyk and C. Schmid. A performance evaluation of local descriptors. IEEE TPAMI, 27(10):1615–1630, 2005.
-  E. Simo-Serra, E. Trulls, L. Ferraz, I. Kokkinos, P. Fua, and F. Moreno-Noguer. Discriminative learning of deep convolutional feature point descriptors. ICCV, 2015.
-  N. Snavely, S. M. Seitz, and R. Szeliski. Photo tourism: exploring photo collections in 3D. ACM SIGGRAPH, 2006.
-  C. Strecha, W. V. Hansen, L. V. Gool, P. Fua, and U. Thoennessen. On benchmarking camera calibration and multi-view stereo for high resolution imagery. CVPR, 2008.
-  J. J. Tompson, A. Jain, Y. LeCun, and C. Bregler. Joint training of a convolutional network and a graphical model for human pose estimation. NIPS, 2014.
-  A. Vedaldi and B. Fulkerson. VLFeat: An open and portable library of computer vision algorithms, 2008.
-  S. Winder, G. Hua, and M. Brown. Picking the best daisy. CVPR, 2009.
-  C. Wu. VisualSFM: A visual structure from motion system, 2011. http://ccwu.me/vsfm/.
-  C. Wu, S. Agarwal, B. Curless, and S. M. Seitz. Multicore bundle adjustment. CVPR, 2011.
-  S. Zagoruyko and N. Komodakis. Learning to compare image patches via convolutional neural networks. CVPR, 2015.