Generic image retrieval is widely employed in practical Structure-from-Motion (SfM) [1, 2, 3, 4] and visual simultaneous localization and mapping (SLAM) systems to accelerate image matching or to identify possible closed loops. Until recently, the preferred image retrieval techniques in SfM have largely been variants of Bag-of-Words (BoW) models [6, 7], despite the fact that CNN-based approaches [8, 9, 10, 11] have shown superior efficiency and scalability for particular object retrieval.
This discrepancy can be explained by the difference between semantic similarity and geometric similarity. For SfM tasks, geometric overlaps among images (geometric similarity), rather than information about object categories (semantic similarity), are required for reliable image matching later on. We refer to this specific type of image retrieval task as matchable image retrieval, the goal of which is to find images with large overlaps. Two images overlap if they cover the same area of the viewed objects or scenes. In this scenario, BoW models based on local descriptors are more robust since they serve as predictors of how well the local descriptors can be matched. However, neither BoW models nor CNN-based methods perfectly solve the matchable image retrieval problem. On the one hand, BoW models generally have limited scalability, as their efficiency and accuracy drop quickly as the data grows. CNN-based methods, on the other hand, offer efficient and scalable solutions through compact global image representations distilled from intermediate feature maps, yet they lack the ability to capture regional discrimination and local information. This problem has long been overlooked because nearly all of these CNN-based methods are evaluated on object retrieval datasets such as Oxford5k and Paris6k, in which images are organized by semantic similarity rather than geometric overlaps.
However, in a typical SfM scene (Fig. 1) consisting of overlapping images with weak semantics, current CNN-based methods perform worse than BoW models because they fail to produce a fine-grained ranking with respect to scene overlaps. This is probably why stable SfM [1, 2, 3, 4] and SLAM solutions still adopt BoW models for matchable image retrieval. CNN-based methods are nevertheless attractive because of their superior efficiency and scalability, and their previous success in object retrieval tasks. However, several problems should be addressed to overcome the above flaws. First, we need a large-scale SfM database to avoid the data bias in previous evaluations. Second, information about geometric relationships between images should be further exploited to better encode local information. Several methods have attempted to do so but stayed at the SfM level instead of using dense correspondences. Third, the training process should be made more efficient to cope with big data.
In this paper, we present an efficient CNN-based method for matchable image retrieval that utilizes rich geometric context mined from densely reconstructed structures, namely mesh re-projection and overlap masks. Moreover, local information is handled by a post-processing step that exploits regional matching. In summary, our contributions are threefold:
We present Geometric Learning with 3D Reconstruction (GL3D), a large-scale database for 3D reconstruction and geometry-related learning problems, which contains 378 different datasets with full coverage of the scenes.
We make use of the dense correspondences mined from 3D reconstruction to develop an automatic pipeline for ground-truth data generation, which results in fine-grained training data with respect to scene overlaps.
We propose mask triplet loss (MTL) with in-batch mining which utilizes the well-annotated training data combined with regional information to accelerate the training of matchable image retrieval.
2 Related Work
Local descriptor based methods. In the 3D modeling of city-scale imagery, pairwise image matching or point-cloud matching [15, 16] often takes up the majority of the computation. Since the seminal work on reconstructing Internet imagery, object retrieval techniques have been widely adopted in a series of SfM systems [2, 3, 17, 18, 19]. As a successful BoW model, the vocabulary tree has become indispensable in large-scale SfM; it can be regarded as a preemptive filtering step in which local descriptors vote for images that share scene overlaps. Later works focus on decreasing quantization errors [20, 14, 21], applying post-processing steps, and scaling up object retrieval by aggregating local features into compact representations. To address the very-large-scale retrieval problem, VLAD was designed as a low-dimensional compact code that still preserves good performance.
CNN methods. Different from BoW models, CNN-based image retrieval approaches mostly rely on global information. Generic deep descriptors extracted from deep convolutional neural networks have proven to be good image representations for a series of vision tasks, including object retrieval. Babenko et al. first propose a sum-pooling aggregation method with a centering prior, based on the observation that objects of interest tend to be located close to the center of images. This prior does not hold when seeking similar pairs in terms of region overlaps. Kalantidis et al. later propose a feature aggregation method based on cross-dimensional weighting, analyzing spatial and channel weighting strategies that boost the saliency and distinctiveness of visual patterns, respectively. In parallel, Tolias et al. propose R-MAC (regional maximum activations of convolutions), which utilizes regional information to boost performance. Gordo et al. replace the rigid grid with a learned region proposal network (RPN). All of the above methods, evaluations and assumptions are based on images with salient semantic regions such as houses or landscapes. In 3D reconstruction, however, many images in urban datasets or aerial imagery merely serve as bridges to connect partial scenes, with fragmentary, discontinuous or even no semantically meaningful regions.
Triplet-based methods. Wang et al. and Schroff et al. both employ triplet loss to learn a similarity-based embedding: the former uses a triplet-based hinge loss to characterize fine-grained image similarity, while the latter solves the face recognition problem at scale. Melekhov et al. also tackle a similar whole-image matching problem using a 2-channel network, but do not go deeper into 3D reconstruction.
3 The GL3D Benchmark Dataset
We create a database, Geometric Learning with 3D Reconstruction (GL3D), containing 90,590 high-resolution images in 378 different scenes. Each scene contains 50 to 1,000 images with large geometric overlaps, covering urban areas, rural areas, and scenic spots captured by drones at multiple scales and from multiple perspectives. It also contains small objects to enrich the data diversity. Fig. 1 gives an overview of various scenes in GL3D and their corresponding 3D models. We randomly select 338 datasets (81,222 images) and run a 3D reconstruction pipeline (SfM → dense reconstruction → mesh reconstruction) for training sample generation, as described in Section 4.3. To generate the 3D models, we use an incremental SfM method and multiview stereo with surface reconstruction. Testing is carried out on the other 40 datasets, with 9,368 images as queries, which allows a thorough evaluation of matchable image retrieval compared with only 55 queries in Oxford5k.
GL3D is tailored for geometry-related problems and offers rich 3D context such as feature-track correspondences, camera poses, point clouds and mesh models. It is therefore intrinsically different from existing object retrieval datasets such as Oxford5k, Paris6k and Holiday. The differences can be characterized from the following perspectives:
Full coverage. Each dataset has full coverage of the scene, which is the major difference between GL3D and previous crowd-sourced datasets. Existing object retrieval datasets usually have uneven samples of the same landmark, while GL3D is organized by densely connected images from different views.
Weak semantics. Existing object retrieval datasets mainly contain semantically meaningful landmark buildings with intact objects, so the strong performance of CNNs pre-trained on object classification transfers well to object retrieval. In contrast, GL3D has weak semantics because its images often capture only part of the objects or scenes, without definite semantic meaning. Some query images even show texture-less patterns such as lawns and rivers, which is not common in datasets for particular object retrieval [13, 14, 20].
Rich geometric context. Since images are densely connected and have full coverage of the scenes, not only two-view feature matches but also accurate geometric quantities such as camera poses, point clouds and mesh models can be derived. Therefore, we can measure the degree of scene overlap between image pairs from accurate mesh re-projection, which leads to the proposed fine-grained ground-truth generation scheme.
GL3D is not limited to the matchable image retrieval problem. With various geometric computations such as feature matching, camera poses, and mesh models available, it is also beneficial for other geometric learning problems. For the task of matchable image retrieval, we design an automatic pipeline to generate well-annotated data, as described in Section 4.3.
4.1 Problem Formulation
Given a set of images $\mathcal{I}$ with geometric overlaps, we aim to find a rank set $R_i$ for each image $I_i \in \mathcal{I}$. In the rank set $R_i$, a natural ordering exists that represents the similarity, in terms of geometric overlaps, between $I_i$ and the database images. To find these rank sets, one typical approach is to first map image features onto a lower-dimensional space via an embedding function $f(\cdot)$ [9, 24, 29, 31]. Then a similarity measurement is computed and similar items are ranked by this similarity score from low to high. The similarity measurement is typically defined as the $L_2$ distance between two normalized feature vectors:
$$s(I_i, I_j) = \left\| \frac{f(I_i)}{\|f(I_i)\|_2} - \frac{f(I_j)}{\|f(I_j)\|_2} \right\|_2.$$
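As a concrete illustration, this similarity computation amounts to a Euclidean distance between L2-normalized embeddings. A minimal NumPy sketch (the function names `embed` and `distance` are ours, not from the paper):

```python
import numpy as np

def embed(x: np.ndarray) -> np.ndarray:
    # L2-normalize a raw feature vector so that the Euclidean
    # distance below relates monotonically to cosine similarity.
    return x / np.linalg.norm(x)

def distance(fa: np.ndarray, fb: np.ndarray) -> float:
    # Similarity measurement: L2 distance between normalized vectors.
    return float(np.linalg.norm(embed(fa) - embed(fb)))

# Identical directions give distance 0; orthogonal ones give sqrt(2),
# so candidates are ranked from low (similar) to high (dissimilar).
d_same = distance(np.array([1.0, 0.0]), np.array([3.0, 0.0]))
d_orth = distance(np.array([1.0, 0.0]), np.array([0.0, 5.0]))
```

Because the vectors are normalized, the distance depends only on direction, not on the magnitude of the raw features.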
The most crucial part of this learning framework is finding the embedding function $f(\cdot)$. In this work, we resort to deep CNNs for embedding learning. Our objective is to train a neural network that can differentiate the degree of scene overlap between pairs of images.
4.2 Network Architecture
We adopt a three-branch network, as shown in Fig. 2, with (anchor, positive, negative) image triplets (denoted $(a, p, n)$) as inputs. The core of this learning method is to minimize the distance between similar image pairs and to maximize the distance between dissimilar pairs up to some margin. The embedding function is learned in three feature towers with shared parameters, which can be implemented with any commonly used CNN [32, 33]. The components of the network are described in detail as follows:
Feature tower. The three feature towers share the same parameters during training, following the essence of the triplet loss. A feature tower can be fine-tuned from widely adopted networks such as VGG or GoogLeNet. Though classical networks often come with a fully-connected (FC) layer for classification, FC layers often do not work well for image retrieval tasks. In addition, FC layers are often removed at test time so that input images can be of arbitrary size. Therefore, we make the feature tower fully convolutional. The feature vectors are computed by first applying pooling on each feature map and then normalizing across channels.
We use max pooling to aggregate the feature maps into a feature vector. Max pooling has the nice property of translation invariance and is widely adopted by previous CNN-based image retrieval works [9, 10, 11, 25].
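The pooling-plus-normalization step can be sketched as follows (a NumPy toy version; in the actual pipeline the feature maps come from the CNN towers):

```python
import numpy as np

def mac(feature_map: np.ndarray) -> np.ndarray:
    # Maximum Activations of Convolutions: global max-pool each
    # channel of an (H, W, C) feature map, then L2-normalize across
    # channels. The spatial max is what gives translation invariance.
    v = feature_map.max(axis=(0, 1))   # (C,): one activation per channel
    return v / np.linalg.norm(v)

fmap = np.random.rand(7, 7, 512)       # e.g. a final conv-layer output
vec = mac(fmap)                        # unit-norm 512-D image descriptor
```

Because the max is taken over all spatial positions, shifting the activation pattern leaves the descriptor unchanged.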
Loss function. We use the widely adopted triplet-based loss layer for this learning-to-rank problem. Although pairwise losses such as the contrastive loss  based on Siamese architecture [28, 10] are also feasible, triplet-based losses are typically favored to avoid overfitting as they care about the relative ordering rather than the absolute distance . We conjoin each feature tower to the ranking layer and evaluate the hinge loss of a triplet.
4.3 Fine-grained Training Data Generation
Triplet sampling using SfM. As shown in the network architecture, the training data is composed of image triplets $(a, p, n)$. Manual annotation of such a large quantity of training triplets is unrealistic. As observed in previous work, these triplets can be generated from SfM in a fully automatic manner by computing the ratio of shared 3D tracks (which we refer to as the common track ratio) between views. Specifically, suppose $T_i$ is the set containing all the 3D tracks observed by image $I_i$; then the common track ratio between the image pair $(I_i, I_j)$ is defined as the average of two ratio numbers:
$$r_{\mathrm{sfm}}(I_i, I_j) = m\!\left(\frac{|T_i \cap T_j|}{|T_i|}, \frac{|T_i \cap T_j|}{|T_j|}\right),$$
where the average function $m(\cdot,\cdot)$ is the geometric mean. Though other mean functions can be used, we did not observe a substantial difference.
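Given per-image track sets, the common track ratio is straightforward to compute; a small sketch (representing tracks as Python sets of IDs is our assumption):

```python
from math import sqrt

def common_track_ratio(tracks_i: set, tracks_j: set) -> float:
    # Geometric mean of the two directional shared-track ratios,
    # |Ti ∩ Tj| / |Ti| and |Ti ∩ Tj| / |Tj|.
    if not tracks_i or not tracks_j:
        return 0.0
    shared = len(tracks_i & tracks_j)
    return sqrt((shared / len(tracks_i)) * (shared / len(tracks_j)))

# Image i observes tracks {1..4}, image j observes {3..8}: 2 shared,
# so the ratio is sqrt((2/4) * (2/6)) = sqrt(1/6).
r = common_track_ratio({1, 2, 3, 4}, {3, 4, 5, 6, 7, 8})
```

The geometric mean keeps the ratio small whenever either view sees few of the shared tracks, which accounts for scale differences between the two images.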
Triplet sampling using surface reconstruction. However, the above sampling method has several drawbacks. First, its generalization power is limited by the ability of local feature matching. As Fig. 2(b) shows, if a pair of matched images exhibits a view angle change that exceeds the matching ability of SIFT, this pair would be regarded as unmatched since few common tracks would exist. Ideally, a good retrieval algorithm should consider all geometrically overlapping pairs, free of this limitation. Second, hard samples as shown in Fig. 4, in which the triplet images come from the same scene with similar context, are helpful for matchable image retrieval. But hard samples cannot be obtained from SfM-based sampling, in which negative samples are constrained to be selected from two non-overlapping scenes, since a small ratio of shared tracks does not necessarily imply a small overlapping area.
Thus, we combine mesh model re-projection with SfM track overlaps to obtain training triplets. As shown in Fig. 3, we use triangulated mesh models to pinpoint accurate overlap regions between image pairs. The essence is to project triangular meshes with a high level of detail (LoD) through the camera projection matrices registered in SfM. Similar to the common track ratio, we define the mesh overlap ratio between the image pair $(I_i, I_j)$ as
$$r_{\mathrm{mesh}}(I_i, I_j) = m\!\left(\frac{|M_i \cap M_j|}{|M_i|}, \frac{|M_i \cap M_j|}{|M_j|}\right),$$
where $M_i$ is the set containing all the triangles seen by the corresponding camera of image $I_i$, and $m(\cdot,\cdot)$ is the same geometric mean as in Equation 2, which accounts for the relative scale of image pairs. $r_{\mathrm{sfm}}$ and $r_{\mathrm{mesh}}$ are both in the range $[0, 1]$.
To get a consistent overlap measurement, $r_{\mathrm{sfm}}$ and $r_{\mathrm{mesh}}$ should be carefully merged, as the two ratios usually differ in magnitude in practice. We take an SfM-overlap-first scheme to ensure the completeness of positive samples. Namely, the combined overlap ratio is defined as
$$r(I_i, I_j) = \begin{cases} r_{\mathrm{sfm}}(I_i, I_j) & \text{if } r_{\mathrm{sfm}}(I_i, I_j) \geq \tau, \\ r_{\mathrm{mesh}}(I_i, I_j) & \text{otherwise.} \end{cases}$$
In this work, we fix $\tau$ to 0.2. An image is a strong positive to the anchor image if $r$ exceeds an upper threshold, and a weak positive if $r$ lies in a lower interval, leaving a safe margin between strong and weak positives. Moreover, the corresponding masks generated by mesh re-projection enable a more accurate computation of the loss term, which will be detailed in the next section.
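The SfM-overlap-first merging can be sketched as below; the exact fallback rule is our reading of the scheme, with only the fixed value 0.2 stated explicitly in the text:

```python
def combined_overlap(r_sfm: float, r_mesh: float, tau: float = 0.2) -> float:
    # SfM-overlap-first: trust the track-based ratio when it is
    # informative (>= tau); otherwise fall back to the mesh ratio,
    # which also covers wide-baseline pairs that SIFT matching misses.
    return r_sfm if r_sfm >= tau else r_mesh
```

Image pairs are then labeled strong or weak positives by thresholding this combined ratio.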
4.4 Learning With Batched Hard Mining
Anchor swap. For symmetric distance measurements like the one in matchable retrieval, the sample space can be halved by introducing in-triplet hard negative mining, which also considers the distance between the positive and the negative:
$$\mathcal{L}_{\mathrm{swap}} = \max\big(0,\; \mu + d(a, p) - \min(d(a, n), d(p, n))\big),$$
where $d(\cdot,\cdot)$ is the embedding distance and $\mu$ is the margin.
Mask triplet loss. Beyond the similar/dissimilar relations in particular object retrieval, more accurate overlap correspondences can be pinpointed from the training data generation pipeline described in Section 4.3. Using the ground-truth masks associated with matched image pairs, we propose a new loss termed the mask triplet loss:
$$\mathcal{L}_{\mathrm{MTL}} = \max\big(0,\; \|g(m_a \odot f(a)) - g(m_p \odot f(p))\|_2 - \epsilon\big) + \mathcal{L}_{\mathrm{swap}},$$
where $(m_a, m_p)$ represents a pair of corresponding masks generated by mesh re-projection, $\odot$ is the masking operation applied on the feature maps from the CNN, and $g(\cdot)$ denotes the pooling and normalization that produce the feature vector. In practice, we use the down-sampled corresponding region maps between the positive image pair as a binary filter for the pooling operation. The first term penalizes the difference between the masked regions of positive pairs, with a soft margin $\epsilon$ to prevent overfitting, while the second term is the triplet loss with anchor swap. $\epsilon$ and the triplet margin are set to 0.1 and 0.5, respectively. We have found that the proposed mask triplet loss greatly accelerates the training process since it finds the accurate regions for loss computation.
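A minimal NumPy sketch of this loss, where `(fa, fp, fn)` are pooled features of the anchor/positive/negative and `fa_masked`, `fp_masked` are features pooled only over the re-projected overlap masks. The additive combination of the two terms is our reading of the formulation; only the constants 0.1 and 0.5 come from the text:

```python
import numpy as np

def l2n(v):
    return v / np.linalg.norm(v)

def mask_triplet_loss(fa, fp, fn, fa_masked, fp_masked,
                      eps=0.1, margin=0.5):
    d = lambda x, y: float(np.linalg.norm(l2n(x) - l2n(y)))
    # First term: soft-margin penalty on the distance between the
    # mask-pooled features of the positive pair (eps tolerates small
    # differences to avoid overfitting to exact region agreement).
    mask_term = max(d(fa_masked, fp_masked) - eps, 0.0)
    # Second term: triplet loss with anchor swap, i.e. the harder of
    # the two negative distances d(a, n) and d(p, n) is used.
    swap_neg = min(d(fa, fn), d(fp, fn))
    triplet_term = max(margin + d(fa, fp) - swap_neg, 0.0)
    return mask_term + triplet_term
```

When the masked features of the positive pair agree and the negative is far away, the loss is zero; a negative that collapses onto the anchor contributes the full margin.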
Batched hard mining. Since the sample complexity is cubic in the number of images, which is infeasible to iterate over, triplet sampling is vital to ensure fast convergence of the model. Therefore, a mining strategy should be carefully designed to select proper triplets: too-hard triplets would result in model collapse, while too-easy triplets would produce no loss and slow down training. Inspired by previous work on local descriptor learning, such as the structured loss [37, 38], we propose a batched triplet mining strategy suitable for this task, which utilizes the fine-grained overlap measurement defined in Section 4.3.
As shown in Fig. 4, each batch forms a matrix of size $3 \times n$, where $n$ is the batch size. Each triplet in the batch comes from a different dataset; thus, row-wise, every pair of images is a negative pair. Each column itself forms a hard triplet sample, meaning that the second row is more similar to the anchors (the first row) than the third row, as measured by the overlap ratio defined in Section 4.3. We call the second row the strong positive and the third row the weak positive. The total loss has three parts: 1) the easy loss composed of (anchor, strong positive, negative) triplets, 2) the weak loss composed of (anchor, weak positive, negative) triplets, and 3) the hard loss composed of (anchor, strong positive, weak positive) triplets, written as follows:
$$\mathcal{L} = w_1 \mathcal{L}_{\mathrm{easy}} + w_2 \mathcal{L}_{\mathrm{weak}} + w_3 \mathcal{L}_{\mathrm{hard}}.$$
With this batched loss formulation, the equivalent batch size is enlarged by an order of magnitude, from the $n$ sampled triplets to the $O(n^2)$ cross-column combinations available within a batch, which makes the training process much more effective. In practice, we set the loss weights $w_1, w_2, w_3$ to 1.
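The in-batch triplet enumeration for a $3 \times n$ batch can be sketched at the index level as follows (rows are anchor / strong positive / weak positive; columns come from different scenes):

```python
def batch_triplets(n: int):
    # Rows of the 3 x n batch matrix.
    A, S, W = 0, 1, 2
    easy, weak, hard = [], [], []
    for c in range(n):
        # Hard triplet within a column: the strong positive must rank
        # above the weak positive for the same anchor.
        hard.append(((A, c), (S, c), (W, c)))
        for c2 in range(n):
            if c2 == c:
                continue
            # Row-wise, every cross-column image is a valid negative.
            easy.append(((A, c), (S, c), (A, c2)))
            weak.append(((A, c), (W, c), (A, c2)))
    return easy, weak, hard

easy, weak, hard = batch_triplets(4)
```

A batch of $n$ columns thus yields $n(n-1)$ easy, $n(n-1)$ weak, and $n$ hard triplets from a single forward pass. (Using only the anchor row as the cross-column negative is a simplification of ours; any row of another column would do.)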
Offline mining with adaptive margins. Hard negatives are generated offline by mesh re-projection (discussed in Section 4.3). As reported in prior work, we also observe that using hard negatives early in the training process can harm performance and collapse the model. Therefore, we use adaptive margins, setting a smaller margin for harder samples to stabilize training: the margin decreases from easy to weak to hard triplets.
4.5 Pre-Matching Regional Code (PRC)
Since matchable image retrieval needs fine-grained discrimination of overlaps, it is crucial to exploit regional information. R-MAC provides good insights into this issue: it samples square regions on the activations at different scales, then applies MAC on those regions to obtain regional vectors, which are combined into a single image vector by summation and L2 normalization. However, mixing the regional information in this way may weaken its expressive power. In this work, we propose the pre-matching regional code (PRC), a feature aggregation method geared towards regional information coding, based on [9, 40].
Generally, PRC can be combined with any pooling operation, such as L2 pooling, average pooling or max pooling. We use PRC with max pooling due to its translation invariance [9, 25], and term the result PR-MAC. We first sample square regions and generate regional vectors as in R-MAC. Instead of simply summing up all the regional vectors, PR-MAC performs pre-matching on the regional vectors and aggregates the sub-matching results. Formally, for an image pair associated with regional vector sets $Q = \{q_1, \ldots, q_N\}$ and $T = \{t_1, \ldots, t_N\}$, we first obtain
$$d_i = \min_{t_j \in T} \|q_i - t_j\|_2$$
as the minimum distance between a regional vector $q_i$ of the query image and the regional vector set $T$ of the target image. Then we calculate
$$D(Q, T) = \sum_{i=1}^{N} d_i$$
to represent the final distance between a pair of images. As an interpretation, PRC conducts pre-matching to find the best match for each region of the query, and computes the similarity considering the matchability of each region. As demonstrated in extensive experiments, PRC outperforms R-MAC in both object image retrieval and matchable image retrieval.
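The pre-matching aggregation can be sketched with NumPy, where `Q` and `T` hold the L2-normalized regional vectors of the query and target images as rows:

```python
import numpy as np

def prc_distance(Q: np.ndarray, T: np.ndarray) -> float:
    # All pairwise distances between query regions (rows of Q) and
    # target regions (rows of T): shape (Nq, Nt).
    diff = Q[:, None, :] - T[None, :, :]
    dists = np.linalg.norm(diff, axis=2)
    # Pre-matching: keep only the best target match per query region,
    # then aggregate the per-region minima into one image distance.
    return float(dists.min(axis=1).sum())
```

Replacing the final sum with a single best region would recover an AML-style comparison instead of PRC's per-region aggregation.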
Discussions on efficiency and comparison with R-MAC. PRC has a computational complexity of $O(N^2)$ per image pair, where $N$ is the number of regional vectors, which is higher than that of R-MAC. We improve the efficiency in two ways. First, PRC is applied at the feature map level, as in R-MAC, instead of at the costly image patch level. Second, PRC can be applied to a shortlist (Top-200) as a re-ranking method. We also compare PRC with approximate max-pooling localization (AML), which replaces the sum operation in Equation 10 with a max operation.
5 Experiments
Implementation details. We use TensorFlow to train the CNNs on resized images with random contrast and color perturbation. The variants of the vocabulary tree with advanced techniques [7, 20, 21] are implemented in C++ with multi-threading, using SIFT features from VLFeat. Each image has 10k SIFT features on average. We use the stochastic gradient descent (SGD) solver with a momentum of 0.9 and a weight decay of 0.0001. The base learning rate is 0.002 and is exponentially decayed to 0.9 of the previous value every 10k steps. All benchmarks are conducted on a single Nvidia GeForce GTX 1080.
Evaluation protocol. We use mean Average Precision (mAP) to measure performance. We keep only a rank list of size $k$ for each query and measure mAP@$k$, as only the first few candidate matches matter in SfM. Instead of searching within the same scene dataset, each image is queried against all 9,368 test images to increase the retrieval difficulty. The ground-truth overlap rank list is generated as in Section 4.3; thresholding the overlap ratio results in 317,090 ground-truth match pairs. This provides a challenging benchmark whose images have large scale and perspective changes, not limited by SfM results.
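For reference, a plain-Python version of the metric (this follows one common AP@$k$ convention; normalization details can vary across benchmarks):

```python
def average_precision_at_k(ranked, relevant, k):
    # Mean of the precision values at each rank where a ground-truth
    # overlapping image appears, normalized by min(|relevant|, k).
    hits, score = 0, 0.0
    for i, img in enumerate(ranked[:k]):
        if img in relevant:
            hits += 1
            score += hits / (i + 1)
    denom = min(len(relevant), k)
    return score / denom if denom else 0.0

def map_at_k(rank_lists, relevant_sets, k):
    # mAP@k: average AP@k over all queries.
    aps = [average_precision_at_k(r, s, k)
           for r, s in zip(rank_lists, relevant_sets)]
    return sum(aps) / len(aps)
```

For example, a query whose relevant images appear at ranks 1 and 3 out of $k = 3$ scores $(1/1 + 2/3)/2 = 5/6$.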
5.1 Distinctiveness of Matchable Image Retrieval
| Method (Dim=512 for all CNN methods) | GL3D (mAP@100) | Oxford5k | Paris6k | Holiday (top-10) | INSTRE |
| --- | --- | --- | --- | --- | --- |
| VocabTree (depth = 6, branch = 8) | 0.599 | 0.448 | 0.531 | 0.549 | - |
| VocabTree + HE + WGC | 0.689 | 0.547 | - | 0.746 | - |
| siaMAC (VGG) + MAC | 0.518 | 0.731 | 0.785 | 0.723 | 0.296 |
| siaMAC (VGG) + R-MAC | 0.542 | 0.770 | 0.821 | 0.762 | 0.313 |
| siaMAC (VGG) + R-MAC () | 0.553 | 0.779 | 0.810 | 0.767 | - |
| siaMAC (VGG) + PR-MAC | 0.617 | 0.786 | 0.832 | 0.782 | 0.389 |
| siaMAC (VGG) + PR-MAC + QE | 0.654 | 0.830 | 0.874 | - | 0.588 |
| GoogLeNet + R-MAC + TL | 0.636 | 0.711 | 0.794 | 0.821 | 0.243 |
| GoogLeNet + PR-MAC + TL | 0.708 | 0.737 | 0.813 | 0.825 | 0.306 |
| GoogLeNet + PR-MAC + TL + QE | 0.721 | 0.781 | 0.855 | - | 0.504 |
| GoogLeNet + R-MAC + MTL | 0.638 | 0.721 | 0.799 | 0.824 | - |
| GoogLeNet + PR-MAC + MTL | 0.711 | 0.740 | 0.816 | 0.841 | - |
| GoogLeNet + PR-MAC + MTL + QE | 0.722 | 0.789 | 0.862 | - | - |
We first demonstrate the intrinsic difference between object retrieval and geometric overlap retrieval by comparing the vocabulary tree, which is extensively used in practical SfM systems [2, 3, 4], with various deep models on GL3D, Oxford5k, Paris6k and INSTRE. Table 1 shows that siaMAC achieves superior performance on object retrieval tasks but fails to beat even the naive vocabulary tree and our method on the GL3D dataset. This partially explains the prevalence of the vocabulary tree in SfM, and shows that, without proper care, CNNs do not generalize well to the fine-grained matchable image retrieval problem.
5.2 Experiments for Matchable Image Retrieval
Below we give thorough evaluations on GL3D in the context of matchable image retrieval. Unless explicitly specified, the CNN methods are tested with PCA whitening and a reduced feature dimensionality of 256. Different from Table 1, we use three scales of 35 (= 1 + 9 + 25) region vectors for R-MAC and PR-MAC to achieve the best performance. As Table 2 shows, the proposed method outperforms all the others.
| Method | mAP@100 | mAP@200 | Time (s) | Network | Dim |
| --- | --- | --- | --- | --- | --- |
| Raw + MAC | 0.478 | 0.487 | 11.5 | VGG-16 | 512 |
| SiaMAC + MAC | 0.519 | 0.527 | 22.6* | VGG-16 | 512 |
| SiaMAC + R-MAC | 0.629 | 0.654 | | VGG-16 | 512 |
| SiaMAC + R-MAC + diffusion | 0.569 | 0.598 | 60.5* | VGG-16 | 512 |
| SiaMAC + PR-MAC | 0.662 | 0.686 | 60.9* | VGG-16 | 512 |
| Ours + TL + MAC | 0.627 | 0.631 | 9.5 | VGG-16 | 512 |
| Ours + TL + R-MAC | 0.681 | 0.698 | 11.5 | VGG-16 | 512 |
| Ours + MTL + R-MAC | 0.691 | 0.707 | | VGG-16 | 512 |
| Ours + MTL + PR-MAC | 0.724 | 0.731 | 12.3 | VGG-16 | 512 |
| Fine-tuned + ROI + R-MAC | 0.616 | 0.629 | 12.6* | ResNet101 | 2048 |
| Raw + MAC | 0.598 | 0.603 | 3.2 | GoogLeNet | 256 |
| Ours + TL + MAC | 0.625 | 0.638 | | GoogLeNet | 256 |
| Ours + MTL + MAC | 0.652 | 0.663 | | GoogLeNet | 256 |
| Ours + MTL + SPoC | 0.689 | 0.705 | | GoogLeNet | 256 |
| Ours + MTL + CRoW | 0.673 | 0.698 | | GoogLeNet | 256 |
| Ours + MTL + R-MAC | 0.702 | 0.715 | 5.4 | GoogLeNet | 256 |
| Ours + MTL + AML | 0.630 | 0.637 | 7.2 | GoogLeNet | 256 |
| Ours + MTL + PR-MAC (Top-200) | 0.722 | 0.743 | 5.5 | GoogLeNet | 256 |
| Ours + MTL + PR-MAC | 0.734 | 0.758 | 8.5 | GoogLeNet | 256 |
| VocabTree + HE | 0.601 | 0.615 | 726 | - | - |
| VocabTree + WGC | 0.676 | 0.688 | 144 | - | - |
| VocabTree + PGM | 0.641 | 0.643 | 173 | - | - |
| VocabTree + HE + WGC | 0.689 | 0.703 | 820 | - | - |

* Evaluated with the authors' public code in Matlab or Caffe, and thus the timing may not be directly comparable.
Effect of using hard samples. Using hard samples is the main benefit brought by our ground-truth generation method (mesh re-projection). Without hard samples, the mAP@200 of our best model drops from 0.758 to 0.717.
Effect of triplet loss. We compare the performance training with triplet loss (+TL) and the proposed mask triplet loss (+MTL). MTL and TL deliver similar performance after convergence, as shown in Table 2, yet it is observed that MTL converges much faster than TL.
Effect of PRC feature aggregation. Naturally, images of higher resolution provide richer information and are more likely to deliver better performance. To demonstrate that PRC exploits more than just higher resolution, we compare PR-MAC with MAC and R-MAC at different image sizes. As the image size increases, PR-MAC consistently outperforms MAC and R-MAC with both the siaMAC model (Fig. 5(a), left) and our fine-tuned model (Fig. 5(a), right), indicating the versatility of PRC. Moreover, unlike previously reported results where MAC and R-MAC deliver comparable improvements on object image retrieval, R-MAC is notably better than MAC in matchable image retrieval, which again demonstrates the difference between the two tasks and the necessity of exploiting regional information. We have also found that the manifold diffusion method and approximate max-pooling localization (AML) in R-MAC do not work very well for matchable image retrieval, as shown in Table 2.
Efficiency. As shown in Table 2, our best model surpasses the above BoW models in both accuracy and efficiency. Furthermore, Fig. 5(b) compares the computation time and peak memory of MAC, R-MAC and PR-MAC. The higher complexity of PRC can be alleviated to some extent by using PRC as a re-ranking method. For example, by applying R-MAC on our best model and then re-ranking the Top-200 candidates with PR-MAC, the mAP@200 score on GL3D increases from 0.715 to 0.743. Generally, the extra cost of PR-MAC is due to more I/O operations and the fine-grained matching. Nevertheless, applying PR-MAC remains a good trade-off for SfM, where accuracy is of greater concern.
5.3 Integration of Matchable Image Retrieval with SfM
Retrieval performance per scene. Since SfM relies on retrieving matchable images within each independent scene, we extensively evaluate our approach on each of the 40 test sets. Both our method and the vocabulary tree outperform siaMAC, which again reflects the gap between object and matchable image retrieval. Fig. 6(a) shows comparisons on the five largest scenes (each with more than 400 images) in GL3D. One observation (Fig. 6(b)) is that our CNN-based method is suitable for datasets with rich textures (the green frame), while the vocabulary tree does better on texture-less scenes (the red frame), indicating that the vocabulary tree better encodes very local, detailed information. Performance boosts for specific query images are shown in Fig. 6(c).
| Dataset | Method | # Images | # Registered | # Pairs-to-Match | # Sparse Points | # Observations | Track Length | Reproj. Error |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Tower of London | BoW | 1,576 | 780 | 122,534 | 175,452 | 1,441K | 8.28 | 0.60px |
SfM results. We conduct SfM experiments on the 1DSfM datasets to demonstrate the integration of the proposed method with SfM. The datasets are reconstructed using COLMAP with different retrieval methods (BoW, siaMAC, NetVLAD, and ours), as shown in Table 3. We select the top-100 candidates for matching, the default parameter in COLMAP. For CNN methods, the long side of each image is resized to 896. siaMAC and NetVLAD are tuned to their best performance (learned whitening, query expansion, etc.) as described in their papers. As shown, our method is better than siaMAC and NetVLAD, and comparable with COLMAP-BoW. However, our method generates about 10% fewer match pairs than COLMAP-BoW from the top-100 candidates, indicating more symmetric query results. When fixing the number of pairs to match, e.g., in the last row of the table where retrieval is performed with the top-115 candidates, results similar to COLMAP-BoW can be obtained. These experiments again validate our observation that there exists a gap between matchable and object image retrieval.
6 Conclusions
In this paper, we first differentiate particular object retrieval from matchable image retrieval, and present GL3D, a large-scale dataset, along with a CNN-based method trained on auto-annotated data.
Based on the high-quality fine-grained training data, we utilize the overlap masks obtained from surface reconstruction and develop a batched mask triplet loss to effectively train the network.
Combined with a post-processing method that exploits regional information, this method delivers state-of-the-art performance for matchable image retrieval.
Acknowledgment. This work is supported by T22-603/15N, Hong Kong ITC PSKL12EG02 and the Special Project of International Scientific and Technological Cooperation in Guangzhou Development District (No. 2017GH24).
-  Agarwal, S., Furukawa, Y., Snavely, N., Simon, I., Curless, B., Seitz, S.M., Szeliski, R.: Building Rome in a day. Communications of the ACM (2011)
-  Moulon, P., Monasse, P., Marlet, R.: Global fusion of relative motions for robust, accurate and scalable structure from motion. In: ICCV. (2013)
-  Sweeney, C., Sattler, T., Hollerer, T., Turk, M., Pollefeys, M.: Optimizing the viewing graph for structure-from-motion. In: ICCV. (2015)
-  Schonberger, J.L., Frahm, J.M.: Structure-from-motion revisited. In: CVPR. (2016)
-  Mur-Artal, R., Montiel, J.M.M., Tardos, J.D.: ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Transactions on Robotics (2015)
-  Sivic, J., Zisserman, A.: Video Google: A text retrieval approach to object matching in videos. In: ICCV. (2003)
-  Nister, D., Stewenius, H.: Scalable recognition with a vocabulary tree. In: CVPR. (2006)
-  Kalantidis, Y., Mellina, C., Osindero, S.: Cross-dimensional weighting for aggregated deep convolutional features. In: ECCV Workshop. (2016)
-  Tolias, G., Sicre, R., Jégou, H.: Particular object retrieval with integral max-pooling of CNN activations. In: ICLR. (2016)
-  Radenović, F., Tolias, G., Chum, O.: CNN image retrieval learns from BoW: Unsupervised fine-tuning with hard examples. In: ECCV. (2016)
-  Iscen, A., Tolias, G., Avrithis, Y., Furon, T., Chum, O.: Efficient diffusion on region manifolds: Recovering small objects with compact CNN representations. In: CVPR. (2017)
-  Havlena, M., Schindler, K.: VocMatch: Efficient multiview correspondence for structure from motion. In: ECCV. (2014)
-  Philbin, J., Chum, O., Isard, M., Sivic, J., Zisserman, A.: Object retrieval with large vocabularies and fast spatial matching. In: CVPR. (2007)
-  Philbin, J., Chum, O., Isard, M., Sivic, J., Zisserman, A.: Lost in quantization: Improving particular object retrieval in large scale image databases. In: CVPR. (2008)
-  Zhou, L., Zhu, S., Shen, T., Wang, J., Fang, T., Quan, L.: Progressive large scale-invariant image matching in scale space. In: ICCV. (2017)
-  Zhou, L., Zhu, S., Luo, Z., Shen, T., Zhang, R., Zhen, M., Fang, T., Quan, L.: Learning and matching multi-view descriptors for registration of point clouds. In: ECCV. (2018)
-  Shen, T., Zhu, S., Fang, T., Zhang, R., Quan, L.: Graph-based consistent matching for structure-from-motion. In: ECCV. (2016)
-  Zhu, S., Shen, T., Zhou, L., Zhang, R., Wang, J., Fang, T., Quan, L.: Parallel structure from motion from local increment to global averaging. arXiv preprint arXiv:1702.08601 (2017)
-  Zhu, S., Zhang, R., Zhou, L., Shen, T., Fang, T., Tan, P., Quan, L.: Very large-scale global sfm by distributed motion averaging. In: CVPR. (2018)
-  Jegou, H., Douze, M., Schmid, C.: Hamming embedding and weak geometric consistency for large scale image search. In: ECCV. (2008)
-  Li, X., Larson, M., Hanjalic, A.: Pairwise geometric matching for large-scale object retrieval. In: CVPR. (2015)
-  Chum, O., Mikulik, A., Perdoch, M., Matas, J.: Total recall ii: Query expansion revisited. In: CVPR. (2011)
-  Jégou, H., Douze, M., Schmid, C., Pérez, P.: Aggregating local descriptors into a compact image representation. In: CVPR. (2010)
-  Babenko, A., Lempitsky, V.: Aggregating local deep features for image retrieval. In: ICCV. (2015)
-  Gordo, A., Almazan, J., Revaud, J., Larlus, D.: End-to-end learning of deep visual representations for image retrieval. IJCV (2017)
-  Wang, J., Song, Y., Leung, T., Rosenberg, C., Wang, J., Philbin, J., Chen, B., Wu, Y.: Learning fine-grained image similarity with deep ranking. In: CVPR. (2014)
-  Schroff, F., Kalenichenko, D., Philbin, J.: FaceNet: A unified embedding for face recognition and clustering. In: CVPR. (2015)
-  Melekhov, I., Kannala, J., Rahtu, E.: Siamese network features for image matching. In: ICPR. (2016)
-  Chopra, S., Hadsell, R., LeCun, Y.: Learning a similarity metric discriminatively, with application to face verification. In: CVPR. (2005)
-  Li, S., Siu, S.Y., Fang, T., Quan, L.: Efficient multi-view surface refinement with adaptive resolution control. In: ECCV. (2016)
-  Ustinova, E., Lempitsky, V.: Learning deep embeddings with histogram loss. In: NIPS. (2016)
-  Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: ICLR. (2015)
-  Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: CVPR. (2015)
-  Shen, T., Wang, J., Fang, T., Zhu, S., Quan, L.: Color correction for image-based modeling in the large. In: ACCV. (2016)
-  Balntas, V., Riba, E., Ponsa, D., Mikolajczyk, K.: Learning local feature descriptors with triplets and shallow convolutional neural networks. In: BMVC. (2016)
-  Lin, J., Morère, O., Veillard, A., Duan, L.Y., Goh, H., Chandrasekhar, V.: DeepHash for image instance retrieval: Getting regularization, depth and fine-tuning right. In: ICMR. (2017)
-  Song, H.O., Xiang, Y., Jegelka, S., Savarese, S.: Deep metric learning via lifted structured feature embedding. In: CVPR. (2016)
-  Luo, Z., Shen, T., Zhou, L., Zhu, S., Zhang, R., Yao, Y., Fang, T., Quan, L.: GeoDesc: Learning local descriptors by integrating geometry constraints. In: ECCV. (2018)
-  Azizpour, H., Sharif Razavian, A., Sullivan, J., Maki, A., Carlsson, S.: From generic to specific deep representations for visual recognition. In: CVPR Workshops. (2015)
-  Razavian, A.S., Sullivan, J., Carlsson, S., Maki, A.: Visual instance retrieval with deep convolutional networks. ITE Transactions on Media Technology and Applications (2016)
-  Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., et al.: TensorFlow: A system for large-scale machine learning. In: OSDI. (2016)
-  Vedaldi, A., Fulkerson, B.: VLFeat: An open and portable library of computer vision algorithms. In: ACM Multimedia. (2010)
-  Wang, S., Jiang, S.: INSTRE: A new benchmark for instance-level object retrieval and recognition. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM) (2015)
-  Arandjelovic, R., Gronat, P., Torii, A., Pajdla, T., Sivic, J.: NetVLAD: CNN architecture for weakly supervised place recognition. In: CVPR. (2016)
-  Wilson, K., Snavely, N.: Robust global translations with 1DSfM. In: ECCV. (2014)