Guide to Content-Based Image Retrieval
In the early days, content-based image retrieval (CBIR) was studied with global features. Since 2003, image retrieval based on local descriptors (de facto SIFT) has been extensively studied for over a decade due to the advantage of SIFT in dealing with image transformations. Recently, image representations based on the convolutional neural network (CNN) have attracted increasing interest in the community and demonstrated impressive performance. Given this time of rapid evolution, this article provides a comprehensive survey of instance retrieval over the last decade. Two broad categories, SIFT-based and CNN-based methods, are presented. For the former, according to the codebook size, we organize the literature into using large/medium-sized/small codebooks. For the latter, we discuss three lines of methods, i.e., using pre-trained or fine-tuned CNN models, and hybrid methods. The first two perform a single-pass of an image to the network, while the last category employs a patch-based feature extraction scheme. This survey presents milestones in modern instance retrieval, reviews a broad selection of previous works in different categories, and provides insights on the connection between SIFT and CNN-based methods. After analyzing and comparing retrieval performance of different categories on several datasets, we discuss promising directions towards generic and specialized instance retrieval.
Content-based image retrieval (CBIR) has been a long-standing research topic in the computer vision community. The study of CBIR truly started in the early 1990s. Images were indexed by visual cues, such as texture and color, and a myriad of algorithms and image retrieval systems were proposed. A straightforward strategy is to extract global descriptors, an idea that dominated the image retrieval community in the 1990s and early 2000s. Yet a well-known problem is that global signatures may fail to remain invariant to image changes such as illumination, translation, occlusion and truncation. These variations compromise retrieval accuracy and limit the application scope of global descriptors. This problem has given rise to local feature based image retrieval.
The focus of this survey is instance-level image retrieval. In this task, given a query image depicting a particular object/scene/architecture, the aim is to retrieve images containing the same object/scene/architecture, possibly captured under different views, illumination, or with occlusions. Instance retrieval departs from class retrieval  in that the latter aims at retrieving images of the same class as the query. In the following, if not specified, we use “image retrieval” and “instance retrieval” interchangeably.
The milestones of instance retrieval in the past years are presented in Fig. 1, in which the eras of the SIFT-based and CNN-based methods are highlighted. The majority of traditional methods can be considered to end in 2000, when Smeulders et al.  presented a comprehensive survey of CBIR “at the end of the early years”. Three years later (2003) the Bag-of-Words (BoW) model was introduced to the image retrieval community , and in 2004 it was applied to image classification , both relying on the SIFT descriptor . The retrieval community has since witnessed the prominence of the BoW model for over a decade, during which many improvements were proposed. In 2012, Krizhevsky et al.  with AlexNet achieved state-of-the-art recognition accuracy in ILSVRC 2012, exceeding previous best results by a large margin. Since then, research focus has begun to shift to deep learning based methods [7, 8, 9, 10], especially the convolutional neural network (CNN).
The SIFT-based methods mostly rely on the BoW model. BoW was originally proposed for modeling documents, because text is naturally parsed into words. It builds a word histogram for a document by accumulating word responses into a global vector. In the image domain, the introduction of the scale-invariant feature transform (SIFT) made the BoW model feasible . SIFT originally comprises a detector and a descriptor, but the two are now used in isolation; in this survey, if not specified, SIFT usually refers to the 128-dim descriptor, a common practice in the community. With a pre-trained codebook (vocabulary), local features are quantized to visual words. An image can thus be represented in a similar form to a document, and classic weighting and indexing schemes can be leveraged.
In recent years, the popularity of SIFT-based models has been overtaken by the convolutional neural network (CNN), a hierarchical structure that has been shown to outperform hand-crafted features in many vision tasks. In retrieval, competitive performance compared to the BoW models has been reported, even with short CNN vectors [10, 16, 17]. The CNN-based retrieval models usually compute compact representations and employ the Euclidean distance or some approximate nearest neighbor (ANN) search method for retrieval. Current literature may directly employ pre-trained CNN models or perform fine-tuning for specific retrieval tasks. A majority of these methods feed the image into the network only once to obtain the descriptor. Some are based on patches which are passed to the network multiple times, in a manner similar to SIFT; we classify them as hybrid methods in this survey.
At this time of change, this paper provides a comprehensive literature survey of both the SIFT-based and CNN-based instance retrieval methods. We first present the categorization methodology in Section 2. We then describe the two major method types in Section 3 and Section 4, respectively. Section 5 summarizes comparisons between SIFT- and CNN-based methods on several benchmark datasets. In Section 6, we point out two possible future directions. Section 7 concludes this survey.
| Method type | Subcategory | Feature extraction | Encoding | Dim. | Indexing |
|---|---|---|---|---|---|
| SIFT-based | Large voc. | DoG, Hessian-Affine, dense patches, etc.; local invariant descriptors such as SIFT | Hard, soft | High | Inverted index |
| SIFT-based | Mid voc. | (as above) | Hard, soft, HE | Medium | Inverted index |
| SIFT-based | Small voc. | (as above) | VLAD, FV | Low | ANN methods |
| CNN-based | Hybrid | CNN features of image patches | VLAD, FV, pooling | Varies | ANN methods |
| CNN-based | Pre-trained, single-pass | Column feat. or FC of pre-trained CNN models | VLAD, FV, pooling | Low | ANN methods |
| CNN-based | Fine-tuned, single-pass | A global feat. is end-to-end extracted from fine-tuned CNN models | (end-to-end) | Low | ANN methods |
Based on the visual representations used, this survey categorizes the retrieval literature into two broad types: SIFT-based and CNN-based. The SIFT-based methods are further organized into three classes: using large, medium-sized or small codebooks. We note that the codebook size is closely related to the choice of encoding methods. The CNN-based methods are categorized into using pre-trained or fine-tuned CNN models, as well as hybrid methods. Their similarities and differences are summarized in Table I.
The SIFT-based methods were predominantly studied before 2012  (though good works also appear in recent years [18, 19]). This line of methods usually uses one type of detector, e.g., Hessian-Affine, and one type of descriptor, e.g., SIFT. Encoding maps a local feature into a vector. Based on the size of the codebook used during encoding, we classify SIFT-based methods into three categories, as below.
The CNN-based methods extract features using CNN models. Compact (fixed-length) representations are usually built. There are three classes:
Using fine-tuned CNN models. The CNN model (e.g., pre-trained on ImageNet) is fine-tuned on a training set in which the images share similar distributions with the target database . CNN features can be extracted in an end-to-end manner through a single pass to the CNN model. The visual representations exhibit improved discriminative ability [24, 17].
The pipeline of SIFT-based retrieval is introduced in Fig. 2.
Local feature extraction. Suppose we have a gallery consisting of N images. Given a feature detector, we extract local descriptors from the regions around sparse interest points or dense patches. We denote the local descriptors of the D detected regions in an image as {f_i}, i = 1, …, D.
Codebook training. SIFT-based methods train a codebook offline. Each visual word in the codebook lies at the center of a subspace, called a “Voronoi cell”. A larger codebook corresponds to a finer partitioning, resulting in more discriminative visual words, and vice versa. Suppose a pool of local descriptors is computed from an unlabeled training set. The baseline approach, i.e., k-means, partitions the points into K clusters; the K cluster centers serve as visual words and constitute a codebook of size K.
Feature encoding. A local descriptor f is mapped into a feature embedding v through the feature encoding process, f → v. When k-means clustering is used, f can be encoded according to its distances to the visual words. For large codebooks, hard [12, 11] and soft quantization  are good choices. In the former, the resulting embedding has only one non-zero entry; in the latter, f can be quantized to a small number of visual words. A global signature is produced after sum-pooling all the embeddings of local features. For medium-sized codebooks, additional binary signatures can be generated to preserve the original information. When using small codebooks, popular encoding schemes include the vector of locally aggregated descriptors (VLAD)  and the Fisher vector (FV) , etc.
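The hard-quantization variant of this pipeline can be sketched in a few lines. The codebook and descriptors below are hand-picked toy data, not trained features; a real system would use SIFT descriptors and a k-means codebook with thousands of words.

```python
import numpy as np

def hard_quantize(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word."""
    # pairwise squared distances: (num_descriptors, codebook_size)
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def bow_histogram(descriptors, codebook):
    """Sum-pool the one-hot embeddings into a global BoW vector."""
    words = hard_quantize(descriptors, codebook)
    return np.bincount(words, minlength=len(codebook)).astype(float)

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])   # K = 3 toy words
descs = np.array([[0.1, 0.0], [0.9, 1.1], [1.1, 0.9], [2.0, 2.1]])
print(bow_histogram(descs, codebook))  # -> [1. 2. 1.]
```

Soft quantization would instead spread each descriptor's mass over its few nearest words with distance-dependent weights.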
Local invariant features aim at accurate matching of local structures between images . SIFT-based methods usually share a similar feature extraction step composed of a feature detector and a descriptor.
Local detector. Interest point detectors aim to reliably localize a set of stable local regions under various imaging conditions. In the retrieval community, finding affine-covariant regions has been preferred. Such detectors are called “covariant” because the shapes of the detected regions change with affine transformations, so that the region content (descriptors) can be invariant. This kind of detector differs from keypoint-centric detectors such as the Hessian detector , and from those focusing on scale-invariant regions such as the difference of Gaussians (DoG)  detector. Affine detectors produce elliptical regions adapted to the local intensity patterns. This ensures that the same local structure is covered under deformations caused by viewpoint changes, a problem often encountered in instance retrieval. In the milestone work , the Maximally Stable Extremal Region (MSER) detector  and the affine-extended Harris-Laplace detector are employed, both of which are affine-invariant region detectors. MSER is used in several later works [29, 11]. Starting from , the Hessian-affine detector  has been widely adopted in retrieval. It has been shown to be superior to the DoG detector [13, 31], due to its advantage in reliably detecting local structures under large viewpoint changes. To fix the orientation ambiguity of these affine-covariant regions, the gravity assumption is made . This practice, which dismisses the orientation estimation, is employed by later works [33, 34] and demonstrates consistent improvement on architecture datasets where the objects are usually upright. Other non-affine detectors have also been tested in retrieval, such as the Laplacian of Gaussian (LoG) and Harris detectors used in . For objects with smooth surfaces , few interest points can be detected, so the object boundaries are good candidates for local description.
On the other hand, some works employ dense region detectors. In a comparison between densely sampled image patches and detected patches, Sicre et al.  report the superiority of the former. To recover the rotation invariance of dense sampling, the dominant angle of patches is estimated in . A comprehensive comparison of various dense sampling strategies, interest point detectors, and those in between can be found in .
Local Descriptor. With a set of detected regions, descriptors encode the local content. SIFT  has been used as the default descriptor. The 128-dim vector has been shown to outperform competing descriptors in matching accuracy . In an extension, PCA-SIFT  reduces the dimension from 128 to 36 to speed up the matching process, at the cost of more time in feature computation and some loss of distinctiveness. Another improvement is RootSIFT , calculated in two steps: 1) ℓ1-normalize the SIFT descriptor, 2) square-root each element. RootSIFT is now used as a routine in SIFT-based retrieval. Apart from SIFT, SURF  is also widely used. It combines the Hessian-Laplace detector with a local descriptor of local gradient histograms, using the integral image for acceleration. SURF has comparable matching accuracy with SIFT and is faster to compute. See  for comparisons between SIFT, PCA-SIFT, and SURF. To further accelerate matching, binary descriptors  replace the Euclidean distance with the Hamming distance during matching.
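The RootSIFT transform mentioned above fits in three lines. A toy 8-dim vector stands in for a real 128-dim SIFT descriptor:

```python
import numpy as np

def root_sift(desc, eps=1e-12):
    """RootSIFT: L1-normalize the descriptor, then take element-wise sqrt."""
    desc = desc / (np.abs(desc).sum() + eps)  # step 1: L1 normalization
    return np.sqrt(desc)                      # step 2: element-wise square root

v = np.array([4.0, 1.0, 0.0, 1.0, 2.0, 0.0, 1.0, 1.0])
r = root_sift(v)
# Euclidean distance between RootSIFT vectors equals the Hellinger distance
# between the L1-normalized originals; r has (near-)unit L2 norm since its
# squared entries sum to the L1 norm of the normalized descriptor.
print(np.linalg.norm(r))  # close to 1.0
```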
Apart from hand-crafted descriptors, some works also propose learning schemes to improve the discriminative ability of local descriptors. For example, Philbin et al.  propose a non-linear transformation so that the projected SIFT descriptor yields smaller distances for true matches. Simonyan et al.  improve this process by learning both the pooling region and a linear descriptor projection.
A small codebook has several thousand, several hundred or fewer visual words, so the computational complexity of codebook generation and encoding is moderate. Representative works include BoW , VLAD  and FV . We mainly discuss VLAD and FV and refer readers to  for a comprehensive evaluation of the BoW compact vectors.
Typical codebook sizes are 64, 128 or 256. For VLAD, flat k-means is employed for codebook generation. For FV, a Gaussian mixture model (GMM) with K Gaussian mixtures is trained using maximum likelihood estimation. The GMM describes the feature space with a mixture of Gaussian distributions, and can be denoted as λ = {w_k, μ_k, Σ_k, k = 1, …, K}, where w_k, μ_k and Σ_k represent the mixture weight, the mean vector and the covariance matrix of Gaussian k, respectively.
Due to the small codebook size, relatively complex and information-preserving encoding techniques can be applied. We mainly describe FV, VLAD and their improvements in this section. With a pre-trained GMM model, FV describes the averaged first- and second-order differences between local features and the GMM centers. Its dimension is 2pK, where p is the dimension of the local descriptors and K is the codebook size of the GMM. FV usually undergoes power normalization ,  to suppress the burstiness problem (to be described in Section 3.4.3). In this step, each component x_i of FV undergoes a non-linear transformation featured by parameter α, x_i := sign(x_i)·|x_i|^α. Then ℓ2 normalization is employed. Later, FV has been improved from different aspects. For example, Koniusz et al.  augment each descriptor with its spatial coordinates and associated tunable weights. In , larger codebooks (up to 4,096) are generated and demonstrate superior classification accuracy over smaller codebooks, at the cost of computational efficiency. To correct the assumption that local regions are identically and independently distributed (iid), Cinbis et al.  propose non-iid models that discount the burstiness effect and yield improvement over power normalization.
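The power and ℓ2 normalization applied to FV can be sketched as below; the input vector is synthetic, and α = 0.5 is the commonly used setting:

```python
import numpy as np

def power_l2_normalize(z, alpha=0.5):
    """Power (alpha) normalization followed by L2 normalization."""
    z = np.sign(z) * np.abs(z) ** alpha   # element-wise: sign(x)|x|^alpha
    norm = np.linalg.norm(z)
    return z / norm if norm > 0 else z

fv = np.array([9.0, -4.0, 1.0, 0.0])      # stand-in for a Fisher vector
out = power_l2_normalize(fv)
```

Power normalization shrinks large (bursty) components relative to small ones before the final ℓ2 scaling.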
The VLAD encoding scheme proposed by Jégou et al.  can be thought of as a simplified version of FV. It quantizes a local feature to its nearest visual word in the codebook and records the difference between them. Exact nearest neighbor search is feasible because of the small codebook size. The residual vectors are then aggregated by sum-pooling, followed by normalizations. The dimension of VLAD is pK, where p is the dimension of the local descriptors and K is the codebook size. Comparisons of some important encoding techniques are presented in [51, 52]. Again, the improvement of VLAD comes from multiple aspects. In , Jégou and Chum suggest the use of PCA and whitening (denoted as PCA in Table V) to de-correlate visual word co-occurrences, and the training of multiple codebooks to reduce quantization loss. In , Arandjelović et al. extend VLAD in three aspects: 1) normalizing the residual sum within each coarse cluster, called intra-normalization, 2) vocabulary adaptation to address the dataset transfer problem, and 3) multi-VLAD for small object discovery. Concurrently to , Delhumeau et al.  propose to normalize each residual vector instead of the residual sums; they also advocate local PCA within each Voronoi cell, which, unlike , does not perform dimension reduction. A recent work  employs soft assignment and empirically learns optimal weights for each rank to improve over hard quantization.
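The basic VLAD aggregation above can be sketched as follows. The codebook and descriptors are toy values chosen for illustration; with codebook size K and descriptor dimension p, the output has K·p dimensions:

```python
import numpy as np

def vlad(descriptors, codebook):
    """Minimal VLAD: accumulate residuals per nearest word, then L2-normalize."""
    K, p = codebook.shape
    v = np.zeros((K, p))
    for x in descriptors:
        k = np.argmin(((codebook - x) ** 2).sum(axis=1))  # nearest visual word
        v[k] += x - codebook[k]                           # residual accumulation
    v = v.ravel()                                         # flatten to K*p dims
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

codebook = np.array([[0.0, 0.0], [2.0, 2.0]])             # K=2, p=2
descs = np.array([[0.2, 0.0], [1.8, 2.2]])
print(vlad(descs, codebook).shape)  # -> (4,)
```

Intra-normalization would additionally ℓ2-normalize each per-word residual block before flattening.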
Note that some general techniques benefit various embedding methods, such as VLAD, FV, BoW, locality-constrained linear coding (LLC)  and monomial embeddings. To improve the discriminative ability of embeddings, Tolias et al.  propose the orientation-covariant embedding to encode the dominant orientation of SIFT regions jointly with the SIFT descriptor. It achieves a covariance property similar to weak geometric consistency (WGC)  by using geometric cues within regions of interest, so that matching points with similar dominant orientations are up-weighted and vice versa. The triangulation embedding  only considers the direction, instead of the magnitude, of the input vectors. Jégou et al.  also present a democratic aggregation that limits the interference between the mapped vectors. Sharing a similar idea with democratic aggregation, Murray and Perronnin  propose the generalized max pooling (GMP), optimized by equalizing the similarity between the pooled vector and each coding representation.
The computational complexity of BoW, VLAD and FV is similar. We neglect the offline training and SIFT extraction steps. During visual word assignment, each feature computes its distance (or soft assignment coefficient) to all K visual words (or Gaussians) for VLAD (or FV). So this step has a complexity of O(pK) per feature, where p is the descriptor dimension. In the other steps, the complexity does not exceed O(pK). Considering the sum-pooling of the embeddings, the encoding process has an overall complexity of O(NpK), where N is the number of features in an image. Triangulation embedding , a variant of VLAD, has a similar complexity. The complexity of multi-VLAD  is O(NpK), too, but it has a more costly matching process. Hierarchical VLAD  has a complexity of O(Np(K + K′)), where K′ is the size of the secondary codebook. In the aggregation stage, both GMP  and democratic aggregation  have high complexity. The complexity of GMP grows with the dimension of the feature embedding, while the computational cost of democratic aggregation comes from the Sinkhorn algorithm.
Due to the high dimensionality of the VLAD/FV embeddings, efficient compression and ANN search methods have been employed [61, 62]. For example, principal component analysis (PCA) is usually adopted for dimension reduction, and it has been shown that retrieval accuracy can even increase after PCA . For hashing-based ANN methods, Perronnin et al.  use standard binary encoding techniques such as locality sensitive hashing  and spectral hashing . Nevertheless, when tested on SIFT and GIST feature datasets, spectral hashing is shown to be outperformed by Product Quantization (PQ) . Among quantization-based ANN methods, PQ is also demonstrated to be better than other popular ANN methods such as FLANN . A detailed discussion of VLAD and PQ can be found in . PQ has since been improved in a number of works. In , Douze et al. propose to re-order the cluster centroids so that adjacent centroids have small Hamming distances. This method is compatible with Hamming distance based ANN search, which offers significant speedup for PQ. We refer readers to  for a survey of ANN approaches.
We also mention an emerging ANN technique, i.e., group testing , , . In a nutshell, the database is decomposed into groups, each represented by a group vector. Comparisons between the query and the group vectors reveal how likely a group is to contain a true match. Since group vectors are much fewer than the database vectors, search time is reduced. Iscen et al.  propose to directly find the best group vectors summarizing the database without explicitly forming the groups, which reduces memory consumption.
Approximate methods are critical when assigning data into a large number of clusters. In the retrieval community, two representative works are hierarchical k-means (HKM)  and approximate k-means (AKM) , as illustrated in Fig. 1 and Fig. 3. Proposed in 2006, HKM applies standard k-means to the training features hierarchically. It first partitions the points into a few clusters (e.g., k̄ ≪ K) and then recursively partitions each cluster into k̄ further clusters. In every recursion, each point is assigned to one of the k̄ clusters, with the depth of the cluster tree being O(log_k̄ K), where K is the target cluster number. The computational cost of HKM is therefore O(n·k̄·log_k̄ K), where n is the number of training samples. This is much smaller than the O(nK) complexity of flat k-means when K is large (a large codebook).
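The speedup from the tree structure can be illustrated with a toy two-level HKM assignment: a point is compared against only b centers per level (b·log_b K comparisons) instead of all K leaf centers. The centers below are hand-picked scalars, not trained, purely to keep the sketch short:

```python
import numpy as np

branch = 2  # branching factor b
# level-1 centers split the space; each has `branch` children (the leaves)
level1 = np.array([[0.0], [10.0]])
leaves = {0: np.array([[-1.0], [1.0]]),    # children of level1[0]
          1: np.array([[9.0], [11.0]])}    # children of level1[1]

def hkm_assign(x):
    """Return the leaf index (visual word) for a scalar feature x."""
    c1 = int(np.argmin((level1[:, 0] - x) ** 2))       # b comparisons at level 1
    c2 = int(np.argmin((leaves[c1][:, 0] - x) ** 2))   # b comparisons at level 2
    return c1 * branch + c2                            # flat leaf id in [0, 4)

print([hkm_assign(v) for v in [-0.5, 0.8, 9.2, 12.0]])  # -> [0, 1, 2, 3]
```

Each assignment here costs 4 comparisons instead of a flat scan over all 4 leaves; the gap widens rapidly as K grows.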
The other milestone in large codebook generation is AKM . This method indexes the cluster centers using a forest of random k-d trees so that the assignment step can be performed efficiently with ANN search. In AKM, the cost of assignment can be written as O(n·m·log K), where m is the number of nearest cluster candidates to be accessed in the k-d trees. So the computational complexity of AKM is on par with HKM and is significantly smaller than that of flat k-means when K is large. Experiments show that AKM is superior to HKM  due to its lower quantization error (see Section 3.4.2). In most AKM-based methods, the default choice for ANN search is FLANN .
Feature encoding is interleaved with codebook clustering, because ANN search is critical in both components. The ANN techniques underlying classic methods like AKM and HKM can be used in both the clustering and encoding steps. Under a large codebook, the key trade-off is between quantization error and computational complexity. In the encoding step, information-preserving encoding methods such as FV  and sparse coding  are mostly infeasible due to their computational complexity. It therefore remains a challenging problem to reduce the quantization error while keeping the quantization process efficient.
For the ANN methods, the earliest solution is to quantize a local feature along the hierarchical tree structure . Quantized tree nodes at different levels are assigned different weights. However, due to the highly imbalanced tree structure, this method is outperformed by the k-d tree based quantization method : one visual word is assigned to each local feature, using a k-d tree built from the codebook for fast ANN search. In an improvement to this hard quantization scheme, Philbin et al.  propose soft quantization, which quantizes a feature into several nearest visual words. The weight of each assigned visual word relates negatively to its distance from the feature by exp(−d²/(2σ²)), where d is the distance between the descriptor and the cluster center and σ is a bandwidth parameter. While soft quantization is based on the Euclidean distance, Mikulik et al.  propose to find relevant visual words for each visual word through an unsupervised set of matching features. Built on a probabilistic model, these alternative words tend to contain descriptors of matching features. To reduce the memory cost of soft quantization  and the number of query visual words, Cai et al.  suggest that when a local feature is far away from even the nearest visual word, this feature can be discarded without a performance drop. To further accelerate quantization, scalar quantization  suggests that local features be quantized without an explicitly trained codebook. A floating-point vector is binarized, and the first n dimensions of the resulting binary vector are directly converted to a decimal number serving as a visual word. To cope with large quantization error and low recall, scalar quantization uses bit-flips to generate hundreds of visual words for a local feature.
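The scalar quantization idea is simple enough to sketch directly. The threshold and the choice n = 4 are illustrative, not the settings of the cited work:

```python
def scalar_quantize(desc, n=4, threshold=0.0):
    """Binarize a descriptor, then read the first n bits as a decimal word id."""
    bits = [1 if v > threshold else 0 for v in desc]  # codebook-free binarization
    word = 0
    for b in bits[:n]:                                # first n bits -> decimal
        word = (word << 1) | b
    return bits, word

bits, word = scalar_quantize([0.3, -0.1, 0.5, 0.2, -0.4, 0.6], n=4)
print(word)  # bits start 1,0,1,1 -> visual word 11
```

Flipping individual bits of the n-bit prefix would enumerate the neighboring visual words used to recover recall.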
TF-IDF. The visual words in the codebook are typically assigned specific weights, called the term frequency and inverse document frequency (TF-IDF), which are integrated with the BoW encoding. TF is defined as:

TF(w_i, I) = n_i^I,

where n_i^I is the number of occurrences of visual word w_i within an image I. TF is thus a local weight. IDF, on the other hand, determines the contribution of a given visual word through global statistics. The classic IDF weight of visual word w_i is calculated as:

IDF(w_i) = log(N / n_i),

where N is the number of gallery images, and n_i encodes the number of images in which word w_i appears. The TF-IDF weight for visual word w_i in image I is:

w(i, I) = TF(w_i, I) · IDF(w_i).
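The TF-IDF weighting can be computed directly from word counts. The tiny three-image "gallery" below is synthetic:

```python
import math

gallery = [      # each image = list of visual-word ids after quantization
    [0, 0, 1],
    [1, 2],
    [0, 2, 2],
]
N = len(gallery)

def tf(word, image):
    """Term frequency: occurrences of a visual word in one image."""
    return image.count(word)

def idf(word):
    """Inverse document frequency: log(N / number of images containing word)."""
    n_i = sum(1 for img in gallery if word in img)
    return math.log(N / n_i)

def tf_idf(word, image):
    return tf(word, image) * idf(word)

# word 0 appears twice in image 0 and in 2 of the 3 gallery images:
print(round(tf_idf(0, gallery[0]), 4))  # -> 2 * log(3/2) ≈ 0.8109
```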
Improvements. A major problem associated with visual word weighting is burstiness . It refers to the phenomenon whereby repetitive structures appear in an image. This problem tends to dominate image similarity. Jégou et al.  propose several TF variants to deal with burstiness. An effective strategy consists in applying a square-root operation to TF. Instead of grouping features with the same word index, Revaud et al.  propose detecting keypoint groups that frequently occur in irrelevant images and down-weighting them in the scoring function. While the above two methods detect bursty groups after quantization, Shi et al.  propose detecting them at the descriptor stage. The detected bursty descriptors undergo average pooling and are fed into the BoW architectures. From the IDF side, Zheng et al.  propose the Lp-norm IDF to tackle burstiness, and Murata et al.  design the exponential IDF, which is later incorporated into the BM25 formula. While most works try to suppress burstiness, Torii et al.  view it as a distinguishing feature for architectures and design a new similarity measurement following burstiness detection.
Another feature weighting strategy is feature augmentation on the database side , . Both methods construct an image graph offline, with edges indicating whether two images share the same object. In , only features that pass geometric verification are preserved, which reduces the memory cost. Then, the feature set of the base image is augmented with all the visual words of its connected images. This method is improved in  by adding only those visual words that are estimated to be visible in the augmented image, so that noisy visual words can be excluded.
The inverted index is designed to enable efficient storage and retrieval and is usually used under large/medium-sized codebooks. Its structure is illustrated in Fig. 4. The inverted index is a one-dimensional structure where each entry corresponds to a visual word in the codebook. An inverted list is attached to each word entry, and the items indexed in each inverted list are called indexed features or postings. The inverted index takes advantage of the sparse nature of the visual word histogram under a large codebook.
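The structure just described can be sketched as a dictionary of posting lists; only words actually present in an image receive postings, which is where the sparsity pays off. The gallery is the same kind of toy word-id data as above:

```python
from collections import defaultdict

def build_inverted_index(gallery):
    """One entry per visual word, holding (image_id, term_frequency) postings."""
    index = defaultdict(list)
    for image_id, words in enumerate(gallery):
        counts = {}
        for w in words:
            counts[w] = counts.get(w, 0) + 1
        for w, tf in counts.items():
            index[w].append((image_id, tf))   # one posting per (image, word)
    return index

gallery = [[0, 0, 1], [1, 2], [0, 2, 2]]
index = build_inverted_index(gallery)
print(index[0])  # -> [(0, 2), (2, 1)]: word 0 occurs twice in image 0, once in image 2
```

At query time, only the posting lists of the query's visual words are traversed, so scoring cost scales with list lengths rather than database size.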
In the literature, new retrieval methods are usually required to be adaptable to the inverted index. In the baseline [12, 11], the image ID and term frequency (TF) are stored in a posting. When other information is integrated, it should be small in size. For example, in , quantized metadata such as the descriptor contextual weight, descriptor density, mean relative log scale and mean orientation difference are stored in each posting. Similarly, quantized spatial information such as the orientation can also be stored , . In co-indexing , when the inverted index is enlarged with globally consistent neighbors, semantically isolated images are deleted to reduce memory consumption. In , the original one-dimensional inverted index is expanded to two dimensions for ANN search, learning a codebook for each SIFT sub-vector. Later, this structure is applied to instance retrieval by  to fuse local color and SIFT descriptors.
Medium-sized codebooks refer to those having 10-200k visual words. The visual words exhibit medium discriminative ability, and the inverted index is usually constructed.
Considering the relatively small computational cost compared with large codebooks (Section 3.4.1), flat k-means can be adopted for codebook generation , . It is also shown in [31, 86] that using AKM  for clustering yields very competitive retrieval accuracy.
For quantization, nearest neighbor search can be used to find the nearest visual words in the codebook. In practice, even strict ANN algorithms produce competitive retrieval results at this codebook size. So, compared with the extensive study of quantization under large codebooks (Section 3.4.2) [25, 71, 74], relatively few works focus on the quantization problem under a medium-sized codebook.
The discriminative ability of visual words in medium-sized codebooks lies in between that of small and large codebooks. It is thus important to compensate for the information loss during quantization. To this end, a milestone work, i.e., Hamming embedding (HE), has been dominantly employed.
Proposed by Jégou et al. , HE greatly improves the discriminative ability of visual words under medium-sized codebooks. HE first maps a SIFT descriptor x from the d-dimensional space to a b-dimensional space:

x̂ = P x,

where P is a b × d projection matrix, and x̂ is a low-dimensional vector. By creating a matrix of random Gaussian values and applying a QR factorization to it, matrix P is taken as the first b rows of the resulting orthogonal matrix. To binarize x̂, Jégou et al. propose to compute the median vector τ of the low-dimensional vectors using the descriptors falling in each Voronoi cell. Given descriptor x and its projected vector x̂, HE computes its visual word q(x), and the i-th bit of the HE binary vector b(x) is computed as:

b_i(x) = 1 if x̂_i > τ_{q(x),i}, and 0 otherwise,

where b(x) is the resulting HE vector of dimension b. The binary feature serves as a secondary check for feature matching. A pair of local features is a true match when two criteria are satisfied: 1) identical visual words and 2) a small Hamming distance between their HE signatures. The extension of HE  estimates the matching strength between features x and y inversely to the Hamming distance by an exponential function:

w(x, y) = exp(−h(b(x), b(y))² / (2σ²)),

where b(x) and b(y) are the HE binary vectors of x and y, respectively, h(·, ·) computes the Hamming distance between two binary vectors, and σ is a weighting parameter. As shown in Fig. 6, HE  and its weighted version  improved accuracy considerably in 2008 and 2010.
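The HE test can be sketched as follows: project with a random orthogonal matrix, binarize against per-cell medians, and accept a match only when the Hamming distance falls under a threshold. All data here is synthetic, and the zero-vector medians stand in for the medians learned per Voronoi cell:

```python
import numpy as np

rng = np.random.default_rng(0)
d, b = 8, 4                                   # descriptor dim, binary code length

Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
P = Q[:b]                                     # first b rows -> projection matrix

def he_signature(x, medians):
    """Project to b dims and binarize against the cell's median vector."""
    z = P @ x
    return (z > medians).astype(int)

def he_match(x, y, medians, tau=1):
    """Secondary HE check, applied after two features share a visual word."""
    h = int(np.sum(he_signature(x, medians) != he_signature(y, medians)))
    return h <= tau

medians = np.zeros(b)                         # stand-in for learned per-cell medians
x = rng.standard_normal(d)
noisy = x + 0.01 * rng.standard_normal(d)     # near-duplicate descriptor
print(he_match(x, noisy, medians))            # likely True: tiny noise rarely flips bits
```

Replacing the binary accept/reject with the exponential weight above yields the weighted HE variant.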
Applications of HE include video copy detection , image classification  and re-ranking . For example, in image classification, patch matching similarity is efficiently estimated by HE which is integrated into linear kernel-based SVM . In image re-ranking, Tolias et al.  use lower HE thresholds to find strict correspondences which resemble those found by RANSAC, and the resulting image subset is more likely to contain true positives for query reformulation.
A related work proposes a vector-to-binary distance comparison. It exploits the vector-to-hyperplane distance while retaining the efficiency of the inverted index. Further, Qin et al.  design a higher-order match kernel within a probabilistic framework and adaptively normalize the local feature distances by the distance distribution of false matches. This method is similar in spirit to , in which the word-word distance, instead of the feature-feature distance , is normalized according to the neighborhood distribution of each visual word. While the average distance between a word and its neighbors is regularized to be almost constant in , the idea of democratizing the contribution of individual embeddings has later been employed in . In , Tolias et al. show that VLAD and HE share similar natures and propose a new match kernel which trades off between local feature aggregation and feature-to-feature matching, using a matching function similar to . They also demonstrate that using more bits (e.g., 128) in HE is superior to the original 64-bit scheme, at the cost of decreased efficiency. Even more bits (256) are used in , but this method may suffer from relatively low recall.
Local-local fusion. A problem with the SIFT feature is that it provides only a local gradient description. Other discriminative information encoded in an image is left unexploited. In Fig. 5 (B), a pair of false matches cannot be rejected by HE due to their similarity in the SIFT space, but fusing other local (or regional) features may correct this problem. A good choice for local-local fusion is to couple SIFT with color descriptors. The use of color-SIFT descriptors can partially address the trade-off between invariance and discriminative ability. Descriptors such as HSV-SIFT , HueSIFT  and OpponentSIFT  have been evaluated on several recognition benchmarks . Both HSV-SIFT and HueSIFT are scale-invariant and shift-invariant. OpponentSIFT describes all the channels in the opponent color space using the SIFT descriptor and is largely robust to light color changes. In , OpponentSIFT is recommended when no prior knowledge about the datasets is available. In more recent works, binary color signatures are stored in the inverted index , . Despite good retrieval accuracy on some datasets, a potential problem is that intense illumination variation may compromise the effectiveness of color features.
Local-global fusion. Local and global features describe images from different aspects and can be complementary. In Fig. 5 (C), when local (and regional) cues are not enough to reject a false match pair, it is effective to further incorporate visual information from a larger context scale. Early and late fusion are two possible ways. In early fusion, the image neighborhood relationship mined by global features such as FC8 in AlexNet  is fused in the SIFT-based inverted index . In late fusion, Zhang et al.  build an offline graph for each type of feature, which is subsequently fused during the online query. In an improvement of , Deng et al.  add weakly supervised anchors to aid graph fusion. Both works operate at the rank level. For score-level fusion, automatically learned category-specific attributes are combined with pre-trained category-level information . Zheng et al.  propose query-adaptive late fusion by extracting a number of features (local or global, good or bad) and weighting them in a query-adaptive manner.
A frequent concern with the BoW model is the lack of geometric constraints among local features. Geometric verification can be used as a critical pre-processing step in various scenarios, such as query expansion [100, 101], database-side feature augmentation , , large-scale object mining , etc. The most well-known method for global spatial verification is RANSAC . It repeatedly estimates affine transformations from sampled correspondences and verifies each by the number of inliers that fit the transformation. RANSAC is effective in re-ranking a subset of top-ranked images but has efficiency problems. As a result, how to efficiently and accurately incorporate spatial cues in the SIFT-based framework has been extensively studied.
A good choice is to discover the spatial context among local features. For example, visual phrases [103, 104, 105, 106] are generated among individual visual words to provide a stricter matching criterion. Visual word co-occurrences in the entire image are estimated  and aggregated , while in [109, 110], visual word clusters within local neighborhoods are discovered. Visual phrases can also be constructed from adjacent image patches , random spatial partitioning , and localized stable regions  such as MSER .
Another strategy uses voting to check geometric consistency. In the voting space, a bin with a larger value is more likely to represent the true transformation. An important work is weak geometrical consistency (WGC) , which focuses on the difference in scale and orientation between matched features. The space of differences is quantized into bins, and Hough voting is used to locate the subset of correspondences with similar scale or orientation differences. Many later works can be viewed as extensions of WGC. For example, the method of Zhang et al.  can be viewed as WGC using x, y offsets instead of scale and orientation. This method is invariant to object translations, but may be sensitive to scale and rotation changes due to the rigid coordinate quantization. To regain scale and rotation invariance, Shen et al.  quantize the angle and scale of the query region after applying several transformations. A drawback of  is that query time and memory cost are both increased. To enable efficient voting and alleviate quantization artifacts, Hough pyramid matching (HPM)  distributes the matches over a hierarchical partition of the transformation space. HPM trades off between flexibility and accuracy and is very efficient. Quantization artifacts can also be reduced by allowing a single correspondence to vote for multiple bins . HPM and  are much faster than RANSAC and can be viewed as extending the weak geometric consistency proposed along with Hamming Embedding  with rotation and scale invariance. In , a rough global estimate of orientation and scale changes is made by voting, which is used to verify the transformation obtained by the matched features. A recent method  combines the advantages of hypothesis-based methods such as RANSAC  and voting-based methods [112, 21, 113, 114]. Possible hypotheses are identified by voting and later verified and refined.
This method inherits efficiency from voting and supports query expansion since it outputs an explicit transformation and a set of inliers.
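To make the voting idea concrete, a rough illustrative sketch of WGC-style Hough voting over scale and orientation differences of tentative matches follows. This is a didactic toy, not the authors' implementation; the function name, bin counts, and the assumed log-scale range of [-2, 2] are all hypothetical choices.

```python
import numpy as np

def wgc_score(matches, n_angle_bins=8, n_scale_bins=8):
    """Toy sketch of weak geometric consistency (WGC) voting.

    `matches` is an (N, 2) array of (angle_diff, log_scale_diff) values,
    one row per tentative SIFT correspondence between query and database
    images. Geometrically consistent matches concentrate in a few bins of
    the Hough space, so the score is the largest bin count.
    """
    matches = np.asarray(matches, dtype=float)
    # Quantize the orientation difference (wrapped to [0, 2*pi)).
    a = np.floor(np.mod(matches[:, 0], 2 * np.pi) / (2 * np.pi) * n_angle_bins).astype(int)
    # Quantize the log-scale difference, assumed to lie in [-2, 2].
    s = np.clip(np.floor((matches[:, 1] + 2.0) / 4.0 * n_scale_bins).astype(int),
                0, n_scale_bins - 1)
    hist = np.zeros((n_angle_bins, n_scale_bins), dtype=int)
    np.add.at(hist, (a, s), 1)  # unbuffered voting, robust to repeated bins
    return hist.max()
```

A database image whose tentative matches share a common rotation and scale change receives a high score, while randomly scattered false matches spread their votes across many bins.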
As a post-processing step, query expansion (QE) significantly improves the retrieval accuracy. In a nutshell, a number of top-ranked images from the original rank list are employed to issue a new query which is in turn used to obtain a new rank list. QE allows additional discriminative features to be added to the original query, thus improving recall.
In instance retrieval, Chum et al.  were the first to exploit this idea. They propose average query expansion (AQE), which averages the features of the top-ranked images to issue the new query. Usually, spatial verification  is employed for re-ranking and for obtaining the ROIs from which the local features undergo average pooling. AQE is used by many later works [17, 10, 24] as a standard tool. The recursive AQE and the scale-band recursive QE are effective improvements but incur more computational cost . Four years later, Chum et al.  improved QE from the perspectives of learning background confusers, expanding the query region and incremental spatial verification. In , a linear SVM is trained online using the top-ranked and bottom-ranked images as positive and negative training samples, respectively. The learned weight vector is used to compute the average query. Other important extensions include “hello neighbor” based on reciprocal neighbors , QE with rank-based weighting , Hamming QE  (see Section 3.5), etc.
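As a minimal sketch, AQE on L2-normalized global descriptors can be written as below. The function name and the `top_k` default are illustrative choices, and spatial verification of the top-ranked images is omitted for brevity.

```python
import numpy as np

def average_query_expansion(query, db, top_k=5):
    """Sketch of average query expansion (AQE) on L2-normalized features.

    query: (D,) descriptor; db: (N, D) matrix of database descriptors.
    The top-k ranked descriptors are averaged with the query to form an
    expanded query, which is then used to re-rank the whole database.
    """
    sims = db @ query                       # cosine similarity (unit-norm features)
    top = np.argsort(-sims)[:top_k]         # initial top-k ranks
    expanded = query + db[top].sum(axis=0)  # average pooling of query + top-k
    expanded /= np.linalg.norm(expanded)
    return np.argsort(-(db @ expanded))     # new rank list
```

The expanded query carries discriminative features from the verified top results, which is why AQE typically improves recall.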
Retrieving objects that cover a small portion of an image is challenging due to 1) the small number of detected local features and 2) the large amount of background noise. The Instance Search task in the TRECVID campaign  and the task of logo retrieval are important venues/applications for this problem.
Generally speaking, both TRECVID and logo retrieval can be tackled with similar pipelines. For keypoint-based methods, the spatial context among the local features is important for discriminating target objects from others, especially in the case of rigid objects. Examples include [118, 119, 120]. Other effective methods include burstiness handling  (discussed in Section 3.4.3), considering the different inlier ratios between the query and target objects , etc. In the second type of methods, effective region proposals  or multi-scale image patches  can be used as object region candidates. In , a recent state-of-the-art method, a regional diffusion mechanism based on neighborhood graphs is proposed to further improve the recall of small objects.
CNN-based retrieval methods have constantly been proposed in recent years and are gradually replacing the hand-crafted local detectors and descriptors. In this survey, CNN-based methods are classified into three categories: using pre-trained CNN models, using fine-tuned CNN models and hybrid methods. The first two categories compute the global feature with a single network pass, and the hybrid methods may require multiple network passes (see Fig. 2).
This type of method is efficient in feature computation due to the single-pass mode. Given its transfer nature, its success lies in the feature extraction and encoding steps. We will first describe some commonly used datasets and networks for pre-training, and then the feature computation process.
Popular CNN architectures. Several CNN models serve as good choices for extracting features, including AlexNet , VGGNet , GoogleNet  and ResNet , which are listed in Table II. Briefly, a CNN can be viewed as a set of non-linear functions and is composed of a number of layers such as convolution, pooling, non-linearities, etc. A CNN has a hierarchical structure. From bottom to top layers, the image undergoes convolution with filters, and the receptive field of these image filters increases. Filters in the same layer have the same size but different parameters. AlexNet  was proposed the earliest among these networks; it has five convolutional layers and three fully connected (FC) layers, with 96 filters of size 11×11 in the first layer and 256 filters of size 3×3 in the fifth layer. Zeiler et al.  observe that the filters are sensitive to certain visual patterns and that these patterns evolve from low-level bars in bottom layers to high-level objects in top layers. For low-level and simple visual stimuli, the CNN filters act like the detectors in local hand-crafted features, but for high-level and complex stimuli, the CNN filters have distinct characteristics that depart from SIFT-like detectors. AlexNet has been shown to be outperformed by newer networks such as VGGNet, which has the largest number of parameters among the four. GoogleNet and ResNet won the ILSVRC 2014 and 2015 challenges, respectively, showing that CNNs become more effective with more layers. A full review of these networks is beyond the scope of this paper, and we refer readers to [6, 128],  for details.
Datasets for pre-training. Several large-scale recognition datasets are used for CNN pre-training. Among them, the ImageNet dataset  is most commonly used. It contains 1.2 million images of 1,000 semantic classes and is usually thought of as being generic. Another data source for pre-training is the Places-205 dataset , which is twice as large as ImageNet but has five times fewer classes. It is a scene-centric dataset depicting various indoor and outdoor scenes. A hybrid dataset combining the Places-205 and ImageNet datasets has also been used for pre-training . The resulting HybridNet is evaluated in [130, 131, 125, 126] for instance retrieval.
| models | size | # layers | training set | used in |
|--------|------|----------|--------------|---------|
| AlexNet  | 60M | 5+3 | ImageNet | [22, 133] |
| PlacesNet  | – | – | Places | [130, 131] |
| HybridNet  | – | – | ImageNet+Places | [130, 131] |
The transfer issue. Comprehensive evaluations of various CNNs on instance retrieval have been conducted in several recent works [130, 131, 134], with the transfer effect being the main concern. It is considered in  that instance retrieval, as a target task, lies farthest from the source task, i.e., ImageNet classification. These studies reveal some critical insights into the transfer process. First, during model transfer, features extracted from different layers exhibit different retrieval performance. Experiments confirm that the top layers may exhibit lower generalization ability than the layers before them. For example, for AlexNet pre-trained on ImageNet, it is shown that FC6, FC7, and FC8 are in descending order regarding retrieval accuracy . It is also shown in [134, 10] that the pool5 feature of AlexNet and VGGNet is even superior to FC6 when proper encoding techniques are employed. Second, the source training set affects retrieval accuracy on different target datasets. For example, Azizpour et al.  report that HybridNet yields the best performance on Holidays after PCA. They also observe that AlexNet pre-trained on ImageNet is superior to PlacesNet and HybridNet on the Ukbench dataset , which contains common objects instead of architecture or scenes. So the similarity between the source and target plays a critical role in instance retrieval when using a pre-trained CNN model.
FC descriptors. The most straightforward idea is to extract the descriptor from the fully-connected (FC) layer of the network [7, 8, 135], e.g., the 4,096-dim FC6 or FC7 descriptor in AlexNet. The FC descriptor is generated after layers of convolutions with the input image, has a global receptive field, and thus can be viewed as a global feature. It yields fair retrieval accuracy under Euclidean distance and can be improved with power normalization .
Intermediate local features. Many recent retrieval methods [10, 9, 134] focus on local descriptors in the intermediate layers. In these methods, lower-level convolutional filters (kernels) are used to detect local visual patterns. Viewed as local detectors, these filters have a smaller receptive field and are densely applied on the entire image. Compared with the global FC feature, local detectors are more robust to image transformations such as truncation and occlusion, in ways that are similar to the local invariant detectors (Section 3.2).
Local descriptors are tightly coupled with these intermediate local detectors, i.e., they are the responses of the input image to these convolution operations. In other words, after the convolutions, the resulting activation maps can be viewed as a feature ensemble, which is called the “column feature” in this survey. For example, in AlexNet  there are 96 detectors (convolutional filters) in the first convolutional layer. These filters produce 96 heat maps of size 27×27 (after max pooling). Each pixel in the maps has a receptive field of 19×19 and records the response of the image w.r.t. the corresponding filter [10, 9, 134]. The column feature is therefore of size 96 (Fig. 2) and can be viewed as a description of a certain patch in the original image. Each dimension of this descriptor denotes the level of activation of the corresponding detector and resembles the SIFT descriptor to some extent. The column feature first appears in , where Razavian et al. do max-pooling over regularly partitioned windows on the feature maps and then concatenate them across all filter responses, yielding column-like features. In , column features from multiple layers of the network are concatenated, forming the “hypercolumn” feature.
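The extraction of column features from a layer's activation tensor amounts to a reshape. A minimal sketch, assuming an activation tensor of shape (K, H, W) from any convolutional layer (the function name and the L2 normalization step are illustrative conventions, not prescribed by the survey):

```python
import numpy as np

def column_features(activations):
    """Turn a conv layer's activation tensor into a set of local descriptors.

    activations: (K, H, W) array — K feature maps from one convolutional
    layer. Each spatial position yields one K-dim "column feature"
    describing the image patch covered by its receptive field.
    """
    K, H, W = activations.shape
    cols = activations.reshape(K, H * W).T  # (H*W, K): one descriptor per location
    # L2-normalize each descriptor, as commonly done before encoding.
    norms = np.linalg.norm(cols, axis=1, keepdims=True)
    return cols / np.maximum(norms, 1e-12)
```

The resulting (H·W, K) set of descriptors then plays the role that a set of SIFT descriptors plays in the classic pipeline.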
When column features are extracted, an image is represented by a set of descriptors. To aggregate these descriptors into a global representation, currently two strategies are adopted: encoding and direct pooling (Fig. 2).
Encoding. A set of column features resembles a set of SIFT features, so standard encoding schemes can be directly employed. The most commonly used methods are VLAD  and FV ; a brief review of both can be found in Section 3.3.2. A milestone work is , in which the column features are encoded into VLAD for the first time. This idea was later extended to CNN model fine-tuning . The BoW encoding can also be leveraged, as is the case in , where the column features within each layer are aggregated into a BoW vector which is then concatenated across the layers. An exception to these fixed-length representations is , in which the column features are quantized with a codebook of size 25k and an inverted index is employed for efficiency.
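A simplified, unofficial sketch of VLAD aggregation over such local descriptors follows. In practice the codebook would be learned offline with k-means; here it is passed in as a given, and the signed-square-root (power) normalization is one common convention among several.

```python
import numpy as np

def vlad_encode(descriptors, centroids):
    """Minimal VLAD sketch: aggregate residuals to the nearest centroid.

    descriptors: (N, D) local features (e.g., column features or SIFT);
    centroids: (k, D) small codebook learned offline (e.g., by k-means).
    Returns a k*D-dim vector, power- and L2-normalized.
    """
    # Hard-assign each descriptor to its nearest centroid.
    d2 = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)
    vlad = np.zeros_like(centroids, dtype=float)
    for i, c in enumerate(assign):
        vlad[c] += descriptors[i] - centroids[c]   # accumulate residuals
    vlad = vlad.ravel()
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))   # power (signed sqrt) normalization
    n = np.linalg.norm(vlad)
    return vlad / n if n > 0 else vlad
```

Because the output length is k·D regardless of how many descriptors an image has, VLAD vectors can be compared directly with Euclidean distance or compressed with PCA/PQ.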
Pooling. A major difference between the CNN column feature and SIFT is that the former has an explicit meaning in each dimension, i.e., the response of a particular region of the input image to a filter. Therefore, apart from the encoding schemes mentioned above, direct pooling techniques can produce discriminative features as well.
A milestone work in this direction is the maximum activations of convolutions (MAC) descriptor proposed by Tolias et al. . Without distorting or cropping images, MAC computes a global descriptor with a single forward pass. Specifically, MAC calculates the maximum value of each intermediate feature map and concatenates all these values within a convolutional layer. In its multi-region version, the integral image and an approximate maximum operator are used for fast computation. The regional MAC descriptors are subsequently sum-pooled along with a series of normalization and PCA-whitening operations . We also note that several other works , [133, 134] employ ideas similar to , applying max or average pooling on the intermediate feature maps, with Razavian et al.  being the first. It has been observed that the last convolutional layer (e.g., pool5 in VGGNet), after pooling, usually yields accuracy superior to the FC descriptors and the other convolutional layers .
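The global (single-region) MAC descriptor reduces to a per-channel spatial maximum. A minimal sketch, assuming a (K, H, W) activation tensor; the L2 normalization mirrors common practice, while the regional variant and PCA-whitening are omitted:

```python
import numpy as np

def mac(activations):
    """Sketch of MAC: maximum activations of convolutions.

    activations: (K, H, W) feature maps from the last convolutional layer.
    The global descriptor is the per-channel spatial maximum, L2-normalized.
    """
    v = activations.reshape(activations.shape[0], -1).max(axis=1)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

Taking the maximum rather than the average makes each dimension respond to the strongest occurrence of its visual pattern anywhere in the image, which gives MAC some translation invariance.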
Apart from direct feature pooling, it is also beneficial to assign specific weights to the feature maps within each layer before pooling. In , Babenko et al. inject the prior knowledge that objects tend to be located toward image centers, imposing a 2-D Gaussian mask on the feature maps before sum pooling. Xie et al.  improve the MAC representation by propagating the high-level semantics and spatial context to low-level neurons, improving the descriptive ability of these bottom-layer activations. With a more general weighting strategy, Kalantidis et al.  perform both feature map-wise and channel-wise weighting, which aims to highlight highly active spatial responses while reducing burstiness effects.
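The center-prior weighting idea can be illustrated with a short sketch: each spatial location is down-weighted by its distance from the map center before sum pooling. The Gaussian width (`sigma_frac`) is an assumed parameter, not a value from the cited work.

```python
import numpy as np

def center_prior_pooling(activations, sigma_frac=1.0 / 3):
    """Illustrative sum pooling with a centering Gaussian prior.

    activations: (K, H, W) feature maps. Each spatial location is weighted
    by a 2-D Gaussian centered on the map before per-channel sum pooling,
    encoding the prior that objects tend to appear near image centers.
    """
    K, H, W = activations.shape
    ys = np.arange(H) - (H - 1) / 2.0
    xs = np.arange(W) - (W - 1) / 2.0
    sig_y = max(H * sigma_frac, 1e-6)
    sig_x = max(W * sigma_frac, 1e-6)
    mask = (np.exp(-(ys[:, None] ** 2) / (2 * sig_y ** 2)) *
            np.exp(-(xs[None, :] ** 2) / (2 * sig_x ** 2)))
    v = (activations * mask[None]).sum(axis=(1, 2))  # weighted sum per channel
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

Activations near the image center thus contribute more to the final descriptor than those near the borders.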
Although pre-trained CNN models have achieved impressive retrieval performance, a hot research topic is fine-tuning the CNN model on specific training sets. When a fine-tuned CNN model is employed, the image-level descriptor is usually generated in an end-to-end manner, i.e., the network produces the final visual representation without additional explicit encoding or pooling steps.
The nature of the datasets used in fine-tuning is the key to learning discriminative CNN features. ImageNet  only provides images with class labels, so a pre-trained CNN model is competent at discriminating images of different object/scene classes, but may be less effective at telling apart images that fall in the same class (e.g., architecture) but depict different instances (e.g., “Eiffel Tower” and “Notre-Dame”). Therefore, it is important to fine-tune the CNN model on task-oriented datasets.
| name | # images | # classes | content |
|------|----------|-----------|---------|
| 3D Landmark  | 163,671 | 713 | landmark |
| Tokyo TM  | 112,623 | n.a. | landmark |
| MV RGB-D  | 250,000 | 300 | household object |
The datasets used for fine-tuning in recent years are shown in Table III; buildings and common objects are the focus. The milestone work on fine-tuning is . It collects the Landmarks dataset by a semi-automated approach: automated searching for popular landmarks in the Yandex search engine, followed by a manual estimation of the proportion of relevant images among the top ranks. This dataset contains 672 classes of various architecture, and the fine-tuned network produces superior features on landmark-related datasets such as Oxford5k  and Holidays , but has decreased performance on Ukbench , where common objects are presented. Babenko et al.  have also fine-tuned CNNs on the Multi-view RGB-D dataset , containing turntable views of 300 household objects, in order to improve performance on Ukbench. The Landmarks dataset is later used by Gordo et al.  for fine-tuning, after an automatic cleaning approach based on SIFT matching. In , Radenović et al. employ retrieval and structure-from-motion methods to build 3D landmark models so that images depicting the same architecture can be grouped. Using this labeled dataset, the learned linear discriminative projections (denoted as L in Table V) outperform the previous whitening technique . Another dataset, called Tokyo Time Machine, is collected using Google Street View Time Machine, which provides images depicting the same places over time . While most of the above datasets focus on landmarks, Bell et al.  build a Product dataset consisting of furniture by developing a crowd-sourced pipeline to draw connections between in-situ objects and the corresponding products. It is also feasible to fine-tune on the query sets suggested in , but this method may not be adaptable to new query types.
The CNN architectures used in fine-tuning mainly fall into two types: classification-based networks and verification-based networks. A classification-based network is trained to classify architectures into pre-defined categories. Since there is usually no class overlap between the training set and the query images, the learned embedding, e.g., FC6 or FC7 in AlexNet, is used for Euclidean distance based retrieval. This train/test strategy is employed in , in which the last FC layer is modified to have 672 nodes, corresponding to the number of classes in the Landmarks dataset.
A verification network may use either a siamese network with a pairwise loss or a triplet loss, and has been more widely employed for fine-tuning. A standard siamese network based on AlexNet and the contrastive loss is employed in . In , Radenović et al. propose to replace the FC layers with a MAC layer . Moreover, with the 3D architecture models built in , training pairs can be mined: positive image pairs are selected based on the number of co-observed 3D points (matched SIFT features), while hard negatives are defined as those with small distances in their CNN descriptors. These image pairs are fed into the siamese network, and the contrastive loss is calculated from the normalized MAC features. In a concurrent work to , Gordo et al. 
fine-tune a triplet-loss network and a region proposal network on the Landmarks dataset. The advantage of  lies in its localization ability, which excludes the background from feature learning and extraction. In both works, the fine-tuned models exhibit state-of-the-art accuracy on landmark retrieval datasets including Oxford5k, Paris6k and Holidays, as well as good generalization ability on Ukbench (Table V). In , a VLAD-like layer amenable to training via back-propagation is plugged into the network at the last convolutional layer. Meanwhile, a new triplet loss is designed to make use of the weakly supervised Google Street View Time Machine data.
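The pairwise (contrastive) loss at the heart of siamese fine-tuning can be illustrated in a few lines. This is a didactic sketch, not the training code of any cited work, and the margin value is an assumed hyperparameter.

```python
import numpy as np

def contrastive_loss(f1, f2, y, margin=0.7):
    """Sketch of the pairwise loss used to fine-tune siamese networks.

    f1, f2: L2-normalized descriptors of an image pair (e.g., MAC features);
    y: 1 for a matching pair, 0 for a non-matching one. Matching pairs are
    pulled together; non-matching pairs are pushed beyond `margin`.
    """
    d = np.linalg.norm(f1 - f2)
    return y * d ** 2 + (1 - y) * max(0.0, margin - d) ** 2
```

Mined hard negatives matter here: non-matching pairs that already lie farther apart than the margin contribute zero loss, so only confusing pairs drive the gradient.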
For the hybrid methods, multiple network passes are performed. A number of image patches are generated from an input image, which are fed into the network for feature extraction before an encoding/pooling stage. Since the manner of “detector + descriptor” is similar to SIFT-based methods, we call this method type “hybrid”. It is usually less efficient than the single-pass methods.
In hybrid methods, the feature extraction process consists of patch detection and description steps. For the first step, the literature has seen three major types of region detectors. The first is grid-based image patches: for example, in , a two-scale sliding window strategy is employed to generate patches, and in , the dataset images are first cropped and rotated, then divided into patches of different scales whose union covers the whole image. The second type is invariant keypoint/region detectors: for instance, difference-of-Gaussian feature points are used in , and the MSER region detector is leveraged in . Third, region proposals provide useful information on the locations of potential objects: Mopuri et al.  employ selective search  to generate image patches, while EdgeBox  is used in . In , the region proposal network (RPN)  is applied to locate the potential objects in an image.
The use of CNNs as region descriptors is validated in , which shows that CNN features are superior to SIFT in image matching except on blurred images. Given the image patches, hybrid CNN methods usually employ the FC or pooled intermediate CNN features. Examples using the FC descriptors include [22, 7, 152, 147]; in these works, the 4,096-dim FC features are extracted from multi-scale image regions [22, 7, 152] or object proposals . On the other hand, Razavian et al.  use the intermediate descriptors after max-pooling as region descriptors.
The above methods use pre-trained models for patch feature extraction. Based on hand-crafted detectors, patch descriptors can also be learned through CNNs in either a supervised  or an unsupervised  manner, improving over previous works on SIFT descriptor learning , . Yi et al.  further propose an end-to-end learning method integrating the region detector, orientation estimator and feature descriptor in a single pipeline.
The encoding/indexing procedure of hybrid methods resembles SIFT-based retrieval, e.g., VLAD/FV encoding under a small codebook or the inverted index under a large codebook.
VLAD/FV encoding, as in [22, 147], follows the standard practice used for SIFT features [15, 14], so we do not detail it here. On the other hand, several works exploit the inverted index on the patch-based CNN features [155, 156, 139]; again, standard techniques from SIFT-based methods such as HE are employed . Apart from the above-mentioned strategies, we note that several works [7, 133, 152] extract several region descriptors per image to perform many-to-many matching, called “spatial search” . This method improves the translation and scale invariance of the retrieval system but may encounter efficiency problems. A reverse strategy to applying encoding on top of CNN activations is to build a CNN structure (mainly consisting of FC layers) on top of SIFT-based representations such as FV. By training a classification model on natural images, the intermediate FC layer can be used for retrieval .
In this survey, we categorize the current literature into six fine-grained classes. The differences among the six categories and some representative works are summarized in Table I and Table V. Our observations are given below.
First, the hybrid method can be viewed as a transition zone from SIFT-based to CNN-based methods. It resembles the SIFT-based methods in all aspects except that it extracts CNN features as the local descriptor. Since the network is accessed multiple times during patch feature extraction, the efficiency of the feature extraction step may be compromised.
Second, the single-pass CNN methods tend to combine the individual steps in the SIFT-based and hybrid methods. In Table V, the “pre-trained single-pass” category integrates the feature detection and description steps; in the “fine-tuned single-pass” methods, the image-level descriptor is usually extracted in an end-to-end mode, so that no separate encoding process is needed. In , a “PCA” layer is integrated for discriminative dimension reduction, making a further step towards end-to-end feature learning.
Third, fixed-length representations are gaining popularity due to efficiency considerations. They can be obtained by aggregating local descriptors (SIFT or CNN) , [18, 22], , by direct pooling , , or by end-to-end feature computation [8, 17]. Usually, dimension reduction methods such as PCA can be employed on top of the fixed-length representations, and ANN search methods such as PQ  or hashing  can be used for fast retrieval.
Hashing is a major solution to the approximate nearest neighbor problem. It can be categorized into locality sensitive hashing (LSH)  and learning to hash. LSH is data-independent and is usually outperformed by learning to hash, a data-dependent hashing approach. For learning to hash, a recent survey  categorizes it into quantization and pairwise similarity preserving. The quantization methods are briefly discussed in Section 3.3.2. For the pairwise similarity preserving methods, some popular hand-crafted methods include Spectral hashing , LDA hashing , etc.
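As a concrete example of the data-independent end of this spectrum, random-hyperplane LSH for cosine similarity can be sketched in a few lines; the function name and parameter defaults are illustrative.

```python
import numpy as np

def lsh_hash(X, n_bits=32, seed=0):
    """Sketch of random-hyperplane LSH for cosine similarity.

    Each bit is the sign of the projection onto a random hyperplane;
    similar vectors (small angle) agree on most bits, so Hamming distance
    on the codes approximates angular distance.
    """
    rng = np.random.default_rng(seed)
    D = X.shape[1]
    planes = rng.standard_normal((D, n_bits))  # data-independent projections
    return (X @ planes > 0).astype(np.uint8)   # (N, n_bits) binary codes
```

Because the projections ignore the data distribution, learning-to-hash methods that fit the projections (or codebooks) to the data typically reach the same accuracy with far fewer bits.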
Recently, hashing has seen a major shift from hand-crafted methods to supervised hashing with deep neural networks. These methods take the original image as input and produce a learned feature before binarization [159, 160]. Most of these methods, however, focus on class-level image retrieval, a task different from the instance retrieval discussed in this survey. For instance retrieval, when adequate training data can be collected, as for architecture and pedestrians, deep hashing methods may be of critical importance.
Five popular instance retrieval datasets are used in this survey. Statistics of these datasets can be accessed in Table IV.
|SIFT-based||Large Voc.||HKM ||MSER||SIFT||hier. soft, BoW||inv. index||1M||59.7||2.85||44.3||26.6||46.5||9.8kb|
|AKM ||hes-aff||SIFT||hard, BoW||inv. index||1M||64.1||3.02||49.3||34.3||50.2||9.8kb|
|Fine Voc. ||hes-aff||SIFT||alt. word, BoW||inv. index||16M||-/74.9 (75.8)||-||74.2 (84.9)||67.4 (79.5)||74.9 (82.4)||9.8kb|
|Three Things ||hes-aff||rootSIFT||hard, BoW||inv. index||1M||-||-||80.9*||72.2*||76.5*||22.0kb|
|Co-index ||DoG||SIFT, CNN||hard, BoW||co-index||1M||80.86||3.60||68.72||-||-||21.6kb|
|Mid Voc.||HE+WGC [13, 85]||hes-aff||SIFT||hard, BoW, HE||inv. index||20k||81.3 (84.2)||3.42 (3.55)||61.5 (74.7)||51.6 (68.7)||-||36.8kb|
|Burst ||hes-aff||SIFT||burst, BoW, HE||inv. index||20k||83.9 (84.8)||3.54 (3.64)||64.7 (68.5)||58 (63)||62.8||36.8kb|
|Q.ada ||hes-aff||rootSIFT||MA, BoW, HE||inv. index||20k||78.0||-||82.1||72.8||73.6||36.8kb|
|ASMK ||hes-aff||rootSIFT||MA, BoW, agg.HE||inv. index||65k||81.0||-||80.4||75.0 (85.0)||77.0||43.2kb|
|c-MI ||hes-aff||rootSIFT, HS||MA, BoW, HE||2D index||20k×200||84.0||3.71||58.2||35.2||55.1||45.2kb|
|Small Voc.||VLAD ||hes-aff||SIFT||VLAD||PCA, PQ||64||4,096||55.6||3.18||37.8||27.2||38.6||16kb|
|FV [47, 52]||hes-aff||PCA-SIFT||FV, pw.||LSH, SH||64||4,096||59.5||3.35||41.8||33.1||43.0||16kb|
|All A. VLAD ||hes-aff||SIFT||Improved VLAD||PCA||64||128||62.5||-||44.8||-||-||0.5kb|
|NE ||hes-aff||rootSIFT||NE, multi.voc, VLAD||PCA||4×256||128||61.4||3.36||-||-||-||0.5kb|
|Triangulation ||hes-aff||rootSIFT||triang+democ||PCA, pw.||16||128||61.7||3.40||43.3||35.3||-||0.5kb|
|CNN-based||Hybrid||Off the Shelf ||den. patch||OFeat 1st FC||-||-||-||-||84.3||3.64||-/68.0||-||-/79.5||4-15kb|
|MSS ||den. patch||vgg conv5||MP||-||-||-||88.1||3.72||-/84.4||-||-/85.3||16kb|
|CKN ||hes-aff||CKN||VLAD, power||-||256||65k||79.3||3.76||56.5||-||-||256kb|
|MOP ||den. patch||alex FC7||multi-scale VLAD||PCA||100||2,048||80.2||-||-||-||-||8kb|
|OLDFP||sel. search||alex FC7||MP||ITQ ||-||512||88.5||3.81||60.7||-||66.2||2kb|
|Pre-trained single-pass||BLCF ||vgg16, conv5||hard, BoW||inv. index||25k||-||-||73.9 (78.8)||59.3 (65.1)||82.0 (84.8)||0.67kb|
|R-MAC ||vgg16, conv5||region MP, SP||PCA||-||512||85.2/86.9||-||66.9 (77.3)||61.6 (73.2)||83.0 (86.5)||1kb|
|CroW ||vgg16, conv5||cross-dim pool.||PCA||-||512||-/85.1||-||70.8 (74.9)||65.3 (70.6)||79.7 (84.8)||2kb|
|SPoC ||vgg16, conv5||SP, center prior||PCA||-||256||-/80.2||3.65||53.1/58.9||50.1/57.8||-||1kb|
|VLAD-CNN ||google, various incept.||intra-norm, VLAD||PCA||100||128||83.6||-||-/55.8||-||-/58.3||0.5kb|
|Fine-tuned single-pass||Faster R-CNN ||vgg16-reg-query, conv5||SP or MP||-||-||512||-||-||71.0||-||79.8||2kb|
|Neural Codes ||alexnet-classification loss-Landmark, FC6||PCA||-||128||-/78.9||3.29||-/55.7||-/52.3||-||0.5kb|
|NetVLAD ||vgg16-trip. loss-Tokyo TM, VLAD layer||PCA||64||512||81.7/86.1||-||65.6/67.6||-||73.4/74.9||8kb|
|SiaMAC ||vgg16-pair loss-3D Landmark, MAC layer||L||-||512||-/82.5||3.61||77.0 (82.9)||69.2 (77.9)||83.8 (85.6)||2kb|
|Deep Retrieval||vgg16-trip. loss-cleaned Landmark, PCA layer used||-||512||86.7/89.1||3.62||83.1 (89.1)||78.6 (87.3)||87.1 (91.2)||2kb|
|[17, 162]||ResNet101-trip. loss-cleaned Landmark, PCA layer used||-||2,048||90.3/94.8||3.84||86.1 (90.6)||82.8 (89.4)||94.5 (96.0)||8kb|
Holidays  is collected by Jégou et al. from personal holiday albums, so most of the images are of various scene types. The database has 1,491 images composed of 500 groups of similar images. Each image group has 1 query, totaling 500 query images. Most SIFT-based methods employ the original images, except [71, 32] which manually rotate the images into upright orientations. Many recent CNN-based methods , ,  also use the rotated version of Holidays. In Table V, results of both versions of Holidays are shown (separated by “/”). Rotating the images usually brings 2-3% mAP improvement.
Ukbench  consists of 10,200 images of various content, such as objects, scenes, and CD covers. All the images are divided into 2,550 groups. Each group has four images depicting the same object/scene, under various angles, illuminations, translations, etc. Each image in this dataset is taken as the query in turn, so there are 10,200 queries.
Oxford5k  is collected by crawling images from Flickr using the names of 11 different landmarks in Oxford. A total of 5,062 images form the image database. The dataset defines five queries for each landmark by hand-drawn bounding boxes, so 55 query Regions of Interest (ROIs) exist in total. Each database image is assigned one of four labels: good, OK, junk, or bad. The first two labels denote true matches to the query ROIs, while “bad” denotes distractors. In junk images, less than 25% of the object is visible, or it undergoes severe occlusion or distortion; these images are ignored during evaluation, so they have zero impact on retrieval accuracy.
Flickr100k contains 99,782 high-resolution images crawled from Flickr’s 145 most popular tags. In the literature, this dataset is typically added to Oxford5k to test the scalability of retrieval algorithms.
Paris6k consists of 6,412 images crawled from Flickr using queries for 11 specific Paris landmarks. Each landmark has five queries, so there are again 55 queries with bounding boxes. The database images are annotated with the same four labels as Oxford5k. Two major evaluation protocols exist for Oxford5k and Paris6k. For SIFT-based methods, the cropped regions are usually used as queries. For CNN-based methods, some employ the full-sized query images [8, 137]; others follow the standard cropping protocol, either by cropping the ROI and feeding it into the CNN, or by extracting CNN features from the full image and selecting those falling within the ROI. Using the full image may lead to mAP improvement. Both protocols are used in Table V.
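As an illustration of the second cropping protocol, the sketch below keeps only the convolutional column features whose cell centers fall inside the query ROI. It is a minimal NumPy sketch under simplifying assumptions (uniform stride, cell centers mapped linearly back to image coordinates); the function name and shapes are illustrative rather than taken from any surveyed implementation.

```python
import numpy as np

def crop_column_features(fmap, roi, image_size):
    """Keep convolutional column features whose cell centers fall
    inside the query ROI (second cropping protocol).
    fmap: (C, H, W) activation map; roi: (x0, y0, x1, y1) in pixels;
    image_size: (width, height) of the input image."""
    C, H, W = fmap.shape
    img_w, img_h = image_size
    x0, y0, x1, y1 = roi
    # map each feature-map cell center back to image coordinates,
    # assuming a uniform stride over the image
    xs = (np.arange(W) + 0.5) * img_w / W
    ys = (np.arange(H) + 0.5) * img_h / H
    keep_x = (xs >= x0) & (xs <= x1)
    keep_y = (ys >= y0) & (ys <= y1)
    return fmap[:, keep_y][:, :, keep_x]
```

The retained columns can then be pooled (e.g., max or sum) into the query descriptor, exactly as for a full image.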
Precision-recall. Recall denotes the ratio of returned true matches to the total number of true matches in the database, while precision refers to the fraction of true matches among the returned images. Given the top-k returned images, assuming n true matches appear among them and a total of N true matches exist in the whole database, recall and precision are calculated as r = n/N and p = n/k, respectively. In image retrieval, given a query image and its rank list, a precision-recall curve can be drawn through the (recall, precision) points obtained at every rank k = 1, ..., M, where M is the number of images in the database.
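The definitions above can be sketched in a few lines of NumPy; the function name and the toy rank list are illustrative.

```python
import numpy as np

def precision_recall_points(rank_list, relevant):
    """Return the (recall, precision) points of a ranked result list.
    rank_list: database ids ordered by similarity to the query.
    relevant:  set of ids that are true matches.
    At rank k, with n_k true matches among the top-k results and
    N true matches overall, r_k = n_k / N and p_k = n_k / k."""
    hits = np.array([1 if i in relevant else 0 for i in rank_list])
    n_k = np.cumsum(hits)                  # true matches within top-k
    k = np.arange(1, len(rank_list) + 1)   # number of returned images
    return n_k / len(relevant), n_k / k

# toy rank list over a 5-image database with 2 true matches
recall, precision = precision_recall_points([3, 7, 1, 9, 4], {3, 9})
```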
Average precision and mean average precision. To more clearly record the retrieval performance, average precision (AP) is used, which amounts to the area under the precision-recall curve. Typically, a larger AP means a higher precision-recall curve and thus better retrieval performance. Since retrieval datasets typically have multiple query images, their respective APs are averaged to produce a final performance evaluation, i.e., the mean average precision (mAP). Conventionally, we use mAP to evaluate retrieval accuracy on the Oxford5k, Paris6k, and Holidays datasets.
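The AP computation can be sketched as follows, using the common discrete form that averages precision@k over the ranks holding a true match; note that benchmark scripts (e.g., the official Oxford5k evaluation) may use slightly different interpolation.

```python
def average_precision(rank_list, relevant):
    """AP in the common discrete form: average of precision@k over the
    ranks k that hold a true match."""
    hits, ap = 0, 0.0
    for k, item in enumerate(rank_list, start=1):
        if item in relevant:
            hits += 1
            ap += hits / k          # precision at rank k
    return ap / len(relevant)

def mean_average_precision(rank_lists, relevant_sets):
    """mAP: mean of the per-query APs."""
    aps = [average_precision(r, s) for r, s in zip(rank_lists, relevant_sets)]
    return sum(aps) / len(aps)
```

A perfect rank list yields AP = 1.0, and lower-ranked true matches pull AP down, matching the area-under-the-curve intuition.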
N-S Score. The N-S score is specifically used on the Ukbench dataset and is named after David Nistér and Henrik Stewénius . It is equivalent to precision or recall because every query in Ukbench has four true matches in the database. The N-S score is calculated as the average number of true matches in the top-4 ranks across all the rank lists.
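A minimal sketch of the N-S score computation, assuming each query's rank list and its ground-truth group of four images are given (names are illustrative):

```python
def ns_score(rank_lists, groups):
    """N-S score: for each query, count its true matches (images of the
    same four-image group, the query itself included) within the top-4
    ranks, then average the counts over all queries."""
    top4_hits = [sum(1 for i in ranks[:4] if i in group)
                 for ranks, group in zip(rank_lists, groups)]
    return sum(top4_hits) / len(top4_hits)
```

The score therefore lies in [0, 4], with 4 meaning every query retrieves its whole group in the top-4 ranks.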
We present the improvement in retrieval accuracy over the past ten years in Fig. 6 and the numbers of some representative methods in Table V. The results are computed using codebooks trained on independent datasets. We can clearly observe that the field of instance retrieval has been constantly improving. The baseline approach (HKM) proposed over ten years ago only yields a retrieval accuracy of 59.7%, 2.85, 44.3%, 26.6%, and 46.5% on Holidays, Ukbench, Oxford5k, Oxford5k+Flickr100k, and Paris6k, respectively. Starting from the baseline approaches [12, 11], methods using large codebooks improve steadily as more discriminative codebooks, spatial constraints [21, 82], and complementary descriptors [72, 163] are introduced. For medium-sized codebooks, the most significant accuracy advance was witnessed in the years 2008-2010 with the introduction of Hamming Embedding [13, 85] and its improvements [85, 76, 90]. Since then, major improvements have come from feature fusion with color and CNN features, especially on the Holidays and Ukbench datasets.
|SIFT large voc.||fair||high||fair||fair||high||fair|
|SIFT mid voc.||fair||low||low||fair||high||high|
|SIFT small voc.||fair||high||high||high||low||low|
On the other hand, CNN-based retrieval models have quickly demonstrated their strengths in instance retrieval. In 2012, when AlexNet was introduced, the performance of off-the-shelf FC features was still far from satisfactory compared with the SIFT models of the same period. For example, the FC descriptor of AlexNet pre-trained on ImageNet yields 64.2% mAP, a 3.42 N-S score, and 43.3% mAP on the Holidays, Ukbench, and Oxford5k datasets, respectively. These numbers are lower than contemporary SIFT-based methods by 13.85% and 0.14 on Holidays and Ukbench, respectively, and by 31.9% on Oxford5k. However, with advances in CNN architectures and fine-tuning strategies, the performance of CNN-based methods has improved fast, being competitive on the Holidays and Ukbench datasets [17, 164], and slightly lower on Oxford5k but with much smaller memory cost.
First, among the SIFT-based methods, those with medium-sized codebooks [13, 31] usually lead to superior (or competitive) performance, while those based on small codebooks (compact representations) [15, 18, 56] exhibit inferior accuracy. On the one hand, the visual words in medium-sized codebooks yield relatively high matching recall due to the large Voronoi cells, and the further integration of HE methods largely improves the discriminative ability, achieving a desirable trade-off between matching recall and precision. On the other hand, although the visual words in small codebooks have the highest matching recall, their discriminative ability is not significantly improved due to the aggregation procedure and the small dimensionality, so their performance can be compromised.
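To make the recall/precision trade-off of HE concrete, the sketch below binarizes projected descriptors and accepts a match only when two features share a visual word and their signatures lie within a Hamming threshold. It is deliberately simplified: real HE binarizes projected residuals against per-cell learned medians, whereas here the thresholds are zeros and the inputs are raw descriptors; all names and the threshold value are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def he_signature(desc, projection, thresholds):
    """Binarize a projected local descriptor into a 64-bit signature."""
    return (projection @ desc > thresholds).astype(np.uint8)

def he_match(sig_q, sig_db, word_q, word_db, tau=22):
    """Accept a match only if both features fall into the same Voronoi
    cell AND their signatures are within Hamming distance tau."""
    if word_q != word_db:
        return False
    return int(np.sum(sig_q != sig_db)) <= tau

# toy setup: 64-bit signatures from 128-D descriptors, zero thresholds
P = rng.standard_normal((64, 128))
t = np.zeros(64)
d1 = rng.standard_normal(128)
d2 = d1 + 0.05 * rng.standard_normal(128)   # near-duplicate descriptor
s1, s2 = he_signature(d1, P, t), he_signature(d2, P, t)
```

A near-duplicate descriptor flips only a few of the 64 bits and passes the threshold, while features quantized to different visual words are rejected outright.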
Second, among the CNN-based categories, the fine-tuned category [8, 17, 24] is advantageous in specific tasks (such as landmark/scene retrieval) whose data distribution is similar to that of the training set. While this observation is within expectation, we find it interesting that one fine-tuned model yields very competitive performance on generic retrieval (such as Ukbench), whose data distribution is distinct from that of the training set. In fact, Babenko et al. show that CNN features fine-tuned on Landmarks compromise accuracy on Ukbench. This generalization ability could be attributed to the effective training of the region proposal network. In comparison, pre-trained models may exhibit high accuracy on Ukbench but only yield moderate performance on landmarks. Similarly, the hybrid methods have fair performance on all the tasks, though they may still encounter efficiency problems [152, 7].
Third, comparing all the six categories, the “CNN fine-tuned” and “SIFT mid voc.” categories have the best overall accuracy, while the “SIFT small voc.” category has a relatively low accuracy.
Feature computation time. For the SIFT-based methods, the dominating step is local feature extraction. It usually takes 1-2 s on a CPU to extract Hessian-Affine region based SIFT descriptors for a 640×480 image, depending on the complexity (texture) of the image. For the CNN-based methods, a single forward pass of a 224×224 or 1024×768 image through VGG16 on a TitanX card takes 0.082 s and 0.347 s, respectively. It is reported that four images (with the largest side of 724 pixels) can be processed in one second. The encoding (VLAD or FV) of the pre-trained column features is very fast. For the CNN hybrid methods, extracting CNN features from tens of regions may take seconds. Overall, the CNN pre-trained and fine-tuned models are efficient in feature computation when using GPUs. Yet it should be noted that high efficiency can also be achieved when using GPUs for SIFT extraction.
Retrieval time. The efficiency of nearest neighbor search is high for “SIFT large voc.”, “SIFT small voc.”, “CNN pre-trained” and “CNN fine-tuned”, because the inverted lists are short for a properly trained large codebook, and because the latter three produce compact representations that can be accelerated by ANN search methods like PQ. Efficiency for the medium-sized codebook is low because each inverted list contains more postings than under a large codebook, and the filtering effect of HE methods can only alleviate this problem to some extent. The retrieval complexity of the hybrid methods, as mentioned in Section 4.3, may suffer from the expensive many-to-many matching strategy [7, 133, 152].
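The PQ-based acceleration mentioned above can be sketched as follows: database vectors are stored as short codes, and asymmetric distance computation (ADC) approximates query-database distances via per-subquantizer lookup tables. The codebooks here are random stand-ins rather than learned by k-means, so this is only a structural sketch, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Product quantization (PQ) sketch: split a D-dim vector into m
# subvectors and quantize each with its own small codebook.
D, m, k = 32, 4, 16              # dimension, subquantizers, centroids each
sub = D // m
codebooks = rng.standard_normal((m, k, sub))   # random stand-in for k-means

def pq_encode(x):
    """Store a vector as m small centroid indices (m bytes here)."""
    codes = np.empty(m, dtype=np.uint8)
    for i in range(m):
        d = np.linalg.norm(codebooks[i] - x[i*sub:(i+1)*sub], axis=1)
        codes[i] = np.argmin(d)
    return codes

def adc_search(q, db_codes):
    """Rank database codes by approximate (ADC) distance to the query."""
    # per-subquantizer lookup tables of squared distances, built once
    tables = np.stack([((codebooks[i] - q[i*sub:(i+1)*sub]) ** 2).sum(axis=1)
                       for i in range(m)])
    # approximate distance = sum of table entries selected by the codes
    dists = tables[np.arange(m), db_codes].sum(axis=1)
    return np.argsort(dists)

q = rng.standard_normal(D)
db = rng.standard_normal((20, D))
db[0] = q                         # plant an exact match at index 0
codes = np.array([pq_encode(x) for x in db])
```

Each database vector occupies only m bytes instead of D floats, which is precisely the memory advantage of the compact representations discussed here.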
Training time. Training a large or medium-sized codebook usually takes several hours with AKM or HKM. Using small codebooks reduces the codebook training time. For the fine-tuned models, Gordo et al. report spending five days on a K40 GPU for the triplet-loss model. It may take less time for the siamese or classification models, but should still take much longer than SIFT codebook generation. Therefore, in terms of training, the methods using direct pooling [10, 134] or small codebooks are more time-efficient.
Memory cost. Table V and Fig. 8 show that the SIFT methods with large codebooks and the compact representations are both memory-efficient. Moreover, the compact representations can be compressed into short codes using PQ or other competing quantization/hashing methods, so their memory consumption can be further reduced. In comparison, the methods using medium-sized codebooks are the most memory-consuming because binary signatures must be stored in the inverted index. The hybrid methods have mixed memory costs: the many-to-many strategy requires storing a number of region descriptors per image [7, 152], while some others employ efficient encoding methods [22, 147].
Spatial verification and query expansion. Spatial verification, which provides refined rank lists, is often used in conjunction with QE. The classic RANSAC verification has a complexity of O(N^2), where N is the number of matched features, so this method is computationally expensive. The ADV approach is less expensive due to its ability to avoid unrelated Hough votes. The most efficient methods are [112, 115], which have a complexity of O(N) and can further output the transformation and inliers for QE.
From the perspective of query expansion, since new queries are issued, search efficiency is compromised. For example, AQE almost doubles the search time due to the new query. For recursive AQE and scale-band recursive QE, the search time is much longer because several new searches are conducted. For other QE variants, the proposed improvements only add marginal cost compared to performing another search, so their complexity is similar to that of basic QE.
We summarize the impact of codebook size on SIFT methods using large/medium-sized codebooks, and the impact of dimensionality on compact representations including SIFT small codebooks and CNN-based methods.
Codebook size. The mAP results on Oxford5k are drawn in Fig. 9, where methods using large/medium-sized codebooks are compared. Two observations can be made. First, mAP usually increases with the codebook size but may reach saturation when the codebook is large enough. This is because a larger codebook improves the matching precision, but if it is too large, matching recall drops, leading to saturated or even compromised performance. Second, methods using medium-sized codebooks have more stable performance as the codebook size changes. This can be attributed to HE, which contributes more for a smaller codebook, compensating for the lower baseline performance.
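For reference, quantizing local descriptors against a codebook and accumulating a bag-of-words histogram can be sketched as below; the codebook size directly sets the Voronoi granularity discussed above (the function name and toy 1-D data are illustrative).

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and
    accumulate a bag-of-words histogram; the codebook size sets both
    the histogram length and the Voronoi cell granularity."""
    # squared distances between every descriptor and every visual word
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    return np.bincount(words, minlength=len(codebook))
```

Growing the codebook splits cells and sharpens matching precision, while shrinking it merges cells and raises matching recall, which is the trade-off behind the saturation observed in Fig. 9.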
Dimensionality. The impact of dimensionality on compact vectors is presented in Fig. 7. Our first finding is that retrieval accuracy usually remains stable at larger dimensions and drops quickly when the dimensionality falls below 256 or 128. Our second finding favors the methods based on region proposals: these methods demonstrate very competitive performance under various feature lengths, probably due to their superior ability in object localization.
We provide a brief discussion on when to use CNN features over SIFT and vice versa, building on the comparisons above. On the one hand, CNN-based methods with fixed-length representations have advantages on nearly all the benchmark datasets. Specifically, in two cases CNN-based methods deserve higher priority. First, for specific object retrieval (e.g., buildings, pedestrians), when sufficient training data is available, the embedding-learning ability of CNNs can be fully utilized. Second, for common object retrieval or class retrieval, the pre-trained CNN models are competitive.
On the other hand, despite the usual advantages of CNN-based methods, we envision that the SIFT feature still has merits in some cases. For example, when the query or some target images are gray-scale, CNN may be less effective than SIFT, because SIFT is computed on gray-scale images without resorting to color information. A similar situation arises when objects undergo intense color changes. In another example, for small object retrieval, or when the queried object undergoes severe occlusion, local features like SIFT are favored. In applications like book/CD cover retrieval, we can also expect good performance from SIFT due to the rich textures.
A critical direction is to make the search engine applicable to generic search purposes. Towards this goal, two important issues should be addressed. First, large-scale instance-level datasets should be introduced. While several instance datasets have been released, as shown in Table III, they usually contain a particular type of instance, such as landmarks or indoor objects. Although the RPN structure used by Gordo et al. has proven competitive on Ukbench in addition to the building datasets, it remains unknown whether training CNNs on more generic datasets will bring further improvement. Therefore, the community is in great need of large-scale instance-level datasets, or of efficient methods for generating such datasets in either a supervised or unsupervised manner.
Second, designing new CNN architectures and learning methods is important for fully exploiting the training data. Previous works employ standard classification, pairwise-loss, or triplet-loss CNN models for fine-tuning. The introduction of Faster R-CNN to instance retrieval is a promising starting point towards more accurate object localization. Moreover, transfer learning methods are also important when adapting a fine-tuned model to another retrieval task.
At the other end, there is also increasing interest in specialized instance retrieval. Examples include place retrieval, pedestrian retrieval, vehicle retrieval, logo retrieval, etc. Images in these tasks carry specific prior knowledge that can be exploited. For example, in pedestrian retrieval, a recurrent neural network (RNN) can be employed to pool body-part or patch descriptors. In vehicle retrieval, the view information can be inferred during feature learning, and the license plate can provide critical information when captured at a short distance.
Meanwhile, the process of training data collection can be further explored. For example, training images of different places can be collected via Google Street View, and vehicle images can be acquired from either surveillance videos or internet images. Exploring new learning strategies on these specialized datasets and studying the transfer effect would be interesting. Finally, compact vectors and short codes will also be important in realistic retrieval settings.
This survey reviews instance retrieval approaches based on SIFT and CNN features. According to the codebook size, we classify the SIFT-based methods into three classes: using large, medium-sized, and small codebooks. According to the feature extraction process, the CNN-based methods are likewise categorized into three classes: using pre-trained models, using fine-tuned models, and hybrid methods. A comprehensive review of previous approaches is conducted under each of the defined categories. The category evolution suggests that the hybrid methods occupy a transition position between the SIFT- and CNN-based methods, that compact representations are gaining popularity, and that instance retrieval is moving towards end-to-end feature learning and extraction.
Through the collected experimental results on several benchmark datasets, comparisons are made between the six method categories. Our findings favor the usage of CNN fine-tuning strategy, which yields competitive accuracy on various retrieval tasks and has advantages in efficiency. Future research may focus on learning more generic feature representations or more specialized retrieval tasks.
The authors would like to thank the pioneer researchers in image retrieval and other related fields. This work was partially supported by the Google Faculty Award and the Data to Decisions Cooperative Research Centre. This work was supported in part to Dr. Qi Tian by ARO grant W911NF-15-1-0290, Faculty Research Gift Awards by NEC Laboratories of America and Blippar, and National Science Foundation of China (NSFC) 61429201.
L. Zheng, S. Wang, Z. Liu, and Q. Tian, “Packing and padding: Coupled multi-index for accurate image retrieval,” in CVPR, 2014.
M. Muja and D. G. Lowe, “Scalable nearest neighbor algorithms for high dimensional data,” TPAMI, vol. 36, no. 11, pp. 2227–2240, 2014.
P. Indyk and R. Motwani, “Approximate nearest neighbors: towards removing the curse of dimensionality,” in Proceedings of the Annual ACM Symposium on Theory of Computing, 1998.
K. Van De Sande, T. Gevers, and C. Snoek, “Evaluating color descriptors for object and scene recognition,” TPAMI, vol. 32, no. 9, pp. 1582–1596, 2010.
B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva, “Learning deep features for scene recognition using places database,” in NIPS, 2014.