Implicit Sparse Code Hashing

12/01/2015 ∙ by Tsung-Yu Lin, et al.

We address the problem of converting large-scale, high-dimensional image data into binary codes so that approximate nearest-neighbor search over them can be performed efficiently. Unlike most existing unsupervised approaches for yielding binary codes, our method is based on a dimensionality-reduction criterion whose resulting mapping is designed to preserve the image relationships entailed by the inner products of sparse codes, rather than those implied by the Euclidean distances in the ambient space. While the proposed formulation does not require computing any sparse codes, the underlying computation model still inevitably involves solving an unmanageable eigenproblem when extremely high-dimensional descriptors are used. To overcome this difficulty, we employ the column-sampling technique and assume a special form of rotation matrix to facilitate subproblem decomposition. We test our method on several challenging image datasets and demonstrate its effectiveness by comparing it with state-of-the-art binary coding techniques.


1 Introduction

How to efficiently perform similarity search over a large-scale image database is an important research topic with rich applications in computer vision. Recent advances have notably led to two promising developments: representing each image via a high-dimensional descriptor to improve retrieval accuracy, and encoding it with a binary code to speed up the task of finding relevant images. While such performance gains naturally call for efforts to connect the two advancements and achieve both accuracy and efficiency, major obstacles concerning computation cost must be overcome in formulating a unified approach. Specifically, the large scale of an underlying image database and the high dimensions of the adopted features unavoidably induce enormous data matrices. These, in turn, cause various numerically challenging issues in computing the binary codes, which can either exceed the capacity of available computing resources or require time-consuming training and testing processes. Our goal is to establish an effective framework for yielding binary codes so that similarity search over large-scale high-dimensional image data can be conveniently carried out.

We consider unsupervised binary coding to reflect the fact that, in practical applications involving large-scale image data, label information is often not completely available. However, the lack of label ground truth imposes extra difficulty on evaluating retrieval accuracy. To resolve this matter, previous techniques use a nominal threshold based on the average Euclidean distance to the $k$th-nearest neighbor of each image to form the so-called Euclidean ground truth [25]. While this strategy is handy for providing a universal criterion to quantify retrieval results, it nevertheless raises new concerns. First, when comparing, say, two binary coding schemes, it is questionable whether the one yielding better retrieval performance on the Euclidean ground truth would retain the same advantage on the label ground truth (when available). Second, adopting the Euclidean ground truth implicitly assumes that similarity search based on any resulting binary coding method would not outperform using the $\ell_2$-norm in the original input space. We instead evaluate performance according to the label ground truth. This choice enables less biased comparisons and better generalization. Still, we emphasize that the label ground truth is not needed for the proposed formulation, but solely for achieving more reliable evaluations of the experimental results.
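To make this convention concrete, the following small numpy sketch (the function name and interface are ours, not from the paper or [25]) computes Euclidean ground truth from such a nominal threshold:

```python
import numpy as np

def euclidean_ground_truth(X, queries, k=50):
    """Euclidean ground truth in the spirit of [25]: a database item is deemed
    relevant to a query if it lies within a nominal threshold, taken as the
    average distance to the k-th nearest neighbor over the database."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                  # exclude self-matches
    kth = np.sort(np.sqrt(d2), axis=1)[:, k - 1]  # distance to each item's k-th NN
    threshold = kth.mean()
    qd = np.sqrt(((queries[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return qd <= threshold                        # boolean relevance matrix
```

Note that this brute-force version materializes all pairwise distances and is only meant to illustrate the protocol, not to scale.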

Exploring sparse codes has been shown to be advantageous in many relevant studies. Gkioulekas and Zickler [9] have introduced an unsupervised dimensionality-reduction framework that preserves the pairwise inner products of sparse codes. Their approach can improve the recovery of meaningful image relationships for different classes of signals, including facial images, texture patches, and images of object categories. Timofte and Van Gool [28] show that defining the affinity matrix based on sparse codes enhances the effectiveness of the resulting embedding. The method in [5] exploits sparse codes to improve nearest-neighbor search, and focuses on learning the dictionary and sparse codes robustly. The derived sparse codes are then mapped to hash codes, which are used for indexing into a hash table for similarity search. Inspired by these improvements, we establish our approach to binary coding by investigating the image structure implied by sparse codes. However, owing to the technique in [9], our method distinctly does not involve an explicit computation of sparse codes and is thus favorable for dealing with large-scale data.

2 Related work

Choosing a proper feature representation to describe an image is crucial to the success of most computer vision techniques. This is especially true when dealing with large-scale problems. Most of the existing high-dimensional descriptors are designed to encode as many aspects of image statistics as possible. Take, for example, the Fisher Vector (FV) representation [23, 24]. It is established by assuming that local features are generated from a Gaussian Mixture Model (GMM), which can be learned by maximum likelihood estimation. By differentiating the log-likelihood of local features with respect to one type of the GMM parameters, one can derive a specific form of FV. The often-used Bag-of-Words (BoW) model can be thought of as a simple case of FV. In [26], Sánchez and Perronnin demonstrate that FV is effective for large-scale classification problems and report significant improvements over the then state-of-the-art on ILSVRC 2010 [2]. The application of FV has also been expanded to deal with object detection [6] as well as action and event recognition [22]. Another development closely related to FV is the Vector of Locally Aggregated Descriptors (VLAD), independently proposed by Jégou et al. [16]. The way VLAD models the distribution of local image features is similar to the Fisher kernel representation [23]; VLAD is thus a non-probabilistic simplification of FV. Arandjelović and Zisserman [1] have provided a thorough study of VLAD and proposed useful techniques to exploit it more effectively. Locality-constrained Linear Coding (LLC) [30] also yields a high-dimensional descriptor, whose coding process can be pictured as a hierarchical architecture of feature extraction, sparse coding, pooling, and feature-vector concatenation according to the spatial layout of Spatial Pyramid Matching (SPM) [19].
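To make the aggregation idea behind this family of descriptors concrete, here is a minimal numpy sketch of VLAD (a simplified illustration with our own function names; real pipelines use learned vocabularies and densely extracted SIFT):

```python
import numpy as np

def vlad(local_descs, centers):
    """Aggregate local descriptors (e.g. SIFT) into a VLAD vector: hard-assign
    each descriptor to its nearest center, sum the residuals per center, apply
    intra-normalization [1], then l2-normalize the flattened vector."""
    K, d = centers.shape
    assign = ((local_descs[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
    v = np.zeros((K, d))
    for i, c in enumerate(assign):
        v[c] += local_descs[i] - centers[c]        # residual accumulation
    norms = np.linalg.norm(v, axis=1, keepdims=True)
    v = np.where(norms > 0, v / np.maximum(norms, 1e-12), v)  # intra-normalization
    v = v.ravel()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v                   # global l2 normalization
```

The output dimension is the vocabulary size times the local-descriptor dimension, which is how the very high feature dimensions discussed later arise.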

For unsupervised binary coding, Locality Sensitive Hashing (LSH) first formalizes the concept of approximate nearest-neighbor search: finding, with high probability and in sub-linear time, items within a small constant factor of the optimal similarity [8]. As LSH relies on random projections, it often requires a long code to achieve good precision. Weiss et al. [31] propose a hard criterion, related to graph partitioning, to establish Spectral Hashing (SH). However, SH assumes that the data are generated from a separable distribution, and seems not to be competitive as reported in [11]. Subsequently, the same authors extended SH to MultiDimensional Spectral Hashing (MDSH) by considering a different criterion that matches Hamming affinity with the target affinity matrix. In [25], Raginsky and Lazebnik introduce a distribution-free hashing method (termed SKLSH), which has an encoding scheme similar to SH but instead uses randomly sampled directions. It is shown that, as opposed to SH, SKLSH does not display degenerate behavior with respect to the code length. ITerative Quantization (ITQ) by Gong and Lazebnik [11] is an effective framework for deriving similarity-preserving binary codes. It works by first using PCA to reduce the dimension of the data to the desired code size, and then carrying out an alternating optimization strategy to find the optimal rotation and binary codes minimizing the quantization error. In [12], He et al. propose K-Means Hashing (KMH) for binary codes. The idea is to generalize k-means clustering to iteratively minimize the quantization error and the affinity error in alternating EM steps. In this respect, KMH can be linked to ITQ. SPherical Hashing (SPH) is introduced in [13]. Rather than partitioning the data with hyperplanes, SPH uses hyperspheres and measures the resulting binary codes with the spherical Hamming distance.
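Because ITQ recurs throughout the comparisons below, a compact numpy sketch of its alternating optimization may be helpful (our own simplified rendition, not the reference implementation):

```python
import numpy as np

def itq(X, m, n_iter=50, seed=0):
    """ITQ in brief: project the centered data to m dimensions with PCA, then
    alternate between binarization B = sgn(VR) and an orthogonal Procrustes
    update of R, which minimizes the quantization error ||B - VR||_F."""
    X = X - X.mean(axis=0)
    _, _, Wt = np.linalg.svd(X, full_matrices=False)
    V = X @ Wt[:m].T                                  # PCA projection to code size
    rng = np.random.default_rng(seed)
    R, _ = np.linalg.qr(rng.standard_normal((m, m)))  # random initial rotation
    for _ in range(n_iter):
        B = np.sign(V @ R)                            # fix R, update binary codes
        U, _, St = np.linalg.svd(B.T @ V)
        R = St.T @ U.T                                # fix codes, Procrustes step
    return np.sign(V @ R), R
```

The Procrustes step is the exact minimizer of the quantization error for fixed codes, which is what makes the alternation monotone.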

While the aforementioned binary coding methods are not designed to handle high-dimensional data, Bilinear Projection-based Binary Coding (BPBC) by Gong et al. [10] is the first attempt to explicitly tackle this challenge. BPBC can be considered a high-dimensional extension of ITQ. It seeks a proper bilinear projection to reduce the complexity of computing a full rotation and to achieve dimensionality reduction simultaneously. In [35], Yu et al. subsequently develop Circulant Binary Embedding (CBE). The algorithm imposes a circulant structure on the dimensionality-reduction projection matrix so that fast-Fourier-transform techniques can be applied to compute binary codes more efficiently. Xia et al. [32] propose a Sparse Projection-based (SP) binary coding scheme for high-dimensional data. They consider a sparsity regularizer to achieve efficiency in both storage and encoding. However, their method hinges on first solving an eigenproblem and thus does not scale well with the data dimension. The length of an SP binary code can be smaller or greater than the original feature dimension; in the former case, the performance gradually approaches that of ITQ as the code length increases, while in the latter it performs similarly to LSH [32].

It can be observed that the three high-dimensional binary coding methods, namely BPBC, CBE and SP, are motivated by ITQ. As the retrieval performances of unsupervised binary codes are typically evaluated on the Euclidean ground truth, it is insightful to note that their improvements over ITQ are generally not obvious. When the feature dimension is too large to run ITQ, only BPBC and CBE are applicable. The stagnation in achieving significant progress among the newly proposed techniques is mostly due to the fact that the performance of ITQ (when using sufficient bits) is already rather close to that of $\ell_2$-norm similarity search in the original space. On the other hand, when these algorithms are tested with label ground truth, their performance may not even surpass ITQ. Thus, for the sake of achieving more meaningful evaluations, we compare the various binary coding schemes solely based on label ground truth.

Our approach to binary coding is to preserve the inner products of sparse codes, where the advantages of respecting such quantities are justified in, e.g., [5, 9, 28]. Since the sparse codes are not explicitly computed, we term our binary coding method Implicit Sparse Code Hashing (ISCH). The first step of ISCH is to learn a dictionary from the image database and (conceptually) represent each image by its sparse code. It then carries out dimensionality reduction to preserve the image relationships entailed by the inner products between sparse codes. Leveraging the techniques in [9, 27], the sparse codes need not be computed at all; otherwise, the computation time of sparse coding would make ISCH infeasible for large-scale data. When the feature dimension is extremely large, the main challenge in completing the reduction step is to solve the underlying large eigenproblem. To tackle the high complexity, we adopt the column-sampling low-rank approximation methods proposed in [18, 21], and approximately solve for the leading eigenvalues and the corresponding eigenvectors. Finally, the resulting mapping for generating binary codes is given by a closed-form formula up to an arbitrary rotation, which in turn is decided by minimizing the quantization error under an assumed special structure of the rotation matrix.

3 Implicit sparse code hashing

Given a collection of $n$ images, denoted as $X = \{x_i\}_{i=1}^{n} \subset \mathbb{R}^d$, our task is to convert each $x_i$ into an $m$-bit ($m \ll d$) binary code so that similarity search can be efficiently performed. We express the mapping by

$f : \mathbb{R}^d \to \mathbb{H}^m,$   (1)

where $\mathbb{H}^m$ is the $m$-dimensional Hamming space. In our experiments, the value of $d$, the dimension of the image feature vector, can be up to 105,000.

As the motivation behind our binary code conversion is to consider the image neighborhood relationships implied by the sparse codes, we first need to learn a dictionary $D \in \mathbb{R}^{d \times K}$ of $K$ atoms. We require $K > d$ so that $D$ can be overcomplete. To construct $D$, we adjust the image data to be zero-centered and apply an efficient implementation of k-means clustering from the Yael library [33]. (More details about constructing $D$ are given in Section 4.) Each atom in $D$ is then normalized to unit length. The sparse code $\alpha_i \in \mathbb{R}^K$ of each $x_i$ is the solution to the following lasso problem:

$\alpha_i = \arg\min_{\alpha}\ \tfrac{1}{2}\|x_i - D\alpha\|_2^2 + \lambda \|\alpha\|_1,$   (2)

where $\lambda$ is a parameter to weigh the influence of the sparseness prior. We denote the relation between an image $x_i$ and its sparse code $\alpha_i$ as

$\alpha_i = \alpha(x_i).$   (3)
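For illustration only, the lasso in (2) can be solved by iterative soft-thresholding (ISTA); the sketch below is ours, and the proposed method never actually performs this computation:

```python
import numpy as np

def ista_lasso(x, D, lam=0.05, n_iter=200):
    """Solve min_a 0.5*||x - D a||_2^2 + lam*||a||_1 by iterative
    soft-thresholding (ISTA), stepping with 1/L, L the Lipschitz constant."""
    L = np.linalg.norm(D, 2) ** 2                  # spectral norm squared
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - x) / L              # gradient step on smooth part
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a
```

Running this for every image of a large database is exactly the cost that the implicit formulation of the next section avoids.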

3.1 Dimensionality reduction

To obtain $m$-bit binary codes for $X$, we carry out dimensionality reduction from $\mathbb{R}^d$ to $\mathbb{R}^m$. The linear mapping realizing the reduction is specified by

$y_i = P^\top x_i, \quad P \in \mathbb{R}^{d \times m}.$   (4)

In most of the existing unsupervised binary coding methods, the mapping $P$ is decided by preserving the $\ell_2$-norm image relationships in $\mathbb{R}^d$. That is, $P$ is obtained by minimizing

(5)

However, using the $\ell_2$-norm (i.e., the Euclidean distance) to measure the similarity between images often yields improper results and consequently incorrect image relations. The criterion in (5) thus in some sense limits the effectiveness of the resulting binary codes. Motivated by [9], the proposed ISCH instead decides $P$ by respecting the inner products of sparse codes and minimizing the expected squared difference in inner product, i.e.,

$\min_{P}\ \mathbb{E}\big[\big(\alpha(x)^\top \alpha(x') - (P^\top x)^\top (P^\top x')\big)^2\big].$   (6)

Gkioulekas and Zickler [9] prove that without the need to compute the sparse codes by solving (2), the optimization problem (6) can be reduced to minimizing an objective function whose stationary points could be analytically obtained if the following sparse linear model [27] is assumed:

$x = D\alpha + \varepsilon,$   (7)

where $\varepsilon$ is the Gaussian noise of variance $\sigma^2$ and the sparse code $\alpha$ obeys independent Laplace distribution priors,

$p(\alpha_k) = \frac{1}{2\tau} \exp\!\big(-\tfrac{|\alpha_k|}{\tau}\big), \quad k = 1, \ldots, K,$   (8)

where $\tau$ is the positive scale parameter. Note that the weighting parameter $\lambda$ in (2) relates to (7) and (8) through $\sigma^2$ and $\tau$. It follows from [9] that the optimal linear projection $P$ is determined (up to an $m \times m$ rotation matrix $R$) by the following closed form,

$P = U\,\Gamma\,R,$   (9)

where $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_m)$ includes the $m$ largest eigenvalues of the matrix $DD^\top$ and $U \in \mathbb{R}^{d \times m}$ consists of the corresponding eigenvectors as columns. The expression $\Gamma$ in (9) represents an $m \times m$ diagonal matrix whose $k$th diagonal entry is given by

(10)

How to compute $U$ and $\Gamma$ in (9) will be discussed in the next two sections. Suffice it to say for now that they can only be approximately decided in the high-dimensional case. Once we have obtained an optimal $P$, an on-line similarity search with respect to a novel image $z$ can be performed as follows. Let $p_k$ be the $k$th column of $P$ and $b(z)$ be the binary code of $z$. Then we have, for $k = 1, \ldots, m$,

$b_k(z) = \mathrm{sgn}(p_k^\top z).$   (11)

According to (6), those images in $X$ whose sparse codes are similar to that of $z$ can now be retrieved through XOR bit-wise operations over the binary codes.
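The encoding and retrieval steps can be sketched as follows (hypothetical helper names; the codes are stored as booleans so that Hamming distance reduces to XOR plus popcount):

```python
import numpy as np

def encode(X, P):
    """One bit per projection direction: sign of P^T x, stored as booleans."""
    return (X @ P) > 0

def hamming_search(code_q, codes_db, topk=5):
    """Rank database codes by Hamming distance to the query (XOR + popcount)."""
    dist = np.logical_xor(code_q[None, :], codes_db).sum(axis=1)
    return np.argsort(dist, kind="stable")[:topk]
```

Production systems would pack the bits into machine words and use hardware popcount, but the logic is the same.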

3.2 Approximate spectral decomposition

To approximately solve the high-dimensional eigenproblem for $DD^\top$, we use sampling without replacement to select $l$ columns from $D$ to form a rectangular matrix, say, $C \in \mathbb{R}^{d \times l}$, and express its compact singular value decomposition by

$C = U_C \Sigma_C V_C^\top,$   (12)

where $U_C \in \mathbb{R}^{d \times l}$, $\Sigma_C \in \mathbb{R}^{l \times l}$ and $V_C \in \mathbb{R}^{l \times l}$. The rank of $C$ is assumed to be $l$. Otherwise, we adjust the column sampling to ensure this condition. (An efficient check is described below.) Let the Moore–Penrose pseudoinverse of $C$ be $C^{+}$. We then carry out column projection to generate a “matrix approximation” of $DD^\top$ by

$DD^\top \approx C C^{+} DD^\top = U_C U_C^\top DD^\top,$   (13)

where the right-hand side can be thought of as a rank-$l$ approximation of $DD^\top$ obtained by projecting onto the column space of $C$.

Since $l$, which is tied to the length of the binary code, can be large, solving the SVD of $C$ is still unmanageable in most cases. We further decompose it into subproblems. Specifically, we divide the $l$ randomly selected columns into $g$ sub-matrices of size $d \times (l/g)$, denoted as $C_1, \ldots, C_g$. (Assume that $g$ divides $l$.) The decomposition enables a feasible SVD computation for each sub-matrix, which can now be expressed by $C_t = U_t \Sigma_t V_t^\top$, for $t = 1, \ldots, g$. If any $C_t$ does not have $l/g$ nonzero singular values, we redo the sampling without replacement for its corresponding subset of columns in $D$. We then approximate the SVD for $C$ by simultaneously solving the $g$ problems of SVD for

(14)

Although $\bar{U} = [U_1, \ldots, U_g]$ is still a $d \times l$ matrix, its rank is only $l$. To accomplish the SVD for $C$, we let $M = \bar{U}^\top C$ and decompose it by $M = U_M \Sigma_M V_M^\top$. (The SVD of $M$ can be efficiently computed owing to $l \ll d$.) Since the columns in $\bar{U}$ are orthogonal to each other, it follows that we have indeed derived the SVD of $C$ by

$C = \bar{U} M = (\bar{U} U_M)\, \Sigma_M V_M^\top.$   (15)

We thus obtain $l/g$ pairs of singular values and left singular vectors for each $C_t$. The validity of transforming the original problem into $g$ smaller SVD problems depends on how well the following approximation holds:

$U_s^\top U_t \approx 0, \quad \text{for } s \neq t,$   (16)

which is certainly a crucial topic for further study. Once we have collected the pairs of singular values and left singular vectors of $C$, we can approximately solve the eigenproblem for $DD^\top$ and hence obtain the linear projection $P$ in (9).
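Omitting the blockwise decomposition for brevity, the column-sampling idea of this section can be sketched as follows (our own simplified rendition; the $K/l$ rescaling of the squared singular values is one common convention from the Nyström/column-sampling literature, not necessarily the paper's):

```python
import numpy as np

def approx_top_eigs(D, l, m, seed=0):
    """Approximate the top-m eigenpairs of D D^T: sample l columns of D without
    replacement, take the thin SVD of the sampled matrix, and use its left
    singular vectors as approximate eigenvectors; the squared singular values,
    rescaled by K/l, serve as approximate eigenvalues."""
    K = D.shape[1]
    rng = np.random.default_rng(seed)
    C = D[:, rng.choice(K, size=l, replace=False)]
    U, s, _ = np.linalg.svd(C, full_matrices=False)
    return (K / l) * s[:m] ** 2, U[:, :m]
```

The point of the further decomposition into $g$ sub-matrices in the text is precisely to avoid the single large SVD call above when $l$ itself is large.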

3.3 Rotation matrix

We are left to determine the rotation matrix $R$ in (9) to complete the linear projection $P$. Since our goal is to seek a scheme for producing good binary codes, it is convenient to link the criterion for an appropriate $R$ to facilitating the desired encoding. Analogous to ITQ, we derive the rotation matrix by minimizing the quantization error. That is, the optimal binary codes $\{b_i\}$ for all $x_i$ and the optimal $R$ are jointly optimized as follows:

$\min_{\{b_i\}, R}\ \sum_{i=1}^{n} \big\| b_i - R^\top y_i \big\|^2, \quad \text{s.t. } R^\top R = I,$   (17)

where $b_i \in \{-1, 1\}^m$ is the $m$-bit binary code of $x_i$ as in (1).

Since the value of $m$ is generally quite large for high-dimensional data, directly solving (17) may not be feasible. In BPBC [10], restricting $R$ to a bilinear rotation has been discussed as a way to reduce the computational complexity. In our approach, we simplify the large-scale optimization problem (17) by assuming $R$ to be a sparse block-diagonal matrix. We have

$R = \mathrm{diag}(R_1, R_2, \ldots, R_g),$   (18)

where the nonzero elements in $R$ appear only at the diagonal blocks $R_1, \ldots, R_g$, each of size $(m/g) \times (m/g)$, with $g$ as before. With (18), the optimization problem (17) reduces to

$\min_{\{b_i\}, \{R_t\}}\ \sum_{i=1}^{n} \sum_{t=1}^{g} \big\| b_i^{(t)} - R_t^\top y_i^{(t)} \big\|^2,$   (19)

where we have divided each $y_i$ and $b_i$ into $g$ segments and used $y_i^{(t)}$ and $b_i^{(t)}$ to represent their $t$th segments. Again, we have decomposed a large-scale problem into $g$ smaller ones, each of which can be independently solved using an iterative process of alternating optimization: we fix the codes or the rotation in turn and optimize the other until a preset stopping criterion is met.
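A minimal numpy sketch of this block-wise alternating optimization follows (our own simplified rendition, with the per-segment binarization and Procrustes update made explicit):

```python
import numpy as np

def block_rotation(Y, g, n_iter=30, seed=0):
    """Split the m projected dimensions into g segments and, independently per
    segment, alternate binarization with an orthogonal Procrustes update,
    yielding binary codes and a sparse block-diagonal rotation."""
    n, m = Y.shape
    s = m // g                                     # segment length (assume g | m)
    rng = np.random.default_rng(seed)
    R_full = np.zeros((m, m))
    codes = []
    for t in range(g):
        V = Y[:, t * s:(t + 1) * s]
        R, _ = np.linalg.qr(rng.standard_normal((s, s)))
        for _ in range(n_iter):
            B = np.sign(V @ R)                     # fix R, update codes
            U, _, Wt = np.linalg.svd(B.T @ V)
            R = Wt.T @ U.T                         # fix codes, update rotation
        R_full[t * s:(t + 1) * s, t * s:(t + 1) * s] = R
        codes.append(np.sign(V @ R))
    return np.concatenate(codes, axis=1), R_full
```

Each block is a small independent ITQ-style problem, so the g blocks can also be solved in parallel.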

4 Experimental results

We carry out extensive experiments to evaluate the effectiveness of ISCH on five benchmark datasets: CIFAR-10 [29], MNIST [20], ILSVRC2010 [7], INRIA Holiday+Flickr1M [15] and WDRef+LFW [4, 14]. These datasets are chosen to demonstrate that our method works for image descriptors of both high and moderate dimensions. For the sake of more conclusive evaluations, the retrieval results are measured according to the label ground truth. The compared methods include LSH, SPH, ITQ, BPBC, CBE, SP and $\ell_2$-norm search in the ambient space. As we will detail, our method significantly outperforms the others in all the experiments, which comprise datasets of different scales and feature dimensions. In particular, our results show that ISCH can use fewer bits than all the other compared techniques to achieve comparable performance. Such an advantage is useful both for reducing the coding storage of an underlying large-scale image database and for reducing the testing time of exhaustive Hamming distance computation over the whole dataset.

4.1 Evaluation protocols

CIFAR-10 is a subset of the Tiny Images dataset [29]. It consists of 60,000 color images of size 32×32 pixels. Each of the ten categories in CIFAR-10 contains 6,000 images. We randomly select 1,000 images as test queries, 100 images from each class, and use all the remaining images to make up the training set. To represent an image in CIFAR-10, we use the grayscale GIST descriptor as the feature vector. For CIFAR-10, we construct a dictionary of size 1,024 in our ISCH method.

MNIST includes 70,000 hand-written digit images, each of which corresponds to one of the digits from 0 to 9. As the image size is 28×28, an image can be conveniently represented by a 784-D vector of raw pixel values. We randomly sample 2,000 images, equally distributed over the categories, as test queries and perform retrieval on the remaining image set. We generate 1,568 cluster centers by the k-means algorithm to form an overcomplete dictionary.

ILSVRC2010, a subset of ImageNet [7], is a challenging image collection for fine-grained category classification. It consists of 1.2M images corresponding to 1,000 classes. We download the public SIFT features from the ImageNet website. To construct the VLAD representation, we form 200 clusters and assign each SIFT descriptor to one of the clusters to represent an image; the resulting dimension is 25,600. As suggested in [1], intra-normalization, i.e., performing $\ell_2$-normalization over the sum of residuals within each VLAD cluster independently, yields better performance than power-normalization; we thus normalize VLAD feature vectors by intra-normalization followed by $\ell_2$-normalization. To further evaluate the retrieval performances, we also consider the LLC [30] descriptor and the Caffe CNN-fc7 feature [17]. The setting for LLC is similar to that in [10]. We cluster the SIFT feature vectors to construct a codebook of size 5,000. For each image, the densely extracted SIFT descriptors are coded by LLC and aggregated using a three-level spatial pyramid and max pooling, yielding a 105,000-dimensional representation. The LLC feature vectors are further processed by zero-centering and $\ell_2$-normalization. The CNN-fc7 representation is 4,096-D and the only supervised feature (obtained from pre-training on ImageNet) used in our experiments.

The INRIA Holiday database is a collection of personal holiday photos, comprising 1,491 images of 500 different scenes. 500 images from the dataset are randomly chosen as queries, while the remaining 991 images are combined with Flickr1M to form the underlying database. For each query, there are about two images to be retrieved from the database. We download the public SIFT features and the dictionary with 500 vocabularies from the authors’ website to construct a 64,000-dimensional VLAD encoding for each image. Analogously, the resulting VLAD features are intra-normalized and then $\ell_2$-normalized.

WDRef+LFW is a mixed image collection that comprises the face database WDRef [4] with LFW [14] serving as distractors. For this dataset, we compare our method with only BPBC and $\ell_2$-norm similarity search, to highlight that ISCH for solving large-scale optimization is not restricted to the descriptor structure. WDRef consists of 71,846 images from 2,995 subjects, most of which have more than 10 images. We randomly sample one image per subject from WDRef to form a query set of 2,995 images. The remaining images are combined with LFW, which has 13,233 images, to form the underlying database. We represent each image with the LE [3] descriptor, which has an implicit two-dimensional structure. Additionally, we expand LE with an LBP descriptor to form the other high-dimensional representation. To balance the effects of LE and LBP, they are normalized to unit length independently and then concatenated; the combined vector is again normalized to unit length. One notable property of the LE+LBP descriptor is that the implicit two-dimensional structure is no longer present. Since BPBC relies on exploiting the two-dimensional structure of a high-dimensional descriptor, it is not clear how to apply BPBC to the LE+LBP descriptor.

Details on evaluating the retrieval performance are as follows. The retrieval results of binary codes are decided by Hamming distance. With CIFAR-10 and MNIST, we use $\ell_2$-norm similarity search in the original space as the baseline and test all methods except BPBC, which is more appropriate for high-dimensional data. With ILSVRC2010 and INRIA Holiday+Flickr1M, we focus on evaluating ITQ, BPBC, CBE, SP and ISCH. When the descriptor dimension is notably high, carrying out ITQ and SP becomes too time-consuming and the two are excluded from the evaluation. For the experiment with ILSVRC2010, we randomly sample 1,000 images as queries and assess the precision at the top k nearest neighbors. For Holiday+Flickr1M, we adopt the measure used in [15] and report mean average precision (mAP), i.e., the area under the recall-precision curve over 500 predefined queries. Finally, for WDRef+LFW, we also measure the top-k nearest-neighbor performance. In all our experiments, the same set of query images is used for comparing the retrieval results obtained respectively by Hamming distance and $\ell_2$-norm nearest-neighbor search.
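The two evaluation measures can be sketched in a few lines (our own helper functions, for illustration):

```python
import numpy as np

def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k retrieved items that are relevant to the query."""
    return float(np.mean([r in relevant for r in ranked[:k]]))

def average_precision(ranked, relevant):
    """Average precision of one ranked list; the mean over queries gives mAP."""
    hits, ap = 0, 0.0
    for rank, r in enumerate(ranked, start=1):
        if r in relevant:
            hits += 1
            ap += hits / rank
    return ap / max(len(relevant), 1)
```

Here `ranked` is the retrieval order produced by Hamming (or $\ell_2$) distance and `relevant` is the set of items sharing the query's label.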

4.2 Implementation details

When the feature dimension is large, learning a dictionary $D$ from a large-scale dataset is nontrivial and time-consuming. To accomplish the task efficiently, rather than directly performing k-means clustering in the original feature space, we project the data onto a lower-dimensional space, where the resulting low-dimensional features serve as proxies to hierarchically cluster the original features. Specifically, constructing $D$ hierarchically in H levels is carried out as follows. Let $X$ be the data matrix. We first generate H dimensionality-reduction random projections, one per level, whose target dimensions are much smaller than $d$. At level $h$, the hierarchical clustering divides each cluster at level $h-1$ into smaller clusters. To this end, we compute the level-wise reduction of the data with the level-$h$ projection; then, k-means clustering is performed over the reduced images of each cluster. That is, we use the reduced features as proxies to lower the complexity of clustering data in $\mathbb{R}^d$. Since the cluster sizes can be imbalanced, a cluster of size smaller than some pre-specified threshold is not further split at the next level. To form the dictionary, all the cluster centers produced through the hierarchical process are included as dictionary atoms. In our implementation, the number of levels and the branching factors are set empirically, which bounds the number of atoms generated for a dictionary. The general principle in selecting these values is to make sure that the size of the dictionary is larger than $d$. In our experiments, they are set separately for ILSVRC2010 with VLAD, ILSVRC2010 with LLC, Holiday+Flickr1M with VLAD, and WDRef+LFW with LE and LE+LBP.
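The hierarchical construction can be sketched as follows (a much simplified rendition with hypothetical parameter names; the real implementation operates at far larger scale and then normalizes each atom to unit length):

```python
import numpy as np

def kmeans(X, k, n_iter=10, rng=None):
    """Plain Lloyd's k-means; small helper for the sketch below."""
    rng = rng or np.random.default_rng(0)
    C = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        a = ((X[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (a == j).any():
                C[j] = X[a == j].mean(0)
    return C, a

def hierarchical_dictionary(X, branch=4, levels=2, min_size=8, proj_dim=8, seed=0):
    """Cluster a random low-dimensional projection of the data as a cheap
    proxy, split clusters recursively, and collect every cluster center
    (computed in the original space) as a dictionary atom."""
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((X.shape[1], proj_dim)) / np.sqrt(proj_dim)
    atoms = []

    def split(idx, level):
        if level == levels or len(idx) < max(min_size, branch):
            return                                  # stop: depth or cluster too small
        _, a = kmeans(X[idx] @ P, branch, rng=rng)  # cluster the proxy features
        for j in range(branch):
            sub = idx[a == j]
            if len(sub):
                atoms.append(X[sub].mean(0))        # center in the original space
                split(sub, level + 1)

    split(np.arange(len(X)), 0)
    return np.stack(atoms)
```

This sketch reuses one random projection across levels for simplicity; the text describes a separate projection per level.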

The key parameter in our method is $\tau$, the scale parameter of the Laplace distribution in (8). From (2) and (8), the scale parameter of the Laplace distribution and the sparseness weight $\lambda$ are directly related. In [34], a value of 0.05 is suggested for the sparseness weight; we follow this setting and thus only need to decide the value of $\tau$ to specify the formula in (10). As $\tau$ is the scale parameter, its suitable value correlates with the dictionary size. In our experiments, one value of $\tau$ is used for CIFAR-10 and MNIST and another for all the other settings, except when the CNN-fc7 feature is used, where a separate value is adopted.

Figure 1: (a)-(e) Precision at k versus code length for CIFAR-10, MNIST, ILSVRC using VLAD, ILSVRC using LLC and ILSVRC using CNN-fc7, respectively. (f) Mean average precision versus code length for Holiday+Flickr1M using VLAD. (g) Retrieval performances on WDRef+LFW, measured by the average recall of the top k returns; the number following a specific method is the code size, and "LE+LBP" indicates that face images are represented by the LE+LBP feature (otherwise, the LE feature only). (h) & (i) Naive binary coding decided by the signs of the reduced features: precision at 10 for $\ell_2$-norm and Hamming distance retrievals versus reduced dimensions, for ILSVRC with VLAD and with CNN-fc7. The results of performing $\ell_2$-norm and Hamming distance retrievals in the original input space are plotted in black.

4.3 Retrieval results

We begin with the experiments on CIFAR-10 and MNIST. Evaluations are done by returning the top 500 ranked images for each query and reporting the average precision. Figures 1(a) and 1(b) show the averaged precision of the 500 nearest neighbors returned by each method. While all the compared binary coding techniques are designed to respect the $\ell_2$-norm image relationships in the original feature space, our method preserves those implied by the inner products of sparse codes. The advantage is manifest: a 128-bit binary code by ISCH already outperforms all the other implementations using more bits. We also see that most of the compared binary codes gradually approach the $\ell_2$-norm performance as their code size increases. Notice that the sparsity parameter in SP and the relevant parameter in CBE (the latter set to 1) are kept fixed in all our experiments.

For ILSVRC2010, we use three different image descriptors: 25,600-D VLAD, 105,000-D LLC and 4,096-D CNN-fc7. Before reporting the evaluation results of the various techniques, we first show that the retrieval accuracy based on these sophisticated descriptors can still be boosted through dimensionality reduction. (We consider only VLAD and CNN-fc7, as the LLC dimension is too large to run PCA.) We respectively conduct $\ell_2$-norm and Hamming distance search based on a naive sign-thresholding binary coding, and compare their performances before and after performing PCA. The retrieval outcomes, shown in Figures 1(h) and 1(i), support that improvements can be achieved through appropriate dimensionality reduction. We then proceed to investigate the effectiveness of the different binary coding schemes. When images in ILSVRC2010 are encoded by the 25,600-D VLAD, we apply the hierarchical k-means clustering described in the previous section to obtain a dictionary of size 32,385. For each method, we generate binary codes of size equal to half (12,800), a quarter (6,400), and an eighth (3,200) of the feature dimension. To achieve the best retrieval performance for BPBC, we have tried several combinations of the reduced dimensions of a bilinear projection, except for the code size 12,800, where we adopt the specific reduced dimensions reported in [10]. Figure 1(c) shows the precision at 10 retrieved images using binary codes of different lengths. The experiment for ILSVRC2010 with the 105,000-D LLC is performed analogously. We construct a dictionary of size 129,281. Even though the dimension now grows to 105,000, we can still use the approximation techniques to compute the binary codes. In Figures 1(c) and 1(d), ISCH with its shortest code already yields better performance than all the other techniques with substantially larger code sizes. In testing ISCH with the 4,096-D CNN-fc7 feature, the dictionary size is set to 8,000. Since the Caffe CNN-fc7 yields supervised features, the retrieval performances of all methods, plotted in Figure 1(e), are significantly improved.

The image descriptor used for the INRIA Holiday+Flickr1M dataset is the 64,000-D VLAD. For each query, there are about two relevant images in the database to be retrieved. We identify the retrieved images according to Hamming distances and compute the average precision for each query. For ISCH, we construct a dictionary of size 77,001 to generate binary codes of 8,000, 16,000 and 32,000 bits. Figure 1(f) includes the mAP curves for the different code sizes of each method. The results show that ISCH uniformly outperforms BPBC and CBE by appreciable margins.

In many challenging computer vision problems, adopting a combined feature vector that fuses different image descriptors is often useful. In the experiment with WDRef+LFW, we demonstrate that ISCH is flexible in this respect. In particular, ISCH only requires that the target code length be divisible by $g$. Such a generalization is not applicable to BPBC, which relies on the two-dimensional structure of a high-dimensional descriptor. Since not all image descriptors have an implicit two-dimensional structure (and even when they do, the structures are generally not the same), feature fusion is not a workable option for BPBC. To describe each face image in WDRef+LFW, we use the LE and LBP descriptors, where the latter does not display a two-dimensional structure. We consider two feature representations based on LE and LE+LBP, and construct two dictionaries of size 34,835 and 34,571, respectively. We generate binary codes of size 3,200, 6,400, and 12,800 for both representations and plot the results, measured by recall, in Figure 1(g). We see that ISCH considerably outperforms BPBC when working with the LE feature only: the average recall of ISCH at the top 5,000 returned images is over 7% higher than that of BPBC. Furthermore, as shown in Figure 1(g), the retrieval performance of ISCH noticeably benefits from the combined feature LE+LBP, to which BPBC is not applicable.

5 Discussions

Previous uses of sparse codes to better define image structure [28] or to index a hash table for similarity search [5] require computing them explicitly. However, computing sparse codes for large-scale high-dimensional data is not only extremely time-consuming (or even infeasible) but also parameter-sensitive. The technique developed in [9] provides a convenient way of skipping such computation when finding a dimensionality-reduction mapping that preserves the inner products of sparse codes. Since it is achieved by minimizing the expected squared difference of inner products under the assumption of a sparse linear model, the resulting mapping is reasonably robust provided the underlying rotation matrix is properly constructed. In our formulation, we link the criterion for an optimal rotation matrix to yielding good binary codes that minimize the quantization error. The key to our approach for tackling large-scale computation is problem decomposition. By coupling approximate spectral decomposition with the assumption that the rotation matrix is sparse block-diagonal, we reduce the daunting eigenproblem and the quantization-error minimization to independent and manageable subproblems. All these efforts result in a new and promising technique, coined ISCH, for generating effective binary codes for large-scale image data.
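The decomposition idea can be sketched as follows. This is a hedged illustration, not the paper's implementation: the names `itq_rotation` and `blockwise_rotation` are hypothetical, and each per-block quantization-error minimization is shown with the ITQ alternation of [11] (binarize, then solve an orthogonal Procrustes problem), which the sparse block-diagonal assumption lets us run independently per block.

```python
import numpy as np

def itq_rotation(V, n_iter=20, seed=0):
    """ITQ-style rotation for one coordinate block: alternate between
    binarizing the projected data and solving the orthogonal Procrustes
    problem min_R ||V R - B||_F over orthogonal R."""
    d = V.shape[1]
    rng = np.random.default_rng(seed)
    R, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random orthogonal init
    for _ in range(n_iter):
        B = np.sign(V @ R)                 # binarization step
        B[B == 0] = 1
        U, _, Wt = np.linalg.svd(V.T @ B)  # Procrustes: R = U W^T
        R = U @ Wt
    return R

def blockwise_rotation(V, block_size, n_iter=20):
    """Learn a sparse block-diagonal rotation by solving an independent
    subproblem on each block of coordinates."""
    d = V.shape[1]
    R = np.zeros((d, d))
    for s in range(0, d, block_size):
        Rb = itq_rotation(V[:, s:s + block_size], n_iter)
        R[s:s + Rb.shape[0], s:s + Rb.shape[1]] = Rb
    return R

# toy demo: 8-D data split into two independent 4-D subproblems
rng = np.random.default_rng(1)
V = rng.standard_normal((50, 8))
R = blockwise_rotation(V, block_size=4, n_iter=10)
```

Because the blocks never interact, each subproblem involves only a `block_size`-dimensional SVD, and in practice the full matrix `R` need never be materialized; only the small per-block rotations are stored and applied.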

We conclude by briefly discussing the complexity of ISCH. Excluding the step of learning a dictionary, the time complexity for training is and that for computing the binary code of a test image is . For the storage usage of ISCH, we take BPBC as a comparative model. From the experimental results, we observe that our method needs at most one fourth as many bits per binary code as BPBC to achieve comparable retrieval performance. Using ILSVRC2010 with -D VLAD as an example, -bit ISCH would need extra storage of (byte) MB for the dimensionality-reduction mapping. -bit BPBC requires GB for storing the binary codes of the whole dataset, while our method needs only GB for the codes. Nevertheless, as the feature dimension increases, the advantage of storage saving by ISCH would diminish, since the storage for the mapping grows in . This suggests that approximating the learned in (9) to save storage is a practical research topic for future work.
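The storage trade-off above amounts to simple back-of-envelope arithmetic: packed binary codes cost bits per image across the dataset, while a dense dimensionality-reduction mapping costs one float per entry. The figures below are hypothetical, chosen only to illustrate how the mapping's storage comes to dominate as the feature dimension grows; they are not the paper's reported numbers.

```python
def code_storage_gib(n_images, n_bits):
    """Storage for bit-packed binary codes of a whole dataset, in GiB."""
    return n_images * n_bits / 8 / 2**30

def mapping_storage_mib(d, r, bytes_per_float=4):
    """Storage for a dense d x r dimensionality-reduction matrix, in MiB."""
    return d * r * bytes_per_float / 2**20

# hypothetical figures for illustration only
codes = code_storage_gib(1_000_000, 16_000)      # 1M images, 16,000-bit codes
mapping = mapping_storage_mib(128_000, 16_000)   # 128,000-D features to 16,000 dims
# codes ≈ 1.86 GiB, mapping = 7812.5 MiB: the dense mapping already
# rivals the whole dataset's codes, matching the observation that its
# cost diminishes ISCH's storage advantage at high dimensions.
```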

References

  • [1] R. Arandjelović and A. Zisserman. All about VLAD. In CVPR, pages 1578–1585, 2013.
  • [2] A. Berg, J. Deng, and L. Fei-Fei. ILSVRC 2010, 2010.
  • [3] Z. Cao, Q. Yin, X. Tang, and J. Sun. Face recognition with learning-based descriptor. In CVPR, pages 2707–2714, 2010.
  • [4] D. Chen, X. Cao, L. Wang, F. Wen, and J. Sun. Bayesian face revisited: A joint formulation. In ECCV (3), pages 566–579, 2012.
  • [5] A. Cherian, S. Sra, V. Morellas, and N. Papanikolopoulos. Efficient nearest neighbors via robust sparse hashing. IEEE Transactions on Image Processing, 23(8):3646–3655, 2014.
  • [6] R. G. Cinbis, J. Verbeek, C. Schmid, et al. Segmentation driven object detection with fisher vectors. In ICCV, 2013.
  • [7] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, pages 248–255, 2009.
  • [8] A. Gionis, P. Indyk, and R. Motwani. Similarity search in high dimensions via hashing. In VLDB, pages 518–529, 1999.
  • [9] I. Gkioulekas and T. Zickler. Dimensionality reduction using the sparse linear model. In NIPS, pages 271–279, 2011.
  • [10] Y. Gong, S. Kumar, H. A. Rowley, and S. Lazebnik. Learning binary codes for high-dimensional data using bilinear projections. In CVPR, pages 484–491, 2013.
  • [11] Y. Gong and S. Lazebnik. Iterative quantization: A procrustean approach to learning binary codes. In CVPR, pages 817–824, 2011.
  • [12] K. He, F. Wen, and J. Sun. K-means hashing: An affinity-preserving quantization method for learning binary compact codes. In CVPR, 2013.
  • [13] J.-P. Heo, Y. Lee, J. He, S.-F. Chang, and S.-E. Yoon. Spherical hashing. In CVPR, pages 2957–2964, 2012.
  • [14] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst, October 2007.
  • [15] H. Jégou, M. Douze, and C. Schmid. Hamming embedding and weak geometric consistency for large scale image search. In ECCV, pages 304–317. Springer, 2008.
  • [16] H. Jégou, M. Douze, C. Schmid, and P. Pérez. Aggregating local descriptors into a compact image representation. In CVPR, pages 3304–3311, 2010.
  • [17] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
  • [18] S. Kumar, M. Mohri, and A. Talwalkar. On sampling-based approximate spectral decomposition. In ICML, pages 553–560. ACM, 2009.
  • [19] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR, volume 2, pages 2169–2178. IEEE, 2006.
  • [20] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, Nov 1998.
  • [21] L. W. Mackey, A. Talwalkar, and M. I. Jordan. Divide-and-conquer matrix factorization. In NIPS, pages 1134–1142, 2011.
  • [22] D. Oneață, J. Verbeek, C. Schmid, et al. Action and event recognition with Fisher vectors on a compact feature set. In ICCV, 2013.
  • [23] F. Perronnin and C. Dance. Fisher kernels on visual vocabularies for image categorization. In CVPR, pages 1–8, 2007.
  • [24] F. Perronnin, J. Sánchez, and T. Mensink. Improving the Fisher kernel for large-scale image classification. In ECCV, pages 143–156. Springer, 2010.
  • [25] M. Raginsky and S. Lazebnik. Locality-sensitive binary codes from shift-invariant kernels. In NIPS, pages 1509–1517, 2009.
  • [26] J. Sánchez and F. Perronnin. High-dimensional signature compression for large-scale image classification. In CVPR, pages 1665–1672, 2011.
  • [27] M. W. Seeger. Bayesian inference and optimal design for the sparse linear model. Journal of Machine Learning Research, 9:759–813, 2008.
  • [28] R. Timofte and L. Van Gool. Sparse representation based projections. In BMVC, 2011.
  • [29] A. Torralba, R. Fergus, and W. T. Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. TPAMI, 30(11):1958–1970, 2008.
  • [30] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong. Locality-constrained linear coding for image classification. In CVPR, pages 3360–3367, 2010.
  • [31] Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. In NIPS, pages 1753–1760, 2008.
  • [32] Y. Xia, K. He, P. Kohli, and J. Sun. Sparse projections for high-dimensional binary codes. In CVPR, pages 3332–3339, 2015.
  • [33] Yael. https://gforge.inria.fr/projects/yael/, 2009.
  • [34] J. Yang, K. Yu, and T. Huang. Supervised translation-invariant sparse coding. In CVPR, pages 3517–3524. IEEE, 2010.
  • [35] F. X. Yu, S. Kumar, Y. Gong, and S.-F. Chang. Circulant binary embedding. In ICML, 2014.