Specifically, to achieve shorter response times and lower computational cost, hashing encodes high-dimensional data into compact binary codes (i.e., 0 or 1). In this way, data can be compactly stored and Hamming distances can be efficiently calculated with bit-wise XOR operations. Because of its impressive capacity for dealing with the “curse of dimensionality” problem, hashing has been extensively employed in various real-world applications, ranging from multimedia indexing [8, 40, 37, 36] to multimedia event detection .
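As an aside, the XOR-based distance computation mentioned here can be sketched in a few lines (a minimal illustration with a hypothetical helper name, not part of the paper):

```python
def hamming_distance(a: int, b: int) -> int:
    """Hamming distance between two binary codes packed as integers:
    XOR highlights the differing bits, popcount counts them."""
    return bin(a ^ b).count("1")

# 0b10110010 and 0b10011010 differ in exactly two bit positions.
```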
There are mainly two branches of hashing, i.e., data-independent hashing and data-dependent hashing. For data-independent hashing, such as Locality Sensitive Hashing , no prior knowledge (e.g., supervised information) about the data is available, and hash functions are randomly generated. Nonetheless, huge storage and computational overhead may be incurred since more than bits are usually required to achieve acceptable performance. To address this problem, research has turned to data-dependent hashing, which leverages information inside the data itself. Roughly, data-dependent hashing can be divided into two categories: unsupervised hashing (e.g., Iterative Quantization  and Sparse Multi-Modal Hashing ), and (semi-)supervised hashing (e.g., Supervised Hashing with Kernels , Supervised Discrete Hashing , Discrete Graph Hashing  and Semi-Supervised Hashing ). In general, supervised hashing usually achieves better performance than unsupervised hashing because supervised information (e.g., semantic labels and/or pair-wise data relationships) can help to better explore intrinsic data properties, thereby generating superior hash codes and hash functions.
Along with the explosive growth of Web data, traditional supervised hashing methods have been facing an enormous challenge, i.e., the generation of reliable supervised knowledge cannot keep pace with the rapid emergence of new semantic concepts and multimedia data. In other words, due to the expensive cost of manual labelling (time-consuming and labor-intensive), sufficient labelled training data is usually not available in time for learning new hash functions that can accurately encode data of new concepts. As illustrated in Figure 1, within the “seen” zone, where images are attached with known categories, existing supervised hashing algorithms may perform well because they are fed with correct guidance. However, outside the seen area, supervised hashing algorithms may easily fail to generalize to data of new categories that they have never observed, e.g., a two-wheeled, self-balancing, battery-powered electric vehicle. Moreover, most current approaches use supervised information in the form of either 0/1 semantic labels or pair-wise data relationships to guide the learning process, which implies that the precious correlations among label semantics are inevitably ignored. One straightforward consequence of this semantic independence is that each category can neither learn from other relevant categories nor distribute its own supervised knowledge to other seen classes or even unseen ones.
The aforementioned disadvantages motivate us to ask: can we encode images of “unseen” categories into binary codes with hash functions learned from limited training samples of “seen” categories? The key challenge in achieving this goal is how to set up a tunnel to transfer supervised knowledge between “seen” and “unseen” categories. In recent years, zero-shot learning (ZSL) [21, 27, 2, 20] has been widely recognized as a way to deal with this problem. The ZSL paradigm aims to learn a general mapping from the feature space to a high-level semantic space, which helps avoid rebuilding models for unseen categories with extra manually labelled data. ZSL is mostly achieved by using class-attribute descriptors to bridge the semantic gap between low-level features and high-level semantics, such that new categories can be learned using only the relationship between attributes and categories. However, most existing attribute-based ZSL methods still suffer from: (1) erroneous guidance derived from imprecise or incomplete human-labelled attributes , usually due to annotators' lack of expertise or mislabeling; (2) the diminishing discrimination of pre-defined attributes when confronted with dataset shift [14, 22].
Recently, mining auxiliary datasets has been shown to be helpful for tackling the zero-shot learning problem. For instance, with a huge corpus such as Wikipedia, one can obtain word embeddings that capture distributional similarity in the text corpus , such that similar words are located close to each other. During the learning phase, the visual modality can be grounded by the word vectors, and such knowledge can thus be transferred into the learned model. Inspired by this, many approaches utilize auxiliary modalities to help address zero-shot tasks. Socher et al.  use word embeddings as supervision to detect novel categories and perform classification accordingly. Frome et al.  adopt a similar approach, connecting raw features and the word embedding space using dot-product similarity and a hinge rank loss. In the hashing domain, however, the zero-shot problem has rarely been studied.
As previously analyzed, with newly-emerging concepts and multimedia data, we are in urgent need of a reliable and flexible hash function that can be adopted to hash images of unseen categories. In this work, we propose a novel hashing scheme, termed zero-shot hashing (ZSH). Inspired by the superior capacity of word embeddings for capturing the semantic correlations among concepts, we map mutually independent labels into a semantic-rich space, where supervised knowledge of both seen and unseen labels can be fully shared. This strategy helps to encode images of unseen categories without any visual observations of those unknown classes. Besides, even when we cannot retrieve images of exactly the same category, semantically related objects can be returned. Moreover, we recognize the problem of semantic shift caused by the off-the-shelf embedding, and rotate the embedded space so that the hash functions generalize better to images of unseen categories. To further improve the quality of the hash functions, we also preserve the local structural property and discrete nature of the binary codes. We summarize our main contributions as below:
We address the problem of employing training data of seen categories to learn reliable hash functions for transforming images of unseen categories into binary codes. We propose a novel zero-shot hashing scheme, which bridges the gap between originally independent labels through a semantic embedding space. To the best of our knowledge, this is one of the first works to study the problem of hashing data from newly-emerging concepts with limited seen supervised knowledge. Extensive experiments on various multimedia data collections validate the efficacy of our proposed ZSH.
We devise an effective strategy for transferring available supervised knowledge from seen classes to unseen classes. In particular, we transform labels into a word embedding space, where semantic correlations among labels can be quantitatively measured and captured. In this way, unseen labels can leverage the well-established mappings of their semantically close seen categories. For instance, segway may learn from bicycle and automobile.
The initial semantic embedding comes from an off-the-shelf word embedding space, which may introduce severe semantic shift between categories and the original visual features. To alleviate this potential influence, we propose to further rotate the embedding space to better fit the underlying feature characteristics, thereby effectively narrowing the semantic gap.
In order to generate more reliable hash functions, we propose to improve the intermediate binary codes of the training data by exploring underlying data properties. Concretely, we impose discrete constraints on binary codes during the code learning process and preserve the local structure of the data, i.e., if two data points share similar representations in the original space, they should be close to each other in the learned Hamming space.
The rest of this paper is organized as follows. In Section 2, we briefly review related work on hashing and zero-shot learning. In Section 3, we elaborate our approach in detail, together with our optimization method and an analysis of the algorithm. Extensive experimental results on various datasets are reported in Section 4, followed by the conclusion of this work in Section 5.
2 Related work
In this section, we aim to clarify the relationship between our work and related research. Due to space constraints, we cannot elaborate every detail of the previous literature.
2.1 Zero-Shot Learning
Zero-shot learning, i.e., learning with no training data of the target classes, has proven to be an effective approach to the increasing difficulty posed by insufficient training examples. Many approaches have been proposed to solve this problem by using an intermediate layer to represent an image. Specifically, with visual attributes or other semantically rich descriptors, a novel image can be defined through the relationship between categories and the intermediate representation. Farhadi et al.  leverage attributes to classify unseen objects by describing them with attributes. The work by Larochelle et al., Zero-data Learning of New Tasks , has also proven useful when predicting categories that are absent from the training dataset. Recently, learning novel images with auxiliary datasets (e.g., leveraging textual relationships in a large corpus) has been shown to be powerful for zero-shot tasks. By learning the correlations between concepts, the label of a novel example omitted from the training set can be reasonably inferred. Renowned works include Zero-shot Learning Through Cross-Modal Transfer by Socher et al. , which uses label embeddings to detect unseen classes and make semantically reasonable deductions. DeViSE  uses the same scheme as , but with a different language model and a different loss function to connect the two modalities. However, all of the above methods are limited to the classification or prediction scenario. To the best of our knowledge, we are the first to handle the zero-shot retrieval problem, i.e., hashing novel images that were never observed. By adopting a natural language model  pre-trained on a large corpus from Wikipedia, we precisely capture the correlations between different words, and thus hash unseen images into the correct region of the Hamming space.
2.2 Hashing
This subsection overviews fast search with binary codes using hashing techniques. Similarity search is the challenge of finding the data points with the smallest distances to a query in a large-scale database. The simplest hashing scheme is Locality Sensitive Hashing , which designs hash functions with no prior knowledge of the data distribution. However, such hashing methods require a significantly large code length to achieve acceptable performance, generating large overheads in a database. To address this problem, learning to hash has become a trend. Unsupervised hashing methods mine the statistical, distributional information in the database, generating an optimal hash function to preserve the similarity of the original space. Among classical algorithms, Spectral Hashing (SH)  solves for binary codes that preserve the Euclidean distances in the database; Inductive Manifold Hashing (IMH)  adopts manifold learning techniques to better model the intrinsic structure embedded in the feature space; and Iterative Quantization  focuses on minimizing the quantization error during unsupervised training. Considering that a real-world database is commonly described by multiple modalities, such as visual features (e.g., Caffe features) or textual information (e.g., image captions, lyrics), Sparse Multi-Modal Hashing  utilizes information from at least two different sources to achieve promising performance. Since unsupervised methods are guided with little human-level knowledge, supervised hashing has been proposed to use supervision information to learn binary codes. Hashing techniques in this category have been emerging continuously in recent years; representative methods include Supervised Hashing with Kernels (KSH) , Minimal Loss Hashing (MLH) , Supervised Discrete Hashing (SDH) , Latent Factor Hashing (LFH)  and the recently proposed Column Sampling Based Discrete Supervised Hashing (COSDISH) .
Using supervision information, these hashing schemes perform better than unsupervised ones. Recently, with the rise of deep learning, image hashing using large convolutional neural networks has also been shown to be effective. By using hidden layers to represent images as feature vectors that are optimal for binary code generation, hashing performance can be greatly augmented.
Admittedly, hashing algorithms have successfully tackled the “curse of dimensionality” in terms of fast search. However, what if we want to achieve data-dependent performance while no training example is provided? All of the above hashing methods fail to generalize to “unseen” categories, being limited to the “seen” area where every category corresponds to at least one training image. Besides, as the database changes every day, frequently re-training hash functions can be expensive, further preventing their practical usage in large, dynamic, real-world databases. Based on the above analysis, a hashing method that performs well on unseen data is in strong demand, which naturally motivates the direction of zero-shot hashing.
3 Zero-Shot Hashing
In this section, we elaborate our proposed zero-shot hashing (ZSH). We first present a formal definition of hashing in the zero-shot scenario, and then depict the details of ZSH, including a brief introduction of the overall framework, supervision transfer, semantic alignment and the hashing model. Finally, we introduce the optimization process and algorithm analysis.
3.1 Problem Definition
Suppose we are given training images labeled with a seen visual concept set , where is the dimensionality of the visual feature space. Denote as the binary label matrix, where is the label vector of the -th sample and is the number of seen classes in . Different from the conventional supervised hashing scenario, where both testing data and training data are associated with the same concept set, i.e., , we intend to cope with the situation where testing data and training data share no common concepts. In other words, the testing data (denoted as ) belong to an “unseen” category set , i.e., . Using only the training images, where no training samples of the “unseen” categories in are available, our goal is to learn a hash function , which can map images belonging to both and from the original visual feature space to -bit binary codes. The learned hash function should not only guarantee that the binary codes of semantically relevant objects have short Hamming distances, but also generalize well to testing data belonging to the unseen categories, even though no data of those categories are utilized in the training phase.
3.2 Overall Framework
The flowchart of our overall framework is illustrated in Figure 2. As we can see, there are two stages: the offline phase and the online phase. In the offline phase, suppose only images of a limited number of categories are visible to our system. We first extract their visual features through a convolutional neural network. At the same time, we use an NLP model to transform seen labels into a semantic-rich embedding space, where each label is represented by a real-valued vector. With the embedded semantics, the relationships among both seen and unseen categories can be well captured and characterized. Instead of -form label vectors, ZSH supervises the learning of hash functions with the embedded semantic vectors to transfer supervised knowledge. We further rotate the off-the-shelf embedding space to better align it with the low-level visual feature space. Meanwhile, ZSH preserves the local structural information and discrete nature of the intermediate binary codes to improve the hash functions. Finally, we use the learned hash functions to transform all the images in the database into binary codes for subsequent retrieval. In the online phase, when a new query image of any unseen category arrives, we encode it into binary codes following the same mapping and retrieve images that are close to the query in the Hamming space.
3.3 Transferring Supervised Knowledge
In general, most existing supervised hashing algorithms can retrieve relevant results for queries in the seen categories since there is supervised information for understanding such queries. Nevertheless, when the hashing system has no knowledge of certain unseen classes, query images from these classes will probably be misunderstood, thereby leading to inaccurate search. One of the main causes is that the supervised information is in the form of -form label vectors or pair-wise data relationships, which implicitly makes labels independent of each other and omits the inherent correlations among their high-level semantics (e.g., cat is treated as being as different from truck as from dog). As illustrated in Figure 3, using independent labels, each object is mapped to an independent vertex of a hypercube, and the distance between any two categories is the same. In order to address this disadvantage, we propose to connect label semantics by taking advantage of the superior ability endowed by natural language processing techniques. Specifically, as illustrated in Figure 3, we map independent labels into a word embedding space, where semantic correlations among labels can be quantitatively measured and captured. Therefore, unseen labels can leverage the well-established mappings of their semantically close seen categories. For example, in the embedding space, cat and dog are close to each other; hence even if the hashing system never observes any cat images, it can still gain some useful clues from the supervised knowledge of dog. We adopt the language model  pre-trained using free Wikipedia text. This model leverages not only local information but also global document context, and therefore shows superior performance over other competitive approaches. Every category is embedded into a -d word vector. (In practice, we find that by normalizing word vectors to unit length, retrieval performance can be augmented without distortion of the cosine similarities; thus we empirically normalize every word vector to unit length.) In the subsequent part, we consistently denote the embedded label matrix as for brevity.
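The unit-length normalization described here can be sketched as follows (an illustrative snippet; the variable names are our own):

```python
import numpy as np

def normalize_rows(Y):
    """Scale each embedded label vector (row) to unit L2 norm.
    Cosine similarities between rows are unchanged by this scaling."""
    norms = np.linalg.norm(Y, axis=1, keepdims=True)
    return Y / np.maximum(norms, 1e-12)  # guard against all-zero rows

Y = np.array([[3.0, 4.0], [0.0, 2.0]])
Yn = normalize_rows(Y)  # rows become [0.6, 0.8] and [0.0, 1.0]
```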
3.4 Semantic Alignment
Note that the supervised knowledge transferred from the off-the-shelf embedding space may potentially deviate from the underlying semantics of the image data due to problems such as domain difference, semantic shift and semasiological variation. This will inevitably jeopardize the whole learning process of our proposed model. In order to prevent this issue, we propose a semantic alignment strategy, which actively aligns the initial embedding space with the distributional properties of the low-level visual features. In particular, we seek a transformation matrix with an orthogonal constraint to rotate the embedding space to . Recall that we intend to use the rectified supervised knowledge to guide the learning of high-quality hash codes and hash functions; therefore, we minimize the following error:
where is the mapping matrix from binary codes to the supervised information, and is the code length. The benefit of the above formulation is that it helps to narrow the semantic gap between binary codes and the supervised knowledge.
3.5 Hashing Model
For convenience, we first recap some previous settings. Suppose we have training samples . For brevity, we denote the corresponding embedded label knowledge as . Our ultimate target is to learn a set of hash functions from “seen” training data supervised by , enabling the generation of high-quality binary codes for data of “unseen” categories. Meanwhile, the quality of the hash functions may heavily rely on the reliability of the intermediate binary codes of the training data. In other words, the model is supposed to simultaneously control both hash functions and hash codes well. To achieve the above goals, we propose the following model:
where is the semantic alignment matrix. is the mapping matrix from binary codes to the supervisory information. denotes the binary codes of , where is the binary code of the -th sample . is a diagonal matrix of size . denotes the Frobenius norm of a matrix. and are balancing parameters. We define a hash function from a non-linear embedded feature space to the desired Hamming space:
where . is the transformation matrix. Following the successful practice for learning hash functions in , we employ kernel mapping to handle the potential problem of linear inseparability:
where are anchors randomly sampled from and is the bandwidth parameter.
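Anchor-based kernel mappings of this kind are commonly implemented as RBF features over randomly sampled anchors; below is a hedged sketch under that assumption (the exact kernel form and bandwidth choice are elided above):

```python
import numpy as np

def rbf_anchor_features(X, anchors, sigma):
    """phi(x)_j = exp(-||x - a_j||^2 / (2 * sigma**2)) for each anchor a_j."""
    # Squared Euclidean distances between every sample and every anchor.
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 16))                    # toy features
anchors = X[rng.choice(100, size=10, replace=False)]  # anchors drawn from the data
Phi = rbf_anchor_features(X, anchors, sigma=1.0)      # nonlinear embedding, shape (100, 10)
```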
Note that we keep the discrete constraint on the variable to prevent information loss of binary codes to the greatest extent. The term in Eq. (2) preserves local structural information of training data, i.e., if two samples are similar in the original feature space (large ), then they are enforced to share similar binary codes in the Hamming space.
In the next part, we introduce an efficient alternating algorithm to optimize our zero-shot hashing model.
3.6 Optimization
We first rewrite the model in matrix form as follows:
where and the Laplacian matrix is computed as:
where is a diagonal matrix with its -th diagonal element computed as .
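The construction just described, L = D − S with D the diagonal degree matrix, can be sketched for a toy similarity matrix as follows (an illustrative snippet):

```python
import numpy as np

def graph_laplacian(S):
    """L = D - S, where D is diagonal with D_ii = sum_j S_ij."""
    return np.diag(S.sum(axis=1)) - S

S = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 0.0],
              [0.5, 0.0, 0.0]])
L = graph_laplacian(S)
# Each row of a Laplacian sums to zero; for symmetric S, x @ L @ x >= 0.
```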
Next, we present an alternating algorithm to optimize the model in Eq. (5).
3.6.1 Update P
Fixing all variables except for , we get the quadratic problem as:
By setting its derivative with respect to to 0, we have the following solution
3.6.2 Update B
In this step, we fix all other variables and learn binary codes with discrete constraint. The objective function can be reduced to
The above equation can be further written as
Inspired by , we apply the discrete cyclic coordinate descent (DCC) algorithm to solve the above sub-problem. Denote as , and , where , and are the -th rows of , and , respectively. Furthermore, for convenience, we denote
Then we can have
Here, . Following the same rule, we also have the following conclusion
The sub-problem can be transformed to
The optimal solution of above equation is
where is the sign function. We can see that each bit of the desired binary code can be learned given all the other bits. Thus, we use the cyclic coordinate descent approach to generate the optimal codes bit by bit until the entire procedure converges.
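Since the sub-problem's equations are stated only in outline here, the following is a schematic sketch of discrete cyclic coordinate descent for a problem of the form min ||Q − BW||² with B constrained to ±1 (the variable names are our assumptions, not the paper's): each bit column of B is set by a sign operation while the other columns are held fixed.

```python
import numpy as np

def dcc_update(B, W, Q, n_sweeps=3):
    """Cyclically update each bit column of B in {-1,+1}^{n x L} to reduce
    ||Q - B @ W||_F^2, holding the other columns fixed (sign-based update)."""
    n, L = B.shape
    for _ in range(n_sweeps):
        for k in range(L):
            # Residual with bit k's current contribution removed.
            resid = Q - B @ W + np.outer(B[:, k], W[k])
            z = resid @ W[k]          # correlation with bit k's direction
            z[z == 0] = 1.0           # break ties arbitrarily
            B[:, k] = np.sign(z)      # optimal bit column given the rest
    return B

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 3))
B_true = np.sign(rng.standard_normal((20, 4)))
Q = B_true @ W                        # a target that some binary B fits exactly
B = np.sign(rng.standard_normal((20, 4)))
before = np.linalg.norm(Q - B @ W)
B = dcc_update(B, W, Q)
after = np.linalg.norm(Q - B @ W)     # never worse than `before`
```

Each column update maximizes the linear term of the objective given the other bits, so the objective value is non-increasing across sweeps.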
3.6.3 Update R
With fixed, we then have
which can be efficiently solved by the algorithm proposed in .
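Orthogonally constrained sub-problems of this shape are often instances of the orthogonal Procrustes problem, min ||A − RB||_F subject to RᵀR = I, whose closed-form solution comes from an SVD. Assuming that is the algorithm referred to (the citation is elided above), a sketch:

```python
import numpy as np

def solve_rotation(A, B):
    """Closed-form minimizer of ||A - R @ B||_F^2 s.t. R.T @ R = I:
    with U, _, Vt = svd(A @ B.T), the optimum is R = U @ Vt."""
    U, _, Vt = np.linalg.svd(A @ B.T)
    return U @ Vt

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 30))
R_true, _ = np.linalg.qr(rng.standard_normal((5, 5)))  # a known random rotation
A = R_true @ B
R = solve_rotation(A, B)  # recovers R_true up to numerical precision
```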
3.6.4 Update W
With the other variables fixed, we arrive at a classic ridge regression problem:
The above equation has a closed-form solution
where is a diagonal matrix of size .
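The closed-form solution takes the familiar normal-equation shape W = (BᵀB + λI)⁻¹BᵀY; a generic sketch (the variable names are illustrative assumptions):

```python
import numpy as np

def ridge_solution(B, Y, lam):
    """Minimizer of ||Y - B @ W||_F^2 + lam * ||W||_F^2:
    W = (B.T @ B + lam * I)^{-1} @ B.T @ Y."""
    L = B.shape[1]
    return np.linalg.solve(B.T @ B + lam * np.eye(L), B.T @ Y)

rng = np.random.default_rng(0)
B = np.sign(rng.standard_normal((200, 32)))  # intermediate binary codes
Y = rng.standard_normal((200, 10))           # embedded supervision
W = ridge_solution(B, Y, lam=1.0)
# Optimality check: the gradient B.T @ (B @ W - Y) + lam * W vanishes at W.
```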
By iteratively updating until convergence, we arrive at a local optimum. The overall algorithm is summarized in Algorithm 1.
3.7 Algorithm Analysis
In this section, we analyze the convergence and time complexity of our algorithm.
3.7.1 Convergence Study
As shown in Algorithm 1
, in each iteration, the updates of all variables decrease the objective function value. We also conducted an empirical study of the convergence property using ImageNet. Specifically, we trained our zero-shot hashing model with seen images randomly sampled from the ImageNet dataset, with label embeddings as supervision. We selected anchors and set the code length to 64 bits. As Figure 4 shows, our algorithm starts with a cost function value of roughly , descends dramatically within only 10 iterations, and reaches a stable local minimum at the 20-th iteration. This phenomenon clearly indicates the efficiency of our algorithm.
3.7.2 Computational Complexity
In each iteration (lines 6-9), the time cost is analyzed as follows. The computation of in Eq. (8) is . The DCC algorithm for updating costs . As to the optimization of the sub-problem in Eq. (16), the time cost is . Finally, the computational cost of updating is . Given that , , and that our algorithm converges within a few iterations (fewer than 10), the overall time cost of our algorithm is . It is worth noting that the dominant operation of our algorithm is matrix multiplication, which can be greatly sped up by using parallel and/or distributed algorithms.
4 Experiments
4.1 Experimental Settings
In our experiments, we employ three real-life image datasets: CIFAR-10 (https://www.cs.toronto.edu/~kriz/cifar.html), ImageNet (http://image-net.org/) and MIRFlickr (http://press.liacs.nl/mirflickr/).
CIFAR-10 consists of images which are manually labelled with 10 classes including airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck, with samples in each class. The classes are completely mutually exclusive, i.e., no overlap between classes (e.g., automobiles and trucks).
ImageNet is an image dataset organized according to the WordNet  hierarchy. The subset of ImageNet for the Large Scale Visual Recognition Challenge 2012 (ILSVRC2012) is used for our experiments, consisting of over 1.2 million Web images, manually labeled with object categories.
MIRFlickr consists of images collected from the social photography site Flickr through its public API. First introduced in 2008, this dataset is widely used in multimedia research. MIRFlickr is a multi-label dataset, with each image annotated with 24 popular tags such as sky, river, etc.
For all image data, we adopted the winning model of the 1000-class ImageNet Large Scale Visual Recognition Challenge 2012  and extracted the fully-connected layer fc7 activations as visual features.
Various metrics are employed to evaluate the performance of different tasks. For image retrieval, we used two traditional metrics, i.e., Precision and Mean Average Precision (MAP). MAP focuses on the ranking of retrieval results, and we report the results over the top retrieved samples. Precision concentrates on retrieval accuracy, and we report the results within Hamming radius .
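For reference, average precision for a single query over a ranked result list can be computed as follows (an illustrative sketch, not the evaluation script behind the reported numbers); MAP is the mean of this quantity over all queries:

```python
def average_precision(ranked_relevance):
    """AP for one query: average of precision@i over the ranks i at which
    relevant items appear. `ranked_relevance` is a 0/1 list in rank order."""
    hits, precisions = 0, []
    for i, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / len(precisions) if precisions else 0.0

# Relevant items at ranks 1 and 3: AP = (1/1 + 2/3) / 2 = 5/6.
```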
We compared our proposed ZSH with four state-of-the-art supervised hashing approaches, including COSDISH , SDH , KSH  and LFH . For all anchor-based algorithms, we randomly sampled anchors from the training dataset. Furthermore, we compared against one of the most representative unsupervised hashing methods, i.e., Inductive Hashing on Manifolds (IMH) .
For all compared approaches, we followed their suggested parameter settings. For ZSH, we empirically set to and to . For the regularization parameters and , we set them to and , respectively. The number of iterations is set to 10. We define the similarity matrix to be computed by
where is the function of searching nearest neighbors. In our experiment, we set .
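A k-nearest-neighbor similarity matrix of the kind defined above can be constructed as below; the Gaussian weighting is our assumption for illustration, since the exact formula is elided here:

```python
import numpy as np

def knn_similarity(X, k, sigma=1.0):
    """S_ij = exp(-||x_i - x_j||^2 / sigma**2) if j is among i's k nearest
    neighbors (or vice versa), else 0; symmetric with a zero diagonal."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    np.fill_diagonal(d2, np.inf)              # a point is not its own neighbor
    S = np.zeros((n, n))
    idx = np.argsort(d2, axis=1)[:, :k]       # k nearest neighbors per sample
    for i in range(n):
        S[i, idx[i]] = np.exp(-d2[i, idx[i]] / sigma ** 2)
    return np.maximum(S, S.T)                 # symmetrize

rng = np.random.default_rng(0)
S = knn_similarity(rng.standard_normal((30, 8)), k=5)
```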
4.2 Results on CIFAR-10
4.2.1 Overall Comparison of Zero-shot Image Retrieval
To evaluate the efficacy of retrieving images of unseen categories, we split CIFAR-10 into a “seen” training set and an “unseen” testing set. In particular, we select truck as the unseen testing category and leave the remaining 9 categories as the seen training set. For all compared algorithms, we randomly sample images for learning hash functions. For testing purposes, we randomly select images from the unseen category as query images, and the remaining test images together with the images of seen categories are combined to form the retrieval database.
The performance of all compared approaches w.r.t. different code lengths (i.e., ) is illustrated in Figure 6. As we can see, the proposed ZSH outperforms all the other hashing algorithms in terms of MAP at all code lengths. As to Precision, ZSH still shows superior image retrieval performance in most cases. The underlying reason is that our method not only utilizes the inherent semantic relationships among labels to transfer supervisory knowledge, but also preserves the discrete and structural properties of data in the learning of hash codes and hash functions. An interesting observation is that IMH, an unsupervised method, gains competitive or even better retrieval results in terms of Precision compared to some supervised methods such as KSH and SDH. While unsupervised methods encode images solely with the distributional properties of the feature space, the supervised ones may be misled by independent semantic labels in the learning process.
Besides, MAP increases rapidly for all methods when the code length varies from to , and then reaches a slow-growth stage from 64 bits to 128 bits. When the code length is short, more bits are required to guarantee descriptive and discriminative power. However, once the encoding space is large enough (e.g., 64 bits), the expressive ability saturates, and providing more bits cannot significantly improve the performance. As to Precision, hashing performance significantly deteriorates as the code length grows beyond . Recall that our search radius is empirically set to 2, forming a hyper-ball of radius 2 in the Hamming space. When the code length increases from to , the significant improvement in retrieval ability counteracts the increased search difficulty. However, as the Hamming space becomes larger, the search difficulty keeps growing, thereby degrading the Precision performance. Therefore, as a trade-off between efficiency and effectiveness, a moderate code length should be chosen.
4.2.2 Effect of Different Unseen Category
In this experiment, we aim to evaluate the performance of zero-shot image retrieval on different unseen categories. The experimental settings are the same as those in the previous subsection. Figure 7 illustrates the MAP and Precision performance of ZSH using each individual label as the unseen testing data.
We observe that zero-shot image retrieval performance varies from one class to another, reaching its peak at bird and its bottom at automobile. Intuitively, if an unseen class is semantically closer to the seen categories, more relevant supervisory knowledge can be transferred from the word embedding space to boost retrieval performance. To dig deeper into the reason behind the fluctuation of performance on different unseen objects, we compute the average cosine similarity between each unseen category and the seen categories, and list the corresponding MAP in Table 1.
Category | Average Cosine Similarity | MAP
We observe that the MAP performance is positively related to the average cosine similarity. For instance, categories with larger cosine similarity (e.g., dog, cat) perform relatively well, while those with smaller similarity (e.g., airplane, automobile) achieve relatively poor performance. This observation implies that, in order to achieve satisfactory retrieval results, unseen classes should have sufficient correlation with seen ones.
As shown in Figure 7, we also compare the effects of embedded labels and binary labels. The performance of embedded labels is obviously better than that of binary labels. The underlying reason is that the embedding space can help to capture the relationship between seen and unseen categories for transferring supervisory knowledge. In contrast, binary labels neglect semantic correlations, thereby leading to irregular fluctuations of retrieval performance.
4.2.3 Effect of Seen Category Ratio
In this experiment, we evaluate the performance of our proposed ZSH w.r.t. different numbers of seen categories. Specifically, we vary the ratio of seen categories in the training set from to . For each ratio, we randomly sample images from the seen categories for training. Further, we randomly select images from the unseen set as queries to search in the remaining images. Note that when the ratio of seen categories decreases to , we use all samples of that class as the training set.
We report the experimental results in Figure 8, from which we have the following observations: (1) both MAP and Precision improve as the ratio of seen categories grows; (2) as the ratio increases from to , we see a dramatic leap in retrieval performance, followed by a relatively slight improvement from to . We conjecture that by observing more “seen” categories, we have a higher chance of finding relevant supervision for the unseen class, which guides the learning of better intermediate hash codes, thereby simultaneously improving the quality of the hash functions.
4.2.4 Effect of Training Size
This part of the experiment focuses on evaluating the effect of training size on the search quality of ZSH. We select truck as the unseen object and vary the size of training data in the range of . The results are shown in Figure 9. As we can see, when the size increases from to , we observe a rapid rise in Precision. Nonetheless, when fed with more training data, ZSH does not gain a noticeable performance boost. To balance training efficiency and effectiveness, in the remaining experiments we consistently set the training size to .
4.3 Results on ImageNet
4.3.1 Overall Comparison of Zero-shot Image Retrieval
In this part, we evaluate our proposed ZSH on zero-shot image retrieval against other state-of-the-art methods using the Large Scale Visual Recognition Challenge 2012 (ILSVRC2012) dataset. Recall that the ILSVRC2012 dataset contains more than million images tagged with synsets without any overlap. For evaluation purposes, we randomly choose categories that have corresponding word embeddings learned from the Wikipedia text corpus, which gives us a set of roughly images. We split the data into a training set ( seen categories) and a testing set ( unseen categories). For all compared algorithms, we randomly select images of seen categories for training. As for image queries, we randomly sample images from the unseen categories. We use the learned hash functions to encode all the remaining images to form the retrieval database.
The performance of our proposed ZSH and four other state-of-the-art supervised hashing methods with different code lengths is reported in Figure 10. As we can see, ZSH outperforms all other competitors in most cases. As the code length varies from to , we observe a variation tendency on ImageNet similar to that on CIFAR-10. This phenomenon again implies that the code length should be chosen as a trade-off to guarantee retrieval performance.
4.3.2 Image Retrieval in Related Categories
In the zero-shot image retrieval scenario, we expect that even if we fail to retrieve relevant images of the same category, we can still obtain semantically related images. For instance, if the query image depicts a cat, we may prefer to retrieve images of dogs rather than images of cars. Our proposed ZSH utilizes semantic embedding to establish connections between semantically similar labels in the embedded space. In this way, the supervised knowledge of seen categories can be transferred into hash functions that effectively encode images of unseen categories.
Since we need to search for more related categories, all remaining images of both seen and unseen categories are used to form the retrieval database. All other settings are the same as in Section 4.3.1. To evaluate the performance of retrieving related categories, we use two modified metrics, named MAP and Precision, which are defined as
$$\text{MAP} = \frac{1}{|Q|} \sum_{q \in Q} \text{AP}_r(q),$$
where MAP is calculated based on the top $K$ retrieved results and $\text{AP}_r(q)$ is the average precision based on the related results, calculated by
$$\text{AP}_r(q) = \frac{1}{N_r} \sum_{k=1}^{K} \frac{N_r(k)}{k}\, \delta(k),$$
where $N_r$ is the number of related images in the top $K$ retrieved results, $N_r(k)$ is the number of related images among the top $k$ results, and $\delta(k) = 1$ if the $k$-th result is related to the query and $0$ otherwise. Correspondingly,
$$\text{Precision} = \frac{R_r}{R_t},$$
where $R_r$ and $R_t$ are the number of related examples retrieved under Hamming radius 2 and the total number of examples retrieved under Hamming radius 2, respectively. Using WordNet, which is a lexical database for the English language, we define a query $q$ and a retrieved object $r$ to be related if: 1) $q$ and $r$ are not of the same category, and 2) $r$ can reach $q$ on WordNet within 5 hops.
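As a concrete reading of these definitions, the sketch below computes the two modified metrics given per-query relatedness flags; in the paper such flags come from the WordNet 5-hop rule, while here the function names (`related_map`, `related_precision_r2`) and all inputs are illustrative assumptions:

```python
import numpy as np

def related_map(ranked_related, K):
    """MAP over the top-K results, where relevance means "related category".
    ranked_related: one list per query of booleans, True if the k-th
    retrieved item is related to the query."""
    aps = []
    for flags in ranked_related:
        flags = flags[:K]
        n_rel = sum(flags)            # N_r: related items in top K
        if n_rel == 0:
            aps.append(0.0)
            continue
        hits, prec_sum = 0, 0.0
        for k, rel in enumerate(flags, start=1):
            if rel:                   # delta(k) = 1
                hits += 1             # N_r(k)
                prec_sum += hits / k
        aps.append(prec_sum / n_rel)
    return float(np.mean(aps))

def related_precision_r2(codes_db, code_q, related_mask):
    """Precision under Hamming radius 2: among database codes within
    Hamming distance 2 of the query code, the fraction that are related."""
    dists = np.count_nonzero(codes_db != code_q, axis=1)
    ball = dists <= 2                 # R_t: items inside the radius-2 ball
    if ball.sum() == 0:
        return 0.0
    return float((ball & related_mask).sum() / ball.sum())

# Toy usage: one query whose 1st and 3rd results are related.
print(related_map([[True, False, True, False]], K=4))
```

The per-rank accumulation of `hits / k` mirrors the $N_r(k)/k$ term, so the code is a direct transcription of the two formulas under these assumed inputs.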
In practice, we set and . Figure 11 shows the experimental results. We can see that, in terms of MAP, our method outperforms the other methods at every code length. In terms of Precision, our proposed ZSH achieves , , at 32 bits, 64 bits and 96 bits, significantly outperforming the second best method. This observation indicates that ZSH is capable of retrieving semantically similar images from the most related categories.
4.4 Results on MIRFlickr
In real-world pictures, especially user-generated photos, one picture is often associated with multiple tags. To examine the practical efficacy of our proposed ZSH, in this part we conduct an extra experiment on a real-life multi-label dataset, i.e., MIRFlickr, which contains images downloaded from the social photography site Flickr. Each image is associated with tags. In a multi-label image dataset, different categories share overlapping images, which makes it difficult to divide the dataset into a training set and a testing set. Hence, we employ ImageNet as an auxiliary dataset to train our hash functions and evaluate the zero-shot image retrieval performance on MIRFlickr. Specifically, from the ILSVRC2012 dataset we select categories that do not overlap with the tags in MIRFlickr. For fair comparison, all hashing approaches use randomly sampled images for training. After the hash functions are learned, we directly apply them to transform the MIRFlickr images into binary codes. We then sample images as queries and search in the remaining images. We regard retrieved images sharing at least two tags with the query as true neighbors, and compute MAP on the top retrieved results and Precision under Hamming distance 2. Figure 12 illustrates the results of our ZSH and the other compared algorithms on MIRFlickr. In the left sub-figure, we can see that, with different code lengths, ZSH consistently achieves the best MAP performance among all compared algorithms. As the code length increases, the MAP performance of each algorithm keeps increasing, reaching at bits, which outperforms the second best hashing method COSDISH by at the same length. In terms of Precision, ZSH exceeds all other methods in most cases. Similar to CIFAR-10 and ImageNet, we can see a variation pattern with an increasing trend from to and a performance drop from to .
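The tag-overlap ground-truth rule described above can be sketched as follows; the function name `true_neighbors` and the toy tag lists are our own illustration, not from the paper:

```python
def true_neighbors(query_tags, db_tags, min_shared=2):
    """Indices of database images sharing at least `min_shared` tags
    with the query -- the true-neighbor rule used on MIRFlickr."""
    q = set(query_tags)
    return [i for i, tags in enumerate(db_tags)
            if len(q & set(tags)) >= min_shared]

db = [['sky', 'cloud'],          # shares sky, cloud -> true neighbor
      ['sky', 'car'],            # shares only sky   -> not a neighbor
      ['sea', 'cloud', 'boat']]  # shares sea, cloud -> true neighbor
print(true_neighbors(['sky', 'cloud', 'sea'], db))  # -> [0, 2]
```

Such a neighbor list would then feed the MAP and Hamming-radius-2 Precision computations reported in Figure 12.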
The promising performance on MIRFlickr demonstrates the potential of ZSH in indexing and searching real-life image data.
With the explosion of newly-emerging concepts and multimedia data on the Web, it is impossible to supply existing supervised hashing methods with sufficient labeled data in time. In this paper, we studied the problem of how to encode images of unseen categories using hash functions learned from a limited number of seen classes. We proposed a novel hashing scheme, termed zero-shot hashing (ZSH), which is capable of transferring supervised knowledge from seen categories to unseen ones. Independent 0/1-form labels were projected into an off-the-shelf embedding space with abundant semantics, where label semantic correlations can be fully characterized and quantified. Considering the issues of domain difference and semantic shift, we further narrowed the gap between binary codes and high-level semantics via a semantic alignment operation. Specifically, we rotated the embedding space to make the supervised knowledge more suitable for learning high-quality hash codes. Besides, we also preserved the local structural property and the discrete nature of hash codes in the ZSH model. An effective algorithm was designed to optimize the model in an iterative manner, and the empirical study showed its convergence and efficiency. We evaluated our proposed ZSH approach on three real-world image datasets, including CIFAR-10, ImageNet and MIRFlickr. The experimental results demonstrated the superiority of ZSH over several state-of-the-art hashing approaches on the zero-shot image retrieval task.
In the future, we plan to enhance the exploration of label semantic correlations by integrating knowledge from multiple sources, including textual corpora and visual clues. We expect this will compensate for the incomplete representation of each individual modality, thereby fundamentally alleviating the problems of domain difference and semantic shift.
-  L. Cao, Z. Li, Y. Mu, and S.-F. Chang. Submodular video hashing: a unified framework towards video pooling and indexing. In ACM Multimedia, 2012.
-  G. Castañón, Y. Chen, Z. Zhang, and V. Saligrama. Efficient activity retrieval through semantic graph queries. In ACM Multimedia, 2015.
-  J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In IEEE CVPR, 2009.
-  A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth. Describing objects by their attributes. In IEEE CVPR, 2009.
-  A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, T. Mikolov, et al. Devise: A deep visual-semantic embedding model. In NIPS, 2013.
-  A. Gionis, P. Indyk, R. Motwani, et al. Similarity search in high dimensions via hashing. In VLDB, 1999.
-  Y. Gong and S. Lazebnik. Iterative quantization: A procrustean approach to learning binary codes. In IEEE CVPR, 2011.
-  Y. Hu, Z. Jin, H. Ren, D. Cai, and X. He. Iterative multi-view hashing for cross media indexing. In ACM Multimedia, 2014.
-  E. H. Huang, R. Socher, C. D. Manning, and A. Y. Ng. Improving word representations via global context and multiple word prototypes. In ACL, 2012.
-  D. Jayaraman and K. Grauman. Zero-shot recognition with unreliable attributes. In NIPS, 2014.
-  Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In ACM Multimedia, 2014.
-  W.-C. Kang, W.-J. Li, and Z.-H. Zhou. Column sampling based discrete supervised hashing. In AAAI, 2016.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
-  C. H. Lampert, H. Nickisch, and S. Harmeling. Attribute-based classification for zero-shot visual object categorization. IEEE TPAMI, 2014.
-  H. Larochelle, D. Erhan, and Y. Bengio. Zero-data learning of new tasks. In AAAI, 2008.
-  W. Liu, C. Mu, S. Kumar, and S.-F. Chang. Discrete graph hashing. In NIPS, 2014.
-  W. Liu, J. Wang, R. Ji, Y.-G. Jiang, and S.-F. Chang. Supervised hashing with kernels. In IEEE CVPR, 2012.
-  G. A. Miller. Wordnet: a lexical database for English. Communications of the ACM, 1995.
-  M. Norouzi and D. J. Fleet. Minimal loss hashing for compact binary codes. ICML, 2011.
-  M. Norouzi, T. Mikolov, S. Bengio, Y. Singer, J. Shlens, A. Frome, G. S. Corrado, and J. Dean. Zero-shot learning by convex combination of semantic embeddings. arXiv preprint arXiv:1312.5650, 2013.
-  M. Palatucci, D. Pomerleau, G. E. Hinton, and T. M. Mitchell. Zero-shot learning with semantic output codes. In NIPS, 2009.
-  D. Parikh and K. Grauman. Relative attributes. In IEEE ICCV, 2011.
-  S. Petrović, M. Osborne, and V. Lavrenko. Streaming first story detection with application to twitter. In ACL, 2010.
-  F. Shen, W. Liu, S. Zhang, Y. Yang, and H. T. Shen. Learning binary codes for maximum inner product search. In ICCV, pages 4148–4156, 2015.
-  F. Shen, C. Shen, W. Liu, and H. T. Shen. Supervised discrete hashing. In IEEE CVPR, 2015.
-  F. Shen, C. Shen, Q. Shi, A. Hengel, and Z. Tang. Inductive hashing on manifolds. In IEEE CVPR, 2013.
-  R. Socher, M. Ganjoo, C. D. Manning, and A. Ng. Zero-shot learning through cross-modal transfer. In NIPS, 2013.
-  J. Turian, L. Ratinov, and Y. Bengio. Word representations: a simple and general method for semi-supervised learning. In ACL, 2010.
-  J. Wang, S. Kumar, and S.-F. Chang. Semi-supervised hashing for scalable image retrieval. In IEEE CVPR, 2010.
-  J. Wang, H. T. Shen, J. Song, and J. Ji. Hashing for similarity search: A survey. arXiv preprint arXiv:1408.2927, 2014.
-  Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. In NIPS, 2009.
-  Z. Wen and W. Yin. A feasible method for optimization with orthogonality constraints. Mathematical Programming, 2013.
-  F. Wu, Z. Yu, Y. Yang, S. Tang, Y. Zhang, and Y. Zhuang. Sparse multi-modal hashing. IEEE TMM, 2014.
-  T. T. Wu and K. Lange. Coordinate descent algorithms for lasso penalized regression. The Annals of Applied Statistics, 2008.
-  R. Xia, Y. Pan, H. Lai, C. Liu, and S. Yan. Supervised hashing for image retrieval via image representation learning. In AAAI, 2014.
-  Y. Yang, F. Shen, H. T. Shen, H. Li, and X. Li. Robust discrete spectral hashing for large-scale image semantic indexing. IEEE TBD, 1(4):162–171, 2015.
-  Y. Yang, Z.-J. Zha, Y. Gao, X. Zhu, and T.-S. Chua. Exploiting web images for semantic video indexing via robust sample-specific loss. IEEE TMM, 16(6):1677–1689, 2014.
-  Y. Yang, H. Zhang, M. Zhang, F. Shen, and X. Li. Visual coding in a semantic hierarchy. In ACM Multimedia, pages 59–68, 2015.
-  P. Zhang, W. Zhang, W.-J. Li, and M. Guo. Supervised hashing with latent factor models. In ACM SIGIR, 2014.
-  X. Zhu, Z. Huang, H. T. Shen, and X. Zhao. Linear cross-modal hashing for efficient multimedia search. In ACM Multimedia, 2013.