In the past decade, the field of image recognition has been revolutionized by the emergence of learned deep representations [1, 2, 3]. However, most popular recognition frameworks rely on a sufficient number of training samples, and the learned recognition algorithms can only operate under the limited condition that the test categories are included in the training set. In reality, training a dedicated model for every class is infeasible because of insufficient training instances. On one hand, the frequencies of observed objects follow a long-tailed distribution [4, 5]: a few popular categories have a large number of instances, while many other categories have few or even no instances for training. On the other hand, new concepts keep emerging, and collecting and labeling a sufficiently large training set for each of them is difficult and expensive. In these circumstances, training an effective classification system for newly appearing categories that are not included in the training set is necessary for deploying the learned model in real-world applications.
Inspired by how humans recognize new objects, zero-shot learning (ZSL) [6, 7, 8] has been proposed and has received a significant amount of interest. Humans are able to recognize new objects with the help of attribute descriptions and related background knowledge. For instance, with knowledge of "horse" and "black-and-white stripe", when we are told that "zebras are horse-like animals united by their distinctive black-and-white striped coats", we can recognize a zebra even if we have never seen one before. This is because we can associate the side information "horse-like" and "black-and-white stripe" with zebras. Similarly, the key idea of ZSL is to capture the relationship between the knowledge contained in the seen and unseen instances with the help of side information, also called auxiliary information.
Auxiliary information in ZSL is usually expressed in a high-dimensional vector space, called the semantic space, in which seen and unseen classes are related. Class attribute vectors [9, 10, 11] and word vectors [12, 13, 14] are the most widely adopted semantic representations in this space. Given a set of visual features and semantic representations of the seen classes, the task of ZSL is to learn a joint embedding space in which visual features and semantic representations can be compared directly. With the learned projection functions, the visual features and semantic representations of the unseen test classes can be mapped into the embedding space, where recognition is conducted by a simple nearest-neighbor search over the class prototypes for each test instance.
Recent research has found that taking the visual space as the embedding space is favorable for ZSL because it alleviates the hubness problem [15, 16]. However, the visual instance features are discretely distributed in the visual space, and each class contains numerous instance features. This means that the embedded semantic vector of one class should be close to every visual instance feature of the same class. The problem is that the visual features learned by CNNs are not always discriminative enough to separate intra-class from inter-class relationships. As illustrated in Fig. 1, the intra-class distance is sometimes even larger than the inter-class distance, which significantly inhibits the learning of the embedding functions.
Recognizing this situation, this work proposes two methods, which learn visual prototypes and optimize the visual data structure, respectively. Concretely, the visual prototype based method learns a class prototype for the visual features of each class, so that an embedded semantic feature only needs to be close to its corresponding visual prototype rather than to every visual feature of the same class. In addition, the visual prototype learned with the cross-entropy loss is more discriminative than the per-class visual feature centroid obtained by simple averaging. As for the second method, we propose a flexible multilayer perceptron framework that not only maps both visual features and semantic representations into an intermediate embedding space, but also ensures a better embedded visual data structure. In this method, the network is trained with a ranking loss and a structure optimizing loss. Specifically, the ranking loss encourages matched image feature and attribute representation pairs to have high similarities, while the structure optimizing loss makes image pairs from the same category closer than pairs from different categories. To sum up, our contributions are:
Propose a visual prototype based method for ZSL, in which the visual space is composed of visual feature prototypes instead of the visual instance features. Trained with the cross-entropy loss, the proposed learnable visual prototypes are more discriminative than the visual centroids.
Propose a simple and effective visual space optimization framework for ZSL, which is able to optimize the distribution structure of visual features during the embedding process. Combined with the proposed structure optimizing function, two kinds of embedding losses, a simple ranking loss and a bi-directional ranking loss, are considered for ZSL.
The rest of this paper is organized as follows: Section 2 covers related work on zero-shot learning, embedding spaces, and information preservation in zero-shot learning. Sections 3 and 4 describe the two proposed approaches in detail. Section 5 presents the experimental evaluation and related discussions. Finally, the paper is concluded in Section 6.
II Related works
In this section, we first give an overview of zero-shot learning, and then we briefly discuss the embedding space and data structure preservation in the zero-shot learning task.
II-A Zero-shot learning
In the ZSL task, the seen categories in the training set and the unseen categories in the testing set are disjoint. In fact, ZSL can be seen as a subfield of transfer learning [20, 21], as its key idea is to transfer the knowledge contained in the training resources to the task of classifying testing instances. Early ZSL works [6, 9, 22] follow an intuitive route to object recognition, using attributes to infer the label of an unseen test image. Recently, learning an embedding function that maps the semantic vectors and visual features into an embedding space, where they can be compared directly, has shown outstanding performance and has become the most popular approach [23, 24, 25, 26]. After the projection, nearest-neighbor search can be used to find the most similar class attribute vector for the test instance, and the discovered attribute corresponds to the most likely class. The embedding based approach is adopted in this work.
Most recently, unseen class information has been used to obtain better performance in the ZSL task [27, 28, 29, 30, 31, 32, 33]. For instance, in one such work, unseen class information is employed to assist the alignment of visual-semantic structures. As another example, some recent works [29, 30, 31, 32, 33] adopt generative models to synthesize labeled examples for the unseen classes, and these examples in turn help to train a better projection model. Furthermore, a related scenario is transductive zero-shot learning [34, 35, 36, 23], which assumes that unlabeled samples from the unseen classes are available during training. However, these works to some extent breach the strict ZSL setting that testing resources should not be accessed in the training stage. In our work, we make no use of unseen class information and assume that only seen-class resources are available at training time.
Compared with strict ZSL, there is a more realistic and challenging task called generalized zero-shot learning (GZSL), whose target labels include both seen and unseen categories. The GZSL problem was raised at the very beginning of ZSL research, and most of the above-mentioned studies evaluate their methods under both ZSL and GZSL settings. In this work, we also take GZSL into account.
Fig. 2: Illustration of the proposed methods. (a) Visual prototype based method. The prototypes are learned via backpropagation. With the learned visual prototypes, the semantic representation of each class is embedded to its corresponding visual prototype rather than to numerous instance features. (b) Visual feature structure optimization based method. Both semantic representations and visual features are embedded into an intermediate space. The dimensionality of the embedding space is the same as that of the visual space.
II-B Embedding space
Different spaces have been used as the embedding space. Owing to the advantage that each class is represented by a single semantic vector in the semantic space, taking the semantic space as the embedding space helps preserve a good embedded visual data structure. On the downside, however, this strategy significantly shrinks the variance of the data points and thus aggravates the hubness problem [15, 16]. To alleviate this problem, some recent works [16, 41] choose the visual space as the embedding space and map the semantic vectors into the visual space. However, using the visual space as the embedding space faces a new problem: instance features in the visual space are not distributed in an ideal structure, since inter-class similarities can be large and intra-class similarities small.
A common intermediate embedding space is also popular in the literature [42, 43]. Besides, in some works [27, 44, 23], more than one projection can be used at test time. For instance, in one such work, an aligned intermediate space is learned using the class prototypes, and recognition can be conducted in any of the three spaces, namely the visual space, the semantic space, and the intermediate space.
Among these embedding strategies, the intermediate embedding space makes it possible to adjust the data structures of both the semantic vectors and the visual features. Thus, the intermediate embedding space strategy is adopted in the proposed visual space optimization based method. Considering the intrinsic superiority of the visual space as the embedding space in alleviating the hubness problem, the intermediate space in this method is designed to stay close to the visual space rather than sitting midway between the visual and semantic spaces. Besides, in order to take the visual space as the embedding space with a more discriminative structure, the other method proposed in this work learns visual feature prototypes, so that each visual class can be represented by one visual prototype instead of numerous discrete visual features.
II-C Structure preservation
Since there is a huge gap between the visual and semantic spaces, the learned model tends not to discover the intrinsic topological structure when mapping the data into the embedding space. Some works [25, 27, 44, 23, 45, 46, 47, 48, 49, 24, 50] have been conducted to keep the data structure during the projection. Manifold learning is a popular method for keeping the data structure in ZSL [45, 46, 47, 48, 49]. Taking the visual space as the embedding space, one work introduces an auxiliary latent-embedding space with manifold regularization to reconcile the semantic space with the visual feature space, which preserves the intrinsic structural information of both the visual and semantic spaces.
The encoder-decoder paradigm has also been adopted to preserve the data structure in recent works [25, 44, 23, 24, 50]. In SAE, the encoder learns a projection from the feature space to the semantic space and the decoder tries to reconstruct the original visual features. At test time, unseen visual features can be projected to the semantic space by the encoder, or the reverse projection can be realized by the decoder. Based on this work, LESAE adds a low-rank constraint on the learned embedding space in the encoder and obtains better performance. DIPL extends the encoder-decoder method to a transductive ZSL task. Another work carries out the encoder-decoder process with a multilayer perceptron framework and considers three class relations based on semantic similarity, namely same class, semantically similar class, and semantically dissimilar class, to preserve the semantic vector structure in the embedding space.
However, most existing ZSL methods, which devote much attention to keeping the original visual structure, neglect the indistinguishable distribution of visual features. In this work, we do not aim to preserve the original visual feature structure as previous works do, but to optimize it. As illustrated in Fig. 2, we propose two strategies to address the indistinguishable distribution of features in the visual space. One is to learn visual prototypes, with which each class in the visual space can be represented by a single visual prototype feature rather than by discrete instance features. The other is to optimize the visual data structure so that embedded visual feature pairs from the same class become closer while instances from different classes are separated by clear boundaries.
III ZSL with visual prototypes
III-A Problem definition
Let $A^s = \{a^s_c\}_{c=1}^{C_s}$ denote a set of $d_a$-dimensional semantic representations of the seen classes $\mathcal{Y}^s$, and let $A^u = \{a^u_c\}_{c=1}^{C_u}$ denote the semantic vectors of the unseen classes $\mathcal{Y}^u$. The seen and unseen classes are disjoint, i.e., $\mathcal{Y}^s \cap \mathcal{Y}^u = \emptyset$. Each $x \in \mathbb{R}^{d_v}$ is a $d_v$-dimensional image feature from one of the seen classes. The training set is given as $\mathcal{D}^{tr} = \{(x_i, a_{y_i}, y_i)\}_{i=1}^{N}$, where $y_i \in \mathcal{Y}^s$ is the label of $x_i$, $a_{y_i} \in A^s$ is the semantic vector of the $i$-th image, and $N$ denotes the total number of samples in the training set. Similarly, the testing set with $N'$ samples is given as $\mathcal{D}^{te} = \{(x'_j, a_{z_j}, z_j)\}_{j=1}^{N'}$, where $x'_j$ is the visual vector of the $j$-th image in the testing set and $a_{z_j} \in A^u$ is the corresponding unseen semantic representation with label $z_j \in \mathcal{Y}^u$. Given a new sample from an unseen class, the goal of ZSL is to predict its correct class with a model trained only on samples from the seen classes.
III-B Learning visual prototypes
Given that each class in the visual space is composed of numerous instance features, we use a single visual prototype to represent the visual features of each class. Intuitively, the per-class centroids obtained by averaging the visual features of each class could be adopted directly as visual prototypes. However, due to the defective instance feature distribution shown in Fig. 1, such an averaged centroid is also not discriminative enough, and it may lie close to several instances from other classes. Therefore, we propose a learnable strategy and train the visual prototypes via backpropagation. The visual prototypes are denoted as $P = \{p_c\}_{c=1}^{C_s}$, where $c$ represents the class index.
We cast the visual prototype learning process as a prototype-based classification problem. The difference is that the visual features are left untouched and only the prototypes themselves are updated. Given a visual feature $x_i$, its similarity to the visual prototype $p_c$ can be written as
$$ s_{i,c} = f(x_i, p_c), \qquad c = 1, \dots, C_s, $$
where $f(\cdot,\cdot)$ is a similarity function such as the cosine similarity or the inner product; the latter is adopted in this work. We then use the softmax function to obtain the final prediction confidence:
$$ \hat{y}_{i,c} = \frac{\exp(s_{i,c})}{\sum_{c'=1}^{C_s} \exp(s_{i,c'})}. $$
With the prediction confidences and the corresponding labels, the visual prototypes can be trained with the standard cross-entropy loss, defined as
$$ L_{p} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{c=1}^{C_s} \mathbb{1}[y_i = c]\,\log \hat{y}_{i,c}, $$
where $\mathbb{1}[y_i = c]$ is the indicator function for label $y_i$ (i.e., a one-hot encoded vector). It is worth noting that, in contrast to the traditional classifier training process, we update the visual prototypes rather than the visual features or any network that processes the visual features.
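To make this step concrete, below is a minimal PyTorch sketch of prototype learning; the module name, tensor shapes, learning rate, and optimizer choice are illustrative assumptions rather than the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualPrototypes(nn.Module):
    """Learnable visual prototypes, one per seen class (a sketch, not the exact implementation)."""

    def __init__(self, init_centroids):
        # init_centroids: (num_seen_classes, feat_dim) tensor of per-class feature means
        super().__init__()
        self.prototypes = nn.Parameter(init_centroids.clone())

    def forward(self, feats):
        # Inner-product similarity between the frozen visual features and the prototypes.
        return feats @ self.prototypes.t()          # (batch, num_seen_classes)

# Toy usage: only the prototypes receive gradients; the visual features stay fixed.
num_classes, feat_dim = 40, 2048
centroids = torch.randn(num_classes, feat_dim)       # per-class feature means in practice
model = VisualPrototypes(centroids)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

feats = torch.randn(64, feat_dim)                     # a batch of (frozen) CNN features
labels = torch.randint(0, num_classes, (64,))
logits = model(feats)                                 # similarities used as logits
loss = F.cross_entropy(logits, labels)                # softmax + cross-entropy over prototypes
loss.backward()
optimizer.step()
```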
With the learned visual prototypes, the semantic vectors can be projected into the corresponding visual prototypes in the visual space via a multilayer perceptron.
III-C Embedding the semantic representations
With the learned visual prototypes, we only need to make the embedded semantic vectors close to their corresponding visual prototypes when embedding the semantic representations into the visual space. Thus, the objective function for the embedding can be written as
$$ \min_{\phi}\; \sum_{c=1}^{C_s} \big\| \phi(a^s_c) - p_c \big\|_2^2, $$
where $\phi(\cdot)$ is the embedding function of the semantic vectors. In this work, we adopt a multilayer perceptron network as the embedding function, and the loss function can be written as
$$ L_{e} = \sum_{c=1}^{C_s} \big\| W_2\,\sigma(W_1 a^s_c) - p_c \big\|_2^2 + \lambda \big( \|W_1\|_F^2 + \|W_2\|_F^2 \big), $$
where $W_1$ and $W_2$ are the weights of the first and second FC layers respectively, and the hidden-layer dimension determines the size of $W_1$. $\sigma(\cdot)$ denotes the ReLU activation. $\lambda$ is the hyperparameter weighting the parameter regularization loss.
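The semantic embedding branch and its training objective can be sketched as follows; the layer widths, learning rate, and weight-decay value (which stands in for the parameter regularization term) are placeholder assumptions for illustration.

```python
import torch
import torch.nn as nn

class SemanticEmbed(nn.Module):
    """Two FC + ReLU layers mapping class attributes into the visual space (a sketch)."""

    def __init__(self, attr_dim=85, hidden_dim=1024, visual_dim=2048):
        super().__init__()
        self.fc1 = nn.Linear(attr_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, visual_dim)

    def forward(self, attrs):
        return self.fc2(torch.relu(self.fc1(attrs)))

embed = SemanticEmbed()
# weight_decay plays the role of the parameter regularization term above.
optimizer = torch.optim.Adam(embed.parameters(), lr=1e-4, weight_decay=1e-4)

attrs = torch.randn(40, 85)         # one attribute vector per seen class (toy values)
prototypes = torch.randn(40, 2048)  # learned visual prototypes, kept fixed in this step

pred = embed(attrs)
loss = ((pred - prototypes) ** 2).sum(dim=1).mean()  # pull each embedded attribute to its prototype
loss.backward()
optimizer.step()
```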
Once the semantic embedding function has been learned in the training stage, recognition on the testing set can be performed. Given a test image with visual feature $x'$, recognition is achieved by finding the unseen class whose embedded semantic vector is nearest:
$$ \hat{z} = \arg\min_{c \in \mathcal{Y}^u} \big\| x' - \phi(a^u_c) \big\|_2 . $$
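At test time the prediction therefore reduces to a nearest-neighbor search, sketched below assuming the `SemanticEmbed` module from the previous sketch.

```python
import torch

def predict_zsl(test_feats, unseen_attrs, embed):
    """Assign each test feature to the unseen class whose embedded attribute is nearest (a sketch)."""
    with torch.no_grad():
        unseen_anchors = embed(unseen_attrs)             # (num_unseen, visual_dim)
        dists = torch.cdist(test_feats, unseen_anchors)  # pairwise Euclidean distances
    return dists.argmin(dim=1)                           # index into the unseen class list

# Example: preds = predict_zsl(test_feats, unseen_attrs, embed)
```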
IV ZSL with visual data structure optimization
IV-A Network architecture
The aim of this method is to embed the visual features and semantic representations into a common intermediate embedding space while simultaneously optimizing the structure of the visual data. Thus, two kinds of loss functions are involved. One is the embedding loss, which pulls matched pairs of visual features and semantic vectors closer. The other is the structure optimizing loss, which optimizes the visual data structure. For the embedding loss, we consider two specific loss functions: a simple ranking loss and a bi-directional ranking loss.
According to the specific embedding loss adopted, we create two different network architectures, as shown in Fig. 3(a) and Fig. 3(b). The two networks share the same visual-branch and semantic-branch architectures. Both the semantic and visual embeddings are realized by a multilayer perceptron framework that is the same as the one used for semantic embedding in the prototype based method. Specifically, the multilayer perceptron takes a $d_a$-dimensional semantic vector or a $d_v$-dimensional visual representation as input and, after two fully connected (FC) + Rectified Linear Unit (ReLU) layers, outputs a $d_e$-dimensional embedding vector.
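The two-branch network can be sketched as follows; the branch widths and the embedding dimensionality are assumptions chosen only for illustration.

```python
import torch
import torch.nn as nn

class MLPBranch(nn.Module):
    """A generic two-FC-layer + ReLU embedding branch (illustrative sizes)."""

    def __init__(self, in_dim, hidden_dim, embed_dim):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, embed_dim)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

class DualBranchNet(nn.Module):
    """Visual branch + semantic branch mapping into a shared intermediate space (a sketch)."""

    def __init__(self, visual_dim=2048, attr_dim=85, embed_dim=2048):
        super().__init__()
        self.visual_branch = MLPBranch(visual_dim, visual_dim, embed_dim)
        self.semantic_branch = MLPBranch(attr_dim, 1024, embed_dim)

    def forward(self, feats, attrs):
        return self.visual_branch(feats), self.semantic_branch(attrs)
```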
IV-B Embedding loss
We consider two kinds of functions for the embedding loss. One is called the simple ranking loss and the other is the bi-directional ranking loss.
IV-B1 Simple ranking loss.
Given a training set with matched pairs $(x_i, a_{y_i})$, the objective is to make each embedded visual feature close to its matched embedded semantic vector:
$$ \min_{\theta, \phi}\; \sum_{i=1}^{N} \big\| \theta(x_i) - \phi(a_{y_i}) \big\|_2^2, $$
where $\theta(\cdot)$ is the embedding function of the visual features and $\phi(\cdot)$ is the embedding function of the semantic vectors. According to Fig. 3(a), the simple ranking loss realizing this objective is
$$ L_{SR} = \sum_{i=1}^{N} \big\| V_2\,\sigma(V_1 x_i) - W_2\,\sigma(W_1 a_{y_i}) \big\|_2^2, $$
where $V_1$ and $V_2$ are the first and second FC layers of the visual embedding branch. As in Eq. (5), $W_1$ and $W_2$ are the FC layers of the semantic embedding branch; the embedding-space dimensionality and the hidden-layer dimensions determine the sizes of these matrices, and $\sigma(\cdot)$ denotes the ReLU activation.
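Under this formulation, the simple ranking loss can be sketched as a matched-pair distance term; the squared Euclidean distance used below is an assumption consistent with the objective above.

```python
import torch

def simple_ranking_loss(vis_embed, sem_embed):
    """Pull each embedded visual feature toward its matched embedded attribute (a sketch)."""
    # vis_embed, sem_embed: (batch, embed_dim); row i of each forms a matched pair
    return ((vis_embed - sem_embed) ** 2).sum(dim=1).mean()
```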
IV-B2 Bi-directional ranking loss.
Given a visual feature $x_i$, let $a^+$ denote its matched (positive) semantic vector and $a^-$ a semantic vector drawn from its set of non-matched (negative) semantic vectors. The distance between $\theta(x_i)$ and $\phi(a^+)$ should be smaller than the distance between $\theta(x_i)$ and $\phi(a^-)$ by a margin $m_1$, yielding the triplet-wise constraint
$$ d\big(\theta(x_i), \phi(a^+)\big) + m_1 < d\big(\theta(x_i), \phi(a^-)\big). $$
Similarly, given a semantic vector $a_c$, the analogous constraint in the other direction can be written as
$$ d\big(\phi(a_c), \theta(x^+)\big) + m_2 < d\big(\phi(a_c), \theta(x^-)\big), $$
where $x^+$ and $x^-$ denote matched (positive) and non-matched (negative) visual features for $a_c$, respectively.
These two constraints can be converted into a margin-based bi-directional ranking loss:
$$ L_{BR} = \sum_{i} \Big[ m_1 + d\big(\theta(x_i), \phi(a^+)\big) - d\big(\theta(x_i), \phi(a^-)\big) \Big]_+ \; + \; \mu \sum_{c} \Big[ m_2 + d\big(\phi(a_c), \theta(x^+)\big) - d\big(\phi(a_c), \theta(x^-)\big) \Big]_+ , $$
where $[\,\cdot\,]_+ = \max(0, \cdot)$ and $\mu$ is the balance weight between the two directions. The scale of the distance between an embedded visual feature and an embedded semantic feature changes considerably during training, regardless of whether they come from the same class or not. With this in mind, we use self-adaptive margins instead of fixed values: the margins $m_1$ and $m_2$ are computed adaptively from the current matched-pair distances, with two hyperparameters adjusting their values.
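A sketch of the bi-directional ranking loss follows; the precise rule for deriving the self-adaptive margins (here, a simple scaling of the matched-pair distance) and the balance weight are illustrative assumptions.

```python
import torch

def bidirectional_ranking_loss(anchor_vis, pos_sem, neg_sem,
                               anchor_sem, pos_vis, neg_vis,
                               mu=0.5, gamma=0.1):
    """Margin-based ranking in both the visual-to-semantic and semantic-to-visual directions (a sketch)."""
    d = lambda a, b: ((a - b) ** 2).sum(dim=1)      # squared Euclidean distance

    # Visual anchors vs. positive / hardest-negative attributes.
    d_pos_v = d(anchor_vis, pos_sem)
    margin_v = gamma * d_pos_v.detach()             # self-adaptive margin (illustrative form)
    loss_v = torch.clamp(margin_v + d_pos_v - d(anchor_vis, neg_sem), min=0.0)

    # Semantic anchors vs. positive / hardest-negative visual features.
    d_pos_s = d(anchor_sem, pos_vis)
    margin_s = gamma * d_pos_s.detach()
    loss_s = torch.clamp(margin_s + d_pos_s - d(anchor_sem, neg_vis), min=0.0)

    return loss_v.mean() + mu * loss_s.mean()
```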
IV-C Visual data structure optimizing loss
As shown in Fig. 1, the distribution of instances in the visual space tends to be indistinctive, so we propose a structure optimizing constraint on the embedding of the visual features. Let $\mathcal{N}(x_i)$ denote the neighborhood of $x_i$, i.e., the set of visual features from the same class as $x_i$. The purpose of the structure optimizing constraint is to enforce that the distances between $\theta(x_i)$ and points inside its neighborhood are smaller than the distances to points outside it:
$$ d\big(\theta(x_i), \theta(x^+)\big) + m_3 < d\big(\theta(x_i), \theta(x^-)\big), \qquad x^+ \in \mathcal{N}(x_i),\; x^- \notin \mathcal{N}(x_i). $$
The corresponding loss function is
$$ L_{str} = \sum_{i} \Big[ m_3 + d\big(\theta(x_i), \theta(x^+)\big) - d\big(\theta(x_i), \theta(x^-)\big) \Big]_+ , $$
where the margin $m_3$ is also set adaptively, with a hyperparameter controlling its value.
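The structure optimizing loss can be sketched in the same triplet form; as before, the adaptive-margin rule used here is an illustrative assumption.

```python
import torch

def structure_optimizing_loss(anchor, same_class, other_class, gamma=0.1):
    """Keep same-class embedded visual features closer than different-class ones (a sketch)."""
    d = lambda a, b: ((a - b) ** 2).sum(dim=1)
    d_pos = d(anchor, same_class)
    margin = gamma * d_pos.detach()                 # self-adaptive margin (illustrative form)
    return torch.clamp(margin + d_pos - d(anchor, other_class), min=0.0).mean()
```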
Depending on whether the embedding loss is the simple ranking loss or the bi-directional ranking loss, the whole loss function takes one of two forms:
$$ L_{SRS} = L_{SR} + \alpha\, L_{str} + \Omega, \qquad L_{BRS} = L_{BR} + \beta\, L_{str} + \Omega, $$
where $\alpha$ and $\beta$ are hyperparameters weighting the strength of the structure optimizing loss against the embedding loss, and $\Omega$ collects the parameter regularization terms on the FC layers, each weighted by its own hyperparameter.
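Putting the pieces together, a sketch of the combined objective is given below; it reuses the loss sketches above, the weighting value is a placeholder, and the parameter regularization is assumed to be delegated to the optimizer's weight decay.

```python
def total_loss(vis_embed, sem_embed, neg_sem_embed, pos_vis_embed, neg_vis_embed,
               use_bidirectional=False, weight=1.0):
    """Combine the embedding loss with the structure optimizing loss (a sketch of the two variants)."""
    if use_bidirectional:
        emb = bidirectional_ranking_loss(vis_embed, sem_embed, neg_sem_embed,
                                         sem_embed, pos_vis_embed, neg_vis_embed)
    else:
        emb = simple_ranking_loss(vis_embed, sem_embed)
    struct = structure_optimizing_loss(vis_embed, pos_vis_embed, neg_vis_embed)
    return emb + weight * struct
```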
IV-D Mining tuples
The proposed algorithm relies on mining appropriate tuples for training. Given a visual feature $x_i$ as the anchor sample, the corresponding semantic vector $a_{y_i}$ is needed to form the matched pair used in Eq. (8). Besides, tuples containing positive and negative visual samples are needed to optimize the visual data structure. In the tuple $(x_i, x^+, x^-)$, the positive visual sample $x^+$ is chosen at random from the same class as the anchor $x_i$. The choice of negative samples plays an important role in the convergence of training. In this work, we sample the negatives in an online fashion: at each iteration a criterion is evaluated, and the hardest negative for each anchor visual feature is sampled within the batch. With the input tuples $(x_i, a_{y_i}, x^+, x^-)$, the network with the loss function $L_{SRS}$ can be trained with stochastic gradient descent (SGD). To optimize the loss function $L_{BRS}$, extra negative semantic vectors must also be sampled. Because of the limited total number of semantic vectors, the hardest negative semantic vector is sampled over all semantic vectors instead of within a batch. The tuple sampled for $L_{BRS}$ is then $(x_i, a_{y_i}, a^-, x^+, x^-)$.
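A sketch of the online hardest-negative selection within a batch follows; the masking logic assumes that class labels are given as integer indices.

```python
import torch

def hardest_negative_indices(embeds, labels):
    """For each anchor, pick the index of the nearest embedded sample from a different class (a sketch)."""
    # embeds: (batch, embed_dim), labels: (batch,)
    dists = torch.cdist(embeds, embeds)                          # pairwise distances within the batch
    same_class = labels.unsqueeze(0) == labels.unsqueeze(1)      # mask out same-class pairs (incl. self)
    dists = dists.masked_fill(same_class, float('inf'))
    return dists.argmin(dim=1)                                   # hardest (closest) negative per anchor

# Example: neg_embeds = embeds[hardest_negative_indices(embeds, labels)]
```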
Similar to the recognition process in the prototype based method, with the learned visual embedding function $\theta(\cdot)$, the learned semantic embedding function $\phi(\cdot)$, and a testing image with feature $x'$, recognition is achieved by finding the unseen class whose embedded semantic vector is nearest to the embedded visual feature:
$$ \hat{z} = \arg\min_{c \in \mathcal{Y}^u} \big\| \theta(x') - \phi(a^u_c) \big\|_2 . $$
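Recognition in the intermediate space thus mirrors the prototype-based case; a short sketch, assuming the `DualBranchNet` module from Section IV-A, is given below.

```python
import torch

def predict_in_embedding_space(test_feats, unseen_attrs, net):
    """Nearest-neighbor classification after embedding both modalities (a sketch)."""
    with torch.no_grad():
        vis_embed, sem_embed = net(test_feats, unseen_attrs)
        dists = torch.cdist(vis_embed, sem_embed)
    return dists.argmin(dim=1)        # index into the unseen class list
```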
V Experiments

V-A Datasets and settings
To extensively evaluate our method, we adopt four benchmark datasets. The statistics of these datasets are shown in Table I. Animals with Attributes (AwA1) is a coarse-grained dataset that contains 30,475 images from 50 classes of animals. The semantic representation of each class is given as an 85-dimensional, manually annotated attribute vector. The images of the original AwA dataset (AwA1) are not publicly available, so a new Animals with Attributes 2 (AwA2) dataset with raw images was later introduced. AwA2 uses the same 50 animal classes and 85-dimensional attribute vectors as AwA1. Both AwA1 and AwA2 are used to evaluate our model. Caltech-UCSD Birds 200-2011 (CUB) is a fine-grained dataset that contains 11,788 images from 200 types of birds annotated with 312 attributes. The original split for zero-shot learning includes 150 classes for training and 50 classes for testing. SUN Scene Recognition (SUN) is a fine-grained dataset that contains 14,340 images from 717 types of scenes annotated with 102 attributes. In the original split, 645 classes are used for training and 72 classes for testing.
In the original splits of these datasets, some of the testing categories are a subset of the ImageNet categories. When image features are extracted from ImageNet-pretrained models, this breaks the zero-shot assumption that the testing categories are never seen during training. To alleviate this problem, new splits in which none of the testing categories coincides with ImageNet classes have been proposed; the new splits keep the original numbers of training and testing classes. To eliminate confusion and give a fair comparison, this work strictly uses these new splits together with the visual features and attributes provided with them. Specifically, the visual features are 2048-dimensional ResNet-101 features, and the semantic vectors are built from class-level attributes.
Top-1 accuracy is adopted to evaluate single-label image classification. Following the protocol of Xian et al., zero-shot performance is evaluated with per-class classification accuracy. Compared with per-image accuracy, this protocol accounts for imbalance among the target classes and provides a better measurement of model performance. The metric is computed as
$$ \mathrm{acc} = \frac{1}{|\mathcal{C}|} \sum_{c \in \mathcal{C}} \frac{\#\,\text{correctly predicted samples in class } c}{\#\,\text{samples in class } c}. $$
As mentioned in the related work, GZSL is a more practical and challenging task, since the search space includes both seen and unseen classes during evaluation. To evaluate performance on GZSL, we use the harmonic mean of the seen and unseen accuracies, as in existing works:
$$ H = \frac{2 \times acc_{tr} \times acc_{ts}}{acc_{tr} + acc_{ts}}, $$
where $acc_{tr}$ and $acc_{ts}$ denote the recognition accuracies on images from seen and unseen classes, respectively. The harmonic mean emphasizes the overall recognition performance, i.e., both unseen and seen recognition, and prevents a much higher seen-class accuracy from dominating the score.
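These metrics are straightforward to compute; the following is a short sketch of per-class top-1 accuracy and the harmonic mean.

```python
import numpy as np

def per_class_accuracy(y_true, y_pred):
    """Mean of per-class top-1 accuracies (per-class protocol, sketched)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    accs = [(y_pred[y_true == c] == c).mean() for c in np.unique(y_true)]
    return float(np.mean(accs))

def harmonic_mean(acc_seen, acc_unseen):
    """Harmonic mean H used for GZSL evaluation."""
    if acc_seen + acc_unseen == 0:
        return 0.0
    return 2 * acc_seen * acc_unseen / (acc_seen + acc_unseen)
```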
V-A3 Implementation details.
The proposed framework is implemented in PyTorch (https://pytorch.org/). For the visual prototype based method, each visual prototype is initialized with the mean of the visual features of its class. The visual prototypes are then updated with the loss function in Eq. (3). With the learned visual prototypes, the embedding framework for the semantic vectors is trained with the loss function in Eq. (5). In practice, we train the visual prototypes and the embedding framework in an alternating way: every 500 iterations of prototype learning are followed by 1,000 iterations of embedding-framework training. Details of the training parameters are shown in Table II.
TABLE II: Per-dataset training settings for the prototype based method: batch size, prototype-learning parameters, and embedding-framework parameters.
In the visual data structure optimization based method, both the attributes and the visual features are transformed into the intermediate embedding space by two-layer multilayer perceptrons. Since the hubness problem can be suppressed effectively when the visual space is used as the embedding space, we expect the intermediate embedding space to stay close to the visual space. To this end, the weights $V_1$ and $V_2$ of the visual branch are initialized as identity matrices, so that the initial embedded visual features are the same as the original visual features. In addition, the learning rate for $V_1$ and $V_2$ is set smaller than that for $W_1$ and $W_2$. With these settings, the learned intermediate embedding space remains close to the visual space. The learning rates and other parameters for training the model on the different datasets are listed in Table III.
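The identity initialization of the visual branch and the smaller learning rate assigned to it can be sketched as follows, reusing the `DualBranchNet` sketch from Section IV-A; the concrete learning-rate and weight-decay values are placeholders (the values actually used are those in Table III).

```python
import torch
import torch.nn as nn

net = DualBranchNet(visual_dim=2048, attr_dim=85, embed_dim=2048)

# Initialize the visual branch as (close to) an identity mapping, so that the
# initial embedded visual features coincide with the original visual features.
with torch.no_grad():
    nn.init.eye_(net.visual_branch.fc1.weight)
    nn.init.zeros_(net.visual_branch.fc1.bias)
    nn.init.eye_(net.visual_branch.fc2.weight)
    nn.init.zeros_(net.visual_branch.fc2.bias)

# A smaller learning rate for the visual branch keeps the embedding space close to the visual space.
optimizer = torch.optim.SGD([
    {"params": net.visual_branch.parameters(), "lr": 1e-5},
    {"params": net.semantic_branch.parameters(), "lr": 1e-3},
], momentum=0.9, weight_decay=1e-4)
```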
TABLE III: Per-dataset training settings for the structure optimization based method: batch size, semantic-branch parameters, and visual-branch parameters.
V-B Comparison with the state-of-the-art
A wide range of existing ZSL models are selected for the performance comparison. Among these models, DAP and IAP, CONSE, CMT, SSE, LATEM, ALE, DEVISE, SJE, ESZSL, SYNC, SAE, and GFZSL are compared in many recent works, and the corresponding results are taken directly from the unified benchmark evaluation. Note that even though these methods originally adopted different visual deep features or evaluation methods, they were re-evaluated using the unified features and evaluation protocol. For a fair comparison, we use exactly the same features and evaluation protocol in this work. Besides, some recent works, including TVN, LESAE, PSR, DCN, and MLSE, are also considered. These recent works have excellent performance, but none of them beats all the others on all four datasets. For instance, on the ZSL task, MLSE achieves the highest accuracy on CUB and SUN, whereas the best performances on AwA1 and AwA2 are achieved by TVN and LESAE, respectively. In this work, we compare the proposed methods with the above on both the ZSL and GZSL tasks. Since this work adopts the original global visual features and uses no data augmentation strategy, works that focus on data augmentation [33, 60, 61, 62, 63] or on learning more distinctive visual features are not considered in the comparison.
V-B1 Performance on zero-shot learning
Table IV presents the zero-shot learning Top-1 accuracy on the four datasets. The methods "Proposed-SRS" and "Proposed-BRS" are the methods with visual data structure optimization: the former uses the simple ranking loss together with the visual structure optimizing loss, as formulated in Eq. (17), while the latter adopts the bi-directional ranking loss along with the visual structure optimizing loss, as formulated in Eq. (18). "Proposed-VPB" denotes the visual prototype based method.
Compared with previous methods, the methods with visual structure optimization show outstanding performance on the ZSL task. As shown in Table IV, the proposed SRS and BRS outperform all existing methods except on the CUB dataset. Specifically, SRS and BRS exceed the best competing method, TVN, by 1.2 and 1.4 points respectively on AwA1. On AwA2, SRS and BRS outperform PSR significantly, with gains of more than 5 points in Top-1 accuracy; compared with the best existing method, LESAE, we still obtain gains of at least 1 point. On SUN, the proposed methods match the best accuracy, achieved by MLSE. On CUB, although the proposed visual structure optimization based methods are inferior to several works [50, 25, 28, 59], they are superior to the other methods. Compared with the simple ranking loss, the bi-directional ranking loss shows only a slight advantage on ZSL: the largest gap between Proposed-SRS and Proposed-BRS is only 0.7 points, appearing on the CUB dataset. This phenomenon is discussed further in the Discussion section.
The proposed visual prototype based method achieves even more outstanding performance. As presented in Table IV, the proposed VPB method is lower than SRS and BRS only on the CUB dataset and matches them on SUN. On AwA1 and AwA2, VPB exceeds BRS by 3.1 and 3.8 points, respectively. Compared with the previous best accuracy on AwA1, achieved by TVN, VPB gains 3.5 points, and the gain is 5.4 points on AwA2 compared with the previous best method, LESAE.
V-B2 Performance on generalized zero-shot learning
Table V reports the results of generalized zero-shot learning on the four datasets. Here, ts refers to the setting in which the testing samples come from unseen classes, and tr to the setting in which the testing samples come from seen classes; in both cases the target labels for evaluation comprise all classes, seen and unseen. High accuracy on tr together with low accuracy on ts means that a method performs well on the seen classes but fails on the unseen classes, which implies overfitting to the seen classes. The harmonic mean (H) of tr and ts gives a comprehensive evaluation on the GZSL task.
As shown in Table V, the methods based on visual data structure optimization do not show an obvious advantage on the GZSL task over previous state-of-the-art methods. Nevertheless, they are still comparable with most recent works. Notably, BRS achieves the best harmonic mean accuracy, 38.3%, on AwA2 among all recently proposed methods. On AwA1 and CUB, the proposed BRS is only slightly inferior to DCN. BRS is better than SRS on all four datasets.
Encouragingly, the proposed visual prototype based method, VPB, achieves considerable improvement over all recently proposed methods. Specifically, VPB gives a harmonic mean accuracy of 55.6% on AwA1, the best result among all the reported methods and 16.5% higher than the previous best result, which is indeed a substantial improvement. A similarly large increase appears on AwA2, where the harmonic mean accuracy is 53.8%, 16.8% better than the existing best method. On CUB and SUN, the proposed VPB also shows inspiring performance, obtaining the best accuracies of 40.7% and 37.3% on these two datasets, which are 2% and 7.1% better than the next best methods, respectively.
V-C Ablation study
In this section, we present an ablation analysis of the proposed methods on the four datasets. For the proposed SRS and BRS, the effectiveness of optimizing the visual structure is examined. For the visual prototype based method, the superiority of the learned visual prototypes is analyzed.
V-C1 Effectiveness of optimizing the visual structure
To demonstrate the effectiveness of the visual structure optimizing loss, Table VI displays the ZSL performance of the proposed models and of the corresponding variants without it. SR and SRS denote the simple ranking loss without and with the visual structure optimizing loss, respectively. BR and BRS indicate the bi-directional ranking loss without and with the visual structure optimizing loss, respectively.
As shown in this table, the visual structure optimizing loss plays a very important role in the proposed framework. The proposed methods are clearly superior to the corresponding methods without visual structure optimization. Specifically, the gap between SR and SRS reaches 7.9 points on the CUB dataset, and their minimum gap is 1.1 points on AwA2. Similarly, the largest gap between BR and BRS reaches 7.7 points on CUB, while the minimum gap appears on AwA1, where BRS is 2.5 points higher than BR. Compared with the results on AwA1 and AwA2, the visual structure optimization shows a more obvious advantage on CUB and SUN. According to Table I, these two datasets have more classes than the other two; since the visual data structure becomes more chaotic as the number of classes grows, the proposed method effectively alleviates this problem.
The visual structure optimizing loss also plays an important role in the GZSL task. As illustrated in Fig. 4, the harmonic mean accuracy of the methods with the structure optimizing loss consistently outperforms that of the corresponding methods without it, with improvements ranging from 2.9% to 11.7%. On AwA1, AwA2, and CUB, the improvement in H achieved by the visual structure optimization is more than 8.0%, whereas the minimum increment appears on SUN, which differs from the behaviour of visual structure optimization on the ZSL task. The reason is that, in the more realistic GZSL task, a model's ability to avoid overfitting on the seen classes plays a critical role. The models without the visual structure optimizing loss tend to overfit on the datasets with fewer classes, e.g., AwA1 and AwA2. As shown in Fig. 4(a) and Fig. 4(b), SR and BR obtain higher tr accuracy but very low ts accuracy compared with SRS and BRS.
V-C2 Performance of learnable visual prototypes
Averaging the visual features of each class to obtain the class centroid is an intuitive way to build a visual prototype. To test the superiority of the learnable visual prototypes proposed in this work over the visual centroid vectors, we compare the performance of the centroid based and learned prototype based methods on ZSL and GZSL.
Table VII presents the comparison of the visual centroid based (VCB) and learned prototype based methods on the ZSL task. As shown in this table, the learned prototype based method outperforms the centroid based method on most datasets but fails on SUN. This single failure on SUN can be attributed to the small proportion of test classes. From Table I, one can see that the test classes account for only about 10% of all classes, which means the test visual features tend to be discriminative, since there is less chance for instance features from different classes to overlap. In that case, the centroids naturally give good performance: after all, they are the centroids of each class. A more persuasive evaluation of the learned prototypes is given on the GZSL task.
The comparison of VCB and VPB on GZSL is given in Fig. 5. In terms of harmonic mean accuracy, the visual prototype based method is superior to the visual centroid based method on all four datasets. Concretely, VPB is better than VCB by 14.5%, 15.3%, 8.9%, and 10.4% on AwA1, AwA2, CUB, and SUN, respectively. It is worth noting that VCB obtains better tr accuracy but clearly lower harmonic mean accuracy than VPB on AwA1, AwA2, and CUB, which indicates that VCB tends to overfit on the seen classes, whereas the proposed learned visual prototype based method generalizes well.
In terms of the overall performance on ZSL and GZSL, the prototype based method achieves the most outstanding results, especially on the GZSL task. Compared with directly mapping the semantic vectors to the visual space, where each mapped semantic feature has to be optimized against numerous instance features of the same class, the prototype based method gives each semantic vector a single clear mapping target, namely the visual prototype of its class rather than a mass of instance features. As a result, even the simple visual prototypes given by the visual centroids achieve noticeable performance: as shown in Fig. 5 and Table V, VCB outperforms all previous methods on AwA1 and AwA2. However, taking the visual centroids as the visual prototypes tends to overfit on the seen classes, since the overlapping distribution of instance features in the visual space means that a visual centroid may lie very close to instance features from other classes. The proposed learnable visual prototypes are more distinctive, which effectively alleviates this overfitting problem and leads to excellent performance on the GZSL task.
The visual structure optimizing methods are not as good as the visual prototype based method, but they still achieve noteworthy performance, especially on ZSL, compared with other existing methods. Regarding the embedding loss, the bi-directional ranking loss pays more attention to pulling matched semantic and visual feature pairs closer and pushing non-matched pairs farther apart. Intuitively, it should therefore obtain better performance than the simple ranking loss. However, as listed in Table IV, it brings only a slight improvement over the simple one. There may be two reasons for this non-significant advantage. One is that the visual structure optimizing loss already constrains the embedded visual features to have a good data structure, under which extra constraints on the semantic or visual features are not necessary. The other is that, while the bi-directional ranking loss makes the embedded semantic features more discriminative, this discrimination may disturb the relations among different categories; this may be why BR is slightly weaker than SR on the AwA2 dataset. Nevertheless, the bi-directional ranking loss shows clearly better performance on the more realistic GZSL task, as shown in Table V.
In this paper, we explore the idea of optimizing the visual space for ZSL recognition. To this end, we introduce two methods: a visual prototype based method and a visual data structure optimization based method. The former learns a visual prototype for each visual class, so that each semantic vector can be mapped to a single visual prototype rather than to the numerous visual features discretely distributed in the visual space. The latter, together with an embedding loss, employs the proposed visual structure optimizing loss, which effectively improves performance on both ZSL and GZSL. For the embedding loss, we consider two forms, a simple ranking loss and a bi-directional ranking loss; when the proposed optimizing loss is added to the framework, both ranking losses show outstanding performance on the ZSL task. Extensive experiments on four zero-shot benchmarks demonstrate the superiority of the proposed models, and the proposed visual prototype based method outperforms all previous methods, achieving new state-of-the-art performance.
For generality, the current visual prototype based method uses only the visual features to learn the prototypes and ignores the information carried by the corresponding training semantic vectors. This information could in fact be used to further optimize the visual prototypes so that the visual space formed by the prototypes has a manifold closer to that of the semantic space; we leave this for future research.
This work was supported in part by the National Natural Science Foundation of China under Grants 61573273 and 61603289, and in part by the Fundamental Research Funds for the Central Universities under Grant xzy022019052.
References

[1] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," in International Conference on Neural Information Processing Systems, 2012, pp. 1097–1105.
[2] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[3] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
[4] R. Salakhutdinov, A. Torralba, and J. Tenenbaum, "Learning to share visual appearance for multiclass object detection," in IEEE Conference on Computer Vision and Pattern Recognition, 2011.
[5] X. Zhu, D. Anguelov, and D. Ramanan, "Capturing long-tail distributions of object subcategories," in IEEE Conference on Computer Vision and Pattern Recognition, 2014.
[6] C. H. Lampert, H. Nickisch, and S. Harmeling, "Learning to detect unseen object classes by between-class attribute transfer," in IEEE Conference on Computer Vision and Pattern Recognition, 2009.
[7] H. Larochelle, D. Erhan, and Y. Bengio, "Zero-data learning of new tasks," in AAAI Conference on Artificial Intelligence, 2008.
[8] M. Palatucci, D. Pomerleau, G. E. Hinton, and T. M. Mitchell, "Zero-shot learning with semantic output codes," in International Conference on Neural Information Processing Systems, 2009, pp. 1410–1418.
[9] A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth, "Describing objects by their attributes," in IEEE Conference on Computer Vision and Pattern Recognition, 2009.
[10] A. Farhadi, I. Endres, and D. Hoiem, "Attribute-centric recognition for cross-category generalization," in IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[11] V. Ferrari and A. Zisserman, "Learning visual attributes," in Advances in Neural Information Processing Systems, 2008, pp. 433–440.
[12] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, "Distributed representations of words and phrases and their compositionality," in Advances in Neural Information Processing Systems, 2013, pp. 3111–3119.
[13] J. Lei Ba, K. Swersky, and S. Fidler, "Predicting deep zero-shot convolutional neural networks using textual descriptions," in IEEE International Conference on Computer Vision, 2015, pp. 4247–4255.
[14] M. Elhoseiny, B. Saleh, and A. Elgammal, "Write a classifier: Zero-shot learning using purely textual descriptions," in IEEE International Conference on Computer Vision, 2014.
[15] M. Radovanović, A. Nanopoulos, and M. Ivanović, "Hubs in space: Popular nearest neighbors in high-dimensional data," Journal of Machine Learning Research, vol. 11, pp. 2487–2531, 2010.
[16] Y. Shigeto, I. Suzuki, K. Hara, M. Shimbo, and Y. Matsumoto, "Ridge regression, hubness, and zero-shot learning," in Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 2015, pp. 135–151.
[17] Y. Xian, C. H. Lampert, B. Schiele, and Z. Akata, "Zero-shot learning - a comprehensive evaluation of the good, the bad and the ugly," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PP, no. 99, pp. 1–1, 2017.
[18] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona, "Caltech-UCSD Birds 200," California Institute of Technology, Tech. Rep. CNS-TR-2010-001, 2010.
[19] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba, "SUN database: Large-scale scene recognition from abbey to zoo," in IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[20] S. J. Pan and Q. Yang, "A survey on transfer learning," IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345–1359, 2010.
[21] K. Weiss, T. M. Khoshgoftaar, and D. Wang, "A survey of transfer learning," Journal of Big Data, vol. 3, no. 1, p. 9, 2016.
[22] C. H. Lampert, H. Nickisch, and S. Harmeling, "Attribute-based classification for zero-shot visual object categorization," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 3, pp. 453–465, 2014.
[23] A. Zhao, M. Ding, J. Guan, Z. Lu, and J.-R. Wen, "Domain-invariant projection learning for zero-shot recognition," 2018.
[24] Y. Liu, Q. Gao, J. Li, J. Han, and L. Shao, "Zero shot learning via low-rank embedded semantic autoencoder," in International Joint Conference on Artificial Intelligence, 2018, pp. 2490–2496.
[25] Y. Annadani and S. Biswas, "Preserving semantic relations for zero-shot learning," in IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7603–7612.
[26] Y. Long, L. Liu, F. Shen, L. Shao, and X. Li, "Zero-shot learning using synthesised unseen visual data with diffusion regularisation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PP, no. 99, pp. 1–1, 2017.
[27] H. Jiang, R. Wang, S. Shan, and X. Chen, "Learning class prototypes via structure alignment for zero-shot recognition," 2018.
[28] S. Liu, M. Long, J. Wang, and M. I. Jordan, "Generalized zero-shot learning with deep calibration network," in Advances in Neural Information Processing Systems, 2018, pp. 2009–2019.
[29] V. Kumar Verma, G. Arora, A. Mishra, and P. Rai, "Generalized zero-shot learning via synthesized examples," in IEEE Conference on Computer Vision and Pattern Recognition, 2018.
[30] A. Mishra, S. Krishna Reddy, A. Mittal, and H. A. Murthy, "A generative model for zero shot learning using conditional variational autoencoders," in IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, pp. 2188–2196.
[31] Y. Xian, T. Lorenz, B. Schiele, and Z. Akata, "Feature generating networks for zero-shot learning," in IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 5542–5551.
[32] R. Felix, B. G. V. Kumar, I. Reid, and G. Carneiro, "Multi-modal cycle-consistent generalized zero-shot learning," 2018.
[33] Y. Xian, S. Sharma, B. Schiele, and Z. Akata, "f-VAEGAN-D2: A feature generating framework for any-shot learning," 2019.
[34] J. Song, C. Shen, Y. Yang, L. Yang, and M. Song, "Transductive unbiased embedding for zero-shot learning," 2018.
[35] Y. Fu, T. M. Hospedales, T. Xiang, and S. Gong, "Transductive multi-view zero-shot learning," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 11, pp. 2332–2345, 2015.
[36] Y. Guo, G. Ding, X. Jin, and J. Wang, "Transductive zero-shot recognition via shared model space learning," in AAAI Conference on Artificial Intelligence, 2016.
[37] Y. Xian, Z. Akata, G. Sharma, Q. Nguyen, M. Hein, and B. Schiele, "Latent embeddings for zero-shot classification," in IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 69–77.
[38] B. Romera-Paredes and P. Torr, "An embarrassingly simple approach to zero-shot learning," in International Conference on Machine Learning, 2015, pp. 2152–2161.
[39] Z. Akata, S. Reed, D. Walter, H. Lee, and B. Schiele, "Evaluation of output embeddings for fine-grained image classification," in IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 2927–2936.
[40] A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, and T. Mikolov, "DeViSE: A deep visual-semantic embedding model," in Advances in Neural Information Processing Systems, 2013, pp. 2121–2129.
[41] L. Zhang, T. Xiang, and S. Gong, "Learning a deep embedding model for zero-shot learning," 2017.
[42] S. Changpinyo, W.-L. Chao, B. Gong, and F. Sha, "Synthesized classifiers for zero-shot learning," in IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[43] Z. Zhang and V. Saligrama, "Zero-shot learning via joint latent similarity embedding," 2015.
[44] E. Kodirov, T. Xiang, and S. Gong, "Semantic autoencoder for zero-shot learning," 2017.
[45] P. Morgado and N. Vasconcelos, "Semantically consistent regularization for zero-shot recognition," in IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 6060–6069.
[46] M. Meng and X. Zhan, "Zero-shot learning via low-rank-representation based manifold regularization," IEEE Signal Processing Letters, vol. 25, no. 9, pp. 1379–1383, 2018.
[47] Z. Zhang and V. Saligrama, "Zero-shot recognition via structured prediction," in European Conference on Computer Vision. Springer, 2016, pp. 533–548.
[48] Y. Li, D. Wang, H. Hu, Y. Lin, and Y. Zhuang, "Zero-shot recognition using dual visual-semantic mapping paths," in IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 3279–3287.
[49] S. Deutsch, S. Kolouri, K. Kim, Y. Owechko, and S. Soatto, "Zero shot learning via multi-scale manifold regularization," in IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 7112–7119.
[50] H. Zhang, Y. Long, Y. Guan, and L. Shao, "Triple verification network for generalized zero-shot learning," IEEE Transactions on Image Processing, vol. 28, no. 1, pp. 506–517, 2019.
[51] Z. Akata, F. Perronnin, Z. Harchaoui, and C. Schmid, "Label-embedding for attribute-based classification," in IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 819–826.
[52] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, and M. Bernstein, "Imagenet large scale visual recognition challenge," International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015.
[53] A. Lazaridou, G. Dinu, and M. Baroni, "Hubness and pollution: Delving into cross-space mapping for zero-shot learning," in The 7th International Joint Conference on Natural Language Processing, 2015.
[54] M. Norouzi, T. Mikolov, S. Bengio, Y. Singer, J. Shlens, A. Frome, G. S. Corrado, and J. Dean, "Zero-shot learning by convex combination of semantic embeddings," arXiv preprint arXiv:1312.5650, 2013.
[55] R. Socher, M. Ganjoo, H. Sridhar, O. Bastani, C. D. Manning, and A. Y. Ng, "Zero-shot learning through cross-modal transfer," in International Conference on Neural Information Processing Systems, 2013.
[56] Z. Zhang and V. Saligrama, "Zero-shot learning via joint semantic similarity embedding," in IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[57] Z. Akata, F. Perronnin, Z. Harchaoui, and C. Schmid, "Label-embedding for image classification," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 7, pp. 1425–1438, 2016.
[58] V. K. Verma and P. Rai, "A simple exponential family framework for zero-shot learning," 2017.
[59] Z. Ding and H. Liu, "Marginalized latent semantic encoder for zero-shot learning," in IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 6191–6199.
[60] E. Schonfeld, S. Ebrahimi, S. Sinha, T. Darrell, and Z. Akata, "Generalized zero- and few-shot learning via aligned variational autoencoders," in IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 8247–8255.
[61] H. Huang, C. Wang, P. S. Yu, and C.-D. Wang, "Generative dual adversarial network for generalized zero-shot learning," in IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 801–810.
[62] M. Bulent Sariyildiz and R. Gokberk Cinbis, "Gradient matching generative networks for zero-shot learning," in IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 2168–2178.
[63] J. Li, M. Jin, K. Lu, Z. Ding, L. Zhu, and Z. Huang, "Leveraging the invariant side of generative zero-shot learning," 2019.
[64] G.-S. Xie, L. Liu, X. Jin, F. Zhu, Z. Zhang, J. Qin, Y. Yao, and L. Shao, "Attentive region embedding network for zero-shot learning," in IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 9384–9393.