Hierarchical Prototype Learning for Zero-Shot Recognition

10/24/2019 ∙ by Xingxing Zhang, et al. ∙ 0

Zero-Shot Learning (ZSL) has received extensive attention and achieved notable success in recent years, especially in fine-grained object recognition, retrieval, and image captioning. The key to ZSL is to transfer knowledge from the seen to the unseen classes via auxiliary semantic prototypes (e.g., word or attribute vectors). However, the projection functions commonly learned in previous works cannot generalize well because of non-visual components included in the semantic prototypes. Moreover, the incompleteness of the provided prototypes and of the captured images has rarely been considered by state-of-the-art ZSL approaches. In this paper, we propose a hierarchical prototype learning formulation (named HPL) that provides a systematic solution for zero-shot recognition. Specifically, HPL achieves discriminability on both the seen and the unseen class domains by learning visual prototypes for each domain under the transductive setting. To narrow the gap between the two domains, we further learn interpretable super-prototypes in both the visual and semantic spaces, and bridge the two spaces by maximizing their structural consistency. This not only improves the representativeness of the visual prototypes, but also alleviates the loss of information in the semantic prototypes. An extensive set of carefully designed experiments demonstrates that HPL is markedly more efficient and effective than currently available alternatives under various settings.




I Introduction

Traditional object recognition tasks require the test classes to be identical to, or a subset of, the training classes. Thanks to deep learning techniques and the growing availability of big data, dramatic progress has been achieved on these tasks in recent years [9, 18]. However, in many practical applications, we need models that can determine the class labels of objects belonging to unseen classes. The following are some popular application scenarios [47]:

  • The number of target classes is large. Generally, human beings can recognize at least 30,000 object classes. However, collecting sufficient labelled instances for such a large number of classes is challenging. Thus, existing image datasets can only cover a small subset of these classes.

  • Target classes are rare. An example is fine-grained object recognition. Suppose we want to recognize flowers of different breeds. It is hard, or even prohibitive, to collect sufficient image instances for each specific flower breed.

  • Target classes change over time. An example is recognizing images of products belonging to a certain style and brand. As products of new styles and new brands appear frequently, for some new products, it is difficult to find corresponding labelled instances.

  • Annotating instances is expensive and time consuming. For example, in the image captioning problem, each image in the training data should have a corresponding caption. This problem can be seen as a sequential classification problem. The number of object classes covered by the existing image-text corpora is limited, with many object classes not being covered.

Fig. 1: The visual images and semantic prototypes provided for several classes in benchmark dataset AwA2 [50].

To solve this problem, Zero-Shot Learning (ZSL) [28, 56, 49, 7, 31, 51] has been proposed. The goal of zero-shot recognition is to recognize objects belonging to classes that have no labelled samples. Since its inception, ZSL has become a fast-developing field in machine learning with a wide range of applications in computer vision. Owing to the lack of labelled samples in the unseen class domain, auxiliary information is necessary for ZSL to transfer knowledge from the seen to the unseen classes. As shown in Fig. 1, existing methods usually provide each class with one semantic prototype derived from text (e.g., an attribute vector [21, 25, 24] or a word vector [2, 13, 40]). This is inspired by the way human beings recognize the world. For example, with the knowledge that "a zebra looks like a horse, and with stripes", we can recognize a zebra even without having seen one before, as long as we know what a "horse" is and what "stripes" look like.

Typical ZSL approaches generally adopt a two-step recognition strategy [37, 57, 23, 3]. First, an image-semantics projection function is learned from the seen class domain to transfer knowledge to unseen classes. Then, each test sample is projected into the learned embedding space, where recognition is carried out by measuring the similarity between the sample and the unseen classes. Accordingly, various ZSL approaches have been developed to learn a well-fitting projection function between visual features and semantic prototypes. However, they all ignore the fact that the provided semantic prototypes are incomplete and lack diversity, since both human-defined attribute vectors and automatically extracted word vectors are obtained independently of the visual samples, and uniquely for each class. Consequently, the learned projection may not be effective enough to recognize all samples from the same class. For instance, a "horse" can have many different colors, as shown in Fig. 2. Besides, the provided semantic prototypes often include non-visual components, such as "smart", "agility", and "inactive" in the benchmark dataset AwA2 [50]. Based only on visual information, these attributes are almost impossible to predict; their prediction accuracy is close to that of random guessing, as observed in Fig. 3. Thus, the learned projection cannot generalize well to the unseen class domain, although it works well in the seen class domain thanks to supervised training. Moreover, in practice, the visual image captured of an object cannot present all the attributes of the corresponding class. As a result, a simple projection from an image to its class attribute vector is inaccurate, since that image may lack some attributes (e.g., the tail and claws may not be captured).

Fig. 2: Several instances with different colors from the unseen class “horse” in benchmark dataset AwA2 [50].

Fig. 3: The predictability of each binary attribute, measured as classification accuracy with a pre-trained ResNet-101 [18], where we only fine-tune the last layer.

Finally, as mentioned in many ZSL approaches [36, 14, 22, 17, 44, 32, 58, 20], the large domain gap between the seen and unseen classes is one of the biggest challenges in ZSL. It arises because, for a given attribute (e.g., "tail"), the unseen classes are often visually very different from the seen ones. Consequently, the projection function learned from seen-class data may not be effective enough to project an unseen object close to its corresponding class. To address this domain shift issue, that is, to reduce the distribution mismatch between the seen- and unseen-class data, a number of ZSL models resort to transductive learning [14, 17, 32, 58] by utilizing test objects from unseen classes in the training phase. Here, we also adopt a transductive setting, where the learned projection is adapted to the unseen classes based on unlabelled target data. It has been shown in [14] that transductive ZSL models can indeed improve generalization accuracy compared with inductive models.

Motivated by these phenomena, we develop a novel transductive ZSL model in this paper. By learning visual prototypes and super-prototypes instead of a projection between the visual and semantic spaces, the proposed model avoids the aforementioned problems caused by semantic prototypes. In particular, considering the incompleteness of the provided semantic prototypes and visual images, we choose to couple the semantic prototypes with the learned visual prototypes. Motivated by the fact that some unseen and seen prototypes fall into the same super-class, we further learn prototypes of the seen and unseen prototypes themselves, called super-prototypes. They bridge not only the seen and unseen class domains, inducing a transductive setting, but also the visual and semantic spaces, aligning their structure.

In summary, the contributions of this work are four-fold:

  • We propose a novel transductive ZSL model (named HPL) which enforces discriminability on both the seen and the unseen class domains by learning visual prototypes for each domain separately.

  • Interpretable super-prototypes that are learned from the visual (resp. semantic) prototypes are able to bridge the two domains, since super-prototypes are shared between the seen and the unseen classes.

  • By maximizing the structural consistency of visual and semantic prototypes, the representativeness of learned visual prototypes is further strengthened, thus leading to more discriminative recognition.

  • An efficient algorithm is presented to solve our model with rigorous theoretical analysis. The improvements over currently available alternatives are especially significant under various ZSL settings.

II Related Work

In this section, we first briefly introduce some related works on transductive ZSL, and then present a review of image-semantics projection in ZSL.

Transductive Zero-Shot Learning. According to whether information about the test data is involved during model learning, existing ZSL models fall into inductive [37, 7, 23, 3] and transductive [1, 44, 58] settings. Transduction in ZSL can be embodied at two progressive degrees: transductive for specific unseen classes [27] and transductive for specific test samples [58]. By extending conventional ZSL into a semi-supervised learning scenario, transductive ZSL has recently become an emerging topic, since it can effectively rectify the domain shift caused by the different distributions of the training and the test samples. Depending on the inference strategy for the test data, existing transductive ZSL models mainly fall into two groups. The first group [36, 14, 15] generally constructs a graph in the semantic space and then transfers it to the test set via label propagation. However, due to the lack of labelled samples for unseen classes, such methods tend to predict the test objects as seen classes. The second group [22, 17, 32] refines the predicted labels of unseen-class data dynamically, as in self-training. It is worth noting that this kind of method often shares the same projection across the seen and the unseen class domains, which may be less discriminative, since the provided semantic prototypes suffer from incompleteness and non-visual components. Unlike existing transductive ZSL models, we formulate a domain adaptation term that learns the visual prototypes and super-prototypes of all unseen classes. Specifically, instead of the popularly used image-attribute projection, our image-label projection via prototypes mitigates the domain shift caused by appearance variations of each attribute across classes, while the gap between the seen- and unseen-class distributions is bridged by sharing interpretable super-prototypes.

Projection Function. From the viewpoint of how the image-semantics interaction is constructed, existing ZSL approaches fall into four categories. The first category learns a projection function from the visual feature space to the semantic space with a linear [24, 5, 26] or non-linear [31, 40, 53] model. The test data are then classified by matching their visual representations in the semantic embedding space against the provided semantic prototypes of the unseen classes.
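As a minimal sketch of this first category (not any specific cited method), a linear visual-to-semantic projection can be fitted by ridge regression and a test sample classified by cosine similarity to the unseen-class prototypes; all names and sizes below are illustrative assumptions:

```python
import numpy as np

def fit_forward_projection(X, S, lam=1.0):
    """Ridge-regression estimate of a linear visual-to-semantic projection W:
    argmin_W ||X W - S||^2 + lam * ||W||^2, with samples as rows of X."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ S)

def classify_zsl(x, W, unseen_prototypes):
    """Project a test sample into semantic space and return the index of the
    most cosine-similar unseen-class prototype (rows of unseen_prototypes)."""
    s = x @ W
    sims = unseen_prototypes @ s / (
        np.linalg.norm(unseen_prototypes, axis=1) * (np.linalg.norm(s) + 1e-12))
    return int(np.argmax(sims))
```

As the surrounding text notes, the nearest-prototype matching in this high-dimensional semantic space is exactly where the hubness problem arises.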

The second group [3, 32, 55, 48] chooses the reverse projection direction, i.e., from the semantic to the visual space, to alleviate the hubness problem [34] caused by nearest-neighbour search in a high-dimensional space. The test data are then classified by finding the most similar visual exemplars in the unseen class domain. To capture more distribution information from the visual space, recent work focuses on generating pseudo examples for unseen classes with seen-class data [16], web data [32], Generative Adversarial Networks [51, 60], Variational Autoencoders [39], etc. Consequently, zero-shot recognition degenerates into a general supervised learning problem.

The third group combines the first two but adds a reconstruction constraint on the visual samples or semantic prototypes [23, 3, 58]. Such ZSL approaches generally follow the encoder-decoder paradigm and then conduct the final recognition with the same search strategy as the first two groups. This makes the projection function generalize better from the seen to the unseen classes, as demonstrated in other problems [4]. The last group learns a common space into which both the visual and the semantic space are projected [27, 8, 19]. In this framework, a score function is first trained on seen-class labelled examples and then computes a likelihood score for each test sample.

Inspired by the third group, we propose to learn interpretable visual prototypes for zero-shot recognition by bidirectional projection. In particular, instead of the popularly used image-attribute projection, we adopt an image-label projection to avoid the problems caused by the provided semantic prototypes. Additionally, unlike many existing two-step ZSL approaches, the proposed HPL model performs one-step recognition thanks to visual prototype learning.

III Prototype Learning for Zero-Shot Recognition

In this section, we first set up the zero-shot recognition problem (Section III-A), then develop the HPL model for this task (Section III-B), and finally derive an efficient algorithm to solve HPL (Section III-C).

III-A Problem Definition

Notation Description
Set of seen classes
Set of unseen classes
Set of semantic prototypes of all seen classes
Set of semantic prototypes of all unseen classes
Visual space and semantic space, respectively
Number of training samples and number of test samples, respectively
The -th training sample: image embedding , and label with one-hot vector and
The -th test sample: image embedding , and label with one-hot vector and
Dimension of each semantic prototype and dimension of each image embedding, respectively
Set of visual prototypes of all seen classes, and
Set of visual prototypes of all unseen classes, and
Set of visual super-prototypes, and
Set of semantic super-prototypes, and
TABLE I: Key notations

Let and denote two disjoint sets of seen classes and unseen classes. Accordingly, let and denote the semantic prototypes (e.g. a -dimensional attribute vector or word vector derived from text for each class) of all seen and unseen classes, respectively. Meanwhile, suppose we are given a set of labelled training samples , where is the -dimensional visual embedding of the -th sample in the training set, and its class label belongs to the seen classes set . and are the one-hot vector and semantic prototype of , indicating the label . Let and . Similarly, let denote a set of unlabelled test samples, where is the unknown label of in the standard ZSL setting. and are the one-hot vector and semantic prototype of , corresponding to the class label . Here, and . The goal of zero-shot recognition is to predict the labels of test samples in by learning a classifier . The key notations used throughout this paper are summarized in Table I.
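A toy, hypothetical instantiation of this setup may help fix the notation; all sizes and names below are made up for illustration (semantic prototypes and one-hot labels stored as columns, following the matrix conventions of Table I):

```python
import numpy as np

# Hypothetical toy instantiation of the ZSL problem definition:
# k-dim semantic prototypes, d-dim image embeddings, 3 seen / 2 unseen classes.
rng = np.random.default_rng(0)
k, d, n_tr = 4, 6, 9
seen, unseen = [0, 1, 2], [3, 4]
A_s = rng.random((k, len(seen)))      # semantic prototypes of seen classes (columns)
A_u = rng.random((k, len(unseen)))    # semantic prototypes of unseen classes

X_tr = rng.random((d, n_tr))          # training image embeddings (columns)
labels = rng.choice(len(seen), n_tr)  # seen-class label of each training sample
Y_tr = np.eye(len(seen))[:, labels]   # one-hot label matrix, one column per sample
```

The goal is then to predict, for each unlabelled test column, a one-hot assignment over the unseen classes.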

III-B HPL: Formulation

Fig. 4: The illustration of HPL model for zero-shot recognition.

Assume both the seen-class training set and unlabelled unseen-class data are available. To predict the labels of , we propose a hierarchical prototype learning function for zero-shot recognition in an iterative model update process. Specifically, denotes the visual prototypes of , and denotes the visual prototypes of . represents the prototypes of and , named visual super-prototypes, and represents the prototypes of and , named semantic super-prototypes. Generally, . Key to ZSL is to transfer knowledge from the seen to the unseen classes. Motivated by the fact that there exist unseen/seen prototypes that fall into the same class, we thus consider learning super-prototypes to bridge the seen and unseen class domains, and meanwhile, align the visual and semantic spaces. just denotes the structural consistency representations for both two spaces in the seen class domain. For discriminative recognition, the minimization of over all possible assignments, i.e.,


is encouraged to achieve the three goals of i) minimizing the encoding cost of via visual prototypes ; ii) maximizing the structural consistency between and via super-prototypes and in an aligned space; iii) maximizing the structural consistency between and via and under the constraint of the minimum prediction error of . It is worth noting that we additionally introduce a regularizer in Eq. (1) for each super-prototype to enhance the stability of solutions and mitigate the scale issue.

To this end, as shown in Fig. 4, we decompose the objective function in Eq. (1) into three functions, corresponding to the three aforementioned objectives, as


where is an encoding function that favors learning discriminative prototypes from by well encoding under the supervision of its labels. denotes an alignment function that favors learning interpretable visual and semantic super-prototypes via structure alignment between the visual and semantic spaces. is a prediction function that favors generating more representative super-prototypes with the assistance of predicted labels of . The parameters control the effects of encoding cost and test data inference on the global objective function . A close to zero will ignore the prediction error, resulting in poor recognition performance, while a larger leads to higher recognition accuracy. Next, we study each of the three functions.

Encoding Function: Inspired by bidirectional projection learning [23], we adopt both the forward and reverse encoding costs to characterize the discriminability of for . Thus, the encoding function factorizes into


Unlike the popularly used image-semantics projection in previous works, a feature vector representing the low-level visual appearance of an object is projected into the high-level label space (instead of the middle-level semantic space), and then back to reconstruct itself in our model. In this way, various problems caused by the provided semantic prototypes can be avoided. Additionally, the projection in the second term of Eq. (3) encourages the prototype of the corresponding class to be very similar to the sample, thus guaranteeing the representativeness of the learned prototypes (the vectors in Eq. (3) are all normalized, so the distance used is the cosine distance). Meanwhile, the reverse projection in the first term enforces that each sample is closest to its corresponding prototype yet far away from the other prototypes, thus learning discriminative prototypes.
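Since the equation itself is not reproduced in this copy, the following is only a hedged sketch of such a bidirectional image-label encoding cost, with an assumed trade-off weight `mu` that is not the paper's notation:

```python
import numpy as np

def encoding_cost(X, Y, B, mu=1.0):
    """Hedged sketch of a bidirectional image<->label encoding cost (the exact
    form of Eq. (3) is in the paper): the forward term ||Y - B^T X||^2 pushes
    each sample toward its own class in label space, while the reverse term
    ||X - B Y||^2 asks the visual prototype B[:, c] to reconstruct the samples
    of class c. X is d x n, Y is C x n one-hot, B is d x C; the weight mu is
    an illustrative assumption."""
    forward = np.linalg.norm(Y - B.T @ X) ** 2
    reverse = np.linalg.norm(X - B @ Y) ** 2
    return forward + mu * reverse
```

Intuitively, the cost is zero only when every sample coincides with its class prototype and the prototypes are mutually consistent with the labels.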

Alignment Function: Since the semantic prototypes are additionally provided for ZSL, we can strengthen the discriminability of the visual prototypes by aligning their intrinsic structure with that of the semantic prototypes. This is motivated by the fact that super-prototypes often exist in the visual/semantic space; they encourage the original prototypes of the two spaces to be represented consistently in an aligned space and thus to share the same structure. Therefore, we consider


where is a nonnegative parameter controlling the relative importance of the visual and semantic spaces. Additionally, the alignment function enforces the same number of super-prototypes in the two spaces, because the intrinsic structure is unique whether viewed in the low-level or the high-level space. In particular, the structure alignment strategy in Eq. (4) alleviates the indiscriminability of the prototypes learned from Eq. (3) in unbalanced data scenarios.
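The alignment idea can be sketched as follows; this is an assumption-laden illustration (names, shapes, and the balance weight `gamma` are all made up), not the paper's exact Eq. (4):

```python
import numpy as np

def alignment_cost(B, A, V, U, Z, gamma=0.5):
    """Hedged sketch of structure alignment: visual prototypes B (d x C) and
    semantic prototypes A (k x C) are encoded over super-prototypes V (d x m)
    and U (k x m) with a SHARED coefficient matrix Z (m x C), so both spaces
    are forced to share one intrinsic structure. gamma is an assumed balance
    weight standing in for the trade-off parameter of Eq. (4)."""
    return (gamma * np.linalg.norm(B - V @ Z) ** 2
            + (1.0 - gamma) * np.linalg.norm(A - U @ Z) ** 2)
```

Sharing one coefficient matrix across both reconstructions is what encodes the "same structure" constraint described above.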

Prediction Function: Notice that the super-prototypes learned in the seen class domain are expected to be shared with the unseen class domain. Consequently, unseen-class samples become "seen" in the super-high-level semantic space and are thus easier to recognize (for instance, "dolphin" belongs to an unseen class, but becomes seen in terms of "mammal", a super-class). To this end, the prediction function pursues the minimum encoding cost of the test samples and the maximum structural consistency of the prototypes in the unseen class domain. Thus, we formulate the prediction function as


where denotes the structural consistency representations for both visual and semantic prototypes in the unseen class domain,


and .

As a result, our HPL model in Eq. (1) casts ZSL as a min-min optimization problem. Unlike most existing ZSL approaches, which perform the final recognition via nearest-neighbour search, our model predicts the class label of each test sample directly via the prediction function. This one-step recognition framework is also generic, in the sense that it can easily be extended to inductive settings by reformulating the prediction function as


Besides the standard ZSL above, generalized zero-shot learning (GZSL), where prediction on test data is made over both seen and unseen classes, has drawn much attention recently [19, 27]. To further improve the generalization ability of our HPL model on GZSL tasks, we reformulate the prediction function as




and . The main difference between Eq. (5) and Eq. (9) lies in encoding via both and under the GZSL setting.
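The one-step recognition described above can be sketched as direct assignment to the most similar unseen visual prototype; this simplified version (illustrative names and shapes) omits the structural-consistency terms of the full prediction function:

```python
import numpy as np

def predict_labels(X_te, B_u):
    """One-step recognition sketch: each test sample (column of X_te) is
    assigned directly to the unseen visual prototype (column of B_u) with the
    highest cosine similarity, instead of a separate nearest-neighbour search
    in the semantic space. The full prediction function additionally enforces
    structural consistency via the super-prototypes."""
    Xn = X_te / (np.linalg.norm(X_te, axis=0, keepdims=True) + 1e-12)
    Bn = B_u / (np.linalg.norm(B_u, axis=0, keepdims=True) + 1e-12)
    return np.argmax(Bn.T @ Xn, axis=0)
```

Under the GZSL setting, the same assignment would simply run over the concatenation of seen and unseen prototypes.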

III-C HPL: Algorithm

Here, we consider the model optimization under the standard ZSL setting (the proposed optimization also generalizes to the GZSL setting, as presented in the supplementary material). Putting all three functions together, we consider the following minimization problem


Solving the optimization problem in Eq. (III-C) is non-trivial, since the last term of the objective function is itself a minimization problem. In the following, we formulate our solver as an iterative optimization algorithm. Given the two super-prototype sets at an iteration during model learning, we can obtain the optimal solution by solving the optimization problem in Eq. (5). The optimization problem in Eq. (III-C) at that iteration can then be approximated as follows


where .

As summarized in Algorithm 1, our solver iterates between updating the unseen-class data prediction and updating the seen-class data fitting. The optimization problem in Eq. (5) (resp. Eq. (12)) is clearly not convex in the three (resp. four) variables jointly, but it is convex in each of them separately. We thus employ an alternating optimization method to solve it. The details of solving Eq. (5) and Eq. (12) are provided in the supplementary material. In particular, the objective function in Eq. (12) can be equivalently simplified as


where , , , and thus . This facilitates parameter tuning.

1:  Input: (training set); (test samples); (semantic prototypes of unseen classes); parameters (); (the number of super-prototypes); ().
2:  Initialize ; , .
3:  Output: .
4:  repeat
5:     Update via Eq. (5);
6:     Update via Eq. (12);
7:     ;
8:     ;
9:     ;
10:  until ( and ) or ()
11:  , ;
12:  Obtain , , and via Eq. (5).
Algorithm 1 HPL for Zero-Shot Recognition
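The outer loop of Algorithm 1 can be sketched as below; only the loop structure is shown, with the actual sub-solvers for Eq. (5) and Eq. (12) (given in the supplementary material) abstracted as callables with an assumed interface:

```python
def hpl_outer_loop(update_unseen, update_seen, max_iter=100, tol=1e-4):
    """Skeleton of Algorithm 1's alternation between unseen-class prediction
    and seen-class fitting. The two callables (assumed interface) perform one
    block update each and return the current objective value; we stop when
    both objectives have stabilized, mirroring the convergence test of the
    algorithm."""
    prev_u = prev_s = float("inf")
    for t in range(max_iter):
        obj_u = update_unseen()  # predict unseen-class labels, Eq. (5)
        obj_s = update_seen()    # refit seen-class prototypes, Eq. (12)
        if abs(prev_u - obj_u) < tol and abs(prev_s - obj_s) < tol:
            return t + 1         # number of outer iterations used
        prev_u, prev_s = obj_u, obj_s
    return max_iter
```

Each callable would internally run the alternating sub-problem updates described in the convergence analysis below.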

Convergence Analysis. As can be observed from Eq. (5) and Eq. (12), thanks to the linear formulations it is easy to solve the seven sub-problems corresponding to the seven variables of our model. Specifically, some of the solutions can be expressed in closed form, some updates amount to solving Sylvester equations [6], and computing the label assignment is a minimum search. Moreover, we adopt a line search strategy [30] for the remaining updates. Thus, the objective function in Eq. (III-C) is non-increasing and lower-bounded during the iterative optimization of each sub-problem in Algorithm 1.
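Such Sylvester-type sub-problems have standard closed-form solvers. As a generic, hedged illustration (not the paper's exact update), minimizing a sum of two coupled least-squares terms over one matrix yields a Sylvester normal equation:

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Minimizing ||A X - C||^2 + ||X B - D||^2 over X gives the stationarity
# condition (A^T A) X + X (B B^T) = A^T C + D B^T, a Sylvester equation
# solvable directly with scipy. All matrices here are random placeholders.
rng = np.random.default_rng(0)
A, B = rng.normal(size=(5, 3)), rng.normal(size=(4, 6))
C, D = rng.normal(size=(5, 4)), rng.normal(size=(3, 6))
X = solve_sylvester(A.T @ A, B @ B.T, A.T @ C + D @ B.T)
```

Since both coefficient matrices are positive semi-definite Gram matrices, the equation is generically well-posed.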

Complexity Analysis. We further analyze the time complexity of Algorithm 1 as follows. The complexity of solving Eq. (5) is , and updating { , } by solving Eq. (12) costs  (the computation over the training samples is excluded, since it can be performed and stored in advance), where and are the numbers of (inner) iterations required for convergence. In summary, one iteration of Algorithm 1 has linear time complexity with respect to the test data size. Thus, it is efficient even for large-scale ZSL problems.

IV Experimental Results and Analysis

In this section, we first detail our experimental protocol, and then present the experimental results by comparing our HPL model with the state of the art for zero-shot recognition on five benchmark datasets under various settings.

Dataset    Sem. dim.  Classes: Total  Seen (train+val)  Unseen  Images: Total  SS Train  SS Test  PS Train  PS Test (seen+unseen)
aPY        64         32              15+5              12      15339          12695     2644     5932      1483+7924
AwA        85         50              27+13             10      30475          24295     6180     19832     4958+5685
SUN        102        717             580+65            72      14340          12900     1440     10320     2580+1440
CUB        312        200             100+50            50      11788          8855      2933     7057      1764+2967
ImageNet   1000       1360            800+200           360     254000         200000    54000    -         -
TABLE II: Statistics for the five datasets. The seen classes consist of training and validation classes, and the test set under the PS protocol includes both seen- and unseen-class samples. Note that the SS and PS protocols have no effect on the ImageNet dataset.

IV-A Evaluation Setup and Metrics

Datasets. Among the most widely used datasets for ZSL, we first select four attribute datasets. Two of them are coarse-grained, one small (aPascal & Yahoo (aPY) [11]) and one medium-scale (Animals with Attributes (AwA) [24]). The other two datasets (SUN Attribute (SUN) [33] and CUB-200-2011 Birds (CUB) [45]) are both fine-grained and medium-scale. We additionally adopt a large-scale dataset (ImageNet [38]) for standard ZSL, where the 1K classes of ILSVRC 2012 are used as seen classes, and 360 non-overlapping classes from ILSVRC 2010 serve as unseen classes, as in [23]. For ImageNet, we use Word2Vec [29] trained on Wikipedia, provided by [7], since attributes are not available for the 21K classes. Details of all dataset statistics are in Table II.

Protocols. For fair comparisons, we conduct extensive experiments based on two typical protocols, as shown in Table II: the Standard Split protocol (SS) [24] and the Proposed Split protocol (PS) [50]. The main difference between SS and PS is that PS guarantees that no unseen classes come from ImageNet-1K, which is used to pre-train the base network; otherwise the zero-shot rule would be violated. Specifically, we conduct the standard ZSL task under both the SS and PS protocols, while the GZSL task is conducted only under the PS protocol.

Visual Features. Since existing ZSL approaches use different visual features under the SS protocol, we compare with them based on three types of widely used features: 1024-dim GoogLeNet features (G), 2048-dim ResNet-101 features (R), and 4096-dim VGG19 features (V), provided by [7], [50], and [56], respectively. This enables a direct comparison with the published results of existing methods. Under the PS protocol, all compared methods are based on ResNet-101 features, since these usually yield higher accuracy than other features, as demonstrated in [50].

Evaluation Metrics. At the test phase of ZSL, we are interested in high performance on both densely and sparsely populated classes. Thus, we use the unified evaluation protocol proposed in [50], where accuracy is computed independently for each class and then averaged. Specifically, under the standard ZSL setting, we measure the average per-class top-1 accuracy; for the ImageNet dataset, the average per-class top-5 accuracy is computed instead. Under the GZSL setting, we compute the harmonic mean H of the seen- and unseen-class accuracies, which favors high accuracy on both:

H = 2 · Acc_S · Acc_U / (Acc_S + Acc_U),

where Acc_S and Acc_U are the accuracies of recognizing the test samples from the seen and the unseen classes, respectively.
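These two metrics are straightforward to compute; a short sketch (the symbol names Acc_S/Acc_U follow the standard GZSL convention of [50]):

```python
import numpy as np

def per_class_top1(y_true, y_pred):
    """Average per-class top-1 accuracy: accuracy is first computed within
    each class, then averaged, so sparse classes count as much as dense ones."""
    classes = np.unique(y_true)
    return float(np.mean([np.mean(y_pred[y_true == c] == c) for c in classes]))

def harmonic_mean_gzsl(acc_s, acc_u):
    """GZSL criterion H = 2 * Acc_S * Acc_U / (Acc_S + Acc_U); H is high only
    when both seen- and unseen-class accuracies are high."""
    return 0.0 if acc_s + acc_u == 0 else 2 * acc_s * acc_u / (acc_s + acc_u)
```

Note that the harmonic mean collapses to zero whenever either accuracy is zero, which is why it is preferred over the arithmetic mean for GZSL.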

Parameter Settings. There are four parameters in our HPL model, the last being the number of super-prototypes. As in [57, 23, 51], these hyperparameters are fine-tuned on a disjoint validation set of classes for each dataset.

Fig. 5: Empirical (a) convergence and (b) complexity results of our algorithm for standard ZSL on four datasets under the PS protocol.

Compared Methods. We choose to compare with a wide range of competitive and representative ZSL approaches, especially those that have achieved state-of-the-art results recently. In particular, the compared approaches involve both inductive and transductive models, as well as both shallow and deep models.

Type Method Fea Model aPY AwA SUN CUB ImageNet
Ind. ESZSL [37] R S 34.4 74.7 57.3 55.1 -
SynC [7] G S - 72.9 62.7 54.7 -
SAE [23] G S 55.4 84.7 65.2 61.4 27.2
EXEM [8] G S - 77.2 69.6 59.8 -
GANZrl [42] G D - - - 62.6 29.6
CAPD [35] G S 55.1 80.8 - 45.3 23.6
DCN [27] G D - 82.3 67.4 55.6 -
SE-ZSL [43] R D - 83.8 64.5 60.3 25.4
MSplit LBI [59] V S - 85.3 - 57.5 18.8
Trans. TMV-HLP [14] V S - 80.5 - 47.9 -
SP-ZSR [57] V S 69.7 92.1 - 55.3 -
GFZSL [44] V S - 94.3 63.7 -
DSRL [52] V S 56.3 87.2 85.4 57.1 -
BiDiLEL [46] G S - 92.6 - 62.8 -
STZSL [16] V S 54.4 83.7 - 58.7 -
TSTD [54] V S - 90.3 - 58.2 -
DIPL [58] G S 87.8 96.1 70.0 68.2 31.7
VZSL [48] V D - 94.8 - 66.5 23.1
QFSL [41] G D - - 61.7 69.7 -
HPL G S 89.2 96.3 85.8 72.1 29.2
HPL R S 80.4 27.3
HPL V S 89.7 95.5 81.5 70.8
TABLE III: Comparative results (%) of standard ZSL on five datasets under the SS protocol. Notation - 'Ind.': inductive; 'Trans.': transductive; 'S': shallow; 'D': deep; '-': no result reported in the original paper. For each dataset, the best result is marked in bold and the second best in blue. We report results averaged over six random trials.
Type Method Model aPY AwA SUN CUB
Ind. ESZSL [37] S 38.3 58.2 54.5 53.9
SynC [7] S 23.9 54.0 56.3 55.6
SAE [23] S 8.3 53.0 40.3 33.3
CAPD [35] S 39.3 52.6 49.7 53.8
f-CLSWGAN [51] D - 69.9 62.1 61.5
CDL [20] S 43.0 69.9 63.6 54.5
DCN [27] D 43.6 65.2 61.8 56.2
SE-ZSL [43] D - 69.5 63.4 59.6
PreseR [3] D 38.4 - 61.4 56.0
Trans. ALE [1] S 45.5 65.3 56.1 54.3
GFZSL [44] S 36.9 81.5 63.5 50.4
DSRL [52] S 44.8 74.1 57.2 48.9
DIPL [58] S 69.6 85.6 67.9 65.4
QFSL [41] D - - 58.3 72.1
TABLE IV: Comparative results (%) of standard ZSL on four datasets with ResNet-101 features under the PS protocol.

Iv-B Comparative Results

Standard ZSL. We first compare our HPL model with existing state-of-the-art ZSL approaches under the standard setting. Experiments are conducted on five datasets, using both the SS and PS protocols for more convincing results. To further verify that our method is not tied to specific visual features, we implement our model under the SS protocol with 1024-dim GoogLeNet features (G), 2048-dim ResNet-101 features (R), and 4096-dim VGG19 features (V) separately. The comparative results under the SS and PS protocols are reported in Table III and Table IV, respectively.

It can be seen that: 1) Our HPL model yields better performance than the state-of-the-art baselines. This validates that, by minimizing the encoding cost and maximizing structural consistency, the learned prototypes are discriminative enough to recognize unseen-class samples. 2) For our model, ResNet-101 and VGG19 features generally lead to better results than GoogLeNet features, except on the SUN dataset. This is because only scarce (about 20) training samples are available per class in SUN, which causes larger networks (e.g., ResNet-101) to over-fit. 3) Across the five datasets, the improvements obtained by our model over the strongest competitor range from 0.9% to 5.6%. This effectively sets new baselines for ZSL, given that most of the compared models adopt far more complicated nonlinear formulations or even generate numerous training samples for unseen classes. 4) With the assistance of test samples, our model performs better than the inductive approaches under either the SS or the PS protocol. However, comparing Table IV with Table III, almost all ZSL approaches suffer performance degradation under the PS protocol, because the unseen-class information is removed from the dataset used to pre-train the base network for feature extraction. 5) Our model clearly outperforms the image-attribute projection based approaches (e.g., SAE [23] and QFSL [41]), which demonstrates the effectiveness of prototype learning via image-label projection.
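The decision rule and metric behind these tables can be sketched as follows; the function names and the use of squared Euclidean distance are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def predict_nearest_prototype(X, prototypes):
    """Label each sample with the class of its nearest visual prototype.

    X          : (n, d) test features (e.g., ResNet-101 activations)
    prototypes : (c, d) learned visual prototypes, one per class
    """
    # Squared Euclidean distance from every sample to every prototype.
    d2 = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

def per_class_mean_accuracy(y_true, y_pred):
    """Mean of per-class accuracies, the accuracy reported in the tables."""
    classes = np.unique(y_true)
    return float(np.mean([(y_pred[y_true == c] == c).mean() for c in classes]))
```

Averaging accuracy per class (rather than per sample) keeps rare classes from being drowned out by frequent ones, which matters on imbalanced benchmarks such as SUN.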

Additionally, in our experiments, we set the maximum number of iterations to 100, and the optimization always converges within tens of iterations, usually fewer than 60. As shown in Fig. 5(a), the objective function of our HPL model is monotonically non-increasing and finally converges with the proposed iterative update algorithm. Meanwhile, we also report the physical running time of our algorithm on four datasets under the PS protocol for the standard ZSL task in Fig. 5(b), which indicates that our iterative update algorithm has linear time complexity with respect to the test data size. These observations support the theoretical analysis of convergence and complexity in Section III-C. Thus, the proposed algorithm is indeed efficient, especially compared with approaches that adopt far more complicated nonlinear formulations.
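The stopping rule described above can be sketched as a generic alternating-update loop; the function shape and the relative-tolerance test are assumptions for illustration, not the paper's code:

```python
def alternating_optimize(update_steps, objective, max_iter=100, tol=1e-4):
    """Alternating-update loop: cap the iterations at 100 and stop early
    once the (non-increasing) objective changes by less than a relative
    tolerance, mirroring the convergence behavior reported in Fig. 5(a)."""
    prev = objective()
    history = [prev]
    for _ in range(max_iter):
        for step in update_steps:  # update each block of variables in turn
            step()
        cur = objective()
        history.append(cur)
        if abs(prev - cur) <= tol * max(abs(prev), 1.0):
            break
        prev = cur
    return history
```

Each pass over `update_steps` touches every sample once, which is consistent with the observed linear scaling in the test-set size.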

Type Method aPY (ts tr H) AwA (ts tr H) SUN (ts tr H) CUB (ts tr H)
Ind. ESZSL [37] 2.4 70.1 4.6 6.6 75.6 12.1 11.0 27.9 15.8 12.6 63.8 21.0
SynC [7] 7.4 66.3 13.3 8.9 16.2 7.9 43.3 13.4 11.5 70.9 19.8
SAE [23] 0.4 0.9 1.8 77.1 3.5 8.8 18.0 11.8 7.8 54.0 13.6
CAPD [35] 26.8 59.5 37.0 45.2 68.6 54.5 35.8 27.8 31.3 44.9 41.7 43.3
f-CLSWGAN [51] - - - 57.9 61.4 59.6 42.6 36.6 39.4 43.7 57.7 49.7
CDL [20] 19.8 48.6 28.1 28.1 73.5 40.6 23.5 32.9 21.5 34.7 26.5
DCN [27] 14.2 75.0 23.9 25.5 84.2 39.1 25.5 37.0 30.2 28.4 60.7 38.7
SE-ZSL [43] - - - 56.3 67.8 61.5 40.9 30.5 34.9 41.5 53.3 46.7
PreseR [3] 13.5 51.4 21.4 - - - 20.8 37.2 26.7 24.6 54.3 33.9
Trans. ALE [1] - - 9.6 - - 26.1 - - 21.5 - - 31.5
GFZSL [44] - - 0.0 - - 48.5 - - 0.0 - - 33.1
DSRL [52] - - 11.6 - - 22.5 - - 20.6 - - 24.3
QFSL [41] - - - - - - 31.2 38.8
HPL 64.4 75.7 39.1 50.9 47.1 54.2 50.4
TABLE V: Comparative results (%) of GZSL on four datasets with ResNet-101 features under the PS protocol.

Generalized ZSL. In real applications, whether a sample comes from a seen or an unseen class is unknown in advance. Hence, GZSL is a more practical and challenging task than standard ZSL. Here, we further evaluate the proposed model under the GZSL setting with the PS protocol (note that GZSL is rarely performed under the SS protocol due to its unreasonable data split). The other experimental settings are kept the same as in [50]: the 2048-dim ResNet-101 features are adopted as input, and the compared approaches are consistent with those in Table IV. The comparative results are shown in Table V; they are much lower than those of standard ZSL. This is not surprising, since the seen classes are included in the search space and act as distractors for samples from unseen classes. Additionally, our method generally improves the overall performance (i.e., the harmonic mean H) over the strongest competitor by an obvious margin (from 4.8% to 5.6%). This promising performance boost mainly comes from the improvement of mean class accuracy on the unseen classes, without much performance degradation on the seen classes. These compelling results also verify that our method can significantly alleviate the strong bias towards seen classes by using the test samples from unseen classes. This is mainly because, unlike most existing transductive approaches (e.g., DIPL [58]) that rely only on projection learning, our HPL further introduces a structural consistency constraint in the unseen class domain for transductive ZSL. In turn, benefiting from the convincing predicted labels, the prototypes in both the seen and unseen class domains are learned more discriminatively.
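The overall GZSL score is the harmonic mean of the per-class accuracies on unseen (ts) and seen (tr) classes; as a sanity check, CAPD's AwA entries in Table V (ts = 45.2, tr = 68.6) give H = 54.5:

```python
def gzsl_harmonic_mean(acc_unseen, acc_seen):
    """Harmonic mean H of unseen-class (ts) and seen-class (tr) accuracy,
    the overall GZSL score reported in Table V. H is high only when both
    accuracies are high, penalizing a strong bias towards seen classes."""
    if acc_unseen + acc_seen == 0:
        return 0.0
    return 2.0 * acc_unseen * acc_seen / (acc_unseen + acc_seen)
```

A method that ignores unseen classes entirely (ts = 0) scores H = 0 no matter how high tr is, which is why H is preferred over the arithmetic mean here.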

For a straightforward illustration of our HPL in zero-shot recognition, we further show the t-SNE of all unseen-class data on the AwA dataset with the true and predicted labels for standard ZSL and GZSL. As observed from Fig. 6, our HPL can still capture the global distribution of the original data under various settings, although its performance under the GZSL setting is not as attractive as that under the standard ZSL setting.

Fig. 6: The t-SNE of all unseen-class data on AwA dataset with the true and predicted labels for standard ZSL and GZSL.
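A hypothetical sketch of this visualization using scikit-learn's t-SNE; the library, parameter choices, and function names are assumptions, not the paper's implementation:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for saving figures
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def tsne_embed(features, perplexity=30.0):
    """Project high-dimensional features to 2-D for visualization."""
    return TSNE(n_components=2, perplexity=perplexity,
                init="pca", random_state=0).fit_transform(features)

def plot_true_vs_pred(features, y_true, y_pred):
    """Side-by-side scatter plots colored by true and predicted labels."""
    emb = tsne_embed(features, perplexity=min(30.0, len(features) / 4))
    fig, axes = plt.subplots(1, 2, figsize=(10, 4))
    for ax, y, title in ((axes[0], y_true, "true labels"),
                         (axes[1], y_pred, "predicted labels")):
        ax.scatter(emb[:, 0], emb[:, 1], c=y, s=5, cmap="tab10")
        ax.set_title(title)
    return fig
```

Comparing the two panels by eye shows which unseen classes stay coherent clusters under the predicted labels and which are absorbed by neighbors.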

IV-C Parameter Analysis

It is worth noting that most parameters in the proposed model can be easily fine-tuned using the train and validation splits provided by [50], while tuning the number of super-prototypes is not trivial. This is because the optimal number of super-prototypes varies with the total number of classes; additionally, when the seen and unseen classes change, the structure of the super-prototypes is also affected. To alleviate this issue, instead of the count itself, we fine-tune the proportion of super-prototypes to the total class number on the validation set. Fig. 7 shows the effect of each of the four parameters in our HPL model, where the standard ZSL task is performed on the four datasets under the PS protocol as in Table IV. It can be observed that moderate values are generally enough to achieve promising recognition accuracy. Specifically: i) the weight on the visual encoding term peaks around 0.6, which means the visual encoding cost should slightly outweigh the alignment term. This is reasonable since zero-shot recognition depends directly on the encoding term, while the alignment term serves as a regularizer. ii) The visual-semantic trade-off peaks around 0.5, since the visual and semantic spaces should be equally important in the alignment term. iii) The third parameter peaks around 0.6, with accuracy first increasing and then decreasing as its value grows, corresponding to over-fitting and under-fitting of our model on the seen-class training sets. iv) The optimal number of super-prototypes is approximately the number of seen classes, since the number of seen classes is often larger than the number of unseen classes.
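The proportion-based tuning described above can be sketched as follows; the function names and the `val_accuracy` callback are hypothetical, standing in for one training-plus-validation run of the model:

```python
def num_super_prototypes(proportion, num_classes):
    """Derive K (the number of super-prototypes) from the tuned proportion,
    so the hyper-parameter transfers across datasets with different class
    counts."""
    return max(1, int(round(proportion * num_classes)))

def tune_proportion(candidates, num_classes, val_accuracy):
    """Pick the proportion maximizing validation accuracy.

    val_accuracy(K) is assumed to train the model with K super-prototypes
    and return its accuracy on the validation split of [50]."""
    scored = [(val_accuracy(num_super_prototypes(p, num_classes)), p)
              for p in candidates]
    return max(scored)[1]
```

Tuning the proportion rather than K directly is what lets a value found on the validation split carry over when the seen/unseen class partition changes.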

Fig. 7: Standard ZSL performance as a function of the specific hyper-parameter on the four datasets with ResNet-101 features under PS protocol.
Type Ablation Index Algorithm aPY AwA SUN CUB
Accuracy(%) i HPL1 72.2 91.9 68.3 75.4
HPL 73.8 91.2 70.4 75.2
ii Seen classes part first 72.9 91.0 69.4 75.7
Unseen classes part first 73.8 91.2 70.4 75.2
iii Kmeans [10] 73.2 90.2 68.2 75.5
AP [12] 71.1 91.4 68.8 74.1
Each Class Mean 73.8 91.2 70.4 75.2
iv without alternative 67.8 85.9 65.0 67.5
with alternative 73.8 91.2 70.4 75.2
Time(sec.) i HPL1 1091.0 697.3 1101.6 631.6
HPL 702.7 490.9 811.2 333.4
ii Seen classes part first 1034.2 592.1 1230.1 572.1
Unseen classes part first 702.7 490.9 811.2 333.4
iii Kmeans [10] 1101.1 892.1 782.1 308.3
AP [12] 981.9 689.6 992.3 482.5
Each Class Mean 702.7 490.9 811.2 333.4
iv without alternative 167.1 89.4 201.5 68.7
with alternative 702.7 490.9 811.2 333.4
TABLE VI: Comparative results of standard ZSL on four datasets with ResNet-101 features under the PS protocol.

IV-D Ablation Study

To verify the advantages of the proposed HPL model and the optimization rules in Algorithm 1, we additionally conduct four ablation studies on all datasets with ResNet-101 features under the PS protocol of standard ZSL, as follows. i) The bidirectional projection is also introduced in Eq. (4) and Eq. (7), as in Eq. (3) and Eq. (6); we dub this variant HPL1. This influences the update of all variables but one, while the optimization strategy for each variable is the same as that of our HPL model. Table VI reports the accuracy and physical running time of the two models. It can be observed that HPL1 obtains accuracy similar to our HPL but takes more time. Thus, it is generally sufficient to consider the bidirectional projection only in Eq. (3), since Eq. (4) actually serves as a regularizer of Eq. (3).

For Algorithm 1: ii) we exchange the steps in lines 5 and 6 to update the seen-classes part first. The comparison results are presented in Table VI. It can be concluded that optimizing the unseen-classes part first achieves similar, and even more promising, accuracy than optimizing the seen-classes part first, while taking much less time. iii) To analyze the initialization sensitivity of the prototypes, we additionally initialize them using Kmeans [10] and AP [12]. As compared in Table VI, different initial values have no obvious effect on ZSL accuracy, and only a slight effect on the physical running time of our algorithm. This is reasonable since different initializations generally influence only the convergence rate of the alternating optimization algorithm. iv) Finally, we evaluate the effect of the proposed alternating optimization rules in Algorithm 1. Specifically, we first train the super-prototypes from the seen classes via Eq. (12), and then apply them to the unseen classes to solve Eq. (5). From Table VI, we find that the standard ZSL accuracy with alternating optimization is considerably superior to that without it, though it takes a little more time. Additionally, the importance of each component in our HPL model can be observed from the parameter analysis results in Fig. 7. Obviously, the weight on the encoding term peaks around 0.6 instead of 0, which validates that the encoding function has a slightly larger effect on the final performance than the alignment function. The bidirectional projection in the encoding function is also necessary, since the corresponding weight peaks around 0.5. In particular, the weight on the test-data term peaks around 0.6 instead of 0, which shows that employing test data in our model is indeed beneficial to ZSL.
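The initialization variants in item iii) can be sketched with off-the-shelf clustering; using scikit-learn's KMeans and AffinityPropagation as stand-ins for [10] and [12] is an assumption for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans, AffinityPropagation

def init_class_means(X, y):
    """Default initialization: one prototype per class ("Each Class Mean")."""
    classes = np.unique(y)
    return np.stack([X[y == c].mean(axis=0) for c in classes])

def init_kmeans(X, k, seed=0):
    """Kmeans-style initialization: k cluster centers as prototypes."""
    return KMeans(n_clusters=k, n_init=10,
                  random_state=seed).fit(X).cluster_centers_

def init_affinity_propagation(X):
    """AP-style initialization: exemplars chosen by affinity propagation
    (the number of prototypes is determined by the data, not fixed)."""
    return AffinityPropagation(random_state=0).fit(X).cluster_centers_
```

Since Table VI shows the three initializations reach nearly the same accuracy, the cheap class-mean variant is the natural default.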

V Conclusions and Future Work

In this paper, we have proposed a hierarchical prototype learning model (HPL) that performs efficient zero-shot recognition in the original visual space and, meanwhile, avoids a series of problems caused by the provided semantic prototypes. In particular, the discriminability of the visual prototypes is further strengthened by coupling them with semantic prototypes in an aligned space, thus achieving more promising recognition performance. Furthermore, interpretable super-prototypes shared between the seen and unseen class domains are exploited to alleviate the domain shift issue. We have carried out extensive ZSL experiments on five benchmarks, and the results demonstrate the clear superiority of the proposed HPL over the state-of-the-art approaches. It is also worth noting that the number of visual/semantic prototypes is not controllable in our HPL. In essence, learning one prototype per class is generally insufficient to recognize a class and differentiate between two classes. Thus, our ongoing research includes learning prototypes adaptively according to the data distribution.


  • [1] Z. Akata, F. Perronnin, Z. Harchaoui, and C. Schmid (2016) Label-embedding for image classification. IEEE transactions on pattern analysis and machine intelligence 38 (7), pp. 1425–1438. Cited by: §II, TABLE IV, TABLE V.
  • [2] Z. Akata, S. Reed, D. Walter, H. Lee, and B. Schiele (2015) Evaluation of output embeddings for fine-grained image classification. In Computer Vision and Pattern Recognition, pp. 2927–2936. Cited by: §I.
  • [3] Y. Annadani and S. Biswas (2018) Preserving semantic relations for zero-shot learning. In Computer Vision and Pattern Recognition, pp. 7603–7612. Cited by: §I, §II, §II, §II, TABLE IV, TABLE V.
  • [4] S. C. AP, S. Lauly, H. Larochelle, M. Khapra, B. Ravindran, V. C. Raykar, and A. Saha (2014) An autoencoder approach to learning bilingual word representations. In Advances in Neural Information Processing Systems, pp. 1853–1861. Cited by: §II.
  • [5] A. Bansal, K. Sikka, G. Sharma, R. Chellappa, and A. Divakaran (2018) Zero-shot object detection. In European Conference Computer Vision, pp. 397–414. Cited by: §II.
  • [6] R. H. Bartels and G. W. Stewart (1972) Solution of the matrix equation AX + XB = C. Communications of the ACM 15 (9), pp. 820–826. Cited by: §III-C.
  • [7] S. Changpinyo, W. Chao, B. Gong, and F. Sha (2016) Synthesized classifiers for zero-shot learning. In Computer Vision and Pattern Recognition, pp. 5327–5336. Cited by: §I, §II, §IV-A, §IV-A, TABLE III, TABLE IV, TABLE V.
  • [8] S. Changpinyo, W. Chao, and F. Sha (2017) Predicting visual exemplars of unseen classes for zero-shot learning. In International Conference on Computer Vision, pp. 3476–3485. Cited by: §II, TABLE III.
  • [9] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell (2014) Decaf: a deep convolutional activation feature for generic visual recognition. In International conference on machine learning, pp. 647–655. Cited by: §I.
  • [10] R. O. Duda, P. E. Hart, and D. G. Stork (2012) Pattern classification. John Wiley & Sons. Cited by: §IV-D, TABLE VI.
  • [11] A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth (2009) Describing objects by their attributes. In Computer Vision and Pattern Recognition, pp. 1778–1785. Cited by: §IV-A.
  • [12] B. J. Frey and D. Dueck (2007) Clustering by passing messages between data points. Science 315 (5814), pp. 972–976. Cited by: §IV-D, TABLE VI.
  • [13] A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, T. Mikolov, et al. (2013) Devise: a deep visual-semantic embedding model. In Advances in neural information processing systems, pp. 2121–2129. Cited by: §I.
  • [14] Y. Fu, T. M. Hospedales, T. Xiang, and S. Gong (2015) Transductive multi-view zero-shot learning. IEEE transactions on pattern analysis and machine intelligence 37 (11), pp. 2332–2345. Cited by: §I, §II, TABLE III.
  • [15] Z. Fu, T. Xiang, E. Kodirov, and S. Gong (2018) Zero-shot learning on semantic class prototype graph. IEEE transactions on pattern analysis and machine intelligence 40 (8), pp. 2009–2022. Cited by: §II.
  • [16] Y. Guo, G. Ding, J. Han, and Y. Gao (2017) Zero-shot learning with transferred samples. IEEE Transactions on Image Processing 26 (7), pp. 3277–3290. Cited by: §II, TABLE III.
  • [17] Y. Guo, G. Ding, X. Jin, and J. Wang (2016) Transductive zero-shot recognition via shared model space learning. In Thirtieth AAAI Conference on Artificial Intelligence, Vol. 3, pp. 8. Cited by: §I, §II.
  • [18] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Computer Vision and Pattern Recognition, pp. 770–778. Cited by: Fig. 3, §I.
  • [19] Y. Hubert Tsai, L. Huang, and R. Salakhutdinov (2017) Learning robust visual-semantic embeddings. In Computer Vision and Pattern Recognition, pp. 3571–3580. Cited by: §II, §III-B.
  • [20] H. Jiang, R. Wang, S. Shan, and X. Chen (2018) Learning class prototypes via structure alignment for zero-shot recognition. In European Conference Computer Vision, pp. 121–138. Cited by: §I, TABLE IV, TABLE V.
  • [21] P. Kankuekul, A. Kawewong, S. Tangruamsub, and O. Hasegawa (2012) Online incremental attribute-based zero-shot learning. In Computer Vision and Pattern Recognition, pp. 3657–3664. Cited by: §I.
  • [22] E. Kodirov, T. Xiang, Z. Fu, and S. Gong (2015) Unsupervised domain adaptation for zero-shot learning. In International Conference on Computer Vision, pp. 2452–2460. Cited by: §I, §II.
  • [23] E. Kodirov, T. Xiang, and S. Gong (2017) Semantic autoencoder for zero-shot learning. In Computer Vision and Pattern Recognition, pp. 3174–3183. Cited by: §I, §II, §II, §III-B, §IV-A, §IV-A, §IV-B, TABLE III, TABLE IV, TABLE V.
  • [24] C. Lampert, H. Nickisch, and S. Harmeling (2014) Attribute-based classification for zero-shot visual object categorization. IEEE Transactions on Pattern Analysis and Machine Intelligence 36 (3), pp. 453–465. Cited by: §I, §II, §IV-A, §IV-A.
  • [25] C. H. Lampert, H. Nickisch, and S. Harmeling (2009) Learning to detect unseen object classes by between-class attribute transfer. In Computer Vision and Pattern Recognition, pp. 951–958. Cited by: §I.
  • [26] Y. Li, J. Zhang, J. Zhang, and K. Huang (2018) Discriminative learning of latent features for zero-shot recognition. In Computer Vision and Pattern Recognition, pp. 7463–7471. Cited by: §II.
  • [27] S. Liu, M. Long, J. Wang, and M. I. Jordan (2018) Generalized zero-shot learning with deep calibration network. In Advances in Neural Information Processing Systems, pp. 2006–2016. Cited by: §II, §II, §III-B, TABLE III, TABLE IV, TABLE V.
  • [28] T. Mensink, E. Gavves, and C. G. Snoek (2014) Costa: co-occurrence statistics for zero-shot classification. In Computer Vision and Pattern Recognition, pp. 2441–2448. Cited by: §I.
  • [29] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean (2013) Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111–3119. Cited by: §IV-A.
  • [30] J. J. Moré and D. J. Thuente (1994) Line search algorithms with guaranteed sufficient decrease. ACM Transactions on Mathematical Software (TOMS) 20 (3), pp. 286–307. Cited by: §III-C.
  • [31] P. Morgado and N. Vasconcelos (2017) Semantically consistent regularization for zero-shot recognition. In Computer Vision and Pattern Recognition, Vol. 9, pp. 10. Cited by: §I, §II.
  • [32] L. Niu, A. Veeraraghavan, and A. Sabharwal (2018) Webly supervised learning meets zero-shot learning: a hybrid approach for fine-grained classification. In Computer Vision and Pattern Recognition, pp. 7171–7180. Cited by: §I, §II, §II.
  • [33] G. Patterson and J. Hays (2012) Sun attribute database: discovering, annotating, and recognizing scene attributes. In Computer Vision and Pattern Recognition, pp. 2751–2758. Cited by: §IV-A.
  • [34] M. Radovanović, A. Nanopoulos, and M. Ivanović (2010) Hubs in space: popular nearest neighbors in high-dimensional data. Journal of Machine Learning Research 11 (Sep), pp. 2487–2531. Cited by: §II.
  • [35] S. Rahman, S. Khan, and F. Porikli (2018) A unified approach for conventional zero-shot, generalized zero-shot, and few-shot learning. IEEE Transactions on Image Processing 27 (11), pp. 5652–5667. Cited by: TABLE III, TABLE IV, TABLE V.
  • [36] M. Rohrbach, S. Ebert, and B. Schiele (2013) Transfer learning in a transductive setting. In Advances in neural information processing systems, pp. 46–54. Cited by: §I, §II.
  • [37] B. Romera-Paredes and P. Torr (2015) An embarrassingly simple approach to zero-shot learning. In International Conference on Machine Learning, pp. 2152–2161. Cited by: §I, §II, TABLE III, TABLE IV, TABLE V.
  • [38] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and F. Li (2015) Imagenet large scale visual recognition challenge. International Journal of Computer Vision 115 (3), pp. 211–252. Cited by: §IV-A.
  • [39] E. Schönfeld, S. Ebrahimi, S. Sinha, T. Darrell, and Z. Akata (2019) Generalized zero-and few-shot learning via aligned variational autoencoders. In Computer Vision and Pattern Recognition, Cited by: §II.
  • [40] R. Socher, M. Ganjoo, C. D. Manning, and A. Ng (2013) Zero-shot learning through cross-modal transfer. In Advances in neural information processing systems, pp. 935–943. Cited by: §I, §II.
  • [41] J. Song, C. Shen, Y. Yang, Y. Liu, and M. Song (2018) Transductive unbiased embedding for zero-shot learning. In Computer Vision and Pattern Recognition, pp. 1024–1033. Cited by: §IV-B, TABLE III, TABLE IV, TABLE V.
  • [42] B. Tong, M. Klinkigt, J. Chen, X. Cui, Q. Kong, T. Murakami, and Y. Kobayashi (2018) Adversarial zero-shot learning with semantic augmentation. In Thirty-Second AAAI Conference on Artificial Intelligence, Cited by: TABLE III.
  • [43] V. K. Verma, G. Arora, A. Mishra, and P. Rai (2018) Generalized zero-shot learning via synthesized examples. In Computer Vision and Pattern Recognition, pp. 4281–4289. Cited by: TABLE III, TABLE IV, TABLE V.
  • [44] V. K. Verma and P. Rai (2017) A simple exponential family framework for zero-shot learning. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 792–808. Cited by: §I, §II, TABLE III, TABLE IV, TABLE V.
  • [45] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie (2011) The caltech-ucsd birds-200-2011 dataset. Technical Report CNS-TR-2011-001. Cited by: §IV-A.
  • [46] Q. Wang and K. Chen (2017) Zero-shot visual recognition via bidirectional latent embedding. International Journal of Computer Vision 124 (3), pp. 356–383. Cited by: TABLE III.
  • [47] W. Wang, V. W. Zheng, H. Yu, and C. Miao (2019) A survey of zero-shot learning: settings, methods, and applications. ACM Transactions on Intelligent Systems and Technology 10 (2), pp. 1–13. Cited by: §I.
  • [48] X. Wang, Y. Ye, and A. Gupta (2018) Zero-shot recognition via semantic embeddings and knowledge graphs. In Computer Vision and Pattern Recognition, pp. 6857–6866. Cited by: §II, TABLE III.
  • [49] Z. Wang, R. Hu, C. Liang, Y. Yu, J. Jiang, M. Ye, J. Chen, and Q. Leng (2016) Zero-shot person re-identification via cross-view consistency. IEEE Transactions on Multimedia 18 (2), pp. 260–272. Cited by: §I.
  • [50] Y. Xian, C. H. Lampert, B. Schiele, and Z. Akata (2018) Zero-shot learning-a comprehensive evaluation of the good, the bad and the ugly. IEEE transactions on pattern analysis and machine intelligence. Cited by: Fig. 1, Fig. 2, §I, §IV-A, §IV-A, §IV-A, §IV-B, §IV-C.
  • [51] Y. Xian, T. Lorenz, B. Schiele, and Z. Akata (2018) Feature generating networks for zero-shot learning. In Computer Vision and Pattern Recognition, Cited by: §I, §II, §IV-A, TABLE IV, TABLE V.
  • [52] M. Ye and Y. Guo (2017) Zero-shot classification with discriminative semantic representation learning. In Computer Vision and Pattern Recognition, Cited by: TABLE III, TABLE IV, TABLE V.
  • [53] Y. Yu, Z. Ji, Y. Fu, J. Guo, Y. Pang, and Z. Zhang (2018) Stacked semantic-guided attention model for fine-grained zero-shot learning. In Advances in Neural Information Processing Systems, pp. 5998–6007. Cited by: §II.
  • [54] Y. Yu, Z. Ji, X. Li, J. Guo, Z. Zhang, H. Ling, and F. Wu (2018) Transductive zero-shot learning with a self-training dictionary approach. IEEE transactions on cybernetics 48 (10), pp. 2908–2919. Cited by: TABLE III.
  • [55] L. Zhang, T. Xiang, and S. Gong (2017) Learning a deep embedding model for zero-shot learning. In Computer Vision and Pattern Recognition, pp. 2021–2030. Cited by: §II.
  • [56] Z. Zhang and V. Saligrama (2015) Zero-shot learning via semantic similarity embedding. In International conference on computer vision, pp. 4166–4174. Cited by: §I, §IV-A.
  • [57] Z. Zhang and V. Saligrama (2016) Zero-shot learning via joint latent similarity embedding. In Computer Vision and Pattern Recognition, pp. 6034–6042. Cited by: §I, §IV-A, TABLE III.
  • [58] A. Zhao, M. Ding, J. Guan, Z. Lu, T. Xiang, and J. Wen (2018) Domain-invariant projection learning for zero-shot recognition. In Advances in Neural Information Processing Systems, pp. 1025–1036. Cited by: §I, §II, §II, §IV-B, TABLE III, TABLE IV.
  • [59] B. Zhao, X. Sun, Y. Fu, Y. Yao, and Y. Wang (2018) MSplit LBI: realizing feature selection and dense estimation simultaneously in few-shot and zero-shot learning. In International Conference on Machine Learning, pp. 5907–5916. Cited by: TABLE III.
  • [60] Y. Zhu, M. Elhoseiny, B. Liu, X. Peng, and A. Elgammal (2018) A generative adversarial approach for zero-shot learning from noisy texts. In Computer Vision and Pattern Recognition, Cited by: §II.