Consider the animal category “okapi”. Even though we may never have heard of this category or seen visual examples of it in the past, we can still learn a good visual classifier from the following description: “zebra-striped, four-legged animal with a brown torso and a deer-like face” (test yourself on Figure 1). Our current recognition algorithms, on the other hand, still operate under closed-world conditions: they can only recognize the categories they are trained on. Adding a new category requires collecting thousands of training examples and then retraining the classifiers. Zero-shot learning is often used to tackle this problem.
The key to dealing with an unfamiliar or novel category is to transfer knowledge obtained from familiar classes to describe the unfamiliar class (generalization). There are two paradigms for transferring knowledge. The first is to use implicit knowledge representations, i.e. semantic embeddings. In this approach, one learns a vector representation of different categories using text data and then learns a mapping from the vector representation to a visual classifier directly [34, 13]. However, these methods are limited by the generalization power of the semantic models and of the mapping models themselves. It is also hard to incorporate structured information into learned semantic embeddings.
The alternative, less-explored paradigm for zero-shot learning is to use explicit knowledge bases or knowledge graphs. In this paradigm, one explicitly represents knowledge as rules or relationships between objects. These relationships can then be used to learn zero-shot classifiers for new categories. The simplest example is learning visual classifiers of compositional categories: given classifiers of primitive visual concepts as inputs, prior work applies a simple composition rule to generate classifiers for new complex concepts. In general, however, the relationships can be more complex than simple compositionality. An interesting question we want to explore is whether we can use structured information and complex relationships to learn visual classifiers without seeing any examples.
In this paper, we propose to distill both the implicit knowledge representation (i.e. word embedding) and the explicit relationships (i.e. knowledge graph) for learning visual classifiers of novel classes. We build a knowledge graph where each node corresponds to a semantic category. These nodes are linked via relationship edges. The input to each node of the graph is the vector representation (semantic embedding) of its category. We then use a Graph Convolutional Network (GCN) to transfer information (message passing) between the different categories. Specifically, we train a 6-layer deep GCN that outputs the classifiers of the different categories.
We focus on the task of image classification and consider two test settings: (a) the final test classes are only the zero-shot classes (training classes are excluded at test time); (b) at test time the label can be either a seen or an unseen class, the “generalized zero-shot setting” [16, 6, 50]. We show surprisingly powerful results and large improvements over classical baselines such as DeViSE and ConSE, as well as over the current state-of-the-art. For example, in the standard ImageNet 2-hop setting, our approach retrieves substantially more correct images in its top-10 predictions than the current state-of-the-art. More interestingly, we show that our approach scales well, giving a significant improvement as we increase the size of the knowledge graph even when the graph is noisy.
2 Related Work
With the recent success of large-scale recognition systems, the focus has shifted to scaling these systems in terms of categories. As more realistic and practical settings are considered, the need for zero-shot recognition – training visual classifiers without any examples – has increased. Specifically, the problem of mapping text to visual classifiers is very interesting.
Early work on zero-shot learning used attributes [11, 24, 19] to represent categories as vectors indicating the presence or absence of attributes. This vector representation can then be mapped to visual classifiers. Instead of using manually defined attribute-class relationships, Rohrbach et al. [40, 38] mined these associations from different internet sources. Akata et al. used attributes as side information to learn a semantic embedding which helps in zero-shot recognition. More recently, there have been approaches that try to match Wikipedia text to images by modeling noise in the text description.
With the advancement of deep learning, most recent approaches fall into two main research directions. The first is to use semantic embeddings (implicit representations). The core idea is to represent each category with a learned vector representation that can be mapped to a visual classifier [48, 44, 13, 41, 25, 15, 14, 18, 23, 4, 5, 55, 54]. Socher et al. proposed training two different neural networks for image and language in an unsupervised manner, and then learning a linear mapping between image representations and word embeddings. Motivated by this work, Frome et al. proposed a system called DeViSE that trains a mapping from images to word embeddings using a ConvNet and a transformation layer. By using the predicted embedding to perform nearest-neighbor search, DeViSE scales zero-shot recognition up to thousands of classes. Instead of training a ConvNet to predict the word embedding directly, Norouzi et al. proposed a system named ConSE that constructs the image embedding by combining an existing image-classification ConvNet and a word-embedding model. Recently, Changpinyo et al. proposed an approach to align semantic and visual manifolds via ‘phantom’ classes, and report state-of-the-art results on the ImageNet dataset. One strong shortcoming of these approaches is that they do not use any explicit relationships between classes, relying instead on semantic embeddings to represent relationships.
The second popular way to distill knowledge is to use knowledge graphs (explicit knowledge representations). Researchers have proposed several approaches for using knowledge graphs in object recognition [12, 43, 30, 35, 39, 9, 8, 29, 49, 47, 26]. For example, Salakhutdinov et al. used WordNet to share representations among different object classifiers, so that objects with few training examples can borrow statistical strength from related objects. The knowledge graph can also be used to model mutual exclusion among classes: Deng et al. applied such exclusion rules as a constraint in the loss for training object classifiers (e.g. an object cannot be a dog and a cat at the same time), and also showed zero-shot applications by adding object-attribute relations into the graph. In contrast to these methods, which use the graph as a constraint, our approach uses the graph to directly generate novel object classifiers [33, 10, 2].
In our work, we propose to distill information both via semantic embeddings and via knowledge graphs. Specifically, given the word embedding of an unseen category and a knowledge graph that encodes explicit relationships, our approach predicts the visual classifier of the unseen category. To model the knowledge graph, our work builds upon Graph Convolutional Networks, which were originally proposed for semi-supervised learning in language processing. We extend them to our zero-shot learning problem by changing the model architecture and training loss.
Our goal is to distill information from both implicit (word-embedding) and explicit (knowledge-graph) representations for zero-shot recognition. But what is the right way to extract this information? We build upon the recent work on Graph Convolutional Networks (GCNs) to learn visual classifiers. In the following, we first introduce how the GCN is applied to classification tasks in natural language processing, and then detail our approach: applying the GCN with a regression loss for zero-shot learning.
3.1 Preliminaries: Graph Convolutional Network
Graph Convolutional Network (GCN) was introduced in  to perform semi-supervised entity classification. Given object entities, represented by word embeddings or text features, the task is to perform classification. For example, entities such as “dog” and “cat” will be labeled as “mammal”; “chair” and “couch” will be labeled “furniture”. We also assume that there is a graph where nodes are entities and the edges represent relationships between entities.
Formally, we are given a dataset of $n$ entities $(X, Y) = \{(x_i, y_i)\}_{i=1}^{n}$, where $x_i$ represents the word embedding of entity $i$ and $y_i$ its label. In the semi-supervised setting, we know the ground-truth labels for the first $m$ entities. Our goal is to infer $y_i$ for the remaining $n - m$ entities, which have no labels, using the word embeddings and the relationship graph. In the relationship graph, each node is an entity, and two nodes are linked if there is a relationship between them.
We use a function $F(\cdot)$ to represent the Graph Convolutional Network. It takes all the entity word embeddings $X$ as input at once and outputs the SoftMax classification results $F(X)$ for all of them. For simplicity, we denote the output for the $i$-th entity as $F_i(X)$, which is a $C$-dimensional SoftMax probability vector. At training time, we apply the SoftMax loss on the first $m$ entities, which have labels, as
$$L = \frac{1}{m} \sum_{i=1}^{m} L_{\mathrm{softmax}}\left(F_i(X),\, y_i\right).$$
The weights of $F(\cdot)$ are trained via back-propagation with this loss. At test time, we use the learned weights to obtain the labels of the remaining $n - m$ unlabeled entities.
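As a concrete illustration, the semi-supervised SoftMax loss above, applied only to the labeled rows, might be sketched as follows (a minimal numpy sketch; function names are our own):

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise SoftMax.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def semi_supervised_loss(logits, labels, m):
    """Cross-entropy (SoftMax) loss applied only to the first m labeled
    entities; the remaining rows of `logits` receive no supervision."""
    p = softmax(logits[:m])
    return float(-np.mean(np.log(p[np.arange(m), labels[:m]] + 1e-12)))
```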
Unlike standard convolutions that operate on local regions in an image, the graph convolution computes the response at a node based on its neighbors as defined by the adjacency graph. Mathematically, the convolutional operation of each layer in the network is represented as
$$Z = \hat{A} X' W,$$
where $\hat{A}$ is a normalized version of the binary adjacency matrix of the graph, with dimensions $n \times n$; $X'$ is the $n \times k$ input feature matrix from the previous layer; and $W$ is the $k \times c$ weight matrix of the layer, $c$ being the number of output channels. Therefore, the input to a convolutional layer is an $n \times k$ matrix and the output is an $n \times c$ matrix $Z$. These convolution operations can be stacked one after another. A non-linear operation (ReLU) is also applied after each convolutional layer before the features are forwarded to the next layer. For the final convolutional layer, the number of output channels equals the number of label classes ($c = C$). For more details, please refer to the original GCN paper.
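A single graph-convolution step of this form can be sketched in a few lines of numpy. The symmetric normalization with self-loops below follows Kipf and Welling's formulation; the function names are our own:

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalize a binary adjacency matrix with self-loops:
    A_hat = D^{-1/2} (A + I) D^{-1/2}, where D is the degree matrix."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_layer(A_hat, X, W, activation=True):
    """One graph convolution: mix each node's features with its neighbors'
    via A_hat, apply the shared linear map W (k x c), then an optional ReLU."""
    Z = A_hat @ X @ W
    return np.maximum(Z, 0.0) if activation else Z
```

Stacking several such layers implements the message passing described above, one hop per layer.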
3.2 GCN for Zero-shot Learning
Our model builds upon the Graph Convolutional Network. However, instead of entity classification, we apply it to zero-shot recognition with a regression loss. The input to our framework is the set of $n$ categories and their corresponding semantic-embedding vectors $\{x_i\}_{i=1}^{n}$. For the output, we want to predict the visual classifier $w_i$ of each input category.
Specifically, the visual classifier we want the GCN to predict is a logistic-regression model on fixed pre-trained ConvNet features. If the dimensionality of the visual-feature vector is $D$, each classifier $w_i$ for category $i$ is also a $D$-dimensional vector. Thus the output of each node in the GCN has $D$ dimensions, instead of $C$ dimensions. In the zero-shot setting, we assume that the first $m$ categories of the total $n$ classes have enough visual examples to estimate their weight vectors. For the remaining $n - m$ categories, we want to estimate their corresponding weight vectors given their embedding vectors as inputs.
One way is to train a neural network (multi-layer perceptron) which takes $x_i$ as input and learns to predict $w_i$ as output. The parameters of the network can be estimated using the $m$ training pairs. However, $m$ is generally small (on the order of a few hundred), and therefore we want to use the explicit structure of the visual world, i.e. the relationships between categories, to constrain the problem. We represent these relationships as a knowledge graph (KG). Each node in the KG represents a semantic category. Since we have a total of $n$ categories, there are $n$ nodes in the graph. Two nodes are linked if there is a relationship between them. The graph structure is represented by the $n \times n$ adjacency matrix $A$. Instead of building a bipartite graph as in [22, 52], we replace all directed edges in the KG by undirected edges, which leads to a symmetric adjacency matrix.
As Fig. 2 shows, we use a 6-layer GCN where each layer $l$ takes as input the feature representation from the previous layer ($Z^{l-1}$) and outputs a new feature representation ($Z^{l}$). For the first layer, the input is $X$, an $n \times k$ matrix ($k$ being the dimensionality of the word-embedding vector). For the final layer, the output is the $n \times D$ matrix of predicted classifier weights, $D$ being the dimensionality of the classifier / visual feature vector.
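The forward pass of such a deep GCN can be sketched as below. This is a minimal sketch: the hidden layer sizes and the LeakyReLU slope are illustrative assumptions, not the paper's exact choices, and the final L2 normalization follows the training detail described in Sec. 3.3:

```python
import numpy as np

rng = np.random.default_rng(0)

def build_weights(dims):
    """Random weight matrices for a deep GCN; dims = [k, h1, ..., D].
    Hidden sizes are illustrative, not the paper's exact configuration."""
    return [rng.standard_normal((a, b)) * 0.1 for a, b in zip(dims, dims[1:])]

def gcn_forward(A_hat, X, weights, slope=0.2):
    """Stacked graph convolutions: LeakyReLU after every layer except the
    last (slope value is an assumption), then L2-normalize each row so
    every predicted classifier has unit norm."""
    H = X
    for i, W in enumerate(weights):
        H = A_hat @ H @ W
        if i < len(weights) - 1:
            H = np.where(H > 0, H, slope * H)  # LeakyReLU
    return H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-12)
```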
Loss function: For the first $m$ categories, we have the predicted classifier weights $\hat{w}_{1 \dots m}$ and the ground-truth classifier weights $w_{1 \dots m}$ learned from training images. We use the mean-square error between the predicted and the ground-truth classifiers as the loss function:
$$L = \frac{1}{m} \sum_{i=1}^{m} \left\lVert \hat{w}_i - w_i \right\rVert_2^2.$$
During training, we use the loss from the seen categories to estimate the parameters of the GCN. Using the estimated parameters, we obtain the classifier weights for the zero-shot categories. At test time, we first extract image feature representations via the pre-trained ConvNet and then use these generated classifiers to perform classification on the extracted features.
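The seen-class regression loss and the test-time scoring step can be sketched together (function names are our own; a sketch, not the paper's implementation):

```python
import numpy as np

def seen_class_loss(W_pred, W_gt, m):
    """Mean-square error between predicted and ground-truth classifier
    weights, computed over the first m (seen) categories only."""
    return float(np.mean((W_pred[:m] - W_gt[:m]) ** 2))

def classify(features, W_unseen):
    """Test time: score pre-extracted ConvNet features against the
    GCN-generated unseen-class classifiers and take the argmax."""
    scores = features @ W_unseen.T  # (num_images, num_unseen_classes)
    return scores.argmax(axis=1)
```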
3.3 Implementation Details
Our GCN is composed of 6 convolutional layers whose output channel numbers decrease progressively from the input word-embedding dimensionality down to $D$, the dimension of the object classifier. Unlike the 2-layer network presented in the original GCN work, our network is much deeper. As shown in our ablative studies, we find that making the network deep is essential for generating good classifier weights. For activation functions, instead of using ReLU after each convolutional layer, we apply LeakyReLU [27, 51] with a small negative slope. Empirically, we find that LeakyReLU leads to faster convergence for our regression problem.
While training our GCN, we perform L2-normalization on the outputs of the network and on the ground-truth classifiers. During testing, the generated classifiers of unseen classes are also L2-normalized. We find this constraint important, as it regularizes the weights of all classifiers to similar magnitudes. In practice, we also find that the last-layer classifiers of ImageNet pre-trained networks are naturally normalized: if we perform L2-normalization on each of the last-layer classifiers during testing, the performance on the ImageNet 2012 1K-class validation set changes only marginally.
To obtain the word embeddings for the GCN inputs, we use the GloVe text model trained on the Wikipedia dataset, which gives 300-d vectors. For classes whose names contain multiple words, we look up each word in the trained model and average their embeddings to obtain the class embedding.
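The multi-word averaging step might look like the following sketch. The handling of out-of-vocabulary words here (simply skipping them) is a simplifying assumption on our part:

```python
import numpy as np

def class_embedding(name, glove, dim=300):
    """Average the GloVe vectors of the words in a (possibly multi-word)
    class name. `glove` is a dict mapping word -> vector; words missing
    from the vocabulary are skipped here (a simplifying assumption)."""
    words = name.lower().replace("_", " ").split()
    vecs = [glove[w] for w in words if w in glove]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)
```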
We now perform experiments to show that our approach: (a) improves the state of the art by a significant margin; (b) is robust to different pre-trained ConvNets and to noise in the KG. We use two datasets in our experiments. The first is constructed from publicly available knowledge bases: relationships and graph structure from Never-Ending Language Learning (NELL) and images from Never-Ending Image Learning (NEIL). This is an ideal dataset for: (a) demonstrating that our approach is robust even with an automatically learned (and noisy) KG; (b) ablative studies, since the KG in this domain is rich and we can perform ablations on the KG as well.
Our final experiments are shown on the standard ImageNet dataset. We use the same settings as the baseline approaches [13, 34, 4] together with the WordNet  knowledge graph. We show that our approach surpasses the state-of-the-art methods by a significant margin.
4.1 Experiments on NELL and NEIL
Dataset settings. For this experiment, we construct a new knowledge graph based on the NELL and NEIL datasets. Specifically, the object nodes in NEIL correspond to nodes in NELL. The NEIL dataset provides the images and the NELL dataset provides the common-sense knowledge rules. However, the NELL graph is incredibly large (http://rtw.ml.cmu.edu/): it contains roughly 1.7M object entities and around 2.4M edges representing relationships between pairs of objects. Furthermore, since NELL is constructed automatically, there are noisy edges in the graph. Therefore, we create sub-graphs for our experiments.
The process of constructing this sub-graph is straightforward. We perform breadth-first search (BFS) starting from the NEIL nodes, discovering paths of at most $K$ hops such that the first and last nodes in the path are NEIL nodes. We add all the nodes and edges along these paths to our sub-graph. We set $K = 7$ during BFS because we find that paths longer than 7 hops make the connection between two objects noisy and unreasonable. For example, “jeep” can be connected to “deer” via a long path, but the two are hardly semantically related.
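The sub-graph construction above can be sketched as follows. This is our own simplification: rather than enumerating paths explicitly, it keeps a node when its two closest seed (NEIL) nodes are within $K$ hops in total, which is equivalent to the node lying on some seed-to-seed path of at most $K$ hops:

```python
from collections import deque

def bfs_dist(adj, src, K):
    """Hop distances (up to K) from src in an undirected adjacency dict."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if dist[u] == K:
            continue  # do not expand beyond K hops
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def subgraph_nodes(adj, seeds, K=7):
    """Keep a node if it lies on a path of <= K hops whose two endpoints
    are distinct seed nodes: the sum of its two smallest seed-distances
    must be <= K. Seed nodes themselves are always kept."""
    dists = {s: bfs_dist(adj, s, K) for s in seeds}
    kept = set()
    for u in adj:
        ds = sorted(d[u] for d in dists.values() if u in d)
        if len(ds) >= 2 and ds[0] + ds[1] <= K:
            kept.add(u)
    return kept | set(seeds)
```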
Note that each edge in NELL comes with a confidence value. For our experiments, we create two versions of the sub-graph. The first, smaller version keeps only high-confidence edges, and the second uses all the edges regardless of their confidence values. The statistics of the two sub-graphs are summarized in Table 1. The larger sub-graph has 14K object nodes. Among these nodes, 704 have corresponding images in the NEIL database. We use 616 classes for training our GCN and hold out 88 classes for testing. Note that these 88 testing classes are randomly selected from the classes that have no overlap with the 1000 classes in the standard ImageNet classification dataset. The smaller knowledge graph is around half the size of the larger one. We use the same 88 testing classes in both settings.
| Graph | Nodes | Train/Test Classes | Edges |
| --- | --- | --- | --- |
| High Value Edges | 8819 | 431/88 | 40810 |
Training details. For training the ConvNet on NEIL images, we use the 310K images associated with the 616 training classes. Evaluation is performed on 12K randomly selected images associated with the 88 testing classes, i.e. all images from the training classes are excluded during testing. We fine-tune the ImageNet pre-trained VGGM network architecture with a relatively small output dimensionality (128), so the object classifier dimension is $D = 128$. For training our GCN, we use the ADAM optimizer, and we train the GCN for 300 epochs in every experiment.
Baseline method. We compare our method with one of the state-of-the-art methods, ConSE, which shows slightly better performance than DeViSE on ImageNet. Briefly, ConSE first feeds the test image through a ConvNet trained only on the training classes. From the output probabilities, ConSE selects the top $T$ predictions and the word embeddings of these classes, then generates a new embedding by averaging the $T$ class embeddings weighted by their probabilities. This new embedding is used to perform nearest-neighbor search among the word embeddings of the testing classes, and the top retrieved classes are taken as the final result. We evaluate different values of $T$.
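The ConSE baseline described above can be sketched in a few lines (a simplified sketch with our own function names; cosine similarity is assumed for the nearest-neighbor step):

```python
import numpy as np

def conse_embedding(probs, train_emb, T):
    """ConSE: take the top-T training-class probabilities from the ConvNet
    and form a convex combination of those classes' word embeddings."""
    top = np.argsort(probs)[::-1][:T]
    w = probs[top] / probs[top].sum()
    return w @ train_emb[top]

def conse_predict(probs, train_emb, test_emb, T, k=5):
    """Rank test-class embeddings by cosine similarity to the synthesized
    embedding; return the indices of the top-k classes."""
    e = conse_embedding(probs, train_emb, T)
    sims = test_emb @ e / (np.linalg.norm(test_emb, axis=1)
                           * np.linalg.norm(e) + 1e-12)
    return np.argsort(sims)[::-1][:k]
```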
Quantitative results. We evaluate on the task of classifying the 88 unseen categories. Our metric is the percentage of test images for a given zero-shot class that are correctly retrieved within the top $k$ retrievals. The results are shown in Table 2. We evaluate our method on the two knowledge-graph sizes: “High Value Edges” denotes the knowledge graph built from high-confidence edges, and “All Edges” the graph built from all the edges. We denote the baseline as “ConSE(T)”, with $T$ set to 5, 10, and the number of training classes.
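The top-$k$ hit-rate metric used throughout the experiments can be sketched as follows (our own helper, not the authors' evaluation code):

```python
import numpy as np

def topk_hit_rate(scores, labels, k):
    """Fraction of test images whose ground-truth label appears among the
    k highest-scoring classes. `scores` is (num_images, num_classes)."""
    topk = np.argsort(scores, axis=1)[:, ::-1][:, :k]
    hits = [labels[i] in topk[i] for i in range(len(labels))]
    return float(np.mean(hits))
```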
Our method outperforms the ConSE baseline by a large margin. On the “All Edges” dataset, our method outperforms ConSE in top-1 accuracy; more impressively, our accuracy is almost twice that of ConSE on the top-2 metric and more than twice on the top-5 and top-10 metrics. These results show that combining a knowledge graph with word embeddings leads to much better results than the state-of-the-art methods that use word embeddings only.
From small to large graph. In addition to improving zero-shot recognition overall, our method gains more as the graph grows. As Table 2 shows, our method performs better when switching from the small to the large graph, improving on all metrics, while ConSE shows little to no improvement. This also shows that the KG does not need to be hand-crafted or cleaned: our approach robustly handles errors in the graph structure.
Resilience to missing edges. We explore how the performance of our model changes if we randomly drop a fraction of the edges in the “All Edges” graph. As Fig. 3 shows, dropping a small fraction of the edges changes the performance of our model negligibly. This is mainly because a knowledge graph with 14K nodes and 97K edges contains redundant information, which again implies that our model is robust to small noisy changes in the graph. As we delete a majority of the edges, the accuracies drop drastically, indicating that the performance of our model is highly correlated with the size of the knowledge graph.
Random graph? It is clear that our approach can handle noise in the graph, but does any random graph work? To demonstrate that the structure of the graph is critical, we also created trivial graphs: (i) star model: a graph with a single root node and edges connecting every object node to the root; (ii) random graph: all nodes in the graph are randomly connected. Table 3 shows the results. All the numbers are close to random guessing, which means a reasonable graph plays an important role and a random graph can hurt the model.
| Test Set | Trivial KG | Top-1 | Top-2 | Top-5 | Top-10 |
| --- | --- | --- | --- | --- | --- |
| All Edges | Star Model | 1.1 | 1.6 | 4.8 | 9.7 |
How important is the depth of the GCN? We show that making the Graph Convolutional Network deep is critical for our problem. Table 4 shows the performance of models with different numbers of layers on the “All Edges” knowledge graph. The performance keeps increasing as we deepen the model from 2 layers to 6 layers. The reason is that adding convolutional layers increases the number of message-passing rounds between nodes in the graph. However, we do not observe much gain from adding layers beyond the 6-layer model. One potential reason is that optimization becomes harder as the network goes deeper.
| Test Set | Model | Top-1 | Top-2 | Top-5 | Top-10 |
| --- | --- | --- | --- | --- | --- |
| All Edges | Ours (2-layer) | 5.3 | 8.7 | 15.5 | 24.3 |
Is our network just copying classifiers as outputs? Even though our method beats the ConSE baseline, is it possible that it simply learns to selectively copy nearby classifiers? To show that our method does not learn this trivial solution, we compute the Euclidean distance between our generated classifiers and the training classifiers. Specifically, for each generated classifier, we compare it with the training-class classifiers that are at most 3 hops away and compute the minimum distance to these neighbors. We sort the distances across all generated classifiers and plot them in Fig. 4. For reference, the distance between the “wooden_spoon” and “spoon” classifiers in the training set is small, whereas the distance between “wooden_spoon” and “optimus_prime” is much larger; on this scale, our predicted classifiers are quite different from their neighbors.
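The nearest-neighbor distance check above might be computed as follows (a small sketch with our own function name):

```python
import numpy as np

def min_neighbor_distance(w_gen, train_classifiers, neighbor_ids):
    """Euclidean distance from one generated classifier to its closest
    training classifier among the given (up to 3-hop) neighbor indices."""
    return min(float(np.linalg.norm(w_gen - train_classifiers[j]))
               for j in neighbor_ids)
```

Sorting this value over all generated classifiers gives the curve plotted in Fig. 4.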
Are the outputs only relying on the word embeddings? We perform t-SNE visualizations to show that our output classifiers are not simply derived from the word embeddings. We plot t-SNE embeddings of both the word embeddings and the classifiers of the seen and unseen classes in the “All Edges” dataset. As Fig. 5 shows, the clustering results are very different between the word embeddings and the object classifiers, which indicates that our GCN is not just learning a direct projection from word embeddings to classifiers.
4.2 Experiments on WordNet and ImageNet
We now perform experiments on the much larger-scale ImageNet dataset. We adopt the same train/test split settings as [13, 34]. Specifically, we report results on 3 different test sets: “2-hops”, “3-hops”, and the whole “All” ImageNet set. These sets are constructed according to how closely the classes are related to the classes in the ImageNet 2012 1K dataset. For example, the “2-hops” set (around 1.5K classes) includes classes from the ImageNet 2011 21K set that are semantically very similar to the ImageNet 2012 1K classes. The “3-hops” set (around 7.8K classes) includes the classes within 3 hops of the ImageNet 2012 1K classes, and the “All” set includes all the labels in ImageNet 2011 21K. There are no common labels between the ImageNet 1K classes and the classes in these three test sets. Naturally, as the number of classes increases, the task becomes more challenging.
Training details. Note that to perform testing on the 3 different test sets, we only need to train one ConvNet and one GCN. We use two different ConvNets as the base network for computing visual features: Inception-v1 and ResNet-50. Both networks are pre-trained on the ImageNet 2012 1K dataset and no fine-tuning is required. For Inception-v1, the feature of the second-to-last layer has 1024 dimensions, which leads to 1024-d object classifiers in the last layer. For ResNet-50, we have $D = 2048$. Except for the change of output targets, all other GCN training settings remain the same as in the previous experiments on NELL and NEIL. It is worth noting that our GCN model is robust to different output sizes: the model shows consistently better results as the representation improves from Inception-v1 to ResNet-50, which has higher top-1 accuracy on the ImageNet 1K validation set.
We evaluate our method with the same metric as in the previous experiments: the percentage of test images hitting the ground-truth labels among the top $k$ predictions. In addition to testing with only the unseen-class classifiers, we also include both training and predicted classifiers during testing, as suggested by [13, 34]. Note that in both settings we still test on the same set of images, associated with unseen classes only.
Testing without considering the training labels. We first perform experiments excluding the classifiers of the training classes during testing. We report our results in Table 5(a). We compare against the recent state-of-the-art methods SYNC and EXEM, using the same pre-trained ConvNet (Inception-v1) as [4, 5]. Because their word embeddings are not available for all the nodes in our KG, we use a different, publicly available set of word embeddings (GloVe).
Therefore, we first investigate whether the change of word embedding matters, via the ConSE baseline. Our re-implementation of ConSE, shown as “ConSE(us)” in the table, uses GloVe, whereas the ConSE implementation in [4, 5] uses their own word embeddings. The two have similar performance: ours is slightly better in top-1 accuracy while that of [4, 5] is better in top-20 accuracy. Thus, with respect to zero-shot learning, both word embeddings seem equally powerful.
We then compare our results with SYNC and EXEM. With the same pre-trained Inception-v1 ConvNet, our method outperforms the other methods on almost all datasets and metrics. On the “2-hops” dataset, our approach outperforms all methods by a large margin in both top-1 and top-5 accuracy. On the “3-hops” dataset, our approach is consistently better than EXEM from the top-5 to the top-20 metrics.
Replacing Inception-v1 with ResNet-50 gives another performance boost on all metrics. On the top-5 metric, our final model outperforms the state-of-the-art method EXEM by large margins on the “2-hops”, “3-hops”, and “All” datasets. Note that the gain diminishes as the number of unseen classes increases, because the task becomes more difficult.
Sensitivity to word embeddings. Is our method sensitive to the word embeddings used as inputs? We investigate 3 different word embeddings: GloVe (used in the other experiments in the paper), FastText, and word2vec trained on GoogleNews. For comparison, we also implemented a baseline that trains a direct mapping from word embeddings to visual features without a knowledge graph. We use the Inception-v1 ConvNet to extract visual features and show results on ImageNet in the 2-hops setting (same as Table 5(a)). The direct-mapping baseline relies heavily on the quality of the word embeddings, with widely varying top-5 results, whereas our top-5 results are stable and much higher. With the GloVe word embeddings, our approach achieves a relative improvement of almost 200% over the baseline. This again shows that graph convolutions over knowledge graphs play a significant role in improving zero-shot recognition.
Testing with the training classifiers. Following the suggestions in [13, 34], a more practical setting for zero-shot recognition includes both seen and unseen category classifiers during testing. We test our method in this generalized setting. Since very few baselines are available for this experiment, we compare against ConSE and DeViSE, and re-implement the ConSE baselines with both the Inception-v1 and ResNet-50 pre-trained networks. As Table 5(b) shows, our method almost doubles the performance of the baselines on every metric and all three test sets. Moreover, we still see a performance boost when switching from the pre-trained Inception-v1 network to ResNet-50.
Visualizations. Finally, Fig. 6 visualizes top-5 predictions from our model and from ConSE(10). Our method significantly outperforms ConSE(10) in these examples. Although ConSE(10) still gives reasonable results in most cases, its output labels are biased toward the training labels, whereas our method also outputs the unseen classes.
We have presented an approach for zero-shot recognition using the semantic embeddings of a category and the knowledge graph that encodes the relationship of the novel category to familiar categories. Our work also shows that a knowledge graph provides supervision to learn meaningful classifiers on top of semantic embeddings. Our results indicate a significant improvement over current state-of-the-art.
Acknowledgement: This work was supported by ONR MURI N000141612007, Sloan, Okawa Fellowship to AG and NVIDIA Fellowship to XW. We would also like to thank Xinlei Chen, Senthil Purushwalkam, Zhilin Yang and Abulhair Saparov for many helpful discussions.
-  Z. Akata, F. Perronnin, Z. Harchaoui, and C. Schmid. Label embedding for attribute-based classification. In CVPR, 2013.
-  J. L. Ba, K. Swersky, S. Fidler, and R. Salakhutdinov. Predicting Deep Zero-Shot Convolutional Neural Networks using Textual Descriptions. In ICCV, 2015.
-  A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E. R. H. Jr., and T. M. Mitchell. Toward an architecture for never-ending language learning. In AAAI, 2010.
-  S. Changpinyo, W.-L. Chao, B. Gong, and F. Sha. Synthesized Classifiers for Zero-Shot Learning. In CVPR, 2016.
-  S. Changpinyo, W.-L. Chao, and F. Sha. Predicting Visual Exemplars of Unseen Classes for Zero-Shot Learning. In ICCV, 2017.
-  W.-L. Chao, S. Changpinyo, B. Gong, and F. Sha. An Empirical Study and Analysis of Generalized Zero-Shot Learning for Object Recognition in the Wild. In ECCV, 2016.
-  K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. In BMVC, 2014.
-  X. Chen, A. Shrivastava, and A. Gupta. Neil: Extracting visual knowledge from web data. ICCV, 2013.
-  J. Deng, N. Ding, Y. Jia, A. Frome, K. Murphy, S. Bengio, Y. Li, H. Neven, and H. Adam. Large-Scale Object Classification Using Label Relation Graphs. In ECCV, 2014.
-  M. Elhoseiny, B. Saleh, and A. Elgammal. Write a Classifier: Zero-Shot Learning Using Purely Textual Descriptions. In ICCV, 2013.
-  A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth. Describing objects by their attributes. In CVPR, 2009.
-  R. Fergus, H. Bernal, Y. Weiss, and A. Torralba. Semantic Label Sharing for Learning with Many Categories. In ECCV, 2010.
-  A. Frome, G. Corrado, J. Shlens, S. Bengio, J. Dean, and T. Mikolov. Devise: A deep visual-semantic embedding model. In NIPS, 2013.
-  Y. Fu and L. Sigal. Semi-supervised Vocabulary-informed Learning. In CVPR, 2016.
-  Z. Fu, T. Xiang, E. Kodirov, and S. Gong. Zero-Shot Object Recognition by Semantic Manifold Distance. In CVPR, 2015.
-  B. Hariharan and R. Girshick. Low-shot Visual Recognition by Shrinking and Hallucinating Features. CoRR, 2017.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
-  C. Huang, C. C. Loy, and X. Tang. Local similarity-aware deep feature embedding. In NIPS, 2016.
-  D. Jayaraman and K. Grauman. Zero-shot recognition with unreliable attributes. In NIPS, pages 3464–3472, 2014.
-  A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, and T. Mikolov. Fasttext.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651, 2016.
-  D. Kingma and J. Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
-  T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. ICLR, 2017.
-  E. Kodirov, T. Xiang, and S. Gong. Semantic Autoencoder for Zero-Shot Learning. In CVPR, 2017.
-  C. H. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In CVPR, 2009.
-  C. H. Lampert, H. Nickisch, and S. Harmeling. Attribute-Based Classification for Zero-Shot Visual Object Categorization. In TPAMI, 2014.
-  Y. Lu. Unsupervised learning on neural network outputs: with application in zero-shot learning. In IJCAI, 2016.
-  A. L. Maas, A. Y. Hannun, and A. Y. Ng. Rectifier nonlinearities improve neural network acoustic models. In ICML, 2013.
-  L. van der Maaten and G. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008.
-  K. Marino, R. Salakhutdinov, and A. Gupta. The More You Know: Using Knowledge Graphs for Image Classification. In CVPR, 2017.
-  T. Mensink, J. Verbeek, F. Perronnin, and G. Csurka. Metric Learning for Large Scale Image Classification: Generalizing to New Classes at Near-Zero Cost. In ECCV, 2012.
-  T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. ICLR, 2013.
-  G. A. Miller. WordNet: a lexical database for English. Communications of the ACM, 38(11):39–41, 1995.
-  I. Misra, A. Gupta, and M. Hebert. From Red Wine to Red Tomato: Composition with Context. In CVPR, 2017.
-  M. Norouzi, T. Mikolov, S. Bengio, Y. Singer, J. Shlens, A. Frome, G. S. Corrado, and J. Dean. Zero-shot learning by convex combination of semantic embeddings. In ICLR, 2014.
-  M. Palatucci, D. Pomerleau, G. E. Hinton, and T. M. Mitchell. Zero-shot Learning with Semantic Output Codes. In NIPS, 2009.
-  J. Pennington, R. Socher, and C. D. Manning. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543, 2014.
-  R. Qiao, L. Liu, C. Shen, and A. van den Hengel. Less is more: zero-shot learning from online textual documents with noise suppression. In CVPR, 2016.
-  M. Rohrbach, S. Ebert, and B. Schiele. Transfer learning in a transductive setting. In NIPS, 2013.
-  M. Rohrbach, M. Stark, and B. Schiele. Evaluating Knowledge Transfer and Zero-Shot Learning in a Large-Scale Setting. In CVPR, 2011.
-  M. Rohrbach, M. Stark, G. Szarvas, I. Gurevych, and B. Schiele. What helps where - and why? semantic relatedness for knowledge transfer. In CVPR, 2010.
-  B. Romera-Paredes and P. H. S. Torr. An embarrassingly simple approach to zero-shot learning. In ICML, 2015.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. IJCV, 115(3):211–252, 2015.
-  R. Salakhutdinov, A. Torralba, and J. Tenenbaum. Learning to Share Visual Appearance for Multiclass Object Detection. In CVPR, 2011.
-  R. Socher, M. Ganjoo, C. D. Manning, and A. Y. Ng. Zero-Shot Learning Through Cross-Modal Transfer. In ICLR, 2013.
-  C. Sun, A. Shrivastava, S. Singh, and A. Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In ICCV, 2017.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going Deeper with Convolutions. In CVPR, 2015.
-  P. Wang, Q. Wu, C. Shen, A. van den Hengel, and A. Dick. FVQA: Fact-based Visual Question Answering. CoRR, 2016.
-  J. Weston, S. Bengio, and N. Usunier. Large Scale Image Annotation: Learning to Rank with Joint Word-Image Embeddings. In ECML, 2010.
-  Q. Wu, P. Wang, C. Shen, A. Dick, and A. van den Hengel. Ask me anything: Free-form visual question answering based on knowledge from external sources. In CVPR, 2016.
-  Y. Xian, B. Schiele, and Z. Akata. Zero-Shot Learning - The Good, the Bad and the Ugly. In CVPR, 2017.
-  B. Xu, N. Wang, T. Chen, and M. Li. Empirical evaluation of rectified activations in convolutional network. CoRR, abs/1505.00853, 2015.
-  Z. Yang, W. Cohen, and R. Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. In ICML, 2016.
-  L. Zhang, T. Xiang, and S. Gong. Learning a deep embedding model for zero-shot learning. In CVPR, 2017.
-  Z. Zhang and V. Saligrama. Zero-shot learning via semantic similarity embedding. In ICCV, 2015.
-  Z. Zhang and V. Saligrama. Zero-shot learning via joint latent similarity embedding. In CVPR, 2016.