Object recognition, and more specifically object categorization, has seen unprecedented advances in recent years with the development of convolutional neural networks (CNNs). However, most successful recognition models to date are formulated as supervised learning problems, in many cases requiring hundreds, if not thousands, of labeled instances to learn a given concept class. This exorbitant need for large labeled datasets has limited recognition models to domains with hundreds to a few thousand classes. Humans, on the other hand, are able to distinguish beyond basic-level categories. More impressive is the fact that humans can learn from few examples, by effectively leveraging information from other object category classes, and can even recognize objects without ever seeing them (e.g., by reading about them on the Internet). This ability has spawned research in few-shot and zero-shot learning.
Zero-shot learning (ZSL) has now been widely studied in a variety of research areas, including neural decoding from fMRI images, character recognition, face verification, object recognition, and video understanding [17, 45]. Typically, zero-shot learning approaches aim to recognize instances from unseen or unknown testing target categories by transferring information, through intermediate-level semantic representations, from known observed source (or auxiliary) categories for which many labeled instances exist. In other words, supervised classes/instances are used as context for recognition of classes that contain no visual instances at training time, but that can be put in some correspondence with the supervised classes/instances. As such, the general experimental setting of ZSL is that the classes in the target and source (auxiliary) datasets are disjoint; typically, learning is done on the source dataset and information is then transferred to the target dataset, with performance measured on the latter.
This setting has a few important drawbacks: (1) it assumes that target classes cannot be mis-classified as source classes and vice versa, which greatly and unrealistically simplifies the problem; (2) the target label set is often relatively small, between ten and several thousand unknown labels, compared to the number of entry-level categories that humans can distinguish; (3) large amounts of data in the source (auxiliary) classes are required, which is problematic as it has been shown that most object classes have only few instances (the long-tailed distribution of objects in the world); and (4) the vast open set vocabulary from semantic knowledge, defined as part of ZSL, is not leveraged in any way to inform the learning or source class recognition.
A few works have recently looked at resolving (1) through class-incremental learning [38, 39], which is designed to distinguish between seen (source) and unseen (target) classes at test time and apply the appropriate model: supervised for the former and ZSL for the latter. However, (2)–(4) remain largely unresolved. In particular, while (2) and (3) are artifacts of the ZSL setting, (4) is more fundamental. For example, consider learning about a car by looking at the image instances in Fig. 1. Not knowing that other motor vehicles exist in the world, one may be tempted to call anything that has four wheels a car. As a result, the zero-shot class truck may have large overlap with the car class (see Fig. 1 [SVR]). However, imagine knowing that many other motor vehicles (trucks, mini-vans, etc.) also exist. Even without having visually seen such objects, the very basic knowledge that they exist in the world and are closely related to a car should, in principle, alter the criterion for recognizing an instance as a car (making the recognition criterion stricter in this case). Encoding this in our [SS-Voc] model results in better separation among classes.
To tackle the limitations of ZSL, and towards the goal of generic open set recognition, we propose the idea of semi-supervised vocabulary-informed learning. Specifically, assuming we have few labeled training instances and a large open set vocabulary/semantic dictionary (along with textual sources from which statistical semantic relations among vocabulary atoms can be learned), the task of semi-supervised vocabulary-informed learning is to learn a model that utilizes the semantic dictionary to help train better classifiers for observed (source) and unobserved (target) classes in the supervised, zero-shot, and open set image recognition settings. Different from standard semi-supervised learning, we do not assume that unlabeled data is available to help train the classifier; only the vocabulary over the target classes is known.
Contributions: Our main contribution is to propose a novel paradigm for potentially open set image recognition: semi-supervised vocabulary-informed learning (SS-Voc), which is capable of utilizing vocabulary over unsupervised items during training to improve recognition. A unified maximum margin framework is used to encode this idea in practice. In particular, classification is done through nearest-neighbor distance to class prototypes in the semantic embedding space, and we encode a set of constraints ensuring that labeled images project into the semantic space such that they end up closer to the correct class prototypes than to incorrect ones (whether those prototypes are part of the source or target classes). We show that word embeddings (word2vec) can be used effectively to initialize the semantic space. Experimentally, we illustrate that through this paradigm we can achieve competitive supervised (on source classes) and ZSL (on target classes) performance, as well as open set image recognition performance with a large number of unobserved vocabulary entities (up to ); effective learning with few samples is also illustrated.
2 Related Work
While most machine learning-based object recognition algorithms require a large amount of training data, one-shot learning aims to learn object classifiers from one, or only a few, examples. To compensate for the lack of training instances and enable one-shot learning, knowledge must be transferred from other sources, for example, by sharing features, semantic attributes [17, 25, 34, 35], or contextual information. However, none of these previous works used an open set vocabulary to help learn the object classifiers.
Zero-shot Learning: ZSL aims to recognize novel classes with no training instances by transferring knowledge from source classes. ZSL was first explored with the use of attribute-based semantic representations [11, 15, 17, 18, 24, 32]. This required pre-defined attribute vector prototypes for each class, which is costly for a large-scale dataset. Recently, semantic word vectors were proposed as a way to embed any class name without human annotation effort; they can therefore serve as an alternative semantic representation [2, 14, 19, 30] for ZSL. Semantic word vectors are learned from large-scale text corpora by language models, such as word2vec or GloVe. However, most previous works only use word vectors as semantic representations in the ZSL setting, but have neither (1) utilized semantic word vectors explicitly for learning better classifiers, nor (2) used them to extend the ZSL setting towards open set image recognition. A notable exception is , which aims to recognize 21K zero-shot classes given a modest vocabulary of 1K source classes; we explore vocabularies that are up to an order of magnitude larger – 310K.
Open-set Recognition: The term “open set recognition” was initially defined in [37, 38] and formalized in [4, 36], which mainly aims at identifying whether an image belongs to a seen or unseen class. It is also known as class-incremental learning. However, none of these methods can further identify the class of unseen instances. An exception is , which augments zero-shot (unseen) class labels with source (seen) labels in some of its experimental settings. Similarly, we define open set image recognition as the problem of recognizing the class name of an image from a potentially very large open set vocabulary (including, but not limited to, source and target labels). Note that methods like [37, 38] are orthogonal but potentially useful here – it is still worth identifying seen or unseen instances so they can be recognized with different label sets, as shown in our experiments. Conceptually similar, but different in formulation and task, open-vocabulary object retrieval focuses on retrieving objects using natural-language open-vocabulary queries.
Visual-semantic Embedding: Mapping between visual features and semantic entities has been explored in two ways: (1) directly learning the embedding by regressing from visual features to the semantic space using Support Vector Regressors (SVR) [11, 25] or a neural network; (2) projecting visual features and semantic entities into a common new space, as in SJE, WSABIE, ALE, DeViSE, and CCA [16, 18]. In contrast, our model trains a better visual-semantic embedding from only a few training instances with the help of a large number of open set vocabulary items (using a maximum margin strategy). Our formulation is inspired by the unified semantic embedding model of ; however, unlike , our formulation is built on a word vector representation, contains a data term, and incorporates constraints on unlabeled vocabulary prototypes.
3 Vocabulary-informed Learning
Assume a labeled source dataset of samples, where is the image feature representation of image and is a class label taken from a set of English words or phrases ; consequently, is the number of source classes. Further, suppose another set of class labels for target classes , such that , for which no labeled samples are available. We note that potentially . Given a new test image feature vector , the goal is then to learn a function , using all available information, that predicts a class label . Note that the form of the problem changes drastically depending on which label set is assumed for : supervised learning: ; zero-shot learning: ; open set recognition: or, more generally, . We posit that a single unified function can be learned for all three cases. We formalize the definition of semi-supervised vocabulary-informed learning (SS-Voc) as follows:
Semi-supervised Vocabulary-informed Learning (SS-Voc): is a learning setting that makes use of complete vocabulary data () during training. Unlike a more traditional ZSL that typically makes use of the vocabulary (e.g., semantic embedding) at test time, SS-Voc utilizes exactly the same data during training. Notably, SS-Voc requires no additional annotations or semantic knowledge; it simply shifts the burden from testing to training, leveraging the vocabulary to learn a better model.
The vocabulary can come from a semantic embedding space learned by word2vec or GloVe on a large-scale corpus; each vocabulary entity is represented as a distributed semantic vector . The semantics of the embedding space help with knowledge transfer among classes, and allow ZSL and open set image recognition. Note that such semantic embedding spaces are equivalent to the “semantic knowledge base” for ZSL defined in , which makes it appropriate to use SS-Voc in the ZSL setting.
Assuming we can learn a mapping from image features to this semantic space, recognition can be carried out using a simple nearest-neighbor rule, e.g., assigning a class if the projection is closer to its prototype than to any other word vector; in this context, the word vector can be interpreted as the prototype of its class. The core questions are then how to learn the mapping and what form of inference is optimal in the semantic space. For learning, we propose a discriminative maximum margin criterion that ensures labeled samples project closer to their corresponding class prototypes than to any other prototype in the open set vocabulary .
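As a concrete sketch, this nearest-neighbor decision rule over class prototypes can be written in a few lines of numpy. The linear map `W`, the prototype vectors, and the class names below are toy placeholders for illustration, not values from the paper:

```python
import numpy as np

# Hypothetical learned linear embedding W: maps D-dim image features
# to the d-dim word vector space (a stand-in for the trained mapping).
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])  # D=3 features -> d=2 semantic dims

# Toy class prototypes (word vectors) from an open vocabulary.
prototypes = np.array([[1.0, 0.0],   # "car"
                       [0.0, 1.0],   # "truck"
                       [0.7, 0.7]])  # "van"
labels = ["car", "truck", "van"]

def classify(x):
    """Project features into semantic space, return nearest prototype's label."""
    g = x @ W                                    # embedding of the image
    d = np.linalg.norm(prototypes - g, axis=1)   # distance to each prototype
    return labels[int(np.argmin(d))]

print(classify(np.array([0.9, 0.1, 0.5])))  # nearest prototype is "car"
```
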
3.1 Learning Embedding
Our maximum margin vocabulary-informed embedding learns the mapping from low-level features to the semantic word space by utilizing a maximum margin strategy. Specifically, consider a linear mapping (generalizing to a kernel version is straightforward, see ). Ideally, we want to estimate the mapping such that each labeled instance in projects onto its class prototype (we would obviously want this to hold for instances belonging to unobserved classes as well, but we cannot enforce this explicitly in the optimization, as we have no labeled samples for them).
Data Term: The easiest way to enforce the above objective is to minimize the Euclidean distance between sample projections and the appropriate prototypes in the embedding space (Eq. (1) is also called a data embedding / compatibility function):
We need to minimize this term with respect to each instance , where is the class label of instance in . To prevent overfitting, we further regularize the solution:
where indicates the Frobenius norm. The solution to Eq. (2) can be obtained through ridge regression.
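The ridge-regression view admits a standard closed-form solution; a minimal numpy sketch on toy data (the dimensions and the regularization weight are hypothetical, not the paper's settings):

```python
import numpy as np

# Toy stand-ins for image features and the word vectors of each sample's class.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))    # 50 labeled samples, 8-dim visual features
U_true = rng.normal(size=(8, 4))
Y = X @ U_true                  # 4-dim class word vector of each sample

lam = 0.1                       # regularization coefficient (hypothetical)

# Closed-form ridge solution: W = (X^T X + lam I)^{-1} X^T Y
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

residual = np.abs(X @ W - Y).max()  # small: projections land near prototypes
```
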
Nevertheless, to make the embedding more comparable to support vector regression (SVR), we employ a maximal margin strategy – the ε-insensitive smooth SVR (SSVR) – to replace the least-squares term in Eq. (2). That is,
where is the regularization coefficient, , and indicates the -th value of the corresponding vector; is the -th column of . The conventional SVR is formulated as a constrained minimization problem, i.e., a convex quadratic programming problem, while SSVR employs quadratic smoothing to make Eq. (3) differentiable everywhere; thus, SSVR can be solved directly as an unconstrained minimization problem. (We found that Eq. (2) and Eq. (3) give similar results on average, but the formulation in Eq. (3) is more stable and has lower variance.)
Pairwise Term: The data term above only ensures that labeled samples project close to their correct prototypes. However, since it does so for many samples over a number of classes, it is unlikely that all data constraints can be satisfied exactly. Specifically, if a prototype lies in a part of the semantic space where no other entities live (i.e., its distance to any other prototype in the embedding space is large), then projecting an instance slightly further away from it is inconsequential, i.e., it will not result in misclassification. However, if the prototype is close to other prototypes, then a minor regression error may result in misclassification.
To embed this intuition into our learning, we enforce more discriminative constraints in the learned semantic embedding space. Specifically, the projection should not only be as close as possible to its class prototype, but that distance should also be smaller than the distance to any other prototype. Formally, we define the vocabulary pairwise maximal margin term (the Crammer and Singer loss [42, 8] is an upper bound of Eq. (4) and (5); we use the latter to tolerate variants of class names, e.g., 'pigs' vs. 'pig' in Fig. 2, making it better suited for our tasks):
where is selected from the open vocabulary and is the margin gap constant. Here, indicates the quadratically smoothed hinge loss, which is convex and has a gradient at every point. To speed up computation, we use only the closest target prototypes to each source/auxiliary prototype in the semantic space. We also define similar constraints for the source prototype pairs:
where is selected from the source/auxiliary dataset vocabulary. This term enforces that the distance to the correct source prototype should be smaller than the distance to any other source prototype. To facilitate the computation, we similarly use only the prototypes closest to each prototype in the source classes. Our complete pairwise maximum margin term is:
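A minimal numpy sketch of such a pairwise term, using a quadratically smoothed hinge on the margin violation (the margin value and the toy vectors are placeholders, not the paper's settings):

```python
import numpy as np

def smooth_hinge(z):
    """Quadratically smoothed hinge on the violation z: zero for z <= 0,
    quadratic near zero, linear for large z; differentiable everywhere."""
    if z <= 0.0:
        return 0.0
    if z < 1.0:
        return 0.5 * z * z
    return z - 0.5

def pairwise_term(g, u_true, other_protos, margin=0.1):
    """Penalize a projected image g for being closer to a wrong prototype
    than to its own class prototype u_true, by at least `margin`."""
    d_true = np.sum((g - u_true) ** 2)
    return sum(smooth_hinge(margin + d_true - np.sum((g - u) ** 2))
               for u in other_protos)

# A projection sitting on its prototype incurs no loss...
loss_good = pairwise_term(np.array([1.0, 0.0]),
                          np.array([1.0, 0.0]), [np.array([0.0, 1.0])])
# ...while one equidistant from both prototypes pays the margin penalty.
loss_bad = pairwise_term(np.array([0.5, 0.5]),
                         np.array([1.0, 0.0]), [np.array([0.0, 1.0])])
```
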
Vocabulary-informed Embedding: The complete combined objective can now be written as:
where is the ratio coefficient between the two terms. One practical advantage is that the objective function in Eq. (7) is an unconstrained, differentiable minimization problem that can be solved with L-BFGS. is initialized with all zeros and converges in iterations.
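A toy end-to-end sketch of such a combined objective (data term plus smoothed-hinge pairwise term), minimized with L-BFGS via scipy and initialized with zeros as described. The data, the margin, and the weight `alpha` between the two terms are all made up for illustration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 5))              # toy image features
P = np.array([[1.0, 0.0], [0.0, 1.0]])    # two class prototypes (word vectors)
y = rng.integers(0, 2, size=30)           # class index of each sample
alpha, margin = 0.5, 0.1                  # hypothetical weights

def objective(w_flat):
    W = w_flat.reshape(5, 2)
    G = X @ W                              # projections into semantic space
    data = np.sum((G - P[y]) ** 2)         # data term: hit the right prototype
    d_true = np.sum((G - P[y]) ** 2, axis=1)
    d_wrong = np.sum((G - P[1 - y]) ** 2, axis=1)
    z = margin + d_true - d_wrong          # pairwise margin violations
    pair = np.sum(np.where(z <= 0, 0.0,
                  np.where(z < 1, 0.5 * z ** 2, z - 0.5)))
    return data + alpha * pair

w0 = np.zeros(10)                          # W initialized with all zeros
res = minimize(objective, w0, method="L-BFGS-B")
```
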
Fine-tuning Word Vector Space: The above formulation works well assuming the semantic space is well laid out and a linear mapping is sufficient. However, we posit that the word vector space itself is not necessarily optimal for visual discrimination: two visually similar categories may appear far apart in the semantic space. In such a case, it is difficult to learn a linear mapping that matches instances with category prototypes properly. Inspired by this intuition, which has also been expressed in natural language models , we propose to fine-tune the word vector representation for better visual discriminability.
One could potentially fine-tune the representation by optimizing the prototypes directly, in an alternating optimization (e.g., as in ). However, this is only possible for source/auxiliary class prototypes and would break regularities in the semantic space, reducing the ability to transfer knowledge from source/auxiliary to target classes. Alternatively, we propose optimizing a global warping, , of the word vector space:
where is the regularization coefficient. Eq. (8) can still be solved using L-BFGS, with initialized to the identity matrix. The algorithm first updates and then ; typically, the update step converges within iterations, and the corresponding class prototypes used for final classification are updated to be .
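The global-warp step can be sketched as follows: with the embedding fixed, a warp matrix, initialized to the identity and regularized toward it, is optimized so that warped prototypes better match the projected data. Everything below (the toy data and the regularization weight `beta`) is illustrative, not the paper's setup:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 4))          # toy image features
P = rng.normal(size=(3, 2))           # three class prototypes (word vectors)
y = rng.integers(0, 3, size=20)
# Embedding from the previous step, here a simple least-squares fit:
W = np.linalg.lstsq(X, P[y], rcond=None)[0]
beta = 1.0                            # regularization weight (hypothetical)

def warp_objective(m_flat):
    M = m_flat.reshape(2, 2)
    G = X @ W
    data = np.sum((G - P[y] @ M) ** 2)          # match warped prototypes
    reg = beta * np.sum((M - np.eye(2)) ** 2)   # keep the warp near identity
    return data + reg

m0 = np.eye(2).ravel()                # warp initialized to the identity matrix
res = minimize(warp_objective, m0, method="L-BFGS-B")
M = res.x.reshape(2, 2)
warped_prototypes = P @ M             # prototypes used for final classification
```
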
3.2 Maximum Margin Embedding Recognition
Once the embedding model is learned, recognition in the semantic space can be done in a variety of ways. We explore a simple alternative to classify a testing instance .
The Nearest Neighbor (NN) classifier directly measures the distance between predicted semantic vectors and the prototypes in the semantic space, i.e., . We further employ the k-nearest neighbors (KNN) of testing instances to average the predictions, i.e., averaging the predicted semantic vectors of the KNN instances. (This strategy is known as the Rocchio algorithm in information retrieval: a relevance-feedback method that uses relevant instances to update the query for better recall, and possibly precision, in a vector space (Chap. 14 in ). It was first suggested for ZSL in ; more sophisticated algorithms [16, 34] are also possible.)
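A small sketch of this Rocchio-style averaging on toy vectors (the neighbor count k and all vectors are illustrative placeholders):

```python
import numpy as np

def rocchio_predict(g_test, g_pool, prototypes, labels, k=3):
    """Average the k nearest predicted semantic vectors around g_test,
    then label the average by its nearest class prototype."""
    d = np.linalg.norm(g_pool - g_test, axis=1)
    avg = g_pool[np.argsort(d)[:k]].mean(axis=0)
    return labels[int(np.argmin(np.linalg.norm(prototypes - avg, axis=1)))]

prototypes = np.array([[1.0, 0.0], [0.0, 1.0]])   # toy class word vectors
labels = ["whale", "pig"]
# Predicted semantic vectors of other test instances (the pool to average over).
g_pool = np.array([[1.0, 0.0], [0.95, 0.1], [0.9, 0.0],
                   [0.0, 1.0], [0.1, 0.9]])

label = rocchio_predict(np.array([0.9, 0.05]), g_pool, prototypes, labels)
```
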
4 Experiments
Datasets. We conduct our experiments on the Animals with Attributes (AwA) dataset and the ImageNet 2012/2010 dataset. AwA consists of 50 classes of animals ( images in total). In , a standard split into 40 source/auxiliary classes () and 10 target/test classes () is introduced. We follow this split for supervised and zero-shot learning. We use OverFeat features (downloaded from ) on AwA to make the results more easily comparable to the state-of-the-art. The ImageNet 2012/2010 dataset is a large-scale dataset: we use the () classes of ILSVRC 2012 as the source/auxiliary classes and the () classes of ILSVRC 2010 that are not in ILSVRC 2012 as the target data. We use a pre-trained VGG-19 model to extract deep features for ImageNet. On both datasets, we use few instances from the source dataset to mimic human performance of learning from few examples and the ability to generalize.
Recognition tasks. We consider three different settings in a variety of experiments (in each experiment we carefully denote which setting is used):
Supervised recognition, where learning is on source classes and we assume test instances come from the same classes, with as the recognition vocabulary;
Zero-shot recognition, where learning is on source classes and we assume test instances come from the target dataset, with as the recognition vocabulary;
Open-set recognition, where we use the entirely open vocabulary, with , and use test images from both the source and target splits.
Competitors. We compare the following models:
SVM: a classifier trained directly on the training instances of the source data, without the use of semantic embedding. This is the standard (Supervised) learning setting, and the learned classifier can only predict the labels of source classes on testing data.
SVR: used to learn , with recognition done in the resulting semantic manifold. This corresponds to using only Eq. (3) to learn .
- DeViSE, ConSE, AMP: ConSE uses a multi-class logistic regression classifier for predicting class probabilities of source instances, and the parameter T (the number of top-T nearest embeddings for a given instance) was selected from a range of candidate values, choosing the one that gives the best results; in the supervised setting, the ConSE method works the same as SVR. We use the AMP code provided on the authors' webpage.
We test three different variants of our method.
SS-Voc:closed is a variant of our maximum margin learning of with vocabulary-informed constraints only from known classes (i.e., the closed set ).
The second variant corresponds to our model with maximum margin constraints coming from both and (or ); we compute using Eq. (7), but without optimizing .
The third variant, SS-Voc:full, further fine-tunes the word vector space by also optimizing using Eq. (8).
Open set vocabulary. We use Google's word2vec to learn the open set vocabulary from a large text corpus of around billion words: UMBC WebBase ( billion words), the latest Wikipedia articles ( billion words), and other web documents ( billion words). Rare (low-frequency) words and high-frequency stop words were pruned from the vocabulary set: we remove words with frequency or times. The result is a vocabulary of around 310K words/phrases with , which is defined as .
Computation, parameter selection, and scalability. All experiments are repeated times to avoid noise due to the small training set size, and we report the average across all runs. For all experiments, the mean accuracy is reported, i.e., the mean of the diagonal of the confusion matrix on the prediction of testing data. We fix the parameters and as and in our experiments when only few training instances are available for AwA (5 instances per class) and ImageNet (3 instances per class). Varying the values of and leads to variances on AwA and variances on ImageNet; the experimental conclusions still hold. Cross-validation is conducted when more training instances are available. and are set to to balance computational cost and the efficiency of the pairwise constraints.
To solve Eq. (8) at scale, one can use Stochastic Gradient Descent (SGD), which makes great progress initially but is often slow as it approaches a solution. In contrast, the L-BFGS method mentioned above achieves steady convergence at the cost of computing the full objective and gradient at each iteration. L-BFGS can usually achieve better results than SGD given a good initialization, but is computationally expensive. To leverage the benefits of both methods, we utilize a hybrid method to solve Eq. (8) on large-scale datasets: the solver is initialized with a few instances, approximating the gradients using SGD first; then gradually more instances are used, and a switch to L-BFGS is made after iterations. This solver is motivated by Friedlander et al. , who theoretically analyzed and proved the convergence of such hybrid optimization methods. In practice, we use L-BFGS for AwA and the hybrid algorithm for ImageNet. The hybrid algorithm can save between of the training time as compared with L-BFGS.
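The hybrid schedule can be sketched as: run inexpensive mini-batch SGD steps for fast initial progress, then hand the iterate to full-batch L-BFGS for steady final convergence. The toy least-squares objective, batch size, learning rate, and epoch count below are all placeholders:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 6))
Y = X @ rng.normal(size=(6, 3))        # toy regression targets

def loss_grad(w_flat, Xb, Yb):
    """Least-squares loss and gradient on a (mini-)batch."""
    W = w_flat.reshape(6, 3)
    R = Xb @ W - Yb
    return np.sum(R * R), (2.0 * Xb.T @ R).ravel()

# Phase 1: mini-batch SGD for fast, cheap initial progress.
w = np.zeros(18)
lr, batch = 0.01, 20
for epoch in range(3):
    for s in range(0, len(X), batch):
        _, g = loss_grad(w, X[s:s + batch], Y[s:s + batch])
        w -= lr * g

# Phase 2: switch to full-batch L-BFGS for steady final convergence.
res = minimize(loss_grad, w, args=(X, Y), jac=True, method="L-BFGS-B")
```
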
4.1 Experimental results on AwA dataset
We report AwA experimental results in Tab. 1, which uses the 100/1000-dimensional word2vec representation (i.e., ). We highlight the following observations: (1) The SS-Voc variants have better classification accuracy than SVM and SVR, validating the effectiveness of our model. In particular, the results of our SS-Voc:full are and higher than those of SVR/SVM on supervised and zero-shot recognition, respectively. Note that though the results of SVM/SVR are good for supervised recognition tasks (52.1 and 51.4/57.1, respectively), we can further improve on them, which we attribute to the more discriminative classification boundary informed by the vocabulary. (2) SS-Voc with the open vocabulary significantly improves, by up to , the zero-shot recognition results of SS-Voc:closed. This validates the importance of information from the open vocabulary. (3) SS-Voc benefits more from the open set vocabulary than from word vector space fine-tuning: the supervised and zero-shot recognition results of SS-Voc:full are and higher than those of SS-Voc:closed.
Comparing to the state-of-the-art on ZSL: We compare our results with the state-of-the-art ZSL results on the AwA dataset in Tab. 2. We compare SS-Voc:full trained with all source instances, 800 instances (20 per class), and 200 instances (5 per class). Our model achieves accuracy, which is remarkably higher than all previous methods. This is particularly impressive taking into account the fact that we use only a semantic space and no additional attribute representations, which many competitor methods utilize. Further, our results with training instances, a small fraction of the instances used to train all other methods, already outperform all other approaches. We argue that much of our success and improvement comes from the more discriminative information obtained using an open set vocabulary and the corresponding large margin constraints, rather than from the features, since our method improved over DAP , which uses the same OverFeat features. Note that our SS-Voc:full result is higher than the closest competitor ; this improvement is statistically significant. Compared with our work, not only used more powerful visual features (GoogLeNet vs. OverFeat), but also employed more semantic embeddings (attributes, GloVe, and WordNet-derived similarity embeddings, compared to our word2vec; GloVe can be seen as an improved version of word2vec).
Large-scale open set recognition: Here we focus on the Open-set setting with the large vocabulary of approximately 310K entities; as such, the chance performance of the task is much lower. In addition, to study performance as a function of the open vocabulary set, we also conduct two additional experiments with different label sets: (1) Open-set: the 1000 labels nearest to the ground-truth class prototypes are selected from the complete dictionary of 310K labels; this corresponds to open set fine-grained recognition. (2) Open-set: 1000 label names randomly sampled from the 310K set. The results are shown in Fig. 2. Also note that we did not fine-tune the word vector space (i.e., is an identity matrix) in the Open-set setting, since Eq. (8) can optimize visual discriminability only on a relatively small subset compared with the 310K vocabulary. While our Open-set variants do not assume that test data comes from either the source/auxiliary domain or the target domain, we split the two cases to mimic the Supervised and Zero-shot scenarios for easier analysis.
In the Supervised-like setting, Fig. 2 (left), our accuracy is better than that of SVR-Map on all three label sets and at all hit rates. The better results are largely due to the better embedding matrix learned by enforcing maximum margins between training class names and the open set vocabulary on source training data.
In the Zero-shot-like setting, our method still has a notable advantage over the SVR-Map method in Top- () accuracy, again thanks to the better embedding learned by Eq. (7). However, we notice that our top-1 accuracy in the Zero-shot-like setting is lower than that of the SVR-Map method: our method tends to label some instances from the target data with their nearest classes from within the source label set. For example, “humpback whale” from the testing data is more likely to be labeled as “blue whale”. When considering Top- () accuracy, however, our method still has the advantage over the baselines.
4.2 Experimental results on ImageNet dataset
We further validate our findings on the large-scale ImageNet 2012/2010 dataset; the 1000-dimensional word2vec representation is used here, since this dataset has a larger number of classes than AwA. Our results remain better than those of the two baselines – SVR-Map and SVM – in the (Supervised) and (Zero-shot) settings, respectively, as shown in Tab. 3. The open set image recognition results are shown in Fig. 4. In both the Supervised-like and Zero-shot-like settings, our framework clearly retains its advantage over the baseline, which directly matches the nearest neighbors from the vocabulary using the predicted semantic word vectors of each testing instance.
We note that the supervised SVM results () on ImageNet are lower than those reported in , despite using the same features. This is because only a few samples per class (3) are used to train our models, to mimic human performance of learning from few examples and to illustrate the ability of our model to learn with little data. Nevertheless, our semi-supervised vocabulary-informed learning improves recognition accuracy in all settings. For open set image recognition, performance drops from (Supervised) and (Zero-shot) to around and , respectively (Fig. 4). This drop is caused by the intrinsic difficulty of the open set image recognition task ( increase in vocabulary) on a large-scale dataset. However, our performance is still better than the SVR-Map baseline, which in turn is significantly better than chance level.
We also evaluated our model with a larger number of training instances ( per class). We observe that in the standard supervised learning setting, the improvements achieved using vocabulary-informed learning tend to diminish somewhat as the number of training instances grows substantially. With a large number of training instances, the mapping between low-level image features and semantic words becomes better behaved, and the effect of the additional constraints due to the open vocabulary becomes less pronounced.
Comparing to the state-of-the-art on ZSL. We compare our results to several state-of-the-art large-scale zero-shot recognition models. Our results, SS-Voc:full, are better than those of ConSE, DeViSE, and AMP on both T-1 and T-5 metrics by a very significant margin (the improvement over the best competitor, ConSE, is 3.43 percentage points, or nearly 62%, with training samples). The poor results of DeViSE with training instances are largely due to the inefficient learning of the visual-semantic embedding matrix. The AMP algorithm also relies on the embedding matrix from DeViSE, which explains the similarly poor performance of AMP with training instances. In contrast, our SS-Voc:full can leverage discriminative information from the open vocabulary and max-margin constraints, which helps improve performance. For DeViSE with all ImageNet instances, we confirm the observation in  that the results of ConSE are much better than those of DeViSE. Our results are a further significant improvement over ConSE.
4.3 Qualitative results of open set image recognition
A t-SNE visualization of the 10 AwA target testing classes is shown in Fig. 3. We compare our SS-Voc:full with SS-Voc:closed and SVR. We note that: (1) the distributions of the 10 classes obtained using SS-Voc are more centered and more separable than those of SVR (e.g., rat, persian cat, and pig), due to the data and pairwise maximum margin terms that help improve the generalization of the learned mapping; (2) the distributions of different classes obtained using the full model, SS-Voc:full, are also more separable than those of SS-Voc:closed, e.g., rat, persian cat, and raccoon. This can be attributed to the open-vocabulary-informed constraints added during learning, which further improve generalization. For example, we show an open set recognition example image of “persian_cat”, which is wrongly classified as “hamster” by SS-Voc:closed.
A partial illustration of the embeddings learned for the ImageNet 2012/2010 dataset is shown in Figure 1, where 4 source/auxiliary and 2 target/zero-shot classes are shown. Again, the better separation among classes is largely attributed to the open set max-margin constraints introduced in our SS-Voc:full model. Additional examples of misclassified instances are available in the supplemental material.
5 Conclusion and Future Work
This paper introduced the problem of semi-supervised vocabulary-informed learning, which utilizes an open set semantic vocabulary to help train better classifiers for observed and unobserved classes in the supervised learning, ZSL, and open set image recognition settings. We formulated semi-supervised vocabulary-informed learning in a maximum margin framework. Extensive experimental results illustrate the efficacy of this learning paradigm. Strikingly, it achieves competitive performance with only a few training instances and is relatively robust to a large open set vocabulary of up to class labels.
We rely on word2vec to transfer information between observed and unobserved classes. In the future, other linguistic or visual semantic embeddings could be explored instead, or in combination, as part of vocabulary-informed learning.
-  Z. Akata, F. Perronnin, Z. Harchaoui, and C. Schmid. Label-embedding for attribute-based classification. In CVPR, 2013.
-  Z. Akata, S. Reed, D. Walter, H. Lee, and B. Schiele. Evaluation of output embeddings for fine-grained image classification. In CVPR, 2015.
-  E. Bart and S. Ullman. Cross-generalization: learning novel classes from a single example by feature replacement. In CVPR, 2005.
-  A. Bendale and T. Boult. Towards open world recognition. In CVPR, 2015.
-  I. Biederman. Recognition by components - a theory of human image understanding. Psychological Review, 1987.
-  S. R. Bowman, C. Potts, and C. D. Manning. Learning distributed word representations for natural logic reasoning. CoRR, abs/1410.4176, 2014.
-  K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. In BMVC, 2014.
-  K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. JMLR, 2001.
-  J. Deng, N. Ding, Y. Jia, A. Frome, K. Murphy, S. Bengio, Y. Li, H. Neven, and H. Adam. Large-scale object classification using label relation graphs. In ECCV, 2014.
-  J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
-  A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth. Describing objects by their attributes. In CVPR, 2009.
-  L. Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. IEEE TPAMI, 2006.
-  M. P. Friedlander and M. Schmidt. Hybrid deterministic-stochastic methods for data fitting. SIAM J. Scientific Computing, 2012.
-  A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, M. Ranzato, and T. Mikolov. DeViSE: A deep visual-semantic embedding model. In NIPS, 2013.
-  Y. Fu, T. Hospedales, T. Xiang, and S. Gong. Attribute learning for understanding unstructured social activity. In ECCV, 2012.
-  Y. Fu, T. M. Hospedales, T. Xiang, Z. Fu, and S. Gong. Transductive multi-view embedding for zero-shot recognition and annotation. In ECCV, 2014.
-  Y. Fu, T. M. Hospedales, T. Xiang, and S. Gong. Learning multi-modal latent attributes. IEEE TPAMI, 2013.
-  Y. Fu, T. M. Hospedales, T. Xiang, and S. Gong. Transductive multi-view zero-shot learning. IEEE TPAMI, 2015.
-  Z. Fu, T. Xiang, E. Kodirov, and S. Gong. Zero-shot object recognition by semantic manifold distance. In CVPR, 2015.
-  S. Guadarrama, E. Rodner, K. Saenko, N. Zhang, R. Farrell, J. Donahue, and T. Darrell. Open-vocabulary object retrieval. In Robotics Science and Systems (RSS), 2014.
-  S. J. Hwang and L. Sigal. A unified semantic embedding: relating taxonomies and attributes. In NIPS, 2014.
-  D. Jayaraman and K. Grauman. Zero shot recognition with unreliable attributes. In NIPS, 2014.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
-  N. Kumar, A. C. Berg, P. N. Belhumeur, and S. K. Nayar. Attribute and simile classifiers for face verification. In ICCV, 2009.
-  C. H. Lampert, H. Nickisch, and S. Harmeling. Attribute-based classification for zero-shot visual object categorization. IEEE TPAMI, 2013.
-  H. Larochelle, D. Erhan, and Y. Bengio. Zero-data learning of new tasks. In AAAI, 2008.
-  Y.-J. Lee, W.-F. Hsieh, and C.-M. Huang. ε-SSVR: A smooth support vector machine for ε-insensitive regression. IEEE TKDE, 2005.
-  C. D. Manning, P. Raghavan, and H. Schutze. Introduction to Information Retrieval. Cambridge University Press, 2009.
-  T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In NIPS, 2013.
-  M. Norouzi, T. Mikolov, S. Bengio, Y. Singer, J. Shlens, A. Frome, G. S. Corrado, and J. Dean. Zero-shot learning by convex combination of semantic embeddings. ICLR, 2014.
-  M. Palatucci, G. Hinton, D. Pomerleau, and T. M. Mitchell. Zero-shot learning with semantic output codes. In NIPS, 2009.
-  D. Parikh and K. Grauman. Relative attributes. In ICCV, 2011.
-  J. Pennington, R. Socher, and C. D. Manning. Glove: Global vectors for word representation. In EMNLP, 2014.
-  M. Rohrbach, S. Ebert, and B. Schiele. Transfer learning in a transductive setting. In NIPS, 2013.
-  M. Rohrbach, M. Stark, G. Szarvas, I. Gurevych, and B. Schiele. What helps where – and why? semantic relatedness for knowledge transfer. In CVPR, 2010.
-  H. Sattar, S. Muller, M. Fritz, and A. Bulling. Prediction of search targets from fixations in open-world settings. In CVPR, 2015.
-  W. J. Scheirer, L. P. Jain, and T. E. Boult. Probability models for open set recognition. IEEE TPAMI, 2014.
-  W. J. Scheirer, A. Rocha, A. Sapkota, and T. E. Boult. Towards open set recognition. IEEE TPAMI, 2013.
-  R. Socher, M. Ganjoo, H. Sridhar, O. Bastani, C. D. Manning, and A. Y. Ng. Zero-shot learning through cross-modal transfer. In NIPS, 2013.
-  A. Torralba, R. Fergus, and W. Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE TPAMI, 2008.
-  A. Torralba, K. P. Murphy, and W. T. Freeman. Using the forest to see the trees: Exploiting context for visual object detection and localization. Commun. ACM, 2010.
-  I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. JMLR, 2005.
-  A. Vedaldi and A. Zisserman. Efficient additive kernels via explicit feature maps. IEEE TPAMI, 2011.
-  J. Weston, S. Bengio, and N. Usunier. Wsabie: Scaling up to large vocabulary image annotation. In IJCAI, 2011.
-  Z. Wu, Y. Fu, Y.-G. Jiang, and L. Sigal. Harnessing object and scene semantics for large-scale video understanding. In CVPR, 2016.
-  F. X. Yu, L. Cao, R. S. Feris, J. R. Smith, and S.-F. Chang. Designing category-level attributes for discriminative visual recognition. CVPR, 2013.
-  T. Zhang. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In ICML, 2004.