On zero-shot recognition of generic objects

04/10/2019, by Tristan Hascoet et al.

Many recent advances in computer vision are the result of a healthy competition among researchers on high-quality, task-specific benchmarks. After a decade of active research, the accuracy of zero-shot learning (ZSL) models on the Imagenet benchmark remains far too low to be considered for practical object recognition applications. In this paper, we argue that the main reason behind this apparent lack of progress is the poor quality of this benchmark. We highlight major structural flaws of the current benchmark and analyze different factors impacting the accuracy of ZSL models. We show that the actual classification accuracy of existing ZSL models is significantly higher than was previously thought once these flaws are accounted for. We then introduce the notion of structural bias specific to ZSL datasets. We discuss how the presence of this new form of bias allows for a trivial solution to the standard benchmark and conclude on the need for a new benchmark. We then detail the semi-automated construction of a new benchmark to address these flaws.


1 Introduction

Datasets play a leading role in computer vision research. Perhaps the most striking example of the impact a dataset can have on research has been the introduction of Imagenet [2]. The new scale and granularity of Imagenet's coverage of the visual world have paved the way for the success and widespread adoption of CNNs [8, 11], which have revolutionized generic object recognition.

The current best practice for the development of a practical object recognition solution consists in collecting and annotating application-specific training data on which to fine-tune a large Imagenet-pretrained CNN. This data annotation process can be prohibitively expensive for many applications, which hinders the widespread usage of these technologies. ZSL models generalize the recognition ability of traditional image classifiers to unknown classes, for which no image sample is available for training. The promise of ZSL for generic object recognition is huge: to scale up the recognition capacity of image classifiers beyond the set of annotated training classes. Hence, ZSL has the potential to be of great practical impact, as it would considerably ease the deployment of object recognition technologies by eliminating the need for expensive task-specific data collection and fine-tuning.

Despite its great promise, and after a decade of active research [10], the accuracy of ZSL models on the standard Imagenet benchmark [3] remains far too low for practical applications. To better understand this lack of progress, we analyzed the errors of several ZSL baselines. Our analysis leads us to identify two main factors impacting the accuracy of ZSL models: structural flaws in the standard evaluation protocol and poor quality of both semantic and visual samples. On the bright side, we show that once these flaws are taken into account, the actual accuracy of existing ZSL models is much higher than was previously thought.

On the other hand, we show that a trivial solution outperforms most existing ZSL models by a large margin, which is troubling. To explain this phenomenon, we introduce the notion of structural bias in ZSL datasets. We argue that ZSL models should aim to develop compositional reasoning abilities, but the presence of structural bias in the Imagenet benchmark favors solutions based on a trivial one-to-one mapping between training and test classes. We come to the conclusion that a new benchmark is needed to address the different problems identified by our analysis and, in the last section of this paper, we detail the semi-automated construction of the new benchmark we propose.

To structure our discussion, we first briefly review preliminaries on ZSL in Section 3. Section 4 details our analysis of the different factors impacting the accuracy of ZSL models on the standard benchmark. Section 5 introduces the notion of structural bias and proposes a way to measure and minimize its impact in the construction of a new benchmark. Finally, Section 6 summarizes the construction of our proposed benchmark. Due to space constraints, we only include the main results of our analysis in the body of this paper. We refer interested readers to the supplementary material for additional results and details of our analysis.

2 Related Work

2.1 ZSL datasets

Early research on ZSL has been carried out on relatively small-scale or domain-specific benchmarks [9, 14, 19], for which human-annotated visual attributes are proposed as semantic representations of the visual classes. On the one hand, these benchmarks have provided a controlled setup for the development of theoretical models and the accurate tracking of ZSL progress. On the other hand, it is unclear whether approaches developed on such datasets would generalize to the more practical setting of zero-shot generic object recognition. For instance, in generic object recognition, manually annotating each and every possible visual class of interest with a set of visual attributes is impractical due to the diversity and complexity of the visual world.

The Imagenet dataset [2] consists of more than 13 million images scattered among 21,845 visual classes. Imagenet relies on Wordnet [12] to structure its classes: each visual class in Imagenet corresponds to a concept in Wordnet. Frome et al. [3] proposed a benchmark for ZS generic object recognition based on the Imagenet dataset, which has been widely adopted as the standard evaluation benchmark by recent works [13, 20, 15, 1, 21, 7, 18]. Using word embeddings as semantic representations, they use the 1000 classes of the ILSVRC dataset as training classes and propose different test splits drawn from the remaining 20,845 classes of the Imagenet dataset based on their distance to the training classes within the Wordnet hierarchy: the 2-hops, 3-hops and all test splits.

Careful inspection of these test splits revealed a confusion in their naming: the 2-hops test split actually consists of the set of test classes directly connected to the training classes in Wordnet, i.e., within one hop of the training set. Similarly, the 3-hops test split actually corresponds to the test classes within two hops. In this paper, we refer to the standard test splits by the name of their true configuration: 1-hop, 2-hops and all, as illustrated in Figure 1.

2.2 Dataset bias

Bias in datasets can take many forms, depending on the specific target task. Torralba and Efros [17] investigate bias in generic object recognition datasets. The notion of structural bias we introduce in Section 5 is closely related to the notion of negative set bias they analyze.

As more complex tasks are being considered, more insidious forms of bias sneak into our datasets. In VQA, the impressive results of early baseline models have later been shown to be largely due to statistical biases in the question/answers pairs [4, 6, 5]. Similar to these works, we will show that a trivial solution leveraging structural bias in the Imagenet ZSL benchmark outperforms early ZSL baselines.

Xian et al. [21] identify structural incoherences in small-scale ZSL benchmarks and propose new test splits to remedy them. Closely related to our work, they also observe a correlation between test class sample population and classification accuracy on the Imagenet ZSL benchmark. However, their analysis mainly focuses on small-scale benchmarks and on the comparison of existing ZSL models, while we analyze the ZSL benchmark for generic object recognition in more depth.

3 Preliminaries

ZSL models aim to recognize unseen classes, for which no image sample is available to learn from. To do so, ZSL models use descriptions of the visual classes, i.e., representations of the visual classes in a semantic space shared by both training and test classes. To evaluate the out-of-sample recognition ability of models, ZSL benchmarks split the full set of classes into disjoint training and test sets. ZSL benchmarks are thus fully defined by three components: a set of training and test classes $\mathcal{C}$, a set of labeled images $\mathcal{X}$, and a set of semantic representations $\mathcal{S}$:

$$\mathcal{C} = \mathcal{C}_{tr} \cup \mathcal{C}_{te}, \qquad \mathcal{C}_{tr} \cap \mathcal{C}_{te} = \emptyset \tag{1a}$$
$$\mathcal{X}_{tr} = \{(x_i, y_i) \mid y_i \in \mathcal{C}_{tr}\} \tag{1b}$$
$$\mathcal{X}_{te} = \{(x_i, y_i) \mid y_i \in \mathcal{C}_{te}\} \tag{1c}$$
$$\mathcal{S}_{tr} = \{s_c \mid c \in \mathcal{C}_{tr}\} \tag{1d}$$
$$\mathcal{S}_{te} = \{s_c \mid c \in \mathcal{C}_{te}\} \tag{1e}$$
$$\mathcal{X} = \mathcal{X}_{tr} \cup \mathcal{X}_{te}, \qquad \mathcal{S} = \mathcal{S}_{tr} \cup \mathcal{S}_{te} \tag{1f}$$

ZSL models are typically trained to minimize a loss function $\mathcal{L}$ over a similarity score $f_\theta(x, s)$ between image and semantic features of the training sample set, with respect to the model parameters $\theta$:

$$\theta^* = \arg\min_\theta \sum_{(x_i, y_i) \in \mathcal{X}_{tr}} \mathcal{L}\big(f_\theta(x_i, s_{y_i})\big) \tag{2}$$

In the standard ZSL setting, test samples are classified among the set of unseen test classes by retrieving the class description of highest similarity score:

$$\hat{y} = \arg\max_{c \in \mathcal{C}_{te}} f_\theta(x, s_c) \tag{3}$$

In the generalized ZSL setting, test samples are classified among the full set of training and test classes:

$$\hat{y} = \arg\max_{c \in \mathcal{C}_{tr} \cup \mathcal{C}_{te}} f_\theta(x, s_c) \tag{4}$$

Xian et al. [20] have shown that many ZSL models can be formulated within the same linear model framework, with different training objectives and regularization terms. More recently, Wang et al. [18] have proposed a Graph Convolutional Network (GCN) model that has shown impressive improvements over the previous state of the art. In our study, we present results obtained with both a baseline linear model [15] and a state-of-the-art GCN model [18, 7].
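As a concrete reference for this linear framework, the following is a minimal numpy sketch of a closed-form linear compatibility baseline in the spirit of ESZSL [15]; the variable names, regularization weights, and matrix shapes are our own illustrative choices, not the exact implementation evaluated in this paper.

import numpy as np

def fit_linear_zsl(X, S, Y, gamma=1.0, lam=1.0):
    # X: (d, m) image features of the m training samples.
    # S: (a, z) semantic embeddings (e.g., word embeddings) of the z training classes.
    # Y: (m, z) one-hot label matrix of the training samples.
    # Returns the (d, a) bilinear compatibility matrix V used as the score of Eq. (2).
    d, a = X.shape[0], S.shape[0]
    left = np.linalg.inv(X @ X.T + gamma * np.eye(d))
    right = np.linalg.inv(S @ S.T + lam * np.eye(a))
    return left @ X @ Y @ S.T @ right

def predict_zsl(V, x, S_test):
    # Standard ZSL inference (Eq. 3): return the index of the test class of highest compatibility.
    return int(np.argmax(x @ V @ S_test))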

4 Error analysis

In the previous section, we mentioned that ZSL benchmarks are fully defined by three components: a set of labeled images $\mathcal{X}$, a set of semantic representations $\mathcal{S}$, and the set of training and test classes $\mathcal{C}$. In this section, we analyze each of the standard benchmark components individually: We first highlight inconsistencies in the configuration of the different test splits and show that these inconsistencies lead to many false negatives in the reported evaluation of ZSL model outputs. Next, we identify a number of factors impacting the quality of the word embeddings of visual classes and argue that visual classes with poor semantic representations should be excluded from ZSL benchmarks. We then observe that the Imagenet dataset contains many ambiguous image samples. We define what a good image sample means in the context of ZSL and propose a method to automatically select such images.

4.1 Structural flaws

Figure 1 illustrates the configuration of the test classes of the standard test splits within the Wordnet hierarchy. This configuration leads to an obvious contradiction: test sets include visual classes of both parent concepts and their children. Consider the problem of classifying images of birds within the 1-hop test split, as in Figure 1. The standard test splits give rise to two possibly inconsistent scenarios:

Figure 1: Illustration of the standard test splits configuration

A ZSL model may classify an image of the child class Cathartid as its parent class Raptor. The standard benchmark considers such cases as classification errors, while the classification is semantically correct.

A ZSL model may classify an image of the parent class Raptor as one of its child classes, e.g., Cathartid. The classification may be semantically correct or incorrect, depending on the specific kind of raptor in the image, but we have no way to automatically assess it without additional annotation. The standard benchmark considers such cases as classification errors, while the classification is semantically undefined.

Figure 2: Distribution of the classification outputs of different ZSL models on the 1-hop test split. An image x can be classified either into its actual label y, the parent class of y, one of its child classes, or an unrelated class. Only the latter case constitutes a definitive error.

We refer to both of the above cases as false negatives. Figure 2 illustrates the distribution of ZSL classification outputs among these different scenarios on the 1-hop test split. On the standard ZSL task for instance, the reported accuracy of the GCN model is 21.8% while the actual (semantically correct) accuracy should be somewhere in between 27.8% and 40.4%.

The ratio of false negatives to reported accuracy increases dramatically in the generalized ZSL setting. The reported accuracy of the linear baseline is only 1.9%, while the actual (semantically correct) accuracy lies between 16.0% and 41.1%. This is due to the fact that ZSL models tend to classify test images into their parent or child training classes: for example, Cathartid images tend to be classified as Vulture. Appendix A of the supplementary material presents results on the other standard splits, for which we show that the ratio of false negatives to reported accuracy further increases with larger test splits.
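To make this false-negative accounting concrete, the following sketch categorizes a model prediction as correct, parent, child, or unrelated using NLTK's WordNet interface. It assumes that Imagenet class identifiers (wnids) encode WordNet noun synset offsets, and is an illustration rather than the exact evaluation code behind Figure 2.

from nltk.corpus import wordnet as wn

def wnid_to_synset(wnid):
    # Imagenet wnids such as 'n02084071' encode WordNet noun synset offsets.
    return wn.synset_from_pos_and_offset('n', int(wnid[1:]))

def classification_outcome(pred_wnid, true_wnid):
    pred, true = wnid_to_synset(pred_wnid), wnid_to_synset(true_wnid)
    if pred == true:
        return 'correct'        # true positive: counted by the standard benchmark
    ancestors_of_true = set(true.closure(lambda s: s.hypernyms()))
    ancestors_of_pred = set(pred.closure(lambda s: s.hypernyms()))
    if pred in ancestors_of_true:
        return 'parent'         # semantically correct, but counted as an error
    if true in ancestors_of_pred:
        return 'child'          # semantically undefined, but counted as an error
    return 'unrelated'          # definitive error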

4.2 Word embeddings

In this section, we identify two factors impacting the quality of word embeddings and analyze their effect on ZSL accuracy: polysemy and occurrence frequency. These problems naturally arise in the definition of large-scale object categories, so they are inherent to ZS recognition of generic objects. However, we argue that ZSL benchmarks should provide a curated environment with high-quality, unambiguous semantic representations, and that solutions to tackle the special cases of polysemous and rare words should be investigated separately in the future.

4.2.1 Occurrence frequency

Word embeddings are learned in an unsupervised manner from the co-occurrence statistics of words in large text corpora. Common words are learned from plentiful statistics, so we expect them to provide more semantically meaningful representations than rare words, which are learned from scarce co-occurrence statistics. We found many Imagenet class labels to be rare words (see Appendix B of the supplementary material), with as many as 33.7% of label words appearing fewer than 50 times in Wikipedia. Here, we question whether the few co-occurrence statistics from which such rare word embeddings are learned actually provide any visually discriminative information for ZSL.

To answer this question, we evaluate ZSL models on different test splits of 100 classes: we split the Imagenet classes into different subsets based on the occurrence frequency of their label word. We independently evaluate the accuracy of our model on each of these splits and report the ZSL accuracy with respect to the average occurrence frequency of the visual class labels in Figure 3.

Figure 3: Each dot represents the top-1 accuracy (y-axis) of a 100-class test split with respect to the test split characteristics (x-axis). Left: mean occurrence frequency of the test class labels. Right: test classes of primary meaning, such as cairn (monument), or secondary meaning, such as cairn (dog).

Our results highlight a strong correlation between word frequency and the accuracy of the Linear baseline: test splits made of rare words strikingly under-perform test splits made of more common words, although accuracy remains well above chance (1%) even for test sets of very rare words. Results are more nuanced for the GCN model, which can be explained by the fact that GCN uses the Wordnet hierarchy information in addition to word embeddings.
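The frequency-based splits described above can be built along the following lines; this sketch assumes single-word class labels and a pre-tokenized, lower-cased Wikipedia dump, which is a simplification of the actual counting procedure (multi-word labels require phrase matching).

from collections import Counter

def count_label_occurrences(corpus_lines, class_labels):
    # Count how often each class label word appears in the corpus.
    labels = set(class_labels)
    counts = Counter()
    for line in corpus_lines:
        for token in line.split():
            if token in labels:
                counts[token] += 1
    return counts

def frequency_splits(class_to_label, counts, split_size=100):
    # Sort classes by the occurrence count of their label word and chunk into 100-class splits.
    ranked = sorted(class_to_label, key=lambda c: counts[class_to_label[c]])
    return [ranked[i:i + split_size] for i in range(0, len(ranked), split_size)]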

4.2.2 Polysemy

The English language contains many polysemous words, which makes it difficult to uniquely identify a visual class with a single word. We found that half of the Imagenet word labels are shared with at least one other Wordnet concept, and that 38% of Imagenet classes share at least one word label with other visual classes. Figure 4 illustrates the example of the word "cairn". Two visual classes share the label "cairn": one relates to the meaning of cairn as a stone memorial, while the other refers to a dog breed. This is problematic as both of these visual classes share the same representation in the label space, so they are essentially defined as the same class although they correspond to two visually very distinct concepts.

To deal with polysemy, we assume that every word has one primary meaning, with possibly several secondary meanings. We consider word embeddings to reflect the semantics of their primary meaning exclusively, and discard visual classes associated with the secondary meanings of their word label. To automatically identify the primary meaning of visual class labels, we implement a solution based on both Wordnet and word embedding statistics, detailed in the supplementary material.

Figure 4: Illustration of polysemous words. Each color represents the 100 nearest neighbors of a given word. "Cairn" and its closest neighbors are clustered around stone- and monument-related vocabulary, far away from dog-related vocabulary, so we assign the top visual class as the primary meaning of the word cairn.

We conduct an experiment to assess both the impact of polysemy on ZSL accuracy and the efficiency of our solution. As in the previous section, we evaluate our ZSL models on different test splits of 100 classes: we separately evaluate test classes identified as the primary meaning of their word label and test classes corresponding to a secondary meaning of their word label. Figure 3 reports the accuracy obtained on these different test splits. We can see a significant boost in the ZSL accuracy of test classes whose word labels are identified as primary meanings. In comparison, test splits made exclusively of secondary meanings perform poorly. This confirms that polysemy does indeed impact ZSL accuracy, and suggests that our solution for primary meaning identification effectively addresses this problem.

4.3 Image samples

The ILSVRC dataset consists of a high-quality curated subset of the Imagenet dataset. The current ZSL benchmark uses ILSVRC classes as training classes and classes drawn from the remainder of the Imagenet dataset as test sets, assuming similar standards of quality from these test classes. Upon closer inspection, we found these test classes to contain many inconsistencies and ambiguities. In this section, we detail a solution to automatically filter out ambiguous samples so as to only select quality samples for our proposed benchmark.

4.3.1 Class-wise selection

Xian et al. [21] first identified a correlation between the sample population of visual classes and their classification accuracy. They conjecture that small-population classes are harder to classify because they correspond to fine-grained visual concepts, while large-population classes correspond to easier, coarse-grained concepts. Manual inspection of these classes led us to a different interpretation: First, we found no significant correlation between sample population and concept granularity (Appendix C). For example, fine-grained concepts such as specific species of birds or dogs tend to have high sample populations. On the other hand, we found many visually ambiguous concepts such as "ringer", "covering" or "chair of state" to have low sample populations. Such visually ambiguous concepts are harder for crowd-sourced annotators to reach a labeling consensus on, resulting in lower population counts.

In Figure 5, we report the ZSL accuracy of our models on different test splits with respect to their average population counts. This figure shows a clear correlation between sample population and the accuracy of both models, with low accuracy for classes of low sample population. We use the sample population as a rough indicator to quickly filter out ambiguous visual classes and only consider classes with a sample population greater than 300 images as valid candidate classes for our proposed dataset.
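A minimal sketch of this class-wise filter, assuming a mapping from candidate classes to their lists of image samples (the 300-image threshold is the one stated above):

def filter_classes_by_population(class_to_images, min_samples=300):
    # Keep only candidate classes with more than `min_samples` image samples.
    return {c: imgs for c, imgs in class_to_images.items() if len(imgs) > min_samples}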

Figure 5: ZSL accuracy with respect to sample population sizes. Left: Distribution of Imagenet class population size. 6.1% of Imagenet classes have less than 10 samples, 21.1% have less than 100 samples. Right: ZSL accuracy of different test splits with respect to their mean sample population size.

4.3.2 Sample-wise selection

Even among the selected classes, we found that many inconsistent and ambiguous images remain (Appendix C), so we would like to further filter quality test images sample-wise. But what makes a good candidate image for a ZSL benchmark? How can we measure the quality of a sample? We argue that ZSL benchmarks should only reflect the zero-shot ability of models: ZSL benchmarks should evaluate the accuracy of ZSL models relative to the accuracy of standard non-ZSL models. Hence, we define a good ZSL sample as an image unambiguous enough to be correctly classified by standard image classifiers trained in a supervised manner.

To automatically filter such quality samples, we fine-tune and evaluate a standard CNN in a supervised manner on the set of candidate test classes. We consider consistently misclassified samples to be too ambiguous for ZSL and only select samples that were correctly classified by the CNN. Details of this selection process are presented in Appendix C of the supplementary material.

4.4 Dataset Summary

Figure 6 summarizes the impact of the different factors we analyzed on the top-1 classification error of both our baseline models on the ”1-hop” test split. The error rate of the Linear model on the standard ZSL setting drops from 86% to 61% after removing ambiguous images, semantic samples, and structural flaws. The error rate of the GCN model on the generalized setting drops from 90% to 47%.

Figure 6: Estimation of the impact of different factors on the reported error of existing models on the 1-hop test split

The GCN model is particularly sensitive to the structural flaws of the standard benchmark, but less sensitive to noisy word embeddings than the linear baseline. This can be easily explained by the fact that GCN models rely on the explicit Wordnet hierarchy information as semantic data in addition to word embeddings. Additional results and details on the methodology of our analysis are given in Appendix D of the supplementary material.

5 Structural bias

ZSL models are inspired by the human ability to recognize unknown objects from a mere description, as it is often illustrated by the following example: Without having ever seen a zebra, a person would be able to recognize one, knowing that zebras look like horses covered in black and white stripes. This example illustrates the human capacity to compose visual features of different known objects to define and recognize previously unknown object categories.

Standard image classifiers encode class labels as local representations (one-hot embeddings), in which each dimension represents a different visual class, as illustrated in Figure 8. As such, no information is shared among classes in the label space: visual class embeddings are equally distant and orthogonal to each other. The main idea behind ZSL models is instead to embed visual classes into distributed representations: in the label space, visual classes are defined by multiple visual features (horse-like shape, stripes, colors) shared among classes. Distributed representations make it possible to define and recognize unknown classes by composition of visual features shared with known classes, in a manner similar to the human ability described above.

The embedding of visual classes into distributed feature representations is especially powerful since it allows a combinatorial number of test classes to be defined by composition of a possibly small set of features learned from a given set of training classes. Hence, we argue that the key challenge behind ZSL is to achieve ZS recognition of unknown classes by composition of known visual features, following the original inspiration of the human ability, and as made possible by distributed feature representations. In this section, we will see that not all ZSL problems require this kind of compositional ability. On the standard benchmark, we show that a trivial solution based on local representations of visual classes outperforms existing approaches based on word embeddings. We show that this trivial solution is made possible by the specific configuration of the standard test splits, and introduce the notion of structural bias to refer to the existence of such trivial solutions in ZSL datasets.

5.1 Toy example

Figure 7 illustrates a toy ZSL problem in which, given a training set of Horse and TV monitor images, the goal is to classify images of Zebra and PC laptop. Consider training an image classifier on the training set and directly applying it to images from the test set. We can safely assume that most zebra images will be classified as horses, and most laptop samples as TV monitors. Hence, a trivial solution to this problem consists in defining a one-to-one mapping between test classes and their closest training class: Zebra=Horse and PC laptop=TV monitor. This example makes it fairly obvious that not all ZSL problems require the ability to compose visual features.

Figure 7: Illustration of the toy example. Left: Wordnet-like class hierarchy. Training classes are shown in red and test classes in green. Right: Illustration of image samples. The black captions represent the distance between classes, measured as their shortest path length.

Classification problems rely on a closed-world assumption: as all test samples are known to belong to one of the test classes, classifying an image x into a given test class c means that x is more likely to belong to c than to the other classes of the test set. In other words, classification is performed relative to a negative set of classes [17]. What made this trivial ZSL solution possible is the fact that the test classes of our toy example are very similar to one of the training classes, relative to their negative set. This allowed us to identify a one-to-one mapping by similarity between training and test classes. We refer to this trivial solution as a similarity-based solution, in opposition to solutions based on the composition of visual features.

Figure 8: Illustration of local (one-hot, on the left) and distributed (right) representations of visual classes. The similarity-based solution encodes both training and test classes as local representations. Composition-based solutions need distributed representations.

As illustrated in Figure 8, the similarity mapping between test and training classes can be directly embedded in the semantic space using local representations. The trivial solution consists in assigning to each test class the exact same semantic representation as its most similar training class. Consider applying these semantic embeddings within a ZSL framework to our toy problem: classifying a test image x as a Horse relative to the negative set TV monitor within the training set becomes strictly equivalent to classifying x as a Zebra relative to its negative set PC laptop within the test set. Hence, any existing ZSL model using these local embeddings instead of distributed representations like word embeddings would converge to the same solution.

5.2 Standard benchmark

Besides our toy example, how well does this trivial solution perform on the standard benchmark? To implement it, we use the Linear baseline model [15] with local representations inferred from the Wordnet hierarchy (see Appendix E), but any model would essentially converge to a similar solution. Table 1 compares the accuracy of this trivial solution to state-of-the-art models as reported in [21, 7]. The trivial similarity-based solution outperforms most existing ZSL models by a significant margin. Only GCN-based models [7], which we discuss in the next section, seem to outperform our trivial solution.

model 1-hop 2-hops all
SYNC [1] 9.26 2.29 0.96
CONSE [13] 7.63 2.18 0.95
ESZSL [15] 6.35 1.51 0.62
LATEM [20] 5.45 1.32 0.5
DEVISE[3] 5.25 1.29 0.49
CMT [16] 2.88 0.67 0.29
GCNZ [18] 19.8 4.1 1.8
ADGPM [7] 26.6 6.3 3.0
Trivial 20.27 3.59 1.53
Table 1: Top-1 accuracy on the standard test splits, (top) as reported for linear baselines in [21], (middle) as reported for GCN-based models in [7], and (bottom) obtained by our trivial solution.

5.3 Measuring structural bias

In our toy example, we have hinted at the fact that structural bias emerges for test sets in which test classes are relatively similar to training classes, while being comparably more dissimilar to each other (to their negative set). To confirm this intuition, we define the following structural ratio:

$$r(c) = \frac{\min_{c' \in \mathcal{C}_{tr}} d(c, c')}{\min_{c'' \in \mathcal{C}_{te} \setminus \{c\}} d(c, c'')} \tag{5a}$$
$$r(\mathcal{C}_{te}) = \frac{1}{|\mathcal{C}_{te}|} \sum_{c \in \mathcal{C}_{te}} r(c) \tag{5b}$$

In these equations, $c$ represents a visual class, $\mathcal{C}_{te}$ and $\mathcal{C}_{tr}$ represent the test and training sets respectively, and $d$ is a distance reflecting the similarity between two classes. Here, $r(c)$ represents the ratio of the distance between $c$ and its closest training class to the distance between $c$ and its closest test class. In our experiments, we use the shortest path length between two classes in the Wordnet hierarchy as the distance $d$, although different metrics would be interesting to investigate as well. We compute the structural ratio of a test set as the mean structural ratio of its individual classes (Eq. 5b). Figure 9 shows the top-1 accuracy achieved by baseline models on different test sets with respect to their structural ratio $r$. As for previous experiments, we report our results on test splits of 100 classes.
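For reference, the structural ratio of Eq. (5) can be computed along the following lines, using the WordNet shortest path length as the distance d; classes are assumed to be given as NLTK WordNet synsets, and this is a sketch rather than the exact code used for Figure 9.

def wordnet_distance(c1, c2):
    # Shortest path length between two classes in the WordNet hierarchy.
    return c1.shortest_path_distance(c2, simulate_root=True)

def structural_ratio(c, test_classes, train_classes):
    # Eq. (5a): distance to the closest training class over distance to the closest test class.
    nearest_train = min(wordnet_distance(c, c_tr) for c_tr in train_classes)
    nearest_test = min(wordnet_distance(c, c_te) for c_te in test_classes if c_te != c)
    return nearest_train / nearest_test

def split_structural_ratio(test_classes, train_classes):
    # Eq. (5b): mean structural ratio of the individual test classes.
    ratios = [structural_ratio(c, test_classes, train_classes) for c in test_classes]
    return sum(ratios) / len(ratios)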

Figure 9: ZSL accuracy on different test sets with respect to their structural ratio r.

On test splits of low structural ratio, the trivial solution performs remarkably well, on par with the state-of-the-art GCN model. Such test splits are similar to the toy example, in which each test class is closely related to a training class while being far away from other test classes in the Wordnet hierarchy. For instance, the structural ratio of the test split in our toy example is low, which corresponds to the regime in which the trivial solution achieves its highest accuracies. We say that such test splits are structurally biased towards similarity-based trivial solutions.

However, the accuracy of the similarity-based trivial solution decreases sharply with the structural ratio until it reaches near-chance accuracy for the highest ratios. Hence, maximizing the structural ratio of test splits appears to be an efficient way to minimize structural bias. Although their accuracy decreases with larger structural ratios, both the GCN and Linear models remain well above chance. These results suggest that ZSL models based on word embeddings are indeed capable of compositional reasoning. At the very least, they are able to perform more complex ZSL tasks than the trivial similarity-based solution. Interestingly, as the trivial solution converges towards chance accuracy, the accuracy of the GCN model seems to converge towards that of the Linear baseline. This suggests that the main reason behind the success of GCN models is that they efficiently leverage the Wordnet hierarchy to exploit structural bias.

The 1-hop and 2-hops test splits of the standard benchmark consist of the test classes closest to the training classes within the Wordnet hierarchy. This leads to test splits of very low structural ratio, similar to our toy example. For instance, the 1-hop test split has a structural ratio of 0.55. It is an example of structural bias even more extreme than our toy example, as its test classes are either child or parent classes of a training class. In the next section, we propose a new benchmark with maximal structural ratio in order to minimize structural bias.

6 New Benchmark

6.1 Proposed Benchmark

In this section, we briefly detail the semi-automated construction of a new benchmark designed to fix the different flaws of the current benchmark highlighted by our analysis. Due to space constraints, a number of minor considerations could not be properly presented in this paper. We detail these additional considerations in Appendix F of the supplementary material. Appendix F also provides additional details regarding the different parameters and the level of automation of each step of the construction process. Appendix G provides details on the code and data we release. Following Frome et al. [3], we use the ILSVRC dataset as the training set and propose a new test set. The selection of this new test set proceeds in two steps:

In a first step, we select a subset of candidate test classes from the remaining 20,845 Imagenet classes based on the statistics of their image samples and word labels: We first filter out semantic samples corresponding to rare words or to polysemous words of secondary meaning (Section 4.2). We then discard visual classes of low sample population and filter out ambiguous image samples using supervised learning (Section 4.3). The set of candidate test classes is the subset of visual classes for which sufficiently high-quality image and semantic samples were selected.

In a second step, we define the test split as a structurally consistent set of minimal structural bias: the test set was carefully selected so as to contain no overlap either among its own classes or with the training classes, in order to provide a structurally consistent test set for the generalized ZSL setting. This test set consists of 500 classes of maximal structural ratio, so as to minimize structural bias.

6.2 Evaluation

Model ZSL@1 ZSL@5 G-ZSL@1 G-ZSL@5
Trivial 1.2 3.9 0 0
CONSE [13] 10.65 25.10 0.12 19.34
DEVISE [3] 11.15 29.52 7.87 26.10
ESZSL [15] 13.54 32.61 4.59 25.53
GCN-6 [18] 9.58 27.19 4.81 23.35
GCN-2 [7] 14.09 35.12 4.96 30.35
ADGPM [7] 14.10 36.03 4.90 29.96
Table 2: Evaluation on the proposed benchmark. Accuracies in the generalized ZSL setting are reported as harmonic means over training and test accuracy, following [21].
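For clarity, the generalized ZSL numbers in Table 2 combine training- and test-class accuracy through the harmonic mean of [21], which can be written as:

def generalized_zsl_score(acc_train, acc_test):
    # Harmonic mean of the accuracies on seen (training) and unseen (test) classes.
    return 2 * acc_train * acc_test / (acc_train + acc_test)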

Table 2 presents the evaluation of a number of baseline models on the newly proposed benchmark. A few notable results stand out from this table: First, in contrast to the standard benchmark, CONSE [13] performs worse than DEVISE [3]. The relatively high accuracy reported by the CONSE model on the standard benchmark is most likely due to the fact that the word embeddings of test classes are statistically close to the word embeddings of their parent/children training classes, so that CONSE outputs closely fit the trivial similarity-based solution. We expect model-averaging methods to benefit the most from the structural bias in the standard benchmark.

Second, the impressive improvements reported by GCN-based models over linear baselines are significantly reduced, although GCN models still outperform linear baselines. This result corroborates the observation, made in Section 5, that GCN models tend to converge towards the results of linear baseline models for high structural ratios.

7 Conclusion and Discussion

ZSL has the potential to be of great practical impact for object recognition. However, as for any computer vision task, the availability of a high-quality benchmark is a prerequisite for progress. In this paper, we have shown major flaws in the standard generic object ZSL benchmark and proposed a new benchmark to address these flaws. More importantly, we introduced the notion of structural bias in ZSL datasets, which allows trivial solutions based on simple similarity matching in the semantic space. We encourage researchers to evaluate their past and future models on our proposed benchmark. It seems likely that sound ideas have been discarded because of their poor performance relative to baseline models that benefited most from structural bias. Some of these ideas may be worth revisiting today.

Finally, we believe that a deeper discussion on the goals and the definition of ZSL is still very much needed. There is a risk in developing complex models to address poorly characterized problems: mathematical complexity can act as a smokescreen that obfuscates the real problems and key challenges behind ZSL. Instead, we believe that practical considerations grounded in common sense are still very much needed at this stage of ZSL research. The identification of structural bias is a first step towards a sound characterization of ZSL problems. One practical way to continue this discussion would be to investigate structural bias in other ZSL benchmarks.

Acknowledgements

This work was supported in part by JSPS KAKENHI (Grant No. JP17K00236 and No. JP17H01995).

References

  • [1] S. Changpinyo, W.-L. Chao, B. Gong, and F. Sha. Synthesized classifiers for zero-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5327–5336, 2016.
  • [2] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 248–255. IEEE, 2009.
  • [3] A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, T. Mikolov, et al. Devise: A deep visual-semantic embedding model. In Advances in Neural Information Processing Systems, pages 2121–2129, 2013.
  • [4] A. Jabri, A. Joulin, and L. van der Maaten. Revisiting visual question answering baselines. In European Conference on Computer Vision, pages 727–739. Springer, 2016.
  • [5] J. Johnson, B. Hariharan, L. van der Maaten, L. Fei-Fei, C. L. Zitnick, and R. Girshick. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1988–1997. IEEE, 2017.
  • [6] K. Kafle and C. Kanan. An analysis of visual question answering algorithms. In IEEE International Conference on Computer Vision (ICCV), pages 1983–1991. IEEE, 2017.
  • [7] M. Kampffmeyer, Y. Chen, X. Liang, H. Wang, Y. Zhang, and E. P. Xing. Rethinking knowledge graph propagation for zero-shot learning. arXiv preprint arXiv:1805.11724, 2018.
  • [8] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
  • [9] C. H. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 951–958. IEEE, 2009.
  • [10] H. Larochelle, D. Erhan, and Y. Bengio. Zero-data learning of new tasks. In AAAI, volume 1, page 3, 2008.
  • [11] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  • [12] G. A. Miller. Wordnet: A lexical database for English. Communications of the ACM, 38(11):39–41, 1995.
  • [13] M. Norouzi, T. Mikolov, S. Bengio, Y. Singer, J. Shlens, A. Frome, G. S. Corrado, and J. Dean. Zero-shot learning by convex combination of semantic embeddings. arXiv preprint arXiv:1312.5650, 2013.
  • [14] G. Patterson and J. Hays. SUN attribute database: Discovering, annotating, and recognizing scene attributes. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2751–2758. IEEE, 2012.
  • [15] B. Romera-Paredes and P. Torr. An embarrassingly simple approach to zero-shot learning. In International Conference on Machine Learning, pages 2152–2161, 2015.
  • [16] R. Socher, M. Ganjoo, C. D. Manning, and A. Ng. Zero-shot learning through cross-modal transfer. In Advances in Neural Information Processing Systems, pages 935–943, 2013.
  • [17] A. Torralba and A. A. Efros. Unbiased look at dataset bias. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1521–1528. IEEE, 2011.
  • [18] X. Wang, Y. Ye, and A. Gupta. Zero-shot recognition via semantic embeddings and knowledge graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6857–6866, 2018.
  • [19] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200. Technical report, California Institute of Technology, 2010.
  • [20] Y. Xian, Z. Akata, G. Sharma, Q. Nguyen, M. Hein, and B. Schiele. Latent embeddings for zero-shot classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 69–77, 2016.
  • [21] Y. Xian, B. Schiele, and Z. Akata. Zero-shot learning - the good, the bad and the ugly. arXiv preprint arXiv:1703.04394, 2017.

Appendix A. Structural flaws

Figure 10 reproduces Figure 1 to help the following discussion. This figure illustrates the configuration of the visual classes of the standard test splits within the Wordnet hierarchy. It should be noted that the 2-hops test split is a superset of the 1-hop split: it contains both the classes annotated in green and those annotated in blue. Similarly, the all test split is a superset of the 2-hops test split: it contains all blue, green and black classes. In the generalized ZSL setting, training classes (red) are also included in the test set.

Figure 10: Illustration of the standard test splits configuration.

Figures 11 and 12 illustrate the distribution of ZSL classification outputs on the 2-hops and all test splits respectively. On the 2-hops standard ZSL test set, 3.6% of test images were correctly classified by the Linear baseline model. This ratio corresponds to the percentage of images of Raptor correctly classified as Raptor, Buzzard images classified as Buzzard, etc. We refer to such classification outputs as True Positives (TP). These correspond to the accuracy reported by previous works on the standard benchmark. 2.3% of test images were classified as one of their parent classes: these correspond, for example, to images of Buzzard or Hawk classified as Raptor or Bird. These classification outputs are considered as errors by the current benchmark, while they are semantically correct: a Hawk is just a specific kind of Bird. 3.7% of test images were classified as one of their child classes: images of Raptor or Bird classified as Buzzard or Hawk. Such classification outputs are considered as errors by the current benchmark, whereas they may be either semantically correct or incorrect depending on the specific kind of bird in the image. We refer to both of these classification scenarios as False Negatives (FN). On the other hand, an image of Buzzard classified as Aegypiidae is an actual classification error: Buzzard and Aegypiidae are two distinct, mutually exclusive concepts. We refer to such classification errors as True Negatives (TN).

Figure 11: Distribution of classification outputs on the 2-hops test split.
Figure 12: Distribution of classification outputs on the all test split.

Table 3 summarizes the ratio of false negatives per true positive on each of the standard test splits. This table shows two interesting trends: First, as noted in the original paper, the ratio is much higher in the generalized ZSL setting. This is due to the fact that ZSL models tend to classify test images as their parent or child training classes. Second, the ratio tends to increase with larger test sets: the GCN model ratios are 2.3, 3.8 and 4.1 on the 1-hop, 2-hops and all test splits respectively. We believe this is due to larger overlaps within the Wordnet hierarchy: in the 1-hop test set, the only FN class for Cathartid images is Raptor, whereas in the 2-hops test set, Buzzard, Condor, Raptor and Bird are all FN classification outputs for Cathartid images. This trend, however, does not hold for the Linear model in the generalized ZSL setting.

Model Task | 1-hop: TP FN ratio | 2-hops: TP FN ratio | all: TP FN ratio
Linear ZSL | 14.7 10.2 0.7 | 3.6 6.0 1.7 | 1.6 2.8 1.7
Linear GZSL | 1.9 39.2 20.6 | 0.8 10.23 12.7 | 0.4 4.27 10.7
GCN ZSL | 21.8 18.6 0.8 | 4.4 7.6 1.7 | 1.8 3.6 2.0
GCN GZSL | 10.3 34.2 2.3 | 2.6 10.0 3.8 | 1.1 4.5 4.1
Table 3: Ratio of false negatives (FN) per true positives (TP) on the standard test splits.

Appendix B. Word embeddings

Occurrence frequency

We used the full English Wikipedia corpus to estimate the occurrence frequency of words: we scanned the Wikipedia corpus to count the occurrences of each visual class label (Hawk, Raptor, Aegypiidae, etc.). We use these occurrence counts to identify rare and common words. Figure 13 represents the cumulative distribution of visual class label occurrence counts.

Figure 13: Word occurrence cumulative distribution. The x axis is in logarithmic scale.

As shown in this figure, 24% of Imagenet class labels occur less than 10 times in the full Wikipedia corpus, and 45% occur less than 100 times. We found that fine-grained animal species, in particular, exhibit rare word labels (see Figure 10). We expect the word embeddings of such classes to provide noisy semantic representations, which is confirmed by the experiments presented in the original paper.

Polysemy

Figure 14: Illustration of two Wordnet concepts sharing the same label Queen.

Figure 18 illustrates several polysemous visual classes of the Imagenet dataset. To deal with polysemy, we want to assign a unique visual class to each polysemous word. To do so, we define a similarity score $sim(w, c)$ between a word $w$ and each of the visual classes $c$ associated with it. Given a polysemous word $w$ with associated visual classes $\mathcal{C}(w)$, we assign to $w$ the visual class of highest similarity score:

$$sim(w, c) = \cos\Big(e_w, \frac{1}{|\mathcal{N}(c)|} \sum_{c' \in \mathcal{N}(c)} e_{c'}\Big) \tag{6a}$$
$$c^*(w) = \arg\max_{c \in \mathcal{C}(w)} sim(w, c) \tag{6b}$$

As a similarity score (6a), we use the cosine similarity between the word embedding $e_w$ and the average word embedding of the parent and children concepts $\mathcal{N}(c)$ of the visual class $c$. Consider the example of the word Queen illustrated in Figure 14. There are 9 visual classes associated with this word in the Imagenet dataset. For brevity, we only consider the two visual classes shown in Figure 14. We compute the similarity score between Queen and each of these two visual classes following equation (6a), and assign the word Queen to the visual class of highest similarity score.
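A minimal sketch of this assignment, assuming a dictionary `emb` mapping lower-cased words to their embedding vectors and candidate visual classes given as NLTK WordNet synsets; the helper names are illustrative, not the exact code used to build the benchmark.

import numpy as np

def context_embedding(synset, emb):
    # Average embedding of the parent and children concepts of a visual class.
    neighbours = synset.hypernyms() + synset.hyponyms()
    words = [lemma.name().lower() for s in neighbours for lemma in s.lemmas()]
    vectors = [emb[w] for w in words if w in emb]
    return np.mean(vectors, axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def primary_class(word, candidate_synsets, emb):
    # Eq. (6b): assign the polysemous word to the candidate class of highest similarity.
    return max(candidate_synsets,
               key=lambda c: cosine(emb[word], context_embedding(c, emb)))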

Appendix C. Visual samples

Class-wise selection

Xian et al. [21] have proposed different test splits based on visual class sample populations. They conjecture that small-population classes correspond to fine-grained visual concepts, while large-population classes correspond to coarse-grained concepts. Manually inspecting each of these visual classes, we found many fine-grained concepts with large image sample populations, while many coarse-grained concepts have small sample populations. As a measure of the "granularity" of visual classes, we propose to use their distance to the root node within the Wordnet hierarchy: fine-grained classes are lower in the Wordnet hierarchy, hence further away from the root node than coarse-grained classes.

Figure 15: Average sample population per visual class with respect to their ”granularity”.

Figure 15 shows the average sample population of visual classes with respect to their distance to the root node in the Wordnet hierarchy. Visual classes within 6 hops of the root node have an average sample population of 490 images, while visual classes within 10 hops of the root node have an average sample population of 700 images. This figure shows no clear correlation between the granularity of visual classes and their sample population. In contrast, we found that many classes of low sample population instead correspond to visually ambiguous concepts, as illustrated in Figure 19. Hence, we remove classes of low sample population from our proposed benchmark to avoid visually ambiguous concepts.
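The granularity measure used in Figure 15 can be read directly from WordNet, e.g. as the minimum depth of a synset below the root; a one-line sketch assuming NLTK synsets:

def granularity(synset):
    # Number of hops from the WordNet root (entity.n.01) down to the class.
    return synset.min_depth()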

Sample-wise selection process

We define high-quality image samples as images that can be correctly classified by a supervised model on a non-ZSL classification task. We propose a simple procedure to select such image samples. Given a set of labeled samples $\mathcal{D} = \{(x_i, y_i)\}$, our procedure returns a subset $\mathcal{D}^* \subset \mathcal{D}$ of high-quality images. This selection process is formalized in Algorithm 1 and proceeds as follows:

First, we randomly sample subsets of 1000 visual classes from the full Imagenet dataset. Classes are sampled so as to contain no overlap in the Wordnet hierarchy: random splits do not contain both parent classes and their children.

Second, we randomly sample 250 images per class as training samples and use the remaining images as test samples. We fine-tune the last layer of a pretrained ResNet-50 on the set of training samples and evaluate the classification outputs of the model on the test samples.

We consider correctly classified image samples as high-quality test samples for our benchmark and discard the incorrectly classified images. We repeat this operation until all samples have been evaluated. The output of this procedure is a subset of high-quality image samples that were correctly classified by the model.

Input:
Imagenet dataset: D = {(x_i, y_i)}
ILSVRC-pretrained ResNet: M
Output:
High-quality Imagenet subset: D*
Init:
Initialize an empty error set and accepted set: E ← ∅, D* ← ∅
while D \ (D* ∪ E) ≠ ∅ do
    C ← SampleClasses(D \ (D* ∪ E), 1000)
    (D_tr, D_te) ← SampleSplit(C, 250)
    M' ← FineTune(M, D_tr)
    for (x, y) ∈ D_te do
        if M'(x) = y then
            D* ← D* ∪ {(x, y)}
        else
            E ← E ∪ {(x, y)}
        end if
    end for
end while
Algorithm 1: Sample-wise selection procedure. SampleClasses(D, n) is a sampling procedure that returns a subset C of n non-overlapping classes (i.e., no child class and its parent are both contained in C) from the class set of D. SampleSplit(C, n) is a sampling procedure that returns a training set D_tr of n training samples for each class in C, and the remaining samples as a test set D_te. FineTune(M, D_tr) is a procedure that fine-tunes the last layer of the model M on the input training set D_tr.
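A minimal PyTorch sketch of one round of Algorithm 1, assuming data loaders `loader_tr` and `loader_te` over the sampled 1000-class split, with the test loader additionally yielding sample identifiers; the hyper-parameters and the single training epoch are illustrative choices, not the exact settings used for the benchmark.

import torch
import torchvision

def selection_round(loader_tr, loader_te, num_classes=1000, device='cuda'):
    # Fine-tune only the last layer of an ILSVRC-pretrained ResNet-50 (FineTune in Algorithm 1).
    model = torchvision.models.resnet50(pretrained=True)
    for p in model.parameters():
        p.requires_grad = False
    model.fc = torch.nn.Linear(model.fc.in_features, num_classes)
    model = model.to(device)
    optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.01, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for x, y in loader_tr:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    # Keep correctly classified test samples, discard the rest.
    keep, drop = [], []
    model.eval()
    with torch.no_grad():
        for x, y, idx in loader_te:
            pred = model(x.to(device)).argmax(dim=1).cpu()
            for i, ok in zip(idx, (pred == y).tolist()):
                (keep if ok else drop).append(int(i))
    return keep, drop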

Appendix D. Standard benchmark summary

Figure 6 of the main paper summarizes the impact of visual, semantic and structural flaws on the top-1 accuracy of the 1-hop test split.

In these plots, the accuracy score (in green) corresponds to the model accuracy as reported by the standard benchmark. The model error (in orange) represents the classification errors remaining after removing ambiguous image samples, ambiguous semantic samples, and structural flaws. For example, the error rate of the GCN model in the generalized setting drops from 90% to 47%. In order to estimate the impact of each of the three factors individually, we ran a set of experiments with all possible configurations: with or without considering visual sample quality, semantic sample quality, and structural flaws. The estimated impact reported for each factor corresponds to the mean improvement in classification accuracy brought by this specific factor across all configurations of the other factors. Figures 16 and 17 of this supplementary material report a similar analysis of the top-1 accuracy on the 2-hops and all test splits respectively.

Figure 16: Estimation of the impact of different factors on the reported error of existing models on the 2-hops test split.
Figure 17: Estimation of the impact of different factors on the reported error of existing models on the all test split

Appendix E. Trivial solution

To apply the trivial solution of the toy example to the standard benchmark, we need a similarity mapping between training and test classes. To define this mapping, we use the shortest path length between nodes of the Wordnet hierarchy as a measure of distance $d$. We assign to each test class $c$ the semantic embedding of its closest training class, as formalized in equations (9a)-(9c):

$$c^* = \arg\min_{c' \in \mathcal{C}_{tr}} d(c, c') \tag{9a}$$
$$s_c = s_{c^*} \tag{9b}$$
$$s_c \leftarrow s_c + \epsilon, \quad \epsilon \sim \mathcal{N}(0, \sigma^2 I) \tag{9c}$$

However, this procedure leads to many test classes sharing the exact same semantic representation. Consider the example of the Cathartid and Aegypiidae classes in Figure 10: both classes are closest to the Vulture training class, so they share the same semantic vector. This leads to undefined behavior in the classification process. To differentiate between such classes, we add a small Gaussian noise to the semantic embeddings of test classes, following equation (9c).

The trivial solution can be implemented by any existing ZSL model using these semantic embeddings. The results reported in the original paper were computed using the Linear baseline.
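A sketch of how these trivial embeddings can be built, assuming a distance function d implementing the WordNet shortest path length between classes; the noise scale sigma is an illustrative value, not the one used in our experiments.

import numpy as np

def trivial_embeddings(train_classes, test_classes, d, sigma=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    n_tr = len(train_classes)
    S_tr = np.eye(n_tr)                          # local (one-hot) training class embeddings
    S_te = np.zeros((len(test_classes), n_tr))
    for i, c in enumerate(test_classes):
        closest = min(range(n_tr), key=lambda j: d(c, train_classes[j]))   # Eq. (9a)
        S_te[i] = S_tr[closest]                                            # Eq. (9b)
    S_te += sigma * rng.normal(size=S_te.shape)                            # Eq. (9c)
    return S_tr, S_te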

Appendix F. Dataset construction

Additional considerations

A number of additional factors were taken into consideration in the construction of our proposed benchmark. Due to space constraints, we could not include these considerations in the original paper, so we briefly present them in this Appendix.

Sample population: The number of images per test class in the standard benchmark's test splits is very uneven. Some test classes have as few as one sample image, while other classes have thousands of images. This leads to highly biased evaluations, as test classes of high sample population have a larger impact on the reported classification accuracy. We select 100 quality samples for each test class to ensure an evenly distributed test set.

Mutual exclusion: To prevent false negative classification outputs, test classes should be mutually exclusive. The hierarchical structure of Wordnet allows us to automatically create test splits that do not include both parent and child classes, so we can automatically remove such mutually non-exclusive classes from the test sets. However, this is not sufficient to guarantee the mutual exclusivity of test classes: the Imagenet dataset includes classes that overlap semantically without being directly related to each other in the Wordnet hierarchy, so that classifying an image of one such class as another would result in false negative outputs. The Wordnet hierarchy does not provide the logical constructs to automatically detect such instances, so we manually inspect the set of candidate test classes and remove them from the test set.

Scale considerations: We favor images of generic objects captured at the scale of human perception: we remove classes of images taken at microscopic scale (biological cells, bacteria, etc.) and classes of images at astronomical scale (supernovae).

Shape considerations: We favor objects that can be recognized by their characteristic shape and remove classes that require reading comprehension to identify. For example, we remove a number of medicines, such as Vitamin D and Vitamin C, as well as branded contents like Pepsi. Figure 20 illustrates a few such classes.

Dataset construction Summary

Table 4 summarizes the different steps of the creation of our benchmark. It details the level of automation, the different parameters involved in each step, as well as the approximate ratio of visual classes selected within each of these steps.

Step | Sub-step | Automation | Parameters | Filter ratio
Semantic | Frequency | Auto | min. 500 occurrences | 82%
Semantic | Polysemy | Auto | - | 91%
Visual | Class-wise | Auto | min. 300 samples | 63%
Visual | Sample-wise | Auto | 1000 classes, 250 samples/class | 100%
Visual | Shape | Manual | - | 95-99%
Visual | Scale | Manual | - | 99%
Structural | Hierarchy | Auto | - | 82%
Structural | Mutual Exclusivity | Manual | - | 95-99%
Table 4: Summary of the benchmark construction steps

The majority of the visual classes filtered out from our benchmark were automatically discarded based on their weak semantic features, their low sample population, or the structural constraint that prevents both parent and child classes from being included in the test set. Only the semantic and visual sample selection steps are parameterized: we select word labels occurring at least 500 times within the Wikipedia corpus to avoid rare words, and we only select visual classes with a sample population greater than 300 images.

Appendix G. Code & Data

The full Imagenet dataset, as considered in the all test split, consists of over 13 million images, which is very time-consuming to download and process. In contrast, small-scale benchmarks like AwA, CUB or SUN come with off-the-shelf semantic and visual features. Furthermore, they are orders of magnitude smaller than the Imagenet dataset, which makes it much easier for researchers to evaluate their models. As a result, many recent works on ZSL have only reported the evaluation of their models on small-scale benchmarks, instead of the standard Imagenet benchmark.

To encourage researchers working on ZSL to evaluate their models on our proposed benchmark, we release pretrained semantic and visual features (download instructions are available at https://github.com/TristHas/GOZ). The dataset is small enough to fit in the memory of most modern computer hardware, so it allows for fast prototyping and evaluation. To work on the original raw images, we provide the URLs of test images together with a Python script for download.

In addition to this data, we also provide code for visual class selection and fast manipulation of the Wordnet hierarchy. This should allow researchers interested in the investigation of different factors impacting ZSL accuracy to quickly build different test splits.

Figure 18: Examples of polysemous classes
Figure 19: Examples of low sample population, visually ambiguous classes.
Figure 20: Examples of manually discarded classes. Cell and Supernova correspond to microscopic and astronomic scale images. Vitamin D, Vitamin C, and Pepsi were discarded as they require reading comprehension to identify.