Context-Aware Zero-Shot Recognition

04/19/2019 · Ruotian Luo, et al. · ByteDance Inc., Seoul National University, Toyota Technological Institute at Chicago

We present a novel problem setting in zero-shot learning: zero-shot object recognition and detection in context. Contrary to traditional zero-shot learning methods, which simply infer unseen categories by transferring knowledge from objects of semantically similar seen categories, we aim to identify novel objects in an image surrounded by known objects using an inter-object relation prior. Specifically, we leverage the visual context and the geometric relationships between all pairs of objects in a single image and capture the information useful for inferring unseen categories. We integrate our context-aware zero-shot learning framework seamlessly into traditional zero-shot learning techniques using a Conditional Random Field (CRF). The proposed algorithm is evaluated on both zero-shot region classification and zero-shot detection tasks. The results on the Visual Genome (VG) dataset show that our model significantly boosts performance with the additional visual context compared to traditional methods.


1 Introduction

Supervised object recognition has achieved substantial performance improvements thanks to the advances of deep convolutional neural networks in the last few years [36, 35, 19, 16]. Large-scale datasets with comprehensive annotations, e.g., COCO [28], enable deep neural networks to learn semantic knowledge of objects within a predefined set of classes. However, it is impractical to obtain rich annotations for every class in the world, so it is important to develop models that can generalize to new categories without extra annotations. On the other hand, human beings have the capability to understand unseen object categories using external knowledge such as language descriptions and object relationships. The problem of inferring objects in unseen categories is referred to as zero-shot object recognition in recent literature [13, 43].

In the absence of direct supervision, other sources of information such as semantic embeddings [32], knowledge graphs [42, 37], and attributes [39, 2] are often employed to infer the appearance of novel object categories through knowledge transfer from seen categories. The assumption behind these approaches is that if an unseen category is semantically close to a seen category, objects of the two categories should be visually similar.

Figure 1: An example of zero-shot recognition with context information. It contains two seen objects (person and dog) and one unseen object (frisbee). The prior knowledge of relationships between seen and unseen categories provides cues to resolve the category of the unseen object.

Besides inferring novel object categories using visual similarity, humans often capture the information of an object from the scene context. For example, even if we do not know the class label of the red disk-like object in the middle of the image shown in Figure 1, it is possible to guess its category with limited visual cues by recognizing two other objects in the neighborhood, a person and a dog, and using the prior knowledge that a person and a dog potentially play with such an object together. Suppose that a frisbee is known to be such a kind of object; then we can infer that the object is a frisbee even without having seen one before. In this scenario, the interaction between multiple objects, e.g., person, dog, and frisbee, provides additional clues to recognize the novel object (a frisbee in this case); note that the external knowledge about the object relationships (a person and a dog can play with a frisbee) is required for unseen object recognition.

Motivated by this intuition, we propose an algorithm for zero-shot image recognition in context. Different from traditional methods that infer each unseen object independently, we aim to recognize novel objects in the visual context, i.e., by leveraging the relationships among the objects shown in an image. The relationship information is defined by a relationship knowledge graph in our framework, and it is more straightforward to construct a knowledge graph than to collect dense annotations on images. In our framework, a Conditional Random Field (CRF) is employed to jointly reason over local context information as well as the relationship graph prior. Our algorithm is evaluated on the Visual Genome dataset [22], which provides a large number of object categories and diverse object relations; our model based on the proposed context knowledge representation demonstrates a clear advantage when applied to various existing methods for zero-shot recognition. We believe the proposed topic will foster more interesting work in the domain of zero-shot recognition.

The main contributions of this work are as follows:

  • We introduce a new framework of zero-shot learning in computer vision, referred to as zero-shot recognition in context, where unseen object classes are identified by their relations to the other objects shown in the same image.

  • We propose a novel model based on deep neural networks and CRF, which learns to leverage object relationship knowledge to recognize unseen object classes.

  • The proposed algorithm achieves significant improvement over existing methods across various models and settings that ignore visual context.

The rest of the paper is organized as follows. Section 2 reviews existing zero-shot learning techniques for visual recognition. Sections 3 and 4 describe our main algorithm and its implementation details, respectively. Experimental results are discussed in Section 5, and we conclude the paper in Section 6.

2 Related work

This section presents the prior work related to ours, including zero-shot learning, context-aware recognition, and knowledge graphs.

2.1 Zero-shot learning

A wide range of external knowledge has been explored for zero-shot learning. Early zero-shot classification approaches adopt object attributes as a proxy to learn visual representations of unseen categories [39, 2, 3]. Semantic embeddings are learned from large text corpora and then utilized to bridge seen and unseen categories [12, 32]. Combinations of attributes and word embeddings are employed to learn classifiers of unseen categories by taking linear combinations of synthetic base classifiers [2, 3], and text descriptions are also incorporated later to predict classifier weights [26]. Recent works [42, 20] apply a Graph Convolutional Network (GCN) [10] over the WordNet knowledge graph to propagate classifier weights from seen to unseen categories. More detailed surveys can be found in [13, 43].

In addition to these knowledge resources, we propose to exploit the object relationship knowledge in the visual context to infer unseen categories. To the best of our knowledge, this is the first work to consider pairwise object relations for zero-shot visual recognition. The proposed module can be easily incorporated into existing zero-shot image classification models, leading to performance improvement.

In addition to zero-shot recognition, the zero-shot object detection (ZSD) task has also been studied, which aims to localize individual objects of categories that are never seen during training [1, 40, 34, 46, 7]. Among these approaches, [46] focuses on generating object proposals for unseen categories, while [1] trains a background-aware detector to alleviate the corruption of the “background” class with unseen classes. Also, [34] proposes a novel loss function to reduce noise in semantic features. Although these methods handle object classification and localization jointly, none of them have attempted to incorporate context information in the scene.

2.2 Context-aware detection

Context information was used to assist object detection before the deep learning era [14, 9, 11, 15, 8]. Deep learning approaches such as Faster R-CNN [36] allow a region feature to look beyond its own bounding box via the large receptive field. Object relationships and visual context are also utilized to improve object detection. For example, [44, 27] show that the joint learning of scene graph generation and object detection improves detection results, while [6, 19] perform message passing between object proposals to refine detection results. A common-sense knowledge graph is used for weakly-supervised object detection [23]; for the categories without localization annotations, the common-sense knowledge graph is employed to infer their locations, which are then used as training data.

Although context-aware methods have been studied for object detection for a while, they are mostly designed for the fully supervised setting and thus cannot be directly applied to the zero-shot setting. For example, [9] uses the occurrence frequency of object pairs, which is not available for unseen categories, and [44] uses densely annotated scene graphs of all object categories to improve detection accuracy. In this paper, we explore porting the context-aware idea to the zero-shot setting.

2.3 Knowledge graphs

Knowledge graphs have been applied to various vision tasks including image classification [30, 25], zero-shot learning [38, 37, 42], visual reasoning [29, 47, 5], and visual navigation [45]. Graph-based neural networks often propagate information over the knowledge graph [30, 25, 42, 5]. Following [30, 5, 45], we construct the relationship knowledge graph used in our method in a similar way.

Figure 2: The overall pipeline of our algorithm. First, features for individual objects as well as object pairs are extracted from the image. An instance-level zero-shot inference module is applied to the individual features to generate unary potentials. A relationship inference module takes the pairwise features and the relationship knowledge graph to generate pairwise potentials. Finally, the most likely object labels are inferred from the CRF constructed from the generated potentials.

3 Context-aware zero-shot recognition

3.1 Problem formulation

The existing zero-shot recognition techniques [12, 24] mostly focus on classifying objects independently with no consideration of potentially interacting objects. To facilitate context-aware inference for zero-shot recognition, we propose to classify all the object instances, both seen and unseen, in an image. We first assume that ground-truth bounding box annotations are given and propose to recognize objects of the unseen classes. After that, we also discuss zero-shot object detection, where ground-truth bounding boxes are not available at test time.

Our model takes an image and a set of bounding boxes (regions) as its inputs, and produces a class label out of the label set $\mathcal{Y}$ for each region. Under the zero-shot recognition setting, the label set is split into two subsets, $\mathcal{Y}_s$ for seen categories and $\mathcal{Y}_u$ for unseen categories, where the two sets satisfy $\mathcal{Y}_s \cup \mathcal{Y}_u = \mathcal{Y}$ and $\mathcal{Y}_s \cap \mathcal{Y}_u = \emptyset$. The object labels in $\mathcal{Y}_s$ are available during training while the ones in $\mathcal{Y}_u$ are not. The model needs to classify regions of both seen and unseen categories at test time.

Some existing zero-shot recognition approaches have utilized a knowledge graph [42] for transfer learning from seen to unseen categories, where an object in an unseen category is recognized through cues from the related seen categories in the knowledge graph. The edges in such a knowledge graph typically represent visual similarity or hierarchy. In our formulation, a relationship knowledge graph has edges representing ordered pairwise relationships in the form of ⟨subject, predicate, object⟩, which indicate the possible interactions between a pair of objects in an image. A directed edge denotes a specific predicate (relation) in the relationship given by a tuple ⟨subject, predicate, object⟩. We may have multiple relations for the same pair of categories; in other words, there can be multiple relationships defined on an ordered pair of categories. Given a set of relations $\mathcal{R}$, the relationship graph is defined by $\mathcal{G} = (\mathcal{Y}, \mathcal{E})$, where $\mathcal{Y}$ denotes the set of classes and $\mathcal{E} \subseteq \mathcal{Y} \times \mathcal{R} \times \mathcal{Y}$ is the set of directed edges representing relations between all pairs of a subject class and an object class. Note that $|\mathcal{R}|$ is the number of all possible predicates between an ordered pair of classes.

3.2 Our framework

Our framework is illustrated in Figure 2. From an image with localized objects, we first extract features from the individual objects and the ordered object pairs. We then apply an instance-level zero-shot inference module to the individual object features, and obtain a probability distribution of each object over all object categories. The individual class likelihoods are used as unary potentials in the unified CRF model. A relationship inference module takes the pairwise features as input and computes the corresponding pairwise potentials using the relationship graph.

Specifically, let $b_i$ and $y_i$ ($i = 1, \ldots, n$) be an image region and the class assigned to it, respectively. Our CRF inference model is given by

$P(y_1, \ldots, y_n \mid I) \propto \exp\big( \sum_i \psi_u(y_i \mid b_i) + \lambda \sum_{i \neq j} \psi_p(y_i, y_j \mid b_i, b_j) \big),$   (1)

where the unary potential $\psi_u$ comes from the instance-level zero-shot inference module, the pairwise potential $\psi_p$ is obtained from the relationship inference module, and $\lambda$ is a weight parameter balancing the unary and pairwise potentials.

The final prediction is generated through MAP inference on the CRF model given by Eq. (1). We call the whole procedure context-aware zero-shot inference. Similar techniques can be found in context-aware object detection [9, 14]. However, we claim that our algorithm has sufficient novelty because we introduce a new framework of zero-shot learning with context and design unary and pairwise potentials specialized for the zero-shot setting in the CRF. We hereafter use $\psi_u(y_i)$ and $\psi_p(y_i, y_j)$ as abbreviations for $\psi_u(y_i \mid b_i)$ and $\psi_p(y_i, y_j \mid b_i, b_j)$, respectively. We discuss the details of each component of the CRF next.
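As a concrete illustration, the following sketch (our own, not the authors' released code) scores a candidate labeling of all regions under Eq. (1) up to the log-partition constant; the tensor shapes and the name `crf_score` are illustrative assumptions, with precomputed unary potentials of shape (n, C) and pairwise potentials of shape (n, n, C, C).

```python
import torch

def crf_score(labels, psi_u, psi_p, lam=1.0):
    """Score a labeling y = (y_1, ..., y_n) under Eq. (1), up to the log-partition.

    labels: LongTensor of shape (n,), one class index per region.
    psi_u:  unary potentials, shape (n, C).
    psi_p:  pairwise potentials, shape (n, n, C, C); psi_p[i, j, a, b] is the
            potential of region i taking class a and region j taking class b.
    lam:    weight balancing unary and pairwise terms (lambda in Eq. (1)).
    """
    n = labels.shape[0]
    unary = psi_u[torch.arange(n), labels].sum()
    pairwise = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                pairwise = pairwise + psi_p[i, j, labels[i], labels[j]]
    return unary + lam * pairwise

# Example: 3 regions, 5 classes, random potentials.
n, C = 3, 5
psi_u = torch.randn(n, C)
psi_p = torch.randn(n, n, C, C)
y = torch.tensor([0, 2, 4])
print(crf_score(y, psi_u, psi_p, lam=0.5))
```

MAP inference then amounts to searching for the labeling with the highest score, which is approximated by the mean field procedure of Section 3.2.3.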

3.2.1 Instance-level zero-shot inference

We use a modified version of the Fast R-CNN framework [16] to extract features from individual objects. The input image and the bounding boxes are passed through a network composed of convolutional layers and an RoIAlign [17] layer. The network outputs a region feature $f_i$ for each region $b_i$, which is further forwarded to a fully connected layer to produce the probability of each class, $p(y_i \mid b_i) = \mathrm{softmax}(W f_i)$, where $W$ is a weight matrix. The unary potential of the CRF is then given by

$\psi_u(y_i \mid b_i) = \log p(y_i \mid b_i).$   (2)

Although it is straightforward to learn the network parameters, including $W$, in the fully supervised setting, we can train the model only on the seen categories and obtain the corresponding block $W_s$. To handle the classification of unseen-category objects, we have to estimate the remaining block $W_u$ as well and construct the full parameter matrix for prediction. There are several existing approaches [2, 26, 4] to estimate the parameters of the unseen categories from external knowledge. We evaluate the performance of our context-aware zero-shot learning algorithm with several such parameter estimation techniques for unseen categories in Section 5.
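A minimal sketch of the unary module under this reading is given below; the names `W_seen`/`W_unseen` and the use of log-probabilities as potentials follow our reconstruction of Eq. (2), and the way $W_u$ is estimated (word embeddings, CONSE, GCN, SYNC) is plugged in externally.

```python
import torch
import torch.nn.functional as F

def unary_potentials(region_feats, W_seen, W_unseen):
    """Compute unary potentials psi_u(y_i | b_i) = log p(y_i | b_i) for every region.

    region_feats: (n, d) RoI features from the Fast R-CNN backbone.
    W_seen:       (|Ys|, d) classifier weights learned on seen categories.
    W_unseen:     (|Yu|, d) weights estimated by an external zero-shot method
                  (e.g., word embeddings, CONSE, GCN, or SYNC).
    """
    W = torch.cat([W_seen, W_unseen], dim=0)   # full matrix over Ys followed by Yu
    logits = region_feats @ W.t()              # (n, |Ys| + |Yu|)
    return F.log_softmax(logits, dim=-1)       # log-probabilities used as potentials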

3.2.2 Relationship inference with relationship graph

The pairwise potential of the CRF model is given by a relationship inference module. It takes a pair of regions as its input and produces a relation potential $r_k(b_i, b_j)$ for every relation $k \in \mathcal{R}$, which indicates the likelihood of that relation holding between the two bounding boxes. The pairwise potential of the CRF is then formulated as

$\psi_p(y_i, y_j \mid b_i, b_j) = \sum_{k \in \mathcal{R}} \mathbb{1}[\langle y_i, k, y_j \rangle \in \mathcal{E}] \, r_k(b_i, b_j),$   (3)

where $\mathbb{1}[\cdot]$ is an indicator function of whether the tuple $\langle y_i, k, y_j \rangle$ exists in the relationship graph. Intuitively, a label assignment is encouraged when the possible relations between the labels have large likelihoods.

The relationship inference module estimates the relation potentials from a geometric configuration feature using an embedding function followed by a two-layer multilayer perceptron (MLP) as

$r(b_i, b_j) = \mathrm{MLP}\big( \phi( g(b_i, b_j) ) \big),$   (4)

where $g(b_i, b_j)$ is the relative geometry configuration feature of the two objects corresponding to $b_i$ and $b_j$ based on [19], and $\phi$ embeds its input onto a high-dimensional space by computing cosine and sine functions of different wavelengths [41]. Formally, the translation- and scale-invariant feature is given by

$g(b_i, b_j) = \Big( \log\frac{|x_i - x_j|}{w_i},\; \log\frac{|y_i - y_j|}{h_i},\; \log\frac{w_j}{w_i},\; \log\frac{h_j}{h_i} \Big),$   (5)

where $(x_i, y_i, w_i, h_i)$ represents the location and size of $b_i$.
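The following is a hedged sketch of the relationship inference module as we reconstruct Eqs. (4)-(5): the relative-geometry feature of [19], a sinusoidal embedding in the spirit of [41], and a two-layer MLP producing one potential per relation. The embedding dimension, hidden size, and wavelength base are illustrative choices, not the authors' settings.

```python
import torch
import torch.nn as nn

def geometry_feature(box_i, box_j):
    """Relative geometry feature of Eq. (5); boxes are (x, y, w, h) tensors."""
    xi, yi, wi, hi = box_i
    xj, yj, wj, hj = box_j
    return torch.stack([
        torch.log((xi - xj).abs().clamp(min=1e-3) / wi),
        torch.log((yi - yj).abs().clamp(min=1e-3) / hi),
        torch.log(wj / wi),
        torch.log(hj / hi),
    ])

def sinusoidal_embed(g, dim=64, wave=1000.0):
    """Embed each geometry component with sines/cosines of different wavelengths."""
    freqs = wave ** (torch.arange(dim // 2, dtype=torch.float32) / (dim // 2))
    angles = g.unsqueeze(-1) / freqs                                  # (4, dim/2)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten()  # (4 * dim,)

class RelationInference(nn.Module):
    """Two-layer MLP mapping the embedded geometry feature to one potential per relation."""
    def __init__(self, num_relations, embed_dim=64, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4 * embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_relations))

    def forward(self, box_i, box_j):
        g = geometry_feature(box_i, box_j)
        return self.mlp(sinusoidal_embed(g))   # r_k(b_i, b_j) for every relation k
```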

To train the MLP in Eq. (4), we design a loss function based on the pseudo-likelihood, which is the likelihood of a region's label given the ground-truth labels of the other regions. Maximizing this likelihood increases the potentials of true label pairs while suppressing the wrong ones. Let $y_i^*$ be the ground-truth label of $b_i$. The training objective is to minimize the following loss function:

$\mathcal{L} = - \sum_i \log P(y_i = y_i^* \mid \mathbf{y}_{-i}^*),$   (6)

where $\mathbf{y}_{-i}^*$ denotes the ground-truth labels of the bounding boxes other than $b_i$ and

$P(y_i \mid \mathbf{y}_{-i}^*) \propto \exp\Big( \psi_u(y_i) + \lambda \sum_{j \neq i} \big( \psi_p(y_i, y_j^*) + \psi_p(y_j^*, y_i) \big) \Big).$   (7)

Note that the relation potentials $r_k$ are learned implicitly by optimizing this loss. No ground-truth annotation about relationships is used in training.
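A sketch of this pseudo-likelihood objective, under our reconstruction of Eqs. (6)-(7), is shown below; the `use_unary` flag mirrors the practical variant mentioned in Section 4.3, where the unary term of Eq. (7) is omitted during training.

```python
import torch
import torch.nn.functional as F

def pseudo_likelihood_loss(psi_u, psi_p, gt_labels, lam=1.0, use_unary=True):
    """Negative pseudo-log-likelihood of Eq. (6).

    psi_u:     (n, C) unary potentials.
    psi_p:     (n, n, C, C) pairwise potentials.
    gt_labels: (n,) ground-truth class indices.
    """
    n, C = psi_u.shape
    loss = 0.0
    for i in range(n):
        # Conditional score of every candidate label y_i given the ground-truth
        # labels of all the other regions (Eq. (7)).
        score = psi_u[i] if use_unary else torch.zeros(C)
        for j in range(n):
            if j != i:
                score = score + lam * (psi_p[i, j, :, gt_labels[j]] +
                                       psi_p[j, i, gt_labels[j], :])
        loss = loss + F.cross_entropy(score.unsqueeze(0), gt_labels[i:i + 1])
    return loss / n
```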

3.2.3 Context-aware zero-shot inference

The final step is to find the assignment $\mathbf{y}^*$ that maximizes $P(\mathbf{y} \mid I)$ given the trained CRF defined by Eq. (1). We adopt mean field inference [21] for efficient approximation. A distribution $Q(\mathbf{y})$ is used to approximate $P(\mathbf{y} \mid I)$, where $Q$ is given by the product of independent marginals,

$Q(\mathbf{y}) = \prod_i Q_i(y_i).$   (8)

To obtain a good approximation of $P$, we minimize the KL-divergence $\mathrm{KL}(Q \,\|\, P)$ while constraining the $Q_i$ to be valid distributions. The optimal $Q$ is obtained by iteratively updating each $Q_i$ using the following rule:

$Q_i(y_i) = \frac{1}{Z_i} \exp\Big( \psi_u(y_i) + \lambda \sum_{j \neq i} \sum_{y_j} Q_j(y_j) \big( \psi_p(y_i, y_j) + \psi_p(y_j, y_i) \big) \Big),$   (9)

where $Z_i$ is a partition function.

The pairwise potential defined in Eq. (3) involves a $|\mathcal{Y}| \times |\mathcal{Y}|$ matrix for every pair of regions. Since this may incur a huge computational overhead when the numbers of categories and regions are large, we perform pruning for acceleration: for each region, we select only the $k$ categories with the top probabilities in terms of the unary potential. In this way, our method can be viewed as a cascade algorithm; the instance-level inference serves as the first layer of the cascade, and the context-aware inference refines the results using the relationship information.
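The sketch below illustrates the mean-field update of Eq. (9) combined with this top-k pruning; it is a minimal dense implementation under assumed interfaces (in particular, the callable `psi_p_fn` and the number of iterations are our own conventions, not the authors' code).

```python
import torch

def mean_field_topk(psi_u, psi_p_fn, lam=1.0, k=5, iters=10):
    """Approximate MAP inference via mean field on top-k pruned label sets.

    psi_u:    (n, C) unary potentials.
    psi_p_fn: callable (i, j, cand_i, cand_j) -> (k, k) pairwise potentials
              restricted to the candidate labels of regions i and j.
    Returns the predicted label index for every region.
    """
    n, C = psi_u.shape
    cand = psi_u.topk(k, dim=-1).indices               # (n, k) candidate labels per region
    u = torch.gather(psi_u, 1, cand)                   # pruned unary potentials
    Q = torch.softmax(u, dim=-1)                       # initialize marginals Q_i
    for _ in range(iters):
        for i in range(n):
            msg = u[i].clone()
            for j in range(n):
                if j == i:
                    continue
                pij = psi_p_fn(i, j, cand[i], cand[j])  # (k, k), region i as subject
                pji = psi_p_fn(j, i, cand[j], cand[i])  # (k, k), region i as object
                msg = msg + lam * (pij @ Q[j] + pji.t() @ Q[j])
            Q[i] = torch.softmax(msg, dim=-1)           # Eq. (9) up to the partition Z_i
    return cand[torch.arange(n), Q.argmax(dim=-1)]
```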

4 Implementation

This section discusses more implementation-oriented details of our zero-shot recognition algorithm.

4.1 Knowledge graph

We extract our relationship knowledge graph from the Visual Genome dataset, similarly to [30, 5, 45]. We first select the 20 most frequent relations as the relation set $\mathcal{R}$ and collect all the subject-object relationships that (1) occur more than 20 times in the dataset and (2) have their relation defined in $\mathcal{R}$. The purpose of this process is to obtain a knowledge graph with common relationships. The relation set includes ‘on’, ‘in’, ‘holding’, ‘wearing’, etc. We will release our code, pretrained model, and this relationship knowledge graph once the paper is accepted.
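A sketch of how such a graph could be distilled from VG-style relationship annotations (category-level ⟨subject, predicate, object⟩ triples) is shown below; the function name and the toy data are hypothetical, while the thresholds follow the text (20 most frequent predicates, pairs occurring more than 20 times).

```python
from collections import Counter

def build_relationship_graph(triples, num_relations=20, min_count=20):
    """Build the relationship knowledge graph from (subject, predicate, object) triples.

    triples: iterable of (subject_category, predicate, object_category) strings
             aggregated over the training annotations.
    Returns (relation_set, edges), where edges contains the triples whose predicate
    is among the `num_relations` most frequent ones and whose (subj, pred, obj)
    combination occurs more than `min_count` times.
    """
    triples = list(triples)
    pred_counts = Counter(p for _, p, _ in triples)
    relation_set = {p for p, _ in pred_counts.most_common(num_relations)}
    triple_counts = Counter(t for t in triples if t[1] in relation_set)
    edges = {t for t, c in triple_counts.items() if c > min_count}
    return relation_set, edges

# Toy usage with hypothetical triples:
demo = [("person", "holding", "frisbee")] * 25 + [("dog", "near", "frisbee")] * 5
print(build_relationship_graph(demo, num_relations=2, min_count=20))
```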

4.2 Model

We build our model based on a PyTorch Mask/Faster R-CNN implementation [17] with RoIAlign [17] (https://github.com/roytseng-tw/Detectron.pytorch), where the region proposal network and the bounding box regression branch are removed because ground-truth object regions are given. We use ResNet-50 [18] as our backbone model. Each image is resized so that its shorter side is 600 pixels.

4.3 Training

We use stochastic gradient descent with momentum to optimize all the modules. The instance-level zero-shot inference and relationship inference modules are trained separately in two stages. In the first stage, we train the instance-level zero-shot module on seen categories for 100K iterations. The model is fine-tuned from a model pretrained on ImageNet classification. The learning rate is initialized to 0.005 and decayed after 60K and 80K iterations. After training on the seen categories, external algorithms are applied to transfer the knowledge to unseen categories. In the second stage, we train the relationship inference module for another 60K iterations with all the other modules fixed. To facilitate training, we omit the unary potentials in Eq. (7) in practice. The learning rate is again initialized to 0.005 and decayed after 20K and 40K iterations. For all the modules, weight decay is applied and the momentum is set to 0.9. The batch size is set to 8, and the batch normalization layers are fixed during training.

5 Experiments and results

Classic/unseen Generalized/unseen Classic/seen Generalized/seen HM (Generalized)
per-cls per-ins per-cls per-ins per-cls per-ins per-cls per-ins per-cls per-ins
WE 18.9 25.9 3.7 3.7 35.6 57.9 33.8 56.1 6.7 6.9
WE+Context 19.5 28.5 4.1 10.0 31.1 57.4 29.2 55.8 7.2 17.0
CONSE 19.9 27.7 0.1 0.6 39.8 31.7 39.8 31.7 0.2 1.2
CONSE+Context 19.6 30.2 5.8 20.7 29.6 38.8 25.7 35.0 9.5 26.0
GCN 19.5 28.2 11.0 18.0 39.9 31.0 31.3 22.4 16.3 20.0
GCN+Context 21.2 33.1 12.7 26.7 41.3 42.4 32.2 35.0 18.2 30.3
SYNC 25.8 33.6 12.4 17.0 39.9 31.0 34.2 24.4 18.2 20.0
SYNC+Context 26.8 39.3 13.8 26.5 41.5 39.4 34.5 31.7 19.7 28.9
Table 1: Results on the Visual Genome dataset. Each group includes two rows: the upper one is a baseline method from the zero-shot image classification literature, and the lower one is the result of the same model combined with our context-aware inference. HM denotes the harmonic mean of the accuracies on $\mathcal{Y}_s$ and $\mathcal{Y}_u$.

5.1 Task

We mainly evaluate our system on the zero-shot region classification task. We provide ground-truth object locations for both training and testing, which enables us to decouple the recognition error from the mistakes of other modules, including proposal generation, and to diagnose clearly how much context helps zero-shot recognition at the object level. As a natural extension of our work, we also evaluate on the zero-shot detection task. In this case, we feed region proposals obtained from EdgeBoxes [48] instead of ground-truth bounding boxes as input at test time.

5.2 Dataset

We evaluate our method on the Visual Genome (VG) dataset [22], which contains 108K images with 35 objects and 21 relationships between objects per image on average. VG contains two subsets of images, part-1 with around 60K images and part-2 with around 40K images. For our experiment, only a subset of the categories is considered and the annotated relationships are not directly used.

We use the same seen and unseen category split as [1]. 608 categories are considered for classification; among these, 478 are seen categories and 130 are unseen categories. Part-1 of the VG dataset is used for training, and randomly sampled images from part-2 are used for testing. This results in 54,913 training images and 7,788 test images. (The training images still include instances of unseen categories, because images with only seen categories are too few; however, we only use the annotations of seen categories.) The relationship graph in this dataset has 6,396 edges.

5.3 Metrics and settings

We employ classification accuracy (AC) for evaluation, where the results are aggregated in two ways: “per-class” computes the accuracy for each class and then averages over all classes, while “per-instance” is the average accuracy over all regions. Intuitively, the “per-class” metric gives more weight to instances from rare classes than the “per-instance” one.
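The difference between the two aggregation schemes can be made concrete with a short snippet (our own illustration, not part of the evaluation code):

```python
import numpy as np

def per_instance_accuracy(pred, gt):
    """Average accuracy over all regions."""
    pred, gt = np.asarray(pred), np.asarray(gt)
    return (pred == gt).mean()

def per_class_accuracy(pred, gt):
    """Accuracy computed per class, then averaged over classes."""
    pred, gt = np.asarray(pred), np.asarray(gt)
    classes = np.unique(gt)
    return np.mean([(pred[gt == c] == c).mean() for c in classes])

# A rare class weighs as much as a frequent one under the per-class metric:
gt   = [0, 0, 0, 0, 1]
pred = [0, 0, 0, 0, 0]
print(per_instance_accuracy(pred, gt))  # 0.8
print(per_class_accuracy(pred, gt))     # 0.5
```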

The proposed algorithm is evaluated in both the classic and the generalized zero-shot settings. In the classic setting, the model is only asked to predict among the unseen categories at test time, while under the generalized setting it needs to consider both seen and unseen categories. The generalized setting is more challenging because the model has to distinguish between seen and unseen categories.

5.4 Baseline methods

We compare our method with several baselines. Note that all baselines treat each object in an image as a separate image, thus utilizing only instance-level features for inference.

Word Embedding (WE)

As described in Section 3.2.1, classification is performed by a dot product between a region feature and a class weight vector. In this method, the weight vector is set to the GloVe [33] word embedding of each category. Note that the same word embeddings are used in the other settings.

CONSE [32]

CONSE first trains classifiers on $\mathcal{Y}_s$ with full supervision. At test time, each instance of an unseen class is embedded onto the word embedding space by a weighted sum of the seen-category embeddings, where the weights are given by the classifier scores on $\mathcal{Y}_s$. Then the image is assigned to the closest unseen (and seen, in the generalized setting) class in the word embedding space.
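A hedged sketch of this prediction rule is given below; the number of seen classes used in the convex combination (`top_t`) is an assumption, as CONSE typically restricts the combination to the highest-scoring seen classes.

```python
import torch
import torch.nn.functional as F

def conse_predict(seen_probs, seen_emb, cand_emb, top_t=10):
    """CONSE-style prediction for one region.

    seen_probs: (|Ys|,) softmax scores of the seen-class classifier.
    seen_emb:   (|Ys|, d) word embeddings of seen categories.
    cand_emb:   (M, d) embeddings of the candidate categories (unseen only in the
                classic setting, seen + unseen in the generalized setting).
    """
    top_t = min(top_t, seen_probs.numel())
    w, idx = seen_probs.topk(top_t)
    w = w / w.sum()                                                # convex weights
    region_emb = (w.unsqueeze(-1) * seen_emb[idx]).sum(dim=0)      # weighted embedding
    sims = F.cosine_similarity(region_emb.unsqueeze(0), cand_emb)  # (M,)
    return sims.argmax().item()
```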

GCN [42]

Similar to CONSE, GCN first trains classifiers on $\mathcal{Y}_s$. Then it learns a GCN model to predict the classifier weights for $\mathcal{Y}_u$ from the model for the seen classes. The GCN takes the word embeddings of all the seen and unseen categories and the classifier weights of $\mathcal{Y}_s$ as its inputs, and learns the global classifier weights by regression. In the end, the predicted classifier weights are used in the inference module for both seen and unseen categories. We use a two-layer GCN with LeakyReLU as the activation function. Dropout is applied in the intermediate layer and L2 normalization is applied at the output of the network. Following [42], we use WordNet [31] to build the graph. Each category in VG has its corresponding synset and is represented as a node in the graph. We also add common ancestor nodes of the synsets in VG to connect them in the graph. In total, 1,228 nodes are included in the graph.

SYNC [2, 3]

This approach aligns the semantic and visual manifolds via the use of phantom classes. The weights of the phantom classifiers are trained to minimize the distortion error

$\min_{V} \; \| W_s - S_s V \|_F^2,$   (10)

where $S_s$ is the semantic similarity matrix between seen categories and phantom classes and $V$ contains the model parameters of the phantom classifiers. The classifier weights for $\mathcal{Y}_u$ are then given by convex combinations of the phantom classifiers as

$W_u = S_u V,$   (11)

where $S_u$ is the semantic similarity matrix between unseen categories and phantom classes.
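The sketch below mirrors Eqs. (10)-(11) as we reconstruct them, with a closed-form least-squares fit standing in for the original optimization; the similarity matrices and the assumption that they are precomputed are illustrative.

```python
import torch

def sync_unseen_weights(W_seen, S_seen, S_unseen):
    """Synthesize unseen classifiers from phantom classes (Eqs. (10)-(11), sketch).

    W_seen:   (|Ys|, d) classifier weights learned on seen categories.
    S_seen:   (|Ys|, R) semantic similarities between seen categories and phantoms.
    S_unseen: (|Yu|, R) semantic similarities between unseen categories and phantoms.
    """
    # Eq. (10): phantom weights V minimizing ||W_seen - S_seen V||_F^2 (least squares).
    V = torch.linalg.lstsq(S_seen, W_seen).solution   # (R, d)
    # Eq. (11): unseen classifiers as combinations of the phantom classifiers.
    return S_unseen @ V                               # (|Yu|, d)
```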

5.5 Zero-shot recognition results

Table 1 presents the performance of our context-aware algorithm on top of the four zero-shot recognition baseline methods. For all backbone baselines, our model improves the accuracy on unseen categories in both the classic and generalized settings. The performance on seen categories is less consistent, which is mainly due to the characteristics of the baseline methods, but is still better in general.

For the original WE and CONSE methods, there are huge accuracy gaps between seen and unseen categories, especially under the generalized setting. This implies that these backbone models are significantly biased towards seen categories. Hence, it is natural that our model sacrifices some accuracy on $\mathcal{Y}_s$ to improve performance on $\mathcal{Y}_u$. GCN and SYNC, on the contrary, are more balanced, and our algorithm consistently improves both seen and unseen categories when combined with them.

The harmonic means of the accuracies on seen and unseen categories are consistently higher for our context-aware algorithm than for the baseline methods under the generalized setting. Note that this metric is effective for comparing overall performance on both seen and unseen categories, as suggested in [43].

Figure 3: Examples of how the top-5 predictions change before (below left) and after (below right) context-aware inference. Blue boxes are examples of correct refinement and red ones denote failure cases. Each unseen category is prefixed with an @ for distinction.
Figure 4: More qualitative results for zero-shot region classification. The blue and green bounding boxes correspond to objects of seen and unseen categories, respectively.
Classic/unseen Generalized/unseen Classic/seen Generalized/seen
per-cls per-ins per-cls per-ins per-cls per-ins per-cls per-ins
GCN 19.5 28.2 11.0 18.0 39.9 31.0 31.3 22.4
GCN+G 21.2 33.1 12.7 26.7 41.3 42.4 32.2 35.0
GCN+GA 20.4 26.5 9.2 15.3 40.9 44.8 34.7 40.9
SYNC 25.8 33.6 12.4 17.0 39.9 31.0 34.2 24.4
SYNC+G 26.8 39.3 13.8 26.5 41.5 39.4 34.5 31.7
SYNC+GA 26.6 33.6 11.3 16.4 41.6 42.8 36.5 38.5
Table 2: Results with different inputs to the relationship inference module. *+G is the model with only geometry information; *+GA is the model with both geometry and appearance features.
Generalized Classic
top-1 top-1 (+Ctx) top-5 top-1 top-1 (+Ctx) top-5
WE+Ctx 03.7 10.0 26.6 25.9 28.5 57.5
CONSE+Ctx 00.6 20.7 29.4 27.7 30.2 56.1
GCN+Ctx 18.0 26.7 38.3 28.2 33.1 51.6
SYNC+Ctx 17.0 26.5 49.4 33.6 39.3 68.9
Table 3: Per-instance top-1 accuracy (without and with context) and top-5 accuracy on unseen categories.
Top-k refinement

As mentioned in Section 3.2.3, our pruning method turns the context-aware inference into a top-k class reranking. We conduct the current experiment with k = 5; results with other choices of k can be found in the Appendix. In Table 3, we show the “per-instance” top-1 accuracy versus the top-5 accuracy of different algorithms on unseen categories. The top-5 accuracies do not change since we only rerank the top-5 classes, and the top-1 accuracy we can achieve is upper bounded by the corresponding top-5 accuracy. After applying context-aware inference, the top-1 accuracies increase. Notably, the baseline CONSE model has near-zero accuracy under the generalized setting because it is severely biased towards seen categories; however, its top-5 accuracy is reasonable. Our method is able to reevaluate the top-5 predictions with the help of relation knowledge and increase the top-1 accuracy significantly.

Qualitative results

Figure 3 shows qualitative results of the context-aware inference. Our context-aware model adjusts the class probabilities based on the object context. For example, zebra is promoted in the first image because of the bands on its body, while sausage helps recognize the pizza in the second image. Different patterns can be found in the label refinement: general to specific (furniture to chair, aircraft to airplane, animal to zebra), specific to general (skyscraper to building), and correction to similar objects (pie to pizza, paw to hoof). Figure 4 shows more qualitative results of region classification after applying context-aware inference.

Input choices for relationship inference

Our relationship inference module only takes geometry information as input to avoid overfitting to seen categories. One alternative we tried is combining it with region appearance features: we project the region features of the two objects into a lower dimension and concatenate them with the geometry embedding to produce the relation potentials. We report the results in Table 2, where the appearance-augmented relationship inference module is denoted by +GA. The results show that +GA is biased towards seen categories and hurts performance on unseen categories; under the generalized setting, +GA is even worse than the baselines on unseen categories.

Results by varying the size of $\mathcal{Y}_s$

We generate several subsets of $\mathcal{Y}_s$ by subsampling with ratios of 20% and 50%, while the unseen category set remains the same. Table 4 shows that our context-aware method consistently benefits zero-shot recognition in this ablation study.

# of seen classes GCN GCN+Ctx Δ SYNC SYNC+Ctx Δ
  95 (20%)   7.2   7.7 0.5 10.6 10.9 0.3
239 (50%) 13.8 14.2 0.4 19.5 19.5 0.0
478 (100%) 19.5 21.2 1.7 25.8 26.8 1.0
Table 4: Results of varying the size of $\mathcal{Y}_s$ in terms of per-class accuracy on $\mathcal{Y}_u$ in the classic setting.

5.6 Zero-shot detection results

We extend our region classification model to the detection task by adding a background detector. We set the classifier weight of the background class to the normalized average of the classifier weights, $w_{\mathrm{bg}} = \frac{1}{|\mathcal{Y}|} \sum_{c} \bar{w}_c$, where each row $w_c$ of $W$ is normalized in advance to $\bar{w}_c$. Furthermore, given thousands of region proposals, we only consider the top 100 boxes with the highest class scores given by the instance-level module for context-aware inference.
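A minimal sketch of these two steps is given below; it assumes the normalization is applied only when computing the background weight, and the function names are our own.

```python
import torch
import torch.nn.functional as F

def add_background_class(W):
    """Append a background weight equal to the normalized average of the class weights.

    W: (C, d) classifier weight matrix over all object categories.
    """
    W_norm = F.normalize(W, dim=-1)             # normalize each row in advance
    w_bg = W_norm.mean(dim=0, keepdim=True)     # background = normalized average weight
    return torch.cat([W, w_bg], dim=0)          # (C + 1, d), background appended last

def select_top_boxes(scores, boxes, top=100):
    """Keep the top-scoring proposals for context-aware inference."""
    best = scores.max(dim=-1).values            # best class score per proposal
    idx = best.topk(min(top, len(boxes))).indices
    return boxes[idx], scores[idx]
```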

Following [1], EdgeBoxes proposals are extracted for test images, where only proposals with scores higher than 0.07 are selected. After detection, non-maximum suppression is applied with an IoU threshold of 0.4. Due to the incomplete annotations in VG, we report Recall@100 scores with IoU thresholds of 0.4/0.5. Table 5 presents the zero-shot detection performance of the GCN and SYNC models with and without our context-aware inference, where our method shows improved accuracy on unseen categories and a higher overall recall given by the harmonic mean. Note that our results in the generalized zero-shot setting already outperform the results in the classic setting reported in [1].

Unseen Seen Harmonic mean
0.4 0.5 0.4 0.5 0.4 0.5
GCN   8.5 6.2 23.1 17.8 12.4   9.2
GCN+Context   9.7 6.9 22.3 16.0 13.5   9.6
SYNC 11.1 8.2 24.2 18.8 15.2 11.4
SYNC+Context 12.0 8.6 23.1 17.4 15.8 11.5
Table 5: Generalized zero-shot detection results. Recall@100 with IoU thresholds of 0.4/0.5 is reported.

6 Conclusions

We presented a novel setting for zero-shot object recognition, where high-level visual context information is employed for inference. Under this setting, we proposed a novel algorithm that incorporates both instance-level and object-relationship knowledge in a principled way. Experimental results show that our context-aware approach boosts performance significantly compared to models that use only instance-level information. We believe that this new problem setting and the proposed algorithm will facilitate more interesting research on zero-shot and few-shot learning.

References

  • [1] A. Bansal, K. Sikka, G. Sharma, R. Chellappa, and A. Divakaran. Zero-shot object detection. In The European Conference on Computer Vision (ECCV), September 2018.
  • [2] S. Changpinyo, W.-L. Chao, B. Gong, and F. Sha. Synthesized classifiers for zero-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5327–5336, 2016.
  • [3] S. Changpinyo, W.-L. Chao, B. Gong, and F. Sha. Classifier and exemplar synthesis for zero-shot learning. arXiv preprint arXiv:1812.06423, 2018.
  • [4] S. Changpinyo, W.-L. Chao, and F. Sha. Predicting visual exemplars of unseen classes for zero-shot learning. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 3496–3505. IEEE, 2017.
  • [5] X. Chen, L.-J. Li, L. Fei-Fei, and A. Gupta. Iterative visual reasoning beyond convolutions. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [6] Z. Chen, S. Huang, and D. Tao. Context refinement for object detection. In The European Conference on Computer Vision (ECCV), September 2018.
  • [7] B. Demirel, R. G. Cinbis, and N. Ikizler-Cinbis. Zero-shot object detection by hybrid region embedding. arXiv preprint arXiv:1805.06157, 2018.
  • [8] C. Desai, D. Ramanan, and C. C. Fowlkes. Discriminative models for multi-class object layout. International journal of computer vision, 95(1):1–12, 2011.
  • [9] S. K. Divvala, D. Hoiem, J. H. Hays, A. A. Efros, and M. Hebert. An empirical study of context in object detection. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 1271–1278. IEEE, 2009.
  • [10] D. K. Duvenaud, D. Maclaurin, J. Iparraguirre, R. Bombarell, T. Hirzel, A. Aspuru-Guzik, and R. P. Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in neural information processing systems, 2015.
  • [11] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. IEEE transactions on pattern analysis and machine intelligence, 32(9):1627–1645, 2010.
  • [12] A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, T. Mikolov, et al. Devise: A deep visual-semantic embedding model. In Advances in neural information processing systems, pages 2121–2129, 2013.
  • [13] Y. Fu, T. Xiang, Y.-G. Jiang, X. Xue, L. Sigal, and S. Gong. Recent advances in zero-shot recognition: Toward data-efficient understanding of visual content. IEEE Signal Processing Magazine, 35(1):112–125, 2018.
  • [14] C. Galleguillos and S. Belongie. Context based object categorization: A critical survey. Computer vision and image understanding, 114(6):712–722, 2010.
  • [15] C. Galleguillos, A. Rabinovich, and S. Belongie. Object categorization using co-occurrence, location and appearance. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1–8. IEEE, 2008.
  • [16] R. Girshick. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 1440–1448, 2015.
  • [17] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask r-cnn. In Computer Vision (ICCV), 2017 IEEE International Conference on, pages 2980–2988. IEEE, 2017.
  • [18] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • [19] H. Hu, J. Gu, Z. Zhang, J. Dai, and Y. Wei. Relation networks for object detection. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
  • [20] M. Kampffmeyer, Y. Chen, X. Liang, H. Wang, Y. Zhang, and E. P. Xing. Rethinking knowledge graph propagation for zero-shot learning. arXiv preprint arXiv:1805.11724, 2018.
  • [21] D. Koller, N. Friedman, and F. Bach. Probabilistic graphical models: principles and techniques. MIT press, 2009.
  • [22] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32–73, 2017.
  • [23] K. Kumar Singh, S. Divvala, A. Farhadi, and Y. Jae Lee. Dock: Detecting objects by transferring common-sense knowledge. In The European Conference on Computer Vision (ECCV), September 2018.
  • [24] C. H. Lampert, H. Nickisch, and S. Harmeling. Attribute-based classification for zero-shot visual object categorization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(3):453–465, 2014.
  • [25] C.-W. Lee, W. Fang, C.-K. Yeh, and Y.-C. F. Wang. Multi-label zero-shot learning with structured knowledge graphs. 2018.
  • [26] J. Lei Ba, K. Swersky, S. Fidler, et al. Predicting deep zero-shot convolutional neural networks using textual descriptions. In Proceedings of the IEEE International Conference on Computer Vision, pages 4247–4255, 2015.
  • [27] Y. Li, W. Ouyang, B. Zhou, K. Wang, and X. Wang. Scene graph generation from objects, phrases and region captions. In Proceedings of the IEEE International Conference on Computer Vision, 2017.
  • [28] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer, 2014.
  • [29] T. Malisiewicz and A. Efros. Beyond categories: The visual memex model for reasoning about object relationships. In Advances in neural information processing systems, 2009.
  • [30] K. Marino, R. Salakhutdinov, and A. Gupta. The more you know: Using knowledge graphs for image classification. arXiv preprint arXiv:1612.04844, 2016.
  • [31] G. A. Miller. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39–41, 1995.
  • [32] M. Norouzi, T. Mikolov, S. Bengio, Y. Singer, J. Shlens, A. Frome, G. S. Corrado, and J. Dean. Zero-shot learning by convex combination of semantic embeddings. arXiv preprint arXiv:1312.5650, 2013.
  • [33] J. Pennington, R. Socher, and C. Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, 2014.
  • [34] S. Rahman, S. Khan, and F. Porikli. Zero-shot object detection: Learning to simultaneously recognize and localize novel concepts. arXiv preprint arXiv:1803.06049, 2018.
  • [35] J. Redmon and A. Farhadi. Yolo9000: better, faster, stronger. arXiv preprint, 2017.
  • [36] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91–99, 2015.
  • [37] M. Rohrbach, M. Stark, and B. Schiele. Evaluating knowledge transfer and zero-shot learning in a large-scale setting. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1641–1648. IEEE, 2011.
  • [38] M. Rohrbach, M. Stark, G. Szarvas, I. Gurevych, and B. Schiele. What helps where–and why? semantic relatedness for knowledge transfer. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 910–917. IEEE, 2010.
  • [39] F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. Torr, and T. M. Hospedales. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  • [40] Q. Tao, H. Yang, and J. Cai. Zero-annotation object detection with web knowledge transfer. In The European Conference on Computer Vision (ECCV), September 2018.
  • [41] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008, 2017.
  • [42] X. Wang, Y. Ye, and A. Gupta. Zero-shot recognition via semantic embeddings and knowledge graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6857–6866, 2018.
  • [43] Y. Xian, C. H. Lampert, B. Schiele, and Z. Akata. Zero-shot learning-a comprehensive evaluation of the good, the bad and the ugly. IEEE transactions on pattern analysis and machine intelligence, 2018.
  • [44] J. Yang, J. Lu, S. Lee, D. Batra, and D. Parikh. Graph r-cnn for scene graph generation. In The European Conference on Computer Vision (ECCV), September 2018.
  • [45] W. Yang, X. Wang, A. Farhadi, A. Gupta, and R. Mottaghi. Visual semantic navigation using scene priors. arXiv preprint arXiv:1810.06543, 2018.
  • [46] P. Zhu, H. Wang, T. Bolukbasi, and V. Saligrama. Zero-shot detection. arXiv preprint arXiv:1803.07113, 2018.
  • [47] Y. Zhu, A. Fathi, and L. Fei-Fei. Reasoning about object affordances in a knowledge base representation. In European conference on computer vision, 2014.
  • [48] C. L. Zitnick and P. Dollár. Edge boxes: Locating object proposals from edges. In European conference on computer vision, 2014.

Appendix A Effect of k in pruning

For efficiency, we use only the top-k categories for inference in the mean field algorithm described in Section 3.2.3. Here we show that the choice of k only affects the accuracy of our algorithm marginally. Table 6 shows the results of the four baseline models with different choices of k. We notice that a higher k leads to slightly worse performance on seen categories, but generally improves performance on unseen categories.

Classic/unseen Generalized/unseen Classic/seen Generalized/seen HM(Generalized)
per-cls per-ins per-cls per-ins per-cls per-ins per-cls per-ins per-cls per-ins
WE+Context(5) 19.5 28.5 4.1 10.0 31.1 57.4 29.2 55.8 7.2 17.0
WE+Context(10) 20.1 33.1 4.1 11.3 30.0 56.7 28.1 55.0 7.2 18.7
WE+Context(20) 20.5 36.0 4.0 11.6 29.5 56.2 27.6 54.3 7.0 19.1
CONSE+Context(5) 19.6 30.2 5.8 20.7 29.6 38.8 25.7 35.0 9.5 26.0
CONSE+Context(10) 18.6 32.7 6.0 23.3 23.9 36.4 19.5 31.2 9.2 26.7
CONSE+Context(20) 16.6 33.0 5.3 22.1 18.3 32.5 14.1 26.4 7.7 24.1
GCN+Context(5) 21.2 33.1 12.7 26.7 41.3 42.4 32.2 35.0 18.2 30.3
GCN+Context(10) 21.8 35.6 12.7 27.9 40.3 45.3 30.9 36.4 18.0 31.6
GCN+Context(20) 21.4 36.7 12.0 28.1 39.3 45.7 30.0 36.3 17.1 31.7
SYNC+Context(5) 26.8 39.3 13.8 26.5 41.5 39.4 34.5 31.7 19.7 28.9
SYNC+Context(10) 27.2 41.6 13.8 27.1 41.3 41.2 34.4 32.4 19.7 29.5
SYNC+Context(20) 27.2 42.2 13.9 27.1 41.2 41.6 34.2 32.4 19.7 29.5
Table 6: Performance of our model with different top-k settings for CRF inference. The number in parentheses is the value of k for each setting.
Computation cost

Relative to the runtime without context inference, top-100 and top-5 pruning increase the runtime by 58% and 18%, respectively. Without pruning, an out-of-memory error is raised.

Appendix B Choice of λ

We split the original seen label set evenly into dev_seen and dev_unseen; λ is chosen as the best-performing value on this dev split, selected separately for WE, CONSE, and GCN on one hand and for SYNC on the other.

Appendix C Accuracy improvement for different classes

In the experiment section, we saw that our algorithm improves the ‘per-instance’ metric more than the ‘per-class’ metric. To investigate this outcome, we analyze the correlation between the accuracy improvement of individual categories and two factors: the degree of the category in the relation graph and the frequency of the category. The model used in this section is GCN+Context.

In Fig. 5, we analyze the correlation between the accuracy improvement of individual categories and the degree of the category in the relation graph. Here, the degree of a category is the number of relationships in which the category appears as either subject or object. The x-axis is the degree of each category in the graph, while the y-axis is the relative accuracy improvement compared to the baseline model. For classes with high degrees, the accuracies are mostly improved; for classes with low degrees, the accuracies actually drop slightly on average. Since most categories are improved, the overall ‘per-class’ accuracy improves.

In Fig. 6, we analyze the correlation between the accuracy improvement and category frequencies. The x-axis is the number of samples of each category in the whole test set (the number in the training set is not available for unseen categories). Categories with more occurrences in the test set show larger improvements, which is why more gain is obtained on the ‘per-instance’ metric than on the ‘per-class’ metric. Generally, categories with more samples have more relations/interactions with other object categories and thus provide more cues that can be inferred from the context.

Figure 5: Correlation between the accuracy improvement of individual categories and the degree of the category in the relation graph. The width of each bar is proportional to the logarithm of the number of categories in the bin. The x-axis denotes the degree of a category in the graph.
Figure 6: Correlation between the accuracy improvement and category frequencies. The width of each bar is proportional to the logarithm of the number of categories in the bin. The x-axis represents the number of samples of a category in the test set.

Appendix D Visualization of relation potentials and pairwise potentials

We provide visualizations of the relation knowledge learned by our algorithm. The model used in this section is GCN+Context.

In Fig. 7, we illustrate the relation potentials given the locations of the subject and the object. Our model is able to learn pairwise ‘relations’ without any relation annotations. For example, in the first image, our model gives high potentials to ‘wearing’ and ‘has’ given the two boxes.

Figure 7: Visualization of relation potentials. For a pair of objects, green box denotes subject and red one denotes object. The values of potential are shown on the right side of each image, respectively.

In Figure 8, we show the graph in which all the objects in an image are connected by the pairwise potentials $\psi_p$. The width of each line is proportional to the corresponding pairwise potential. For better visualization, edges with potentials less than 0.5 are omitted. Objects that are related have higher potentials. For example, in the top-left image, the wave and the water are connected by a thick edge since they have a strong relationship according to the pairwise potential.


Figure 8: Visualization of pairwise potentials. Edges with potential less than 0.5 are omitted. The thickness of the line indicates how large the potential is. The ground truth category is annotated on the top-left corner of each box.