Simple and effective localized attribute representations for zero-shot learning

by Shiqi Yang, et al.
Universitat Autònoma de Barcelona

Zero-shot learning (ZSL) aims to discriminate images from unseen classes by exploiting relations to seen classes via their semantic descriptions. Some recent papers have shown the importance of localized features together with fine-tuning the feature extractor to obtain discriminative and transferable features. However, these methods require complex attention or part detection modules to perform explicit localization in the visual space. In contrast, in this paper we propose localizing representations in the semantic/attribute space, with a simple but effective pipeline where localization is implicit. Focusing on attribute representations, we show that our method obtains state-of-the-art performance on the CUB and SUN datasets, and also achieves competitive results on the AWA2 dataset, outperforming generally more complex methods with explicit localization in the visual space. Our method can be implemented easily and can serve as a new baseline for zero-shot learning.



1 Introduction

Visual classification with deep convolutional neural networks has achieved remarkable success [13, 24], even surpassing humans in some benchmarks [12]. This success, however, requires that the training data contain enough images per class (tens or hundreds of images), which is often not the case in practice: visual data to learn new classes may be scarce (few-shot learning, FSL) or nonexistent (zero-shot learning, ZSL). Humans, in contrast, are able to infer new classes from few or even no visual examples, just from a semantic description that connects them to known concepts (e.g. a zebra is like a horse but with stripes). Thus, ZSL is a desirable capability in computer vision systems, allowing them to recognize a much larger set of classes via their semantic descriptions.

The key component of a ZSL system is the semantic model, which connects seen and unseen classes in a common semantic space and enables transferring visual representations of seen classes to infer unseen ones. The most common semantic spaces are visual attributes, word embeddings and textual descriptions. We focus on visual attributes. In addition, generalized zero-shot learning (GZSL) addresses the setting where the test image could belong to seen classes (in addition to unseen classes). In this case the main challenge is the inherent bias towards seen classes. Thus, discriminative and transferable representations, together with properly designed semantic spaces, are key for effective and unbiased inference on unseen classes. In this paper we focus on attributes as the semantic model, and on learning representations that are transferable to unseen classes with low bias.

The common approach is to align visual and semantic representations in a common embedding space via a ranking loss or metric learning losses. The visual representation is extracted with a visual model and the semantic representation is a mapping of the class to the semantic space (e.g. a class prototype in terms of attributes). During inference, seen and unseen classes are mapped to the common embedding space and the class nearest to the visual representation is selected.
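The inference step described above can be sketched as a toy nearest-prototype search in the embedding space. All names, prototypes and embeddings below are hypothetical illustrations, not the paper's implementation:

```python
# Toy sketch of (G)ZSL inference: pick the class whose semantic prototype
# scores highest against the projected visual feature of the test image.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def predict(visual_embedding, class_prototypes):
    """Return the class label whose attribute prototype best matches the image."""
    return max(class_prototypes, key=lambda c: dot(visual_embedding, class_prototypes[c]))

# Illustrative semantic space with 3 attributes; 'zebra' is unseen at training time.
prototypes = {
    "horse": [1.0, 0.0, 1.0],   # e.g. [has_tail, has_stripes, has_hooves]
    "zebra": [1.0, 1.0, 1.0],
}
image_embedding = [0.9, 0.8, 1.1]  # projected visual feature of a test image
print(predict(image_embedding, prototypes))
```

Because seen and unseen prototypes live in the same attribute space, the same scoring rule covers both ZSL and GZSL, simply by enlarging the set of candidate classes.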

(a) Global representation
(b) Part-based representation
(c) Attention-based representation
(d) Proposed representation: localized attributes
Figure 1: Semantic representations for (G)ZSL. We propose (d) localized attribute representations that explicitly localize and disentangle attribute information, in contrast to (a) global representations, and (b) part-based and (c) attention-based approaches that focus on a few regions in the visual space and do not disentangle attribute information until the global representation.

The most common representations in ZSL are global visual features extracted from an (ImageNet-)pretrained feature extractor (see Fig. 1a), which are even readily available off-the-shelf from previous works [29, 36]. These global visual features are then projected to the semantic space [1, 10] or to an intermediate space [37], where the comparison with semantic representations takes place. Most papers have focused on designing and learning a good visual-semantic alignment. Notably, Zhang et al. [36] suggest that the choice of the embedding space is crucial, and argue that projecting to low-dimensional semantic or intermediate spaces shrinks the variance of the visual features, limiting their discriminability and aggravating the so-called hubness problem [22]. They suggest that the visual space is more discriminative and robust to hubness, and propose to embed classes directly in the visual space and then perform nearest neighbor search. In this paper we argue that this conclusion only holds for global representations, and that the choice of the space where features are localized is even more critical.

Little attention has been paid to the role of locality in the design of good semantic representations that are discriminative and transferable to unseen classes. This is particularly critical in fine-grained scenarios where the differences between classes are highly local and subtle (e.g. the color of the beak or the tail could be the only aspect that discriminates between two classes of birds). This has been confirmed in recent analysis [26]. In this paper we show that the semantic space is indeed a better choice to project local features to, and that, with a suitable spatial aggregation strategy, the resulting global semantic feature remains highly discriminative and effective for comparing similarity with the unseen (and seen) classes.

In this paper we focus on localized semantic representations, and, in particular, we propose localized attributes as representations. They can be obtained easily if we rethink the ZSL pipeline and switch the order between local-to-global aggregation and projection to the semantic space (see Fig. 1d). We also investigate how a proper choice of the spatial aggregation mechanism can significantly boost the performance of the semantic representation in some datasets. We show that a simple convolutional layer and global max pooling (GMP) are enough to achieve highly competitive performance and outperform most part-based and attention-based methods. This simplicity also entails multiple advantages, including easier and more efficient training (simply using the standard cross-entropy loss), no additional hyperparameters and better transferability to unseen classes (i.e. seen and unseen accuracies are more balanced).

Localized representations have been proposed earlier in ZSL [8, 35, 38, 39]. However, they are mostly limited to the detection or discovery of discriminative parts [8, 38] (see Fig. 1b) or attention mechanisms in the visual space [35, 39] (see Fig. 1c), rather than explicitly localizing attributes in a local semantic space as we propose. In addition, part detectors are often trained separately with additional part-specific annotations (e.g. bounding boxes). Sometimes, extracting local features may also require larger images [39]. The number of extracted regions or attention maps is typically low (around 2-15), in contrast to our localized attributes. Moreover, the few features extracted from detected parts or attention maps do not necessarily disentangle attribute information, while we obtain one dedicated map for every attribute.

In short, we summarize our contributions as follows:

  • We propose a simple and effective localized attribute representation (SELAR) for (G)ZSL, which is both discriminative and transferable to unseen classes. The representation is also interpretable as attribute-specific heatmaps.

  • We study the role of the aggregation mechanism to improve localization and reduce the bias. This analysis shows that global max pooling in the localized attribute space leads to significant performance gains, especially improving performance on the unseen classes.

  • We achieve state-of-the-art performance on the SUN and CUB datasets. Notably, our method, which localizes attributes implicitly, outperforms other more complicated networks with explicit localization, such as attention-based [35, 39] and part-based [8, 38] methods.

2 Related work

Zero-shot learning

The original ZSL task focuses on achieving good predictions on unseen classes. Early approaches tackle this problem via visual-semantic alignment in a common space [2, 3, 11, 10, 20, 23, 25, 30, 37]. The common space can be the semantic space, the visual space [15, 36] or an intermediate space [37]. The alignment can be achieved via linear projections [2, 3, 10], non-linear projections [25, 30] or combinations of seen embeddings [6, 20]. Typically, a ranking loss is used to enforce alignment, but L2 loss [36] and adversarial loss [38] have also been used.

Generalized zero-shot learning

This more challenging, yet more realistic, setting evaluates the classifier on the union of seen and unseen classes [7]. The additional problem of bias towards seen classes becomes critical for good GZSL performance, and requires specific techniques to address it [7, 9, 15, 17, 34]. Several of these works relax some assumptions of the GZSL setting and achieve better performance. One such relaxation is assuming that the descriptions of unseen classes are available during training. In that case, a generative model can be trained to generate synthetic features of unseen classes, which are then combined with real seen samples to train a joint and balanced classifier for both seen and unseen classes [9, 19, 32, 34]. Another assumption is having access to unseen images and labels, which can help to calibrate the bias between the scores of seen and unseen classes [7, 17]. In this paper, we assume that neither the unseen class descriptions are available during training nor can we calibrate the classifier.

Localized features

Most (G)ZSL approaches focus on the role of the classifier and the semantic models, directly relying on global representations extracted by a pretrained classifier (typically a ResNet-101 trained on ImageNet). The potential of local representations for (G)ZSL has been explored only recently in two directions: part detection [8, 38] and attention mechanisms [35, 39]. In the former group, Zhu et al. [38] use a part detector trained for fine-grained recognition, where a fixed number of parts is extracted (e.g. seven parts for birds in the CUB dataset, such as beak, belly, wings, etc.). Then adversarial learning is used to align semantic and visual representations. The model was later improved by including a loss encouraging creativity [8]. The main limitation of these approaches is that the part detector requires additional and expensive annotations (i.e. part ids, bounding boxes) for training. In the latter group, attention mechanisms focus on discovering discriminative regions. AREN [35] includes an attention layer, combined with an adaptive thresholding mechanism and a second-order pooling representation. SGMA [39] first computes part attention maps (only two maps in their case), which subsequently guide a part extractor from which local features are obtained. In general, the feature extractor in these methods is fine-tuned to improve the localization ability. Part detectors and attention mechanisms are significantly more complex and arguably more difficult to train (having additional hyperparameters) than the proposed approach, and essentially different from ours, since they localize only a few visual regions rather than explicitly localizing attributes.

3 Zero-shot learning with localized attribute representations

3.1 Task Definition

In the ZSL task, the training set contains seen classes and is defined as $S = \{(x_i, y_i)\}_{i=1}^{N_s}$, where $x_i$ denotes the $i$-th image of the seen classes and $y_i \in \mathcal{Y}^s$ is its class label. The test set contains unseen classes and is defined as $U = \{(x_j, y_j)\}$ with $y_j \in \mathcal{Y}^u$. The sets of seen and unseen classes are disjoint, i.e. $\mathcal{Y}^s \cap \mathcal{Y}^u = \emptyset$. The semantic information about a particular class $y$ is obtained by the class embedding function as $a_y = \varphi(y)$. In the case of attribute-based representations with $K$ attributes, the class prototype $a_y$ is simply a $K$-dimensional (binary or real-valued) attribute vector encoding the presence or absence of each attribute. In this way, the semantic information about all seen classes can be conveniently captured in a $|\mathcal{Y}^s| \times K$-dimensional attribute matrix $A^s$. Similarly, for unseen classes we obtain $A^u$. Finally, evaluation in the GZSL setting considers a test set that includes both seen and unseen classes, i.e. $\mathcal{Y}^s \cup \mathcal{Y}^u$.

3.2 Classification pipeline

We formulate ZSL as a classification problem, using a deep convolutional neural network (CNN) that internally projects visual features to the semantic space and is trained end-to-end with cross-entropy loss on seen data (see Fig. 2a). In particular we are interested in some of the intermediate representations: the local visual features $F \in \mathbb{R}^{H \times W \times D}$, the global visual feature $g \in \mathbb{R}^D$, the global semantic feature $s \in \mathbb{R}^K$, and the logits or unnormalized class scores. These intermediate representations lie in three distinctive spaces: the $D$-dimensional visual space, the $K$-dimensional semantic space (where $K$ is the number of attributes in our case) and the $|\mathcal{Y}^s|$-dimensional class space. For convenience, we can also split the deep network into several modules: the feature extractor $f$, the spatial aggregation operation $\phi$, and the linear projection to the semantic space, parametrized by the projection matrix $W \in \mathbb{R}^{K \times D}$, i.e. a fully connected layer. The projection matrix is trainable, while the feature extractor is usually pretrained and can be optionally fine-tuned. Finally, the overall loss to minimize is

$\mathcal{L} = \mathcal{L}_{CE}\left(A^s W \phi(f(x)), y\right),$

where $\mathcal{L}_{CE}$ is the cross-entropy loss. During test, the predicted class is the one with the highest cross-product score

$\hat{y} = \arg\max_{c \in \mathcal{C}} \; a_c^\top W \phi(f(x)),$

with $\mathcal{C} = \mathcal{Y}^u$ for ZSL, and $\mathcal{C} = \mathcal{Y}^s \cup \mathcal{Y}^u$ for GZSL.
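The loss and prediction rule of this pipeline can be sketched in a few lines. The matrices and features below are illustrative toy values (D=2 visual dimensions, K=2 attributes, 2 seen classes), not the paper's actual dimensions or learned weights:

```python
import math

def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def cross_entropy(logits, label):
    """Numerically stable softmax cross-entropy for one sample."""
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum - logits[label]

W = [[0.5, 0.1], [0.2, 0.9]]       # projection: visual space -> attribute space
A_seen = [[1.0, 0.0], [0.0, 1.0]]  # rows: attribute prototypes of the seen classes

g = [1.0, 2.0]                     # globally pooled visual feature phi(f(x))
s = matvec(W, g)                   # global semantic feature W phi(f(x))
logits = matvec(A_seen, s)         # unnormalized class scores A^s W phi(f(x))
loss = cross_entropy(logits, label=1)
pred = max(range(len(logits)), key=logits.__getitem__)
```

At test time, swapping `A_seen` for the unseen-class attribute matrix (ZSL) or the concatenation of both (GZSL) changes the candidate set without touching the network.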

A common choice for spatial aggregation is global average pooling (GAP), with an ImageNet-pretrained network as feature extractor. GoogleNet [27] and ResNets [13] fit this case, and the corresponding global features for common ZSL datasets are provided off-the-shelf and used in benchmarks [29]. Thus, we can conveniently compare to those previous methods by using GAP and fixing the feature extractor. In addition, this pipeline enjoys several advantages: it is easy to train, has few additional parameters (i.e. the projection matrix $W$) and no additional hyperparameters. This simplicity allows our method to generalize better to unseen classes, resulting in a lower bias towards seen classes (see Table 2), which is critical in GZSL.

(a) ZSL with global semantic representations (projection after aggregation)
(b) ZSL with local semantic representations (aggregation after projection)
Figure 2: Local and global representations and embedding spaces in ZSL: (a) projection to the semantic space after spatial aggregation, and (b) spatial aggregation after projection. Trainable/tunable modules are highlighted in red.

3.3 Localized attribute representations

Previous approaches using local representations perform localization (via part detection or attention) in the visual space, then extract local visual features and eventually aggregate them into a global visual representation, which is projected to the semantic space. We instead propose projecting local visual features to the semantic space, obtaining localized semantic representations, i.e. localized attributes in our case. We modify our classification baseline by switching the order of spatial aggregation and projection. Now the projection is performed first, using a $1 \times 1$ convolution whose kernel results from reshaping $W$. Spatial aggregation is then performed on the resulting localized attribute representation $\tilde{s} = W * F$ (see Fig. 2b). The resulting global semantic representation is $s = \phi(\tilde{s})$.
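A minimal sketch of this reordered pipeline, with toy values (a 2x2 spatial grid, D=2 visual channels, K=3 attributes); since the kernel is 1x1, the convolution reduces to a per-location matrix product:

```python
def project_local(F, W):
    """F: H x W x D local visual features; W: K x D projection matrix.
    Returns H x W x K localized attribute maps (the 1x1 convolution)."""
    return [[[sum(w * f for w, f in zip(row, F[i][j])) for row in W]
             for j in range(len(F[0]))] for i in range(len(F))]

def global_pool(M, op=max):
    """Aggregate each of the K attribute maps spatially (op=max gives GMP)."""
    K = len(M[0][0])
    return [op(M[i][j][k] for i in range(len(M)) for j in range(len(M[0])))
            for k in range(K)]

F = [[[1.0, 0.0], [0.0, 1.0]],
     [[2.0, 2.0], [0.0, 0.0]]]                 # 2x2 grid of D=2 visual features
W = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]       # K=3 attribute projections
maps = project_local(F, W)                     # localized attribute maps
s = global_pool(maps)                          # global semantic feature via GMP
```

Each of the K maps in `maps` is a spatial heatmap for one attribute, which is what makes the representation interpretable.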

Localized attributes provide a representation where attribute information is explicitly disentangled: each map corresponds to a different attribute (see Fig. 3a), and every attribute has a unique attribute map. In contrast, attention maps and discovered regions essentially weight visual representations or guide the extraction of visual representations. Thus, they do not necessarily disentangle attribute information (e.g. one region could be related to many attributes or even to none), and they also suffer from a more limited number of regions or attention maps, compared to the number of attributes.

In our approach, no explicit attention or detection module performs localization. Instead, we rely on the implicit localization of visual features that the feature extractor already performs. This highlights that fine-tuning the feature extractor is often crucial to improve the discriminability and transferability of the proposed local semantic representations.

3.4 Spatial aggregation with localized attributes

In this section, we investigate the role of spatial aggregation in the semantic space. For aggregating local visual representations, GAP has proven a very effective strategy. However, localized semantic representations may behave differently, and the proposed localized attributes provide a rich and highly disentangled representation for which averaging may not be the best strategy. In general, the choice of aggregation strategy is also related to how local the attributes are in a particular task. For instance, attributes in fine-grained datasets such as CUB are very local (e.g. 'has_wing_pattern_spotted' and 'has_throat_color_orange'). In contrast, other datasets such as SUN contain global attributes or attributes covering wide areas (e.g. 'man-made', 'trees'). We study two aggregation strategies: GAP and global max pooling (GMP).

(a) Attribute maps (top: GAP, bottom: GMP)
(b) GAP (seen)
(c) GAP (unseen)
(d) GMP (seen)
(e) GMP (unseen)
Figure 3: Comparison between spatial aggregation methods (GAP and GMP) on CUB (fine-tuned feature extractor). Note that the attribute maps of GMP more accurately identify the relevant regions. (b-e) Global semantic representations (rows) of 50 randomly selected images per class for 6 classes (super-rows). Each column corresponds to one of the 312 attributes. The description corresponding to the class is shown in red. Note that GMP generates sparser and more discriminative feature patterns than GAP.

It is worth observing that GAP is a linear operation, so a local linear projection followed by GAP and GAP followed by a global linear projection are equivalent. In other words, the approaches in Figs. 2a and 2b are equivalent when the spatial aggregation method is GAP (in our experiments, the bias term in the 1x1 convolution does not influence the results). Therefore, our method with off-the-shelf pretrained feature extractors that already use GAP, such as GoogleNet or ResNet, is already implicitly localizing attributes. When implemented as in Fig. 2a, localized attributes are never explicitly computed. However, they can be recovered by reshaping $W$ into a 1x1 convolution kernel and computing the attribute maps $\tilde{s} = W * F$.

Regardless of the aggregation method, our representation already achieves good localization of the attributes, as shown in Fig. 3a. While GAP considers all locations equally important, GMP focuses on the most salient location of each attribute map. This can be useful in datasets such as CUB, where each attribute is typically confined to a single small area. In addition, when combined with fine-tuning, this encourages the feature extractor to generate maps whose salient regions are smaller than with GAP (compare the effect of GAP and GMP on the attribute maps in Fig. 3a). An important difference with GAP is that GMP is not a linear operation, so the order of projection and aggregation matters in this case. Since we are interested in aggregating in the semantic space, GMP is performed after the 1x1 convolution (as in Fig. 2b).
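This asymmetry can be verified numerically. The sketch below uses illustrative toy features (not from the paper) and checks that projecting before or after GAP agrees, while the two orders disagree under GMP:

```python
def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def pool(F, op):
    """F: list of per-location D-dim features; pool each dimension spatially."""
    D = len(F[0])
    return [op([f[d] for f in F]) for d in range(D)]

mean = lambda xs: sum(xs) / len(xs)

F = [[1.0, 2.0], [3.0, 0.0], [0.0, 4.0]]   # 3 locations, D=2 visual features
W = [[1.0, -1.0], [0.5, 0.5]]              # K=2 attribute projections

# GAP is linear: both orders give the same global semantic feature.
gap_then_proj = matvec(W, pool(F, mean))
proj_then_gap = pool([matvec(W, f) for f in F], mean)

# GMP is non-linear: the order of projection and pooling matters.
gmp_then_proj = matvec(W, pool(F, max))
proj_then_gmp = pool([matvec(W, f) for f in F], max)
```

With GMP after the projection (the proposed order), the maximum is taken per attribute map rather than per visual channel, which is exactly the "most salient location per attribute" behavior described above.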

The aggregated representations, i.e. the global semantic representations, obtained with our method (see Fig. 3b-e) are very discriminative and robust. The patterns of the representations obtained for unseen classes are very similar to those obtained for seen classes (compare Fig. 3b and c for GAP, and Fig. 3d and e for GMP), which suggests that the usual bias towards seen classes is relatively low. Finally, we can compare the effect of the aggregation strategy on the global semantic feature. In particular, we observe that GMP, by focusing on the most salient location for each attribute, generates sparser and arguably more discriminative feature patterns than GAP, which seems more sensitive to noise.

4 Experiments

Method SUN (PS) CUB (PS) AWA2 (PS)
Without fine-tuning
SYNC [6]
ALE [2]
PSR [4]
DCN [17]
MIIR [5]
Ours without fine-tuning
Attention model based with fine-tuning
AREN [35]
JLA [16]
AttentionZSL [18]
Part detection based with fine-tuning
SGMA [39]
GAZSL [38]
Ours with fine-tuning
Table 1: Zero-shot learning results on SUN, CUB, and AWA2. PS = Proposed Split. The results report top-1 accuracy in %. Marked methods adopt ResNet101 as feature extractor. We highlight the best result with and without fine-tuning respectively.
Method SUN (U S H S/U) CUB (U S H S/U) AWA2 (U S H S/U)
Without fine-tuning
SYNC [6]
ALE [2]
PSR [4]
DCN [17]
MIIR [5]
Ours without fine-tuning
Attention model based with fine-tuning
AREN [35]
JLA [16]
AttentionZSL [18]
Part detection based with fine-tuning
SGMA [39]
GAZSL [38]
Ours with fine-tuning
Table 2: Generalized zero-shot learning results on the Proposed Split (PS). U = top-1 accuracy on unseen classes, S = top-1 accuracy on seen classes, H = harmonic mean; the S/U ratio indicates the bias towards seen classes. The last column reports the average of H over the three datasets. Underline marks the second-best result. Marked results use ResNet101 as feature extractor; * indicates results using VGG19. We highlight the best results with fine-tuned and fixed feature extractors respectively.

4.1 Datasets and Implementation Details

We evaluate our method on three datasets: the fine-grained dataset CUB [28], SUN [21] and AWA2 [29]. Among them, CUB has 11,788 images with 200 different classes of birds annotated with 312 attributes. SUN contains 14,340 images from 717 types of scenes with 102 attributes. Finally, AWA2 is a dataset with 50 categories of animals, which is composed of 37,322 images and 85 attributes. We follow the proposed split from [29] which is commonly used in ZSL/GZSL, resulting in a 150/50, 645/72 and 40/10 (seen/unseen) category division for CUB, SUN and AWA2 datasets respectively.

We provide results for both the conventional zero-shot learning (ZSL) and the generalized zero-shot learning (GZSL) settings, but mainly focus on GZSL, the more challenging setting. We denote the top-1 accuracies on unseen and seen classes as U and S, respectively. The evaluation metric for GZSL is the harmonic mean of the seen and unseen accuracies, calculated as H = 2US / (U + S). We apply L2-normalization on the attribute matrix, as commonly done in previous works.
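As a quick illustration of the metric (with made-up accuracy values, not paper results), the harmonic mean is pulled toward the lower of the two accuracies, so a model that is strong only on seen classes scores poorly:

```python
def harmonic_mean(u, s):
    """GZSL metric: harmonic mean H of unseen accuracy U and seen accuracy S."""
    return 2 * u * s / (u + s) if (u + s) > 0 else 0.0

print(round(harmonic_mean(0.40, 0.60), 3))  # balanced-ish model
print(round(harmonic_mean(0.05, 0.90), 3))  # seen-biased model: H collapses
```

This is why H, rather than the plain average, is the standard GZSL metric: it rewards balanced seen/unseen performance.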

We report results with ImageNet-pretrained ResNet101 [13] and VGG19 [24] as feature extractors, depending on the experiment, for fair comparison with previous methods. The size of the input image is 224x224 pixels. Similarly, we report results with fixed and fine-tuned feature extractors, depending on the experiment. As for learning rates, we use the following settings with VGG19: for the CUB dataset, the learning rates for the feature extractor (FE) and the 1x1 convolutional layer (conv_1x1) are 1e-3 and 0.2, respectively, and the learning rate decays by a factor of 0.1 after 15 epochs. For AWA2, the learning rates are 1e-5 and 0.5 for FE and conv_1x1. When using ResNet101 on CUB and AWA2, all learning rates are 10 times smaller. For SUN, we use ResNet101 as other methods do; the learning rates are set to 1e-3 and 1e-2 for FE and conv_1x1 respectively, and they decay by 0.1 after 6 epochs. In all cases, the learning rate is decayed only once.

4.2 Quantitative Results for zero-shot Learning

In the experiments, we compare our method with the most related state-of-the-art approaches, some of which design extra modules to find regions of interest or attention maps. We show that in zero-shot learning there is no need for such extra modules. We refer to the pipeline with GAP as SELAR-GAP, and with GMP as SELAR-GMP. We compare essentially with methods using fixed feature extractors and global representations, and with methods using fine-tuned feature extractors, which are typically used for localized representations. In the latter case, we distinguish between methods with attention models and methods with part detectors. We do not compare with generative methods [9, 14, 31, 33] that use a GAN or VAE to generate synthetic visual feature vectors for the unseen classes. These methods obtain excellent results; however, they require access to the attribute vectors of the unseen classes during training, and not only during inference as in our method. Furthermore, these methods can be seen as a form of data augmentation, and can potentially be combined with the method we propose in this paper.

Fixed feature extractor. We first evaluate our approach with a fixed ResNet-101 feature extractor, which is the most common setting in approaches using global representations. We report the ZSL and GZSL results in Table 1 and Table 2, respectively. Interestingly, even without fine-tuning, SELAR-GAP achieves very competitive performance, in particular for GZSL. Replacing GAP with GMP, i.e. SELAR-GMP, achieves state-of-the-art performance on CUB and AWA2. We argue that the main reason could be that the local features from ImageNet-pretrained networks generalize well to other datasets and have very good generic localization ability. In this case, SELAR simply trains a linear mapping, which is enough to obtain an effective localized representation in the attribute space. We show additional attribute maps for both the fine-tuned and non-fine-tuned pipelines in the supplementary material.

Fine-tuned feature extractor. We also evaluate performance with fine-tuned feature extractors in order to compare with methods using attention and part detection. Table 1 shows the results for ZSL, where our two pipelines achieve comparable performance on the CUB and SUN datasets.

Table 2 shows the results for GZSL. SELAR-GMP achieves state-of-the-art performance on both the CUB and SUN datasets, and obtains compelling results on the AWA2 dataset; in particular, it surpasses other significantly more complex methods on the CUB dataset. Among the methods in Table 2, AREN [35], JLA [16] and AttentionZSL [18] utilize extra modules to generate location or attribute attention, while SGMA [39], GAZSL [38] and CIZSL [8] have additional part detection modules. Nevertheless, these methods obtain inferior results on most datasets compared to our method. SGMA achieves state-of-the-art performance on AWA2; however, it requires four forward passes through the feature extractor and a twice larger input resolution (448x448 pixels). Specifically for SUN, using GAP or GMP does not make much difference in the results. We posit that this is because the attributes in the SUN dataset are not always clearly localizable (like the attributes 'natural light' and 'trees'), so considering all regions (i.e. GAP) or only a single location (i.e. GMP) does not make a significant difference. We also report the average H over all three datasets (excluding methods with results on only two datasets) in Table 2, and our SELAR-GMP has the highest value both with and without fine-tuning. Given the simplicity of our approach, we find these results remarkable, especially considering the often much more complicated architectures of the compared methods.

From the ZSL and GZSL results, we can see that our pipeline does not stand out in ZSL, but surpasses almost all other methods on the three datasets in GZSL. In GZSL, the ideal model should learn a better visual-semantic mapping during training, and this knowledge should be transferred to unseen classes. The results on CUB and AWA2 show that GMP for spatial aggregation indeed performs better than GAP, as discussed in Section 3.4.

Aggregation type Aggregation space U S H
GAP visual, attribute, class (all equivalent)
GMP visual
GMP attribute (ours)
GMP class
Table 3: Ablation study of pooling operations and pooling spaces. The results are reported on the CUB dataset.

Aggregation method and aggregation space. We investigate the optimal location to perform the aggregation of local features into global ones. The ablation study shown in Table 3 evaluates GAP and GMP in three different spaces: visual, attribute and class ($F$, $\tilde{s}$ and the class-score maps, following Fig. 2b), which also correspond to the order in which features are mapped to the different spaces in our classification pipeline.

Since GAP is a linear mapping, as are the other operations (the linear layer or 1x1 convolution, and the attribute mapping), pooling in any of the three spaces is equivalent in this case. This is not the case for GMP, which is non-linear. Here, Table 3 shows that the optimal location is the attribute space (after the 1x1 convolutional layer).

Seen/unseen classes bias. Since no images from unseen classes are observed during training, the network will inevitably be biased towards the seen classes. This bias may increase further when the feature extractor is also fine-tuned.

We can evaluate the bias towards seen classes by comparing the accuracy in ZSL (see Table 1) with the accuracy on unseen classes and the harmonic mean H in GZSL (see Table 2). Our two variants have similar or slightly lower accuracy under ZSL, but achieve much higher unseen accuracies and harmonic means than almost all other methods. This shows that our method is less biased towards the seen classes.

Another useful metric to compare the seen-unseen bias between different methods is the S/U ratio in GZSL (see Table 2). A large ratio indicates a large bias towards seen classes. SELAR-GMP obtains the lowest ratio on all datasets except SUN (without fine-tuning), where SELAR-GAP obtains slightly better results. Again, we conjecture two reasons behind this: first, our method is simple and does not add new hyperparameters, making the localized attribute representation generalize well to unseen classes; second, GMP is less sensitive to noise and encourages more localized attributes.

4.3 Visualization

Figure 4: Visualization of attribute maps on the SUN dataset from SELAR-GAP (top) and SELAR-GMP (bottom). The corresponding attributes are shown below each image.

Localized attribute maps on the SUN dataset. Since we have shown the visualization of attribute maps on CUB in Section 3.4, here we visualize some attribute maps in the localized attribute space for both SELAR-GAP and SELAR-GMP (with fine-tuning) on the SUN dataset in Fig. 4. Whereas on the CUB and AWA2 datasets the attributes appear in clearly localizable small regions, on the SUN dataset this is not the case. This may be the reason why SELAR-GAP and SELAR-GMP have similar performance on SUN.

One thing to emphasize is that we cannot guarantee that each attribute map really corresponds to the true attribute, since there is no explicit constraint. For example, the attribute map for the attribute 'neck color - red' does not necessarily localize the neck region; the network learns to correlate the related region with its attribute value automatically. Nevertheless, we find that feature maps with high attribute values (85) in the attribute vector are always highly related to the corresponding attribute. We show additional attribute maps with lower attribute values in Fig. 5; these attributes have either intermediate values or are absent (value 0) in the attribute vector of that class. In these cases, the attribute map sometimes corresponds to the specific attribute, but sometimes it does not.

Figure 5: Visualization of attribute maps on the CUB dataset from SELAR-GAP (upper part) and SELAR-GMP (lower part). Below each image are the corresponding attributes. These attributes have lower values (decreasing from left to right) in the attribute vector.

5 Conclusions

In this paper, we focus on localized semantic representations and provide a simple but effective pipeline for zero-shot learning, dubbed SELAR. In this pipeline, localized attribute representations are obtained implicitly: each feature map in the localized attribute space corresponds to one specific attribute. We also study the role of spatial aggregation in improving localization in the attribute space, and show that global max pooling can lead to a significant performance improvement in generalized zero-shot learning, mainly driven by a drastic improvement on the unseen classes. Finally, we achieve state-of-the-art performance on the CUB and SUN datasets under both fine-tuning and non-fine-tuning settings, and also obtain competitive results on the AWA2 dataset. This simple pipeline can serve as a new baseline for zero-shot learning.
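The pipeline summarized above can be sketched in a few lines. The following is our own NumPy illustration, not the authors' code; shapes and names are assumptions based on the text (a 1x1 convolution mapping backbone features to one map per attribute, followed by global pooling and a compatibility score against per-class attribute vectors):

```python
import numpy as np

def selar_forward(feat, W_attr, class_attrs, pool="max"):
    """Sketch of a SELAR-style head.
    feat:        (C, H, W) backbone feature map
    W_attr:      (A, C)    1x1 conv mapping C channels to A attribute maps
    class_attrs: (K, A)    attribute vector per class
    Returns (K,) compatibility scores; the predicted class is the argmax.
    """
    # Localized attribute space: one spatial map per attribute.
    attr_maps = np.einsum('ac,chw->ahw', W_attr, feat)
    if pool == "max":
        attr_vec = attr_maps.max(axis=(1, 2))   # GMP variant (SELAR-GMP)
    else:
        attr_vec = attr_maps.mean(axis=(1, 2))  # GAP variant (SELAR-GAP)
    return class_attrs @ attr_vec
```

At test time, restricting `class_attrs` to unseen classes gives ZSL predictions, while stacking seen and unseen attribute vectors gives GZSL predictions.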
