Implicit and Explicit Attention for Zero-Shot Learning

by   Faisal Alamri, et al.
University of Exeter

Most of the existing Zero-Shot Learning (ZSL) methods focus on learning a compatibility function between the image representation and class attributes. A few others concentrate on learning an image representation combining local and global features. However, the existing approaches still fail to address the bias towards the seen classes. In this paper, we propose implicit and explicit attention mechanisms to address this bias problem in ZSL models. We formulate the implicit attention mechanism with a self-supervised image rotation-angle prediction task, which makes the model focus on the specific image features that aid in solving the task. The explicit attention mechanism is formulated with the multi-headed self-attention of the Vision Transformer model, which learns to map image features to the semantic space during training. We conduct comprehensive experiments on three popular benchmarks: AWA2, CUB and SUN. Our proposed attention mechanisms prove effective, achieving state-of-the-art harmonic mean on all three datasets.




1 Introduction

Most of the existing Zero-Shot Learning (ZSL) methods [44, 38] depend on pretrained visual features and focus on learning a compatibility function between the visual features and semantic attributes. Recently, attention-based approaches have gained popularity, as they obtain an image representation by directly recognising the object parts in an image that correspond to a given set of attributes [50, 53]. Consequently, models capturing both global and local visual information have been quite successful [50, 51]. Although visual attention models focus on object parts quite accurately, it has been observed that the recognised parts and attributes are often biased towards the training (or seen) classes due to learned correlations [51]. This is mainly because the model fails to decorrelate the visual attributes in images.

Therefore, to alleviate these difficulties, in this paper we consider two alternative attention mechanisms for reducing the effect of bias towards training classes in ZSL models. The first mechanism works via a self-supervised pretext task, such as recognition of the image rotation angle [27], which implicitly attends to specific parts of an image: to solve the pretext task, the model must focus on learning the image features that lead to its solution. Specifically, in this work, we rotate the input image concurrently by four different angles and then predict the rotation class. Since pretext tasks involve neither attributes nor class-specific information, the model does not learn correlations between visual features and attributes. Our second mechanism employs the Vision Transformer (ViT) [13] for mapping the visual features to the semantic space. ViT, with its rich multi-headed self-attention mechanism, explicitly attends to the image parts related to class attributes. In a further setting, we combine the implicit with the explicit attention mechanism to learn and attend to the necessary object parts in a decorrelated, independent way. We show that incorporating rotation angle recognition as a self-supervised task alongside ViT not only improves ZSL performance significantly, but, more importantly, also reduces the bias towards seen classes, which is still an open challenge in the Generalised Zero-Shot Learning (GZSL) task [43]. The explicit use of the attention mechanism is also examined, where the model is shown to enhance visual feature localisation and to attend to both global and discriminative local features, guided by the semantic information given during training. As illustrated in Fig. 1, images fed into the model are taken from two different sources: 1) labelled images, which are the training images taken from the seen classes, shown in green, and 2) other images, which could be taken from any source, shown in blue. In this paper, the model is implemented with either a ViT or a ResNet-101 [22] backbone. The first set of images is used to train the model to predict class attributes, leading to the class labels via a nearest-neighbour search. The second set of images is used for rotation angle recognition, guiding the model to learn visual representations via the implicit attention mechanism.

Figure 1: Our method maps the visual features to the semantic space provided with two different input images (unlabelled and labelled data). Green represents the labelled images provided to train the model to capture visual features and predict object classes. Blue represents the unlabelled images that are rotated and attached to the former set of images to recognise rotated image angles in a self-supervised task. The model learns the visual representations of the rotated images implicitly via the use of attention. (Best viewed in colour)

To summarise, in this paper we make the following contributions: (1) We propose the utilisation of alternative attention mechanisms for reducing the bias towards the seen classes in zero-shot learning. By involving a self-supervised pretext task, our model implicitly attends to decorrelated image parts that aid in solving the pretext task, thereby learning image features independent of the training classes. (2) We perform extensive experiments on three challenging benchmark datasets, i.e. AWA2, CUB and SUN, in the generalised zero-shot learning setting, and demonstrate the effectiveness of our proposed alternative attention mechanisms, achieving consistent improvement over the state-of-the-art methods. (3) The proposed method is evaluated with two backbone models, ResNet-101 and ViT; it shows significant improvement in model performance and reduces the bias towards seen classes. We also show the effectiveness of our model qualitatively by plotting the attention maps.

2 Related Work

In this section we briefly review the related work on zero-shot learning, Vision Transformers and self-supervised learning.

Zero-Shot Learning (ZSL): Zero-Shot Learning (ZSL) uses semantic side information such as attributes and word embeddings [47, 32, 36, 14, 16, 4] to predict classes that have never been presented during training. Early ZSL models train separate attribute classifiers assuming independence of attributes, and then estimate the posterior of the test classes by combining the attribute prediction probabilities [28]. Others do not follow the independence assumption and learn a linear [17, 3, 2] or non-linear [45] compatibility function from visual features to the semantic space. Some other works learn an inverse mapping from semantic to visual feature space [39, 55]. Learning a joint mapping of each space into a common space (i.e. a shared latent embedding) is also investigated in [45, 23, 20]. Different from the above approaches, generative models synthesise samples of unseen classes based on information learned from seen classes and their semantic information, to tackle the bias towards the seen classes [44, 58, 38]. Unlike other models, which focus on global visual features, attention-based methods aim to learn discriminative local visual features and then combine them with the global information [53, 59]. Examples include SGA [53] and AREN [50], which apply an attention-based network to automatically incorporate discriminative regions and provide a rich visual representation. In addition, GEN [49] proposes a graph reasoning method to learn relationships among multiple image regions. Others focus on improving localisation by adapting human gaze behaviour [30], exploiting a global average pooling scheme as an aggregation mechanism [52], or jointly learning both global and local features [59]. Inspired by the success of the recent attention-based ZSL models, in this paper we propose two alternative attention mechanisms to capture robust image features suitable for the ZSL task. Our first attention mechanism is implicit and is based on a self-supervised pretext task [27], whereas the second is explicit and is based on ViT [13]. To the best of our knowledge, both of these attention models are still unexplored in the context of ZSL. We also note that a deeper understanding of the visual representations learned with SSL and ViT is a future direction for the ZSL task.

Vision Transformer (ViT): The Transformer [41] adopts the self-attention mechanism to weigh the relevance of each element in the input data. Inspired by its success, it has been applied to many computer vision tasks [5, 13, 25], and many enhancements and modifications of the Vision Transformer (ViT) have been introduced. For example, CaiT [40] introduces deeper transformer networks; the Swin Transformer [31] proposes a hierarchical Transformer capturing visual representations by computing self-attention via shifted windows; and TNT [21] applies the Transformer to compute visual representations using both patch-level and pixel-level information. In addition, CrossViT [9] proposes a dual-branch Transformer with differently sized image patches. Recently, TransGAN [24] proposed a generative adversarial network completely free of convolutions, built solely on transformer-based architectures. Readers are referred to [25] for further reading on ViT-based approaches. The applicability of ViT-based models is growing, but they remain relatively unexplored for zero-shot image recognition tasks, where attention-based models have already attracted a lot of interest. Therefore, employing robust attention-based models such as ViT is timely and well justified for improving ZSL performance.

Self-Supervised Learning (SSL): Self-Supervised Learning (SSL) is widely used for unsupervised representation learning to obtain robust representations from raw data without expensive labels or annotations. Although recent SSL methods use contrastive objectives [10, 19], earlier works focused on defining pretext tasks, which typically involve a surrogate task on a domain with ample weak-supervision labels, such as predicting the rotation of images [27], the relative positions of patches in an image [11, 33], or image colours [29, 56]. Encoders trained to solve such pretext tasks are expected to learn general features that are useful for downstream tasks requiring expensive annotations (e.g. image classification). Furthermore, SSL has been widely used in various applications, such as few-shot learning [18] and domain generalisation [7]. In contrast, in this paper we utilise the self-supervised pretext task of image rotation prediction to obtain implicit image attention for solving ZSL.

Figure 2: IEAM-ZSL architecture. IEAM-ZSL consists of two pipelines, represented in green and blue, respectively. The former takes images from the ZSL datasets with their class-level information as input to the Transformer encoder for attribute prediction; outputs are compared with the semantic information of the corresponding images using the MSE loss as a regression task. The latter, shown in blue, is fed with images after generating four rotations for each (i.e. 0°, 90°, 180°, and 270°) to predict the rotation angle. At inference, only the ZSL test images, with no data augmentation, are input to the model to predict the class-level attributes; a search for the nearest class label is then conducted.

3 Implicit and Explicit Attention for Zero-Shot Learning

In this work, we propose an Implicit and Explicit Attention mechanism-based Model for solving image recognition in Zero-Shot Learning (IEAM-ZSL). We utilise self-supervised pretext tasks, such as image rotation angle recognition, for obtaining image attention in an implicit way. The main rationale here is that, to predict the correct image rotation angle, the model needs to focus on image features with discriminative textures, colours, etc., which implicitly attend to specific regions in an image. For explicit image attention, we utilise the multi-headed self-attention mechanism of the Vision Transformer model.

From the ZSL perspective, we follow the inductive approach for training our model, i.e. during training the model only has access to the training set (seen classes), consisting of the labelled images and the continuous attributes of the seen classes. Each seen class is annotated with a class-level semantic vector of continuous attributes. As depicted in Fig. 2, an RGB image is fed into the model. In addition to the labelled training set, we also use an auxiliary set of unlabelled images for predicting the image rotation angle to obtain implicit attention. Note that the images from the two sets may or may not overlap; in either case, the method does not utilise the categorical or semantic label information of the images in the auxiliary set.

3.1 Implicit Attention

Self-supervised pretext tasks provide a surrogate supervision signal for feature learning without any manual annotations [27, 12, 1], and it is well known that this type of supervision focuses on image features that help to solve the considered pretext task. It has also been shown that pretext tasks focus on meaningful image features and effectively avoid learning correlations between visual features [27]. Since self-supervised learning does not consider semantic class labels, spurious correlations among visual features are not learnt. Motivated by the above facts, we employ an image rotation angle prediction task to obtain implicitly attended image features. For that, we rotate an image x by 0°, 90°, 180° and 270°, and train the model to correctly classify the rotated images. Let rot(x, θ) be an operator that rotates an image x by an angle θ, where θ ∈ Θ = {0°, 90°, 180°, 270°}. Now let p_θ(x) be the predicted probability that the rotated image rot(x, θ) has rotation label θ; then the loss for training the underlying model is computed as follows:

L_rot(x) = −(1/|Θ|) Σ_{θ∈Θ} log p_θ(rot(x, θ))     (1)
In our case, the task of predicting the image rotation angle trains the model to focus on specific image regions with rich visual features (for example, textures or colours). This procedure implicitly learns to attend to image features.
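As a concrete illustration, the rotation pretext task can be sketched as follows; this is a minimal NumPy sketch with hypothetical helper names (the actual model in the paper is a ViT or ResNet-101 backbone, not shown here):

```python
import numpy as np

ANGLES = [0, 90, 180, 270]  # the four rotation classes

def make_rotation_batch(image):
    """Rotate one image by all four angles; labels are the angle indices."""
    rotated = [np.rot90(image, k=a // 90, axes=(0, 1)) for a in ANGLES]
    return rotated, np.arange(len(ANGLES))

def rotation_loss(probs, labels):
    """Eq. 1: cross-entropy over predicted rotation-class probabilities."""
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12)))

image = np.random.rand(32, 32, 3)
rotated, labels = make_rotation_batch(image)
```

Since the rotation labels are generated from the images themselves, no class or attribute annotation is involved, which is exactly why this supervision signal avoids attribute correlations.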

3.2 Explicit Attention

For obtaining explicit attention, we employ the Vision Transformer model [13], where each image with resolution H×W and C channels is fed into the model after resizing. The image is then split into a sequence of N flattened patches x_p, where N = HW/P² for patch size P×P. Patch embeddings (small red boxes in Fig. 2) are encoded by applying a trainable 2D convolution layer with kernel size (16, 16) and stride (16, 16). An extra learnable classification token (x_class) is prepended to the sequence to encode the global image representation. Position embeddings E_pos (orange boxes) are then added to the patch embeddings to obtain relative positional information. Patch embeddings are projected through a linear projection E to dimension D, as in Eq. 2. The embeddings are then passed to the Transformer encoder, which consists of Multi-Head Attention (MHA) (Eq. 3) and MLP blocks (Eq. 4). Layer normalisation (Norm) is applied before every block, and residual connections after every block. The image representation y is then produced as in Eq. 5:

z_0 = [x_class; x_p¹E; x_p²E; …; x_pᴺE] + E_pos     (2)
z′_l = MHA(Norm(z_{l−1})) + z_{l−1},  l = 1…L     (3)
z_l = MLP(Norm(z′_l)) + z′_l,  l = 1…L     (4)
y = Norm(z_L⁰)     (5)
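The patch-embedding step of Eq. 2 can be sketched as follows. The sizes (224×224 input, 16×16 patches, dimension 768) are assumptions based on the standard ViT configuration, and the trainable convolution is replaced here by an equivalent reshape-based patch extraction for illustration:

```python
import numpy as np

def patchify(image, patch=16):
    """Split an (H, W, C) image into a sequence of flattened patches."""
    H, W, C = image.shape
    n_h, n_w = H // patch, W // patch
    x = image.reshape(n_h, patch, n_w, patch, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(n_h * n_w, patch * patch * C)

rng = np.random.default_rng(0)
img = rng.random((224, 224, 3))
patches = patchify(img)                   # (196, 768): N = HW / P^2 patches
E = rng.random((patches.shape[1], 768))   # linear projection to dimension D
cls_token = np.zeros((1, 768))            # learnable [class] token
pos = rng.random((197, 768))              # learnable position embeddings
z0 = np.concatenate([cls_token, patches @ E], axis=0) + pos   # Eq. 2
```

In the actual model, E, the class token and the position embeddings are learned parameters rather than the random or zero stand-ins used here.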
Below we provide details of our multi-head attention mechanism within the ViT model.

Multi-Head Attention (MHA): Patch embeddings are fed into the transformer encoder, where multi-head attention takes place. Self-attention is performed for every patch in the sequence of patch embeddings independently; thus attention operates simultaneously over all patches, leading to multi-headed self-attention. It is computed by creating three vectors, namely Query (Q), Key (K) and Value (V), obtained by multiplying the patch embeddings by three trainable weight matrices (W_Q, W_K and W_V). A dot-product operation between Q and K yields a score matrix measuring how much each patch embedding attends to every other patch in the input sequence. The score matrix is then scaled down and converted into probabilities using a softmax. The probabilities are multiplied by the V vectors, as in Eq. 6, where d_k is the dimension of the vector K:

Attention(Q, K, V) = softmax(QKᵀ/√d_k)V     (6)

The multi-headed self-attention mechanism produces a number of self-attention matrices, which are concatenated, fed into a linear layer, and passed sequentially to 1) a regression head and 2) a classification head.

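A minimal single-head sketch of the scaled dot-product attention in Eq. 6 (the weight matrices below are random stand-ins for the learned W_Q, W_K and W_V):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # row i: how much patch i attends to each patch
    return scores @ V, scores

rng = np.random.default_rng(0)
X = rng.random((5, 8))            # 5 patch embeddings of dimension 8
W_q, W_k, W_v = (rng.random((8, 8)) for _ in range(3))
out, scores = self_attention(X, W_q, W_k, W_v)
```

Each row of the score matrix is a probability distribution over all patches, which is what lets the model weigh the relevance of every patch against every other.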
The multi-headed self-attention mechanism involved in the Vision Transformer guides our model to learn both global and local visual features. It is worth noting that the standard ViT has only one classification head implemented by an MLP; in our model this is changed to two heads to meet the two different underlying objectives. The first is a regression head applied to predict the class attributes, whereas the second is added for rotation angle classification. For the former task, the objective is the Mean Squared Error (MSE) loss, as in Eq. 7, where a denotes the K target attributes and â the predicted ones. For the latter task, the cross-entropy objective (Eq. 1) is applied:

L_MSE = (1/K) Σ_{i=1}^{K} (a_i − â_i)²     (7)

The total loss used for training our model is defined in Eq. 8 as a weighted combination of the two objectives, with weighting coefficients λ_attr and λ_rot:

L_total = λ_attr L_MSE + λ_rot L_rot     (8)

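The two training objectives and their combination can be sketched as follows; `lam_attr` and `lam_rot` are placeholder names for the weighting coefficients, whose actual values are not reproduced here:

```python
import numpy as np

def mse_loss(pred_attrs, target_attrs):
    """Eq. 7: mean squared error between predicted and target attributes."""
    return float(np.mean((pred_attrs - target_attrs) ** 2))

def cross_entropy(probs, labels):
    """Eq. 1: cross-entropy over predicted rotation-class probabilities."""
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12)))

def total_loss(pred_attrs, target_attrs, rot_probs, rot_labels,
               lam_attr=1.0, lam_rot=1.0):
    """Eq. 8: weighted sum of the regression and rotation objectives."""
    return (lam_attr * mse_loss(pred_attrs, target_attrs)
            + lam_rot * cross_entropy(rot_probs, rot_labels))
```

The regression head is supervised only by the labelled seen-class images, while the rotation head is supervised by the self-generated rotation labels, so the two terms can be computed on different parts of the same batch.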
During the inference phase, original test images from both the seen and unseen classes are input to the model. Class labels are then determined using the cosine similarity between the predicted attributes and every target class embedding.
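The inference step can be sketched as a cosine-similarity nearest-class search; the class embeddings below are toy values for illustration:

```python
import numpy as np

def predict_class(pred_attrs, class_embeddings):
    """Return the index of the class whose embedding is most cosine-similar."""
    a = pred_attrs / np.linalg.norm(pred_attrs)
    C = class_embeddings / np.linalg.norm(class_embeddings, axis=1, keepdims=True)
    return int(np.argmax(C @ a))

class_embeddings = np.array([[1.0, 0.0, 0.0],
                             [0.0, 1.0, 0.0],
                             [0.7, 0.7, 0.0]])
```

Because the search is over class-attribute embeddings rather than class logits, unseen classes can be predicted simply by including their attribute vectors in `class_embeddings` at test time.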

4 Experiments

Datasets: We have conducted our experiments on three popular ZSL datasets: AWA2, CUB and SUN, whose details are presented in Table 1. The main aim of this experimentation is to validate our proposed method, IEAM-ZSL, demonstrating its effectiveness and comparing it with the existing state-of-the-art methods. Among these datasets, AWA2 [47] consists of 37,322 images of 50 categories (40 seen + 10 unseen); each category is annotated with 85 binary as well as continuous class attributes. CUB [42] contains 11,788 images covering 200 different types of birds, among which 150 classes are considered seen and the other 50 unseen, following the split of [2]; together with the images, the CUB dataset also provides 312 attributes describing the birds. Finally, SUN [35] has the largest number of classes: it consists of 14,340 scene images of 717 types, divided into 645 seen and 72 unseen classes, annotated with 102 attributes.

Datasets               AWA2 [47]       CUB [42]        SUN [35]
Number of Classes
(Seen + Unseen)        50 (40 + 10)    200 (150 + 50)  717 (645 + 72)
Number of Attributes   85              312             102
Number of Images       37,322          11,788          14,340
Table 1: Dataset statistics: the number of classes (seen + unseen classes shown within parentheses), the number of attributes and the number of images per dataset.

Implementation Details:

In our experiments, we have used two different backbones: (1) ResNet-101 and (2) the Vision Transformer (ViT), both pretrained on ImageNet and then finetuned for the ZSL tasks on the datasets mentioned above. Images are resized before being input to the model. For ViT, the primary baseline model uses an input patch size of 16×16, with a fixed hidden dimension and a fixed number of encoder layers and self-attention heads per layer. We use the Adam optimiser for training our model with a fixed learning rate and batch size. In the setting where we use the self-supervised pretext task, we construct the batch with seen training images from the labelled set together with rotated images from the auxiliary set (i.e. eight images, obtained by rotating each source image by 0°, 90°, 180° and 270°). We have implemented our model with the PyTorch deep learning framework (our code is publicly available) and trained the model on a GeForce RTX 3090 GPU on a workstation with a Xeon processor and 32 GB of memory.

Evaluation: The proposed model is evaluated on the three above-mentioned datasets. We have followed the inductive approach for training, i.e. our model has access to neither visual nor side-information of the unseen classes during training. During the evaluation, we follow the GZSL protocol. Following [46], we compute the top-1 accuracy for both seen and unseen classes; in addition, the harmonic mean of the top-1 accuracies on the seen and unseen classes is used as the main evaluation criterion. Inspired by recent works [52, 50, 8], we use Calibrated Stacking [8] for evaluating our model in the GZSL setting. The calibration factor is dataset-dependent and is decided based on a validation set for each of AWA2, CUB and SUN.
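The evaluation protocol can be sketched as follows: per-class top-1 accuracy averaged over classes, the harmonic mean H = 2SU/(S+U), and Calibrated Stacking, which subtracts a calibration factor from the scores of seen classes before the argmax (function names are illustrative). For instance, the AWA2 row of Table 2 (S = 89.9, U = 53.7) gives H ≈ 67.2:

```python
import numpy as np

def per_class_top1(preds, labels):
    """Top-1 accuracy averaged per class, as in the GZSL protocol of [46]."""
    classes = np.unique(labels)
    return float(np.mean([np.mean(preds[labels == c] == c) for c in classes]))

def harmonic_mean(acc_seen, acc_unseen):
    """H = 2 * S * U / (S + U), the main GZSL criterion."""
    return 2 * acc_seen * acc_unseen / (acc_seen + acc_unseen)

def calibrated_argmax(scores, seen_mask, gamma):
    """Calibrated Stacking [8]: penalise seen-class scores by factor gamma."""
    return np.argmax(scores - gamma * seen_mask, axis=1)
```

Averaging accuracy per class (rather than per image) prevents frequent classes from dominating the metric, and the harmonic mean heavily penalises models that are accurate on seen classes but biased against unseen ones.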

4.1 Quantitative Results

Method                          AWA2 [47]          CUB [42]           SUN [35]
                                S     U     H      S     U     H      S     U     H
DAP [28] 84.7 0.0 0.0 67.9 1.7 3.3 25.1 4.2 7.2
IAP [28] 87.6 0.9 1.8 72.8 0.2 0.4 37.8 1.0 1.8
DeViSE [17] 74.7 17.1 27.8 53.0 23.8 32.8 30.5 14.7 19.8
ConSE [34] 90.6 0.5 1.0 72.2 1.6 3.1 39.9 6.8 11.6
ESZSL [37] 77.8 5.9 11.0 63.8 12.6 21.0 27.9 11.0 15.8
SJE [3] 73.9 8.0 14.4 59.2 23.5 33.6 30.5 14.7 19.8
SSE [57] 82.5 8.1 14.8 46.9 8.5 14.4 36.4 2.1 4.0
LATEM [45] 77.3 11.5 20.0 57.3 15.2 24.0 28.8 14.7 19.5
ALE [2] 81.8 14.0 23.9 62.8 23.7 34.4 33.1 21.8 26.3
*GAZSL [58] 86.5 19.2 31.4 60.6 23.9 34.3 34.5 21.7 26.7
SAE [26] 82.2 1.1 2.2 54.0 7.8 13.6 18.0 8.8 11.8
*f-CLSWGAN [44] 64.4 57.9 59.6 57.7 43.7 49.7 36.6 42.6 39.4
AREN [50] 79.1 54.7 64.7 63.2 69.0 66.0 40.3 32.3 35.9
*f-VAEGAN-D2 [48] 76.1 57.1 65.2 75.6 63.2 68.9 50.1 37.8 43.1
SGMA [59] 87.1 37.6 52.5 71.3 36.7 48.5 - - -
IIR [6] 83.2 48.5 61.3 52.3 55.8 53.0 30.4 47.9 36.8
*E-PGN [54] 83.5 52.6 64.6 61.1 52.0 56.2 - - -
SELAR [52] 78.7 32.9 46.4 76.3 43.0 55.0 37.2 23.8 29.0
ResNet-101 [22] 66.7 40.1 50.1 59.5 52.3 55.7 35.5 28.8 31.8
ResNet-101 with Implicit Attention 74.1 45.9 56.8 62.7 54.5 58.3 36.3 31.9 33.9
Our model (ViT) 90.0 51.9 65.8 75.2 67.3 71.0 55.3 44.5 49.3
Our model (ViT) with Implicit Attention 89.9 53.7 67.2 73.8 68.6 71.1 54.7 48.2 51.3
  • S, U and H denote the top-1 accuracy (%) on Seen classes, Unseen classes, and their Harmonic mean, respectively. For each scenario, the best result is in red and the second best in blue. * indicates generative representation learning methods.

Table 2: Generalised zero-shot classification performance on AWA2, CUB and SUN. Reported models are ordered in terms of their publishing dates. Results are reported in %.

Table 2 presents a quantitative comparison between the state-of-the-art methods and the proposed method using two different backbones: (1) ResNet-101 [22] and (2) ViT [13]. The baseline models' performance without the SSL approach is also reported. The performance of each model is shown in % in terms of Seen (S) and Unseen (U) classes and their harmonic mean (H). As reported, the classical ZSL models [28, 17, 34, 45, 2] show good performance on seen classes; however, they perform poorly on unseen classes and suffer from the bias issue, resulting in a very low harmonic mean. Among the classical approaches, [2] performs best on all three datasets, as it overcomes the shortcomings of the previous models and considers the dependency between attributes. Among generative approaches, f-VAEGAN-D2 [48] performs best. Although f-CLSWGAN [44] achieves the highest score on AWA2 unseen classes, it shows lower harmonic means than [48] on all the datasets. The top scores for AWA2 unseen-class accuracy are obtained by generative models [44, 48], which we attribute to their inclusion of both seen and synthesised unseen features during the training phase. Moreover, attention-based models such as [59, 50], which are the closest to our proposed model, perform better than the other models due to their inclusion of global and local representations. [50] outperforms all reported models on the unseen classes of the CUB dataset, but still has low harmonic means on all the datasets. SGMA [59] performs poorly on both AWA2 and CUB and clearly suffers from the bias issue: its performance on unseen classes is deficient compared to other models. A recent model, SELAR [52], uses global maximum pooling as an aggregation method and achieves the best score on CUB seen classes, but achieves low harmonic means; its performance is also considerably impacted by the bias issue.

ResNet-101: For a fair evaluation of the robustness and effectiveness of our proposed alternative attention-based approach, we consider ResNet-101 [22] as one of our backbones, as it is also used in prior related work [17, 2, 26, 54, 52]. We use the ResNet-101 backbone as a baseline model, where we only consider the global representation. Moreover, we also use this backbone with implicit attention, i.e. during training we simultaneously impose a self-supervised image rotation angle prediction task. Note that, for producing the results in Table 2, we only use images from the seen classes as the source of rotated images for the rotation angle prediction task. As presented in Table 2, our model with the ResNet-101 backbone performs inferiorly compared to our implicit and explicit variant, which is discussed in the next paragraph. However, even with the ResNet-101 backbone, the contribution of our implicit attention mechanism should be noted, as it provides a substantial boost to the model performance. For example, on AWA2, a considerable increment is observed on both seen and unseen classes, leading to a significant increase in the harmonic mean (i.e. from 50.1% to 56.8%). The performance of the majority of the related methods suffers from bias towards the seen classes; we argue that our method tends to mitigate this issue on all three datasets. Our method enables the model to learn the visual representations of unseen classes implicitly; hence the performance is increased and the bias issue is alleviated. Similarly, on the SUN dataset, although it consists of 717 classes, the proposed implicit attention mechanism provides ResNet-101 with an increase in accuracy on both seen and unseen classes, leading to an increase of 2.1 points in the harmonic mean, i.e. from 31.8% to 33.9%.

Vision Transformer (ViT): We use the Vision Transformer (ViT) as another backbone to enable explicit attention in our model. As with the ResNet-101 backbone, we also combine the implicit attention mechanism with the ViT backbone: during training, we simultaneously impose the self-supervised image rotation angle prediction task, again using only images from the seen classes for the rotation task. As shown in Table 2, explicit attention performs very well on all three datasets and outperforms all the previously reported results by a significant margin. Such results are expected due to the self-attention employed in ViT: it captures both global and local features explicitly, guided by the class attributes given during training. Furthermore, attention operates on each element of the input patch embeddings after the image is split, effectively weighing the relevance of different patches and resulting in more compact representations. Although the explicit attention mechanism provides better visual understanding, the effectiveness of the implicit attention process, in terms of recognising the image rotation angle, is also quite important: it not only improves the performance further but also reduces the bias issue considerably, as can be seen in the performance on the unseen classes. In addition, it allows the model, via an implicit use of self-attention, to encapsulate the visual features and regions that are semantically relevant to the class attributes. Our model achieves the highest harmonic mean among all the reported models on all three datasets. On AWA2, our approach scores the third-highest accuracy on both seen and unseen classes, but the highest harmonic mean. Note that on the AWA2 dataset our model still suffers from bias towards seen classes. We speculate that this is due to the lack of co-occurrence of some vital and identifying attributes between seen and unseen classes. For example, the attributes nocturnal in bat, longneck in giraffe or flippers in seal score the highest in the class-attribute vectors, but rarely appear among other classes. On the CUB dataset, however, this issue seems to be mitigated, as our model scores the highest harmonic mean (i.e. 71.1%), with an increased performance on unseen classes compared to our model with explicit attention only. Finally, our model with implicit and explicit attention achieves the highest score on the unseen classes of the SUN dataset, resulting in the best achieved harmonic mean. In summary, our proposed implicit and explicit attention mechanisms prove to be very effective across all three considered datasets. Explicit attention using the ViT backbone with multi-head self-attention is quite important for the good performance of the ZSL model. Implicit attention in terms of a self-supervised pretext task is another important mechanism, as it boosts the performance on unseen classes and provides better generalisation.


Figure 3: Examples of implicit and explicit attention. First column: original images; second and third: attention maps without and with SSL, respectively; fourth and fifth: attention fusions without and with SSL, respectively. Our model benefits from using the attention mechanism and can implicitly learn object-level attributes and their discriminative features.

Attention Maps: Fig. 3 presents some qualitative results, i.e. attention maps and fusions obtained by our proposed implicit and explicit attention-based model. For generating these qualitative results, we use our model with the explicit attention mechanism, i.e. the ViT backbone. Attention maps and fusions are presented for four randomly chosen images from the considered datasets. Explicit attention with the ViT backbone seems to be quite important for the ZSL tasks, as it can accurately focus on the object appearing in the image, which justifies the better performance obtained by our model with this backbone. Inclusion of the implicit attention mechanism, in terms of self-supervised rotated-image angle prediction, further enhances the attention maps and focuses particularly on the specific image parts important for that object class. For example, as shown in the first row of Fig. 3, our model with implicit and explicit attention focuses on both global and local features of the whale (i.e. water, big, swims, hairless, bulbous, flippers, etc.). Similarly, on the CUB dataset, the model pays attention to the object's global features and, more importantly, to the discriminative local features (i.e. the loggerhead shrike has a white belly, breast and throat, and a black crown, forehead and bill). For natural images taken from the SUN dataset, our model with implicit attention focuses on the ziggurat, paying more attention to its global features. Furthermore, as in the airliner image illustrated in the last row, our model considers both global and discriminative features, leading to a precise attention map that focuses accurately on the object.

Source of Rotated Images   Backbone     AWA2              CUB               SUN
(Implicit Attention)                    S    U    H       S    U    H       S    U    H
S & U                      ResNet-101   79.9 44.2 56.4    60.1 56.0 58.0    35.0 33.1 33.7
                           ViT          87.3 56.8 68.8    74.2 68.9 71.1    54.7 50.0 52.2
PASCAL                     ResNet-101   72.0 44.3 54.8    62.5 53.1 57.4    35.6 30.3 33.1
                           ViT          88.1 51.8 65.2    73.4 68.0 70.6    55.2 46.3 50.6
PASCAL & U                 ResNet-101   75.1 46.5 57.4    62.9 54.4 58.4    33.7 32.7 33.2
                           ViT          89.8 53.2 66.8    73.0 69.7 71.3    53.9 51.0 52.4
PASCAL & S                 ResNet-101   73.1 44.5 55.4    62.5 53.2 57.5    36.6 30.1 33.1
                           ViT          91.2 51.6 65.9    73.7 68.8 71.1    54.2 46.9 50.9
Table 3: Ablation performance of our model with ResNet-101 and ViT backbones on the AWA2, CUB and SUN datasets. The training images from the seen classes are used for the ZSL objective throughout, while the source of rotated images varies as noted in the first column. S, U and PASCAL respectively denote training images from the seen classes, test images from the unseen classes, and PASCAL VOC2012 training-set images.
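The H columns in Table 3 follow the standard generalised ZSL protocol: H is the harmonic mean of the per-class accuracies on seen (S) and unseen (U) classes, which penalises models biased towards the seen classes. A minimal helper makes the metric concrete (the function name is ours):

```python
def harmonic_mean(seen_acc, unseen_acc):
    """GZSL harmonic mean H = 2*S*U / (S + U), with accuracies in percent.

    H is high only when both S and U are high, so a model that
    classifies everything into seen classes scores poorly.
    """
    if seen_acc + unseen_acc == 0:
        return 0.0
    return 2 * seen_acc * unseen_acc / (seen_acc + unseen_acc)
```

For example, a biased model with S = 90 and U = 10 gets H = 18, while a balanced one with S = U = 50 gets H = 50.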

4.2 Ablation Study

Our ablation study evaluates the effectiveness of our proposed implicit and explicit attention-based model for ZSL tasks. Here we mainly analyse how the outcome of our approach changes when we vary the set of images sampled for the self-supervised image angle prediction task during training. In Section 4.1, we used only the seen-class images for this purpose; however, we also note important observations when this set is changed. Note that any collection of images can be used here, since no semantic class annotation is required: the only supervision is the rotation-angle label, which can be generated online during training. In Table 3, we present results on all three considered datasets with the above-mentioned evaluation metrics, varying only the source of rotated images as noted in the first column of Table 3. In all these settings, the set of training images for the ZSL objective remains fixed to the images from the seen classes. In all settings, we observe that explicit attention via the ViT backbone performs significantly better than a classical CNN backbone such as ResNet-101. We also observe that including unlabelled images from the unseen classes (which can be considered transductive ZSL [2]) significantly boosts the performance on all the datasets (see rows 1 and 3 in Table 3). Moreover, including datasets that contain diverse images, such as PASCAL [15], improves the performance on unseen classes and increases generalisation.

5 Conclusion

This paper has proposed implicit and explicit attention mechanisms for solving the zero-shot learning task. For implicit attention, our model imposes a self-supervised rotated-image angle prediction task; for explicit attention, it employs the multi-head self-attention mechanism of the Vision Transformer to map visual features to the semantic space. We have considered three publicly available datasets, AWA2, CUB and SUN, to show the effectiveness of our model. Throughout our extensive experiments, explicit attention via the multi-head self-attention mechanism of ViT proves very important for the ZSL task. Additionally, the implicit attention mechanism also proves effective for learning image representations for zero-shot image recognition, as it boosts the performance on unseen classes and provides better generalisation. Our proposed model based on implicit and explicit attention has produced very encouraging results and, in particular, has achieved state-of-the-art performance in terms of harmonic mean on all three considered benchmarks, which shows the importance of attention-based models for the ZSL task.


Acknowledgements

This work was supported by the Defence Science and Technology Laboratory (Dstl) and the Alan Turing Institute (ATI). The TITAN Xp and TITAN V used for this research were donated by the NVIDIA Corporation.


References

  • [1] P. Agrawal, J. Carreira, and J. Malik (2015) Learning to see by moving. In ICCV, Cited by: §3.1.
  • [2] Z. Akata, F. Perronnin, Z. Harchaoui, and C. Schmid (2016) Label-embedding for image classification. IEEE TPAMI. Cited by: §2, §4.1, §4.1, §4.2, Table 2, §4.
  • [3] Z. Akata, S. E. Reed, D. Walter, H. Lee, and B. Schiele (2015) Evaluation of output embeddings for fine-grained image classification. In CVPR, Cited by: §2, Table 2.
  • [4] F. Alamri and A. Dutta (2021) Multi-Head Self-Attention via Vision Transformer for Zero-Shot Learning. In IMVIP, Cited by: §2.
  • [5] F. Alamri, S. Kalkan, and N. Pugeault (2021) Transformer-encoder detector module: using context to improve robustness to adversarial attacks on object detection. In ICPR, Cited by: §2.
  • [6] Y. L. Cacheux, H. L. Borgne, and M. Crucianu (2019) Modeling inter and intra-class relations in the triplet loss for zero-shot learning. In ICCV, Cited by: Table 2.
  • [7] F. M. Carlucci, A. D’Innocente, S. Bucci, B. Caputo, and T. Tommasi (2019) Domain Generalization by Solving Jigsaw Puzzles. In CVPR, Cited by: §2.
  • [8] W. Chao, S. Changpinyo, B. Gong, and F. Sha (2016) An empirical study and analysis of generalized zero-shot learning for object recognition in the wild. In ECCV, Cited by: §4.
  • [9] C. Chen, Q. Fan, and R. Panda (2021) CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification. In ICCV, Cited by: §2.
  • [10] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton (2020) A Simple Framework for Contrastive Learning of Visual Representations. In ICML, Cited by: §2.
  • [11] C. Doersch, A. Gupta, and A. A. Efros (2015) Unsupervised visual representation learning by context prediction. In ICCV, Cited by: §2.
  • [12] A. Dosovitskiy, J. T. Springenberg, M. A. Riedmiller, and T. Brox (2014) Discriminative unsupervised feature learning with convolutional neural networks. In NIPS, Cited by: §3.1.
  • [13] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby (2021) An image is worth 16x16 words: transformers for image recognition at scale. In ICLR, Cited by: §1, §2, §2, §3.2, §4.1.
  • [14] A. Dutta and Z. Akata (2020) Semantically Tied Paired Cycle Consistency for Any-Shot Sketch-Based Image Retrieval. IJCV. Cited by: §2.
  • [15] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman (2012) The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results. Cited by: §4.2.
  • [16] M. Federici, A. Dutta, P. Forré, N. Kushman, and Z. Akata (2020) Learning Robust Representations via Multi-View Information Bottleneck. In ICLR, Cited by: §2.
  • [17] A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, M. Ranzato, and T. Mikolov (2013) DeViSE: a deep visual-semantic embedding model. In NIPS, Cited by: §2, §4.1, §4.1, Table 2.
  • [18] S. Gidaris, A. Bursuc, N. Komodakis, P. Perez, and M. Cord (2019) Boosting few-shot visual learning with self-supervision. In ICCV, Cited by: §2.
  • [19] J. Grill, F. Strub, F. Altché, C. Tallec, P. H. Richemond, E. Buchatskaya, C. Doersch, B. A. Pires, Z. D. Guo, M. G. Azar, B. Piot, K. Kavukcuoglu, R. Munos, and M. Valko (2020) Bootstrap your own latent: A new approach to self-supervised Learning. In NeurIPS, Cited by: §2.
  • [20] O. Gune, B. Banerjee, and S. Chaudhuri (2018) Structure aligning discriminative latent embedding for zero-shot learning. In BMVC, Cited by: §2.
  • [21] K. Han, A. Xiao, E. Wu, J. Guo, C. Xu, and Y. Wang (2021) Transformer in transformer. arXiv. Cited by: §2.
  • [22] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, Cited by: §1, §4.1, §4.1, Table 2.
  • [23] H. Jiang, R. Wang, S. Shan, Y. Yang, and X. Chen (2017) Learning discriminative latent attributes for zero-shot classification. In ICCV, Cited by: §2.
  • [24] Y. Jiang, S. Chang, and Z. Wang (2021) TransGAN: Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up. In CVPR, Cited by: §2.
  • [25] S. Khan, M. Naseer, M. Hayat, S. W. Zamir, F. Khan, and M. Shah (2021) Transformers in vision: a survey. arXiv. Cited by: §2.
  • [26] E. Kodirov, T. Xiang, and S. Gong (2017) Semantic autoencoder for zero-shot learning. In CVPR, Cited by: §4.1, Table 2.
  • [27] N. Komodakis and S. Gidaris (2018) Unsupervised representation learning by predicting image rotations. In ICLR, Cited by: §1, §2, §2, §3.1.
  • [28] C. H. Lampert, H. Nickisch, and S. Harmeling (2009) Learning to detect unseen object classes by between-class attribute transfer. In CVPR, Cited by: §2, §4.1, Table 2.
  • [29] G. Larsson, M. Maire, and G. Shakhnarovich (2016) Learning representations for automatic colorization. In ECCV, Cited by: §2.
  • [30] Y. Liu, L. Zhou, X. Bai, Y. Huang, L. Gu, J. Zhou, and T. Harada (2021) Goal-oriented gaze estimation for zero-shot learning. In CVPR, Cited by: §2.
  • [31] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo (2021) Swin transformer: hierarchical vision transformer using shifted windows. arXiv. Cited by: §2.
  • [32] T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean (2013) Distributed representations of words and phrases and their compositionality. In NIPS, Cited by: §2.
  • [33] M. Noroozi and P. Favaro (2016) Unsupervised learning of visual representations by solving jigsaw puzzles. In ECCV, Cited by: §2.
  • [34] M. Norouzi, T. Mikolov, S. Bengio, Y. Singer, J. Shlens, A. Frome, G. Corrado, and J. Dean (2014) Zero-shot learning by convex combination of semantic embeddings. In ICLR, Cited by: §4.1, Table 2.
  • [35] G. Patterson and J. Hays (2012) SUN attribute database: discovering, annotating, and recognizing scene attributes. In CVPR, Cited by: Table 1, §4.
  • [36] J. Pennington, R. Socher, and C. D. Manning (2014) Glove: global vectors for word representation. In EMNLP, Cited by: §2.
  • [37] B. Romera-Paredes and P. Torr (2015) An embarrassingly simple approach to zero-shot learning. In ICML, Cited by: Table 2.
  • [38] E. Schönfeld, S. Ebrahimi, S. Sinha, T. Darrell, and Z. Akata (2019) Generalized zero- and few-shot learning via aligned variational autoencoders. CVPR. Cited by: §1, §2.
  • [39] Y. Shigeto, I. Suzuki, K. Hara, M. Shimbo, and Y. Matsumoto (2015) Ridge regression, hubness, and zero-shot learning. In ECML/PKDD, Cited by: §2.
  • [40] H. Touvron, M. Cord, A. Sablayrolles, G. Synnaeve, and H. Jégou (2021) Going deeper with image transformers. arXiv. Cited by: §2.
  • [41] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In NIPS, Cited by: §2.
  • [42] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie (2011) The Caltech-UCSD Birds-200-2011 Dataset. Technical report California Institute of Technology. Cited by: Table 1, §4.
  • [43] W. Wang, V. Zheng, H. Yu, and C. Miao (2019) A survey of zero-shot learning. ACM-TIST. Cited by: §1.
  • [44] Y. Xian, T. Lorenz, B. Schiele, and Z. Akata (2018) Feature generating networks for zero-shot learning. In CVPR, Cited by: §1, §2, §4.1, Table 2.
  • [45] Y. Xian, Z. Akata, G. Sharma, Q. Nguyen, M. Hein, and B. Schiele (2016) Latent embeddings for zero-shot classification. In CVPR, Cited by: §2, §4.1, Table 2.
  • [46] Y. Xian, C. H. Lampert, B. Schiele, and Z. Akata (2019) Zero-shot learning—a comprehensive evaluation of the good, the bad and the ugly. IEEE TPAMI. Cited by: §4.
  • [47] Y. Xian, B. Schiele, and Z. Akata (2017) Zero-shot learning - the good, the bad and the ugly. In CVPR, Cited by: §2, Table 1, §4.
  • [48] Y. Xian, S. Sharma, B. Schiele, and Z. Akata (2019) F-VAEGAN-D2: A Feature Generating Framework for Any-Shot Learning. In CVPR, Cited by: §4.1, Table 2.
  • [49] G. Xie, L. Liu, F. Zhu, F. Zhao, Z. Zhang, Y. Yao, J. Qin, and L. Shao (2020) Region graph embedding network for zero-shot learning. In ECCV, Cited by: §2.
  • [50] G. Xie, L. Liu, X. Jin, F. Zhu, Z. Zhang, J. Qin, Y. Yao, and L. Shao (2019) Attentive region embedding network for zero-shot learning. In CVPR, Cited by: §1, §2, §4.1, Table 2, §4.
  • [51] W. Xu, Y. Xian, J. Wang, B. Schiele, and Z. Akata (2020) Attribute prototype network for zero-shot learning. In NIPS, Cited by: §1.
  • [52] S. Yang, K. Wang, L. Herranz, and J. van de Weijer (2021) On implicit attribute localization for generalized zero-shot learning. IEEE SPL. Cited by: §2, §4.1, §4.1, Table 2, §4.
  • [53] Y. Yu, Z. Ji, Y. Fu, J. Guo, Y. Pang, and Z. (. Zhang (2018) Stacked semantics-guided attention model for fine-grained zero-shot learning. In NeurIPS, Cited by: §1, §2.
  • [54] Y. Yu, Z. Ji, J. Han, and Z. Zhang (2020) Episode-based prototype generating network for zero-shot learning. In CVPR, Cited by: §4.1, Table 2.
  • [55] L. Zhang, T. Xiang, and S. Gong (2017) Learning a deep embedding model for zero-shot learning. In CVPR, Cited by: §2.
  • [56] R. Zhang, P. Isola, and A. A. Efros (2016) Colorful image colorization. In ECCV, Cited by: §2.
  • [57] Z. Zhang and V. Saligrama (2015) Zero-shot learning via semantic similarity embedding. In ICCV, Cited by: Table 2.
  • [58] Y. Zhu, M. Elhoseiny, B. Liu, X. Peng, and A. Elgammal (2018) Imagine it for me: generative adversarial approach for zero-shot learning from noisy texts. In CVPR, Cited by: §2, Table 2.
  • [59] Y. Zhu, J. Xie, Z. Tang, X. Peng, and A. Elgammal (2019) Semantic-guided multi-attention localization for zero-shot learning. In NeurIPS, Cited by: §2, §4.1, Table 2.