Weakly supervised object localization (WSOL) identifies the location of an object in a scene using far less detailed annotations than the fully supervised setting requires. WSOL is challenging because the network has access only to image-level labels (e.g., “cat” or “no cat”) that confirm the existence of the target object, without the guidance of expensive bounding box annotations.
To address the WSOL problem with convolutional neural networks (CNNs), a common starting point is Class Activation Mapping (CAM) for CNNs with global average pooling, which enables classification-trained CNNs to perform object localization. Unfortunately, CAM tends to cover only the most discriminative part of the object instead of its full extent: focusing on such parts improves the classification accuracy, but it degrades the localization accuracy.
Existing approaches have explored adversarial erasing, Hide-and-Seek (HaS), and the Attention-based Dropout Layer (ADL). Specifically, the Adversarial Complementary Learning (ACoL) approach can efficiently locate different object parts and discover complementary regions belonging to the same objects or categories through two adversarial classifiers in a weakly supervised manner. HaS randomly hides patches in a training image, forcing the network to seek multiple relevant parts. ADL hides the most discriminative part from the model to pursue the full object extent, while highlighting the informative region to improve the recognition power of the model. In fact, these techniques are pixel-based rather than truly region-based dropout: the drop mask of ADL is generated by setting each pixel to 0 if it is larger than the drop threshold, and to 1 if it is smaller. However, neighbouring pixels are spatially correlated on the convolutional feature map and share much of the same information. Pixel-based dropout discards the information at the dropped locations, but that information is still passed on through adjacent pixels that remain active.
Erasing the most discriminative parts is a simple yet effective strategy for WSOL. For example, ADL is a lightweight yet powerful method that uses a self-attention mechanism to remove the most discriminative part of the target object. However, erasing methods abandon all the information in the most discriminative regions. This induces the model to learn the less discriminative parts and sometimes to capture useless background information, which leads to attention misdirection and biased localization. As shown in Figure 1, the resulting bounding box is too large to locate the object precisely, and the classification performance deteriorates because the attention has shifted to other objects.
In this paper, we propose a dual-attention guided dropblock module (DGDM), a lightweight yet powerful method for WSOL, which is illustrated in Figure 2. It contains two key components, channel attention guided dropout (CAGD) and spatial attention guided dropblock (SAGD), which learn complementary and discriminative visual patterns in the channel and spatial dimensions, respectively. Specifically, in CAGD, we first aggregate the spatial information of the input feature map by global average pooling (GAP) to generate channel attention. We rank the obtained channel attentions according to a measure of importance (e.g., magnitude) and drop out elements with low importance. For SAGD, a self-attention map is generated by performing channelwise average pooling on the input feature map. We generate an importance map using the sigmoid activation to highlight the most discriminative parts of the object and suppress less useful ones, and we generate a drop mask by thresholding the self-attention map. SAGD can not only remove information efficiently by erasing contiguous regions of feature maps rather than individual pixels, but also sense foreground objects and background regions to alleviate attention misdirection. The drop mask and the importance map are selected stochastically at each iteration and applied to the input feature map.
Deep networks implemented with DGDM perform image classification and WSOL jointly. With an end-to-end learning procedure, the proposed method captures complementary and discriminative visual patterns for precise object localization while maintaining good image classification performance.
Our main contributions include:
(1) We propose a lightweight and efficient attention module (DGDM) that can be easily applied to the convolutional feature maps of a model to improve WSOL performance. The spatial self-attention mechanism is designed to remove the most discriminative part of the target object by generating a drop mask. This drop mask efficiently removes information by erasing contiguous regions of feature maps, and also senses foreground objects and background regions.
(2) We extend the channel attention mechanism to the task of WSOL to model channel interdependencies. We rank channel attentions according to a measure of importance and treat the top-$k$ largest-magnitude attentions as important, while keeping some low-valued elements so that they can increase in value if they become important during training.
(3) The proposed method can be easily applied to different CNN classifiers and achieves new state-of-the-art localization accuracy on CUB-200-2011 and ImageNet-1k. We also demonstrate that its benefits generalize to the Stanford Cars dataset.
2 Related Work
2.0.1 Attention mechanism.
The attention mechanism is a data processing method inspired by the human perception process. It does not process all the data equally, but assigns greater weight to the most informative parts. Attention mechanisms have demonstrated their utility across various fields, such as scene segmentation, image localization and understanding, and image inpainting. In particular, the self-attention mechanism was first proposed to draw global dependencies of inputs and was applied to machine translation. Residual attention networks (RAN) improved the accuracy of classification models using a 3D self-attention map, at the cost of heavy parameters. The squeeze-and-excitation module was introduced to exploit channel interdependencies; it significantly reduces the parameter overhead of attention extraction compared to RAN and allows the network to perform feature recalibration. The convolutional block attention module was proposed to emphasize meaningful features by blending cross-channel and spatial information together. However, all of these techniques require additional training parameters for extracting the attention map.
2.0.2 Dropout in convolutional neural networks.
Dropout, introduced in 2012, has proven to be a practical technique for alleviating overfitting in fully-connected neural networks: it drops neuron activations with some fixed probability during training. All activations are used during the testing phase, but the output is scaled according to the dropout probability. In the years since, many methods inspired by the original technique have been proposed, including dropconnect, variational dropout, Monte Carlo dropout, and many others; a complete review can be found in a survey. However, unlike in fully connected layers, applying dropout to convolutional feature maps is less effective. This reduction can largely be attributed to two factors. First, fully-connected layers have far more parameters than convolutional layers, so convolutional layers require less regularization. Second, spatially neighbouring pixels on a convolutional feature map are strongly correlated and share much of the same information; pixel-based dropout discards information at the dropped locations, but that information is still passed on through adjacent pixels that remain active.
In an attempt to apply dropout to convolutional feature maps, Cutout drops contiguous sections of the input rather than individual pixels in the input layer of CNNs. This encourages the network to better utilize the full context of the image, rather than relying on the presence of a small set of specific visual features. Dropblock generalizes Cutout by applying it to every feature map in a convolutional network; its main difference from dropout is that it drops contiguous regions from a layer's feature map instead of independent random units. ADL uses an attention mechanism to find the maximally activated part and then drops it out. However, as discussed in Subsection 3.3, ADL effectively drops the maximally activated pixels rather than the maximally activated region.
2.0.3 Weakly supervised object localization.
WSOL is a cheaper alternative for identifying the location of an object in a scene using only image-level supervision, i.e., the presence or absence of object categories. One WSOL approach decomposes an image into a “bag” of region proposals and iteratively chooses an instance (a proposal) from each bag (an image with multiple proposals) to minimize the image classification error in a step-wise manner. Recent research utilizes CNN classifiers to specify the spatial distribution of discriminative patterns for different image classes. One way to pursue the full object extent is self-paced learning. The self-produced guidance (SPG) approach uses a classification network to learn high-confidence regions, and then leverages attention maps to learn the object extent under guidance masks of foreground and background regions in a stagewise manner. Another way to enhance object localization is adversarial erasing and hide-and-seek, which first activate the most discriminative regions on the input image or feature map and then erase them so that less discriminative regions can be activated during training. Nevertheless, most existing approaches use alternative optimization or require substantial computing resources to remove the most discriminative part accurately.
3 Dual-attention guided dropblock module
3.1 Spatial attention module
Let $F \in \mathbb{R}^{C \times H \times W}$ be a convolutional feature map, where $C$ denotes the channel number, and $H$ and $W$ are the height and width of the feature map, respectively. For simplicity, we omit the mini-batch dimension in this notation. The 2D spatial attention map $M_s \in \mathbb{R}^{H \times W}$ is generated by compressing $F$ using channelwise average pooling (CAP).
$M_s$ is then forwarded to a sigmoid function to produce our importance map $I$. The spatial attention focuses on where a discriminative part is. In short, the importance map is computed as
$$I = \sigma(\mathrm{CAP}(F)),$$
where $\sigma$ denotes the sigmoid function.
In detail, we obtain the self-attention map by performing CAP on the input feature map. Since the CNN model is trained for classification, its convolutional layers are sufficiently powerful to produce a meaningful self-attention map, in which the intensity of each pixel is proportional to its discriminative power. To make use of the information aggregated by the CAP operation, we follow it with a sigmoid activation and then multiply the result with the input feature map. In this way, we can approximate the spatial distribution of the most discriminative region efficiently, thus improving the feature representation for WSOL. We observe that the importance map usually highlights the most discriminative parts of the object and suppresses less useful ones. In particular, the intensity of each pixel in the importance map is close to 1 for the most discriminative region, while regions with very low scores can be considered background. Also, unlike other techniques, we require no additional parameters to obtain the importance map.
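As a concrete illustration, the CAP-plus-sigmoid step above can be sketched in a few lines of NumPy; the shapes and variable names here are illustrative choices, not the paper's implementation:

```python
import numpy as np

def importance_map(feature_map):
    """Importance map from a conv feature map of shape (C, H, W)."""
    # Channelwise average pooling (CAP): collapse the channel axis.
    self_attention = feature_map.mean(axis=0)            # (H, W)
    # Sigmoid maps activations to per-pixel importance scores in (0, 1).
    importance = 1.0 / (1.0 + np.exp(-self_attention))   # (H, W)
    return self_attention, importance

# The importance map is broadcast back over the channels, rewarding the
# most discriminative spatial locations of the input feature map.
F = np.random.randn(512, 14, 14)
M, I = importance_map(F)
F_highlighted = F * I[None, :, :]
```

Note that no learned weights appear anywhere in this computation, which is the sense in which the importance map is parameter-free.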
3.2 Channel attention guided dropout
As illustrated in Figure 3, we first aggregate the spatial information of a feature map by a global average pooling (GAP) operation, generating the global information embedding $z \in \mathbb{R}^{C}$. As mentioned before, the intensity of each pixel in the feature maps is proportional to its discriminative power; hence, the embedding $z$ can be considered as channel attention. According to the relative magnitude of the attention, a binary mask $M_c$ is generated to indicate whether each channel is selected or not. This attention-guided pruning strategy can be treated as a special way to model the interdependencies across channels. We rank the channel attentions according to a fast, approximate measure of importance (magnitude), and then drop out elements with low importance, treating the top-$k$ largest-magnitude attentions as important. We consider the magnitude of each element of $z$ separately:
$$M_c = \mathbb{1}\left[ z_c \in \mathrm{top}\text{-}k(|z|) \right],$$
where $\mathrm{top}\text{-}k(\cdot)$ returns the $k$ largest elements out of all elements being considered.
Inspired by Targeted Dropout, we keep only the channels of highest magnitude in the network and drop out the others. Similar to regular dropout, this encourages the network to learn a sparse representation. However, we would like some low-valued elements to be able to increase their value if they become important during training. As in Targeted Dropout, we therefore introduce stochasticity into the process using a drop probability $\gamma$ and a drop threshold set as a prefixed multiple of the minimum intensity of the channel attentions. We obtain the drop mask by setting each element to 0 with probability $\gamma$ if it is smaller than the drop threshold, and to 1 if it is larger. To avoid learning additional parameters, we set $\gamma$ to 0.5, as in standard dropout.
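Under the same caveat (hypothetical names and a plain NumPy stand-in for the actual framework), the CAGD selection rule can be sketched as:

```python
import numpy as np

def cagd_mask(feature_map, k, drop_prob=0.5, threshold_mult=2.0):
    """Channel attention guided dropout mask for a (C, H, W) feature map.

    The top-k channels by attention magnitude are always kept; channels
    whose magnitude falls below a prefixed multiple of the minimum
    attention are dropped only with probability drop_prob, so they can
    still recover if they become important later in training.
    """
    z = feature_map.mean(axis=(1, 2))          # GAP -> channel attention (C,)
    magnitude = np.abs(z)
    keep = np.ones(feature_map.shape[0], dtype=bool)
    low = magnitude < threshold_mult * magnitude.min()
    keep[low] = np.random.rand(low.sum()) >= drop_prob
    keep[np.argsort(magnitude)[-k:]] = True    # top-k channels always survive
    return keep

F = np.random.randn(64, 7, 7)
keep = cagd_mask(F, k=16)
F_dropped = F * keep[:, None, None]
```

The `threshold_mult` value is an assumption standing in for the paper's prefixed multiple; the stochastic keep of low-magnitude channels mirrors the $\gamma = 0.5$ rule described above.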
3.3 Spatial attention guided dropblock
We observe that ADL drops only strongly activated parts according to a drop threshold. In fact, similar to pixel-based dropout, ADL is not really region-based dropout: its drop mask is generated by setting each pixel to 0 if it is larger than the drop threshold, and to 1 if it is smaller, where the threshold controls how many activation units to drop. However, neighbouring pixels are spatially correlated on the convolutional feature map and share much of the same information. Hence, ADL cannot completely remove the information from the convolutional feature map.
To address these problems, we propose a region-based dropout in a similar fashion to regular dropout, but with an important distinction: we drop out contiguous regions of feature maps rather than individual pixels, as demonstrated in Figure 4. Its main difference from Dropblock is that the drop mask is computed from the self-attention map obtained by the spatial attention module, and the drop mask is shared across feature channels, i.e., every feature channel has the same drop mask. The proposed method has two main hyperparameters: the block size, which specifies the size of the block to be discarded, and the drop threshold, which controls how many activation units to drop. When the block size equals 1, the region-based dropout reduces to standard dropout; when the block covers the full feature map, it reduces to SpatialDropout. This technique efficiently removes information from the feature map, forcing the network to capture the full context of the feature map rather than relying on a small set of discriminative features.
Images can be roughly divided into foreground and background regions, and the object of interest usually consists of foreground pixels. Prior work has reported that attention maps stand for the probabilities of each pixel being foreground or background, so initial object and background estimates can be obtained from the scores in the self-attention maps: regions with very high scores are foreground, while regions with low scores can be considered background. Removing discriminative regions induces the model to learn the less discriminative parts, which sometimes leads to attention misdirection and biased localization. Based on this, we can sense foreground objects and background regions simply by building the drop mask from the self-attention map, which ultimately benefits WSOL.
We formally define the drop mask $M_d$ as follows: $M_d(i,j) = 0$ if the pixel at row $i$ and column $j$ belongs to the background regions or the most discriminative parts, and $M_d(i,j) = 1$ otherwise. Specifically, $M_d$ can be calculated by
$$M_d(i,j) = \begin{cases} 0, & M_s(i,j) < \theta_{bg} \ \text{or} \ M_s(i,j) > \theta_{fg}, \\ 1, & \text{otherwise}, \end{cases}$$
where $\theta_{bg}$ and $\theta_{fg}$ are thresholds that identify regions in the feature maps as background and as the most discriminative parts of the foreground, respectively.
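A minimal NumPy sketch of this two-threshold drop mask follows; the block-expansion step and all names are our illustrative choices, not the paper's code:

```python
import numpy as np

def sagd_drop_mask(self_attention, theta_bg, theta_fg, block_size=2):
    """Drop mask over an (H, W) self-attention map.

    Pixels above theta_fg (most discriminative) or below theta_bg
    (background) seed zero-blocks of block_size x block_size, so
    contiguous regions rather than isolated pixels are erased.
    """
    H, W = self_attention.shape
    seeds = (self_attention > theta_fg) | (self_attention < theta_bg)
    mask = np.ones((H, W))
    half = block_size // 2
    for i, j in zip(*np.nonzero(seeds)):
        # Expand each seed pixel into a contiguous block of zeros.
        mask[max(0, i - half): i + block_size - half,
             max(0, j - half): j + block_size - half] = 0.0
    return mask

attn = np.random.rand(14, 14)
mask = sagd_drop_mask(attn, theta_bg=0.05, theta_fg=0.95)
```

Because whole blocks around each thresholded pixel are zeroed, spatially correlated neighbours are removed together, unlike the per-pixel mask of ADL.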
3.4 Network Implementation
CNNs implemented with DGDM fuse complementary discriminative regions for precise object localization and accurate image classification in an end-to-end manner. Following prior work, the DGDM is inserted at the higher-level feature maps of the CNN model.
Within the DGDM, convolutional feature maps are averaged by channelwise average pooling to generate a meaningful self-attention map, which is then activated by a sigmoid function; the resulting importance map is multiplied with the input feature map. Based on the self-attention map, the CNN model can also sense foreground objects and background regions through the drop mask applied to the input feature map. The importance map rewards the most discriminative region to increase the classification power of the model. Unfortunately, classifiers tend to focus only on the most discriminative features to increase their classification accuracy. Prior work has reported that losing some classification accuracy yields a large boost in localization performance; this is caused by the drop mask, which erases the most discriminative region of the object. Accordingly, the importance map or the drop mask is stochastically selected during the training phase. As in ADL, we introduce a drop rate that controls how frequently the drop mask is applied. In addition, the channel attention guided mask is applied to the input feature maps to model the interdependencies across channels.
The CNN model is optimized with the Stochastic Gradient Descent (SGD) algorithm. The heatmap is extracted from the classification model using CAM, and the bounding box is extracted with the same method as in prior work: a thresholding approach is applied to predict the object locations.
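The CAM-plus-thresholding localization step can be sketched as follows; the threshold ratio and the tight-box simplification are our assumptions (the standard CAM procedure additionally keeps only the largest connected component):

```python
import numpy as np

def heatmap_to_bbox(heatmap, thresh_ratio=0.2):
    """Bounding box (x_min, y_min, x_max, y_max) from a CAM heatmap:
    threshold at a fraction of the peak value, then take the tight box
    around the surviving pixels."""
    binary = heatmap >= thresh_ratio * heatmap.max()
    ys, xs = np.nonzero(binary)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

cam = np.zeros((10, 10))
cam[3:6, 4:8] = 1.0           # a synthetic hot region
box = heatmap_to_bbox(cam)    # -> (4, 3, 7, 5)
```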
4 Experiments
4.1 Experimental Setup
4.1.1 Datasets.
We evaluate the performance of the proposed method on the commonly used CUB-200-2011 and ImageNet-1k datasets, and additionally consider the Stanford Cars dataset. CUB-200-2011 includes 200 species of birds, with 5,994 images for training and 5,794 for testing. ImageNet-1k is a large-scale dataset with 1,000 classes; we train the model on its 1.3 million training images and test on 5,000 images from the validation set. The Stanford Cars dataset contains 16,185 images of 196 classes of cars, split into 8,144 training images and 8,041 testing images.
4.1.2 Evaluation metrics.
Three evaluation metrics are used for WSOL evaluation. Top-1 classification accuracy (Top-1 Clas) counts an answer as correct when the estimated class equals the ground-truth class. Localization accuracy with known ground-truth class (GT-known Loc) counts an answer as correct when the intersection over union (IoU) between the ground-truth bounding box and the predicted box for the ground-truth class is 0.5 or more. Top-1 localization accuracy (Top-1 Loc) counts an answer as correct when both GT-known Loc and Top-1 Clas are correct. It is worth noting that Top-1 Loc is the most appropriate metric for evaluating overall localization performance.
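These three metrics can be made concrete with a small sketch; the box format and helper names are our choices:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x_min, y_min, x_max, y_max) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def gt_known_loc_correct(pred_box, gt_box):
    """GT-known Loc: localization alone, with the class assumed known."""
    return iou(pred_box, gt_box) >= 0.5

def top1_loc_correct(pred_class, gt_class, pred_box, gt_box):
    """Top-1 Loc: both Top-1 Clas and GT-known Loc must be correct."""
    return pred_class == gt_class and gt_known_loc_correct(pred_box, gt_box)
```

Top-1 Loc is the strictest of the three, which is why it is the preferred summary metric: a method can only score well by being right about both the class and the box.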
4.1.3 Experimental details.
The proposed DGDM is integrated with the commonly used CNNs, including VGG, ResNet, ResNet-SE, and InceptionV3. Following previous settings, we set the drop rate to 75% and apply DGDM to the intermediate and higher-level layers of the network. We employ a model pre-trained on ImageNet-1k and then fine-tune the network.
4.2 Ablation studies
Ablation studies on CUB-200-2011 with a pre-trained VGG-GAP network are used to evaluate the effects of the proposed DGDM. During the training phase, the DGDM is inserted at all the pooling layers and the conv5-3 layer.
[Table 1: Acc (%), Clas (%), and Loc (%) under different settings]
The drop mask can not only remove a small set of discriminative features so that the network better captures the full context of the feature map, but also identify regions of the feature maps as background and foreground. First, we investigate the effect of removing important parts on accuracy. The upper part of Table 1 reports the results for different block sizes; we achieve the best localization accuracy with a block size of 2, and we also report the result when the block size is adaptive and computed from the feature map size $H$ ($W$). The drop masks with block size 2 at higher-level layers erase the most discriminative part more accurately than those with other block sizes. We observe that the classification accuracy decreases as the block size increases, because the model never observes the most discriminative part; this significant drop in classification accuracy in turn harms the localization performance.
Next, we investigate the effect of removing a small set of background on accuracy. The middle part of Table 1 reports the results for different background thresholds, and the best localization accuracy is achieved at a suitable threshold. The Top-1 Loc increases again (from 52.57% to 53.79%) and the classification accuracy also improves (from 68.30% to 69.00%), which indicates that removing a small set of background boosts WSOL performance. The reason is that the proposed erasing method does not lead to attention misdirection when the discriminative parts are erased.
Lastly, we examine the effect of channel attention guided dropout on accuracy using four different drop thresholds. The lower part of Table 1 summarizes the results, confirming that the value of the drop threshold has an important effect on WSOL performance. All three evaluation metrics improve when the drop threshold is 3, which follows from the increase in classification accuracy.
4.3 Comparison with the state of the art
We compare our proposed method with existing WSOL techniques on the CUB-200-2011 test set and the ILSVRC validation set, and report the results in Tables 2, 3, and 4. The compared approaches include the state of the art: CAM, DANet, ACoL, SPG, and ADL.
[Table 2: Parameters (Mb), Top-1 Loc (%), Top-1 Clas (%)]
Table 2 summarizes the quantitative evaluation on the CUB-200-2011 test set. With a VGG-GAP backbone, our method reports higher Top-1 Loc and higher Top-1 Clas than the ADL approach. With a ResNet50 backbone, it reports an 8.30% performance gain over the DANet approach at the cost of a small drop in classification performance, reflecting the trade-off between localization and classification accuracy discussed in Subsection 3.4. The method also achieves a new state-of-the-art localization accuracy (59.40%) when the other two backbone networks are employed, and comparable accuracy with an InceptionV3 backbone.
In addition to achieving new state-of-the-art localization accuracy, the method is highly efficient. We report the number of parameters and the computation and parameter overheads along with Top-1 Loc and Top-1 Clas. Similar to ADL, the proposed method adds no parameter overhead and no computation overhead on top of the backbone network.
[Tables 3 and 4: Method, Backbone, Top-1 Loc, Top-1 Clas]
In Table 3, we evaluate our method on the large-scale ImageNet-1k dataset. With a VGG-GAP backbone, its accuracy is better than that of Backprop but slightly lower than that of the others. However, with a ResNet50-SE backbone, its localization accuracy is better than ADL's and comparable with CAM's, even though the required computing resources are much lower. In summary, our method achieves new state-of-the-art accuracy compared with current techniques while remaining markedly more efficient.
4.3.3 Additional datasets.
We next investigate whether the benefits of our proposed method generalize to datasets beyond CUB-200-2011 and ImageNet-1k. We perform experiments with several popular baseline architectures (ResNet-18, ResNet-34, ResNet-50), integrating DGDM into each, and train both our method and its counterpart (ADL) on the Stanford Cars dataset. Table 4 reports the results: the proposed method outperforms its counterpart in all comparisons, suggesting that the benefits of DGDM are not confined to CUB-200-2011 and ImageNet-1k.
Prior work has reported that ADL extracts discriminative features from background that frequently co-occurs with the target object. To analyze the substantial difference between our proposed method and ADL, we show heatmaps and predicted bounding boxes on Stanford Cars and CUB-200-2011 in Figure 5. The object localization maps generated by our method yield more accurate bounding boxes than ADL's; that is, the network implemented with DGDM learns to exploit information in target object regions and aggregate features from them. We attribute this to the erasing operation, which guides the network to discover more discriminative patterns and thereby achieves better WSOL performance.
Meanwhile, since all classes of CUB-200-2011 are birds, similar backgrounds (e.g., sea) appear regardless of class, which indicates that background frequently appearing with the object might act as a less discriminative region. For example, both our method and ADL can discover nearly the entire bird, e.g., the wing and head, but ADL also learns more background features when the most discriminative part is dropped, capturing not only the bird but also the sea.
Figure 6 shows the drop mask and self-attention map at a higher-level layer of ResNet-50. The self-attention maps generated by the classification network serve as supervision for sensing foreground objects and background regions. The attention regions spread into the background with ADL, while our method alleviates this attention misdirection. Our drop mask hides the most discriminative part and removes a small set of background by erasing contiguous regions of feature maps rather than independent individual pixels. This prevents the model from relying solely on the most discriminative regions for classification and instead induces it to learn the less discriminative regions better.
5 Conclusion
In this paper, we proposed a simple yet effective dual-attention guided dropblock module (DGDM) for WSOL. We designed its two key components, CAGD and SAGD, and unified them within a deep learning framework. The proposed method hides the most discriminative part of the object and thereby encourages the CNN classifier to learn the less discriminative parts. We defined a pruning strategy so that CAGD can be treated as a special way to model the interdependencies across channels. In addition, SAGD can not only remove information completely by erasing contiguous regions of feature maps rather than independent individual pixels, but also sense foreground objects and background regions to alleviate attention misdirection. Compared to some existing WSOL techniques, the proposed method is more efficient and lightweight because it adds no trainable parameters. We achieved new state-of-the-art localization accuracy on CUB-200-2011, ImageNet-1k, and Stanford Cars.
- (2014) Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
- Weakly supervised localization using deep feature maps. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 714–731.
- (2015) Look and think twice: capturing top-down visual attention with feedback convolutional neural networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 2956–2964.
- Attention-based dropout layer for weakly supervised object localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2219–2228.
- (2016) Weakly supervised object localization with multi-fold multiple instance learning. IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (1), pp. 189–203.
- (2017) Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552.
- (2019) Dual attention network for scene segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3146–3154.
- Dropout as a bayesian approximation: representing model uncertainty in deep learning. In International Conference on Machine Learning (ICML), pp. 1050–1059.
- (2018) Dropblock: a regularization method for convolutional networks. In Advances in Neural Information Processing Systems (NIPS), pp. 10727–10737.
- Targeted dropout.
- (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778.
- (2012) Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580.
- (2018) Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7132–7141.
- (2015) Spatial transformer networks. In Advances in Neural Information Processing Systems (NIPS), pp. 2017–2025.
- (2015) Variational dropout and the local reparameterization trick. In Advances in Neural Information Processing Systems (NIPS), pp. 2575–2583.
- (2013) 3d object representations for fine-grained categorization. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 554–561.
- (2019) Survey of dropout methods for deep neural networks. arXiv preprint arXiv:1904.13310.
- (2018) Tell me where to look: guided attention inference network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9215–9223.
- (2018) Image inpainting for irregular holes using partial convolutions. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 85–100.
- (2014) Recurrent models of visual attention. In Advances in Neural Information Processing Systems (NIPS), pp. 2204–2212.
- (2015) Imagenet large scale visual recognition challenge. International Journal of Computer Vision 115 (3), pp. 211–252.
- (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
- (2017) Hide-and-seek: forcing a network to be meticulous for weakly-supervised object and action localization. In IEEE International Conference on Computer Vision (ICCV), pp. 3544–3553.
- (2016) Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2818–2826.
- (2015) Efficient object localization using convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 648–656.
- (2017) Attention is all you need. In Advances in Neural Information Processing Systems (NIPS), pp. 5998–6008.
- (2011) The caltech-ucsd birds-200-2011 dataset.
- (2013) Regularization of neural networks using dropconnect. In International Conference on Machine Learning (ICML), pp. 1058–1066.
- (2017) Residual attention network for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3156–3164.
- (2018) Cbam: convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 3–19.
- (2019) DANet: divergent activation for weakly supervised object localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 6589–6598.
- (2018) Generative image inpainting with contextual attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5505–5514.
- (2018) Adversarial complementary learning for weakly supervised object localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1325–1334.
- (2018) Self-produced guidance for weakly-supervised object localization. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 597–613.
- (2016) Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2921–2929.