Weakly Supervised Object Localization (WSOL) refers to learning object locations in a given image using only image-level labels. WSOL has recently drawn increasing attention since it does not require expensive bounding box annotations for training and thus saves considerable labelling effort compared to its fully supervised counterparts [32, 13, 12].
It is very challenging to learn deep models that locate objects of interest using only image-level supervision. Some pioneering works [48, 45] generate class-specific localization maps from pre-trained convolutional classification networks. For example, Zhou et al. modified classification networks (e.g., AlexNet and VGG-16) by replacing a few high-level layers with a global average pooling layer and a fully connected layer, which aggregate the features of the last convolutional layer into discriminative class activation maps (CAM) for localization. However, we observe two critical issues with such solutions: 1) they over-rely on category-wise discriminative features learned for image classification; 2) they fail to densely localize the integral regions of the target objects within an image. Both issues arise because classification networks are inclined to identify patterns from the most discriminative parts for recognition, which inevitably leads to incomplete localization. For instance, given an image containing a cat, the network can recognize it from the head alone, regardless of the remaining parts such as the body and legs.
To tackle these issues, Wei et al. proposed an adversarial erasing (AE) approach that discovers integral object regions by training additional classification networks on images whose discriminative object regions have been partially erased. Nevertheless, one main disadvantage of AE is that it needs to train several independent classification networks to obtain integral object regions, which costs more training time and computing resources. Recently, Singh et al. enhanced CAM by randomly hiding patches of the input images so as to force the network to look for other discriminative parts. However, randomly hiding patches without any high-level guidance is inefficient and cannot guarantee that the network always discovers new object regions.
In this paper, we propose a novel Adversarial Complementary Learning (ACoL) approach for discovering entire objects of interest via end-to-end weakly supervised training. The key idea of ACoL, motivated by AE, is to find complementary object regions with two adversarial classifiers. In particular, one classifier is first leveraged to identify the most discriminative regions and guide an erasing operation on the intermediate feature maps. We then feed the erased features into its counterpart classifier to discover new, complementary object-related regions. This strategy drives the two classifiers to mine complementary object regions and finally yields the desired integral object localization. To enable straightforward end-to-end training of ACoL, we mathematically prove that object localization maps can be obtained by directly selecting from the class-specific feature maps of the last convolutional layer, rather than through a post-inference step. Thus, discriminative object regions can be conveniently identified during the training forward pass from the online-inferred object localization maps.
Our approach offers multiple appealing advantages over AE. First, AE trains three networks independently for adversarial erasing, whereas ACoL trains two adversarial branches jointly by integrating them into a single network; this joint training framework is better able to integrate the complementary information of the two branches. Second, AE adopts a recursive method to generate localization maps and has to forward the networks multiple times, while our method generates localization maps by forwarding the network only once, which greatly improves efficiency and makes our method much easier to implement. Third, AE directly adopts CAM to generate localization maps and therefore needs two steps; in contrast, our method generates localization maps in one step, by selecting the feature map that best matches the ground truth as the localization map. We also provide a rigorous proof that our method, while simpler and more efficient, yields results identical to CAM (see Section 3.1).
The process of ACoL is illustrated in Figure 1, where an image is processed to estimate the regions of a horse. We observe that Classifier A leverages some discriminative regions (the horse's head and hind legs) for recognition. By erasing these discriminative regions from the feature maps, Classifier B is guided to use features of new, complementary object regions (the horse's forelegs) for classification. Finally, the integral target regions are obtained by fusing the object localization maps from both branches. To validate the effectiveness of the proposed ACoL, we conduct a series of object localization experiments using the bounding boxes inferred from the generated localization maps.
To sum up, our main contributions are three-fold:
We provide theoretical support for producing class-specific feature maps during the forward pass, so that object regions can be identified in a convenient way, which can benefit future related research.
We propose a novel ACoL approach that efficiently mines different discriminative regions with two adversarial classifiers in a weakly supervised manner, which together discover the integral target object regions for localization.
This work achieves the current state of the art, with Top-1 and Top-5 localization error rates of 45.14% and 30.03% on the ILSVRC 2016 dataset in the weakly supervised setting.
2 Related Work
Fully supervised detection has been intensively studied and has achieved extraordinary success. One of the earliest deep networks to detect objects in a one-stage manner is OverFeat, which employs a multiscale, sliding-window approach to predict object boundaries, which are then used to accumulate bounding boxes. SSD and YOLO use a similar one-stage method and are specifically designed for fast detection. Faster-RCNN utilizes a two-stage approach and has achieved great success in object detection: it generates region proposals with sliding windows and predicts highly reliable object locations in a unified network in real time. Lin et al. showed that the performance of Faster-RCNN can be significantly improved by constructing feature pyramids at marginal extra cost.
Weakly supervised detection and localization aim to provide a cheaper alternative by using only image-level supervision [2, 35, 1, 38, 30, 19, 10, 9, 18, 22, 26]. Oquab et al. and Wei et al. adopted a similar strategy of learning multi-label classification networks with max-pooling MIL; the networks are then applied to coarse object localization. Bency et al. applied a beam search method to leverage local spatial patterns, which progressively localizes bounding box candidates. Singh et al. proposed to augment the input images by randomly hiding patches so as to make the network look for more object regions. Similarly, Bazzani et al. analysed the scores of a classification network while randomly masking regions of the input images and proposed a clustering technique to generate self-taught localization hypotheses. Deselaers et al. used extra images with available location annotations to learn object features and then applied a conditional random field to adapt this generic knowledge to specific detection tasks.
Weakly supervised segmentation applies similar techniques to predict pixel-level labels [40, 41, 39, 16, 20, 27, 43]. Wei et al. utilized extra images with simple scenes and proposed a simple-to-complex approach to progressively learn better pixel annotations. Kolesnikov et al. proposed SEC, which integrates three loss terms, i.e., seeding, expansion and boundary constraint, into a unified framework to learn a segmentation network. Wei et al. proposed an idea similar to ours for finding more discriminative regions; however, they trained extra independent networks to generate class-specific activation maps with the assistance of pre-trained networks in a post-processing step.
3 Adversarial Complementary Learning
In this section, we describe details of the proposed Adversarial Complementary Learning (ACoL) approach for WSOL. We first revisit CAM  and introduce a more convenient way for producing localization maps. Then, the details of the proposed ACoL, founded on the above finding, are presented for mining high-quality object localization maps, and locating integral object regions.
3.1 Revisiting CAM
Class activation mapping (CAM) offers a promising way to visualize where deep neural networks focus for recognition. Zhou et al. proposed a two-step approach that produces object localization maps by multiplying the weights of the last fully connected layer with the feature maps of a classification network.
Suppose we are given a Fully Convolutional Network (FCN) whose last convolutional layer produces feature maps $F \in \mathbb{R}^{h \times w \times K}$, where $h \times w$ is the spatial size and $K$ is the number of channels. In CAM, the feature maps are fed into a Global Average Pooling (GAP) layer followed by a fully connected layer, and a softmax layer is applied on top for classification. We denote the average value of the $k$-th feature map as

$$f_k = \frac{1}{hw} \sum_{i=1}^{h} \sum_{j=1}^{w} F_k(i, j),$$

where $F_k(i, j)$ is the element of the $k$-th feature map at the $i$-th row and $j$-th column. The weight matrix of the fully connected layer is denoted as $W \in \mathbb{R}^{K \times C}$, where $C$ is the number of target classes. Here, we ignore the bias term for convenience. Therefore, for a target class $c$, the input of the softmax node can be defined as

$$s^c = \sum_{k=1}^{K} W_{k,c} f_k,$$

where $W_{k,c}$ denotes the element of $W$ at the $k$-th row and $c$-th column; the $c$-th column of $W$ contributes to calculating the value $s^c$. Therefore, the object localization map $M^c$ of class $c$ proposed in CAM can be obtained by aggregating the feature maps as follows,

$$M^c(i, j) = \sum_{k=1}^{K} W_{k,c} F_k(i, j).$$
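The two-step CAM computation above can be sketched numerically. The toy example below (plain Python; the dimensions and values are hypothetical, chosen only for illustration) first computes the classification score by GAP followed by the fully connected layer, and then builds the post-hoc localization map by weighting the raw feature maps with the same column of $W$:

```python
# Toy CAM computation: K=3 feature maps of size h=w=2, C=2 classes.
# F[k][i][j] is the value of the k-th feature map at row i, column j.
F = [
    [[1.0, 2.0], [3.0, 4.0]],
    [[0.5, 0.5], [0.5, 0.5]],
    [[2.0, 0.0], [0.0, 2.0]],
]
# W[k][c]: fully connected weight from channel k to class c.
W = [[0.2, -0.1],
     [0.5, 0.3],
     [-0.4, 0.6]]
h, w, K, C = 2, 2, 3, 2

# Step 1: global average pooling gives one scalar f_k per channel,
# and the class score is s^c = sum_k W[k][c] * f_k.
f = [sum(F[k][i][j] for i in range(h) for j in range(w)) / (h * w)
     for k in range(K)]
s = [sum(W[k][c] * f[k] for k in range(K)) for c in range(C)]

# Step 2 (post-hoc): the CAM for class c aggregates the raw feature
# maps with the same weights: M^c(i,j) = sum_k W[k][c] * F_k(i,j).
M = [[[sum(W[k][c] * F[k][i][j] for k in range(K)) for j in range(w)]
      for i in range(h)] for c in range(C)]

print(s)     # scores, approximately [0.35, 0.5]
print(M[0])  # localization map of class 0
```

Note that averaging $M^c$ over all spatial positions recovers $s^c$ exactly, which is the linearity property exploited in the next subsection.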
CAM provides a useful way to inspect and locate target object regions, but it needs an extra step to generate the object localization maps after the forward pass. In this work, we reveal that object localization maps can be conveniently obtained by directly selecting from the feature maps of the last convolutional layer. Some recent methods [17, 4] have already obtained localization maps in this way, but we are the first to prove that this convenient approach generates localization maps of the same quality as CAM, which is meaningful and makes it easy to embed localization maps into more complex networks.
In the following, we provide both a theoretical proof and a visualized comparison to support our discovery. Given the output feature maps $F$ of an FCN, we add a convolutional layer of $C$ channels with kernel size $1 \times 1$ and stride 1 on top of the feature maps. The output is then fed into a GAP layer followed by a softmax layer for classification. Suppose the weight matrix of this convolutional layer is $\widetilde{W} \in \mathbb{R}^{K \times C}$ (again ignoring the bias term). We define the localization map $\widetilde{M}^c$ of class $c$ as the $c$-th output feature map of this convolutional layer, which can be calculated by

$$\widetilde{M}^c(i, j) = \sum_{k=1}^{K} \widetilde{W}_{k,c} F_k(i, j),$$

where $\widetilde{W}_{k,c}$ denotes the element of $\widetilde{W}$ at the $k$-th row and $c$-th column. The input value of the softmax layer for class $c$ is then the average value of $\widetilde{M}^c$, so $\tilde{s}^c$ can be calculated by

$$\tilde{s}^c = \frac{1}{hw} \sum_{i=1}^{h} \sum_{j=1}^{w} \widetilde{M}^c(i, j) = \sum_{k=1}^{K} \widetilde{W}_{k,c} f_k.$$
Comparing the two formulations, $\tilde{s}^c$ and $s^c$ are equal if we initialize the parameters of both networks in the same way, and $\widetilde{M}^c$ and $M^c$ have the same mathematical form. Therefore, we obtain object localization maps of the same quality once the networks converge. In practice, the object localization maps from both methods are very similar and highlight the same target regions, except for marginal differences caused by the stochastic optimization process. Figure 2 compares the object localization maps generated by CAM and our revised approach. We observe that both approaches generate maps of the same quality and highlight the same regions in a given image. With our revised method, however, the object localization maps are obtained directly in the forward pass rather than in the post-processing step required by CAM.
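The equivalence can also be checked numerically. In the toy sketch below (plain Python; dimensions and deterministic "feature" values are illustrative assumptions, and the weights are shared between the two formulations to mimic identical initialization), a 1×1 convolution computes exactly the per-pixel weighted sum that CAM applies post hoc, so the forward-pass maps coincide with the CAM maps, and by linearity the scores coincide as well:

```python
# Toy check: CAM's post-hoc map vs. the forward-pass map of a 1x1 conv.
h, w, K, C = 2, 3, 4, 2
# Deterministic toy "feature maps" F[k][i][j] and shared weights W[k][c].
F = [[[((k + 1) * (i + 2 * j)) % 5 / 2.0 for j in range(w)]
      for i in range(h)] for k in range(K)]
W = [[0.1 * (k - c + 1) for c in range(C)] for k in range(K)]

# CAM (two steps): GAP -> FC score, then post-hoc weighted sum of maps.
f = [sum(F[k][i][j] for i in range(h) for j in range(w)) / (h * w)
     for k in range(K)]
score_cam = [sum(W[k][c] * f[k] for k in range(K)) for c in range(C)]
map_cam = [[[sum(W[k][c] * F[k][i][j] for k in range(K)) for j in range(w)]
            for i in range(h)] for c in range(C)]

# Revised (one step): a 1x1 conv produces C maps directly, then GAP.
# The conv output at (i, j) is the same channel-weighted sum per pixel.
map_conv = [[[sum(W[k][c] * F[k][i][j] for k in range(K)) for j in range(w)]
             for i in range(h)] for c in range(C)]
score_conv = [sum(map_conv[c][i][j] for i in range(h) for j in range(w))
              / (h * w) for c in range(C)]

same_maps = all(abs(map_cam[c][i][j] - map_conv[c][i][j]) < 1e-12
                for c in range(C) for i in range(h) for j in range(w))
same_scores = all(abs(score_cam[c] - score_conv[c]) < 1e-12
                  for c in range(C))
print(same_maps, same_scores)  # both True
```

The map identity holds by construction (a 1×1 convolution is a per-pixel linear combination of channels), while the score identity follows from exchanging the order of the channel sum and the spatial average.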
3.2 The proposed ACoL
The mathematical proof in Section 3.1 provides the theoretical support for the proposed ACoL. We observe that deep classification networks usually leverage only the unique patterns of a specific category for recognition, so the generated object localization maps highlight a small region of the target object instead of the entire object. Our proposed ACoL aims to discover the integral object regions in an adversarial learning manner. In particular, it includes two classifiers that mine different but complementary regions of the target object in a given image.
Figure 3 shows the architecture of the proposed ACoL, which includes three components: Backbone, Classifier A and Classifier B. Backbone is a fully convolutional network acting as a feature extractor; it takes the original RGB images as input and produces high-level, position-aware feature maps with multiple channels. The feature maps from Backbone are then fed into the two parallel classification branches. The object localization maps for each classifier can be conveniently obtained as described in Section 3.1. Both branches consist of the same number of convolutional layers followed by a GAP layer and a softmax layer for classification. The input feature maps of the two classifiers differ, however: the input features of Classifier B are erased under the guidance of the discriminative regions mined by Classifier A. We identify these discriminative regions by applying a threshold to the localization maps of Classifier A, and the corresponding regions within the input feature maps of Classifier B are then erased in an adversarial manner by setting their values to zero. This operation encourages Classifier B to leverage features from other regions of the target object to support the image-level labels. Finally, the integral localization map of the target object is obtained by combining the localization maps produced by the two branches.
Formally, we denote the training image set as $\mathcal{I} = \{(I_n, y_n)\}_{n=1}^{N}$, where $y_n$ is the label of the $n$-th image $I_n$ and $N$ is the number of images. An input image is first transformed by Backbone into spatial feature maps $F$ with $K$ channels and $h \times w$ resolution. We use $\theta$ to denote the learnable parameters of the network. Classifier A, denoted as $f_A(\cdot)$, generates an object localization map $M_A$ of size $h \times w$ from the input feature maps in a weakly supervised manner, as explained in Section 3.1. $M_A$ usually highlights only the unique discriminative regions of the target class.

We identify the most discriminative region as the set of positions whose values in the object localization map $M_A$ are larger than a given threshold $\delta$. $M_A$ is resized by linear interpolation to the spatial size of Classifier B's input feature maps if the two sizes differ. We then erase the mined discriminative regions from the input features of Classifier B: let $\widetilde{F}$ denote the erased feature maps, generated by replacing the values at the identified discriminative positions with zeros. Classifier B generates the object localization map $M_B$ from the input $\widetilde{F}$. The parameters of the network are then updated by back-propagation. Finally, we obtain the integral object map $M$ for the target class by merging the two maps $M_A$ and $M_B$. Concretely, we normalize both maps to the range $[0, 1]$, denote them as $\bar{M}_A$ and $\bar{M}_B$, and compute the fused object localization map element-wise as $M(i, j) = \max\big(\bar{M}_A(i, j), \bar{M}_B(i, j)\big)$, where $(i, j)$ indexes the row and column of the normalized maps. The whole process is trained end-to-end, with both classifiers using the cross-entropy loss. Algorithm 1 illustrates the training procedure of the proposed ACoL approach.
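The per-image erase-and-fuse step can be sketched as follows (plain Python; the threshold value, the toy map sizes, and the illustrative map values for Classifier B are our assumptions, not the paper's numbers). Classifier A's normalized map is thresholded into an erasing mask, the mask zeroes the corresponding positions in every channel of Classifier B's input features, and the two normalized maps are fused element-wise by the maximum:

```python
h, w, K = 3, 4, 2
delta = 0.6  # erasing threshold (hypothetical value)

# Normalized localization map of Classifier A, values in [0, 1].
M_A = [[0.9, 0.8, 0.1, 0.0],
       [0.7, 0.5, 0.2, 0.1],
       [0.3, 0.2, 0.1, 0.0]]

# Input feature maps for Classifier B (K channels, toy constant values).
F = [[[1.0] * w for _ in range(h)] for _ in range(K)]

# Erase: zero out every channel at positions where M_A exceeds delta.
mask = [[M_A[i][j] > delta for j in range(w)] for i in range(h)]
F_erased = [[[0.0 if mask[i][j] else F[k][i][j] for j in range(w)]
             for i in range(h)] for k in range(K)]

# Classifier B is now driven to respond elsewhere; suppose it produces
# this normalized map (illustrative values only).
M_B = [[0.1, 0.2, 0.8, 0.7],
       [0.2, 0.3, 0.9, 0.6],
       [0.1, 0.1, 0.4, 0.2]]

# Fuse the two normalized maps element-wise by taking the maximum.
M_fused = [[max(M_A[i][j], M_B[i][j]) for j in range(w)] for i in range(h)]
print(M_fused[0])  # [0.9, 0.8, 0.8, 0.7]
```

In the fused map, the regions found by either classifier survive, which is how the complementary responses of the two branches combine into one integral map.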
During testing, we extract the fused object maps according to the predicted class and resize them by linear interpolation to the same size as the original images. For a fair comparison, we apply the same strategy as CAM to produce object bounding boxes from the generated object localization maps. In particular, we first segment the foreground and background with a fixed threshold. Then, we seek the tight bounding box covering the largest connected area of foreground pixels, following the procedure of CAM.
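The box-generation step above can be sketched in plain Python. The function below is a minimal sketch under our own assumptions (a 4-connected flood fill and a toy threshold; the paper simply follows CAM's procedure): threshold the map into foreground, find the largest connected foreground component, and return its tight bounding box.

```python
from collections import deque

def largest_component_bbox(loc_map, thresh):
    """Tight bbox (r0, c0, r1, c1) of the largest 4-connected
    foreground component of a thresholded localization map,
    or None if no pixel exceeds the threshold."""
    h, w = len(loc_map), len(loc_map[0])
    fg = [[loc_map[i][j] >= thresh for j in range(w)] for i in range(h)]
    seen = [[False] * w for _ in range(h)]
    best, best_size = None, 0
    for si in range(h):
        for sj in range(w):
            if not fg[si][sj] or seen[si][sj]:
                continue
            # BFS flood fill over one connected component.
            comp, q = [], deque([(si, sj)])
            seen[si][sj] = True
            while q:
                i, j = q.popleft()
                comp.append((i, j))
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w and fg[ni][nj] \
                            and not seen[ni][nj]:
                        seen[ni][nj] = True
                        q.append((ni, nj))
            if len(comp) > best_size:
                best_size = len(comp)
                rs = [i for i, _ in comp]
                cs = [j for _, j in comp]
                best = (min(rs), min(cs), max(rs), max(cs))
    return best

toy_map = [[0.0, 0.1, 0.0, 0.9],
           [0.8, 0.9, 0.0, 0.0],
           [0.7, 0.8, 0.2, 0.0]]
print(largest_component_bbox(toy_map, 0.5))  # (1, 0, 2, 1)
```

The single-pixel component at the top-right is ignored in favour of the larger 2×2 component, mirroring how the largest connected foreground area determines the final box.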
4 Experiments
4.1 Experiment setup
Datasets and evaluation metrics We evaluate the classification and localization accuracy of ACoL on two datasets, i.e., ILSVRC 2016 [6, 31] and CUB-200-2011. ILSVRC 2016 contains 1.2 million images of 1,000 categories for training; we compare our approach with others on the validation set, which has 50,000 images. CUB-200-2011 has 11,788 images of 200 categories, with 5,994 images for training and 5,794 for testing. We adopt the suggested localization metric for comparison: it calculates the percentage of images whose predicted bounding boxes have over 50% IoU with the ground truth. In addition, we also apply our approach to Caltech-256 to qualitatively demonstrate its ability to locate the integral target object.
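The localization metric above hinges on the IoU between predicted and ground-truth boxes. A minimal sketch (plain Python; representing boxes as `(x1, y1, x2, y2)` corner coordinates is our own convention, not the paper's):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

# A localization counts as correct when IoU exceeds 0.5 (for Top-1,
# the predicted class must also match).
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 1/3
```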
Implementation details We evaluate the proposed ACoL using VGGnet and GoogLeNet. In particular, we remove the layers after conv5-3 (from pool5 to prob) of the VGG-16 network and the last inception block of GoogLeNet. Then, we add two convolutional layers of kernel size $3 \times 3$, stride 1, pad 1 with 1024 units, and a convolutional layer of kernel size $1 \times 1$, stride 1 with 1000 units (200 and 256 units for the CUB-200-2011 and Caltech-256 datasets, respectively). As proved in Section 3.1, localization maps can be conveniently obtained from the output feature maps of this last convolutional layer. Finally, a GAP layer and a softmax layer are added on top of the convolutional layers. Both networks are fine-tuned from weights pre-trained on ILSVRC. The input images are randomly cropped to $224 \times 224$ pixels after being resized to $256 \times 256$ pixels. We test different erasing thresholds from 0.5 to 0.9; at test time, the erasing threshold is kept the same as in training. For classification results, we average the scores from the softmax layer over 10 crops (4 corners plus center, together with their horizontal flips). We train the networks on an NVIDIA GeForce TITAN X GPU with 12 GB of memory.
4.2 Comparisons with the state of the art
|Methods||Annotations||top-1 err.|
|PANDA R-CNN||BBox+Parts||23.6|
|GoogLeNet-GAP on full image||w/o||37.0|
|GoogLeNet-GAP on crop||w/o||32.2|
|GoogLeNet-GAP on BBox||BBox||29.5|
Classification: Table 1 shows the Top-1 and Top-5 errors on the ILSVRC validation set. Our proposed GoogLeNet-ACoL and VGGnet-ACoL achieve slightly better classification results than GoogLeNet-GAP and VGGnet-GAP, respectively, and are comparable to the original GoogLeNet and VGGnet. On the fine-grained recognition dataset CUB-200-2011, ACoL also achieves remarkable performance. Table 2 summarizes the benchmark approaches for classification with or without (w/o) bounding box annotations. Our VGGnet-ACoL achieves the lowest error (28.1%) among all methods that do not use bounding box annotations.
To summarize, the proposed method enables the networks to achieve classification performance equivalent to the original networks, even though our modified networks do not use fully connected layers. We attribute this to the erasing operation, which guides the network to discover more discriminative patterns and thus obtain better classification performance.
|Methods||top-1 err.||top-5 err.|
|Backprop on GoogLeNet ||61.31||50.55|
|Backprop on VGGnet ||61.12||51.46|
Localization: Table 3 shows the localization error on the ILSVRC validation set. We observe that our ACoL approach outperforms all baselines: VGGnet-ACoL is significantly better than VGGnet-GAP, and GoogLeNet-ACoL also achieves better performance than GoogLeNet-HaS-32, which adopts the strategy of randomly erasing patches of the input images. Table 4 shows the localization performance on the CUB-200-2011 dataset, where our method outperforms GoogLeNet-GAP by 4.92% in Top-1 error.
We further improve the localization performance by combining our localization results with state-of-the-art classification results, i.e., from ResNet and DPN, to remove the limitation that classification errors impose when calculating localization accuracy. As shown in Table 5, localization accuracy consistently improves as the classification results get better. Top-1 error drops to 45.14% and Top-5 error to 38.45% when applying the classification results of the ensemble DPN. In addition, we boost the Top-5 localization performance (indicated by *) by selecting bounding boxes from only the top three predicted classes, following prior practice; VGGnet-ACoL-DPN-ensemble* achieves 30.03% on ILSVRC.
Figure 4 visualizes the localization bounding boxes produced by the proposed method and by CAM. The object localization maps generated by ACoL cover larger object regions and thus yield more accurate bounding boxes. For example, our method can discover nearly all parts of a bird, e.g., the wings and head, while CAM finds only a small part, e.g., the head. Figure 5 compares the object localization maps of the two classifiers. We observe that Classifier A and Classifier B successfully discover different but complementary target regions. The localization maps from the two classifiers are finally fused into a robust map in which the integral object is effectively highlighted, yielding the boosted localization performance.
4.3 Ablation study
In the proposed method, the two classifiers locate different regions of interest via erasing the input feature maps of Classifier B. We identify the discriminative regions with a hard threshold $\delta$. To inspect its influence on localization accuracy, we test different threshold values, as shown in Table 6. We obtain the best Top-1 error on ILSVRC at an intermediate threshold, and performance degrades when the erasing threshold becomes larger or smaller. We can conclude: 1) the proposed complementary branch (Classifier B) successfully works collaboratively with Classifier A, since it mines complementary object regions and thus yields integral object regions; 2) a well-chosen threshold improves performance, as a too-large threshold cannot effectively encourage Classifier B to discover more useful regions, while a too-small threshold may introduce background noise.
We also test a cascaded network of three classifiers. In particular, we add a third classifier and erase its input feature maps guided by the fused object localization maps of Classifiers A and B. We observe no significant improvement in either classification or localization performance. Therefore, adding a third branch does not necessarily improve performance, and two branches are usually enough for locating the integral object regions.
Furthermore, we eliminate the influence caused by classification results and compare the localization accuracy using ground-truth labels. As shown in Table 7, the proposed ACoL approach achieves 37.04% in Top-1 error and surpasses the other approaches. This reveals the superiority of the object localization maps generated by our method, and shows that the proposed two classifiers can successfully locate complementary object regions.
5 Conclusion
We first mathematically prove that object localization maps can be conveniently obtained by selecting directly from the feature maps of the last convolutional layer. Based on this finding, we propose Adversarial Complementary Learning for locating target object regions in a weakly supervised manner. The two adversarial classifiers locate different object parts and together discover complementary regions belonging to the same objects or categories. Extensive experiments show that the proposed method successfully mines integral object regions and outperforms state-of-the-art localization methods.
Yi Yang is the recipient of a Google Faculty Research Award. This work is partially supported by IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR) - a research collaboration as part of the IBM AI Horizons Network. We acknowledge the Data to Decisions CRC (D2D CRC) and the Cooperative Research Centres Programme for funding this research.
References
-  L. Bazzani, A. Bergamo, D. Anguelov, and L. Torresani. Self-taught object localization with deep networks. In IEEE WACV, pages 1–9, 2016.
-  A. J. Bency, H. Kwon, H. Lee, S. Karthikeyan, and B. Manjunath. Weakly supervised localization using deep feature maps. In ECCV, pages 714–731. Springer, 2016.
-  C. Cao, X. Liu, Y. Yang, Y. Yu, J. Wang, Z. Wang, Y. Huang, L. Wang, C. Huang, W. Xu, et al. Look and think twice: Capturing top-down visual attention with feedback convolutional neural networks. In IEEE ICCV, pages 2956–2964, 2015.
-  A. Chaudhry, P. K. Dokania, and P. H. Torr. Discovering class-specific pixels for weakly-supervised semantic segmentation. arXiv preprint arXiv:1707.05821, 2017.
-  Y. Chen, J. Li, H. Xiao, X. Jin, S. Yan, and J. Feng. Dual path networks. arXiv preprint arXiv:1707.01629, 2017.
-  J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In IEEE CVPR, pages 248–255, 2009.
-  T. Deselaers, B. Alexe, and V. Ferrari. Weakly supervised localization and learning with generic knowledge. IJCV, 100(3):275–293, 2012.
-  J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In ICML, pages 647–655, 2014.
-  X. Dong, D. Meng, F. Ma, and Y. Yang. A dual-network progressive approach to weakly supervised object detection. In ACM Multimedia, 2017.
-  X. Dong, L. Zheng, F. Ma, Y. Yang, and D. Meng. Few-example object detection with model communication. arXiv preprint arXiv:1706.08249, 2017.
-  E. Gavves, B. Fernando, C. G. Snoek, A. W. Smeulders, and T. Tuytelaars. Local alignments for fine-grained categorization. International Journal of Computer Vision, 111(2):191–212, 2015.
-  R. Girshick. Fast R-CNN. arXiv preprint arXiv:1504.08083, 2015.
-  R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In IEEE CVPR, pages 580–587, 2014.
-  G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. 2007.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEE CVPR, pages 770–778, 2016.
-  Q. Hou, P. K. Dokania, D. Massiceti, Y. Wei, M.-M. Cheng, and P. Torr. Bottom-up top-down cues for weakly-supervised semantic segmentation. arXiv preprint arXiv:1612.02101, 2016.
-  S. Hwang and H.-E. Kim. Self-transfer learning for fully weakly supervised object localization. arXiv preprint arXiv:1602.01625, 2016.
-  Z. Jie, Y. Wei, X. Jin, and J. Feng. Deep self-taught learning for weakly supervised object localization. In IEEE CVPR, 2017.
-  D. Kim, D. Yoo, I. S. Kweon, et al. Two-phase learning for weakly supervised object localization. arXiv preprint arXiv:1708.02108, 2017.
-  A. Kolesnikov and C. H. Lampert. Seed, expand and constrain: Three principles for weakly-supervised image segmentation. In ECCV, pages 695–711, 2016.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, pages 1097–1105, 2012.
-  X. Liang, S. Liu, Y. Wei, L. Liu, L. Lin, and S. Yan. Towards computational baby learning: A weakly-supervised approach for object detection. In IEEE ICCV, pages 999–1007, 2015.
-  M. Lin, Q. Chen, and S. Yan. Network in network. ICLR, 2013.
-  T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. In CVPR, volume 1, page 4, 2017.
-  W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. Ssd: Single shot multibox detector. In European conference on computer vision, pages 21–37. Springer, 2016.
-  M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Is object localization for free? Weakly-supervised learning with convolutional neural networks. In IEEE CVPR, pages 685–694, 2015.
-  G. Papandreou, L.-C. Chen, K. Murphy, and A. L. Yuille. Weakly-and semi-supervised learning of a dcnn for semantic image segmentation. arXiv preprint arXiv:1502.02734, 2015.
-  J. Redmon, S. Divvala, R. Girshick, and A. Farhadi. You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 779–788, 2016.
-  S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91–99, 2015.
-  O. Russakovsky, A. Bearman, V. Ferrari, and L. Fei-Fei. What’s the point: Semantic segmentation with point supervision. In ECCV, pages 549–565, 2016.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015.
-  P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. International Conference on Learning Representations, 2014.
-  K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. International Conference on Learning Representations, 2015.
-  K. K. Singh and Y. J. Lee. Hide-and-seek: Forcing a network to be meticulous for weakly-supervised object and action localization. arXiv preprint arXiv:1704.04232, 2017.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.
-  C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical report, 2011.
-  L. Wang, G. Hua, R. Sukthankar, J. Xue, and N. Zheng. Video object discovery and co-segmentation with extremely weak supervision. In ECCV, pages 640–655. Springer, 2014.
-  Y. Wei, J. Feng, X. Liang, C. Ming-Ming, Y. Zhao, and S. Yan. Object region mining with adversarial erasing: A simple classification to semantic segmentation approach. In IEEE CVPR, 2017.
-  Y. Wei, X. Liang, Y. Chen, X. Shen, M.-M. Cheng, J. Feng, Y. Zhao, and S. Yan. Stc: A simple to complex framework for weakly-supervised semantic segmentation. IEEE TPAMI, 39(11):2314–2320, 2017.
-  Y. Wei, X. Liang, Y. n. Chen, Z. Jie, Y. Xiao, Y. Zhao, and S. Yan. Learning to segment with image-level annotations. Pattern Recognition, 2016.
-  Y. Wei, W. Xia, M. Lin, J. Huang, B. Ni, J. Dong, Y. Zhao, and S. Yan. Hcp: A flexible cnn framework for multi-label image classification. IEEE TPAMI, 38(9):1901–1907, 2016.
-  Y. Wei, H. Xiao, H. Shi, Z. Jie, J. Feng, and T. S. Huang. Revisiting dilated convolution: A simple approach for weakly- and semi- supervised semantic segmentation. In IEEE CVPR, 2018.
-  M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In European conference on computer vision, pages 818–833. Springer, 2014.
-  J. Zhang, Z. Lin, J. Brandt, X. Shen, and S. Sclaroff. Top-down neural attention by excitation backprop. In European Conference on Computer Vision, pages 543–559. Springer, 2016.
-  N. Zhang, J. Donahue, R. Girshick, and T. Darrell. Part-based R-CNNs for fine-grained category detection. In ECCV, pages 834–849. Springer, 2014.
-  N. Zhang, R. Farrell, F. Iandola, and T. Darrell. Deformable part descriptors for fine-grained recognition and attribute prediction. In IEEE ICCV, pages 729–736, 2013.
-  B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Learning deep features for discriminative localization. In IEEE CVPR, 2016.