Weakly Supervised Object Localization with Inter-Intra Regulated CAMs

11/17/2019 ∙ by Guofeng Cui, et al. ∙ University of Rochester

Weakly supervised object localization (WSOL) aims to locate objects in images by learning only from image-level labels. Current methods typically obtain localization results from Class Activation Maps (CAMs). They usually introduce additional CAMs or feature maps generated from internal layers of deep networks and encourage the different CAMs to be either adversarial or cooperative with each other. In this work, instead of following one of these two approaches, we analyze their internal relationship and propose a novel intra-sample strategy that regulates two CAMs of the same sample, generated from different classifiers, so that each pixel dynamically adapts between the adversarial and cooperative processes based on its own value. We mathematically demonstrate that our approach is a more general version of the current state-of-the-art method with fewer hyper-parameters. In addition, we develop an inter-sample criterion module for WSOL, originally proposed for co-segmentation problems, to refine the CAMs generated for each sample. The module considers a subgroup of samples of the same category and regulates their object regions. With experiments on two widely used datasets, we show that our proposed method significantly outperforms the existing state of the art, setting a new record for weakly supervised object localization.


1 Introduction

Weakly-Supervised Object Localization has attracted extensive research efforts in recent years [1, 3, 12, 11, 20, 21, 23, 27, 2, 35, 38]. It aims to infer object locations by training only with image-level labels rather than pixel-level annotations, which greatly reduces the cost of human labor in annotating images. The task is challenging since no guidance on target object positions is provided.

Figure 1: Important components of our method. Given the raw image in (a), CAMs are generated by a deep convolutional network. The feature map corresponding to the ground-truth label is shown in (b), in which pixels with different values appear in different colors. The binary feature map in (c) marks the high-confidence area in yellow; the segmentation threshold is learned by the network and varies with each sample.

The most popular line of work tries to find cues from existing classification models. For example, Zhou et al. [45] introduce a Global Average Pooling (GAP) layer to generate Class Activation Maps (CAMs) in the top layers, which highlight high-probability positions of target objects. However, CAMs only detect the most discriminative part of an object, which is far from enough to cover the entire object for precise localization.

Therefore, various methods have been proposed to improve the power of CAMs. For example, to enlarge the localized area from CAMs, several adversarial erasing approaches have been proposed [40, 43, 35]. These methods usually build new CAMs on top of the original ones to search for additional valuable areas. To encourage the new CAMs to focus on different regions, the common strategy is to erase part of the original image or of an internal feature map by directly manipulating the corresponding values. Despite the appealing idea, they add artifacts to the image features and cannot guarantee a better result if background noise is adopted. Unlike the previous methods, Zhang et al. [44] propose a self-supervised model, called SPG, that adds additional convolutional layers to convert internal features into pixel-level supervision and achieves the best performance so far. However, the model requires a very deep structure and a costly training process to extract a large number of targeted pixels from several feature maps. Worse, over the whole training process, at least four predefined thresholds are needed to segment different feature maps in order to extract foreground and background pixels. These thresholds, selected as hyper-parameters, may result in ambiguous supervision for many samples and ultimately prevent the network from achieving its best performance.

The progress and issues of existing WSOL methods have inspired us to design a more effective method that combines the advantages of previous approaches. We notice that most methods generate new CAMs or feature maps from the internal backbone network. However, there is no general rule about which strategy should be chosen to make these feature maps interact with each other. Some prefer to encourage them to be different in order to find more related areas, while others treat them as alternate supervision for each other so that they are enlarged together. Therefore, in this work, we propose a dynamically adaptive strategy for multiple CAMs to interact with each other, letting them determine automatically what strategy each of their pixels should apply: becoming more similar or more different.

Our method is inspired by the distribution of values in CAMs. As shown in Figure 1, a CAMs generated for the raw image on the left contains pixels with the highest and lowest values, shown in relatively yellow and black colors respectively. These pixels reflect the strong confidence of the CAMs that they belong to the target object or the background, and they can serve as good signals for another CAMs to follow and resemble. On the other hand, the pixels between these two groups have intermediate values, which are untrustworthy and actually reflect the ambiguity of the CAMs at those positions. Therefore, these pixels can encourage another CAMs to stay suspicious and explore more. Figure 1(c) is a binary mask that highlights the high-confidence pixels and leaves the ambiguous ones unmarked. Such a mask may look similar to the one in [44], but its segmentation thresholds are learned during training rather than set to predefined values. Besides, we also utilize the ambiguous pixels that are not highlighted in the mask to encourage another CAMs to be adversarial, whereas these pixels are simply discarded in [44]. We discuss the mathematical details and demonstrate the generality of our formulation over the current state-of-the-art method in Section 3.

In addition, inspired by previous work on co-segmentation tasks [39, 18], we introduce an inter-sample criterion module for our WSOL task. In [39], different parts of samples in the same category, such as the hands of one person and the legs of another person in different images, are regarded as one group. However, they only apply this strategy to single-category datasets, whereas we extend it, as an inter-sample criterion function, to WSOL tasks. Our module first generates feature vectors representing object and background regions from raw samples according to their CAMs, then pulls object vectors belonging to the same category closer while pushing them away from background regions. Besides, we also apply background regulation for each category separately in our task. With such a metric learning strategy, we force the network to consider fragments of foreground object regions in different images as one more complete object and prevent background noise from getting involved.

In summary, our main contributions are three-fold: (1) We propose a novel strategy that encourages multiple CAMs to interact with each other dynamically; the strategy can be shown mathematically to be a more general version of previous methods. (2) We further introduce an inter-sample criterion module that integrates discriminative object regions from different samples and reduces the influence of background noise. (3) With only image-level supervision for training, our method greatly outperforms state-of-the-art methods in weakly supervised localization on two standard benchmarks, the ILSVRC validation set and CUB-200-2011.

Figure 2: Overview of our proposed CAMs2i. Each image sample is processed through multiple convolutional blocks to generate two CAMs. From one of them, a confidence mask is calculated without any predefined threshold and then combined with a pixel-wise distance between the two CAMs. The dashed line indicates an additional supervision signal used to improve the classification performance of the internal classifier. For the inter-sample criterion module, we calculate weighted images and then feature vectors from the raw images and CAMs, and apply several metric learning strategies to further regulate the CAMs.

2 Related Work

Fully supervised detection methods have been intensively studied and have achieved extraordinary success [17, 9, 24, 14, 15, 30, 28]. R-CNN [13] and Fast R-CNN [29] propose to extract region proposals and feed them into classifiers. Faster R-CNN [30], one of the most effective methods for object detection, combines a region proposal network with a convolutional neural network to localize objects in images by bounding boxes. Moreover, SSD [25] and YOLO [28] are designed specifically to speed up the detection process and reach high performance in real-time detection. Despite the success of the approaches above, they all require a vast number of bounding box annotations during training, which are very expensive to create manually. Besides, such annotations are ambiguous since there is no common rule for annotating, especially when it comes to pixels around object edges.

Class Activation Maps (CAMs) therefore serve as a weakly supervised alternative for object localization. Zhou et al. [45] add a Global Average Pooling (GAP) layer to deep neural networks to generate CAMs that are utilized to localize objects. Building on it, Wei et al. [40] propose an adversarial erasing approach to expand CAMs with additional object regions. With a similar purpose, Zhang et al. [43] propose the ACoL network, which adopts a cut-and-search strategy on the feature map and further shows that the process of obtaining CAMs can be made end-to-end. Moreover, Zhang et al. [44] propose the SPG network, which adds pixel-level self-supervision on feature maps at different levels and is the current state-of-the-art method for this task.

There are also other methods that are more related to model interpretability but can also be applied to object localization. Selvaraju et al. [32] combine gradient values and the original feature maps to produce gradient CAMs without changing the network structure. Chattopadhyay et al. [6] further refine Grad-CAM by using a weighted combination of the positive partial derivatives of the last convolutional feature maps. These methods mostly aim to propose new CAMs that can interpret models on various tasks. In our work, by contrast, we focus on the object localization task and improve performance through a more reasonable interaction between multiple CAMs; although both lines of work utilize CAMs, their purposes are quite different.

3 Method

In this section, we first review the seminal Class Activation Maps (CAMs), then introduce our dynamically adaptive CAMs along with the inter-sample criterion module. An overview of our proposed method for the training phase is shown in Fig. 2.

3.1 Background

We first describe the weakly supervised object localization problem and some common modules in the basic network for generating CAMs. Given a set of images covering objects of $C$ categories, our goal is to classify each image into a category and locate the corresponding objects with bounding boxes. Taking the method in [43] as an example, for an input image, a Fully Convolutional Network (FCN) produces feature maps with different channel numbers and spatial sizes at different layers. We denote the last convolutional feature map from the backbone network as $F$. To calculate CAMs, the network applies a classification block $f_{\text{cls}}$, consisting of multiple fully convolutional layers, that transforms the channels of $F$ to the number of categories, so that we have $S = f_{\text{cls}}(F) \in \mathbb{R}^{C \times H \times W}$. Following that, a Global Average Pooling (GAP) layer is applied to each channel to generate a class logit $y^{c}$, which is then used for the cross-entropy loss. This process can be written as:

$$S = f_{\text{cls}}(F), \qquad y^{c} = \frac{1}{H \times W} \sum_{i} \sum_{j} S^{c}_{i,j}, \qquad (1)$$

where $f_{\text{cls}}$ refers to the classification block, and $i$ and $j$ locate a certain pixel on the $c$-th channel of the feature map $S$. The feature map in each channel of $S$ has relatively high values at positions that are likely to correspond to the target object.
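For concreteness, a minimal PyTorch sketch of this classification-block-plus-GAP formulation is shown below; the layer widths and kernel sizes are illustrative assumptions, not the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn

class CAMHead(nn.Module):
    """Classification block + GAP, mirroring Eq. (1).

    Maps a backbone feature map F (B x D x H x W) to class activation
    maps S (B x C x H x W) and class logits y (B x C). The 1024-channel
    width and the 3x3 / 1x1 kernels are illustrative assumptions.
    """
    def __init__(self, in_channels=512, num_classes=1000):
        super().__init__()
        self.cls_block = nn.Sequential(
            nn.Conv2d(in_channels, 1024, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(1024, num_classes, kernel_size=1),
        )

    def forward(self, feat):
        cams = self.cls_block(feat)        # S: B x C x H x W
        logits = cams.mean(dim=(2, 3))     # GAP over the spatial dims -> y^c
        return cams, logits

# Usage: feat would come from the last convolutional block of a backbone
# such as VGG16; the shapes below are placeholders.
feat = torch.randn(2, 512, 14, 14)
cams, logits = CAMHead()(feat)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([3, 7]))
```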

However, in this basic framework, the classification block and the GAP layer are only applied to the last convolutional feature map of the backbone network, which only captures the most discriminative part within the largest receptive field. To address this, most previous methods generate new CAMs or feature maps from additional extended convolutional layers. These new feature maps can search for extra areas of the target object beyond the one already highlighted by the original CAMs, or they can be regulated under the supervision of the original CAMs and in turn refine the backbone network. These two approaches, though seemingly very different and even contradictory, are merged together in our method, thereby improving every pixel in the final CAMs.

3.2 Interactive Class Activation Maps

We assume there are two CAMs generated by different classifiers on top of the same backbone network. The first classifier, $f_{A}$, is appended after the final convolutional block, exactly as described in the previous subsection. Another classifier, $f_{B}$, with the same structure and output dimensions, is inserted into the backbone network. We denote the two CAMs as:

$$S^{A} = f_{A}(F), \qquad S^{B} = f_{B}(F_{l}), \qquad (2)$$

where $F$ is the final convolutional feature map and $F_{l}$ is the intermediate backbone feature map that $f_{B}$ is attached to.

We first consider $S^{B}$, the CAMs generated by $f_{B}$, which is inserted after one of the backbone convolutional layers. The pixel values of $S^{B}$ vary over a large range, from the lowest values on background pixels to the highest values on the target object, as partly shown in Figure 1(b). If a value is much higher or lower than the others, we consider $S^{B}$ to have high confidence that the corresponding pixel is correct. This idea is similar to the one in [44], which extracts pixels with extreme values in CAMs to supervise internal feature maps. However, instead of setting many thresholds to separate the CAMs as in their method, we calculate the distance between each pixel and the averaged value. This distance, after normalization, reveals how confident the CAMs are about each of their pixels. The process can be denoted as:

$$\mu = \frac{1}{N} \sum_{i,j} S^{B}_{i,j}, \qquad d_{i,j} = \left| S^{B}_{i,j} - \mu \right|, \qquad (3)$$

where $\mu$ is the averaged value of $S^{B}$, $d_{i,j}$ refers to the distance of each pixel in $S^{B}$ to $\mu$, and $N$ is the total number of pixels in $S^{B}$. We then compute $\bar{d}$ as the average of the per-pixel distances $d_{i,j}$. Finally, we determine the strategy for each pixel, i.e. adversarial or cooperative, by checking whether its mask value $M_{i,j}$, derived from $d_{i,j}$, is larger than $\bar{d}$ or not. If the value is larger than $\bar{d}$, the CAMs have strong confidence that the corresponding pixel is correct, and $M_{i,j}$ is kept unchanged. Otherwise, $M_{i,j}$ is set to its negative value, denoting great ambiguity at that pixel.
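A sketch of how such a confidence mask could be computed from a single CAM is given below; using the signed, max-normalized distance as the mask value is an assumption, since the text only states that a normalized distance to the averaged value is used.

```python
import torch

def confidence_mask(cam_b):
    """Sketch of the per-pixel confidence mask described above.

    cam_b: single-channel CAM S^B of shape (H, W) for one class.
    Pixels whose normalized distance to the CAM mean exceeds the average
    distance keep a positive mask value; the rest are negated, marking
    them as ambiguous. The signed-distance mask values and the
    max-normalization are assumptions.
    """
    mu = cam_b.mean()                      # averaged value of the CAM
    dist = (cam_b - mu).abs()              # per-pixel distance to the mean
    dist = dist / (dist.max() + 1e-8)      # simple normalization (assumed)
    avg_dist = dist.mean()                 # averaged distance over all pixels
    mask = dist.clone()
    mask[dist <= avg_dist] *= -1.0         # ambiguous pixels become negative
    return mask
```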

Once we have constructed the mask, we generate the other CAMs and make the two interact with each other. We first calculate the distance between each pair of corresponding pixels in the two CAMs and then multiply it by the corresponding mask value computed above. A positive result means we would like the two pixels in the different CAMs to become more similar, while a negative result means we would like them to become more different; this encourages $S^{A}$ to explore more at the ambiguous positions of $S^{B}$ while staying close to it in the confident areas. The objective function is denoted as:

$$\mathcal{L}_{\text{intra}} = \frac{1}{N} \sum_{i,j} M_{i,j} \left| S^{A}_{i,j} - S^{B}_{i,j} \right|. \qquad (4)$$

In this way, the confident values in $S^{B}$ serve as supervision without introducing any threshold. In fact, with $\mu$ and $\bar{d}$, we have implicitly obtained two thresholds by transforming Equation 3, which can be denoted as:

$$\theta_{fg} = \mu + \bar{d}, \qquad \theta_{bg} = \mu - \bar{d}, \qquad (5)$$

where $\theta_{fg}$ and $\theta_{bg}$ represent the thresholds used to extract foreground and background pixels from the CAMs, respectively. These two values are set as predefined percentage parameters in [44], whereas in our method they are determined independently for each sample from the CAMs themselves, making our formulation a more general way of separating CAMs. Besides, unlike [44], which simply discards the remaining uncertain values, we use them to encourage $S^{A}$ to explore different areas, which is the main mechanism of previous adversarial methods. Therefore, every pixel of one CAMs contributes to the generation of the other, and this merges the two strategies for refining CAMs into a single integral one.
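Combining the mask with a pixel-wise distance between the two CAMs can be sketched as follows; the absolute distance and plain averaging mirror the reconstruction of Eq. (4) above and are assumptions rather than the exact published form.

```python
import torch

def intra_cam_loss(cam_a, cam_b, mask):
    """Sketch of the intra-sample interaction objective (cf. Eq. (4)).

    cam_a, cam_b: same-class CAMs of shape (H, W) from the two classifiers;
    mask: signed confidence mask computed from cam_b. Minimizing the loss
    pulls the two CAMs together where the mask is positive (confident
    pixels) and pushes them apart where it is negative (ambiguous pixels).
    The absolute distance and plain averaging are assumptions.
    """
    pixel_dist = (cam_a - cam_b).abs()     # pixel-wise distance between CAMs
    return (mask * pixel_dist).mean()      # signed, averaged over all pixels
```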

Both classifiers are trained with image-level labels and the Cross Entropy loss. However, since $f_{A}$ is appended after the final convolutional layer, its classification performance is better than that of $f_{B}$. We therefore also use the classification result of $f_{A}$ as an additional supervision for $f_{B}$, which we denote as $\mathcal{L}_{A \to B}$:

(6)

Therefore, the total loss for each training sample can be shown as:

$$\mathcal{L} = \mathcal{L}_{ce}(y^{A}) + \mathcal{L}_{ce}(y^{B}) + \alpha \left( \mathcal{L}_{\text{intra}} + \mathcal{L}_{A \to B} \right), \qquad (7)$$

where $\mathcal{L}_{ce}$ is the Cross Entropy loss and $\alpha$ is a linearly increasing weight that prevents the last two losses from dominating the network at the beginning of the training process.
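A sketch of how the per-sample losses could be combined with the linearly increasing weight is shown below; the warm-up length and 0-to-1 range of the weight are assumptions, and the grouping of the auxiliary losses follows the reconstruction of Eq. (7) above.

```python
import torch.nn.functional as F

def total_loss(logits_a, logits_b, labels, l_intra, l_soft,
               epoch, warmup_epochs=10):
    """Sketch of the per-sample training objective (cf. Eq. (7)).

    logits_a / logits_b: predictions of the top and internal classifiers;
    l_intra: interactive CAM loss of Eq. (4); l_soft: the additional
    supervision of the internal classifier by the top one (Eq. (6)).
    alpha increases linearly so the auxiliary losses do not dominate early
    training; the 0-to-1 range and the warmup_epochs value are assumptions.
    """
    ce_a = F.cross_entropy(logits_a, labels)
    ce_b = F.cross_entropy(logits_b, labels)
    alpha = min(1.0, epoch / float(warmup_epochs))
    return ce_a + ce_b + alpha * (l_intra + l_soft)
```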

At inference time, we obtain the two CAMs from the different classifiers and combine them. The feature map corresponding to the class with the highest predicted score is extracted and upsampled to the size of the raw testing image. We apply the same strategy as [44] to calculate the resulting bounding boxes: we first segment the foreground with a fixed threshold and then seek the tight bounding box covering the largest connected area of foreground pixels. For more details, please refer to [44].
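A minimal sketch of this inference step using NumPy and SciPy is given below; the threshold is the fixed value studied in the ablation and is exposed here as a parameter, and the helper name is ours.

```python
import numpy as np
from scipy import ndimage

def cam_to_bbox(cam, threshold=0.6):
    """Sketch of the bounding-box extraction used at inference time.

    cam: combined, upsampled CAM (H x W) normalized to [0, 1] for the
    top-predicted class. Returns (x_min, y_min, x_max, y_max) of the
    tight box around the largest connected foreground component.
    """
    fg = cam >= threshold                          # foreground segmentation
    labeled, num = ndimage.label(fg)               # connected components
    if num == 0:                                   # no foreground found
        return 0, 0, cam.shape[1] - 1, cam.shape[0] - 1
    sizes = ndimage.sum(fg, labeled, range(1, num + 1))
    largest = np.argmax(sizes) + 1                 # label of the largest blob
    ys, xs = np.where(labeled == largest)
    return xs.min(), ys.min(), xs.max(), ys.max()
```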

3.3 Inter-sample Criterion Module

Besides applying interactive CAMs to each sample, we further develop an inter-sample criterion function, inspired by co-segmentation tasks [18, 39], to regulate the CAMs in our method. The key idea of the previous methods is that if object regions belonging to the same category are extracted from the CAMs of multiple samples, they can be treated as similar features in a high-dimensional space. Based on that idea, we not only consider foreground object parts to be similar but also assume that background pixels surrounding objects of the same category share some characteristics. Besides, we extend the idea to regulate CAMs across multiple categories rather than within a single one, as shown in Figure 2.

In detail, for the CAMs defined above, we first extract the channel corresponding to the ground-truth label and perform an element-wise multiplication with the raw input image after upsampling the CAMs to the same spatial size. The result can be considered a weighted image that focuses on the corresponding objects. We apply a similar strategy to obtain background-focused images, except that the weighting map is obtained by taking the maximum value at each position across channels and using its complement as the background probability. The whole process can be denoted as:

(8)

where $I_{n}$ indicates the $n$-th image and $\sigma$ refers to the sigmoid function used for the non-linear transformation.
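A sketch of the weighted-image construction is shown below; the bilinear upsampling, the ordering of the sigmoid and the complement, and the helper names are assumptions.

```python
import torch
import torch.nn.functional as F

def weighted_images(image, cams, gt_label):
    """Sketch of the weighted images fed to the inter-sample module.

    image: raw input batch (B x 3 x H x W); cams: CAMs (B x C x h x w);
    gt_label: ground-truth class indices (B,). Returns foreground- and
    background-focused images. Bilinear upsampling and the ordering of
    the sigmoid and the complement are assumptions.
    """
    up = F.interpolate(cams, size=image.shape[-2:], mode='bilinear',
                       align_corners=False)
    idx = torch.arange(image.size(0))
    fg_prob = torch.sigmoid(up[idx, gt_label])           # B x H x W
    bg_prob = torch.sigmoid(1.0 - up.max(dim=1).values)  # complement of channel-wise max
    fg_image = image * fg_prob.unsqueeze(1)
    bg_image = image * bg_prob.unsqueeze(1)
    return fg_image, bg_image
```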

These two kinds of weighted images are then transformed into feature vectors by another convolutional network, which we denote $f_{m}$. We then apply two metric learning strategies to constrain the distances among them: foreground object features, and likewise background features, of the same category should be pulled closer, while foreground-background pairs should be pushed apart in a crossing way. The whole process within one category can be denoted as:

(9)

where the two indices refer to different samples within the category and the score measures the agreement between two feature vectors under the chosen metric. The formulation is similar to the one in [18], but instead of using feature maps from generators, we use multiple CAMs, and we further add a metric term that pulls the background features of same-category samples closer, enlarging the distance between different categories.

Finally, we aggregate the per-category losses over all categories and obtain the loss function as:

$$\mathcal{L}_{\text{inter}} = \frac{1}{C} \sum_{c=1}^{C} \mathcal{L}^{c}_{\text{inter}}, \qquad (10)$$

where $C$ refers to the number of categories and $\mathcal{L}^{c}_{\text{inter}}$ is the inter-sample loss of Eq. (9) computed within category $c$. During training, we optimize this loss together with the total loss of Eq. (7), and the module is removed at the testing phase.
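Since Eq. (9) is not reproduced here, the following sketch uses cosine similarity as an assumed metric to illustrate the pull/push structure over a mini-batch; it is an illustration, not the exact published loss.

```python
import torch.nn.functional as F

def inter_sample_loss(fg_feats, bg_feats, labels):
    """Sketch of the inter-sample criterion over a mini-batch.

    fg_feats / bg_feats: feature vectors (B x D) of the foreground- and
    background-weighted images; labels: category indices (B,). Same-
    category foreground (and background) features are pulled together,
    while foreground/background pairs are pushed apart in a crossing way.
    Cosine similarity is an assumed choice of metric.
    """
    loss, count = 0.0, 0
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        for i in range(idx.numel()):
            for j in range(i + 1, idx.numel()):
                a, b = idx[i], idx[j]
                loss = loss \
                    - F.cosine_similarity(fg_feats[a], fg_feats[b], dim=0) \
                    - F.cosine_similarity(bg_feats[a], bg_feats[b], dim=0) \
                    + F.cosine_similarity(fg_feats[a], bg_feats[b], dim=0) \
                    + F.cosine_similarity(bg_feats[a], fg_feats[b], dim=0)
                count += 1
    return loss / max(count, 1)
```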

3.4 Implementation Details

We build our model on VGG16 [34], which is commonly used in classification tasks. We first build a convolutional classifier by combining two convolutional layers with a GAP layer to generate the CAMs and classification results, respectively. We then add two such classifiers after different layers of the backbone network and modify the corresponding layers so that the different feature maps keep the same spatial size. For our inter-sample criterion module, we apply AlexNet [22] to transform the weighted images into feature vectors. The network is fine-tuned from weights pre-trained on ImageNet [31] for both the ILSVRC and CUB datasets. We train the model with an initial learning rate of 0.001, decayed after each epoch. The optimizer is SGD with momentum and weight decay. For the classification result, we follow the procedure in [45], which further averages the softmax scores over multiple image crops.
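A sketch of the corresponding training configuration is shown below; only the 0.001 initial learning rate comes from the text, while the decay factor, momentum, and weight-decay values are assumptions.

```python
import torch
import torchvision

# Sketch of the training configuration described above. Only the 0.001
# initial learning rate comes from the text; the decay factor, momentum,
# and weight-decay values below are assumptions, and in practice the
# parameters of the extra classifier heads would be optimized as well.
backbone = torchvision.models.vgg16(pretrained=True).features
optimizer = torch.optim.SGD(backbone.parameters(), lr=0.001,
                            momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.9)
```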

4 Experiment

4.1 Experiment Setup

Dataset and Evaluation  For a fair comparison, we test our model on the ILSVRC 2016 [31] validation set and the CUB-200-2011 [37] test set, the two most widely used benchmarks for WSOL. The ILSVRC dataset has a training set containing more than 1.2 million images of 1,000 categories and a validation set of 50,000 images. CUB-200-2011 contains 11,788 bird images of 200 classes in total, among which 5,994 are for training and 5,794 for testing. We use the localization metric suggested by [31]: the bounding box of an image is correctly predicted if 1) the model predicts the right image label, and 2) the Intersection-over-Union (IoU) between a predicted bounding box and a ground-truth box exceeds 50%. For more details, please refer to [31].
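A minimal sketch of this evaluation criterion is given below; boxes are assumed to be given as (x1, y1, x2, y2) tuples.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter + 1e-8)

def localization_correct(pred_label, gt_label, pred_box, gt_boxes):
    """A prediction counts as correct if the label matches and the box
    overlaps at least one ground-truth box with IoU above 0.5."""
    return pred_label == gt_label and any(iou(pred_box, g) > 0.5 for g in gt_boxes)
```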

4.2 Experiment Result

ILSVRC:  Table 1 and Table 2 show the classification and localization results on the ILSVRC validation set, respectively. We first build a baseline model with a VGG16 backbone that has only one classification branch at the final top layer, to compare against our proposed method. For the classification task, although our method still lags behind dedicated classification networks, it achieves better results than most state-of-the-art WSOL models. We attribute this to the classifier inserted into the backbone network, which not only provides an additional classification result but also refines the weights of the lower layers.

Methods Clas. Error
Top-1 Top-5
Classification Task
FixResNeXt-101 [36] 13.6 2.0
ResNeXt-101 [26] 14.6 2.4
EfficientNet-B7 [7] 15.0 2.8
WSOL Task
GoogLeNet-GAP [45] 35.0 13.2
VGGnet-GAP [45] 33.4 12.2
VGGnet-ACoL [43] 32.5 12.0
GoogLeNet-ACoL [43] 29.0 11.8
Grad-CAM on VGG16 [32] 30.4 10.9
SPG [44]
VGG16-base 32.58 11.7
VGG16-CAMsIntra 30.1 10.6
VGG16-CAMs2i 28.1 9.8
Table 1: Classification results for the ILSVRC validation set; some results are obtained by running author-released code.
Methods Loc. Error
Top-1 Top-5
Backprop on GoogLeNet [33] 61.31 50.55
Backprop on VGGnet [33] 61.12 51.46
GoogLeNet-GAP [45] 56.40 43.00
VGGnet-GAP [45] 57.20 45.14
VGGnet-ACoL [43] 54.17 40.57
GoogLeNet-ACoL [43] 53.28 42.58
Grad-CAM on VGG16 [32] 56.51 46.41
SPG [44] 51.40 40.00
VGG16-base 54.23 43.76
VGG16-CAMsIntra 51.66 40.89
VGG16-CAMs2i 48.81 37.46
Table 2: Localization results for ILSVRC validation.
Methods GT-Known Top-1 Loc. Err
GoogLeNet-GAP [45] 41.34
Feedback [5] 38.80
MWP [41] 38.70
STNet [4] 38.60
VGGnet-ACoL [43] 37.04
SPG [44] 35.31
VGG16-base 37.20
VGG16-CAMs2i 33.8
Table 3: GT-Known localization results for ILSVRC validation set.
Methods Top-1 Clas. Err
Classification Task
Inception V3 [8] 10.4
WS-DAN [19] 10.7
PAIRS [16] 10.8
WSOL Task
DPD [42] 49.0
DeCAF+DPD [10] 35.0
PANDA R-CNN 23.6
GoogLeNet-GAP(full) [45] 37.0
GoogLeNet-GAP(crop) [45] 32.2
GoogLeNet-GAP(BBox) [45] 29.5
VGGnet-ACoL [43] 28.1
SPG [44]
VGG16-base 30.4
VGG16-CAMsIntra 25.7
VGG16-CAMs2i 23.6
Table 4: Classification results for CUB-200-2011 test set.
Methods Localization Error
Top-1 Top-5
GoogLeNet-GAP(full) [45] 59.00
VGGnet-ACoL [43] 54.08 43.49
SPG [44] 53.36 42.28
VGG16-base 55.26 42.37
VGG16-CAMsIntra 45.51 35.12
VGG16-CAMs2i 44.17 34.26
Table 5: Localization results for CUB-200-2011 test set.
Figure 3: Output examples of our CAMs2i. The first three rows show successful results, while the last row provides two examples that fail to connect the detected parts together.

For the localization task, our CAMs2i outperforms the state of the art by about 2.5% on both top-1 and top-5 localization error. This demonstrates that the interactive CAMs help to explore more object-related regions based on the existing confident area than a single CAMs does. Besides, the inter-sample loss function further improves the CAMs by regulating CAMs across different parts of the target objects and by preventing noise from background pixels.

Figure 4: Comparison of localization examples between SPG and CAMs2i. All visual results for SPG are generated by strictly following the author-released code.
Dataset Threshold Top-1 Loc. Err.
CUB 0.5 45.12
0.6 44.17
0.7 45.26
ILSVRC 0.6 47.29
0.7 48.81
0.8 48.02
Table 6: Localization error of different thresholds for foreground segmentation.

In the above comparison, our localization result is restricted by classification performance, since the predicted class label must also be correct. To demonstrate the localization ability of our proposed model in isolation, we use ground-truth labels for the ILSVRC validation set and evaluate localization performance only, serving as an "upper bound." Table 3 shows the results for our method and previous WSOL methods. Our model with the inter-intra regulation module outperforms all other methods by about 1.5%. That is, regardless of how the network classifies an image, the CAMs locate the corresponding objects correctly with higher probability.

CUB-200-2011:  We further demonstrate the power of our proposed method on CUB-200-2011 in Table 4 and Table 5. For the classification task, our VGG16-CAMs2i outperforms all other WSOL methods, including those that rely on bounding box crops. For localization, our method outperforms the previous best by about 9% on top-1 and 8% on top-5 localization error, which marks a huge improvement.

Figure 3 visualizes the CAMs from each classification branch and the final bounding box results of our proposed method on both ILSVRC and CUB-200-2011. For each input image, our model first generates CAMs for each classification branch and then combines them after normalization. Finally, the binary region is obtained by segmenting the foreground part of the combined CAMs and extracting the largest connected component. The areas in dashed lines for each CAMs indicate the regions that would be segmented if that CAMs alone were used as the final result. We observe that in most cases the combination of the two CAMs yields a more stable foreground segmentation than either single CAMs: it keeps the pixels that both CAMs are confident about and removes the ambiguous pixels of each individual CAMs. Besides, we compare the localization results of our method and SPG in Fig. 4. In most situations, our method generates more precise bounding boxes than SPG by discovering more representative object components.

4.3 Ablation Study

In our proposed model, we use a single threshold to segment the foreground region from the combined CAMs at inference time. To inspect the influence of this threshold on the final localization result, we test different values in Table 6. A moderate threshold gives the best localization performance, and the optimal value differs slightly between ILSVRC and CUB-200-2011.

5 Conclusion

We propose a new method for the Weakly Supervised Object Localization task, which generates multiple CAMs from the network and lets them interact with each other to explore more related regions. We also propose an inter-sample module to further regulate the CAMs at the category level. The two modules improve the CAMs in both intra- and inter-sample ways for localization, achieving a new state-of-the-art result on the WSOL task.

References

  • [1] L. Bazzani, A. Bergamo, D. Anguelov, and L. Torresani (2016-03) Self-taught object localization with deep networks. In 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1–9. External Links: Document Cited by: §1.
  • [2] A. Bearman, O. Russakovsky, V. Ferrari, and L. Fei-Fei (2016) What’s the point: semantic segmentation with point supervision. In Computer Vision – ECCV 2016, B. Leibe, J. Matas, N. Sebe, and M. Welling (Eds.), Cham, pp. 549–565. External Links: ISBN 978-3-319-46478-7 Cited by: §1.
  • [3] A. J. Bency, H. Kwon, H. Lee, S. Karthikeyan, and B. S. Manjunath (2016) Weakly supervised localization using deep feature maps. In Computer Vision – ECCV 2016, B. Leibe, J. Matas, N. Sebe, and M. Welling (Eds.), Cham, pp. 714–731. External Links: ISBN 978-3-319-46448-0 Cited by: §1.
  • [4] M. Biparva and J. K. Tsotsos (2017) STNet: selective tuning of convolutional networks for object localization. CoRR abs/1708.06418. External Links: Link, 1708.06418 Cited by: Table 3.
  • [5] C. Cao, X. Liu, Y. Yang, Y. Yu, J. Wang, Z. Wang, Y. Huang, L. Wang, C. Huang, W. Xu, D. Ramanan, and T. S. Huang (2015-12) Look and think twice: capturing top-down visual attention with feedback convolutional neural networks. In The IEEE International Conference on Computer Vision (ICCV), Cited by: Table 3.
  • [6] A. Chattopadhyay, A. Sarkar, P. Howlader, and V. N. Balasubramanian (2017) Grad-cam++: generalized gradient-based visual explanations for deep convolutional networks. CoRR abs/1710.11063. External Links: Link, 1710.11063 Cited by: §2.
  • [7] E. D. Cubuk, B. Zoph, J. Shlens, and Q. V. Le (2019) RandAugment: practical data augmentation with no separate search. External Links: 1909.13719 Cited by: Table 1.
  • [8] Y. Cui, Y. Song, C. Sun, A. Howard, and S. Belongie (2018) Large scale fine-grained categorization and domain-specific transfer learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4109–4118. Cited by: Table 4.
  • [9] J. Dai, Y. Li, K. He, and J. Sun (2016) R-fcn: object detection via region-based fully convolutional networks. In Advances in Neural Information Processing Systems 29, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (Eds.), pp. 379–387. Cited by: §2.
  • [10] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell (2014) Decaf: a deep convolutional activation feature for generic visual recognition. In International conference on machine learning, pp. 647–655. Cited by: Table 4.
  • [11] X. Dong, L. Zheng, F. Ma, Y. Yang, and D. Meng (2018) Few-example object detection with model communication. IEEE Transactions on Pattern Analysis and Machine Intelligence (), pp. 1–1. External Links: Document, ISSN 0162-8828 Cited by: §1.
  • [12] X. Dong, D. Meng, F. Ma, and Y. Yang (2017) A dual-network progressive approach to weakly supervised object detection. In Proceedings of the 25th ACM International Conference on Multimedia, MM ’17, New York, NY, USA, pp. 279–287. External Links: ISBN 978-1-4503-4906-2, Document Cited by: §1.
  • [13] R. B. Girshick, J. Donahue, T. Darrell, and J. Malik (2013) Rich feature hierarchies for accurate object detection and semantic segmentation. CoRR abs/1311.2524. External Links: Link, 1311.2524 Cited by: §2.
  • [14] R. Girshick, J. Donahue, T. Darrell, and J. Malik (2014-06) Rich feature hierarchies for accurate object detection and semantic segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
  • [15] R. Girshick (2015-12) Fast r-cnn. In The IEEE International Conference on Computer Vision (ICCV), Cited by: §2.
  • [16] P. Guo and R. Farrell (2018) Fine-grained visual categorization using PAIRS: pose and appearance integration for recognizing subcategories. CoRR abs/1801.09057. External Links: Link, 1801.09057 Cited by: Table 4.
  • [17] K. He, G. Gkioxari, P. Dollar, and R. Girshick (2017-10) Mask r-cnn. In The IEEE International Conference on Computer Vision (ICCV), Cited by: §2.
  • [18] K. Hsu, Y. Lin, and Y. Chuang (2018) Co-attention cnns for unsupervised object co-segmentation.. In IJCAI, pp. 748–756. Cited by: §1, §3.3, §3.3.
  • [19] T. Hu and H. Qi (2019) See better before looking closer: weakly supervised data augmentation network for fine-grained visual classification. CoRR abs/1901.09891. External Links: Link, 1901.09891 Cited by: Table 4.
  • [20] Z. Jie, Y. Wei, X. Jin, J. Feng, and W. Liu (2017-07) Deep self-taught learning for weakly supervised object localization. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1.
  • [21] D. Kim, D. Cho, D. Yoo, and I. So Kweon (2017-10) Two-phase learning for weakly supervised object localization. In The IEEE International Conference on Computer Vision (ICCV), Cited by: §1.
  • [22] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (Eds.), pp. 1097–1105. External Links: Link Cited by: §3.4.
  • [23] X. Liang, S. Liu, Y. Wei, L. Liu, L. Lin, and S. Yan (2015-12) Towards computational baby learning: a weakly-supervised approach for object detection. In The IEEE International Conference on Computer Vision (ICCV), Cited by: §1.
  • [24] T. Lin, P. Dollar, R. Girshick, K. He, B. Hariharan, and S. Belongie (2017-07) Feature pyramid networks for object detection. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
  • [25] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, and A. C. Berg (2016) SSD: single shot multibox detector. In Computer Vision – ECCV 2016, B. Leibe, J. Matas, N. Sebe, and M. Welling (Eds.), Cham, pp. 21–37. External Links: ISBN 978-3-319-46448-0 Cited by: §2.
  • [26] D. Mahajan, R. Girshick, V. Ramanathan, K. He, M. Paluri, Y. Li, A. Bharambe, and L. van der Maaten (2018-09) Exploring the limits of weakly supervised pretraining. In The European Conference on Computer Vision (ECCV), Cited by: Table 1.
  • [27] M. Oquab, L. Bottou, I. Laptev, and J. Sivic (2015-06) Is object localization for free? - weakly-supervised learning with convolutional neural networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1.
  • [28] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi (2016-06) You only look once: unified, real-time object detection. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
  • [29] S. Ren, K. He, R. B. Girshick, and J. Sun (2015) Faster R-CNN: towards real-time object detection with region proposal networks. CoRR abs/1506.01497. External Links: Link, 1506.01497 Cited by: §2.
  • [30] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems 28, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (Eds.), pp. 91–99. Cited by: §2.
  • [31] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei (2015-12-01) ImageNet large scale visual recognition challenge. International Journal of Computer Vision 115 (3), pp. 211–252. External Links: ISSN 1573-1405, Document Cited by: §3.4, §4.1.
  • [32] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra (2017-10) Grad-cam: visual explanations from deep networks via gradient-based localization. In The IEEE International Conference on Computer Vision (ICCV), Cited by: §2, Table 1, Table 2.
  • [33] K. Simonyan, A. Vedaldi, and A. Zisserman (2013) Deep inside convolutional networks: visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034. Cited by: Table 2.
  • [34] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §3.4.
  • [35] K. K. Singh and Y. J. Lee (2017-10) Hide-and-seek: forcing a network to be meticulous for weakly-supervised object and action localization. In 2017 IEEE International Conference on Computer Vision (ICCV), Vol. , pp. 3544–3553. External Links: Document, ISSN 2380-7504 Cited by: §1, §1.
  • [36] H. Touvron, A. Vedaldi, M. Douze, and H. Jégou (2019) Fixing the train-test resolution discrepancy. CoRR abs/1906.06423. External Links: Link, 1906.06423 Cited by: Table 1.
  • [37] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie (2011) The caltech-ucsd birds-200-2011 dataset. Technical report Technical Report CNS-TR-2011-001, California Institute of Technology. Cited by: §4.1.
  • [38] L. Wang, G. Hua, R. Sukthankar, J. Xue, Z. Niu, and N. Zheng (2017-10) Video object discovery and co-segmentation with extremely weak supervision. IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (10), pp. 2074–2088. External Links: Document, ISSN 0162-8828 Cited by: §1.
  • [39] X. Wang, S. You, X. Li, and H. Ma (2018) Weakly-supervised semantic segmentation by iteratively mining common object features. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1354–1362. Cited by: §1, §3.3.
  • [40] Y. Wei, J. Feng, X. Liang, M. Cheng, Y. Zhao, and S. Yan (2017-07) Object region mining with adversarial erasing: a simple classification to semantic segmentation approach. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §2.
  • [41] J. Zhang, S. A. Bargal, Z. Lin, J. Brandt, X. Shen, and S. Sclaroff (2018-10-01) Top-down neural attention by excitation backprop. International Journal of Computer Vision 126 (10), pp. 1084–1102. External Links: ISSN 1573-1405, Document Cited by: Table 3.
  • [42] N. Zhang, R. Farrell, F. Iandola, and T. Darrell (2013-12) Deformable part descriptors for fine-grained recognition and attribute prediction. In The IEEE International Conference on Computer Vision (ICCV), Cited by: Table 4.
  • [43] X. Zhang, Y. Wei, J. Feng, Y. Yang, and T. S. Huang (2018-06) Adversarial complementary learning for weakly supervised object localization. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §2, §3.1, Table 1, Table 2, Table 3, Table 4, Table 5.
  • [44] X. Zhang, Y. Wei, G. Kang, Y. Yang, and T. Huang (2018-09) Self-produced guidance for weakly-supervised object localization. In The European Conference on Computer Vision (ECCV), Cited by: §1, §1, §2, §3.2, §3.2, §3.2, Table 1, Table 2, Table 3, Table 4, Table 5.
  • [45] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba (2016-06) Learning deep features for discriminative localization. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §2, §3.4, Table 1, Table 2, Table 3, Table 4, Table 5.