Object detection, a fundamental problem in computer vision, aims to localize the spatial extents of all instances of a particular object category. State-of-the-art object detection approaches typically train deep Convolutional Neural Networks (CNNs) on large-scale image datasets with instance-level labels (i.e., bounding boxes) [8, 12, 21, 23, 24]. A key bottleneck of these approaches is that they require instance-level labels (strong labels) during training, which are extremely costly to obtain. Image-level labels (weak labels), in the form of binary labels that indicate which object categories are present in an image, are far easier to collect than detailed instance-level labels, especially given the growth of tagged images on the Internet. However, utilizing these image-level labels to train object detectors remains a challenging problem. In this paper, we develop an EM-based object detection method for training CNN-based object detectors from image-level labels, either alone or in combination with some instance-level labels.
There are two different settings for training detectors given image-level labels: 1) learning detectors from image-level labels alone, and 2) learning detectors from image-level labels combined with some instance-level labels. In the first setting, the detection problem is also known as Weakly Supervised Detection (WSD), which has been explored in much recent literature. Despite this progress, WSD is still far from solved, since the state-of-the-art performance of WSD on standard benchmarks [3, 6, 17, 34] is considerably lower than that of fully supervised counterparts [8, 24]. In the second setting, the detection problem is known as Semi Supervised Detection (SSD). Several previous works [13, 31] assume the existence of strongly annotated categories (all images in these categories have instance-level labels), and transfer knowledge from these strongly annotated categories to weakly annotated categories (all images in these categories have only image-level labels). We focus on a different setting, where both image-level labels and instance-level labels belong to the same categories, and additional strongly annotated categories are not required.
Although many existing WSD and SSD methods have shown promising results, there are three main drawbacks of existing approaches: (1) Existing WSD approaches do not cover the semi supervised setting, which is more practical in real-world applications since SSD can achieve detection performance comparable to fully supervised methods with significantly fewer instance-level labels. (2) Many WSD approaches treat object proposals in an image as independent instances and invoke MI-SVM to mine positive proposals. They usually make a hard decision when choosing a positive object proposal, and thus support only one hypothesis at a time. Intuitively, it is usually better to assign instance-level labels to proposals in a probabilistic fashion, especially when the detector is not sure which object proposal is positive. (3) Existing SSD methods usually transfer knowledge from auxiliary strongly annotated categories [13, 31, 14]. Tang et al. show that visual and semantic similarities between weakly annotated categories and auxiliary strongly annotated categories play an essential role in improving the adaptation process. However, for a specific detection task, suitable additional strongly annotated categories are not always readily available.
In this paper, we present an EM-based object detection method, which can be applied uniformly to both the semi supervised and weakly supervised settings. The EM algorithm estimates a probability distribution over missing values, so it is smoother to optimize and can support multiple different hypotheses at the same time. This is important in the early training stage, since the estimation of instance-level labels for weakly annotated images is very noisy at that time. Given all observed data, we use maximum likelihood estimation (MLE) to estimate the parameters of the CNN (Section 2.1). Treating instance-level labels as missing data for weakly annotated images, our method alternates between two steps: 1) E-step: estimate a probability distribution over all possible latent locations, and 2) M-step: update the CNN weights using the estimated locations from the last E-step (Section 2.2). In practice, the quality of the final extremum depends heavily on the initialization, since the whole optimization problem is highly non-convex. We use WSDDN by Bilen and Vedaldi to initialize our EM algorithm (Section 2.3).
Our main contributions are:
We present EM algorithms for object detection, applicable to both weakly supervised and semi supervised settings.
We show that our method outperforms the current state-of-the-art methods in the weakly supervised setting. On the PASCAL VOC 2007 test set, our method achieves 39.4% mAP using AlexNet and 46.1% mAP using VGG.
We show that by accessing a small number of strongly annotated images, our method can almost match the performance of fully supervised detectors.
In this section we introduce our EM-based object detection method, which consists of a pre-training step followed by several EM iterations. Our method not only learns from image-level labels but also utilizes any available instance-level labels, and is thus uniformly applicable to both SSD and WSD.
Notation. Let $x$ denote an image of height $h$ and width $w$. An image can be either weakly annotated or strongly annotated. We extract $R$ bounding box proposals from each image. In the fully supervised paradigm, each proposal is assigned to one of $C$ categories (including the background category). Let $Y \in \{0, 1\}^{R \times C}$ denote the instance-level label, where $Y_{rc} = 1$ if the instance-level label of the $r$-th proposal is the $c$-th category. Let $z \in \{0, 1\}^{C}$ denote the image-level label, where $z_c = 1$ if the image contains the $c$-th object somewhere. Let $\theta$ denote the vector of model parameters (i.e., the CNN weights). We denote by $\mathcal{S}$ the set of all pairs $(x, Y)$, where $x$ is a strongly annotated image and $Y$ is the instance-level label of $x$. We denote by $\mathcal{W}$ the set of all pairs $(x, z)$, where $x$ is a weakly annotated image and $z$ is the image-level label of image $x$.
2.1 Objective function
Using maximum likelihood estimation, we maximize the joint likelihood of all observed data, both weakly annotated and strongly annotated.
For strongly annotated images, the objective is to maximize $p(Y \mid x; \theta)$. As we train detectors in a region-based fashion, we maximize $p(Y \mid x; \theta)$ by maximizing the probability of each object proposal, similar to [9, 8, 24]. We define the overall probability of an image as the product of the probabilities of all object proposals
$$p(Y \mid x; \theta) = \prod_{r=1}^{R} p(y_r \mid b_r; \theta), \qquad (1)$$
where $b_r$ is the $r$-th proposal of image $x$, and the row vector $y_r$ is the instance-level label of $b_r$ (i.e., the $r$-th row of $Y$).
For weakly annotated images, we treat $Y$ as missing data. We maximize $p(z \mid x; \theta)$ for these images, since only the image-level label is available for them. Note that once $Y$ is given, we can infer $z$ deterministically by taking the maximum over each column of $Y$; thus $p(z \mid Y)$ is either $0$ or $1$. We denote by $\mathcal{Y}(z)$ the set of instance-level labels that satisfy $p(z \mid Y) = 1$. Thus we have
$$p(z \mid x; \theta) = \sum_{Y \in \mathcal{Y}(z)} p(Y \mid x; \theta). \qquad (2)$$
We find the maximum likelihood estimate of $\theta$. Since we have observed both weakly annotated images $\mathcal{W}$ and strongly annotated images $\mathcal{S}$, the objective function (the log likelihood of all observed data) is given by¹
$$\ell(\theta) = \sum_{(x, Y) \in \mathcal{S}} \log p(Y \mid x; \theta) + \sum_{(x, z) \in \mathcal{W}} \log p(z \mid x; \theta). \qquad (3)$$

¹Terms that do not depend on $\theta$ are ignored.
2.2 EM algorithm
E-step. The purpose of the E-step is to estimate the complete-data log likelihood. For strongly annotated images, we have the complete data $(x, Y)$, so no estimation is required. For weakly annotated images, we estimate the complete-data log likelihood by taking the expectation with respect to the latent variable $Y$.
Given the previously estimated parameter $\theta^{(t)}$, the expected complete-data log likelihood for a weakly annotated image $x$ and its label $z$ is given by²
$$Q(x, z; \theta) = \sum_{Y \in \mathcal{Y}(z)} p(Y \mid x, z; \theta^{(t)}) \log p(Y \mid x; \theta), \qquad (4)$$
where $p(Y \mid x; \theta)$ and $\mathcal{Y}(z)$ are given in Sec 2.1. The complete-data log likelihood for all images is
$$Q(\theta) = \sum_{(x, Y) \in \mathcal{S}} \log p(Y \mid x; \theta) + \sum_{(x, z) \in \mathcal{W}} Q(x, z; \theta). \qquad (5)$$

²Details in the supplementary material.
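As a concrete sketch of the E-step (an illustration under simplifying assumptions, not the paper's implementation), the posterior over candidate labelings can be computed by scoring each labeling with the product of per-proposal probabilities and normalizing. Here `probs[r][c]` stands in for the CNN softmax output for proposal `r` and class `c`, and each candidate labeling is encoded as a list of per-proposal class indices:

```python
import math

def log_likelihood(probs, Y):
    """Log p(Y | x): sum over proposals of the log-probability of the
    class that Y assigns to that proposal."""
    return sum(math.log(probs[r][Y[r]]) for r in range(len(Y)))

def e_step(probs, candidates):
    """Posterior weights over the candidate labelings consistent with
    the image-level label, normalized via log-sum-exp for stability."""
    logs = [log_likelihood(probs, Y) for Y in candidates]
    m = max(logs)
    ws = [math.exp(l - m) for l in logs]
    total = sum(ws)
    return [w / total for w in ws]
```

The normalized weights are exactly the mixture coefficients that weight each labeling's log likelihood in the M-step objective.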
2.3 Integrating the CNN into the EM algorithm
We use a CNN to classify object proposals in the EM algorithm, since CNNs have an excellent ability to learn visual features. The model parameter $\theta$ we try to estimate is the weights of the CNN. We use the same CNN architecture as Fast RCNN: an ImageNet pre-trained CNN with a ROI pooling layer inserted in the middle, taking as input an entire image and a set of bounding box proposals. The network classifies the individual regions by mapping each of them to a $C$-dimensional probability vector of class scores. The probability of an object proposal in (1) is given by the last softmax layer of the CNN. In the M-step, we use SGD to optimize the expected complete-data log likelihood (5).
Spatial Consistency. Direct optimization of (5) is difficult, since there are too many terms to sum in (4) (i.e., the cardinality of $\mathcal{Y}(z)$ is too large). For example, if there are $m$ positive foreground categories in image $x$, then the cardinality of $\mathcal{Y}(z)$ grows exponentially with respect to the number of proposals $R$; on PASCAL VOC 2007, we typically have $R \approx 2000$. We notice that Spatial Consistency (SC) is a powerful way to reduce the number of terms in the sum in (4). SC refers to the fact that proposals usually contain the same object and share the same instance-level label if they have a large intersection-over-union (IoU) overlap. SC is used to sample foreground/background proposals in many region-based object detection approaches [9, 8, 24, 20, 34]. In this paper, we use SC as a regularization technique for the latent space. We assume that there is only 1 object for each positive category, like many WSD approaches [20, 34, 3]. We believe all reasonable $Y$ can be generated by the following procedure. First, for each positive category $c$, choose 1 box as the center box and assign $c$ as its instance-level label. Second, assign $c$ as the instance-level label to all boxes that have at least 0.5 IoU with the center box, while assigning the background category to all other boxes. We simply ignore any $Y$ that cannot be generated by the above procedure. Since there are about $R$ different choices in the first step, the number of all possible $Y$ reduces to about $R^m$.
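The generation procedure above can be sketched as follows for a single positive category (the full procedure repeats it per positive category). The function name and box format `(x1, y1, x2, y2)` are illustrative, not the authors' code:

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def sc_candidates(boxes, category, background=0, thresh=0.5):
    """Enumerate the labelings allowed by spatial consistency: pick each
    box in turn as the center box; boxes with IoU >= thresh share its
    label, and all other boxes become background."""
    out = []
    for center in boxes:
        labels = [category if iou(center, b) >= thresh else background
                  for b in boxes]
        out.append(labels)
    return out
```

With $R$ boxes this yields at most $R$ candidate labelings per positive category, which is what makes the sum in the E-step tractable.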
K-EM. Hard-EM may lose a lot of information and hurt detection performance, since it discards too many terms in the summation in (4). K-EM achieves a better trade-off between the amount of information and the computational cost by keeping the $K$ largest terms in (4). For each positive category $c$, we sort the bounding boxes according to their scores in descending order, and greedily keep the top $K$ bounding boxes. We use the same $K$ in all experiments.
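A minimal sketch of the K-EM truncation: keep the $K$ labelings with the largest likelihoods, with Hard-EM as the special case $K = 1$. The function name is illustrative:

```python
def top_k_candidates(scores, candidates, k):
    """Greedily keep the k candidate labelings with the largest scores
    (e.g., likelihoods p(Y | x)); Hard-EM corresponds to k = 1."""
    order = sorted(range(len(scores)), key=lambda i: scores[i],
                   reverse=True)
    return [candidates[i] for i in order[:k]]
```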
Pre-training and post-processing. Learning CNN weights and localizing objects are two interconnected tasks. The whole optimization problem is highly non-convex, so it is prone to getting trapped in poor local extrema. In practice, the quality of the final extremum depends heavily on the initialization. In this paper we use WSDDN by Bilen and Vedaldi to initialize our EM algorithm, since it can learn a deep representation suitable for detection from image-level labels alone. Investigating better initializations is left to future work.
We use the following strategy to transform WSDDN's two-stream network architecture into Fast RCNN's single-stream architecture. In the first E-step, we compute WSDDN's output score of the center box for each positive category in image $x$, and set the posterior probability of $Y$ proportional to the product of these scores. In the first M-step, we initialize the shared layers of Fast RCNN from WSDDN. Other reasonable transformation strategies can also be used.
Given a test image, we first generate around 2000 bounding box proposals using Edge Boxes. Then, we score each proposal using our trained network. We perform the same post-processing as Fast RCNN: thresholding detected boxes class by class on their probabilities and then performing non-maximum suppression with an overlap threshold of 0.4.
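The greedy NMS step described above can be sketched as follows (a standalone illustration, not the Fast RCNN implementation); boxes use the `(x1, y1, x2, y2)` format:

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.4):
    """Greedy non-maximum suppression: repeatedly keep the highest-
    scoring box and drop remaining boxes overlapping it by IoU > thresh.
    Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i],
                   reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep
```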
3.1 Dataset and evaluation metrics
Dataset. We evaluate our method on the PASCAL VOC 2007 dataset, which is commonly used in object detection. The PASCAL VOC 2007 dataset consists of 2501 training images, 2510 validation images, and 4952 test images over 20 categories. We use both the train and val splits as our training set, and the test split as our test set.
Evaluation metrics. We use two metrics to evaluate detection performance. First, we evaluate detection mean Average Precision (mAP) on the PASCAL VOC 2007 test split, following the standard PASCAL VOC protocol. Second, we compute CorLoc on the PASCAL VOC 2007 trainval split. CorLoc is the fraction of positive training images in which an object of the target category is localized correctly. Following prior work, a detected bounding box is considered correct if it has at least 0.5 IoU with a ground truth bounding box.
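The CorLoc metric can be sketched as follows (an illustrative implementation with hypothetical names, assuming one top detection per positive image and the same `(x1, y1, x2, y2)` box format as above):

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def corloc(detections, ground_truths, thresh=0.5):
    """Fraction of positive images whose top detection overlaps some
    ground-truth box of the target class by IoU >= thresh."""
    hits = sum(
        any(iou(det, gt) >= thresh for gt in gts)
        for det, gts in zip(detections, ground_truths))
    return hits / len(detections)
```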
3.2 Experimental setup
Data augmentation. Following WSDDN, we use multi-scale augmentation to achieve scale-invariant object detection. For AlexNet, we resize training images to six different scales (setting the minimum of width or height to six fixed values). For VGG, we use only three scales due to limited GPU memory. We also apply horizontal flips to double the training set, as in Fast RCNN. Given a test image, we resize it to the same scales, and each bounding box proposal is assigned to the scale at which the scaled proposal is closest in area to the network's canonical input size, as in SPPnet.
Pretraining. We use the official public implementation of WSDDN by Bilen and Vedaldi in our pre-training step. In all experiments, we use the same hyperparameter configuration as the original implementation.
Training. Following , we generate around 2000 bounding box proposals for each image using Edge Boxes to train detectors. We use SGD to optimize the CNN weights. Every M-step consists of 40k SGD iterations. The learning rate is set to 0.001 for the first 30k iterations of the first M-step, and to 0.0001 in all later iterations (i.e., we use a 0.0001 learning rate for all 40k iterations of the second M-step). We finetune all layers after conv1 in all M-steps. We stop training after 3 M-steps (i.e., 120k SGD iterations in total). A momentum of 0.9 and a weight decay of 0.0005 are used. Each mini-batch is constructed from 2 images: a randomly sampled image and its horizontally flipped version. We sample 16 foreground proposals and 48 background proposals from each image, constructing a mini-batch of 128 proposals. Note that our sampling strategy differs from Fast RCNN, which samples the second image randomly rather than generating the horizontally flipped version of the first image. The mAP scores are typically 0.3 points worse if we use the sampling strategy of Fast RCNN. Using AlexNet, the whole training procedure takes about 10 hours on a Titan X Pascal GPU.
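The per-image foreground/background sampling described above can be sketched as follows (an illustrative helper, not the authors' code; labels are per-proposal class indices with 0 as background):

```python
import random

def sample_proposals(labels, n_fg=16, n_bg=48, background=0, seed=0):
    """Sample a fixed number of foreground and background proposals from
    one image; two images at 16 + 48 each give the 128-proposal batch."""
    rng = random.Random(seed)
    fg = [i for i, c in enumerate(labels) if c != background]
    bg = [i for i, c in enumerate(labels) if c == background]
    pick = lambda idx, n: rng.sample(idx, min(n, len(idx)))
    return pick(fg, n_fg), pick(bg, n_bg)
```

The 3:1 background-to-foreground ratio mirrors the sampling used in region-based detectors such as Fast RCNN.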
Reproducibility. Our implementation is based on the open-sourced Fast-RCNN code by Girshick, which is itself based on the excellent Caffe framework. We share our source code and trained models at https://github.com/ZiangYan/EM-WSD.
Table 1: Detection average precision (%) on the PASCAL VOC 2007 test set.

| Method | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | horse | mbike | person | plant | sheep | sofa | train | tv | mAP |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Li, AlexNet | 49.7 | 33.6 | 30.8 | 19.9 | 13.0 | 40.5 | 54.3 | 37.4 | 14.8 | 39.8 | 9.4 | 28.8 | 38.1 | 49.8 | 14.5 | 24.0 | 27.1 | 12.1 | 42.3 | 39.7 | 31.0 |
| WSDDN†, AlexNet | 47.9 | 54.5 | 26.9 | 18.3 | 5.7 | 50.8 | 53.0 | 29.1 | 2.3 | 42.3 | 9.3 | 30.0 | 50.2 | 52.7 | 13.6 | 15.6 | 37.1 | 38.0 | 46.3 | 50.6 | 33.7 |
| ContextLocNet, VGG | 57.1 | 52.0 | 31.5 | 7.6 | 11.5 | 55.0 | 53.1 | 34.1 | 1.7 | 33.1 | 49.2 | 42.0 | 47.3 | 56.6 | 15.3 | 12.8 | 24.8 | 48.9 | 44.4 | 47.8 | 36.3 |
| WCCN, AlexNet | 43.9 | 57.6 | 34.9 | 21.3 | 14.7 | 64.7 | 52.8 | 34.2 | 6.5 | 41.2 | 20.5 | 33.8 | 47.6 | 56.8 | 12.7 | 18.8 | 39.6 | 46.9 | 52.9 | 45.1 | 37.3 |
| WSDDN-ENS, VGG | 46.4 | 58.3 | 35.5 | 25.9 | 14.0 | 66.7 | 53.0 | 39.2 | 8.9 | 41.8 | 26.6 | 38.6 | 44.7 | 59.0 | 10.8 | 17.3 | 40.7 | 49.6 | 56.9 | 50.8 | 39.3 |
| Li, VGG | 54.5 | 47.4 | 41.3 | 20.8 | 17.7 | 51.9 | 63.5 | 46.1 | 21.8 | 57.1 | 22.1 | 34.4 | 50.5 | 61.8 | 16.2 | 29.9 | 40.7 | 15.9 | 55.3 | 40.2 | 39.5 |
| WCCN, VGG | 49.5 | 60.6 | 38.6 | 29.2 | 16.2 | 70.8 | 56.9 | 42.5 | 10.9 | 44.1 | 29.9 | 42.2 | 47.9 | 64.1 | 13.8 | 23.5 | 45.9 | 54.1 | 60.8 | 54.5 | 42.8 |
| Ke, VGG | 51.5 | 66.1 | 45.5 | 19.4 | 11.0 | 56.6 | 64.5 | 57.3 | 3.0 | 51.1 | 42.7 | 41.8 | 51.9 | 64.8 | 21.6 | 27.4 | 46.4 | 46.1 | 47.8 | 51.4 | 43.4 |
| Hard-EM, no BB, AlexNet | 48.1 | 52.6 | 31.8 | 22.1 | 15.1 | 45.1 | 61.1 | 36.3 | 1.8 | 39.1 | 16.7 | 27.7 | 47.0 | 57.2 | 20.7 | 18.3 | 42.2 | 35.6 | 38.5 | 51.0 | 35.4 |
Table 2: Correct localization (CorLoc, %) on the PASCAL VOC 2007 trainval set.

| Method | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | horse | mbike | person | plant | sheep | sofa | train | tv | mean |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Li, AlexNet | 77.3 | 62.6 | 53.3 | 41.4 | 28.7 | 58.6 | 76.2 | 61.1 | 24.5 | 59.6 | 18.0 | 49.9 | 56.8 | 71.4 | 20.9 | 44.5 | 59.4 | 22.3 | 60.9 | 48.8 | 49.8 |
| Li, VGG | 78.2 | 67.1 | 61.8 | 38.1 | 36.1 | 61.8 | 78.8 | 55.2 | 28.5 | 68.8 | 18.5 | 49.2 | 64.1 | 73.5 | 21.4 | 47.4 | 64.6 | 22.3 | 60.9 | 52.3 | 52.4 |
| WCCN, AlexNet | 79.7 | 68.1 | 60.4 | 38.9 | 36.8 | 61.1 | 78.6 | 56.7 | 27.8 | 67.7 | 20.3 | 48.1 | 63.9 | 75.1 | 21.5 | 46.9 | 64.8 | 23.4 | 60.2 | 52.4 | 52.6 |
| WSDDN†, AlexNet | 73.1 | 68.7 | 52.4 | 34.3 | 26.6 | 66.1 | 76.7 | 51.6 | 15.1 | 66.7 | 17.5 | 45.4 | 71.8 | 82.4 | 32.6 | 42.9 | 71.9 | 53.3 | 60.9 | 65.2 | 53.8 |
| ContextLocNet, VGG | 83.3 | 68.6 | 54.7 | 23.4 | 18.3 | 73.6 | 74.1 | 54.1 | 8.6 | 65.1 | 47.1 | 59.5 | 67.0 | 83.5 | 35.3 | 39.9 | 67.0 | 49.7 | 63.5 | 65.2 | 55.1 |
| WCCN, VGG | 83.9 | 72.8 | 64.5 | 44.1 | 40.1 | 65.7 | 82.5 | 58.9 | 33.7 | 72.5 | 25.6 | 53.7 | 67.4 | 77.4 | 26.8 | 49.1 | 68.1 | 27.9 | 64.5 | 55.7 | 56.7 |
| WSDDN-ENS, VGG | 68.9 | 68.7 | 65.2 | 42.5 | 40.6 | 72.6 | 75.2 | 53.7 | 29.7 | 68.1 | 33.5 | 45.6 | 65.9 | 86.1 | 27.5 | 44.9 | 76.0 | 62.4 | 66.3 | 66.8 | 58.0 |
| Hard-EM, no BB, AlexNet | 76.9 | 75.7 | 52.4 | 39.2 | 34.4 | 67.7 | 82.7 | 60.2 | 10.8 | 67.4 | 18.0 | 50.6 | 68.6 | 82.0 | 35.6 | 44.1 | 71.9 | 58.1 | 56.7 | 71.5 | 56.2 |
3.3 Weakly supervised detection results
Comparison with the state-of-the-art. We evaluate our method on the PASCAL VOC 2007 benchmark, using only image-level labels for training. All experiments use the same hyperparameter settings, and we compute around 100 different instance-level labelings $Y$ for each weakly annotated image. We compare our results with the state-of-the-art methods for weakly supervised object detection in Tables 1 and 2.
In Table 1 we report detection average precision (AP) on the PASCAL VOC 2007 test set. Our best model, K-EM using VGG, achieves 46.1% mAP and outperforms all recent state-of-the-art weakly supervised detection methods. Many previous works [20, 6, 30, 29, 1, 33] use AlexNet as the feature extractor. Cinbis et al. also use Fisher Vectors and the Edge Boxes objectness score. For a fair comparison we also train a model (K-EM, AlexNet in Table 1) using AlexNet in both pre-training and training. Our method achieves 39.4% mAP using AlexNet, outperforming the second best method using the same CNN architecture (39.4% vs. 37.3%).
In Table 2 we report correct localization (CorLoc) on the PASCAL VOC 2007 trainval set. Our best model, K-EM using VGG, achieves 65.0% average CorLoc over the 20 categories, outperforming the second best method by 7 points (65.0% vs. 58.0%). Using AlexNet, our method achieves an average CorLoc of 59.8%, outperforming the current state-of-the-art method using the same network by 6 points. We would like to highlight that even using AlexNet, our method outperforms all methods that use VGG as their base CNN architecture.
While our method outperforms the current state-of-the-art approaches in terms of mAP and average CorLoc over all 20 categories, our performance is not as strong in the chair, diningtable, person, and pottedplant categories. Sample detection results are illustrated in Fig 6. We apply the detector error analysis tool from Hoiem et al. to these four categories, as shown in Fig 3. The majority of detection failures come from failed localization. Like previous methods [20, 3], our system often focuses on a discriminative, less variable object part (e.g., a person's face) instead of the whole object, due to the high variance in appearance of the whole object. We believe performance can be improved by incorporating additional cues about the whole object.
Ablation studies. Our best model, K-EM using VGG, uses WSDDN-ENS-VGG for pre-training. With the EM algorithm, detection performance on the PASCAL VOC 2007 test set improves from 39.3% to 45.0% mAP. For AlexNet, we train a WSDDN initialized from AlexNet and use it for pre-training; with the EM algorithm, detection performance improves from 33.7% to 39.4% mAP. There are also consistent improvements in terms of CorLoc on the PASCAL VOC 2007 trainval set when we train the detector with the EM algorithm after the pre-training step. These results demonstrate that the EM algorithm helps the CNN select better proposals and learn a better object appearance model.

We also compare K-EM and Hard-EM. Using AlexNet, the K-EM setting obtains 1.3% mAP and 1.1% CorLoc improvements over the Hard-EM setting. This can be explained by the fact that Hard-EM greedily keeps only the single largest term in (4), discarding too much information. In our experiments, larger values of $K$ give only marginal performance improvements (about 0.1% mAP) while significantly increasing the computational cost, so we report K-EM results under a fixed $K$.

Applying bounding box regression gives a further 2.7% mAP improvement, demonstrating that recent techniques from fully supervised detection can also improve weakly supervised detection performance. It is easy to integrate such techniques into our method, since the core of our M-step is solving a fully supervised detection problem.

We also evaluate the effect of different object proposal generators, comparing SelectiveSearch and Edge Boxes. In our experiments, training the detector with Edge Boxes typically yields about a 1.2% mAP improvement on the PASCAL VOC 2007 test set over SelectiveSearch. Furthermore, we obtain a very poor performance of 3.4% mAP if we remove the WSDDN pre-training step, validating the importance of WSDDN pre-training.
3.4 Semi supervised detection results
We evaluate our method on the PASCAL VOC 2007 benchmark, using image-level labels in combination with instance-level labels for training. For each category in the training set, some of the images have instance-level labels, while the other images have only image-level labels. In the pre-training step, we use only image-level labels, as in Section 3.3. After that, we use both image-level labels and instance-level labels in the EM algorithm. For simplicity, we use K-EM with AlexNet in all semi supervised experiments.
Results are summarized in Fig 4. Training with 40% instance-level labels and 60% image-level labels, we achieve 55.7% mAP, which is only 1.4% below the fully supervised Fast RCNN (100% instance-level labels, 57.1% mAP). Note that performance is always lower if we train the detector from image-level labels alone, as shown in Fig 4. Fig 5 shows the response maps on weakly annotated training images (with 50% of the instance-level labels available). Our method progressively refines the localization during training.
4 Related Work
Weakly Supervised Detection. Many existing methods [29, 4, 20, 30, 28, 1] formulate WSD as a Multiple Instance Learning (MIL) problem. They usually use MI-SVM to mine positive object proposals, and develop better initialization and optimization strategies to avoid poor local extrema. Li et al. propose a two-step domain adaptation approach: they transfer the classifier from the 1000 ImageNet categories to the 20 PASCAL VOC categories and filter object proposals class by class, then apply MI-SVM on a cleaner collection of class-specific object proposals, and finally use the mined confident object candidates to train a Fast RCNN. Although we address WSD from a different point of view than [34, 20] (they mine positive patches while we estimate a probability distribution of objects over all possible locations), their methods turn out to be special cases of our method: from our point of view, [34, 20] apply a Hard-EM approximation after careful initialization, and perform only one EM iteration (i.e., a single E-step followed by a single M-step). Hard-EM discards too much information in the E-step, and the CNN weights usually have not fully converged after a single EM iteration. Bilen et al. also assign instance-level labels in a smooth way; they aid the optimization by enforcing a soft similarity between each possible location in the image and a reduced set of exemplars. WSDDN and subsequent work [17, 34] achieve state-of-the-art performance using an end-to-end, two-stream CNN architecture to perform region selection and appearance model learning simultaneously. The main differences between our method and existing WSD methods are: (1) our method can be applied to the semi supervised setting; (2) from our point of view, these methods can serve either as a pre-training step or as special cases of our framework. We believe further performance gains are possible by integrating these methods into our framework.
Semi Supervised Detection. Several previous works [13, 14, 31] focus on combining image-level labels and instance-level labels in object detection. They assume the existence of strongly annotated categories, and transfer knowledge from these strongly annotated categories to weakly annotated categories. For example, to detect the 20 PASCAL VOC categories, some additional strongly annotated categories outside the 20 PASCAL VOC categories are required. Our work differs from these methods in that we focus on a different setting, where both image-level labels and instance-level labels belong to the same categories, so no additional strongly annotated categories are required.
We present an EM-based method to train object detectors from image-level labels using deep convolutional neural networks, treating instance-level labels as missing values. Our method can learn detectors from image-level labels either alone or in combination with some instance-level labels. Using image-level labels alone, our method achieves 46.1% mAP on the PASCAL VOC 2007 test set, outperforming the current state-of-the-art weakly supervised detection approaches. With access to a small number of instance-level labels, our method can almost match the performance of the fully supervised Fast RCNN. Our results show that by exploiting weakly annotated images, excellent detection performance can be obtained with less annotation effort.
-  H. Bilen, M. Pedersoli, and T. Tuytelaars. Weakly supervised object detection with posterior regularization. In BMVC, volume 3, 2014.
-  H. Bilen, M. Pedersoli, and T. Tuytelaars. Weakly supervised object detection with convex clustering. In CVPR, pages 1081–1089, 2015.
-  H. Bilen and A. Vedaldi. Weakly supervised deep detection networks. In CVPR, June 2016.
-  R. G. Cinbis, J. Verbeek, and C. Schmid. Multi-fold mil training for weakly supervised object localization. In CVPR, pages 2409–2416. IEEE, 2014.
-  T. Deselaers, B. Alexe, and V. Ferrari. Weakly supervised localization and learning with generic knowledge. IJCV, 100(3):275–293, 2012.
-  A. Diba, V. Sharma, A. Pazandeh, H. Pirsiavash, and L. Van Gool. Weakly supervised cascaded convolutional networks. arXiv preprint arXiv:1611.08258, 2016.
-  M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results.
-  R. Girshick. Fast r-cnn. In ICCV, pages 1440–1448, 2015.
-  R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, pages 580–587, 2014.
-  M. Guillaumin, T. Mensink, J. Verbeek, and C. Schmid. Tagprop: Discriminative metric learning in nearest neighbor models for image auto-annotation. In ICCV, pages 309–316. IEEE, 2009.
-  K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, pages 346–361. Springer, 2014.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, June 2016.
-  J. Hoffman, S. Guadarrama, E. S. Tzeng, R. Hu, J. Donahue, R. Girshick, T. Darrell, and K. Saenko. Lsda: Large scale detection through adaptation. In NIPS, pages 3536–3544, 2014.
-  J. Hoffman, D. Pathak, T. Darrell, and K. Saenko. Detector discovery in the wild: Joint multiple instance and representation learning. In CVPR, pages 2883–2891, 2015.
-  D. Hoiem, Y. Chodpathumwan, and Q. Dai. Diagnosing error in object detectors. In ECCV, pages 340–353. Springer, 2012.
-  Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
-  V. Kantorov, M. Oquab, M. Cho, and I. Laptev. Contextlocnet: Context-aware deep network models for weakly supervised localization. In ECCV, pages 350–365. Springer, 2016.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, pages 1097–1105, 2012.
-  Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
-  D. Li, J.-B. Huang, Y. Li, S. Wang, and M.-H. Yang. Weakly supervised object localization with progressive domain adaptation. In CVPR, June 2016.
-  W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. SSD: Single shot multibox detector. In ECCV, 2016.
-  F. Perronnin, J. Sánchez, and T. Mensink. Improving the fisher kernel for large-scale image classification. In ECCV, pages 143–156. Springer, 2010.
-  J. Redmon, S. Divvala, R. Girshick, and A. Farhadi. You only look once: Unified, real-time object detection. In CVPR, June 2016.
-  S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In NIPS, pages 91–99, 2015.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. IJCV, 115(3):211–252, 2015.
-  Z. Shi, T. M. Hospedales, and T. Xiang. Bayesian joint topic modelling for weakly supervised object localisation. In ICCV, pages 2984–2991, 2013.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
-  P. Siva, C. Russell, and T. Xiang. In defence of negative mining for annotating weakly labelled data. In ECCV, pages 594–608. Springer, 2012.
-  H. O. Song, R. B. Girshick, S. Jegelka, J. Mairal, Z. Harchaoui, T. Darrell, et al. On learning to localize objects with minimal supervision. In ICML, pages 1611–1619, 2014.
-  H. O. Song, Y. J. Lee, S. Jegelka, and T. Darrell. Weakly-supervised discovery of visual pattern configurations. In NIPS, pages 1637–1645, 2014.
-  Y. Tang, J. Wang, B. Gao, E. Dellandrea, R. Gaizauskas, and L. Chen. Large scale semi-supervised object detection using visual and semantic knowledge transfer. In CVPR, June 2016.
-  J. R. Uijlings, K. E. Van De Sande, T. Gevers, and A. W. Smeulders. Selective search for object recognition. IJCV, 104(2):154–171, 2013.
-  C. Wang, W. Ren, K. Huang, and T. Tan. Weakly supervised object localization with latent category learning. In ECCV, pages 431–445. Springer, 2014.
-  K. Yang, D. Li, Y. Dou, S. Lv, and Q. Wang. Weakly supervised object detection using pseudo-strong labels. arXiv preprint arXiv:1607.04731, 2016.
-  C. L. Zitnick and P. Dollár. Edge boxes: Locating object proposals from edges. In ECCV, pages 391–405. Springer, 2014.