There has been an explosion of images available on the Internet in recent years, largely due to the popularity of photo-sharing sites such as Facebook and Flickr. However, most of these images are either unlabeled or weakly labeled. One way of accessing these images is to find images depicting the same object; for instance, Google Image Search returns images containing the common object described by a user-supplied keyword. In this paper, we aim to localize the common object in exactly this scenario, i.e., without using any other form of supervision (manually-labeled bounding boxes or negative images). This task is known as image co-localization [30, 17, 4] in the literature.
Image co-localization is a particularly challenging task, and thus only a limited number of comparable methods exist [30, 17, 4]. These methods address the problem from various perspectives. The work in  introduces binary latent variables to indicate the presence of the common object and formulates co-localization as latent variable inference. The work of , in contrast, localizes the common object by matching common object parts. Our work differs from previous approaches in that it directly learns the common object detector by modeling the distribution of its detection confidence scores on each image, and achieves localization with the learned detector.
The key insight of our method is that, although we do not have sufficient supervision to learn a strongly supervised object detector, it is still possible to learn a detector by enforcing its detection confidence scores to be distributed like those of a strongly supervised detector. For a strongly supervised object detector, we have made the following observation: when an accurate strongly supervised object detector is applied to an image containing the object of interest, only a small minority of proposals receive high detection confidence scores while most are associated with low scores. Motivated by this insight and observation, we design a novel Shannon-entropy-based objective function that promotes the scarcity of high detection confidence scores within an image while avoiding the trivial solution of producing low scores for all proposals. In other words, by optimizing the proposed objective, our approach encourages a few high-response proposals in each image to emerge as the common object while suppressing the responses of the remaining proposals, which are deemed background.
To generate the final co-localization results, we have also devised a method for improving the bounding box estimate. Inspired by detection-by-segmentation approaches (e.g., ), we use the final detection heat map together with color information to define a CRF-based segmentation algorithm, the output of which indicates the instances of the common object.
Through an extensive evaluation on several benchmark datasets, including the PASCAL VOC 2007 and 2012 datasets and some subsets of ImageNet, we demonstrate that our approach not only outperforms the state of the art in image co-localization, but is also on par with some weakly supervised object localization approaches.
2 Related Work
Image co-localization shares some similarities with image co-segmentation [16, 26, 3] in the sense that both problems require a set of images of objects from a common category as input. Instead of generating a precise segmentation of the related objects in each image, however, co-localization algorithms [30, 17, 4] aim to draw a tight bounding box around the object. Image co-localization is also related to work on weakly supervised object localization (WSOL) [29, 7, 5, 28, 32, 1, 33, 24], as both try to localize objects of the same type within an image set; the key difference is that WSOL requires manually-labeled negative images whereas co-localization does not.
Tang et al.  formulate co-localization as a boolean constrained quadratic program, which can be relaxed to a convex problem and further accelerated by the Frank-Wolfe algorithm . Recently, Cho et al.  proposed a probabilistic Hough matching algorithm to match object proposals across images; dominant objects are then localized by selecting proposals based on matching scores. There are also approaches that address co-localization in video [23, 17, 21]. Notably, Prest et al.  select spatio-temporal tubes that are likely to contain the common object, while Joulin et al.  extend  by incorporating temporal consistency.
In this paper, however, we tackle the co-localization problem from a new perspective: learning the common object detector by modeling the distribution of its detection confidence scores, thereby eliminating the need for manually-labeled negative images. An advantage of the proposed approach for learning common object detectors is that it provides an explicit mechanism by which to exploit the relationship between localization and detection. The benefits of exploiting this relationship have been identified before in WSOL. In , objects are localized by minimizing a Conditional Random Field (CRF) energy function that incorporates class-specific information, which is in turn learned from the localized objects. Cinbis et al.  propose a multi-fold training procedure for multiple instance learning whereby, at each iteration, positive instances in each fold are localized by a detector trained on the other folds in the previous iteration. The approach we propose here, however, is the first to systematically leverage the idea of jointly performing object detection and localization for co-localizing common objects in images.
We give an overview of our image co-localization framework in Fig. 1. The input to our framework is a set of images containing one common object (e.g., aeroplane), and we aim to annotate the location of the common object instances in each image. Inspired by the behaviour of an accurate strongly supervised object detector (Sec. 3.1), the core of our framework is a procedure for learning the common object detector by modeling the distribution of its detection confidence scores (Sec. 3.2). We further formulate object localization as a segmentation problem (Sec. 3.3), using the detection heat map to define the unary potentials of a binary energy function and solving it efficiently by standard graph cuts.
3.1 The behaviour of an accurate strongly supervised detector
Object proposals [31, 34], which are image regions that are likely to contain objects, have been widely used in state-of-the-art object detection approaches [11, 12, 10]. In this section we are interested in the statistics of the proposal detection confidence scores generated on an image by a strongly supervised detector. The resulting observation motivates our formulation for learning common object detectors in Sec. 3.2.
More specifically, we apply a state-of-the-art strongly supervised object detector, Fast R-CNN  (trained on the PASCAL VOC 2007 trainval set ), to a PASCAL VOC 2007 test image containing the object of interest (Fig. 2 (a)). After obtaining the detection confidence scores of the object proposals  extracted from this image, we calculate the normalized histogram of the detection confidence scores of all proposals (Fig. 2 (b)).
From Fig. 2 (b) it is clear that, although there are multiple instances of the object of interest (“car” in this case), the vast majority of object proposals have a very low detection confidence score, which indicates that a dominantly large portion of proposals are likely to cover image regions that do not tightly cover the object of interest. This is understandable, as object proposal generation is a pre-processing step in object detection systems, where recall is much more important than precision (not missing any objects of interest matters more than generating fewer false positives).
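To make this observation concrete, the normalized score histogram of Fig. 2 (b) can be sketched as follows. This is an illustrative toy example, not the paper's data: the score values are synthetic, constructed so that a handful of proposals score high and the rest score near zero.

```python
import numpy as np

def score_histogram(scores, n_bins=20):
    """Normalized histogram of per-proposal detection confidence scores.

    Returns bin fractions summing to 1, so the mass in the lowest bins
    shows what fraction of proposals the detector scores near zero.
    """
    counts, edges = np.histogram(np.asarray(scores, float),
                                 bins=n_bins, range=(0.0, 1.0))
    return counts / counts.sum(), edges

# Synthetic scores mimicking an accurate detector on one image:
# a few confident detections, many near-zero background proposals.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.uniform(0.8, 1.0, 5),
                         rng.uniform(0.0, 0.05, 995)])
hist, _ = score_histogram(scores)
```

Under this synthetic distribution, almost all of the histogram mass falls in the lowest bin, mirroring the heavy low-score peak of Fig. 2 (b).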
3.2 Learning detectors by modeling detection score distribution
In the image co-localization setting, all we know is that a common object category exists across the images, yet we still aim to learn the common object detector. This is possible by modeling the distribution of the proposals' detection confidence scores. More specifically, in our method the common object detector is learned by enforcing the distribution of its detection confidence scores to mimic that of an accurate strongly supervised detector (Sec. 3.1).
Formally, for each image , we first extract a set of object proposals using EdgeBox , the performance of which has been demonstrated in a recent review . Let  denote the feature representation of proposal . The detection confidence score that we use is formulated as follows
where  and  denote the weight and bias terms of the detector respectively, and  is the softplus function, which has the form .
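A direct transcription of this scoring function, assuming the notation above (per-proposal feature vector x, weight vector w, scalar bias b), might look like the following sketch:

```python
import numpy as np

def softplus(z):
    # Numerically stable softplus: log(1 + exp(z)) = logaddexp(0, z).
    return np.logaddexp(0.0, z)

def detection_scores(X, w, b):
    """Confidence score per proposal: softplus(w^T x + b).

    X: (n_proposals, d) feature matrix; w: (d,) weights; b: scalar bias.
    The softplus keeps every score non-negative.
    """
    return softplus(X @ w + b)
```

The softplus is a smooth approximation of max(0, z), so scores are always positive but can be driven arbitrarily close to zero, which is what the sparsity-inducing objective below exploits.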
Irrespective of the form of the detector, we can construct the set of detection confidence scores over all the proposals of image , and normalize them as , where the parameter  is a small constant. If the detector in Eq. (1) were trained with strong supervision then, according to the observation in Sec. 3.1, most of its detection confidence scores in  should have near-zero values, which means that the score vector  and its normalized version  should be sparse. Note that when all proposals have zero detection confidence scores,  will be sparse but  will be dense due to the effect of the constant . Thus, our method is based on , because enforcing its sparsity is equivalent to requiring the detector to produce few high detection confidence scores and many low (zero) ones; in other words, the detection confidence score distribution will mimic that of an accurate strongly supervised detector.
Objective function. To measure the sparsity of the normalized detection confidence score vector , we utilize the Shannon entropy as a sparsity indicator, that is,
and the objective for learning the common object detector is formulated as follows:
where we use the square of the -norm of as a regularizer on the weight vector.
The optimal weight and bias of the detector are then given by:
Note that Eq. (4) does not involve a set of manually-labeled negative images,
but rather describes the desired form of the detection confidence score distribution.
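The epsilon-shifted normalization, the entropy sparsity measure, and the regularized objective can be sketched as follows. The value of the small constant and the regularization weight below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

EPS = 1e-8  # the small constant added before normalization (illustrative)

def normalized_scores(s):
    """Per-image normalization: p_i = (s_i + eps) / sum_j (s_j + eps)."""
    s = np.asarray(s, dtype=float) + EPS
    return s / s.sum()

def shannon_entropy(p):
    # H(p) = -sum_i p_i log p_i; low entropy means a sparse score vector.
    return -np.sum(p * np.log(p))

def objective(score_sets, w, lam):
    """Sum of per-image entropies plus an L2 regularizer on the weights."""
    ent = sum(shannon_entropy(normalized_scores(s)) for s in score_sets)
    return ent + lam * np.dot(w, w)
```

A sparse score vector (one dominant proposal) yields near-zero entropy, while a uniform score vector yields the maximal entropy log n, so minimizing this objective pushes each image toward a few high-scoring proposals.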
The learning process also implicitly takes advantage of the chicken-and-egg relationship between object localization and detection: precisely localized object instances are critical for training a good object detector, and objects can be localized more precisely by a well-trained detector.
Fig. 3 shows the proposals with the maximal detection confidence score obtained with different random initializations. Although these common visual patterns may not be suitable for the co-localization task, they may be useful for other computer vision tasks, such as discovering common object parts for fine-grained image classification.
Optimization. As our objective function in Eq. (4) is non-convex, we adopt mini-batch stochastic gradient descent [19]: we divide all data (i.e., object proposals) into mini-batches. We initialize the weight vector  from a zero-mean Gaussian distribution, while the bias term  is initially set to zero. During training we divide the learning rate (set to  initially) by  after every  epochs. We stop learning after  epochs, when the objective function converges.
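The training recipe can be sketched generically as follows. The learning rate, decay factor, and epoch counts below are illustrative stand-ins for the elided values, and a finite-difference gradient replaces the analytic one purely to keep the sketch self-contained:

```python
import numpy as np

def numeric_grad(f, w, h=1e-5):
    # Central-difference gradient; adequate for a small illustrative sketch.
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = h
        g[i] = (f(w + e) - f(w - e)) / (2 * h)
    return g

def train(f, w0, lr=0.1, epochs=50, decay_every=30, decay=10.0):
    """Gradient descent with a step-decay learning-rate schedule:
    divide the rate by a constant factor after a fixed number of epochs."""
    w = w0.copy()
    for epoch in range(epochs):
        if epoch > 0 and epoch % decay_every == 0:
            lr /= decay
        w = w - lr * numeric_grad(f, w)
    return w

# Toy convex objective standing in for Eq. (4): minimum at w = [1, 1].
w_opt = train(lambda w: np.sum((w - 1.0) ** 2), np.zeros(2))
```

In practice the analytic gradient of the entropy objective would be used, but the loop structure (mini-batches, step decay, fixed epoch budget) is the same.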
Modification. After minimizing Eq. (4), when we visualize the proposal with the maximal detection confidence score in each image (Fig. 3), it is interesting to note that the learnt detector may fire not on the common object but on some common visual pattern instead (e.g., a common object part, or the common object with some context). Moreover, the discovered common visual patterns can be very different if the initialization of the detector varies (different local minima). In this work, however, as we aim to co-localize the common object, we reformulate Eq. (1) by incorporating the “objectness” score (output by EdgeBox) of each proposal as a weight, favouring proposals with a high objectness score (which are more likely to cover a whole object tightly)
Localizing the common object. The optimal  and , inserted into Eq. (1), give a mechanism for determining the detection confidence scores of all object proposals. The nature of the co-localization problem means that the proposal with the maximal score in each image indicates the desired detection. This method is used as a baseline in the experiments (Sec. 4.1).
Discussion. In principle, other sparsity measures could replace the Shannon entropy. Note that the commonly used  norm cannot be applied here because . One possible way to use the  norm is to redefine the normalized score .
3.3 Refining the bounding box estimate
The quality of the detections generated by the above process depends entirely on the quality of the object proposals. To overcome this dependency, and to enable better final bounding box estimates, we have developed the following bounding box refinement process.
Given the optimal  and  identified by minimizing Eq. (4), we generate the detection heat map as follows. For each pixel in the image, we sum the weighted detection confidence scores from Eq. (5) of all proposals that cover the pixel (pixels covered by no proposal receive zero). The values are then normalized to the interval . This gives rise to a set of detection heat maps . Some examples are illustrated in Fig. 4.
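The accumulation step just described can be sketched as follows, assuming boxes in (x1, y1, x2, y2) pixel coordinates (an illustrative sketch, not the paper's implementation):

```python
import numpy as np

def detection_heat_map(shape, boxes, scores):
    """Accumulate per-proposal scores over the pixels each box covers,
    then rescale so the maximum is 1 (zero where no proposal lands).

    shape: (H, W); boxes: iterable of (x1, y1, x2, y2), inclusive;
    scores: one weighted detection confidence score per box.
    """
    H, W = shape
    heat = np.zeros((H, W), dtype=float)
    for (x1, y1, x2, y2), s in zip(boxes, scores):
        heat[y1:y2 + 1, x1:x2 + 1] += s  # add score to every covered pixel
    if heat.max() > 0:
        heat /= heat.max()  # normalize to [0, 1]
    return heat
```

Regions covered by many high-scoring proposals thus accumulate the largest values, which is exactly what makes the heat map a useful unary cue for the segmentation stage.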
Given the set of detection heat maps , we aim to produce a segmentation of the entire object. This approach is inspired by previous work which casts localization as a segmentation problem (e.g., ).
Formally, we formulate the segmentation problem as a standard graph-cut problem. We first extract superpixels  to construct the vertex set and aim to label each superpixel as foreground () or background (). Mathematically, the energy function is given by
where and are the unary and pairwise potential respectively.
is the set of edges connecting superpixels. In our case, two superpixels are connected if the distance between their centroids is smaller than the sum of their major-axis lengths.
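The edge-construction rule can be sketched directly, assuming each superpixel is summarized by its centroid and major-axis length (the representation is an assumption for illustration):

```python
import numpy as np

def connected(c1, c2, major1, major2):
    """Edge rule: two superpixels are connected when the distance between
    their centroids is smaller than the sum of their major-axis lengths."""
    return np.hypot(c1[0] - c2[0], c1[1] - c2[1]) < major1 + major2

def build_edges(centroids, majors):
    # Enumerate each unordered pair once.
    n = len(centroids)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if connected(centroids[i], centroids[j], majors[i], majors[j])]
```

This gives a sparse neighbourhood graph: nearby superpixels interact through the pairwise term while distant ones do not.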
Unary potential . Inspired by , the unary potential is the novel part of our segmentation framework, which carries information from the detection heat map :
where is the prior information from the detection heat map :
where is the mean of values inside superpixel on map .
Pairwise potential . Our pairwise potential is defined as follows.
As our pairwise potential in Eq. (9) is submodular, the optimal labeling can be found efficiently by graph cuts . As shown in Fig. 4, the segmentation derived through this approach is accurate. The final bounding box estimate is then calculated as the smallest rectangle covering the segmentation.
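The final step, extracting the smallest rectangle covering the foreground segmentation, can be sketched as:

```python
import numpy as np

def box_from_mask(mask):
    """Smallest rectangle (x1, y1, x2, y2) covering the foreground
    pixels of a binary segmentation mask; None when the mask is empty."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

Because the box is derived from the segmentation rather than chosen from the proposal pool, it is no longer limited by the quality of any single object proposal.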
Datasets. We evaluate our approach on three sets of data: the PASCAL VOC 2007 and 2012 datasets , and six subsets of the ImageNet dataset  that have not been used in the ILSVRC. The six categories are chipmunk, rhino, stoat, racoon, rake, and wheelchair; bounding box annotations are available for all of them.
For the PASCAL VOC datasets, following previous work in co-localization and weakly supervised object localization [5, 4, 1, 33], we use all images in the trainval set, discarding images that only contain object instances marked as “difficult” or “truncated”. For the ImageNet subsets, we filter out images with very large ground-truth bounding boxes.
We use two metrics to evaluate our approach.
Firstly, for comparison with state-of-the-art approaches, we use the CorLoc metric , defined as the percentage of images that are correctly localized. An image is considered correctly localized if the IoU score between the predicted bounding box and any ground-truth bounding box of the object of interest exceeds . Secondly, instead of using a fixed threshold for the CorLoc metric, we also compute the percentage of correctly localized images over a wide range of thresholds from  to , which results in a CorLoc curve.
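The IoU test and the CorLoc metric can be sketched as follows, using the conventional PASCAL-style threshold of 0.5 as a stand-in for the elided value above:

```python
def iou(a, b):
    """Intersection-over-union of boxes (x1, y1, x2, y2), inclusive."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]) + 1)
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]) + 1)
    inter = ix * iy
    area = lambda r: (r[2] - r[0] + 1) * (r[3] - r[1] + 1)
    return inter / (area(a) + area(b) - inter)

def corloc(preds, gts, thresh=0.5):
    """Fraction of images whose predicted box overlaps ANY ground-truth
    box of the target class with IoU above the threshold."""
    hits = sum(any(iou(p, g) > thresh for g in gt_list)
               for p, gt_list in zip(preds, gts))
    return hits / len(preds)
```

Sweeping `thresh` over a range of values and recording `corloc` at each point produces the CorLoc curve described above.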
Implementation details. We use EdgeBox  to extract object proposals, with a maximum of  proposals per image. We represent each proposal by a -dimensional CNN feature from the  layer (after ReLU) of the BVLC Reference CaffeNet model . We use a fixed value of  for  in Eq. (4), which controls the trade-off between the loss function and the regularizer. The value of  in Eq. (9) is set to .
4.1 Ablation study
Baselines. To investigate the impact of the various elements of the proposed approach, we consider the following two baseline methods:
“obj-sel”: the predicted bounding box for an image is simply the proposal with maximum objectness score.
“obj-seg”: for each image, objectness scores of all proposals are treated as detection confidence scores to generate a fake detection heat map, which is then sent to our segmentation model in Sec. 3.3.
The two methods proposed in our work are:
“our-sel”: given the detector learnt in Sec. 3.2, we simply select the object proposal with the maximum detection confidence score.
“our-seg”: the detection heat map generated by the learnt detector is fed into our segmentation model in Sec. 3.3.
CorLoc scores for the above four methods on the PASCAL VOC 2007 dataset are shown in Fig. 5.
The objectness score is heuristically defined based only on edge information; therefore, taking the proposal with the maximum objectness score is certainly not optimal.
However, the “obj-seg” baseline, in which we use objectness scores to generate a detection heat map for each image, performs quite well, with CorLoc increasing to . Surprisingly, this performance is on par with a state-of-the-art image co-localization approach  ( vs. ), even though no common-object assumption is used. This indicates that our segmentation model is quite effective.
Thanks to the common object detector learning procedure proposed in Sec. 3.2, “our-sel” achieves a CorLoc of , outperforming “obj-sel” and “obj-seg” by over  and  respectively. This verifies the effectiveness of the procedure, and in particular shows that, although we have neither annotated image labels nor bounding boxes, the detector still captures the appearance of the common object, which improves co-localization significantly.
Combining the advantages of the common object detector learning procedure (Sec. 3.2) and segmentation refinement (Sec. 3.3), we observe another boost in the case of “our-seg”, reaching  CorLoc. We therefore use “our-seg” to compare with state-of-the-art approaches.
Number of candidate proposals. To test the robustness of our method to the number of candidate object proposals, we evaluate three settings (,  and  proposals), which result in ,  and  CorLoc respectively. This indicates that our method is quite insensitive to the number of candidate proposals.
4.2 Diagnosing the localization error
In order to better understand the localization errors, following [13, 5], each bounding box predicted by the proposed approach is categorized into one of five cases: (1) correct: IoU score exceeds ; (2) g.t. in hypothesis: the ground truth lies completely inside the prediction; (3) hypothesis in g.t.: the prediction lies completely inside the ground truth; (4) no overlap: IoU score equals zero; (5) low overlap: none of the above four cases. In Fig. 6 we show the error modes of our approach across all categories of the PASCAL VOC 2007 dataset.
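The five-way categorization can be sketched as follows, again using 0.5 as an assumed stand-in for the elided "correct" threshold:

```python
def iou(a, b):
    """Intersection-over-union of boxes (x1, y1, x2, y2), inclusive."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]) + 1)
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]) + 1)
    inter = ix * iy
    area = lambda r: (r[2] - r[0] + 1) * (r[3] - r[1] + 1)
    return inter / (area(a) + area(b) - inter)

def contains(outer, inner):
    # True when `inner` lies completely inside `outer`.
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

def error_mode(pred, gt, thresh=0.5):
    """Assign a prediction to one of the five diagnostic cases."""
    score = iou(pred, gt)
    if score > thresh:
        return "correct"
    if contains(pred, gt):
        return "g.t. in hypothesis"
    if contains(gt, pred):
        return "hypothesis in g.t."
    if score == 0:
        return "no overlap"
    return "low overlap"
```

Note the ordering: the IoU test comes first, so a ground truth contained in a tight prediction still counts as "correct" rather than "g.t. in hypothesis".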
As shown in Fig. 6, the fraction of “no overlap” cases is quite small () across all categories, which means that our approach localizes the common object at least partially in most cases. Comparing “g.t. in hypothesis” with “hypothesis in g.t.”, the former clearly appears more frequently ( vs. ), which means that our approach tends to localize objects together with some context. In terms of correct localization, the three categories with the lowest CorLoc values are bottle (), chair () and pottedplant (). Images of these categories typically show very cluttered environments with occlusion (e.g., chairs are often occluded by tables) and large appearance changes.
4.3 Comparison to state-of-the-art approaches
Comparison to image co-localization approaches. We now compare the results of the proposed approach to the state-of-the-art image co-localization approaches of Joulin et al.  and Cho et al.  on the PASCAL VOC 2007 dataset (Table 1). The performance of the proposed method exceeds that of Joulin et al.  significantly in most categories, with an improvement of over  in mean CorLoc. The recent method of Cho et al.  relies on matching object parts by a Hough transform, and the predicted bounding box is selected by a heuristic standout score. Candidate regions are object proposals represented by whitened HOG features. However, we found that this whitening process, whose mean vector and covariance matrix are estimated from randomly sampled images of the same dataset (inevitably using images from other categories), is crucial to the performance of their algorithm. Our performance surpasses that of  by a reasonable margin of .
To further verify the effectiveness of the proposed approach, we now present an evaluation on the PASCAL VOC 2012 dataset  which has twice the number of images of PASCAL VOC 2007.
Table 2 shows our performance along with that of Cho et al.  which we evaluated using their publicly available code.
It is clear that on average our method outperforms that of Cho et al.  by .
Comparison to weakly supervised object localization approaches. We also compare the proposed approach with some state-of-the-art approaches to weakly supervised object localization. Table 3 compares several recent works and our approach on the PASCAL VOC 2007 dataset. In particular, our performance () is comparable to that of a very recent work () which also uses CNN features and EdgeBox proposals. Note that our framework uses no negative images whereas WSOL approaches do, which means we are addressing a more challenging problem. As shown in Table 3, even without negative images, we still outperform WSOL approaches on  of the categories.
We have also conducted an object detection experiment on PASCAL VOC 2007. Specifically, for each category we treated the bounding boxes predicted by our co-localization algorithm on the trainval set as ground-truth annotations, and sampled proposals that either come from other categories or have an overlap ratio of less than  with our localized bounding boxes as negative samples. The fc6 features from CaffeNet are extracted and hard negative mining is performed to train the detector. We achieve a mAP of  on the test set using an NMS threshold of . Although this is lower than some WSOL methods, it is understandable as we do not use negative data for co-localization. Moreover, we can easily extend our formulation (Eq. (4)) to handle negative data and thus perform WSOL.
Visualization. In Fig. 7, we provide a set of successful co-localization results along with the corresponding detection heat maps for some categories of the PASCAL VOC 2007 dataset. It demonstrates that detection heat maps successfully predict the correct location of the common object regardless of changes in scale, appearance and viewpoint. This provides a strong indication that, although trained without annotated positive or negative examples, our method is able to discriminate the common object from other objects in the scene.
4.4 ImageNet subsets
We note that the CNN model used for extracting features is pre-trained on ILSVRC , whose training set may share some categories with the VOC datasets. To verify that the proposed method is insensitive to the object category, we randomly selected six subsets of ImageNet  that have not been used in the ILSVRC (and are thus “unseen” by the CNN model) for evaluation.
Table 4 shows our co-localization results along with those of the current state-of-the-art work of Cho et al. . Clearly, the proposed approach outperforms  by a reasonable margin on all categories except rhino, whose images tend to contain relatively large common instances against a less cluttered background. Some successful co-localization samples are depicted in Fig. 8.
We also visualize some failure cases for the two categories on which our approach performs worst, rake and wheelchair (Fig. 9). Interestingly, these failures are quite understandable. For example, a large portion of images in the rake category show people holding a rake, so our co-localization approach tends to capture this whole scenario as the “common object”. A similar phenomenon is observed in the wheelchair category, in which people tend to sit in the wheelchair.
We have addressed the image co-localization problem by directly learning a common object detector. The key discovery of this paper is that such a detector can be learned by making its detection score distribution mimic that of an accurate strongly supervised object detector. We have also shown that it is profitable to use a CRF model to refine the co-localization result, which has not been explored in recent co-localization works.
Acknowledgements This work was in part supported by the Data to Decisions CRC Centre. C. Shen’s participation was in part supported by ARC Future Fellowship No. FT120100969.
-  Bilen, H., Pedersoli, M., Tuytelaars, T.: Weakly supervised object detection with convex clustering. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. pp. 1081–1089 (2015)
-  Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. IEEE Trans. Pattern Anal. Mach. Intell. 23(11), 1222–1239 (2001)
-  Chen, X., Shrivastava, A., Gupta, A.: Enriching visual knowledge bases via object discovery and segmentation. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. pp. 2035–2042 (2014)
-  Cho, M., Kwak, S., Schmid, C., Ponce, J.: Unsupervised object discovery and localization in the wild: Part-based matching with bottom-up region proposals. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. pp. 1201–1210 (2015)
-  Cinbis, R.G., Verbeek, J.J., Schmid, C.: Multi-fold MIL training for weakly supervised object localization. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. pp. 2409–2416 (2014)
-  Deng, J., Dong, W., Socher, R., Li, L., Li, K., Li, F.: Imagenet: A large-scale hierarchical image database. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. pp. 248–255 (2009)
-  Deselaers, T., Alexe, B., Ferrari, V.: Weakly supervised localization and learning with generic knowledge. Int. J. Comp. Vis. 100(3), 275–293 (2012)
-  Everingham, M., Eslami, S.M.A., Gool, L.V., Williams, C.K.I., Winn, J.M., Zisserman, A.: The pascal visual object classes challenge: A retrospective. Int. J. Comp. Vis. 111(1), 98–136 (2015)
-  Felzenszwalb, P.F., Huttenlocher, D.P.: Efficient graph-based image segmentation. Int. J. Comp. Vis. 59(2), 167–181 (2004)
-  Girshick, R.: Fast r-cnn. In: Proc. IEEE Int. Conf. Comp. Vis. pp. 1440–1448 (2015)
-  Girshick, R., Donahue, J., Darrell, T., Malik, J.: Region-based convolutional networks for accurate object detection and segmentation. IEEE Trans. Pattern Anal. Mach. Intell. (2015)
-  He, K., Zhang, X., Ren, S., Sun, J.: Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 37(9), 1904–1916 (2015)
-  Hoiem, D., Chodpathumwan, Y., Dai, Q.: Diagnosing error in object detectors. In: Proc. Eur. Conf. Comp. Vis. pp. 340–353 (2012)
-  Hosang, J.H., Benenson, R., Dollár, P., Schiele, B.: What makes for effective detection proposals? IEEE Trans. Pattern Anal. Mach. Intell. 38(4), 814–830 (2016)
-  Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093 (2014)
-  Joulin, A., Bach, F.R., Ponce, J.: Discriminative clustering for image co-segmentation. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. pp. 1943–1950 (2010)
-  Joulin, A., Tang, K., Li, F.: Efficient image and video co-localization with frank-wolfe algorithm. In: Proc. Eur. Conf. Comp. Vis. pp. 253–268 (2014)
-  Krause, J., Jin, H., Yang, J., Li, F.: Fine-grained recognition without part annotations. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. pp. 5546–5555 (2015)
-  Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Proc. Adv. Neural Inf. Process. Syst. pp. 1106–1114 (2012)
-  Küttel, D., Ferrari, V.: Figure-ground segmentation by transferring window masks. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. pp. 558–565 (2012)
-  Kwak, S., Cho, M., Ponce, J., Schmid, C., Laptev, I.: Unsupervised object discovery and tracking in video collections. In: Proc. IEEE Int. Conf. Comp. Vis. pp. 3173–3181 (2015)
-  Parkhi, O.M., Vedaldi, A., Jawahar, C.V., Zisserman, A.: The truth about cats and dogs. In: Proc. IEEE Int. Conf. Comp. Vis. pp. 1427–1434 (2011)
-  Prest, A., Leistner, C., Civera, J., Schmid, C., Ferrari, V.: Learning object class detectors from weakly annotated video. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. pp. 3282–3289 (2012)
-  Ren, W., Huang, K., Tao, D., Tan, T.: Weakly supervised large scale object localization with multiple instance learning and bag splitting. IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 405–416 (2016)
-  Rother, C., Kolmogorov, V., Blake, A.: Grabcut: interactive foreground extraction using iterated graph cuts. ACM Trans. Graph. 23(3), 309–314 (2004)
-  Rubinstein, M., Joulin, A., Kopf, J., Liu, C.: Unsupervised joint object discovery and segmentation in internet images. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. pp. 1939–1946 (2013)
-  Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M.S., Berg, A.C., Li, F.: Imagenet large scale visual recognition challenge. Int. J. Comp. Vis. 115(3), 211–252 (2015)
-  Shi, Z., Hospedales, T.M., Xiang, T.: Bayesian joint topic modelling for weakly supervised object localisation. In: Proc. IEEE Int. Conf. Comp. Vis. pp. 2984–2991 (2013)
-  Siva, P., Xiang, T.: Weakly supervised object detector learning with model drift detection. In: Proc. IEEE Int. Conf. Comp. Vis. pp. 343–350 (2011)
-  Tang, K., Joulin, A., Li, L., Li, F.: Co-localization in real-world images. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. pp. 1464–1471 (2014)
-  Uijlings, J.R.R., van de Sande, K.E.A., Gevers, T., Smeulders, A.W.M.: Selective search for object recognition. Int. J. Comp. Vis. 104(2), 154–171 (2013)
-  Wang, C., Ren, W., Huang, K., Tan, T.: Weakly supervised object localization with latent category learning. In: Proc. Eur. Conf. Comp. Vis. pp. 431–445 (2014)
-  Wang, X., Zhu, Z., Yao, C., Bai, X.: Relaxed multiple-instance svm with application to object discovery. In: Proc. IEEE Int. Conf. Comp. Vis. pp. 1224–1232 (2015)
-  Zitnick, C.L., Dollár, P.: Edge boxes: Locating object proposals from edges. In: Proc. Eur. Conf. Comp. Vis. pp. 391–405 (2014)