Weakly Supervised Object Detection with Segmentation Collaboration

04/01/2019 ∙ by Xiaoyan Li, et al. ∙ Institute of Computing Technology, Chinese Academy of Sciences

Weakly supervised object detection aims at learning precise object detectors given only image category labels. In recent prevailing works, this problem is generally formulated as a multiple instance learning module guided by an image classification loss. The object bounding box is assumed to be the proposal contributing most to the classification among all proposals. However, the region contributing most is also likely to be a crucial part or the supporting context of an object. To obtain a more accurate detector, in this work we propose a novel end-to-end weakly supervised detection approach, where a newly introduced generative adversarial segmentation module interacts with the conventional detection module in a collaborative loop. The collaboration mechanism takes full advantage of the complementary interpretations of the weakly supervised localization task, namely the detection and segmentation tasks, forming a more comprehensive solution. Consequently, our method obtains more precise object bounding boxes, rather than parts or irrelevant surroundings. The proposed method achieves 51.0% mAP on the PASCAL VOC 2007 dataset, outperforming the state-of-the-art and demonstrating its superiority for weakly supervised object detection.


1 Introduction

As data-driven approaches prevail on the object detection task in both academia and industry, object detection benchmarks are expected to grow ever larger. However, annotating object bounding boxes is both costly and time-consuming. To reduce the labeling workload, researchers hope to make object detection work in a weakly supervised fashion, learning a detector from category labels alone rather than bounding boxes.

Figure 1: Schematic comparison of previous works that utilize segmentation [8, 31] and the proposed collaboration approach. In [8, 31], a two-stage paradigm is used, in which proposals are first filtered and detection is then performed on the remaining boxes ([8] shares the backbone between the two modules). In our approach, the detection and segmentation modules instruct each other in a dynamic collaboration loop during training.

Recently, the most high-profile works on weakly supervised object detection all exploit the multiple instance learning (MIL) paradigm [3, 6, 23, 22, 19, 15, 1, 24, 25, 2, 8]. Based on the assumption that the object bounding box should be the proposal contributing most to image classification, MIL based approaches work via an attention-like mechanism: they automatically assign larger weights to the proposals consistent with the classification labels. Several promising works combining MIL with deep learning [2, 26, 31] have greatly pushed the boundaries of weakly supervised object detection. However, as noted in [26, 31], these methods easily over-fit on object parts, because the most discriminative classification evidence may derive from the entire object region, but may also come from crucial parts alone. The attention mechanism is effective in selecting discriminative boxes, but does not guarantee the completeness of a detected object. A further corrective mechanism is therefore necessary for more reliable inference.

Meanwhile, the completeness of a detected region is easier to ensure in weakly supervised segmentation. One common way to outline whole class-related segmentation regions is to recurrently discover and mask these regions over several forward passes [30]. Such segmentation maps can potentially constrain weakly supervised object detection, given that a proposal having low intersection over union (IoU) with the corresponding segmentation map is unlikely to be an object bounding box. In [8, 31], weakly supervised segmentation maps are used to filter object proposals and reduce the difficulty of detection, as shown in Fig. 1(a). However, these approaches adopt cascaded or independent models with relatively coarse segmentations to perform a "hard" deletion of proposals, inevitably reducing proposal recall. In short, these methods underutilize the segmentation and limit the improvements of weakly supervised object detection.

The MIL based object detection approaches and semantic segmentation approaches constrain different aspects of weakly supervised localization and have opposite strengths and shortcomings. MIL based object detection is precise in distinguishing object-related regions from irrelevant surroundings, but is inclined to confuse entire objects with their parts due to its excessive attention to the most significant regions. Meanwhile, weakly supervised segmentation is able to cover entire instances, but tends to mix irrelevant surroundings with real objects. This complementary property is verified in Table 1: segmentation achieves a higher pixel-wise recall but lower precision, while detection achieves a higher pixel-level precision but lower recall. Rather than working independently, the two tasks are naturally cooperative and can work together to overcome their intrinsic weaknesses.

Task Recall Precision
Weakly supervised detection 62.9% 46.3%
Weakly supervised segmentation 69.7% 35.4%
Table 1: Pixel-wise recall and precision of detection and segmentation results on the VOC 2007 test set, following the same setting as in Sec. 4.2. For a comparable pixel-level metric, the detection results are converted to equivalent segmentation maps in a manner similar to that described in Sec. 3.3.
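For concreteness, the pixel-level metric of Table 1 can be sketched as follows: detection boxes are rasterized into a binary foreground mask (in the spirit of the conversion in Sec. 3.3) and compared against a reference mask. This is a minimal sketch with helper names of our choosing, assuming boxes in (x1, y1, x2, y2) pixel coordinates.

```python
import numpy as np

def boxes_to_mask(boxes, height, width):
    """Rasterize detection boxes into a binary foreground mask."""
    mask = np.zeros((height, width), dtype=bool)
    for x1, y1, x2, y2 in boxes:
        mask[int(y1):int(y2) + 1, int(x1):int(x2) + 1] = True
    return mask

def pixel_precision_recall(pred_mask, gt_mask):
    """Pixel-wise precision and recall of a predicted foreground mask."""
    tp = np.logical_and(pred_mask, gt_mask).sum()
    precision = tp / max(pred_mask.sum(), 1)
    recall = tp / max(gt_mask.sum(), 1)
    return float(precision), float(recall)
```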

In this work, we propose a segmentation-detection collaborative network (SDCN) for more precise object detection under weak supervision, as shown in Fig. 1(b). In the proposed SDCN, the detection and segmentation branches work in a collaborative manner to boost each other. Specifically, the segmentation branch is designed as a generative adversarial localization structure to sketch the object region. The detection branch is optimized in an MIL manner with the obtained segmentation map serving as spatial prior probabilities of the object proposals. Besides, the detection branch also provides supervision back to the segmentation branch via a synthetic heatmap generated from all proposal boxes and their classification scores. Therefore, the two branches tightly interact with each other and form a dynamic cooperating loop. Overall, the entire network is optimized under the weak supervision of the classification loss in an end-to-end manner, which is superior to the cascaded or independent architectures in previous works [8, 31].

In summary, we make three contributions in this paper: 1) the segmentation-detection collaborative mechanism enforces deep cooperation between two complementary tasks, which provide valuable supervision to each other under the weakly supervised setting; 2) for the segmentation branch, the novel generative adversarial localization strategy enables our approach to produce more complete segmentation maps, which is crucial for improving both branches; 3) as demonstrated in Section 4, we achieve the best performance on the PASCAL VOC 2007 and 2012 datasets, surpassing the previous state-of-the-art methods.

2 Related works

Multiple Instance Learning (MIL). MIL [9] is a concept in machine learning that captures the essence of the inexact supervision problem, in which only coarse-grained labels are available [34]. Formally, given a training image $I$, all instances in some specific form constitute a "bag"; object proposals (in the detection task) or image pixels (in the segmentation task) can be different forms of instances. If the image is labeled with class $c$, then the "bag" of $I$ is positive with regard to $c$, meaning that there is at least one positive instance of class $c$ in this bag. If $I$ is not labeled with class $c$, the corresponding "bag" is negative to $c$ and there is no instance of class $c$ in this image. MIL models aim at predicting the label of an input bag, and more importantly, finding the positive instances in positive bags.

Figure 2: The overall architecture. The SDCN is composed of three modules: the feature extractor, the segmentation branch, and the detection branch. The segmentation branch is instructed by a classification network in a generative adversarial learning manner, while the detection branch employs a conventional weakly supervised detector OICR [26], guided by an MIL objective. These two branches further supervise each other in a collaboration loop. The solid ellipses denote the cost functions. The operations are denoted as blue arrows, while the collaboration loop is shown with orange ones.

Weakly Supervised Object Detection. Recently, the incorporation of deep neural networks and MIL has significantly improved over the previous state-of-the-art. Bilen and Vedaldi [2] proposed the Weakly Supervised Deep Detection Network (WSDDN), composed of two branches acting as a proposal selector and a proposal classifier, respectively. The idea of detecting objects by attention-based selection has proved so effective that most later works follow it; for example, WSDDN is further improved by adding recursive refinement branches in [26]. Besides these single-stage approaches, researchers have also considered multiple-stage methods in which fully supervised detectors are trained with the boxes detected by single-stage methods as pseudo-labels. Zhang et al. [33] proposed a metric to estimate image difficulty from the proposal classification scores of WSDDN, and progressively trained a Fast R-CNN with a curriculum learning strategy. To speed up weakly supervised object detectors, Shen et al. [20] used WSDDN as an instructor which guides a fast generator to produce similar detection results.

Weakly Supervised Object Segmentation. Another route to localizing objects is semantic segmentation. To obtain a weakly supervised segmentation map, Kolesnikov and Lampert [18] took the segmentation map as an output of the network and then aggregated it into a global classification prediction that can be learned from category labels. In [10], the aggregation function is improved to incorporate both negative and positive evidence, representing both the absence and presence of the target class. In [30], a recurrent adversarial erasing strategy is proposed to mask the response region of previous forward passes and force the network to generate responses on other undetected parts during the current forward pass.

Utilization of Segmentation in Weakly Supervised Detection. Researchers have found inherent relations between the weakly supervised segmentation and detection tasks. In [8], a segmentation branch generating coarse response maps is used to eliminate proposals unlikely to cover any object. In [31], the proposal filtering step is based on a new objectness rating, TS2C, defined with the weakly supervised segmentation map. Ge et al. [13] proposed a complex framework for both weakly supervised segmentation and detection, where the results of segmentation models serve as both object proposal generator and filter for the subsequent detection models. These methods incorporate segmentation to overcome the limitations of weakly supervised object detection, which is reasonable and promising considering their improvements over baseline models. However, they ignore the aforementioned complementarity of the two tasks and only exploit one-way cooperation, as shown in Fig. 1(a). This suboptimal use of segmentation information limits the performance of their methods.

3 Method

The overall architecture of the proposed segmentation-detection collaborative network (SDCN) is shown in Fig. 2. The network is mainly composed of three components: a backbone feature extractor $E$, a segmentation branch $S$, and a detection branch $D$. For an input image $I$, its feature $F = E(I)$ is extracted by the extractor and then fed into $S$ and $D$ for segmentation and detection, respectively. The entire network is guided by the classification labels $\mathbf{y} = [y_1, \dots, y_C] \in \{0, 1\}^C$ (where $C$ is the number of object classes), which are formulated as an adversarial classification loss and an MIL objective. An additional collaboration loss is designed to improve the accuracy of both branches in the manner of a collaborative loop.

In Sec. 3.1, we first briefly introduce our detection branch, which follows the Online Instance Classifier Refinement (OICR) framework [26]. The proposed segmentation branch and collaboration mechanism are described in detail in Sec. 3.2 and Sec. 3.3, respectively.

3.1 Detection Branch

The detection branch $D$ aims at detecting object instances in an input image, given only image category labels. The design of $D$ follows OICR [26], which works in a similar fashion to Fast R-CNN [14]. Specifically, $D$ takes the feature $F$ from the backbone and object proposals $R = \{R_1, \dots, R_N\}$ (where $N$ is the number of proposals) from Selective Search [28] as input, and detects by classifying each proposal, formulated as

$$P = D(F, R), \quad P \in [0, 1]^{N \times (C+1)}, \tag{1}$$

where $C + 1$ denotes the number of classes, with the $(C+1)^{th}$ class as the background. Each element $p_{n,c}$ indicates the probability of the $n^{th}$ proposal belonging to the $c^{th}$ class.

The detection branch consists of two sub-modules: a multiple instance detection network (MIDN) and an online instance classifier refinement module $D_r$. The MIDN serves as an instructor of the refinement module, while $D_r$ produces the final detection output.

The MIDN is the same as the aforementioned WSDDN [2], which computes the probability $p^{m}_{n,c}$ of each proposal belonging to each class under the supervision of the category label, with an MIL objective (Eq. (1) of [26]) formulated as follows:

$$\phi_c = \sum_{n=1}^{N} p^{m}_{n,c}, \tag{2}$$

$$L_{midn} = L_{bce}(\boldsymbol{\phi}, \mathbf{y}), \tag{3}$$

where $\phi_c$ shows the probability of the input image belonging to the $c^{th}$ category, obtained by summing the scores of all proposals, and $L_{bce}$ denotes the standard multi-class binary cross entropy loss.
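A minimal sketch of this MIL objective in the spirit of WSDDN follows; the tensor names and shapes are our assumptions. Two streams produce per-proposal logits: a softmax over proposals selects where each class responds, a softmax over classes classifies each proposal, and the summed image score is supervised with binary cross entropy.

```python
import torch
import torch.nn.functional as F

def midn_loss(sel_logits, cls_logits, image_labels):
    """WSDDN-style MIL objective of Eq. (2)-(3).

    sel_logits, cls_logits: (N, C) logits from the selection and
    classification streams; image_labels: (C,) binary vector y.
    """
    attention = F.softmax(sel_logits, dim=0)    # compare proposals per class
    per_class = F.softmax(cls_logits, dim=1)    # classify each proposal
    p_m = attention * per_class                 # proposal scores p^m
    phi = p_m.sum(dim=0).clamp(1e-6, 1 - 1e-6)  # image score phi, Eq. (2)
    return F.binary_cross_entropy(phi, image_labels), p_m
```

The returned matrix `p_m` is what the pseudo-labeling step below consumes.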

Then, the resulting probability matrix $P^{m}$ from minimizing Eq. (3) is used to generate pseudo instance classification labels for the refinement module. This process is denoted as:

$$Y = \Gamma(P^{m}, \mathbf{y}), \quad Y \in \{0, 1\}^{N \times (C+1)}, \tag{4}$$

Each binary element $Y_{n,c}$ indicates whether the $n^{th}$ proposal is labeled as the $c^{th}$ class. $\Gamma$ denotes the conversion from the soft probability matrix $P^{m}$ to the discrete instance labels $Y$, where the top-scoring proposal and its highly overlapped neighbours are labeled with the image label and the rest are labeled as the background. Details are given in Sec. 3.2 of [26].
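A sketch of the conversion $\Gamma$, following the description in [26] (function and variable names are ours): for each positive image class, the top-scoring proposal seeds the class, proposals that overlap it heavily inherit the label, and all other proposals default to the background index.

```python
import torch
from torchvision.ops import box_iou

def pseudo_labels(p_m, proposals, image_labels, iou_thr=0.5):
    """Convert soft proposal scores p^m into discrete labels Y (Eq. (4)).

    p_m: (N, C) proposal scores; proposals: (N, 4) boxes in xyxy format;
    image_labels: (C,) binary vector. Background uses index C.
    """
    n, c = p_m.shape
    labels = torch.full((n,), c, dtype=torch.long)  # default: background
    for cls in image_labels.nonzero(as_tuple=True)[0]:
        seed = p_m[:, cls].argmax()                  # top-scoring proposal
        iou = box_iou(proposals, proposals[seed].unsqueeze(0)).squeeze(1)
        labels[iou >= iou_thr] = cls                 # overlapped neighbours
    return labels
```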

The online instance classifier refinement module performs detection proposal by proposal and further constrains the spatial consistency of the detection results with the generated labels $Y$, formulated as:

$$P^{r} = D_{r}(F, R), \tag{5}$$

$$L_{refine} = \frac{1}{N} \sum_{n=1}^{N} L_{wce}(P^{r}_{n}, Y_{n}), \tag{6}$$

where $P^{r}_{n}$ is the $n^{th}$ row of $P^{r}$, indicating the classification scores for proposal $R_n$, and $L_{wce}$ denotes the weighted cross entropy (CE) loss function in Eq. (4) of [26]. Here, $L_{wce}$ is employed instead of $L_{bce}$, considering that each proposal has one and only one positive category label.

Eventually, the detection results are given by the refinement module, $P^{r}$, and the overall objective for the detection module is a combination of Eq. (3) and Eq. (6):

$$L_{det} = \alpha L_{midn} + \beta L_{refine}, \tag{7}$$

where $\alpha$ and $\beta$ are balancing factors for the two losses.

After optimization according to Eq. (7), the refinement module can do object detection independently by discarding the MIDN in testing.

3.2 Segmentation Branch

Generally, the MIL-based weakly supervised object detection module is prone to over-fitting on discriminative parts, since smaller regions with less variation are more likely to be highly consistent across the whole training set. To overcome this issue, the completeness of a detected object needs to be measured and adjusted by comparison with a segmentation map. Therefore, a weakly supervised segmentation branch is proposed to cover the complete object regions with a generative adversarial localization strategy.

In detail, the segmentation branch takes the feature $F$ as input and predicts a segmentation map:

$$M = S(F), \tag{8}$$

$$M \in [0, 1]^{(C+1) \times H \times W}, \tag{9}$$

where $M$ has $C + 1$ channels. Each channel $M_c$ corresponds to a segmentation map for the $c^{th}$ class with a size of $H \times W$.

To ensure that the segmentation map covers the complete object regions precisely, a novel generative adversarial localization strategy is designed as adversarial training between the segmentation predictor $S$ and an independent image classifier $\Phi$, serving as generator and discriminator respectively, as shown in Fig. 2. The training target of the generator $S$ is to fool $\Phi$ into misclassifying by masking out the object regions, while the discriminator $\Phi$ aims to eliminate the effect of the erased regions and correctly predict the category labels. $S$ and $\Phi$ are optimized alternately, each with the other one fixed.

Here, we first introduce the optimization of the segmentation branch $S$, given the classifier $\Phi$ fixed. Overall, the objective of the segmentation branch can be formulated as a sum of losses over the classes:

$$L_{seg} = \sum_{c=0}^{C} L^{(c)}, \tag{10}$$

Here, $L^{(c)}$ is the loss for the $c^{th}$ channel of the segmentation map (index 0 denotes the background), consisting of an adversarial loss $L^{(c)}_{adv}$ and a classification loss $L^{(c)}_{cls}$, described in detail in the following.

If the $c^{th}$ class is a positive foreground class¹, the segmentation map $M_c$ should fully cover the region of the $c^{th}$ class, but should not overlap with the regions of the other classes. In other words, for an accurate $M_c$, only the object region masked out by $M_c$ should be classified as the $c^{th}$ class, while its complementary region should not. Formally, this expectation can be satisfied by minimizing

$$L^{(c)}_{adv} = L_{bce}\big(\Phi(I \odot M_c),\ \mathbf{y}^{(c)}\big) + L_{bce}\big(\Phi(I \odot (1 - M_c)),\ \bar{\mathbf{y}}^{(c)}\big), \tag{11}$$

where $\odot$ denotes the pixel-wise product. The first term requires that the object region covered by the generated segmentation map, $I \odot M_c$, be recognized as the $c^{th}$ class by the classifier $\Phi$ without responding to any other class, with the label $\mathbf{y}^{(c)}$ defined by $y^{(c)}_c = 1$ and $y^{(c)}_k = 0$ for $k \neq c$. The second term means that when the region related to the $c^{th}$ class is masked out from the input, $I \odot (1 - M_c)$, the classifier should no longer recognize the $c^{th}$ class, without influence on the other classes, with the label $\bar{\mathbf{y}}^{(c)}$ defined by $\bar{y}^{(c)}_c = 0$ and $\bar{y}^{(c)}_k = y_k$ for $k \neq c$. Here, we note that the mask can generally be applied to the image or to the input of any layer of the classifier $\Phi$, and since $\Phi$ is fixed, the loss function in Eq. (11) only penalizes the segmentation branch $S$.

¹A positive foreground class means that the foreground class is present in the current image, while a negative one means that it does not appear.
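The two terms of Eq. (11) can be sketched as below, assuming `classifier` maps a (1, 3, H, W) image batch to per-class probabilities and `m_c` is the (1, 1, H, W) map of one positive class; all names are ours.

```python
import torch
import torch.nn.functional as F

def adv_loss_positive_class(image, m_c, c, labels, classifier):
    """Adversarial segmentation loss of Eq. (11) for one positive class c.

    Gradients flow through m_c into the segmentation branch; Phi stays
    fixed simply because its optimizer is not stepped on this loss.
    """
    kept = classifier(image * m_c)             # region kept by M_c
    erased = classifier(image * (1.0 - m_c))   # region erased by M_c
    target_kept = torch.zeros_like(labels)
    target_kept[c] = 1.0                       # only class c should fire
    target_erased = labels.clone()
    target_erased[c] = 0.0                     # class c must disappear
    return (F.binary_cross_entropy(kept, target_kept.unsqueeze(0)) +
            F.binary_cross_entropy(erased, target_erased.unsqueeze(0)))
```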

If the $c^{th}$ class is a negative foreground class, $M_c$ should be all-zero, as no instance of this foreground class is present. This is enforced with a response constraint term: the top 20% response pixels of each map $M_c$ are pooled and averaged into a classification prediction $s_c$, optimized with a binary cross entropy loss:

$$L^{(c)}_{cls} = -\big[y_c \log s_c + (1 - y_c) \log(1 - s_c)\big], \quad s_c = \operatorname{avg}\big(\operatorname{top}_{20\%}(M_c)\big), \tag{12}$$

If the $c^{th}$ class is labeled as negative, $s_c$ is forced to be close to 0, so all elements of the map should be approximately 0. The above loss also applies when the class is positive: $s_c$ should then be close to 1, agreeing with the constraint in Eq. (11).
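A sketch of this response constraint, with names of our choosing: the strongest 20% of responses in one channel are averaged into a scalar prediction and supervised by the image-level label of that class.

```python
import torch
import torch.nn.functional as F

def response_constraint(m_c, y_c, ratio=0.2):
    """Top-20% average pooling of one segmentation channel, Eq. (12).

    m_c: (H, W) map in [0, 1]; y_c: scalar float tensor, 1.0 if the
    class is present in the image and 0.0 otherwise.
    """
    flat = m_c.flatten()
    k = max(1, int(ratio * flat.numel()))
    s_c = flat.topk(k).values.mean().clamp(1e-6, 1 - 1e-6)
    return F.binary_cross_entropy(s_c, y_c)
```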

The background is treated as a special case. Although the labels $\mathbf{y}^{(c)}$ and $\bar{\mathbf{y}}^{(c)}$ in Eq. (11) do not involve the background class, the background segmentation map $M_0$ is handled analogously to the other classes. When $M_0$ is multiplied in the first term of Eq. (11), the target label should be all-zero, $\mathbf{y}^{(0)} = \mathbf{0}$; when $M_0$ is used as the mask in the second term of Eq. (11), the target label should be exactly the original label, $\bar{\mathbf{y}}^{(0)} = \mathbf{y}$. For Eq. (12), we assume that a background region always appears in any input image, i.e., $y_0 = 1$ for all images.

Overall, the total loss of the segmentation branch in Eq. (10) can be summarized and rewritten as:

$$L_{seg} = \lambda_{adv} \sum_{c \in \mathcal{P}} L^{(c)}_{adv} + \lambda_{cls} \sum_{c=0}^{C} L^{(c)}_{cls}, \tag{13}$$

where $\mathcal{P}$ denotes the set of positive foreground classes together with the background, and $\lambda_{adv}$ and $\lambda_{cls}$ denote balance weights.

After optimizing Eq. (13), following the adversarial manner, the segmentation branch is fixed and the classifier $\Phi$ is further optimized with the objective

$$L_{\Phi} = L^{\Phi}_{cls} + L^{\Phi}_{adv}, \tag{14}$$

$$L^{\Phi}_{cls} = L_{bce}\big(\Phi(I), \mathbf{y}\big), \quad L^{\Phi}_{adv} = \sum_{c \in \mathcal{P}} L_{bce}\big(\Phi(I \odot (1 - M_c)), \mathbf{y}\big), \tag{15}$$

The objective consists of a classification loss $L^{\Phi}_{cls}$ and an adversarial loss $L^{\Phi}_{adv}$. The target of the classifier should always be $\mathbf{y}$, since it aims at digging out the remaining object regions even when $I \odot M_c$ is masked out.
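Conversely, the classifier update of Eq. (14)-(15) can be sketched as follows (names ours): with the segmentation maps detached, $\Phi$ is asked to recover the full label vector both from the clean image and from images whose predicted regions have been erased.

```python
import torch
import torch.nn.functional as F

def classifier_loss(image, seg_maps, active_channels, classifier, labels):
    """Objective for Phi, Eq. (14)-(15), with the segmentation branch frozen.

    image: (1, 3, H, W); seg_maps: (1, C + 1, H, W) maps M; labels: (1, C);
    active_channels: indices of the background and positive classes.
    """
    loss = F.binary_cross_entropy(classifier(image), labels)   # L_cls
    for c in active_channels:
        m_c = seg_maps[:, c:c + 1].detach()                    # freeze S
        erased = image * (1.0 - m_c)
        loss = loss + F.binary_cross_entropy(classifier(erased), labels)
    return loss
```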

Our design of the segmentation branch shares the same adversarial spirit as [30], but it is more efficient than [30], which recurrently performs several forward passes for one segmentation map. Besides, we avoid having to choose the number of recurrent steps as in [30], which may vary across objects.

3.3 Collaboration Mechanism

A dynamic collaboration loop is designed so that detection and segmentation complement each other toward more accurate predictions, namely predictions neither so large that they cover the background nor so small that they degenerate to object parts.

Segmentation instructs Detection. As mentioned, the detection branch easily over-fits to discriminative parts, while the segmentation can cover the whole object region. Naturally, the segmentation map can therefore be used to refine the detection results by giving a higher score to a proposal with a larger IoU against the corresponding segmentation map. This is achieved by re-weighting the instance classification probability matrix $P^{m}$ in Eq. (2) of the detection branch with a prior probability matrix $W$ stemming from the segmentation map:

$$\tilde{P}^{m} = P^{m} \odot W, \tag{16}$$

where $W \in [0, 1]^{N \times (C+1)}$ denotes the overlap degree between each object proposal and the connected regions of the segmentation map. $W$ is generated as:

$$w_{n,c} = \max_{j} \operatorname{IoU}(R_n, \mathcal{C}_{c,j}) + \epsilon, \tag{17}$$

Here, $\mathcal{C}_{c,j}$ denotes the $j^{th}$ connected component in the segmentation map $M_c$, and $\operatorname{IoU}(R_n, \mathcal{C}_{c,j})$ denotes the intersection over union between $\mathcal{C}_{c,j}$ and the object proposal $R_n$. The constant $\epsilon$ adds a fault tolerance for the segmentation branch. Each column of $W$ is normalized by its maximum value to make it range within [0, 1].

With the re-weighting in Eq. (16), object proposals that only focus on local parts are assigned lower weights, while proposals that precisely cover the object stand out. The connected components are employed to alleviate the issue of multiple instance occurrences, which is a hard case for weakly supervised object detection. The recent TS2C [31] objectness rating designed for this issue was also tested in place of the IoU with connected components, but showed no superiority in our case.
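The prior of Eq. (17) can be sketched with SciPy's connected-component labelling. Whether the IoU is taken against the component's pixels or its bounding box is an implementation choice; we use the bounding box here, and all names are ours.

```python
import numpy as np
from scipy import ndimage

def iou_xyxy(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) form."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1 + 1) * max(0, iy2 - iy1 + 1)
    area = lambda r: (r[2] - r[0] + 1) * (r[3] - r[1] + 1)
    return inter / float(area(a) + area(b) - inter)

def segmentation_prior(m_c, proposals, thr=0.5, eps=0.5):
    """Per-proposal weights w_{n,c} of Eq. (17) for one class channel.

    m_c: (H, W) soft map; proposals: (N, 4) boxes in xyxy form.
    """
    components, num = ndimage.label(m_c > thr)
    w = np.zeros(len(proposals))
    for j in range(1, num + 1):
        ys, xs = np.nonzero(components == j)
        comp_box = (xs.min(), ys.min(), xs.max(), ys.max())
        for n, box in enumerate(proposals):
            w[n] = max(w[n], iou_xyxy(box, comp_box))
    w = w + eps                # fault tolerance epsilon
    return w / w.max()         # normalize the column into [0, 1]
```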

The re-weighted probability matrix $\tilde{P}^{m}$ replaces $P^{m}$ in Eq. (3) and further instructs the MIDN as in Eq. (18) and the refinement module as in Eq. (19):

$$L'_{midn} = L_{bce}(\tilde{\boldsymbol{\phi}}, \mathbf{y}), \quad \tilde{\phi}_c = \sum_{n=1}^{N} \tilde{p}^{m}_{n,c}, \tag{18}$$

$$L'_{refine} = \frac{1}{N} \sum_{n=1}^{N} L_{wce}(P^{r}_{n}, \tilde{Y}_{n}), \tag{19}$$

where $\tilde{Y} = \Gamma(\tilde{P}^{m}, \mathbf{y})$ denotes the pseudo labels derived from $\tilde{P}^{m}$ as in Eq. (4). Finally, the overall objective of the detection branch in Eq. (7) is reformulated as

$$L'_{det} = \alpha L'_{midn} + \beta L'_{refine}, \tag{20}$$

Detection instructs Segmentation. Though the detection boxes may not cover the whole object, they are effective in distinguishing an object from the background. To guide the segmentation branch, a detection heatmap $T$ is generated, which can be seen as an analog of the segmentation map $M$. Each channel $T_c$ corresponds to a heatmap for the $c^{th}$ class. Specifically, for a positive class $c$, each proposal box contributes its classification score to all pixels within the proposal, generating $T_c$ by

$$T_c(u, v) = \sum_{n:\, (u, v) \in R_n} p^{r}_{n,c}, \tag{21}$$

while the channels $T_c$ corresponding to negative classes are set to zero. Then, each $T_c$ is normalized by its maximum response, and the background heatmap is simply calculated as the complement of the foreground:

$$T_0 = 1 - \max_{c \geq 1} T_c, \tag{22}$$

To generate a pseudo category label for each pixel, the soft heatmap $T$ is first discretized by taking the argument of the maximum at each pixel, and then the top 10% most confident pixels of each class are kept, while the other, ambiguous ones are ignored. The generated label map is denoted by $\hat{Y}^{seg}$, and the instructive loss is formulated as a pixel-wise cross entropy:

$$L_{d2s} = L_{ce}(M, \hat{Y}^{seg}), \tag{23}$$
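A sketch of how these pseudo labels are built (Eq. (21)-(23)); names are ours, and columns of `scores` for negative image classes are assumed to be pre-zeroed. Proposal scores are splatted into per-class heatmaps, the background is the complement of the strongest foreground, and only the most confident pixels per class are kept as labels.

```python
import numpy as np

def detection_heatmaps(proposals, scores, num_classes, shape):
    """Per-class heatmaps T of Eq. (21)-(22).

    proposals: (N, 4) xyxy boxes; scores: (N, num_classes) classification
    scores from P^r (negative-class columns zeroed); shape: (H, W).
    """
    h, w = shape
    t = np.zeros((num_classes + 1, h, w))   # channel 0 is the background
    for (x1, y1, x2, y2), s in zip(proposals, scores):
        t[1:, int(y1):int(y2) + 1, int(x1):int(x2) + 1] += s[:, None, None]
    t[1:] /= max(t[1:].max(), 1e-6)         # max-normalize the foreground
    t[0] = 1.0 - t[1:].max(axis=0)          # background as the complement
    return t

def pixel_pseudo_labels(t, keep_ratio=0.1, ignore=255):
    """Discretize T into per-pixel labels, keeping the top 10% per class."""
    cls = t.argmax(axis=0)
    conf = t.max(axis=0)
    labels = np.full(cls.shape, ignore, dtype=np.int64)
    for c in range(t.shape[0]):
        sel = cls == c
        if not sel.any():
            continue
        thr = np.quantile(conf[sel], 1.0 - keep_ratio)
        labels[sel & (conf >= thr)] = c      # ambiguous pixels stay ignored
    return labels
```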

Therefore, the loss function of the whole segmentation branch in Eq. (13) is updated to

$$L'_{seg} = L_{seg} + L_{d2s}, \tag{24}$$

Overall Objective. With the updates in Eq. (20) and Eq. (24), the final objective for the entire network is

$$L = L'_{det} + L'_{seg}. \tag{25}$$

Briefly, the above objective is optimized in an end-to-end manner. The image classifier $\Phi$ is optimized alternately with its own loss $L_{\Phi}$, as in most adversarial methods. The optimization can easily be conducted using gradient descent. For clarity, the training and testing of our SDCN are summarized in Algorithm 1.

In the testing stage, as shown in Algorithm 1, only the feature extractor $E$ and the refinement module $D_r$ are needed, which makes our method as efficient as [26].

1: Input: a training set with category labels $\{(I, \mathbf{y})\}$.
2: procedure Training
3:     forward the SDCN: $F = E(I)$, $M = S(F)$, $P^{m}, P^{r} = D(F, R)$,
4:     forward the classifier $\Phi$ on the masked inputs $I \odot M_c$ and $I \odot (1 - M_c)$,
5:     generate the collaboration variables $W$ and $\hat{Y}^{seg}$ from $M$ and $P^{r}$,
6:     compute $L'_{det}$ in Eq. (20) and $L'_{seg}$ in Eq. (24),
7:     backward the loss $L'_{det} + L'_{seg}$ for the SDCN,
8:     compute and backward the loss $L_{\Phi}$ for $\Phi$,
9:     continue until convergence.
10: Output: the optimized SDCN ($E$ and $D_r$) for detection.
1: Input: a test set $\{I\}$.
2: procedure Testing
3:     forward the SDCN: $F = E(I)$, $P^{r} = D_r(F, R)$,
4:     post-process $P^{r}$ for the detected bounding boxes.
5: Output: the detected object bounding boxes for $\{I\}$.
Algorithm 1: Training and Testing SDCN
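A heavily abbreviated skeleton of one training iteration of Algorithm 1; the module interfaces (`sdcn(...)`, `classifier_loss(...)`) are hypothetical stand-ins for the losses defined above.

```python
def train_step(image, labels, proposals, sdcn, classifier,
               opt_sdcn, opt_cls):
    """One alternating update of the SDCN (E, S, D) and the classifier Phi."""
    # 1) update the SDCN with Phi frozen: L'_det (Eq. (20)) + L'_seg (Eq. (24))
    loss_det, loss_seg, seg_maps = sdcn(image, proposals, labels, classifier)
    opt_sdcn.zero_grad()
    (loss_det + loss_seg).backward()
    opt_sdcn.step()
    # 2) update Phi with the SDCN frozen: Eq. (14)-(15) on the detached maps
    active = [0] + (labels[0].nonzero(as_tuple=True)[0] + 1).tolist()
    loss_phi = classifier_loss(image, seg_maps.detach(), active,
                               classifier, labels)
    opt_cls.zero_grad()
    loss_phi.backward()
    opt_cls.step()
```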

4 Experiments

We evaluate the proposed segmentation-detection collaborative network (SDCN) for weakly supervised object detection to demonstrate its advantages over the state-of-the-art methods.

Figure 3: Visualization of the segmentation and detection results without and with collaboration. In (a), the columns from left to right are the original images and the segmentation maps obtained without and with the collaboration loop. In (b), the detection results of OICR [26] (no collaboration) and of the proposed method (with the collaboration loop) are shown as red and green boxes, respectively. (Absence of boxes means no object was detected at the detection threshold.)

4.1 Experimental Setup

Datasets. The evaluation is conducted on two commonly used datasets for weakly supervised detection: PASCAL VOC 2007 [12] and 2012 [11]. The VOC 2007 dataset includes 9,963 images with 24,640 objects in total across 20 classes. It is divided into a trainval set with 5,011 images and a test set with 4,952 images. The more challenging VOC 2012 dataset consists of 11,540 images with 27,450 objects in the trainval set and 10,991 images for testing. In our experiments, the trainval split is used for training and the test set for testing. The performance is reported in terms of two metrics: 1) correct localization (CorLoc) [7] on the trainval split and 2) average precision (AP) on the test set.

Implementation. For the backbone network $E$, we use VGG-16 [21]. For the detection branch $D$, the same architecture as in OICR [26] is employed. For the segmentation branch $S$, a segmentation head similar to that of CPN [4] is adopted. For the adversarial classifier $\Phi$, ResNet-101 [16] is used, and the segmentation masking operation is applied after the res4b22 layer. The detailed architecture is shown in Appendix A.

We follow a three-step training strategy: 1) the classifier $\Phi$ is trained with a fixed learning rate until convergence; 2) the segmentation and detection branches are pre-trained without collaboration; 3) the entire architecture is trained in the end-to-end manner. The SDCN first runs for 40k iterations, followed by 30k iterations with a decayed learning rate. The same multi-scale training and testing strategies as in OICR [26] are adopted. To balance the impacts of the detection and segmentation branches, the loss weights $\alpha$, $\beta$, $\lambda_{adv}$, and $\lambda_{cls}$ are simply set to make the gradients have similar scales. The constant $\epsilon$ in Eq. (17) is empirically set to 0.5.

Det. branch | Seg. branch | Seg.→Det. | Det.→Seg. | mAP
✓ | | | | 41.2
✓ | ✓ | | | 41.3
✓ | ✓ | ✓ | | 36.8
✓ | ✓ | ✓ | ✓ | 48.3
Table 2: mAP (in %) of different weakly supervised strategies with the same backbone on the VOC 2007 dataset.

4.2 Ablation Studies

Our ablation study is conducted on the VOC 2007 dataset. Four weakly supervised strategies are compared in Table 2. The baseline detection method without the segmentation branch is identical to OICR [26]. Another naive alternative directly combines the detection and segmentation modules in a multi-task manner without any collaboration between them. The model in which only the segmentation branch instructs the detection branch is also tested. Its mAP is the lowest, since the mean intersection over union (mIoU) between the segmentation results and the ground truth drops from 37% to 25.1% without the guidance of the detection branch, which proves that the two branches should not collaborate one-way. Our method with segmentation-detection collaboration achieves the highest mAP. The proposed method improves all baseline models by large margins, demonstrating the effectiveness and necessity of the collaboration loop between detection and segmentation.

Methods aero bike bird boat bottle bus car cat chair cow table dog horse mbike person plant sheep sofa train tv mAP
Single-stage
WSDDN-VGG16 [2] 39.4 50.1 31.5 16.3 12.6 64.5 42.8 42.6 10.1 35.7 24.9 38.2 34.4 55.6 9.4 14.7 30.2 40.7 54.7 46.9 34.8
OICR-VGG 16 [26] 58.0 62.4 31.1 19.4 13.0 65.1 62.2 28.4 24.8 44.7 30.6 25.3 37.8 65.5 15.7 24.1 41.7 46.9 64.3 62.6 41.2
MELM-L+RL[29] 50.4 57.6 37.7 23.2 13.9 60.2 63.1 44.4 24.3 52.0 42.3 42.7 43.7 66.6 2.9 21.4 45.1 45.2 59.1 56.2 42.6
TS2C  [31] 59.3 57.5 43.7 27.3 13.5 63.9 61.7 59.9 24.1 46.9 36.7 45.6 39.9 62.6 10.3 23.6 41.7 52.4 58.7 56.6 44.3
[27] 57.9 70.5 37.8 5.7 21.0 66.1 69.2 59.4 3.4 57.1 57.3 35.2 64.2 68.6 32.8 28.6 50.8 49.5 41.1 30.0 45.3
SDCN (ours) 59.8 67.1 32.0 34.7 22.8 67.1 63.8 67.9 22.5 48.9 47.8 60.5 51.7 65.2 11.8 20.6 42.1 54.7 60.8 64.3 48.3
Multiple-stage
WSDDN-Ens. [2] 46.4 58.3 35.5 25.9 14.0 66.7 53.0 39.2 8.9 41.8 26.6 38.6 44.7 59.0 10.8 17.3 40.7 49.6 56.9 50.8 39.3
HCP+DSD+OSSH3[17] 52.2 47.1 35.0 26.7 15.4 61.3 66.0 54.3 3.0 53.6 24.7 43.6 48.4 65.8 6.6 18.8 51.9 43.6 53.6 62.4 41.7
OICR-Ens.+FRCNN[26] 65.5 67.2 47.2 21.6 22.1 68.0 68.5 35.9 5.7 63.1 49.5 30.3 64.7 66.1 13.0 25.6 50.0 57.1 60.2 59.0 47.0
MELM-L2+ARL[29] 55.6 66.9 34.2 29.1 16.4 68.8 68.1 43.0 25.0 65.6 45.3 53.2 49.6 68.6 2.0 25.4 52.5 56.8 62.1 57.1 47.3
ZLDN-L[33] 55.4 68.5 50.1 16.8 20.8 62.7 66.8 56.5 2.1 57.8 47.5 40.1 69.7 68.2 21.6 27.2 53.4 56.1 52.5 58.2 47.6
TS2C+FRCNN  [31] 48.0
Ens.+FRCNN[27] 63.0 69.7 40.8 11.6 27.7 70.5 74.1 58.5 10.0 66.7 60.6 34.7 75.7 70.3 25.7 26.5 55.4 56.4 55.5 54.9 50.4
SDCN+FRCNN (ours) 61.1 70.6 40.2 32.8 23.9 63.4 68.9 68.2 18.3 60.2 53.5 63.6 53.6 66.1 14.6 21.8 50.5 56.9 62.4 67.9 51.0
Table 3: Average precision (in %) for our method and the state-of-the-arts on VOC 2007 test split.
Methods aero bike bird boat bottle bus car cat chair cow table dog horse mbike person plant sheep sofa train tv CorLoc
Single-stage
WSDDN-VGG16 [2] 65.1 58.8 58.5 33.1 39.8 68.3 60.2 59.6 34.8 64.5 30.5 43.0 56.8 82.4 25.5 41.6 61.5 55.9 65.9 63.7 53.5
OICR-VGG16 [26] 81.7 80.4 48.7 49.5 32.8 81.7 85.4 40.1 40.6 79.5 35.7 33.7 60.5 88.8 21.8 57.9 76.3 59.9 75.3 81.4 60.6
TS2C [31] 84.2 74.1 61.3 52.1 32.1 76.7 82.9 66.6 42.3 70.6 39.5 57.0 61.2 88.4 9.3 54.6 72.2 60.0 65.0 70.3 61.0
[27] 77.5 81.2 55.3 19.7 44.3 80.2 86.6 69.5 10.1 87.7 68.4 52.1 84.4 91.6 57.4 63.4 77.3 58.1 57.0 53.8 63.8
SDCN (ours) 85.8 83.1 56.2 58.5 44.7 80.2 85.0 77.9 29.6 78.8 53.6 74.2 73.1 88.4 18.2 57.5 74.2 60.8 76.1 79.2 66.8
Multiple-stage
HCP+DSD+OSSH3[17] 72.7 55.3 53.0 27.8 35.2 68.6 81.9 60.7 11.6 71.6 29.7 54.3 64.3 88.2 22.2 53.7 72.2 52.6 68.9 75.5 56.1
WSDDN-Ens. [2] 68.9 68.7 65.2 42.5 40.6 72.6 75.2 53.7 29.7 68.1 33.5 45.6 65.9 86.1 27.5 44.9 76.0 62.4 66.3 66.8 58.0
MELM-L2+ARL[29] 61.4
ZLDN-L[33] 74.0 77.8 65.2 37.0 46.7 75.8 83.7 58.8 17.5 73.1 49.0 51.3 76.7 87.4 30.6 47.8 75.0 62.5 64.8 68.8 61.2
OICR-Ens.+FRCNN[26] 85.8 82.7 62.8 45.2 43.5 84.8 87.0 46.8 15.7 82.2 51.0 45.6 83.7 91.2 22.2 59.7 75.3 65.1 76.8 78.1 64.3
Ens.+FRCNN[27] 83.8 82.7 60.7 35.1 53.8 82.7 88.6 67.4 22.0 86.3 68.8 50.9 90.8 93.6 44.0 61.2 82.5 65.9 71.1 76.7 68.4
SDCN+FRCNN (ours) 88.3 84.3 59.2 58.5 47.7 81.2 86.7 78.8 29.9 81.5 54.0 78.4 75.2 90.8 20.2 55.3 76.3 68.6 79.1 82.8 68.8
Table 4: CorLoc (in %) for our method and the state-of-the-arts on VOC 2007 trainval split.

The segmentation masks and detection results without and with collaboration are visualized in Fig. 3. As observed in Fig. 3(a), with the instruction from the detection branch, the segmentation map becomes much more precise, with fewer confusions between the background and the class-related region. Similarly, as shown in Fig. 3(b), the baseline approach is inclined to mistake discriminative parts for target object bounding boxes, while with the guidance from segmentation more complete objects are detected. The visualization clearly illustrates the mutual benefits.

For the validation of hyper-parameters and a detailed error analysis, please refer to Appendix B.

4.3 Comparisons with state-of-the-arts

All comparison methods are first evaluated on VOC 2007, as shown in Table 3 and Table 4 in terms of mAP and CorLoc. Among single-stage methods, ours outperforms the others on most categories, leading to a notable improvement on average. In particular, our method performs much better than the state-of-the-art on "boat", "cat", and "dog", as our approach leans toward detecting more complete objects, even though in most cases instances of these categories can be identified from parts alone. Moreover, our method produces significant improvements over OICR [26] with exactly the same architecture. The most competitive method [27] is designed for weakly supervised object proposal generation, which is not really competing but complementary to our method; replacing the fixed object proposals in our method with those of [27] could potentially improve performance further. Besides, the performance of our single-stage method is even comparable with the multiple-stage methods [26, 31, 33, 29], illustrating the effectiveness of the proposed dynamic collaboration loop.

Furthermore, all methods can be enhanced by training with multiple stages, as shown at the bottom of Table 3. Following [26, 31], the top-scoring detection bounding boxes from SDCN are used as the labels for training a Fast R-CNN [14] with a VGG16 backbone, denoted as SDCN+FRCNN. With this simple multi-stage training strategy, the performance is further boosted to 51.0%, which surpasses all state-of-the-art multiple-stage methods, even though [26, 27] use more complex ensemble models. It is noted that some approaches, e.g. HCP+DSD+OSSH3 [17] and ZLDN-L [33], design more elaborate training mechanisms using self-paced or curriculum learning. We believe that the performance of our SDCN+FRCNN model could be further improved by adopting such algorithms.

Methods | mAP | CorLoc
Single-stage:
OICR-VGG16 [26] | 37.9 | 62.1
TS2C [31] | 40.0 | 64.4
[27] | 40.8 | 64.9
SDCN (ours) | 43.5 | 67.9
Multiple-stage:
MELM-L2+ARL [29] | 42.4 | –
OICR-Ens.+FRCNN [26] | 42.5 | 65.6
ZLDN-L [33] | 42.9 | 61.5
TS2C+FRCNN [31] | 44.4 | –
Ens.+FRCNN [27] | 45.7 | 69.3
SDCN+FRCNN (ours) | 46.7 | 69.5
Table 5: mAP and CorLoc (in %) for our method and the state-of-the-art methods on VOC 2012 (mAP on the test set, CorLoc on the trainval set).

The comparison methods are further evaluated on the more challenging VOC 2012 dataset, as shown in Table 5. As expected, the proposed method achieves significant improvements with the same architecture as [26, 31], again demonstrating its superiority.

Overall, our SDCN significantly improves the average performance of weakly supervised object detection, benefiting from the deep collaboration of segmentation and detection. However, there are still several classes on which the performance is hardly improved, as shown in Table 3, e.g. "chair" and "person". The main reason is the large portion of occluded and overlapping samples for these classes, which leads to incomplete or connected responses on the segmentation map and poor interaction with the detection branch, leaving room for further improvement.

Time cost. Our training is roughly 2× slower than that of the baseline OICR [26], but the testing time costs of our method and OICR are identical, since they share exactly the same detection-branch architecture.

5 Conclusions and Future Work

In this paper, we present a novel segmentation-detection collaborative network (SDCN) for weakly supervised object detection. Different from previous works, our method exploits a collaboration loop between the segmentation and detection tasks to combine the merits of both. Extensive experimental results show that our method surpasses the previous state-of-the-art while remaining efficient at the inference stage. The design of SDCN could be made more elaborate for densely overlapping or partially occluded objects, which is more challenging and left as future work.

Appendix A Appendix: Network Architecture

The network architectures of the proposed method are shown in Fig. 4. The feature extractor and the detection branch are exactly the same as in OICR [26], while the segmentation branch follows the design of the RefineNet in CPN [4]. The classification network for generative adversarial localization is omitted, as it has exactly the same architecture as the well-known ResNet-101 [16].

Figure 4: Network architectures for (a) the feature extractor, (b) the detection branch, and (c) the segmentation branch.

The feature extractor in Fig. 4(a) is basically the VGG16 [21] network. The max-pooling layer after "conv4" and its subsequent convolutional layers are replaced by dilated convolutional layers to increase the resolution of the final output feature map.

The detection branch is composed of a multiple instance detection network (MIDN) and an online instance classifier refinement module, shown in green and blue in Fig. 4(b), respectively. In the MIDN, two branches are in charge of computing the instance classification weights for each proposal and classifying each proposal, respectively, by performing softmax along different dimensions. For the refinement module, although the instance classifier is refined only once in the manuscript for clarity of illustration, it can in fact be refined multiple times. We follow OICR [26], which performs the refinement 3 times, as shown in Fig. 4(b). Each refinement branch is instructed by the detection results of its predecessor. During testing, the outputs of all refinement branches are averaged to give the final detection result $P^{r}$.

The segmentation branch is shown in Fig. 4(c) and is similar to the RefineNet in CPN [4], which is effective in integrating multi-scale information for accurate localization. As illustrated in [4], the architecture, mainly consisting of several stacked bottleneck blocks, can transmit information across different scales and integrate all of them. The normalization layers in the bottleneck blocks are changed from batch normalization to group normalization [32] in our experiments, given that the batch size is too small to train good batch normalization layers.

Appendix B Appendix: Further Ablation Study

B.1 Investigation of Hyper-parameters

The influences of the balance weights $\alpha$, $\beta$, $\lambda_{adv}$, and $\lambda_{cls}$ are shown in Fig. 5. As can be seen, the detection performance is not sensitive to these parameters when they are larger than 0.1, demonstrating the robustness of the proposed method.

Figure 5: The curves of the mAP varying with the balance weights for each loss on the PASCAL VOC test set.
Figure 6: Per-class frequencies of error modes, and averaged across all classes for the baseline OICR [26] and our proposed method on the PASCAL VOC 2007 trainval set.

B.2 Error Analysis

We investigate the detailed sources of error following [5], where detected boxes are categorized into five cases: 1) correct localization (overlap with the ground truth ≥ 50%), 2) the hypothesis completely inside the ground truth, 3) the ground truth completely inside the hypothesis, 4) none of the above, but non-zero overlap, and 5) no overlap.
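For reference, the five cases can be decided per detection as follows; this is a sketch with names of our choosing, assuming boxes in (x1, y1, x2, y2) form.

```python
def error_mode(det, gt, iou_thr=0.5):
    """Categorize one detection against one ground-truth box, as in [5]."""
    ix1, iy1 = max(det[0], gt[0]), max(det[1], gt[1])
    ix2, iy2 = min(det[2], gt[2]), min(det[3], gt[3])
    inter = max(0, ix2 - ix1 + 1) * max(0, iy2 - iy1 + 1)
    area = lambda b: (b[2] - b[0] + 1) * (b[3] - b[1] + 1)
    iou = inter / float(area(det) + area(gt) - inter)
    inside = lambda a, b: (a[0] >= b[0] and a[1] >= b[1] and
                           a[2] <= b[2] and a[3] <= b[3])
    if iou >= iou_thr:
        return "correct localization"
    if inside(det, gt):
        return "hypothesis inside ground truth"
    if inside(gt, det):
        return "ground truth inside hypothesis"
    return "partial overlap" if inter > 0 else "no overlap"
```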

The frequencies of these five cases are shown in Fig. 6(a) for the baseline OICR. The largest error lies in low overlap between the hypothesis and the ground truth, which is inevitable for all existing weakly supervised object detectors and results from hard cases or the self-limitations of the detector. Notably, the hypothesis lying inside the ground truth is the second largest error mode, which indicates that OICR frequently confuses object parts with whole objects.

Figure 7: Visualization of the proposed SDCN on the PASCAL VOC 2007 test set.

The corresponding result for the proposed SDCN model is shown in Fig. 6(b). The proportion of deep blue bars, representing correct localization, increases markedly, and the frequencies of three error types decrease, especially the hypothesis inside the ground truth. This indicates that our method largely overcomes the aforementioned confusion in OICR. However, the cases of the ground truth inside the hypothesis inevitably increase, owing to the use of semantic rather than instance segmentation maps, which will be considered in our future work.

Additional visualizations of the detection results are shown in Fig. 7. As can be seen, although the input images include hard samples, occluded or distorted objects, and multiple instances in one image, the proposed method still detects these objects.

References

  • [1] H. Bilen, M. Pedersoli, and T. Tuytelaars. Weakly supervised object detection with posterior regularization. In British Machine Vision Conference (BMVC), pages 1–12, 2014.
  • [2] H. Bilen and A. Vedaldi. Weakly supervised deep detection networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2846–2854, 2016.
  • [3] M. Blaschko, A. Vedaldi, and A. Zisserman. Simultaneous object detection and ranking with weak supervision. In Advances in Neural Information Processing Systems (NIPS), pages 235–243, 2010.
  • [4] Y. Chen, Z. Wang, Y. Peng, Z. Zhang, G. Yu, and J. Sun. Cascaded Pyramid Network for Multi-Person Pose Estimation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [5] R. G. Cinbis, J. Verbeek, and C. Schmid. Weakly supervised object localization with multi-fold multiple instance learning. IEEE Transactions on Pattern Analysis & Machine Intelligence (TPAMI), 39(1):189–203, 2017.
  • [6] T. Deselaers, B. Alexe, and V. Ferrari. Localizing objects while learning their appearance. In European Conference on Computer Vision (ECCV), pages 452–466, 2010.
  • [7] T. Deselaers, B. Alexe, and V. Ferrari. Weakly supervised localization and learning with generic knowledge. International Journal of Computer Vision (IJCV), 100(3):275–293, 2012.
  • [8] A. Diba, V. Sharma, A. M. Pazandeh, H. Pirsiavash, and L. Van Gool. Weakly supervised cascaded convolutional networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), page 9, 2017.
  • [9] T. G. Dietterich, R. H. Lathrop, and T. Lozano-Pérez. Solving the multiple instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1–2):31–71, 1997.
  • [10] T. Durand, T. Mordan, N. Thome, and M. Cord. WILDCAT: Weakly Supervised Learning of Deep ConvNets for Image Classification, Pointwise Localization and Segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [11] M. Everingham, S. A. Eslami, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes challenge: A retrospective. International Journal of Computer Vision (IJCV), 111(1):98–136, 2015.
  • [12] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. International Journal of Computer Vision (IJCV), 88(2):303–338, 2010.
  • [13] W. Ge, S. Yang, and Y. Yu. Multi-evidence filtering and fusion for multi-label classification, object detection and semantic segmentation based on weakly supervised learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [14] R. Girshick. Fast r-cnn. In IEEE International Conference on Computer Vision (ICCV), pages 1440–1448, 2015.
  • [15] R. Gokberk Cinbis, J. Verbeek, and C. Schmid. Multi-fold mil training for weakly supervised object localization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2409–2416, 2014.
  • [16] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • [17] Z. Jie, Y. Wei, X. Jin, J. Feng, and W. Liu. Deep self-taught learning for weakly supervised object localization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [18] A. Kolesnikov and C. H. Lampert. Seed, expand and constrain: Three principles for weakly-supervised image segmentation. In European Conference on Computer Vision (ECCV), 2016.
  • [19] O. Russakovsky, Y. Lin, K. Yu, and L. Fei-Fei. Object-centric spatial pooling for image classification. In European Conference on Computer Vision (ECCV), pages 1–15, 2012.
  • [20] Y. Shen, R. Ji, S. Zhang, W. Zuo, and Y. Wang. Generative adversarial learning towards fast weakly supervised detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [21] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations (ICLR), 2014.
  • [22] P. Siva, C. Russell, and T. Xiang. In defence of negative mining for annotating weakly labelled data. In European Conference on Computer Vision (ECCV), pages 594–608, 2012.
  • [23] P. Siva and T. Xiang. Weakly supervised object detector learning with model drift detection. In IEEE International Conference on Computer Vision (ICCV), pages 343–350, 2011.
  • [24] H. O. Song, R. Girshick, S. Jegelka, J. Mairal, Z. Harchaoui, and T. Darrell. On learning to localize objects with minimal supervision. In International Conference on Machine Learning (ICML), pages 1611–1619, 2014.
  • [25] H. O. Song, Y. J. Lee, S. Jegelka, and T. Darrell. Weakly-supervised discovery of visual pattern configurations. In Advances in Neural Information Processing Systems (NIPS), pages 1637–1645, 2014.
  • [26] P. Tang, X. Wang, X. Bai, and W. Liu. Multiple instance detection network with online instance classifier refinement. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [27] P. Tang, X. Wang, A. Wang, Y. Yan, W. Liu, J. Huang, and A. Yuille. Weakly supervised region proposal network and object detection. In European Conference on Computer Vision (ECCV), 2018.
  • [28] J. R. R. Uijlings, K. E. A. V. De Sande, T. Gevers, and A. W. M. Smeulders. Selective search for object recognition. International Journal of Computer Vision (IJCV), 104(2):154–171, 2013.
  • [29] F. Wan, P. Wei, J. Jiao, Z. Han, and Q. Ye. Min-entropy latent model for weakly supervised object detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1297–1306, 2018.
  • [30] Y. Wei, J. Feng, X. Liang, M.-M. Cheng, Y. Zhao, and S. Yan. Object region mining with adversarial erasing: A simple classification to semantic segmentation approach. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [31] Y. Wei, Z. Shen, B. Cheng, H. Shi, J. Xiong, J. Feng, and T. Huang. Ts2c: tight box mining with surrounding segmentation context for weakly supervised object detection. In European Conference on Computer Vision (ECCV), 2018.
  • [32] Y. Wu and K. He. Group normalization. In European Conference on Computer Vision (ECCV), 2018.
  • [33] X. Zhang, J. Feng, H. Xiong, and Q. Tian. Zigzag learning for weakly supervised object detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [34] Z.-H. Zhou. A brief introduction to weakly supervised learning. National Science Review, 5(1):44–53, 2018.