To be Critical: Self-Calibrated Weakly Supervised Learning for Salient Object Detection

09/04/2021 ∙ by Yongri Piao, et al. ∙ Dalian University of Technology

Weakly-supervised salient object detection (WSOD) aims to develop saliency models using image-level annotations. Despite the success of previous works, explorations of an effective training strategy for the saliency network and of accurate matches between image-level annotations and salient objects are still inadequate. In this work, 1) we propose a self-calibrated training strategy by explicitly establishing a mutual calibration loop between pseudo labels and network predictions, liberating the saliency network from error-prone propagation caused by pseudo labels; 2) we prove that even a much smaller dataset (merely 1.8% of ImageNet) with well-matched annotations enables models to achieve better performance as well as generalizability. This sheds new light on the development of WSOD and encourages more contributions to the community. Comprehensive experiments demonstrate that our method outperforms all the existing WSOD methods by adopting the self-calibrated strategy only, and steady improvements are further achieved by training on the proposed dataset. Additionally, our method achieves 94.7% of the performance of state-of-the-art fully supervised methods on average. What is more, fully supervised models adopting our predicted results as "ground truths" achieve successful results (95.6% for BASNet and 97.3% for ITSD on F-measure), while costing only a small fraction of the time needed for pixel-level annotation.


I Introduction

Salient object detection (SOD) aims to segment objects in an image that visually attract human attention most. It plays an important role in many computer vision and robotic vision tasks 

[5], such as image segmentation [26] and visual tracking [17]. Recently, deep learning based methods [28, 37, 16, 11, 48, 6, 9, 8] have proved their superiority and achieved remarkable progress. The success of these methods, however, relies heavily on a large number of highly accurate pixel-level annotations, which are time-consuming and labor-intensive to collect. A trade-off between testing accuracy and training annotation cost has long existed in the SOD task.

To alleviate this predicament, several attempts have been made to explore different weakly supervised formats, such as noisy label [29, 34], scribble [47, 44] and image-level annotation (i.e., classification label). Image-level annotation based WSOD methods usually adopt a two-stage scheme, which leverages a classification network to generate pseudo labels and then trains a saliency network on these labels. In this paper, we focus on this most challenging problem of developing WSOD by only using image-level annotation.

Fig. 1: The visual saliency predictions during the training process of different models, in which SC represents our proposed self-calibrated training strategy. The data column shows the image, ground truth and pseudo label; note that the ground truth is for exhibition only and is not used in our framework.

Some pioneering works [36, 24, 45] pursue accurate pseudo labels to train a saliency network and achieve good performance. However, given that pseudo labels are still a far cry from the ground truths, the errors left unaddressed in the pseudo labels can propagate to the generated predictions. This is consistent with the fact that, as the number of epochs increases and the parameters of the model are updated, the prediction curve goes from underfitting to optimal to overfitting. Interestingly, we observe that relatively good results containing global representations of saliency can be predicted early in the training process (e.g., epoch 5), while the predictions are more prone to error later in training (e.g., epoch 20), as shown in the first two rows of Figure 1. This inspires us to go one step further and explore how this global representation can be preserved and evolved as the model is properly trained.

Moreover, previous works adopt existing large-scale datasets, e.g., ImageNet [10] and COCO [27], to perform WSOD. However, an observable fact should not be ignored: there is an inherent inconsistency between the image classification and SOD tasks. For example, many classification labels do not match the salient objects in either single-object or multi-object cases in ImageNet, as illustrated in Figure 2. Such cross-domain inconsistency caused by these mismatched samples impairs the generalizability of models and prevents WSOD methods from achieving optimal results.

In this work, our core insight is that we can design a self-calibrated training strategy and exploit saliency-based image-level annotations to address the aforementioned challenges. To be specific, we 1) aim to calibrate our network with progressively updated labels to curb the spread of errors from low-quality pseudo labels during the training process; 2) develop reliable matches in which image-level annotations correctly correspond to salient objects. The source code will be released upon publication. Concretely, our contributions are as follows:

Fig. 2: Cross-domain inconsistency between the ImageNet dataset and salient object detection. (a) and (b) represent single-object and multi-object cases, respectively.
  • We propose a self-calibrated training strategy to prevent the network from propagating the negative influence of error-prone pseudo labels. A mutual calibration loop is established between pseudo labels and network predictions to promote each other.

  • We open up a fresh perspective: even a much smaller dataset (merely 1.8% of ImageNet) with well-matched image-level annotations allows WSOD to achieve better performance. This encourages more existing data to be correctly annotated and further paves the way for a booming future of WSOD.

  • Our method outperforms existing WSOD methods on all metrics over five benchmark datasets, and meanwhile achieves on average 94.7% of the performance of state-of-the-art fully supervised methods. We also demonstrate that our method retains its competitive edge on most metrics even without our proposed dataset.

  • We extend the proposed method to other fully supervised SOD methods. Our offered pseudo labels enable these methods to achieve comparatively high accuracy (95.6% for BASNet [32] and 97.3% for ITSD [52] on F-measure) while being free of pixel-level annotations, costing only a small fraction of the labeling time for pixel-level annotation.

Ii Related Work

Ii-a Salient Object Detection

Early SOD methods mainly focus on detecting salient objects by utilizing handcrafted features and various priors, such as the center prior [20], the boundary prior [43] and others [53, 21]. Recently, deep learning based methods have demonstrated their advantages and achieved remarkable improvements. Plenty of promising works [18, 40, 35, 49, 39] have been proposed, presenting various effective architectures. Among them, Hou et al. [18] present short connections to integrate low-level and high-level features and predict more detailed saliency maps. Wu et al. [40] propose a novel cascaded partial decoder framework and utilize a generated, relatively precise attention map to refine high-level features. In [35, 49], researchers propose to explore the boundaries of salient objects to produce more detailed predictions. Although these methods have achieved appealing performance, vast numbers of high-quality pixel-level annotations are needed to train their models, which is time-consuming and laborious.

Fig. 3: Overall framework of our proposed method. In the first stage, classification labels are used to supervise classification network to generate CAMs and further produce pseudo labels. In the second stage, we train a saliency network with the above pseudo labels and propose a self-calibrated strategy to correct labels and predictions progressively.

Ii-B Weakly Supervised Salient Object Detection

To achieve a trade-off between labeling efficiency and model performance, researchers aim to perform salient object detection with low-cost annotations. To this end, WSOD has been presented and achieves appealing performance with image-level annotations only.

Wang et al. [36] design a foreground inference network (FIN) to predict saliency maps from image-level annotations, and introduce global smooth pooling (GSP) to combine the advantages of global average pooling (GAP) and global max pooling (GMP), which explicitly computes the activation of salient objects. In [24], Li et al. also perform WSOD based on image-level annotations; they adopt a recurrent self-training strategy and propose a conditional random field based graphical model to cleanse the noisy pixel-wise annotations by enhancing spatial coherence as well as salient object localization. Based on the traditional method MB+ [46], more accurate saliency maps are generated in less than one second per image. Zeng et al. [45] intelligently utilize multiple annotations (i.e., classification and caption annotations) and design a multi-source weak supervision framework to integrate information from the various annotations. Benefiting from multiple annotations and an interactive training strategy, even a really simple saliency network can achieve appealing performance. All the above methods aim to train a classification network (on an existing large-scale multi-object dataset, i.e., ImageNet [10] or Microsoft COCO [27]) to generate class activation maps (CAMs) [51], and then perform different refinement methods to generate pseudo labels. Supervised by these pseudo labels directly, a saliency network is trained to predict the final saliency maps.

Different from the aforementioned works, we argue that: 1) Developing an effective training strategy encourages more accurate predictions even under the supervision of inaccurate pseudo labels which would mislead the networks. 2) Establishing accurate matches between classification labels and salient objects could facilitate the further development of WSOD.

Iii The Proposed Method

In this section, we describe the details of our two-stage framework. As illustrated in Figure 3, in the first training stage we train a normal classification network on the proposed saliency-based dataset to generate more accurate pseudo labels. We then develop a saliency network using these pseudo labels in the second stage. A self-calibrated training strategy is proposed in this stage to immunize the network against inaccurate pseudo labels and encourage more accurate predictions.

Iii-a From Image-level to Pixel-level

Class activation maps (CAMs) [51] localize the most discriminative regions in an image using only a normal classification network and build a preliminary bridge from image-level annotations to pixel-level segmentation tasks. In this paper, we adopt CAMs, following the same setting as [3], to generate pixel-level pseudo labels in the first training stage. To better convey our proposed approach, we briefly describe the generation of CAMs below.

For a classification network, we discard all the fully connected layers and apply an extra global average pooling (GAP) layer as well as a convolution layer, as previous works do. In the training phase, we take images in the classification dataset as input and compute their classification scores as follows:

$s = W \cdot \mathrm{GAP}(F) + b,$   (1)

where $F$ represents the features from the last convolution block, $\mathrm{GAP}(\cdot)$ denotes the global average pooling operation, and $W$ as well as $b$ are the learnable parameters of the convolution layer. In the inference phase, we compute the CAMs of images in the DUTS-Train dataset as follows:

$M_c = \mathrm{Norm}(\mathrm{ReLU}(W_c * F)), \quad c = 1, \dots, C,$   (2)

where $\mathrm{ReLU}(\cdot)$ and $\mathrm{Norm}(\cdot)$ denote the ReLU activation function and the normalization function, respectively. $W$ and $b$ are the shared parameters learned in the training phase, $s_c$ represents the classification score for category $c$, and $C$ represents the total number of categories. In this phase, a multi-scale inference strategy is adopted, which rescales the original image into four sizes and computes the average of the CAMs as the final output.
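As a concrete illustration, the GAP-plus-convolution head and the CAM extraction described above can be sketched in PyTorch as follows. The channel and class counts are placeholder assumptions, and max-normalization is assumed for the Norm function:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CAMHead(nn.Module):
    """Classification head for CAM generation: GAP followed by a 1x1
    convolution that plays the role of the fully connected classifier.
    Channel/class counts are illustrative, not the paper's exact values."""

    def __init__(self, in_channels=512, num_classes=44):
        super().__init__()
        self.classifier = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, feats):
        # Training path (Eq. 1): scores from globally pooled features
        pooled = F.adaptive_avg_pool2d(feats, 1)       # B x C x 1 x 1
        return self.classifier(pooled).flatten(1)      # B x num_classes

    def cams(self, feats):
        # Inference path (Eq. 2): per-class maps, ReLU then max-normalized
        maps = F.relu(self.classifier(feats))          # B x num_classes x H x W
        maps = maps / (maps.amax(dim=(2, 3), keepdim=True) + 1e-5)
        return maps
```

In the multi-scale inference described above, `cams` would be run on several rescaled copies of the image and the resulting maps averaged.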

As Ahn et al. [3] have pointed out, CAMs mainly concentrate on the most discriminative regions and are too coarse to serve as pseudo labels, so various refinements have been conducted to generate pseudo labels. Different from [36, 45], which use the clustering algorithm SLIC [2], a plug-and-play module PAMR [4] is adopted in our method. It performs refinement using the low-level color information of RGB images and can be inserted into our framework flexibly and efficiently. Following the settings of [36, 45], we also adopt CRF [23] for further refinement. Note that the CRF is only used to generate pseudo labels in our method.

Iii-B Self-calibrated Training Strategy

In the second training stage, a saliency network is trained with the pseudo labels generated in the first training stage. As mentioned above, the relatively good results containing global representations of saliency are gradually degraded as the training process continues. A straightforward way to tackle this dilemma is to set up a validation set and pick the best result during the training process. However, we argue that this may lead to sub-optimal results because: 1) although good saliency representations are learned at the early training stage, the predictions are coarse and lack detail as the loss function is still converging (as shown in Figure 1); 2) the capability of networks to learn saliency representations is not fully exploited; 3) we believe that WSOD should not use any pixel-level ground truth in the training process, even as a validation set. Following this main idea, we propose to establish a mutual calibration loop during the training process, in which error-prone pseudo labels are recursively updated and in turn calibrate the network for better predictions.

0:  Input: the images from the DUTS-Train dataset, I; the predictions of the saliency network, P; the original pseudo labels generated in the first stage, Y.
0:  Output: the updated pseudo labels, Y.
1:  Perform the second training stage; the maximum epoch is T.
2:  for t = 1 to T do
3:     Refined predictions: R = PAMR(I, P);
4:     if R_ij > τ then
5:        B_ij = 1
6:     else
7:        B_ij = 0
8:     end if
9:     Weighting factor: α;
10:    Update pseudo labels: Y ← (1 − α)Y + αB;
11:  end for
Algorithm 1 Self-calibrated training strategy
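The update loop above can be sketched as a single step in NumPy. Here `refine` is a stand-in for PAMR, and the threshold `tau` and weighting factor `alpha` are illustrative values rather than the paper's exact settings:

```python
import numpy as np


def update_pseudo_labels(image, pred, pseudo, refine, alpha=0.6, tau=0.5):
    """One self-calibration step: refine the network prediction,
    binarize it, and blend it into the current pseudo label.
    `refine` stands in for PAMR; `alpha` and `tau` are assumed values."""
    refined = refine(image, pred)                    # low-level appearance refinement
    binary = (refined > tau).astype(np.float32)      # binarization on refined predictions
    # Moving-average style update: predictions gradually take over the labels
    return (1.0 - alpha) * pseudo + alpha * binary
```

Run once per training batch, this implements the mutual calibration loop: the updated labels supervise the next forward pass, whose predictions refine the labels again.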

Insight: As discussed in Section I, under the supervision of noisy pseudo labels, the saliency network goes from optimal to overfitting. On the one hand, in our weakly supervised setting, this "overfitting" manifests itself as the network being affected by the noisy pseudo labels and learning the inaccurate noise information in them, which heavily restricts the performance of WSOD. It is also worth mentioning that this is fundamentally different from "overfitting" in supervised learning; the latter means that the network learns the biased information in a less comprehensive training set. On the other hand, we attribute the existence of an optimal point before overfitting to two reasons: 1) Although many pseudo labels are noisy and inaccurate, the pseudo labels as a whole still describe general saliency cues, which provide roughly correct guidance for the saliency network. 2) Before the loss converges, the saliency network is prone to learn the regular and generalized saliency cues rather than the irregular and noisy information in pseudo labels. Such robustness is also discussed in [15]. Motivated by the above analyses, we propose a self-calibrated training strategy to effectively exploit this robustness and tackle the negative overfitting.

To be specific, supervised by inaccurate pseudo labels Y, we take the predictions P of the saliency network as saliency seeds. As illustrated in Figure 3, coarse but more accurate seeds are predicted during the first few epochs regardless of the supervision of error-prone pseudo labels. We take these seeds as correction terms to calibrate and update the original pseudo labels Y, while performing refinement again with PAMR. The detailed procedure is presented in Algorithm 1, where a threshold is set for the binarization operation on the refined predictions R. We conduct the self-calibrated strategy throughout the training process, that is, it is performed on each training batch. The loss function for this training stage can be described as:

$\mathcal{L} = -\sum_{i}\big[\tilde{y}_i \log p_i + (1-\tilde{y}_i)\log(1-p_i)\big], \quad \tilde{y}_i = (1-\alpha)y_i + \alpha r_i,$   (3)

where $\alpha$ is the weighting factor illustrated in Algorithm 1. The intuition is that as the training process goes on, the saliency prediction becomes more accurate and a larger weight should be given to it. $y_i$, $p_i$ and $r_i$ represent the elements of the pseudo labels $Y$, the predictions $P$ and the refined predictions $R$, respectively.
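A minimal NumPy sketch of this supervision follows; the per-pixel binary cross-entropy form over calibrated labels and the fixed `alpha` are our assumptions for illustration:

```python
import numpy as np


def self_calibrated_bce(pred, pseudo, refined, alpha=0.6, eps=1e-7):
    """Binary cross-entropy against pseudo labels blended with refined
    predictions. The exact loss form and `alpha` value are assumptions;
    `pred`, `pseudo` and `refined` are maps with values in [0, 1]."""
    target = (1.0 - alpha) * pseudo + alpha * refined    # calibrated soft label
    pred = np.clip(pred, eps, 1.0 - eps)                 # numerical stability
    return -np.mean(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))
```

The loss decreases as the prediction approaches the calibrated label, so the refined predictions steadily pull supervision away from the noisy original pseudo labels.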

As is illustrated in the Figure 3, equipped with our proposed self-calibrated training strategy, inaccurate pseudo labels are progressively updated, and in turn supervise the network. This mutual calibration loop finally encourages both accurate pseudo labels and predictions.

Fig. 4: Detailed structure of our saliency network. We adopt a simple encoder-decoder architecture and take prediction as our final result.

Iii-C Saliency Network

As for the saliency network, we adopt a simple encoder-decoder architecture without any auxiliary modules, which usually serves as a baseline for fully-supervised SOD methods [18, 40]. As illustrated in Figure 4, for an image from the DUTS-Train dataset, we take three levels of features $F_i$ from the encoder, generate $f_i$ through two convolution layers, and then adopt a bottom-up strategy to perform feature fusion, which can be denoted as:

$f_i' = \mathrm{Conv}(\mathrm{Cat}(f_i, \mathrm{Up}(f_{i+1}'))), \quad P = \sigma(\mathrm{Conv}(f_1')),$   (4)

where $\sigma$ represents the sigmoid function, $\mathrm{Conv}$ and $\mathrm{Cat}$ denote the convolution and concatenation operations, respectively, and $\mathrm{Up}$ represents upsampling feature maps to the same size.

In the decoder, the number of output channels of all the middle convolution layers are set to 64 for acceleration. Note that our final prediction is predicted in an end-to-end manner in the test phase without any post-processing.
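The decoder described above might look as follows in PyTorch. The encoder channel sizes are assumptions, while the 64-channel middle convolutions follow the text:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Decoder(nn.Module):
    """Bottom-up fusion decoder, sketched with assumed encoder channels.
    All middle convolutions output 64 channels, as stated in the paper."""

    def __init__(self, in_chs=(128, 256, 512), mid=64):
        super().__init__()
        # two convolutions per level reduce encoder features to `mid` channels
        self.reduce = nn.ModuleList(
            nn.Sequential(nn.Conv2d(c, mid, 3, padding=1), nn.ReLU(inplace=True),
                          nn.Conv2d(mid, mid, 3, padding=1), nn.ReLU(inplace=True))
            for c in in_chs)
        self.fuse = nn.ModuleList(
            nn.Conv2d(mid * 2, mid, 3, padding=1) for _ in in_chs[:-1])
        self.head = nn.Conv2d(mid, 1, 3, padding=1)

    def forward(self, feats):
        # feats: three encoder levels, ordered high-resolution to low-resolution
        xs = [r(f) for r, f in zip(self.reduce, feats)]
        x = xs[-1]
        for i in range(len(xs) - 2, -1, -1):
            # upsample the deeper feature, concatenate, and fuse (Eq. 4)
            x = F.interpolate(x, size=xs[i].shape[2:], mode='bilinear',
                              align_corners=False)
            x = self.fuse[i](torch.cat([xs[i], x], dim=1))
        return torch.sigmoid(self.head(x))
```

The sigmoid head yields the saliency map end-to-end, matching the claim that no post-processing is applied at test time.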

Iv Dataset Construction

To explore the advantages of accurate matches between image-level annotations and corresponding salient objects, we establish a saliency-based classification dataset that ensures all classification labels correspond to the salient objects. Following this main idea, we relabel an existing widely-adopted saliency training set, DUTS-Train [36], with well-matched image-level annotations, yielding the DUTS-Cls dataset. Thanks to the accurate matches, it fits WSOD better than existing large-scale classification datasets and facilitates further improvements for WSOD.

Fig. 5: Our introduced DUTS-Cls is a saliency-based dataset with image-level annotations, containing 44 categories and 5959 images, in which all the classification labels correspond to the most salient objects in images.

To be specific, we select and label images in DUTS-Train with image-level annotations, discarding rare categories that contain only a few images. The proposed DUTS-Cls dataset contains 44 categories and 5959 images. As illustrated in Figure 5, it reaches a relative equilibrium in terms of the number of images per category and covers most common categories.

Methods Sup. ECSSD DUTS-Test HKU-IS DUT-OMRON PASCAL-S
WSS [36] I .811 .869 .823 .104 .748 .795 .654 .100 .822 .896 .821 .079 .725 .768 .603 .109 .744 .791 .715 .139
ASMO [24] I .802 .853 .797 .110 .697 .772 .614 .116 - - - - .752 .776 .622 .101 .717 .772 .693 .149
MSW [45] I&C .827 .884 .840 .096 .759 .814 .684 .091 .818 .895 .814 .084 .756 .763 .609 .109 .768 .790 .713 .133
Ours- I .836 .887 .838 .083 .770 .830 .689 .079 .836 .907 .822 .064 .743 .807 .643 .085 .778 .818 .742 .111
Ours I .858 .901 .853 .071 .776 .829 .688 .077 .850 .918 .835 .058 .766 .817 .667 .078 .781 .824 .749 .108
TABLE I: Quantitative comparisons of E-measure (), S-measure (), F-measure () and MAE () metrics over five benchmark datasets. The supervision type (Sup.) I indicates using image-level annotations only, and I&C represents developing WSOD on both image-level annotations and caption annotations simultaneously. Num. represents the number of training samples. - means unavailable results, Ours- and Ours represent our method trained on ImageNet and proposed DUTS-Cls dataset, respectively. The best two results are marked in boldface and magenta.
Fig. 6: Visual comparisons of our method with existing WSOD methods as well as six state-of-the-art fully supervised SOD methods (marked with *) in some challenging scenes.

It is worth mentioning that labeling image-level annotations is quite fast, taking less than 1 second per image. Compared to about 3 minutes [30] for labeling a pixel-level ground truth, this is a tiny fraction of the time and labor cost per sample. Annotating the DUTS-Cls dataset (5959 samples) costs only a small fraction of the time needed to annotate the whole DUTS-Train dataset (10553 samples) with pixel-level ground truth. This indicates that exploring WSOD with image-level annotation is quite efficient. Moreover, the DUTS-Cls dataset with well-matched image-level annotations offers a better choice for WSOD than ImageNet, and we genuinely hope it will contribute to the community and encourage more existing data to be correctly annotated at the image level.

V Experiments

V-a Implementation Details

We implement our method with the PyTorch toolbox on a single RTX 2080Ti GPU. The backbone adopted in our method is DenseNet-169 [19], the same as the latest work [45]. During the first training stage, we train a classification network on our proposed DUTS-Cls dataset, using the Adam optimization algorithm [22] with a learning rate of 1e-4. In the second training stage, we only take the RGB images from DUTS-Train as our training set, using the Adam optimizer with a learning rate of 3e-6 for a maximum of 25 epochs. Both training stages share the same batch size, and all training and testing images are resized to the same resolution.

Fig. 7: Comparison of our method with 9 fully supervised methods on the ECSSD dataset. The blue columns represent the performance of each fully supervised method and the orange ones indicate ours. The numbers denote the percentage of each fully supervised method's performance that our method achieves.

Hyperparameter setting. For the weighting factor of the self-calibrated strategy, we conduct hyper-parameter experiments on the ECSSD [42] dataset to pick the optimal value by F-measure [1]. According to the results (F-measure of 0.848 with 0.5, 0.853 with 0.6 and 0.849 with 0.7), we finally set the hyper-parameter to 0.6.

V-B Datasets and Evaluation Metrics

For a fair comparison, we train our model on ImageNet and on our proposed DUTS-Cls dataset, respectively; the results are shown in Table I. We conduct comparisons on the five following widely-adopted test datasets. ECSSD [42]: contains 1000 images covering various scenes. DUT-OMRON [43]: includes 5168 challenging images consisting of single or multiple salient objects with complex contours and backgrounds. PASCAL-S [26]: is collected from the validation set of the PASCAL VOC semantic segmentation dataset [12] and contains 850 challenging images. HKU-IS [25]: includes 4447 images, many of which contain multiple disconnected salient objects. DUTS [36]: is the largest salient object detection benchmark, containing 10553 training samples (DUTS-Train) and 5019 testing samples (DUTS-Test). Most images in DUTS-Test are challenging, with various locations and scales.

To evaluate our method in a comprehensive and reliable way, we adopt four well-accepted metrics, including S-measure [13], E-measure [14], F-measure [1] as well as Mean Absolute Error (MAE).
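For reference, the two simplest of these metrics can be computed as below. The adaptive threshold of twice the mean saliency is a common convention rather than necessarily this paper's exact protocol, and the structure-aware S-measure and E-measure are omitted for brevity:

```python
import numpy as np


def mae(pred, gt):
    """Mean Absolute Error between a saliency map and ground truth in [0, 1]."""
    return float(np.abs(pred - gt).mean())


def f_measure(pred, gt, beta2=0.3):
    """F-measure with the conventional beta^2 = 0.3. The adaptive
    threshold (twice the mean saliency) is a common choice; evaluation
    protocols vary across papers."""
    thresh = min(2.0 * pred.mean(), 1.0)
    binary = pred >= thresh
    tp = np.logical_and(binary, gt > 0.5).sum()
    precision = tp / (binary.sum() + 1e-8)
    recall = tp / ((gt > 0.5).sum() + 1e-8)
    return float((1 + beta2) * precision * recall
                 / (beta2 * precision + recall + 1e-8))
```

Lower MAE and higher F-measure are better, which is why Table I reports MAE with a downward arrow.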

V-C Comparison with State-of-the-arts

We compare our method with all the existing image-level annotation based WSOD methods: WSS [36], ASMO [24] and MSW [45]. To further demonstrate the effectiveness of our weakly supervised methods, we also compare the proposed method with nine state-of-the-art fully supervised methods including DSS [18], RNet [11], DGRL [38], BASNet [32], PFA [50], CPD [40], SCRN [41], ITSD [52] and MINet [31], all of which are trained on pixel-level ground truth and based on DNNs. For a fair comparison, we use the saliency maps provided by authors and perform the same evaluation code for all methods.

Quantitative evaluation. Table I shows the quantitative comparison on four evaluation metrics over five datasets. It can be seen that our method outperforms all the weakly supervised methods on all metrics; notable improvements are achieved on the MAE metric on HKU-IS and DUT-OMRON in particular. Our method also improves the performance on the two challenging datasets DUT-OMRON and PASCAL-S by a large margin, which indicates that our method can explore accurate saliency cues even in complex scenes. Additionally, the proposed saliency-based dataset with well-matched image-level annotations enables our method to achieve better performance while requiring far fewer training samples than the latest work MSW [45]. To assess the effect of our method in a more objective manner, we also train our method on the ImageNet dataset following the previous works. The results of "Ours-" in Table I demonstrate that, thanks to the effective strategy, our method can outperform existing methods on most metrics even without the proposed dataset. Moreover, we also compare our method with nine state-of-the-art fully supervised methods. It can be seen in Figure 7 that our method, even with image-level annotations only and a simple baseline network without any auxiliary modules, achieves 94.7% of the accuracy of fully supervised methods on average.

Qualitative evaluation. In Figure 6, we show qualitative comparisons of our method with the three existing WSOD methods as well as six state-of-the-art fully supervised methods. It can be seen that our method can discriminate salient objects in various challenging scenes (such as the small-object and complex-background cases) and achieve more complete and accurate predictions. Moreover, compared with the fully supervised methods, our method also predicts comparable and even better results in some cases, such as the complete house and log. We would, however, point out that our results still need improvement in terms of the boundaries of the salient objects.

Dataset Strategy ECSSD DUTS-Test HKU-IS DUT-OMRON PASCAL-S
ImageNet - 0.776 0.121 0.642 0.094 0.773 0.090 0.568 0.111 0.694 0.140
DUTS-Cls - 0.836 0.096 0.675 0.085 0.822 0.075 0.648 0.083 0.735 0.126
ImageNet + SC 0.838 0.083 0.689 0.079 0.822 0.064 0.643 0.085 0.742 0.111
DUTS-Cls + SC 0.853 0.071 0.688 0.077 0.835 0.058 0.667 0.078 0.749 0.108
TABLE II: Quantitative results of the ablation studies (each entry lists F-measure and MAE). Dataset represents the training set used in the first training stage. Strategy denotes the training strategy used in the second stage; - indicates the baseline model without any training strategy and + SC represents adopting our proposed self-calibrated strategy.

V-D Ablation Studies

Fig. 8: Visual analysis of the effectiveness of our proposed self-calibrated strategy during the training process, noting that the ground truth is just for exhibition and not used in our framework.

Effect of the self-calibrated strategy. We conduct experiments on both the ImageNet and DUTS-Cls settings in Table II. It can be seen that the proposed self-calibrated strategy not only greatly enhances the performance of our method in the ImageNet setting, but also achieves clear improvements in the DUTS-Cls setting, especially on the MAE metric. Besides, the effectiveness of the proposed self-calibrated strategy is also demonstrated by the visual results in Figure 8: the proposed strategy keeps and enhances the globally good representations during the training process and predicts accurate saliency maps even when supervised by error-prone pseudo labels. Moreover, for a comprehensive evaluation, 1) we change the pseudo labels by using two traditional SOD methods, BSCA [33] and MR [43], and then train our model with and without the proposed strategy, respectively; the results are shown in the first four rows of Table III. 2) We further apply our strategy to the latest work MSW [45] by simply adding it during training, shown in the last two rows of Table III. These results strongly suggest that the self-calibrated strategy not only works well in our method, but is also effective for other pseudo labels and other works.

Method Strategy E-measure S-measure F-measure MAE
BSCA [33] - 0.846 0.884 0.814 0.084
+ SC +0.007 +0.009 +0.018 -0.008
MR [43] - 0.839 0.884 0.823 0.085
+ SC +0.014 +0.010 +0.016 -0.009
MSW [45] - 0.827 0.884 0.840 0.096
+ SC +0.017 +0.012 +0.014 -0.014
TABLE III: The effectiveness of our proposed self-calibrated strategy on ECSSD dataset. + SC indicates simply applying our self-calibrated strategy during the training process.
Fig. 9: Visual analysis of the effect of the DUTS-Cls dataset. We show the CAMs generated by training on ImageNet and on our DUTS-Cls dataset, respectively. Heatmaps are adopted for better visualization.

Effect of the DUTS-Cls dataset. We introduce a saliency-based dataset with well-matched image-level annotations to offer a better choice for WSOD. The first two rows of Table II demonstrate that the DUTS-Cls dataset enables the baseline model to achieve remarkable improvements compared to the ImageNet dataset. As illustrated in the last two rows of Table II, it also proves its superiority through a steady improvement on most metrics, even when good performance is already achieved by adopting the self-calibrated strategy. This is consistent with our argument that cross-domain inconsistency does impede the performance of WSOD, and that a saliency-based dataset can settle this matter better. Additionally, we visualize the CAMs trained on ImageNet and on DUTS-Cls in Figure 9; the CAMs trained on the well-matched DUTS-Cls dataset have a higher activation level within the salient objects.

Last but not least, to further prove the effectiveness of the proposed DUTS-Cls dataset objectively, we also train the latest work MSW [45] on the DUTS-Cls dataset. As shown in Figure 10, by simply replacing ImageNet with DUTS-Cls, considerable improvements are achieved in fewer training iterations. It is worth mentioning that the DUTS-Cls dataset amounts to merely 1.8% of ImageNet in terms of sample size. This strongly demonstrates the effectiveness and generalizability of the well-matched DUTS-Cls dataset for WSOD.

Fig. 10: Experiments on the effect of our proposed DUTS-Cls dataset. We conduct experiments on the classification branch of the latest work MSW [45] for a fair comparison, the results are tested on the ECSSD dataset.

V-E Effectiveness on Unseen Category

The number of categories in the classification dataset inevitably influences the performance of WSOD. Unlike ImageNet, which includes 200 various categories, our proposed DUTS-Cls dataset only contains 44 categories. It is therefore necessary to evaluate the effectiveness of our method as well as the DUTS-Cls dataset on unseen categories.

To this end, we choose THUR [7] as the benchmark dataset for this experiment. THUR is a high-quality saliency dataset consisting of five categories: butterfly, coffee mug, dog, giraffe and airplane. The category airplane is unseen to our DUTS-Cls dataset but seen to ImageNet, while the category giraffe is unseen to both ImageNet and the DUTS-Cls dataset. As illustrated in Table IV, the DUTS-Cls dataset encourages better predictions on the whole THUR dataset, and also outperforms ImageNet by a large margin on both the airplane and giraffe categories, which proves the generalizability and effectiveness of the proposed DUTS-Cls dataset. Besides, Table IV also demonstrates the superiority of our method on unseen categories. Moreover, beyond the airplane and giraffe categories, our method also performs well on various other unseen categories, such as the cases shown in Figure 6. This further supports the effectiveness of our method on unseen categories.

Method      Dataset    THUR            THUR-plane      THUR-giraffe
MSW [45]    ImageNet   0.624 / 0.104   0.716 / 0.079   0.547 / 0.088
Ours        ImageNet   0.676 / 0.089   0.788 / 0.055   0.550 / 0.088
Ours        DUTS-Cls   0.689 / 0.082   0.809 / 0.050   0.588 / 0.073
TABLE IV: Quantitative results on unseen categories. Dataset denotes the training set used in the first training stage; THUR-plane and THUR-giraffe denote the airplane and giraffe samples in the THUR dataset, respectively.

V-F Applications

We extend our method to fully supervised methods by replacing manually labeled ground truth with our generated predictions on the training set. Specifically, we infer predictions with our trained model on the DUTS-Train dataset and adopt CRF for further refinement. As can be seen in Figure 11, when trained with our offered predictions as supervision, BASNet [32] and ITSD [52] achieve 95.6% and 97.3% of their fully supervised accuracy on F-measure without any pixel-level annotations. Additionally, our method itself achieves 94.7% of its fully supervised accuracy on F-measure. These experiments indicate that our method can serve as an alternative source of pixel-level supervision for fully supervised SOD methods while maintaining comparatively high accuracy, at only a small fraction of the pixel-level annotation time and labor.
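As a sketch of how soft predictions can be packaged as supervision for a fully supervised model, the snippet below converts a saliency map into a pseudo ground-truth mask. The confidence thresholds and the 255 "ignore" band are illustrative assumptions, not the paper's exact pipeline (which refines predictions with a CRF and uses them directly as ground truth):

```python
import numpy as np

def make_pseudo_label(pred, lo=0.3, hi=0.7):
    """Turn a soft saliency prediction in [0, 1] into a pseudo mask.

    Pixels above `hi` become foreground (1), pixels below `lo` become
    background (0); the ambiguous band in between is marked 255 so a
    training loss can skip it. The thresholds are illustrative defaults,
    not values from the paper.
    """
    label = np.full(pred.shape, 255, dtype=np.uint8)  # ignore by default
    label[pred >= hi] = 1
    label[pred <= lo] = 0
    return label
```

A downstream trainer would then treat 255-valued pixels as unlabeled, a common convention in weakly supervised segmentation pipelines.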

Fig. 11: Comparisons of different methods trained on our offered labels (right) and on ground truth (left), evaluated on the ECSSD dataset. The number on each data pair denotes the corresponding percentage.

VI Conclusion

In this paper, we propose a novel self-calibrated training strategy and introduce a saliency-based dataset with well-matched image-level annotations for WSOD. The proposed strategy establishes a mutual calibration loop between pseudo labels and network predictions, which effectively prevents the network from propagating the negative influence of error-prone pseudo labels. We also argue that a cross-domain inconsistency exists between SOD and existing large-scale classification datasets and impedes the development of WSOD. To offer a better choice for WSOD and encourage more contributions to the community, we introduce the saliency-based classification dataset DUTS-Cls to address this issue. Extensive experiments demonstrate the superiority of our method and the effectiveness of our two ideas. In addition, our method can serve as an alternative source of pixel-level labels for fully supervised SOD methods while maintaining comparatively high performance, costing only a small fraction of the time required for pixel-level annotation.

References

  • [1] R. Achanta, S. Hemami, F. Estrada, and S. Susstrunk (2009) Frequency-tuned salient region detection. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1597–1604. Cited by: §V-A, §V-B.
  • [2] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk (2012) SLIC superpixels compared to state-of-the-art superpixel methods. IEEE transactions on pattern analysis and machine intelligence 34 (11), pp. 2274–2282. Cited by: §III-A.
  • [3] J. Ahn, S. Cho, and S. Kwak (2019) Weakly supervised learning of instance segmentation with inter-pixel relations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2209–2218. Cited by: §III-A, §III-A.
  • [4] N. Araslanov and S. Roth (2020) Single-stage semantic segmentation from image labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4253–4262. Cited by: §III-A.
  • [5] A. Borji, M. Cheng, H. Jiang, and J. Li (2015) Salient object detection: a benchmark. IEEE transactions on image processing 24 (12), pp. 5706–5722. Cited by: §I.
  • [6] Z. Chen, R. Cong, Q. Xu, and Q. Huang (2020) DPANet: depth potentiality-aware gated attention network for RGB-D salient object detection. IEEE Transactions on Image Processing. Cited by: §I.
  • [7] M. Cheng, N. J. Mitra, X. Huang, and S. Hu (2014) Salientshape: group saliency in image collections. The visual computer 30 (4), pp. 443–453. Cited by: §V-E.
  • [8] R. Cong, J. Lei, H. Fu, Q. Huang, X. Cao, and C. Hou (2018) Co-saliency detection for RGBD images based on multi-constraint feature matching and cross label propagation. IEEE Transactions on Image Processing 27 (2), pp. 568–579. Cited by: §I.
  • [9] R. Cong, J. Lei, H. Fu, F. Porikli, Q. Huang, and C. Hou (2019) Video saliency detection via sparsity-based reconstruction and propagation. IEEE Transactions on Image Processing 28 (10), pp. 4819–4931. Cited by: §I.
  • [10] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Cited by: §I, §II-B.
  • [11] Z. Deng, X. Hu, L. Zhu, X. Xu, J. Qin, G. Han, and P. Heng (2018) R3net: recurrent residual refinement network for saliency detection. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pp. 684–690. Cited by: §I, §V-C.
  • [12] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman (2010) The pascal visual object classes (voc) challenge. International journal of computer vision 88 (2), pp. 303–338. Cited by: §V-B.
  • [13] D. Fan, M. Cheng, Y. Liu, T. Li, and A. Borji (2017) Structure-measure: a new way to evaluate foreground maps. In Proceedings of the IEEE international conference on computer vision, pp. 4548–4557. Cited by: §V-B.
  • [14] D. Fan, C. Gong, Y. Cao, B. Ren, M. Cheng, and A. Borji (2018) Enhanced-alignment measure for binary foreground map evaluation. arXiv preprint arXiv:1805.10421. Cited by: §V-B.
  • [15] J. Fan, Z. Zhang, and T. Tan (2020) Employing multi-estimations for weakly-supervised semantic segmentation. In 2020 IEEE/CVF European Conference on Computer Vision, ECCV 2020, Vol. 12362, pp. 332–348. Cited by: §III-B.
  • [16] M. Feng, H. Lu, and E. Ding (2019) Attentive feedback network for boundary-aware salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1623–1632. Cited by: §I.
  • [17] S. Hong, T. You, S. Kwak, and B. Han (2015) Online tracking by learning discriminative saliency map with convolutional neural network. In International Conference on Machine Learning, pp. 597–606. Cited by: §I.
  • [18] Q. Hou, M. Cheng, X. Hu, A. Borji, Z. Tu, and P. H. Torr (2017) Deeply supervised salient object detection with short connections. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3203–3212. Cited by: §II-A, §III-C, §V-C.
  • [19] G. Huang, Z. Liu, and K. Q. Weinberger (2017) Densely connected convolutional networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2261–2269. Cited by: §V-A.
  • [20] Z. Jiang and L. S. Davis (2013) Submodular salient region detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2043–2050. Cited by: §II-A.
  • [21] J. Kim, D. Han, Y. Tai, and J. Kim (2014) Salient region detection via high-dimensional color transform. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 883–890. Cited by: §II-A.
  • [22] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §V-A.
  • [23] P. Krähenbühl and V. Koltun (2011) Efficient inference in fully connected crfs with gaussian edge potentials. In Advances in neural information processing systems, pp. 109–117. Cited by: §III-A.
  • [24] G. Li, Y. Xie, and L. Lin (2018) Weakly supervised salient object detection using image labels. arXiv preprint arXiv:1803.06503. Cited by: §I, §II-B, TABLE I, §V-C.
  • [25] G. Li and Y. Yu (2015) Visual saliency based on multiscale deep features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5455–5463. Cited by: §V-B.
  • [26] Y. Li, X. Hou, C. Koch, J. M. Rehg, and A. L. Yuille (2014) The secrets of salient object segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 280–287. Cited by: §I, §V-B.
  • [27] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft coco: common objects in context. In European conference on computer vision, pp. 740–755. Cited by: §I, §II-B.
  • [28] N. Liu and J. Han (2016) Dhsnet: deep hierarchical saliency network for salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 678–686. Cited by: §I.
  • [29] T. Nguyen, M. Dax, C. K. Mummadi, N. Ngo, T. H. P. Nguyen, Z. Lou, and T. Brox (2019) DeepUSPS: deep robust unsupervised saliency prediction via self-supervision. In Advances in Neural Information Processing Systems 32, pp. 204–214. Cited by: §I.
  • [30] Y. Niu, Y. Geng, X. Li, and F. Liu (2012) Leveraging stereopsis for saliency analysis. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 454–461. Cited by: §IV.
  • [31] Y. Pang, X. Zhao, L. Zhang, and H. Lu (2020) Multi-scale interactive network for salient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9413–9422. Cited by: §V-C.
  • [32] X. Qin, Z. Zhang, C. Huang, C. Gao, M. Dehghan, and M. Jagersand (2019) Basnet: boundary-aware salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7479–7489. Cited by: 4th item, §V-C, §V-F.
  • [33] Y. Qin, H. Lu, Y. Xu, and H. Wang (2015) Saliency detection via cellular automata. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 110–119. Cited by: §V-D, TABLE III.
  • [34] P. Siva, C. Russell, T. Xiang, and L. Agapito (2013) Looking beyond the image: unsupervised learning for object saliency and detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3238–3245. Cited by: §I.
  • [35] J. Su, J. Li, Y. Zhang, C. Xia, and Y. Tian (2019) Selectivity or invariance: boundary-aware salient object detection. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, pp. 3798–3807. Cited by: §II-A.
  • [36] L. Wang, H. Lu, Y. Wang, M. Feng, D. Wang, B. Yin, and X. Ruan (2017) Learning to detect salient objects with image-level supervision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 136–145. Cited by: §I, §II-B, §III-A, TABLE I, §IV, §V-B, §V-C.
  • [37] L. Wang, L. Wang, H. Lu, P. Zhang, and X. Ruan (2016) Saliency detection with recurrent fully convolutional networks. In European conference on computer vision, pp. 825–841. Cited by: §I.
  • [38] T. Wang, L. Zhang, S. Wang, H. Lu, G. Yang, X. Ruan, and A. Borji (2018) Detect globally, refine locally: a novel approach to saliency detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3127–3135. Cited by: §V-C.
  • [39] J. Wei, S. Wang, and Q. Huang (2020) F3Net: fusion, feedback and focus for salient object detection. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, pp. 12321–12328. Cited by: §II-A.
  • [40] Z. Wu, L. Su, and Q. Huang (2019) Cascaded partial decoder for fast and accurate salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3907–3916. Cited by: §II-A, §III-C, §V-C.
  • [41] Z. Wu, L. Su, and Q. Huang (2019) Stacked cross refinement network for edge-aware salient object detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 7264–7273. Cited by: §V-C.
  • [42] Q. Yan, L. Xu, J. Shi, and J. Jia (2013) Hierarchical saliency detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1155–1162. Cited by: §V-A, §V-B.
  • [43] C. Yang, L. Zhang, H. Lu, X. Ruan, and M. Yang (2013) Saliency detection via graph-based manifold ranking. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3166–3173. Cited by: §II-A, §V-B, §V-D, TABLE III.
  • [44] S. Yu, B. Zhang, J. Xiao, and E. Lim (2020) Structure-consistent weakly supervised salient object detection with local saliency coherence. ArXiv abs/2012.04404. Cited by: §I.
  • [45] Y. Zeng, Y. Zhuge, H. Lu, L. Zhang, M. Qian, and Y. Yu (2019) Multi-source weak supervision for saliency detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6074–6083. Cited by: §I, §II-B, §III-A, TABLE I, Fig. 10, §V-A, §V-C, §V-C, §V-D, §V-D, TABLE III, TABLE IV.
  • [46] J. Zhang, S. Sclaroff, Z. Lin, X. Shen, B. Price, and R. Mech (2015) Minimum barrier salient object detection at 80 fps. In Proceedings of the IEEE international conference on computer vision, pp. 1404–1412. Cited by: §II-B.
  • [47] J. Zhang, X. Yu, A. Li, P. Song, B. Liu, and Y. Dai (2020) Weakly-supervised salient object detection via scribble annotations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12546–12555. Cited by: §I.
  • [48] Q. Zhang, R. Cong, C. Li, M. Cheng, Y. Fang, X. Cao, Y. Zhao, and S. Kwong (2021) Dense attention fluid network for salient object detection in optical remote sensing images. IEEE Transactions on Image Processing 30, pp. 1305–1317. Cited by: §I.
  • [49] J. Zhao, J. Liu, D. Fan, Y. Cao, J. Yang, and M. Cheng (2019) EGNet: edge guidance network for salient object detection. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, pp. 8778–8787. Cited by: §II-A.
  • [50] T. Zhao and X. Wu (2019) Pyramid feature attention network for saliency detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3085–3094. Cited by: §V-C.
  • [51] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba (2016) Learning deep features for discriminative localization. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2921–2929. Cited by: §II-B, §III-A.
  • [52] H. Zhou, X. Xie, J. Lai, Z. Chen, and L. Yang (2020) Interactive two-stream decoder for accurate and fast saliency detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9141–9150. Cited by: 4th item, §V-C, §V-F.
  • [53] W. Zhu, S. Liang, Y. Wei, and J. Sun (2014) Saliency optimization from robust background detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2814–2821. Cited by: §II-A.