Salient object detection (SOD) aims to segment the objects in an image that most attract human visual attention. It plays an important role in many computer vision and robotic vision tasks, such as image segmentation and visual tracking. Recently, deep learning based methods [28, 37, 16, 11, 48, 6, 9, 8] have proved their superiority and achieved remarkable progress. The success of these methods, however, heavily relies on a large number of highly accurate pixel-level annotations, which are time-consuming and labor-intensive to collect. A trade-off between testing accuracy and training annotation cost has long existed in the SOD task.
To alleviate this predicament, several attempts have been made to explore different weakly supervised formats, such as noisy labels [29, 34], scribbles [47, 44] and image-level annotations (i.e., classification labels). Image-level annotation based WSOD methods usually adopt a two-stage scheme, which leverages a classification network to generate pseudo labels and then trains a saliency network on these labels. In this paper, we focus on this most challenging setting: developing WSOD using only image-level annotations.
Previous works pursue accurate pseudo labels to train a saliency network and achieve good performance. However, given that pseudo labels are still a far cry from the ground truths, the errors left unaddressed in the pseudo labels can propagate to the generated predictions. This is consistent with the fact that as the number of epochs increases and the parameters of the model are updated, the prediction quality goes from underfitting to optimal to overfitting. Interestingly, we observe that relatively good results containing global representations of saliency can be predicted early in the training process (e.g., epoch 5), while the predictions are more prone to error later in training (e.g., epoch 20), as shown in the first two rows of Figure 1. This inspires us to go one step further and explore how this global representation can evolve as the model is properly trained.
Moreover, previous works adopt existing large-scale datasets, e.g., ImageNet and COCO, to perform WSOD. However, an observable fact should not be ignored: there is an inherent inconsistency between the image classification and SOD tasks. For example, many classification labels do not match the salient objects in both single-object and multi-object cases in ImageNet, as illustrated in Figure 2. Such cross-domain inconsistency caused by these mismatched samples impairs the generalizability of models and prevents WSOD methods from achieving optimal results.
In this work, our core insight is that we can design a self-calibrated training strategy and exploit saliency-based image-level annotations to address the aforementioned challenges. To be specific, we 1) aim to calibrate our network with progressively updated labels to curb the spread of errors from low-quality pseudo labels during the training process, and 2) develop reliable matches in which image-level annotations correctly correspond to salient objects. The source code will be released upon publication. Concretely, our contributions are as follows:
We propose a self-calibrated training strategy to prevent the network from propagating the negative influence of error-prone pseudo labels. A mutual calibration loop is established between pseudo labels and network predictions to promote each other.
We open up a fresh perspective on that even a much smaller dataset (merely % of ImageNet) with well-matched image-level annotations allows WSOD to achieve better performance. This encourages more existing data to be correctly annotated and further paves the way for the booming future of WSOD.
Our method outperforms existing WSOD methods on all metrics over five benchmark datasets, and meanwhile achieves averagely % performance of state-of-the-art fully supervised methods. We also demonstrate that our method retains its competitive edge on most metrics even without our proposed dataset.
We extend the proposed method to other fully supervised SOD methods. Our offered pseudo labels enable these methods to achieve comparatively high accuracy (% for BASNet  and % for ITSD  on F-measure) while being free of pixel-level annotations, costing only % of labeling time for pixel-level annotation.
II Related Work
II-A Salient Object Detection
Early SOD methods mainly focus on detecting salient objects by utilizing handcrafted features and setting various priors, such as the center prior , the boundary prior  and so on [53, 21]. Recently, deep learning based methods have demonstrated their advantages and achieved remarkable improvements. Plenty of promising works [18, 40, 35, 49, 39] have been proposed, presenting various effective architectures. Among them, Hou et al. present short connections to integrate low-level and high-level features and predict more detailed saliency maps. Wu et al. propose a novel cascaded partial decoder framework and utilize the generated, relatively precise attention map to refine high-level features. In [35, 49], researchers propose to explore the boundaries of salient objects to produce more detailed predictions. Despite the appealing performance these methods have achieved, vast quantities of high-quality pixel-level annotations are needed to train their models, which are time-consuming and laborious to collect.
II-B Weakly Supervised Salient Object Detection
To achieve a trade-off between labeling efficiency and model performance, researchers aim to perform salient object detection with low-cost annotations. To this end, WSOD has been presented and achieves appealing performance with image-level annotations only.
Wang et al. design a foreground inference network (FIN) to predict saliency maps from image-level annotations, and introduce a global smooth pooling (GSP) to combine the advantages of global average pooling (GAP) and global max pooling (GMP), which explicitly computes the activation of salient objects. Li et al. also perform WSOD based on image-level annotations; they adopt a recurrent self-training strategy and propose a conditional random field based graphical model to cleanse the noisy pixel-wise annotations by enhancing spatial coherence as well as salient object localization. Based on a traditional method, MB+ , more accurate saliency maps are generated in less than one second per image. Zeng et al. intelligently utilize multiple annotations (i.e., classification and caption annotations) and design a multi-source weak supervision framework to integrate information from the various annotations. Benefiting from multiple annotations and an interactive training strategy, even a really simple saliency network can achieve appealing performance. All the above methods first train a classification network (on an existing large-scale multi-object dataset, i.e., ImageNet  or Microsoft COCO ) to generate class activation maps (CAMs) , then perform different refinement methods to generate pseudo labels. Supervised directly by these pseudo labels, a saliency network is trained to predict the final saliency maps.
Different from the aforementioned works, we argue that: 1) Developing an effective training strategy encourages more accurate predictions even under the supervision of inaccurate pseudo labels which would mislead the networks. 2) Establishing accurate matches between classification labels and salient objects could facilitate the further development of WSOD.
III The Proposed Method
In this section, we describe the details of our two-stage framework. As illustrated in Figure 3, in the first training stage we train a normal classification network on the proposed saliency-based dataset to generate more accurate pseudo labels. We then train a saliency network on these pseudo labels in the second stage. A self-calibrated training strategy is proposed in this stage to immunize the network against inaccurate pseudo labels and encourage more accurate predictions.
III-A From Image-level to Pixel-level
Class activation maps (CAMs)  localize the most discriminative regions in an image using only a normal classification network, building a preliminary bridge from image-level annotations to pixel-level segmentation tasks. In this paper, we adopt CAMs, following the same setting of , to generate pixel-level pseudo labels in the first training stage. For a better understanding of our proposed approach, we briefly describe the generation of CAMs.
For a classification network, we discard all the fully connected layers and apply an extra global average pooling (GAP) layer as well as a convolution layer, as previous works do. In the training phase, we take images in the classification dataset as input, and compute the classification score for each category following the standard CAM formulation:

$$s_c = \mathbf{w}_c^{\top}\,\mathrm{GAP}(F) + b_c,$$

where $F$ represents the features from the last convolution block, $\mathrm{GAP}(\cdot)$ denotes the global average pooling operation, and $\mathbf{w}_c$ as well as $b_c$ are the learnable parameters of the convolution layer. In the inference phase, we compute the CAMs of images in the DUTS-Train dataset as follows:

$$M_c = \mathbf{w}_c^{\top} F, \quad c \in \{1, \dots, C\},$$

where $\mathbf{w}_c$ and $b_c$ are the shared parameters learned in the training phase, $s_c$ represents the classification score for category $c$, and $C$ represents the total number of categories. In this phase, a multi-scale inference strategy is adopted, which rescales the original image into four sizes and computes the average CAMs as the final output.
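The shared classification/CAM head described above can be sketched as follows. This is a minimal illustration, not the paper's exact implementation; the DenseNet-169 channel count (1664) and the normalization of each map to [0, 1] are assumptions:

```python
import torch
import torch.nn as nn

class CAMHead(nn.Module):
    """Classification head: GAP followed by a 1x1 convolution,
    so the same weights produce both class scores and CAMs."""
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, feats):
        # Training phase: classification scores s_c = w_c^T GAP(F) + b_c
        return self.fc(self.gap(feats)).flatten(1)      # (B, C)

    def cams(self, feats):
        # Inference phase: apply the shared 1x1 conv on the full feature map
        maps = torch.relu(self.fc(feats))               # (B, C, H, W)
        # Normalize each map to [0, 1] (an assumption for visualization)
        return maps / (maps.amax(dim=(2, 3), keepdim=True) + 1e-5)

head = CAMHead(in_channels=1664, num_classes=44)  # 44 DUTS-Cls categories
feats = torch.randn(2, 1664, 12, 12)
assert head(feats).shape == (2, 44)
assert head.cams(feats).shape == (2, 44, 12, 12)
```

The multi-scale inference would simply run `cams` at four input scales and average the resized outputs.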
As Ahn et al. have pointed out, CAMs mainly concentrate on the most discriminative regions and are too coarse to serve as pseudo labels. Various refinements have been conducted to generate pseudo labels. Different from [36, 45], which use the clustering algorithm SLIC , a plug-and-play module, PAMR , is adopted in our method. It performs refinement using the low-level color information of RGB images and can be inserted into our framework flexibly and efficiently. Following the settings of [36, 45], we also adopt CRF  for further refinement. Note that it is only used to generate pseudo labels in our method.
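The core idea behind PAMR-style refinement is to propagate mask values between pixels weighted by low-level color affinity. The following is a heavily simplified single-scale sketch of that idea (PAMR itself uses multi-scale dilated affinities; the 4-neighborhood, `sigma`, and iteration count here are all simplifying assumptions):

```python
import numpy as np

def refine_mask(image, mask, iters=10, sigma=0.1):
    """Simplified color-affinity mask propagation: repeatedly average the
    mask over each pixel's 4-neighborhood, weighting neighbors by RGB
    similarity. image: (H, W, 3) floats in [0, 1]; mask: (H, W) in [0, 1]."""
    H, W, _ = image.shape
    pad_img = np.pad(image, ((1, 1), (1, 1), (0, 0)), mode='edge')
    shifts = [(0, 1), (2, 1), (1, 0), (1, 2)]  # up, down, left, right
    # Color affinity to each neighbor: exp(-||I_i - I_j||^2 / sigma)
    affinities = []
    for dy, dx in shifts:
        nb = pad_img[dy:dy + H, dx:dx + W]
        dist = ((image - nb) ** 2).sum(axis=-1)
        affinities.append(np.exp(-dist / sigma))
    aff = np.stack(affinities)                  # (4, H, W)
    aff = aff / aff.sum(axis=0, keepdims=True)  # normalize over neighbors
    for _ in range(iters):
        pad_m = np.pad(mask, 1, mode='edge')
        nbs = np.stack([pad_m[dy:dy + H, dx:dx + W] for dy, dx in shifts])
        mask = (aff * nbs).sum(axis=0)          # affinity-weighted average
    return mask
```

Because affinities are near zero across color edges, mask values spread within color-homogeneous regions but stop at object boundaries, which is what makes this kind of refinement useful for sharpening coarse CAMs.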
III-B Self-calibrated Training Strategy
In the second training stage, a saliency network is trained with the pseudo labels generated in the first training stage. As mentioned above, the relatively good results containing global representations of saliency gradually degrade as the training process continues. A straightforward way to tackle this dilemma is to set up a validation set and pick the best result during the training process. However, we argue that this may lead to sub-optimal results because: 1) although good saliency representations are learned at the early training stage, the predictions are coarse and lack detail since the loss function is still converging (as shown in Figure 1); 2) the capability of the network to learn saliency representations is not fully exploited; 3) we believe that WSOD should not use any pixel-level ground truth in the training process, even as a validation set. Following this main idea, we propose to establish a mutual calibration loop during the training process in which error-prone pseudo labels are recursively updated and, in turn, calibrate the network for better predictions.
Insight: As discussed in Section I, under the supervision of noisy pseudo labels, the saliency network goes from optimal to overfitting. On the one hand, in our weakly supervised setting, this “overfitting” manifests itself as the network being affected by the noisy pseudo labels and learning the inaccurate noise information in them, which heavily restricts the performance of WSOD. It is also worth mentioning that this is fundamentally different from “overfitting” in supervised learning; the latter means that the network learns the biased information in a less comprehensive training set. On the other hand, we attribute the existence of an optimal point before overfitting to two reasons: 1) although many pseudo labels are noisy and inaccurate, the pseudo labels as a whole still describe general saliency cues and can provide roughly correct guidance for the saliency network; 2) before the loss converges, the saliency network is prone to learn the regular and generalized saliency cues rather than the irregular, noisy information in the pseudo labels. This kind of robustness is also discussed in . Motivated by the above analyses, we propose a self-calibrated training strategy to effectively utilize this robustness and tackle the negative overfitting.
To be specific, supervised by inaccurate pseudo labels, we take the predictions of the saliency network as saliency seeds. As illustrated in Figure 3, coarse but more accurate seeds are predicted during the first few epochs regardless of the inaccurate supervision of error-prone pseudo labels. We take these seeds as correction terms to calibrate and update the original pseudo labels, while performing refinement again with PAMR. The detailed procedure is presented in Algorithm 1, where a threshold is set for the binarization operation on the refined predictions. We conduct the self-calibrated strategy throughout the training process, that is, it is performed on each training batch. The loss function for this training stage combines the supervision of the updated pseudo labels and the refined predictions, balanced by the weighting factor illustrated in Algorithm 1. The intuition is that as the training process goes on, the saliency predictions become more accurate and a larger weight should be given to them.
As illustrated in Figure 3, equipped with our proposed self-calibrated training strategy, inaccurate pseudo labels are progressively updated and, in turn, supervise the network. This mutual calibration loop finally encourages both accurate pseudo labels and accurate predictions.
III-C Saliency Network
As for the saliency network, we adopt a simple encoder-decoder architecture without any auxiliary modules, which usually serves as the baseline for fully supervised SOD methods [18, 40]. As illustrated in Figure 4, for an image from the DUTS-Train dataset, we take three levels of features from the encoder, transform each through two convolution layers, and then adopt a bottom-up strategy to perform feature fusion: each deeper feature map is upsampled to the size of its shallower counterpart, concatenated with it, and fused by convolution, with the sigmoid function applied to produce the final saliency prediction.
In the decoder, the number of output channels of all the intermediate convolution layers is set to 64 for acceleration. Note that our final prediction is produced in an end-to-end manner in the test phase without any post-processing.
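A single step of the bottom-up fusion described above can be sketched as follows; the 3x3 kernel, ReLU, and bilinear upsampling are assumptions, while the 64 middle channels follow the text:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FuseBlock(nn.Module):
    """One bottom-up fusion step: upsample the deeper feature map,
    concatenate with the shallower one, and fuse with a convolution
    (64 middle channels as in the paper; other details are assumptions)."""
    def __init__(self, ch=64):
        super().__init__()
        self.conv = nn.Conv2d(ch * 2, ch, kernel_size=3, padding=1)

    def forward(self, deep, shallow):
        deep = F.interpolate(deep, size=shallow.shape[2:],
                             mode='bilinear', align_corners=False)
        return torch.relu(self.conv(torch.cat([deep, shallow], dim=1)))

fuse = FuseBlock()
f_deep = torch.randn(1, 64, 16, 16)     # deeper, lower-resolution feature
f_shallow = torch.randn(1, 64, 32, 32)  # shallower, higher-resolution feature
assert fuse(f_deep, f_shallow).shape == (1, 64, 32, 32)
```

Chaining such blocks from the deepest to the shallowest encoder level, followed by a 1-channel convolution and sigmoid, yields the saliency map.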
IV Dataset Construction
To explore the advantages of accurate matches between image-level annotations and corresponding salient objects, we establish a saliency-based classification dataset, which ensures all the classification labels correspond to the salient objects. Following this main idea, we relabel an existing widely-adopted saliency training set DUTS-Train  with well-matched image-level annotations, namely DUTS-Cls dataset. It fits with WSOD better than existing large-scale classification datasets due to the accurate matches, and facilitates the further improvements for WSOD.
To be specific, we select and label images in DUTS-Train with image-level annotations, discarding rare categories because they contain only a few images each. The proposed DUTS-Cls dataset contains 44 categories and 5959 images. As illustrated in Figure 5, it reaches a relative equilibrium in terms of the number of images per category and covers most common categories.
It is worth mentioning that labeling image-level annotations is quite fast, taking less than 1 second per image. Compared to the roughly 3 minutes  needed to label a pixel-level ground truth, it takes less than % of the time and labor cost per sample. Annotating the DUTS-Cls dataset (5959 samples) costs only % of the labeling time of annotating the whole DUTS-Train dataset (10553 samples) with pixel-level ground truth. This indicates that exploring WSOD with image-level annotations is quite efficient. Moreover, the DUTS-Cls dataset with well-matched image-level annotations offers a better choice for WSOD than ImageNet, and we genuinely hope it can contribute to the community and encourage more existing data to be correctly annotated at the image level.
V-A Implementation Details
We implement our method with the PyTorch toolbox on a single RTX 2080Ti GPU. The backbone adopted in our method is DenseNet-169 , the same as the latest work . During the first training stage, we train a classification network on our proposed DUTS-Cls dataset. In this stage, we adopt the Adam optimization algorithm ; the learning rate is set to 1e-4 and the maximum epoch is set to . In the second training stage, we only take the RGB images from DUTS-Train as our training set. In this stage, we use the Adam optimization algorithm with a learning rate of 3e-6 and a maximum epoch of 25. The batch size of both training stages is set to , and all training and testing images are resized to .
Hyperparameter settings. For the weighting factor of the self-calibrated strategy, we conduct hyper-parameter experiments on the ECSSD  dataset and pick the optimal value via the F-measure . According to the results (0.848 at 0.5, 0.853 at 0.6 and 0.849 at 0.7), we finally set the hyper-parameter to 0.6.
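The F-measure used to pick this value is the standard SOD metric combining precision and recall with $\beta^2 = 0.3$. A minimal implementation (operating on an already-binarized prediction, which is an assumption; SOD benchmarks typically also sweep thresholds) is:

```python
import numpy as np

def f_measure(pred, gt, beta2=0.3):
    """Weighted F-measure commonly used in SOD (beta^2 = 0.3).
    pred: binarized prediction; gt: binary ground truth, both (H, W)."""
    tp = float(np.logical_and(pred, gt).sum())
    precision = tp / max(float(pred.sum()), 1e-8)
    recall = tp / max(float(gt.sum()), 1e-8)
    return (1 + beta2) * precision * recall / max(beta2 * precision + recall, 1e-8)
```

Weighting $\beta^2 = 0.3$ emphasizes precision over recall, reflecting the view that wrongly marking background as salient is worse than missing part of the object.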
V-B Datasets and Evaluation Metrics
For a fair comparison, we train our model on ImageNet and on our proposed DUTS-Cls dataset respectively; the results are shown in Table I. We conduct comparisons on the following five widely-adopted test datasets. ECSSD : contains 1000 images covering various scenes. DUT-OMRON : includes 5168 challenging images consisting of single or multiple salient objects with complex contours and backgrounds. PASCAL-S : is collected from the validation set of the PASCAL VOC semantic segmentation dataset and contains 850 challenging images. HKU-IS : includes 4447 images, many of which contain multiple disconnected salient objects. DUTS : is the largest salient object detection benchmark, containing 10553 training samples (DUTS-Train) and 5019 testing samples (DUTS-Test). Most images in DUTS-Test are challenging, with various object locations and scales.
V-C Comparison with State-of-the-arts
We compare our method with all the existing image-level annotation based WSOD methods: WSS , ASMO  and MSW . To further demonstrate the effectiveness of our weakly supervised method, we also compare the proposed method with nine state-of-the-art fully supervised methods, including DSS , R3Net , DGRL , BASNet , PFA , CPD , SCRN , ITSD  and MINet , all of which are trained on pixel-level ground truth and based on DNNs. For a fair comparison, we use the saliency maps provided by the authors and run the same evaluation code for all methods.
Quantitative evaluation. Table I
shows the quantitative comparison on four evaluation metrics over five datasets. It can be seen that our method outperforms all the weakly supervised methods on all metrics. In particular, a % improvement on HKU-IS and % on DUT-OMRON are achieved on the MAE metric. Our method also improves the performance on the two challenging datasets DUT-OMRON and PASCAL-S by a large margin, which indicates that our method can extract accurate saliency cues even in complex scenes. Additionally, the proposed saliency-based dataset with well-matched image-level annotations enables our method to achieve better performance while requiring far fewer training samples (less than % of the latest work MSW ). To evaluate our method more objectively, we also train it on the ImageNet dataset following previous works. The results of "Ours-" shown in Table I demonstrate that, thanks to the effective strategy, our method can outperform existing methods on most metrics even without the proposed dataset. Moreover, we also compare our method with nine state-of-the-art fully supervised methods. It can be seen in Figure 7 that our method, even with image-level annotations only and a simple baseline network without any auxiliary modules, achieves % of the accuracy of fully supervised methods on average.
Qualitative evaluation. In Figure 6, we show qualitative comparisons of our method with three existing WSOD methods as well as six state-of-the-art fully supervised methods. It can be seen that our method can discriminate salient objects in various challenging scenes (such as the small-object and complex-background cases) and achieve more complete and accurate predictions. Moreover, compared with the fully supervised methods, our method also predicts comparable and even better results in some cases, such as the complete house and log. However, we would like to point out that our results still need to be improved in terms of the boundaries of the salient objects.
V-D Ablation Studies
Effect of the self-calibrated strategy. We conduct experiments under both the ImageNet and DUTS-Cls settings in Table II. It can be seen that the proposed self-calibrated strategy not only greatly enhances the performance of our method in the ImageNet setting, but also achieves great improvements even in the DUTS-Cls setting, especially on the MAE metric. Besides, the effectiveness of the proposed strategy is also demonstrated by the visual results in Figure 8: it keeps and enhances the globally good representations during the training process, and predicts accurate saliency maps even when supervised by error-prone pseudo labels. Moreover, for a comprehensive evaluation, 1) we change the pseudo labels by using two traditional SOD methods, BSCA  and MR , and then train our model with and without the proposed strategy respectively; the results are shown in the first four rows of Table III. 2) We further apply our strategy to the latest work MSW  by simply adding it, as shown in the last two rows of Table III. These results strongly prove that the self-calibrated strategy not only works well in our method, but is also effective for other pseudo labels and other works.
Effect of the DUTS-Cls dataset. We introduce a saliency-based dataset with well-matched image-level annotations to offer a better choice for WSOD. The first two rows of Table II demonstrate that the DUTS-Cls dataset allows the baseline model to achieve remarkable improvements compared to the ImageNet dataset. And as illustrated in the last two rows of Table II, it also proves its superiority through steady improvements on most metrics, even when good performance has already been achieved by adopting the self-calibrated strategy. This is consistent with our argument that cross-domain inconsistency does impede the performance of WSOD, and that a saliency-based dataset can settle this matter better. Additionally, we visualize the CAMs trained on ImageNet and on DUTS-Cls in Figure 9; it can be seen that the CAMs trained on the well-matched DUTS-Cls dataset have a higher activation level within the salient objects. Last but not least, to further prove the effectiveness of the proposed DUTS-Cls dataset objectively, we also train the latest work MSW  on the DUTS-Cls dataset. As shown in Figure 10, by simply replacing ImageNet with DUTS-Cls, considerable improvements are achieved in fewer training iterations. It is worth mentioning that the DUTS-Cls dataset amounts to less than % of ImageNet in terms of sample size. This strongly demonstrates the effectiveness and generalizability of the well-matched DUTS-Cls dataset for WSOD.
V-E Effectiveness on Unseen Categories
The number of categories in the classification dataset inevitably influences the performance of WSOD. Unlike ImageNet, which includes 200 various categories, our proposed DUTS-Cls dataset only contains 44 categories. It is therefore necessary to evaluate the effectiveness of our method as well as the DUTS-Cls dataset on unseen categories.
To this end, we choose THUR  as the benchmark dataset for this experiment. THUR is a high-quality saliency dataset consisting of five categories: butterfly, coffee mug, dog, giraffe and airplane. The category airplane is unseen to our DUTS-Cls dataset but seen to ImageNet, while the category giraffe is unseen to both ImageNet and the DUTS-Cls dataset. As illustrated in Table IV, the DUTS-Cls dataset encourages better predictions on the whole THUR dataset, and also outperforms ImageNet by a large margin on both the airplane and giraffe categories. This proves the generalizability and effectiveness of the proposed DUTS-Cls dataset. Besides, the superiority of our method on unseen categories is also demonstrated in Table IV. Moreover, beyond the airplane and giraffe categories, our method also behaves well on various other unseen categories, such as the cases shown in Figure 6. This further supports the effectiveness of our method on unseen categories.
We extend our method to fully supervised methods by replacing the manually labeled ground truth with our generated predictions on the training set. To be specific, we infer predictions using our trained model on the DUTS-Train dataset and adopt CRF for further refinement. It can be seen in Figure 11 that, trained with our offered predictions as supervision, BASNet  and ITSD  achieve % and % of their fully supervised accuracy on the F-measure without any pixel-level annotations. Additionally, our method also achieves % of its fully supervised accuracy on the F-measure. These experiments indicate that our method can serve as an alternative provider of pixel-level supervision for fully supervised SOD methods while maintaining comparatively high accuracy, costing only % of the pixel-level annotation time and labor.
In this paper, we propose a novel self-calibrated training strategy and introduce a saliency-based dataset with well-matched image-level annotations for WSOD. The proposed strategy establishes a mutual calibration loop between pseudo labels and network predictions, which effectively prevents the network from propagating the negative influence of error-prone pseudo labels. We also argue that cross-domain inconsistency exists between SOD and existing large-scale classification datasets, and impedes the development of WSOD. To offer a better choice for WSOD and encourage more contributions to the community, we introduce a saliency-based classification dataset DUTS-Cls to settle this matter well. Extensive experiments demonstrate the superiority of our method and effectiveness of our two ideas. In addition, our method can serve as an alternative to provide pixel-level labels for fully supervised SOD methods while maintaining comparatively high performance, costing only % of labeling time for pixel-level annotation.
- (2009) Frequency-tuned salient region detection. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1597–1604.
- (2012) SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Transactions on Pattern Analysis and Machine Intelligence 34 (11), pp. 2274–2282.
- (2019) Weakly supervised learning of instance segmentation with inter-pixel relations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2209–2218.
- (2020) Single-stage semantic segmentation from image labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4253–4262.
- (2015) Salient object detection: a benchmark. IEEE Transactions on Image Processing 24 (12), pp. 5706–5722.
- (2020) DPANet: depth potentiality-aware gated attention network for RGB-D salient object detection. IEEE Transactions on Image Processing.
- (2014) SalientShape: group saliency in image collections. The Visual Computer 30 (4), pp. 443–453.
- (2018) Co-saliency detection for RGBD images based on multi-constraint feature matching and cross label propagation. IEEE Transactions on Image Processing 27 (2), pp. 568–579.
- (2019) Video saliency detection via sparsity-based reconstruction and propagation. IEEE Transactions on Image Processing 28 (10), pp. 4819–4931.
- (2009) ImageNet: a large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255.
- (2018) R3Net: recurrent residual refinement network for saliency detection. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pp. 684–690.
- (2010) The PASCAL Visual Object Classes (VOC) challenge. International Journal of Computer Vision 88 (2), pp. 303–338.
- (2017) Structure-measure: a new way to evaluate foreground maps. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4548–4557.
- (2018) Enhanced-alignment measure for binary foreground map evaluation. arXiv preprint arXiv:1805.10421.
- (2020) Employing multi-estimations for weakly-supervised semantic segmentation. In European Conference on Computer Vision, ECCV 2020, Vol. 12362, pp. 332–348.
- (2019) Attentive feedback network for boundary-aware salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1623–1632.
- (2015) Online tracking by learning discriminative saliency map with convolutional neural network. In International Conference on Machine Learning, pp. 597–606.
- (2017) Deeply supervised salient object detection with short connections. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3203–3212.
- (2017) Densely connected convolutional networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2261–2269.
- (2013) Submodular salient region detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2043–2050.
- (2014) Salient region detection via high-dimensional color transform. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 883–890.
- (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- (2011) Efficient inference in fully connected CRFs with Gaussian edge potentials. In Advances in Neural Information Processing Systems, pp. 109–117.
- (2018) Weakly supervised salient object detection using image labels. arXiv preprint arXiv:1803.06503.
- (2015) Visual saliency based on multiscale deep features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5455–5463.
- (2014) The secrets of salient object segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 280–287.
- (2014) Microsoft COCO: common objects in context. In European Conference on Computer Vision, pp. 740–755.
- (2016) DHSNet: deep hierarchical saliency network for salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 678–686.
- (2019) DeepUSPS: deep robust unsupervised saliency prediction via self-supervision. In Advances in Neural Information Processing Systems 32, pp. 204–214.
- (2012) Leveraging stereopsis for saliency analysis. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 454–461.
- (2020) Multi-scale interactive network for salient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9413–9422.
- (2019) BASNet: boundary-aware salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7479–7489.
- (2015) Saliency detection via cellular automata. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 110–119.
- (2013) Looking beyond the image: unsupervised learning for object saliency and detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3238–3245.
- (2019) Selectivity or invariance: boundary-aware salient object detection. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, pp. 3798–3807.
- (2017) Learning to detect salient objects with image-level supervision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 136–145.
- (2016) Saliency detection with recurrent fully convolutional networks. In European Conference on Computer Vision, pp. 825–841.
- (2018) Detect globally, refine locally: a novel approach to saliency detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3127–3135.
- (2020) F3Net: fusion, feedback and focus for salient object detection. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, pp. 12321–12328.
- (2019) Cascaded partial decoder for fast and accurate salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3907–3916.
- (2019) Stacked cross refinement network for edge-aware salient object detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 7264–7273.
- (2013) Hierarchical saliency detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1155–1162.
- (2013) Saliency detection via graph-based manifold ranking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3166–3173.
- (2020) Structure-consistent weakly supervised salient object detection with local saliency coherence. arXiv preprint arXiv:2012.04404.
- (2019) Multi-source weak supervision for saliency detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6074–6083.
- (2015) Minimum barrier salient object detection at 80 fps. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1404–1412.
- (2020) Weakly-supervised salient object detection via scribble annotations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12546–12555.
- (2021) Dense attention fluid network for salient object detection in optical remote sensing images. IEEE Transactions on Image Processing 30, pp. 1305–1317.
- (2019) EGNet: edge guidance network for salient object detection. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, pp. 8778–8787.
- (2019) Pyramid feature attention network for saliency detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3085–3094.
- (2016) Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2921–2929.
- (2020) Interactive two-stream decoder for accurate and fast saliency detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9141–9150.
- (2014) Saliency optimization from robust background detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2814–2821.