Contour Loss: Boundary-Aware Learning for Salient Object Segmentation

08/06/2019
by Zixuan Chen, et al.
Sun Yat-sen University

We present a learning model that makes full use of boundary information for salient object segmentation. Specifically, we propose a novel loss function, Contour Loss, which leverages object contours to guide models to perceive salient object boundaries. Such a boundary-aware network can learn boundary-wise distinctions between salient objects and background, thereby effectively facilitating saliency detection. However, Contour Loss emphasizes local saliency. We therefore further propose the hierarchical global attention module (HGAM), which forces the model to hierarchically attend to global contexts, thus capturing global visual saliency. Comprehensive experiments on six benchmark datasets show that our method achieves superior performance over state-of-the-art methods. Moreover, our model runs in real time at 26 fps on a TITAN X GPU.


1 Introduction

Salient object segmentation, which aims to extract the most conspicuous object regions in the visual range, has become an attractive computer vision research topic over the decades. A vast family of saliency algorithms has been proposed to tackle the saliency segmentation problem, which distinguishes whether a pixel pertains to a noticeable object or to inconsequential background. Because they mix both object and background, pixels close to the boundary between object and background are error-prone.

Figure 1: Visual examples of the proposed method and PCA [19]. From left to right: (a) original image, (b) ground truth, (c) ours, (d) PCA [19].

Early methods [8, 3, 35] determine saliency by utilizing hand-crafted appearance features. These methods often focus on low-level visual features and struggle to achieve satisfactory results on images with complex scenes. Compared with methods based on hand-crafted features and prior knowledge, fully convolutional network (FCN) based frameworks [21, 26, 17] made remarkable progress by exploiting high-level semantic information. To learn the binarized saliency segmentation, these networks usually adopt cross entropy loss as the objective function. However, cross entropy loss only considers the sample distribution while neglecting appearance cues of target objects, such as boundaries and inner textures. In particular, the boundary is regarded as very important for saliency segmentation.

To leverage boundary information, multi-task architectures [22, 15] were designed to aggregate the features acquired from boundary and saliency labels. As the supervised information differs between the boundary and saliency branches, simply aggregating these features may lead to incompatible interference, making such models hard to converge. Since the aforementioned saliency frameworks cannot well exploit boundary information to determine contour pixels, these methods may obtain sub-optimal results with vague boundaries.

Because the final decision on salient object regions relies on spatial contexts, exploiting contextual information in models can lessen the misdirection of insignificant background. Recently, attention mechanisms were exploited by [19, 32, 39] to obtain attended features for capturing global contexts. However, as these attended features are generated by softmax in a global form, they only emphasize a few significant pixels and discard the remaining information in the image. Therefore, for high-resolution images, capturing global contexts by softmax-wise attention modules is not a good choice, as it easily leads to overfitting in training.

To address the aforementioned issues of boundary-aware learning and attention mechanisms for saliency segmentation, we propose a novel segmentation loss and an effective global attention module, namely Contour Loss and the hierarchical global attention module (HGAM). The aim of Contour Loss is to guide the network to perceive object boundaries and learn the boundary-wise distinctions between salient objects and background. Motivated by focal loss [18], we apply spatial weight maps in the cross entropy loss, which assign relatively high values to emphasize pixels near object borders during training. As a result, the trained model is sensitive to boundary-wise distinctions in images. Since Contour Loss focuses on local boundaries, HGAM is proposed to hierarchically attend to global contextual information and alleviate background distractions. Different from the abovementioned attention modules that rely on softmax, HGAM is based on global contrast and can thus capture global contexts at all resolutions. Our baseline model is based on the FPN [17] architecture with a VGG-16 [27] backbone, refined by employing residual blocks instead of simple convolution layers in the decoder module. With the help of these techniques, our network yields state-of-the-art performance on six benchmarks.

Our main contributions are summarized as follows:

1) We propose Contour Loss to guide networks to perceive salient object boundaries. Consequently, boundary-aware features can be obtained to facilitate final predictions on object boundaries.

2) We present the hierarchical global attention module (HGAM) to attend to global contexts for reducing background distractions.

3) We construct a network based on the FPN architecture and incorporate the proposed methods for joint training.

4) Comprehensive experimental results and extensive in-depth analysis explain the superior performance of the proposed methods. In addition, our model is very fast, running at 26 fps on an NVIDIA TITAN X GPU.

2 Related Works

2.1 Salient Object Segmentation

In recent years, various frameworks, including conventional methods and fully convolutional network (FCN) based models, have been presented to address the problem of salient object segmentation. We briefly review these two categories of methods in the following.

2.1.1 Conventional methods

Conventional saliency detection methods utilize prior knowledge, as well as hand-crafted appearance features, to capture salient regions. Considering the obvious distinctions between salient regions and background in pictures, local contrast is used by [10] to determine whether a pixel is conspicuous or not. Inspired by the effectiveness of local contrast, Cheng et al. [3] proposed to capture salient regions by global contrast. To exploit different appearance cues for refining saliency quality, multi-level segmentation models are designed by [34, 9] to hierarchically aggregate these cues. As conventional methods only leverage low-level visual features, the lack of semantic information can lead to failures in complex situations.

2.1.2 FCN-based methods

With the development of deep learning, remarkable progress has been made by FCN-based models. Different from conventional methods, high-level semantic features can be exploited by FCNs to achieve better results.

Typical FCNs.

Long et al. [21] first build an FCN for addressing semantic segmentation problems. Ronneberger et al. [26] propose the U-Net architecture, which consists of a contracting path, a symmetric expanding path and lateral connections to integrate features of the same resolution. To exploit the potential of deep feature pyramids, Lin et al. [17] present the FPN architecture, which is based on U-Net and employs hierarchical predictions. These architectures are widely followed by later related works.

Recurrent structures.

Inspired by RNNs, some recurrent structures have been proposed to tackle saliency segmentation problems. Kuen et al. [12] first design a recurrent network with convolution and deconvolution layers to enhance saliency maps from coarse to fine. Liu and Han [20] present a U-Net based architecture, which refines saliency maps by recursively integrating hierarchical predictions. Wang et al. [30] utilize saliency results as feedback signals to improve saliency performance. In [37], Zhang et al. propose a complex recurrent structure to recursively extract and aggregate features. Although these advanced recurrent structures can better leverage the potential of hierarchical features, the lack of heuristic knowledge limits their capability.

Attention networks.

The aim of the attention mechanism is to adaptively select significant features, in other words, to alleviate background distractions. Since attention and saliency have similar contextual meanings in pictures, many researchers have recently adopted attention mechanisms for saliency detection. To alleviate background distractions, Wang et al. [32] obtain attention maps from encoded features to attend to global contexts, while Zhang et al. utilize both spatial and channel-wise attention in [39]. Because softmax only emphasizes a few pixels in an image, softmax-wise attention modules struggle to capture global contexts at high resolution. To tackle this problem, Liu et al. [19] propose global and local attention modules to capture global and local contexts at low and high resolution respectively, and Chen et al. [2] employ hierarchical predictions as attention maps, which can attend to global contexts at all resolutions. However, since not all features in background regions are useless for saliency determination, especially in deep layers, predicted maps that are trained to be close to annotation masks may lose some crucial information. In contrast to the aforementioned attention modules, the proposed HGAM can not only capture global contexts at all resolutions but also retain crucial information from background regions.

Figure 2: Overall architecture of the proposed network with VGG-16 [27] backbone. The backbone produces an encoded feature at each level; each residual block generates a residual feature, which is resampled to a common resolution and used to produce a hierarchical prediction. Each HGAM receives the encoded feature, the resampled feature and the previous HGAM message, and the HGAM outputs guide the finest feature, from which the final saliency output is generated.

2.2 Boundary-aware Learning

One of the major challenges in saliency segmentation is to determine the conspicuous object boundaries. Some researchers have paid attention to this point.

Since superpixel methods like SLIC [1] can obtain regions by aggregating adjacent pixels with similar attributes, they are usually adopted to refine saliency results. Yang et al. [35] propose a background-prior method, which utilizes superpixels to obtain regions and detects salient regions by ranking their similarity to foreground or background units. To refine vague boundaries, [7, 14, 13] employ superpixel algorithms to generate object contours from these over-segmented regions. Because superpixel methods rely on distinct pixel grouping, they cannot segment pixels in low-contrast regions well. Besides, these superpixel-based methods often have a huge computational cost.

Instead of using superpixel methods for boundary determination, recent works prefer to directly leverage contour information within an entire framework. As a conventional method, [23] builds a two-stream framework that combines texture and contour cues. Luo et al. [22] and Li et al. [15] present multi-task network architectures based on U-Net, which predict both saliency maps and contour maps of the corresponding salient objects. Due to the great distinctions between saliency and boundary maps, simply aggregating these features leads to inconsistent interference. Therefore, these models, which are difficult to converge, may generate sub-optimal results.

Different from the abovementioned boundary-aware methods, the proposed Contour Loss helps the model perceive object boundaries by focusing on boundary pixels, which is more robust and converges more easily.

3 Proposed Methods

Our proposed method integrates a basic network with Contour Loss and a hierarchical global attention module (HGAM), which aim at acquiring boundary-aware features and hierarchically integrating global contexts at all resolutions to enhance saliency results. We describe our methods and the baseline network in the following subsections. The overall network structure is shown in Fig 2.

3.1 Baseline Network

The FPN [17] based baseline model (without the HGAMs and the guided final output) is shown in Fig 2. The network mainly consists of two categories of modules: an encoder module and a decoder module.

For the encoder module, we adopt the VGG-16 [27] backbone pretrained on ImageNet [4] for image classification. Given a fixed-resolution input image, we utilize the backbone to extract feature maps at 5 levels, referred to as the encoded features. Since they are extracted at multiple levels, they contain both low-level visual cues and high-level semantic information at different resolutions. To integrate this multi-level information, we transfer the encoded features to the decoder module.
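To make the feature extraction concrete, the following is a minimal PyTorch sketch (not the authors' code) of pulling five levels of features from a pretrained VGG-16; the cut points at the relu1_2 to relu5_3 activations are an assumption about which layers serve as the five encoder levels.

    # Hedged sketch: extract 5 levels of VGG-16 features with torchvision.
    import torch
    import torch.nn as nn
    from torchvision.models import vgg16

    class VGG16Encoder(nn.Module):
        def __init__(self):
            super().__init__()
            features = vgg16(pretrained=True).features  # newer torchvision: weights="IMAGENET1K_V1"
            # Split the backbone into 5 stages, one per encoder level (assumed cut points).
            self.stages = nn.ModuleList([
                features[:4],     # conv1_1 .. relu1_2
                features[4:9],    # pool1 .. relu2_2
                features[9:16],   # pool2 .. relu3_3
                features[16:23],  # pool3 .. relu4_3
                features[23:30],  # pool4 .. relu5_3
            ])

        def forward(self, x):
            feats = []
            for stage in self.stages:
                x = stage(x)
                feats.append(x)   # encoded feature of this level
            return feats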

Because residual blocks are better than pure convolution layers at aggregating multi-scale features, our decoder module is constructed from 5 residual blocks, one per encoder level. After receiving the encoded features, the decoder generates the residual features, each of which can be formulated as:

(1)

where the convolution operator denotes convolution followed by ReLU layers with learnable parameters, and the remaining operators denote channel-wise concatenation and upsampling by a factor of 2, respectively. To achieve hierarchical predictions like FPN, we resample the residual features to a common resolution to obtain the upsampled features, and then utilize these feature maps to generate the hierarchical predictions P_1, …, P_5, formulated as:

(2)

where the features are first upsampled to the common resolution and then passed through a convolution followed by a Sigmoid layer with learnable parameters to produce each prediction.
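As an illustration of one decoder step, the sketch below concatenates an encoder feature with the upsampled higher-level residual feature, applies a small residual block, and emits a side prediction. The channel widths, the exact block layout, and the omission of the final resampling to a common resolution are assumptions, not the paper's implementation.

    # Hedged sketch of one decoder step: concat, residual block, side prediction.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DecoderBlock(nn.Module):
        def __init__(self, enc_ch, prev_ch, out_ch=64):
            super().__init__()
            self.reduce = nn.Sequential(
                nn.Conv2d(enc_ch + prev_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))
            self.residual = nn.Sequential(
                nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(out_ch, out_ch, 3, padding=1))
            self.predict = nn.Conv2d(out_ch, 1, 1)  # 1x1 conv prediction head

        def forward(self, enc_feat, prev_feat):
            # Assumes enc_feat is exactly twice the spatial size of prev_feat.
            prev_up = F.interpolate(prev_feat, scale_factor=2, mode='bilinear',
                                    align_corners=False)           # upsample by 2
            x = self.reduce(torch.cat([enc_feat, prev_up], dim=1))  # channel-wise concat
            r = F.relu(x + self.residual(x))                        # residual feature
            p = torch.sigmoid(self.predict(r))                      # side prediction
            return r, p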

As the predictions P_1, …, P_5 are derived from features of different resolutions, these prediction maps receive supervision from coarse to fine. To better leverage these feedbacks for updating parameters, the training loss is calculated as a weighted sum of their cross entropy losses, following [6]; it can be formulated as:

L = Σ_{i=1}^{5} λ_i · ℓ_ce(P_i, G)    (3)

where G is the annotation mask, ℓ_ce denotes the cross entropy loss, L is its weighted combination, and λ_i is the hyperparameter weighting the corresponding prediction P_i. In testing, we adopt the finest prediction as the saliency result.
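A compact sketch of this weighted deep supervision, using binary cross entropy per prediction and the weight values reported in Section 4.1; the function name and tensor shapes are illustrative assumptions.

    # Hedged sketch of the weighted deep-supervision loss in Eq. 3.
    import torch
    import torch.nn.functional as F

    def hierarchical_loss(preds, gt, lambdas=(0.3, 0.4, 0.6, 0.8, 1.0)):
        """preds: list of 5 saliency maps in (0, 1), each (N, 1, H, W); gt: (N, 1, H, W)."""
        total = 0.0
        for p, lam in zip(preds, lambdas):
            total = total + lam * F.binary_cross_entropy(p, gt)
        return total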

3.2 Contour Loss

Salient object segmentation aims at capturing the most conspicuous objects in input images. Suppose an image contains only two parts: the background and the salient objects. Most pixels lie in the interior of the objects or the background, far from the object borders. Intuitively, their contexts are relatively pure, because apart from a few noise pixels only object or background pixels appear in their receptive fields. Consequently, saliency networks can classify these pixels well without auxiliary techniques. However, pixels located at the boundary between background and salient objects are so ambiguous that even experienced annotators find it difficult to determine their labels. From the perspective of features, the vectors extracted from such motley image pixels fall near the decision hyperplanes and act as hard examples. As general saliency networks only apply pixel-wise binary classification, neglect boundary cues, and train all pixels equally with cross entropy loss, they usually predict the broad outline of target objects but are inferior at precise boundaries.

Based on the above observations, we argue that border pixels, which act as the hard examples in saliency maps, deserve much higher attention in the training phase. Inspired by focal loss [18], assigning higher weights to focus on these hard examples is theoretically and practically convincing. Towards this end, we apply spatial weight maps in the cross entropy loss, which assign relatively high values to emphasize pixels near the salient object borders. The spatial weight map can be formulated as:

(4)

where dilation and erosion operations are applied to the annotation mask, and the object boundaries are obtained as the difference between the dilated and eroded masks. A hyperparameter, set to 5 empirically, assigns the high weight to these boundary pixels. To endow pixels that are close to but not located on the boundaries with a moderate weight, we also apply a Gaussian function over a local range. A ones matrix of the same resolution sets the pixels far from object boundaries to a weight of 1. Compared with boundary operators such as the Laplace operator, this approach generates thicker object contours, which better tolerates boundary errors.

Generally, the proposed Contour Loss is implemented as the following formula:

L_C = -Σ_j W(j) · [ G(j) · log P(j) + (1 - G(j)) · log(1 - P(j)) ]    (5)

where W(j), G(j) and P(j) represent the spatial weight map, annotation map and predicted saliency map at pixel j, respectively. In implementation, since our network outputs multiple intermediate saliency maps, Contour Loss is applied to all intermediate maps to supervise the network during training. In other words, when we adopt Contour Loss, the cross entropy term ℓ_ce in Eq. 3 is replaced by L_C.
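The following PyTorch sketch illustrates the idea of Contour Loss under stated assumptions: the contour band is obtained by morphological dilation minus erosion (implemented here with max pooling), softened with an average-pooling stand-in for the Gaussian, scaled by the weight 5 mentioned above, and used as a pixel-wise weight in binary cross entropy. Kernel sizes and function names are assumptions, not the authors' implementation.

    # Hedged sketch of Contour Loss: spatially weighted binary cross entropy
    # whose weight map highlights pixels near object boundaries.
    import torch
    import torch.nn.functional as F

    def spatial_weight_map(gt, ksize=5, high=5.0):
        """gt: (N, 1, H, W) binary ground-truth masks in {0, 1}."""
        pad = ksize // 2
        dilated = F.max_pool2d(gt, ksize, stride=1, padding=pad)     # dilation
        eroded = -F.max_pool2d(-gt, ksize, stride=1, padding=pad)    # erosion
        boundary = dilated - eroded                                  # contour band
        # Soften the band so near-boundary pixels get a moderate weight
        # (average pooling used as a Gaussian-like stand-in).
        soft = F.avg_pool2d(boundary, ksize, stride=1, padding=pad)
        return high * soft + 1.0                                     # far pixels keep weight 1

    def contour_loss(pred, gt):
        """pred: predicted saliency in (0, 1); gt: binary mask. Returns a scalar."""
        w = spatial_weight_map(gt)
        return F.binary_cross_entropy(pred, gt, weight=w, reduction='mean')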

Figure 3: Inner structure of an HGAM. Yellow, blue and gray dotted lines represent the attention stream, prediction stream and attention guidance, respectively. The width and height refer to the corresponding resampled feature, and the outputs are the HGAM message and attention map.
Method | fps | SOD (maxF / MAE) | PASCAL-S (maxF / MAE) | ECSSD (maxF / MAE) | HKU-IS (maxF / MAE) | DUTS-TE (maxF / MAE) | DUT-O (maxF / MAE)
conventional methods
MR [35] | 1.1 | 0.584 / 0.237 | 0.597 / 0.209 | 0.677 / 0.173 | 0.620 / 0.180 | 0.490 / 0.220 | 0.516 / 0.210
DRFI [9] | 1.6 | 0.697 / 0.223 | 0.698 / 0.207 | 0.786 / 0.164 | 0.777 / 0.145 | 0.647 / 0.175 | 0.690 / 0.108
ResNet-50 [5] backbone
SRM [31] | 14 | 0.845 / 0.132 | 0.862(3) / 0.098 | 0.917 / 0.054 | 0.906 / 0.046 | 0.827 / 0.059 | 0.769 / 0.069
BRN [32] | 5.9 | 0.858(2) / 0.104(2) | 0.861 / 0.071(2) | 0.921 / 0.045(3) | 0.916 / 0.037(2) | 0.829 / 0.051(3) | 0.790 / 0.063(3)
VGG-19 [27] backbone
PAGRN [39] | – | – | 0.861 / 0.092 | 0.921 / 0.064 | 0.922(2) / 0.048 | 0.857(2) / 0.055 | 0.806(2) / 0.072
VGG-16 [27] backbone
RFCN [30] | 9 | 0.807 / 0.166 | 0.850 / 0.132 | 0.898 / 0.095 | 0.898 / 0.080 | 0.783 / 0.090 | 0.738 / 0.095
Amulet [37] | 16 | 0.808 / 0.145 | 0.828 / 0.103 | 0.915 / 0.059 | 0.896 / 0.052 | 0.778 / 0.085 | 0.743 / 0.098
UCF [38] | 23 | 0.803 / 0.169 | 0.846 / 0.128 | 0.911 / 0.078 | 0.886 / 0.074 | 0.771 / 0.117 | 0.735 / 0.132
NLDF [22] | 12 | 0.842 / 0.130 | 0.845 / 0.112 | 0.905 / 0.063 | 0.902 / 0.048 | 0.812 / 0.066 | 0.753 / 0.080
RA [2] | 35 | 0.844 / 0.124 | 0.834 / 0.104 | 0.918 / 0.059 | 0.913 / 0.045 | 0.826 / 0.055 | 0.786 / 0.062(2)
CKT [15] | 23 | 0.829 / 0.119 | 0.846 / 0.081 | 0.910 / 0.054 | 0.896 / 0.048 | 0.807 / 0.062 | 0.757 / 0.071
BMP [36] | 22 | 0.851 / 0.106(3) | 0.862(3) / 0.074(3) | 0.928(3) / 0.044(2) | 0.920 / 0.038(3) | 0.850 / 0.049(2) | –
PCA [19] | 5.6 | 0.855(3) / 0.108 | 0.873(2) / 0.088 | 0.931(2) / 0.047 | 0.921(3) / 0.042 | 0.851(3) / 0.054 | 0.794(3) / 0.068
Ours | 26 | 0.873(1) / 0.095(1) | 0.883(1) / 0.063(1) | 0.933(1) / 0.037(1) | 0.932(1) / 0.031(1) | 0.872(1) / 0.042(1) | 0.825(1) / 0.058(1)
Table 1: Quantitative comparisons of different saliency models on six benchmark datasets in terms of maximum F-measure (maxF) and MAE. Superscripts (1), (2) and (3) indicate the best, second best and third best performance, respectively. The computation speed (fps) is measured on an NVIDIA TITAN X GPU.

3.3 Hierarchical Global Attention Module

The aim of salient object segmentation is to detect evident object regions, in other words, to remove insignificant regions. Although a picture may contain multiple objects, not all of them are conspicuous in the saliency map. Thus, negligible information can lead to a sub-optimal result by distracting the model from salient regions. To alleviate background distractions, an attention mechanism is a useful auxiliary module for salient object segmentation. Since an attention module leverages contextual information to generate a weight map, this map can guide the model to suppress insignificant features. However, existing attention modules often adopt the softmax function, which strongly emphasizes a few important pixels and endows the others with very small values. Therefore these attention modules cannot attend to global contexts at high resolution, which easily leads to overfitting in training.

Based on the above observations, instead of using softmax-wise attention modules, we utilize a novel function based on global contrast to attend to global contextual information. If a region is conspicuous in a feature map, each pixel in the region also tends to be significant, with a relatively large value, for example, above the mean. Conversely, inconsequential features often have relatively small values in feature maps, typically below the mean. Thus, an intuitive way to suppress insignificant features is to subtract the average value from the feature map pixel-wise. After the subtraction, we can conduct a pixel-wise classification of the feature map: positive pixels represent significant features, while negative ones denote inconsequential features. Accordingly, the attention map can be generated as:

(6)

where the average and variance are computed over the input feature map. A regularization term is set to 0.1 empirically, while a small constant is added to avoid division by zero, following the default setting. Compared with softmax results, the pixel-wise disparity of our attention maps is more reasonable; in other words, our attention method can retain conspicuous regions in high-resolution feature maps.
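A minimal sketch of such a global-contrast attention map, assuming per-channel statistics, the regularization value 0.1 and a small epsilon from the text, and a ReLU to keep only the positive (significant) responses; the exact normalization and activation in the paper may differ.

    # Hedged sketch of a global-contrast attention map.
    import torch

    def global_contrast_attention(feat, lam=0.1, eps=1e-5):
        """feat: (N, C, H, W) feature map. Returns an attention map of the same shape."""
        mean = feat.mean(dim=(2, 3), keepdim=True)      # global average per channel
        var = feat.var(dim=(2, 3), keepdim=True)        # global variance per channel
        contrast = (feat - mean) / (var + lam + eps)    # global contrast normalization
        return torch.relu(contrast)                     # positive = significant, negative suppressed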

Since attention maps have no labels of their own, they are usually generated or supervised by predicted maps or ground truth masks, which only retain the salient regions. However, as models may also need information from background regions to determine salient objects, such attention maps may miss some crucial information. In contrast to attention modules like [2], we exploit the feature maps close to the predictions to generate unsupervised attention maps. Therefore, our attention maps not only leverage the strong feedback of the supervised information to update themselves, but also contain background information that may be crucial for saliency determination.

Towards this end, we propose the hierarchical global attention module (HGAM) to capture multi-scale global contexts. As shown in Fig 3, a given HGAM receives three inputs: the encoded feature, the upsampled feature that is close to its prediction, and the previous HGAM message. To extract global contextual information from the input features, we adopt maxpool and avgpool layers, as suggested by [33], to obtain two contextual features. We also apply channel-wise compression to the upsampled feature to obtain another contextual feature, while one more is generated from the previous HGAM message as:

(7)

After obtaining these contextual features, we concatenate them channel-wise to generate the current HGAM message. The corresponding attention map is then generated by Eq. 6, with this message as the input feature. Of these two outputs, the message is transferred to the next HGAM, while the attention map is utilized to guide the upsampled feature as:

(8)

where the guided feature map is obtained by element-wise multiplication of the upsampled feature and the attention map.

Different from the baseline network, which utilizes the top prediction as the final output, the guided feature recursively aggregates the multi-scale HGAM messages, so we exploit it to generate the final prediction, which is also supervised. Therefore, in training, the loss L in Eq. 3 can be rewritten as:

(9)

In testing, we adopt this final prediction to evaluate our model.
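To summarize the data flow, the rough sketch below wires one HGAM together under several assumptions (channel widths, pooling sizes, how the previous message is resized, and single-channel guidance): pool the encoded feature, compress the upsampled feature and previous message, concatenate them into the current message, derive an Eq. 6-style attention map from it, and use that map to guide the feature.

    # Rough, assumption-heavy sketch of an HGAM-style module.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class HGAM(nn.Module):
        def __init__(self, enc_ch, up_ch, msg_ch=32):
            super().__init__()
            self.compress_enc = nn.Conv2d(2 * enc_ch, msg_ch, 1)  # after max+avg pooling
            self.compress_up = nn.Conv2d(up_ch, msg_ch, 1)        # channel-wise compression
            self.compress_msg = nn.Conv2d(msg_ch, msg_ch, 1)      # previous HGAM message
            self.fuse = nn.Conv2d(3 * msg_ch, msg_ch, 3, padding=1)

        def forward(self, enc_feat, up_feat, prev_msg=None):
            h, w = up_feat.shape[-2:]
            pooled = torch.cat([F.adaptive_max_pool2d(enc_feat, (h, w)),
                                F.adaptive_avg_pool2d(enc_feat, (h, w))], dim=1)
            parts = [self.compress_enc(pooled), self.compress_up(up_feat)]
            if prev_msg is not None:
                prev_msg = F.interpolate(prev_msg, size=(h, w), mode='bilinear',
                                         align_corners=False)
                parts.append(self.compress_msg(prev_msg))
            else:  # first HGAM in the chain has no previous message
                parts.append(torch.zeros_like(parts[0]))
            msg = self.fuse(torch.cat(parts, dim=1))               # current HGAM message
            mean = msg.mean(dim=(2, 3), keepdim=True)
            var = msg.var(dim=(2, 3), keepdim=True)
            attn = torch.relu((msg - mean) / (var + 0.1 + 1e-5))   # Eq. 6-style attention
            guided = up_feat * attn.mean(dim=1, keepdim=True)      # guide the feature (Eq. 8)
            return msg, attn, guided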

Figure 4: PR curves of ours and other state-of-the-art methods.

4 Experiments

4.1 Experimental Settings

Evaluation Datasets.

To evaluate the performance of our model, six public saliency segmentation datasets are exploited. DUTS [29] is a large-scale saliency benchmark dataset which contains 10,553 images as the training set (DUTS-TR) and 5,019 images as the testing set (DUTS-TE). In the experiments, we adopt DUTS-TR to train our model and DUTS-TE for evaluation. For comprehensive evaluation, we also utilize SOD [24], PASCAL-S [16], ECSSD [34], HKU-IS [13] and DUT-O [35] for testing, which contain 300, 850, 1,000, 4,447 and 5,168 images respectively. Note that no corresponding fine-tuning is carried out for testing on these databases.

Implementation Details.

Our experiments are based on the PyTorch [25] framework and run on a PC with a single NVIDIA TITAN X GPU (12 GB memory).

For training, we adopt DUTS-TR as the training set and utilize data augmentation, which resamples each image before random flipping and then randomly crops a fixed-size region. We employ stochastic gradient descent (SGD) as the optimizer with momentum 0.9 and weight decay 1e-4. We set the basic learning rate to 1e-3 and finetune the VGG-16 [27] backbone with a learning rate 0.05 times smaller. Since the saliency maps of the hierarchical predictions go from coarse to fine, we set incremental weights for these predictions; therefore λ_1, …, λ_5 are set to 0.3, 0.4, 0.6, 0.8 and 1, respectively, in both Eq. 3 and Eq. 9. The minibatch size of our network is set to 10. The maximum number of iterations is set to 150 epochs, with the learning rate decayed by a factor of 0.05 every 10 epochs. As one epoch, including training and evaluation, costs less than 500 s, the total training time is below 21 hours.
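For concreteness, here is a hedged sketch of this optimizer configuration with two parameter groups; the "encoder" name prefix is an assumption about how the backbone parameters might be identified in the model.

    # Hedged sketch: SGD with a reduced learning rate for the pretrained backbone.
    import torch

    def build_optimizer(model, base_lr=1e-3):
        backbone_params = [p for n, p in model.named_parameters() if n.startswith('encoder')]
        other_params = [p for n, p in model.named_parameters() if not n.startswith('encoder')]
        return torch.optim.SGD(
            [{'params': backbone_params, 'lr': base_lr * 0.05},  # pretrained VGG-16 layers
             {'params': other_params, 'lr': base_lr}],
            momentum=0.9, weight_decay=1e-4)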

For testing, following the training settings, we resize the input images to the same resolution and only utilize the final output. Since the testing time for each image is 0.038 s, our model achieves a speed of 26 fps.

Evaluation Metrics.

To evaluate different algorithms, we adopt three metrics for the quality of saliency maps: the precision-recall (PR) curve, the F-measure [28] and the mean absolute error (MAE).

To evaluate the robustness of saliency results under different thresholds, we utilize the PR curve, which demonstrates the relation between precision and recall obtained by thresholding saliency maps from 0 to 255.

The F-measure is a weighted combination of precision and recall for saliency maps, which can be calculated by

F_β = ((1 + β²) · Precision · Recall) / (β² · Precision + Recall)    (10)

where β² is set to 0.3 as suggested in [28]. To alleviate the unfairness caused by different binarization thresholds across papers, we report the maximum F-measure as suggested by [22, 36], which selects the best score over all thresholds from 0 to 255.

For comprehensive comparisons, we also adopt the MAE metric to evaluate the pixel-wise average absolute difference between the saliency map S and its corresponding ground truth mask G:

MAE = (1 / (W · H)) · Σ_{x=1}^{W} Σ_{y=1}^{H} |S(x, y) - G(x, y)|    (11)

where W and H represent the width and height of a given picture, respectively.
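A compact sketch of both metrics, assuming NumPy arrays in [0, 1]: it sweeps 256 thresholds for the maximum F-measure with β² = 0.3 and averages absolute differences for MAE. This is an illustration rather than the official evaluation code.

    # Hedged sketch of maximum F-measure and MAE.
    import numpy as np

    def max_f_measure(sal, gt, beta2=0.3):
        """sal, gt: (H, W) arrays; gt is binary."""
        best = 0.0
        for t in range(256):
            pred = sal >= (t / 255.0)
            tp = np.logical_and(pred, gt > 0.5).sum()
            precision = tp / (pred.sum() + 1e-8)
            recall = tp / ((gt > 0.5).sum() + 1e-8)
            f = (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)
            best = max(best, f)
        return best

    def mae(sal, gt):
        return np.abs(sal - gt).mean()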

4.2 Comparison with State-of-the-arts

To evaluate the performance, we compare our method with 13 state-of-the-art algorithms on the aforementioned six public benchmarks in terms of visual quality, PR curve, maximum F-measure and MAE. These methods include 2 conventional algorithms, DRFI [9] and MR [35], as well as 11 deep learning models: RFCN [30], Amulet [37], UCF [38], NLDF [22], SRM [31], PAGRN [39], BRN [32], CKT [15], BMP [36], PCA [19] and RA [2].

(a) image (b) gt (c) ours (B + CL + HGAM) (d) B (e) B + CL (f) B + HGAM
Figure 5: Visual examples under different settings. B, CL and HGAM denote the baseline network, Contour Loss and HGAM respectively.
Visual Comparison.

The visual comparison between ours and other state-of-the-art methods is shown in Fig 7. It can be observed that our method detects the target objects well in various situations, i.e., objects that are very large or very small (rows 1 and 2), objects touching image edges (row 1), objects touching other inconsequential items (row 3), multiple objects (row 4) and objects whose appearance is similar to the background (row 5). It is also worth noting that our results have finer boundaries and more precise localization of salient regions, thanks to the effects of Contour Loss and HGAM respectively.

F-measure and MAE.

In Table 1, we show quantitative evaluation results between ours and other superior methods under the maximum F-measure and MAE metrics. Although we only utilize the VGG-16 backbone without any post-processing methods such as CRF [11], our model surpasses all existing networks and, to the best of our knowledge, refreshes state-of-the-art performance on the benchmarks by 1 to 2 percent.

PR curve.

In Fig 4, we compare our approach with other state-of-the-art methods in terms of PR curve on 4 benchmarks. It can be observed that our model consistently outperforms all the other methods.

Computational Cost.

Besides, Table 1 also provides the computation speed of the state-of-the-art methods on an NVIDIA TITAN X GPU. Our approach takes only 0.038 s (corresponding to 26 fps) to generate a saliency map, which is faster than most other mainstream methods.

(a) image (b) gt (c)–(h) attention maps at different levels
Figure 6: Visualization of the attention maps generated by HGAM.
Setting | DUTS-TE [29] (maxF / MAE) | DUT-O [35] (maxF / MAE)
B | 0.848 / 0.050 | 0.787 / 0.070
B + CL | 0.861 / 0.044 | 0.806 / 0.063
B + HGAM | 0.860 / 0.048 | 0.801 / 0.069
ours (B + CL + HGAM) | 0.872 / 0.042 | 0.825 / 0.058
Table 2: Comparison of different settings in terms of maximum F-measure (maxF) and MAE. B, CL and HGAM represent the baseline network, Contour Loss and HGAM respectively. The full model achieves the best performance.
image gt ours PCA [19] BMP [36] PAGRN [39] BRN [32] CKT [15] UCF [38] DRFI [9]
Figure 7: Visual comparison between ours and other state-of-the-art methods.

4.3 Ablation Study

To evaluate the effectiveness of the proposed Contour Loss and HGAM, we show quantitative and visual comparisons under different settings. Table 2 shows the quantitative comparison, which demonstrates that utilizing Contour Loss or HGAM alone can enhance the baseline performance by nearly 2 percent. Incorporating both Contour Loss and HGAM makes a further improvement of 1 to 2 percent on the two large datasets, which proves that Contour Loss and HGAM refine the saliency results from different aspects.

In Fig 5, compared with the baseline result, Contour Loss obtains finer boundaries while HGAM is better at eliminating background distractions. Since our full result outperforms the others in both boundary quality and background elimination, it shows that incorporating Contour Loss and HGAM leads to mutual promotion in training.

4.4 HGAM Visualization

As shown in Fig 6, we visualize the attention maps generated by HGAM to further understand how it works. We can observe that these attention maps show fine-to-coarse locations of salient objects from the lowest to the highest level, which matches the global attention mechanism at different resolutions. It is worth noting that, different from the other attention maps which only focus on salient regions, one of the attention maps assigns higher values to regions corresponding to the background. As the following attention map abnegates these background regions well, we reckon that the model also needs to perceive background regions in order to eliminate insignificant features. Moreover, as the finest attention map shows clear boundaries of the salient objects, this convincingly demonstrates the mutual promotion of Contour Loss and HGAM in boundary-aware learning.

5 Conclusion

We propose Contour Loss and HGAM to help networks learn to better detect salient objects in the visual range. Contour Loss forces the network to learn boundary-wise distinctions between salient objects and background, while HGAM enables the model to capture global contextual information at all resolutions. Experimental results on six datasets demonstrate that our proposed approach outperforms 13 state-of-the-art methods under different evaluation metrics.

References

  • [1] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk (2012) SLIC superpixels compared to state-of-the-art superpixel methods. IEEE TPAMI 34 (11), pp. 2274–2282. Cited by: §2.2.
  • [2] S. Chen, X. Tan, B. Wang, and X. Hu (2018) Reverse attention for salient object detection. In ECCV, Cited by: §2.1.2, §3.3, Table 1, §4.2.
  • [3] M. Cheng, N. J. Mitra, X. Huang, P. H. Torr, and S. Hu (2015) Global contrast based salient region detection. IEEE TPAMI 37 (3), pp. 569–582. Cited by: §1, §2.1.1.
  • [4] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei (2009) ImageNet: A Large-Scale Hierarchical Image Database. In CVPR, Cited by: §3.1.
  • [5] K. He, X. Zhang, S. Ren, and S. Jian (2016) Deep residual learning for image recognition. In CVPR, Cited by: Table 1, Figure 7.
  • [6] Q. Hou, M. Cheng, X. Hu, A. Borji, Z. Tu, and P. Torr (2017) Deeply supervised salient object detection with short connections. In CVPR, Cited by: §3.1.
  • [7] P. Hu, B. Shuai, J. Liu, and G. Wang (2017) Deep level sets for salient object detection. In CVPR, Cited by: §2.2.
  • [8] L. Itti, C. Koch, and E. Niebur (1998) A model of saliency-based visual attention for rapid scene analysis. IEEE TPAMI (11), pp. 1254–1259. Cited by: §1.
  • [9] H. Jiang, J. Wang, Z. Yuan, Y. Wu, N. Zheng, and S. Li (2013) Salient object detection: a discriminative regional feature integration approach. In CVPR, Cited by: §2.1.1, Table 1, §4.2.
  • [10] D. A. Klein and S. Frintrop (2011) Center-surround divergence of feature statistics for salient object detection. In ICCV, Cited by: §2.1.1.
  • [11] P. Krähenbühl and V. Koltun (2011) Efficient inference in fully connected crfs with gaussian edge potentials. In NIPS, Cited by: §4.2.
  • [12] J. Kuen, Z. Wang, and G. Wang (2016) Recurrent attentional networks for saliency detection. In CVPR, Cited by: §2.1.2.
  • [13] G. Li and Y. Yu (2015) Visual saliency based on multiscale deep features. In CVPR, Cited by: §2.2, §4.1.
  • [14] X. Li, L. Zhao, L. Wei, M. Yang, F. Wu, Y. Zhuang, H. Ling, and J. Wang (2016) Deepsaliency: multi-task deep neural network model for salient object detection. TIP 25 (8), pp. 3919–3930. Cited by: §2.2.
  • [15] X. Li, F. Yang, H. Cheng, W. Liu, and D. Shen (2018) Contour knowledge transfer for salient object detection. In ECCV, Cited by: §1, §2.2, Table 1, Figure 7, §4.2.
  • [16] Y. Li, X. Hou, C. Koch, J. M. Rehg, and A. L. Yuille (2014) The secrets of salient object segmentation. In CVPR, Cited by: §4.1.
  • [17] T. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie (2017) Feature pyramid networks for object detection. In CVPR, Cited by: §1, §1, §2.1.2, §3.1.
  • [18] T. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár (2017) Focal loss for dense object detection. In ICCV, Cited by: §1, §3.2.
  • [19] N. Liu, J. Han, and M. Yang (2018) PiCANet: learning pixel-wise contextual attention for saliency detection. In CVPR, Cited by: Figure 1, §1, §2.1.2, Table 1, Figure 7, §4.2.
  • [20] N. Liu and J. Han (2016) Dhsnet: deep hierarchical saliency network for salient object detection. In CVPR, Cited by: §2.1.2.
  • [21] J. Long, E. Shelhamer, and T. Darrell (2015) Fully convolutional networks for semantic segmentation. In CVPR, Cited by: §1, §2.1.2.
  • [22] Z. Luo, A. Mishra, A. Achkar, J. Eichel, S. Li, and P. Jodoin (2017) Non-local deep features for salient object detection. In CVPR, Cited by: §1, §2.2, Table 1, §4.1, §4.2.
  • [23] A. Manno-Kovacs (2019-02) Direction selective contour detection for salient objects. TCSVT 29 (2), pp. 375–389. Cited by: §2.2.
  • [24] V. Movahedi and J. H. Elder (2010) Design and perceptual validation of performance measures for salient object segmentation. In CVPR Workshops, Cited by: §4.1.
  • [25] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in pytorch. In NIPS Workshops, Cited by: §4.1.
  • [26] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In MICCAI, pp. 234–241. Cited by: §1, §2.1.2.
  • [27] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556. External Links: Link, 1409.1556 Cited by: §1, Figure 2, §3.1, Table 1, §4.1.
  • [28] S. Susstrunk, R. Achanta, F. Estrada, and S. Hemami (2009) Frequency-tuned salient region detection. In CVPR, Cited by: §4.1, §4.1.
  • [29] L. Wang, H. Lu, Y. Wang, M. Feng, D. Wang, B. Yin, and X. Ruan (2017) Learning to detect salient objects with image-level supervision. In CVPR, Cited by: §4.1, Table 2.
  • [30] L. Wang, L. Wang, H. Lu, P. Zhang, and X. Ruan (2016) Saliency detection with recurrent fully convolutional networks. In ECCV, Cited by: §2.1.2, Table 1, §4.2.
  • [31] T. Wang, A. Borji, L. Zhang, P. Zhang, and H. Lu (2017) A stagewise refinement model for detecting salient objects in images. In ICCV, Cited by: Table 1, §4.2.
  • [32] T. Wang, L. Zhang, S. Wang, H. Lu, G. Yang, X. Ruan, and A. Borji (2018) Detect globally, refine locally: a novel approach to saliency detection. In CVPR, Cited by: §1, §2.1.2, Table 1, Figure 7, §4.2.
  • [33] S. Woo, J. Park, J. Lee, and I. So Kweon (2018) CBAM: convolutional block attention module. In ECCV, Cited by: §3.3.
  • [34] Q. Yan, L. Xu, J. Shi, and J. Jia (2013) Hierarchical saliency detection. In CVPR, Cited by: §2.1.1, §4.1.
  • [35] C. Yang, L. Zhang, H. Lu, X. Ruan, and M. Yang (2013) Saliency detection via graph-based manifold ranking. In CVPR, Cited by: §1, §2.2, Table 1, §4.1, §4.2, Table 2.
  • [36] L. Zhang, J. Dai, H. Lu, Y. He, and G. Wang (2018) A bi-directional message passing model for salient object detection. In CVPR, Cited by: Table 1, Figure 7, §4.1, §4.2.
  • [37] P. Zhang, D. Wang, H. Lu, H. Wang, and X. Ruan (2017) Amulet: aggregating multi-level convolutional features for salient object detection. In ICCV, Cited by: §2.1.2, Table 1, §4.2.
  • [38] P. Zhang, D. Wang, H. Lu, H. Wang, and B. Yin (2017) Learning uncertain convolutional features for accurate saliency detection. In ICCV, Cited by: Table 1, Figure 7, §4.2.
  • [39] X. Zhang, T. Wang, J. Qi, H. Lu, and G. Wang (2018) Progressive attention guided recurrent network for salient object detection. In CVPR, Cited by: §1, §2.1.2, Table 1, Figure 7, §4.2.