Salient Object Detection via High-to-Low Hierarchical Context Aggregation

12/28/2018 ∙ by Yun Liu, et al. ∙ Nankai University

Recent progress on salient object detection mainly focuses on how to effectively integrate convolutional side-output features in convolutional neural networks (CNNs). Accordingly, most existing state-of-the-art saliency detectors design complex network structures to fuse the side-output features of the backbone feature extraction networks. However, must the fusion strategies become ever more complex for accurate salient object detection? In this paper, we observe that the contexts of a natural image can be well expressed by high-to-low self-learning of side-output convolutional features. The contexts of an image usually refer to its global structures, and the top layers of a CNN usually learn to convey global information, whereas it is difficult for the intermediate side-output features to express contextual information. Therefore, we design an hourglass network with intermediate supervision to learn contextual features in a high-to-low manner. The learned hierarchical contexts are aggregated to generate a hybrid contextual expression for an input image. Finally, the hybrid contextual features are used for accurate saliency estimation. We extensively evaluate our method on six challenging saliency datasets, and our simple method achieves state-of-the-art performance under various evaluation metrics. Code will be released upon paper acceptance.

1 Introduction

Salient object detection, also known as saliency detection, aims at simulating the human vision system to detect the most conspicuous and eye-attracting objects or regions in a natural image [1, 7]. Progress in saliency detection has benefited a wide range of vision applications, including image retrieval [11], visual tracking [33], scene classification [36], content-aware video compression [61], and weakly supervised learning [46, 47]. Although numerous valuable models have been presented [25, 4, 57, 29, 17, 53, 15] and significant progress has been made, it remains an open problem to accurately detect salient objects in static images, especially in complicated scenarios.

Figure 1: Visualization of our learned contexts at different sides of the neural network. The contexts at lower sides are learned under the guidance of top global contexts to only emphasize the details of salient objects.

Conventional saliency detection methods [7, 19, 39] usually design hand-crafted low-level features and heuristic priors, which have difficulty describing semantic objects and scenes. Recent progress on saliency detection is mainly attributable to convolutional neural networks (CNNs) [32, 26, 57, 45, 54, 21, 22]. The backbone of a CNN usually consists of several blocks of stacked convolutional and pooling layers, in which the blocks near the network input are called bottom sides and those near the output are called top sides. It is well accepted that the top sides of a CNN contain semantically meaningful information while the bottom sides contain complementary spatial details [48, 30, 16]. Therefore, current state-of-the-art saliency detectors [4, 51, 45, 54, 29, 44, 55, 43, 16] mainly aim at designing complex network structures to fuse the features or results from various side-outputs. For example, Hou et al. [16] carefully selected several combinations of side-output results and fused them for accurate saliency segmentation. Wang et al. [44] proposed a recurrent module to filter out noisy information from side-output features. Although significant progress has been made in this direction [16, 55, 44], the side-output fusion strategies have become more and more complex. Do we have to continue in this direction to further improve saliency detection?

To answer this question, we note that some recent studies [58, 52] find that CNNs can learn global contextual information for input images at the top convolution layers by enlarging receptive fields. This is not directly applicable to saliency detection, because saliency detection requires not only global contextual information but also local spatial details. Instead of fusing side-output features in a complicated way as in [4, 57, 51], we consider constructing hierarchical contextual features. Specifically, we flow the global contextual information obtained at the top sides into the bottom sides. The top contextual information learns to guide the bottom sides to construct contextual features at fine spatial scales that emphasize only salient objects. Hence the obtained contexts differ from side-output features, or combinations of them, which mainly carry local representations of an image. A visualization of the contexts learned by our model can be found in Figure 1.

Intuitively, the hierarchical contexts should be learned in a high-to-low manner: the top sides learn contexts first, and then the bottom sides can learn contexts at larger spatial resolutions using the information flowing from the top sides. Hence we build an hourglass network and add intermediate supervision after the context module at each side. During training, we find that the top sides are automatically optimized first, which is consistent with our hypothesis; this is demonstrated in Section 4. Finally, we simply aggregate the hierarchical contexts for accurate salient object detection. The experimental results demonstrate that our simple idea favorably outperforms recent state-of-the-art methods that use heavily engineered networks. Our contributions can be summarized as follows:

  • We build an hourglass network with intermediate supervision to learn hierarchical contexts, which are generated with the guidance of global contextual information and thus only emphasize salient objects at different scales.

  • We propose a hierarchical context aggregation module to ensure the network is optimized from the top sides to bottom sides. We aggregate the learned hierarchical contexts at different scales to perform accurate salient object detection unlike previous studies [16, 55, 43] that fuse side-output features or some complex combinations of side-outputs.

  • We extensively compare our method with recent state-of-the-art methods on six popular datasets. Our simple method favorably outperforms these competitors under various metrics.

2 Related Work

Salient object detection is a very active research field due to its wide applications and challenging scenarios. Here, we briefly divide the related work into four parts to review the development of saliency detection and context learning.

Heuristic saliency detection methods usually extract hand-crafted low-level features and apply machine learning models to classify these features. Some heuristic saliency priors are utilized to improve accuracy, such as color contrast [1, 7], center prior [20, 19] and background prior [50, 60]. DRFI [19] is a comprehensive representative of this kind of methods, integrating various features and priors. However, it is difficult for low-level features to describe semantic information, and the saliency priors are not robust enough for complicated scenarios. Hence deep learning based methods have dominated this field due to their powerful representation capability.

Region-based saliency detection appeared in the early era of deep learning based saliency. These approaches view each image patch as a basic processing unit for saliency detection. Lee et al. [21] utilized both low-level hand-crafted features and high-level deep features to classify candidate regions as salient or not; the low-level features are compared with other parts of an image to form a distance map that is then encoded by the CNN. Wang et al. [40] presented a two-stage training strategy to rank segmented object proposals, in which the first stage extracts features and the second stage predicts the saliency score for each region. Li et al. [23] extracted multi-scale deep features that are used to infer the saliency scores for image segments.


Figure 2: Overall framework of our proposed method. Our effort starts from the VGG16 network [38]. We add an additional convolution block at the end of the convolution layers of VGG16, resulting in six convolution blocks in total. The contexts at each convolution block are learned in a high-to-low manner to ensure that each block is guided by all higher layers to generate scale-aware contexts. The Hierarchical Context Aggregation (HCA) module can guarantee the optimization order is high-to-low and aggregate the generated hierarchical contexts to predict the final saliency maps.

CNN-based image-to-image saliency detection models [4, 57, 51, 45, 54, 29, 44, 17, 55, 5, 43, 27, 28, 16, 24, 32, 26] take saliency detection as a pixel-wise binary classification task and perform image-to-image predictions. For example, Chen et al. [5] proposed a two-stream network which consists of a fixation stream and a semantic stream. Zhang et al. [57] introduced an attention guided network that progressively integrates multiple layer-wise attention for saliency detection. Islam et al. [17] introduced a new deep learning solution with a hierarchical representation of relative saliency and stage-wise refinement.

How to effectively fuse multi-level CNN features is the main research direction for CNN-based saliency detection methods [4, 51, 45, 54, 29, 44, 55, 43, 16, 24, 32, 26]. There are too many studies to list here, but the general trend of recent designs is to become more and more complicated. We provide a detailed discussion of these methods in Section 4. Compared with them, we focus on a simple yet effective design in this paper.

Context learning has recently been explored in semantic segmentation [58, 52]. Zhao et al. [58] added a pyramid pooling module for global context construction upon the final layer of the deep network, which significantly improved semantic segmentation performance. Zhang et al. [52] built a context encoding module using the encoding layer [9] on top of the network to conduct accurate semantic segmentation. In saliency detection, Wang et al. [43] followed [58] and used the pyramid pooling module to extract contextual information. Zhao et al. [59] proposed a global context module and a local context module to extract global and local contexts: the global context module is fed with a superpixel-centered large window covering the full image, while the local context module takes a superpixel-centered small window with a small image patch. Hence the goal of extracting multiple contexts in [59] is achieved via multi-scale inputs.
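For concreteness, the pyramid pooling idea of [58] can be sketched as follows. This is a minimal PyTorch-style illustration under common assumptions (pooling bins of 1, 2, 3 and 6, and a 1×1 channel-reduction conv per branch); it is not the exact configuration used in [58] or [43].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    """Minimal sketch of a pyramid pooling module in the spirit of [58]."""
    def __init__(self, in_channels, bins=(1, 2, 3, 6)):
        super().__init__()
        branch_channels = in_channels // len(bins)
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.AdaptiveAvgPool2d(bin_size),              # pool to bin_size x bin_size
                nn.Conv2d(in_channels, branch_channels, 1),  # reduce channels
                nn.ReLU(inplace=True),
            )
            for bin_size in bins
        )

    def forward(self, x):
        h, w = x.shape[2:]
        # Upsample every pooled branch back to the input resolution and concatenate
        # with the original features, yielding a context-enriched representation.
        pooled = [F.interpolate(branch(x), size=(h, w), mode="bilinear",
                                align_corners=False) for branch in self.branches]
        return torch.cat([x] + pooled, dim=1)
```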

A full literature review of salient object detection is beyond the scope of this paper; please refer to [2, 8, 12] for a more comprehensive survey. In this paper, we focus on context learning rather than the multi-level feature fusion of previous work to improve saliency detection. Different from [43], which uses multiple networks, each with a pyramid pooling module [58] at the top, we propose an elegant single network. Different from [59], which uses multi-scale inputs, we use single-scale inputs to extract multi-level contexts. The resulting model is simple yet effective.

3 Approach

In this section, we elaborate on our proposed framework for salient object detection. We first introduce our base network in Section 3.1. Then, we present the Mirror-linked Hourglass Network (MLHN) in Section 3.2. A detailed description of the Hierarchical Context Aggregation (HCA) module is provided in Section 3.3. The overall network architecture is shown in Figure 2.

3.1 Base Network

To tackle salient object detection, we follow recent studies [5, 43, 16] and use fully convolutional networks. Specifically, we use the well-known VGG16 network [38] as our backbone, whose final fully connected layers are removed so that it can serve for image-to-image translation. Salient object detection usually requires global information to judge which objects are salient [7], so enlarging the receptive field of the network is helpful. To this end, we retain the final pooling layer as in [16] and follow [3] to transform the last two fully connected layers into convolution layers, each with 1024 output channels. Therefore, there are five pooling layers in the backbone. They divide the convolution layers into six convolution blocks, which we denote, from bottom to top, as the first through sixth blocks. We consider the sixth (top) block as the valve that controls the overall contextual information flow in the network. The resolution of the feature maps in each convolution block is half that of the preceding one. Following [16, 48], the side-output of each convolution block refers to the connection from the last layer of that block.
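A minimal PyTorch-style sketch of this six-block backbone is given below. The split points follow torchvision's VGG16 layout, and the kernel sizes of the two 1024-channel convolutions (3×3 and 1×1) are assumptions, since the excerpt only gives their channel counts; in practice the VGG16 layers would be initialized from ImageNet-pretrained weights.

```python
import torch.nn as nn
from torchvision.models import vgg16

class Backbone(nn.Module):
    """Sketch: five VGG16 conv blocks separated by pooling layers, plus a sixth
    block (final pool + two 1024-channel convolutions) replacing fc6/fc7."""

    def __init__(self):
        super().__init__()
        vgg = vgg16(weights=None).features  # 13 conv layers interleaved with 5 max-pools
        # Cut points so that each block ends with its last conv+ReLU and the next
        # block starts with the pooling layer that halves the resolution.
        cuts = [0, 4, 9, 16, 23, 30]
        self.blocks = nn.ModuleList(
            nn.Sequential(*vgg[cuts[i]:cuts[i + 1]]) for i in range(5)
        )
        # Sixth block: the retained final pooling layer plus the converted fc layers.
        self.blocks.append(nn.Sequential(
            vgg[30],                                                # pool5 (kept, as in [16])
            nn.Conv2d(512, 1024, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(1024, 1024, kernel_size=1), nn.ReLU(inplace=True),
        ))

    def forward(self, x):
        side_outputs = []
        for block in self.blocks:
            x = block(x)
            side_outputs.append(x)  # side-output = last layer of each block
        return side_outputs
```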

3.2 Mirror-linked Hourglass Network

Based on the backbone net, we build a Mirror-linked Hourglass Network (MLHN). An overview of MLHN is displayed in Figure 2. Let $F_i$ denote the side-output features of the $i$-th convolution block and $C_i$ the contextual features learned at the $i$-th side, with $C_6$ denoting the output features of the sixth block. More concretely, the features of the sixth block are upsampled by a factor of two, and a convolution layer (w/o non-linearization) is connected after the fifth block; the resulting two feature maps are fused by element-wise summation. For the upsampling, the side-output of the sixth block is first connected to a convolution layer (w/o non-linearization), which is followed by a deconvolution layer that upsamples the feature map by a factor of two using bilinear interpolation. A crop operation ensures that the upsampled feature map has the same size as the feature map of the fifth block. To convert the fused feature map into contextual information, two sequential convolution layers are then connected to obtain the contextual features $C_5$. These two convolution layers play the role of a transform function $\mathcal{T}_5$, which uses the contextual information of $C_6$ to guide the features of the fifth block to generate the contexts $C_5$. The contextual features $C_4, C_3, C_2, C_1$ are obtained in the same way. For a clear presentation, this can be formulated as

$$C_i = \mathcal{T}_i\big(\mathrm{Conv}(F_i) + \mathrm{Up}(\mathrm{Conv}(C_{i+1}))\big), \quad i = 5, 4, \dots, 1, \qquad (1)$$

where $\mathrm{Conv}(\cdot)$ denotes a convolution without non-linearization and $\mathrm{Up}(\cdot)$ denotes $2\times$ bilinear upsampling with cropping to match sizes.

In contrast, a standard encoder-decoder network can be formulated as

$$D_i = \mathcal{T}_i\big(\mathrm{Up}(D_{i+1})\big), \qquad (2)$$

where the decoder features $D_i$ are computed only from the upper decoder stage, without mirror links to the encoder side-outputs.

In this way, the proposed MLHN gradually flows the top contextual information into the lower sides, so the lower sides are expected to emphasize only the details of salient regions in an image.

The two sequential convolution layers (orange box in Figure 2) use different kernel sizes for different decoder sides; their numbers of output channels are 512, 256, 256, 128 and 128 from the fifth to the first side, respectively. On the one hand, the encoded features in the base network are connected to the decoder in a mirror-linked way. On the other hand, the proposed network is symmetric with the sixth block as its center, just like an hourglass. Hence we call our network the Mirror-linked Hourglass Network (MLHN).
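To make the data flow concrete, here is a minimal PyTorch-style sketch of one mirror-linked decoder step as described above. The kernel sizes, the use of 1×1 convolutions on the two fusion branches, and the choice of ReLU in the transform are assumptions, since the excerpt does not fix these details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextStage(nn.Module):
    """One high-to-low step of the MLHN (sketch): fuse the upsampled higher-side
    context with the current side's features, then apply two convolutions
    (the transform T_i) to produce the context at this side."""
    def __init__(self, side_channels, upper_channels, out_channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.skip = nn.Conv2d(side_channels, out_channels, 1)     # conv on the encoder side, no non-linearity
        self.reduce = nn.Conv2d(upper_channels, out_channels, 1)  # conv before upsampling, no non-linearity
        self.transform = nn.Sequential(                           # two sequential convolution layers
            nn.Conv2d(out_channels, out_channels, kernel_size, padding=pad), nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size, padding=pad), nn.ReLU(inplace=True),
        )

    def forward(self, side_feat, upper_context):
        up = F.interpolate(self.reduce(upper_context), size=side_feat.shape[2:],
                           mode="bilinear", align_corners=False)  # 2x bilinear upsampling + size matching
        return self.transform(self.skip(side_feat) + up)          # element-wise summation, then transform
```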


Figure 3: Hierarchical Context Aggregation (HCA) module used in our proposed network. All sides of the backbone have intermediate supervision to ensure that the optimization is performed from high sides to lower sides, so that every side can learn the contextual information. The hierarchical contexts from all sides are concatenated for final saliency map prediction.

Figure 4: Illustration of different multi-scale deep learning architectures: (a) hyper feature learning; (b) FCN style; (c) HED style; (d) DSS style; (e) encoder-decoder networks; (f) our HCA network. The connections in the above figures can be any network configurations, e.g., any types of CNN layers or combinations of them.

3.3 Hierarchical Context Aggregation

Intuitively, the proposed MLHN should be optimized from the top sides to the bottom sides, because the global contextual information is contained in the top sides and gradually flows to the bottom sides. Therefore, unlike previous encoder-decoder networks [37, 31] that impose supervision only at the final layer of the decoder, we adopt supervision at all context learning stages through a Hierarchical Context Aggregation (HCA) module. The HCA module is shown in Figure 3.

The side-output of each decoder side is first connected with two convolution layers, whose kernel sizes vary across the sides; their numbers of channels are 512, 512, 256, 256, 128 and 128 from the sixth to the first side, respectively. Then, we add a convolution layer without non-linearization to decrease the number of channels to 25 at each side. The resulting 25-channel map is the context map of that side. A deconvolution layer with a fixed bilinear kernel is employed to upsample the context map to the size of the original image. To make this process clearer, we formulate it as

$$M_i = \mathrm{Up}\big(\lambda(\phi_i(C_i))\big), \qquad (3)$$

in which $\lambda$ is a linear transformation for channel reorganization, $\phi_i$ denotes the two convolution layers that transform the fused features at each stage into contexts at the corresponding scale, $\mathrm{Up}(\cdot)$ is the bilinear upsampling to the original image size, and $M_i$ is the resulting context map of side $i$.

The saliency prediction map at each side can then be obtained by simply adding a convolution (w/o non-linearization). We place the intermediate supervision here for each side to help the top sides be optimized first. The upsampled context maps of all sides are aggregated by concatenation, and two further convolution layers fuse the hierarchical contexts into the final high-quality prediction of saliency maps. We empirically find that large kernel sizes are slightly helpful here, but they also slow the network down because the aggregated context map has the size of the original image. Therefore, we avoid larger kernels for this fusion.
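The per-side branches and the aggregation can be sketched as follows in PyTorch style. The kernel sizes of the per-side and fusion convolutions, and the fusion-layer widths, are assumptions; only the 25-channel context maps, the bilinear upsampling, the concatenation, and the per-side supervision come from the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HCAHead(nn.Module):
    """Sketch of the Hierarchical Context Aggregation head."""
    def __init__(self, side_channels, context_channels=25):
        super().__init__()
        self.side_branches = nn.ModuleList()
        self.side_preds = nn.ModuleList()
        for ch in side_channels:
            self.side_branches.append(nn.Sequential(
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, context_channels, 1),                    # linear channel reorganization
            ))
            self.side_preds.append(nn.Conv2d(context_channels, 1, 1))  # side prediction for supervision
        n = len(side_channels) * context_channels
        self.fuse = nn.Sequential(                                     # fuse aggregated hierarchical contexts
            nn.Conv2d(n, n, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(n, 1, 1),
        )

    def forward(self, decoder_feats, image_size):
        contexts, side_outputs = [], []
        for feat, branch, pred in zip(decoder_feats, self.side_branches, self.side_preds):
            ctx = F.interpolate(branch(feat), size=image_size,
                                mode="bilinear", align_corners=False)  # upsample context map to image size
            contexts.append(ctx)
            side_outputs.append(pred(ctx))                             # per-side saliency logits
        fused = self.fuse(torch.cat(contexts, dim=1))                  # final saliency prediction
        return fused, side_outputs
```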

The essential function of HCA lies in three aspects. Firstly, the intermediate supervision of HCA can help MLHN be optimized from top to bottom, so that the global contextual information at top sides will flow to bottom sides gradually. Secondly, the added convolution layers can encourage each side to generate contexts at the corresponding scale. Thirdly, the hierarchical contexts at all sides are aggregated for final saliency map prediction, unlike previous methods [16, 48, 30] that compute final results by fusing results of various side-outputs.
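The supervision setup described above can be summarized in a short sketch. It assumes per-pixel binary cross-entropy as the side and fusion losses with equal weights, which the excerpt does not state explicitly; it simply illustrates that every side prediction and the fused prediction are supervised.

```python
import torch.nn.functional as F

def hca_loss(fused_pred, side_preds, gt):
    """Total loss = fusion loss + intermediate supervision on every side prediction.
    fused_pred: (N,1,H,W) logits; side_preds: list of (N,1,H,W) logits; gt: (N,1,H,W) float in {0,1}."""
    loss = F.binary_cross_entropy_with_logits(fused_pred, gt)
    for pred in side_preds:
        loss = loss + F.binary_cross_entropy_with_logits(pred, gt)
    return loss
```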

4 Architectural Analyses

Due to the nature of multi-scale and multi-level learning in deep neural networks, a large number of architectures have emerged that are designed to exploit hierarchical deep features. For example, multi-scale learning can use skip-layer connections [13, 31], which are widely adopted owing to their strong capability to fuse hierarchical deep features inside the networks. On the other hand, multi-scale learning can use encoder-decoder networks that progressively decode the hierarchical deep representations learned in the encoder backbone. We have seen both structures applied in various vision tasks.

We continue our discussion by briefly categorizing multi-scale deep learning into five classes: hyper feature learning, FCN style, HED style, DSS style and encoder-decoder networks. An overall illustration is given in Figure 4. The following discussion will clearly show the differences between our proposed HCA network and previous efforts on multi-scale learning.

Hyper feature learning: Hyper feature learning [13] is the most intuitive way to pursue multi-scale information, as illustrated in Figure 4(a). Examples of this structure for saliency include [24, 51, 5, 43, 27]. These models concatenate or sum multi-scale deep features from multiple levels of backbone nets [24, 51] or from branches of multi-stream nets [5, 43, 27]. The fused hyper features are then used for final predictions.


Figure 5: Side losses over the first 2000 training iterations. At the beginning, the losses of the top sides drop quickly, but the bottom sides eventually reach smaller losses.

FCN style: Since the top sides of neural networks usually contain more reliable semantic information, a reasonable revision of hyper feature learning is to progressively fuse deep features from upper layers to lower layers [31, 37], as shown in Figure 4(b). The top semantic features will combine with bottom low-level features to capture fine-grained details. The feature fusion can be a simple element-wise summation [31], a simple feature map concatenation (U-Net) [37], or more complex designs based on them.

Most recent saliency models fall into this category [57, 45, 54, 29, 44, 17, 55]. They differ from each other by applying different fusion strategies. One notable similarity of these models is that the final prediction is produced from the fused feature maps at the largest scale. Hence the final fused features are expected to capture both global semantic information and local low-level details. To better achieve this goal, recent state-of-the-art models have designed very complex fusion strategies [29, 44, 4].

Methods DUTS-test ECSSD SOD HKU-IS DUT-OMRON THUR15K
Fβ MAE   Fβ MAE   Fβ MAE   Fβ MAE   Fβ MAE   Fβ MAE
Non-deep learning
DRFI [19] 0.649 0.154 0.777 0.161 0.704 0.217 0.774 0.146 0.652 0.138 0.670 0.150
VGG16 [38] backbone
MDF [23] 0.707 0.114 0.807 0.138 0.764 0.182 - - 0.680 0.115 0.669 0.128
LEGS [40] 0.652 0.137 0.830 0.118 0.733 0.194 0.766 0.119 0.668 0.134 0.663 0.126
DCL [24] 0.785 0.082 0.895 0.080 0.831 0.131 0.892 0.063 0.733 0.095 0.747 0.096
DHS [26] 0.807 0.066 0.903 0.062 0.822 0.128 0.889 0.053 - - 0.752 0.082
ELD [21] 0.727 0.092 0.866 0.081 0.758 0.154 0.837 0.074 0.700 0.092 0.726 0.095
RFCN [42] 0.782 0.089 0.896 0.097 0.802 0.161 0.892 0.080 0.738 0.095 0.754 0.100
NLDF [32] 0.806 0.065 0.902 0.066 0.837 0.123 0.902 0.048 0.753 0.080 0.762 0.080
DSS [16] 0.827 0.056 0.915 0.056 0.842 0.122 0.913 0.041 0.774 0.066 0.770 0.074
Amulet [55] 0.778 0.085 0.913 0.061 0.795 0.144 0.897 0.051 0.743 0.098 0.755 0.094
UCF [56] 0.772 0.112 0.901 0.071 0.805 0.148 0.888 0.062 0.730 0.120 0.758 0.112
PiCA [29] 0.837 0.054 0.923 0.049 0.836 0.102 0.916 0.042 0.766 0.068 0.783 0.083
C2S [25] 0.811 0.062 0.907 0.057 0.819 0.122 0.898 0.046 0.759 0.072 0.775 0.083
RAS [4] 0.831 0.059 0.916 0.058 0.847 0.123 0.913 0.045 0.785 0.063 0.772 0.075
HCA (ours) 0.858 0.044 0.933 0.042 0.856 0.108 0.927 0.031 0.791 0.057 0.788 0.071
ResNet [14] backbone
SRM [43] 0.826 0.059 0.914 0.056 0.840 0.126 0.906 0.046 0.769 0.069 0.778 0.077
BRN [44] 0.827 0.050 0.919 0.043 0.843 0.103 0.910 0.036 0.774 0.062 0.769 0.076
PiCA [29] 0.853 0.050 0.929 0.049 0.852 0.103 0.917 0.043 0.789 0.065 0.788 0.081
HCA (ours) 0.875 0.040 0.942 0.036 0.865 0.099 0.934 0.029 0.819 0.054 0.796 0.069
Table 1: Comparison of the proposed HCA and 16 competitors in terms of max F-measure (Fβ) and MAE on six datasets. We report results with both the VGG16 [38] backbone and the ResNet [14] backbone. The top three models in each column are highlighted in red, green and blue, respectively. For ResNet based methods, we only highlight the top performance.

HED style: HED-like networks [48, 30] add deep supervision at the intermediate sides to perform predictions, and the final result is a combination of predictions at all sides (shown in Figure 4(c)). Unlike multi-scale feature fusion, HED performs multi-scale prediction fusion. Chen et al. [4] followed this style to perform saliency detection.

DSS style: DSS network [16] is an extension of HED architecture. The side-output of each network side is fused with side-outputs from some of the upper sides. For each side, which upper sides to choose for fusion is carefully selected by experiments. The difference between HED and DSS can be clearly seen in Figure 4(d).

Encoder-decoder networks: To benefit from the powerful representation capability of deep networks, one can also decode the high-level representation at the top layers [35], as displayed in Figure 4(e). The decoder gradually enlarges its resolution to decode local information from upper layers.

HCA network: We show a streamlined diagram of our proposed HCA network in Figure 4(f). Its left part looks a bit like an FCN (Figure 4(b)) or an encoder-decoder network (Figure 4(e)) with parallel connections. Unlike FCN and encoder-decoder nets, which make predictions from the final fused hybrid features, our HCA network aggregates hierarchical contexts to make predictions. The contexts are learned in a high-to-low manner through the proposed HCA module, so that the top sides, which are optimized first, can generate global contextual information to guide the lower layers to produce scale-specific contexts. We demonstrate this high-to-low optimization in Figure 5, which shows the loss curves of all sides during training: the sixth (top) side is optimized first, and the lower sides follow sequentially. Without carefully designed feature fusion strategies [29, 55, 44, 4], the simple HCA can learn high-quality contexts for accurate salient object detection.


Figure 6: Qualitative comparison of HCA and 13 state-of-the-art methods. Columns, from left to right: Image, DRFI, MDF, DCL, DHS, RFCN, DSS, SRM, Amulet, UCF, BRN, PiCA, C2S, RAS, Ours, GT.

5 Experiments

5.1 Experimental Setup

Implementation Details. We implement the proposed network using the well-known Caffe [18] framework. The convolution layers contained in the original VGG16 [38] are initialized from the publicly available ImageNet-pretrained model [10]. The weights of the other layers are initialized from a zero-mean Gaussian distribution with standard deviation 0.01. The upsampling operations are implemented by deconvolution layers with bilinear interpolation kernels, which are frozen during training. The network is optimized using SGD with the poly learning rate policy, in which the current learning rate equals the base one multiplied by $(1 - \frac{iter}{max\_iter})^{power}$. The hyper-parameters $power$ and $max\_iter$ are set to 0.9 and 20000, respectively, so that training takes 20000 iterations in total. The initial learning rate is set to 1e-7. The momentum and weight decay are set to 0.9 and 0.0005, respectively. All experiments in this paper are performed on a TITAN Xp GPU.
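The poly schedule is straightforward to reproduce; a small helper illustrating it with the settings above:

```python
def poly_lr(base_lr, iteration, max_iter=20000, power=0.9):
    """Poly learning rate policy: lr = base_lr * (1 - iter/max_iter)^power."""
    return base_lr * (1.0 - iteration / max_iter) ** power

# With the paper's settings (base_lr = 1e-7):
# poly_lr(1e-7, 0)      -> 1e-7
# poly_lr(1e-7, 10000)  -> ~5.4e-8   (halfway through training)
# poly_lr(1e-7, 20000)  -> 0.0
```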

Datasets. We extensively evaluate our method on six popular datasets, including DUTS [41], ECSSD [49], SOD [34], HKU-IS [23], THUR15K [6] and DUT-OMRON [50]. These six datasets consist of 15572, 1000, 300, 4447, 6232 and 5168 complex natural images, respectively, with corresponding pixel-wise ground truth labeling. Among them, the DUTS dataset [41] is a recently released challenging dataset consisting of 10553 training images and 5019 test images with very complex scenarios. For fair comparison, we follow recent studies [44, 29, 43, 51] to train on the DUTS training set and test on the DUTS test set and the other five datasets.

Evaluation Criteria.

We utilize two evaluation metrics to evaluate our method as well as other state-of-the-art salient object detectors: the max F-measure score and the mean absolute error (MAE). Given a predicted saliency map with continuous probability values, we binarize it with a series of thresholds and compute the corresponding precision/recall values. Averaging the precision/recall values over all images in a dataset yields a set of mean precision/recall pairs. The F-measure score is then an overall performance indicator:

$$F_\beta = \frac{(1+\beta^2)\cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\beta^2 \cdot \mathrm{Precision} + \mathrm{Recall}}, \qquad (4)$$

in which $\beta^2$ is usually set to 0.3 to put more emphasis on precision. We follow recent studies [32, 16, 55, 56, 29, 25, 4] to report the maximum $F_\beta$ across different thresholds. Given a saliency map $S$ and the corresponding ground truth $G$, both normalized to $[0, 1]$, MAE can be calculated as

$$\mathrm{MAE} = \frac{1}{H \times W}\sum_{i=1}^{H}\sum_{j=1}^{W}\big|S(i,j) - G(i,j)\big|, \qquad (5)$$

where $H$ and $W$ represent the height and width, respectively, and $S(i,j)$ denotes the saliency score at location $(i,j)$, similarly for $G(i,j)$.
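A per-image sketch of these two metrics is given below; note that the protocol above averages precision/recall over the whole dataset at each threshold before computing $F_\beta$, so a faithful benchmark evaluation would aggregate across images first. The threshold count and epsilon terms are implementation choices, not values from the paper.

```python
import numpy as np

def mae(saliency, gt):
    """Mean absolute error (Eq. 5) between maps normalized to [0, 1]."""
    return np.abs(saliency.astype(np.float64) - gt.astype(np.float64)).mean()

def max_f_measure(saliency, gt, beta2=0.3, num_thresholds=255):
    """Max F-measure (Eq. 4) over a sweep of binarization thresholds, beta^2 = 0.3."""
    gt = gt > 0.5
    best = 0.0
    for t in np.linspace(0.0, 1.0, num_thresholds, endpoint=False):
        pred = saliency >= t
        tp = np.logical_and(pred, gt).sum()
        precision = tp / (pred.sum() + 1e-8)
        recall = tp / (gt.sum() + 1e-8)
        f = (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)
        best = max(best, f)
    return best
```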

No. Module Side 1 Side 2 Side 3 Side 4 Side 5 Side 6
1 MLHN -
2 HCA
3 MLHN -
4 MLHN
5 HCA
6 MLHN -
7 HCA
* MLHN -
HCA
Table 2: Experimental settings of the ablation studies. * denotes the default settings used in this paper. The Module column indicates which module is changed; the other module keeps the default settings.
No. DUTS-test ECSSD SOD HKU-IS DUT-OMRON THUR15K
Fβ MAE   Fβ MAE   Fβ MAE   Fβ MAE   Fβ MAE   Fβ MAE
1 0.856 0.045 0.932 0.043 0.853 0.107 0.924 0.031 0.783 0.058 0.784 0.071
2 0.860 0.044 0.931 0.043 0.855 0.114 0.927 0.031 0.786 0.058 0.788 0.070
3 0.858 0.045 0.930 0.043 0.848 0.110 0.926 0.031 0.785 0.058 0.787 0.070
4 0.857 0.045 0.933 0.042 0.850 0.113 0.925 0.032 0.789 0.058 0.785 0.071
5 0.856 0.045 0.931 0.043 0.854 0.110 0.925 0.031 0.786 0.071 0.786 0.058
6 0.858 0.044 0.933 0.042 0.851 0.110 0.927 0.030 0.783 0.057 0.789 0.069
7 0.858 0.044 0.930 0.042 0.852 0.111 0.926 0.031 0.786 0.058 0.787 0.070
* 0.858 0.044 0.933 0.042 0.856 0.108 0.927 0.031 0.791 0.057 0.788 0.071
Table 3: Evaluation results of ablation studies. See Table 2 for experimental settings with corresponding numbers.

5.2 Performance Comparison

We compare our proposed salient object detector with 16 recent state-of-the-art saliency models, including DRFI [19], MDF [23], LEGS [40], DCL [24], DHS [26], ELD [21], RFCN [42], NLDF [32], DSS [16], SRM [43], Amulet [55], UCF [56], BRN [44], PiCA [29], C2S [25] and RAS [4]. Among them, DRFI [19] is the state-of-the-art non-deep-learning method, and the other 15 models are all based on deep learning. We do not report MDF [23] results on the HKU-IS [23] dataset because MDF uses a part of HKU-IS for training; for the same reason, we do not report DHS [26] results on DUT-OMRON [50]. For fair comparison, all these models are tested using their publicly available code and the pretrained models released by the authors with default settings. We also report the results of the ResNet-101 [14] version of our proposed HCA. Since ResNet is deep enough to capture global contexts, we exclude the sixth side in HCA for this version.

Table 1 summarizes the numeric comparison in terms of Fβ and MAE on six datasets. HCA significantly outperforms the other competitors in most cases, which demonstrates its effectiveness. With the VGG16 [38] backbone, the Fβ values of HCA are 2.1%, 1.0%, 0.9%, 1.1%, 0.6% and 0.5% higher than the second best method on the DUTS, ECSSD, SOD, HKU-IS, DUT-OMRON and THUR15K datasets, respectively. Only on the SOD dataset in terms of MAE does HCA perform slightly worse than the best result. PiCA [29] generally takes second place. With the ResNet backbone, the performance gap between the proposed HCA and the other ResNet based competitors is much larger than with the VGG16 backbone: the Fβ values of HCA are 2.2%, 1.3%, 1.3%, 1.7%, 3.0% and 0.8% higher than the second best method on the six datasets, respectively.

We also provide a qualitative comparison in Figure 6. For objects with various shapes and scales, HCA segments the entire objects with fine details (rows 1-2). HCA is also robust to complicated backgrounds (rows 3-5), multiple objects (rows 6-7) and confusing stuff (row 8).

5.3 Ablation Studies

To evaluate the influence of various design choices in MLHN and HCA (the 2Conv blocks in Figure 2 and Figure 3), we perform seven ablation studies with the VGG16 backbone. The detailed experimental settings and the corresponding evaluation results are shown in Table 2 and Table 3, respectively. We observe that our proposed method is not sensitive to different parameter settings, and the default design achieves slightly better results. These ablation studies also reveal some interesting phenomena. For example, experiment #5 suggests that a larger convolution kernel at the sixth side helps obtain accurate global contexts. Experiments #6 and #7 show that introducing more convolution channels does not improve performance. Interestingly, the default convolution parameter settings are similar to those of DSS [16] although we have a different network architecture (see Section 4); perhaps this is due to intrinsic properties of the backbone nets.

6 Conclusion

Salient object detection relies heavily on global contextual information, which can be used to judge which parts of an image are salient. Motivated by this, we propose a simple yet effective method in this paper. Our method starts from the top sides of the neural network and gradually flows the top global contexts into the lower sides to obtain hierarchical contexts. These hierarchical contexts are aggregated for the final salient object detection. Our method reaches a new state-of-the-art on six datasets when compared with 16 recent saliency models. In the future, we plan to apply the proposed network architecture to other vision tasks that need global information.

References

  • [1] R. Achanta, S. Hemami, F. Estrada, and S. Susstrunk. Frequency-tuned salient region detection. In IEEE CVPR, pages 1597–1604, 2009.
  • [2] A. Borji, M.-M. Cheng, H. Jiang, and J. Li. Salient object detection: A benchmark. IEEE TIP, 24(12):5706–5722, 2015.
  • [3] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE TPAMI, 40(4):834–848, 2018.
  • [4] S. Chen, X. Tan, B. Wang, and X. Hu. Reverse attention for salient object detection. In ECCV, 2018.
  • [5] X. Chen, A. Zheng, J. Li, and F. Lu. Look, perceive and segment: Finding the salient objects in images via two-stream fixation-semantic CNNs. In IEEE ICCV, pages 1050–1058, 2017.
  • [6] M.-M. Cheng, N. J. Mitra, X. Huang, and S.-M. Hu. Salientshape: Group saliency in image collections. The Visual Computer, 30(4):443–453, 2014.
  • [7] M.-M. Cheng, N. J. Mitra, X. Huang, P. H. Torr, and S.-M. Hu. Global contrast based salient region detection. IEEE TPAMI, 37(3):569–582, 2015.
  • [8] R. Cong, J. Lei, H. Fu, M.-M. Cheng, W. Lin, and Q. Huang. Review of visual saliency detection with comprehensive information. IEEE TCSVT, 2018.
  • [9] H. Zhang, J. Xue, and K. Dana. Deep TEN: Texture encoding network. In IEEE CVPR, pages 708–717, 2017.
  • [10] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In IEEE CVPR, pages 248–255, 2009.
  • [11] Y. Gao, M. Wang, Z.-J. Zha, J. Shen, X. Li, and X. Wu. Visual-textual joint relevance learning for tag-based social image search. IEEE TIP, 22(1):363–376, 2013.
  • [12] J. Han, D. Zhang, G. Cheng, N. Liu, and D. Xu. Advanced deep-learning techniques for salient and category-specific object detection: a survey. IEEE Signal Processing Magazine, 35(1):84–100, 2018.
  • [13] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. In IEEE CVPR, pages 447–456, 2015.
  • [14] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEE CVPR, pages 770–778, 2016.
  • [15] S. He, J. Jiao, X. Zhang, G. Han, and R. W. Lau. Delving into salient object subitizing and detection. In IEEE ICCV, pages 1059–1067, 2017.
  • [16] Q. Hou, M.-M. Cheng, X. Hu, A. Borji, Z. Tu, and P. Torr. Deeply supervised salient object detection with short connections. In IEEE CVPR, pages 5300–5309, 2017.
  • [17] M. A. Islam, M. Kalash, and N. D. Bruce. Revisiting salient object detection: Simultaneous detection, ranking, and subitizing of multiple salient objects. In IEEE CVPR, pages 7142–7150, 2018.
  • [18] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In ACM MM, pages 675–678, 2014.
  • [19] H. Jiang, J. Wang, Z. Yuan, Y. Wu, N. Zheng, and S. Li. Salient object detection: A discriminative regional feature integration approach. In IEEE CVPR, pages 2083–2090, 2013.
  • [20] Z. Jiang and L. S. Davis. Submodular salient region detection. In IEEE CVPR, pages 2043–2050, 2013.
  • [21] G. Lee, Y.-W. Tai, and J. Kim. Deep saliency with encoded low level distance map and high level features. In IEEE CVPR, pages 660–668, 2016.
  • [22] G. Li, Y. Xie, L. Lin, and Y. Yu. Instance-level salient object segmentation. In IEEE CVPR, pages 247–256, 2017.
  • [23] G. Li and Y. Yu. Visual saliency based on multiscale deep features. In IEEE CVPR, pages 5455–5463, 2015.
  • [24] G. Li and Y. Yu. Deep contrast learning for salient object detection. In IEEE CVPR, pages 478–487, 2016.
  • [25] X. Li, F. Yang, H. Cheng, W. Liu, and D. Shen. Contour knowledge transfer for salient object detection. In ECCV, pages 355–370, 2018.
  • [26] N. Liu and J. Han. DHSNet: Deep hierarchical saliency network for salient object detection. In IEEE CVPR, pages 678–686, 2016.
  • [27] N. Liu and J. Han. A deep spatial contextual long-term recurrent convolutional network for saliency detection. IEEE TIP, 27(7):3264–3274, 2018.
  • [28] N. Liu, J. Han, T. Liu, and X. Li. Learning to predict eye fixations via multiresolution convolutional neural networks. IEEE TNNLS, 29(2):392–404, 2018.
  • [29] N. Liu, J. Han, and M.-H. Yang. PiCANet: Learning pixel-wise contextual attention for saliency detection. In IEEE CVPR, pages 3089–3098, 2018.
  • [30] Y. Liu, M.-M. Cheng, X. Hu, K. Wang, and X. Bai. Richer convolutional features for edge detection. In IEEE CVPR, pages 5872–5881, 2017.
  • [31] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In IEEE CVPR, pages 3431–3440, 2015.
  • [32] Z. Luo, A. K. Mishra, A. Achkar, J. A. Eichel, S. Li, and P.-M. Jodoin. Non-local deep features for salient object detection. In IEEE CVPR, pages 6609–6617, 2017.
  • [33] V. Mahadevan and N. Vasconcelos. Saliency-based discriminant tracking. In IEEE CVPR, 2009.
  • [34] V. Movahedi and J. H. Elder. Design and perceptual validation of performance measures for salient object segmentation. In IEEE Conf. Comput. Vis. Pattern Recog. Worksh., pages 49–56, 2010.
  • [35] H. Noh, S. Hong, and B. Han. Learning deconvolution network for semantic segmentation. In IEEE ICCV, pages 1520–1528, 2015.
  • [36] Z. Ren, S. Gao, L.-T. Chia, and I. W.-H. Tsang. Region-based saliency detection and its application in object recognition. IEEE TCSVT, 24(5):769–779, 2014.
  • [37] O. Ronneberger, P. Fischer, and T. Brox. U-Net: convolutional networks for biomedical image segmentation. In MICCAI, pages 234–241, 2015.
  • [38] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
  • [39] N. Tong, H. Lu, X. Ruan, and M.-H. Yang. Salient object detection via bootstrap learning. In IEEE CVPR, pages 1884–1892, 2015.
  • [40] L. Wang, H. Lu, X. Ruan, and M.-H. Yang. Deep networks for saliency detection via local estimation and global search. In IEEE CVPR, pages 3183–3192, 2015.
  • [41] L. Wang, H. Lu, Y. Wang, M. Feng, D. Wang, B. Yin, and X. Ruan. Learning to detect salient objects with image-level supervision. In IEEE CVPR, pages 136–145, 2017.
  • [42] L. Wang, L. Wang, H. Lu, P. Zhang, and X. Ruan. Saliency detection with recurrent fully convolutional networks. In ECCV, pages 825–841, 2016.
  • [43] T. Wang, A. Borji, L. Zhang, P. Zhang, and H. Lu. A stagewise refinement model for detecting salient objects in images. In IEEE ICCV, pages 4019–4028, 2017.
  • [44] T. Wang, L. Zhang, S. Wang, H. Lu, G. Yang, X. Ruan, and A. Borji. Detect globally, refine locally: A novel approach to saliency detection. In IEEE CVPR, pages 3127–3135, 2018.
  • [45] W. Wang, J. Shen, X. Dong, and A. Borji. Salient object detection driven by fixation prediction. In IEEE CVPR, pages 1711–1720, 2018.
  • [46] Y. Wei, J. Feng, X. Liang, M.-M. Cheng, Y. Zhao, and S. Yan. Object region mining with adversarial erasing: A simple classification to semantic segmentation approach. In IEEE CVPR, pages 1568–1576, 2017.
  • [47] Y. Wei, H. Xiao, H. Shi, Z. Jie, J. Feng, and T. S. Huang. Revisiting dilated convolution: A simple approach for weakly-and semi-supervised semantic segmentation. In IEEE CVPR, pages 7268–7277, 2018.
  • [48] S. Xie and Z. Tu. Holistically-nested edge detection. In IEEE ICCV, pages 1395–1403, 2015.
  • [49] Q. Yan, L. Xu, J. Shi, and J. Jia. Hierarchical saliency detection. In IEEE CVPR, pages 1155–1162, 2013.
  • [50] C. Yang, L. Zhang, H. Lu, X. Ruan, and M.-H. Yang. Saliency detection via graph-based manifold ranking. In IEEE CVPR, pages 3166–3173, 2013.
  • [51] Y. Zeng, H. Lu, L. Zhang, M. Feng, and A. Borji. Learning to promote saliency detectors. In IEEE CVPR, pages 1644–1653, 2018.
  • [52] H. Zhang, K. Dana, J. Shi, Z. Zhang, X. Wang, A. Tyagi, and A. Agrawal. Context encoding for semantic segmentation. In IEEE CVPR, pages 7151–7160, 2018.
  • [53] J. Zhang, T. Zhang, Y. Dai, M. Harandi, and R. Hartley. Deep unsupervised saliency detection: A multiple noisy labeling perspective. In IEEE CVPR, pages 9029–9038, 2018.
  • [54] L. Zhang, J. Dai, H. Lu, Y. He, and G. Wang. A bi-directional message passing model for salient object detection. In IEEE CVPR, pages 1741–1750, 2018.
  • [55] P. Zhang, D. Wang, H. Lu, H. Wang, and X. Ruan. Amulet: Aggregating multi-level convolutional features for salient object detection. In IEEE ICCV, pages 202–211, 2017.
  • [56] P. Zhang, D. Wang, H. Lu, H. Wang, and B. Yin. Learning uncertain convolutional features for accurate saliency detection. In IEEE ICCV, pages 212–221, 2017.
  • [57] X. Zhang, T. Wang, J. Qi, H. Lu, and G. Wang. Progressive attention guided recurrent network for salient object detection. In IEEE CVPR, pages 714–722, 2018.
  • [58] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid scene parsing network. In IEEE CVPR, pages 2881–2890, 2017.
  • [59] R. Zhao, W. Ouyang, H. Li, and X. Wang. Saliency detection by multi-context deep learning. In IEEE CVPR, pages 1265–1274, 2015.
  • [60] W. Zhu, S. Liang, Y. Wei, and J. Sun. Saliency optimization from robust background detection. In IEEE CVPR, pages 2814–2821, 2014.
  • [61] F. Zund, Y. Pritch, A. Sorkine-Hornung, S. Mangold, and T. Gross. Content-aware compression using saliency-driven image retargeting. In ICIP, pages 1845–1849, 2013.