A multilayer backpropagation saliency detection algorithm and its applications

03/26/2018 ∙ by Chunbiao Zhu, et al. ∙ Peking University

Saliency detection is an active topic in the multimedia field. Most previous works on saliency detection focus on 2D images. However, these methods are not robust to complex scenes that contain multiple objects or complex backgrounds. Recently, depth information has supplied a powerful cue for saliency detection. In this paper, we propose a multilayer backpropagation saliency detection algorithm based on depth mining, in which we exploit the depth cue at three different layers of the image. The proposed algorithm performs well and remains robust in complex situations. Experimental results show that the proposed framework is superior to other existing saliency approaches. In addition, we present two innovative applications of this algorithm: scene reconstruction from multiple images and small target object detection in video.


1 Introduction

Salient object detection is the process of precisely locating the region of visual attention. Attention is the behavioral and cognitive process of selectively concentrating on one aspect of the environment while ignoring others.

Early work on computing saliency aimed to locate the visual attention region. Recently the field has been extended to locate and refine salient regions and objects. Serving as a foundation for various multimedia applications, salient object detection has been widely used in content-aware editing Chang2011Content , image retrieval Cheng2014SalientShape , object recognition Alexe2012Measuring , object segmentation Girshick2013Rich , compression Itti2004Automatic , image retargeting Sun2011Scale , etc.

Generally speaking, saliency detection frameworks mainly use top-down or bottom-up approaches. Top-down approaches are task-driven and need supervised learning, while bottom-up approaches usually rely on low-level cues such as color features, distance features, depth features, and heuristic saliency features.

The most used features are heuristic saliency features and discriminative saliency features. Various measures based on heuristic saliency features have been proposed, including pixel-based or patch-based contrast Ma2003Contrast ; Liu2006Region ; Achanta2008Salient ; Valenti2009Image , region-based contrast Cheng2011Global ; Jiang2011Automatic ; Krahenbuhl2012Saliency ; Li2013Contextual ; Jiang2013Salient ; Ran2013What ; Shi2013PISA ; Li2013Estimating , pseudo-background Wei2012Geodesic ; Jiang2013Saliency ; Yang2013Saliency ; Li2013Saliency ; Liu2014Adaptive ; Zhu2014Saliency , and similar images Marchesotti2009A ; Siva2013Looking . Some measures use discriminative saliency features, such as multi-scale contrast Liu2011Learning , center-surround contrast Jiang2013Salient , and color spatial compactness Mehrani2010Saliency . Other measures use image over-segmentation Cheng2014Efficient , outliers Wu2012A ; Xie2013Bayesian ; Lu2011Salient ; Chang2011Fusing , and wavelet features Imamoglu2013A ; You2010A , which provide simultaneous multi-scale spatial and frequency analysis for saliency detection, as well as other feature representations Sun2017An ; Zhao2017Continuous .

Although these methods make full use of one or two RGB-based features, they are not robust in certain situations and produce inaccurate results on challenging salient object detection datasets.

Recently, advances in 3D data acquisition techniques have motivated the adoption of structural features, improving the discrimination between different objects with similar appearance. Some algorithms Zhu2017Salient ; Peng2014RGBD ; Cheng2014Depth ; Geng2012Leveraging ; zhu2017innovative ; zhu2017three adopt the depth cue to deal with challenging scenarios. In Zhu2017Salient , Zhu et al. propose a framework based on cognitive neuroscience and use the depth cue to represent the depth of the real field. In Cheng2014Depth , Cheng et al. compute salient stimuli in both color and depth spaces. In Peng2014RGBD , Peng et al. provide a simple fusion framework that combines existing RGB-based saliency with new depth-based saliency. In Geng2012Leveraging , Geng et al. define saliency using a depth cue computed from stereo images; their results show that stereo saliency is a useful complement to previous visual saliency analysis. All of these works demonstrate the effectiveness of the depth cue in improving salient object detection.

However, the depth cue alone cannot make the saliency results robust when a salient object has low depth contrast with the background. How to integrate the depth cue with RGB information is therefore the problem that needs to be solved.

In this paper, we propose a multilayer backpropagation saliency detection algorithm based on depth mining zhu2017multilayer to overcome the aforementioned problem. First, we obtain the center-bias saliency map and the depth map in the preprocessing stage. Then, we process the input image in three different layers. In the first layer, we use the original depth cue together with other cues to calculate a preliminary saliency value. In the second layer, we apply the processed depth cue and other cues to compute an intermediate saliency value. In the third layer, we employ the reprocessed depth cue and other cues to obtain the final saliency value. The framework of the proposed algorithm is illustrated in Fig. 1. Experiments show that the proposed algorithm is both effective and robust in saliency detection. In addition, we present two innovative applications that demonstrate the potential of saliency detection for broad applications in computer vision and computer graphics.

In summary, the main contributions of our work include:

1. We propose a multilayer backpropagation saliency detection algorithm based on depth mining, which has a good performance in saliency detection.

2. We apply the proposed algorithm to the application of image montage; the quality of the constructed montages illustrates the strength of our method.

3. We present a novel approach to the small target detection application that yields strong detection results.

In addition, although saliency detection has recently attracted much attention, its practical usage for real vision tasks has yet to be justified. Our method validates the usefulness of saliency detection by implementing two applications.

The rest of the paper is organized as follows: we introduce the multilayer backpropagation saliency detection algorithm in Sect. 2. In Sect. 3, we show the experimental results of the proposed algorithm. In Sect. 4, we present an application of image montage to reconstruct scenes from multiple images. We further show a novel approach to the application of small target detection in Sect. 5. Finally, we conclude the paper in Sect. 6.

Figure 1: The framework of the proposed algorithm.

2 Proposed Algorithm

As shown in Fig. 1, the framework of the proposed algorithm contains four layers, including the preprocessing layer, the first layer, the second layer and the third layer.

2.1 The Preprocessing Layer

Figure 2: The visual process of the proposed algorithm.

In the preprocessing layer, we imitate the human visual system to obtain the center-bias saliency map and the depth map.

Center-bias Saliency Map.

Cognitive neuroscience tells us that the human eye uses the central fovea to locate objects and render them clearly visible. As a result, images taken by cameras tend to place the salient object near the center. To obtain the center-bias saliency map, we use the BSCA algorithm Qin2015Saliency . It constructs global color distinction and spatial distance matrices based on clustered boundary seeds and integrates them into a background-based map, which strengthens the center bias and erases the image-edge effect. As shown in the preprocessing stage of Fig. 2 (c), the center-bias saliency map removes the surroundings of the image and preserves most of the salient regions. We denote this center-bias saliency map as .
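The center-bias idea itself can be illustrated with a minimal sketch. BSCA is considerably more involved (cellular automata over clustered boundary seeds), so the Gaussian center prior below is only an assumed stand-in that captures the bias toward the image center; the `sigma` parameter is hypothetical, not from the paper.

```python
import numpy as np

def center_bias_map(height, width, sigma=0.3):
    """Gaussian center prior: pixels near the image centre get weights
    close to 1 and pixels near the border decay toward 0. sigma is a
    fraction of the image diagonal (an assumed choice)."""
    ys, xs = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    diag = np.hypot(height, width)
    d2 = (ys - cy) ** 2 + (xs - cx) ** 2
    return np.exp(-d2 / (2.0 * (sigma * diag) ** 2))

prior = center_bias_map(64, 96)
```

Such a prior would simply be multiplied into a saliency map to suppress border responses.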

Depth Map.

Similarly, biology shows that people perceive the distance and depth of an object mainly through two eyes, a cue known as binocular parallax. Therefore, the depth cue can imitate the depth of the real field. The depth maps in the experimental datasets were captured by a Kinect device. We denote the depth map as .

2.2 The First Layer

In the first layer, we extract color and depth features from the original image and the depth map , respectively.

First, the image is segmented into regions based on color via the K-means algorithm. Define:

(1)

where is the color saliency of region , , and represent regions and respectively, is the Euclidean distance between region and region in L*a*b color space, represents the area ratio of region compared with the whole image, is the spatial weighted term of the region , set as:

(2)

where is the Euclidean distance between the centers of region and , is the parameter controlling the strength of .

Similar to color saliency, we define:

(3)

where is the depth saliency of , is the Euclidean distance between region and region in depth space.

In most cases, a salient object is located at the centre of an image or close to the camera. Therefore, we assign centre-bias and depth weights for both the color and depth images. The weight of region k is set as:

(4)

where represents the Gaussian normalization, is Euclidean distance, is the position of the region , is the center position of this map, is the number of pixels in region , is the depth weight, which is set as:

(5)

where represents the maximum depth of the image, and is the depth value of region , is a fixed value for a depth map, set as:

(6)

where represents the minimum depth of the image.

Second, the preliminary saliency value of the region is calculated as:

(7)

Third, to obtain a refined result, we refine the preliminary saliency map with the help of the center-bias and depth maps. The final preliminary saliency value is calculated as follows:

(8)

where is the negation operation, which enhances the saliency degree of the front regions as shown in Fig. 2(d), because the foreground object has low depth values in the depth map while the background has high depth values. is the center-bias saliency value calculated in the preprocessing layer, which strengthens the center bias and erases the image-edge effect as shown in Fig. 2(c).

Input: original maps , center-bias map, depth maps ;
Output: the final preliminary saliency values ;

1:  for each region do:
2:  compute color saliency values and depth saliency values ;
3:  calculate the center-bias and depth weights ;
4:  get the preliminary saliency values ;
5:  calculate the final preliminary saliency values ;
6:  end for
7:  return  the final preliminary saliency values .
Algorithm 1 Procedure for the first layer
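The region-contrast computation at the heart of the first layer can be sketched compactly. Since Eqs. 1-3 are not reproduced above, the exact weighting below (area-weighted feature distance attenuated by spatial distance) is an assumed form in the spirit of the paper, not the authors' exact formula:

```python
import numpy as np

def region_contrast(features, areas, centers, sigma=0.4):
    """Region-level contrast saliency: each region's saliency is the
    area-weighted feature distance to every other region, attenuated
    by the spatial distance between region centers (assumed form of
    Eqs. 1 and 3)."""
    n = len(features)
    sal = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            feat_dist = np.linalg.norm(features[i] - features[j])
            spatial = np.exp(-np.linalg.norm(centers[i] - centers[j]) / sigma)
            sal[i] += areas[j] * spatial * feat_dist
    return sal / sal.max()

# Three regions: two similar background regions and one outlier.
feats = np.array([[0.1, 0.1, 0.1], [0.12, 0.1, 0.11], [0.9, 0.8, 0.9]])
areas = np.array([0.45, 0.45, 0.10])
centers = np.array([[0.2, 0.2], [0.8, 0.8], [0.5, 0.5]])
sal = region_contrast(feats, areas, centers)
```

In this toy example the outlier region receives the highest contrast saliency, as the first layer intends; the same routine applies to depth features by swapping the feature vectors.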

2.3 The Second Layer

In the second layer, first, we set:

(9)

where represents the extended map. represents processing of three RGB channels, respectively.

The extended map is displayed in Fig. 2(e), from which the salient objects’ edges are prominent.

Second, we use the extended map to replace . Then, we calculate the intermediate saliency value via Eq. 7, as in the first layer. We get:

(10)

where is the intermediate saliency value.

Third, to refine the intermediate saliency value, we apply backpropagation to enhance it by mixing in the result of the first layer. We define our final intermediate saliency value as:

(11)

Input: extended map , depth maps ;
Output: the final intermediate saliency values ;

1:  for each region do:
2:  compute color saliency values and depth saliency values ;
3:  calculate the center-bias and depth weights ;
4:  get the intermediate saliency value ;
5:  calculate the final intermediate saliency values ;
6:  end for
7:  return  the final intermediate saliency values .
Algorithm 2 Procedure for the second layer
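A rough sketch of the second layer, under stated assumptions: the exact forms of the extended map (Eq. 9) and of the backpropagation mix (Eq. 11) are not reproduced above, so both the depth-modulated `extend_map` and the mixing weight `alpha` below are hypothetical stand-ins:

```python
import numpy as np

def extend_map(rgb, depth):
    """One plausible form of the 'extended map' (Eq. 9 is assumed):
    modulate each RGB channel by the depth map so that near objects,
    which have small depth values, are emphasised."""
    d = (depth - depth.min()) / (np.ptp(depth) + 1e-8)
    return rgb * (1.0 - d)[..., None]

def backprop_mix(current, previous, alpha=0.5):
    """Backpropagation refinement (Eq. 11 in spirit): mix the current
    layer's saliency with the previous layer's result, then normalise.
    alpha is an assumed mixing weight."""
    mixed = alpha * current + (1.0 - alpha) * previous
    return (mixed - mixed.min()) / (np.ptp(mixed) + 1e-8)

rgb = np.ones((4, 4, 3))
depth = np.tile(np.linspace(0.0, 1.0, 4), (4, 1))
ext = extend_map(rgb, depth)
sal = backprop_mix(ext[..., 0], np.zeros((4, 4)))
```

The near (small-depth) side of the toy image survives the modulation while the far side is suppressed.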

2.4 The Third Layer

In the third layer, first, we reprocess the depth cue by filtering the depth map via the following formula:

(12)

where represents the filtered depth map. is the parameter that controls the length of . In general, salient objects have small depth values compared to the background; thus, by Eq. 12, we can filter out the background noise.

Second, we polarize the filtered depth map shown in Fig. 2(g) via the following formula:

(13)

Third, we extend the filtered depth map to the color images via Eq. 9. We denote the reprocessed depth map as

We use the filtered depth map to replace . Then, by Eq. 7, we obtain the third-layer saliency value, denoted as:

(14)

Fourth, to refine , we apply the backpropagation of and as follows:

(15)

Fig. 2 shows the visual results of the proposed algorithm. The main steps of the proposed salient object detection algorithm are summarized in Algorithms 1–3.

Input: extended map , depth maps ;
Output: the final saliency values ;

1:  for each region do:
2:  compute color saliency values and depth saliency values ;
3:  calculate the center-bias and depth weights ;
4:  get the third layer saliency values ;
5:  calculate the final saliency values ;
6:  end for
7:  return  the final saliency values .
Algorithm 3 Procedure for the third layer
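The third layer's depth reprocessing (Eqs. 12-13) can be sketched as a filter-then-polarize pair. Since the paper's formulas are not reproduced above, the cutoff `lam` and threshold `thresh` below are assumed parameters and the exact functional forms are illustrative:

```python
import numpy as np

def filter_depth(depth, lam=0.6):
    """Eq. 12 in spirit: keep the near part of the normalised depth
    range (where salient objects lie) and zero out the far background.
    lam (assumed) controls how much of the range survives."""
    d = (depth - depth.min()) / (np.ptp(depth) + 1e-8)
    return np.where(d < lam, 1.0 - d, 0.0)

def polarize(depth, thresh=0.5):
    """Eq. 13 in spirit: push the filtered depth toward 0/1 so the
    remaining foreground forms a clean binary-like mask."""
    return (depth > thresh * depth.max()).astype(float)

depth = np.array([[0.10, 0.20],
                  [0.90, 0.95]])   # top row near, bottom row far
f = filter_depth(depth)
p = polarize(f)
```

On this toy depth map the far background rows are zeroed by the filter and the near rows are polarized to 1.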

3 Experiments

3.1 Datasets and Evaluation Indicators

Datasets.

We evaluate the performance of the proposed saliency detection algorithm on two standard RGBD datasets: RGBD1* Cheng2014Depth and RGBD2* Peng2014RGBD . RGBD1* has 135 indoor images taken by Kinect with a resolution of . This dataset has complex backgrounds and irregular shapes of salient objects. RGBD2* contains 1000 images, including both indoor and outdoor scenes.

Evaluation indicators.

Experimental evaluations are based on standard measurements including the precision-recall curve, ROC curve, MAE (Mean Absolute Error), F-measure, Max-P (the maximum value of precision), Min-P (the minimum value of precision), Max-R (the maximum value of recall), and Min-R (the minimum value of recall). Among them, the MAE is formulated as:

(16)

where is the number of the testing images, is the area of the ground truth of image , is the area of detection result of image .

And the F-measure is formulated as:

(17)
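The two indicators can be computed as follows. Since Eqs. 16-17 are not reproduced above, the standard definitions (per-pixel MAE and the conventional beta^2 = 0.3 F-measure) are assumed here:

```python
import numpy as np

def mae(saliency, gt):
    """Mean absolute error between a saliency map and the binary
    ground truth, both scaled to [0, 1]."""
    return float(np.mean(np.abs(saliency - gt)))

def f_measure(saliency, gt, beta2=0.3, thresh=0.5):
    """F-measure with the conventional beta^2 = 0.3 weighting of
    precision over recall (assumed standard form)."""
    pred = saliency >= thresh
    tp = np.logical_and(pred, gt > 0).sum()
    precision = tp / (pred.sum() + 1e-8)
    recall = tp / ((gt > 0).sum() + 1e-8)
    return float((1 + beta2) * precision * recall /
                 (beta2 * precision + recall + 1e-8))

gt = np.array([[1.0, 0.0],
               [0.0, 1.0]])
```

A perfect prediction gives MAE 0 and F-measure approaching 1; an inverted prediction gives MAE 1.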

3.2 Ablation Study

We first validate the effectiveness of each step of our method: the first step results, the second step results, and the third step results. Table 1 shows the validation results on the two datasets. We can clearly see the accumulated gains after each step, and the final saliency results show good performance. This demonstrates that each step of our algorithm contributes to generating the final saliency maps.

Each Step of the Proposed Algorithm | First Step | Second Step | Third Step
MAE Values on RGBD1* Dataset | 0.1065 | 0.0880 | 0.0781
MAE Values on RGBD2* Dataset | 0.1043 | 0.0900 | 0.0852
F-measure Values on RGBD1* Dataset | 0.5357 | 0.6881 | 0.7230
F-measure Values on RGBD2* Dataset | 0.5452 | 0.7025 | 0.7190
Table 1: The validation results of each step in the proposed algorithm.
Figure 3: (a): PR curve of different methods on RGBD1* dataset. (b): ROC curve of different methods on RGBD1* dataset. (c): PR curve of different methods on RGBD2* dataset. (d): ROC curve of different methods on RGBD2* dataset.
Methods MAE F-measure Max-P Min-P Max-R Min-R
FT 0.2049 0.2804 0.3875 0.1262 1 0
SIM 0.3740 0.3345 0.5076 0.1262 1 0
HS 0.1849 0.5361 0.6581 0.1262 1 0.1187
BSCA 0.1851 0.5826 0.7977 0.1262 1 0.0306
LPS 0.1406 0.5452 0.6951 0.1262 1 0.0026
RGBD1 0.3079 0.5410 0.8561 0.1262 1 0.1731
RGBD2 0.1165 0.4912 0.7699 0.1262 1 0.0049
OURS1 0.1065 0.5357 0.9000 0.0074 1 0
OURS2 0.0880 0.6881 0.9249 0.1262 1 0.1118
OURS 0.0781 0.7230 0.9181 0.1262 1 0.2669
Table 2: The evaluation results on RGBD1* dataset, the best results are shown in boldface.
Methods MAE F-measure Max-P Min-P Max-R Min-R
FT 0.2168 0.3270 0.3894 0.1291 1 0
SIM 0.3957 0.2927 0.3847 0.1291 1 0
HS 0.1909 0.6003 0.7503 0.1291 1 0.1859
BSCA 0.1754 0.5925 0.7616 0.1291 1 0.0525
LPS 0.1252 0.5890 0.6831 0.1291 1 0.0166
RGBD1 0.3207 0.4843 0.7771 0.1291 1 0.2228
RGBD2 0.1087 0.5957 0.8148 0.1291 1 0.0070
OURS1 0.1043 0.5452 0.8029 0.0150 1 0
OURS2 0.0900 0.7025 0.8477 0.1291 1 0.2445
OURS 0.0852 0.7190 0.8347 0.1291 1 0.4071
Table 3: The evaluation results on RGBD2* dataset, the best results are shown in boldface.
Figure 4: Visual comparison of saliency maps on two datasets. (a) - (l) represent: input images, ground truth, FT, SIM, HS, BSCA, LPS, RGBD1, RGBD2, OURS1, OURS2 and OURS, respectively.

3.3 Comparison

To further illustrate the effectiveness of our algorithm, we compare our proposed methods with FT Achanta2009Frequency , SIM Murray2011Saliency , HS Shi2016Hierarchical , BSCA Qin2015Saliency , LPS Li2015Inner , RGBD1 Cheng2014Depth , and RGBD2 Peng2014RGBD . We use the codes provided by the authors to reproduce their experiments, with the default settings they suggest. In addition, to show the contribution of multilayer depth mining, we include the intermediate results of the first and second layers (OURS1 and OURS2) in the comparisons. For Eq. 2 and Eq. 12, we take and , respectively, which give the best results.

The precision-recall and ROC evaluation results are shown in Fig. 3. From the precision-recall and ROC curves, we can see that our multilayer saliency detection achieves better results on both the RGBD1* and RGBD2* datasets.

Other evaluation results on the RGBD1* and RGBD2* datasets are shown in Table 2 and Table 3, respectively; the best results are shown in boldface. Comparing the MAE values, our saliency detection method is superior and obtains more precise salient regions than the other approaches. The proposed algorithm is also the most robust.

The visual comparisons are given in Fig. 4, which clearly demonstrate the advantages of our method. Our method detects both single and multiple salient objects more precisely. Moreover, the intermediate results show that by exploiting depth cue information across more layers, the proposed method achieves more accurate and robust performance, whereas the compared methods may fail in some situations.

4 Image Montage Application

In this section, we apply the proposed algorithm to an innovative application of image montage. Our image montage application is divided into six stages: saliency detection, object segmentation, color changing, object resizing, object removal, and scene reconstruction. The performance of most stages is highly dependent on the quality of the salient object detection.

4.1 Salient Object Detection

To build the image montage, we first gather some objects that we are interested in, using the proposed algorithm to obtain them. Since the proposed algorithm detects salient regions more precisely, the following stages incur fewer errors and achieve a better visual effect in the image montage. The object saliency maps produced by the proposed algorithm are shown in Fig. 5(b).

Figure 5: Image materials. (a): original maps (b): saliency maps (c): segmentation maps (d)-(g): color changing maps.

4.2 Object Segmentation

After obtaining the objects we are interested in, we segment them from the original image scene. In this stage, we use our salient object results and the RGB channels of the original images to recover the salient object maps into color maps, as follows:

(18)

where represents the segmentation map. represents the processing of three RGB channels, respectively. is the saliency values calculated by the proposed algorithm.

All the results are shown in Fig. 5(c). The more accurate the saliency maps, the more accurate the segmentation results.
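A minimal sketch of the masking step in Eq. 18, under the assumption that segmentation simply copies the original RGB values through wherever the saliency map is high (the binarisation threshold is hypothetical):

```python
import numpy as np

def segment_object(rgb, saliency, thresh=0.5):
    """Copy original RGB values where the saliency map exceeds a
    fraction of its maximum; leave the rest black (assumed form of
    Eq. 18, applied identically to the three channels)."""
    mask = (saliency >= thresh * saliency.max())[..., None]
    return np.where(mask, rgb, 0)

rgb = np.full((2, 2, 3), 200, dtype=np.uint8)
sal = np.array([[1.0, 0.0],
                [0.0, 1.0]])
seg = segment_object(rgb, sal)
```

Pixels under the salient mask keep their color; the background is zeroed, yielding the color segmentation maps of Fig. 5(c).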

4.3 Color Changing

After obtaining the objects we are interested in, we may also want to change the salient objects' color. We therefore use our saliency maps as sample maps and modify the RGB values of the original maps. The results are shown in Fig. 5(d)-(f).

4.4 Object Resizing

In some situations, the salient objects are too large to fit the new pictures. Therefore, we use bilinear interpolation to resize the salient objects and the background scenes.
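Bilinear interpolation for this resizing step can be sketched as follows for a single-channel image (in practice it would be applied per channel, or delegated to a library routine):

```python
import numpy as np

def bilinear_resize(img, new_h, new_w):
    """Bilinear interpolation resize of a single-channel image: each
    output pixel is a weighted average of its four nearest source
    pixels."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

img = np.array([[0.0, 1.0],
                [2.0, 3.0]])
out = bilinear_resize(img, 3, 3)
```

Corner values are preserved and interior values are interpolated (the center of the 3x3 output is the mean of the four corners, 1.5).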

Figure 6: Two-Layer Removal Framework.
Figure 7: Removal results. From the first line to the third line are original maps, saliency maps and removal maps, respectively.

4.5 Object Removal

In this stage, we want to obtain the images' backgrounds by removing the objects we want to change. We are inspired by the Criminisi algorithm Criminisi2003Object , which proceeds in three stages: first, the user selects the target region; then, patch priorities are computed to propagate texture and structure information; finally, confidence values are updated to produce the final result.

Figure 8: Four scenarios are reconstructed by the image montage application.

We simplify the first stage by replacing the human-marked regions with our salient object detection maps. Because some objects leave shadow residue, we propose a two-layer removal algorithm that removes the shadow and other noise: the first layer removes the object, and the second layer removes the noise. This two-layer framework is shown in Fig. 6.

The removal results are shown in Fig. 7. From the results, we can see that high-precision salient object detection reduces the excessive expansion area and saves erosion time in the removal algorithm.

4.6 Scene Reconstruction

After obtaining the segmented objects and background sceneries we are interested in, we can reconstruct the scenes as we like. We reconstruct four scenes, shown in Fig. 8, from which we can see that the good performance of the proposed algorithm leads to an exquisite image montage.

5 Small Target Detection Application

In this section, we propose a novel approach to detecting small targets by combining it with the proposed algorithm. Our small target detection application is divided into two stages: dark channel prior location and accurate small target detection.

Figure 9: The small target detection results. (a1)-(a6) represent different frames of the original video. (b1)-(b6) represent different frames of the dark channel prior location results. (c1)-(c6) represent different frames of the proposed algorithm combined with dark channel prior detection results. (d1)-(d6) represent different frames of the BSCA method Qin2015Saliency . (e1)-(e6) represent different frames of the ground truth.

5.1 Dark Channel Prior Location

The dark channel prior is widely used in the image haze removal field. It is based on the statistics of outdoor haze-free images: the dark channel can detect the most haze-opaque region and improve the atmospheric light estimation. Inspired by the dark channel prior He2009Single , we find that the foreground and background have different transmissivity, so we can distinguish foreground objects from the background. We combine this theory with the proposed saliency detection algorithm for the small target detection field. We denote the result map of the dark channel prior as .
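The dark channel of He et al. is the per-pixel minimum over the three color channels followed by a local minimum filter; a direct (unoptimized) sketch:

```python
import numpy as np

def dark_channel(rgb, patch=3):
    """Dark channel: per-pixel minimum over the three colour channels,
    then a minimum filter over a patch x patch neighbourhood."""
    min_rgb = rgb.min(axis=2)
    h, w = min_rgb.shape
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    out = np.empty_like(min_rgb)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

img = np.full((5, 5, 3), 0.8)
img[2, 2] = [0.1, 0.2, 0.3]   # one dark pixel
dc = dark_channel(img)
```

A single dark pixel dominates its whole neighbourhood after the minimum filter, which is why haze-opaque (and, here, foreground) regions stand out.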

5.2 Small Target Accurate Detection

In this stage, we combine the proposed algorithm and the dark channel prior to detect the small target. First, we use the dark channel prior to replace the depth map . Then, we apply the proposed algorithm to detect the small target. Experimental results on the small target detection dataset Lou2016Small are shown in Fig. 9. From the comparison, we can see that our detection results are better than those of other methods, such as BSCA Qin2015Saliency .

6 Conclusion

In this paper, we proposed a multilayer backpropagation saliency detection algorithm based on depth mining. First, we obtain the additional cues from the preprocessing layer. Then, the proposed algorithm exploits depth cue information in three layers: in the first layer, we mix in the depth cue to highlight the salient object; in the second layer, we extend the depth map to highlight the salient objects' edges; in the third layer, we reprocess the depth cue to eliminate background noise. Experimental results show that the proposed method outperforms existing algorithms in both accuracy and robustness across different scenarios. In addition, we demonstrated an innovative application of our algorithm to image montage, where the results show that precise salient object detection leads to a fine image montage. Finally, we presented a novel approach to the small target detection application. To encourage future work, we make the source code and other related materials public; all of these can be found on our project website.

References

  • (1) R. Achanta, S. Hemami, F. Estrada, and S. Susstrunk. Frequency-tuned salient region detection. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 1597–1604, 2009.
  • (2) Bogdan Alexe, Thomas Deselaers, and Vittorio Ferrari. Measuring the objectness of image windows. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(11):2189–202, 2012.
  • (3) Che Han Chang, Chia Kai Liang, and Yung Yu Chuang. Content-aware display adaptation and interactive editing for stereoscopic images. IEEE Transactions on Multimedia, 13(4):589–601, 2011.
  • (4) Ming Ming Cheng, Niloy J Mitra, Xiaolei Huang, and Shi Min Hu. Salientshape: group saliency in image collections. The Visual Computer, 30(4):443–453, 2014a.
  • (5) Yupeng Cheng, Huazhu Fu, Xingxing Wei, Jiangjian Xiao, and Xiaochun Cao. Depth enhanced saliency detection method. 55(1):23–27, 2014b.
  • (6) Yujie Geng. Leveraging stereopsis for saliency analysis. In Computer Vision and Pattern Recognition, pages 454–461, 2012.
  • (7) Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, pages 580–587, 2013.
  • (8) Kaiming He, Jian Sun, and Xiaoou Tang. Single image haze removal using dark channel prior. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 1956–1963, 2009.
  • (9) L Itti. Automatic foveation for video compression using a neurobiological model of visual attention. IEEE Press, 2004.
  • (10) Pat S. Chavez Jr. An improved dark-object subtraction technique for atmospheric scattering correction of multispectral data. Remote Sensing of Environment, 24(3):459–479, 1988.
  • (11) T Judd, K Ehinger, F Durand, and A Torralba. Learning to predict where humans look. In IEEE International Conference on Computer Vision, ICCV 2009, Kyoto, Japan, September 27 - October, pages 2106–2113, 2009.
  • (12) Hongyang Li, Huchuan Lu, Zhe Lin, and Xiaohui Shen. Inner and inter label propagation: Salient object detection in the wild. IEEE Transactions on Image Processing A Publication of the IEEE Signal Processing Society, 24(10):3176–3186, 2015.
  • (13) Jing Lou, Wei Zhu, Huan Wang, and Mingwu Ren. Small target detection combining regional stability and saliency in a color image. Multimedia Tools and Applications, pages 1–18, 2016.
  • (14) N. Murray, M. Vanrell, X. Otazu, and C. A. Parraga. Saliency estimation using a non-parametric low-level vision model. 42(7):433–440, 2011.
  • (15) Houwen Peng, Bing Li, Weihua Xiong, Weiming Hu, and Rongrong Ji. RGBD Salient Object Detection: A Benchmark and Algorithms. Springer International Publishing, 2014.
  • (16) Yao Qin, Huchuan Lu, Yiqun Xu, and He Wang. Saliency detection via cellular automata. In IEEE Conference on Computer Vision and Pattern Recognition, pages 110–119, 2015.
  • (17) Jianping Shi, Qiong Yan, Xu Li, and Jiaya Jia. Hierarchical image saliency detection on extended cssd. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(4):717, 2016.
  • (18) Jin Sun and Haibin Ling. Scale and object aware image retargeting for thumbnail browsing. In International Conference on Computer Vision, pages 1511–1518, 2011.
  • (19) Chunbiao Zhu, Ge Li, Wenmin Wang, and Ronggang Wang. Salient object detection with complex scene based on cognitive. In IEEE Third International Conference on Multimedia Big Data. IEEE, pages 33–37, 2017.
  • (20) A. Criminisi, P. Prez, and K. Toyama. Object removal by exemplar-based inpainting. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 721–728, 2003.
  • (21) Radhakrishna Achanta, Francisco Estrada, Patricia Wils, and Sabine Sstrunk. Salient region detection and segmentation. 5008:66–75, 2008.
  • (22) Ming Ming Cheng, Guo Xin Zhang, N. J Mitra, Xiaolei Huang, and Shi Min Hu. Global contrast based salient region detection. In Computer Vision and Pattern Recognition, pages 409–416, 2011.
  • (23) Huaizu Jiang, Jingdong Wang, Zejian Yuan, Tie Liu, Nanning Zheng, and Shipeng Li. Automatic salient object segmentation based on context and shape prior. In British Machine Vision Conference, 2011.
  • (24) Peng Jiang, Haibin Ling, Jingyi Yu, and Jingliang Peng. Salient region detection by ufo. 2013.
  • (25) Philipp Krahenbuhl. Saliency filters: Contrast based filtering for salient region detection. In IEEE Conference on Computer Vision and Pattern Recognition, pages 733–740, 2012.
  • (26) Jia Li, Yonghong Tian, Lingyu Duan, and Tiejun Huang. Estimating visual saliency through single image optimization. IEEE Signal Processing Letters, 20(9):845–848, 2013a.
  • (27) Xi Li, Yao Li, Chunhua Shen, Anthony Dick, and Anton Van Den Hengel. Contextual hypergraph modeling for salient object detection. pages 3328–3335, 2013b.
  • (28) Feng Liu and Michael Gleicher. Region enhanced scale-invariant saliency detection. In IEEE International Conference on Multimedia and Expo, pages 1477–1480, 2006.
  • (29) Yu Fei Ma and Hong Jiang Zhang. Contrast-based image attention analysis by using fuzzy growing. In Eleventh ACM International Conference on Multimedia, Berkeley, Ca, Usa, November, pages 374–381, 2003.
  • (30) Margolin Ran, Ayellet Tal, and Lihi Zelnik-Manor. What makes a patch distinct? In IEEE Conference on Computer Vision and Pattern Recognition, pages 1139–1146, 2013.
  • (31) Keyang Shi, Keze Wang, Jiangbo Lu, and Liang Lin. Pisa: Pixelwise image saliency by aggregating complementary appearance contrast measures with spatial priors. In Computer Vision and Pattern Recognition, pages 2115–2122, 2013.
  • (32) R Valenti, N Sebe, and T Gevers. Image saliency by isocentric curvedness and color. In IEEE International Conference on Computer Vision, pages 2185–2192, 2009.
  • (33) Bowen Jiang, Lihe Zhang, Huchuan Lu, Chuan Yang, and Ming Hsuan Yang. Saliency detection via absorbing markov chain. In IEEE International Conference on Computer Vision, pages 1665–1672, 2013.
  • (34) Xiaohui Li, Huchuan Lu, Lihe Zhang, Ruan Xiang, and Ming Hsuan Yang. Saliency detection via dense and sparse reconstruction. In IEEE International Conference on Computer Vision, pages 2976–2983, 2013.
  • (35) Risheng Liu, Junjie Cao, Zhouchen Lin, and Shiguang Shan. Adaptive partial differential equation learning for visual saliency detection. In Computer Vision and Pattern Recognition, pages 3866–3873, 2014.
  • (36) Yichen Wei, Fang Wen, Wangjiang Zhu, and Jian Sun. Geodesic saliency using background priors. In European Conference on Computer Vision, pages 29–42, 2012.
  • (37) Chuan Yang, Lihe Zhang, Huchuan Lu, Xiang Ruan, and Ming Hsuan Yang. Saliency detection via graph-based manifold ranking. In Computer Vision and Pattern Recognition, pages 3166–3173, 2013.
  • (38) Wangjiang Zhu, Shuang Liang, Yichen Wei, and Jian Sun. Saliency optimization from robust background detection. In IEEE Conference on Computer Vision and Pattern Recognition, pages 2814–2821, 2014.
  • (39) Luca Marchesotti, Claudio Cifarelli, and Gabriela Csurka. A framework for visual saliency detection with applications to image thumbnailing. In IEEE International Conference on Computer Vision, pages 2232–2239, 2009.
  • (40) Zhu, Chunbiao, Li, Ge. A Three-pathway Psychobiological Framework of Salient Object Detection Using Stereoscopic Technology. ICCVW, pages 3008–3014, 2017.
  • (41) Parthipan Siva, Chris Russell, Tao Xiang, and Lourdes Agapito. Looking beyond the image: Unsupervised learning for object saliency and detection. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3238–3245, 2013.
  • (42) Ming Ming Cheng, Jonathan Warrell, Wen Yan Lin, Shuai Zheng, Vibhav Vineet, and Nigel Crook. Efficient salient region detection with soft image abstraction. In IEEE International Conference on Computer Vision, pages 1529–1536, 2014.
  • (43) Huaizu Jiang, Jingdong Wang, Zejian Yuan, Yang Wu, Nanning Zheng, and Shipeng Li. Salient object detection: A discriminative regional feature integration approach. In IEEE Conference on Computer Vision and Pattern Recognition, pages 2083–2090, 2013.
  • (44) Zhu, Chunbiao, Li, Ge, et al. An Innovative Salient Object Detection Using Center-Dark Channel Prior. ICCVW, pages 1509–1515, 2017.
  • (45) T. Liu, Z. Yuan, J. Sun, J. Wang, N. Zheng, X. Tang, and H. Y. Shum. Learning to detect a salient object. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(2):353, 2011.
  • (46) Paria Mehrani. Saliency segmentation based on learning and graph cut refinement. 2010.
  • (47) Kai Yueh Chang, Tyng Luh Liu, Hwann Tzong Chen, and Shang Hong Lai. Fusing generic objectness and visual saliency for salient object detection. In IEEE International Conference on Computer Vision, pages 914–921, 2011.
  • (48) Yao Lu, Wei Zhang, Hong Lu, and Xiangyang Xue. Salient object detection using concavity context. In International Conference on Computer Vision, pages 233–240, 2011.
  • (49) Ying Wu and Xiaohui Shen. A unified approach to salient object detection via low rank matrix recovery. In IEEE Conference on Computer Vision and Pattern Recognition, pages 853–860, 2012.
  • (50) Yulin Xie, Huchuan Lu, and Ming Hsuan Yang. Bayesian saliency via low and mid level cues. IEEE Transactions on Image Processing A Publication of the IEEE Signal Processing Society, 22(5):1689–1698, 2013.
  • (51) You X, Du L, Cheung Y, et al. A blind watermarking scheme using new nontensor product wavelet filter banks. IEEE Transactions on Image Processing, pages 3271-3284, 2010.
  • (52) Imamoglu N, Lin W, Fang Y. A saliency detection model using low-level features based on wavelet transform. IEEE transactions on multimedia, pages 96-105, 2013.
  • (53) Sun X, Huang Z, Yin H, et al. An Integrated Model for Effective Saliency Prediction. AAAI, pages 274-281, 2017.
  • (54) Zhao S, Yao H, Gao Y, et al. Continuous probability distribution prediction of image emotions via multitask shared sparse regression. IEEE Transactions on Multimedia, pages 632–645, 2017.
  • (55) Zhu, Chunbiao, Li, Ge, et al. A Multilayer Backpropagation Saliency Detection Algorithm Based on Depth Mining. International Conference on Computer Analysis of Images and Patterns, pages 14–23, 2017.