I Introduction
Saliency detection is a preprocessing step in computer vision which aims at finding salient objects in an image [2]
. Saliency helps allocate computing resources to the most informative striking objects in an image, rather than processing the background. This is very appealing for many computer vision tasks such as object tracking, image and video compression, video summarization, image retrieval and classification. A lot of previous effort has been spent on this problem and has resulted in several methods
[4, 5]. Yet, saliency detection in arbitrary images remains to be a very challenging task, in particular over images with several objects amidst high background clutter.On the one hand, unsupervised methods are usually more economical than supervised ones because no training data is needed. But they usually require a prior hypothesis about salient objects, and their performance heavily depend on reliability of the utilized prior. Take a recently popular label propagation approach as an example (e.g., [12][36][14][32]). First, seeds are selected according to some prior knowledge (e.g., boundary background prior), and then, labels are propagated from seeds to unlabeled regions. They work well in most of the cases, but their results will be inaccurate if the seeds are wrongly chosen. For instance, when image boundary regions serve as background seed, the output will be unsatisfactory if the salient objects touch the image boundary (see the first row of Figure 1(d)).
On the other hand, supervised methods are generally more efficient. Compared with unsupervised methods based on heuristic rules, supervised methods can learn more representative properties of salient objects from numerous training images. The prime example is deep learning based methods
[30][37][18][17]. Owning to their hierarchical architecture, deep neural networks (e.g., CNNs
[16]) can learn highlevel semantically rich features. Consequently, these methods are able to detect semantically salient objects in complex backgrounds. However, offline training a CNN needs a great deal of training data. As a result, using CNNs for saliency detection, although effective, is relatively less economical than unsupervised approaches.In this paper, we attempt to overcome the aforementioned drawbacks. To begin with, the saliency detection problem is formulated as a Saliency Game among image regions. Our main motivation in formulating saliency and attention in a gametheoretic manner is the very essence of attention which is the competition among objects to enter high level processing. Most previous methods formulate saliency detection as minimizing one single energy function that incorporates saliency priors. Different image regions are often considered through adding terms in the energy function (e.g., [38]) or sequentially (e.g., [36]
). If the priors are wrong, optimization of their energy function might lead to wrong results. In contrast, we define one specific payoff function for each superpixel which incorporates multiple cues including spatial position prior, objectness prior, and support from others. Adopting two independent priors makes the proposed method more robust since when one prior is inappropriate, the other might work. The goal of the proposed Saliency Game is to maximize the payoff of each player given other players’ strategies. This can be regarded as maximizing many competing objective functions simultaneously. The game equilibrium automatically provides a tradeoff, so that when some image region can not be assigned a right saliency value by optimizing one objective function (e.g., due to misleading prior), optimization of the other objective functions might help to give them a right saliency value. This approach seems very natural for attention modeling and saliency detection, as also features and objects compete to capture our attention.
In addition, it is known that one main factor for the astonishing success of deep neural networks is their powerful ability to learn highlevel semanticallyrich features. Using features extracted from a pretrained CNN to build an unsupervised method seems a considerable option, as it allows utilizing the aforementioned strength while avoiding timeconsuming training. However, rich semantic information comes with the cost of diluting image features through convolution and pooling layers. Due to this, we also use traditional color features as supplementary information. To make full use of these two complementary features for better detection results, we avoid taking the weighted sum of the raw results generated by the above Saliency Game in the two feature spaces. Instead, we further propose an Iterative Random Walk algorithm across two feature spaces, deep features and the traditional CIELab color features, to refine saliency maps. In every iteration of the Iterative Random Walk, the propagation in the two feature spaces are penalized by the latest output of each other. Figure
2 shows the pipeline of our algorithm.In a nutshell, the main contributions of our work include:

We propose a novel unsupervised Saliency Game to detect salient objects. Adopting two independent priors improves robustness. The nature of game equilibria assures accuracy when both priors are unsatisfactory,

An Iterative Random Walk algorithm across two feature spaces is proposed that takes advantage of the complementary relationship between the color feature space and the deep feature space to further refine the results.
Ii Related work
Some saliency works have followed an unsupervised approach. In [12], saliency of each region was defined as its absorbed time from boundary nodes, which measures its global similarity with all boundary regions. Yang et al. . [36] ranked the similarity of image regions with foreground or background cues via graphbased manifold ranking. Saliency value of each image element was determined based on its relevance to given seeds. In [14], saliency pattern was mined to find foreground seeds according to prior maps. Foreground labels were propagated to unlabeled regions. Tong et al. . [27] proposed a learning algorithm to bootstrap training samples generated from prior maps. These methods exploited either boundary background prior or foreground prior from a prior map, while we adopt two different priors in our method for robustness purposes. Priors only act as weak guidance with very small weights in the payoff function of our proposed Saliency Game.
Some deep learning based saliency detection methods have achieved great performance. In [30], two deep neural networks were trained, one to extract local features and the other to conduct a global search. Zhao et al. . [37] proposed a multicontext deep neural network taking both global and local context into consideration. Li et al. . [18] explored highquality visual features extracted from deep neural networks to improve the accuracy of saliency detection. In [17], high level deep features and low level handcrafted features were integrated in a unified deep learning framework for saliency detection. All above methods needed a lot of time and many images for training. In this work, we are not against the CNN models, but we combine deep features with traditional color features in an unsupervised way, which result in an efficient unsupervised method complementary to CNNs that does on par with the above models that need labeled training data. Hopefully, this will encourage new models that can utilize both labeled and unlabeled data.
Furthermore, there are many computer vision and learning tasks in which game theory has been applied successfully. A grouping game among data points was proposed in
[28]. Albarelli et al. . [3] proposed a noncooperative game between two sets of objects to be matched. A game between a regionbased segmentation model and a boundarybased segmentation model was proposed in [6] to integrate two submodules. Erdem et al. [9] formulated a multiplayer game for transduction learning, whereby equilibria correspond to consistent labeling of the data. In [23], Miller et al. showed that the relaxation labeling problem [11] is equivalent to finding Nash equilibria for polymatrix nperson games. However, to the best of our knowledge, game theory has not yet been used for salient object detection.Iii Definitions and symbols
In this section, we introduce definitions and symbols that will be used throughout the paper.
Superpixels: In our model, the processing units are superpixels segmented from an input image by the SLIC algorithm [2]. denotes the enumeration of the set of superpixels. , , denotes the mask of the th superpixel, where indicates that the pixel located at of the input image belongs to the th superpixel, and , otherwise.
Features: We use FCN32s [22] features due to its great success in semantic segmentation. We choose the output of the Conv5
layer as feature maps for the input image. This is because features in the last layers of CNNs encode semantic abstraction of objects and are robust to appearance variations. Since the feature maps and the image are not of the same resolution, we resize the feature maps to the input image size via linear interpolation. We denote the affinity between the
th superpixel and the th superpixel in deep feature space as , which is defined to be their Gaussian weighted Euclidean distance:(1) 
where
is the deep feature vector of superpixel
. Each superpixel is represented by the mean deep feature vector of all its contained pixels.The semanticallyrich deep features can help accurately locate the targets but fail to describe the lowlevel information. Therefore, we also employ color features as a complement to deep features. Inspired by [12], we use CIELab color histograms to describe superpixels’ color appearance. With CIELab color space divided into ranges, the color feature vector of the th superpixel is denoted as . Affinity between superpixels and in the color feature space is denoted as , which is defined to be their Gauss weighted Chisquare distance:
(2) 
Neighbor: We adopt a definition of 2hoop neighbor which is frequently used in superpixel based saliency detection methods. The set of the th superpixel’s neighbors is denoted as , where indicates the set of superpixels who share at least one common edge with the th superpixel. and are defined as follows:
(3) 
(4) 
where denotes the set of superpixels in image boundary.
Iv Saliency game
Here, we formulate a noncooperative game among superpixels to detect salient objects in an input image. The input image is firstly segmented into superpixels which act as players in the Saliency Game. Each player chooses to be ”background” or ”foreground” as its pure strategy and its mixed strategy corresponds to this superpixel’s saliency value. After showing their strategies, players obtain some payoff according to both their own and other players’ strategies. Payoff is determined by a payoff function which incorporates position and objectness cues as well as support from others. We use each player’s mixed strategy in the Nash equilibrium of the proposed Saliency Game as the saliency value of this superpixel in the output saliency map. Such an equilibrium corresponds to a steady state where each player plays a strategy that maximize its own payoff when the remaining players’ strategies are kept fixed, which provides a globally plausible saliency detection result.
Iva Game setting
The pure strategy set is denoted as , indicating ”to be foreground” or ”to be background”, respectively. All superpixels’ pure strategies are collectively called a pure strategy profile, denoted as . The strategy profile set is denoted as . denotes a single payoff that superpixel obtains, when playing pure strategy against superpixel who holds a pure strategy , in their 2person game. There are four possible values for that can be put into a matrix ,
(5) 
Payoff of superpixel in pure strategy profile , where the th superpixel’s pure strategy is the th component of vector , is denoted as . Payoff of superpixel when it adopts a pure strategy (not necessarily the th component of ), while all other superpixels adopt pure strategies in pure strategy profile is denoted as . We make an assumption that the total payoff of superpixel for playing with all others is the summation of payoffs for playing 2player games with every other single superpixel. Formally, we assume that and .
A pure best reply for player against a pure strategy profile is a pure strategy such that no other pure strategy gives a higher payoff to against . The ith player’s pure bestreply correspondence, which maps each pure strategy profile to a pure strategy , is denoted as :
(6) 
The combined pure bestreply correspondence is defined as the cartesian product of all players’ pure bestreply correspondence:
(7) 
A pure strategy profile is a pure Nash equilibrium if .
A probability distribution over the pure strategy set is termed as a mixed strategy. Mixed strategy of the
th superpixel is denoted as a 2dimensional vector , while and . The set of mixed strategies is denoted as . A pure strategy thereby can be regarded as an extreme mixed strategy where only one component is 1 and the other one is 0, e.g., th player’s pure strategy is equivalent to its mixed strategy because and . Correspondingly, expected payoff of superpixel for playing mixed strategy against superpixel holding mixed strategy is denoted as . We also denote , and to be mixed strategy version of , and . Similarly, a mixed Nash equilibrium is also defined to be a mixed strategy profile which is a mixed best reply to itself. These symbols or definitions are not stated here individually due to limited space.From the definition of the Nash equilibrium above, it can be inferred that in a Nash equilibrium of a game, each player adopts a strategy that maximizes its own payoff when other players’ strategies are fixed.
IvB Payoff function
We have assumed in Section IVA that the total payoff of superpixel for playing with all others is the summation of every single payoff in its 2person games with every other superpixel. Hence, here we focus on modeling payoff of every 2person game. We define the payoff of superpixel for its 2person game with as a weighted sum of three terms:
(8) 
where , , and indicate the th superpixel’s position prior, objectness prior and support that superpixel gives to superpixel , respectively. and are parameters controlling the weight of the first two terms.
Position: Position prior term in the payoff function is formulated based on the observation that salient objects often fall at the image center. The position term should give a greater payoff when, a) Center superpixels choose to be foreground, and b) Boundary superpixels choose to be background. Assuming to be the image center, and to be the center coordinate of superpixel , the position prior term is defined as follows,
(9) 
Objectness: Generally, objects attract more attention than background clutter. Hence superpixels which are part of an object are more likely to be salient. The objectness term should give a greater payoff when, a) Superpixels with high objectness choose to be foreground, and b) Superpixels with low objectness choose to be background.
We exploit the geodestic object proposal (GOP) [15] method to extract a set of object segmentations, and define the objectness of a superpixel according to its overlap with all GOP proposals as follows:
(10) 
where is the set of object candidate masks generated by the GOP method, where indicates that the pixel located at of the input image belongs to the th object proposal, and , otherwise. is the mask of the th superpixel as in Section III.
Support: With a much larger weight in the payoff function ( and being small), support from others is the main source of payoff obtained by each superpixel. When playing with an opponent, each superpixel judges if the opponent’s strategy is right or wrong with its own stance, and provides a higher or lower even negative support to the opponent accordingly. More precisely,

Each superpixel takes a neutral attitude to opponents who hold different pure strategies from itself, and provides them zero support.

If an opponent adopts the same pure strategy as superpixel ,

if the opponent’s strategy is similar to it, then superpixel provides the opponent a great support in recognition of its choice.

else if the opponent is not similar with it, then superpixel provides the opponent a small even negative support as punishment.

Formally, the support term is defined as follows,
(11) 
where is a positive constant, is the affinity between superpixels and , defined as and in Section III.
So far, we have modeled the payoff that superpixel obtains by playing a 2person pure strategy game with superpixel . The expected payoff that superpixel obtains by playing mixed strategy game with all others adopting strategies in mixed strategy profile can be given based on the definition stated at the beginning of this section,
(12) 
IvC Computing equilibrium
We use Replicator Dynamics [26] to compute the mixed strategy Nash equilibrium of the proposed Saliency Game. In Replicator Dynamics, a population of individuals play the game, generation after generation. A selection process acts on the population, causing the number of users holding fitter strategies to grow faster. We use discrete time Replicator Dynamics to find the equilibrium of the game, iterating until ,
(13) 
where represents the th component of the th player’s mixed strategy at time , is a vector whose th component is 1, while other components are 0. We set the initial mixed strategies of player to . is background birthrate for an individual, which is set to a positive number to make sure is positive for all in [34]. There could be multiple equilibria in a game, likewise in the proposed Saliency Game. Replicator Dynamics might reach different Nash equilibria if the initial state is set to different interior points of . Empirically, we find that is a good initialization leading to plausible saliency detection.
In the proposed Saliency Game, each superpixel inspects strategies of all other superpixels and takes a stance by providing large or small even negative support. Usually, no matter what strategy a superpixel adopts, there are both protesters and supporters. Game equilibira provide a good trade off among different influences. Thus, in the equilibrium of the proposed Saliency Game with payoff function as defined in Eqn. 8, each superpixel chooses a strategy that suits itself best given its position, objectness, and support from others. Doing so has two advantages: 1) the center position prior and the objectness prior are almost independent, when one prior is unsatisfactory, the other may work.
As shown in the first row of Figure 3, the little pug appears away from image center, but since it is the only object in the image, the objectness prior identifies it correctly. 2) the two priors only serve as weak guidance and obtain small weights in the payoff function. Even when they are both unsatisfactory on some image regions, pressure from peers will impel these regions to get proper saliency values in the equilibrium of the game. As shown in the second row of Figure 3, although only heads of the people are high in position prior and objectness prior, the produced saliency map can highlight the entire object. From the third row of Figure 3, we can see that the proposed algorithm also suppresses background effectively when prior highlights background areas by mistake. Note that in order to illustrate effectiveness of the proposed Saliency Game, only color feature is used in the three shown cases.
V Iterative random walk
Traditional color features are of highresolution, so saliency maps generated in color space are detailed and with clear edges. But due to lack of highlevel information, sometimes they fail to locate the targets accurately (see Figure 4(c)). On the contrary, since deep features encode highlevel concept of objects well, saliency maps generated in the deep feature space are able to find correct salient objects in an image. But due to several layers of convolution and pooling, these features are too coarse. Thus the generated saliency maps are indistinctive, as shown in Figure 4(d).
Accordingly, here, we use both complementary features for a better result. However, as shown in Figure 4(e), although the weighted sum of the two is slightly better, they are not satisfactory. To solve this problem, in this section, inspired by the metric fusion presented in [29], we propose an Iterative Random Walk method to best exploit this two complementary feature spaces. In the proposed Iterative Random Walk, metrics in the two feature spaces are fused as stated in [29] (corss fusion in Eqn. 16 is the work of Tu et al. .), in addition, we also make the two propagation penalized by the latest output of each other (cross propagation in Eqn. 17 and Eqn. 18 is our work). Figure 5 shows that both cross fusion and cross penalization contribute.
With superpixels as nodes, a neighbor graph and a complete graph are constructed in both feature space (deep and color features). The affinity between two superpixels is assigned to the edge weight. Four weight matrices are defined:

and : weight matrices of neighbor and complete graphs in the deep feature space, respectively.

and : weight matrices of neighbor and complete graphs in the color space, respectively.
In the complete graphs, there is an edge between every pair of nodes, while in the neighbor graphs, each node is connected only to its neighbors. , and are defined as follows,
(14) 
(15) 
and are defined similarly but using . See Section III for definitions of and .
Firstly, let , , and to be the number of iterations. Symbols with superscript or correspond to variables in the color or deep feature space, respectively. Following [29], we fuse these four affinity matrices as follows,
(16) 
Then, using the fused affinity matrices, we let the propagation results in the two feature spaces penalize each other. Two random walk energy functions are defined as follows,
(17) 
(18) 
where is the label vector, is the th superpixel’s label, and is a parameter.
By minimizing the two energy functions above, we have,
(19) 
(20) 
where is the Laplacian matrix. and are set to the results of the Saliency Game stated in Section IV. After rounds, the iteration converges and the final saliency map is obtained as:
(21) 
where and control the weight of the two results.
As shown in Figure 4(f), through the Iterative Random Walk, information from the color space helps cut the whole salient object clearly. Semantic information from deep features helps locate the target object accurately. Also, objects that could not be detected in one feature space can be detected with the help of results from the other feature space.
Vi Experiments and Results
In this section, we evaluate the proposed method on 6 benchmark datasets: ECSSD [35] (1000 images), PASCALS [20] (850 images), MSRA5000 [21] (5000 images), HKUIS [18] (4447 images), DUTOMRON [36] and SOD [24].
We compare our algorithm with 11 stateoftheart methods including BL [27], BSCA [25], DRFI [13], DSR [19], HS [35], LEGS [30], MCDL [37], MR [36], RC [7], wCO [38], and KSR [33]. Results of different methods are provided by authors or achieved by running available codes.
Via Parameter setting
All parameters are set once fixed over all the datasets. We segment an image into 100, 150, 200, and 250 superpixels (i.e., 4 segmentation image), run the algorithm on each map, and average the four outputs to form the final saliency map. is set to 0.1 and is set to . The parameters controlling the weight of each term in the payoff function (Eqn. 8) are set to , respectively. in Eqn. 11 is set to 0.007. in Eqn. 19 and Eqn. 20 is set to 1. In Eqn. 21, we set , and .
The proposed method is implemented in MATLAB on a PC with a 3.6GHz CPU and 32GB RAM. It takes about 2.3 seconds to generate a saliency map, excluding the time for deep feature extraction and superpixel segmentation.
ViB Evaluation metrics
We use precisionrecall curve, Fmeasure curve, Fmeasure and AUC to quantitatively evaluate the experimental results. The precision value is defined as the ratio of salient pixels correctly assigned to all salient pixels in the map to be evaluated, while the recall value corresponds to the percentage of the detected salient pixels with respect to all salient pixels in the groundtruth map. The Fmeasure is an overall performance indicator computed by the weighted harmonic of precision and recall. We set
as suggested in [1] to emphasize the precision.Given a saliency map with intensity values normalized to the range of 0 and 1, a series of binary maps are produced by using several fixed thresholds in . We compute the precision/recall pairs of all the binary maps to plot the precisionrecall curves and the Fmeasure curves. As suggested in [1], we use twice the mean value of the saliency maps as the threshold to generate binary maps for computing Fmeasure. Notice that some works have reported slightly different Fmeasures using different thresholds.
ViC Algorithm validation
To demonstrate the effectiveness of each step of our algorithm, we test the proposed Saliency Game and the Iterative Random Walk (with and without metric fusion) separately on ECSSD and PASCALS datasets. PR curves in Figure 5 show that:

The proposed saliency game algorithm achieves favorable performance. Note that even when using only simple color features, as a fully unsupervised method, our proposed Saliency Game (C in Figure 5) algorithm is comparable with supervised methods.

The Iterative Random Walk improves performance in both deep feature space (D in Figure 5) and color space CD(IRW). Comparing results of Iterative Random Walk CD(IRW), Iterative Random Walk without metric fusion CD(IRW*), and weighted sum of results in the two feature spaces CD(ave), demonstrates the advantage of cross penalization and metric fusion in the Iterative Random Walk.
In addition, as an unsupervised approach, our method is economical and practical. Although some deep learning based methods outperform ours in few cases, a lot of time and a large number of training samples are required to assure their effectiveness. Otherwise, performance of these methods might not be as well as ours. To demonstrate this, we show comparison in terms of Fmeasure between our method and RFCN [31] finetuned on different number of training samples in Figure 6. RFCN is a recently proposed deep learning based method that achieved excellent performance. However, it can be seen from the figure that RFCN does not do well without finetuning. Its Fmeasure increases as the number of training samples grows. Our method is equivalent to RFCN finetuned on about 50009000 images.
ViD Comparison with stateoftheArt methods
As is shown in Figure 10, Figure 11, Table I and Table II
, our proposed method compares favorably against 11 stateoftheart approaches over six different datasets. Among models, BL, BSCA, DSR, HS, MR, RC, wCO are unsupervised methods. DRFI, LEGS, MCDL, KSR are supervised methods. DRFI learns a random forest regressor, LEGS and MCDL train a convolutional neural network, KSR learns a classifier and a subspace projection to rank object proposals based on RCNN features. For a fair comparison, we do not provide evaluation results of DRFI, LEGS, MCDL, and KSR methods on MSRA5000 dataset since these methods all randomly select images from this dataset for training. Further, since LEGS also selects images from PASCALS dataset, we do not show its performance over the PASCALS dataset. Visual comparison of the proposed method against stateoftheart on different datasets is shown in Figure
12, 13, 14, 15.ViE Sensitivity analysis
In this section, we test sensitivity of the proposed Saliency Game to parameters , , and sensitivity of the Iterative Random Walk to parameters and . As shown in Figure 7 and 8, the performance in terms of Fmeansure score almost keeps the same when varying the parameters a little, so the proposed method is not sensitive to these parameters.
ViF Equilibria
The proposed Saliency Game is a special category of games named polymatrix games [10], where each player plays a twoplayer game against each other and his payoff is then the sum of the payoffs from each of the twoplayer games [8]. Howson et al. [10] showed that every polymatrix game has at least one equilibrium. Therefore, the proposed Saliency Game also has at least one, but could have more than one equilibria. Replicator Dynamics is invoked to find a Nash equilibrium of the game, in which different Nash equilibria might be reached if the initial state is set to different interior points of . Empirically, we find that is a good initialization leading to plausible saliency detection. In this section, we show the saliency detection results corresponding to other four Nash equilibria, reached by Replicator Dynamics starting from four different interior points of . We denote the initial state used in the paper as , and the other four initial states as , , , and . Each of them is a matrix, where the th column vector (denoted as , , , , respectively) corresponds to the mixed strategy of superpixel . Variables , , and are set as follows:
(22) 
(23) 
(24) 
(25) 
where is the saliency value of superpixel computed by another saliency detection method. In this experiment, we use MR [36] model to compute . We try four different initial states to test whether inducing prior knowledge into the initial state leads to a better saliency detection result. Each of the five different Nash equilibria corresponds to a saliency detection result. We show the quantitative comparison of the five different results in terms of Fmeasure curves and PR curves in Figure 9. It can be seen that the initial state without any prior knowledge, which is adopted in the paper, leads to the best saliency detection.
Vii Summary and Conclusion
We propose a novel saliency detection algorithm. Firstly, we formulate a Saliency Game among superpixels, and a saliency map is generated according to each region’s strategy in the Nash equilibrium of the proposed Saliency Game. Secondly, an iterative random walk that combines a deep feature and a color feature is constructed to refine the saliency maps generated in the last step. Extensive experiments over four benchmark datasets demonstrate that the proposed algorithm achieves favorable performance against stateoftheart methods. The sensitivity analysis shows the robustness of the proposed method to parameter changes.
Different from most previous methods that formulate saliency detection as minimizing one single energy function, the gametheoretic approach can be regarded as maximizing many competing objective functions simultaneously. The game equilibrium automatically provides a tradeoff. This seems very natural for attention modeling and saliency detection, as also features and objects compete to capture our attention. In addition, compared with CNN based saliency detection methods which need to be trained on images with pixellevel masks as ground truth, the proposed method extracts features from a pretrained CNN and combines them with color features in an unsupervised manner. This provides an efficient complement to CNNs that does on par with models that need labeled training data. Hopefully, our approach will encourage future models that can utilize both labeled and unlabeled data.
Acknowledgment
The authors would like to thank…
References

[1]
R. Achanta, S. Hemami, F. Estrada, and S. Susstrunk.
Frequencytuned salient region detection.
In
Proceedings of IEEE Conference on Computer Vision and Pattern Recognition
, pages 1597–1604, 2009.  [2] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Susstrunk. Slic superpixels. 2010.
 [3] A. Albarelli, B. S. Rota, A. Torsello, and M. Pelillo. Matching as a noncooperative game. In Proceedings of the IEEE International Conference on Computer Vision, pages 1319–1326, 2009.
 [4] A. Borji, M.M. Cheng, H. Jiang, and J. Li. Salient object detection: A benchmark. IEEE Transactions on Image Processing, 24(12):5706–5722, 2015.
 [5] A. Borji and L. Itti. Stateoftheart in visual attention modeling. IEEE Trans. Pattern Anal. Mach. Intell., 35(1):185–207, 2013.
 [6] A. Chakraborty and J. S. Duncan. Gametheoretic integration for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(1):12–30, 1999.
 [7] M.M. Cheng, N. J. Mitra, X. Huang, P. H. Torr, and S.M. Hu. Global contrast based salient region detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(3):569–582, 2015.
 [8] A. Deligkas, J. Fearnley, T. P. Igwe, and R. Savani. An empirical study on computing equilibria in polymatrix games. 2016.
 [9] A. Erdem and M. Pelillo. Graph transduction as a noncooperative game. Neural Computation, 24(3):700–723, 2012.
 [10] J. Howson, Joseph T. Equilibria of polymatrix games. Management Science, 18(5Part1):312–318, 1972.
 [11] R. A. Hummel and S. W. Zucker. On the foundations of relaxation labeling processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 5(3):267–287, 1983.

[12]
B. Jiang, L. Zhang, H. Lu, C. Yang, and M. H. Yang.
Saliency detection via absorbing markov chain.
In Proceedings of the IEEE International Conference on Computer Vision, pages 1665–1672, 2013.  [13] H. Jiang, J. Wang, Z. Yuan, Y. Wu, N. Zheng, and S. Li. Salient object detection: A discriminative regional feature integration approach. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 2083–2090, 2013.
 [14] Y. Kong, L. Wang, X. Liu, H. Lu, and R. Xiang. Pattern mining saliency. In Proceedings of European Conference on Computer Vision, 2016.
 [15] P. Krähenbühl and V. Koltun. Geodesic object proposals. In Proceedings of European Conference on Computer Vision, pages 725–739, 2014.
 [16] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. Gradientbased learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
 [17] G. Lee, Y.W. Tai, and J. Kim. Deep saliency with encoded low level distance map and high level features. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2016.
 [18] G. Li and Y. Yu. Visual saliency based on multiscale deep features. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 5455–5463, 2015.
 [19] X. Li, H. Lu, L. Zhang, X. Ruan, and M.H. Yang. Saliency detection via dense and sparse reconstruction. In Proceedings of the IEEE International Conference on Computer Vision, pages 2976–2983, 2013.
 [20] Y. Li, X. Hou, C. Koch, J. M. Rehg, and A. L. Yuille. The secrets of salient object segmentation. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 280 – 287, 2014.
 [21] T. Liu, J. Sun, N. N. Zheng, X. Tang, and H. Y. Shum. Learning to detect a salient object. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(2):353–67, 2011.
 [22] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 1337–1342, 2015.
 [23] D. A. Miller and S. W. Zucker. Copositiveplus lemke algorithm solves polymatrix games. Operations Research Letters, 10(5):285–290, 1991.
 [24] V. Movahedi and J. H. Elder. Design and perceptual validation of performance measures for salient object segmentation. In Computer Vision and Pattern Recognition Workshops, pages 49–56, 2010.
 [25] Y. Qin, H. Lu, Y. Xu, and H. Wang. Saliency detection via cellular automata. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2015.
 [26] P. D. Taylor and L. B. Jonker. Evolutionarily stable strategies and game dynamics. Journal of Theoretical Biology, 40(12):145–156, 1978.
 [27] N. Tong, H. Lu, X. Ruan, and M.H. Yang. Salient object detection via bootstrap learning. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 1884–1892, 2015.
 [28] A. Torsello, S. R. Bulo, and M. Pelillo. Grouping with asymmetric affinities: A gametheoretic perspective. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 292–299, 2006.
 [29] Z. Tu, Z. H. Zhou, W. Wang, J. Jiang, and B. Wang. Unsupervised metric fusion by cross diffusion. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 2997–3004, 2012.

[30]
L. Wang, H. Lu, R. Xiang, and M. H. Yang.
Deep networks for saliency detection via local estimation and global search.
In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 3183–3192, 2015.  [31] L. Wang, L. Wang, H. Lu, P. Zhang, and R. Xiang. Saliency detection with recurrent fully convolutional networks. In Proceedings of European Conference on Computer Vision, 2016.
 [32] Q. Wang, W. Zheng, and R. Piramuthu. Grab: Visual saliency via novel graph model and background priors. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2016.
 [33] T. Wang, L. Zhang, H. Lu, C. Sun, and J. Qi. Kernelized subspace ranking for saliency detection. In Proceedings of European Conference on Computer Vision, 2016.
 [34] J. W. Weibull. Eolutionary Game Theory. MIT Press,, 1999.
 [35] Q. Yan, L. Xu, J. Shi, and J. Jia. Hierarchical saliency detection. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 1155–1162, 2013.
 [36] C. Yang, L. Zhang, H. Lu, X. Ruan, and M.H. Yang. Saliency detection via graphbased manifold ranking. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 3166–3173, 2013.
 [37] R. Zhao, W. Ouyang, H. Li, and X. Wang. Saliency detection by multicontext deep learning. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 1265–1274, 2015.
 [38] W. Zhu, S. Liang, Y. Wei, and J. Sun. Saliency optimization from robust background detection. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 2814–2821, 2014.
Comments
There are no comments yet.