Learning a time-dependent master saliency map from eye-tracking data in videos

02/02/2017 · by Antoine Coutrot, et al.

To predict the most salient regions of complex natural scenes, saliency models commonly compute several feature maps (contrast, orientation, motion...) and linearly combine them into a master saliency map. Since feature maps have different spatial distribution and amplitude dynamic ranges, determining their contributions to overall saliency remains an open problem. Most state-of-the-art models do not take time into account and give feature maps constant weights across the stimulus duration. However, visual exploration is a highly dynamic process shaped by many time-dependent factors. For instance, some systematic viewing patterns such as the center bias are known to dramatically vary across the time course of the exploration. In this paper, we use maximum likelihood and shrinkage methods to dynamically and jointly learn feature map and systematic viewing pattern weights directly from eye-tracking data recorded on videos. We show that these weights systematically vary as a function of time, and heavily depend upon the semantic visual category of the videos being processed. Our fusion method allows taking these variations into account, and outperforms other state-of-the-art fusion schemes using constant weights over time. The code, videos and eye-tracking data we used for this study are available online: http://antoinecoutrot.magix.net/public/research.html


I Introduction

Our visual environment contains much more information than we are able to perceive at once. Attention allows us to select the most relevant parts of the visual scene and bring the high-resolution part of the retina, the fovea, onto them. The modeling of visual attention has been the topic of numerous studies in many different research fields, from neuroscience to computer vision. This interdisciplinary interest has led to the publication of a large number of computational models of attention, with many different approaches; see [1] for an exhaustive review. Indeed, being able to predict the most salient regions in a scene leads to a wide range of applications, such as image segmentation [2], image quality assessment [3], image and video compression [4], image re-targeting [5], video summarization [6], object detection [7] and recognition [8], robot navigation [9], human-robot interaction [10], retinal prostheses [11], and tumour identification in X-ray images [12].

Fig. 1:

State-of-the-art feature map fusion schemes, classified into four families: 1) image processing methods, possibly relying on cognitive priors; 2) machine-learning methods; 3) statistical modeling methods; 4) methods learning from eye data. These families can intersect. From top to bottom and left to right: Milanese 1994 [13], Itti 1998, 2000, 2005 [14, 15, 16], Ma 2005 [17], Frintrop 2005 [18], LeMeur 2007 [19], Cerf 2009 [20], Marat 2009 [21], Ehinger 2009 [22], Xiao 2010 [23], Evangelopoulos 2013 [6], Erdem 2013 [24], Lu 2010 [25], Peng 2009 [26], Murray 2011 [27], Chamaret 2010 [28], Seo 2009 [29], Lee 2011 [30], Lin 2014 [31], Kavak 2013 [32], Zhao 2012 [33], Borji 2012 [34], Bruce 2005 [35], Han 2007 [36], Riche 2012 [37], Kienzle 2006 [38], Judd 2009 [39], Liang 2015 [40], Liang 2010 [41], Liu 2010 [42], Navalpakkam 2005 [43], Torralba 2006 [44], Zhang 2008 [45], Zhang 2009 [46], Boccignone 2008 [47], Elazary 2010 [48], Harel 2006 [49], Parks 2014 [50], Rudoy 2013 [51], Vincent 2009 [52], Couronne 2010 [53], Ho Phuoc 2010 [54], Gautier 2012 [55], Coutrot 2014 [56, 57], Peters 2007 [58], Zhao 2011 [59].

Determining which location in a scene will capture attention - and hence the gaze of observers - requires finding the most salient subset of the input visual stimuli. Saliency relies on both stimulus-driven (bottom-up) and observer- or task-related (top-down) features. Most visual attention models focus on bottom-up processes and are guided by the Feature Integration Theory (FIT) proposed by Treisman and Gelade [60]. They decompose a visual stimulus into several feature maps dedicated to specific visual features (such as orientations, spatial frequencies, intensity, luminance, or contrast) [61, 14, 19, 21]. In each map, the spatial locations that locally differ from their surroundings are emphasized (conspicuity maps). Then, the maps are combined into a master saliency map that points out the regions most likely to attract the visual attention, and the gaze, of observers. The visual features used in a model play a key role, as they strongly affect its prediction performance. Over the years, authors refined their models by adding different features, such as center bias [39], faces [62], text [20], depth [55], contextual information [44], or sound [56, 57]. Other approaches, moving away from the FIT, have also been proposed. They often rely on machine-learning or statistical modeling techniques such as graphs [49], conditional random fields [42], Markov models [50], multiple kernel learning [32], adaptive boosting [33], Bayesian modeling [16], or information theory [31].
Major problems arise when attempting to merge feature maps that have different dynamic ranges and stem from different visual dimensions. Various fusion schemes have been used. In Fig. 1, we propose a classification of these methods according to four main approaches: image processing (possibly relying on cognitive priors), statistical modeling, machine-learning, and learning from eye data. Note that some methods might belong to two classes, like "statistical modeling" and "learning from eye data": statistical models can be used to learn the feature map weights that best fit eye-tracking data [51, 52, 53, 54, 55, 56, 57, 58, 59]. In the following, we focus on schemes involving a weighted linear combination of feature maps:

S = Σ_{k=1}^{K} w_k M_k   (1)

with S the master saliency map, M_k the maps of the K features extracted from the image or video frame being processed, and w_k their associated weights. S and the M_k are two-dimensional maps corresponding to the spatial dimensions of the visual scene. Non-linear fusion schemes based on machine-learning techniques such as multiple kernel learning can be efficient, but they often suffer from being used as a "black box". On the other hand, linear combinations are easier to interpret: the greater the weight, the more important the corresponding feature map in the master saliency map. Moreover, such linear fusion is biologically plausible [63].
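As a concrete illustration, the linear fusion of Eq. (1) is simply a per-pixel weighted sum of the K feature maps. A minimal sketch (the array shapes and toy values below are our own, not from the paper):

```python
import numpy as np

def fuse_linear(feature_maps, weights):
    """Master saliency map as a weighted sum of K feature maps (Eq. 1).

    feature_maps: array of shape (K, H, W); weights: length-K sequence.
    """
    feature_maps = np.asarray(feature_maps, dtype=float)
    # S[i, j] = sum_k weights[k] * feature_maps[k, i, j]
    return np.tensordot(np.asarray(weights, dtype=float), feature_maps, axes=1)

# Toy example: two 2x2 feature maps, weighted 0.75 and 0.25
M = np.array([[[1.0, 0.0],
               [0.0, 0.0]],
              [[0.0, 0.0],
               [0.0, 1.0]]])
S = fuse_linear(M, [0.75, 0.25])
```

The whole fusion problem then reduces to choosing the weight vector, which is what the methods reviewed below differ on.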
In this linear context, we review two main approaches to determine the weights allocated to different feature maps. The first approach is based on priors (psychovisual or image-based), while the second is totally data-driven: the feature weights are learned to fit eye-tracking data.

I-A Computing weights via image processing and cognitive priors

Itti et al. proposed an efficient normalization scheme that has been taken up many times [14, 15]. First, all feature maps are normalized to the same dynamic range. Second, to promote feature maps with a sparse distribution of saliency and to penalize those with a more uniform spatial distribution, feature map regions are weighted by their local maximum. Finally, the feature maps are simply averaged. Marat et al. apply the same normalization, and use a fusion taking advantage of the characteristics of the static (luminance) and the dynamic (motion) saliency maps [21]. Their weights are respectively equal to the maximum of the static saliency map and to the skewness of the dynamic saliency map. A similar approach is adopted in [20], where the authors use a spatial competition mechanism based on the squared ratio of the global maximum over the average local maximum. This promotes feature maps with one conspicuous location to the detriment of maps presenting numerous conspicuous locations. The feature maps are then simply averaged. In [18], an analogous approach is proposed: each feature map is weighted by 1/√m, with m the number of local maxima that exceed a given threshold. In [19], the authors propose a refinement of the normalization scheme introduced by Milanese et al. in [13]. First, the dynamic range of each feature map is normalized using the theoretical maximum of the considered feature. The saliency map is then obtained as a sum of maps representing the inter- and intra-feature competition.
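A sketch of such a peak-based competition scheme, weighting a map by the squared ratio of its global maximum over the mean of its local maxima (the window size and toy maps are our own choices, not from the cited models):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def peakiness_weight(fmap, size=3):
    """Weight a feature map by the squared ratio of its global maximum over
    the mean of its local maxima: a map with one dominant peak gets a large
    weight, a map with many comparable peaks gets a weight close to 1."""
    fmap = fmap / (fmap.max() + 1e-12)               # common dynamic range
    is_local_max = (maximum_filter(fmap, size=size) == fmap) & (fmap > 0)
    peaks = fmap[is_local_max]
    if peaks.size == 0:
        return 0.0
    return float((fmap.max() / peaks.mean()) ** 2)

# One dominant peak among small bumps vs. five equally conspicuous peaks
one_peak = np.zeros((5, 5))
one_peak[2, 2] = 1.0
one_peak[0, 0] = one_peak[0, 4] = one_peak[4, 0] = one_peak[4, 4] = 0.1
many_peaks = np.zeros((5, 5))
many_peaks[2, 2] = many_peaks[0, 0] = many_peaks[0, 4] = 1.0
many_peaks[4, 0] = many_peaks[4, 4] = 1.0
```

Here `peakiness_weight(one_peak)` is much larger than `peakiness_weight(many_peaks)`, which is the intended competition effect.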
Other saliency models simply weight feature maps with coefficients determined by testing various values, and keep the best empirical weights [17, 22, 23, 6].
Visual saliency models are often evaluated by comparing their outputs against the regions of the scene actually looked at by humans during an eye-tracking experiment. In some cases they can be very efficient, especially when attention is mostly driven by bottom-up processes.

However, visual exploration is not only shaped by bottom-up visual features; it is also heavily determined by numerous viewing behavioral biases, or systematic tendencies [64]. Some models solely based on viewing biases, i.e. blind to any visual information from the input scene, have even been shown to outperform state-of-the-art saliency models [65]. For instance, observers tend to look more at the center of a scene than at the edges; this tendency is known as the center bias [66, 67]. Some authors introduced this bias in their models through a static, centered 2D Gaussian map, leading to significant improvements [55, 62]. Many eye-tracking experiments have shown that the center bias is time-dependent: stronger at the beginning of an exploration than at the end [68, 69]. Moreover, several studies have pointed out that, in many contexts, top-down factors such as semantics or task take precedence over bottom-up factors in explaining gaze behavior [70, 71]. Visual exploration also relies on many individual characteristics, such as personality or culture [72, 73]. Thus, time-independent fusion schemes that only consider the visual features of the input often have a hard time accounting for the multivariate and stochastic nature of human exploration strategies [74].

I-B Learning weights from eye positions

To solve this issue, a few authors proposed to build models in a supervised manner, by learning a direct mapping from feature maps to the eye positions of several observers. Feature maps can encompass classic low-level visual features as well as maps representing viewing behavioral tendencies such as the center bias. The earliest learning-based approach was introduced by Peters & Itti in [58]. They used linear least square regression with constraints to learn the weights of feature maps from eye positions. Formally, let M_1, ..., M_K be a set of K feature maps, w = (w_1, ..., w_K) the corresponding vector of weights, and Y an eye position density map, represented as the recorded fixations convolved with an isotropic Gaussian kernel. The least square (LS) method estimates the weights ŵ by solving

ŵ = argmin_w || Y − Σ_{k=1}^{K} w_k M_k ||²   (2)

This method was successfully repeated in [59] and [34]. Another method to learn the weights of a linear combination of visual feature maps is the Expectation-Maximization algorithm [75]. It was first applied in [52], and taken up in [53, 54, 55, 56]. First, the eye position density map Y and the feature maps M_k are converted into probability density functions. After initializing the weights w, the following two steps are repeated until convergence. Expectation: the current model (i.e. the current w) is hypothesized to be correct, and the expectation of the model likelihood is computed (via the eye position data). Maximization: the weights w are updated to maximize the value found at the previous step.
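The two steps above can be sketched by treating the weights as mixture proportions over feature maps normalized as pdfs, evaluated at the recorded eye positions (a minimal sketch; the toy densities are our own):

```python
import numpy as np

def em_mixture_weights(densities, n_iter=100):
    """EM estimation of the weights of a mixture of feature-map densities.

    densities: (K, N) array; densities[k, i] is the value of the k-th
    feature map (normalized as a pdf) at the i-th recorded eye position.
    Returns weights w (non-negative, summing to 1).
    """
    K, _ = densities.shape
    w = np.full(K, 1.0 / K)                      # uniform initialization
    for _ in range(n_iter):
        # E-step: responsibility of each feature map for each fixation,
        # assuming the current weights are correct
        joint = w[:, None] * densities
        resp = joint / joint.sum(axis=0, keepdims=True)
        # M-step: update the weights to maximize the expected likelihood
        w = resp.mean(axis=1)
    return w

# Toy example: 4 fixations, three of them falling where feature 0 is dense
dens = np.array([[0.9, 0.8, 0.9, 0.1],
                 [0.1, 0.2, 0.1, 0.9]])
w = em_mixture_weights(dens)
```

The learned weight of the first feature map dominates, since most fixations fall where that map is dense.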
To be exhaustive, let us mention some other models that do not use a weighted linear combination of feature maps, but that are still trained on eye-tracking data. In [38, 39, 34, 40], a saliency model is learnt from eye movements using a support vector machine (SVM). In [41], the authors refine a region-based attention model with eye-tracking data using a genetic algorithm. Finally, Zhong et al. first use a Markov chain to model the relationship between image features and saliency, and then train a support vector regression (SVR) on eye-tracking data to predict the transition probabilities of the Markov chain [76].

I-C Contributions of the present study

  1. We provide the reader with a comprehensive review of visual feature fusion schemes in saliency models, exhaustive for gaze-based approaches.

  2. So far, all these gaze-based saliency models have only been used with static images. Moreover, feature weights have generally been considered constant in time. Visual exploration is a highly dynamic process shaped by many time-dependent factors (e.g. center bias). To take these variations into account, we use a statistical shrinkage method (Least Absolute Shrinkage and Selection Operator, Lasso) to learn the weights of feature maps from eye data for a dynamic saliency model, i.e. for videos.

  3. For the first time, we demonstrate that the weights of the visual features depend both on time (beginning vs. end of visual exploration) and on the semantic visual category of the video being processed. Dynamically adapting the weights of the visual features across time allows our model to outperform state-of-the-art saliency models using constant feature weights.

II Least Absolute Shrinkage and Selection Operator algorithm

The different methods used in the literature to learn a master saliency map from eye-tracking data are enshrined within the same paradigm. Starting from predefined feature maps, they estimate the weights leading to the optimal master saliency map, i.e. the one that best fits actual eye-tracking data. Hence, the cornerstone of these methods is the estimation approach. Least square regression and Expectation-Maximization suffer from two flaws sharing the same root. The first one is prediction accuracy: least square estimates have low bias but large variance, especially when some feature maps are correlated. For instance, a high positive weight could be compensated by a high negative weight of a correlated feature. The second is interpretation: when a model is fitted with many features, it is difficult to seize the "big picture". In a nutshell, we would like to be able to sacrifice potentially irrelevant features to reduce the overall variance and improve readability. This is precisely the advantage of Lasso over other learning methods such as least square regression or Expectation-Maximization [77]. Although widespread in other fields such as genetics [78] or pattern recognition [79], Lasso has never been used in the vision science community. It is a shrinkage method: given a model with many features, it selects the most relevant ones and discards the others, leading to a more interpretable and efficient model [80]. Lasso shrinks the feature weights by imposing an L1-penalty on their size:

ŵ = argmin_w || Y − Σ_{k=1}^{K} w_k M_k ||² + λ Σ_{k=1}^{K} |w_k|   (3)

with λ a penalty parameter controlling the amount of shrinkage. If λ = 0, the Lasso algorithm corresponds to the least square estimate: all the feature maps are taken into account. As λ increases, the weights w_k are shrunk toward zero, as is the number of considered feature maps. Note that the feature maps M_k have to be normalized, Lasso being sensitive to scale. Lasso is quite similar to ridge regression [81], with the L2-penalty replaced by an L1-penalty. The main advantage of L1 over L2 penalization is that L1 solutions are sparser: irrelevant feature maps are more effectively discarded [82, 80]. Computing the Lasso solution is a quadratic programming problem, and different approaches have been proposed to solve it. Here, we adapted the code proposed in the Sparse Statistical Modeling toolbox [83], and made it available online: http://antoinecoutrot.magix.net/public/code.html. λ is tuned to more or less penalize the feature map weights. A new weight vector w(λ) is determined for each value of λ. The model (i.e. the weights) with the lowest Bayesian Information Criterion (BIC) is chosen. BIC is a measure of the quality of a statistical model; to prevent overfitting, it takes into account both the model likelihood and its number of parameters [84]. For a model M and an observation Y,

BIC(M) = −2 ln L(Y | M) + K ln n   (4)

with L the likelihood, K the number of feature maps, and n the number of points in Y. The learned weights are signed, and their sum is not necessarily one, so for interpretability purposes we normalized them between 0 and 1.
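The procedure can be sketched with a hand-rolled coordinate-descent Lasso swept over a grid of penalties, keeping the weight vector with the lowest BIC. This is a simplified stand-in for the adapted toolbox code, with our own toy data, using the Gaussian log-likelihood expressed through the residual sum of squares:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent: min ||y - Xw||^2 + lam * sum|w|."""
    n, k = X.shape
    w = np.zeros(k)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(k):
            r = y - X @ w + X[:, j] * w[j]       # residual excluding feature j
            rho = X[:, j] @ r
            # soft-thresholding step shrinks small weights exactly to zero
            w[j] = np.sign(rho) * max(abs(rho) - lam / 2.0, 0.0) / col_sq[j]
    return w

def lasso_bic(X, y, lams):
    """Pick, along a grid of penalties, the weights with the lowest BIC."""
    n = len(y)
    best_bic, best_w = np.inf, None
    for lam in lams:
        w = lasso_cd(X, y, lam)
        rss = float(((y - X @ w) ** 2).sum())
        k_eff = np.count_nonzero(w)              # number of retained features
        bic = n * np.log(rss / n + 1e-12) + k_eff * np.log(n)
        if bic < best_bic:
            best_bic, best_w = bic, w
    return best_w

# Toy data: y depends on the first two "feature maps" only
rng = np.random.default_rng(0)
X = rng.random((100, 3))
y = 0.7 * X[:, 0] + 0.3 * X[:, 1]
w = lasso_bic(X, y, lams=[0.01, 0.1, 1.0, 10.0])
```

On this toy problem the selected weights stay close to (0.7, 0.3) while the irrelevant third feature is shrunk toward zero, which is exactly the sparsity property motivating the choice of Lasso above.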

III Practical Application

Fig. 2:

(a) Frame extracted from a video belonging to the One Moving Object category. The white and black dots represent the eye positions of participants. (b) Modeling of the eye position density map (left) as a weighted linear combination of the static saliency map, the dynamic saliency map, a center bias map, and a uniform distribution map.

Fig. 3: Temporal evolution of the weights of the static saliency map, dynamic saliency map and center bias map, learned via the Lasso algorithm for the four visual categories of videos. The temporal evolution of the uniform distribution weight is not represented because it equals zero for all frames. Error bars represent s.e.m.

We apply the Lasso algorithm to a video and eye-tracking database freely available online: http://antoinecoutrot.magix.net/public/databases.html

III-A Eye-tracking data

The eye movements of 15 participants were recorded using an eye-tracker (Eyelink 1000, SR Research, Ottawa, Canada) with a sampling rate of 1000 Hz and a nominal spatial resolution of 0.01 degree of visual angle. We recorded the movements of the dominant eye in a monocular pupil-corneal reflection tracking mode. For more details, see [56].

III-B Visual material

The videobase comprises 60 videos belonging to four visual categories: videos with faces, one moving object (one MO), several moving objects (several MO), and landscapes. Videos lasted between 10 and 31 seconds (720×576 pixels, 25 fps), and were presented with their original monophonic soundtracks (Fs = 48 kHz). Face videos present conversations between two to four people. We chose videos belonging to these categories because their regions of interest have very different spatial organizations. For instance, observers' gaze is clustered on speakers' faces in the Faces category, but is more scattered in the Landscapes category [85, 56]. For each video frame, we computed the following feature maps.
Eye position density map: a 2D Gaussian kernel (std = 1 degree of visual angle) is added at each of the 15 recorded eye positions.
Static saliency map: we used the static pathway of the spatio-temporal saliency model proposed in [21]. This biologically inspired model is based on luminance information. It processes the high spatial frequencies of the input to extract luminance orientation and frequency contrast through a bank of Gabor filters. Then, the filtered frames are normalized to strengthen the spatially distributed maxima. This map emphasizes high luminance contrast.
Dynamic saliency map: we used the dynamic pathway of the same model. First, camera motion compensation is performed to extract only the areas moving relative to the background. Then, under the assumption of luminance constancy between two successive frames, low spatial frequencies are used to extract moving areas. Finally, a temporal median filter is applied over five successive frames to remove potential noise from the dynamic saliency map. By giving the amplitude of the motion at each pixel, this map emphasizes moving areas.
Center bias: most eye-tracking studies report that observers tend to gaze more often at the center of the image than at the edges [66, 67]. As in [56], the center bias is modeled by a time-independent bi-dimensional Gaussian function centered at the screen center, with a diagonal covariance matrix parameterized by the variances σ_x² and σ_y². We chose σ_x and σ_y proportional to the frame size (28° × 22.5°), and all the results presented in this study were obtained with this setting.
Uniform map: all pixels have the same value, 1/(720×576). This map means that fixations may occur at any position with equal probability. This feature is a catch-all hypothesis that accounts for fixations poorly explained by the other features. The lower the weight of this map, the better the other features explain the eye fixation data.
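For instance, the eye position density map can be built by smoothing the fixation impulses with an isotropic Gaussian kernel and normalizing the result as a pdf. A minimal sketch (the frame size, sigma in pixels and toy fixations below are our own illustrative choices):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def eye_position_density(fixations, shape, sigma):
    """Eye position density map: one impulse per recorded eye position,
    smoothed with an isotropic 2D Gaussian, then normalized as a 2D pdf."""
    density = np.zeros(shape)
    for col, row in fixations:            # fixations as (x, y) pixel coords
        density[int(row), int(col)] += 1.0
    density = gaussian_filter(density, sigma)
    return density / density.sum()

# Two toy fixations on a small 36x45 frame, smoothed with sigma = 2 pixels
Y = eye_position_density([(10, 8), (30, 20)], shape=(36, 45), sigma=2.0)
```

The resulting map sums to one and peaks around the recorded positions, which is the form Y takes in Eqs. (2) and (3).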
Every map is then normalized as a 2D probability density function. As illustrated in Fig. 2, we use the Lasso algorithm to learn, for each frame, the weights that lead to the best fit of the eye position density map. In Fig. 3, we plot the temporal evolution of the feature weights across the first 50 frames (2 seconds). The uniform map weights are not represented, as they have all been shrunk to zero. The weights are very different, both within and between visual categories. They all start with a sharp increase (static saliency, dynamic saliency) or decrease (center bias) that lasts for the first 15 frames (600 ms). Then, the weights plateau until the end of the video. The weight of static saliency stays small across the exploration, with no big difference between visual categories. The weight of dynamic saliency is also modest, except for the Faces category. The weight of the center bias is complementary to that of the dynamic saliency, with rather high values, except for the Faces visual category.
To better understand the singularity of Face maps, we added faces as a new feature for this visual category (Fig. 4). Face masks were semi-automatically labelled using Sensarea, an authoring tool that automatically or semi-automatically performs spatio-temporal segmentation of video objects [86]. We learned new weights leading to the best fit of the eye position density map with the Lasso algorithm (Fig. 5). After a short dominance of the center bias (similar to the one observed in Fig. 3), the face map weight clearly takes precedence over the other maps. After that time, the uniform distribution weight is null, the static saliency weight is negligible, and the dynamic saliency and center bias weights are at the same level.

Fig. 4: (a) Frame extracted from a video belonging to Faces category. (b) Modeling of the eye position density map (left) with a weighted linear combination of static saliency map, dynamic saliency map, center-bias map, uniform map and a map with face masks.
Fig. 5: Temporal evolution of the weights of the static saliency map, dynamic saliency map, center bias map and face map, learned via the Lasso algorithm for videos belonging to the Faces category. The temporal evolution of the uniform map weight is not represented because it is null. Error bars represent s.e.m.
Fig. 6:

Temporal evolution of the Normalized Scanpath Saliency (NSS, left) and the Kullback-Leibler Divergence (KLD, right) for 5 different fusion schemes. 1- Fusion of the static and dynamic saliency and center bias maps as proposed in Marat 2009 [21]. 2- Simple mean of the static and dynamic saliency, center bias and uniform maps. 3- Fusion of the static and dynamic saliency, center bias and face maps as proposed in Marat 2013 [62]. 4- Weights of the static and dynamic saliency, center bias and face maps learned with the Expectation-Maximization (EM) algorithm and 5- with the Lasso algorithm. The greater the NSS and the lower the KLD, the better the model. Error bars represent s.e.m.

III-C Evaluation and comparison of fusion schemes

We compared the performance of two time-dependent learning methods (Expectation-Maximization (EM) and Lasso) on the Faces category with three other fusion schemes using the same feature maps. 1) The fusion introduced in [21]. This method only considers the static saliency, dynamic saliency, and center bias maps, and does not take faces into account. After normalization, the authors multiply the static (M_s) and dynamic (M_d) saliency maps with the center bias map. Then, they use a fusion taking advantage of the characteristics of these maps:

S = α M_s + β M_d + α β (M_s × M_d)   (5)

with α = max(M_s) and β = skewness(M_d). Dynamic saliency maps with a high skewness correspond, in general, to maps with only small and compact salient areas. The term M_s × M_d gives more importance to the regions that are salient in both a static and a dynamic way. 2) The mean of every feature map: static saliency, dynamic saliency, center bias, uniform distribution and faces. 3) The fusion proposed in [62]. This fusion scheme is similar to the one introduced in [21], but adds faces as a feature. A visual comparison of a time-dependent fusion (Lasso) with a time-independent fusion (Marat 2013) is available online.
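A sketch of our reading of this fusion, with the static weight taken as the global maximum of the static map and the dynamic weight as the skewness of the dynamic map (the toy maps below are ours, not from the cited work):

```python
import numpy as np
from scipy.stats import skew

def marat_fusion(Ms, Md):
    """Skewness-based fusion sketch: the static map is weighted by its global
    maximum, the dynamic map by its skewness, and a multiplicative term
    reinforces regions that are salient in both maps."""
    a = float(Ms.max())                  # static weight
    b = float(skew(Md.ravel()))          # dynamic weight
    return a * Ms + b * Md + a * b * (Ms * Md)

# Toy maps: the top-left pixel is salient both statically and dynamically
Ms = np.array([[1.0, 0.0],
               [0.2, 0.0]])
Md = np.array([[0.9, 0.05],
               [0.03, 0.02]])
S = marat_fusion(Ms, Md)
```

The pixel that is salient in both maps receives the largest fused value, reflecting the reinforcement term.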
For each frame, we compared the experimental eye position map with the master saliency map obtained for the five fusion schemes (Fig. 6). To avoid evaluating the saliency maps with the same eye positions as the ones used to estimate their weights, we followed a 'leave-one-out' approach: the weights used for a given video are the average of the weights learned on every video but the one being processed. We used two evaluation metrics: the Normalized Scanpath Saliency (NSS, [87]) and the Kullback-Leibler Divergence (KLD, [88]). We chose these two metrics because they are widely used for saliency model assessment [89] and because they provide two different pictures of a model's overall performance [90]. The NSS acts like a z-score, computed by comparing the master saliency map to the eye position density map. The KLD computes an overall dissimilarity between two distributions. The greater the NSS and the lower the KLD, the better the model. The results displayed in Fig. 7 show a consistency between the two metrics: when the NSS of a fusion scheme is high, the corresponding KLD is low. We observe the same pattern as for the Lasso weights: a sharp increase or decrease that lasts for the first 15 frames (600 ms), after which the metrics plateau until the end of the video. We ran a two-way ANOVA with two within factors: the fusion scheme (Marat 2009 (no face), Mean, Marat 2013, EM and Lasso) and the period of visual exploration (1st and 2nd periods). We averaged NSS values over the first 15 frames (1st period, 600 ms) and from frame 16 to the end of the exploration (2nd period). We found a significant effect of fusion scheme (F(4,140) = 161.4), of the period of visual exploration (F(1,140) = 13.4), and of their interaction (F(4,140) = 39.8). Simple main effect analysis showed that during the first period, all the fusion schemes differed significantly from one another, except EM and Lasso (p = 0.98). The EM and Lasso fusions are far above the others. During the second period, there was a significant difference between all fusion schemes, except between EM and Lasso (p = 0.97), EM and Marat 2013, and Lasso and Marat 2013. We ran the same analysis on KLD scores. We found a significant effect of fusion scheme (F(4,140) = 215.6), of the period of visual exploration (F(1,140) = 92), and of their interaction (F(4,140) = 24.8). Simple main effect analysis showed that during both periods, all the fusion schemes differed significantly, except EM and Lasso (p = 0.90 in the 1st period, p = 0.68 in the 2nd period). The Lasso and EM fusions are below the others.
Overall, both time-dependent learning fusions (EM and Lasso) lead to significantly better results on both metrics during the first period of exploration. During the second period, the EM and Lasso fusions are still better in terms of KLD, and perform at a level similar to Marat 2013 in terms of NSS. Here, EM and Lasso perform quite similarly. This can be explained by the relatively small number of features involved in the model (static saliency, dynamic saliency, center bias, face and uniform maps). Lasso would probably lead to a better combination for larger sets of features (e.g. the 33 features in [39]) by shrinking the weights of the less relevant ones to zero.
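The two evaluation metrics can be sketched as follows (the toy map and fixation below are our own illustrative choices):

```python
import numpy as np

def nss(saliency, fixations):
    """Normalized Scanpath Saliency: z-score the saliency map, then average
    its values at the recorded eye positions."""
    z = (saliency - saliency.mean()) / saliency.std()
    return float(np.mean([z[row, col] for row, col in fixations]))

def kld(p, q, eps=1e-12):
    """Kullback-Leibler divergence between two maps normalized as 2D pdfs."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Toy saliency map with the fixation landing on the high-saliency pixel
sal = np.array([[0.9, 0.1],
                [0.1, 0.1]])
score = nss(sal, [(0, 0)])
```

A positive NSS means fixations fall on above-average saliency values, while a KLD of zero is reached only when predicted and observed distributions coincide.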

IV Discussion

The focus of this paper is on time-dependent estimation of the weights of visual features extracted from dynamic visual scenes (videos). The weights are used to linearly combine low level visual feature and viewing bias maps into a master saliency map. The weights are learned frame by frame from actual eye-tracking data via the Lasso algorithm. We show that feature weights dramatically vary as a function of time and of the semantic visual category of the video.

IV-A Influence of time

Many studies have shown that gaze behavior strongly evolves across the time span of the exploration. This evolution is driven by both stimulus-dependent properties and task- or observer-dependent factors. A simple way to quantify this evolution is to measure the inter-observer spatial dispersion of eye positions across time. When this value is small, the observers tend to share a similar strategy; when it increases, the strategies are more diverse. Different eye-tracking studies converged toward the same temporal pattern: at the very beginning of the exploration, the inter-observer dispersion is small; it then increases and reaches a plateau [68, 91, 69, 92]. Moreover, if the stimulus is a video composed of several shots, the process is reset after each cut [69]. The low dispersion at the beginning of the exploration can be explained by the center bias. Indeed, the center of the screen being an optimal location for early information processing of the scene, most observers start from there [66]. Then, the observers adopt different exploration strategies, inducing an increase of the dispersion. When exploring static images, the dispersion keeps increasing [93, 68], while in dynamic scenes, the constant appearance of new information promotes bottom-up influences at the expense of top-down strategies. This induces a stable consistency between participants over time [94, 21, 69]. In this paper, we found the same temporal pattern in the weights of the feature maps: a short dominance of the center bias, followed by a sharp increase (static saliency, dynamic saliency, faces) or decrease (center bias) that lasts for the first 15 frames (600 ms). Then, the weights plateau until the end of the video. Hence, one of the advantages of learning the weights of feature maps from actual eye-tracking data is that it captures systematic oculomotor tendencies, such as the center bias.
One could argue that this benefit is limited, since it only concerns the first 600 ms of the exploration. However, an analysis of 160 Hollywood-style films released from 1935 to 2010 shows a linear decrease in mean shot duration [95]. Over 75 years, average shot durations have declined from about 15 s to only 3.5 s. Since the center bias dominance is reset after each cut, it concerns almost 20% of the movie.

IV-B Influence of the semantic visual category of the video

Our results show that feature weights also heavily depend on the semantic visual category of the video being processed. For instance, the weights of dynamic saliency maps are globally higher in the Several Moving Objects category than in the Landscapes category, where motion is almost absent. The only exception concerns the weights of static saliency maps, which remain quite low for every visual category. Previous studies have shown that low-level static saliency predicts gaze position very poorly [94, 91, 74]. In [65], Tatler & Vincent even showed that a model solely based on oculomotor biases, and thus blind to current visual information, outperforms popular low-level saliency models. Our results are in line with this finding: in every visual category, the weights of static saliency maps are small compared to those of the center bias. When the visual scene involves faces, the situation is different. The weights of dynamic saliency maps are much higher, at the expense of the weights of the center bias. Moreover, adding faces as a feature for this category shows that face regions explain around 80% of the recorded eye positions. These two models (with and without faces as a feature) show that face regions are correlated with motion. This can be understood in the light of the audiovisual speech literature, where many studies have pointed out the importance of gestures (lip motion, dynamic facial expressions, head movements) in speech perception [96, 97, 98, 99].

V Conclusion

When exploring a complex natural scene, gaze can be driven by a multitude of factors, including bottom-up features, top-down features, and viewing biases. In this paper we show that algorithms such as the Lasso or Expectation-Maximization can be used to jointly estimate, directly from eye-tracking data, how the weights of all these features vary across time and semantic visual categories. This makes it possible to tune the feature weights across the time course of the exploration. By doing so, our method outperforms other state-of-the-art fusion strategies based on the same visual features. Since many studies have shown systematic variations in gaze behavior between different groups of people (e.g. based on culture, age, or gender), our method could be used to optimally tune a saliency model for a specific population. For instance, it has been shown that while viewing photographs with a focal object on a complex background, Americans fixate more on focal objects than do the Chinese, who make more saccades to the background [72]. Hence, training a model with eye data from Chinese or American observers would certainly lead to different feature weights. In the same vein, males have been shown to be less exploratory than females, especially when looking at faces [100]. This approach can be compared with saccadic models [101, 102]. Instead of outputting saliency maps, saccadic models generate plausible visual scanpaths, i.e. the actual sequence of fixations and saccades an observer would make while exploring stimuli. Like our Lasso-based learning model, saccadic models can be tailored for specific populations with specific gaze patterns (e.g. younger vs. older observers). Another application of our method is related to saliency aggregation. Recent studies have shown that combining saliency maps from different models can outperform any saliency model taken alone [103, 104].
Applying the Lasso algorithm to find the best saliency model combination could significantly enhance the accuracy and the robustness of the prediction.
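To make the Lasso-based weight estimation concrete, here is a minimal coordinate-descent sketch (not the authors' implementation; the feature names in the comments and the penalty value are illustrative). Each flattened feature map is a column of a design matrix, the empirical fixation map is the target, and the L1 penalty drives the weights of uninformative maps to exactly zero. Running it once per time bin yields the time-dependent weights discussed above:

```python
import numpy as np

def lasso_weights(F, y, lam=1e-4, n_iter=200):
    """Sparse feature-map weights via coordinate-descent Lasso.

    F   : (n_pixels, n_features) matrix, one column per flattened feature
          map (e.g. static saliency, dynamic saliency, faces, center bias).
    y   : (n_pixels,) empirical fixation map for one time bin.
    lam : L1 penalty; larger values zero out more weights.
    """
    n, p = F.shape
    w = np.zeros(p)
    col_sq = (F ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding feature j's current contribution
            r = y - F @ w + F[:, j] * w[j]
            rho = F[:, j] @ r
            # Soft-thresholding update: small correlations collapse to 0
            w[j] = np.sign(rho) * max(abs(rho) - lam * n, 0.0) / col_sq[j]
    return w
```

The same routine applies unchanged to saliency aggregation: replace the feature-map columns with the outputs of several saliency models and the Lasso selects a sparse, weighted combination of them.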

References

  • [1] A. Borji and L. Itti, “State-of-the-art in Visual Attention Modeling,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 185–207, 2013.
  • [2] A. Mishra and Y. Aloimonos, “Active Segmentation,” International Journal of Humanoid Robotics, vol. 6, pp. 361–386, 2009.
  • [3] Q. Ma, L. Zhang, and B. Wang, “New Strategy for Image and Video Quality Assessment,” Journal of Electronic Imaging, vol. 19, no. 1, p. 011019, 2010.
  • [4] C. Guo and L. Zhang, “A Novel Multiresolution Spatiotemporal Saliency Detection Model and Its Applications in Image and Video Compression,” IEEE Transactions on Image Processing, vol. 19, no. 1, pp. 185–198, 2010.
  • [5] S. Goferman, L. Zelnik-Manor, and A. Tal, “Context-Aware Saliency Detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 1915–1926, 2012.
  • [6] G. Evangelopoulos, A. Zlatintsi, A. Potamianos, P. Maragos, K. Rapantzikos, G. Skoumas, and Y. Avrithis, “Multimodal Saliency and Fusion for Movie Summarization based on Aural, Visual, and Textual Attention,” IEEE Transactions on Multimedia, vol. 15, no. 7, pp. 1553–1568, 2013.
  • [7] N. J. Butko and J. R. Movellan, “Optimal scanning for faster object detection,” in IEEE International Conference on Computer Vision and Pattern Recognition, 2009, pp. 2751–2758.
  • [8] S. Han and N. Vasconcelos, “Biologically plausible saliency mechanisms improve feedforward object recognition,” Vision Research, vol. 50, no. 22, pp. 2295–2307, 2010.
  • [9] J. Ruesch, M. Lopes, A. Bernardino, J. Hörnstein, J. Santos-Victor, and R. Pfeifer, “Multimodal Saliency-Based Bottom-Up Attention, A Framework for the Humanoid Robot iCub,” in IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, 2008, pp. 962–967.
  • [10] A. Zaraki, D. Mazzei, M. Giuliani, and D. De Rossi, “Designing and Evaluating a Social Gaze-Control System for a Humanoid Robot,” IEEE Transactions on Human-Machine Systems, vol. 44, no. 2, pp. 157–168, 2014.
  • [11] N. Parikh, L. Itti, and J. Weiland, “Saliency-based image processing for retinal prostheses,” Journal of Neural Engineering, vol. 7, no. 1, p. 016006, 2010.
  • [12] B.-W. Hong and M. Brady, “A topographic representation for mammogram segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, Montréal, Canada, 2003, pp. 730–737.
  • [13] R. Milanese, H. Wechsler, S. Gill, J. M. Bost, and T. Phun, “Integration of bottom-up and top-down cues for visual attention using non-linear relaxation,” in IEEE Conference on Computer Vision and Pattern Recognition, Seattle, USA, 1994, pp. 781–785.
  • [14] L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 11, pp. 1254–1259, 1998.
  • [15] L. Itti and C. Koch, “A saliency-based search mechanism for overt and covert shifts of visual attention,” Vision Research, vol. 40, pp. 1489–1506, 2000.
  • [16] L. Itti and P. Baldi, “Bayesian Surprise Attracts Human Attention,” in Neural Information Processing Systems, 2005, pp. 547–554.
  • [17] Y.-F. Ma, X.-S. Hua, L. Lu, and H.-J. Zhang, “A Generic Framework of User Attention Model and Its Application in Video Summarization,” IEEE Transactions on Multimedia, vol. 7, no. 5, pp. 907–919, 2005.
  • [18] S. Frintrop, G. Backer, and E. Rome, “Goal-directed search with a top-down modulated computational attention system,” in Pattern Recognition, DAGM Symposium, Vienna, Austria, 2005, pp. 117–124.
  • [19] O. Le Meur, P. Le Callet, and D. Barba, “Predicting visual fixations on video based on low-level visual features,” Vision Research, vol. 47, pp. 2483–2498, 2007.
  • [20] M. Cerf, E. P. Frady, and C. Koch, “Faces and text attract gaze independent of the task: Experimental data and computer model,” Journal of Vision, vol. 9, no. 12, pp. 1–15, 2009.
  • [21] S. Marat, T. Ho-Phuoc, L. Granjon, N. Guyader, D. Pellerin, and A. Guérin-Dugué, “Modelling Spatio-Temporal Saliency to Predict Gaze Direction for Short Videos,” International Journal of Computer Vision, vol. 82, no. 3, pp. 231–243, 2009.
  • [22] K. A. Ehinger, B. Hidalgo-Sotelo, A. Torralba, and A. Oliva, “Modelling search for people in 900 scenes: A combined source model of eye guidance,” Visual Cognition, vol. 17, no. 6-7, pp. 945–978, 2009.
  • [23] X. Xiao, C. Xu, and Y. Rui, “Video based 3d reconstruction using spatio-temporal attention analysis,” in IEEE International Conference on Multimedia and Expo, 2010, pp. 1091–1096.
  • [24] E. Erdem and A. Erdem, “Visual saliency estimation by nonlinearly integrating features using region covariances,” Journal of Vision, vol. 13, no. 4, pp. 1–20, 2013.
  • [25] T. Lu, Y. Yuan, D. Wu, and H. Yu, “Video retargeting with nonlinear spatial-temporal saliency fusion,” in IEEE International Conference on Image Processing, Hong Kong, 2010, pp. 1801–1804.
  • [26] J. Peng and Q. Xiao-Lin, “Keyframe-based video summary using visual attention clues,” IEEE Transactions on Multimedia, no. 2, pp. 64–73, 2009.
  • [27] N. Murray, M. Vanrell, X. Otazu, and C. A. Parraga, “Saliency Estimation Using a Non-Parametric Low-Level Vision Model,” in IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 2011, pp. 433–440.
  • [28] C. Chamaret, J. C. Chevet, and O. Le Meur, “Spatio-Temporal Combination of Saliency Maps and Eye-Tracking Assessment of Different Strategies,” in IEEE International Conference on Image Processing, Hong Kong, 2010, pp. 1077–1080.
  • [29] H. J. Seo and P. Milanfar, “Static and space-time visual saliency detection by self-resemblance,” Journal of Vision, vol. 9, no. 12, p. 15, 2009.
  • [30] W.-F. Lee, T.-H. Huang, S.-L. Yeh, and H. H. Chen, “Learning-based prediction of visual attention for video signals,” IEEE Transactions on Image Processing, vol. 20, no. 11, pp. 3028–3038, 2011.
  • [31] R. J. Lin and W. S. Lin, “A computational visual saliency model based on statistics and machine learning,” Journal of Vision, vol. 14, no. 9, pp. 1–18, 2014.
  • [32] Y. Kavak, E. Erdem, and A. Erdem, “Visual saliency estimation by integrating features using multiple kernel learning,” in IEEE Conference on Computer Vision and Pattern Recognition, Portland, Oregon, USA, 2013.
  • [33] Q. Zhao and C. Koch, “Learning visual saliency by combining feature maps in a nonlinear manner using AdaBoost,” Journal of Vision, vol. 12, no. 6, pp. 1–15, 2012.
  • [34] A. Borji, “Boosting Bottom-up and Top-down Visual Features for Saliency Estimation,” in IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 2012, pp. 438–445.
  • [35] N. Bruce and J. K. Tsotsos, “Saliency Based on Information Maximization,” in Advances in Neural Information Processing Systems, 2005, pp. 155–162.
  • [36] B. Han and B. Zhou, “High speed visual saliency computation on GPU,” in IEEE International Conference on Image Processing, vol. 1, San Antonio, TX, USA, 2007, pp. 361–364.
  • [37] N. Riche, M. Mancas, M. Duvinage, M. Mibulumukini, B. Gosselin, and T. Dutoit, “RARE2012: A multi-scale rarity-based saliency detection with its comparative statistical analysis,” Signal Processing : Image Communication, vol. 28, no. 6, pp. 642–658, 2013.
  • [38] W. Kienzle, F. A. Wichmann, B. Schölkopf, and M. O. Franz, “A Nonparametric Approach to Bottom-Up Visual Saliency,” Advances in Neural Information Processing Systems, pp. 689–696, 2006.
  • [39] T. Judd, K. Ehinger, F. Durand, and A. Torralba, “Learning to predict where humans look,” in IEEE International Conference on Computer Vision, Kyoto, Japan, 2009, pp. 2106–2113.
  • [40] M. Liang and X. Hu, “Feature Selection in Supervised Saliency Prediction,” IEEE Transactions on Cybernetics, vol. 45, no. 5, pp. 900–912, 2015.
  • [41] Z. Liang, H. Fu, Z. Chi, and D. Feng, “Refining a Region Based Attention Model Using Eye Tracking Data,” in IEEE International Conference on Image Processing, Hong Kong, 2010, pp. 1105–1108.
  • [42] T. Liu, Z. Yuan, J. Sun, J. Wang, N. Zheng, X. Tang, and H.-Y. Shum, “Learning to Detect a Salient Object,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 2, pp. 353–367, 2010.
  • [43] V. Navalpakkam and L. Itti, “Modeling the influence of task on attention,” Vision research, vol. 45, no. 2, pp. 205–231, 2005.
  • [44] A. Torralba, A. Oliva, M. S. Castelhano, and J. M. Henderson, “Contextual guidance of eye movements and attention in real-world scenes: The role of global features in object search.” Psychological Review, vol. 113, no. 4, pp. 766–786, 2006.
  • [45] L. Zhang, M. H. Tong, T. K. Marks, H. Shan, and G. W. Cottrell, “SUN: A Bayesian framework for saliency using natural statistics,” Journal of Vision, vol. 8, no. 7, pp. 1–20, 2008.
  • [46] L. Zhang, M. H. Tong, and G. W. Cottrell, “Sunday: Saliency using natural statistics for dynamic analysis of scenes,” in Proceedings of the 31st Annual Cognitive Science Conference, 2009, pp. 2944–2949.
  • [47] G. Boccignone, “Nonparametric Bayesian attentive video analysis,” in IEEE International Conference on Computer Vision and Pattern Recognition, Tampa, FL, USA, 2008, pp. 1–4.
  • [48] L. Elazary and L. Itti, “A Bayesian model for efficient visual search and recognition,” Vision Research, vol. 50, no. 14, pp. 1338–1352, 2010.
  • [49] J. Harel, C. Koch, and P. Perona, “Graph-Based Visual Saliency,” Advances in Neural Information Processing Systems, pp. 545–552, 2006.
  • [50] D. Parks, A. Borji, and L. Itti, “Augmented saliency model using automatic 3D head pose detection and learned gaze following in natural scenes,” Vision Research, pp. 1–14, 2014.
  • [51] D. Rudoy, D. B. Goldman, E. Shechtman, and L. Zelnik-Manor, “Learning video saliency from human gaze using candidate selection,” in Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 2013, pp. 4321–4328.
  • [52] B. T. Vincent, R. J. Baddeley, A. Correani, T. Troscianko, and U. Leonards, “Do we look at lights? Using mixture modelling to distinguish between low - and high - level factors in natural image viewing,” Visual Cognition, vol. 17, no. 6-7, pp. 856–879, 2009.
  • [53] T. Couronné, A. Guérin-Dugué, M. Dubois, P. Faye, and C. Marendaz, “A statistical mixture method to reveal bottom-up and top-down factors guiding the eye-movements,” Journal of Eye Movement Research, vol. 3, no. 2, pp. 1–13, 2010.
  • [54] T. Ho-Phuoc, N. Guyader, and A. Guérin-Dugué, “A Functional and Statistical Bottom-Up Saliency Model to Reveal the Relative Contributions of Low-Level Visual Guiding Factors,” Cognitive Computation, vol. 2, pp. 344–359, 2010.
  • [55] J. Gautier and O. Le Meur, “A Time-Dependent Saliency Model Combining Center and Depth Biases for 2D and 3D Viewing Conditions,” Cognitive Computation, vol. 4, pp. 1–16, 2012.
  • [56] A. Coutrot and N. Guyader, “How saliency, faces, and sound influence gaze in dynamic social scenes,” Journal of Vision, vol. 14, no. 8, pp. 1–17, 2014.
  • [57] ——, “An Audiovisual Attention Model for Natural Conversation Scenes,” in IEEE International Conference on Image Processing, Paris, France, 2014.
  • [58] R. J. Peters and L. Itti, “Beyond bottom-up: Incorporating task-dependent influences into a computational model of spatial attention,” in IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 2007, pp. 1–8.
  • [59] Q. Zhao and C. Koch, “Learning a saliency map using fixated locations in natural scenes,” Journal of Vision, vol. 11, no. 3, pp. 1–15, 2011.
  • [60] A. M. Treisman and G. Gelade, “A feature-integration theory of attention,” Cognitive psychology, vol. 12, pp. 97–136, 1980.
  • [61] C. Koch and S. Ullman, “Shifts in selective visual attention: Towards the underlying neural circuitry,” Human Neurobiology, vol. 4, pp. 219–227, 1985.
  • [62] S. Marat, A. Rahman, D. Pellerin, N. Guyader, and D. Houzet, “Improving Visual Saliency by Adding ‘Face Feature Map’ and ‘Center Bias’,” Cognitive Computation, vol. 5, no. 1, pp. 63–75, 2013.
  • [63] H. Nothdurft, “Salience from feature contrast: additivity across dimensions,” Vision Research, vol. 40, pp. 1183–1201, 2000.
  • [64] B. W. Tatler and B. T. Vincent, “Systematic tendencies in scene viewing,” Journal of Eye Movement Research, vol. 2, no. 5, pp. 1–18, 2008.
  • [65] ——, “The prominence of behavioural biases in eye guidance,” Visual Cognition, vol. 17, no. 6-7, pp. 1029–1054, 2009.
  • [66] B. W. Tatler, “The central fixation bias in scene viewing: Selecting an optimal viewing position independently of motor biases and image feature distributions,” Journal of Vision, vol. 7, no. 14, pp. 1–17, 2007.
  • [67] P.-H. Tseng, R. Carmi, I. G. M. Cameron, D. P. Munoz, and L. Itti, “Quantifying center bias of observers in free viewing of dynamic natural scenes,” Journal of Vision, vol. 9, no. 7, pp. 1–16, 2009.
  • [68] B. W. Tatler, R. J. Baddeley, and I. D. Gilchrist, “Visual correlates of fixation selection: effects of scale and time,” Vision Research, vol. 45, pp. 643–659, 2005.
  • [69] A. Coutrot, N. Guyader, G. Ionescu, and A. Caplier, “Influence of soundtrack on eye movements during video exploration,” Journal of Eye Movement Research, vol. 5, no. 4, pp. 1–10, 2012.
  • [70] M. Nyström and K. Holmqvist, “Semantic Override of Low-level Features in Image Viewing – Both Initially and Overall,” Journal of Eye Movement Research, vol. 2, no. 2, pp. 1–11, 2008.
  • [71] S. Rahman and N. Bruce, “Visual Saliency Prediction and Evaluation across Different Perceptual Tasks,” PLoS ONE, vol. 10, no. 9, p. e0138053, 2015.
  • [72] H. F. Chua, J. E. Boland, and R. E. Nisbett, “Cultural variation in eye movements during scene perception,” Proceedings of the National Academy of Sciences of the United States of America (PNAS), vol. 102, no. 35, pp. 12629–12633, 2005.
  • [73] E. F. Risko, N. C. Anderson, S. Lanthier, and A. Kingstone, “Curious eyes: Individual differences in personality predict eye movement behavior in scene-viewing,” Cognition, vol. 122, no. 1, pp. 86–90, 2012.
  • [74] B. W. Tatler, M. M. Hayhoe, M. F. Land, and D. H. Ballard, “Eye guidance in natural vision: Reinterpreting salience,” Journal of Vision, vol. 11, no. 5, pp. 1–23, 2011.
  • [75] A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum Likelihood from Incomplete Data via the EM Algorithm,” Journal of the Royal Statistical Society. Series B (Methodological), vol. 39, no. 1, pp. 1–38, 1977.
  • [76] M. Zhong, X. Zhao, X.-c. Zou, J. Z. Wang, and W. Wang, “Markov chain based computational visual attention model that learns from eye tracking data,” Pattern Recognition Letters, vol. 49, no. C, pp. 1–10, 2014.
  • [77] R. Tibshirani, “Regression Shrinkage and Selection via the Lasso,” Journal of the Royal Statistical Society. Series B (Methodological), vol. 58, no. 1, pp. 267–288, 1996.
  • [78] N. Yi and S. Xu, “Bayesian lasso for quantitative trait loci mapping,” Genetics, vol. 179, no. 2, pp. 1045–1055, 2008.
  • [79] J. Wright, Y. Ma, J. Mairal, G. Sapiro, T. S. Huang, and S. Yan, “Sparse representation for computer vision and pattern recognition,” Proceedings of the IEEE, vol. 98, no. 6, pp. 1031–1044, 2010.
  • [80] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, ser. Springer Series in Statistics.   Springer-Verlag New York Inc, 2009.
  • [81] A. N. Tikhonov, “On the stability of inverse problems,” Comptes Rendus (Doklady) de l’Academie des Sciences de l’URSS, vol. 39, no. 5, pp. 195–198, 1943.
  • [82] A. Y. Ng, “Feature selection, L1 vs. L2 regularization, and rotational invariance,” in International Conference on Machine Learning, Banff, Alberta, Canada, 2004.
  • [83] K. Sjöstrand, L. H. Clemmensen, R. Larsen, and B. Ersbøll, “SpaSM: A Matlab Toolbox for Sparse Statistical Modeling,” Journal of Statistical Software, pp. 1–24, 2012.
  • [84] G. E. Schwarz, “Estimating the dimension of a model,” Annals of Statistics, vol. 6, no. 2, pp. 461–464, 1978.
  • [85] A. Coutrot and N. Guyader, “Toward the Introduction of Auditory Information in Dynamic Visual Attention Models,” in 14th International Workshop on Image and Audio Analysis for Multimedia Interactive Services, Paris, France, 2013.
  • [86] P. Bertolino, “Sensarea: an Authoring Tool to Create Accurate Clickable Videos,” in Workshop on Content-Based Multimedia Indexing, Annecy, France, 2012.
  • [87] R. J. Peters, A. Iyer, L. Itti, and C. Koch, “Components of bottom-up gaze allocation in natural images,” Vision Research, vol. 45, pp. 2397–2416, 2005.
  • [88] S. Kullback and R. A. Leibler, “On Information and Sufficiency,” The Annals of Mathematical Statistics, vol. 22, no. 1, pp. 79–86, 1951.
  • [89] O. Le Meur and T. Baccino, “Methods for comparing scanpaths and saliency maps: strengths and weaknesses,” Behavior Research Methods, vol. 45, no. 1, pp. 251–266, 2013.
  • [90] N. Riche, M. Duvinage, M. Mancas, B. Gosselin, and T. Dutoit, “Saliency and Human Fixations: State-of-the-art and Study of Comparison Metrics,” in Proceedings of the 14th International Conference on Computer Vision, Sydney, Australia, 2013, pp. 1–8.
  • [91] P. K. Mital, T. J. Smith, R. L. Hill, and J. M. Henderson, “Clustering of Gaze During Dynamic Scene Viewing is Predicted by Motion,” Cognitive Computation, vol. 3, no. 1, pp. 5–24, 2010.
  • [92] H. X. Wang, J. Freeman, E. P. Merriam, U. Hasson, and D. J. Heeger, “Temporal eye movement strategies during naturalistic viewing,” Journal of Vision, vol. 12, no. 1, pp. 1–27, 2012.
  • [93] D. Parkhurst, K. Law, and E. Niebur, “Modeling the role of salience in the allocation of overt visual attention,” Vision Research, vol. 42, pp. 107–123, 2002.
  • [94] R. Carmi and L. Itti, “Visual causes versus correlates of attentional selection in dynamic scenes,” Vision Research, vol. 46, no. 26, pp. 4333–4345, 2006.
  • [95] J. E. Cutting, K. L. Brunick, J. E. DeLong, C. Iricinschi, and A. Candan, “Quicker, faster, darker: Changes in Hollywood film over 75 years,” i-Perception, vol. 2, no. 6, pp. 569–576, 2011.
  • [96] Q. Summerfield, “Some preliminaries to a comprehensive account of audio-visual speech perception,” in Hearing by Eye: The Psychology of Lipreading, B. Dodd and R. Campbell, Eds.   New York, USA: Lawrence Erlbaum, 1987, pp. 3–51.
  • [97] J.-L. Schwartz, J. Robert-Ribes, and P. Escudier, “Ten years after Summerfield. a taxonomy of models of audiovisual fusion in speech perception,” in Hearing by Eye II. Advances in the Psychology of Speechreading and Auditory-visual Speech, R. Campbell, B. Dodd, and D. K. Burnham, Eds.   Hove, UK: Psychology Press, 1998, pp. 85–108.
  • [98] G. Bailly, P. Perrier, and E. Vatikiotis-Bateson, Audiovisual Speech Processing.   Cambridge, UK: Cambridge University Press, 2012.
  • [99] M. L. H. Võ, T. J. Smith, P. K. Mital, and J. M. Henderson, “Do the eyes really have it? Dynamic allocation of attention when viewing moving faces,” Journal of Vision, vol. 12, no. 13, pp. 1–14, 2012.
  • [100] A. Coutrot, N. Binetti, C. Harrison, I. Mareschal, and A. Johnston, “Face exploration dynamics differentiate men and women,” Journal of Vision, vol. 16, no. 14, pp. 1–19, 2016.
  • [101] O. Le Meur and Z. Liu, “Saccadic model of eye movements for free-viewing condition,” Vision Research, vol. 116, no. B, pp. 152–164, 2015.
  • [102] O. Le Meur and A. Coutrot, “Introducing context-dependent and spatially-variant viewing biases in saccadic models,” Vision Research, vol. 121, no. C, pp. 72–84, 2016.
  • [103] O. Le Meur and Z. Liu, “Saliency aggregation: Does unity make strength?” in Asian Conference on Computer Vision, Singapore, 2014, pp. 18–32.
  • [104] A. S. Danko and S. Lyu, “Fused methods for visual saliency estimation,” in SPIE, Image Processing: Machine Vision Applications, San Francisco, CA, USA, 2015.