Reconciling saliency and object center-bias hypotheses in explaining free-viewing fixations

03/30/2015 ∙ by Ali Borji, et al. ∙ University of Wisconsin-Milwaukee

Predicting where people look in natural scenes has attracted a lot of interest in computer vision and computational neuroscience over the past two decades. Two seemingly contrasting categories of cues have been proposed to influence where people look: low-level image saliency and high-level semantic information. Our first contribution is to take a detailed look at these cues to confirm the hypothesis proposed by Henderson [1] and Nuthmann & Henderson [2] that observers tend to look at the center of objects. We analyzed fixation data for scene free-viewing from 17 observers on 60 fully annotated images with various types of objects. Images contained different types of scenes, such as natural scenes, line drawings, and 3D rendered scenes. Our second contribution is to propose a simple combined model of low-level saliency and object center-bias that significantly outperforms each individual component over our data, as well as on the OSIE dataset by Xu et al. [3]. The results reconcile the saliency and object center-bias hypotheses and highlight that both types of cues are important in guiding fixations. Our work opens new directions for understanding the strategies that humans use in observing scenes and objects, and demonstrates the construction of combined models of low-level saliency and high-level object-based information.




I Introduction

Eye movements are proxies for overt visual attention. They help us understand how humans and animals allocate their perceptual and cognitive resources towards a limited portion of the observed visual data. They also inform us about characteristics of the filtered data. Understanding and modeling human attentional behavior has become increasingly important for two reasons: 1) the abundance of visual data in daily life demands highly efficient filtering methods with low computational complexity, specifically when dealing with natural scenes and videos, and 2) there are many applications in computer vision and robotics, such as image/video compression, scene understanding, image thumb-nailing, photo collages, human-robot interaction, and robot localization and navigation, that could utilize resource allocation methods. See [4, 5, 6, 7, 8, 9, 10, 11, 12, 13] for comprehensive reviews on visual attention.

Where do people look during free viewing of images of natural scenes? A tremendous amount of research in the cognitive and computer vision communities has investigated this question for more than a decade, yet it remains a hot topic [4, 14]. Two types of cues are believed to influence eye movements in this task: 1) low-level image features (a.k.a. bottom-up visual saliency) such as contrast, edge content, intensity bispectra, color, motion, symmetry, and surprise, and 2) high-level features (i.e., object and semantic information) such as faces and people [15, 16, 17], text [18], object center priors [1, 2], image center priors [19, 20], horizontal bias in scene viewing (only a left-ward bias for right-handers, no effect for left-handers) [21], semantic object distances [22], scene global context [23], emotions [24], memory [25, 26], gaze direction [27, 28], culture [29], and survival-related features such as food, sex, danger, pleasure, and pain [30, 31]. (It is not easy to demarcate the category of some cues, e.g., object center-bias, text, or faces; some authors have classified the cues that influence eye movements into three categories: pixel, object, and semantic; see [3].) Note that, while here we focus on a free-viewing task, some of these factors also play a role in top-down, task-driven visual attention [32, 33, 34, 35, 36, 37, 38, 39].

I-A Object center-bias

As an alternative to the hypothesis of image-based saliency (low-level image features such as contrast, color, and orientation [40, 41, 42, 43, 44, 45]), the object-based hypothesis of attention considers objects to be the units of attention. The latter relates to the cognitive relevance theory and the role of top-down cognitive knowledge in attention. According to this theory, objects are manipulated to perform a task (e.g., in sandwich making [46], where volunteers made peanut butter and jelly sandwiches while wearing headgear that simultaneously tracked their eye movements and recorded the scene before them). Overall, the idea of object-based attention is sensible: to understand a scene, one needs to localize objects, identify them, and establish their spatial relations, and where eye movements land tells us how the scene is being understood. There has been some debate over whether objects or saliency better predict fixations, and the landscape remains unclear [47, 48]. Note that object center-bias is different from image center-bias [19], which is the tendency of observers to preferentially look towards the center of images.

The first fixation-based evidence for object center-bias was presented by Henderson [1], who recorded eye movements of observers on line drawings of objects and found that viewers' first fixations tended to land near the center of an object, with a greater tendency to undershoot the center than to overshoot it. Later, Trukenbrod and Engbert [49] reported a similar finding in a serial visual search task. A more detailed investigation of object center-bias for objects embedded in naturalistic scenes was conducted by Nuthmann & Henderson [2] (and also in another recent study [50]). These authors measured fixation landing positions within objects during free viewing of natural scenes and showed that the preferred viewing location (PVL) for real objects in scenes was close to the center of the object (as shown in Figure 1). They also found that, compared to the PVL for real objects, there was less evidence for a PVL for human fixations within saliency proto-objects [51], identified by an extension of the Itti saliency map model. They argued in favor of object-based visual attention and proposed that during naturalistic scene viewing, the eye-movement control system directs the eyes in terms of object units. Overall, these findings match previous findings that observers look at the center of words while reading [52]. Another piece of evidence comes from work by Elazary & Itti [53], who showed that objects are usually more salient than the background.

Belardinelli & Butz [54] measured the distribution of fixation locations on objects over three tasks: 1) object classification (one of two objects), 2) mimicking lifting an object (lifting task), and 3) mimicking opening an object (opening task). They found that fixations were drawn to different task-relevant locations, and based on this they suggested that attention first selects objects of interest and then fixations are drawn to the most informative points. This result supports previous findings on the influence of task on attention: the eyes extract visual information in a goal-oriented, anticipatory fashion, even when single actions are to be performed on the same object.

Inspired by salient object detection models in computer vision (i.e., defining saliency at the level of objects, as in [55]), Dziemianko et al. [56] applied models of salient object detection to fixation prediction, similar to Borji et al. [57]. They implemented and evaluated three models of salient object detection on fixations over two tasks (the data is available online): 1) visual counting, i.e., counting the number of occurrences of a cued target object, and 2) object naming, i.e., naming objects present in the scene. In their analysis, they inserted a Gaussian blob at the center of a bounding box around an object. They showed that the object-based interpretation of saliency provided by these models is a substantially better predictor of fixation locations than traditional pixel-based saliency, in alignment with the findings of Borji et al. [57].

Xu et al. [3] studied the effects of several types of attributes on gaze guidance during free viewing at three levels: the pixel level, the object level, and the semantic level. Pixel-level attributes included contrast, edge content, color, etc. Object-level attributes included size, convexity, solidity, complexity, and eccentricity. Semantic high-level attributes included smell, sound, face, text, taste, touch, watchability, and operability. Using images with annotated objects and regression, they learned which factors were important in predicting fixations (e.g., faces and text were more important, but sound and motion less so). One of the factors they considered (categorized under object- or semantic-level attributes) was object center-bias. They fitted a two-dimensional normal distribution to the spatial distribution of fixations in the object-centered coordinate system and used it to weight the object center (they did not specifically mention how they defined the center of an object or whether they used bounding boxes; it seems, however, that similar to Nuthmann & Henderson [2] and Dziemianko et al. [56] they used bounding boxes). Although they found that adding object- and semantic-level attributes increased fixation prediction performance, they unfortunately did not explicitly measure the 'added value' of object center-bias.

Fig. 1: Object-based center-bias. a) An image with a sample annotated object (a basket); note how loose the bounding box is in this case. b) A close-up of the object bounding box and fixations (shown in red); note that some fixations fall outside the object and on the background. The center of the object is the origin of the coordinate system for fixations. c) Distribution of the horizontal component of within-object landing positions (red circles) and the corresponding distribution of the vertical component (blue squares). Symbols are data and curves are fitted truncated Gaussians; the vertical broken line indicates the center of the object, and the horizontal and vertical distributions are overlaid. d) Corresponding smoothed two-dimensional viewing-location histogram; the intersection of the two broken lines marks the center of the object. Images are taken with permission from Nuthmann & Henderson [2].

Several works have used object information to build attention models at the object level (e.g., [58, 59, 60, 61, 62, 53, 63]). Some of these models propose how attention should be deployed to different objects at different times to fulfill a task. Others, similar to our goal here, have explained fixations in the context of free viewing. For example, Kavak et al. [62] used a bank of object detectors to give higher weight to regions inside objects. Recently, Stoll et al. [64] also proposed an approach to account for object-driven fixations and concluded that objects, when combined with bottom-up saliency, predict fixations better than saliency alone.

Despite some previous evidence for the object-center hypothesis, three challenges still need to be resolved. First: the fact that observers tend to look near the center of objects could arise because saliency is also high in those regions. In other words, do observers look at the center of objects simply because saliency is higher there than at the object boundary? Nuthmann et al. did not directly control for this confounding factor. Instead, they measured the distribution of saliency at salient patches/proto-objects and showed that, compared to the distinct PVL for real objects, there was less evidence for a PVL for human fixations within saliency proto-objects. But this analysis does not seem to address the confound. Here, instead, we measure the magnitude of low-level saliency inside the object, and in a complementary analysis we combine saliency and object center-bias to see whether there is added value.

Second: how can we define the center of an object? This is a challenging task due to the variety of object parameters such as shape, size, concavity/convexity, and symmetry. Almost all previous studies have used bounding boxes, which may not be a good option in many cases (e.g., the center of the bounding box may fall outside the object area for a concave object). Further, using bounding boxes causes confusion and inaccuracy in assigning fixations to the foreground object or the background; for example, in the analysis of Nuthmann et al. in Figure 1.b, several points from the background are also included. To address this challenge, we first use object boundary polygons instead of bounding boxes. Second, we apply object center-bias to each individual object outward from its center of mass (the center of mass (CoM) is calculated using the standard method: its x and y coordinates are, respectively, the average of the x coordinates and the average of the y coordinates of the pixels that make up the object).
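The center-of-mass definition above is easy to make concrete. Below is a minimal sketch (NumPy assumed; the function name and the small example mask are ours, not from the paper) that computes the CoM of a binary object mask:

```python
import numpy as np

def center_of_mass(mask):
    """Center of mass of a binary object mask (2D array of 0/1).

    The x and y coordinates of the CoM are simply the means of the
    x and y coordinates of the pixels belonging to the object.
    """
    ys, xs = np.nonzero(mask)       # coordinates of object pixels
    return xs.mean(), ys.mean()

# Example: an L-shaped (concave) object on a 6x6 grid.
mask = np.zeros((6, 6), dtype=int)
mask[0:6, 0:2] = 1   # vertical bar (12 pixels)
mask[4:6, 2:6] = 1   # horizontal bar (8 pixels)
cx, cy = center_of_mass(mask)       # pulled toward the vertical bar
```

Unlike the bounding-box center (here (2.5, 2.5), which lies over background pixels of this concave shape), the CoM is weighted by where the object's pixels actually are.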

Third: this challenge concerns the complexity of the stimulus set, since natural scenes are inherently complex. Observers may show different viewing behavior depending on scene complexity: they may visit the center of the object in an image with a few (large) objects, but may not do so for objects amidst scene clutter. Answering this question requires large amounts of data; we therefore run our experiment on a large amount of data from two datasets with a wide variety of images and objects.

I-B Contributions

In summary, we offer the following contributions in this work:

  1. We verify the hypothesis that “observers tend to look near the center of objects in scene free-viewing” and establish that this effect is independent of low-level bottom-up saliency.

  2. We construct a combined model of object center-bias and saliency. To do so, we answer the following questions: a) How can we construct an object center-bias map to emphasize object centers? b) What is the best way to combine this map with image saliency (addition or multiplication)?

II Data

II-A Our Data

II-A1 Stimuli

Stimuli consisted of 60 color images (30 synthetic, 30 natural). Figure 2 shows some examples of our stimuli. Images were resized to 1920 × 1080 pixels by adding gray margins while preserving the aspect ratio. We intentionally did not include stimuli with persons, animals, or faces, mainly because these objects have interesting parts at their extremities. We chose images from different categories (line drawings, 3-D rendered cartoon-like images, etc.) with different types of objects. Object boundaries were manually traced. Our methodology was to label only objects that were completely unoccluded in the image, so that the analysis of the center-bias effect would not be influenced by objects whose computed center of mass differs from the true center of mass. We attempted to choose images with little photographer bias (the tendency of photographers to frame interesting objects at the center of the image) and with multiple objects off the image center, thus reducing the effect of image center-bias on fixations.

II-A2 Observers

Seventeen observers (4 male, 13 female) participated in this experiment (mean age = , std = ). Observers were students at the University of Southern California (USC) majoring in Neuroscience, Psychology, Biology, Business, Biomedical Engineering, and Accounting. The experimental methods were approved by USC's Institutional Review Board (IRB). Observers had normal or corrected-to-normal vision and were compensated with course credits. They were asked to freely view the images.

II-A3 Apparatus and procedure

Observers sat 130 cm away from a 42-inch monitor such that scenes subtended approximately of visual angle. A chin/head rest was used to minimize head movements. Stimuli were presented at 60 Hz at a resolution of pixels, in random order. Eye movements were recorded via an SR Research EyeLink eye tracker (spatial resolution of 0.5) sampling at 1000 Hz. Each image was shown for 30 seconds, followed by a 5-second delay (gray screen). The eye tracker was calibrated using a 5-point calibration method at the beginning of each recording session.

Fig. 2: Sample images from our dataset along with annotated objects and fixations of all observers. Notice how certain locations inside some objects attract more fixations than others.

II-B OSIE dataset

The OSIE ("Object and Semantic Images and Eye-tracking") dataset was created by Xu et al. [3] to explore how object and semantic saliency can be used for predicting where observers look during free viewing of natural scenes. It contains eye tracking data of 15 participants over a set of 700 images (3 seconds of viewing time per image). Each image has been manually segmented into a collection of objects by one person, and semantic attributes of objects have also been manually labeled (e.g., operability, watchability, text). This dataset has two notable properties: first, it contains a large number of object categories, and several objects carry semantic meaning; second, the majority of images contain multiple dominant objects. Figure 4 shows example images from the OSIE dataset along with fixations and object annotations. Please refer to Xu et al. [3] for more details on this dataset.

The OSIE dataset is suitable for our purposes because it contains a variety of images from different categories. Further, object boundaries have been carefully annotated for a large number of objects.

Figure 3 illustrates statistics of the OSIE dataset. The majority (87.01%) of objects occupy 10% or less of the image area, and 52.68% of objects contain 10% or less of the fixations on the image. We observe that the normalized size of the most salient object (the object at the peak of the fixation map; 1,012 out of 5,551 object annotations overall) is usually larger than that of regular objects, as shown in the second row of Figure 3: 74.90% of the most salient objects occupy 10% or less of the image area. In contrast, only about 5% of the most salient objects contain 10% or less of the fixations on the image, and about 14% of them contain 50% or more of the fixations. This can also be observed in the third row of Figure 3, which shows normalized object size versus the fraction of fixations over all object annotations. Insets in Figure 3 show the average annotation map and the average fixation map. As in other eye-movement datasets, a large degree of fixation center-bias is observed in this dataset.

Fig. 3: Statistics of the OSIE dataset. a) Histogram of normalized object size; b) histogram of the fraction of fixations (number of fixations on an object divided by the number of all fixations on the image); c) histogram of normalized salient-object size, where the salient object is the one with the maximum fraction of fixations on it; d) similar to b but for salient objects; e & f) fraction of fixations as a function of normalized object size. 'Frequency' on the y-axis indicates the number of occurrences.

On average, 5.18 and 7.93 objects are annotated per image in our dataset and OSIE, respectively (medians: 5 vs. 7). The total number of fixations in our dataset is 76,869 (over 60 images); for the OSIE dataset it is 98,321 (over 700 images). Figure 5 shows a histogram of annotated objects and the average annotation map over the two datasets.

Fig. 4: Sample images from the OSIE dataset along with object annotations and fixations. Due to the shorter presentation time (3 seconds vs. 30 seconds in ours), there are fewer fixations on OSIE images than on ours.
Fig. 5: (a) Histograms of annotated objects per image over the two datasets. Images in the OSIE dataset contain more object annotations on average compared to our dataset. (b) Average object annotation map over two datasets.

III Measuring object center-bias

In this section, we verify the object center-bias hypothesis by measuring the distribution of fixations inside objects. To do so, we need a way to define the center of an object; we choose the object's center of mass. We then grow circles from the object center such that each circle (tube) contains an additional 10% of the object area; in other words, the difference in object coverage between each successive pair of concentric circles is 10% of the whole object area. We repeat this operation until the entire object area is covered (the inset of Figure 6.b shows an example of this operation). We call this map the "object center-bias map" and denote it by "O".
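The construction above can be sketched in a few lines (a minimal sketch, not the authors' code: since pixels have unit area, "rings each covering 10% of the object area" can be obtained by ranking object pixels by their distance to the center of mass and splitting the ranks into deciles; the linearly decreasing ring weights are one of the weighting schemes discussed later):

```python
import numpy as np

def center_bias_map(mask, n_rings=10):
    """Object center-bias map O for a binary object mask.

    Object pixels are ranked by distance from the object's center of
    mass and split into n_rings equal-area rings; inner rings receive
    linearly higher weight. The map is normalized to sum to 1.
    """
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()              # center of mass
    dist = np.hypot(xs - cx, ys - cy)          # distance of each pixel to CoM
    rank = dist.argsort().argsort()            # distance rank per pixel
    ring = (rank * n_rings) // len(dist)       # equal-count (= equal-area) bins
    O = np.zeros(mask.shape, dtype=float)
    O[ys, xs] = n_rings - ring                 # linear weighting, center highest
    return O / O.sum()
```

Equal pixel counts per ring stand in for equal incremental area, which matches the intent of growing circles that each add a fixed fraction of the object's area.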

For each of the circular regions (tubes), we then count the number of fixations that fall in that region. Figure 6.a shows the distribution (converted to a probability density function) of fixations over the 10 circles, averaged over all objects in each dataset. As the figure shows, the probability of fixation declines (almost linearly) as one moves away from the object center toward the object boundary.

Figure 6.b shows the distribution of saliency (average saliency inside each tube), computed with the AWS saliency model [65], from the center to the boundary of the objects. Here again we observe a decline in saliency as one moves from the object center toward the object boundary. As with fixations, this decline is sharper on our dataset than on OSIE. This result indicates that, on average, saliency is higher at the object center, which, as discussed in the introduction, may explain some of the additional fixations in that region. To answer whether saliency can explain all fixations (i.e., to discount the saliency confound), in the next section we follow a modeling approach that combines these two components. The rationale is as follows: if adding object center-bias boosts saliency's fixation prediction, we can conclude that object center-bias has (independent) added value beyond what early saliency already offers.

To explore the generality of the hypothesis over all objects and the factors it may depend on, we define an object center-bias index, which is the sum of fixation densities inside the five inner-most circles/rings divided by the sum of fixation densities inside all ten circles/rings (i.e., over the entire object):

objcntidx = (d_1 + d_2 + ... + d_5) / (d_1 + d_2 + ... + d_10)

where d_i is the density of fixations inside the i-th tube. The higher the objcntidx, the stronger the tendency of fixations toward the object center. Figure 7.a shows the histogram of objcntidx values on our dataset. For the majority of objects (200 out of 311) this index is higher than 0.5, the value it would take if fixations were distributed uniformly over the entire object. As expected, objects with a high objcntidx often have their content at the center (Figure 7.b; e.g., book, grandfather clock), while objects with a low objcntidx usually have imbalanced/tilted features on one side (Figure 7.c; e.g., sword, microphone). We note that the affordance and shape of an object also influence where people look inside it. For example, in the microphone case, there are more features around its tip, including salient edges that differ from their neighbors (hence high saliency there), which attract more fixations (a similar argument holds for the sword). Replacing the circles with bounding boxes (i.e., rectangular tubes) yields the same pattern of results.
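As a quick sanity check, the index can be computed directly from the per-ring fixation densities (a sketch; the example density profiles are hypothetical, not measured data):

```python
import numpy as np

def obj_cnt_idx(d):
    """Object center-bias index: fraction of the total fixation density
    that falls in the 5 inner-most of the 10 rings.

    d is a length-10 array of per-ring fixation densities, ordered from
    the inner-most ring (d[0]) to the outer-most (d[9]).
    """
    d = np.asarray(d, dtype=float)
    return d[:5].sum() / d.sum()

# A uniform density profile gives exactly 0.5 (no center bias), while a
# profile declining from the center gives an index above 0.5.
uniform_idx = obj_cnt_idx(np.ones(10))
declining_idx = obj_cnt_idx(np.linspace(10, 1, 10))
```

This makes the 0.5 reference value explicit: it is exactly what a uniform spread of fixations over the rings would produce.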

Fig. 6: (a) Distribution of fixations over the object area from the inner-most ring (1 on the x-axis) to the outer-most ring (10 on the x-axis). Note that each successive ring adds an equal increment of object area, not of the entire circle (i.e., the rings are incremental over the object). (b) Distribution of saliency using the AWS saliency model, over both datasets. The inset shows an example object and the corresponding object map (denoted OBJ).
Fig. 7: (a) Histogram of object center-bias indices over our data. An index above 0.5 means more center-bias. (b) Some objects with high indices, (c) Some objects with low indices. These objects usually have a salient part on one of their ends.

IV Our augmented saliency model

Having seen that the object center-bias effect exists for a majority of objects, in this section we propose a simple combined model of saliency and object center-bias. This model, in addition to having better fixation prediction accuracy, also helps further investigate the object center-bias hypothesis. We follow the previous line of research that linearly combines cues for computing saliency (e.g., [15, 16]). Our model is simply a weighted combination of the saliency map and the object center-bias map:

SM = (1 − α) · S + α · O

where S is the saliency map, O is the object center-bias map, and α is a parameter that controls the relative magnitude of the two maps. α = 0 gives the pure bottom-up saliency map (AWS model), and α = 1 gives the pure object center-bias map. Through experiments, we learned that adding a multiplicative S · O term did not improve our results, so we discard it here. The S, O, and resulting SM maps are all normalized (to sum to 1).
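A minimal sketch of the combination (the function name is ours; normalizing each map before mixing and re-normalizing the result is our reading of the paper's "all maps are normalized" statement):

```python
import numpy as np

def combined_map(S, O, alpha):
    """Weighted combination of a saliency map S and an object
    center-bias map O: SM = (1 - alpha) * S + alpha * O.

    Both inputs are normalized to sum to 1 before mixing, and the
    result is normalized again. alpha = 0 returns pure saliency,
    alpha = 1 the pure object center-bias map.
    """
    S = S / S.sum()
    O = O / O.sum()
    SM = (1.0 - alpha) * S + alpha * O
    return SM / SM.sum()
```

Because both inputs sum to 1, the convex combination also sums to 1, so the final normalization is a no-op up to floating-point error; it is kept for safety.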

Figure 8.a shows the NSS (Normalized Scanpath Saliency [66]) scores of the combined model as a function of the parameter α. NSS is the average of the response values at human eye positions in a model's saliency map that has been normalized to have zero mean and unit standard deviation. NSS = 1 indicates that the subjects' eye positions fall in a region whose predicted saliency is one standard deviation above average, while NSS = 0 indicates that the model performs no better than picking a random position and hence is at chance in predicting human gaze. As α increases, the NSS peaks and then declines over both datasets. The optimal α values for the two datasets are close to each other (0.15 for our data and 0.35 for OSIE), yielding NSS scores of 1.45 and 1.705, respectively. This means that if we were to train the model on one dataset and test it on the other, we would still achieve better performance than both the saliency and object center-bias maps on the test dataset; in other words, our model generalizes well across datasets.
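The NSS computation can be written compactly (a sketch under our assumptions: fixations are given as (row, column) pixel indices, and the map has nonzero variance):

```python
import numpy as np

def nss(sal_map, fix_points):
    """Normalized Scanpath Saliency.

    Z-score the saliency map (zero mean, unit standard deviation),
    then average its values at the fixated pixel locations.
    """
    z = (sal_map - sal_map.mean()) / sal_map.std()
    return float(np.mean([z[r, c] for r, c in fix_points]))
```

With this definition, fixations landing on above-average regions of the map yield positive scores, and a score of 1 means the fixated locations are, on average, one standard deviation above the map's mean.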

Figure 8 also shows higher performance on the OSIE dataset than on ours, which can be attributed to two causes: 1) more objects are annotated per OSIE image than per image of ours (mean 5.18 on our data vs. 7.93 on OSIE), resulting in a higher contribution of objects, and 2) viewing time is longer for our data, which might have driven subjects more toward the image background. We believe the second cause is the more plausible explanation, as we did not see a trend in performance as a function of the number of annotated objects per scene. Further, while the OSIE dataset contains about 12 times more images than ours, the numbers of fixations are nearly the same. Longer viewing time leads to fixations that fall on background clutter, and this lowers prediction accuracy since such fixations are not accounted for by the object annotations.

Figure 8.b shows the results over both datasets for saliency alone, the object map alone, and their optimal combination. The average NSS scores for AWS, OBJ (i.e., the object center-bias map O), and the combined model (with the optimal α) over our data are, in order: 1.3302, 1.0828, and 1.4501. The combined model significantly outperforms the other two models (t-test; combined vs. AWS, p = 1.9301e-06; combined vs. OBJ, p = 2.9015e-16). The AWS model here significantly outperforms the OBJ model (t-test; p = 7.0320e-06).

The average NSS scores for AWS, OBJ, and the combined model over the OSIE dataset are, in order: 1.4530, 1.4554, and 1.7051. The combined model significantly outperforms the other two models (t-test; combined vs. AWS, p = 3.1412e-69; combined vs. OBJ, p = 1.9295e-73). The difference between the AWS and OBJ models is not statistically significant here (t-test; p = 0.9136). The difference between the combined model and the saliency model is smaller on our dataset than on OSIE (9% vs. 17.27%), which could be due to the larger number of annotated objects in OSIE images. Interestingly, on OSIE, all tested values of α other than 0 and 1 place the combined model above both the AWS and object center-bias models. Our object center-bias model is essentially similar to the model proposed by Einhäuser et al. [47], with the difference that here we emphasize the object center instead of uniformly distributing activity over the entire object; further, there is no object weighting based on memory recall (i.e., all objects receive the same weight).

Figures 9 and 10 show scatter plots of the saliency model vs. the combined model over our data and OSIE, respectively. Each dot represents the NSS score for one image. Over our dataset, the combined model outperforms the AWS saliency model for 91.67% of images; for OSIE, this figure is 80.71%. The corresponding values for the combined model vs. the object center-bias map over our data and OSIE are, in order, 83.33% and 77.71%. On both datasets, the object map wins over the saliency map for fewer than 50% of images (20% on our data and 48.71% on OSIE). For images where the combined model significantly outperforms the saliency model, there are usually few objects in the scene (e.g., Figure 10.b, images 1, 2, and 3) and little background clutter. For images where the combined model performs worse than saliency, the interesting parts of the object usually do not occur at the object center (e.g., for people, where the entire body is annotated as one object, the face is the most interesting part but is not at the center; Figure 10.b, 4th image).

Fig. 8: (a) NSS score of the combined model as a function of α; α = 0 corresponds to pure saliency and α = 1 to the pure object map. Note that over the whole range of α values, the combined model performs better than both the saliency and object models on the OSIE dataset. Over our data, since only some objects are annotated (not all), a larger contribution of the saliency model is necessary for a superior combined model. (b) Average NSS score of the models over our data and the OSIE dataset. Error bars indicate the standard error of the mean (s.e.m.).

Fig. 9: (a) Image-wise comparison of the NSS scores of the saliency model vs. the combined model for all images in our data. Each dot is one image; the combined model uses the optimal α. Even with a small number of annotated objects per image, we observe an increase in performance of the combined model. The percentage of images for which the combined model performs better than each individual component is also shown: for 55 images, the combined model outperforms the AWS model (50 with respect to the OBJ model). (b) Two images with their corresponding prediction maps. For the first image, the saliency map already explains many of the fixations (i.e., high NSS), so adding object center-bias, although helpful, does not add much to the score. For the second image, the object map adds a lot of value.
Fig. 10: Similar to Figure 9 but over the OSIE dataset. (a) NSS score of the saliency model vs. the combined model. (b) Sample images with their corresponding prediction maps, chosen to show cases where map combination increases performance (compared to AWS) drastically (images 1 & 2), moderately (3), and a case where combination slightly hinders performance (4). In image 4, each person was annotated as one object and emphasis was placed at the center of the body while fixations were drawn to the heads; better performance would have been achieved if human heads had been annotated in this image.

Figure 11 shows the NSS score for three different types of object center-bias emphasis, including linear weighting (our implementation so far), constant weighting (uniform distribution of weight over the entire object), and Gaussian weighting (which weights the 10 circles/rings using a normalized Gaussian function), over (a) our data and (b) the OSIE dataset. Results do not show a large difference in performance or in the optimal combination weight, but linear weighting of object center-bias is consistently the best strategy over both datasets.
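The three emphasis schemes can be sketched as weight profiles over the concentric rings between the object boundary and its center. The following is a hypothetical illustration: the ring count of 10 matches the text, but the Gaussian width `sigma` is an assumed value, and the exact normalization used in the paper may differ:

```python
import numpy as np

def ring_weights(scheme, n_rings=10, sigma=2.0):
    """Per-ring emphasis for an object center-bias map.
    Ring 0 is the object boundary; ring n_rings-1 is the center.
    Weights are normalized to sum to 1."""
    r = np.arange(n_rings, dtype=float)
    if scheme == "linear":       # emphasis grows linearly toward the center
        w = r + 1.0
    elif scheme == "constant":   # uniform emphasis over the whole object
        w = np.ones(n_rings)
    elif scheme == "gaussian":   # Gaussian peaked at the innermost ring
        w = np.exp(-((r - (n_rings - 1)) ** 2) / (2.0 * sigma ** 2))
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return w / w.sum()
```

Painting each ring of a rasterized object mask with its corresponding weight yields the three center-bias map variants compared in Figure 11.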

We also observed that replacing polygons with bounding boxes (similar to [2]) over the OSIE dataset yields an NSS of 1.112, above the NSS of 1.083 obtained using polygons, although the combination performance does not improve significantly. The slightly higher performance of bounding boxes is because they better account for fixations around the edges of objects.
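The bounding-box substitution amounts to replacing each polygon mask by the mask of its axis-aligned extent. A minimal sketch (the `(row, col)` vertex convention and function name are our assumptions for illustration):

```python
import numpy as np

def bbox_mask(polygon, height, width):
    """Replace a polygon annotation (list of (row, col) vertices)
    by its axis-aligned bounding-box mask."""
    rows = [p[0] for p in polygon]
    cols = [p[1] for p in polygon]
    mask = np.zeros((height, width), dtype=bool)
    mask[min(rows):max(rows) + 1, min(cols):max(cols) + 1] = True
    return mask
```

Because the box extends to the polygon's extremes, it covers pixels just inside the object outline that the polygon may miss, which is consistent with the edge-fixation explanation above.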

V Discussion

In this work, we verified the validity of the object center-bias hypothesis in the context of free-viewing. We believe the effect of object center-bias may be even stronger in the presence of a task. According to the cognitive relevance theory (see [46]), objects are more important when there is a task (compared to free-viewing). Interesting tasks in this regard include: 1) asking subjects to count the number of objects in a scene, and 2) asking subjects to manipulate objects (e.g., in a coffee-making task). In the latter, subjects may also look at features related to the task (e.g., the handle of the kettle), as suggested by Belardinelli & Butz [54]. It has also been shown that in object categorization, human subjects fixate on informative parts of objects (see Hartendorp et al. [67]). Other interesting tasks include aesthetic judgment, interestingness judgment, visual search, and scene memorization.

Here, we discuss some important parameters that should be taken into account in future investigations of the object-center hypothesis. The first parameter is scene clutter. The manner in which humans attend to objects might differ depending on whether they are viewing a simple scene with few objects or a complex scene with many objects and/or an amorphous background. In a complex scene, viewers may quickly scan the image to collect more information, which may drive them to spatial outliers. The second parameter, related to the first, is scale. If objects are shown at a large scale (and hence with larger object sizes), observers may not tend to look at the empty central regions inside the object, especially if those regions contain no features (imagine a close-up view of a whiteboard). The third parameter concerns object symmetry. Kootstra et al. [68] have shown that people tend to look at the center of symmetrical objects. The questions that arise here are "Are object center-bias and symmetry two different cues?" and, in other words, "Do people look at the center of asymmetrical objects?". The fourth parameter regards viewing constellated objects made of several components. Object concavity/convexity is the fifth parameter: for example, what happens if the center of the object lies outside the object?

To investigate the above-mentioned parameters, we recommend two approaches. First, more systematic studies over simple synthetic scenes are desirable. For example, imagine a plain object with no features inside. As soon as a salient point/region is inserted somewhere inside the object (but off-center), viewers will most likely no longer look at the center (or will look there less). This is in line with our analysis in this paper, which tested whether saliency peaks at the center of objects in the real world. A similar analysis would collect objects with no salient points inside and test whether viewers still look around the object center (similar to some of our images). Overall, the main difficulty in investigating object center-bias arises from the large variety of objects in natural scenes; indeed, the object-center effect is stronger for certain types of objects. Second, we believe that large-scale object-annotated datasets (e.g., the datasets by Greene [69], Cheng et al. [70], and Li et al. [71]) can be very useful for understanding how saliency and object information are related in scene viewing and understanding.

Fig. 11: NSS score with different types of object center-bias emphasis over (a) our data and (b) the OSIE dataset. Results do not show a large difference in performance or in the optimal combination weight. Linear weighting appears to be the best strategy over both datasets.

In contrast to Nuthmann & Henderson's conclusion [2] that "… attentional selection in scenes is object-based. Saliency only has an indirect effect on attention, acting through its correlation with objects …", our results suggest that both low-level saliency and object information (here, object center-bias) contribute to attention during scene free-viewing, although the two are correlated. This finding aligns with our previous results in Borji et al. [48], where we criticized the hypothesis by Einhäuser et al. [47] that "Objects predict fixations better than early saliency" and showed that saliency is a better predictor of fixations in free-viewing (at least with the way Einhäuser et al. used objects to build a model; had they added object center-bias to their model, they would most likely have achieved much better results compared with saliency alone, i.e., the OBJ model in our work). Einhäuser et al. built a map with object regions weighted by their recall frequency in a scene viewing (memory testing) task. Although the debate over whether saliency or objects are better predictors of fixations is ongoing, the bottom line is that both factors contribute independently to guiding fixations.

Is object center-bias a bottom-up or a top-down cue? While the object center can be computed by simple, computationally efficient early processing (using proto-objects [51]), the mechanism that drives saccades to the center of objects (even in the presence of more salient edge regions) seems to be a top-down process. By analogy to the face cue, which attracts attention and gaze, there might be dedicated neural circuitry for driving saccades to the object center. This is in line with the object-based theory of attention, which states that objects are the units of attention. The actual implementation of this mechanism needs to be further investigated through neurophysiology and psychophysics studies.

Are eye movements driven by objects or by early saliency? And by extension, is attention object-based [72, 73, 51, 74, 75, 76, 47, 1, 2, 64] or saliency-driven [40, 41, 42, 43, 44]? Based on our results here (as well as previous studies [3, 64, 56, 54, 62, 63]), we believe that both forms of attention guidance occur. However, this needs to be studied further, for example by carefully controlling scene complexity and background clutter. One approach would be to use objects with no interior texture (e.g., plain shapes) and test whether observers look at object centers. One piece of evidence that eye movements are driven by early saliency is that they are drawn to salient regions in scenes with no well-defined objects (e.g., fractal scenes [66]). Evidence in favor of object-based attention comes from the finding that fixations are driven to the center of objects [2, 50]. The interplay between these two forms of attention in daily life remains to be investigated.

Are saliency and object center-bias independent cues? In other words, do both contribute to guiding gaze? Here, we showed that a simple linear combination of both cues outperforms each individual map. This indirectly shows that there is added value in their combination, meaning these maps are not subsets of each other. In a more direct analysis, in a study parallel to ours, Stoll et al. [64] addressed this question. They modified their stimuli by fading the edges of objects (effectively reducing saliency) and then measured the performance of early saliency models versus an object center-bias model. They showed that the performance of early saliency models degraded drastically over the modified stimuli while the performance of object center-bias remained the same. From this, they concluded that saliency and object center-bias are two different cues.

Some of the saliency models that have done well in previous benchmarks (e.g., [14]) may have implicitly emphasized object centers (e.g., [65, 77]). For example, the AWS model generates some notion of objecthood using proto-objects and whitening. Thus, without being fully aware of the object center-bias hypothesis, these models have been able to predict fixations better. Explicit integration of this effect into saliency models (as in our work here), or the use of more recent learning approaches (e.g., boosting or Conditional Random Fields (CRFs)), could be an interesting direction for future modeling.

In addition to the datasets used here, other annotated datasets exist that can be used to further investigate the relationships between bottom-up saliency and object center-bias and to study the above-mentioned factors. Three examples are: 1) the dataset by Greene [69], mainly designed for scene categorization and understanding research, in which a total of 48,167 objects have been hand-labeled in 3,499 scenes from 16 categories using the LabelMe tool; 2) the UCSB dataset created by Koehler et al. [78], containing 800 images over which one hundred observers performed four tasks (22 performed explicit saliency judgment, 20 performed free viewing, 20 performed saliency search, and 38 performed a cued object search task); and 3) a dataset recently introduced by Li et al. [79], known as PASCAL-S, whose authors first segmented all objects and then assigned saliency orders to them. PASCAL-S contains eye movements of 8 observers over 850 images from the PASCAL VOC dataset [80].

Vi Conclusion

In this study, we first evaluated the object center-bias hypothesis of Henderson [1] and Nuthmann & Henderson [2] over two datasets in the free-viewing task. We found (results in Section III) that both fixation density and bottom-up saliency are high at the center of objects, making saliency a potential confounding factor for the object-center hypothesis. To address this confound, we then proposed a combined model of saliency and object center-bias that significantly outperforms each component. This supports the object center-bias hypothesis and indicates that both saliency and object information contribute to gaze guidance in scene viewing. Although saliency and object center-bias are correlated, neither is a subset of the other, which is why their combination performs better than each cue individually. We also found that this result holds whether using bounding boxes or polygons, and across different saliency models and weighting approaches. Overall, our results support recent work showing that object center-bias improves fixation prediction (e.g., Xu et al. [3] and Stoll et al. [64]), which further supports the hypothesis that fixations are driven by objects as well as by early saliency.

We hope that our work will open new directions to understand strategies that humans use in object and scene observation and will help construct more predictive saliency models in the future.

Acknowledgments: We would like to thank Prof. Laurent Itti for providing his eye tracking equipment. We would also like to thank the reviewers for their helpful comments on an earlier version of this manuscript. Please refer to the first author's homepage for data and code.


  • [1] J. M. Henderson, “Eye movement control during visual object processing: effects of initial fixation position and semantic constraint.” Canadian Journal of Experimental Psychology, vol. 47, no. 1, p. 79, 1993.
  • [2] A. Nuthmann and J. M. Henderson, “Object-based attentional selection in scene viewing,” Journal of vision, vol. 10, no. 8, 2010.
  • [3] J. Xu, M. Jiang, S. Wang, M. S. Kankanhalli, and Q. Zhao, “Predicting human gaze beyond pixels,” Journal of Vision, vol. 14, no. 1, pp. 1–20, 2014.
  • [4] A. Borji and L. Itti, “State-of-the-art in visual attention modeling,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 185–207, 2013.
  • [5] M. Hayhoe and D. Ballard, “Eye movements in natural behavior,” Trends in cognitive sciences, vol. 9, no. 4, pp. 188–194, 2005.
  • [6] L. Itti and C. Koch, “Computational modelling of visual attention,” Nature reviews neuroscience, vol. 2, no. 3, pp. 194–203, 2001.
  • [7] M. F. Land and M. Hayhoe, “In what ways do eye movements contribute to everyday activities?” Vision research, vol. 41, no. 25, pp. 3559–3565, 2001.
  • [8] V. Navalpakkam and L. Itti, “Modeling the influence of task on attention,” Vision research, vol. 45, no. 2, pp. 205–231, 2005.
  • [9] A. C. Schütz, D. I. Braun, and K. R. Gegenfurtner, “Eye movements and perception: A selective review,” Journal of vision, vol. 11, no. 5, p. 9, 2011.
  • [10] B. W. Tatler, M. M. Hayhoe, M. F. Land, and D. H. Ballard, “Eye guidance in natural vision: Reinterpreting salience,” Journal of vision, vol. 11, no. 5, p. 5, 2011.
  • [11] F. Baluch and L. Itti, “Mechanisms of top-down attention,” Trends in neurosciences, vol. 34, no. 4, pp. 210–224, 2011.
  • [12] A. Borji, “Boosting bottom-up and top-down visual features for saliency estimation,” in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on.   IEEE, 2012, pp. 438–445.
  • [13] A. Borji, M.-M. Cheng, H. Jiang, and J. Li, “Salient object detection: A survey,” arXiv preprint arXiv:1411.5878, 2014.
  • [14] A. Borji, D. N. Sihite, and L. Itti, “Quantitative analysis of human-model agreement in visual saliency modeling: A comparative study.” IEEE Trans. Image Processing., vol. 22, no. 1, pp. 55–69, 2012.
  • [15] M. Cerf, E. P. Frady, and C. Koch, “Faces and text attract gaze independent of the task: Experimental data and computer model,” Journal of Vision, vol. 9, no. 12, November 18 2009.
  • [16] T. Judd, K. Ehinger, F. Durand, and A. Torralba, “Learning to predict where humans look,” 2009, pp. 2106–2113.
  • [17] K. Humphrey and G. Underwood, “The potency of people in pictures: Evidence from sequences of eye fixations,” Journal of Vision, vol. 10, no. 10, 2010.
  • [18] H.-C. Wang and M. Pomplun, “The attraction of visual attention to texts in real-world scenes,” Journal of vision, vol. 12, no. 6, p. 26, 2012.
  • [19] B. W. Tatler, “The central fixation bias in scene viewing: Selecting an optimal viewing position independently of motor biases and image feature distributions,” Journal of Vision, vol. 7, no. 14, p. 4, 2007.
  • [20] P.-H. Tseng, R. Carmi, I. G. Cameron, D. P. Munoz, and L. Itti, “Quantifying center bias of observers in free viewing of dynamic natural scenes,” Journal of vision, vol. 9, no. 7, p. 4, 2009.
  • [21] J. P. Ossandón, S. Onat, and P. König, “Spatial biases in viewing behavior,” Journal of Vision, vol. 14, no. 2, p. 20, 2014.
  • [22] A. D. Hwang, H.-C. Wang, and M. Pomplun, “Semantic guidance of eye movements in real-world scenes,” Vision research, vol. 51, no. 10, pp. 1192–1205, 2011.
  • [23] A. Torralba, A. Oliva, M. S. Castelhano, and J. M. Henderson, “Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search,” Psychological review, vol. 113, no. 4, pp. 766–786, Oct 2006.
  • [24] R. Subramanian, D. Shankar, N. Sebe, and D. Melcher, “Emotion modulates eye movement patterns and subsequent memory for the gist and details of movie scenes,” 2014.
  • [25] J. A. Droll, M. M. Hayhoe, J. Triesch, and B. T. Sullivan, “Task Demands Control Acquisition and Storage of Visual Information.” Journal of Experimental Psychology Human Perception and Performance, vol. 31, no. 6, pp. 1416–1438, 2005.
  • [26] R. Carmi and L. Itti, “The role of memory in guiding attention during natural vision,” Journal of Vision, vol. 6, no. 9, p. 4, 2006.
  • [27] M. S. Castelhano, M. Wieth, and J. M. Henderson, “I see what you see: Eye movements in real-world scenes are affected by perceived direction of gaze,” in Attention in cognitive systems. Theories and systems from an interdisciplinary viewpoint, 2007, pp. 251–262.
  • [28] A. Borji, D. Parks, and L. Itti, “Complementary effects of gaze direction and early saliency in guiding fixations during free-viewing,” Journal of Vision, vol. 14, no. 13, 2014.
  • [29] H. F. Chua, J. E. Boland, and R. E. Nisbett, “Cultural variation in eye movements during scene perception,” Proceedings of the National Academy of Sciences of the United States of America, vol. 102, no. 35, pp. 12 629–12 633, 2005.
  • [30] K. Friston, G. Tononi, G. Reeke Jr, O. Sporns, and G. M. Edelman, “Value-dependent selection in the brain: simulation in a synthetic neural model,” Neuroscience, vol. 59, no. 2, pp. 229–243, 1994.
  • [31] J. Shen and L. Itti, “Top-down influences on visual attention during listening are modulated by observer sex,” Vision research, vol. 65, pp. 62–76, 2012.
  • [32] A. Yarbus, Eye movements and vision.   New York: Plenum., 1967.
  • [33] M. F. Land and D. N. Lee, “Where we look when we steer.” Nature, vol. 369, pp. 742–744, 1994.
  • [34] D. Ballard, M. Hayhoe, and J. Pelz, “Memory representations in natural tasks.” Journal of Cognitive Neuroscience., vol. 7, no. 1, pp. 66–80, 1995.
  • [35] M. F. Land and M. Hayhoe, “In what ways do eye movements contribute to everyday activities?” Vision research, vol. 41, no. 25-26, pp. 3559–3565, 12 2001.
  • [36] A. Borji, D. Sihite, and L. Itti, “What/where to look next? modeling top-down visual attention in complex interactive environments,” IEEE Transactions on Systems, Man, and Cybernetics, PART A-SYSTEMS AND HUMANS, 2014.
  • [37] A. Borji and L. Itti, “Defending yarbus: Eye movements predict observers’ task,” Journal of vision, 2014.
  • [38] A. Borji, A. Lennartz, and M. Pomplun, “What do eyes reveal about the mind?: Algorithmic inference of search targets from fixations,” Neurocomputing, vol. 149, pp. 788–799, 2015.
  • [39] S. Hajimirza, M. Proulx, and E. Izquierdo, “Reading users’ minds from their eyes: A method for implicit image annotation,” IEEE Transactions on Multimedia, vol. 14, no. 3, p. 805—815, 2012.
  • [40] A. Treisman and G. Gelade, “A feature integration theory of attention.” Cognitive Psychology., vol. 12, pp. 97–136, 1980.
  • [41] C. Koch and S. Ullman, “Shifts in selective visual attention: Towards the underlying neural circuitry,” Human Neurobiology, vol. 4, no. 4, pp. 219–227, 1985.
  • [42] L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 11, pp. 1254–1259, Nov 1998.
  • [43] P. Reinagel and A. Zador, “Natural scenes at the center of gaze.” Network., vol. 10, pp. 341–50, 1999.
  • [44] D. Parkhurst, K. Law, and E. Niebur, “Modeling the role of salience in the allocation of overt visual attention.” Vision Research., vol. 42, no. 1, pp. 107–123, 2002.
  • [45] A. Borji and L. Itti, “Exploiting local and global patch rarities for saliency detection,” in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on.   IEEE, 2012, pp. 478–485.
  • [46] M. Hayhoe, A. Shrivastava, R. Mruczek, and J. Pelz, “Visual memory and motor planning in a natural task,” Journal of Vision, vol. 3, p. 49—63, 2003.
  • [47] W. Einhäuser, M. Spain, and P. Perona, “Objects predict fixations better than early saliency,” Journal of Vision, 2008.
  • [48] A. Borji, D. N. Sihite, and L. Itti, “Objects do not predict fixations better than early saliency: A re-analysis of einhäuser et al.’s data,” Journal of vision, vol. 13, no. 10, p. 18, 2013.
  • [49] H. A. Trukenbrod and R. Engbert, “Oculomotor control in a sequential search task,” Vision research, vol. 47, no. 18, pp. 2426–2443, 2007.
  • [50] M. Pajak and A. Nuthmann, “Object-based saccadic selection during scene perception: evidence from viewing position effects,” Journal of vision, vol. 13, no. 5, p. 2, 2013.
  • [51] R. A. Rensink, “The dynamic representation of scenes,” Visual cognition, vol. 7, no. 1-3, pp. 17–42, 2000.
  • [52] K. Rayner, S. P. Liversedge, A. Nuthmann, R. Kliegl, and G. Underwood, “Rayner’s 1979 paper,” Perception, vol. 38, no. 6, p. 895, 2009.
  • [53] L. Elazary and L. Itti, “Interesting objects are visually salient,” Journal of Vision, vol. 8, no. 3:3, pp. 1–15, Mar 2008.
  • [54] A. Belardinelli and M. V. Butz, “Gaze strategies in object identification and manipulation,” 2013.
  • [55] T. Liu, Z. Yuan, J. Sun, J. Wang, N. Zheng, X. Tang, and H.-Y. Shum, “Learning to detect a salient object,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 33, no. 2, pp. 353–367, 2011.
  • [56] M. Dziemianko, A. Clarke, and F. Keller, “Object-based saliency as a predictor of attention in visual tasks.”
  • [57] A. Borji, D. N. Sihite, and L. Itti, “What stands out in a scene? a study of human explicit saliency judgment,” Vision research, vol. 91, pp. 62–77, 2013.
  • [58] K. Yun, Y. Peng, D. Samaras, G. J. Zelinsky, and T. L. Berg, “Exploring the role of gaze behavior and object detection in scene understanding,” Frontiers in psychology, vol. 4, 2013.
  • [59] Y. Sun, R. Fisher, F. Wang, and H. M. Gomes, “A computer vision model for visual-object-based attention and eye movements,” Computer Vision and Image Understanding, vol. 112, no. 2, pp. 126–142, 2008.
  • [60] K.-Y. Chang, T.-L. Liu, H.-T. Chen, and S.-H. Lai, “Fusing generic objectness and visual saliency for salient object detection,” in Computer Vision (ICCV), 2011 IEEE International Conference on.   IEEE, 2011, pp. 914–921.
  • [61] B. M’t Hart, H. C. Schmidt, C. Roth, and W. Einhäuser, “Fixations on objects in natural scenes: dissociating importance from salience,” Frontiers in psychology, vol. 4, 2013.
  • [62] Y. Kavak, E. Erdem, and A. Erdem, “Visual saliency estimation by integrating features using multiple kernel learning,” arXiv preprint arXiv:1307.5693, 2013.
  • [63] V. Yanulevskaya, J. Uijlings, J.-M. Geusebroek, N. Sebe, and A. Smeulders, “A proto-object-based computational model for visual saliency,” Journal of vision, vol. 13, no. 13, p. 27, 2013.
  • [64] J. Stoll, M. Thrun, A. Nuthmann, and W. Einhäuser, “Overt attention in natural scenes: Objects dominate features,” Vision research, vol. 107, pp. 36–48, 2015.
  • [65] A. Garcia-Diaz, V. Leboran, X. R. Fdez-Vidal, and X. M. Pardo, “On the relationship between optical variability, visual saliency, and eye fixations: A computational approach.” Journal of Vision., vol. 12, no. 6, 2012.
  • [66] R. J. Peters, A. Iyer, L. Itti, and C. Koch, “Components of bottom-up gaze allocation in natural images,” Vision Research, vol. 45, no. 8, pp. 2397–2416, Aug 2005.
  • [67] M. O. Hartendorp, S. Van der Stigchel, I. Hooge, J. Mostert, T. de Boer, and A. Postma, “The relation between gaze behavior and categorization: Does where we look determine what we see?” Journal of vision, vol. 13, no. 6, p. 6, 2013.
  • [68] G. Kootstra, B. de Boer, and L. R. Schomaker, “Predicting eye fixations on complex visual stimuli using local symmetry,” Cognitive computation, vol. 3, no. 1, pp. 223–240, 2011.
  • [69] M. R. Greene, “Statistics of high-level scene context,” Frontiers in psychology, vol. 4, 2013.
  • [70] M.-M. Cheng, N. J. Mitra, X. Huang, and S.-M. Hu, “Salientshape: Group saliency in image collections,” The Visual Computer, vol. 30, no. 4, pp. 443–453, 2014.
  • [71] Y. Li, X. Hou, C. Koch, J. Rehg, and A. Yuille, “The secrets of salient object segmentation.”   CVPR, 2014.
  • [72] J. Duncan, “Selective attention and the organization of visual information.” Journal of Experimental Psychology: General, vol. 113, no. 4, p. 501, 1984.
  • [73] R. Egly, J. Driver, and R. D. Rafal, “Shifting visual attention between objects and locations: evidence from normal and parietal lesion subjects.” Journal of Experimental Psychology: General, vol. 123, no. 2, p. 161, 1994.
  • [74] S. P. Vecera and M. J. Farah, “Does visual attention select objects or locations?” Journal of Experimental Psychology: General, vol. 123, no. 2, p. 146, 1994.
  • [75] L. Drummond and S. Shomstein, “Object-based attention: Shifting or uncertainty?” Attention, Perception, & Psychophysics, vol. 72, no. 7, pp. 1743–1755, 2010.
  • [76] J. Gottlieb and P. Balan, “Attention as a decision in information space,” Trends in cognitive sciences, vol. 14, no. 6, pp. 240–248, 2010.
  • [77] J. Harel, C. Koch, P. Perona, et al., “Graph-based visual saliency,” Advances in neural information processing systems, vol. 19, p. 545, 2007.
  • [78] K. Koehler, F. Guo, S. Zhang, and M. P. Eckstein, “What do saliency models predict?” Journal of vision, vol. 14, no. 3, p. 14, 2014.
  • [79] Y. Li, X. Hou, C. Koch, J. Rehg, and A. Yuille, “The secrets of salient object segmentation.”   CVPR, 2014.
  • [80] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, “The pascal visual object classes (voc) challenge,” International journal of computer vision, vol. 88, no. 2, pp. 303–338, 2010.