Deep Gaze I: Boosting Saliency Prediction with Feature Maps Trained on ImageNet

11/04/2014 · Matthias Kümmerer, et al.

Recent results suggest that state-of-the-art saliency models perform far from optimal in predicting fixations. This lack of performance has been attributed to an inability to model the influence of high-level image features such as objects. Recent seminal advances in applying deep neural networks to tasks like object recognition suggest that these networks are able to capture this kind of structure. However, the enormous amount of training data necessary to train these networks makes them difficult to apply directly to saliency prediction. We present a novel way of reusing existing neural networks that have been pretrained on the task of object recognition in models of fixation prediction. Using the well-known network of Krizhevsky et al. (2012), we obtain a new saliency model that significantly outperforms all state-of-the-art models on the MIT Saliency Benchmark. We show that the structure of this network allows new insights into the psychophysics of fixation selection and potentially its neural implementation. To train our network, we build on recent work on modeling saliency as a point process.


1 Methods

In Figure 2, the model architecture is visualized. After an initial downsampling, the RGB input image is fed into the Krizhevsky network. The Krizhevsky architecture consists of stacked convolutions, each one followed by a rectifying nonlinearity and optional max-pooling and response normalization. The final three fully connected layers of the Krizhevsky network were removed, as we are only interested in spatially localized features. Each layer (convolution, rectifier, pooling and normalization) produces one response image per filter. To predict fixations, we first select one or multiple layers from the network. We rescale all the response images that we want to include in our model to the size of the largest layer of the network, resulting in a list of up to 3712 responses for each location in the image. Each of these responses is then individually normalized to have unit standard deviation on the full dataset. After this preprocessing, the features are fed into the model described below.
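Before turning to the model itself, the following is a minimal numpy/scipy sketch of this preprocessing step (our own helper functions with assumed input conventions, not the original code): the selected response images are rescaled to a common size and every feature is normalized to unit standard deviation over the dataset.

```python
import numpy as np
from scipy.ndimage import zoom

def stack_responses(layer_responses, target_shape):
    """Rescale the response images of all selected layers to the size of the
    largest layer and stack them into a single (n_features, H, W) array.
    layer_responses: list of (n_filters, h, w) arrays, one per selected layer
    (assumed to come from the truncated Krizhevsky network)."""
    H, W = target_shape
    rescaled = [zoom(r, (1, H / r.shape[1], W / r.shape[2]), order=1)
                for r in layer_responses]
    return np.concatenate(rescaled, axis=0)

def dataset_feature_std(feature_stacks):
    """Per-feature standard deviation over the full dataset; each feature map
    is divided by its entry to obtain unit standard deviation.
    feature_stacks: list of (n_features, H, W) arrays, one per image."""
    flat = np.concatenate([f.reshape(f.shape[0], -1) for f in feature_stacks],
                          axis=1)
    return flat.std(axis=1)
```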

At each image location $(x, y)$, our saliency model linearly combines the responses $r_k(x, y)$ using weights $w_k$. The resulting image is then convolved with a Gaussian kernel $G_\sigma$ whose width is controlled by $\sigma$, yielding the saliency map

$$s(x, y) = \Big( G_\sigma * \sum_k w_k\, r_k \Big)(x, y).$$

It is well known that fixation locations are strongly biased towards the center of an image (Tatler, 2007). To account for this center bias, the saliency prediction $s(x, y)$ is linearly combined with a fixed center bias prediction $c(x, y)$:

$$o(x, y) = s(x, y) + \alpha\, c(x, y).$$

To predict fixation probabilities, this output is finally fed into a softmax, yielding a probability distribution over the image:

$$p(x, y) = \frac{\exp\big(o(x, y)\big)}{\sum_{x', y'} \exp\big(o(x', y')\big)}.$$

For generalization, $\ell_1$-regularization on the weights is used to encourage sparsity. For training fixations $(x_1, y_1), \ldots, (x_N, y_N)$ this yields the cost function

$$c(w, \alpha, \sigma) = -\frac{1}{N} \sum_{i=1}^{N} \log p(x_i, y_i) + \lambda\, \lVert w \rVert_1.$$
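The following is a minimal numpy sketch of the readout model defined by the equations above (the function names and input conventions are ours, not the published code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def log_density(features, w, center_bias, alpha, sigma):
    """Linear feature combination, Gaussian blur, additive center bias and a
    softmax over all image locations, returned as a log-density.
    features: (n_features, H, W) preprocessed responses; w: (n_features,)
    weights; center_bias: (H, W) log-density of the image-independent prior."""
    s = gaussian_filter(np.tensordot(w, features, axes=1), sigma)  # saliency map
    o = s + alpha * center_bias
    o = o - o.max()                              # for numerical stability
    return o - np.log(np.exp(o).sum())           # log of the softmax

def cost(features, w, center_bias, alpha, sigma, fixations, lam):
    """Negative mean log-likelihood of the training fixations plus the
    l1 penalty on the feature weights."""
    logp = log_density(features, w, center_bias, alpha, sigma)
    xs, ys = fixations                           # pixel coordinates
    return -logp[ys, xs].mean() + lam * np.abs(w).sum()
```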

To quantify which layers help most in predicting the fixations and lead to the least overfitting, we trained models on a variety of subsets of layers (see subsection 2.3 and Figure 5). We checked the generalization performance of these models on the remaining 540 images from MIT1003 that had not been used in training. As performance measure we use shuffled area under the curve (shuffled AUC) here (Tatler et al., 2005). In AUC, the saliency map is treated as a classifier score to separate fixations from “nonfixations”: presented with two locations in the image, the classifier chooses the location with the higher saliency value as the fixation. The AUC measures the classification performance of this classifier. The standard AUC uses a uniform nonfixation distribution, while shuffled AUC uses fixations from other images as nonfixations. As shuffled AUC assumes that the saliency maps do not include the biases of the prior distribution (see Barthelmé et al., 2013), we had to use a uniform center bias for this evaluation.
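For reference, shuffled AUC can be computed with a few lines of numpy; this is our own sketch of the metric as described above, not the benchmark implementation:

```python
import numpy as np

def shuffled_auc(saliency, fixations, other_fixations):
    """Saliency values at the image's own fixations serve as positive scores,
    values at fixations from other images as negative scores; the AUC of this
    two-class problem is returned (ties count as 0.5)."""
    xs, ys = fixations
    oxs, oys = other_fixations
    pos = saliency[ys, xs]
    neg = saliency[oys, oxs]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()
```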

1.1 Implementation details

For training, we used roughly half of the dataset MIT1003 (Judd et al., 2009). By using only the images of the most common size (463 images), we were able to use the nonparametric estimate of the center bias described in Kümmerer et al. (2014) (essentially a 2d histogram distribution fitted using the fixations from all other images).
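A rough sketch of such a nonparametric center-bias estimate (our own construction following this description, not the code of Kümmerer et al., 2014): histogram the fixations from all other images, smooth lightly, and turn the result into a log-density at image resolution.

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def histogram_center_bias(other_fixations, shape, bins=(24, 32), smooth=1.0):
    """other_fixations: (xs, ys) of fixations from all other images;
    shape: (H, W) of the common image size. Returns an (H, W) log-density."""
    H, W = shape
    xs, ys = other_fixations
    hist, _, _ = np.histogram2d(ys, xs, bins=bins, range=[[0, H], [0, W]])
    hist = gaussian_filter(hist, smooth)                   # light smoothing
    density = zoom(hist, (H / bins[0], W / bins[1]), order=1)
    density = np.clip(density, 1e-12, None)
    density /= density.sum()
    return np.log(density)    # used as the center bias c(x, y) in the model
```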

Our implementation of the Krizhevsky network uses the architecture and trained filters as published by Jia et al. (2014), with the following modifications: the original architecture uses a fixed input size. As we removed the fully connected layers, we do not need to restrict ourselves to a fixed input size but can feed arbitrary images into the network. Furthermore, we use convolutions of type “full” (i.e. we zero-pad the input) instead of “valid”, which would result in convolution outputs that are smaller than the input. This modification is useful because we need saliency predictions for every point in the image. Note that the Caffe implementation of the Krizhevsky network differs slightly from the original architecture in Krizhevsky et al. (2012), as the pooling and the normalization layers have been switched. The subsampling factor for the initial downsampling of the images was set to 2.
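The effect of the border mode can be illustrated with scipy (a stand-in for the Caffe/Theano convolution layers the paper modifies):

```python
import numpy as np
from scipy.signal import convolve2d

image  = np.random.rand(8, 8)
kernel = np.random.rand(3, 3)

valid = convolve2d(image, kernel, mode="valid")  # no padding: shrinks to 6x6
same  = convolve2d(image, kernel, mode="same")   # zero-padded: stays 8x8
full  = convolve2d(image, kernel, mode="full")   # fully padded: grows to 10x10

print(valid.shape, same.shape, full.shape)       # (6, 6) (8, 8) (10, 10)
```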

The sparsity parameter $\lambda$ was chosen using grid search. However, even setting it to much smaller values had very little effect on training and test performance (see subsection 6.1 for more details). All calculations of log-likelihoods, cost functions and gradients were done in theano (Bergstra et al., 2010). To minimize the cost function on the training set of fixations, the mini-batch based BFGS method described in Sohl-Dickstein et al. (2014) was used. It combines the benefits of batch based methods with the advantages of second order methods, yielding high convergence rates with next to no hyperparameter tuning. To avoid overfitting to the subjects, leave-one-out cross-validation over the 15 subjects contained in the database was used.
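The leave-one-out scheme over subjects can be summarized as follows (a sketch with hypothetical `fit` and `evaluate` callables; the actual training uses the mini-batch BFGS optimizer mentioned above):

```python
def loo_over_subjects(subject_ids, fit, evaluate):
    """Hold out each subject once, fit on the remaining subjects' fixations
    and average the held-out performance."""
    scores = []
    for held_out in subject_ids:
        training_subjects = [s for s in subject_ids if s != held_out]
        model = fit(training_subjects)        # e.g. minimizes the cost above
        scores.append(evaluate(model, held_out))
    return sum(scores) / len(scores)
```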

The code for our model including training and analysis will be published at http://www.bethgelab.org/code/deepgaze/.

2 Results

2.1 Performance Results

Figure 3: Performance of Deep Gaze I compared to a list of other influential models, expressed as the ratio of explained information (see text for details). All models except for Deep Gaze I have been postprocessed to account for a pointwise nonlinearity, center bias and blurring (see Kümmerer et al. (2014) for details).

We use an information theoretic measure to evaluate our model: log-likelihood. Log-likelihood is a principled measure for probabilistic models and has numerous advantages. See Kümmerer et al. (2014) for an extensive discussion.

Log-likelihoods are much easier to understand when expressed as the difference in log-likelihood relative to a baseline model. This information gain (more precisely, an estimated expected information gain) expresses how much more efficient the model is in describing the fixations than the baseline model: if a model with an information gain of one bit per fixation is used to encode fixation data, it can save on average one bit per fixation compared to the baseline model.

The information gain is even more intuitive when compared to the explainable information gain, i.e., the information gain of the real distribution compared to the baseline model. This comparison yields a ratio of explained information gain to explainable information gain which will be called “explainable information gain explained” or just “information gain explained” in the following. See Kümmerer et al. (2014) for a more thorough explanation of this notion.

The baseline model is a non-parametric model of the image-independent prior distribution, while the explainable information is estimated using a non-parametric model of the fixation distribution for a given image (which we call the gold standard model). The gold standard model is cross-validated between subjects and thus captures all the structure in the fixations that is purely due to the spatial structure of the image. See Kümmerer et al. (2014) for details on the baseline model and the gold standard model.
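In other words, given average log-likelihoods (in bits per fixation) for the baseline model, the model under test, and the gold standard, the quantity plotted in Figure 3 amounts to the following ratio (our notation):

```python
def information_gain_explained(ll_model, ll_baseline, ll_gold):
    """Fraction of the explainable information gain that the model explains:
    how much of the gap between the image-independent baseline and the
    gold standard model is closed (all log-likelihoods in bits/fixation)."""
    return (ll_model - ll_baseline) / (ll_gold - ll_baseline)
```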

By expressing the information gain of a model as a percentage of the possible information gain, we can assess how far we have come in describing the fixations. It is important to note that this interpretation is only possible because information gain is on a ratio scale (Michell, 1997): differences and ratios of information gains are meaningful, as opposed to other measures like AUC.

In Figure 3, the percentage of information gain explained is plotted for our model in comparison to a range of influential saliency models, including the state-of-the-art models. The best existing model (eDN) explains only a fraction of the possible information gain, and Deep Gaze I increases the explained information gain substantially.

Figure 4: Performance results on the MIT benchmark. (a): Shuffled AUC performance of Deep Gaze I (green bar, 71.69%) compared with all other models in the MIT benchmark. The x-axis is at the level of the center bias model. The three top-performing models after Deep Gaze I are, in order of decreasing performance, AWS (67.90%, Garcia-Diaz et al. (2012)), RARE2012 (66.54%, Riche et al. (2013)), and AIM (65.64%, Bruce & Tsotsos (2009)). (b): AUC performance of Deep Gaze I (green bar, 84.40%) compared with all other models in the MIT benchmark that performed better than the center bias. The x-axis is at the level of the center bias model. The three top-performing models after Deep Gaze I are, in order of decreasing performance, BMS (82.57%, Zhang & Sclaroff (2013)), Mixture of Saliency Models (82.09%, Han & Satoh (2014)), and eDN (81.92%, Vig et al. (2014)). Notice that AUC and shuffled AUC use different definitions of saliency map: while AUC expects the saliency maps to model the center bias, shuffled AUC explicitly does not and penalizes models that do. Therefore, for the shuffled AUC performances of Deep Gaze I the saliency maps have been calculated with a uniform prior distribution, while for the AUC performances the saliency maps have been calculated with a nonparametric prior (see text for details). Note that the MIT Saliency Benchmark webpage reports only the performances for the saliency maps with the nonparametric prior, which is why the shuffled AUC performance reported there is lower. Performances of other models from the MIT benchmark as of September 2014.

Figure 5: Performance of Deep Gaze I when trained on different subsets of the Krizhevsky layers: (a): Results for models that use layers from a given depth upwards. The left plot shows the percentage of explainable information gain explained on the images used in training for training subjects and test subjects (refer to subsection 2.1 for an explanation of this measure). The dotted line indicates the performance of the model we used in the MIT Saliency Benchmark (which only used the output of the convolutions of layer 5). The right plot shows the shuffled AUC on the images used in training and on the remaining test images. Here, the models have been averaged over all test subjects and the saliency maps assume uniform center bias, as expected by shuffled AUC (see subsection 2.2 for details). The dotted line indicates the performance of the final model on the test images. (b), (c), (d): Results for models that use layers up to a given depth (b), layers of a certain depth (c) and layers of a certain type (d). The plots are as in (a).

Figure 6: Analysis of used features I: (a) Patches of maximum response: Each square of patches shows, for a specific feature of the Krizhevsky architecture, the nine patches that led to the highest response (or the smallest response, if the feature has a negative weight in the model). Each patch corresponds to exactly the part of the image that contributes to the response in the location of maximum response. The features shown have been chosen by the absolute value of the weight that Deep Gaze I assigned to them. The numbers over the patches indicate the features' weights relative to the largest weight (1.00, 0.85, 0.76, 0.72, 0.72, 0.72, 0.66, 0.65).

Figure 7: Analysis of used features II: Details for some of the patches from Figure 6. The four double columns (a) to (d) correspond to the first four features shown in Figure 6. In each double column, the four rows correspond to the first four patches shown for this feature in Figure 6. The left column of each double column shows the patches in the context of the full image, while the feature's response over the full image is shown in the right column. The position of the maximum is indicated by a dot.

2.2 Results on MIT Saliency Benchmark

We submitted our model to the MIT Saliency Benchmark (Bylinskii et al.). The benchmark evaluates saliency models on a dataset of 300 images and 40 subjects. The fixations are not publicly available, making it impossible to train on them.

The MIT Saliency Benchmark evaluates models on a variety of metrics, including AUC with a uniform nonfixation distribution and shuffled AUC (i.e. AUC with the center bias as nonfixation distribution). The problem with these metrics is that most of them use different definitions of saliency maps. This holds especially for the two most widely used performance metrics: AUC and shuffled AUC. While AUC expects the saliency maps to model the center bias, shuffled AUC explicitly does not and penalizes models that do (see Barthelmé et al. (2013) for details). As Deep Gaze I uses an explicit representation of the prior distribution, it is straightforward to produce saliency maps according to both definitions: for AUC we use a nonparametric prior estimate, for shuffled AUC we use a uniform prior distribution. As the images of the dataset are of different sizes, we could not use our non-parametric center bias as is. Instead, we took all fixations from the full MIT1003 dataset and transformed their positions into coordinates relative to the image size. Then we trained a Gaussian kernel density estimator on these fixations. This density estimate was then rescaled and renormalized for each image.
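A possible implementation of this benchmark center bias (our sketch, assuming the fixation positions have already been divided by the image width and height):

```python
import numpy as np
from scipy.stats import gaussian_kde

def fit_relative_kde(rel_xs, rel_ys):
    """Gaussian kernel density estimate of fixations in relative coordinates
    (positions divided by image width and height)."""
    return gaussian_kde(np.vstack([rel_xs, rel_ys]))

def center_bias_for_image(kde, shape):
    """Evaluate, rescale and renormalize the density for one benchmark image."""
    H, W = shape
    gy, gx = np.mgrid[0:H, 0:W]
    points = np.vstack([(gx.ravel() + 0.5) / W, (gy.ravel() + 0.5) / H])
    density = kde(points).reshape(H, W)
    density /= density.sum()
    return np.log(density)
```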

Doing so, we beat the state-of-the-art models in the MIT Saliency Benchmark by a large margin in AUC as well as shuffled AUC (see the note in the caption of Figure 4): for shuffled AUC, we reach 71.69% compared to 67.90% for the best performing model AWS (the center bias is at 50%). For AUC we reach 84.40% compared to 82.57% for the best performing model BMS (the center bias is at 78.31%). Relative to the center bias, this is an increase in AUC performance of more than 40%.
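This figure presumably refers to the improvement over the center-bias baseline relative to the best previous model:

$$\frac{84.40\% - 78.31\%}{82.57\% - 78.31\%} \approx 1.43,$$

i.e. Deep Gaze I's gain over the center bias is roughly 43% larger than that of BMS.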

2.3 Layer selection

The final model used only the convolutions of the top-most layer of the Krizhevsky architecture. This is a principled choice: the top layer can be expected to include most of the high-level influences, and the relu, pool and norm units are often viewed mainly as the nonlinearities needed to provide a new feature space for the next level of convolutions.

But this choice was also backed by a series of comparison models in which additional or different layers were included in the model: in Figure 5, performance results are reported for models including layers from a given depth upwards (Figure 5a), layers up to a given depth (Figure 5b), layers of a given depth (Figure 5c) and layers of a given type (Figure 5d). It can be seen that the finally chosen architecture (layer 5 convolutions) generalizes best to the images of the test set in terms of shuffled AUC.

It is also worth noting that models including more layers are substantially better at predicting the test subjects' fixations on the images used in training (Figure 5a, left plot): when using all layers, a performance of 83% information gain explained is reached for the test subjects. This suggests that the generalization problems of these models are not due to intersubject variability. Most probably they suffer from the fact that the variety of objects in the training images is not rich enough, leading to overfitting to the images (not to the subjects). We can therefore expect improved performance from using a larger set of images in training.

2.4 Analysis of used features

In this section we analyze which features of the Krizhevsky architecture contributed most to the fixation predictions. By getting a solid understanding of the involved features, we can hope to extract predictions from the model that can be tested psychophysically in the future.

For Figure 6, we took the 10 most strongly weighted features from the 256 convolution features in layer 5. For each of these 10 features, we plotted the 9 patches from the dataset that led to the highest response (or the lowest response for features with a negative weight). In Figure 7, the first four patches of the first four features are shown in more detail: the patches are shown in the context of the entire image, together with the feature's response to that image.
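A sketch of how such maximum-response patches can be extracted (our code; the patch size is a placeholder, whereas the paper maps each response back to exactly the image region that contributes to it):

```python
import numpy as np

def strongest_patches(images, responses, weight, n_patches=9, patch_size=128):
    """images: list of (H, W, 3) arrays; responses: list of (h, w) response
    maps of one feature, one per image; weight: the feature's model weight."""
    scored = []
    for img, resp in zip(images, responses):
        r = -resp if weight < 0 else resp        # smallest response if w < 0
        iy, ix = np.unravel_index(np.argmax(r), r.shape)
        # map the response-map location back to image coordinates
        cy = int(iy * img.shape[0] / r.shape[0])
        cx = int(ix * img.shape[1] / r.shape[1])
        y0 = max(cy - patch_size // 2, 0)
        x0 = max(cx - patch_size // 2, 0)
        scored.append((r.max(), img[y0:y0 + patch_size, x0:x0 + patch_size]))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [patch for _, patch in scored[:n_patches]]
```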

Clearly, the most important feature is sensitive to faces. The second most important feature seems to respond mainly to text. The third most important feature shows some sort of pop-out response: it seems to respond to whichever feature sticks out from an image: the sign of a bar in the first patch, two persons in a desert in the second patch and, most notably, the target in a visual search image in the fourth patch. Note that the salient feature depends heavily on the image context, so that a simple luminance or color contrast detector would not achieve the same effect.

This shows that Deep Gaze I is not only able to capture the influence of high level objects like faces or text, but also more abstract high-level concepts (like popout).

3 Discussion

Deep Gaze I substantially increased the explained information gain compared to state-of-the-art models (Figure 3). On the MIT Saliency Benchmark we were also able to beat the state-of-the-art models by a substantial margin. One main reason for this performance is the ability of our model to capture the influence of several high-level features like faces and text, but also more abstract ones like popout (subsection 2.4).

It is important to note that all reported results from Deep Gaze I are direct model performances, without any fitting of a pointwise nonlinearity as performed in Kümmerer et al. (2014). This indicates that the deep layers provide a sufficiently rich feature space to enable fixation prediction via simple linear combination of the features. The convolution responses turned out to be most informative about the fixations.

While features trained on ImageNet have been shown to generalize to other recognition and detection tasks (e. g. Donahue et al., 2013; Razavian et al., 2014), to our knowledge this is the first work where ImageNet features have been used to predict behaviour.

Extending state-of-the-art neural networks with attention is an exciting new direction of research (Tang et al., 2014; Mnih et al., 2014). Humans use attention for efficient object recognition and we showed that Krizhevsky features work well for predicting human attention. Therefore it is likely that these networks could be brought closer to human performance by extending them with Krizhevsky features. This could be an interesting field for future research.

4 Conclusions

Our contribution in this work is twofold: First, we have shown that deep convolutional networks that have been trained on computer vision tasks like object detection boost saliency prediction. Using the well-known Krizhevsky network (Krizhevsky et al., 2012), we were able to outperform state-of-the-art saliency models by a large margin, substantially increasing the amount of explained information compared to the previous state of the art. We believe this approach will enable the creation of a new generation of saliency models with high predictive power and deep implications for psychophysics and neuroscience (Yamins et al., 2014; Zeiler & Fergus, 2013). An obvious next step suggested by this approach is to replace the Krizhevsky network with the ImageNet 2014 winning networks such as VGG (Simonyan & Zisserman, 2014) and GoogLeNet (Szegedy et al., 2014).

A second conceptual contribution of this work is to optimize the saliency model by maximizing the log-likelihood of a point process (see Barthelmé et al., 2013; Kümmerer et al., 2014).

We believe that the combination of high performance feature spaces for object recognition as obtained from the ImageNet benchmark with principled maximum likelihood learning opens the door for a “Deep Gaze” program towards explaining all the explainable information in the spatial image-based fixation structure.

5 Acknowledgements

This work was mainly supported by the German Research Foundation (DFG; priority program 1527, Sachbeihilfe BE 3848-1) and additionally by the German Ministry of Education, Science, Research and Technology through the Bernstein Center for Computational Neuroscience (FKZ 01GQ1002) and the German Excellence Initiative through the Centre for Integrative Neuroscience Tübingen (EXC307).


6 Supplementary Material

Figure 8: Performance of Deep Gaze I when trained on the conv5 layer with different values of the regularization parameter $\lambda$. The left plot shows the percentage of explainable information gain explained on the images used in training, for training subjects and test subjects (refer to subsection 2.1 for an explanation of this measure). The dotted line indicates the performance of the model we used in the MIT Saliency Benchmark. The right plot shows the shuffled AUC on the images used in training and on the remaining test images. Here, the models have been averaged over all test subjects and the saliency maps assume a uniform center bias, as expected by shuffled AUC (see subsection 2.2 for details). The dotted line indicates the performance of the final model on the test images.

6.1 Regularization

The model uses a regularization parameter $\lambda$ to encourage sparsity in the feature weights (see section 1). This parameter was chosen using grid search. In Figure 8, training and test performances are shown for different choices of $\lambda$ when fitting the model using only the final convolutional layer (as done in the final model). It can be seen that the choice of the regularization parameter had a visible but only very small effect on the test performance (especially when compared to the influence of the different layers used, see Figure 5).
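The grid search itself can be summarized as follows (a sketch with hypothetical `fit` and `evaluate` callables standing in for the actual training and evaluation code):

```python
import numpy as np

def grid_search_lambda(fit, evaluate, lambdas=10.0 ** np.arange(-6.0, 1.0)):
    """fit(lam) trains the conv5-only model with regularization strength lam;
    evaluate(model) returns its held-out performance. The best lambda and all
    scores are returned."""
    scores = {lam: evaluate(fit(lam)) for lam in lambdas}
    best = max(scores, key=scores.get)
    return best, scores
```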