In Figure 2, the model architecture is visualized. After an initial downsampling, the RGB input image is fed into the Krizhevsky network. The Krizhevsky architecture consists of stacked convolutions, each followed by a rectifying nonlinearity and optional max-pooling and response normalization. The final three fully connected layers of the Krizhevsky network were removed, as we are only interested in spatially localized features. Each layer (convolution, rectifier, pooling and normalization) yields one response image per filter in the layer. To predict fixations, we first select one or multiple layers from the network. We rescale all the response images that we want to include in our model to the size of the largest layer of the network, resulting in a list of up to 3712 responses for each location in an image. Each of these responses is then individually normalized to have unit standard deviation on the full dataset. After this preprocessing, the features are fed into the following model.
At each image location, our saliency model linearly combines the responses $r_k(x, y)$ using learned weights $w_k$. The resulting image is then convolved with a Gaussian kernel $G_\sigma$ whose width is controlled by $\sigma$, yielding the saliency map

$$s(x, y) = G_\sigma * \sum_k w_k r_k(x, y).$$
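As an illustrative sketch of this step (our own numpy/scipy rendition, not the Theano implementation used for the actual model; all names are ours):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(responses, weights, sigma):
    """Linearly combine per-filter response maps and blur with a Gaussian.

    responses: array of shape (K, H, W), one response map per filter
    weights:   array of shape (K,), the learned weights w_k
    sigma:     width of the Gaussian blur kernel
    """
    combined = np.tensordot(weights, responses, axes=1)  # (H, W) weighted sum
    return gaussian_filter(combined, sigma=sigma)

# toy example with 3 random response maps
rng = np.random.default_rng(0)
r = rng.normal(size=(3, 8, 8))
w = np.array([0.5, -0.2, 1.0])
s = saliency_map(r, w, sigma=1.0)
assert s.shape == (8, 8)
```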
It is well known that fixation locations are strongly biased towards the center of an image (Tatler, 2007). To account for this center bias, the saliency prediction is linearly combined with a fixed center bias prediction $c(x, y)$:

$$o(x, y) = s(x, y) + \alpha\, c(x, y).$$
To predict fixation probabilities, this output is finally fed into a softmax, yielding a probability distribution over the image:

$$p(x, y) = \frac{\exp\big(o(x, y)\big)}{\sum_{x, y} \exp\big(o(x, y)\big)}.$$
For generalization, $\ell_1$-regularization on the weights is used to encourage sparsity. For training fixations $(x_1, y_1), \ldots, (x_N, y_N)$ this yields the cost function

$$c = -\sum_{i=1}^{N} \log p(x_i, y_i) + \lambda \sum_k |w_k|.$$
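The softmax and the resulting cost can be sketched in a few lines of numpy (an illustration under our own notation, not the paper's Theano code; the function names are ours):

```python
import numpy as np

def log_density(o):
    """Softmax over all pixels: log p(x,y) = o(x,y) - log sum exp o."""
    o = o - o.max()                      # subtract max for numerical stability
    return o - np.log(np.exp(o).sum())

def cost(o, fix_x, fix_y, weights, lam):
    """Negative log-likelihood of the fixations plus l1 penalty on the weights."""
    logp = log_density(o)
    nll = -logp[fix_y, fix_x].sum()
    return nll + lam * np.abs(weights).sum()

o = np.zeros((4, 4))                     # a flat map yields a uniform distribution
logp = log_density(o)
assert np.isclose(np.exp(logp).sum(), 1.0)
# under a uniform 4x4 map, each fixation costs log(16) nats
c = cost(o, np.array([0]), np.array([1]), np.zeros(3), lam=0.01)
assert np.isclose(c, np.log(16))
```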
To quantify which layers help most in predicting the fixations and lead to least overfitting, we trained models on a variety of subsets of layers (see subsection 2.3 and Figure 5). We checked the generalization performance of these models on the remaining 540 images from MIT1003 that were not used in training. As performance measure we use shuffled area under the curve (shuffled AUC) here (Tatler et al., 2005). In AUC, the saliency map is treated as a classifier score to separate fixations from “nonfixations”: presented with two locations in the image, the classifier chooses the location with the higher saliency value as fixation. The AUC measures the classification performance of this classifier. The standard AUC uses a uniform nonfixation distribution, while in the case of shuffled AUC, fixations from other images are used as nonfixations. As shuffled AUC assumes that the saliency maps do not include the biases of the prior distribution (see Barthelmé et al., 2013), we had to use a uniform center bias for this evaluation.
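The AUC computation described above can be sketched directly from this pairwise-comparison definition (an illustrative implementation of ours; for shuffled AUC, the nonfixation locations would be fixations taken from other images):

```python
import numpy as np

def auc(saliency, fixations, nonfixations):
    """AUC when saliency values at fixated locations are used as classifier
    scores against values at nonfixated locations.

    Equals the probability that a randomly drawn fixation receives a higher
    saliency value than a randomly drawn nonfixation (ties count half).
    """
    pos = saliency[tuple(fixations.T)]      # scores at fixated pixels
    neg = saliency[tuple(nonfixations.T)]   # scores at nonfixation pixels
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

sal = np.array([[0.0, 1.0],
                [2.0, 3.0]])
fix = np.array([[1, 1]])        # fixation at row 1, col 1 (saliency 3.0)
nonfix = np.array([[0, 0],      # nonfixations at saliency 0.0 and 1.0
                   [0, 1]])
assert auc(sal, fix, nonfix) == 1.0
```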
1.1 Implementation details
For training, we used roughly half of the dataset MIT1003 (Judd et al., 2009). By using only the images of the most common size (resulting in 463 images), we were able to use the nonparametric estimate of the center bias described in Kümmerer et al. (2014) (essentially a 2d histogram fitted using the fixations from all other images).
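Such a histogram-based center-bias estimate can be sketched as follows (a toy illustration with hypothetical fixations and an arbitrary 640 × 480 canvas, not the dataset's actual image size):

```python
import numpy as np

# hypothetical fixations (x, y) pooled from all *other* images of the
# common size; 640 x 480 here is purely illustrative
rng = np.random.default_rng(0)
x = rng.uniform(0, 640, size=1000)
y = rng.uniform(0, 480, size=1000)

# nonparametric center-bias estimate: a normalized 2d histogram of fixations
hist, xedges, yedges = np.histogram2d(x, y, bins=(32, 24),
                                      range=[[0, 640], [0, 480]])
center_bias = hist / hist.sum()           # empirical fixation density
assert np.isclose(center_bias.sum(), 1.0)
```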
Our implementation of the Krizhevsky network uses the architecture and trained filters as published by Jia et al. (2014), with the following modifications: the original architecture uses a fixed input size. As we removed the fully connected layers, we do not need to restrict ourselves to a fixed input size but can feed arbitrary images into the network. Furthermore, we use convolutions of type full (i.e. we zero-pad the input) instead of valid, which would result in convolution outputs that are smaller than the input. This modification is useful because we need saliency predictions for every point in the image. Note that the caffe implementation of the Krizhevsky network differs slightly from the original architecture in Krizhevsky et al. (2012), as the pooling and the normalization layers have been switched. The subsampling factor for the initial downsampling of the images was set to 2.
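The effect of zero-padding versus valid convolutions can be seen in a small scipy example (in scipy's terminology, `mode='same'` with zero fill keeps one output per input pixel, while `mode='valid'` shrinks the output by kernel size minus one per layer):

```python
import numpy as np
from scipy.signal import convolve2d

img = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.ones((3, 3)) / 9.0            # 3x3 box filter

valid = convolve2d(img, kernel, mode='valid')                 # shrinks output
same = convolve2d(img, kernel, mode='same', boundary='fill')  # zero-padded

assert valid.shape == (4, 4)   # 6 - 3 + 1 per dimension
assert same.shape == (6, 6)    # one value per image location
```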
The sparsity parameter $\lambda$ was chosen using grid search. However, even setting it to much smaller values had very little effect on training and test performance (see subsection 6.1 for more details). All calculations of log-likelihoods, cost functions and gradients were done in Theano (Bergstra et al., 2010). To minimize the cost function on the training set of fixations, the mini-batch based BFGS method described in Sohl-Dickstein et al. (2014) was used. It combines the benefits of batch based methods with the advantages of second order methods, yielding high convergence rates with next to no hyperparameter tuning. To avoid overfitting to the subjects, leave-one-out cross-validation over the 15 subjects contained in the database was used.
The code for our model including training and analysis will be published at http://www.bethgelab.org/code/deepgaze/.
2.1 Performance Results
We use an information theoretic measure to evaluate our model: log-likelihood. Log-likelihood is a principled measure for probabilistic models and has numerous advantages. See Kümmerer et al. (2014) for an extensive discussion.
Log-likelihoods are much easier to understand when expressed as a difference of log-likelihoods relative to a baseline model. This information gain (more precisely, an estimated expected information gain) expresses how much more efficient the model is in describing the fixations than the baseline model: if a model with an information gain of one bit per fixation is used to encode fixation data, it can save on average one bit per fixation compared to the baseline model.
The information gain is even more intuitive when compared to the explainable information gain, i.e., the information gain of the real distribution compared to the baseline model. This comparison yields a ratio of explained information gain to explainable information gain which will be called “explainable information gain explained” or just “information gain explained” in the following. See Kümmerer et al. (2014) for a more thorough explanation of this notion.
The baseline model is a non-parametric model of the image-independent prior distribution, while the explainable information is estimated using a non-parametric model of the fixation distribution for a given image (which we call the gold standard model). The gold standard model is cross-validated between subjects and thus captures all the structure in the fixations that is purely due to the spatial structure of the image. See Kümmerer et al. (2014) for details on the baseline model and the gold standard model.
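The two notions above can be made concrete with a small numeric sketch (toy probabilities of our own choosing, purely for illustration):

```python
import numpy as np

def information_gain(logp_model, logp_baseline):
    """Average log-likelihood difference per fixation, converted to bits."""
    return np.mean(logp_model - logp_baseline) / np.log(2)

# toy numbers: the model assigns every fixation twice the baseline
# probability, the gold standard four times the baseline probability
baseline = np.log(np.full(5, 0.01))
model = np.log(np.full(5, 0.02))
gold = np.log(np.full(5, 0.04))

assert np.isclose(information_gain(model, baseline), 1.0)  # 1 bit/fixation
# "information gain explained": model gain as a fraction of the gold
# standard's (explainable) gain
explained = information_gain(model, baseline) / information_gain(gold, baseline)
assert np.isclose(explained, 0.5)
```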
By expressing the information gain of a model as a percentage of the possible information gain, we can assess how far we have come in describing the fixations. It is important to note that this interpretation is only possible because information gain is on a ratio scale (Michell, 1997): differences and ratios of information gains are meaningful, as opposed to other measures like AUC.
In Figure 3, the percentage of information gain explained is plotted for our model in comparison to a range of influential saliency models, including the state-of-the-art models. Of the possible information gain, the best existing model (eDN) is able to explain only . Deep Gaze I is able to increase this information gain to .
2.2 Results on MIT Saliency Benchmark
We submitted our model to the MIT Saliency Benchmark (Bylinskii et al.). The benchmark evaluates saliency models on a dataset of 300 images and 40 subjects. The fixations are not publicly available, which makes training on them impossible.
The MIT Saliency Benchmark evaluates models on a variety of metrics, including AUC with uniform nonfixation distribution and shuffled AUC (i.e. AUC with center bias as nonfixation distribution). The problem with these metrics is that most of them use different definitions of saliency maps. This holds especially for the two most used performance metrics: AUC and shuffled AUC. While AUC expects the saliency maps to model the center bias, shuffled AUC explicitly does not and penalizes models that do (see Barthelmé et al. (2013) for details). As Deep Gaze I uses an explicit representation of the prior distribution, it is straightforward to produce the saliency maps according to both definitions of AUC: for AUC we use a nonparametric prior estimate, for shuffled AUC we use a uniform prior distribution. As the images of the dataset are of different size, we could not use our non-parametric center bias as is. Instead, we took all fixations from the full MIT1003 dataset and transformed their positions to be relative to an image of fixed size. Then we trained a Gaussian kernel density estimator on these fixations. This density estimate was then rescaled and renormalized for each image.
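This fit-then-rescale procedure can be sketched with scipy's kernel density estimator (hypothetical fixations in normalized coordinates; the grid size and all numbers are illustrative, not those of the benchmark):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# hypothetical fixations in normalized coordinates [0, 1] x [0, 1],
# clustered around the image center
fixations = rng.normal(loc=0.5, scale=0.15, size=(2, 500)).clip(0, 1)

kde = gaussian_kde(fixations)   # smooth image-independent center-bias density

# evaluate on a grid matching a particular image size, then renormalize
h, w = 32, 48
ys, xs = np.mgrid[0:h, 0:w]
coords = np.vstack([(xs.ravel() + 0.5) / w, (ys.ravel() + 0.5) / h])
density = kde(coords).reshape(h, w)
density /= density.sum()        # renormalize for this image size
assert np.isclose(density.sum(), 1.0)
```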
Doing so, we beat the state-of-the-art models in the MIT Saliency Benchmark by a large margin in AUC as well as shuffled AUC (see footnote 2): For shuffled AUC, we reach 71.69% compared to 67.90% for the best performing model AWS (center bias is at 50%). For AUC we reach 84.40% compared to 82.57% for the best performing model BMS (center bias is at 78.31%). Relative to the center bias, this is an increase of AUC performance by more than 40%.
2.3 Layer selection
The final model used only the convolutions of the top-most layer of the Krizhevsky-architecture. This is a principled choice: the top layer can be expected to include most high-level influences and the relu, pool and norm units are often viewed mainly as the nonlinearities needed to provide a new feature space for the next level of convolutions.
But this choice was also backed by a series of comparison models in which more or other layers were included in the model: in Figure 5, performance results are reported for models including layers from a given depth upwards (Figure 5a), layers up to a given depth (Figure 5b), layers of a given depth (Figure 5c) and layers of a given type (Figure 5d). It can be seen that the finally chosen architecture (layer 5 convolutions) generalizes best to the images of the test set in terms of shuffled AUC.
It is also worth noting that models including more layers are substantially better at predicting the test subjects' fixations on the images used in training (Figure 5a, left plot): when using all layers, a performance of 83% information gain explained is reached for the test subjects. This suggests that the generalization problems of these models are not due to intersubject variability. They most probably suffer from the fact that the variety of objects in the training images is not rich enough, leading to overfitting to the images (not to the subjects). Therefore we can expect improved performance from using a larger set of images in training.
2.4 Analysis of used features
In this section we analyze which features of the Krizhevsky architecture contributed most to the fixation predictions. By getting a solid understanding of the involved features, we can hope to extract predictions from the model that can be tested psychophysically in the future.
In Figure 6, we took the 10 most weighted features from the 256 convolution features in layer 5. For each of these 10 features, we plotted the 9 patches from the dataset that led to the highest response (resp. lowest response for features with negative weight). In Figure 7, the first four patches of the first four features are shown in more detail: the patches are shown in the context of the entire image, together with the feature's response to that image.
Clearly, the most important feature is sensitive to faces. The second most important feature seems to respond mainly to text. The third most important feature shows some sort of pop-out response: it seems to respond to whichever feature sticks out from an image: the sign of a bar in the first patch, two persons in a desert in the second patch and, most notably, the target in a visual search image in the fourth patch. Note that the salient feature depends heavily on the image context, so that a simple luminance or color contrast detector would not achieve the same effect.
This shows that Deep Gaze I is not only able to capture the influence of high level objects like faces or text, but also more abstract high-level concepts (like popout).
Deep Gaze I was able to increase the explained information gain to compared to for state of the art models. On the MIT Saliency Benchmark we were also able to beat the state of the art models by a substantial margin. One main reason for this performance is the ability of our model to capture the influence of several high-level features like faces and text but also more abstract ones like popout (2.4).
It is important to note that all reported results from Deep Gaze I are direct model performances, without any fitting of a pointwise nonlinearity as performed in Kümmerer et al. (2014). This indicates that the deep layers provide a sufficiently rich feature space to enable fixation prediction via simple linear combination of the features. The convolution responses turned out to be most informative about the fixations.
While features trained on ImageNet have been shown to generalize to other recognition and detection tasks (e. g. Donahue et al., 2013; Razavian et al., 2014), to our knowledge this is the first work where ImageNet features have been used to predict behaviour.
Extending state-of-the-art neural networks with attention is an exciting new direction of research (Tang et al., 2014; Mnih et al., 2014). Humans use attention for efficient object recognition and we showed that Krizhevsky features work well for predicting human attention. Therefore it is likely that these networks could be brought closer to human performance by extending them with Krizhevsky features. This could be an interesting field for future research.
Our contribution in this work is twofold: First, we have shown that deep convolutional networks that have been trained on computer vision tasks like object detection boost saliency prediction. Using the well-known Krizhevsky network (Krizhevsky et al., 2012), we were able to outperform state-of-the-art saliency models by a large margin, substantially increasing the amount of explained information compared to the previous state of the art. We believe this approach will enable the creation of a new generation of saliency models with high predictive power and deep implications for psychophysics and neuroscience (Yamins et al., 2014; Zeiler & Fergus, 2013). An obvious next step suggested by this approach is to replace the Krizhevsky network by the ImageNet 2014 winning networks such as VGG (Simonyan & Zisserman, 2014) and GoogLeNet (Szegedy et al., 2014).
We believe that the combination of high performance feature spaces for object recognition as obtained from the ImageNet benchmark with principled maximum likelihood learning opens the door for a “Deep Gaze” program towards explaining all the explainable information in the spatial image-based fixation structure.
This work was mainly supported by the German Research Foundation (DFG; priority program 1527, Sachbeihilfe BE 3848-1) and additionally by the German Ministry of Education, Science, Research and Technology through the Bernstein Center for Computational Neuroscience (FKZ 01GQ1002) and the German Excellence Initiative through the Centre for Integrative Neuroscience Tübingen (EXC307).
- Barthelmé et al. (2013) Barthelmé, Simon, Trukenbrod, Hans, Engbert, Ralf, and Wichmann, Felix. Modelling fixation locations using spatial point processes. Journal of Vision, 13(12), 2013. doi: 10.1167/13.12.1.
- Bergstra et al. (2010) Bergstra, James, Breuleux, Olivier, Bastien, Frédéric, Lamblin, Pascal, Pascanu, Razvan, Desjardins, Guillaume, Turian, Joseph, Warde-Farley, David, and Bengio, Yoshua. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010. Oral Presentation.
- Bruce & Tsotsos (2009) Bruce, Neil DB and Tsotsos, John K. Saliency, attention, and visual search: An information theoretic approach. Journal of vision, 9(3), 2009.
- Buswell (1935) Buswell, Guy Thomas. How people look at pictures. University of Chicago Press Chicago, 1935.
- Bylinskii et al. Bylinskii, Zoya, Judd, Tilke, Durand, Frédo, Oliva, Aude, and Torralba, Antonio. MIT saliency benchmark. http://saliency.mit.edu/.
- Cerf et al. (2008) Cerf, Moran, Harel, Jonathan, Einhaeuser, Wolfgang, and Koch, Christof. Predicting human gaze using low-level saliency combined with face detection. In Platt, J.C., Koller, D., Singer, Y., and Roweis, S.T. (eds.), Advances in Neural Information Processing Systems 20, pp. 241–248. Curran Associates, Inc., 2008. URL http://papers.nips.cc/paper/3169-predicting-human-gaze-using-low-level-saliency-combined-with-face-detection.pdf.
- Deng et al. (2009) Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, and Fei-Fei, Li. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248–255. IEEE, 2009.
- Donahue et al. (2013) Donahue, Jeff, Jia, Yangqing, Vinyals, Oriol, Hoffman, Judy, Zhang, Ning, Tzeng, Eric, and Darrell, Trevor. Decaf: A deep convolutional activation feature for generic visual recognition. arXiv preprint arXiv:1310.1531, 2013.
- Garcia-Diaz et al. (2012) Garcia-Diaz, Antón, Leborán, Víctor, Fdez-Vidal, Xosé R, and Pardo, Xosé M. On the relationship between optical variability, visual saliency, and eye fixations: A computational approach. Journal of vision, 12(6):17, 2012.
- Itti et al. (1998) Itti, Laurent, Koch, Christof, and Niebur, Ernst. A model of saliency-based visual attention for rapid scene analysis. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 20(11):1254–1259, 1998. doi: 10.1109/34.730558.
- Jia et al. (2014) Jia, Yangqing, Shelhamer, Evan, Donahue, Jeff, Karayev, Sergey, Long, Jonathan, Girshick, Ross, Guadarrama, Sergio, and Darrell, Trevor. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
- Judd et al. (2009) Judd, Tilke, Ehinger, Krista, Durand, Frédo, and Torralba, Antonio. Learning to predict where humans look. In Computer Vision, 2009 IEEE 12th international conference on, pp. 2106–2113. IEEE, 2009.
- Krizhevsky et al. (2012) Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012.
- Kümmerer et al. (2014) Kümmerer, M., Wallis, T., and Bethge, M. How close are we to understanding image-based saliency? arXiv preprint arXiv:1409.7686, Sep 2014. URL http://arxiv.org/abs/1409.7686.
- Michell (1997) Michell, Joel. Quantitative science and the definition of measurement in psychology. British Journal of Psychology, 88(3):355–383, 1997.
- Mnih et al. (2014) Mnih, Volodymyr, Heess, Nicolas, Graves, Alex, et al. Recurrent models of visual attention. In Advances in Neural Information Processing Systems, pp. 2204–2212, 2014.
- Razavian et al. (2014) Razavian, Ali Sharif, Azizpour, Hossein, Sullivan, Josephine, and Carlsson, Stefan. Cnn features off-the-shelf: an astounding baseline for recognition. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2014 IEEE Conference on, pp. 512–519. IEEE, 2014.
- Riche et al. (2013) Riche, Nicolas, Mancas, Matei, Duvinage, Matthieu, Mibulumukini, Makiese, Gosselin, Bernard, and Dutoit, Thierry. RARE2012: A multi-scale rarity-based saliency detection with its comparative statistical analysis. Signal Processing: Image Communication, 28(6):642–658, July 2013. ISSN 09235965. doi: 10.1016/j.image.2013.03.009. URL http://linkinghub.elsevier.com/retrieve/pii/S0923596513000489.
- Simonyan & Zisserman (2014) Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014. URL http://arxiv.org/abs/1409.1556.
- Sohl-Dickstein et al. (2014) Sohl-Dickstein, Jascha, Poole, Ben, and Ganguli, Surya. An adaptive low dimensional quasi-newton sum of functions optimizer. In International Conference on Machine Learning, 2014.
- Szegedy et al. (2014) Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. CoRR, abs/1409.4842, 2014. URL http://arxiv.org/abs/1409.4842.
- Tang et al. (2014) Tang, Yichuan, Srivastava, Nitish, and Salakhutdinov, Ruslan R. Learning generative models with visual attention. In Advances in Neural Information Processing Systems, pp. 1808–1816, 2014.
- Tatler (2007) Tatler, Benjamin W. The central fixation bias in scene viewing: Selecting an optimal viewing position independently of motor biases and image feature distributions. Journal of Vision, 7(14):4, 2007. doi: 10.1167/7.14.4.
- Tatler et al. (2005) Tatler, Benjamin W., Baddeley, Roland J., and Gilchrist, Iain D. Visual correlates of fixation selection: effects of scale and time. Vision Research, 45(5):643 – 659, 2005. ISSN 0042-6989. doi: http://dx.doi.org/10.1016/j.visres.2004.09.017. URL http://www.sciencedirect.com/science/article/pii/S0042698904004626.
- Vig et al. (2014) Vig, Eleonora, Dorr, Michael, and Cox, David. Large-scale optimization of hierarchical features for saliency prediction in natural images. In Computer Vision and Pattern Recognition, 2014. CVPR’14. IEEE Conference on. IEEE, 2014.
- Yamins et al. (2014) Yamins, Daniel LK, Hong, Ha, Cadieu, Charles F, Solomon, Ethan A, Seibert, Darren, and DiCarlo, James J. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, pp. 201403112, 2014.
- Zeiler & Fergus (2013) Zeiler, Matthew D and Fergus, Rob. Visualizing and understanding convolutional neural networks. arXiv preprint arXiv:1311.2901, 2013.
- Zhang & Sclaroff (2013) Zhang, Jianming and Sclaroff, Stan. Saliency detection: a boolean map approach. In Computer Vision (ICCV), 2013 IEEE International Conference on, pp. 153–160. IEEE, 2013.
- Zhang et al. (2008) Zhang, Lingyun, Tong, Matthew H, Marks, Tim K, Shan, Honghao, and Cottrell, Garrison W. Sun: A bayesian framework for saliency using natural statistics. Journal of Vision, 8(7), 2008.
6 Supplementary Material
The model uses a regularization parameter $\lambda$ to encourage sparsity in the feature weights (see section 1). This parameter was chosen using grid search. In Figure 8, training and test performances are shown for different choices of $\lambda$ when fitting the model using only the final convolutional layer (as done in the final model). It can be seen that the choice of the regularization parameter had a visible but only very small effect on the test performance (especially when compared to the influence of the different layers used, see Figure 5).