An Element Sensitive Saliency Model with Position Prior Learning for Web Pages

04/27/2018 · Jie Chang et al., Shanghai Jiao Tong University

Understanding human visual attention is important for multimedia applications. Many studies have attempted to learn from eye-tracking data and build computational saliency prediction models. However, limited effort has been devoted to saliency prediction for Web pages, which are characterized by more diverse content elements and spatial layouts. In this paper, we propose a novel end-to-end deep generative saliency model for Web pages. To capture position biases introduced by page layouts, a Position Prior Learning sub-network is proposed, which models position biases as a multivariate Gaussian distribution using a variational auto-encoder. To model the different elements of a Web page, a Multi Discriminative Region Detection (MDRD) branch and a Text Region Detection (TRD) branch are introduced, which extract discriminative localizations and "prominent" text regions likely to correspond to human attention, respectively. We validate the proposed model on FiWI, a public Web-page dataset, and show that it outperforms state-of-the-art models for Web-page saliency prediction.




1 Introduction

Dominated by the “bottom-up” attentive mechanism of visual cognition [17], the human visual system tends to focus on certain regions rather than spreading attention randomly. Modeling this visual attention is essential for evaluating media designs. Inspired by this attention mechanism, many computational saliency models, which attempt to predict the salient regions of given media content, have been investigated.

Most existing saliency prediction studies focus on natural images [4]. Building upon biological evidence [24], low-level features such as color, contrast, luminance, edge orientation, or intensity are adopted to help predict human attention [10, 8]. To capture the influence of content semantics, high-level features representing certain semantic concepts (e.g., faces, objects) are leveraged to further improve prediction accuracy [11]. With the recent development of deep neural networks, many efforts have been made to simultaneously learn feature representations and saliency prediction models [16, 18, 6, 5]. More recently, adversarial training has been leveraged to refine the predictions of saliency models [15].

While much effort has been devoted to saliency prediction for natural images, there have been very limited studies focusing on Web-page saliency [19, 14]. Different from natural images, Web pages are rich in scattered salient stimuli (e.g., logos, text, graphs, pictures) [21] of unequal influence on human short-term attention [3]. It is thus more difficult to model human attention on Web pages, which requires not only more complex feature representations but also modeling of spatial layouts. Existing studies on Web-page saliency [19, 14] mainly focus on exploring better feature representations, but they do not take the characteristics of Web-page saliency into consideration.

First, the layout of a Web page greatly affects the deployment of human fixations, leading to a diverse set of reading patterns [3]. Previous studies on Web-page saliency tend to represent position-based visual preferences using manually constructed position-bias maps, which cannot adaptively reflect the actual Web-page layout. Hence, we explore automatically modeling position biases as a prior distribution with the help of a variational auto-encoder.

Second, different from natural images, Web pages contain many non-semantic elements that may not grip human attention but unavoidably cause excessive activated regions when a pre-trained CNN is simply used as a feature extractor, as in most previous works. Instead, considering that text and images are the dominant elements of Web pages, we propose to adopt independent high-level semantic features for these two elements.

In this paper, we propose a deep generative saliency model for Web pages. As shown in Figure 1, the whole model consists of three sub-networks: a Prior Learning Net (PL-Net) for modeling position biases, an Element Feature Net (EF-Net) for extracting representations of different elements, and a Prediction Net (P-Net) for generating the final saliency map. The PL-Net leverages a VAE-based Position Prior Learning (PPL) algorithm to automatically learn the position biases of user viewing behavior. The EF-Net contains three branches: in addition to an overall feature branch, a Multi Discriminative Region Detection (MDRD) branch and a Text Region Detection (TRD) branch are introduced, which extract discriminative localizations and prominent text regions, respectively. The whole model is a deep generative model that can be trained end-to-end. Experiments on FiWI, a public Web-page dataset, show that the proposed algorithms distinguish our model and boost saliency prediction performance.

The main contributions of our studies are summarized as follows.

  • We model the diverse visual preferences caused by page layouts with VAE-based Position Prior Learning.

  • We explore element-based feature representations and leverage an MDRD branch and a TRD branch to capture the impact of images and text on human attention.

  • Experimental studies show that the proposed method outperforms state-of-the-art models for Web-page saliency prediction.

2 Method

Figure 1 provides an overview of the proposed generative saliency model for Web pages. In the rest of this section, we present PPL, MDRD, and TRD in detail.

2.1 Position Prior Learning

To predict Web-page saliency, it is important to model the position biases introduced by page layouts. Unlike previous studies, which adopt fixed position-bias maps manually constructed beforehand, we propose a Position Prior Learning (PPL) algorithm based on the Variational Auto-Encoder (VAE) [12] to automatically learn such position biases. It builds on the observation that similar position biases occur on Web pages sharing similar layouts. We model these position biases as the mean and standard deviation of a multivariate Gaussian distribution, learned as latent variables in a VAE (see Figure 2).


Figure 2: The architecture of Position Prior Learning (PPL) sub-network. The corresponding ground-truth of the trained stimulus is reconstructed by a variational auto-encoder for a learned posterior distribution. Meanwhile another latent distribution inferred from the generated prior map of PL-Net is aligned with the previous learned posterior by KL divergence term (marked in black).

Specifically, a set of true saliency maps s is used to optimize a VAE network, comprising an encoder and a decoder, to obtain the posterior distribution q_\phi(z|s). This training procedure follows the objective function below:

\mathcal{L}(\theta, \phi; s) = \lambda_1 \mathbb{E}_{q_\phi(z|s)}[\log p_\theta(s|z)] - \lambda_2 D_{KL}(q_\phi(z|s) \,\|\, p(z))

where \phi and \theta are the respective parameters of the encoder and decoder of the VAE, p(z) is a standard normal prior, and \lambda_1, \lambda_2 control the weights of the expectation term and the KL-divergence term. The learned variational approximate posterior can be formulated as a multivariate Gaussian with a diagonal covariance structure,

q_\phi(z|s) = \mathcal{N}(z; \mu, \sigma^2 I)

where the mean and standard deviation of the approximate posterior, \mu and \sigma, are outputs of the encoding MLP f_\phi(s).
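As a minimal illustration of how such a diagonal-Gaussian posterior is sampled in practice (a generic sketch with arbitrary dimensionality and values, not the authors' code), the reparameterization trick draws z = μ + σ·ε with ε ~ N(0, I), which keeps the sample differentiable with respect to μ and σ:

```python
import numpy as np

def reparameterize(mu, sigma, rng):
    """Draw z ~ N(mu, diag(sigma^2)) via the reparameterization trick:
    z = mu + sigma * eps, with eps drawn from a standard normal."""
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

rng = np.random.default_rng(0)
mu, sigma = np.zeros(4), 0.5 * np.ones(4)

# Empirically, repeated samples recover the requested moments.
z = np.stack([reparameterize(mu, sigma, rng) for _ in range(20000)])
print(abs(z.mean()) < 0.05, abs(z.std() - 0.5) < 0.05)
```

In a full VAE, `mu` and `sigma` would come from the encoder MLP rather than being fixed constants as in this toy example.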

Meanwhile, the generated prior maps \hat{p} from PL-Net are fed into the parameter-sharing encoder f_\phi. We also let the output of f_\phi be a multivariate Gaussian,

q_\phi(z|\hat{p}) = \mathcal{N}(z; \hat{\mu}, \hat{\sigma}^2 I)

where \phi are the reused parameters of the encoder f_\phi.

Then another KL divergence, representing the discrepancy between the above two approximate posteriors, q_\phi(z|s) and q_\phi(z|\hat{p}),

\mathcal{L}_{PL}(\psi) = D_{KL}(q_\phi(z|s) \,\|\, q_\phi(z|\hat{p}))

is calculated as the loss for training PL-Net, so that the prior maps generated by PL-Net possess latent variables similar to those of true saliency maps. \psi indicates the parameters of PL-Net.
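For two diagonal-covariance Gaussians, this alignment KL has a closed form. The following sketch (an illustration with made-up latent values, not the paper's implementation) shows the quantity that the PL-Net loss would drive toward zero:

```python
import numpy as np

def kl_diag_gauss(mu1, sd1, mu2, sd2):
    """Closed-form KL( N(mu1, diag(sd1^2)) || N(mu2, diag(sd2^2)) )
    for diagonal-covariance Gaussians, summed over dimensions."""
    return np.sum(np.log(sd2 / sd1)
                  + (sd1**2 + (mu1 - mu2)**2) / (2.0 * sd2**2)
                  - 0.5)

# Hypothetical latents inferred from a ground-truth saliency map ...
mu_gt, sd_gt = np.array([0.2, -0.1]), np.array([0.8, 1.1])
# ... and from a PL-Net prior map, via the shared encoder.
mu_pl, sd_pl = np.array([0.5, 0.0]), np.array([1.0, 1.0])

loss = kl_diag_gauss(mu_gt, sd_gt, mu_pl, sd_pl)
print(loss >= 0.0)  # KL is non-negative; it is zero iff the two match
```

Minimizing this term over the PL-Net parameters pulls the prior-map latents toward those of true saliency maps, which is exactly the role of the black KL term in Figure 2.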

2.2 Multi Discriminative Region Detection

We propose Multi Discriminative Region Detection (MDRD) to extract remarkable object regions on which humans tend to focus. Inspired by the Class Activation Map (CAM) proposed in [25], we first utilize a VGG16-GAP model trained on ImageNet [13] to predict the classification of each input stimulus image. Then we select the top-K categories predicted by the model and average their CAMs to obtain the multi-discriminative region map M_d:

M_d(i) = \frac{1}{K} \sum_{c \in T_K} s_c M_c(i), \quad T_K = \mathrm{TopK}(\{s_c\}_{c=1}^{C})

where i is the position index of pixels, s_c is the probabilistic value w.r.t. class c, M_c is the CAM of category c, C is the number of categories in ImageNet, and \mathrm{TopK}(\cdot) is the function that returns the set of classes whose predicted scores fall in the top K.

K is determined by the number of dominant eigenvalues after PCA is applied to the last convolutional layers.
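The score-weighted averaging of top-K CAMs can be sketched as follows. This is a toy illustration with random stand-in scores and 4x4 maps (a real model would produce C = 1000 class scores and CAMs from the last convolutional layer), not the authors' code:

```python
import numpy as np

def mdrd_map(scores, cams, k):
    """Score-weighted average of the CAMs of the top-k predicted classes,
    yielding a multi-discriminative region map."""
    top_k = np.argsort(scores)[::-1][:k]   # indices of the k best classes
    return sum(scores[c] * cams[c] for c in top_k) / k

rng = np.random.default_rng(1)
scores = rng.random(10)        # stand-in for C = 1000 ImageNet class scores
cams = rng.random((10, 4, 4))  # one toy 4x4 CAM per class

m = mdrd_map(scores, cams, k=3)
print(m.shape)  # (4, 4)
```

With k = 1 this reduces to the single best class's CAM scaled by its score; larger k pools evidence from several plausible object categories, which suits Web pages containing multiple salient pictures.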

2.3 Text Region Detection

Web pages are rich in text information, which greatly attracts human fixations; hence our proposed Text Region Detection (TRD) aims to generate a representation of prominent text information.

TRD is mainly implemented by a Text/Background classifier trained on character-recognition datasets (ICDAR [1] and SVT [22]). The well-trained classifier is then applied over the resized, multi-scale input stimulus with a sliding window to generate the text saliency map, to which Gaussian blur is applied for smoothing.
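The sliding-window scoring step can be sketched as below. The classifier here is a hypothetical stand-in (a contrast threshold instead of a trained text/background model), the window and stride sizes are arbitrary, and the final Gaussian blur is omitted for brevity:

```python
import numpy as np

def text_saliency(img, clf, win=8, stride=4):
    """Apply a text/background classifier over sliding windows and
    average the per-window text probabilities into a per-pixel map."""
    h, w = img.shape
    acc = np.zeros((h, w))
    cnt = np.zeros((h, w))
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            p = clf(img[y:y + win, x:x + win])   # P(patch contains text)
            acc[y:y + win, x:x + win] += p
            cnt[y:y + win, x:x + win] += 1
    return acc / np.maximum(cnt, 1)

# Stand-in classifier: treats high-contrast patches as "text".
toy_clf = lambda patch: float(patch.std() > 0.2)

img = np.zeros((32, 32))
img[8:16, 8:24] = 1.0   # a bright block standing in for a text line
sal = text_saliency(img, toy_clf)
print(sal.shape, 0.0 <= sal.max() <= 1.0)
```

In the actual pipeline the same scan would be repeated at multiple input scales and the resulting map smoothed with a Gaussian kernel.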

2.4 Loss Function

The feature maps from PL-Net and EF-Net are concatenated as the input of the Prediction Network (P-Net), which generates the final predicted saliency map with a stacked CNN structure. The loss function between the predicted saliency map and its corresponding ground-truth is a linear combination of two terms:

\mathcal{L}(\omega) = \alpha \mathcal{L}_{CE}(\hat{S}, S) + \beta \mathcal{L}_{KL}(\hat{S}, S)

where \omega denotes the training parameters of EF-Net and P-Net, \hat{S} is the predicted saliency map from P-Net and S is the corresponding ground-truth, and \alpha, \beta are hyper-parameters trading off the two loss terms. \mathcal{L}_{CE} is defined as the cross-entropy loss:

\mathcal{L}_{CE}(\hat{S}, S) = -\frac{1}{N} \sum_{i} \left[ S_i \log \hat{S}_i + (1 - S_i) \log (1 - \hat{S}_i) \right]

and \mathcal{L}_{KL} is defined as the KL divergence measuring the loss of information when the distribution \hat{S} is used to approximate the distribution S:

\mathcal{L}_{KL}(\hat{S}, S) = \sum_{i} S_i \log \left( \frac{S_i}{\hat{S}_i + \epsilon} + \epsilon \right)

where i indexes the pixels in both saliency maps and \epsilon is a regularization constant.
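A minimal numpy sketch of this combined loss on toy 2x2 maps follows. The weights α, β and ε are arbitrary stand-in values, and the maps are normalized into distributions before the KL term; this illustrates the shape of the objective rather than reproducing the authors' exact implementation:

```python
import numpy as np

def saliency_loss(pred, gt, alpha=1.0, beta=0.1, eps=1e-8):
    """Linear combination of per-pixel cross-entropy and KL divergence
    between a predicted and a ground-truth saliency map (values in [0, 1])."""
    ce = -np.mean(gt * np.log(pred + eps)
                  + (1 - gt) * np.log(1 - pred + eps))
    # Normalize both maps into probability distributions for the KL term.
    p = gt / (gt.sum() + eps)
    q = pred / (pred.sum() + eps)
    kl = np.sum(p * np.log(p / (q + eps) + eps))
    return alpha * ce + beta * kl

gt   = np.array([[0.0, 1.0], [1.0, 0.0]])
good = np.array([[0.05, 0.95], [0.95, 0.05]])  # close to ground truth
bad  = np.array([[0.95, 0.05], [0.05, 0.95]])  # saliency inverted

print(saliency_loss(good, gt) < saliency_loss(bad, gt))  # True
```

As expected, a prediction aligned with the ground-truth map incurs a far smaller loss than an inverted one, and the β weight controls how strongly the distribution-matching KL term shapes the result.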

3 Datasets & Metrics

FiWI is a dataset proposed in [19], which contains 149 Web-page screenshots with eye-tracking fixation data collected from 11 observers. Observation was short-term and free-viewing to ensure that visual preferences were driven by the “bottom-up” visual mechanism. FiWI is categorized into Pictorial (50), Textual (50), and Mixed (49) images according to their composition of text and pictures: Pictorial Web pages are dominated by pictures with little text, Textual Web pages contain dense, informative text, and Mixed Web pages contain a mix of pictures and text.

Evaluation Metrics. To evaluate our performance quantitatively, three similarity metrics are adopted: Linear Correlation Coefficient (CC), Normalized Scanpath Saliency (NSS), and shuffled Area Under Curve (sAUC) [23].
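Two of these metrics are simple enough to sketch directly (sAUC is omitted here, since it additionally requires fixations shuffled across images). This is a generic illustration on toy 2x2 maps, not the evaluation code used in the paper:

```python
import numpy as np

def cc(pred, gt):
    """Linear correlation coefficient between two saliency maps."""
    return np.corrcoef(pred.ravel(), gt.ravel())[0, 1]

def nss(pred, fixations):
    """Normalized scanpath saliency: mean of the standardized predicted
    map at fixated pixels (fixations is a binary fixation map)."""
    z = (pred - pred.mean()) / (pred.std() + 1e-8)
    return z[fixations > 0].mean()

pred = np.array([[0.1, 0.9], [0.8, 0.2]])
gt   = np.array([[0.0, 1.0], [1.0, 0.0]])
fix  = np.array([[0, 1], [1, 0]])

print(cc(pred, gt) > 0.9)  # strong positive correlation
print(nss(pred, fix) > 0)  # above-average saliency at fixated pixels
```

Higher is better for all three metrics: CC measures linear agreement of the full maps, while NSS rewards high standardized saliency exactly at the human fixation points.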

Figure 3: Qualitative results and comparison to the state of the art. Compared with MMF and MKL, which do not consider “prominent” text information, our predicted saliency maps respond more accurately at textual locations. We also outperform the other baselines (AIM through Mlnet) proposed for natural images by avoiding patches of irrelevant high response.

4 Experimental Results

In this section, we first qualitatively evaluate our model against nine existing saliency prediction models. We also present a quantitative comparison on the Pictorial, Textual, and Mixed images from FiWI. Furthermore, we analyze the effectiveness of each component of the proposed algorithm by removing Position Prior Learning (PPL), Multi Discriminative Region Detection (MDRD), and Text Region Detection (TRD) from the whole network. Last, we experimentally verify that the proposed TRD and MDRD can be plugged into other saliency models, showing that they lead to performance gains on top of two state-of-the-art saliency models, Sam [6] and Mlnet [5].

4.1 Performance Comparison

We compare our model with nine previous saliency models: MMF [14], MKL [19], AIM [2], SIG [9], SUN [23], GBVS [8], Itti [10], Sam [6], and Mlnet [5]. Figure 3 illustrates the comparison among the models and demonstrates that our model better represents human attention. In Table 1, we quantitatively compare performance in terms of three evaluation metrics (sAUC, NSS, and CC) on Pictorial, Textual, and Mixed Web pages. Our model greatly outperforms the other baselines on Pictorial and Textual Web pages and is slightly better on Mixed Web pages.

Model Pictorial-webpage (sAUC NSS CC) Text-webpage (sAUC NSS CC) Mixed-webpage (sAUC NSS CC)
Itti [10] 0.538 0.443 0.233 0.544 0.483 0.234 0.552 0.462 0.267
GBVS [8] 0.564 0.685 0.350 0.561 0.680 0.329 0.575 0.708 0.336
AWS [7] 0.633 0.806 0.401 0.648 0.867 0.406 0.645 0.854 0.393
Signat [9] 0.664 0.848 0.415 0.682 0.885 0.408 0.682 0.920 0.410
AIM [2] 0.663 0.907 0.453 0.679 0.936 0.445 0.678 0.943 0.439
SUN [23] 0.707 1.072 0.488 0.687 0.993 0.459 0.700 1.017 0.490
MKL [19] 0.723 0.880 0.429 0.741 0.861 0.410 0.730 0.891 0.433
MMF [14] 0.731 0.904 0.441 0.720 0.890 0.419 0.760 0.920 0.431
Mlnet [5] 0.703 0.912 0.530 0.711 0.802 0.463 0.725 0.905 0.522
Sam [6] 0.720 0.982 0.494 0.743 0.924 0.470 0.762 0.938 0.500
Ours 0.761 1.202 0.641 0.781 1.250 0.511 0.751 1.029 0.580
Table 1: Quantitative Measurements of Different Methods

4.2 Analysis of Each Module

We further analyze the respective effects of the PPL, MDRD, and TRD proposed in our Element Sensitive Saliency Model. First, to explore whether Position Prior Learning captures position bias in Web-page viewing, we illustrate in Figure 4 three kinds of Web pages: pages rich in text, pages arranged around pictures, and pages combining pictures and text. For each category, we average the corresponding prior maps generated by the Prior Learning Network. We observe typical “F-shaped” and “top-left” biases in textual Web pages, a “center” bias in pictorial pages, and “sidebar” and “top-left” biases in mixed pages. This indicates that the proposed PPL algorithm can capture the common position-bias prior for Web pages with similar layouts.

Figure 4: Visualization of the typical position biases learned by the Prior Learning Network. The last column shows the averaged prior map generated by the Prior Learning Network for each kind of Web page.

We then visualize what TRD and MDRD extract from original Web pages in Figure 5. The TRD representation shows that TRD selectively highlights locations where textual information is remarkable, instead of simply detecting the edges of each character as in previous methods. Text in logos, headlines, or subheadings has larger activation on the text saliency maps, which is important for our model since humans usually pay more attention to these regions than to normal body text. The MDRD representation shows that MDRD can “pre-select” special discriminative regions while greatly suppressing textual regions. For comparison, as in most previous works, we also extract the pooling5 representation from a pre-trained VGG16 [20]; the feature maps generated by our MDRD are sparser, with most inconspicuous regions suppressed.

Figure 5: Visualization of what TRD and MDRD extract.

Furthermore, we quantitatively illustrate the effectiveness of the TRD, MDRD, and PPL modules. Table 2 compares “Baseline”, “Baseline+TRD”, “Baseline+MDRD”, “Baseline+TRD+MDRD”, and the proposed model, “Baseline+TRD+MDRD+PPL”, in terms of the sAUC, NSS, and CC metrics, showing that each module contributes substantially to saliency prediction for Web pages.

Pruning Experiment All Test Webpages (sAUC NSS CC)
Baseline 0.545 0.561 0.402
Baseline+TRD 0.730 0.855 0.511
Baseline+MDRD 0.700 0.820 0.495
Baseline+TRD+MDRD 0.731 1.005 0.612
Baseline+TRD+MDRD+PPL 0.760 1.085 0.637
Table 2: Ablation Study

5 Conclusion

In this paper, we present an Element Sensitive Saliency Model for Web pages. The whole model consists of an Element Feature Network (EF-Net), a Prior Learning Network (PL-Net), and a Prediction Network (P-Net). Compared with previous works, we propose VAE-based Position Prior Learning in PL-Net to automatically learn the varied visual preferences that arise when humans scan Web pages. Additionally, in EF-Net, we leverage Text Region Detection (TRD) and Multi Discriminative Region Detection (MDRD) to handle the specific challenges of this task. We experimentally verified that the proposed model outperforms state-of-the-art models for Web-page saliency prediction.


  • [1]
  • [2] N. D. Bruce and J. K. Tsotsos. Saliency, attention, and visual search: An information theoretic approach. Journal of vision, 9(3):5–5, 2009.
  • [3] G. Buscher, E. Cutrell, and M. R. Morris. What do you see when you’re surfing?: using eye tracking to predict salient regions of web pages. In Proceedings of the SIGCHI conference on human factors in computing systems, pages 21–30. ACM, 2009.
  • [4] Z. Bylinskii, T. Judd, A. Borji, L. Itti, F. Durand, A. Oliva, and A. Torralba. MIT saliency benchmark.
  • [5] M. Cornia, L. Baraldi, G. Serra, and R. Cucchiara. A deep multi-level network for saliency prediction. In Pattern Recognition (ICPR), 2016 23rd International Conference on, pages 3488–3493. IEEE, 2016.
  • [6] M. Cornia, L. Baraldi, G. Serra, and R. Cucchiara. Predicting human eye fixations via an lstm-based saliency attentive model. arXiv preprint arXiv:1611.09571, 2016.
  • [7] A. Garcia-Diaz, V. Leboran, X. R. Fdez-Vidal, and X. M. Pardo. On the relationship between optical variability, visual saliency, and eye fixations: A computational approach. Journal of vision, 12(6):17–17, 2012.
  • [8] J. Harel, C. Koch, and P. Perona. Graph-based visual saliency. In Advances in neural information processing systems, pages 545–552, 2007.
  • [9] X. Hou, J. Harel, and C. Koch. Image signature: Highlighting sparse salient regions. IEEE transactions on pattern analysis and machine intelligence, 34(1):194–201, 2012.
  • [10] L. Itti and C. Koch. A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40(12):1489–1506, 2000.
  • [11] T. Judd, K. Ehinger, F. Durand, and A. Torralba. Learning to predict where humans look. In Computer Vision, 2009 IEEE 12th international conference on, pages 2106–2113. IEEE, 2009.
  • [12] D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
  • [13] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
  • [14] J. Li, L. Su, B. Wu, J. Pang, C. Wang, Z. Wu, and Q. Huang. Webpage saliency prediction with multi-features fusion. In Image Processing (ICIP), 2016 IEEE International Conference on, pages 674–678. IEEE, 2016.
  • [15] J. Pan, C. Canton, K. McGuinness, N. E. O’Connor, J. Torres, E. Sayrol, and X. Giro-i Nieto. Salgan: Visual saliency prediction with generative adversarial networks. arXiv preprint arXiv:1701.01081, 2017.
  • [16] J. Pan, E. Sayrol, X. Giroinieto, K. Mcguinness, and N. E. Oconnor. Shallow and deep convolutional networks for saliency prediction. In Computer Vision and Pattern Recognition, pages 598–606, 2016.
  • [17] R. A. Rensink. The dynamic representation of scenes. Visual cognition, 7(1-3):17–42, 2000.
  • [18] C. Shen and Q. Zhao. Learning to predict eye fixations for semantic contents using multi-layer sparse network. Neurocomputing, 138:61–68, 2014.
  • [19] C. Shen and Q. Zhao. Webpage saliency. In European conference on computer vision, pages 33–46. Springer, 2014.
  • [20] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  • [21] J. D. Still and C. M. Masciocchi. A saliency model predicts fixations in web interfaces. In 5 th International Workshop on Model Driven Development of Advanced User Interfaces (MDDAUI 2010), page 25. Citeseer, 2010.
  • [22] K. Wang and S. Belongie. Word spotting in the wild. In European Conference on Computer Vision, pages 591–604. Springer, 2010.
  • [23] L. Zhang, M. H. Tong, T. K. Marks, H. Shan, and G. W. Cottrell. Sun: A bayesian framework for saliency using natural statistics. Journal of vision, 8(7):32–32, 2008.
  • [24] X. Zhang, L. Zhaoping, T. Zhou, and F. Fang. Neural activities in v1 create a bottom-up saliency map. Neuron, 73(1):183–192, 2012.
  • [25] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Learning deep features for discriminative localization. In Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on, pages 2921–2929. IEEE, 2016.