Leveraging Eye-Movement Data for Saliency Modeling: Invariance Analysis and a Robust New Model

05/16/2019
by Zhaohui Che, et al.

Data size is the bottleneck for developing deep saliency models, because collecting eye-movement data is very time-consuming and expensive. Most current studies of human attention and saliency modeling have used high-quality, stereotypical stimuli. In the real world, however, captured images undergo various types of transformations. Can we use these transformations to augment existing saliency datasets? Here, we first create a novel saliency dataset comprising the fixations of 10 observers over 1,900 images degraded by 19 types of transformations. Second, by analyzing the eye movements, we find that observers look at different locations in transformed versus original images. Third, we use the new data over transformed images, called data augmentation transformations (DATs), to train deep saliency models. We find that label-preserving DATs with negligible impact on human gaze boost saliency prediction, whereas DATs that severely alter human gaze degrade performance. These valid, label-preserving augmentation transformations provide a solution for enlarging existing saliency datasets. Finally, we introduce a novel saliency model based on a generative adversarial network (dubbed GazeGAN). A modified U-Net serves as the generator of GazeGAN; it combines classic skip connections with a novel center-surround connection (CSC) in order to leverage multi-level features. We also propose a histogram loss based on the Alternative Chi-Square distance (ACS HistLoss) to refine the saliency map in terms of its luminance distribution. Extensive experiments and comparisons on three datasets indicate that GazeGAN achieves the best performance on popular saliency evaluation metrics and is more robust to various perturbations. Our code and data are available at: https://github.com/CZHQuality/Sal-CFS-GAN.
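To make the augmentation idea concrete, below is a minimal Python sketch of how label-preserving DATs can enlarge a saliency dataset. The specific transformations and parameters (mirroring, mild Gaussian noise with sigma=5.0) are illustrative assumptions rather than the paper's exact list; the key point is that a valid DAT transforms the fixation labels consistently with the image, or leaves gaze unaffected.

```python
# Illustrative sketch of label-preserving data augmentation transformations
# (DATs) for a saliency dataset. The transformations below are assumptions
# for illustration: mild DATs (mirroring, slight noise) tend to preserve
# gaze, while severe ones do not and should be excluded.
import numpy as np

def mirror(image, fixation_map):
    """Horizontal mirroring: the fixation map is flipped together with
    the image so that labels stay aligned."""
    return image[:, ::-1].copy(), fixation_map[:, ::-1].copy()

def add_gaussian_noise(image, fixation_map, sigma=5.0):
    """Mild additive noise: assumed to have negligible impact on gaze,
    so the fixation map is reused unchanged."""
    noisy = np.clip(image + np.random.normal(0.0, sigma, image.shape), 0, 255)
    return noisy.astype(image.dtype), fixation_map

def augment_dataset(samples, transforms):
    """Enlarge a saliency dataset by applying each valid DAT to each
    (image, fixation_map) pair."""
    augmented = list(samples)
    for image, fixation_map in samples:
        for t in transforms:
            augmented.append(t(image, fixation_map))
    return augmented
```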
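The center-surround connection can be pictured as a learned local-contrast branch attached alongside an ordinary U-Net skip connection. The sketch below is a hedged PyTorch interpretation, not the authors' implementation: it assumes a difference-of-Gaussians-style contrast in which the "surround" is a smoothed copy of the feature map (approximated here with average pooling) and a 1x1 convolution rescales the center-minus-surround signal.

```python
# Hedged PyTorch sketch of a center-surround connection (CSC) for a
# U-Net-style generator. The exact CSC design is in the paper/repo; this
# version assumes surround = locally averaged features, output = a learned
# mix of (center - surround) local contrast.
import torch
import torch.nn as nn

class CenterSurroundConnection(nn.Module):
    def __init__(self, channels, surround_kernel=5):
        super().__init__()
        # Average pooling with stride 1 approximates the "surround" response
        # while keeping the spatial size unchanged.
        self.surround = nn.AvgPool2d(surround_kernel, stride=1,
                                     padding=surround_kernel // 2)
        # 1x1 conv lets the network rescale the contrast signal per channel.
        self.mix = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        center = x
        surround = self.surround(x)
        return self.mix(center - surround)  # local-contrast feature

# Usage: concatenate CSC features with a classic skip connection.
feat = torch.randn(1, 64, 32, 32)
csc = CenterSurroundConnection(64)
skip = torch.cat([feat, csc(feat)], dim=1)  # classic skip + CSC features
```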
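Finally, the ACS HistLoss compares the luminance histograms of the predicted and ground-truth saliency maps. The sketch below is an assumption-laden illustration: the soft Gaussian binning (used to keep the histogram differentiable) and the exact Alternative Chi-Square form, 2 * sum((p - q)^2 / (p + q)), are my reconstructions for illustration; the authors' repository contains the real implementation.

```python
# Hedged PyTorch sketch of an ACS-style histogram loss between saliency maps.
# Soft binning keeps the histogram differentiable so it can be trained on.
import torch

def soft_histogram(x, bins=256, sigma=0.01):
    """Differentiable histogram of values in [0, 1] via Gaussian binning."""
    centers = torch.linspace(0.0, 1.0, bins, device=x.device)
    diffs = x.reshape(-1, 1) - centers.reshape(1, -1)
    weights = torch.exp(-0.5 * (diffs / sigma) ** 2)
    hist = weights.sum(dim=0)
    return hist / (hist.sum() + 1e-8)  # normalize to a distribution

def acs_hist_loss(pred, target, bins=256):
    """Alternative Chi-Square distance between luminance histograms
    (assumed form: 2 * sum((p - q)^2 / (p + q)))."""
    p = soft_histogram(pred.clamp(0, 1), bins)
    q = soft_histogram(target.clamp(0, 1), bins)
    return 2.0 * torch.sum((p - q) ** 2 / (p + q + 1e-8))

pred = torch.rand(1, 1, 64, 64, requires_grad=True)
target = torch.rand(1, 1, 64, 64)
loss = acs_hist_loss(pred, target)
loss.backward()  # differentiable, so usable as a training loss
```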


