GASP: Gated Attention for Saliency Prediction

06/09/2022
by Fares Abawi, et al.

Saliency prediction refers to the computational task of modeling overt attention. Social cues greatly influence our attention, consequently altering our eye movements and behavior. To emphasize the efficacy of such features, we present a neural model for integrating social cues and weighting their influences. Our model consists of two stages. During the first stage, we detect two social cues by following gaze, estimating gaze direction, and recognizing affect. These features are then transformed into spatiotemporal maps through image processing operations. The transformed representations are propagated to the second stage (GASP), where we explore various techniques of late fusion for integrating social cues and introduce two sub-networks for directing attention to relevant stimuli. Our experiments indicate that fusion approaches achieve better results for static integration methods, whereas non-fusion approaches, for which the influence of each modality is unknown, result in better outcomes when coupled with recurrent models for dynamic saliency prediction. We show that gaze direction and affective representations contribute a prediction-to-ground-truth correspondence improvement of at least 5% compared to dynamic saliency models without social cues. Furthermore, affective representations improve GASP, supporting the necessity of considering affect-biased attention in predicting saliency.
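The late-fusion idea in the second stage can be sketched as a gated weighted sum of per-modality spatiotemporal maps. This is a minimal illustrative sketch, not the paper's actual architecture: the function name `gated_fusion` and the softmax gating over scalar logits are assumptions introduced here for clarity.

```python
import numpy as np

def gated_fusion(modality_maps, gate_logits):
    """Fuse per-modality saliency maps with softmax gates.

    modality_maps: list of arrays of identical shape, one per social cue
                   (e.g. gaze direction, affect).
    gate_logits:   one scalar score per modality; softmax turns these
                   into fusion weights, so each cue's influence is explicit.
    """
    # Numerically stable softmax over the modality scores.
    weights = np.exp(gate_logits - np.max(gate_logits))
    weights /= weights.sum()
    # Weighted sum yields the fused saliency map.
    fused = sum(w * m for w, m in zip(weights, modality_maps))
    return fused, weights

# Example: three 2x2 cue maps with equal gate scores fuse to their mean.
maps = [np.ones((2, 2)), 2 * np.ones((2, 2)), 3 * np.ones((2, 2))]
fused, weights = gated_fusion(maps, np.array([0.0, 0.0, 0.0]))
```

In a trained model the gate logits would be produced by a learned sub-network conditioned on the input, which is what makes the fusion "gated" rather than a fixed average.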


Related research

- Digging Deeper into Egocentric Gaze Prediction (04/12/2019)
- Predicting Gaze in Egocentric Video by Learning Task-dependent Attention Transition (03/24/2018)
- Predicting Human Eye Fixations via an LSTM-based Saliency Attentive Model (11/29/2016)
- Invariance Analysis of Saliency Models versus Human Gaze During Scene Free Viewing (10/10/2018)
- GazeGAN: A Generative Adversarial Saliency Model based on Invariance Analysis of Human Gaze During Scene Free Viewing (05/16/2019)
- Leverage eye-movement data for saliency modeling: Invariance Analysis and a Robust New Model (05/16/2019)
- Speech, Head, and Eye-based Cues for Continuous Affect Prediction (07/23/2019)
