Saliency Revisited: Analysis of Mouse Movements versus Fixations

05/30/2017
by Hamed R. Tavakoli, et al.

This paper revisits visual saliency prediction by evaluating recent advancements in the field, such as crowd-sourced mouse-tracking databases and contextual annotations. We pursue a critical and quantitative approach to new challenges, including the quality of mouse tracking versus eye tracking for model training and evaluation. We extend the quantitative evaluation of models to incorporate contextual information by proposing an evaluation methodology that accounts for contextual factors such as text, faces, and object attributes. The proposed contextual evaluation scheme facilitates detailed analysis of models and helps identify their pros and cons. Through several experiments, we find that (1) mouse-tracking data has lower inter-participant visual congruency and higher dispersion than eye-tracking data, (2) mouse-tracking data does not fully agree with eye tracking, either overall or within specific contextual regions, (3) mouse-tracking data leads to acceptable results when training existing models, and (4) mouse-tracking data is less reliable for model selection and evaluation. The contextual evaluation also reveals that, among the studied models, no single model performs best on all the tested annotations.
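Finding (1) contrasts the spatial dispersion of mouse-tracking and eye-tracking data. As an illustration only (the paper does not specify this exact formula; the function name and sample coordinates below are hypothetical), dispersion can be sketched as the mean pairwise distance between participants' 2-D coordinates on an image:

```python
import numpy as np

def dispersion(points):
    """Mean pairwise Euclidean distance between 2-D gaze or mouse
    coordinates; larger values indicate more spread-out attention data."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    if n < 2:
        return 0.0
    # All pairwise difference vectors via broadcasting, shape (n, n, 2)
    diffs = pts[:, None, :] - pts[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    # Average over the n*(n-1) off-diagonal (distinct) pairs
    return dists.sum() / (n * (n - 1))

# Hypothetical data: a tight eye-tracking-like cluster versus
# spread-out mouse-tracking-like points on the same image.
eye_fixations = [(100, 100), (102, 101), (99, 98)]
mouse_points = [(50, 60), (200, 180), (120, 300)]
print(dispersion(eye_fixations) < dispersion(mouse_points))  # True
```

Under this sketch, higher dispersion for mouse data would mean participants' mouse samples are less spatially concentrated than fixations, consistent with the paper's observation of lower inter-participant congruency.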

