Modeling human visual search: A combined Bayesian searcher and saliency map approach for eye movement guidance in natural scenes

09/17/2020
by M. Sclar et al.

Finding objects is essential for almost any daily-life visual task. Saliency models have been useful for predicting fixation locations in natural images, but they are static, i.e., they provide no information about the time-sequence of fixations. Nowadays, one of the biggest challenges in the field is to go beyond saliency maps and predict a sequence of fixations related to a visual task, such as searching for a given target. Bayesian observer models have been proposed for this task, as they represent visual search as an active sampling process. Nevertheless, they have mostly been evaluated on artificial images, and how they adapt to natural images remains largely unexplored. Here, we propose a unified Bayesian model for visual search guided by saliency maps as prior information. We validated our model with a visual search experiment in natural scenes in which we recorded eye movements. We show that, although state-of-the-art saliency models perform well in predicting the first two fixations in a visual search task, their performance degrades to chance afterward. This suggests that saliency maps alone are good at modeling bottom-up first impressions, but are not enough to explain scanpaths when top-down task information is critical. Thus, we propose to use them as priors for Bayesian searchers. This approach leads to behavior very similar to that of humans across the whole scanpath, both in the percentage of targets found as a function of fixation rank and in scanpath similarity, reproducing the entire sequence of eye movements.
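The core idea, a saliency map acting as the prior of a Bayesian searcher that updates its belief about the target location after each fixation, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the Gaussian visibility model, the 0.9 evidence down-weighting factor, and the greedy maximum-a-posteriori fixation rule are all simplifying assumptions.

```python
import numpy as np

def bayesian_search(saliency, target_pos, fov_sigma=2.0, max_fixations=10):
    """Greedy Bayesian searcher sketch. The normalized saliency map is the
    prior over target location; each fixation contributes 'target not seen
    here' evidence that reshapes the posterior (hypothetical likelihood)."""
    h, w = saliency.shape
    posterior = saliency / saliency.sum()          # prior = normalized saliency
    ys, xs = np.mgrid[0:h, 0:w]
    scanpath = []
    fix = (h // 2, w // 2)                         # start at the image center
    for _ in range(max_fixations):
        scanpath.append(fix)
        if fix == target_pos:                      # target foveated -> found
            return scanpath, True
        # visibility falls off with eccentricity from the current fixation
        dist2 = (ys - fix[0]) ** 2 + (xs - fix[1]) ** 2
        visibility = np.exp(-dist2 / (2 * fov_sigma ** 2))
        # well-inspected locations are down-weighted (target was not seen there)
        posterior *= (1.0 - 0.9 * visibility)
        posterior[fix] = 0.0                       # no return to the same spot
        posterior /= posterior.sum()
        # next fixation = maximum a posteriori location (ideal-searcher style)
        fix = tuple(np.unravel_index(posterior.argmax(), posterior.shape))
    return scanpath, False
```

For example, on a 20x20 random saliency map with a strong peak at the target, the searcher typically fixates the center first and then jumps to the high-prior location, mirroring the paper's point that the prior dominates early fixations while sequential evidence accumulation drives the rest of the scanpath.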

Related research:

- Saccade Sequence Prediction: Beyond Static Saliency Maps (11/29/2017)
- Line Drawings of Natural Scenes Guide Visual Attention (12/19/2019)
- Look Twice: A Computational Model of Return Fixations across Tasks and Species (01/05/2021)
- Predicting Eye Fixations Under Distortion Using Bayesian Observers (02/06/2021)
- Benchmarking human visual search computational models in natural scenes: models comparison and reference datasets (12/10/2021)
- Can Peripheral Representations Improve Clutter Metrics on Complex Scenes? (08/14/2016)
- What am I searching for? (07/31/2018)
