Look Twice: A Computational Model of Return Fixations across Tasks and Species

01/05/2021
by Mengmi Zhang et al.

Saccadic eye movements allow animals to bring different parts of an image into high resolution. During free viewing, inhibition of return incentivizes exploration by discouraging revisits to previously fixated locations. Despite this inhibition, here we show that subjects make frequent return fixations. We systematically studied a total of 44,328 return fixations out of 217,440 fixations across different tasks, in monkeys and humans, and in static images or egocentric videos. The ubiquitous return fixations were consistent across subjects, tended to occur within short offsets, and were characterized by longer durations than non-return fixations. The locations of return fixations corresponded to image areas of higher saliency and higher similarity to the sought target during visual search tasks. We propose a biologically inspired computational model that capitalizes on a deep convolutional neural network for object recognition to predict a sequence of fixations. Given an input image, the model computes four maps that constrain the location of the next saccade: a saliency map, a target similarity map, a saccade size map, and a memory map. The model exhibits frequent return fixations and approximates the properties of return fixations across tasks and species. The model provides initial steps towards capturing the trade-off between exploiting informative image locations and exploring novel ones during scene viewing.
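The four-map scheme described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the paper's implementation: the pointwise-product combination rule, the Gaussian saccade-size prior, and the memory decay constants below are all assumptions chosen for clarity. A memory map that decays back toward 1 (rather than permanent inhibition of return) is what lets the simulated scanpath revisit earlier locations.

```python
import numpy as np

def next_fixation(saliency, target_sim, saccade_size, memory):
    """Pick the next fixation as the argmax of the product of the four maps.
    All inputs are 2D arrays of equal shape; the product rule is an
    illustrative assumption, not the paper's exact combination."""
    combined = saliency * target_sim * saccade_size * memory
    return np.unravel_index(np.argmax(combined), combined.shape)

def gaussian_map(shape, fix, sigma):
    """Gaussian bump centered on the current fixation; used both as a
    saccade-size prior and to carve inhibition into the memory map."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (ys - fix[0]) ** 2 + (xs - fix[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def simulate_scanpath(saliency, target_sim, n_fix=10, sigma=20.0, decay=0.5):
    """Generate a fixation sequence. The memory map suppresses recently
    visited locations but recovers toward 1 over time, so return
    fixations can and do occur (hypothetical decay constants)."""
    shape = saliency.shape
    memory = np.ones(shape)
    fix = np.unravel_index(np.argmax(saliency), shape)
    path = [fix]
    for _ in range(n_fix - 1):
        memory = 1.0 - (1.0 - memory) * decay              # inhibition fades
        memory = memory * (1.0 - 0.9 * gaussian_map(shape, fix, 5.0))  # inhibit current spot
        size_map = gaussian_map(shape, fix, sigma)
        fix = next_fixation(saliency, target_sim, size_map, memory)
        path.append(fix)
    return path
```

For free viewing rather than search, the target similarity map can simply be set to all ones, leaving saliency, saccade size, and memory to drive the scanpath.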



Related research

- 09/17/2020 · Modeling human visual search: A combined Bayesian searcher and saliency map approach for eye movement guidance in natural scenes. "Finding objects is essential for almost any daily-life visual task. Sali..."
- 05/25/2020 · What am I Searching for: Zero-shot Target Identity Inference in Visual Search. "Can we infer intentions from a person's actions? As an example problem, ..."
- 05/29/2021 · FoveaTer: Foveated Transformer for Image Classification. "Many animals and humans process the visual field with a varying spatial ..."
- 04/13/2016 · A Novel Method to Study Bottom-up Visual Saliency and its Neural Mechanism. "In this study, we propose a novel method to measure bottom-up saliency m..."
- 08/23/2013 · Suspicious Object Recognition Method in Video Stream Based on Visual Attention. "We propose a state of the art method for intelligent object recognition ..."
- 07/31/2018 · What am I searching for? "Can we infer intentions and goals from a person's actions? As an example..."
- 12/06/2015 · Vanishing point attracts gaze in free-viewing and visual search tasks. "To investigate whether the vanishing point (VP) plays a significant role..."
