Gravitational Models Explain Shifts on Human Visual Attention

09/15/2020
by Dario Zanca, et al.

Visual attention refers to the human brain's ability to select relevant sensory information for preferential processing, improving performance in visual and cognitive tasks. It proceeds in two phases: one in which visual feature maps are acquired and processed in parallel, and another in which the information from these maps is merged to select a single location to be attended for further, more complex computations and reasoning. Its computational description is challenging, especially when the temporal dynamics of the process are taken into account. Numerous methods for estimating saliency have been proposed over the last three decades. They achieve almost perfect performance in estimating saliency at the pixel level, but the way they generate shifts in visual attention depends entirely on winner-take-all (WTA) circuitry, implemented by the biological hardware to select the location of maximum saliency towards which to direct overt attention. In this paper we propose a gravitational model (GRAV) to describe attentional shifts: every feature acts as an attractor, and shifts result from the joint effect of all attractors. In this framework, the assumption of a single, centralized saliency map is no longer necessary, though still plausible. Quantitative results on two large image datasets show that this model predicts shifts more accurately than winner-take-all.
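To make the contrast concrete, the sketch below illustrates the two mechanisms on a toy saliency map: a WTA step jumps to the single location of maximum saliency, while a gravitational step integrates the pull of every salient location, each weighted by its saliency value. This is only an illustration under assumed dynamics (inverse-square attraction, linear damping, fixed step size), not the paper's GRAV equations; the function names `wta_shift` and `gravitational_shift` are hypothetical.

```python
import numpy as np

def wta_shift(saliency):
    """Winner-take-all: attend the single location of maximum saliency."""
    return np.unravel_index(np.argmax(saliency), saliency.shape)

def gravitational_shift(saliency, focus, n_steps=50, dt=0.1, damping=0.5, eps=1e-6):
    """Move the focus of attention under the joint pull of all salient
    locations, each acting as an attractor with 'mass' equal to its
    saliency value (a simplified sketch, not the paper's exact model)."""
    h, w = saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    masses = saliency.ravel()
    positions = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)

    pos = np.array(focus, dtype=float)
    vel = np.zeros(2)
    for _ in range(n_steps):
        diff = positions - pos                      # vectors toward every attractor
        dist2 = (diff ** 2).sum(axis=1) + eps       # squared distances (regularized)
        # inverse-square pull, weighted by saliency "mass"
        force = (masses[:, None] * diff / dist2[:, None] ** 1.5).sum(axis=0)
        vel = (1.0 - damping * dt) * vel + dt * force
        pos = np.clip(pos + dt * vel, [0, 0], [h - 1, w - 1])
    return tuple(pos)

# Toy usage: two salient blobs. WTA jumps to the stronger one, while the
# gravitational focus settles under the combined pull of both.
sal = np.zeros((64, 64))
sal[16, 16] = 1.0
sal[48, 48] = 0.8
print("WTA shift:", wta_shift(sal))
print("Gravitational shift:", gravitational_shift(sal, focus=(32.0, 32.0)))
```

In this toy setting the distributed pull of the attractors produces a shift that no single saliency maximum would predict, which is the qualitative behavior the abstract attributes to GRAV over WTA.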

Related research

Toward Improving the Evaluation of Visual Attention Models: a Crowdsourcing Approach (02/11/2020)
Human visual attention is a complex phenomenon. A computational modeling...

SalyPath360: Saliency and Scanpath Prediction Framework for Omnidirectional Images (01/01/2022)
This paper introduces a new framework to predict visual attention of omn...

GAMR: A Guided Attention Model for (visual) Reasoning (06/10/2022)
Humans continue to outperform modern AI systems in their ability to flex...

An HVS-Oriented Saliency Map Prediction Modeling (11/08/2020)
Visual attention is one of the most significant characteristics for sele...

Plan-Recognition-Driven Attention Modeling for Visual Recognition (12/02/2018)
Human visual recognition of activities or external agents involves an in...

FixaTons: A collection of Human Fixations Datasets and Metrics for Scanpath Similarity (02/07/2018)
In the last three decades, human visual attention has been a topic of gr...

Reclaiming saliency: rhythmic precision-modulated action and perception (03/23/2022)
Computational models of visual attention in artificial intelligence and ...
