CNN-based search model underestimates attention guidance by simple visual features

03/29/2021
by Endel Poder, et al.

Recently, Zhang et al. (2018) proposed an interesting model of attention guidance that uses visual features learnt by convolutional neural networks for object recognition. I adapted this model for search experiments with accuracy as the measure of performance. Simulation of our previously published feature and conjunction search experiments revealed that the CNN-based search model considerably underestimates human attention guidance by simple visual features. A simple explanation is that the model has no bottom-up guidance of attention. Another view might be that standard CNNs do not learn the features required for human-like attention guidance.
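
For concreteness, here is a minimal sketch, assuming a standard PyTorch/torchvision setup with a pretrained VGG16 backbone, of how CNN features could drive attention guidance in the spirit of Zhang et al. (2018): the target image's pooled feature activations act as a template that is cross-correlated with the search image's feature maps to produce an attention (priority) map. The backbone, layer choice, pooling, and normalization below are illustrative assumptions, not the model's published implementation.

```python
# Illustrative sketch (not the paper's code): attention guidance from CNN
# features. Target features serve as a template that is cross-correlated
# with search-image feature maps to yield an attention (priority) map.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing for the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Truncate VGG16 after an intermediate conv block; the exact layer is an
# illustrative choice, not taken from the paper.
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:23].eval()


@torch.no_grad()
def attention_map(search_img: Image.Image, target_img: Image.Image) -> torch.Tensor:
    """Return a normalized attention map over the search image."""
    search_feats = backbone(preprocess(search_img).unsqueeze(0))  # 1 x C x H x W
    target_feats = backbone(preprocess(target_img).unsqueeze(0))  # 1 x C x H' x W'
    # Pool the target activations into a 1x1 template per channel.
    template = F.adaptive_avg_pool2d(target_feats, 1)             # 1 x C x 1 x 1
    # Use the template as a convolution kernel: high values mark locations
    # whose features resemble the target's.
    amap = F.conv2d(search_feats, template)                       # 1 x 1 x H x W
    amap = amap - amap.min()
    return (amap / (amap.max() + 1e-8)).squeeze()
```

In a search simulation, the peak of such a map would indicate where attention is deployed first; how that map is converted into a response (here, search accuracy rather than fixation sequences) is the adaptation described in the abstract.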

