Vanishing point attracts gaze in free-viewing and visual search tasks

12/06/2015
by Ali Borji, et al.

To investigate whether the vanishing point (VP) plays a significant role in gaze guidance, we ran two experiments. In the first, we recorded fixations of 10 observers (4 female; mean age 22; SD=0.84) freely viewing 532 images, of which 319 contained a VP (shuffled presentation; each image shown for 4 s). We found that the average number of fixations inside a local region (80x80 pixels) centered on the VP was significantly higher than the average number of fixations at random locations (t-test; n=319; p=1.8e-35). To address the confounding factor of saliency, we learned a combined model of bottom-up saliency and VP. The AUC score of our model (0.85; SD=0.01) was significantly higher than that of the original saliency model (e.g., 0.80 for the AIM model of Bruce & Tsotsos (2009); t-test; p=3.14e-16) and the VP-only model (0.64; t-test; p=4.02e-22). In the second experiment, we asked 14 subjects (4 female; mean age 23.07; SD=1.26) to search for a target character (T or L) placed randomly in a cell of an imaginary 3x3 grid overlaid on an image, and to report their answer by pressing one of two keys. Stimuli consisted of 270 color images (180 with a single VP, 90 without). The target appeared with equal probability in each cell (15 times as L, 15 times as T). We found that subjects were significantly faster (and more accurate) when the target appeared inside the cell containing the VP than in cells without the VP (median across 14 subjects: 1.34 s vs. 1.96 s; Wilcoxon rank-sum test; p=0.0014). Response times at VP cells were also significantly lower than response times on images without a VP (median 2.37 s; p=4.77e-05). These findings support the hypothesis that the vanishing point, like faces and text (Cerf et al., 2009) and gaze direction (Borji et al., 2014), attracts attention in free-viewing and visual search.
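The fixation analysis in the first experiment is simple to reproduce. Below is a minimal Python sketch, assuming fixation coordinates and VP locations are available as NumPy arrays; the function names, the data layout, and the use of a paired t-test across images are our assumptions, not the authors' released code.

```python
import numpy as np
from scipy import stats

def fixations_in_window(fix_xy, center, size=80):
    """Count fixations inside a size x size pixel window centered at `center`."""
    half = size // 2
    inside = (np.abs(fix_xy[:, 0] - center[0]) <= half) & \
             (np.abs(fix_xy[:, 1] - center[1]) <= half)
    return int(inside.sum())

def vp_vs_random(images, seed=0):
    """Compare fixation counts in the VP window vs. a random window, per image.

    `images` is a hypothetical list of dicts with keys 'fixations'
    (N x 2 array of x, y), 'vp' (x, y), and 'shape' (H, W).
    """
    rng = np.random.default_rng(seed)
    vp_counts, rand_counts = [], []
    for im in images:
        h, w = im['shape']
        vp_counts.append(fixations_in_window(im['fixations'], im['vp']))
        # Random control window, kept away from the image border.
        rand_center = (rng.integers(40, w - 40), rng.integers(40, h - 40))
        rand_counts.append(fixations_in_window(im['fixations'], rand_center))
    return stats.ttest_rel(vp_counts, rand_counts)  # t statistic and p-value
```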
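The combined model can likewise be sketched. The paper learns the combination of bottom-up saliency and VP; the version below, with a Gaussian VP channel, a fixed convex combination, and a pixel-level AUC, is only an illustrative approximation (the weight, the Gaussian width, and the AUC variant are all assumptions).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def vp_channel(shape, vp, sigma_frac=0.05):
    """Hypothetical VP channel: a 2-D Gaussian centered on the vanishing point."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    sigma = sigma_frac * max(h, w)
    return np.exp(-((xs - vp[0]) ** 2 + (ys - vp[1]) ** 2) / (2 * sigma ** 2))

def combined_map(saliency, vp, weight=0.5):
    """Convex combination of a normalized bottom-up saliency map (e.g., AIM
    output) and the VP channel; the paper learns this combination instead."""
    sal = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-12)
    return (1 - weight) * sal + weight * vp_channel(saliency.shape, vp)

def auc_score(pred_map, fixation_mask):
    """Pixel-level AUC: fixated pixels are positives, all others negatives."""
    return roc_auc_score(fixation_mask.ravel().astype(int), pred_map.ravel())
```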
