Visual Decoding of Targets During Visual Search From Human Eye Fixations

06/19/2017
by   Hosnieh Sattar, et al.

What does human gaze reveal about a user's intents, and to what extent can these intents be inferred or even visualized? Gaze was proposed as an implicit source of information to predict the target of visual search and, more recently, to predict the object class and attributes of the search target. In this work, we go one step further and investigate the feasibility of combining recent advances in encoding human gaze information using deep convolutional neural networks with the power of generative image models to visually decode, i.e. create a visual representation of, the search target. Such visual decoding is challenging for two reasons: 1) the search target resides only in the user's mind as a subjective visual pattern and most often cannot even be described verbally by the person, and 2) it is, as of yet, unclear whether gaze fixations contain sufficient information for this task at all. We show, for the first time, that visual representations of search targets can indeed be decoded from human gaze fixations alone. We propose to first encode fixations into a semantic representation and then decode this representation into an image. We evaluate our method on a recent gaze dataset of 14 participants searching for clothing in image collages and validate the model's predictions using two human studies. Our results show that in 62% of cases participants select the category of the decoded image correctly. In our second study, we show the importance of a local gaze encoding for decoding the visual search targets of users.
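The abstract describes a two-stage pipeline: fixations are first encoded into a semantic representation and that representation is then decoded into an image. A minimal sketch of this idea, with placeholder components (mean-pooling of per-patch features standing in for deep CNN gaze pooling, and a random linear map standing in for a trained generative model; all names and dimensions here are hypothetical, not the paper's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_fixations(patch_features, fixations):
    """Encode: average the feature vectors of the fixated patches
    (a stand-in for pooling deep CNN features at fixation locations)."""
    return patch_features[fixations].mean(axis=0)

def decode_to_image(semantic_vec, decoder_weights, image_shape):
    """Decode: map the semantic vector to pixel space
    (a placeholder for a trained generative image model)."""
    flat = decoder_weights @ semantic_vec
    return flat.reshape(image_shape)

# Toy data: a collage of 16 patches, each described by a 64-d feature vector.
patch_features = rng.standard_normal((16, 64))
fixations = [3, 3, 7, 12]  # indices of the patches the user fixated on
decoder_weights = rng.standard_normal((32 * 32, 64))

z = encode_fixations(patch_features, fixations)
img = decode_to_image(z, decoder_weights, (32, 32))
print(z.shape, img.shape)  # (64,) (32, 32)
```

The second human study's finding on the importance of a *local* gaze encoding corresponds to the choice above of pooling features only at fixated patches rather than over the whole collage.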


Related research

- Predicting the Category and Attributes of Visual Search Targets Using Deep Gaze Pooling (11/27/2016)
- End-to-End Human-Gaze-Target Detection with Transformers (03/20/2022)
- Prediction of Search Targets From Fixations in Open-World Settings (02/18/2015)
- Image Features Influence Reaction Time: A Learned Probabilistic Perceptual Model for Saccade Latency (05/05/2022)
- Benchmarking human visual search computational models in natural scenes: models comparison and reference datasets (12/10/2021)
- Neural Photofit: Gaze-based Mental Image Reconstruction (08/17/2021)
- A Meta-Bayesian Model of Intentional Visual Search (06/05/2020)
