Decoding Attention from Gaze: A Benchmark Dataset and End-to-End Models

11/20/2022
by Karan Uppal, et al.

Eye-tracking has the potential to provide rich behavioral data about human cognition in ecologically valid environments. However, analyzing this rich data is often challenging. Most automated analyses are specific to simplistic artificial visual stimuli with well-separated, static regions of interest, while most analyses in the context of complex visual stimuli, such as most natural scenes, rely on laborious and time-consuming manual annotation. This paper studies the use of computer vision tools for "attention decoding", the task of assessing the locus of a participant's overt visual attention over time. We provide a publicly available Multiple Object Eye-Tracking (MOET) dataset, consisting of gaze data from participants tracking specific objects, annotated with labels and bounding boxes, in crowded real-world videos, for training and evaluating attention decoding algorithms. We also propose two end-to-end deep learning models for attention decoding and compare these to state-of-the-art heuristic methods.
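To make the task concrete, the sketch below shows one simple heuristic baseline for attention decoding (a hypothetical illustration, not the paper's proposed models or the MOET annotation scheme): given a gaze sample and the labeled bounding boxes in a video frame, assign the sample to the object whose box contains it, falling back to the nearest box center otherwise.

```python
# Hypothetical heuristic attention-decoding baseline: map a single gaze
# sample to one of the annotated objects in a frame. Assumes per-frame
# bounding boxes with labels and gaze coordinates in the same pixel space.

from dataclasses import dataclass
import math


@dataclass
class Box:
    label: str
    x1: float
    y1: float
    x2: float
    y2: float

    def contains(self, x: float, y: float) -> bool:
        return self.x1 <= x <= self.x2 and self.y1 <= y <= self.y2

    def center_distance(self, x: float, y: float) -> float:
        cx, cy = (self.x1 + self.x2) / 2, (self.y1 + self.y2) / 2
        return math.hypot(x - cx, y - cy)


def decode_attention(gaze_x: float, gaze_y: float, boxes: list[Box]) -> str:
    """Return the label of the object most plausibly attended at this gaze sample."""
    hits = [b for b in boxes if b.contains(gaze_x, gaze_y)]
    if hits:
        # If several boxes overlap at the gaze point, prefer the one whose
        # center is closest to the gaze sample.
        return min(hits, key=lambda b: b.center_distance(gaze_x, gaze_y)).label
    # Otherwise fall back to the object whose box center is nearest.
    return min(boxes, key=lambda b: b.center_distance(gaze_x, gaze_y)).label


# Example frame with two annotated objects and one gaze sample.
frame_boxes = [Box("car", 100, 80, 220, 160), Box("pedestrian", 300, 90, 340, 200)]
print(decode_attention(130.0, 120.0, frame_boxes))  # -> "car"
```

End-to-end models like those proposed in the paper would instead learn this mapping directly from video and gaze input, rather than relying on hand-crafted geometric rules.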

Related research

- Towards End-to-end Video-based Eye-Tracking (07/26/2020)
  Estimating eye-gaze from images alone is a challenging task, in large pa...
- More Than Meets the Eye: Analyzing Anesthesiologists' Visual Attention in the Operating Room Using Deep Learning Models (08/10/2023)
  Patient's vital signs, which are displayed on monitors, make the anesthe...
- Actions in the Eye: Dynamic Gaze Datasets and Learnt Saliency Models for Visual Recognition (12/29/2013)
  Systems based on bag-of-words models from image features collected at ma...
- A Computer Vision System for Attention Mapping in SLAM based 3D Models (05/06/2013)
  The study of human factors in the frame of interaction studies has been ...
- An Integrated System for 3D Gaze Recovery and Semantic Analysis of Human Attention (07/30/2013)
  This work describes a computer vision system that enables pervasive mapp...
- Gaze-contingent decoding of human navigation intention on an autonomous wheelchair platform (03/04/2021)
  We have pioneered the Where-You-Look-Is Where-You-Go approach to control...
- Overt visual attention on rendered 3D objects (05/24/2019)
  This work covers multiple aspects of overt visual attention on 3D render...
