Actions in the Eye: Dynamic Gaze Datasets and Learnt Saliency Models for Visual Recognition

12/29/2013
by Stefan Mathe, et al.

Systems based on bag-of-words models, built from image features collected at the maxima of sparse interest point operators, have been used successfully for both object and action recognition in computer vision. While the sparse, interest-point based approach to recognition is not inconsistent with visual processing in biological systems that operate in 'saccade and fixate' regimes, the methodology and emphasis in the human and computer vision communities remain sharply distinct. Here, we make three contributions aiming to bridge this gap. First, we complement existing large-scale annotated dynamic computer vision datasets, Hollywood-2 and UCF Sports, with human eye movements collected under the ecological constraints of the visual action recognition task. To our knowledge, these are the first large-scale human eye-tracking datasets for video to be collected and made publicly available (at vision.imar.ro/eyetracking; 497,107 frames, each viewed by 16 subjects), unique in terms of (a) their large scale and computer vision relevance, (b) their dynamic video stimuli, and (c) task control, as opposed to free viewing. Second, we introduce novel sequential consistency and alignment measures, which underline the remarkable stability of patterns of visual search among subjects. Third, we leverage the significant amount of collected data to pursue studies and build automatic, end-to-end trainable computer vision systems based on human eye movements. Our studies shed light on the differences between computer vision spatio-temporal interest point sampling strategies and human fixations, and on their impact on visual recognition performance. They also demonstrate that human fixations can be accurately predicted and, when used within an end-to-end automatic system that leverages advanced computer vision practice, can lead to state-of-the-art results.
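To make the notion of inter-subject consistency concrete, below is a minimal sketch of a standard leave-one-subject-out agreement measure: blur the fixations of all-but-one subject into a dense map and score the held-out subject's fixations against random locations with an AUC criterion. This is not the paper's exact sequential consistency and alignment measures, only a common spatial baseline; the function names, the (row, col) fixation encoding, and the blur width are illustrative assumptions.

```python
# Minimal sketch (assumed names/parameters, not the paper's exact metrics):
# leave-one-subject-out spatial agreement between human fixations.
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_map(fixations, shape, sigma=25.0):
    """Blur a list of (row, col) fixations into a dense, normalized map."""
    m = np.zeros(shape, dtype=np.float64)
    for r, c in fixations:
        m[int(r), int(c)] += 1.0
    m = gaussian_filter(m, sigma)
    return m / (m.max() + 1e-12)

def loso_auc(per_subject_fixations, shape, n_random=1000, seed=0):
    """AUC for how well the other subjects' fixations predict where
    the held-out subject looked, averaged over subjects."""
    rng = np.random.default_rng(seed)
    aucs = []
    for i, held_out in enumerate(per_subject_fixations):
        others = [f for j, s in enumerate(per_subject_fixations)
                  if j != i for f in s]
        smap = fixation_map(others, shape)
        # Scores at the held-out subject's fixations (positives)
        # versus uniformly random image locations (negatives).
        pos = np.array([smap[int(r), int(c)] for r, c in held_out])
        neg = smap[rng.integers(0, shape[0], n_random),
                   rng.integers(0, shape[1], n_random)]
        # Mann-Whitney form of the AUC: fraction of (pos, neg) pairs
        # ranked correctly, counting ties as half.
        auc = ((pos[:, None] > neg[None, :]).mean()
               + 0.5 * (pos[:, None] == neg[None, :]).mean())
        aucs.append(auc)
    return float(np.mean(aucs))
```

An AUC near 0.5 would indicate chance-level agreement, while values well above 0.5 indicate that subjects fixate the same image regions, which is the kind of stability the paper's consistency analysis reports across its 16 viewers.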


