Atari-HEAD: Atari Human Eye-Tracking and Demonstration Dataset

03/15/2019
by Ruohan Zhang, et al.

We introduce a large-scale dataset of human actions and eye movements recorded while playing Atari video games. The dataset currently contains 44 hours of gameplay data from 16 games, totaling 2.97 million demonstrated actions. Human subjects played the games in a frame-by-frame manner, giving them enough decision time to produce near-optimal actions. The dataset could potentially be used for research in imitation learning, reinforcement learning, and visual saliency.
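The abstract points to an imitation-learning use case: pairs of game frames and demonstrated actions can be used directly as a supervised behavior-cloning dataset. The sketch below is a minimal illustration of that idea, not the authors' pipeline; the file names (frames.npy, actions.npy), array shapes, and the 18-way discrete action space are assumptions for illustration only.

import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical preprocessed arrays, not the dataset's actual format:
#   frames.npy  -> uint8, shape (N, 84, 84), grayscale game frames
#   actions.npy -> int64, shape (N,), action indices in [0, 18)
frames = np.load("frames.npy")
actions = np.load("actions.npy")

x = torch.from_numpy(frames).float().unsqueeze(1) / 255.0   # (N, 1, 84, 84)
y = torch.from_numpy(actions).long()
loader = DataLoader(TensorDataset(x, y), batch_size=64, shuffle=True)

# Small convolutional policy: frame -> logits over 18 discrete actions.
policy = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(64 * 9 * 9, 256), nn.ReLU(),
    nn.Linear(256, 18),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Behavior cloning: fit the policy to the demonstrated actions.
for epoch in range(5):
    for batch_x, batch_y in loader:
        loss = loss_fn(policy(batch_x), batch_y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()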


Related research

12/05/2020 · Selective Eye-gaze Augmentation To Enhance Imitation Learning In Atari Games
This paper presents the selective use of eye-gaze information in learnin...

05/10/2020 · Optimal control of eye-movements during visual search
We study the problem of optimal oculomotor control during the execution ...

04/15/2019 · Saliency Prediction on Omnidirectional Images with Generative Adversarial Imitation Learning
When watching omnidirectional images (ODIs), subjects can access differe...

10/07/2019 · CrowdFix: An Eyetracking Dataset of Real Life Crowd Videos
Understanding human visual attention and saliency is an integral part of...

12/29/2013 · Actions in the Eye: Dynamic Gaze Datasets and Learnt Saliency Models for Visual Recognition
Systems based on bag-of-words models from image features collected at ma...

06/23/2022 · Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos
Pretraining on noisy, internet-scale datasets has been heavily studied a...

12/23/2020 · Augmenting Policy Learning with Routines Discovered from a Single Demonstration
Humans can abstract prior knowledge from very little data and use it to ...
