CrowdFix: An Eyetracking Dataset of Real Life Crowd Videos

10/07/2019
by Memoona Tahira, et al.

Understanding human visual attention and saliency is an integral part of vision research. In this context, there is an ever-present need for fresh and diverse benchmark datasets, particularly for insight into special use cases such as crowded scenes. We contribute to this end by: (1) reviewing the dynamics behind saliency and crowds; (2) using eye tracking to create a dynamic human eye fixation dataset over a new set of crowd videos gathered from the Internet, annotated into three distinct density levels; and (3) evaluating state-of-the-art saliency models on our dataset to identify possible improvements for the design and creation of a more robust saliency model.
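The abstract does not state which evaluation metrics are used, so the sketch below is illustrative only: it computes two metrics common in the video saliency literature, Normalized Scanpath Saliency (NSS) and linear Correlation Coefficient (CC), for a single frame's predicted saliency map against recorded fixations. The function names, array shapes, and NumPy-based implementation are assumptions, not details taken from the paper.

# Illustrative sketch: scoring a predicted saliency map against eye fixation data.
# Metric choice (NSS, CC) and conventions are assumptions, not from the CrowdFix paper.
import numpy as np

def nss(saliency_map: np.ndarray, fixation_map: np.ndarray) -> float:
    """NSS: mean of the z-scored saliency map at fixated pixel locations.

    saliency_map: float array (H, W), model prediction.
    fixation_map: binary array (H, W), 1 at pixels fixated by observers.
    """
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
    return float(s[fixation_map.astype(bool)].mean())

def cc(saliency_map: np.ndarray, fixation_density: np.ndarray) -> float:
    """CC: Pearson correlation between the predicted map and a ground-truth
    fixation density map (fixations blurred with a Gaussian)."""
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
    g = (fixation_density - fixation_density.mean()) / (fixation_density.std() + 1e-8)
    return float((s * g).mean())

if __name__ == "__main__":
    # Random data standing in for one video frame and its fixation ground truth.
    rng = np.random.default_rng(0)
    pred = rng.random((360, 640))            # model saliency map
    fix = rng.random((360, 640)) > 0.999     # sparse binary fixation map
    density = rng.random((360, 640))         # fixation density map
    print("NSS:", nss(pred, fix), "CC:", cc(pred, density))

In practice such per-frame scores would be averaged over all frames and videos, and typically reported separately per crowd density level to see where models break down.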
