Revisiting Video Saliency: A Large-scale Benchmark and a New Model

01/23/2018
by   Wenguan Wang, et al.

In this work, we contribute to video saliency research in two ways. First, we introduce a new benchmark for predicting human eye movements during dynamic scene free-viewing, which has long been needed in this field. Our dataset, named DHF1K (Dynamic Human Fixation), consists of 1K high-quality, carefully selected video sequences spanning a wide range of scenes, viewpoints, motions, object types and background complexity. Existing video saliency datasets lack the variety and generality of common dynamic scenes and fall short of covering challenging situations in unconstrained environments. In contrast, DHF1K makes a significant leap in terms of scalability, diversity and difficulty, and is expected to boost video saliency modeling. Second, we propose a novel video saliency model that augments the CNN-LSTM network architecture with an attention mechanism to enable fast, end-to-end saliency learning. The attention mechanism explicitly encodes static saliency information, allowing the LSTM to focus on learning a more flexible temporal saliency representation across successive frames. Such a design fully leverages existing large-scale static fixation datasets, avoids overfitting, and significantly improves training efficiency and testing performance. We thoroughly examine the performance of our model against state-of-the-art saliency models on three large-scale datasets (i.e., DHF1K, Hollywood2, UCF sports). Experimental results over more than 1.2K testing videos containing 400K frames demonstrate that our model outperforms other competitors.
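The abstract describes the model only at a high level. The sketch below illustrates one way such an attentive CNN-LSTM could be wired up in PyTorch: static CNN features are modulated by a learned attention map (which can be supervised with static fixation data) before a convolutional LSTM aggregates them over time into per-frame saliency maps. The backbone, channel sizes, ConvLSTM formulation and module names here are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (not the authors' code): CNN features -> static attention ->
# ConvLSTM -> per-frame saliency maps. All hyperparameters are placeholders.
import torch
import torch.nn as nn

class StaticAttention(nn.Module):
    """Predicts a per-pixel attention map from static CNN features."""
    def __init__(self, in_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1), nn.Sigmoid(),
        )

    def forward(self, feats):            # feats: (B, C, H, W)
        att = self.conv(feats)           # (B, 1, H, W), values in [0, 1]
        return feats * att + feats, att  # residual modulation keeps original features

class ConvLSTMCell(nn.Module):
    """Standard convolutional LSTM cell operating on spatial feature maps."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class AttentiveCNNLSTM(nn.Module):
    """Per-frame CNN features, attention-weighted, fed through a ConvLSTM."""
    def __init__(self, feat_ch=512, hid_ch=256):
        super().__init__()
        # A real model would use a pretrained backbone (e.g. VGG-16);
        # a small stand-in keeps this sketch self-contained.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.attention = StaticAttention(feat_ch)
        self.convlstm = ConvLSTMCell(feat_ch, hid_ch)
        self.readout = nn.Conv2d(hid_ch, 1, 1)

    def forward(self, clip):                     # clip: (B, T, 3, H, W)
        B, T, _, _, _ = clip.shape
        h = c = None
        sal, att_maps = [], []
        for t in range(T):
            feats = self.backbone(clip[:, t])
            feats, att = self.attention(feats)
            if h is None:
                h = feats.new_zeros(B, self.convlstm.hid_ch, *feats.shape[-2:])
                c = torch.zeros_like(h)
            h, c = self.convlstm(feats, h, c)
            sal.append(torch.sigmoid(self.readout(h)))
            att_maps.append(att)
        # att_maps could be supervised with static fixations, sal with video fixations
        return torch.stack(sal, dim=1), torch.stack(att_maps, dim=1)

if __name__ == "__main__":
    model = AttentiveCNNLSTM()
    frames = torch.randn(1, 4, 3, 128, 128)      # one clip of 4 RGB frames
    saliency, attention = model(frames)
    print(saliency.shape)                         # torch.Size([1, 4, 1, 32, 32])
```

In this kind of design, the attention branch can be pre-trained or jointly trained on static fixation datasets, while the ConvLSTM is trained on video fixation data, which is the general idea of reusing large static saliency corpora that the abstract highlights.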


