Unsupervised uncertainty estimation using spatiotemporal cues in video saliency detection

01/06/2019
by   Tariq Alshawi, et al.

In this paper, we address the problem of quantifying the reliability of computational saliency for videos, which can be used to improve saliency-based video processing and to enable more reliable performance and risk assessment of such processing. Our approach is twofold. First, we explore spatial correlations in both the saliency map and the eye-fixation map. Then, we learn the spatiotemporal correlations that define a reliable saliency map. We first study spatiotemporal eye-fixation data from a public dataset and investigate a common feature of human visual attention: the correlation in saliency between a pixel and its direct neighbors. Based on this study, we develop an algorithm that estimates a pixel-wise uncertainty map, reflecting our confidence in the associated computational saliency map, by relating a pixel's saliency to the saliency of its neighbors. To estimate such uncertainties, we measure the divergence of a pixel, in a saliency map, from its local neighborhood. Additionally, we propose a systematic procedure for evaluating estimation performance by explicitly computing an uncertainty ground truth as a function of a given saliency map and the eye fixations of human subjects. In our experiments, we explore multiple definitions of locality and neighborhood in spatiotemporal video signals, and we examine the relationship between the parameters of the proposed algorithm and the content of the videos. The proposed algorithm is unsupervised, making it well suited for generalization to most natural videos; it is also computationally efficient and flexible enough to be customized to specific video content. Experiments using three publicly available video datasets show that the proposed algorithm outperforms state-of-the-art uncertainty estimation methods, with improvements in accuracy of up to 63% in practical situations.
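The core idea, as described in the abstract, is to score each pixel's uncertainty by how much its saliency diverges from its local spatiotemporal neighborhood. The sketch below is a minimal, hypothetical illustration of that idea (it is not the authors' published implementation): the neighborhood is taken to be a small 3x3x3 window over frames, rows, and columns, and divergence is measured simply as the absolute difference from the neighborhood mean.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def uncertainty_map(saliency, size=(3, 3, 3)):
    """Sketch of neighborhood-divergence uncertainty estimation.

    `saliency` is assumed to be a (frames, height, width) array of
    saliency values in [0, 1]; `size` defines the spatiotemporal
    neighborhood (both are illustrative choices, not from the paper).
    """
    # Local spatiotemporal mean of the saliency map.
    local_mean = uniform_filter(saliency.astype(float), size=size, mode="nearest")
    # Absolute divergence from the neighborhood: a pixel whose saliency
    # disagrees with its neighbors gets high uncertainty (low confidence).
    return np.abs(saliency - local_mean)


# Toy example: a 4-frame, 8x8 saliency volume with a single isolated
# salient pixel, which should receive the highest uncertainty score.
sal = np.zeros((4, 8, 8))
sal[2, 4, 4] = 1.0
unc = uncertainty_map(sal)
```

An isolated spike like `sal[2, 4, 4]` diverges strongly from its (mostly zero) neighborhood, so it is flagged as unreliable, while smooth salient regions would score low. Other divergence measures or neighborhood shapes could be substituted in the same frame.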


Related research

01/30/2019
Understanding spatial correlation in eye-fixation maps for visual attention in videos
In this paper, we present an analysis of recorded eye-fixation data from...

03/24/2015
Unsupervised Video Analysis Based on a Spatiotemporal Saliency Detector
Visual saliency, which predicts regions in the field of view that draw t...

09/03/2018
Learning Saliency Prediction From Sparse Fixation Pixel Map
Ground truth for saliency prediction datasets consists of two types of m...

01/31/2013
Fast non parametric entropy estimation for spatial-temporal saliency method
This paper formulates bottom-up visual saliency as center surround condi...

03/11/2016
Learning Gaze Transitions from Depth to Improve Video Saliency Estimation
In this paper we introduce a novel Depth-Aware Video Saliency approach t...

11/14/2018
How Drones Look: Crowdsourced Knowledge Transfer for Aerial Video Saliency Prediction
In ground-level platforms, many saliency models have been developed to p...

04/10/2019
Spatiotemporal Knowledge Distillation for Efficient Estimation of Aerial Video Saliency
The performance of video saliency estimation techniques has achieved sig...
