ViDaS Video Depth-aware Saliency Network

05/19/2023
by Ioanna Diamanti, et al.

We introduce ViDaS, a two-stream, fully convolutional Video, Depth-Aware Saliency network that addresses attention modeling "in-the-wild" via saliency prediction in videos. Contrary to existing visual saliency approaches that use only RGB frames as input, our network also employs depth as an additional modality. The network consists of two visual streams, one for the RGB frames and one for the depth frames. Both streams follow an encoder-decoder approach and are fused to obtain a final saliency map. The network is trained end-to-end and is evaluated on a variety of databases with eye-tracking data, covering a wide range of video content. Since the publicly available datasets do not contain depth, we estimate it using three different state-of-the-art methods to enable comparisons and deeper insight. Our method outperforms state-of-the-art models in most cases, as well as our RGB-only variant, which indicates that depth can be beneficial for accurately estimating saliency in videos displayed on a 2D screen. Depth has been widely used to assist salient object detection, where it has proven very beneficial. Our problem, however, differs significantly from salient object detection, since it is not restricted to specific salient objects but predicts human attention in a more general sense. The two problems have not only different objectives but also different ground-truth data and evaluation metrics. To the best of our knowledge, this is the first competitive deep learning video saliency estimation approach that combines both RGB and depth features to address the general problem of saliency estimation "in-the-wild". The code will be publicly released.
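Since the code has not yet been released, the two-stream design described in the abstract can only be illustrated with a rough PyTorch sketch. The layer widths, the concatenation-plus-1x1-convolution fusion, and the use of per-frame 2D convolutions below are all assumptions made for brevity; the actual ViDaS network operates on video clips and its exact architecture is specified in the paper.

```python
# Minimal sketch of a two-stream RGB + depth saliency network (illustrative only).
# Channel widths, encoder/decoder depth, and the late fusion scheme are assumptions,
# not the published ViDaS architecture.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Simple conv -> BN -> ReLU block shared by both streams.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class EncoderDecoderStream(nn.Module):
    """One fully convolutional encoder-decoder stream (RGB or depth)."""

    def __init__(self, in_channels):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_block(in_channels, 32), nn.MaxPool2d(2),
            conv_block(32, 64), nn.MaxPool2d(2),
            conv_block(64, 128),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            conv_block(128, 64),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            conv_block(64, 32),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


class TwoStreamSaliencyNet(nn.Module):
    """RGB stream + depth stream, fused into a single saliency map."""

    def __init__(self):
        super().__init__()
        self.rgb_stream = EncoderDecoderStream(in_channels=3)
        self.depth_stream = EncoderDecoderStream(in_channels=1)
        # Fusion by concatenation followed by a 1x1 convolution (an assumption).
        self.fusion = nn.Conv2d(64, 1, kernel_size=1)

    def forward(self, rgb, depth):
        feats = torch.cat([self.rgb_stream(rgb), self.depth_stream(depth)], dim=1)
        return torch.sigmoid(self.fusion(feats))  # per-pixel saliency in [0, 1]


if __name__ == "__main__":
    net = TwoStreamSaliencyNet()
    rgb = torch.randn(1, 3, 224, 224)    # RGB frame
    depth = torch.randn(1, 1, 224, 224)  # estimated depth map
    print(net(rgb, depth).shape)         # torch.Size([1, 1, 224, 224])
```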


Related research

- STAViS: Spatio-Temporal AudioVisual Saliency Network (01/09/2020). We introduce STAViS, a spatio-temporal audiovisual saliency network that...
- Learning Gaze Transitions from Depth to Improve Video Saliency Estimation (03/11/2016). In this paper we introduce a novel Depth-Aware Video Saliency approach t...
- Adaptive Fusion for RGB-D Salient Object Detection (01/05/2019). RGB-D salient object detection aims to identify the most visually distin...
- Synergistic saliency and depth prediction for RGB-D saliency detection (07/03/2020). Depth information available from an RGB-D camera can be useful in segmen...
- A positive feedback method based on F-measure value for Salient Object Detection (04/28/2023). The majority of current salient object detection (SOD) models are focuse...
- BTS-Net: Bi-directional Transfer-and-Selection Network For RGB-D Salient Object Detection (04/05/2021). Depth information has been proved beneficial in RGB-D salient object det...
- Saliency-guided video classification via adaptively weighted learning (03/23/2017). Video classification is productive in many practical applications, and t...
