DAVE: A Deep Audio-Visual Embedding for Dynamic Saliency Prediction

05/25/2019
by Hamed R. Tavakoli, et al.

This paper presents DAVE, a conceptually simple and effective Deep Audio-Visual Embedding for dynamic saliency prediction. Several behavioral studies have shown a strong relationship between auditory and visual cues in guiding gaze during free viewing of scenes. Existing video saliency models, however, consider only visual cues when predicting saliency over videos and neglect the auditory information that is ubiquitous in dynamic scenes. We propose a multimodal saliency model that exploits both audio and visual information to predict saliency in videos. Our model consists of a two-stream encoder and a decoder. First, auditory and visual information are mapped into a feature space using 3D Convolutional Neural Networks (3D CNNs). A decoder then combines the two feature sets and maps them to a final saliency map. To train such a model, data from several eye-tracking datasets containing both video and audio are pooled together. We further categorize the videos into 'social', 'nature', and 'miscellaneous' classes to analyze the models over different content types. Several analyses show that our audio-visual model significantly outperforms video-only models on all metrics, both overall and within individual categories. A contextual analysis of model performance relative to the location of the sound source reveals that the audio-visual model behaves similarly to humans, attending to the sound source's location. Our findings demonstrate that audio is an important signal that can boost video saliency prediction and bring models closer to human performance.
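The abstract describes a two-stream design: each modality is encoded into a feature space, and a decoder fuses the two feature sets into a saliency map. The following is a minimal NumPy sketch of that data flow only; the tensor shapes, random linear projections, and softmax-style decoder are illustrative assumptions, not the paper's actual 3D-CNN architecture or trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Stand-in for a 3D-CNN encoder: flatten the clip, project, ReLU."""
    return np.maximum(x.reshape(-1) @ w, 0.0)

def decode(feats, w, h=8, width=8):
    """Stand-in decoder: map fused features to a normalized saliency map."""
    logits = (feats @ w).reshape(h, width)
    m = np.exp(logits - logits.max())
    return m / m.sum()  # non-negative and sums to 1, like a fixation-density map

# Toy inputs (single-channel video frames, log-mel audio frames; shapes assumed)
video_clip = rng.standard_normal((16, 32, 32))  # T x H x W
audio_clip = rng.standard_normal((16, 64))      # T x mel-bins

# Random projection weights standing in for the two encoders and the decoder
wv = rng.standard_normal((video_clip.size, 128)) * 0.01
wa = rng.standard_normal((audio_clip.size, 128)) * 0.01
wd = rng.standard_normal((256, 64)) * 0.01

# Two-stream encoding, feature fusion by concatenation, then decoding
fused = np.concatenate([encode(video_clip, wv), encode(audio_clip, wa)])
saliency = decode(fused, wd)
print(saliency.shape)  # (8, 8)
```

The fusion step here is plain concatenation of the two feature vectors, which matches the abstract's "combines the features" at the level of data flow; the paper's decoder is a learned network rather than a single linear map.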



