
A Learning-Based Visual Saliency Fusion Model for High Dynamic Range Video (LBVS-HDR)

by Amin Banitalebi-Dehkordi, et al.

Saliency prediction for Standard Dynamic Range (SDR) videos has been well explored over the last decade. However, only limited studies are available on Visual Attention Models (VAMs) for High Dynamic Range (HDR) video. Since the characteristics of HDR content in terms of dynamic range and color gamut differ substantially from those of SDR content, it is essential to identify the importance of the different saliency attributes of HDR videos when designing a VAM, and to understand how to combine these features. To this end, we propose a learning-based visual saliency fusion method for HDR content (LBVS-HDR) that combines various visual saliency features. In our approach, various conspicuity maps are first extracted from the HDR data; a Random Forests model, trained on data collected from an eye-tracking experiment, then fuses these conspicuity maps into a single saliency map. Performance evaluations demonstrate the superiority of the proposed fusion method over other existing fusion methods.
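The fusion step described above can be sketched as a per-pixel regression: each pixel contributes a feature vector built from the individual conspicuity maps, and a Random Forests regressor is trained to predict the fixation density measured in the eye-tracking experiment. The sketch below is a minimal illustration of that idea using scikit-learn; the map names, shapes, and the random data standing in for conspicuity maps and fixation data are assumptions, not the paper's actual features or dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical setup: n_maps conspicuity maps of size H x W
# (e.g. intensity, color, motion, brightness), plus a fixation-density
# map of the same size obtained from an eye-tracking experiment.
H, W, n_maps = 32, 32, 4
rng = np.random.default_rng(0)
conspicuity_maps = rng.random((n_maps, H, W))   # stand-in feature maps
fixation_density = rng.random((H, W))           # stand-in ground truth

# One training sample per pixel: feature vector = values of all maps there.
X = conspicuity_maps.reshape(n_maps, -1).T      # shape (H*W, n_maps)
y = fixation_density.ravel()                    # shape (H*W,)

# Train the Random Forests fusion model on the eye-tracking targets.
model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(X, y)

# Applying the model to the same feature maps yields the fused saliency map.
saliency = model.predict(X).reshape(H, W)
```

In practice the model would be trained on pixels pooled from many HDR frames and viewers, and then applied to unseen frames; the learned feature weighting replaces hand-tuned linear combinations of conspicuity maps.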

