Automatic video scene segmentation based on spatial-temporal clues and rhythm

12/15/2014
by Walid Mahdi, et al.

With ever-increasing computing power and data-storage capacity, the potential for large digital video libraries is growing rapidly. However, the wide use of video is currently limited by its opaque nature: a user who must browse and retrieve content sequentially spends far too much time locating segments of interest within a video. Providing a convenient and efficient environment for video storage and retrieval, especially for content-based searching as it exists in traditional text-based database systems, has therefore been the focus of substantial recent effort by a large research community. In this paper, we propose a new automatic video scene segmentation method that exploits two main video features: the spatial-temporal relationship between shots and the rhythm of shots. The experimental evidence we obtained from an 80-minute video showed that our prototype provides very high accuracy for video scene segmentation.
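The abstract does not detail the segmentation algorithm itself, so the following Python sketch only illustrates the general idea of combining two shot-level cues, a spatial-temporal (visual) coherence cue and a shot-duration "rhythm" cue, to group consecutive shots into scenes. The Shot class, the histogram-based similarity function, and all thresholds are hypothetical placeholders, not the authors' method.

```python
# Minimal sketch: greedy grouping of shots into scenes using two cues,
# (1) visual similarity between neighbouring shots (spatial-temporal clue) and
# (2) consistency of shot durations (rhythm clue).
# All names and thresholds below are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Shot:
    start: float          # shot start time in seconds
    end: float            # shot end time in seconds
    histogram: List[float]  # e.g. a colour histogram summarising the shot

    @property
    def duration(self) -> float:
        return self.end - self.start


def histogram_intersection(a: List[float], b: List[float]) -> float:
    """Toy visual-similarity measure between two shot histograms (0..1)."""
    return sum(min(x, y) for x, y in zip(a, b)) / (sum(a) or 1.0)


def group_shots_into_scenes(
    shots: List[Shot],
    similarity: Callable[[Shot, Shot], float] = lambda s, t: histogram_intersection(
        s.histogram, t.histogram
    ),
    sim_threshold: float = 0.6,     # hypothetical visual-coherence threshold
    rhythm_tolerance: float = 2.0,  # hypothetical max ratio of consecutive shot durations
) -> List[List[Shot]]:
    """Merge consecutive shots into the same scene while both cues agree."""
    scenes: List[List[Shot]] = []
    current: List[Shot] = []
    for shot in shots:
        if not current:
            current.append(shot)
            continue
        prev = current[-1]
        visually_coherent = similarity(prev, shot) >= sim_threshold
        ratio = max(prev.duration, shot.duration) / max(
            min(prev.duration, shot.duration), 1e-6
        )
        rhythm_coherent = ratio <= rhythm_tolerance
        if visually_coherent and rhythm_coherent:
            current.append(shot)    # both cues consistent: same scene
        else:
            scenes.append(current)  # cue break: close the scene, start a new one
            current = [shot]
    if current:
        scenes.append(current)
    return scenes
```

In practice, a method of this kind would replace the toy similarity with a richer spatial-temporal model and learn the rhythm tolerance from the video, but the sketch shows how the two cues can jointly decide scene boundaries.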
