Learning Video-Story Composition via Recurrent Neural Network

01/31/2018
by Guangyu Zhong, et al.

In this paper, we propose a learning-based method to compose a video-story from a group of video clips that describe an activity or experience. We learn the coherence between video clips from real videos via a Recurrent Neural Network (RNN) that jointly incorporates spatio-temporal semantics and motion dynamics to generate smooth and relevant compositions. We further rearrange the RNN's output so that the overall video-story follows a storyline structure, using a submodular ranking optimization process. Experimental results on the video-story dataset show that the proposed algorithm outperforms the state-of-the-art approach.
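The abstract names two stages: an RNN that scores how coherently one clip follows another, and a ranking step that rearranges the clips into a storyline. The sketch below shows how such a pipeline could be wired up in PyTorch. It is a minimal illustration under assumptions, not the authors' implementation: CoherenceRNN, greedy_order, the feature dimensions, and the pairwise-scoring scheme are all hypothetical, and the greedy loop merely stands in for the paper's submodular ranking optimization.

```python
# Illustrative sketch only: a pairwise clip-coherence RNN plus a greedy
# rearrangement. Names and dimensions are assumptions, not the paper's code.
import torch
import torch.nn as nn

class CoherenceRNN(nn.Module):
    """Scores how smoothly clip j follows clip i, given per-clip frame
    features that bundle appearance semantics and motion dynamics."""
    def __init__(self, feat_dim=512, hidden_dim=256):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, clip_i, clip_j):
        # clip_*: (batch, frames, feat_dim) pooled frame features.
        pair = torch.cat([clip_i, clip_j], dim=1)    # concatenate in time
        _, h = self.gru(pair)                        # final hidden state
        return self.score(h.squeeze(0)).squeeze(-1)  # coherence logit

def greedy_order(pairwise, start=0):
    """Greedy stand-in for the submodular ranking step: repeatedly append
    the unused clip with the highest coherence to the current tail."""
    n = pairwise.size(0)
    order, used = [start], {start}
    while len(order) < n:
        scores = pairwise[order[-1]].clone()
        scores[list(used)] = float('-inf')   # mask clips already placed
        nxt = int(scores.argmax())
        order.append(nxt)
        used.add(nxt)
    return order

if __name__ == "__main__":
    torch.manual_seed(0)
    n_clips, frames, feat_dim = 5, 8, 512
    clips = torch.randn(n_clips, frames, feat_dim)   # dummy clip features
    model = CoherenceRNN(feat_dim)
    with torch.no_grad():
        pairwise = torch.full((n_clips, n_clips), float('-inf'))
        for i in range(n_clips):
            for j in range(n_clips):
                if i != j:
                    pairwise[i, j] = model(clips[i:i+1], clips[j:j+1]).item()
    print(greedy_order(pairwise))  # e.g. [0, 3, 1, 4, 2]
```

In practice the coherence model would be trained on real videos (ordered clips as positives, shuffled ones as negatives), and the rearrangement would optimize the paper's submodular objective rather than this greedy heuristic.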


Related research

04/28/2019 · Hierarchical Recurrent Neural Network for Video Summarization
Exploiting the temporal dependency among video frames or subshots is ver...

07/25/2018 · Video Storytelling
Bridging vision and natural language is a longstanding goal in computer ...

05/12/2022 · Performing Video Frame Prediction of Microbial Growth with a Recurrent Neural Network
A Recurrent Neural Network (RNN) was used to perform video frame predict...

07/29/2018 · Story Understanding in Video Advertisements
In order to resonate with the viewers, many video advertisements explore...

04/14/2016 · Learning Visual Storylines with Skipping Recurrent Neural Networks
What does a typical visit to Paris look like? Do people first take photo...

01/30/2023 · Dynamic Storyboard Generation in an Engine-based Virtual Environment for Video Production
Amateurs working on mini-films and short-form videos usually spend lots ...

10/31/2018 · Picking Apart Story Salads
During natural disasters and conflicts, information about what happened ...
