Towards Data-Driven Automatic Video Editing

07/17/2019
by Sergey Podlesnyy, et al.

Automatic video editing involves at least two steps: selecting the most valuable footage in terms of visual quality and the importance of the filmed action, and cutting that footage into a brief, coherent visual story that is interesting to watch. Both steps are implemented here in a purely data-driven manner. Visual semantic and aesthetic features are extracted by an ImageNet-trained convolutional neural network, and the editing controller is trained with an imitation learning algorithm. As a result, at test time the controller shows signs of observing basic cinematographic editing rules learned from a corpus of motion-picture masterpieces.
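The abstract does not include code, so the following is a minimal sketch of the described pipeline under stated assumptions: an ImageNet-pretrained ResNet-18 stands in for the feature extractor, and a small behavioral-cloning classifier stands in for the imitation-learned editing controller. The network choices, the keep/cut action space, and the helper names are illustrative assumptions, not the authors' implementation.

```python
# Sketch (not the authors' code): per-frame semantic features from an
# ImageNet-pretrained CNN, plus a behavioral-cloning controller that
# imitates expert keep/cut decisions for each shot.
import torch
import torch.nn as nn
from torchvision import models

# Feature extractor: ImageNet-pretrained ResNet-18 with the classifier removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()          # output: 512-d feature per frame
backbone.eval()

@torch.no_grad()
def extract_features(frames):
    """frames: (N, 3, 224, 224) normalized tensor -> (N, 512) features."""
    return backbone(frames)

# Editing controller: predicts keep (1) vs. cut (0) from a shot's feature vector.
controller = nn.Sequential(
    nn.Linear(512, 128), nn.ReLU(),
    nn.Linear(128, 2),
)

def train_step(features, expert_actions, optimizer):
    """One behavioral-cloning step: match the expert editor's decisions."""
    logits = controller(features)
    loss = nn.functional.cross_entropy(logits, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Illustrative usage: expert_actions would be 0/1 labels derived from cuts
# observed in professionally edited films.
optimizer = torch.optim.Adam(controller.parameters(), lr=1e-3)
```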
