Responsive Action-based Video Synthesis

05/20/2017
by Corneliu Ilisescu, et al.

We propose technology to enable a new medium of expression, where video elements can be looped, merged, and triggered interactively. Like audio, video is easy to sample from the real world but hard to segment into clean, reusable elements. Reusing a video clip means non-linear editing and compositing with novel footage. The new context dictates how carefully a clip must be prepared, so our end-to-end approach enables previewing and easy iteration. We convert static-camera videos into loopable sequences, synthesizing them in response to simple end-user requests. This is hard because a) users want essentially semantic-level control over the synthesized video content, and b) automatic loop-finding is brittle and leaves users limited opportunity to work through problems. We propose a human-in-the-loop system where adding effort gives the user progressively more creative control. Artists help us evaluate how our trigger interfaces can be used for authoring videos and video performances.
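
To make the abstract's mention of automatic loop-finding concrete, the sketch below illustrates one common baseline idea (in the spirit of classic video-texture methods), not the paper's actual algorithm: score candidate loops in a static-camera clip by how closely the frame that would naturally follow the loop's last frame matches the loop's first frame. The file name clip.mp4, the helper function names, and the OpenCV/NumPy pipeline are assumptions made purely for illustration.

```python
# Illustrative sketch only: score candidate loops in a static-camera video
# by frame appearance. Assumes OpenCV (cv2) and NumPy are installed.
import cv2
import numpy as np


def load_grayscale_frames(path, scale=0.25):
    """Read a video and return downscaled grayscale frames as float arrays."""
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        small = cv2.resize(gray, None, fx=scale, fy=scale)
        frames.append(small.astype(np.float32))
    cap.release()
    return frames


def candidate_loops(frames, top_k=5, min_length=30):
    """Return (start, end, cost) triples for the most seamless candidate loops.

    The cut end -> start is invisible if the frame that naturally follows
    'end' looks like 'start', so the cost is the mean per-pixel difference
    between frames[end + 1] and frames[start]; lower is smoother.
    """
    n = len(frames)
    candidates = []
    for start in range(n - min_length):
        for end in range(start + min_length, n - 1):
            cost = float(np.mean(np.abs(frames[end + 1] - frames[start])))
            candidates.append((start, end, cost))
    candidates.sort(key=lambda c: c[2])
    return candidates[:top_k]


if __name__ == "__main__":
    frames = load_grayscale_frames("clip.mp4")  # hypothetical input file
    for start, end, cost in candidate_loops(frames):
        print(f"loop frames {start}..{end}, jump cost {cost:.2f}")
```

A brute-force appearance cost like this is exactly where such automation tends to be brittle (lighting drift, subtle non-repeating motion), which is the gap the paper's human-in-the-loop refinement and trigger interfaces are meant to address.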
