Video Synthesis from a Single Image and Motion Stroke

12/05/2018
by Qiyang Hu, et al.

In this paper, we propose a new method to automatically generate a video sequence from a single image and a user-provided motion stroke. Generating a video sequence from a single input image has many applications in visual content creation, but producing such a sequence manually is tedious and time-consuming even for experienced artists. Automatic methods have been proposed to address this issue, but most existing video-prediction approaches require multiple input frames. In addition, their generated sequences have limited variety, since the output is largely determined by the input frames and the user cannot impose additional constraints on the result. In our technique, users control the generated animation with a sketch stroke drawn on a single input image. We train our system so that the trajectory of the animated object follows the stroke, which makes the output both more flexible and more controllable. From a single image, users can generate a variety of video sequences corresponding to different sketch inputs. To our knowledge, ours is the first system that, given a single frame and a motion stroke, generates an animation by recurrently synthesizing the video frame by frame. An important benefit of this recurrent architecture is that it can synthesize an arbitrary number of frames. Our architecture combines an autoencoder with a generative adversarial network (GAN) to generate sharp texture images, and uses a second GAN to encourage realistic, smooth transitions between frames. We demonstrate the effectiveness of our approach on the MNIST, KTH, and Human3.6M datasets.
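
The abstract names the moving parts concretely enough to sketch: a recurrent generator conditioned on one image plus a motion stroke, a frame-level GAN for texture sharpness, and a pairwise GAN for temporal smoothness. Below is a minimal PyTorch sketch of that structure, not the authors' implementation; the module names, layer sizes, the `make_critic` helper, and the choice to rasterize the stroke into a single-channel trajectory map are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class StrokeConditionedGenerator(nn.Module):
    """Recurrent generator: encode (input frame + rasterized stroke),
    then emit frames one at a time from a convolutional recurrent
    state, so the sequence length is unbounded at inference time."""

    def __init__(self, hidden_dim=128):
        super().__init__()
        # Encoder over 4 channels: RGB frame + 1-channel stroke map.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, hidden_dim, 4, stride=2, padding=1), nn.ReLU(),
        )
        # A deliberately simple convolutional recurrence standing in
        # for the paper's recurrent core.
        self.rnn = nn.Conv2d(hidden_dim * 2, hidden_dim, 3, padding=1)
        # Decoder back to an RGB frame in [-1, 1].
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, image, stroke_map, num_frames):
        h = self.encoder(torch.cat([image, stroke_map], dim=1))
        state = torch.zeros_like(h)
        frames = []
        for _ in range(num_frames):  # arbitrary length via recurrence
            state = torch.tanh(self.rnn(torch.cat([h, state], dim=1)))
            frames.append(self.decoder(state))
        return frames

def make_critic(in_channels):
    """Shared discriminator backbone: conv stack to a single logit."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
    )

# GAN #1: scores single frames, pushing the generator toward sharp textures.
frame_disc = make_critic(in_channels=3)
# GAN #2: scores consecutive frame pairs (stacked on channels), pushing
# the generator toward realistic, smooth transitions.
transition_disc = make_critic(in_channels=6)
```

A quick usage sketch under the same assumptions; note that the recurrence is what allows an arbitrary number of frames, since each loop iteration yields one more frame from the same weights:

```python
gen = StrokeConditionedGenerator()
image = torch.randn(1, 3, 64, 64)    # the single input frame
stroke = torch.zeros(1, 1, 64, 64)   # user stroke rasterized to a mask
video = gen(image, stroke, num_frames=16)  # 16 frames, or any number
frame_score = frame_disc(video[0])                         # texture critic
pair_score = transition_disc(torch.cat(video[:2], dim=1))  # smoothness critic
```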

Related research

Deep Sketch-guided Cartoon Video Synthesis (08/10/2020)
We propose a novel framework to produce cartoon videos by fetching the c...

Controllable Video Generation through Global and Local Motion Dynamics (04/13/2022)
We present GLASS, a method for Global and Local Action-driven Sequence S...

Image2Gif: Generating Continuous Realistic Animations with Warping NODEs (05/09/2022)
Generating smooth animations from a limited number of sequential observa...

Image2GIF: Generating Cinemagraphs using Recurrent Deep Q-Networks (01/27/2018)
Given a still photograph, one can imagine how dynamic objects might move...

Animating Landscape: Self-Supervised Learning of Decoupled Motion and Appearance for Single-Image Video Synthesis (10/16/2019)
Automatic generation of a high-quality video from a single image remains...

Continuous Facial Motion Deblurring (07/14/2022)
We introduce a novel framework for continuous facial motion deblurring t...

Visual Dynamics: Probabilistic Future Frame Synthesis via Cross Convolutional Networks (07/09/2016)
We study the problem of synthesizing a number of likely future frames fr...
