Deep Video-Based Performance Cloning

08/21/2018
by Kfir Aberman, et al.

We present a new video-based performance cloning technique. After training a deep generative network on a reference video that captures the appearance and dynamics of a target actor, we can generate videos in which this actor reenacts other performances. Both the training data and the driving performances are provided as ordinary video segments, without motion capture or depth information. Our generative model is realized as a deep neural network with two branches, both of which train the same space-time conditional generator using shared weights. One branch, responsible for learning to generate the appearance of the target actor in various poses, uses paired training data self-generated from the reference video. The second branch uses unpaired data to improve the generation of temporally coherent video renditions of unseen pose sequences. We demonstrate a variety of promising results in which our method generates temporally coherent videos for challenging scenarios where the reference and driving videos contain very different dance performances. Supplementary video: https://youtu.be/JpwsEeqNhhA.
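
The two-branch scheme from the abstract can be made concrete with a short sketch. The PyTorch code below is a minimal illustration, not the authors' implementation: the generator architecture, the loss weights, and the temporal-smoothness term standing in for the paper's adversarial losses are all assumptions made for brevity.

```python
# Minimal sketch of the two-branch training scheme described above.
# Architecture, loss weights, and the temporal surrogate are assumptions.
import torch
import torch.nn as nn

class PoseConditionedGenerator(nn.Module):
    """Maps a short temporal window of pose maps to one RGB frame of the
    target actor (a stand-in for the space-time conditional generator
    shared by both branches)."""
    def __init__(self, pose_channels: int = 3, window: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(pose_channels * window, 64, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=7, padding=3),
            nn.Tanh(),
        )

    def forward(self, pose_window: torch.Tensor) -> torch.Tensor:
        # pose_window: (B, pose_channels * window, H, W)
        return self.net(pose_window)

G = PoseConditionedGenerator()  # a single generator; both branches share it
opt = torch.optim.Adam(G.parameters(), lr=2e-4)
l1 = nn.L1Loss()

def paired_branch_loss(pose_window, target_frame):
    """Branch 1: paired data self-generated from the reference video.
    Poses extracted from the target actor's own frames give each pose
    window a ground-truth frame to reconstruct."""
    return l1(G(pose_window), target_frame)

def unpaired_branch_loss(pose_window_t, pose_window_t1):
    """Branch 2: unpaired driving poses with no ground-truth frames.
    The paper uses adversarial objectives here; this crude penalty on
    consecutive generated frames merely illustrates where a temporal
    coherence signal enters the shared generator."""
    return l1(G(pose_window_t1), G(pose_window_t).detach())

# One combined update: both branches backpropagate into the same G.
B, C, W, H, Wd = 2, 3, 3, 64, 64
pose_win = torch.randn(B, C * W, H, Wd)   # reference-video pose window
gt_frame = torch.randn(B, 3, H, Wd)       # matching reference frame
drive_t = torch.randn(B, C * W, H, Wd)    # two consecutive driving-video
drive_t1 = torch.randn(B, C * W, H, Wd)   # pose windows

loss = paired_branch_loss(pose_win, gt_frame) \
     + 0.1 * unpaired_branch_loss(drive_t, drive_t1)
opt.zero_grad()
loss.backward()
opt.step()
```

The essential point mirrored here is that both loss terms update a single set of generator weights, so appearance learning from the reference video and temporal coherence on unseen driving poses reinforce each other.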

Related research

Human Motion Transfer from Poses in the Wild (04/07/2020)
In this paper, we tackle the problem of human motion transfer, where we ...

Pose Guided Human Video Generation (07/30/2018)
Due to the emergence of Generative Adversarial Networks, video synthesis...

Video Diffusion Models (04/07/2022)
Generating temporally coherent high fidelity video is an important miles...

Cross-identity Video Motion Retargeting with Joint Transformation and Synthesis (10/02/2022)
In this paper, we propose a novel dual-branch Transformation-Synthesis n...

Flexible Diffusion Modeling of Long Videos (05/23/2022)
We present a framework for video modeling based on denoising diffusion p...

Disentangling Content and Motion for Text-Based Neural Video Manipulation (11/05/2022)
Giving machines the ability to imagine possible new objects or scenes fr...

Video-ReTime: Learning Temporally Varying Speediness for Time Remapping (05/11/2022)
We propose a method for generating a temporally remapped video that matc...