Everybody Dance Now

08/22/2018
by Caroline Chan, et al.

This paper presents a simple method for "do as I do" motion transfer: given a source video of a person dancing, we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves. We pose this problem as a per-frame image-to-image translation with spatio-temporal smoothing. Using pose detections as an intermediate representation between source and target, we learn a mapping from pose images to a target subject's appearance. We adapt this setup for temporally coherent video generation, including realistic face synthesis. Our video demo can be found at https://youtu.be/PCBTZh41Ris.
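The pipeline the abstract describes (detect pose in a source frame, render it as a pose image, then translate that pose image into a frame of the target subject) can be illustrated with a minimal conditional-GAN training step. The sketch below is an illustrative assumption in PyTorch, not the authors' released code: the tiny generator and discriminator, the L1 reconstruction term, and the stand-in random tensors are all placeholders, and the paper's temporal conditioning and dedicated face GAN are omitted.

```python
# Minimal sketch of a per-frame pose-to-appearance mapping, pix2pix style.
# All architecture choices here are illustrative assumptions, not the
# authors' exact networks or losses.
import torch
import torch.nn as nn

class PoseToImageGenerator(nn.Module):
    """Maps a rendered pose image (3xHxW stick figure) to a target-subject frame."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, pose):
        return self.net(pose)

class PatchDiscriminator(nn.Module):
    """Scores (pose, frame) pairs, as in conditional image-to-image translation."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch * 2, 1, 4, padding=1),  # patch-level real/fake logits
        )
    def forward(self, pose, frame):
        return self.net(torch.cat([pose, frame], dim=1))

def train_step(G, D, opt_g, opt_d, pose, frame, l1_weight=10.0):
    bce = nn.BCEWithLogitsLoss()
    # Discriminator: real (pose, frame) pairs vs. generated pairs.
    fake = G(pose)
    d_real = D(pose, frame)
    d_fake = D(pose, fake.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: fool D, plus an L1 reconstruction term (the paper also uses
    # perceptual losses and a face-specific GAN, omitted in this sketch).
    d_fake = D(pose, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + l1_weight * (fake - frame).abs().mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

if __name__ == "__main__":
    G, D = PoseToImageGenerator(), PatchDiscriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    # Stand-in tensors: a batch of pose renderings and matching target frames.
    pose = torch.rand(2, 3, 64, 64) * 2 - 1
    frame = torch.rand(2, 3, 64, 64) * 2 - 1
    print(train_step(G, D, opt_g, opt_d, pose, frame))
```

In the paper's setting, the (pose, frame) pairs would come from the few minutes of footage of the target subject, with poses extracted by an off-the-shelf detector; temporal coherence is then encouraged by conditioning each generated frame on its predecessor.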

Related Research

04/10/2021 · Do as we do: Multiple Person Video-To-Video Transfer
Our goal is to transfer the motion of real people from a source video to...

10/21/2019 · DwNet: Dense warp-based network for pose-guided human video generation
Generation of realistic high-resolution videos of human subjects is a ch...

08/19/2019 · Video synthesis of human upper body with realistic face
This paper presents a generative adversarial learning-based human upper ...
03/31/2020 · TransMoMo: Invariance-Driven Unsupervised Video Motion Retargeting
We present a lightweight video motion retargeting approach TransMoMo tha...

11/30/2020 · Animating Pictures with Eulerian Motion Fields
In this paper, we demonstrate a fully automatic method for converting a ...

12/02/2020 · Single-Shot Freestyle Dance Reenactment
The task of motion transfer between a source dancer and a target person ...
