Image Comes Dancing with Collaborative Parsing-Flow Video Synthesis

10/27/2021
by   Bowen Wu, et al.

Transferring human motion from a source to a target person holds great potential for computer vision and graphics applications. A crucial step is to manipulate sequential future motion while retaining the appearance characteristics of the target. Previous work has either relied on crafted 3D human models or trained a separate model for each target person, which is not scalable in practice. This work studies a more general setting, in which we aim to learn a single model, named the Collaborative Parsing-Flow Network (CPF-Net), that transfers motion from a source video to any target person given only one image of that person. The paucity of information about the target person makes it particularly challenging to faithfully preserve the appearance under varying designated poses. To address this issue, CPF-Net integrates structured human parsing and appearance flow to guide realistic foreground synthesis, which is then merged into the background by a spatio-temporal fusion module. In particular, CPF-Net decouples the problem into three stages: human parsing sequence generation, foreground sequence generation, and final video generation. The parsing generation stage captures both the pose and the body structure of the target; the appearance flow helps preserve fine details in the synthesized frames; and their integration effectively guides the generation of video frames with realistic appearance. Finally, a dedicatedly designed fusion network ensures temporal coherence. We further collect a large set of human dancing videos to push this research field forward. Both quantitative and qualitative results show that our method substantially improves over previous approaches and generates appealing, photo-realistic target videos from any input person image. All source code and the dataset will be released at https://github.com/xiezhy6/CPF-Net.
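The three-stage decomposition described in the abstract can be sketched as a simple data-flow pipeline. This is a minimal illustrative sketch only: the function names, array shapes, and stub internals below are assumptions for exposition, not the authors' CPF-Net implementation, which uses learned networks at each stage.

```python
import numpy as np

# Hypothetical sketch of the three-stage pipeline (parsing -> foreground
# -> fusion). All names, shapes, and internals are illustrative
# assumptions, not the paper's actual model.

def generate_parsing_sequence(source_poses, target_image):
    # Stage 1: predict one human-parsing map per source pose, capturing
    # the target's body structure under each designated pose.
    h, w = target_image.shape[:2]
    return [np.zeros((h, w), dtype=np.uint8) for _ in source_poses]

def generate_foreground_sequence(parsing_maps, target_image):
    # Stage 2: synthesize foreground frames, guided by each parsing map
    # and an (assumed) appearance flow that warps the target's details.
    return [np.repeat(p[..., None], 3, axis=-1).astype(np.float32)
            for p in parsing_maps]

def fuse_with_background(foreground_frames, background):
    # Stage 3: a spatio-temporal fusion step merges the foregrounds into
    # the background to form a temporally coherent video (here a naive
    # per-frame blend stands in for the learned fusion network).
    return [0.5 * f + 0.5 * background for f in foreground_frames]

def transfer_motion(source_poses, target_image, background):
    # Full pipeline: one target image plus a source pose sequence
    # yields one output video frame per source pose.
    parsing = generate_parsing_sequence(source_poses, target_image)
    foreground = generate_foreground_sequence(parsing, target_image)
    return fuse_with_background(foreground, background)
```

The value of this decoupling is that each stage solves a narrower problem: structure first (parsing), appearance second (flow-guided foreground), and coherence last (fusion), so a single model can generalize to unseen target persons from one image.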


Related research

03/30/2019  Dance Dance Generation: Motion Transfer for Internet Videos
08/12/2019  Multi-Frame Content Integration with a Spatio-Temporal Attention Mechanism for Person Video Motion Transfer
03/18/2022  Location-Free Camouflage Generation Network
05/03/2022  Copy Motion From One to Another: Fake Motion Video Generation
06/30/2023  DisCo: Disentangled Control for Referring Human Dance Generation in Real World
12/02/2020  Single-Shot Freestyle Dance Reenactment
06/27/2021  Robust Pose Transfer with Dynamic Details using Neural Video Rendering
