GAC-GAN: A General Method for Appearance-Controllable Human Video Motion Transfer

11/25/2019
by Dongxu Wei, et al.

Human video motion transfer has a wide range of applications in multimedia, computer vision, and graphics. Recently, thanks to the rapid development of Generative Adversarial Networks (GANs), the field has seen significant progress. However, almost all existing GAN-based works learn the mapping from human motions to video scenes with the scene appearances encoded individually in the trained models. As a result, each trained model can only generate videos with one specific scene appearance, and new models must be trained to generate new appearances. Moreover, existing works lack appearance control: for example, users must provide video recordings of themselves wearing new clothes or performing in new backgrounds to enable clothes or background changes in their synthetic videos, which greatly limits application flexibility. In this paper, we propose GAC-GAN, a general method for appearance-controllable human video motion transfer. To enable general-purpose appearance synthesis, we include appearance information in the conditioning inputs, so that, once trained, our model can generate new appearances simply by altering that appearance information. To achieve appearance control, we first obtain appearance-controllable conditioning inputs and then use a two-stage GAC-GAN to generate the corresponding appearance-controllable outputs, employing an ACGAN loss for foreground appearance control and a shadow extraction module for background appearance control. We further build a solo dance dataset containing a large number of dance videos for training and evaluation. Experimental results show that GAC-GAN not only supports appearance-controllable human video motion transfer but also achieves higher video quality than state-of-the-art methods.
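To illustrate the general idea of conditioning on appearance information with an ACGAN-style auxiliary loss, the following is a minimal PyTorch sketch. It is not the authors' architecture: the layer choices, the appearance-embedding scheme, and names such as NUM_APPEARANCES, Generator, Discriminator, and generator_loss are illustrative assumptions, meant only to show how an appearance label can enter the conditioning input and how an auxiliary classification head can tie the generated output to the requested appearance.

```python
# Illustrative sketch of ACGAN-style appearance conditioning.
# Not the GAC-GAN architecture; all sizes and modules are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_APPEARANCES = 10   # hypothetical number of appearance classes
POSE_CHANNELS = 3      # hypothetical pose-map channels (e.g., rendered skeleton)
IMG_CHANNELS = 3


class Generator(nn.Module):
    """Maps a pose map plus an appearance embedding to an output frame."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_APPEARANCES, 16)
        self.net = nn.Sequential(
            nn.Conv2d(POSE_CHANNELS + 16, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, IMG_CHANNELS, 3, padding=1), nn.Tanh(),
        )

    def forward(self, pose, appearance_id):
        # Broadcast the appearance embedding over the spatial dimensions
        # and concatenate it with the pose map to form the conditioning input.
        emb = self.embed(appearance_id)[:, :, None, None]
        emb = emb.expand(-1, -1, pose.size(2), pose.size(3))
        return self.net(torch.cat([pose, emb], dim=1))


class Discriminator(nn.Module):
    """Outputs a real/fake score and an appearance-class prediction (ACGAN head)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(IMG_CHANNELS, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.adv_head = nn.Linear(64, 1)
        self.cls_head = nn.Linear(64, NUM_APPEARANCES)

    def forward(self, img):
        h = self.features(img)
        return self.adv_head(h), self.cls_head(h)


def generator_loss(disc, fake, appearance_id):
    adv, cls = disc(fake)
    # Adversarial term plus the ACGAN auxiliary-classification term,
    # which encourages the output to match the requested appearance class.
    return F.binary_cross_entropy_with_logits(adv, torch.ones_like(adv)) \
        + F.cross_entropy(cls, appearance_id)


if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    pose = torch.randn(2, POSE_CHANNELS, 64, 64)
    appearance_id = torch.randint(0, NUM_APPEARANCES, (2,))
    fake = G(pose, appearance_id)
    print(generator_loss(D, fake, appearance_id).item())
```

Because the appearance label is part of the conditioning input rather than baked into the weights, changing appearance_id at inference time changes the synthesized appearance without retraining, which is the property the abstract describes.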
