Liquid Warping GAN with Attention: A Unified Framework for Human Image Synthesis

11/18/2020
by Wen Liu, et al.

We tackle human image synthesis, including human motion imitation, appearance transfer, and novel view synthesis, within a unified framework: once trained, the model can handle all of these tasks. Existing task-specific methods mainly use 2D keypoints to estimate the human body structure. However, 2D keypoints express only joint positions; they can neither characterize the personalized shape of a person nor model limb rotations. In this paper, we propose a 3D body mesh recovery module to disentangle pose and shape, which models not only the joint locations and rotations but also the personalized body shape. To preserve source information such as texture, style, color, and face identity, we propose an Attentional Liquid Warping GAN with an Attentional Liquid Warping Block (AttLWB) that propagates the source information in both image and feature spaces to the synthesized reference. Specifically, the source features are extracted by a denoising convolutional auto-encoder so that they characterize the source identity well. Furthermore, the proposed method supports more flexible warping from multiple sources. To further improve generalization to unseen source images, we apply one/few-shot adversarial learning: the model is first trained on an extensive training set and then fine-tuned on one or a few unseen image(s) in a self-supervised way to generate high-resolution (512 x 512 and 1024 x 1024) results. We also build a new dataset, the iPER dataset, for evaluating human motion imitation, appearance transfer, and novel view synthesis. Extensive experiments demonstrate the effectiveness of our method in preserving face identity, shape consistency, and clothing details. All code and the dataset are available at https://impersonator.org/work/impersonator-plus-plus.html.
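The core idea behind the AttLWB described in the abstract is to warp features from one or more source images into the target pose and fuse them via attention weights, so that each output location draws on whichever source best matches it. The sketch below illustrates that fusion step only, in NumPy; it is a minimal, hypothetical re-creation (the function name `attention_blend` and the simple dot-product similarity are assumptions, not the paper's exact formulation), and it presumes the source features have already been warped to the target pose, e.g. by a flow derived from the recovered 3D body mesh.

```python
import numpy as np

def attention_blend(target_feat, warped_feats):
    """Fuse multiple warped source feature maps into the target
    feature map using per-pixel softmax attention.

    target_feat:  (C, H, W) features of the synthesis stream.
    warped_feats: list of (C, H, W) source features already warped
                  to the target pose (assumption: warping is done
                  upstream, e.g. by a mesh-derived flow).
    """
    stacked = np.stack(warped_feats)                   # (N, C, H, W)
    # Per-pixel similarity between the target and each warped source.
    sim = (stacked * target_feat[None]).sum(axis=1)    # (N, H, W)
    # Softmax over the N sources yields the attention maps.
    sim = sim - sim.max(axis=0, keepdims=True)         # numerical stability
    attn = np.exp(sim)
    attn = attn / attn.sum(axis=0, keepdims=True)      # (N, H, W)
    # Attention-weighted fusion, added residually to the target stream.
    fused = (attn[:, None] * stacked).sum(axis=0)      # (C, H, W)
    return target_feat + fused
```

With a single source the softmax weight is identically one, so the block reduces to a plain residual addition of the warped source features; with several sources, the attention maps select among them per pixel, which is what allows the same block to serve multi-source appearance transfer.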


