Towards Using Clothes Style Transfer for Scenario-aware Person Video Generation

10/14/2021
by Jingning Xu, et al.

Clothes style transfer for person video generation is a challenging task due to drastic variations in intra-person appearance and video scenarios. To tackle this problem, most recent approaches adopt AdaIN-based architectures that extract clothes and scenario features for generation. However, these approaches lack fine-grained details and are prone to distorting the original person. To further improve generation quality, we propose a novel framework with disentangled multi-branch encoders and a shared decoder. Moreover, to enforce strong spatio-temporal consistency in the generated videos, we design an inner-frame discriminator whose input is the cross-frame difference. The proposed framework also supports scenario adaptation. Extensive experiments on the TEDXPeople benchmark demonstrate the superiority of our method over state-of-the-art approaches in terms of image quality and video coherence.
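The abstract only sketches the architecture. As an illustration of the temporal-consistency idea, below is a minimal PyTorch sketch (not the authors' released code) of a PatchGAN-style discriminator that scores cross-frame differences rather than individual frames; the class name, channel sizes, and layer choices are assumptions made for illustration.

# Minimal sketch (illustrative, not the paper's implementation): a discriminator
# that judges temporal coherence from the difference between consecutive frames.
import torch
import torch.nn as nn

class CrossFrameDifferenceDiscriminator(nn.Module):
    """PatchGAN-style discriminator whose input is the pixel-wise
    difference between two consecutive frames."""
    def __init__(self, in_channels=3, base_channels=64):
        super().__init__()
        layers = []
        channels = [in_channels, base_channels, base_channels * 2, base_channels * 4]
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            layers += [
                nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
                nn.InstanceNorm2d(c_out),
                nn.LeakyReLU(0.2, inplace=True),
            ]
        # Final 1-channel map of patch-wise real/fake scores.
        layers += [nn.Conv2d(channels[-1], 1, kernel_size=4, stride=1, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, frame_t, frame_t_plus_1):
        # The cross-frame difference highlights flicker and temporal artifacts
        # while being largely invariant to static appearance.
        diff = frame_t_plus_1 - frame_t
        return self.net(diff)

# Usage: score the temporal consistency of two consecutive 256x256 frames.
if __name__ == "__main__":
    d = CrossFrameDifferenceDiscriminator()
    f0, f1 = torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256)
    realness_map = d(f0, f1)
    print(realness_map.shape)  # torch.Size([1, 1, 31, 31])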

Related research

08/12/2019
Multi-Frame Content Integration with a Spatio-Temporal Attention Mechanism for Person Video Motion Transfer
Existing person video generation methods either lack the flexibility in ...

03/16/2023
NLUT: Neural-based 3D Lookup Tables for Video Photorealistic Style Transfer
Video photorealistic style transfer is desired to generate videos with a...

03/27/2017
Coherent Online Video Style Transfer
Training a feed-forward network for fast neural style transfer of images...

04/22/2023
Two Birds, One Stone: A Unified Framework for Joint Learning of Image and Video Style Transfers
Current arbitrary style transfer models are limited to either image or v...

08/31/2023
StyleInV: A Temporal Style Modulated Inversion Network for Unconditional Video Generation
Unconditional video generation is a challenging task that involves synth...

05/26/2020
Region-adaptive Texture Enhancement for Detailed Person Image Synthesis
The ability to produce convincing textural details is essential for the ...

10/02/2020
MGD-GAN: Text-to-Pedestrian generation through Multi-Grained Discrimination
In this paper, we investigate the problem of text-to-pedestrian synthesi...
