Robust Pose Transfer with Dynamic Details using Neural Video Rendering

06/27/2021
by Yang-Tian Sun, et al.

Pose transfer of human videos aims to generate a high-fidelity video of a target person imitating the actions of a source person. Prior studies have made great progress either through image translation with deep latent features or through neural rendering with explicit 3D features. However, both approaches rely on large amounts of training data to generate realistic results, and their performance degrades on more accessible internet videos due to an insufficient number of training frames. In this paper, we demonstrate that dynamic details can be preserved even when the model is trained on short monocular videos. Specifically, we propose a neural video rendering framework coupled with an image-translation-based dynamic details generation network (D2G-Net), which exploits both the stability of explicit 3D features and the capacity of learned components. In particular, a novel texture representation is presented to encode both static and pose-varying appearance characteristics; it is then mapped to the image space and rendered as a detail-rich frame in the neural rendering stage. Moreover, we introduce a concise temporal loss in the training stage to suppress detail flickering, which is made more visible by the high-quality dynamic details our method generates. Through extensive comparisons, we demonstrate that our neural human video renderer achieves both clearer dynamic details and more robust performance, even on accessible short videos with only 2k-4k frames.
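The abstract does not specify the exact form of the "concise temporal loss." As an illustration only, a common choice in video synthesis is to penalize the difference between the current generated frame and the previous generated frame warped into the current view (e.g. via optical flow), optionally masking occluded pixels. A minimal pure-Python sketch, with all names and the loss form being assumptions rather than the paper's definition:

```python
# Illustrative temporal-consistency loss (NOT from the paper; the exact
# loss used by D2G-Net is not given in this abstract).
# Frames are represented as flat lists of pixel intensities for brevity.

def temporal_loss(frame_t, warped_prev, mask=None):
    """Mean squared difference between the current generated frame and
    the previous generated frame warped into the current view (e.g. via
    optical flow). An optional per-pixel mask can exclude occluded
    regions where the warp is unreliable."""
    if mask is None:
        mask = [1.0] * len(frame_t)
    num = sum(m * (a - b) ** 2 for a, b, m in zip(frame_t, warped_prev, mask))
    den = sum(mask) or 1.0
    return num / den

# Identical consecutive frames incur zero penalty...
print(temporal_loss([0.5, 0.5], [0.5, 0.5]))  # 0.0
# ...while a flickering pixel is penalized.
print(temporal_loss([1.0, 0.0], [0.0, 0.0]))  # 0.5
```

Minimizing such a term across consecutive training frames discourages frame-to-frame flicker without constraining the per-frame detail itself.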

