PV3D: A 3D Generative Model for Portrait Video Generation

12/13/2022
by Eric Zhongcong Xu, et al.

Recent advances in generative adversarial networks (GANs) have demonstrated the capability to generate stunning photo-realistic portrait images. While some prior works have applied such image GANs to unconditional 2D portrait video generation and static 3D portrait synthesis, few works have successfully extended GANs to generate 3D-aware portrait videos. In this work, we propose PV3D, the first generative framework that can synthesize multi-view consistent portrait videos. Specifically, our method extends a recent static 3D-aware image GAN to the video domain by generalizing the 3D implicit neural representation to model the spatio-temporal space. To introduce motion dynamics into the generation process, we develop a motion generator that stacks multiple motion layers to produce motion features via modulated convolution. To alleviate motion ambiguities caused by camera/human motions, we propose a simple yet effective camera condition strategy for PV3D, enabling both temporally and multi-view consistent video generation. Moreover, PV3D introduces two discriminators that regularize the spatial and temporal domains to ensure the plausibility of the generated portrait videos. These designs enable PV3D to generate 3D-aware, motion-plausible portrait videos with high-quality appearance and geometry, significantly outperforming prior works. As a result, PV3D can support many downstream applications such as animating static portraits and view-consistent video motion editing. Code and models will be released at https://showlab.github.io/pv3d.
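To make the abstract's description of the motion generator more concrete, the sketch below illustrates one plausible reading of "stacking multiple motion layers to produce motion features via modulated convolution," with the timestep and camera pose folded into the modulation style as a stand-in for the camera condition strategy. This is a minimal illustration under assumed names and shapes (MotionLayer, MotionGenerator, feature/channel sizes, a 25-dimensional camera vector), not the authors' released implementation.

```python
# Illustrative sketch only: assumed module names, layer counts, and feature
# sizes; NOT the official PV3D code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MotionLayer(nn.Module):
    """One motion layer: a 1x1 modulated convolution (StyleGAN2-style) whose
    weights are modulated per sample by a style vector."""

    def __init__(self, in_ch, out_ch, style_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, 1, 1))
        self.affine = nn.Linear(style_dim, in_ch)  # style -> per-channel scales

    def forward(self, x, style):
        b, c, h, w = x.shape
        s = self.affine(style).view(b, 1, c, 1, 1)                 # (B, 1, in, 1, 1)
        weight = self.weight.unsqueeze(0) * s                      # modulate
        demod = torch.rsqrt((weight ** 2).sum(dim=[2, 3, 4]) + 1e-8)
        weight = weight * demod.view(b, -1, 1, 1, 1)               # demodulate
        weight = weight.view(b * weight.shape[1], c, 1, 1)
        x = x.reshape(1, b * c, h, w)
        out = F.conv2d(x, weight, groups=b)                        # grouped conv = per-sample weights
        return out.view(b, -1, h, w)


class MotionGenerator(nn.Module):
    """Stack of motion layers mapping per-frame conditions (timestep plus a
    hypothetical camera vector) to motion features."""

    def __init__(self, feat_ch=64, style_dim=128, num_layers=4, cam_dim=25):
        super().__init__()
        self.embed = nn.Linear(1 + cam_dim, style_dim)  # time + camera condition
        self.layers = nn.ModuleList(
            [MotionLayer(feat_ch, feat_ch, style_dim) for _ in range(num_layers)]
        )

    def forward(self, content_feat, t, camera):
        # content_feat: (B, C, H, W) appearance features; t: (B, 1); camera: (B, cam_dim)
        style = self.embed(torch.cat([t, camera], dim=1))
        h = content_feat
        for layer in self.layers:
            h = F.leaky_relu(layer(h, style), 0.2)
        return h  # motion features to be fused with the 3D-aware synthesis branch


if __name__ == "__main__":
    gen = MotionGenerator()
    feats = torch.randn(2, 64, 32, 32)   # appearance features for a batch of 2
    t = torch.rand(2, 1)                 # normalized frame timestep
    cam = torch.randn(2, 25)             # e.g. flattened camera intrinsics/extrinsics
    motion = gen(feats, t, cam)
    print(motion.shape)                  # torch.Size([2, 64, 32, 32])
```

In this reading, conditioning the modulation style on the camera pose is what disentangles camera motion from head motion; the paper's spatial and temporal discriminators would then score individual frames and frame sequences, respectively, which this sketch does not cover.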


