Controllable Dynamic Appearance for Neural 3D Portraits

09/20/2023
by ShahRukh Athar, et al.

Recent advances in Neural Radiance Fields (NeRFs) have made it possible to reconstruct and reanimate dynamic portrait scenes with control over head pose, facial expressions, and viewing direction. However, training such models assumes photometric consistency over the deformed region; e.g., the face must remain evenly lit as it deforms with changing head pose and facial expression. Such photometric consistency across the frames of a video is hard to maintain, even in studio environments, making the resulting reanimatable neural portraits prone to artifacts during reanimation. In this work, we propose CoDyNeRF, a system that enables the creation of fully controllable 3D portraits in real-world capture conditions. CoDyNeRF learns to approximate illumination-dependent effects via a dynamic appearance model in the canonical space that is conditioned on predicted surface normals and on the facial-expression and head-pose deformations. The surface-normal prediction is guided by 3DMM normals, which act as a coarse prior for the normals of the human head, whose direct prediction is hard due to the rigid and non-rigid deformations induced by head-pose and facial-expression changes. Using only a short smartphone-captured video of a subject for training, we demonstrate the effectiveness of our method on free-view synthesis of a portrait scene with explicit head-pose and expression controls and realistic lighting effects. The project page can be found here: http://shahrukhathar.github.io/2023/08/22/CoDyNeRF.html
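The dynamic appearance model described above can be sketched as a small conditioned MLP. The snippet below is a minimal illustration, not the authors' implementation: all names, layer sizes, and code dimensions are hypothetical. It shows the conditioning pattern the abstract describes, i.e. predicting a canonical-space color from a point feature together with a predicted surface normal and per-frame expression and head-pose codes, so that shading can vary with deformation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: point feature, normal, expression code, pose code, hidden width.
D_FEAT, D_NORM, D_EXPR, D_POSE, H = 32, 3, 10, 6, 64

# Randomly initialized weights stand in for a trained model.
w1 = rng.normal(0.0, 0.1, (D_FEAT + D_NORM + D_EXPR + D_POSE, H))
b1 = np.zeros(H)
w2 = rng.normal(0.0, 0.1, (H, 3))
b2 = np.zeros(3)

def dynamic_appearance(feat, normal, expr, pose):
    """Predict an RGB color in canonical space, conditioned on a predicted
    surface normal and the expression/head-pose deformation codes."""
    x = np.concatenate([feat, normal, expr, pose])
    h = np.maximum(x @ w1 + b1, 0.0)            # ReLU hidden layer
    rgb = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))  # sigmoid keeps color in (0, 1)
    return rgb

feat = rng.normal(size=D_FEAT)          # per-point canonical feature
normal = np.array([0.0, 0.0, 1.0])      # predicted normal (3DMM normals act as a coarse prior)
expr = rng.normal(size=D_EXPR)          # facial-expression code for this frame
pose = rng.normal(size=D_POSE)          # head-pose code for this frame
rgb = dynamic_appearance(feat, normal, expr, pose)
print(rgb.shape)  # (3,)
```

Because the expression and pose codes enter the appearance branch directly, the predicted color can change as the head deforms, which is what lets the model absorb the illumination effects that break the photometric-consistency assumption.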


Related research

- RigNeRF: Fully Controllable Neural 3D Portraits (06/13/2022)
  Volumetric neural rendering methods, such as neural radiance fields (NeR...
- Controllable Radiance Fields for Dynamic Face Synthesis (10/11/2022)
  Recent work on 3D-aware image synthesis has achieved compelling results ...
- FLAME-in-NeRF: Neural control of Radiance Fields for Free View Face Animation (08/10/2021)
  This paper presents a neural rendering method for controllable portrait ...
- LPMM: Intuitive Pose Control for Neural Talking-Head Model via Landmark-Parameter Morphable Model (05/17/2023)
  While current talking head models are capable of generating photorealist...
- Pose-Controllable 3D Facial Animation Synthesis using Hierarchical Audio-Vertex Attention (02/24/2023)
  Most of the existing audio-driven 3D facial animation methods suffered f...
- Text2Control3D: Controllable 3D Avatar Generation in Neural Radiance Fields using Geometry-Guided Text-to-Image Diffusion Model (09/07/2023)
  Recent advances in diffusion models such as ControlNet have enabled geom...
- Explicitly Controllable 3D-Aware Portrait Generation (09/12/2022)
  In contrast to the traditional avatar creation pipeline which is a costl...
