Neural Relighting and Expression Transfer On Video Portraits

07/30/2021
by Youjia Wang, et al.

Photo-realistic video portrait reenactment benefits virtual production and numerous VR/AR experiences. The task remains challenging because the reenacted expression must match the source while the lighting must remain adjustable to new environments. We present a neural relighting and expression transfer technique that transfers facial expressions from a source performer to a portrait video of a target performer while enabling dynamic relighting. Our approach combines 4D reflectance field learning, model-based facial performance capture, and target-aware neural rendering. Specifically, given a short one-light-at-a-time (OLAT) sequence of the target performer, we apply a rendering-to-video translation network to first synthesize OLAT frames for new sequences with unseen expressions. We then design a semantic-aware facial normalization scheme along with a multi-frame multi-task learning strategy that encodes content, segmentation, and motion flows for reliably inferring the reflectance field. This allows us to simultaneously control facial expression and apply virtual relighting. Extensive experiments demonstrate that our technique robustly handles challenging expressions and lighting environments and produces results at cinematographic quality.
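For context, relighting from an OLAT capture rests on the linearity of light transport: the subject under any target environment can be composed as a weighted sum of the one-light-at-a-time basis images, with per-light weights sampled from the environment map at the stage-light directions. The sketch below illustrates only this standard reflectance-field relighting principle that the synthesized OLAT frames feed into; the function name, light count, and array shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def relight_from_olat(olat_frames: np.ndarray, env_weights: np.ndarray) -> np.ndarray:
    """Relight one frame from its OLAT (one-light-at-a-time) basis.

    olat_frames: (L, H, W, 3) linear-radiance images, one per stage light.
    env_weights: (L, 3) per-light RGB intensities sampled from the target
                 environment map at the corresponding light directions.

    Light transport is linear in the illumination, so the relit frame is
    simply a weighted sum of the OLAT basis images.
    """
    # relit[h, w, c] = sum_l env_weights[l, c] * olat_frames[l, h, w, c]
    return np.einsum('lc,lhwc->hwc', env_weights, olat_frames)

# Hypothetical usage: 114 stage lights, a 256x256 face crop.
olat = np.random.rand(114, 256, 256, 3)        # stand-in for captured/synthesized OLAT frames
weights = np.random.rand(114, 3)               # stand-in for a sampled environment map
relit_frame = relight_from_olat(olat, weights) # (256, 256, 3) image under the new lighting
```

Because the combination is linear, the environment map (and hence the lighting) can be swapped per frame without re-running the network, which is what makes dynamic relighting of the reenacted sequence practical.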


