High-fidelity Face Tracking for AR/VR via Deep Lighting Adaptation

03/29/2021
by Lele Chen, et al.

3D video avatars can empower virtual communication by providing compression, privacy, entertainment, and a sense of presence in AR/VR. The best photo-realistic 3D AR/VR avatars driven by video, which minimize uncanny effects, rely on person-specific models. However, existing person-specific photo-realistic 3D models are not robust to lighting, so their results typically miss subtle facial behaviors and introduce artifacts in the avatar. This is a major drawback for the scalability of these models in communication systems (e.g., Messenger, Skype, FaceTime) and in AR/VR. This paper addresses these limitations by learning a deep lighting model that, in combination with a high-quality 3D face tracking algorithm, enables subtle and robust facial motion transfer from a regular video to a 3D photo-realistic avatar. Extensive experimental validation and comparisons to other state-of-the-art methods demonstrate the effectiveness of the proposed framework in real-world scenarios with variability in pose, expression, and illumination. Please visit https://www.youtube.com/watch?v=dtz1LgZR8cc for more results. Our project page can be found at https://www.cs.rochester.edu/u/lchen63.
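To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of what a deep lighting model of this kind could look like: a small CNN that maps a face crop to a low-dimensional lighting code (e.g., spherical-harmonics-style coefficients), which could then condition a person-specific avatar decoder so that expressions from the 3D face tracker transfer robustly under varying illumination. The LightingEncoder class, the 27-coefficient parameterization, and the image size are illustrative assumptions, not the paper's actual architecture.

    # Hypothetical sketch (not the authors' code): predict a per-frame
    # lighting code from a cropped face image; an avatar renderer could
    # be conditioned on this code so tracked expression parameters stay
    # disentangled from scene illumination.
    import torch
    import torch.nn as nn

    class LightingEncoder(nn.Module):
        """Maps a face crop to a low-dimensional lighting code."""
        def __init__(self, n_coeffs: int = 27):  # e.g., 9 SH coefficients x 3 color channels
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(128, n_coeffs)

        def forward(self, face_crop: torch.Tensor) -> torch.Tensor:
            x = self.features(face_crop).flatten(1)
            return self.head(x)

    # Usage: the predicted lighting code would condition a (hypothetical)
    # person-specific avatar decoder alongside the tracker's expression output.
    encoder = LightingEncoder()
    frame = torch.rand(1, 3, 256, 256)   # a normalized face crop from the input video
    lighting_code = encoder(frame)        # shape: (1, 27)

In such a design, separating illumination into its own code is what would let the person-specific avatar model generalize to lighting conditions unseen at capture time, which is the robustness gap the abstract highlights.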

