Normalized Avatar Synthesis Using StyleGAN and Perceptual Refinement

06/21/2021
by Huiwen Luo, et al.

We introduce a highly robust GAN-based framework for digitizing a normalized 3D avatar of a person from a single unconstrained photo. Even if the input image shows a smiling person or was taken in extreme lighting conditions, our method reliably produces a high-quality textured model of the person's face with a neutral expression and skin textures under diffuse lighting conditions. Cutting-edge 3D face reconstruction methods use non-linear morphable face models combined with GAN-based decoders to capture the likeness and details of a person, but fail to produce neutral head models with unshaded albedo textures, which are critical for creating relightable and animation-friendly avatars for integration in virtual environments. The key challenge for existing methods is the lack of training and ground-truth data containing normalized 3D faces. We propose a two-stage approach to address this problem. First, we adopt a highly robust normalized 3D face generator by embedding a non-linear morphable face model into a StyleGAN2 network. This allows us to generate detailed but normalized facial assets. This inference is then followed by a perceptual refinement step that uses the generated assets as regularization to cope with the limited available training samples of normalized faces. We further introduce a Normalized Face Dataset, which consists of a combination of photogrammetry scans, carefully selected photographs, and generated fake people with neutral expressions in diffuse lighting conditions. While our prepared dataset contains two orders of magnitude fewer subjects than cutting-edge GAN-based 3D facial reconstruction methods, we show that it is possible to produce high-quality normalized face models for very challenging unconstrained input images, and demonstrate performance superior to the current state of the art.
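The second stage described above optimizes an appearance fit while using the stage-one generated assets as a regularizer. The following is a minimal, hypothetical sketch of that objective structure only: the real method optimizes StyleGAN2 latents against deep perceptual features, whereas here a toy linear "generator" and plain L2 terms stand in so the refinement loop is runnable; the matrix `G`, latent `w0`, weight `lam`, and step size are all assumptions, not values from the paper.

```python
import numpy as np

# Toy stand-in for the paper's perceptual refinement stage:
# minimize  ||G(w) - target||^2  +  lam * ||w - w0||^2
# where w0 is the latent behind the stage-one normalized asset,
# acting as a prior/regularizer against the limited training data.

rng = np.random.default_rng(0)
G = rng.standard_normal((64, 16))       # toy linear "generator": latent -> appearance
w0 = rng.standard_normal(16)            # stage-one latent (assumed known)
target = G @ w0 + 0.1 * rng.standard_normal(64)  # observed appearance (noisy)

lam = 0.5                               # regularization weight (assumed)

def loss(w):
    # data term (stand-in for a perceptual loss) + prior toward the stage-one asset
    return np.sum((G @ w - target) ** 2) + lam * np.sum((w - w0) ** 2)

# Gradient descent on the latent, as in GAN-inversion-style refinement.
w = np.zeros(16)
lr = 1e-3
for _ in range(500):
    grad = 2 * G.T @ (G @ w - target) + 2 * lam * (w - w0)
    w -= lr * grad
```

The design point is the second term: without the pull toward `w0`, the data term alone can drag the solution away from the normalized (neutral, diffusely lit) manifold that the stage-one generator guarantees.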


