High-Fidelity 3D Digital Human Creation from RGB-D Selfies

10/12/2020 ∙ by Xiangkai Lin, et al.

We present a fully automatic system that can produce high-fidelity, photo-realistic 3D digital human characters with a consumer RGB-D selfie camera. The system only needs the user to take a short selfie RGB-D video while rotating his/her head, and can produce a high-quality reconstruction in less than 30 seconds. Our main contribution is a new facial geometry modeling and reflectance synthesis procedure that significantly improves the state-of-the-art. Specifically, given the input video, a two-stage frame selection algorithm is first employed to select a few high-quality frames for reconstruction. A novel, differentiable renderer based 3D Morphable Model (3DMM) fitting method is then applied to recover facial geometries from multiview RGB-D data, which takes advantage of extensive data generation and perturbation. Our 3DMM has a much larger expressive capacity than conventional 3DMMs, allowing us to recover more accurate facial geometry using merely linear bases. For reflectance synthesis, we present a hybrid approach that combines parametric fitting and CNNs to synthesize high-resolution albedo/normal maps with realistic hair/pore/wrinkle details. Results show that our system can produce faithful 3D characters with extremely realistic details. Code and the constructed 3DMM are publicly available.




1. Introduction

Real-time rendering of realistic digital humans is an increasingly important task in various immersive applications like augmented and virtual reality (AR/VR). To render a realistic human face, high-quality geometry and reflectance data are essential. Specialized hardware such as Light Stage (Alexander et al., 2009) exists for high-fidelity 3D face capture and reconstruction in the movie industry, but it is cumbersome for consumers. Research efforts have been dedicated to consumer-friendly solutions, trying to create 3D faces with consumer cameras, e.g., RGB-D data (Zollhöfer et al., 2011; Thies et al., 2015), multiview images (Ichim et al., 2015), or even a single image (Hu et al., 2017; Yamaguchi et al., 2018; Lattas et al., 2020). While good results have been shown, the reconstructed 3D faces still contain artifacts and are far from satisfactory.

Indeed, faithful 3D facial reconstruction is a challenging problem due to the extreme sensitivity of human perception towards faces. First, the recovered facial geometry needs to preserve all important facial features such as cheek silhouettes and mouth shapes. Single-image based approaches (Hu et al., 2017; Yamaguchi et al., 2018; Lattas et al., 2020) can hardly achieve this due to the lack of reliable geometric constraints. With multiview RGB/RGB-D inputs, existing approaches (Zollhöfer et al., 2011; Thies et al., 2015; Ichim et al., 2015) do not fully leverage recent advances in deep learning and differentiable rendering (Genova et al., 2018; Gecer et al., 2019), leading to inaccurate recovery that does not fully resemble the user's facial shape. Second, the synthesized facial reflectance maps need to be high-resolution with fine details such as eyebrow hair, lip wrinkles, and pores on facial skin. Several recent works (Saito et al., 2017; Yamaguchi et al., 2018; Lattas et al., 2020) have tried to address these issues, but their results still lack the natural facial details that are critical for realistic rendering.

In this paper, we present new facial geometry modeling and reflectance synthesis approaches that can produce faithful geometry shapes and high-quality, realistic reflectance maps from multiview RGB-D data. Our geometry modeling algorithm extends differentiable renderer based 3DMM fitting, such as GANFIT (Gecer et al., 2019), from a single image to multiview RGB-D data. Different from GANFIT, we employ conventional PCA-based texture bases instead of a GAN to reduce the texture space, so that more data constraints can be exerted on geometric shaping. Additionally, we present an effective frame selection scheme, as well as an initial model fitting procedure, which avoids enforcing conflicting constraints and increases system robustness. Moreover, we propose an effective approach that takes advantage of extensive data generation and perturbation to construct the 3DMM, which has a much larger expressive capacity compared with previous methods. We show that even with the linear bases of the new 3DMM, our method can consistently recover accurate, personalized facial geometry.

For facial reflectance modeling, we use high-resolution 2K (2048×2048) UV-maps consisting of an albedo map and a normal map. We propose a hybrid approach that consists of regional parametric fitting and CNN-based refinement networks. The regional parametric fitting is based on a set of novel pyramid bases constructed by considering variations in multi-resolution albedo maps, as well as high-resolution normal maps. Faithful but over-smoothed high-resolution albedo/normal maps can be obtained in this step. GAN-based networks are then employed to refine the albedo/normal maps to yield the final high-quality results. Our experiments show that even with lower-resolution inputs, our method can produce high-resolution albedo/normal maps, where eyebrow hair, lip wrinkles, and facial skin pores are all clearly visible. The high-quality reflectance maps significantly improve the realism of the final renderings in real-time physically based rendering engines.

With the recovered facial geometry and reflectance, we further present a fully automatic pipeline to create a full head rig, by completing a head model, matching a hair model, estimating the position/scale of eyeballs/teeth models, generating the expression blendshapes, etc. We conduct extensive experiments and demonstrate potential applications of our system.

Our major contributions include:

  • A fully automatic system for producing high-fidelity, realistic 3D digital human characters with consumer-level RGB-D selfie cameras. Compared with previous avatar approaches, our system can generate higher quality assets for physically based rendering of photo-realistic 3D characters. The total acquisition and production time for a character is less than 30 seconds. The core code and the constructed 3DMM will be made publicly available (see our project page at https://tencent-ailab.github.io/hifi3dface_projpage/).

  • A robust procedure consisting of frame selection, initial model fitting, and differentiable renderer based optimization to recover faithful facial geometries from multiview RGB-D data, which can tolerate data inconsistency introduced during user data acquisition.

  • A novel morphable model construction approach that takes advantage of extensive data generation and perturbation. The linear 3DMM constructed by our approach has a much larger expressive capacity than conventional 3DMMs.

  • A novel hybrid approach to synthesize high-resolution facial albedo/normal maps. Our method can produce high-quality results with fine-scale realistic facial details.

2. Related Work

Creating high-fidelity realistic digital human characters commonly relies on specialized hardware (Alexander et al., 2009; Beeler et al., 2010; Debevec et al., 2000) and tedious artist labor such as model editing and rigging (von der Pahlen et al., 2014). Several recent works seek to create realistic 3D avatars with consumer devices like a smartphone using domain-specific reconstruction approaches (i.e., with face shape/appearance priors) (Ichim et al., 2015; Yamaguchi et al., 2018; Lattas et al., 2020). We mainly focus on prior art along this line and briefly summarize the most related work in this section. Please refer to the recent surveys (Egger et al., 2020; Zollhöfer et al., 2018) for more detailed reviews.

2.1. Face 3D Morphable Model

The 3D morphable model (3DMM) is introduced in (Blanz and Vetter, 1999) to represent a 3D face model as a linear combination of shape and texture bases. These bases are extracted with the PCA algorithm on topologically aligned 3D face meshes. To recover a 3D face model from observations, the 3DMM parameters can be estimated instead. Since the 3DMM bases are linear combinations of source 3D models, the expressive capacity of a 3DMM is rather limited. Researchers have tried to increase the capacity either by automatically generating large amounts of topologically aligned face meshes (Booth et al., 2016) or by turning the linear procedure into a nonlinear one (Lüthi et al., 2017; Tran and Liu, 2018). However, the face models generated with these 3DMMs are usually flawed and not suitable for realistic digital human rendering. Another line of work increases the expressive capacity of a 3DMM by segmenting the face into regions and then employing spatially localized bases to model each region (Blanz and Vetter, 1999; Tena et al., 2011; Neumann et al., 2013). We present a novel data augmentation approach that can effectively increase the capacity of either global or localized 3DMMs with the same amount of source 3D face meshes as existing approaches.

2.2. Facial Geometry Capture

Capturing from Single Image

Given a single face image, the 3D face model can be recovered by estimating the 3DMM parameters with analysis-by-synthesis optimization approaches (Blanz and Vetter, 2003; Romdhani and Vetter, 2005; Garrido et al., 2013; Thies et al., 2016; Hu et al., 2017; Garrido et al., 2016; Yamaguchi et al., 2018; Gecer et al., 2019). A widely adopted approach among them is described in the Face2Face work (Thies et al., 2016), where the optimization objective consists of photo consistency, facial landmark alignment, and statistical regularization. Although there is a recent surge of deep learning based approaches that use CNNs to regress 3DMM parameters (Zhu et al., 2016; Tran et al., 2017; Tewari et al., 2017; Genova et al., 2018), the results are commonly not of high fidelity due to the lack of reliable geometric constraints. Some works go beyond 3DMM parametric estimation and use additional geometric representations to model facial details (Richardson et al., 2017; Tran et al., 2018; Tewari et al., 2018; Jackson et al., 2017; Sela et al., 2017; Chen et al., 2020; Guo et al., 2019; Kemelmacher-Shlizerman and Basri, 2011; Shi et al., 2014), but the results are generally not satisfactory for realistic rendering.

Capturing from Multiview Images

Ichim et al. (2015) present a complete system to produce face rigs by taking hand-held videos with a smartphone. The system relies on a multiview stereo reconstruction of the captured head followed by non-rigid registration, which is slow and error-prone, especially when motion occurs or no reliable feature points can be detected in face regions. Besides, the two separated steps are brittle: the reconstruction step cannot utilize the strong human facial prior from the 3DMM and hence its results are usually rather noisy, which further leads to erroneous registration results. Recent research on multiview face reconstruction with deep learning approaches (Dou and Kakadiaris, 2018; Wu et al., 2019) does not explicitly model geometric constraints and thus is not accurate enough for high-fidelity rendering.

Capturing from RGB-D Data

Modeling facial geometries from RGB-D data commonly consists of several separate steps (Zollhöfer et al., 2011; Weise et al., 2011; Bouaziz et al., 2013; Li et al., 2013; Zollhöfer et al., 2014). First, accumulated point clouds are obtained with rigid registration (Newcombe et al., 2011). Then a non-rigid registration procedure is employed to obtain a deformed mesh from the target mesh model (Bouaziz et al., 2016; Chen et al., 2013). Finally, in order to obtain a 3DMM parametric representation, a morphable model fitting is applied using the deformed mesh as geometric constraints (Zollhöfer et al., 2011; Bouaziz et al., 2016). Although this approach is widely adopted as standard practice, it suffers from accumulated errors due to the long pipeline. Thies et al. (2015) propose a unified parametric fitting procedure to directly optimize camera poses together with 3DMM parameters, taking into account both RGB and depth constraints. Their method achieves high-quality results in facial expression tracking, but is not specially designed for recovering personalized geometric characteristics.

2.3. Facial Reflectance Capture

Saito et al. (2017) propose to synthesize high-resolution facial albedo maps using CNN-style-feature based optimization similar to the style transfer algorithm (Gatys et al., 2016). However, the approach requires iterative optimization and several minutes of computation. Yamaguchi et al. (2018) further propose to infer albedo maps, as well as specular maps and displacement maps, using texture completion CNNs and super-resolution CNNs. GANFIT (Gecer et al., 2019) employs the latent vector of a Generative Adversarial Network (GAN) as the parametric representation of texture maps and then uses a differentiable renderer based optimization to estimate the texture parameters. The most recent work, AvatarMe (Lattas et al., 2020), proposes to infer separate diffuse albedo maps, diffuse normal maps, specular albedo maps, and specular normal maps using a series of CNNs. We present a novel hybrid approach that achieves high-quality results while being more robust than the above pure CNN-based approaches.

2.4. Full Head Rig Creation

To complete a full head avatar model, accessories beyond the face region need to be attached to the recovered face model, e.g., hair, eyeballs, teeth, etc. Ichim et al. (2015) describe a simple solution that transfers these accessories (except hair) from a template model and adapts their scales/positions to the reconstructed face model. Cao et al. (2016) use image-based billboards to deal with eyes and teeth, and a coarse geometric proxy to deal with the hair model. Nagano et al. (2018) employ a GAN-based network to synthesize mouth interiors. Hu et al. (2017) propose to perform hair digitization by parsing hair attributes from the input image and then retrieving a hair model for further refinement. There are also approaches that model hair at the strand level (Wei et al., 2005; Hu et al., 2015; Chai et al., 2015; Luo et al., 2013; Saito et al., 2018). For expression blendshape generation, Ichim et al. (2015) present a dynamic modeling process to produce personalized blendshapes, while Hu et al. (2017) adopt a simplified solution that transfers generic FACS-based blendshapes to the target model. The expression blendshapes can also be generated with a bilinear 3DMM model like FaceWarehouse (Cao et al., 2014), where the face identity and expression parameters lie in independent dimensions.

3. Overview

We first introduce the 3D face dataset used in our system. Then we describe the goal of our system, followed by the user data acquisition process and a summary of the main processing steps.

Figure 2.

Our two-stage frame selection procedure. Four frames are selected out of 200-300 frames considering both view coverage and data quality. Note that the reference model used for coarse screening and grouping is a template 3D face model, which may lead to inaccurate pose estimation. But the rough poses are sufficient for excluding extreme/invalid frames and categorizing the remaining frames into pose groups. In the second stage, the reference model for the rigidness screening is the lifted 3D landmarks from the front face data, which results in more accurate poses for stricter rigidness verification.

3D Face Dataset

We use a specialized camera array system (Beeler et al., 2010) to scan 200 East Asians, including 100 males and 100 females, aged 20 to 50 (with their permission to use their face data). The scanned face models are manually cleaned and aligned to a triangle mesh template with 20,481 vertices and 40,832 faces. Each face model is associated with a 2K-resolution (2048×2048) albedo map and a 2K normal map, where pore-level details are preserved. A linear PCA-based 3DMM (Blanz and Vetter, 1999) can be constructed from the dataset, which consists of shape bases, albedo map bases, and normal map bases. Note that we propose a novel approach to construct an augmented version of the 3DMM shape bases in Sec. 5.3. Besides, a novel pyramid version of the 3DMM albedo/normal maps is presented in Sec. 6.1.


The goal of our system is to capture a user's facial geometry and reflectance with high fidelity from RGB-D selfie data, which is further used to create and render full-head, realistic digital humans. For geometry modeling, we use 3DMM parameters to represent a face, since this representation is more robust to degraded input data and offers more controllable mesh quality than deformation-based representations. For reflectance modeling, we synthesize 2K-resolution albedo and normal maps regardless of the input RGB-D resolution.

User Data Acquisition

We use an iPhone X to capture user selfie RGB-D data. Note that it is common nowadays for a smartphone to be equipped with a front-facing depth sensor, and any such phone can be used. While the user is taking a selfie RGB-D video, our capturing interface guides the user to consecutively rotate his/her head to the left, right, upward, and back to the middle. The entire acquisition process takes less than 10 seconds, and a total of 200-300 frames of RGB-D images are collected. The face region used for computation is cropped and resized. The camera intrinsic parameters are directly read from the device.

Processing Pipeline

We first employ an automatic frame selection algorithm to select a few high-quality frames that cover all sides of the user (Sec. 4). Then an initial 3DMM model fitting is computed with the detected facial landmarks in the selected frames (Sec. 5.1). Starting from the initial fitting, a differentiable renderer based optimization with multiview RGB-D constraints (Sec. 5.2) is applied to solve the 3DMM parameters as well as lighting parameters and poses. Based on the estimated parameters, high-resolution albedo/normal maps are then synthesized (Sec. 6). Finally, high-quality, realistic full head avatars can be created and rendered (Sec. 7).

4. Frame Selection

There are typically 200-300 frames acquired from a user. For efficiency and robustness, we develop a frame selection procedure that selects a few high-quality frames for further processing, considering both view coverage and data quality. As shown in Fig. 2, the procedure consists of the two stages described below.

Coarse Screening and Preprocessing

We first apply a real-time facial landmark detector (a MobileNet (Howard et al., 2017) model trained on the 300W-LP dataset (Zhu et al., 2016)) on the RGB images to detect 2D landmarks for each frame. Then a rough head pose for each frame can be efficiently computed from the correspondences between the 2D landmarks and the 3D keypoints on a template 3D face model using the PnP algorithm (Lepetit et al., 2009). Frames with extreme/invalid poses or closed-eye/opened-mouth expressions can be easily identified and screened out with the 2D landmarks and rough head poses. We categorize the remaining frames by pose into four groups: front, left, right, and up. Each group keeps only the 10-30 frames nearest to the center pose of the group. Note that more groups can be obtained by categorizing the frames with finer-level angle partitioning; we experimented with different numbers of groups and found four to be a good balance between accuracy and efficiency. The remaining depth images are preprocessed to remove depth values outside the range of 40 cm to 1 m (the typical selfie distances). Bilateral filtering (Paris and Durand, 2009) with a small spatial and range kernel is then applied to the depth images to attenuate noise.
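The depth preprocessing step (range clipping plus a small bilateral filter) can be sketched in numpy. The kernel radius and sigma values below are illustrative choices of ours; the text only specifies the 40 cm-1 m range and "a small spatial and range kernel":

```python
import numpy as np

def preprocess_depth(depth, near=0.4, far=1.0, sigma_s=1.0, sigma_r=0.05, radius=2):
    """Invalidate out-of-range depth (set to 0), then bilateral-filter.

    `depth` is in meters. sigma_s/sigma_r/radius are placeholder values.
    """
    d = np.where((depth >= near) & (depth <= far), depth, 0.0)
    out = np.zeros_like(d)
    h, w = d.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))  # spatial kernel
    pad = np.pad(d, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            if d[i, j] == 0.0:
                continue  # keep invalid pixels invalid
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            valid = patch > 0.0  # exclude invalidated neighbors
            rng_k = np.exp(-((patch - d[i, j]) ** 2) / (2 * sigma_r ** 2))  # range kernel
            wgt = spatial * rng_k * valid
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

A vectorized or GPU implementation would be used in practice; the double loop is for clarity only.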

Frame Selection

For each group, we further select one frame based on two criteria: image quality and rigidness. To measure the image quality of a frame, we compute the Laplacian of Gaussian (LoG) filter response and use its variance as a motion blur score (sharper images have larger scores). A front face frame is first selected from the front group based on the motion blur score. We then compute the rigidness between each frame in the other groups and the front frame with the help of the depth data. Specifically, the detected 2D landmarks of each frame are lifted from 2D to 3D using the depth data. Note that occluded landmarks are automatically removed according to the group a frame belongs to, e.g., for a frame in the left group, the landmarks on the right side of the face are removed. We use a RANSAC scheme to compute the relative pose between each frame in the other groups and the front frame from the 3D-3D landmark correspondences (Arun et al., 1987). Frames with too many outliers are considered to have low rigidness and are excluded. The best frame in each group is then selected based on the motion blur score. The output of this step is four frames with their 3D landmarks.
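The rigidness check builds on the closed-form rigid alignment of 3D point correspondences from Arun et al. (1987). A minimal numpy sketch, where the inlier threshold is a hypothetical value (the text does not state one):

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (R, t) with Q ~ R @ P + t.

    Classic SVD solution (Arun et al., 1987). P, Q: (N, 3) arrays of
    corresponding 3D landmarks lifted from depth.
    """
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # fix reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

def inlier_ratio(P, Q, R, t, thresh=0.01):
    """Fraction of correspondences within `thresh` meters after alignment --
    a proxy for the rigidness score; frames with too many outliers are dropped."""
    residuals = np.linalg.norm((P @ R.T + t) - Q, axis=1)
    return float((residuals < thresh).mean())
```

Inside a RANSAC loop, `rigid_align` would be run on random landmark subsets and `inlier_ratio` used to score each hypothesis.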

5. Facial Geometry Modeling

Figure 3. The masks derived from the detected landmarks for texture blending. We use the verb “unwrap” to refer to the process of extracting partial texture maps from input photos and blending them into a complete texture map.

5.1. Initial Model Fitting

We use a PCA-based linear 3DMM (Blanz and Vetter, 1999) for parametric modeling. The shape S and albedo texture T of a face model are represented as

S = S̄ + B_shp α,  T = T̄ + B_alb β,

where S̄ is the vector format of the mean 3D face shape model, B_shp is the shape identity basis, α is the corresponding identity parameter vector to be estimated, T̄ is the vector format of the mean albedo map, B_alb is the albedo map basis, and β is the corresponding albedo parameter vector to be estimated. The details of the bases are presented in Secs. 5.3 (shape) and 6.1 (albedo).
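In code, synthesizing a model from a linear 3DMM is a single matrix-vector product. A toy-dimension numpy sketch (the real mesh has 20,481 vertices, i.e., a 61,443-dimensional shape vector, and the paper uses 500 augmented bases):

```python
import numpy as np

def synthesize_shape(mu, B, alpha):
    """Linear 3DMM synthesis: S = mu + B @ alpha.

    mu: (3V,) mean shape (stacked x,y,z per vertex),
    B: (3V, k) identity basis, alpha: (k,) identity parameters.
    """
    return mu + B @ alpha

# Toy dimensions for illustration.
rng = np.random.default_rng(0)
mu = rng.normal(size=15)        # 5 toy vertices
B = rng.normal(size=(15, 4))    # 4 toy bases
S = synthesize_shape(mu, B, np.zeros(4))  # alpha = 0 recovers the mean shape
```

The albedo map T is synthesized the same way from T̄, B_alb, and β.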

We fit an initial shape model to the detected 3D landmarks using ridge regression (Zhu et al., 2015). A partial texture map is extracted by projecting the shape model onto each input image. With a predefined mask derived from the landmarks for each view (see Fig. 3), the partial texture maps are then blended into a complete texture map using Laplacian pyramid blending (Burt and Adelson, 1983). The initial albedo parameters are obtained with another ridge regression fitting the blended texture map.
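Both ridge regressions have a standard closed form. A sketch, with a placeholder regularization weight (the actual value is not stated here):

```python
import numpy as np

def ridge_fit(B, target, mu, lam=1.0):
    """Closed-form ridge regression for 3DMM parameters:

        argmin_a || B a - (target - mu) ||^2 + lam * ||a||^2

    B: (d, k) basis restricted to the observed entries (e.g. landmark
    vertices), target: observed values (flattened), mu: mean model at
    those entries. `lam` is a hypothetical regularization weight.
    """
    k = B.shape[1]
    b = target - mu
    return np.linalg.solve(B.T @ B + lam * np.eye(k), B.T @ b)
```

The same routine fits the shape parameters to the lifted 3D landmarks and the albedo parameters to the blended texture map.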

5.2. Optimization

Figure 4. Our optimization framework. The parameters to be solved include the 3DMM parameters α and β for a user, and the lighting parameters γ and pose p for each view. The constraints include the landmark loss E_lan, the RGB photo loss E_pho, the depth loss E_dep, and the identity perceptual loss E_per.

Fig. 4 shows our optimization framework. The parameters to be optimized are

X = { α, β, {γ_v}, {p_v} },

where α is the shape parameter, β is the albedo parameter, γ_v is the second-order spherical harmonics lighting parameter, and p_v includes the rotation and translation parameters for rigid transformation. Note that we have only one α and one β per user, while the number of γ_v and p_v equals the number of views. With a set of estimated parameters and the 3DMM bases, a set of rendered RGB-D frames can be computed via a differentiable renderer (Genova et al., 2018; Gecer et al., 2019). The distances between the rendered RGB-D frames and the input RGB-D frames are minimized by backpropagating the errors to update the parameters. The loss function to be minimized is defined as:

E = w_pho E_pho + w_dep E_dep + w_per E_per + w_lan E_lan + w_reg E_reg,   (1)

where E_pho denotes the pixel-wise RGB photometric loss, E_dep the pixel-wise depth loss, E_per the identity perceptual loss, E_lan the landmark loss, and E_reg the regularization terms. Note that the landmark loss, RGB photometric loss, and regularization term are similar to conventional analysis-by-synthesis optimization approaches (Thies et al., 2016). The identity perceptual loss is also employed in recent differentiable renderer based approaches (Genova et al., 2018; Gecer et al., 2019). We extend these losses to the multiview setting and incorporate depth data as geometric constraints. The details of each term are as follows.

RGB Photo Loss

The pixel-wise RGB photometric loss is:

E_pho = (1/|M|) Σ_{i∈M} || I_i − Î_i ||_1,

where I is the input RGB image, Î is the rendered RGB image from the differentiable renderer, and M is the set of valid facial pixels. We adopt the ℓ1-norm because it is more robust against outliers than the ℓ2-norm.

Depth Loss

The depth loss is defined as:

E_dep = (1/|M|) Σ_{i∈M} min( (D_i − D̂_i)², τ ),

where min(·, τ) defines a truncated ℓ2-norm that clips the per-pixel squared error at a threshold τ, D is the input depth image, and D̂ is the rendered depth image from the differentiable renderer. The truncation makes the optimization more robust to depth outliers.

Identity Perceptual Loss

To capture high-level identity information, we apply an identity perceptual loss defined as

E_per = Σ_v ( 1 − cos⟨ F(I_v), F(Î_v) ⟩ ),

where F(·) denotes the deep identity features extracted from a pretrained face recognition model. Here we use features from the fc7 layer of the VGGFace model (Parkhi et al., 2015).

Landmark Loss

We define the landmark loss as the average distance between the detected 2D landmarks and the projected landmarks from the predicted 3D model:

E_lan = (1/N) Σ_i w_i || q_i − Π( R v_i + t ) ||²,

where q_i are the detected landmarks and Π(R v_i + t) denotes that the corresponding mesh vertex v_i is rigidly transformed by the pose (R, t) and projected by the camera Π. The weighting w_i controls the importance of each keypoint: we set w_i = 50 for keypoints located in the eye, nose, and mouth regions, and 1 for the others.


To ensure the plausibility of the reconstructed faces, we apply regularization to the shape and texture parameters:

E_reg = w_α ||α||² + w_β ||β||²,

with fixed weights w_α and w_β.
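On plain arrays (i.e., outside the differentiable renderer), the individual loss terms can be expressed directly in numpy. The sketch below mirrors the definitions above; the truncation threshold and regularization weights are placeholders, since their values are elided here:

```python
import numpy as np

def photo_loss(I, I_hat, mask):
    """Pixel-wise l1 RGB photometric loss over the facial mask M."""
    return np.abs(I - I_hat)[mask].sum() / mask.sum()

def depth_loss(D, D_hat, mask, tau=4.0):
    """Truncated l2 depth loss: per-pixel squared error clipped at `tau`
    (placeholder threshold) for robustness to depth outliers."""
    err = np.minimum((D - D_hat) ** 2, tau)
    return err[mask].sum() / mask.sum()

def landmark_loss(q, q_hat, w):
    """Weighted mean squared 2D landmark distance; w_i is 50 for
    eye/nose/mouth keypoints and 1 elsewhere."""
    return (w * ((q - q_hat) ** 2).sum(axis=1)).sum() / w.sum()

def reg_loss(alpha, beta, w_a=1.0, w_b=1.0):
    """Tikhonov regularization on shape/albedo parameters
    (placeholder weights)."""
    return w_a * (alpha ** 2).sum() + w_b * (beta ** 2).sum()
```

In the actual pipeline these terms are computed on rendered frames inside the TensorFlow graph so that gradients flow back to the 3DMM, lighting, and pose parameters.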

Implementation Details

For efficiency, we use albedo maps at a reduced resolution during the optimization. We render RGB-D images and compute the pixel losses at the same resolution as the input depth images. The weightings in Eq. (1) are set to fixed empirical values. We use the Adam optimizer (Kingma and Ba, 2014) in TensorFlow to update the parameters for 150 iterations, with the learning rate decaying exponentially every 10 iterations.
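The stated schedule amounts to multiplying the learning rate by a fixed decay factor every 10 iterations; both the base rate and the factor below are placeholders, as the actual values are elided here:

```python
def lr_schedule(base_lr, step, decay_every=10, decay_rate=0.9):
    """Staircase exponential learning-rate decay, applied every
    `decay_every` iterations. base_lr and decay_rate are placeholders."""
    return base_lr * decay_rate ** (step // decay_every)
```

In TensorFlow, the equivalent built-in is `tf.keras.optimizers.schedules.ExponentialDecay` with `staircase=True`.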

Relation to Existing Approaches

The differences between our approach and state-of-the-art 3DMM fitting approaches are listed in Table 1. Result comparisons are presented in Sec. 8.2. Note that our implementation will be publicly available and can easily be configured into settings equivalent to the other approaches by changing the combinations of input data and loss terms.

Method                          Input   Loss Terms                                   Optimizer
Ours                            RGB-D   photo + depth + identity + landmark + reg    DR-based
Face2Face (Thies et al., 2016)  RGB     photo + landmark + reg                       Gauss-Newton
Thies et al. (2015)             RGB-D   photo + depth + landmark + reg               Gauss-Newton
Table 1. Different 3DMM fitting approaches. “DR-based” stands for differentiable renderer based optimizer.

5.3. Morphable Model Augmentation

As the constraints incorporated in the optimization are rich, we found the expressive capacity of a linear 3DMM constructed using conventional approaches to be very limited. We here present an augmentation approach to effectively boost the 3DMM capacity. Our approach is motivated by the observation that human faces are mostly not symmetrical, which causes ambiguities when aligning face models. The reason is that when aligning two models, the relative rotation and translation between them is determined by minimizing the errors at some reference points on the models. Different reference points may lead to different alignment results, and there are no perfect reference points due to the asymmetrical structure of human faces. This suggests that we can perturb the relative pose between two aligned models to get an “alternative” alignment. In this way, we actually obtain additional samples for PCA, since the new alignments introduce new morphing targets. Furthermore, we can use a set of perturbation operations, including pose perturbation, mirroring, and region replacement, to augment the aligned models. Based on the large amount of generated data, we propose a stochastic iterative algorithm to construct a 3DMM that compresses more capacity into the leading dimensions of the bases.

Data Generation and Perturbation

Starting from the 200 aligned face shape models, our data generation and perturbation process consists of the following steps:

  • Region Replacement with Perturbation. We first replace the nose region of each model with that of other models, with a rotation perturbation along the pitch angle (uniformly sampled within a small range). The mouth region is processed in the same way. For the eye region, we apply replacement without perturbation. The different perturbations are empirically designed to minimize the visual defects introduced during processing. The facial regions used in this step are shown in Fig. 5.

  • Rigid Transformation Perturbation. We then apply rigid transformation perturbations to each face model, uniformly sampling small ranges for rotation along the yaw/pitch/roll angles, for translation along each of the three axes, and for scale.

  • Mirroring. Finally, we mirror all the generated face models along the model's local coordinate system. In this way, we obtain over 100,000 face models in total.

Figure 5. Masks for region replacement.

Stochastic Iterative 3DMM Construction

Our iterative 3DMM construction algorithm is presented in Alg. 1. There are two levels of loops. We maintain a model set S for 3DMM construction and update it inside the loops. In each iteration of the outer loop, we sample a test set T from the whole generated dataset. In each iteration of the inner loop, we use the 3DMM constructed from S to fit the models in T, and add the models in T with the largest fitting errors into S. The convergence threshold is empirically set such that the inner loop usually converges in fewer than 5 iterations. Note that in the inner loop, a model sample in T can be added into S several times. In this case, constructing a 3DMM from the final S differs from directly performing PCA on the whole dataset, as the data population is changed. Our algorithm encourages more data variance to be captured by fewer principal components (note that in each iteration we construct the 3DMM using only the principal components reaching the target cumulative explained variance).

Params: test-set size K; number of hardest samples n per iteration
Face model set S ← initial 200 models;
repeat
    Randomly sample (without replacement) a test set T of K face models from the whole dataset (over 100,000 models);
    while the median fitting error e is above the convergence threshold do
        Apply Principal Component Analysis (PCA) on S;
        Select the principal components reaching the target cumulative explained variance to get the 3DMM bases B;
        Fit the models in T using bases B;
        Select the n models with the largest fitting errors as set A;
        Add A and the corresponding mirrored models into S;
        e ← the median fitting error of the models in T;
    end while
until the whole dataset is sampled;
Output: PCA bases B.
ALGORITHM 1 Iterative 3DMM Construction Algorithm
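Algorithm 1 can be sketched in numpy as below. The test-set size, the number of hardest samples added per iteration, and the cumulative-variance ratio are placeholders, and the median-error convergence test is simplified to a fixed inner-iteration cap:

```python
import numpy as np

def pca_bases(models, var_ratio=0.999):
    """PCA via SVD; keep the leading components reaching `var_ratio`
    cumulative explained variance (placeholder ratio)."""
    mu = models.mean(0)
    _, S, Vt = np.linalg.svd(models - mu, full_matrices=False)
    var = S ** 2 / (S ** 2).sum()
    k = int(np.searchsorted(np.cumsum(var), var_ratio)) + 1
    return mu, Vt[:k]  # mean and (k, d) orthonormal bases

def fit_error(model, mu, B):
    """Reconstruction error when fitting `model` with orthonormal bases B."""
    r = model - mu
    return np.linalg.norm(r - B.T @ (B @ r))

def build_3dmm(initial, dataset, rng, test_size=50, n_add=5, max_inner=5):
    """Stochastic iterative 3DMM construction (sketch of Alg. 1).

    `test_size`, `n_add`, `max_inner` stand in for the elided parameters.
    """
    S = [m for m in initial]
    pool = rng.permutation(len(dataset))
    for start in range(0, len(pool), test_size):            # outer loop
        T = dataset[pool[start:start + test_size]]
        for _ in range(max_inner):                          # inner loop
            mu, B = pca_bases(np.asarray(S))
            errs = np.array([fit_error(m, mu, B) for m in T])
            worst = np.argsort(errs)[-n_add:]               # hardest samples
            S.extend(T[i] for i in worst)                   # may repeat samples
    return pca_bases(np.asarray(S))
```

Mirrored counterparts of the added samples are omitted here for brevity; in the full algorithm they are inserted into S alongside each hard sample.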


Figure 6. Fitting errors (in cm) with different versions of bases. Note that the bases without augmentation are constructed from 200 models and thus have at most 199 dimensions. The results show that the expressive power of our bases is much larger than that of the original bases.
Figure 7. The recovered geometries with the two versions of bases. The bases obtained with our method preserve more personalized facial geometry (note the facial silhouette, mouth shape, and nose shape).

In order to validate the effectiveness of our technique, we design a numerical evaluation experiment with the help of the BFM 2009 model (Paysan et al., 2009). Note that the source face models in our dataset are all East Asians, while those in BFM are mostly not Asian. The domain gap between the two datasets provides a good benchmark for cross validation (our goal is not to model cross-ethnicity fitting, but to use the relative fitting errors between different versions of 3DMM to evaluate their expressive power). For each BFM basis, we compute two 3D face models using the positive and negative standard deviation values. A total of 398 BFM face models are obtained in this way. We register the BFM face models to our mesh topology using the Wrap3 software (R3ds, 2020). We use the PCA bases extracted from our dataset to fit the obtained BFM face models and measure the fitting errors. Fig. 6 compares the fitting errors of the augmented bases and the original bases. If only 100 bases are used, the augmented version has no advantage over the original version. As the number of bases grows, the augmented version clearly outperforms the original version. Note that the maximum number of bases of the original version is 200 since there are only 200 source models for 3DMM construction. For the augmented version, thousands of bases could be obtained since there are over 100,000 models after augmentation. Since our iterative algorithm emphasizes the expressive power of the principal components reaching the target cumulative explained variance, most of the expressive capacity is compressed into these components. In our experiments, we found the final number of such components across different runs is generally around 500. Thus we use 500 bases throughout this paper. Fig. 7 shows a comparison of the facial geometries obtained using our optimization algorithm in Sec. 5.2 with different versions of PCA bases.
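The kind of truncated-basis evaluation reported in Fig. 6 can be sketched as below. This is a hedged reconstruction with our own names; the paper's exact error metric and registration step are not reproduced here.

```python
import numpy as np

def mean_fit_error(models, mean, basis, n_bases):
    """Mean per-vertex Euclidean fitting error (in the dataset's units,
    e.g. cm) when fitting with only the first n_bases components.

    models: (m, 3V) flattened registered face models.
    basis:  (3V, K) PCA basis with orthonormal columns.
    """
    B = basis[:, :n_bases]                      # truncate the basis
    recon = mean + ((models - mean) @ B) @ B.T  # closed-form least squares
    diff = (models - recon).reshape(len(models), -1, 3)
    return float(np.linalg.norm(diff, axis=2).mean())
```

Sweeping `n_bases` (e.g. 100, 200, 500) for both the original and the augmented bases yields curves of the shape compared in Fig. 6.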

Relation to Localized 3DMM

Several approaches construct a separate 3DMM for each facial region (Blanz and Vetter, 1999; Tena et al., 2011; Neumann et al., 2013). These localized 3DMMs obtain larger capacities than global models by decoupling the deformation correlations between different facial regions. The region replacement augmentation in our approach is in the same spirit as localized 3DMMs, explicitly generating samples with possible combinations of facial regions from different subjects. Compared with localized models, our 3DMM avoids online fusion of facial regions and is thus more efficient. Besides, our perturbation scheme and iterative 3DMM construction algorithm could be applied to localized models to improve their capacities as well. In this paper, we employ a global model for efficiency considerations.

6. Facial Reflectance Synthesis

In this section, we present our hybrid approach to synthesize high-resolution albedo and normal maps. We notice that super-resolution based approaches (Yamaguchi et al., 2018; Lattas et al., 2020) cannot yield high-quality, hair-level details of the eyebrows. On the other hand, directly synthesizing high-resolution texture maps (Saito et al., 2017) may lead to overwhelming details, which also makes the rendering unrealistic. Our approach addresses these problems with the help of a pyramid-based parametric representation. Fig. 8 shows the pipeline of our approach, which we explain as follows.

Figure 8. Our albedo/normal map synthesis pipeline.

6.1. Regional Pyramid Bases

Fig. 9 illustrates the process to construct our regional pyramid bases. We first compute image pyramids consisting of two resolutions for the 200 albedo maps in our dataset. We divide the face into 8 sub-regions, indicated by the different colors in the left UV-map. The region partitioning is based on the fact that different regions have different types of skin/hair details. Denote the set of all regions as . For each region , we construct a linear PCA-based blending model. We define each sample in our dataset as a triplet , where stands for the low-resolution albedo map of region , and and are the albedo map and normal map in high resolution. The triplet is then vectorized by fetching and concatenating all pixel values from the three maps in the region. Note that during this process, the pixel indices in the three maps are recorded such that the vectorized sample can be “scattered back” into UV-map format (we use the “scatter_nd” function in TensorFlow as the “scatter back” operation). For each region , we apply PCA on the 200 vectorized samples to get the bases. Finally, the vectorized bases can be scattered back into UV-map format to obtain the blending bases , where is the low-resolution albedo basis, is the high-resolution albedo basis, is the high-resolution normal basis, and is the number of pixels within region in the corresponding resolution.
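The vectorize/PCA/scatter-back procedure for a single region can be sketched as follows. This is a hedged NumPy illustration with our own names; the paper uses TensorFlow's `scatter_nd` for the scatter-back step, which plain index assignment stands in for here.

```python
import numpy as np

def build_region_basis(albedo_lo, albedo_hi, normal_hi, mask_lo, mask_hi):
    """Pyramid PCA basis for one facial region (hedged sketch).

    albedo_lo:            (N, H_lo, W_lo, 3) low-res albedo maps.
    albedo_hi, normal_hi: (N, H_hi, W_hi, 3) high-res maps.
    mask_lo, mask_hi:     boolean UV masks selecting the region's pixels.
    Returns the mean, the basis (one column per component), and the pixel
    indices needed to scatter a vectorized sample back into UV-map format.
    """
    n = len(albedo_lo)
    idx_lo = np.flatnonzero(mask_lo)
    idx_hi = np.flatnonzero(mask_hi)
    # vectorize each subject's triplet by concatenating the region's pixels
    samples = np.concatenate([
        albedo_lo.reshape(n, -1, 3)[:, idx_lo].reshape(n, -1),
        albedo_hi.reshape(n, -1, 3)[:, idx_hi].reshape(n, -1),
        normal_hi.reshape(n, -1, 3)[:, idx_hi].reshape(n, -1),
    ], axis=1)
    mean = samples.mean(axis=0)
    _, s, vt = np.linalg.svd(samples - mean, full_matrices=False)
    return mean, vt.T, (idx_lo, idx_hi)

def scatter_back(region_vec, idx, shape):
    """Inverse of the vectorization for one map (cf. tf.scatter_nd)."""
    out = np.zeros((shape[0] * shape[1], 3))
    out[idx] = region_vec.reshape(len(idx), 3)
    return out.reshape(shape[0], shape[1], 3)
```

Because the three maps share one coefficient vector per sample, a single set of blending parameters jointly describes the low-res albedo, high-res albedo, and high-res normal content of the region.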

Figure 9. Construction of the regional pyramid bases.

The constructed regional pyramid bases have several advantages over conventional bases. First, their expressive capacity is larger than that of global linear bases, while each region can be processed individually to accelerate the runtime. Second, the bases capture variations in both the albedo and the normal map. Third, incorporating multiple resolutions emphasizes more structural information in the extracted bases. With the pyramid bases, we can perform parametric fitting on the low resolution and directly apply the same parameters to the high-resolution bases to obtain high-resolution albedo and normal maps. This not only reduces computation, but also generally yields higher-quality results than directly fitting on the high resolution. Note that the albedo bases employed in our geometric fitting procedure (Sec. 5.2) are the conventional global model, while the bases used in this section, dedicated to reflectance synthesis, are the regional pyramid model. The reason is that we found less powerful albedo bases impose the optimization constraints more on the geometric parameter estimation than on the texture parameter estimation, whereas more powerful albedo bases tend to result in overfitted textures but underfitted geometries.

6.2. Regional Fitting

Since the albedo parameters obtained in Sec. 5.2 are based on conventional global bases, the resulting albedo maps are not satisfactory due to limited expressive power. Here we directly extract textures from the source images using the shape and poses estimated in Sec. 5.2. A model-based delighting using the lighting parameters estimated in Sec. 5.2 is then applied to the extracted textures, followed by unwrapping and blending to yield an initial albedo map . We use the low-resolution regional bases to fit the initial albedo map:

where denotes the total variation function. Note that the total variation term is essential to eliminate artifacts in the resulting albedo maps near the boundaries between regions. After obtaining the fitted parameters, we can directly compute a high-resolution albedo map and a normal map as

With the help of the regional pyramid bases, the different types of skin/hair details in different regions are separately preserved via the high-resolution bases, while the fitting process on the low resolution makes the algorithm focus on major facial structures, e.g., the shape of the eyebrows and lips. Since the parametric representation is based on a linear blending model, the results are usually over-smoothed (see Fig. 17). We next present our detail synthesis step to refine the albedo and normal maps.
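The two-step use of a pyramid basis can be sketched as follows: fit the blending coefficients against the low-resolution albedo rows only, then reuse them on the high-resolution rows of the same basis. This is a hedged sketch with our own names; for brevity, the total variation regularizer of the actual objective is replaced by a simple ridge term.

```python
import numpy as np

def pyramid_fit(init_albedo_lo, mean, basis, n_lo, reg=1e-2):
    """Fit region coefficients on the low-res albedo, then reuse them on
    the high-res albedo/normal rows of the same pyramid basis.

    mean, basis: pyramid model whose first n_lo rows correspond to the
    low-res albedo pixels; the rest are high-res albedo and normal pixels.
    reg stands in for the paper's total variation term (hedged choice).
    """
    B_lo = basis[:n_lo]
    b = init_albedo_lo - mean[:n_lo]
    k = basis.shape[1]
    # regularized least squares: (B^T B + reg I) w = B^T b
    w = np.linalg.solve(B_lo.T @ B_lo + reg * np.eye(k), B_lo.T @ b)
    # the same coefficients drive the high-resolution maps
    hi = mean[n_lo:] + basis[n_lo:] @ w
    return w, hi
```

The point of the sketch is the shared coefficient vector `w`: the low-res fit selects the major facial structures, and the high-res rows of the basis supply the corresponding fine detail for free.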

6.3. Detail Synthesis

We adopt two refinement networks to synthesize details for the albedo and normal maps, respectively. The refinement networks employ the architecture of a GAN-based image translation model, pix2pix (Isola et al., 2017). As shown in Fig. 8, for albedo refinement, the network takes the fitted albedo map as input and outputs a refined albedo map at the same resolution. For normal refinement, the refined albedo map and the fitted normal map are concatenated along the channel dimension; the refinement network takes this concatenation as input and outputs a refined normal map.

During training, we first use facial region replacement and skin color transfer (Reinhard et al., 2001) to augment the 200 high-quality albedo/normal maps (from the dataset used to construct the 3DMM) into 4000 maps, which serve as ground-truth supervision for training the two networks. We then perform regional fitting (Sec. 6.2) on the 4000 maps to get the fitted albedo/normal maps, which serve as the network inputs during training. We only use the facial regions of the UV maps for computing training losses. Similar to pix2pix (Isola et al., 2017), we keep the loss and GAN loss in both networks. For albedo refinement, we additionally apply a total variation loss to reduce artifacts and improve skin smoothness. The weights for the , GAN, and total variation losses are , , and , respectively. For normal refinement, we additionally employ a pixel-wise cosine distance between the predictions and the ground-truth maps to increase the accuracy of normal directions. The weights for the , GAN, and cosine distance losses are , , and , respectively. The networks are trained with the Adam optimizer for iterations.
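The individual loss terms described above can be sketched as plain functions (generator side only; the adversarial term is passed in as a precomputed scalar). This is a hedged illustration: the paper's loss weights are not reproduced, so they default to 1, and all names are ours.

```python
import numpy as np

def l1_loss(pred, target):
    """Mean absolute (L1) reconstruction error."""
    return float(np.abs(pred - target).mean())

def tv_loss(img):
    """Anisotropic total variation over an (H, W, C) image."""
    return float(np.abs(np.diff(img, axis=0)).mean()
                 + np.abs(np.diff(img, axis=1)).mean())

def cosine_loss(pred_n, gt_n, eps=1e-8):
    """Pixel-wise cosine distance between predicted and ground-truth normals."""
    p = pred_n / (np.linalg.norm(pred_n, axis=-1, keepdims=True) + eps)
    g = gt_n / (np.linalg.norm(gt_n, axis=-1, keepdims=True) + eps)
    return float((1.0 - (p * g).sum(axis=-1)).mean())

def albedo_gen_loss(pred, gt, gan_term, w_l1=1.0, w_gan=1.0, w_tv=1.0):
    """Albedo generator loss: L1 + GAN + total variation (weights hedged)."""
    return w_l1 * l1_loss(pred, gt) + w_gan * gan_term + w_tv * tv_loss(pred)
```

The normal-refinement loss would combine `l1_loss`, the GAN term, and `cosine_loss` in the same fashion.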

Relation to Existing Approaches

Three recent CNN-based approaches can be adopted to synthesize high-resolution facial UV-maps: Yamaguchi et al. (2018), GANFIT (Gecer et al., 2019), and AvatarMe (Lattas et al., 2020). However, these approaches cannot produce satisfactory results in our case. GANFIT (Gecer et al., 2019) needs about times more training data than ours to train a GAN as the nonlinear parametric representation of texture maps. In their work, the texture maps are obtained from unwrapped photos, where shadings and specular highlights are not removed. In our system, the albedo maps and normal maps are created with high-quality artistic effort, where shadings and specular highlights are completely removed and hair-level details are preserved. It would be rather difficult to scale our data to their amount while keeping such high data quality. Regarding the other two approaches, we also tried a super-resolution based network as in Yamaguchi et al. (2018) and pure CNN-based synthesis as in AvatarMe (Lattas et al., 2020), and found the results obtained with their approaches generally inferior to ours. We present some comparisons in Sec. 8.3.

7. Full Head Rig Creation and Rendering

Head Completion

Although the 200 scanned models in our dataset are full head models, there are usually no reliable geometric constraints beyond the facial region for an RGB-D selfie user. We employ an algorithm to automatically complete a full head model given the recovered facial model. The regions involved in our algorithm are:

  • A: facial region;

  • B: back head region;

  • C: intermediate region;

  • D: overlapped region between A and C.

Our goal is to compute a full head shape such that region A matches the facial shape and region B matches a reference back head shape. The reason for using a reference shape for the back head region B is that it further eases the attachment of accessories like hair models. To this end, we construct a head morphable model using only regions B and C of the 200 source models. Note that this model does not need expressive power as strong as the face model, so we do not employ the technique in Sec. 5.3 but directly use PCA to extract the bases. Given the recovered facial shape of a user, we apply a ridge regression similar to Sec. 5.1 to get the head 3DMM parameters, using constraints from regions B and D. The full head model is then obtained by combining the resulting shape (regions B and C) with facial region A.
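The ridge-regression completion step can be sketched as below. This is a hedged illustration with our own names: the constrained entries are expressed as flat coordinate indices, and the final region-A stitching is approximated by overwriting the overlap region D (a real pipeline would blend region A in smoothly).

```python
import numpy as np

def complete_head(face_shape, head_mean, head_basis, idx_b, idx_d, ref_back, lam=1.0):
    """Head completion via ridge regression (hedged sketch).

    face_shape: recovered facial vertices, flat (3V,) in head topology;
                only entries indexed by idx_d are assumed valid here.
    head_mean, head_basis: PCA head model built from regions B and C.
    idx_b: flat indices of back-head region B (constrained to ref_back).
    idx_d: flat indices of overlap region D (constrained to the face).
    """
    idx = np.concatenate([idx_b, idx_d])
    target = np.concatenate([ref_back, face_shape[idx_d]])
    A = head_basis[idx]
    b = target - head_mean[idx]
    k = head_basis.shape[1]
    # ridge regression: (A^T A + lam I) w = A^T b
    w = np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ b)
    head = head_mean + head_basis @ w
    # keep the recovered facial shape exactly in the overlap region
    head[idx_d] = face_shape[idx_d]
    return head
```

The ridge term keeps the head parameters close to the model mean where the constraints are weak, which is what makes a partial set of constraints (B and D only) sufficient to produce a plausible full head.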


We perform hairstyle classification on the user’s front photo (using a MobileNet (Howard et al., 2017) image classification model trained on labeled front photos) and attach the corresponding hair model (created by artists in advance) to the head model according to the predicted hairstyle label. There are in total 30 hair models with different hairstyles in our system (see supplementary materials). For eyeballs, we use template models and calculate their positions and scales based on reference points on the head model. For teeth, we employ an upper teeth model and a lower teeth model. The upper teeth model is placed according to reference points near the nose, and it remains still when the facial expression changes. The lower teeth model is placed according to reference points on the chin; when the mouth opens or closes, the lower teeth model moves with the chin. Note that the accessory models are not the focus of this work; their modeling and animation can be found in dedicated research (Zoss et al., 2018; Wu et al., 2016; Bérard et al., 2019, 2016; Zoss et al., 2019; Velinov et al., 2018).

Expression Rigging

We adopt a simple approach similar to Hu et al. (2017) to transfer generic FACS-based blendshapes to the target model to obtain expression blendshapes. Note that our approach can be extended to further acquire user’s expression data and construct personalized blendshapes like Ichim et al. (2015).


The recovered full head mesh, together with the high-quality albedo and normal maps, can be rendered with any physically based renderer. In this work, we show results rendered with Unreal Engine 4 (UE4), using the material composition templates provided by the engine (EpicGames, 2020). Since we do not model a specular map in our approach, we use the same specular map from the material template for rendering all the results in this paper.

8. Results and Evaluation

Processing Step               Runtime
Landmark Detection            (during acquisition)
Coarse Screening              (during acquisition)
Bilateral Filtering           (during acquisition)
Frame Selection               0.2s
Initial Model Fitting         0.1s
Initial Texture               0.5s
Optimization                  10s
Regional Parametric Fitting   1.5s
Detail Enhancement            1s
Head Completion               0.5s
Accessories                   0.1s
Expression Rigging            1s
Total                         15s
Table 2. The runtime for each step in our system. Note that the first three steps are computed during data acquisition and thus do not need additional processing time. GPU and multi-thread CPU are used.
Avatar Creation System   Acquisition Time   Processing Time   Manual Interaction
(Ichim et al., 2015)     10 minutes         1 hour            15 minutes
(Cao et al., 2016)       10 minutes         1 hour            needed
(Hu et al., 2017)        1 second           6 minutes
Ours                     10 seconds         15 seconds
Table 3. Time comparison with other avatar creation systems.

8.1. Acquisition and Processing Time

The selfie data acquisition typically takes less than 10 seconds (200-300 frames). The total processing time after acquisition is about 15 seconds. Note that some of the processing steps, such as real-time landmark detection, coarse screening, and bilateral filtering, are computed on the smartphone client while the user is taking the selfie. The preprocessed data are streamed to a server via WiFi during acquisition, and the remaining steps are computed on the server. Table 2 shows the runtime on our server with an Nvidia Tesla P40 GPU and an Intel Xeon E5-2699 CPU (22 cores). Note that the frame selection and expression blendshape generation are implemented with multi-thread acceleration, and the total runtime is largely reduced thanks to parallel processing. Table 3 shows a time comparison with other avatar creation systems. In terms of total acquisition and processing time, our system provides a convenient and efficient solution for users to create high-quality digital humans.

Figure 10. Visual comparison with state-of-the-art approaches. As indicated by the red arrows, our method generates face geometries with more accurate cheek silhouettes and mouth shapes more faithful to the input photos. In comparison, the mouth shapes obtained by N-ICP lack personalized features and are similar to each other across all subjects. For fair comparison, all results are obtained with our 3DMM.

8.2. Quality of Facial Geometry

We evaluate the quality of our recovered facial shapes in extensive experimental settings. The experiments include quantitative and qualitative comparisons of different variants of our approach and existing methods such as Face2Face (Thies et al., 2016; Hu et al., 2017; Yamaguchi et al., 2018), GANFIT (Gecer et al., 2019; Lattas et al., 2020), N-ICP (Weise et al., 2011; Bouaziz et al., 2016), etc. Note that all the results are obtained with our 3DMM bases for fair comparisons.

Figure 11. Error maps for different approaches (mm).
Figure 12. Visual comparison for different variants of our method. The results obtained with multiview RGB-D data and identity loss (the second column from right) are generally more faithful than other results. We also show the results obtained without initial model fitting (the rightmost column), which are usually flawed and inferior to the full algorithm.
Method Average ranking percentage
Face2Face (Thies et al., 2016) 30.3%
GANFIT (Gecer et al., 2019) 27.3%
N-ICP (Bouaziz et al., 2016) 16.6%
Ours 14.2%
Table 4. Identity verification results using rendered images. Lower ranking percentage means the rendered images are more similar to user photos from the point of view of a face recognition network.
Figure 13. Examples of our synthesized albedo and normal maps.

Quantitative Evaluation

We use the same workflow as the production of our dataset to manually create ground-truth models of two users. Since ground truth obtained this way is very expensive, we only perform numerical evaluations on these two models to get quantitative observations. The geometries recovered from the RGB-D selfie data with different approaches are evaluated against them; the results are shown in Fig. 11. Our method yields the lowest mean errors, closely followed by N-ICP and the single-view variant of our method (single rgbd+id). Compared with N-ICP, our method performs better on detailed facial geometries near the eyes, nose, and mouth. This conforms to our motivation that appearance constraints (photo loss and identity perceptual loss) help capture more accurate facial features.

Identity Verification using Rendered Images

To demonstrate the ability of our method to capture high-level perceptual identity features, we design a novel face verification experiment for further numerical evaluation. We collect selfie data from 30 subjects and put their selfie photos into a large face image dataset with over photos of Asian people. The geometry models of the 30 subjects are then obtained using different approaches, and we use UE4 to render the recovered models into realistic images. For a fair comparison of the geometry models, we use the same albedo and normal maps for all models of a given user. The rendered images are compared with all images in the large dataset using a face recognition network (Deng et al., 2019). We then calculate the ranking of each user’s rendered image with respect to his/her real selfie photo among all face images. The average ranking percentages are shown in Table 4. Our method generally yields more recognizable shapes than the other approaches.
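The ranking-percentage metric of Table 4 can be sketched as below. This is a hedged illustration with our own names; it assumes ArcFace-style cosine similarity between face embeddings, while the exact gallery construction and tie-breaking of the paper are not reproduced.

```python
import numpy as np

def ranking_percentage(render_emb, selfie_emb, gallery_embs):
    """Rank of the subject's real selfie among gallery faces when queried
    with the rendered avatar's embedding (cosine similarity).
    Lower percentage means the rendered face is closer to the real identity.
    """
    def normed(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    q = normed(render_emb)
    # index 0 holds the subject's real selfie, the rest are gallery faces
    sims = normed(np.vstack([selfie_emb[None], gallery_embs])) @ q
    rank = int((sims > sims[0]).sum())  # gallery faces scoring above the selfie
    return 100.0 * rank / len(sims)
```

Averaging this percentage over the 30 subjects gives one row of Table 4 for each reconstruction approach.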

Qualitative Evaluation

Fig. 12 shows two examples of the shape models obtained using different variants of our method. The version with multiview RGB-D data generally outperforms the other variants. Besides, as shown in the figure, random initialization of our optimization can lead to flawed models; the initial fitting in Sec. 5.1 improves the system’s robustness. We further compare our method with Face2Face (Thies et al., 2016; Hu et al., 2017; Yamaguchi et al., 2018), GANFIT (Gecer et al., 2019; Lattas et al., 2020), and N-ICP (Weise et al., 2011; Bouaziz et al., 2016) in Fig. 10. In general, both our method and N-ICP reconstruct more accurate facial shapes than the methods using only RGB data (Face2Face and GANFIT). This can be clearly observed from the silhouettes near the cheek region. Looking more closely, our method recovers more faithful and personalized facial shapes than N-ICP, especially near the mouth region. The mouth shapes obtained with N-ICP are similar among all faces, while our results preserve personalized mouth features and are more faithful to the photos. This can be explained by the photometric loss and identity loss additionally incorporated in our method.

Figure 14. The intermediate results of our albedo synthesis algorithm. The unwrapped result (left) is extracted from the low-resolution input photos; it appears dirty due to imperfect lighting removal. The result after regional fitting (middle) is much cleaner and of higher quality. The final result (right) contains more realistic details.
Figure 15. Overlay results with our synthesized albedo maps. Note that the eyebrow and mouth shapes in our albedo maps are faithful to input photos.

8.3. Quality of Facial Reflectance


Our method can produce albedo and normal maps with high-quality, realistic details while preserving the major facial features of the users. Fig. 13 shows several examples of our albedo and normal maps; hair-level details are clearly visible. Fig. 15 shows the overlay results with the synthesized albedo maps. The major facial features are consistent between the synthesized albedo maps and the input photos, especially in the eyebrow region. More importantly, the hair-level details near the eyebrow region and the pore-level skin details are clearly visible. Note that in the input RGB-D images, actual skin micro/meso-structures and hair-level details are hardly visible (see Fig. 14, left column). The skin/hair details synthesized by our approach are thus plausible hallucinations, which are critical for realistic rendering of digital humans. We further show the intermediate results of our albedo synthesis algorithm in Fig. 14. The unwrapped result (left) is extracted from the input photos; the close-ups show that it is blurry and low-quality, and the overall map appears dirty due to the imperfect lighting removal step. After the regional fitting step, the result is much cleaner but lacks hair/pore details. The final refined albedo map preserves the major facial features (e.g., the shape of the eyebrows) while containing more high-quality, realistic details.

Pix2pix vs pix2pix-HD

We first compare two variants of our detail enhancement model, using pix2pix (Isola et al., 2017) and pix2pix-HD (Wang et al., 2018), respectively. An example is shown in Fig. 16. Detail enhancement with pix2pix generally produces superior results to pix2pix-HD, especially around the eyebrow and cheek regions. Besides, we found pix2pix-HD more difficult to train, with generally worse results, possibly due to the small amount of training data. Note that pix2pix-HD is the model adopted in AvatarMe (Lattas et al., 2020).

Figure 16. Comparison between pix2pix and pix2pix-HD. The results obtained with pix2pix-HD generally exhibit uneven skin colors and obvious artifacts.
Figure 17. Comparison with the super-resolution (SR) approach. The SR-based method tends to produce unnatural high-frequency artifacts rather than natural details. In comparison, our method (right) produces realistic details.

Comparison with Super-resolution Approach

Fig. 17 shows a comparison between our method and a state-of-the-art super-resolution network (Zhang et al., 2018), trained on the same dataset as our detail enhancement network. The super-resolution network generally cannot yield hair-level details around the eyebrows. Our regional fitting algorithm produces over-smoothed eyebrow strands, which are then refined into clear hair-level details while preserving the major eyebrow shape. Note that Yamaguchi et al. (2018) employed a super-resolution model for high-resolution texture synthesis.

Figure 18. Comparison with state-of-the-art high-resolution facial texture synthesis approaches. Our result contains more realistic hair/pore details.

Comparison with State-of-the-art Approaches

We compare our results with two state-of-the-art high-resolution reflectance synthesis approaches (Yamaguchi et al., 2018; Lattas et al., 2020) in Fig. 18. The hair-level details around the eyebrows in our result are of clearly higher quality. Besides, the pore-level details on the cheek are more realistic in our result, while those of the other approaches seem noisy and unnatural.

Figure 19. Results obtained using RGB-D data captured under different lighting conditions but rendered in an identical lighting setting. The recovered facial structures are consistent across the two lighting conditions. Note that the skin color in the right rendered image is slightly more yellowish than in the left, because the photos on the right were captured with a different white balance setting; there is an inherent ambiguity in decoupling skin color from illumination in our approach.
Figure 20. Comparison to model-free reconstruction (e.g., Bellus3D (2020)). The model-free reconstructions are not topologically consistent and are prone to flaws, which cause difficulties when they are animated. The extracted textures contain undesired shadows and highlights that would result in unnatural renderings. Our results are topologically consistent and ready for animation, and the high-quality albedo/normal maps make our renderings very realistic.

8.4. Comparison to Model-free Reconstruction

We notice that some commercial systems (e.g., Bellus3D (2020)) utilize RGB-D selfies to reconstruct static 3D face models and directly extract texture maps from the input photos. These systems commonly employ a model-free reconstruction approach like KinectFusion (Newcombe et al., 2011), and the results usually look very faithful to the input photos. However, there are several drawbacks. First, as shown in Fig. 20, the reconstructed meshes are not topologically consistent and are prone to flaws, making it difficult to attach accessories and animate them. Moreover, the extracted texture maps contain shadows and highlights, which cause severe unnatural artifacts when the rendering lighting differs from the capture lighting. In comparison, our results are high-quality and ready for realistic rendering and animation (see Fig. 20).

8.5. Robustness to Different Inputs

We conduct experiments with a user taking selfies under different lighting conditions (Fig. 19). The recovered shape and reflectance remain consistent regardless of the lighting conditions and poses. Note that there is an inherent ambiguity in decoupling skin color from illumination: the skin color on the right of the figure is slightly more yellowish due to the yellower input photo. However, the facial structures (such as the eyebrow shape) in the resulting reflectance maps are consistent.

8.6. Rendered Results

Fig. 24 shows some of our results rendered with UE4. Thanks to the high-fidelity geometry and reflectance maps, the rendered results are realistic and faithful to the input faces. Note the wrinkle details on the lips, the pore-level details on the cheek, and the hair-level details of the eyebrows. Fig. 21 shows two examples of our results with attached hair models, which are retrieved from our hair model database by performing hairstyle classification on the selfie photos. More results are in the supplementary video.

Figure 21. Rendering results with hair models.
Figure 22. Snapshots of our lip-sync animation. See supplementary video.
Figure 23. Results on other ethnicities different from our 3DMM data source. The results roughly resemble the subjects but lack ethnicity-specific features.
Figure 24. Rendered results with the UE4 rendering engine. Our method recovers faithful face models with high-quality, realistic hair/pore/wrinkle details. Note that the selfie photos use perspective camera projection, while the rendered results use orthographic projection.

8.7. Limitations

Our approach does not take cross-ethnicity generalization into account. Since our 3DMM is constructed from East Asian subjects, our evaluations are designed on the same ethnic population. We tried directly applying our approach to people of other ethnicities; Fig. 23 shows two examples. Although the results roughly resemble the captured subjects, some ethnicity-specific facial features are not recovered. Besides, our approach cannot model facial hair such as moustaches and beards. Children and elderly people beyond the age range of our dataset are also not handled by our approach.

9. Applications

The avatars created with our method are animation-ready. Animations can be retargeted from existing characters (Bouaziz and Pauly, 2014), interactively keyframe-posed (Ichim et al., 2015), or even transferred from facial tracking applications (Weise et al., 2011). We demonstrate an application of lip-sync animation in the supplementary video, where a real-time multimodal synthesis system (Yu et al., 2019) is adopted to simultaneously synthesize speech and expression blendshape weights given input texts. Fig. 22 shows several snapshots of the animation. The application enables users to conveniently create high-fidelity, realistic digital humans that can be interacted with in real time. We also include another lip-sync animation result driven by speech inputs (Huang et al., 2020) in the supplementary video.

10. Conclusion

We have introduced a fully automatic system that can produce high-fidelity 3D facial avatars with a consumer RGB-D selfie camera. The system is robust, efficient, and consumer-friendly. The total acquisition and processing for a user can be finished in less than 30 seconds. The generated geometry models and reflectance maps are of very high fidelity and quality; with a physically based renderer, these assets can be used to render highly realistic digital humans. Our system provides an excellent consumer-level solution for users to create high-fidelity digital humans.

Future Work   Animation with generic expression blendshapes is not always satisfactory. We intend to extend our system to capture personalized expression blendshapes as in Ichim et al. (2015). Besides, the current system employs very simple approaches to handle accessories like hair, eyeballs, and teeth. We intend to incorporate more advanced methods to model these accessories.

We would like to thank Cheng Ge and other colleagues at Tencent NExT Studios for valuable discussions; Shaobing Zhang, Han Liu, Caisheng Ouyang, Yanfeng Zhang, and other colleagues at Tencent AI Lab for helping us with the videos; and all the subjects for allowing us to use their selfie data for testing.


  • O. Alexander, M. Rogers, W. Lambeth, M. Chiang, and P. Debevec (2009) The digital emily project: photoreal facial modeling and animation. In ACM SIGGRAPH 2009 Courses, Cited by: §1, §2.
  • K. S. Arun, T. S. Huang, and S. D. Blostein (1987) Least-squares fitting of two 3-d point sets. IEEE Transactions on pattern analysis and machine intelligence (5), pp. 698–700. Cited by: §4.
  • T. Beeler, B. Bickel, P. Beardsley, B. Sumner, and M. Gross (2010) High-quality single-shot capture of facial geometry. ACM Trans. Graph. (Proc. SIGGRAPH) 29 (4), pp. 1–9. Cited by: §2, §3.
  • Bellus3D (2020) External Links: Link Cited by: Figure 20, §8.4.
  • P. Bérard, D. Bradley, M. Gross, and T. Beeler (2016) Lightweight eye capture using a parametric model. ACM Trans. Graph. (Proc. SIGGRAPH) 35 (4), pp. 1–12. Cited by: §7.
  • P. Bérard, D. Bradley, M. Gross, and T. Beeler (2019) Practical person-specific eye rigging. In Computer Graphics Forum, Vol. 38, pp. 441–454. Cited by: §7.
  • V. Blanz and T. Vetter (1999) A morphable model for the synthesis of 3d faces. In Proc. SIGGRAPH, pp. 187–194. Cited by: §2.1, §3, §5.1, §5.3.
  • V. Blanz and T. Vetter (2003) Face recognition based on fitting a 3d morphable model. IEEE Transactions on pattern analysis and machine intelligence 25 (9), pp. 1063–1074. Cited by: §2.2.
  • J. Booth, A. Roussos, S. Zafeiriou, A. Ponniah, and D. Dunaway (2016) A 3d morphable model learnt from 10,000 faces. In Proc. CVPR, pp. 5543–5552. Cited by: §2.1.
  • S. Bouaziz and M. Pauly (2014) Semi-supervised facial animation retargeting. Technical report Cited by: §9.
  • S. Bouaziz, A. Tagliasacchi, H. Li, and M. Pauly (2016) Modern techniques and applications for real-time non-rigid registration. In ACM SIGGRAPH Asia 2016 Courses, pp. 1–25. Cited by: §2.2, §8.2, §8.2, Table 4.
  • S. Bouaziz, Y. Wang, and M. Pauly (2013) Online modeling for realtime facial animation. ACM Trans. Graph. (Proc. SIGGRAPH) 32 (4), pp. 1–10. Cited by: §2.2.
  • P. J. Burt and E. H. Adelson (1983) A multiresolution spline with application to image mosaics. ACM Trans. Graph. 2 (4), pp. 217–236. Cited by: §5.1.
  • C. Cao, Y. Weng, S. Zhou, Y. Tong, and K. Zhou (2014) Facewarehouse: a 3d facial expression database for visual computing. IEEE Transactions on Visualization and Computer Graphics 20 (3), pp. 413–425. Cited by: §2.4.
  • C. Cao, H. Wu, Y. Weng, T. Shao, and K. Zhou (2016) Real-time facial animation with image-based dynamic avatars. ACM Trans. Graph. (Proc. SIGGRAPH) 35 (4), pp. 1–12. Cited by: §2.4, Table 3.
  • M. Chai, L. Luo, K. Sunkavalli, N. Carr, S. Hadap, and K. Zhou (2015) High-quality hair modeling from a single portrait photo. ACM Trans. Graph. (Proc. SIGGRAPH) 34 (6), pp. 1–10. Cited by: §2.4.
  • Y. Chen, F. Wu, Z. Wang, Y. Song, Y. Ling, and L. Bao (2020) Self-supervised learning of detailed 3d face reconstruction. IEEE Transactions on Image Processing. Cited by: §2.2.
  • Y. Chen, H. Wu, F. Shi, X. Tong, and J. Chai (2013) Accurate and robust 3d facial capture using a single rgbd camera. In Proc. ICCV, pp. 3615–3622. Cited by: §2.2.
  • P. Debevec, T. Hawkins, C. Tchou, H. Duiker, W. Sarokin, and M. Sagar (2000) Acquiring the reflectance field of a human face. In Proc. of SIGGRAPH, pp. 145–156. Cited by: §2.
  • J. Deng, J. Guo, N. Xue, and S. Zafeiriou (2019) Arcface: additive angular margin loss for deep face recognition. In Proc. CVPR, Cited by: §8.2.
  • P. Dou and I. A. Kakadiaris (2018) Multi-view 3d face reconstruction with deep recurrent neural networks. Image and Vision Computing 80, pp. 80–91. Cited by: §2.2.
  • B. Egger, W. A. Smith, A. Tewari, S. Wuhrer, M. Zollhoefer, T. Beeler, F. Bernard, T. Bolkart, A. Kortylewski, S. Romdhani, C. Theobalt, V. Blanz, and T. Vetter (2020) 3D morphable face models–past, present and future. ACM Trans. Graph. Cited by: §2.
  • EpicGames (2020). Cited by: §7.
  • P. Garrido, L. Valgaerts, C. Wu, and C. Theobalt (2013) Reconstructing detailed dynamic face geometry from monocular video. ACM Trans. Graph. 32 (6), pp. 158–1. Cited by: §2.2.
  • P. Garrido, M. Zollhöfer, D. Casas, L. Valgaerts, K. Varanasi, P. Pérez, and C. Theobalt (2016) Reconstruction of personalized 3d face rigs from monocular video. ACM Trans. Graph. 35 (3), pp. 1–15. Cited by: §2.2.
  • L. A. Gatys, A. S. Ecker, and M. Bethge (2016) Image style transfer using convolutional neural networks. In Proc. CVPR, pp. 2414–2423. Cited by: §2.3.
  • B. Gecer, S. Ploumpis, I. Kotsia, and S. Zafeiriou (2019) GANFit: generative adversarial network fitting for high fidelity 3d face reconstruction. In Proc. CVPR, pp. 1155–1164. Cited by: §1, §1, §2.2, §2.3, §5.2, §6.3, §8.2, §8.2, Table 4.
  • K. Genova, F. Cole, A. Maschinot, A. Sarna, D. Vlasic, and W. T. Freeman (2018) Unsupervised training for 3d morphable model regression. In Proc. CVPR, pp. 8377–8386. Cited by: §1, §2.2, §5.2.
  • Y. Guo, J. Zhang, J. Cai, B. Jiang, and J. Zheng (2019) CNN-based real-time dense face reconstruction with inverse-rendered photo-realistic face images. IEEE Transactions on Pattern Analysis and Machine Intelligence 41 (6), pp. 1294–1307. Cited by: §2.2.
  • A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam (2017) MobileNets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861. Cited by: §4, §7.
  • L. Hu, C. Ma, L. Luo, and H. Li (2015) Single-view hair modeling using a hairstyle database. ACM Trans. Graph. (Proc. SIGGRAPH) 34 (4), pp. 1–9. Cited by: §2.4.
  • L. Hu, S. Saito, L. Wei, K. Nagano, J. Seo, J. Fursund, I. Sadeghi, C. Sun, Y. Chen, and H. Li (2017) Avatar digitization from a single image for real-time rendering. ACM Trans. Graph. (Proc. SIGGRAPH) 36 (6), pp. 1–14. Cited by: §1, §1, §2.2, §2.4, §7, §8.2, §8.2, Table 3.
  • H. Huang, Z. Wu, S. Kang, D. Dai, J. Jia, T. Fu, D. Tuo, G. Lei, P. Liu, D. Su, D. Yu, and H. Meng (2020) Speaker independent and multilingual/mixlingual speech-driven talking head generation using phonetic posteriorgrams. arXiv preprint arXiv:2006.11610. Cited by: §9.
  • A. E. Ichim, S. Bouaziz, and M. Pauly (2015) Dynamic 3d avatar creation from hand-held video input. ACM Trans. Graph. (Proc. SIGGRAPH) 34 (4), pp. 1–14. Cited by: §1, §1, §10, §2.2, §2.4, §2, §7, Table 3, §9.
  • P. Isola, J. Zhu, T. Zhou, and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. In Proc. CVPR, pp. 1125–1134. Cited by: §6.3, §6.3, §8.3.
  • A. S. Jackson, A. Bulat, V. Argyriou, and G. Tzimiropoulos (2017) Large pose 3d face reconstruction from a single image via direct volumetric cnn regression. In Proc. ICCV, pp. 1031–1039. Cited by: §2.2.
  • I. Kemelmacher-Shlizerman and R. Basri (2011) 3D face reconstruction from a single image using a single reference face shape. IEEE Transactions on Pattern Analysis and Machine Intelligence 33 (2), pp. 394–405. Cited by: §2.2.
  • D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §5.2.
  • A. Lattas, S. Moschoglou, B. Gecer, S. Ploumpis, V. Triantafyllou, A. Ghosh, and S. Zafeiriou (2020) AvatarMe: realistically renderable 3d facial reconstruction "in-the-wild". In Proc. CVPR, Cited by: §1, §1, §2.3, §2, §6.3, §6, §8.2, §8.2, §8.3, §8.3.
  • V. Lepetit, F. Moreno-Noguer, and P. Fua (2009) EPnP: an accurate O(n) solution to the PnP problem. International Journal of Computer Vision 81 (2), pp. 155. Cited by: §4.
  • H. Li, J. Yu, Y. Ye, and C. Bregler (2013) Realtime facial animation with on-the-fly correctives. ACM Trans. Graph. (Proc. SIGGRAPH) 32 (4), pp. 42–1. Cited by: §2.2.
  • L. Luo, H. Li, and S. Rusinkiewicz (2013) Structure-aware hair capture. ACM Trans. Graph. (Proc. SIGGRAPH) 32 (4), pp. 1–12. Cited by: §2.4.
  • M. Lüthi, T. Gerig, C. Jud, and T. Vetter (2017) Gaussian process morphable models. IEEE Transactions on Pattern Analysis and Machine Intelligence 40 (8), pp. 1860–1873. Cited by: §2.1.
  • K. Nagano, J. Seo, J. Xing, L. Wei, Z. Li, S. Saito, A. Agarwal, J. Fursund, and H. Li (2018) PaGAN: real-time avatars using dynamic textures. ACM Trans. Graph. (Proc. SIGGRAPH Asia) 37 (6), pp. 1–12. Cited by: §2.4.
  • T. Neumann, K. Varanasi, S. Wenger, M. Wacker, M. Magnor, and C. Theobalt (2013) Sparse localized deformation components. ACM Trans. Graph. (Proc. SIGGRAPH Asia) 32 (6), pp. 1–10. Cited by: §2.1, §5.3.
  • R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, P. Kohli, J. Shotton, S. Hodges, and A. Fitzgibbon (2011) KinectFusion: real-time dense surface mapping and tracking. In Proc. ISMAR, pp. 127–136. Cited by: §2.2, §8.4.
  • S. Paris and F. Durand (2009) A fast approximation of the bilateral filter using a signal processing approach. International Journal of Computer Vision 81 (1), pp. 24–52. Cited by: §4.
  • O. M. Parkhi, A. Vedaldi, and A. Zisserman (2015) Deep face recognition. In Proc. BMVC, Cited by: §5.2.
  • P. Paysan, R. Knothe, B. Amberg, S. Romdhani, and T. Vetter (2009) A 3d face model for pose and illumination invariant face recognition. In Proc. AVSS, pp. 296–301. Cited by: §5.3.
  • R3ds (2020). Cited by: §5.3.
  • E. Reinhard, M. Adhikhmin, B. Gooch, and P. Shirley (2001) Color transfer between images. IEEE Computer graphics and applications 21 (5), pp. 34–41. Cited by: §6.3.
  • E. Richardson, M. Sela, R. Or-El, and R. Kimmel (2017) Learning detailed face reconstruction from a single image. In Proc. CVPR, pp. 5553–5562. Cited by: §2.2.
  • S. Romdhani and T. Vetter (2005) Estimating 3d shape and texture using pixel intensity, edges, specular highlights, texture constraints and a prior. In Proc. CVPR, Vol. 2, pp. 986–993. Cited by: §2.2.
  • S. Saito, L. Hu, C. Ma, H. Ibayashi, L. Luo, and H. Li (2018) 3D hair synthesis using volumetric variational autoencoders. ACM Trans. Graph. (Proc. SIGGRAPH Asia) 37 (6), pp. 1–12. Cited by: §2.4.
  • S. Saito, L. Wei, L. Hu, K. Nagano, and H. Li (2017) Photorealistic facial texture inference using deep neural networks. In Proc. CVPR, Vol. 3. Cited by: §1, §2.3, §6.
  • M. Sela, E. Richardson, and R. Kimmel (2017) Unrestricted facial geometry reconstruction using image-to-image translation. In Proc. ICCV, pp. 1585–1594. Cited by: §2.2.
  • F. Shi, H. Wu, X. Tong, and J. Chai (2014) Automatic acquisition of high-fidelity facial performances using monocular videos. ACM Trans. Graph. (Proc. SIGGRAPH Asia) 33 (6), pp. 1–13. Cited by: §2.2.
  • J. R. Tena, F. De la Torre, and I. Matthews (2011) Interactive region-based linear 3d face models. ACM Trans. Graph. (Proc. SIGGRAPH) 30 (4), pp. 1–10. Cited by: §2.1, §5.3.
  • A. Tewari, M. Zollhöfer, P. Garrido, F. Bernard, H. Kim, P. Pérez, and C. Theobalt (2018) Self-supervised multi-level face model learning for monocular reconstruction at over 250 hz. In Proc. CVPR, Cited by: §2.2.
  • A. Tewari, M. Zollhöfer, H. Kim, P. Garrido, F. Bernard, P. Pérez, and C. Theobalt (2017) Mofa: model-based deep convolutional face autoencoder for unsupervised monocular reconstruction. In Proc. ICCV, Vol. 2, pp. 5. Cited by: §2.2.
  • J. Thies, M. Zollhöfer, M. Nießner, L. Valgaerts, M. Stamminger, and C. Theobalt (2015) Real-time expression transfer for facial reenactment. ACM Trans. Graph. (Proc. SIGGRAPH Asia) 34 (6), pp. 183–1. Cited by: §1, §1, §2.2, Table 1.
  • J. Thies, M. Zollhöfer, M. Stamminger, C. Theobalt, and M. Nießner (2016) Face2Face: real-time face capture and reenactment of rgb videos. In Proc. CVPR, pp. 2387–2395. Cited by: §2.2, §5.2, §8.2, §8.2, Table 4.
  • A. T. Tran, T. Hassner, I. Masi, and G. Medioni (2017) Regressing robust and discriminative 3d morphable models with a very deep neural network. In Proc. CVPR, pp. 1493–1502. Cited by: §2.2.
  • A. T. Tran, T. Hassner, I. Masi, E. Paz, Y. Nirkin, and G. Medioni (2018) Extreme 3d face reconstruction: seeing through occlusions. In Proc. CVPR, Cited by: §2.2.
  • L. Tran and X. Liu (2018) Nonlinear 3d face morphable model. In Proc. CVPR, Cited by: §2.1.
  • Z. Velinov, M. Papas, D. Bradley, P. Gotardo, P. Mirdehghan, S. Marschner, J. Novák, and T. Beeler (2018) Appearance capture and modeling of human teeth. ACM Trans. Graph. (Proc. SIGGRAPH Asia) 37 (6), pp. 1–13. Cited by: §7.
  • J. von der Pahlen, J. Jimenez, E. Danvoye, P. Debevec, G. Fyffe, and O. Alexander (2014) Digital ira and beyond: creating real-time photoreal digital actors. In ACM SIGGRAPH 2014 Courses, Cited by: §2.
  • T. Wang, M. Liu, J. Zhu, A. Tao, J. Kautz, and B. Catanzaro (2018) High-resolution image synthesis and semantic manipulation with conditional gans. In Proc. CVPR, Cited by: §8.3.
  • Y. Wei, E. Ofek, L. Quan, and H. Shum (2005) Modeling hair from multiple views. ACM Trans. Graph. (Proc. SIGGRAPH) 24 (3), pp. 816–820. Cited by: §2.4.
  • T. Weise, S. Bouaziz, H. Li, and M. Pauly (2011) Realtime performance-based facial animation. ACM Trans. Graph. (Proc. SIGGRAPH) 30 (4), pp. 1–10. Cited by: §2.2, §8.2, §8.2, §9.
  • C. Wu, D. Bradley, P. Garrido, M. Zollhöfer, C. Theobalt, M. H. Gross, and T. Beeler (2016) Model-based teeth reconstruction. ACM Trans. Graph. (Proc. SIGGRAPH Asia) 35 (6), pp. 220–1. Cited by: §7.
  • F. Wu, L. Bao, Y. Chen, Y. Ling, Y. Song, S. Li, K. N. Ngan, and W. Liu (2019) MVF-Net: multi-view 3d face morphable model regression. In Proc. CVPR, pp. 959–968. Cited by: §2.2.
  • S. Yamaguchi, S. Saito, K. Nagano, Y. Zhao, W. Chen, K. Olszewski, S. Morishima, and H. Li (2018) High-fidelity facial reflectance and geometry inference from an unconstrained image. ACM Trans. Graph. (Proc. SIGGRAPH) 37 (4), pp. 1–14. Cited by: §1, §1, §2.2, §2.3, §2, §6.3, §6, §8.2, §8.2, §8.3, §8.3.
  • C. Yu, H. Lu, N. Hu, M. Yu, C. Weng, K. Xu, P. Liu, D. Tuo, S. Kang, G. Lei, D. Su, and D. Yu (2019) DurIAN: duration informed attention network for multimodal synthesis. arXiv preprint arXiv:1909.01700. Cited by: §9.
  • Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu (2018) Image super-resolution using very deep residual channel attention networks. In Proc. ECCV, pp. 286–301. Cited by: §8.3.
  • X. Zhu, Z. Lei, X. Liu, H. Shi, and S. Z. Li (2016) Face alignment across large poses: a 3d solution. In Proc. CVPR, pp. 146–155. Cited by: §2.2, §4.
  • X. Zhu, Z. Lei, J. Yan, D. Yi, and S. Z. Li (2015) High-fidelity pose and expression normalization for face recognition in the wild. In Proc. CVPR, pp. 787–796. Cited by: §5.1.
  • M. Zollhöfer, M. Martinek, G. Greiner, M. Stamminger, and J. Süßmuth (2011) Automatic reconstruction of personalized avatars from 3d face scans. Computer Animation and Virtual Worlds 22 (2-3), pp. 195–202. Cited by: §1, §1, §2.2.
  • M. Zollhöfer, M. Nießner, S. Izadi, C. Rehmann, C. Zach, M. Fisher, C. Wu, A. Fitzgibbon, C. Loop, C. Theobalt, et al. (2014) Real-time non-rigid reconstruction using an rgb-d camera. ACM Trans. Graph. (Proc. SIGGRAPH) 33 (4), pp. 1–12. Cited by: §2.2.
  • M. Zollhöfer, J. Thies, P. Garrido, D. Bradley, T. Beeler, P. Pérez, M. Stamminger, M. Nießner, and C. Theobalt (2018) State of the art on monocular 3d face reconstruction, tracking, and applications. In Computer Graphics Forum, Vol. 37, pp. 523–550. Cited by: §2.
  • G. Zoss, T. Beeler, M. Gross, and D. Bradley (2019) Accurate markerless jaw tracking for facial performance capture. ACM Trans. Graph. (Proc. SIGGRAPH) 38 (4), pp. 1–8. Cited by: §7.
  • G. Zoss, D. Bradley, P. Bérard, and T. Beeler (2018) An empirical rig for jaw animation. ACM Trans. Graph. (Proc. SIGGRAPH) 37 (4), pp. 1–12. Cited by: §7.