Self-Supervised Adaptation of High-Fidelity Face Models for Monocular Performance Tracking

07/25/2019 · Jae Shin Yoon et al. · University of Minnesota, Facebook

Improvements in data-capture and face modeling techniques have enabled us to create high-fidelity realistic face models. However, driving these realistic face models requires special input data, e.g., 3D meshes and unwrapped textures. Also, these face models expect clean input data taken under controlled lab environments, which is very different from data collected in the wild. All these constraints make it challenging to use high-fidelity models in tracking for commodity cameras. In this paper, we propose a self-supervised domain adaptation approach to enable the animation of high-fidelity face models from a commodity camera. Our approach first circumvents the requirement for special input data by training a new network that can directly drive a face model from a single 2D image. Then, we overcome the domain mismatch between lab and uncontrolled environments by performing self-supervised domain adaptation based on "consecutive frame texture consistency", i.e., the assumption that the appearance of the face is consistent over consecutive frames, avoiding the need to model the new environment, such as its lighting or background. Experiments show that we are able to drive a high-fidelity face model to perform complex facial motion from a cellphone camera without requiring any labeled data from the new domain.



1 Introduction

High-fidelity face models enable the building of realistic avatars, which play a key role in communicating ideas, thoughts, and emotions. Thanks to the rise of data-driven approaches, highly realistic and detailed face models can be created with active appearance models (AAMs) [6, 5], 3D morphable models (3DMMs) [1], or deep appearance models (DAMs) [13]. These data-driven approaches jointly model facial geometry and appearance, thus empowering the model to learn the correlation between the two and synthesize high-quality facial images. In particular, the recently proposed DAMs can model and generate realistic animation and view-dependent textures with pore-level details by leveraging the high capacity of deep neural networks.

Unfortunately, barriers exist when applying these high-quality models to monocular in-the-wild imagery due to modality mismatch and domain mismatch. Modality mismatch refers to the fact that high-fidelity face modeling and tracking require specialized input data (e.g., DAMs require tracked 3D meshes and unwrapped textures), which is not easily accessible on consumer-grade mobile capture devices. Domain mismatch refers to the fact that the visual statistics of in-the-wild imagery are considerably different from those of the controlled lab environment used to build the high-fidelity face model. In-the-wild imagery includes various background clutter, low resolution, and complex ambient lighting. Such a domain gap breaks the correlation between appearance and geometry learned by the data-driven model, and the model may no longer work well in the new domain. These two challenges greatly inhibit the widespread use of high-fidelity face models.

In this paper, we present a method to perform high-fidelity face tracking for monocular in-the-wild imagery based on a DAM face model learned from lab-controlled imagery. Our method bridges the controlled lab domain and the in-the-wild domain such that we can perform high-fidelity face tracking with DAM face models on in-the-wild video sequences. To tackle the modality mismatch, we train I2ZNet, a deep neural network that takes a monocular image as input and directly regresses the intermediate representation of the DAM, thus circumventing the need for the 3D meshes and unwrapped textures required by DAMs. As I2ZNet relies on data captured in a lab environment and cannot handle the domain mismatch, we present a self-supervised domain adaptation technique that adapts I2ZNet to new environments without requiring any labeled data from the new domain. Our approach leverages the assumption that the textures (appearance) of a face between consecutive frames should be consistent and incorporates this source of supervision to adapt I2ZNet such that the final tracking results preserve consistent texture over consecutive frames on target-domain imagery, as shown in the teaser figure. The resulting face tracker outperforms state-of-the-art face tracking methods in terms of geometric accuracy, temporal stability, and visual plausibility.

The key strength of this approach is that we do not make any other assumptions about the scene or lighting of in-the-wild imagery, enabling our method to be applicable to a wide variety of scenes. Furthermore, our method computes consistency for all visible portions of the texture, thus providing significantly more supervision and useful gradients than per-vertex-based methods [22, 8]. Finally, we emphasize that the consecutive frame texture consistency assumption is not simply a regularizer to avoid overfitting: it provides an additional source of supervision which enables our model to adapt to new environments and achieve considerable improvements in accuracy and stability.

In summary, the contributions of this paper are as follows:

  1. I2ZNet, a deep neural network that can directly predict the intermediate representation of a DAM from a single image.

  2. A self-supervised domain adaptation method based on consecutive frame texture consistency to enhance face tracking. No labeled data is required for images from the target domain.

  3. High-fidelity 3D face tracking on in-the-wild videos captured with a commodity camera.

2 Related Work

Humans have evolved to decode, understand, and convey non-verbal information from facial motion; e.g., a subtle unnatural eye blink, symmetry, or reciprocal response can be easily detected. Therefore, realistic rendering of facial motion is key to enabling telepresence technology [13]. This paper lies at the intersection of high-fidelity face modeling and 3D face reconstruction from a monocular camera, which we briefly review here.

3D Face Modeling Faces have underlying spatial structural patterns where a low-dimensional embedding can efficiently and compactly represent diverse facial configurations, shapes, and textures. Active Shape Models (ASMs) [6] have shown strong expressibility and flexibility in describing a variety of facial configurations by leveraging a set of facial landmarks. However, the sparse landmark dependency limits the reconstruction accuracy, which is fundamentally bounded by the landmark localization. AAMs [5] address this limitation by exploiting a photometric measure using both shape and texture, resulting in compelling face tracking. Individual faces are combined into a single 3DMM [1] by computing dense correspondences based on optical flow in conjunction with shape and texture priors in a linear subspace. Large-scale face scans (more than 10,000 people) from a diverse population enable modeling of accurate distributions of faces [3, 2]. With the aid of multi-camera systems and deep neural networks, the limitations of linear models can be overcome using DAMs [13], which predict high-quality geometry and texture. Their latent representation is learned by a conditional variational autoencoder [12] that encodes view-dependent appearance from different viewpoints. Our approach eliminates the multi-camera requirement of DAMs by adapting the networks to video from a monocular camera.

Single View Face Reconstruction The main benefit of the compact representation of 3D face modeling is that it allows estimating the face shape, appearance, and illumination parameters from a single-view image. For instance, the latent representation of a 3DMM can be recovered by jointly optimizing pixel intensity, edges, and illumination (approximated by spherical harmonics) [17]. The recovered 3DMM can be further refined to fit a target face using a collection of photos [18] or a depth-based camera [4]. [23] leveraged expert-designed rendering layers which model face shape, expression, and illumination, and utilized inverse rendering to estimate a set of compact parameters which render a face that best fits the input. This is often an oversimplification and cannot model all situations. In contrast, our method does not make any explicit assumptions about the lighting of the scene and thus achieves more flexibility across different environments.

Other methods include [10, 29], which use cascaded CNNs to densely align a 3DMM with a 2D face iteratively based on facial landmarks. The geometry of a 3D face can be regressed in a coarse-to-fine manner [16], and an asymmetric loss can enforce the network to regress an identity-consistent 3D face [24]. [22] utilizes jointly learned geometry and reflectance correctives to fit in-the-wild faces. [9] trained UV regression maps that jointly align with the 3DMM to directly reconstruct a 3D face.

Tackling Domain Mismatch A key challenge is the oftentimes significant gap between the distribution of training and testing data. To this end, [15, 24] utilized synthetic data to boost 3D face reconstruction performance. A challenge here is to generate synthetic data that is representative of the testing distribution. [8] utilized domain invariant motion cues to perform unsupervised domain adaptation for facial landmark tracking. While their method was tested on sparse landmarks and benefited from a limited source of supervision, our method performs dense per-pixel matching of textures, providing more supervision for domain adaptation.

Figure 1: Illustration of the I2ZNet architecture. I2ZNet extracts domain-invariant perceptual features and facial image features using the pre-trained VGGNet [20] and HourglassNet [14], respectively, from an input image I. The combined multiple depth-level features are then regressed to the latent code z of the pre-trained DAM [13] and the head pose H via fully connected layers. I2ZNet is trained with the losses defined for z and H, namely L_z and L_H, as well as the view-consistency loss L_v in Eq. (4).

3 Methodology

When applying existing face models such as AAMs and DAMs to monocular video recordings, we face two challenges: modality mismatch and domain mismatch. Modality mismatch occurs when the existing face model requires input data to be represented in a face centric representation such as 3D meshes with unwrapped texture in a pre-defined topology. This representation does not comply with an image centric representation, thus preventing us from using these face models. Domain mismatch occurs when the visual statistics of in-the-wild images are different from that of the scenes used to construct the models. In the following sections, we first present I2ZNet for the modality mismatch, and then describe how to adapt I2ZNet in a self-supervised fashion for the domain mismatch.

3.1 Handling Modality Mismatch

Many face models including DAMs can be viewed as an encoder-decoder framework. The encoder E takes an input (V, T), where V and T correspond to the geometry and unwrapped texture, respectively. V represents the 3D locations of the vertices which form a 3D mesh of the face. Note that rigid head motion has already been removed from the vertex locations, i.e., V represents only local deformations of the face. The unwrapped texture T is a 2D image that represents the appearance at different locations on V in the UV space. The output of E is the intermediate code z. The decoder D then takes z and computes a reconstructed output (V̂, T̂). The encoder and decoder are learned by minimizing the difference between (V, T) and (V̂, T̂) for a large number of training samples.

The challenge is that (V, T), i.e., the 3D geometry and unwrapped texture, is not readily available in a monocular image I. Therefore, we learn a separate deep encoder called I2ZNet (Image-to-z network), which takes a monocular image I as input and directly outputs z and the rigid head pose H. I2ZNet first extracts domain-independent two-stream features using the pre-trained VGGNet [20] and HourglassNet [14], which provide perceptual information and facial landmarks, respectively. The multiple depth-level two-stream features are combined with skip connections and are regressed respectively to the intermediate representation z and the head pose H using several fully connected layers [26]. This architecture allows the network to directly predict the parameters (z, H) based on the category-level semantic information from the deep layers and the local geometric/appearance details from the shallow layers at the same time. z can be given to the existing decoder D to decode the 3D mesh and texture, while H allows us to reproject the decoded 3D mesh onto the 2D image. Figure 1 illustrates the overall architecture of I2ZNet, and more details are described in the supplementary manuscript.

I2ZNet is trained in a supervised way with the multiview image sequences used for training the encoder E and decoder D of the DAMs. The by-products of learning E and D are the latent code z and the head pose H at each time instance. As a result of DAM training, we acquire as many tuples of (I, z, H) as there are camera views at every time instance as training data for I2ZNet.

The total loss to train I2ZNet is defined as

\mathcal{L} = \lambda_z \mathcal{L}_z + \lambda_H \mathcal{L}_H + \lambda_v \mathcal{L}_v    (1)

where L_z and L_H are the losses for z and H, respectively, and L_v is the view-consistency loss. λ_z, λ_H and λ_v are the corresponding weights.

L_z is the direct supervision term for z, defined as

\mathcal{L}_z = \lVert \hat{\mathbf{z}} - \mathbf{z} \rVert_2^2    (2)

where ẑ is the DAM latent code regressed from I via I2ZNet and z is the ground-truth code.

Inspired by [22, 11], we formulate L_H as the reprojection error of the 3D landmarks predicted via I2ZNet w.r.t. the 2D ground-truth landmarks for the head pose prediction:

\mathcal{L}_H = \frac{1}{n} \sum_{i=1}^{n} \left\lVert \mathbf{p}_i - \Pi\, \hat{\mathbf{H}}\, \phi_i(\hat{\mathbf{V}}) \right\rVert_2^2    (3)

where n is the number of landmarks, p_i is the i-th 2D ground-truth landmark, Π is a weak perspective projection matrix, and Ĥ is the head pose regressed from I via I2ZNet. V̂ is the set of vertex locations decoded from ẑ via the DAM decoder, and φ_i(·) computes the 3D location of the i-th landmark from V̂.

Because the training image data is captured with synchronized cameras, we want to ensure that the regressed z is the same for images from different views captured at the same time. Therefore, we incorporate the view-consistency loss L_v, defined as

\mathcal{L}_v = \lVert \hat{\mathbf{z}}_i - \hat{\mathbf{z}}_j \rVert_2^2    (4)

where ẑ_i and ẑ_j are the codes regressed from two different views captured at the same time. We randomly select the two views at every training iteration.
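As a concrete illustration, the three supervised losses above can be sketched in NumPy. This is a minimal sketch under our own assumptions, not the authors' implementation: the function names, the representation of the head pose as a 3x4 rigid transform, the 2x3 weak-perspective matrix, and the landmark-index lookup are all hypothetical choices.

```python
import numpy as np

def latent_loss(z_hat, z_gt):
    # L_z (Eq. 2): direct supervision on the regressed DAM latent code.
    return float(np.sum((z_hat - z_gt) ** 2))

def landmark_reprojection_loss(landmarks_2d, verts_3d, lmk_idx, H, P):
    # L_H (Eq. 3): transform landmark vertices by the rigid head pose H
    # (3x4), apply the weak-perspective projection P (2x3), and compare
    # to the 2D ground-truth landmark positions.
    pts = verts_3d[lmk_idx]                                # (n, 3) landmark vertices
    pts_h = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)  # homogeneous
    proj = (P @ (H @ pts_h.T)).T                           # (n, 2) reprojected points
    return float(np.mean(np.sum((proj - landmarks_2d) ** 2, axis=1)))

def view_consistency_loss(z_view_i, z_view_j):
    # L_v (Eq. 4): codes regressed from two synchronized views should agree.
    return float(np.sum((z_view_i - z_view_j) ** 2))
```

With an identity pose and unit-scale projection, a landmark that reprojects exactly onto its 2D annotation contributes zero loss, which matches the intent of Eq. (3).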

Figure 2: Overview of our self-supervised domain adaptation process. Given two consecutive frames (I_t, I_{t+1}), we run I2ZNet followed by the DAM decoder to acquire the geometry (V̂_t, V̂_{t+1}), textures (T̂_t, T̂_{t+1}) and head poses (Ĥ_t, Ĥ_{t+1}). Then, the geometry, head poses, and input images are used to compute observed textures (T^o_t, T^o_{t+1}). These enable us to compute L_c and L_m. For frame I_t, we run an hourglass facial landmark detector to get 2D landmark locations, which are then used to compute L_f. These losses back-propagate gradients to I2ZNet to perform self-supervised domain adaptation.

3.2 Handling Domain Mismatch

To handle the domain mismatch, we adapt I2ZNet to a new domain using a set of unlabeled images in a self-supervised manner. The overview of the proposed domain adaptation is illustrated in Figure 2. Given a monocular video, we refine the encoder by minimizing the domain adaptation loss L_DA (Eq. (5)), which consists of (1) consecutive frame texture consistency L_c, (2) model-to-observation texture consistency L_m, and (3) facial landmark reprojection consistency L_f:

\mathcal{L}_{DA} = \lambda_c \mathcal{L}_c + \lambda_m \mathcal{L}_m + \lambda_f \mathcal{L}_f    (5)

where λ_c, λ_m and λ_f correspond to the weights for each loss term. L_c is our key contribution: it adapts I2ZNet such that textures computed from the predicted geometry are temporally coherent. L_m enforces consistent color between the DAM-generated texture and the observed texture via pixel-wise matching. L_f anchors the tracked 3D face by minimizing the reprojection error of the 3D model landmarks with respect to the detected facial landmarks.

3.2.1 Consecutive Frame Texture Consistency

Figure 3: Illustration of how to compute L_c.

Inspired by the brightness constancy assumption employed in many optical flow algorithms, we can reasonably assume that 3D face tracking for two consecutive frames is accurate only if the unwrapped textures for the two frames are nearly identical. Inversely, if we see large changes in the unwrapped texture across consecutive frames, they are highly likely due to inaccurate 3D geometry predictions. We assume that the environmental lighting and the appearance of the face do not change significantly between consecutive frames, which is satisfied in most scenarios. Beyond this, we do not make any assumptions about the lighting environment of a new scene, which makes our method more generalizable than existing methods which, for example, approximate lighting with spherical harmonics [22].

The consecutive frame texture consistency loss is:

\mathcal{L}_c = \frac{1}{|\mathbf{W}|} \left\lVert \mathbf{W} \odot \left( \mathbf{T}^o_t - \mathbf{T}^o_{t+1} \right) \right\rVert_1    (6)

where W is a confidence matrix, T^o_t is the texture obtained by projecting V̂_t onto I_t with Ĥ_t, ⊙ is element-wise multiplication, and |W| is the number of non-zero elements in W. We use the cosine of the incident angle of the ray from the camera center to each texel as the confidence to reduce the effect of texture distortion caused at grazing angles. Elements of W smaller than a threshold are set to 0. Figure 3 shows example confidence matrices and textures as well as the computation of L_c.

T^o_t is obtained by projecting the 3D location of each texel decoded from ẑ_t onto the observed image I_t:

\mathbf{T}^o_t(\mathbf{u}) = \mathbf{I}_t\left( \Pi\, \hat{\mathbf{H}}_t\, \psi(\mathbf{u}; \hat{\mathbf{V}}_t) \right)    (7)

where u denotes texel coordinates and ψ(u; V̂_t) gives the 3D location of texel u on the mesh. Unlike existing methods that compute a per-vertex texture loss [22, 8], L_c considers all visible texels, providing a significantly richer source of supervision and gradients than per-vertex-based methods. The aforementioned steps are all differentiable, thus the entire model can be updated in an end-to-end fashion.
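A minimal NumPy sketch of the confidence-weighted loss in Eq. (6) follows. Two details are our assumptions rather than the paper's: the per-texel confidences of the two frames are combined with an element-wise minimum, and the grazing-angle threshold is set to 0.2; the paper only states that a cosine-of-incident-angle confidence is thresholded.

```python
import numpy as np

def texture_consistency_loss(tex_t, tex_t1, cos_angle_t, cos_angle_t1, thresh=0.2):
    # tex_t, tex_t1: (H, W, 3) unwrapped textures for frames t and t+1.
    # cos_angle_*: (H, W) cosine of the ray's incident angle per texel.
    # Combine the two frames' confidences (assumed: element-wise minimum)
    # and zero out grazing-angle texels below the threshold.
    W = np.minimum(cos_angle_t, cos_angle_t1)
    W = np.where(W < thresh, 0.0, W)
    # Confidence-weighted absolute texture difference (element-wise product).
    diff = W[..., None] * np.abs(tex_t - tex_t1)
    # Normalize by the number of non-zero confidence entries, |W|.
    n_valid = np.count_nonzero(W)
    return float(diff.sum() / max(n_valid, 1))
```

Identical textures yield zero loss regardless of confidence, and texels whose confidence falls below the threshold contribute nothing, matching the masking described above.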

3.2.2 Model-to-Observation Texture Consistency

This loss enforces the predicted texture T̂_t to match the texture T^o_t observed in the image I_t. Although this is similar to the photometric loss used in [22], a challenge in our setting is the aforementioned domain mismatch: T̂_t could be significantly different from T^o_t mainly due to lighting condition changes. Therefore, we incorporate an additional network g that converts the color of the predicted texture to that of the currently observed texture. g is also learned, and since training data is limited, we learn a single 1-by-1 convolutional filter, which can be viewed as a color correction matrix that corrects the white balance between the two textures. The model-to-observation texture consistency (MOTC) is formulated as

\mathcal{L}_m = \frac{1}{|\mathbf{W}|} \left\lVert \mathbf{W} \odot \left( g(\hat{\mathbf{T}}_t) - \mathbf{T}^o_t \right) \right\rVert_1    (8)
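Because g is a single 1-by-1 convolution over an RGB texture, it is equivalent to multiplying every texel's color vector by a 3x3 matrix. The sketch below reflects that reading; reusing the same confidence mask W as in the consecutive-frame loss is our assumption.

```python
import numpy as np

def color_correct(tex, C):
    # A 1-by-1 convolution over an RGB texture is a per-texel linear map:
    # every pixel's (3,) color vector is multiplied by the 3x3 matrix C.
    return tex @ C.T   # (H, W, 3) @ (3, 3).T -> (H, W, 3)

def motc_loss(tex_pred, tex_obs, C, W):
    # Model-to-observation consistency after color correction (Eq. 8 sketch):
    # confidence-weighted L1 difference, normalized by non-zero mask entries.
    diff = W[..., None] * np.abs(color_correct(tex_pred, C) - tex_obs)
    return float(diff.sum() / max(np.count_nonzero(W), 1))
```

With C set to the identity, g is a no-op and the loss reduces to a masked photometric difference; learning C lets the model absorb a global white-balance shift between the lab-built texture and the new scene.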

3.2.3 Facial Landmark Reprojection Consistency

This loss enforces a sparse set of vertices on the 3D mesh corresponding to the landmark locations to be consistent with 2D landmark predictions. Given n facial landmarks, the facial landmark reprojection consistency (FLRC) loss is formulated as:

\mathcal{L}_f = \frac{1}{n} \sum_{i=1}^{n} \left\lVert \mathbf{p}_i - \Pi\, \hat{\mathbf{H}}_t\, \phi_i(\hat{\mathbf{V}}_t) \right\rVert_2^2    (9)

where p_i is the location of the i-th detected 2D landmark.

Figure 4: Proposed method during testing phase.

3.3 Testing Phase

Figure 4 depicts the steps required during the testing phase of our network, which is simply a feed-forward pass through the adapted I2ZNet and the estimated color correction function g. Note that the adaptation losses and the landmark detection are no longer required. Therefore, the runtime of the network is exactly the same as that of the original network except for the additional color correction, which itself is simple and fast.

4 Experiments

To demonstrate the effectiveness of our proposed self-supervised domain adaptation method for high-fidelity 3D face tracking, we perform both quantitative and qualitative analysis. Though qualitative analysis is relatively straightforward, quantitative analysis for evaluating the accuracy and stability of tracking results requires a high-resolution in-the-wild video dataset with ground-truth 3D meshes, which unfortunately is difficult to collect because capturing high-quality 3D facial scans usually requires a special lab environment with controlled settings. Thus, quantitative analyses of recent 3D face tracking methods such as [22, 23] are limited to static image datasets [4] or video sequences shot in a controlled environment [25]. Therefore, in light of these limitations, we collected a new dataset and devised two metrics for quantitatively evaluating 3D face tracking performance.

Evaluation metrics: We employ two metrics, accuracy and temporal stability, denoted as "Reprojection" and "Temporal" in Table 1, respectively. For accuracy, since we do not have ground-truth 3D meshes for in-the-wild data, we utilize the average 2D landmark reprojection error as a proxy for the accuracy of the predicted 3D geometry. First, the 3D point corresponding to a 2D landmark is projected into 2D, and then the Euclidean distance between the reprojected point and the ground-truth 2D point is computed. For temporal stability, we propose a smoothness metric

s = \frac{1}{|\mathcal{T}|\,|\mathcal{V}|} \sum_{t} \sum_{v} \frac{\lVert \mathbf{v}_{t-1} - \mathbf{v}_t \rVert + \lVert \mathbf{v}_t - \mathbf{v}_{t+1} \rVert}{\lVert \mathbf{v}_{t-1} - \mathbf{v}_{t+1} \rVert}    (10)

where v_t corresponds to the 3D location of vertex v at time t, and the sums run over all frames and vertices. This metric assumes that the vertices of the 3D mesh should move on a straight line over the course of three frames; unstable or jittering predictions thus lead to a higher (worse) score. The lowest (best) metric score is 1.
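The smoothness metric above compares, for each middle frame, the two-segment path length of a vertex against the straight-line chord; the ratio is 1 exactly when the vertex moves on a straight line. A NumPy sketch (the epsilon guard against a zero-length chord is our addition):

```python
import numpy as np

def stability_score(V):
    # V: (T, N, 3) vertex trajectories over T frames for N vertices.
    eps = 1e-8
    a = np.linalg.norm(V[:-2] - V[1:-1], axis=-1)   # |v_{t-1} - v_t|
    b = np.linalg.norm(V[1:-1] - V[2:], axis=-1)    # |v_t - v_{t+1}|
    c = np.linalg.norm(V[:-2] - V[2:], axis=-1)     # |v_{t-1} - v_{t+1}|
    # Path-length / chord ratio, averaged over all middle frames and vertices.
    return float(np.mean((a + b) / (c + eps)))
```

Constant-velocity motion gives a score of 1 (the best possible value), while jittery back-and-forth motion inflates the path length relative to the chord and raises the score.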

Dataset collection and annotation: We recorded 1920×1080-resolution facial performance data in the wild for four different identities. Recording environments include indoor and outdoor scenes with plain and cluttered backgrounds under various lighting conditions.

150 frames of facial performance data were annotated for each of the four identities. For each frame, we annotate five salient landmarks on the person's face that do not correspond to any typical facial landmark (such as eye corners or mouth corners) detectable by our landmark detector. These points are chosen because our domain adaptation method already optimizes for facial landmark reprojection consistency, so our evaluation should use a separate set of landmarks. We therefore annotate salient personalized landmarks, such as pimples or moles on a person's face, which can be easily identified and accurately annotated by a human. In this way, our annotations enable us to measure tracking performance in regions where there are no generic facial landmarks, providing a more accurate measure of tracking quality.

Implementation Details: DAMs [13] are first created for all four identities from multi-view images captured in a lighting-controlled environment, and our I2ZNet is newly trained for each identity. Our proposed self-supervised domain adaptation method is then applied to videos of the four identities in a different lighting and background environment. For DAM, the unwrapped texture resolution is , and the geometry had vertices. We train I2ZNet with Stochastic Gradient Descent (SGD). The face is cropped and resized to a 256×256 image and given to I2ZNet. During the self-supervised domain adaptation, the related parameters are set to .

Method        Metric        Subject1   Subject2   Subject3   Subject4   Average
HPEN          Temporal      1.5197     1.2951     1.8206     1.3559     1.4978
              Reprojection  8.8075     5.5475     13.3823    10.4688    9.5515
3DDFA         Temporal      1.5503     1.4500     1.8608     1.5139     1.5938
              Reprojection  14.1171    10.2568    21.5077    18.1647    16.011
PRNet         Temporal      1.5551     1.3701     1.5700     1.4973     1.4981
              Reprojection  8.4867     7.2522     14.052     9.6586     9.8624
Ours w/o DA   Temporal      1.4106     1.2476     1.8322     1.4169     1.4768
              Reprojection  6.2171     7.4914     10.9225    9.5953     8.5566
Ours w/ L_f   Temporal      1.3624     1.3274     1.6583     1.132      1.3700
              Reprojection  5.7558     6.982      10.1258    7.5230     7.5960
Ours          Temporal      1.1299     1.0498     1.2934     1.0915     1.1412
              Reprojection  5.5689     6.7281     9.6015     7.1368     7.2588
Table 1: Evaluation on in-the-wild dataset. "Ours w/o DA" represents our model before any domain adaptation.
Figure 5: Temporal stability graph for subject 4. Note that smaller stability score means more stable results.

4.1 Results on In-the-wild Dataset

We compare our method against three state-of-the-art baselines: HPEN [28], 3DMM fitting based on landmarks; 3DDFA [27], 3DMM fitting based on landmarks and dense correspondence; and PRNet [9], 3DMM fitting based on a direct depth regression map. The system input image size is 256×256 except for 3DDFA (100×100). We also evaluate our method without domain adaptation (Ours w/o DA) and with only the facial landmark reprojection consistency (Ours w/ L_f). As shown in Table 1, the proposed domain adaptation consistently increases the performance of our model without domain adaptation for all four subjects. In terms of stability, the proposed domain adaptation method improves our model by 22% in relative terms. In particular, we achieve a 1.05 stability score for subject 2, which is close to the lowest possible stability score (1.0). This demonstrates the effectiveness of our proposed method. For the other baselines, our model without domain adaptation already outperforms them in terms of geometry. This may be because our model is pre-trained with many pairs of training data, while the baselines were used out of the box. On the other hand, all baselines including Ours w/o DA perform similarly in terms of stability (between 1.45-1.60), but our domain adaptation method improves it to 1.14, which clearly demonstrates the effectiveness of our method.

Figure 5 visualizes the temporal stability metric for all the different methods for a single sequence. Our method has a consistently better (i.e., smaller) stability score than all the other methods for nearly all the frames, and demonstrates not only the effectiveness, but also the reliability and robustness of our method for in-the-wild sequences.

Figure 6: Qualitative comparisons with baseline methods.

Figure 6 shows qualitative comparisons with baselines. Overall, our face tracking results most closely resemble the input facial configuration, especially for the eyes and the mouth. For example, in the second row, the baselines erroneously predicted that the person’s mouth is opened, while our method correctly predicted that the person’s mouth is closed. We can also clearly see the effectiveness of our color correction approach, which is able to correct the relatively green-looking face to better match to the appearance in the input.

Figure 7 shows our in-the-wild face tracking results. Our method is able to track complex motion across many different backgrounds, head poses, and lighting conditions that are difficult to approximate with spherical harmonics, such as hard shadows. Our method is also able to adapt to the white balance of the current scene. Note that the gaze direction is also tracked in most cases.

Figure 7: Visualization of 3D face tracking for in-the-wild video. For each input image, we show in the bottom right corner the predicted geometry overlaid on top of the face, and the predicted color corrected face.

4.2 Ablation Studies

To gain more insight into our model, we performed the following ablation experiments.

4.2.1 Evaluation of I2ZNet Structure

To validate the performance gain of each component of our regression network, we compare I2ZNet against three baseline networks. VGG+Skip+Key denotes I2ZNet, which uses VGGNet, multi-level features (skip connections), and landmarks from HourglassNet. VGG+Skip removes the landmark guidance. VGG further removes the multi-level features (skip connections), using only deep features for regression. VGG Scratch has the same structure as VGG but is trained from scratch. For the other settings that use VGG, pre-trained VGG-16 features are used, and the VGG portion of the network is not updated during training. The models are tested on unseen test datasets where vertex-wise dense ground truth is available. Three metrics are employed to evaluate performance: (1) geometry accuracy, computed as the Euclidean distance between predicted and ground-truth 3D vertices; (2) texture accuracy, calculated as the pixel intensity difference between predicted and ground-truth textures; and (3) temporal stability, measured in the same way as Eq. 10.
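The first two ablation metrics reduce to simple per-vertex and per-pixel averages. A sketch under our own assumptions about array shapes and the averaging convention (the paper does not specify either):

```python
import numpy as np

def geometry_error(V_pred, V_gt):
    # Mean per-vertex Euclidean distance between predicted and
    # ground-truth meshes; V_*: (N, 3) vertex arrays.
    return float(np.mean(np.linalg.norm(V_pred - V_gt, axis=-1)))

def texture_error(T_pred, T_gt):
    # Mean absolute pixel-intensity difference between predicted and
    # ground-truth unwrapped textures; T_*: (H, W, 3) arrays.
    return float(np.mean(np.abs(T_pred - T_gt)))
```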

The average scores with respect to the four test subjects are reported in Table 2, and results for a representative subject are visualized in Figure 8. We observe that multi-level features (VGG+Skip) significantly improve performance over VGG, while adding keypoints (VGG+Skip+Key) further improves performance. VGG seems to lack the capacity to directly regress the latent parameters with only pre-trained deep features that are not updated. More ablation studies (e.g., tests on view consistency and robustness to synthetic visual perturbation) of I2ZNet are described in the supplementary manuscript.

Metric      VGG Scratch   VGG      VGG+Skip   VGG+Skip+Key
Geometry    1.011         1.481    0.411      0.315
Texture     0.016         0.027    0.007      0.004
Temporal    2.143         3.138    1.499      1.446
Table 2: Ablation test on I2ZNet. The average scores with respect to all subjects are reported.
Figure 8: Ablation test on I2ZNet with a representative subject. The vertex-wise error is visualized with the associated average score for subject 1.

4.2.2 Effect of Image Resolution

The cropped image resolution plays a key role in the accuracy of face tracking. In this experiment, we quantify the performance degradation with respect to resolution using a relative reprojection error metric. The relative reprojection error is computed by comparing the 2D reprojected vertex locations of the geometry estimated from lower-resolution images against those of the gold-standard geometry, i.e., the geometry acquired at the highest image resolution of 256×256. Figure 9 shows the results. Down to 175×175, we achieve an average error of less than 4 pixels, but performance degrades significantly as the resolution decreases further.

Figure 9: Ablation studies on the performance degradation under various input resolution.

4.3 Limitations

There are two main limitations to the proposed approach. First, our method assumes that a person-specific DAM already exists for the person to be tracked, as our method takes the DAM as input. Second, our MOTC color correction cannot handle complex lighting and specularities. For example, in the first image of the first row of Figure 7, a portion of the face is brighter due to the sun, but since we only have a global color correction matrix, the sun's effect cannot be captured and thus is not reflected in the output.

5 Conclusion

We proposed a deep neural network that predicts the intermediate representation and head pose of a high-fidelity 3D face model from a single image, together with a self-supervised domain adaptation method, enabling high-quality facial performance tracking from a monocular in-the-wild video. Our domain adaptation method leverages the assumption that the texture of a face over two consecutive frames should not change drastically, which enables us to extract supervision from unlabeled in-the-wild video frames to fine-tune the existing face tracker. The results demonstrate that the proposed method improves not only face-tracking accuracy but also tracking stability.

Acknowledgement

This work was partially supported by NSF Grant IIS 1755895.

References

  • [1] Volker Blanz and Thomas Vetter. A morphable model for the synthesis of 3D faces. In Proc. ACM SIGGRAPH, pages 187–194, 1999.
  • [2] James Booth, Epameinondas Antonakos, Stylianos Ploumpis, George Trigeorgis, Yannis Panagakis, and Stefanos Zafeiriou. 3D face morphable models “in-the-wild”. In Proc. CVPR, 2017.
  • [3] James Booth, Anastasios Roussos, Allan Ponniah, David Dunaway, and Stefanos Zafeiriou. Large scale 3D morphable models. IJCV, 126(2-4):233–254, 2018.
  • [4] Chen Cao, Yanlin Weng, Shun Zhou, Yiying Tong, and Kun Zhou. FaceWarehouse: A 3D facial expression database for visual computing. IEEE TVCG, 20(3):413–425, 2014.
  • [5] Timothy F Cootes, Gareth J Edwards, and Christopher J Taylor. Active appearance models. IEEE TPAMI, (6):681–685, 2001.
  • [6] Timothy F Cootes, Christopher J Taylor, David H Cooper, and Jim Graham. Active shape models-their training and application. CVIU, 61(1):38–59, 1995.
  • [7] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Proc. CVPR, 2009.
  • [8] Xuanyi Dong, Shoou-I Yu, Xinshuo Weng, Shih-En Wei, Yi Yang, and Yaser Sheikh. Supervision-by-registration: An unsupervised approach to improve the precision of facial landmark detectors. In Proc. CVPR, 2018.
  • [9] Yao Feng, Fan Wu, Xiaohu Shao, Yanfeng Wang, and Xi Zhou. Joint 3D face reconstruction and dense alignment with position map regression network. In Proc. ECCV, 2018.
  • [10] László A Jeni, Jeffrey F Cohn, and Takeo Kanade. Dense 3D face alignment from 2D video for real-time use. Image Vision Comput., 58(C):13–24, 2017.
  • [11] Angjoo Kanazawa, Michael J. Black, David W. Jacobs, and Jitendra Malik. End-to-end recovery of human shape and pose. In Proc. CVPR, 2018.
  • [12] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In Proc. ICLR, 2014.
  • [13] Stephen Lombardi, Tomas Simon, Jason Saragih, and Yaser Sheikh. Deep appearance models for face rendering. ACM TOG, 37(4), 2018.
  • [14] Alejandro Newell, Kaiyu Yang, and Jia Deng. Stacked hourglass networks for human pose estimation. In Proc. ECCV, 2016.
  • [15] Elad Richardson, Matan Sela, and Ron Kimmel. 3D face reconstruction by learning from synthetic data. In Proc. 3DV, 2016.
  • [16] Elad Richardson, Matan Sela, Roy Or-El, and Ron Kimmel. Learning detailed face reconstruction from a single image. In Proc. CVPR, 2017.
  • [17] Sami Romdhani and Thomas Vetter. Estimating 3D shape and texture using pixel intensity, edges, specular highlights, texture constraints and a prior. In Proc. CVPR, 2005.
  • [18] Joseph Roth, Yiying Tong, and Xiaoming Liu. Adaptive 3D face reconstruction from unconstrained photo collections. In Proc. CVPR, 2016.
  • [19] Christos Sagonas, Epameinondas Antonakos, Georgios Tzimiropoulos, Stefanos Zafeiriou, and Maja Pantic. 300 faces in-the-wild challenge: Database and results. Image Vision Comput., 47:3–18, 2016.
  • [20] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In Proc. ICLR, 2015.
  • [21] Ran Tao, Efstratios Gavves, and Arnold W. M. Smeulders. Siamese instance search for tracking. In Proc. CVPR, 2016.
  • [22] Ayush Tewari, Michael Zollhöfer, Pablo Garrido, Florian Bernard, Hyeongwoo Kim, Patrick Pérez, and Christian Theobalt. Self-supervised multi-level face model learning for monocular reconstruction at over 250 Hz. In Proc. CVPR, 2018.
  • [23] Ayush Tewari, Michael Zollhöfer, Hyeongwoo Kim, Pablo Garrido, Florian Bernard, Patrick Pérez, and Christian Theobalt. MoFA: Model-based deep convolutional face autoencoder for unsupervised monocular reconstruction. In Proc. ICCV, 2017.
  • [24] Anh Tuan Tran, Tal Hassner, Iacopo Masi, and Gérard Medioni. Regressing robust and discriminative 3D morphable models with a very deep neural network. In Proc. CVPR, 2017.
  • [25] Levi Valgaerts, Chenglei Wu, Andrés Bruhn, Hans-Peter Seidel, and Christian Theobalt. Lightweight binocular facial performance capture under uncontrolled lighting. ACM TOG, 31(6), 2012.
  • [26] Jae Shin Yoon, Francois Rameau, Junsik Kim, Seokju Lee, Seunghak Shin, and In So Kweon. Pixel-level matching for video object segmentation using convolutional neural networks. In Proc. ICCV, 2017.
  • [27] Xiangyu Zhu, Zhen Lei, Xiaoming Liu, Hailin Shi, and Stan Z. Li. Face alignment across large poses: A 3D solution. In Proc. CVPR, 2016.
  • [28] Xiangyu Zhu, Zhen Lei, Junjie Yan, Dong Yi, and Stan Z. Li. High-fidelity pose and expression normalization for face recognition in the wild. In Proc. CVPR, 2015.
  • [29] Xiangyu Zhu, Xiaoming Liu, Zhen Lei, and Stan Z. Li. Face alignment in full pose range: A 3D total solution. IEEE TPAMI, 2019.