Unrestricted Facial Geometry Reconstruction Using Image-to-Image Translation

03/29/2017 ∙ by Matan Sela, et al. ∙ Technion

It has been recently shown that neural networks can recover the geometric structure of a face from a single given image. A common denominator of most existing face geometry reconstruction methods is the restriction of the solution space to some low-dimensional subspace. While such a model significantly simplifies the reconstruction problem, it is inherently limited in its expressiveness. As an alternative, we propose an Image-to-Image translation network that jointly maps the input image to a depth image and a facial correspondence map. This explicit pixel-based mapping can then be utilized to provide high quality reconstructions of diverse faces under extreme expressions, using a purely geometric refinement process. In the spirit of recent approaches, the network is trained only with synthetic data, and is then evaluated on in-the-wild facial images. Both qualitative and quantitative analyses demonstrate the accuracy and the robustness of our approach.



1 Introduction

Recovering the geometric structure of a face is a fundamental task in computer vision with numerous applications. For example, facial characteristics of actors in realistic movies can be manually edited with facial rigs that are carefully designed for manipulating the expression [42]. While producing animation movies, tracking the geometry of an actor across multiple frames allows transferring the expression to an animated avatar [14, 8, 7]. Image-based face recognition methods deform the recovered geometry to produce a neutralized, frontal version of the input face, reducing the variations between images of the same subject [49, 19]. As for medical applications, acquiring the structure of a face allows for fine planning of aesthetic operations and plastic surgeries, designing personalized masks [2, 37], and even bio-printing facial organs.

Here, we focus on the recovery of the geometric structure of a face from a single facial image under a wide range of expressions and poses. This problem has been investigated for decades and most existing solutions involve one or more of the following components.

  • Facial landmarks [25, 46, 32, 47] - a set of automatically detected key points on the face such as the tip of the nose and the corners of the eyes, which can guide the reconstruction process [49, 26, 1, 12, 29].

  • A reference facial model - an average neutral face that is used as an initialization of optical flow or shape from shading procedures [19, 26].

  • A three-dimensional morphable model - a prior low-dimensional linear subspace of plausible facial geometries which allows an efficient, yet rough, recovery of a facial structure [4, 6, 49, 36, 23, 33, 43].

Figure 1: The algorithmic reconstruction pipeline.

While using these components can simplify the reconstruction problem, they introduce some inherent limitations. Methods that rely only on landmarks are limited to a sparse set of constrained points. Classical techniques that use a reference facial model might fail to recover extreme expressions and non-frontal poses, as optical flow restricts the deformation to the image plane. The morphable model, while providing some robustness, limits the reconstruction as it can express only coarse geometries. Integrating some of these components together could mitigate these problems, yet the underlying limitations would still be manifested in the final reconstruction.

Alternatively, we propose an unrestricted approach which involves a fully convolutional network that learns to translate an input facial image to a representation containing two maps. The first map is an estimation of a depth image, while the second is an embedding of a facial template mesh in the image domain. This network is trained following the Image-to-Image translation framework of [22], where an additional normal-based loss is introduced to enhance the depth result. Similar to previous approaches, we use synthetic images for training, where the images are sampled from a wide range of facial identities, poses, expressions, lighting conditions, backgrounds and material parameters. Surprisingly, even though the network is trained only with faces drawn from a limited generative model, it can generalize and produce structures well beyond the limited scope of that model. To process the raw network results, an iterative facial deformation procedure is used which combines the representations into a full facial mesh. Finally, a refinement step is applied to produce a detailed reconstruction. This novel blending of neural networks with purely geometric techniques allows us to reconstruct high-quality meshes with wrinkles and details at a mesoscopic level from only a single image.

While using a neural network for face reconstruction was proposed in the past [33, 34, 43, 48, 24], previous methods were still limited by the expressiveness of the linear model. In [34], a second network was proposed to refine the coarse facial reconstruction, yet it could not compensate for large geometric variations beyond the given subspace. For example, the structure of the nose was still limited by the span of a facial morphable model. By learning the unconstrained geometry directly in the image domain, we overcome this limitation, as demonstrated by both quantitative and qualitative experimental results. To further analyze the potential of the proposed representation, we devise an application for translating images from one domain to another. As a case study, we transform synthetic facial images into realistic ones, using our network as a loss function to preserve the geometry throughout the cross-domain mapping.

The main contributions of this paper are:

  • A novel formulation for predicting a geometric representation of a face from a single image, which is not restricted to a linear model.

  • A purely geometric deformation and refinement procedure that utilizes the network representation to produce high quality facial reconstructions.

  • A novel application of the proposed network which allows translating synthetic facial images into realistic ones, while keeping the geometric structure intact.

2 Overview

The algorithmic pipeline is presented in Figure 1. The input of the network is a facial image, and the network produces two outputs: The first is an estimated depth map aligned with the input image. The second output is a dense map from each pixel to a corresponding vertex on a reference facial mesh. To bring the results into full vertex correspondence and complete occluded parts of the face, we warp a template mesh in the three-dimensional space by an iterative non-rigid deformation procedure. Finally, a fine detail reconstruction algorithm guided by the input image recovers the subtle geometric structure of the face. Code for evaluation is available at https://github.com/matansel/pix2vertex.

3 Learning the Geometric Representation

There are several design choices to consider when working with neural networks. First and foremost is the training data, including the input channels, their labels, and how to gather the samples. Second is the choice of the architecture. A common approach is to start from an existing architecture [27, 39, 40, 20] and to adapt it to the problem at hand. Finally, there is the choice of the training process, including the loss criteria and the optimization technique. Next, we describe our choices for each of these elements.

3.1 The Data and its Representation

The purpose of the suggested network is to regress a geometric representation from a given facial image. This representation is composed of the following two components:

Depth Image

A depth profile of the facial geometry. Indeed, for many facial reconstruction tasks providing only the depth profile is sufficient [18, 26].

Figure 2: A reference template face presented alongside the dense correspondence signature from different viewpoints.
Correspondence Map

An embedding which allows mapping image pixels to points on a template facial model, given as a triangulated mesh. To compute this signature for any facial geometry, we paint each vertex with the x, y, and z coordinates of the corresponding point on a normalized canonical face. Then, we paint each pixel in the map with the color value of the corresponding projected vertex, see Figure 2. This feature map is a deformation-agnostic representation, which is useful for applications such as facial motion capture [44], face normalization [49] and texture mapping [50]. While a similar representation was used in [34, 48] as a feedback channel for an iterative network, the facial recovery was still restricted to the span of a facial morphable model.
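As a toy illustration of this signature (not the paper's code), one can normalize the canonical coordinates of each vertex into [0, 1] so that they can be rendered as RGB colors:

```python
import numpy as np

def correspondence_signature(canonical_vertices):
    """Per-vertex signature: the x, y, z coordinates of the normalized
    canonical face, rescaled per axis to [0, 1] so they can be stored
    as an RGB color."""
    v = np.asarray(canonical_vertices, dtype=float)
    lo, hi = v.min(axis=0), v.max(axis=0)
    return (v - lo) / (hi - lo + 1e-12)

# toy "face": four vertices of a tetrahedron
verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0],
                  [0.0, 0.0, 4.0]])
sig = correspondence_signature(verts)
```

Rendering these per-vertex colors into the image plane then yields the correspondence map of Figure 2.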

For training the network, we adopt the same synthetic data generation procedure proposed in [33]. Each random face is generated by drawing random mesh coordinates and texture from a facial morphable model [4]. In practice, we draw a pair of Gaussian random vectors, α_g and α_t, and recover the synthetic geometry S and texture T as

    S = μ_g + A_g α_g,    T = μ_t + A_t α_t,

where μ_g and μ_t are the stacked average facial geometry and texture of the model, respectively, and A_g and A_t are matrices whose columns are the bases of low-dimensional linear subspaces spanning plausible facial geometries and textures, respectively. Notice that the geometry basis is composed of both identity and expression basis elements, as proposed in [10]. Next, we render the random textured meshes under various illumination conditions and poses, generating a dataset of synthetic facial images. As the ground-truth geometry is known for each synthetic image, one readily has the matching depth and correspondence maps to use as labels. Some examples of input images alongside their desired outputs are shown in Figure 3.
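A minimal sketch of this sampling step, with tiny random matrices standing in for the real morphable-model bases (which have on the order of 10^5 stacked coordinates and a few hundred basis columns):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the morphable model: mu_* are stacked mean vectors,
# A_* hold the (scaled) basis vectors as columns.
n_coords, n_basis = 9, 4
mu_g, mu_t = rng.normal(size=n_coords), rng.normal(size=n_coords)
A_g = rng.normal(size=(n_coords, n_basis))
A_t = rng.normal(size=(n_coords, n_basis))

def sample_face(rng):
    """Draw Gaussian coefficient vectors and synthesize a random face."""
    alpha_g = rng.normal(size=n_basis)
    alpha_t = rng.normal(size=n_basis)
    S = mu_g + A_g @ alpha_g   # stacked (x, y, z) coordinates
    T = mu_t + A_t @ alpha_t   # stacked texture values
    return S, T

S, T = sample_face(rng)
```

Each sampled (S, T) pair is then rendered under random pose and lighting to form one training image with known ground-truth maps.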

Figure 3: Training data samples alongside their representations.

Working with synthetic data can still present some gaps when generalizing to “in-the-wild” images [9, 33]; however, it provides much-needed flexibility in the generation process and ensures a deterministic connection between an image and its label. Alternatively, other methods [16, 43] proposed to generate training data by employing existing reconstruction algorithms and regarding their results as ground-truth labels. For example, Güler et al. [16] used a framework similar to that of [48] to match dense correspondence maps to a dataset of facial images, starting from only a sparse set of landmarks. These correspondence maps were then used as training labels for their method. Notice that such data can also be used for training our network without requiring any modification.

3.2 Image to Geometry Translation

Pixel-wise prediction requires a proper network architecture [30, 17]. The proposed structure is inspired by the recent Image-to-Image translation framework proposed in [22], where a network was trained to map the input image to output images of various types. The architecture used there is based on the U-net [35] layout, where skip connections are used between corresponding layers in the encoder and the decoder. Additional considerations as to the network implementation are given in the supplementary.

While in [22] a combination of an L1 loss and an adversarial loss was used, in the proposed framework we chose to omit the adversarial loss. That is because, unlike the problems explored in [22], our setup includes less ambiguity in the mapping. Hence, a distributional loss function is less effective and mainly introduces artifacts. Still, since the basic L1 loss function favors sparse errors in the depth prediction and does not account for differences between pixel neighborhoods, it is insufficient for producing fine geometric structures, see Figure 4(b). Hence, we propose to augment the loss function with an additional term, which penalizes the discrepancy between the normals of the reconstructed depth and ground truth.

    L_N(ẑ, z) = ‖ n(ẑ) − n(z) ‖₁    (1)

where ẑ is the recovered depth, z denotes the ground-truth depth image, and n(·) maps a depth image to its per-pixel surface normals. During training, the L1 and normal terms are balanced by the weights λ_{L1} and λ_N, respectively. Note that for the correspondence image only the L1 loss was applied. Figure 4 demonstrates the contribution of the normal loss term to the quality of the depth reconstruction provided by the network.
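A small sketch of the normal-based penalty: normals are recovered from a depth image by finite differences and compared under an L1 criterion (the network's actual loss operates on its training tensors, not numpy arrays):

```python
import numpy as np

def depth_normals(z):
    """Per-pixel unit normals of a depth image via finite differences:
    n is proportional to (-dz/dx, -dz/dy, 1)."""
    zy, zx = np.gradient(z.astype(float))
    n = np.stack([-zx, -zy, np.ones_like(z)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def normal_loss(z_pred, z_gt):
    """L1 discrepancy between normals of predicted and ground-truth depth."""
    return np.abs(depth_normals(z_pred) - depth_normals(z_gt)).mean()

flat = np.zeros((8, 8))                               # flat surface
ramp = np.fromfunction(lambda i, j: 0.5 * j, (8, 8))  # tilted surface
```

An identical depth pair incurs zero penalty, while a tilted surface compared against a flat one is penalized even where the raw depth difference is small.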

Figure 4: (a) the input image, (b) the result with only the L1 loss function and (c) the result with the additional normals loss function. Note the artifacts in (b).

4 From Representations to a Mesh

Based on the resulting depth and correspondence we introduce an approach to translate the 2.5D representation to a 3D facial mesh. The procedure is composed of an iterative elastic deformation algorithm (Section 4.1) followed by a fine detail recovery step driven by the input image (Section 4.2). The resulting output is an accurately reconstructed facial mesh with full vertex correspondence to a template mesh with fixed triangulation. This type of data is helpful for various dynamic facial processing applications, such as facial rigs, which allow creating and editing photo-realistic animations of actors. As a byproduct, this process also corrects the prediction of the network by completing domains in the face which are mistakenly classified as part of the background.

4.1 Non-Rigid Registration

Next, we describe the iterative deformation-based registration pipeline. First, we turn the depth map from the network into a mesh, by connecting neighboring pixels. Based on the correspondence map from the network, we compute the affine transformation from a template face to the mesh. This operation is done by minimizing the squared Euclidean distances between corresponding vertex pairs. Next, similar to [28], an iterative non-rigid registration process deforms the transformed template, aligning it with the mesh. Note that throughout the registration, only the template is warped, while the target mesh remains fixed. Each iteration involves the following four steps.

  1. Each vertex in the template mesh, v_i, is associated with a vertex, ṽ_j, on the target mesh, by evaluating the nearest neighbor in the correspondence embedding space. This step is different from the method described in [28], which computes the nearest neighbor in the Euclidean space. As a result, the proposed step allows registering a single template face to different facial identities with arbitrary expressions.

  2. Pairs, (v_i, ṽ_j), which are physically distant and those whose normal directions disagree are detected and ignored in the next step.

  3. The template mesh is deformed by minimizing the following energy

    E(V) = α ∑_{(v_i, ṽ_j) ∈ C} ‖v_i − ṽ_j‖₂² + β ∑_{(v_i, ṽ_j) ∈ C} ⟨ñ_j, v_i − ṽ_j⟩² + μ ‖L_B V‖₂²,    (5)

    where μ is the weight corresponding to the biharmonic Laplacian operator L_B (see [21, 5]), ñ_j is the normal of the corresponding vertex ṽ_j at the target mesh, C is the set of the remaining associated vertex pairs (v_i, ṽ_j), and the discrete Laplacian at v_i is supported on N(i), the set of 1-ring neighboring vertices about the vertex v_i. Notice that the first term above is the sum of squared Euclidean distances between matches. The second term is the distance from the point to the tangent plane at the corresponding point of the target mesh. The third term quantifies the stiffness of the mesh.

  4. If the motion of the template mesh between the current iteration and the previous one is below a fixed threshold, we divide the stiffness weight μ by two. This relaxes the stiffness term and allows a greater deformation in the next iteration.

This iterative process terminates when the stiffness weight is below a given threshold. Further implementation information and parameters of the registration process are provided in the supplementary material. The resulting output of this phase is a deformed template with fixed triangulation, which contains the overall facial structure recovered by the network, yet, is smoother and complete, see the third column of Figure 8.
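The four steps above can be sketched in miniature. This is an illustrative gradient-descent variant under simplifying assumptions (point-to-point matching only, an umbrella stiffness term in place of the biharmonic operator, and a fixed step size), not the paper's linear-system solver:

```python
import numpy as np

def register_step(tpl_v, tpl_emb, tgt_v, tgt_emb, tpl_nbrs, mu,
                  dist_thresh=np.inf):
    """One simplified registration iteration over template vertices."""
    # 1. nearest neighbor in the correspondence embedding space
    d_emb = np.linalg.norm(tpl_emb[:, None] - tgt_emb[None, :], axis=-1)
    match = d_emb.argmin(axis=1)
    target = tgt_v[match]
    # 2. prune physically distant pairs
    keep = np.linalg.norm(tpl_v - target, axis=1) < dist_thresh
    # 3. gradient step on  E = sum ||v_i - t_i||^2 + mu * stiffness
    grad = np.where(keep[:, None], tpl_v - target, 0.0)
    for i, nbrs in enumerate(tpl_nbrs):
        for k in nbrs:
            grad[i] += mu * (tpl_v[i] - tpl_v[k])
    new_v = tpl_v - 0.1 * grad
    # 4. relax the stiffness weight when the mesh stops moving
    if np.abs(new_v - tpl_v).max() < 1e-3:
        mu /= 2.0
    return new_v, mu
```

Iterating this step pulls the template toward the target while the stiffness weight is progressively relaxed, mirroring the outer loop described above.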

4.2 Fine Detail Reconstruction

Although the network already recovers some fine geometric details, such as wrinkles and moles, across parts of the face, a geometric approach can reconstruct details at a finer level, on the entire face, independently of the resolution. Here, we propose an approach motivated by the passive-stereo facial reconstruction method suggested in [3]. The underlying assumption here is that subtle geometric structures can be explained by local variations in the image domain. For some skin tissues, such as nevi, this assumption is inaccurate as the intensity variation results from the albedo. In such cases, the geometric structure would be wrongly modified. Still, for most parts of the face, the reconstructed details are consistent with the actual variations in depth.

The method begins from an interpolated version of the deformed template. Each vertex v_i is painted with the intensity value of the nearest pixel in the image plane. Since we are interested in recovering small details, only the high spatial frequencies, τ_h, of the texture, τ, are taken into consideration in this phase. For computing this frequency band, we subtract the synthesized low frequencies from the original intensity values. This low-pass filtered part can be computed by convolving the texture with a spatially varying Gaussian kernel in the image domain, as originally proposed. In contrast, since this convolution is equivalent to computing the heat distribution upon the shape after time t, where the initial heat profile is the original texture, we propose to compute the low-pass part τ_low as

    τ_low = (I + t·L)⁻¹ τ,    (6)

where I is the identity matrix, L is the cotangent weight discrete Laplacian operator for triangulated meshes [31], and t is a scalar proportional to the cut-off frequency of the filter.
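A minimal sketch of this implicit heat step, using an umbrella (graph) Laplacian on a toy path graph in place of the cotangent mesh Laplacian:

```python
import numpy as np

def lowpass_heat(tau, L, t):
    """Implicit heat-diffusion step: solve (I + t*L) tau_low = tau,
    with L a positive semi-definite graph Laplacian standing in for
    the cotangent Laplacian of the mesh."""
    n = len(tau)
    return np.linalg.solve(np.eye(n) + t * L, tau)

# path graph on 5 vertices: umbrella Laplacian L = D - A
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
L = np.diag(A.sum(axis=1)) - A
tau = np.array([0.0, 0.0, 10.0, 0.0, 0.0])   # a sharp "texture" spike
tau_low = lowpass_heat(tau, L, t=1.0)
tau_high = tau - tau_low                      # the high-frequency band
```

Note that the diffusion conserves the total "heat" while flattening the spike, which is exactly the low-pass behavior used to isolate τ_h.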

Next, we displace each vertex along its normal direction such that v_i′ = v_i + δ(v_i)·n_i. The step size of the displacement, δ, is a combination of a data-driven term, δ_D, and a regularization one, δ_M. The data-driven term is guided by the high-pass filtered part of the texture, τ_h = τ − τ_low. In practice, we require the local differences in the geometry to be proportional to the local variation in the high frequency band of the texture. For each vertex v_i, with a normal n_i, and a neighboring vertex v_j ∈ N(i), the data-driven term is given by

    δ_D(v_i) = η_D · (1/|N(i)|) ∑_{v_j ∈ N(i)} (τ_h(v_i) − τ_h(v_j)),    (7)

where η_D is a constant of proportionality. For further explanation of Equation 7, we refer the reader to the supplementary material of this paper or the implementation details of [3].
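A sketch of one plausible discretization of the data-driven term. The exact weighting used in the paper differs (see its supplementary material and [3]); here δ_D is simply the mean high-frequency difference over the 1-ring, scaled by an assumed constant eta:

```python
import numpy as np

def data_term(tau_high, nbrs, eta=1.0):
    """delta_D per vertex: mean difference of the high-frequency texture
    against the 1-ring neighbors, scaled by eta. An illustrative sketch,
    not the paper's exact weighting."""
    tau_high = np.asarray(tau_high, dtype=float)
    delta = np.zeros(len(tau_high))
    for i, nb in enumerate(nbrs):
        if nb:
            delta[i] = eta * np.mean([tau_high[i] - tau_high[j] for j in nb])
    return delta
```

A vertex brighter than its neighbors in the high-frequency band is pushed outward along its normal, and a darker one inward, which is the qualitative behavior the proportionality requirement asks for.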

Since we move each vertex along the normal direction, triangles could intersect each other, particularly in domains of high curvature. To reduce the probability of such collisions, a regularizing displacement field, δ_M, is added. This term is proportional to the mean curvature of the original surface, and is equivalent to a single explicit mesh fairing step [11]. The final surface modification is given by

    v_i′ = v_i + (δ_D(v_i) + η·δ_M(v_i))·n_i,    (8)

for some constant η. A demonstration of the results before and after this step is presented in Figure 5.

Figure 5: Mesoscopic displacement. From left to right: an input image, the shape after the iterative registration, the high-frequency part of the texture τ_h, and the final shape.

5 Experiments

Next, we present evaluations of both the proposed network and the pipeline as a whole, and a comparison to prominent methods of single-image based facial reconstruction [26, 49, 34].

5.1 Qualitative Evaluation

Figure 6: Network Output.

The first component of our algorithm is an Image-to-Image network. In Figure 6, we show samples of output maps produced by the proposed network. Although the network was trained with synthetic data, with simple random backgrounds (see Figure 3), it successfully separates the hair and background from the face itself and learns the corresponding representations. To qualitatively assess the accuracy of the correspondence, we present a visualization where an average facial texture is mapped to the image plane via the predicted embedding, see Figure 7; this shows how the network successfully learns to represent the facial structure. Next, in Figure 8 we show the reconstruction of the network, alongside the registered template and the final shape. Notice how the structural information retrieved by the network is preserved through the geometric stages. Figure 9 shows a qualitative comparison between the proposed method and others. One can see that our method better matches the global structure, as well as the facial details. To better perceive these differences, see Figure 10. Finally, to demonstrate the limited expressiveness of the 3DMM space compared to our method, Figure 11 presents our registered template next to its projection onto the 3DMM space. This clearly shows that our network is able to learn structures which are not spanned by the 3DMM model.

Figure 7: Texture mapping via the embedding.
Figure 8: The reconstruction stages. From left to right: the input image, the reconstruction of the network, the registered template and the final shape.
Input Proposed [34] [26] [49] Proposed [34] [26] [49]
Figure 9: Qualitative comparison. Input images are presented alongside the reconstructions of the different methods.

5.2 Quantitative Evaluation

For a quantitative comparison, we used the first 200 subjects from the BU-3DFE dataset [45], which contains facial images aligned with ground-truth depth images. Each method provides its own estimation for the depth image alongside a binary mask, representing the valid pixels to be taken into account in the evaluation. Obviously, since the problem of reconstructing depth from a single image is ill-posed, the estimation needs to be judged up to global scaling and translation along the depth axis. Thus, we compute these parameters using the Random Sample Consensus (RANSAC) approach [13], normalizing the estimation according to the ground-truth depth. This significantly reduces the absolute error of each method, as the global parameter estimation is robust to outliers. Note that the parameters of the RANSAC were identical for all the methods and samples. The results of this comparison are given in Table 1, where the units are given as a percentage of the ground-truth depth range. As a further analysis of the reconstruction accuracy, we computed the mean absolute error of each method by expression, see Table 2.
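The scale-and-shift normalization can be illustrated with a minimal RANSAC fit. The iteration count and inlier threshold below are illustrative, not the values used in the evaluation:

```python
import numpy as np

def ransac_scale_shift(z_est, z_gt, n_iter=200, thresh=1.0, seed=0):
    """Robustly fit z_gt ~ a * z_est + b from random 2-point samples,
    keeping the model with the most inliers."""
    rng = np.random.default_rng(seed)
    best, best_inliers = (1.0, 0.0), -1
    for _ in range(n_iter):
        i, j = rng.choice(len(z_est), size=2, replace=False)
        if z_est[i] == z_est[j]:
            continue
        a = (z_gt[i] - z_gt[j]) / (z_est[i] - z_est[j])
        b = z_gt[i] - a * z_est[i]
        inliers = np.sum(np.abs(a * z_est + b - z_gt) < thresh)
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best

z_est = np.linspace(0.0, 10.0, 50)
z_gt = 2.0 * z_est + 5.0
z_gt_noisy = z_gt.copy()
z_gt_noisy[::10] += 40.0          # inject gross outliers
a, b = ransac_scale_shift(z_est, z_gt_noisy)
```

Because the model is fitted from minimal samples and scored by inlier count, the injected outliers do not corrupt the recovered scale and shift.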

Input Proposed [34] [26] [49]
Figure 10: Zoomed qualitative result of first and fourth subjects from Figure 9.
Method Mean Err. Std Err. Median Err. 90% Err.
[26] 3.89 4.14 2.94 7.34
[49] 3.85 3.23 2.93 7.91
[34] 3.61 2.99 2.72 6.82
Ours 3.51 2.69 2.65 6.59
Table 1:

Quantitative evaluation on the BU-3DFE dataset. From left to right: the mean, standard deviation, median, and ninetieth-percentile of the absolute depth errors.

Method AN DI FE HA NE SA SU
[26] 3.47 4.03 3.94 4.30 3.43 3.52 4.19
[49] 4.00 3.93 3.91 3.70 3.76 3.61 3.96
[34] 3.42 3.46 3.64 3.41 4.22 3.59 4.00
Ours 3.67 3.34 3.36 3.01 3.17 3.37 4.41
Table 2: The mean error by expression. From left to right: Anger, Disgust, Fear, Happy, Neutral, Sad, Surprise.
Figure 11: 3DMM Projection. From left to right: the input image, the registered template, the projected mesh and the projection error.

5.3 The Network as a Geometric Constraint

As demonstrated by the results, the proposed network successfully learns both the depth and the embedding representations for a variety of images. This representation is the key part of the reconstruction pipeline. However, it can also be helpful for other face-related tasks. As an example, we show that the network can be used as a geometric constraint for facial image manipulations, such as transforming synthetic images into realistic ones. This idea is based on recent advances in applying Generative Adversarial Networks (GAN) [15] for domain adaptation tasks [41].

In the basic GAN framework, a Generator Network (G) learns to map from the source domain, S, to the target domain, T, while a Discriminator Network (D) tries to distinguish between generated images and samples from the target domain, by optimizing the following objective

    min_G max_D  E_{x_t ∼ T}[log D(x_t)] + E_{x_s ∼ S}[log(1 − D(G(x_s)))].    (9)

Theoretically, this framework could also translate images from the synthetic domain into the realistic one. However, it does not guarantee that the underlying geometry of the synthetic data is preserved throughout that transformation. That is, the generated image might look realistic, but have a completely different facial structure from the synthetic input. To solve that potential inconsistency, we suggest involving the proposed network as an additional loss function on the output of the generator,

    L_geo(x_s) = ‖ N(G(x_s)) − N(x_s) ‖₁,    (10)

where N(·) represents the operation of the introduced network. Note that this is feasible thanks to the fact that the proposed network is fully differentiable. The additional geometric fidelity term forces the generator to learn a mapping that makes a synthetic image more realistic while keeping the underlying geometry intact. This translation process could potentially be useful for data generation procedures, similarly to [38]. Some successful translations are visualized in Figure 12. Notice that the network implicitly learns to add facial hair and teeth, and to modify the texture and shading, without changing the facial structure. As demonstrated by this analysis, the proposed network learns a strong representation that has merit not only for reconstruction, but for other tasks as well.
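The geometric fidelity term reduces to a simple functional once the network is treated as a black box. Here `net` and `gen` are toy placeholder callables standing in for the trained networks, not the actual models:

```python
import numpy as np

def geometric_fidelity(net, gen, x):
    """|| net(gen(x)) - net(x) ||_1 : penalize any change the generator
    makes to the recovered geometric representation of the input."""
    return np.abs(net(gen(x)) - net(x)).mean()

net = lambda img: img.mean(axis=-1)   # toy "geometry" of an RGB image
identity_gen = lambda img: img        # generator that changes nothing
shift_gen = lambda img: img + 1.0     # generator that brightens the image

x = np.random.default_rng(0).random((4, 4, 3))
```

A generator that leaves the recovered geometry untouched incurs zero penalty, while one that alters it is penalized in proportion to the change.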

Figure 12: Translation results. From top to bottom: synthetic input images, the correspondence and the depth maps recovered by the network, and the transformed result.

6 Limitations

One of the core ideas of this work was a model-free approach, where the solution space is not restricted by a low dimensional subspace. Instead, the Image-to-Image network represents the solution in the extremely high-dimensional image domain. This structure is learned from synthetic examples, and shown to successfully generalize to “in-the-wild” images. Still, facial images that significantly deviate from our training domain are challenging, resulting in missing areas and errors inside the representation maps. More specifically, our network has difficulty handling extreme occlusions such as sunglasses, hands or beards, as these were not seen in the training data. Similarly to other methods, reconstructions under strong rotations are also not well handled. Reconstructions under such scenarios are shown in the supplementary material. Another limiting factor of our pipeline is speed. While the suggested network by itself can be applied efficiently, our template registration step is currently not optimized for speed and can take a few minutes to converge.

7 Conclusion

We presented an unrestricted approach for recovering the geometric structure of a face from a single image. Our algorithm employs an Image-to-Image network which maps the input image to a pixel-based geometric representation, followed by geometric deformation and refinement steps. The network is trained only by synthetic facial images, yet, is capable of reconstructing real faces. Using the network as a loss function, we propose a framework for translating synthetic facial images into realistic ones while preserving the geometric structure.

Acknowledgments

We would like to thank Roy Or-El for the helpful discussions and comments.

References

Appendix A Additional Network Details

Here, we summarize additional considerations concerning the network and its training procedure.

  • The proposed architecture is based on the one introduced in [22], with the encoder and decoder layers specified following the notation used there. For allowing further refinement of the results, three additional convolution layers were concatenated at the end of the decoder.

  • The resolution of the input and output training images was 512×512 pixels. While this is a relatively large input size for training, the Image-to-Image architecture was able to process it successfully and provided accurate results. Although one could train a network on smaller resolutions and then evaluate it on larger images, as shown in [22], we found that our network did not successfully scale up to unseen resolutions.

  • While a single network was successfully trained to retrieve both depth and correspondence representations, our experiments show that training separate networks to recover the representations is preferable. Note that the architectures of both networks were identical. This can be justified by the observation that during training, a network allocates its resources for a specific translation task, and the representation maps we used have different characteristics.

  • A necessary parameter for the registration step is the scale of the face with respect to the image dimensions. While this could be estimated from global features, such as the distance between the eyes, we opted to retrieve it directly by training the network to predict the x and y coordinates of each pixel in the image alongside the z (depth) coordinate.

Appendix B Additional Registration and Refinement Details

Next, we provide a detailed version of the iterative deformation-based registration phase, including implementation details of the fine detail reconstruction.

b.1 Non-Rigid Registration

First, we turn the x, y, and z maps from the network into a mesh by connecting each four neighboring pixels, for which the coordinates are known, with a couple of triangles. This step yields a target mesh that might have holes but has a dense map to our template model. Based on the correspondence given by the network, we compute the affine transformation from a template face to the mesh. This operation is done by minimizing the squared Euclidean distances between corresponding vertex pairs. To handle outliers, a RANSAC approach [13] is used for detecting inliers. Next, similar to [28], an iterative non-rigid registration process deforms the transformed template, aligning it with the mesh. Note that throughout the registration, only the template is warped, while the target mesh remains fixed. Each iteration involves the following four steps.

  1. Each vertex in the template mesh, v_i, is associated with a vertex, ṽ_j, on the target mesh, by evaluating the nearest neighbor in the correspondence embedding space. This step is different from the method described in [28], which computes the nearest neighbor in the Euclidean space. As a result, the proposed step allows registering a single template face to different facial identities with arbitrary expressions.

  2. Pairs, (v_i, ṽ_j), which are physically distant beyond a fixed threshold and those whose normal directions disagree by more than a fixed angle are detected and ignored in the next step.

  3. The template mesh is deformed by minimizing the following energy

    E(V) = α ∑_{(v_i, ṽ_j) ∈ C} ‖v_i − ṽ_j‖₂² + β ∑_{(v_i, ṽ_j) ∈ C} ⟨ñ_j, v_i − ṽ_j⟩² + μ ‖L_B V‖₂²,    (14)

    where, as in Equation 5, μ is the weight corresponding to the biharmonic Laplacian operator L_B (see [21, 5]), ñ_j is the normal of the corresponding vertex ṽ_j at the target mesh, C is the set of the remaining associated vertex pairs (v_i, ṽ_j), and N(i) is the set of 1-ring neighboring vertices about the vertex v_i. The first term above is the sum of squared Euclidean distances between matches, weighted by α. The second term is the distance from the point to the tangent plane at the corresponding point on the target mesh, weighted by β. The third term quantifies the stiffness of the mesh; its weight μ is initialized to a large value and relaxed during the outer iterations. In practice, the energy term given in Equation 14 is minimized iteratively by an inner loop which solves a linear system of equations. We run this loop until the norm of the difference between the vertex positions of the current iteration and the previous one is below a fixed tolerance.

  4. If the motion of the template mesh between the current outer iteration and the previous one is below a fixed threshold, we divide the stiffness weight μ by two. This relaxes the stiffness term and allows a greater deformation in the next outer iteration. In addition, we evaluate the difference between the number of remaining pairwise matches in the current iteration versus the previous one. If the difference is below 500, we modify the vertex association step to estimate the physically nearest vertex, instead of the nearest neighbor in the space of the embedding given by the network.

This iterative process terminates when the stiffness weight drops below a fixed threshold. The resulting output of this phase is a deformed template with fixed triangulation, which captures the overall facial structure recovered by the network, yet is smoother and complete.
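The outer loop above can be sketched in code. The following is a minimal NumPy/SciPy illustration, not the authors' implementation: the biharmonic stiffness is replaced by a simple umbrella-Laplacian penalty, the point-to-plane term and the normal-based pruning are omitted, the per-vertex features stand in for the network's embedding, and all constants (alpha_stiff, max_dist, min_stiff) are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def register_template(tmpl, tmpl_feat, tgt, tgt_feat, adj,
                      alpha_stiff=1.0, max_dist=5.0, min_stiff=1e-3):
    """Toy version of the iterative registration (Steps 1-4).

    tmpl, tgt : (n, 3) / (m, 3) vertex arrays.
    tmpl_feat, tgt_feat : per-vertex features standing in for the
        network's embedding, used for the nearest-neighbor association.
    adj : list of neighbor index lists defining the template mesh.
    """
    n = len(tmpl)
    V = tmpl.copy()
    # Umbrella Laplacian as a stand-in for the biharmonic stiffness operator.
    L = np.zeros((n, n))
    for i, nbrs in enumerate(adj):
        L[i, i] = len(nbrs)
        L[i, nbrs] = -1.0
    tree = cKDTree(tgt_feat)
    while alpha_stiff > min_stiff:
        # Step 1: associate each template vertex with its nearest
        # neighbor in the embedding space.
        _, idx = tree.query(tmpl_feat)
        matches = tgt[idx]
        # Step 2: prune physically distant pairs.
        keep = np.linalg.norm(V - matches, axis=1) < max_dist
        W = np.diag(keep.astype(float))
        # Step 3: minimize  sum_i keep_i * ||v_i - m_i||^2
        #                 + alpha_stiff * ||L (V - tmpl)||^2,
        # a linear least-squares problem solved per coordinate.
        K = L.T @ L
        V = np.linalg.solve(W + alpha_stiff * K,
                            W @ matches + alpha_stiff * K @ tmpl)
        # Step 4: relax the stiffness to allow larger deformations.
        alpha_stiff *= 0.5
    return V
```

Under a pure translation between template and target, the stiffness term vanishes on the translation (the Laplacian annihilates constants), so the recovered vertices coincide with their matches.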

B.2 Fine Detail Reconstruction

Although the network already recovers fine geometric details, such as wrinkles and moles, across parts of the face, a geometric approach can reconstruct details at a finer level over the entire face, independently of the resolution. Here, we propose an approach motivated by the passive-stereo facial reconstruction method suggested in [3]. The underlying assumption is that subtle geometric structures can be explained by local variations in the image domain. For some skin tissues, such as nevi, this assumption is inaccurate, as the intensity variation results from the albedo; in such cases, the geometric structure would be wrongly modified. Still, for most parts of the face, the reconstructed details are consistent with the actual variations in depth.

The method begins with an interpolated version of the deformed template, provided by a surface subdivision technique. Each vertex v_i is painted with the intensity value τ(v_i) of the nearest pixel in the image plane. Since we are interested in recovering small details, only the high spatial frequencies, τ_h, of the texture, τ, are taken into consideration in this phase. For computing this frequency band, we subtract the low-frequency component, τ_l, from the original intensity values, τ_h = τ − τ_l. This low-pass filtered part could be computed by convolving the texture with a spatially varying Gaussian kernel in the image domain, as originally proposed. Instead, since this convolution is equivalent to computing the heat distribution upon the shape after time dt, where the initial heat profile is the original texture, we propose to compute τ_l by a single implicit diffusion step

τ_l = (I + dt · L)^{−1} τ,     (15)

where I is the identity matrix, L is the cotangent weight discrete Laplacian operator for triangulated meshes [31], and dt is a scalar proportional to the cut-off frequency of the filter.
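Equation 15 amounts to one implicit (backward Euler) heat-diffusion step on the mesh. Below is a minimal sketch; for brevity, the cotangent weights of [31] are replaced by uniform graph-Laplacian weights, and dt is an illustrative value:

```python
import numpy as np

def lowpass_texture(tau, adj, dt=1.0):
    """Low-pass filter per-vertex intensities tau by solving
    (I + dt * L) tau_low = tau, a single implicit heat-diffusion step.

    adj : list of neighbor index lists; L is the uniform-weight graph
    Laplacian, a stand-in for the cotangent operator of the paper.
    Returns the low- and high-frequency bands of the texture.
    """
    n = len(tau)
    L = np.zeros((n, n))
    for i, nbrs in enumerate(adj):
        L[i, i] = len(nbrs)
        L[i, nbrs] = -1.0
    tau_low = np.linalg.solve(np.eye(n) + dt * L, tau)
    return tau_low, tau - tau_low
```

A constant texture lies in the null space of the Laplacian and passes through unchanged (zero high band), while an alternating texture is strongly attenuated by the filter.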

Next, we displace each vertex along its normal direction, v'_i = v_i + δ_i n_i. The step size of the displacement, δ_i, is a combination of a data-driven term, δ_i^τ, and a regularization one. The data-driven term is guided by the high-pass filtered part of the texture, τ_h. In practice, we require the local differences in the geometry to be proportional to the local variations in the high frequency band of the texture. That is, for each vertex v_i, with a normal n_i, and a neighboring vertex v_j, the data-driven term satisfies

(v_i + δ_{i,j} n_i − v_j)^T n_i = η (τ_h(v_i) − τ_h(v_j)),     (16)

for some constant η. Thus, the step size with respect to a single neighboring vertex can be calculated by

δ_{i,j} = η (τ_h(v_i) − τ_h(v_j)) − (v_i − v_j)^T n_i.     (17)

In the presence of any number of neighboring vertices of v_i, we compute the weighted average over its 1-ring neighborhood

δ_i^τ = ( Σ_{v_j ∈ N(v_i)} w_{i,j} δ_{i,j} ) / ( Σ_{v_j ∈ N(v_i)} w_{i,j} ),     (18)

where w_{i,j} are per-edge weights, e.g., inversely proportional to the edge length ‖v_i − v_j‖.

An additional factor can spatially attenuate the contribution of the data-driven term in curved regions, regularizing the reconstruction by

δ̃_i^τ = γ_i δ_i^τ,   γ_i = 1 − (1 / |N(v_i)|) Σ_{v_j ∈ N(v_i)} |n_i^T (v_i − v_j)| / ‖v_i − v_j‖,     (19)

where N(v_i) is the set of 1-ring neighboring vertices about the vertex v_i, and n_i is the unit normal at the vertex v_i. In flat regions the edges are nearly orthogonal to the normal, so γ_i approaches one, while in highly curved regions γ_i decreases and suppresses the data-driven displacement.

Since we move each vertex along the normal direction, triangles could intersect each other, particularly in regions with high curvature. To reduce the probability of such collisions, a regularizing displacement field, δ_i^r, is added. This term is proportional to the mean curvature of the original surface, and is equivalent to a single explicit mesh fairing step [11]. The final surface modification is given by

v'_i = v_i + δ̃_i^τ n_i + η_r Δv_i,   Δv_i = (1 / |N(v_i)|) Σ_{v_j ∈ N(v_i)} v_j − v_i,     (20)

for a constant η_r, where Δv_i is the discrete (umbrella) Laplacian displacement, which is proportional to the mean curvature normal.
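Putting Equations 16 through 20 together, the per-vertex update can be sketched as follows. This is an illustrative simplification, not the authors' implementation: uniform neighbor weights replace the weighted average of Equation 18, and the constants eta and eta_r are arbitrary.

```python
import numpy as np

def displace_details(V, N, adj, tau_h, eta=0.1, eta_r=0.1):
    """Move each vertex along its normal by a data-driven step derived
    from the high-frequency texture tau_h (Eqs. 16-18), attenuated in
    curved regions (Eq. 19), plus an umbrella-Laplacian fairing step
    (Eq. 20). Uniform neighbor weights are used for simplicity."""
    Vp = V.copy()
    for i, nbrs in enumerate(adj):
        if not nbrs:
            continue
        edges = V[i] - V[nbrs]           # v_i - v_j for each neighbor
        proj = edges @ N[i]              # (v_i - v_j)^T n_i
        # Eq. 17: per-neighbor step sizes.
        deltas = eta * (tau_h[i] - tau_h[nbrs]) - proj
        # Eq. 18 with uniform weights.
        delta_i = deltas.mean()
        # Eq. 19: attenuate the step in curved regions.
        gamma = 1.0 - np.mean(np.abs(proj) / np.linalg.norm(edges, axis=1))
        # Eq. 20: displacement along the normal plus a fairing step.
        lap = V[nbrs].mean(axis=0) - V[i]
        Vp[i] = V[i] + gamma * delta_i * N[i] + eta_r * lap
    return Vp
```

On a flat patch with a unit intensity bump at the center vertex, the attenuation factor is one, the fairing term vanishes at the center, and the center vertex is pushed outward along its normal by exactly eta.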

Appendix C Additional Experimental Results

We present additional qualitative results of our method. Figure 13 shows the output representations of the proposed network for a variety of different faces; notice the failure cases presented in the last two rows. One can see that the network generalizes well, but is still limited by the synthetic data. Specifically, the network might fail in the presence of occlusions, facial hair, or extreme poses. This is also visualized in Figure 14, where the correspondence error is shown using the texture mapping. Additional reconstruction results of our method are presented in Figures 15, 16 and 17. For analyzing the distribution of the error along the face, we present an additional comparison in Figure 18, where the absolute error, given as a percentage of the ground truth depth, is shown for several facial images.

Figure 13: Network Output.
Figure 14: Results under occlusions and rotations. Input images are shown next to the matching correspondence result, visualized using the texture mapping to better show the errors.
Figure 15: Additional reconstruction results.
Figure 16: Additional reconstruction results.
Figure 17: Additional reconstruction results.
Figure 18: Error heat maps, given as a percentage of the ground truth depth. Columns show the input, the proposed method, and the methods of [34], [26], and [49], alongside the error scale.