Due to their ability to infer and represent 3D surfaces, 3D Morphable Models (3DMMs) have many applications in computer vision, computer graphics, biometrics, and medical imaging [4, 15, 1, 26]. Many registered raw 3D images (‘scans’) are required to correctly train a 3DMM, which comes at a very large cost in the manual labour of collecting such images and annotating them with metadata. Sometimes, only the resulting 3DMMs become available to the research community, and not the raw 3D images. This is particularly true of 3D images of the human face/head, due to increasingly stringent data protection regulations. Furthermore, even when 3DMMs have overlapping parts, their resolution and ability to express detailed shape variation may be quite different, and we may wish to capture the best properties of multiple 3DMMs within a single model. However, it is currently extremely difficult to combine and enrich existing 3DMMs with different attributes that describe distinct parts of an object without such raw data. Therefore, in this paper, we present a general approach that can be employed to combine 3DMMs from different parts of an object class into a single 3DMM. Due to their widespread use in the computer vision community, we fuse 3DMMs of the human face and the full human head as our exemplar, thus creating the first combined, large-scale, full-head morphable model. The technique is readily extensible to incorporate detailed models of the ear and the body, and indeed is applicable to any object class well-described by 3DMMs.
More specifically, although there have been many models of the human face, both in terms of identity [17, 30, 28] and expression [8, 29], very few deal with the complete head anatomy. Building a high-quality, large-scale statistical model that describes the anatomy of the full human head opens up directions across numerous disciplines. First, it will assist craniofacial clinicians in diagnosis, surgical planning, and assessment. Second, generating proportionally correct head models based on the geometry of the face will aid computer graphics designers in creating realistic avatar-like representations. Finally, a head model will enable applications that reconstruct a full head representation from data-deficient sources, such as 2D images.
Our key contributions are: (i) a methodology that fuses shape-based 3DMMs, using the human face and head as an exemplar. (Note that the texture component of the 3DMM is out of the scope of this paper and the subject of our future work.) In particular, we propose both a regression method based on latent shape parameters and a covariance combination approach utilized in a Gaussian process framework; (ii) a combined, large-scale statistical model of the human head, with bespoke versions in terms of ethnicity, age and gender, that is significantly more accurate than any other existing head morphable model; we make this publicly available for the benefit of the research community, including versions with and without eyes and teeth; and (iii) an application experiment in which we utilize the combined 3DMM to perform full head reconstruction from unconstrained single images, also utilizing the FaceWarehouse blendshapes to handle facial expressions.
2 Face and head model literature
The first 3DMM was proposed by Blanz and Vetter. They were the first to recognize the generative capabilities of a 3DMM, and they proposed a technique to capture the variations of 3D faces. Only 200 scans were used to build the model (100 male and 100 female), where dense correspondences were computed based on optical flow that depends on an energy function describing both shape and texture. The Basel Face Model (BFM) is the most widely-used and well-known 3DMM, which was built by Paysan et al. and utilizes a better registration method than the original Blanz-Vetter 3DMM. They use a known template mesh in which all the vertices have known positions, and then register it to the training scans by utilizing an optimal-step Non-rigid Iterative Closest Point (NICP) algorithm. Standard PCA was employed as a dimensionality reduction technique to construct their model.
Recently, Booth et al. built a Large-Scale Face Model (LSFM) by utilizing nearly 10,000 face scans. The model is constructed by applying a weighted version of the optimal-step NICP algorithm, followed by Generalized Procrustes Analysis (GPA) and standard PCA. Due to the large number of facial scans, a robust automated procedure was carried out, including 3D landmark localization and error pruning of badly registered scans. This work was the first to introduce bespoke models in terms of age, gender and ethnicity, and is the most information-rich 3DMM of face shapes in neutral expression produced to date.
Li et al. used head scans from the US and European CAESAR body scan databases to build a statistical model of the entire head. This work focuses mainly on the temporal registration of 3D scans rather than on the topology of the head area. The data consist of full body scans, and the resolution at which the head topology was recorded is insufficient to correctly depict the shape of each individual human head. In addition, the template used for registration in this method is extremely sparse, which makes it difficult to accurately represent the entire head. Moreover, the registration process incorporates coupling weights for the back of the head and the back of the neck, which drastically constrains the actual statistical variation of the entire head area. An extension of this work, in which a non-linear model is constructed using convolutional mesh autoencoders, focuses on facial expressions but still lacks the statistical variation of the full cranium. Similarly, in the work of Hu and Saito, a full head model is created from single images, mainly for real-time rendering. That work aims at creating a realistic avatar model which includes 3D hair estimation. The head topology is considered to be unchanged for all subjects, and only the face part of the head is a statistically-correct representation.
The most accurate craniofacial 3DMM of the human head, both in terms of shape and texture, is the Liverpool-York Head Model (LYHM). In this work, global craniofacial 3DMMs and demographic sub-population 3DMMs were built from 1,212 distinct identities. Although this work is the first that describes the statistical correlation between the cranium and the face, it lacks detail in the facial characteristics, as the spatial resolution of the facial region is not significantly higher than that of the cranial region. In effect, the variance of the cranial and neck areas dominates that of the facial region in the PCA parameterization. Also, although the model describes how the cranium is affected by the age of the subject, it is biased in terms of ethnicity, due to the lack of ethnic diversity in the dataset.
3 Face and head shape combination
In this section, we propose two methods to combine the LSFM face model with the LYHM full head model. The first approach utilizes the latent PCA parameters and solves a linear least-squares problem to approximate the full head shape, whereas the second constructs a combined covariance matrix that is later utilized as a kernel in a Gaussian Process Morphable Model (GPMM).
3.1 Regression modelling
Figure 1 illustrates the three-stage regression modeling pipeline, which comprises 1) regression matrix calculation, 2) model combination and 3) full head model registration followed by PCA modeling. Each stage is now described.
For stage 1, let us denote the 3D mesh (shape) of an object with $N$ points as a vector $\mathbf{x} = [x_1, y_1, z_1, \ldots, x_N, y_N, z_N]^T \in \mathbb{R}^{3N}$.
The LYHM is a PCA generative head model with $N_h$ points, described by an orthonormal basis $\mathbf{U}_h \in \mathbb{R}^{3N_h \times n_h}$ after keeping the first $n_h$ principal components, together with the associated eigenvalues. This model can be used to generate novel 3D head instances as follows: $\mathbf{x}_h(\boldsymbol{\theta}_h) = \bar{\mathbf{x}}_h + \mathbf{U}_h \boldsymbol{\theta}_h$, where $\boldsymbol{\theta}_h \in \mathbb{R}^{n_h}$ are the head shape parameters. Similarly, the LSFM face model with $N_f$ points is described by a corresponding orthonormal basis $\mathbf{U}_f \in \mathbb{R}^{3N_f \times n_f}$ after keeping the $n_f$ principal components, together with the associated eigenvalues. The model generates novel 3D face instances by: $\mathbf{x}_f(\boldsymbol{\theta}_f) = \bar{\mathbf{x}}_f + \mathbf{U}_f \boldsymbol{\theta}_f$, where $\boldsymbol{\theta}_f \in \mathbb{R}^{n_f}$ are the face shape parameters.
In order to combine the two models, we synthesize data directly from the latent eigenspace of the head model by drawing random samples from a Gaussian distribution defined by the principal eigenvalues of the head model. The standard deviation of the distribution of each component is equal to the square root of the corresponding eigenvalue. In this way we produce a large set of randomly distinct head shape parameters $\boldsymbol{\theta}_h$.
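As a concrete illustration, the sampling step can be sketched in NumPy; the basis, mean, eigenvalues and all sizes below are hypothetical toy stand-ins, not the trained LYHM:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the LYHM head model (hypothetical sizes; the real
# basis and eigenvalues come from the trained PCA model).
N_h, n_h = 100, 10
mean_head = rng.normal(size=3 * N_h)                      # flattened mean shape
U_h = np.linalg.qr(rng.normal(size=(3 * N_h, n_h)))[0]    # orthonormal basis
eigvals = np.linspace(2.0, 0.1, n_h)                      # PCA eigenvalues

# Draw shape parameters with per-component standard deviation equal to
# the square root of the corresponding eigenvalue.
n_samples = 5
theta = rng.normal(scale=np.sqrt(eigvals), size=(n_samples, n_h))

# Synthesize random full head instances: x = mean + U * theta.
heads = mean_head + theta @ U_h.T
```

Each row of `heads` is one synthetic full-head instance in the model's vertex ordering.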
After generating the random full head instances, we apply non-rigid registration (NICP) between the head meshes and the cropped mean face of the LSFM face model. We perform this task for each one of the meshes in order to get the facial part of the full head instance and describe it in terms of the LSFM topology. Once we acquire those registered meshes, we project them to the LSFM subspace and retrieve the corresponding shape parameters. Thus, for each one of the randomly produced head instances, we have a pair of shape parameters $(\boldsymbol{\theta}_h, \boldsymbol{\theta}_f)$ corresponding to the full head representation and to the facial area, respectively.
By utilizing those pairs, we construct a matrix $\mathbf{\Theta}_h$ in which we stack all the head shape parameters and a matrix $\mathbf{\Theta}_f$ in which we stack the face shape parameters from the LSFM model. We would like to find a matrix $\mathbf{W}_{h,f}$ that describes the mapping from the LSFM face shape parameters $\boldsymbol{\theta}_f$ to the corresponding LYHM full head shape parameters $\boldsymbol{\theta}_h$. We solve this by formulating a linear least-squares problem that minimizes: $\left\| \mathbf{\Theta}_h - \mathbf{W}_{h,f}\, \mathbf{\Theta}_f \right\|_F^2$.
By utilizing the normal equations, the solution of this least-squares problem is readily given by: $\mathbf{W}_{h,f} = \mathbf{\Theta}_h \mathbf{\Theta}_f^T \left( \mathbf{\Theta}_f \mathbf{\Theta}_f^T \right)^{-1} = \mathbf{\Theta}_h \mathbf{\Theta}_f^\dagger$, where $\mathbf{\Theta}_f^\dagger$ is the right pseudo-inverse of $\mathbf{\Theta}_f$. Given a 3D face instance $\mathbf{x}_f$, we derive the 3D shape of the full head $\mathbf{x}_h$ as follows: $\mathbf{x}_h = \bar{\mathbf{x}}_h + \mathbf{U}_h \mathbf{W}_{h,f}\, \mathbf{U}_f^T (\mathbf{x}_f - \bar{\mathbf{x}}_f)$.
In this way we can map and predict the shape of the cranium region for any given face shape in terms of LYHM topology.
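The regression step admits a compact NumPy sketch. All latent dimensions and the data below are toy stand-ins (including `W_true`, a hypothetical noiseless mapping used only to generate synthetic training pairs):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical latent dimensions: n_h head parameters, n_f face
# parameters, m synthetic training pairs from the sampling pipeline.
n_h, n_f, m = 8, 5, 200
Theta_f = rng.normal(size=(n_f, m))        # stacked face shape parameters
W_true = rng.normal(size=(n_h, n_f))       # toy generating map (for the sketch)
Theta_h = W_true @ Theta_f                 # stacked head shape parameters

# Least-squares estimate W = Theta_h * pinv(Theta_f), where pinv is the
# right pseudo-inverse (Theta_f has full row rank here).
W = Theta_h @ np.linalg.pinv(Theta_f)

# Predict head shape parameters for a new face parameter vector.
theta_f_new = rng.normal(size=n_f)
theta_h_pred = W @ theta_f_new
```

On this noiseless toy data the estimate recovers the generating map exactly; with real, noisy parameter pairs it returns the least-squares fit instead.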
In stage 2 (Fig. 1), we employ the large MeIn3D database, which includes nearly 10,000 3D face images, and we utilize the regression matrix to construct new full head shapes that we later combine with the real facial scans. We achieve this by discarding the facial region of the full head instance, which has less detailed information, and replacing it with the registered LSFM face of the MeIn3D scan. In order to create a single seamless instance, we merge the meshes together by applying a NICP framework in which we deform only the outer parts of the facial mesh to match the cranium angle and shape, so that the result is a smooth combination of the two meshes. Following the formulation in , this is accomplished by introducing higher stiffness weights in the inner mesh (lower on the outside) while we apply the NICP algorithm. To compute those weights, we measure the Euclidean distance of a given point from the nose tip of the mesh and assign a relative weight to that point: the bigger the distance from the nose tip, the smaller the weight of the point.
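A minimal sketch of the distance-based stiffness weighting; the vertices and nose-tip location are hypothetical, and a simple linear normalization stands in for the paper's (unspecified) fall-off:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy facial mesh vertices and a hypothetical nose-tip location.
verts = rng.normal(size=(500, 3))
nose_tip = np.zeros(3)

# Per-vertex stiffness weight: points near the nose tip get a weight
# close to 1 (kept rigid), points far away get a low weight (free to
# deform towards the cranium during NICP).
d = np.linalg.norm(verts - nose_tip, axis=1)
weights = 1.0 - (d - d.min()) / (d.max() - d.min())
```

The weight is largest at the vertex nearest the nose tip and falls to zero at the farthest vertex, matching the "bigger distance, smaller weight" rule above.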
One of the drawbacks of the LYHM is the arbitrary neck circumference, where the neck tends to become broader as the general size of the head increases. In stage 3 (Fig. 1), we aim at excluding this factor from our final head model by applying a final NICP step between the merged meshes and our head template. We utilize the same framework as before with the point-weighted strategy, where we assign weights to the points based on their Euclidean distance from the center of the head mass. This helps us avoid any inconsistencies of the neck area that might arise from the regression scheme. For the area around the ear, we introduce 50 additional landmarks to control the registration and preserve the general shape of the ear area.
After implementing the aforementioned pipeline for each one of the meshes, we perform PCA on the points of the mesh and acquire a new generative full head model that exhibits more detail in the face area, in combination with bespoke head shapes.
3.2 Gaussian process modeling
Combining models with Gaussian processes is a less complicated and more robust technique, which does not generate irregular head shapes due to poor regression values.
The concept of Gaussian Process Morphable Models (GPMMs) was recently introduced in [22, 14, 18]. The main contribution of GPMMs is the generalization of classic Point Distribution Models (such as those constructed using PCA) with the help of Gaussian processes. A shape is modeled as a deformation from a reference shape, i.e. a shape can be represented as: $\mathbf{x} = \{\bar{\mathbf{x}}_i + u(\bar{\mathbf{x}}_i)\}_{i=1}^{N}$, where $u: \Omega \to \mathbb{R}^3$ is a deformation function defined on the reference domain $\Omega$. The deformations are modeled as a Gaussian process $u \sim GP(\mu, k)$, where $\mu$ is the mean deformation and $k$ is a covariance function or kernel.
The Gaussian process model is capable of operating outside the space of valid face shapes; this depends highly on the kernel chosen for the task. In the classic approach, the deformation function is learned from a series of typical example surfaces: a set of deformation fields $\{u_1, \ldots, u_n\}$ is learned, where $u_i$ denotes the deformation field that maps a point on the reference shape to the corresponding point on the $i$-th training surface.
A Gaussian process that models these characteristic deformations is obtained by estimating the empirical mean: $\mu(x) = \frac{1}{n} \sum_{i=1}^{n} u_i(x)$, and the covariance function: $k(x, y) = \frac{1}{n-1} \sum_{i=1}^{n} \left( u_i(x) - \mu(x) \right) \left( u_i(y) - \mu(y) \right)^T$.
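When the deformation fields are sampled at the reference points, the empirical mean and sample covariance reduce to standard centred moments. A small NumPy sketch with toy deformation fields (sizes hypothetical):

```python
import numpy as np

rng = np.random.default_rng(10)

# n example deformation fields sampled at N reference points; each row
# flattens the 3-D deformations of one training surface.
n, N = 8, 25
U_fields = rng.normal(size=(n, 3 * N))

# Empirical mean deformation and sample covariance kernel:
#   mu(x)   = (1/n)     sum_i u_i(x)
#   k(x, y) = (1/(n-1)) sum_i (u_i(x) - mu(x)) (u_i(y) - mu(y))^T
mu = U_fields.mean(axis=0)
D = U_fields - mu
K = D.T @ D / (n - 1)          # (3N x 3N) sample covariance matrix
```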
This kernel is defined as the empirical/sample covariance kernel. This specific Gaussian process model is a continuous analog of a PCA model, and it operates in the facial deformation spectrum. For each one of the models (LYHM, LSFM), we know the principal orthonormal basis and the eigenvalues. Hence, the covariance matrix of each model is defined as: $\mathbf{C}_h = \mathbf{U}_h \mathbf{\Lambda}_h \mathbf{U}_h^T, \qquad \mathbf{C}_f = \mathbf{U}_f \mathbf{\Lambda}_f \mathbf{U}_f^T$,
where $\mathbf{C}_h$ and $\mathbf{C}_f$ are the covariance matrices, and $\mathbf{\Lambda}_h$ and $\mathbf{\Lambda}_f$ are diagonal matrices with the eigenvalues on the main diagonal, for the head and face model respectively.
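For instance, with a toy orthonormal basis and eigenvalues (hypothetical sizes), the model covariance can be assembled as follows; the result is symmetric, positive semi-definite, and low rank:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical PCA model: orthonormal basis U (3N x n) and eigenvalues.
N, n = 50, 6
U = np.linalg.qr(rng.normal(size=(3 * N, n)))[0]
Lam = np.diag(np.linspace(3.0, 0.5, n))

# Model covariance C = U * Lambda * U^T (rank n).
C = U @ Lam @ U.T
```

Because `U` is orthonormal, the trace of `C` equals the sum of the retained eigenvalues, i.e. the variance captured by the model.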
We aim at constructing a universal covariance matrix $\mathbf{C}_{hf}$ that accommodates the highly detailed facial properties of the LSFM and the cranial distribution of the LYHM. We keep, as a reference, the mean of the head model, and we non-rigidly register the mean face of the LSFM to it. Both PCA models must be in the same scale space for this method to work, which was not necessary for the regression method. Similarly, we register our head template by utilizing the same pipeline as before for full head registration; this template is used as the reference mesh for the new joint covariance matrix.
For each point pair $(i, j)$ in our head template, there exists a local covariance matrix $\mathbf{C}_{hf}^{i,j} \in \mathbb{R}^{3 \times 3}$. In order to calculate its value, we begin by projecting the points onto the mean head mesh. If both points lie outside the face area that the registered mean mesh of LSFM covers, we identify their exact locations in the mean head mesh in terms of barycentric coordinates with respect to their corresponding triangles $t_i$ and $t_j$.
Each vertex pair $(a, b)$ between the triangles, with $a \in t_i$ and $b \in t_j$, has an individual covariance matrix $\mathbf{C}_h^{a,b}$. Therefore, we blend those local vertex-covariance matrices to acquire our final local $\mathbf{C}_{hf}^{i,j}$ as follows: $\mathbf{C}_{hf}^{i,j} = \sum_{a \in t_i} \sum_{b \in t_j} w_{a,b}\, \mathbf{C}_h^{a,b}$,
where $w_{a,b}$ is a weighting scheme based on the barycentric coordinates of the points. An illustration of the aforementioned methodology can be seen in Figure 2.
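The barycentric blending can be sketched as a weighted sum over the nine vertex pairs of the two triangles; the covariance blocks and barycentric coordinates below are hypothetical, and products of barycentric weights stand in for the paper's weighting scheme:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 3x3 covariance blocks C_vertex[a, b] between vertex a of
# triangle t_i and vertex b of triangle t_j, plus barycentric
# coordinates of each projected point inside its triangle.
C_vertex = rng.normal(size=(3, 3, 3, 3))
bary_i = np.array([0.2, 0.3, 0.5])
bary_j = np.array([0.1, 0.6, 0.3])

# Blend the per-vertex covariances with products of barycentric weights.
C_ij = np.zeros((3, 3))
for a in range(3):
    for b in range(3):
        C_ij += bary_i[a] * bary_j[b] * C_vertex[a, b]
```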
In the case where both points lie in the face area, we initially repeat the same procedure by projecting onto the mean face mesh of LSFM and calculating a blended covariance matrix $\mathbf{C}_f^{i,j}$, followed by a blended covariance matrix $\mathbf{C}_h^{i,j}$ calculated given the mean head mesh of LYHM. We formulate the final local covariance matrix as: $\mathbf{C}_{hf}^{i,j} = w\, \mathbf{C}_f^{i,j} + (1 - w)\, \mathbf{C}_h^{i,j}$,
where $w$ is a normalized weight based on the Euclidean distances of the points from the nose tip of the registered meshes. We apply this weighting scheme to smoothly blend the properties of the head and face models and to avoid the discontinuities that appear on the borders of the face and head areas.
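A minimal sketch of this face/head blending for one point pair; the covariance blocks are hypothetical, and a simple linear fall-off with distance from the nose tip stands in for the paper's (unspecified) normalization of the weight:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical local 3x3 covariance blocks for one point pair.
C_face = rng.normal(size=(3, 3))
C_head = rng.normal(size=(3, 3))

def blend(C_face, C_head, dist, max_dist):
    """Convex combination w*C_face + (1-w)*C_head, where the weight w
    falls off with distance from the nose tip: the face model dominates
    near the face centre, the head model far away."""
    w = max(0.0, 1.0 - dist / max_dist)
    return w * C_face + (1.0 - w) * C_head

C_near = blend(C_face, C_head, dist=0.0, max_dist=10.0)    # pure face term
C_far = blend(C_face, C_head, dist=10.0, max_dist=10.0)    # pure head term
```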
Lastly, when the points belong to different areas, i.e. point $i$ on the face and point $j$ on the head, we simply follow the first approach that exploits just the head covariance matrix $\mathbf{C}_h$, since the correlation between face and head shape exists only in the LYHM. After repeating the aforementioned methodology for every point pair and calculating the entire joint covariance matrix $\mathbf{C}_{hf}$, we are able to sample new instances from the Gaussian process morphable model.
3.3 Model refinement
To refine our model, we begin by exploiting the already trained GPMM of the previous section. With our head template and the universal covariance matrix $\mathbf{C}_{hf}$, we define a kernel function: $k(x, y) = \mathbf{C}_{hf}\big[id(x), id(y)\big]$,
where $x$ and $y$ are two given points from the domain $\Omega$ on which the Gaussian process is defined, and the function $id(\cdot)$ returns the index of the closest point of its argument on the template surface. We then define our GPMM as: $u \sim GP(\mu_0, k)$,
where $\mu_0(x) = \mathbf{0}$. For each scan in the MeIn3D dataset, we first try to reconstruct a full head registration with our GPMM using Gaussian Process Regression [22, 14]. Given a set of observed deformations $\hat{U} = \{\hat{u}_1, \ldots, \hat{u}_m\}$ at points $\{x_1, \ldots, x_m\}$, subject to Gaussian noise $\epsilon \sim \mathcal{N}(0, \sigma^2)$, Gaussian process regression computes a posterior model $u_p \sim GP(\mu_p, k_p)$. The landmark pairs between a reference mesh and the raw scan define a set of sparse mappings, which tell us exactly how the points on the reference mesh will deform. Any sample from this posterior model will then have fixed deformations at our observed points, i.e. the facial landmarks. The mean and covariance are computed as: $\mu_p(x) = \mu(x) + K_x^T (K + \sigma^2 I)^{-1} (\hat{U} - \boldsymbol{\mu})$ and $k_p(x, y) = k(x, y) - K_x^T (K + \sigma^2 I)^{-1} K_y$, where $K = [k(x_i, x_j)]_{i,j}$, $K_x = [k(x_i, x)]_i$ and $\boldsymbol{\mu} = [\mu(x_i)]_i$.
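These are the standard Gaussian process regression equations. A self-contained 1-D NumPy sketch, with a squared-exponential kernel standing in for the combined covariance kernel and a zero prior mean:

```python
import numpy as np

def rbf(X, Y, scale=0.3):
    """Stand-in squared-exponential kernel (the paper's kernel is the
    combined sample covariance C_hf; this toy kernel replaces it)."""
    d2 = (X[:, None, :] - Y[None, :, :]) ** 2
    return np.exp(-d2.sum(-1) / (2 * scale ** 2))

# Observed 1-D deformations at five landmark locations, noise sigma.
X_obs = np.linspace(-1.0, 1.0, 5)[:, None]
u_obs = np.sin(3.0 * X_obs[:, 0])
sigma = 1e-3

# GP regression posterior:
#   mu_p(x)   = k(x, X) (K + sigma^2 I)^-1 u
#   k_p(x, y) = k(x, y) - k(x, X) (K + sigma^2 I)^-1 k(X, y)
K = rbf(X_obs, X_obs) + sigma ** 2 * np.eye(len(X_obs))
K_inv = np.linalg.inv(K)

X_new = np.linspace(-1.0, 1.0, 21)[:, None]
k_star = rbf(X_new, X_obs)
mu_p = k_star @ K_inv @ u_obs
k_p = rbf(X_new, X_new) - k_star @ K_inv @ k_star.T
```

With small noise, the posterior mean is pinned to the observed deformations at the landmark points, and the posterior variance collapses there, which is exactly the behaviour exploited for landmark-constrained head registration.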
For a scan with landmarks, we first compute a posterior model based on the sparse deformations defined by the landmark pairs, i.e. the displacements from the landmarks of the reference mesh to the corresponding landmarks of the scan.
We then refine the posterior model with the Iterative Closest Point (ICP) algorithm. More specifically, at each iteration we compute the current regression result as the reference shape warped with the mean deformation of the current posterior model. We then find, for each point of the current result, its closest point on the scan, and update our posterior model with the resulting dense deformations.
Since the raw scans in the MeIn3D database can be noisy, we exclude a correspondence pair if the scan point lies on the edge of the scan or if the distance between the pair exceeds a threshold. After the final iteration we obtain the regression result, and we then non-rigidly align its face region to the face region of the raw scan to obtain our final reconstruction.
In practice, we noticed that the reconstructions often produced unrealistic head shapes. We therefore modify the covariance matrix before the Gaussian process regression: we first compute the principal components by decomposing $\mathbf{C}_{hf}$, then reconstruct the covariance matrix using Eq. 10 with fewer statistical components. With the full head reconstructions from the MeIn3D dataset, we then compute a new sample covariance matrix and repeat the previous GP regression process to refine the reconstructions. Finally, we perform PCA on the refined reconstructions to obtain our final refined model.
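Rebuilding the covariance from a truncated eigendecomposition can be sketched as follows; the matrix and the number of retained components `k` are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical full-rank sample covariance of a small shape model.
A = rng.normal(size=(30, 30))
C = A @ A.T

# Keep only the k strongest statistical components and rebuild the
# covariance, discarding the noisy tail that produced unrealistic heads.
k = 5
eigvals, eigvecs = np.linalg.eigh(C)          # eigenvalues in ascending order
idx = np.argsort(eigvals)[::-1][:k]           # indices of the k largest
C_trunc = eigvecs[:, idx] @ np.diag(eigvals[idx]) @ eigvecs[:, idx].T
```

The truncated matrix is the best rank-`k` approximation of `C` in the Frobenius sense, so sampling with it suppresses the low-variance directions responsible for implausible shapes.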
4 Intrinsic evaluation of CFHM models
We name our combined full head model the Combined Face & Head Model (CFHM) and now show its comparative performance. Following common practice, we evaluate our CFHM variations against the LYHM by utilizing compactness, generalization and specificity [12, 9, 5]. For all the subsequent experiments, we utilise the original head scans used to build the LYHM, from which we have chosen 300 head meshes that were excluded from the training procedure. This test set was randomly chosen within demographic constraints to ensure ethnic, age and gender diversity. We name our model variations as follows: CFHM-reg, built by the regression method; CFHM-GP, built by the Gaussian process kernel framework; and CFHM-ref, built after refinement with Gaussian process regression. We also present bespoke models in terms of age and ethnicity, constructed by the Gaussian process kernel method coupled with refinement.
The top graphs in Figure 4 present the compactness measures of the CFHM models compared to the LYHM. Compactness calculates the percentage of variance of the training data that is explained by the model when a certain number of principal components is retained. The CFHM-reg and CFHM-GP models express higher compactness than the model after refinement. The compactness of all proposed models is far greater than that of the LYHM, as can be seen in the graph. Both the global and the bespoke CFHM models can be considered sufficiently compact.
The center row of Fig. 4 illustrates the generalization error, which demonstrates the ability of the models to represent novel head shapes unseen during training. To compute the generalization error for a given number of retained principal components, we compute the per-vertex Euclidean distance between every sample of the test set and its corresponding model projection, and then take the average value over all vertices and test samples. All of the proposed models exhibit far greater generalization capability than the LYHM. The refined model CFHM-ref tends to generalize better than the other approaches, especially in the lower range of retained components. Additionally, we plot the generalization error of the bespoke models against the CFHM-ref in the center row of Figure 4 (b). In order to derive a correct generalization measure for the bespoke CFHM-ref, for every mesh we use its demographic information to project it onto the subspace of the corresponding bespoke model, and then we compute an overall average error. We observe that the CFHM-ref mostly outperforms the bespoke models, which might be attributed to the fact that many of the specific models are trained from smaller cohorts, and so run out of interesting statistical variance.
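The generalization measure can be sketched as follows; the toy model, test set and sizes are hypothetical, but the error is computed as described above, as the per-vertex Euclidean distance between a test shape and its projection, averaged over vertices and samples:

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy PCA shape model (mean + orthonormal basis over N vertices) and a
# held-out test set generated from the same basis plus small noise.
N, n = 40, 6
mean = rng.normal(size=3 * N)
U = np.linalg.qr(rng.normal(size=(3 * N, n)))[0]
coeffs = rng.normal(size=(10, n)) * np.linspace(5.0, 1.0, n)
test = mean + coeffs @ U.T + 0.01 * rng.normal(size=(10, 3 * N))

def generalization_error(test, mean, U, k):
    """Mean per-vertex Euclidean distance between each test shape and
    its projection onto the first k principal components."""
    Uk = U[:, :k]
    proj = mean + (test - mean) @ Uk @ Uk.T
    resid = (test - proj).reshape(len(test), -1, 3)
    return np.linalg.norm(resid, axis=2).mean()

errors = [generalization_error(test, mean, U, k) for k in range(1, n + 1)]
```

Retaining more components shrinks the residual, so the error curve decreases as components are added, which is the behaviour plotted in Fig. 4.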
Lastly, the bottom graphs of Figure 4 show the specificity measures of the introduced models, which evaluate the validity of synthetic faces generated by a model. We randomly synthesize 5,000 faces from each model for a fixed number of components and measure how close they are to the real faces based on a standard per-vertex Euclidean distance metric. We observe that the model with the best error results is the proposed refined model, CFHM-ref. The LYHM demonstrates a better specificity error than the CFHM-reg and CFHM-GP models only in the first 20 components. Both of the proposed combined models exhibit steady error measures once more than 20 components are retained. This is due to the higher compactness that both combined models demonstrate, which enables them to maintain a consistent specificity error beyond 20 components. For all bespoke models, we observe that the specificity errors attain particularly low values. This is evidence that the synthetic faces generated by both the global and the bespoke CFHM models are realistic.
Our results show that our combination techniques yield models that exhibit improved intrinsic characteristics compared to the original LYHM head model.
5 Head reconstruction from single images
As an application experiment, we outline a methodology that enables us to reconstruct the entire head shape from unconstrained single images. We strictly utilize only one view/pose for head reconstruction, in contrast to , where multiple images were utilized with photometric constraints. We achieve this by regressing from a latent space that represents the 3D face and ear shape to the latent space of the full head models constructed by the proposed methodologies. We begin by building a PCA model of the inner face along with landmarks on each ear, as described in . We utilize the head meshes produced by our proposed methods. After building the face-ear PCA model, we project each one of the face-ear examples to get the associated shape parameters. Similarly, we project the full head mesh of the same identity to the full head PCA model in order to acquire the latent shape parameters of the entire head. As in Section 3.1, we construct a regression matrix which works as a mapping from the latent space of the ear/face shape to the full head representation.
In order to reconstruct the full head shape from 2D images, we begin by fitting a face 3DMM utilizing the “In-the-Wild” feature-based texture algorithm proposed in . Afterwards, we implement an ear detector and an Active Appearance Model (AAM), as proposed in , to localize the ear landmarks in the 2D image domain. Since we have fitted a 3DMM in the image space, we already have the camera parameters, i.e., focal length, rotation and translation. To this effect, we can easily retrieve the ear landmarks in 3D space by solving an inverse perspective-n-point problem given the camera parameters and the depth values of the fitted mesh. We mirror the 3D landmarks with respect to the z-axis to obtain the missing landmarks of the occluded ear. After acquiring the facial part and the ear landmarks, we are able to attain the full head representation with the help of the regression matrix. Since each proposed method estimates a slightly different head shape for the face scans, we repeat the aforementioned procedure by building bespoke regression matrices for each head model. Some qualitative results can be seen in Figure 5.
We evaluate our methodology quantitatively by rendering distinct head scans from our test set in frontal and side poses, rotated around the vertical $y$-axis so that the ears are visible in the image space. None of the head scans utilized for the evaluation were part of the training process of any head model. We apply our previous procedure, where we fit a face 3DMM and detect the ear landmarks in the image plane. Then, for each method, we exploit the bespoke regression matrix to predict the entire head shape. We measure the per-vertex error between the recovered head shape and the actual ground-truth head scan by projecting each point of the fitted mesh onto the ground truth and measuring the Euclidean distance. All distances are normalized based on the inter-ocular distance of the recovered mesh. Fig. 6 shows the cumulative error distribution for this experiment, for the four models under test. Tables 1 and 2 report the corresponding Area Under Curve (AUC) and failure rates for the fitted and the actual ground-truth 3D facial meshes, respectively. In both situations, the LYHM struggles to recover the head shapes. CFHM-reg and CFHM-GP perform equally well, whereas the model after refinement attains the best results. Finally, Fig. 7 illustrates regression of the full head shape when only the face of the imaged subject is visible.
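The normalized per-vertex error can be sketched as follows; the meshes, the dense correspondences (assumed already established by closest-point projection) and the eye-corner indices are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical recovered mesh and ground-truth points, assumed already
# in dense correspondence via closest-point projection onto the scan.
fitted = rng.normal(size=(200, 3))
ground_truth = fitted + 0.05 * rng.normal(size=(200, 3))

# Outer eye-corner indices on the recovered mesh (assumed known).
left_eye, right_eye = 10, 20
iod = np.linalg.norm(fitted[left_eye] - fitted[right_eye])

# Per-vertex Euclidean error, normalized by the inter-ocular distance
# of the recovered mesh, and its mean over all vertices.
per_vertex = np.linalg.norm(fitted - ground_truth, axis=1) / iod
mean_error = per_vertex.mean()
```

Normalizing by inter-ocular distance makes errors comparable across subjects and renders of different scale.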
|Method|AUC|Failure Rate (%)|
|Method|AUC|Failure Rate (%)|
6 Conclusion
We presented a pipeline to fuse multiple 3DMMs into a single 3DMM, and used it to combine the LSFM face model and the LYHM head model. The resulting 3DMM captures the desirable properties of both constituent 3DMMs: the high facial detail of the face model and the full cranial shape variation of the head model. The augmented model is capable of representing and reconstructing any given face/head shape, due to the high variation of facial and head appearance present in the original models. We demonstrated that our methodology yields a statistical model that is considerably superior to the original constituent models. Finally, we illustrated the model's utility in full head reconstruction from a single image, where only the face is visible. In future work, we will add a texture component to our models, extrapolating cranial texture from facial texture.
-  O. Aldrian and W. A. Smith. Inverse rendering of faces with a 3d morphable model. IEEE transactions on pattern analysis and machine intelligence, 35(5):1080–1093, 2013.
-  B. Amberg, S. Romdhani, and T. Vetter. Optimal step nonrigid icp algorithms for surface registration. In Computer Vision and Pattern Recognition, 2007. CVPR'07. IEEE Conference on, pages 1–8, 2007.
-  V. Blanz and T. Vetter. A morphable model for the synthesis of 3d faces. In Proc 26th annual conf on Computer Graphics and Interactive Techniques, pages 187–194, 1999.
-  V. Blanz and T. Vetter. Face recognition based on fitting a 3d morphable model. IEEE Transactions on pattern analysis and machine intelligence, 25(9):1063–1074, 2003.
-  T. Bolkart and S. Wuhrer. A groupwise multilinear correspondence optimization for 3d faces. In Proceedings of the IEEE International Conference on Computer Vision, pages 3604–3612, 2015.
-  J. Booth, E. Antonakos, S. Ploumpis, G. Trigeorgis, Y. Panagakis, S. Zafeiriou, et al. 3d face morphable models “in-the-wild”. In Proceedings of the IEEE Conference on ComputerVision and Pattern Recognition, 2017.
-  J. Booth, A. Roussos, S. Zafeiriou, A. Ponniah, and D. Dunaway. A 3d morphable model learnt from 10,000 faces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5543–5552, 2016.
-  A. M. Bronstein, M. M. Bronstein, and R. Kimmel. Expression-invariant 3d face recognition. In international conference on Audio-and video-based biometric person authentication, pages 62–70, 2003.
-  A. Brunton, A. Salazar, T. Bolkart, and S. Wuhrer. Review of statistical shape spaces for 3d data with comparative analysis for human faces. Computer Vision and Image Understanding, 128:1–17, 2014.
-  H. Dai, N. Pears, and W. Smith. A data-augmented 3d morphable model of the ear. In Automatic Face & Gesture Recognition (FG 2018), 2018 13th IEEE International Conference on, pages 404–408. IEEE, 2018.
-  H. Dai, N. Pears, W. Smith, and C. Duncan. A 3d morphable model of craniofacial shape and texture variation. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 3104–3112, 2017.
-  R. Davies, C. Twining, and C. Taylor. Statistical models of shape: Optimisation and evaluation. Springer Science & Business Media, 2008.
-  M. De Smet and L. Van Gool. Optimal regions for linear model-based 3d face reconstruction. In Asian Conference on Computer Vision, pages 276–289, 2010.
-  T. Gerig, A. Morel-Forster, C. Blumer, B. Egger, M. Luthi, S. Schönborn, and T. Vetter. Morphable face models-an open framework. In Automatic Face & Gesture Recognition (FG 2018), 2018 13th IEEE International Conference on, pages 75–82, 2018.
-  G. Hu, F. Yan, C.-H. Chan, W. Deng, W. Christmas, J. Kittler, and N. M. Robertson. Face recognition using a unified 3d morphable model. In European Conference on Computer Vision, pages 73–89, 2016.
-  L. Hu, S. Saito, L. Wei, K. Nagano, J. Seo, J. Fursund, I. Sadeghi, C. Sun, Y.-C. Chen, and H. Li. Avatar digitization from a single image for real-time rendering. ACM Transactions on Graphics (TOG), 36(6):195, 2017.
-  P. Huber, G. Hu, R. Tena, P. Mortazavian, P. Koppen, W. J. Christmas, M. Ratsch, and J. Kittler. A multiresolution 3d morphable face model and fitting framework. In Proceedings of the 11th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 2016.
-  P. Koppen, Z.-H. Feng, J. Kittler, M. Awais, W. Christmas, X.-J. Wu, and H.-F. Yin. Gaussian mixture 3d morphable face model. Pattern Recognition, 74:617–628, 2018.
-  V. Lepetit, F. Moreno-Noguer, and P. Fua. Epnp: An accurate o (n) solution to the pnp problem. International journal of computer vision, 81(2):155, 2009.
-  T. Li, T. Bolkart, M. J. Black, H. Li, and J. Romero. Learning a model of facial shape and expression from 4d scans. ACM Transactions on Graphics (TOG), 36(6):194, 2017.
-  S. Liang, L. G. Shapiro, and I. Kemelmacher-Shlizerman. Head reconstruction from internet photos. In European Conference on Computer Vision, pages 360–374, 2016.
-  M. Lüthi, T. Gerig, C. Jud, and T. Vetter. Gaussian process morphable models. IEEE transactions on pattern analysis and machine intelligence, 2017.
-  P. Paysan, R. Knothe, B. Amberg, S. Romdhani, and T. Vetter. A 3d face model for pose and illumination invariant face recognition. In Advanced video and signal based surveillance, 2009. AVSS’09. Sixth IEEE International Conference on, pages 296–301, 2009.
-  A. Ranjan, T. Bolkart, S. Sanyal, and M. J. Black. Generating 3d faces using convolutional mesh autoencoders. arXiv preprint arXiv:1807.10267, 2018.
-  K. M. Robinette, S. Blackwell, H. Daanen, M. Boehmer, and S. Fleming. Civilian american and european surface anthropometry resource (caesar), final report. volume 1. summary. Technical report, 2002.
-  F. C. Staal, A. J. Ponniah, F. Angullia, C. Ruff, M. J. Koudstaal, et al. Describing crouzon and pfeiffer syndrome based on principal component analysis. Journal of Cranio-Maxillofacial Surgery, 43(4):528–536, 2015.
-  Y. Zhou and S. Zaferiou. Deformable models of ears in-the-wild for alignment and recognition. In 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), pages 626–633, 2017.
-  X. Zhu, Z. Lei, X. Liu, H. Shi, and S. Z. Li. Face alignment across large poses: A 3d solution. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 146–155, 2016.
-  X. Zhu, Z. Lei, J. Yan, D. Yi, and S. Z. Li. High-fidelity pose and expression normalization for face recognition in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 787–796, 2015.
-  X. Zhu, J. Yan, D. Yi, Z. Lei, and S. Z. Li. Discriminative 3d morphable model fitting. In Automatic Face and Gesture Recognition (FG), 2015 11th IEEE International Conference and Workshops on, volume 1, pages 1–8, 2015.