1 Introduction
Due to their ability to infer and represent 3D surfaces, 3D Morphable Models (3DMMs) have many applications in computer vision, computer graphics, biometrics, and medical imaging
[8, 26, 2, 49]. Many registered raw 3D images (‘scans’) are required for correctly training a 3DMM, which comes at a very large cost of manual labour for collecting and annotating such images with metadata. Sometimes, only the resulting 3DMMs become available to the research community, and not the raw 3D images. This is particularly true of 3D images of the human face/head, due to increasingly stringent data protection regulations. Furthermore, even if 3DMMs have overlapping parts, their resolution and ability to express detailed shape variation may be quite different, and we may wish to capture the best properties of multiple 3DMMs within a single model. However, it is currently extremely difficult to combine and enrich existing 3DMMs with different attributes that describe distinct parts of an object without such raw data. Therefore, we present a general approach that can be employed to combine 3DMMs from different parts of an object class into a single 3DMM. Due to their widespread use in the computer vision community, we fuse 3DMMs of the human face and the full human head as our exemplar. We add detailed models of the ears, eyes and eye regions to our head model, along with a basic model of the oral cavity, tongue and teeth. Thus we create a large-scale, full-head morphable model that has a more complete representation of shape variation than any other published to date. The technique is readily extensible to incorporate detailed models of the human body [25, 3], and indeed is applicable to any object class well-described by 3DMMs. Recent works that aim at predicting the 3D representation of more than one morphable model [47, 31] try to solve this problem with a part-based approach where multiple separate models are fitted and then linearly blended into the final result.
Our framework aims at avoiding any discontinuities that might appear from part-based approaches by fusing all models into one single morphable model. More specifically, although there have been many models of the human face both in terms of identity [28, 63, 61] and expression [12, 62], very few deal with the complete head anatomy [14]. Building a high-quality, large-scale statistical model that describes the anatomy of the full human head opens up directions across numerous disciplines. First, it will assist craniofacial clinicians in diagnosis, surgical planning, and assessment. Second, generating proportionally correct head models based on the geometry of the face will aid computer graphics designers in creating realistic avatar-like representations. Third, ergonomic design of headwear, eyewear, breathing apparatus and so on benefits from accurate models of craniofacial shape variation across the population. Finally, a head model will give opportunities for reconstructing a full head representation from data-deficient sources, such as 2D images.
Our key contributions are: (i) a methodology that aims to fuse shape-based 3DMMs, using the human face, head and ear as an exemplar. In particular, we propose both a regression method based on latent shape parameters, and a covariance combination approach, utilized in a Gaussian process framework, (ii) a combined large-scale statistical model of the human head in terms of ethnicity, age and gender that is significantly more accurate than any other existing head morphable model; we make this publicly available for the benefit of the research community, including versions with and without eyes and teeth, and (iii) an application experiment in which we utilize the combined 3DMM to perform full head reconstruction from unconstrained single images.
The remainder of the paper is structured as follows. In Section 2 we review relevant related work. In Section 3 we elaborate on the methodology of the face and head model combination, and in Sections 4 and 5 we describe the modeling of ears and eyes, which results in our complete head representation. In Section 7, we describe our head texture completion pipeline, and in Section 8, we outline a series of quantitative and qualitative experiments. Finally, conclusions are presented in Section 9.
2 Related work
A very recent survey [19] identified more complete statistical modelling of the human head as an important open challenge in the development of 3DMMs. Motivated by this goal, we begin by surveying existing attempts to model the face, the full craniofacial region, eyes and ears. An earlier version of the work in this paper was originally presented in [44]. Here, we have extended the model by additionally integrating detailed ear and eye models and a full head texture model as well as including further experimental evaluation.
2.1 Face models
The first 3DMM was proposed by Blanz and Vetter [7]. They were the first to recognize the generative capabilities of a 3DMM and they proposed a technique to capture the variations of 3D faces. Only 200 scans were used to build the model (100 male and 100 female), where dense correspondences were computed based on optical flow driven by an energy function that describes both shape and texture. The Basel Face Model (BFM) is the most widely-used and well-known 3DMM, which was built by Paysan et al. [42] and utilizes a better registration method than the original Blanz-Vetter 3DMM. They use a known template mesh in which all the vertices have known positions, and then register it to the training scans by utilizing an optimal-step Non-rigid Iterative Closest Point algorithm (NICP) [4]. Standard PCA was employed as a dimensionality reduction technique to construct their model.
Recently, Booth et al. [11] built a Large-scale Face Model (LSFM) by utilizing nearly 10,000 face scans. The model is constructed by applying a weighted version of the optimal-step NICP algorithm [17], followed by a Generalized Procrustes Analysis (GPA) and standard PCA. Due to the large number of facial scans, a robust automated procedure was carried out, including 3D landmark localization and error pruning of badly registered scans. This work was the first to introduce bespoke models in terms of age, gender and ethnicity, and is the most information-rich 3DMM of face shapes in neutral expression produced to date.
2.2 Head models
In terms of 3DMMs associated with the human body, the main focus of the research literature has been on the reconstruction of the human face, but not other parts of the human head. The reason for this is mainly the lack of 3D image datasets that describe the other parts of the human head. In recent years, a few works such as [34] have tried to tackle this task, utilizing head scans from the US and European CAESAR body scan database [46] to build a statistical model of the entire head. That work focuses mainly on the temporal registration of 3D scans rather than on the topology of the head area. The data consists of full body scans, and the resolution at which the head topology was recorded is insufficient to correctly depict the shape of each individual human head. In addition, the template used for registration in this method is extremely sparse, which makes it difficult to accurately represent the entire head. Moreover, the registration process incorporates coupling weights for the back of the head and the back of the neck, which drastically constrains the actual statistical variation of the entire head area. An extension of this work is proposed in [45], in which a nonlinear model is constructed using convolutional mesh autoencoders focusing on facial expressions, but it still lacks the statistical variation of the full cranium. Similarly, in the work of Hu and Saito [27], a full head model is created from single images, mainly for real-time rendering. That work aims at creating a realistic avatar model which includes 3D hair estimation. The head topology is considered to be unchanged for all subjects, and only the face part of the head is a statistically-correct representation.
The most accurate craniofacial 3DMM of the human head, both in terms of shape and texture, is the Liverpool-York Head Model (LYHM) [14]. In this work, global craniofacial 3DMMs and demographic subpopulation 3DMMs were built from 1,212 distinct identities. The authors proposed a dense correspondence system, combining a hierarchical parts-based template morphing framework in the shape channel and a refining optical flow in the texture channel. Although this work is the first to describe the statistical correlation between the cranium and the face, it lacks detail in the facial characteristics, as the spatial resolution of the facial region is not significantly higher than that of the cranial region. In effect, the variance of the cranial and neck areas dominates that of the facial region in the PCA parameterization. Also, although the model describes how the cranium is affected by the age of the subject, it is biased in terms of ethnicity, due to the lack of ethnic diversity in the dataset.
2.3 Eye and ear models
Some key structures of the human head make an important contribution to the appearance and identity of a person, and these merit treatment with sufficient attention and detail that separate 3DMMs should be formulated for them.
Among the most significant structures of the human head are the eyes: through them we communicate, and through their movements we express our interests, our attention, and our emotional disposition. As a result, eye appearance [24, 59] and gaze estimation [59, 23, 41] are active topics in computer vision. The first parametric approach to eye modeling was proposed by Bérard et al. [5], where a 3DMM was built by utilizing a database of eyeball scans [6]. Although the results of the reconstruction were appealing in terms of quality, the reconstruction method was semi-automatic. The most recent 3DMM of the human eye was proposed in [54], focusing on the eyeball as well as on the peripheral eye region and the skin region around the eye. In our work, instead of treating the eye region as a separate model, we globally estimate the position of the eyes and, by employing sparse localized deformation blendshapes, we are able to determine the gaze direction and the general shape of the eye region.
Another structure of the human head that contributes to biometric recognition and to the general appearance of a person is the ear [43, 1]. Numerous works have been published over the years on ear-based recognition [29, 57, 56, 53], making the ear an important structure to represent in any human head model. The two foremost examples of 3DMMs of the ear are those of Zolfaghari et al. [64] and Dai et al. [15]. Both models were constructed by applying PCA to ear meshes from the SYMARE database [51], each using a limited number of samples. To overcome the limited statistical variation of their restricted sample size, [15] estimate the 3D shape of ears in a landmarked 2D ear image dataset and combine these with their initial model to propose a data-augmented 3DMM. Both the LSFM face model and the LYHM head model templates contain the ear; however, modelling the detailed shape of the ear was not the intention during the construction of either of these. As such, the statistical variation of the ear is limited in both models, and neither contains a sufficient number of vertices in the ear region to accurately represent its complex structure. In this work, we enrich the statistical variability of the aforementioned models by fusing our own ear model constructed from high-resolution ear scans. To the best of our knowledge, the resulting head model is the most complete and accurate 3DMM of the human head.
3 Face and head shape combination
In this section, we propose two methods to combine the LSFM face model with the LYHM full head model. The first approach utilizes the latent PCA parameters and solves a linear least-squares problem to approximate the full head shape, whereas the second constructs a combined covariance matrix that is later utilized as a kernel in a Gaussian Process Morphable Model (GPMM) [37].
3.1 Regression modelling
Figure 3 illustrates the threestage regression modeling pipeline, which comprises 1) regression matrix calculation, 2) model combination and 3) full head model registration followed by PCA modeling. Each stage is now described.
For stage 1, let us denote the 3D mesh (shape) of an object with $N$ points as a vector:

$$\mathbf{S} = [x_1, y_1, z_1, \ldots, x_N, y_N, z_N]^{\mathrm{T}} \in \mathbb{R}^{3N}. \quad (1)$$
The LYHM is a PCA generative head model with $N_h$ points, described by an orthonormal basis $\mathbf{U}_h \in \mathbb{R}^{3N_h \times n_h}$ after keeping the first $n_h$ principal components, and the associated eigenvalues. This model can be used to generate novel 3D head instances as follows:

$$\mathbf{S}_h = \mathbf{m}_h + \mathbf{U}_h \boldsymbol{\theta}_h, \quad (2)$$

where $\boldsymbol{\theta}_h \in \mathbb{R}^{n_h}$ are the shape parameters and $\mathbf{m}_h$ is the mean head shape. Similarly, the LSFM face model with $N_f$ points is described by a corresponding orthonormal basis $\mathbf{U}_f \in \mathbb{R}^{3N_f \times n_f}$ after keeping the first $n_f$ principal components, and the associated eigenvalues. The model generates novel 3D face instances by:

$$\mathbf{S}_f = \mathbf{m}_f + \mathbf{U}_f \boldsymbol{\theta}_f, \quad (3)$$

where $\boldsymbol{\theta}_f \in \mathbb{R}^{n_f}$ are the shape parameters and $\mathbf{m}_f$ is the mean face shape.
In order to combine the two models, we synthesize data directly from the latent eigenspace of the head model (Eq. 2) by drawing random samples from a Gaussian distribution defined by the principal eigenvalues of the head model. The standard deviation of each component's distribution is equal to the square root of the corresponding eigenvalue. In this way, we randomly produce a large number of distinct shape parameter vectors $\boldsymbol{\theta}_h$ and the corresponding full head instances. After generating the random full head instances, we apply non-rigid registration (NICP) [17] between the head meshes and the cropped mean face of the LSFM face model. We perform this task for each of the meshes in order to extract the facial part of each full head instance and describe it in terms of the LSFM topology. Once we acquire those registered meshes, we project them onto the LSFM subspace and retrieve the corresponding shape parameters. Thus, for each of the randomly produced head instances, we have a pair of shape parameters $(\boldsymbol{\theta}_h, \boldsymbol{\theta}_f)$ corresponding to the full head representation and to the facial area respectively.
By utilizing those pairs, we construct a matrix $\mathbf{C}_h$ in which we stack all the head shape parameters, and a matrix $\mathbf{C}_f$ in which we stack the face shape parameters from the LSFM model. We would like to find a matrix $\mathbf{W}_{h,f}$ that describes the mapping from the LSFM face shape parameters to the corresponding LYHM full head shape parameters. We solve this by formulating a linear least-squares problem that minimizes:

$$\left\| \mathbf{C}_h - \mathbf{W}_{h,f} \mathbf{C}_f \right\|_F^2. \quad (4)$$

By utilizing the normal equations, the solution of (4) is readily given by:

$$\mathbf{W}_{h,f} = \mathbf{C}_h \mathbf{C}_f^{\mathrm{T}} \left( \mathbf{C}_f \mathbf{C}_f^{\mathrm{T}} \right)^{-1} = \mathbf{C}_h \mathbf{C}_f^{+}, \quad (5)$$

where $\mathbf{C}_f^{+}$ is the right pseudo-inverse of $\mathbf{C}_f$. Given a 3D face instance $\mathbf{S}_f$, we derive the 3D shape of the full head, $\mathbf{S}_h$, as follows:

$$\mathbf{S}_h = \mathbf{m}_h + \mathbf{U}_h \mathbf{W}_{h,f} \mathbf{U}_f^{\mathrm{T}} \left( \mathbf{S}_f - \mathbf{m}_f \right). \quad (6)$$
In this way we can map and predict the shape of the cranium region for any given face shape in terms of LYHM topology.
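The stage-1 regression can be sketched in a few lines of numpy. The dimensions, the ground-truth mapping `W_true`, and the synthetic parameter matrices below are made up for illustration, standing in for the LYHM/LSFM parameter pairs produced by the sampling-and-registration loop:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: k_h head components, k_f face components, m samples.
k_h, k_f, m = 8, 5, 1000

# Stand-ins for the stacked parameter matrices: each column of C_h holds the
# LYHM parameters of one synthetic head, and the matching column of C_f holds
# the LSFM parameters recovered for its facial region.
W_true = rng.standard_normal((k_h, k_f))   # unknown mapping (known here)
C_f = rng.standard_normal((k_f, m))        # stacked face shape parameters
C_h = W_true @ C_f                         # stacked head shape parameters

# Eq. (5): W = C_h C_f^T (C_f C_f^T)^{-1}, the least-squares solution of (4).
W = C_h @ C_f.T @ np.linalg.inv(C_f @ C_f.T)

# Eq. (6), in parameter space: predict head parameters for a new face.
theta_f = rng.standard_normal(k_f)
theta_h = W @ theta_f
```

Because the toy head parameters here are an exact linear function of the face parameters, the least-squares estimate recovers the mapping exactly; with real registered pairs it is only the best linear fit.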
In stage 2 (Fig. 3), we employ the large MeIn3D database [11], which includes nearly 10,000 3D face images, and we utilize the regression matrix $\mathbf{W}_{h,f}$ to construct new full head shapes that we later combine with the real facial scans. We achieve this by discarding the facial region of the full head instance, which has less detailed information, and replacing it with the registered LSFM face of the MeIn3D scan. In order to create a unique instance, we merge the meshes together by applying an NICP framework in which we deform only the outer parts of the facial mesh to match the cranium angle and shape, so that the result is a smooth combination of the two meshes. Following the formulation in [17], this is accomplished by introducing higher stiffness weights in the inner mesh (lower on the outside) while we apply the NICP algorithm. To compute those weights, we measure the Euclidean distance of a given point from the nose tip of the mesh and assign a relative weight to that point: the larger the distance from the nose tip, the smaller the weight of the point.
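The distance-driven stiffness weighting can be sketched as follows; the linear fall-off between the assumed bounds `w_min` and `w_max` is an illustrative choice, as the exact profile is not specified:

```python
import numpy as np

def stiffness_weights(vertices, nose_tip, w_min=0.1, w_max=1.0):
    """Per-vertex blending weights for the NICP merge: high near the nose tip
    (inner face kept intact), decaying with Euclidean distance outwards."""
    d = np.linalg.norm(vertices - nose_tip, axis=1)
    d_norm = (d - d.min()) / (d.max() - d.min() + 1e-12)   # 0 at nose tip
    return w_max - (w_max - w_min) * d_norm                # linear fall-off

verts = np.array([[0.0, 0.0, 0.0],    # the nose tip itself
                  [1.0, 0.0, 0.0],    # mid-face
                  [2.0, 0.0, 0.0]])   # outer boundary
w = stiffness_weights(verts, nose_tip=np.array([0.0, 0.0, 0.0]))
```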
One of the drawbacks of the LYHM is the arbitrary neck circumference: the neck tends to become broader as the general head shape increases in size. In stage 3 (Fig. 3), we aim at excluding this factor from our final head model by applying a final NICP step between the merged meshes and our head template. We utilize the same framework as before with the point-weighted strategy, assigning weights to the points based on their Euclidean distance from the center of the head mass. This helps us avoid any inconsistencies in the neck area that might appear from the regression scheme. For the area around the ear, we have introduced 50 additional landmarks to control the registration and preserve the general shape of the ear area.
After applying the aforementioned pipeline to each of the meshes, we perform PCA on the mesh points and acquire a new generative full head model that exhibits more detail in the face area, in combination with bespoke head shapes.
3.2 Gaussian process modeling
While regressing in the latent space to combine two distinct 3DMMs demonstrates good results, Gaussian process model combination is a less complicated and more robust technique that does not generate irregular head shapes due to poor regression values.
The concept of Gaussian Process Morphable Models (GPMMs) was recently introduced in [37, 22, 32]. The main contribution of GPMMs is the generalization of classic Point Distribution Models (such as those constructed using PCA) with the help of Gaussian processes. A shape is modeled as a deformation from a reference shape $\Gamma_R$, i.e. a shape can be represented as:

$$\mathbf{S} = \{ \mathbf{x} + u(\mathbf{x}) \mid \mathbf{x} \in \Gamma_R \}, \quad (7)$$

where $u : \Gamma_R \to \mathbb{R}^3$ is a deformation function. The deformations are modeled as a Gaussian process $u \sim GP(\mu, k)$, where $\mu$ is the mean deformation and $k$ is a covariance function or kernel.
The Gaussian process model is capable of operating outside the space of valid face shapes; how realistic its samples are depends highly on the kernel chosen for the task. In the classic approach, the deformation function is learned from a series of typical example surfaces $\{S_1, \ldots, S_n\}$, from which a set of deformation fields $\{u_1, \ldots, u_n\}$ is learned, where $u_i$ denotes the deformation field that maps a point on the reference shape to the corresponding point on the $i$-th training surface.
A Gaussian process that models these characteristic deformations is obtained by estimating the empirical mean:

$$\mu(\mathbf{x}) = \frac{1}{n} \sum_{i=1}^{n} u_i(\mathbf{x}) \quad (8)$$

and the covariance function:

$$k(\mathbf{x}, \mathbf{y}) = \frac{1}{n-1} \sum_{i=1}^{n} \left( u_i(\mathbf{x}) - \mu(\mathbf{x}) \right) \left( u_i(\mathbf{y}) - \mu(\mathbf{y}) \right)^{\mathrm{T}}. \quad (9)$$
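Eqs. (8) and (9) can be sketched in numpy, using random stand-in deformation fields sampled at a handful of reference points:

```python
import numpy as np

# n example surfaces, each giving a deformation field sampled at N reference
# points: U[i] has shape (N, 3), mapping reference points to training surface i.
rng = np.random.default_rng(1)
n, N = 5, 4
U = rng.standard_normal((n, N, 3))

# Eq. (8): empirical mean deformation at each reference point.
mu = U.mean(axis=0)                       # shape (N, 3)

def k(i, j):
    """Eq. (9): sample covariance kernel between reference points i and j,
    a 3x3 block built from the centred deformations."""
    D_i = U[:, i, :] - mu[i]              # shape (n, 3)
    D_j = U[:, j, :] - mu[j]
    return D_i.T @ D_j / (n - 1)

K01 = k(0, 1)
```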
This kernel is defined as the empirical/sample covariance kernel. This specific Gaussian process model is a continuous analog of a PCA model, and it operates in the facial deformation spectrum. In our case, we lack access to the original head scans, so we are unable to learn deformation fields from them or to combine them with the MeIn3D facial dataset. To overcome this problem, we utilize the already-learned point distribution models. For each of the models (LYHM, LSFM), we know the principal orthonormal basis and the eigenvalues. Hence, the covariance matrix of each model is defined as:

$$\mathbf{C}_h = \mathbf{U}_h \boldsymbol{\Lambda}_h \mathbf{U}_h^{\mathrm{T}}, \qquad \mathbf{C}_f = \mathbf{U}_f \boldsymbol{\Lambda}_f \mathbf{U}_f^{\mathrm{T}}, \quad (10)$$

where $\mathbf{C}_h$ and $\mathbf{C}_f$ are the covariance matrices, and $\boldsymbol{\Lambda}_h$ and $\boldsymbol{\Lambda}_f$ are diagonal matrices with the eigenvalues on their main diagonals, for the head and face model respectively.
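Eq. (10) in numpy, on a toy orthonormal basis with made-up eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy PCA model: an orthonormal basis U (3N x k) and eigenvalues Lambda,
# the quantities published with models such as LYHM and LSFM.
N3, k = 12, 4                                       # 3N = 12 coordinates
U, _ = np.linalg.qr(rng.standard_normal((N3, k)))   # orthonormal columns
Lam = np.diag([4.0, 2.0, 1.0, 0.5])                 # made-up eigenvalues

# Eq. (10): recover the model covariance from its eigendecomposition.
C = U @ Lam @ U.T
```

The result is symmetric and rank-deficient (rank $k$), exactly as expected from a PCA model with $k$ retained components.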
We aim at constructing a universal covariance matrix that accommodates the highly detailed facial properties of the LSFM and the head distribution of the LYHM. We keep the mean of the head model as a reference, and non-rigidly register the mean face of the LSFM to it. Both PCA models must be in the same scale space for this method to work, which was not necessary for the regression method. Similarly, we register our head template by utilizing the same full head registration pipeline as before; this template will serve as the reference mesh for the new joint covariance matrix.
For each point pair $(\mathbf{x}_i, \mathbf{x}_j)$ of the reference mesh, there exists a local $3 \times 3$ covariance matrix $\mathbf{C}_{i,j}$. In order to calculate its value, we begin by projecting the points onto the mean head mesh. If both points lie outside the face area covered by the registered mean mesh of LSFM, we identify their exact locations in the mean head mesh in terms of barycentric coordinates with respect to their corresponding triangles.

Each vertex pair $(k, l)$ between the two triangles has an individual covariance matrix $\mathbf{C}^h_{k,l}$. Therefore, we blend those local vertex-covariance matrices to acquire our final local $\hat{\mathbf{C}}^h_{i,j}$ as follows:

$$\hat{\mathbf{C}}^h_{i,j} = \sum_{k=1}^{3} \sum_{l=1}^{3} \lambda_{k,l} \, \mathbf{C}^h_{k,l}, \quad (11)$$

where $\lambda_{k,l}$ is a weighting scheme based on the barycentric coordinates of the points. An illustration of the aforementioned methodology can be seen in Figure 4.
In the case where both points lie in the face area, we initially repeat the same procedure by projecting and calculating a blended covariance matrix $\hat{\mathbf{C}}^f_{i,j}$ given the mean face mesh of LSFM, followed by a blended covariance matrix $\hat{\mathbf{C}}^h_{i,j}$ calculated given the mean head mesh of LYHM. We formulate the final local covariance matrix as:

$$\mathbf{C}_{i,j} = d_w \hat{\mathbf{C}}^f_{i,j} + (1 - d_w) \hat{\mathbf{C}}^h_{i,j}, \quad (12)$$

where $d_w$ is a normalized weight based on the Euclidean distances of the points from the nose tip of the registered meshes. We apply this weighting scheme to smoothly blend the properties of the head and face model and to avoid the discontinuities that appear on the borders of the face and head areas.
Lastly, when the points belong to different areas, i.e. one point on the face and one on the head, we simply follow the first approach, which exploits just the head covariance matrix $\mathbf{C}_h$, since the correlation between face and head shape exists only in the LYHM. After repeating the aforementioned methodology for every point pair and calculating the entire joint covariance matrix $\mathbf{C}$, we are able to sample new instances from the Gaussian process morphable model.
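The two blending rules, Eq. (11) and Eq. (12), can be sketched as below. The product-of-barycentrics weighting and the exponential distance fall-off are illustrative assumptions, as the exact weighting functions are not spelled out here:

```python
import numpy as np

def blend_triangle_cov(bary_i, bary_j, cov_blocks):
    """Eq. (11): blend the 3x3 vertex-pair covariances C_{k,l} of the two
    enclosing triangles, weighted here by products of barycentric
    coordinates (one simple choice of the weighting scheme lambda_{k,l})."""
    C = np.zeros((3, 3))
    for k in range(3):
        for l in range(3):
            C += bary_i[k] * bary_j[l] * cov_blocks[k][l]
    return C

def blend_face_head(C_face, C_head, d_i, d_j, sigma=1.0):
    """Eq. (12): mix the face- and head-model covariances with a normalised
    weight d_w driven by the points' distances from the nose tip (the
    exponential fall-off is an illustrative assumption)."""
    d_w = np.exp(-(d_i + d_j) / (2.0 * sigma))
    return d_w * C_face + (1.0 - d_w) * C_head

I3 = np.eye(3)
blocks = [[I3, I3, I3], [I3, I3, I3], [I3, I3, I3]]
C_hat = blend_triangle_cov((0.2, 0.3, 0.5), (0.1, 0.6, 0.3), blocks)
C_near = blend_face_head(I3, 2 * I3, 0.0, 0.0)      # at the nose tip
C_far = blend_face_head(I3, 2 * I3, 100.0, 100.0)   # far from the face
```

Because barycentric coordinates sum to one, blending identical blocks returns the block unchanged, and the distance weight moves smoothly from the face covariance (near the nose tip) to the head covariance (far from it).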
3.3 Model refinement
The registration framework of the LYHM utilizes a modified version of the Coherent Point Drift (CPD) [40] algorithm, where a part-based approach is carried out. Due to this fact, the final PCA model in some cases accommodates head deformations that do not reflect realistic head shapes. We aim at minimizing this defect by utilizing the MeIn3D facial database once more. To refine our model, we begin by exploiting the already trained GPMM of the previous section. With our head template and the universal covariance matrix $\mathbf{C}$, we define a kernel function:

$$k(\mathbf{x}, \mathbf{y}) = \mathbf{C}_{id(\mathbf{x}), id(\mathbf{y})}, \quad (13)$$

where $\mathbf{x}$ and $\mathbf{y}$ are two given points from the domain on which the Gaussian process is defined, and the function $id(\cdot)$ returns the index of the closest point of its argument on the surface of the head template. We then define our GPMM as:

$$u \sim GP(\mu, k), \quad (14)$$
where $\mu(\mathbf{x}) = \mathbf{0}$. For each scan in the MeIn3D dataset, we first try to reconstruct a full head registration with our GPMM using Gaussian process regression [37, 22]. Given a set of deformations $\hat{\mathbf{U}} = \{\hat{u}_1, \ldots, \hat{u}_n\}$ observed at points $X = \{\mathbf{x}_1, \ldots, \mathbf{x}_n\}$, subject to Gaussian noise $\epsilon \sim \mathcal{N}(0, \sigma^2)$, Gaussian process regression computes a posterior model $GP(\mu_p, k_p)$. The landmark pairs between a reference mesh and the raw scan define a set of sparse mappings, which tell us exactly how the points on the reference mesh will deform. Any sample from this posterior model will then have fixed deformations at our observed points, i.e. the facial landmarks. The posterior mean and covariance are computed as:

$$\mu_p(\mathbf{x}) = \mu(\mathbf{x}) + K_X(\mathbf{x})^{\mathrm{T}} \left( \mathbf{K}_{XX} + \sigma^2 \mathbf{I} \right)^{-1} \left( \hat{\mathbf{U}} - \mu(X) \right), \quad (15)$$

$$k_p(\mathbf{x}, \mathbf{y}) = k(\mathbf{x}, \mathbf{y}) - K_X(\mathbf{x})^{\mathrm{T}} \left( \mathbf{K}_{XX} + \sigma^2 \mathbf{I} \right)^{-1} K_X(\mathbf{y}), \quad (16)$$

where

$$K_X(\mathbf{x}) = \left[ k(\mathbf{x}, \mathbf{x}_1), \ldots, k(\mathbf{x}, \mathbf{x}_n) \right]^{\mathrm{T}}, \quad (17)$$

$$\mathbf{K}_{XX} = \left[ k(\mathbf{x}_i, \mathbf{x}_j) \right]_{i,j=1}^{n}. \quad (18)$$
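Eqs. (15)-(18) can be sketched in numpy, reduced to scalar-valued deformations for brevity (the deformations above are 3-D, with matrix-valued kernel blocks); the RBF kernel and the toy observations are stand-ins:

```python
import numpy as np

def gp_posterior(X, U_hat, Xs, k, mu, sigma2=1e-4):
    """Gaussian process regression, Eqs. (15)-(18): condition the prior
    GP(mu, k) on noisy observed deformations U_hat at points X and evaluate
    the posterior mean and covariance at query points Xs."""
    K_XX = np.array([[k(x, y) for y in X] for x in X])        # Eq. (18)
    K_sX = np.array([[k(x, y) for y in X] for x in Xs])       # rows: Eq. (17)
    K_ss = np.array([[k(x, y) for y in Xs] for x in Xs])
    A = np.linalg.inv(K_XX + sigma2 * np.eye(len(X)))
    resid = U_hat - np.array([mu(x) for x in X])
    mean_p = np.array([mu(x) for x in Xs]) + K_sX @ A @ resid  # Eq. (15)
    cov_p = K_ss - K_sX @ A @ K_sX.T                           # Eq. (16)
    return mean_p, cov_p

# Toy stand-ins: two observed 1-D deformations, an RBF kernel, zero mean.
X = [0.0, 1.0]
U_hat = np.array([1.0, -1.0])
k_rbf = lambda x, y: np.exp(-(x - y) ** 2)
mean_p, cov_p = gp_posterior(X, U_hat, X, k_rbf, mu=lambda x: 0.0)
```

At the observed points, the posterior mean reproduces the observations up to the noise level and the posterior covariance collapses, which is exactly the "fixed deformations at the landmarks" behaviour described above.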
For a scan $S$ with landmarks, the landmark correspondences define a set of sparse deformations $L = \{(\mathbf{x}_i, \hat{u}_i)\}$ on the reference, from which we first compute a posterior model:

$$GP(\mu_p, k_p) = \mathrm{posterior}\left( GP(\mu, k), L \right), \quad (19)$$

evaluated via Eqs. (15)-(18). We then refine the posterior model with the Iterative Closest Point algorithm. More specifically, at each iteration we compute the current regression result as $S_r = \{ \mathbf{x} + \mu_p(\mathbf{x}) \mid \mathbf{x} \in \Gamma_R \}$, which is the reference shape warped with the mean deformation of the posterior model. We then find, for each point $\mathbf{x}_i$ in $S_r$, its closest point $\mathbf{x}'_i$ on the scan $S$, and update our posterior model as:

$$GP(\mu_p, k_p) = \mathrm{posterior}\left( GP(\mu, k), \; L \cup \{(\mathbf{x}_i, \mathbf{x}'_i - \mathbf{x}_i)\} \right). \quad (20)$$

Since the raw scans in the MeIn3D database can be noisy, we exclude a correspondence pair if $\mathbf{x}'_i$ lies on the boundary of the scan or if the distance between $\mathbf{x}_i$ and $\mathbf{x}'_i$ exceeds a threshold. After the final iteration, we obtain the regression result $S_r$. We then non-rigidly align the face region of $S_r$ to the face region of the raw scan to obtain our final reconstruction.
In practice, we noticed that the reconstructions often produced unrealistic head shapes. We therefore modify the covariance matrix before the Gaussian process regression. We first compute the principal components by decomposing the covariance matrix $\mathbf{C}$, then reconstruct it using (10) with fewer statistical components. With the full head reconstructions from the MeIn3D dataset, we then compute a new sample covariance matrix, and repeat the previous GP regression process to refine the reconstructions. Finally, we perform PCA on the refined reconstructions to obtain our final refined model.
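The truncation step can be sketched as an eigendecomposition followed by Eq. (10) with fewer components; the toy covariance below is a stand-in:

```python
import numpy as np

def truncate_covariance(C, n_components):
    """Rebuild the covariance via Eq. (10), keeping only the leading
    statistical components, as done before the GP regression refinement."""
    eigvals, eigvecs = np.linalg.eigh(C)             # ascending order
    idx = np.argsort(eigvals)[::-1][:n_components]   # leading components
    return eigvecs[:, idx] @ np.diag(eigvals[idx]) @ eigvecs[:, idx].T

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 6))
C = A @ A.T                    # toy symmetric positive-definite covariance
C_low = truncate_covariance(C, 2)
```

Keeping all components reproduces the original matrix, so the function purely discards the low-variance (and typically least trustworthy) directions.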
4 Ear model combination
Both of the original 3DMMs, LSFM (face) and LYHM (head), exhibit moderate statistical variation around the ear area. For the creation of the LSFM face model, a high-stiffness registration framework was employed due to the very noisy raw scans, so all the statistical variation of the outer face region, including the ears, is lost. In the case of the LYHM, the registration framework was applied to the entire head topology, but very few head scans had adequate resolution around the ears, resulting in a coarse approximation of the ear shape. In order to overcome these limitations, we augment our combined face and head model by creating a high-resolution model of the ears constructed from distinct ear scans.
4.1 High resolution ear model
To construct a 3DMM from a sufficiently large sample size, we draw on a number of different data sources. As with previous ear models, we make use of the SYMARE database [30], which provides both the left and right ears of its subjects. Additionally, we have built a dataset of distinct high-resolution ear scans from 64 individuals (32 males and 32 females) ranging from 20 to 70 years old, by scanning the inner and outer areas of both ears with a light-stage apparatus. In order to amplify the statistical variation of our ear model across all ages, we supplement the dataset with additional ear scans of children, acquired via CT.
All left ears in the combined dataset were mirrored to be consistent with the right ears. Each mesh was manually annotated with a set of landmark points to guide the registration process, and then put in correspondence with a dense ear template using the same NICP-variant non-rigid registration framework employed for the LSFM [11]. These meshes are then rigidly aligned using Generalised Procrustes Analysis (GPA). Applying PCA to all points of the aligned meshes yields a high-resolution 3DMM of the right ear. A 3DMM of the left ear is obtained by reflecting the right-ear 3DMM in the sagittal plane, both in terms of its mean shape and its principal components. The resulting shape components of our right-ear 3DMM can be seen in Figure 6.
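Reflecting a PCA model in the sagittal plane amounts to flipping the sign of one coordinate for both the mean and every principal component. A toy sketch, assuming an interleaved (x, y, z) vertex layout and that the sagittal plane is x = 0:

```python
import numpy as np

def mirror_pca_model(mean, components, axis=0):
    """Reflect a PCA shape model in the sagittal plane by negating one
    coordinate axis of both the mean shape and every principal component
    (assumes an interleaved x, y, z vertex layout and plane x = 0)."""
    n_vertices = mean.shape[0] // 3
    flip = np.ones(3)
    flip[axis] = -1.0
    R = np.tile(flip, n_vertices)          # per-coordinate sign pattern
    return mean * R, components * R[:, None]

# Toy right-ear model: 2 vertices, 1 principal component.
mean_r = np.array([1.0, 2.0, 3.0, -1.0, 0.5, 0.0])
comps_r = np.array([[0.1, 0.0, 0.0, 0.2, 0.0, 0.0]]).T   # shape (6, 1)
mean_l, comps_l = mirror_pca_model(mean_r, comps_r)
```

Mirroring twice returns the original model, and the reflection preserves all eigenvalues, so the left-ear model has exactly the statistical variation of the right-ear one.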
4.2 Fusing the ear and the head model
In order to accurately incorporate new ear shape variations into our combined face and head model, we exploit the same Gaussian process modeling methodology described in Section 3.2. We begin by non-rigidly merging the mean shape templates of each ear model (left and right) to the ears of our mean head mesh obtained after the combination of the LSFM (face) and LYHM (head) models. Once all the mean templates are registered, we calculate the covariance matrices of each individual model as in (10):

$$\mathbf{C}_{er} = \mathbf{U}_{er} \boldsymbol{\Lambda}_{er} \mathbf{U}_{er}^{\mathrm{T}}, \qquad \mathbf{C}_{el} = \mathbf{U}_{el} \boldsymbol{\Lambda}_{el} \mathbf{U}_{el}^{\mathrm{T}},$$

where $\mathbf{C}_{er}$ and $\mathbf{C}_{el}$ are the covariance matrices, and $\boldsymbol{\Lambda}_{er}$ and $\boldsymbol{\Lambda}_{el}$ are diagonal matrices with the eigenvalues on the main diagonal, for the right and the left ear respectively. Our goal is to enhance the ear shape variation of the combined covariance matrix $\mathbf{C}$. We begin by merging in the right ear model and, as before, we keep the mean head template as a reference. For each projected point pair that belongs to the right ear area, we identify their exact locations in the registered mesh in terms of barycentric coordinates with respect to the corresponding triangles. Between each vertex pair, we blend the local covariance matrices as before with (11). We then perform the same procedure for the left ear model.
In order to correctly incorporate both ear models into the full head model, we need to introduce a blending distance function that helps avoid discontinuities at the merge borders around the ear base. We adopt (12) as the blending mechanism for our covariance matrices and seek a suitable normalized weighting scheme for the point pairs that belong to the ear templates. Naturally, the ear forms an elongated shape, so the Euclidean distance from the base of the ear to its outer parts is an unsuitable measure for weighting the point pairs correctly. Instead, we first unwrap the ear mesh into a circle in 2D space, where the center corresponds to the ear canal and the furthest points of the circle correspond to the base of the ear. The blending scheme is then measured in this 2D flattened space, where distances are calculated from the center of the unwrapped circle.
5 Eye model combination
Accurate modeling of the characteristics of human eyes, such as gaze direction, pupil size, iris color, and eyelid and eye region shape, is important for creating realistic 3D face models. Both the LSFM and LYHM models include limited variation of the eyelid shape, due to the low resolution of this region in the original scans, but no other characteristics of the eyes are described by these models. In the LSFM face model, the eyes are represented by a sparse set of vertices connected to the rest of the mesh, covering the visible lens area and being essentially static. In the LYHM head model, the eye area is empty, leaving holes in the mesh at the visible eye parts of the face. To overcome these limitations, we model the eyes and peripheral eye regions with separate statistical models that we then incorporate into our final head model.
5.1 Eye models
We initially utilize a classical 3DMM optimization framework, under which we combine a statistical model of the eye region shape with a statistical model of the eye, to accurately recover eyelid shape, gaze direction and pupil size from images.
Eye region shape model: To capture the variation of the eyelids and the peripheral eye region in the human face, we constructed a PCA model based on a set of distinct 3D meshes sculpted around the eye region by a graphics artist. The 3D meshes were sculpted using the mean shape of our fused head model as the template, and thus share its topology. The eye region shape model can be expressed as:

$$\mathbf{S}_s = \mathbf{m}_s + \mathbf{U}_s \boldsymbol{\theta}_s, \quad (21)$$

where $\mathbf{m}_s$ is the mean shape, $\mathbf{U}_s$ is the eye region shape subspace and $\boldsymbol{\theta}_s$ are the parameters of the model. The five most significant statistical components of our eye region model are depicted in Figure 8, cropped to a patch around the left eye area. The variation of our eye region PCA model can accurately represent different types of eye region shapes, such as round eyes, almond eyes, monolid eyes, hooded eyes, upturned and downturned eyes.
Eye shape model: We model eye gaze direction and pupil size separately from the eyelid. In particular, we employ two separate meshes to model the eye, the outer lens and the eye ball, as depicted in Figure 7. The outer lens covers the eye ball and is static in shape, while the eye ball includes the iris and leaves a hole for the pupil to become visible. To control pupil dilation and constriction, we manually created a blendshape of the pupil size by sculpting an eye ball instance with a different pupil size and subtracting it from the original mesh. The blendshape allows us to render eye balls with arbitrary pupil size, as in Figure 7. For consistency with the eyelid model, we express the eye model as a linear combination of the mean eye shape $\mathbf{m}_{eye}$ and the blendshape $\mathbf{b}$, as $\mathbf{S}_{eye} = \mathbf{m}_{eye} + \beta \mathbf{b}$, where $\beta$ is a parameter of the model.
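The linear eye model with its pupil blendshape is a one-liner; the vertex values below are made up:

```python
import numpy as np

def eyeball_with_pupil(mean_eye, pupil_blendshape, beta):
    """Linear eye model S = m + beta * b: the mean eyeball plus the sculpted
    pupil-size blendshape scaled by the model parameter beta."""
    return mean_eye + beta * pupil_blendshape

# Toy single-vertex example: the blendshape pushes a pupil-boundary vertex
# radially outwards (values are made up).
mean_eye = np.array([[0.5, 0.0, 1.0]])
blend = np.array([[0.2, 0.0, 0.0]])
dilated = eyeball_with_pupil(mean_eye, blend, 1.0)
constricted = eyeball_with_pupil(mean_eye, blend, -1.0)
```

Positive beta dilates the pupil, negative beta constricts it, and beta = 0 recovers the mean eyeball.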
Eye texture model: To boost the reconstruction accuracy of our eye model when fitting to input images, and to recover the color of the eyes along with the shape, we attach an RGB texture model on our eye ball model , extracted from 2D images. To build we utilized 100 frontal images of human irises, which we manually annotated with respect to 16 landmarks around the iris and pupil. We first employed the 8 iris landmarks of the images and 8 corresponding landmarks of our eye ball model to align and project our eye ball model on the image plane. Then, for each image we manually adjusted parameter to make the pupil’s bound match the 8 2D pupil landmarks. We sampled the images at the projected vertex locations of the iris of our model, to create pervertex textures. For the locations outside the iris we used white to represent the sclera of the eye and black to represent the pupil. Finally, we created a PCA model for the pervertex texture of our 3D eye ball, which can be written as:
$$\mathbf{t} = \bar{\mathbf{t}} + \mathbf{U}_{t}\,\boldsymbol{\beta} \qquad (22)$$
where $\bar{\mathbf{t}}$ is the mean texture component, $\mathbf{U}_{t}$ is the iris texture subspace with dimension $n_t$ and $\boldsymbol{\beta}$ are parameters of the model.
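A minimal sketch of building such a per-vertex texture PCA model, assuming toy data sizes (the real model uses the 100 annotated iris images described above):

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, n_vertices = 100, 50            # toy sizes, not the paper's actual mesh
# Per-vertex RGB textures sampled from images, flattened to 3 * n_vertices.
T = rng.random((n_images, 3 * n_vertices))

t_mean = T.mean(axis=0)
# PCA via SVD of the centered data matrix.
U, S, Vt = np.linalg.svd(T - t_mean, full_matrices=False)
n_t = 10                                   # number of retained components
U_t = Vt[:n_t].T                           # texture subspace, shape (3V, n_t)

# A texture instance t = mean + U_t @ beta (cf. Eq. 22).
beta = rng.standard_normal(n_t)
t = t_mean + U_t @ beta

# Projecting t back recovers beta, because U_t has orthonormal columns.
assert np.allclose(U_t.T @ (t - t_mean), beta)
```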
5.2 Optimization-based eye model fitting
To automatically recover eyelid shape, gaze direction, pupil size and iris color from images, we employ a 3DMM fitting approach in which we optimize our parametric models of shape and texture, based both on 2D landmarks and the texture of the eyes in images. We automatically extract 2D landmarks from images by utilizing a deep network with hourglass architecture [18], which we trained on 3000 images that we manually annotated with 33 landmarks. The fitting pipeline is then split into two steps. First, we recover a perspective camera viewpoint for the whole head by solving a Perspective-n-Point problem between 68 2D face landmarks of the image, which we extract with [18], and 68 3D landmarks of the head model. Then, keeping the head camera fixed, we optimize our statistical eye models based on two landmark losses and a rendering loss. We model gaze direction as an independent 3D rotation relative to the head perspective camera $\mathbf{c}_h$. In our camera models, vector $\mathbf{c}_h$ includes parameters for the focal length, translation and rotation, while vector $\mathbf{c}_e$ includes only rotation parameters. In both camera transformations, rotation is modeled with quaternions because of the ease of incorporating them in optimization in comparison to Euler angles.
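To illustrate why quaternions are convenient in this setting, the sketch below (standard formulas, not code from the paper) converts an unnormalized quaternion to a rotation matrix; normalizing inside the map lets an optimizer update the four numbers freely while always producing a valid rotation:

```python
import numpy as np

def quat_to_rotmat(q):
    """Rotation matrix from an (unnormalized) quaternion (w, x, y, z).

    Normalizing inside the function means an optimizer can update the four
    quaternion components without constraints; the result is always a valid
    rotation, unlike Euler angles, which also suffer from gimbal lock.
    """
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# Identity quaternion -> identity rotation.
assert np.allclose(quat_to_rotmat(np.array([1.0, 0, 0, 0])), np.eye(3))
# 90-degree rotation about z maps the x-axis onto the y-axis.
R = quat_to_rotmat(np.array([np.cos(np.pi/4), 0, 0, np.sin(np.pi/4)]))
assert np.allclose(R @ np.array([1.0, 0, 0]), np.array([0, 1.0, 0]))
```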
We form the following cost function and solve with respect to our models’ parameters:
$$\operatorname*{arg\,min}_{\boldsymbol{\alpha},\, p,\, \boldsymbol{\beta},\, \mathbf{c}_e}\; w_1 \big\| \mathbf{l}_{r} - \mathcal{P}(\mathbf{s}_{r}) \big\|^2 + w_2 \big\| \mathbf{l}_{e} - \mathcal{P}_{e}(\mathbf{e}) \big\|^2 + w_3 \big\| \mathbf{I}\!\left(\mathcal{P}_{e}(\mathbf{e})\right) - \mathbf{t} \big\|^2 + w_4 \|\boldsymbol{\alpha}\|^2 + w_5\, p^2 + w_6 \|\boldsymbol{\beta}\|^2 \qquad (23)$$
where $\mathcal{P}(\mathbf{s}_{r})$ is the perspective projection of the eye region shape model in the image plane and $\mathcal{P}_{e}(\mathbf{e})$ is the independent rotation and perspective projection of the eye shape model in the image plane.
In (23), the first term accounts for the reconstruction of the eye region shape, based on a subset of 17 eye region landmarks, while the second term accounts for the reconstruction of the pupil size and gaze direction, based on a subset of 16 iris and pupil landmarks. The third term is a texture loss between the image $\mathbf{I}$ sampled at the model's projected locations and our texture model instance $\mathbf{t}$. The last three terms are regularization terms which serve to counter overfitting, and $w_1, \dots, w_6$ are weights used to regulate the importance of each term during optimization. Problem (23) is solved with the simultaneous variation of Gauss-Newton optimization as formulated in [10].
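As a sketch of the Gauss-Newton machinery used to solve such non-linear least-squares problems (a generic toy residual, not the paper's actual cost (23)):

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, n_iters=20):
    """Generic Gauss-Newton: x <- x - (J^T J)^{-1} J^T r at each step."""
    x = x0.astype(float)
    for _ in range(n_iters):
        r = residual(x)
        J = jacobian(x)
        x = x - np.linalg.solve(J.T @ J, J.T @ r)
    return x

# Toy landmark-style problem: fit scale s and offset t so that
# s * model + t matches observed 1-D "landmarks".
model = np.array([0.0, 1.0, 2.0, 3.0])
obs = 2.5 * model + 0.7

def residual(x):
    s, t = x
    return s * model + t - obs

def jacobian(x):
    # Partial derivatives of each residual entry w.r.t. (s, t).
    return np.stack([model, np.ones_like(model)], axis=1)

s, t = gauss_newton(residual, jacobian, np.array([1.0, 0.0]))
assert np.allclose([s, t], [2.5, 0.7])
```

In the real fitting problem, the residual stacks the landmark, texture and regularization terms and the parameters are $(\boldsymbol{\alpha}, p, \boldsymbol{\beta}, \mathbf{c}_e)$, updated simultaneously.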
5.3 Extending the traditional approach
The described 3DMM fitting algorithm produces accurate predictions for eyelid shape, pupil size and gaze direction in images, but is relatively slow and requires multiple Gauss-Newton steps to converge. Thus, we took the traditional 3DMM fitting approach one step further and trained a regression network to estimate the parameters of our 3D models in a single forward pass.
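A minimal sketch of such a parameter-regression head, assuming a generic encoder whose features are stubbed with random values (the layer widths follow the MLP description below; the weights here are untrained):

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer widths as described in the text; the first entry is the input
# feature size, the last the 10 regressed parameters
# (5 eye-region blendshape + 1 pupil blendshape + 4 quaternion components).
widths = (66, 128, 256, 1024, 512, 256, 128, 10)
weights = [rng.standard_normal((a, b)) * np.sqrt(2.0 / a)
           for a, b in zip(widths[:-1], widths[1:])]
biases = [np.zeros(b) for b in widths[1:]]

def mlp_head(features):
    """Forward pass: ReLU hidden layers, linear output (a sketch, untrained)."""
    h = features
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(h @ W + b, 0.0)          # ReLU
    return h @ weights[-1] + biases[-1]         # linear regression output

feats = rng.standard_normal(66)                 # stand-in encoder output
params = mlp_head(feats)
alpha, pupil, quat = params[:5], params[5], params[6:]
assert params.shape == (10,) and quat.shape == (4,)
```

In the actual pipeline this head sits on top of the pretrained hourglass encoder and is trained end-to-end on image/parameter pairs.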
To this end, we utilized the pretrained hourglass network from Section 5.2 as an encoder and stacked a Multi-Layer Perceptron (MLP) architecture on its last layers, which outputs the parameters of our model. The numbers of neurons in the MLP are (66, 128, 256, 1024, 512, 256, 128, 10), where the last layer represents the concatenation of the five eye region blendshape parameters, the single pupil blendshape parameter, and the four quaternion components that describe the rotation of the eyeball. We trained the entire network end-to-end in a supervised fashion with pairs of 2D images of the eye region and the corresponding parameters, which we recovered with our 3DMM fitting pipeline. We manually filtered the 3DMM results, discarding any misaligned meshes before training. To extract the pairs we utilized AgeDB [38], which we split into a training set ( of images) and a testing set ( of images). Our regression network achieved accuracy in recovering the eye region shape, accuracy in recovering gaze direction and accuracy in recovering pupil size. These accuracy values were calculated in terms of Euclidean distance between the ground-truth meshes recovered by our 3DMM pipeline and the predicted meshes from the regression framework.
5.4 Color estimation
High-quality eye color reconstruction is difficult to achieve using low-resolution images of the eye region with the standard iris PCA model of Section 5.1. Therefore, we treat the problem of eye color reconstruction as a classification problem, given a bank of known iris textures as shown in Figure 7.
In order to make as accurate predictions as possible with respect to the eye color of a subject depicted in an “in-the-wild” image, we need thousands of ground-truth eye images annotated with their color. To this end, we utilized AgeDB [38] and employed the 68 2D landmarks to extract the eye regions for each one of the images. Subsequently, since there are only seven different colors for human eyes, we manually annotated the extracted eye images with one of the following options: amber, blue, brown, dark brown, gray, green, or hazel. The cropped eye images were of size .
We used of the AgeDB data [38] for the training process and the rest for testing. We carried out the training utilizing a simple encoder architecture, similar to the one described in [39]. The only modification was to the last layer, where the output dimension was changed to seven, in accordance with the total number of eye colors. This architecture yielded the best results, with about classification accuracy on the test set. Given that certain eye color classes are highly correlated and are challenging to classify even for humans (such as amber and brown, or gray and blue), the model actually achieves very high accuracy, since the vast majority of misclassifications occurs between these groups.
6 Oral cavity and teeth
An appropriate and complete representation of the human head should also model the inner mouth cavity and the teeth, in addition to the external characteristics of the human head, as these are often visible in raw images. Correctly capturing the 3D topology of the oral cavity in a single template is challenging due to the lack of 3D data, the non-convex and specular teeth regions, and the highly deformable nature of the tongue. To make progress on this aspect, we have incorporated an inner mouth topology, in which we model the lining inside the cheeks, the front two thirds of the tongue, the upper and lower gum, and the floor and the roof of the mouth. We treat teeth as separate meshes and fix their location on top of the gums. The tongue and the teeth are not fitted to any training data and, as such, they do not capture any independent statistical variance. However, the overall scale for all axes is propagated smoothly, in a decaying manner, from the outer lips to the inner cavity of our head model.
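A hypothetical sketch of such decaying scale propagation (the function name, exponential decay profile and rate are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def propagate_scale(vertices, lip_idx, lip_scale, decay=5.0):
    """Blend a scale factor from the outer-lip vertices into the cavity.

    Each vertex is scaled by a factor that starts at `lip_scale` on the
    lips and decays smoothly toward 1 (no change) with distance from them.
    The exponential decay is an illustrative choice.
    """
    lips = vertices[lip_idx]                            # (L, 3)
    # Distance from every vertex to its nearest lip vertex.
    d = np.linalg.norm(vertices[:, None, :] - lips[None, :, :], axis=2).min(axis=1)
    w = np.exp(-decay * d)                              # 1 on lips -> 0 far away
    factors = 1.0 + w * (lip_scale - 1.0)
    return vertices * factors[:, None]

# Toy example: a lip vertex, a nearby cavity vertex and a distant vertex.
verts = np.array([[1.0, 0, 0], [1.1, 0, 0], [5.0, 0, 0]])
out = propagate_scale(verts, lip_idx=[0], lip_scale=1.2)
# The lip vertex receives the full scale; the distant vertex is barely moved.
assert np.isclose(out[0, 0], 1.2)
assert np.allclose(out[2], verts[2], atol=1e-6)
```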
7 Texture modeling and completion
Incorporating a full head texture model is vital if we aim to leverage the proposed shape model to perform 3D reconstructions from data-deficient sources such as 2D images. Instead of modeling the texture space in a low-frequency PCA formulation, we employ a GAN architecture [52] after bringing all the textures into correspondence in a UV domain space. In this way, we are capable of preserving the high-frequency skin details and avoiding the blurriness of a PCA model. Our combined data set of textures consists of approximately K facial textures and full head textures from the original LSFM and LYHM respectively. Unfortunately, the textures of the cranium region are unusable due to the blue latex hair caps that the subjects were instructed to wear during the image capture process.
In order to properly render the head of a given subject, apart from the shape and the facial texture, we also need to visualize the entire head texture. That is, we need an elegant way to fill in the missing head texture, given the facial texture. The main problem that arises in this process is the scarcity of ground-truth full head textures. To address this issue, given the facial textures, we employed a graphics artist to fill in the corresponding missing head textures. In this way, we created an adequate number of face-head texture pairs, which we then used to train a pix2pixHD [52] model to fill in the missing cranium textures.
The pix2pixHD methodology is the current state-of-the-art for image translation tasks on high-resolution data. In our case, we learned to automatically produce complete head textures, given the facial ones. An illustration of a head completion example can be seen in Figure 9. We trained the pix2pixHD model utilizing the learning rates and hyperparameters mentioned in the original implementation [52]. However, the numbers of global and local blocks in the generator framework were changed to 5 and 10, respectively. Moreover, no instance feature maps were added to the input and, finally, the VGG feature loss was deactivated, as this led to marginally better performance in the completion process.
8 Experiments
In this section, we analyze in detail the capabilities of our fused head model by examining the intrinsic characteristics in Section 8.1. Additionally, in Sections 8.2 and 8.3, we thoroughly describe the full head reconstruction pipeline from 2D images, and we evaluate our approach both qualitatively and quantitatively for all separate attributes.
8.1 Intrinsic evaluation
After merging the LSFM face and LYHM head models, we name our initial head model the Combined Face & Head Model (CFHM). When this is augmented into our final model, it is named the Universal Head Model (UHM), which combines four separate models (face, cranium, ears and eyes) into a single representation.
Following common practice, we evaluate our model variations against the LYHM by utilizing compactness, generalization and specificity [16, 13, 9]. For all the subsequent experiments we utilize the original head scans of [14], from which we have chosen 300 head meshes that were excluded from the training procedure. This test set was randomly chosen within demographic constraints to ensure ethnic, age and gender diversity. We name our model variations as follows: CFHM-reg, built by the regression method; CFHM-GP, built by the Gaussian process kernel framework; and finally CFHM-ref, built after refinement with Gaussian process regression. We also present bespoke models in terms of age and ethnicity, constructed by the Gaussian process kernel method coupled with refinement.
The top graphs in Figure 10 present the compactness measures of the CFHM models compared to the LYHM. Compactness calculates the percentage of variance of the training data that is explained by the model when a certain number of principal components is retained. The models CFHM-reg and CFHM-GP express higher compactness than the model after refinement. The compactness of all the proposed methods is far greater than that of the LYHM, as can be seen from the graph. Both global and bespoke CFHM models can be considered sufficiently compact. In Figure 11 (a), the UHM model demonstrates similar compactness to the CFHM-reg and CFHM-GP models while extending the variation in the ear area. Compared to the original ear model, the universal model is able to describe the same ear variability with fewer components.
The center row of Figure 10 illustrates the generalization error, which demonstrates the ability of the models to represent novel head shapes unseen during training. To compute the generalization error for a given number of retained principal components, we compute the per-vertex Euclidean distance between every sample of the test set and its corresponding model projection, and then take the average value over all vertices and test samples. All of the proposed models exhibit far greater generalization capability than the LYHM. The refined model CFHM-ref tends to generalize better than the other approaches, especially in the range of to components. Equivalently, as can be seen in Figure 11 (b), the UHM performs marginally better but, more importantly, exhibits steadily decreasing errors compared to all other methods, which ensures stability across all components.
Additionally, we plot the generalization error of the bespoke models against the CFHM-ref in Figure 10 (b, center). In order to derive a correct generalization measure for the bespoke models, for every mesh we use its demographic information to project it onto the subspace of the corresponding bespoke model, and then we compute an overall average error. We observe that the CFHM-ref mostly outperforms the bespoke models in generalization, which might be attributed to the fact that many of the bespoke models are trained from smaller cohorts, and so lack sufficient statistical variance.
Finally, the graphs of Figure 10 (bottom) show the specificity measures of the introduced models, which evaluate the validity of the synthetic faces generated by a model. We randomly synthesize 5,000 faces from each model for a fixed number of components and measure how close they are to the real faces based on a standard per-vertex Euclidean distance metric. We observe that the model with the best error results is the proposed refined model CFHM-ref. The LYHM model demonstrates better specificity error than the CFHM-reg and CFHM-GP models only in the first 20 components. Both of the proposed combined models exhibit steady error measures () when more than 20 components are kept. This is due to the higher compactness that both combined models demonstrate, which enables them to maintain a steady specificity error after 20 components. For all bespoke models, we observe that the specificity errors attain particularly low values, in the range of to mm. This is evidence that the synthetic heads generated by both global and bespoke CFHM models are realistic enough. Similarly, in Figure 11 (c), the UHM model demonstrates identical specificity measures to CFHM-ref, since the ear fusion does not interfere with the overall ability of the model to synthesize realistic head shapes.
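The three intrinsic measures above can be sketched for a generic PCA model as follows (toy random data standing in for registered meshes; the real evaluation uses the 300 held-out scans):

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.standard_normal((200, 30))       # toy "meshes": 200 samples, 30 dims
test = rng.standard_normal((50, 30))

mean = train.mean(axis=0)
U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
var = S**2 / (len(train) - 1)                # per-component variance

def compactness(k):
    """Fraction of training variance captured by the first k components."""
    return var[:k].sum() / var.sum()

def generalization(k):
    """Mean reconstruction error of unseen samples with k components."""
    B = Vt[:k]
    rec = mean + (test - mean) @ B.T @ B
    return np.linalg.norm(test - rec, axis=1).mean()

def specificity(k, n_samples=1000):
    """Mean distance from random model samples to their nearest real sample."""
    coeffs = rng.standard_normal((n_samples, k)) * np.sqrt(var[:k])
    synth = mean + coeffs @ Vt[:k]
    d = np.linalg.norm(synth[:, None] - train[None], axis=2)
    return d.min(axis=1).mean()

# More components: more variance explained, generalization error never worse.
assert compactness(30) > compactness(5)
assert generalization(30) <= generalization(5) + 1e-9
```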
Our results show that our combination techniques yield models that are capable of exhibiting improved intrinsic characteristics compared to the original LYHM head model.
8.2 Head reconstruction from single images
By leveraging the UHM model, we outline a methodology that enables us to reconstruct the entire head shape including ears and eye gaze/color from unconstrained single images. We strictly utilize only one view/pose for head reconstruction in contrast to [35] where multiple images were utilized. We achieve this by regressing from a latent space that represents the 3D face and ear shape to the latent space of the full head models constructed by the proposed methodologies.
We begin by building a PCA model of the inner face along with landmarks on each ear, as described in [60]. We utilize the head meshes produced by our proposed methods. After building the face-ear PCA model, we project each one of the face-ear examples to get the associated shape parameters. Similarly, we project the full head mesh of the same identity to the full head PCA model in order to acquire the latent shape parameters of the entire head. As in Section 3.1, we construct a regression matrix in the same manner, which works as a mapping from the latent space of the ear/face shape to the full head representation.
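The regression matrix can be sketched as a least-squares fit between the two latent spaces (toy latent dimensions and noiseless synthetic codes of our choosing):

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, d_face_ear, d_head = 300, 20, 40   # toy latent dimensions

# Per-subject latent codes: face+ear PCA parameters and full-head parameters.
X = rng.standard_normal((n_subjects, d_face_ear))
W_true = rng.standard_normal((d_face_ear, d_head))
Y = X @ W_true                                  # pretend heads are consistent

# Regression matrix mapping face/ear latents to head latents (least squares).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# At test time: project a face/ear fit to its latent code x; the full-head
# latent code is x @ W, which the head PCA model then decodes to a mesh.
x_new = rng.standard_normal(d_face_ear)
assert np.allclose(x_new @ W, x_new @ W_true, atol=1e-8)
```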
In order to reconstruct the full head shape and texture from 2D images, we begin by fitting the facial part of our head model. Due to the nature of our high-frequency head texture model, we employ the recently proposed approach of [21], where high-quality texture reconstructions are possible by leveraging a GAN texture model in a gradient descent optimization setting. Afterwards, we employ an ear detector and an Active Appearance Model (AAM), as proposed in [60], to localize the ear landmarks in the 2D image domain. Since we have fitted a facial 3DMM in the image space, we already know the camera parameters, i.e., focal length, rotation and translation. Hence, we can easily retrieve the ear landmarks in 3D space by solving an inverse perspective-n-point problem [33], given the camera parameters and the depth values of the fitted mesh. We mirror the 3D landmarks with respect to the z-axis to obtain the missing landmarks of the occluded ear. After acquiring the facial part and the ear landmarks, we are able to attain the full head representation with the help of the regression matrix. Since each proposed method estimates a slightly different head shape for the face scans, we repeat the aforementioned procedure by building bespoke regression matrices for each head model. In order to fill in the entire head texture, we employ the texture completion methodology described in Section 7, where from a facial texture we are able to fill in the entire head surface. Finally, after acquiring the full head shape, we refine the eye region shape and estimate the eye gaze/color and pupil dilation/constriction by employing the regression network, where the parameters of the eye model are estimated from a cropped image around the eye region. Qualitative results of our approach can be seen in Figure 12.
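The landmark-lifting step can be sketched with a simplified pinhole back-projection (the paper solves a full inverse perspective-n-point problem [33]; the intrinsics here and the mirroring axis convention are illustrative assumptions):

```python
import numpy as np

def backproject(uv, depth, f, cx, cy):
    """Lift 2-D landmarks to 3-D with a pinhole camera and known depths."""
    u, v = uv[:, 0], uv[:, 1]
    X = (u - cx) * depth / f
    Y = (v - cy) * depth / f
    return np.stack([X, Y, depth], axis=1)

f, cx, cy = 800.0, 320.0, 240.0                 # illustrative intrinsics
pts3d = np.array([[0.05, 0.02, 1.2], [0.06, 0.03, 1.25]])  # toy ear landmarks

# Forward projection, then back-projection with the mesh depth, round-trips.
uv = np.stack([f * pts3d[:, 0] / pts3d[:, 2] + cx,
               f * pts3d[:, 1] / pts3d[:, 2] + cy], axis=1)
recovered = backproject(uv, pts3d[:, 2], f, cx, cy)
assert np.allclose(recovered, pts3d)

# Occluded ear: mirror the visible ear's landmarks (flipping one coordinate
# here stands in for the paper's axis convention in the head frame).
mirrored = recovered * np.array([-1.0, 1.0, 1.0])
assert np.allclose(mirrored[:, 1:], recovered[:, 1:])
```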
Because of the nature of our complete head model, we are able to recover large ear variations among the reconstructed subjects, as well as different eye region and head shapes, in combination with high-quality texture.
We quantitatively evaluate our methodology by rendering distinct head scans from our test set in frontal and side poses, varying from to degrees around the axis, so that the ears are visible in the image space. We apply our previous procedure, where we fit a facial 3DMM and detect the ear landmarks in the image plane. Then, for each method, we exploit the bespoke regression matrix to predict the entire head shape. We measure the per-vertex error between the recovered head shape and the actual ground-truth head scan by projecting each point of the fitted mesh to the ground truth and measuring the Euclidean distance. Figure 13 shows the cumulative error distribution for this experiment, for the four models under test. Tables I and II report the corresponding Area Under Curve (AUC) and failure rates for the fitted and the actual ground-truth 3D facial meshes, respectively. In both settings, the LYHM struggles to recover the head shapes. CFHM-reg and CFHM-GP perform equally well, whereas the model after refinement attains better results. The model that exhibits the best reconstruction in both settings is the UHM, as shown in diagrams (a) and (b) of Figure 13. That is attributed to the high-quality ear variation of the UHM model after fusion, on which the head reconstruction pipeline relies. By merging the ear model, the degrees of freedom by which the ear topology can drive the entire head shape have significantly increased. Figure 13 (c) shows the ear shape estimation results of the UHM model against the LYHM and the CFHM-ref, from 2D image landmarks, compared to the actual ground-truth 3D ear meshes. The UHM outperforms both models by a large margin. Additional measures are reported in Table III.

Method  AUC  Std  Failure Rate (%)

UHM  0.875  1.74  1.51 
CFHM-ref  0.751  3.42  3.64 
CFHM-reg  0.693  4.71  6.88 
CFHM-GP  0.681  4.36  7.55 
LYHM [14]  0.605  20.95  19.21 
Method  AUC  Std  Failure Rate (%) 

UHM  0.912  1.12  0.44 
CFHM-ref  0.880  2.04  0.62 
CFHM-GP  0.844  2.81  2.46 
CFHM-reg  0.831  2.74  1.69 
LYHM [14]  0.739  18.14  14.10 
Method  AUC  Std  Failure Rate (%) 

UHM  0.802  2.4  0.32 
CFHM-ref Ear  0.697  6.88  2.75 
LYHM Ear [14]  0.621  17.52  12.6 
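The per-vertex error, AUC and failure-rate measures reported above can be sketched as follows (toy point clouds, an illustrative error threshold, and a nearest-vertex distance standing in for true point-to-mesh projection):

```python
import numpy as np

def pervertex_error(fit, gt):
    """Distance from each fitted vertex to its nearest ground-truth vertex."""
    d = np.linalg.norm(fit[:, None, :] - gt[None, :, :], axis=2)
    return d.min(axis=1)

def auc_and_failure(errors, max_err=5.0, n_bins=1000):
    """Normalized area under the cumulative error curve, plus failure rate."""
    thresholds = np.linspace(0.0, max_err, n_bins)
    ced = np.array([(errors <= t).mean() for t in thresholds])
    auc = ced.mean()                     # uniform bins: mean approximates AUC
    failure = (errors > max_err).mean()  # fraction exceeding the threshold
    return auc, failure

rng = np.random.default_rng(0)
gt = rng.standard_normal((500, 3))
fit = gt + 0.01 * rng.standard_normal((500, 3))   # a near-perfect "fit"
errors = pervertex_error(fit, gt)
auc, fail = auc_and_failure(errors, max_err=1.0)
assert auc > 0.9 and fail == 0.0
```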
8.2.1 Special Cases
Naturally, in-the-wild faces of people often come with all sorts of occlusions, including long hair, hats, sunglasses, or even other body parts such as hands covering parts of the face/head. Similar to [21], we rely on a strong optimization setting in order to overcome these limitations. Due to the high-frequency nature of our texture model, we are able to exclude any occluding artifacts that might appear and generate realistic head shapes. Also, thanks to the face recognition component (face identity features) of [21] in the gradient descent optimization framework, we are capable of reconstructing realistic human-like head shapes from oil paintings and animated characters. As can be seen in Figure 14, we are able to reconstruct pleasing head shapes and textures from images with various occlusions (hair, sunglasses, hats, hands), from painting-like images of people and from images of animated characters. In cases where neither ear was visible in the image, we utilized the mean ear landmarks of the UHM model in order to acquire the entire head shape.
8.3 Eye model evaluation
We evaluate the eye modeling pipeline of our UHM both qualitatively, in terms of resemblance between reconstructions and input images, and quantitatively, in the task of gaze estimation from single images. Figure 15 includes qualitative results on reconstruction of the eye region from single images by our regression network described in Section 5.3. Reconstructions produced by our pipeline accurately simulate the eyelid shape and gaze direction of the corresponding images, while the pupil size also adapts reasonably, wherever the pupil is visible.
To further evaluate the eye modeling module of the UHM, we perform a gaze estimation experiment and compare our results with eye3DMM [55], in which gaze direction is also estimated by fitting a 3DMM of the eye region. Our model builds on a similar pipeline and extends it by training an end-to-end network which regresses the 3DMM parameters of our eye models. Table IV includes gaze estimation results, in terms of mean angular error, on the Eyediap database [20]. For a fair comparison with other methods, we did not include the extreme gaze directions of Eyediap in our experiments. Our model outperforms eye3DMM [55] by .
ours  Eye3DMM  CNN  RF  kNN  ALR  SVR  synth.

Mean gaze error (°)  8.85  9.44  10.5  12.0  12.2  12.7  15.1  19.9

TABLE IV: Comparison of our method with Eye3DMM [55], a CNN-based approach, Random Forests (RF) [50], kNN [58], Adaptive Linear Regression (ALR) [36], and Support Vector Regression (SVR) [48] in mean gaze estimation error on the Eyediap database [20].

9 Conclusion
In this work, we propose the first human head 3DMM representation that is complete in the sense that it demonstrates meaningful variations across all major visible surfaces of the human head, that is, the face, cranium, ears and eyes. In addition, for realistic renderings in open-mouth expressions, a basic model of the oral cavity, tongue and teeth is included. We presented a pipeline to fuse multiple 3DMMs into a single 3DMM and used it to combine the LSFM face model, the LYHM head model, and a high-detail ear model. Furthermore, we incorporated a detailed eye model that is capable of accurately reconstructing the eyelid shape and the shape around the eyes, as well as the eye gaze and color. Additionally, we built a complete high-detail head texture model by constructing a framework that enables us to complete the missing head texture for any given facial texture. The resulting universal head model captures all the desirable properties of the constituent 3DMMs; namely, the high facial detail of the facial model, the full cranial shape variations of the head model, the additional high-quality ear variations, as well as the bespoke eyelid and eye region deformations. The augmented model is capable of representing and reconstructing any given head shape (including ears and eyelid shape) due to the high variation of facial and head appearances existing in the original models. We demonstrated that our methodology yields a statistical model that is considerably superior to the original constituent models. Finally, we illustrated the model's utility in full head reconstruction from a single image.
Although our model is a significant step forward, the challenge of a universal head model remains open. We do not deal with hair, instead modelling cranium geometry with skin texture and baking facial hair into the texture. We do not fully model the statistical shape variance inside the mouth, including the teeth and tongue, which is essential for realistic speech dynamics; rather, we only statistically model external craniofacial shape. There may be value in modelling internal skull geometry and a volumetric skin model, both for disentangling rigid-body motion from face dynamics and to enable more accurate rendering. Finally, we still depend on a classical shape modelling pipeline of GPA and PCA, where more sophisticated, non-linear models may be preferable.
Acknowledgments
S. Ploumpis was supported by EPSRC Project (EP/N007743/1) FACER2VM. S. Zafeiriou acknowledges funding from a Google Faculty Award, as well as from the EPSRC Fellowship DEFORM: Large Scale Shape Analysis of Deformable Models of Humans (EP/S010203/1). N. Pears and W. Smith acknowledge funding from a Google Daydream award. W. Smith was supported by the Royal Academy of Engineering under the Leverhulme Trust Senior Fellowship scheme. We would like to formally thank Vasileios Triantafyllou (computer graphics specialist at Facesoft) for all the visual content of this work, as well as for his valuable contribution to the head texture completion dataset.
References
 [1] (2013) A survey on ear biometrics. ACM computing surveys (CSUR) 45 (2), pp. 22. Cited by: §2.3.
 [2] (2013) Inverse rendering of faces with a 3d morphable model. IEEE transactions on pattern analysis and machine intelligence 35 (5), pp. 1080–1093. Cited by: §1.
 [3] (2003) The space of human body shapes: reconstruction and parameterization from range scans. In ACM transactions on graphics (TOG), Vol. 22, pp. 587–594. Cited by: §1.

 [4] (2007) Optimal step non-rigid ICP algorithms for surface registration. In Computer Vision and Pattern Recognition, 2007. CVPR'07. IEEE Conference on, pp. 1–8. Cited by: §2.1.
 [5] (2016) Lightweight eye capture using a parametric model. ACM Transactions on Graphics (TOG) 35 (4), pp. 117. Cited by: §2.3.
 [6] (2014) High-quality capture of eyes. ACM Transactions on Graphics (TOG) 33 (6), pp. 223. Cited by: §2.3.
 [7] (1999) A morphable model for the synthesis of 3d faces. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, pp. 187–194. Cited by: §2.1.
 [8] (2003) Face recognition based on fitting a 3d morphable model. IEEE Transactions on pattern analysis and machine intelligence 25 (9), pp. 1063–1074. Cited by: §1.
 [9] (2015) A groupwise multilinear correspondence optimization for 3d faces. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3604–3612. Cited by: §8.1.
 [10] (2017) 3D face morphable models “in-the-wild”. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Cited by: §5.2.
 [11] (2016) A 3d morphable model learnt from 10,000 faces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5543–5552. Cited by: §2.1, §3.1, §4.1.
 [12] (2003) Expression-invariant 3d face recognition. In International Conference on Audio- and Video-Based Biometric Person Authentication, pp. 62–70. Cited by: §1.
 [13] (2014) Review of statistical shape spaces for 3d data with comparative analysis for human faces. Computer Vision and Image Understanding 128, pp. 1–17. Cited by: §8.1.
 [14] (2017) A 3d morphable model of craniofacial shape and texture variation. In 2017 IEEE International Conference on Computer Vision (ICCV), pp. 3104–3112. Cited by: §1, §2.2, §8.1, TABLE I, TABLE II, TABLE III.
 [15] (2018) A data-augmented 3d morphable model of the ear. In Automatic Face & Gesture Recognition (FG 2018), 2018 13th IEEE International Conference on, pp. 404–408. Cited by: §2.3.
 [16] (2008) Statistical models of shape: optimisation and evaluation. Springer Science & Business Media. Cited by: §8.1.
 [17] (2010) Optimal regions for linear modelbased 3d face reconstruction. In Asian Conference on Computer Vision, pp. 276–289. Cited by: §2.1, §3.1, §3.1.
 [18] (2018) Cascade multi-view hourglass model for robust 3d face alignment. In FG. Cited by: §5.2.
 [19] (2019) 3D morphable face models–past, present and future. arXiv preprint arXiv:1909.01815. Cited by: §2.
 [20] (2014) EYEDIAP: a database for the development and evaluation of gaze estimation algorithms from RGB and RGB-D cameras. In Proceedings of the ACM Symposium on Eye Tracking Research and Applications. Cited by: §8.3, TABLE IV.
 [21] (2019) GANFIT: generative adversarial network fitting for high fidelity 3d face reconstruction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1155–1164. Cited by: §8.2.1, §8.2.
 [22] (2018) Morphable face modelsan open framework. In Automatic Face & Gesture Recognition (FG 2018), 2018 13th IEEE International Conference on, pp. 75–82. Cited by: §3.2, §3.3.
 [23] (2006) General theory of remote gaze estimation using the pupil center and corneal reflections. IEEE Transactions on biomedical engineering 53 (6), pp. 1124–1133. Cited by: §2.3.
 [24] (2009) In the eye of the beholder: a survey of models for eyes and gaze. IEEE transactions on pattern analysis and machine intelligence 32 (3), pp. 478–500. Cited by: §2.3.
 [25] (2009) A statistical model of human pose and body shape. In Computer graphics forum, Vol. 28, pp. 337–346. Cited by: §1.
 [26] (2016) Face recognition using a unified 3d morphable model. In European Conference on Computer Vision, pp. 73–89. Cited by: §1.
 [27] (2017) Avatar digitization from a single image for realtime rendering. ACM Transactions on Graphics (TOG) 36 (6), pp. 195. Cited by: §2.2.
 [28] (2016) A multi-resolution 3d morphable face model and fitting framework. In Proceedings of the 11th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. Cited by: §1.
 [29] (2011) Efficient detection and recognition of 3d ears. International Journal of Computer Vision 95 (1), pp. 52–73. Cited by: §2.3.
 [30] (2013) Creating the sydney york morphological and acoustic recordings of ears database. IEEE Transactions on Multimedia 16 (1), pp. 37–46. Cited by: §4.1.
 [31] (2018) Total capture: a 3d deformation model for tracking faces, hands, and bodies. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8320–8329. Cited by: §1.
 [32] (2018) Gaussian mixture 3d morphable face model. Pattern Recognition 74, pp. 617–628. Cited by: §3.2.
 [33] (2009) Epnp: an accurate o (n) solution to the pnp problem. International journal of computer vision 81 (2), pp. 155. Cited by: §8.2.
 [34] (2017) Learning a model of facial shape and expression from 4d scans. ACM Transactions on Graphics (TOG) 36 (6), pp. 194. Cited by: §2.2.
 [35] (2016) Head reconstruction from internet photos. In European Conference on Computer Vision, pp. 360–374. Cited by: §8.2.

 [36] (2014) Adaptive linear regression for appearance-based gaze estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence 36 (10), pp. 2033–2046. Cited by: TABLE IV.
 [37] (2017) Gaussian process morphable models. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §3.2, §3.3, §3.
 [38] (2017) AgeDB: the first manually collected, in-the-wild age database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 51–59. Cited by: §5.3, §5.4.
 [39] (2019) 3DFaceGAN: adversarial nets for 3d face representation, generation, and translation. arXiv preprint arXiv:1905.00307. Cited by: §5.4.
 [40] (2010) Point set registration: coherent point drift. IEEE transactions on pattern analysis and machine intelligence 32 (12), pp. 2262–2275. Cited by: §3.3.
 [41] (2018) Deep pictorial gaze estimation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 721–738. Cited by: §2.3.
 [42] (2009) A 3D face model for pose and illumination invariant face recognition. In 2009 Sixth IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pp. 296–301. Cited by: §2.1.
 [43] (2012) Ear biometrics: a survey of detection, feature extraction and recognition methods. IET Biometrics 1 (2), pp. 114–129. Cited by: §2.3.
 [44] (2019) Combining 3D morphable models: a large scale face-and-head model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10934–10943. Cited by: §2.
 [45] (2018) Generating 3d faces using convolutional mesh autoencoders. arXiv preprint arXiv:1807.10267. Cited by: §2.2.
 [46] (2002) Civilian American and European Surface Anthropometry Resource (CAESAR), final report, volume 1: summary. Technical report. Cited by: §2.2.
 [47] (2017) Embodied hands: modeling and capturing hands and bodies together. ACM Transactions on Graphics (TOG) 36 (6), pp. 245. Cited by: §1.
 [48] (2014) Manifold alignment for person independent appearance-based gaze estimation. In 2014 22nd International Conference on Pattern Recognition, pp. 1167–1172. Cited by: TABLE IV.
 [49] (2015) Describing Crouzon and Pfeiffer syndrome based on principal component analysis. Journal of Cranio-Maxillofacial Surgery 43 (4), pp. 528–536. Cited by: §1.
 [50] (2014) Learning-by-synthesis for appearance-based 3D gaze estimation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1821–1828. Cited by: TABLE IV.
 [51] (2014) Creating the Sydney York Morphological and Acoustic Recordings of Ears database. IEEE Transactions on Multimedia 16 (1), pp. 37–46. Cited by: §2.3.
 [52] (2018) High-resolution image synthesis and semantic manipulation with conditional GANs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807. Cited by: §7.
 [53] (2008) Block-based and multi-resolution methods for ear recognition using wavelet transform and uniform local binary patterns. In 2008 19th International Conference on Pattern Recognition, pp. 1–4. Cited by: §2.3.
 [54] (2016) A 3D morphable eye region model for gaze estimation. In European Conference on Computer Vision, pp. 297–313. Cited by: §2.3.
 [55] (2016) A 3D morphable eye region model for gaze estimation. In ECCV. Cited by: §8.3, TABLE IV.
 [56] (2008) Ear recognition using LLE and IDLLE algorithm. In 2008 19th International Conference on Pattern Recognition, pp. 1–4. Cited by: §2.3.
 [57] (2010) Ear recognition under partial occlusion based on neighborhood preserving embedding. In Biometric Technology for Human Identification VII, Vol. 7667, pp. 76670Y. Cited by: §2.3.
 [58] (2015) Appearance-based gaze estimation in the wild. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4511–4520. Cited by: TABLE IV.
 [59] (2015) Appearance-based gaze estimation in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4511–4520. Cited by: §2.3.
 [60] (2017) Deformable models of ears in-the-wild for alignment and recognition. In 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), pp. 626–633. Cited by: §8.2.
 [61] (2016) Face alignment across large poses: a 3D solution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 146–155. Cited by: §1.
 [62] (2015) High-fidelity pose and expression normalization for face recognition in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 787–796. Cited by: §1.
 [63] (2015) Discriminative 3D morphable model fitting. In Automatic Face and Gesture Recognition (FG), 2015 11th IEEE International Conference and Workshops on, Vol. 1, pp. 1–8. Cited by: §1.
 [64] (2016) Generating a morphable model of ears. pp. 1771–1775. Cited by: §2.3.