1 Introduction
Acquisition of 3D surface data is continually becoming more commonplace and affordable, through a variety of modalities ranging from laser scanners to structured light to binocular and multi-view stereo systems. However, these data are often incomplete and noisy, and robust regularization is needed. When we are interested in a particular class of objects, such as human faces, we can use prior knowledge about the shape to constrain the reconstruction. This alleviates not only the problems of noise and incomplete data, but also occlusion. Such priors can be learned by computing statistics on databases of registered 3D face shapes.
Accurate 3D face capture is important for many applications, from performance capture to telepresence to gaming to recognition tasks to ergonomics, and considerable resources of data are available from which to learn a statistical prior on the shape of the human face (e.g. [5, 33, 32, 23]).
In this paper, we propose a novel statistical model for the shape of human faces, and use it to fit to input 3D surfaces from different sources, exhibiting high variation in expression and identity, and severe levels of data corruption in the forms of noise, missing data and occlusions. We make the following specific technical contributions:

A novel statistical shape space based on a wavelet decomposition of 3D face geometry and multilinear analysis of the individual wavelet coefficients.

Based on this model, we develop an efficient algorithm for learning a statistical shape model of the human face in varying expressions.

We develop an efficient algorithm for fitting our model to static and dynamic point cloud data, that is robust with respect to highly corrupted scans.

We publish our statistical model and code to fit it to point cloud data [6].
Our model has the following advantages. First, it results in algorithms for training and fitting that are highly efficient and scalable. By using a wavelet transform, we decompose a high-dimensional global shape space into many localized, decorrelated low-dimensional shape spaces. This dimensionality is the dominant factor in the complexity of the numerical routines used in both training and fitting. Training on thousands of faces takes a few minutes, and fitting to an input scan takes a few seconds, both using a single-threaded implementation on a standard PC.
Second, it allows us to capture fine-scale details due to its local nature, as shown in Figure 5, while retaining robustness against corruption of the input data. The wavelet transform decomposes highly correlated vertex coordinates into decorrelated coefficients, upon which multilinear models can be learned independently. Learning many low-dimensional statistical models, rather than a single high-dimensional model as used in [5, 30, 7], greatly reduces the risk of overfitting to the training data; it avoids the curse of dimensionality. Thus, a much higher proportion of the variability in the training data can be retained in the model. During fitting, tight statistical bounds can be placed on the model parameters for robustness, yet the model can still fit closely to valid data points.
Third, it is readily generalizable and extendable. Our model requires no explicit segmentation of the face into parts; the wavelet transform decomposes the surface hierarchically into overlapping patches, and the inverse transform recombines them. Unlike manually decomposed part-based models, e.g. [13, 28, 25], it requires no sophisticated optimization of blending weights, and the decomposition is not class-specific. Further, it can easily be extended to include additional information such as texture.
2 Related Work
This work is concerned with learning 3D statistical shape models that can be used in surface fitting tasks. To learn a statistical shape model, a database of shapes with known correspondence information is required. Computing correspondences between a set of shapes is a challenging problem in general [27]. However, for models of human faces, correspondences can be computed in a fully automatic way using template deformation methods (e.g. [19, 22]).
The works most closely related to ours are the part-based multilinear models recently proposed to model 3D human body shapes [9]. To define the part-based model, a segmentation of the training shapes into meaningful parts is required. This is done manually by segmenting the human models into body parts, such as limbs. Lecron et al. [16] use a similar statistical model on human spines that are manually segmented into their vertebrae. In contrast, our method computes a suitable hierarchical decomposition automatically, thereby eliminating the need to manually generate a meaningful segmentation.
Many statistical models have been used to analyze human faces. The first statistical model for the analysis of 3D faces was proposed by Blanz and Vetter [5]. This model, called the morphable model, uses Principal Component Analysis (PCA) to analyze the shape and texture of registered faces, mainly in neutral expression. It has been applied to reconstruct 3D facial shapes from images [5] and from 3D face scans [4, 21]. Amberg et al. [1] extend the morphable model to consider expressions, by combining it with a PCA model for expression offsets with respect to the neutral expression geometry. An alternative way to incorporate expression changes is to use a multilinear model, which separates identity and expression variations. This model has been used to modify expressions in videos [30, 11, 31], and to register and analyze 3D motion sequences [7]. Multilinear models are mathematically equivalent to TensorFaces [29] applied to 3D data rather than images, and provide an effective way to capture both identity and expression variations. In Section 6, we compare to a global multilinear model and show that our model better captures local geometric detail.

Blanz and Vetter [5] manually segmented the face into four regions and learned a morphable model on each segment. The regions are fitted to the data independently and merged in a post-processing step. This part-based model was shown to lead to higher fitting accuracy than the global morphable model. As part-based models are suitable for obtaining good fitting results in localized regions, they have been used in multiple follow-up works, e.g. [13, 28, 25]. While the model of Kakadiaris et al. [13] shares some similarities with our model, they use a fixed annotated face model and wavelet transforms to compare facial geometry images. In contrast, we learn multilinear models on subdivision wavelet coefficients.
All of the methods discussed so far model shape changes using global or part-based statistical models. In contrast, by applying a wavelet transform to the data first, statistical models can be constructed that capture shape variation in both a local and a multiscale way. Such wavelet-domain techniques have been used extensively for medical imaging [12, 20, 17], and Brunton et al. [8] proposed a method to analyze local shape differences of 3D faces in neutral expression in a hierarchical way. This method decomposes each face hierarchically using a wavelet transform and learns a PCA model for each wavelet coefficient independently. This approach has been shown to capture more facial details than global statistical shape spaces. Hence, in Section 6 we compare to a wavelet-domain approach and show that our model better captures expression variation.
We propose a method that combines this localized shape space with a multilinear model, thereby allowing us to capture localized shape variations across databases of 3D faces of different subjects in different expressions.
3 Multilinear Wavelet Model
Our statistical shape space for human faces consists of a multilinear model for each wavelet coefficient resulting from a spherical subdivision wavelet decomposition of a template face mesh. The wavelet transform takes a set of highly correlated vertex positions and produces a set of decorrelated wavelet coefficients. This decorrelation means that we can treat each coefficient separately and learn a distinct multilinear model for each coefficient. These multilinear models capture the variation of each wavelet coefficient over changes in identity and expression. In the following, we review the two components of our model.
3.1 Second Generation Spherical Wavelets
Spherical wavelets typically operate on subdivision surfaces [24] following a standard subdivision hierarchy, giving a multiscale decomposition of the surface. This allows coarse-scale shape properties to be represented by just a few coefficients, while localized fine-scale details are represented by additional coefficients. Second-generation wavelets can be accelerated using the lifting scheme [26], factoring the convolution of the basis functions into a hierarchy of local lifting operations, which are weighted averages of neighboring vertices. When combined with subsampling, the transform can be computed in time linear in the number of vertices. The particular wavelet decomposition we use [3] follows Catmull-Clark subdivision, and has been used previously for localized statistical models in multiple application domains [17, 8]. The wavelet transform is a linear operator, denoted $\mathcal{D}$. For a 3D face surface $\mathbf{f}$, the wavelet coefficients are $\mathbf{w} = \mathcal{D}\mathbf{f}$.
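To make the lifting idea concrete, the following sketch implements one level of a 1D linear lifting wavelet. This is a simplified stand-in for the surface subdivision wavelets of [3], not the authors' implementation; the function names and the periodic boundary handling are our own illustrative choices. Each predict/update step is a local weighted average, so a full multi-level transform runs in time linear in the number of samples:

```python
import numpy as np

def lift_forward(signal):
    """One level of a 1D linear lifting wavelet transform (predict + update).

    Illustrative stand-in for the subdivision-surface wavelets of the paper:
    each step is a local weighted average, so one level costs O(n).
    Assumes an even-length signal with periodic boundary (np.roll)."""
    even, odd = signal[0::2].copy(), signal[1::2].copy()
    # Predict: detail = odd sample minus linear prediction from even neighbors.
    detail = odd - 0.5 * (even + np.roll(even, -1))
    # Update: adjust coarse samples using the details to preserve averages.
    coarse = even + 0.25 * (detail + np.roll(detail, 1))
    return coarse, detail

def lift_inverse(coarse, detail):
    """Invert the lifting steps in reverse order; reconstruction is exact."""
    even = coarse - 0.25 * (detail + np.roll(detail, 1))
    odd = detail + 0.5 * (even + np.roll(even, -1))
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out
```

Applying `lift_forward` recursively to the coarse part yields the multiscale hierarchy, and `lift_inverse` undoes each level exactly, mirroring the linear operator $\mathcal{D}$ and its inverse.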
3.2 Multilinear Models
To statistically analyze a population of shapes that vary in multiple ways, such as identity and expression for faces, one can use a multilinear model. In general, one constructs a multilinear model by organizing the training data into an $n$-mode tensor, where the first mode is the vector representation of each training sample, and the remaining modes contain training samples varied in distinct ways.

We organize our set of parametrized training shapes into a 3-mode tensor $\mathcal{A} \in \mathbb{R}^{d_1 \times d_2 \times d_3}$, where $d_1$ is the dimension of each shape, and $d_2$ and $d_3$ are the number of training samples in each mode of variation; in our case, identity and expression. It would be straightforward to extend this model to allow for more modes, such as varying textures due to illumination changes, if the data were available. We use a higher-order Singular Value Decomposition (HOSVD) [15] to decompose $\mathcal{A}$ into

$\mathcal{A} = \mathcal{M} \times_2 \mathbf{U}_2 \times_3 \mathbf{U}_3$, (1)

where $\mathcal{M} \in \mathbb{R}^{d_1 \times m_2 \times m_3}$ is a tensor called a multilinear model, and $\mathbf{U}_2 \in \mathbb{R}^{d_2 \times m_2}$ and $\mathbf{U}_3 \in \mathbb{R}^{d_3 \times m_3}$ are orthogonal matrices. The $n$-th mode product $\mathcal{M} \times_n \mathbf{U}_n$ replaces each vector $\mathbf{x} \in \mathbb{R}^{m_n}$ of $\mathcal{M}$ in the direction of the $n$-th mode by $\mathbf{U}_n \mathbf{x} \in \mathbb{R}^{d_n}$. To compute the orthogonal matrix $\mathbf{U}_2$, $\mathcal{A}$ is unfolded in the direction of the 2nd mode to the matrix $\mathbf{A}_{(2)} \in \mathbb{R}^{d_2 \times d_1 d_3}$, where the columns of $\mathbf{A}_{(2)}$ are the vectors of $\mathcal{A}$ in the direction of the 2nd mode; $\mathbf{U}_2$ then consists of the first $m_2$ left singular vectors of $\mathbf{A}_{(2)}$ ($\mathbf{U}_3$ is computed analogously). The decomposition in (1) is exact if $m_i = d_i$ for all $i$. If $m_i < d_i$ for at least one $i$, the decomposition approximates the data. This technique is called truncated HOSVD, and we use it to reduce the dimensionality of the training data.
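As an illustrative sketch (assuming NumPy; the helper names `unfold`, `mode_multiply`, and `truncated_hosvd` are ours, not the authors' code), the truncated HOSVD of a 3-mode tensor can be computed from the SVDs of its unfoldings:

```python
import numpy as np

def unfold(A, mode):
    """Unfold tensor A along `mode`: the columns are the mode-`mode` fibers."""
    return np.moveaxis(A, mode, 0).reshape(A.shape[mode], -1)

def mode_multiply(A, U, mode):
    """n-th mode product: replace each mode-`mode` fiber x of A by U @ x."""
    out = np.tensordot(U, A, axes=(1, mode))
    return np.moveaxis(out, 0, mode)

def truncated_hosvd(A, m2, m3):
    """Truncated HOSVD of a 3-mode tensor, truncating modes 2 and 3.

    Returns the multilinear model M and factors U2, U3 with orthonormal
    columns such that A ~= M x2 U2 x3 U3; exact when m2 = d2 and m3 = d3."""
    U2 = np.linalg.svd(unfold(A, 1), full_matrices=False)[0][:, :m2]
    U3 = np.linalg.svd(unfold(A, 2), full_matrices=False)[0][:, :m3]
    # Core tensor: project A onto the truncated bases (orthonormal columns,
    # so the pseudo-inverse of each factor is its transpose).
    M = mode_multiply(mode_multiply(A, U2.T, 1), U3.T, 2)
    return M, U2, U3
```

With `m2 = d2` and `m3 = d3` the reconstruction `mode_multiply(mode_multiply(M, U2, 1), U3, 2)` recovers the tensor exactly; smaller truncations give the best-effort approximation used to reduce dimensionality.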
The multilinear model represents a shape $\mathbf{f}$ by

$\mathbf{f} \approx \bar{\mathbf{f}} + \mathcal{M} \times_2 \mathbf{w}_2^{\mathrm{T}} \times_3 \mathbf{w}_3^{\mathrm{T}}$, (2)

where $\bar{\mathbf{f}}$ is the mean of the training data (over all identities and expressions), and $\mathbf{w}_2 \in \mathbb{R}^{m_2}$ and $\mathbf{w}_3 \in \mathbb{R}^{m_3}$ are identity and expression coefficients. Varying only $\mathbf{w}_2$ changes identity while keeping the expression fixed, whereas varying only $\mathbf{w}_3$ changes the expression of a single identity.
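Equation (2) can be sketched in NumPy as follows (a hedged illustration; the function names are hypothetical). Multiplying the core tensor by the row vectors $\mathbf{w}_2^{\mathrm{T}}$ and $\mathbf{w}_3^{\mathrm{T}}$ contracts modes 2 and 3 away, leaving a single shape vector:

```python
import numpy as np

def mode_multiply(A, U, mode):
    """Replace each mode-`mode` fiber x of A by U @ x."""
    return np.moveaxis(np.tensordot(U, A, axes=(1, mode)), 0, mode)

def reconstruct(mean, M, w2, w3):
    """Shape from eq. (2): f = mean + M x2 w2^T x3 w3^T (illustrative sketch).

    M has shape (d1, m2, m3); w2 and w3 are coefficient vectors of length
    m2 and m3. Each mode product with a 1-row matrix collapses that mode."""
    f = mode_multiply(M, w2[np.newaxis, :], 1)   # shape (d1, 1, m3)
    f = mode_multiply(f, w3[np.newaxis, :], 2)   # shape (d1, 1, 1)
    return mean + f.reshape(-1)
```

For example, with one-hot coefficient vectors the reconstruction picks out a single slice of the core tensor, which makes the bilinear structure of (2) easy to verify.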
4 Training
In this section, we describe the process of learning the multilinear wavelet model from a database of registered 3D faces in a fixed number of expressions. Using the notation from Section 3.2, the database contains $d_2$ identities, each in $d_3$ expressions. We discuss in Section 6 how to obtain such a registered database. The training process is depicted graphically in Figure 1.
The first stage in our training pipeline is to apply a wavelet transform to every shape in our training database. The leftmost part of Figure 1 shows the influence region of two wavelet coefficients on four face shapes (two identities in two expressions). To obtain a template with the proper subdivision connectivity, we use a registration-preserving stereographic resampling onto a regular grid [8], although any quad-remeshing technique could be used. Because the training shapes are registered and have the same connectivity, we now have a database of registered wavelet coefficients (middle of Figure 1). Note that this does not require any manual segmentation, but is computed fully automatically. By considering the decorrelating properties of wavelet transforms, we can look at it another way: we now have a training set for each individual wavelet coefficient, which we can treat separately.
From these decorrelated training sets, covering variations in both identity and expression, we can learn a distinct multilinear model for each coefficient, resulting in many localized shape spaces as shown in the right part of Figure 1. This allows a tremendous amount of flexibility in the model.
Training our model has the following complexity. Each wavelet transform has complexity $O(n)$ for $n$ vertices, and we perform $d_2 d_3$ of them, one per training shape. The complexity of the HOSVD depends only on the mode dimensions [15]. Because every multilinear model is computed for only a single wavelet coefficient over the training set, its cost is a constant depending only on $d_2$ and $d_3$, so the total cost of learning all local models is linear in the number of wavelet coefficients, and hence in $n$. Thus, our model allows highly efficient and scalable training, as detailed in Section 6.
Training many low-dimensional models has statistical benefits too. We retain a large amount of the variation present in the training data by truncating modes 2 and 3 at $m_2$ and $m_3$, chosen no larger than the smallest mode-dimension in our tensor.
Our model generates a 3D face surface as follows. The vertex positions are generated from the wavelet coefficients via the inverse wavelet transform, denoted $\mathcal{D}^{-1}$. The wavelet coefficients are generated from their individual multilinear weights for identity and expression. Thus, following (2), wavelet coefficients are generated by

$\mathbf{w}^{(k)} = \bar{\mathbf{w}}^{(k)} + \mathcal{M}^{(k)} \times_2 (\mathbf{w}_2^{(k)})^{\mathrm{T}} \times_3 (\mathbf{w}_3^{(k)})^{\mathrm{T}}$, (3)

where $k$ is the index of the wavelet coefficient, and the surface is generated by $\mathbf{f} = \mathcal{D}^{-1}\mathbf{w}$, where $\mathbf{w}$ stacks the coefficients $\mathbf{w}^{(k)}$.
5 Fitting
In this section, we discuss the process of fitting our learned model to an input oriented point cloud or mesh $P$, which may be corrupted by noise, missing data or occlusions. The process is depicted graphically in Figure 2. We fit our model by minimizing a fitting energy that captures the distance between the model surface $\mathbf{f}$ and $P$, subject to the constraints learned in our training phase. We minimize the energy in a coarse-to-fine manner, starting with the multilinear weights of the coarse-scale wavelet coefficients, and refining the result by optimizing finer-scale multilinear weights.
5.1 Fitting Energy
We optimize our model parameters to minimize an energy measuring the distance between $\mathbf{f}$ and $P$. Our model parameters consist of the per-wavelet-coefficient multilinear weights $\mathbf{w}_2^{(k)}$ and $\mathbf{w}_3^{(k)}$ for each coefficient $k$, and a similarity transform $T$ (rigid plus uniform scaling) mapping the coordinate frame of $\mathbf{f}$ to the coordinate frame of $P$.
Our fitting energy consists of four parts: a landmark term, a surface fitting term, a surface smoothing term, and a prior term. That is,

$E = E_{\mathrm{L}} + E_{\mathrm{D}} + E_{\mathrm{S}} + E_{\mathrm{P}}$, (4)

where $E_{\mathrm{L}}$, $E_{\mathrm{D}}$, $E_{\mathrm{S}}$ and $E_{\mathrm{P}}$ are the landmark energy, surface fitting energy, surface smoothing energy and prior energy, respectively. We now describe each of these energies in turn.
The landmark energy measures the Euclidean distance between corresponding landmark sets $L = \{\ell_1, \ldots, \ell_{|L|}\} \subset \mathbf{f}$ and $\tilde{L} = \{\tilde{\ell}_1, \ldots, \tilde{\ell}_{|L|}\} \subset P$ located on the model surface and the input data, respectively. These landmarks may be obtained in a variety of ways, including automatically [10, 22], and do not restrict our method. In Section 6, we demonstrate how our method performs using landmarks from multiple sources. The landmarks are in correspondence such that $\ell_i$ and $\tilde{\ell}_i$ represent equivalent points on $\mathbf{f}$ and $P$, respectively. With this, we define our landmark energy as

$E_{\mathrm{L}} = \rho \sum_{i=1}^{|L|} \| T(\ell_i) - \tilde{\ell}_i \|_2^2$, (5)

where $\rho$ is a constant balancing the relative influence of landmarks against that of the rest of the surface.
The surface fitting energy measures the point-to-plane distance between vertices in $\mathbf{f}$ and their nearest neighbors in $P$. That is,

$E_{\mathrm{D}} = \sum_{j=1}^{n} w_j \| \mathbf{v}_j - \mathbf{v}_j^{\perp} \|_2^2$, (6)

where $\mathbf{v}_j^{\perp}$ is the projection of vertex $\mathbf{v}_j \in \mathbf{f}$ into the tangent plane of $\mathbf{p}_j$, where $\mathbf{p}_j \in P$ is the nearest neighbor of $\mathbf{v}_j$. The distances are weighted by

$w_j = \begin{cases} 1 & \text{if } \| \mathbf{v}_j - \mathbf{p}_j \|_2 \le \tau \\ 0 & \text{otherwise} \end{cases}$ (7)

where $\tau$ is a threshold on the distance to the nearest neighbor, providing robustness to missing data. We compute nearest neighbors using ANN [2].
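The surface fitting term of eqs. (6)-(7) can be sketched as follows (an illustrative NumPy version; it uses brute-force nearest neighbors rather than the ANN library [2], and assumes per-point normals on the data define the tangent planes):

```python
import numpy as np

def fitting_energy(model_vertices, data_points, data_normals, tau):
    """Point-to-plane surface fitting energy, eqs. (6)-(7) (illustrative).

    For each model vertex, find its nearest data point (brute force here;
    the paper uses ANN), project the residual onto the data normal to get
    the point-to-plane distance, and drop correspondences whose Euclidean
    distance exceeds the threshold tau (robustness to missing data)."""
    # Pairwise distances, shape (n_model, n_data).
    diff = model_vertices[:, None, :] - data_points[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    nn = np.argmin(dist, axis=1)                  # nearest data index per vertex
    residual = model_vertices - data_points[nn]   # vector to nearest neighbor
    # Signed distance to the tangent plane of the nearest neighbor.
    plane_dist = np.sum(residual * data_normals[nn], axis=1)
    w = (np.linalg.norm(residual, axis=1) <= tau).astype(float)  # eq. (7)
    return np.sum(w * plane_dist ** 2)
```

A vertex hovering above a planar patch contributes only its out-of-plane distance, and contributes nothing once it is farther than `tau` from its nearest neighbor.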
The prior energy restricts the shape to stay in the learned shape space, providing robustness to both noise and outliers. We avoid introducing undue bias toward the mean shape via a hyperbox prior [7],

$E_{\mathrm{P}} = \sum_{k} \left( f_{\mathrm{P}}(\mathbf{w}_2^{(k)}) + f_{\mathrm{P}}(\mathbf{w}_3^{(k)}) \right)$, (8)

where

$f_{\mathrm{P}}(\mathbf{w}_2) = \begin{cases} 0 & \text{if } | w_{2,i} - \bar{u}_{2,i} | \le \delta_2 \text{ for all } i \\ \infty & \text{otherwise} \end{cases}$ (9)

restricts each component $w_{2,i}$ of $\mathbf{w}_2$ to be within a constant amount $\delta_2$ of the same component of the mode-mean $\bar{\mathbf{u}}_2$, and similarly for each component of $\mathbf{w}_3$.
The smoothing energy is the bi-Laplacian energy, which penalizes changes in curvature between neighboring vertices. It is needed because the energy minimization algorithm, described in Section 5.2, optimizes each multilinear wavelet model independently. Without a smoothing energy, this can result in visible patch boundaries in the fitted surface, as can be seen in Figure 4.
Formally, we write

$E_{\mathrm{S}} = \lambda_{\mathrm{s}} \sum_{j=1}^{n} \| U^2(\mathbf{v}_j) \|_2^2$, (10)

where $U^2$ is the double-umbrella discrete approximation of the bi-Laplacian operator [14], and $\lambda_{\mathrm{s}}$ is a constant weight.
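A minimal sketch of the smoothing term (assuming NumPy and a vertex adjacency list; the uniform-weight umbrella operator and the function names are our own illustrative choices):

```python
import numpy as np

def umbrella(vertices, neighbors):
    """Discrete umbrella operator: mean of the neighbors minus the vertex."""
    return np.array([vertices[nbrs].mean(axis=0) - vertices[i]
                     for i, nbrs in enumerate(neighbors)])

def bilaplacian_energy(vertices, neighbors, lam):
    """Smoothing energy of eq. (10): lam * sum ||U^2(v)||^2, where U^2 is
    the double umbrella (the umbrella operator applied twice) [14].
    `neighbors` is an adjacency list; this is an illustrative sketch, not
    the paper's implementation."""
    u2 = umbrella(umbrella(vertices, neighbors), neighbors)
    return lam * np.sum(u2 ** 2)
```

On equally spaced collinear interior vertices the double umbrella vanishes, so the energy only penalizes curvature changes (boundary vertices with one-sided neighborhoods still contribute in this simple uniform-weight version).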
The smoothing energy poses a trade-off: visually pleasing smooth surfaces versus fitting accuracy and speed. Leaving out $E_{\mathrm{S}}$ allows the energy minimization to get closer to the data (as expected), and leads to faster fitting due to the energy being more localized. Hence, we retain the option of not evaluating this energy in case the scenario favors close fitting and fast performance over visually smooth results. We use either $\lambda_{\mathrm{s}} = 0$ or a single fixed positive value of $\lambda_{\mathrm{s}}$ in all our experiments. Section 6 discusses this trade-off in more concrete terms.
5.2 Energy Minimization
We minimize (4) in a two-step procedure. In the first step, we iteratively minimize $E_{\mathrm{L}}$ with respect to $T$ and the multilinear weights of each wavelet coefficient. This rigidly aligns the model and the data, and coarsely deforms the surface to fit the landmarks, giving a good initialization for subsequent surface fitting. We solve for the $T$ that minimizes $E_{\mathrm{L}}$, given the landmark positions $L$ and $\tilde{L}$. This involves solving a small overdetermined linear system. Then, we optimize $\mathbf{w}_2^{(k)}$ and $\mathbf{w}_3^{(k)}$ for each wavelet coefficient to minimize $E_{\mathrm{L}}$. Figure 2 (bottom, middle) shows the result of landmark fitting for a given input.
In the second step, we fix $T$ and minimize (4) with respect to only the multilinear weights. This deforms the surface so that it closely fits the input data $P$. Figure 2 (bottom, right) shows the final fitting result.
The energies $E_{\mathrm{L}}$, $E_{\mathrm{D}}$ and $E_{\mathrm{S}}$ are nonlinear with respect to the multilinear weights, and we minimize them using the L-BFGS-B [18] quasi-Newton method. This bounded optimization allows the prior (8) to be enforced simply as bounds on the multilinear weights. The hierarchical and decorrelating nature of the wavelet transform allows us to minimize the energies separately for each multilinear model in a coarse-to-fine manner. During initialization, we recompute $T$ and optimize the multilinear weights iteratively at each level of wavelet coefficients. During surface fitting, nearest neighbors are recomputed and the multilinear weights optimized iteratively at each level. During initialization, we allow greater variation in the model, i.e., a wider hyperbox, because we assume the landmarks are not located on occlusions. During surface fitting, we restrict the shape space further, unless the particular weight component is already outside this range from the initialization.
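The way the hyperbox prior turns into simple box constraints for L-BFGS-B can be illustrated with SciPy (a toy sketch under assumed values; `mode_mean`, `half_width`, and `target` are hypothetical stand-ins for the mode-mean, the box half-width, and the unconstrained optimum):

```python
import numpy as np
from scipy.optimize import minimize

# The hyperbox prior of eq. (8) costs 0 inside the box and infinity outside,
# so with L-BFGS-B it never needs to be evaluated: it becomes plain bounds
# on the multilinear weights.
mode_mean = np.zeros(3)        # hypothetical mode-mean of the weights
half_width = 1.0               # hypothetical "constant amount" of eq. (9)
bounds = [(m - half_width, m + half_width) for m in mode_mean]

# Toy energy: a quadratic whose unconstrained minimum lies outside the box.
target = np.array([2.0, 0.5, -3.0])
energy = lambda w: np.sum((w - target) ** 2)
grad = lambda w: 2.0 * (w - target)

res = minimize(energy, x0=mode_mean, jac=grad,
               method="L-BFGS-B", bounds=bounds)
```

Because this toy quadratic is separable, the constrained minimizer is simply the target clipped to the box, which the bounded optimizer recovers without ever touching the infinite-valued prior.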
Fitting many low-dimensional local multilinear models is more efficient than fitting a single high-dimensional global multilinear model, because the dimensionality of the variables to be optimized is the dominant factor in the complexity of the quasi-Newton optimization, which achieves superlinear convergence by updating an estimate of the Hessian matrix in each iteration. For a problem of size $n$, the Hessian contains $n(n+1)/2$ unique entries, which favors solving many small problems even if the total number of variables optimized is greater. This is confirmed experimentally in Section 6. Further, each multilinear model has compact support on $\mathbf{f}$, which reduces the number of distances that must be computed in each evaluation of (6) and its gradient.

5.3 Tracking
As an application of our shape space, we show how a simple extension of our fitting algorithm can be used to track a facial motion sequence. For the first frame, we fit both identity and expression weights. For subsequent frames, we fix the identity weights and fit only the expression weights. This ensures that shape changes over the sequence are due only to expression, not identity. A more elaborate scheme, which averages the identity weights, would also be feasible.
To avoid jitter, we introduce a temporal smoothing term on the vertex positions. Approaches based on global multilinear models often place a temporal smoothing term on the expression weights themselves [31, 7], since these usually have much lower dimension than the surface. In our case, the combined dimensionality of all expression weights is equal to that of the vertex positions, so no efficiency is gained by operating on the weights rather than the vertex positions. Further, placing a restriction on the vertex positions fits easily into our energy minimization. We use a simple penalty on the movement of the vertices between frames, incorporated into our fitting algorithm by adding a Euclidean distance penalty to our energy function (4) during surface fitting:

$E_{\mathrm{T}} = \lambda_{\mathrm{t}} \sum_{j=1}^{n} \| \mathbf{v}_j^{t} - \mathbf{v}_j^{t-1} \|_2^2$, (11)

where $\mathbf{v}_j^{t}$ denotes vertex $j$ at frame $t$, and $\lambda_{\mathrm{t}}$ is a constant balancing allowing the surface to move versus reducing jitter.
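The temporal term of eq. (11) is straightforward; as a sketch (NumPy, with illustrative names):

```python
import numpy as np

def temporal_penalty(vertices_t, vertices_prev, lam_t):
    """Jitter penalty of eq. (11): a Euclidean cost on per-vertex motion
    between consecutive frames. Illustrative sketch; lam_t trades off
    tracking the data against temporal stability."""
    return lam_t * np.sum((vertices_t - vertices_prev) ** 2)
```

During tracking this is simply added to the surface-fitting objective for the current frame, with the previous frame's fitted vertices held fixed.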
6 Evaluation
6.1 Experimental Setup
Training Data: For a training database, we use the BU-3DFE database [33], registered using an automatic template-fitting approach [22] with ground-truth landmarks. This database contains 100 subjects, each scanned in a neutral pose and in six prototypic expressions at four intensity levels. We use the subjects that were successfully registered in all expressions for training in our experiments.
Test Data: To test our fitting accuracy, we use scans from the Bosphorus database [23], including variation in identity, expression and type of occlusion. We specifically do not test on scans from the database we use for training, to avoid bias. Further, the Bosphorus scans typically have higher noise levels than those in BU-3DFE, and contain occlusions. This database provides landmarks on each scan; we use the subset of those shown in Figure 2 that are present on a given surface (not blocked by an occlusion). In Section 6.4, we show the performance of our method when tracking facial motion sequences from the BU-4DFE database [32], with landmarks automatically predicted using an approach based on local descriptors and a Markov network [22].
Comparison: We compare our fitting results to the localized PCA model [8] and the global multilinear model [7]. All three models are trained on the same data, with the exception that, because the local PCA model does not model expression variation, we train it separately for each expression and give it the correct expression during fitting. The other two models are given landmarks for fitting.
Performance: We implemented our model, both training and fitting, in C++ using standard libraries. We ran all tests on a workstation running Windows with an Intel Xeon E3-1245 CPU. Training our model on thousands of face shapes takes a few minutes using a single-threaded implementation, and in practice we found the training algorithm to scale approximately linearly in the number of training shapes. Fitting to a static scan takes a few seconds on average, both with and without the smoothing energy (Sections 6.2 and 6.3); for the motion sequences (Section 6.4), fitting takes a few seconds per frame on average, with smoothing adding overhead. Fitting the global multilinear model to a static scan takes longer, and a single-threaded implementation of the local PCA model is slower still due to its sampling-based optimization, which avoids local minima.
6.2 Reconstruction of Noisy Data
In this section, we demonstrate our model’s ability to capture fine-scale detail in the presence of identity and expression variation and high noise levels. We fit it to scans covering a wide range of identities and expressions from the Bosphorus database [23]. We measure the fitting error as distance-to-data, and the per-vertex median errors are shown for all three models in Figure 3 (left). Our model has a greater proportion of sub-millimeter errors than either of the other models, both with and without the smoothing energy. Figure 3 (right) shows cumulative error plots for all three methods for vertices in the characteristic detail region of the face, which is shown next to the plot. This region contains the prominent facial features with the most geometric detail. We see that our model is more accurate than previous models in this region, again with many more sub-millimeter errors than the local PCA and global multilinear models. This shows that our model has improved accuracy for fine-scale detail compared to existing models, in particular in areas with prominent features and high geometric detail.
Figures 4 and 5 show examples of fitting to noisy scans of different subjects in different expressions. These scans contain acquisition noise, missing data and facial hair. Figure 4 (left) shows a surprise expression and close-ups of the nose region; our reconstructions, both with and without smoothing, capture significantly more fine-scale detail than previous models. The right part of the figure demonstrates the effect of the smoothing energy in preventing faint grid artifacts from appearing in the reconstruction due to the independent optimization scheme. Figure 5 shows two subjects in fear and happy expressions. We again see the increased accuracy of our model in terms of fine-scale detail on facial features compared to previous models. Note the accuracy of the nose and mouth shapes in all examples compared to the other models, and the accurate fitting of the underlying face shape in the presence of facial hair. Further, note how our model captures the asymmetry in the eyebrow region for the fear expression.
6.3 Reconstruction of Occluded Data
In this section, we demonstrate our model’s robustness to severe data corruption in the form of occlusions. We fit all three models to occluded scans covering multiple subjects and four types of occlusion from the Bosphorus database [23]. Figure 6 (top right) shows the cumulative error for all three models. Since distance-to-data is not a valid error measure in occluded areas, we apply different masks, shown next to the error plot, depending on the type of occlusion, so that only unoccluded vertices are measured. Clockwise from top-left: the masks used for eye, glasses, mouth and hair occlusions. From the cumulative error curves, we see that our model retains greater accuracy in unoccluded parts of the face than previous models.
The bottom two rows of Figure 6 show example reconstructions in the presence of severe occlusions. All models show robustness to occlusions and reconstruct plausible face shapes, but our model provides better detail in unoccluded parts of the face than previous models (see the mouth and chin in the first row, and the nose in the second row). For these examples, we show our reconstruction with the smoothing energy enabled.
6.4 Reconstruction of Motion Data
In this section, we show our model’s applicability to 3D face tracking, using the simple extension to our fitting algorithm described in Section 5.3. Figure 7 shows results for a selection of frames from three sequences from the BU-4DFE database [32]. We see that, as for static scans, high levels of facial detail are obtained, and even this simple extension of our fitting algorithm tracks the expression well. Since landmarks are predicted automatically for these sequences, the entire tracking pipeline is automatic. This simple tracking algorithm is surprisingly stable. Videos can be found in the supplemental material.
7 Conclusion
We have presented a novel statistical shape space for human faces. Our multilinear wavelet model allows for the reconstruction of fine-scale detail, while remaining robust to noise and severe data corruptions such as occlusions, and is highly efficient and scalable. The use of the wavelet transform has both statistical and computational advantages. By decomposing the surfaces into decorrelated wavelet coefficients, we can learn many independent low-dimensional statistical models rather than a single high-dimensional model. Lower-dimensional models reduce the risk of overfitting, which allows us to set tight statistical bounds on the shape parameters, thereby providing robustness to data corruption while capturing fine-scale detail. Model dimensionality is the dominant factor in the numerical routines used for fitting the model to noisy input data, and fitting many low-dimensional models is much faster than fitting a single high-dimensional model, even when the total number of parameters is much greater. We have demonstrated these properties experimentally with a thorough evaluation on noisy data with varying expression, occlusions and missing data. We have further shown how our fitting procedure can be easily extended to give stable tracking of 3D facial motion sequences. Future work includes making our model applicable to real-time tracking. Virtually all aspects of our fitting algorithm are directly parallelizable, and an optimized GPU implementation could likely achieve real-time fitting rates, in particular for tracking, where only expression weights need to be optimized every frame. Such high-detail real-time tracking could have tremendous impact in telepresence and gaming applications.
References

[1] B. Amberg, R. Knothe, and T. Vetter. Expression invariant 3D face recognition with a morphable model. In FG, pages 1–6, 2008.
 [2] S. Arya and D. Mount. Approximate nearest neighbor queries in fixed dimensions. In SODA, pages 271–280, 1993. http://www.cs.umd.edu/~mount/ANN/.
 [3] M. Bertram, M. Duchaineau, B. Hamann, and K. I. Joy. Generalized B-spline subdivision-surface wavelets for geometry compression. TVCG, 10(3):326–338, 2004.
 [4] V. Blanz, K. Scherbaum, and H.P. Seidel. Fitting a morphable model to 3d scans of faces. In ICCV, 2007.
 [5] V. Blanz and T. Vetter. A morphable model for the synthesis of 3d faces. In SIGGRAPH, pages 187–194, 1999.
 [6] T. Bolkart, A. Brunton, A. Salazar, and S. Wuhrer. Statistical 3D shape models of human faces. http://statistical-face-models.mmci.uni-saarland.de/, 2013.
 [7] T. Bolkart and S. Wuhrer. Statistical analysis of 3d faces in motion. In 3DV, 2013.
 [8] A. Brunton, C. Shu, J. Lang, and E. Dubois. Wavelet modelbased stereo for fast, robust face reconstruction. In CRV, 2011.
 [9] Y. Chen, Z. Liu, and Z. Zhang. Tensorbased human body modeling. In CVPR, 2013.

[10] C. Creusot, N. Pears, and J. Austin. A machine-learning approach to keypoint detection and landmarking on 3D meshes. IJCV, 102(1-3):146–179, 2013.
 [11] K. Dale, K. Sunkavalli, M. K. Johnson, D. Vlasic, W. Matusik, and H. Pfister. Video face replacement. TOG, 30(6):130:1–10, 2011.
 [12] C. Davatzikos, X. Tao, and D. Shen. Hierarchical active shape models, using the wavelet transform. TMI, 22(3):414–423, 2003.
 [13] I. Kakadiaris, G. Passalis, G. Toderici, M. Murtuza, Y. Lu, N. Karamelpatzis, and T. Theoharis. Threedimensional face recognition in the presence of facial expressions: An annotated deformable model approach. TPAMI, 29(4):640–649, 2007.
 [14] L. Kobbelt, S. Campagna, J. Vorsatz, and H.-P. Seidel. Interactive multi-resolution modeling on arbitrary meshes. In CGIT, 1998.
 [15] L. D. Lathauwer. Signal processing based on multilinear algebra. PhD thesis, K.U. Leuven, Belgium, 1997.
 [16] F. Lecron, J. Boisvert, S. Mahmoudi, H. Labelle, and M. Benjelloun. Fast 3d spine reconstruction of postoperative patients using a multilevel statistical model. MICCAI, 15(2):446–453, 2012.
 [17] Y. Li, T.S. Tan, I. Volkau, and W. Nowinski. Modelguided segmentation of 3D neuroradiological image using statistical surface wavelet model. In CVPR, pages 1–7, 2007.
 [18] D. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. Math. Prog.: Ser. A, B, 45(3):503–528, 1989.
 [19] I. Mpiperis, S. Malassiotis, and M. G. Strintzis. Bilinear models for 3d face and facial expression recognition. TIFS, 3:498–511, 2008.
 [20] D. Nain, S. Haker, A. Bobick, and A. Tannenbaum. Multiscale 3d shape analysis using spherical wavelets. In MICCAI, pages 459–467, 2005.
 [21] A. Patel and W. Smith. 3d morphable face models revisited. In CVPR, pages 1327–1334, 2009.
 [22] A. Salazar, S. Wuhrer, C. Shu, and F. Prieto. Fully automatic expression-invariant face correspondence. MVAP, to appear.
 [23] A. Savran, N. Alyuz, H. Dibeklioglu, O. Celiktutan, B. Gökberk, B. Sankur, and L. Akarun. Bosphorus database for 3d face analysis. In BIOID, pages 47–56, 2008.
 [24] P. Schröder and W. Sweldens. Spherical wavelets: Efficiently representing functions on the sphere. In SIGGRAPH, pages 161–172, 1995.
 [25] M. Smet and L. V. Gool. Optimal regions for linear modelbased 3d face reconstruction. In ACCV, pages 276–289, 2010.
 [26] W. Sweldens. The lifting scheme: A customdesign construction of biorthogonal wavelets. Appl. Comp. Harm. Anal., 3(2):186–200, 1996.
 [27] G. Tam, Z.Q. Cheng, Y.K. Lai, F. Langbein, Y. Liu, D. Marshall, R. Martin, X.F. Sun, and P. Rosin. Registration of 3d point clouds and meshes: A survey from rigid to nonrigid. TVCG, 19(7):1199–1217, 2013.
 [28] F. ter Haar and R. Veltkamp. 3d face model fitting for recognition. In ECCV, pages 652–664, 2008.
 [29] M. A. O. Vasilescu and D. Terzopoulos. Multilinear analysis of image ensembles: TensorFaces. In ECCV, pages 447–460, 2002.
 [30] D. Vlasic, M. Brand, H. Pfister, and J. Popović. Face transfer with multilinear models. TOG, 24(3):426–433, 2005.
 [31] F. Yang, L. Bourdev, J. Wang, E. Shechtman, and D. Metaxas. Facial expression editing in video using a temporallysmooth factorization. In CVPR, pages 861–868, 2012.
 [32] L. Yin, X. Chen, Y. Sun, T. Worm, and M. Reale. A highresolution 3d dynamic facial expression database. In FG, pages 1–6, 2008.
 [33] L. Yin, X. Wei, Y. Sun, J. Wang, and M. J. Rosato. A 3d facial expression database for facial behavior research. In FG, pages 211–216, 2006.