1 Introduction
In recent years, storing and modeling multidimensional data have become very common. Potential datasets include attributes of prospective customers, daily exchange rates of multiple currencies, and vectorized images. Although such data often lie on nonlinear manifolds, a linear manifold or a combination of linear manifolds can often provide a practical and suitably accurate approximation. Particularly for data of very high dimensionality, such as vectorized images, a model may also need to produce a reduced-dimension representation of the original observations.
Generic manifold methods use only the observations to approximate the manifold, without any extra information. Constructing the manifold from the data requires constructing a coordinate system on the manifold and projecting the observations onto the manifold; these tasks can be challenging when no side information is available.
One popular and effective technique for modeling linear manifolds while incorporating dimensionality reduction is principal component analysis (PCA), which finds a basis of vectors that captures the highest-variance directions of the original data martinez2001pca. Pitelis et al. (2013) showed how an “atlas” of overlapping linear manifolds that they labeled “charts” could model a nonlinear manifold very effectively pitelis2013learning. Their model was learned by a hill-climbing approach that alternated between assigning observations to charts based on the observations’ values and refitting each chart using PCA on the relevant subset of observations. The initial charts, needed for the first assignments, were found by PCA on bootstrap samples. The number of charts was selected by the method based on a user-supplied penalty.
Vidal, Ma, and Sastry (2005) introduced Generalized Principal Component Analysis (GPCA), which similarly addressed the idea of dividing a larger manifold into multiple local manifolds. GPCA used polynomials based on Veronese maps to modify and combine elements of the original data vectors. GPCA could still learn the coefficients of the monomial terms by PCA, though, because the relationship between the full polynomial and these coefficients remained linear vidal2005generalized. The experimental success of GPCA showed that multiple applications of (linear) PCA could learn a complicated manifold, although the local manifolds learned were typically nonlinear. The authors noted, though, that piecewise linear models (which could be learned by multiple PCA applications without GPCA’s polynomials) are “excellent” in many practical applications at balancing the need for model expressiveness with the desire for model simplicity vidal2003generalized. Like the atlas method, GPCA could also select the appropriate number of local manifolds, but using the ranks of Veronese maps evaluated on the observations instead of user-supplied parameters vidal2005generalized.
The Joint Parameterized Atom Selection (PATS) method vural2013learning learns Pattern Transformation Manifolds (PTMs), which are manifolds of images undergoing a family of geometric transformations. The PATS method is therefore designed for a different purpose than our method, since our work applies to manifolds that are not PTMs. In particular, the PATS method cannot be applied to the examples that will be presented in Section 5.
Other related works include Sparse PCA zou2006sparse, where observations are represented as linear combinations of a small number of basis vectors, and its precursor SCoTLASS jolliffe2003modified, where the sparse PCA coefficients are obtained through constraints. In Joint Sparse PCA yi2017joint, the authors modify the Sparse PCA loss function to obtain feature selection and Sparse PCA simultaneously. A generalization of PCA to tensor data was presented as Multilinear PCA lu2006multilinear, while Multilinear Sparse PCA lai2014multilinear generalizes Sparse PCA to tensor data.

Manifold learning with side information.
Techniques such as GPCA and the atlas-based method identify local manifolds for observations based on the observations’ values, and also identify the number of local manifolds. Situations exist, however, in which data are thought to lie approximately on local manifolds that estimate a larger manifold, and one knows some extra information about which local manifold corresponds to each observation, instead of needing the algorithm to discover this. This extra information could be discrete or continuous. For example, one may be modeling images of vehicles using a known class (“car,” “motorcycle,” “SUV,” or “truck”) for each observation, or modeling face images where the age of each person is also known.
Classification: learning manifolds using class information. When the side information is in the form of discrete class labels, it is often desired to model the manifolds for each class for the best separation between the classes. While the generic methods discussed above had the goal of manifold approximation, these methods have a different goal: classification, as illustrated in Figure 1.
In this respect, linear discriminant analysis (LDA) uses linear manifolds to perform dimensionality reduction that best separates the classes, rather than capturing the variation of each class martinez2001pca. Modified PCA luo2003modified improves face recognition by dividing the PC coefficients by the square roots of their corresponding eigenvalues. Other techniques such as Class-Information-Incorporated Principal Component Analysis chen2005class, Locality Preserving Projections niyogi2004locality, Multi-Manifold Semi-Supervised Learning goldberg2009multi, Multimodal Oriented Discriminant Analysis de2005multimodal, and Semi-Supervised Dimensionality Reduction zhang2007semi learn manifolds in the presence of classes. Like LDA, they focus on classification to a local manifold rather than on modeling the observations once the classes are known. These two goals of manifold learning, approximation vs. classification, are clearly mentioned in the Joint Parameterized Atom Selection (PATS) paper vural2013learning as two different objectives for building a manifold.
Parameterization: learning manifolds with a continuous context parameter. In this work, we are interested in modeling manifolds when the side information is continuous, in the form of a context parameter. This side information can help project each observation to the correct local manifold more accurately, obtaining a more accurate manifold approximation.
Various applications exist in which this extra contextual information would be continuous. Vehicle images could be modeled differently based on the vehicles’ weights, volumes, or prices (MSRPs). Face images could be modeled differently based on age, 3D pose, or illumination. Daily percent changes in a stock’s closing share price could be modeled using the stock’s market capitalization, because smaller-capitalization stocks are thought to have more volatile price movements. Lenders with multiple recorded attributes about their borrowers could use the borrowers’ interest rates or FICO credit scores as the side information.
One reason that modeling with continuous side information has been effectively unaddressed is that one could discretize the side information into a number of groups and treat the modeling problem as many separate problems, each of which could be addressed by existing techniques such as PCA. For notational simplicity, we will refer to the use of a separate PCA model for each group as Independent Principal Component Analysis (IPCA), because none of the groups’ models use information from the observations of the other groups.
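For concreteness, the IPCA baseline just described can be sketched as follows. This is our illustration, not the paper's code; it fits a separate mean and PCA basis to each bin's observations via the SVD, sharing nothing across bins.

```python
import numpy as np

def ipca(groups, k):
    """Independent PCA: fit a separate mean and k-vector PCA basis to each
    bin's observations, with no sharing across bins (illustrative sketch)."""
    models = []
    for X in groups:                      # X: (n_j, d) observations in one bin
        mu = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        models.append((mu, Vt[:k].T))     # basis columns = top-k principal directions
    return models

rng = np.random.default_rng(0)
models = ipca([rng.normal(size=(10, 4)), rng.normal(size=(12, 4))], k=2)
```

Each returned basis is orthonormal, but nothing ties the bases of adjacent bins together, which is exactly the limitation discussed next.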
This contextual parameter carries ordinal and interval information that IPCA would ignore. Consider borrower data, such as a borrower’s number of late payments, with credit score as the parameter and bins 300–350, 350–400, and so on up to 800–850. The average number of late payments might decrease in the training examples as the bin increases, except that the 400–450 bin might have a surprisingly low average. IPCA would ignore the other bins and the pattern they form, and could thus overfit the training examples in the 400–450 bin. Additionally, IPCA would treat customers with credit scores of 355 and 845 as completely different from one with a credit score of 345, because all three are in different bins. It would ignore that the difference between 345 and 355 is much smaller than the difference between 345 and 845, rather than enforcing similarity between how the 345-score and 355-score observations are modeled.
In this paper, we propose a new method, parameterized principal component analysis (PPCA), for creating a PCA-like linear model of multivariate data that changes continuously with a separate parameter whose observation-specific values are known.
Like IPCA, PPCA builds multiple linear models of mean vectors and bases, based on known divisions of the parameter space. Unlike IPCA, PPCA interpolates between the points in the parameter space at which mean vectors and bases were fitted, and penalizes differences between the models for similar parameter values. For example, the PPCA manifold between two consecutive bin endpoints in 3D with a single principal vector would be a ruled surface, as shown in Figure 2 (right). Figure 2 (left) illustrates the differences between PCA, IPCA, and PPCA on simple 1D data with three bins for the parameter values. In PCA, a single linear model is fit through all the data. In IPCA, separate models are fit independently to the data of each bin. In PPCA, three linear models are fit but are forced to match at the bin endpoints. This type of continuity is enforced both for the mean vectors and for the principal directions.
We describe the PPCA model in Section 2 and discuss its implementation in Sections 3 and 4. In Section 5, we apply PPCA to artificial data following smooth functions of the parameter, and to three real datasets: shapes of differently sized lymph nodes, human facial images with different degrees of added blur, and human facial images with different angles of yaw rotation. In all four experiments, PPCA outperformed IPCA, and was particularly beneficial when the number of training examples was limited.
2 Parameterized Principal Component Analysis
Parameterized principal component analysis (PPCA) applies to an environment in which there are observations x_1, …, x_n, each of dimension d, and N bin endpoints arising from the N − 1 bins that partition the acceptable range of the parameter θ. Each bin endpoint j has a mean vector μ_j and k basis vectors v_{j,1}, …, v_{j,k}. Each bin endpoint corresponds to a value of θ, and an observation x_i’s parameter value θ_i dictates x_i’s bin, with lower endpoint l and upper endpoint u. Figure 3 shows an example using a parameter that varies from 4 to 6. Note that a bin endpoint usually applies to two bins. For the example in Figure 3, the bin endpoint at 5 would be an endpoint for the 4–5 bin and for the 5–6 bin.
The parameter θ_i can be translated into weights w_l and w_u for bin endpoints l and u. Equation (1) shows this, using θ̄_l and θ̄_u as the parameter values of the bin’s lower and upper endpoints, respectively. Figure 4 shows an example for an observation with θ_i = 4.4. It has a 60% weight for the bin endpoint at 4 and a 40% weight for the bin endpoint at 5, because 4.4 is 60% of the way from 5 to 4 and 40% of the way from 4 to 5.
w_l = (θ̄_u − θ_i) / (θ̄_u − θ̄_l),    w_u = (θ_i − θ̄_l) / (θ̄_u − θ̄_l)    (1)
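As a minimal sketch, the endpoint weights of Equation (1) can be computed as follows (the helper name is ours):

```python
def endpoint_weights(theta, theta_lo, theta_hi):
    """Linear interpolation weights for a parameter value theta inside the
    bin [theta_lo, theta_hi]: weight for the lower endpoint, then the upper."""
    w_hi = (theta - theta_lo) / (theta_hi - theta_lo)  # fraction toward the upper endpoint
    return 1.0 - w_hi, w_hi

# The example above: theta = 4.4 in the bin with endpoints 4 and 5
w_lo, w_hi = endpoint_weights(4.4, 4.0, 5.0)   # ≈ (0.6, 0.4)
```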
These weights can produce a mean vector μ(θ_i) and a basis v_1(θ_i), …, v_k(θ_i) that are specific to the observation’s parameter θ_i, as shown in Equations (2) and (3).

μ(θ_i) = w_l μ_l + w_u μ_u    (2)

v_m(θ_i) = w_l v_{l,m} + w_u v_{u,m},    m = 1, …, k    (3)
The model produces a lower-dimensional representation of x_i as the coefficient vector c_i = (c_{i,1}, …, c_{i,k}). This can be translated into a projection of x_i as x̂_i = μ(θ_i) + Σ_{m=1}^{k} c_{i,m} v_m(θ_i).
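The interpolation and projection above can be sketched numerically. This is our illustration under assumed toy sizes; the coefficients are obtained by least squares, since the interpolated basis need not be orthonormal.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 6, 2                               # illustrative sizes, not the paper's
mu_lo, mu_hi = rng.normal(size=(2, d))    # endpoint mean vectors
V_lo, V_hi = rng.normal(size=(2, d, k))   # endpoint bases (columns = basis vectors)

w_lo, w_hi = 0.6, 0.4                     # endpoint weights from Equation (1)
mu = w_lo * mu_lo + w_hi * mu_hi          # Equation (2): observation-specific mean
V = w_lo * V_lo + w_hi * V_hi             # Equation (3): observation-specific basis

x = rng.normal(size=d)
c, *_ = np.linalg.lstsq(V, x - mu, rcond=None)   # low-dimensional coefficients
x_hat = mu + V @ c                               # projection onto the local manifold
```

The least-squares residual is orthogonal to the interpolated basis, so x_hat is the closest point to x on the local linear manifold.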
2.1 Energy Function
PPCA minimizes an energy function to balance fitting the training examples well against reducing differences between the corresponding model components of adjacent bin endpoints. The energy contains three terms: a data-fidelity term E_d measuring the fit to the input data, a smoothness term E_s that encourages the means and principal vectors to vary smoothly along the parameter, and an orthogonality term E_o that encourages the principal vectors at each bin endpoint to be orthogonal to each other.
E = E_d + E_s + E_o    (4)
Equation (4) uses the stacked vectors μ, v, and c, which collect the vectors introduced earlier, as detailed in Equation (5). The functions μ(θ) and v_m(θ) can be derived from μ and v, and the coefficient vectors c_i can be extracted from c.
μ = [μ_1; …; μ_N],    v = [v_{1,1}; …; v_{1,k}; …; v_{N,1}; …; v_{N,k}],    c = [c_1; …; c_n]    (5)
The first term of Equation (4) is the data term E_d, the mean squared approximation error over the training examples,
E_d = (1/n) Σ_{i=1}^{n} ‖ x_i − μ(θ_i) − Σ_{m=1}^{k} c_{i,m} v_m(θ_i) ‖²    (6)
using the linear model based on parameter θ_i to approximate example x_i.
The second term,
E_s = Σ_{j=1}^{N−1} [ α ‖μ_{j+1} − μ_j‖² + β Σ_{m=1}^{k} ‖v_{j+1,m} − v_{j,m}‖² ]    (7)
uses penalty coefficients α and β to ensure smooth functions for the mean vectors and basis vectors, respectively. Differences between corresponding elements of the vectors at the two endpoints of the same bin are penalized. Large values of α and β enforce more smoothness in the representation, at the expense of a larger projection error on the training set.
PPCA’s two smoothness penalty coefficients force the model for an observation to incorporate information from observations with similar parameter values: those in its bin and those in the adjacent bin(s). The amount of information sharing depends on the differences in parameter values, even for observations in the same bin. This weighted pooling of information enforces a prior belief that observations with more similar parameter values should be modeled more similarly. It enforces smoothness, but not monotonicity; ordinal trends can still be captured, but only locally, which gives PPCA the ability to approximate more complicated smooth functions, such as sinusoidal curves. Like the prior beliefs in Bayesian models, PPCA’s prior belief is most useful when training data are limited, because the pooling of information can reduce overfitting.
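A minimal sketch of this smoothness term, assuming lists of endpoint means and bases ordered by bin endpoint (names and structure are ours, not the paper's code):

```python
import numpy as np

def smoothness_energy(mus, Vs, alpha, beta):
    """Quadratic smoothness term in the spirit of Equation (7): penalize
    squared differences between the means/bases of adjacent bin endpoints."""
    e = 0.0
    for j in range(len(mus) - 1):
        e += alpha * np.sum((mus[j + 1] - mus[j]) ** 2)   # mean smoothness
        e += beta * np.sum((Vs[j + 1] - Vs[j]) ** 2)      # basis smoothness
    return e

mus = [np.zeros(3), np.ones(3), np.ones(3)]   # only the first pair differs
Vs = [np.eye(3)[:, :2]] * 3                   # identical bases: no basis penalty
```

Identical adjacent components contribute nothing, so only genuinely changing models are penalized.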
The energy function also includes the orthogonality term E_o, given in Equation (8), in which 1(m = m′) is an indicator for the condition m = m′.
E_o = γ Σ_{j=1}^{N} Σ_{m=1}^{k} Σ_{m′=m}^{k} ( v_{j,m} · v_{j,m′} − 1(m = m′) )²    (8)
E_o encourages orthonormality of each bin endpoint’s basis. It penalizes nonzero dot products between pairs of distinct vectors from the same basis, promoting orthogonality, and penalizes deviations of each vector’s squared norm from one.
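One compact way to express this penalty for a single endpoint's basis is through the Gram matrix, which collects all pairwise dot products; this is our sketch, not the paper's code, and it counts each off-diagonal pair twice.

```python
import numpy as np

def orthogonality_energy(V, gamma=1.0):
    """Soft orthonormality penalty for one endpoint's basis V (columns are
    basis vectors): squared deviation of the Gram matrix from the identity."""
    G = V.T @ V                                   # pairwise dot products
    return gamma * np.sum((G - np.eye(V.shape[1])) ** 2)
```

An exactly orthonormal basis incurs zero penalty; any correlation between basis vectors, or any non-unit norm, increases the energy.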
3 Learning a Parameterized Principal Component Analysis Model
Because the energy function is composed of quadratic terms, we assume it to be locally convex. We find a local minimum of the energy using its partial derivatives with respect to the stacked mean vector μ, the stacked basis vector v, and each observation’s coefficient vector c_i. We either perform gradient descent or obtain the respective optimal component analytically by setting the derivatives to zero.
PPCA needs to choose optimal vectors μ, v, and c, and a derivative-based method for any one of the three requires knowing or estimating the other two. We therefore optimize one at a time, holding the other two fixed at their most recent estimates. After initialization, we run several cycles of optimizing the mean vectors, then the basis vectors, then the coefficient vectors. We choose a predetermined number of cycles and terminate early if the algorithm is deemed to have converged, based on the energy. We store one previous iteration’s estimates of μ, v, and c, so these estimates can be treated as final if the energy increases.
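The control flow of this block-coordinate scheme, including the roll-back when the energy increases, can be sketched as follows (names and structure are ours; the update steps stand in for the mean, basis, and coefficient updates of Sections 3.1 to 3.3):

```python
import numpy as np

def alternate(energy, steps, state, n_cycles):
    """Run the update steps in turn for up to n_cycles cycles, keeping the
    previous estimates so they can be restored if the energy increases."""
    prev_e = energy(state)
    for _ in range(n_cycles):
        prev_state = {k: v.copy() for k, v in state.items()}
        for step in steps:               # e.g. update_means, update_bases, update_coeffs
            step(state)
        e = energy(state)
        if e > prev_e:                   # energy went up: roll back and stop early
            return prev_state
        prev_e = e
    return state

# Toy check: minimizing E(x) = x^2 by repeated halving never increases the energy
state = {"x": np.array([4.0])}
result = alternate(lambda s: float(s["x"] @ s["x"]),
                   [lambda s: s.__setitem__("x", s["x"] * 0.5)],
                   state, 3)
```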
3.1 Learning Mean Vectors
A closed-form solution for μ̂, the PPCA estimate of μ, is displayed in Equation (9). It uses the observation-specific matrix of Equation (10), which is made up of bin endpoint weights w_{i,j}. The weight w_{i,j} equals w_l if j is the lower endpoint for observation i, w_u if j is the upper endpoint for observation i, and zero otherwise. Equation (9) also uses the weight-product matrix from Equation (11) and the matrix from Equation (12), which has only three diagonals of nonzero elements, all of which are −1, 1, or 2.
(9) 
(10) 
(11) 
(12) 
The use of a matrix inverse or linear system solution for μ̂ is either impractically slow or inaccurate for high-dimensional data such as vectorized images. For such data, we optimized the mean vectors using a gradient descent algorithm and the energy derivative from Equation (13).

(13)
Equation (13) uses the observation-specific coefficient matrices defined in Equations (14) and (15). It also uses stacked vectors that consist of identical copies of an observation x_i, one per bin endpoint.
(14) 
(15) 
3.2 Learning Basis Vectors
We use gradient descent alone to optimize v, because the dot product inside a quadratic term creates a quartic term that prevents a closed-form solution. The derivative is given in Equation (16); it relies on the stacked vectors of Equation (17), which stack products of the weights, coefficients, and residuals.
(16) 
(17) 
Equation (16) also uses the transition-like matrix from Equation (18) and the bin-comparison matrix from Equation (19). Multiplying the former by a stacked vector zeroes out all of its blocks except one, which is moved to the position appropriate for the corresponding basis vector. In its definition, the functions 1(·) are indicators for the events within the parentheses. The bin-comparison matrix is a larger version of the matrix used for the means.
(18) 
(19) 
The gradient descent algorithm has a soft constraint for orthonormal bases, but we also implement a hard constraint for normality. After the gradient descent algorithm for v completes, we rescale each basis vector to unit norm. We cannot similarly force orthogonality without undoing the gradient descent algorithm’s enforcement of smoothness.
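The hard normality step is a one-line column rescaling; a minimal sketch (function name is ours):

```python
import numpy as np

def normalize_columns(V, eps=1e-12):
    """Rescale every basis vector (column of V) to unit norm, as done after
    the gradient descent on the bases; eps guards against zero columns."""
    return V / np.maximum(np.linalg.norm(V, axis=0), eps)

V = np.array([[3.0, 0.0],
              [4.0, 1.0]])
Vn = normalize_columns(V)
```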
3.3 Learning Coefficient Vectors
If one differentiates the energy function with respect to a single observation’s coefficient vector c_i and sets this derivative equal to the zero vector, one obtains the estimate below for the coefficient vector.
ĉ_i = ( V(θ_i)ᵀ V(θ_i) )⁻¹ V(θ_i)ᵀ ( x_i − μ(θ_i) )    (20)

where V(θ_i) is the matrix whose columns are v_1(θ_i), …, v_k(θ_i).
This inverse is applied to a much smaller (k × k) matrix than the one inverted to find μ̂, so we use a linear system solution to obtain ĉ_i even with high-dimensional data.
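A sketch of that small linear solve (our simplification of Equation (20); the smoothness and orthogonality terms do not involve the coefficients, so only the data term appears):

```python
import numpy as np

def coeffs(V, mu, x):
    """Coefficient vector for one observation by solving the k-by-k normal
    equations, avoiding an explicit matrix inverse."""
    return np.linalg.solve(V.T @ V, V.T @ (x - mu))

V = np.eye(4)[:, :2]                  # orthonormal toy basis
c = coeffs(V, np.zeros(4), np.array([1.0, 2.0, 3.0, 4.0]))
```

With an orthonormal basis the solve reduces to plain inner products, so the coefficients are just the first two components here.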
3.4 Initialization
PPCA finds a local minimum of the energy function, so an appropriate initialization is important for finding a local minimum that performs similarly to the global minimum. We initialize PPCA using a procedure similar to IPCA, which runs PCA on groups made by binning the parameter θ. We calculate initial mean vectors using Equation (21), which is a weighted version of the mean calculation from IPCA.
μ_j = ( Σ_{i=1}^{n} w_{i,j} x_i ) / ( Σ_{i=1}^{n} w_{i,j} )    (21)
To find the initial basis vectors, we choose overlapping subsets of the observations and assign one subset to each bin endpoint j: all observations whose weight w_{i,j} is above a given threshold, such as 0.001. We run PCA on each of these subsets, except that we use the initial means from Equation (21) instead of recalculating means from the subsets. We then reorder these PCA basis vectors to promote smoothness, using a greedy algorithm. One can start with the first bin endpoint’s basis as the first reference basis and reorder the bases from the second bin endpoint to the last; alternatively, one can make the last bin endpoint’s basis the first reference basis and reorder from the second-to-last bin endpoint to the first.
For each pair of adjacent bin endpoints, one first calculates the absolute values of the dot products between every pair of basis vectors, one from each endpoint. The two vectors with the highest absolute dot product are paired, and the sign of the vector from the basis being reordered is flipped if the dot product is negative. This continues, each time using only vectors not yet paired, until all vectors in the reference basis have been paired. Any vectors remaining in the basis being reordered are assigned to the unused positions. The basis just reordered then becomes the reference basis, the next basis in the order is reordered, and so on until all bases except the original reference have been reordered. The coefficients can then be initialized from the initial mean and basis vectors using Equation (20).
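The greedy pairing for one pair of adjacent endpoints can be sketched as follows; this is our simplified version for equal-sized bases, with names of our choosing:

```python
import numpy as np

def reorder_basis(ref, V):
    """Repeatedly pair the (reference, candidate) vectors with the largest
    |dot product|, flipping the candidate's sign when the dot product is
    negative, until every reference vector is matched."""
    k = ref.shape[1]
    out = np.zeros_like(V)
    D = np.abs(ref.T @ V)               # |dot products| between all pairs
    for _ in range(k):
        i, j = np.unravel_index(np.argmax(D), D.shape)
        sign = 1.0 if ref[:, i] @ V[:, j] >= 0 else -1.0
        out[:, i] = sign * V[:, j]
        D[i, :] = -1.0                  # remove both vectors from consideration
        D[:, j] = -1.0
    return out

ref = np.eye(3)
V = np.array([[0., 0., -1.],
              [1., 0., 0.],
              [0., 1., 0.]])           # columns: e2, e3, -e1 (permuted, one flipped)
aligned = reorder_basis(ref, V)
```

After reordering, corresponding columns of adjacent bases point in consistent directions, so the smoothness penalty compares like with like.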
3.5 Putting it all together
The complete PPCA training algorithm is summarized in Algorithm 1.
3.6 Tuning of Parameters
The energy must be tracked, so its path can be used to choose the number of overall cycles, the learning rates for the means and for the bases, the numbers of iterations with those learning rates, and the non-orthonormality penalty coefficient γ. To choose appropriate smoothness penalty coefficients (α for the means, β for the bases), one should tune them using a validation set selected randomly from the training examples. Typically, the orthonormality penalty γ should be much larger than the smoothness penalties. However, the learning rate must decrease as γ increases, so an excessively large γ leads to unnecessary increases in runtime.
4 Modifications and Generalizations for Real Applications
This section details two generalizations that allow dimensions to vary with the PPCA parameter: first the dimension of the manifold, and second the dimension of the observations.
4.1 Generalization to Varied Manifold Dimension
For some applications, the manifold dimension can vary with the parameter θ. In this case, each bin endpoint j would have its own number of basis vectors k_j, and k would be set to the largest k_j. One would still allocate k basis vectors in v for each bin endpoint, but one would set v_{j,m} to a zero vector if m > k_j.
(22) 
If one has differently sized bases, the energy component E_s needs to follow Equation (22) instead of Equation (7), and the energy component E_o needs to follow Equation (23) instead of Equation (8).
(23) 
The only three changes in these two equations are to the upper bounds of the summations. In Equation (22), the third summation ends at min(k_j, k_{j+1}) rather than at k, so that PPCA only enforces similarity between corresponding basis vectors of adjacent bin endpoints when both exist. In Equation (23), the second and third summations end at k_j instead of k, because there are no vectors beyond vector k_j upon which to enforce orthonormality.
In Section 3.4, we detailed a procedure for reordering the initial basis vectors produced by PCA. If the number of basis vectors is either nondecreasing or nonincreasing with respect to the bin endpoint number, then this procedure still works, with one modification. If all bin endpoints use the same number of basis vectors, the user can reorder from the second bin endpoint to the last, or from the second-to-last to the first. However, if the first bin endpoint’s basis is smaller than the last bin endpoint’s, the reordering must proceed from the second basis to the last; if the last bin endpoint’s basis is smaller than the first’s, the reordering must proceed from the second-to-last basis to the first. If the number of basis vectors both increases and decreases when going from the first to the last bin endpoint, then the reordering must be done more manually.
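The zero-padding of smaller bases described in this section can be sketched as (our helper, not the paper's code):

```python
import numpy as np

def pad_basis(V, k_max):
    """Zero-pad an endpoint's basis to the common maximum number of basis
    vectors, so all endpoints allocate the same k_max columns."""
    out = np.zeros((V.shape[0], k_max))
    out[:, :V.shape[1]] = V
    return out

P = pad_basis(np.ones((3, 2)), 4)   # a 2-vector basis padded to 4 slots
```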
4.2 Generalization to Varying Manifold Ambient Space
Applications also exist in which the manifold’s ambient space varies with the parameter θ. Section 5.4 demonstrates an example of this sort, using face images in which certain pixels may be considered outside the face shape for a given face. Like the observations, the mean and basis vectors for a bin endpoint j may not use all d elements. As shown in Figure 7, we want similarity between the mean vectors at bin endpoints j and j + 1 to be enforced only for elements relevant to both endpoints. So, we create indicator variables that equal one if an element is included for bin endpoint j, and zero otherwise. From these, we construct matrices that adjust mean vectors μ_j or basis vectors v_{j,m} by setting unused elements to zero. We also construct matrices as shown in Equation (25), which similarly adjust the larger stacked vectors.
(24) 
(25) 
For each bin, one would then calculate the masked versions of the adjacent endpoints’ means and bases, which can be used to adjust the energy component E_s as shown in Equation (26). The only adjustments relative to Equation (7) are the two insertions of the masking matrices.
(26) 
The matrices from Equations (9), (13), and (16) must be modified as well. Each still has three diagonals that can contain nonzero elements, but these diagonals now incorporate the indicator variables and thus can contain zeros. The modified versions, shown in Equations (27) and (28), differ from Equations (12) and (19) only by including the masking matrices instead of identity matrices of the same size.
(27) 
(28) 
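The masking matrices of Section 4.2 are simple diagonal selection matrices; a minimal sketch (our illustration of the construction, not the paper's code):

```python
import numpy as np

def mask_matrix(included):
    """Diagonal matrix built from the indicator variables: multiplying by it
    zeroes out elements outside a bin endpoint's ambient space."""
    return np.diag(np.asarray(included, dtype=float))

D = mask_matrix([1, 1, 0, 1])         # third element excluded at this endpoint
mu = np.array([5.0, -2.0, 7.0, 1.0])
masked = D @ mu                       # excluded element becomes zero
```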
5 Experiments
We evaluated PPCA on four datasets. One was created from known parameters, so that we could evaluate how well PPCA recovers them. The other three applied PPCA to real data: shapes of lymph nodes of varied sizes, appearances of faces with varied blurriness, and appearances of faces with varied yaw rotation.
Parameter tuning. In these experiments the learning rates were chosen as the combination that obtained the smallest value of the energy function (4).
5.1 Simulation Experiments
First, we tested PPCA’s ability to recover a true model, using three-dimensional data created from known mean and basis vectors. These were based on smooth functions of a known parameter θ, defined on the range from 0 to 360. We used 45 observations. The data were based on two basis vectors and on coefficients drawn independently from a common distribution, and we added independent random noise to each element. The formulas for the true mean vectors and true basis vectors were as follows.
(29) 
(30) 
(31) 
We divided the acceptable parameter range into 14 equally sized bins. Because the data and model were small (three dimensions and two basis vectors), we could use the analytical solution for the mean vectors and only needed gradient descent for the bases. We tested various smoothness penalty coefficients, but kept the remaining settings fixed.
Figure 9 shows the sum of squared norms of the errors in PPCA’s and IPCA’s estimates of the mean vectors, compared to the true mean vectors, for various mean-smoothness penalties with the basis-smoothness penalty fixed. For the bases, we could not use a simple error measure, because a proper recovery could span the same linear subspace with different vectors and coefficients. We instead measured the norms of the normal vectors from the planes spanned by the recovered basis vectors to each of the true basis vectors. Figure 9 shows the sums (across the observations and the two true vectors) of these squared distances, for various basis-smoothness penalties with the mean-smoothness penalty fixed.
5.2 Lymph Node Segmentation
Lymph nodes are organs of the circulatory and immune systems that are important in the diagnosis and treatment of cancer and other medical conditions. For cancer patients, one may want a segmentation for targeting radiation or for volume estimates. Lymph node sizes generally increase with the onset of cancer and decrease as treatment succeeds, so volume estimates are used to assess the efficacy of treatment. Radiologists generally use 3D computed tomography (CT) to assess lymph nodes, which tend to have spherical, elliptical, and bean-like shapes; however, their shapes can become more complicated as their sizes increase. Barbu et al. (2012) demonstrated a model for representing lymph nodes by the lengths of radii, which extend in 162 predetermined directions from the lymph node’s center barbu2012automatic. We reduce this 162-dimensional representation of a lymph node’s shape even further, using PPCA and IPCA with 6 dimensions.
We had a dataset available in which an experienced radiologist had manually segmented 592 lymph nodes from various patients treated at the National Institutes of Health Clinical Center. We used only the 397 lymph nodes for which the 162-dimensional model was most appropriate: we eliminated 182 lymph nodes that were part of conglomerates of lymph nodes, and 16 for which the radial model’s Sørensen–Dice coefficient was less than 0.8. We then randomly selected 79 lymph nodes for the test set and assigned the remaining 318 to the training set.
The parameter of interest for this application was the lymph node’s diameter, because lymph nodes’ sizes are related to the types of shapes they can take. We used an estimated diameter based on the 162 modeled radii, and divided the lymph nodes into bins of 6–12, 12–18, 18–24, and 24–43 millimeters. The dataset had 55, 156, 77, and 30 training examples per bin and 16, 39, 17, and 7 test examples per bin, in increasing order of bin. We used 6 basis vectors for the IPCA bins and for the PPCA bin endpoints. We also evaluated PCA with 6 principal vectors and Sparse PCA (SPCA) zou2006sparse with 30 principal vectors and 50 nonzero entries in each vector, using the implementation from http://www2.imm.dtu.dk/projects/spasm/.
We evaluated all methods after fitting the models using different numbers of training examples per bin, from 2 to 30. These smaller training sets were chosen randomly from the full training set. For PCA and SPCA we trained on the same examples as the other methods, but without using the bin information. All smaller training sets were subsets of the larger training sets, to ensure a more valid comparison of the effect of the training set size. Each time, we calculated the root mean squared error (RMSE) of the approximation of each lymph node’s 162dimensional vector of radii, and then found the mean RMSE across the lymph nodes of the training or test set. For PPCA, we used gradient descent algorithms for both the mean and basis vectors. We used , = 30,000, , , , .
Figure 10 shows that IPCA overfits the data compared to PPCA, particularly for smaller training sets. For all tested sizes, PPCA had noticeably higher error than IPCA when projecting the training set, but noticeably lower error when projecting the test set. The SPCA test RMSEs vary considerably around the IPCA RMSE. Observe that SPCA actually underfits the data, since its training RMSEs are high and comparable to its test RMSEs. PCA approximates these data very well, which suggests that the manifold is close to linear in this case. Some key results are also summarized in Table 1.

Training examples          Train RMSE                       Test RMSE
per bin           PCA    IPCA   PPCA   SPCA     PCA    IPCA   PPCA   SPCA
2  0.258  0.000  0.101  2.624  2.048  2.980  2.333  2.741 
10  1.525  0.644  1.378  2.200  1.753  2.005  1.777  2.023 
20  1.630  1.262  1.586  2.079  1.681  1.855  1.691  1.921 
30  1.627  1.386  1.597  1.850  1.652  1.774  1.655  1.754 
5.3 Facial Images with Blur
Modeling images of human faces in photos or video frames is useful for creating novel images that satisfy the constraints of realistic faces, such as for animation. It can also modify a known face to show poses or expressions not present in available images. Face models can be used as generative face detectors, too, with applications such as autofocusing a camera or detecting intruders on a security camera. Face modeling can also aid face recognition, by aligning the images to be recognized or by providing the lowerdimensional representation that can be matched against a dictionary.
Variations in the conditions (such as illumination) of images present challenges for face models. One such variation is the blurriness of photographs. Digital cameras, particularly those in many mobile phones, are used frequently to produce photos that may not be appropriately sharp. Even professional photographers using highgrade cameras can produce images with blurred faces in the background. The blurriness of a facial image changes one’s expectations for the face’s appearance, as well as the types of variation in the appearance, so we modeled facial images using a parameter based on blurriness.
To quantify blurriness, we assumed that a Gaussian blur filter could approximate the transformation from an unobserved, unblurred image to the observed, blurred image. This Gaussian blur is a convolution using the kernel from Equation (32), where x is the horizontal distance and y is the vertical distance between the two pixels involved. We used σ from Equation (32) (with σ = 0 for an unblurred image) as the PPCA parameter, because higher values of σ create blurrier images.
(32) 
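As a concrete illustration, the kernel in Equation (32) can be computed and normalized numerically. This is a minimal sketch: the kernel radius below is chosen for illustration only, and the discrete kernel is renormalized to compensate for truncation.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Discrete Gaussian blur kernel from Equation (32), normalized to sum to 1."""
    ax = np.arange(-radius, radius + 1)
    x, y = np.meshgrid(ax, ax)
    k = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return k / k.sum()  # renormalize to compensate for truncating the tails

kernel = gaussian_kernel(sigma=1.5, radius=4)
```

Convolving an image with this kernel produces the blurred observation; larger σ spreads the kernel mass and blurs more strongly.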
We treated the facial images from the CBCL Face Database #1 mitcbcl as unblurred, and added Gaussian blur using the kernel from Equation (32) while varying σ. The database had 472 faces chosen as the test set, and the remaining 2,429 as the training set. We used three bins for σ: 0–1, 1–2, and 2–3. The training and test images were created from the original faces such that each original face produced one blurred image for each of the three bins. The parameter σ for each observation was selected randomly from a distribution over 0–1, 1–2, or 2–3, depending on which bin's image was being produced.
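The per-bin sampling of σ can be sketched as follows. This is a hypothetical sketch that assumes a uniform draw within each bin; the bin boundaries match the text, but the exact distributions used are an assumption.

```python
import numpy as np

# Bins for the blur parameter sigma; uniform sampling within each bin
# is an assumption made for illustration.
BINS = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)]
rng = np.random.default_rng(0)

def sample_sigma_per_bin():
    """Draw one blur level sigma from each bin, so each original face
    yields one blurred image per bin."""
    return [rng.uniform(low, high) for (low, high) in BINS]

sigmas = sample_sigma_per_bin()
```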
We used 10 basis vectors for each IPCA bin or PPCA bin endpoint, and for PCA. For SPCA we used 30 principal vectors with 50% nonzero entries each. For PPCA, we used gradient descent for both the mean and basis vectors. We used , , , , , , , and . The training set varied from 2 to 200 examples per bin. For all sizes of training set, all bins had images made from the same faces but with different added blur according to the sampled σ values. The smaller training sets were always subsets of the larger training sets, to allow better examination of the effect of training set size.
Figure 11 shows the RMSE of the blurred images' projections, averaged across the images of the training or test set. Some of the errors are also summarized in Table 2.
Table 2.

Training examples        Train RMSE                      Test RMSE
per bin            PCA    IPCA   PPCA   SPCA      PCA    IPCA   PPCA   SPCA
2                  0.000  0.000  0.007  0.031     0.185  0.211  0.193  0.192
10                 0.021  0.000  0.015  0.062     0.073  0.073  0.070  0.091
20                 0.035  0.027  0.030  0.063     0.056  0.057  0.056  0.075
50                 0.041  0.037  0.038  0.057     0.050  0.050  0.049  0.070
100                0.043  0.039  0.040  0.059     0.047  0.047  0.047  0.066
200                0.044  0.041  0.042  0.064     0.045  0.045  0.044  0.068
PPCA had lower approximation error on the test set than IPCA for every training size from 2 to 200 examples per bin, with a more noticeable advantage when both methods used eight or fewer training examples per bin. SPCA had large training and testing errors, a sign that the sparse model cannot fit this kind of data well. PCA did a very good job, comparable to IPCA and PPCA.
Face Recognition. Even though PPCA is designed for manifold approximation rather than classification, we would like to see how these methods compare on face recognition. We can evaluate face recognition performance on this data because we have three versions of each face, with different blur values.
We experimented with two data sizes, 100 and 200 faces, with three blurred versions of each face. For each face, we chose one of the three versions at random for testing and used the other two for training.
We also evaluated two other manifold methods: Modified PCA (MPCA) luo2003modified, a modification of PCA for recognition that normalizes the PC coefficients by dividing them by the square roots of their corresponding eigenvalues, and Locality Preserving Projections (LPP) niyogi2004locality, which finds a linear projection of the data that preserves most of its local information.
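The MPCA normalization just described can be sketched in a few lines. This is a minimal numpy sketch, not the reference implementation: PCA coefficients are divided by the square roots of their eigenvalues, which gives every retained component unit variance.

```python
import numpy as np

def mpca_coefficients(X, n_components):
    """Project rows of X onto the top principal components, then normalize
    each coefficient by the square root of its eigenvalue (MPCA-style)."""
    Xc = X - X.mean(axis=0)
    # Eigendecomposition of the sample covariance; eigh returns ascending order.
    evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(evals)[::-1][:n_components]
    evals, evecs = evals[order], evecs[:, order]
    coeffs = Xc @ evecs
    return coeffs / np.sqrt(evals)  # whitened coefficients

X = np.random.default_rng(1).normal(size=(40, 6))
C = mpca_coefficients(X, 3)
```

After this normalization each component contributes equally to correlation-based matching, rather than being dominated by the highest-variance directions.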
Each method was used to learn a low-dimensional representation, and the training observations were projected into this low-dimensional space. Each test face was likewise projected into the low-dimensional space, and the training observation of maximal correlation determined the recognition result. The dimension of the low-dimensional space was chosen for each method to obtain the smallest average test error over 100 random splits of the training/test data.
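The matching rule above, nearest training projection by maximal correlation, can be sketched as:

```python
import numpy as np

def recognize(test_coeffs, train_coeffs, train_labels):
    """Label each test projection with the label of the training projection
    of maximal Pearson correlation."""
    def standardize(A):
        A = A - A.mean(axis=1, keepdims=True)
        return A / np.linalg.norm(A, axis=1, keepdims=True)
    # Rows are standardized, so the dot product is the Pearson correlation.
    corr = standardize(test_coeffs) @ standardize(train_coeffs).T
    return [train_labels[i] for i in np.argmax(corr, axis=1)]
```

With each row centered and normalized, a single matrix product yields all pairwise correlations at once.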
Table 3.

Faces   PCA           MPCA luo2003modified   SPCA zou2006sparse   IPCA           PPCA           LPP niyogi2004locality
100     4.94 (1.50)   3.33 (1.30)            5.25 (1.62)          72.78 (8.55)   12.33 (2.45)   62.59 (4.76)
200     6.50 (1.19)   4.98 (1.35)            6.33 (1.17)          72.69 (10.58)  14.67 (3.36)   77.07 (2.72)
Table 3 shows the recognition errors (with standard deviations in parentheses) for the datasets with 100 and 200 faces. The lowest recognition errors are obtained by MPCA, followed by PCA and SPCA. PPCA comes next, doing a much better job than IPCA and LPP.
5.4 Facial Images with Rotation
Section 5.3 addressed the challenges of modeling facial images with different levels of blurriness. A separate challenge in face modeling is out-of-plane rotation, which changes the expected appearance of facial features and produces predictable changes in the occlusion of important facial features. Yaw rotation is highly prevalent in photos, particularly "in the wild" photos, which are taken in uncontrolled settings, often by ordinary users. One could model pitch or roll rotation with PPCA, but we focus on yaw rotation because it has the largest variation in the available face images.
5.4.1 Background
Linear models for facial appearance exist, such as active appearance models (AAMs) cootes2001active and 3D morphable models (3DMMs) blanz1999morphable. AAMs typically incorporate in-plane rotation and suffer from an inability to model out-of-plane rotation; 3DMMs exist in 3D and can use 3D rotation. 3DMMs can be fit to 2D test images, but they are trained on 3D facial scans performed in a laboratory setting. Potential users typically lack the necessary equipment for these scans, and even with the equipment, one has very limited training data relative to a dataset of 2D images. Furthermore, applications for in-the-wild images grow as these images become more important for social media and other Internet uses, and the laboratory setting of the scan data makes it dissimilar to in-the-wild images. Zhu and Ramanan (2012) showed that training on in-the-wild images greatly increases face detection performance on in-the-wild test data zhu2012face, and it seems logical that similarity between training and test data would be desirable for face modeling as well.
Gross, Matthews, and Baker (2004) modify AAMs to address occlusion gross2004constructing. Their approach does not distinguish occlusion by an object (such as a hand in front of a face) from self-occlusion caused by out-of-plane rotation, so it does not take advantage of the more predictable nature of rotation-based self-occlusion. Xiao et al. (2004) also modify AAMs, creating a hybrid of a 2D and a 3D model by adding extra parameters and constraints to a 2D model xiao2004real. The hybrid keeps the training advantages of a 2D model along with some advantages of a 3D model, but compared to PPCA, it addresses out-of-plane rotation less directly and relies more on 3D elements not directly observable in the 2D data.
AAMs and 3DMMs each incorporate two linear models: one for the shape mesh, and one for the appearance after removing the influence of shape variation. AAMs typically use frontal images only and translate the appearance from the original image’s shape mesh to the mean shape mesh by triangular warping. Yaw rotation creates predictable changes to both the shape and the appearance. PPCA could model both, but we chose to focus on the appearance component, and modeled the shape using a rigid, 3D shape model built on other data.
5.4.2 Data
We used 272 human facial images from the Annotated Facial Landmarks in the Wild (AFLW) database koestingeraflw, which includes annotations of the locations and occlusion status of 21 key points. We chose the subset such that the faces were all in color and appeared to belong to 272 different people. We used yaw rotation in radians as the PPCA parameter, restricting it to a fixed range that we divided into 16 equally sized bins. Our subset of AFLW had 17 images in each bin, and three images per bin were selected randomly for the test set. The remaining 14 images per bin were eligible for training, but we varied the training set size from 2 to 14 images per bin. The smaller training sets were always subsets of the larger training sets. Values of the yaw parameter came from finding the roll, pitch, and yaw angles that best rotated the rigid shape model to fit the horizontal and vertical coordinates of the unoccluded key points. Several yaw angles and key point locations were corrected manually.
AAMs commonly use a triangulation of the face to translate a shape mesh of key points into a shape that can cover pixels. We also used a triangulation, which we constructed manually so that its triangles are less likely to have one of their three vertices occluded across the range of yaw angles considered. This generally implied triangles that ran more vertically than those from automatic triangulation methods. PPCA promotes the smoothness of adjacent bin endpoints, so the triangles needed to use pixels that corresponded to equivalent areas in the other bin endpoints' shapes. We calculated each triangle's area for every bin endpoint shape in our 3D model and used the largest-area version of the triangle for PPCA. We warped each triangle from the original images to these model triangles, which were considered occluded or not based on the direction of the normal vector to that triangle in the rigid shape model rotated to the appropriate yaw angle. AFLW's image-specific occlusion annotations were not used after estimating the yaw angle.
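The occlusion test based on the rotated normal vector can be sketched as follows. The camera direction and sign convention here are assumptions: a triangle is deemed self-occluded when its normal, after yaw rotation, points away from a camera assumed to look down the z-axis.

```python
import numpy as np

def yaw_rotation(theta):
    """Rotation about the vertical (y) axis by yaw angle theta, in radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,   0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s,  0.0, c]])

def triangle_occluded(v0, v1, v2, theta):
    """True if the triangle's normal, after yaw rotation, faces away from
    the camera (sign convention is an assumption made for illustration)."""
    normal = np.cross(v1 - v0, v2 - v0)
    return (yaw_rotation(theta) @ normal)[2] < 0.0
```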
We used 10 basis vectors for each IPCA bin or PPCA bin endpoint, and for PCA. For SPCA we used 30 principal vectors with 50% nonzero entries each. We also investigated the influence of whitening. The intensities before whitening were represented as double-precision floating-point numbers from zero (black) to one (white). If whitening was used, each image received six additional parameters in its representation, which were not part of PPCA (or IPCA) itself: after warping an image, we stored the original mean intensity and standard deviation for red, green, and blue. We translated and rescaled the intensities such that each color had a mean of 0.5 and a standard deviation of 0.031, the latter chosen as the largest value that kept all whitened intensities within the [0, 1] interval. PPCA and IPCA modeled the whitened versions, and after projecting the whitened image, we reversed the whitening transformation using the image-specific means and standard deviations by color.
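The per-channel whitening and its inversion can be sketched as follows, a minimal numpy version of the transformation described above:

```python
import numpy as np

TARGET_MEAN, TARGET_STD = 0.5, 0.031

def whiten(channel):
    """Shift and rescale one color channel to mean 0.5 and std 0.031,
    returning the whitened channel plus the stats needed to invert it."""
    mu, sd = channel.mean(), channel.std()
    whitened = (channel - mu) / sd * TARGET_STD + TARGET_MEAN
    return whitened, (mu, sd)

def unwhiten(whitened, stats):
    """Reverse the whitening using the stored image-specific statistics."""
    mu, sd = stats
    return (whitened - TARGET_MEAN) / TARGET_STD * sd + mu

x = np.random.default_rng(3).uniform(0.2, 0.8, size=(16, 16))
w, stats = whiten(x)
```

The two stored statistics per channel (six per RGB image) are exactly the extra parameters mentioned in the text; the model operates on `w` and the original intensities are recovered with `unwhiten`.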
5.4.3 Model Fitting and Results
We trained models with training set sizes from 2 to 14 examples per bin. PPCA needed gradient descent to optimize both the mean and basis vectors. We used , , , , , , , and typically . The models with two to four training examples per bin and 10 basis vectors required smaller values to avoid divergence. The occlusion of each triangle was considered known, because it was treated as a function of a known yaw angle. We therefore set the image-specific mean vector and basis vectors in IPCA and PPCA projections to have zeros for any out-of-shape pixels before finding the images' coefficients. After projecting the image, and reversing whitening if it was used, we calculated the RMSE for each image in the training and test sets.
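Zeroing the out-of-shape rows of the mean and basis vectors before computing coefficients, as described above, amounts to a least-squares projection restricted to the visible pixels. A minimal sketch:

```python
import numpy as np

def masked_coefficients(image, mean, basis, visible):
    """Least-squares basis coefficients using only visible pixels.

    image:   (d,) observed pixel vector
    mean:    (d,) model mean vector
    basis:   (d, k) basis vectors as columns
    visible: (d,) boolean mask of in-shape pixels
    """
    B = np.where(visible[:, None], basis, 0.0)   # zero out-of-shape rows
    r = np.where(visible, image - mean, 0.0)     # zero the residual there too
    coeffs, *_ = np.linalg.lstsq(B, r, rcond=None)
    return coeffs
```

Because the occluded rows are zeroed in both the basis and the residual, they contribute nothing to the least-squares fit, so the coefficients depend only on the visible pixels.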
Figure 13 shows the means of these RMSEs, which are averaged across the images of the training or test set. Some of these results are also summarized in Table 4.
Table 4.

Data        Whitening        Train RMSE                          Test RMSE
                       PCA     IPCA    PPCA    SPCA      PCA     IPCA    PPCA    SPCA
2 per bin   no         0.0831  0.0079  0.0378  0.1258    0.1298  0.2097  0.1682  0.1673
8 per bin   no         0.1076  0.0086  0.0519  0.1018    0.1149  0.1351  0.1165  0.1208
14 per bin  no         0.1108  0.0329  0.0681  0.1025    0.1132  0.1166  0.1090  0.1143
2 per bin   yes        0.0911  0.0118  0.0421  0.2892    0.1050  0.1563  0.1323  0.2407
8 per bin   yes        0.1104  0.0113  0.0528  0.2789    0.0901  0.1152  0.1030  0.2220
14 per bin  yes        0.1133  0.0349  0.0682  0.2734    0.0885  0.1050  0.0976  0.2132
We see that IPCA consistently overfits the data relative to PPCA: PPCA has higher error on the training set but lower error on the test set than IPCA does. SPCA has a hard time fitting the whitened data, but on the unwhitened data it does better than IPCA and slightly worse than PPCA. PCA does very well on the whitened data but is outperformed by PPCA on the original data for 11–14 examples per bin.
Figure 14 shows the IPCA and PPCA mean vectors, warped to the bin midpoint (IPCA) or bin endpoint (PPCA) shapes. These models used four vectors per bin (or bin endpoint), no whitening, and 12 training examples per bin. The IPCA means appear to treat characteristics of the training images as characteristics of the bin to a higher degree than the PPCA means do. One can see more noticeable changes from bin to bin for IPCA with respect to eye color and shape, lip color, illumination, and skin complexion. The smoothness of the mean shape can be improved further for PPCA by increasing the penalty to 0.1, as shown in Figure 15. We did not test additional training set sizes with the larger penalty, but for this example the mean RMSE for the test set was effectively identical under both settings (0.1220 in each case). Both had lower mean projection RMSEs than IPCA (0.1313) did.
6 Conclusion and Future Direction
We have presented a novel method, parameterized principal component analysis (PPCA), for modeling multidimensional data on linear manifolds that vary smoothly according to a contextually important parameter. We compared PPCA to independent principal component analysis (IPCA), which uses separate PCA models for groups formed by values of the parameter. We showed that PPCA outperformed IPCA at recovering known true mean vectors and true basis vectors based on smooth functions of the parameter, at producing lower approximation error on three datasets, and at obtaining smaller face recognition errors on one dataset. These datasets contained lymph node shapes that varied by diameter, blurred human facial images that varied by the standard deviation of the applied Gaussian blur, and human facial images that varied by the angle of yaw rotation.
We have explored three types of applications of PPCA, with a different type of parameter in each. Many other applications exist, however, and future work could extend PPCA to parameters beyond the three we tested. We also performed some investigation of different modeling choices when modeling faces under yaw rotation, but further tests of how the number of basis vectors and different adjustments to the data affect the utility of PPCA would be beneficial.
References
 [1] Adrian Barbu, Michael Suehling, Xun Xu, David Liu, S Kevin Zhou, and Dorin Comaniciu. Automatic detection and segmentation of lymph nodes from CT data. IEEE Trans. on Medical Imaging, 31(2):240–250, 2012.
 [2] Volker Blanz and Thomas Vetter. A morphable model for the synthesis of 3D faces. In Annual Conference on Computer Graphics and Interactive Techniques, pages 187–194, 1999.
 [3] Songcan Chen and Tingkai Sun. Class-information-incorporated principal component analysis. Neurocomputing, 69(1):216–223, 2005.
 [4] Timothy F Cootes, Gareth J Edwards, and Christopher J Taylor. Active appearance models. IEEE Trans. on Pattern Analysis and Machine Intelligence, 23(6):681–685, 2001.

 [5] Fernando De la Torre and Takeo Kanade. Multimodal oriented discriminant analysis. In International Conference on Machine Learning (ICML), pages 177–184, 2005.
 [6] Andrew B Goldberg, Xiaojin Zhu, Aarti Singh, Zhiting Xu, and Robert Nowak. Multi-manifold semi-supervised learning. In International Conference on Artificial Intelligence and Statistics (AISTATS), pages 169–176, 2009.
 [7] Ralph Gross, Iain Matthews, and Simon Baker. Constructing and fitting active appearance models with occlusion. In CVPR Workshop, pages 72–72, 2004.
 [8] Xiaofei He and Partha Niyogi. Locality preserving projections. In Neural Information Processing Systems (NIPS), volume 16, page 153, 2004.
 [9] Ian T Jolliffe, Nickolay T Trendafilov, and Mudassir Uddin. A modified principal component technique based on the lasso. Journal of Computational and Graphical Statistics, 12(3):531–547, 2003.
 [10] Martin Koestinger, Paul Wohlhart, Peter M. Roth, and Horst Bischof. Annotated facial landmarks in the wild: A large-scale, real-world database for facial landmark localization, 2011.

 [11] Zhihui Lai, Yong Xu, Qingcai Chen, Jian Yang, and David Zhang. Multilinear sparse principal component analysis. IEEE Trans. on Neural Networks and Learning Systems, 25(10):1942–1950, 2014.
 [12] Haiping Lu, Konstantinos N Plataniotis, and Anastasios N Venetsanopoulos. Multilinear principal component analysis of tensor objects for recognition. In International Conference on Pattern Recognition (ICPR), volume 2, pages 776–779, 2006.
 [13] Lin Luo, MNS Swamy, and Eugene I Plotkin. A modified PCA algorithm for face recognition. In Canadian Conference on Electrical and Computer Engineering (CCECE), volume 1, pages 57–60. IEEE, 2003.
 [14] Aleix M Martínez and Avinash C Kak. PCA versus LDA. IEEE Trans. on Pattern Analysis and Machine Intelligence, 23(2):228–233, 2001.
 [15] MIT Center for Biological and Computational Learning. CBCL Face Database #1, 2000. Accessed: 2016-04-07.

 [16] Nikolaos Pitelis, Chris Russell, and Lourdes Agapito. Learning a manifold as an atlas. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1642–1649, 2013.
 [17] René Vidal, Yi Ma, and Shankar Sastry. Generalized principal component analysis (GPCA). In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, pages I–621, 2003.
 [18] René Vidal, Yi Ma, and Shankar Sastry. Generalized principal component analysis (GPCA). IEEE Trans. on Pattern Analysis and Machine Intelligence, 27(12):1945–1959, 2005.
 [19] Elif Vural and Pascal Frossard. Learning smooth pattern transformation manifolds. IEEE Trans. on Image Processing, 22(4):1311–1325, 2013.
 [20] Jing Xiao, Simon Baker, Iain Matthews, and Takeo Kanade. Real-time combined 2D+3D active appearance models. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 535–542, 2004.
 [21] Shuangyan Yi, Zhihui Lai, Zhenyu He, Yiu-ming Cheung, and Yang Liu. Joint sparse principal component analysis. Pattern Recognition, 61:524–536, 2017.
 [22] Daoqiang Zhang, Zhi-Hua Zhou, and Songcan Chen. Semi-supervised dimensionality reduction. In SDM, pages 629–634. SIAM, 2007.
 [23] Xiangxin Zhu and Deva Ramanan. Face detection, pose estimation, and landmark localization in the wild. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2879–2886, 2012.
 [24] Hui Zou, Trevor Hastie, and Robert Tibshirani. Sparse principal component analysis. Journal of Computational and Graphical Statistics, 15(2):265–286, 2006.