1 Introduction
A growing interest within the Gaussian process community in machine learning has been the formulation of suitable covariance functions for describing multiple output processes as a joint Gaussian process. Examples include the semiparametric latent factor model Teh et al. (2005), the multi-task Gaussian process Bonilla et al. (2008), and the convolved multi-output Gaussian process Boyle and Frean (2005); Álvarez and Lawrence (2009). Each of these methods models the covariance function with either a version of the linear model of coregionalization (LMC) Goovaerts (1997) or a version of process convolutions (PC) Higdon (2002). Different alternatives for building covariance functions for multiple-output processes have been reviewed by Álvarez et al. (2012).

Multi-output GPs have been used for supervised learning problems, specifically for multi-output regression Bonilla et al. (2008) and multi-class classification Skolidis and Sanguinetti (2011); Chai (2012). The interest has mainly been in exploiting the correlations between outputs to improve prediction performance, compared to modeling each output independently. In particular, a Gaussian process is used as a prior over vector-valued functions mapping from the input space to the output space. Components of the input may be continuous or discrete.

In this report, we advocate the use of multi-output GPs as generative models for vector-valued random fields, that is, we use multi-output GPs to model the vector-valued observations directly. Afterwards, we use this probabilistic model to tackle a classification problem. An important application area where this setup is of interest is multivariate time series classification. Here the vector-valued function is evaluated at discrete values of the input, and it is typically modeled using an unsupervised learning method like a hidden Markov model (HMM) or a linear dynamical system (LDS) Bishop (2007). Notice that by using a multi-output GP to model the data, we allow the vector-valued function to be continuous on the input space. Furthermore, we are able to model multivariate random functions whose input dimensionality is greater than one. It is worth mentioning that the model we propose here is different from classical GP classification, as explained for example in Rasmussen and Williams (2006). In standard GP classification, the feature space is not assumed to follow a particular structure, whereas in our model the assumption is that the feature space may be structured, with potentially correlated and spatially varying features.

As a generative model, the multi-output Gaussian process can be used for classification: we fit a multi-output GP for every class independently, and to classify a new vector-valued random field, we compute its likelihood under each class and make a decision using Bayes rule. This generative approach works well when the true distribution of the multivariate signal is known, but this is rarely the case. Notice that the optimization goal in the generative model is not a function that measures classification performance, but a likelihood function that is optimized for each class separately.
An alternative is to use discriminative training Jebara (2004) to estimate the parameters of the multi-output GP. A discriminative approach optimizes a function that measures classification performance directly. Thus, when the multi-output GP is not an appropriate generative distribution, the results of the discriminative training procedure are usually better. There are different criteria for discriminative training, including maximum mutual information (MMI) Gopalakrishnan et al. (1991) and minimum classification error (MCE) Juang et al. (1997).

In this report we present a discriminative approach, based on minimum classification error (MCE), to estimate the hyperparameters of a multi-output Gaussian process (MOGP). In section 2 we review how to fit the multi-output GP model using the generative approach, and then we introduce our method to train the same MOGP model with a discriminative approach based on MCE. In section 3 we show experimental results with both the generative and discriminative approaches. Finally, we present conclusions in section 4.

2 Generative and discriminative training of multi-output GPs
In our classification scenario, we have a set of classes. We want to build a classifier that maps a feature matrix to one of the classes. Columns of the input matrix are input vectors, and columns of the feature matrix are feature vectors, indexed by elements of an index set. Rows of the feature matrix correspond to the different entries of the vector-valued function evaluated at all the inputs. For example, in a multivariate time series classification problem, the input is a time point, and the feature vector is the value of the multivariate time series at that time point. Rows of the feature matrix are the different time series.
The main idea that we introduce in this report is that we model the class-conditional density of the features using a multi-output Gaussian process, conditioned on the class label and on the hyperparameters of the multi-output GP for that class. By doing so, we allow correlations across the columns of the feature matrix, that is, between feature vectors at different inputs, and we also allow correlations among the variables within each feature vector. We then estimate the hyperparameters of all classes, either in a generative classification scheme, or in a discriminative classification scheme using minimum classification error. Notice that an HMM would model the feature vectors only at discrete values of the input. Also notice that in standard GP classification, we would model the conditional distribution of the class given the features, but with no particular correlation assumptions over the entries of the feature vector.
Available data for each class are a set of feature matrices, one per instance, with an index running over the instances of the class. To reduce clutter in the notation, we assume the same number of outputs and the same set of input vectors for all instances. We stack the columns of each feature matrix into a single vector, so that the rows of the matrix appear as contiguous blocks of the stacked vector. We use shorthand notation to refer collectively to all the feature matrices (or, equivalently, all the stacked vectors) of a class, and to the corresponding sets of input vectors for each class and instance.
2.1 Multiple-Output Gaussian Processes
According to Rasmussen and Williams (2006), a Gaussian process is a collection of random variables, any finite number of which have a joint Gaussian distribution. We can use a Gaussian process to model a distribution over functions. Likewise, we can use a multi-output Gaussian process to model a distribution over vector-valued functions. The vector-valued function is modeled with a GP whose kernel is a kernel for vector-valued functions Álvarez et al. (2012), with one entry for every pair of inputs and every pair of outputs. A range of alternatives for such kernels can be summarized using the general expression
$$\left(\mathbf{K}(\mathbf{x}, \mathbf{x}')\right)_{d,d'} = \sum_{q=1}^{Q} \sum_{i=1}^{R_q} \int G^{i}_{d,q}(\mathbf{x} - \mathbf{z}) \int G^{i}_{d',q}(\mathbf{x}' - \mathbf{z}')\, k_q(\mathbf{z}, \mathbf{z}')\, \mathrm{d}\mathbf{z}'\, \mathrm{d}\mathbf{z}, \qquad (1)$$
where $Q$ is the number of latent functions used for constructing the kernel; $R_q$ is the number of latent functions (for a particular $q$) sharing the same covariance; $G^{i}_{d,q}(\cdot)$ is known as the smoothing kernel for output $d$; and $k_q(\cdot,\cdot)$ is the kernel of each latent function. For details, the reader is referred to Álvarez et al. (2012). In the linear model of coregionalization, $G^{i}_{d,q}(\mathbf{x} - \mathbf{z}) = a^{i}_{d,q}\, \delta(\mathbf{x} - \mathbf{z})$, where the $a^{i}_{d,q}$ are constants, and $\delta(\cdot)$ is the Dirac delta function.
Throughout the experimental section, we use a kernel of the form (1). We also assume that both the smoothing kernels $G^{i}_{d,q}$ and the latent kernels $k_q$ are given by Gaussian kernels of the form

$$k(\mathbf{x}, \mathbf{x}') \propto \exp\!\left(-\tfrac{1}{2}\, (\mathbf{x} - \mathbf{x}')^{\top} \mathbf{P}\, (\mathbf{x} - \mathbf{x}')\right),$$

where $\mathbf{P}$ is the precision matrix.
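As an illustrative sketch (not part of the original formulation), the LMC instance of the kernel in (1) can be assembled numerically as a sum of Kronecker products between rank-one coregionalization matrices and Gaussian kernels over the inputs. All function and variable names below are hypothetical; the sketch assumes $Q$ latent kernels, each with $R_q = 1$ and a diagonal precision matrix.

```python
import numpy as np

def gaussian_kernel(X1, X2, precision):
    """Gaussian (RBF) kernel with a diagonal precision; a common choice for
    both the smoothing kernels and the latent-function kernels."""
    # Pairwise differences, then weighted squared distances.
    d = X1[:, None, :] - X2[None, :, :]
    return np.exp(-0.5 * np.einsum('ijk,k,ijk->ij', d, precision, d))

def lmc_kernel(X1, X2, A, precisions):
    """Linear model of coregionalization with Q latent kernels.

    A          : (D, Q) mixing coefficients a_{d,q} (hypothetical names).
    precisions : list of Q per-dimension precision vectors.
    Returns the (D*N1, D*N2) multi-output kernel matrix, ordered
    output-by-output (rows of the feature matrix stacked)."""
    D, Q = A.shape
    K = np.zeros((D * X1.shape[0], D * X2.shape[0]))
    for q in range(Q):
        Kq = gaussian_kernel(X1, X2, precisions[q])  # (N1, N2) latent kernel
        B = np.outer(A[:, q], A[:, q])               # rank-one coregionalization
        K += np.kron(B, Kq)
    return K
```

Each Kronecker term is positive semi-definite, so the sum is a valid multi-output covariance.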
Given a set of input vectors, the columns of the feature matrix correspond to the vector-valued function evaluated at those inputs, and its rows correspond to the individual outputs. In a multi-output GP, we model the stacked vector of all outputs with a zero-mean multivariate Gaussian, whose covariance matrix has entries computed with the kernel of equation (1) for every pair of inputs and every pair of outputs.
2.2 Generative Training
In the generative model, we train a multi-output GP for each class separately. In our case, training consists of estimating the kernel hyperparameters of the multi-output GP for each class. Let us assume that the training set for class $c$ consists of several multi-output processes, drawn independently from the Gaussian process generative model

$$\mathbf{y}_{c,n} \sim \mathcal{N}\!\left(\mathbf{0}, \mathbf{K}_c\right), \qquad n = 1, \dots, N_c,$$

where $\mathbf{K}_c$ is the kernel matrix for class $c$, as explained in section 2.1.

In order to train the generative model, we maximize the log marginal likelihood with respect to the hyperparameter vector $\boldsymbol{\theta}_c$. As the different instances of the multi-output process are generated independently given the kernel hyperparameters, we can write the log marginal likelihood for class $c$ as

$$\log p(\mathbf{Y}_c \mid \boldsymbol{\theta}_c) = \sum_{n=1}^{N_c} \log \mathcal{N}\!\left(\mathbf{y}_{c,n} \mid \mathbf{0}, \mathbf{K}_c\right). \qquad (2)$$

We use a gradient-descent procedure to perform the optimization.
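For a single instance, the summand in (2) is a zero-mean Gaussian log density, which is conveniently evaluated with a Cholesky factorization. The sketch below (hypothetical names, with a small jitter term added for numerical stability) evaluates this quantity and uses it for the generative decision rule with equal class priors.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def log_marginal_likelihood(y, K, jitter=1e-6):
    """log N(y | 0, K + jitter*I): one summand of the class objective in Eq. (2)."""
    n = y.size
    c, low = cho_factor(K + jitter * np.eye(n), lower=True)
    alpha = cho_solve((c, low), y)              # (K + jitter I)^{-1} y
    logdet = 2.0 * np.sum(np.log(np.diag(c)))   # log |K + jitter I|
    return -0.5 * (y @ alpha + logdet + n * np.log(2.0 * np.pi))

def classify(y_new, class_kernels):
    """Generative decision rule with equal class priors: pick the class whose
    fitted multi-output GP assigns the highest marginal likelihood."""
    scores = [log_marginal_likelihood(y_new, K) for K in class_kernels]
    return int(np.argmax(scores))
```

The Cholesky route costs one factorization per evaluation and avoids forming an explicit matrix inverse.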
To predict the class label for a new feature matrix, or equivalently a new stacked vector, and assuming equal prior probabilities for each class, we compute the marginal likelihood under every class. We predict as the correct class the one for which the marginal likelihood is largest.

2.3 Discriminative Training
In discriminative training, we search for the hyperparameters that minimize some classification error measure for all classes simultaneously. In this report, we choose to minimize the minimum classification error criterion as presented in Juang et al. (1997). A soft version of the {0,1} loss function for classification can be written as
$$\ell_c(\mathbf{y}) = \frac{1}{1 + \exp\!\left(-\gamma\, d_c(\mathbf{y}) + \theta\right)}, \qquad (3)$$

where $\gamma$ and $\theta$ are user-given parameters, and $d_c(\mathbf{y})$ is the class misclassification measure, given by

$$d_c(\mathbf{y}) = -g_c(\mathbf{y}) + \frac{1}{\eta} \log\!\left[\frac{1}{C-1} \sum_{c' \neq c} \exp\!\left(\eta\, g_{c'}(\mathbf{y})\right)\right], \qquad (4)$$

where $g_c(\mathbf{y}) = \alpha \log p(\mathbf{y} \mid \boldsymbol{\theta}_c) + \beta$. Parameters $\eta$, $\alpha$, and $\beta$ are again defined by the user. Expression $g_c(\mathbf{y})$ is a scaled and translated version of the log marginal likelihood for the multi-output GP of class $c$. We scale the log marginal likelihood to keep the value of $g_c(\mathbf{y})$ in a small numerical range, so that computing the exponentials does not overflow the capacity of a double-precision floating point number. Parameters $\gamma$ and $\theta$ in equation (3) play the same role as $\alpha$ and $\beta$, but the numerical problems are less severe there, and setting $\gamma = 1$ and $\theta = 0$ usually works well.
The second term in equation (4) converges to $\max_{c' \neq c} g_{c'}(\mathbf{y})$ as $\eta$ tends to infinity. For finite values of $\eta$, $d_c(\mathbf{y})$ is a differentiable function. The value of $d_c(\mathbf{y})$ is negative if $g_c(\mathbf{y})$ is greater than the “maximum” of the competing $g_{c'}(\mathbf{y})$, for $c' \neq c$. We expect this to be the case if $\mathbf{y}$ truly belongs to class $c$. Therefore, $d_c(\mathbf{y})$ plays the role of a discriminant function between $g_c(\mathbf{y})$ and the “maximum” of $g_{c'}(\mathbf{y})$, with $c' \neq c$. (We use quotes for the word maximum, since the true maximum is only achieved when $\eta$ tends to infinity.) The misclassification measure is a continuous function of the hyperparameters, and attempts to model a decision rule. Notice that as $d_c(\mathbf{y}) \to -\infty$ the loss $\ell_c(\mathbf{y})$ goes to zero, and as $d_c(\mathbf{y}) \to \infty$ it goes to one; that is why expression (3) can be seen as a soft version of a {0,1} loss function. The loss function takes into account the class-conditional densities of all classes, and thus optimizing it implies an optimization over the full set of hyperparameters.
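The quantities in equations (3) and (4) can be sketched as follows. The parameter names (`alpha`, `beta`, `eta`, `gamma`, `theta`) follow the standard MCE parameterization and are assumptions, not necessarily the exact symbols of the original; the log-sum-exp term acts as a soft maximum over the competing classes.

```python
import numpy as np

def misclassification_measure(log_liks, true_class, eta=1.0, alpha=1.0, beta=0.0):
    """Eq. (4)-style measure: g_c are scaled/shifted log marginal likelihoods;
    d_c is -g_c plus a soft maximum over the competing classes (larger eta
    tracks the true maximum more closely)."""
    g = alpha * np.asarray(log_liks, dtype=float) + beta
    others = np.delete(g, true_class)
    soft_max = np.log(np.mean(np.exp(eta * others))) / eta
    return -g[true_class] + soft_max

def soft_loss(d, gamma=1.0, theta=0.0):
    """Eq. (3): sigmoid of the misclassification measure, a smooth
    surrogate for the {0,1} loss."""
    return 1.0 / (1.0 + np.exp(-gamma * d + theta))
```

A correctly classified instance yields a negative measure and hence a loss below one half; a misclassified instance yields a loss above one half.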
Given a dataset of labeled instances, the goal is then to find the hyperparameters that minimize the cost function counting the (soft) number of misclassification errors over the dataset,

$$E\!\left(\{\boldsymbol{\theta}_c\}_{c=1}^{C}\right) = \sum_{c=1}^{C} \sum_{n=1}^{N_c} \ell_c(\mathbf{y}_{c,n}). \qquad (5)$$

We can compute the derivatives of equation (5) with respect to the hyperparameters, and then use a gradient-based optimization method to find the optimal hyperparameters under the minimum classification error criterion.
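The analytic derivatives of (5) follow from the chain rule through the sigmoid, the soft maximum, and the marginal likelihood. A central finite-difference gradient (a generic numerical utility, not the method of this report) is a useful sanity check for such derivations:

```python
import numpy as np

def numeric_grad(f, x, eps=1e-5):
    """Central finite-difference gradient of a scalar function f at x;
    handy for verifying analytic MCE gradients."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2.0 * eps)
    return g
```

Comparing this estimate against the analytic gradient at a few random hyperparameter settings catches most implementation errors.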
2.3.1 Computational complexity
Equation (4) requires computing a sum over all the classes. Moreover, to evaluate equation (2) we need to invert the kernel matrix $\mathbf{K}_c$, which is of dimension $DN \times DN$ for $D$ outputs and $N$ inputs. The computational complexity of each optimization step is then $\mathcal{O}(D^3 N^3)$, which can be prohibitively slow for many applications.
To reduce the computational complexity, in this report we resort to low-rank approximations of the covariance matrix appearing in the likelihood model. In particular, we use the partially independent training conditional (PITC) approximation and the fully independent training conditional (FITC) approximation, both developed for multi-output GPs by Alvarez and Lawrence (2011). These approximations reduce the complexity of each step to be linear in $DN$ and quadratic in $K$, where $K$ is a user-specified parameter: the number of auxiliary (inducing) input variables used to build the low-rank approximation. The locations of these inducing inputs can be optimized within the same optimization procedure used for finding the hyperparameters. For details, the reader is referred to Alvarez and Lawrence (2011).
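The flavour of the savings can be sketched with a FITC-style computation: with $M$ inducing points, the Woodbury identity and the matrix determinant lemma give the log-determinant and the linear solve needed by the marginal likelihood without ever forming the full covariance. This is a generic sketch under assumed names, not the exact PITC/FITC expressions of Alvarez and Lawrence (2011).

```python
import numpy as np

def fitc_logdet_and_solve(Kuu, Kuf, diag_Kff, y, noise=1e-2):
    """Log-determinant of, and solve against, S = Qff + diag(Lam), where
    Qff = Kfu Kuu^{-1} Kuf and Lam = diag(Kff - Qff) + noise.
    Cost is O(N M^2) for M inducing points instead of O(N^3)."""
    M = Kuu.shape[0]
    Luu = np.linalg.cholesky(Kuu + 1e-8 * np.eye(M))
    V = np.linalg.solve(Luu, Kuf)                   # (M, N); Qff = V.T V
    diag_Q = np.sum(V * V, axis=0)
    Lam = diag_Kff - diag_Q + noise                 # FITC diagonal correction
    Vs = V / Lam                                    # V Lam^{-1}, column-wise
    B = np.eye(M) + Vs @ V.T                        # I + V Lam^{-1} V.T
    LB = np.linalg.cholesky(B)
    # Woodbury: S^{-1} y = Lam^{-1} y - Lam^{-1} V.T B^{-1} V Lam^{-1} y
    c = np.linalg.solve(LB, Vs @ y)
    alpha = y / Lam - Vs.T @ np.linalg.solve(LB.T, c)
    # Determinant lemma: log|S| = sum(log Lam) + log|B|
    logdet = np.sum(np.log(Lam)) + 2.0 * np.sum(np.log(np.diag(LB)))
    return logdet, alpha
```

Only $M \times M$ and $M \times N$ matrices are factorized or stored, which is where the reduced complexity comes from.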
3 Experimental Results
In the following sections, we show results for different experiments comparing the following methods: hidden Markov models trained generatively using the Baum-Welch algorithm Rabiner (1989), hidden Markov models trained discriminatively using minimum classification error Juang et al. (1997), multi-output GPs trained generatively using maximum likelihood (this report), and multi-output GPs trained discriminatively using minimum classification error (this report). In section 3.1 we test the different methods on emotion classification from video sequences in the Cohn-Kanade database Lucey et al. (2010). In section 3.2, we compare the methods on activity recognition (running and walking) from video sequences in the CMU MOCAP database. In section 3.3, we again use the CMU MOCAP database, this time to identify subjects from their walking styles. For this experiment, we also try different frame rates for the training and validation cameras, to show how the multi-output GP method adapts to this case. Finally, in section 3.4, we show an example of face recognition from images. Our intention there is to demonstrate our method on an example in which the dimensionality of the input space is greater than one.
For all the experiments, we assume that the HMMs have one Gaussian distribution per state. The number of hidden states of an HMM is shown in parentheses in each experiment; for instance, HMM(q) means an HMM with q hidden states.
3.1 Emotion Recognition from Sequence Images
For the first experiment, we used the Cohn-Kanade database Lucey et al. (2010). This database consists of processed videos of people expressing emotions, starting from a neutral face and evolving to the final emotion. The features are the positions of a set of keypoints, or landmarks, on the face of the person expressing the emotion. The database covers seven emotions. We used the four emotions having more than 40 realizations, namely anger, disgust, happiness, and surprise, so the number of classes is four. Each instance consists of 68 landmarks evolving over time. Figure 0(c) shows a description of the Cohn-Kanade facial expression database. We employed 19 of those 68 keypoints (see figure 0(b)), associated with the lips and the eyes among other regions, which are thought to be the most relevant for emotion recognition according to Valstar and Pantic (2012). Figure 0(c) shows these relevant features.
In this experiment, we model the horizontal and vertical coordinates of each landmark as time series. With 19 landmarks and two time series per landmark, we are modeling multivariate time series of dimension 38. The length of each time series was fixed in this first experiment to 71, using a dynamic time warping algorithm for multiple signals Zhou and De la Torre Frade (2012). For each class, we use 70% of the instances for the training set and 30% for the validation set. We repeated the experiments five times; each time, we had different instances in the training and validation sets. We trained multi-output GPs with the FITC and PITC approximations, both fixing and optimizing the auxiliary variables of the low-rank approximations. The number of auxiliary variables was fixed beforehand. When not optimized, the auxiliary variables were placed uniformly along the input space.
Accuracy results are shown in Table 1. The table reports the mean and standard deviation over the five repetitions of the experiment. A star (*) next to a method name means that the auxiliary input points were optimized; otherwise they were fixed.
Method     Generative       MCE
FITC       79.58 ± 4.52     89.55 ± 2.53
PITC       69.16 ± 16.82    87.07 ± 5.77
FITC*      79.16 ± 3.29     89.16 ± 6.49
PITC*      70.83 ± 16.21    85.82 ± 3.73
HMM (3)    85.80 ± 7.26     84.15 ± 9.60
HMM (5)    79.00 ± 3.74     87.91 ± 4.27
HMM (7)    70.80 ± 8.87     91.66 ± 6.08
Generative and MCE refer to the generative and discriminative training, using either the multi-output GP or the HMM. The table shows that for multi-output GPs, discriminative training leads to better results than generative training. The table also shows results for HMMs with 3, 5, and 7 hidden states. Results for the generatively trained HMM and the generatively trained multi-output GP are within the same range once the standard deviation is taken into account. Accuracies are also similar when comparing the HMM trained with MCE and the multi-output GP trained with MCE. These experiments show that the multi-output GP is as good as the HMM for emotion recognition.
3.2 Activity Recognition With Motion Capture Data Set
For the second experiment, we use a motion capture database (MOCAP) to classify between walking and running actions. In MOCAP, the input consists of the different angles between the bones of a 3D skeleton. The camera used for the MOCAP database has a frame rate of 120 frames per second, but in this experiment we subsampled the frames to a fraction of the original frame rate. Our motion capture data set is from the CMU motion capture database.² We considered two different categories of movement: running and walking. For running, we take subject 2 motion 3; subject 9 motions 1–11; subject 16 motions 35, 36, 45, 46, 55, 56; subject 35 motions 17–26; subject 127 motions 3, 6, 7, 8; subject 141 motions 1, 2, 3, 34; and subject 143 motions 1, 42. For walking, we take subject 7 motions 1–11; subject 8 motions 1–10; subject 35 motions 1–11; and subject 39 motions 1–10. Figure 2 shows an example of activity recognition in the MOCAP database. In this example, then, we have two classes, and the time courses of the angles are modeled as a multivariate time series.

² The CMU Graphics Lab Motion Capture Database was created with funding from NSF EIA-0196217 and is available at http://mocap.cs.cmu.edu.
Here again we compare the generative and discriminative approaches, on both our proposed model (with the FITC and PITC approximations) and HMMs. One important difference between the experiment of section 3.1 and this experiment is that we use the raw features here, whereas in section 3.1 we first performed dynamic time warping so that all the time series had the same length. This means that in this experiment, the length of each time series depends on the particular instance and class.
Method     Generative        Discriminative
FITC       60.68 ± 3.98      95.71 ± 2.98
PITC       76.40 ± 12.38     93.56 ± 5.29
FITC*      58.90 ± 0.00      96.78 ± 1.95
PITC*      69.28 ± 15.28     84.90 ± 11.33
HMM (3)    96.70 ± 2.77      97.95 ± 2.23
HMM (5)    94.69 ± 4.36      96.32 ± 0.82
HMM (7)    92.24 ± 4.49      99.77 ± 0.99
The results are shown in Table 2 for five repetitions of the experiment. For each repetition, we used a subset of the instances of each class for training and the rest for validation. Again, the results are comparable with those of the HMM within one standard deviation. As before, the discriminative approach generally gives better results than the generative approach.
The experiments in this section and section 3.1 show that multi-output GPs exhibit performance similar to HMMs when used for pattern recognition of different types of multivariate time series.
3.3 Subject Identification on a Motion Capture Data Set
For the third experiment we again used the CMU MOCAP database, but instead of classifying between different actions, we recognized subjects by their walking styles. We considered three different subjects exhibiting walking movements. To perform the identification we took subject 7 motions 1, 2, 3, 6, 7, 8, 9, 10; subject 8 motions 1, 2, 3, 5, 6, 8, 9, 10; and subject 35 motions 1–8. For each subject we then took four instances for training and four for validation. Figure 3 shows an example of subject identification in the CMU MOCAP database. We thus have three classes, and the length of each instance was variable.
For this experiment, we supposed the scenario where the frame rate for the motions used in training could be different from the frame rate for the motions used in testing. This configuration simulates the scenario where cameras with different recording rates are used to keep track of human activities. Notice that HMMs are not supposed to adapt well to this scenario, since the Markov assumption is that the current state depends only on the previous state. However, the Gaussian process captures the dependencies of any order, and encodes those dependencies in the kernel function, which is a continuous function of the input variables. Thus, we can evaluate the GP for any set of input points, at the testing stage, without the need to train the whole model again.
Table 3 shows the results of this experiment. In the table, we study three different scenarios: one in which the frame rate of the training instances is slower than that of the validation instances, one in which it is faster, and one in which it is the same. We manipulate the frame rates by decimating in training (DT) and decimating in validation (DV). For example, a decimation of 16 means that one of every 16 frames of the original time series is kept. When the validation frame rate is faster than the training frame rate (column Faster in Table 3), the performance of the multi-output GP is clearly superior to that of the HMM, both for the generative and the discriminative approaches. When the validation frame rate is slower than or equal to the training frame rate (columns Slower and Equal), the performances of the multi-output GPs and the HMM are similar (within the standard deviation) when they are trained with MCE. If the models are trained generatively, the multi-output GP outperforms the HMM. Although the results for the HMM in Table 3 were obtained with the number of states fixed to seven, we also performed experiments with three and five states, obtaining similar results. This experiment shows an example where our model is clearly useful for solving a problem that an HMM does not solve satisfactorily.
Method         Faster           Slower           Equal
FITC Gen       93.28 ± 3.76     94.96 ± 4.60     94.96 ± 4.60
PITC Gen       93.28 ± 3.76     94.96 ± 4.60     94.96 ± 4.60
FITC MCE       94.96 ± 4.60     89.96 ± 9.12     89.96 ± 9.12
PITC MCE       94.96 ± 4.60     93.28 ± 3.76     88.32 ± 12.63
HMM Gen (7)    33.33 ± 0.00     36.40 ± 12.56    81.60 ± 6.98
HMM MCE (7)    83.33 ± 16.6     94.90 ± 4.60     100.00 ± 0.00
3.4 Face Recognition
The goal of the fourth experiment is to show an example where the vector-valued function depends on input variables with dimensionality greater than one, i.e., functions of multidimensional inputs such as space. The HMMs as used here do not generalize easily to this case, and thus we do not present HMM results for this experiment. In this problem we work with face recognition from pictures of the Georgia Tech database.³ This database contains JPEG images of 50 subjects. For each individual, 15 color images were taken, with variations in illumination, facial expression, face orientation, and appearance (presence or absence of glasses). Figure 4 shows an example from the Georgia Tech face database.

³ Georgia Tech Face Database, http://www.anefian.com/research/face_reco.htm.
Here we performed two experiments. The first was carried out with 5 subjects of the Georgia Tech database who did not wear glasses. For the second, we took another 5 subjects who did wear glasses. In both experiments, each image was divided into a given number of regions of equal aspect ratio. For each region we computed its centroid and a texture vector. Notice that this can be directly modeled by a multi-output GP where the input vectors are two-dimensional.
Method    Gen              Disc
FITC      61.57 ± 3.50     86.84 ± 0.01
PITC      64.72 ± 2.34     95.78 ± 8.03
FITC*     66.71 ± 3.82     96.84 ± 7.06
PITC*     73.68 ± 5.88     96.30 ± 3.00

Method    Gen              Disc
FITC      51.57 ± 3.5      88.42 ± 2.35
PITC      69.47 ± 3.53     83.68 ± 4.30
FITC*     56.80 ± 2.44     86.84 ± 0.01
PITC*     62.10 ± 8.24     87.36 ± 1.17

Method    Gen              Disc
FITC      54.73 ± 6.55     81.57 ± 3.7
PITC      64.21 ± 9.41     81.57 ± 7.2
FITC*     60.53 ± 0.02     90.52 ± 9.41
PITC*     69.47 ± 9.41     77.36 ± 8.24

Method    Gen              Disc
FITC      42.1 ± 0.02      93.68 ± 2.35
PITC      35.78 ± 2.35     86.84 ± 0.01
FITC*     72.6 ± 5.45      86.84 ± 0.01
PITC*     48.42 ± 2.35     89.47 ± 0.01
The tables above show the results of this experiment with the discriminative and the generative training approaches. The numbers of divisions in the X and Y coordinates are BX and BY, respectively. The features extracted from each block are the mean RGB values and the Segmentation-based Fractal Texture Analysis (SFTA) features Costa et al. (2012) of the block. The SFTA algorithm extracts a feature vector from each region by decomposing it into a set of binary images, and then computing a scalar measure based on fractal symmetry for each of those binary images.

The results show high recognition accuracy in both schemes (faces with glasses and faces without glasses) when using discriminative training. For all the settings, the results of the discriminative training method are better than those of the generative training method. This experiment shows the versatility of the multi-output Gaussian process in applications that go beyond time series classification.
4 Conclusions
In this report, we advocated the use of multi-output GPs as generative models for vector-valued random fields. We showed how to estimate the hyperparameters of the multi-output GP in a generative way and in a discriminative way, and through different experiments we demonstrated that the performance of our framework is equal to or better than that of its natural competitor, an HMM.
For future work, we would like to study the performance of the framework using alternative discriminative criteria, such as maximum mutual information (MMI) with gradient optimization, or conditional expectation maximization Jebara (2004). We would also like to explore practical applications that require classifying vector-valued functions with higher-dimensional input spaces. Computational complexity is still an issue; we would like to implement alternative efficient methods for training multi-output GPs Hensman et al. (2013).

Acknowledgments
SGG acknowledges the support from “Beca Jorge Roa Martínez” from Universidad Tecnológica de Pereira, Colombia.
References
 Álvarez and Lawrence (2009) Mauricio A. Álvarez and Neil D. Lawrence. Sparse convolved Gaussian processes for multi-output regression. In Daphne Koller, Dale Schuurmans, Yoshua Bengio, and Léon Bottou, editors, NIPS, volume 21, pages 57–64, Cambridge, MA, 2009. MIT Press.
 Alvarez and Lawrence (2011) Mauricio A. Alvarez and Neil D. Lawrence. Computationally efficient convolved multiple output Gaussian processes. Journal of Machine Learning Research, 12:1459–1500, 2011.
 Álvarez et al. (2012) Mauricio A. Álvarez, Lorenzo Rosasco, and Neil D. Lawrence. Kernels for vector-valued functions: a review. Foundations and Trends in Machine Learning, 4(3):195–266, 2012.
 Bishop (2007) Christopher M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, 2007. URL http://www.openisbn.com/isbn/9780387310732/.
 Bonilla et al. (2008) Edwin V. Bonilla, Kian Ming Chai, and Christopher K. I. Williams. Multi-task Gaussian process prediction. In John C. Platt, Daphne Koller, Yoram Singer, and Sam Roweis, editors, NIPS, volume 20, Cambridge, MA, 2008. MIT Press.
 Boyle and Frean (2005) Phillip Boyle and Marcus Frean. Dependent Gaussian processes. In Lawrence Saul, Yair Weiss, and Léon Bottou, editors, NIPS, volume 17, pages 217–224, Cambridge, MA, 2005. MIT Press.

 Chai (2012) Kian Ming Chai. Variational multinomial logit Gaussian process. Journal of Machine Learning Research, 13:1745–1808, 2012.
 Costa et al. (2012) A. F. Costa, G. Humpire-Mamani, and A. J. M. Traina. An efficient algorithm for fractal analysis of textures. In Graphics, Patterns and Images (SIBGRAPI), 2012 25th SIBGRAPI Conference on, pages 39–46, August 2012. doi: 10.1109/SIBGRAPI.2012.15.
 Goovaerts (1997) Pierre Goovaerts. Geostatistics For Natural Resources Evaluation. Oxford University Press, USA, 1997.
 Gopalakrishnan et al. (1991) P. Gopalakrishnan, D. Kanevsky, A. Nádas, and D. Nahamoo. An inequality for rational functions with applications to some statistical estimation problems. IEEE Transactions on Information Theory, 37:107–113, 1991.
 Hensman et al. (2013) James Hensman, Nicolò Fusi, and Neil D. Lawrence. Gaussian processes for big data. In Proceedings of UAI, 2013.
 Higdon (2002) David M. Higdon. Space and spacetime modelling using process convolutions. In C. Anderson, V. Barnett, P. Chatwin, and A. ElShaarawi, editors, Quantitative methods for current environmental issues, pages 37–56. SpringerVerlag, 2002.
 Jebara (2004) Tony Jebara. Machine Learning: Discriminative and Generative. Springer, 2004.
 Juang et al. (1997) BiingHwang Juang, Wu Hou, and ChinHui Lee. Minimum classification error rate methods for speech recognition. Speech and Audio Processing, IEEE Transactions on, 5(3):257–265, 1997.
 Lucey et al. (2010) P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews. The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2010 IEEE Computer Society Conference on, pages 94–101, June 2010. doi: 10.1109/CVPRW.2010.5543262.
 Rabiner (1989) Lawrence Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286, February 1989. ISSN 0018-9219. doi: 10.1109/5.18626.
 Rasmussen and Williams (2006) Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA, 2006. ISBN 026218253X.

 Skolidis and Sanguinetti (2011) Grigorios Skolidis and Guido Sanguinetti. Bayesian multitask classification with Gaussian process priors. IEEE Transactions on Neural Networks, 22(12):2011–2021, 2011.
 Teh et al. (2005) Yee Whye Teh, Matthias Seeger, and Michael I. Jordan. Semiparametric latent factor models. In Robert G. Cowell and Zoubin Ghahramani, editors, AISTATS 10, pages 333–340, Barbados, 6–8 January 2005. Society for Artificial Intelligence and Statistics.
 Valstar and Pantic (2012) M. F. Valstar and M. Pantic. Fully automatic recognition of the temporal phases of facial actions. IEEE Transactions on Systems, Man and Cybernetics, 42:28–43, 2012.
 Zhou and De la Torre Frade (2012) Feng Zhou and Fernando De la Torre Frade. Generalized time warping for multimodal alignment of human motion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1282–1289, June 2012.