1 Introduction
An increasing number of techniques have been proposed in the literature for the analysis of emotional facial expressions, since emotion is an essential component of interpersonal relationships and communication. Human behavior is central to research on interaction processes. A facial image contains much information not only about a person's identity but also about their emotion and state of mind. Emotion cues show how we feel about ourselves and others. These cues are conveyed through facial components (eyes, nose, mouth, cheeks, eyebrows, forehead, etc.), which constitute the regions of interest (ROI) for an emotion recognition system. A facial expression recognition system locates and extracts facial motions and facial feature changes from the ROI and classifies them into one of the emotional or mental states. A system capable of automatically analyzing spontaneous facial expressions has considerable potential applications: human-machine interaction, detection of mental disorders, remote detection of people in trouble, detection of malicious behavior, and multimedia facial queries [Tong et al., 2007]. Current research on facial expression recognition can be divided into two categories [Peng et al., 2009]: recognition of facial affect and recognition of facial muscle actions. In this paper, facial affect recognition is considered for observable expressions of emotion displayed through facial expressions. Our choice comes from the fact that it allows the various emotions to be identified simply, by extracting information about facial expressions from images. Facial action units (AUs), in contrast, are related to the contraction of specific facial muscles and consist of 44 action units. Although the number of atomic action units is small, more than 7,000 combinations of action units have been observed [Scherer and Ekman, 1982].
As far as automatic facial affect recognition is concerned, most existing efforts have focused on the six basic Ekman emotions [Ekman, 1999] because those emotions have universal properties. Moreover, relevant training and test materials are available (e.g., [Kanade et al., 2000] and [Lyons et al., 1998]). These studies are limited to exaggerated expressions and controlled environments. There have been a few tentative efforts to detect non-basic affective states, including mental states (“irritated”, “worried”, …) [El Kaliouby and Robinson, 2005]; such expressions are closer to natural behavior. Additionally, spontaneous facial expressions have different temporal and morphological characteristics than posed ones.
The purpose of our work is to demonstrate that sparse representation is an efficient model for classifying spontaneous facial expressions and for increasing the accuracy of predicting them from spontaneous facial images. Sparse representation provides higher- or lower-dimensional representations, which increase the likelihood that image classes will be linearly separable. The sparse discriminative feature set is the main interface through which a machine learning algorithm can make inferences about the data. More precisely, since the main issue with sparse representation is dictionary learning, and since the original facial image has a very high dimension, the straightforward application of sparse representation to raw images does not lead to a meaningful sparse representation. We therefore present an efficient initialization strategy and dimensionality reduction technique by developing an optimized random face feature descriptor (RFFD) based on the random projection (RP) concept [Vempala, 2005]. RFFD projects the facial images into a lower-dimensional space and selects the most discriminative feature sets, minimizing the correlation between different facial image classes while maximizing the correlation within classes, in order to ensure the uniqueness of the atom selection from the dictionary during the sparse coding process. Our pretraining step avoids the high computational resources (memory usage and training time) required during dictionary training, an important requirement for developing a real-time automatic facial expression recognition system. Experimental results on the JAFFE acted facial expression database and on the DynEmo spontaneous expression database demonstrate that our algorithm outperforms several recently proposed sparse representation and dictionary learning approaches. Our algorithm can be trained on a small or a large dataset and still provide a high accuracy rate, which is an advantage over deep learning approaches, which currently perform well only when a large dataset is available.
2 Related Work
Numerous methods for extracting discriminative information about facial expressions from images have been developed. For example, Eigenfaces, Fisherfaces, and Laplacianfaces have been used on full face images [Buciu and Pitas, 2004]. Gabor filter banks have also been used successfully as efficient facial features ([Candes and Romberg, 2005] and [Candès et al., 2006]) because these features are locally concentrated and have been shown to be robust to block occlusion [Donoho, 2006]. Once the feature vector is extracted from an image, it feeds a classifier which outputs the recognized expression. A survey of automatic facial expression recognition methods is presented in [Hoyer, 2003]. A noteworthy contribution of sparse representations of signals has been reported in recent years. Sparse representation has been successfully applied to a variety of problems in computer vision and image analysis, including image denoising
[Elad and Aharon, 2006], image restoration [Mairal et al., 2008], and image classification ([Yang et al., 2009], [Wright et al., 2009] and [Bradley and Bagnell, 2008]). Sparse representation modeling of data assumes the ability to describe signals as linear combinations of a few atoms from a prespecified dictionary. The success of the model relies on the quality of the dictionary that sparsifies the signals. A proper dictionary can be chosen in one of two ways [Rubinstein et al., 2010]: building a sparsifying dictionary based on a mathematical model of the data (wavelets, wavelet packets, contourlets, and curvelets), or learning a dictionary to perform best on a training set. Reference [Wright et al., 2009] employs the entire set of training samples as the dictionary for discriminative sparse coding, and achieves impressive performance for face recognition. Many algorithms ([Mairal et al., 2010] and [Wang et al., 2010]) have been proposed to efficiently learn an overcomplete dictionary (where the number of prototype signals, referred to as atoms, is much greater than the feature size) that enforces some discriminative criterion. In [Jiang et al., 2013], the class labels of the training data are used to learn a discriminative dictionary for sparse coding. In addition, label information is associated with each dictionary item to enforce discriminability of the sparse codes during dictionary learning. More specifically, a new label consistency constraint called “discriminative sparse-code error” is introduced and combined with the reconstruction error and the classification error to form a unified objective function. Our work is inspired by the good reputation of sparse representation in both theoretical research and practical applications ([Yang et al., 2009], [Wright et al., 2009], [Bradley and Bagnell, 2008] and [Mairal et al., 2008]). Moreover, our choice comes from the fact that sparse representation can provide sparse vectors that share the same sparsity pattern at the class level if the dictionary is correctly built.
3 Model Architecture
Figure 1 presents the global architecture of the proposed algorithm for facial expression recognition.
3.1 Dimensionality Reduction and Dictionary Pretraining Stage
We leverage the random projection (RP) technique [Sulic et al., 2010] to develop a random face feature descriptor (RFFD) for dictionary pretraining that elegantly solves the problem of shared subspace distribution. It also projects the raw data into a lower-dimensional space while preserving their reconstructive and discriminative properties. Besides, it seeks the best transformation matrix that maximizes the separation between the multiple classes, which is the main key to inducing sparsity.
As a preprocessing stage, a face detector [Zhu and Ramanan, 2012] is applied to detect and locate a bounding box around the face. Facial images are cropped to focus on the expressive parts only (eyes, eyebrows, mouth and nose) in order to reduce the effect of background variation. Then, RFFD is applied to project the data into a lower-dimensional subspace and to extract the most informative and discriminative independent features. The projected data serve as the dictionary initialization. Thus, the dictionary is pretrained with feature vectors sharing the same patterns within a class, while feature vectors from different classes have different patterns.
Random Projection Theory and Concept. Random projection is a powerful data dimension reduction technique because it is capable of preserving the reconstructive properties of the data [Sulic et al., 2010]. It uses random projection matrices whose columns have unit length to project data from a high-dimensional subspace to a low-dimensional subspace. It is a computationally simple and efficient method that preserves the structure of the data without significant distortion [Sanghai et al., 2005].
The concept of RP is as follows. Given a data matrix X of size d × N, the dimensionality of the data can be reduced by projecting it onto a lower-dimensional subspace formed by a set of random vectors:

X_p = R X,   (1)

where N is the total number of points, d is the original dimension, k is the desired lower dimension, R (of size k × d) is the random transformation matrix and X_p (of size k × N) is the projected data. The central idea of RP is based on the Johnson-Lindenstrauss lemma. For complete proofs of the lemma, refer to [Tsagkatakis and Savakis, 2009].
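As a concrete illustration, the projection of equation (1) can be sketched in a few lines of numpy (the function name and the toy sizes are ours, not the paper's):

```python
import numpy as np

def random_projection(X, k, seed=0):
    """Project d-dimensional data X (d x N) onto k dimensions with a
    Gaussian random matrix R (k x d), as in equation (1).  Each row of
    R is normalized to unit length."""
    d, N = X.shape
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((k, d))                 # zero-mean normal entries
    R /= np.linalg.norm(R, axis=1, keepdims=True)   # unit-length rows
    return R @ X                                    # projected data, k x N

# toy usage: fifty 1024-dimensional vectors reduced to 64 dimensions
X = np.random.default_rng(1).standard_normal((1024, 50))
Xp = random_projection(X, k=64)
print(Xp.shape)  # (64, 50)
```

By the Johnson-Lindenstrauss lemma, pairwise distances in the projected data approximate those of the original data up to a small distortion, with high probability.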
The choice of the random matrix R is one of the crucial points of interest. Reference [Tsagkatakis and Savakis, 2009] employs a random matrix whose elements are drawn independently and identically distributed (i.i.d.) from a zero-mean, bounded-variance distribution. There are many choices for the random matrix. A random matrix with elements generated by a normal distribution, being one of the simplest in terms of analysis [Tsagkatakis and Savakis, 2009], has been chosen in this work.

Random Face Feature Descriptor Algorithm: A random face feature descriptor based on the RP concept is designed. RFFD first tackles the curse of dimensionality: each image is projected onto a k-dimensional vector with a projection matrix R randomly generated from a zero-mean normal distribution. Each row of the random transformation matrix is normalized. RFFD aims at minimizing the correlation between different classes while maximizing the correlation within classes; it preserves the discriminative properties of the input data. Figure 2 presents the RFFD algorithm. It looks for the best projection matrix and the best projection dimension that preserve the structure and the reconstructive properties of the original data. The intuition behind this algorithm is as follows: since R is generated randomly, it is not guaranteed to yield good quality features. A good quality feature vector is a vector whose elements are not mostly zeros. The quality of the projected matrix (figure 2, step iii) is checked by thresholding the norm of every projected column vector: if the norm of a column is smaller than a given threshold, it is considered a bad feature vector.
Once a good data projection is obtained, R is considered a good random transformation matrix. Moreover, the quality of the feature vectors produced by two different good transformation matrices can vary. We aim at picking the R that induces the most discriminability between classes, so selecting the best R among a set of candidates is important (figure 2, steps b and c). In addition, selecting the best dimension k that preserves the discriminative properties of the original data with minimal distortion has a great effect on the final recognition rate (figure 2).
The projected data obtained as the output of the RFFD process are used to initialize the dictionary required for the sparse representation process. This step is important to induce sparsity during the learning process, by initializing the dictionary with atoms that are highly informative and maximally separated between classes.
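The pretraining search can be sketched as follows; note that the cosine-similarity separation score below is our illustrative stand-in for the paper's exact selection criterion (defined in figure 2), and the dimension grid and trial counts are toy values:

```python
import numpy as np

def projection_quality(Xp, tau=1e-3):
    """Reject projections producing near-zero feature vectors: every
    projected column must have norm above the threshold tau."""
    return np.all(np.linalg.norm(Xp, axis=0) > tau)

def separation_score(Xp, labels):
    """Mean within-class minus mean between-class cosine similarity of
    the projected feature vectors (higher is better)."""
    Z = Xp / np.linalg.norm(Xp, axis=0, keepdims=True)
    S = Z.T @ Z                                   # pairwise cosine similarities
    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)
    off_diag = ~np.eye(len(labels), dtype=bool)
    return S[same].mean() - S[off_diag & ~same].mean()

def rffd(X, labels, dims=(32, 64), n_trials=3, seed=0):
    """Search over target dimensions and random matrices, keeping the
    projection with the best class-separation score."""
    rng = np.random.default_rng(seed)
    best_score, best_R, best_k = -np.inf, None, None
    d = X.shape[0]
    for k in dims:
        for _ in range(n_trials):
            R = rng.standard_normal((k, d))
            R /= np.linalg.norm(R, axis=1, keepdims=True)  # unit-length rows
            Xp = R @ X
            if not projection_quality(Xp):
                continue                           # discard bad projections
            score = separation_score(Xp, labels)
            if score > best_score:
                best_score, best_R, best_k = score, R, k
    return best_R, best_k

# toy usage: two well-separated classes in 300 dimensions
rng = np.random.default_rng(2)
mu0, mu1 = rng.standard_normal(300), rng.standard_normal(300)
X = np.hstack([mu0[:, None] + 0.1 * rng.standard_normal((300, 20)),
               mu1[:, None] + 0.1 * rng.standard_normal((300, 20))])
labels = np.array([0] * 20 + [1] * 20)
R, k = rffd(X, labels)
```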
3.2 Dictionary Refining and Sparse Coding Stage
The second step of the algorithm (see figure 1) first aims at refining the pretrained dictionary to sparsify the images via the K-Singular Value Decomposition (K-SVD) algorithm [Rubinstein et al., 2008]. Second, it aims at deriving the sparse code associated with each signal by solving an ℓ0-norm regularized problem to enforce sparsity, using an approximate sparse reconstruction algorithm, Orthogonal Matching Pursuit (OMP) [Tropp, 2004]. Given a dataset Y and a target sparsity level L (the maximum number of atoms allowed in each representation), the problem is to build the dictionary D and the sparse matrix X such that Y ≈ DX. The problem can be formulated as:

min_{D,X} ||Y − DX||_F^2  subject to  ||x_i||_0 ≤ L for all i,   (2)

where:

||x_i||_0 is the ℓ0 pseudo-norm, defined as the number of nonzero coefficients in column x_i;

||·||_F is the Frobenius norm;

each column of the dictionary D is a normalized atom.
Equation 2 can be solved by an alternating two-step optimization process:

Sparse Coding: keep the pretrained dictionary D fixed and estimate X by solving, for each signal y_i:

x_i = argmin_x ||y_i − Dx||_2^2  subject to  ||x||_0 ≤ L.   (3)

The sparse representation is optimized using the OMP algorithm. Compared with alternative methods for sparse coding, a major advantage of OMP is its simplicity and fast implementation.

Dictionary Refining: keep the obtained sparse matrix X fixed and update the pretrained dictionary via the K-SVD algorithm to better fit the data.
To recap, the search for the sparse representation of facial expression images over a pretrained dictionary is achieved by optimizing an objective function (equation 2) that includes two terms: one that measures the signal reconstruction error and the other that measures the best sparsity level to ensure the correct representation of the signals.
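For reference, the greedy OMP step of equation (3) can be sketched in plain numpy (variable names are ours; for production use, the batch OMP implementation of [Rubinstein et al., 2008] is far faster):

```python
import numpy as np

def omp(D, y, L):
    """Orthogonal Matching Pursuit: approximate the signal y with at
    most L atoms of the dictionary D (columns assumed unit-norm)."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(L):
        # select the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-estimate all coefficients on the support by least squares
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
        if np.linalg.norm(residual) < 1e-12:
            break
    x[support] = coeffs
    return x

# usage: recover a 3-sparse code over an orthonormal toy dictionary
rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.standard_normal((50, 50)))   # orthonormal atoms
x_true = np.zeros(50)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
x_hat = omp(D, D @ x_true, L=3)
```

Alternating this sparse-coding step with the K-SVD dictionary update realizes the optimization of equation 2.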
3.3 Classification Stage
In the last step (see figure 1), the sparse matrix is directly used as feature vectors for classification. Our model trains a multinomial linear Support Vector Machine classifier [Vapnik, 2013] for the purpose of facial expression recognition. We chose the linear SVM classifier among other well-known classifiers (i.e., K-Means, AdaBoost and Decision Tree) since it gives the best results. In the training step, the training sparse matrix is used to learn a predictive model to recognize facial expressions. The test sparse matrix is used for generalization purposes: the capability of the model to predict unseen facial expressions is tested. Grid search is applied to find the best regularization parameter C to tune the linear SVM classifier.

4 Experimental Setup and Analysis
A critical experimental evaluation of the proposed approach is presented. Two public datasets that exhibit various emotions in different conditions are used, ranging from acted facial expressions (the JAFFE database [Lyons et al., 1998]) to everyday, natural and spontaneous facial expressions (the DynEmo database [Tcherkassof et al., 2013]). The effectiveness of the proposed random face feature descriptor as a dimension reduction technique and dictionary pretraining method is first analyzed on the acted and controlled JAFFE database. This database is also used for a fair comparison with state-of-the-art methods. Then experiments on the DynEmo database are reported, since spontaneous facial expression recognition is our main goal.
4.1 Model Validation over the JAFFE Database
The JAFFE Database: The Japanese Female Facial Expression (JAFFE) database is a well-known database of acted facial expressions related to Ekman's emotions. It contains 213 images of female facial expressions including “happy”, “anger”, “sadness”, “surprise”, “disgust”, “fear” and “neutral”. Each image was cropped from its original resolution to focus on the face. The head is almost in frontal pose. The number of images in each of the seven categories of expressions is roughly the same (around 30 images per class). A few of them are shown in figure 3. The expressions are obviously exaggerated; nonetheless, this database has often been used in the literature to evaluate the performance of facial expression recognition algorithms.
JAFFE Database Protocol: Identities that appear in the training data sets do not appear in the test set.

train set: 20 images per class are picked out as training set. In total we have 143 facial expression images, randomly shuffled, for training our algorithm.

development set: Leaveoneout cross validation is considered over the training set to tune the algorithm parameters.

test set: 10 images per class are picked out as test set. In total we have 70 facial expression images, randomly shuffled, to test the performance of our algorithm.
Experimental Setup and Analysis:
As a control experiment, the efficiency of our approach and its capability to recognize acted facial expressions are evaluated before testing it on spontaneous facial expressions.
First, the dataset is divided into two portions based on the number of images per class, as defined in the JAFFE dataset protocol above. Figure 4 shows the evaluation of the random face feature descriptor. The x-axis represents the generation of different random matrices for the same desired dimension; the y-axis represents the final average classification rate over the projected data. To evaluate the performance of RFFD over the JAFFE database, we define a list of desired lower dimensions. For each dimension, different random matrices are generated, and the matrix R that reaches the maximum average classification accuracy for that dimension is picked out. Finally, both the best dimension and the best R are derived. Figure 4 shows the optimal random projection matrix: the optimal dimension is found to be 700, and the projected data reach an average classification rate of 70%.
We compare the proposed dimension reduction method with PCA, which is probably the most popular method for dimensionality reduction. Our method outperforms PCA, as shown in table 1.

Table 1: Average recognition rate on the JAFFE database
Projection Method    Average recognition rate %
RFFD (ours)          70
PCA                  30
For illustration, the first three feature vectors of 20 images per class, before and after RFFD, are displayed over the test data. Figure 5 shows that the data share a subspace before projection. The proposed RFFD method solves this problem, since after projection the data are partially linearly separable. This meets our main goal of obtaining highly informative and independent feature vectors across different classes.
Second, the optimal data projection is used to initialize the dictionary, of size 700 × 143 (700 features, 143 atoms), an undercomplete dictionary: the number of atoms is smaller than the feature size. Each column of the dictionary is normalized to unit norm, which ensures that the angle is proportional to the inner product. The K-SVD algorithm is applied to refine the initialized dictionary, and the sparse matrix is computed via the OMP algorithm. The optimal dictionary yields a signal representation with the smallest possible support while the estimated signal remains close to the observation. The sparsity level L is chosen as a fraction of the dictionary size by controlling the absolute reconstruction error (figure 6) and the discriminability of the obtained sparse codes (figure 7). Figure 6 shows the ability of the trained dictionary to reconstruct the test samples with minimal reconstruction error and a low sparsity level (at most 21 nonzero coefficients). Figure 7 shows the sparse code coefficients of a given image for the expressions “Anger”, “Disgust”, “Fear”, “Happy”, “Sad”, “Surprise” and “Neutral”, respectively from top to bottom. The x-axis represents the dictionary atoms (basis vectors) from which the facial image is encoded, while the y-axis represents the coefficient values. Figure 7 shows that each expression is encoded by a different set of atoms with different weights. The discriminability of the sparse code is a very important property for robust classification. Another point to note is that an undercomplete dictionary allows faster computation, since the greedy OMP algorithm picks out at most L of the 143 atoms.
Finally, after deriving the test and training sparse matrices via the OMP algorithm using the dictionary refined via K-SVD, a linear SVM classifier is trained over the training sparse matrix (143 samples). The test sparse matrix (70 samples) is used to assess the ability of the classifier to generalize. A grid search is applied to find the best regularization parameter C.
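The training-and-grid-search step just described can be sketched as follows; this binary subgradient-descent SVM is a minimal stand-in for the multinomial linear SVM actually used, and the grid values are illustrative:

```python
import numpy as np

def train_linear_svm(X, y, C=1.0, epochs=200, lr=0.01):
    """Minimal binary linear SVM trained by subgradient descent on the
    L2-regularized hinge loss (labels in {-1, +1})."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                                  # margin violators
        grad_w = w - C * (y[mask, None] * X[mask]).sum(axis=0) / n
        grad_b = -C * y[mask].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def grid_search_C(X_tr, y_tr, X_val, y_val, grid=(0.01, 0.1, 1, 10, 100)):
    """Pick the regularization parameter C with the best validation accuracy."""
    best_C, best_acc = None, -1.0
    for C in grid:
        w, b = train_linear_svm(X_tr, y_tr, C=C)
        acc = np.mean(np.sign(X_val @ w + b) == y_val)
        if acc > best_acc:
            best_C, best_acc = C, acc
    return best_C, best_acc

# usage: two separable Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(3.0, 0.5, (40, 2)), rng.normal(-3.0, 0.5, (40, 2))])
y = np.array([1] * 40 + [-1] * 40)
best_C, acc = grid_search_C(X, y, X, y)
```

In practice the grid search is run against a held-out development set, as in the leave-one-out protocol described above, rather than against the training data.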
Table 2: Recognition rate per class on the JAFFE database
Class    Recognition Rate %
AN       99
DI       90
FE       95
HA       89
SA       100
SU       100
NE       91
Table 2 presents the recognition rate per class. It shows that the expressions “sadness” (SA) and “surprise” (SU) are perfectly classified, with “anger” (AN) close behind at 99%. The expressions “disgust” (DI), “happy” (HA) and “neutral” (NE) are recognized with around 90% accuracy, and “fear” (FE) reaches a 95% recognition rate. The final average recognition rate is 94.85%.
Table 3: Comparison with other approaches on the JAFFE database
Approach       Average Recognition Rate %
SFER (ours)    94.85
LC-KSVD1       76
LC-KSVD2       78
CAE-based      95.8
FIS            87.6
Sobel-based    93.1
We compare our approach with other sparse approaches, LC-KSVD1 and LC-KSVD2 [Jiang et al., 2013], but also with other techniques: a convolutional autoencoder (CAE), a Sobel-based method, and a fuzzy inference system (FIS) [Hamester et al., 2015]. Table 3 shows the average recognition rate of these different approaches. Our approach clearly outperforms the other sparse approaches and exhibits performance similar to the most recent state-of-the-art methods.

4.2 Model Evaluation over the DynEmo Database
The DynEmo Database: DynEmo is a database of dynamic and natural emotional facial expressions (EFEs). It covers six spontaneous expressions: “irritation”, “curiosity”, “happiness”, “worried”, “astonishment”, and “fear” (see figure 8). These expressions were elicited by showing short emotive clips to volunteer subjects. The database contains a set of 125 recordings of EFEs of ordinary Caucasian people (ages 25 to 65, 182 females and 176 males) filmed in natural but standardized conditions. In this set, each EFE recording is associated both with the affective state of the expresser and with continuous annotations of observers' ratings of the emotions displayed throughout the recording (see figure 9). The x-axis of figure 9 represents the timeline, while the y-axis represents the probability of judgment for each frame. In the rest of this paper, we refer to the expresser as the encoder and to the observer as the decoder. Figure 9 shows that most decoders recognized the feeling of the encoder as irritation at the beginning of the video (timeline: frames 1 to 20), while for frames 21 to 76 different decoders gave different judgments. To overcome this problem, when decoders disagree we associate to each frame the class that gets the highest probability. For example, for the frames between 36 and 41, the apex (maximum expressiveness of the emotion) is associated with the astonishment class, which has the highest probability. In some cases, when two or more classes have the same probability, we use the previous frame's judgment as additional information to label the current frame.
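The frame-labeling rule just described (highest decoder probability, with a previous-frame fallback on ties) can be sketched as follows; the function name and data layout are ours:

```python
def label_frames(frame_probs, classes):
    """Assign each frame the class with the highest decoder probability;
    on ties, fall back to the previous frame's label.
    frame_probs: list of dicts mapping class name -> probability."""
    labels = []
    for probs in frame_probs:
        best = max(probs.values())
        winners = [c for c in classes if probs.get(c, 0.0) == best]
        if len(winners) == 1:
            labels.append(winners[0])
        else:
            # tie: reuse the previous frame's judgment when available
            labels.append(labels[-1] if labels else winners[0])
    return labels

# usage: three frames, with a tie on the second frame
classes = ["irritation", "astonishment"]
seq = [{"irritation": 0.8, "astonishment": 0.2},
       {"irritation": 0.5, "astonishment": 0.5},
       {"irritation": 0.1, "astonishment": 0.9}]
frame_labels = label_frames(seq, classes)
# -> ["irritation", "irritation", "astonishment"]
```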
We built a labelled spontaneous database in which the frames are extracted based on the ground truth defined above. 480 images of female and male facial expressions of 65 different identities form the database. Each image was cropped after face detection. The head is not always in a frontal pose. The number of images in each of the six categories of expressions is the same (80 images per class). The dataset is challenging since it is closer to natural human behaviour, and figure 10 shows that even for the same emotion, people can express it in different ways.
DynEmo Database Protocol: Identities that appear in the training data sets do not appear in the test set.

train set: 60 images per class are picked out as training set. In total we have 360 facial expression images, randomly shuffled, for training our algorithm.

development set: Leaveoneout cross validation is considered over the training set to tune the algorithm parameters.

test set: 20 images per class are picked out as test set. In total we have 120 facial expression images, randomly shuffled, to test the performance of our algorithm.
Experimental Setup and Analysis:
The same experimental setup as in the control experiment is followed. First, the dataset is divided into two portions based on the number of images per class, as defined in the DynEmo dataset protocol above. Then, RFFD is applied, deriving the optimal random matrix that generates good discriminative features and the optimal dimensionality (250 features). Therefore, after the first stage of the SFER algorithm, we get a training set of 360 images and a test set of 120 images, each with 250 feature points. The performance of RFFD over the projected data, prior to the second stage of the SFER algorithm, is again compared with that of PCA: PCA is clearly far less able than RFFD to extract good discriminative features when considering spontaneous facial expressions.
Second, a dictionary of size 250 × 360 (250 features, 360 atoms) is initialized with the projected training set. The sparsity level L is estimated as a fraction of the dictionary size by controlling the absolute reconstruction error. The dictionary is refined via K-SVD and the sparse matrix is derived via the OMP algorithm.
Finally, a linear SVM classifier is trained over the training sparse matrix (360 samples). The test sparse matrix (120 samples) is used to assess the ability of the classifier to generalize. A grid search is applied to find the best regularization parameter C.
Table 4: Confusion matrix and recognition rate (RR) per class on the DynEmo database
Class  IRR  CU   HA   WO   AST  FE   RR
IRR    85   5    5    0    5    0    81
CU     5    95   0    0    0    0    95
DI     0    0    95   5    0    0    93
WO     15   0    5    80   0    0    86
AST    0    0    0    0    100  0    98
FE     5    0    0    0    0    95   97
Table 4 shows the confusion matrix and the average recognition rate per class. The highest number of misclassifications is obtained for “irritation” (IRR) and “worried” (WO); figure 8 shows that the WO and IRR expressions are visually close to each other. For “curiosity” (CU), “astonishment” (AST) and “fear” (FE), the obtained recognition rate is 95% or above. The class “disgust” (DI) gets a 93% recognition rate. The average recognition rate is 91.67%. Table 5 shows the recognition rate on the DynEmo dataset compared with the other sparse approaches. Our approach performs much better than LC-KSVD1 and LC-KSVD2.
Table 5: Comparison on the DynEmo database
Approach       Average Recognition Rate %
SFER (ours)    91.68
LC-KSVD1       20.1
LC-KSVD2       85.4

Table 6: Combined JAFFE + DynEmo database: image distribution and recognition rate (RR) per class
Class         AN  IRR  SU  CU  SA  AST  HA  WO  DI  FE
Training set  20  60   20  60  20  60   20  60  80  80
Test set      10  20   10  20  10  20   10  20  30  30
RR %          93  78   94  90  88  92   89  83  89  88
Average recognition rate: 88.4 %
4.3 Generalization Performance
The system's generalization performance is evaluated on the combination of the two datasets, JAFFE + DynEmo. Table 6 shows the distribution of the new database, the average recognition rate per class, and the final average recognition rate obtained over the new database. The same experimental setup as in the two previous experiments is followed. The model is tuned by performing 10-fold cross-validation over the training set and tested over the test set. Table 6 shows that our model is capable of recognizing different classes related to different emotions and mental states: an average recognition rate of 88.4% over the 10 classes is achieved.
5 Conclusion
In this paper, a robust spontaneous facial expression recognition algorithm (SFER) based on facial images, which recognizes non-basic affective states including mental states, is presented. We developed a method to pretrain the dictionary that enforces sparsity and enhances dictionary performance. We showed that it is possible to learn an undercomplete dictionary once good discriminative features are extracted prior to the dictionary refining stage, which ensures the uniqueness of the selected atoms from the dictionary during the optimization process. We proposed the use of random projection as a means of dimensionality reduction and of solving the shared subspace problem. We obtained very good recognition rates on the recent spontaneous facial database DynEmo. A possible direction for future work is exploiting the temporal dynamics of facial expressions in order to improve the recognition rates. Temporal information might be useful since expressions vary not only in their facial deformations but also in their onset, apex, and offset timings.
References
 [Bradley and Bagnell, 2008] Bradley, D. M. and Bagnell, J. A. (2008). Differential sparse coding.
 [Buciu and Pitas, 2004] Buciu, I. and Pitas, I. (2004). Application of nonnegative and local non negative matrix factorization to facial expression recognition. In Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on, volume 1, pages 288–291. IEEE.
 [Candes and Romberg, 2005] Candes, E. and Romberg, J. (2005). l1-magic: Recovery of sparse signals via convex programming. URL: www.acm.caltech.edu/l1magic/downloads/l1magic.pdf, 4:46.
 [Candès et al., 2006] Candès, E. J., Romberg, J., and Tao, T. (2006). Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. Information Theory, IEEE Transactions on, 52(2):489–509.
 [Donoho, 2006] Donoho, D. L. (2006). Compressed sensing. Information Theory, IEEE Transactions on, 52(4):1289–1306.
 [Ekman, 1999] Ekman, P. (1999). Basic emotions. Handbook of cognition and emotion, 98:45–60.
 [El Kaliouby and Robinson, 2005] El Kaliouby, R. and Robinson, P. (2005). Real-time inference of complex mental states from facial expressions and head gestures. In Real-time vision for human-computer interaction, pages 181–200. Springer.
 [Elad and Aharon, 2006] Elad, M. and Aharon, M. (2006). Image denoising via sparse and redundant representations over learned dictionaries. Image Processing, IEEE Transactions on, 15(12):3736–3745.

 [Hamester et al., 2015] Hamester, D., Barros, P., and Wermter, S. (2015). Face expression recognition with a 2-channel convolutional neural network. In 2015 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE.
 [Hoyer, 2003] Hoyer, P. O. (2003). Modeling receptive fields with nonnegative sparse coding. Neurocomputing, 52:547–552.
 [Jiang et al., 2013] Jiang, Z., Lin, Z., and Davis, L. S. (2013). Label consistent k-svd: Learning a discriminative dictionary for recognition. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(11):2651–2664.
 [Kanade et al., 2000] Kanade, T., Cohn, J. F., and Tian, Y. (2000). Comprehensive database for facial expression analysis. In Automatic Face and Gesture Recognition, 2000. Proceedings. Fourth IEEE International Conference on, pages 46–53. IEEE.
 [Lyons et al., 1998] Lyons, M., Akamatsu, S., Kamachi, M., and Gyoba, J. (1998). Coding facial expressions with gabor wavelets. In Automatic Face and Gesture Recognition, 1998. Proceedings. Third IEEE International Conference on, pages 200–205. IEEE.
 [Mairal et al., 2010] Mairal, J., Bach, F., Ponce, J., and Sapiro, G. (2010). Online learning for matrix factorization and sparse coding. The Journal of Machine Learning Research, 11:19–60.
 [Mairal et al., 2008] Mairal, J., Elad, M., and Sapiro, G. (2008). Sparse representation for color image restoration. Image Processing, IEEE Transactions on, 17(1):53–69.
 [Peng et al., 2009] Peng, X., Zou, B., Tang, L., and Luo, P. (2009). Research on dynamic facial expressions recognition. Modern Applied Science, 3(5):31.
 [Rubinstein et al., 2010] Rubinstein, R., Bruckstein, A. M., and Elad, M. (2010). Dictionaries for sparse representation modeling. Proceedings of the IEEE, 98(6):1045–1057.
 [Rubinstein et al., 2008] Rubinstein, R., Zibulevsky, M., and Elad, M. (2008). Efficient implementation of the K-SVD algorithm using batch orthogonal matching pursuit. CS Technion, 40(8):1–15.
 [Sanghai et al., 2005] Sanghai, K., Su, T., Dy, J., and Kaeli, D. (2005). A multinomial clustering model for fast simulation of computer architecture designs. In Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, pages 808–813. ACM.
 [Scherer and Ekman, 1982] Scherer, K. R. and Ekman, P. (1982). Handbook of methods in nonverbal behavior research, volume 2. Cambridge University Press Cambridge.
 [Sulic et al., 2010] Sulic, V., Perš, J., Kristan, M., and Kovacic, S. (2010). Efficient dimensionality reduction using random projection. In 15th Computer Vision Winter Workshop, pages 29–36.
 [Tcherkassof et al., 2013] Tcherkassof, A., Dupré, D., Meillon, B., Mandran, N., Dubois, M., and Adam, J.M. (2013). Dynemo: A video database of natural facial expressions of emotions. The International Journal of Multimedia & Its Applications, 5(5):61–80.
 [Tong et al., 2007] Tong, Y., Liao, W., and Ji, Q. (2007). Facial action unit recognition by exploiting their dynamic and semantic relationships. IEEE Transactions on Pattern Analysis & Machine Intelligence, (10):1683–1699.
 [Tropp, 2004] Tropp, J. A. (2004). Greed is good: Algorithmic results for sparse approximation. IEEE Transactions on Information theory, 50(10):2231–2242.
 [Tsagkatakis and Savakis, 2009] Tsagkatakis, G. and Savakis, A. (2009). A random projections model for object tracking under variable pose and multicamera views. In Distributed Smart Cameras, 2009. ICDSC 2009. Third ACM/IEEE International Conference on, pages 1–7. IEEE.

 [Vapnik, 2013] Vapnik, V. (2013). The nature of statistical learning theory. Springer Science & Business Media.
 [Vempala, 2005] Vempala, S. S. (2005). The random projection method, volume 65. American Mathematical Soc.
 [Wang et al., 2010] Wang, J., Yang, J., Yu, K., Lv, F., Huang, T., and Gong, Y. (2010). Locality-constrained linear coding for image classification. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 3360–3367. IEEE.
 [Wright et al., 2009] Wright, J., Yang, A. Y., Ganesh, A., Sastry, S. S., and Ma, Y. (2009). Robust face recognition via sparse representation. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(2):210–227.
 [Yang et al., 2009] Yang, J., Yu, K., Gong, Y., and Huang, T. (2009). Linear spatial pyramid matching using sparse coding for image classification. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 1794–1801. IEEE.
 [Zhu and Ramanan, 2012] Zhu, X. and Ramanan, D. (2012). Face detection, pose estimation, and landmark localization in the wild. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2879–2886. IEEE.