1 Introduction
Recognition of human facial expressions is a useful functionality in computer applications based on human-computer interfacing (HCI). The algorithmic background for such systems belongs to artificial intelligence (AI) in general, with a strong share of computer vision research and development.
Facial expressions, as a natural nonverbal means of communication, convey human feelings. This language developed during human evolution as an irreplaceable tool of mutual communication.
Research of the past centuries, including recent findings of psychologists and physiologists, suggests that visible emotions, regardless of sex, nationality, or religion, can be classified into six basic forms (Ekman and Friesen [1]): happiness, anger, fear, disgust, surprise, and sadness. In recognition systems, for completeness, a neutral category is appended.
The automatic recognition of facial expressions has attracted many researchers in recent years of image analysis development. For instance, already in 1998 the authors from the famous Pittsburgh group (Lien, Kanade, Cohn, Li [2]) compared four types of facial expressions, reporting their recognition accuracy for the frontal pose. However, the algorithm recognizes only upper face expressions in the forehead and brow regions. Interestingly, the facial expressions are represented by combinations of action units defined years earlier as the Facial Action Coding System (FACS) by Ekman and Friesen in [3]. The method uses a Hidden Markov Model (HMM) to recognize "words" of action units.
The FACS concept had a significant impact on the graphics standardization of the 3D human head model conducted within the MPEG-4 group. The works led to the Candide3 model [4], including numerical values for model vertices and mesh indexes for the standard model geometry, shape units for geometry individualization, and action units for model animation. Introducing the Candide model happened to be an important step in facial image analysis via 3D models. Then (in 2001), the missing factor was a tool for detecting facial landmarks in a fast and accurate way. The general opinion was that such a task was beyond the contemporary computing technology.
Image analysis research was then facing the mythical "curse of high-dimensional modeling", and the optical/pixel flow tools were the evidence of this crisis.
However, after a decade both computing technology and algorithms have made enormous progress, and obtaining 3D models of real human heads in real time has become a reality in human-computer interfaces (HCI), as shown for instance in [5]. At nearly the same time the "neural revolution" arrived in the form of deep neural networks (DNN), dramatically changing the status of "intelligent applications", including visible emotion recognition in real time.
Deep learning approaches, particularly those using convolutional neural networks (CNN), have been successful at computer vision related tasks due to achieving good generalization from multidimensional data [6, 7, 8, 9]. Facial expression and emotion recognition with deep learning methods were used in [10, 11, 12, 13, 14, 15]. Moreover, a DNN was jointly learned with an SVM by Tang [14] for emotion classification. There is a great number of databases for facial expression recognition from images, as well as sophisticated recognition methods used in the wild.
2 Geometric feature extraction
On top of any facial image analysis, a face detector is usually employed. In our research we use the face detector of King [16] available in the dlib library [17]. It is a novel proposal which appears to be more accurate than the Viola and Jones face detector [18] (cf. Fig.1) available in the opencv library, and it is still being improved by retraining for new data sets.
The face detector offered in dlib is an instance of more general object detector which can be tuned for various object classes and various feature extractors.
The generality of the maximum margin object detector (MMOD) in dlib is based on concepts similar to those proposed by Joachims et al. [19] when developing the ideas of the Structural Support Vector Machine (SSVM).
Compared to the Viola and Jones algorithm for face detection, instead of a boosted cascade of weak classifiers based on ensembles of local region contrasts, King's proposal offers both the HOG features (HOG/MMOD), i.e. the Histogram of Oriented Gradients [20], and features extracted by a trained neural network (CNN/MMOD). The SVM model is trained by a max-margin convex optimization problem defined for collections of image rectangles. The sophisticated method of selecting image rectangles with objects of interest avoids the combinatorial explosion by the SSVM trick, where only the worst constraints are taken into the relevant quadratic optimization, and by a smart heuristics which, in a greedy way, allows us to get suboptimal rectangle configurations for a complex but convex risk function.
2.1 Facial characteristic landmarks
At the feature extraction stage, the following three concepts of face description are confronted: (a) static 2D face geometry represented by its 68 characteristic landmarks (FP68); (b) dynamic 3D geometry defined by motion parameters for eight distinguished face parts (denoted as AU8) of the personalized Candide3 model; (c) static 2D visual description as a 2D array of gray scale pixels (known as the facial raw image).


The FP68 detector is also implemented in the dlib library [17]. The detector exploits many regression trees for 68 HOG features mapping. The cascade approach of using many small regression trees gives a more effective detector than using one large regression model. The trees are built using stochastic gradient boosting of Friedman [21]. On the other hand, the dlib FP68 detector is unsatisfactory for detecting landmarks on non-frontal face images. In most cases landmarks are incorrectly marked, especially for the test database. Therefore we switch to a CNN-based FP68 regressor [22], built on top of MobileNetV1 [23] and trained on the public datasets 300W, 300VW, IBUG [24, 25, 26, 27, 28], in order to extract the missing facial landmarks.
2.2 Candide3 model personalization and animation
Candide3 model (cf. Fig.3) consists of a static human head geometry specified by a discrete set of 3D points (also known as the mean Candide shape) with coordinates normalized to the spatial cube, and a dynamic geometry specified via motion vectors assigned to distinguished groups of 3D points (also known as shape units and action units).



2.2.1 Concept of action and shape units
The Candide3 model can be scaled independently in shape and action units, i.e. a local motion can be performed. While the shape units can be used to personalize the face features (e.g. mouth width) the dynamic scaling of action units enables simulation of local face motions typical for face expressions [29].
After local motions, global rotation is applied followed by global scaling and global translation. While local motions (personalization and animation) are performed in Cartesian coordinate system of Candide3, the global scaling and translation are usually used to transfer the model to an observer coordinate system. If the observer is a web camera then the face image can be used to identify parameters of local motions provided we have a fast algorithm identifying in the image the points which convey the local motion.
In the identification of local motion parameters we also have to know the global affine motion parameters (rotation, scaling, and translation). To identify the global motion we also need points of the model which do not belong to groups of shape and action units. Fortunately, we can select 37 facial salient points out of the FP68 landmarks to be used to minimize the modeling error.
In Fig.4 we see that the major information of the facial muscle movements is located at the eyes, eyebrows, mouth and jaw. Therefore action units are assigned to these facial parts. The action units having an impact on visible emotion modeling are given in Tab.1. A particular face expression is always a linear combination of many action units:

smile: AU12 (lip corner puller) + AU25 (lips part);

anger: AU4 (brow lowerer) + AU5 (upper lid raiser) + AU7 (lid tightener);

surprise: AU1 + AU2 (brow raisers) + AU5 + AU26/27 (jaw drop / mouth stretch).
AU  Description  AU  Description 

1  Inner brow raiser  5  Upper lid raiser 
2  Outer brow raiser  7  Lid tightener 
4  Brow lowerer  9  Nose wrinkler 
26  Jaw drop  12  Lip corner puller 
25  Lips part  10  Upper lip raiser 
27  Mouth stretch  15  Lip corner depressor 
16  Lower lip depressor  20  Lip stretcher 
Since in the identification of action unit parameters we can establish the correspondence of only 37 facial landmarks to Candide head points which have a significant impact on facial expressions, we select only eight action units to be identified: jaw drop (AU 26/27), brow lowerer (AU 4), lip corner depressor (AU 13/15), upper lip raiser (AU 10), lip stretcher (AU 20), lid tightener (AU 7), nose wrinkler (AU 9), eyes closed (AU 42/43/44/45).
While the action units approximate the muscle movements of visible expressions, the shape units capture the individual (personal) differences in the location and size of facial components. By registering the shape units, we can distort the original mean Candide3 model into an individualized form [30]; thus the action unit parameters we identify suffer much less influence from personal face geometry. In the personalization process we select the most trusted coefficients out of the shape units, including the following seven elements of FACS: eyebrows vertical position (SU 1), eyes vertical position (SU 2), eyes width (SU 3), eyes separation distance (SU 4), mouth vertical position (SU 5), mouth width (SU 6), eyes vertical difference (SU 7). Fig.5 illustrates the selected shape and action units.
2.2.2 Candide modeling from images – theory
The dynamic 3D modeling of the human head is represented by the parameters of the transformation applied to the Candide model. The global motion model has the following form:

(1) $g_{i} = s\,R\left(\bar g_{i} + \sum_{j \in S:\ i \in I_{j}} \sigma_{j}\, d_{ij}\right) + t$

where $i$ is the index of the point in the Candide3 model with $N$ points of global estimation; $I_{j}$ is the index set of points for the deformation $j$, where $j \in S$ and $S$ is a list of indexes for the selected shape units to be used for personalization (individualization); $d_{ij}$ is the unit deformation vector, being the column of the deformation matrix $D_{j}$ which is assigned to the point $i$ at the deformation $j$; the notation $i \in I_{j}$ selects for the summation only those deformations which refer to the point $i$. Having the global motion parameters and the local deformations for personalization, we can relate the local motion of action units with the globally moved Candide points (vertexes):

(2) $\hat g_{i} = g_{i} + s\,R \sum_{k \in A:\ i \in I_{k}} \alpha_{k}\, a_{ik}$

where $\hat g_{i}$ stands for the 3D points corresponding to landmarks. The scale parameter $s$ and the rotation matrix $R$ are the same as those used in the global estimation. $I_{k}$ is the index set of points for the local animation $k$, where $k$ belongs to the set $A$ of action units, while $a_{ik}$ is the unit deformation vector.
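As a concrete illustration, the transformation (1)-(2) can be sketched in a few lines of numpy. The variable names (mean shape `g_bar`, deformation matrices `S` and `A`, coefficients `sigma` and `alpha`) are ours, not part of the Candide3 specification, and all deformation columns are stacked into dense matrices for simplicity:

```python
import numpy as np

def candide_points(g_bar, S, sigma, A, alpha, s, R, t):
    """Animated Candide vertices: s * R * (g_bar + S @ sigma + A @ alpha) + t.

    g_bar: (3n,) flattened mean shape; S: (3n, m) shape-unit deformation matrix;
    A: (3n, k) action-unit deformation matrix; R: (3, 3) rotation; t: (3,) translation.
    Returns an (n, 3) array of transformed vertices.
    """
    g = g_bar + S @ sigma + A @ alpha      # local deformation (personalization + animation)
    pts = g.reshape(-1, 3) @ R.T           # rotate every vertex
    return s * pts + t                     # global scale and translation
```

With zero deformation coefficients, identity rotation, unit scale and zero translation, the function simply returns the mean shape reshaped to vertices.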
The main function of the optimization package is to identify the transformation parameters (local deformations for action units and shape units, global scaling, rotation, and translation) fitting the Candide3 model onto the current face. To this goal:

Core 3D points for global motion are selected;

Points for global estimation and individualization are selected from the core 3D points;

Indexes of deformation points for shape units are joined to the core points;

Active 2D points of the facial salient points FP68, having corresponding points among the core and deformation points, are selected;

Active 3D points are specified as those points which correspond to the active 2D points;

The number of active points is registered;

The centroid of the Candide model is computed.
For the current FP68 shape $f$, the initial values of the motion parameters with respect to the Candide3 shape are found using the general formulas (11):

Distortion coefficients $\sigma$ and rotation $R$: (3)

Scaling: (4) $s = \dfrac{\tilde f^{x} \cdot \tilde g^{x} + \tilde f^{y} \cdot \tilde g^{y}}{\|\tilde g^{x}\|^{2} + \|\tilde g^{y}\|^{2}}$, where the 2D/3D centered shapes are defined as follows: (5) $\tilde f_{a} = f_{a} - \bar f$, $\tilde g_{a} = \Pi g_{i(a)} - \Pi \bar g$

Translation: (6) $t = \bar f - s\,\Pi \bar g$

The error function is defined:

(7) $E(s, w, t, \sigma, \alpha) = \sum_{a=1}^{N_{a}} \left\| \Pi\!\left[ s\,R(w)\left(\bar g_{i(a)} + \sum_{j} \sigma_{j}\, d_{i(a)j} + \sum_{k} \alpha_{k}\, a_{i(a)k}\right) + t \right] - f_{a} \right\|^{2}$

where $s$ – the scaling parameter; $w$ – the vector representation of the rotation matrix (see the inverse Rodrigues formulas (9) below); $t$ – the translation vector in the image plane; $\sigma, \alpha$ – the parameters of local deformations; $i(a)$ – the active index of the 3D point corresponding to the active index $a$ of the 2D point; $\Pi$ – the orthographic projection onto the image plane.
The LMM (Levenberg-Marquardt Method) optimization procedure is performed for the error function defined by equation (7), with the initialization described above.
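For readers unfamiliar with the method, a minimal Levenberg-Marquardt loop can be sketched as below: a finite-difference Jacobian, a damped normal-equation step, and a multiplicative damping update. This is an illustrative sketch only; the step-acceptance rule and constants are simplified assumptions, and production code would rather use a dedicated optimization package:

```python
import numpy as np

def levenberg_marquardt(residual, x0, iters=50, lam=1e-3):
    """Minimize ||residual(x)||^2 with a basic Levenberg-Marquardt loop.

    residual: callable mapping a parameter vector to a residual vector.
    x0: initial parameter guess. Returns the refined parameter vector.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        # Jacobian by forward differences (cheap but approximate).
        eps = 1e-6
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (residual(x + dx) - r) / eps
        H = J.T @ J
        step = np.linalg.solve(H + lam * np.eye(x.size), -J.T @ r)
        if np.linalg.norm(residual(x + step)) < np.linalg.norm(r):
            x, lam = x + step, lam * 0.5   # accept step, relax damping
        else:
            lam *= 10.0                    # reject step, increase damping
    return x
```

In the paper's setting, `residual` would stack the per-landmark projection errors of equation (7), and `x` would hold the scaling, rotation vector, translation, and deformation parameters.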
The function to compute the orthographic projection uses the current transformation parameters. The rotation is represented by a 3D vector $w$ encoding the rotation angle $\theta = \|w\|$ (in radians) and the rotation axis $\hat w = w/\|w\|$. The rotation matrix is found from the Rodrigues formula. Namely, let $R$ be the rotation matrix for the rotation axis $\hat w$ and the rotation angle $\theta$. If $\theta = 0$ then $R = I$, otherwise

(8) $R = \cos\theta\, I + (1-\cos\theta)\,\hat w \hat w^{T} + \sin\theta\,[\hat w]_{\times}$

where $[\hat w]_{\times}$ is the skew-symmetric cross-product matrix of $\hat w$. Note that the rotation angle and the rotation axis can be recovered from the rotation matrix by the inverse Rodrigues formulas. They follow directly from the linearity of the trace and transposition operations for matrices:

(9) $\theta = \arccos\frac{\operatorname{tr}(R) - 1}{2},\qquad [\hat w]_{\times} = \frac{R - R^{T}}{2\sin\theta}.$
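A direct numpy transcription of the Rodrigues formula and its inverse might look as follows (a sketch; OpenCV's `cv2.Rodrigues` provides an equivalent, battle-tested implementation):

```python
import numpy as np

def rodrigues(w):
    """Rotation matrix from an axis-angle vector w (Rodrigues formula, eq. (8))."""
    theta = np.linalg.norm(w)
    if theta == 0.0:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])     # cross-product matrix [k]_x
    return (np.cos(theta) * np.eye(3)
            + (1.0 - np.cos(theta)) * np.outer(k, k)
            + np.sin(theta) * K)

def inverse_rodrigues(R):
    """Recover the axis-angle vector from a rotation matrix (eq. (9))."""
    theta = np.arccos((np.trace(R) - 1.0) / 2.0)
    if theta == 0.0:
        return np.zeros(3)
    K = (R - R.T) / (2.0 * np.sin(theta))  # skew part gives the axis
    return theta * np.array([K[2, 1], K[0, 2], K[1, 0]])
```

A round trip axis-angle -> matrix -> axis-angle reproduces the original vector (away from the degenerate angles 0 and pi).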
2.2.3 Affine equations for affine motion initialization
For orthographic viewing, the affine motion of a 3D cloud of points without rotation can be considered as the affine motion of its 2D projection $P$. Then we scale $P$ by $s$ and translate it by $t$ in order to bring it nearest to our target cloud of points $Q$. In our case $P$ and $Q$ are the Candide model and the FP68 landmarks, both restricted to still points only. With the word nearest expressed by the least squares error, we get the following set of equations.

Given four vectors $p^{x}, p^{y}, q^{x}, q^{y} \in \mathbb{R}^{n}$, we are looking for three parameters $s, t_{x}, t_{y}$ such that

(10) $\|s\,p^{x} + t_{x}\mathbf{1} - q^{x}\|^{2} + \|s\,p^{y} + t_{y}\mathbf{1} - q^{y}\|^{2} \longrightarrow \min$

Then the centering trick leads to the solution:

(11) $s = \dfrac{\tilde p^{x} \cdot \tilde q^{x} + \tilde p^{y} \cdot \tilde q^{y}}{\|\tilde p^{x}\|^{2} + \|\tilde p^{y}\|^{2}},\qquad t_{x} = \bar q^{x} - s\,\bar p^{x},\qquad t_{y} = \bar q^{y} - s\,\bar p^{y}$

where $\tilde p = p - \bar p\,\mathbf{1}$ denotes a centered vector and $\bar p$ its mean.
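The centering trick of (10)-(11) is a two-liner in numpy; the function name is ours and the point clouds are passed as (n, 2) arrays of corresponding points:

```python
import numpy as np

def fit_scale_translation(P, Q):
    """Least-squares fit of scale s and translation t so that s*P + t ~= Q.

    P, Q: (n, 2) arrays of corresponding 2D points (the centering trick, eq. (11)).
    Returns (s, t) with t a length-2 vector.
    """
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - p_bar, Q - q_bar          # centered clouds
    s = (Pc * Qc).sum() / (Pc ** 2).sum()  # optimal scale from eq. (11)
    t = q_bar - s * p_bar                  # optimal translation
    return s, t
```

Applying it to a cloud that was synthetically scaled and shifted recovers the original parameters exactly (up to floating-point error).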
2.2.4 Candide model personalization and local motion normalization
Candide model personalization is performed using data from the neutral expression. From a small number of image frames we get the second order statistics of each shape unit coefficient. Then a Gaussian approximation can be used to define a trust, or distrust, measure for the coefficient. The sorting by distrust of our statistics is based on the cumulative probability distribution in favor of the zero value, i.e. we compute the probability of those real values which are closer to zero than to the mean value:

(12) $d(\mu, \sigma) = \Phi\!\left(-\frac{|\mu|}{2\sigma}\right)$

where $\Phi$ is the cumulative probability distribution of the standard Gaussian, evaluated at the midpoint $\mu/2$ of the Gaussian $N(\mu, \sigma)$.

In Tab.2 the shape coefficients are sorted by the distrust measure. The data is acquired for the person with the photo given in Fig.2. All Gaussians characterizing personal face local proportions are drawn in Fig.8.
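The distrust measure of (12) reduces to a single evaluation of the Gaussian CDF; a stdlib-only sketch (the function name is ours) could read:

```python
import math

def distrust(mu, sigma):
    """Probability that X ~ N(mu, sigma) falls closer to 0 than to mu (eq. (12)).

    The boundary between 'closer to 0' and 'closer to mu' is the midpoint mu/2,
    so the answer is the standard Gaussian CDF at -|mu| / (2 * sigma).
    """
    if sigma == 0.0:
        return 0.0
    z = -abs(mu) / (2.0 * sigma)                     # midpoint in sigma units
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
```

A coefficient whose mean is far from zero relative to its spread gets a distrust near 0 (trusted), while a coefficient centered at zero gets the maximal distrust 0.5.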
Normalization of the action unit feature vectors

The normalization of action unit feature vectors changes the statistics of the animation coefficients to zero mean and unit variance. Since the motion vectors defined for each action unit do not share the same length, some facial components' coefficients get large values while others are less significant. Therefore:

(13) $\hat x_{k} = \frac{x_{k} - \mu_{k}}{\sigma_{k}}$

where $\hat x_{k}$ is the normalized feature vector, and $\mu_{k}$ and $\sigma_{k}$ are the mean value and the standard deviation of each action unit's values, respectively.
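The per-unit z-score normalization of (13) can be sketched as follows (the function name and the guard against constant coefficients are ours):

```python
import numpy as np

def normalize_au(features):
    """Per-action-unit z-score normalization (eq. (13)): zero mean, unit variance.

    features: (n_frames, n_units) matrix of raw AU coefficients.
    Returns a matrix of the same shape with each column standardized.
    """
    mu = features.mean(axis=0)
    sd = features.std(axis=0)
    sd = np.where(sd == 0.0, 1.0, sd)      # guard: leave constant coefficients unscaled
    return (features - mu) / sd
```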
3 Datasets for training and testing of classifiers
We run an experiment to compare with the classical personalized SVM emotion classifier as prior work. For personalization we choose datasets with a neutral face state. Moreover, to make the experiment more reliable and to see how different facial features affect the results, we choose a test dataset which also contains non-frontal face poses (Fig.8(b)) that are not provided in the training data (Fig.8(a)). This selection can show what generalization is obtained using different classification techniques. For this reason, we do not use open datasets in which we cannot find neutral faces for the personalization step of the structured SVM classifier. We use three different databases for training and evaluation. The selected datasets contain four different emotions for each person.
The first database we use is the Cohn-Kanade dataset established by Lucey et al. [31], a dataset specified for action units and emotion-specified expressions. It includes approximately 2000 video sequences of around 200 people between the ages of 18 and 50; 69% of the samples are from females and 31% from males; 65% of the subjects are white, 15% black, and 20% Asian or Latin American. Every video sequence of an individual expression starts from the neutral face and ends at the maximal deformation of the face for a certain expression, which provides us accurate data for action unit extraction.
The second dataset, the MUG facial expression database developed by Aifanti et al. [32], consists of image sequences of 86 subjects recorded while performing facial expressions: 35 females and 51 males, all of Caucasian origin, between 20 and 35 years of age. The performance of a particular expression is joined with different combinations of AUs; e.g., fear can be imitated with the mouth closed and only the upper part of the face moving, or with the lips stretched.
The third dataset, RaFD, which we use for testing purposes, was established by Oliver et al. [33]. It not only provides 67 models (including Caucasian males and females, Caucasian children, both boys and girls, and Moroccan Dutch males), but also three samples of an expression from different angles for the same subject; thus the dataset is more challenging for our recognition algorithms to test their performance under practical conditions. RaFD is a high quality dataset and contains pictures of eight emotions. Each emotion is shown with three different gaze directions, and pictures are taken from five camera angles. We use only three camera angles (90, 45 and 135 degrees) due to limitations of the face detector and facial landmark extraction, which are not reliable at 180 and 0 degrees (Fig. 10).
In total, 6409 image samples were used as the training subset, containing 2079 "neutral" samples, 2149 "smile" samples, 925 "angry" samples, and 1256 "surprised" samples from the MUG and Cohn-Kanade datasets (Fig.11), while 2412 samples, equally divided into these four classes, come from the RaFD dataset [33] for testing purposes.
4 Classification
We prepare a set of classifiers to recognize emotions using both facial feature vectors, like AU8 and FP68, and face grayscale images.
For facial expression geometric data, the comparison of SVM classifiers and CNN classifiers is performed directly. We train and test them on the same data.
4.1 AU8 and FP68 classifiers
The Support Vector Machine (SVM) finds, in a high dimensional space, the hyperplane separating the feature vectors with the maximum margin. The SVM classifier has proven to be accurate in the image processing and machine learning areas in the case of a limited number of feature samples.
For our experiments we use two different implementations of SVM: structured SVM (SSVM) and SVM using a one-against-one decision function with a polynomial kernel (SVM-poly).
To compare SVM and CNN classification techniques, sets of simple neural networks adapted to classify emotions were also created, with the AU8 and FP68 facial features as input.
We present the neural architectures in the symbolic notation for tensor neural networks, with BNF grammars defined in the paper [34].
For AU8, where the architecture options are limited due to the low dimensionality of the input data, a few kinds of two-tier neural networks have been checked. We observed that enlarging those networks and adding nonlinear activations leads to model overfitting.
a1au884 
AU8 au8 := 8_a; optima := [loss, AdaM, SoftMax] 
For FP68, an architecture was established for the best results using a cross-validation process. Deepening the neural network, modifying the activation function, or changing tensor dimensions does not improve the results, neither for training nor for testing data. However, the dropout regularization technique prevents overfitting. The multilayer perceptrons are trained with the Adam optimizer, with a starting learning rate of 0.001, halved whenever the validation metric stops improving.
a1fp6816br804 
FP68 fp68 := 136_a; optima := [loss, AdaM, SoftMax] 
4.2 Image based emotion classifiers
We train neural networks on grayscale images for the 4-class emotion recognition problem. We expect that features trained on a small training dataset can be at least as good as features determined by analytic methods.
Testing and training data require a common face detector to crop the facial image. From the dlib library [17] we choose the CNN/MMOD neural face detector, which is more robust to different face poses than the HOG/MMOD face detector.
4.2.1 Image augmentation
The original training image dataset is augmented by performing affine transforms, scalings, croppings, changes of lighting, contrast changes, and by adding Gaussian noise. The augmentation, performed using the imgaug library [35], makes the models more robust to changes of head pose, as seen in the test set (Fig. 13).
We define a list of image processing operations on training images as the augmentation procedure (Fig. 12). The augmentation consists of a stochastic set of procedures; some of them are applied with a particular probability. The order of augmentations is also randomized to provide a better mixture of training data.

Vertical axis symmetry, applied with a given probability;

Cropping of randomly chosen rows and columns of the image;

Gaussian blur, applied randomly with a given probability;

Contrast normalization;

Additive Gaussian noise;

Affine transform with a random matrix in the uniform pixel coordinates, representing the composition of the following basic transformations: scaling, translating, rotating, and shearing.
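A subset of the operations above can be emulated without imgaug in a few lines of numpy; the probabilities and magnitudes below are illustrative placeholders, not the values used in the experiments:

```python
import random
import numpy as np

def augment(img, seed=0):
    """Stochastic augmentation sketch for a grayscale face crop (2D uint8 array).

    Applies mirror / crop / additive noise in a randomized order,
    mirroring (in miniature) the augmentation list above.
    """
    rng = random.Random(seed)
    npr = np.random.RandomState(seed)

    def mirror(im):                        # vertical-axis symmetry, probability 0.5
        return im[:, ::-1] if rng.random() < 0.5 else im

    def crop(im):                          # drop up to 3 random rows/columns per side
        t, b = rng.randint(0, 3), rng.randint(0, 3)
        l, r = rng.randint(0, 3), rng.randint(0, 3)
        return im[t:im.shape[0] - b, l:im.shape[1] - r]

    def noise(im):                         # additive Gaussian noise, clipped to 8 bits
        return np.clip(im + npr.normal(0.0, 5.0, im.shape), 0, 255)

    ops = [mirror, crop, noise]
    rng.shuffle(ops)                       # randomized augmentation order
    for op in ops:
        img = op(img)
    return img.astype(np.uint8)
```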

4.2.2 Neural networks
We also test convolutional neural networks on different cropped face image sizes: 150x150, 75x75 and 50x50. This experiment shows the general impact of resolution on the performance. For each image size the best architecture is chosen (Tab.3). The detailed architectures consist of several convolutional, max pooling, dropout and fully connected layers.
The CNN1 network is constructed of convolutional layers with batch normalization and the nonlinear ReLU activation unit. After the last convolutional layer, global average pooling is used. The network does not contain a fully connected layer, thanks to the preceding convolutional layer which defines four feature maps.
yximg50 516pb 2_σ516pbr 532pb 2_σ532pbr 364pb 2_σ364pbr 164pb 
2_σ3128pbr 1256pb 2_σ3128p 1256pb 2_σ34p ga 
cnn1 img50 := 50_yx; optima := [loss, AdaM, SoftMax] 
The CNN2 network is inspired by the xception architecture [36]. It contains cast adder blocks with depthwise separable convolution layers. Global average pooling is used in the same way as in the network CNN1.
yximg75 38pbr 38pbr 
2_σ116p r 316ps_dbr 316ps_db 2_σ3m 2_σ132p r 332ps_dbr 332ps_db 2_σ3m 
2_σ164p r 364ps_dbr 364ps_db 2_σ3m 2_σ1128p r 3128ps_dbr 3128ps_db 2_σ3m 
34p ga 
cnn2 img75 := 75_yx; optima := [loss, AdaM, SoftMax] 
The above unstructured form can be simplified by exploiting user defined units (note that the notation of the residual block is now generalized to the multi-residual block, also known as the cast adder; cf. [34]).
xcept 2_σ11_$p r 31_$ps_dbr 31_$ps_db 2_σ3m xcept116 xcept232 xcept364 xcept4128 
yximg75 38pbr 38pbr xcept1 xcept2 xcept3 xcept4 34p ga 
cnn2 img75 := 75_yx; optima := [loss, AdaM, SoftMax] 
The above network CNN2 exhibits comparable results to CNN1 for the testing data. However, its architecture is more complicated, which in this case leads to better generalization, as measured by the difference between the performance for training and testing data (cf. Tab.3).
The last network, CNN3, is built of convolutional, max pooling and dense layers, with the first dense layer followed by a dropout layer during the training stage.
yximg150 332br 2m 332br 2m 364br 2m 364br 2m 64r 50 4 
cnn3 img150 := 150_yx; optima := [loss, AdaM, SoftMax] 
It is interesting that the architecture of CNN3 is simpler than the architecture of CNN1. Apparently, this follows from the density of image details at the higher image resolution.
5 Experimental results
Statistical results compare SVM and DNN using different features. The statistics computations share the same training and testing samples between SVM and DNN for the discriminative expression features. To analyze the details of the performance and to weigh the ups and downs in various aspects for different features and algorithms, we selected accuracy, Cohen's kappa, and the weighted $F_{1}$ score as the performance measures.
5.1 Accuracy
The accuracy takes the simple average success rate as the final score, counted as the fraction of correctly recognized samples among all testing samples. The statistics in Tab.3 indicate that, when dealing with the same discriminative features, both for AU and FP68, the DNN solutions are overwhelmingly more accurate than the SVM solutions. We also prepared a cross-validation methodology to test the performance of the SVM classifiers and to select the best one. In Tab.4 we put the standard deviation and mean of accuracy for the SVM classifiers. The statistics are collected from 30 different experiments. We observe that the standard deviation of the statistics is small, so the performance of each model is similar.
The AU results are more accurate than the pure geometric FP68; with the DNN classification algorithm, AU reaches almost the same level as the simple CNN1 50x50 result. Having the robust capability of dealing with raw images as the input themselves, the DNN results peak even higher, at 0.877 (CNN3 on the test data), while using the classical features as input gives a lower accuracy of at most 0.754 (cf. Tab.3).
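Given a confusion matrix, the success rate reported in the tables below is simply the trace divided by the total count; a one-line sketch:

```python
import numpy as np

def accuracy(conf):
    """Success rate from a confusion matrix n[i, j] = true class i predicted as j."""
    conf = np.asarray(conf, dtype=float)
    return np.trace(conf) / conf.sum()     # correct predictions over all predictions
```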
Train Data  Test Data  
Vectorized Data  AU  FP68  AU  FP68 
SSVM  0.838  0.800  0.411  0.335 
SVM (poly)  0.824  0.611  0.442  0.404 
DNN*  0.830  0.642  0.754  0.611 
Images  
CNN1 50x50x1  0.838  0.763  
CNN1 75x75x1  0.927  0.847  
CNN2 75x75x1  0.865  0.836  
CNN3 150x150x1  0.932  0.877 
Note: DNN – for each input data type there is a different architecture.
Train Data  Test Data  
Mean of SR  
Vectorized Data  AU  FP68  AU  FP68 
SVM (poly)  0.799  0.605  0.426  0.388 
SSVM  0.835  0.746  0.404  0.311 
Standard deviation of SR  
Vectorized Data  AU  FP68  AU  FP68 
SVM (poly)  0.011  0.003  0.008  0.008 
SSVM  0.002  0.038  0.004  0.027 
5.2 Cohen’s Kappa results
Let $n_{ij}$ be the number of testing examples which belong to the class $i$ but are recognized to be from the class $j$. Besides the confusion matrix $[n_{ij}]$, the probability of detection for each class of the detector is estimated and Cohen's $\kappa$ coefficient is computed. We use the following formulas:

(14) $p_{o} = \frac{\sum_{i} n_{ii}}{n},\qquad p_{e} = \sum_{i} \frac{n_{i\cdot}}{n} \cdot \frac{n_{\cdot i}}{n},\qquad \kappa = \frac{p_{o} - p_{e}}{1 - p_{e}}$

where $n$ is the total number of testing examples, and $n_{i\cdot}$, $n_{\cdot i}$ are the row and column sums of the confusion matrix. Cohen's kappa value is a statistical way of measuring the inter-rater agreement (accuracy) for the classes. Instead of only counting the percentage of correct predictions, Cohen's kappa value takes into account the possibility of the prediction occurring by chance. It means that $p_{e}$ is the probability of the random agreement and $p_{o}$ stands for the observed accuracy (agreement).
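The formulas (14) translate directly into a few lines of numpy (the helper name is ours):

```python
import numpy as np

def cohen_kappa(conf):
    """Cohen's kappa from a confusion matrix: (p_o - p_e) / (1 - p_e), eq. (14)."""
    conf = np.asarray(conf, dtype=float)
    n = conf.sum()
    p_o = np.trace(conf) / n                                   # observed agreement
    p_e = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / n**2   # chance agreement
    return (p_o - p_e) / (1.0 - p_e)
```

A perfectly diagonal confusion matrix yields kappa = 1, while a classifier that only matches chance agreement yields kappa near 0.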
From the results in Tab.5 we observe that for the AU input data the CNN algorithm exceeds the SVM solutions in the $\kappa$ measure by at least 0.4, while for FP68 the Cohen's kappa is more than three times higher. For the raw image data, the advantage of the CNN solutions grows with the pixel resolution, from 0.684 (CNN1, 50x50) to 0.836 (CNN3, 150x150). The conclusion about the higher generalization of the xception architecture CNN2 over the simpler architecture CNN1 is valid for both measures: the accuracy and the Cohen's kappa.
Train Data  Test Data  
Vectorized Data  AU  FP68  AU  FP68 
SSVM  0.772  0.718  0.215  0.112 
SVM (poly)  0.748  0.430  0.256  0.204 
DNN  0.758  0.485  0.673  0.482 
Images  
CNN1 50x50x1  0.775  0.684  
CNN1 75x75x1  0.899  0.798  
CNN2 75x75x1  0.814  0.782  
CNN3 150x150x1  0.905  0.836 
Note: DNN – for each input data type there is different architecture.
5.3 Weighted score evaluation
The performance measure (index) $F_{\beta}$, also known as the F-measure or F-score, shows the weighted average of precision and recall. Using the $\beta$ parameter, this measure combines the precision and recall measures into one performance measure:

(15) $F_{\beta} = (1 + \beta^{2})\,\frac{\text{precision} \cdot \text{recall}}{\beta^{2} \cdot \text{precision} + \text{recall}}$

We compute the $F_{1}$ score, which reduces the F-score to the harmonic mean of the precision and recall measures. As expected, the weighted $F_{1}$ score again leads to similar conclusions as we have seen for the accuracy and the Cohen's kappa measures.
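Formula (15) is a one-liner; a sketch with a guard for the degenerate all-zero case (the guard is our addition):

```python
def f_beta(precision, recall, beta=1.0):
    """Weighted F-score (eq. (15)): a beta-weighted harmonic combination
    of precision and recall; beta=1 gives the usual F1 score."""
    if precision == 0.0 and recall == 0.0:
        return 0.0                          # avoid division by zero
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)
```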
Train Data  Test Data  
Vectorized Data  AU  FP68  AU  FP68 
SSVM  0.810  0.786  0.354  0.228 
SVM (poly)  0.786  0.568  0.406  0.266 
DNN*  0.808  0.615  0.752  0.581 
Images  
CNN1 50x50x1  0.837  0.760  
CNN1 75x75x1  0.928  0.849  
CNN2 75x75x1  0.865  0.834  
CNN3 150x150x1  0.933  0.880 
Note: DNN – for each input data type there is different architecture.
6 Conclusions
Input  SR[%]  Symbolic notation for neural architecture 

FP68  57  a1fp6816br804 
AU8  75  a1au884 
76  yximg bbr1bbr2bbr3 164pb 2_σ3128pbr 1256pb 2_σ3128p 1256pb 2_σ34p ga  
83  yximg 38pbr 38pbr xcept1 xcept2 xcept3 xcept4 34p ga  
88  yximg 332br 2m 332br 2m 364br 2m 364br 2m 64r 50 4 
where the unit bbr is defined for : bbr 1_$2_$pb 2_σ1_$2_$pbr bbr15,16 bbr25,32 bbr33,64
The experiments regarding the facial expression classification performance of different features (raw images, FP68 landmarks and action units) and algorithms (SVM and DNN) illustrate that, when dealing with each type of those discriminative features, DNN as the classification algorithm shows the most promising results; even when classifying only the eight-dimensional data, it holds approximately a solid 30% advantage in accuracy over SVM when the testing samples are much more challenging than the training samples.
Namely, under the challenging conditions when the models are trained on frontal views of human faces but tested on arbitrary head poses, for geometric features the success rate (accuracy) indicates a nearly triple increase in the performance of CNN with respect to the SVM classifiers. For raw images, CNN outperforms in accuracy its best geometric counterpart (AU/CNN) by about 30 percent, while the best SVM solutions are inferior by nearly four times. For the F-score, the high advantage of raw/CNN over geometric/CNN and geometric/SVM is observed as well.
We also conclude that, contrary to the CNN based emotion classifiers, the generalization capability of the SVM based emotion classifiers with respect to the human head pose is poor.
To summarize and compare the neural architectures and their performance, we assemble them in Tab.7, sorted by the type of input, with the success rate column SR[%].
References
 [1] Ekman P, Friesen WV. Constants across cultures in the face and emotion. Journal of Personality and Social Psychology, 1971. 17(2):124–129.
 [2] Lien JJ, Kanade T, Cohn JF, Li CC. Automated facial expression recognition based on FACS action units. In: Proceedings Third IEEE International Conference on Automatic Face and Gesture Recognition. 1998 pp. 390–395. doi:10.1109/AFGR.1998.670980.
 [3] Ekman P, Friesen W. Facial Action Coding System: A Technique for the Measurement of Facial Movement. Consulting Psychologists Press, Palo Alto, 1978.
 [4] Ahlberg J. CANDIDE-3 – An Updated Parameterised Face. Technical report, Image Coding Group, Dept. of Electrical Engineering, Linköping University, 2001.
 [5] Yuksel K, Chang X, Skarbek W. Smile detectors correlation. In: Proc. SPIE, volume 10445. 2017. doi:10.1117/12.2280760. URL https://doi.org/10.1117/12.2280760.
 [6] Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Communications of the ACM, 2017. 60(6):84–90. doi:10.1145/3065386. URL http://dl.acm.org/citation.cfm?doid=3098997.3065386.
 [7] Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M, Berg AC, FeiFei L. ImageNet Large Scale Visual Recognition Challenge. arXiv:1409.0575 [cs], 2014. ArXiv: 1409.0575, URL http://arxiv.org/abs/1409.0575.
 [8] Simonyan K, Zisserman A. Very Deep Convolutional Networks for LargeScale Image Recognition. arXiv:1409.1556 [cs], 2014. ArXiv: 1409.1556, URL http://arxiv.org/abs/1409.1556.

 [9] Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A. Going deeper with convolutions. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, Boston, MA, USA. ISBN 9781467369640, 2015 pp. 1–9. doi:10.1109/CVPR.2015.7298594. URL http://ieeexplore.ieee.org/document/7298594/.
 [10] Liu M, Wang R, Li S, Shan S, Huang Z, Chen X. Combining Multiple Kernel Methods on Riemannian Manifold for Emotion Recognition in the Wild. In: Proceedings of the 16th International Conference on Multimodal Interaction - ICMI ’14. ACM Press, Istanbul, Turkey. ISBN 9781450328852, 2014 pp. 494–501. doi:10.1145/2663204.2666274. URL http://dl.acm.org/citation.cfm?doid=2663204.2666274.
 [11] Sikka K, Dykstra K, Sathyanarayana S, Littlewort G, Bartlett M. Multiple kernel learning for emotion recognition in the wild. In: Proceedings of the 15th ACM on International conference on multimodal interaction - ICMI ’13. ACM Press, Sydney, Australia. ISBN 9781450321297, 2013 pp. 517–524. doi:10.1145/2522848.2531741. URL http://dl.acm.org/citation.cfm?doid=2522848.2531741.
 [12] Liu P, Han S, Meng Z, Tong Y. Facial Expression Recognition via a Boosted Deep Belief Network. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, Columbus, OH, USA. ISBN 9781479951185, 2014 pp. 1805–1812. doi:10.1109/CVPR.2014.233. URL http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6909629.
 [13] Kanou SE, Ferrari RC, Mirza M, Jean S, Carrier PL, Dauphin Y, Boulanger-Lewandowski N, Aggarwal A, Zumer J, Lamblin P, Raymond JP, Pal C, Desjardins G, Pascanu R, Warde-Farley D, Torabi A, Sharma A, Bengio E, Konda KR, Wu Z, Bouthillier X, Froumenty P, Gulcehre C, Memisevic R, Vincent P, Courville A, Bengio Y. Combining modality specific deep neural networks for emotion recognition in video. In: Proceedings of the 15th ACM on International conference on multimodal interaction - ICMI ’13. ACM Press, Sydney, Australia. ISBN 9781450321297, 2013 pp. 543–550. doi:10.1145/2522848.2531745. URL http://dl.acm.org/citation.cfm?doid=2522848.2531745.
 [14] Tang Y. Deep Learning using Linear Support Vector Machines. arXiv:1306.0239 [cs, stat], 2013. URL http://arxiv.org/abs/1306.0239.
 [15] Yu Z, Zhang C. Image Based Static Facial Expression Recognition with Multiple Deep Network Learning. In: Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, ICMI ’15. ACM, New York, NY, USA. ISBN 9781450339124, 2015 pp. 435–442. doi:10.1145/2818346.2830595. URL http://doi.acm.org/10.1145/2818346.2830595.
 [16] King DE. Max-Margin Object Detection. arXiv:1502.00046 [cs.CV], 2015. URL http://arxiv.org/abs/1502.00046.
 [17] King DE. Dlib-ml: A Machine Learning Toolkit. Journal of Machine Learning Research, 2009. 10:1755–1758.
 [18] Viola P, Jones MJ. Robust Real-Time Face Detection. Int. J. Comput. Vision, 2004. 57(2):137–154. URL http://dx.doi.org/10.1023/B:VISI.0000013087.49260.fb.
 [19] Joachims T, Finley T, Yu CN. Cutting-Plane Training of Structural SVMs. Machine Learning, 2009. 77(1):27–59.
 [20] Dalal N, Triggs B. Histograms of Oriented Gradients for Human Detection. In: Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05) - Volume 1, CVPR ’05. IEEE Computer Society, Washington, DC, USA. ISBN 0769523722, 2005 pp. 886–893. doi:10.1109/CVPR.2005.177. URL http://dx.doi.org/10.1109/CVPR.2005.177.
 [21] Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V, Vanderplas J, Passos A, Cournapeau D, Brucher M, Perrot M, Duchesnay E. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 2011. 12:2825–2830.
 [22] Pilarczyk R, Skarbek W. Tuning deep learning algorithms for face alignment and pose estimation. In: Proc. SPIE, volume 10808, 2018, 8 pp. doi:10.1117/12.2501682. URL https://doi.org/10.1117/12.2501682.
 [23] Howard AG, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, Andreetto M, Adam H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv:1704.04861 [cs], 2017. URL http://arxiv.org/abs/1704.04861.
 [24] Sagonas C, Antonakos E, Tzimiropoulos G, Zafeiriou S, Pantic M. 300 Faces In-The-Wild Challenge: database and results. Image and Vision Computing, 2016. 47:3–18. doi:10.1016/j.imavis.2016.01.002. URL https://linkinghub.elsevier.com/retrieve/pii/S0262885616000147.
 [25] Sagonas C, Tzimiropoulos G, Zafeiriou S, Pantic M. 300 Faces in-the-Wild Challenge: The First Facial Landmark Localization Challenge. In: 2013 IEEE International Conference on Computer Vision Workshops. IEEE, Sydney, Australia. ISBN 9781479930227, 2013 pp. 397–403. doi:10.1109/ICCVW.2013.59. URL http://ieeexplore.ieee.org/document/6755925/.
 [26] Sagonas C, Tzimiropoulos G, Zafeiriou S, Pantic M. A Semi-automatic Methodology for Facial Landmark Annotation. In: 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops. IEEE, OR, USA. ISBN 9780769549903, 2013 pp. 896–903. doi:10.1109/CVPRW.2013.132. URL http://ieeexplore.ieee.org/document/6595977/.
 [27] Chrysos GG, Antonakos E, Zafeiriou S, Snape P. Offline Deformable Face Tracking in Arbitrary Videos. In: 2015 IEEE International Conference on Computer Vision Workshop (ICCVW). IEEE, Santiago, Chile. ISBN 9781467397117, 2015 pp. 954–962. doi:10.1109/ICCVW.2015.126. URL http://ieeexplore.ieee.org/document/7406475/.
 [28] Chrysos GG, Antonakos E, Zafeiriou S, Snape P. Offline Deformable Face Tracking in Arbitrary Videos. In: The IEEE International Conference on Computer Vision (ICCV) Workshops. 2015 pp. 50–58.
 [29] Chang X, Skarbek W. Facial expressions recognition by animated motion of Candide 3D model. In: Proc. SPIE, volume 10808, 2018, 10 pp. doi:10.1117/12.2500175. URL https://doi.org/10.1117/12.2500175.
 [30] Febriana P, Skarbek W. Personalization of Candide 3D model for human computer interfacing. In: Proc. SPIE, volume 10808, 2018, 8 pp. doi:10.1117/12.2501645. URL https://doi.org/10.1117/12.2501645.
 [31] Lucey P, Cohn JF, Kanade T, Saragih J, Ambadar Z, Matthews I. The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops. 2010 pp. 94–101. doi:10.1109/CVPRW.2010.5543262.
 [32] Aifanti N, Papachristou C, Delopoulos A. The MUG facial expression database. In: 11th International Workshop on Image Analysis for Multimedia Interactive Services WIAMIS 10. 2010 pp. 1–4.
 [33] Langner O, Dotsch R, Bijlstra G, Wigboldus DHJ, Hawk ST, van Knippenberg A. Presentation and validation of the Radboud Faces Database. Cognition and Emotion, 2010. 24(8):1377–1388. doi:10.1080/02699930903485076. URL https://doi.org/10.1080/02699930903485076.
 [34] Skarbek W. Symbolic Tensor Neural Networks for Digital Media - from Tensor Processing via BNF Graph Rules to CREAMS Applications. arXiv:1809.06582 [cs], 2018. URL https://arxiv.org/abs/1809.06582.
 [35] Jung A. imgaug - image augmentation library. https://github.com/aleju/imgaug, 2018.
 [36] Chollet F. Xception: Deep Learning with Depthwise Separable Convolutions. arXiv:1610.02357 [cs], 2016. URL http://arxiv.org/abs/1610.02357.