Training Deep Networks for Facial Expression Recognition with Crowd-Sourced Label Distribution

by Emad Barsoum et al.

Crowd sourcing has become a widely adopted scheme to collect ground truth labels. However, it is a well-known problem that these labels can be very noisy. In this paper, we demonstrate how to learn a deep convolutional neural network (DCNN) from noisy labels, using facial expression recognition as an example. More specifically, we had 10 taggers label each input image, and we compare four different approaches to utilizing the multiple labels: majority voting, multi-label learning, probabilistic label drawing, and cross-entropy loss. We show that the traditional majority voting scheme does not perform as well as the last two approaches, which fully leverage the label distribution. An enhanced FER+ data set with multiple labels for each face image will also be shared with the research community.








1 Introduction

Understanding the unspoken words in facial and body cues is a fundamental human trait, and such aptitude is vital in our daily communications and social interactions. In research communities such as human-computer interaction (HCI), neuroscience and computer vision, scientists have conducted extensive research to understand human emotions. Such studies would allow the creation of computers that understand human emotions as well as we do, leading to seamless interaction between humans and computers.

Among the many inputs that can be used to infer emotions, facial expression is by far the most popular. One of the pioneering works, by Paul Ekman [10], identified 6 emotions that are universal across different cultures. Later, Ekman [11] developed the Facial Action Coding System (FACS), which became the standard scheme for facial expression research. Facial expression analysis can thus be conducted by analyzing the facial action units of each facial part (eyes, nose, mouth corners, etc.) and mapping them into FACS codes [30]. Unfortunately, FACS coding requires professionally trained coders to annotate, and very few existing data sets are available for learning FACS-based facial expressions, in particular for unconstrained real-world images.

With the latest advances in machine learning, it is increasingly popular to recognize facial expressions directly from input images. Such appearance-based approaches have the advantage that ground truth labels can be abundantly obtained through crowd-sourcing platforms [1]. The cost of tagging a holistic facial emotion is often on the order of 1-2 US cents, which is orders of magnitude cheaper than FACS coding. On the other hand, crowd-sourced labels are usually much noisier than FACS codes annotated by specially trained coders. This can be attributed to two main reasons. First, emotions are very subjective, and it is common for two people to have diametrically different opinions on the same face image. Second, workers in crowd-sourcing platforms are paid very little, and their incentive lies more in getting work done than in ensuring tagging quality. Consequently, crowd-sourced emotion labels exhibit relatively low accuracy, as reported for the original FER data set [12].

In this paper, we adopt the latest deep convolutional neural network (DCNN) architecture and evaluate the effectiveness of four different schemes for training emotion recognition on crowd-sourced labels. To overcome the noisy label issue, we asked 10 crowd taggers to re-label each image in the FER data set, resulting in a new data set named FER+ [2]. We then change the cost function of the DCNN based on different schemes using the distribution of tags: majority voting, multi-label learning, probabilistic label drawing, and cross-entropy loss. We compare the performance of the trained classifiers and find the last two schemes to be the most effective for training emotion recognition classifiers on noisy labels.

The rest of the paper is organized as follows. Related works are discussed in Section 2, and the FER+ data set is described in Section 3. The four schemes for DCNN training are then presented in Section 4, while experimental results and conclusions are given in Sections 5 and 6, respectively.

2 Related Work

Crowd sourcing has been proven to be a cheap and effective way of tagging large amounts of data [24]. While the quality of crowd-sourced labels is not always guaranteed, much work has gone into improving tagging quality [3, 4, 6]. For example, one effective approach is to add a gold standard as part of the task, i.e., data or dummy questions with known answers [3, 9, 15]. Alternatively, one can filter out annotators that are too fast or too slow [7, 22, 26], or use a reference set with ground truth to monitor annotator accuracy and fatigue in real time [6].

Recognizing facial expressions based on appearance has been an active research topic for decades. Early works rely on hand-crafted features such as Gabor wavelets [34], Local Binary Patterns on Three Orthogonal Planes (LBP-TOP) [35], Pyramid Histogram of Oriented Gradients (PHOG) [20] and Local Quantized Patterns (LPQ) [5]. Lately, due to the great success of DCNNs in a wide variety of image classification tasks [16, 28], they have also been applied to emotion recognition [14, 18, 17, 29, 33]. One of the main attractions of DCNNs is their ability to learn features directly from data, avoiding the tedious hand-crafted feature engineering used in other supervised learning methods. Hence, it is possible to build end-to-end systems that learn directly from data and infer the output with a single learning algorithm. Naturally, the quality and quantity of the training data largely determine the overall performance of the final system.

We are not the first to realize that the emotion of a subject is often non-exclusive. For example, in [31], Trohidis et al. observed that music may evoke more than one emotion at the same time, and compared 4 multi-label classification algorithms to address the issue. In [8], the authors allowed the annotation of emotion mixtures for speech, and numerous works follow similar ideas in speech emotion recognition [19, 25]. In [36], the authors proposed an emotion distribution learning (EDL) algorithm for still images. Their algorithm extracts LBP features from the face region and learns a parametric model of the conditional probability distribution of emotions given an image. In contrast, our algorithm learns the features and the classifier simultaneously in a DCNN framework, thanks to a much bigger training set – the FER+ data set.

Figure 1: FER vs FER+ examples. Top labels are FER and bottom labels are FER+ (after majority voting).

3 The FER+ Data Set

The original FER data set was prepared by Pierre-Luc Carrier and Aaron Courville by web-crawling face images with emotion-related keywords. The images were filtered by human labelers, but the label accuracy is not very high [12]. A few examples are given in Figure 1.

For this paper, we decided to re-tag the FER data set through crowd sourcing. For each input image, we asked crowd taggers to label the image with one of 8 emotion types: neutral, happiness, surprise, sadness, anger, disgust, fear, and contempt. Taggers were required to choose a single emotion for each image, and the gold standard method was adopted to ensure tagging quality. In a first attempt, tagging stopped as soon as two taggers agreed on an emotion, but the resulting quality was unsatisfactory. In the end, we asked 10 taggers to label each image, thus obtaining a distribution of emotions for each face image.

Figure 2 plots tagging quality versus the number of taggers. We randomly chose 10k images in the data set and assume that the majority vote of all 10 labels is a good approximation to the “ground truth” label. For fewer taggers, we then compute how often their majority vote agrees with this “ground truth” emotion. As the figure shows, with 3 taggers the agreement is merely 46%; with 5 taggers the accuracy improves to about 67%, and with 7 taggers the agreement rises above 80%. We can therefore conclude that the number of taggers has a high impact on the final label quality [21].
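The agreement estimate above can be reproduced in miniature. The sketch below uses a hypothetical handful of images (not the actual FER+ tags): it subsamples k of an image's 10 tags and checks how often the subsample's majority matches the 10-tagger majority.

```python
import random
from collections import Counter

def majority(tags):
    """Return the most frequent tag (ties broken by first occurrence)."""
    return Counter(tags).most_common(1)[0][0]

def agreement_rate(all_tags, k, trials=2000, rng=None):
    """Estimate how often the majority of k randomly chosen taggers
    matches the majority of all 10 taggers (the proxy 'ground truth')."""
    rng = rng or random.Random(0)
    hits = 0
    for _ in range(trials):
        image_tags = rng.choice(all_tags)   # pick a random image
        subset = rng.sample(image_tags, k)  # pick k of its 10 tags
        hits += majority(subset) == majority(image_tags)
    return hits / trials

# Hypothetical toy data: each image carries 10 crowd tags over 8 emotions.
images = [
    ["happiness"] * 7 + ["neutral"] * 2 + ["surprise"],
    ["anger"] * 5 + ["disgust"] * 3 + ["neutral"] * 2,
    ["neutral"] * 4 + ["sadness"] * 4 + ["fear"] * 2,
]
for k in (3, 5, 7):
    print(k, agreement_rate(images, k))
```

With real data the curve rises with k, as in Figure 2; the toy numbers here only illustrate the mechanism.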

With 10 annotators per face image, we can now generate a probability distribution over the emotions captured by the facial expression, which enables us to experiment with multiple schemes during training. In Section 4.2, we discuss in depth the 4 schemes that we tried: majority voting, multi-label learning, probabilistic label drawing, and cross-entropy loss.

Figure 2: Tagger count versus quality.

4 DCNN Learning

Discriminating emotion based on appearance is essentially an image classification problem; therefore, a state-of-the-art DCNN model that performs well in image classification should also perform well in facial expression recognition. We tried multiple DCNN models, including custom versions of the VGG network [23], GoogLeNet [27] and ResNet [13]. Since comparing DCNN models is not the objective of this paper, we adopt a custom VGG network to demonstrate emotion recognition performance on the FER+ data set.

Figure 3: Our custom VGG13 network: yellow, green, orange, blue and gray denote convolution, max pooling, dropout, fully connected and soft-max layers, respectively.

4.1 Network Architecture

The input to our emotion recognition model is a gray-scale image at 64×64 resolution. The output is 8 emotion classes: neutral, happiness, surprise, sadness, anger, disgust, fear and contempt. Our custom VGG13 model is shown in Figure 3. It has 10 convolution layers, interleaved with max pooling and dropout layers. More specifically, after the input layer there are 2 convolution layers, each with 64 kernels of size 3×3. After max pooling, a dropout layer is added with a dropout rate of 25%. This structure repeats with varying numbers of convolution layers and kernels. After all the convolution layers, 2 dense layers are added, each with 1024 hidden nodes and followed by a 50% dropout layer. The final dense layer is followed by a soft-max layer to generate the output.
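As a concreteness check, the layer plan can be sketched in plain Python. The channel counts below (64, 64 / 128, 128 / 256×3 / 256×3) and the 64×64 input follow the publicly released FERPlus reference implementation, so treat them as assumptions rather than the definitive specification from the figure.

```python
# Sketch of the custom VGG13 stack: track feature-map size and parameter count.
# Channel plan and 64x64 input are assumptions based on the public FERPlus
# reference implementation; they may differ from the exact model in Figure 3.
conv_blocks = [[64, 64], [128, 128], [256, 256, 256], [256, 256, 256]]

def vgg13_summary(in_size=64, in_ch=1):
    size, ch, params = in_size, in_ch, 0
    for block in conv_blocks:
        for out_ch in block:                  # 3x3 conv, padding 1: size kept
            params += (3 * 3 * ch + 1) * out_ch
            ch = out_ch
        size //= 2                            # 2x2 max pooling (then dropout)
    flat = size * size * ch                   # input to the dense layers
    for out in (1024, 1024, 8):               # two hidden dense layers + output
        params += (flat + 1) * out
        flat = out
    return size, ch, params

size, ch, params = vgg13_summary()
print(size, ch, params)    # final 4x4x256 feature map, ~8.8M parameters
```

Counting parameters this way is a quick sanity check that the model is small enough to train from scratch on a data set of FER+'s size.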

Although the FER+ data set has only about 35k images, the dropout layers are effective in preventing the model from overfitting.

4.2 Training

We train the custom VGG13 network from scratch on the FER+ data set, employing the same split between training, validation and testing data as the original FER. During training, we augment the data set on the fly, applying affine transforms similar to those in [33]. Such data augmentation has been shown to improve the robustness of the model against translation, rotation and scaling.
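On-the-fly affine augmentation can be sketched as below. The rotation, shift and scale ranges are illustrative placeholders, not the exact values used in [33] or in our training.

```python
import math, random

def random_affine(rng, max_rot_deg=20.0, max_shift=0.08, scale_range=(0.9, 1.1)):
    """Sample an illustrative 2x3 affine matrix (rotation, scale, translation).
    The parameter ranges are placeholders, not the paper's exact settings."""
    theta = math.radians(rng.uniform(-max_rot_deg, max_rot_deg))
    s = rng.uniform(*scale_range)
    tx = rng.uniform(-max_shift, max_shift)   # translation as fraction of width
    ty = rng.uniform(-max_shift, max_shift)
    return [[s * math.cos(theta), -s * math.sin(theta), tx],
            [s * math.sin(theta),  s * math.cos(theta), ty]]

rng = random.Random(0)
m = random_affine(rng)   # a fresh transform is drawn for every training sample
```

Each epoch thus sees slightly different versions of every face, which is what makes the model robust to small geometric perturbations.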

Thanks to the large number of taggers per image, we can generate a probability distribution for each face image. In the following, we examine how to utilize the label distribution in a DCNN learning framework during training. Let there be a total of $N$ training examples $I^i$, $i = 1, \dots, N$. For the $i$-th example, let the custom VGG13 network's output after its soft-max layer be $q_k^i$, $k = 1, \dots, 8$, and the crowd-sourced label distribution for this example be $p_k^i$, $k = 1, \dots, 8$. Naturally, we have:

$$\sum_{k=1}^{8} p_k^i = 1, \qquad p_k^i \geq 0.$$
We experimented with four different schemes: majority voting (MV), multi-label learning (ML), probabilistic label drawing (PLD) and cross-entropy loss (CEL). These approaches are explained in detail below.

4.2.1 Majority Voting

In most existing facial expression data sets, each facial image is associated with one single emotion, so it is natural to use the majority of the label distribution as the single tag for the image. More formally, we create a new target distribution $\hat{p}^i$ for each example $I^i$, such that:

$$\hat{p}_k^i = \begin{cases} 1, & k = \arg\max_j p_j^i, \\ 0, & \text{otherwise}. \end{cases}$$

The cost function for DCNN learning is the standard cross-entropy cost, i.e.,

$$\mathcal{L} = -\sum_{i=1}^{N} \sum_{k=1}^{8} \hat{p}_k^i \log q_k^i.$$
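As a sketch (pure Python, with a hypothetical tag distribution rather than real FER+ data), the majority-voting scheme reduces to a one-hot target plus a standard cross-entropy:

```python
import math

def majority_target(p):
    """One-hot target from the crowd label distribution p (majority voting)."""
    k_star = max(range(len(p)), key=lambda k: p[k])
    return [1.0 if k == k_star else 0.0 for k in range(len(p))]

def cross_entropy(target, q, eps=1e-12):
    """Standard cross-entropy between a target distribution and soft-max output q."""
    return -sum(t * math.log(max(qk, eps)) for t, qk in zip(target, q))

# Hypothetical example: 7 of 10 taggers chose class 1, 3 chose class 0.
p = [0.3, 0.7, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
q = [0.2, 0.6, 0.05, 0.05, 0.025, 0.025, 0.025, 0.025]
loss_mv = cross_entropy(majority_target(p), q)   # = -log(0.6)
```

Note that the minority taggers' votes (class 0 here) are discarded entirely, which is exactly the information the later schemes try to keep.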
4.2.2 Multi-Label Learning

Many face images may exhibit multiple emotions; for example, someone can be happily surprised, or angrily disgusted. The idea of multi-label learning is to admit that such multi-emotion cases exist, and to let the learning algorithm match any of the emotions that a sufficient number of taggers labeled. Mathematically, we adopt a new loss function as follows:

$$\mathcal{L} = -\sum_{i=1}^{N} \log \max_{k} \big( \mathbb{1}_\theta(p_k^i)\, q_k^i \big),$$

where $\mathbb{1}_\theta(\cdot)$ is an indicator function with threshold $\theta$:

$$\mathbb{1}_\theta(p) = \begin{cases} 1, & p > \theta, \\ 0, & \text{otherwise}. \end{cases}$$
Since more than one emotion is acceptable for each face, we let the algorithm pick the emotion it wants to train on based on the output probability of each emotion, essentially applying multi-instance learning in the label space. Effectively, as long as the network output agrees with any emotion that a certain portion of the taggers chose, the cost is low. In our experiments, the threshold $\theta$ is set to 30%.
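A minimal sketch of this loss (hypothetical distributions, θ = 0.3 as above): emotions whose tagger share exceeds θ are admitted, and the cost depends only on the admitted emotion the network is most confident about.

```python
import math

def multi_label_loss(p, q, theta=0.3, eps=1e-12):
    """Cost is low if the network's top choice among the admitted
    emotions (tagger share > theta) receives high probability."""
    admitted = [qk for pk, qk in zip(p, q) if pk > theta]
    return -math.log(max(max(admitted, default=eps), eps))

# Hypothetical image where the taggers split between two emotions:
p = [0.4, 0.4, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0]
q1 = [0.5, 0.3, 0.05, 0.05, 0.025, 0.025, 0.025, 0.025]
q2 = [0.3, 0.5, 0.05, 0.05, 0.025, 0.025, 0.025, 0.025]
# Predicting either admitted emotion yields the same (low) cost:
assert multi_label_loss(p, q1) == multi_label_loss(p, q2)
```

This is the sense in which the scheme tolerates multi-emotion faces: the network is never penalized for preferring one admitted emotion over another.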

Scheme   Trial 1   Trial 2   Trial 3   Trial 4   Trial 5   Mean ± Std
MV       83.60%    84.89%    83.15%    83.39%    84.23%    83.852% ± 0.631%
ML       83.69%    83.63%    83.81%    84.62%    84.08%    83.966% ± 0.362%
PLD      85.43%    84.65%    85.34%    85.01%    84.50%    84.986% ± 0.366%
CEL      85.01%    84.59%    84.32%    84.80%    84.86%    84.716% ± 0.239%
Table 1: Testing accuracy from training VGG13 using four different schemes: majority voting (MV), multi-label learning (ML), probabilistic label drawing (PLD) and cross-entropy loss (CEL).

4.2.3 Probabilistic Label Drawing

In the probabilistic label drawing approach, when an example $I^i$ is used in a training epoch, a random emotion tag is drawn from the example's label distribution $p^i$. We then treat the example as if it has the drawn emotion as its single label. In the next epoch, the random drawing happens again, and $I^i$ may be associated with a different emotion tag. Over the many epochs of training, the targets should approach the true label distribution on average. Formally, at epoch $t$, we create a new distribution $\tilde{p}^i(t)$:

$$\tilde{p}_k^i(t) = \begin{cases} 1, & k = c^i(t), \\ 0, & \text{otherwise}, \end{cases}$$

where $c^i(t)$ is a random draw from the distribution $p^i$. The cost function for DCNN learning is the same standard cross-entropy loss:

$$\mathcal{L} = -\sum_{i=1}^{N} \sum_{k=1}^{8} \tilde{p}_k^i(t) \log q_k^i.$$
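The drawing step can be sketched as follows (with a hypothetical distribution): each epoch draws a fresh one-hot target, and over many epochs the drawn targets average to the label distribution.

```python
import random
from collections import Counter

def draw_target(p, rng):
    """Draw a one-hot target from the label distribution p (one draw per epoch)."""
    k = rng.choices(range(len(p)), weights=p, k=1)[0]
    return [1.0 if j == k else 0.0 for j in range(len(p))]

rng = random.Random(0)
p = [0.0, 0.7, 0.3, 0.0, 0.0, 0.0, 0.0, 0.0]   # hypothetical tagger distribution
# Over many simulated epochs, the drawn one-hot targets average back to p:
counts = Counter(tuple(draw_target(p, rng)).index(1.0) for _ in range(10000))
print(counts[1] / 10000, counts[2] / 10000)     # close to 0.7 and 0.3
```

The per-epoch target is hard (one-hot), yet the expectation over epochs equals the soft distribution, which is what links PLD to the plain cross-entropy scheme below.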
4.2.4 Cross-Entropy Loss

The fourth approach is the standard cross-entropy loss, where we treat the label distribution itself as the target the DCNN should approach. That is:

$$\mathcal{L} = -\sum_{i=1}^{N} \sum_{k=1}^{8} p_k^i \log q_k^i.$$
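In sketch form (again with a hypothetical distribution), this is simply cross-entropy against the soft target, which is minimized when the network output reproduces the tagger distribution:

```python
import math

def cel(p, q, eps=1e-12):
    """Cross-entropy of the soft-max output q against the full label distribution p."""
    return -sum(pk * math.log(max(qk, eps)) for pk, qk in zip(p, q))

p = [0.0, 0.7, 0.3, 0.0, 0.0, 0.0, 0.0, 0.0]
# The loss is lowest when q matches the tagger distribution p itself:
assert cel(p, p) <= cel(p, [0.0, 0.6, 0.4, 0.0, 0.0, 0.0, 0.0, 0.0])
```

Unlike MV, no tagger vote is discarded; unlike PLD, the soft target is used directly at every step rather than only in expectation.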
5 Experimental Results

We tested the above four schemes on the FER+ data set we created. As mentioned earlier, each image is tagged by 10 taggers. The label distribution is generated with a simple outlier rejection mechanism: emotions that receive too few tags have their frequency counts reset to zero. The remaining label frequencies are then normalized so that the distribution sums to one.
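The distribution-building step can be sketched as follows; the vote threshold `min_count` below is an assumed placeholder, since the exact rejection threshold is not spelled out here.

```python
def label_distribution(tags, emotions, min_count=2):
    """Build a per-image emotion distribution from crowd tags with simple
    outlier rejection: emotions receiving fewer than `min_count` votes are
    zeroed out before normalizing. The threshold value is an assumption."""
    counts = [sum(t == e for t in tags) for e in emotions]
    counts = [c if c >= min_count else 0 for c in counts]
    total = sum(counts)
    return [c / total for c in counts]

EMOTIONS = ["neutral", "happiness", "surprise", "sadness",
            "anger", "disgust", "fear", "contempt"]
tags = ["happiness"] * 6 + ["neutral"] * 3 + ["fear"]   # hypothetical 10 tags
p = label_distribution(tags, EMOTIONS)                  # fear's lone vote dropped
```

After rejection, the surviving counts (6 happiness, 3 neutral) renormalize to a valid distribution over the 8 emotions.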

To compare performance across the four approaches on the test set, we take the majority emotion of each test image as its single label, and measure prediction accuracy against that majority emotion.

For each scheme, we train our custom VGG13 network 5 times and report the accuracy numbers in Table 1. Due to random initialization, the accuracy of the same scheme varies across runs. It can be seen from the table that the PLD and CEL approaches yield the best accuracy on the test set; both are over 1% more accurate than MV. The t-value is around 3.1, which indicates the difference is statistically significant at the 99%–99.5% level. On the other hand, the difference between PLD and CEL is within one standard deviation. The slight advantage of PLD may be explained by its similarity to the independently discovered DisturbLabel approach in [32].
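The significance claim can be checked from Table 1's means and standard deviations with a two-sample (Welch) t statistic. The exact value depends on which pair of schemes is compared, so this is an illustrative recomputation rather than the authors' exact test:

```python
import math

def welch_t(m1, s1, m2, s2, n=5):
    """Two-sample t statistic for two schemes, each run n times."""
    return (m1 - m2) / math.sqrt(s1 ** 2 / n + s2 ** 2 / n)

t_pld = welch_t(84.986, 0.366, 83.852, 0.631)   # PLD vs MV from Table 1
t_cel = welch_t(84.716, 0.239, 83.852, 0.631)   # CEL vs MV from Table 1
print(t_pld, t_cel)   # roughly 3.5 and 2.9, bracketing the ~3.1 quoted in the text
```

Both values sit comfortably in the significant range for 5-run comparisons, consistent with the conclusion that PLD and CEL genuinely outperform MV.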


It was somewhat surprising to us that ML did not perform as well as PLD and CEL. Since we asked each tagger to tag only the single dominant emotion, the label distribution does not necessarily reflect the full emotion distribution of the underlying image, and we therefore expected ML to be a more flexible learning target. We hypothesize that the gap arises because only the majority emotion is used during testing, creating a larger mismatch between training and testing for ML. Further work is needed to verify this hypothesis.

Figure 4 shows the confusion matrix of the best performing network. We perform well on most of the emotions except disgust and contempt, because very few examples in the FER+ training set are labeled with these two emotions.

Figure 4: Confusion matrix for the probabilistic label drawing (PLD) scheme.

6 Conclusions

In this paper, we compared different schemes for training a DCNN on crowd-sourced label distributions. We showed that taking advantage of the multiple labels per image boosts classification accuracy compared with the traditional approach of using a single label obtained by majority voting.

The FER+ data set [2] is available for download at the following web address:


  • [1] Amazon mechanical turk., 2016 (accessed April 26, 2016).
  • [2] Fer+ emotion label., 2016 (accessed September 14, 2016).
  • [3] V. Ambati. Active Learning and Crowdsourcing for Machine Translation in Low Resource Scenarios. PhD thesis, Pittsburgh, PA, USA, 2012. AAI3528171.
  • [4] A. Batliner, S. Steidl, C. Hacker, and E. Nöth. Private emotions versus social interaction: a data-driven approach towards analysing emotion in speech. User Modeling and User-Adapted Interaction, 18(1):175–206, 2007.
  • [5] A. Bosch, A. Zisserman, and X. Munoz. Representing shape with a spatial pyramid kernel. In Proceedings of the 6th ACM international conference on Image and video retrieval, pages 401–408. ACM, 2007.
  • [6] A. Burmania, S. Parthasarathy, and C. Busso. Increasing the reliability of crowdsourcing evaluations using online quality assessment. IEEE Transactions on Affective Computing, PP(99):1–1, 2015.
  • [7] H. Cao, D. G. Cooper, M. K. Keutmann, R. C. Gur, A. Nenkova, and R. Verma. CREMA-D: crowd-sourced emotional multimodal actors dataset. IEEE Trans. Affective Computing, 5(4):377–390, 2014.
  • [8] L. Devillers, L. Vidrascu, and L. Lamel. Challenges in real-life emotion annotation and machine learning based detection. Neural Networks, 18(4):407–422, 2005.
  • [9] C. Eickhoff and A. P. de Vries. Increasing cheat robustness of crowdsourcing tasks. Inf. Retr., 16(2):121–137, 2013.
  • [10] P. Ekman and W. V. Friesen. Constants across cultures in the face and emotion. Journal of personality and social psychology, 17(2):124, 1971.
  • [11] P. Ekman and W. V. Friesen. Facial action coding system. 1977.
  • [12] I. J. Goodfellow, D. Erhan, P. L. Carrier, A. Courville, M. Mirza, B. Hamner, W. Cukierski, Y. Tang, D. Thaler, D.-H. Lee, et al. Challenges in representation learning: A report on three machine learning contests. In Neural information processing, pages 117–124. Springer, 2013.
  • [13] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
  • [14] S. E. Kahou, C. Pal, X. Bouthillier, P. Froumenty, Ç. Gülçehre, R. Memisevic, P. Vincent, A. Courville, Y. Bengio, R. C. Ferrari, et al. Combining modality specific deep neural networks for emotion recognition in video. In Proceedings of the 15th ACM on International conference on multimodal interaction, pages 543–550. ACM, 2013.
  • [15] A. Kittur, E. H. Chi, and B. Suh. Crowdsourcing user studies with mechanical turk. In M. Czerwinski, A. M. Lund, and D. S. Tan, editors, Proceedings of the 2008 Conference on Human Factors in Computing Systems, CHI 2008, 2008, Florence, Italy, April 5-10, 2008, pages 453–456. ACM, 2008.
  • [16] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1106–1114, 2012.
  • [17] M. Liu, S. Li, S. Shan, R. Wang, and X. Chen. Deeply learning deformable facial action parts model for dynamic expression analysis. In Computer Vision–ACCV 2014, pages 143–157. Springer, 2014.
  • [18] P. Liu, S. Han, Z. Meng, and Y. Tong. Facial expression recognition via a boosted deep belief network. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1805–1812. IEEE, 2014.
  • [19] E. Mower, A. Metallinou, C.-C. Lee, A. Kazemzadeh, C. Busso, S. Lee, and S. Narayanan. Interpreting ambiguous emotional expressions. In Affective Computing and Intelligent Interaction and Workshops, 2009. ACII 2009. 3rd International Conference on, pages 1–8. IEEE, 2009.
  • [20] V. Ojansivu and J. Heikkilä. Blur insensitive texture classification using local phase quantization. In Image and signal processing, pages 236–243. Springer, 2008.
  • [21] R. Rosenthal. Conducting judgment studies: Some methodological issues. The new handbook of methods in nonverbal behavior research, pages 199–234, 2005.
  • [22] N. Sadoughi, Y. Liu, and C. Busso. Speech-driven animation constrained by appropriate discourse functions. In A. A. Salah, J. F. Cohn, B. W. Schuller, O. Aran, L. Morency, and P. R. Cohen, editors, Proceedings of the 16th International Conference on Multimodal Interaction, ICMI 2014, Istanbul, Turkey, November 12-16, 2014, pages 148–155. ACM, 2014.
  • [23] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
  • [24] R. Snow, B. O’Connor, D. Jurafsky, and A. Y. Ng. Cheap and fast—but is it good?: Evaluating non-expert annotations for natural language tasks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’08, pages 254–263, Stroudsburg, PA, USA, 2008. Association for Computational Linguistics.
  • [25] T. Sobol-Shikler and P. Robinson. Classification of complex information: Inference of co-occurring affective states from their expressions in speech. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 32(7):1284–1297, 2010.
  • [26] M. Soleymani and M. Larson. Crowdsourcing for affective annotation of video: Development of a viewer-reported boredom corpus. 2010.
  • [27] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
  • [28] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. E. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 1–9, 2015.
  • [29] Y. Tang. Deep learning using linear support vector machines. arXiv preprint arXiv:1306.0239, 2013.
  • [30] Y.-l. Tian, T. Kanade, and J. F. Cohn. Recognizing action units for facial expression analysis. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 23(2):97–115, 2001.
  • [31] K. Trohidis, G. Tsoumakas, G. Kalliris, and I. P. Vlahavas. Multi-label classification of music into emotions. In ISMIR, volume 8, pages 325–330, 2008.
  • [32] L. Xie, J. Wang, Z. Wei, M. Wang, and Q. Tian. Disturblabel: Regularizing cnn on the loss layer. arXiv preprint arXiv:1605.00055, 2016.
  • [33] Z. Yu and C. Zhang. Image based static facial expression recognition with multiple deep network learning. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, ICMI ’15, pages 435–442, New York, NY, USA, 2015. ACM.
  • [34] Z. Zhang. Feature-based facial expression recognition: Sensitivity analysis and experiments with a multi-layer perceptron. International Journal of Pattern Recognition and Artificial Intelligence, 13(6):893–911, 1999.
  • [35] G. Zhao and M. Pietikainen. Dynamic texture recognition using local binary patterns with an application to facial expressions. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 29(6):915–928, 2007.
  • [36] Y. Zhou, H. Xue, and X. Geng. Emotion distribution recognition from facial expressions. In Proceedings of the 23rd ACM International Conference on Multimedia, MM ’15, pages 1247–1250, New York, NY, USA, 2015. ACM.