Emotion Understanding in Videos Through Body, Context, and Visual-Semantic Embedding Loss

10/30/2020 ∙ by Panagiotis Paraskevas Filntisis, et al. ∙ National Technical University of Athens

We present our winning submission to the First International Workshop on Bodily Expressed Emotion Understanding (BEEU) challenge. Based on recent literature on the effect of context/environment on emotion, as well as on visual representations with semantic meaning obtained through word embeddings, we extend the Temporal Segment Networks framework to accommodate both. Our method is verified on the validation set of the Body Language Dataset (BoLD) and achieves an Emotion Recognition Score of 0.26235 on the test set, surpassing the previous best result of 0.2530.







1 Introduction

Automatic human affect recognition from visual cues is an important area of computer vision that has attracted increased interest over the last two decades due to its many applications. Indeed, social robotics [2], psychiatric care [13], and edutainment [10] are all areas that can benefit from automatic recognition of emotion.

Most past approaches to the problem have focused on facial expressions in order to determine the emotional state of the person of interest [7, 18, 22]. This is reasonable, since facial expressions have been studied extensively in the psychology and emotion literature [8]. For example, the Facial Action Coding System (FACS) [9] identifies the units of facial movement based on facial muscle groups. Combinations of these so-called action units (AUs) have also been linked with emotional states through extensions of the basic FACS, such as EMFACS (Emotion FACS) [11]. On the other hand, there is no similarly established coding system for body expressions, although some have been proposed [4].

Beyond facial expressions, recent works have sought alternative modalities and streams of information for detecting emotion. One is bodily expression: many studies have highlighted that the emotional state is conveyed through the body as well, that for certain emotions it is the main modality [5, 15, 26], and that it can be used to correctly disambiguate the corresponding facial expression [1]. It is also important to note that in applications where emotion needs to be identified, the human body is more frequently available than the face, which can be occluded, hidden, or far in the distance. Another auxiliary stream of information besides the face and the body is the context and surrounding environment of the person [16, 21]: the place, as well as objects and other humans, can all influence a person's emotions.

We should also note that emotion recognition is inherently a multi-label problem: the subject might be feeling two or more emotions simultaneously. This is especially true when considering an extended set of emotions, as in [19]. The emotions in such extended sets do not all have the same "semantic" distance between them; for example, anger is closer to annoyance than to happiness. Considering that previous works have shown the superiority of methods that learn a joint embedding space containing both word embeddings and visual representations [6, 12, 24], we believe that attaching a semantic meaning to the extracted visual feature is a natural way forward.

In this paper, based on the above, we describe our team's method for the First International Workshop on Bodily Expressed Emotion Understanding (BEEU) challenge. Our method combines Temporal Segment Networks (TSNs) [27] focusing on the body, uses the context in each video as an additional stream, and adds an extra visual-semantic embedding loss based on GloVE (Global Vectors) [23] word embedding representations. Our experiments on the validation set verify the improved performance of our method compared to traditional TSNs, while our Emotion Recognition Score on the test set was 0.26235.

2 Related Work

While most past approaches to visual affect detection have focused on facial expressions [5], recent approaches have also started taking into account the body language [15] of the person in question, as well as their surrounding context/environment.

In [14], Gunes and Piccardi introduced a bimodal architecture that takes into account both upper-body and facial expressions in order to detect affect in videos. In [3], Dael et al. analyzed and classified bodily emotional expressions using the body action and posture coding system proposed in [4]. The 3D pose of children was utilized in [20] by Marinoiu et al. to detect emotions in continuous dimensions, while in [10], 2D pose was used and fused with facial expressions for child emotion recognition. Luo et al. [19] introduced a large-scale video dataset (BoLD) annotated with categorical and continuous emotions, which is the one used in the BEEU challenge.

Regarding the context modality, Kosti et al. [16] introduced a large-scale dataset for emotion recognition in different contexts (EMOTIC), e.g., other people, places, or objects, along with a convolutional neural network (CNN) based two-stream architecture that focuses on the body and context of the subjects. The CAER video dataset for context-based emotion recognition was presented in [17], along with a two-stream architecture that employs adaptive fusion to merge the two streams. In [21], Mittal et al. designed a deep architecture with several branches focusing on different interpretations of the surrounding context (e.g., environment and interaction context), significantly improving predictions on the EMOTIC dataset.

Finally, some recent works have focused on extracting visual representations from images that exhibit the semantic relations found in embeddings built from words. The DeViSE embedding model [12] extracted semantically meaningful visual representations by introducing a similarity loss between the feature vector extracted from a CNN and the word embedding from a skip-gram text model. Using a similar method, Wei et al. [28] built joint text and visual embeddings as emotion representations from web images, and in [29], Yeh and Li built semantic embeddings for a multi-label classification problem.

3 Dataset

The dataset used in the challenge is the BoLD (Body Language Dataset) corpus [19], consisting of 9,876 video clips of humans expressing emotion, primarily through body movements. Each clip can contain multiple characters, yielding a total of 13,239 annotations, split into training, validation, and test sets. The dataset has been annotated via crowdsourcing, employing two widely accepted categorizations of emotion. The first is a categorical annotation with a total of 26 labels, first used in [16], obtained by collecting and processing an extensive affective vocabulary. The second annotation regards the continuous emotional dimensions of the VAD (Valence - Arousal - Dominance) emotional state model [25]. The methods in the challenge are evaluated using the following Emotion Recognition Score (ERS):


ERS = (1/2) (mR² + (1/2)(mAP + mRA))    (1)

where mR² is the mean coefficient of determination (R²) score over the three continuous emotional dimensions (VAD), and mAP and mRA are the mean Average Precision and the mean area under the receiver operating characteristic curve (ROC AUC) of the multilabel categorical predictions.
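As a sanity check, the ERS combination in (1) can be expressed as a small Python function (the function name is ours, not from the challenge toolkit); plugging in the test-set component scores reported later in Table 2 reproduces the final score:

```python
def emotion_recognition_score(m_r2, m_ap, m_ra):
    """ERS = 1/2 * (mR^2 + 1/2 * (mAP + mRA)).

    m_r2: mean coefficient of determination over the VAD dimensions,
    m_ap: mean Average Precision over the 26 categorical labels,
    m_ra: mean ROC AUC over the 26 categorical labels.
    """
    return 0.5 * (m_r2 + 0.5 * (m_ap + m_ra))

# Test-set components from Table 2: mAP=0.1796, mRA=0.6416, mR^2=0.1141
print(round(emotion_recognition_score(0.1141, 0.1796, 0.6416), 5))  # 0.26235
```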

Figure 1: TSN with two RGB spatial streams (body and context) and one optical flow stream. The final results are obtained using average score fusion.

4 Model Architecture

Our model is based on the TSN architecture [27], which has been widely used in action recognition and can be seen in Fig. 1. During training, different segments are selected from the input video, and consecutive frames are then sampled from each segment; this deals with the fact that consecutive frames usually contain redundant information. Traditionally, two modalities are used: the spatial (RGB) modality and optical flow. TSNs have already been shown to achieve good results on the BoLD dataset in its introductory paper [19].
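The sparse sampling scheme described above can be sketched as follows; this is a minimal Python illustration under our own naming (the defaults mirror the segment and frame counts reported in Section 5), not the actual TSN implementation:

```python
import random

def sample_tsn_frames(num_frames, num_segments=3, frames_per_segment=1, seed=None):
    """Split a video into equal-length segments and draw a short run of
    consecutive frames from a random position inside each segment."""
    rng = random.Random(seed)
    seg_len = num_frames // num_segments
    indices = []
    for s in range(num_segments):
        start = s * seg_len
        # random offset chosen so the consecutive run fits inside the segment
        offset = rng.randrange(max(seg_len - frames_per_segment + 1, 1))
        indices.extend(range(start + offset, start + offset + frames_per_segment))
    return indices

# e.g. a 90-frame clip -> one RGB frame from each third of the video
print(sample_tsn_frames(90, num_segments=3, frames_per_segment=1, seed=0))
```

The optical flow stream would use the same sampler with `frames_per_segment=5`, per the hyperparameters given in Section 5.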

In our approach, we modify the original version of TSNs in two main directions:

Context Stream:

We introduce one additional stream based on the context/environment surrounding the annotated human. For the RGB modality, we input the context into the network in the same way as in [21], by masking out the instance body (we set all its pixels to 0). We call this stream RGB-c, and the body streams RGB-b and Flow-b. During training, the RGB-b and RGB-c streams are combined at the feature level (RGB-bc) and trained jointly, while the Flow-b TSN is trained independently.
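The masking step can be sketched in a few lines of numpy. This is an illustration under our own naming, using a rectangular box for the instance region; the actual pipeline would derive the mask from the per-instance annotations of the dataset:

```python
import numpy as np

def context_frame(frame, body_box):
    """Return a copy of the frame with the annotated body region zeroed out,
    as in the RGB-c context stream.

    frame: H x W x 3 uint8 array; body_box: (x1, y1, x2, y2) pixel box.
    """
    x1, y1, x2, y2 = body_box
    out = frame.copy()
    out[y1:y2, x1:x2, :] = 0  # all body pixels set to 0
    return out

frame = np.full((8, 8, 3), 255, dtype=np.uint8)
masked = context_frame(frame, (2, 2, 6, 6))  # body region zeroed, context kept
```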

Embedding Loss:

Our second extension is the introduction of an embedding loss on the feature vector extracted by the convolutional neural network (ConvNet). This exploits the fact that some emotions are semantically closer to others, which is also revealed by the correlation matrix of the dataset labels in [19], where some labels occur frequently in combination with others (e.g., Happiness and Pleasure, or Annoyance and Anger). Motivated by this, we try to attach a semantic meaning to the feature vector extracted by the backbone image network.

To implement this, we first obtain for each of the 26 categorical labels of BoLD its 300-dimensional GloVE word embedding [23]. A PCA projection of the 26 embeddings is shown in Fig. 2, where it is apparent that the distances between embeddings are indicative of their "semantic" distance. We then use a fully connected layer to map the feature extracted from the image to a 300-dimensional space and introduce the following mean-squared based loss:

L_embed = ‖ W f(I) − (1/|Y|) Σ_{y ∈ Y} g_y ‖²    (2)

where f(I) is the feature vector extracted by applying the convNet on the image I, W is a linear transformation from the space of the feature vector to the word embedding space, g_y is the word embedding of the label y, and Y is the set of all positive labels for the image I. That is, we try to reduce the Euclidean distance between the projected image feature and the arithmetic mean of the GloVE embeddings of the positive labels for the image/video.
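A numpy sketch of this loss, under our own naming (in training, W would be the weights of the fully connected projection layer and the feature would come from the backbone ConvNet):

```python
import numpy as np

def embedding_loss(feature, W, label_embeddings, positive_labels):
    """Squared Euclidean distance between the projected visual feature
    and the arithmetic mean of the positive labels' word embeddings.

    feature: (d,) ConvNet feature; W: (300, d) learned projection;
    label_embeddings: (26, 300) word vectors; positive_labels: index list.
    """
    target = label_embeddings[positive_labels].mean(axis=0)
    diff = W @ feature - target
    return float(diff @ diff)

rng = np.random.default_rng(0)
feat = rng.normal(size=128)          # stand-in backbone feature
W = rng.normal(size=(300, 128))      # stand-in projection layer
emb = rng.normal(size=(26, 300))     # stand-in for the GloVE label embeddings
loss = embedding_loss(feat, W, emb, [4, 9])  # e.g. two co-occurring labels
```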

Figure 2: PCA projection of the categorical emotions GloVE word embeddings.
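For illustration, such a 2-D PCA projection can be computed with a few lines of numpy (via SVD); random stand-in vectors are used here in place of the actual GloVE embeddings:

```python
import numpy as np

def pca_2d(X):
    """Project the rows of X onto their first two principal components."""
    Xc = X - X.mean(axis=0)              # center the embeddings
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                 # coordinates along the top-2 components

# stand-in for the 26 x 300 matrix of categorical-label embeddings
emb = np.random.default_rng(0).normal(size=(26, 300))
points = pca_2d(emb)                     # one 2-D point per categorical emotion
```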


Finally, after extracting the feature vector of each sampled image, we use two fully connected layers: one to classify into the 26 categorical labels, and one to regress the 3 continuous emotional dimensions. The two TSNs are trained using the following loss:

L = L_bce + L_cat + L_reg + L_embed    (3)

Specifically, since the dataset does not explicitly provide the multilabel targets, but rather crowdsourced scores between 0 and 1, we include two different losses for the classification part: L_bce, the binary cross-entropy between the predicted scores and the multilabel target (obtained by thresholding the multilabel scores at 0.5), and L_cat, the mean squared error between the predicted scores and the multilabel scores. We found empirically that the inclusion of L_cat slightly boosts performance. For the regression part, L_reg is the mean squared error between the regressed values and the continuous emotions. Finally, L_embed is as in (2).
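A minimal numpy sketch of the two classification terms described above (the helper name is ours; any loss weighting is omitted):

```python
import numpy as np

def classification_losses(pred_scores, crowd_scores, eps=1e-7):
    """Binary cross-entropy against thresholded multilabel targets, plus
    mean squared error against the raw crowdsourced scores in [0, 1]."""
    targets = (crowd_scores >= 0.5).astype(float)       # binarized multilabel target
    p = np.clip(pred_scores, eps, 1 - eps)
    l_bce = -np.mean(targets * np.log(p) + (1 - targets) * np.log(1 - p))
    l_mse = np.mean((pred_scores - crowd_scores) ** 2)  # keeps the soft-label information
    return l_bce, l_mse

pred = np.array([0.9, 0.2, 0.6])    # predicted label scores
crowd = np.array([0.8, 0.1, 0.4])   # crowdsourced scores
l_bce, l_mse = classification_losses(pred, crowd)
total = l_bce + l_mse  # plus the regression and embedding terms in the full loss
```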

5 Experimental Results

We train each TSN for 50 epochs using Stochastic Gradient Descent (SGD), with an initial learning rate that drops by a factor of 10 at fixed epochs (PyTorch code available at https://github.com/filby89/NTUA-BEEU-eccv2020). The backbone networks used are a 101-layer residual network (ResNet-101) for the body convNets and a ResNet-50 for the context convNet. We use the default hyperparameters of TSNs: 3 segments, 1 frame from each segment for the RGB streams, and 5 frames from each segment for the optical flow stream. The consensus used for segment fusion is averaging. For each network, we select the epoch with the best validation ERS. We have also found experimentally that the partialBN (Batch Normalization) technique used in [27] gives a nontrivial boost to the performance of the network.

First, in Table 1 we present two ablation experiments regarding the addition of L_embed. We can see that adding the embedding loss slightly increases the performance of the RGB-b stream, and gives a larger boost to the Flow-b stream.

Loss             Model            mAP     mRA     mR²     ERS
without L_embed  RGB-b            0.1567  0.6140  0.0538  0.21955
without L_embed  Flow-b           0.1444  0.5914  0.0507  0.2093
without L_embed  RGB-b + Flow-b   0.1623  0.6307  0.078   0.2375
with L_embed     RGB-b            0.1564  0.6143  0.0546  0.21997
with L_embed     Flow-b           0.1465  0.5947  0.0579  0.2142
with L_embed     RGB-b + Flow-b   0.1637  0.6327  0.0874  0.2428

Table 1: Ablation experiment by training with and without L_embed (columns: mean Average Precision, mean ROC AUC, mean R², and ERS).

Then, in Table 2 we present our experimental results on the BoLD validation set including the RGB context stream. The results show that including the context along with the body in the RGB modality boosts the validation ERS of the architecture. We also experimented with including the context in the Flow network, but this resulted in worse performance. Our final submission for the test set was the model with the best validation score (0.2439, employing RGB-bc + Flow-b), using 25 segments instead of 3. The results of the different metrics on the test set can also be seen in Table 2; the final ERS is 0.26235, improving upon the previous best result of 0.2530 [19].

Set    Model             mAP     mRA     mR²     ERS
valid  RGB-c             0.1395  0.5760  0.0365  0.1971
valid  RGB-bc            0.1566  0.6055  0.0675  0.2243
valid  RGB-bc + Flow-b   0.1656  0.6266  0.0917  0.2439
test   RGB-bc + Flow-b   0.1796  0.6416  0.1141  0.26235

Table 2: Results on the validation and test sets of BoLD including the RGB context stream and L_embed (columns: mean Average Precision, mean ROC AUC, mean R², and ERS).

6 Conclusions

In this paper we presented our method submitted to the BEEU challenge, which won first place. Our method extended the TSN framework with a visual-semantic embedding loss utilizing GloVE word embeddings, and with an additional context stream for the RGB modality. We verified the superiority of our extensions over the baseline on the validation set of the challenge, and our best system achieved an Emotion Recognition Score of 0.26235 on the BoLD test set, surpassing the previous best result of 0.2530.


This research is carried out / funded in the context of the project “Intelligent Child-Robot Interaction System for designing and implementing edutainment scenarios with emphasis on visual information” (MIS 5049533) under the call for proposals “Researchers’ support with an emphasis on young researchers- 2nd Cycle”. The project is co-financed by Greece and the European Union (European Social Fund- ESF) by the Operational Programme Human Resources Development, Education and Lifelong Learning 2014-2020.


  • [1] H. Aviezer, Y. Trope, and A. Todorov (2012) Body cues, not facial expressions, discriminate between intense positive and negative emotions. Science 338 (6111), pp. 1225–1229. Cited by: §1.
  • [2] F. Cavallo, F. Semeraro, L. Fiorini, G. Magyar, P. Sinčák, and P. Dario (2018) Emotion modelling for social robotics applications: a review. Journal of Bionic Engineering 15 (2), pp. 185–203. Cited by: §1.
  • [3] N. Dael, M. Mortillaro, and K. R. Scherer (2012) Emotion expression in body action and posture.. Emotion 12 (5), pp. 1085. Cited by: §2.
  • [4] N. Dael, M. Mortillaro, and K. R. Scherer (2012) The body action and posture coding system (BAP): development and reliability. J. Nonverbal Behavior 36 (2), pp. 97–121. Cited by: §1, §2.
  • [5] B. De Gelder (2009) Why bodies? twelve reasons for including bodily expressions in affective neuroscience. Philosophical Transactions of the Royal Society of London B: Biological Sciences 364 (1535), pp. 3475–3484. Cited by: §1, §2.
  • [6] J. Dong, X. Li, and C. G. Snoek (2016) Word2visualvec: image and video to sentence matching by visual feature prediction. arXiv preprint arXiv:1604.06838. Cited by: §1.
  • [7] S. Du, Y. Tao, and A. M. Martinez (2014) Compound facial expressions of emotion. Proceedings of the National Academy of Sciences 111 (15), pp. E1454–E1462. Cited by: §1.
  • [8] P. Ekman and D. Keltner (1997) Universal facial expressions of emotion. Segerstrale U, P. Molnar P, eds. Nonverbal communication: Where nature meets culture, pp. 27–46. Cited by: §1.
  • [9] R. Ekman (1997) What the face reveals: basic and applied studies of spontaneous expression using the facial action coding system (facs). Oxford University Press, USA. Cited by: §1.
  • [10] P. P. Filntisis, N. Efthymiou, P. Koutras, G. Potamianos, and P. Maragos (2019) Fusing body posture with facial expressions for joint recognition of affect in child–robot interaction. IEEE Robotics and Automation Letters 4 (4), pp. 4011–4018. Cited by: §1, §2.
  • [11] W. V. Friesen, P. Ekman, et al. (1983) EMFACS-7: emotional facial action coding system. Unpublished manuscript, University of California at San Francisco 2 (36), pp. 1. Cited by: §1.
  • [12] A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, M. Ranzato, and T. Mikolov (2013) Devise: a deep visual-semantic embedding model. In Advances in neural information processing systems, pp. 2121–2129. Cited by: §1, §2.
  • [13] B. Gaudelus, J. Virgile, S. Geliot, N. Franck, M. Dupuis, C. Hochard, A. Josserand, A. Koubichkine, T. Lambert, M. Perez, et al. (2016) Improving facial emotion recognition in schizophrenia: a controlled study comparing specific and attentional focused cognitive remediation. Frontiers in psychiatry 7, pp. 105. Cited by: §1.
  • [14] H. Gunes and M. Piccardi (2006) A bimodal face and body gesture database for automatic analysis of human nonverbal affective behavior. In Proc. ICPR, Vol. 1, pp. 1148–1153. Cited by: §2.
  • [15] A. Kleinsmith and N. Bianchi-Berthouze (2013) Affective body expression perception and recognition: a survey. IEEE Trans. on Affective Computing 4 (1), pp. 15–33. External Links: ISSN 1949-3045 Cited by: §1, §2.
  • [16] R. Kosti, J. M. Alvarez, A. Recasens, and A. Lapedriza (2017) Emotion recognition in context. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1960–1968. Cited by: §1, §2, §3.
  • [17] J. Lee, S. Kim, S. Kim, J. Park, and K. Sohn (2019) Context-aware emotion recognition networks. In Proc. IEEE International Conference on Computer Vision, pp. 10143–10152. Cited by: §2.
  • [18] P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews (2010) The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression. In Proc. IEEE computer society conference on computer vision and pattern recognition-workshops, pp. 94–101. Cited by: §1.
  • [19] Y. Luo, J. Ye, R. B. Adams, J. Li, M. G. Newman, and J. Z. Wang (2020) ARBEE: Towards automated recognition of bodily expression of emotion in the wild. International Journal of Computer Vision 128 (1), pp. 1–25 (en). Cited by: §1, §2, §3, §4, §4, §5.
  • [20] E. Marinoiu, M. Zanfir, V. Olaru, and C. Sminchisescu (2018) 3D human sensing, action and emotion recognition in robot assisted therapy of children with autism. In Proc. CVPR, pp. 2158–2167. Cited by: §2.
  • [21] T. Mittal, P. Guhan, U. Bhattacharya, R. Chandra, A. Bera, and D. Manocha (2020) EmotiCon: context-aware multimodal emotion recognition using frege’s principle. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14234–14243. Cited by: §1, §2, §4.
  • [22] A. Mollahosseini, B. Hasani, and M. H. Mahoor (2017) AffectNet: a database for facial expression, valence, and arousal computing in the wild. IEEE Transactions on Affective Computing 10 (1), pp. 18–31. Cited by: §1.
  • [23] J. Pennington, R. Socher, and C. D. Manning (2014) GloVe: global vectors for word representation. In Proc. Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Cited by: §1, §4.
  • [24] Z. Ren, H. Jin, Z. Lin, C. Fang, and A. L. Yuille (2017) Multiple instance visual-semantic embedding.. In Proc. BMVC, Cited by: §1.
  • [25] J. A. Russell and A. Mehrabian (1977) Evidence for a three-factor theory of emotions. Journal of Research in Personality 11 (3), pp. 273–294 (en). Cited by: §3.
  • [26] J. L. Tracy and R. W. Robins (2004) Show your pride: evidence for a discrete emotion expression. Psychological Science 15 (3), pp. 194–197. Cited by: §1.
  • [27] L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool (2016) Temporal segment networks: towards good practices for deep action recognition. In European Conference on Computer Vision, pp. 20–36. Cited by: §1, §4, §5.
  • [28] Z. Wei, J. Zhang, Z. Lin, J. Lee, N. Balasubramanian, M. Hoai, and D. Samaras (2020) Learning visual emotion representations from web data. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13106–13115. Cited by: §2.
  • [29] M. Yeh and Y. Li (2020) Multilabel deep visual-semantic embedding. IEEE Transactions on Pattern Analysis and Machine Intelligence 42 (6), pp. 1530–1536 (en). Cited by: §2.