Background Hardly Matters: Understanding Personality Attribution in Deep Residual Networks

12/20/2019 ∙ by Gabrielle Ras, et al. ∙ Anchormen ∙ Radboud Universiteit

Perceived personality traits attributed to an individual do not have to correspond to their actual personality traits and may be determined in part by the context in which one encounters a person. These apparent traits determine, to a large extent, how other people will behave towards them. Deep neural networks are increasingly being used to perform automated personality attribution (e.g., in job interviews). It is important that we understand the driving factors behind the predictions, in humans and in deep neural networks. This paper explicitly studies the effect of the image background on apparent personality prediction while addressing two important confounds present in the existing literature: overlapping data splits and the inclusion of facial information in the background. Surprisingly, we found no evidence that background information improves model predictions for apparent personality traits. In fact, when the background is explicitly added to the input, a decrease in performance was measured across all models.







1 Introduction

In personality research, the Big Five model (B5) [10] is the dominant paradigm used to measure various aspects of personality and how these aspects relate to an individual's happiness, choices and social behavior [9]. The B5 is a taxonomy for personality traits, based on common-language descriptors, and consists of the five personality traits openness, conscientiousness, extraversion, agreeableness and neuroticism, represented by the acronym OCEAN. The B5 has been shown to be a reliably stable model across a wide range of situations and cultures [6]. Oftentimes in real life, a person does not have direct access to the true composition of another person's B5 traits. Instead, indirect cues are used to attribute apparent traits to that other person [5]. Apparent personality traits attributed to an individual do not have to correspond to their actual personality traits; however, these apparent traits determine, to a large extent, how other people will behave towards them [31]. It has been shown that the face is an important information source when humans make (potentially actionable) personality and character judgements about others, e.g., when forming first impressions [34] or deciding which candidate to vote for [22]. In general, there is a large body of work showing "both correlational and causal evidence linking facial appearance to a variety of important social outcomes" [29]. Of course, the face is not the only source of information: there is evidence that the voice and general appearance, e.g., body posture, also influence personality judgements [22, 21, 7]. There is also some evidence that humans base trait attributions on objects that belong to the other person, such as clothing [18] and even bedroom objects [23]. Studies in market research often find that brands are associated with certain personality traits [1].
In the related field of emotion perception, the role of environmental cues has been studied more extensively, and studies show that the background scene influences emotional judgements in humans [3, 27, 4, 2]. Yet it remains unclear to what extent object and environmental cues can communicate a person's personality traits, and how reliable these cues are when attributing apparent traits to other people.

1.1 Automated Personality Attribution

The field of personality computing studies and develops computational methods that, among other things, attribute apparent personality traits to individuals [32]. By learning from a large dataset of human-annotated data, these methods can, by proxy, serve as models of human behavior. A popular method is the deep neural network (DNN; in this paper we use the terms "DNN" and "model" interchangeably). A recent survey on personality attribution using DNNs indicates that researchers tend to feed DNNs predominantly with data that contains a face [19]. The following factors might contribute to this trend: 1) there is an abundance of evidence supporting the role of facial cues in personality attribution, and insufficient evidence that environmental cues in the image background influence attribution; 2) using only the face rather than other regions of the image serves as a form of dimensionality reduction of the input data, with additional benefits such as reduced training time and computational cost; 3) in the earlier days of social signal processing, the background of the image was discarded because it was considered a source of noise for the algorithms being used [25].
Efforts based on extracting information from additional cues often take the approach of feeding different modalities (audio, text) to the DNN and we generally find that the DNN makes better predictions when it receives information from multiple sources [19]. However, there is a lack of research providing insight into how integrating visual environmental cues from the background of the image can aid the attribution of personality traits. Given the recent advances in DNNs and their ability to automatically extract relevant features from the input, we might want to reconsider leaving out the background.

1.2 Confounds in First Impressions v2 Dataset

Fairly recently, the ChaLearn First Impressions v2 dataset was publicly released [24]. This dataset is a large collection of YouTube vlog clips (a vlog is a video blog), annotated by crowdsourcing with Amazon Mechanical Turk. Each video depicts one person speaking directly to the camera, often in a home environment, making it an ideal dataset with which to investigate the influence of environmental cues on trait attribution. A sizable portion of the original full-length videos is split into two to six 15-second clips. More dataset details are provided in Section 3.1. Currently this is the largest video personality dataset in existence. The dataset was released in the context of a conference competition and has resulted in a number of papers in which the role of environmental cues was investigated [8, 13, 12, 33, 14, 15]. However, we discovered two confounds that severely limit the conclusions of these studies.

  1. In machine learning, it is good practice to judge the performance of a model on data that the model has never seen before. This is achieved by partitioning the entire dataset into a train and a test split. After the DNN has been trained, its performance is measured on this unseen portion of the data, very similar to an exam at the end of a school course. However, in the First Impressions dataset this independent assessment is confounded by the fact that a large share of the clips in the test split originate from the same videos as clips in the train split, see Figure 1. The independence of this assessment is very important because it gives us a measure of how well the DNN can generalize what it has learned, not just what it has memorized.

  2. The other confound is present in the analyses provided in the papers. In order to study the effect of environmental cues, there needs to be a condition in which only the environmental cues are provided during the learning (training) and evaluation (testing) of the DNN. However, not a single analysis meets this condition: in all training and testing procedures, facial features were present.

1.3 Our Contribution

This paper contributes to personality computing by explicitly studying the effect of visual cues in the image background on personality trait attribution, using DNNs, while addressing the existing confounds. In this paper we ask the question: do environmental cues encoded in the image background significantly inform apparent personality trait attribution? We expect to find that when background information is added to the DNN input, model performance increases. The answer to this question is very relevant: if it turns out that background information improves personality attribution, then researchers in the personality computing field might want to consider methods that make use of the background information. By studying the behavior of the DNN we might also gain insight into how humans make personality attributions.

2 Related Work

All of the following papers use the First Impressions dataset and the original dataset splits, in which a large share of the clips in the test split originate from the same videos as clips in the train split.

Gürpınar et al. [15, 14] use a combination of DNNs to extract audio, scene and facial features from the data. These features are fused together to predict the final attribution scores. The scene component of the DNN was trained on the ImageNet dataset [28], which includes facial information. The ablation studies indicate that their method benefits from the additional scene features; however, scene information is always entangled with facial information. Wei et al. [33] train various DNNs using the information encoded in the entire frame. A visual feature importance analysis was performed on one of the higher (decision) layers of the networks, and the results suggest that DNNs pay attention to the background of the image when making predictions. However, this analysis was limited to only 12 random frames in the test split. Given that there is a large overlap between the train and test data, the results of this analysis can be a consequence of the DNNs simply recalling what they have seen during the training phase. Güçlütürk et al. [11] take scene information into account by feeding their DNNs random frame crops during training. Güçlütürk et al. [12] use occlusion analysis to visualize regions in the input image that are important for the predictions. The results show that the background provides important information. However, similar to the limitations of [33], the DNN can simply be recalling information from an image that it has seen before.

3 Experimental setup

To account for the previously mentioned confounds, we 1) create new data splits, in which the clips in one split are completely independent from the clips in any other split, and 2) implement an experimental design enabling the study of the effect of environmental cues on the attribution capabilities of the DNN, see Figure 3. All following experiments are performed using only individual frames of the video clips; hence the word frame will be used instead of video clip. Given an input, the DNN has to predict, on a continuous scale between 0 and 1, the intensity of each B5 trait. The experiments are repeated on three different DNNs. Finally, the DNN predictions are compared to the actual labels and the attribution performance is measured. We hypothesize that if the background contains useful cues, these will be extracted and utilized by the DNN. As a result, there should be an increase in attribution performance in conditions where the background is included in the input.

3.1 Dataset Details

Figure 1: A Venn diagram visualizing the portion of overlapping data in the original ChaLearn First Impressions split.
Split        Clips   UIDs   Clips per UID
training      6744   2060   3.27
testing       1676    500   3.35
validation    1580    500   3.16
Table 1: The number of clips and unique source videos (UIDs) in each data split, and the average number of clips per UID. All of the data splits are independent of each other, meaning that there are no overlapping UIDs.

The ChaLearn First Impressions v2 dataset [24] is used in all of the experiments. This dataset is a collection of 10000 HD 720p YouTube video clips, gathered from 3060 unique videos. Each video clip is 15 seconds long and has an average framerate of 30 FPS. The video clips were annotated by Amazon Mechanical Turk workers. The labels in the dataset indicate the intensity of the perceived Big Five traits on a continuous scale between 0 and 1. Unfortunately, the original dataset splits include a rather large number of overlapping, dependent video clips, see Figure 1. We created new splits in which the clips in one split are completely independent from the clips in any other split. The dataset splits can be obtained by sorting the unique video names alphabetically and then sequentially selecting the respective number of videos for each data split indicated in Table 1.
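The split procedure described above can be sketched in a few lines. This is an illustrative sketch, not the authors' code; function and variable names are assumptions.

```python
# Sketch of the leakage-free split procedure: clips are grouped by their
# unique source video (UID), UIDs are sorted alphabetically, and consecutive
# runs of UIDs are assigned to each split.

def make_splits(clip_uids, n_train_uids, n_test_uids, n_val_uids):
    """clip_uids: mapping from clip name to the UID of its source video."""
    uids = sorted(set(clip_uids.values()))
    assert n_train_uids + n_test_uids + n_val_uids <= len(uids)
    train_uids = set(uids[:n_train_uids])
    test_uids = set(uids[n_train_uids:n_train_uids + n_test_uids])
    val_uids = set(uids[n_train_uids + n_test_uids:
                        n_train_uids + n_test_uids + n_val_uids])
    split = {"train": [], "test": [], "val": []}
    for clip, uid in sorted(clip_uids.items()):
        if uid in train_uids:
            split["train"].append(clip)
        elif uid in test_uids:
            split["test"].append(clip)
        elif uid in val_uids:
            split["val"].append(clip)
    return split
```

Because every clip belonging to a given UID lands in exactly one split, no test clip can share a source video with a train clip.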

3.2 Experimental Conditions

In Figure 2 an overview is given of the different experimental conditions. In total there are four conditions. In each condition a different region of the frame is given as input during the training and evaluation of the DNN.
In the face only condition, the performance of the models is measured when they are trained and evaluated on the face alone. Face extraction was performed using the dlib library [20] to detect facial landmarks and return bounding box coordinates containing the location of the face for each frame of the video. The face is cropped out and resized to a fixed size. The face is aligned by mapping its facial landmarks, using a similarity transform, onto the average location of all facial landmarks in the training split.
In the background only condition, the performance of the models is measured when they are trained and evaluated on the background alone. First, the facial bounding box area is filled with the mean RGB pixel value of the image, computed excluding the region inside the facial bounding box. To capture as much background as possible and as little person information as possible, a crop is made starting from either the left or right edge of the image, depending on the location of the face. For example, if the face is more to the left, the crop starts from the right edge of the image. By removing the facial information completely from the frame, we can investigate whether something other than the face is additionally driving the predictions; this is why the body was not removed from the frame.
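The masking-and-cropping step above can be sketched as follows. This is a sketch under assumed names and an assumed crop width, not the authors' exact preprocessing code.

```python
import numpy as np

# Fill the face bounding box with the mean RGB value of the rest of the
# frame, then crop from the image edge farthest away from the face.

def background_only(frame, face_box, crop_width):
    """frame: HxWx3 array; face_box: (x0, y0, x1, y1) face bounding box."""
    x0, y0, x1, y1 = face_box
    h, w, _ = frame.shape
    masked = frame.astype(np.float64).copy()
    # Mean RGB over all pixels *outside* the face bounding box.
    outside = np.ones((h, w), dtype=bool)
    outside[y0:y1, x0:x1] = False
    mean_rgb = frame[outside].reshape(-1, 3).mean(axis=0)
    masked[y0:y1, x0:x1] = mean_rgb
    # Crop from the edge opposite the face: if the face centre lies in the
    # left half of the frame, crop from the right edge, and vice versa.
    face_cx = (x0 + x1) / 2
    if face_cx < w / 2:
        return masked[:, w - crop_width:]
    return masked[:, :crop_width]
```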
In the face+bg condition the performance of the models is measured when they are trained and evaluated on the face and background data.
Finally, the entire frame condition serves as a control. In this condition we measure the performance of the models when they are trained and evaluated on the entire frame, resized to a fixed size.

Figure 2: An overview of the different regions of data that can be given as input, A. the entire frame, B. the face and C. the background.
Figure 3: The general setup of the experiments. Experimental conditions face, background and entire frame all use the same network architecture. In condition face+bg the data is passed through a two-stream network, of which only the final fully connected layer is trained. Each stream adopts and freezes the weights of a previously trained face or background model.

3.3 Deep Neural Networks

Three different DNNs are used in the experiments: Deep Impression [13] and two versions of ResNet18 [16], one pre-trained on the ImageNet dataset and one without pre-training. The DNNs utilized in this paper belong to the widely used residual architecture family. Three versions of the same type of architecture are used to rule out the possibility that differences in the findings are a result of architecture-specific components.

3.3.1 Deep Impression

A modified version of the Deep Impression network [13] is used, implemented in Chainer 4.0.0 [30]. Single frames are used in order to study the effect of the background alone on the prediction, since multiple frames would introduce temporal effects on the predictions. Given that only visual information is taken into account, only the visual stream of the original network is used. The visual stream is a 17-layer deep residual network. The networks are optimized using Adam with a minibatch size of 32.

3.3.2 ResNet18

The ResNet18 networks are obtained from PyTorch. The final layer is replaced with a fully connected layer with five outputs, one for each B5 trait. Two ResNets are used: the first model, ResNet18 v1, is not pre-trained; it is initialized with random weights and trained on the ChaLearn dataset. The second version, ResNet18 v2, is pre-trained on ImageNet and fine-tuned on the ChaLearn dataset. This way we can also investigate the effects of pre-training on model performance. Pre-trained models should have a greater representational capacity and should be better able to learn relationships between background and personality traits. Both ResNet18s are optimized using stochastic gradient descent with a learning rate of 0.001 and a momentum of 0.9.

3.3.3 Training and Validation

Each DNN is trained on the ChaLearn First Impressions v2 dataset to predict the value of the traits O, C, E, A and N, where N here stands for emotional stability, the inverse of neuroticism, since the inverse of the notion of neuroticism was used when collecting the data. All networks were trained for 100 epochs using the mean absolute error (MAE) as loss function. During the training phase, one random frame is sampled from each video clip in the train data. After every 10 epochs the network is run against the validation data to determine the validation loss.

Face, background, entire frame: The networks are initialized according to the specifications given previously.
Face+bg: The face and background images fed to the network come from the same frame. The face stream and the background stream of the network are initialized with the weights of the face only and background only models, respectively. The weights are chosen from the models that perform best on the validation data. The weights of both branches are then frozen and only the final fusion layer is trained. This allows the network to learn to use or ignore the information coming from each branch.
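The freezing scheme above can be sketched schematically in PyTorch. The tiny stream modules below are stand-ins for the real trained face and background networks, not the actual architecture.

```python
import torch
import torch.nn as nn

# Two-stream fusion: both streams adopt the weights of previously trained
# models and are frozen; only the final fusion layer receives gradients.

class TwoStreamFusion(nn.Module):
    def __init__(self, face_stream, bg_stream, feat_dim):
        super().__init__()
        self.face_stream = face_stream
        self.bg_stream = bg_stream
        # Freeze both pretrained branches.
        for p in self.face_stream.parameters():
            p.requires_grad = False
        for p in self.bg_stream.parameters():
            p.requires_grad = False
        # Only this layer is trained: five outputs, one per B5 trait.
        self.fusion = nn.Linear(2 * feat_dim, 5)

    def forward(self, face_img, bg_img):
        f = self.face_stream(face_img)
        b = self.bg_stream(bg_img)
        return self.fusion(torch.cat([f, b], dim=1))

face_stream = nn.Linear(10, 8)  # placeholder for the trained face model
bg_stream = nn.Linear(10, 8)    # placeholder for the trained background model
model = TwoStreamFusion(face_stream, bg_stream, feat_dim=8)
trainable = [p for p in model.parameters() if p.requires_grad]
```

Passing only `trainable` to the optimizer ensures that gradient updates touch nothing but the fusion layer.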

3.3.4 Testing

The model that has the lowest validation loss is chosen to run against the test set. Each chosen model is run against all frames in the test split and the average prediction per video per trait is recorded. Then all predictions are compared to the ground truth and the Pearson correlation coefficient is computed. In Figure 4 the predictions per trait are averaged and compared to the mean ground truth score, also averaged across traits, and then the correlations are computed.
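The testing procedure above can be sketched as follows; names are illustrative, and the per-video averaging and per-trait Pearson correlation follow the description in the text.

```python
import numpy as np

# Average per-frame predictions per video, then correlate each trait's
# averaged predictions with the ground truth across videos.

def per_trait_correlations(frame_preds, frame_video_ids, ground_truth):
    """frame_preds: (F, 5) per-frame predictions; frame_video_ids: length-F
    list of video ids; ground_truth: dict from video id to a (5,) label vector."""
    videos = sorted(ground_truth)
    avg = np.array([np.mean([p for p, v in zip(frame_preds, frame_video_ids)
                             if v == vid], axis=0) for vid in videos])
    labels = np.array([ground_truth[v] for v in videos])
    return [float(np.corrcoef(avg[:, t], labels[:, t])[0, 1]) for t in range(5)]
```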

3.3.5 Comparing model performance

Models are compared to each other in a pairwise manner, i.e., model 1 is compared to model 2. In order to determine whether the performance difference between two models is meaningful, the Pearson correlation coefficients between the model predictions and the ground truth labels are calculated:

    r_1 = corr(predictions_1, ground truth),  r_2 = corr(predictions_2, ground truth).  (1)

These r-values are plotted in Figure 4. The Fisher transformation in (3) is used to obtain the z-values in (2) from the r-values. The resulting p-value measures whether the difference between the correlations is significant. The p-values are documented in Table 2:

    p = 1 - erf(|z| / sqrt(2)),  (2)

where erf is the error function and z is defined as:

    z = (z_1 - z_2) / SE,  (3)

where z_1 = arctanh(r_1) and z_2 = arctanh(r_2). SE is the standard error and is defined as:

    SE = sqrt(1 / (n_1 - 3) + 1 / (n_2 - 3)),  (4)

where n_1 is the number of pairs of scores in r_1 and n_2 is the number of pairs of scores in r_2; in our case n_1 = n_2, the size of the test set. The significance level is set at alpha = 0.05. After Bonferroni correction, alpha_corrected = alpha / 3; alpha is divided by three because three different models are used: the more models we experiment with, the larger the possibility that we find a result by chance. The difference in performance is significant when p < alpha_corrected.
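The comparison procedure can be written out in a few lines of standard-library Python; the function name is illustrative.

```python
import math

# Compare two Pearson correlations with the ground truth via the Fisher
# transformation and a two-sided z-test.

def fisher_z_test(r1, r2, n1, n2):
    """p-value for the difference between two independent Pearson r values."""
    z1, z2 = math.atanh(r1), math.atanh(r2)          # Fisher transformation
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # standard error
    z = (z1 - z2) / se
    return 1.0 - math.erf(abs(z) / math.sqrt(2))     # two-sided p-value

alpha_corrected = 0.05 / 3  # Bonferroni correction over the three models
```

A pair of data modes would then be considered significantly different when `fisher_z_test` returns a value below `alpha_corrected`.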
MAE is not used directly for model comparison because the results can be misleading: a reasonable-seeming MAE (0.12) can be achieved simply by predicting the mean of the training scores. Furthermore, we noticed that MAE can be very low while the correlations are also very low, indicating that the model predictions achieve high accuracy without being correlated with the ground truth labels. Further investigation shows that this is especially the case when the ground truth labels lie in a very small range, as with the agreeableness trait.

4 Experimental Results

Figure 4: Mean correlation between predictions and annotated attribution labels, in each data mode, across all personality traits, for the three models that were investigated. All of the computed p-values are much smaller than the significance level, indicating that our results are unlikely to be an effect of chance. The significance of the differences between data modes within each model is documented in Table 2.

In the experiments, three different DNNs are run in four experimental conditions. It is important to note that our focus lies on the performance difference between the conditions; the absolute performance of the different models is not of interest in this paper. We expected that the inclusion of background information, in addition to facial information, would increase performance across all models. However, this is not reflected in our results. The following observations can be made about the results presented in Figure 4 and Table 2.
Similarity: The DNNs in the face condition always result in a relatively high correlation with the annotated labels. This can be explained by the fact that faces are naturally more similar to each other than a set of images featuring various backgrounds, i.e., human faces share the same structure, whereas backgrounds can contain many different objects in many different configurations. We can quantify similarity by calculating the standard deviation sigma from the mean image of the data in each experimental condition; the lower sigma, the more similar the images in that condition. Computing sigma yields a lower value for the face condition than for the background condition. The images in the face condition have also been aligned such that the location of the facial landmarks is the same for all images, which makes the learning task easier for the DNNs. In contrast, nothing has been done to increase the "structuredness" of the images in the background condition.
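The similarity measure described above can be written out explicitly; the function name is illustrative.

```python
import numpy as np

# Standard deviation of a set of images around their mean image.
# Lower values mean the images in a condition are more alike.

def condition_sigma(images):
    """images: array of shape (N, H, W, C), all images the same size."""
    mean_image = images.mean(axis=0)
    return float(np.sqrt(((images - mean_image) ** 2).mean()))
```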
SNR: When background information is explicitly added to the input, i.e., the face vs. face+bg condition, a relative decrease in correlation was measured across all models in comparison to the face condition. This decrease is significant for the Deep Impression and ResNet18 v2 models. Initially this may seem like a surprising result, because the DNNs in the background condition do show some correlation with the ground truth; in theory, the more data the model is given, the better it should perform, especially since both conditions show correlation with the ground truth. However, the correlation in the background condition is significantly lower than the correlation in the face condition, see Table 2. This performance gap indicates that, for the tested DNN architectures, the face condition is significantly more informative than the background condition. From an information-theoretic perspective, an analogy with the signal-to-noise ratio (SNR) can be made: the background condition contains more noise, decreasing the correlation when added to the face condition, and inversely, the face condition contains more signal, increasing the correlation when added to the background condition.
Pre-train: It can be observed that the correlations of ResNet18 v2 are relatively high for all conditions. Given that the model has been pre-trained on ImageNet, we can assume that this pre-training increases the model's ability to find useful features, both in the background and in the face. Even though this DNN behaves similarly to the other two DNNs in the face+bg condition, it performs much better in the entire frame condition. We suspect that the combination of pre-training and the fact that the entire frame contains more information causes the increase in correlation. This suspicion is supported by the fact that the other two DNNs have not been pre-trained on ImageNet and that their performance in the entire frame condition is relatively low compared to the face condition.

Deep Impression
face vs. face+bg
face vs. entire frame
face vs. bg
bg vs. face+bg
bg vs. entire frame
face+bg vs. entire frame
ResNet18 v1
face vs. face+bg
face vs. entire frame
face vs. bg
bg vs. face+bg
bg vs. entire frame
face+bg vs. entire frame
ResNet18 v2
face vs. face+bg
face vs. entire frame
face vs. bg
bg vs. face+bg
bg vs. entire frame
face+bg vs. entire frame
Table 2: An overview of the comparisons between the correlations of Figure 4. Data modes that differ significantly have a p-value below the corrected significance level and are indicated with an asterisk. This table should be viewed in conjunction with Figure 4.

5 Discussion

Technologies that make use of facial information remain controversial in general. However, researching such technologies and the methods behind them is beneficial when the goal is to understand their inner workings and assess their reliability, given that they are being used in the real world to make actionable decisions.

In this study, we considered the visual sources of information that drive apparent personality attribution using DNNs. Three different DNNs were run in four conditions on the ChaLearn First Impressions v2 dataset. It was expected that the inclusion of background information, in addition to facial information, would increase performance across all models.

Surprisingly, we found no evidence that background information improves model attributions for apparent personality traits. In fact, when the background is explicitly added to the input, a decrease in performance was measured across all models. Our results do suggest that correlations with the ground truth can be boosted by training on the entire frame, but the result is not significantly higher than training on the facial information alone. From the experiments we can conclude that facial information is significantly more informative to our models than the background information, even when the model is pre-trained on ImageNet and given access to the complete frame. However, it is notable that, in the background condition, the DNNs can perform trait attributions with some correlation to the labels. This suggests that there is a regularity present in the background condition that the DNNs pick up on. Further investigation is required to determine to what degree the human annotators could utilize information in the background, and whether they have done so.

As is often the case with deep learning applications, it is difficult to say with any certainty whether the results will replicate across other architectures and datasets. DNN architectures are by their very nature incredibly complex structures containing many interacting components, and the net result of the interaction between these components cannot be predicted in advance [26]. To address this limitation, the experiments should be performed on more DNN architectures. The ability of the model to extract informative features from the data is crucial to its performance, as has been shown by pre-training the network. The recent advances in contextual feature extraction indicate that it is possible to create networks with a human-like capability of image understanding for certain tasks, especially image captioning [17]. However, we still have a long way to go to bridge the gap to subjective social human understanding from image data.

Concluding, our results suggest that DNNs mainly exploit facial features to predict apparent personality traits. Future research should provide further insights into how exactly facial features determine particular apparent personality traits.


  • [1] J. Aaker and S. Fournier (1995) A brand as a character, a partner and a person: three perspectives on the question of brand personality. ACR North American Advances. Cited by: §1.
  • [2] H. Aviezer, R. R. Hassin, J. Ryan, C. Grady, J. Susskind, A. Anderson, M. Moscovitch, and S. Bentin (2008) Angry, disgusted, or afraid? studies on the malleability of emotion perception. Psychological science 19 (7), pp. 724–732. Cited by: §1.
  • [3] L. F. Barrett and E. A. Kensinger (2010) Context is routinely encoded during emotion perception. Psychological Science 21 (4), pp. 595–599. Cited by: §1.
  • [4] L. F. Barrett, B. Mesquita, and M. Gendron (2011) Context in emotion perception. Current Directions in Psychological Science 20 (5), pp. 286–290. Cited by: §1.
  • [5] E. Brunswik (1947) Systematic and representative design of psychological experiments. Univ. of Calif. Press Berkeley. Cited by: §1.
  • [6] I. J. Deary (2009) The trait approach to personality. The Cambridge handbook of personality psychology 1, pp. 89. Cited by: §1.
  • [7] P. Ekman, W. V. Friesen, M. O’sullivan, and K. Scherer (1980) Relative importance of face, body, and speech in judgments of personality and affect.. Journal of personality and social psychology 38 (2), pp. 270. Cited by: §1.
  • [8] H. J. Escalante, H. Kaya, A. A. Salah, S. Escalera, Y. Güçlütürk, U. Güçlü, X. Baró, I. Guyon, J. J. Junior, M. Madadi, et al. (2018) Explaining first impressions: modeling, recognizing, and explaining apparent personality from videos. arXiv preprint arXiv:1802.00745. Cited by: §1.2.
  • [9] D. C. Funder (2001) Personality. Annual Review of Psychology 52 (1), pp. 197–221. Note: PMID: 11148304 External Links: Document Cited by: §1.
  • [10] L. R. Goldberg (1993) The structure of phenotypic personality traits.. American psychologist 48 (1), pp. 26. Cited by: §1.
  • [11] Y. Güçlütürk, U. Güçlü, X. Baró, H. J. Escalante, I. Guyon, S. Escalera, M. A. Van Gerven, and R. Van Lier (2018) Multimodal first impression analysis with deep residual networks. IEEE Transactions on Affective Computing 9 (3), pp. 316–329. Cited by: §2.
  • [12] Y. Güçlütürk, U. Güçlü, M. Perez, H. Jair Escalante, X. Baró, I. Guyon, C. Andujar, J. Jacques Junior, M. Madadi, S. Escalera, et al. (2017) Visualizing apparent personality analysis with deep residual networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3101–3109. Cited by: §1.2, §2.
  • [13] Y. Güçlütürk, U. Güçlü, M. A. van Gerven, and R. van Lier (2016) Deep impression: audiovisual deep residual networks for multimodal apparent personality trait recognition. In European Conference on Computer Vision, pp. 349–358. Cited by: §1.2, §3.3.1, §3.3.
  • [14] F. Gürpinar, H. Kaya, and A. A. Salah (2016) Multimodal fusion of audio, scene, and face features for first impression estimation. In Pattern Recognition (ICPR), 2016 23rd International Conference on, pp. 43–48. Cited by: §1.2, §2.
  • [15] F. Gürpınar, H. Kaya, and A. A. Salah (2016) Combining deep facial and ambient features for first impression estimation. In European Conference on Computer Vision, pp. 372–385. Cited by: §1.2, §2.
  • [16] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §3.3.
  • [17] M. Hossain, F. Sohel, M. F. Shiratuddin, and H. Laga (2019) A comprehensive survey of deep learning for image captioning. ACM Computing Surveys (CSUR) 51 (6), pp. 118. Cited by: §5.
  • [18] N. Howlett, K. Pine, I. Orakçıoğlu, and B. Fletcher (2013) The influence of clothing on first impressions: rapid and positive responses to minor changes in male attire. Journal of Fashion Marketing and Management: An International Journal 17 (1), pp. 38–48. Cited by: §1.
  • [19] J. Junior, C. Jacques, Y. Güçlütürk, M. Pérez, U. Güçlü, C. Andujar, X. Baró, H. J. Escalante, I. Guyon, M. A. van Gerven, et al. (2018) First impressions: a survey on computer vision-based apparent personality trait analysis. arXiv preprint arXiv:1804.08046. Cited by: §1.1.
  • [20] D. E. King (2009) Dlib-ml: a machine learning toolkit. Journal of Machine Learning Research 10, pp. 1755–1758. Cited by: §3.2.
  • [21] L. P. Naumann, S. Vazire, P. J. Rentfrow, and S. D. Gosling (2009) Personality judgments based on physical appearance. Personality and social psychology bulletin 35 (12), pp. 1661–1671. Cited by: §1.
  • [22] C. Y. Olivola and A. Todorov (2010) Elected in 100 milliseconds: appearance-based trait inferences and voting. Journal of nonverbal behavior 34 (2), pp. 83–110. Cited by: §1.
  • [23] L. Poggio, J. Aragonés, and R. Pérez-López (2013) Inferences of personality traits from bedroom objects: an approach from the scm. Procedia-Social and Behavioral Sciences 82, pp. 668–673. Cited by: §1.
  • [24] V. Ponce-López, B. Chen, M. Oliu, C. Corneanu, A. Clapés, I. Guyon, X. Baró, H. J. Escalante, and S. Escalera (2016) Chalearn lap 2016: first round challenge on first impressions-dataset and results. In European Conference on Computer Vision, pp. 400–418. Cited by: §1.2, §3.1.
  • [25] S. Poria, E. Cambria, R. Bajpai, and A. Hussain (2017) A review of affective computing: from unimodal analysis to multimodal fusion. Information Fusion 37, pp. 98–125. Cited by: §1.1.
  • [26] G. Ras, M. van Gerven, and P. Haselager (2018) Explanation methods in deep learning: users, values, concerns and challenges. In Explainable and Interpretable Models in Computer Vision and Machine Learning, pp. 19–36. Cited by: §5.
  • [27] R. Righart and B. De Gelder (2008) Recognition of facial expressions is influenced by emotional scene gist. Cognitive, Affective, & Behavioral Neuroscience 8 (3), pp. 264–272. Cited by: §1.
  • [28] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei (2015) ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV) 115 (3), pp. 211–252. External Links: Document Cited by: §2.
  • [29] A. Todorov, C. Y. Olivola, R. Dotsch, and P. Mende-Siedlecki (2015) Social attributions from faces: determinants, consequences, accuracy, and functional significance. Annual review of psychology 66, pp. 519–545. Cited by: §1.
  • [30] S. Tokui, K. Oono, S. Hido, and J. Clayton (2015) Chainer: a next-generation open source framework for deep learning. In Proceedings of Workshop on Machine Learning Systems (LearningSys) in The Twenty-ninth Annual Conference on Neural Information Processing Systems (NIPS). Cited by: §3.3.1.
  • [31] J. S. Uleman, S. Adil Saribay, and C. M. Gonzalez (2008) Spontaneous inferences, implicit impressions, and implicit theories. Annu. Rev. Psychol. 59, pp. 329–360. Cited by: §1.
  • [32] A. Vinciarelli and G. Mohammadi (2014) A survey of personality computing. IEEE Transactions on Affective Computing 5 (3), pp. 273–291. Cited by: §1.1.
  • [33] X. Wei, C. Zhang, H. Zhang, and J. Wu (2018) Deep bimodal regression of apparent personality traits from short video sequences. IEEE Transactions on Affective Computing 9 (3), pp. 303–315. Cited by: §1.2, §2.
  • [34] J. Willis and A. Todorov (2006) First impressions: making up your mind after a 100-ms exposure to a face. Psychological science 17 (7), pp. 592–598. Cited by: §1.