First Impressions: A Survey on Computer Vision-Based Apparent Personality Trait Analysis

04/21/2018 ∙ by Julio C. S. Jacques Junior, et al. ∙ University of Barcelona

Personality analysis has been widely studied in psychology, neuropsychology, and signal processing, among other fields. From the computing point of view, speech and text have by far been the most analyzed cues for personality analysis. Recently, however, there has been an increasing interest from the computer vision community in analyzing personality from visual information. Recent computer vision approaches are able to accurately analyze human faces, body postures and behaviors, and use this information to infer apparent personality traits. Given the growing research interest in this topic, and the potential impact that this sort of method could have on society, we present in this paper an up-to-date review of existing computer vision-based visual and multimodal approaches for apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features. More importantly, future avenues of research in the field are identified and discussed. Furthermore, aspects of the subjectivity in data labeling/evaluation, as well as current datasets and challenges organized to push the research on the field, are reviewed. Hence, the survey provides an up-to-date review of research progress in a wide range of aspects of this research theme.


1 Introduction

Psychologists have long studied human personality, and throughout the years different theories have been proposed to categorize, explain and understand it. According to Vinciarelli and Mohammadi [1], the models that most effectively predict measurable aspects of people's lives are those based on traits. Trait theory [2] is an approach based on the definition and measurement of traits, i.e., habitual patterns of behavior, thought and emotion that are relatively stable over time. Trait models are built upon human judgments about semantic similarity and relationships between the adjectives that people use to describe themselves and others. For instance, most people know the meaning of nervous, enthusiastic, and open-minded. Trait psychologists build on these familiar notions, giving precise definitions, devising quantitative measures, and documenting the impact of traits on people’s lives [2].

Psychology studies, among other aspects, behaviour. Behaviour (B) is a function of the person (P) and the situation (S), i.e., B = f(P, S). From a psychological point of view, most research has been conducted on the personal side of the equation (P), especially on individual differences in personality and cognitive traits. From a computational perspective, a few recent studies have also started to pay attention to the situational part of the equation (S), with a particular interest in personality perception. Apparent personality (AP), however, is conditioned on the observer (O), and can be defined as a function of P, S and O, i.e., AP = f(P, S, O). While a vast amount of psychological research on cognitive processes in individuality judgment from the point of view of the observer can be found in the literature [3], research from a computational point of view is just at its early stages. This study revealed, among other things, that beyond measurements of agreement with respect to personality perception, almost no attention has been given to the part of the equation associated with the observer (O) when automatic apparent personality trait recognition is considered.
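The functional view above can be made concrete with a small sketch: a model of apparent personality must condition on observer characteristics in addition to the observed person and situation. Every name and coefficient below is an illustrative assumption, not something taken from the survey.

```python
# Illustrative sketch of the functional view in the text:
# behaviour B = f(P, S); apparent personality AP = f(P, S, O).
# All feature names and weights are hypothetical placeholders.

def behaviour(person_traits: dict, situation: dict) -> dict:
    """B = f(P, S): observable behaviour depends on the person and the situation."""
    # e.g., extraverted people talk more, and everyone talks more in large groups
    talkativeness = person_traits["extraversion"] + 0.1 * situation["group_size"]
    return {"talkativeness": talkativeness}

def apparent_personality(person_traits: dict, situation: dict, observer: dict) -> dict:
    """AP = f(P, S, O): the impression additionally depends on the observer."""
    b = behaviour(person_traits, situation)
    # a biased observer maps the same behaviour to a different impression
    perceived_extraversion = b["talkativeness"] - observer["harshness_bias"]
    return {"extraversion": perceived_extraversion}

impression = apparent_personality(
    {"extraversion": 0.7}, {"group_size": 5}, {"harshness_bias": 0.2}
)
```

Two observers with different biases would attribute different apparent personalities to the same behaviour, which is exactly the observer dependence the text argues has been neglected.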

From the perspective of automatic personality computing, the relationship between stimuli (everything observable people do) and the outcomes of social perception processes (how we form impressions about others) is likely to be stable enough to be modeled in statistical terms [4]. This is a major advantage for the analysis of social perception processes because computing science, in particular machine learning, provides a wide spectrum of statistical methods aimed at modeling statistical relationships like those observed in social perception. However, the main criticism against the use of personality trait models is that they are purely descriptive and do not correspond to actual characteristics of individuals, even though several decades of research and experiments have shown that the same traits appear with surprising regularity across a wide range of situations and cultures, suggesting that they actually correspond to psychologically salient phenomena [1].

During the past decades, different trait models have been proposed and broadly studied: the Big-Five [5], Big-Two [6], and 16PF [7], among others. The model known as Big-Five or Five-Factor Model, often represented by the acronym OCEAN, is one of the most widely adopted and influential models in psychology and personality computing. It is a hierarchical organization of personality traits in terms of five basic dimensions: Openness to Experience (contrasts traits such as imagination, curiosity and creativity with shallowness and imperceptiveness), Conscientiousness (contrasts organization, thoroughness and reliability with traits such as carelessness, negligence and unreliability), Extraversion (contrasts talkativeness, assertiveness and activity level with silence, passivity and reserve), Agreeableness (contrasts kindness, trust and warmth with traits such as hostility, selfishness and distrust), and Neuroticism (contrasts emotional instability, anxiety and moodiness with emotional stability).

For the sake of illustration, Fig. 1 shows representative face images for the highest and lowest levels of each Big-Five trait, obtained from [8]. Such images reflect a kind of relationship between observers' perceptions and observed people on the ChaLearn First Impression Database [9]. As can be seen, these representative samples are influenced by, among other things, facial expression (e.g., a high score on the Extraversion trait and an associated smiling expression) and the subjective bias of annotators with respect to gender (i.e., some traits scored high, such as Openness to Experience and Extraversion, show a female-looking face, whereas the same traits scored low seem to show a male-looking face). These are just a few examples of how the characteristics of observers and observed people affect first impressions of personality, and of how complex and subjective they can be.

Assessing the personality of an individual means measuring how well the adjectives mentioned above describe him/her. Psychologists have developed reliable and useful methodologies for assessing personality traits. Despite their known limitations, self-report questionnaires have become the dominant method for assessing personality [10]. However, personality assessment is not limited to psychologists: everybody, every day, makes judgments about our personalities as well as those of others. In everyday intuition, the personality of a person is assessed along several dimensions. We are used to talking about an individual as being (non-)open-minded, (dis-)organized, too much/little focused on herself, etc. [11]. Nonetheless, support for the validity of these first impressions is inconclusive, raising the question of why we form them so readily. According to Willis and Todorov [12], people form first impressions about others, either from their faces [13] or in general, from a glimpse as brief as 100 ms, and these snap judgments predict all kinds of important decisions (discussed in Sec. 2.1).

Fig. 1: Representative face images for the highest and lowest levels of each Big-Five trait, obtained from [8]. Images were created by aligning and averaging the faces of 100 unique individuals that had the highest and lowest evaluations for each trait on the ChaLearn First Impression database [9] (we inverted the images for the Neuroticism trait, i.e., low ↔ high, as they were drawn in [8] from the perspective of Emotional Stability).

While being diverse in terms of data, technologies and methodologies, all domains concerned with personality computing consider the same three main problems [1], namely automatic personality recognition (i.e., recognizing the real personality of an individual), automatic personality perception (predicting the personality others attribute to a given person), and automatic personality synthesis (i.e., generating artificial personalities through embodied agents). Recently, the machine learning research community adopted the terms apparent personality, personality impressions, or simply first impressions [9, 14, 15] to refer to personality perception (these terms are used interchangeably throughout the text). First impressions, however, are not restricted to personality. In general, automatic personality perception is composed of three main steps: data annotation/ground-truth generation, feature extraction, and classification/regression, which will be incrementally discussed throughout the text.

Vinciarelli and Mohammadi [1] presented the first survey on personality computing. However, they focused on automatic personality recognition, perception and synthesis from a more general point of view rather than from a vision-based perspective. In this work, we contribute to the research area in the following directions:

  • We present an up-to-date literature review on apparent personality trait analysis from a vision-based perspective, i.e., centered on the visual analysis of humans. Reviewed works include some kind of image-based analysis at some stage of their pipelines. Hence, this study can be considered the first comprehensive review covering this particular research area.

  • We discuss the subjectivity in data labeling (and evaluation protocols) for first impressions, which is a relatively new and emerging research topic.

  • We propose a taxonomy to group works according to the type of data they use: still images, image sequences, audiovisual or multimodal. We claim the type of data and application are strongly correlated, and that the proposed taxonomy can help future researchers in identifying: 1) common features, databases and protocols employed in different categories, 2) pros and cons of “similar” approaches, and 3) what methods they should compare with (or get inspired by).

  • We present and discuss relevant works developed for real personality trait analysis, as well as those correlating both real and apparent personalities, which is an almost unexplored area in visual computing.

  • We present a set of mid-level cues, collected from the reviewed works, that are highly correlated with each personality trait of the Big-Five model. We analyse reported results and identify the traits that are typically best recognized, as well as the most challenging ones.

  • We discuss current datasets and competitions organized to push the research in the field, main limitations and future research prospects.

  • We identify open challenges and research opportunities in the field of apparent personality analysis.

The remainder of this paper is organized as follows. Sec. 2.1 motivates the research on the topic. The subjectivity associated with data labeling is discussed in Sec. 2.2. The state-of-the-art, according to the proposed taxonomy, is presented and discussed from Sec. 2.3 to Sec. 2.6. Then, a joint discussion of real and apparent personality is presented in Sec. 2.7. Later, we briefly discuss high-level features and their correlated traits in Sec. 2.8, and the overall accuracy obtained for different Big-Five traits in Sec. 2.9. In Sec. 3 we discuss past and current challenges organized to push the research area forward. Finally, remarks and conclusions are drawn in Sec. 4.

2 Related work

This section presents a comprehensive review on vision-based methods for apparent personality trait analysis.

2.1 The importance of first impressions in our lives

A computer program capable of predicting in mere seconds the psychological profile of someone could have wide application for companies as well as for individuals around the globe. To mention just a few, such programs could be applied in health (e.g., personalized psychological therapies), robotics (e.g., humanized and social robots), learning (e.g., automatic tutoring systems), and leisure and business (e.g., personalized recommendation systems). For instance, recent studies show that therapeutic robots can be helpful in stimulating social skills in children with autism, encouraging imitation, touch, eye gaze, and communication with persons [16], which would be impossible without a robot provided with advanced capabilities. Other studies indicate that video interviews, through nonverbal visual human behavior analysis, are starting to modify the way in which applicants get hired [17]. Nevertheless, these kinds of applications will only be truly accepted and trusted if explainability and transparency can be guaranteed [18]. Moreover, to become inclusive and benefit everyone, such systems need to be able to generalize to different contexts.

The development and evaluation of automatic methods for personality perception is a very delicate topic, forcing us to reflect on the still open question “what should be the limit of such technology?”. The accuracy of apparent personality recognition models is generally measured in terms of how close the outcomes of the approach are to the judgments made by the raters (also referred to as judges, annotators, labelers or simply external observers). The main assumption behind such evaluation is that social perception technologies are not expected to predict the actual state of the target, but the state observers attribute to it, i.e., their impressions. This makes automatic apparent personality trait analysis a very complex and subjective task.

According to the literature, faces are a rich source of cues for apparent personality attribution [19, 20]. However, Todorov and Porter [21] showed that first impressions based on facial analysis can vary with different photos (i.e., the ratings vary w.r.t. context). Whether or not trait inferences from faces are accurate, they affect important social outcomes. For example, attractive people have better mating success and job prospects than their less fortunate peers. The effects of appearance on social outcomes may be partly attributed to the halo effect [22], the tendency to use global evaluations to make judgments about specific traits (e.g., attractiveness correlates with perceptions of intelligence), which has a strong influence on how people build their first impressions of others. Rapid judgments of competence based solely on the facial appearance of candidates were enough to allow participants in one experiment [23] to predict the outcomes of Senate elections in the United States in 72.4% of the cases. It seems politicians who simply look more competent are more likely to win elections [24, 25]. First impressions also influence legal decision-making [26]. Nevertheless, at the same time that different research communities (e.g., machine learning, computer vision and psychology) are advancing the state-of-the-art in the field in different directions, it was recently observed (https://www.theguardian.com/technology/2017/apr/13/ai-programs-exhibit-racist-and-sexist-biases-research-reveals) that some Artificial Intelligence based models exhibit racial and gender biases, which is an extremely complex and emerging issue. One possible explanation for these problems (in the case of first impressions) is that the ground truth annotations used to train AI based models are given by individuals and may reflect their bias/preconception towards the person in the images or videos, even though it may be unintentional and subconscious [27]. Hence, trained classifiers can inherently contain a subjective bias. This phenomenon was also observed and analyzed in natural language processing [28], and should receive special attention from all research areas.

2.2 How challenging and subjective can apparent personality trait labeling/evaluation be?

The outcomes of machine learning based models will in some way reflect the training data they use, i.e., in the case of first impression analysis, the labels provided by the raters. The validity of such data can be very subjective due to several factors, such as cultural [29, 30], social [31, 22], contextual [21], gender [32, 33], and appearance [34] factors, which makes research and development on personality perception a very challenging task.

The subjectivity of impressions raises further questions about how many raters should be involved in an experiment and how much they should agree with one another. When it comes to the Big-Five model, the literature suggests that agreement should be measured in terms of the amount of variance shared by the observers. In general, low agreement should not be considered the result of low quality judgments or data, but an effect of the inherent ambiguity of the problem [4]. However, existing works analyse apparent personality from a universal perspective, that is, the impressions given by different observers concerning a particular individual are averaged. Although this has become a standard procedure, we argue it does not accurately reflect how first impressions work in real life. Person perception is conditioned on the observer. Thus, particularities of the different populations of observers, such as cultural differences [29], should also be taken into account. The aim of this section, rather than trying to answer the above questions, is to introduce a brief review and discussion on the topic.
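As a toy illustration of measuring agreement as shared variance, the sketch below computes the average pairwise Pearson correlation between raters' scores; published work typically uses intraclass correlation or similar statistics, so this is only a simple stand-in with made-up ratings.

```python
# Minimal sketch: observer agreement as shared variance, approximated by the
# average pairwise Pearson correlation between raters (ICC would be the more
# standard choice; the toy data below are illustrative, not from any dataset).
import statistics

def pearson(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def mean_pairwise_agreement(ratings):
    """ratings: one list of per-target scores per rater."""
    n = len(ratings)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return statistics.fmean(pearson(ratings[i], ratings[j]) for i, j in pairs)

# three raters scoring the same five targets on one trait (toy Likert data)
raters = [[1, 2, 3, 4, 5], [2, 2, 3, 5, 5], [1, 3, 3, 4, 4]]
agreement = mean_pairwise_agreement(raters)  # near 1.0 => raters share variance
```

A low value here would not necessarily indicate bad annotators — as the text notes, it may simply reflect the inherent ambiguity of the judgment being made.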

Jenkins et al. [33] analyzed the variability in photos of the same faces. According to their study, within-person variability exceeded between-person variability in attractiveness, suggesting that the consequences of within-person variability are not confined to judgments of identity. It was also observed that female raters (i.e., gender issues) tended to be rather harsh on the male faces. Even though attractiveness is not explicitly related to personality, these results indicate how challenging and complex the task of social judgment is. The study presented in [31] revealed that the most important source of within-person variability in social impressions of key traits (trustworthiness, dominance, and attractiveness, which index the main dimensions in theoretical models of facial impressions) is the emotional expression of the face; the viewpoint of the photograph also affects impressions and modulates the effects of expression.

Within-person variance in behavior is likely to be a response to variability in relevant situational cues (e.g., people are more extraverted in large groups than in small groups, even though some individuals may not increase, or may even decrease, their level of extraversion with the size of the group) [35]. Because situational cues vary in everyday behavior, behavior varies as well. Abele and Wojciszke [6] analyzed the Big-Two model (Agency, also called competence or power, and Communion, also called warmth, morality or expressiveness) from the perspective of self versus others. According to their work, agency is more desirable and important in the self-perspective, and communion is more desirable and important in the other-perspective, i.e., “people perceive and evaluate themselves and others in a way that maximizes their own interests and current goals”.

Walker et al. [29] found some cross-cultural consensus and differences when addressing the problem of universal and cultural differences in forming personality trait judgments from faces. Using a face model, they were able to formalize the static facial information that is used to make certain personality trait judgments such as aggressiveness, extroversion and likeability. According to their study, Asian and Western participants were able to identify the enhanced salience of all different traits in the faces, suggesting that the associations between mere static facial information and certain personality traits are highly shared among participants from different cultural backgrounds. However, Asian participants required more time to complete the task. On the other hand, faces with enhanced salience of aggressiveness, extroversion, and trustworthiness were better identified by Western than by Asian participants.

Even though the problem of stereotyping is minimized in [29] (which could bias the analysis) through the use of manipulated faces (synthetic faces are used in [26, 24] for the same purpose), it should be noted that social judgments in real situations are formed from different sources, such as pose, gaze direction, facial expression or styling. For example, hairstyle, which is extrafacial styling information, can be intentionally chosen by target persons to shape others’ impressions of them. According to Vetter and Walker [36], in order to systematically investigate how faces are perceived, categorized or recognized, we need control over the stimuli we use in our experiments. Furthermore, the problem is not only to get face images taken under comparable lighting conditions, distance from the camera, pose or facial expression, but to get face stimuli with clearly defined similarities and differences.

Barratt et al. [37] revisited the classical problem of the Kuleshov effect. According to film mythology, the filmmaker Lev Kuleshov conducted an experiment (in the early 1920s) in which he combined a close-up of an actor’s neutral face with three different emotional contexts: happiness, sadness and hunger. The viewers reportedly perceived the actor’s face as expressing an emotion congruent with the given context. Recent attempts at replicating the experiment have produced either conflicting or unreliable results; however, it was observed that some sort of Kuleshov effect does in fact exist. Olivola and Todorov [38] evaluated the ability of human judges to infer the characteristics of others from their appearance. They found that judges are generally less accurate at predicting characteristics than they would be if appearance cues were ignored, suggesting that appearance is overweighted and can have detrimental effects on accuracy.

More recently, Escalante et al. [27] analyzed the ChaLearn First Impression dataset and the top winning approaches of the ChaLearn LAP Job Candidate Screening Challenge [39]. Part of the study focused on the existence of latent bias towards gender and apparent ethnicity. When correlating these variables with apparent personality annotations (Big-Five), they first observed an overall positive attitude/preconception towards females in both the personality traits (except agreeableness) and the job interview invitation variable. Moreover, gender bias was observed to be stronger than ethnicity bias. Concerning ethnicity, results indicated an overall positive bias towards Caucasians and a negative bias towards African-Americans. No discernible bias towards Asians was observed either way.

2.2.1 Discussion

This section summarizes the main challenges related to data labeling discussed in Sec. 2.2, the different solutions employed to address worker bias, and suggestions for future research.

Data labels. Annotation protocols for personality perception require special attention. The challenge lies in assigning a particular score to a certain trait, either from a continuous domain or within a specific range (e.g., using a Likert scale [40]), which is by default extremely subjective, time-consuming and influenced by worker bias. This study revealed that there is no standard protocol for annotating personality perception data, nor a gold standard for apparent personality. According to [41], every individual may perceive others in a different way, which poses a great obstacle to creating a model that addresses annotator subjectivity and bias. To illustrate how challenging data labeling in this area can be, Nguyen and Gatica-Perez [17] asked Amazon Mechanical Turk (AMT) workers to label videos with respect to the Big-Five model using a five-point Likert scale and a standard personality inventory questionnaire. In their work, only the extraversion trait was observed to be consistently rated.

Reducing bias. Worker bias is particularly difficult to evaluate and correct when many workers contribute just a few labels each, which is typical when labeling is crowd-sourced. Reducing bias has been tackled in different ways in the literature, especially when apparent personality trait analysis is taken into account. Bremner et al. [42] proposed to identify annotators who assigned labels without looking at the content by removing judges who incorrectly answered a test question. The use of pairwise comparisons [9, 43, 25] has also become a very effective way to address worker bias. Joo et al. [25] asked AMT workers to compare a pair of images in given dimensions rather than evaluating each image individually. A similar strategy is applied in [9] for video files, including an algorithm to estimate continuous scores (i.e., ground truth labels) from pairwise annotations [43]. Comparison schemes have three main advantages: 1) annotators do not need to establish an absolute baseline or scale for these social dimensions, which would be inconsistent (i.e., “what does a given score mean?”); 2) they naturally identify the strength of each sample in terms of its relational distance from other examples, generating a more reliable ranking of subtle signal differences [25]; and 3) they prevent previously annotated videos/images from biasing future scores (i.e., scoring someone very low on a certain trait because of an unconscious comparison with previous videos/images where the score was high).
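As a hedged sketch of how continuous scores can be recovered from pairwise annotations, the snippet below fits a classic Bradley-Terry model with a minorization-maximization update; the actual estimator used in [43] may differ, and the win counts are invented toy data.

```python
# Hedged sketch: turning pairwise preference annotations into continuous
# scores with a Bradley-Terry model (the estimator in the cited work may
# differ). wins[i][j] = number of times item i was preferred over item j.

def bradley_terry(wins, iters=200):
    n = len(wins)
    w = [1.0] * n  # latent "strength" (continuous score) of each item
    for _ in range(iters):
        for i in range(n):
            total_wins = sum(wins[i][j] for j in range(n) if j != i)
            den = sum((wins[i][j] + wins[j][i]) / (w[i] + w[j])
                      for j in range(n) if j != i)
            if den:
                w[i] = total_wins / den  # MM update (Hunter's algorithm)
        s = sum(w)
        w = [x * n / s for x in w]  # normalize so scores stay comparable
    return w

# toy example: item 0 usually beats item 1, which usually beats item 2
wins = [[0, 8, 9],
        [2, 0, 7],
        [1, 3, 0]]
scores = bradley_terry(wins)
```

The recovered scores respect the preference structure (here scores[0] > scores[1] > scores[2]) without any annotator ever having produced an absolute rating, which is exactly the appeal of comparison schemes.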

Future directions. The area of personality computing would benefit from the design, definition and release of new, large and public databases, as well as from the design of standard protocols for data collection, labeling and evaluation. According to our study, ~40% of the reviewed works developed for personality perception are evaluated on public databases without any type of customization (in most cases, small in size and/or composed of a small number of participants); around 25% of works are evaluated on private databases, and the remaining ones on customized versions of public databases. The use of private or customized databases makes comparison among works a big challenge, creating a barrier to advancing the state-of-the-art in the field. Disregarding the different applications (e.g., face-to-face interviews, HRI, or conversational videos), which can influence the design of new databases, we argue that future databases should be composed of at least a large number of samples from a heterogeneous population (with respect to both observed people and annotators), so that current and/or future works can generalize to different cultures more easily. We envisage two main scenarios that could make a big difference in future research on the topic, as follows.

Joint analysis of real and apparent personalities: this will require the design of new, large and public databases containing labels for both real and apparent personality. Up to now, the correlation analysis of both personality types, from a computer vision/machine learning point of view, has not been fully exploited. Despite the great difficulty of accomplishing such a task, as it requires data collection (self-reports and ratings) from a large population, it could benefit different research lines in the field (discussed in Sec. 2.7). For instance, when addressing real and apparent age estimation, Jacques et al. [44] showed that the subjective bias contained in the perception of age can be used to better approximate the real target. As far as we know, a similar idea has never been exploited in the context of personality trait analysis. Questions such as “what is the effect of the real personality of someone on his/her perceived personality?”, or “is it possible to accurately regress the real personality from perception mechanisms?”, could then be studied.

Correlate observer vs. observed: “what is the observer looking at?” or “what characteristics does the observer have?”, or even better, a combination of both questions. Existing works do not consider any information about the observers (apart from their impressions) to perform automatic personality perception, such as cultural similarities or differences [29] with respect to different target populations. Future research could take into account, for example, the gender, age, ethnicity, or even the real personality (linking to the previous scenario) of both observers and people being observed. Taking the observers’ characteristics into account would move research in this area to another level. Nevertheless, the above questions may pose great challenges regarding privacy and ethics. A preliminary analysis of this topic was presented in [27], where the authors analyzed the correlation between gender and ethnicity (of the people being observed) and the interview variable contained in their database (provided by the observers). However, no data about the annotators were collected.

Some extra questions remain open and could be subjects for future research, such as whether the universal and cultural differences studied in [29] can be generalized to faces from other cultural backgrounds, whether the personality impression of one trait can influence the impressions of other traits [45], or the relationship between nonverbal content and personality variables/scores [46].

2.3 Still Images

This section reviews the very few works developed for automatic personality perception from still images (where neither audio nor temporal information is used). This class of works usually focuses on facial information to drive the models, generally combining features at different levels and their relationships. Note that some works developed for other categories (e.g., image sequences or audio-visual) could be applied (or easily adapted) to still images, as they perform a frame-by-frame prediction before a final aggregation or fusion (responsible for handling the temporal information).

In the work of Guntuku et al. [47], low-level features are employed to detect mid-level cues (gender, age, presence of image editing, etc.), which are then used to predict the real and apparent personality traits (Big-Five) of users in self-portrait images (selfies). Even though a small dataset is used (composed of 123 images from different users), the authors presented some insights on which mid-level cues contribute to personality recognition and perception (see Table I). Yan et al. [48] studied the relationship between facial appearance and personality impressions in the form of trustworthiness. In their work, different low-level features are extracted from different face regions, as well as relationships between regions. For instance, HoG is used to describe eyebrow shape, and Euclidean distance to describe eye width. To alleviate the semantic gap between low-level and high-level features, mid-level cues are built through clustering. Then, a Support Vector Machine (SVM) is used to find the relationship between face features and personality impressions.
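The low-level → mid-level (clustering) → SVM pipeline described above can be sketched schematically; the features, labels and hyperparameters below are random placeholders rather than the descriptors actually used in [48].

```python
# Schematic stand-in for a low-level -> mid-level -> SVM pipeline: low-level
# region descriptors are clustered into mid-level cues, and an SVM maps those
# cues to an impression label. All data here are random placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
low_level = rng.normal(size=(200, 16))      # stand-in for HoG of face regions
labels = (low_level[:, 0] > 0).astype(int)  # toy "trustworthy" impression

# mid-level cues: distances to cluster centres of the low-level descriptors
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(low_level)
mid_level = kmeans.transform(low_level)     # shape (n_samples, n_clusters)

# SVM on the mid-level representation, with a simple train/test split
clf = SVC(kernel="rbf").fit(mid_level[:150], labels[:150])
accuracy = clf.score(mid_level[150:], labels[150:])
```

The point of the intermediate clustering step is that each mid-level dimension can be inspected and named (a "cue"), which is harder to do with raw low-level descriptors.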

Dhall and Hoey [49] exploited a multivariate regression approach to infer personality impressions of users from Twitter profile images. Hand-crafted and deep learning based features are computed from the face region. Background information, which may affect personality perception, is also considered, and a high correlation between openness and scene descriptors was observed, suggesting that the context where pictures are taken loosely relates to a person’s ability to explore new places. In [50], the authors combined eigenface features with an SVM to investigate whether people perceive an individual portrayed in a picture to be above or below the median with respect to each Big-Five trait.
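A minimal sketch in the spirit of the eigenfaces-plus-SVM setup in [50] is shown below: project face images onto principal components and classify above/below the trait median. The data are random stand-ins for aligned grayscale face crops; nothing here reproduces the paper's data or configuration.

```python
# Hedged sketch of an eigenfaces + SVM median-split classifier. The "faces"
# and the trait scores are random placeholders, not real data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(1)
faces = rng.normal(size=(120, 32 * 32))   # flattened 32x32 "face" crops
trait = faces @ rng.normal(size=32 * 32)  # toy per-image trait score
above_median = (trait > np.median(trait)).astype(int)

# eigenfaces: principal components of the training faces
pca = PCA(n_components=20).fit(faces[:100])
train = pca.transform(faces[:100])
test = pca.transform(faces[100:])

# binary task as in the text: is the person perceived above or below the median?
clf = SVC(kernel="linear").fit(train, above_median[:100])
accuracy = clf.score(test, above_median[100:])
```

Framing the problem as a median split turns a subjective continuous rating into a balanced binary task, which is one pragmatic way to cope with noisy impression labels.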

2.3.1 Discussion

This section discusses the importance of semantic attributes, limitations and future research directions related to Sec. 2.3.

Mid-level features. Facial landmarks seem to be the starting point of different feature extraction methods [49], especially those which exploit mid-level features or semantic attributes [47, 48]. Mid-level features or semantic attributes usually carry meaningful information, which can complement other low-level features and thereby improve accuracy. They also enable more interpretable analysis of the results. For instance, when studying social impressions, Joo et al. [25] analyzed the correlation between a set of mid-level attributes and social dimensions (e.g., attractiveness, intelligence and dominance), and investigated which face regions contribute more to each trait dimension. Such analysis could be considered a step forward in the direction of feature selection for automatic personality perception. Moreover, mid-level cue detectors outperformed many of the low-level features analysed in [47], for almost all trait dimensions of the Big-Five model, reinforcing their benefit.

Current limitations. Most works presented in this section either built their own datasets [47] (e.g., collecting data from the Internet) or adapted to their needs datasets developed for other purposes (i.e., the FERET [51] and LFW-attribute [52] datasets, developed for face recognition, as in [50] and [48], respectively). The common point of these works is that images need to be labelled, and the labels are usually not made public. Thus, reproducing their results can be a major challenge. Twitter profile images collected in [53] are used in [49]; however, baseline labels (Big-Five) are created through the analysis of users' tweets. These points reinforce the fact that new and large public datasets, with associated standard evaluation protocols, are fundamental to advance the state of the art in the field.

Future directions. The analysis of image content outside the face region, as performed in [49], is a topic which deserves further attention (and is not specifically related to still images, as it could be applied to other categories). As emphasized in [54], when addressing the perception of emotions from images, context has an important role, which is completely aligned with personality perception studies. Although some works proposed to ignore background information, hairstyle or clothes [29, 26, 24], people can intentionally combine such information to shape others' impressions of them. Moreover, the literature shows that context also influences annotators during data labeling. In addition to context, high-level features extracted from body pose, gestures or facial expressions, which have already been exploited by some works in other categories, have not been fully exploited when still images are considered. Body language analysis [55], an emerging research topic in computer vision, could benefit personality perception in many ways (with respect to all data modalities). It includes different kinds of nonverbal indicators, such as gaze direction, position of the hands and the style of smiling, which are important markers of the emotional and cognitive inner state of a person.

2.4 Image Sequences

Works exploiting visual cues of image sequences are presented next. They benefit from temporal information and scene dynamics (without acoustic information), which bring useful and complementary information to the problem.

Biel et al. [19] studied personality impressions in conversational videos (vlogs), focusing on facial expression analysis. They used a subset of the Youtube vlog dataset [56], and employed a facial expression model based on the Facial Action Coding System (FACS). Automatic personality perception is addressed using Support Vector Regression (SVR) combined with statistics of facial activity based on frame-by-frame estimates. Results show that extraversion is the trait showing the largest activity cue utilization (reinforced in Table I), which is related to the evidence found in the literature that extraversion is typically easier to judge [57, 58]. Later, Aran and Gatica-Perez [59] investigated the use of social media content as a domain for transfer learning from conversational videos to small group settings. They considered the particular trait of extraversion, and addressed the problem combining Ridge Regression and SVM classifiers with statistics extracted from weighted Motion Energy Images. In [45], the connections between facial emotion expressions and apparent personality traits in vlogging are analysed as an extension of [19]. Four sets of behavioral cues (and fusion strategies) that characterize face statistics and dynamics over brief observation windows are proposed to represent facial patterns. Co-occurrence analysis (e.g., smiling with surprise) is also exploited. Finally, the inference task is addressed using SVR. Their study shows that while multiple facial expression cues have significant correlation with several of the Big-Five traits, they were only able to significantly predict extraversion impressions.
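The statistics-plus-regressor recipe shared by these works can be sketched as follows (a hypothetical numpy illustration, not any of the published pipelines; closed-form ridge regression stands in for the SVR/Ridge variants actually used): frame-wise activity signals are collapsed into per-video statistics, which then feed a linear regressor predicting a trait score.

```python
import numpy as np

def activity_stats(frames):
    """Summarize a per-frame facial-activity signal (e.g., AU
    intensities) with simple statistics, as done by pipelines that
    aggregate frame-by-frame estimates before regression."""
    f = np.asarray(frames, float)
    return np.array([f.mean(), f.std(), (f > f.mean()).mean()])

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam I)^{-1} X^T y."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

Each video contributes one row of statistics to `X`; the learned weights `w` then map aggregated facial activity to an apparent-trait score.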

Taking Human-Computer Interaction (HCI) into account, Celiktutan and Gunes [60] addressed the challenging task of continuous prediction of perceived traits in space and time. According to the authors, continuous predictions of first impressions had not been explored before. In their work, external observers were asked to continuously provide ratings along multiple dimensions (Big-Five) in order to generate continuous annotations of video sequences. The inference problem is then addressed using low-level visual features (e.g., HoG/HoF) combined with a linear regression method. The work was extended in [61, 62] with real-time capabilities, audio-only and audio-visual data analysis.

Considering the great advances in the field of deep learning, Gürpinar and collaborators [20] employed a pre-trained Convolutional Neural Network (CNN) to extract facial expressions as well as ambient information (which is ignored by most competing works on the topic) on the ChaLearn First Impressions [9] dataset. Visual features that represent facial expressions and the scene are combined and fed to a Kernel Extreme Learning Machine (ELM) regressor. Ventura et al. [63] studied why CNN models perform surprisingly well in automatically inferring first impressions. Results show that the face provides most of the discriminative information for personality impression inference, and that the internal CNN representations mainly analyze key face regions such as the eyes, nose, and mouth.

Unlike the aforementioned works, Bekhouche et al. [64] combined texture features extracted from the face region with five SVRs to estimate apparent personality traits (Big-Five). As reported by the authors, although deep learning-based approaches can achieve better results, temporal face texture-based approaches are still very effective.

2.4.1 Discussion

Next, Sec. 2.4 is discussed. Topics like interaction types, spatio-temporal information modeling, slice length/location and prospects for future research directions are covered.

Type of interaction. Two works presented in this section involve humans interacting either with a virtual agent [60] or with small groups of people [59]. Hypothetically, one could consider that some kind of interaction exists when people talk to a camera, as in [19, 20, 59, 63, 64]. According to [59], people talk to the camera in vlogs as if they were talking to other people. When some kind of interaction is considered, classes of features that encode specific aspects of social interactions can be exploited, such as visual activity, facial expressions or body/head motion.

Spatio-temporal information. A preliminary analysis suggests that the inclusion of spatio-temporal information placed first impression analysis on a new level (compared to still images), with a wider range of applications. Continuous prediction remains a little-explored research line. This may be due to the challenging and complex task of generating accurate labels over time for a huge amount of data, in particular when deep learning based methods are considered, which, to our knowledge, have not been employed in this context yet. Celiktutan and Gunes [60] pioneered continuous prediction of first impressions. However, they used a small dataset composed of 30 video recordings captured from 10 subjects. Moreover, their continuous prediction can be interpreted as a frame-based regressor where each frame is treated independently; thus, the dynamics of the scene are not fully explored. In [64, 20], short video clips are globally represented with statistics computed over the sequence of frames. Even though such an approach does not treat frames independently, it still does not consider the temporal evolution of the data. In [45], statistics of facial expression outputs are characterized as dynamic signals over brief observation windows, which can be considered a step forward in dynamically analyzing first impressions along the temporal dimension.
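The brief-observation-window representation discussed above can be approximated with a simple sliding-window summary (an illustrative sketch, not the representation of [45]; the window and hop sizes are arbitrary assumptions): instead of a single global statistic per video, the frame-wise signal becomes a short sequence of local statistics, retaining coarse temporal evolution.

```python
import numpy as np

def window_stats(signal, win, hop):
    """Represent a frame-wise signal as a sequence of short-window
    statistics (mean, std), preserving coarse temporal structure
    instead of collapsing the whole clip into one summary."""
    s = np.asarray(signal, float)
    out = []
    for start in range(0, len(s) - win + 1, hop):
        w = s[start:start + win]
        out.append([w.mean(), w.std()])
    return np.array(out)
```

A downstream regressor can then consume either the window sequence itself or statistics computed over it, depending on how much temporal detail the model supports.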

Slice length/location. The predictive power of facial expressions, depending on the duration and relative position of specific vlog segments, is analyzed in [45]. Results suggest that viewers' impressions are better predicted by features computed at the beginning of each video, corroborating the idea that first impressions are built from short interactions [13]. Nevertheless, the authors reported that further research is needed to confirm their hypothesis, as well as to verify whether the same effect is observed for different nonverbal sources, and whether the optimal duration and position of the slices are the same for each data type. A deeper discussion of slice length/location is provided in Sec. 2.5.3.

Future directions. This study revealed that the number of methods developed for image sequences is significantly smaller than for the audiovisual category (Sec. 2.5), which may be related to the improvements obtained by the inclusion of complementary information or to the different applications being considered. A few of the works described in this section (e.g., [20, 60]) have been extended to consider audiovisual information [65, 66, 67], emphasizing the benefits of including acoustic features in the pipeline. Nevertheless, in general, works are not exploiting the full benefits of temporal information. A feature representation that can keep the temporal evolution of the data, such as the Dynamic Image Networks [68] used in action recognition, should be considered in future research.
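As a concrete example of such a temporal representation, a dynamic image in the spirit of [68] can be computed via rank pooling; the sketch below uses the common linear approximation with weights alpha_t = 2t - T - 1 (an illustration under that stated assumption, not the full Dynamic Image Network):

```python
import numpy as np

def dynamic_image(frames):
    """Approximate rank pooling: collapse a video into a single
    'dynamic image' via a weighted sum of frames, with weights
    alpha_t = 2t - T - 1 emphasizing temporal order."""
    F = np.asarray(frames, float)           # shape (T, ...) e.g. (T, H, W)
    T = F.shape[0]
    alpha = 2 * np.arange(1, T + 1) - T - 1  # sums to zero
    return np.tensordot(alpha, F, axes=1)
```

Because the weights sum to zero, a static video yields an all-zero dynamic image; any non-zero response therefore encodes temporal change rather than appearance.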

Although great advancements have been reported by deep learning based approaches [20, 63], they are often perceived as black-box techniques, i.e., they are able to effectively model very complex problems, but they cannot be easily interpreted, nor can their predictions be explained. Because of this, explainability and interpretability deserve special attention in future research on the topic. As already mentioned, person perception-based applications will only be truly accepted and trusted if they are transparent and can be explained [18]. In fact, the interest in this topic is evidenced by the organization of dedicated events, such as thematic workshops [69, 70] and challenges [39, 27]. However, this kind of research is still in its infancy.

2.5 Audiovisual trait prediction

In this section, we review works using both acoustic (nonverbal) and visual features to perform automatic personality perception. Works are further classified based on the type of interaction (with/without), as they may use datasets, features and methodologies developed for different purposes.

2.5.1 Interactive approaches

Aran and Gatica-Perez [71] addressed personality perception during small group interactions using a subset of the ELEA corpus [72]. Thus, personality impressions needed to be collected, as the dataset did not provide them. The inference task is addressed using Ridge Regression combined with statistics computed from the given video segments (e.g., average speaking turn, prosodic features and visual activity). For a comprehensive review of nonverbal cues applied to small group settings, we refer the reader to [73].

Focusing on feature representation for personality and social impressions, Okada et al. [74] proposed a co-occurrence event mining framework for multiparty and multimodal interactions. According to the authors, the use of co-occurrence patterns between modalities yields two main advantages: (i) it can improve the inference accuracy of trait values based on a richer feature set, and (ii) it can discover key context patterns linking personality traits. In their work, speech utterances, body motion and gaze are represented as time-series binary data, and co-occurrence patterns are defined as multimodal events overlapping in time. Then, co-occurring events are detected through clustering before a final inference using Ridge Regression and a linear SVM.
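The notion of multimodal events overlapping in time can be made concrete with a small sketch (hypothetical, and much simpler than the clustering-based mining in [74]; the stream names are illustrative): given two binary behavioral streams, the intervals where both are simultaneously active are extracted as candidate co-occurrence events.

```python
import numpy as np

def cooccurrence_events(a, b, min_len=2):
    """Find intervals where two binary behavioral streams (e.g.,
    'speaking' and 'gazing at partner') are simultaneously active
    for at least `min_len` time steps. Returns (start, end) pairs."""
    both = np.asarray(a, bool) & np.asarray(b, bool)
    events, start = [], None
    for t, on in enumerate(both):
        if on and start is None:
            start = t                      # event begins
        elif not on and start is not None:
            if t - start >= min_len:       # keep only long-enough events
                events.append((start, t))
            start = None
    if start is not None and len(both) - start >= min_len:
        events.append((start, len(both)))  # event runs to the end
    return events
```

Statistics over such intervals (count, total duration, average length) could then serve as co-occurrence features for a downstream regressor.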

Staiano et al. [75] focused on feature selection to model the dynamics of personality states in a meeting scenario. A personality state refers to a specific behavioral episode wherein a person behaves as more or less introverted/extroverted, neurotic or open to experience, etc. It is also referred to as situational cues [35]. According to the authors, the problem with “traditional approaches” is that they assume a direct and stable relationship between, e.g., being extravert and acting extravertedly (speaking loudly, being talkative, etc.). However, “on the contrary, extraverts can sometimes be silent and reflexive, while introverts can at times exhibit extraverted behaviors. Similarly, people prone to neuroticism do not always exhibit anxious behavior, while agreeable people can sometimes be aggressive” [75]. In their work, several low-level (acoustic) and high-level features (attention given/received in the form of head pose, gaze and voice activity) are combined with different machine learning approaches.

More recently, Çeliktutan and Gunes [67] addressed how personality impressions fluctuate with time and situational contexts. First, audio-visual features are extracted (e.g., face/head/body movements, geometric and hybrid features). Then, a Bidirectional Long Short-Term Memory (LSTM) Network is employed to model the temporal relationships between the continuously generated annotations and the extracted features. Finally, a decision-level fusion is performed to combine the outputs of the audio and the visual regression models. Nevertheless, their study required a database (in their case, a subset of the SEMAINE corpus [76]) annotated in a time-continuous manner.

Interested in the differences in situational context affecting trait perceptions and ratings, Joshi et al. [41] analyzed thin slices ( sec long) of behavioral data in HCI settings. The authors analyzed (i) the differences between the perceived traits (Big-Five and social impressions) during audio-visual and visual-only observations; (ii) the deviation in perception when there is a change of situational context; and (iii) the change in perception marked by an external observer when the same individual interacts with different virtual characters (exhibiting specific emotional and social attributes). To take into account errors induced by subjective biases, they proposed a framework which encapsulates a weighted model based on linear SVR and low-level visual features computed over the face region.

Bremner et al. [42] investigated how robot mediation affects the way the Big-Five personality traits of the operator are perceived. Results showed that (i) judges utilize robot appearance cues along with operator vocal cues to make their judgments; (ii) operators’ gestures reproduced on the robot aid personality judgments; and (iii) personality perception through robot mediation is highly operator-dependent. Extending [42], Çeliktutan et al. [77] showed that apparent personality classification from nonverbal cues works better than from audio only (except for agreeableness), and that facial activity and head pose, together with audio and arm gestures, play an important role in conveying specific personality traits in a telepresence context.

2.5.2 Non-interactive settings

In general, works falling in this category are those exploiting conversational videos, self-presentations or video resumes.

Biel and Gatica-Perez [57, 58] pioneered the study of personality impressions in vlogs from the perspective of audiovisual behavioral analysis. In [57], they studied the use of nonverbal cues as descriptors of vloggers’ behavior, and found significant associations between the extracted cues and several personality judgments. Later [58], they addressed the problem as a regression task, where features extracted from audio (speaking activity and prosody), video (looking activity, pose and overall motion) and co-occurrence events (e.g., looking-while-speaking) were combined with SVR to infer apparent personality. In both works, the analyses are performed on thin vlog slices (1 minute). An extensive review discussing both verbal and nonverbal aspects of vlogger behavior is presented in [78].

Nguyen and Gatica-Perez [17] analyzed the formation of job-related first impressions in conversational video resumes. In fact, job recommendation systems based on the visual analysis of nonverbal human behavior have received a lot of attention over the past few years [79, 80, 81], with social impressions being the focus of most works. According to [17], online video resumes represent an opportunity to study the formation of first impressions in an employment context at a scale never attempted before. In their work, the linear relationships between nonverbal behavior and the organizational constructs of hirability and apparent personality (Big-Five) are examined. Different regression methods are analyzed for the prediction of personality and hirability impressions from audio (speaking activity and prosody) and visual cues (proximity, face events and visual motion). Results suggest that combining feature groups strongly improves accuracy, reinforcing the benefits of including complementary information in the pipeline. More recently, Gatica-Perez et al. [82] addressed the recognition of personal state and trait impressions, which include personality, in a longitudinal study using behavioral data of vloggers who posted on YouTube for a period of between three and six years. The dataset is composed of a small number of participants, and results do not show any significant temporal trend related to personality.

Following the recent advancements of CNNs, Güçlütürk et al. [15] presented an audiovisual Deep Residual Network (trained end-to-end) for apparent personality trait recognition. The network does not require any feature engineering or visual analysis such as face detection, face landmark alignment or facial expression recognition. Auditory and visual streams are merged into an audiovisual stream, which comprises a fully-connected layer. At the training/test stage, the fully-connected layer outputs five continuous prediction values, one per trait, for the given input video clip. Their work won third place in the ChaLearn First Impressions Challenge [9] (1st round), whereas [83] and [14] achieved second and first place, respectively. The work [15] was extended in [8] to consider verbal content, and to predict an “invitation to job interview” variable. In [83], two end-to-end trainable deep learning architectures are proposed to recognize personality impressions. The networks have two branches, one for encoding audio and the other for visual features. The first model is formulated as a Volumetric (3D) CNN, while the second is formulated as an LSTM-based network, to learn temporal patterns in the audio-visual channels. Both models concatenate statistics of certain acoustic properties (obtained from non-overlapping partitions) and visual data (from segmented faces, after landmark detection) at a later stage.

In order to capture rich information from both the visual and audio modalities, Zhang et al. [14, 84] proposed a Deep Bimodal Regression framework. They modified the traditional CNN to exploit important visual cues (introducing what they called Descriptor Aggregation Networks), and built a linear regressor for the audio modality. To combine complementary information from the two modalities, they ensembled the predicted regression scores by both early and late fusion. Gürpinar et al. [66] extended their previous work [20] (briefly described in Sec. 2.4) with the inclusion of other visual descriptors, acoustic features and a weighted score-level fusion strategy, and ranked first in the ChaLearn First Impressions Challenge [85] (2nd round).
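Weighted score-level (late) fusion, as used in several of the works above, reduces to a convex combination of per-modality predictions. A minimal sketch (the modality names and weights are illustrative assumptions, not values from any cited system):

```python
import numpy as np

def late_fusion(scores, weights):
    """Weighted score-level (late) fusion: combine per-modality trait
    predictions into final scores. `scores` maps modality name -> vector
    of Big-Five predictions; `weights` maps modality name -> weight."""
    w = np.array([weights[m] for m in scores], float)
    w = w / w.sum()                      # normalize to a convex combination
    S = np.stack([scores[m] for m in scores])
    return w @ S                         # weighted average per trait
```

In practice the weights are typically tuned on a validation set, which is how score-level fusion can favor the more reliable modality per trait.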

2.5.3 Discussion

A general discussion about Sec. 2.5 is presented next, covering current limitations, slice length/location, co-occurrence event mining, personality states, job recommendation systems and prospect for future directions of research.

Current limitations. The ELEA [72] dataset, employed in [71], was captured in a very specific and controlled environment, and was developed for analyzing emergent leadership. Of the 40 recorded meetings, only 27 have both audio and video (for more details, see Table III). From the point of view of deep learning based approaches, which are dominating different lines of research in social/affective computing, small datasets have limited applicability. The Mission Survival Task corpus [86], employed in [75], is, according to the authors, not currently available. The databases used in [87, 46] are not publicly available because of the privacy-sensitive content of the interviews or data protection laws [17]. As recently reported in [67], most of the results found in the literature are not directly comparable to each other, as different evaluation protocols are employed.

Slice length/location. According to [71], external observers usually form their impressions based on thin slices selected from video samples. Thus, deciding which part of the video (from the whole sequence) will be analysed is a common requirement, and in most cases it is done empirically. Slice length/location is also studied in the context of automatic personality recognition and social impressions from nonverbal visual analysis. Although having different goals, these two areas have strong overlap with personality perception studies, as they are all centered on the visual analysis of human behavior. For instance, Lepri et al. [88, 89] observed that classification accuracy (for the extraversion trait) is affected by the size of the slice when addressing real personality recognition. In [80], manual “scene/slice segmentation” was performed to analyze different segments of a job interview in the context of social impressions. Results show that no slice clearly stood out in terms of predictive validity, i.e., all slices yielded comparable results. As stated in [80], and shared by personality impression studies, one of the challenges in thin slice research is the amount of temporal support necessary for each behavioral feature to be predictive of the outcome of the full interaction. In other words, some cues need to be aggregated over a longer period than others. According to their study, and to our review, no metric assessing the necessary amount of temporal support for a given feature exists, in either social impressions or personality perception. Thus, whether thin slice impressions generalize to the whole video for the prediction task is still an open question. Questions such as “Is it possible to automatically select the slice that best describes first impressions? Would it generalize to the whole video?” could be a subject for future research in the field.

Co-occurrence event mining. It has proven very effective as an alternative way to exploit complementary information in first impressions [74, 58, 17]. According to [90], the computer vision community has reached a point where it can start considering high-level reasoning tasks, such as the “communicative intents” of images.

Personality states or situational context. These have a strong impact on personality perception, whether interactions are considered or not [75, 35, 67, 41, 82]. Although first impressions have been studied from different perspectives over the past few years, situational cues have not received enough attention from the computer vision community up to now. The great complexity and subjectivity related to this topic, along with existing dataset limitations, could be a possible explanation. One can imagine, for instance, how challenging the task of building a new dataset on this topic can be, covering either different contexts (e.g., at work, at a party, during a job interview, etc.) or even the same context at different time intervals [67]. Although it can be considered a challenging task, future research on this topic would have a strong impact on the whole research area.

Job recommendation. The question of why a particular individual receives a positive (or negative) evaluation based on first impression analysis deserves special attention from the research community, whether personality or hirability (social) impressions are considered. Note that a close link between the constructs of personality and hirability exists [91]. Automatic job recommendation systems can be very subjective, and might have a strong influence on our lives once they become common. Recent studies [27], including those submitted to a workshop organized by the ChaLearn group [39], sought to address this question.

Future directions. Verbal content [78], combined with nonverbal visual data, is a potential direction for advancing the state of the art in automatic personality perception (briefly discussed in Sec. 2.6). However, according to the reviewed literature, verbal content analysis has some limitations: (1) most existing works exploiting verbal content are based on manual transcriptions of the audio channel (which imposes a great barrier to applicability); (2) automatic speech recognition methods are still not accurate enough to capture verbal content without introducing noise/errors into the pipeline; and (3) it is language-dependent, i.e., verbal content analysis for people speaking different languages might require different treatments.

Taking into account the particular case of job recommendation systems, the differences across job types have not been investigated in computing [17], even though they have already been addressed in psychological studies. For instance, the expected behavior of a person applying for a sales position may differ from that of someone looking for an engineering position. Combining differences across job types with personality trait analysis could make job recommendation systems more transparent and inclusive.

Regarding the recently proposed CNN based models for automatic personality perception [15, 83, 14, 63], we observed that there is still a long avenue to be explored. The top three winning methods [14, 83, 15] submitted to the ChaLearn First Impressions Challenge [9] obtained very similar overall performances (i.e., , and , respectively) even though presenting different solutions, suggesting that the proposed architectures may be exploiting complementary features, which could be combined to improve overall accuracy. Moreover, deep neural networks are currently one of the most promising candidates to tackle the challenges of multimodal data fusion [83, 14, 84, 66] and multi-task solutions in first impressions.

2.6 Multimodal trait prediction

This section reviews works using multimodal data for automatic personality perception, i.e., in addition to the audio-visual cues, they may exploit verbal content, depth information or use data acquired by more specialized devices.

Biel et al. [40] addressed personality impressions of vloggers using Linguistic Inquiry and Word Count (LIWC) and N-gram analysis. While the focus of their work is on what vloggers say, a few experiments fusing verbal and nonverbal content were performed. Verbal content is also exploited in [78]; in this case, the work focuses on nonverbal content analysis. Both works [40, 78] use manual transcripts of vlogs to verify (in an error-free setting) the ability of verbal content to predict personality impressions (Big-Five). The feasibility of building a fully automatic framework was investigated using Automatic Speech Recognition (ASR). However, errors caused by the ASR system significantly decreased performance.

Chávez-Martínez et al. [92] considered the inference of mood and personality impressions (Big-Five) of vloggers, from verbal content (i.e., categorizing word counts into linguistic categories, obtained from manual transcriptions) and nonverbal audio-visual cues (e.g., pitch, speaking rate, body motion and facial expression). High-level facial features are considered through the concept of compound facial expressions. The inference task is then addressed using a multi-label classifier. Results suggest that the combination of mood and trait labels improved overall performance in comparison with the mood-only and trait-only experiments.

Using a logistic regression model, Sarkar et al. [93] combined audiovisual (pitch, speech and movement analysis), verbal content (unigram bag of words and statistics from the transcriptions), demographic (gender) and sentiment features (e.g., positive/negative sentiment scores of the verbal content) for apparent personality trait (Big-Five) classification. Results show that different personality traits are better predicted using different combinations of features. Alam and Riccardi [94] reached a similar conclusion when addressing personality impressions using the same dataset (Youtube vlog [58]), i.e., the performance for each trait varies with the feature set. In their work, linguistic, psycholinguistic and emotional features extracted from the transcripts are analyzed, in addition to the audio-visual features provided with the dataset. As in [93, 94], Farnadi et al. [95] combined audiovisual and several text-based features with different multivariate regression techniques. The main differences among these solutions [93, 94, 95] relate to the verbal features and the way the problem was modeled, as the audio-visual features were provided with the dataset (released in the WCPR2014 Challenge [96]).

Srivastava et al. [97] exploited audio, visual and lexical features to predict the Big-Five Inventory-10 answers, from which personality trait scores can be obtained. Audiovisual and verbal cues are combined for recognizing emotions, and used to learn a linear regression model based on the proposed Sparse and Low-rank Transformation (SLoT). A dataset composed of short movie clips (4-7 sec long each), manually labeled with BFI-10 answers and personality impressions, is used. Several high-level tasks are performed (e.g., face/emotion expression recognition, tracking, scene change detection), as the clips may show multiple people and scene cuts, which makes the study even harder. Moreover, as dialogs are extracted from the movies’ subtitles, audiovisual information might not be well synchronized with the text.

More recently, Güçlütürk et al. [8] extended their previous work [15] to consider verbal content (extracted from audio transcripts provided with the data) as well as to infer hirability scores [39, 27]. Different modalities are analysed, including audio-only, visual-only, language-only, audiovisual, and a combination of audiovisual and language in a late fusion strategy. Results show that the best performance is obtained through the fusion of all data modalities.

Exploiting the concept of group engagement, Salam et al. [98] investigated how personality impressions of participants can be used together with the robot's personality to predict the engagement state of each participant in a triadic Human-Human-Robot Interaction setting. Nonverbal visual cues of individuals (e.g., body activity, appearance and visual focus of attention) and interpersonal features (e.g., relative distances, attention given/received) captured from RGB-D data are employed. However, several high-level tasks have to be addressed for feature extraction, such as ROI/group detection, body/head/face detection, skeleton joint estimation, as well as robot detection. The authors also propose to use extroverted/introverted robots in the experiments to vary the context of the interactions.

Finnerty and collaborators [87] studied whether first impressions of stress are equivalent to physiological measurements of electrodermal activity (EDA) in the context of job interviews. The outcomes of job interviews are then analyzed based on features extracted from multiple data modalities (EDA, audio and visual). Even though focusing on stress impressions, they presented a preliminary analysis of the relationship among real personality, apparent personality and stress impressions, briefly discussed in Sec. 2.7.

2.6.1 Discussion

In this section, we discuss different aspects of complementary information, recurrent problems on the field, as well as prospects for future research directions related to Sec. 2.6.

Complementary information. Whether focusing on verbal [40] or nonverbal content analysis [78], this study revealed that overall improvements for each personality trait are obtained when different cues are employed [40, 93, 94], and that results can be further improved when different features are combined/fused [8]. For instance, extraversion was better predicted in [40] using nonverbal content, whereas agreeableness, neuroticism and conscientiousness were better predicted using verbal cues. Nevertheless, overall results improved when both cues were combined. In [92], improvements were obtained when combining mood and trait labels. Our study indicates that there is no single set of features that maximizes accuracy for all personality traits. Verbal content, voice, facial expressions, gestures and poses (i.e., body language), among many other features, are potential sources to code/decode personality, and can complement each other in different ways.

Recurrent problems. The use of controlled environments [98] or specialized sensors [87] limits applicability, which can be further restricted if the study is based on private [98, 97] or customized datasets [87, 92] composed of a small number of participants [98].

Future directions. The two-stage approach presented in [97] is motivated by the fact that the relationship between features and abstract personality traits is generally difficult to describe with a simple linear model. The authors learn a model for predicting answers to the BFI-10 questionnaire from features, which acts as a mid-level step in the semantic hierarchy from features to personality traits. Results indicate that mapping features to answers, and then answers to personality scores, can outperform a direct mapping from features to personality scores, albeit in a preliminary study. As far as we know, no other work in the field has addressed the problem with such a two-stage methodology, which could receive special attention in future research (i.e., with respect to all data modalities).
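The two-stage idea can be illustrated with a minimal numpy sketch: first regress BFI-10 answers from low-level features, then map predicted answers to trait scores through the fixed BFI-10 scoring key (each trait averages two answers). Ordinary least squares stands in for the authors' SLoT transformation, and all data below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 clips, 32-dim audiovisual/lexical features,
# 10 BFI-10 answers, 5 Big-Five trait scores.
X = rng.normal(size=(200, 32))                       # low-level features
W_true = rng.normal(size=(32, 10))
A = X @ W_true + 0.1 * rng.normal(size=(200, 10))    # BFI-10 answers (noisy)

# Fixed BFI-10 scoring key: each trait is the mean of two answers.
S_map = np.zeros((10, 5))
for t in range(5):
    S_map[2 * t, t] = S_map[2 * t + 1, t] = 0.5
S = A @ S_map                                        # ground-truth trait scores

# Stage 1: regress answers from features (least squares, in place of SLoT).
W1, *_ = np.linalg.lstsq(X, A, rcond=None)
# Stage 2: map predicted answers to trait scores with the fixed key.
S_pred = (X @ W1) @ S_map

mae = np.abs(S_pred - S).mean()
```

The point of the sketch is the structure: the answer layer gives an interpretable intermediate target, and only stage 1 needs to be learned, since the answers-to-scores mapping is defined by the questionnaire itself.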

It is well known that deep learning is revolutionizing almost all research domains related to visual computing (i.e., compared to traditional machine learning and standard computer vision approaches based on hand-crafted features). The architecture presented in [15, 8] was the only one among the top-ranking approaches in the ChaLearn First Impression Challenge [9] that relied neither on pretrained models nor on feature engineering, which makes it particularly appealing since it does not require any assumptions regarding the important features for the task at hand. The authors also evaluated the changes in performance of the audio and visual models as a function of exposure time (i.e., slice length), which is still an open question in the field. Results suggest that there is enough information about personality in a single frame [8], as evidenced in [13] when studying the bases of personality judgments ("first impressions are built from a glimpse as brief as ms"). Nevertheless, the same reasoning does not hold for the auditory modality, especially for very short auditory clips. Note that frame selection (i.e., which frame from the whole sequence will be analyzed), not addressed in [8], has not yet been properly addressed in spatio-temporal methods for automatic personality perception (i.e., the standard approach is to analyse uniformly distributed samples over the set of frames).
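The standard uniform-sampling strategy mentioned above can be sketched as follows; the helper name and clip dimensions are illustrative, not taken from any reviewed work.

```python
def uniform_frame_indices(n_frames, n_samples):
    """Return n_samples frame indices evenly spread over a clip of n_frames,
    taking the centre of each of n_samples equal-length segments (the
    standard sampling strategy in spatio-temporal methods)."""
    if n_samples >= n_frames:
        return list(range(n_frames))        # short clip: keep every frame
    seg = n_frames / n_samples              # segment length in frames
    return [int(seg * i + seg / 2) for i in range(n_samples)]

# e.g. a ~15 s, 30 fps First Impressions clip (450 frames), 6 sampled frames
idx = uniform_frame_indices(450, 6)         # [37, 112, 187, 262, 337, 412]
```

Any content-aware frame selection (e.g., preferring frontal, well-lit faces) would replace this uniform rule, which is precisely the open question raised above.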

Although single images can carry meaningful information about the personality of an individual, we envisage that future research on multimodal approaches (and this holds for almost all other categories of the proposed taxonomy) should contemplate end-to-end trainable models and multi-task scenarios (different tasks trained jointly), taking advantage of the evolution of the data in the spatio-temporal domain, and possibly benefiting from transfer learning (e.g., cross-domain) and semi-supervised approaches (using partially annotated data from different datasets).

2.7 Real and apparent personality trait analysis

In this section, we present relevant works developed for automatic personality recognition, due to their relevance and similarity to the topic of this survey, and briefly discuss the very few existing works analysing both real and apparent personality traits. Note that, if we ignore the way labels (used to train machine learning algorithms for the classification/regression task) are obtained (i.e., from external observers in personality perception, or from self-report questionnaires in real personality trait analysis), the task of inferring personality from visual, audio-visual or multimodal data can be considered the same, whether real or apparent personality is targeted.

2.7.1 Automatic Personality Recognition

Similarly to personality perception, personality recognition has been addressed in the literature using different data modalities, i.e., still images [99, 53, 100], image sequences [101, 102], audiovisual data (with [103, 11, 104, 105, 88, 89, 62, 106, 81] or without [91, 107] interactions) and multimodal data [108, 109, 110, 111].

Taking the popularity of social networks into account, Ferwerda et al. [99] proposed to infer real personality from the way users manipulate their pictures on Instagram; however, the work is basically limited to color information analysis. Liu et al. [53] addressed how Twitter profile images vary with the personality of users; although profile images from over 66,000 users are used, personality traits were estimated from their tweets. In [100], the Big-Five traits and interaction styles are analysed from Facebook profile images, although human faces are not explicitly analysed and not all images even contain a single person.

Subramanian et al. [102] show that social attention patterns computed from the target's position and head pose during social interactions are excellent predictors of extraversion and neuroticism. In [101], the impact of personality during Human-Robot Interactions (HRI) is analysed based on nonverbal cues extracted from a first-person perspective, as well as on their relationship with participants' self-reported personalities and interaction experience. Linear SVR is employed to predict personality traits from gaze direction, attention and head movements while interacting with either an "extroverted" or "introverted" robot.

Fang et al. [103] combined Ridge Regression with audio-visual features to address Big-Five personality recognition and social impressions during small group interactions. In a similar context, speaking time and visual attention are exploited in [88, 89] to predict extraversion. In [88], the attention an individual receives from/gives to the group members is considered, whereas [89] differentiates the amount of attention given/received while the person is speaking. Both works also address the impact of slice size on classification. In a meeting scenario, Pianesi et al. [11] addressed extraversion and Locus of Control classification using SVM and audio-visual features (e.g., audio signal statistics and Motion History Images) extracted from 1-min videos; later [104], the problem was addressed as a regression task. Inference of extraversion and Locus of Control is also proposed in [105], using Bayesian Networks that explicitly incorporate hypotheses about the relationships among personality, the actual behavior of the target, and situational aspects.

Batrinca et al. [106] employed 2-5 min videos to recognize personality traits during HCI, combining audio-visual cues and feature selection with SVM. In their work, the computer interacts with individuals using different levels of collaboration to elicit the manifestation of different personality traits. The work was extended in [112] to consider Human-Human Interactions (HHI). In [91], the authors combined nonverbal visual features with Naive Bayes and SVM to predict Big-Five traits in a monologue setting similar to vlogging [57]. The study [91] was extended in [107] to automatically extract a few additional visual features. In both works, the highest accuracy was obtained when classifying conscientiousness; as noted by the authors, the request to introduce themselves in front of a camera apparently activated the subjects' conscientiousness dispositions.

Nguyen et al. [81] extended [79] to predict the Big-Five traits in addition to hirability impressions, focusing on postures and gestures (extracted from a mixture of manual annotations and automated methods) as well as co-occurrence events. Rahbar et al. [108] addressed extraversion recognition during HRI, taking into account the first thin slices of the interaction. Multimodal features extracted from depth images (e.g., motion and human-robot distance) are used to train a Logistic Regression classifier; the work was extended in [109] with new features and an in-depth analysis. Farnadi et al. [110] compared different personality recognition methods and investigated the possibility of cross-learning from different social media, i.e., Facebook, Twitter and YouTube; however, apart from the analysis performed on the YouTube vlog [58] dataset, no visual-based analysis was performed on the other sources. In [111], the relation between Big-Five traits and people's implicit responses to affective content (i.e., emotional videos) is studied, combining features obtained from electroencephalogram, peripheral physiological signals and facial landmark trajectories with a linear regression model.

In summary, the very few existing works developed for automatic personality recognition are: 1) mainly based on hand-crafted features, classic machine learning approaches and single-task scenarios, neither modelling multiple visual human cues for an accurate representation nor exploiting the temporal dynamics of the data; 2) evaluated on small-sized datasets without standard evaluation protocols, so there is no guarantee of generalization to different target populations/scenarios; and, last but not least, 3) none of the existing works regressing personality traits from audio-visual data analyses sources of bias to further correlate real and apparent personality.

2.7.2 Joint analysis of real and apparent personality

Preliminary results on the relationship between real and apparent personality traits have been reported in the literature [113, 114, 42, 87, 115], with limited outcomes. In the work of Wolffhechel et al. [113], no connection was found between participants' self-reported personality traits and the scores they gave to others. According to their study, a single facial picture may lack information for evaluating diverse traits, i.e., "a viewer will miss additional cues for gathering a more complete first impression and will therefore focus overly on facial expressions instead". In [114], no visual information is used (just audio and transcripts); furthermore, a small dataset was used and results show low generalization capability of the proposed model. Bremner et al. [42] analysed audio-visual and audio-only information on a small dataset of 20 participants (and 5 observers) captured in a controlled environment, with no guarantee of generalization to different target populations. They observed that judges' ratings bear a significant relation to the targets' self-ratings only for the extraversion trait.

Although focusing on stress impressions (and restricted to job interviews), preliminary results on the relationship among impressions of stress and the Big-Five traits (real and apparent) are reported in [87] based on electrodermal activity, audio and visual data analysis. With respect to real personality, they observed that Openness to experience and Conscientiousness traits were negatively correlated with stress impressions. For traits as perceived by others, Conscientiousness was negatively correlated with stress impressions while Neuroticism was positively correlated.

More recently, Celiktutan et al. [115] presented a multimodal database to study personality simultaneously in HHI and HRI scenarios, and its relationship with engagement. Note that, differently from existing approaches in personality computing, personality impressions were provided by acquaintances, which may explain why baseline results show that trends in personality classification performance remained the same for the self-assessed and acquaintance-assessed labels. Moreover, results are limited to an analysis conducted with a small number of participants.

In [47], the authors addressed both real and apparent personality trait recognition; however, beyond presenting some insights on which cues contribute to each personality type, no joint analysis was performed. Fang et al. [103] addressed the recognition of real personality traits and social impressions without an in-depth correlation analysis between the two domains. In [71], personality impressions were collected for the same dataset used in [103], but still without correlating both personality types.

This study revealed that the state of the art in visual-based personality computing is neither focusing on understanding the human biases that influence personality perception nor trying to automatically (and accurately) regress real personality from perception mechanisms. This is mainly because of the high complexity of accurately modelling humans in visual data, as well as the high subjectivity of the topic (involving a large set of possible sources of bias), and it remains a largely unexplored area.

2.8 What features give better results?

As discussed in previous sections, there is no standard set or modality of features that works best for every type of data, database or personality trait. Different solutions have been proposed over the past years based on distinct evaluation protocols, which prevents the above question from being properly addressed. However, we present in Table I a list of mid-level features and semantic attributes highly correlated with Big-Five traits, as reported by state-of-the-art works on personality computing. We expect the information summarized in Table I to inspire future research to advance the state of the art in the field, in particular when studying new strategies to improve the recognition of traits that are currently difficult to recognize (as discussed in Sec. 2.9).

While some agreement can be observed in Table I (in most cases) in relation to particular sets of attributes, traits and personality type, a few minor inconsistencies can also be noted, reinforcing the difficulty of addressing the above question. This is the case of "positive emotions and smiling expressions" with respect to openness and real personality (reported, in different studies, to have negative and positive correlation). It must be emphasized that the studies presented in Table I may be limited to analyses performed on small datasets, composed of a small number of participants and without standard evaluation protocols, which do not guarantee generalization to different scenarios and contexts. In the case of apparent personality, they are also influenced by the subjective bias of the people who labelled the data, the number of annotators, among other variables. Nevertheless, despite these limitations, it is possible to observe some agreement with the psychology literature, e.g., that extroverted people usually show smiling expressions and speak louder, and that more conscientious people tend to preserve their privacy (e.g., by not sharing images from private locations on social media).

Evidence found in the literature shows that extraversion is the trait with the largest activity cue utilization [19] (reflected in Table I), and it is typically the easiest trait to judge [57, 58]. Our study also revealed that, in the case of personality perception, extraversion is the trait recognized with the highest accuracy (shown in Fig. 2).

Openness (O)
- Negative correlation (real): positive emotions and smiling expressions [53]; stress impression [87]
- Negative correlation (apparent): negative emotions (anger, disgust) [78, 19, 45]; verbal content associated with negative emotions [40]; long eye contact [58] and frontal face event duration [17]
- Positive correlation (real): positive emotions and smiling expressions [45]; body motion and speaking activity during collaborative task [112]
- Positive correlation (apparent): smiling expressions, joy [45]; speaking time [57]; body motion [78, 58, 17]; hirability impressions and eye contact [17]; verbal content associated with leisure activities [40]

Conscientiousness (C)
- Negative correlation (real): private location [47]; negative mood [53]; speech conflict with others [103]; stress impression [87]
- Negative correlation (apparent): negative emotions/valence (sadness, anger) [45, 40]; body activity [57]; long eye contact [78, 58]; stress impression [87]
- Positive correlation (real): smiling expressions [91]; positive mood and valence (joy) [53]; eye contact [100]; speaking activity and body motion during collaborative task [112]
- Positive correlation (apparent): smiling expressions, joy and contempt [45]; frontal pose [49]; speaking time [57]; eye contact, low body motion activity, looking-while-speaking [78, 58]; verbal content associated with occupation and achievements [40]

Extraversion (E)
- Negative correlation (real): low deviation in received attention during interaction [102]; visual attention given to the rest of the group [105]
- Negative correlation (apparent): neutral or negative emotions/valence (anger, disgust, contempt) [78, 45, 11, 92, 82]; pressed lips [47]; low speech activity [82] during group interaction [74]; giving/receiving attention while silent [89]; speech turns [78, 58, 57]; long eye contact [58, 17]
- Positive correlation (real): positive emotions [53]; smiling expressions [100]; body motion [112, 106]; attention received and speaking time [105]; engagement during interaction [98]
- Positive correlation (apparent): positive emotions/valence (joy) [47, 78, 19, 92] and smiling expressions [19, 45]; being funny [82]; body motion [78, 58, 17, 57, 74]; giving/receiving attention while speaking [58, 74, 89, 71]; speaking time [78, 58, 57] and loudness [58]; eye contact [17]; verbal content related to interpersonal interactions and sexuality [40]; hirability impression [17, 81]

Agreeableness (A)
- Negative correlation (real): negative emotion expressions [53]; talking turns in group interactions [103]
- Negative correlation (apparent): negative emotions/valence (anger and disgust) [78, 19, 45, 92]; verbal content associated with negative emotions, sexuality and religion [40]
- Positive correlation (real): positive emotions, smiling expressions and joy [53]; body motion during collaborative task [112]; long speaking duration [91]
- Positive correlation (apparent): positive emotions/valence (joy) and smiling expressions [78, 19, 45, 82]; eye contact [47]; facial attractiveness [41]; verbal content associated with positive emotions [40]

Neuroticism (N)
- Negative correlation (real): positive emotions [53]; smiling expressions [100]; long speaking duration during collaborative task [112]
- Negative correlation (apparent): smiling expressions, joy [45]; face visibility [47]; facial attractiveness [41]; looking-while-speaking [78, 58]; hirability impression [17]; agreeableness [82]
- Positive correlation (real): negative emotions or lack of emotion expression [53]; social profile images without faces [53, 100]; low body motion activity during interactions [102]
- Positive correlation (apparent): negative emotions/valence (anger) [92, 82]; verbal content associated with negative words and negative emotional words [40]; duckface [47]; stress impression [87]

TABLE I: Mid-level features and semantic attributes highly correlated with Big-Five traits, as reported by state-of-the-art works in personality computing.

2.9 Which traits are easily recognized?

To address the above question, we analysed the results reported by the reviewed works with respect to the Big-Five model, i.e., {O, C, E, A, N}, considering works addressing real and apparent personality recognition separately. Works that did not report results for all Big-Five traits were not considered. For each work, we retrieved the two traits recognized with the highest accuracy. Fig. 2 shows the obtained distribution. Several observations can be taken from Fig. 2: 1) the "ranking" of traits in the two distributions is different, i.e., 2) for real trait estimation, "C" and "O" are the traits recognized with the highest accuracy, whereas 3) "E" and "C" are the best recognized traits in personality perception; 4) "N" is clearly challenging for both types of work; and 5) surprisingly, "A", which is usually recognized with satisfactory accuracy in personality perception, is the most difficult trait to recognize when real personality is considered.

Fig. 2: Distribution of Big-Five personality traits “easily” recognized.

It must be emphasized that Fig. 2 was generated from a few works evaluated on different datasets and protocols. Thus, the analysis presented in this section can only provide a general view of the above question, and further studies are needed to confirm our observations.
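The tally behind a distribution such as that of Fig. 2 is straightforward to express; the per-work accuracies below are purely hypothetical and serve only to illustrate the procedure, not to reproduce the survey's actual numbers.

```python
from collections import Counter

def top2_trait_distribution(per_work_scores):
    """Count how often each Big-Five trait appears among the two
    best-recognized traits of each surveyed work."""
    counts = Counter()
    for scores in per_work_scores:          # scores: dict trait -> accuracy
        top2 = sorted(scores, key=scores.get, reverse=True)[:2]
        counts.update(top2)
    return counts

# Hypothetical per-work accuracies (illustration only, not survey data).
works = [
    {"O": 0.61, "C": 0.64, "E": 0.72, "A": 0.66, "N": 0.58},
    {"O": 0.55, "C": 0.68, "E": 0.70, "A": 0.60, "N": 0.52},
    {"O": 0.63, "C": 0.60, "E": 0.69, "A": 0.57, "N": 0.54},
]
dist = top2_trait_distribution(works)
```

Works not reporting all five traits would simply be filtered out before the tally, as described above.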

3 Trait recognition challenges

Academic competitions/challenges are an excellent way to quickly advance the state of the art in a particular field. Organizers of challenges formulate a problem and provide data, evaluation metrics, an evaluation platform and forums for the dissemination of results. Within the computer vision, pattern recognition and multimedia information processing communities², several challenges related to personality analysis have been proposed. These are summarized in Table II.

²Note that inferring real personality traits from text is a topic that has been studied intensively by the NLP community (e.g., [116]).

Challenge | Year | Dataset | Task | Samples | Event | Winner | Overview
ChaLearn LAP | 2017 | First impressions v2 | Invite to interview | 10,000 | CVPR/IJCNN | [65, 64] | [39]
ChaLearn LAP | 2016 | First impressions | P. traits (Big-5) | 10,000 | ECCV | [14] | [9]
ChaLearn LAP | 2016 | First impressions | P. traits (Big-5) | 10,000 | ICPR | [66] | [85]
MAPTRAITS | 2014 | Semaine | P. traits (Big-5) | 44 | ICMI | [117, 118] | [61]
WCPR | 2014 | YouTube vlog | P. traits (Big-5) | 404 | MM | [94] | [96]
Speaker Trait | 2012 | I-ST | P. traits (Big-5) | 640 | Interspeech | [119] | [120]
TABLE II: Challenges on apparent personality analysis.

The first challenge on apparent personality analysis was the one organized at Interspeech in 2012 [120]; however, it focused only on the audio modality. Interestingly, the organizers of that competition found that participants had difficulty improving on the baseline, and solutions lacked creativity, hence motivating further research in terms of feature extraction and modeling.

The MAPTRAITS [61] and WCPR [96] challenges organized in 2014 were the first involving both video and audio modalities, focusing on personality perception. Two tracks were launched in the MAPTRAITS challenge: continuous estimation of personality traits through time, and the recognition of traits from entire video clips (discrete). Organizers found that participants barely matched the performance of the baselines, where the best results were obtained with the audio-visual and visual-only baselines for the continuous and discrete tracks, respectively, highlighting the importance of the visual modality. The WCPR challenge also comprised two tracks: one focusing on the use of multimodal information and the other using only text. The conclusions from this competition were that the problem was very hard, but that better results were obtained with multimodal information than with text alone. In both challenges, the number of available samples was quite small (only 44 samples were used in [61] and 404 in [96]), which may explain the low recognition performance. Still, these were the first efforts on the usage of multimodal information for personality analysis.

More recently, ChaLearn (http://chalearnlap.cvc.uab.es) organized two rounds of a challenge on personality perception from audiovisual data [9, 85]. A new dataset comprising realistic videos annotated by AMT workers was used. To the best of our knowledge, this is the largest dataset available so far for apparent personality analysis (10K samples). The challenge focused on trait recognition in short clips taken from YouTube. The winning methods were based on deep learning [14, 66]; in fact, most participants adopted deep learning methods (e.g., [83, 15]). The best performance was achieved by solutions that incorporated both audio and visual cues. In these competitions, participants succeeded at improving on the baseline, achieving recognition performance above 90% average accuracy. The main conclusion was that accurately recognizing personality impressions from short videos is feasible, motivating further research on this topic.
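For context, the "average accuracy" figure reported in these challenges corresponds, to our understanding, to one minus the mean absolute error between predicted and ground-truth trait scores in [0, 1]; the official definition is given in the challenge overview [9]. A minimal sketch under that assumption:

```python
import numpy as np

def mean_accuracy(pred, truth):
    """'Mean accuracy' as we read the ChaLearn metric: 1 minus the mean
    absolute error between predicted and ground-truth trait scores,
    with both sets of scores normalized to [0, 1]."""
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return 1.0 - np.abs(pred - truth).mean()

# Illustrative predicted vs ground-truth Big-Five scores for one clip:
acc = mean_accuracy([0.55, 0.40, 0.70, 0.62, 0.48],
                    [0.50, 0.45, 0.75, 0.60, 0.50])
```

Under this metric, predictions within about 0.1 of the ground truth already yield accuracies above 90%, which helps put the reported challenge numbers in perspective.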

Results from the latter challenge motivated a new competition on a closely related topic involving personality traits, the so-called Job Candidate Screening Coopetition [39]. In this challenge, an extended version of the ChaLearn First Impression dataset [9] was considered, where the extension consisted of the new variable to be predicted and manual transcriptions of the audio in the videos. Participants had to predict a variable indicating whether the person in a video would be invited to a job interview (round 1). In addition, participants had to develop an interface explaining their recommendations (round 2). For the latter task, participants relied on apparent personality trait predictions. Organizers concluded that the "invite-for-interview" variable could be predicted with high accuracy and that developing explainable models is a critical aspect, yet there is still large room for improvement in this respect.

Dataset | Modality | Year | Short description | Focus | Labels | Used in
MHHRI [115] | Multimodal | 2017 | 12 interaction sessions (~4h) captured with egocentric cameras, depth and bio-sensors, 18 participants, controlled environment | Personality and engagement during HHI and HCI | Self/acquaintance-assessed Big-Five, and engagement | [101, 98]
ChaLearn First Impression v2 [39] | Multimodal | 2017 | Extended version of [9], with the inclusion of hirability impressions and audio transcripts | Apparent personality trait and hirability impressions | Big-Five impressions, job interview variable and transcripts | [27, 63, 8, 65, 64]
ChaLearn First Impression [9] | Audiovisual | 2016 | 10K short videos: ~15 sec each, collected from 2762 YouTube users, 1280x720 of size, RGB, 30fps, uncontrolled environment | Apparent personality trait analysis (without interaction; single person talking to a camera) | Big-Five impressions | [15, 83, 14, 84, 20, 66, 85]
SEMAINE [76] | Multimodal | 2012 | 959 conversations: ~5 min each, 150 participants, 780x580 of size, 49.979fps, RGB and gray, frontal and profile views, controlled environment | Face-to-face (interactive) conversations with sensitive artificial listener agents | Metadata, transcripts, 5 affective dimensions and 27 associated categories | [60, 62, 61, 41, 67, 117, 118]
Emergent LEAder (ELEA) [72] | Audiovisual | 2012 | 40 meetings: ~15 min each, 27 having both audio and video, groups of 3 or 4 members, 148 participants; 6 static cameras at 25fps (multiple views) and 2 portable cameras at 30fps, controlled environment | Small group interactions and emergent leadership (winter survival task) | Metadata, Big-Five (self-report) and social impressions | [71, 74, 103]
YouTube vlog [121, 56, 58] | Audiovisual | 2011 | 442 vlogs: ~min each, 1 video per participant, uncontrolled environment | Conversational vlogs [121, 56] and apparent personality trait analysis [58] | Metadata and Big-Five impressions | [57, 19, 58, 40, 45, 59, 78, 93, 94, 95]
Metadata can be gender, age, number of "likes" on social media, presence of laughs, FACS, etc., and varies for each dataset.
Labels for personality perception are annotated by each independent work, as they are not provided with the database.
TABLE III: Available datasets used in personality computing, centered on the visual analysis of humans, from a vision-based perspective.

Several lessons can be learned from these challenges. First, a major difficulty for the organizers of the first competitions was the scarcity of data: in those competitions, participants could barely improve on the baseline performance and there was little diversity in the types of solution. Larger datasets, such as those provided in the more recent challenges, together with more powerful modeling techniques that can leverage such amounts of data (e.g., deep learning methods), appear to be the key components behind participants' success at improving baselines and obtaining outstanding performance. Table III shows the available datasets on the topic, from a Computer Vision point of view, as well as additional details about labels, modality, etc. Secondly, the inclusion of multimodal information (e.g., as opposed to using audio or text only) increased the range of possible methodologies and information that could be used to predict personality traits. Finally, the initial competitions were not organized for consecutive years (except those organized by ChaLearn); we think continuity is a key aspect for the success of any academic challenge.

It is important to emphasize that, although there are not many challenges on personality analysis, the progress that previous competitions have driven is remarkable. In addition, several related challenges deserve mention⁴, for instance the EmotiW [122] and Avec [123] challenge series, which have run for several years and focus on emotion recognition, depression detection, mood classification and multimodal data processing. Clearly, progress in these related fields is having an impact on new challenges targeting personality traits exclusively.

⁴We consider these challenges related because both are associated with the social signal processing field.

4 Discussion

In this section, we provide a final discussion of the research topic. First, we summarize the main contributions of this study. Then, we comment on a few relevant observations and lessons learned across the different modalities of the proposed taxonomy. We summarize and discuss the changes in terms of applications, features and limitations that appeared when temporal information started being used. Next, we discuss accuracy performance and its relation to subjectivity, as well as the importance of dataset definition as a rich resource to advance the research. Finally, we comment on current deep learning technologies applied to first impression analysis, expected outcomes and applications.

Summary of contributions. This study presented an up-to-date literature review on apparent personality trait analysis, centered on the visual analysis of humans. Subjectivity in data labeling from first impressions, an extremely novel research area, was discussed. Relevant works have been reviewed, and their main advantages and limitations discussed. Although the proposed taxonomy classifies works based on the type of data, we observed that different sets of features naturally emerged; such categorization can help future researchers identify common trends easily. We discussed relevant works on real personality trait analysis and their correlation to apparent personality, which is an almost unexplored area in visual computing. We summarized and presented a set of mid-level cues highly correlated with each Big-Five trait, which can guide future research, in particular work addressing personality traits reported to be recognized with low accuracy by existing works. Finally, we reviewed the main challenges and datasets used to advance research in the field. Moreover, prospects for future research, in different directions, have been provided.

Particularities of the different modalities. From this study, several lessons can be learned. It revealed that approaches based on still images mostly focus on geometric and/or appearance facial features, using low-level or mid-level/semantic attributes to drive the recognition of personality traits. Most works within this category use ad-hoc datasets, making comparison with competing approaches a big challenge. Techniques developed for image sequences usually include higher-level features and analysis, such as facial emotion expressions, co-occurrent event mining, and head/body motion, in addition to those used for still images. When temporal information is available, the great majority of works tend to compute functional statistics over time or to treat each frame independently, omitting large spatio-temporal interactions. We envision future studies exploiting the spatio-temporal dependencies in (audio)visual (or multimodal) data as an essential line of research. Possible studies in this direction may focus on new temporal deep learning models, such as 3D convolutions (to capture local motion patterns), or on temporal models such as recurrent neural networks (RNN-LSTMs), which are able to model large spatio-temporal interactions.
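The functional-statistics pooling mentioned above can be made concrete with a minimal sketch (hypothetical code, not taken from any reviewed work): per-frame descriptors are collapsed into a single fixed-length video descriptor, discarding frame ordering.

```python
import numpy as np

def functional_statistics(frame_features):
    """Pool a (T, D) sequence of per-frame descriptors into one
    fixed-length video descriptor via statistics over time. Frame
    ordering is discarded, which is exactly the loss of spatio-temporal
    interactions discussed above."""
    f = np.asarray(frame_features, dtype=float)
    return np.concatenate([f.mean(axis=0), f.std(axis=0),
                           f.min(axis=0), f.max(axis=0)])

# A 100-frame clip with 5-dimensional per-frame features
clip = np.random.rand(100, 5)
video_descriptor = functional_statistics(clip)
print(video_descriptor.shape)  # (20,): 4 statistics x 5 dimensions
```

Temporal models such as RNN-LSTMs or 3D-CNNs would instead consume the full (T, D) sequence, preserving the ordering this pooling throws away.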

Beyond still images. The use of temporal information introduced the problem of defining slice duration and location. Even though addressed in some works, these questions remain open in all related modalities. Other issues appeared when audiovisual approaches came into focus, such as situational contexts or personality states; these are extremely important points, as they increase the complexity and subjectivity of first impression studies.
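The slice duration/location problem can be illustrated with a small sketch (a hypothetical helper, showing only the design choices involved): both the slice length and the start positions are free parameters with no agreed-upon setting.

```python
import numpy as np

def extract_slices(frames, slice_len, locations):
    """Cut fixed-duration slices from a frame sequence at the given
    start positions; both slice_len and locations are open design
    choices, as noted above."""
    return [frames[s:s + slice_len] for s in locations
            if s + slice_len <= len(frames)]

frames = np.arange(300)          # stand-in for a 300-frame video
slices = extract_slices(frames, slice_len=50, locations=[0, 125, 250])
print([len(s) for s in slices])  # [50, 50, 50]
```

Whether slices should come from the beginning, be sampled uniformly, or be selected by content (e.g., around speaking turns) is precisely the open question.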

In addition to visual information, methods for the analysis of personality in videos have relied on other modalities of data. For instance, most of the works reviewed in this survey fall within the audiovisual category, including low-level acoustic features (pitch, intensity, frequencies) as well as descriptors of speaking activity, turns, pauses, looking-while-speaking (a co-occurrent event), etc. This is because the nonverbal audio modality has proven to carry information that can be highly correlated with personality. In the same line, lexical analysis of audio transcriptions has proven very useful, which is not surprising, as automatic detection of personality from text has been widely studied.
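Two classic low-level acoustic descriptors of this kind (short-time energy, a proxy for intensity, and zero-crossing rate) take only a few lines to compute; this is a minimal sketch, with frame and hop sizes chosen arbitrarily for a 16 kHz signal.

```python
import numpy as np

def short_time_features(signal, frame_len=400, hop=160):
    """Per-frame short-time energy and zero-crossing rate, two classic
    low-level acoustic descriptors (frame/hop sizes are arbitrary
    choices for a 16 kHz signal)."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        energy = float(np.mean(frame ** 2))
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))
        feats.append((energy, zcr))
    return np.array(feats)

# One second of a synthetic 220 Hz tone sampled at 16 kHz
t = np.linspace(0, 1, 16000, endpoint=False)
sig = np.sin(2 * np.pi * 220 * t)
print(short_time_features(sig).shape)  # one (energy, zcr) pair per frame
```

In the reviewed works such per-frame descriptors are typically pooled over time (as above) before being fed to a classifier or regressor.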

Performance vs. subjectivity. A considerable number of works show that per-trait performance varies across feature sets. This study also revealed that: 1) the ranking of best recognized traits varies across the reviewed works, although some tendencies have been observed; and 2) the best feature set/modality, or the ranking of best recognized traits, might change even for the same work from one dataset to another, due to the subjectivity and complexity of the task.
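To make the per-trait comparison concrete: a common way to score each Big-Five trait separately is 1 minus the mean absolute error on [0, 1]-normalized scores, the metric used, e.g., in the ChaLearn First Impressions challenge. The sketch below, with made-up numbers, shows how such per-trait scores are obtained and ranked.

```python
import numpy as np

BIG_FIVE = ["openness", "conscientiousness", "extraversion",
            "agreeableness", "neuroticism"]

def per_trait_accuracy(y_true, y_pred):
    """1 - mean absolute error per trait, computed on [0, 1]-normalized
    trait scores; ranking traits by this value exposes the variation
    across feature sets and datasets discussed above."""
    err = np.abs(np.asarray(y_true) - np.asarray(y_pred)).mean(axis=0)
    return dict(zip(BIG_FIVE, 1.0 - err))

# Made-up annotations and predictions (rows = videos, columns = traits)
y_true = [[0.6, 0.5, 0.7, 0.4, 0.3],
          [0.2, 0.8, 0.5, 0.6, 0.7]]
y_pred = [[0.5, 0.5, 0.6, 0.5, 0.4],
          [0.3, 0.7, 0.5, 0.5, 0.6]]

scores = per_trait_accuracy(y_true, y_pred)
for trait in sorted(scores, key=scores.get, reverse=True):
    print(f"{trait}: {scores[trait]:.2f}")
```

Computing this ranking on two different datasets for the same model is exactly the comparison that reveals the dataset-dependent shifts described above.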

Public datasets as valuable resources. A major problem in personality computing in the past has been the lack of unified public datasets allowing the accurate evaluation of methodologies for personality recognition and perception. Our review revealed that the construction of such resources can be a valuable contribution in itself. In fact, a few datasets are already available, some of them generated in the context of academic challenges. Nevertheless, the design of new datasets and challenges will speed up progress in the field. We envision that new, large, public datasets, covering a large and heterogeneous population and exploiting the following topics, could define new research directions in the next few years: 1) situational contexts or personality states in more realistic scenarios; 2) continuous predictions; 3) joint analysis of real and apparent personality; and 4) observer vs. observed analysis, in the context of first impression data labeling and subjective bias analysis. Note that these topics are highly correlated and could be tackled together, providing a richer source for research in the field. They could also potentially benefit from 5) a more comprehensive personality profile (which could also consider physical and mental health [124], cognitive abilities [125], and implicit bias [126], among other attributes).

The revolution of deep learning. This study revealed that most works developed for automatic personality trait analysis (whether real or apparent) are mainly based on hand-crafted features, standard machine learning approaches and single-task scenarios. Although a few recent works are trained end-to-end, they neither integrate a comprehensive set of human visual cues, nor address human bias, nor correlate real and apparent personality. Nevertheless, CNNs are starting to be used in first impression analysis with very promising results, allowing the model to analyze not only a limited set of predefined features but the whole scene with contextual information, as well as facilitating advanced spatio-temporal modeling through, e.g., RNNs or 3D-CNNs. Furthermore, CNNs can nowadays be considered one of the most promising candidates to meet the challenges of multimodal data fusion, by virtue of their high capability in extracting powerful high-level features.
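A simple baseline against which learned (deep) fusion layers are compared is decision-level (late) fusion, i.e., a weighted average of per-modality trait predictions; the following sketch (hypothetical function and numbers) illustrates the idea.

```python
import numpy as np

def late_fusion(predictions, weights=None):
    """Decision-level fusion: weighted average of per-modality trait
    predictions (each a length-5 vector of Big-Five scores). With no
    weights given, modalities contribute equally."""
    preds = np.asarray(list(predictions.values()), dtype=float)
    if weights is None:
        weights = np.full(len(preds), 1.0 / len(preds))
    return np.average(preds, axis=0, weights=weights)

preds = {"face":  [0.6, 0.5, 0.7, 0.4, 0.3],
         "audio": [0.4, 0.5, 0.5, 0.6, 0.5]}
print(late_fusion(preds))  # element-wise mean of the two modalities
```

Deep multimodal models instead learn the combination (and cross-modal interactions) from data rather than fixing the weights by hand.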

Outcomes. Recent and promising results in personality computing may encourage psychologists to take an interest in machine learning approaches and to contribute to the field. Along these lines, a very promising avenue for research is the incorporation of prior knowledge into personality analysis models, which represents a new challenge for both the psychology and machine learning communities. Likewise, we believe that users of personality recognition methods would benefit from information on the decisions or recommendations made by an automatic system; thus, another promising avenue is the explainability and interpretability of personality recognition methods.

Applications. This study revealed that automatic personality trait analysis is applied in a vast number of scenarios. The reviewed works target social media, small groups, face-to-face or interface-based interviews, HRI, HCI, vlogs and video resumes, among other contexts. From a practical point of view, the wide range of potential applications related to automatic personality perception, whose limits have not yet been defined, can benefit health, affective interfaces, learning, business and leisure, among others. However, in order to be applied only for good causes, the new generation of intelligent systems equipped with personality perception capabilities will need to be more effective, explainable and inclusive, able to ethically generalize to different cultural and social contexts, benefiting everyone, everywhere. We anticipate that personality computing will become a hot topic in the next few years, with high impact in a wide number of applications and scenarios.

Acknowledgments

This project has been partially supported by Spanish Ministry grants TIN2016-74946-P, TIN2015-66951-C2-2-R and TIN2017-88515-C2-1-R. We thank the ChaLearn Looking at People sponsors for their support, including Microsoft Research, Google, NVIDIA Corporation, Amazon, Facebook and Disney Research.

References

  • [1] A. Vinciarelli and G. Mohammadi, “A survey of personality computing,” TAC, vol. 5, no. 3, pp. 273–291, 2014.
  • [2] P. T. Costa and R. R. Mccrae, Trait Theories of Personality.   Boston, MA: Springer US, 1998, pp. 103–121.
  • [3] D. C. Funder, “Accurate personality judgment,” Current Directions in Psychological Science, vol. 21, no. 3, pp. 177–182, 2012.
  • [4] A. Vinciarelli, Social Perception in Machines: The Case of Personality and the Big-Five Traits.   Springer, 2016, pp. 151–164.
  • [5] R. R. McCrae and O. P. John, “An introduction to the five-factor model and its applications,” Journal of Personality, vol. 60, no. 2, pp. 175–215, 1992.
  • [6] A. E. Abele and B. Wojciszke, “Agency and communion from the perspective of self versus others,” Journal of Personality and Social Psychology, vol. 95, no. 5, pp. 751–763, 2007.
  • [7] R. B. Cattell and S. E. Krug, “The number of factors in the 16pf: A review of the evidence with special emphasis on methodological problems,” Educational and Psychological Measurement, vol. 46, no. 3, pp. 509–522, 1986.
  • [8] Y. Güçlütürk, U. Güçlü, X. Baró, H. J. Escalante, I. Guyon, S. Escalera, M. A. J. van Gerven, and R. van Lier, “Multimodal first impression analysis with deep residual networks,” TAC, vol. PP, no. 99, pp. 1–1, 2017.
  • [9] V. Ponce-Lopez, B. Chen, A. Places, M. Oliu, C. Corneanu, X. Baro, H. Escalante, I. Guyon, and S. Escalera, “ChaLearn LAP 2016: First round challenge on first impressions - dataset and results,” in ECCVW, 2016, pp. 400–418.
  • [10] G. J. Boyle and E. Helmes, The Cambridge handbook of personality psychology, Chapter: Methods of Personality Assessment.   Cambridge University Press, Eds: P. J. Corr & G. Matthews, pp.110-126, 2009.
  • [11] F. Pianesi, N. Mana, A. Cappelletti, B. Lepri, and M. Zancanaro, “Multimodal recognition of personality traits in social interactions,” in ICMI, 2008, pp. 53–60.
  • [12] J. Willis and A. Todorov, “First impressions: Making up your mind after a 100-ms exposure to a face,” Psychological Science, vol. 17, no. 7, pp. 592–598, 2006.
  • [13] A. Todorov, Face Value: The Irresistible Influence of First Impressions.   Princeton and Oxford: Princeton University Press, 2017.
  • [14] C.-L. Zhang, H. Zhang, X.-S. Wei, and J. Wu, “Deep bimodal regression for apparent personality analysis,” in ECCVW, 2016.
  • [15] Y. Güçlütürk, U. Güçlü, M. A. van Gerven, and R. van Lier, “Deep impression: Audiovisual deep residual networks for multimodal apparent personality trait recognition,” ECCVW, 2016.
  • [16] K. Richardson, M. Coeckelbergh, K. Wakunuma, E. Billing, T. Ziemke, P. Gomez, B. Vanderborght, and T. Belpaeme, “Robot enhanced therapy for children with autism (dream): A social model of autism,” Technology and Society Magazine, vol. 37, no. 1, pp. 30–39, 2018.
  • [17] L. S. Nguyen and D. Gatica-Perez, “Hirability in the wild: Analysis of online conversational video resumes,” IEEE Transactions on Multimedia, vol. 18, no. 7, pp. 1422–1437, 2016.
  • [18] C. Liem, M. Langer, A. Demetriou, A. Hiemstra, A. Sukma Wicaksana, M. Born, and C. König, Psychology Meets Machine Learning: Interdisciplinary Perspectives on Algorithmic Job Candidate Screening.   Springer International Publishing, 2018, pp. 197–253.
  • [19] J.-I. Biel, L. Teijeiro-Mosquera, and D. Gatica-Perez, “Facetube: predicting personality from facial expressions of emotion in online conversational video,” in ICMI, 2012, pp. 53–56.
  • [20] F. Gürpınar, H. Kaya, and A. A. Salah, “Combining deep facial and ambient features for first impression estimation,” in ECCVW, G. Hua and H. Jégou, Eds., 2016, pp. 372–385.
  • [21] A. Todorov and J. Porter, “Misleading first impressions different for different facial images of the same person,” Psychological Science, vol. 25, no. 7, pp. 1404–17, 2014.
  • [22] A. Todorov, C. Said, and S. Verosky, Personality Impressions from Facial Appearance.   Oxford Handbook of Face Perception, 2012.
  • [23] C. C. Ballew and A. Todorov, “Predicting political elections from rapid and unreflective face judgments,” in National Academy of Sciences of the USA, 104(46), 2007, pp. 17 948–17 953.
  • [24] C. Y. Olivola and A. Todorov, “Elected in 100 milliseconds: Appearance-based trait inferences and voting,” Journal of Nonverbal Behavior, vol. 34, no. 2, pp. 83–110, 2010.
  • [25] J. Joo, F. F. Steen, and S. C. Zhu, “Automated facial trait judgment and election outcome prediction: Social dimensions of face,” in ICCV, 2015, pp. 3712–3720.
  • [26] F. Funk, M. Walker, and A. Todorov, “Modelling perceptions of criminality and remorse from faces using a data-driven computational approach,” Cognition and Emotion, vol. 31, no. 7, pp. 1431–1443, 2017.
  • [27] H. Escalante, H. Kaya, A. Salah, S. Escalera, Y. Gucluturk, U. Guclu, X. Baro, I. Guyon, J. Jacques Jr., M. Madadi, S. Ayache, E. Viegas, F. Gurpinar, A. Wicaksana, C. Liem, M. van Gerven, and R. van Lier, “Explaining First Impressions: Modeling, Recognizing, and Explaining Apparent Personality from Videos,” ArXiv e-prints, Feb. 2018.
  • [28] A. C. Islam, J. J. Bryson, and A. Narayanan, “Semantics derived automatically from language corpora contain human-like biases,” Science, vol. 356, no. 6334, pp. 183–186, 2017.
  • [29] M. Walker, F. Jiang, T. Vetter, and S. Sczesny, “Universals and cultural differences in forming personality trait judgments from faces,” Social Psychological and Personality Science, vol. 2, no. 6, pp. 609–617, 2011.
  • [30] C. Sofer, R. Dotsch, M. Oikawa, H. Oikawa, D. H. J. Wigboldus, and A. Todorov, “For your local eyes only: Culture-specific face typicality influences perceptions of trustworthiness,” Perception, vol. 46, no. 8, pp. 914–928, 2017.
  • [31] C. A. M. Sutherland, A. W. Young, and G. Rhodes, “Facial first impressions from another angle: How social judgements are influenced by changeable and invariant facial properties,” British Journal of Psychology, vol. 108, no. 2, pp. 397–415, 2017.
  • [32] K. Mattarozzi, A. Todorov, M. Marzocchi, A. Vicari, and P. M. Russo, “Effects of gender and personality on first impression,” PLOS ONE, vol. 10, no. 9, pp. 1–13, 09 2015.
  • [33] R. Jenkins, D. White, X. V. Montfort, and A. M. Burton, “Variability in photos of the same face,” Cognition, vol. 121, no. 3, pp. 313 – 323, 2011.
  • [34] S. C. Rudert, L. Reutner, R. Greifeneder, and M. Walker, “Faced with exclusion: Perceived facial warmth and competence influence moral judgments of social exclusion,” Journal of Experimental Social Psychology, vol. 68, pp. 101 – 112, 2017.
  • [35] W. Fleeson, “Toward a structure- and process-integrated view of personality: Traits as density distributions of states,” Personality and Social Psychology, vol. 80, no. 6, pp. 1011–1027, 2001.
  • [36] T. Vetter and M. Walker, Computer-Generated Images in Face Perception.   Andrew J. Calder, Gillian Rhodes, Mark H. Johnson, James V. Haxby (Editors). The Oxford Handbook of Face Perception: Oxford University Press, pp. 387-399, 2011.
  • [37] D. Barratt, A. C. Rédei, A. Innes-Ker, and J. van de Weijer, “Does the kuleshov effect really exist? revisiting a classic film experiment on facial expressions and emotional contexts,” Perception, vol. 45, no. 8, pp. 847–874, 2016.
  • [38] C. Y. Olivola and A. Todorov, “Fooled by first impressions? reexamining the diagnostic value of appearance-based inferences,” Journal of Experimental Social Psychology, vol. 46, no. 2, pp. 315 – 324, 2010.
  • [39] H. Escalante, I. Guyon, S. Escalera, J. Jacques, M. Madadi, X. Baró, S. Ayache, E. Viegas, Y. Güçlütürk, U. Güçlü, M. A. J. van Gerven, and R. van Lier, “Design of an explainable machine learning challenge for video interviews,” in IJCNN, 2017, pp. 3688–3695.
  • [40] J.-I. Biel, V. Tsiminaki, J. Dines, and D. Gatica-Perez, “Hi youtube!: Personality impressions and verbal content in social video,” in ICMI, 2013, pp. 119–126.
  • [41] J. Joshi, H. Gunes, and R. Goecke, “Automatic prediction of perceived traits using visual cues under varied situational context,” in ICPR, 2014, pp. 2855–2860.
  • [42] P. Bremner, O. Celiktutan, and H. Gunes, “Personality perception of robot avatar tele-operators,” in HRI, 2016, pp. 141–148.
  • [43] B. Chen, S. Escalera, I. Guyon, V. Ponce-López, N. Shah, and M. Oliu Simón, Overcoming Calibration Problems in Pattern Labeling with Pairwise Ratings: Application to Personality Traits.   Springer International Publishing, 2016, pp. 419–432.
  • [44] J. Jacques Junior, C. Ozcinar, M. Jakovljević, X. Baró, G. Anbarjafari, and S. Escalera, “On the effect of age perception biases for real age regression (in press),” in FG, 2019, pp. 1–8.
  • [45] L. Teijeiro-Mosquera, J.-I. Biel, J. L. Alba-Castro, and D. Gatica-Perez, “What your face vlogs about: expressions of emotion and big-five traits impressions in youtube,” TAC, vol. 6, no. 2, pp. 193–205, 2015.
  • [46] I. Naim, M. I. Tanveer, D. Gildea, and M. E. Hoque, “Automated prediction and analysis of job interview performance: The role of what you say and how you say it,” in FG, 2015, pp. 1–6.
  • [47] S. C. Guntuku, L. Qiu, S. Roy, W. Lin, and V. Jakhetiya, “Do others perceive you as you want them to?: Modeling personality based on selfies,” in International Workshop on Affect & Sentiment in Multimedia, 2015, pp. 21–26.
  • [48] Y. Yan, J. Nie, L. Huang, Z. Li, Q. Cao, and Z. Wei, “Exploring relationship between face and trustworthy impression using mid-level facial features,” in ICMM, 2016, pp. 540–549.
  • [49] A. Dhall and J. Hoey, “First impressions - predicting user personality from twitter profile images,” in Human Behavior Understanding, M. Chetouani, J. Cohn, and A. Salah, Eds., 2016, pp. 148–158.
  • [50] N. Al Moubayed, Y. Vazquez-Alvarez, A. McKay, and A. Vinciarelli, “Face-based automatic personality perception,” in International Conference on Multimedia, 2014, pp. 1153–1156.
  • [51] P. Phillips, H. Wechsler, J. Huang, and P. Rauss, “The FERET database and evaluation procedure for face-recognition algorithms,” Image and vision computing, vol. 16, no. 5, pp. 295–306, 1998.
  • [52] C.-H. Lin, Y.-Y. Chen, B.-C. Chen, Y.-L. Hou, and W. Hsu, “Facial attribute space compression by latent human topic discovery,” in International Conference on Multimedia, 2014, pp. 1157–1160.
  • [53] L. Liu, D. Preotiuc-Pietro, Z. Samani, M. Moghaddam, and L. Ungar, “Analyzing personality through social media profile picture choice,” in ICWSM, 2016, pp. 211–220.
  • [54] R. Kosti, J. M. Alvarez, A. Recasens, and A. Lapedriza, “Emotion recognition in context,” in CVPR, 2017, pp. 1960–1968.
  • [55] F. Noroozi, C. A. Corneanu, D. Kaminska, T. Sapinski, S. Escalera, and G. Anbarjafari, “Survey on emotional body gesture recognition,” CoRR, vol. abs/1801.07481, 2018.
  • [56] J.-I. Biel and D. Gatica-Perez, “Voices of vlogging,” in ICWSM, 2010, pp. 211–214.
  • [57] J.-I. Biel, O. Aran, and D. Gatica-Perez, “You are known by how you vlog: Personality impressions and nonverbal behavior in youtube,” in ICWSM, 2011, pp. 446–449.
  • [58] J.-I. Biel and D. Gatica-Perez, “The youtube lens: Crowdsourced personality impressions and audiovisual analysis of vlogs,” IEEE Transactions on Multimedia, vol. 15, no. 1, pp. 41–55, 2013.
  • [59] O. Aran and D. Gatica-Perez, “Cross-domain personality prediction: from video blogs to small group meetings,” in International Conference on Multimodal Interaction, 2013, pp. 127–130.
  • [60] O. Celiktutan and H. Gunes, “Continuous prediction of perceived traits and social dimensions in space and time,” in ICIP, 2014, pp. 4196–4200.
  • [61] O. Celiktutan, F. Eyben, E. Sariyanidi, H. Gunes, and B. Schuller, “Maptraits 2014: The first audio/visual mapping personality traits challenge,” in Mapping Personality Traits Challenge and Workshop, 2014, pp. 3–9.
  • [62] O. Celiktutan, E. Sariyanidi, and H. Gunes, “Let me tell you about your personality!: Real-time personality prediction from nonverbal behavioural cues,” in FG, vol. 1, 2015, pp. 1–1.
  • [63] C. Ventura, D. Masip, and A. Lapedriza, “Interpreting CNN models for apparent personality trait regression,” in CVPRW, 2017, pp. 1705–1713.
  • [64] S. E. Bekhouche, F. Dornaika, A. Ouafi, and T. A. Abdemalik, “Personality traits and job candidate screening via analyzing facial videos,” in CVPRW, 2017.
  • [65] H. Kaya, F. Gürpinar, and A. A. Salah, “Multi-modal score fusion and decision trees for explainable automatic job candidate screening from video cvs,” in CVPRW, 2017, pp. 1651–1659.
  • [66] F. Gürpinar, H. Kaya, and A. A. Salah, “Multimodal fusion of audio, scene, and face features for first impression estimation,” in ICPR, 2016, pp. 43–48.
  • [67] O. Celiktutan and H. Gunes, “Automatic prediction of impressions in time and across varying context: Personality, attractiveness and likeability,” TAC, vol. 8, no. 1, pp. 29–42, 2017.
  • [68] H. Bilen, B. Fernando, E. Gavves, and A. Vedaldi, “Action recognition with dynamic image networks,” TPAMI, vol. PP, no. 99, pp. 1–14, 2017.
  • [69] B. Kim, D. M. Malioutov, K. R. Varshney, and A. Weller, “Proceedings of the 2017 ICML Workshop on Human Interpretability in Machine Learning (WHI 2017),” ArXiv e-prints, 2017.
  • [70] K.-R. Müller, A. Vedaldi, L. K. Hansen, W. Samek, and G. Montavon, “Interpreting, explaining and visualizing deep learning workshop,” NIPS, vol. Forthcoming, 2017.
  • [71] O. Aran and D. Gatica-Perez, “One of a kind: inferring personality impressions in meetings,” in ICMI, 2013, pp. 11–18.
  • [72] D. Sanchez-Cortes, O. Aran, M. S. Mast, and D. Gatica-Perez, “A nonverbal behavior approach to identify emergent leaders in small groups,” IEEE Transactions on Multimedia, vol. 14, no. 3, pp. 816–832, 2012.
  • [73] D. Gatica-Perez, “Automatic nonverbal analysis of social interaction in small groups: A review,” Image and Vision Computing, vol. 27, no. 12, pp. 1775–1787, 2009.
  • [74] S. Okada, O. Aran, and D. Gatica-Perez, “Personality trait classification via co-occurrent multiparty multimodal event discovery,” in ICMI, 2015, pp. 15–22.
  • [75] J. Staiano, B. Lepri, R. Subramanian, N. Sebe, and F. Pianesi, “Automatic modeling of personality states in small group interactions,” in Int. Conference on Multimedia, 2011, pp. 989–992.
  • [76] G. McKeown, M. Valstar, R. Cowie, M. Pantic, and M. Schroder, “The semaine database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent,” TAC, vol. 3, no. 1, pp. 5–17, 2012.
  • [77] O. Celiktutan, P. Bremner, and H. Gunes, “Personality classification from robot-mediated communication cues,” in RO-MAN, 2016, pp. 1–2.
  • [78] J.-I. Biel, “Mining conversational social video,” Ph.D. dissertation, École Polytechnique Fédérale de Lausanne, 2013.
  • [79] L. S. Nguyen, D. Frauendorfer, M. S. Mast, and D. Gatica-Perez, “Hire me: Computational inference of hirability in employment interviews based on nonverbal behavior,” IEEE Transactions on Multimedia, vol. 16, no. 4, pp. 1018–1031, 2014.
  • [80] L. S. Nguyen, “Computational analysis of behavior in employment interviews and video resumes,” Ph.D. dissertation, École Polytechnique Fédérale de Lausanne, 2015.
  • [81] L. S. Nguyen, A. Marcos-Ramiro, M. Marrón Romera, and D. Gatica-Perez, “Multimodal analysis of body communication cues in employment interviews,” in ICMI, 2013, pp. 437–444.
  • [82] D. Gatica-Perez, D. Sanchez-Cortes, T. M. T. Do, D. B. Jayagopi, and K. Otsuka, “Vlogging over time: Longitudinal impressions and behavior in youtube,” in MUM, 2018, pp. 37–46.
  • [83] A. Subramaniam, V. Patel, A. Mishra, P. Balasubramanian, and A. Mittal, “Bi-modal first impressions recognition using temporally ordered deep audio and stochastic visual features,” in ECCVW, 2016.
  • [84] X. S. Wei, C. L. Zhang, H. Zhang, and J. Wu, “Deep bimodal regression of apparent personality traits from short video sequences,” TAC, vol. PP, no. 99, pp. 1–14, 2017.
  • [85] H. J. Escalante, V. Ponce, J. Wan, M. Riegler, C. B., A. Clapes, S. Escalera, I. Guyon, X. Baro, P. Halvorsen, H. Müller, and M. Larson, “Chalearn joint contest on multimedia challenges beyond visual analysis: An overview,” in ICPRW, 2016.
  • [86] N. Mana, B. Lepri, P. Chippendale, A. Cappelletti, F. Pianesi, P. Svaizer, and M. Zancanaro, “Multimodal corpus of multi-party meetings for automatic social behavior analysis and personality traits detection,” in Workshop on Tagging, mining and retrieval of human related activity information, 2007, pp. 9–14.
  • [87] A. N. Finnerty, S. Muralidhar, L. S. Nguyen, F. Pianesi, and D. Gatica-Perez, “Stressful first impressions in job interviews,” in ICMI, 2016, pp. 325–332.
  • [88] B. Lepri, R. Subramanian, K. Kalimeri, J. Staiano, F. Pianesi, and N. Sebe, “Employing social gaze and speaking activity for automatic determination of the extraversion trait,” in ICMI, 2010.
  • [89] ——, “Connecting meeting behavior with extraversion – a systematic study,” TAC, vol. 3, no. 4, pp. 443–455, 2012.
  • [90] X. Huang and A. Kovashka, “Inferring visual persuasion via body language, setting, and deep features,” in CVPRW, 2016, pp. 778–784.
  • [91] L. Batrinca, N. Mana, B. Lepri, F. Pianesi, and N. Sebe, “Please, tell me about yourself: automatic personality assessment using short self-presentations,” in ICMI, 2011, pp. 255–262.
  • [92] G. Chávez-Martínez, S. Ruiz-Correa, and D. Gatica-Perez, “Happy and agreeable?: Multi-label classification of impressions in social video,” in MUM, 2015, pp. 109–120.
  • [93] C. Sarkar, S. Bhatia, A. Agarwal, and J. Li, “Feature analysis for computational personality recognition using youtube personality data set,” in WCPR, 2014, pp. 11–14.
  • [94] F. Alam and G. Riccardi, “Predicting personality traits using multimodal information,” in WCPR, 2014, pp. 15–18.
  • [95] G. Farnadi, S. Sushmita, G. Sitaraman, N. Ton, M. De Cock, and S. Davalos, “A multivariate regression approach to personality impression recognition of vloggers,” in WCPR, 2014, pp. 1–6.
  • [96] F. Celli, B. Lepri, J.-I. Biel, D. Gatica-Perez, G. Riccardi, and F. Pianesi, “The workshop on computational personality recognition 2014,” in Int. Conference on Multimedia, 2014, pp. 1245–1246.
  • [97] R. Srivastava, J. Feng, S. Roy, S. Yan, and T. Sim, “Don’t ask me what i’m like, just watch and listen,” in International Conference on Multimedia, 2012, pp. 329–338.
  • [98] H. Salam, O. Çeliktutan, I. Hupont, H. Gunes, and M. Chetouani, “Fully automatic analysis of engagement and its relationship to personality in human-robot interactions,” IEEE Access, vol. 5, pp. 705–721, 2017.
  • [99] B. Ferwerda, M. Schedl, and M. Tkalcic, “Using instagram picture features to predict users’ personality,” in MMM, 2016.
  • [100] F. Celli, E. Bruni, and B. Lepri, “Automatic personality and interaction style recognition from facebook profile pictures,” in International Conference on Multimedia, 2014, pp. 1101–1104.
  • [101] O. Celiktutan and H. Gunes, “Computational analysis of human-robot interactions through first-person vision: Personality and interaction experience,” in RO-MAN, 2015, pp. 815–820.
  • [102] R. Subramanian, Y. Yan, J. Staiano, O. Lanz, and N. Sebe, “On the relationship between head pose, social attention and personality prediction for unstructured and dynamic group interactions,” in ICMI, 2013, pp. 3–10.
  • [103] S. Fang, C. Achard, and S. Dubuisson, “Personality classification and behaviour interpretation: An approach based on feature categories,” in ICMI, 2016, pp. 225–232.
  • [104] B. Lepri, N. Mana, A. Cappelletti, F. Pianesi, and M. Zancanaro, “Modeling the personality of participants during group interactions,” in ACM UMAP, 2009, pp. 114–125.
  • [105] K. Kalimeri, B. Lepri, and F. Pianesi, “Causal-modelling of personality traits: extraversion and locus of control,” in Workshop on Social Signal Processing, 2010, pp. 41–46.
  • [106] L. Batrinca, B. Lepri, N. Mana, and F. Pianesi, “Multimodal recognition of personality traits in human-computer collaborative tasks,” in ICMI, 2012, pp. 39–46.
  • [107] L. Batrinca, B. Lepri, and F. Pianesi, “Multimodal recognition of personality during short self-presentations,” in ACM Workshop on Human Gesture and Behavior Understanding, 2011, pp. 27–28.
  • [108] F. Rahbar, S. M. Anzalone, G. Varni, E. Zibetti, S. Ivaldi, and M. Chetouani, “Predicting extraversion from non-verbal features during a face-to-face human-robot interaction,” in International Conference on Social Robotics, 2015, pp. 543–553.
  • [109] S. Anzalone, G. Varni, S. Ivaldi, and M. Chetouani, “Automated prediction of extraversion during human-humanoid interaction,” Int. Journal of Social Robotics, vol. 9, no. 3, pp. 385–399, 2017.
  • [110] G. Farnadi, G. Sitaraman, S. Sushmita, F. Celli, M. Kosinski, D. Stillwell, S. Davalos, M.-F. Moens, and M. D. Cock, “Computational personality recognition in social media,” User Modeling and User-Adapted Interaction, vol. 22, no. 2, pp. 109–142, 2016.
  • [111] M. K. Abadi, J. A. M. Correa, J. Wache, H. Yang, I. Patras, and N. Sebe, “Inference of personality traits and affect schedule by analysis of spontaneous reactions to affective videos,” in FG, vol. 1, 2015, pp. 1–8.
  • [112] L. Batrinca, N. Mana, B. Lepri, N. Sebe, and F. Pianesi, “Multimodal personality recognition in collaborative goal-oriented tasks,” IEEE Trans. on Multimedia, vol. 18, no. 4, pp. 659–673, 2016.
  • [113] K. Wolffhechel, J. Fagertun, U. Jacobsen, W. Majewski, A. Hemmingsen, C. Larsen, S. Lorentzen, and H. Jarmer, “Interpretation of appearance: The effect of facial features on first impressions and personality,” PLOS ONE, vol. 9, no. 9, pp. 1–8, 09 2014.
  • [114] F. Mairesse, M. A. Walker, M. R. Mehl, and R. K. Moore, “Using linguistic cues for the automatic recognition of personality in conversation and text,” Journal of Artificial Intelligence Research, vol. 30, no. 1, pp. 457–500, Nov. 2007.
  • [115] O. Celiktutan, E. Skordos, and H. Gunes, “Multimodal human-human-robot interactions (MHHRI) dataset for studying personality and engagement,” TAC, 2018.
  • [116] F. Celli, F. Pianesi, D. Stillwell, and M. Kosinski, “Workshop on computational personality recognition: Shared task,” in AAAI Technical Report WS-13-01 Computational Personality Recognition (Shared Task), 2013, pp. 2–5.
  • [117] H. Kaya and A. A. Salah, “Continuous mapping of personality traits: A novel challenge and failure conditions,” in Mapping Personality Traits Challenge and Workshop, 2014, pp. 17–24.
  • [118] M. Sidorov, S. Ultes, and A. Schmitt, “Automatic recognition of personality traits: A multimodal approach,” in Mapping Personality Traits Challenge and Workshop, 2014, pp. 11–15.
  • [119] A. V. Ivanov and X. Chen, “Modulation spectrum analysis for speaker personality trait recognition,” in Interspeech, 2012.
  • [120] B. Schuller, S. Steidl, A. Batliner, E. Noth, A. Vinciarelli, F. Burkhardt, R. van Son, F. Weninger, F. Eyben, T. Bocklet, G. Mohammadi, and B. Weiss, “A survey on perceived speaker traits: Personality, likability, pathology, and the first challenge,” Computer Speech and Language, vol. 29, no. 1, pp. 100–131, 2015.
  • [121] J.-I. Biel and D. Gatica-Perez, “Vlogcast yourself: Nonverbal behavior and attention in social media,” in ICMI-MLMI, 2010, pp. 50:1–50:4.
  • [122] A. Dhall, R. Goecke, J. Joshi, J. Hoey, and T. Gedeon, “Emotiw 2016: Video and group-level emotion recognition challenges,” in ICMI, 2016, pp. 427–432.
  • [123] M. Valstar, J. Gratch, B. Schuller, F. Ringeval, D. Lalanne, M. T. Torres, S. Scherer, G. Stratou, R. Cowie, and M. Pantic, “Avec 2016 - depression, mood, and emotion recognition workshop and challenge,” in ArXiv 1605.01600, 2016.
  • [124] J. E. J. Ware and C. D. Sherbourne, “The MOS 36-item short-form health survey (SF-36). I. Conceptual framework and item selection,” Medical Care, vol. 30, no. 6, 1992.
  • [125] J. J. Raven, “Raven progressive matrices,” in Handbook of Nonverbal Assessment.   Springer US, 2003, pp. 223–237.
  • [126] A. Baron and M. Banaji, “Implicit association test (iat),” in Encyclopedia of group processes & intergroup relations.   SAGE Publications, Inc., Thousand Oaks, CA, 1992, pp. 433–435.