Characterizing Hirability via Personality and Behavior

06/22/2020
by Harshit Malik, et al.

While personality traits have been extensively modeled as behavioral constructs, we model job hirability as a personality construct. On the First Impressions Candidate Screening (FICS) dataset, we examine relationships among personality and hirability measures. Modeling hirability as a discrete/continuous variable with the big-five personality traits as predictors, we utilize (a) apparent personality annotations, and (b) personality estimates obtained from audio, visual and textual cues for hirability prediction (HP). We also examine the efficacy of a two-step HP process involving (1) personality estimation from multimodal behavioral cues, followed by (2) HP from the personality estimates. Experiments performed on ≈5000 FICS videos yield the following findings. (1) For each of the text, audio and visual modalities, HP via the above two-step process is more effective than direct prediction from behavioral cues. Superior results are achieved when hirability is modeled as a continuous vis-à-vis categorical variable. (2) Among visual cues, eye and bodily information achieve performance comparable to face cues for predicting personality and hirability. (3) Explanatory analyses reveal the impact of multimodal behavior on personality impressions; e.g., Conscientiousness impressions are impacted by the use of cuss words (verbal behavior) and eye movements (non-verbal behavior), confirming prior observations.
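For readers who want a concrete picture of the two-step pipeline sketched above (behavioral cues → big-five trait estimates → hirability score, compared against direct prediction from cues), here is a minimal illustrative sketch. The synthetic data, the ridge-regression models, and all variable names are assumptions made for illustration only; they are not the paper's actual features, learners, or results.

```python
# Minimal sketch of a two-step hirability-prediction (HP) pipeline.
# Synthetic data and ridge regression are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_videos, n_behavioral_feats = 5000, 64                 # ~5000 FICS-like videos
X = rng.normal(size=(n_videos, n_behavioral_feats))     # audio/visual/text cue features
Y_traits = rng.uniform(size=(n_videos, 5))              # big-five annotations in [0, 1]
y_hire = rng.uniform(size=n_videos)                     # continuous hirability label

X_tr, X_te, Yt_tr, Yt_te, yh_tr, yh_te = train_test_split(
    X, Y_traits, y_hire, test_size=0.2, random_state=0)

# Step 1: estimate the five personality traits from multimodal behavioral cues.
trait_model = Ridge(alpha=1.0).fit(X_tr, Yt_tr)
Yt_pred_tr, Yt_pred_te = trait_model.predict(X_tr), trait_model.predict(X_te)

# Step 2: predict hirability (here modeled as a continuous variable)
# from the personality estimates produced in Step 1.
hire_model = Ridge(alpha=1.0).fit(Yt_pred_tr, yh_tr)
print("Test R^2 (two-step):", hire_model.score(Yt_pred_te, yh_te))

# Baseline: direct hirability prediction from behavioral cues, for comparison.
direct_model = Ridge(alpha=1.0).fit(X_tr, yh_tr)
print("Test R^2 (direct):  ", direct_model.score(X_te, yh_te))
```

On real data, the comparison between the two printed scores mirrors the paper's finding that the two-step route can outperform direct prediction; on the random data above, both scores are near zero by construction.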

