Toward Multimodal Modeling of Emotional Expressiveness

by Victoria Lin, et al.

Emotional expressiveness captures the extent to which a person tends to outwardly display their emotions through behavior. Due to the close relationship between emotional expressiveness and behavioral health, as well as the crucial role that it plays in social interaction, the ability to automatically predict emotional expressiveness stands to spur advances in science, medicine, and industry. In this paper, we explore three related research questions. First, how well can emotional expressiveness be predicted from visual, linguistic, and multimodal behavioral signals? Second, which behavioral modalities are uniquely important to the prediction of emotional expressiveness? Third, which behavioral signals are reliably related to emotional expressiveness? To answer these questions, we add highly reliable transcripts and human ratings of perceived emotional expressiveness to an existing video database and use this data to train, validate, and test predictive models. Our best model shows promising predictive performance on this dataset (RMSE=0.65, R^2=0.45, r=0.74). Multimodal models tend to perform best overall, and models trained on the linguistic modality tend to outperform models trained on the visual modality. Finally, examination of our interpretable models' coefficients reveals a number of visual and linguistic behavioral signals, such as facial action unit intensity, overall word count, and use of words related to social processes, that reliably predict emotional expressiveness.
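The abstract reports three standard regression evaluation metrics (RMSE, R^2, and Pearson's r). As an illustrative sketch, not the authors' actual evaluation code, these can be computed from predicted and ground-truth expressiveness ratings as follows (the function name and inputs are hypothetical):

```python
import numpy as np

def expressiveness_metrics(y_true, y_pred):
    """Compute the three metrics reported in the abstract:
    root-mean-square error (RMSE), coefficient of
    determination (R^2), and Pearson correlation (r)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    # RMSE: square root of the mean squared prediction error
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    # R^2: 1 minus (residual sum of squares / total sum of squares)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    # Pearson r: linear correlation between truth and prediction
    r = np.corrcoef(y_true, y_pred)[0, 1]
    return rmse, r2, r
```

Note that R^2 and r^2 coincide only for least-squares fits on the same data; on held-out test sets, as here, they can diverge, which is why both are reported.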




