On the Complementarity of Images and Text for the Expression of Emotions in Social Media

02/11/2022
by Anna Khlyzova, et al.

Authors of social media posts communicate their emotions, and what causes them, through text and images. While there is work on emotion and stimulus detection for each modality separately, it is not yet known whether the modalities carry complementary emotion information in social media. We aim to fill this research gap and contribute a novel annotated corpus of English multimodal Reddit posts. On this resource, we develop models to automatically detect the relation between image and text, the emotion stimulus category, and the emotion class. We evaluate whether these tasks require both modalities and find, for image-text relations (complementary, illustrative, opposing), that text alone is sufficient for most categories: the information in the text allows us to predict whether an image is required for emotion understanding. The emotions of anger and sadness are best predicted with a multimodal model, while text alone is sufficient for disgust, joy, and surprise. Stimuli depicted by objects, animals, food, or a person are best predicted by image-only models, while multimodal models are most effective on art, events, memes, places, or screenshots.
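To make the contrast between unimodal and multimodal prediction concrete, the sketch below illustrates the general late-fusion idea often used for such tasks: per-modality feature vectors are concatenated before a single linear decision layer. All function names, dimensions, and weights here are illustrative assumptions, not the authors' actual architecture.

```python
# Illustrative late-fusion sketch for multimodal emotion classification.
# Feature vectors and class weights are toy values, not learned parameters.

def fuse(text_feats, image_feats):
    """Late fusion: concatenate the per-modality feature vectors."""
    return text_feats + image_feats

def linear_score(features, weights, bias):
    """Score one emotion class with a linear layer (dot product + bias)."""
    return sum(f * w for f, w in zip(features, weights)) + bias

def classify(features, class_weights):
    """Return the emotion class with the highest linear score."""
    return max(class_weights,
               key=lambda c: linear_score(features, *class_weights[c]))

# Toy example: 2-dim text features and 2-dim image features, two classes.
text_feats = [0.9, 0.1]
image_feats = [0.2, 0.8]
fused = fuse(text_feats, image_feats)

class_weights = {
    "anger": ([1.0, 0.0, 1.0, 0.0], 0.0),
    "joy":   ([0.0, 1.0, 0.0, 1.0], 0.0),
}
print(classify(fused, class_weights))  # the class favored by both modalities
```

A text-only or image-only baseline, as compared in the paper, corresponds to passing just one of the two feature vectors (with matching weight dimensions) instead of the fused one.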


