REDAffectiveLM: Leveraging Affect Enriched Embedding and Transformer-based Neural Language Model for Readers' Emotion Detection

01/21/2023
by Anoop Kadan, et al.

Technological advancements in web platforms allow people to express and share emotions towards textual write-ups written and shared by others. This opens up two interesting domains of analysis: the emotion expressed by the writer and the emotion elicited in the readers. In this paper, we propose a novel approach for readers' emotion detection from short-text documents using a deep learning model called REDAffectiveLM. In state-of-the-art NLP tasks, it is well understood that context-specific representations from transformer-based pre-trained language models help achieve improved performance. In this affective computing task, we explore how incorporating affective information can further enhance performance. To this end, we leverage context-specific and affect-enriched representations by using a transformer-based pre-trained language model in tandem with an affect-enriched Bi-LSTM+Attention network. For empirical evaluation, we compile a new dataset, REN-20k, in addition to using RENh-4k and SemEval-2007. We rigorously evaluate REDAffectiveLM across these datasets against a broad set of state-of-the-art baselines; our model consistently outperforms them with statistically significant results. These results establish that combining affect-enriched representations with context-specific representations within a neural architecture can considerably enhance readers' emotion detection. Since the impact of affect enrichment specifically on readers' emotion detection is not well explored, we conduct a detailed analysis of the affect-enriched Bi-LSTM+Attention component using qualitative and quantitative model behavior evaluation techniques. We observe that, compared to conventional semantic embeddings, affect-enriched embeddings increase the network's ability to identify and assign weight to the key terms responsible for readers' emotions.
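To make the two-branch design described in the abstract concrete, here is a minimal PyTorch sketch: a pre-trained transformer supplies the context-specific document representation, while a Bi-LSTM with additive attention runs over affect-enriched word embeddings; the two views are concatenated for emotion classification. This is a reconstruction from the abstract under stated assumptions, not the authors' released implementation: the class name REDAffectiveLMSketch, the choice of bert-base-uncased, the hidden sizes, the six-emotion output (matching SemEval-2007), and the additive-attention scorer are all illustrative placeholders.

```python
# Hypothetical sketch of the two-branch REDAffectiveLM design (assumed details).
import torch
import torch.nn as nn
from transformers import AutoModel


class REDAffectiveLMSketch(nn.Module):
    """Readers' emotion classifier: a pre-trained transformer provides
    context-specific features, while a Bi-LSTM with additive attention
    runs over affect-enriched word embeddings."""

    def __init__(self, affect_vocab_size, affect_dim=300, hidden=128,
                 num_emotions=6, lm_name="bert-base-uncased"):
        super().__init__()
        self.lm = AutoModel.from_pretrained(lm_name)  # context branch
        # Affect-enriched embeddings (e.g., word vectors enriched with
        # emotion-lexicon information) would be loaded into this table.
        self.affect_emb = nn.Embedding(affect_vocab_size, affect_dim)
        self.bilstm = nn.LSTM(affect_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)  # additive attention scorer
        lm_dim = self.lm.config.hidden_size
        self.classifier = nn.Linear(lm_dim + 2 * hidden, num_emotions)

    def forward(self, lm_input_ids, lm_attention_mask, affect_token_ids):
        # Context-specific document vector: the transformer's [CLS] state.
        ctx = self.lm(input_ids=lm_input_ids,
                      attention_mask=lm_attention_mask).last_hidden_state[:, 0]
        # Affect branch: Bi-LSTM states pooled by attention weights
        # (padding masks omitted here for brevity).
        states, _ = self.bilstm(self.affect_emb(affect_token_ids))
        weights = torch.softmax(self.attn(states).squeeze(-1), dim=1)
        affect = (weights.unsqueeze(-1) * states).sum(dim=1)
        # Concatenate both views and predict the emotion scores.
        logits = self.classifier(torch.cat([ctx, affect], dim=-1))
        return logits, weights  # weights support attention-based analysis
```

Returning the per-token attention weights alongside the logits mirrors the kind of model-behavior analysis the abstract describes, where one inspects which terms the affect-enriched branch emphasizes when predicting readers' emotions.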


Related Research

04/20/2019 · Language Models with Transformers
The Transformer architecture is superior to RNN-based models in computat...

10/31/2022 · Leveraging Pre-trained Models for Failure Analysis Triplets Generation
Pre-trained Language Models recently gained traction in the Natural Lang...

09/12/2021 · TEASEL: A Transformer-Based Speech-Prefixed Language Model
Multimodal language analysis is a burgeoning field of NLP that aims to s...

12/14/2022 · Dual-branch Cross-Patch Attention Learning for Group Affect Recognition
Group affect refers to the subjective emotion that is evoked by an exter...

05/24/2022 · Analysing the Greek Parliament Records with Emotion Classification
In this project, we tackle emotion classification for the Greek language...

04/10/2022 · Pushing on Personality Detection from Verbal Behavior: A Transformer Meets Text Contours of Psycholinguistic Features
Research at the intersection of personality psychology, computer science...

02/16/2023 · Cluster-based Deep Ensemble Learning for Emotion Classification in Internet Memes
Memes have gained popularity as a means to share visual ideas through th...
