Using Knowledge-Embedded Attention to Augment Pre-trained Language Models for Fine-Grained Emotion Recognition

07/31/2021
by   Varsha Suresh, et al.
National University of Singapore

Modern emotion recognition systems are trained to recognize only a small set of emotions, and hence fail to capture the broad spectrum of emotions people experience and express in daily life. In order to engage in more empathetic interactions, future AI has to perform fine-grained emotion recognition, distinguishing between many more varied emotions. Here, we focus on improving fine-grained emotion recognition by introducing external knowledge into a pre-trained self-attention model. We propose Knowledge-Embedded Attention (KEA) to use knowledge from emotion lexicons to augment the contextual representations from pre-trained ELECTRA and BERT models. Our models outperform previous models on several datasets and are better able to differentiate closely-confusable emotions, such as afraid and terrified.


I Introduction

Imagine telling your chatbot that your dog just died. Instead of correctly understanding that you are experiencing grief (and offering condolences), it classifies you as feeling sad and offers to play you a happy song to cheer you up. People experience a wide range of emotions, and it is important for AI agents to correctly recognize subtle differences between emotions like sadness and grief, in order to improve their interactions with people and to avoid making a faux pas like the chatbot above [14]. Traditionally, the vast majority of work in emotion recognition from text focuses on recognizing just six “basic” emotions [35, 1], usually happiness, surprise, sadness, anger, disgust, and fear. This set clearly fails to capture the broad spectrum of emotions that people experience and express in daily life, such as pride, guilt, and hope [41, 8, 38].

Recently, there have been efforts to focus on larger classes of emotions from text-based data with the introduction of the EmpatheticDialogues dataset [36], which consists of online conversations in 32 different emotion categories, and the GoEmotions dataset [11], which consists of Reddit comments labelled with 28 different classes. These recently-proposed datasets are an important step in training fine-grained emotion classification models that can recognize more nuanced emotions.

Concurrently, pre-trained language models such as ELECTRA [6] and BERT [12] have achieved state-of-the-art performance in NLP, such as on various text-classification tasks. Moreover, incorporating knowledge into text representations has been shown to improve model performance in various domains [37, 39]. Indeed, many of the differences between fine-grained emotion classes require deeper semantic knowledge, which may already exist in resources like emotion lexicons. Borrowing from these insights, we hypothesized that incorporating such external knowledge into existing contextualized representations would improve model performance for fine-grained emotion recognition.

In this work, we introduce Knowledge-Embedded Attention (KEA), a knowledge-augmented attention mechanism that enriches the contextual representation provided by pre-trained language models using emotional information obtained from external knowledge sources. This is achieved by incorporating the encoded emotional knowledge with the contextual representations to form a modified key matrix. This key matrix is then used to attend to the contextual representations to construct a more emotionally-aware representation of the input text that can be used to recognise emotions. We introduce two variants of KEA, (i) a word-level KEA and (ii) a sentence-level KEA, which incorporate knowledge at different text granularities.

We compare our approach with representative baselines and find that KEA-based models show improved performance on fine-grained emotion recognition. Furthermore, we perform additional analyses showing that our model generalises to other contextual encoders and remains effective with other emotion knowledge sources. Finally, we perform an in-depth case study on the EmpatheticDialogues dataset, examining the two categories of emotion classes that contribute the majority of misclassifications, to investigate the impact of KEA on both categories.

II Related Work

II-A Emotion recognition from text

In recent years, emotion recognition systems have primarily been modelled using neural architectures such as LSTMs, RNNs and CNNs [43, 7, 1, 48], as they tend to outperform classical machine learning approaches that rely on feature engineering [1]. More recently, pre-trained language models built on Transformer architectures, such as ELECTRA [6] and BERT [12], have achieved state-of-the-art performance on a variety of downstream NLP tasks, and have also been employed for emotion recognition [15, 11, 36]. However, the vast majority of the aforementioned approaches consider only a small set of 6-8 emotion classes [35], such as happiness, surprise, sadness, anger, disgust, and fear.

II-B Fine-grained emotion recognition

Many AI papers implicitly rely on psychological theories that treat emotions as discrete categories, and thus formulate emotion recognition as a classification problem. Many researchers borrow Ekman’s [13] list of six “basic” emotions, or similar lists like Plutchik’s [32] eight “primary” emotions. However, these lists are far from comprehensive: people in their daily lives experience a much larger set of emotions, including shame, guilt, and pride, which the majority of today’s emotion recognition models, trained on a limited set of emotions, fail to capture.

We use the term “fine-grained” emotion classification to indicate tasks with a larger number of emotion classes (minimally, greater than 8). Fine-grained emotion recognition systems are gaining traction due to their importance in the development of empathetic agents that can differentiate subtle and complex emotions. The major limiting factor has been the lack of datasets with fine-grained emotion labels. This has changed with the introduction of recent corpora such as EmpatheticDialogues [36], which consists of textual conversations labelled with 32 emotions, and GoEmotions [11], which consists of Reddit comments labelled with 28 emotions. In this work, we enhance contextual embeddings from pre-trained models such as ELECTRA using lexicon knowledge to build an emotion recognition model that scales well to fine-grained emotions.

II-C Knowledge-enhanced text representations

External knowledge sources are known to provide explicit knowledge for the task at hand, which can complement the representations implicitly learnt by deep learning models [37]. Knowledge has been incorporated with representations learnt by neural architectures such as BiLSTMs, CNNs and RNNs, using techniques such as concatenation [10] and attention-based mechanisms [25, 24, 39]. However, given the improvements offered by pre-trained language models like BERT, we focus the rest of this discussion on works that incorporate knowledge into these pre-trained models to further enhance their performance.

Approaches for integrating knowledge into the contextual representations of pre-trained language models can be classified into two types. The first re-trains these language models from scratch, by modifying the raw input [33, 49], via multi-task learning [21, 44], or by augmenting the word embeddings [50, 17, 5, 23]. Although these methods can improve performance on downstream tasks, the models have to be retrained and/or redesigned each time a different knowledge source needs to be accommodated.

The second set of approaches enriches the contextual representation via either early fusion or late fusion. Early fusion techniques incorporate the knowledge source at the input/embedding stage; the knowledge used here is generally linguistic in nature, such as auxiliary sentences [47]. Late fusion, on the other hand, combines the knowledge embedding of the input with the contextual representation at a later stage, and numerous approaches exist to do this. A common way of performing late fusion is to concatenate the external knowledge embedding with the contextual representation [2, 31]. Alternatively, De Bruyne et al. [10] combined BERT representations with lexicon data using word-level concatenation and passed the result through a BiLSTM to perform classification, while Wang et al. [46] pre-trained knowledge data separately using models called adapters and combined them with BERT representations at a later stage. This set of approaches provides the flexibility to add knowledge sources without modifying or re-training the language model, depending on the kind of knowledge added.

We also consider the emotional knowledge provided by lexicon data, which largely consists of association scores for emotional dimensions such as emotion intensity, valence and arousal; these ratings are generally just real-valued vectors [27, 28]. Hence, we focus on incorporating emotional knowledge into the contextual embeddings produced by pre-trained language models via late fusion, and on fine-tuning them to aid the task of emotion recognition.

Fig. 1: An overview of the proposed KEA approach. (a) The overall flow of KEA-based models. Here, $V$ is the input text transformed using the knowledge base, and $H$ is the last-layer output of the pre-trained language model (e.g., ELECTRA). (b) Sentence-level KEA, where $E$ denotes the emotional encoding that is concatenated with $h_{CLS}$ to form the Key matrix of the attention. (c) Word-level variant of KEA, where $H$ and $V$ are concatenated at the word level and passed into a BiLSTM to obtain $S$, which serves as the Key matrix for the attention.

III Proposed Approach

III-A Model Description

The proposed model, shown in Fig. 1, embeds an input text into two latent spaces: (i) a contextual representation and (ii) an emotional encoding. The emotional encoding is obtained from external knowledge sources such as emotion lexicons. We hypothesize that enriching the contextual representation with emotional encodings via Knowledge-Embedded Attention (KEA) provides a richer representation of the input text, improving the final emotion classification. We introduce both a sentence-level and a word-level variant of KEA.

Contextual representations provided by pre-trained language models have been shown to improve language understanding by paying attention to all words and their surrounding context to encode a meaningful representation of the content [12, 6]. In our work we use both BERT and the recently-introduced ELECTRA, a discriminatively pre-trained Transformer model which achieves state-of-the-art performance in various downstream tasks.

We denote the representation corresponding to the hidden states from the last layer of the pre-trained model as the matrix $H \in \mathbb{R}^{N \times d}$, where $N$ is the length of the input sequence and $d$ is the size of the output representation generated by BERT or ELECTRA (we use the -base versions of BERT and ELECTRA; $d = 768$). The representation corresponding to the [CLS] token, $h_{CLS} \in \mathbb{R}^{d}$, is taken as the contextual representation of the entire input text for classification. Intuitively, it offers a meaningful summary of the entire input, which is further enriched with external emotional knowledge via KEA.
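As a concrete illustration, the contextual representation can be obtained from HuggingFace's Transformers library along the following lines. This is a minimal sketch under our reading of the setup, not the authors' released code, and the example sentence is arbitrary.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load ELECTRA-base and tokenize a toy input (sketch, not the released code).
tokenizer = AutoTokenizer.from_pretrained("google/electra-base-discriminator")
encoder = AutoModel.from_pretrained("google/electra-base-discriminator")

batch = tokenizer(["I am so scared to live in my neighborhood."],
                  padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    H = encoder(**batch).last_hidden_state  # hidden states H: (batch, N, d), d = 768
h_cls = H[:, 0, :]                          # [CLS] is the first token: h_CLS
```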

Knowledge-Embedded Attention (KEA) incorporates emotional knowledge obtained from lexicons into the contextual representations provided by pre-trained language models. To achieve this, we use knowledge obtained from emotion lexicons, which provide ratings along different emotion-related dimensions. Below we elaborate on the two proposed variants of KEA.

Sentence-level: In sentence-level KEA (Fig. 1b), we obtain the emotional encoding by transforming the input into lexicon feature vectors whose dimension $d_{L}$ depends on the lexicon data used. The input sequence is also padded to a fixed length, which we set to 512 as it is the default sequence length of ELECTRA-base. We then project the transformed input vectors using dense layers to form the sentence-level emotional encoding $E \in \mathbb{R}^{d}$.

Following self-attention terminology [45], we concatenate the emotional encoding and the contextual representation to form the Key $K = [E; h_{CLS}]$, and use $h_{CLS}$ as the Query to obtain the softmax-attention score $\alpha$. Intuitively, $h_{CLS}$ provides a meaningful representation of the entire input based on the encoder's pre-trained knowledge, and $E$ provides an emotional summary of the input based on the lexicon information. The final representation $A$ is obtained by weighting the key matrix with $\alpha$. Including $h_{CLS}$ together with $E$ in the key $K$ helps preserve the contextual information learnt by the encoder, in addition to the added emotional knowledge, when re-weighting. The overall attention layer is given by:

$$\alpha = \mathrm{softmax}\!\left(\frac{h_{CLS} K^{\top}}{\sqrt{d}}\right), \qquad A = \alpha K \qquad (1)$$
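A minimal PyTorch sketch of the sentence-level attention in Eq. (1) follows. The two-row key $[E; h_{CLS}]$, the ReLU projection, and the 3 x 512 lexicon feature layout (the three VAD dimensions over the 512 padded positions described above) are assumptions under our reading of the paper, not a verified implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentenceLevelKEA(nn.Module):
    """Sketch of sentence-level KEA (Eq. 1); an illustrative reading."""
    def __init__(self, d=768, lexicon_dim=3 * 512):
        super().__init__()
        # Dense projection of the padded lexicon features into the model space.
        self.emo_proj = nn.Sequential(nn.Linear(lexicon_dim, d), nn.ReLU())
        self.scale = d ** 0.5

    def forward(self, h_cls, lex_feats):
        # h_cls: (B, d) [CLS] representation; lex_feats: (B, lexicon_dim)
        E = self.emo_proj(lex_feats)           # emotional encoding E: (B, d)
        K = torch.stack([E, h_cls], dim=1)     # key matrix K = [E; h_CLS]: (B, 2, d)
        q = h_cls.unsqueeze(1)                 # query: (B, 1, d)
        alpha = F.softmax(q @ K.transpose(1, 2) / self.scale, dim=-1)
        return (alpha @ K).squeeze(1)          # attended representation A: (B, d)
```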

Word-level: In word-level KEA (Fig. 1c), we modify the Key matrix by incorporating knowledge at the word level. We transform each word $i$ in the input into a lexicon feature vector $v_{i} \in \mathbb{R}^{d_{L}}$. Each contextual representation $h_{i}$ (i.e., from ELECTRA) is concatenated with the corresponding knowledge information $v_{i}$, and then projected into a latent state using a BiLSTM. We denote the hidden output states generated by the BiLSTM as $S$:

$$s_{i} = \mathrm{BiLSTM}\left([h_{i}; v_{i}]\right), \quad i = 1, \dots, N \qquad (2)$$

We use $S \in \mathbb{R}^{N \times 2d_{h}}$, where $d_{h}$ is the hidden state dimension of the LSTM, which we set to 384. In word-level KEA, $S$ serves as the Key matrix $K$. The remaining steps follow sentence-level KEA:

$$\alpha = \mathrm{softmax}\!\left(\frac{h_{CLS} K^{\top}}{\sqrt{d}}\right), \qquad A = \alpha K \qquad (3)$$
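The word-level variant in Eqs. (2)-(3) can be sketched similarly (continuing the imports above). The per-token feature size of 3 (e.g., VAD scores, with 0.5 for out-of-lexicon words) is our assumption; the BiLSTM hidden size of 384 follows the paper, so that the key dimension 2 x 384 matches $d = 768$.

```python
class WordLevelKEA(nn.Module):
    """Sketch of word-level KEA (Eqs. 2-3); an illustrative reading."""
    def __init__(self, d=768, lexicon_dim=3, hidden=384):
        super().__init__()
        self.bilstm = nn.LSTM(d + lexicon_dim, hidden,
                              batch_first=True, bidirectional=True)
        self.scale = (2 * hidden) ** 0.5

    def forward(self, H, h_cls, word_lex):
        # H: (B, N, d) token states; word_lex: (B, N, 3) per-word lexicon scores
        S, _ = self.bilstm(torch.cat([H, word_lex], dim=-1))  # S: (B, N, 2*hidden)
        q = h_cls.unsqueeze(1)                                # query; d == 2*hidden
        alpha = F.softmax(q @ S.transpose(1, 2) / self.scale, dim=-1)
        return (alpha @ S).squeeze(1)                         # A: (B, d)
```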

Classification: Finally, the attended representation $A$ is fed into a two-layer dense network to obtain the output probabilities of the emotions for the corresponding input. We train the model using the standard cross-entropy loss in single-label settings and the sigmoid cross-entropy loss in multi-label settings [11].
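To complete the picture, here is a sketch of the classification head and the two losses; the hidden width of the dense network and the placeholder tensors (`A`, `labels`, `multi_hot_labels`) are illustrative assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F

num_classes = 32  # e.g., EmpatheticDialogues; 28 for GoEmotions, 11 for AIT
head = nn.Sequential(nn.Linear(768, 768), nn.ReLU(),   # two-layer dense network
                     nn.Linear(768, num_classes))

logits = head(A)  # A: attended representation from a KEA module above
# Single-label setting (ED): standard cross-entropy over integer labels.
loss_single = F.cross_entropy(logits, labels)
# Multi-label settings (AIT, GoEmotions): sigmoid cross-entropy over multi-hot targets.
loss_multi = F.binary_cross_entropy_with_logits(logits, multi_hot_labels.float())
```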

IV Evaluation

IV-A Datasets

We tested our models on three datasets which span a range of fine-grained emotion classes (11, 28, 32 classes) and text domains (tweets, forum posts, and conversations).

  • EmpatheticDialogues (ED) [36] (https://github.com/facebookresearch/EmpatheticDialogues): This dataset consists of 24,850 two-way English conversations with an average of 4.31 utterances each. Each conversation is annotated with one of 32 emotion labels, and the label distribution is balanced. Note that labels are assigned to the entire conversation, not to each utterance. The train/validation/test split is 19,533 / 2,770 / 2,547 samples. For our input, we concatenate the utterances, separating them with the [SEP] token (see the sketch after this list).

  • GoEmotions [11] (https://github.com/google-research/google-research/tree/master/goemotions): The (filtered) version of this dataset comprises 54k English Reddit comments, each annotated with one or more of 28 classes. The train/validation/test split is 43,410 / 5,426 / 5,427 samples.

  • Affect in Tweets (AIT) [26]: This dataset was part of SemEval-2018 Task 1: Affect in Tweets (https://competitions.codalab.org/competitions/17751) and consists of Twitter data. We utilise the data provided for the E-c task, where each tweet is classified as expressing one or more of 11 emotional states of the tweeter. The dataset comprises a total of 10,983 tweets, with a train/validation/test split of 6,838 / 886 / 3,259 samples.
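As an illustration of the ED input format above, a conversation could be flattened as follows, reusing the tokenizer from the earlier ELECTRA sketch; this is a sketch, and the released preprocessing may differ in details.

```python
# Join the utterances of one conversation with the [SEP] token (sketch).
utterances = ["I am so scared to live in my neighborhood.",
              "Oh no, that sounds stressful. What happened?"]
text = " [SEP] ".join(utterances)
batch = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
```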

IV-B Evaluation Metrics

For the single-label setting (the EmpatheticDialogues dataset), we use top-1 accuracy, top-3 accuracy (henceforth referred to as top-1 and top-3 respectively) and the macro-F1 score. For the multi-label settings, we use the macro-F1 score, precision and recall. In addition, for Affect in Tweets we also report the Jaccard index, which was the primary evaluation metric of the SemEval-2018 E-c task.
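For reference, the multi-label metrics can be computed with scikit-learn as sketched below; the macro averaging for precision and recall and the binary prediction format are our assumptions.

```python
from sklearn.metrics import f1_score, jaccard_score, precision_score, recall_score

# y_true, y_pred: (num_samples, num_classes) binary indicator arrays (multi-label).
macro_f1 = f1_score(y_true, y_pred, average="macro")
precision = precision_score(y_true, y_pred, average="macro")
recall = recall_score(y_true, y_pred, average="macro")
# Per-sample Jaccard similarity, the primary SemEval-2018 E-c metric.
jaccard = jaccard_score(y_true, y_pred, average="samples")
```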

IV-C Lexicon features

All results in Table I were obtained using the NRC-VAD [28] (henceforth referred to as VAD) lexicon data. This knowledge source contains ratings of valence, arousal, and dominance for 20k words. The rating values range from 0 to 1, running from negative to positive (valence), calm to aroused (arousal), and submissive to dominant (dominance). To transform an input text into its corresponding valence, arousal, and dominance vectors, we replace each word in the utterances with the corresponding values from the lexicon; following previous work [50], words which do not appear in the lexicon are given the mid-value score of 0.5. The choice of knowledge source can be varied based on the task at hand; we show the efficacy of KEA-based methods with another knowledge source in Section V-B.
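A sketch of this transformation is shown below; the lexicon entry is illustrative rather than an actual NRC-VAD value, and the padding value is an assumption (the paper specifies 0.5 only for out-of-lexicon words).

```python
# word -> (valence, arousal, dominance); illustrative entry, not the real lexicon.
vad_lexicon = {"scared": (0.1, 0.8, 0.2)}

def text_to_vad(tokens, max_len=512):
    # Out-of-lexicon words receive the mid-value 0.5, following the paper.
    feats = [vad_lexicon.get(w.lower(), (0.5, 0.5, 0.5)) for w in tokens]
    feats += [(0.0, 0.0, 0.0)] * (max_len - len(feats))  # pad (value assumed)
    return feats[:max_len]
```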

IV-D Baselines

We compare the performance of our model with three main categories of baselines.

  • Models without pre-training: We compare with recurrent models such as (i) BiLSTM with self-attention [22], (ii) CNN+c-LSTM [34] and (iii) RCNN [20].

  • Pre-trained language models: ELECTRA [6] and BERT [12]. We obtain the pre-trained models bert-base-uncased and electra-base-discriminator from HuggingFace's Transformers library (https://huggingface.co/transformers/).

  • Knowledge-enhanced models: We compare with commonly used methods of incorporating knowledge into contextual representations:

    (i) Concat (sent): a simple concatenation of the emotional encoding obtained from the lexicon with $h_{CLS}$, which is then projected using dense layers to perform classification. This is the most straightforward way to incorporate knowledge [31, 2], and serves as the baseline for sentence-level knowledge incorporation. By contrast, our KEA includes an attention layer.

    (ii) Concat (word): similar to word-level KEA, this baseline concatenates knowledge to the contextual representation at the word level. The result is passed through a single-layer BiLSTM, similar to [10], with the hidden state dimension of the BiLSTM set to 384. This serves as the baseline for word-level knowledge incorporation.

    (iii) KET (Knowledge Enriched Transformers) [50], a knowledge-based dynamic graph attention model that enhances Transformers using VAD and ConceptNet [42] to detect emotion from conversation data.

  • In addition to the above models, we compare with state-of-the-art (SOTA) performance on all the datasets. For ED we choose the Attention Gated Hierarchical Memory Network (AGHMN) [16], which uses a hierarchical memory-network and GRU-based architecture to capture utterance-level emotions from conversations; for Affect in Tweets we compare with the performance of the best team in the SemEval-2018 E-c challenge [4]; and for GoEmotions we compare with the performance reported by the authors of the dataset [11].

As AGHMN and KET are designed primarily for conversations and require labels for each utterance, we report their performance only on the ED dataset, and we label every utterance in a conversation with the conversation-level label to make these models compatible with it.

| Model | ED top-1 / % | ED top-3 / % | ED F1 | AIT Jaccard | AIT Prec. | AIT Rec. | AIT F1 | GoE Prec. | GoE Rec. | GoE F1 |
|---|---|---|---|---|---|---|---|---|---|---|
| BiLSTM [22] | 35.8 (0.6) | 62.0 (0.8) | 35.9 (0.6) | 43.7 (1.0) | 41.8 (1.9) | 54.7 (2.2) | 46.7 (0.4) | 56.5 (2.6) | 39.3 (2.3) | 43.9 (1.0) |
| c-LSTM [34] | 37.9 (0.1) | 64.7 (0.3) | 37.5 (0.2) | 51.8 (0.5) | 45.3 (0.8) | 62.4 (0.6) | 51.0 (0.8) | 50.9 (1.7) | 27.2 (0.8) | 31.6 (1.0) |
| RCNN [20] | 43.0 (0.6) | 69.5 (0.4) | 43.2 (0.6) | 54.2 (0.5) | 46.7 (1.4) | 64.1 (1.9) | 53.5 (0.5) | 58.4 (1.0) | 37.5 (1.0) | 42.5 (0.6) |
| SOTA¹,²,³ | – | – | – | – | – | – | – | – | – | – |
| BERT | 51.9 (0.6) | 78.2 (0.5) | 50.7 (1.0) | 56.3 (0.8) | 54.2 (2.6) | 64.1 (3.8) | 57.7 (0.4) | 51.7 (1.9) | 49.5 (2.3) | 48.3 (1.5) |
| ELECTRA | 52.8 (0.5) | 78.7 (0.4) | 50.9 (0.7) | 57.6 (0.2) | 57.2 (1.7) | 61.2 (1.9) | 57.6 (1.2) | 47.4 (1.3) | 50.4 (1.7) | 47.5 (0.7) |
| KET [50] | 36.2 (–) | – | 34.9 (–) | – | – | – | – | – | – | – |
| Concat (word) | 48.1 (0.9) | 75.0 (1.5) | 45.6 (1.0) | 54.9 (1.3) | 39.7 (2.6) | 68.2 (2.5) | 49.5 (2.2) | 43.8 (2.7) | 44.8 (1.7) | 42.3 (0.9) |
| Concat (sent) | 52.1 (0.4) | 78.0 (0.5) | 50.3 (0.7) | 55.7 (1.8) | 47.3 (3.2) | 66.3 (1.7) | 54.3 (2.6) | 45.7 (1.4) | 48.2 (0.9) | 45.6 (0.9) |
| KEA-ELECTRA (word) | 53.6 (0.6) | 78.5 (0.8) | 52.5 (0.6) | 57.7 (0.8) | 50.8 (0.8) | 66.9 (1.1) | 57.1 (0.6) | 46.1 (1.8) | 50.2 (0.9) | 46.8 (0.7) |
| KEA-ELECTRA (sent) | 54.1 (0.6) | 80.5 (0.5) | 53.1 (0.7) | 58.3 (0.1) | 57.7 (1.4) | 61.9 (0.7) | 59.1 (0.3) | 48.6 (0.9) | 52.9 (0.6) | 49.6 (0.8) |

TABLE I: Summary of the results obtained on test data for EmpatheticDialogues (ED), Affect in Tweets (AIT) and GoEmotions (GoE). The SOTA row denotes state-of-the-art performance on the three datasets: ¹ is obtained using the AGHMN [16] model, which predicts emotion from textual conversations, and ² and ³ are taken from [4] and [11] respectively. Concat (sent/word) denote the sentence- and word-level concatenation baselines of Section IV-D, and KEA-ELECTRA (sent/word) the sentence- and word-level KEA variants. For ED, we use top-1 accuracy, top-3 accuracy and macro-F1. KET and AGHMN are designed for conversation, hence we consider their performance only for ED. For AIT and GoEmotions, we compare performance using precision, recall and macro-F1; for AIT we also report the Jaccard index. The metrics are averaged over 5 runs, with standard deviations reported in parentheses.

IV-E Implementation details

Input text was converted into tokens using WordPiece tokenization, followed by ELECTRA preprocessing. For fine-tuning, we use the Adam optimizer [18], and each input text in a batch is padded to the length of the longest text in that batch. We repeated each experiment with five random seeds and report the mean and standard deviation of performance over the 5 runs. For running our models, we used a Google Colaboratory instance equipped with an NVIDIA Tesla T4 GPU.

For fine-tuning KEA-based models, we chose the learning rate from a set of three candidate values and the batch size from the set {10, 16}. We used the Adam [18] optimiser with $\beta_1$ set to 0.9, $\beta_2$ set to 0.999, and $\epsilon$ set to 1e-08. Early stopping was based on top-1 accuracy on the validation set for the EmpatheticDialogues dataset, and on F1 score for the Affect in Tweets and GoEmotions datasets. For the Affect in Tweets dataset, the input tweets were preprocessed by removing elements such as non-ASCII characters, letter repetitions and extra white-space, and by replacing all user mentions and links with unique identifiers. We provide source code for all the implementations (https://github.com/varsha33/Fine-Grained-Emotion-Recognition).
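The tweet clean-up described above could look like the following; the regular expressions and placeholder identifiers are our guesses at the described steps, not the repository's exact patterns.

```python
import re

def preprocess_tweet(text):
    text = text.encode("ascii", "ignore").decode()   # remove non-ASCII characters
    text = re.sub(r"@\w+", "@user", text)            # map user mentions to one identifier
    text = re.sub(r"https?://\S+", "<url>", text)    # map links to one identifier
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)       # squash letter repetitions
    return re.sub(r"\s+", " ", text).strip()         # collapse extra white-space
```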

For the BiLSTM, c-LSTM and RCNN models, we used pre-trained GloVe vectors of dimension 200 as word embeddings. These models were trained with a batch size of 64 using the Adam optimiser, with the learning rate chosen from a set of three candidate values based on the option that yielded the best top-1 accuracy on the validation set for the EmpatheticDialogues dataset, and the best F1 score for the Affect in Tweets and GoEmotions datasets. For the comparisons with KET (https://github.com/zhongpeixiang/KET) and AGHMN (https://github.com/wxjiao/AGHMN), we used the implementations provided by the authors. For a fair comparison, we compared with these methods only on the EmpatheticDialogues dataset, applying the conversation-level label to every utterance in the conversation, as these methods perform utterance-level emotion recognition that takes into account the sequential nature of the conversation.

V Results and Discussion

| Model | ED top-1 / % | ED top-3 / % | ED F1 | AIT Jaccard | AIT Prec. | AIT Rec. | AIT F1 | GoE Prec. | GoE Rec. | GoE F1 |
|---|---|---|---|---|---|---|---|---|---|---|
| BERT | 51.9 (0.6) | 78.2 (0.5) | 50.7 (1.0) | 56.3 (0.8) | 54.2 (2.6) | 64.1 (3.8) | 57.7 (0.4) | 51.7 (1.9) | 49.5 (2.3) | 48.3 (1.5) |
| KEA-BERT (word) | 51.8 (0.3) | 77.5 (0.4) | 51.0 (0.3) | 56.9 (0.3) | 51.9 (1.3) | 66.1 (1.2) | 57.7 (0.5) | 46.2 (0.8) | 51.1 (0.8) | 47.2 (0.7) |
| KEA-BERT (sent) | 53.3 (0.4) | 79.3 (0.7) | 52.4 (0.5) | 57.0 (0.6) | 56.8 (1.5) | 61.3 (1.4) | 58.2 (0.2) | 51.4 (2.2) | 52.5 (0.6) | 51.0 (0.7) |

TABLE II: Generalization to other contextual encoders: summary of the results obtained on test data using another contextual encoder, BERT. The metrics are averaged over 5 runs, with standard deviations reported in parentheses.
| Model | Lexicon | ED top-1 / % | ED top-3 / % | ED F1 | AIT Jaccard | AIT Prec. | AIT Rec. | AIT F1 | GoE Prec. | GoE Rec. | GoE F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ELECTRA | – | 52.8 (0.5) | 78.7 (0.4) | 50.9 (0.7) | 57.6 (0.2) | 57.2 (1.7) | 61.2 (1.9) | 57.6 (1.2) | 47.4 (1.3) | 50.4 (1.7) | 47.5 (0.7) |
| Concat (sent) | EIL | 51.7 (0.6) | 77.7 (0.9) | 50.1 (0.8) | 55.7 (1.3) | 46.1 (3.4) | 60.9 (4.4) | 51.4 (2.3) | 46.4 (0.8) | 47.5 (1.0) | 45.0 (0.7) |
| KEA-ELECTRA (word) | EIL | 53.7 (0.6) | 79.2 (0.8) | 52.7 (0.7) | 57.7 (0.8) | 53.3 (3.6) | 65.1 (2.5) | 57.1 (1.7) | 44.9 (2.0) | 50.8 (1.2) | 46.5 (0.6) |
| KEA-ELECTRA (sent) | EIL | 54.0 (0.5) | 80.1 (0.8) | 53.0 (0.7) | 58.1 (0.4) | 52.8 (2.1) | 66.5 (2.6) | 58.1 (0.2) | 47.5 (0.8) | 53.0 (0.5) | 49.2 (0.8) |

TABLE III: Generalization to other lexicons: summary of the results obtained on test data while using the NRC Emotion Intensity Lexicon (EIL). The metrics are averaged over 5 runs, with standard deviations reported in parentheses.

We compared our proposed KEA, using ELECTRA as the base language model, against the representative baselines described in Section IV-D. We found that KEA-infused models improve performance on fine-grained emotion classification for each of the three datasets, which span diverse types of input text (i.e., tweets, Reddit comments and emotionally-grounded conversation data). This indicates that enhancing contextual representations using KEA helps encode complex emotions. Overall, sentence-level KEA, i.e., KEA-ELECTRA (sent), performs the best on most of the evaluation metrics. We note that word-level KEA, i.e., KEA-ELECTRA (word), offers only marginal improvement over the baseline approaches compared to sentence-level KEA. In addition, it is interesting to note that word-level knowledge incorporation, in both Concat (word) and KEA-ELECTRA (word), decreases performance relative to their sentence-level counterparts, Concat (sent) and KEA-ELECTRA (sent), respectively.

V-A Generalizing to other contextual encoders

To understand whether the proposed method extends to other pre-trained models, we evaluate our approach using BERT [12] as the contextual encoder. Table II shows that word-level KEA does not exhibit improved performance over BERT. On the other hand, sentence-level KEA-BERT outperforms BERT on all the datasets on almost all of the metrics, indicating that sentence-level KEA is a more generalizable way of incorporating knowledge into contextual encoders. One possible reason for the low generalizability of word-level knowledge incorporation is that the tasks at hand have sentence-level text inputs; sentence-level KEA, which extracts global information by taking the entire input into consideration, therefore encodes the emotional information in the input more effectively. However, it would be interesting in future work to understand how word-level knowledge could still be incorporated to bolster tasks that depend on local information, such as word-level sentiment analysis.

V-B Extension to other knowledge sources

To further investigate the versatility of KEA in integrating external knowledge into pre-trained language models, we made use of another lexicon, the NRC Emotion Intensity Lexicon (NRC-EIL) [27] (henceforth referred to as EIL). This lexicon provides real-valued intensity scores for eight basic emotions (anger, anticipation, disgust, fear, joy, sadness, surprise, and trust) for 10k English words. In this analysis, we compare ELECTRA, which has no added emotional knowledge, against the three knowledge-incorporating models: Concat (sent), which concatenates the knowledge embedding at the end; KEA-ELECTRA (word), which adds knowledge at the word level; and KEA-ELECTRA (sent), which adds knowledge at the sentence level.

We can see from Table III that KEA-ELECTRA (word) only marginally improves performance over ELECTRA, whereas KEA-ELECTRA (sent) outperforms both ELECTRA and KEA-ELECTRA (word), incorporating emotional knowledge more effectively to help recognise the complex set of emotions. Interestingly, while the direct concatenation of the contextual representation and emotional knowledge (i.e., Concat (sent)) performed comparably in the case of VAD (Table I), incorporating EIL instead decreased the performance by a large margin. This highlights that incorporating knowledge in the right manner is key to preserving the rich information learnt by the pre-trained models, and we can see that KEA performs consistently across knowledge sources.

V-C Case Study: Fine-grained Emotion Recognition

In this case study and error analysis, we use the EmpatheticDialogues dataset to delve deeper into model behaviour. We classified the majority of misclassifications into two categories: C1, emotions with differing intensities, such as {annoyed, angry, furious}, {afraid, terrified} and {joyful, excited}; and C2, emotions with nuanced differences, such as {nostalgic, sentimental}, {embarrassed, ashamed} and {impressed, proud}. These label sets are challenging because the emotions within each set are similar, making them difficult for the model to distinguish. In addition, inter-speaker differences in how emotions are identified [19] could increase the difficulty. Table IV depicts two similar conversations, where Speaker A labelled feeling terrified while Speaker B labelled feeling afraid (despite actually using the word “terrified”).

| Speaker | Conversation snippet | Label |
|---|---|---|
| A | “I am so scared to live in my neighborhood … There are people that come around shooting their guns…..” | Terrified |
| B | “I was terrified to walk home from the bar one night … There were gunshots nearby so I just ran home as fast as I could …” | Afraid |

TABLE IV: Two examples from the EmpatheticDialogues dataset depicting the variation observed amongst speakers for similar contexts.

Fig. 2: Excerpts from the confusion matrices comparing ELECTRA and the KEA-based model on the two categories of misclassifications. Top row: (afraid, terrified). ELECTRA tends to classify both as terrified, while the KEA-based model shows a marked improvement in classifying afraid, though at the cost of some correct classifications of terrified. Bottom row: (nostalgic, sentimental). The KEA improvement here is marginal.

Next, we turn to how KEA improved performance, by comparing ELECTRA and the KEA-based model in Figure 2. We show snippets from the confusion matrices of both models to compare misclassifications amongst problematic label sets. In the C1 example {afraid, terrified}, the KEA-based model fares better on both emotion classes, reducing misclassification between them. For the C2 example {nostalgic, sentimental}, there is not much improvement; this can be attributed to the interchangeable usage of these emotion labels in conversational language. We provide more comparisons and the full confusion matrices in the Supplementary material.

V-D Limitations and Future Work

Although incorporating external emotional knowledge into pre-trained language models via KEA improves fine-grained emotion recognition, a number of challenges remain. First, the fine-grained nature of the emotion classes has not been explicitly encoded into our model architecture. Developing architectures that inherently capture the subtleties amongst emotion classes could yield better representations and, in turn, improved emotion recognition performance. Second, the inter-individual variability that exists in expressing emotions is a limitation, as seen in Table IV, where speakers use different labels for similar contexts. Modelling this variability is a highly challenging task; a potential solution could involve actively fine-tuning emotion recognition models to specific users. Third, while we have shown the efficacy of our model using two knowledge sources, these sources are similar in nature: both are emotion lexicons, where the “knowledge” is represented using real-valued numbers. Future work could explore how KEA can be extended to incorporate different types of knowledge sources, such as knowledge graphs, categorical data, and relational knowledge [37], and delve deeper into the effect that the kind of knowledge has on recognising different types of emotions. Another promising direction is the use of external knowledge in few-shot learning scenarios: knowledge sources contain information about emotion labels outside the current task, which could help a model trained with external knowledge learn unseen labels from fewer data samples. This work is a step towards equipping deep-learning models to recognise a larger number of emotions, and in future work we aim to address the above-mentioned challenges.

V-E Ethics Statement

Finally, we want to end on a note about ethical affective computing. At a broader level, emotion recognition technology has come under increasing scrutiny, due to two sets of factors: (i) increasing awareness of the limitations of technology in accurately “understanding” human emotions (e.g., see [3] for limitations with facial expressions), and (ii) the deployment of such technology in applications that directly impact people [9]. Our work does not speak to (ii), but it does directly address (i), in that the motivation of our paper includes increasing the scope of text emotion classification models beyond six emotions. As we highlighted in the introduction, AI models today are trained on too few emotions, and this severely limits the scientific validity of these models, as well as confidence in their deployment in real-life scenarios. We hope that our work, sustained over time and together with other researchers in the field, will strengthen the confidence people have in the validity of such emotion recognition technology. This will be part of an ongoing conversation to improve our technology and alleviate some of the concerns surrounding its development and deployment [29].

VI Conclusion

In this work, we propose using KEA (Knowledge-Embedded Attention) to incorporate emotional knowledge from external sources (such as emotion lexicons) into the contextual representations provided by pre-trained language models like ELECTRA. Across our analyses with different contextual encoders (BERT) and other knowledge sources, we find that sentence-level KEA performs well across the three datasets we considered (tweets, Reddit posts, and online conversations) and reduces misclassification of several commonly confusable sets of emotions. This work provides a strong example of how we can make our AI more emotionally intelligent. If we want our AI to be sensitive, and to know when to offer condolences versus when to play an upbeat song, we need to train it to handle more complex and fine-grained emotions, while at the same time modelling, and being sensitive to, psychological nuances such as inter-individual variation in emotion experience and expression [30, 40, 8].

Acknowledgment

This research was done on publicly available datasets.

References

  • [1] N. Alswaidan and M. E. B. Menai (2020-08) A survey of state-of-the-art approaches for emotion recognition in text. Knowl. Inf. Syst. 62 (8), pp. 2937–2987. External Links: ISSN 0219-3116, Document Cited by: §I, §II-A.
  • [2] N. Babanejad, H. Davoudi, A. An, and M. Papagelis (2020-12) Affective and contextual embedding for sarcasm detection. In Proceedings of the 28th International Conference on Computational Linguistics, pp. 225–243. External Links: Document Cited by: §II-C, 3rd item.
  • [3] L. F. Barrett, R. Adolphs, S. Marsella, A. M. Martinez, and S. D. Pollak (2019) Emotional expressions reconsidered: challenges to inferring emotion from human facial movements. Psychological Science in the Public Interest 20 (1), pp. 1–68. External Links: Document Cited by: §V-E.
  • [4] C. Baziotis, A. Nikolaos, A. Chronopoulou, A. Kolovou, G. Paraskevopoulos, N. Ellinas, S. Narayanan, and A. Potamianos (2018) NTUA-SLP at SemEval-2018 task 1: predicting affective content in tweets with deep attentive RNNs and transfer learning. In Proceedings of The 12th International Workshop on Semantic Evaluation, pp. 245–255. Cited by: 4th item, TABLE I.
  • [5] W. Chen, Y. Su, X. Yan, and W. Y. Wang (2020) KGPT: knowledge-grounded pre-training for data-to-text generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 8635–8648. External Links: Link Cited by: §II-C.
  • [6] K. Clark, M. Luong, Q. V. Le, and C. D. Manning (2020) ELECTRA: pre-training text encoders as discriminators rather than generators. In International Conference on Learning Representations, Cited by: §I, §II-A, §III-A, 2nd item.
  • [7] N. Colnerič and J. Demšar (2020) Emotion recognition on twitter: comparative study and training a unison model. IEEE Transactions on Affective Computing 11 (3), pp. 433–446. External Links: Document Cited by: §II-A.
  • [8] A. S. Cowen and D. Keltner (2017) Self-report captures 27 distinct categories of emotion bridged by continuous gradients. Proceedings of the National Academy of Sciences 114 (38), pp. E7900–E7909. External Links: Document, ISSN 0027-8424 Cited by: §I, §VI.
  • [9] K. Crawford, R. Dobbe, T. Dryer, G. Fried, B. Green, E. Kaziunas, A. Kak, V. Mathur, E. McElroy, A. N. Sánchez, et al. (2019) AI Now 2019 report. New York, NY: AI Now Institute. Cited by: §V-E.
  • [10] L. De Bruyne, P. Atanasova, and I. Augenstein (2019) Joint emotion label space modelling for affect lexica. arXiv:1911.08782. Cited by: §II-C, §II-C, 3rd item.
  • [11] D. Demszky, D. Movshovitz-Attias, J. Ko, A. Cowen, G. Nemade, and S. Ravi (2020-07) GoEmotions: a dataset of fine-grained emotions. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, External Links: Document Cited by: §I, §II-A, §II-B, §III-A, 2nd item, 4th item, TABLE I.
  • [12] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2019-06) BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 4171–4186. External Links: Document Cited by: §I, §II-A, §III-A, 2nd item, §V-A.
  • [13] P. Ekman (1999) Basic emotions. John Wiley & Sons, Ltd. External Links: ISBN 9780470013496, Document Cited by: §II-B.
  • [14] M. Huang, X. Zhu, and J. Gao (2020-04) Challenges in building intelligent open-domain dialog systems. ACM Trans. Inf. Syst. 38 (3). External Links: ISSN 1046-8188 Cited by: §I.
  • [15] Y. Huang, S. Lee, M. Ma, Y. Chen, Y. Yu, and Y. Chen (2019) EmotionX-idea: emotion bert–an affectional model for conversation. arXiv:1908.06264. Cited by: §II-A.
  • [16] W. Jiao, M. Lyu, and I. King (2020) Real-time emotion recognition via attention gated hierarchical memory network. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 8002–8009. Cited by: 4th item, TABLE I.
  • [17] P. Ke, H. Ji, S. Liu, X. Zhu, and M. Huang (2020-11) SentiLARE: sentiment-aware language representation learning with linguistic knowledge. In Proceedings of the 2020 Conference on Empirical Methods in NLP (EMNLP), pp. 6975–6988. External Links: Document Cited by: §II-C.
  • [18] D. P. Kingma and J. Ba (2015) Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, Cited by: §IV-E, §IV-E.
  • [19] M. Kotti and F. Paternò (2012) Speaker-independent emotion recognition exploiting a psychologically-inspired binary cascade classification schema. International Journal of Speech Technology, pp. 131–150. External Links: Document Cited by: §V-C.
  • [20] S. Lai, L. Xu, K. Liu, and J. Zhao (2015) Recurrent convolutional neural networks for text classification. AAAI Conference on Artificial Intelligence. Cited by: 1st item, TABLE I.
  • [21] Y. Levine, B. Lenz, O. Dagan, O. Ram, D. Padnos, O. Sharir, S. Shalev-Shwartz, A. Shashua, and Y. Shoham (2020-07) SenseBERT: driving some sense into BERT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 4656–4667. External Links: Document Cited by: §II-C.
  • [22] Z. Lin, M. Feng, C. N. d. Santos, M. Yu, B. Xiang, B. Zhou, and Y. Bengio (2017) A structured self-attentive sentence embedding. arXiv:1703.03130. Cited by: 1st item, TABLE I.
  • [23] Y. Liu, Y. Wan, L. He, H. Peng, and S. Y. Philip (2021) KG-BART: knowledge graph-augmented BART for generative commonsense reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35, pp. 6418–6425. Cited by: §II-C.
  • [24] Y. Ma, H. Peng, T. Khan, E. Cambria, and A. Hussain (2018) Sentic lstm: a hybrid network for targeted aspect-based sentiment analysis. Cognitive Computation 10 (4), pp. 639–650. Cited by: §II-C.
  • [25] K. Margatina, C. Baziotis, and A. Potamianos (2019-07) Attention-based conditioning methods for external knowledge integration. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 3944–3951. External Links: Document Cited by: §II-C.
  • [26] S. Mohammad and S. Kiritchenko (2018-05) Understanding emotions: a dataset of tweets to study interactions between affect categories. In Proceedings of the 11th International Conference on Language Resources and Evaluation, Cited by: 3rd item.
  • [27] S. M. Mohammad (2018) Word affect intensities. In Proceedings of the 11th Ed. of the Language Resources and Evaluation Conference, Cited by: §II-C, §V-B.
  • [28] S. Mohammad (2018-07) Obtaining reliable human ratings of valence, arousal, and dominance for 20,000 English words. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Melbourne, Australia, pp. 174–184. External Links: Document Cited by: §II-C, §IV-C.
  • [29] D. C. Ong (2021) An ethical framework for guiding the development of affectively-aware artificial intelligence. In 2021 9th International Conference on Affective Computing and Intelligent Interaction (ACII), Cited by: §V-E.
  • [30] D. Ong, Z. Wu, Z. Tan, M. Reddan, I. Kahhale, A. Mattek, and J. Zaki (2019) Modeling emotion in complex stories: the Stanford Emotional Narratives Dataset. IEEE Trans. on Affective Comput.. External Links: Document Cited by: §VI.
  • [31] M. Ostendorff, P. Bourgonje, M. Berger, J. Moreno-Schneider, G. Rehm, and B. Gipp (2019) Enriching bert with knowledge graph embeddings for document classification. arXiv:1909.08402. Cited by: §II-C, 3rd item.
  • [32] R. Plutchik (2001) The nature of emotions. American Scientist 89 (4), pp. 344–350. External Links: ISSN 00030996 Cited by: §II-B.
  • [33] N. Poerner, U. Waltinger, and H. Schütze (2020-11) E-BERT: efficient-yet-effective entity embeddings for BERT. In Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 803–818. External Links: Document Cited by: §II-C.
  • [34] S. Poria, E. Cambria, D. Hazarika, N. Majumder, A. Zadeh, and L. Morency (2017) Context-dependent sentiment analysis in user-generated videos. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pp. 873–883. External Links: Document Cited by: 1st item, TABLE I.
  • [35] S. Poria, N. Majumder, R. Mihalcea, and E. Hovy (2019) Emotion recognition in conversation: research challenges, datasets, and recent advances. IEEE Access 7, pp. 100943–100953. Cited by: §I, §II-A.
  • [36] H. Rashkin, E. M. Smith, M. Li, and Y. Boureau (2019-07) Towards empathetic open-domain conversation models: a new benchmark and dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 5370–5381. External Links: Document Cited by: §I, §II-A, §II-B, 1st item.
  • [37] A. Roy and S. Pan (2020) Incorporating extra knowledge to enhance word embedding. In Proceedings of the 29th International Joint Conference on Artificial Intelligence, IJCAI-20, pp. 4929–4935. External Links: Document Cited by: §I, §II-C, §V-D.
  • [38] K. R. Scherer and B. Meuleman (2013-03) Human emotion experiences can be predicted on theoretical grounds: evidence from verbal labeling. PLOS ONE 8 (3), pp. 1–8. External Links: Document Cited by: §I.
  • [39] B. Shin, T. Lee, and J. D. Choi (2017) Lexicon integrated CNN models with attention for sentiment analysis. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pp. 149–158. Cited by: §I, §II-C.
  • [40] E. H. Siegel, M. K. Sands, W. Van den Noortgate, P. Condon, Y. Chang, J. Dy, K. S. Quigley, and L. F. Barrett (2018) Emotion fingerprints or emotion populations? A meta-analytic investigation of autonomic features of emotion categories. Psychological Bulletin 144 (4), pp. 343—393. External Links: Document, ISSN 0033-2909 Cited by: §VI.
  • [41] A. E. Skerry, R. Saxe, A. E. Skerry, and R. Saxe (2015) Neural representations of emotion are organized around abstract event features. Curr. Biology 25 (15), pp. 1945–54. External Links: Document Cited by: §I.
  • [42] R. Speer, J. Chin, and C. Havasi (2017) ConceptNet 5.5: an open multilingual graph of general knowledge. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, pp. 4444–4451. Cited by: 3rd item.
  • [43] M. Su, C. Wu, K. Huang, and Q. Hong (2018) LSTM-based text emotion recognition using semantic and emotional word vectors. In 2018 First Asian Conference on Affective Comput. and Intell. Interaction (ACII Asia), Vol. , pp. 1–6. External Links: Document Cited by: §II-A.
  • [44] H. Tian, C. Gao, X. Xiao, H. Liu, B. He, H. Wu, H. Wang, and F. Wu (2020-07) SKEP: sentiment knowledge enhanced pre-training for sentiment analysis. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 4067–4076. External Links: Document Cited by: §II-C.
  • [45] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in Neural Inf. Processing Sys., I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), Vol. 30, pp. 5998–6008. Cited by: §III-A.
  • [46] R. Wang, D. Tang, N. Duan, Z. Wei, X. Huang, J. Ji, G. Cao, D. Jiang, and M. Zhou (2020) K-Adapter: infusing knowledge into pre-trained models with adapters. arXiv:2002.01808. Cited by: §II-C.
  • [47] Z. Wu and D. C. Ong (2021) Context-guided BERT for targeted aspect-based sentiment analysis. In Proceedings of the 35th AAAI Conference on Artificial Intelligence, Cited by: §II-C.
  • [48] Z. Wu, X. Zhang, T. Zhi-Xuan, J. Zaki, and D. C. Ong (2019) Attending to emotional narratives. In 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII), pp. 648–654. Cited by: §II-A.
  • [49] Z. Zhang, X. Han, Z. Liu, X. Jiang, M. Sun, and Q. Liu (2019-07) ERNIE: enhanced language representation with informative entities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 1441–1451. External Links: Document Cited by: §II-C.
  • [50] P. Zhong, D. Wang, and C. Miao (2019) Knowledge-enriched transformer for emotion detection in textual conversations. In Proceedings of the 2019 Conference on Empirical Methods in NLP and the 9th Intl. Joint Conference on NLP (EMNLP-IJCNLP), pp. 165–176. Cited by: §II-C, 3rd item, §IV-C, TABLE I.