Emotion Action Detection and Emotion Inference: the Task and Dataset

03/16/2019 · Pengyuan Liu, et al. · Peking University; NetEase, Inc.

Many Natural Language Processing studies on emotion analysis focus only on simple emotion classification, without exploring the potential of putting emotion into an "event context", and ignore the analysis of emotion-related events. One main reason is the lack of such a corpus. Here we present the Cause-Emotion-Action Corpus, which manually annotates not only emotions but also cause events and action events. We propose two new tasks based on the dataset: emotion causality and emotion inference. The first task is to extract a triple (cause, emotion, action). The second task is to infer the probable emotion. We are currently releasing the dataset with 10,603 samples and 16,093 events, basic statistical analysis, and baselines for both the emotion causality and emotion inference tasks. The baseline performance shows that there is much room for improvement on both tasks.


1 Introduction

Understanding a text, especially a narrative involving people's emotions, requires analyzing it from multiple aspects and usually requires commonsense reasoning. For example, given a text with an emotion and its corresponding events, "After listening to what I said, the teacher was happy and then joked with me.", one can easily analyze the causality: "listening to what I said" can be regarded as the cause that leads to the emotion happy, and "joked with me" can be seen as the result or consequence (we name it the action) caused by happy. In fact, the triple ("listening to what I said", "happy", "joked with me") comprises a three-part cause-effect chain (we name it the CEA: Cause-Emotion-Action relation) in which "happy" is the intermediate variable. On the other hand, to infer the emotion given only the cause "listening to what I said", we can hardly tell whether the teacher's emotion is happiness or anger. But if we know the action "joked with me", which is caused by the teacher's emotion, we can infer that the teacher feels happy rather than angry. Actions, as new emotion knowledge or commonsense, can help infer emotions.

Figure 1: Emotion Causality and Emotion Inference

We present CEAC, a corpus that manually annotates not only emotions but also emotion cause events and emotion action events. Based on CEAC, we introduce the emotion causality and emotion inference tasks. The first task is to extract a triple of cause event, emotion and action event, and the second is to infer the emotion given cause and action events (see Fig. 1). Our work is inspired by but quite different from previous research. For instance, Lee et al. (2010) built a corpus, annotated emotion causes and proposed an emotion cause detection task; Gui et al. (2016) applied this method to Weibo but did not propose a new task; Cheng Xiyao (2017) focused on current/original-subtweet-based emotion detection and annotated a multiple-user structure; Deng et al. (2013) and Ding and Riloff (2016) introduced benefactive/malefactive events and defined affective events, respectively.

Causal relations, or causality, are fundamental in many disciplines, including philosophy, psychology and linguistics. As one kind of event causality, emotion causality is also critical knowledge for many NLP applications, including machine reading and comprehension Richardson et al. (2013), process extraction Scaria et al. (2013), and especially future event/scenario prediction Radinsky et al. (2012). Knowing the existence of an emotion is often insufficient to predict future events or determine the best reaction Chen et al. (2010), whereas if the cause and action corresponding to the emotion are known, predicting future events or assessing potential intent can be done more reliably. Furthermore, emotion inference and emotion causality are useful for a wide range of NLP applications that require anticipation of people's emotional reactions and intents, especially when they are not explicitly mentioned. For example, an ideal dialogue system should react in empathetic ways by reasoning about the human user's mental state based on the events the user has experienced, without the user explicitly stating how they are feeling Rashkin et al. (2018). Advertisement systems on social media should be able to reason about people's emotional reactions after events such as mass shootings and remove ads for guns which might increase social distress (Goel and Isaac, https://www.nytimes.com/2016/01/30/technology/facebook-gun-sales-ban.html; Rashkin et al. 2018). Also, as one kind of pragmatic inference, emotion inference is a necessary step toward automatic narrative understanding and generation Tomai and Forbus (2010); Ding and Riloff (2016, 2018).

Our contribution in this paper is threefold: 1) we define emotion action and put it into emotion causality so that cause, emotion and action comprise an integral cause-effect chain; 2) we define and investigate the emotion causality and emotion inference tasks to bridge the gap between the study of emotion causes, affective events and commonsense inference; 3) we manually label a large-scale corpus containing not only emotions but also emotion cause events and action events.

2 Construction of CEAC

2.1 Term Definition

Emotion is the interrelated, synchronized changes in the states of all or most of the organismic subsystems in response to the evaluation of an external or internal stimulus event as relevant to major concerns of the organism Scherer (2005).

Experiencer is the person or sentient entity who has a particular emotional state Fillmore et al. (2003).

Emotion cause refers to the event that evokes the emotional response in the Experiencer. Our definition is similar to that of Lee et al. (2010), who called it the cause event and regarded it as the immediate cause of the emotion, which can be the actual trigger event or the perception of the trigger event; we think this definition ignores the experiencer. Emotion cause is also similar to the emotion-provoking event of Tokuhisa et al. (2008), though they did not give a clear definition, and to the emotion stimulus of Fillmore et al. (2003), but we limit the cause to an event.

Emotion action refers to the event carried out by the Experiencer that reflects his or her emotional state or emotion change. We use emotion action rather than emotion expression Charles et al. (1872) because the latter focuses on the facial expressions, behavioral responses and physical responses of the experiencer, whereas we care more about the event.

2.2 Data Collection

2.2.1 Taxonomy of emotion.

We adopt Ekman's emotion classification Ekman (1992), which identifies six primary emotions: happiness, sadness, fear, anger, disgust and surprise. This list is agreed upon by most previous work, including work on Chinese emotion analysis, so using it facilitates resource sharing.

2.2.2 Emotion keywords.

We plan to construct CEAC in two stages. The first stage, which is the work of this paper, is to build about 10,000 instances with representative emotion keywords. The second stage, in the future, is to build about 40,000 instances with all the words from existing Chinese emotional dictionaries. Here we introduce the steps for selecting representative emotion keywords in the first stage.

In Scherer's component processing model of emotion, five crucial elements of emotion are said to exist, of which feeling is the subjective experience of the emotional state once it has occurred Scherer (2005). People can thus feel emotions, and emotion keywords that fit the pattern "感到"/"feel" + emotion word (unlike English, most frequently used Chinese emotion words fit this pattern) are more representative in text. The steps for choosing emotion keywords are as follows (a code sketch of the procedure appears after the list):

  1. Find the intersection emotion keyword set among three Chinese emotion dictionaries: the emotion list of HowNet (http://www.keenage.com), the emotional_word_ontology (http://ir.dlut.edu.cn/EmotionOntologyDownload) and NTUSD (http://academiasinicanlplab.github.io/). Single-character Chinese words are excluded to avoid strong sense ambiguity.

  2. For every word in that set, count the frequency of the 2-gram "感到"/"feel" + emotion word, such as "感到高兴"/"feel happy".

  3. Choose the top-5 most frequent 2-grams for each emotion category and delete the word "感到"/"feel" to obtain the emotion keywords.
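The following Python sketch mirrors the three selection steps; the lexicon file format, corpus iterator and word-to-category mapping are illustrative assumptions rather than the authors' released code.

```python
from collections import Counter

def load_lexicon(path):
    """One emotion word per line; drop single-character words to avoid ambiguity."""
    with open(path, encoding="utf-8") as f:
        return {w.strip() for w in f if len(w.strip()) > 1}

def select_keywords(lexicon_paths, category_words, sentences, top_k=5):
    # Step 1: intersect the three emotion dictionaries.
    candidates = set.intersection(*(load_lexicon(p) for p in lexicon_paths))
    # Step 2: count the 2-gram "感到" ("feel") + emotion word over the corpus.
    counts = Counter()
    for sent in sentences:
        for word in candidates:
            counts[word] += sent.count("感到" + word)
    # Step 3: keep the top-k most frequent words per emotion category
    # (category_words maps each category to its candidate word set).
    keywords = {}
    for category, words in category_words.items():
        ranked = sorted(words & candidates, key=counts.__getitem__, reverse=True)
        keywords[category] = ranked[:top_k]
    return keywords
```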

Finally, 30 emotion keywords are selected, as shown in Table 1 below, with each row of Chinese keywords followed by its English translations.

Emotion category Emotion keywords
Happiness 快乐、高兴、欢乐、开心、愉快
Happy, pleased, joyful, cheerful, merry
Sadness 难过、悲伤、伤心、悲痛、痛心
Sad, sorrowful, grieved, distressed, pained
Anger 愤怒、生气、气愤、恼火、恼怒
Angry, annoy, indignant, furious, irritated
Fear 害怕、恐惧、恐慌、畏惧、提心吊胆
Fear, afraid, scare, dread, frightened
Disgust 讨厌、仇恨、厌恶、痛恨、怨恨
Disgust, hatred, detest, abhor, grudge
Surprise 惊讶、震惊、大吃一惊、惊奇、难以置信
Surprised, shocked, astonished, amazed, unbelievable
Table 1: Emotion category and emotion keywords

2.2.3 Data source.

Our data source is the National Language Resources Dynamic Circulation Corpus (DCC) 2005-2015 (https://dcc.blcu.edu.cn). News text is relatively formal and complete, so causes and actions are more likely to appear in the same news text.

2.2.4 Extraction.

We extract passages containing emotion keywords from DCC. In addition to the sentence containing the emotion keyword, three preceding clauses and three following clauses are kept as context. Not all extracted passages meet our requirements at this first stage, so we remove sentences that: 1) are non-emotional; 2) have no experiencer; 3) have neither emotion causes nor emotion actions; 4) contain two or more emotion keywords. A sketch of this extraction step follows.
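A minimal sketch of the extraction step is given below; the clause-splitting rule and data structures are our assumptions, not the authors' exact preprocessing code.

```python
import re

# Chinese clause delimiters used for splitting; an illustrative assumption.
CLAUSE_DELIMS = r"[，。！？；]"

def extract_passages(documents, keywords, window=3):
    """Keep the clause with an emotion keyword plus 3 clauses of context on each side."""
    passages = []
    for doc in documents:
        clauses = [c for c in re.split(CLAUSE_DELIMS, doc) if c.strip()]
        for i, clause in enumerate(clauses):
            if any(kw in clause for kw in keywords):
                start, end = max(0, i - window), min(len(clauses), i + window + 1)
                passages.append(clauses[start:end])
    return passages
```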

2.3 Annotation scheme

The annotation format is the W3C Emotion Markup Language (EmotionML) format, with slight changes for our task. The basic XML tags mark: 1) the emotion cause; 2) the emotion keyword; 3) the emotion action; 4) the experiencer.

One emotion may have more than one corresponding emotion cause or action, so the cause and action tags have an "id" attribute to number the causes and actions. There are two types of causes, noun/noun phrase and verb/verb phrase, so the cause tag also has a "type" attribute marking the cause type. Figure 2 shows two examples from the corpus, presented in the original simplified Chinese followed by their English translations; a hypothetical annotated example is also sketched after Figure 2.

Figure 2: Examples of annotated sentences
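Since the literal tag names are not reproduced here, the following parsing sketch uses hypothetical tag and attribute names (cause, emotion, action, experiencer, id, type) consistent with the scheme described above, applied to the introduction's example sentence; it is illustrative only, not the corpus's actual markup.

```python
import xml.etree.ElementTree as ET

# Hypothetical annotated sentence: tag and attribute names are assumptions
# consistent with the scheme above, not the corpus's literal markup.
sample = (
    '<sentence>'
    '<cause id="1" type="verb">听了我的话</cause>后，'
    '<experiencer>老师</experiencer>感到<emotion>高兴</emotion>，'
    '<action id="1">跟我开起了玩笑</action>。'
    '</sentence>'
)

root = ET.fromstring(sample)
causes = [(c.get("id"), c.get("type"), c.text) for c in root.iter("cause")]
actions = [(a.get("id"), a.text) for a in root.iter("action")]
emotion = next(root.iter("emotion")).text
experiencer = next(root.iter("experiencer")).text
print(experiencer, emotion, causes, actions)
```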

Annotation Procedure. For each emotion keyword in each emotional sentence, two annotators independently annotate the cause(s), action(s) and experiencer. For each inconsistent sentence, a third annotator is involved as the arbitrator.

In order to balance the number of instances in each emotion category and for each emotion keyword, we set an upper limit of about 1,700 instances per category and about 300 instances per keyword. Finally, we obtain 10,603 annotated sentences. Table 2 shows the distribution of CEAC sentences over the categories.

3 Statistics and Analysis

3.1 Data Distribution.

Emotion category Sentence number
Happiness 1773
Fear 1748
Sadness 1805
Disgust 1785
Anger 1688
Surprise 1804
Table 2: The number of sentences in each category

In CEAC, some sentences contain only causes, some contain only actions, and some contain both. Table 3 shows the number of sentences of each type: about 77% of sentences contain only causes, and 80.0% of clauses contain causes. Very few sentences contain only actions, and very few clauses contain both causes and actions. Figure 3 gives an example of each of these types.

Figure 3: Example clauses containing only actions and containing both causes and actions
Item Sentence Clause
Just cause 8167 (77.0%) 12782 (80.0%)
Just action 230 (2.2%) 3089 (19.3%)
Cause & action 2206 (20.8%) 111 (0.7%)
Total 10603 15982
Table 3: Distribution of sentence types

Table 4 (there are 111 clauses containing both a cause and an action, so the number of clauses is less than the number of events, 16,093) shows the distribution of cause positions and action positions. Emotion causes mostly appear before the emotion keyword, while emotion actions mostly appear after it. This is because news texts emphasize narrative integrity and logic; chronological narration conforms to human thinking habits, so describing the cause first and then the result is the usual narrative order.

Position Cause (%) Action (%)
Previous 3 clauses 556 (4.3) 32 (1.0)
Previous 2 clauses 1404 (10.9) 40 (1.3)
Previous 1 clause 4554 (35.3) 96 (3.0)
Same clause 3897 (30.2) 782 (24.4)
Next 1 clause 1172 (9.1) 1335 (41.7)
Next 2 clauses 572 (4.5) 521 (16.3)
Next 3 clauses 300 (2.3) 229 (7.2)
Other 438 (3.4) 165 (5.1)
Total 12893 3200
Table 4: Distribution of cause position and action position

In the distribution of emotion cause types, verbal causes account for 75.2%, as shown in Table 5. In addition, we find that all emotion actions are verbs or verb phrases.

Cause type Number Percent
Noun/noun phrase 3202 24.8%
Verb/verb phrase 9691 75.2%
Table 5: Distribution of cause type

Agreement. To obtain high-quality annotations, we trained the annotators strictly before annotation and allowed them to discard difficult sentences. We use the same inter-annotator agreement method as Gui et al. (2016) and reach a Kappa value of 0.8201 at the clause level, which is lower than Gui et al. (2016) because we label emotion actions in addition to emotion causes.

Inconsistency analysis. We also analyzed the inconsistent sentences and found that the following situations may lead to inconsistent results. In the examples, wrong annotations are marked with "*" and correct annotations with "#".

1) A precondition of the emotion is incorrectly annotated as the emotion cause.

EX.5:(郭平原言自家*受皇帝旌表*,# 不能报答 #,因而悲伤。

Guo Pingyuan said that *he was blessed by the emperor*, but # he could not repay #, so he was sad.)

“He could not repay” is the reason why he feels sad. Although “he was blessed by the emperor” is the premise of the sadness, there is no causal relationship between them, so we do not regard it as the emotion cause.

2) Actions that are contrary to the emotion are incorrectly annotated as emotion actions.

EX.8: (这些痛失亲人的战友们,依然忍着悲伤,*继续战斗在抗震救灾的第一线*。

These comrades, who lost their loved ones, still endured their sorrow and *kept fighting on the front line of earthquake relief*.)

“Kept fighting on the front line of earthquake relief” is not an action caused by the sorrow but by suppressing the sorrow, so it is contrary to the sorrow.

3) Actions that merely co-occur with the emotion may be mistakenly annotated as emotion actions.

EX.9:( 一般民众在痛恨不良商家非法添加的同时,*越来越关注其国家标准允许的各种添加剂所带来的可能的危害了*。

While the general public hates the illegal additives used by unscrupulous businesses, they *are paying more and more attention to the possible harm caused by the various additives permitted by national standards*.)

Because “hate” and “paying more and more attention to the possible harm” occur simultaneously rather than causally, the latter is not annotated as an emotion action.

4 Task

4.1 Task Definition

Cause-Emotion-Action Relation Extraction (Emotion Causality). We define the CEA relation extraction task as extracting, or filling the slots of, the triple (Cause, Emotion, Action) from a given text, as follows:

Given a text T = (w_1, w_2, ..., w_n), where w_i is the i-th word of the text, a triple (C, E, A) needs to be extracted as the CEA relation, where C and A are continuous word sequences in the text and E is an emotion word. In this paper, E is one of the emotion keywords listed in Table 1.

Emotion Inference. We define the emotion inference task as predicting an emotion category given the cause and action events, as follows:

Given a cause and action event tuple (C, A), where C and A are continuous word sequences, and the emotion category set {happiness, sadness, fear, anger, disgust, surprise}, the task is to select one category from this set as the inference answer.
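As a concrete illustration of the two task formats, the snippet below shows the introduction's example as task input and output; the field names are our own and not the released data schema.

```python
# Illustrative task input/output using the example from the introduction;
# the field names are our own, not the released data schema.
sample_text = "听了我的话，老师感到高兴，跟我开起了玩笑。"

# Task 1 (emotion causality): extract the (cause, emotion, action) triple.
cea_triple = {
    "cause": "听了我的话",        # listening to what I said
    "emotion": "高兴",            # happy
    "action": "跟我开起了玩笑",   # joked with me
}

# Task 2 (emotion inference): predict the category from cause/action events.
inference_input = {"cause": "听了我的话", "action": "跟我开起了玩笑"}
inference_label = "happiness"     # one of the six Ekman categories
```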

4.2 Baseline Model

Cause-Emotion-Action Relation Extraction. We regard this task as a sequence labeling problem and use a Bi-LSTM + CRF model Huang et al. (2015). It uses a bidirectional LSTM to encode the input and adds a CRF layer on top. Based on the conditional model P(y|x), the CRF labels a new observation sequence x by selecting the label sequence y that maximizes the conditional probability P(y|x). A minimal sketch of such a model follows.
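The sketch below is a minimal PyTorch version of such a Bi-LSTM + CRF tagger. It assumes character-level input, BIO-style tags over cause and action spans, and the third-party pytorch-crf package for the CRF layer; these choices are our assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # third-party package "pytorch-crf"

class BiLSTMCRF(nn.Module):
    """Bi-LSTM encoder with a CRF layer for tagging cause/action spans (e.g. BIO tags)."""
    def __init__(self, vocab_size, num_tags, emb_dim=300, hidden=300):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Hidden size split over the two directions; the exact split is an assumption.
        self.lstm = nn.LSTM(emb_dim, hidden // 2, bidirectional=True, batch_first=True)
        self.proj = nn.Linear(hidden, num_tags)  # emission scores per character
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, chars, tags, mask):
        emissions = self.proj(self.lstm(self.emb(chars))[0])
        return -self.crf(emissions, tags, mask=mask, reduction="mean")

    def decode(self, chars, mask):
        emissions = self.proj(self.lstm(self.emb(chars))[0])
        return self.crf.decode(emissions, mask=mask)  # best tag sequence per sentence
```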

Emotion Inference. We treat it as a typical classification task. We use LSTM models to encode cause events and action events respectively. When both cause events and action events are used as model input, we concatenate the final hidden states of the two LSTMs (one for the cause events and one for the action events) into a single vector, feed this vector to a softmax layer, and predict the final result. A sketch of this classifier follows.
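The following is a minimal PyTorch sketch of this classifier; padding, masking and the cause-only/action-only variants are omitted, and the layer sizes are assumptions informed by the hyperparameters reported in Section 5.

```python
import torch
import torch.nn as nn

class EmotionInferenceLSTM(nn.Module):
    """Separate LSTM encoders for cause and action; concatenated states feed a softmax."""
    def __init__(self, vocab_size, num_classes=6, emb_dim=200, hidden=200):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.cause_lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.action_lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(2 * hidden, num_classes)

    def forward(self, cause_ids, action_ids):
        _, (h_cause, _) = self.cause_lstm(self.emb(cause_ids))
        _, (h_action, _) = self.action_lstm(self.emb(action_ids))
        features = torch.cat([h_cause[-1], h_action[-1]], dim=-1)
        return self.out(features)  # logits; softmax/cross-entropy over six categories
```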

5 Experiment

5.1 Dataset & Hyperparameters

Cause-Emotion-Action Relation Extraction. We directly use the 10,603 texts in the CEAC dataset as experimental data. The training set and test set are split in a 4:1 ratio.

For the LSTM+CRF model, char embeddings are randomly initialized from [-0.25, 0.25] and kept fixed during training. The char embedding size is 300. The hidden sizes of the three LSTMs are set to 300. The maximum number of epochs and the batch size are set to 40 and 64, respectively. Adam is used to update parameters, with a learning rate of 0.001.

Emotion Inference. For this task, we use the 10,603 texts in the CEAC dataset as experimental data. The training set and test set are split in a 4:1 ratio.

For the LSTM model, word embeddings are pre-trained on Wikipedia with an embedding size of 200 and are not updated during training. The hidden sizes of the three LSTMs are set to 200. The maximum number of epochs and the batch size are set to 16 and 200, respectively. Adam is used to update parameters, with a learning rate of 0.0005.
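For reference, the reported settings of the two baselines can be collected into a single configuration sketch (the key names are our own):

```python
# Reported baseline hyperparameters collected in one place; key names are our own.
HYPERPARAMS = {
    "cea_extraction": {           # Bi-LSTM + CRF sequence labeler
        "char_emb_dim": 300, "emb_trainable": False,
        "lstm_hidden": 300, "max_epochs": 40, "batch_size": 64,
        "optimizer": "Adam", "learning_rate": 1e-3,
    },
    "emotion_inference": {        # LSTM classifier
        "word_emb_dim": 200, "emb_pretrained_on": "Wikipedia", "emb_trainable": False,
        "lstm_hidden": 200, "max_epochs": 16, "batch_size": 200,
        "optimizer": "Adam", "learning_rate": 5e-4,
    },
}
```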

5.2 Result

Cause-Emotion-Action Relation Extraction. As shown in Table 6, the results of extracting the CEA relation are poor, which reflects the difficulty of the task to some extent. In addition to the performance on the full test set, we also use the 2,206 sentences containing both causes and actions to test whether given actions can improve cause extraction and whether given causes can improve action extraction; here again the training and test sets are split in a 4:1 ratio. As shown in Table 7, the "cause & action" columns give the results of detecting the cause/action when the corresponding action/cause is given. After adding actions, the F-measure of cause detection improves by 0.01; after adding causes, the F-measure of action detection improves by 0.03. In short, detection improves when the corresponding action or cause information is added, especially for surprise, whose action detection increases by 0.16 after adding the cause.

Test Data
Cause Action
Pre. Rec. F. Pre. Rec. F.
Majority 0.03 1.0 0.06 0.15 1.0 0.26
ALL 0.55 0.53 0.54 0.48 0.44 0.46
Anger 0.50 0.48 0.49 0.57 0.53 0.55
Disgust 0.55 0.49 0.52 0.46 0.39 0.42
Fear 0.52 0.50 0.51 0.36 0.33 0.34
Happiness 0.52 0.49 0.50 0.53 0.45 0.49
Sadness 0.61 0.59 0.60 0.46 0.44 0.45
Surprise 0.61 0.59 0.60 0.37 0.36 0.36
Table 6: Result of the Cause-Emotion-Action Relation Extraction task
With counterpart given (cause & action) Without counterpart given
cause action cause action
Precision Recall F1 Precision Recall F1 Precision Recall F1 Precision Recall F1
surprise 31.25 31.25 31.25 61.9 50.0 55.32 28.57 25.0 26.67 45.0 34.62 39.13
disgust 38.71 37.89 38.3 47.93 45.31 46.59 38.32 34.74 36.67 44.63 42.19 43.37
fear 40.0 38.78 39.38 42.11 36.36 39.02 44.3 35.71 39.55 43.48 36.36 39.6
happy 46.15 42.86 44.44 44.68 41.18 42.86 48.65 42.86 45.57 46.81 43.14 44.9
sadness 48.05 50.68 49.33 66.22 58.33 62.03 56.06 50.68 53.24 53.75 51.19 52.44
anger 49.36 50.0 49.68 62.8 58.19 60.41 43.68 45.45 44.44 58.86 58.19 58.52
all 44.33 44.14 44.23 54.41 49.31 51.73 44.57 41.21 42.83 50.65 47.05 48.78
Table 7: Result of the Cause-Emotion-Action Relation Extraction task on the dataset which both contain cause and action

Emotion Inference. We also extract the subset of the data containing both causes and actions for comparative analysis. From Table 8, we can see that the differences between emotion categories in Task 2 are not as large as in Task 1. However, the model with more information (both cause and action) still achieves better results (5% higher than on the overall test set), and the results for three categories (anger, fear and sadness) are significantly improved.

6 Discussion

In this section, we analyze the experimental results and discuss them.

As explained in the introduction, the triple (cause, emotion, action) contains multiple causal links: 1) emotions and cause events are causally related; 2) emotions lead to action events; 3) causes and actions are also causally related. Based on this, we conducted two groups of experiments, Task 1 and Task 2.

In Task 1, we find that the results of extracting the CEA triples are poor, because the task itself is hard and there are many items to extract. We also find that detection improves when the corresponding action or cause information is added. For example, in "Wang Yan was angry when she found out that her husband had cheated on her, so she decided to divorce her husband", the model can more easily detect "decided to divorce her husband" as the action once the cause "she found out that her husband had cheated on her" is given.

In Task 2, after adding more information, the results improve for some emotions, such as anger, fear and sadness, but not for others, such as disgust, happiness and surprise. After analyzing the data, we found that the causes of anger, fear and sadness are often similar; for example, an event such as "he hurt me" can lead to anger, fear or sadness, so inferring the emotion from the cause alone is unreliable. When the actions, for example "retaliate", "dodge" and "cry", are added, the three emotions can be distinguished and the results improve. The imbalance of the data containing both causes and actions also affects the results. As shown in Figure 4, when the proportion of data containing both causes and actions changes, the improvement changes accordingly. For surprise, for example, the overall result becomes even worse after adding the action because the proportion of data containing actions is low, so the action becomes useless noise in that case.

ALL ACTION
Precision Recall F1 Support Precision Recall F1 Support
ALL 0.55 0.55 0.55 2110 0.63 0.51 0.50 39
Anger 0.54 0.45 0.49 345 0.71 0.28 0.40 18
Disgust 0.55 0.48 0.51 357 0.42 0.73 0.53 11
Fear 0.50 0.52 0.51 321 0.33 1.00 0.50 2
Happiness 0.60 0.66 0.63 335 0.60 0.75 0.67 4
Sadness 0.60 0.65 0.62 359 1.00 0.50 0.67 4
Surprise 0.50 0.54 0.52 393 NULL NULL NULL 0
CAUSE CAUSE & ACTION
Precision Recall F1 Support Precision Recall F1 Support
ALL 0.53 0.53 0.53 1627 0.61 0.60 0.60 444
Anger 0.41 0.30 0.34 185 0.66 0.66 0.66 142
Disgust 0.57 0.47 0.52 268 0.51 0.47 0.49 78
Fear 0.45 0.50 0.47 228 0.68 0.57 0.62 91
Happiness 0.60 0.68 0.64 291 0.62 0.50 0.56 40
Sadness 0.60 0.60 0.60 286 0.59 0.83 0.69 69
Surprise 0.51 0.55 0.53 369 0.32 0.33 0.33 23
Table 8: Results of emotion inference task
Figure 4: Visualization of the performance changes along with the ratio of data which contains both causes and actions

7 Related Works

We only list works related to emotion event/cause data sources here.

Tokuhisa et al. (2008) first defined the emotion-provoking event and constructed a Japanese corpus from massive examples extracted from the web, then performed sentiment polarity classification and emotion classification. Vu et al. (2014) worked on creating prevalence-ranked dictionaries of emotion-provoking events through both manual labor and automatic information extraction.

Lee et al. (2010) first proposed the task of emotion cause detection and manually constructed a corpus from the Academia Sinica Balanced Chinese Corpus. Gui et al. (2016) built a dataset using SINA city news and proposed an event-driven emotion cause extraction method using multi-kernel SVMs. Ghazi et al. (2015) directly selected the emotion-directed frames in FrameNet to build an English emotion cause (or stimulus) corpus and used CRFs to detect emotion causes. Gui et al. (2014) designed a corpus by annotating emotion cause expressions in Chinese Weibo and extended the rule-based method to informal Weibo text. Cheng Xiyao (2017) focused on current/original-subtweet-based emotion detection and annotated a multiple-user structure. Gao et al. (2017) organized the NTCIR-13 ECA (emotion cause analysis) task, which designed two subtasks: emotion cause detection and emotion cause extraction.

Deng et al. (2013) presented an annotation scheme for events that negatively or positively affect entities (benefactive/malefactive events). Choi et al. (2014) then constructed a sense-level lexicon of benefactive and malefactive events for opinion inference.

Ding and Riloff (2016) defined affective events as events that are typically associated with a positive or negative emotional state, and aimed to automatically acquire knowledge of stereotypically positive and negative events from personal blogs. Ding and Riloff (2018) defined a set of human need categories to explain the affect of events and manually added human need annotations to a previous collection of affective events.

Rashkin et al. (2018) proposed Event2Mind to support commonsense inference on events described in short free-form text, with a specific focus on modeling people's stereotypical intents and reactions.

8 Conclusion

In this paper, we first define emotion action and incorporate it into emotion causality so that cause, emotion and action comprise an integral cause-effect chain. We then define and investigate the emotion causality and emotion inference tasks, and manually label a large-scale corpus, CEAC, to support them. Finally, we report baseline performance on the two tasks, which shows that: for emotion causality, even a state-of-the-art sequence labeling model still finds the task too difficult to achieve good performance; for emotion inference, a popular neural model can compose embedding representations of previously unseen events and possible emotion causes; and for both tasks, the emotion action does affect the experimental results.

There is still much room for improvement on the emotion causality and emotion inference tasks. In addition, we cannot yet analyze all the experimental results soundly because of the imbalanced distribution of emotion causes and emotion actions, so we aim to release 50,000 instances in the future, which we believe will significantly boost research on both emotion causality and emotion inference. We are currently releasing 10,603 samples with 16,093 events to inspire work on emotion causality, emotion inference and other related tasks, and to gather feedback from the research community.

References

  • Charles et al. (1872) Darwin Charles, Ekman Paul, and Prodger Phillip. 1872. The expression of the emotions in man and animals. Electronic Text Center, University of Virginia Library.
  • Chen et al. (2010) Ying Chen, Sophia Yat Mei Lee, Shoushan Li, and Chu-Ren Huang. 2010. Emotion cause detection with linguistic constructions. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 179–187. Association for Computational Linguistics.
  • Cheng Xiyao (2017) Xiyao Cheng, Ying Chen, and Bixiao Cheng. 2017. An emotion cause corpus for chinese microblogs with multiple-user structures. ACM Transactions on Asian and Low-Resource Language Information Processing, 17(1):1–19.
  • Choi et al. (2014) Yoonjung Choi, Lingjia Deng, and Janyce Wiebe. 2014. Lexical acquisition for opinion inference: A sense-level lexicon of benefactive and malefactive events. In Proceedings of the 5th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 107–112.
  • Deng et al. (2013) Lingjia Deng, Yoonjung Choi, and Janyce Wiebe. 2013. Benefactive/malefactive event and writer attitude annotation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 120–125.
  • Ding and Riloff (2016) Haibo Ding and Ellen Riloff. 2016. Acquiring knowledge of affective events from blogs using label propagation. In AAAI, pages 2935–2942.
  • Ding and Riloff (2018) Haibo Ding and Ellen Riloff. 2018. Weakly supervised induction of affective events by optimizing semantic consistency. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence.
  • Ekman (1992) Paul Ekman. 1992. An argument for basic emotions. Cognition & emotion, 6(3-4):169–200.
  • Fillmore et al. (2003) Charles J Fillmore, Miriam RL Petruck, Josef Ruppenhofer, and Abby Wright. 2003. Framenet in action: The case of attaching. International journal of lexicography, 16(3):297–332.
  • Gao et al. (2017) Qinghong Gao, J Hu, R Xu, et al. 2017. Overview of ntcir-13 eca task. In Proceedings of the 13th NTCIR Conference. Tokyo, Japan.
  • Ghazi et al. (2015) Diman Ghazi, Diana Inkpen, and Stan Szpakowicz. 2015. Detecting emotion stimuli in emotion-bearing sentences. In International Conference on Intelligent Text Processing and Computational Linguistics, pages 152–165. Springer.
  • Gui et al. (2016) Lin Gui, Dongyin Wu, Ruifeng Xu, Qin Lu, and Yu Zhou. 2016. Event-driven emotion cause extraction with corpus construction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1639–1649.
  • Gui et al. (2014) Lin Gui, Li Yuan, Ruifeng Xu, Bin Liu, Qin Lu, and Yu Zhou. 2014. Emotion cause detection with linguistic construction in chinese weibo text. In Natural Language Processing and Chinese Computing, pages 457–464. Springer.
  • Huang et al. (2015) Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991.
  • Lee et al. (2010) Sophia Yat Mei Lee, Ying Chen, Shoushan Li, and Chu-Ren Huang. 2010. Emotion cause events: Corpus construction and analysis. In LREC.
  • Radinsky et al. (2012) Kira Radinsky, Sagie Davidovich, and Shaul Markovitch. 2012. Learning causality for news events prediction. In Proceedings of the 21st international conference on World Wide Web, pages 909–918. ACM.
  • Rashkin et al. (2018) Hannah Rashkin, Maarten Sap, Emily Allaway, Noah A Smith, and Yejin Choi. 2018. Event2mind: Commonsense inference on events, intents, and reactions. arXiv preprint arXiv:1805.06939.
  • Richardson et al. (2013) Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 193–203.
  • Scaria et al. (2013) Aju Thalappillil Scaria, Jonathan Berant, Mengqiu Wang, Peter Clark, Justin Lewis, Brittany Harding, and Christopher D Manning. 2013. Learning biological processes with global constraints. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1710–1720.
  • Scherer (2005) Klaus R Scherer. 2005. What are emotions? and how can they be measured? Social science information, 44(4):695–729.
  • Tokuhisa et al. (2008) Ryoko Tokuhisa, Kentaro Inui, and Yuji Matsumoto. 2008. Emotion classification using massive examples extracted from the web. In Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1, pages 881–888. Association for Computational Linguistics.
  • Tomai and Forbus (2010) Emmett Tomai and Ken Forbus. 2010. Using narrative functions as a heuristic for relevance in story understanding. In Proceedings of the Intelligent Narrative Technologies III Workshop, page 9. ACM.
  • Vu et al. (2014) Hoa Trong Vu, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2014. Acquiring a dictionary of emotion-provoking events. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, volume 2: Short Papers, pages 128–132.