Multi-Task Learning with Auxiliary Speaker Identification for Conversational Emotion Recognition

03/03/2020 ∙ by Jingye Li, et al. ∙ Wuhan University

Conversational emotion recognition (CER) has attracted increasing interests in the natural language processing (NLP) community. Different from the vanilla emotion recognition, effective speaker-sensitive utterance representation is one major challenge for CER. In this paper, we exploit speaker identification (SI) as an auxiliary task to enhance the utterance representation in conversations. By this method, we can learn better speaker-aware contextual representations from the additional SI corpus. Experiments on two benchmark datasets demonstrate that the proposed architecture is highly effective for CER, obtaining new state-of-the-art results on two datasets.




1 Introduction

Emotion recognition has been a hot topic in natural language processing (NLP), aiming to detect emotions in texts [Wen and Wan2014, Li et al.2015]. Recently, emotion recognition in conversations has received increasing attention [Majumder et al.2019, Zhong et al.2019, Ghosal et al.2019]. Given a sequence of utterances by multiple speakers, conversational emotion recognition (CER) aims to recognize the emotion of each utterance. CER is a typical sequence labeling problem, and end-to-end neural sequence labeling models have achieved state-of-the-art performance on it [Poria et al.2017, Jiao et al.2019].

Intuitively, speaker information can be greatly helpful for CER. For example, the previous utterance by the same speaker can serve as an important clue for the current utterance. Thus, how to effectively represent speaker-sensitive utterances in conversations is critical for CER models. Previous studies, e.g., ConGCN [Zhang et al.2019] and DialogueGCN [Ghosal et al.2019], build graphical structures over the input utterance sequences by speaker information and then exploit graph neural networks to model the dependencies, leading to better performance for CER.

Figure 1: Illustration of the multi-task learning of our method. For CER, the emotion of every utterance in the conversation is predicted. For SI, several pairs of utterances are selected for binary classification, where 1 denotes the same speaker and 0 otherwise.

The above models merely adopt speakers as indicators to build connections over sequential utterances, using only the CER training corpus to learn speaker-aware representations, which could be insufficient for speaker exploration. In most cases, we can access much larger corpora of raw conversations, where no emotion information is annotated. These corpora could be useful for learning speaker-aware contextual representations, since the utterances and the corresponding speaker identities are offered jointly in them. Speaker identification (SI) could be one good alternative for this purpose. As shown in Figure 1, we can learn speaker-aware contextual representations from raw conversations by judging whether two utterances are from the same speaker.

In this work, we propose to use SI as an auxiliary task to obtain better speaker-aware contextual representations of conversational utterances. We exploit a multi-task learning (MTL) framework to achieve our final goal of enhancing CER. CER and SI use the same network structure for utterance encoding, but with different sets of model parameters. We adopt BERT as the basic representation to make our baseline strong, and hierarchical bidirectional gated recurrent neural networks (Bi-GRU) are exploited at the utterance level and conversation level to enhance the contextual representations. Further, we unite the two tasks with two bridging network structures, using a gate mechanism [Wu et al.2019] at the utterance level and an attention network [Bahdanau et al.2014] at the conversation level for full mutual interaction.

We conduct experiments on two benchmark datasets, EmoryNLP and MELD, both sourced from the TV show Friends, to verify our framework. The results show that our baseline system is very strong, achieving better performance than the previous state of the art. Our final model leads to significant improvements on both datasets, which demonstrates the effectiveness of our proposed method. In addition, we conduct extensive analysis to examine the model in depth and better understand its advantages. All code and experimental settings will be released publicly for research purposes under the Apache License 2.0.

2 Related Work

Emotion recognition in conversational texts is generally treated as a sequence labeling problem in the literature. Traditional approaches often use lexicon-based and acoustic features to detect emotions [Forbes-Riley and Litman2004, Devillers and Vidrascu2006]. Recently, deep learning models based on recurrent neural networks (RNN) and transformers have brought state-of-the-art performance [Poria et al.2017, Tzirakis et al.2017, Zhong et al.2019], as they capture contextual utterance representations effectively. Our baseline CER model follows these settings, adopting a sophisticated RNN for contextualized representation learning.

Several studies attempt to integrate speaker information into the conversational utterances, as it can strongly affect the final CER performance [Hazarika et al.2018b, Hazarika et al.2018a]. [Majumder et al.2019] propose a recurrent model to detect emotion by dynamically tracking party states and a global state. Graph convolutional networks (GCN) have been demonstrated to be stronger at modeling context-sensitive and speaker-sensitive dependencies in conversations [Zhang et al.2019, Ghosal et al.2019]. In this work, we propose an improved approach that better utilizes speaker information through multi-task learning with a closely related auxiliary task.

Multi-task learning aims to model multiple relevant tasks simultaneously, exploiting potentially shared features across tasks effectively. There have been a number of successful studies in the NLP community [Liu et al.2017, Ma et al.2018, Xiao et al.2018, Pentyala et al.2019, Wu et al.2019]. [Zhou et al.2019] exploit language modeling as an auxiliary task to assist question generation via multi-task learning, which greatly motivates our work. Similar to language modeling, speaker identification has abundant training data, which can be collected automatically, and we exploit it mainly to assist CER.

3 Methodology

Suppose we have a conversation with consecutive utterances $u_1, \dots, u_N$ and speakers $s_1, \dots, s_M$. Each utterance $u_i$ is uttered by one speaker $s_{\phi(i)}$, where the function $\phi(\cdot)$ maps the index of an utterance to its corresponding speaker. The objective of CER is to predict the emotion label of each utterance $u_i$, and the objective of the auxiliary task SI is to classify whether two given utterances $(u_i, u_j)$ in a conversation are from the same speaker. In this section, we introduce our task-specific models and the multi-task learning framework in detail. Figure 2 shows the architecture of our baseline model, which exploits an attention-based hierarchical network as the encoder. Figure 3 depicts the multi-task learning framework of our proposed model.

Figure 2: The overall architecture of our individual models.
Figure 3: Illustration of our proposed model for CER and SI.

3.1 Conversational Emotion Recognition

For the baseline CER model, a hierarchical encoder is first built on the input conversation utterance sequence, resulting in one feature vector for each utterance; standard emotion classification is then performed on the encoded sequence. Our hierarchical encoder consists of two components: an individual utterance encoder and a contextual utterance encoder. The individual utterance encoder is an attention-based network with a Bi-GRU, as shown by the right part of Figure 2, and the contextual utterance encoder is a Bi-GRU over the output sequence of the individual utterance encoder, aiming to capture the conversation-level context. (Bi-GRU is used as the basic RNN operation here considering both efficiency and effectiveness.)

Individual Utterance Encoder

The input of our model is a sequence of utterances $u_1, \dots, u_N$. Assume that the $i$th utterance is denoted by $u_i = w_{i,1} \cdots w_{i,T_i}$, where $T_i$ is the length of $u_i$. We exploit BERT as the basic encoder because it has achieved state-of-the-art performance on a number of NLP tasks [Devlin et al.2019], obtaining the word-level outputs for $u_i$: $\mathbf{x}_{i,1}, \dots, \mathbf{x}_{i,T_i} = \mathrm{BERT}(w_{i,1} \cdots w_{i,T_i})$.

Further, we adopt a single-layer Bi-GRU to enhance the contextualized word representations at the utterance level, which can be formulated as:

$\mathbf{h}_{i,j} = [\overrightarrow{\mathrm{GRU}}(\mathbf{x}_{i,j}); \overleftarrow{\mathrm{GRU}}(\mathbf{x}_{i,j})],$

where $\mathbf{h}_{i,j}$ denotes the Bi-GRU output for word $w_{i,j}$.

The goal of the individual utterance encoder is to derive a single feature vector for each utterance from the words it covers. We thus aggregate the word-level outputs into a single vector per utterance, using an attention-based aggregation. Formally, the utterance representation is defined as follows:

$\alpha_{i,j} = \dfrac{\exp\big(\mathbf{v}^{\top}\tanh(\mathbf{W}\mathbf{h}_{i,j} + \mathbf{b})\big)}{\sum_{k=1}^{T_i}\exp\big(\mathbf{v}^{\top}\tanh(\mathbf{W}\mathbf{h}_{i,k} + \mathbf{b})\big)}, \qquad \mathbf{u}_i = \sum_{j=1}^{T_i} \alpha_{i,j}\,\mathbf{h}_{i,j},$

where $\mathbf{W}$, $\mathbf{b}$ and $\mathbf{v}$ are model parameters, $\top$ indicates vector transposition, and $\mathbf{u}_i$ is the vector representation of utterance $u_i$. Intuitively, $\alpha_{i,j}$ is the importance of word $w_{i,j}$ in $u_i$, and the weighted sum is adopted as the utterance representation.
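The attention-based aggregation can be sketched in NumPy as follows; the dimensions and the parameter names (W, b, v) are illustrative toys, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_pool(H, W, b, v):
    # Score each word as v^T tanh(W h + b), normalize with softmax,
    # and return the weighted sum as the utterance vector.
    scores = np.tanh(H @ W.T + b) @ v      # shape (T,)
    alpha = softmax(scores)                # attention weights over words
    return alpha @ H, alpha                # (d,) utterance vector, (T,) weights

# Toy setup: 5 words with 8-dimensional Bi-GRU outputs
T, d = 5, 8
H = rng.normal(size=(T, d))
W = rng.normal(size=(d, d))
b = rng.normal(size=d)
v = rng.normal(size=d)
u, alpha = attentive_pool(H, W, b, v)
```

The weights sum to one, so the pooled vector stays in the convex hull of the word vectors.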

Contextual Utterance Encoder

After the individual utterance encoder, we obtain a sequence of utterance-level representations $\mathbf{u}_1, \dots, \mathbf{u}_N$, each sourced solely from its internal words. In order to encode utterance-level contextual information, we build a second Bi-GRU:

$\mathbf{r}_i = [\overrightarrow{\mathrm{GRU}}(\mathbf{u}_i); \overleftarrow{\mathrm{GRU}}(\mathbf{u}_i)],$

where $\mathbf{r}_i$ is the final utterance representation for prediction, which captures the surrounding contextual information in the conversation.

Output Layer

When the final contextualized utterance feature representation $\mathbf{r}_i$ is ready, we calculate the output probability of each candidate emotion label by a linear transformation followed by a softmax operation:

$\mathbf{o}_i = \mathrm{softmax}(\mathbf{W}_o \mathbf{r}_i),$

where $\mathbf{W}_o$ is a model parameter and $\mathbf{o}_i$ denotes the output distribution over emotion labels for utterance $u_i$.


We optimize the CER model by minimizing the cross-entropy between the predicted emotion distribution and the true distribution. For a single conversation, the objective function is defined as follows:

$\mathcal{L}_{\mathrm{CER}} = -\sum_{i=1}^{N} \sum_{c=1}^{C} y_{i,c} \log o_{i,c},$

where $N$ and $C$ are the number of utterances in the conversation and the number of emotion labels, respectively, $\mathbf{y}_i$ is the one-hot vector of the ground-truth emotion for utterance $u_i$, and $\mathbf{o}_i$ is the predicted distribution.
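This conversation-level cross-entropy can be computed as in the following small NumPy sketch (the probabilities here are toy values, not model outputs):

```python
import numpy as np

def cer_loss(probs, labels):
    # Cross-entropy summed over the utterances of one conversation.
    # probs: (N, C) predicted emotion distributions; labels: (N,) gold ids.
    n = probs.shape[0]
    return -np.log(probs[np.arange(n), labels]).sum()

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
loss = cer_loss(probs, labels)   # equals -(log 0.7 + log 0.8)
```

With one-hot targets only the gold-label log-probability contributes per utterance, which the fancy indexing above exploits.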

3.2 Speaker Identification

As discussed before, speaker information plays an important role in CER. Here we use the same hierarchical network as in CER to encode utterances for SI. Similarly, we obtain the first-stage utterance representations with the individual utterance encoder, and then the second-stage utterance representations with the contextual utterance encoder.

The goal of SI is to determine whether two selected utterances are from the same speaker, which is a binary classification problem. To this end, we randomly sample pairs of utterances for classification. Note that we do not extract all utterance pairs in a conversation, to keep a balance with CER. In addition, we do not directly recognize the identity of each speaker, as the number of classes could be extremely large in real scenarios, which would make training very difficult.


Given a sampled pair of utterance representations $(\mathbf{r}_i, \mathbf{r}_j)$ from a single conversation, we adopt four sources of features for SI classification: (1) $\mathbf{r}_i$, (2) $\mathbf{r}_j$, (3) $|\mathbf{r}_i - \mathbf{r}_j|$, and (4) $\mathbf{r}_i \odot \mathbf{r}_j$ ($\odot$ denotes element-wise multiplication). We concatenate them, apply a nonlinear MLP layer to obtain the final feature vector $\mathbf{z}_{i,j}$, and then perform binary classification on it. The overall process can be formalized as follows:

$\mathbf{z}_{i,j} = \tanh\big(\mathbf{W}_z[\mathbf{r}_i; \mathbf{r}_j; |\mathbf{r}_i - \mathbf{r}_j|; \mathbf{r}_i \odot \mathbf{r}_j] + \mathbf{b}_z\big), \qquad \mathbf{p}_{i,j} = \mathrm{softmax}(\mathbf{W}_p \mathbf{z}_{i,j}),$

where $\mathbf{W}_z$, $\mathbf{b}_z$ and $\mathbf{W}_p$ are model parameters, $[\cdot;\cdot]$ denotes vector concatenation, and $\mathbf{p}_{i,j}$ is a two-dimensional vector whose one dimension indicates the probability of the same speaker and the other the probability of different speakers.
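A minimal NumPy sketch of the pair sampling and feature construction is given below. The exact feature set (the two vectors plus their absolute difference and element-wise product) is a common sentence-pair recipe assumed here, since the original symbols were lost from the source; the speaker names and dimensions are toys:

```python
import numpy as np

rng = np.random.default_rng(1)

def si_features(ri, rj):
    # Pair features: both vectors, absolute difference, element-wise product.
    return np.concatenate([ri, rj, np.abs(ri - rj), ri * rj])

def sample_pairs(speakers, k, rng):
    # Sample k utterance-index pairs; label is 1 iff both share a speaker.
    n = len(speakers)
    pairs = []
    for _ in range(k):
        i, j = rng.choice(n, size=2, replace=False)
        pairs.append((i, j, int(speakers[i] == speakers[j])))
    return pairs

R = rng.normal(size=(6, 4))   # 6 contextual utterance vectors, dim 4 (toy)
speakers = ["Ross", "Rachel", "Ross", "Joey", "Rachel", "Ross"]
pairs = sample_pairs(speakers, k=3, rng=rng)
feats = [si_features(R[i], R[j]) for i, j, _ in pairs]
```

Sampling only a few pairs per conversation, rather than all of them, keeps the SI loss on a scale comparable to the CER loss.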


For SI, we also adopt the cross-entropy loss between the ground truth and the predicted distribution as the training objective:

$\mathcal{L}_{\mathrm{SI}} = -\sum_{k=1}^{K} \mathbf{g}_{i_k,j_k}^{\top} \log \mathbf{p}_{i_k,j_k},$

where $\mathbf{g}_{i,j}$ is the two-dimensional one-hot vector of the ground-truth answer, and $K$ is a hyperparameter: we randomly sample $K$ utterance pairs $(u_i, u_j)$ per conversation.

3.3 Multi-Task Learning

The two models above for CER and SI share the same encoder network structure, which makes it convenient to unite them under a multi-task learning framework. In this work, we keep the task-specific encoders and exploit two bridging network structures for mutual interaction between the hierarchical encoders: one network is designed for the utterance-level individual utterance encoder, and the other targets the conversation-level contextual utterance encoder.

Individual Utterance Encoder

For the utterance-level individual utterance encoder, we exploit a shared-private structure to connect the two tasks. Concretely, as shown by the right part of Figure 3, we add a shared Bi-GRU module to unite the two tasks. Given the BERT outputs $\mathbf{x}_{i,j}$, we first compute shared hidden vectors by the shared Bi-GRU:

$\mathbf{h}^{s}_{i,j} = [\overrightarrow{\mathrm{GRU}}_{s}(\mathbf{x}_{i,j}); \overleftarrow{\mathrm{GRU}}_{s}(\mathbf{x}_{i,j})],$

and then design a gate mechanism to dynamically incorporate the shared features into the task-specific features. The network is mostly inspired by [Wu et al.2019]. The updated contextual word representations for CER are computed as follows:

$\mathbf{g}_{i,j} = \sigma\big(\mathbf{W}_g[\mathbf{h}_{i,j}; \mathbf{h}^{s}_{i,j}]\big), \qquad \tilde{\mathbf{h}}_{i,j} = \mathbf{g}_{i,j} \odot \mathbf{h}^{s}_{i,j} + (1 - \mathbf{g}_{i,j}) \odot \mathbf{h}_{i,j},$

where $\mathbf{g}_{i,j}$ is a gate controlling the portion of information flowing from the shared Bi-GRU layer, and $\sigma$ is the sigmoid function. For the SI part, we only need to substitute the CER-specific $\mathbf{h}_{i,j}$ with its SI counterpart $\mathbf{h}'_{i,j}$, and $\tilde{\mathbf{h}}_{i,j}$ changes to $\tilde{\mathbf{h}}'_{i,j}$ correspondingly. Finally, we use $\tilde{\mathbf{h}}_{i,j}$ and $\tilde{\mathbf{h}}'_{i,j}$ as the individual utterance encoder outputs for CER and SI, respectively.
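One plausible parameterization of such a gate, sketched in NumPy; the paper's exact formulation is not recoverable from the source, so this follows the common sigmoid-gated mixing scheme, with toy dimensions:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(h_task, h_shared, Wg):
    # A gate computed from both streams mixes the shared and the
    # task-specific word vectors element-wise.
    g = sigmoid(Wg @ np.concatenate([h_task, h_shared]))
    return g * h_shared + (1.0 - g) * h_task

d = 4
h_cer = rng.normal(size=d)        # task-specific word vector (CER stream)
h_sh = rng.normal(size=d)         # shared Bi-GRU word vector
Wg = rng.normal(size=(d, 2 * d))
h_new = gated_fusion(h_cer, h_sh, Wg)
```

Because the gate lies in (0, 1), each output element is a convex combination of the two input streams, so the shared signal can never overwrite the task-specific one entirely.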

Contextual Utterance Encoder

For the union at the contextual utterance encoder, we adopt a cross-attention mechanism to augment the utterance representation with features from the other task's side. Taking the target CER model as an example, we obtain one kind of extra feature from the SI contextual utterance encoder. Concretely, assuming that the contextual utterance representations of CER and SI are $\mathbf{r}_i$ and $\mathbf{r}'_i$, respectively, we compute the additional feature $\mathbf{a}_i$ for $\mathbf{r}_i$ by the following equations:

$\beta_{i,k} = \dfrac{\exp\big(\mathbf{r}_i^{\top}\mathbf{W}_a \mathbf{r}'_k\big)}{\sum_{k'=1}^{N}\exp\big(\mathbf{r}_i^{\top}\mathbf{W}_a \mathbf{r}'_{k'}\big)}, \qquad \mathbf{a}_i = \sum_{k=1}^{N} \beta_{i,k}\,\mathbf{r}'_k,$

where $\mathbf{W}_a$ is a model parameter. The attention mechanism is mainly motivated by [Bahdanau et al.2014] for feature selection from the other side. When the SI model is the target task, the mechanism is performed in the opposite direction, and $\mathbf{a}'_i$ is obtained. Finally, we use $[\mathbf{r}_i; \mathbf{a}_i]$ and $[\mathbf{r}'_i; \mathbf{a}'_i]$ instead for CER and SI decoding, respectively.
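A NumPy sketch of one plausible form of this cross attention (bilinear scoring is assumed here, since the exact formulation was lost from the source; dimensions are toys):

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def cross_attend(r_query, R_other, Wa):
    # Bilinear scores between one task's utterance vector and every
    # contextual utterance vector from the other task, then a weighted sum.
    scores = R_other @ (Wa @ r_query)   # shape (N,)
    beta = softmax(scores)
    return beta @ R_other

N, d = 5, 4
R_cer = rng.normal(size=(N, d))     # CER contextual utterance vectors
R_si = rng.normal(size=(N, d))      # SI contextual utterance vectors
Wa = rng.normal(size=(d, d))
a0 = cross_attend(R_cer[0], R_si, Wa)      # extra feature for utterance 0
r_aug = np.concatenate([R_cer[0], a0])     # augmented vector for decoding
```

Concatenating the attended summary with the original utterance vector doubles the feature dimension fed to the decoder.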

Multi-Task Training

For the multi-task learning of the two tasks, we simply sum the losses of the two individual tasks as the joint objective:

$\mathcal{L} = \mathcal{L}_{\mathrm{CER}} + \mathcal{L}_{\mathrm{SI}}.$

4 Experiments

dataset     #conversations (train/val/test)   #utterances (train/val/test)   avg. length
EmoryNLP    659/89/79                         7551/954/984                   11.5
MELD        1028/114/280                      9989/1109/2610                 9.6
Friends     3329                              61038                          18.3
Table 1: Statistics of the datasets for emotion recognition in conversation. Friends is an external dataset for MTL.

4.1 Datasets

We evaluate our model on two benchmark datasets, MELD and EmoryNLP, following previous work [Zhong et al.2019]. Table 1 shows the corpus statistics.


MELD [Poria et al.2019] This is a multimodal dataset collected from the TV show Friends. There are seven emotion categories for each utterance: anger, sadness, disgust, surprise, fear, joy and neutral.


EmoryNLP [Zahiri and Choi2018] This dataset is also collected from TV show scripts of Friends. The difference lies in the set of emotion labels: neutral, joyful, peaceful, powerful, scared, mad and sad.

In particular, we collect a large corpus for SI training: the entire scripts of the TV show Friends, a superset of MELD and EmoryNLP.

4.2 Settings

We adopt Adam as the optimizer with a batch size of 4 to train our models; the learning rate for fine-tuning BERT and that for the other parameters are set separately. A dropout rate of 0.5 is applied to avoid overfitting. The dimension sizes of the hidden states of all Bi-GRUs are set to 200 on EmoryNLP and 150 on MELD. For evaluation, we exploit the standard weighted macro-F1 score as the major metric for all models, following [Zhong et al.2019].

4.3 Models

For a comprehensive evaluation, we compare our model with the following baselines as well:



CNN [Kim2014] A convolutional neural network for utterance-level classification without using contextual information at the conversation level.

cLSTM [Poria et al.2017] A hierarchical classification model based on LSTM-RNN, where contextual utterance-level features are adopted.

DialogueRNN [Majumder et al.2019] A sophisticated RNN-based model built on three GRUs, which model speakers, global contexts and historical emotions, respectively.

KET [Zhong et al.2019] The state-of-the-art model in the literature, which exploits external commonsense knowledge to enhance the contextual utterance representations.

DialogueGCN [Ghosal et al.2019] A GCN-based model aiming to better represent inter-speaker dependence, where GRUs are used as the basic feature composition modules.

ConGCN [Zhang et al.2019] A multi-modal model for CER, which also exploits GCN to model context-sensitive and speaker-sensitive dependence.

4.4 Developmental Results

We conduct experiments on the development sets of EmoryNLP and MELD to examine our proposed models.

Figure 4: Development experimental results of our model on EmoryNLP by varying the value of $K$.

The Influence of $K$

$K$ represents the number of utterance pairs sampled from a conversation for SI, which balances the combined loss of the two tasks. We investigate the influence of $K$ by ranging it from 1 to 5. Figure 4 shows the results. The best results are obtained at different values of $K$ on EmoryNLP and MELD, demonstrating the importance of sampling. The optimal $K$ varies across datasets, which could be due to their different average conversation lengths.

(a) EmoryNLP
(b) MELD
Figure 5: The influence of BERT Fine-Tuning.

The Influence of BERT Fine-Tuning

The use of BERT should be studied carefully. When the BERT parameters are frozen, we save substantial resources (e.g., GPU memory), but this may lead to a significant performance decrease. Here we study the gap between fine-tuning and freezing BERT. Figure 5 shows the comparison, where both the baseline model and our final model are investigated. As shown, freezing the BERT parameters results in drops of over 2% on EmoryNLP and over 4% on MELD, demonstrating that fine-tuning is necessary for CER.

Interaction Mode    EmoryNLP   MELD
None                38.23      57.86
CUE only            38.76      58.79
IUE only            38.50      58.49
IUE + CUE           38.95      59.29
Table 2: Ablation performance on EmoryNLP and MELD, where IUE and CUE denote the individual and contextual utterance encoders, respectively.

Ablation Study

To comprehensively study the effectiveness of the two bridging network structures at the different levels, we conduct detailed ablation experiments. The results are given in Table 2. Both networks bring improved performance for CER. Excluding the bridging network at the IUE, our final model falls by 0.19% on EmoryNLP and 0.50% on MELD. Similarly, eliminating the bridging network at the CUE causes declines of 0.45% and 0.80% on the two datasets, respectively. When both network structures are removed, drops of 0.72 and 1.43 points result on the two datasets, respectively.

Figure 6: Visualization of the attentions, where the thresholds are 0.1 and 0.2 for words and utterances by their attention values, respectively.

4.5 Final Results

Method EmoryNLP MELD
Our (GloVe) 32.59 59.67
        +MTL 34.54 60.69
Our (ELMo) 33.55 61.10
        +MTL 34.85 61.86
Our (BERT) 34.76 61.31
        +MTL 35.92 61.90
CNN 32.59 55.02
cLSTM 32.89 56.44
DialogueRNN 31.70 57.03
DialogueGCN - 58.10
KET 34.39 58.18
ConGCN - 59.40
Table 3: Final results on the test sets of EmoryNLP and MELD.

Table 3 shows the performance of various models on the test sections of the two datasets. We report the performance of our baseline model with three kinds of word representations: pretrained GloVe word embeddings [Pennington et al.2014] (300d), ELMo [Peters et al.2018] (original, 5.5B) and BERT (BERT-Base, Uncased). All our baseline results are strong, achieving comparable performance with the previous state-of-the-art systems. The best previously reported numbers on EmoryNLP and MELD are 34.39% and 59.40%, respectively. Our BERT-based baseline gives F-scores of 34.76% and 61.31%, respectively, both higher than the previous state of the art.

According to the results, our models with MTL achieve better performance on both datasets compared to their corresponding baselines. The F1-score improvements with the pretrained GloVe embeddings are 1.95% and 1.02% on the two datasets, respectively. With the contextualized ELMo representations, the improvements over the baseline are 1.30% and 0.76%, respectively. For the BERT-based baseline, the final MTL-enhanced model brings F1 increases of 1.16% and 0.59% on the two datasets, respectively. All the improvements from MTL are significant (p-value below 0.0001 using a paired t-test). Interestingly, we also find that the improvements become smaller as the baseline becomes stronger.

4.6 Visualization Analysis

For a comprehensive understanding of our proposed models, we visualize the attention matrices with a case study selected from the MELD test set. Figure 6 shows the example, where both the salient words of the individual utterance encoder and the key utterances of the cross-attention structure are presented.

First, we examine the difference in salient words for the individual utterance encoder. As shown, the final model treats gimme it, she, Rachel and Pheebs as strong clues for CER, which are speaker-related, while it down-weights punctuation such as commas and periods, which are mostly objective. This observation indicates that the shared Bi-GRU module helps identify speaker-related words, such as speaker names and attributes, highlighting them for further feature representation, while effectively excluding unimportant objective words. In addition, the comparison further demonstrates the importance of speaker information, given the better performance of our final model.

At the conversation level, the cross-attention mechanism is used to identify closely related utterances for a given utterance. Here we show the indexes of the related utterances to study the information learned by MTL with SI. As shown by the rightmost part of Figure 6, the cross-attention mechanism helps to associate a specific utterance with utterances from the same speaker and with the following utterances of the targeted speakers. For example, for the 9th utterance, the speaker is Phoebe, and the targeted speaker is Joey. By MTL, the model connects the 7th and 11th utterances from the same speaker, and meanwhile connects the 6th utterance from the target speaker. Intuitively, these utterances could be potential evidence for recognizing the current emotion, as demonstrated by our final model.

5 Conclusion

In this work, we proposed a multi-task learning network for CER with the assistance of SI, aiming to better capture speaker-related information, which has been demonstrated to be important for CER. We built a strong baseline with BERT as the backbone, and then presented two neural network structures to bridge the two tasks for mutual interaction. We conducted experiments on two benchmark datasets to verify the effectiveness of the proposed method. Results showed that our baseline is very strong, achieving the best performance compared with the previous state of the art. Further, the MTL-based method boosts the performance significantly, leading to a new state of the art in the literature. Detailed experiments showed that both of our suggested MTL components are important. In addition, we analyzed the proposed model in depth for comprehensive understanding.


  • [Bahdanau et al.2014] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
  • [Devillers and Vidrascu2006] Laurence Devillers and Laurence Vidrascu. Real-life emotions detection with lexical and paralinguistic cues on human-human call center dialogs. In ICSLP, 2006.
  • [Devlin et al.2019] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, pages 4171–4186, 2019.
  • [Forbes-Riley and Litman2004] Kate Forbes-Riley and Diane Litman. Predicting emotion in spoken dialogue from multiple knowledge sources. In HLT-NAACL, pages 201–208, 2004.
  • [Ghosal et al.2019] Deepanway Ghosal, Navonil Majumder, Soujanya Poria, Niyati Chhaya, and Alexander Gelbukh. DialogueGCN: A graph convolutional neural network for emotion recognition in conversation. In EMNLP-IJCNLP, pages 154–164, 2019.
  • [Hazarika et al.2018a] Devamanyu Hazarika, Soujanya Poria, Rada Mihalcea, Erik Cambria, and Roger Zimmermann. ICON: Interactive conversational memory network for multimodal emotion detection. In EMNLP, pages 2594–2604, 2018.
  • [Hazarika et al.2018b] Devamanyu Hazarika, Soujanya Poria, Amir Zadeh, Erik Cambria, Louis-Philippe Morency, and Roger Zimmermann. Conversational memory network for emotion recognition in dyadic dialogue videos. In NAACL, pages 2122–2132, 2018.
  • [Jiao et al.2019] Wenxiang Jiao, Haiqin Yang, Irwin King, and Michael R. Lyu. HiGRU: Hierarchical gated recurrent units for utterance-level emotion recognition. In NAACL-HLT, pages 397–406, 2019.
  • [Kim2014] Yoon Kim. Convolutional neural networks for sentence classification. In EMNLP, pages 1746–1751, 2014.
  • [Li et al.2015] Shoushan Li, Lei Huang, Rong Wang, and Guodong Zhou. Sentence-level emotion classification with label and context dependence. In ACL-IJCNLP, pages 1045–1053, 2015.
  • [Liu et al.2017] Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. Adversarial multi-task learning for text classification. In ACL, pages 1–10, 2017.
  • [Ma et al.2018] Jiaqi Ma, Zhe Zhao, Xinyang Yi, Jilin Chen, Lichan Hong, and Ed H Chi. Modeling task relationships in multi-task learning with multi-gate mixture-of-experts. In KDD, pages 1930–1939, 2018.
  • [Majumder et al.2019] Navonil Majumder, Soujanya Poria, Devamanyu Hazarika, Rada Mihalcea, Alexander Gelbukh, and Erik Cambria. Dialoguernn: An attentive rnn for emotion detection in conversations. In AAAI, pages 6818–6825, 2019.
  • [Pennington et al.2014] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543, 2014.
  • [Pentyala et al.2019] Shiva Pentyala, Mengwen Liu, and Markus Dreyer. Multi-task networks with universe, group, and task feature learning. In ACL, pages 820–830, 2019.
  • [Peters et al.2018] Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. arXiv preprint arXiv:1802.05365, 2018.
  • [Poria et al.2017] Soujanya Poria, Erik Cambria, Devamanyu Hazarika, Navonil Majumder, Amir Zadeh, and Louis-Philippe Morency. Context-dependent sentiment analysis in user-generated videos. In ACL, pages 873–883, 2017.
  • [Poria et al.2019] Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Gautam Naik, Erik Cambria, and Rada Mihalcea. MELD: A multimodal multi-party dataset for emotion recognition in conversations. In ACL, pages 527–536, 2019.
  • [Tzirakis et al.2017] Panagiotis Tzirakis, George Trigeorgis, Mihalis A Nicolaou, Björn W Schuller, and Stefanos Zafeiriou. End-to-end multimodal emotion recognition using deep neural networks. J-STSP, 11(8):1301–1309, 2017.
  • [Wen and Wan2014] Shiyang Wen and Xiaojun Wan. Emotion classification in microblog texts using class sequential rules. In AAAI, pages 187–193, 2014.
  • [Wu et al.2019] Lianwei Wu, Yuan Rao, Haolin Jin, Ambreen Nazir, and Ling Sun. Different absorption from the same sharing: Sifted multi-task learning for fake news detection. In EMNLP-IJCNLP, pages 4636–4645, 2019.
  • [Xiao et al.2018] Liqiang Xiao, Honglun Zhang, and Wenqing Chen. Gated multi-task network for text classification. In NAACL-HLT, pages 726–731, 2018.
  • [Zahiri and Choi2018] Sayyed M. Zahiri and Jinho D. Choi. Emotion detection on TV show transcripts with sequence-based convolutional neural networks. In The Workshops of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 44–52, 2018.
  • [Zhang et al.2019] Dong Zhang, Liangqing Wu, Changlong Sun, Shoushan Li, Qiaoming Zhu, and Guodong Zhou. Modeling both context- and speaker-sensitive dependence for emotion detection in multi-speaker conversations. In IJCAI, pages 5415–5421, 2019.
  • [Zhong et al.2019] Peixiang Zhong, Di Wang, and Chunyan Miao. Knowledge-enriched transformer for emotion detection in textual conversations. In EMNLP-IJCNLP, pages 165–176, 2019.
  • [Zhou et al.2019] Wenjie Zhou, Minghua Zhang, and Yunfang Wu. Multi-task learning with language modeling for question generation. In EMNLP-IJCNLP, pages 3385–3390, 2019.