Towards Explainable and Controllable Open Domain Dialogue Generation with Dialogue Acts

07/19/2018 · Can Xu et al. · Beihang University, Microsoft

We study open domain dialogue generation with dialogue acts designed to explain how people engage in social chat. To imitate human behavior, we propose managing the flow of human-machine interactions with the dialogue acts as policies. The policies and response generation are jointly learned from human-human conversations, and the former is further optimized with a reinforcement learning approach. With the dialogue acts, we achieve significant improvement over state-of-the-art methods on response quality for given contexts and dialogue length in both machine-machine simulation and human-machine conversation.


1 Introduction

Recently, there has been a surge of interest in dialogue generation for chatbots, which aim to naturally and meaningfully converse with humans on open domain topics (Vinyals and Le, 2015). Although often called "non-goal-oriented" dialogue systems, such conversational agents are often built to keep users engaged in human-machine interactions as long as possible (Ram et al., 2018). While most existing effort goes to generating relevant and diverse responses for static contexts (Serban et al., 2016, 2017b; Sordoni et al., 2015; Li et al., 2015), it is not clear whether relevance and diversity are sufficient for engagement in dynamic human-machine interactions, and if not, what else is needed to achieve such engagement.

In this work, we investigate the following problems: (1) how to understand human engagement in their social chat; (2) how to imitate such behavior in dialogue generation; (3) how to learn such a dialogue model; and (4) if the model can control its responses in interactions and thus enhance user engagement.

We design dialogue acts that describe how humans behave with respect to conversational contexts in their social interactions. The dialogue acts, when applied to real data, give rise to an interesting finding: in addition to replying with relevance and diversity, people habitually drive their social chat by constantly switching to new contexts and properly asking questions. Such behavior has been less explored before, and is thus difficult for existing end-to-end learning methods to imitate. To mimic the behavior, we propose modeling open domain dialogue generation as an alternation of dialogue act selection and response generation, where the dialogue acts control the types of the generated responses and thus manage the flow of interactions as policies. The model is learned from large scale human-human dialogues tagged with a dialogue act classifier, and the policy of act selection is further optimized for long-term conversation through a reinforcement learning approach. Our model enjoys several advantages over existing models: (1) the dialogue acts provide an interpretation of response generation from a discourse perspective; (2) the dialogue acts enhance the diversity of responses by expanding the search space from language alone to the joint space of acts and language; (3) the dialogue acts improve user engagement in human-machine interactions; and (4) the dialogue acts allow engineers to control their systems by picking responses from their desired acts. Evaluation results on large scale test data indicate that our model can significantly outperform state-of-the-art methods in terms of the quality of generated responses with respect to given contexts, and lead to long-term conversation in both machine-machine simulation and human-machine conversation.

Our contributions in this work include: (1) design of dialogue acts that represent human behavior with respect to conversational contexts, together with insights from analysis of human-human interactions; (2) joint modeling of dialogue act selection and response generation in open domain dialogue generation; (3) a proposal for learning the model through a supervised learning approach and a reinforcement learning approach; and (4) empirical verification of the effectiveness of the model through automatic metrics, human annotations, machine-machine simulation, and human-machine conversation.

Dialogue Acts Definitions Examples
Context Maintain Statement (CM.S) A user or a bot aims to maintain the current conversational context (e.g., topic) by giving information, suggesting something, or commenting on the previous utterances, etc. “There are many good places in Tokyo.” after “I plan to have a tour in Tokyo this summer.”
Context Maintain Question (CM.Q) A user or a bot asks a question in the current context. Questions cover 5W1H and yes-no with various functions such as context clarification, confirmation, knowledge acquisition, and rhetorical questions, etc. “Where are you going to stay in Tokyo?” after “I plan to have a tour in Tokyo this summer.”
Context Maintain Answer (CM.A) A response or an answer to the previous utterances in the current context. “This summer.” after “When are you going to Tokyo?”
Context Switch Statement (CS.S) Similar to CM.S, but the user or the bot tries to switch to a new context (e.g., topic) by bringing in new content. “I plan to study English this summer.” after “I plan to have a tour in Tokyo this summer.”
Context Switch Question (CS.Q) A user or a bot tries to change the context of the conversation by asking a question. “When will your summer vacation start?” after “I plan to have a tour in Tokyo this summer.”
Context Switch Answer (CS.A) The utterance not only replies to the previous turn, but also starts a new topic. “I don’t know because I have to get an A+ in my math exam.” after “When are you going to Tokyo?”
Others (O) Greetings, thanks, requests, etc. “Thanks for your help.”
Table 1: Definition of dialogue acts.

2 Dialogue Acts for Social Engagement

2.1 Definition of Dialogue Acts

We define our dialogue acts by extending the tags in (Jurafsky et al., 1997; Stolcke et al., 2006) based on the DAMSL annotation scheme (Core and Allen, 1997). Specifically, we merge some acts and define two high-level ones that describe how people behave with respect to conversational contexts in their interactions. As will be seen later, the extension brings us insights into engagement in social chat. Details of the dialogue acts are described in Table 1.

The dialogue acts in Table 1 are generally applicable to open domain dialogues from various sources in different languages, such as Twitter, Reddit, Facebook, Weibo (www.weibo.com), and Baidu Tieba (https://tieba.baidu.com/). Existing annotated data sets (e.g., the Switchboard Corpus, https://github.com/cgpotts/swda) do not have dialogue acts defined with respect to conversational contexts. Therefore, it is not clear how such dialogue acts depict human behavior in interactions, and there are no large scale data available for learning dialogue generation with the dialogue acts either. To resolve these problems, we build a data set.

2.2 Data Set

We crawled million dyadic dialogues (conversations between two people) from Baidu Tieba. Baidu Tieba is the largest Reddit-like forum in China, which allows users to communicate with each other through one posting a comment and the other replying to it. We randomly sample 9 million dialogues as a training set, 90 thousand dialogues as a validation set, and 1,000 dialogues as a test set. These data are used to learn a dialogue generation model later. We employ the Stanford Chinese word segmenter (https://nlp.stanford.edu/software/tokenizer.shtml) to tokenize utterances in the data. Table 2 reports statistics of the data.

 

train val test
dialogues 9M 90k 1000
Min. turns per dialogue 3 5 5
Max. turns per dialogue 50 50 50
Avg. turns per dialogue 7.68 7.67 7.66
Avg. words per utterance 15.81 15.89 15.74

 

Table 2: Statistics of the experimental data sets.

For dialogue act learning, we randomly sample dialogues from the training set and recruit three native speakers to label dialogue acts for each utterance according to the definitions in Table 1. Table 3 shows a labeling example from one annotator. Each utterance receives three labels, and the Fleiss' kappa of the labeling work is , indicating moderate agreement among the labelers.

 

Turns Dialogue Acts
A: 万里长城很漂亮! The Great Wall of China is beautiful! CM.S
B: 你在长城看日落了吗? Did you see the sunset on the Great Wall? CM.Q
A: 是的,那是最漂亮的景色。 Yes, it’s the most beautiful scenery. CM.A
B: 上次我去的时候人很多。 It was very crowded when I visited there last time CS.S
A: 我只待了一小会儿,人太多了! I only stayed there for a while. Too many visitors! CM.S

 

Table 3: An example of dialogue with labeled acts.

2.3 Insights from the labeled data

The frequencies of the dialogue acts in terms of percentages of the total number of utterances in the labeled data are CM.S %, CM.Q %, CM.A %, CS.S %, CS.Q %, CS.A %, and O %. In addition to the numbers, we also get further insights from the data that are instructive to our dialogue generation learning:

Context switch is a common skill for keeping conversation going. In fact, we find that % of the dialogues contain at least one CS.* act. The average number of turns of dialogues that contain at least one CS.* is , while the average number of turns of dialogues that do not contain a CS.* is . When dialogues are shorter than turns, only % of them contain a CS.*, but when dialogues exceed turns, more than % of them contain a CS.*. Because there are no specific goals in their conversations, people seldom stay long in one context. The average number of turns before a context switch is . We also observe consecutive context switches in many dialogues (%). These numbers suggest dialogue generation with smooth context switch and moderate context maintenance.

Questions are an important building block in open domain conversation. In fact, % of CM.* acts are CM.Q, and the percentage is even higher among CS.* acts (%). People need to ask questions in order to maintain contexts. The average number of turns of contexts with questions (i.e., consecutive CM.* with at least one CM.Q) is , while the average number of turns of contexts without questions is only . The observation indicates that a good dialogue model should be capable of asking questions properly, as suggested by Li et al. (2017a). A further step in studying humans' questioning behavior is to look into the types and functions of questions. We leave it as future work.

These observations raise new challenges that are difficult for existing end-to-end methods to tackle (e.g., smoothly interleaving context blocks with switch actions), and thus encourage us to create a new model. Before elaborating on the model, we first build a classifier that can automatically tag large scale dialogues with the dialogue acts.

2.4 Dialogue Act Classification

We aim to learn a classifier $c(\cdot)$ from $\mathcal{D} = \{d_i\}_{i=1}^{N}$, where $d_i = \{(u_{i,j}, a_{i,j})\}_{j=1}^{n_i}$ represents a dialogue with $u_{i,j}$ the $j$-th utterance and $a_{i,j}$ the labeled dialogue act. Given a new dialogue $d = (u_1, \ldots, u_n)$, $c(\cdot)$ can sequentially tag the utterances in $d$ with dialogue acts by taking $u_t$, $u_{t-1}$, and the predicted $a_{t-1}$ as inputs and outputting a vector $c(u_t, u_{t-1}, a_{t-1})$ whose $k$-th element represents the probability of $u_t$ being tagged as the $k$-th dialogue act.

We parameterize $c(\cdot)$ using neural networks. Specifically, $u_t$ and $u_{t-1}$ are first processed by bidirectional recurrent neural networks with gated recurrent units (biGRUs) (Chung et al., 2014), respectively. Then the last hidden states of the two biGRUs are concatenated with an embedding of $a_{t-1}$ and fed to a multi-layer perceptron (MLP) to calculate a dialogue act distribution. Formally, suppose that $u_t = (w_{t,1}, \ldots, w_{t,T})$ where $w_{t,k}$ is the embedding of the $k$-th word, then the $k$-th hidden state of the biGRU is given by $h_{t,k} = [\overrightarrow{h}_{t,k}; \overleftarrow{h}_{t,k}]$, where $\overrightarrow{h}_{t,k}$ is the $k$-th state of a forward GRU, $\overleftarrow{h}_{t,k}$ is the $k$-th state of a backward GRU, and $[\cdot;\cdot]$ is a concatenation operator. $\overrightarrow{h}_{t,k}$ and $\overleftarrow{h}_{t,k}$ are calculated by

$$\overrightarrow{h}_{t,k} = \mathrm{GRU}(\overrightarrow{h}_{t,k-1}, w_{t,k}), \qquad \overleftarrow{h}_{t,k} = \mathrm{GRU}(\overleftarrow{h}_{t,k+1}, w_{t,k}). \qquad (1)$$

Similarly, we have $h_{t-1,k}$ as the $k$-th hidden state of $u_{t-1}$. Let $e(a_{t-1})$ be the embedding of $a_{t-1}$, then $c(u_t, u_{t-1}, a_{t-1})$ is defined by a two-layer MLP:

$$c(u_t, u_{t-1}, a_{t-1}) = \mathrm{softmax}\big(\mathrm{MLP}\big([h_{t,T};\, h_{t-1,T};\, e(a_{t-1})]\big)\big), \qquad (2)$$

where we pad zeros for $u_0$ and $a_0$ when computing $c(u_1, u_0, a_0)$. We learn $c(\cdot)$ by minimizing cross entropy with $\mathcal{D}$. Let $p_k(a_{i,j})$ be the probability of $a_{i,j}$ being the $k$-th dialogue act and $c_k(\cdot)$ be the $k$-th element of $c(\cdot)$, then the objective function of learning is formulated as

$$\mathcal{L}_c = -\sum_{i=1}^{N} \sum_{j=1}^{n_i} \sum_{k} p_k(a_{i,j}) \log c_k(u_{i,j}, u_{i,j-1}, a_{i,j-1}). \qquad (3)$$

We randomly split the labeled dialogues as // dialogues with // utterances for training/validation/test. Details of model training are given in the Appendix. The learned classifier achieves an accuracy of % on the test data. We employ it to tag the training, validation, and test sets in Table 2.
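To make the architecture concrete, the following is a minimal PyTorch sketch of such a classifier. The layer sizes, the shared encoder, and all names are illustrative assumptions rather than the exact configuration used in the paper (whose sizes are given in the Appendix); training would minimize the cross entropy of Equation (3) against the annotators' soft labels.

```python
import torch
import torch.nn as nn

NUM_ACTS = 7  # CM.S, CM.Q, CM.A, CS.S, CS.Q, CS.A, O (Table 1)

class DialogueActClassifier(nn.Module):
    """biGRU encoders for u_t and u_{t-1}, an act embedding for a_{t-1},
    and a two-layer MLP producing an act distribution (Eqs. 1-2)."""
    def __init__(self, vocab_size, emb_dim=200, hidden_dim=200,
                 act_emb_dim=30, mlp_dim=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.act_emb = nn.Embedding(NUM_ACTS + 1, act_emb_dim)  # +1 for the padded a_0
        # a single biGRU shared by both utterances (an assumption; the paper
        # may use two separate encoders)
        self.encoder = nn.GRU(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(4 * hidden_dim + act_emb_dim, mlp_dim),
            nn.Tanh(),
            nn.Linear(mlp_dim, NUM_ACTS),
        )

    def encode(self, utterance_ids):
        # concatenate the last hidden states of the forward and backward GRUs
        _, h_n = self.encoder(self.word_emb(utterance_ids))   # h_n: (2, batch, hidden)
        return torch.cat([h_n[0], h_n[1]], dim=-1)            # (batch, 2 * hidden)

    def forward(self, curr_ids, prev_ids, prev_act):
        features = torch.cat(
            [self.encode(curr_ids), self.encode(prev_ids), self.act_emb(prev_act)],
            dim=-1)
        return torch.softmax(self.mlp(features), dim=-1)      # act distribution (Eq. 2)
```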

Figure 1: Policy network and generation network. (a) generation network; (b) policy network.

3 Dialogue Generation Model

3.1 Supervised Learning

We aim to learn a dialogue generation model from $\mathcal{D}' = \{d_i\}$, where $d_i = \{(u_{i,j}, a_{i,j})\}_{j=1}^{n_i}$ refers to a human-human dialogue with $u_{i,j}$ the $j$-th utterance and $a_{i,j}$ the dialogue act tagged by the classifier in Section 2.4. Given $s_t = (u_1, a_1, \ldots, u_{t-1}, a_{t-1})$ as a new dialogue session, the model can generate a response $r_t$ as the next turn of the dialogue.

Our dialogue model consists of a policy network and a generation network. A dialogue act is first selected from the policy network according to the conversation history, and then a response is generated from the generation network based on the conversation history and the dialogue act. Formally, the dialogue model can be formulated as

$$p(r_t \mid s_t) = p(r_t \mid s_t, a_t)\,\pi(a_t \mid s_t), \quad a_t \in \mathcal{A}, \qquad (4)$$

where $a_t$ is the selected dialogue act for the $t$-th turn, $r_t$ is the response, $\pi(\cdot \mid s_t)$ and $p(\cdot \mid s_t, a_t)$ are the policy network and the generation network respectively, and $\mathcal{A}$ is the space of dialogue acts.

Figure 1(b) shows the architecture of the policy network. The utterance sequence and the act sequence are encoded with a hierarchical encoder and a GRU encoder respectively. Then, the last hidden states of the two encoders are concatenated and fed to an MLP to calculate a probability distribution of dialogue acts for the next turn. Formally, each utterance in $(u_1, \ldots, u_{t-1})$ is first transformed to a hidden vector through a biGRU parameterized as in Equation (1). Then, the sequence of utterance vectors is processed by a GRU whose last hidden state is $l_{t-1}$. In parallel, the act sequence $(a_1, \ldots, a_{t-1})$ is transformed to a last hidden state $m_{t-1}$ by another GRU. $\pi(a_t \mid s_t)$ is then defined by

$$\pi(a_t \mid s_t) = \mathrm{softmax}\big(\mathrm{MLP}\big([l_{t-1};\, m_{t-1}]\big)\big). \qquad (5)$$

We build the generation network in a sequence-to-sequence framework. Here, we simplify $s_t$ by keeping only the most recent utterances of the conversation history, since decoding natural language responses from a long conversation history is challenging. Figure 1(a) illustrates the architecture of the generation network. The only difference from the standard encoder-decoder architecture with an attention mechanism (Bahdanau et al., 2015) is that in encoding, we concatenate the retained utterances into one long sentence and attach the selected dialogue act $a_t$ to the top of the long sentence as a special word. The technique here is similar to that in zero-shot machine translation (Johnson et al., 2016). Formulation details are given in the Appendix.
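To illustrate the act-as-special-word trick, the snippet below shows how an encoder input could be assembled; the token format and the helper name are assumptions for illustration, not the paper's exact preprocessing.

```python
ACT_TOKENS = {act: f"<{act}>" for act in
              ["CM.S", "CM.Q", "CM.A", "CS.S", "CS.Q", "CS.A", "O"]}

def build_encoder_input(context_utterances, dialogue_act):
    """Concatenate the (truncated) context into one long sentence and prepend
    the selected dialogue act as a special word, as in zero-shot MT."""
    flat_context = " ".join(context_utterances)
    return f"{ACT_TOKENS[dialogue_act]} {flat_context}"

# Example:
# build_encoder_input(["I plan to have a tour in Tokyo this summer .",
#                      "where are you going to stay in Tokyo ?"], "CM.A")
# -> "<CM.A> I plan to have a tour in Tokyo this summer . where are you going to stay in Tokyo ?"
```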

The dialogue model is then learned by minimizing the negative log likelihood of $\mathcal{D}'$:

$$\mathcal{L} = -\sum_{i} \sum_{j} \log\big[\pi(a_{i,j} \mid s_{i,j})\, p(u_{i,j} \mid s_{i,j}, a_{i,j})\big], \qquad (6)$$

where $s_{i,j} = (u_{i,1}, a_{i,1}, \ldots, u_{i,j-1}, a_{i,j-1})$. Through supervised learning, we fit the dialogue model to human-human interactions in order to learn their conversational patterns and human language. However, supervised learning does not explicitly encourage long-term conversation (e.g., % of the dialogues in our training set are no more than turns), and the policy network is optimized without awareness of what is going to happen in the future when a dialogue act is selected. This motivates us to further optimize the model through a reinforcement learning approach.

3.2 Reinforcement Learning

We optimize the dialogue model through self-play (Li et al., 2016b; Lewis et al., 2017) where we let two models learned with the supervised approach talk to each other in order to improve their performance. In the simulation, a dialogue is initialized with a message sampled from the training set. Then, the two models continue the dialogue by alternately taking the conversation history as an input and generating a response (top one in beam search) until turns ( in our experiments).
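A rough outline of this self-play loop is sketched below; `select_act`, `generate_response`, the repetition check, and the turn limit are placeholders standing in for the components described in this section, not the paper's implementation.

```python
def should_terminate(history):
    """Placeholder for the repetition checks of Section 3.2 (simplified here
    to an exact repeat of the previous turn)."""
    return len(history) >= 2 and history[-1] == history[-2]

def simulate_dialogue(model_a, model_b, first_message, max_turns=20):
    """Let two copies of the supervised model talk to each other.
    Each turn: select a dialogue act with the policy network, then decode the
    top-1 beam-search response conditioned on the history and the act.
    max_turns is an assumed value, not the paper's setting."""
    history, acts = [first_message], []
    agents = [model_a, model_b]
    for turn in range(max_turns):
        agent = agents[turn % 2]
        act = agent.select_act(history, acts)             # policy network
        response = agent.generate_response(history, act)  # generation network, top-1 beam
        history.append(response)
        acts.append(act)
        if should_terminate(history):                     # repetition checks
            break
    return history, acts
```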

To speed up training and to avoid the generated responses diverging from human language, we fix the generation network and only optimize the policy network with reinforcement learning. Thus, the policy in learning is naturally defined by the policy network $\pi(a \mid s)$, with the conversation history $s$ a state and a dialogue act $a$ an action. We define a reward function as

$$R(a, s) = \lambda\, E_l(a, s) + (1 - \lambda)\, E_r(a, s), \qquad (7)$$

where $E_l(a, s)$ is the expected dialogue length after taking $a$ under $s$, $E_r(a, s)$ is the expected response relevance within the conversation, and $\lambda \in [0, 1]$ balances the two terms. Through Equation (7), we try to encourage actions that can lead to long (measured by $E_l$) and reasonable (measured by $E_r$) conversations.

To estimate $E_l(a, s)$ and $E_r(a, s)$, we fix $a$ and construct a set of simulated dialogues ( in our experiments) by sampling continuations after $(s, a)$ with self-play, where each subsequent response is randomly sampled from the top 5 beam search results according to Equation (4). Inspired by (Li et al., 2016b), we terminate a simulated dialogue if (1) three consecutive turns are (semantically) repetitive, i.e., their representations given by the encoder of the generation network are highly similar, or (2) one agent gives repetitive responses in two consecutive turns, or (3) the length of the dialogue reaches the maximum number of turns. Conditions (1) and (2) indicate a high probability that the conversation has fallen into a bad infinite loop. $E_l(a, s)$ and $E_r(a, s)$ are then estimated by averaging, over the simulated dialogues, the dialogue length and the relevance scores of the generated responses respectively, where relevance is measured by the dual LSTM model proposed in (Lowe et al., 2015), which scores a response against a context. We train the dual LSTM with the million crawled data through negative sampling. The objective of learning is to maximize the expected future reward:

$$J = \mathbb{E}_{a \sim \pi(\cdot \mid s)}\big[R(a, s)\big]. \qquad (8)$$

The gradient of the objective is calculated with the REINFORCE algorithm (Williams, 1992):

$$\nabla J = \mathbb{E}_{a \sim \pi(\cdot \mid s)}\big[\nabla \log \pi(a \mid s)\,\big(R(a, s) - b\big)\big], \qquad (9)$$

where the baseline $b$ is set empirically.
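The update implied by Equations (7)-(9) can be sketched as follows; the reward weighting, the baseline value, and the `estimate_reward` callable are assumptions for illustration rather than the paper's exact settings.

```python
import torch

def reinforce_step(policy_net, optimizer, state, estimate_reward,
                   lambda_=0.5, baseline=0.0):
    """One REINFORCE update for the policy network.
    `estimate_reward(state, act)` should return (expected_length, expected_relevance)
    estimated from self-play rollouts as in Section 3.2; it is a placeholder here.
    lambda_ and baseline are illustrative values."""
    act_probs = policy_net(state)                        # pi(.|s), shape (num_acts,)
    dist = torch.distributions.Categorical(act_probs)
    act = dist.sample()                                  # pick a dialogue act
    e_len, e_rel = estimate_reward(state, act.item())    # rollout-based estimates
    reward = lambda_ * e_len + (1.0 - lambda_) * e_rel   # weighted reward (Eq. 7 form)
    loss = -dist.log_prob(act) * (reward - baseline)     # REINFORCE surrogate (Eq. 9)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return act.item(), reward
```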

4 Experiment

4.1 Experiment Setup

Our experiments are conducted with the data in Table 2. The following methods are employed as baselines: (1) S2SA: sequence-to-sequence with attention (Bahdanau et al., 2015) in which utterances in contexts are concatenated as a long sentence. We use the implementation with Blocks (https://github.com/mila-udem/blocks); (2) HRED: the hierarchical encoder-decoder model in (Serban et al., 2016) implemented with the source code available at (https://github.com/julianser/hed-dlg-truncated); (3) VHRED: the hierarchical latent variable encoder-decoder model in (Serban et al., 2017b) implemented with the source code available at (https://github.com/julianser/hed-dlg-truncated); and (4) RL-S2S: dialogue generation with reinforcement learning (Li et al., 2016b). We implement the algorithm by completing the code at (https://github.com/liuyuemaicha/Deep-Reinforcement-Learning-for-Dialogue-Generation-in-tensorflow).

All baselines are implemented with the recommended configurations in the literature. We denote our Dialogue Act aware Generation Model with only Supervised Learning as SL-DAGM, and the full model (supervised learning + reinforcement learning) as RL-DAGM. Implementation details are given in the Appendix.

4.2 Response Generation for Given Contexts

The first experiment checks whether the proposed models can generate high-quality responses with respect to given contexts. To this end, we take the last turn of each test dialogue as ground truth, and feed the previous turns as a context to different models for response generation. Top one responses from beam search (beam size ) of different models are collected, randomly shuffled, and presented to three native speakers to judge their quality. Each response is rated by the three annotators under the following criteria: 2: the response is not only relevant and natural, but also informative and interesting; 1: the response can be used as a reply, but might not be informative enough (e.g., "Yes, I see", etc.); 0: the response makes no sense, is irrelevant, or is grammatically broken.

0 1 2 Kappa
S2SA 0.478 0.478 0.044 0.528
HRED 0.447 0.456 0.097 0.492
VHRED 0.349 0.471 0.180 0.494
RL-S2S 0.393 0.462 0.142 0.501
SL-DAGM 0.279 0.475 0.244 0.508
RL-DAGM 0.341 0.386 0.273 0.485
(a) Human annotations. Ratios are calculated by combining labels from the three judges.
Machine-Machine Human-Machine
RL-S2S 4.36 4.54
SL-DAGM 7.36 5.24
RL-DAGM 7.87 5.58
(b) Average dialogue length in machine-machine and human-machine conversations.
Table 4: Evaluation Results

Table 4(a) summarizes the annotation results. Improvements from our models over the baseline methods are statistically significant (t-test, p-value ). Besides human annotations, we also compare different models with the ground truth using automatic metrics. These metrics include BLEU (Papineni et al., 2002), embedding based metrics (Liu et al., 2016) such as Embedding Average (Average), Embedding Extrema (Extrema), and Embedding Greedy (Greedy), and ratios of distinct unigrams (distinct-1) and bigrams (distinct-2) in the generated responses, which are employed in (Li et al., 2015) to measure response diversity. Table 5 reports the results.

BLEU-1 BLEU-2 Average Extrema Greedy Distinct-1 Distinct-2
S2SA 4.67 1.18 21.45 16.68 21.53 0.033 0.069
HRED 3.70 1.06 16.87 13.58 20.15 0.062 0.139
VHRED 6.10 1.76 20.83 16.17 21.36 0.079 0.225
RL-S2S 5.57 1.83 20.72 16.73 20.64 0.100 0.213
SL-DAGM 6.23 2.07 20.68 16.42 21.52 0.200 0.466
RL-DAGM 6.77 2.12 21.18 16.97 21.76 0.223 0.503
Table 5: Automatic evaluation results. Numbers in bold mean that improvement from the model on that metric is statistically significant over the baseline methods (t-test, p-value ).

We can see that the diversity of responses is significantly improved with the dialogue acts. This is supported by the much larger proportion of responses rated 2 from the two models in Table 4(a) and the significant improvement on distinct n-grams in Table 5. The reason is that we search for a response not only in a language space, but also in an act space. The dimension of dialogue acts provides further variation in the generated responses. On the other hand, due to the diversity, responses from our models may sometimes diverge from the ground truth. This is why improvements on the other automatic metrics are not significant. To further explain the advantages of our models, we show an example in Table 6. Besides responses from the dialogue acts selected by our models, we also show responses from other reasonable but not selected acts. With the dialogue acts, the generated responses become very rich, from a confirmation (CM.Q) to an open question (CS.Q) and then to a long informative statement (CS.S). More importantly, the dialogue acts let us know why we have such responses: both SL-DAGM and RL-DAGM try to switch to new topics (e.g., Xiamen, noodles, and plans, etc.) in order to continue the conversation. One can also change the flow of the conversation by picking responses from other dialogue acts. The example demonstrates that besides good performance, our models enjoy good interpretability and controllability as well. We show more such examples in the Appendix.

To further understand how the dialogue acts affect response generation, we collect generated responses from a specific dialogue act for the contexts of the test dialogues, and characterize the responses with the following metrics: (1) distinct-1 and distinct-2; (2) words out of context (OOC): ratio of words that are in the generated responses but not contained by the contexts; and (3) average length of the generated responses (Ave Len).
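These characterization metrics are straightforward to compute; below is a small sketch assuming whitespace-tokenized responses and contexts (the function names are ours, not from the paper).

```python
def distinct_n(responses, n):
    """Ratio of distinct n-grams over all n-grams in the generated responses."""
    ngrams = [tuple(toks[i:i + n])
              for toks in (r.split() for r in responses)
              for i in range(len(toks) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

def out_of_context_ratio(responses, contexts):
    """Ratio of response words that do not appear in the corresponding context (OOC)."""
    new_words = total_words = 0
    for response, context in zip(responses, contexts):
        context_vocab = set(context.split())
        words = response.split()
        total_words += len(words)
        new_words += sum(w not in context_vocab for w in words)
    return new_words / max(total_words, 1)

def average_length(responses):
    """Average number of tokens per generated response (Ave Len)."""
    return sum(len(r.split()) for r in responses) / max(len(responses), 1)
```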

Table 7 reports the results. In general, responses generated from CS.* are longer, more informative, and contain more new words than responses generated from CM.*, as also illustrated in Table 6. Another interesting finding is that statements and answers are generally more informative than questions in both CS.* and CM.*. In addition to these metrics, we also calculate BLEU scores and embedding based metrics, but do not observe significant differences among responses from different dialogue acts. The reason might be that these metrics are based on a comparison of the generated responses with human responses, but human responses in the test set are inherently a mixture of responses from different dialogue acts.

Context Responses
一起吃晚餐? 中饭好吗? 中饭只能在公司吃 。 那我不能来了。我在休假。
Have dinner together? how about lunch? I can only have my lunch at company. Then I cannot join you because I am in my vacation.
S2SA: 我也是这么想的。 I think so
HRED: 放假了啊? You are already in vacation?
VHRED: 哈哈哈。 Haha.
RL-S2S: 我已经在吃了。 I am having lunch now.
SL-DAGM :好吧,我刚从厦门回来,想在食堂吃碗面。 OK. I am just back from Xiamen, and want to have noodle in cafeteria. (CS.S)
RL-DAGM :放假有什么安排。 What are you going to do for your vacation? (CS.Q)
CM.Q:放假了吗? You are already in vacation?
CM.S:我还以为你没休假呢。 I thought you were at work.
Table 6: An example of response generation. Utterances in the context are split by “”.
Distinct-1 Distinct-2 OOC Ave Len
CM.S 0.114 0.262 0.091 5.57
CM.Q 0.092 0.220 0.038 5.21
CM.A 0.119 0.269 0.094 5.58
CS.S 0.250 0.521 0.168 8.21
CS.Q 0.223 0.460 0.152 5.85
CS.A 0.244 0.500 0.166 8.42
Table 7: Characteristics of the generated responses from different dialogue acts.

4.3 Engagement Test

Figure 2: Average dialogue length of human-machine conversation in terms of different testers.

Secondly, we study conversation engagement with the proposed models. Experiments are conducted through machine-machine simulation and human-machine conversation. In both experiments, we compare SL-DAGM and RL-DAGM with RL-S2S, as RL-S2S is the only baseline optimized for future success. Responses from all models are randomly sampled from the top beam search results. The average length of dialogues is employed as an evaluation metric, as in Li et al. (2016b).

Machine-machine simulation is conducted in a way similar to (Li et al., 2016b): we let two bots equipped with the same model talk with each other in simulated dialogues. Each dialogue is initialized with the first utterance of a test example, and terminated according to the termination conditions for reward estimation in Section 3.2. In human-machine conversation, we recruit native speakers as testers and ask them to talk with the bots equipped with the three models. Each time, a bot is randomly picked for a tester, and the tester does not know which model is behind it. Every tester finishes dialogues with each bot. To make a fair comparison, we let the bots start the dialogues. A starting message in a dialogue is randomly sampled from the test data and copied for all the bots. A dialogue is terminated if (1) the tester thinks the conversation cannot be continued (e.g., due to bad relevance or repetitive content, etc.); or (2) the bot gives repetitive responses in two consecutive turns (measured as in Section 3.2). The evaluation metric is calculated over all dialogues collected for each model.

Table 4(b) reports the evaluation results. In both experiments, SL-DAGM and RL-DAGM lead to longer conversations, and the improvements of both models over the baseline are statistically significant (t-test, p-value ). Improvements in human-machine conversation are smaller than those in machine-machine simulation, indicating a gap between the simulation environment and the real conversation environment and encouraging us to consider online optimization in human-machine conversations in the future. RL-DAGM is better than SL-DAGM in both experiments, indicating the efficacy of reinforcement learning. In addition to the overall average length, we also show the distribution of average dialogue length across different testers in human-machine conversation in Figure 2. Although there is variance among the testers, the overall trend is consistent with the numbers in Table 4(b).

The reason that our models are better is that they capture conversational patterns in human-human interactions and obtain further optimization through reinforcement learning. First, the models can pro-actively switch contexts in a smooth way. In machine-machine simulation, % (SL) and % (RL) of the dialogues contain at least one CS.*; in human-machine conversation, the two percentages are % (SL) and % (RL) respectively. More interestingly, in machine-machine simulation, the average lengths of dialogues without CS.* are only (SL) and (RL) respectively, which are comparable with or even worse than RL-S2S, while the average lengths of dialogues with CS.* are (SL) and (RL) respectively. The results demonstrate the importance of context switch for engagement in open domain conversation, and one significant effect of RL is promoting context switch in interactions for future engagement, even at some sacrifice of relevance in the current turn (e.g., more 0-rated responses than SL-DAGM in Table 4(a)). Second, the models can drive conversations by asking questions. In machine-machine simulation, % (SL) and % (RL) of the dialogues contain at least one question. The percentages in human-machine conversation are % (SL) and % (RL) respectively. We show examples of machine-machine simulation and human-machine conversation in the Appendix.

5 Related Work

A common practice for building an open domain dialogue model is to learn a generative model in an end-to-end fashion. On top of the basic sequence-to-sequence with attention architecture (Vinyals and Le, 2015; Shang et al., 2015), various extensions have been proposed to tackle the “safe response” problem (Li et al., 2015; Mou et al., 2016; Xing et al., 2017a); to model complicated structures of conversational contexts (Serban et al., 2016; Sordoni et al., 2015; Xing et al., 2017b); to bias responses to some specific persona or emotions (Li et al., 2016a; Zhou et al., 2017); and to pursue better optimization strategies (Li et al., 2017b, 2016b). In this work, we consider open domain dialogue generation with dialogue acts. Unlike task-oriented dialogue systems Young et al. (2013); Wen et al. (2016) where task specific dialogue acts have been extensively applied for dialogue management, only a little work on open domain dialogue modeling takes dialogue acts into account. Most of the existing work stops at performing utterance classification or clustering (Kim et al., 2010, 2012; Ivanovic, 2005; Wallace et al., 2013; Ritter et al., 2010). Recently, Zhao et al. (2017) incorporate dialogue acts in the Switchboard Corpus as prior knowledge into dialogue generation. Serban et al. (2017a) leverage dialogue acts as features in their response selection model. Our work is unique in that we design special dialogue acts to explain social interactions, control open domain response generation, and thus guide human-machine conversations.

6 Conclusion

We design dialogue acts to describe human behavior in social interactions and propose open domain dialogue generation with the dialogue acts as policies. The dialogue model is learned through a supervised learning approach and a reinforcement learning approach. Empirical studies show that the proposed models can significantly outperform state-of-the-art methods in terms of both response quality and user engagement.

References

7 Appendix

7.1 Generation Network

Suppose that the encoder input is $x = (x_1, \ldots, x_T)$ where $x_k$ is the embedding of the $k$-th word, then the $k$-th hidden state of the encoder is given by $h_k = [\overrightarrow{h}_k; \overleftarrow{h}_k]$, where $\overrightarrow{h}_k$ and $\overleftarrow{h}_k$ are calculated as in Equation (1). Positions corresponding to missing utterances in $x$ are padded with zeros. Let $H = (h_1, \ldots, h_T)$, then in decoding the $j$-th word $y_j$, $H$ is summarized as a context vector $c_j$ through an attention mechanism:

$$c_j = \sum_{k=1}^{T} \alpha_{j,k} h_k, \qquad \alpha_{j,k} = \frac{\exp(e_{j,k})}{\sum_{k'=1}^{T} \exp(e_{j,k'})}, \qquad e_{j,k} = v^{\top} \tanh\big(W_{\alpha} [s_{j-1}; h_k]\big),$$

where $v$ and $W_{\alpha}$ are parameters, and $s_{j-1}$ is the $(j-1)$-th hidden state of the decoder GRU, in which $s_j$ is calculated by

$$s_j = \mathrm{GRU}\big(s_{j-1}, [e(y_{j-1}); c_j]\big).$$

The generation probability of $y_j$ is then defined as

$$p(y_j \mid y_{<j}, x) = \mathbb{I}(y_j)^{\top}\, \mathrm{softmax}\big(W_o s_j + b_o\big),$$

where $\mathbb{I}(y_j)$ is a one-hot vector indicating the index of $y_j$ in the vocabulary. $p(r_t \mid s_t, a_t)$ is finally defined as

$$p(r_t \mid s_t, a_t) = \prod_{j} p(y_j \mid y_{<j}, x).$$

7.2 Implementation Details of the Dialogue Act Classifier

We randomly split the labeled dialogues as , , and dialogues for training, validation, and test respectively. Utterances in the three sets number , , and respectively. In training, we represent dialogue acts as probability distributions by averaging the labels given by the three annotators. For example, if an utterance is labeled as "CM.S", "CM.S", and "CS.S", then the probability distribution assigns 2/3 to CM.S, 1/3 to CS.S, and 0 to the other acts. In test, we predict the dialogue act of an utterance as the act with the maximum predicted probability. To avoid overfitting, we pre-train word embeddings using word2vec (https://code.google.com/archive/p/word2vec/) with an embedding size of on the million data and fix them in training. We set the embedding size of the dialogue acts and the hidden state size of the biGRUs as , and the dimensions of the first layer and the second layer of the MLP as and respectively. We optimize the objective function (i.e., Equation (3)) using back-propagation, and the parameters are updated by stochastic gradient descent with the AdaDelta algorithm (Zeiler, 2012). The best performing model on the validation data is picked for test.
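A tiny sketch of this soft-label construction (the ordering of acts in the vector is an assumption):

```python
from collections import Counter

ACTS = ["CM.S", "CM.Q", "CM.A", "CS.S", "CS.Q", "CS.A", "O"]

def soft_label(annotations):
    """Average the annotators' labels into a probability distribution over acts.
    e.g. ["CM.S", "CM.S", "CS.S"] -> 2/3 on CM.S, 1/3 on CS.S, 0 elsewhere."""
    counts = Counter(annotations)
    return [counts[a] / len(annotations) for a in ACTS]
```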

7.3 Implementation Details of the Dialogue Model

In learning of the generation network, we set the size of word embedding as and the size of hidden vectors as in both the encoder and the decoder. Both the encoder vocabulary and the decoder vocabulary contain words. Words out of the vocabularies are replaced by a special token "UNK". We employ the AdaDelta algorithm (Zeiler, 2012) to train the generation network with a batch size of . We set the initial learning rate as and reduce it by half if perplexity on the validation set begins to increase. We stop training if the perplexity on the validation set keeps increasing for two successive epochs.

In learning of the policy network, we set the size of word embedding, the size of dialogue act, and the size of hidden states of the biGRU as . There are neurons in the first layer of the MLP and neurons in the second layer of the MLP. Vectors in the policy network have smaller sizes than those in the generation network because the complexity of dialogue act prediction is much lower than language generation.

In reinforcement learning, the size of mini-batch is and learning rate is fixed as . To estimate the reward, we train a dual LSTM (Lowe et al., 2015) with the size of word embedding and the size of hidden states as . Responses from the simulated dialogues are generated with a beam size .
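For reference, a condensed sketch of a dual-LSTM relevance scorer in the spirit of Lowe et al. (2015); the bilinear scoring form and the default dimensions are assumptions rather than details confirmed in this paper. It would be trained with binary cross entropy, using randomly sampled responses as negatives, matching the negative sampling mentioned in Section 3.2.

```python
import torch
import torch.nn as nn

class DualLSTM(nn.Module):
    """Dual encoder relevance model: encode context and response separately,
    then score them with a bilinear match sigma(c^T M r)."""
    def __init__(self, vocab_size, emb_dim=200, hidden_dim=200):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.context_rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.response_rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.bilinear = nn.Parameter(torch.eye(hidden_dim))  # match matrix M

    def forward(self, context_ids, response_ids):
        _, (h_ctx, _) = self.context_rnn(self.emb(context_ids))    # (1, batch, hidden)
        _, (h_rsp, _) = self.response_rnn(self.emb(response_ids))
        # probability that the response is relevant to the context
        score = torch.einsum("bh,hk,bk->b", h_ctx[-1], self.bilinear, h_rsp[-1])
        return torch.sigmoid(score)
```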

In RL-S2S, we define 8 dull responses according to the frequency of responses in the training set. Table 8 gives these responses.

No. Chinese responses English translations
1 我不知道 I do not know.
2 我觉得你说得对 I think you are right.
3 你是男的女的 Are you a man or a woman?
4 嗯 我知道 I see.
5 我也不知道 I do not know either.
6 你说的对 You are right.
7 我也是这么想的 I think so.
8 好啊 OK.
Table 8: Dull responses for learning RL-S2S.

7.4 More Examples of Response Generation

We compare SL-DAGM and RL-DAGM with baseline models in terms of response quality for given contexts with more examples in Table 9.

Context Responses
你眼袋略大,常熬夜? 我大学毕业后没有熬夜过? 早睡早起好孩子 是的
You have dark circles under your eyes. Always stay up late? I have never stayed up late since I graduated from university. Good boy! Early to bed, early to rise. Yes.
S2SA: 我也是这么想的。 I think so
HRED: 哈哈。 Haha.
VHRED: 你是在夸我吗? So you are praising me?
RL-S2S: 我也是好孩子。 I am a good boy too.
SL-DAGM :我经常倒时差睡眠不好。 But I often have poor sleep because of jet lag. (CS.S)
RL-DAGM :想问你每天几点起床。 Then when do you get up everyday? (CS.Q)
CM.Q: 你要睡了吗? Are you going to sleep?
CM.S: 我也是早睡。 I sleep early as well.
我们一起合租怎么样? 多少钱? 800 加我微信
How about renting an apartment together? How much? 800. Then add me as a contact in WeChat.
S2SA: 邮件发你了。 I have sent you an email.
HRED: 你加我微信可以吗? Could you please add me in your WeChat?
VHRED: 加我微信。 Add me in your WeChat.
RL-S2S: 邮件发你了。 I have sent you an email.
SL-DAGM: 我想和你一起玩桌游。 I want to play board games with you. (CS.S)
RL-DAGM: 什么时候联系。 When can I contact you? (CM.Q)
CS.Q: 你在哪个区? Which district do you live in?
CM.S: 给我微信号。 Give me your WeChat ID.
Table 9: More examples of response generation. Utterances in the context are split by “”.

7.5 Examples in Engagement Test

Table 10 gives some examples of machine-machine simulation. Unlike the dialogues from RL-S2S, which quickly converge to loops, dialogues from our models smoothly move forward under the management of the dialogue acts. The dialogue acts let us know why such responses are generated and make the simulated dialogues closer to human dialogues, with moderate context continuation and jumping out of contexts at the proper timing. Table 11 and Table 12 show some examples from the test of human-machine conversation. We denote a machine turn as "M" and a human turn as "H". After each example, we give the reason for termination, in which "EOD-H" means the dialogue is terminated by the tester and "EOD-R" means the dialogue is terminated by the repetition check, with the next generated turn attached. Compared to dialogues with the baseline, dialogues with our models can go deeper with much richer content, although a side effect is that sometimes responses from CS.* might be nonsensical (e.g., the first example of SL-DAGM). This sheds light on our future direction to further improve the generation network with knowledge.

RL-S2S SL-DAGM RL-DAGM
A: 无锡妹子你好!
B: 我是无锡的,你在哪儿?
A: 我在上海。
B: 我也在上海。
A: 我也在上海。
A: Hello, Wuxi girl!
B: I come from Wuxi. How about you?
A: I come from Shanghai.
B: I come from Shanghai too.
A: I come from Shanghai too.
A: 无锡妹子你好! (CM.S)
B: 你也无锡的? (CM.Q)
A: 嗯嗯,无锡的。 (CM.A)
B: 我在无锡上学呢。 (CM.S)
A: 无锡有什么好吃的吗?(CS.Q)
A: Hello, Wuxi girl! (CM.S)
B: You also come from Wuxi? (CM.Q)
A: Yes, I am. (CM.A)
B: I am a college student in Wuxi. (CM.S)
A: Is there any delicious food in Wuxi? (CS.Q)
A: 无锡妹子你好! (CM.S)
B: 你好,你也在江苏? (CS.Q)
A: 不是土著,在江苏读书。(CS.S)
B: 江苏本地人很多啊。 (CM.S)
A: 但是我身边都是外地人。(CM.S)
A: Hello, Wuxi girl! (CM.S)
B: Hello, you also in Jiangsu province? (Wuxi is a city in Jiangsu) (CS.Q)
A: I am not born in Jiangsu, but I go to college in Jiangsu. (CS.S)
B: Many people in Jiangsu are natives. (CM.S)
A: But most people around me are not natives. (CM.S)
A: 去新加坡马来西亚可以学英语.
B: 新加坡有好玩的吗?
A: 有好玩的吗?
B: 新加坡有什么好玩的?我想去看看。
A: I can learn English in Singapore and Malaysia.
B: Are there any interesting places in Singapore?
A: Are there any interesting places?
B: Are there any interesting places in Singapore? I want to have a tour in Singapore.
A: 去新加坡马来西亚可以学英语。 (CM.S)
B: 你英文名是什么? (CS.Q)
A: 我应该可以写出来。 (CM.S)
B: 那你姓什么。 (CM.S)
A: 我什么时候写出来? (CS.Q)
A: I can learn English in Singapore and Malaysia. (CM.S)
B: What is your English name? (CS.Q)
A: I think I can write it out. (CM.S)
B: What is your family name? (CM.S)
A: When shall I write my English name? (CS.Q)
A: 去新加坡马来西亚可以学英语。 (CM.S)
B: 你也想去新加坡学英语? (CM.Q)
A: 一样一样。 (CM.A)
B: 你是什么专业的? (CS.Q)
A: 我们学校有什么专业? (CM.Q)
A: I can learn English in Singapore and Malaysia. (CM.S)
B: You also want to go there to learn English? (CM.Q)
A: Yep. (CM.A)
B: What are you major in? (CS.Q)
A: What majors do we have? (CM.Q)
Table 10: Comparison of simulated dialogues from different models.
RL-S2S SL-DAGM RL-DAGM
M: 喝汤就饱了。
H: 可以减肥。
M: 减肥会不会很痛苦啊?
H: 不吃不喝确实很难受。
M: 我都不敢吃了。
H: 那就不吃了?
EOD-R (我也不敢吃了。)
M: I am full with only soup.
H: Then you can lose weight.
M: Is it tough to lose weight?
H: It’s indeed tough when you have to diet.
M: I have to diet.
H: Then diet?
EOD-R (I have to diet as well.)
M: 喝汤就饱了。 (CM.S)
H: 可以减肥。 (CS.S)
M: 吃不胖的体质伤不起啊! (CM.S)
H: 羡慕。 (CM.A)
M: 我正在增肥。 (CS.S)
H: 多吃冰淇淋。 (CM.S)
M: 冰与火之歌都吃过了。 (CM.S)
EOD-H
M: I am full with only soup. (CM.S)
H: Then you can lose weight. (CS.S)
M: I really want to get a little fatter. (CM.S)
H: I am jealous of you! (CM.A)
M: I am trying to gain weight. (CS.S)
H: Then have more ice cream. (CM.S)
M: I even ate Song of Ice and Fire.(CM.S)
EOD-H
M: 喝汤就饱了。 (CM.S)
H: 可以减肥。 (CS.S)
M: 你吃蛋白粉? (CS.Q)
H: 我不吃蛋白粉。 (CM.A)
M: 酵素粉是什么?(CS.Q)
H: 减肥神物。 (CM.A)
M: 我去试试! (CM.S)
H: 快去吧,吃酵素要坚持。 (CM.S)
M: 药店有卖酵素粉的吗?多少钱啊? (CS.Q)
H: 有,不贵。 (CM.A)
EOD-R (请问药店有卖的吗?多少钱?)
M: I am full with only soup. (CM.S)
H: Then you can lose weight. (CS.S)
M: Do you eat albumen powder? (CS.Q)
H: Nope. (CM.A)
M: What is enzyme powder? (CS.Q)
H: Something can make you slim. (CM.A)
M: I want to try it! (CM.S)
H: Go ahead, and do not give up. (CM.S)
M: Can I get enzyme powder in pharmacy? How much is it? (CS.Q)
H: Yes you can. It’s not expensive. (CM.A)
EOD-R (Can I get it in pharmacy? How much?)
Table 11: Example 1 of human-machine conversation. “M” means a machine turn, and “H” means a human turn.
RL-S2S SL-DAGM RL-DAGM
M: 你用淡妆比较好看。
H: 嗯 我听你的。
M: 谢谢你,一起努力啊。
H: 一起努力啊。
EOD-R (一起努力!)
M: Light makeup looks good for you.
H: Well, you are right.
M: Thank you. Let’s do it together!
H: Do it together!
EOD-R (Do it together!)
M: 你用淡妆比较好看。 (CM.S)
H: 嗯 我听你的。 (CM.A)
M: 只用粉底和口红吧。 (CS.S)
H: 你推荐个粉底。 (CM.S)
M: 资生堂的不错。 (CM.S)
H: 这个好。 (CM.S)
M: 因为我用过一段时间,觉得很不错(CM.S)
H: 那太好了。 (CM.S)
M: 你用过几次?(CM.Q)
H: 我没用过(CM.A)
EOD-R(你用过几次?)
M: Light makeup looks good for you. (CM.S)
H: Well, you are right. (CM.A)
M: Only use foundation and lipstick. (CS.S)
H: Recommend me a brand of foundation. (CM.S)
M: Shiseido is good. (CM.S)
H: That is great. (CM.S)
M: Because I used to try it for sometime, and feel good. (CM.S)
H: Sounds good. (CM.S)
M: How many times have you used it? (CM.Q)
H: Never. (CM.A)
EOD-R (How many times have you used it?)
M: 你用淡妆比较好看。 (CM.S)
H: 嗯 我听你的。 (CM.A)
M: 画浓妆会不会气色很差? (CS.Q)
H: 不会的。 (CM.A)
M: 那有什么副作用? (CS.Q)
H: 皮肤可能不好。 (CM.S)
EOD-R (有什么副作用?)
M: Light makeup looks good for you. (CM.S)
H: Well, you are right. (CM.A)
M: Does heavy makeup make your complexion look bad? (CS.Q)
H: Nope. (CM.A)
M: Then is there any side-effect? (CS.Q)
H: My skin will become worse. (CM.S)
EOD-R (is there any side-effect?)
Table 12: Example 2 of human-machine conversation. “M” means a machine turn, and “H” means a human turn.