DAL: Dual Adversarial Learning for Dialogue Generation

06/23/2019 · Shaobo Cui et al. · Tsinghua University; Baidu, Inc.

In open-domain dialogue systems, generative approaches have attracted much attention for response generation. However, existing methods are heavily plagued by two problems: safe responses and unnatural responses. To alleviate these two problems, we propose a novel framework named Dual Adversarial Learning (DAL) for high-quality response generation. DAL is the first work to exploit the duality between query generation and response generation to avoid safe responses and increase the diversity of the generated responses. Additionally, DAL uses adversarial learning to mimic human judges and guide the system toward natural responses. Experimental results demonstrate that DAL effectively improves both the diversity and the overall quality of the generated responses. DAL outperforms state-of-the-art methods on both automatic metrics and human evaluations.


1 Introduction

In recent years, open-domain dialogue systems have gained much attention owing to their great potential in applications such as educational robots, emotional companionship, and chitchat. Existing approaches for open-domain dialogue systems can be divided into two categories: retrieval-based approaches Hu et al. (2014); Ji et al. (2014) and generative approaches Ritter et al. (2011); Shang et al. (2015). Retrieval-based approaches build on conventional information retrieval techniques and rely strongly on the underlying corpus Wang et al. (2013); Lu and Li (2013). Since their capability is strongly limited by the corpus, generative approaches have become increasingly prominent in open-domain dialogue research. The de facto backbone of generative approaches is the Seq2Seq model Bahdanau et al. (2014), which is essentially an encoder-decoder neural network architecture. Despite their success, the Seq2Seq model and its variants Sordoni et al. (2015); Vinyals and Le (2015) are heavily plagued by safe responses (generic and dull responses such as "I don't know" or "Me too") and unnatural responses (such as "I want to go, but I don't want to go").

In this paper, we propose a novel framework named Dual Adversarial Learning (DAL) to alleviate the aforementioned two problems. DAL consists of two generative adversarial networks (GANs): one for query generation and the other for response generation. The response generation model $G_{qr}$ transforms the query domain $\mathcal{Q}$ into the response domain $\mathcal{R}$, while the query generation model $G_{rq}$ transforms $\mathcal{R}$ into $\mathcal{Q}$. Here we consider the response generation task and the query generation task as dual tasks. The generators of these two GANs are connected through the duality constraint. As such, in DAL, two kinds of signals jointly instruct the optimization of the generators: (1) the dual signal from the duality constraint between the two generators; (2) the adversarial signal from the discriminators. The dual signal models the mutual relation between query generation and response generation. We use an example to illustrate this mutual relation: for the query "Where to have dinner?", a more diverse and specific response such as "The Indian cuisine around the corner is great" usually has a higher probability of being transformed back to the given query than a safe response such as "I don't know". DAL takes full advantage of this intuition via dual learning, which avoids generating safe responses and improves the diversity of the generated responses. Additionally, in order to make the generated responses as natural as possible, the adversarial signal in DAL mimics human judges to alleviate unnatural responses. We compare DAL with state-of-the-art methods through extensive experiments, and DAL demonstrates superior performance regarding automatic metrics, human evaluations, and efficiency.

There are crucial differences between our dual approach and Maximum Mutual Information (MMI) Li et al. (2016), though both utilize the reverse dependency to improve the diversity of the generated responses. Because the mutual information objective is challenging to optimize directly, the distribution $P(r \mid q)$ in MMI is the same as that in vanilla Seq2Seq: it is trained only with the maximum likelihood objective at training time (we use $P(r \mid q)$ to denote the probability distribution of predicting the response $r$ given the query $q$). The mutual information in MMI is utilized only at inference time, and this inference process is not only time-consuming but also inaccurate. In contrast, $P(r \mid q)$ in our dual approach is trained with not only the maximum likelihood objective but also the diversity objective (the duality constraint) at training time. Since the dual approach directly incorporates the reverse dependency information at training time, it avoids the time-consuming inference that plagues MMI. Additionally, the dual approach does not need to maintain a large candidate response set for the time-consuming reranking strategy of MMI-bidi (one variant of MMI). The dual approach thus shows superior efficiency over MMI in real-life applications, as demonstrated in our efficiency experiment.

Our dual approach is quite different from the reinforcement-learning-based structure with two Seq2Seq models in Zhang et al. (2018a).² In Zhang et al. (2018a), the forward model, which generates a response $r$ given a query $q$, uses the conditional probability calculated by the backward model as the coherence measure to guide its training in the reinforcement learning process. Similarly, the backward model, which generates a query $q$ given a response $r$, uses the conditional probability calculated by the forward model as its coherence measure. In our work, however, we utilize the joint probability $P(q, r)$ to connect the two Seq2Seq models and thus avoid unstable and time-consuming reinforcement learning in the dual approach. Recently, we noticed that Zhang et al. (2018b) propose an adversarial learning method named adversarial information maximization (AIM) to improve the informativeness and diversity of generated responses. Though AIM also uses two reverse models, it involves the calculation of the two conditional probabilities $P(r \mid q)$ and $P(q \mid r)$, whereas our DAL involves the two factorizations of the joint probability, $\hat{P}(q)\,P(r \mid q)$ and $\hat{P}(r)\,P(q \mid r)$. As for the model structure, the two reverse Seq2Seq models in AIM share one discriminator, while the two reverse Seq2Seq models in DAL each have their own discriminator. The reason for these differences is that the objective of DAL is to enforce the duality constraint, while AIM uses adversarial learning to optimize a variational lower bound on the mutual information between query and response. Besides, our DAL framework differs strongly from previous structures composed of two GANs, such as CycleGAN Zhu et al. (2017), DiscoGAN Kim et al. (2017) and DualGAN Yi et al. (2017). Those works can only be applied to image translation, and their two generators are connected by cycle consistency, i.e., for each image $x$ in domain $X$, the image translation cycle is supposed to bring $x$ back to the original image: $x \to G(x) \to F(G(x)) \approx x$. However, cycle consistency is difficult to apply to the text generation task. In our paper, we use the joint distribution of query-response pairs rather than cycle consistency to enforce the duality between the two dual generators.

²Our dual approach was completed independently of this work; in addition to the crucial difference described above, we did not notice this paper until our work was done.

The contributions of this paper are listed as follows:

To the best of our knowledge, this is the first work that adopts the duality to avoid safe responses in open-domain dialogue systems. It sheds light on the utility of query generation in improving the performance of response generation.

DAL is a novel framework that integrates dual learning and adversarial learning, which complement each other and jointly contribute to generating both diverse and natural responses.

The rest of this paper is organized as follows. Related work is reviewed in Section 2. The DAL framework is introduced in Section 3, and the training of DAL is described in Section 4. Experimental results are presented in Section 5, followed by the conclusion in Section 6.

2 Related Work

(a) The architecture of DAL.
(b) The architecture of the discriminator.
Figure 1: Dual Adversarial Learning.

2.1 Dual Learning

Many machine learning tasks have emerged in dual forms, such as dual neural machine translation (dual-NMT) He et al. (2016), and image classification versus conditional image generation van den Oord et al. (2016). Dual learning He et al. (2016) is built on the assumption that the dual correlation can be used to improve both the primal task and its dual task: the primal task maps from input space $X$ to output space $Y$, whereas the dual task takes samples from space $Y$ and maps them back to space $X$. Tang et al. (2017) implemented a dual framework for question answering. Their model regards answer selection (given a question and several candidate answers, select the most satisfying answer) and question generation as dual tasks, which increases the performance of both.

2.2 Adversarial Learning

Adversarial learning Goodfellow et al. (2014), or the Generative Adversarial Network (GAN), has proven to be a promising approach for generation tasks. GAN has achieved great success on image generation Huang et al. (2017). However, since the decoding phase of the Seq2Seq model involves sampling discrete words, GAN cannot be directly applied to generative text generation. By treating sequence generation as an action-taking problem in reinforcement learning, Li et al. (2017) applied GAN to dialogue generation, using the output of the discriminator as the reward for the generator's optimization. Xu et al. (2017) introduced an approximate embedding layer to solve the non-differentiable problem caused by the discrete decoding phase.

2.3 Work on the Safe Response Problem

There is some existing work on the safe response problem. The first kind of approach introduces specific keywords Mou et al. (2016) or topic information Xing et al. (2017) into the generated responses. These methods shift the difficulty from diverse response generation to keyword or topic prediction, which are themselves challenging tasks. The second kind of approach takes the reverse dependency (generating the query given the response) into consideration. Li et al. (2016) considered the reverse dependency and proposed the Maximum Mutual Information (MMI) method, which is empirically plagued by ungrammatical responses (MMI-antiLM) and a huge decoding space (MMI-bidi).

3 DAL Framework

In this section, we first give an overview of the DAL framework in Section 3.1 and then elaborate on the discriminators and the dual generators in Section 3.2 and Section 3.3, respectively. Finally, we discuss why duality promotes diversity in Section 3.4.

3.1 Overview

The architecture of DAL is presented in Figure 1(a). The real query and response are denoted by $q$ and $r$, whereas the generated query and response are denoted by $\hat{q}$ and $\hat{r}$. DAL consists of two GANs (one for query generation and the other for response generation). The generators are denoted by $G_{qr}$ (query-to-response) and $G_{rq}$ (response-to-query), and the corresponding discriminators are denoted by $D_{qr}$ and $D_{rq}$. The input of $G_{qr}$ is a real query $q$ and the output is the generated response $\hat{r}$. Similarly, for $G_{rq}$, the input is a real response $r$ and the output is the generated query $\hat{q}$. For $D_{qr}$, the input is the ficto-facto query-response pair $(q, \hat{r})$, and the output is the estimated probability of the pair being human-generated. Analogously, the input of $D_{rq}$ is the ficto-facto pair $(\hat{q}, r)$, and the output is the estimated probability of that pair being human-generated. $G_{qr}$ and $G_{rq}$ are connected by the duality constraint derived from the joint probability $P(q, r)$. The adversarial signals from the discriminators, $D_{qr}(q, \hat{r})$ and $D_{rq}(\hat{q}, r)$, are passed to the corresponding generators as rewards through policy gradient.
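To make the data flow concrete, the following is a minimal, hypothetical PyTorch-style skeleton of the four DAL components and the two signals. All names and the sample/log_prob interfaces are our own illustration under the definitions above, not the authors' released code:

```python
import torch.nn as nn

class DAL(nn.Module):
    """Sketch of Dual Adversarial Learning: two generators tied by a
    duality constraint, each with its own pair discriminator."""

    def __init__(self, G_qr, G_rq, D_qr, D_rq, lm_q, lm_r):
        super().__init__()
        self.G_qr, self.G_rq = G_qr, G_rq  # Seq2Seq: query->response, response->query
        self.D_qr, self.D_rq = D_qr, D_rq  # discriminators over utterance pairs
        self.lm_q, self.lm_r = lm_q, lm_r  # frozen pre-trained language models

    def signals(self, q, r):
        r_hat = self.G_qr.sample(q)        # generated response (assumed interface)
        q_hat = self.G_rq.sample(r)        # generated query (assumed interface)
        # Adversarial signal: probability that each ficto-facto pair is human-made.
        reward_qr = self.D_qr(q, r_hat)
        reward_rq = self.D_rq(q_hat, r)
        # Dual signal: gap between the two factorizations of log P(q, r).
        dual_gap = (self.lm_q(q) + self.G_qr.log_prob(r, q)
                    - self.lm_r(r) - self.G_rq.log_prob(q, r))
        return reward_qr, reward_rq, dual_gap.pow(2).mean()
```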

(a) An example corpus.
(b) Queries and responses with duality constraint.
Figure 2: An example to illustrate why duality promotes diversity.

3.2 Discriminator

The discriminator mimics a human judge and guides the generator to generate natural utterances. The architecture of the discriminator is shown in Figure 1(b). Gated Recurrent Unit (GRU) based Bahdanau et al. (2014) neural networks are used to obtain the query embedding $v_q$ and the response embedding $v_r$. The concatenated vector $v = [v_q; v_r]$ is used as the abstract representation of the query-response pair; $v$ is further passed through two fully-connected layers, and the output of the last fully-connected layer is the estimated probability of the query-response pair being human-generated. The objective of the discriminator is formalized as follows:

$$\max_{D}\ \mathbb{E}_{(u,v)\sim P_{\text{data}}}\big[\log D(u, v)\big] + \mathbb{E}_{\hat{v}\sim G(\cdot \mid u)}\big[\log\big(1 - D(u, \hat{v})\big)\big] \quad (1)$$

where $P_{\text{data}}$ denotes the real-world query-response distribution, $u$ is the input utterance, and $\hat{v}$ is the generated output. For the response generation task, $D$ is $D_{qr}$ and $G$ is $G_{qr}$, while for the query generation task, $D$ is $D_{rq}$ and $G$ is $G_{rq}$.
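A minimal PyTorch sketch of this discriminator, assuming single-layer GRU encoders and hidden sizes of our own choosing (the paper's exact hyperparameters are detailed in its appendix):

```python
import torch
import torch.nn as nn

class PairDiscriminator(nn.Module):
    """GRU-based discriminator over (query, response) pairs: encode each
    utterance, concatenate the embeddings, score with two FC layers."""

    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.query_enc = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.resp_enc = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.fc = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),  # logit of "human-generated"
        )

    def forward(self, query_ids, resp_ids):
        _, hq = self.query_enc(self.embed(query_ids))    # final state (1, B, H)
        _, hr = self.resp_enc(self.embed(resp_ids))
        pair = torch.cat([hq[-1], hr[-1]], dim=-1)       # v = [v_q; v_r], (B, 2H)
        return torch.sigmoid(self.fc(pair)).squeeze(-1)  # P(pair is human)
```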

3.3 Dual Generators

Both generators adopt the conventional encoder-decoder Seq2Seq structure, in which GRU is used as the basic unit. The correlation between the dual tasks (query generation and response generation) can be represented with the joint probability $P(q, r)$:

$$P(q, r) = \hat{P}(q)\, P(r \mid q; \theta_{qr}) = \hat{P}(r)\, P(q \mid r; \theta_{rq}) \quad (2)$$

where $\hat{P}(q)$ and $\hat{P}(r)$ are language models pre-trained on the query corpus and the response corpus; in this paper, we use smoothed bigram language models for both. $G_{qr}$ and $G_{rq}$ are the dual generators, parameterized by $\theta_{qr}$ and $\theta_{rq}$. Both $P(r \mid q; \theta_{qr})$ and $P(q \mid r; \theta_{rq})$ can be obtained through the chain rule:

$$P(r \mid q; \theta_{qr}) = \prod_{t=1}^{|r|} P(r_t \mid r_{<t}, q; \theta_{qr}), \qquad P(q \mid r; \theta_{rq}) = \prod_{t=1}^{|q|} P(q_t \mid q_{<t}, r; \theta_{rq}),$$

where the per-step conditionals are given by the decoders of the two Seq2Seq models.
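As a concrete reading of this chain-rule factorization, here is a small PyTorch sketch that turns teacher-forced decoder logits into the sequence log-probability $\log P(r \mid q; \theta_{qr})$; the tensor layout is our assumption:

```python
import torch.nn.functional as F

def sequence_log_prob(decoder_logits, target_ids, pad_id=0):
    """Chain-rule log-probability log P(target | source; theta): the sum
    of per-step log-probabilities of the gold tokens.

    decoder_logits: (B, T, V) decoder scores at each time step.
    target_ids:     (B, T) gold token ids (teacher-forced targets).
    """
    log_probs = F.log_softmax(decoder_logits, dim=-1)                    # (B, T, V)
    tok_lp = log_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)  # (B, T)
    mask = (target_ids != pad_id).float()                                # ignore padding
    return (tok_lp * mask).sum(dim=-1)                                   # (B,)
```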

3.4 Duality Promotes Diversity

To better illustrate why duality increases the diversity of the generated responses, we show some query-response pair examples in Figure 2(a). Each directional arrow starts from a query and ends at its corresponding response. It can be observed that: (1) the safe response "I don't know" (denoted $r_s$) connects to many queries, i.e., $\{q_1, q_2, q_3, q_4\}$; (2) the more diverse and specific response "The Indian cuisine around the corner is great" (denoted $r_d$), nevertheless, corresponds to exactly one query $q_1$, "Where to have dinner?".³

³There may exist several other queries that can be replied to with "The Indian cuisine around the corner is great", but their number is much smaller than the number of queries that can be replied to with "I don't know". For simplicity, we show only one query here for the response "The Indian cuisine around the corner is great". This does not affect the following analysis.

In the training process of $G_{rq}$, the increase of $\log P(q_1 \mid r_d; \theta_{rq})$, denoted by $\Delta \log P(q_1 \mid r_d)$, is much bigger than the increase of $\log P(q_1 \mid r_s; \theta_{rq})$, denoted by $\Delta \log P(q_1 \mid r_s)$. Formally,

$$\Delta \log P(q_1 \mid r_d) \gg \Delta \log P(q_1 \mid r_s).$$

The reason behind this phenomenon is as follows. The safe response $r_s$ relates to the queries $\{q_1, q_2, q_3, q_4\}$. When $G_{rq}$ is provided with $(q_2, r_s)$ or $(q_3, r_s)$ and is optimized to increase the log conditional probability $\log P(q_2 \mid r_s; \theta_{rq})$ or $\log P(q_3 \mid r_s; \theta_{rq})$, it is inevitable that $\log P(q_1 \mid r_s; \theta_{rq})$ decreases to a certain extent, since these log conditional probabilities share the same parameters $\theta_{rq}$. The same principle applies when $G_{rq}$ is provided with $(q_4, r_s)$. However, the diverse response $r_d$ is uniquely connected to the query $q_1$; in that case, $G_{rq}$ devotes all its effort to increasing $\log P(q_1 \mid r_d; \theta_{rq})$.

With the duality constraint in Eq. 2, we obtain:

$$\log \hat{P}(q) + \log P(r \mid q; \theta_{qr}) = \log \hat{P}(r) + \log P(q \mid r; \theta_{rq}) \quad (3)$$

Since both $\hat{P}(q)$ and $\hat{P}(r)$ are obtained from the pre-trained language models, both are constant for any query-response pair $(q, r)$ during training. Taking increments on both sides of Eq. 3, we obtain:

$$\Delta \log P(r \mid q; \theta_{qr}) = \Delta \log P(q \mid r; \theta_{rq}).$$

That is, the increase of $\log P(r \mid q; \theta_{qr})$ and the increase of $\log P(q \mid r; \theta_{rq})$ are supposed to be equal for any query-response pair $(q, r)$, since the language-model terms are constant during the training process. Therefore,

$$\Delta \log P(q_1 \mid r_d) \gg \Delta \log P(q_1 \mid r_s)$$

in turn makes

$$\Delta \log P(r_d \mid q_1) \gg \Delta \log P(r_s \mid q_1).$$

When $G_{qr}$ finishes its training process, we obtain $P(r_d \mid q_1; \theta_{qr}) > P(r_s \mid q_1; \theta_{qr})$. This indicates that $G_{qr}$ is more likely to assign a higher probability to the diverse response given the query.

We use Figure 2(b) to visually explain this intuition. We suppose that both queries and responses "possess" their own spatial space. The coordinates of the ellipses and the rectangles represent the locations of the queries and the responses in this space. The distance between a query $q$ and a response $r$ represents the probability of transforming between them, namely $P(r \mid q; \theta_{qr})$ and $P(q \mid r; \theta_{rq})$: the shorter the distance, the larger the probability. When $G_{qr}$ and $G_{rq}$ are provided with a query-response pair $(q, r)$, their training objectives are to increase the probabilities $P(r \mid q; \theta_{qr})$ and $P(q \mid r; \theta_{rq})$, i.e., to shorten the distance between $q$ and $r$. Since the safe response $r_s$ corresponds to $\{q_1, q_2, q_3, q_4\}$, its position is determined by all the involved queries. Because each of these queries attempts to "drag" $r_s$ close to itself, the safe response "chooses" to keep a distance from each of them to balance the involved queries. However, the diverse response $r_d$ corresponds to exactly one query $q_1$, so $r_d$ "selects" to stay as close to $q_1$ as possible. As can be seen from the figure, the distance between $r_d$ and $q_1$ is much shorter than the distance between $r_s$ and $q_1$, i.e., $P(r_d \mid q_1; \theta_{qr})$ is much larger than $P(r_s \mid q_1; \theta_{qr})$. In other words, with the duality constraint, $G_{qr}$ tends to generate diverse responses rather than safe responses.

4 Training of DAL

Duality Constraint for Diversity

Direct enforcement of the constraint in Eq. 2 is intractable, so the duality constraint is transformed into a regularization term:

$$\mathcal{L}_{dual} = \big(\log \hat{P}(q) + \log P(r \mid q; \theta_{qr}) - \log \hat{P}(r) - \log P(q \mid r; \theta_{rq})\big)^2 \quad (4)$$

We minimize $\mathcal{L}_{dual}$ to enforce the duality constraint and thereby generate more diverse responses.
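A direct sketch of Eq. 4 in PyTorch; the four inputs are per-sentence log-probabilities, e.g., from the frozen bigram language models and from the sequence_log_prob helper of Section 3.3 (the function name is ours):

```python
def duality_loss(log_p_q, log_p_r_given_q, log_p_r, log_p_q_given_r):
    """Squared gap between the two factorizations of log P(q, r) (Eq. 4).

    log_p_q, log_p_r:                 (B,) log-probs under the frozen LMs.
    log_p_r_given_q, log_p_q_given_r: (B,) log-probs under the two generators.
    """
    gap = (log_p_q + log_p_r_given_q) - (log_p_r + log_p_q_given_r)
    return gap.pow(2).mean()
```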

Adversarial Signal for Naturalness

The decoding phase of the Seq2Seq model involves sampling discrete words, which makes the optimization of the generator based on the discriminator's guidance non-differentiable. To circumvent this obstacle, we optimize each generator through reinforcement learning: the policy gradient passes the discriminator's adversarial signal to the generator. The discriminator gives a score $J$ based on its judgment of how likely the generated $\hat{v}$ is human-generated:

$$J = D(u, \hat{v}).$$

For response generation, $D$ is $D_{qr}$, $G$ is $G_{qr}$, $u$ is the real query $q$, and $\hat{v}$ is the generated response $\hat{r}$. Analogously, in query generation, $D$ is $D_{rq}$, $G$ is $G_{rq}$, $u$ is the real response $r$, and $\hat{v}$ is the generated query $\hat{q}$. $J$ is used as the reward for the optimization of $G$. With the likelihood ratio trick Williams (1992); Sutton et al. (2000), the gradient of $J$ with respect to the generator parameters $\Theta$ can be approximated as:

$$\nabla_{\Theta} J \approx \big(D(u, \hat{v}) - b\big)\, \nabla_{\Theta} \log P(\hat{v} \mid u; \Theta),$$

where the baseline $b$ is used to reduce the variance of the estimation while keeping it unbiased, and $P(\hat{v} \mid u; \Theta)$ is the probability distribution defined by the generator $G$.
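A minimal sketch of this policy-gradient step. The sample_with_log_prob interface is our assumption, and the batch-mean baseline is one common unbiased choice; the paper does not pin down its exact baseline here:

```python
import torch

def adversarial_step(G, D, u, optimizer):
    """One REINFORCE update: reward = D(u, v_hat), baseline = batch mean."""
    v_hat, log_prob = G.sample_with_log_prob(u)   # samples (B, T), log-probs (B,)
    with torch.no_grad():
        reward = D(u, v_hat)                      # (B,) prob of "human-generated"
        baseline = reward.mean()                  # variance-reduction baseline b
    loss = -((reward - baseline) * log_prob).mean()  # ascend E[(J - b) log P]
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward.mean().item()
```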

Combined Gradient

In DAL, the gradient for updating each generator is the weighted combination of the adversarial gradient $\nabla_{\Theta} J$ (for natural responses) and the duality gradient $-\nabla_{\Theta} \mathcal{L}_{dual}$ (for the avoidance of safe responses):

$$\nabla_{\Theta} = \alpha\, \nabla_{\Theta} J - \beta\, \nabla_{\Theta} \mathcal{L}_{dual} \quad (5)$$

where $\Theta \in \{\theta_{qr}, \theta_{rq}\}$ and $\alpha$, $\beta$ are weighting hyper-parameters.

Teacher Forcing

When the generator is trained with only the adversarial signal from the discriminator and the duality constraint, the training process easily collapses. This is because the discriminator is sometimes remarkably better than the corresponding generator in certain training batches: it can easily discriminate all the generated utterances from real ones, so the generator realizes that it generates low-quality samples but cannot figure out what a good sample looks like. To stabilize the training process, after each update with the combined gradient of Eq. 5 ($\nabla_{\theta_{qr}}$ or $\nabla_{\theta_{rq}}$), the generators are provided with real query-response pairs and are strengthened with maximum likelihood training, which is also known as Teacher Forcing Li et al. (2017); Lamb et al. (2016).

Require: Two language models $\hat{P}(q)$ and $\hat{P}(r)$ pre-trained on the query corpus and the response corpus.
Ensure: Trained generators $G_{qr}$ and $G_{rq}$.
1:  Randomly initialize $G_{qr}$, $G_{rq}$, $D_{qr}$, $D_{rq}$.
2:  Pre-train $G_{qr}$ and $G_{rq}$ using the maximum likelihood estimation objective.
3:  Pre-train $D_{qr}$ and $D_{rq}$ by Eq. 1.
4:  while models have not converged do
5:     for each discriminator step do
6:        Sample $(q, r)$ from real-world data.
7:        Update $D_{qr}$ by Eq. 1 with $(q, r)$ and $(q, \hat{r})$.
8:        Update $D_{rq}$ by Eq. 1 with $(q, r)$ and $(\hat{q}, r)$.
9:     end for
10:    for each generator step do
11:       Sample $(q, r)$ from real-world data.
12:       Update $G_{qr}$ by the combined gradient $\nabla_{\theta_{qr}}$ in Eq. 5.
13:       Teacher Forcing: update $G_{qr}$ with $(q, r)$.
14:       Update $G_{rq}$ by the combined gradient $\nabla_{\theta_{rq}}$ in Eq. 5.
15:       Teacher Forcing: update $G_{rq}$ with $(q, r)$.
16:    end for
17: end while
Algorithm 1: Training of DAL.

The training procedure of DAL is presented in Algorithm 1. First, we use maximum likelihood estimation to pre-train $G_{qr}$ and $G_{rq}$. Analogously, $D_{qr}$ and $D_{rq}$ are pre-trained according to Eq. 1. After the pre-training phase, each generator is optimized by both the duality constraint and the adversarial signal, followed by the regularization of Teacher Forcing. The corresponding discriminators are optimized simultaneously.
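Putting the pieces together, here is a condensed, hypothetical training loop matching Algorithm 1. The helpers duality_loss and adversarial_step are the sketches above; update_discriminator, mle_step, and the sample/log_prob interfaces are our own illustrative names, not the authors' code:

```python
import torch

def train_dal(G_qr, G_rq, D_qr, D_rq, lm_q, lm_r, data, opts, epochs=10):
    """Alternate discriminator and generator updates as in Algorithm 1."""
    for _ in range(epochs):
        for q, r in data:                          # mini-batches of real pairs
            # D-step: real pairs vs. ficto-facto pairs (Eq. 1).
            with torch.no_grad():
                r_hat, q_hat = G_qr.sample(q), G_rq.sample(r)
            update_discriminator(D_qr, (q, r), (q, r_hat), opts["D_qr"])
            update_discriminator(D_rq, (q, r), (q_hat, r), opts["D_rq"])
            # G-step: adversarial reward via policy gradient ...
            adversarial_step(G_qr, D_qr, q, opts["G_qr"])
            adversarial_step(G_rq, D_rq, r, opts["G_rq"])
            # ... plus the duality regularizer, shared by both generators.
            dual = duality_loss(lm_q(q), G_qr.log_prob(r, q),
                                lm_r(r), G_rq.log_prob(q, r))
            opts["G_qr"].zero_grad(); opts["G_rq"].zero_grad()
            dual.backward()
            opts["G_qr"].step(); opts["G_rq"].step()
            # Teacher Forcing on the real pair stabilizes training.
            mle_step(G_qr, q, r, opts["G_qr"])
            mle_step(G_rq, r, q, opts["G_rq"])
```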

5 Experiments

5.1 Experimental Settings

A Sina Weibo dataset Zhou et al. (2017) is employed to train the models. We treat each query-response pair as a single-turn conversation. The attention mechanism Luong et al. (2015) is applied in all methods to enhance performance. All methods are implemented with the open-source tools PyTorch Paszke et al. (2017) and OpenNMT Klein et al. (2017). Our experiments are conducted on a Tesla K40 cluster. For better replication, we detail the experimental settings, model parameters, and preprocessing strategies in the appendix. Further, we will release our code to the open-source community after the anonymity period.

In order to verify the effectiveness of DAL, we compare the following methods:
Seq2Seq: the standard Seq2Seq model Sutskever et al. (2014).
MMI-anti: the mutual information method Li et al. (2016) that uses an anti-language model at inference time.
MMI-bidi: the mutual information method Li et al. (2016) that first generates an N-best response set with $P(r \mid q)$ and then reranks it with $P(q \mid r)$ at inference time.
Adver-REIN: the adversarial method adopting the REINFORCE algorithm Li et al. (2017).
GAN-AEL: the adversarial method with an approximate embedding layer to solve the non-differentiable problem Xu et al. (2017).
DAL-Dual (ours): DAL trained only with maximum likelihood (Teacher Forcing) and the duality constraint ($\mathcal{L}_{dual}$).
DAL-DuAd (ours): DAL-Dual with adversarial learning (Algorithm 1).

Both DAL-Dual and DAL-DuAd are methods proposed by us: the former incorporates the dual signal only, while the latter combines the dual signal and the adversarial signal.

5.2 Experimental Results

We first evaluate DAL on diverse response generation. Then we resort to human annotators to evaluate the overall quality of the generated responses. Finally, we present several cases generated by all the methods under study.

Response Diversity Measured by Distinct

DISTINCT is a well-recognized metric for evaluating the diversity of generated responses Li et al. (2016); Xing et al. (2017). In our experiment, we employ DISTINCT-1 and DISTINCT-2, which compute the ratios of distinct unigrams and bigrams in the generated responses, respectively. Table 1 presents the results of all the methods.

Method DISTINCT-1 DISTINCT-2
Seq2Seq 0.031 0.137
MMI-anti 0.033 0.141
MMI-bidi 0.034 0.143
Adver-REIN 0.036 0.145
GAN-AEL 0.038 0.149
DAL-Dual (ours) 0.052 0.209
DAL-DuAd (ours) 0.049 0.201
Table 1: Results of diversity evaluation.
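For reference, a minimal sketch of the DISTINCT-n computation as it is commonly defined (the number of distinct n-grams divided by the total number of generated n-grams), assuming whitespace-tokenized responses:

```python
def distinct_n(responses, n):
    """DISTINCT-n: unique n-grams / total n-grams over all responses."""
    total, unique = 0, set()
    for resp in responses:
        tokens = resp.split()
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(ngrams)
        unique.update(ngrams)
    return len(unique) / total if total else 0.0

# Example: distinct_n(["i don't know", "i don't know"], 1) -> 3/6 = 0.5
```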

From Table 1, we have the following observations: (1) Both MMI-anti and MMI-bidi slightly improve over Seq2Seq. MMI-bidi heavily relies on the diversity of the N-best response set generated by $P(r \mid q)$. When $N$ is not large enough to include infrequently-occurring responses in the candidate set, the set may lack diversity, and thus the response obtained by reranking also lacks diversity. However, when $N$ is large, responses with low coherence to the given query enter the candidate set and may be selected as the final response, which hurts the performance of MMI-bidi. The selection of $N$ is therefore an arduous task. MMI-anti likewise relies heavily on the anti-language model to obtain diverse responses. (2) Compared with Seq2Seq, our DAL-Dual improves diversity by 67.7% on DISTINCT-1 and 52.6% on DISTINCT-2, which reveals the effectiveness of the dual approach in improving diversity. (3) As expected, compared with Adver-REIN and GAN-AEL, our DAL-DuAd further improves the diversity of the generated responses. This observation supports our assumption that, with the guidance of the discriminators $D_{qr}$ and $D_{rq}$, the generator $G_{rq}$ is able to influence the generator $G_{qr}$ to produce more diverse responses.

We do notice that DAL-Dual achieves slightly better diversity than DAL-DuAd. The reason is that adversarial methods sometimes generate short but natural responses such as "Let's go!" for queries such as "We can have dinner together tonight." or "There is an exhibition at the National Museum.". Such short but natural responses hurt the diversity metrics, but this does not contradict the assumption that the adversarial signal makes the generated responses more natural. To demonstrate the further boost brought by the adversarial signal, we conduct two pairwise experiments, described below.

Response Quality Evaluated by Human

Figure 3: Case study.

Since word overlap-based metrics such as BLEU Papineni et al. (2002) and embedding-based metrics are inappropriate for response quality evaluation due to their low correlation with human judgment Liu et al. (2016); Mou et al. (2016), we resort to human annotators to evaluate the overall quality of the generated responses. Three annotators are employed to score the overall quality of 200 responses generated by each of the aforementioned methods, on a three-point scale: 2, the response is natural, relevant, and informative; 1, the response is appropriate for the given query but may not be very informative; 0, the response is completely irrelevant, incoherent, or contains syntactic errors. The final score for each response is the average of the scores from all annotators. The human evaluation results are listed in Table 2.

Method Human rating Kappa
Seq2Seq 0.470 0.56
MMI-anti 0.568 0.46
MMI-bidi 0.523 0.60
Adver-REIN 0.767 0.49
GAN-AEL 0.758 0.52
DAL-Dual (ours) 0.730 0.47
DAL-DuAd (ours) 0.778 0.50
Table 2: Results of human evaluation: response quality.

The agreement among annotators is calculated with Fleiss’ kappa Fleiss (1971). The agreement ratio is in a range from 0.4 to 0.6, showing moderate agreement. Based on the results, we have the following observations: (1) DAL-DuAd achieves the highest quality score, indicating that our DAL-DuAd has the ability to produce coherent and informative responses. (2) Adver-REIN and GAN-AEL also obtain fairly good pointwise scores. This is because the adversarial learning mechanism effectively guides the generated responses to be close to the human-generated responses. (3) Compared with Seq2Seq, MMI-anti and MMI-bidi, our DAL-Dual obtains relatively satisfactory performance on overall quality. It shows that the dual signal can also improve the overall quality.
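For reproducibility, Fleiss' kappa can be computed with the statsmodels implementation; a toy sketch assuming one row per response and one column per annotator (the ratings matrix below is illustrative, not the paper's data):

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# ratings: shape (n_responses, n_annotators), entries in {0, 1, 2}.
ratings = np.array([[2, 2, 1],
                    [0, 1, 0],
                    [1, 1, 1]])           # toy example
counts, _ = aggregate_raters(ratings)     # -> (n_responses, n_categories)
print(fleiss_kappa(counts))               # agreement across the 3 annotators
```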

Pairwise Experiment

We conduct pairwise evaluations on {Seq2Seq, DAL-Dual} and {DAL-Dual, DAL-DuAd}. The former evaluates the dual signal, while the latter assesses the adversarial signal. 200 queries are used to evaluate the methods, and the comparison results are shown in Table 3. The annotator agreement ratio is also in the range from 0.4 to 0.6, which is interpreted as moderate agreement Fleiss (1971). The comparison on {Seq2Seq, DAL-Dual} shows that DAL-Dual outperforms Seq2Seq, which demonstrates the effectiveness of the dual signal in improving overall quality. Furthermore, the comparison on {DAL-Dual, DAL-DuAd} shows that the guidance from the discriminator further improves overall quality. The pairwise results verify that the dual signal and the adversarial signal in DAL collaborate to enhance the overall quality of the generated responses.

Method Wins Ties Losses Kappa
Seq2Seq 24.25% 45.00% 30.75% 0.47
DAL-Dual 30.75% 45.00% 24.25%
DAL-Dual 23.50% 49.00% 27.50% 0.49
DAL-DuAd 27.50% 49.00% 23.50%
Table 3: Results of human evaluation: pairwise comparison.

Case Study

We present several cases in Figure 3. For the first case, which involves the content on a mouse pad, most of the baselines generate generic responses such as "Come on!", "Haha!" or "It's nothing!". On the contrary, our DAL-Dual and DAL-DuAd produce much more diverse and informative responses, such as "You are so cute!" and "I also catch such an idea.". These entertaining responses are also topically coherent and logically consistent with the given query. In the second case, our methods are also capable of capturing the topic "amazing country" in the query, and generate diverse and coherent responses following this topic, such as "What an amazing country!" or "It is really amazing!". In contrast, the baselines still tend to provide safe responses that lack diversity across different queries.

5.3 Comparison of Efficiency

Efficiency is a crucial factor for real-life applications such as online chatbots. We conduct an experiment to evaluate the efficiency of all the methods under study. The efficiency experiment is run ten times on one Tesla K40m GPU with 11,471 MB of memory. The average time consumed by each method to generate responses for 1,000 queries is reported in Figure 4. MMI-bidi-5, MMI-bidi-10 and MMI-bidi-20 denote the MMI-bidi method with N-best sizes of 5, 10 and 20, respectively. MMI-anti and GAN-AEL are the most time-consuming among all the baselines. Moreover, the MMI-bidi method with its reranking strategy, even with a relatively small N-best size of 5, takes much longer than our methods, which severely limits its application in practice. In contrast, Seq2Seq, Adver-REIN, DAL-Dual and DAL-DuAd have very similar efficiency. Since DAL-Dual and DAL-DuAd achieve much better diversity and overall quality than Seq2Seq and Adver-REIN at comparable cost, DAL is more suitable for real-life applications.

Figure 4: Time consumed by different methods.

6 Conclusion

We propose a novel framework named DAL to alleviate two prominent problems (safe responses and unnatural responses) plaguing dialogue generation. The dual learning proposed in this paper is the first effort to utilize the reverse dependency between queries and responses to reduce the probability of safe responses and improve the diversity of the generated responses. Adversarial learning makes the generated responses as close to natural, human-generated ones as possible. DAL seamlessly integrates dual learning and adversarial learning, which are complementary to each other. Experimental results show that DAL outperforms state-of-the-art methods in terms of diversity, overall quality, and efficiency.

References

  • Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
  • Fleiss (1971) Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological bulletin, 76(5):378.
  • Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In NIPS, pages 2672–2680.
  • He et al. (2016) Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tieyan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In NIPS, pages 820–828.
  • Hu et al. (2014) Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In NIPS, pages 2042–2050.
  • Huang et al. (2017) Xun Huang, Yixuan Li, Omid Poursaeed, John Hopcroft, and Serge Belongie. 2017. Stacked generative adversarial networks. In CVPR, pages 1866–1875. IEEE.
  • Ji et al. (2014) Zongcheng Ji, Zhengdong Lu, and Hang Li. 2014. An information retrieval approach to short text conversation. arXiv preprint arXiv:1408.6988.
  • Kim et al. (2017) Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jung Kwon Lee, and Jiwon Kim. 2017. Learning to discover cross-domain relations with generative adversarial networks. In ICML, pages 1857–1865.
  • Klein et al. (2017) Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. Proceedings of ACL 2017, System Demonstrations, pages 67–72.
  • Lamb et al. (2016) Alex M Lamb, Anirudh Goyal, Ying Zhang, Saizheng Zhang, Aaron C Courville, and Yoshua Bengio. 2016. Professor forcing: A new algorithm for training recurrent networks. In NIPS, pages 4601–4609.
  • Li et al. (2016) Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In NAACL-HLT, pages 110–119.
  • Li et al. (2017) Jiwei Li, Will Monroe, Tianlin Shi, Sébastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. In EMNLP, pages 2157–2169.
  • Liu et al. (2016) Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In EMNLP, pages 2122–2132.
  • Lu and Li (2013) Zhengdong Lu and Hang Li. 2013. A deep architecture for matching short texts. In NIPS, pages 1367–1375.
  • Luong et al. (2015) Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. In EMNLP, pages 1412–1421.
  • Mou et al. (2016) Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. 2016. Sequence to backward and forward sequences: A content-introducing approach to generative short-text conversation. In COLING, pages 3349–3358.
  • van den Oord et al. (2016) Aaron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. 2016. Conditional image generation with pixelcnn decoders. In NIPS, pages 4790–4798.
  • Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL, pages 311–318.
  • Paszke et al. (2017) Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. In NIPS-W.
  • Ritter et al. (2011) Alan Ritter, Colin Cherry, and William B Dolan. 2011. Data-driven response generation in social media. In EMNLP, pages 583–593.
  • Shang et al. (2015) Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In ACL, volume 1, pages 1577–1586.
  • Sordoni et al. (2015) Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In NAACL-HLT, pages 196–205.
  • Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In NIPS, pages 3104–3112.
  • Sutton et al. (2000) Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. 2000. Policy gradient methods for reinforcement learning with function approximation. In NIPS, pages 1057–1063.
  • Tang et al. (2017) Duyu Tang, Nan Duan, Tao Qin, and Ming Zhou. 2017. Question answering and question generation as dual tasks. arXiv preprint arXiv:1706.02027.
  • Vinyals and Le (2015) Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869.
  • Wang et al. (2013) Hao Wang, Zhengdong Lu, Hang Li, and Enhong Chen. 2013. A dataset for research on short-text conversations. In EMNLP, pages 935–945.
  • Williams (1992) Ronald J Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256.
  • Xing et al. (2017) Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In AAAI, pages 3351–3357.
  • Xu et al. (2017) Zhen Xu, Bingquan Liu, Baoxun Wang, SUN Chengjie, Xiaolong Wang, Zhuoran Wang, and Chao Qi. 2017. Neural response generation via gan with an approximate embedding layer. In EMNLP, pages 617–626.
  • Yi et al. (2017) Zili Yi, Hao Zhang, Ping Tan, and Minglun Gong. 2017. Dualgan: Unsupervised dual learning for image-to-image translation. In ICCV, pages 2868–2876. IEEE.
  • Zhang et al. (2018a) Hainan Zhang, Yanyan Lan, Jiafeng Guo, Jun Xu, and Xueqi Cheng. 2018a. Reinforcing coherence for sequence to sequence model in dialogue generation. In IJCAI, pages 4567–4573.
  • Zhang et al. (2018b) Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018b. Generating informative and diverse conversational responses via adversarial information maximization. In Advances in Neural Information Processing Systems, pages 1810–1820.
  • Zhou et al. (2017) Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2017. Emotional chatting machine: Emotional conversation generation with internal and external memory. arXiv preprint arXiv:1704.01074.
  • Zhu et al. (2017) Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, pages 2242–2251. IEEE.