An Attentional Neural Conversation Model with Improved Specificity

06/03/2016 · by Kaisheng Yao, et al. · Microsoft · The Chinese University of Hong Kong

In this paper we propose a neural conversation model for conducting dialogues. We demonstrate the use of this model to generate help desk responses, where users are asking questions about PC applications. Our model is distinguished by two characteristics. First, it models intention across turns with a recurrent network, and incorporates an attention model that is conditioned on the representation of intention. Second, it avoids generating non-specific responses by incorporating an IDF term in the objective function. The model is evaluated both as a pure generation model, in which a help-desk response is generated from scratch, and as a retrieval model, with performance measured using recall rates of the correct response. Experimental results indicate that the model outperforms previously proposed neural conversation architectures, and that using specificity in the objective function significantly improves performance for both generation and retrieval.







1 Introduction

In recent years, neural network based conversation models [Serban et al.2015b, Sordoni et al.2015b, Vinyals and Le2015, Shang et al.2015] have emerged as a promising complement to traditional partially observable Markov decision process (POMDP) models [Young et al.2013]. The neural network based techniques require little in the way of explicit linguistic knowledge, e.g., a hand-built semantic parser, and therefore promise scalability, flexibility, and language independence. Broadly speaking, there are two approaches to building a neural conversation model. The first is to train what is essentially a conversation-conditioned language model, which is used in generative mode to produce a likely response to a given conversation context. The second approach is retrieval-based, and aims at selecting a good response from a list of candidates taken from the training corpus.

Neural conversation models are usually trained similarly to neural machine translation models [Sutskever et al.2014, Cho et al.2014], which treat response generation as a surface-to-surface transformation. While simple, these models would likely benefit from explicit modeling of conversational dynamics, specifically the attention and intention processes hypothesized in discourse theory [Grosz and Sidner1986]. A recent improvement along these lines is the hierarchical recurrent encoder-decoder of [Serban et al.2015b, Sordoni et al.2015b], which incorporates two levels of recurrent networks: one for generating words and one for modeling dependence between conversation turns. In this paper, we extend this approach further and propose an explicit intention/attention model.

We further tackle a second key problem of neural conversation models: their tendency to generate generic, non-specific responses [Vinyals and Le2015, Li et al.2016]. To address the problem of specificity, a maximum mutual information (MMI) method for generation was proposed in [Li et al.2016], which models both sides of the conversation, each conditioned on the other. While we find this method effective, training the additional model doubles the computational cost.

The contributions of this paper are as follows. First, we introduce a novel attention-with-intention neural conversation model that integrates an attention mechanism into a hierarchical model. Without the attention mechanism, the model resembles the models in [Serban et al.2015b], but it performs better. Visualization of the intention layer in the model shows that it is indeed relevant to intent. Second, we address the specificity problem by incorporating inverse document frequency (IDF) [Salton and Buckley1988, Ramos2003] into the training process. The proposed training algorithm uses reinforcement learning with the IDF value of the generated sentences as a reward signal. To the best of our knowledge, this is the first method to incorporate specificity into the training objective function. Empirically, we find that it performs better than the dual-model method of [Li et al.2016]. Lastly, we demonstrate that the proposed model also performs well for retrieval-based conversation modeling. Using a recently proposed evaluation metric [Lowe et al.2015, Lowe et al.2016], we observed that this model was able to incorporate term frequency-inverse document frequency (TF-IDF) [Salton and Buckley1988] and significantly outperformed both a TF-IDF retrieval baseline and the model without TF-IDF.

2 The model

The proposed model follows the encoder-decoder framework [Sutskever et al.2014] but incorporates a hierarchical structure to model dependence between turns of the conversation. The encoder network processes the user input and represents it as a vector. This vector is the input to a recurrent network that models the context, or intention, used to generate the response in the decoder network. The decoder generates a response sequence word by word; for each word, it applies an attention model over the words in the user input. Following [Grosz and Sidner1986], we refer to the conversation context as the intention. Because an attention model is used in the decoder, we call this model the attention with intention (AWI) neural conversation model. A detailed illustration for a particular turn is in Figure 1. We elaborate on each component of this model in the following sections.

Figure 1: Illustration of the AWI model in one turn to generate a response sequence [X,Y,Z] for input sequence [A,B,C,D] and the previous response [X’,Y’,Z’].

2.1 Encoder network

Given a user input sequence x^{(k)} = (x_1, ..., x_T) with length T at turn k, the encoder network converts it into a sequence of vectors (m_1, ..., m_T), with vector m_t denoting a word embedding representation of the word at position t. The model uses a feed-forward network to process this sequence. It has two inputs. The first is a simple word unigram feature, extracted as the average of the word embeddings in x^{(k)}. The second input is a representation of the previous response; this representation is also a word unigram feature, but computed on the past response. The output u^{(k)} from the top layer of the feed-forward network is a vector representation of the user input. In addition to this vector representation, the encoder network outputs the vector sequence (m_1, ..., m_T).

2.2 Intention network

The middle layer represents the conversation context, which we identify with the intention hypothesized in [Grosz and Sidner1986]. At turn k, it is a vector c^{(k)}. To model turn-level dynamics, the activity in c^{(k)} depends on the activity in the previous turn and on the user input at the current turn. Therefore, it is natural to represent c^{(k)} as

c^{(k)} = sigma(W_c c^{(k-1)} + W_u u^{(k)})     (1)

where sigma is a tanh operation, and W_c and W_u are matrices that transform their inputs to a space with dimension d_c. Usually, we apply multiple layers of the above non-linear processing, with each higher layer applied only to the output of the layer below. Notice that the weight matrices are untied across layers. The output from the top layer is the representation produced by the intention network.
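As a concrete sketch, the single-layer intention update can be written in a few lines of numpy; the weight names, dimensions, and random initialization below are illustrative, not taken from the paper.

```python
import numpy as np

def intention_step(c_prev, u_k, W_c, W_u):
    """One layer of the turn-level intention update: combine the previous
    turn's intention state c_prev with the encoder summary u_k of the
    current user input. All names here are illustrative."""
    return np.tanh(W_c @ c_prev + W_u @ u_k)

# toy sizes: intention dimension 4, encoder summary dimension 6
rng = np.random.default_rng(0)
W_c = 0.1 * rng.standard_normal((4, 4))
W_u = 0.1 * rng.standard_normal((4, 6))
c = np.zeros(4)
for u in rng.standard_normal((3, 6)):  # three conversation turns
    c = intention_step(c, u, W_c, W_u)
```

Because of the tanh, every component of the intention state stays in (-1, 1) regardless of how many turns have been processed.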

2.3 Decoder network

The decoder network has output o_t, a vector with the dimension of the vocabulary size V. Each element of the vector represents the probability of generating a particular word at position t. This probability is conditioned on the word generated before, the intention vector c^{(k)} described above, and the encoder output. To compute this probability, the decoder applies a softmax to a response vector r_t:

o_t = p(w_t | w_{<t}, c^{(k)}, x^{(k)}) = softmax(r_t)     (2)

The decoder uses the following modules to generate the response vector r_t.


For position t, the hidden state of the RNN is recursively computed as

h_t = sigma(W_h h_{t-1} + W_y y_{t-1} + W_r r_{t-1})     (3)

where h_{t-1}, y_{t-1}, and r_{t-1} represent, respectively, the previous hidden state, the word generated, and the response vector for word position t-1. W_h, W_y, and W_r are matrices that transform their right-side inputs to a space with dimension d_h. During training, y_{t-1} is a one-hot vector with its non-zero element corresponding to the index of the word at position t-1. During testing, it is still a one-hot vector, but the index of its non-zero element comes from beam search or greedy search at position t-1. We apply multiple layers of the above process, with each higher layer using only the lower layer's output on the left side of (3). Parameters such as W_h are untied across layers. The top-level output of (3) is the RNN output. To incorporate the conversation context, the initial state h_0 is set to c^{(k)} from the intention network.
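The recursion in (3) amounts to one affine-plus-tanh step per word position. A minimal numpy sketch, with illustrative names and toy dimensions:

```python
import numpy as np

def decoder_step(h_prev, y_prev, r_prev, W_h, W_y, W_r):
    # h_prev: previous hidden state, y_prev: one-hot of the last word,
    # r_prev: previous response vector (all names illustrative)
    return np.tanh(W_h @ h_prev + W_y @ y_prev + W_r @ r_prev)

# toy sizes: hidden 4, vocabulary 5, response vector 4
rng = np.random.default_rng(1)
W_h, W_y, W_r = (0.1 * rng.standard_normal(s) for s in [(4, 4), (4, 5), (4, 4)])
h = np.zeros(4)   # in the model this would be initialized from the intention vector
y = np.eye(5)[2]  # one-hot of the previously generated word
r = np.zeros(4)
h = decoder_step(h, y, r, W_h, W_y, W_r)
```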

Attention layer

We use the content-based attention mechanism [Bahdanau et al.2015, Luong et al.2015]. It is a single-layer neural network that aligns the target-side hidden state h_{t-1} at the previous position with the source-side representation m_j at word position j. The alignment weight alpha_{t,j} is computed as follows:

e_{t,j} = z^T sigma(W_a h_{t-1} + U_a m_j)     (4)
alpha_{t,j} = softmax(e_{t,j})     (5)
a_t = sum_j alpha_{t,j} m_j     (6)

where W_a and U_a are matrices that transform their inputs to a space with dimension d_a, and z is a vector. The softmax operation in (5) is normalized over the user input sequence. a_t in (6) is the output of the attention layer.
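A small numpy sketch of this content-based attention step (illustrative names and toy dimensions; the real model's dimensions are given in Section 4.2):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(h_prev, source, W_a, U_a, z):
    """Score each source position against the previous decoder state,
    normalize over the source sequence, and return the attention weights
    together with the weighted average of source vectors."""
    scores = np.array([z @ np.tanh(W_a @ h_prev + U_a @ m) for m in source])
    alpha = softmax(scores)        # normalized over the user input sequence
    context = alpha @ source       # weighted average of source vectors
    return alpha, context

rng = np.random.default_rng(2)
source = rng.standard_normal((4, 6))      # four source positions, dim 6
W_a = 0.1 * rng.standard_normal((3, 5))   # hidden dim 5 -> alignment dim 3
U_a = 0.1 * rng.standard_normal((3, 6))
z = rng.standard_normal(3)
alpha, context = attend(rng.standard_normal(5), source, W_a, U_a, z)
```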

Response generation

We use the following feed-forward network to generate a response vector r_t from the decoder RNN output h_t and the attention layer output a_t; i.e.,

r_t = sigma(V_h h_t + V_a a_t)     (7)

where V_h and V_a are matrices that transform their right-side inputs to a space with dimension d_r. As with the networks described above, we use untied multiple layers to generate r_t, with the input to each higher layer using only the output from the layer below. The top-layer output is fed into the softmax operation in Eq. (2) to compute the probability of generating words.

Input similarity feature

We construct a linear direct connection between the user inputs and the output layer. To achieve this, a large matrix is used to project each input word to a high-dimensional vector with the dimension of the vocabulary size V. The projected words are then averaged to obtain a vector g^{(k)} with dimension V. This vector is added to the response vector r_t, so that Eq. (2) is computed as softmax(r_t + g^{(k)}). Since g^{(k)} is applied at all word positions in turn k, it provides a global bias to the output distribution.

3 Training and decoding algorithms

This section presents training and decoding algorithms for response generation and retrieval. Section 3.1 is the standard cross-entropy training. It is used for training both generation and retrieval models. Section 3.2 introduces training and decoding algorithms to enhance specificity for generation. Algorithms for training and decoding for retrieval are described in Sec. 3.3.

3.1 Maximum-likelihood training

The standard training method maximizes the probability of predicting the correct word w_t given the user input x^{(k)}, the context c^{(k)}, and the past predictions w_{<t}; i.e., the objective maximizes the following log-likelihood with respect to the model parameters theta:

J(theta) = sum_t log p(w_t | w_{<t}, c^{(k)}, x^{(k)}; theta)     (8)
A problem with this training is that the learned models are not optimized for the final metric [Och2003, Ranzato et al.2016]. Another problem is that decoding to maximize sentence likelihood typically yields non-specific, high-frequency words [Li et al.2016].

3.2 Improving specificity for generation

We propose using inverse document frequency (IDF) [Salton and Buckley1988] to measure the specificity of a response. IDF is used in decoding in Sec. 3.2.2, and we describe a novel algorithm in Sec. 3.2.3 that incorporates IDF into the training objective.

3.2.1 Specificity

The IDF for a word w is defined as

idf(w) = log( |C| / |{s in C : w in s}| )     (9)

where |C| is the number of sentences in a corpus C and s denotes a sentence in that corpus. The denominator is the number of sentences in which the word w appears. A property of IDF is that words occurring very frequently have small IDF values.

We further define a sentence-level IDF as the average of the IDF values of the words in a sentence s; i.e.,

idf(s) = ( sum_{w in s} idf(w) ) / |s|     (10)

where the denominator |s| is the number of word occurrences in sentence s. A corpus-level IDF value is computed similarly by averaging over a corpus, with the denominator being the number of word occurrences in the corpus.
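These two definitions are easy to state in code. The toy help-desk corpus below is invented for illustration:

```python
import math
from collections import Counter

def word_idf(corpus):
    """corpus: list of tokenized sentences. idf(w) = log(N / df(w)),
    where df(w) counts the sentences containing w."""
    n = len(corpus)
    df = Counter(w for sent in corpus for w in set(sent))
    return {w: math.log(n / d) for w, d in df.items()}

def sentence_idf(sentence, idf):
    # average IDF of the words in the sentence
    return sum(idf.get(w, 0.0) for w in sentence) / len(sentence)

corpus = [["please", "restart", "the", "pc"],
          ["the", "pc", "will", "not", "boot"],
          ["thank", "you", "for", "the", "details"]]
idf = word_idf(corpus)
print(round(idf["the"], 3))  # "the" is in every sentence -> log(3/3) = 0.0
print(round(sentence_idf(["restart", "the", "pc"], idf), 3))  # 0.501
```

Frequent function words contribute nothing to the sentence score, so a high sentence-level IDF signals a specific response.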

3.2.2 Reranking with IDF

One way to improve specificity is to use IDF to rerank hypotheses from beam search decoding. The length-normalized log-likelihood scores of these hypotheses are interpolated with sentence-level IDF scores. The interpolation weight is tuned on a development set using minimum error rate training (MERT) [Och2003]. The interpolation weight that achieves the highest BLEU [Papineni et al.2002] score on the development set is used for testing.
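A sketch of this reranking step, assuming each beam hypothesis carries a length-normalized log-likelihood and a sentence-level IDF score; the weight and scores below are invented for illustration, whereas in the paper the weight is tuned with MERT:

```python
def rerank(hypotheses, idf_weight):
    """hypotheses: (tokens, length_normalized_loglik, sentence_idf) triples.
    Pick the hypothesis maximizing the interpolated score."""
    return max(hypotheses, key=lambda h: h[1] + idf_weight * h[2])

beams = [("ok thank you".split(), -0.9, 0.3),
         ("kindly click the reset link".split(), -1.4, 1.6)]
best = rerank(beams, 0.5)
print(" ".join(best[0]))  # the more specific hypothesis wins at this weight
```

With the IDF weight at 0, reranking reduces to picking the most likely hypothesis; raising it trades likelihood for specificity.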

3.2.3 Incorporating IDF in training objective

Alternatively, we cast the problem of optimizing a model directly for specificity in the reinforcement learning framework [Sutton and Barto1988]. The decoder is an agent whose policy is given by Eq. (2). Its action is to generate a word using the policy, so it has V possible actions at each time step, where V is the vocabulary size. After generating a whole response sequence, the agent receives a reward, calculated as the sentence-level IDF score of the generated response. Training therefore seeks a policy that maximizes the expected reward.

This problem can be solved using REINFORCE [Williams1992], in which the gradient for updating the model parameters theta is calculated as

grad_theta J = ( idf(y_hat^{(k)}) - b ) grad_theta log p(y^{(k)} | .; theta)

where idf(y_hat^{(k)}) is the IDF score of the generated response y_hat^{(k)} at turn k, and b is called the reinforcement baseline. In practice, b can be set empirically to an arbitrary number to improve convergence [Zaremba and Sutskever2015]. One convenient way of estimating the baseline is as the mean of the IDF values on the training set, which is what we use in this paper.

Notice that the IDF score is computed on the decoded response y_hat^{(k)}, but the log-likelihood is computed on the correct response y^{(k)}. Therefore, the algorithm improves the likelihood of the correct response while also encouraging responses with high IDF scores.
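The resulting update is a standard REINFORCE step: the log-likelihood gradient of the correct response is scaled by the reward minus the baseline. A toy sketch with a softmax "policy" over a three-word vocabulary (all values illustrative):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def reinforce_update(theta, correct_word, decoded_idf, baseline, lr=0.1):
    """Scale the log-likelihood gradient of the correct word by the reward
    (IDF of the decoded response) minus the reinforcement baseline."""
    p = softmax(theta)
    grad_logp = np.eye(len(theta))[correct_word] - p  # d log softmax / d theta
    return theta + lr * (decoded_idf - baseline) * grad_logp

theta = np.zeros(3)
# positive reward-minus-baseline: pushes probability toward the correct word
theta = reinforce_update(theta, correct_word=0, decoded_idf=2.9, baseline=2.7)
```

When the decoded response scores below the baseline, the same update lowers the probability of the correct word, so a well-chosen baseline matters for stable convergence.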

3.3 Training and decoding for retrieval

The conversation model can also be used to retrieve the correct response from a list of candidates. We briefly describe TF-IDF in Sec. 3.3.1. Section 3.3.2 presents the algorithm to train the AWI model for retrieval, and Section 3.3.3 combines the model with TF-IDF. Since TF-IDF uses IDF to penalize non-specific words, combining the AWI model with TF-IDF should improve specificity, which could in turn improve retrieval performance.

3.3.1 TF-IDF

Term frequency-inverse document frequency (TF-IDF) [Salton and Buckley1988] is an established baseline for retrieval-based conversation models [Lowe et al.2015]. The term frequency (TF) is a count of the number of times a word appears in a given context, and the IDF penalizes words that appear often across the corpus. The TF-IDF vector for a context c has, for each word w in the context, the element

tfidf(w, c) = tf(w, c) x idf(w)

where tf(w, c) is the number of times the word w occurs in the context c. For retrieval, a vector for the conversation history is first created, with each element computed as above for each word in the history. Then a TF-IDF vector is computed for each candidate response. The similarity of these vectors is measured using cosine similarity, and the responses with the highest similarities are selected as the top outputs.
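A self-contained sketch of this retrieval baseline, with invented IDF values and candidates:

```python
import math
from collections import Counter

def tfidf_vec(tokens, idf):
    # tf(w, c) * idf(w) for each word in the token list
    return {w: c * idf.get(w, 0.0) for w, c in Counter(tokens).items()}

def cosine(a, b):
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(history, candidates, idf, k=1):
    hv = tfidf_vec(history, idf)
    return sorted(candidates, key=lambda r: cosine(hv, tfidf_vec(r, idf)),
                  reverse=True)[:k]

# toy IDF values (illustrative, not from the paper's corpus)
idf = {"reset": 1.1, "password": 1.1, "click": 0.7, "link": 0.7,
       "hello": 0.2, "there": 0.2, "my": 0.1}
history = ["reset", "my", "password"]
candidates = [["hello", "there"], ["click", "the", "reset", "password", "link"]]
print(retrieve(history, candidates, idf)[0])
```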

3.3.2 Training models with ranking criterion

In order to train the AWI model for retrieval, the model needs two types of responses. The positive response is the correct one, and the negative responses are randomly sampled from the training set. For a response y, its length-normalized log-likelihood is computed as

L(y) = (1/|y|) sum_t log p(y_t | y_{<t}, c^{(k)}, x^{(k)})     (11)

where the per-word log-probability is computed using Eq. (8) with the correct word substituted by the word in y, and |y| is the number of words in y.

The objective is to achieve high recall rates, such that the correct responses are ranked higher than the negative responses. To this end, the algorithm maximizes the difference between the length-normalized log-likelihood of the correct response and that of the negative responses; i.e.,

max_theta sum_{y-} ( L(y+) - L(y-) )     (12)

where y+ is the correct response and y- is a negative response.
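A sketch of this criterion with toy per-word log-probabilities (all values invented):

```python
def length_norm_loglik(word_logprobs):
    # average per-word log-likelihood of a response under the model
    return sum(word_logprobs) / len(word_logprobs)

def ranking_margin(pos_logprobs, neg_logprobs_list):
    """Sum of differences between the correct response's normalized
    log-likelihood and each sampled negative's."""
    lpos = length_norm_loglik(pos_logprobs)
    return sum(lpos - length_norm_loglik(neg) for neg in neg_logprobs_list)

# correct response scores higher per word than two sampled negatives
margin = ranking_margin([-1.0, -1.2, -0.8], [[-2.0, -2.5], [-1.9, -2.1, -2.3]])
print(margin > 0)  # True
```

Length normalization matters here: without it, short negatives would systematically out-score longer correct responses.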

3.3.3 Ranking with AWI together with TF-IDF

Naturally, the length-normalized log-likelihood score from the model trained in Sec. 3.3.2 can be interpolated with the similarity score from TF-IDF in Sec. 3.3.1. The interpolation weight that achieves the best recall rates on a development set is selected.

4 Experiments

4.1 Data

We use a real-world commercial data set to evaluate the models. The data contains human-human dialogues from a helpdesk chat service for Microsoft Office and Windows, in which customers seek help on computer- and software-related issues from human agents. The training set consists of 141,204 dialogues with 2 million turns. The average number of turns in a dialogue is 12, with a maximum of 140 turns and a minimum of 1 turn; more than 90% of dialogues have 25 or fewer turns. The number of tokens is 25,410,683 on the source side and 37,796,000 on the target side. The vocabulary size is 8098, including words from both source and target sides. The development and test sets each have 10,000 turns. The test set has 125,451 tokens on its source side and 187,118 on its target side.

4.2 Training Details

Unless otherwise stated, all of the recurrent networks have two layers. The encoder network uses word embeddings initialized from 300-dimensional GloVe vectors [Pennington et al.2014] trained on 840 billion words; the embedding dimension is therefore 300. The hidden layer dimension for the encoder is 1000, and the decoder dimension is also 1000. The intention network uses a 300-dimensional vector. The alignment dimension is 100. All parameters, except biases, are initialized from a uniform distribution scaled to be inversely proportional to the number of parameters. All bias vectors are initialized to zero.

The maximum number of training epochs is 10. We use RMSProp with momentum [Tieleman and Hinton2012] to update the models. We use perplexity to monitor the progress of training; the learning rate is halved if the perplexity on the development set increases. The gradient is rescaled whenever its norm exceeds 5.0. To speed up training, dialogues with the same number of turns are processed in one batch. The batch size is 20.
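The gradient rescaling step can be sketched as follows, using plain Python lists in place of the model's parameter tensors:

```python
import math

def rescale_gradients(grads, max_norm=5.0):
    """Rescale the whole gradient (a list of float lists here) whenever
    its global L2 norm exceeds max_norm; otherwise leave it unchanged."""
    norm = math.sqrt(sum(g * g for vec in grads for g in vec))
    if norm > max_norm:
        scale = max_norm / norm
        grads = [[g * scale for g in vec] for vec in grads]
    return grads

clipped = rescale_gradients([[3.0, 4.0], [12.0]])  # global norm 13 -> rescaled to 5
```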

Hyper-parameters such as the initial learning rate and dimension sizes are optimized on the development set and then used on the test set. Decoding uses beam search with a beam width of 1, and stops when an end-of-sentence token is generated.

4.3 Evaluation metrics

As our model is used both for generation and retrieval, we use established measures from the literature for comparison. The first measure is BLEU [Papineni et al.2002], which uses N-gram statistics [Dunning1994] to compute similarities between references and responses, together with a penalty for sentence brevity; we use BLEU with 4-grams. While BLEU may unfairly penalize paraphrases with different wording, it has been found to correlate well with human judgment on response generation tasks [Galley et al.2015]. The second measure is perplexity [Brown et al.1992], which measures the likelihood of generating a word given the observations. We use it in Section 4.4.1 to compare the proposed model against other neural network models that also report perplexity. However, since our training algorithms in Sec. 3.2 are designed to improve specificity, which is not directly correlated with the standard likelihood, we report perplexity only in Section 4.4.1. The third metric is the corpus-level IDF score for specificity, computed as in (10).

Since our model is also used for retrieval, we adopt the response selection measure proposed in [Lowe et al.2015], in which the performance of a conversation model is measured by the recall rate of the correct responses among the top ranks. This metric is called Recall@k (R@k): the model is asked to select the k most likely responses, and it is correct if the true response is among them. The number of candidates for retrieval is 10, following [Lowe et al.2015]. This measure has been observed to correlate well with human judgment for retrieval-based conversation models [Lowe et al.2016, Liu et al.2016].
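Recall@k itself is straightforward to compute; a sketch with two invented examples of 10 candidates each:

```python
def recall_at_k(ranked_lists, true_indices, k):
    """ranked_lists: per example, candidate indices sorted by model score.
    Recall@k = fraction of examples whose true response is in the top k."""
    hits = sum(1 for ranked, t in zip(ranked_lists, true_indices) if t in ranked[:k])
    return hits / len(ranked_lists)

# two examples, 10 candidates each (indices 0-9); truth holds the correct index
ranked = [[3, 1, 7, 0, 2, 4, 5, 6, 8, 9],
          [5, 2, 9, 1, 0, 3, 4, 6, 7, 8]]
truth = [3, 9]
print(recall_at_k(ranked, truth, 1))  # 0.5
print(recall_at_k(ranked, truth, 5))  # 1.0
```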

4.4 Performance as a generation model

4.4.1 Comparison with other methods

Models BLEU Perplexity
N-gram 0.0 280.5
Seq2Seq [Vinyals and Le2015] 1.82 12.64
HRED [Serban et al.2015a] 6.14 13.82
AWI 9.29 11.52
Table 1: Results on the test set.

We compared the AWI model with the sequence-to-sequence (Seq2Seq) [Vinyals and Le2015] and hierarchical recurrent encoder-decoder (HRED) [Serban et al.2015a] models. All of the models had two-layer encoders and decoders. The hidden dimensions for the encoder and decoder were set to 200 in all of the models, and the hidden dimension for the intention network was set to 50. All of the models used their optimal setup on the development set. Both Seq2Seq and HRED used long short-term memory networks [Hochreiter and Schmidhuber1997]. The number of parameters was approximately for Seq2Seq and for HRED. AWI did not have the input similarity feature and it had parameters. Greedy search was used in this experiment.

Table 1 shows that the AWI model outperforms the other neural network models in both BLEU and perplexity. For comparison, BLEU and perplexity scores from an unconditional N-gram model are also reported; they are much worse than those from the neural network based models. The BLEU score for the N-gram model was obtained by sampling from the model and comparing the sampled response to a typical response. Its response has a BLEU score of 0.08 under 1-gram BLEU; because it has no 4-gram matches with the typical response, its 4-gram BLEU is 0. In one experiment with a smaller training set (not shown in the table due to space limitations), we observed that these models performed worse yet similarly to one another. This suggests that the benefit of the hierarchical structure in both HRED and AWI is more apparent with larger training data.

4.4.2 Results with specificity improved models

Models BLEU IDF
AWI 11.42 2.35
AWI + sampling 7.02 2.76
AWI + MMI [Li et al.2016] 11.47 2.37
AWI + IDF 11.43 2.83
IR-AWI 11.70 2.40
Table 2: Performance for generation.

We report BLEU and IDF scores in Table 2. The baseline is AWI trained with the standard cross-entropy objective of Sec. 3.1. For comparison, we also generated responses with a sampling method [Mikolov et al.2011], denoted "AWI + sampling". Sampling produces an appropriate number of infrequent words, and therefore an IDF score similar to that of the reference responses: "AWI + sampling" has an IDF score of 2.76, close to the IDF score of 2.74 on the training set. However, sampling produces worse BLEU scores, though its IDF score is higher than AWI's.

We also report results using the MMI method for decoding [Li et al.2016], denoted "AWI + MMI". This uses a backward model trained to generate the source from the target, and its decoding uses the reranking algorithm of Sec. 3.2.2. The optimal interpolation weight for the backward model was 0.05. Both BLEU and IDF scores are improved. On the other hand, "AWI + MMI" requires an additional model as complex as the baseline AWI model.

Alternatively, AWI results are reranked with the sentence-level IDF scores using the algorithm of Sec. 3.2.2. The optimal interpolation weight for IDF was 0.035. This result, denoted "AWI + IDF", has improved BLEU and IDF scores in comparison to the baseline AWI. Compared against "AWI + MMI", it has similar BLEU but a higher IDF score, suggesting that the IDF score is more directly related to specificity than MMI.

The result from using specificity as a reward to train a model as in Sec. 3.2.3 is denoted the IDF-rewarded AWI model, or "IR-AWI". The reinforcement baseline was empirically set to 1.0; we also experimented with a larger value of 1.5 and did not observe much performance difference. "IR-AWI" consistently outperforms "AWI" and "AWI + MMI".

4.4.3 Analysis

Figure 2 uses t-SNE [van der Maaten and Hinton2008] to visualize the intention vectors. It shows clear clusters, even though the intention vectors are trained without explicit labels. In order to relate these clusters to explicit meanings, we look at the responses generated from these intention vectors and tag similar responses with the same color. Some response types are clearly clustered, such as "Greeting" and "Close this chat". Other types are more distributed, and we could not find a clear tag for them; we therefore leave them untagged.

Figure 2: t-SNE visualization of intention vectors.

We also show two examples of responses from AWI and IR-AWI in Tables 3 and 4. The responses from AWI+MMI and AWI+IDF are the same as those from AWI, so we list only AWI and IR-AWI. These responses are reasonable; however, the IR-AWI responses in these tables are more specific than the generic responses from AWI.

AWI: okay, thank you for that information. IR-AWI: alright, kindly click the link below and update me once you are on the page that is asking for a six digit code http://webpage.
Table 3: Examples of responses. Conversation history is as follows. User said, ”i don’t have another computer that supports miracast. the adapter appears as a ’device’ but does not get connected. do you want remote access to it?” Agent replied, ”would that be alright for you ?” User then said, ”yes.” For reference, the human agent responds, ”alright, kindly click the link below and update me once you are on the page http://webpage, i would like to set your expection.”
AWI: may i know how did you upgrade to windows 10? IR-AWI: may i have the product key for windows 8.1?
Table 4: Examples of responses. Conversation history is as follows. User said, ”windows activate error code : errorcode,” Agent replied, ”i am sorry for the inconvenience. but nothing to worry about, i will surely help you with this. may i know the previous os?” User then said, ”8.1.” For reference, the human agent responds, ”okay, may i know, do you have a product key for windows 8.1?”

4.5 Performance for retrieval

We report recall rates R@1 and R@5 in Table 5. Clearly, the AWI model trained with the ranking criterion of Sec. 3.3.2 outperforms TF-IDF. Importantly, the AWI model can be combined with TF-IDF using the method described in Sec. 3.3.3, obtaining a significant further performance improvement.

Models R@1 R@5
TF-IDF 28.54 73.95
AWI 33.57 77.01
AWI + TF-IDF 40.70 85.39
Table 5: Retrieval results for the models using 1 in 10 recall rates (%).

5 Related work

Our work is related both to goal-oriented and non-goal-oriented dialogue systems, as the proposed model can be used as a language generation component in a goal-oriented dialogue system [Young et al.2013] or simply to produce chit-chat style dialogue without a specific goal [Ritter et al.2010, Banchs and Li2012, Ameixa et al.2014]. Whereas a traditional language generation component [Henderson et al.2014, Gasic et al.2013, Wen et al.2015] relies on an explicit state [Williams2009] in the POMDP framework for goal-oriented dialogue systems [Young et al.2013], the proposed model may relax this requirement. However, grounding the generated conversation in actions and knowledge is not studied in this paper; we leave it for future work.

The proposed model is related to the recent works in [Shang et al.2015, Vinyals and Le2015, Sordoni et al.2015a], which use an encoder-decoder framework to model conversation. The closest work is [Sordoni et al.2015a]; our model differs from that work in using an attention model, an additional input similarity feature, and in its decoder architecture. Importantly, our model is used not only for generation, as in those previous works, but also for retrieval.

Prior work on increasing specificity or diversity aims at producing multiple outputs [Carbonell and Goldstein1998, Gimpel et al.2013]; like [Li et al.2016], our work aims to produce a single nontrivial output. However, instead of the objective function in [Li et al.2016], which has only an indirect relation to specificity, our model uses a specificity measure directly in training and decoding.

6 Conclusions

We have presented a novel attentional neural conversation model with enhanced specificity using IDF. It has been evaluated for both response generation and retrieval. We have observed significant performance improvements in comparison to alternative methods.


  • [Ameixa et al.2014] David Ameixa, Luisa Coheur, Pedro Fialho, and Paulo Quaresma. 2014. Luck, i am your father: dealing with out-of-domain requests by using movies subtitles. In Intelligent Virtual Agents.
  • [Bahdanau et al.2015] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA.
  • [Banchs and Li2012] Rafael E. Banchs and Haizhou Li. 2012. IRIS: a chat-oriented dialogue system based on the vector space model. In ACL.
  • [Brown et al.1992] Peter Brown, Vincent Pietra, Robert Mercer, Stephen Pietra, and Jennifer Lai. 1992. An estimate of an upper bound for the entropy of English. Computational Linguistics, 18(1):31–40.
  • [Carbonell and Goldstein1998] Jaime Carbonell and Jade Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. Research and development in information retrieval, pages 335–336.
  • [Cho et al.2014] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In EMNLP.
  • [Dunning1994] Ted Dunning. 1994. Statistical identification of language. Technical Report Technical Report MCCS 94-273, New Mexico State University.
  • [Galley et al.2015] Michel Galley, Chris Quirk, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Margaret Mitchell, Jianfeng Gao, and Bill Dolan. 2015. deltaBLEU: A discriminative metric for generation tasks with intrinsically diverse targets. In ACL, pages 445–450.
  • [Gasic et al.2013] M. Gasic, C. Breslin, M. Henderson, D. Kim, M. Szummer, B. Thomson, P. Tsiakoulis, and S. Young. 2013. Online policy optimisation of Bayesian spoken dialogue systems via human interaction. In ICASSP.
  • [Gimpel et al.2013] Kevin Gimpel, Dhruv Batra, Chris Dyer, and Gregory Shakhnarovich. 2013. A systematic exploration of diversity in machine translation. In EMNLP.
  • [Grosz and Sidner1986] Barbara J. Grosz and Candace L. Sidner. 1986. Attention, intentions, and the structure of discourse. Computational Linguistics, 12:175–204.
  • [Henderson et al.2014] Matthew Henderson, Blaise Thomson, and Steve Young. 2014. Word-based dialog state tracking with recurrent neural networks.

  • [Hochreiter and Schmidhuber1997] Sepp Hochreiter and Jurgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8).
  • [Li et al.2016] Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In NAACL.
  • [Liu et al.2016] Chia-Wei Liu, Ryan Lowe, Iulian V. Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: an empirical study of unsupervised evaluation metrics for dialogue response generation. In arXiv:1603.08023 [cs.CL].
  • [Lowe et al.2015] Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The Ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In SIGDIAL.
  • [Lowe et al.2016] Ryan Lowe, Iulian V. Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. On the evaluation of dialogue systems with next utterance classification. In submitted to SIGDIAL.
  • [Luong et al.2015] Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In EMNLP.
  • [Mikolov et al.2011] Tomas Mikolov, Stefan Kombrink, Lukas Burget, Jan Honza Cernock, and Sanjeev Khudanpur. 2011. Extensions of recurrent neural network language model. In ICASSP, pages 5528–5531.
  • [Och2003] Franz Josef Och. 2003. Minimum error rate training for statistical machine translation. In ACL.
  • [Papineni et al.2002] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In ACL.
  • [Pennington et al.2014] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543.
  • [Ramos2003] Juan Ramos. 2003. Using TF-IDF to determine word relevance in document queries. In ICML.
  • [Ranzato et al.2016] Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In ICLR.
  • [Ritter et al.2010] Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised modeling of Twitter conversation. In NAACL.
  • [Salton and Buckley1988] Gerard Salton and Christopher Buckley. 1988. Term-weighting approaches in automatic text retrieval. Information Processing and Management, 24(5):513–523.
  • [Serban et al.2015a] Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2015a. Building end-to-end dialogue systems using generative hierarchical neural network models. In AAAI.
  • [Serban et al.2015b] Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2015b. Hierarchical neural network generative models for movie dialogues. In arXiv:1507.04808.
  • [Shang et al.2015] Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In ACL.
  • [Sordoni et al.2015a] Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015a. A neural network approach to context-sensitive generation of conversation responses. In NAACL.
  • [Sordoni et al.2015b] Allessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jacob G. Simonsen, and Jian-Yun Nie. 2015b. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion. In CIKM.
  • [Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Neural Information Processing Systems (NIPS), pages 3104–3112, Montréal.
  • [Sutton and Barto1988] Richard S. Sutton and Andrew G. Barto. 1988. Reinforcement learning: An introduction. MIT Press.
  • [Tieleman and Hinton2012] T. Tieleman and G. Hinton. 2012. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. In COURSERA: Neural Networks for Machine Learning.

  • [van der Maaten and Hinton2008] Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. JMLR.
  • [Vinyals and Le2015] Oriol Vinyals and Quoc V. Le. 2015. A neural conversation model. In ICML Deep Learning Workshop.

  • [Wen et al.2015] T.-H. Wen, M. Gasic, N. Mrksic, P.-H. Su, D. Vandyke, and S. Young. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In EMNLP.
  • [Williams1992] Ronald Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256.
  • [Williams2009] Jason D. Williams. 2009. Spoken dialogue systems: Challenges, and opportunities for research. In ASRU.
  • [Young et al.2013] Steve Young, Milica Gasic, Blaise Thomson, and Jason D. Williams. 2013. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101:1160–1179.
  • [Zaremba and Sutskever2015] Wojciech Zaremba and Ilya Sutskever. 2015. Reinforcement learning neural turing machines. In arXiv:1505.00521 [cs.LG].