Hierarchical Encoder Decoder RNN (HRED) with Truncated Backpropagation Through Time (Truncated BPTT)
We investigate the task of building open domain, conversational dialogue systems based on large dialogue corpora using generative models. Generative models produce system responses that are autonomously generated word-by-word, opening up the possibility for realistic, flexible interactions. In support of this goal, we extend the recently proposed hierarchical recurrent encoder-decoder neural network to the dialogue domain, and demonstrate that this model is competitive with state-of-the-art neural language models and back-off n-gram models. We investigate the limitations of this and similar approaches, and show how its performance can be improved by bootstrapping the learning from a larger question-answer pair corpus and from pretrained word embeddings.
Dialogue systems, also known as interactive conversational agents, virtual agents and sometimes chatterbots, are used in a wide set of applications ranging from technical support services to language learning tools and entertainment [Young et al.2013, Shawar and Atwell2007]. Dialogue systems can be divided into goal-driven systems, such as technical support services, and non-goal-driven systems, such as language learning tools or computer game characters. Our current work focuses on the second case, due to the availability of large corpora of this type, though the model may eventually prove useful for goal-driven systems also.
Perhaps the most successful approach to goal-driven systems has been to view the dialogue problem as a partially observable Markov decision process (POMDP)[Young et al.2013]. Unfortunately, most deployed dialogue systems use hand-crafted features for the state and action space representations, and require either a large annotated task-specific corpus or a horde of human subjects willing to interact with the unfinished system. This not only makes it expensive and time-consuming to deploy a real dialogue system, but also limits its usage to a narrow domain. Recent work has tried to push goal-driven systems towards learning with few examples using constraints on the POMDP [Gasic et al.2013] as well as learning the observed features themselves with neural network models [Henderson, Thomson, and Young2014], yet such approaches still require either hand-crafted features or large corpora of annotated task-specific simulated conversations.
On the other end of the spectrum are the non-goal-driven systems [Ritter, Cherry, and Dolan2011, Banchs and Li2012, Ameixa et al.2014]. Most recently Sordoni et al. sordoni2015aneural and Shang et al. shang2015neural have drawn inspiration from the use of neural networks in natural language modeling and machine translation tasks [Cho et al.2014]. There are several motivations for developing non-goal-driven systems. First, they may be deployed directly for tasks which do not naturally exhibit a directly measurable goal (e.g. language learning) or simply for entertainment. Second, if they are trained on corpora related to the task of a goal-driven dialogue system (e.g. corpora which cover conversations on similar topics) then these models can be used to train a user simulator, which can then train the POMDP models discussed earlier [Young et al.2013, Pietquin and Hastie2013]. This would alleviate the expensive and time-consuming task of constructing a large-scale task-specific dialogue corpus. In addition to this, the features extracted from the non-goal-driven systems may be used to expand the state space representation of POMDP models [Singh et al.2002]. This can help generalization to dialogues outside the annotated task-specific corpora.
Our contribution is in the direction of end-to-end trainable, non-goal-driven systems based on generative probabilistic models. We define the generative dialogue problem as modeling the utterances and interactive structure of the dialogue. As such, we view our model as a cognitive system, which has to carry out natural language understanding, reasoning, decision making and natural language generation in order to replicate or emulate the behavior of the agents in the training corpus. Our approach differs from previous work on learning dialogue systems through interaction with humans [Young et al.2013, Gasic et al.2013, Cantrell et al.2012, Mohan and Laird2014], because it learns off-line through examples of human-human dialogues and aims to emulate the dialogues in the training corpus instead of maximizing a task-specific objective function. Contrary to explanation-based learning [Mohan and Laird2014] and rule-based inference systems [Langley et al.2014], our model does not require a predefined state or action space representation. These representations are instead learned directly from the corpus examples together with inference mechanisms, which map dialogue utterances to dialogue states, and action generation mechanisms, which map dialogue states to dialogue acts and stochastically to response utterances. We believe that training such a model end-to-end to minimize a single objective function, and with minimum reliance on hand-crafted features, will yield superior performance in the long run. Furthermore, we focus on models which can be trained efficiently on large datasets and which are able to maintain state over long conversations.
We experiment with the well-established recurrent neural network (RNN) and n-gram models. In particular, we adopt the hierarchical recurrent encoder-decoder (HRED) [Sordoni et al.2015a] and demonstrate that it is competitive with other models in the literature. We extend the model architecture to better suit the dialogue task. We show that performance can be substantially improved by bootstrapping from pretrained word embeddings and by pretraining the model on a larger question-answer pair (Q-A) corpus. To carry out experiments, we introduce the MovieTriples dialogue dataset based on movie scripts.
Modeling conversations on micro-blogging websites with generative probabilistic models was first proposed by Ritter et al. ritter2011data. They view the response generation problem as a translation problem, where a post needs to be translated into a response. Generating responses was found to be considerably more difficult than translating between languages, likely due to the wide range of plausible responses and lack of phrase alignment between the post and the response.
Later, Shang et al. shang2015neural proposed to use the recurrent neural network framework for generating responses on micro-blogging websites. This was followed up by Sordoni et al. sordoni2015aneural, who extended the framework from status-reply pairs to triples of three consecutive utterances.
To the best of our knowledge, Banchs et al. banchs2012iris were the first to suggest using movie scripts to build dialogue systems. Conditioned on one or more utterances, their model searches a database of movie scripts and retrieves an appropriate response. This was later followed up by Ameixa et al. ameixa2014luke, who demonstrated that movie subtitles could be used to provide responses to out-of-domain questions using an information retrieval system.
We consider a dialogue as a sequence of $M$ utterances $D = \{U_1, \ldots, U_M\}$ involving two interlocutors. Each $U_m$ contains a sequence of $N_m$ tokens, i.e. $U_m = \{w_{m,1}, \ldots, w_{m,N_m}\}$, where $w_{m,n}$ is a random variable taking values in the vocabulary $V$ and representing the token at position $n$ in utterance $m$. The tokens represent both words and speech acts, e.g. pause and end of turn tokens. A generative model of dialogue parameterizes a probability distribution $P_\theta$, governed by parameters $\theta$, over the set of all possible dialogues of arbitrary lengths. The probability of a dialogue $D$ can be decomposed:

$$P_\theta(U_1, \ldots, U_M) = \prod_{m=1}^{M} P_\theta(U_m \mid U_{<m}) = \prod_{m=1}^{M} \prod_{n=1}^{N_m} P_\theta(w_{m,n} \mid w_{m,<n}, U_{<m}),$$

where $U_{<m} = \{U_1, \ldots, U_{m-1}\}$ and $w_{m,<n} = \{w_{m,1}, \ldots, w_{m,n-1}\}$, i.e. the tokens preceding $n$ in the utterance $U_m$. The task is analogous to language modeling, with the critical difference that speech acts are included as separate tokens. Sampling from the model can be performed as in standard language modeling: sampling one word at a time from the conditional distribution conditioned on the previously sampled words.
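As an illustration of this word-by-word generation, the following is a minimal sketch of ancestral sampling from such a model. It is not the authors' code: `cond_distribution` is a hypothetical stand-in for the learned conditional distribution over the next token, and the toy vocabulary and end-of-dialogue token are illustrative.

```python
# Minimal sketch of ancestral (word-by-word) sampling from a generative dialogue model.
import numpy as np

def sample_dialogue(cond_distribution, vocab, end_token="</d>", max_len=200, seed=0):
    """Sample tokens w_1, w_2, ... one at a time until the end token appears."""
    rng = np.random.default_rng(seed)
    tokens = []
    for _ in range(max_len):
        probs = cond_distribution(tokens)       # P(w_n | w_1, ..., w_{n-1})
        next_tok = rng.choice(vocab, p=probs)   # draw the next token
        tokens.append(next_tok)
        if next_tok == end_token:
            break
    return tokens

# Toy usage: a uniform "model" over a three-token vocabulary.
vocab = ["hello", ".", "</d>"]
uniform = lambda history: np.ones(len(vocab)) / len(vocab)
print(sample_dialogue(uniform, vocab))
```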
Using standard n-grams to compute joint probabilities over dialogues, e.g. computing probability tables for each token given the $n-1$ preceding tokens, suffers from the curse of dimensionality and is intractable for any realistic vocabulary size. To overcome this, Bengio et al. Bengio-nnlm2003 proposed a distributed (dense) vector representation of words, called word embeddings, which parameterizes the conditional token probability as a smooth function using a neural network. By means of such distributed representations, the recurrent neural network (RNN) based language model [Mikolov et al.2010] has pushed state-of-the-art performance by learning long n-gram contexts while avoiding data sparsity issues. Overall, RNNs have performed well on a variety of NLP tasks such as machine translation [Cho et al.2014, Sutskever, Vinyals, and Le2014, Bahdanau, Cho, and Bengio2015] and information retrieval [Sordoni et al.2015a].
A recurrent neural network (RNN) models an input sequence of tokens $\{w_1, \ldots, w_N\}$ using the recurrence:

$$h_n = f(h_{n-1}, w_n),$$

where $h_n \in \mathbb{R}^{d_h}$ is called a recurrent, or hidden, state and acts as a vector representation of the tokens seen up to position $n$. In particular, the last state $h_N$ may be viewed as an order-sensitive compact summary of all the tokens. In language modeling tasks, the context information encoded in $h_n$ is used to predict the next token in the sentence:

$$P(w_{n+1} = v \mid w_1, \ldots, w_n) = \frac{\exp(g(h_n, v))}{\sum_{v'} \exp(g(h_n, v'))}.$$

The functions $f$ and $g$ are typically defined as:

$$f(h_{n-1}, w_n) = \tanh(H h_{n-1} + I_{w_n}), \qquad g(h_n, v) = O_v^\top h_n.$$

The matrix $I$ contains the input word embeddings, i.e. each column $I_j$ is a vector corresponding to token $j$ in the vocabulary $V$. Due to the size of the model vocabulary $|V|$, it is common to approximate the matrix $I$ with a low-rank decomposition, i.e. $I = X E$, where $X \in \mathbb{R}^{d_h \times d_e}$ and $E \in \mathbb{R}^{d_e \times |V|}$, with $d_e < d_h$. This approach also has the advantage that the embedding matrix $E$ may separately be bootstrapped (e.g. learned) from larger corpora. Analogously, the matrix $O$ represents the output word embeddings, where each possible next token is projected into another dense vector $O_v$ and compared to the hidden state $h_n$. The probability of seeing token $v$ at position $n+1$ increases if its corresponding embedding vector $O_v$ is "near" the context vector $h_n$. The parameter $H$ is called a recurrent parameter, because it links $h_{n-1}$ to $h_n$. All parameters are learned by maximizing the log-likelihood of the parameters on a training set using stochastic gradient descent.
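A minimal NumPy rendering of these equations, including the low-rank decomposition $I = XE$, is sketched below. The dimensions and random initializations are illustrative only, not those used in the experiments.

```python
# Sketch of one RNN language-model step with a low-rank input embedding I = X E.
import numpy as np

rng = np.random.default_rng(0)
V, d_e, d_h = 50, 8, 16            # toy vocabulary, embedding and hidden sizes
E = rng.normal(0, 0.01, (d_e, V))  # word embeddings (low-rank factor)
X = rng.normal(0, 0.01, (d_h, d_e))
H = np.linalg.qr(rng.normal(size=(d_h, d_h)))[0]   # recurrent matrix (orthogonal)
O = rng.normal(0, 0.01, (d_h, V))  # output word embeddings

def rnn_step(h_prev, w):
    """f(h_{n-1}, w_n) = tanh(H h_{n-1} + I_{w_n}), with I = X E."""
    return np.tanh(H @ h_prev + X @ E[:, w])

def next_token_probs(h):
    """Softmax over g(h_n, v) = O_v^T h_n for every v in the vocabulary."""
    logits = O.T @ h
    logits -= logits.max()
    p = np.exp(logits)
    return p / p.sum()

h = np.zeros(d_h)
for w in [3, 17, 5]:               # a toy token sequence
    h = rnn_step(h, w)
print(next_token_probs(h).shape)   # (50,): a distribution over the next token
```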
Our work extends the hierarchical recurrent encoder-decoder architecture (HRED) proposed by Sordoni et al. sordoni2015ahier for web query suggestion. In the original framework, HRED predicts the next web query given the queries already submitted by the user. The history of past submitted queries is considered as a sequence at two levels: a sequence of words for each web query and a sequence of queries. HRED models this hierarchy of sequences with two RNNs: one at the word level and one at the query level. We make a similar assumption, namely, that a dialogue can be seen as a sequence of utterances which, in turn, are sequences of tokens. A representation of HRED is given in Figure 1.
In dialogue, the encoder RNN maps each utterance to an utterance vector. The utterance vector is the hidden state obtained after the last token of the utterance has been processed. The higher-level context RNN keeps track of past utterances by iteratively processing each utterance vector. After processing utterance $U_m$, the hidden state of the context RNN represents a summary of the dialogue up to and including turn $m$, which is used to predict the next utterance $U_{m+1}$. This hidden state can be interpreted as the continuous-valued state of the dialogue system. The next utterance prediction is performed by means of a decoder RNN, which takes the hidden state of the context RNN and produces a probability distribution over the tokens in the next utterance. The decoder RNN is similar to the RNN language model [Mikolov et al.2010], but with the important difference that the prediction is conditioned on the hidden state of the context RNN. It can be interpreted as the response generation module of the dialogue system. The encoder, context and decoder RNNs all make use of the GRU hidden unit [Cho et al.2014]. Everywhere else we use the hyperbolic tangent as activation function. It is also possible to use the maxout activation function between the hidden state and the projected word embeddings of the decoder RNN [Goodfellow et al.2013]. The same encoder RNN and decoder RNN parameters are used for every utterance in a dialogue. This helps the model generalize across utterances. Further details of the architecture are described by Sordoni et al. sordoni2015ahier.
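To make the three-level structure concrete, here is a schematic NumPy sketch of one HRED prediction step. It is an illustration only: it uses plain tanh RNN cells in place of the GRU units described above, toy dimensions, and randomly initialized weights; the names `enc`, `ctx` and `dec` are hypothetical.

```python
# Schematic HRED forward pass: encoder RNN -> context RNN -> decoder RNN.
import numpy as np

rng = np.random.default_rng(1)
d_e, d_h, d_c, V = 8, 16, 16, 50   # toy embedding, encoder/decoder, context, vocab sizes
E   = rng.normal(0, 0.01, (d_e, V))          # shared input embeddings
enc = {"H": rng.normal(0, 0.1, (d_h, d_h)), "W": rng.normal(0, 0.1, (d_h, d_e))}
ctx = {"H": rng.normal(0, 0.1, (d_c, d_c)), "W": rng.normal(0, 0.1, (d_c, d_h))}
dec = {"H": rng.normal(0, 0.1, (d_h, d_h)), "W": rng.normal(0, 0.1, (d_h, d_e)),
       "C": rng.normal(0, 0.1, (d_h, d_c)), "O": rng.normal(0, 0.01, (d_h, V))}

def encode_utterance(tokens):
    """Encoder RNN: the hidden state after the last token is the utterance vector."""
    h = np.zeros(d_h)
    for w in tokens:
        h = np.tanh(enc["H"] @ h + enc["W"] @ E[:, w])
    return h

def context_step(c, utt_vec):
    """Context RNN: summarizes the dialogue up to and including the current turn."""
    return np.tanh(ctx["H"] @ c + ctx["W"] @ utt_vec)

def decoder_step(h, w_prev, c):
    """Decoder RNN step, conditioned on the context RNN state c."""
    h = np.tanh(dec["H"] @ h + dec["W"] @ E[:, w_prev] + dec["C"] @ c)
    logits = dec["O"].T @ h
    p = np.exp(logits - logits.max()); p /= p.sum()
    return h, p

# Two toy utterances (token ids) seen so far; predict the first word of the next turn.
c = np.zeros(d_c)
for utterance in [[3, 7, 2], [5, 9, 2]]:
    c = context_step(c, encode_utterance(utterance))
h_dec, p_first_word = decoder_step(np.zeros(d_h), 2, c)   # 2 = toy end-of-utterance id
print(p_first_word.argmax())
```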
For modeling dialogues, we expect the HRED model to be superior to the standard RNN model for two reasons. First, because the context RNN allows the model to represent a form of common ground between speakers, e.g. to represent topics and concepts shared between the speakers using a distributed vector representation, which we hypothesize to be important for building an effective dialogue system [Clark and Brennan1991]. Second, because the number of computational steps between utterances is reduced. This makes the objective function more stable w.r.t. the model parameters, and helps propagate the training signal for first-order optimization methods [Sordoni et al.2015a].
In HRED, the utterance representation is given by the last hidden state of the encoder RNN. This architecture worked well for web queries, but may be insufficient for dialogue utterances, which are longer and contain more syntactic articulations than web queries. For long utterances, the last state of the encoder RNN may not reflect important information seen at the beginning of the utterance. Thus, we also experiment with a model where the utterance encoder is a bidirectional RNN. Bidirectional RNNs run two chains: one forward through the utterance tokens and another backward, i.e. reversing the tokens in the utterance. The forward hidden state at position $n$ summarizes the tokens preceding position $n$, and the backward hidden state summarizes the tokens following position $n$. (The output of the bidirectional RNN is always based on the utterance before the current utterance of the decoder RNN.) To obtain a fixed-length representation for the utterance, we summarize the information in the forward and backward RNN hidden states by either: 1) taking the concatenation of the last state of each RNN as input to the context RNN, or 2) applying L2 pooling over the temporal dimension of each chain, and taking the concatenation of the two pooled states as input to the context RNN. (L2 pooling over an utterance is defined as $\sqrt{\frac{1}{N_m}\sum_{n=1}^{N_m} h_n^2}$, where $h_n$ is the encoder RNN hidden state at position $n$ and $N_m$ is the length of the utterance.) The bidirectional structure will effectively introduce additional short-term dependencies, which has proven useful in similar architectures [Bahdanau, Cho, and Bengio2015, Sutskever, Vinyals, and Le2014]. In experiments below, we refer to this variant as HRED-Bidirectional.
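Below is a small sketch of the bidirectional utterance encoder and the two fixed-length summaries just described (last-state concatenation and L2 pooling). As before, it uses toy tanh chains and illustrative dimensions rather than the trained GRU encoder.

```python
# Sketch of a bidirectional utterance encoder with last-state concatenation and L2 pooling.
import numpy as np

rng = np.random.default_rng(2)
d_e, d_h, V = 8, 16, 50
E  = rng.normal(0, 0.01, (d_e, V))
fw = {"H": rng.normal(0, 0.1, (d_h, d_h)), "W": rng.normal(0, 0.1, (d_h, d_e))}
bw = {"H": rng.normal(0, 0.1, (d_h, d_h)), "W": rng.normal(0, 0.1, (d_h, d_e))}

def run_chain(params, tokens):
    h, states = np.zeros(d_h), []
    for w in tokens:
        h = np.tanh(params["H"] @ h + params["W"] @ E[:, w])
        states.append(h)
    return np.stack(states)                      # (N, d_h)

def encode_bidirectional(tokens):
    h_fwd = run_chain(fw, tokens)                # forward over the utterance
    h_bwd = run_chain(bw, tokens[::-1])          # backward over the reversed utterance
    last_concat = np.concatenate([h_fwd[-1], h_bwd[-1]])
    # L2 pooling over the temporal dimension: sqrt of the mean of squared states.
    l2_pool = np.concatenate([np.sqrt((h_fwd ** 2).mean(axis=0)),
                              np.sqrt((h_bwd ** 2).mean(axis=0))])
    return last_concat, l2_pool

last_concat, l2_pool = encode_bidirectional([3, 7, 11, 2])
print(last_concat.shape, l2_pool.shape)          # (32,) (32,)
```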
The commonsense knowledge that the dialogue interlocutors share may be difficult to infer if the dataset is not sufficiently large. Therefore, our models may be improved by learning word embeddings from larger corpora. We choose to initialize our word embeddings with Word2Vec (http://code.google.com/p/word2vec/) [Mikolov et al.2013] trained on the Google News dataset containing about 100 billion words. The sheer size of the dataset ensures that the embeddings contain rich semantic information about each word.
To learn a good initialization point for all model parameters, instead of only the word embeddings, we can further pretrain the model on a large non-dialogue corpus, which covers similar topics and types of interactions between interlocutors. One such corpus is the Q-A SubTle corpus containing about 5.5M Q-A pairs constructed from movie subtitles [Ameixa et al.2014]. We construct an artificial dialogue dataset by taking each pair as a two-turn dialogue and use this to pretrain the model.
The MovieTriples dataset has been developed by expanding and preprocessing the Movie-DiC dataset by Banchs et al. [Banchs2012] to make it fit the generative dialogue modeling framework. The dataset is available upon request. Movie scripts span a wide range of topics, contain long interactions with few participants and relatively few spelling mistakes and acronyms. As observed by Forchini forchini2009spontaneity: "movie language can be regarded as a potential source for teaching and learning spoken language features". Therefore, we believe bootstrapping a goal-driven spoken dialogue system based on movie scripts will improve performance.
We used the Python-based natural language toolkit NLTK [Bird, Klein, and Loper2009] to perform tokenization and named-entity recognition. All names and numbers were replaced with the <person> and <number> tokens respectively [Ritter, Cherry, and Dolan2010]. To reduce data sparsity further, all tokens were transformed to lowercase letters, and all but the 10,000 most frequent tokens were replaced with a generic <unk> token.
We then generated "triples" $\{U_1, U_2, U_3\}$, i.e. dialogues of three turns between two interlocutors A and B, for which A emits a first utterance $U_1$, B responds with $U_2$ and A responds with a last utterance $U_3$ [Sordoni et al.2015b]. To capture the interactive dialogue structure, a special end-of-utterance token is appended to all utterances and a continued-utterance token between breaks in lines from the same speaker. To avoid co-dependencies between triples coming from the same movie, we first split the movies into training, validation and test set, and then construct the triples. Statistics are reported in Table 1.
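The preprocessing and triple construction can be sketched roughly as follows. This is a simplified stand-in, not the actual MovieTriples pipeline: it uses a crude regex tokenizer, omits the NLTK named-entity step (mapping names to <person>) and the continued-utterance token, and the end-of-utterance marker name and the way consecutive turns are grouped are assumptions for illustration.

```python
# Simplified sketch: tokenize, replace numbers, cap the vocabulary at 10,000 tokens,
# and join three consecutive turns into a triple separated by an end-of-utterance token.
import re
from collections import Counter

EOU = "</s>"          # illustrative end-of-utterance token
UNK = "<unk>"

def tokenize(utterance):
    utterance = utterance.lower()
    utterance = re.sub(r"\d+", "<number>", utterance)
    return re.findall(r"<number>|\w+|[^\w\s]", utterance)

def build_vocab(token_lists, size=10000):
    counts = Counter(tok for toks in token_lists for tok in toks)
    return {tok for tok, _ in counts.most_common(size)}

def make_triples(turns, vocab):
    """Group three consecutive turns (A, B, A) into one token sequence."""
    triples = []
    for u1, u2, u3 in zip(turns, turns[1:], turns[2:]):
        toks = []
        for u in (u1, u2, u3):
            toks += [t if t in vocab else UNK for t in tokenize(u)] + [EOU]
        triples.append(toks)
    return triples

turns = ["Yeah, okay.", "Well, I guess I'll be going now.", "I'll see you tomorrow."]
vocab = build_vocab([tokenize(t) for t in turns])
print(make_triples(turns, vocab)[0])
```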
We evaluate the different variants of our HRED model, and compare against several alternatives, including basic n-gram models [Goodman2001], a standard (non-hierarchical) RNN trained on the concatenation of the utterances in each triple, and a context-sensitive model (DCGM-I) recently proposed by Sordoni et al. sordoni2015aneural.
Accurate evaluation of a non-goal-driven dialogue system is an open problem [Galley et al.2015, Pietquin and Hastie2013, Schatzmann, Georgila, and Young2005]. There is no well-established method for automatic evaluation, and human-based evaluation is expensive. Nevertheless, for probabilistic language models word perplexity is a well-established performance metric [Bengio et al.2003, Mikolov et al.2010], and has been suggested for generative dialogue models previously [Pietquin and Hastie2013]. We define word perplexity:
$$\exp\left(-\frac{1}{N_W}\sum_{n=1}^{K}\log P_\theta(U_1^n, U_2^n, U_3^n)\right)$$

for a model with parameters $\theta$, a dataset with $K$ triples $\{(U_1^n, U_2^n, U_3^n)\}_{n=1}^{K}$, and $N_W$ the number of tokens in the entire dataset. Lower perplexity is indicative of a better model. Perplexity explicitly measures the model's ability to account for the syntactic structure of the dialogue (e.g. turn-taking) and the syntactic structure of each utterance (e.g. punctuation marks). In dialogue, the distribution over the words in the next utterance is highly multi-modal, e.g. there are many possible answers, which makes perplexity particularly appropriate because it will always measure the probability of regenerating the exact reference utterance.
We also consider the word classification error (also known as word error rate). This is defined as the number of words in the dataset the model has predicted incorrectly divided by the total number of words in the dataset. (For a word prediction to be counted as correct, both the word and its position in the utterance must be correct.) Each word contributes either zero or one to the count, which means that the measure is more robust to unlikely (e.g. unpredictable) words. However, it is also less fine-grained than word perplexity: instead of measuring the whole distribution, it only measures the regions of high probability.
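Both measures can be computed from the per-token distributions produced by a model, as in the sketch below. Here `token_probs` is a hypothetical list pairing each reference token with the model's full next-token distribution at that position; the toy data is illustrative.

```python
# Sketch of word perplexity and word classification error from per-token distributions.
import numpy as np

def word_perplexity(token_probs):
    """exp(- mean log-probability of the reference tokens)."""
    log_probs = [np.log(dist[ref]) for dist, ref in token_probs]
    return float(np.exp(-np.mean(log_probs)))

def word_classification_error(token_probs):
    """Fraction of positions where the most probable token is not the reference."""
    wrong = [int(np.argmax(dist) != ref) for dist, ref in token_probs]
    return float(np.mean(wrong))

# Toy example: three positions over a five-word vocabulary.
rng = np.random.default_rng(3)
dists = rng.dirichlet(np.ones(5), size=3)
refs = [0, 2, 4]
token_probs = list(zip(dists, refs))
print(word_perplexity(token_probs), word_classification_error(token_probs))
```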
Ultimately, we care about generating syntactically and semantically coherent dialogues: utterances which are grammatically correct and reflect the distribution of topics in the corpus, and whole dialogues which reflect the interaction patterns and topical evolution of the dialogues in the corpus. Despite having been proposed before [Pietquin and Hastie2013, Schatzmann, Georgila, and Young2005], it is not clear how well word perplexity and word classification error correlate with this goal. Nevertheless, optimizing probabilistic models using word perplexity has shown promising results in several machine learning tasks, including statistical machine translation [Auli et al.2013, Sutskever, Vinyals, and Le2014, Bahdanau, Cho, and Bengio2015], speech recognition [Hinton et al.2012, Deng and Li2013] and image caption generation [Kiros, Salakhutdinov, and Zemel2014, Vinyals et al.2015]. Based on these empirical findings, we expect to be able to discriminate between models based on word perplexity, and to use word classification error and qualitative analysis of generated dialogues to understand the performance of the models in depth.
Table 2: Word perplexity and word classification error for all models, including the Absolute Discounting N-Gram, Witten-Bell Discounting N-Gram, HRED + Word2Vec, RNN + SubTle, HRED + SubTle and HRED-Bi. + SubTle models. Standard deviations are shown for all neural models. Best performances are marked in bold.
| Reference ($U_1$, $U_2$) | MAP | Target ($U_3$) |
| $U_1$: yeah , okay . $U_2$: well , i guess i ' ll be going now . | i ' ll see you tomorrow . | yeah . |
| $U_1$: oh . <continued_utterance> oh . $U_2$: what ' s the matter , honey ? | i don ' t know . | oh . |
| $U_1$: it ' s the cheapest . $U_2$: then it ' s the worst kind ? | no , it ' s not . | they ' re all good , sir . |
| $U_1$: <person> ! what are you doing ? $U_2$: shut up ! c ' mon . | what are you doing here ? | what are you that crazy ? |
To train the neural network models, we optimized the log-likelihood of the triples using the recently proposed Adam optimizer [Kingma and Ba2014]. (We truncated all triples to a fixed maximum number of tokens, and could therefore apply backpropagation on the full token sequences.)
The best hyperparameters of the models were chosen by early stopping with patience on the validation set perplexity [Bengio2012]. We initialized the recurrent parameter matrices as orthogonal matrices, and all other parameters from a Gaussian distribution with mean zero and a small standard deviation. For the baseline RNN, we tested several hidden state sizes, and for HRED we experimented with several encoder and decoder hidden state sizes. Based on preliminary results and due to GPU memory limitations, we fixed the encoder and decoder state size to one value when not bootstrapping or when bootstrapping from Word2Vec, and to another when bootstrapping from SubTle. Preliminary experiments showed that sufficiently large context RNN state spaces performed similarly, so we likewise fixed the context RNN size to one value when not bootstrapping or bootstrapping from Word2Vec, and to another when bootstrapping from SubTle. For all models, we used one word embedding size when bootstrapping from SubTle and another otherwise. To help generalization, we used the maxout activation function, between the hidden state and the projected word embeddings of the decoder RNN, when not bootstrapping and when bootstrapping from Word2Vec. We used L2 pooling for all HRED models, except when bootstrapping from SubTle, since it appeared to perform slightly worse there.
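The training protocol (orthogonal initialization of the recurrent matrices and early stopping with patience on validation perplexity) can be sketched as below. This is a minimal illustration: `train_one_epoch` and `validation_perplexity` are hypothetical stand-ins for the Adam-based optimization and evaluation code, and the patience value is illustrative.

```python
# Sketch of orthogonal initialization and early stopping with patience.
import numpy as np

def orthogonal_init(shape, rng):
    """Return an orthogonal matrix of the requested square shape (QR trick)."""
    q, _ = np.linalg.qr(rng.normal(size=shape))
    return q

def train_with_patience(params, train_one_epoch, validation_perplexity,
                        patience=5, max_epochs=100):
    best_ppl, best_params, waited = float("inf"), dict(params), 0
    for _ in range(max_epochs):
        train_one_epoch(params)                  # one pass of (Adam) updates
        ppl = validation_perplexity(params)
        if ppl < best_ppl:                       # improvement: reset the counter
            best_ppl, best_params, waited = ppl, dict(params), 0
        else:                                    # no improvement: spend patience
            waited += 1
            if waited >= patience:
                break
    return best_params, best_ppl

# Toy usage with hypothetical stand-ins for training and validation.
rng = np.random.default_rng(4)
params = {"H": orthogonal_init((16, 16), rng)}
ppls = iter([120.0, 90.0, 85.0, 86.0, 87.0, 88.0, 89.0, 90.0])
best, ppl = train_with_patience(params, lambda p: None, lambda p: next(ppls), patience=3)
print(ppl)   # 85.0
```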
Our embedding matrix was initialized using the publicly available 300-dimensional Word2Vec embeddings trained on the Google News corpus [Mikolov et al.2013]. Special dialogue tokens, which did not exist in the Word2Vec embeddings, were initialized from a Gaussian distribution as before. The training procedure was carried out in two stages. First, we trained each neural model with fixed Word2Vec embeddings. During this stage, we also trained the embeddings of the speech-act and placeholder tokens, together with the tokens not covered by the original Word2Vec embeddings. In the second stage, we then trained all parameters of each neural model until convergence.
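A sketch of this initialization is given below, assuming the gensim package and a local copy of the Google News vectors; the file path is hypothetical and the vocabulary ordering is illustrative. Tokens absent from Word2Vec, including the special dialogue tokens, fall back to the Gaussian initialization.

```python
# Sketch: build an embedding matrix from pretrained Word2Vec vectors, with a
# Gaussian fallback for tokens (e.g. special dialogue tokens) not in Word2Vec.
import numpy as np
from gensim.models import KeyedVectors

def build_embedding_matrix(vocab, w2v_path="GoogleNews-vectors-negative300.bin",
                           dim=300, std=0.01, seed=0):
    rng = np.random.default_rng(seed)
    kv = KeyedVectors.load_word2vec_format(w2v_path, binary=True)
    E = rng.normal(0.0, std, size=(dim, len(vocab)))   # Gaussian fallback columns
    for j, token in enumerate(vocab):
        if token in kv:                                # copy the pretrained vector
            E[:, j] = kv[token]
    return E

# Stage 1 would train the model with E held fixed (apart from the fallback columns);
# stage 2 would then fine-tune all parameters, as described in the text.
```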
We processed the SubTle corpus by following the same procedure as used for MovieTriples, but treating the last utterance of each triple as empty. The final SubTle corpus contained 5,503,741 Q-A pairs, and a total of 93,320,500 tokens. When bootstrapping from the SubTle corpus, we found that all models performed slightly better when randomly initializing and learning the word embeddings from SubTle compared to fixing the word embeddings to those given by Word2Vec. For this reason, we do not report results combining bootstrapping from the SubTle corpus with Word2Vec word embeddings.
The HRED models were pretrained for approximately four epochs on the SubTle dataset, after which performance did not appear to improve further. Then, we fine-tuned the pretrained models on the MovieTriples dataset, holding the word embeddings fixed.
Our results are presented in Table 2. All neural models beat state-of-the-art n-gram models substantially w.r.t. both word perplexity and word classification error. Without bootstrapping, the RNN model performs similarly to the more complex DCGM-I and HRED models. This can be explained by the size of the dataset, which makes it easy for the HRED and DCGM-I models to overfit. The last four lines of Table 2 confirm that bootstrapping the model parameters achieves significant gains in both measures. Bootstrapping from SubTle is particularly useful, since it yields a gain of nearly 10 perplexity points compared to the HRED model without bootstrapping. We believe that this is because it trains all model parameters, unlike bootstrapping from Word2Vec, which only trains the word embeddings.
In general, we find that the gains due to architectural choices are smaller than those obtained by bootstrapping. This can be explained by the fact that we are in a regime of relatively little training data compared to other natural language processing tasks, such as machine translation, and hence we would expect the differences to grow with more training data and longer dialogues. Overall, the bidirectional structure appears to capture and retain information from the first two utterances ($U_1$ and $U_2$) better than either the RNN or the original HRED model. This confirms our earlier hypothesis, and demonstrates the potential of HRED for modeling long dialogues.
We also considered the use of beam search for RNNs [Graves2012] to approximate the most probable (MAP) last utterance $U_3$, given the first two utterances $U_1$ and $U_2$. MAP outputs are presented in Table 3 for HRED-Bidirectional bootstrapped from the SubTle corpus. As shown there, the model often produced sensible answers. However, the majority of the predictions were generic, such as I don't know or I'm sorry. We observed the same phenomenon for the RNN model, and similar observations can be inferred from remarks in the recent literature [Sordoni et al.2015b, Vinyals and Le2015]. To the best of our knowledge, we are the first to emphasize and discuss it in detail. (After publishing the first draft of this paper, Li et al. DBLP:journals/corr/LiGBGD15 investigated this problem further and proposed changing the objective function at test time to also maximize the mutual information between the generated utterance and the previous utterances.)
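As one way to approximate the MAP response, beam search over the decoder can be sketched as below. This is a generic illustration, not the exact procedure used in the experiments: `step` is a hypothetical callable mapping (decoder state, previous token) to (new state, log-probabilities over the next token), and the default beam width and length limit are illustrative.

```python
# Minimal beam-search sketch for approximating the most probable decoder output.
import numpy as np

def beam_search(step, init_state, start_token, end_token, beam_width=5, max_len=30):
    beams = [(0.0, [start_token], init_state)]           # (log-prob, tokens, state)
    finished = []
    for _ in range(max_len):
        candidates = []
        for logp, tokens, state in beams:
            new_state, log_probs = step(state, tokens[-1])
            for tok in np.argsort(log_probs)[-beam_width:]:   # top continuations
                candidates.append((logp + log_probs[tok], tokens + [int(tok)], new_state))
        # Keep the best hypotheses; move completed ones (ending in end_token) aside.
        candidates.sort(key=lambda c: c[0], reverse=True)
        beams = []
        for cand in candidates[:beam_width]:
            (finished if cand[1][-1] == end_token else beams).append(cand)
        if not beams:
            break
    best = max(finished + beams, key=lambda c: c[0])
    return best[1]
```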
There are several possible explanations for this behavior. Firstly, due to data scarcity, the models may only have learned to predict the most frequent utterances. Since the dialogues are inherently ambiguous and multi-modal, predicting them accurately would require more data than in other natural language processing tasks. Secondly, the majority of tokens were punctuation marks and pronouns. Since every token is weighted equally during training, the gradient signal of the neural networks is dominated by these punctuation and pronoun tokens. This makes it hard for the neural networks to learn topic-specific embeddings and even harder to predict diverse utterances. This suggests exploring neural architectures which explicitly separate semantic structure from syntactic structure. Finally, the context of a triple may be too short. In that case, the models should benefit from longer contexts and from conditioning on other information sources, such as semantic and visual information.
An important implication of this observation is that metrics based on MAP outputs (e.g. cosine similarity, BLEU, Levenshtein distance) will primarily favor models that output the same number of punctuation marks and pronouns as are in the test utterances, as opposed to similar semantic content (e.g. nouns and verbs). Such metrics would be systematically biased and would not necessarily correlate with the objective of producing appropriate responses. We therefore cannot justify the use of such metrics when the results are known to lack diversity.
Nevertheless, we also note that this problem did not occur when we generated stochastic samples (as opposed to the MAP outputs). In fact, the stochastic samples contained a large variety of topic-specific words and often appeared to maintain the topic of the conversation.
We have demonstrated that a hierarchical recurrent neural network generative model can outperform both n-gram based models and baseline neural network models on the task of modeling utterances and speech acts. To support our investigation, we introduced a novel dataset called MovieTriples based on movie scripts, which are suitable for modeling long, open domain dialogues close to human spoken language. In addition to the recurrent hierarchical architecture, we found two crucial ingredients for improving performance: the use of a large external monologue corpus to initialize the word embeddings, and the use of a large related, but non-dialogue, corpus in order to pretrain the recurrent net. This points to the need for larger dialogue datasets.
Empirical performance of all models was measured using perplexity. While this is an established measure for generative models, in the dialogue setting utterances may be overwhelmed by many common words especially arising from colloquial or informal exchanges. It may be fruitful to investigate other measures of performance for generative dialogue systems. We also considered actual responses produced by the model. The MAP outputs tended to produce somewhat generic, but conversationally acceptable, responses. Stochastic samples from the model produced more diverse dialogues.
Future work should study models for full length dialogues, as opposed to triples, and model other speech acts, such as interlocutors entering or leaving the dialogue and executing actions. Finally, our analysis of the model MAP outputs suggests that it would be beneficial to include longer and additional context, including other modalities such as audio and video.
The authors acknowledge IBM Research, NSERC, Canada Research Chairs, Nuance Foundation, CIFAR and Compute Canada for funding and resources. The authors thank Ryan Lowe, Laurent Charlin and Nissan Pow for constructive feedback. The authors thank Rafael E. Banchs for providing the Movie-DiC dataset, and Luisa Coheur for providing the SubTle dataset. The authors also thank the anonymous AAAI reviewers for their helpful feedback.