A Diversity-Promoting Objective Function for Neural Conversation Models
Sequence-to-sequence neural network models for generation of conversational responses tend to generate safe, commonplace responses (e.g., "I don't know") regardless of the input. We suggest that the traditional objective function, i.e., the likelihood of output (response) given input (message), is unsuited to response generation tasks. Instead we propose using Maximum Mutual Information (MMI) as the objective function in neural models. Experimental results demonstrate that the proposed MMI models produce more diverse, interesting, and appropriate responses, yielding substantive gains in BLEU scores on two conversational datasets and in human evaluations.
Conversational agents are of growing importance in facilitating smooth interaction between humans and their electronic devices, yet conventional dialog systems continue to face major challenges in the form of robustness, scalability and domain adaptation. Attention has thus turned to learning conversational patterns from data: researchers have begun to explore data-driven generation of conversational responses within the framework of statistical machine translation (SMT), either phrase-based [Ritter et al.2011], or using neural networks to rerank, or directly in the form of sequence-to-sequence (Seq2Seq) models [Sordoni et al.2015, Shang et al.2015, Vinyals and Le2015, Wen et al.2015, Serban et al.2016]. Seq2Seq models offer the promise of scalability and language-independence, together with the capacity to implicitly learn semantic and syntactic relations between pairs, and to capture contextual dependencies [Sordoni et al.2015] in a way not possible with conventional SMT approaches [Ritter et al.2011].
An engaging response generation system should be able to output grammatical, coherent responses that are diverse and interesting. In practice, however, neural conversation models tend to generate trivial or non-committal responses, often involving high-frequency phrases along the lines of I don’t know or I’m OK [Sordoni et al.2015, Vinyals and Le2015, Serban et al.2016]. Table 1 illustrates this phenomenon, showing top outputs from Seq2Seq models. All the top-ranked responses are generic. Responses that seem more meaningful or specific can also be found in the N-best lists, but rank much lower. In part at least, this behavior can be ascribed to the relative frequency of generic responses like I don’t know in conversational datasets, in contrast with the relative sparsity of more contentful alternative responses. (In our training dataset from the OpenSubtitles database (OSDb), sentences containing the sequence I don’t know occur at a high rate, considering the huge diversity of this dataset.)
It appears that by optimizing for the likelihood of outputs given inputs, neural models assign high probability to “safe” responses. This objective function, common in related tasks such as machine translation, may be unsuited to generation tasks involving intrinsically diverse outputs. Intuitively, it seems desirable to take into account not only the dependency of responses on messages, but also the inverse, the likelihood that a message will be provided to a given response.
Table 1: Top-ranked outputs from a Seq2Seq model for three input messages (left column), with lower-ranked responses from the same N-best list (right column). Smaller scores indicate higher rank.

| Input: What are you doing? | |
|---|---|
| 0.86 I don’t know. | 1.09 Get out of here. |
| 1.03 I don’t know! | 1.09 I’m going home. |
| 1.06 Nothing. | 1.09 Oh my god! |
| 1.09 Get out of the way. | 1.10 I’m talking to you. |

| Input: what is your name? | |
|---|---|
| 0.91 I don’t know. | … |
| 0.92 I don’t know! | 1.55 My name is Robert. |
| 0.92 I don’t know, sir. | 1.58 My name is John. |
| 0.97 Oh, my god! | 1.59 My name’s John. |

| Input: How old are you? | |
|---|---|
| 0.79 I don’t know. | … |
| 1.06 I’m fine. | 1.64 Twenty-five. |
| 1.17 I’m all right. | 1.66 Five. |
| 1.17 I’m not sure. | 1.71 Eight. |
We propose to capture this intuition by using Maximum Mutual Information (MMI), first introduced in speech recognition [Bahl et al.1986, Brown1987], as an optimization objective that measures the mutual dependence between inputs and outputs. Below, we present practical strategies for neural generation models that use MMI as an objective function. We show that use of MMI results in a clear decrease in the proportion of generic response sequences, generating correspondingly more varied and interesting outputs.
The approach we take here is data-driven and end-to-end. This stands in contrast to conventional dialog systems, which typically are template- or heuristic-driven even where there is a statistical component [Levin et al.2000, Oh and Rudnicky2000, Ratnaparkhi2002, Walker et al.2003, Pieraccini et al.2009, Young et al.2010, Wang et al.2011, Banchs and Li2012, Chen et al.2013, Ameixa et al.2014, Nio et al.2014].
We follow a newer line of investigation, originally introduced by Ritter et al. (2011), which frames response generation as a statistical machine translation (SMT) problem. Recent progress in SMT stemming from the use of neural language models [Sutskever et al.2014, Gao et al.2014, Bahdanau et al.2015, Luong et al.2015] has inspired attempts to extend these neural techniques to response generation. Sordoni et al. (2015) improved upon Ritter et al. (2011) by rescoring the output of a phrasal SMT-based conversation system with a Seq2Seq model that incorporates prior context. Other researchers have subsequently sought to apply direct end-to-end Seq2Seq models [Shang et al.2015, Vinyals and Le2015, Wen et al.2015, Yao et al.2015, Serban et al.2016]. These Seq2Seq models are Long Short-Term Memory (LSTM) neural networks [Hochreiter and Schmidhuber1997] that can implicitly capture compositionality and long-span dependencies. Wen et al. (2015) attempt to learn response templates from crowd-sourced data, whereas we seek to develop methods that can learn conversational patterns from naturally-occurring data.
Prior work in generation has sought to increase diversity, but with different goals and techniques. Carbonell and Goldstein (1998) and Gimpel et al. (2013) produce multiple outputs that are mutually diverse, either non-redundant summary sentences or N-best lists. Our goal, however, is to produce a single non-trivial output, and our method does not require identifying lexical overlap to foster diversity. (Augmenting our technique with MMR-based [Carbonell and Goldstein1998] diversity helped increase lexical but not semantic diversity, e.g., I don’t know vs. I haven’t a clue, and yielded no gain in performance.)
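For reference, the MMR-based augmentation mentioned in the parenthetical note can be sketched as follows. It uses simple word-overlap (Jaccard) similarity, which is why it increases lexical but not semantic diversity. This is an illustrative sketch under those assumptions, not the authors' implementation; all names are hypothetical.

```python
def jaccard(a, b):
    """Word-overlap similarity between two tokenized responses."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def mmr_rerank(candidates, scores, lam=0.5, k=10):
    """Greedily pick k responses, trading off model score against novelty
    with respect to already-selected responses (Maximal Marginal Relevance).

    candidates: list of token lists; scores: model log-probabilities.
    """
    selected, remaining = [], list(range(len(candidates)))
    while remaining and len(selected) < k:
        def mmr_score(i):
            # Redundancy = highest overlap with anything already selected.
            redundancy = max((jaccard(candidates[i], candidates[j])
                              for j in selected), default=0.0)
            return lam * scores[i] - (1 - lam) * redundancy
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return [candidates[i] for i in selected]
```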
On a somewhat different task, Mao et al. (2015, Section 6) utilize a mutual information objective in image caption retrieval. Below, we focus on the challenge of using MMI in response generation, comparing the performance of MMI models against maximum likelihood.
Given a sequence of inputs $X = \{x_1, x_2, \dots, x_{N_x}\}$, an LSTM associates each time step with an input gate, a memory gate and an output gate, respectively denoted as $i_t$, $f_t$ and $o_t$. We distinguish $e_t$ and $h_t$, where $e_t$ denotes the vector for an individual text unit (for example, a word or sentence) at time step $t$, while $h_t$ denotes the vector computed by the LSTM model at time $t$ by combining $e_t$ and $h_{t-1}$. $c_t$ is the cell state vector at time $t$, and $\sigma$ denotes the sigmoid function. Then, the vector representation $h_t$ for each time step $t$ is given by:

$$
\begin{aligned}
i_t &= \sigma(W_i \cdot [h_{t-1}; e_t]) \\
f_t &= \sigma(W_f \cdot [h_{t-1}; e_t]) \\
o_t &= \sigma(W_o \cdot [h_{t-1}; e_t]) \\
l_t &= \tanh(W_l \cdot [h_{t-1}; e_t]) \\
c_t &= f_t \odot c_{t-1} + i_t \odot l_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
$$

where $W_i, W_f, W_o, W_l \in \mathbb{R}^{K \times 2K}$. In Seq2Seq generation tasks, each input $X$ is paired with a sequence of outputs to predict: $Y = \{y_1, y_2, \dots, y_{N_y}\}$. The LSTM defines a distribution over outputs and sequentially predicts tokens using a softmax function:

$$
p(Y|X) = \prod_{t=1}^{N_y} \frac{\exp\big(f(h_{t-1}, e_{y_t})\big)}{\sum_{y'} \exp\big(f(h_{t-1}, e_{y'})\big)}
$$

where $f(h_{t-1}, e_{y_t})$ denotes the activation function between $h_{t-1}$ and $e_{y_t}$, and $h_{t-1}$ is the representation output from the LSTM at time $t-1$. Each sentence concludes with a special end-of-sentence symbol EOS. Commonly, input and output use different LSTMs with separate compositional parameters to capture different compositional patterns.
During decoding, the algorithm terminates when an EOS token is predicted. At each time step, either a greedy approach or beam search can be adopted for word prediction. Greedy search selects the token with the largest conditional probability, the embedding of which is then combined with preceding output to predict the token at the next step.
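To make the greedy variant concrete, here is a minimal sketch in Python. The `step_fn` callable and the `EOS` token id are hypothetical stand-ins for one decoder LSTM step followed by the softmax; this is not the paper's actual implementation.

```python
import numpy as np

EOS = 0  # assumed id of the end-of-sentence token

def greedy_decode(step_fn, h0, max_len=20):
    """Greedy decoding: at each step pick the most probable token and feed
    it back in. `step_fn(h, tok) -> (probs, h_next)` stands in for one
    decoder step plus softmax (hypothetical API).
    """
    h, tok, out = h0, EOS, []  # start from an EOS/"go" token, a common convention
    while len(out) < max_len:
        probs, h = step_fn(h, tok)
        tok = int(np.argmax(probs))  # token with the largest conditional probability
        if tok == EOS:
            break
        out.append(tok)
    return out
```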
In the response generation task, let $S = \{s_1, s_2, \dots, s_{N_s}\}$ denote an input message sequence (source), where $N_s$ denotes the number of words in $S$. Let $T = \{t_1, t_2, \dots, t_{N_t}, \text{EOS}\}$ (target) denote a sequence in response to source sequence $S$, where $N_t$ is the length of the response (terminated by an EOS token) and $t_k$ denotes a word token that is associated with a $K$-dimensional distinct word embedding $e_{t_k}$. $V$ denotes vocabulary size.

The standard objective function for sequence-to-sequence models is the log-likelihood of target $T$ given source $S$, which at test time yields the statistical decision problem:

$$\hat{T} = \arg\max_{T} \big\{\log p(T|S)\big\}$$
As discussed in the introduction, we surmise that this formulation leads to generic responses being generated, since it only selects for targets given sources, not the converse. To remedy this, we replace it with Maximum Mutual Information (MMI) as the objective function. In MMI, parameters are chosen to maximize (pairwise) mutual information between the source $S$ and the target $T$:

$$\log \frac{p(S,T)}{p(S)\,p(T)}$$

This avoids favoring responses that unconditionally enjoy high probability, and instead biases towards responses that are specific to the given input. Noting that $\log \frac{p(S,T)}{p(S)p(T)} = \log \frac{p(T|S)}{p(T)}$, the MMI objective can be written as follows:

$$\hat{T} = \arg\max_{T} \big\{\log p(T|S) - \log p(T)\big\}$$

We use a generalization of the MMI objective which introduces a hyperparameter $\lambda$ that controls how much to penalize generic responses:

$$\hat{T} = \arg\max_{T} \big\{\log p(T|S) - \lambda \log p(T)\big\} \quad (9)$$

An alternate formulation of the MMI objective uses Bayes' theorem,

$$\log p(T) = \log p(T|S) + \log p(S) - \log p(S|T),$$

which lets us rewrite Equation 9 as follows:

$$
\begin{aligned}
\hat{T} &= \arg\max_{T} \big\{(1-\lambda) \log p(T|S) + \lambda \log p(S|T) - \lambda \log p(S)\big\} \\
        &= \arg\max_{T} \big\{(1-\lambda) \log p(T|S) + \lambda \log p(S|T)\big\} \quad (10)
\end{aligned}
$$

This weighted MMI objective function can thus be viewed as representing a tradeoff between sources given targets (i.e., $p(S|T)$) and targets given sources (i.e., $p(T|S)$).
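For concreteness, the two objectives score a candidate response as follows, assuming log-probabilities are available from a forward model $p(T|S)$, a language model $p(T)$, and a backward model $p(S|T)$. This is a minimal sketch of Equations 9 and 10, not the authors' code.

```python
def mmi_antilm_score(log_p_t_given_s, log_p_t, lam):
    """Equation 9: log p(T|S) - lambda * log p(T)."""
    return log_p_t_given_s - lam * log_p_t

def mmi_bidi_score(log_p_t_given_s, log_p_s_given_t, lam):
    """Equation 10: (1 - lambda) log p(T|S) + lambda log p(S|T)."""
    return (1 - lam) * log_p_t_given_s + lam * log_p_s_given_t
```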
Although the MMI optimization criterion has been comprehensively studied for other tasks, such as acoustic modeling in speech recognition [Huang et al.2001], adapting MMI to Seq2Seq training is empirically nontrivial. Moreover, we would like to be able to adjust the value of $\lambda$ in Equation 9 without repeatedly training neural network models from scratch, which would otherwise be extremely time-consuming. Accordingly, we did not train a joint model ($\log p(T|S) - \lambda \log p(T)$), but instead trained maximum likelihood models, and used the MMI criterion only during testing.
Responses can be generated either from Equation 9, i.e., $\log p(T|S) - \lambda \log p(T)$, or from Equation 10, i.e., $(1-\lambda) \log p(T|S) + \lambda \log p(S|T)$. We will refer to these formulations as MMI-antiLM and MMI-bidi, respectively. However, these strategies are difficult to apply directly to decoding, since they can lead to ungrammatical responses (with MMI-antiLM) or make decoding intractable (with MMI-bidi). In the rest of this section, we discuss these issues and explain how we resolve them in practice.
The second term of $\log p(T|S) - \lambda \log p(T)$ functions as an anti-language model. It penalizes not only high-frequency, generic responses, but also fluent ones, and thus can lead to ungrammatical outputs. In theory, this issue should not arise when $\lambda$ is less than 1, since ungrammatical sentences should always be more severely penalized by the first term of the equation, i.e., $\log p(T|S)$. In practice, however, we found that the model tends to select ungrammatical outputs that escaped being penalized by $p(T|S)$.
Again, let $N_t$ be the length of target $T$. $p(T)$ in Equation 9 can be written as:

$$p(T) = \prod_{k=1}^{N_t} p(t_k \mid t_1, t_2, \dots, t_{k-1})$$

We replace the language model $p(T)$ with $U(T)$, which adapts the standard language model by multiplying it by a weight $g(k)$ that is decremented monotonically as the index $k$ of the current token increases:

$$U(T) = \prod_{k=1}^{N_t} p(t_k \mid t_1, t_2, \dots, t_{k-1}) \cdot g(k)$$
The underlying intuition here is as follows. First, neural decoding combines the previously built representation with the word predicted at the current step. As decoding proceeds, the influence of the initial input on decoding (i.e., the source sentence representation) diminishes as additional previously-predicted words are encoded in the vector representations. (Attention models [Xu et al.2015] may offer some promise of addressing this issue.) In other words, the first words to be predicted largely determine the remainder of the sentence, so penalizing early-predicted words with the language model contributes more to sentence diversity than penalizing words predicted later. Second, as the influence of the input on decoding declines, the influence of the language model comes to dominate. We have observed that ungrammatical segments tend to appear in the later parts of sentences, especially long ones.
We adopt the most straightforward form of $g(k)$ by setting a threshold $\gamma$ and penalizing only the first $\gamma$ words:

$$g(k) = \begin{cases} 1, & k \le \gamma \\ 0, & k > \gamma \end{cases}$$

(We experimented with a smooth decay in $g(k)$ rather than a stepwise function, but this did not yield better performance.) The objective in Equation 9 can thus be rewritten as:

$$\log p(T|S) - \lambda \log U(T)$$

where direct decoding is tractable.
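A minimal sketch of this tractable objective, assuming per-token log-probabilities are available from a separately trained language model; function names and the calling convention are hypothetical.

```python
def log_U(token_logprobs, gamma):
    """log U(T) with the stepwise g(k): only the first gamma tokens
    contribute to the language-model penalty; later tokens add nothing.

    token_logprobs: per-token log p(t_k | t_1..t_{k-1}) from the LM.
    """
    return sum(lp for k, lp in enumerate(token_logprobs, 1) if k <= gamma)

def antilm_objective(seq2seq_logprob, lm_token_logprobs, lam, gamma):
    """log p(T|S) - lambda * log U(T): the decodable form of Equation 9."""
    return seq2seq_logprob - lam * log_U(lm_token_logprobs, gamma)
```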
Direct decoding from $(1-\lambda) \log p(T|S) + \lambda \log p(S|T)$ is intractable, as the second part (i.e., $p(S|T)$) requires completion of target generation before $p(S|T)$ can be effectively computed. Due to the enormous search space for target $T$, exploring all possibilities is infeasible.
For practical reasons, then, we turn to an approximation: we first generate N-best lists using the first part of the objective function, i.e., the standard Seq2Seq model $p(T|S)$, and then rerank the N-best lists using the second term of the objective function. Since N-best lists produced by Seq2Seq models are generally grammatical, the final selected options are likely to be well-formed. Model reranking has obvious drawbacks: it yields non-globally-optimal solutions by first emphasizing the standard Seq2Seq objective, and it relies heavily on the system's success in generating a sufficiently diverse N-best set, requiring that a large N-best list be generated for each message.
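The reranking step can be sketched as follows; `backward_logprob` is a hypothetical hook to the $p(S|T)$ network, and the N-best list is assumed to come with forward scores already attached.

```python
def mmi_bidi_rerank(nbest, backward_logprob, lam):
    """Rerank an N-best list from the forward model p(T|S) using the
    backward model p(S|T), per Equation 10.

    nbest: list of (response_tokens, log_p_t_given_s) pairs.
    backward_logprob(response_tokens) -> log p(S|T)  (hypothetical API).
    """
    rescored = [((1 - lam) * fwd + lam * backward_logprob(resp), resp)
                for resp, fwd in nbest]
    rescored.sort(key=lambda x: x[0], reverse=True)
    return rescored[0][1]  # the single best (ideally non-trivial) response
```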
Nonetheless, these two variants of the MMI criterion work well in practice, significantly improving both interestingness and diversity.
We adopt a deep structure with four LSTM layers for encoding and four LSTM layers for decoding, each of which consists of a different set of parameters. Each LSTM layer consists of 1,000 hidden neurons, and the dimensionality of word embeddings is set to 1,000. Other training details, broadly aligned with Sutskever et al. (2014), are given below; a minimal configuration sketch follows the list.
LSTM parameters and embeddings are initialized from a uniform distribution in [−0.08, 0.08].
Stochastic gradient descent is implemented using a fixed learning rate of 0.1.
Batch size is set to 256.
Gradient clipping is adopted by scaling gradients when the norm exceeds a threshold of 1.
Our implementation on a single GPU processes at a speed of approximately 600-1200 tokens per second on a Tesla K40.
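Putting the listed settings together, a framework-agnostic sketch of the initialization and update rule might look like the following; the exact init range tracks the Sutskever et al. (2014) recipe referenced above, and everything else here is an illustrative stand-in rather than the paper's code.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Fixed settings from the list above.
LR, CLIP_NORM, BATCH, INIT = 0.1, 1.0, 256, 0.08

def init_param(shape):
    """Uniform initialization for LSTM parameters and embeddings."""
    return rng.uniform(-INIT, INIT, size=shape)

def sgd_step(params, grads):
    """One fixed-rate SGD update with global-norm gradient clipping."""
    norm = math.sqrt(sum(float(np.sum(g * g)) for g in grads))
    scale = CLIP_NORM / norm if norm > CLIP_NORM else 1.0
    return [p - LR * scale * g for p, g in zip(params, grads)]

# Usage: params = [init_param((1000, 2000)) for _ in range(8)]; after each
# minibatch of BATCH examples, params = sgd_step(params, grads).
```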
The $p(S|T)$ model described in Section 4.3.1 was trained using the same architecture as that of $p(T|S)$, with messages ($S$) and responses ($T$) interchanged.
As described in Section 4.3.1, decoding using $\log p(T|S) - \lambda \log U(T)$ can be readily implemented by predicting tokens at each time step. In addition, we found in our experiments that it is also important to take into account the length of responses in decoding. We thus linearly combine the loss function with a length penalty, leading to an ultimate score for a given target $T$ as follows:

$$\text{Score}(T) = \log p(T|S) - \lambda \log U(T) + \gamma N_t$$

where $N_t$ denotes the length of the target and $\gamma$ denotes its associated weight. We optimize $\lambda$ and $\gamma$ using MERT [Och2003] on N-best lists of response candidates. The N-best lists are generated using the decoder with beam size $B = 200$. We set a maximum length of 20 for generated candidates. At each time step of decoding, we are presented with $B \times B$ candidates. We first add all hypotheses for which an EOS token has been generated at the current time step to the N-best list. Next we preserve the top $B$ unfinished hypotheses and move to the next time step, thereby keeping the beam size constant at 200 as hypotheses are completed and removed. As a result, the final N-best list for each input is much larger than the beam size.
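A sketch of this beam-search bookkeeping follows. `step_fn` is a hypothetical stand-in for one decoder step; for brevity it returns a dict mapping token ids to log-probabilities (in practice one would take the top few entries of the softmax).

```python
def beam_search(step_fn, h0, beam=200, max_len=20, eos=0):
    """Beam search that moves finished hypotheses (EOS generated) into the
    N-best list while keeping `beam` unfinished ones, so the final list
    can be much larger than the beam size.
    """
    live = [(0.0, [eos], h0)]  # (score, tokens so far, state); start from a "go" token
    nbest = []
    for _ in range(max_len):
        expanded = []
        for score, toks, h in live:  # expand every live hypothesis
            logprobs, h2 = step_fn(h, toks[-1])
            for tok, lp in logprobs.items():
                expanded.append((score + lp, toks + [tok], h2))
        expanded.sort(key=lambda c: c[0], reverse=True)
        live = []
        for cand in expanded:
            if cand[1][-1] == eos:
                nbest.append(cand)       # finished: into the N-best list
            elif len(live) < beam:
                live.append(cand)        # keep the top `beam` unfinished
        if not live:
            break
    return nbest
```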
We generate N-best lists based on $p(T|S)$ and then rerank the list by linearly combining $\log p(T|S)$, $\lambda \log p(S|T)$, and $\gamma N_t$. We use MERT to tune the weights $\lambda$ and $\gamma$ on the development set. (As with MMI-antiLM, we could have used grid search instead of MERT, since there are only 3 features and 2 free parameters.) In either case, the search attempts to find the best tradeoff between $p(T|S)$ and $p(S|T)$ according to Bleu (which tends to weight the two models relatively equally) and ensures that generated responses are of reasonable length.
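The grid-search alternative mentioned in the parenthetical note could look like this; `decode_fn` and `bleu_fn` are hypothetical stand-ins for the tuned decoder and the Bleu scorer, and the grid values are arbitrary.

```python
import itertools

def grid_search(dev_inputs, dev_refs, decode_fn, bleu_fn,
                lams=(0.1, 0.3, 0.5), gammas=range(1, 6)):
    """Tune lambda and gamma by exhaustive search on the development set.

    decode_fn(inputs, lam, gamma) -> hypotheses; bleu_fn(hyps, refs) -> score.
    """
    best = max(itertools.product(lams, gammas),
               key=lambda lg: bleu_fn(decode_fn(dev_inputs, *lg), dev_refs))
    return best  # the (lambda, gamma) pair with the highest dev Bleu
```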
Table 2: Performance on the Twitter dataset.

| Model | # of training instances | Bleu | distinct-1 | distinct-2 |
|---|---|---|---|---|
| SMT [Ritter et al.2011] | 50M | 3.60 | .098 | .351 |
| SMT+neural reranking [Sordoni et al.2015] | 50M | 4.44 | .101 | .358 |
We used an extension of the dataset described in Sordoni et al. Sordoni2015, which consists of 23 million conversational snippets randomly selected from a collection of 129M context-message-response triples extracted from the Twitter Firehose over the 3-month period from June through August 2012. For the purposes of our experiments, we limited context to the turn in the conversation immediately preceding the message. In our LSTM models, we used a simple input model in which contexts and messages are concatenated to form the source input.
For tuning and evaluation, we used the development dataset (2118 conversations) and the test dataset (2114 examples), augmented using information retrieval methods to create a multi-reference set, as described by Sordoni et al. Sordoni2015. The selection criteria for these two datasets included a component of relevance/interestingness, with the result that dull responses will tend to be penalized in evaluation.
In addition to unscripted Twitter conversations, we also used the OpenSubtitles (OSDb) dataset [Tiedemann2009], a large, noisy, open-domain dataset containing roughly 60M-70M scripted lines spoken by movie characters. This dataset does not specify which character speaks each subtitle line, which prevents us from inferring speaker turns. Following Vinyals and Le (2015), we make the simplifying assumption that each subtitle line constitutes a full speaker turn. Our models are trained to predict the current turn given the preceding ones, on the assumption that consecutive turns belong to the same conversation. This introduces a degree of noise, since consecutive lines may not appear in the same conversation or scene, and may not even be spoken by the same character; a pair-construction sketch follows.
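Under these assumptions, building training pairs reduces to pairing consecutive lines. A sketch follows; the [6,18] length filter mirrors the restriction applied to the IMSDB dev/test sets below and is an optional illustration here, not a stated training setting.

```python
def subtitle_pairs(lines, min_len=6, max_len=18):
    """Build (source, target) training pairs from consecutive subtitle
    lines, under the simplifying assumption that each line is one speaker
    turn and consecutive lines belong to the same conversation (noisy).
    """
    pairs = []
    for prev, cur in zip(lines, lines[1:]):
        s, t = prev.split(), cur.split()
        if min_len <= len(s) <= max_len and min_len <= len(t) <= max_len:
            pairs.append((s, t))
    return pairs
```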
This limitation potentially renders the OSDb dataset unreliable for evaluation purposes. For evaluation, we therefore used data from the Internet Movie Script Database (IMSDB), which explicitly identifies which character speaks each line of the script. (IMSDB, http://www.imsdb.com/, is a relatively small database of around 0.4 million sentences and thus not suitable for open-domain dialogue training.) This allowed us to identify consecutive message-response pairs spoken by different characters. We randomly selected two subsets as development and test datasets, each containing 2k pairs, with source and target length restricted to the range of [6,18].
For parameter tuning and final evaluation, we used Bleu [Papineni et al.2002], which was shown to correlate reasonably well with human judgment on the response generation task [Galley et al.2015]. In the case of the Twitter models, we used multi-reference Bleu. As the IMSDB data is too limited to support extraction of multiple references, only single reference Bleu was used in training and evaluating the OSDb models.
We did not follow Vinyals and Le (2015) in using perplexity as an evaluation metric. Perplexity is unlikely to be a useful metric in our scenario, since our proposed model is designed to steer away from the standard Seq2Seq model in order to diversify the outputs. Instead, we report degree of diversity by calculating the number of distinct unigrams and bigrams in generated responses. The value is scaled by the total number of generated tokens to avoid favoring long sentences (shown as distinct-1 and distinct-2 in Tables 2 and 3).
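The diversity metrics are straightforward to compute; here is a sketch consistent with the definition above.

```python
def distinct_n(responses, n):
    """distinct-n: number of distinct n-grams across generated responses,
    scaled by the total number of generated tokens.

    responses: list of token lists.
    """
    ngrams, total_tokens = set(), 0
    for toks in responses:
        total_tokens += len(toks)
        ngrams.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    return len(ngrams) / max(total_tokens, 1)

# distinct-1 and distinct-2 as reported in Tables 2 and 3:
# distinct_n(outputs, 1), distinct_n(outputs, 2)
```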
We first report performance on the Twitter dataset in Table 2, along with results for different models (i.e., Machine Translation and MT+neural reranking) reprinted from Sordoni et al. (2015) on the same dataset. The baseline is the Seq2Seq model with its standard likelihood objective and a beam size of 200. We compare this baseline against greedy-search Seq2Seq [Vinyals and Le2015], which can achieve higher diversity by increasing search errors. (Another option would have been to sample from the distribution to increase diversity. While these methods have merits, we think we ought to find a proper objective and optimize it exactly, rather than cope with an inadequate one and add noise to it.)
Machine Translation is the phrase-based MT system described in [Ritter et al.2011]. MT features include those commonly used in Moses [Koehn et al.2007], e.g., forward and backward maximum likelihood “translation” probabilities, word and phrase penalties, linear distortion, etc. For more details, refer to Sordoni et al. (2015).

MT+neural reranking is the phrase-based MT system, reranked using neural models. N-best lists are first generated from the MT system. Recurrent neural models then generate scores for the N-best candidates given the input messages, and these scores are incorporated to rerank all the candidates. Additional features scoring [1-4]-gram matches between context and response and between message and context (context and message match, CMM, features) are also employed, as in Sordoni et al. (2015).
MT+neural reranking achieves a Bleu score of 4.44, which to the best of our knowledge represents the previous state-of-the-art performance on this Twitter dataset. Note that Machine Translation and MT+neural reranking are trained on a much larger dataset of roughly 50 million examples. A significant performance boost is observed from MMI-bidi over baseline Seq2Seq, both in terms of Bleu score and diversity.
The beam size of 200 used in our main experiments is quite conservative, and Bleu scores only slightly degrade when reducing beam size to 20. For MMI-bidi, Bleu scores for beam sizes of 200, 50, 20 are respectively 5.90, 5.86, 5.76. A beam size of 20 still produces relatively large N-best lists (173 elements on average) with responses of varying lengths, which offer enough diversity for the model to have a significant effect.
All models achieve significantly lower Bleu scores on this dataset than on the Twitter dataset, primarily because the IMSDB data provides only single references for evaluation. We note, however, that baseline Seq2Seq models yield lower levels of unigram diversity (distinct-1) on the OpenSubtitles dataset than on the Twitter data (0.0056 vs 0.017), which suggests that other factors may be in play. It is likely that movie dialogs are much more concise and information-rich than typical conversations on Twitter, making it harder to match gold-standard responses and causing the learned models to strongly favor safe, conservative responses.
Table 3 shows that the MMI-antiLM model yields a significant performance boost on this dataset, with a substantial Bleu score increase and a large jump in unigram diversity. Our interpretation of this improvement is that the diversity and complexity of input messages lead standard Seq2Seq models to generate very conservative responses (the strings I don’t know, I don’t know what you are talking about, I don’t think that is a good idea, and Oh my god account for a large share of all generated responses), which fail to match the more interesting reference strings typical of this dataset. This interpretation is also supported by the fact that the MMI-bidi model does not produce as significant a performance boost as MMI-antiLM: N-best lists generated using standard Seq2Seq models remain conservative and uninteresting, attenuating the impact of later reranking. An important potential limitation of the MMI-bidi model is thus that its performance hinges on the initial generation of a highly diverse, informative N-best list.
Table 4: Sample responses on the movie-dialog data: input message, baseline Seq2Seq output, and MMI output.

| Input message | Seq2Seq baseline | MMI |
|---|---|---|
| I did not get the report from an mi-6 agent. | I don’t think that’s a good idea. | You did the right thing, did you? |
| You haven’t been given an assignment in this case. | I don’t know what you are talking about. | I’ve been looking all over for you. |
| I’m losing my grip. | I don’t know what you are talking about. | I’m the only one in the world. |
| I am ready to help. | Come on, come on | I have something we need to talk about. |
| You programmed me to gather intelligence. That’s all I’ve ever done. | You know that, don’t you? | You do have fun, don’t you? |
| I mean, we’d have to talk to him. | I mean, I don’t know. | I mean, he’s a good guy. |
| This has nothing to do with him. | You don’t know anything about him, do you? | I know him as much. |
| How come you never say it? | I don’t know | Because I don’t want to hurt you |
| He is the one behind this scheme to crash x-net. | I don’t know what you are talking about | Yes, he is, isn’t he? |
| How much time do you have here? | I don’t know. | Not long enough. Sorry, sir. |
Table 5: Sample responses on the Twitter data: input message, baseline Seq2Seq output, and MMI output.

| Input message | Seq2Seq baseline | MMI |
|---|---|---|
| Ohh I’ve never seen that! How long does it take you guys to learn the drill? | I don’t even know. | Like 2 weeks ago!! |
| You’ll know when we sign Heskey in 2 weeks | I’ll see what I can do | I sure hope so.. |
| I’m waiting!! I have even got biscuits waiting for you?? | I have no idea what you’re talking about | I’ll send you a pic when I get home xx |
| My parents actually had touch too much played at their wedding. | I have no idea what you’re talking about | How old are they?? |
| Say anything you want… I know you love me and I love you. | I love you so much. | I love you too, baby. |
| I am off all week next week | What time you going? | What time you going? |
| How are you doing? | I’m good, thanks. | I’m good, you? |
Table 6: Human evaluation: mean gain for MMI-bidi over each competing system, with confidence intervals.

| Competing system | Gain for MMI-bidi | Confidence interval |
|---|---|---|
| SMT [Ritter et al.2011] | 0.29 | [0.25, 0.32] |
| SMT+neural reranking | 0.28 | [0.25, 0.32] |
| Seq2Seq (baseline) | 0.11 | [0.07, 0.14] |
| Seq2Seq (greedy) | 0.08 | [0.04, 0.11] |
Top outputs from the MMI model for the same inputs as Table 1.

| Input: What are you doing? | |
|---|---|
| 1. I’ve been looking for you. | 4. I told you to shut up. |
| 2. I want to talk to you. | 5. Get out of here. |
| 3. Just making sure you’re OK. | 6. I’m looking for a doctor. |

| Input: What is your name? | |
|---|---|
| 1. Blue! | 4. Daniel. |
| 2. Peter. | 5. My name is John. |
| 3. Tyler. | 6. My name is Robert. |

| Input: How old are you? | |
|---|---|
| 1. Twenty-eight. | 4. Five. |
| 2. Twenty-four. | 5. 15. |
| 3. Long. | 6. Eight. |
We employed crowdsourced judges to provide evaluations for a random sample of 1000 items in the Twitter test dataset. Table 6 shows the results of human evaluations between paired systems. Each output pair was ranked by 5 judges, who were asked to decide which of the two outputs was better. They were instructed to prefer outputs that were more specific (relevant) to the message and preceding context, as opposed to those that were more generic. Ties were permitted. Identical strings were algorithmically assigned the same score. The mean of differences between outputs is shown as the gain for MMI-bidi over the competing system. We find that MMI-bidi significantly outperforms both baseline and greedy Seq2Seq systems, as well as the weaker SMT and SMT+RNN baselines. MMI-bidi outperforms SMT in human evaluations despite the greater lexical diversity of MT output.
Separately, judges were also asked to rate the overall quality of MMI-bidi output over the same 1000-item sample in isolation, each output being evaluated by 7 judges in context using a 5-point scale. The mean rating was 3.84 (median: 3.85, 1st Qu: 3.57, 3rd Qu: 4.14), suggesting that overall MMI-bidi output does appear reasonably acceptable to human judges. (Annotators were asked to prefer responses more specific to the context only in the pairwise system comparisons. The absolute evaluation was conducted separately, on different days, on the best system, and annotators were asked to evaluate the overall quality of the response, specifically: Provide your impression of overall quality of the response in this particular conversation.)
In Tables 4 and 5, we present responses generated by different models. All examples were randomly sampled (without cherry picking). We see that the baseline Seq2Seq model tends to generate reasonable responses to simple messages such as How are you doing? or I love you. As the complexity of the message increases, however, the outputs switch to more conservative, duller forms, such as I don’t know or I don’t know what you are talking about. An occasional answer of this kind might go unnoticed in a natural conversation, but a dialog agent that always produces such responses risks being perceived as uncooperative. MMI-bidi models, on the other hand, produce far more diverse and interesting responses.
We investigated an issue encountered when applying Seq2Seq models to conversational response generation. These models tend to generate safe, commonplace responses (e.g., I don’t know) regardless of the input. Our analysis suggests that the issue is at least in part attributable to the use of unidirectional likelihood of output (responses) given input (messages). To remedy this, we have proposed using Maximum Mutual Information (MMI) as the objective function. Our results demonstrate that the proposed MMI models produce more diverse and interesting responses, while improving quality as measured by Bleu and human evaluation.
To the best of our knowledge, this paper represents the first work to address the issue of output diversity in the neural generation framework. We have focused on the algorithmic dimensions of the problem. Unquestionably numerous other factors such as grounding, persona (of both user and agent), and intent also play a role in generating diverse, conversationally interesting outputs. These must be left for future investigation. Since the challenge of producing interesting outputs also arises in other neural generation tasks, including image-description generation, question answering, and potentially any task where mutual correspondences must be modeled, the implications of this work extend well beyond conversational response generation.
We thank the anonymous reviewers, as well as Dan Jurafsky, Alan Ritter, Stephanie Lukin, George Spithourakis, Alessandro Sordoni, Chris Quirk, Meg Mitchell, Jacob Devlin, Oriol Vinyals, and Dhruv Batra for their comments and suggestions.
Gimpel, K., Batra, D., Dyer, C., and Shakhnarovich, G. 2013. A systematic exploration of diversity in machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1100–1111.
Mao, J., Xu, W., Yang, Y., Wang, J., Huang, Z., and Yuille, A. 2015. Deep captioning with multimodal recurrent neural networks (m-RNN). In ICLR.
Ratnaparkhi, A. 2002. Trainable approaches to surface natural language generation and their application to conversational dialog systems. Computer Speech & Language, 16(3):435–455.
Vinyals, O. and Le, Q. 2015. A neural conversational model. In Proc. of ICML Deep Learning Workshop.
Yao, K., Zweig, G., and Peng, B. 2015. Attention with intention for a neural network conversation model. In NIPS Workshop on Machine Learning for Spoken Language Understanding and Interaction.