Open-domain human-computer dialog systems are attracting increasing attention in the NLP community. With the development of deep learning, sequence-to-sequence (Seq2Seq) neural networks, or more generally encoder-decoder frameworks, are among the most popular models for utterance generation in dialog systems Shang et al. (2015); Li et al. (2016); Mou et al. (2016); Serban et al. (2017).
In previous studies, researchers have proposed a variety of approaches to address the problem of universal replies, ranging from heuristically modified training objectives Li et al. (2016) and diversified decoding algorithms Vijayakumar et al. (2016) to content-introducing approaches Mou et al. (2016); Xing et al. (2016).
Although the problem of universal replies has been alleviated to some extent, there is still no empirical explanation for the curious question: Why does the same Seq2Seq model tend to generate shorter and less meaningful sentences in a dialog system than in a machine translation system?
Considering the difference between dialog and translation data, our intuition is that a dialog system suffers from a severe alignment problem: an utterance may have multiple equally plausible replies, which may have different meanings. Translation datasets, on the contrary, typically exhibit a precise semantic match between the source and target sides. This conjecture was casually expressed in our previous work Mou et al. (2016), but was not supported by experiments.
In this paper, we propose a method to verify the conjecture by mimicking the unaligned scenario in machine translation datasets: we shuffle the source and target sides of the translation pairs to artificially build a conditional distribution of target sentences with multiple plausible data points. By doing so, we manage to shorten the length and lower the "information" of sentences generated by a Seq2Seq machine translation system. This provides evidence that the unaligned problem could be one reason for the short and meaningless replies of neural dialog systems.
To summarize, this paper systematically compares Seq2Seq dialog and translation systems, and provides an answer to the question: Why do neural dialog systems tend to generate short and meaningless replies? Our study also sheds light on the future development of neural dialog systems, as well as the application scenarios where Seq2Seq models are appropriate.
We hypothesize that one cause of the deficiency of Seq2Seq models in dialog systems is that, given a source sequence, the conditional distribution of the target sequence has multiple plausible data points.
Let us denote the source sequence by $\mathbf{x}$ and the target sequence by $\mathbf{y}$. Both (orthodox) training and prediction objectives are to maximize $p(\mathbf{y}|\mathbf{x})$, where the conditional probability is modeled by a Seq2Seq neural network with parameters $\theta$.
In a machine translation system, the source and target information generally aligns well, although some meanings could have different expressions. Figure 1a shows a continuous analog of $p(\mathbf{y}|\mathbf{x})$.
In an open-domain dialog system, however, an utterance can have a variety of replies that are (nearly) equally plausible. For example, given the user-issued utterance "What are you going to do?" there could be multiple replies such as "having lunch," "watching movies," and "sleeping," shown in Figure 1b with an analog of continuous random variables. There is no particular reason why one reply should be favored over another without further context. Even with context, this problem cannot be fully solved because of the inherent randomness of dialog.
The above is, perhaps, the most salient difference between dialog and translation datasets. While it is tempting to attribute Seq2Seq's performance to this property Mou et al. (2016), there has been no practical approach to verify the conjecture.
3 Experimental Protocol
3.1 Mimicking a "Dialog Scenario" in Machine Translation
We propose to mimic the "unaligned" property in a translation dataset by shuffling the source and target pairs. This ensures that the resulting conditional distribution has multiple plausible data points, while all other properties of the translation setting remain unchanged, making a rigorously controlled experiment.
Formally speaking, let $\mathcal{D} = \{(\mathbf{x}^{(i)}, \mathbf{y}^{(i)})\}_{i=1}^{N}$ be the training dataset in a translation setting, where $(\mathbf{x}^{(i)}, \mathbf{y}^{(i)})$ is a particular data point containing a source and target sentence pair; in total we have $N$ data points.
The shuffled dataset is $\mathcal{D}' = \{(\mathbf{x}^{(i)}, \mathbf{y}^{(\pi(i))})\}_{i=1}^{N}$, where $\pi$ is a random permutation of $\{1, \dots, N\}$. In this way, we artificially construct a conditional target distribution that allows multiple plausible sentences conditioned on a particular source sentence.
Notice that, for the sake of constructing a distribution where the target sentences can have multiple plausible data points, there is no need to generate multiple random target sentences for a particular source sentence. In fact, it is preferable not to, so that the experiment is more controlled. Even when we pair a single random target sentence $\mathbf{y}^{(\pi(i))}$ with a source sentence $\mathbf{x}^{(i)}$, the target can still be viewed as a sample from the marginal (unconditioned) distribution $p(\mathbf{y})$, and thus the desired "unaligned" property is in place.
It is also straightforward to shuffle only a subset of the translation dataset; details are not repeated here. This helps analyze how Seq2Seq models behave when the "unaligned" problem becomes more severe.
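The shuffling procedure, including partial shuffling of a subset, can be sketched as follows (a minimal illustration; the function name and interface are ours, not code from our experiments):

```python
import random

def shuffle_targets(pairs, rate=1.0, seed=0):
    """Randomly permute the target side of a fraction `rate` of
    source-target pairs, mimicking the "unaligned" property.

    pairs: list of (source_sentence, target_sentence) tuples.
    rate:  fraction of pairs whose targets take part in the shuffle.
    """
    rng = random.Random(seed)
    n = len(pairs)
    # choose which positions take part in the shuffle
    idx = rng.sample(range(n), int(rate * n))
    # permute the targets at those positions only
    targets = [pairs[i][1] for i in idx]
    rng.shuffle(targets)
    shuffled = list(pairs)
    for i, tgt in zip(idx, targets):
        shuffled[i] = (shuffled[i][0], tgt)
    return shuffled
```

Note that the sources stay in place and the multiset of targets is preserved; only the pairing between the two sides is randomized.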
It should also be mentioned that the shuffling trick was previously used by Koehn (2017) to compare the robustness of Seq2Seq models and phrase-based statistical machine translation in terms of BLEU scores. Our paper contributes a novel insight: shuffling datasets mimics the unaligned property of dialog datasets, which facilitates the comparison between Seq2Seq dialog and translation systems.
3.2 The Seq2Seq Model and Datasets
We adopted a modern Seq2Seq model (with an attention mechanism) as the neural network for both the dialog and translation systems. The encoder is a bidirectional recurrent neural network with gated recurrent units (GRUs), whereas the decoder comprises two GRU state transition blocks and an attention mechanism in between Sennrich et al. (2017) (code downloaded from https://github.com/EdinburghNLP/nematus).
For the dialog system, we used the Cornell Movie-Dialogs Corpus (available at https://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html), containing 221k samples. For machine translation, we used the WMT-2017 dataset (available at http://data.statmt.org/wmt17/translation-task/preprocessed/de-en/) and focused on English-to-German translation; it contains 5.8M samples.
We first tried a normal machine translation setting and achieved results comparable to the baseline of Miceli Barone et al. (2017); thus our replication of the machine translation system is fair. In all settings, we used the same model and hyperparameters, so that our comparison is also fair.
Appendix A provides detailed model description and experimental setup.
Table 3: Correlation obtained by fitting a linear regression of the encoding/decoding step against the hidden states.
Overall Performance. Table 1 presents the BLEU scores of the dialog and machine translation systems. In open-domain dialog, BLEU-2 exhibits some (though not large) correlation with human satisfaction, although BLEU scores are generally low. For machine translation, we achieved 27.2 BLEU in the normal setting, comparable to the 28.4 achieved by the baseline method of Miceli Barone et al. (2017).
When we shuffle the translation dataset, BLEU drops gradually and finally reaches near zero when the training set is completely random (100% shuffled). The results are unsurprising and were also reported by Koehn (2017). This provides a quick understanding of how Seq2Seq models are influenced by shuffled data.
Length, Negative Log-Probability, and Entropy. We now compare the length, negative log-probability, and entropy of the dialog and translation systems, as well as the shuffling settings (Table 2). The length metric counts the number of words in a generated reply. (In some cases, an RNN fails to terminate and keeps repeating the same word; here, we allow the same word to be repeated at most four times.) The negative log-probability is computed as $-\frac{1}{N_w}\sum_{\mathbf{y} \in Y}\sum_{w \in \mathbf{y}} \log p(w)$, where $Y$ denotes all generated replies, $N_w$ is the total number of words in them, and $p$ is the unigram distribution of words in the training set. Entropy is defined as $-\sum_{w} \tilde{p}(w) \log \tilde{p}(w)$, where $\tilde{p}$ is the unigram distribution of words in the generated replies. Intuitively, both negative log-probability and entropy evaluate how much "content" is contained in the replies. These metrics were used in previous work Serban et al. (2017); Mou et al. (2016) (in our previous work Mou et al. (2016), the negative log-probability was mistakenly referred to as entropy, which we clarify here after email correspondence with other researchers) and are the most relevant to our research question.
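These two statistics can be computed from unigram distributions; a minimal sketch, assuming whitespace tokenization (the function names are ours):

```python
import math
from collections import Counter

def unigram_dist(sentences):
    """Unigram word distribution over a list of whitespace-tokenized sentences."""
    counts = Counter(w for s in sentences for w in s.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def neg_log_prob(generated, train_dist, eps=1e-10):
    """Average -log p(w) of generated words under the *training* unigram
    distribution; unseen words are backed off to a tiny probability eps."""
    words = [w for s in generated for w in s.split()]
    return -sum(math.log(train_dist.get(w, eps)) for w in words) / len(words)

def entropy(generated):
    """-sum q(w) log q(w) under the unigram distribution q of the
    *generated* replies themselves."""
    q = unigram_dist(generated)
    return -sum(p * math.log(p) for p in q.values())
```

A low negative log-probability means the system keeps emitting frequent training words; a low entropy means the generated vocabulary is concentrated on few word types.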
We first compare the dialog system with machine translation, both in the normal setting (no shuffling). We observe that the dialog system does generate short and meaningless replies, with lower length, negative log-probability, and entropy than the references, as opposed to machine translation, where Seq2Seq's generated sentences are comparable to the references in terms of these statistics. Quantitatively, the generated replies are 20% shorter than the references, and the negative log-probability and entropy decrease by 0.71 and 0.99, respectively; a decrease of 1 in these metrics is large because they are logarithmic. Although with a well-engineered Seq2Seq model (with attention, beam search, etc.) the phenomenon is less severe than with the vanilla Seq2Seq in Mou et al. (2016), it is still perceptible and worth investigating.
We then applied the shuffling setting to the translation system. As the shuffling rate increases, the Seq2Seq translation model exhibits precisely the phenomena of a dialog system: the length, the negative log-probability, and the entropy all decrease. In particular, the decreasing negative log-probability implies that the generated words appear more frequently in the training set, whereas the decreasing entropy implies that the distribution of generated words spreads less across the vocabulary. In other words, artificially constructing the unaligned property in translation datasets, with all other settings unchanged, enables us to reproduce the phenomenon observed in a dialog system. This provides evidence that the unaligned property could be one cause of the short and meaningless replies in a dialog system.
Correlation between Time Step and Hidden States. Shi et al. (2016) conducted an empirical study analyzing "Why Neural Translations are the Right Length?" They observe that, even when the semantics of a translation are poor, the length of the generated output is likely to be correct. They further find that some dimensions of the RNN states are responsible for memorizing the current length during sequence generation; a similar result was previously reported by Karpathy et al. (2015). Shi et al. (2016) apply linear regression to predict the time step during sequence modeling from the hidden states, and compute the correlation as a quantitative measure.
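This correlation measure can be sketched as follows (a simplified reimplementation under our assumptions, not the original code of Shi et al. (2016); the function name is ours):

```python
import numpy as np

def timestep_correlation(hidden_states):
    """Fit a linear regression predicting the time step t from the hidden
    state h_t, and return the Pearson correlation between the predicted
    and true time steps.

    hidden_states: array-like of shape (T, d), one hidden vector per step.
    """
    h = np.asarray(hidden_states, dtype=float)
    T = h.shape[0]
    t = np.arange(T, dtype=float)
    # least-squares fit with a bias column appended to the states
    X = np.hstack([h, np.ones((T, 1))])
    w, *_ = np.linalg.lstsq(X, t, rcond=None)
    pred = X @ w
    return np.corrcoef(pred, t)[0, 1]
```

If some state dimension acts as a step counter, the regression recovers the time step almost exactly and the correlation approaches 1.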
Since a dialog system usually generates short replies (and thus not replies of the right length), we are curious what this correlation would be in a dialog system as well as in the shuffled translation settings. The results are shown in Table 3. We find that the dialog system exhibits low correlation, and that the correlation also decreases in machine translation when the data are shuffled (though not as severely as in the dialog system). One inconsistent result, however, is that for the 100% shuffled dataset, the correlation on the encoder side becomes 99%, while the decoder correlation also increases to 85%. We currently do not have a good explanation for this.
5 Conclusion and Discussion
In this paper, we addressed the question of why dialog systems generate short and meaningless replies. We managed to reproduce this phenomenon in a well-behaved translation system by shuffling the training data, artificially mimicking the scenario in which a source sentence can have multiple equally plausible target sentences.
Admittedly, it is impossible to construct exactly the same scenario as dialog using translation datasets (otherwise, translation would simply become dialog). However, the unaligned property is a salient difference, and by controlling for it, we observe the desired phenomenon. Therefore, it could be one cause of the short and meaningless replies in dialog systems.
Our findings also explain why referring to additional information, including dialog context Tian et al. (2017), keywords Mou et al. (2016), and knowledge bases Vougiouklis et al. (2016), helps dialog systems: the number of plausible target sentences decreases when the generation is conditioned on more information. This intuition is helpful for the future development of Seq2Seq dialog systems. Moreover, our experiments suggest that Seq2Seq models are more suitable for applications where the source and target information aligns well.
We would like to thank Daqi Zheng and Yiping Song for helpful discussion.
- Bahdanau et al. (2015) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations.
- Karpathy et al. (2015) Andrej Karpathy, Justin Johnson, and Fei-Fei Li. 2015. Visualizing and understanding recurrent networks. arXiv preprint arXiv:1506.02078 .
- Koehn (2017) Philipp Koehn. 2017. Statistical Machine Translation (Chapter 13: Neural Machine Translation). arXiv preprint arXiv:1709.07809 .
- Li et al. (2016) Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 110–119. https://doi.org/10.18653/v1/N16-1014.
- Miceli Barone et al. (2017) Antonio Valerio Miceli Barone, Jindřich Helcl, Rico Sennrich, Barry Haddow, and Alexandra Birch. 2017. Deep architectures for neural machine translation. In Proceedings of the Conference on Machine Translation. pages 99–107. http://www.aclweb.org/anthology/W17-4710.
- Mou et al. (2016) Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. 2016. Sequence to backward and forward sequences: A content-introducing approach to generative short-text conversation. In Proceedings of the 26th International Conference on Computational Linguistics. pages 3349–3358. http://aclweb.org/anthology/C16-1316.
- Rush et al. (2015) Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pages 379–389. https://doi.org/10.18653/v1/D15-1044.
- Sennrich et al. (2017) Rico Sennrich, Orhan Firat, Kyunghyun Cho, Alexandra Birch, Barry Haddow, Julian Hitschler, Marcin Junczys-Dowmunt, Samuel Läubli, Antonio Valerio Miceli Barone, Jozef Mokry, and Maria Nadejde. 2017. Nematus: A toolkit for neural machine translation. In Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics. pages 65–68. https://doi.org/10.18653/v1/E17-3017.
- Serban et al. (2017) Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C. Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In Proceedings of the 31st AAAI Conference on Artificial Intelligence. pages 3295–3301.
- Shang et al. (2015) Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). pages 1577–1586. https://doi.org/10.3115/v1/P15-1152.
- Shi et al. (2016) Xing Shi, Kevin Knight, and Deniz Yuret. 2016. Why neural translations are the right length. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. pages 2278–2282. https://doi.org/10.18653/v1/D16-1248.
- Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems. pages 3104–3112.
- Tian et al. (2017) Zhiliang Tian, Rui Yan, Lili Mou, Yiping Song, Yansong Feng, and Dongyan Zhao. 2017. How to make context more useful? An empirical study on context-aware neural conversational models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). pages 231–236. https://doi.org/10.18653/v1/P17-2036.
- Vijayakumar et al. (2016) Ashwin K Vijayakumar, Michael Cogswell, Ramprasath R Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2016. Diverse beam search: Decoding diverse solutions from neural sequence models. arXiv preprint arXiv:1610.02424 .
- Vinyals et al. (2015) Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 3156–3164.
- Vougiouklis et al. (2016) Pavlos Vougiouklis, Jonathon Hare, and Elena Simperl. 2016. A neural network approach for knowledge-driven response generation. In Proceedings of the 26th International Conference on Computational Linguistics. pages 3370–3380. http://www.aclweb.org/anthology/C16-1318.
- Xing et al. (2016) Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2016. Topic augmented neural response generation with a joint attention mechanism. arXiv preprint arXiv:1606.08340 .
Appendix A Experimental Setup
A.1 Neural Network
We use the neural network in Sennrich et al. (2017) as our model. The encoder is a bidirectional recurrent neural network with gated recurrent units (GRUs). Let us consider one direction (say, the forward one), where $x_t$ is the input embedding at time step $t$ and $h_t$ is the hidden state. The computation of one step is given by
$$r_t = \sigma(W_r x_t + U_r h_{t-1}),$$
$$z_t = \sigma(W_z x_t + U_z h_{t-1}),$$
$$\tilde{h}_t = \tanh\big(W x_t + U (r_t \circ h_{t-1})\big),$$
$$h_t = (1 - z_t) \circ h_{t-1} + z_t \circ \tilde{h}_t,$$
where the $W$'s and $U$'s are weight matrices, $\sigma$ is the sigmoid function, and $\circ$ is the element-wise product.
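A single GRU transition of this form can be sketched in NumPy (a minimal illustration with standard GRU equations; the weight-dictionary layout and function names are ours, not Nematus code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, params):
    """One GRU step. params holds the weight matrices
    W_r, U_r (reset gate), W_z, U_z (update gate), and W, U (candidate)."""
    Wr, Ur, Wz, Uz, W, U = (params[k] for k in ("Wr", "Ur", "Wz", "Uz", "W", "U"))
    r = sigmoid(Wr @ x_t + Ur @ h_prev)            # reset gate
    z = sigmoid(Wz @ x_t + Uz @ h_prev)            # update gate
    h_tilde = np.tanh(W @ x_t + U @ (r * h_prev))  # candidate state
    return (1 - z) * h_prev + z * h_tilde          # interpolated new state
```

Biases are omitted for brevity; the new state is a gated interpolation between the previous state and the candidate state.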
Applying the GRU-RNN in both directions and concatenating the resulting hidden states, we obtain the representation of the $i$th word in the source as $h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]$.
The decoder is an RNN with two blocks of GRUs and an attention mechanism sandwiched in between. The first block of GRUs computes an intermediate representation for the $j$th word in the target as $s'_j = \mathrm{GRU}_1(s_{j-1}, E y_{j-1})$, where $E y_{j-1}$ is the embedding of the last word $y_{j-1}$.
$s'_j$ is used to compute the attention weights as
$$\alpha_{ji} \propto \exp\big\{v^\top \tanh(W_a s'_j + U_a h_i)\big\}.$$
A context vector is computed as
$$c_j = \textstyle\sum_i \alpha_{ji} h_i.$$
Then $c_j$ is fed to the second block of GRUs as
$$s_j = \mathrm{GRU}_2(s'_j, c_j).$$
Finally, $s_j$, $c_j$, and $E y_{j-1}$ are fed to a fully connected layer and a softmax layer to predict the word $y_j$ at time step $j$ of the decoder.
A.2 Hyperparameter Settings
In all our experiments, word embeddings were 512-dimensional. We used Adam to optimize all parameters, with an initial learning rate of 0.0001. The dropout rate was set to 0.2. We set the mini-batch size to 60 to fit GPU memory. In machine translation, the RNN was 1024-dimensional and the vocabulary size was 30k for each language, whereas in the dialog model, the RNN was 1000-dimensional and the vocabulary size was 50k. For prediction, beam search (with a beam size of 12) was adopted to generate a translation or a reply.