Neural text generation, a key but challenging task in NLP, has been widely studied recently. Text generation has been applied in various scenarios, such as dialog generation Vinyals and Le (2015), story generation Roemmele (2016), machine translation Sutskever et al. (2014), text summarization Rush et al. (2015) and image captioning Vinyals et al. (2015).
Novel methods for text generation are constantly developed, but the evaluation of text generation has received much less attention Huang et al. (2019). Even worse, the results reported by prior models often contradict one another, making it hard to identify the state-of-the-art model for a task. After a thorough investigation of existing open-source projects on text generation, we observe two common problems in model evaluation.
One problem is that models are tested on different datasets and with different experimental settings, which makes these models incomparable. Even on the same dataset, the settings used in evaluation, such as the data split (training/test), the method of tokenization and the size of the vocabulary, often differ inadvertently, leading to unfair comparisons between models and untenable conclusions. As shown in Figure 1 (a), a subtle difference in vocabularies can completely change the results. Moreover, uncovering the differences among these settings can be extremely difficult, which makes results hard to reproduce.
The other problem is that the metrics of text generation are rather complicated, leading to inconsistencies among different implementations. For example, as shown in Figure 1 (b), truncating the vocabulary brings unk (unknown tokens) into the ground truth, and the original BLEU metric Papineni et al. (2002) favors the sentence containing more unk. In (c), we present several implementations of perplexity Brown et al. (1992), which lead to very different results.
These problems severely prevent us from comparing models fairly, reproducing existing models, and implementing new ones. Checking the details of experimental settings, metric implementations, and more imposes a heavy burden. To this end, we develop Conversational Toolkit (CoTK), an open-source toolkit distributed as a Python package (available at https://github.com/thu-coai/cotk under the Apache License 2.0). CoTK is mainly developed for open-domain conversation generation, but other text generation tasks are also supported. CoTK is designed to achieve two goals:
Empowering fast development. CoTK handles the cumbersome issues in data loading, processing, evaluation, and reproduction, so that researchers can concentrate on the most creative part, i.e., the implementation of novel models.
Empowering fair evaluation. CoTK is specially designed for ensuring fair comparison, where a unique hash code can signify whether the experimental results are comparable.
In CoTK, we provide:
Data loaders. We build data loaders for common text generation tasks, where the data loaders handle the whole procedure before sending data into the models, including reading files, processing, and packing sample batches.
Metrics. CoTK covers commonly used metrics in text generation tasks. Each evaluation result will be tagged with a hash code, which can be used to verify the fairness of comparisons.
A tool for publication and reproduction. CoTK can track the code and experimental environment, which enables researchers to publish models or reproduce others’ results conveniently.
Resources and benchmark models. We collect some public datasets and commonly used benchmark models, which facilitate development of new models and comparison with existing ones.
2 Design and Structure
CoTK aims at supporting researchers through the entire lifetime of model development. As shown in Figure 2, we divide the development procedure into four steps: data processing, model implementation, evaluation, and publication. CoTK characterizes itself in three aspects: data loaders for data processing, metrics for evaluation, and a tool for publication and reproduction. For model implementation, CoTK is compatible with many existing toolkits. That is, researchers can rely on CoTK for data processing, evaluation, and publication while implementing models with other toolkits, such as Texar Hu et al. (2019) and Fairseq Ott et al. (2019), in PyTorch Paszke et al. (2019), TensorFlow Abadi et al. (2016), or other deep learning frameworks.
2.1 Data Loader
The data loader helps users prepare data for deep learning models. Following user-specified settings, it reads files, tokenizes text, builds vocabularies, and packs sentences into mini-batches.
A task is specified by the configurations shown in Table 1. We mainly support text generation (without input) Sutskever et al. (2011), single-turn dialog generation Vinyals and Le (2015) and multi-turn dialog generation Sordoni et al. (2015). By assembling different types of input and output, CoTK can be easily extended to various tasks, such as machine translation Sutskever et al. (2014) and controllable conversation generation Zhou et al. (2018).
|Text Generation (w/o input)||∅ → Sentence|
|Single-Turn Dialog||Sentence → Sentence|
|Multi-Turn Dialog||Context → Sentence|
|Machine Translation||Sentence → Sentence|
|Controllable Generation||(Sentence, Label) → Sentence|
The means of tokenization is usually ignored but can largely affect the experimental results. We provide the widely used Punkt tokenizer Kiss and Strunk (2006) as well as tokenizers for GPT-2 Radford et al. (2019) and other pretrained models.
It is a common choice to filter out rare words in the vocabulary. However, fair comparison among the models trained with different vocabularies is not trivial, as shown by the example in Figure 1 (a). To this end, we split a vocabulary into two parts:
Frequent vocabulary (V_f). The frequent vocabulary contains frequent words from the training set. It is the vocabulary used by most models.
Rare vocabulary (V_r). The rare vocabulary contains the remaining words from the training and test sets, which cannot be generated by most models except those with a copy mechanism He et al. (2017). Note that V_f and V_r have no intersection.
In the training stage, models only see V_f and regard all words outside V_f as unk. In the test stage, models are evaluated on V_f ∪ V_r. Although rare words cannot be generated by most models, they are crucial for evaluation. Our metrics are designed to achieve fair comparison as long as V_f ∪ V_r does not change. Supposing two models are trained with different frequent vocabularies V_f^A and V_f^B, they can still be fairly compared by adjusting the rare vocabularies to keep V_f^A ∪ V_r^A = V_f^B ∪ V_r^B.
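The split can be sketched in a few lines of Python. This is a simplified illustration of the idea, not CoTK's actual implementation; the function name `split_vocab` and the threshold `min_freq` are our own:

```python
from collections import Counter

def split_vocab(train_tokens, test_tokens, min_freq=10):
    """Split words into a frequent vocabulary (from the training set)
    and a rare vocabulary (remaining words from training + test)."""
    train_counts = Counter(train_tokens)
    frequent = {w for w, c in train_counts.items() if c >= min_freq}
    all_words = set(train_tokens) | set(test_tokens)
    rare = all_words - frequent          # V_f and V_r never intersect
    return frequent, rare

train = "the cat sat on the mat the cat".split()
test = "the dog sat".split()
vf, vr = split_vocab(train, test, min_freq=2)
# 'the' and 'cat' are frequent; 'sat', 'on', 'mat' and 'dog' are rare
```

Note that a word appearing only in the test set (such as "dog" above) still lands in the rare vocabulary, which is what makes evaluation on V_f ∪ V_r possible.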
Hash Code for Data Loader
Since it is difficult to track the differences among various data loaders, CoTK provides hash codes to identify each part of the data loader including the input data, vocabularies and settings (shown in Table 2). For example, if two data loaders have the same General Hash code, their data, vocabularies and settings are guaranteed to be the same. This is implemented by computing SHA-256 given the corresponding parts of data loaders as input. A usage case is presented in Section 3.1.
|Hash Code||Object to be Identified|
|Raw Data Hash||Raw Text Data|
|Data Hash||Tokenized Data|
|Vocab Hash||Vocabularies|
|Setting Hash||Settings|
|General Hash||All Above|
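The idea behind these hash codes can be sketched as follows. This is an illustration only: the exact serialization that CoTK feeds into SHA-256 is not shown here, and `general_hash` is a hypothetical name:

```python
import hashlib
import json

def general_hash(raw_lines, tokenized_data, vocab, settings):
    """Compute a SHA-256 fingerprint over every part of a data loader,
    mirroring the idea behind a General Hash (illustrative sketch)."""
    h = hashlib.sha256()
    for part in (raw_lines, tokenized_data, sorted(vocab), settings):
        # canonical JSON makes the hash independent of dict ordering
        h.update(json.dumps(part, sort_keys=True).encode("utf-8"))
    return h.hexdigest()

a = general_hash(["hello world"], [["hello", "world"]],
                 {"hello", "world"}, {"min_freq": 10})
b = general_hash(["hello world"], [["hello", "world"]],
                 {"hello", "world"}, {"min_freq": 20})  # setting changed
assert a != b   # any difference in data, vocab or settings changes the hash
```

Two data loaders with the same fingerprint are therefore guaranteed to have identical data, vocabularies, and settings.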
2.2 Metrics
CoTK covers commonly used metrics in text generation tasks, as shown in Table 3.
|Text Generation (Without Input)|
|Perplexity Brown et al. (1992)|
|Self-BLEU Zhu et al. (2018)|
|Forward / Backward BLEU Shi et al. (2018)|
|Forward / Reverse Perplexity Zhao et al. (2018)|
|Single-Turn / Multi-Turn Dialog|
|BLEU Papineni et al. (2002)|
|Distinct N-gram Li et al. (2016)|
|BOW Embedding Forgues et al. (2014)|
|Machine Translation & Text Summarization|
|ROUGE Lin (2004)|
|METEOR Banerjee and Lavie (2005)|
In CoTK, metrics are implemented to achieve fair comparison among models in different experimental settings. We will take perplexity and BLEU as examples to introduce our implementation.
The original perplexity is calculated as

PPL = exp( - (1/N) * sum_{i=1}^{N} log p(w_i | w_{<i}) ),

where w_i is the i-th token in the ground truth, and p is given by the model. Suppose two models A and B are trained with different frequent vocabularies, denoted as V_f^A and V_f^B respectively, where V_f^A ⊂ V_f^B, and there exists a token w_i ∈ V_f^B but w_i ∉ V_f^A. Then model B should predict the exact w_i, while model A only needs to predict unk. It is unfair to compare the two models based on the original perplexity, as shown in Figure 1 (a).
Similar to Ahn et al. (2016), we distribute the probability of unk evenly to the rare words:

p'(w_i) = p(w_i)            if w_i ∈ V_f,
p'(w_i) = p(unk) / |V_r|    if w_i ∈ V_r,

where V_f and V_r are the frequent vocabulary and rare vocabulary respectively. This method converts the predicted probability distribution over V_f ∪ {unk} to a distribution over V_f ∪ V_r, so that the perplexity can always be fairly compared as long as V_f ∪ V_r keeps unchanged.
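This adjustment can be sketched as follows. It is a simplified illustration of the formula above, not CoTK's actual API; `adjusted_perplexity` is a hypothetical name, and the model's predictions are represented as plain dictionaries:

```python
import math

def adjusted_perplexity(ground_truth, probs, frequent, rare):
    """Perplexity with the probability of <unk> spread evenly over the
    rare vocabulary (a sketch of the idea, not CoTK's implementation).

    probs: for each position, a dict mapping tokens in V_f plus '<unk>'
    to the model's predicted probability at that step."""
    log_sum = 0.0
    for word, dist in zip(ground_truth, probs):
        if word in frequent:
            p = dist[word]
        else:                       # rare word: share p(unk) evenly
            p = dist["<unk>"] / len(rare)
        log_sum += math.log(p)
    return math.exp(-log_sum / len(ground_truth))

frequent, rare = {"a", "b"}, {"c", "d"}
dist = {"a": 0.5, "b": 0.3, "<unk>": 0.2}   # model's output at each step
ppl = adjusted_perplexity(["a", "c"], [dist, dist], frequent, rare)
# p("a") = 0.5; "c" is rare, so p("c") = 0.2 / 2 = 0.1
```

Because only V_f ∪ V_r (here {a, b, c, d}) enters the computation, two models with different frequent vocabularies but the same combined vocabulary receive comparable scores.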
The BLEU metric may be affected by two issues: different tokenizers produce different token sets, and BLEU may favor sentences with unk, as shown in Figure 1 (b).
In CoTK’s BLEU, we first concatenate the tokens of both hypotheses and references and then re-tokenize them with the Punkt tokenizer. This step standardizes the tokenization. We then count n-gram matches following the original BLEU, but never match n-grams containing unk, because unk is not a real token and should always be regarded as mismatched.
This modification greatly extends applicability: it ensures fair comparison regardless of the tokenization methods or vocabulary sets adopted by generation models.
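The unk-aware matching step can be sketched as follows. This is an illustrative fragment only: the brevity penalty and the full BLEU computation are omitted, and `ngram_matches` is our own name:

```python
from collections import Counter

def ngram_matches(hyp, ref, n, unk="<unk>"):
    """Count clipped n-gram matches as in BLEU, but treat any n-gram
    containing <unk> as mismatched (illustrative sketch)."""
    def grams(tokens):
        return Counter(tuple(tokens[i:i + n])
                       for i in range(len(tokens) - n + 1)
                       if unk not in tokens[i:i + n])
    hyp_grams, ref_grams = grams(hyp), grams(ref)
    return sum(min(c, ref_grams[g]) for g, c in hyp_grams.items())

ref = ["the", "<unk>", "sat"]
assert ngram_matches(["the", "<unk>", "sat"], ref, 2) == 0  # unk never matches
assert ngram_matches(["the", "cat", "sat"], ["the", "cat", "sat"], 2) == 2
```

Discarding unk n-grams removes the incentive, illustrated in Figure 1 (b), to generate sentences full of unknown tokens.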
Hash Code for Metric
Hash codes generated for metrics track the settings and the reference data: two metric scores are comparable if and only if they have the same hash code. The implementation of the hash code can differ between metrics. For example, the hash code of perplexity is the SHA-256 hash of the reference sentences, the frequent vocabulary, and the rare vocabulary, whereas the hash code of BLEU uses only the tokenized reference sentences as input, because BLEU does not rely on the vocabulary set for fair comparison.
Hash codes have several advantages: they avoid human errors such as inconsistent settings, and they save researchers from memorizing the requirements of each metric for fair comparison. A usage case is presented in Section 3.1.
|Text Generation (Without Input)||MSCOCO Chen et al. (2015)|
|EMNLP2017 WMT (monolingual corpus only; http://statmt.org/wmt17/translation-task.html)|
|Single-Turn Dialog||OpenSubtitles Tiedemann (2016)|
|Multi-Turn Dialog||Ubuntu Lowe et al. (2015)|
|SwitchBoard Zhao et al. (2017) (https://catalog.ldc.upenn.edu/LDC97S62)|
|Text Generation (Without Input)||GRU Graves (2013); Chung et al. (2014)|
|Transformer Vaswani et al. (2017)|
|GPT2-finetune Radford et al. (2019)|
|VAE Kingma and Welling (2014)|
|Single-Turn Dialog||Seq2Seq-GRU Sutskever et al. (2014)|
|Seq2Seq-Trans Vaswani et al. (2017)|
|GPT2-finetune Wolf et al. (2019)|
|Multi-Turn Dialog||HRED Sordoni et al. (2015)|
|CVAE Zhao et al. (2017)|
2.3 Publication and Reproduction
To further improve reproducibility, we develop a tool that helps researchers publish their code and experimental results.
Publication: If a user wants to share results with the community, the user follows a few steps: (1) use the version control system git (https://git-scm.com/) to track code updates; (2) write code that generates a result file when executed; (3) execute the code from our tool. These three steps track the code, the results, and the running environment. All these data can then be uploaded to our website (http://coai.cs.tsinghua.edu.cn/dashboard/) or GitHub (https://github.com/), where they are accessible to the community. We highlight that the results contain hash codes, which guarantee fair comparison with other results.
Reproduction: If a user wants to reproduce the results, the user only needs to run our tool to fetch the data uploaded by another user, including the code and the running environment.
Dashboard : The dashboard is a website that maintains the results uploaded by users, which makes it convenient to compare performances of models and find state-of-the-art models. As our unique feature, users can submit the results by running the code from other users, which further facilitates reproduction.
2.4 Resources and Benchmark Models
To improve usability, we further provide resources and benchmark models compatible with our toolkit. The resources include benchmark datasets, pretrained model weights and more, which can be automatically downloaded by data loaders in CoTK. Some resources and benchmark models we provide are presented in Table 4 and Table 5.
2.5 Other Features
CoTK is specially designed for deep learning models: all APIs receive batched data, and batched sentences can be directly converted to/from tensors. This feature avoids errors in manipulating padding.
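The batched format can be sketched as follows. This is a simplified illustration of what the data loaders automate, not CoTK's actual API; `pad_batch` is our own name:

```python
def pad_batch(batch, word2id, pad_id=0):
    """Convert a batch of tokenized sentences into a padded id matrix
    plus sentence lengths, the format deep learning models consume."""
    lengths = [len(s) for s in batch]
    width = max(lengths)
    ids = [[word2id[w] for w in s] + [pad_id] * (width - len(s))
           for s in batch]
    return ids, lengths

word2id = {"<pad>": 0, "hello": 1, "world": 2, "hi": 3}
ids, lengths = pad_batch([["hello", "world"], ["hi"]], word2id)
# ids == [[1, 2], [3, 0]], lengths == [2, 1]
```

Carrying the lengths alongside the padded matrix lets downstream code mask out padding positions instead of treating them as real tokens.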
Compatibility: CoTK does not depend on a particular deep learning framework, such as TensorFlow Abadi et al. (2016) or PyTorch Paszke et al. (2019). CoTK is also compatible with many other text generation toolkits, such as Texar Hu et al. (2019) and Fairseq Ott et al. (2019), which means models implemented with these toolkits can be evaluated or published with CoTK. Moreover, comparison across frameworks is also possible.
Extensibility: CoTK is highly extensible, where new tasks, metrics and benchmark models can be easily integrated into the toolkit. We believe that CoTK can grow with the advancements of text generation in the community.
3 Proof-of-Concept Examples
3.1 Hash Codes of Different Settings
We present an example to demonstrate how hash codes can identify differences in settings. We choose a subset of OpenSubtitles as our dataset (Origin) and modify it for four settings:
Shuffled data. The lines of the data file are shuffled, which only affects the order of the samples.
Small vocabulary. The size of the frequent vocabulary is changed from 1323 to 752.
Different tokenizers. The tokenizer is changed from the Punkt tokenizer to the tokenizer of BERT (bert-base-uncased from https://github.com/huggingface/transformers).
Corrupted data. A sample is removed from the dataset.
The result is presented in Figure 4. Shuffling data does not change any hash code because it does not affect training or evaluation. On the dataset of small vocabulary, Vocab Hash and Setting Hash codes are different. However, hash codes for metrics do not change since the result is still comparable. On the dataset of different tokenizers, the Perplexity Hash is changed, because comparison under different tokenizers is not supported by perplexity. On corrupted data, all the hash codes are changed, where Raw Data Hash signifies that it is a different dataset.
3.2 Comparison under Different Vocabularies
We present an example to demonstrate that we can achieve fair comparison under different vocabularies. We train a GRU text generation model on the dataset MSCOCO with different frequent vocabularies, and show how the vocabulary size affects the result.
|CoTK’s Perplexity||Original Perplexity|
The result is presented in Table 6. At an intermediate vocabulary size, the model reaches the best CoTK's perplexity. However, the original perplexity gets smaller as |V_f| decreases, reaching 1 when |V_f| = 0. This shows that the original perplexity is not a fair metric under different vocabularies.
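The degenerate case is easy to verify with the perplexity formula: when the frequent vocabulary is empty, every ground-truth token becomes unk, and a model that always predicts unk with probability 1 obtains a "perfect" score:

```python
import math

# With an empty frequent vocabulary every ground-truth token is <unk>,
# so a model that always outputs p(<unk>) = 1 at every position gets:
probs = [1.0, 1.0, 1.0]          # p(<unk>) at each of the 3 positions
ppl = math.exp(-sum(math.log(p) for p in probs) / len(probs))
assert ppl == 1.0                # the original perplexity degenerates to 1
```

This is exactly why comparing original perplexity across different vocabulary sizes rewards aggressive truncation rather than better modeling.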
3.3 Evaluation of Benchmark Models
We demonstrate the evaluation results of some benchmark models on text generation (without input) and single-turn dialog generation, as shown in Table 7 and Table 8. The details of implementation and metrics are presented in the appendix.
Notice that the perplexity of GPT2-ft (finetune) cannot be fairly compared with the other models because its tokenization is different. However, the other metrics, including BLEU, S-BLEU Zhu et al. (2018), F/B/H-BLEU Shi et al. (2018), F/R-PPL Zhao et al. (2018), and Distinct-2 Li et al. (2016), standardize the tokenization with the same method as BLEU described in Section 2.2, so their results are comparable among the models.
3.4 Other Examples
More examples about usage, model publication and reproduction are demonstrated in the appendix.
4 Related Work
Unlike PyTorch-NLP Petrochuk (2018), torchtext (https://github.com/pytorch/text), AllenNLP Gardner et al. (2018), and GluonNLP Guo et al. (2019), which provide common modules and utilities for NLP, CoTK focuses mainly on text generation. Like ours, Texar Hu et al. (2019) and Fairseq Ott et al. (2019) provide state-of-the-art models for text generation. However, these toolkits largely target model implementation. CoTK characterizes itself by focusing more on data processing, evaluation, and reproduction, while remaining compatible with other toolkits: models implemented with them can be easily evaluated with CoTK.
Many toolkits provide data loaders as we do. Texar and Fairseq implement utilities for loading data and provide benchmarks for text generation tasks. PyTorch-NLP, torchtext and AllenNLP provide a similar function for general NLP tasks.
In comparison, as a unique feature, CoTK uses hash code to identify the differences and remind researchers when experimental settings change. Furthermore, the data loaders work with our implemented metrics to realize fair comparison across different settings and datasets.
The evaluation of text generation is less addressed by previous toolkits. Most toolkits, such as torchtext, PyTorch-NLP, and Texar, provide only a few metrics like BLEU. Although NLTK Loper and Bird (2002) and AllenNLP contain evaluation modules, few of them are designed for text generation.
We provide unified APIs to receive batched samples, which are convenient for deep learning models. Moreover, hash code plays an essential role in our toolkit to achieve fair comparison.
Publication and Reproduction
Publication and reproduction are rarely addressed by existing toolkits for text generation. Here we list two applications with similar functions. Sacred Greff et al. (2017) is an experiment management tool that tracks configurations, code, and results for reproduction, but it is intended for individuals and is not designed for sharing results with the community. Papers with Code (https://paperswithcode.com/) collects evaluation results and leaderboards for different tasks, but the results are filled in manually and can be hard to reproduce. CoTK tracks the code, results, and running environment automatically and publishes them to the community, which is more convenient and efficient for comparison and reproducibility.
5 Conclusion and Future Work
In this paper, we introduce CoTK, a toolkit for fast development and fair evaluation of text generation. CoTK provides support through the entire lifetime of model development and addresses issues that are often ignored but lead to unfair comparison. With CoTK, researchers can easily handle data processing, model evaluation, and reproduction. Our special design signifies when and which metric cannot be fairly compared. CoTK can grow with the development of text generation in the community where more tasks, metrics, resources and benchmark models can be constantly integrated into our toolkit. We believe that this toolkit will not only facilitate researchers to develop text generation models, but also support fair comparison among models and promote the reproducibility of these models.
- Abadi et al. (2016) Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek Gordon Murray, Benoit Steiner, Paul A. Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2016. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2016, Savannah, GA, USA, November 2-4, 2016, pages 265–283.
- Ahn et al. (2016) Sungjin Ahn, Heeyoul Choi, Tanel Pärnamaa, and Yoshua Bengio. 2016. A neural knowledge language model. arXiv preprint arXiv:1608.00318.
- Banerjee and Lavie (2005) Satanjeev Banerjee and Alon Lavie. 2005. METEOR: an automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization@ACL 2005, Ann Arbor, Michigan, USA, June 29, 2005, pages 65–72.
- Brown et al. (1992) Peter F. Brown, Stephen Della Pietra, Vincent J. Della Pietra, Jennifer C. Lai, and Robert L. Mercer. 1992. An estimate of an upper bound for the entropy of English. Computational Linguistics, 18(1):31–40.
- Chen et al. (2015) Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325.
- Chung et al. (2014) Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.
- Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186.
- Forgues et al. (2014) Gabriel Forgues, Joelle Pineau, Jean-Marie Larchevêque, and Réal Tremblay. 2014. Bootstrapping dialog systems with word embeddings. In NIPS Workshop on Modern Machine Learning and Natural Language Processing.
- Gardner et al. (2018) Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. Allennlp: A deep semantic natural language processing platform. arXiv preprint arXiv:1803.07640.
- Graves (2013) Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv: 1308.0850.
- Greff et al. (2017) Klaus Greff, Aaron Klein, Martin Chovanec, Frank Hutter, and Jürgen Schmidhuber. 2017. The sacred infrastructure for computational research. In Proceedings of the Python in Science Conferences-SciPy Conferences.
- Guo et al. (2019) Jian Guo, He He, Tong He, Leonard Lausen, Mu Li, Haibin Lin, Xingjian Shi, Chenguang Wang, Junyuan Xie, Sheng Zha, et al. 2019. Gluoncv and gluonnlp: Deep learning in computer vision and natural language processing. arXiv preprint arXiv:1907.04433.
- He et al. (2017) Shizhu He, Cao Liu, Kang Liu, and Jun Zhao. 2017. Generating natural answers by incorporating copying and retrieving mechanisms in sequence-to-sequence learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 199–208.
- Hu et al. (2019) Zhiting Hu, Haoran Shi, Bowen Tan, Wentao Wang, Zichao Yang, Tiancheng Zhao, Junxian He, Lianhui Qin, Di Wang, Xuezhe Ma, Zhengzhong Liu, Xiaodan Liang, Wanrong Zhu, Devendra Singh Sachan, and Eric P. Xing. 2019. Texar: A modularized, versatile, and extensible toolkit for text generation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 3: System Demonstrations, pages 159–164.
- Huang et al. (2019) Minlie Huang, Xiaoyan Zhu, and Jianfeng Gao. 2019. Challenges in building intelligent open-domain dialog systems. arXiv preprint arXiv:1905.05709.
- Kingma and Welling (2014) Diederik P. Kingma and Max Welling. 2014. Auto-encoding variational bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings.
- Kiss and Strunk (2006) Tibor Kiss and Jan Strunk. 2006. Unsupervised multilingual sentence boundary detection. Computational Linguistics, 32(4):485–525.
- Li et al. (2016) Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 110–119.
- Lin (2004) Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics.
- Loper and Bird (2002) Edward Loper and Steven Bird. 2002. Nltk: the natural language toolkit. arXiv preprint cs/0205028.
- Lowe et al. (2015) Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of the SIGDIAL 2015 Conference, The 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 2-4 September 2015, Prague, Czech Republic, pages 285–294.
- Ott et al. (2019) Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.
- Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, July 6-12, 2002, Philadelphia, PA, USA, pages 311–318.
- Paszke et al. (2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 8024–8035.
- Petrochuk (2018) Michael Petrochuk. 2018. Pytorch-nlp: Rapid prototyping with pytorch natural language processing (nlp) tools. https://github.com/PetrochukM/PyTorch-NLP.
- Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8).
- Roemmele (2016) Melissa Roemmele. 2016. Writing stories with help from recurrent neural networks. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 4311–4342.
- Rush et al. (2015) Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 379–389.
- Shi et al. (2018) Zhan Shi, Xinchi Chen, Xipeng Qiu, and Xuanjing Huang. 2018. Toward diverse text generation with inverse reinforcement learning. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden., pages 4361–4367.
- Sordoni et al. (2015) Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and Jian-Yun Nie. 2015. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion. In Proceedings of the 24th ACM International Conference on Information and Knowledge Management, CIKM 2015, Melbourne, VIC, Australia, October 19 - 23, 2015, pages 553–562.
- Sutskever et al. (2011) Ilya Sutskever, James Martens, and Geoffrey E. Hinton. 2011. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, Washington, USA, June 28 - July 2, 2011, pages 1017–1024.
- Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104–3112.
- Tiedemann (2016) Jörg Tiedemann. 2016. Finding alternative translations in a large corpus of movie subtitle. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), pages 3518–3522.
- Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008.
- Vinyals and Le (2015) Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869.
- Vinyals et al. (2015) Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, pages 3156–3164.
- Wolf et al. (2019) Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. Transfertransfo: A transfer learning approach for neural network based conversational agents. arXiv preprint arXiv:1901.08149.
- Yang et al. (2019) Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 5754–5764.
- Zhao et al. (2018) Junbo Jake Zhao, Yoon Kim, Kelly Zhang, Alexander M. Rush, and Yann LeCun. 2018. Adversarially regularized autoencoders. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, pages 5897–5906.
- Zhao et al. (2017) Tiancheng Zhao, Ran Zhao, and Maxine Eskénazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 654–664.
- Zhou et al. (2018) Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018. Emotional chatting machine: Emotional conversation generation with internal and external memory. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 730–739.
- Zhu et al. (2018) Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A benchmarking platform for text generation models. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR 2018, Ann Arbor, MI, USA, July 08-12, 2018, pages 1097–1100.
Appendix A An Example of Development Procedure
With the help of CoTK, it is convenient to develop a novel model; an example of the development procedure is shown in Figure 5.
In (a), we show the main code for data processing, model training and evaluation. The three parts of code are explained as follows:
Model training. Our data loader provides batches of data, where sentences are converted to indices and padded. This format is accepted by most text generation models.
Model evaluation. The model generates data in batches, which are passed to a metric object (Section 2.2), BleuCorpusMetric in our case. The metric object produces the result with hash codes.
In (b), we show two commands for publishing and reproducing results, respectively. The first command uploads the code to GitHub, and the running environment and results to our dashboard. The second command shows how to fetch and reproduce the results in one line.
Appendix B An Example of Comparison Under Different Tokenizers
As aforementioned, our implementation of BLEU supports fair comparison on datasets with different tokenizers. Here we train a GRU seq2seq model on the OpenSubtitles dataset with the Punkt tokenizer Kiss and Strunk (2006) and the tokenizers of BERT Devlin et al. (2019), GPT2 Radford et al. (2019), and XLNet Yang et al. (2019) (bert-base-uncased, gpt2, and xlnet-base-cased respectively, from https://github.com/huggingface/transformers). The result is presented in Table 9. These scores are directly comparable with our implementation.
Appendix C Evaluation Details of Benchmark Models
Here we present the details of the metrics in the evaluation of text generation (without input) and single-turn dialog tasks. All the implementation can be found in the code of CoTK.
C.1 Metrics
PPL (Perplexity) Brown et al. (1992). Perplexity is a common metric for text generation models.
BLEU Papineni et al. (2002). The metric used in the single-turn dialog task shows the overlap between generated responses and ground-truth responses. We use BLEU-4.
S-BLEU(Self-BLEU) Zhu et al. (2018). The metric used in the text generation (without input) task shows the diversity of generated sentences. We adopt BLEU-4 and use 1,000 sentences as samples.
F/B/H-BLEU(Forward/Backward/Harmony BLEU) Shi et al. (2018). The metric used in the text generation (without input) task shows fluency, diversity and overall quality of generated sentences, respectively. We adopt BLEU-4 and use 1,000 sentences as references.
F/R-PPL(Forward/Reverse Perplexity) Zhao et al. (2018). The metric used in both tasks shows the fluency/diversity of generated sentences. We adopt a 5-gram language model with Kneser–Ney smoothing trained on the test set and use 10,000 sentences as references.
Distinct N-gram Li et al. (2016). The metric used in the single-turn dialog task shows the diversity of generated sentences. We use Distinct-2.
C.2 Benchmark Models
The implementation details of benchmark models for text generation (without input) and single-turn dialog tasks are presented in Table 10 and Table 11, respectively. All the implementations are publicly available at https://thu-coai.github.io/cotk_docs/index.html#model-zoo.
|Decoding Strategy (text generation models)||Random Sampling|
|Encoder Features (Seq2Seq models)||200 (bidirectional)|
|Decoding Strategy (dialog models)||Top-10 Sampling|