Texygen: A Benchmarking Platform for Text Generation Models

by   Yaoming Zhu, et al.

We introduce Texygen, a benchmarking platform to support research on open-domain text generation models. Texygen not only implements a majority of text generation models, but also covers a set of metrics that evaluate the diversity, quality, and consistency of the generated texts. The Texygen platform can help standardize research on text generation and facilitate the sharing of fine-tuned open-source implementations among researchers. As a consequence, it would help improve the reproducibility and reliability of future research work in text generation.
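Among the evaluation metrics the abstract alludes to, diversity is commonly measured with Self-BLEU: each generated sentence is scored with BLEU against the rest of the generated corpus, so a high average indicates the model keeps producing similar outputs. The sketch below is a minimal, simplified pure-Python illustration of that idea (add-1 smoothed n-gram precisions up to bigrams), not Texygen's actual implementation; all function names here are hypothetical.

```python
from collections import Counter
from math import exp, log

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, references, max_n=2):
    """Simplified BLEU: smoothed modified n-gram precision + brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        # Clip each n-gram count by its maximum count in any reference.
        max_ref = Counter()
        for ref in references:
            for g, c in Counter(ngrams(ref, n)).items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append((clipped + 1) / (total + 1))  # add-1 smoothing
    # Brevity penalty against the closest-length reference.
    ref_len = min((len(r) for r in references),
                  key=lambda L: (abs(L - len(candidate)), L))
    bp = 1.0 if len(candidate) >= ref_len else exp(1 - ref_len / len(candidate))
    return bp * exp(sum(log(p) for p in precisions) / max_n)

def self_bleu(corpus, max_n=2):
    """Average BLEU of each sentence vs. the rest; lower means more diverse."""
    scores = [bleu(s, corpus[:i] + corpus[i + 1:], max_n)
              for i, s in enumerate(corpus)]
    return sum(scores) / len(scores)
```

A degenerate corpus of identical sentences scores 1.0, while a corpus of distinct sentences scores lower, which is the contrast a diversity metric is meant to surface.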


