Texygen: A Benchmarking Platform for Text Generation Models

02/06/2018
by Yaoming Zhu, et al.

We introduce Texygen, a benchmarking platform to support research on open-domain text generation models. Texygen not only implements a majority of text generation models but also covers a set of metrics that evaluate the diversity, quality, and consistency of the generated texts. The Texygen platform can help standardize research on text generation and facilitate the sharing of fine-tuned open-source implementations among researchers. As a consequence, it should improve the reproducibility and reliability of future research in text generation.
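Among Texygen's diversity metrics is Self-BLEU, which scores each generated sentence with BLEU against the rest of the generated corpus: a high average means the model repeats itself. The following is a minimal, stdlib-only sketch of a Self-BLEU-style score, not Texygen's actual implementation (the released toolkit relies on NLTK's BLEU with smoothing and a brevity penalty, both omitted here for clarity):

```python
from collections import Counter
from math import exp, log

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, references, max_n=2):
    """Geometric mean of modified n-gram precisions (no brevity penalty)."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        if not cand_counts:
            return 0.0
        # Clip each candidate n-gram count by its max count in any reference.
        max_ref = Counter()
        for ref in references:
            for gram, count in Counter(ngrams(ref, n)).items():
                max_ref[gram] = max(max_ref[gram], count)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
        if clipped == 0:
            return 0.0
        precisions.append(clipped / sum(cand_counts.values()))
    return exp(sum(log(p) for p in precisions) / len(precisions))

def self_bleu(corpus, max_n=2):
    """Average BLEU of each sentence against all the others.

    Higher Self-BLEU means less diverse generations: a corpus of
    identical sentences scores 1.0, fully disjoint sentences score 0.0.
    """
    scores = []
    for i, sentence in enumerate(corpus):
        others = corpus[:i] + corpus[i + 1:]
        scores.append(bleu(sentence, others, max_n))
    return sum(scores) / len(scores)
```

For example, `self_bleu([["a", "b", "c"]] * 3)` returns `1.0` (a fully repetitive corpus), while a corpus with no shared n-grams scores `0.0`.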



Related research

09/08/2019 · c-TextGen: Conditional Text Generation for Harmonious Human-Machine Interaction
In recent years, with the development of deep learning technology, text ...

05/25/2022 · R2D2: Robust Data-to-Text with Replacement Detection
Unfaithful text generation is a common problem for text generation syste...

05/17/2020 · MixingBoard: a Knowledgeable Stylized Integrated Text Generation Platform
We present MixingBoard, a platform for quickly building demos with a foc...

09/23/2019 · Automated Chess Commentator Powered by Neural Chess Engine
In this paper, we explore a new approach for automated chess commentary ...

08/13/2021 · MTG: A Benchmarking Suite for Multilingual Text Generation
We introduce MTG, a new benchmark suite for training and evaluating mult...

02/03/2020 · CoTK: An Open-Source Toolkit for Fast Development and Fair Evaluation of Text Generation
In text generation evaluation, many practical issues, such as inconsiste...

09/14/2021 · The Perils of Using Mechanical Turk to Evaluate Open-Ended Text Generation
Recent text generation research has increasingly focused on open-ended d...