Language GANs Falling Short

11/06/2018
by Massimo Caccia, et al.

Generating high-quality text with sufficient diversity is essential for a wide range of Natural Language Generation (NLG) tasks. Maximum-Likelihood (MLE) models trained with teacher forcing have consistently been reported as weak baselines, with their poor performance attributed to exposure bias: at inference time, the model is fed its own predictions instead of ground-truth tokens, which can lead to accumulating errors and poor samples. This line of reasoning has led to a surge of adversarial approaches to NLG, on the grounds that GANs do not suffer from exposure bias. In this work, we make several surprising observations that contradict common beliefs. We first revisit the canonical evaluation framework for NLG and point out fundamental flaws with quality-only evaluation: we show that one can outperform such metrics by using a simple, well-known temperature parameter to artificially reduce the entropy of the model's conditional distributions. Second, we leverage the control over the quality/diversity trade-off given by this parameter to evaluate models over the whole quality-diversity spectrum, and find that MLE models consistently outperform the proposed GAN variants across the entire space. Our results have several implications: 1) the impact of exposure bias on sample quality is less severe than previously thought, and 2) temperature tuning provides a better quality/diversity trade-off than adversarial training, while being easier to train, easier to cross-validate, and less computationally expensive.
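To make the temperature mechanism concrete, here is a minimal sketch (not taken from the paper) of temperature-scaled sampling from a language model's conditional distribution, assuming the per-step token logits are available as a NumPy array; the function name `sample_with_temperature` and the toy logits are illustrative only. Temperatures below 1 sharpen the distribution (higher sample quality, lower diversity), while temperatures above 1 flatten it (more diversity, lower quality), which is how a single scalar sweeps out the quality-diversity curve discussed in the abstract.

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample a token id from `logits` after temperature scaling.

    temperature < 1.0 sharpens the conditional distribution
    (higher quality, lower diversity); temperature > 1.0 flattens
    it (lower quality, higher diversity).
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    # Softmax with max-subtraction for numerical stability.
    scaled -= scaled.max()
    probs = np.exp(scaled)
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Example: a toy conditional distribution over a 5-token vocabulary.
logits = [2.0, 1.0, 0.5, 0.1, -1.0]
sharp_sample = sample_with_temperature(logits, temperature=0.5)  # favors top tokens
flat_sample = sample_with_temperature(logits, temperature=1.3)   # spreads probability mass
```

In practice, one would apply this at every decoding step of an autoregressive model and sweep the temperature to trace out the quality-diversity frontier that the paper uses to compare MLE and GAN-based generators.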
