
Learning to Write with Cooperative Discriminators

by Ari Holtzman et al.
University of Washington

Recurrent Neural Networks (RNNs) are powerful autoregressive sequence models, but when used to generate natural language their output tends to be overly generic, repetitive, and self-contradictory. We postulate that the objective function optimized by RNN language models, which amounts to the overall perplexity of a text, is not expressive enough to capture the notion of communicative goals described by linguistic principles such as Grice's Maxims. We propose learning a mixture of multiple discriminative models that complement the RNN generator and guide the decoding process. Human evaluation demonstrates that text generated by our system is preferred over baseline outputs by a large margin, and that our approach significantly enhances the overall coherence, style, and information content of the generated text.
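The mixture-of-discriminators idea above can be sketched as a reranking step during decoding: each candidate continuation is scored by the base language model's log-probability plus a weighted sum of discriminator scores. The sketch below is illustrative only; the scoring functions, weights, and candidate strings are hypothetical stand-ins, not the paper's actual models.

```python
# Minimal sketch of cooperative-discriminator decoding.
# All scoring functions and weights here are hypothetical toy proxies.

def lm_score(text):
    # Stand-in for an RNN language model's log-probability of `text`:
    # here, simply a small penalty per token.
    return -0.1 * len(text.split())

def repetition_score(text):
    # Hypothetical discriminator: penalizes repeated tokens.
    tokens = text.split()
    return -1.0 * (len(tokens) - len(set(tokens)))

def relevance_score(text, context):
    # Hypothetical discriminator: rewards lexical overlap with the context.
    overlap = set(text.split()) & set(context.split())
    return 0.5 * len(overlap)

def cooperative_score(candidate, context, weights):
    # Mixture: LM log-probability plus weighted discriminator scores,
    # used to rerank candidate continuations during decoding.
    return (lm_score(candidate)
            + weights["repetition"] * repetition_score(candidate)
            + weights["relevance"] * relevance_score(candidate, context))

def rerank(candidates, context, weights):
    # Pick the candidate continuation with the highest mixture score.
    return max(candidates, key=lambda c: cooperative_score(c, context, weights))

context = "the storm rolled in over the harbor"
candidates = [
    "the storm the storm grew louder",       # repetitive
    "waves crashed against the harbor wall", # coherent, on-topic
]
weights = {"repetition": 1.0, "relevance": 1.0}
print(rerank(candidates, context, weights))
# -> "waves crashed against the harbor wall"
```

In the actual system, the discriminators are trained classifiers and their mixture weights are learned, but the decoding-time combination follows this same pattern of additive rescoring.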


