Learning to Write with Cooperative Discriminators

05/16/2018
by Ari Holtzman, et al.
University of Washington

Recurrent Neural Networks (RNNs) are powerful autoregressive sequence models, but when used to generate natural language their output tends to be overly generic, repetitive, and self-contradictory. We postulate that the objective function optimized by RNN language models, which amounts to the overall perplexity of a text, is not expressive enough to capture the notion of communicative goals described by linguistic principles such as Grice's Maxims. We propose learning a mixture of multiple discriminative models that complement the RNN generator and guide the decoding process. Human evaluation demonstrates that text generated by our system is preferred over that of baselines by a large margin, with significant gains in overall coherence, style, and information content.
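
To make the decoding idea concrete, here is a minimal Python sketch (not the authors' released code) of how candidate continuations proposed by a base RNN language model could be re-ranked with a weighted mixture of discriminator scores before the beam is pruned. The discriminator functions, weights, and helper names below are illustrative assumptions, not the paper's exact components.

```python
from typing import Callable, List, Sequence, Tuple

def mixture_score(
    lm_log_prob: float,                                       # log p_LM(continuation | context) from the base RNN
    context: str,
    continuation: str,
    discriminators: Sequence[Callable[[str, str], float]],    # each returns a scalar score for (context, continuation)
    weights: Sequence[float],                                  # mixture weights, one per discriminator
) -> float:
    """Combine the language-model score with weighted discriminator scores."""
    score = lm_log_prob
    for disc, w in zip(discriminators, weights):
        score += w * disc(context, continuation)
    return score

def rerank_beam(
    candidates: List[Tuple[str, float]],                       # (continuation, lm_log_prob) pairs from beam search
    context: str,
    discriminators: Sequence[Callable[[str, str], float]],
    weights: Sequence[float],
    beam_size: int,
) -> List[Tuple[str, float]]:
    """Keep the top-scoring candidates under the mixture objective."""
    scored = [
        (cont, mixture_score(lp, context, cont, discriminators, weights))
        for cont, lp in candidates
    ]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:beam_size]

if __name__ == "__main__":
    # Toy discriminators for illustration only.
    def repetition_penalty(ctx: str, cont: str) -> float:
        words = cont.split()
        return -float(len(words) - len(set(words)))            # penalize repeated tokens

    def length_bonus(ctx: str, cont: str) -> float:
        return 0.1 * len(cont.split())                         # mildly favor longer continuations

    beam = [("the cat sat on the mat", -4.2), ("the cat the cat the cat", -3.9)]
    best = rerank_beam(beam, "Once upon a time,", [repetition_penalty, length_bonus],
                       weights=[1.0, 1.0], beam_size=1)
    print(best)
```

In this sketch the plain language-model score would prefer the repetitive candidate, but the repetition discriminator flips the ranking, which is the cooperative effect the abstract describes.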
