Long Text Generation via Adversarial Training with Leaked Information

09/24/2017
by Jiaxian Guo, et al.

Automatically generating coherent and semantically meaningful text has many applications in machine translation, dialogue systems, image captioning, etc. Recently, by combining with policy gradient, Generative Adversarial Nets (GANs) that use a discriminative model to guide the training of the generative model as a reinforcement learning policy have shown promising results in text generation. However, the scalar guiding signal is only available after the entire text has been generated and carries no intermediate information about text structure during the generative process. This limits their success when the generated text samples are long (more than 20 words). In this paper, we propose a new framework, called LeakGAN, to address this problem in long text generation. We allow the discriminative net to leak its own high-level extracted features to the generative net to provide further guidance. The generator incorporates such informative signals into all generation steps through an additional Manager module, which takes the features extracted from the currently generated words and outputs a latent vector that guides the Worker module for next-word generation. Our extensive experiments on synthetic data and various real-world tasks with a Turing test demonstrate that LeakGAN is highly effective in long text generation and also improves performance in short text generation scenarios. More importantly, without any supervision, LeakGAN is able to implicitly learn sentence structures purely through the interaction between the Manager and the Worker.
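The Manager/Worker decomposition described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch rendering of ours, not the authors' released code: class names and dimensions are placeholders, and the goal vector is fed directly to the Worker instead of the pooled, linearly projected goal embedding used in the paper.

```python
# Sketch of the leaked-feature Manager/Worker generator (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Manager(nn.Module):
    """Consumes the discriminator's leaked feature of the current prefix
    and emits a latent goal vector that guides the Worker."""
    def __init__(self, feature_dim, goal_dim, hidden_dim):
        super().__init__()
        self.rnn = nn.LSTMCell(feature_dim, hidden_dim)
        self.to_goal = nn.Linear(hidden_dim, goal_dim)

    def forward(self, leaked_feature, state):
        h, c = self.rnn(leaked_feature, state)
        goal = F.normalize(self.to_goal(h), dim=-1)  # unit-length goal vector
        return goal, (h, c)

class Worker(nn.Module):
    """Produces next-token logits conditioned on the last token and the
    goal vector received from the Manager."""
    def __init__(self, vocab_size, embed_dim, goal_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTMCell(embed_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size * goal_dim)
        self.vocab_size, self.goal_dim = vocab_size, goal_dim

    def forward(self, last_token, goal, state):
        h, c = self.rnn(self.embed(last_token), state)
        # Per-token output matrix combined with the goal vector gives the logits.
        out = self.out(h).view(-1, self.vocab_size, self.goal_dim)
        logits = torch.einsum('bvg,bg->bv', out, goal)
        return logits, (h, c)

if __name__ == "__main__":
    # One generation step with stand-in shapes; the leaked feature would
    # normally come from the discriminator's feature extractor.
    manager = Manager(feature_dim=64, goal_dim=16, hidden_dim=32)
    worker = Worker(vocab_size=5000, embed_dim=32, goal_dim=16, hidden_dim=32)
    leaked = torch.randn(1, 64)
    goal, m_state = manager(leaked, None)
    logits, w_state = worker(torch.tensor([0]), goal, None)
    next_token = torch.distributions.Categorical(logits=logits).sample()
```

In the full framework, the Manager is trained to predict goals that point toward advantageous regions of the discriminator's feature space, while the Worker is trained with policy gradient to follow those goals; the sketch above only shows the forward pass of a single generation step.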


Related research

09/18/2016  SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient
As a new way of training generative models, Generative Adversarial Nets ...

12/01/2017  Text Generation Based on Generative Adversarial Nets with Latent Variable
In this paper, we propose a model using generative adversarial net (GAN)...

08/20/2019  ARAML: A Stable Adversarial Training Framework for Text Generation
Most of the existing generative adversarial networks (GAN) for text gene...

06/22/2020  Efficient text generation of user-defined topic using generative adversarial networks
This study focused on efficient text generation using generative adversa...

05/04/2020  Improving Adversarial Text Generation by Modeling the Distant Future
Auto-regressive text generation models usually focus on local fluency, a...

06/01/2019  Adversarial Generation and Encoding of Nested Texts
In this paper we propose a new language model called AGENT, which stands...

11/15/2019  CatGAN: Category-aware Generative Adversarial Networks with Hierarchical Evolutionary Learning for Category Text Generation
Generating multiple categories of texts is a challenging task and draws ...
