Language Modeling with Generative Adversarial Networks

04/08/2018
by Mehrad Moradshahi, et al.

Generative Adversarial Networks (GANs) have shown promise in image generation, but they have proven hard to train for language generation. GANs were originally designed to produce differentiable outputs, so generating discrete tokens is difficult for them and leads to highly unstable training. Consequently, past work has either pre-trained the generator with maximum likelihood or trained GANs from scratch using a WGAN objective with a gradient penalty. In this study, we present a comparison of these approaches. Furthermore, we present experimental results indicating that Wasserstein GANs (WGANs) train and converge better when a weaker regularization term enforces the Lipschitz constraint.
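
As a rough illustration of the gradient-penalty objective mentioned above, the PyTorch sketch below computes the standard WGAN-GP penalty (Gulrajani et al., 2017) on points interpolated between real and generated samples. It is not the authors' code: `critic`, `real`, `fake`, and `lambda_gp` are hypothetical placeholders, and the one-sided relaxation noted in the comments is only one possible form of the "weaker" Lipschitz regularization the abstract alludes to.

```python
# Illustrative WGAN gradient penalty (Gulrajani et al., 2017) in PyTorch.
# `critic`, `real`, `fake`, and `lambda_gp` are hypothetical placeholders,
# not identifiers from the paper.
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """Penalty on the critic's gradient norm at points interpolated between real and fake batches."""
    batch_size = real.size(0)
    # One interpolation coefficient per sample, broadcast over the remaining dimensions.
    eps = torch.rand(batch_size, *([1] * (real.dim() - 1)), device=real.device)
    interpolates = (eps * real.detach() + (1.0 - eps) * fake.detach()).requires_grad_(True)

    scores = critic(interpolates)
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interpolates,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,   # keep the graph so the penalty itself is differentiable
        retain_graph=True,
    )[0]

    grad_norm = grads.reshape(batch_size, -1).norm(2, dim=1)
    # Standard WGAN-GP pushes the gradient norm toward 1 from both sides.
    # A weaker, one-sided variant would penalize only norms above 1, e.g.
    # torch.clamp(grad_norm - 1.0, min=0.0).pow(2).mean().
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()
```

In a typical critic update, this term would simply be added to the Wasserstein loss estimate computed from the critic's scores on real and generated batches.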

Related research

Language Generation with Recurrent Generative Adversarial Networks without Pre-training (06/05/2017)
Generative Adversarial Networks (GANs) have shown great promise recently...

Why GANs are overkill for NLP (05/19/2022)
This work offers a novel theoretical perspective on why, despite numerou...

Adversarial Generation of Natural Language (05/31/2017)
Generative Adversarial Networks (GANs) have gathered a lot of attention ...

On the regularization of Wasserstein GANs (09/26/2017)
Since their invention, generative adversarial networks (GANs) have becom...

A Representation Modeling Based Language GAN with Completely Random Initialization (08/04/2022)
Text generative models trained via Maximum Likelihood Estimation (MLE) s...

Unsupervised Cipher Cracking Using Discrete GANs (01/15/2018)
This work details CipherGAN, an architecture inspired by CycleGAN used f...

Training language GANs from Scratch (05/23/2019)
Generative Adversarial Networks (GANs) enjoy great success at image gene...
