Collaborative Training of GANs in Continuous and Discrete Spaces for Text Generation

10/16/2020
by Yanghoon Kim, et al.

Applying generative adversarial networks (GANs) to text-related tasks is challenging due to the discrete nature of language. One line of research resolves this issue by employing reinforcement learning (RL) and optimizing the next-word sampling policy directly in a discrete action space. Such methods compute rewards from complete sentences and avoid the error accumulation caused by exposure bias. Other approaches employ approximation techniques that map text to a continuous representation in order to circumvent the non-differentiable discrete sampling process. In particular, autoencoder-based methods effectively produce robust representations that can model complex discrete structures. In this paper, we propose a novel text GAN architecture that promotes the collaborative training of continuous-space and discrete-space methods. Our method employs an autoencoder to learn an implicit data manifold, providing a learning objective for adversarial training in a continuous space. Furthermore, the complete textual output is directly evaluated and updated via RL in a discrete space. The collaborative interplay between the two adversarial training procedures effectively regularizes the text representations in the different spaces. Experimental results on three standard benchmark datasets show that our model substantially outperforms state-of-the-art text GANs with respect to quality, diversity, and global consistency.
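To make the collaborative scheme concrete, the following is a minimal PyTorch sketch of how the two training signals described above could be combined in a single update: an autoencoder reconstruction loss grounds the latent manifold, a latent-space generator/critic pair provides the continuous adversarial objective, and a sentence-level discriminator yields a REINFORCE reward for complete sampled sentences. This is an assumption-laden illustration, not the authors' implementation: the GRU encoder/decoder, module sizes, the reward definition, and the single shared optimizer are placeholders chosen for brevity, and discriminator/critic updates are omitted.

# Minimal, illustrative sketch (not the authors' released code) of collaborative
# continuous/discrete adversarial training. All hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID, LATENT, MAXLEN = 5000, 64, 128, 32, 20

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.to_z = nn.Linear(HID, LATENT)

    def forward(self, tokens):                      # tokens: (B, T)
        _, h = self.rnn(self.emb(tokens))           # h: (1, B, HID)
        return self.to_z(h.squeeze(0))              # latent code z: (B, LATENT)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB + LATENT, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, z, tokens):                   # teacher-forced reconstruction
        zrep = z.unsqueeze(1).expand(-1, tokens.size(1), -1)
        h, _ = self.rnn(torch.cat([self.emb(tokens), zrep], dim=-1))
        return self.out(h)                          # logits: (B, T, VOCAB)

    def sample(self, z):                            # free-running sampling for the RL step
        tok = torch.zeros(z.size(0), 1, dtype=torch.long)   # assume index 0 is <bos>
        state, logps, toks = None, [], []
        for _ in range(MAXLEN):
            inp = torch.cat([self.emb(tok), z.unsqueeze(1)], dim=-1)
            h, state = self.rnn(inp, state)
            dist = torch.distributions.Categorical(logits=self.out(h[:, -1]))
            tok = dist.sample().unsqueeze(1)
            logps.append(dist.log_prob(tok.squeeze(1)))
            toks.append(tok)
        return torch.cat(toks, dim=1), torch.stack(logps, dim=1)   # (B, MAXLEN) each

enc, dec = Encoder(), Decoder()
latent_G = nn.Sequential(nn.Linear(LATENT, HID), nn.ReLU(), nn.Linear(HID, LATENT))  # noise -> fake code
latent_D = nn.Sequential(nn.Linear(LATENT, HID), nn.ReLU(), nn.Linear(HID, 1))       # continuous-space critic
text_D = nn.Sequential(nn.EmbeddingBag(VOCAB, HID), nn.ReLU(), nn.Linear(HID, 1))    # discrete-space discriminator
opt = torch.optim.Adam(
    list(enc.parameters()) + list(dec.parameters()) + list(latent_G.parameters()), lr=1e-4
)

def collaborative_step(real_tokens):
    # (1) Reconstruction: the autoencoder learns an implicit data manifold in latent space.
    z_real = enc(real_tokens)
    logits = dec(z_real, real_tokens[:, :-1])
    rec = F.cross_entropy(logits.reshape(-1, VOCAB), real_tokens[:, 1:].reshape(-1))

    # (2) Continuous space: the latent generator is trained adversarially against the critic.
    z_fake = latent_G(torch.randn(real_tokens.size(0), LATENT))
    adv_cont = -latent_D(z_fake).mean()

    # (3) Discrete space: sample complete sentences, score them with the text discriminator,
    #     and update the decoder with a policy-gradient (REINFORCE) loss.
    sampled, logps = dec.sample(z_fake.detach())
    reward = torch.sigmoid(text_D(sampled)).detach()                # sentence-level reward: (B, 1)
    adv_disc = -(logps.sum(dim=1) * reward.squeeze(1)).mean()

    loss = rec + adv_cont + adv_disc
    opt.zero_grad(); loss.backward(); opt.step()

collaborative_step(torch.randint(0, VOCAB, (4, MAXLEN)))            # dummy batch of token ids

In this sketch the single combined loss couples the two spaces through the shared decoder: the reconstruction and continuous adversarial terms shape the latent representation, while the sentence-level reward regularizes the same decoder's discrete sampling behavior.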
