Meta-CoTGAN: A Meta Cooperative Training Paradigm for Improving Adversarial Text Generation

03/12/2020
by Haiyan Yin, et al.

Training generative models that produce high-quality text with sufficient diversity is an important open problem for the Natural Language Generation (NLG) community. Recently, generative adversarial models have been applied extensively to text generation tasks, where adversarially trained generators alleviate the exposure bias experienced by conventional maximum likelihood approaches and yield promising generation quality. However, due to the notorious defect of mode collapse in adversarial training, adversarially trained generators face a quality-diversity trade-off: they tend to sacrifice generation diversity severely in exchange for higher generation quality. In this paper, we propose a novel approach that improves adversarial text generation by efficiently decelerating the mode collapse of adversarial training. To this end, we introduce a cooperative training paradigm in which a language model is trained cooperatively alongside the generator, and we use the language model to shape the generator's data distribution against mode collapse. Moreover, rather than applying the cooperative update to the generator directly, we formulate a meta-learning mechanism in which the cooperative update serves as a high-level meta task, with the intuition of ensuring that the generator's parameters remain resistant to mode collapse after each adversarial update. In experiments, we demonstrate that the proposed approach can effectively slow down the pace of mode collapse for adversarial text generators. Overall, our method outperforms the baseline approaches by significant margins in terms of both generation quality and diversity on the tested domains.
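
The meta-learning mechanism sketched in the abstract can be illustrated with a short, self-contained toy example. Everything below is a hypothetical stand-in rather than the authors' implementation: `theta` is a toy parameter matrix instead of a real text generator, `adv_loss` and `coop_loss` are placeholder surrogates for the adversarial loss and the language-model distillation loss, and the single combined update is a MAML-style reading of "the cooperative update serves as a high-level meta task", evaluated at the post-adversarial parameters.

```python
import torch
import torch.nn.functional as F

# Toy generator parameters; a real model would be a recurrent or
# transformer text generator (hypothetical stand-in, not the paper's code).
theta = torch.randn(8, 8, requires_grad=True)

def adv_loss(theta, x):
    # Placeholder for the adversarial (GAN) generator loss.
    return ((x @ theta).tanh() ** 2).mean()

def coop_loss(theta, x, lm_logits):
    # Placeholder cooperative loss: KL divergence pulling the generator's
    # output distribution toward the cooperatively trained language model.
    gen_log_probs = (x @ theta).log_softmax(dim=-1)
    return F.kl_div(gen_log_probs, lm_logits.softmax(dim=-1),
                    reduction="batchmean")

x = torch.randn(32, 8)          # a batch of inputs
lm_logits = torch.randn(32, 8)  # frozen language-model predictions
inner_lr, outer_lr = 1e-2, 1e-2

# Inner step: one adversarial update, kept differentiable so the meta
# gradient can flow back through it (create_graph=True).
g_adv = torch.autograd.grad(adv_loss(theta, x), theta, create_graph=True)[0]
theta_adv = theta - inner_lr * g_adv

# Meta step: evaluate the cooperative loss at the *post-adversarial*
# parameters, then differentiate with respect to the original theta. The
# resulting gradient favours parameters that stay close to the language
# model's distribution even after the adversarial update.
meta_loss = coop_loss(theta_adv, x, lm_logits)
g_meta = torch.autograd.grad(meta_loss, theta)[0]

# Apply both gradients to the generator parameters.
with torch.no_grad():
    theta -= outer_lr * (g_adv + g_meta)
```

The key design point the sketch tries to capture is that the cooperative loss is not applied to the current parameters but to the parameters as they would be after the adversarial step, so its gradient steers the generator toward regions where the adversarial update itself does not induce mode collapse.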

Related research

01/31/2020
Self-Adversarial Learning with Comparative Discrimination for Text Generation
Conventional Generative Adversarial Networks (GANs) for text generation ...

11/06/2018
Language GANs Falling Short
Generating high-quality text with sufficient diversity is essential for ...

04/05/2020
A Discriminator Improves Unconditional Text Generation without Updating the Generator
We propose a novel mechanism to improve a text generator with a discriminator ...
04/29/2020
Generating Safe Diversity in NLG via Imitation Learning
Deep-learning models for language generation tasks tend to produce repetitive ...

05/06/2020
Token Manipulation Generative Adversarial Network for Text Generation
MaskGAN opens the query for the conditional language model by filling in ...

04/24/2023
Towards Mode Balancing of Generative Models via Diversity Weights
Large data-driven image models are extensively used to support creative ...
